Recent Publications

We find that a neural network part-of-speech tagger implicitly learns to model syntactic change.

I formally characterize the capacity and memory of various RNN architectures under asymptotic assumptions.

Stack-augmented RNNs trained to perform language modeling learn to exploit linguistic structure without any supervision.

This paper analyzes the behavior of stack-augmented recurrent neural network models.

We present a graph-based Tree Adjoining Grammar parser that uses BiLSTMs, highway connections, and character-level CNNs.

Past Affiliations


Software Engineering Intern

Google

May 2018 – Aug 2018 · New York, NY

Research Intern

Language Learning Lab at Boston College

Jun 2017 – Aug 2017 · Boston, MA

Lab Member

Computational Linguistics at Yale

Sep 2016 – May 2019 · New Haven, CT

Recent Posts

Summarizing Sequential Neural Networks as Automata.

Thoughts from my experience at ACL 2019.

Translation of the Old English poem "The Wanderer."

Review of 2018 literature on capsule networks for NLP.

Review and discussion of Grefenstette et al., 2015.