Recent Publications

Formal language theory tutorial for an NLP audience.

We formally characterize RNN architectures in terms of their capacity.

We find that a neural network part-of-speech tagger implicitly learns to model syntactic change.

I formally characterize the capacity and memory of various RNNs, given asymptotic assumptions.

Stack-augmented RNNs trained for language modeling learn to exploit linguistic structure without any supervision.

This paper analyzes the behavior of stack-augmented recurrent neural network models.

We present a graph-based Tree Adjoining Grammar parser that uses BiLSTMs, highway connections, and character-level CNNs.

I propose a unified semantic treatment of several English subordinate clause types.

Recent Posts

A summary of the ACL 2020 paper.

Summarizing Sequential Neural Networks as Automata.

Thoughts from my experience at ACL 2019.

Translation of the Old English poem The Wanderer.

Review of 2018 literature on capsule networks for NLP.