Nan Rosemary Ke at gmail dot com

I am a PhD student at Mila advised by Chris Pal and Yoshua Bengio.

My research centers on developing novel machine learning algorithms that can generalize well to changing environments. I focus on two key ingredients: credit assignment and causal learning. These two ingredients flow into and reinforce each other: appropriate credit assignment can help a model refine itself only at the relevant causal variables, while a model that understands causality sufficiently well can reason about the connections between causal variables and the effect of intervening on them. Such an improved model can adapt quickly to interventions, thereby avoiding a large class of errors that impede systematic generalization, particularly out of distribution.

During my PhD, I have spent time at Google DeepMind, Facebook AI Research, and Microsoft Research.

I was recently named a Rising Star in Machine Learning and was awarded the Facebook Fellowship in 2019.

Google Scholar  /  GitHub  /  Twitter


I am interested in developing novel machine learning algorithms that can generalize well to changing environments by improving credit assignment and encouraging causal learning in deep neural networks.

Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding
Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Michael Mozer, Chris Pal, Yoshua Bengio
The Thirty-second Annual Conference on Neural Information Processing Systems (NeurIPS), Spotlight presentation, 2018
arXiv / blog post (coming soon)

Learning long-term dependencies in extended temporal sequences requires credit assignment to events far back in the past. The most common method for training recurrent neural networks, back-propagation through time (BPTT), requires credit information to be propagated backwards through every single step of the forward computation, potentially over thousands or millions of time steps. This becomes computationally expensive or even infeasible when used with long sequences. Importantly, biological brains are unlikely to perform such detailed reverse replay over very long sequences of internal states (consider days, months, or years). However, humans are often reminded of past memories or mental states which are associated with the current mental state. We consider the hypothesis that such memory associations between past and present could be used for credit assignment through arbitrarily long sequences, propagating the credit assigned to the current state to the associated past state. Based on this principle, we study a novel algorithm which only back-propagates through a few of these temporal skip connections, realized by a learned attention mechanism that associates current states with relevant past states. We demonstrate in experiments that our method matches or outperforms regular BPTT and truncated BPTT in tasks involving particularly long-term dependencies, but without requiring the biologically implausible backward replay through the whole history of states. Additionally, we demonstrate that the proposed method transfers to longer sequences significantly better than LSTMs trained with BPTT and LSTMs trained with full self-attention.
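The core mechanism, sparse attention over a memory of past hidden states, can be illustrated with a toy numpy sketch (my own simplification, not the paper's implementation; the dot-product scoring and top-k size here are illustrative assumptions):

```python
import numpy as np

def sparse_attend(h_t, memory, k=3):
    """Attend from the current hidden state to a few stored past states.

    h_t:    (d,) current hidden state
    memory: (T, d) stored past hidden states
    k:      number of past states kept (the sparse temporal skip connections)

    Returns an attention-weighted summary of the top-k past states and the
    selected indices; in SAB, gradients flow back only through these k
    states rather than the whole history.
    """
    scores = memory @ h_t                      # similarity to each past state
    top = np.argsort(scores)[-k:]              # indices of the k best matches
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                   # softmax over the k survivors
    summary = weights @ memory[top]            # (d,) sparse context vector
    return summary, top

rng = np.random.default_rng(0)
mem = rng.standard_normal((50, 8))             # 50 past states, dimension 8
h = rng.standard_normal(8)
ctx, idx = sparse_attend(h, mem, k=3)
```

Because only k past states receive gradient, the backward pass cost is independent of the sequence length T.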


Focused Hierarchical RNNs for Conditional Sequence Processing
Nan Rosemary Ke, Konrad Zolna, Alessandro Sordoni, Zhouhan Lin, Adam Trischler, Yoshua Bengio, Joelle Pineau, Laurent Charlin, Chris Pal
International Conference on Machine Learning (ICML), 2018
arXiv / blog post (coming soon)

We present a mechanism for focusing RNN encoders for sequence modelling tasks which allows them to attend to key parts of the input as needed. We formulate this using a multilayer conditional sequence encoder that reads in one token at a time and makes a discrete decision on whether the token is relevant to the context or question being asked. The discrete gating mechanism takes in the context embedding and the current hidden state as inputs and controls information flow into the layer above.
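A minimal sketch of the discrete gating idea (my own toy version with a hypothetical linear gate parameterization, not the paper's architecture):

```python
import numpy as np

def focus_gate(context, h_lower, W, threshold=0.5):
    """Discrete gate deciding whether the current token is passed upward.

    context: (d,) question/context embedding
    h_lower: (d,) lower-layer hidden state after reading the token
    W:       (2d,) gate weights (hypothetical linear parameterization)

    Returns (is_open, gated): is_open is the hard binary decision, and
    gated is what the upper layer receives (zeros when the gate is closed).
    """
    z = np.concatenate([context, h_lower]) @ W
    p = 1.0 / (1.0 + np.exp(-z))           # gate probability (sigmoid)
    is_open = p > threshold                # hard, discrete decision
    return is_open, h_lower * float(is_open)

rng = np.random.default_rng(1)
c = rng.standard_normal(4)                 # context/question embedding
h = rng.standard_normal(4)                 # lower-layer state
W = rng.standard_normal(8)
is_open, passed = focus_gate(c, h, W)
```

In training, such hard decisions require a gradient estimator (e.g. a relaxation or policy gradient); the sketch shows only the forward control of information flow.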

Twin Networks: Using the future to generate sequences
Nan Rosemary Ke*, Dmitry Serdyuk*, Alessandro Sordoni, Adam Trischler, Chris Pal, Yoshua Bengio
International Conference on Learning Representations (ICLR), 2018

We propose a simple technique for encouraging generative RNNs to plan ahead. We train a “backward” recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states).
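The training-time matching penalty can be sketched as follows (a simplified L2 version under my own assumptions; the paper may use a learned mapping between the two state spaces):

```python
import numpy as np

def twin_matching_loss(fwd_states, bwd_states):
    """L2 penalty encouraging the forward state at each step t to predict
    the cotemporal backward state. The backward network is used only
    during training; in practice its states are treated as fixed targets
    (no gradient flows into the backward model from this term).
    """
    # fwd_states, bwd_states: (T, d) hidden states of the two RNNs
    diff = fwd_states - bwd_states
    return float((diff ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
f = rng.standard_normal((5, 3))   # forward states over T=5 steps
b = rng.standard_normal((5, 3))   # cotemporal backward states
loss = twin_matching_loss(f, b)
```

This term is added to the usual likelihood objective; at sampling time only the forward model runs.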

Z-Forcing: Training Stochastic RNNs
Anirudh Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, Yoshua Bengio
Neural Information Processing Systems (NIPS), 2017
arXiv / code (coming soon)

We propose a novel approach to incorporating stochastic latent variables in sequential neural networks. The method builds on recent architectures that use latent variables to condition the recurrent dynamics of the network. We augment the inference network with an RNN that runs backward through the sequence and add a new auxiliary cost that forces the latent variables to reconstruct the state of that backward RNN, i.e., to predict a summary of future observations.
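The auxiliary cost can be sketched like this (my own simplification: a hypothetical linear decoder from latents to backward states, with a squared-error reconstruction term):

```python
import numpy as np

def z_forcing_aux_cost(z, b, W):
    """Auxiliary cost forcing latents to reconstruct the backward-RNN state.

    z: (T, d_z) latent variables, one per time step
    b: (T, d_b) backward-RNN states (summaries of the future)
    W: (d_z, d_b) hypothetical linear decoder from latent to backward state
    """
    pred = z @ W                                   # predicted backward states
    return float(((pred - b) ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
z = rng.standard_normal((5, 4))
b = rng.standard_normal((5, 4))
W = rng.standard_normal((4, 4))
aux = z_forcing_aux_cost(z, b, W)
```

This term is added to the variational objective, encouraging each latent to carry information about what comes next in the sequence.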

Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net
Anirudh Goyal, Nan Rosemary Ke, Surya Ganguli, Yoshua Bengio
Neural Information Processing Systems (NIPS), 2017
arXiv / blog post (coming soon) / code

We propose a novel method to directly learn a stochastic transition operator whose repeated application provides generated samples. Traditional undirected graphical models approach this problem indirectly by learning a Markov chain model whose stationary distribution obeys detailed balance with respect to a parameterized energy function.
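Generation by repeated application of a transition operator can be sketched abstractly (the operator below is a toy stand-in I made up for illustration, not the learned operator from the paper):

```python
import numpy as np

def generate(x0, transition, steps, rng):
    """Repeatedly apply a stochastic transition operator; the final state
    is the generated sample."""
    x = x0
    for _ in range(steps):
        x = transition(x, rng)
    return x

def toy_op(x, rng, rate=0.1, sigma=0.05):
    """Toy operator: drift toward the origin plus Gaussian noise.
    A learned operator would replace this with a parameterized network."""
    return x - rate * x + sigma * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(10)          # start from noise
sample = generate(x0, toy_op, steps=50, rng=rng)
```

The point of the direct approach is that no energy function or detailed-balance condition is needed; the operator itself is the model.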

A Deep Reinforcement Learning Chatbot
Iulian V. Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexandre de Brebisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau and Yoshua Bengio
International Conference on Learning Representations (ICLR), 2017

We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural network and latent variable neural network models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed significantly better than many competing systems. Due to its machine learning architecture, the system is likely to improve with additional data.
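The response-selection step can be sketched in miniature (a trivial stand-in for the RL-trained selection policy; the scoring function here is purely illustrative):

```python
import numpy as np

def select_response(candidates, score_fn):
    """Pick the candidate response with the highest learned score.

    candidates: list of response strings produced by the ensemble models
    score_fn:   stand-in for the learned scoring policy; in MILABOT this
                would be trained with reinforcement learning on
                crowdsourced data and user interactions.
    """
    scores = [score_fn(c) for c in candidates]
    return candidates[int(np.argmax(scores))]
```

For example, `select_response(["hi", "hello there"], len)` picks the longer candidate under a toy length-based score.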

Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron Courville, Chris Pal
International Conference on Learning Representations (ICLR), 2017
arXiv / code

We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks.
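The zoneout update itself is simple; a minimal numpy sketch (my own simplification of the per-unit update, with dropout's expectation trick at test time):

```python
import numpy as np

def zoneout(h_prev, h_new, p, rng, training=True):
    """Zoneout: each hidden unit keeps its previous value with probability p.

    h_prev: hidden state at step t-1
    h_new:  proposed hidden state at step t (from the RNN update)
    p:      probability of "zoning out" (preserving the previous value)

    At test time, like dropout, units take the expected value
    p * h_prev + (1 - p) * h_new.
    """
    if training:
        mask = rng.random(h_new.shape) < p       # True -> keep previous value
        return np.where(mask, h_prev, h_new)
    return p * h_prev + (1.0 - p) * h_new

rng = np.random.default_rng(0)
h_prev = np.zeros(6)
h_new = np.ones(6)
h_t = zoneout(h_prev, h_new, p=0.5, rng=rng)
```

Because zoned-out units carry their state (and gradient path) through the step unchanged, credit can propagate further back in time than with dropout on recurrent connections.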