I am a third-year Ph.D. candidate in the Stanford University Computer Science Department, advised by Benjamin Van Roy. My research is broadly focused on reinforcement learning. In the summer of 2021, I will be a Research Scientist Intern at DeepMind. In the past, I have completed internships at Microsoft Research Cambridge, Mila, and Microsoft Research Redmond.

I completed my Bachelor's and Master's degrees in the Brown University Computer Science Department. My time at Brown centered on reinforcement learning, under my advisor Michael Littman. In parallel, I was a member of the Humans to Robots Laboratory, where I worked with Stefanie Tellex on natural language understanding for robots. I was also a member of the Brown Laboratory for Linguistic Information Processing, run by Eugene Charniak.

Research Interests

I'm primarily interested in reinforcement learning, with the goal of building sequential decision-making agents that learn as efficiently and flexibly as people do. These days, I'm especially interested in how information theory might offer principled insights into the challenges of sample-efficient reinforcement learning. Other areas of active interest include reinforcement learning with natural language and hierarchical reinforcement learning.

Selected Papers & Publications

For my CV, please click here; for a more complete list of papers, please see my Google Scholar profile.

Value Preserving State-Action Abstractions

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.