Welcome

I am a third-year Ph.D. candidate in the Stanford University Computer Science Department, where I am advised by Benjamin Van Roy. My research is broadly focused on reinforcement learning.

Previously, I completed my Bachelor's and Master's degrees in the Brown University Computer Science Department. My time at Brown centered on reinforcement learning, advised by Michael Littman. In parallel, I was a member of the Humans to Robots Laboratory, where I worked with Stefanie Tellex on natural language understanding for robots. I was also a member of the Brown Laboratory for Linguistic Information Processing, run by Eugene Charniak.

Research Interests

I'm primarily interested in reinforcement learning, with the goal of building sequential decision-making agents that learn as efficiently and effectively as people do. These days, I'm especially interested in how information theory might offer principled insights into the challenges of sample-efficient reinforcement learning. Other areas of active interest include reinforcement learning with natural language and hierarchical reinforcement learning.

Papers & Publications

My CV is available here.

Value Preserving State-Action Abstractions

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM), 2019.
ICLR Workshop on Structures and Priors in Reinforcement Learning, 2019.

Grounding English Commands to Reward Functions

James MacGlashan, Monica Babes-Vroman, Marie desJardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, Lei Yang.
Robotics: Science and Systems, 2015.