Welcome

I am a second-year Ph.D. student in the Stanford University Computer Science Department, where I am advised by Benjamin Van Roy. My research is broadly focused on reinforcement learning.

Previously, I completed my Bachelor's and Master's degrees in the Brown University Computer Science Department. My time at Brown centered on reinforcement learning, advised by Michael L. Littman. In parallel, I was a member of the Humans to Robots Laboratory, where I worked with Stefanie Tellex on natural language understanding for robots. I was also a member of the Brown Laboratory for Linguistic Information Processing, run by Eugene Charniak.

Research Interests

My core area of research is reinforcement learning, with the goal of building and understanding sequential decision-making agents that learn as efficiently and as remarkably as people do.

I believe that a key ingredient at the heart of sample-efficient reinforcement learning is abstraction.

Other areas of active interest include reinforcement learning with natural language, information-theoretic decision making, and learning reward functions.

Papers & Publications

For my CV, please click here.

Value Preserving State-Action Abstractions

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.

Value Preserving State-Action Abstractions

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2019.
ICLR Workshop on Structures and Priors in Reinforcement Learning, 2019.

Grounding English Commands to Reward Functions

James MacGlashan, Monica Babes-Vroman, Marie desJardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, Lei Yang.
Robotics: Science and Systems, 2015.