My name is Dilip (sounds like Philip except with a D) Arumugam and I'm a first-year Ph.D. student in the Stanford University Computer Science Department.

Previously, I completed my Bachelor's and Master's degrees in the Brown University Computer Science Department. My time at Brown centered on reinforcement learning research under my advisor, Michael L. Littman. In parallel, I was a member of the Humans to Robots Laboratory, where I worked with Stefanie Tellex on natural language understanding for robots. I was also a member of the Brown Laboratory for Linguistic Information Processing, run by Eugene Charniak.

Research Interests

These are my primary areas of interest within the exciting and rapidly growing field of machine learning. Since these areas overlap considerably, I often like to think about problems that lie at their intersection.

  • Reinforcement Learning
  • Natural Language Processing
  • Robotics
  • Representation Learning
  • Multi-task & Lifelong Learning
  • Curriculum Learning

Papers & Publications


For my CV, please click here.

Value Preserving State-Action Abstractions

David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2019.
ICLR Workshop on Structures and Priors in Reinforcement Learning, 2019.

Grounding English Commands to Reward Functions

James MacGlashan, Monica Babes-Vroman, Marie desJardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, Lei Yang.
Robotics: Science and Systems, 2015.