Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam & Benjamin Van Roy.
Preprint (under review), 2020.
I am a third-year Ph.D. candidate in the Stanford University Computer Science Department, where I am advised by Benjamin Van Roy. My research is broadly focused on reinforcement learning.
Previously, I completed my Bachelor's and Master's degrees in the Brown University Computer Science Department. My time at Brown centered on reinforcement learning, under my advisor Michael Littman. In parallel, I was a member of the Humans to Robots Laboratory, where I worked with Stefanie Tellex on natural language understanding for robots. I was also a member of the Brown Laboratory for Linguistic Information Processing, run by Eugene Charniak.
I'm primarily interested in reinforcement learning, with the goal of building sequential decision-making agents that learn as efficiently and as flexibly as people do. These days, I'm especially interested in how information theory might offer principled insights into the challenges of sample-efficient reinforcement learning. Other areas of active interest include reinforcement learning with natural language and hierarchical reinforcement learning.
For my CV, please click here.
Deciding What to Learn: A Rate-Distortion Approach
Dilip Arumugam & Benjamin Van Roy.
Preprint (under review), 2020.
Dilip Arumugam, Peter Henderson, Pierre-Luc Bacon.
NeurIPS Workshop on Biological and Artificial Reinforcement Learning, 2020.
Dilip Arumugam & Benjamin Van Roy.
NeurIPS Workshop on Biological and Artificial Reinforcement Learning, 2020.
Aidan Curtis, Minjian Xin, Dilip Arumugam, Kevin Feigelis, Daniel Yamins.
International Conference on Machine Learning (ICML), 2020.
David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Dilip Arumugam, Debadeepta Dey, Alekh Agarwal, Asli Celikyilmaz, Elnaz Nouri, Bill Dolan.
Preprint, 2019.
David Abel, Nate Umbanhowar, Khimya Khetarpal, Dilip Arumugam, Doina Precup, Michael L. Littman.
Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2019.
ICLR Workshop on Structures and Priors in Reinforcement Learning, 2019.
Pierre-Luc Bacon, Dilip Arumugam, Emma Brunskill.
Multidisciplinary Conference on Reinforcement Learning and Decision Making, 2019.
David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, Lawson L.S. Wong.
Association for the Advancement of Artificial Intelligence (AAAI) Conference, 2019.
Dilip Arumugam, Jun Ki Lee, Sophie Saskin, Michael L. Littman.
Preprint, 2018.
Dilip Arumugam*, Siddharth Karamcheti*, Nakul Gopalan, Edward C. Williams, Mina Rhee, Lawson L.S. Wong, Stefanie Tellex.
Autonomous Robots (AuRo), 2018.
David Abel, Dilip Arumugam, Lucas Lehnert, Michael L. Littman.
International Conference on Machine Learning (ICML), 2018.
Nakul Gopalan*, Dilip Arumugam*, Lawson L.S. Wong, Stefanie Tellex.
Robotics: Science and Systems, 2018.
David Abel, Dilip Arumugam, Lucas Lehnert, Michael L. Littman.
NIPS Workshop on Hierarchical Reinforcement Learning, 2017.
Christopher Grimm, Dilip Arumugam, Siddharth Karamcheti, David Abel, Lawson L.S. Wong, Michael L. Littman.
Preprint, 2017.
Siddharth Karamcheti, Edward C. Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L.S. Wong, Stefanie Tellex.
ACL Workshop on Language Grounding for Robotics, 2017.
Best Paper Award
Dilip Arumugam, David Abel, Kavosh Asadi, Nakul Gopalan, Christopher Grimm, Jun Ki Lee, Lucas Lehnert, Michael L. Littman.
Preprint, 2017.
Dilip Arumugam*, Siddharth Karamcheti*, Nakul Gopalan, Lawson L.S. Wong, Stefanie Tellex.
Robotics: Science and Systems, 2017.
James MacGlashan, Monica Babes-Vroman, Marie desJardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, Lei Yang.
Robotics: Science and Systems, 2015.
I've had the privilege of both learning from and researching with an amazing group of people: