I am a 5th-year PhD student in Computer Science at the University of Southern California, working under the supervision of Dr. Phebe Vayanos and Dr. Milind Tambe. I am also affiliated with the CAIS Center for Artificial Intelligence in Society. I work on topics in optimization and algorithmic fairness, with direct applications to public health and AI for Social Good. Prior to joining USC, I completed my master's degree under the supervision of Dr. Kagan Tumer in the Autonomous Agents and Distributed Intelligence (AADI) lab, working on multi-agent reinforcement learning for tightly coupled domains.
Computer Science Department
University of Southern California
Center for Artificial Intelligence in Society
I am working with LAHSA and CPL to assess the risk measurement tool currently used for allocating housing resources. In particular, I work with observational data on previous intervention assignments and study the treatment effect and fairness of the current system using a causal inference framework.
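To illustrate the kind of estimate such an analysis produces, here is a minimal sketch (not the actual study method or data) of estimating an average treatment effect from observational records via inverse propensity weighting; the records, groups, and outcomes below are entirely synthetic.

```python
# Hedged sketch: inverse propensity weighting (IPW) for an average
# treatment effect (ATE) from observational data. All data is made up.
from collections import defaultdict

records = [
    # (covariate_group, treated, outcome)
    ("high_risk", 1, 0.9), ("high_risk", 1, 0.8), ("high_risk", 0, 0.4),
    ("low_risk",  1, 0.7), ("low_risk",  0, 0.5), ("low_risk",  0, 0.6),
]

def ipw_ate(records):
    # Estimate the propensity e(x) = P(treated | group) from the data.
    counts = defaultdict(lambda: [0, 0])  # group -> [num_treated, num_total]
    for group, treated, _ in records:
        counts[group][0] += treated
        counts[group][1] += 1
    propensity = {g: t / n for g, (t, n) in counts.items()}

    # IPW estimator: E[T*Y / e(X)] - E[(1-T)*Y / (1 - e(X))]
    n = len(records)
    treated_term = sum(t * y / propensity[g] for g, t, y in records) / n
    control_term = sum((1 - t) * y / (1 - propensity[g]) for g, t, y in records) / n
    return treated_term - control_term

print(round(ipw_ate(records), 3))  # → 0.3 on this toy data
```

Weighting each outcome by the inverse of its treatment probability corrects for the fact that, in observational data, who receives the intervention is not randomized.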
This work is motivated by fairness concerns in public health settings. Recently, several algorithmic solutions that leverage social network data to make recommendations have been deployed to improve the effectiveness of interventions (e.g., HIV, substance abuse, and suicide prevention). We show that individuals' positions in the social network affect how much they benefit from these interventions. In particular, larger communities receive disproportionately more benefit than minorities. We extend methods from Operations Research and Welfare Economics to address different aspects of this problem.
Several social, behavioral, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, rely on social influence to spread and reinforce positive behavior. In particular, I have worked with social work researchers Dr. Rice, Dr. Fulginiti, and Dr. Adhikari on identifying data-driven approaches to improve the effectiveness of substance abuse and suicide prevention interventions.
The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying “influential” members in social networks. However, to engender trust in IM algorithms, the rationale behind their choice of “influential” nodes needs to be explained to their users. In this work, we tackle this open problem by proposing a general paradigm for designing explanations for IM algorithms that exploits the tradeoff between explanation accuracy and interpretability. Our paradigm treats IM algorithms as black boxes. We use this paradigm to build XplainIM, a suite of explanation systems. We tested the usability of XplainIM by explaining its solutions to more than 200 human subjects on Amazon Mechanical Turk (AMT). Our results demonstrate the importance of exploiting the accuracy-interpretability tradeoff in ensuring the intelligibility of our explanations.
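For readers unfamiliar with IM, the kind of black-box algorithm being explained looks roughly like the classic greedy seed-selection loop under the independent cascade model, sketched below; the toy graph, propagation probability, and budget are made up for illustration and have no connection to XplainIM's internals.

```python
# Illustrative sketch of greedy influence maximization under the
# independent cascade model. Graph and parameters are made up.
import random

graph = {  # toy directed graph: node -> out-neighbors
    "a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": [],
}

def cascade_size(seeds, p, rng):
    # One independent-cascade run: each newly activated node gets a
    # single chance to activate each out-neighbor with probability p.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph[node]:
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def greedy_seeds(k, p=0.3, sims=500, seed=0):
    rng = random.Random(seed)
    chosen = set()
    for _ in range(k):
        # Add the node with the largest Monte-Carlo-estimated spread.
        best = max(
            (n for n in graph if n not in chosen),
            key=lambda n: sum(cascade_size(chosen | {n}, p, rng)
                              for _ in range(sims)),
        )
        chosen.add(best)
    return chosen

print(greedy_seeds(2))
```

The explanation problem the paragraph describes is exactly why such output is opaque: the returned seed set gives no human-readable reason why those nodes beat the alternatives.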
Multi-agent systems can be used in complex tasks to improve performance in terms of both speed and effectiveness. However, their use poses important challenges. Specifically, in domains where the agents' actions are tightly coupled, i.e., each agent's action has a significant impact on other agents' rewards, achieving cooperative behavior at the group level is extremely difficult. In such cases, learning policies can greatly benefit from reward shaping. In this work, I developed a reward function based on the notion of counterfactuals to tackle the reward shaping problem in these tightly coupled multi-agent tasks.
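The counterfactual idea can be sketched with the well-known difference reward: agent i is credited the global reward minus what the team would have earned with agent i's action removed. The target-coverage task below is a made-up illustration, not the actual domain from this work.

```python
# Hedged sketch of a counterfactual difference reward,
# D_i = G(z) - G(z_{-i}). Toy task: the team scores one point
# per distinct target covered by at least one agent.

def global_reward(actions):
    return len(set(a for a in actions if a is not None))

def difference_reward(actions, i):
    # Counterfactual: replace agent i's action with "no action".
    counterfactual = list(actions)
    counterfactual[i] = None
    return global_reward(actions) - global_reward(counterfactual)

actions = ["t1", "t1", "t2"]  # agents 0 and 1 redundantly pick the same target
print([difference_reward(actions, i) for i in range(len(actions))])
# → [0, 0, 1]: only agent 2's unique contribution earns credit
```

The shaped signal isolates each agent's marginal contribution, which is what makes learning tractable when the raw global reward is dominated by teammates' actions.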