About me

I am a 5th-year PhD student in Computer Science at the University of Southern California, working under the supervision of Dr. Phebe Vayanos and Dr. Milind Tambe. I am also affiliated with the CAIS Center for Artificial Intelligence in Society. I work on topics in optimization and algorithmic fairness with direct applications to public health and AI for Social Good. Prior to joining USC, I completed my master's degree under the supervision of Dr. Kagan Tumer in the Autonomous Agents and Distributed Intelligence (AADI) lab, working on multi-agent reinforcement learning for tightly coupled domains.

  • I received the Grace Hopper Celebration Student Scholarship to attend GHC2021.
  • I will join the RegLab at Stanford as a summer fellow, working on data-driven approaches to improving public health.
  • I will be presenting our work "Fair Influence Maximization: A Welfare Optimization Approach" during the AAAI 2021 poster sessions on Friday, Feb 5, 8:45-10:30 AM (PST) and 4:45-6:30 PM (PST).
  • Our paper "Fair Influence Maximization: A Welfare Optimization Approach" has been accepted to AAAI 2021!
  • I received the Women in Operations Research Bayer Scholarship at INFORMS 2020!
  • I will be presenting our recent work on a welfare optimization approach to fairness in influence maximization at the INFORMS Annual Meeting 2020.
  • I am co-organizing a session on "Fairness in Machine Learning and Optimization" at the INFORMS Annual Meeting 2020.
  • In collaboration with Code in the Schools and the STEM Academy of Hollywood, we are holding a STEM outreach program to raise high school students' interest in pursuing STEM fields. Check out more details here.
  • We received the INFORMS Diversity, Equity and Inclusion Ambassadors Award!
  • I am co-organizing the AI for Social Impact workshop at Harvard University.
  • Our paper on fairness in the robust graph covering problem has been accepted at NeurIPS 2019.
Aida Rahmattalabi

PhD Candidate

Computer Science Department
University of Southern California
Center for Artificial Intelligence in Society

Email: rahmatta@usc.edu


Learning Fair Housing Allocation Policies among People Experiencing Homelessness.

I am working with LAHSA and CPL to assess the risk measurement tool currently used to allocate housing resources. In particular, I work with observational data on previous intervention assignments and study the treatment effect and fairness of the current system within a causal inference framework.
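To give a concrete flavor of the causal-inference machinery involved, the sketch below estimates an average treatment effect from observational records via inverse propensity weighting. The record layout and propensity scores are hypothetical illustrations, not the actual LAHSA/CPL data pipeline.

```python
def ipw_ate(data):
    """Inverse-propensity-weighted estimate of the average treatment
    effect from observational records (t, y, e), where t is the binary
    treatment, y the outcome, and e the estimated propensity score
    P(t = 1 | covariates). Reweighting corrects for the fact that
    treatment was not randomly assigned."""
    treated = sum(y / e for t, y, e in data if t == 1)
    control = sum(y / (1 - e) for t, y, e in data if t == 0)
    n = len(data)
    return treated / n - control / n

# Tiny made-up example: everyone has propensity 0.5, treated outcomes
# are 1.0 and control outcomes 0.0, so the estimated effect is 1.0.
records = [(1, 1.0, 0.5), (0, 0.0, 0.5)]
print(ipw_ate(records))  # → 1.0
```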

Fairness in Graph-based Problems.

This work is motivated by fairness concerns in public health settings. Recently, several algorithmic solutions that leverage social network data to make recommendations have been deployed to improve the effectiveness of interventions (e.g., HIV, substance abuse, and suicide prevention). We show that individuals' positions in the social network affect how much they benefit from these interventions. In particular, larger communities receive disproportionately more benefit than minority groups. We extend methods from Operations Research and Welfare Economics to address different aspects of this problem.
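The welfare-optimization perspective can be illustrated with an isoelastic social welfare function, a standard form from welfare economics in which a single inequity-aversion parameter interpolates between the utilitarian sum and strongly egalitarian criteria. The group utilities below are made-up numbers, and this is a generic textbook sketch rather than the exact objective from our papers.

```python
import math

def isoelastic_welfare(group_utilities, alpha):
    """Isoelastic social welfare over per-group utilities.

    alpha = 0 recovers the utilitarian sum; larger alpha puts more
    weight on worse-off groups (alpha -> infinity approaches maximin).
    """
    if alpha == 1.0:
        return sum(math.log(u) for u in group_utilities)
    return sum(u ** (1 - alpha) / (1 - alpha) for u in group_utilities)

# Two hypothetical ways of splitting intervention benefit between
# a majority group and a minority group:
equal = [0.5, 0.5]
skewed = [0.8, 0.2]

# A utilitarian criterion (alpha = 0) is indifferent between them,
# but an inequity-averse criterion (alpha = 2) prefers the equal split.
print(isoelastic_welfare(equal, 0.0), isoelastic_welfare(skewed, 0.0))   # → 1.0 1.0
print(isoelastic_welfare(equal, 2.0) > isoelastic_welfare(skewed, 2.0))  # → True
```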

Social Network-based Interventions for Public Health.

Several social, behavioral, and public health interventions, such as suicide/HIV prevention or community preparedness against natural disasters, rely on social influence to spread and reinforce positive behavior. In particular, I have worked with social work researchers Dr. Rice, Dr. Fulginiti, and Dr. Adhikari on identifying data-driven approaches to improve the effectiveness of substance abuse and suicide prevention interventions.

Explanation Systems for Influence Maximization Algorithms.

The field of influence maximization (IM) has made rapid advances, resulting in many sophisticated algorithms for identifying “influential” members in social networks. However, in order to engender trust in IM algorithms, the rationale behind their choice of “influential” nodes needs to be explained to their users. In this work, we tackle this open problem by proposing a general paradigm for designing explanations for IM algorithms that exploits the tradeoff between explanation accuracy and interpretability. Our paradigm treats IM algorithms as black boxes. We use this paradigm to build XplainIM, a suite of explanation systems, and we tested the usability of XplainIM by explaining its solutions to more than 200 human subjects on Amazon Mechanical Turk (AMT). In particular, our results demonstrate the importance of exploiting the accuracy-interpretability tradeoff in ensuring the intelligibility of our explanations.
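For readers unfamiliar with the black boxes being explained, here is a minimal sketch of the classic greedy influence-maximization scheme of Kempe et al. under the independent cascade model. The graph, propagation probability, and Monte Carlo trial count are illustrative choices; XplainIM treats algorithms like this one as opaque.

```python
import random

def simulate_cascade(graph, seeds, p=0.1, rng=random):
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each neighbor with prob p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, trials=200, p=0.1):
    """Greedy hill-climbing: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(simulate_cascade(graph, seeds + [v], p)
                       for _ in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

# Toy star graph: node 0 is connected to five leaves, so it is
# the natural first seed.
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(greedy_im(star, 1, p=0.5))  # → [0]
```

An explanation system would then have to justify to a user why node 0 was chosen, without access to the algorithm's internals.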

Credit Assignment for Multi-agent Reinforcement Learning in Tightly Coupled Settings.

Multi-agent systems can be used in complex tasks to improve performance in terms of both speed and effectiveness. However, the use of multi-agent systems poses important challenges. Specifically, in domains where the agents' actions are tightly coupled, i.e., each agent's action has a significant impact on other agents' rewards, achieving cooperative behavior at the group level is extremely difficult. In such cases, learning policies can greatly benefit from reward shaping. In this work, I developed a reward function based on the notion of counterfactuals to tackle the reward shaping problem in these tightly coupled multi-agent tasks.
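The counterfactual idea can be sketched with the difference reward D_i = G(z) − G(z_{−i}), which compares the global reward with and without agent i's action. The coverage objective below is a toy illustration, not the actual domain or reward function from my thesis work.

```python
def difference_reward(global_reward, joint_action, i, default_action=None):
    """Counterfactual (difference) reward for agent i:
    D_i = G(z) - G(z with agent i's action replaced by a default).

    This isolates agent i's own contribution to the team objective,
    which sharpens the learning signal in tightly coupled tasks."""
    counterfactual = list(joint_action)
    counterfactual[i] = default_action
    return global_reward(joint_action) - global_reward(counterfactual)

# Toy team objective: number of distinct targets covered.
def coverage(actions):
    return len({a for a in actions if a is not None})

# Agent 0 duplicates agent 1's target, so its difference reward is 0;
# covering a distinct target instead would earn it a reward of 1.
print(difference_reward(coverage, ["A", "A"], 0))  # → 0
print(difference_reward(coverage, ["B", "A"], 0))  # → 1
```

Note that the global reward alone gives both agents the same signal whether or not their efforts are redundant, which is exactly the credit-assignment problem the difference reward addresses.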


Conference Publications and Preprints
Book Chapters