Hello, World!

I am a Research Assistant Professor at the College of Information and Computer Sciences at the University of Massachusetts Amherst, where I lead the EQUATE initiative and occasionally write for the Uncommon Good blog.

My research develops statistical methods to understand and augment public opinion formation within the systems of our digital information society. I aim to ensure that tomorrow’s AI and social media are representative, fair, explainable, and ready to interact with our open world for the public good.

  • To deepen our understanding of public opinion, I develop statistical methods for extracting representative signals about the public from social media and news media.
  • To augment public opinion formation, I design representative social media systems that facilitate political discourse, as well as machine learning methods that adhere to legal provisions, e.g., by preventing discrimination while providing AI explanations.

Research areas: fair and explainable machine learning, data science, computational social science, social media, network science, causality, open-world learning.

Research labs

I head the SIMS (Socially Intelligent Media and Systems) lab, which includes the following PhD students:

We also collaborate with MS students:

I am also a member of the KDL (Knowledge Discovery Lab), where I work with these PhD students:

News

  • Out of our four submissions to ICWSM’24 (including JQD:DM), two manuscripts, on social media polls and news articles, were accepted, and the other two received “Revise and Resubmit” decisions, which typically end in acceptance as well. Congratulations, SIMS lab!
  • I was quoted in the BusinessWest article “AI Promises To Impact The Workforce In Unexpected Ways”.
  • I was honored to give a talk about a path towards fair and explainable automated decision-making and to participate in a panel at a Responsible AI workshop at Carnegie Mellon University. I outline our vision in these two slides.
  • Our manuscript Learning from Discriminatory Training Data has been accepted to AIES’23. We frame discrimination prevention in machine learning as a dataset shift problem and propose a solution that builds on our prior work “Marrying Fairness and Explainability”.
  • UMass Amherst published an article quoting me and featuring my graduate course on Responsible AI. In today’s globalized world, we need to design techno-social systems with social responsibility in mind.
  • In 2022 I taught my Responsible AI course for the first time.
  • Our paper Marrying Fairness and Explainability in Supervised Learning was accepted to the FAccT’22 conference. Check out the recorded presentation.
  • In 2022 we successfully organized the NLP competition SemEval-2022 Task 8: Multilingual news article similarity, which attracted over 30 research teams, and we released the largest labeled multilingual dataset of news articles, published across 124 countries.

Grants and prizes

Press coverage

Image generously contributed by Mohamed Hassan.