Seminar: Equitable and Explainable Artificial Intelligence

Graduate seminar, University of Massachusetts Amherst, 2021

This seminar focuses on recent research into equitable and transparent algorithms and systems. We will review cutting-edge work that supports fairness, accountability, and transparency across several research areas, in particular fair machine learning, explainable artificial intelligence, and their interdisciplinary underpinnings. Introductory lectures will describe the origins of the relevant research problems, highlight the major threads and approaches in this vibrant research space, and explain how they relate to one another. The course will primarily involve reading and discussing papers and book chapters.

Classes: Wednesdays 10:10AM - 11:25AM, Computer Science Bldg rm 140

Schedule

Meeting 1 (1-Sep)

  • LECTURE Intro to the seminar and legal/normative notions of discrimination.

Meeting 2 (15-Sep)

  • FMLB CHAPTER Intro (pages 7-35), where FMLB is the Fairness and Machine Learning book by Barocas, Hardt, and Narayanan (fairmlbook.org). LINK
  • PAPER (SECTIONS 1-5) Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6). LINK

Meeting 3 (22-Sep)

  • FMLB CHAPTER Classification (pages 37-75). LINK (A short illustrative sketch of the chapter's observational fairness criteria follows below.)
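
The Classification chapter centers on observational fairness criteria such as independence (demographic parity) and separation (equalized odds). The minimal Python sketch below, which is not from the book, estimates the corresponding gaps from binary labels, predictions, and a binary group attribute; the toy data and acceptance rule are illustrative assumptions.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group acceptance rate, true positive rate, and false positive rate."""
    rates = {}
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        rates[int(g)] = {
            "acceptance": yp.mean(),    # P(Yhat = 1 | A = g)
            "tpr": yp[yt == 1].mean(),  # P(Yhat = 1 | Y = 1, A = g)
            "fpr": yp[yt == 0].mean(),  # P(Yhat = 1 | Y = 0, A = g)
        }
    return rates

# Toy data: a prediction that is (unfairly) correlated with group membership.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=5000)
y_true = rng.integers(0, 2, size=5000)
y_pred = (rng.random(5000) < 0.3 + 0.2 * group).astype(int)

r = group_rates(y_true, y_pred, group)
# Independence (demographic parity): acceptance rates should match across groups.
dp_gap = abs(r[0]["acceptance"] - r[1]["acceptance"])
# Separation (equalized odds): TPR and FPR should match across groups.
eo_gap = max(abs(r[0]["tpr"] - r[1]["tpr"]), abs(r[0]["fpr"] - r[1]["fpr"]))
print(f"demographic parity gap: {dp_gap:.3f}, equalized odds gap: {eo_gap:.3f}")
```

In this toy construction the prediction depends on the group but not on the true label, so both gaps come out near 0.2; the chapter discusses how these criteria differ and when they conflict.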

Meeting 4 (29-Sep)

  • LECTURE Terminology differences between machine learning and sociology: substantive vs. formal equality of opportunity, and direct vs. indirect discrimination in sociology. Notions of fairness missing from the ML literature.
  • OPTIONAL PAPER Calders, T., Kamiran, F., & Pechenizkiy, M. (2009). Building classifiers with independency constraints. ICDM Workshops 2009 - IEEE International Conference on Data Mining, 13–18. LINK
  • PAPER Zafar, M. B., Valera, I., Rodriguez, M. G., Gummadi, K. P., & Weller, A. (2017). From Parity to Preference-based Notions of Fairness in Classification. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 229–239). LINK

Meeting 5 (6-Oct)

  • FMLB CHAPTER Causality (pages 79-120). LINK

Meeting 6 (13-Oct)

  • LECTURE Relation of counterfactual fairness to impact parity. Business necessity attributes: why do we need them, and how can we implement them? The relation to substantive equality of opportunity. Introduction to path-specific causal notions of fairness. (A toy counterfactual-fairness sketch follows this meeting's readings.)
  • PAPER Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018). Human Perceptions of Fairness in Algorithmic Decision Making. Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW ’18, 903–912. LINK
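
To make the counterfactual and path-specific notions concrete, here is a toy Python sketch on a hypothetical two-feature linear structural causal model; the structural equations, coefficients, and predictor are illustrative assumptions, not the model from any of the readings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical linear SCM: the protected attribute A affects a business-necessity
# feature Q and a proxy feature X; U_Q and U_X are latent background variables
# shared between the factual and counterfactual worlds.
a = rng.integers(0, 2, size=n)
u_q, u_x = rng.normal(size=n), rng.normal(size=n)
q = 0.8 * a + u_q              # e.g., a qualification partially shaped by A
x = 1.5 * a + 0.5 * q + u_x    # a proxy feature strongly driven by A

def predictor(q, x):
    """A toy score that relies on the proxy X."""
    return 0.2 * q + 0.7 * x

# Counterfactual world: flip A while keeping the same latent noise, then
# propagate the change through the structural equations.
a_cf = 1 - a
q_cf = 0.8 * a_cf + u_q
x_cf = 1.5 * a_cf + 0.5 * q_cf + u_x

gap = np.abs(predictor(q, x) - predictor(q_cf, x_cf)).mean()
print(f"mean |factual - counterfactual| prediction gap: {gap:.3f}")
# A counterfactually fair predictor would make this gap (approximately) zero.
# A path-specific variant would propagate the flip of A only along the paths
# deemed unfair (e.g., through X), while holding the business-necessity path
# through Q at its factual value.
```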

Meeting 7 (20-Oct)

  • OPTIONAL PAPER Chiappa, S. (2019). Path-Specific Counterfactual Fairness. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 7801–7808. LINK
  • PAPER Wu, Y., Zhang, L., Wu, X., & Tong, H. (2019). PC-Fairness: A unified framework for measuring causality-based fairness. Advances in Neural Information Processing Systems, 32. LINK

Meeting 8 (27-Oct)

  • FMLB CHAPTER Testing discrimination in practice (pages 121-157). LINK

Meeting 9 (3-Nov)

  • LECTURE Which impacts are justified, and how do we measure feature impact (a.k.a. input influence, information flow, feature relevance)? A potential objective for preventing discrimination. (A sampling-based attribution sketch follows this meeting's readings.)
  • PAPER Chockler, H., & Halpern, J. Y. (2003). Responsibility and blame: A structural-model approach. IJCAI International Joint Conference on Artificial Intelligence, 22, 147–153. LINK
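
As one concrete formalization of feature impact, the sketch below gives a Monte Carlo estimate of Shapley-value attributions, the quantity whose causal reading is examined in the Janzing et al. paper assigned for the next meeting; the toy model, baseline, and sample count are illustrative assumptions.

```python
import numpy as np

def shapley_attributions(model, x, baseline, n_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values for a single input x.

    Features outside the sampled coalition are set to `baseline`, one common
    (and debated) way of 'removing' a feature.
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)          # random order in which features join
        z = baseline.astype(float)
        prev = model(z)
        for j in order:
            z[j] = x[j]                     # add feature j to the coalition
            cur = model(z)
            phi[j] += cur - prev            # marginal contribution of feature j
            prev = cur
    return phi / n_samples

# Toy model and input (illustrative only).
model = lambda z: 3.0 * z[0] + z[1] * z[2]
x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)

phi = shapley_attributions(model, x, baseline)
print("attributions:", phi.round(3))
print("efficiency check:", phi.sum().round(3), "vs", model(x) - model(baseline))
```

The Quantitative Input Influence paper by Datta et al., listed under "Explainable artificial intelligence" below, measures influence in a similar coalition-based way but uses randomized interventions on the inputs rather than a fixed baseline.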

Meeting 10 (10-Nov)

  • PAPER Janzing, D., Minorics, L., & Blöbaum, P. (2019). Feature relevance quantification in explainable AI: A causal problem. LINK
  • PAPER Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43. LINK

Meeting 11 (17-Nov)

  • LECTURE Interventional mixtures, their relation to counterfactual fairness, and how they preserve impact.
  • PAPER Mishra, A., Perello, N., & Grabowicz, P. A. (2021). Towards Explainable and Fair Supervised Learning. The SRML Workshop at ICML’21 and a longer arXiv preprint.

Meeting 12 (1-Dec)

  • FMLB CHAPTER A broader view (pages 159-187). LINK
  • OPTIONAL FMLB CHAPTER Datasets (pages 187-222). LINK

Meeting 13 (8-Dec)

  • PAPER D’Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., & Halpern, Y. (2020). Fairness is not static: Deeper understanding of long term fairness via simulation studies. FAT*2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 525–534. LINK
  • PAPER Jiang, R., Chiappa, S., Lattimore, T., György, A., & Kohli, P. (2019). Degenerate feedback loops in recommender systems. AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 383–390. LINK

Relevant works grouped by research area

Interdisciplinary research and case studies

  • Lipton, Z. C., & Steinhardt, J. (2019). Troubling trends in machine-learning scholarship. Queue, 17(1), 1–15.
  • Bertrand, M., & Mullainathan, S. (2003). Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. In NBER Working Paper No. 9873.
  • Datta, A., Datta, A., Makagon, J., Mulligan, D. K., & Tschantz, M. C. (2018). Discrimination in Online Advertising: A Multidisciplinary Inquiry. Proceedings of Machine Learning Research, 81, 1–15.
  • Wachter, S. (2019). Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. SSRN Electronic Journal, 1–74.
  • Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018). Human Perceptions of Fairness in Algorithmic Decision Making. Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW ’18, 903–912.

Fairness in machine learning

  • Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2017). Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. Proceedings of the 26th International Conference on World Wide Web - WWW ’17, 1171–1180.
  • Zafar, M. B., Valera, I., Rodriguez, M. G., & Gummadi, K. P. (2017). Fairness Constraints: Mechanisms for Fair Classification. Artificial Intelligence and Statistics, 54.
  • Lipton, Z. C., Chouldechova, A., & McAuley, J. (2018). Does mitigating ML’s impact disparity require treatment disparity? Advances in Neural Information Processing Systems, 31, 8125–8135.
  • Chouldechova, A. (2017). Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data, 5(2), 153–163.

Explainable artificial intelligence

  • Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43.
  • Janzing, D., Minorics, L., & Blöbaum, P. (2019). Feature relevance quantification in explainable AI: A causal problem.
  • Datta, A., Sen, S., & Zick, Y. (2016). Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. 2016 IEEE Symposium on Security and Privacy (SP), 598–617.
  • Mothilal, R. K., Sharma, A., & Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–617.

Causal inference vs. fairness and explainability

  • Barocas, S., Hardt, M., & Narayanan, A. (2019). Chapter 4 in Fairness and Machine Learning. fairmlbook.org.
  • Chiappa, S. (2019). Path-Specific Counterfactual Fairness. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 7801–7808.
  • Grabowicz, P. A., Perello, N., & Takatsu, K. (2019). Resilience of Supervised Learning Algorithms to Discriminatory Data Perturbations.

Long-term impact of fairness

  • Liu, L. T., Dean, S., Rolf, E., Simchowitz, M., & Hardt, M. (2019). Delayed Impact of Fair Machine Learning. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI’19, 6196–6200.
  • D’Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., & Halpern, Y. (2020). Fairness is not static: Deeper understanding of long term fairness via simulation studies. FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 525–534.