Session 2: Artificial Intelligence as social practice: ideology, ethics and human rights


Chair: Desmond McNeill

Desmond McNeill is Professor Emeritus at the Centre for Development and the Environment (SUM), University of Oslo, which he previously directed. He is an Advisory Board Member of the Independent Panel on Global Governance for Health and was a member of The Lancet – University of Oslo Commission on Global Governance for Health (2014). His main research fields are governance, sustainable development and the relationship between research and policy. He has worked in many countries as a researcher and consultant. He recently co-edited a special issue of Global Policy with Sakiko Fukuda-Parr titled "Knowledge and Politics in Setting and Measuring the SDGs".

Speakers

Elizabeth Gibbons

Elizabeth Gibbons is currently a Senior Fellow at the FXB Center for Health and Human Rights in the Harvard T.H. Chan School of Public Health, where she participates in initiatives that leverage her expertise in advancing the human rights of children and adolescents. These have included the development of a cross-disciplinary child protection curriculum for graduate students and online HarvardX and Executive Education courses for child protection professionals, which she managed in her capacity as Director of the Child Protection Certificate Program. Since 2014, Gibbons has been engaged in the exploration of artificial intelligence (AI) and its impact on human rights, with particular attention to the potential for these technologies to affect inequality within and between global societies. As Chair of the Sustainable Development Committee, she led the development of a chapter within IEEE’s publication Ethically Aligned Design, 1st Edition, which identifies issues and makes recommendations for ensuring AI benefits humanity by contributing to the attainment of the UN Sustainable Development Goals. Prior to her academic appointment at Harvard FXB, Elizabeth Gibbons enjoyed a lengthy career in international development, primarily with the United Nations Children’s Fund (UNICEF).


Xi Lin, "Re/producing Inequality in Healthcare:  How Does AI Create and Consolidate Health Inequity?"

Contrary to the optimism prevalent in the healthcare industry that eulogises the creative transformations brought to the fore by AI, this article argues that AI re/produces health inequalities in several ways, drawing on empirical evidence collected from China. The first concerns data training. Before AI can offer diagnostic recommendations, it must be fed a huge amount of data, yet the collection of such data may be biased or prejudiced against certain social groups – a bias or prejudice that can significantly reduce the reliability of the very data at issue. AI is trained on, or conducts self-learning from, data collected in the real world, which may consolidate or entrench existing socioeconomic differences in healthcare. Many health research projects suffer from the underrepresentation of people of colour, LGBTQ people, or women. For instance, China's HIV/AIDS prevention campaign has long singled out the MSM (men who have sex with men) community as the highest HIV/AIDS-risk group, creating social stigma around and discrimination against gay men. An AI, upon teaching itself about health risks among social groups, may follow suit in targeting the MSM community. This will not only deepen existing health inequity (discrimination against gay men), but also divert our attention from other groups that demand greater attention (e.g. sex workers or drug users).

Xi Lin (Ph.D. in Political Science) is an associate professor and assistant dean of the Fudan Institute of Advanced Study in Social Sciences (IAS-Fudan). He received his doctoral degree from the London School of Economics. He is also the associate editor-in-chief of the Fudan Journal of the Humanities and Social Sciences and a former member of the Higher Education Academy (UK). His main research areas include theories of justice and the tradition of equity and justice in imperial China. He is the author of Emotions and Politics in Human Society (2012), Rethinking Chinese Jurisprudence and Exploring Its Future (2014), and a dozen research papers.

Kadija Ferryman, "Ethics and Policy in Precision Medicine"

This presentation will discuss the findings of the Fairness in Precision Medicine study, which examined the potential for bias in the collection, cleaning, and building of big health datasets, as well as the potential for discrimination in the implementation of precision medicine. It will identify the ethical issues at stake and cover current and future policy developments on precision medicine and AI in health.

Dr. Kadija Ferryman is a cultural anthropologist who studies the social, ethical, and cultural dimensions of health information technology. She is Industry Assistant Professor of Ethics and Engineering at the NYU Tandon School of Engineering, where she will be developing a core ethics and technology course and expanding the engineering ethics curriculum. Ferryman serves on the national institutional review board for the All of Us Research Program, and was formerly a policy researcher at the Urban Institute and a postdoctoral scholar at the Data & Society Research Institute. She received her anthropology degrees from Yale University (BA) and the New School for Social Research (PhD).

Discussant: Peter Asaro
