LSU to Embed Ethics in the Development of New Technologies, Including AI
April 28, 2022
Deborah Goldgaber, director of the LSU Ethics Institute and associate professor in the Department of Philosophy & Religious Studies, has received a $103,900 departmental enhancement grant from the Louisiana Board of Regents to begin to reshape LSU’s science, technology, engineering, and math (STEM) curriculum around ethics and human values.
The LSU program will be modeled on Harvard’s Embedded EthiCS program, where the last two letters stand for computer science. Embedded ethics education runs counter to the prevalent tendency to treat research objectives and ethics objectives as distinct or even opposed, with ethics becoming “someone else’s” responsibility.
“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”
The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.
“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”
“Leaders don’t just do what they’re told—they make decisions with vision.”
Deborah Goldgaber
Artificial intelligence, or AI, is increasingly helping humans make decisions, come up with answers, and discover solutions both faster and better than ever before. But since AI becomes intelligent by learning from patterns, seen or unseen, in available data, it can also inherit existing prejudices and biases. Some data, such as a person’s zip code, can inadvertently become a proxy for other data, such as a person’s race.
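To make the proxy effect concrete, here is a minimal sketch using entirely synthetic, hypothetical data (it is not drawn from Goldgaber’s curriculum or any real lender). A simple loan-approval model is trained without ever seeing an applicant’s group membership, yet it reproduces a historical bias because zip code stands in for that information:

```python
# Hypothetical sketch of "proxy" bias: the protected attribute is never a
# feature, yet the model reproduces a historical bias through zip code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                               # protected attribute (never shown to the model)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)  # ~90% correlated with group
income = rng.normal(50, 15, n)                              # same income distribution for both groups

# Historical approvals were biased against group 1, independent of income.
approved = (income - 15 * group + rng.normal(0, 5, n)) > 45

# Train only on zip code and income -- 'group' is excluded from the data.
X = np.column_stack([zip_code, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted approval rate, group {g}: {pred[group == g].mean():.2f}")
# The rates differ sharply: zip code acted as a stand-in for group membership,
# so the old bias survives even though the protected attribute was never used.
```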
“I think there’s a fear that technology could get beyond our control, which is alienating,” Goldgaber said. “Especially in such a massive terrain as AI, which is already supplementing, complementing, and taking over areas of human decision-making.”
“For us to benefit as much as possible from emerging technologies, rights and human values have to shape technology development from the beginning,” Goldgaber continued. “They cannot be an afterthought.”
Goldgaber gives the example of a mortgage application to illustrate the so-called “black box problem” in AI and how technologies developed to increase efficiency can inadvertently undermine equality and fairness, including our ability to judge for ourselves what is just or unjust. Risk assessments based on large amounts of data have become one of the core applications of AI.
“You get a report that you’re not accepted and don’t get the loan, but you don’t know why, or the basis on which the decision was made,” Goldgaber said. “If you can’t know and evaluate for yourself the kinds of reasons or logic used, it undercuts your rights and your autonomy. This goes back to the foundation of our normative culture where we believe we have a right to know the reasons behind the decisions that affect our lives.”
Western philosophy is based on the Enlightenment idea that humans have worth because they can reason and make decisions for themselves, and that, because they have these capacities, they should also have the right to choose and act autonomously. AI, meanwhile, often provides answers without any explanation of the reasoning behind the output; there is no further information to evaluate.
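As a rough illustration of the black box problem, the sketch below (again hypothetical, with made-up features and synthetic data, not any lender’s actual system) trains a loan model that hands an applicant only a yes-or-no answer. For a simple linear model, per-feature contributions can at least be read off as a crude set of “reasons”; many more complex models offer no such direct account of their output:

```python
# Hypothetical sketch of the "black box" problem: the applicant sees only a
# yes/no decision. For a linear model, per-feature contributions to the score
# can be surfaced; for many complex models, they cannot be read off directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
features = ["income", "debt_ratio", "years_employed"]  # illustrative features only

X = np.column_stack([
    rng.normal(50, 15, n),    # income (thousands)
    rng.uniform(0, 1, n),     # debt-to-income ratio
    rng.integers(0, 30, n),   # years employed
])
y = (X[:, 0] - 40 * X[:, 1] + X[:, 2] + rng.normal(0, 5, n)) > 30

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[42.0, 0.65, 3.0]])
decision = model.predict(applicant)[0]
print("decision:", "approved" if decision else "denied")  # all the applicant is told

# One crude way to reconstruct the "reasons": each feature's contribution to the score.
contributions = model.coef_[0] * applicant[0]
for name, c in zip(features, contributions):
    print(f"{name:>15}: {c:+.2f}")
```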
“LSU students must know that what they do matters to the world we live in.”
Deborah Goldgaber
Her collaborative curriculum development at LSU will span four areas: AI and data science; research ethics and integrity; bioethics; and human-centered design. Over the past year, she has also been working with Hartmut Kaiser, senior research scientist at LSU’s Center for Computation & Technology, on an AI-focused speaker series in which ethics is a recurring theme.
“Our goal is to place LSU at the forefront of emerging efforts of the world’s greatest universities and research institutions to embed ethics in all facets of knowledge production,” Goldgaber said. “LSU students must know that what they do matters to the world we live in.”
The LSU Ethics Institute was founded in 2018 with generous seed funding from James Maurin. It is the center for research, teaching, and training in the domain of ethics at Louisiana State University.