Teach students to make good choices in an algorithm-driven world

In January, Colby College announced the formation of the Davis Institute for Artificial Intelligence, calling it the “first interdisciplinary institute for artificial intelligence in a liberal arts college.” There is a reason no other liberal arts college has undertaken an endeavor of this nature. The role of these institutions has been to broadly prepare undergraduates to live in a democratic society. Artificial intelligence centers, by contrast, such as the Stanford Artificial Intelligence Laboratory, have largely focused on highly specialized training for graduate students in complex areas of mathematics and computer engineering. What can small liberal arts colleges offer in response?

There is a clue in a statement from the first director of the Davis Institute, natural language processing expert Amanda Stent: “Artificial intelligence will continue to have a broad and profound societal impact, which means society as a whole should have a say in what we do with it. For that to happen, each of us needs a fundamental understanding of the nature of this technology,” she said.

What constitutes a “fundamental understanding” of artificial intelligence? Can you really understand the convolutional neural networks underneath driverless cars without doing advanced calculus? Do most of us need to understand this technology that deeply, or only in general terms?

A relevant analogy might be to ask whether we need to train automotive mechanics and designers, or simply people who can drive a car responsibly.

If it’s the former, most liberal arts colleges are at a disadvantage. Many of them struggle to hire and retain people with the technical knowledge and experience to teach in these fields. Someone skilled in algorithmic design is likely earning a comfortable salary in industry or working at a large, well-funded institution with the economies of scale that major scientific ventures require.

If it’s the latter, most small liberal arts colleges are well equipped to train students on the social and ethical challenges that AI presents. These colleges specialize in providing broad training that prepares people not simply to acquire technical skills for the workforce, but to become complete, fully integrated citizens. Increasingly, that will mean grappling with the appropriate social use of algorithms, artificial intelligence, and machine learning in a world driven by pervasive datafication.

In a thoughtful piece, two researchers at the University of Massachusetts Applied Ethics Center, Nir Eisikovits and Dan Feldman, identify a key danger of our algorithm-driven society: the erosion of humans’ ability to make good choices. Aristotle called this capacity phronesis, the art of living well in community with others. Aristotle saw that the only way to acquire it was through habit, through the experience of engaging with others in different situations. By replacing human choice with machine choice, we risk missing opportunities to develop civic wisdom. As algorithms increasingly choose what we watch, what we listen to, and what we see on social media, we lose the practice of choosing. That may not matter when it comes to tonight’s Netflix pick, but it has broader implications. If we stop making choices about our entertainment, does it affect our ability to make moral choices?

Eisikovits and Feldman pose a provocative question: if humans can no longer acquire phronesis, do we forfeit the high esteem in which philosophers like John Locke and others in the natural rights tradition held humans’ capacity for self-government? Do we lose the ability to govern ourselves? Or, perhaps more important, do we lose the ability to know when that ability has been taken away from us? The liberal arts can provide the tools needed to cultivate phronesis.

But without a fundamental understanding of how these technologies work, are the liberal arts at a disadvantage in applying their “wisdom” to a changing reality? Instead of debating whether we need people who have read Chaucer or people who understand what gradient descent means, we should train people to do both. Colleges must take the lead in educating students who can adopt a “technological ethic”: practical knowledge of AI, joined to the liberal arts, so that they understand how to situate themselves within an AI-driven world. This means not only being able to “drive a car responsibly” but also understanding how an internal combustion engine works.

Undoubtedly, engagement with these technologies can and should be woven throughout the curriculum, not just in special-topics courses such as “Philosophy of Technology” or “Surveillance in Literature,” but in introductory courses and as part of a core curriculum across disciplines. But this is not enough. Faculty teaching these courses need specialized training in developing or using frameworks, metaphors, and analogies that explain the ideas behind artificial intelligence without requiring advanced computational or mathematical knowledge.

In my case, I try to teach students to be algorithmically literate in a political science course I have subtitled “Algorithms, Data and Politics.” The course covers the ways in which data collection and analysis have created unprecedented challenges and opportunities for the distribution of power, equity, and justice. In this class, I rely on metaphors and analogies to explain complex concepts. For example, I describe a neural network as a giant panel with tens of thousands of dials (each representing a feature or parameter) that are tuned thousands of times per second to produce a desired result. I describe datafication, and the effort to make the user predictable, as a sort of “intensive farming” in which the variability that affects the “product” is reduced.
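
One way to make the dial metaphor concrete is a few lines of code. The sketch below is a deliberately tiny illustration, not a real neural network: a model with just two “dials,” w and b, is nudged repeatedly by gradient descent until its output matches a desired result, the same basic procedure that tunes the millions of parameters in real systems.

```python
# Toy illustration of the "panel of dials" metaphor: a model with two
# parameters (the "dials") is adjusted step by step until its output
# matches a desired result. Real neural networks do the same thing with
# millions of dials.

def predict(w, b, x):
    # The model's output for input x, controlled by two dials: w and b.
    return w * x + b

def train(data, steps=1000, lr=0.01):
    w, b = 0.0, 0.0  # start with both dials at zero
    for _ in range(steps):
        for x, target in data:
            error = predict(w, b, x) - target
            # Gradient descent: turn each dial slightly in the direction
            # that shrinks the error on this example.
            w -= lr * error * x
            b -= lr * error
    return w, b

# The "desired result": outputs that follow the pattern 2*x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(f"learned dials: w={w:.2f}, b={b:.2f}")
```

Run it, and the two dials settle near w = 2 and b = 1, the settings that reproduce the desired outputs; scale the same idea up by many orders of magnitude and you have the tuning process behind modern AI systems.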

Are these analogies perfect? No. I’m sure I’m leaving out key elements in my descriptions, partly by design, to promote critical thinking. But the alternative is not sustainable. A society of people who have no idea how artificial intelligence, algorithms, and machine learning work is a society that can be captured and manipulated. We cannot set the bar for understanding so high that only mathematicians and computer scientists are able to speak about these tools. Nor can our education be so superficial that students develop incomplete and misleading (e.g., techno-utopian or techno-dystopian) notions of the future. We need AI education for society that is intentionally inefficient, just as the liberal arts emphasis on breadth, wisdom, and human development is inherently and intentionally inefficient.

As Notre Dame humanities professor Mark Roche notes, “The college experience is a once-in-a-lifetime opportunity for many to ask big questions without being overwhelmed by the distractions of material needs and practical applications.” A liberal arts education provides a foundation that, in its stability, allows students to navigate an ever faster and more baffling world. Knowledge of the classics, an appreciation of the arts and letters, and an understanding of how the natural and human sciences work are timeless assets, useful to students of any age. But the growing complexity of the tools that govern our lives requires us to be more intentional in asking the “big questions.”
