Decision-Making, Artificial Intelligence and the Sociology of Futures
Interview with Gabriel Abend
Gabriel Abend is professor of sociology at the University of Lucerne and fellow of the Wissenschaftskolleg in 2021/2022. Recent articles include “The Love of Neuroscience,” “Outline of a Sociology of Decisionism,” and “Thick Concepts and Sociological Research.”
The interview was conducted by Katrin Sold and Bénédicte Zimmermann. At Gabriel Abend's request, it took the form of a written exchange.
Interviewer: Your research deals with the concepts of choice and decision-making in contemporary societies as well as in social research. Social science scholars commonly make use of categorizations to address their objects of study. How do they decide how to categorize what they study? What kinds of decisional issues are involved in categorizing in the social sciences?
GA: Uruguay, Paraguay, Argentina, and Chile. If you’re a political scientist, you may group these four countries together, label the group “Southern Cone countries,” and go on to make comparative claims about their electoral systems. Another political scientist may decide instead to group together Uruguay, Argentina, and Uganda, because of certain similarities in their institutions. Or maybe Uruguay, Uganda, and Uzbekistan, because their English names start with the same letter. Wait, what? Are they allowed to do that? What’s going on here?
For the most part, social scientists’ approach to categorization seems to be based on two beliefs. First, social categories are “socially constructed.” Unlike the periodic table of elements, they don’t carve nature at its joints. They’re the product of our judgments, conventions, practices, institutions, or something along these lines. It’d be pointless to ask if Scientology really or objectively belongs in the same category as Christianity, or Umbanda, or neither, or both. It’s relative. It varies. It depends on what you’re trying to do. Second, social scientists believe that categories and categorization choices should be “useful” to them. As long as they’re useful, as long as they pay off, you’re good to go.
What reasons are there to worry about this approach, though? For one, it depicts individual social scientists (or labs or teams) as autonomous decision-makers. Since everyone is free to make their own categorization choices, each study can have its own object of inquiry, dependent variable, or explanandum. Suppose I set out to explain religious participation according to my view of what is and isn’t a religion. You set out to explain religious participation according to your view of what is and isn’t a religion. You claim to provide the best explanation of “religious participation.” I do so, too. Both your and my claims are well supported by the evidence. But they contradict each other.
From a communal perspective, this state of affairs is troublesome. Social science fields and literatures are collective projects. Our research project should be able to corroborate you guys’ account, or amend or specify it, or show that your explanatory variables don’t explain much. It should be possible for us to collaborate with you guys; to join forces and work together. While there’s no fact of the matter as to what “religion” does and doesn’t encompass, the community should have some standards, some criteria to avoid classificatory free-for-alls. Criteria won’t be set in stone, they may be fruitfully disagreed over, sets may be fuzzy, and categories may be radial. But some sort of criteria are needed.
Further, according to this approach, a social scientist can classify and reclassify their objects as they wish, to their advantage, in whichever way is useful to their project. Couldn’t they discard inconvenient data points and outliers, ensure coefficients are statistically significant, and ensure truth claims come out true? Wouldn’t this amount to koshering data dredging or p-hacking practices? “Usefulness” gives you too much latitude. There are countless ways of being “useful.” Good and bad, acceptable and unacceptable. The community will have to decide.
Interviewer: Decision-making and the future of work – Let's imagine a world of work with a growing importance of AIs like social robots or driverless cars. Should they be considered agents that (or who) make choices? And if they are not autonomous decision-makers, who is morally, legally, and financially responsible for the consequences of their “work”?
GA: I’m not an ethicist or legal philosopher, but a mere sociologist, so let me turn the tables and ask you, you guys, a representative sample of the population of this country. Should they? And if AIs aren’t agents and don’t make choices, who should be held responsible? (And what’s up with the scare quotes around the word “work”?)
Together with sociologists Patrick Schenk, Vanessa Müller, and Luca Keiser at Universität Luzern, I’m doing just that. We’re asking a representative sample of the German-, French-, and Italian-speaking population of Switzerland: do they believe AIs can make decisions? Under what conditions can AIs be described as decision-makers? Is an algorithm praiseworthy when things go well and blameworthy when things go awry?
Our survey respondents are presented with three activities that both human beings and AIs are capable of: hiring new employees at a firm, fact-checking articles at a newspaper, and diagnosing cancer at a hospital. We’ve designed our survey to vary key factors about these situations, which might influence people’s views and shed light on their understandings. In the cancer diagnosis case, the presumed agent (oncologist or AI) will get it right or wrong. The diagnosis will be correct or incorrect. In the recruitment case, the new employee might turn out to be a good or a bad hire. We’d like to find out whether this outcome—successful or unsuccessful—affects the attribution of decision-making capacity and responsibility, whether it’s the human or the AI version of the story.
Moreover, our team is interested in the mechanics of algorithmic opacity and algorithmic bias. Women and minorities are less likely to be offered high-status jobs than white men. Both by human recruiters and by AIs. How do people assess these outcomes? Are both humans and AIs viewed as responsible? Have both of them made a decision to hire a white man? We can also vary how the survey refers to the presumed agent. Oncologist Claudia or oncologist Claudio. The AI Claudia, or the AI Claudio, or the AI P4-LGV. We’d like to find out what effects anthropomorphism and gender may have.
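The factorial logic of a vignette design like the one described above can be sketched in a few lines of Python. This is a minimal illustration only: the factor names and levels are inferred from the examples mentioned in this conversation (activity, agent, outcome), not taken from the actual survey instrument.

```python
from itertools import product

# Hypothetical factor levels, inferred from the examples in the interview;
# the real survey instrument may define these differently.
activities = ["hiring", "fact-checking", "cancer diagnosis"]
agents = ["oncologist Claudia", "oncologist Claudio",
          "AI Claudia", "AI Claudio", "AI P4-LGV"]
outcomes = ["successful", "unsuccessful"]

# A full factorial crossing yields every combination of the three factors.
vignettes = list(product(activities, agents, outcomes))
print(len(vignettes))  # 3 activities x 5 agents x 2 outcomes = 30 conditions
```

Each respondent would then be randomly assigned one such condition, so that attributions of decision-making capacity and responsibility can be compared across factors such as anthropomorphism, gender, and outcome.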
Interviewer: Talking more broadly about the future as a topic of social research – How do you see the epistemological requirements of a sociology of the future? Under what conditions is it possible for the social sciences to make a subject matter out of what hasn’t yet happened?
GA:
- Many things may be meant by “a sociology of the future” and “the future as a topic of social research.” Let me try and draw a distinction. Nobody denies that sociologists can empirically investigate people’s, organizations’, and societies’ understandings of the future, the social and political uses of their understandings of the future, what they imagine and expect, their predictions and hopes, and the consequences of their expectations and predictions. This is sociology of knowledge and sociology of culture stuff.[1] But what can sociologists, and social scientists more generally, say about future events themselves?
- Social scientists are often asked to talk about the future. Much like meteorologists, seismologists, policy makers, actuaries, soccer commentators, futurologists, and the Oracle of Delphi are. They’re expected to provide social scientific insights into “what hasn’t yet happened.” Indeed, they might be expected to predict global economic crises, wars, the development of technology and scientific knowledge, the impact of new technologies and scientific knowledge on society, whether populism will continue to expand, and whether liberal democracies will survive. Despite these social and political demands on social scientists, they don’t have a reliable method to make successful predictions. Plus, they haven’t been successful at it. Their record isn’t great.
- Predictions, forecasts, prognoses. The future of societies and the world. Why are they sensitive and controversial issues for social science? One reason is that they involve social scientists’ scientific credentials, what their research is good for, and their responsibilities. Physicians’ and epidemiologists’ job descriptions include telling us what’s likely to happen. Seismologists might even be found “guilty of manslaughter for failing to predict [an] earthquake.”[2] Can economists and political scientists be blamed when they’re caught totally off guard by a major event, e.g., a financial crisis, a military invasion, or a revolution?
- Many social scientists are adamant that they, qua social scientists, can’t speak about the future. Not in a precise, informative, and falsifiable manner. Which differs from vague statements and hunches and well-informed guesses, but also differs from the general area of extrapolation, trend analysis, time series, and projection of historical data—practically and substantively reasonable though these might be (and technically sophisticated though their methods and models have become).
- Several arguments against prediction have been advanced in the history of social science and philosophy. Inductive inferences don’t seem reliable enough, as “the man who has fed the chicken every day throughout its life at last wrings its neck instead.”[3] Then there’s the moving target problem: people have the annoying tendency to change their behavior in response to predictions about it, whether it’s to bring them about (self-fulfilling) or to prevent their occurrence (self-defeating).
- Not to mention the fact that societies’ future is dependent on scientific discoveries, new technologies, and conceptual developments. Yet, scientific, technological, and conceptual prophecies seem in principle impossible. As Poincaré observed in 1904: “Do not… expect of me any prophecy; if I had known what one will discover to-morrow, I would long ago have published it to secure the priority.”[4] The same applies to new concepts, and the ways in which societies will understand and represent themselves. The problem here is that “conceptual innovation… alters human reality. The very terms in which the future will have to be characterized… are not available to us at present.”[5]
- Up to the present, social scientists haven’t been able to predict the future. At least, if the standard is precise, informative, and falsifiable statements based on scientific knowledge. However, I predict that in the future social scientists will counter all possible objections to prediction—including future objections, which hitherto have never been raised—and they’ll be able to predict the future with certainty.
[1] Cf. Beckert, Jens, and Lisa Suckert. 2021. “The Future as a Social Fact.” Poetics 84:101499.
[2] Brandmayr, Federico. 2021. “When Boundary Organisations Fail: Identifying Scientists and Civil Servants in L’Aquila Earthquake Trial.” Science as Culture 30:237-260.
[3] Russell, Bertrand. 1912. The Problems of Philosophy. Henry Holt and Company. Quotation is at page 98.
[4] Poincaré, Henri. 1904. “The Principles of Mathematical Physics.” The Monist XV:1-24. Quotation is at page 1. Cf. MacIntyre, Alasdair. 1972. “Predictability and Explanation in the Social Sciences.” Philosophic Exchange 1:5-13.
[5] Taylor, Charles. 1985. Philosophy and the Human Sciences. Cambridge University Press. Quotation is at page 56.