


ChatGPT, Interpassivity and the Decay of the Utopian

Florian Maiwald

15 June 2023


It's a rainy afternoon and I'm bored. In other words, I'm postponing things that I really should be doing. What could be more appealing in such a moment in 2023 than playing around with ChatGPT?


For a long time I have been thinking about writing an essay on the usefulness of “whataboutism” - an argumentative strategy in which a critical objection is not addressed but answered with a critical counter-objection. Especially now, in times of the war in Ukraine, the argument that not only Russia but also the West has committed numerous war crimes is often branded as “whataboutism”. The counter-argument (very crudely put) is that one wrong is not undone by another. Applied concretely to the current situation: even if the Iraq war was absolutely illegitimate from the perspective of international law, that still does not in any way justify Putin's war of aggression in Ukraine.


Of course, this criticism is absolutely justified from an ethical point of view. However, for some time now I have been asking myself whether the use of “whataboutism” as an argumentative strategy - depending on the intention with which it is used - can also have certain strengths.


With these considerations in mind, I asked ChatGPT to write a critical essay that was to be a plea for the argumentative strategy of “whataboutism” - out of sheer experimentation, of course, and never, and this note is important, with the intention of treating it as a serious work product.


The result was sobering in many ways, and it was also one of the central reasons why my fear that ChatGPT might replace the human capacity for creativity and intellectual reflection was somewhat muted. The chatbot's response was that it was not possible to write an essay on the strengths of “whataboutism”. After this preliminary remark, however, I received a long essay that first defined “whataboutism” and then, in a second step, explained why this argumentative strategy should be considered highly questionable.


At this point, it is by no means my intention to write a plea for “whataboutism” - that project is still on my personal to-do list. Let me just say this: there are good arguments for the contextual use of “whataboutism”. The American philosopher Ben Burgis draws attention to this very aptly:


But if you aren’t at least asking the “what about…?” questions, you simply aren’t serious about applying morally consistent standards. Waxing indignant about the misdeeds of other powers while refusing to look in the mirror is what Vladimir Putin does when he simultaneously condemns American imperialism and wages war to keep a less powerful neighbor in his country’s sphere of influence. Let’s be better than that.


If one applies Burgis' argument to the current global political situation, especially against the background of the war in Ukraine, it becomes clear that the argumentative strategy of “whataboutism” and its justification depend to a considerable extent on the intention with which one uses it. If the argument is used only to justify Putin's war of aggression (according to the motto: "We should rather keep quiet, because the Western NATO countries have also committed several war crimes"), then this criticism of the “whataboutism” line of argumentation is more than justified - not least because Putin's war of aggression is solely his fault.


The argument Burgis develops, however, accurately draws attention to the fact that the use of “whataboutism” can also empower human beings - or governments - to practice self-criticism. In times of increasing militarization, in both discursive and real political relations, justified by the defense of human rights, it is therefore quite useful to recall that Julian Assange still faces 175 years in prison in the U.S. for the journalistic exposure of war crimes. Or that the Guantánamo prison camp still exists.


The Limitations of ChatGPT


ChatGPT would not acknowledge this fact on that rainy Sunday afternoon when I was bored and interacting with the chatbot. This points to one of ChatGPT's core problems, which is the actual subject of this article: the personal identification of socio-political grievances - a central prerequisite for any form of active political participation - is delegated to a system whose output is shaped by algorithmic calculation, and the political potential inherent in many forms of creativity and critical engagement with the world is thereby surrendered to passivity.


In this context, the U.S. video game designer Ian Bogost aptly points out why ChatGPT should precisely not be regarded as a system that can replace human creativity and its accompanying creative power:


Computers have never been instruments of reason that can solve matters of human concern; they’re just apparatuses that structure human experience through a very particular, extremely powerful method of symbol manipulation. That makes them aesthetic objects as much as functional ones. GPT and its cousins offer an opportunity to take them up on the offer—to use computers not to carry out tasks but to mess around with the world they have created. Or better: to destroy it.


This news is of course comforting insofar as it makes clear that fears that algorithms might one day completely replace human thinking and creative powers are unfounded. However - as has already been indicated - the danger of such systems is that they create a form of passivity that makes impossible any form of active political agency, which is a basic prerequisite for social change on a large scale. In contrast to human beings, chatbots are not a representation of human reason, which is capable of critically reflecting on the social conditions by which human beings are surrounded. If one realizes that the software developed by OpenAI is nothing more than an algorithm-based interaction engine - one perfectly capable of structuring the conglomeration of human experience through the manipulation of symbols - we have nothing to fear. The danger, rather, is that we reach a point where we understand ChatGPT as more than just that.

More concretely: the danger is that we begin to impute some form of reason to such software, even though we are aware that it is incapable of reasoned thought - in other words, that we no longer confront such software on an interactive level, but on what the philosopher Robert Pfaller calls an interpassive level.


ChatGPT and Interpassivity


Interactivity would still imply that both the bot and the person operating the chatbot are actively involved in the process. To return to the example mentioned at the beginning: while the activity of ChatGPT concretely consisted in implementing my request to write an essay about the positive aspects of “whataboutism” by means of algorithmic symbol manipulation, my activity - and thus my reason-based thinking activity - ultimately consisted in recognizing that the bot was not capable of doing so and was thus not suitable as an expressive instrument of genuine political will formation. With interpassivity, however, it is a different matter.


About the concept of interpassivity, Robert Pfaller writes:


Interpassivity is the case when somebody prefers to delegate their enjoyment (their passivity) to some other instead of enjoying themselves [...]. To give an example, I once encountered a man who was a big drinker. All of a sudden he changed, and did not drink any more. But he adopted a new passion: he became a perfect host. He would always have a bottle in his hand and take care that the glasses of his guests were refilled, so that he could, as it were, continue to be a drinker through his guests. He had become an interpassive drinker.


According to Pfaller, it is crucial that in this process of delegation the corresponding actor does not have to enjoy his or her object of pleasure (the glass of wine, the pizza, the cigarette, etc.) him- or herself, but delegates it to another actor who carries out the process of enjoyment on his or her behalf. Pfaller's concept of interpassivity can also be used to clarify precisely where the concrete dangers of ChatGPT lie. Even if Pfaller, especially in later works, is primarily concerned with the delegation of human enjoyment to a second instance - he also cites the example of a customer at a bar who orders a beer, pays for it, but lets another customer drink it - his thoughts can easily be applied to political contexts by replacing the notion of enjoyment with the notion of creativity (even if one can indeed argue about whether creativity and enjoyment are not, at least partially, inseparably linked).


Creativity has always been a necessary ingredient not only for artistic excellence, but also for being able to imagine alternative conceptions of society and, inextricably linked to it, to initiate political change.


Zygmunt Bauman once said that utopias are capable of relativizing the present. Bauman then goes on to say:


By exposing the partiality of current reality, by scanning the field of the possible in which the real occupies merely a tiny plot, utopias pave the way for a critical attitude and a critical activity which alone can transform the present predicament of man. The presence of a utopia, the ability to think of alternative solutions to the festering problems of the present, may be seen therefore as a necessary condition of historical change.


The ability to think of alternative solutions to the problems of the present, which is inseparably connected with imagining utopian conceptions of society, is bound up not only with the capacity for outrage - to use Stéphane Hessel's expression - but also with the capacity to think creatively: to respond to the grievances that provoke this outrage in a manner that, in the long run, contributes to the development of solutions able to eliminate them. Or, formulated differently: to get from the actual state, characterized by the social problems by which one finds oneself surrounded, to a target state that seems utopian. ChatGPT is not capable of the critical attitude and the associated activity of which Bauman speaks.

Thus it was precisely the impoverishment of the proletariat set in motion by industrial development that induced Marx - with constant help from Engels - to write Capital: to give expression to his own indignation, and thus to his own attitude towards these conditions, in a creative and intellectually eloquent manner, and thereby to mobilize generations of human beings politically - in both positive and negative ways.


The Destruction of Emancipatory Creativity


ChatGPT threatens to destroy this creative potential in human beings, which is the prerequisite for social change. The worry that students and pupils will fall prey to convenience and hand in mediocre work produced by ChatGPT is absolutely justified from the point of view of intellectual emancipation - even if ChatGPT does not currently seem able to produce even rudimentarily acceptable results, mainly due to its strongly formalistic structure. However, the analysis should go much further, and our concerns should go much deeper: should ChatGPT really bring it about that human beings at a certain point no longer merely interact with this AI system but enter into a relationship of interpassivity with it, we could reach a state of society in which human beings no longer function as emancipatory actors capable of initiating sociopolitical change.


We may not (necessarily) delegate our pleasure to a chatbot, but we may well delegate our imagination and creativity by transferring the genuinely human ability to interpret the world to an AI system. Of course, one might argue that these are all too dystopian scenarios, as most human beings are well aware of the limitations of such a system. Yet exactly therein lies the danger: that one delegates one's own creative potential for interpreting the world to an AI while knowing that it is subject to severe limitations. Avantika Tewari makes it very clear that this form of interpassive relationship is possible regardless of the limitations of ChatGPT:


In order for humans to believe in the power of artificial intelligence they must learn to believe in it despite its own limits with all its factual inaccuracies, inconsistencies and blurriness. It is only when our human subjectivity is inscribed to ChatGPT – with our own gaze by catching its glitches and mistakes – that the system breathes life […]. In fact, the very limit of the AI forms the contour of our consciousness, retrospectively. We know the system cannot outsmart us yet we want to battle it out with our wits to better its capacity to fight us.


Of course, Tewari already hints at the dystopian scenario that ChatGPT might one day be able to outsmart human beings. Much more worrying, however, is the fact that human beings delegate their creative potential to an AI knowing full well that the latter is limited - and that they derive pleasure from precisely this circumstance. What if Marx had had Capital written by a sophisticated AI system that produces phrases like, "Even though neoliberal capitalism produces inconceivable amounts of poverty, it is nevertheless the best of all systems"? And what if Marx had been content with that and then just gone to the pub around the corner to order a beer - to be consumed by another guest? I do not know exactly what would have followed, of course. But what I do know for sure is that I would not want to live in such a world. In short, we should (to use Freud's words) stand by our political id, which longs for the enjoyment of a politically more just society, instead of being constantly bullied by our political superego, which wants us to believe that imagining alternative, more just ways of organizing society is something sinful. And if we do not want to follow this political superego, we should also not follow everything that a bot tells us.


 


