The Slovenian philosopher Slavoj Žižek is reportedly sanguine about the advent of deep learning AIs and their potential threat to creative work. In response to the suggestion that Artificial Intelligence (AI) “will be the death of learning & so on”, he said “NO! My student brings me their essay, which has been written by AI, & I plug it into my grading AI, & we are free! While the ‘learning’ happens, our superego satisfied, we are free now to learn whatever we want.”
This sounds great. It certainly speaks to a feeling I have had, after almost thirty years in the academy, that there are better places to do philosophy than the lecture theatre. Philosophy at the beach seems closer to the Platonic Academy than anything offered by the contemporary university, with its metrics and its pressure to publish or perish. This is a future in which we philosophers can say and write whatever we want.
But a world in which deep learning AIs do more of what we have traditionally done is also a world in which humanities academics may not get paid. How do we avoid the fates of the many people whose pleasure at editing Wikipedia means they do it for free? Žižek’s reputation and fame guarantee him a softer landing from the collapse of the business model of academic humanities than an analytic philosopher who has spent decades publishing barely cited journal articles on the semantics of abstract singular terms.
What creative workers should learn from chess computers
One of the big surprises from progress in AI has been its potential to automate creative work. Creative workers have smugly accepted that the machines are coming for accountants with their statements of income and financial disclosures. But deep learners are now composing music and producing art. They are doing journalism and writing academic papers.
The AIs are coming for us, not by launching salvos of nukes, but by making the ways of life that give meaning to so many of us financially unviable.
ChatGPT, OpenAI’s prototype AI chatbot, was released on 30 November 2022, and it is shifting our understanding of the challenge posed by intelligent machines. In a 1950 paper, Alan Turing proposed what became known as the Turing Test for machine intelligence: can a machine converse in ways that a human judge cannot distinguish from conversation with a human?
The responses of ChatGPT seem human. It’s a measure of progress in AI that the main criticisms are not that its replies seem machine-like, but rather that they are superficial. A successful journalist may not consider a superficial chatbot a threat to their job. But superficiality is a feature of the small talk that comprises much human conversation. There’s nothing especially deep in my most recent observation to a friend that today’s weather is hot but tomorrow’s could be hotter.
Journalists have been cheered by the superficiality of ChatGPT’s writing. Writing in the Guardian, Samantha Lock observes that ChatGPT “lacks the nuance, critical-thinking skills or ethical decision-making ability that are essential for successful journalism”. Amit Katwala in Wired opines that “its writing is superficially impressive and lacking in substance”. An Economist journalist looks at the program’s hammy attempts to write in the style of Shakespeare and reports that “it will be a while before your correspondent has to look for a new field of work”.
A feature of these reassurances is that they are right about the present, but wrong about the future. Creative workers should look to the example of chess to prepare for what is coming. In the early 1990s, the world champion Garry Kasparov accepted that machines would eventually beat the best human players. But the poor play of the best machines of that time convinced him that he had plenty of time, right up until IBM’s Deep Blue blindsided him in 1997.
Rather than reassuring ourselves about the superficiality of ChatGPT’s journalism and academic writing, it seems to me that we should learn from Kasparov’s comeuppance.
Anticipating imminent progress in deep learning
Deep learning AIs are adding parameters, in essence the adjustable values through which they learn, at a very rapid pace. The 110 million parameters of BERT (Bidirectional Encoder Representations from Transformers), released by Google in 2018, were big news at the time. Then came the jump to GPT-3’s 175 billion parameters. We now await the 500 trillion parameters expected of Graphcore’s Good Computer.
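To get a sense of that pace, here is a quick back-of-the-envelope sketch, a minimal illustration using only the figures quoted above (the Good Computer number is an announced target, not a released model):

```python
# Rough comparison of the parameter counts mentioned above.
# The Good Computer figure is a projected target, not a measured system.
param_counts = {
    "BERT (2018)": 110e6,                 # 110 million parameters
    "GPT-3 (2020)": 175e9,                # 175 billion parameters
    "Good Computer (projected)": 500e12,  # 500 trillion parameters
}

names = list(param_counts)
for earlier, later in zip(names, names[1:]):
    factor = param_counts[later] / param_counts[earlier]
    print(f"{earlier} -> {later}: roughly {factor:,.0f}x the parameters")
```

Each step is a jump of roughly three orders of magnitude, which is one reason reassurances calibrated to today’s models age so quickly.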
What judgment, I wonder, will Samantha Lock make about the “nuance, critical-thinking skills or ethical decision-making” of the Good Computer’s journalism? In a world where machines cheaply produce journalistic copy and philosophical writings that read as if they are critically and ethically informed, how will creative workers get paid?
Deep learning AIs could bring a world of bullshit jobs
Creative workers should prepare for this future not by relying on reassurances that the future is likely to falsify, but by doing what we are best at: imagining the many forms the future could take as vividly as possible. Here’s one scenario that frightens me.
The anthropologist David Graeber wrote about “bullshit jobs”, which he defined as “a form of employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence.” Graeber had in mind the many middle managers, box tickers, and intermediaries that our economy creates.
Graeber’s idea took off when he first floated it in 2013 because so many people could look at their jobs and understand them as bullshit. These jobs seem stubbornly resistant to elimination. Stressed public institutions, aware that they need to change, respond by creating well-remunerated bullshit roles. Universities create layers of management to coax academics to change in response to senior management’s perceptions of the changing tastes of students.
Suppose that creative work is so enjoyable that we end up following the lead of Wikipedians and doing it for free. Media conglomerates and universities will presumably need many middle managers, box tickers, and intermediaries to organise the contributions of unpaid creative workers. This really could be a world of bullshit.
- This article was originally published on the ABC on 13 January 2023, and is reproduced here with permission.
- Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand, and the author of How to be Human in the Digital Economy. His book “Dialogues on Human Enhancement” is forthcoming with Routledge.