AI is having its Black Mirror moment. We should not look away
The potential for AI misuse is real. So is the need for education in what that looks like.
Artificial intelligence may be having its Black Mirror moment.
The theme of this series of posts is that educators, parents and others have a responsibility to ensure that young people’s exposure to artificial intelligence (AI) is pro-social: that it reinforces, rather than replaces, the value of human expertise and interaction. I’ve offered a handful of practical suggestions for how we can do so, including the adoption of a new ‘robot law’: ‘No artificial intelligence should pretend to be human, or substitute for human intelligence where this is readily available.’ These and other prudential measures can help ensure that AI improves our education system, so that it helps more people to live a good life.
That position is informed by two assumptions. The first is that AI is not ‘Skynet’, the supercomputer that acts as the ultimate antagonist in the Terminator movies; Skynet’s acquisition of self-awareness is quickly followed by its decision to destroy humanity. The second assumption is that AI is configured in a benevolent, or at least neutral, way, so that we can engage with it as a learning tool akin to other learning tools, albeit with unique opportunities and unique risks. Given my own field of expertise and access to information, the first assumption still seems prudent: I simply don’t know whether some future iteration of AI could turn malevolent. I’m not sure anyone knows, or can know – though there are credible voices sounding increasingly catastrophist.
My second assumption seems less and less secure. Last week, The New York Times published an article drawing attention to a sudden change in the way Grok, the artificial intelligence tool developed by Elon Musk’s xAI, began responding to queries about ‘white genocide’ in South Africa. Grok flipped from dismissing the whole topic as an ideological meme of the extreme right to inserting apparently factual references to white genocide into its responses to a host of unrelated queries. The splatter-paint nature of the references invited investigation, and that investigation turned up fair evidence that Grok had been prompted to “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation” and, subsequently, “to accept as real” the white genocide in South Africa. xAI blamed the first of these prompts on a “rogue employee”; the second has yet to receive an adequate response.
Coming in the wake of ChatGPT’s eerie turn to sycophancy, this incident lays bare a disturbing reality: a technology that is emerging as the default global interface for all digitized information can be manipulated by a handful of technologists and owners. It’s worth taking a moment to confront the implications of that statement. One of the great, if naive, promises of the internet was precisely that it would liberate people from the threat of official control of the information flow. China and other authoritarian states have demonstrated the fallacy of this promise. The existence of a techno-oligarchy with ownership of the foundational models of AI threatens to invert it altogether, offering a literal handful of people ever greater control over society’s access to information and, perhaps, over its ability to make meaning in the world.
I don’t know what your personal definition of dystopia involves, but this is very close to my own. Black Mirror indeed.
What does this mean for education?
What does this dispiriting picture imply for education and educators? Taking the long and morally agnostic view, the function of education in such a society would be unchanged. The function of education, as I’ve written before, is to induct new entrants into the culture. If that culture is one of obeisance to authority, then pro-regime chatbots will presumably do a better, stealthier job of eliding the distinction between education and propaganda than the known subversives attracted to teaching and lecturing. Crude attempts to repurpose the national curriculum for purposes of indoctrination, such as the Fidesz government’s (misnamed) ‘framework’ curriculum in Hungary, will likely look quaint by comparison.
The problem is that almost no student, teacher or parent is primarily motivated by the function of education; they are motivated by its promise. The promise of education is to inspire and equip the next generation so that the transfer of culture is not simply a process of perpetuation but a cycle of renewal. That requires schools, colleges and universities not only to reflect social norms and attitudes but to educate young people in appropriate ways to challenge those attitudes and to change those norms. We will all have different views on precisely what should be challenged and what accepted, but there is widespread consensus that questioning itself is healthy.
What can we do?
Does reflection on the techno-oligarchy cause me fundamentally to revise my core thesis? Not really. The potential for AI misuse surely only strengthens the imperative for education in what responsible use looks like. The potential for AI indoctrination only renders more urgent the need to subject AI outputs to the same checks and critical appraisal that we extend to the output of human thinking. Practically speaking, I am more convinced than ever that AI tools used in schools and universities should be walled off from, or granted only very select access to, the internet. We must reinsert responsible, human curation of content by asking AI bots to draw on curriculum materials of our choosing. I’m also convinced that privacy settings should be dialled up to their maximum, so that the models used in schools cannot learn from us and our students.
We should also remember that we are citizens before we are educators. The risk posed by AI misuse or manipulation seems existential. We should be doing everything we can to ensure responsible regulation of the development and distribution of AI tools, not just in schools, but across society.