Pasquinelli’s Eye of the Master provides a materialist analysis of AI and technology, which Kevin Crane finds to be an excellent antidote to all the nonsense and hype spouted about AI
Any of you who tried to follow the twists and turns of Artificial Intelligence (AI) hype in 2023 will probably agree that it has been an exhausting exercise. The year began with the high-profile public introductions of large language models like ChatGPT and other generative systems, which got millions of people excited about what seemed like a magical breakthrough in technology, only for the story to pass through a series of massive controversies, including strikes, legal challenges, and major scientific and philosophical debates.
We also had absurdities, such as the ludicrous palace drama around ChatGPT figurehead Sam Altman being fired and unfired as CEO. And, of course, we had depressing displays of staggering ignorance, such as discredited failing alt-right tech entrepreneur Elon Musk talking complete nonsense while being interviewed by discredited failing alt-right prime minister Rishi Sunak. The fact that stories about AI are so often dominated by ridiculous wealthy men who often have no real qualification to discuss the subject isn’t just annoying, it also makes it materially harder for people who want to get to grips with the real implications of these technologies to do so.
All of which leads to the excellent timing of this book. Eye of the Master: A Social History of Artificial Intelligence by Matteo Pasquinelli is a solid antidote to both the boosterish hype and the fanciful doom-mongering that tend to colour mainstream coverage of AI.
Technology in historical materialism
Pasquinelli lays out the adversary he is seeking to defeat in the introduction:
‘Writing a history of AI in the current predicament means reckoning with a vast ideological construct: among the ranks of Silicon Valley companies and also hi-tech universities, propaganda about the almighty power of AI is the norm and sometimes even repeats the folklore of machines achieving “superhuman intelligence” and “self-awareness”. This folklore is well exemplified by Apocalyptic Terminator narratives, in which AI systems would achieve technological singularity and pose an “existential risk” to the survival of the human species…’ (pp.8-9).
He proceeds from here to point out that at many stages of technological development, new technologies have been fetishised into god-like entities beyond reasonable human control. Promoting these ideas is an excellent way for the people utilising the technologies to disavow personal responsibility for what the technologies do, and to make it seem hopeless for human beings to oppose or redirect their outcomes. This explains the apparent contradiction that a lot of the wildest, and most doom-laden, predictions about AI usurping humanity actually come from tech-bros who are invested in the sector: the last thing they want is an informed discussion about how AI could be democratically accountable and socially useful.
Pasquinelli reaches back into ancient times to show how technologies develop, starting with the very origins of mathematics in the Bronze Age, when increasingly large and complicated agricultural communities found that they could use an abstraction of the simple business of counting to support better organised and more successful ways of working.
A recurring theme throughout the book is that a technological change in production only takes hold once the means of production have created a purpose for it. It is widely understood that the steam engine was, in principle, known about in Classical Greece, but there was simply no industrial application for it until the development of capitalism in the English industrial revolution created such applications. The jobs it came to support, pumping water and turning gears, had already been created by new divisions of labour that capitalism had established first.
Pasquinelli shows that where that was true for mechanical energy, it was also true for mechanical computing. Charles Babbage, often seen alongside Ada Lovelace as the co-inventor of what we now call ‘computing’ (neither of them used that term at the time), originally developed his ‘difference engine’ because of the new need in colonial times for extensive charts of navigational data. A need to make calculations faster and more accurately had been created, so an opportunity to make a technology for this purpose now existed, and Babbage began to design the machine that could potentially do this. Lovelace innovated by recognising that the functionality of the machine could be further abstracted, so that it would not simply perform whatever calculation it was told to do, but could work out what further calculations it should do, which is how she comes to be called the ‘mother of software’ today.
In a really fun little aside, Pasquinelli seizes on a particular aspect of Lovelace’s notes in which she raises – but dismisses – the idea that this would eventually make such an engine self-aware like a human, saying that these machines would only ever be ‘extensions of human power, or additions to human knowledge’: it’s pretty amusing to realise that she had tried to head off idiots like Elon Musk nearly two hundred years ago!
Both Lovelace and Babbage served their class, of course: they were part of the bourgeoisie and looking to use computation to further divide labour for thoroughly bourgeois purposes. However, this did mean that the fundamentally collaborative and collective origin and purpose of the technology was of distinct importance to critics of capitalism. Karl Marx himself was quick to see the value in Babbage’s work for developing those critiques.
Developments and debates
As the book progresses through the development of computing as we know it, an increasing number of fields of study are fascinatingly drawn together in sometimes surprising interactions. Early electrical technology gives rise to telegraph communication, which begins the study of electronics. It also starts people thinking about the human brain as a network of signals like a telegraph exchange, displacing efforts to consider it as a set of turning gears like Babbage’s difference engine. Ideas about how the human mind might work, and how a calculating machine might emulate it, are a two-way exchange of concepts, and an increasingly multi-dimensional view of how human intelligence might work begins a key debate in computing.
The first form of reasoning in computers was deductive: ‘if X is true, then Y, else Z’. Programming like this is straightforward to do, but it runs up against limits of capacity as soon as you ask the computer to do something which involves making a judgement where the truth of X contains ambiguity, such as working out whether two crossed lines are or aren’t a letter X. The proposed solution is inductive logic, in which the machine learns what the letter X is in a process more similar to how a human would: by gradually evaluating more or less successful attempts at recognising one and building up the ability to do so.
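To make that contrast concrete, here is a minimal sketch in Python (my own illustration, not code from the book): a hand-written deductive rule beside a crude inductive learner that infers a decision threshold from labelled examples. The ‘crossing angle’ feature, the example data and all the names are invented for the purposes of illustration; real systems use neural networks rather than this brute-force search.

```python
# Toy contrast between the two styles of reasoning described above.
# Everything here is illustrative: the features and data are made up.

# Deductive: the programmer writes the rule in advance.
def is_letter_x_deductive(stroke_count: int, strokes_cross: bool) -> bool:
    # 'if X is true, then Y, else Z': a fixed, hand-written judgement
    return stroke_count == 2 and strokes_cross

# Inductive: the rule is inferred from labelled examples instead.
def learn_threshold(examples: list[tuple[int, bool]]) -> int:
    """Pick the crossing-angle threshold that best separates the examples.

    Each example is (crossing_angle_degrees, is_an_x). The brute-force
    search here stands in for the gradual trial-and-error learning of a
    neural network.
    """
    best_threshold, best_correct = 0, -1
    for threshold in range(0, 91):
        correct = sum((angle >= threshold) == label for angle, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

training_data = [(85, True), (70, True), (60, True), (20, False), (5, False)]
threshold = learn_threshold(training_data)

print(is_letter_x_deductive(2, True))    # rule applied directly -> True
print(75 >= threshold, 10 >= threshold)  # learned judgement -> True False
```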
Politics and economics are never far from the surface of any of these processes. The founder of neoliberal thought, Friedrich von Hayek, was a major champion of what we now call neural networks. He likened a learning system to a marketplace, his sole ideal of a self-organised system. Pasquinelli points out, via Marx’s writings, that this is a deliberate ahistoricism that ignores how productive collectives were actually more central to the development of computing. He also takes us through the fascinating historical paradox that in the 1960s, the distributed processing technologies that form the internet were being driven on the one hand by a military-industrial complex that wanted a means to retain its control and authority in the event of nuclear war, but on the other by an academy that believed it was developing the same systems in order to free people from authoritarianism. Today’s libertarian bores are, in a lot of cases, just rehashing old arguments badly.
Technopolitics
Reaching the present day, Pasquinelli ends the book by calling on the reader to understand AI technologies specifically through a labour theory of automation: AI is not some end point or apocalypse, but the result of a set of technological advancements that have abstracted automation to the point where it can now automate itself. This has happened both because we have the technical ability to make such machines and because there is an economic incentive to do so, and to reorganise the divisions of labour once again.
AI is not pursuing some alien or eldritch agenda separate from the capitalist system that has produced it, and it has physical limitations of capacity and energy consumption, like every previous technology. He insists that the solutions to problems in the AI era are not to be found in determinism, but that ‘The first step in technopolitics is not technological but political’ (p.253), and advises people to take an interest in groups and individuals who are engaged in what he calls ‘action research’. This is the work, and activism, of uncovering and beginning to challenge the excessive power of the small set of bloated monopolistic tech corporations that have risen to the top of society in the present era. It sounds to me like a thoroughly good way to learn more about how the labour movement should be organising in the time of AI.