In his critique of AI, Pasquinelli writes that “The ‘intelligence’ of neural networks is, therefore, just a statistical inference of the correlations of a training dataset… neural networks cannot escape the boundary of the categories that are implicitly embedded in the training dataset”. Pasquinelli’s commentary belongs to a growing but historically well-established literature linking the epistemological limits of AI to questions of political economy. If machines can only reproduce well-defined tasks, however ingeniously, human activity must remain an irreducible residual in any economic calculus. At the same time, and against big tech and accelerationist enthusiasm for AI, machinic labour serves only to reproduce existing relations of production. Pasquinelli concludes that “statistical inference is the distorted, new eye of the capital’s Master”. Whether the current connectionist paradigm of AI is in fact constrained in the ways Pasquinelli and other critics diagnose remains questionable. Might there instead be scope, within the massively parallel operations conducted deep inside today’s neural networks, to produce novelty? If so, what implications would such novelty have for incentive-based economies, including academia, premised substantially upon innovation? What happens when, as legal scholars have begun to theorise, algorithmic outputs become patentable? This paper examines heralded instances of machine learning in gameplay, code generation and knowledge production that stretch the limits of what is thought computable. Such examples suggest that, at sufficient scale, what begins as “statistical inference” may become indistinguishable from other, more privileged forms of human cognition. Specifically, the ability to generate scientific hypotheses, poetic metaphor or disruptive market innovation (examples of what Peirce referred to as “abduction”) would no longer be exceptionally human.
The paper suggests that, under current tendencies, this realisation will only further distance the owners of technologically-invested capital from disenfranchised subjects, left with barely even their labour to sell. At the same time, full cognitive automation also deflates the moral arguments for merit-based differential distributions of reward that underpin free-market ideology. If algorithms come to dominate the commanding heights of cognitive capitalism, other systems for distributing resources may appear more compelling, morally and politically. Such prospects reflect back upon ethical questions raised in connection with AI. Echoing Marx, heterodox economists today (e.g. Duménil and Lévy) argue that any talk of the “good life” must remain perniciously ideological in the presence of widespread domination. The paper concludes with desiderata for a virtuous AI sociality, including a reversal of rising inequality, that would form the ground for any ethical encounter between individual human and machine.
Liam Magee is a sociologist of science and technology, specialising in the application of software to, and its impact on, urban ways of life. His current research centres on machine learning, digital games and data analytics, and how these technologies interact and interfere with social systems such as cities, organisations, labour, environmental movements and financial markets.