As the value of words shifts from conveyor of meaning to conveyor of capital, has Google become an all-powerful usurer of language, and if so, how long before the linguistic bubble bursts?
I’m giving a talk at Trinity College Dublin next week as part of the CONNECT centre and Engineering Fictions. I’ll be using a lot of the material from the talk I gave at NUIG a couple of weeks ago, but I also want to try out some of the new ideas I’ve been developing around the idea of subprime language and linguistic liquidity. Below is an extended abstract/intro for the new stuff. It is a work in progress – any thoughts are welcome…. I hope also to develop these ideas at the AAG in Boston and at the RGS-IBG in London later this year.
As tech companies such as Google increasingly mediate and monetise the informational landscape through search and advertising platforms such as AdWords and AdSense, the ongoing effects on and of the language they auction, sell and exploit are becoming more and more palpable. In the viral spread of fake news and political click-bait, and in the daily battles for exposure, it seems that words are being lent against a narrative so tenuous as to make their linguistic function negligible. Infused with a neoliberal logic which favours advertising dollars over truth, and with the systemic bias of algorithmic processing, the discursive side-effects of this semantic shift reveal a deep-rooted weakness in the linguistic marketplace, one which reaches far beyond the linguistic sphere and into the political, with powerful and potentially devastating consequences. Were it not for an overriding metanarrative of neoliberal logic, this evolution in the ontology of digital language might seem like an obvious manifestation of the postmodern condition. But as the value of words shifts from conveyor of meaning to conveyor of capital, should we be thinking of Google as the all-powerful usurer of language, and if so, how long before the linguistic bubble bursts?
In this paper I set out some recent thoughts about the idea of subprime language – asking questions such as how much and how often language can be bought, sold or ‘borrowed’ before it becomes exhausted of meaning and restrictive of expression and understanding. How resilient is language to a quasi-capitalist operating system, and what happens if/when linguistic capitalism crashes? And finally, knowing the historical and cultural power that a control of language can have, the fragility and unpredictability of the economic system which now seems to underpin it, and with a growing awareness of the power wielded by technology companies such as Google, should we not be more aware of the potential dangers in these techno-linguistic shifts?
In recent weeks the fake news debate has evoked numerous references to Newspeak, the language of thought control and state propaganda employed to further the ideology and control of English Socialism (INGSOC) in George Orwell’s 1984. It is an interesting analogy, but rather than a straightforward comparison to the misinformation and alternative facts seemingly employed during the Trump campaign, I think there are deeper problems within today’s informational infrastructure that a more thorough reading of Orwell’s text draws out. Firstly, there is the assumption in Newspeak that “thought is dependent on words”, a somewhat problematic yet entirely relevant causal linkage when it comes to debates about search results, auto-predictions, filter bubbles and algorithmically generated social media newsfeeds, which can be instrumental in the cultivation of extreme views, hate crime and even terrorist attacks.
The second issue concerns the limitations and restrictions of language that are so important to the idea of Newspeak, a language which “differed from all other languages in that its vocabulary grew smaller instead of larger every year”. We can see echoes of this in the shrinking creative vocabulary of digital language, which favours words that might be cheaper, easier to find, or more alluring either to algorithms or to human readers.
The third point I want to explore builds on the culmination of the first two – i.e. that words have a real effect on how we think, yet the way information flows through digital spaces encourages the shrinking of our online vocabulary and discourages non-normative language – and complicates this already worrying formula with an overriding motive not of state political control (as in Orwell’s dystopia), but of private capital gain (as with advertisers and tech/media companies). In the digital networks of information and communication we have created, the potential for political control often comes as a side effect of the economic incentive, or as a manipulation of a system which allows language, and therefore thought, to be so dependent on and subject to a neoliberal logic which is itself so precariously mediated by algorithmic systems and networks.
Pip is a PhD candidate in Geopolitics and Cybersecurity. She is one of the first cohort of students in the Centre for Doctoral Training in Cybersecurity at RHUL, where she is researching Language in the Age of Algorithmic Reproduction. She blogs at linguisticgeographies.com, where this post first appeared.