Mis/trust, language, and ‘Artificial Intelligence’ in global digital economies
Large Language Models (LLMs) such as OpenAI’s ChatGPT are key among the so-called ‘AI’ agents that increasingly mediate global digital economies. This AI-driven wave of digitalization has breathed fresh life into public anxieties about its implications for interpersonal and institutional forms of trust and mistrust. Yet neither techno-optimistic nor critical political-economic voices have convincingly addressed the problem of opacity, or inscrutability, inherent to LLMs and other machine-learning systems built on deep-learning architectures. This paper is an (early) attempt to do so by taking a leaf from the ways in which communities in rural Southwest Kenya deal with the difficulty of knowing minds, feelings, and motivations (others’ and one’s own) in everyday socio-economic life. In particular, I show how the routine negotiation of opacity through religious language that recognises the agency of supernatural non-humans parallels the interpolation and influence of LLMs in ordinary human interaction and communication. I then contrast the ethnographic theory of language on which this analogy rests with the way language and linguistic forms have featured in the history of computation, before ending with some (very) preliminary reflections on how critical linguistic anthropology might sharpen political-economic approaches to the digitalization of mis/trust in global economies.