Between Cyberutopia and Cyberphobia: The Humanities on the African Continent in the Era of Machine Learning
Organisers: Faeeza Ballim (History, UJ), Keith Breckenridge (WISER), Iginio Gagliardone (Media Studies, Wits), Richard Rottenburg (WISER and the LOST Network)
Programme of Events below
This workshop takes its focus from the upheaval in popular and scholarly understandings of the intellectual (and political) prospects of the networked planet. A decade ago advocates and precocious users were celebrating the levelling, democratic and emancipatory possibilities of the Internet, and of social media platforms in particular. Today an elaborate loathing of these technologies and their political effects -- succinctly captured by the UK Channel 4 series Black Mirror -- has become common and politically compelling. Popular and scholarly disillusionment with the promises of the network society now seems close to self-evident, and is the subject of daily reports in the major international newspapers. Much more difficult to assess is the critical and political power of the dystopian critique of cybernetics -- as Norbert Wiener named the emergent field in 1948 -- that issues from the humanist tradition. In this workshop we propose an assessment of these two movements, and of their mutual engagement, in the special circumstances of the African university.
The combination of ubiquitous social media, feedback-centred devices and the sorting and predictive techniques of machine learning, now released from the old constraints on data processing, seems to mark an existential shift in the human condition, and certainly a danger to the long-established habits of disciplinary enquiry in the humanities. Carefully engineered features of the global network affect young and old alike. An explosion of source materials -- to focus only on the most obvious problem -- has been combined with a regime of continuous distraction produced by technologically mediated simultaneous co-presence in multiple social sites. User-produced data (much of it in text form) has become the key profit-driver for the wealthiest firms in the world. In this new global economy, and in the simplest formulation, it is the absence of uninterrupted time that makes the cultivation of the post-Renaissance ethic of self-directed reading increasingly untenable. It is this concern that drives the deep and persuasive pessimism about the digital infrastructure, and about the production, consumption and meaning of digital content, that now dominates the humanities.
In this workshop we will focus on the intellectual and philosophical claims of artificial intelligence, and particularly of machine learning, both as a subject of humanist enquiry and as the key set of skills and technologies shaping the workings of the network society. Precisely because the field is concerned with the engineering of an infrastructure that not only shapes but even creates knowledge, and affects consciousness, the history of, and contemporary debates within, artificial intelligence and machine learning are philosophically sophisticated. They are richly informed by the old core problems of humanism, and many of the key technological claims of AI and ML are evaluated using the old philosophical arguments. These include the ancient juxtaposition of Plato and Aristotle, the adequacy of Hume’s defence of empiricism, and more specific references to literary insights, like Borges’ “Funes the Memorious”, whose protagonist, like that of the Black Mirror episode “The Entire History of You”, is condemned to remember everything. Indeed the most meaningful contemporary debates are contested (by practitioners) on the condescending terrain of history, as repetitions of the earlier cycles of overblown enthusiasm and disappointment that have dominated the field over the last century.
In his provocative essay “Petite Poucette”, the French philosopher Michel Serres captures the openness of the post-cybernetic future by presenting us with a love-letter to the networked generation of his grandchildren, whom he admires for their digital savviness. In doing so he accuses the institutions of higher education of being stuck in the past and hence unable to make the best of what he believes is the wholly new frame of mind of this generation. At the core of this new frame of mind he suspects a radical liberation engendered by digital machines that take over from our human brains the labour of storing knowledge, and finally enable us to use our minds for intuition, innovation and serendipity, the principle of unsought finding. At the same time, however, Serres bemoans the attention-devouring power of cyberspace, and particularly of social media, as a machine of seduction. It is here that the institutions of higher education are meant to intervene, to ensure that the digital machine does not seduce us into mindlessness but instead liberates us from the mindless storing of knowledge. Yet not only does Serres fail to show how this should be achieved, he also underestimates the powers of serendipitous discovery in data mining and machine learning. “Petite Poucette” was originally the title of a fairy tale spread all over Europe and known to English speakers as “Tom Thumb” (and to German speakers as “Däumling”). The moral of the tale is that it is precisely the most inconspicuous among the children – i.e. those of our children seemingly lost to the debilitating seduction of their devices – who might turn out to be the most precious blessing for the family. What Serres’ narrative does not adapt from the fairy tale, though, is the resilience with which Petite Poucette resists all the external forces that act upon her and her siblings. Translating this back to our concern requires us to ask where the critical analytical capabilities of the networked generation are meant to come from, if they are to play the part of Petite Poucette.
The most influential and persuasive scholarly accounts of the current state and future prospects of artificial intelligence are those that take this history of philosophical and institutional dispute as a guide to its future. Likewise it is the researchers working inside this field -- especially the youngest researchers -- who are most alert to the potential for the new technologies to renew and entrench the oldest structures of inequality and are preoccupied with developing automated remedies. It is important to acknowledge that this acute sensitivity to the philosophical and political critique of AI and ML gives the humanities unusual leverage in contemporary debates. This is mainly due to the openness of post-cybernetic futures beyond cyber-utopia and cyber-dystopia.
We know only that the potential post-cybernetic futures will be co-produced with digital infrastructures. The contemporary global discontent suggests that these will need to be freed from the financial markets, and from the individualised scoring systems that are themselves largely driven by the rewards of the market. The humanities are invited to experiment with imaginaries of a post-cybernetic future that can avoid being trapped by these path-dependencies.
These questions are, we believe, especially acute on the African continent. There is already a long list of well-articulated political and intellectual dangers for Africans that follow from the development of generalised tools of machine learning. The most obvious risk is that AI will exaggerate the already existing brutal deficits of infrastructure -- of high-speed network connections, reliable power supplies, data-processing centres and, especially, of human expertise. Many experts worry that the growing power of the centres of artificial intelligence in the United States and China -- and the global monopoly power of a small number of firms secured by AI -- will produce a new era of data-driven extraversion and dependency that will remove the decisive philosophical and political deliberations from the continent. Similarly undeniable are the dangers that derive from the dense, hidden and ingrained structures of racism. There are problems of bias that result from the absence of large, high-quality datasets -- of African names, facial images or words, for example -- that can be used to train learning algorithms. A less obvious, and thus more controversial, question for the humanities on the African continent and abroad is whether these new tools will support insidious and powerful infrastructures of social ordering. Companies that specialise in the technologies of surveillance and social scoring fostered by the Chinese state have already found easy accommodation in those African countries -- including Ethiopia, Tanzania, Uganda and Rwanda -- that share a common vision of bureaucratic control and surveillance and have weak privacy laws. As the wealthiest societies in the Americas, Europe and Asia -- under the growing popular influence of the dystopian critique -- have begun over the last two years to propose meaningful regulation of the digital economy, some of the most influential figures -- like Paul Kagame and Jack Ma at recent WEF meetings -- are arguing that the unrestricted, and welcoming, economies of this continent can serve, like China thirty years ago, as an unregulated laboratory of networked innovation. This is the second troubling concern that drives heated debate, and it poses an important and interesting research question.
After several decades of naive optimism the humanities have reached a moment of bad-tempered critical reflection, one that offers productive insights into the strengths of the disciplines, the priorities for the future, and the opportunities that may still be realised in the networked society. These questions have particular urgency on the African continent, where the weakness of the humanities, the limits of regulatory constraint, the offshoring of data-processing, and the enthusiasm of elites, donors and states for automated tools of production, surveillance and social ordering suggest that the networked dystopia so much feared in the rich countries may first take form here. Among the questions we hope to address are the following:
- What can we learn from the intellectual history and philosophical debates within the field of artificial intelligence about the prospects of these technologies and their relationships with the humanities? What exactly do we mean by the well-worn popular descriptions of Artificial Intelligence, Machine Learning and Cybernetics?
- Does the universal distribution of attention-mining social media represent an inescapable existential danger to the human condition and the (often unconscious) intellectual habits on which the humanities have been established?
- Does the development of ubiquitous and automated scoring -- of either the Chinese or American kind -- subvert the normative and political core of democracy and the ambitions of the core disciplines of the humanities -- of philosophy, political studies, religion, literature?
- Are the grand -- and obviously naive -- claims currently being made about the social and political implications of machine learning especially vulnerable to dystopian humanist critique?
- Will the universally distributed network, and the open-sourcing of the tools and platforms of machine learning, strengthen or weaken African universities and research, and the humanities disciplines within them?
- What opportunities and remedies are available for those who seek to disrupt the ordering and extractive logics at work on the network? Can we use the same -- or similar -- technologies to achieve contradictory ends?
Programme
Day 1 | March 7 | The Humanities in the Era of Learning Machines
9:30 Welcome and Introductions
10:00 Panel 1 | Humanism and the Learning Machines
Beth Coleman “Technology of the Surround” | David Goldberg “Machine Dread” | Penelope Papailias “Experimental Humanities and the Datalogical Turn”
11:30 Panel 2 | The Possibilities of AI and ML
Nelishia Pillay “The Impact of Artificial Intelligence and the Fourth Industrial Revolution on the Sustainable Development Goals for South Africa” | Hussein Suleman “State of the field” | Benji Rosman “Reinforcement learning: risks and rewards”
14:00 Panel 3 | Politics of African Big Data Projects
René Umlauf “‘Give Work, Not Aid’? – Impact sourcing and digital factories in Northern Uganda” | Fazil Moradi “Modernity’s Inspecting Eye: Making-seen and known the world” | Faeeza Ballim “Divinatory Computing: Machine Learning on the African Continent” | Bulelani Jili “Chinese social scoring and the exporting of authoritarianism?”
15:30 Panel 4 | Automating Authoritarianism?
Girmaw Abebe Tadesse, Iginio Gagliardone, Matti Pohjonen | chaired by Nicole Stremlau
17:00 Wrapup
Richard Rottenburg -- Concluding Observations
Day 2 | Evaluating Proposals for Funding
9:30 Introductions
10:00 Dan Cohen | Lessons from the building of the Digital Public Library of America
10:30 Tim Sherratt | Adventures with Historic Hansard
11:00 Richard Padley | Common problems and processes in Digitisation
12:00 Session 1A
Mendelsohn | Kaplan Centre for Jewish Studies and Research | Jews and Radical Politics
Netshivhambe | University of Pretoria | African Indigenous Musical Knowledge
Lucia | Stellenbosch University | Music Manuscripts
12:00 Session 1B
de Kamper | University of Pretoria | Art Archives
Weinberg | UJ | Afrapix
Skotnes | Centre for Curating the Archive, UCT | Bleek and Lloyd archives and Language Lives
13:00 LUNCH
14:00 Session 2A
Nkoala | Cape Peninsula University of Technology | Multi-lingual Journalism
Rushohora | Stellenbosch University | Memories of Maji-Maji
Krishnamurthy | Namibia University of Science and Technology | Memories of the Herero Genocide
14:00 Session 2B
Hurst | University of Cape Town | Open Textbooks
Kahn | Archive and Public Culture Research Initiative, UCT | Five Hundred Year Archive
Joseph | Wits University Press | Press Backlists
15:00 Session 3A
Schoots | UCT | isiXhosa Intellectual Traditions
Pillay | SAFLII, UCT | Constituent Assembly Records
Badassy | Wits & UJ | Hansard
Merlo | Wits, UB, KZNM | African Urbanism
15:00 Session 3B
Mwaura | Wits | NVivo training
Erlank | UJ | Archival methods course development
du Toit | UWC | Students’ Reading
Gagliardone | Wits | Big Data Method
Thurman | Wits | Digitising and translating Shakespeare
16:30 WRAPUP | Keith Breckenridge & Hlonipha Mokoena