Divinatory Computation: Artificial Intelligence and Africa
Faeeza Ballim & Keith Breckenridge, 24 October 2018
According to Google’s global search trends, South Africans are strangely obsessed with the fourth industrial revolution (4IR). (South Koreans and Malaysians have some curiosity about it, but much less than we do, and there is close to zero interest in the idea in Brazil, China, Germany, Japan and the US.) The term was coined by Klaus Schwab, the founder of the World Economic Forum, at Davos in January 2016, and he later explained what he meant in a short book of the same name. His claim was overshadowed by the events of that year, which included the European migration crisis, Brexit, the ongoing brutal civil war in Syria, nuclear posturing on the Korean peninsula and the Trump election, all of which seemed very much to be extensions of the long-term conflicts of the 20th century. These political continuities are interesting because the core of the claim that we are experiencing the fourth industrial revolution is that it marks a break with the epoch that immediately precedes it. Schwab calls this period, the years from the 1960s to 2007, the third (digital) industrial revolution. The new era is “characterized by a much more ubiquitous and mobile internet, by smaller and more powerful sensors that have become cheaper, and by artificial intelligence and machine learning” (Schwab 2017, 12). The idea that the global economy has moved into a fundamentally new technological epoch, captured in the title of another best-seller by Erik Brynjolfsson and Andrew McAfee as The Second Machine Age, and driven by ubiquitous internet and artificial intelligence, has entered into common sense. Google shows that Chinese interest in Machine Learning matches our distinctive interest in 4IR and, in truth, that is a much better term for the current focus of our technological moment.
What does this new machine age promise and what does it portend for the African continent? It is probably good to begin with the opportunities. There are now many programmes offering young, mathematically-inclined Africans training and exposure in the core skills of machine learning and artificial intelligence. The most influential is a network of training projects that was started by Neil Turok, who, at the time, was a professor of physics at Cambridge. The African Institute for Mathematical Sciences (AIMS) draws on the enthusiasm of global experts to support the activities of six schools -- in Cameroon, Ghana, Rwanda, Senegal, Tanzania and South Africa -- that offer free training in the mathematics and computer science of data science and, increasingly, machine learning. Half a dozen similar and loosely related projects, offering the same kinds of exposure and connections, are now operating on the continent, including the Deep Learning Indaba, which is supported by young South African researchers at Google’s famous DeepMind AI lab in London. The Indaba held large training workshops in Stellenbosch and Johannesburg over the last year and plans a third next year in Nairobi. Perhaps the most startlingly elitist of these projects, which aims to leverage the enormous and growing demography of talent on the continent, is the Next Einstein Forum, which draws on the AIMS network to anoint a handful of young researchers from across the continent each year. This creates, in science and mathematics research, an economy of talent similar to what has long been true in football and music.
It is important to notice that these open and free networks and training workshops are matched by a broad suite of resources that are freely accessible to anyone interested in pursuing ML as a field of expertise. Like the widely used Khan Academy courses, there is a host of free on-line learning resources for Machine Learning, including the step-by-step upper-level undergraduate computer science courses offered at Stanford and MIT. In stark contrast to the other main areas of science and engineering research (which have long been bitterly under-resourced and unavailable on most of this continent) the tools of ML are all freely available on-line. This includes the main programming tools -- the Python language and libraries like NumPy -- and platforms for developing applications, like Kaggle and OpenML. The most powerful computational platforms, like Google’s TensorFlow Research Cloud and Amazon’s ML cloud -- the same platforms used by researchers at MIT and Cambridge -- are available, for free, to African researchers with access to the Internet. This is something like a revolution -- the material opposite of what has been true about world-leading scientific research for at least a century.
But there is also, already, a long list of well-articulated dangers. The most obvious are problems of bias that derive from an excess of information – from the dense, hidden and ingrained structures of racism that infect the autonomous development of algorithms, especially in the United States. There are also problems of bias that are the result of the absence of high-quality training datasets -- for example of African names or facial images or words. A third risk is that AI will exacerbate the already existing brutal deficits of infrastructure – of high-speed network connections, reliable power supplies, data processing centres and, especially, of human expertise. For centuries the products that African people, firms and societies have produced have been monopolised and discounted by metropolitan corporations, with the energetic assistance of local elites. Will the growing power of the centres of artificial intelligence in the United States and China – and the global monopoly power of a small number of firms secured by AI – produce a new era of data-driven extraversion and dependency? And finally there is the problem of work itself. Economists worry that Africa’s historically unprecedented labour market growth -- which adds 12 million new young workers every year -- can only be met by labour-intensive industrial investments, and that those are specifically endangered by the new forms of automation.
It is also worrying that Chinese companies have found easy accommodation in the African countries -- including Ethiopia, Tanzania, Uganda and Rwanda -- that share a common vision of bureaucratic control and surveillance and weak privacy laws. In April 2018, the Zimbabwean government announced that it had selected a Chinese firm, CloudWalk, to provide an AI-based facial recognition system using the national identity database. Only later did it emerge that the state, which is notoriously short of foreign currency and in the middle of a contested military seizure of power, gave the Chinese company millions of records from the national population register -- including photographs, real names and identity numbers -- in exchange for the new surveillance tools. The companies involved benefit from a wide expansion of the scope of their training data through the inclusion of millions of African faces and names, which could potentially correct the racist bias that has long plagued facial recognition systems. These are all obvious dangers, but the most serious problem with AI is hidden, and intrinsic to the technology itself.
Scientists have dreamed of endowing machines with human-like intelligence since at least the 1950s. But the intractable difficulty of mimicking one core component of this intelligence -- the capacity to learn -- meant that it remained a pipedream for much of the twentieth century. The way forward was muddied by deep disagreement among designers about the nature of knowledge and the way that humans learn. The dominant model of the current age, Deep Learning, is but one subset of the vast field of artificial intelligence. Its rise is due to the troves of data generated each second on ubiquitous internet platforms, as well as the arrival of fast parallel processing on the graphics processing units (GPUs) originally built for gamers. At the heart of Deep Learning lies the neural network, which aims to simulate the neural functioning of the human brain. While biological in name, the neural network is in reality a system of programmed matrix multiplications governed by long-standing and commonly used statistical techniques. It is rooted in principles of behavioural psychology, introduced by a New York-based psychologist, Frank Rosenblatt, with his perceptron in the 1950s. Rosenblatt taught the network to respond correctly to certain stimuli by a system of rewards and punishments. While rudimentary by contemporary standards, the same principle persists in the modern neural network: weights are set to some random initial distribution and then adjusted during training by the optimisation procedure known as gradient descent.
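To make the point concrete, the sketch below (in Python with NumPy, both mentioned above) shows what such a network amounts to: a pair of matrix multiplications whose weights begin as random numbers and are then nudged, step by step, towards answers that better match the training examples. The layer sizes, learning rate and toy data are illustrative assumptions, not a description of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four toy training examples with three input features each, and their targets.
x = rng.normal(size=(4, 3))
y = np.array([0.0, 1.0, 1.0, 0.0])

# Random initial weights: input -> hidden and hidden -> output.
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))

learning_rate = 0.01
for step in range(200):
    # The "neural" part is just matrix multiplication with a simple non-linearity.
    hidden = np.maximum(0, x @ W1)        # ReLU activation
    prediction = (hidden @ W2).ravel()
    error = prediction - y                # how far the answers are from the targets
    # "Reward and punish": shift the output-layer weights a little in the
    # direction that reduces the squared error (a single gradient step on W2,
    # kept deliberately simple; a full network would update W1 as well).
    W2 -= learning_rate * hidden.T @ error[:, None] / len(y)
```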
Gradient descent -- a powerful and ubiquitous technique used to reduce errors in wildly disparate data -- is the heart of the current scientific controversy in Deep Learning. On a plot of the loss or error function for a given training set, the gradient descent procedure begins at a random initial starting point and then descends in an iterative manner towards the lowest point of the function. Identifying this minimum point is crucial because it signals the optimal fit for the data and sets the parameters of the predictive model, which can then be used to make predictions on data outside of the training set. In practice the minimum is difficult to reach, so much so that some of the leading ML practitioners at the Conference on Neural Information Processing Systems in 2017 argued that the models more closely resemble alchemy than an exact science. Over thousands of iterations, it is not always clear that the minimum has been reached and, if so, whether or not it is the absolute minimum of the function or simply one of many troughs. This uncertainty means that practitioners have, in effect, surrendered control over the operation (and mathematical specification) of gradient descent.
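A toy example illustrates the difficulty. In the sketch below (the bumpy one-dimensional loss function and the learning rate are invented for illustration), gradient descent walks downhill from several random starting points; depending on where it starts, it settles into different troughs, and none of them is guaranteed to be the function’s absolute minimum.

```python
import numpy as np

def loss(w):
    """A deliberately bumpy, toy loss function with several troughs."""
    return np.sin(3 * w) + 0.1 * w ** 2

def gradient(w):
    """Its derivative, used to decide which way is 'downhill'."""
    return 3 * np.cos(3 * w) + 0.2 * w

def descend(w, learning_rate=0.05, steps=500):
    """Walk downhill from starting point w, one small step at a time."""
    for _ in range(steps):
        w = w - learning_rate * gradient(w)
    return w

rng = np.random.default_rng(0)
for start in rng.uniform(-4, 4, size=5):       # five random starting points
    w_final = descend(start)
    print(f"start {start:+.2f} -> settles at {w_final:+.2f}, loss {loss(w_final):.3f}")
# Different starting points settle in different troughs; the minimum the
# procedure finds is not necessarily the absolute minimum of the function.
```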
Uncertainty about the way that the technology works does not bode well for the African continent, where governance has long been characterised by the fundamental unknowability of the citizenry. Both colonial and post-colonial governments governed from behind a cloak of ignorance. In the beginning this was due to the constrained administrative budgets of colonial governments, a condition the historian Sara Berry has called “hegemony on a shoestring”. Deep Learning has the strong potential to extend this long tradition of unaccountable and arbitrary decision-making, relying on only sketchy details about the people who are affected.
Understanding how ML works, why it is such a powerful and popular technique of automation, and, importantly, what its limits are as a system of intelligence, is a good start in limiting its dangers as a decision-making technology. Critically unpacking what is at stake in the fourth industrial revolution is a good place to begin that work.