Page 30 - Monocle Quarterly Journal Vol 3 Issue 2 Spring
MONOCLE QUARTERLY JOURNAL | DEEP LEARNING
There is, however, another kind of singularity that has become a favourite topic of debate – that of the technological singularity. This theory is based on the notion that, one day, an artificial super-intelligence will be created that is so far superior to its creators that it will begin a cycle of self-learning and self-improvement, spiralling beyond the control of human intervention. But the opinions on how close we are to this technological singularity, or if it is even possible, vary greatly.
The first recorded mention of a technological singularity, in the 1950s, came aptly, and perhaps somewhat tellingly, from the Hungarian-American mathematician, physicist, and computer scientist John von Neumann, who is widely recognised as a founding figure in the world of computing. Von Neumann was no stranger to potentially world-ending technological advancements. As one of the leading scientists in the Manhattan Project during World War II, he helped to produce the nuclear weapons that were dropped on Hiroshima and Nagasaki in August 1945. In the 1950s, a peer of von Neumann, Stanislaw Ulam, recalled a conversation with him that “centred on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
In recent years, perhaps the most prominent voice of the singularity has been that of the computer scientist and futurist Raymond Kurzweil. Among Kurzweil’s predictions were the disintegration of the Soviet Union, hastened by the advancement of technologies such as the cellphone, and the explosion in internet usage from the 1990s onwards. He also foresaw that chess software would beat the best human player by the year 2000 – a feat achieved in 1997, when IBM’s Deep Blue beat world champion Garry Kasparov in a globally broadcast match. As for the technological singularity itself, Kurzweil predicts that by the year 2045, “the pace of change will be so astonishingly quick that we won’t be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating.”
Kurzweil’s prediction, as described in his book The Singularity is Near (2005), relies heavily on a theory called “the law of accelerating returns.” He argues that the singularity is closer than many think, because humans tend to reason in terms of linear progression. Yet, as
he describes in his book, technology, like many of our most important advancements, is progressing at an exponential rate – a reality observed by Gordon Moore, co-founder of Intel, in 1965. Moore observed that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented, and he predicted that this would continue to be the case for the foreseeable future. In recent years, the pace of technological development has slowed down, but only slightly, with the capacity
of computer chips roughly doubling every two years, according to what has become known as “Moore’s Law”. At certain points the rate of this growth can seem linear, Kurzweil explains, because the first half of an exponential curve is much flatter than what comes after its “elbow”. Beyond that elbow, advancements that previously took decades to show major progress can suddenly double and then quadruple in effectiveness, usability and adoption. One such advancement that many have deemed slow and laborious in its development and practical applications over the last few decades is artificial intelligence. Just as Kurzweil explains, when we stand at this point in time and look back at the rate of progress in the field of AI since the middle of the 20th century, it can certainly seem linear in nature, if not pedestrian.
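The arithmetic behind this linear-versus-exponential distinction is easy to sketch. The short Python snippet below uses purely illustrative numbers (a baseline capacity of 1 and a two-year doubling period, in the spirit of Moore’s Law, not figures drawn from the article) to show how biennial doubling outruns a straight-line projection of the same initial growth rate:

```python
def doubled_capacity(start, years, doubling_period=2):
    """Capacity after `years` of growth that doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Over 20 years, ten doublings multiply capacity by 2**10 = 1024.
exponential = doubled_capacity(1, 20)  # 1024.0

# A linear projection of the first two-year step's growth rate
# (from 1 to 2, i.e. 0.5 per year) reaches only 11 in the same time.
linear = 1 + 20 * (doubled_capacity(1, 2) - 1) / 2  # 11.0
```

Early on the two trajectories look similar – after two years both stand at 2 – which is Kurzweil’s point about the flat half of the curve before the elbow.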
However, what has predominantly held AI back is not a lack of ideas or useful implementations, but a shortage of both the computing power and the data necessary for deep learning. In recent years, both have grown substantially, giving the major players who have amassed these vast stores of data a seemingly endless number of opportunities to push artificial intelligence into every industry imaginable, as well as into almost every sphere of our daily lives. In many ways, if we are indeed currently situated at the “elbow of the curve”, the conditions do seem perfect for AI to accelerate exponentially in the coming years – perhaps even in

