Clever Computing

The possibility of intelligence in the inorganic world has fascinated humanity since at least ancient Greece, when mythical beings of bronze and ivory protected islands and seduced men. Now, a few thousand years later, we are rapidly closing the gap between those myths and reality. The dream of a computerized mind accompanied the advent of computer science in the early-to-mid 20th century, and in 1956 the formal study of the subject was born here in Hanover at the Dartmouth Summer Research Project on Artificial Intelligence.

Google’s victory over the world champion Go player in March marks the latest in a series of publicity stunts that have tracked the rapid advancement of artificial intelligence (AI). Previous such displays include IBM’s triumphs in chess with Deep Blue and in Jeopardy! with Watson.

These events may seem to be all fun and games on the surface, but a dramatic shift in the AI industry is taking place behind the scenes. According to Quid, a data firm, spending on AI deals by tech giants like Facebook, Google, Microsoft and Baidu quadrupled between 2010 and 2015. This spending spike is symptomatic of a high-stakes race among tech companies to become the preeminent supplier and developer of AI technology, which promises to be the platform of the future much as the PC operating system and Google’s search engine were before it.

To build an artificial brain, computer scientists need real ones, which has made human capital the most precious, and therefore most aggressively pursued, resource in this clash of the tech titans. The explosion in demand for AI talent has created fears of a brain drain at America’s most elite universities. In the “AI winter” of the 1980s and 1990s, when the research being done was far less marketable, the best and brightest in the field found academia the most welcoming and lucrative option. Now, however, professors are finding it hard to hold on to graduate students who are being lured away, even before they graduate, by million-dollar salaries, the promise of making a tangible impact and freedom from the uncertainty of academic funding. As Andrew Moore, the dean of Carnegie Mellon University’s computer science department, noted in a recent Economist article, this phenomenon raises concerns about a “seed corn” problem: the exodus could drain top universities of the very resources needed to produce the next generation of talent. Yet the same salaries that are so effectively luring talent out of academia ought also to lure new talent into the field, since it is in the tech companies’ best interest to maintain a sufficient pool from which to draw.

If the development of superior AI is a race, what does it mean to win? Given the nature of AI, this is a loaded question. Because an artificial intelligence system would learn and improve upon itself, the better system would do so more quickly than its competitors, a snowball effect that could rapidly produce a drastic imbalance in the industry. This, combined with the existential and moral concerns attached to the field, has led some big names in the industry, like Elon Musk, to take precautions against any single entity gaining too much power. Musk joined other donors in pledging $1 billion to fund OpenAI, a non-profit research organization whose goal is “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

So who among the tech titans poses the greatest threat of becoming the AI hegemon? Given its track record, it is likely no surprise that the perennial powerhouse Google is developing a convincing lead in the field. Not only is it well equipped with the capital and infrastructure to pursue such an economy of scope, but it has also demonstrated impressive foresight and instinct in its approach. As one of the first tech conglomerates to dip its toes into AI, Google recruited Stanford researcher Andrew Ng in 2011, providing him the funding and freedom to pursue advancements in the field of “deep learning” and thereby kick-starting the project known as “Google Brain.” Deep learning is a method of machine learning based on the construction of artificial “neural networks” that loosely mimic the functioning of the human brain.
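For readers curious what a neural network amounts to in practice, the idea can be sketched in a few lines of code. The example below is a hypothetical, minimal illustration (not Google’s actual software): a tiny network of weighted connections that learns the XOR function, a task no single linear rule can capture, by repeatedly nudging its weights to reduce its error on four training examples.

```python
import numpy as np

# Minimal sketch of a neural network: two layers of weighted connections
# trained by gradient descent to learn XOR from four examples.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random starting weights; one hidden layer of four "neurons."
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (cross-entropy loss): nudge each weight downhill.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds)
```

Deep learning scales this same recipe up, with many more layers, millions of weights and oceans of data, but the forward-pass/backward-pass training loop is the same.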

This method is superior to others in that it is a general-purpose technology: a single algorithm for processing and learning from data can be applied to myriad specific tasks. Of course, Google wouldn’t dare put all its eggs in one basket, and in early 2014 it supplemented its AI efforts with the purchase of DeepMind. Reportedly shelling out some $600 million for the firm, Google incubated the efforts of Demis Hassabis, DeepMind’s founder, in his quest to “solve” artificial intelligence using a method known as reinforcement learning. AlphaGo, the software that recently beat the world champion at Go, is based on this technique. Not only is Google leading the way in developing the underlying software for AI, but it is also making considerable inroads in applying it. According to the California DMV, as of March 1, 2016 Google held 73 test permits for self-driving cars, compared with eight for Tesla, the second most in the state.
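Reinforcement learning can likewise be sketched in miniature. The toy below is a hypothetical illustration, not DeepMind’s system (AlphaGo combines reinforcement learning with deep networks and tree search): an agent on a five-cell corridor that learns, from reward alone, which way to walk.

```python
import numpy as np

# Toy sketch of reinforcement learning: tabular Q-learning on a
# five-cell corridor, with a reward of 1 for reaching the rightmost cell.
rng = np.random.default_rng(1)
n_states = 5
moves = (-1, +1)                 # action 0 = step left, action 1 = step right
Q = np.zeros((n_states, 2))      # learned value of each (state, action) pair

for _ in range(500):             # 500 episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Mostly exploit the best known action, occasionally explore.
        a = int(rng.integers(2)) if rng.random() < 0.2 else int(Q[s].argmax())
        s2 = min(max(s + moves[a], 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: blend the observed reward with the best
        # value currently estimated from the next state.
        Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)        # greedy action learned for each state
print(policy[:4])                # "right" in every non-terminal state
```

The agent is never told the rules; it discovers the winning policy purely by acting, observing rewards and updating its value estimates, which is the same principle AlphaGo applies to the vastly larger game of Go.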

While some, like Elon Musk and Stephen Hawking, have voiced concerns about the existential threat of rapidly advancing AI, there are more immediate concerns about its impact on our lives. The heart and soul of AI in its current applications is data. The machine-learning software behind Google’s targeted advertising, for example, is built on information collected about users’ day-to-day online behavior. As the applications of AI expand into areas such as self-driving cars, the amount of data collected about consumers and their behavior will skyrocket: machine-learning algorithms will process oceans of data on daily travel habits, including where everyday people eat, shop and work. Ceding control of the car could also let marketing tools like corporate sponsorship quietly shape a rider’s routes without the consumer’s awareness, and the car will likely even listen to passengers’ conversations. As The Atlantic’s Adrienne LaFrance put it, “In this near-future filled with self-driving cars, the price of convenience is surveillance.”

AI has a profound potential to benefit humanity, but we must also respect its potential for harm. Clearly there must be regulation of the means, methods and motivations behind the technology’s advancement, but we must be careful not to stifle its progress. The world has reached an inflection point in artificial intelligence, and its future will be a balancing act conducted at the pace of Moore’s law.