AI Rising

By Michael Castelluccio
March 1, 2017

The early history of AI (artificial intelligence) includes dramatic swings between periods of high enthusiasm and deep disillusionment. The periods when funding and interest collapsed were called AI Winters, and they were prolonged, each lasting six years (1974-80 and 1987-93). At those points, many thought the research had reached an inevitable dead end and would remain frozen there. Both times, the critics were wrong.


In 1955, John McCarthy, a mathematics professor at Dartmouth, settled on the name artificial intelligence to describe the proposed research into human-like reasoning done by machines. The 1956 conference on artificial intelligence at his university etched a significant marker on the timeline of AI’s history, but the enterprise was already well under way, even before his christening of the effort.


Early research in machine reasoning had diverged down two paths. There were those who concentrated on writing rule-based AI algorithms for their computers, the more labor-intensive of the approaches, and those who tried to model the human brain by creating neural networks that might mimic the way we process information. As early as the 1980s, the neural network researchers were making good but very limited progress with the learning algorithms they were feeding their neural nets. Three problems held them back. First, their computers had limited processing power. Second, the exemplar they were trying to model, the human brain, has billions of neurons arranged in deep layers, while their neural networks were shallow shadows of these vast structures. And third, they lacked the large data sets, or libraries, needed to provide the content for the instruction.


Now, 30 years later, the researchers have supercomputers and massive data sets with which to build immense neural networks. And the machines capable of deep learning are now incorporating another talent—natural language processing—in what is starting to look like the dawning of pervasive AI. In the words of John C. Mather, senior astrophysicist at NASA, “So far, we’ve found no law of nature forbidding true general artificial intelligence, so I think that it will happen—and fairly soon, given the trillions of dollars worldwide being invested in electronic hardware and the trillions of dollars of potential business available for the winners.”




In the last week of 2016, Gideon Lewis-Kraus published “The Great AI Awakening,” a long piece in The New York Times Magazine, explaining how Google had converted its Google Translate app to an AI system. This effort, he explained, was just part of “an industry-wide machine-learning delirium.” Lewis-Kraus pointed to efforts over the past four years by six companies—Google, Facebook, Apple, Amazon, Microsoft, and the Chinese firm Baidu—that “have touched off an arms race for AI talent, particularly within universities.” Citing starting salaries reaching seven figures, the companies have “thinned out top academic departments.” Baidu has a 1,300-person AI team, which is led by Andrew Ng, who previously was at the Google Brain division. For its part, Google has now declared itself an “AI first company.”


Lewis-Kraus explains, “What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.”




At a Deep Learning Summit in 2015, Google senior research scientist Greg Corrado described the three kinds of machine learning now possible in AI systems.


Supervised learning involves providing labeled examples from which the system learns. This would work, for instance, with an email spam filter.
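To make the idea concrete, here is a minimal sketch of supervised learning in Python. It assumes a toy word-count model (the messages, labels, and scoring rule are illustrative only, not any real spam filter's method): "training" counts the words seen in labeled spam and non-spam messages, and classification picks the label whose training words best match a new message.

```python
from collections import Counter

def train(labeled_messages):
    """Build per-label word counts from (text, label) training pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled_messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a message by how often each label has seen its words."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# The labels are the "supervision": the system is told which examples are spam.
training_data = [
    ("win a free prize now", "spam"),
    ("free money claim your prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly budget report attached", "ham"),
]
model = train(training_data)
print(classify(model, "claim your free prize"))   # spam
print(classify(model, "monday budget meeting"))   # ham
```

A production spam filter would use far richer features and probabilistic scoring, but the shape is the same: labeled examples in, a decision rule out.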


Unsupervised learning lets the computer examine data sets on its own to discover patterns. This could be used in a program designed to perform data clustering.
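Data clustering can be sketched in a few lines. The example below assumes a toy one-dimensional k-means: no labels are supplied, and the algorithm alone discovers two groups in the numbers by repeatedly assigning each point to its nearest center and moving each center to the mean of its points.

```python
def kmeans_1d(points, k=2, iters=20):
    # Crude initialization (works for this k=2 sketch): first and last points.
    centers = [points[0], points[-1]]
    for _ in range(iters):
        # Assign each point to the nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious groups hide in this data; no one tells the algorithm that.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers, clusters = kmeans_1d(data)
print(sorted(round(c, 1) for c in centers))   # [1.0, 10.0]
```

The same assign-and-update loop, generalized to many dimensions and many clusters, underlies the pattern discovery Corrado describes.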


Reinforcement learning is what the programmers used when they recently taught their computers to win at championship Go and poker. After the rules of the games were input, the machines were set to play games against themselves. After billions of high-speed games, the win/lose feedback on their decisions gave the two different AI systems sufficient skill to defeat notable human professionals.
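Self-play with win/lose feedback can be shown on a much smaller game than Go. The sketch below is a toy, not the method used by any of the systems above: two copies of the same learner play "take 1-3 stones from 21; whoever takes the last stone wins," and a simple Monte Carlo-style table update pushes the final win/lose signal back onto every move that was played.

```python
import random

Q = {}  # Q[(stones_left, move)] -> learned value of making that move

def best_move(stones, explore=0.0):
    """Pick the highest-valued legal move, sometimes exploring randomly."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def play_and_learn(episodes=5000, alpha=0.5):
    for _ in range(episodes):
        stones, history, player = 21, [], 0
        while stones > 0:
            m = best_move(stones, explore=0.3)
            history.append((player, stones, m))
            stones -= m
            player = 1 - player
        winner = 1 - player  # the player who just took the last stone
        # The only feedback is the final outcome, credited to every move made.
        for p, s, m in history:
            reward = 1.0 if p == winner else -1.0
            q = Q.get((s, m), 0.0)
            Q[(s, m)] = q + alpha * (reward - q)

random.seed(0)
play_and_learn()
# With one stone left, taking it always wins, so its learned value nears 1.0.
print(round(Q[(1, 1)], 2))
```

Nobody tells the learner the strategy; billions of self-play games with this kind of feedback, at vastly larger scale and with neural networks in place of the table, are what produced the Go and poker champions.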


So will this finally be the Year of AI? Some would say we’re a year late. Writing in the Harvard Business Review, Shivon Zilis and James Cham point to $5 billion in venture investment and the big AI acquisitions when they conclude, “If this year’s landscape shows anything, it’s that the impact of machine intelligence is already here. Almost every industry is already being affected, from agriculture to transportation. Companies have at their disposal, for the first time, the full set of building blocks to begin embedding machine intelligence in their businesses.” The date of Zilis and Cham’s article? November 2, 2016.


Michael Castelluccio has been the Technology Editor for Strategic Finance for 21 years. His SF TECHNOTES blog is in its 19th year. You can contact Mike at mcastelluccio@imanet.org.
