SF Technotes

Google’s AI Hardware Launch

By Michael Castelluccio
October 12, 2017


At Google’s Pixel launch on Wednesday, October 4, 2017, the company rolled out a wheelbarrow full of new devices—phones, home assistants, Chrome computers, cameras, VR rigs, and earbuds that can translate for you. All provided tangible proof that the company had completed the pivot from a mobile-first company to an artificial-intelligence-first one. Poised like an animated human footnote beneath the new company raison d’être, AI + software + hardware, CEO Sundar Pichai delivered his opening remarks.










The two most anticipated products at the launch were the Pixel 2 phones and a premium Chrome notebook, but the voice of Google Assistant became more prominent as each product was introduced.


Home Max and Home Mini


There were two new Google Home speakers: the Home Max and a smaller Home Mini that, at $49, is cheap enough to add Google Assistant and music to other rooms of the house. The Mini is similar to the Amazon Echo Dot. The larger Max can pump out sound 20 times more powerful than the standard Google Home speaker, with two 4.5″ woofers, two 0.7″ tweeters, and Smart Sound, a machine-learning feature that adjusts the output depending on where you place the speaker. Assistant on both devices has the acknowledged best search of any home digital assistant.




Pixelbook


The Pixelbook is a Chrome OS 2-in-1 laptop intended to compete in the established premium laptop market. It has 128GB of storage, runs the reasonably priced Play catalog of Google apps, and offers a Pixelbook Pen accessory and a 10-hour battery. And it’s the first laptop with Google Assistant built in. You can call for its help by voice or with a dedicated hardware key.


Pixel 2


The new Pixel 2 and Pixel 2 XL phones have some incidental hardware improvements, but early reviewers like TechCrunch took special notice of the phones’ new Assistant features. Reviewer Brian Heater wrote, “Google felt like it didn’t have much to add to an already great smartphone from a hardware perspective, but this moment in time represents a lot of ML (machine learning), AI and contextual data meshing together in such a way that could help the smartphone take its next evolutionary step into an even more connected device.” The camera, also quite smart, has been designed for the augmented reality (AR) and virtual reality (VR) apps of the future.


Pixel Buds


The Pixel 2 phones don’t have headphone jacks, but Google has new wireless earbuds that also give you a direct connection to Google Assistant via a button built into the earbud. Google Translate is built in as well, so you have translation in real time. These are pretty smart earbuds.


Daydream View VR


The new Daydream View VR headsets offer access to more than 250 VR titles from Google, and new content is coming from another Google property, YouTube, in the form of a VR video series. On the AR side, the new cameras on the Pixel 2s can create AR imagery and project it into real spaces, as with the demo AR stickers Google previewed at the launch.


Google Clips


Google Clips is a GoPro-like utility camera that you clip to your clothing or set at a fixed location. Using artificial intelligence (AI), it decides when to capture candid stills, videos, or GIFs, which are then sent to your phone over a Wi-Fi connection.


As mentioned, the persistent theme running through almost all the products at the launch is AI, specifically Google Assistant. Google has decided to take its research lead in AI and implant it wherever it can serve as a basic UI (user interface).




Human speech was one of the more difficult frontiers for machine intelligence. The early experiments go back to the 1980s, when Terry Sejnowski and Charles Rosenberg assembled a 300-neuron neural network and set out to teach it to read text aloud from a page. Their project was called NETtalk, and the pair started by teaching it single words, one letter at a time. They progressed from a children’s book with a 100-word vocabulary to a 20,000-word Webster’s dictionary.


At first, NETtalk could distinguish only between vowels and consonants, but eventually it developed a working vocabulary of 1,000 words. Sejnowski recalls, “We were absolutely amazed. Not least because computers at the time had less computing power than your watch does today.”
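NETtalk’s basic design is well documented: the network reads a seven-letter window of text and learns to predict the phoneme for the letter at the window’s center, sliding the window one letter at a time. Here’s a toy sketch of that sliding-window encoding and a forward pass; the alphabet, layer sizes, and random (untrained) weights are illustrative assumptions, not Sejnowski and Rosenberg’s actual code:

```python
import numpy as np

# NETtalk-style setup (illustrative): the network sees a 7-letter window
# and must predict the phoneme for the letter at the center.
ALPHABET = "abcdefghijklmnopqrstuvwxyz _"   # '_' pads word boundaries
WINDOW = 7

def windows(word):
    """Yield 7-letter windows centered on each letter of the word."""
    padded = "_" * (WINDOW // 2) + word + "_" * (WINDOW // 2)
    for i in range(len(word)):
        yield padded[i:i + WINDOW]

def one_hot(window):
    """Encode a window as a flat binary vector, one unit per letter slot."""
    vec = np.zeros(WINDOW * len(ALPHABET))
    for slot, ch in enumerate(window):
        vec[slot * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

# A tiny two-layer network with random, untrained weights.
rng = np.random.default_rng(0)
n_in, n_hidden, n_phonemes = WINDOW * len(ALPHABET), 80, 26
W1 = rng.normal(0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0, 0.1, (n_phonemes, n_hidden))

def predict(window):
    """Forward pass: window -> hidden layer -> phoneme scores."""
    h = np.tanh(W1 @ one_hot(window))
    return W2 @ h  # index of the max score is the predicted phoneme class

wins = list(windows("cat"))
# One window per letter of "cat", each 7 characters wide:
# ['___cat_', '__cat__', '_cat___']
scores = predict(wins[0])
```

Training such a network by backpropagation, phoneme by phoneme, is what gradually turned NETtalk’s babble into recognizable speech.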


The next step was a much further stretch—getting the machine to understand something more than just the sound of the phonemes that it was mimicking. That work took several more decades.


Today there are no fewer than five AI digital assistants built on the advances made over the years in machine speech recognition—Siri (Apple), Google Assistant (Google), Cortana (Microsoft), Bixby (Samsung), and, of course, Alexa (Amazon).


The first patent for Amazon’s Alexa was filed in the summer of 2012, and today the company owns about 70% of the market through sales of its Echo and Dot assistants. The Echo first appeared in 2014; it can read aloud and correctly respond to your questions or requests to dim the lights, turn down the heat, and so on, and it continues to learn the more you use it.


Today, the different assistants display different strengths, with Alexa out in front on shopping commands and Google acknowledged as having the sharpest “mind” for wide-ranging search. But for all of them, we’re still in a clipped-speech world of short phrases and brief requests. One interesting experiment aims at conversations that last many minutes: last year, Amazon put out a call to engineering students at a dozen universities around the world to build a voice bot that can hold up its end of a 20-minute conversation. The student team that makes the most impressive progress will win $500,000. The winner will be announced next month, and it should be interesting.




The direction taken by Google regarding the company’s obsession with AI should come as no surprise to anyone who has been following the company. The investments have been numerous and costly. For example, Google purchased the U.K. AI company DeepMind in 2014 for more than $500 million. DeepMind is the company whose AlphaGo program recently defeated the world’s top Go players, an accomplishment more difficult than IBM Deep Blue’s defeat of chess grandmaster Garry Kasparov in May 1997. DeepMind’s founder, Demis Hassabis, has explained the company’s two-part mission as:

Step one: Solve intelligence.

Step two: Use it to solve everything else.


Before DeepMind agreed to the sale, Hassabis outlined two prerequisites:

  1. The work it produces can never be used for espionage or defense purposes.
  2. There must be an ethics board established to oversee the research as it approaches achieving artificial general intelligence.

The intellectual investment at Google is also extensive. Quora.com estimates the number of Ph.D.s working at the company at somewhere between 1,500 and 2,000 (extrapolated from an estimate that 7% of total employees hold doctoral degrees).


It just makes sense that when designing new products, Google would want to take advantage of its lead in AI over hardware giants like Apple. Additionally, speech could be the ultimate interface between humans and their computers. It might not be the quickest or most efficient way to connect—that would be programming languages like assembly and Python—but it’s the most like us.


Michael Castelluccio has been the Technology Editor for Strategic Finance for 23 years. His SF TECHNOTES blog is in its 20th year. You can contact Mike at mcastelluccio@imanet.org.
