
Google Machine Learning Dominates I/O 2018

I’ve been attending Google I/O 2018 all this week in Mountain View. I’ve pulled together my thoughts here after Sundar Pichai’s keynote and the subsequent busy schedule of sessions on (amongst other things) Machine Learning, Android and the future of computing. Check out the videos on their website if you want to delve into the details, as there have been some great talks and conversations here.

Google Machine Learning

Google is harvesting the spoils from its big bet on Machine Learning. It’s time to work out how the rest of the world can share in them too.

The leitmotif of Sundar Pichai’s opening keynote, and many of the talks over the last three days, has been Machine Learning - or ‘AI' if you prefer to keep it marketing friendly. And Google, understandably, do.

Google has invested considerably in this space in the past few years, but really doubled down last year with Pichai’s proclamation at Google I/O ’17 that Google is now an ‘AI first’ company (‘mobile first’ is old hat now). This focus is clearly paying off, with AI and machine learning now integrated across the product line - from Google Assistant to Maps, Google Lens, AR/VR and updates to Android. These products are benefitting, and in some cases showing marked progress ahead of the competition - see Google’s Duplex demo for evidence of that.

The question I had for the Google Machine Learning team here was: how do we accelerate the utilisation of this new fundamental technology, and help it spread to the rest of the world beyond Mountain View? This was largely met with a scratch of the head and a shrug. While Googlers in their Brain, Cloud and DeepMind enclaves believe they are developing something that will become as critical to computing as network connectivity is today, it’s still largely understood only within the bounds of the open-source communities that have fostered it.

Convention attendees listening to the speech

The openness of this community is commendable. But there’s still a chasm to cross in terms of ensuring Machine Learning becomes part of the toolkit of designers, product and senior business leaders when they conceptualise what new services and software might be possible - and, ultimately, what to invest in to help their customers better achieve what they want, and impact the bottom line.

At Kin + Carta, we’ve started to think about this, but over the coming months our new Data and Machine Learning community of practice will focus on bridging this chasm. Watch this space for more.

Below are my highlights from the announcements and discussions over the last few days.

Duplex might actually be about to deliver on the ‘Assistant’ metaphor.

The most impressive and most commented-upon demo amongst yesterday’s announcements was an enhancement to Google’s voice UI Assistant: Duplex. Duplex takes your requests to book a hair appointment or a table for dinner, and rings up the restaurant or salon to make it happen.

Most impressively, the maître d’ doesn’t hear the usual wooden delivery of text-to-speech that anyone who’s interacted with a voice assistant might expect. Instead, Google played actual calls in which Assistant rang up to make the booking, and it ‘spoke’ in a realistic way. Duplex understood complex responses, introduced pauses and realistic “ums” and “ahs”, and got the appointment booked - all without the person on the other end of the line realising they were chatting to a machine. We were spared the blooper reel, but this seemed like a genuine step change toward more realistic voice interactions. And, something highly valuable for Assistant users.

For years, mobile devices and services have grasped for the assistant metaphor - ‘it’s like having an assistant that knows your next move’ has appeared in countless tech press-releases. In reality, the burden of effort has still sat with the customer. Tech might be able to set a reminder, but still leaves the heavy lifting of ringing up or doing the thing itself for you to handle. Duplex promises to genuinely take a job off your hands - ringing the businesses you want to visit soon so you don’t have to.

The question now is how those businesses will react if this results in a cavalcade of rather convincing robots clogging their phone lines. Surely the realistic “ums” and “ahs” of Duplex’s booking calls get in the way of a more efficient interaction with a small business? And will Google really get Assistant to call up millions of small businesses to check details and opening times for their search listings? Might they open up Assistant to SMEs to answer these calls, and so have two computers holding a realistic human conversation with each other? This impressive demo left a lot of questions to be answered, but was the clear highlight of I/O.

Google Machine Learning delivering invention across all its products.

Google was keen to show how Machine Learning-powered touches and features should now start to spill out across every interaction you have with its software, and beyond.

In Android P (the new iteration of the mobile OS), this comes to the fore in a build on the existing predicted-apps feature: predicted Actions and app ‘Slices’. If you plug your headphones in, it suggests you might want to fire up Spotify to stick Lady Gaga on (if, of course, that’s what you’ve been doing in that context lately). The implication for app developers on Android is clear - make sure you do the work to fit into this new part of the OS. If it really delivers on the promise shown at I/O, then it might start to crowd out the more familiar tap-on-the-icon way into your app.
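To make that concrete, here is a rough Kotlin sketch of what exposing an app Slice could look like using the Jetpack (androidx.slice) builder APIs Google pointed developers towards. The provider name and row text are purely illustrative, the builder details have shifted between library versions, and a real provider also needs declaring in the app’s manifest.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Illustrative provider name; a real app registers this in its AndroidManifest
// so launcher, search and Assistant surfaces can request slices from it.
class PlaybackSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        // Build a simple single-row slice the OS can render inline
        // whenever it predicts this action is relevant (e.g. headphones in).
        return ListBuilder(context!!, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Resume your playlist")
                    .setSubtitle("Pick up where you left off")
            )
            .build()
    }
}
```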

In Google Maps, it’s the creation of a Visual Positioning System (VPS) that uses Street View data to work out your orientation in a city accurately from the landmarks around you, and introduces computer-game-style overlays to keep you on the right track. With Google Lens, it’s the ability to take photographed content and copy and paste from documents, translate, explain or look up things that appear in the text in front of you.

In News and local search, it’s attempting to solve the curation problem - bringing together the information Google gathers in a way that is more likely to be relevant for you. Whether that’s surfacing the stories you’ll be interested in, or a place for dinner that you’ll love.

Most importantly for other organisations, Google also announced it will share some of these increasingly familiar machine learning-powered features and techniques with all developers via ML Kit. ML Kit promises to make it easy for third-party developers to plug in things like image recognition and barcode scanning to their apps. Google is clearly keen to sell the picks and shovels in the AI gold rush, not just take the spoils for its own apps.
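To give a sense of that intended simplicity, here is a minimal Kotlin sketch of the on-device barcode scanning path as ML Kit exposed it through the Firebase SDK at launch. The function name and the source of the bitmap are placeholders, and in a real app the results would feed back into the UI rather than a log.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Illustrative helper: scans a bitmap (e.g. a camera frame) for barcodes
// using ML Kit's on-device detector via the Firebase SDK.
fun scanBarcodes(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().visionBarcodeDetector

    detector.detectInImage(image)
        .addOnSuccessListener { barcodes ->
            // Each result carries the raw value plus a typed payload (URL, contact, etc.).
            barcodes.forEach { barcode -> println(barcode.rawValue) }
        }
        .addOnFailureListener { error ->
            // Detection runs on-device, but can still fail (e.g. unsupported image).
            println("Barcode scan failed: $error")
        }
}
```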

Healthcare is full of high stakes challenges for Machine Learning to tackle.

Healthcare took up a significant portion of the schedule here. Sundar focused on diabetes diagnostics in his keynote as the first example of the potential AI has to change products and industries.

Medics are just one profession that ‘weathers an information storm’ every day (as Greg Corrado put it on Wednesday in a session on the future of AI). It’s in these areas where machine learning can really start to deliver not only easier restaurant-booking experiences, but step changes in medical care, diagnosis, operations and even drug discovery. The knock-on effects should mean better outcomes, delivered more efficiently, in what is one of the most resource-intensive areas of our economy.

In the current climate of scepticism around the larger technology companies’ approach to gathering data and influencing our behaviour, it’s clear that health provides a strong counterbalance to the ‘you’ll just use it to sell more ads’ jibes about Google’s machine learning efforts.

Google Assistant is popping up on new surfaces around the home and increasingly in tandem with a screen.

One of the emerging consumer battlegrounds in technology today is the living room - with Amazon’s Alexa the most prominent voice assistant, particularly in the form of the Echo. Google’s Assistant certainly offers the most compelling alternative to Alexa, and Google showed off a number of enhancements and new form factors for the Assistant to pop up in across its Home devices.

Google Assistant

There was an evident marketing focus to the announcements here - with John Legend adding his voice to the device, the new ‘Pretty Please’ feature to encourage politer interactions from children, and the general focus on mainstream family life. All of this suggests that Google is seeing an uptick in Home device sales and wants to push them into as many homes as possible before Amazon gets there first.


One important area where Google might have the upper hand is the combination of voice and visual interactions. Whereas Amazon relies on delivering anything visual that Alexa finds to an app on iOS or Android - or to the relatively uncommon set of Amazon proprietary screens - Google already has the Android empire in billions of pockets. This is alongside Chromecast and Android Auto, which provide plenty of nearby screen space that it can start to utilise in combination with the voice UI. In the follow-up sessions, the Assistant developer teams were strongly pushing would-be Assistant developers to consider this proliferation of potential visual interfaces when crafting voice interactions for their apps.


Autonomous mobility is the biggest potential payoff for machine learning, and Waymo is taking the lead.

The centerpiece of Google’s potential AI accomplishments is Waymo - the autonomous vehicle division which is attempting to bring together a lot of sensors and robotics to deliver people and things to their destination with no one in the driver's seat.

Waymo

Having recently launched trials in Phoenix, Arizona, Google is now preparing to open up Waymo vehicles for hailing as a new on-demand transportation service. Creating an autonomous vehicle is clearly a phenomenally complex job, combining different sensors, robotics and a lot of machine learning. Some of the details shared around the product development here were fascinating - particularly the fact that a lot of the training happens ‘in silico’ rather than in real life, as models are fine-tuned based on video feeds of cars on roads, not just sensor-laden trucks zipping around Mountain View.

If the optimism on show here is to be believed, it looks like Google will launch a viable demand-responsive mobility service based around Waymo vehicles this year. Even if it is in the relatively simple context of an arid, gridded city like Phoenix, being the first to deliver in this hyped space would be a huge achievement.

This post was originally published by George Proudfoot over on Medium
