Last May, from the 8th to the 11th, Google I/O took place: the annual ‘High Mass’ during which the Mountain View firm presented the new features of its products and services. It was an opportunity to gather 22,500 people in its renowned open-air amphitheatre for a much-anticipated keynote as well as several technical sessions dedicated to developers from around the world.
An edition devoted to AI
During this Google I/O, Sundar Pichai first took the stage, reminding us that tech players should feel responsible for the products and services they create and for their potential impact on ethics and the protection of user privacy. This was one way of responding to the Cambridge Analytica scandal that has rocked Facebook and affected everyone in recent weeks.
Google I/O was also the opportunity for the Google CEO to announce the renaming of the Google Research subsidiary, which will now be called “Google AI”. By doing so, Google is acknowledging what Google Research has become in recent years: a laboratory dedicated to technology built on artificial intelligence. This is a step forward for the Mountain View firm, which is often criticised for undertaking projects that are hard to interpret clearly and that end up swallowing each other up.
This name change is also a clue about the way Google is planning its future: AI (which has been central to Google products for many years) will be coming more and more to the forefront and applied to the company’s overall strategy. The laboratory’s mission is to have artificial intelligence brought into every home, or in other words, to popularise technologies that are still only accessible to the happy few. Today, almost everyone knows and uses Google Assistant or Google Translate, but not so much TensorFlow and its ability to recognise images, shapes, and faces.
Products and services that are more human-scale and autonomous
AI is becoming increasingly central to the company’s services, including Assistant. With Continued Conversation, you no longer need to start every request by saying “Okay, Google”, since the assistant can tell that the user is still talking to it. Similarly, it can distinguish whether the user is addressing it or someone else in the room, making interactions with the machine more natural. Google has also given some thought to children, who seem to be talking more and more to connected speakers! Once it has recorded and learned to recognise your little angel’s voice, Pretty Please will demand more respect from them: the assistant won’t reply to their questions without hearing the magic words “please” and “thank you.” This is a way for the speaker to reinforce parents’ role by reminding children of good manners.
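To make the idea concrete, here is a toy sketch of a politeness gate in the spirit of Pretty Please. Everything here (the function name, the reply strings) is invented for illustration; the real feature is far more sophisticated and relies on voice recognition.

```python
# Toy politeness gate (invented for illustration, not Google's implementation):
# the assistant only answers a request that contains a "magic word".

MAGIC_WORDS = ("please", "thank you")

def pretty_please_reply(request: str, answer: str) -> str:
    """Answer the request only if it includes a magic word."""
    if any(word in request.lower() for word in MAGIC_WORDS):
        return answer
    return "What's the magic word?"

print(pretty_please_reply("Play some music", "Playing music."))
print(pretty_please_reply("Please play some music", "Playing music."))
```

A simple substring check like this would obviously misfire on words such as “pleased”; the point is only to show the rule the speaker enforces, not how it detects politeness.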
Duplex is another innovation that showcases what AI can do. This service lets users delegate the task of making an appointment to the assistant. All by itself, it will call the other party and talk with them. To top it off, the assistant even mimics human speech by slipping a few “ums” and “ahs” into the conversation.
This demonstration shows how technology is starting to reach maturity. This is only a step away from Sci-Fi movies! We might even wonder if, in a few years, assistants will talk to each other for everyday tasks such as booking a table, planning a trip, or organising an evening out with friends.
Android & Google apps simplifying users’ lives
Android P, the new version of Google’s in-house mobile OS, relies on machine learning to become more intuitive and to offer better usability through new interaction gestures and smart virtual buttons.
Adaptive Battery will attempt to learn which apps you use and optimise your phone’s energy consumption accordingly, so that battery power goes first to the apps you actually use.
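The principle can be sketched in a few lines. This is a deliberately naive illustration, not Android’s actual algorithm (which uses a learned model of usage patterns): rank apps by recent foreground time and limit background battery use for everything outside the most-used set.

```python
# Naive sketch of the Adaptive Battery idea (invented for illustration):
# apps you rarely use get their background battery access restricted.

from collections import Counter

def restricted_apps(usage_minutes: dict, keep_top: int = 3) -> set:
    """Return the apps whose background activity would be limited."""
    ranked = [app for app, _ in Counter(usage_minutes).most_common()]
    return set(ranked[keep_top:])

usage = {"mail": 120, "maps": 45, "camera": 30, "old_game": 2, "notes": 1}
print(restricted_apps(usage))
```

With the hypothetical usage figures above, only the rarely opened apps end up restricted, which is the behaviour the feature promises.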
A dashboard has also been introduced so you can see at a glance, in a single interface accessible from the notification centre, how your smartphone is performing.
Google apps will also benefit from the integration of artificial intelligence and machine learning in order to offer even more innovative features.
Gmail will help you write emails by displaying suggestions for words and expressions (Smart Compose & Reply), while Maps and News will suggest outings, activities, and articles to read depending on your location and preferences (Your Match & For You). The same goes for your phone’s camera, with features that help you improve your pictures and videos. Finally, in addition to people and places, Lens will recognise objects in a photo so you can find them more easily on the internet.
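As a rough intuition for what Smart Compose does, here is a toy phrase completer. The real feature uses a neural language model trained on vast amounts of text; this sketch, with its invented phrase list, only shows the user-facing idea of completing what you have started typing.

```python
# Toy phrase completion in the spirit of Smart Compose (invented for
# illustration; the actual feature uses a neural language model).

COMMON_PHRASES = [
    "thanks for your email",
    "thanks for the update",
    "looking forward to hearing from you",
]

def suggest(prefix: str) -> list:
    """Suggest stored phrases that begin with the typed prefix."""
    prefix = prefix.lower()
    return [p for p in COMMON_PHRASES if p.startswith(prefix) and p != prefix]

print(suggest("thanks for"))
```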
In the end, the Google I/O was loaded with announcements and new features focusing on machine learning and artificial intelligence.
In particular, we noted Mountain View’s desire to make our lives easier and to have us spend less time on our smartphones by delegating more and more tasks to these services. Booking a table, ordering the next book to read, putting on a movie in the evening: all these actions will soon be left up to our assistants… perhaps to the point that we can no longer live without them!
Translated from English by Charles Rogers.