Google I/O Keynote Reveals Google’s Master Plan


Google+ Really Is Google 2.0
Take, for instance, Google+. The redesigned social platform looks a lot like Pinterest (and maybe a little like Facebook). A three-column look gets a lot more information in front of users, but it’s what’s behind that information that interests me.

Google demonstrated how Google+ can show you more about whatever is on your Google+ page, even if there is no description. So behind a photo of the Eiffel Tower (literally behind it, because these new cards flip over) is relevant, Knowledge Graph-guided information about the landmark.

Google’s “Related Hashtags” feature in Google+ analyzes the contents of any post and adds the hashtags Google thinks you need (yes, I said “thinks”).

Since Google knows this could freak some people out, it has included the ability to opt out for a single post or for all of them. I’m not a nervous Nellie about privacy, so I’d likely leave it on, preferring to get more information rather than less.

This hyperintelligence is also evident in Google’s new, powerful photo tools, which likewise live in Google+.

Auto Enhance is by no means a new concept. I’ve been using similar tools since the first day I loaded Photoshop on a PC. The ability to smartly reduce noise, and even to tell the difference between skin and, say, hair or jewelry, is appreciated, but not game-changing.

It’s the oddly named “Auto Awesome” that may raise a few eyebrows. Google SVP and Google+ lead Vic Gundotra described it as the ability to create a new image that never existed from the ones you took. If you upload 100 images of a recent vacation, Google’s new algorithms probably pay more attention to all of them than some of your closest friends do.

It sees (yes, I said “sees”) things like similar photos shot in a burst, finds all the smiling faces and creates one composite in which everyone is smiling. The same technology is capable of making collages, animations and panoramas. At one point during the keynote, Gundotra said the technology had spent the past two weeks “gifting” Google+ members with auto-generated animations (essentially Google’s own form of GIFs). I imagine not everyone was thrilled with this bit of news: “Oh look, Marge, Google went through all our photos and made us some movin’ pictures!” “They did what?!”
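To make the burst-photo idea concrete, here is a minimal sketch of the concept, not Google’s actual pipeline: score each frame in a burst by how many detected faces are smiling, then keep the best frame. It uses OpenCV’s stock face and smile detectors, and it only picks a single best frame rather than compositing the best version of each face; the function names and file names are hypothetical.

```python
import cv2

# Stock Haar-cascade detectors shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_score(image_path):
    """Return (smiling faces, total faces) for one burst frame."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    smiling = 0
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # Smiles are small relative to the face, hence the stricter parameters.
        if len(smile_cascade.detectMultiScale(
                roi, scaleFactor=1.7, minNeighbors=20)) > 0:
            smiling += 1
    return smiling, len(faces)

def best_frame(burst_paths):
    """Pick the frame where the largest share of faces is smiling."""
    def key(path):
        smiling, total = smile_score(path)
        return (smiling / total if total else 0.0, total)
    return max(burst_paths, key=key)

# Hypothetical usage on a three-shot burst:
# best_frame(["burst_01.jpg", "burst_02.jpg", "burst_03.jpg"])
```

The real Auto Awesome presumably goes much further, aligning the frames and stitching the best per-face regions into one composite; this sketch just shows why shooting a burst gives the algorithm something to choose from.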

Because Google and Google+ know so much about you, they can look at that same monster collection of photos and boil it down to just the best shots, or “Highlights.” You really don’t have to do a thing. The technology will identify family members and make sure they’re part of the reel. If people are happy in the pic, they’ll probably make the collection, too.

How Smart Is Too Smart?
And it’s like this with everything Google is doing these days: using what it knows about the world, about you and about content to make something new, whether it’s a better photo collection, a smoother search experience or a music selection you actually want to hear.

Yes, as everyone anticipated, Google’s new $9.99-per-month streaming music service, All Access, was among the myriad announcements. Again, what interested me most was not that Google struck streaming deals with the major music labels, or that you can now access millions of songs from your phone, tablet and PC, but that Google instantly delivered custom “radio stations” based on your interests.

It’s what Google can do: for whatever service it wants to launch, it can tap a Pacific Ocean-sized well of highly organized data and bundle up something useful (or creepy, depending on your perspective).

Obviously, this is not just about data, because mountains of data are meaningless if you can’t drill deep into them and see every single critical relationship on the fly. Google’s groundbreaking work in building out its Knowledge Graph is clearly at play in almost everything Google is doing here today.

Smart Talk
Voice search, which already exists on Android phones, is making the leap to the desktop (via Google Chrome). Its ability to understand natural-language questions is, from where I sit, more impressive than that of Apple’s Siri.

The Google I/O demonstration was flawless, but the real power is, once again, Google’s data backend and what it knows about you. In one search example, the demonstrator asked Voice Search a question that identified the location only as “here” (as in “near here”). She also identified one of the search components with “it.” Google Voice Search instantly brought up the perfect answer.

Apple can certainly do all these things with Siri, but not necessarily to the same depth as Google. Google simply knows more about you, and if you’ve signed in to Google and Google+, multiply that knowledge by a factor of 10.

In the next iteration of Google Maps on the desktop, Google cast aside much of the interface to overlay all the key information on the map itself. A lot of it comes from people you know, places near you and your search preferences. Plus, Google is using all of our information to make its own services much, much richer. So when Google Maps takes you inside St. Peter’s Basilica, the beautiful 3D images that make up the interior view come from Google users (who uploaded their photos to Google’s service).

The Vast Web
I used to think that Google was going in a dozen different directions at once, with no unifying strategy or destination. Even Google’s own people acknowledge that the method in the madness was not always clear. “Frankly, even Google’s own services have been fragmented and confused at times,” said Google Android lead Sundar Pichai during the keynote.

Now, however, Google’s worldview is finally coming into focus. The tenuous threads that connect these dozens of different applications and services are strengthening and gradually being pulled closer together. Underneath it all is Google’s vast web of information and smarts, which is all about us.

What Google is about to do with all of it is either a thrilling or a very scary prospect.


By Lance Ulanoff