Of all the new Google products announced, Google Clips is the most interesting by far, which is to say that it represents the most interesting trend. This consumer device points to the future of enterprise A.I.
But wait, you might say. Isn’t Google’s Pixel Buds product the most revolutionary? Its ability to translate language in real time is something out of science fiction, and the elimination of language barriers surely has major implications for the future of mankind.
All that’s true — sort of. Many companies (including Google) have been building real-time language translation software and delivering it to smartphones for years. Google Translate is amazing, and I’ve been using it for years as I travel around the world.
The only translation innovation in the Pixel Buds is that the earbuds have external, outward-facing speakers in addition to the inward-facing ones, and outgoing translations play through those speakers while incoming translations play through the regular earbud speakers.
In other words, Pixel Buds simply play the audio from Google Translate, intelligently choosing between two sets of speakers for playback.
The effect is mind-blowing, but the “innovation” of speaker selection … not so much.
Google Clips, on the other hand, is the real revolution.
Why Google Clips changes everything
I’ll speculate as to why Google chose to target parents in particular in a moment. But first, a few facts about the camera itself.
Clips is a 12-megapixel camera. The housing is two inches square, and it’s got a clip on the back. The front features a round, black wide-angle lens housing (the lens captures 130 degrees) and a light that blinks while the camera is taking pictures, which makes it obvious that it’s a camera — it’s not a spy camera.
The Clips camera itself has no screen. Instead, you use a smartphone both to review pictures and control the camera in other ways. The camera does have a button for taking pictures, but that’s not supposed to be the main way pictures are taken.
So far, the camera I’ve described sounds like any number of existing products, including the “lifelogging” cams I’ve told you about previously in this space.
But the revolutionary part is the software. Google Clips uses artificial intelligence (A.I.) to choose when to take pictures. To “use” the camera, you twist the lens to get it started, place it somewhere, then forget about it.
It learns familiar faces, then favors those people (and pets!) when deciding when to take pictures. It looks for smiles and action, novel situations and other criteria. It discards blurry shots.
Each time it takes pictures, it captures a burst of photos at 15 frames per second, which you can save or edit as a GIF, or from which you can cherry-pick your favorite still photographs.
Clips has no microphone, and it cannot record sound.
In short, the A.I. is designed to take great pictures and GIFs, with the added advantage that there’s no photographer around to change the behavior of the people being photographed.
And here’s the revolution: The face recognition takes place on the device, not in the cloud. Pictures are stored on the device, not in the cloud.
These are the attributes Google touts as ensuring privacy. No sound. No automatic uploading.
Of course, you can use the app to choose clips for uploading to Google Photos. Once uploaded into Google Photos, the pictures will be processed again for face recognition, this time with names attached, if you’ve used the name-to-face feature in Google Photos.
Why Google targeted Clips at parents
I’m speculating here, but I believe Google arrived at parents through a process of elimination.
Google was slammed hard over its Google Glass experiment, because that device put a camera on people’s faces, which made many in the public and press uncomfortable.
A large number of startups have since come out with little, square, wearable clip-on cameras, most of which fizzled in the market due to high price, low picture quality and the fact that wearing a camera can be socially awkward.
Google Clips looks externally like any number of these cameras, and my guess is that Google’s initial intent was to both join ’em and beat ’em, by offering a clip-on, wearable camera powered by A.I.