Secrets from Google's labs

6th Sep 2008 | 08:30


Google week: Discover the next big thing at Google's HQ

Google has become a behemoth of innovation and a magnet for intellectual capital.

Yet, despite its obviously self-written Google Finance summary that says it "maintains an index of websites and other online content," the company is actually a dual-purpose entity. It's a very successful experiment in social engineering (where people flock to its Internet properties) and a vast advertising network (where those same people see countless ads).

Google has tapped – like no other company – the power of the millions, perhaps billions, of sites that serve its ad links, usually for free. It's an amazing concept: if you build powerful and useful tools and establish your company as an Internet oracle, you can attract millions of people to your advertising network and fuel even more innovation. Should it ever fail at innovation, people would stop using its ad system and its revenue could fizzle out. We're here to tell you: that is not going to happen, because we have seen the future of Google in the form of its ongoing research projects.

Universal 'one-box' search

Universal search – or 'one box' search – has to do with how the company presents search results. In 2007, in a subtle yet important change, Google shifted from presenting just text links to more universal results that include photos, news, blog entries, video and even book excerpts.

David Bailey, a Google engineer, says the company is experimenting with the algorithms for universal search. For example, if you use the term 'Martin Luther King' you may see more archival information, such as book excerpts, and far fewer news reports. If you search for a movie star, you may see more news, YouTube videos or photos. The implication here is that Google is categorising through artificial intelligence: during the split second in which it scans its database of web indexes, it's also analysing the term itself, working out whether the results page should lean towards images, text or video. Yet it goes deeper than that. For video, as an example, it analyses the file size, codec, star ratings and other data to determine the best video results.
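To make that concrete, here is a toy sketch – our own illustration, not Google's code – of how a handful of video metadata signals like the ones Bailey mentions might be blended into a single ranking score. The weights and codec scores are invented.

```python
# Toy illustration: blend file size, codec and star rating into one score.
# All weights and values are invented for this example.
WEIGHTS = {"star_rating": 0.6, "codec_quality": 0.25, "size_penalty": 0.15}
CODEC_QUALITY = {"h264": 1.0, "vp6": 0.7, "mpeg1": 0.3}  # hypothetical scores

def score_video(star_rating, codec, size_mb):
    """Blend a few metadata signals into one ranking score in [0, 1]."""
    rating = star_rating / 5.0                   # normalise a 0-5 star rating
    quality = CODEC_QUALITY.get(codec, 0.5)      # unknown codecs score mid-range
    size = max(0.0, 1.0 - size_mb / 1000.0)      # mildly prefer smaller files
    return (WEIGHTS["star_rating"] * rating
            + WEIGHTS["codec_quality"] * quality
            + WEIGHTS["size_penalty"] * size)

results = [("trailer.mp4", 4.5, "h264", 40), ("rip.avi", 3.0, "mpeg1", 700)]
ranked = sorted(results, key=lambda r: score_video(*r[1:]), reverse=True)
print(ranked[0][0])  # the well-rated, well-encoded clip wins
```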

Google has also moved away from 'operators' such as 'movie:iron man' that dictate results. Power users can still use this search syntax, but Google automatically looks at your search term (for instance, 'Iron Man'), knows it's a recent movie and so presents showtimes and reviews. Universal search is also becoming more 'web 2.0' aware and crawls through details such as Yelp.com restaurant reviews or Zillow.com home prices to present more detailed results. "A big part of my job is to shine a spotlight into all these remote corners of the web," explains Bailey.
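The operator-free routing Bailey describes might look something like the following sketch, where a bare query is matched against a (here, invented) index of film titles before falling back to ordinary web results.

```python
# Hedged sketch: route a raw query to a vertical without explicit operators.
# The title list stands in for a real, continually updated index.
MOVIE_TITLES = {"iron man", "the dark knight"}

def route_query(query):
    """Guess which vertical a bare query belongs to (toy heuristic)."""
    q = query.lower().strip()
    if q.startswith("movie:"):           # power users can still force the vertical
        return "movies", q.split(":", 1)[1]
    if q in MOVIE_TITLES:                # recognised as a recent film...
        return "movies", q               # ...so show showtimes and reviews
    return "web", q

print(route_query("Iron Man"))      # ('movies', 'iron man')
print(route_query("jam recipes"))   # ('web', 'jam recipes')
```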

What is actually happening with universal search is that Google maintains an index for each category – photos, web, blog entries and so on. These indexes are database files that are almost continually updated by crawling the web and searching through millions of URLs. In a very real sense, the heart of the company – these indexes – depends on the processing power of the Google server farms that crawl the web. Bailey says that this confluence of categorisation requires more and more data centres, more electricity and more processing power as the web continues to expand. At the moment it's hard to see what the end result of universal search could be, because it will continue to evolve and become more intelligent.

Language translation

Most of Google's efforts in translation have revolved around 'language pairs' – the translation from one language to another. It has focused on two areas: developing new language pairs and improving the algorithms used for translating pairs.

As for improving existing language pairs, the main challenge has been in understanding idioms and emerging language use (an ongoing battle), but also in breaking down sentence structures using artificial intelligence. For example, in English the verb typically follows the subject, whereas in Japanese it comes at the end of the sentence. When languages have similar historic roots, such as French and Spanish, the language pairing is easier.

With languages such as Japanese and Korean, the project goal is to accumulate more and more data about languages that are difficult to translate. The more data Google has about a language's morphology, the more easily it can translate that language into others.

Developing new language pairs – such as English to Finnish – is even more difficult. The most difficult languages to pair are those with a vast morphology (the units of language and how they fit together to form meanings). Finnish in particular has a rich morphology, where one word combined with another can form an expression carrying an inherent meaning – such as race or gender – that the individual words lack.

These word combinations are complex, and the more processing power you throw at the problem, the more accurate the results. Google operates two translation engines: one for public use that is faster but less accurate, and one internal (and experimental) engine that runs slower and is more accurate. The internal project runs on faster server farms, has richer data sets and uses better algorithms. "Putting more data into the system makes language translation better," says Franz Och, a Google research scientist.
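Och's point about data can be pictured with a drastically simplified sketch of a statistical, data-driven translator. Everything below – the phrase table, the language-model scores – is invented; a real system learns these values from enormous bilingual corpora.

```python
# Toy statistical translation: pick the candidate maximising
# P(english | foreign) * P(english). All numbers are invented.

# P(english | foreign), as learned from parallel text.
PHRASE_TABLE = {
    "chat": [("cat", 0.8), ("chat", 0.2)],   # French 'chat' -> English
}

# A toy 'language model': how plausible is the English word on its own?
LM_SCORE = {"cat": 0.6, "chat": 0.1}

def translate_word(foreign):
    """Choose the translation with the best combined score."""
    candidates = PHRASE_TABLE.get(foreign, [(foreign, 1.0)])
    return max(candidates, key=lambda c: c[1] * LM_SCORE.get(c[0], 0.01))[0]

print(translate_word("chat"))  # 'cat' - richer tables mean better choices
```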

It's interesting to note that machine translation is less about human knowledge of a language and more about data collection. Few members of the Google translation team can actually speak the languages they are translating, but they are very good at collecting the morphological data. In the end, translation is a major test of data collection and software programming prowess, and will continue to evolve – making it easier for users to both learn and use a language in their daily lives.

Computer vision search

Computer vision is one of the most difficult problems in computer science. The idea is to have a computer analyse an image and recognise it through artificial intelligence.

The implications are profound: if a computer can recognise images, it can process them more accurately. Think of a bank account: if a computer could analyse a live video of you and verify your identity, your account would be much more secure. At Google, computer vision is less about security and more about indexing image data. Today, when you search for 'Lindsay Lohan', the results presented are based on metatag data attributed to photos of the starlet. Some of those attributions are wrong, which is why inaccurate results are sometimes returned.

Computer vision, conversely, analyses the space between the eyes, nose shape, forehead width and other data, and compares them to a reference image. This analysis applies equally to video and photos and it's much more accurate. In a demo at Google, Shumeet Baluja – a Google research scientist – showed how a computer vision search for George Bush returned a series of videos of the US president's recent speeches.
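The matching step can be pictured with a minimal sketch (ours, not Google's): reduce each face to a vector of measurements and compare it against the reference. The measurements and threshold below are invented.

```python
# Toy face matching: Euclidean distance between facial-feature vectors.
import math

def face_distance(candidate, reference):
    """Distance between two feature vectors; smaller means more alike."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(candidate, reference)))

# Hypothetical measurements: eye spacing, nose width, forehead width (pixels).
reference = [62.0, 34.5, 81.0]
candidate = [61.5, 35.0, 80.2]

MATCH_THRESHOLD = 2.5  # invented tolerance
if face_distance(candidate, reference) < MATCH_THRESHOLD:
    print("likely the same person")
```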

One implication of computer vision search – once it evolves beyond a simple recognition phase – is that you could then categorise the results. Google is focused only on detection today, but its mission is all about categorisation. Computer vision will aid the company in building a library of searchable images and video beyond just text descriptions and metatags.

The first step in computer vision search is to analyse a database of millions of videos and images to see if there is a face. Baluja says most of Google's resources in computer vision are currently dedicated to just detection: is there a face? The next step is to perform the pattern matching against the reference image or video.

The company uses multiple approaches to detection: comparison with the reference, analysis of the image itself and comparison against other search results. A 'classifier' gathers statistics describing an image, such as skin tones, shadows, facial hair and other attributes. These classifier outputs are fed into what Baluja calls 'visual rank', which determines the accuracy of the search by deciding which images form the basis for what the person normally looks like. That's why, in the Lindsay Lohan search, a cartoon of her might appear much lower in the results.
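Here is a toy version of that pipeline – invented attributes and weights, purely to show the shape of the idea: several classifiers each score one aspect of an image, and a weighted combination stands in for 'visual rank'.

```python
# Toy 'visual rank': combine several classifier scores into one number.
# The attributes and weights are invented for illustration.

def skin_tone_score(img):  return img["skin_tone_plausibility"]
def shadow_score(img):     return 1.0 - img["shadow_harshness"]
def typicality_score(img): return img["similarity_to_other_results"]

CLASSIFIERS = [(skin_tone_score, 0.4), (shadow_score, 0.2), (typicality_score, 0.4)]

def visual_rank(img):
    """Weighted combination of classifier outputs."""
    return sum(weight * clf(img) for clf, weight in CLASSIFIERS)

photo   = {"skin_tone_plausibility": 0.9, "shadow_harshness": 0.2,
           "similarity_to_other_results": 0.85}
cartoon = {"skin_tone_plausibility": 0.3, "shadow_harshness": 0.0,
           "similarity_to_other_results": 0.2}

# The photo outranks the cartoon, pushing the cartoon down the results.
assert visual_rank(photo) > visual_rank(cartoon)
```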

Eventually, Google will apply its computer vision research to more than just faces. For example, Baluja explained that the team plans to use a visual search system for products. When you type the search term 'Apple iPhone', you might want a computer vision search that shows the device in use in the field, the iconic image from Apple or the cartoons in which people make fun of the 'Jesus phone'. Baluja says the team is agnostic about the kind of images it searches; the main goal is to provide the results it thinks customers want, which could in turn raise advertising revenues.

Google Android update

Android is the most important project at Google. Part software platform for mobile phones and part third-party development initiative, Android could dictate the future of Google.

More and more users are searching the Internet from their phones, and the phone itself is evolving into a computer platform. In the future, there may be no desktop or laptop computers; instead, the only computer you use could be your phone, especially once computer scientists figure out complex issues such as speech-to-text recognition, cell phone video projection and virtual keyboards.

Google knows that this platform shift will happen, and it has chosen to be much more involved in the core architecture. It's interesting to note that Google fans were disappointed when they realised the company would not be releasing an Apple iPhone competitor. Yet it turns out Google has much loftier ambitions: to deliver the OS for future computers.

In another interesting twist to the story, Android itself is not a fully functional OS with all the applets and features you would ever need. Instead, Google has tapped a much more extensive resource of third-party developers. The model is similar to the one Nokia uses with the Symbian OS (which it recently acquired), where the most innovative applications are all user created. Google's recent developer challenge led to some amazingly innovative apps that can determine your location and help you find a taxi, feed weather information to your location-aware device or let you search for movie showtimes.

The Android platform is a departure from the hub-and-spoke model of the iPhone. There is no home screen; instead, all apps can run concurrently and are 'application aware'. You can click on a contact and choose to see a map of where that person lives, dial their mobile phone, copy the data to a clipboard or search for it on the web. Applications can also share information with each other: a contacts program can pull information from a spreadsheet.
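Android apps are written against Google's own Java-based APIs, but the 'application aware' idea can be sketched language-neutrally. In the toy below (every name is invented), apps register the kinds of data they handle, and any app can hand data off to whichever apps claim it – loosely analogous to how Android apps pass data around.

```python
# Toy 'application aware' dispatch: apps register for kinds of data, and any
# app can offer data to whoever claims it. Purely conceptual, not Android code.
HANDLERS = {}

def register(kind, handler):
    """An app declares it can handle a kind of data."""
    HANDLERS.setdefault(kind, []).append(handler)

def send(kind, payload):
    """Offer a piece of data to every app that registered for its kind."""
    for handler in HANDLERS.get(kind, []):
        handler(payload)

register("address", lambda addr: print("Maps app: showing", addr))
register("phone",   lambda num:  print("Dialler: calling", num))

# Clicking a contact can fan its details out to other running apps.
send("address", "1600 Amphitheatre Parkway")
send("phone", "+1 650 253 0000")
```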

Oddly, while every other Google project involves hard factual data and some interesting implications for real-world use, most of the Android project seems to depend on users to tap the power of the OS. For example, when we spoke with Erick Tseng, the Android product manager, he could not confirm whether Android would support multitouch (where you can zoom in or pan across an image using your fingers) or haptics (where the device provides a tactile response).

Instead, he says the developer community will have to build those features. Tseng wouldn't let us see an Android phone, although he did say that he has been using one 'for months' as his primary phone. It's possible that Google has deferred all of the power and capability of Android to the user base, which could prove disastrous.

The platform seems to be less a competitor to the Blackberry or iPhone and more like a marginal gadget such as the Chumby Wi-Fi radio, which also relies almost entirely on user-created open source apps for innovation. Many of these apps are clunky and poorly designed. Still, the results of the development challenge reveal that there is hope for the platform, even if the specifics are still sketchy. Google has been saying for some time that the first Android-powered devices will ship by the end of 2008.

There's another major challenge in store for Android, however. If the OS does support innovative APIs for multitouch, haptics and other features, and if the developer community creates some truly innovative apps, Google must also contend with the complexities of dealing with mobile phone carriers.

In most cases (other than the Apple iPhone), carriers dictate not only what services are offered on a phone, but which software. For example, on the Motorola Q in the US, Sprint dictates exactly how you can download and use a ringtone. You can't just download any ringtone and use it on the phone. Google seems to think that mobile carriers support open-source software just as much as it does, which is not true.

Energy initiatives

Google's energy initiatives are clearly designed to set an example. Some, in fact, seem a bit superfluous or over-the-top in terms of practical implications. For example, any employee can 'check out' a hybrid car – located in an open-air shade port that is itself powered by a solar array on the roof – for a few hours. Considering there are just a handful of cars and the company has over 19,000 employees, it's hard to see the hybrid car scheme as anything but an example to encourage environmental awareness.

Similarly, six buildings and two parking garages at Google have solar arrays on the roof. In total, 9,612 panels provide about 1.6 megawatts of power – roughly 30 per cent of peak usage, so it could be said that three out of every 10 computers are powered by solar energy.
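Working through those figures (rounded, and purely illustrative):

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
panels = 9612
total_watts = 1.6e6                            # 1.6 MW from the article

watts_per_panel = total_watts / panels         # ~166 W per panel
implied_peak_mw = (total_watts / 0.30) / 1e6   # ~5.3 MW implied peak campus draw

print(round(watts_per_panel), "W per panel;",
      round(implied_peak_mw, 1), "MW peak")
```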

The reason the arrays provide only 30 per cent of the total power comes down to the energy density of the office buildings: it's much higher than that of a typical home because of the lights that stay on all day and the sheer number of computers and other electrical components running.

Google App Engine

Looking back decades hence, it might be possible to say that Google officially built 'the computing cloud' on April 7, 2008. That's when the company released Google App Engine, an infrastructure – built on top of Google's BigTable storage system – that lets companies host their applications on the web instead of internally.

There are several advantages to the cloud: easier maintenance, reliable archives, anywhere access and a common UI. Of course, App Engine also has a few disadvantages and unknowns, including privacy concerns when a company hands its data over to a web host, the lack of offline access (which Google counters with Google Gears), the speed of access over the Internet, and hard-to-predict factors such as the programming environment.

Pete Koomen, the product manager for App Engine, says the project has been underway since the summer of 2007. When we talked to Koomen, it was unclear whether the company even had any customers using the project. "The motivation behind App Engine is to combat the challenges of developing applications," says Koomen.

"When you decide to build an application, you have to define and provision machines, configure SQL and Apache, configuring files – none of this is rocket science, but it takes time and money. The second challenge is when an app starts to get traffic it's difficult to scale – with SQL for example, you might have to shard up machines, de-normalising data – and a lot of this is not taken into account when they first start out."

Koomen says App Engine is the result of Google itself building and scaling applications – a trial by fire, so to speak – and it benefits from the company's distributed server architecture. Essentially, this means any app runs with the resources it needs at the time of operation and scales automatically. Failover, load balancing and distributed data stores (data split over multiple machines) all contribute to the scalability. Developers can also use Google authentication and email APIs.
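A minimal sketch in the style of App Engine's original Python APIs shows the flavour: you define a datastore model and let the platform worry about machines and scaling. (This is our simplified illustration of the period SDK, not production code.)

```python
# Minimal App Engine sketch: a datastore model plus Google authentication.
from google.appengine.ext import db
from google.appengine.api import users

class Greeting(db.Model):
    author = db.StringProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

def post_greeting(content):
    user = users.get_current_user()            # Google-account authentication
    greeting = Greeting(author=user.nickname() if user else "anonymous",
                        content=content)
    greeting.put()                             # no machines or SQL to configure

def latest_greetings():
    return db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")
```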

Today, App Engine supports only Python, but Google is working on expanding the language offerings quickly. So far, most of the activity in the project has been research documents and feature requests submitted by users – it's still unclear exactly who is or will be using it, especially in light of the success of Amazon Web Services offerings such as EC2.

How many of these projects will be a raging success for Google is hard to tell. Android is an early, untested platform, and carriers may decide to abandon ship. Machine translation and computer vision are ongoing projects, but early demos of visual search show promise.

Universal search is a fully functioning part of its search platform today that is already evolving. These projects are key to understanding Google's future and legacy in technology – which is sure to be great indeed.
