Since the iPhone 4S came out, Siri, the enhanced voice control functionality found only on the iPhone 4S (for the time being), has become a focal point for the state of the art in speech recognition. With smartphones packing quite a punch in terms of CPU power, and access to the cloud becoming ubiquitous through traditional fixed broadband connections and popular cellular connections (GPRS, 3G and LTE), getting voice queries processed is becoming easier, more efficient and more accurate.
Of course, Google has had speech recognition in Android for a while now, but it has taken time to evolve to its current state, and some people might say it has fallen behind Apple’s offering (which was admittedly an acquisition).
So why is Google trailing?
My take on this is that Google relies heavily on its current ad model to bring in the cash (ten links on a page, ads down the right-hand side), and diverting search to a voice channel bypasses the opportunity to serve up ads unobtrusively. Slipping a spoken ad in at the beginning will put people off using a voice channel, while putting it at the end will simply see people ignore it by terminating the channel.
While the current method of obtaining information may be through a web browser, it might not always be that way. With enablers like Siri, we may be moving towards bite-sized chunks of data that fulfil a search request or query, rather than the pages and blocks of data we receive now. Google needs to work out how voice will work for consumers while innovating its business model to support it.