Among many other integrations, Nuance has worked with Apple on its Siri feature, and with Swype (see video) to replace traditional keyboards.
So what’s next for speech recognition? As demoed at CES 2014, Nuance technology might be integrated into wearables, into Intel and ZTE devices, as well as into cars. Unlike IBM and other competitors that have given up on speech recognition, Nuance is still in the race.
Still, two challenges need to be addressed:
- when the device has little compute or storage capability (typically smartwatches or even smartphones), requests must be sent to cloud-based services to be analyzed and understood. But this dependency should fade as on-device capacity keeps increasing
- noise reduction (in loud environments) and multi-speaker recognition still do not produce great results.
Is speech recognition the future? I don’t think so. Beyond the above-mentioned challenges, I personally enjoy typing a text silently in a quiet environment (like a conference or meeting), or simply keeping it private! I am also wary of the cloud-based back end, which I have heard works far less well for languages other than English. My brother has been trying the speech command system on his latest Xbox with very little success. So, most probably, speech recognition will be reserved for dedicated uses like understanding simple commands (for instance, “OK Glass… take a picture”) or replacing doctors’ dictation secretaries…
Capturing sound can be very discreet and silent using earphones or even sound transmission through the skull. Maybe the microphone could fit in a necklace, picking up vibrations directly from the vocal cords, allowing very quiet speaking and easier noise reduction?