Richer manual speech adaptation tuning in Cloud Speech-to-Text

When using Cloud Speech-to-Text, developers use what are called SpeechContext parameters to provide additional contextual information that can make transcription more accurate. This tuning process can help improve recognition of phrases that are common in the specific use case involved. For example, a company's customer service support line might want to better recognize the company's product names. Today, we are announcing three updates, all currently in beta, that make SpeechContext even more helpful for manually tuning ASR to improve transcription accuracy. These updates improve transcription accuracy to better support human agents.
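As a sketch of how SpeechContext hints are supplied, here is a request body for the Speech-to-Text `speech:recognize` REST method. The field names (`speechContexts`, `phrases`, `boost`) follow the Speech-to-Text API, with `boost` being part of the beta surface; the bucket path, product names, and boost value are hypothetical placeholders.

```python
import json

# Hypothetical product names a support line might want recognized reliably.
PRODUCT_NAMES = ["Acme Fastlane", "Acme Fastlane Pro"]

def build_recognize_request(audio_uri, phrases, boost=15.0):
    """Build a JSON body for the speech:recognize REST method.

    The speechContexts field carries SpeechContext phrase hints;
    boost (beta) weights those phrases relative to the base model.
    """
    return {
        "config": {
            "languageCode": "en-US",
            "speechContexts": [
                {"phrases": phrases, "boost": boost},
            ],
        },
        "audio": {"uri": audio_uri},
    }

request = build_recognize_request("gs://my-bucket/support-call.wav", PRODUCT_NAMES)
print(json.dumps(request, indent=2))
```

Keeping the hint list short and specific to the use case (product names, domain jargon) is what makes this kind of manual tuning effective; over-broad phrase lists dilute the benefit.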
Improving transcription to better support human agents

Accurate transcriptions of customer conversations can help human agents better respond to customer requests, resulting in better customer care.
Just like knowing the context in a conversation makes it easier for people to understand one another, ASR improves when the underlying AI understands the context behind what a speaker is saying. We use the term speech adaptation to describe this learning process. To help virtual agents quickly understand what customers need, and respond accurately, we're introducing an exciting new feature in Dialogflow.

Auto Speech Adaptation in Dialogflow Beta

In Dialogflow, our development suite for creating automated conversational experiences, knowing context can help virtual agents respond more accurately. Using the example in the animation above, if the Dialogflow agent knew the context was "ordering a burger" and that "cheese" is a common burger ingredient, it would probably understand that the user meant "cheese" and not "these". Similarly, if the virtual agent knew that the term "mail" is a common term in the context of a product return, it wouldn't confuse it with the words "male" or "nail". To meet that goal, the new Auto Speech Adaptation feature in Dialogflow helps the virtual agent automatically understand context by taking all training phrases, entities, and other agent-specific information into account. In some cases, this feature can result in a 40% or more increase in accuracy on a relative basis. It's easy to activate Auto Speech Adaptation: just click the "on" switch in the Dialogflow console (off by default), and you're all set!

Cloud Speech-to-Text baseline model improvements for IVRs and phone-based virtual agents

In April 2018, we introduced pre-built models for improved transcription accuracy from phone calls and video. We followed that up last February by announcing the availability of those models to all customers, not just those who had opted in to our data logging program. Today, we've further optimized our phone model for the short utterances that are typical of interactions with phone-based virtual agents. The new model is now 15% more accurate for U.S. English on a relative basis, beyond the improvements we previously announced. Applying speech adaptation can also provide additional improvements on top of that gain. We're constantly adding more quality improvements to the roadmap, an automatic benefit to any IVR or phone-based virtual agent without any code changes needed, and we will share more about these updates in future blog posts.
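To make the phone-model discussion concrete, here is a minimal sketch of a `speech:recognize` request body that opts into the phone-optimized model. The `model: "phone_call"` and `useEnhanced` fields follow the Speech-to-Text REST API; the bucket path, sample rate, and encoding are placeholder assumptions for typical telephony audio.

```python
import json

def build_phone_request(audio_uri):
    """Request body selecting the enhanced phone_call model,
    which is tuned for telephony audio such as IVR calls."""
    return {
        "config": {
            "languageCode": "en-US",
            "sampleRateHertz": 8000,   # typical telephony sample rate
            "encoding": "MULAW",       # common telephony codec
            "model": "phone_call",     # phone-optimized baseline model
            "useEnhanced": True,       # opt into the enhanced variant
        },
        "audio": {"uri": audio_uri},
    }

print(json.dumps(build_phone_request("gs://my-bucket/ivr-call.raw"), indent=2))
```

Because model selection is just a request parameter, baseline-model improvements like the ones described above reach existing IVR integrations without any further code changes.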