I use speech synthesis in a simple program, and I was wondering whether languages other than English are supported. I want the speech to be in the local language. Is that possible?
You can use SpeechSynthesizer.GetInstalledVoices to obtain a list of all available voices, together with their culture information. On my Windows 8.1 machine, a German and an English voice are installed. You can check whether a suitable voice is present with the same method.
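A minimal sketch of that check (Windows only; assumes a reference to the System.Speech assembly). It lists every installed voice with its culture, then uses the GetInstalledVoices(CultureInfo) overload to test whether a voice for the current UI culture exists:

```csharp
using System;
using System.Globalization;
using System.Linq;
using System.Speech.Synthesis; // requires a reference to System.Speech

class VoiceCheck
{
    static void Main()
    {
        using (var synth = new SpeechSynthesizer())
        {
            // List every installed voice together with its culture.
            foreach (InstalledVoice voice in synth.GetInstalledVoices())
            {
                VoiceInfo info = voice.VoiceInfo;
                Console.WriteLine("{0} ({1})", info.Name, info.Culture);
            }

            // Check whether an enabled voice exists for the local language.
            CultureInfo local = CultureInfo.CurrentUICulture;
            bool supported = synth.GetInstalledVoices(local).Any(v => v.Enabled);
            Console.WriteLine("Voice for {0} installed: {1}", local, supported);
        }
    }
}
```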
Here is a list of the supported languages on the Microsoft Speech Platform SDK 11
Related
How do we change the voice to be used from within our apps? I cannot seem to figure out what or where the default voices are for Windows 8.
I've read articles online that say that Microsoft David is MS Windows 8's latest and greatest voice, but I have a new Windows 8 Pro system and no such voice exists on my system (only Zira and some other guy).
I am aware of SelectVoice, which lets you pass the name of a voice as a string, but after trying about 30 different names, including David (and Dave), it throws an exception saying that voice does not exist.
I have tried InstalledVoice - but not sure how to use that.
How/where can we download different voices to be used with Windows Speech Recognition, and how do we select different voices to be used from within our code?
Also, SelectVoiceByHints() does absolutely nothing at all. I don't know why.
The SpeechSynthesizer class has a GetInstalledVoices method that returns a ReadOnlyCollection of the voices installed on your system (InstalledVoice type). To change the synthesizer's voice, call the SelectVoice method, which takes the voice name (a String).
using System.Linq;
using System.Speech.Synthesis;

SpeechSynthesizer synth = new SpeechSynthesizer();
// Pick the first installed voice and select it by name.
InstalledVoice voice = synth.GetInstalledVoices().First();
synth.SelectVoice(voice.VoiceInfo.Name);
synth.Speak("This is how you select an installed voice");
To see which voices are installed on your computer, go to:
Control Panel -> Speech Recognition -> Text to Speech
You can also configure settings there, such as the voice speed.
If you want to add more voices to your computer, look at companies such as:
Ivona - http://www.ivona.com/us/for-individuals/voices-for-windows/
I'm not sure, but SelectVoiceByHints should try to select, from the installed voices, the one that best matches the specifications you pass to the method.
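A hedged sketch of how SelectVoiceByHints is meant to work (Windows only, System.Speech): you pass gender, age, and culture hints, and the synthesizer falls back to the closest match among the installed voices rather than throwing when no exact match exists.

```csharp
using System;
using System.Globalization;
using System.Speech.Synthesis;

class HintsDemo
{
    static void Main()
    {
        using (var synth = new SpeechSynthesizer())
        {
            // Ask for an adult female voice for US English; the synthesizer
            // picks the installed voice that best matches these hints.
            synth.SelectVoiceByHints(
                VoiceGender.Female,
                VoiceAge.Adult,
                0, // position among voices matching the hints
                new CultureInfo("en-US"));

            Console.WriteLine("Selected: " + synth.Voice.Name);
            synth.Speak("Hello from the selected voice.");
        }
    }
}
```

If this appears to "do nothing", check synth.Voice afterwards: the call may have silently settled on the same default voice because no installed voice matched the hints better.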
I'm looking for a way for my Windows Store application (Windows 8 Metro) to pronounce words. Microsoft Speech isn't available in WinRT. My programming language is C#, and I'm working on a multi-language dictionary. Is there any way to provide word pronunciation in my application? Even English words would be enough for me.
Is there any way to use Google Translate Pronunciation ?
You can use the Microsoft Translator service, which provides text-to-speech functionality to applications. If you are interested in this solution, you can take a look at http://translatorservice.codeplex.com.
I'm looking for speech-to-text (from wave files) on Windows Server 2008 (or Windows Server 2008 R2) using C# (or at least an API that I can call from C#) that supports multiple languages.
As far as I know, I can't use .NET speech (SAPI) because it works only on Vista / Windows 7.
I can't use the Microsoft Speech Platform because it does not support all the languages I need (as far as I checked, there is no Hebrew (he) support).
It can't be a web-based service (I need it on my server).
I'm looking for something that can be used in commercial software, and I'm also willing to pay for a third-party product.
Can you please help me with that?
Thanks
You have text-to-speech listed as a tag, but the description sounds like speech recognition. If I understand correctly, you want to take a wav file containing speech and convert it to text. This is not even typical speech recognition, because most speech-reco systems work on targeted speech input, using grammars to restrict the search space the engine has to cover.

I think what you are describing is automatic transcription, akin to what Google Voice does to your voicemail messages when it sends you a text version in an email. This is a much harder problem, and the state of the art is not that advanced right now. Most of these solutions are offered as services, and the best ones still use human transcribers when the speech recognition confidence is low.

I think the leader in this area is Nuance; I would check with them for a solution. I know they recently bought out a company that provides this automated transcription service, and perhaps they now offer it as a product. They are also a leader in automatically transcribing doctors' orders/findings with their product Dragon NaturallySpeaking.
I would like to write a program in C# that includes limited vocabulary speech recognition of languages such as Finnish or Polish. Microsoft's Speech SDK works great for English but can it support other languages like those? If not, what other (hopefully affordable) software tools are available?
Have a look at the Microsoft Server Speech Platform 10.2. It supports both STT and TTS for 26 languages, including Finnish and Polish!
Here's a link that will get you started.
http://www.codeproject.com/KB/audio-video/TTSandSR.aspx
A bit late post, sorry for that.
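For the limited-vocabulary case, a sketch along these lines should work with the Speech Platform (this assumes the Microsoft.Speech runtime and the fi-FI language pack are installed; Microsoft.Speech mirrors the System.Speech API, but the engine is constructed with an explicit culture):

```csharp
using System;
using System.Globalization;
using Microsoft.Speech.Recognition; // Speech Platform runtime, not System.Speech

class FinnishReco
{
    static void Main()
    {
        var culture = new CultureInfo("fi-FI");
        using (var engine = new SpeechRecognitionEngine(culture))
        {
            // Limited vocabulary: recognize only these two words.
            var words = new Choices("kyllä", "ei"); // "yes", "no"
            var builder = new GrammarBuilder(words) { Culture = culture };
            engine.LoadGrammar(new Grammar(builder));

            engine.SetInputToDefaultAudioDevice();
            RecognitionResult result = engine.Recognize();
            if (result != null)
                Console.WriteLine("Heard: " + result.Text);
        }
    }
}
```

The grammar's culture must match the engine's culture, which is why it is set explicitly on the GrammarBuilder.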
What is the difference between these two methods in C# using the speech API or SAPI?
using SpeechLib;
SpVoice speech = new SpVoice();
speech.Speak(text, SpeechVoiceSpeakFlags.SVSFlagsAsync);
returns the Acapela voices, and
SpeechSynthesizer ss = new SpeechSynthesizer();
ss.SpeakAsync ("Hello, world");
does not work with Acapela voices.
The first one returns all voices, but the second one returns only a few. Is this related to SAPI 5.1 vs. SAPI 5.3?
The behavior is the same on Vista and XP: on both, SpVoice detects the Acapela voices, but SpeechSynthesizer does not.
I guess XP uses SAPI 5.1 and Vista uses SAPI 5.3, so why the same behavior on both OSes but different behavior between the two APIs?
Also, which API is more powerful, and what are the differences between the two?
SpeechLib is an Interop DLL that makes use of classic COM-based SAPI under the covers. System.Speech was developed by Microsoft to interact with Text-to-speech (and voice recognition) directly from within managed code.
In general, it's cleaner to stick with the managed library (System.Speech) when you're writing a managed application.
It's definitely not related to SAPI version--the most likely problem here is that a voice vendor (in this case Acapela) has to explicitly implement support for certain System.Speech features. It's possible that the Acapela voices that you have support everything that is required, but it's also possible that they don't. Your best bet would be to ask the Acapela Group directly.
Voices are registered in HKLM\SOFTWARE\Microsoft\Speech\Voices\Tokens, and you should see the Windows built-in voices, as well as the Acapela voices that you have added, listed there. If you spot any obvious differences in how they're registered, you might be able to make the Acapela voices work by making their registration match that of, for example, Microsoft Anna.
But I'd say the most likely possibility is that the Acapela voices have not been updated to support all of the interfaces required by System.Speech.
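To compare the registrations, a small sketch that enumerates the SAPI voice tokens from the registry (Windows only; a 32-bit process on 64-bit Windows may instead see the WOW6432Node view of this key):

```csharp
using System;
using Microsoft.Win32;

class ListVoiceTokens
{
    static void Main()
    {
        const string tokens = @"SOFTWARE\Microsoft\Speech\Voices\Tokens";
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(tokens))
        {
            if (key == null) return;
            foreach (string name in key.GetSubKeyNames())
            {
                // The default value of each token is the voice's display name.
                using (RegistryKey voice = key.OpenSubKey(name))
                    Console.WriteLine("{0}: {1}", name, voice.GetValue(null));
            }
        }
    }
}
```

Comparing the subkeys and attributes of a built-in voice with those of an Acapela voice should show whether anything is missing from the latter's registration.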
SpeechLib is an interop DLL and so maps to whatever version of SAPI it was created for (you can check its properties).
System.Speech.* is the "official" support for speech in the .NET Framework. SpeechSynthesizer chooses which speech library to use at runtime (much like the System.Web.Mail classes did).
I'm not sure why they return a different number of voices, but it is likely related to the SAPI version being used.