I'm from Greece and I want to make an application that will use SAPI to interact with the user, but I can't find a way to change the language of SAPI from English to Greek.
My OS is set up for both Greek and English by default, I have the SAPI SDK installed, and the Greek language is supported by SAPI.
The problem is that SAPI doesn't automatically recognise the language of the text passed to it, and falls back to spelling out the individual letters one by one.
Here is the code I'm using, with English text:
using SpeechLib;
SpVoice voice = new SpVoice();
voice.Speak("Pdf File Successfully Installed", SpeechVoiceSpeakFlags.SVSFlagsAsync);
voice.WaitUntilDone(30000);
This works, but when I pass Greek text to the function (e.g. "Να ενα κειμενο", roughly "here is a text"), the problem occurs.
You can set the language by passing SSML to the Speak API and including the xml:lang attribute.
For example, this should work:
SpVoice voice = new SpVoice();
voice.Speak(
"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='el-GR'>"
+ "Να ενα κειμενο"
+ "</speak>",
SpeechVoiceSpeakFlags.SVSFlagsAsync|SpeechVoiceSpeakFlags.SVSFIsXML);
voice.WaitUntilDone(30000);
You can also switch language mid-speech. The documentation has this example:
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xml:lang="en-US">
For English, press 1.
<voice xml:lang="fr-FR" gender="female">
Pour le français, appuyez sur 2 </voice>
</speak>
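To send SSML like this from C#, the same Speak call with the SVSFIsXML flag should work; a minimal sketch, with the SSML string taken from the documentation example above:
SpVoice voice = new SpVoice();
string ssml =
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>"
    + "For English, press 1."
    + "<voice xml:lang='fr-FR' gender='female'>Pour le français, appuyez sur 2</voice>"
    + "</speak>";
// SVSFIsXML makes SAPI parse the string as markup instead of reading it out literally
voice.Speak(ssml, SpeechVoiceSpeakFlags.SVSFlagsAsync | SpeechVoiceSpeakFlags.SVSFIsXML);
voice.WaitUntilDone(30000);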
For more, see here:
https://msdn.microsoft.com/en-us/library/jj127898.aspx
Related
I have a WPF application which launches web apps. I want to have spell check for the Finnish language. By default, spell check for English is there. I have written the code below to add support for the Finnish language.
Cef.UIThreadTaskFactory.StartNew(delegate
{
var browser = (sender as ChromiumWebBrowser);
var requestContext = browser.GetBrowserHost().RequestContext;
requestContext.SetPreference("browser.enable_spellchecking", true, out _);
requestContext.SetPreference("spellcheck.dictionaries", new List<string> { "en-US", "fi-FI" }, out _);
});
After adding this code, I see the problems below:
Spell check for English, which was working before (red underline for incorrect words), has stopped.
Spell check is not working for the Finnish language.
I checked "C:\Users\<someUser>\AppData\Local\CEF\User Data\Dictionaries": the English dictionary got downloaded, but not the Finnish one.
Does that mean that CEF doesn't support the Finnish language? When I try "en-AU", that dictionary does get downloaded.
Basically, spell check is only available for languages whose dictionary is present; see https://github.com/cvsuser-chromium/third_party_hunspell_dictionaries
Also, I have asked a question on the CEF Forum about whether missing languages can be added; see the post here: https://magpcss.org/ceforum/viewtopic.php?f=10&t=17852#p46719
For now, there is no support for the Finnish language ("fi-FI").
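To confirm which of the requested dictionaries CEF was actually able to download, you can list the Dictionaries folder mentioned in the question; a small diagnostic sketch (the path is the one observed in the question and may vary with the CEF version):
using System;
using System.IO;

// List the Hunspell dictionary files CEF has downloaded so far.
string dictDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    @"CEF\User Data\Dictionaries");
if (Directory.Exists(dictDir))
{
    foreach (string file in Directory.GetFiles(dictDir))
        Console.WriteLine(Path.GetFileName(file)); // e.g. an en-US dictionary, but no fi-FI entry
}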
So I'm trying to add a new language, specifically Norwegian, to SpeechSynthesizer, but it doesn't seem to get installed.
Found this:
Add another voice into .NET Speech
(but there the problem is that Czech isn't supported)
I have installed the Norwegian pack from here:
http://www.microsoft.com/en-us/download/details.aspx?id=27224
In my code I use this to check if it is installed:
var speaker = new SpeechSynthesizer();
foreach (var voice in speaker.GetInstalledVoices())
{
Console.WriteLine(voice.VoiceInfo.Description);
}
But it only outputs:
Microsoft Zira Desktop - English (United States)
I have checked the Text-to-Speech settings, where this is also the only option.
I have also tried logging off/on and restarting the computer.
Anyone know how to fix this?
You may need to add a Speech Language to Windows 10 and set your Locale, Country, Windows display language and Speech language so they are all aligned with one of Cortana's supported locale configurations.
To confirm the settings are set correctly:
Open Settings. Select Time & language, and then Region & Language.
Check the Language (set as default) setting for your Windows display language. If your desired language is not available, add your desired language:
Click Add Language.
Select your desired language from the list.
Select the desired locale, which is the language/country combination.
Click on the newly selected locale and select Options.
Under Download language pack, click Download.
Under Speech, click Download.
After the downloads are complete (this could take several minutes), return to the Time & Language settings.
Click on your new language and select Set as Default.
NOTE: If you changed languages, you must sign out of your account and back in for the new setting to take effect.
Check the Country or region setting. Make sure the country selected corresponds with the Windows display language set in the Language setting.
Return to Settings and Time & language, and then select Speech. Check the Speech language setting, and make sure it’s aligned with the previous settings.
After you have correctly done the above, your language should appear in the SpeechSynthesizer.AllVoices collection. You should then be able to assign this voice to your SpeechSynthesizer instance's Voice property:
private async void SpeakText(MediaElement audioPlayer, string TTS)
{
SpeechSynthesizer ttssynthesizer = new SpeechSynthesizer();
//Set the Voice/Speaker to Spanish
using (var speaker = new SpeechSynthesizer())
{
speaker.Voice = (SpeechSynthesizer.AllVoices.First(x => x.Gender == VoiceGender.Female && x.Language.Contains("ES")) );
ttssynthesizer.Voice = speaker.Voice;
}
SpeechSynthesisStream ttsStream = await ttssynthesizer.SynthesizeTextToStreamAsync(TTS);
audioPlayer.SetSource(ttsStream, ttsStream.ContentType);
}
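For the Norwegian case in the question, the same lookup would just filter on the Norwegian language tag instead of "ES"; a minimal sketch, assuming the nb-NO speech data was downloaded and shows up in AllVoices:
// List every voice Windows exposes through Windows.Media.SpeechSynthesis,
// then pick a Norwegian one if it is present.
var synthesizer = new SpeechSynthesizer();
foreach (var v in SpeechSynthesizer.AllVoices)
    Debug.WriteLine($"{v.DisplayName} ({v.Language})");
var norwegian = SpeechSynthesizer.AllVoices
    .FirstOrDefault(v => v.Language.StartsWith("nb", StringComparison.OrdinalIgnoreCase));
if (norwegian != null)
    synthesizer.Voice = norwegian;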
http://answers.microsoft.com/en-us/windows/forum/windows_10-other_settings/speech-language-in-windows-10-home/3f04bc02-9953-40b1-951c-c1d262fc3f63?auth=1
I have added a speech recognition feature to my program. However, if I try to run the program and the language in the Speech Properties is set to anything other than "Microsoft Speech Recognizer 8.0 for Windows (English - US)", the program fails to load.
I would like to have it so that the program will load no matter which language is selected.
The code for my voice command is as follows:
vcstat.Text = "Voice Control Enabled";
recognizer = new SpeechRecognizer();
recognizer.SpeechDetected += recognizer_SpeechDetected;
recognizer.SpeechRecognitionRejected += recognizer_SpeechRecognitionRejected;
recognizer.SpeechRecognized += recognizer_SpeechRecognized;
GrammarBuilder grammar = new GrammarBuilder();
grammar.Append(new Choices("Cut", "Copy", "Paste", "Select All Text", "Print", "Unselect All Text", "Delete", "Save", "Save As", "Open", "New", "Close Basic Word Processor"));
recognizer.LoadGrammar(new Grammar(grammar));
There is some more code, but that's to do with the actual commands, so I don't think it's necessary to post it here.
If somebody could help me figure out a way to allow the program to start, regardless of the Speech Recognition Engine in use, I'd really appreciate it.
You can only use Speech Recognition in a different language if a MUI language pack is installed on the client's computer for one of the supported languages.
http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_programs/windows-7-speech-recognition-language-selection/0a859099-a76d-4799-abe9-847997399927
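If you want the program to start regardless of which recognizer is selected, one option is to enumerate the installed recognizers yourself and only enable voice control when a usable engine is found; a rough sketch, using an in-process SpeechRecognitionEngine instead of the shared SpeechRecognizer from the question:
using System.Linq;
using System.Speech.Recognition;

// Prefer the English (US) recognizer, otherwise fall back to whatever is installed.
var recognizers = SpeechRecognitionEngine.InstalledRecognizers();
var info = recognizers.FirstOrDefault(r => r.Culture.Name == "en-US")
           ?? recognizers.FirstOrDefault();
if (info == null)
{
    vcstat.Text = "Voice Control Unavailable"; // no recognizer installed at all
    return;
}
var engine = new SpeechRecognitionEngine(info);
var grammar = new GrammarBuilder { Culture = info.Culture }; // grammar culture must match the engine
grammar.Append(new Choices("Cut", "Copy", "Paste", "Select All Text", "Print",
    "Unselect All Text", "Delete", "Save", "Save As", "Open", "New",
    "Close Basic Word Processor"));
engine.LoadGrammar(new Grammar(grammar));
engine.SetInputToDefaultAudioDevice();
engine.RecognizeAsync(RecognizeMode.Multiple);
vcstat.Text = "Voice Control Enabled";
A non-English engine will probably do a poor job with these English phrases, but at least the program no longer fails to load when a different recognizer is selected.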
I'm trying to change the pitch of spoken text via SSML and the .NET SpeechSynthesizer (System.Speech.Synthesis)
SpeechSynthesizer synthesizer = new SpeechSynthesizer();
PromptBuilder builder = new PromptBuilder();
builder.AppendSsml(@"C:\Users\me\Documents\ssml1.xml");
synthesizer.Speak(builder);
The content of the ssml1.xml file is:
<?xml version="1.0" encoding="ISO-8859-1"?>
<ssml:speak version="1.0"
xmlns:ssml="http://www.w3.org/2001/10/synthesis"
xml:lang="en-US">
<ssml:sentence>
Your order for <ssml:prosody pitch="+30%" rate="-90%" >8 books</ssml:prosody>
will be shipped tomorrow.
</ssml:sentence>
</ssml:speak>
The rate is recognized: "8 books" is spoken much more slowly than the rest, but no matter what value is set for "pitch", it makes no difference! Allowed values can be found here:
http://www.w3.org/TR/speech-synthesis/#S3.2.4
Am I missing something, or is changing the pitch just not supported by the Microsoft Speech engine?
fritz
While the SsmlParser engine used by System.Speech accepts a pitch attribute in its ProcessProsody method, it does not actually process it.
It only processes the range, rate, volume and duration attributes. It also parses contour, but that is processed as range (not sure why)...
Edit: if you don't really need to read the text from an SSML XML file, you can build the prompt programmatically.
Instead of
builder.AppendSsml(@"C:\Users\me\Documents\ssml1.xml");
use
builder.Culture = CultureInfo.CreateSpecificCulture("en-US");
builder.StartVoice(builder.Culture);
builder.StartSentence();
builder.AppendText("Your order for ");
builder.StartStyle(new PromptStyle() { Emphasis = PromptEmphasis.Strong, Rate = PromptRate.ExtraSlow });
builder.AppendText("8 books");
builder.EndStyle();
builder.AppendText(" will be shipped tomorrow.");
builder.EndSentence();
builder.EndVoice();
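The builder is then spoken exactly as in the original snippet:
synthesizer.Speak(builder);
Note that PromptStyle only exposes Emphasis, Rate and Volume, so this substitutes emphasis and a slower rate for the unsupported pitch change rather than reproducing it.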