SAPI does not implement phonetic alphabet selection. Speech Command App - C#

In my speech command app, I load files from an external source, process that data, and load it into a list of possible commands for execution.
When I run the app, I get this message in the console:
Main.exe Information: 0: SAPI does not implement phonetic alphabet selection.
I tried solutions such as adding
gram.Culture = new System.Globalization.CultureInfo("en-GB");
but I think this is either outdated or does not work with this type of WPF application.
Any tips?
Code:
public MainWindow()
{
    InitializeComponent();
}

SpeechSynthesizer synth = new SpeechSynthesizer();
PromptBuilder builder = new PromptBuilder();
SpeechRecognitionEngine recog = new SpeechRecognitionEngine();
private System.Windows.Input.Key k;
private int vkey;
private Choices cmd;

private void Button_Click_1(object sender, RoutedEventArgs e)
{
    b1.IsEnabled = false;
    Choices list = new Choices();
    cmd = Singleton.getInstance().getChoices();
    list.Add(new string[] { "hello", "test", "it works", "sup", "windows", "grenade" });
    Grammar gr = new Grammar(new GrammarBuilder(cmd));
    try
    {
        recog.RequestRecognizerUpdate();
        recog.LoadGrammar(gr);
        recog.SpeechRecognized += Recog_SpeechRecognized;
        recog.SetInputToDefaultAudioDevice();
        recog.RecognizeAsync(RecognizeMode.Multiple);
    }
    catch
    {
        return;
    }
}
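One thing that usually matters here: in System.Speech the recognizer culture is chosen when the SpeechRecognitionEngine is constructed, not by setting a culture on the grammar afterwards, and the "SAPI does not implement phonetic alphabet selection" line is normally just an informational trace rather than an error. A minimal sketch of constructing the engine with an explicit culture, assuming an en-GB recognizer is actually installed (the phrases are placeholders):

using System.Globalization;
using System.Speech.Recognition;

// Sketch only: pick the culture up front; the constructor throws if no matching
// recognizer is installed on the machine.
var recog = new SpeechRecognitionEngine(new CultureInfo("en-GB"));

var cmd = new Choices("hello", "test", "it works");          // placeholder phrases
var gb = new GrammarBuilder(cmd) { Culture = recog.RecognizerInfo.Culture };

recog.LoadGrammar(new Grammar(gb));
recog.SetInputToDefaultAudioDevice();
recog.RecognizeAsync(RecognizeMode.Multiple);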

Related

How to execute voice Google search

I can't search Google using my voice. Search is disabled by default and can only be enabled by saying the word "search". I separated the words and phrases to be searched for into another file so that they won't get mixed with my recognition file and the response file. I placed the file path under the if statement for "search" so that it can only be accessed when I say "search".
public partial class Form1 : Form
{
    //user and jarvis texts
    string[] grammarFile = File.ReadAllLines(@"C:\Friday AI\user.txt.txt");
    string[] responseFile = File.ReadAllLines(@"C:\Friday AI\jarvis.txt.txt");
    //speech synthesis
    SpeechSynthesizer speechSynth = new SpeechSynthesizer();
    //speech recognition
    Choices grammarList = new Choices();
    SpeechRecognitionEngine speechRecognition = new SpeechRecognitionEngine();
    public Boolean search = false;

    public Form1()
    {
        InitializeComponent();
        //initialize grammarfile
        grammarList.Add(grammarFile);
        Grammar grammar = new Grammar(new GrammarBuilder(grammarList));
        try
        {
            speechRecognition.RequestRecognizerUpdate();
            speechRecognition.LoadGrammar(grammar);
            speechRecognition.SpeechRecognized += rec_SpeechRecognized;
            speechRecognition.SetInputToDefaultAudioDevice();
            speechRecognition.RecognizeAsync(RecognizeMode.Multiple);
        }
        catch
        {
            return;
        }
    }

    private void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        string result = e.Result.Text;
        int resp = Array.IndexOf(grammarFile, result);
        if (search)
        {
            Process.Start(@"chrome.exe", "--incognito https://www.google.com/search?q=" + result);
            label5.Text = "Enabled";
        }
        if (wake == true) // 'wake' is a flag set elsewhere in the class (not shown)
        {
            if (result.Contains("search"))
            {
                search = true;
                string[] globalsearch = File.ReadAllLines(@"C:\Friday AI\global search words.txt");
                grammarList.Add(globalsearch);
                label5.Text = "Enabled";
            }
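One likely snag, sketched here under the assumption that this is the problem: grammarList.Add(globalsearch) only changes the Choices object; the Grammar built from it in the constructor is not regenerated, so the engine never learns the new phrases. Loading a separate grammar for the search vocabulary at runtime looks roughly like this (file path copied from the question):

if (result.Contains("search"))
{
    search = true;
    string[] globalsearch = File.ReadAllLines(@"C:\Friday AI\global search words.txt");
    Grammar searchGrammar = new Grammar(new GrammarBuilder(new Choices(globalsearch)));

    // The engine is already recognizing, so load the new grammar once it reaches
    // a safe update point (registering the handler once would be cleaner in real code).
    speechRecognition.RecognizerUpdateReached += (s, args) =>
        speechRecognition.LoadGrammarAsync(searchGrammar);
    speechRecognition.RequestRecognizerUpdate();

    label5.Text = "Enabled";
}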

Dictation is not working in my WPF application - need suggestions

I'm trying to use free dictation for voice recognition, but it is not working for me. Maybe someone can have a look.
Here it starts :
public MainWindow()
{
InitializeComponent();
recognizer = new SpeechRecognitionEngine();
freeTextDictation();
}
Here is the logic:
private void freeTextDictation()
{
GrammarBuilder startStop = new GrammarBuilder();
GrammarBuilder dictation = new GrammarBuilder();
dictation.AppendDictation();
startStop.Append(new SemanticResultKey("StartDictation", new SemanticResultValue("Start Dictation", true)));
startStop.Append(new SemanticResultKey("DictationInput", dictation));
startStop.Append(new SemanticResultKey("StopDictation", new SemanticResultValue("Stop Dictation", false)));
Grammar grammar = new Grammar(startStop);
grammar.Enabled = true;
grammar.Name = " Free-Text Dictation ";
recognizer.LoadGrammar(grammar);
recognizer.SetInputToDefaultAudioDevice();
recognizer.SpeechRecognized += Recognizer_SpeechRecognized;
}
private void Recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
//MessageBox.Show(e.Result.Text.ToString());
Console.WriteLine(e.Result.Text.ToString());
}
The code is from the Microsoft website, but I can't get it to produce any output.
Unfortunately nothing happens.
What can I do?
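For reference, nothing in freeTextDictation ever starts the engine, which would explain why no events fire; a minimal sketch of the missing call (note that with this sample grammar the whole utterance has to be "Start Dictation ... Stop Dictation" in one go):

// Sketch: after the grammar and input are wired up, recognition still has to be started.
recognizer.RecognizeAsync(RecognizeMode.Multiple);   // keep listening until explicitly stopped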

Speech Recognition windows 10

I'm trying to build speech recognition in Windows 10 (using Cortana) in Visual C#.
This is part of my code for speech recognition using the old System.Speech.Recognition. It works great, but it only supports English.
SpeechSynthesizer sSynth = new SpeechSynthesizer();
PromptBuilder pBuilder = new PromptBuilder();
SpeechRecognitionEngine sRecognize = new SpeechRecognitionEngine();
Choices sList = new Choices();
private void Form1_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
pBuilder.ClearContent();
pBuilder.AppendText(textBox2.Text);
sSynth.Speak(pBuilder);
}
private void button2_Click(object sender, EventArgs e)
{
button2.Enabled = false;
button3.Enabled = true;
sList.Add(new string[] { "who are you", "play a song" });
Grammar gr = new Grammar(new GrammarBuilder(sList));
try
{
sRecognize.RequestRecognizerUpdate();
sRecognize.LoadGrammar(gr);
sRecognize.SpeechRecognized += sRecognize_SpeechRecognized;
sRecognize.SetInputToDefaultAudioDevice();
sRecognize.RecognizeAsync(RecognizeMode.Multiple);
}
catch (Exception ex)
{
MessageBox.Show(ex.Message, "Error");
}
}
private void sRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
textBox1.Text = textBox1.Text + " " + e.Result.Text.ToString() + "\r\n";
}
How can I do it using the new speech recognition in Windows 10?
Use the Microsoft Speech Platform SDK v11.0 (Microsoft.Speech.Recognition).
It works like System.Speech, but you can use the Italian language (separate install) and also use SRGS grammars. I work with both the Kinect (SetInputToAudioStream) and the default input device (SetInputToDefaultAudioDevice) without hassle.
It also works offline, so there is no need to be online as with Cortana.
With an SRGS grammar you can get a decent level of complexity for your commands.
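For illustration (the file name is made up), an SRGS grammar kept in a .grxml file can be loaded straight into the recognizer; the same constructor overload exists in both Microsoft.Speech and System.Speech:

// Sketch only: load a hand-written SRGS XML grammar from disk.
Grammar srgsGrammar = new Grammar(@"Grammars\Commands.grxml");
recognizer.LoadGrammar(srgsGrammar);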
UPDATE
Here is how I initialize the recognizer
private RecognizerInfo GetRecognizer(string culture, string recognizerId)
{
try
{
foreach (var recognizer in SpeechRecognitionEngine.InstalledRecognizers())
{
if (!culture.Equals(recognizer.Culture.Name, StringComparison.OrdinalIgnoreCase)) continue;
if (!string.IsNullOrWhiteSpace(recognizerId))
{
string value;
recognizer.AdditionalInfo.TryGetValue(recognizerId, out value);
if ("true".Equals(value, StringComparison.OrdinalIgnoreCase))
return recognizer;
}
else
return recognizer;
}
}
catch (Exception e)
{
log.Error(m => m("Recognizer not found"), e);
}
return null;
}
private void InitializeSpeechRecognizer(string culture, string recognizerId, Func<Stream> audioStream)
{
log.Debug(x => x("Initializing SpeechRecognizer..."));
try
{
var recognizerInfo = GetRecognizer(culture, recognizerId);
if (recognizerInfo != null)
{
recognizer = new SpeechRecognitionEngine(recognizerInfo.Id);
//recognizer.LoadGrammar(VoiceCommands.GetCommandsGrammar(recognizerInfo.Culture));
recognizer.LoadGrammar(grammar);
recognizer.SpeechRecognized += SpeechRecognized;
recognizer.SpeechRecognitionRejected += SpeechRejected;
if (audioStream == null)
{
log.Debug(x => x("...input on DefaultAudioDevice"));
recognizer.SetInputToDefaultAudioDevice();
}
else
{
log.Debug(x => x("SpeechRecognizer input on CustomAudioStream"));
recognizer.SetInputToAudioStream(audioStream(), new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
}
}
else
{
log.Error(x => x(Properties.Resources.SpeechRecognizerNotFound, recognizerId));
throw new Exception(string.Format(Properties.Resources.SpeechRecognizerNotFound, recognizerId));
}
log.Debug(x => x("...complete"));
}
catch (Exception e)
{
log.Error(m => m("Error while initializing SpeechEngine"), e);
throw;
}
}
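A call site would then look something like this (a sketch; "it-IT" stands in for whichever language pack is installed, and null for the recognizer id just means "take the first recognizer for that culture"):

// Sketch: default microphone, first installed it-IT recognizer, then start listening.
InitializeSpeechRecognizer("it-IT", null, null);
recognizer.RecognizeAsync(RecognizeMode.Multiple);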
A Cortana API usage example is here. You can copy it and start modifying it according to your needs. It creates a dialog with the user. You cannot exactly reproduce your System.Speech code with the Cortana API because it is designed for another purpose. If you still want to recognize just a few words, you can continue using the System.Speech API.
The System.Speech API supports other languages, not just English. You can find more information here:
Change the language of Speech Recognition Engine library

Speech recognition not working

I am developing a WPF application which uses speech recognition. The events do not fire when the grammar words are spoken. Secondly, I am not sure whether the engine starts up or not. How can I check that? Following is the code.
namespace Summerproject_trial
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
private SpeechRecognitionEngine recEngine =
new SpeechRecognitionEngine();
public MainWindow()
{
InitializeComponent();
Choices mychoices = new Choices();
mychoices.Add(new string[] {"Ok", "Test", "Hello"});
GrammarBuilder gb = new GrammarBuilder();
gb.Append(mychoices);
Grammar mygrammar = new Grammar(gb);
recEngine.LoadGrammarAsync(mygrammar);
recEngine.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>
(recEngine_SpeechRecognized);
recEngine.SetInputToDefaultAudioDevice();
}
void recEngine_SpeechRecognized(object sender,
SpeechRecognizedEventArgs e)
{
MessageBox.Show("You said: " + e.Result.Text);
}
}
}
You forgot to start listening to input.
Try this at the end of your constructor:
recEngine.RecognizeAsync(RecognizeMode.Multiple);
@Anri's answer is needed, but you also need to create the SpeechRecognitionEngine with a CultureInfo. (You can create a SpeechRecognitionEngine without a CultureInfo, but then you need to set the recognizer language explicitly.)
Also: mobile earphones (by which I assume you mean some sort of Bluetooth headset) will typically NOT work with System.Speech. The desktop SR engine requires higher-quality audio input than it can get from Bluetooth.
So, complete code that should work:
private SpeechRecognitionEngine recEngine =
new SpeechRecognitionEngine("en-US");
public MainWindow()
{
InitializeComponent();
Choices mychoices = new Choices();
mychoices.Add(new string[] {"Ok", "Test", "Hello"});
GrammarBuilder gb = new GrammarBuilder();
gb.Append(mychoices);
Grammar mygrammar = new Grammar(gb);
recEngine.LoadGrammarAsync(mygrammar);
recEngine.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>
(recEngine_SpeechRecognized);
recEngine.SetInputToDefaultAudioDevice();
recEngine.RecognizeAsync(RecognizeMode.Multiple);
}

Why won't my AT Command read SMS messages?

I'm attempting to write a program that would enable texts to be sent out to customers. I'm using AT commands with a GSM modem to accomplish this. I have looked at various bits of documentation but have been unable to find a solution to the following problem.
I am attempting to make the GSM modem return all of the text messages contained in its memory. I have tried many combinations of AT commands and parsing techniques to put the result into a text box, but to no avail.
Any help on this would be most appreciated; my code is below.
private SerialPort _serialPort2 = new SerialPort("COM3", 115200);
private void MailBox_Load(object sender, EventArgs e)
{
}
private void button1_Click(object sender, EventArgs e)
{
_serialPort2.Open();
//_serialPort2.Write("AT+CMGF=1 \r");
_serialPort2.Write("AT+CMGL=\"ALL\"");
string SerialData = _serialPort2.ReadExisting();
var getnumbers = new string((from s in SerialData where char.IsDigit(s) select s).ToArray());
var getText = SerialData;
SendTxt.Text = getnumbers;
SendMsgBox.Text = getText;
//for (int i = 0; i < SerialData.Length; i++ )
//{
// if (char.IsDigit(SerialData))
//}
//.Text = _serialPort2.ReadExisting();
//string[] text = { textBox1.Text };
//IEnumerable<string> formattext = from words in text where words.("+447") select words;
// foreach (var word in formattext)
//{
//SenderBox.Items.Add(word.ToString());
// }
_serialPort2.Close();
//_serialPort2.DataReceived += new SerialDataReceivedEventHandler(_serialPort2_DataReceived);
}
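For comparison, a minimal sketch of the usual pattern: text mode has to be enabled (the commented-out AT+CMGF=1 line), every command needs a trailing \r, and the modem needs a moment to reply before ReadExisting is called. The delay values here are guesses; a DataReceived handler is the cleaner approach:

_serialPort2.Open();

_serialPort2.Write("AT+CMGF=1\r");            // switch the modem to text mode
System.Threading.Thread.Sleep(500);           // crude wait for the "OK" response
_serialPort2.ReadExisting();                  // discard it

_serialPort2.Write("AT+CMGL=\"ALL\"\r");      // list all stored messages (note the \r)
System.Threading.Thread.Sleep(1000);          // give the modem time to send the listing
string response = _serialPort2.ReadExisting();

SendMsgBox.Text = response;                   // raw +CMGL blocks; parse from here
_serialPort2.Close();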
