Microsoft Speech Recognition setInputToDefaultAudioDevice throws exception - c#

Hello guys, I'm having trouble with MS Speech recognition.
My code is simple:
static void init()
{
    string enUsEngine = string.Empty;
    foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
    {
        Console.WriteLine(ri.Culture);
        if (ri.Culture.Name.Equals("en-US"))
        {
            enUsEngine = ri.Id;
        }
    }
    SpeechRecognitionEngine recogEngine = new SpeechRecognitionEngine(enUsEngine);
    Grammar grammar = new Grammar("grammar.xml");
    recogEngine.LoadGrammar(grammar);
    recogEngine.SpeechRecognized += recogEngine_SpeechRecognized;
    recogEngine.RecognizeCompleted += recogEngine_RecognizeCompleted;
    recogEngine.SetInputToDefaultAudioDevice();
    recogEngine.RecognizeAsync(RecognizeMode.Multiple);
}
The call to SetInputToDefaultAudioDevice() then throws an InvalidOperationException:
System.InvalidOperationException: Cannot find the requested data item, such as a data key or value.
I downloaded and installed the MSSpeech SDK (Microsoft.Speech.dll), and also downloaded the language packs (en-US, ko-KR).
My microphone driver is installed and the device is enabled in Control Panel.
Please help me.
My operating system is Windows 10; is that a problem for using the Speech Recognition API?

Most probably you are using Microsoft.Speech.Recognition when you really should be using System.Speech.Recognition.
Change this:
using Microsoft.Speech.Recognition;
to this:
using System.Speech.Recognition;
You can leave the rest of the code as it is.
Why? Well, here are some answers:
What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?
In short, Microsoft.Speech.Recognition is the server variant: it is designed for low-quality audio like you find in call centres (used for automation etc.), which means it is not compatible with all audio input devices.
By contrast, System.Speech.Recognition is for desktop apps, and it fully supports the default recording devices installed on Windows.
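For reference, here is a minimal sketch of the desktop version; the grammar file name is taken from the question, everything else is the standard System.Speech API:

    using System;
    using System.Speech.Recognition;

    class Program
    {
        static void Main()
        {
            // Desktop recognizer for en-US; throws if no matching recognizer is installed.
            using (var recognizer = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
            {
                recognizer.LoadGrammar(new Grammar("grammar.xml"));
                recognizer.SpeechRecognized += (s, e) => Console.WriteLine(e.Result.Text);
                recognizer.SetInputToDefaultAudioDevice(); // works with the default Windows recording device
                recognizer.RecognizeAsync(RecognizeMode.Multiple);
                Console.ReadLine(); // keep the process alive while recognition runs
            }
        }
    }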

Related

Access MacBook camera using C# and Xamarin.Mac forms on Visual Studio?

I need to integrate a video stream from my MacBook camera using a Xamarin.Mac form. However, all of the documentation I can find only tells you how to do so on the iOS and Android platforms.
How would you go about getting a video stream from a MacBook? Are there any libraries I should look at?
You will want to review the AVFoundation APIs (QTKit is deprecated).
You can create a custom Xamarin.Forms View renderer based on an NSView and assign the AVCaptureVideoPreviewLayer as the control's layer to stream the camera output to that control.
Store class-level references to the following, and make sure you Dispose of them when your control goes out of scope, otherwise there will be leaks:
AVCaptureDevice device;
AVCaptureDeviceInput input;
AVCaptureStillImageOutput output;
AVCaptureSession session;
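The renderer wiring itself might look like the following sketch; CameraView, CameraViewRenderer and SetupCapture are hypothetical names, while the ExportRenderer/ViewRenderer pattern is standard Xamarin.Forms:

    using AppKit;
    using Xamarin.Forms;
    using Xamarin.Forms.Platform.MacOS;

    [assembly: ExportRenderer(typeof(CameraView), typeof(CameraViewRenderer))]

    // Empty Forms view that the renderer below replaces with a native NSView.
    public class CameraView : View { }

    public class CameraViewRenderer : ViewRenderer<CameraView, NSView>
    {
        protected override void OnElementChanged(ElementChangedEventArgs<CameraView> e)
        {
            base.OnElementChanged(e);
            if (Control == null)
            {
                // WantsLayer lets us assign the preview layer to Control.Layer later.
                SetNativeControl(new NSView { WantsLayer = true });
                SetupCapture();
            }
        }

        void SetupCapture()
        {
            // The AVFoundation setup from the example below goes here.
        }
    }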
In your capture setup, you can grab the default AV device, assuming you want to use the built-in FaceTime camera (also known as iSight).
macOS/Forms Example:
device = AVCaptureDevice.GetDefaultDevice(AVMediaTypes.Video);
input = AVCaptureDeviceInput.FromDevice(device, out var error);
if (error == null)
{
    session = new AVCaptureSession();
    session.AddInput(input);
    session.SessionPreset = AVCaptureSession.PresetPhoto;
    var previewLayer = AVCaptureVideoPreviewLayer.FromSession(session);
    previewLayer.Frame = Control.Bounds;
    Control.Layer = previewLayer;
    output = new AVCaptureStillImageOutput();
    session.AddOutput(output);
    session.StartRunning();
}
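To honor the disposal advice above, a teardown along these lines is a reasonable sketch, assuming your renderer overrides Dispose(bool):

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Stop the session before releasing the native capture objects.
            session?.StopRunning();
            output?.Dispose();
            input?.Dispose();
            device?.Dispose();
            session?.Dispose();
        }
        base.Dispose(disposing);
    }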
Note: A lot of the AVFoundation framework is shared between iOS and macOS, but there are some differences, so if you end up looking at iOS sample code, be aware you might need to alter it for macOS.
https://developer.apple.com/documentation/avfoundation

How to add a custom word list to inkrecognizer in uwp?

I have developed a UWP app with a custom user control that lets the user handwrite input with a stylus and have it recognized into a textbox.
It works fine for common words; however, my users often use technical terms and acronyms that are recognized poorly, or not at all, because they are not featured in the language I have set as the default recognizer for my InkRecognizer.
Here is how I set the default InkRecognizer:
InkRecognizerContainer inkRecognizerContainer = new InkRecognizerContainer();
foreach (InkRecognizer reco in inkRecognizerContainer.GetRecognizers())
{
    if (reco.Name == "Reconnaissance d'écriture Microsoft - Français")
    {
        inkRecognizerContainer.SetDefaultRecognizer(reco);
        break;
    }
}
And here is how I get my recognition results:
IReadOnlyList<InkRecognitionResult> recognitionResults =
    await inkRecognizerContainer.RecognizeAsync(myInkCanvas.InkPresenter.StrokeContainer, InkRecognitionTarget.All);
foreach (var r in recognitionResults)
{
    string result = r.GetTextCandidates()[0];
    myTextBox.Text += " " + result;
}
The expected results are generally not contained in the other text candidates either.
I have read through the MSDN documentation of Windows.UI.Input.Inking but haven't found anything useful on this particular topic.
The same goes for the Channel9 videos on handwriting recognition and whatever my google-fu has been able to conjure up.
Ideally, I'm looking for something similar to the WordList that existed in Windows.Inking.
Is there a way to programmatically add a word list to the pool of words of an InkRecognizer, or to create a custom dictionary for handwriting recognition in a UWP app?

SpeechRecognitionEngine.InstalledRecognizers returns No recognizer installed

I am trying to get a simple speech recognition program started, but it does not work. I've installed some languages (en-GB & en-US), but whenever I call:
SpeechRecognitionEngine.InstalledRecognizers
it returns an empty collection. Even when I just try to start a recognizer, it returns "no recognizer installed". Yet when I reinstall a language, it says that it is already installed.
using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
{
    // Create and load a dictation grammar.
    recognizer.LoadGrammar(new DictationGrammar());

    // Add a handler for the speech recognized event.
    recognizer.SpeechRecognized +=
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

    // Configure input to the speech recognizer.
    recognizer.SetInputToDefaultAudioDevice();

    // Start asynchronous, continuous speech recognition.
    recognizer.RecognizeAsync(RecognizeMode.Multiple);

    // Keep the console window open.
    while (true)
    {
        Console.ReadLine();
    }
}
For what reason is it unable to find the installed recognizers?
Edit:
This is the exception:

    System.ArgumentException: No recognizer of the required ID found.
    Parameter name: culture
       at System.Speech.Recognition.SpeechRecognitionEngine..ctor(CultureInfo culture)

and var recognizers = SpeechRecognitionEngine.InstalledRecognizers(); returns a collection with a count of 0.
The problem was that the language packs I had installed are only visible to Microsoft.Speech, while I was using System.Speech; when I switched to Microsoft.Speech it worked.
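If you hit the same symptom, a quick diagnostic is to list what each stack can actually see; this is a sketch that assumes you reference both System.Speech.dll and the Microsoft.Speech.dll from the Speech Platform SDK:

    using System;
    using MsSpeech = Microsoft.Speech.Recognition;
    using SysSpeech = System.Speech.Recognition;

    class RecognizerCheck
    {
        static void Main()
        {
            // Desktop recognizers (installed with Windows speech/language features).
            Console.WriteLine("System.Speech recognizers:");
            foreach (var ri in SysSpeech.SpeechRecognitionEngine.InstalledRecognizers())
                Console.WriteLine("  {0} - {1}", ri.Culture.Name, ri.Description);

            // Server recognizers (installed via Speech Platform runtime language packs).
            Console.WriteLine("Microsoft.Speech recognizers:");
            foreach (var ri in MsSpeech.SpeechRecognitionEngine.InstalledRecognizers())
                Console.WriteLine("  {0} - {1}", ri.Culture.Name, ri.Description);
        }
    }

Whichever list your language pack shows up in tells you which namespace you need to use.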

C# Speech recognition error - The language for the grammar does not match the language of the speech recognizer

I have a problem with my speech recognition.
It works on "English" Windows with no problem.
It also works on some "foreign" Windows installations, but only on some.
I'm getting this exception:
The language for the grammar does not match the language of the speech recognizer
I added my own words to the dictionary.
How can I fix it?
Not sure which .NET version you are on, but I'll attempt to answer.
On your English Windows machine, navigate to C:\Program Files\Reference Assemblies\Microsoft\Framework\[YOUR .NET VERSION].
You should find System.Speech.dll there.
Make sure to bring this .dll over to your foreign computer, and everything should run smoothly.
I had the same problem on my friend's computer, so I made this (it is just part of the code, because the full code is really long):
...
RecognizerInfo recognizerInfo = null;
foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
{
    if ((ri.Culture.TwoLetterISOLanguageName.Equals("en")) && (recognizerInfo == null))
    {
        recognizerInfo = ri;
        break;
    }
}
SpeechRecognitionEngine SpeachRecognition = new SpeechRecognitionEngine(recognizerInfo);
GrammarBuilder gb = new GrammarBuilder(startLiserninFraze);
gb.Culture = recognizerInfo.Culture;
grammar = new Grammar(gb);
SpeachRecognition.RequestRecognizerUpdate();
SpeachRecognition.LoadGrammar(grammar);
SpeachRecognition.SpeechRecognized += SpeachRecognition_SpeechRecognized;
SpeachRecognition.SetInputToDefaultAudioDevice();
SpeachRecognition.RecognizeAsync(RecognizeMode.Multiple);
...
So this should work. My friend's PC had two installed instances of "en" ("eng") recognizers; I'm not sure why, so the code selects the first one. I found some pieces of the code on the internet, and some of it is my own.
SpeachRecognition.SpeechRecognized += SpeachRecognition_SpeechRecognized;
subscribes the event that fires when something is recognized. Just type:
SpeachRecognition.SpeechRecognized +=
and then press the TAB key a couple of times (at least in VS 2013); at the bottom of the file it will generate something like this:
void SpeachRecognition_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // replace this line with your own handling code
}
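For example, a minimal handler body might just echo what was heard; e.Result.Text and e.Result.Confidence are part of the standard event args:

    void SpeachRecognition_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        // Print the recognized phrase along with the recognizer's confidence score.
        Console.WriteLine("{0} (confidence {1:F2})", e.Result.Text, e.Result.Confidence);
    }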
I hope this will help. :)

Playing Audio in .Net / C#

I'm an experienced MFC programmer of many years who has, more recently, been developing commercial apps in Objective-C for Mac and iOS. I'm trying to get up to speed with .NET and C#, as I'm soon going to be required to convert one of my commercial apps from Mac to PC.
I've now worked my way through a couple of books, and as an exercise to get more familiar with .NET (and C#) I've decided to convert one of my non-commercial apps to .NET as a learning exercise. All is going well (the interface is working, the data structures are all good), but I need to be able to play audio.
My Mac app generates audio from a series of mathematical formulae; imagine a wave generator, not quite the same but similar. On the Mac I generate the audio as 16-bit signed raw audio, use Core Audio to set up audio output routing, and then get a callback whenever a new buffer of audio is required for the audio routing (so I can generate the audio on the fly).
I need to do the same on the PC. Unfortunately, I find the MSDN documentation to be a case of "can't see the wood for the trees", as there is such a vast amount of it. I can find classes that will let me load and play mp3/wav etc. files, but I need to generate the audio in real time. Can anyone point me in the right direction to find something that will allow me to fill buffers on the fly as it plays them?
Thx
I have used this sample in several projects with good results. It is basically a .NET wrapper for the Windows Waveform Audio API using P/Invoke.
Other choices:
NAudio (a sketch of an on-the-fly generator follows this list)
SoundPlayer class from the .NET Framework
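If you go the NAudio route, an on-the-fly generator is just a class implementing its ISampleProvider interface; the player pulls samples from Read() whenever it needs another buffer. A minimal sketch (the 440 Hz sine stands in for your own formula-based generator):

    using System;
    using NAudio.Wave;

    // Synthesizes audio on demand: NAudio calls Read() each time the
    // output device needs the next buffer, so nothing is precomputed.
    class SineProvider : ISampleProvider
    {
        private double phase;

        public WaveFormat WaveFormat { get; } =
            WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);

        public int Read(float[] buffer, int offset, int count)
        {
            for (int i = 0; i < count; i++)
            {
                buffer[offset + i] = (float)(0.25 * Math.Sin(phase));
                phase += 2 * Math.PI * 440.0 / WaveFormat.SampleRate;
            }
            return count; // returning fewer than count samples signals end of stream
        }
    }

    class Program
    {
        static void Main()
        {
            using (var output = new WaveOutEvent())
            {
                output.Init(new SineProvider());
                output.Play();
                Console.ReadLine(); // generator runs until you press Enter
            }
        }
    }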
I have created a class that can play audio given a Stream derivative as input. So if you are able to pack your sound generator into a Stream-compatible interface, it could be suitable for you.
How I did it: I used the unmanaged waveOut* methods from the old Windows multimedia API and handled the playback from there.
Other options that I am aware of: use waveOut directly via this: http://windowsmedianet.sourceforge.net/ or write your own DirectShow source filter, but that might be too complicated, since it has to be written in C++.
If you are interested in giving my component a try, I can make it available to you at no charge, since I need it beta tested (I have only used it in several of my own projects).
EDIT:
Since there are 6 upvotes to the question, I am offering my component free of charge (if you find it useful) here: http://dl.dropbox.com/u/10020780/SimpleAudioPlayer.zip
Maybe you can reflect on it :)
I use Audiere to accomplish this and find it works very well.
It's really a C++ lib, but there is a set of bindings available for C#.
For more info, see the question I asked.
You should have a look at FMOD, which allows this kind of operation and much more. It is also cross-platform, which can be interesting if you are also working on a Mac.
Alvas.Audio has 3 audio players: Player
    player.FileName = "123.mp3";
    player.Play();
PlayerEx
    public static void TestPlayerEx()
    {
        PlayerEx plex = new PlayerEx();
        plex.Done += new PlayerEx.DoneEventHandler(plex_Done);
        Mp3Reader mr = new Mp3Reader(File.OpenRead("in.mp3"));
        IntPtr format = mr.ReadFormat();
        byte[] data = mr.ReadData();
        mr.Close();
        plex.OpenPlayer(format);
        plex.AddData(data);
        plex.StartPlay();
    }

    static void plex_Done(object sender, DoneEventArgs e)
    {
        if (e.IsEndPlaying)
        {
            ((PlayerEx)sender).ClosePlayer();
        }
    }
and RecordPlayer
    public static void TestRecordPlayer()
    {
        RecordPlayer rp = new RecordPlayer();
        rp.PropertyChanged += new PropertyChangedEventHandler(rp_PropertyChanged);
        rp.Open(new Mp3Reader(File.OpenRead("in.mp3")));
        rp.Play();
    }

    static void rp_PropertyChanged(object sender, PropertyChangedEventArgs e)
    {
        switch (e.PropertyName)
        {
            case RecordPlayer.StateProperty:
                RecordPlayer rp = ((RecordPlayer)sender);
                if (rp.State == DeviceState.Stopped)
                {
                    rp.Close();
                }
                break;
        }
    }
