How to use Bing Speech for a C# WPF application

I'm trying to get Bing Speech working. I followed this link: www.microsoft.com/cognitive-services/en-us/speech-api. It says: "Show Quota is temporarily unavailable. We are working hard to bring an improved version back in the middle of October." and I can't see it in the list; there is only Bing Search.
I found www.microsoft.com/cognitive-services/en-us/speech-api/documentation/overview, pressed "Get started for free", and got two keys.
Now I need some basic code to use it. I found this: www.github.com/Microsoft/Cognitive-Speech-STT-Windows. I've just moved my code from a Windows Forms application to WPF with the same interface, and it works without any special changes, but the fact is I'm new to WPF, so I'm not sure what I have to use to get the output I need.
Maybe I just have to follow these instructions: https://www.microsoft.com/cognitive-services/en-us/Speech-api/documentation/GetStarted/GetStartedCSharpWin10

Assuming you're interested in a C# sample, maybe this can help: https://github.com/Microsoft/BotBuilder-Samples/tree/master/CSharp/intelligence-SpeechToText. It is a sample speech-to-text bot, which uses the Bing Speech API. The bot reads the audio file that the user uploaded, and then converts it into text. Obviously, you can leverage the same code to run speech-to-text for non-bot applications.
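For reference, here is a rough sketch of the same file-to-text flow using the client library from the Cognitive-Speech-STT-Windows sample mentioned in the question. The key and file name are placeholders, and the namespace has changed between SDK versions, so treat this as an outline rather than copy-paste code:
using System;
using System.IO;
using Microsoft.CognitiveServices.SpeechRecognition; // older SDK builds use Microsoft.ProjectOxford.SpeechRecognition instead

class FileToTextSketch
{
    static void Main()
    {
        const string subscriptionKey = "YOUR_BING_SPEECH_KEY"; // placeholder: one of the two keys from the portal

        // Client that accepts raw audio data (short-phrase mode, US English).
        var dataClient = SpeechRecognitionServiceFactory.CreateDataClient(
            SpeechRecognitionMode.ShortPhrase, "en-US", subscriptionKey);

        // Print the best hypothesis when the final response arrives.
        dataClient.OnResponseReceived += (sender, e) =>
        {
            if (e.PhraseResponse.Results.Length > 0)
                Console.WriteLine(e.PhraseResponse.Results[0].DisplayText);
        };

        // Stream a WAV file to the service in chunks, then signal end-of-audio.
        using (var stream = File.OpenRead("audio.wav")) // placeholder file name
        {
            var buffer = new byte[1024];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                dataClient.SendAudio(buffer, read);
        }
        dataClient.EndAudio();

        Console.ReadLine(); // wait for the asynchronous response before exiting
    }
}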

Related

Microsoft Cognitive Services: Bing Speech Recognition XAML

I am trying to develop a humble AI to help with my family's daily routines.
Voice recognition is a must, and commands cannot be limited to a command library, so command-library mode is off the table.
I tried dictation mode, which already has terrible recognition with a headset and won't be able to understand anything through a room mic.
So I am trying to use Microsoft Cognitive Services: Bing Speech Recognition.
I downloaded the documentation and the example, and I see everything is in XAML form. I don't understand why.
I am asking for guidance from those who are experienced with this: is it possible to do it in a console app or Windows Forms app? (I am using .NET 4.6.)
If not, do you have any suggestions for solving my problem?
Thank you for your time and patience.
You can use the NuGet packages to achieve the same thing. You'll find a sample here: https://github.com/Azure-Samples/Cognitive-Speech-STT-Windows
Further details about the Bing Speech SDK: https://learn.microsoft.com/en-us/azure/cognitive-services/Speech/GetStarted/GetStartedCSharpDesktop
You can use the same approach in a console app too; you just need to set the input to the microphone (MicIn), as in the sketch below.
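A minimal console sketch, assuming the same NuGet client library as that sample (the key and language are placeholders, and the namespace differs between SDK versions):
using System;
using Microsoft.CognitiveServices.SpeechRecognition; // older SDK builds use Microsoft.ProjectOxford.SpeechRecognition instead

class MicToTextSketch
{
    static void Main()
    {
        const string subscriptionKey = "YOUR_BING_SPEECH_KEY"; // placeholder

        // Client that records from the default microphone and streams the audio to the service.
        var micClient = SpeechRecognitionServiceFactory.CreateMicrophoneClient(
            SpeechRecognitionMode.ShortPhrase, "en-US", subscriptionKey);

        // Print the top result when the final response arrives.
        micClient.OnResponseReceived += (sender, e) =>
        {
            if (e.PhraseResponse.Results.Length > 0)
                Console.WriteLine(e.PhraseResponse.Results[0].DisplayText);
        };

        micClient.StartMicAndRecognition();
        Console.WriteLine("Speak, then press Enter to quit...");
        Console.ReadLine();
        micClient.EndMicAndRecognition();
    }
}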

Windows C# application for web browser control

I am looking for a tutorial on developing a C# application for Windows that can add extra functionality to browser input controls, i.e. input fields such as text boxes and text areas.
When I use Google+, Facebook, Twitter, and my blog, I write in my native language, Sinhala.
My native language is complex to use on a computer: it has its own keyboard layout, but it is really difficult to remember. There are tools that convert English characters into their phonetic equivalents in my language, e.g.:
http://www.ucsc.cmb.ac.lk/ltrl/services/feconverter/t2.html
This online tool converts English characters into their phonetic meaning in my language, e.g. Apal => ඇපල් (Apple).
The tool was developed by the University of Colombo School of Computing, Sri Lanka. The English-to-Sinhala conversion is handled by JavaScript.
My requirement is to get this into a C# application running on a particular PC, so that it works whenever I open a web browser (Firefox, IE, Chrome, etc.).
It should run in the background, and there should be a system tray icon (or a shortcut key) to turn its effect on the browser on and off.
I am wondering how to write a C# app that controls browser inputs like this.
If you can show me a way to start, or point me to any suitable tutorial, guideline, or code sample, that would be great.
P.S. C# or Java is my preferred language.
Many thanks,
Cheers,
Umanda
I would check out the MSDN documentation here for the Web Browser Control. It also provides a sample of the source code at the bottom of the page.
"It should run in the background, and there should be a system tray icon (or a shortcut key) to turn its effect on the browser on and off."
What you require for that part is a Windows Service, which can be found here.
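For the tray-icon part specifically (not the browser integration itself), here is a small sketch using Windows Forms' NotifyIcon; the icon, menu text, and the on/off flag are invented for the example:
using System;
using System.Drawing;
using System.Windows.Forms;

static class TrayToggleSketch
{
    static bool converterEnabled = true; // stands in for "conversion on/off"; the browser hook itself is not shown

    [STAThread]
    static void Main()
    {
        var menu = new ContextMenuStrip();
        var toggleItem = menu.Items.Add("Disable conversion");
        toggleItem.Click += (s, e) =>
        {
            converterEnabled = !converterEnabled;
            toggleItem.Text = converterEnabled ? "Disable conversion" : "Enable conversion";
        };
        menu.Items.Add("Exit").Click += (s, e) => Application.Exit();

        // Put an icon in the system tray with the context menu attached.
        using (new NotifyIcon
        {
            Icon = SystemIcons.Application,
            Text = "Sinhala converter",
            ContextMenuStrip = menu,
            Visible = true
        })
        {
            Application.Run();
        }
    }
}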

Silverlight Speech Recognition (In browser)

Since this topic is a bit outdated, I would like to re-discuss it here.
After searching the web, I came across the following link:
http://archive.msdn.microsoft.com/nesl, which runs only out-of-browser because in-browser Silverlight can't access certain Windows-related COM libraries.
I want (for obvious performance reasons) to perform the speech recognition in Silverlight on the client machine and then send the resulting text to the server via a postback to perform the corresponding action.
I have already managed to capture the voice from the microphone and store it in Silverlight as a byte array. Is there a way to convert that speech byte array to text?
The HTML5 Google service is not an acceptable approach since IE is required.
My final goal is to implement speech recognition in an ASP.NET web application.
Any suggestion is appreciated.
You can't do it in Silverlight; you'll need to send the audio somewhere. You can call a third-party service (I'm sure there are plenty, and it shouldn't matter that you're using IE) or your own ASP.NET application (which can call System.Speech or any other free or commercial system). But before you do that, you should compress the audio. There aren't a lot of options in Silverlight: I recommend NSpeex, or at least convert the audio to 16 kHz PCM (either linear or a-law).
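To illustrate the "your own ASP.NET application calling System.Speech" route, here is a minimal server-side sketch. It assumes the uploaded audio has already been decompressed to a WAV file; the file name and the dictation grammar are placeholders, and it requires a reference to the System.Speech assembly:
using System;
using System.Speech.Recognition;

class ServerSideRecognitionSketch
{
    // Return the recognized text for a WAV file, or null if nothing was recognized.
    static string RecognizeWavFile(string wavPath)
    {
        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new DictationGrammar()); // free dictation; a constrained Grammar is more accurate if the vocabulary is known
            engine.SetInputToWaveFile(wavPath);
            RecognitionResult result = engine.Recognize();
            return result != null ? result.Text : null;
        }
    }

    static void Main()
    {
        // "upload.wav" is a placeholder for the decompressed audio the Silverlight client posted.
        Console.WriteLine(RecognizeWavFile("upload.wav") ?? "(no recognition result)");
    }
}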
Here's a list of Speech SDKs (many of which have a cloud service component): http://www.toolsjournal.com/mobile-articles/item/918-top-10-sdks-to-voice-enable-mobile-apps-quickly
To make a trusted in-browser Silverlight application:
http://msdn.microsoft.com/en-us/library/gg192793(v=vs.95).aspx
http://www.pitorque.de/MisterGoodcat/post/Silverlight-5-Tidbits-Trusted-applications.aspx
And for security background:
http://msdn.microsoft.com/en-us/library/ee721083%28v=vs.95%29.aspx
Note that NESL doesn't support DictationGrammar; the grammar needs to be pre-defined:
http://archive.msdn.microsoft.com/nesl/Thread/View.aspx?ThreadId=4905

Two problems with Speech Recognition C# WPF app on Windows7

I've made an app that uses the SpeechRecognizer class to set up a simple grammar and recognize simple words.
When I run it on Win7 I notice two things.
1) The first time I start the app, the speech recognition bar (thingy) comes up, but the UI of my app is not shown (it is running, as I can see in the Task Manager). When I start the app a second time (after killing the first instance), it displays normally (with the Windows speech recognition toolbar already running).
2) When I speak one of the words my app recognizes a second time, it does not trigger an event; instead it selects the text in my app where I print the history of recognized words in a listbox.
Note: when I remove the history listbox from the main screen, it works as expected. Apparently Win7 tries to find the word in my UI first, and only when it cannot find it does it trigger my programmatic event...??
Both problems seem very weird to me.
More info on the app: it's a VS2008/.NET 3.0 WPF app written in C#. The application allows a user to edit groups of settings (patches) for sending MIDI commands. Each patch is tagged with a phrase; when that phrase is spoken (recognized by the app), all configured MIDI commands are sent to the output(s). The history of patches recalled by the user is printed in a 'history' list on the app's main screen.
I hope someone can help me with this. Any suggestions are most welcome.
Thanx,
Marc Jacobi
I think you are using the shared speech recognizer (SpeechRecognizer). When you instantiate SpeechRecognizer, you get a recognizer that can be shared by other applications; it is typically used for building applications that control Windows and the applications running on the desktop.
It sounds like you want your own private recognition engine (SpeechRecognitionEngine), so instantiate a SpeechRecognitionEngine instead.
See http://msdn.microsoft.com/en-us/library/system.speech.recognition.speechrecognizer(v=vs.90).aspx
"What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?" and "Disable built-in speech recognition commands?" may also have some helpful info.
I got it working, thanx!
The main differences between using the SpeechRecognizer and the SpeechRecognitionEngine are:
1) Construct the SpeechRecognitionEngine using a RecognizerInfo from InstalledRecognizers.
2) Call one of the SetInputToXxxx methods.
3) Call RecognizeAsync(RecognizeMode.Multiple) to mimic the SpeechRecognizer's SpeechRecognized events.
4) Call RecognizeAsyncCancel/RecognizeAsyncStop to quit.
Hope it helps.
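Putting those steps together, here is a minimal System.Speech sketch; the phrases in the Choices are placeholders for the real patch names:
using System;
using System.Speech.Recognition;

class PrivateEngineSketch
{
    static void Main()
    {
        // 1) Construct the engine from an installed recognizer (here simply the first one found).
        RecognizerInfo info = SpeechRecognitionEngine.InstalledRecognizers()[0];
        using (var engine = new SpeechRecognitionEngine(info))
        {
            // Small grammar; "patch one" / "patch two" stand in for the real phrases.
            engine.LoadGrammar(new Grammar(new GrammarBuilder(new Choices("patch one", "patch two"))));

            // Behaves like the shared recognizer's SpeechRecognized event, but private to this app.
            engine.SpeechRecognized += (s, e) => Console.WriteLine("Recognized: " + e.Result.Text);

            // 2) Set the input and 3) start continuous recognition.
            engine.SetInputToDefaultAudioDevice();
            engine.RecognizeAsync(RecognizeMode.Multiple);

            Console.WriteLine("Listening... press Enter to quit.");
            Console.ReadLine();

            // 4) Stop recognition before exiting.
            engine.RecognizeAsyncStop();
        }
    }
}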

Is there a way to open the Bing Maps App on Windows Phone 7 to a specific location?

The built-in emulator from the WP7 Tools doesn't have the Bing App installed, and I don't have any phone hardware to test with. So I'm simply wondering, how can I open the Bing Maps Application to a specific Lat/Long?
Related Questions:
iPhone -- How can I launch the Google Maps iPhone application from within my own native application?
Android -- https://developer.android.com/guide/appendix/g-app-intents.html
It seems that, starting with OS version 7.1, there are specific launcher tasks available for this: see BingMapsTask, and for directions, BingMapsDirectionsTask.
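A minimal sketch of that 7.1 launcher, assuming the Microsoft.Phone.Tasks API; the coordinates, zoom level, and handler name are arbitrary:
// e.g. in a button click handler on your page:
private void ShowOnMap_Click(object sender, System.Windows.RoutedEventArgs e)
{
    var mapsTask = new Microsoft.Phone.Tasks.BingMapsTask
    {
        Center = new System.Device.Location.GeoCoordinate(47.6204, -122.3491), // arbitrary lat/long
        ZoomLevel = 15
    };
    mapsTask.Show(); // launches the built-in Bing Maps app centred on that location
}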
Unfortunately there is no way to launch the Bing Maps app from within your own application.
In an early CTP there was a way, but it has been removed. Hopefully it will return in the future, but it is not on any current, public roadmap.
This leaves two alternatives.
Option 1
You could perform a search for the lat/long you want to show. The search app integrates directly with the Bing Maps app, so, assuming Bing can take the lat/long you provide and return something useful, the user would still be able to do whatever they wished within the Bing Maps app.
This has two downsides, though. Firstly, you have no control over the search results. Secondly, you cannot test this on the emulator.
Option 2
You could use the Bing Maps control within your own Silverlight application.
(Prior to the RTM, it was possible to use the full Silverlight version of the control within your app, but this had a few quirks and was only ever intended as a stopgap solution.)
While not as fully featured as the app, the control does offer a lot of functionality.
Even without a real device, you can simulate location data for testing with the Reactive Extensions, as in the sketch below.
Even with a real device you will probably want to look at doing this, as it's a lot easier than trying to debug while walking or driving around.
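A rough sketch of that simulation idea, using the general Rx API (on WP7 the Rx types ship in the Microsoft.Phone.Reactive assembly rather than the desktop namespaces); the coordinates, interval, and drift per tick are invented:
// e.g. in the page constructor or Loaded handler
// (requires: using System; plus the Rx namespace for your platform, e.g. Microsoft.Phone.Reactive):
IObservable<System.Device.Location.GeoCoordinate> fakeLocations =
    Observable.Interval(TimeSpan.FromSeconds(1))
              .Select(tick => new System.Device.Location.GeoCoordinate(
                  -37.8212 + tick * 0.0001,    // arbitrary start latitude, small drift per tick
                  144.9778 + tick * 0.0001));  // arbitrary start longitude

IDisposable subscription = fakeLocations.Subscribe(location =>
{
    // Marshal to the UI thread here before centring your map control on 'location',
    // exactly as you would for real GeoCoordinateWatcher position updates.
});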
Edit:
As per this post by Kevin Marshall, if you're going to use the WebBrowserTask (option 1 above), prefix your query with "maps:" and URL-encode your query string, e.g.:
var task = new WebBrowserTask();
task.URL = "maps:1%20N%20Franklin%2060606";
or
task.URL = "maps:37.788153%2C-122.440162";
The Bing Maps Silverlight control is now supported out of the box and is part of the tools. Learn more about it here: http://channel9.msdn.com/Learn/Courses/WP7TrainingKit/WP7Silverlight/UsingBingMapsLab/Exercise-1-Introduction-to-the-Bing-Map-Control
Yes, you can do this. I've got it running in the emulator (however, as many people have said, there's no guarantee the Bing Maps for Silverlight control will run on the actual device).
Here is the XAML:
<m:Map Grid.Row="0" x:Name="mapMain" ZoomLevel="5" Mode="AerialWithLabels" CredentialsProvider="YOURBINGMAPSLICENSE" />
and here's some code to set the location in the code-behind (.cs) class:
var ppLoc = new Location(-37.821285, 144.97785);
mapMain.SetView(ppLoc, 17);
