I have a universal app that uses voice synthesis. Running under WP8.1 it works fine, but as soon as I try Win8.1 I start getting strange behaviour. The voice speaks once; however, on the second run (within the same app), the following code hangs:
string toSay = "hello";
System.Diagnostics.Debug.WriteLine("{0}: Speak {1}", DateTime.Now, toSay);
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    System.Diagnostics.Debug.WriteLine("{0}: After synthesizer instantiated", DateTime.Now);
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);
    System.Diagnostics.Debug.WriteLine("{0}: After voice stream", DateTime.Now);
The reason for the debug statements is that the code seems to have an uncertainty principle to it. That is, when I step through it in the debugger, the code executes and gets past the SynthesizeTextToStreamAsync statement. However, when the breakpoints are removed, I only ever see the debug statement preceding it - never the one after.
The best I can deduce is that during the first run-through something bad happens (it does claim to complete, and actually speaks the first time), then it gets stuck and can't play any more. The full code looks similar to this:
string toSay = "hello";
System.Diagnostics.Debug.WriteLine("{0}: Speak {1}", DateTime.Now, toSay);
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    System.Diagnostics.Debug.WriteLine("{0}: After synthesizer instantiated", DateTime.Now);
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);
    System.Diagnostics.Debug.WriteLine("{0}: After voice stream", DateTime.Now);

    // Reuse a MediaElement already in the visual tree, or create and attach one
    MediaElement mediaElement = rootControl.Children.FirstOrDefault(a => a as MediaElement != null) as MediaElement;
    if (mediaElement == null)
    {
        mediaElement = new MediaElement();
        rootControl.Children.Add(mediaElement);
    }
    mediaElement.SetSource(voiceStream, voiceStream.ContentType);
    mediaElement.Volume = 1;
    mediaElement.IsMuted = false;

    // Wait for playback to finish before disposing the synthesizer
    var tcs = new TaskCompletionSource<bool>();
    mediaElement.MediaEnded += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.MediaFailed += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.Play();
    await tcs.Task;
}
Okay - I think I managed to get this working... although I'm unsure why.
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);

    MediaElement mediaElement = rootControl.Children.FirstOrDefault(a => a as MediaElement != null) as MediaElement;
    if (mediaElement == null)
    {
        mediaElement = new MediaElement();
        rootControl.Children.Add(mediaElement);
    }
    mediaElement.SetSource(voiceStream, voiceStream.ContentType);
    mediaElement.Volume = 1;
    mediaElement.IsMuted = false;

    var tcs = new TaskCompletionSource<bool>();
    mediaElement.MediaEnded += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.Play();
    await tcs.Task;

    // Removing the control seems to free up whatever is locking
    rootControl.Children.Remove(mediaElement);
}
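If it helps, the workaround can be folded into a reusable helper. This is a minimal sketch, assuming rootControl is a Panel in the page's visual tree and that the method runs on the UI thread:
// Hypothetical helper consolidating the workaround above
private async Task SpeakAsync(string toSay, Panel rootControl)
{
    using (SpeechSynthesizer synth = new SpeechSynthesizer())
    {
        var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);

        // The MediaElement must be in the visual tree to play
        var mediaElement = new MediaElement();
        rootControl.Children.Add(mediaElement);
        mediaElement.SetSource(voiceStream, voiceStream.ContentType);

        var tcs = new TaskCompletionSource<bool>();
        mediaElement.MediaEnded += (o, e) => tcs.TrySetResult(true);
        mediaElement.MediaFailed += (o, e) => tcs.TrySetResult(true);
        mediaElement.Play();
        await tcs.Task;

        // Detach the MediaElement so the next call starts from a clean state
        rootControl.Children.Remove(mediaElement);
    }
}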
I am not sure what programming language you are using, but this may help. It is in C#, so it could at least lead you in the right direction.
namespace Alexis
{
    public partial class frmMain : Form
    {
        SpeechRecognitionEngine _recognizer = new SpeechRecognitionEngine();
        SpeechSynthesizer Alexis = new SpeechSynthesizer();
        SpeechRecognitionEngine startlistening = new SpeechRecognitionEngine();
        DateTime timenow = DateTime.Now;
        Random rnd = new Random(); // used in Default_SpeechRecognized below

        // other code, such as the constructor with InitializeComponent, goes here
        private void frmMain_Load(object sender, EventArgs e)
        {
            _recognizer.SetInputToDefaultAudioDevice();
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(File.ReadAllLines(@"Default Commands.txt")))));
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Shell_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Social_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Web_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Default_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(AlarmClock_SpeechRecognized);
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(AlarmAM))));
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(AlarmPM))));
            _recognizer.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(_recognizer_SpeechDetected);
            _recognizer.RecognizeAsync(RecognizeMode.Multiple);

            startlistening.SetInputToDefaultAudioDevice();
            startlistening.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices("alexis"))));
            startlistening.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(startlistening_SpeechRecognized);
        }

        // other stuff here... Then, once you have this, you can write a method containing your code, as follows:
        private void Default_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            int ranNum;
            string speech = e.Result.Text;
            switch (speech)
            {
                #region Greetings
                case "hello":
                case "hello alexis":
                    timenow = DateTime.Now;
                    if (timenow.Hour >= 5 && timenow.Hour < 12)
                    { Alexis.SpeakAsync("Good morning " + Settings.Default.User); }
                    if (timenow.Hour >= 12 && timenow.Hour < 18)
                    { Alexis.SpeakAsync("Good afternoon " + Settings.Default.User); }
                    if (timenow.Hour >= 18 && timenow.Hour < 24)
                    { Alexis.SpeakAsync("Good evening " + Settings.Default.User); }
                    if (timenow.Hour < 5)
                    { Alexis.SpeakAsync("Hello " + Settings.Default.User + ", it's getting late"); }
                    break;
                #endregion
                case "whats my name":
                case "what is my name":
                    Alexis.SpeakAsync(Settings.Default.User);
                    break;
                case "stop talking":
                case "quit talking":
                    Alexis.SpeakAsyncCancelAll();
                    ranNum = rnd.Next(1, 3); // the upper bound is exclusive, so this yields 1 or 2
                    if (ranNum == 2)
                    { Alexis.Speak("sorry " + Settings.Default.User); }
                    break;
            }
        }
    }
}
Instead of hard-coding the commands, I recommend that you use a text document. Once you have that, you can add your own commands to it and load them in code. Also, add a reference to System.Speech.
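For example, the commands file can be a plain list, one phrase per line (these sample phrases are just illustrations):
// Default Commands.txt - one spoken phrase per line, e.g.:
//   hello
//   hello alexis
//   what is my name
//   stop talking

// Choices accepts the string array from File.ReadAllLines directly,
// as in frmMain_Load above:
_recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(File.ReadAllLines(@"Default Commands.txt")))));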
I hope this helps get you on the right track.
I am trying to develop the following functionality:
The first task: convert text to voice - DONE
The second task: convert voice to text - having an issue
The third task: implement both of these on the given chat board where the AI chat already is
I am using the following code to get text from voice/speech.
I am getting a result, but it is not the one I want.
Please check the code snippet below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Speech.Recognition;
using System.Speech.Synthesis;
namespace StartingWithSpeechRecognition
{
    class Program
    {
        static SpeechRecognitionEngine _recognizer = null;
        static ManualResetEvent manualResetEvent = null;

        static void Main(string[] args)
        {
            manualResetEvent = new ManualResetEvent(false);
            Console.WriteLine("To recognize speech, and write 'test' to the console, press 0");
            Console.WriteLine("To recognize speech and make sure the computer speaks to you, press 1");
            Console.WriteLine("To emulate speech recognition, press 2");
            Console.WriteLine("To recognize speech using Choices and GrammarBuilder.Append, press 3");
            Console.WriteLine("To recognize speech using a DictationGrammar, press 4");
            Console.WriteLine("To get a prompt building example, press 5");
            ConsoleKeyInfo pressedKey = Console.ReadKey(true);
            char keychar = pressedKey.KeyChar;
            Console.WriteLine("You pressed '{0}'", keychar);
            switch (keychar)
            {
                case '0':
                    RecognizeSpeechAndWriteToConsole();
                    break;
                case '1':
                    RecognizeSpeechAndMakeSureTheComputerSpeaksToYou();
                    break;
                case '2':
                    EmulateRecognize();
                    break;
                case '3':
                    SpeechRecognitionWithChoices();
                    break;
                case '4':
                    SpeechRecognitionWithDictationGrammar();
                    break;
                case '5':
                    PromptBuilding();
                    break;
                default:
                    Console.WriteLine("You didn't press 0, 1, 2, 3, 4, or 5!");
                    Console.WriteLine("Press any key to continue . . .");
                    Console.ReadKey(true);
                    Environment.Exit(0);
                    break;
            }
            if (keychar != '5')
            {
                manualResetEvent.WaitOne();
            }
            if (_recognizer != null)
            {
                _recognizer.Dispose();
            }
            Console.WriteLine("Press any key to continue . . .");
            Console.ReadKey(true);
        }
        #region Recognize speech and write to console
        static void RecognizeSpeechAndWriteToConsole()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("test"))); // load a "test" grammar
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("exit"))); // load an "exit" grammar
            _recognizer.SpeechRecognized += _recognizeSpeechAndWriteToConsole_SpeechRecognized; // if speech is recognized, call the specified method
            _recognizer.SpeechRecognitionRejected += _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected; // if recognized speech is rejected, call the specified method
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void _recognizeSpeechAndWriteToConsole_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "test")
            {
                Console.WriteLine("test");
            }
            else if (e.Result.Text == "exit")
            {
                manualResetEvent.Set();
            }
        }

        static void _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            Console.WriteLine("Speech rejected. Did you mean:");
            foreach (RecognizedPhrase r in e.Result.Alternates)
            {
                Console.WriteLine(" " + r.Text);
            }
        }
        #endregion

        #region Recognize speech and make sure the computer speaks to you (text to speech)
        static void RecognizeSpeechAndMakeSureTheComputerSpeaksToYou()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("hello computer"))); // load a "hello computer" grammar
            _recognizer.SpeechRecognized += _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognized; // if speech is recognized, call the specified method
            _recognizer.SpeechRecognitionRejected += _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognitionRejected;
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "hello computer")
            {
                SpeechSynthesizer speechSynthesizer = new SpeechSynthesizer();
                speechSynthesizer.Speak("hello user");
                speechSynthesizer.Dispose();
            }
            manualResetEvent.Set();
        }

        static void _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            if (e.Result.Alternates.Count == 0)
            {
                Console.WriteLine("No candidate phrases found.");
                return;
            }
            Console.WriteLine("Speech rejected. Did you mean:");
            foreach (RecognizedPhrase r in e.Result.Alternates)
            {
                Console.WriteLine(" " + r.Text);
            }
        }
        #endregion
        #region Emulate speech recognition
        static void EmulateRecognize()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("emulate speech"))); // load an "emulate speech" grammar
            _recognizer.SpeechRecognized += _emulateRecognize_SpeechRecognized;
            _recognizer.EmulateRecognize("emulate speech");
        }

        static void _emulateRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "emulate speech")
            {
                Console.WriteLine("Speech was emulated!");
            }
            manualResetEvent.Set();
        }
        #endregion

        #region Speech recognition with Choices and GrammarBuilder.Append
        static void SpeechRecognitionWithChoices()
        {
            _recognizer = new SpeechRecognitionEngine();
            GrammarBuilder grammarBuilder = new GrammarBuilder();
            grammarBuilder.Append("I"); // add "I"
            grammarBuilder.Append(new Choices("like", "dislike")); // add "like" & "dislike"
            grammarBuilder.Append(new Choices("dogs", "cats", "birds", "snakes", "fishes", "tigers", "lions", "snails", "elephants")); // add animals
            _recognizer.LoadGrammar(new Grammar(grammarBuilder)); // load the grammar
            _recognizer.SpeechRecognized += speechRecognitionWithChoices_SpeechRecognized;
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void speechRecognitionWithChoices_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Do you really " + e.Result.Words[1].Text + " " + e.Result.Words[2].Text + "?");
            manualResetEvent.Set();
        }
        #endregion

        #region Speech recognition with DictationGrammar
        static void SpeechRecognitionWithDictationGrammar()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("exit")));
            _recognizer.LoadGrammar(new DictationGrammar());
            _recognizer.SpeechRecognized += speechRecognitionWithDictationGrammar_SpeechRecognized;
            _recognizer.SetInputToDefaultAudioDevice();
            _recognizer.RecognizeAsync(RecognizeMode.Multiple);
        }

        static void speechRecognitionWithDictationGrammar_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "exit")
            {
                manualResetEvent.Set();
                return;
            }
            Console.WriteLine("You said: " + e.Result.Text);
        }
        #endregion
        #region Prompt building
        static void PromptBuilding()
        {
            PromptBuilder builder = new PromptBuilder();
            builder.StartSentence();
            builder.AppendText("This is a prompt building example.");
            builder.EndSentence();
            builder.StartSentence();
            builder.AppendText("Now, there will be a break of 2 seconds.");
            builder.EndSentence();
            builder.AppendBreak(new TimeSpan(0, 0, 2));
            builder.StartStyle(new PromptStyle(PromptVolume.ExtraSoft));
            builder.AppendText("This text is spoken extra soft.");
            builder.EndStyle();
            builder.StartStyle(new PromptStyle(PromptRate.Fast));
            builder.AppendText("This text is spoken fast.");
            builder.EndStyle();
            SpeechSynthesizer synthesizer = new SpeechSynthesizer();
            synthesizer.Speak(builder);
            synthesizer.Dispose();
        }
        #endregion
    }
}
If this is the wrong way, then please suggest the right way; any reference link or tutorial would be highly appreciated.
System.Speech.Recognition is an old API.
I think you should use the Google Speech API: https://cloud.google.com/speech/docs/basics or the MS Bing Speech API: https://azure.microsoft.com/en-us/services/cognitive-services/speech/
I prefer the Google API. Here is a very small example:
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Speech.V1;
using Grpc.Auth;
using System;

// 'channel' is assumed to be an authorized gRPC channel created from your
// Google Cloud credentials (see the Google.Cloud.Speech.V1 documentation)
var speech = SpeechClient.Create(channel);
var response = speech.Recognize(new RecognitionConfig()
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,
    LanguageCode = "hu",
}, RecognitionAudio.FromFile("888.wav"));

foreach (var result in response.Results)
{
    foreach (var alternative in result.Alternatives)
    {
        Console.WriteLine(alternative.Transcript);
    }
}
But you can find more samples here:
https://cloud.google.com/speech/docs/samples
Regards
I am currently messing around with System.Speech to make an Amazon Alexa-type program, where you ask it a command and it speaks back an answer.
Right now I have it set up so that I click a button and it starts detecting audio input from the microphone, but I want the Alexa or Google style of activation ("Alexa" or "Hey Google").
Here is what I have so far:
public partial class SpeechRecognition : Form
{
    SpeechRecognitionEngine recEngine = new SpeechRecognitionEngine();
    SpeechSynthesizer synth = new SpeechSynthesizer();

    public SpeechRecognition()
    {
        InitializeComponent();
    }

    private void SpeechRecognition_Load(object sender, EventArgs e)
    {
        Choices command = new Choices();
        command.Add(new string[] { "Say Hello", "Print my name", "speak selected text", "time", "current weather", "humidity" });
        GrammarBuilder gBuilder = new GrammarBuilder();
        gBuilder.Append(command);
        Grammar grammar = new Grammar(gBuilder);
        recEngine.LoadGrammarAsync(grammar);
        recEngine.SetInputToDefaultAudioDevice();
        recEngine.SpeechRecognized += recEngine_SpeechRecognized;
    }

    void recEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        synth.SelectVoiceByHints(VoiceGender.Neutral);
        var request = new ForecastIORequest("...", 38.47382f, -76.50636f, DateTime.Now, Unit.us);
        var response = request.Get();
        switch (e.Result.Text)
        {
            case "Say Hello":
                PromptBuilder builder = new PromptBuilder();
                builder.StartSentence();
                builder.AppendText("Hello Zack");
                builder.EndSentence();
                builder.AppendBreak(new TimeSpan(0, 0, 0, 0, 50));
                builder.StartSentence();
                builder.AppendText("How are you?");
                builder.EndSentence();
                synth.SpeakAsync(builder);
                break;
            case "Print my name":
                richTextBox1.Text += "\nZack";
                break;
            case "speak selected text":
                synth.SpeakAsync(richTextBox1.SelectedText);
                break;
            case "time":
                DateTime time = DateTime.Now;
                synth.SpeakAsync(time.ToShortTimeString());
                break;
            case "current weather":
                response = request.Get();
                synth.SpeakAsync("The weather in St. Leonard is " + response.currently.summary + " with a temperature of " + Math.Round((decimal)response.currently.temperature, 0) + " degrees Fahrenheit");
                break;
            case "humidity":
                response = request.Get();
                synth.SpeakAsync("Current humidity is " + 100 * response.currently.humidity + " percent");
                break;
            default:
                synth.SpeakAsync("I did not recognize a command");
                break;
        }
    }

    private void enableBtn_Click(object sender, EventArgs e)
    {
        recEngine.RecognizeAsync(RecognizeMode.Multiple);
        disableBtn.Enabled = true;
    }

    private void disableBtn_Click(object sender, EventArgs e)
    {
        recEngine.RecognizeAsyncStop();
        disableBtn.Enabled = false;
    }
}
How would I alter this so that it constantly listens for that one phrase, and then listens for a command after it? I'm guessing I need to create another instance of the recognition engine? Any help would be great.
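One possible sketch, assuming a second engine can share the default audio device (the earlier Alexis example does exactly this): dedicate one engine to the wake phrase, and only start the command engine once it fires. The wake phrase and method name here are illustrative.
// Illustrative: a second engine listens only for the wake phrase, then
// hands control to the existing command engine (recEngine).
SpeechRecognitionEngine wakeEngine = new SpeechRecognitionEngine();

private void StartWakeWordListening()
{
    wakeEngine.LoadGrammar(new Grammar(new GrammarBuilder(new Choices("hey computer"))));
    wakeEngine.SetInputToDefaultAudioDevice();
    wakeEngine.SpeechRecognized += (s, args) =>
    {
        wakeEngine.RecognizeAsyncStop();                  // stop waiting for the wake phrase
        recEngine.RecognizeAsync(RecognizeMode.Multiple); // start listening for commands
    };
    wakeEngine.RecognizeAsync(RecognizeMode.Multiple);
}
After a command is handled (or after some timeout), calling recEngine.RecognizeAsyncStop() and then wakeEngine.RecognizeAsync(RecognizeMode.Multiple) again would return the program to wake-word mode.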
The execs at my company would like a particular custom control in our UI to make a best effort at preventing screen capture. I implemented a slick solution using SetWindowDisplayAffinity and DwmEnableComposition at the top window level of our application to prevent screen capture of the entire app, but the previously mentioned execs weren't happy with that. They want only the particular UserControl to prevent screen capture, not the entire app.
The custom control is a .NET 2.0 System.Windows.Forms.UserControl wrapped in a .NET 4.5 WPF WindowsFormsHost, contained by a .NET 4.5 WPF Window.
Before I tell the execs where to go, I want to be certain there isn't a reasonable way to implement this.
So, my question is: How do I implement screen capture prevention of a .NET 2.0 UserControl?
Thanks
I had an idea: when the user presses "PrintScreen", a capture of my program is copied to the clipboard.
So all I need is to watch the clipboard and check whether it contains an image.
If it does, I delete the image from the clipboard.
The code:
// define a timer
public static Timer getImageTimer = new Timer();

[STAThread]
static void Main()
{
    getImageTimer.Interval = 500;
    getImageTimer.Tick += GetImageTimer_Tick;
    getImageTimer.Start();
    ......
}

private static void GetImageTimer_Tick(object sender, EventArgs e)
{
    if (Clipboard.ContainsImage()) // if the clipboard contains an image
        Clipboard.Clear();         // delete it
}
*But this clears the clipboard the whole time the program is running. (You need to know whether your program is in the foreground, and only check the clipboard then; a sketch of that check follows.)
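A minimal sketch of that foreground check, using the Win32 GetForegroundWindow function via P/Invoke (the helper class name is mine):
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class ForegroundHelper
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    // True when the given form's window is the active, foreground window
    public static bool IsForeground(Form form)
    {
        return form.Handle == GetForegroundWindow();
    }
}
GetImageTimer_Tick could then clear the clipboard only when IsForeground returns true for the main form.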
You might be able to capture the keystroke.
Below is a method I used for that, but it had limited success:
protected override bool ProcessCmdKey(ref Message msg, Keys keyData) {
    if ((m_parent != null) && m_parent.ScreenCapture(ref msg)) {
        return true;
    } else {
        return base.ProcessCmdKey(ref msg, keyData);
    }
}

protected override bool ProcessKeyEventArgs(ref Message msg) {
    if ((m_parent != null) && m_parent.ScreenCapture(ref msg)) {
        return true;
    } else {
        return base.ProcessKeyEventArgs(ref msg);
    }
}
Returning "True" was supposed to tell the system that the PrintScreen routine was handled, but it often still found the data on the clipboard.
What I did instead, as you can see from the call to m_parent.ScreenCapture, was to save the capture to a file, then display it in my own custom image viewer.
If it helps, here is the wrapper I created for the custom image viewer in m_parent.ScreenCapture:
public bool ScreenCapture(ref Message msg) {
    var WParam = (Keys)msg.WParam;
    if ((WParam == Keys.PrintScreen) || (WParam == (Keys.PrintScreen | Keys.Alt))) {
        ScreenCapture(Environment.GetFolderPath(Environment.SpecialFolder.Desktop));
        msg = new Message(); // erases the data
        return true;
    }
    return false;
}

public void ScreenCapture(string initialDirectory) {
    this.Refresh();
    var rect = new Rectangle(Location.X, Location.Y, Size.Width, Size.Height);
    var imgFile = Global.ScreenCapture(rect);
    //string fullName = null;
    string filename = null;
    string extension = null;
    if ((imgFile != null) && imgFile.Exists) {
        filename = Global.GetFilenameWithoutExt(imgFile.FullName);
        extension = Path.GetExtension(imgFile.FullName);
    } else {
        using (var worker = new BackgroundWorker()) {
            worker.DoWork += delegate(object sender, DoWorkEventArgs e) {
                Thread.Sleep(300);
                var bmp = new Bitmap(rect.Width, rect.Height);
                using (var g = Graphics.FromImage(bmp)) {
                    g.CopyFromScreen(Location, Point.Empty, rect.Size); // WinForms only
                }
                e.Result = bmp;
            };
            worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e) {
                if (e.Error != null) {
                    var err = e.Error;
                    while (err.InnerException != null) {
                        err = err.InnerException;
                    }
                    MessageBox.Show(err.Message, "Screen Capture", MessageBoxButtons.OK, MessageBoxIcon.Stop);
                } else if (e.Cancelled) {
                } else if (e.Result != null) {
                    if (e.Result is Bitmap) {
                        var bitmap = (Bitmap)e.Result;
                        imgFile = new FileInfo(Global.GetUniqueFilenameWithPath(m_screenShotPath, "Screenshot", ".jpg"));
                        filename = Global.GetFilenameWithoutExt(imgFile.FullName);
                        extension = Path.GetExtension(imgFile.FullName);
                        bitmap.Save(imgFile.FullName, ImageFormat.Jpeg);
                    }
                }
            };
            worker.RunWorkerAsync();
        }
    }
    if ((imgFile != null) && imgFile.Exists && !String.IsNullOrEmpty(filename) && !String.IsNullOrEmpty(extension)) {
        bool ok = false;
        using (SaveFileDialog dlg = new SaveFileDialog()) {
            dlg.Title = "ACP Image Capture: Image Name, File Format, and Destination";
            dlg.FileName = filename;
            dlg.InitialDirectory = m_screenShotPath;
            dlg.DefaultExt = extension;
            dlg.AddExtension = true;
            dlg.Filter = "PNG Image|*.png|Jpeg Image (JPG)|*.jpg|GIF Image (GIF)|*.gif|Bitmap (BMP)|*.bmp" +
                "|EMF Image|*.emf|TIFF Image|*.tif|Windows Metafile (WMF)|*.wmf|Exchangeable image file|*.exif";
            dlg.FilterIndex = 0;
            if (dlg.ShowDialog(this) == DialogResult.OK) {
                imgFile = imgFile.CopyTo(dlg.FileName, true);
                m_screenShotPath = imgFile.DirectoryName;
                ImageFormat fmtStyle;
                switch (dlg.FilterIndex) {
                    case 2: fmtStyle = ImageFormat.Jpeg; break;
                    case 3: fmtStyle = ImageFormat.Gif; break;
                    case 4: fmtStyle = ImageFormat.Bmp; break;
                    case 5: fmtStyle = ImageFormat.Emf; break;
                    case 6: fmtStyle = ImageFormat.Tiff; break;
                    case 7: fmtStyle = ImageFormat.Wmf; break;
                    case 8: fmtStyle = ImageFormat.Exif; break;
                    default: fmtStyle = ImageFormat.Png; break;
                }
                ok = true;
            }
        }
        if (ok) {
            string command = string.Format(@"{0}", imgFile.FullName);
            try { // try the default image viewer
                var psi = new ProcessStartInfo(command);
                Process.Start(psi);
            } catch (Exception) {
                try { // try IE
                    ProcessStartInfo psi = new ProcessStartInfo("iexplore.exe", command);
                    Process.Start(psi);
                } catch (Exception) { }
            }
        }
    }
}
I wrote that years ago, and I don't work there now. The source code is still in my cloud drive so that I can leverage skills I learned once (and forgot).
That code isn't meant to get you all the way there, just to show you one way of doing it. If you need any help with something, let me know.
Not a silver bullet, but could form part of a strategy...
If you were worried about the Windows Snipping Tool or other known snipping tools, you could subscribe to Windows events to be notified when SnippingTool.exe is launched, and then hide your application until it is closed:
var applicationStartWatcher = new ManagementEventWatcher(new WqlEventQuery("SELECT * FROM Win32_ProcessStartTrace"));
applicationStartWatcher.EventArrived += (sender, args) =>
{
    if (args.NewEvent.Properties["ProcessName"].Value.ToString().StartsWith("SnippingTool"))
    {
        Dispatcher.Invoke(() => this.Visibility = Visibility.Hidden);
    }
};
applicationStartWatcher.Start();

var applicationCloseWatcher = new ManagementEventWatcher(new WqlEventQuery("SELECT * FROM Win32_ProcessStopTrace"));
applicationCloseWatcher.EventArrived += (sender, args) =>
{
    if (args.NewEvent.Properties["ProcessName"].Value.ToString().StartsWith("SnippingTool"))
    {
        Dispatcher.Invoke(() =>
        {
            this.Visibility = Visibility.Visible;
            this.Activate();
        });
    }
};
applicationCloseWatcher.Start();
You will probably need to be running as an administrator for this code to work.
ManagementEventWatcher is in the System.Management assembly.
This, coupled with some strategy for handling the Print Screen key (for example, something like the sketch below), may at least make it difficult to screen capture.
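As a sketch of that last part (assuming a WPF window, and only effective while the window has keyboard focus - PrintScreen typically does not raise a key-down event, but it does raise KeyUp):
// In the window's constructor: blank the simplest PrintScreen captures by
// clearing the clipboard when the key is released. A global low-level
// keyboard hook would be needed to catch the key when the app is not focused.
this.KeyUp += (s, e) =>
{
    if (e.Key == Key.Snapshot) // Key.Snapshot is the PrintScreen key
    {
        Clipboard.Clear();
    }
};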
I need help on how to use the APM pattern. I am reading some articles now, but I am afraid I don't have much time. What I really want is to get all the persons (data from the DB), then get the photos and put them in an AutoCompleteBox.
Code:
void listanomesautocomplete(object sender, ServicosLinkedIN.queryCompletedEventArgs e)
{
    if (e.Result[0] != "")
    {
        for (int i = 0; i < e.Result.Count(); i = i + 3)
        {
            Pessoa pessoa = new Pessoa();
            pessoa.Nome = e.Result[i];
            pessoa.Id = Convert.ToInt32(e.Result[i + 1]);
            if (e.Result[i + 2] == "")
                pessoa.Imagem = new BitmapImage(new Uri("Assets/default_perfil.png", UriKind.Relative));
            else
            {
                ServicosLinkedIN.ServicosClient buscaimg = new ServicosLinkedIN.ServicosClient();
                buscaimg.dlFotoAsync(e.Result[i + 2]);
                buscaimg.dlFotoCompleted += buscaimg_dlFotoCompleted; // when this completes, it saves into a public BitmapImage, and then I save that into pessoa.Imagem
                // basically, what happens is that listadlfotosasync only happens after this function
                // what I want is to wait until it completes and have it in the class novamsg
                pessoa.Imagem = img; // saving the photo from dlFotoAsync
            }
            listaPessoas.Add(pessoa);
        }
        tbA_destinatario.ItemsSource = null;
        tbA_destinatario.ItemsSource = listaPessoas;
        BackgroundWorker listapessoas = new BackgroundWorker();
        listapessoas.DoWork += listapessoas_DoWork;
        listapessoas.RunWorkerAsync();
        tb_lerdestmsg.Text = "";
    }
    else
    {
        tbA_destinatario.ItemsSource = null;
        tb_lerdestmsg.Text = "Não encontrado";
    }
}
There are a few things to address here:
The listanomesautocomplete event handler should be async. Handle and report any exceptions thrown inside it (generally this rule applies to any event handler, but it's even more important for async event handlers, because the asynchronous operation inside your handler continues outside the scope of the code that fired the event).
Register the dlFotoCompleted event handler first, then call dlFotoAsync. Do not assume that dlFotoAsync will always be executed asynchronously.
Think about the case when another listanomesautocomplete is fired while the previous operation is still pending. This may well happen as a result of the user's actions. You may want to cancel and restart the pending operation, or just queue a new one to be executed as soon as the pending one has completed; a minimal sketch of the cancel-and-restart idea follows this list.
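For that last point, a sketch using CancellationTokenSource (the field and helper method names here are illustrative):
CancellationTokenSource _pendingCts;

async void listanomesautocomplete(object sender, ServicosLinkedIN.queryCompletedEventArgs e)
{
    // Cancel whatever operation is still pending, then start a fresh one
    if (_pendingCts != null)
        _pendingCts.Cancel();
    _pendingCts = new CancellationTokenSource();
    try
    {
        // hypothetical method containing the loop below, checking
        // token.ThrowIfCancellationRequested() between awaits
        await LoadPessoasAsync(e, _pendingCts.Token);
    }
    catch (OperationCanceledException)
    {
        // superseded by a newer request; nothing to do
    }
}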
Back to the question: it's the EAP pattern that you need to wrap as a Task, not APM. Use TaskCompletionSource for that. The relevant part of your code may look like this:
async void listanomesautocomplete(object sender, ServicosLinkedIN.queryCompletedEventArgs e)
{
    if (e.Result[0] != "")
    {
        for (int i = 0; i < e.Result.Count(); i = i + 3)
        {
            Pessoa pessoa = new Pessoa();
            pessoa.Nome = e.Result[i];
            pessoa.Id = Convert.ToInt32(e.Result[i + 1]);
            if (e.Result[i + 2] == "")
                pessoa.Imagem = new BitmapImage(new Uri("Assets/default_perfil.png", UriKind.Relative));
            else
            {
                // you probably want to create the service proxy outside the for loop
                using (ServicosLinkedIN.ServicosClient buscaimg = new ServicosLinkedIN.ServicosClient())
                {
                    FotoCompletedEventHandler handler = null;
                    var tcs = new TaskCompletionSource<Image>();
                    handler = (sHandler, eHandler) =>
                    {
                        try
                        {
                            // you can move the code from buscaimg_dlFotoCompleted here,
                            // rather than calling buscaimg_dlFotoCompleted
                            buscaimg_dlFotoCompleted(sHandler, eHandler);
                            tcs.TrySetResult(eHandler.Result);
                        }
                        catch (Exception ex)
                        {
                            tcs.TrySetException(ex);
                        }
                    };
                    try
                    {
                        buscaimg.dlFotoCompleted += handler;
                        buscaimg.dlFotoAsync(e.Result[i + 2]);
                        // saving the photo from dlFotoAsync
                        pessoa.Imagem = await tcs.Task;
                    }
                    finally
                    {
                        buscaimg.dlFotoCompleted -= handler;
                    }
                }
            }
            listaPessoas.Add(pessoa);
        }
        tbA_destinatario.ItemsSource = null;
        tbA_destinatario.ItemsSource = listaPessoas;
        BackgroundWorker listapessoas = new BackgroundWorker();
        listapessoas.DoWork += listapessoas_DoWork;
        listapessoas.RunWorkerAsync();
        tb_lerdestmsg.Text = "";
    }
    else
    {
        tbA_destinatario.ItemsSource = null;
        tb_lerdestmsg.Text = "Não encontrado";
    }
}
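Based on that idea, here is the version that ended up working for me: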
void listanomesautocomplete(object sender, ServicosLinkedIN.queryCompletedEventArgs e)
{
    if (e.Result[0] != "")
    {
        List<Pessoa> listaPessoas = new List<Pessoa>();
        for (int i = 0; i < e.Result.Count(); i = i + 3)
        {
            Pessoa pessoa = new Pessoa();
            pessoa.Nome = e.Result[i];
            pessoa.Id = Convert.ToInt32(e.Result[i + 1]);
            if (e.Result[i + 2] == "")
                pessoa.Imagem = new BitmapImage(new Uri("Assets/default_perfil.png", UriKind.Relative));
            else
            {
                ServicosLinkedIN.ServicosClient buscaimg = new ServicosLinkedIN.ServicosClient();
                buscaimg.dlFotoAsync(e.Result[i + 2]);
                // THIS ACTUALLY WORKS!!!
                // THE THREAD WAITS FOR IT!
                buscaimg.dlFotoCompleted += (s, a) =>
                {
                    pessoa.Imagem = ConvertToBitmapImage(a.Result);
                };
            }
            listaPessoas.Add(pessoa);
        }
        if (tbA_destinatario.ItemsSource == null)
        {
            tbA_destinatario.ItemsSource = listaPessoas;
        }
        tb_lerdestmsg.Text = "";
    }
    else
    {
        tbA_destinatario.ItemsSource = null;
        tb_lerdestmsg.Text = "Não encontrado";
    }
}
Man, I am not even mad, I am amazed.
Noseratio, your answer gave me an idea and it actually worked!!
Thanks a lot man, I can't thank you enough ;)
I am writing a speech recognition program using System.Speech from MS. I have been going through the online tutorials and all the great info on Stack Overflow, however I seem to keep running into an issue where the recognizer throws an error.
Below is the code I am using (minus the grammar creation).
    Grammar grammarQuestionsSingle;
    Grammar grammarQuestionsShort;
    Grammar grammarQuestionsLong;
    Grammar grammarStatement;
    //Grammar grammarDeclarationShort;
    //Grammar grammarDeclarationLong;
    Grammar grammarCommandsSingle;
    Grammar grammarCommandsShort;
    Grammar grammarCommandsLong;
    SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
    CreateGrammar grammar = new CreateGrammar();
    Think brain = new Think();
    bool privacy, completed;
    //bool timer;

    public void OpenEars()
    {
        completed = true;
        if (grammarQuestionsSingle == null || grammarQuestionsShort == null || grammarQuestionsLong == null || grammarStatement == null || grammarCommandsSingle == null || grammarCommandsShort == null || grammarCommandsLong == null)
        {
            grammarQuestionsSingle = grammar.createGrammarQuestionsSingle();
            grammarQuestionsShort = grammar.createGrammarQuestionsShort();
            grammarQuestionsLong = grammar.createGrammarQuestionsLong();
            grammarStatement = grammar.createGrammarStatement();
            grammarCommandsSingle = grammar.createGrammarCommandsSingle();
            grammarCommandsShort = grammar.createGrammarCommandsShort();
            grammarCommandsLong = grammar.createGrammarCommandsLong();
        }
        recognizer.RequestRecognizerUpdate();
        if (!grammarQuestionsSingle.Loaded)
        {
            recognizer.LoadGrammar(grammarQuestionsSingle);
        }
        if (!grammarQuestionsShort.Loaded)
        {
            recognizer.LoadGrammar(grammarQuestionsShort);
        }
        if (!grammarQuestionsLong.Loaded)
        {
            recognizer.LoadGrammar(grammarQuestionsLong);
        }
        if (!grammarStatement.Loaded)
        {
            recognizer.LoadGrammar(grammarStatement);
        }
        if (!grammarCommandsSingle.Loaded)
        {
            recognizer.LoadGrammar(grammarCommandsSingle);
        }
        if (!grammarCommandsShort.Loaded)
        {
            recognizer.LoadGrammar(grammarCommandsShort);
        }
        if (!grammarCommandsLong.Loaded)
        {
            recognizer.LoadGrammar(grammarCommandsLong);
        }
        DictationGrammar dictationGrammar = new DictationGrammar("grammar:dictation");
        dictationGrammar.Name = "DictationQuestion";
        recognizer.LoadGrammar(dictationGrammar);
        recognizer.RequestRecognizerUpdate();
        recognizer.SetInputToDefaultAudioDevice();
        Listening();
    }

    public void Listening()
    {
        while (!completed)
        {
            Thread.Sleep(333);
        }
        recognizer.SpeechRecognized += recognizer_SpeechRecognized;
        recognizer.RecognizeAsync(RecognizeMode.Single);
    }

    private void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        completed = false;
        SemanticValue sem = e.Result.Semantics;
        if (!privacy)
        {
            if (e.Result.Grammar.Name == "CommandsSingle" && sem["keyCommandsSingle"].Value.ToString() == "go to sleep")
            {
                privacy = true;
                brain.useMouth("ear muffs are on");
                completed = true;
                Listening();
            }
            else
            {
                brain.Understanding(sender, e);
                completed = true;
            }
        }
        else
        {
            if (e.Result.Grammar.Name == "CommandsSingle" && sem["keyCommandsSingle"].Value.ToString() == "wake up")
            {
                privacy = false;
                brain.useMouth("I am listening again");
                completed = true;
                Listening();
            }
        }
        completed = true;
        Listening();
    }
}
It recognizes the first phrase correctly, but as soon as it completes the actions in the SpeechRecognized handler, it throws the exception "Cannot perform this operation while the recognizer is doing recognition."
I have tried having the recognition all in a single method, however it has the same results. The code above was my most recent attempt prior to posting this question.
What am I doing wrong?
To clarify...
The program launches into the system tray and calls OpenEars() on this class. OpenEars then calls Listening(), which contains the RecognizeAsync call. The first phrase is spoken, heard correctly, and handled; the second phrase, when spoken, ends up triggering the exception.
The problem is that you are calling RecognizeAsync from the SpeechRecognized event handler. It throws the exception because the previous recognition has not yet completed - the event handler is blocking it from completing. Try starting a different task/thread to call RecognizeAsync (one approach is sketched below).
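A minimal sketch of that suggestion: let the handler return, and restart recognition from the RecognizeCompleted event, which fires only after the previous RecognizeAsync pass has fully finished.
// In OpenEars(): subscribe once, then kick off the first recognition pass.
recognizer.SpeechRecognized += recognizer_SpeechRecognized;
recognizer.RecognizeCompleted += (s, args) =>
{
    // The previous pass is fully finished here, so it is safe to start another
    recognizer.RecognizeAsync(RecognizeMode.Single);
};
recognizer.RecognizeAsync(RecognizeMode.Single);
The calls to Listening() inside the handler (and the repeated SpeechRecognized subscription that Listening() performs) would then go away.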