I have a Windows Forms app that recognizes voice commands and then performs the corresponding action. However, I can't figure out how to speak one command after another (for example, a confirmation after the first command).
Code:
if (e.Result.Text == "initiate power saving mode")
{
    Taskbar taskbar = new Taskbar();
    taskbar.Show();
    SoundPlayer deacr = new SoundPlayer(Properties.Resources.deacr);
    deacr.PlaySync();
    if (e.Result.Text == "confirm")
    {
        SoundPlayer deacd = new SoundPlayer(Properties.Resources.deacd);
        deacd.PlaySync();
        Application.SetSuspendState(PowerState.Suspend, true, true);
    }
    else if (e.Result.Text == "cancel")
    {
        SoundPlayer cancelled = new SoundPlayer(Properties.Resources.cancelled);
        cancelled.PlaySync();
    }
}
Am I missing something, or just doing something wrong?
You need to use System.Speech. This is how I do it on my system. You can do the following:
using System.Speech.Synthesis;
using System.Speech.Recognition;

namespace Alexis
{
    public partial class frmMain : Form
    {
        SpeechRecognitionEngine _recognizer = new SpeechRecognitionEngine();
        SpeechSynthesizer Alexis = new SpeechSynthesizer();
        SpeechRecognitionEngine startlistening = new SpeechRecognitionEngine();

        // ...
    }
}
Then, in the main form:
private void frmMain_Load(object sender, EventArgs e)
{
    _recognizer.SetInputToDefaultAudioDevice();
    _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(File.ReadAllLines(@"Default Commands.txt")))));
    _recognizer.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(_recognizer_SpeechDetected);
    _recognizer.RecognizeAsync(RecognizeMode.Multiple);

    startlistening.SetInputToDefaultAudioDevice();
    startlistening.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices("alexis"))));
    startlistening.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(startlistening_SpeechRecognized);
}
Then do your coding. To make the commands callable, create a text file and put the commands in it, one per line (do not leave any blank lines).
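For example, the commands file (assumed here to be named Default Commands.txt, matching the code below) might contain:
initiate power saving mode
confirm
cancel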
_recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(File.ReadAllLines(@"Default Commands.txt")))));
This line loads the commands into the recognizer's grammar so it can match them. Also, I would not write if (e.Result.Text == "initiate power saving mode") directly; I would assign the result to a local variable first (string speech = e.Result.Text;) and test that, as in if (speech == "initiate power saving mode"). So if you wanted to continue, you could do this:
if (speech == "initiate power saving mode")
{
    Taskbar taskbar = new Taskbar();
    taskbar.Show();
    SoundPlayer deacr = new SoundPlayer(Properties.Resources.deacr);
    deacr.PlaySync();
}
else if (speech == "confirm")
{
    SoundPlayer deacd = new SoundPlayer(Properties.Resources.deacd);
    deacd.PlaySync();
    Application.SetSuspendState(PowerState.Suspend, true, true);
}
else if (speech == "cancel")
{
    SoundPlayer cancelled = new SoundPlayer(Properties.Resources.cancelled);
    cancelled.PlaySync();
}
Be sure to put the commands "initiate power saving mode", "confirm", and "cancel" in the commands text file (the matching is case sensitive).
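The key point about chaining commands is that each utterance raises a separate SpeechRecognized event, so "confirm" can never match inside the "initiate power saving mode" branch. Here is a minimal sketch of a handler (subscribed to _recognizer.SpeechRecognized, a subscription you would add in frmMain_Load) that chains the two commands; the awaitingConfirmation flag is my own addition, not part of the original code:

// Sketch only: awaitingConfirmation is a hypothetical field on the form
bool awaitingConfirmation = false;

void _recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    string speech = e.Result.Text;
    if (speech == "initiate power saving mode")
    {
        new SoundPlayer(Properties.Resources.deacr).PlaySync();
        awaitingConfirmation = true; // the next "confirm" or "cancel" completes this request
    }
    else if (speech == "confirm" && awaitingConfirmation)
    {
        awaitingConfirmation = false;
        new SoundPlayer(Properties.Resources.deacd).PlaySync();
        Application.SetSuspendState(PowerState.Suspend, true, true);
    }
    else if (speech == "cancel" && awaitingConfirmation)
    {
        awaitingConfirmation = false;
        new SoundPlayer(Properties.Resources.cancelled).PlaySync();
    }
}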
Then, if you want to keep releases down, you can create a tabbed form and add your own custom commands. Hope this helps, but remember this is an example for you to go by.
I have been searching for information about this on Google, but have found neither a way to do it nor an explanation of why it is impossible. My aim is to make a C# app which records the sound of a single window while another window makes any kind of noise; the app would save only the sound of the first.
Is there any way to record the loopback audio of a single window in C#? And does Windows allow it?
I have written code using NAudio that records all loopback sound to one track (not per window). Have a look:
private void btnRec_Click(object sender, RoutedEventArgs e)
{
    // Setting up a dialog to create the WAV file
    var destFileDialog = new Microsoft.Win32.SaveFileDialog();
    destFileDialog.Filter = "Wave files | *.wav";
    destFileDialog.ShowDialog();
    destFileName = destFileDialog.FileName;

    // Creating the capturer and a file writer to save the captured audio
    capture = new WasapiLoopbackCapture();
    var writer = new WaveFileWriter(destFileName, capture.WaveFormat);

    // Setting the capturer's behaviour: write each buffer to the file as it arrives
    capture.DataAvailable += async (s, a) =>
    {
        if (writer != null)
        {
            await writer.WriteAsync(a.Buffer, 0, a.BytesRecorded);
            await writer.FlushAsync();
        }
    };

    // Dispose the writer and the capturer once recording stops
    capture.RecordingStopped += (s, a) =>
    {
        if (writer != null)
        {
            writer.Dispose();
            writer = null;
        }
        btnRec.IsEnabled = true;
        capture.Dispose();
    };

    // Disabling the button to prevent an exception
    btnRec.IsEnabled = false;
    btnStop.IsEnabled = true;
    capture.StartRecording();
}
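For completeness, a sketch of the matching stop handler (this assumes capture and destFileName are fields of the window, as the code above implies):

private void btnStop_Click(object sender, RoutedEventArgs e)
{
    btnStop.IsEnabled = false;
    capture.StopRecording(); // fires RecordingStopped, which disposes the writer above
}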
I am trying to develop the following functionality:
The first task, converting text to voice - DONE
The second task, converting voice to text - getting an issue
The third task, implementing both of these on a chat board that already has an AI chat
I am using the following code to get text from voice/speech. I am getting a result, but it is not what I want. Please check the code snippet below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Speech.Recognition;
using System.Speech.Synthesis;

namespace StartingWithSpeechRecognition
{
    class Program
    {
        static SpeechRecognitionEngine _recognizer = null;
        static ManualResetEvent manualResetEvent = null;

        static void Main(string[] args)
        {
            manualResetEvent = new ManualResetEvent(false);

            Console.WriteLine("To recognize speech, and write 'test' to the console, press 0");
            Console.WriteLine("To recognize speech and make sure the computer speaks to you, press 1");
            Console.WriteLine("To emulate speech recognition, press 2");
            Console.WriteLine("To recognize speech using Choices and GrammarBuilder.Append, press 3");
            Console.WriteLine("To recognize speech using a DictationGrammar, press 4");
            Console.WriteLine("To get a prompt building example, press 5");

            ConsoleKeyInfo pressedKey = Console.ReadKey(true);
            char keychar = pressedKey.KeyChar;
            Console.WriteLine("You pressed '{0}'", keychar);

            switch (keychar)
            {
                case '0':
                    RecognizeSpeechAndWriteToConsole();
                    break;
                case '1':
                    RecognizeSpeechAndMakeSureTheComputerSpeaksToYou();
                    break;
                case '2':
                    EmulateRecognize();
                    break;
                case '3':
                    SpeechRecognitionWithChoices();
                    break;
                case '4':
                    SpeechRecognitionWithDictationGrammar();
                    break;
                case '5':
                    PromptBuilding();
                    break;
                default:
                    Console.WriteLine("You didn't press 0, 1, 2, 3, 4, or 5!");
                    Console.WriteLine("Press any key to continue . . .");
                    Console.ReadKey(true);
                    Environment.Exit(0);
                    break;
            }

            if (keychar != '5')
            {
                manualResetEvent.WaitOne();
            }

            if (_recognizer != null)
            {
                _recognizer.Dispose();
            }

            Console.WriteLine("Press any key to continue . . .");
            Console.ReadKey(true);
        }
        #region Recognize speech and write to console
        static void RecognizeSpeechAndWriteToConsole()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("test"))); // load a "test" grammar
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("exit"))); // load an "exit" grammar
            _recognizer.SpeechRecognized += _recognizeSpeechAndWriteToConsole_SpeechRecognized; // if speech is recognized, call the specified method
            _recognizer.SpeechRecognitionRejected += _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected; // if recognized speech is rejected, call the specified method
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void _recognizeSpeechAndWriteToConsole_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "test")
            {
                Console.WriteLine("test");
            }
            else if (e.Result.Text == "exit")
            {
                manualResetEvent.Set();
            }
        }

        static void _recognizeSpeechAndWriteToConsole_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            Console.WriteLine("Speech rejected. Did you mean:");
            foreach (RecognizedPhrase r in e.Result.Alternates)
            {
                Console.WriteLine("    " + r.Text);
            }
        }
        #endregion
        #region Recognize speech and make sure the computer speaks to you (text to speech)
        static void RecognizeSpeechAndMakeSureTheComputerSpeaksToYou()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("hello computer"))); // load a "hello computer" grammar
            _recognizer.SpeechRecognized += _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognized; // if speech is recognized, call the specified method
            _recognizer.SpeechRecognitionRejected += _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognitionRejected;
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "hello computer")
            {
                SpeechSynthesizer speechSynthesizer = new SpeechSynthesizer();
                speechSynthesizer.Speak("hello user");
                speechSynthesizer.Dispose();
            }
            manualResetEvent.Set();
        }

        static void _recognizeSpeechAndMakeSureTheComputerSpeaksToYou_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
        {
            if (e.Result.Alternates.Count == 0)
            {
                Console.WriteLine("No candidate phrases found.");
                return;
            }
            Console.WriteLine("Speech rejected. Did you mean:");
            foreach (RecognizedPhrase r in e.Result.Alternates)
            {
                Console.WriteLine("    " + r.Text);
            }
        }
        #endregion
        #region Emulate speech recognition
        static void EmulateRecognize()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("emulate speech"))); // load an "emulate speech" grammar
            _recognizer.SpeechRecognized += _emulateRecognize_SpeechRecognized;
            _recognizer.EmulateRecognize("emulate speech");
        }

        static void _emulateRecognize_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "emulate speech")
            {
                Console.WriteLine("Speech was emulated!");
            }
            manualResetEvent.Set();
        }
        #endregion
        #region Speech recognition with Choices and GrammarBuilder.Append
        static void SpeechRecognitionWithChoices()
        {
            _recognizer = new SpeechRecognitionEngine();
            GrammarBuilder grammarBuilder = new GrammarBuilder();
            grammarBuilder.Append("I"); // add "I"
            grammarBuilder.Append(new Choices("like", "dislike")); // add "like" and "dislike"
            grammarBuilder.Append(new Choices("dogs", "cats", "birds", "snakes", "fishes", "tigers", "lions", "snails", "elephants")); // add animals
            _recognizer.LoadGrammar(new Grammar(grammarBuilder)); // load the grammar
            _recognizer.SpeechRecognized += speechRecognitionWithChoices_SpeechRecognized;
            _recognizer.SetInputToDefaultAudioDevice(); // set the input to the default audio device
            _recognizer.RecognizeAsync(RecognizeMode.Multiple); // recognize speech asynchronously
        }

        static void speechRecognitionWithChoices_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Do you really " + e.Result.Words[1].Text + " " + e.Result.Words[2].Text + "?");
            manualResetEvent.Set();
        }
        #endregion
        #region Speech recognition with DictationGrammar
        static void SpeechRecognitionWithDictationGrammar()
        {
            _recognizer = new SpeechRecognitionEngine();
            _recognizer.LoadGrammar(new Grammar(new GrammarBuilder("exit")));
            _recognizer.LoadGrammar(new DictationGrammar());
            _recognizer.SpeechRecognized += speechRecognitionWithDictationGrammar_SpeechRecognized;
            _recognizer.SetInputToDefaultAudioDevice();
            _recognizer.RecognizeAsync(RecognizeMode.Multiple);
        }

        static void speechRecognitionWithDictationGrammar_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "exit")
            {
                manualResetEvent.Set();
                return;
            }
            Console.WriteLine("You said: " + e.Result.Text);
        }
        #endregion
        #region Prompt building
        static void PromptBuilding()
        {
            PromptBuilder builder = new PromptBuilder();
            builder.StartSentence();
            builder.AppendText("This is a prompt building example.");
            builder.EndSentence();
            builder.StartSentence();
            builder.AppendText("Now, there will be a break of 2 seconds.");
            builder.EndSentence();
            builder.AppendBreak(new TimeSpan(0, 0, 2));
            builder.StartStyle(new PromptStyle(PromptVolume.ExtraSoft));
            builder.AppendText("This text is spoken extra soft.");
            builder.EndStyle();
            builder.StartStyle(new PromptStyle(PromptRate.Fast));
            builder.AppendText("This text is spoken fast.");
            builder.EndStyle();

            SpeechSynthesizer synthesizer = new SpeechSynthesizer();
            synthesizer.Speak(builder);
            synthesizer.Dispose();
        }
        #endregion
    }
}
If this is the wrong way, please suggest the right way; any reference link or tutorial would be highly appreciated.
System.Speech.Recognition is an old API. I think you should use the Google Speech API (https://cloud.google.com/speech/docs/basics) or the MS Bing Speech API (https://azure.microsoft.com/en-us/services/cognitive-services/speech/).
I preferred the Google API. Here is a very small example (it assumes the Google.Cloud.Speech.V1 NuGet package):
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Speech.V1;
using Grpc.Auth;
using Grpc.Core;
using System;

// Building the channel from a service-account key file is assumed here;
// adjust the key file path to your own credentials.
GoogleCredential credential = GoogleCredential.FromFile("key.json")
    .CreateScoped(SpeechClient.DefaultScopes);
Channel channel = new Channel(SpeechClient.DefaultEndpoint.ToString(),
    credential.ToChannelCredentials());

var speech = SpeechClient.Create(channel);
var response = speech.Recognize(new RecognitionConfig()
{
    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
    SampleRateHertz = 16000,
    LanguageCode = "hu",
}, RecognitionAudio.FromFile("888.wav"));

foreach (var result in response.Results)
{
    foreach (var alternative in result.Alternatives)
    {
        Console.WriteLine(alternative.Transcript);
    }
}
You can find more samples at:
https://cloud.google.com/speech/docs/samples
Regards
I have a universal app that uses voice synthesis. Running under WP8.1, it works fine, but as soon as I try Win8.1 I start getting strange behaviour. The voice actually speaks once; however, on the second run (within the same app), the following code hangs:
string toSay = "hello";
System.Diagnostics.Debug.WriteLine("{0}: Speak {1}", DateTime.Now, toSay);
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    System.Diagnostics.Debug.WriteLine("{0}: After synthesizer instantiated", DateTime.Now);
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);
    System.Diagnostics.Debug.WriteLine("{0}: After voice stream", DateTime.Now);
The reason for the debug statements is that the code seems to have an uncertainty principle to it: when I step through it in the debugger, it executes past the SynthesizeTextToStreamAsync statement. However, when the breakpoints are removed, I only get the debug statement preceding it, never the one after.
The best I can deduce is that something bad happens during the first run (it does claim to complete, and it actually speaks the first time); then it gets stuck and can't play any more. The full code looks similar to this:
string toSay = "hello";
System.Diagnostics.Debug.WriteLine("{0}: Speak {1}", DateTime.Now, toSay);
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    System.Diagnostics.Debug.WriteLine("{0}: After synthesizer instantiated", DateTime.Now);
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);
    System.Diagnostics.Debug.WriteLine("{0}: After voice stream", DateTime.Now);

    MediaElement mediaElement;
    mediaElement = rootControl.Children.FirstOrDefault(a => a as MediaElement != null) as MediaElement;
    if (mediaElement == null)
    {
        mediaElement = new MediaElement();
        rootControl.Children.Add(mediaElement);
    }
    mediaElement.SetSource(voiceStream, voiceStream.ContentType);
    mediaElement.Volume = 1;
    mediaElement.IsMuted = false;

    var tcs = new TaskCompletionSource<bool>();
    mediaElement.MediaEnded += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.MediaFailed += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.Play();
    await tcs.Task;
}
Okay - I think I managed to get this working... although I'm unsure why.
using (SpeechSynthesizer synth = new SpeechSynthesizer())
{
    var voiceStream = await synth.SynthesizeTextToStreamAsync(toSay);

    MediaElement mediaElement;
    mediaElement = rootControl.Children.FirstOrDefault(a => a as MediaElement != null) as MediaElement;
    if (mediaElement == null)
    {
        mediaElement = new MediaElement();
        rootControl.Children.Add(mediaElement);
    }
    mediaElement.SetSource(voiceStream, voiceStream.ContentType);
    mediaElement.Volume = 1;
    mediaElement.IsMuted = false;

    var tcs = new TaskCompletionSource<bool>();
    mediaElement.MediaEnded += (o, e) => { tcs.TrySetResult(true); };
    mediaElement.Play();
    await tcs.Task;

    // Removing the control seems to free up whatever is locking
    rootControl.Children.Remove(mediaElement);
}
I am not sure what programming language you are using, but this may help. This is C#, so it could at least lead you in the right direction.
namespace Alexis
{
    public partial class frmMain : Form
    {
        SpeechRecognitionEngine _recognizer = new SpeechRecognitionEngine();
        SpeechSynthesizer Alexis = new SpeechSynthesizer();
        SpeechRecognitionEngine startlistening = new SpeechRecognitionEngine();
        DateTime timenow = DateTime.Now;
        Random rnd = new Random(); // used below for the random "sorry" reply

        // other code such as InitializeComponent and so on.

        private void frmMain_Load(object sender, EventArgs e)
        {
            _recognizer.SetInputToDefaultAudioDevice();
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(File.ReadAllLines(@"Default Commands.txt")))));
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Shell_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Social_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Web_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Default_SpeechRecognized);
            _recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(AlarmClock_SpeechRecognized);
            // AlarmAM and AlarmPM are presumably string collections defined elsewhere (not shown)
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(AlarmAM))));
            _recognizer.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices(AlarmPM))));
            _recognizer.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(_recognizer_SpeechDetected);
            _recognizer.RecognizeAsync(RecognizeMode.Multiple);

            startlistening.SetInputToDefaultAudioDevice();
            startlistening.LoadGrammarAsync(new Grammar(new GrammarBuilder(new Choices("alexis"))));
            startlistening.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(startlistening_SpeechRecognized);

            // other stuff here... Once you have this, you can create a handler method with your code, as follows:
        }
        private void Default_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            int ranNum;
            string speech = e.Result.Text;
            switch (speech)
            {
                #region Greetings
                case "hello":
                case "hello alexis":
                    timenow = DateTime.Now;
                    if (timenow.Hour >= 5 && timenow.Hour < 12)
                    { Alexis.SpeakAsync("Good morning " + Settings.Default.User); }
                    if (timenow.Hour >= 12 && timenow.Hour < 18)
                    { Alexis.SpeakAsync("Good afternoon " + Settings.Default.User); }
                    if (timenow.Hour >= 18 && timenow.Hour < 24)
                    { Alexis.SpeakAsync("Good evening " + Settings.Default.User); }
                    if (timenow.Hour < 5)
                    { Alexis.SpeakAsync("Hello " + Settings.Default.User + ", it's getting late"); }
                    break;
                case "whats my name":
                case "what is my name":
                    Alexis.SpeakAsync(Settings.Default.User);
                    break;
                case "stop talking":
                case "quit talking":
                    Alexis.SpeakAsyncCancelAll();
                    ranNum = rnd.Next(1, 3); // Next's upper bound is exclusive, so this yields 1 or 2
                    if (ranNum == 2)
                    { Alexis.Speak("sorry " + Settings.Default.User); }
                    break;
                #endregion
            }
        }
    }
}
Instead of putting the commands in the code, I recommend that you use a text document: once you have that, you can add your own commands to it and then load it in code. Also remember to reference System.Speech.
I hope this helps get you on the right track.
I am developing a WPF application which uses speech recognition. The events do not fire when the grammar words are spoken. Secondly, I am not sure whether the engine starts up or not. How can I check that? Following is the code.
namespace Summerproject_trial
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private SpeechRecognitionEngine recEngine = new SpeechRecognitionEngine();

        public MainWindow()
        {
            InitializeComponent();
            Choices mychoices = new Choices();
            mychoices.Add(new string[] { "Ok", "Test", "Hello" });
            GrammarBuilder gb = new GrammarBuilder();
            gb.Append(mychoices);
            Grammar mygrammar = new Grammar(gb);
            recEngine.LoadGrammarAsync(mygrammar);
            recEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recEngine_SpeechRecognized);
            recEngine.SetInputToDefaultAudioDevice();
        }

        void recEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            MessageBox.Show("You said: " + e.Result.Text);
        }
    }
}
You forgot to start listening to input.
Try this at the end of your constructor:
recEngine.RecognizeAsync(RecognizeMode.Multiple);
@Anri's answer is needed, but you also need to create the SpeechRecognitionEngine with a CultureInfo. (You can create a SpeechRecognitionEngine without a CultureInfo, but then you need to set the recognizer language explicitly.)
Also: mobile earphones (by which I assume you mean some sort of Bluetooth headset) will typically NOT work with System.Speech. The desktop SR engine requires higher-quality audio input than it can get over Bluetooth.
So, complete code that should work:
private SpeechRecognitionEngine recEngine =
    new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));

public MainWindow()
{
    InitializeComponent();
    Choices mychoices = new Choices();
    mychoices.Add(new string[] { "Ok", "Test", "Hello" });
    GrammarBuilder gb = new GrammarBuilder();
    gb.Append(mychoices);
    Grammar mygrammar = new Grammar(gb);
    recEngine.LoadGrammarAsync(mygrammar);
    recEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recEngine_SpeechRecognized);
    recEngine.SetInputToDefaultAudioDevice();
    recEngine.RecognizeAsync(RecognizeMode.Multiple);
}
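As for checking whether the engine has started: a rough way (my own suggestion, not from the original answers) is to subscribe to the engine's audio events and watch the debug output:

// Both events are raised by SpeechRecognitionEngine once audio is flowing
recEngine.AudioStateChanged += (s, a) =>
    System.Diagnostics.Debug.WriteLine("Audio state: " + a.AudioState); // Stopped / Silence / Speech
recEngine.SpeechDetected += (s, a) =>
    System.Diagnostics.Debug.WriteLine("Speech detected at " + a.AudioPosition);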
I have an app in which I need to play a wav file when a key is pressed or a button is clicked. I use the SoundPlayer class, but when I try to play another wav file at the same time, the one that was playing stops.
Is there a way to play multiple wav files at the same time?
If there is, could you please give me examples or point me to a tutorial?
Here's what I've got so far:
private void pictureBox20_Click(object sender, EventArgs e)
{
    if (label30.Text == "Waiting 15.wav")
    {
        MessageBox.Show("No beat loaded");
        return;
    }
    using (SoundPlayer player = new SoundPlayer(label51.Text))
    {
        try
        {
            player.Play();
        }
        catch (FileNotFoundException)
        {
            MessageBox.Show("File has been moved." + "\n" + "Please relocate it now!");
        }
    }
}
Thanks!
You can do this with the System.Windows.Media.MediaPlayer class. Note that you will need to add references to WindowsBase and PresentationCore.
private void pictureBox20_Click(object sender, EventArgs e)
{
    const bool loopPlayer = true;
    if (label30.Text == "Waiting 15.wav")
    {
        MessageBox.Show("No beat loaded");
        return;
    }
    var player = new System.Windows.Media.MediaPlayer();
    try
    {
        player.Open(new Uri(label51.Text));
        if (loopPlayer)
            player.MediaEnded += MediaPlayer_Loop;
        player.Play();
    }
    catch (FileNotFoundException)
    {
        MessageBox.Show("File has been moved." + "\n" + "Please relocate it now!");
    }
}
EDIT: You can loop the sound by subscribing to the MediaEnded event.
void MediaPlayer_Loop(object sender, EventArgs e)
{
    MediaPlayer player = sender as MediaPlayer;
    if (player == null)
        return;
    player.Position = new TimeSpan(0);
    player.Play();
}
Based on your code, I'm guessing that you are writing some kind of music production software. I'm honestly not sure that this method will loop perfectly every time, but as far as I can tell, it's the only way to loop using the MediaPlayer control.
You cannot play two sounds at once using SoundPlayer: it uses the native WinAPI PlaySound function, which does not support playing multiple sounds at the same time.
A better option would be to reference Windows Media Player:
Add a reference to C:\Windows\System32\wmp.dll
var player1 = new WMPLib.WindowsMediaPlayer();
player1.URL = @"C:\audio_output\sample1.wav"; // setting URL starts playback (autoStart is on by default)

var player2 = new WMPLib.WindowsMediaPlayer();
player2.URL = @"C:\audio_output\sample2.wav"; // both players play at the same time
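One caveat (my own note, not verified against the original answer): keep references to the players alive, for example in fields, for as long as the sounds should play; if a player instance is garbage collected, its playback can stop early.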