Getting data from a microphone in C#

I'm trying to record audio data from a microphone (or line-in), and then replay it again, using C#.
Any suggestions on how I can achieve this?

See Console and multithreaded recording and playback; the example below uses the RecorderEx and PlayerEx classes:
class Program
{
    static RecorderEx rex = new RecorderEx(true);
    static PlayerEx play = new PlayerEx(true);
    // 1 channel, 16 bits per sample, 44.1 kHz PCM
    static IntPtr pcmFormat = AudioCompressionManager.GetPcmFormat(1, 16, 44100);

    static void Main(string[] args)
    {
        rex.Data += new RecorderEx.DataEventHandler(rex_Data);
        rex.Open += new EventHandler(rex_Open);
        rex.Close += new EventHandler(rex_Close);
        rex.Format = pcmFormat;
        rex.StartRecord();
        Console.WriteLine("Please press enter to exit!");
        Console.ReadLine();
        rex.StopRecord();
    }

    // Recorder opened: open the player with the same format and start playback.
    static void rex_Open(object sender, EventArgs e)
    {
        play.OpenPlayer(pcmFormat);
        play.StartPlay();
    }

    static void rex_Close(object sender, EventArgs e)
    {
        play.ClosePlayer();
    }

    // Each captured buffer is handed straight to the player.
    static void rex_Data(object sender, DataEventArgs e)
    {
        byte[] data = e.Data;
        play.AddData(data);
    }
}

Have a look at NAudio, which is actively maintained:
https://github.com/naudio/NAudio
It's available as a NuGet package.
Information about Output devices:
https://github.com/naudio/NAudio/blob/master/Docs/OutputDeviceTypes.md
For performance, keep in mind the following note from the FAQ:
"Is .NET Performance Good Enough for Audio?
While .NET cannot compete with unmanaged languages for very low latency audio work, it still performs better than many people would expect. On a fairly modest PC, you can quite easily mix multiple WAV files together, including pass them through various effects and codecs, play back glitch free with a latency of around 50ms."
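For the original question, here is a minimal NAudio sketch (a starting point, not production code) that captures the default microphone with WaveInEvent and plays it straight back through a BufferedWaveProvider and WaveOutEvent; device selection and error handling are omitted:
using System;
using NAudio.Wave;

class MicLoopback
{
    static void Main()
    {
        // Capture the default input device at 44.1 kHz, 16-bit, mono.
        var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };

        // Buffer the captured bytes and feed them straight to an output device.
        var buffer = new BufferedWaveProvider(waveIn.WaveFormat) { DiscardOnBufferOverflow = true };
        waveIn.DataAvailable += (s, e) => buffer.AddSamples(e.Buffer, 0, e.BytesRecorded);

        var waveOut = new WaveOutEvent();
        waveOut.Init(buffer);

        waveIn.StartRecording();
        waveOut.Play();

        Console.WriteLine("Recording and playing back. Press Enter to stop.");
        Console.ReadLine();

        waveIn.StopRecording();
        waveOut.Dispose();
        waveIn.Dispose();
    }
}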

Related

600 Milliseconds delay in EmguCv real-time video Decoding

I am developing a real-time computer vision application in C#, but I am not able to optimize the EmguCV decoding. I have an 800-millisecond delay relative to ground truth and a 600-millisecond delay relative to the IP camera provider's application (AXIS).
How can I optimize the code so that the delay is at most 250 milliseconds?
Here is code for grabbing an image.
capture1 = new Capture(IpFirstCamTxt.Text); // create a camera capture from the RTSP stream
capture2 = new Capture(Ip2ndCamTxt.Text);
capture3 = new Capture(Ip3rdCamTxt.Text);
capture4 = new Capture(Ip4thCamTxt.Text);
capture1.Start();
capture2.Start();
capture3.Start();
capture4.Start();
capture1.ImageGrabbed += ProcessFrame1;
capture2.ImageGrabbed += ProcessFrame2;
capture3.ImageGrabbed += ProcessFrame3;
capture4.ImageGrabbed += ProcessFrame4;
private void ProcessFrame1(object sender, EventArgs arg)
{
    capture1.Retrieve(img1, 3);
    pictureBox1.Image = img1.ToBitmap();
}
private void ProcessFrame2(object sender, EventArgs arg)
{
    capture2.Retrieve(img2, 3);
    pictureBox3.Image = img2.ToBitmap();
}
private void ProcessFrame3(object sender, EventArgs arg)
{
    capture3.Retrieve(img3, 3);
    pictureBox4.Image = img3.ToBitmap();
}
private void ProcessFrame4(object sender, EventArgs arg)
{
    capture4.Retrieve(img4, 3);
    pictureBox5.Image = img4.ToBitmap();
}
Stopwatch results comparing my application with the camera provider's application were captured in a screenshot (not reproduced here).
The problem described above was solved using LIVE555, a real-time RTSP stream-capturing library. I used it from C++ and shared the decoded frames with C# through shared memory.
The delay was reduced to roughly 200 milliseconds.
If anyone needs real-time video streaming, LIVE555 is a good choice.
I will upload the project to my GitHub.
Source: Real-time RTSP Stream Decoding
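For reference, here is a rough sketch of the C++-to-C# hand-off described above using a memory-mapped file; the map name "Cam1Frames" and the fixed 1280x720 BGR frame size are assumptions for the sake of the example, not part of the original project:
using System.Drawing;
using System.Drawing.Imaging;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

// Hypothetical reader for frames that a native LIVE555 decoder writes into a
// named shared-memory block. Map name and frame size are assumed values.
class SharedFrameReader
{
    const int Width = 1280, Height = 720, Stride = Width * 3;

    public static Bitmap ReadFrame()
    {
        using (var mmf = MemoryMappedFile.OpenExisting("Cam1Frames"))
        using (var view = mmf.CreateViewAccessor(0, Stride * Height))
        {
            byte[] pixels = new byte[Stride * Height];
            view.ReadArray(0, pixels, 0, pixels.Length);

            // Copy the raw BGR bytes into a GDI+ bitmap for display.
            var bmp = new Bitmap(Width, Height, PixelFormat.Format24bppRgb);
            BitmapData data = bmp.LockBits(new Rectangle(0, 0, Width, Height),
                                           ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
            Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
            bmp.UnlockBits(data);
            return bmp;
        }
    }
}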

NAudio Windows Forms application has a delay when looping back (input to DirectSoundOut)

Problem:
As part of a school project, I am trying to build an application that works as a guitar amp using the NAudio library.
When I plug in the guitar it is recognized and everything works properly, but there is a huge delay between the input and the output from the speakers.
Here is my source code:
private void button2_Click(object sender, EventArgs e)
{
    if (sourceList.SelectedItems.Count == 0) return;
    int deviceNumber = sourceList.SelectedItems[0].Index;

    sourceStream = new WaveIn();
    sourceStream.DeviceNumber = deviceNumber;
    sourceStream.WaveFormat = new WaveFormat(44100, WaveIn.GetCapabilities(deviceNumber).Channels);
    sourceStream.StartRecording();

    WaveInProvider waveIn = new WaveInProvider(sourceStream);
    waveOut = new DirectSoundOut();
    waveOut.Init(waveIn);
    waveOut.Play();
}
In this code I handle a button click that takes the selected input (microphone/guitar) and routes the sound it receives to the output.
The delay between the input and the output is around one second, which is a deal breaker.
How do I reduce the delay and make the application more responsive?
DirectSoundOut and WaveIn are not particularly low-latency audio APIs. For something like this, ASIO is preferable. AsioOut is unfortunately a bit more complicated to work with, but it should allow you to get much lower latencies.
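For illustration, here is a rough AsioOut sketch; it assumes an ASIO driver is installed (e.g. the interface's own driver or ASIO4ALL) and routes one input channel back out through a BufferedWaveProvider. Treat it as an untested starting point rather than a definitive implementation:
using System;
using NAudio.Wave;

class AsioMonitor
{
    static void Main()
    {
        // Use the first installed ASIO driver (assumes at least one exists).
        using var asioOut = new AsioOut(AsioOut.GetDriverNames()[0]);

        // Recorded samples arrive as 32-bit floats; buffer them for playback.
        var format = WaveFormat.CreateIeeeFloatWaveFormat(44100, 1);
        var buffer = new BufferedWaveProvider(format) { DiscardOnBufferOverflow = true };

        // One input channel at 44.1 kHz, output fed from the buffer above.
        asioOut.InitRecordAndPlayback(buffer, 1, 44100);
        asioOut.AudioAvailable += (s, e) =>
        {
            float[] samples = e.GetAsInterleavedSamples();
            byte[] bytes = new byte[samples.Length * 4];
            Buffer.BlockCopy(samples, 0, bytes, 0, bytes.Length);
            buffer.AddSamples(bytes, 0, bytes.Length);
        };

        asioOut.Play();
        Console.WriteLine("Monitoring... press Enter to stop.");
        Console.ReadLine();
        asioOut.Stop();
    }
}
With ASIO buffer sizes set in the driver's control panel, this kind of loop can get well below the latency of WaveIn plus DirectSoundOut.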

Wasapi audio quality

I'm new to using WASAPI in NAudio and I'm having an issue with the sound quality. About one time in ten the audio sounds perfect when I record; the other nine times it is fuzzy. I was wondering if there is any reason for this.
Here is the code I'm using to record the audio:
public void CaptureAudio(String Name)
{
    capture = new WasapiLoopbackCapture();
    capture.Initialize();
    w = new WaveWriter(Name, capture.WaveFormat);
    capture.DataAvailable += (s, capData) =>
    {
        w.Write(capData.Data, capData.Offset, capData.ByteCount);
    };
    capture.Start();
}

public void StartRecording(String Name)
{
    new Thread(delegate() { CaptureAudio(Name); }).Start();
}

public void StopCapture()
{
    capture.Stop();
    capture.Dispose();
    w.Dispose();
}
First of all, as Mark already said, your code does not look like NAudio; it looks like CSCore. If you are using CSCore, please create a new console application and paste in the following code (I've modified yours). I tried that code on three different systems without any problems, and all 20 files came out fine, with no fuzziness.
private static void Main(string[] args)
{
    for (int i = 0; i < 20; i++)
    {
        Console.WriteLine(i);
        Capture(i);
    }
}

private static void Capture(int index)
{
    string Name = String.Format("dump-{0}.wav", index);
    using (WasapiCapture capture = new WasapiLoopbackCapture())
    {
        capture.Initialize();
        using (var w = new WaveWriter(Name, capture.WaveFormat))
        {
            capture.DataAvailable += (s, capData) => w.Write(capData.Data, capData.Offset, capData.ByteCount);
            capture.Start();
            Thread.Sleep(10000);
            capture.Stop();
        }
    }
}
The problem turned out to be Xbox Music or Windows Media Player running in the background; apparently they hog all the sound card's resources.
A few comments on your code:
First, have you modified WasapiLoopbackCapture in some way? The WaveInEventArgs on DataAvailable does not have the properties shown in your code. I'd expect you have some kind of block alignment error going on, so that your fuzzy sound is not reading on exact sample boundaries. Also NAudio does not have a class called WaveWriter - it's WaveFileWriter. Are you sure you are using NAudio?
Second, there is no need to start a new thread in StartRecording. WasapiLoopbackCapture will be using a background thread already.
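For comparison, a rough NAudio equivalent (an untested sketch) would use WasapiLoopbackCapture and WaveFileWriter; note that NAudio's event args expose Buffer and BytesRecorded rather than Data/Offset/ByteCount:
using System;
using System.Threading;
using NAudio.Wave;

class LoopbackToWav
{
    static void Main()
    {
        // Capture whatever the default output device is playing and write it to a WAV file.
        var capture = new WasapiLoopbackCapture();
        var writer = new WaveFileWriter("dump.wav", capture.WaveFormat);

        capture.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);
        capture.RecordingStopped += (s, e) => { writer.Dispose(); capture.Dispose(); };

        capture.StartRecording();
        Thread.Sleep(10000);   // record for ten seconds
        capture.StopRecording();
    }
}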

Read Weight from a Serial Mettler Toledo Digital Scale

I am trying to read the weight from a digital scale in a C# app. I found this question, which is exactly what I am trying to do, but for me the function below never runs.
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    this.Invoke(new EventHandler(DoUpdate));
}
I have checked the scale in Device Manager; its location is set to Port_#0004.Hub_#0003 and it appears to be working fine.
I was not sure about the scale's port number, so I ran
var test = SerialPort.GetPortNames();
and only COM1 is returned.
Edit 1: When I call int a = port.ReadByte(); my application hangs and execution never moves past that statement.
I faced a problem like this and solved it by changing the COM configuration on the device (Configuration > Communication > Connections) to SICS. I don't know your scale model, but maybe my code can help: [Reading data from Mettler Toledo (IND560) scale device using C#]
Could you try polling instead of using the DataReceived event? Also, I believe the DataReceived event has a threshold (ReceivedBytesThreshold) it must reach before it fires; you might want to look into that too.
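For example, a minimal polling sketch along those lines, assuming the scale speaks MT-SICS on COM1 (the port settings, the SI command, and the exact reply format all depend on your model and configuration):
using System;
using System.IO.Ports;

class ScalePoller
{
    static void Main()
    {
        // Port name and settings are assumptions; check the scale's interface menu.
        using var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One)
        {
            NewLine = "\r\n",
            ReadTimeout = 2000,
            WriteTimeout = 2000
        };
        port.Open();

        // "SI" is the MT-SICS command for "send the current weight immediately".
        port.WriteLine("SI");
        string reply = port.ReadLine();   // e.g. "S S      1.234 kg"
        Console.WriteLine(reply);
    }
}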
Are you able to get the serial number from the balance? This should be the first thing you do when connecting, since it lets you verify the connection is established. If you are having trouble connecting through a C# interface, try HyperTerminal first: you can vary a lot of settings really quickly and dial in on the right ones to use. The balance should be able to handle a wide variety of baud rates, stop bits, and so on; they are usually pretty adaptable. But do try HyperTerminal.
I'm looking for the PDF, but there is a very long list of available commands (depending on your model); the PDF is about 130 pages long. Have you read it?
Please see this post; I used Mike O'Brien's HidLibrary to connect.
using System;
using System.Linq;
using System.Text;
using HidLibrary;

namespace MagtekCardReader
{
    class Program
    {
        private const int VendorId = 0x0801;
        private const int ProductId = 0x0002;
        private static HidDevice _device;

        static void Main()
        {
            _device = HidDevices.Enumerate(VendorId, ProductId).FirstOrDefault();
            if (_device != null)
            {
                _device.OpenDevice();
                _device.Inserted += DeviceAttachedHandler;
                _device.Removed += DeviceRemovedHandler;
                _device.MonitorDeviceEvents = true;
                _device.ReadReport(OnReport);
                Console.WriteLine("Reader found, press any key to exit.");
                Console.ReadKey();
                _device.CloseDevice();
            }
            else
            {
                Console.WriteLine("Could not find reader.");
                Console.ReadKey();
            }
        }

        private static void OnReport(HidReport report)
        {
            if (!_device.IsConnected) { return; }
            var cardData = new Data(report.Data);
            Console.WriteLine(!cardData.Error ? Encoding.ASCII.GetString(cardData.CardData) : cardData.ErrorMessage);
            _device.ReadReport(OnReport);
        }

        private static void DeviceAttachedHandler()
        {
            Console.WriteLine("Device attached.");
            _device.ReadReport(OnReport);
        }

        private static void DeviceRemovedHandler()
        {
            Console.WriteLine("Device removed.");
        }
    }
}

Simple C# Screen sharing application

I am looking to create a very basic screen sharing application in C#. No remote control necessary. I just want a user to be able to broadcast their screen to a webserver.
How should I implement this? (Any pointer in the right direction will be greatly appreciated).
It does NOT need to be high FPS; updating even every 5 seconds or so would be sufficient. Do you think it would be enough to just upload a screenshot to my web server every 5 seconds?
I previously blogged about how remote screen sharing software works here; it is not specific to C#, but it gives a good fundamental understanding of the topic. Also linked in that article is the remote frame buffer spec, which you'll probably want to read up on as well.
Basically you will want to take screenshots, transmit them, and display them on the other side. You can keep the last screenshot and compare the new one against it in blocks to see which blocks you need to send. You would typically do some sort of compression before sending the data.
To add remote control, you can track mouse movement, transmit it, and set the pointer position on the other end; the same goes for keystrokes.
As far as compression goes in C#, you can simply use JpegBitmapEncoder to create your screenshots with Jpeg compression with the quality that you want.
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.QualityLevel = 40;
To compare blocks, you are probably best off creating a hash of the old block and the new one and then checking whether they are the same. You can use any hashing algorithm you want for this.
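As an illustration, here is one way to hash a single rectangular block of a System.Drawing bitmap so that successive frames can be compared block by block; MD5 is used only because it is cheap and built in, and the block size is up to you:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Security.Cryptography;

static class BlockComparer
{
    // Returns a hash of one rectangular block of a bitmap.
    public static byte[] HashBlock(Bitmap bmp, Rectangle block)
    {
        using (Bitmap copy = bmp.Clone(block, bmp.PixelFormat))
        {
            // Read the block's raw pixel bytes.
            BitmapData data = copy.LockBits(new Rectangle(0, 0, copy.Width, copy.Height),
                                            ImageLockMode.ReadOnly, copy.PixelFormat);
            byte[] pixels = new byte[data.Stride * copy.Height];
            Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
            copy.UnlockBits(data);

            using (var md5 = MD5.Create())
                return md5.ComputeHash(pixels);
        }
    }
}
Keep the previous frame's hash for each block and resend only the blocks whose hashes differ.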
Here's code to take a screenshot, uncompressed as a bitmap:
public static Bitmap TakeScreenshot()
{
    Rectangle totalSize = Rectangle.Empty;
    foreach (Screen s in Screen.AllScreens)
        totalSize = Rectangle.Union(totalSize, s.Bounds);

    Bitmap screenShotBMP = new Bitmap(totalSize.Width, totalSize.Height, PixelFormat.Format32bppArgb);
    Graphics screenShotGraphics = Graphics.FromImage(screenShotBMP);
    screenShotGraphics.CopyFromScreen(totalSize.X, totalSize.Y, 0, 0, totalSize.Size, CopyPixelOperation.SourceCopy);
    screenShotGraphics.Dispose();
    return screenShotBMP;
}
Now just compress it and send it over the wire, and you're done.
This code combines all screens in a multiscreen setup into one image. Tweak as needed.
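To match the question's "upload a screenshot every 5 seconds" idea, here is a hedged sketch that reuses the TakeScreenshot() method above (assumed to be accessible from the same class), compresses the frame with GDI+'s JPEG encoder, and posts the bytes to a placeholder URL; swap in your own endpoint and authentication:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class ScreenUploader
{
    static async Task Main()
    {
        using var client = new HttpClient();
        while (true)
        {
            using (Bitmap shot = TakeScreenshot())   // method from the answer above
            using (var ms = new MemoryStream())
            {
                shot.Save(ms, ImageFormat.Jpeg);     // GDI+ JPEG compression

                var content = new ByteArrayContent(ms.ToArray());
                // Placeholder endpoint; point this at your own web server.
                await client.PostAsync("https://example.com/upload", content);
            }
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}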
Well, it can be as simple as taking screenshots, compressing them, and then sending them over the wire. However, there is existing software that already does this. Is this for practice?
I'm looking to do something similar, and I just found this up on CodeProject. I think this will help you.
http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server
The key player for sharing/replicating a screen is a COM component called RDPViewer.
Add that COM component to your Windows Form and to the project References as well,
then add this code to your form and you will get the screen replicated in your form:
using RDPCOMAPILib;
using System;
using System.Windows.Forms;

namespace screenSharingAttempt
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        RDPSession x = new RDPSession();

        private void Incoming(object Guest)
        {
            IRDPSRAPIAttendee MyGuest = (IRDPSRAPIAttendee)Guest;
            MyGuest.ControlLevel = CTRL_LEVEL.CTRL_LEVEL_INTERACTIVE;
        }

        // access to COM/firewall will prompt
        private void button1_Click(object sender, EventArgs e)
        {
            x.OnAttendeeConnected += Incoming;
            x.Open();
        }

        // connect
        private void button2_Click(object sender, EventArgs e)
        {
            IRDPSRAPIInvitation Invitation = x.Invitations.CreateInvitation("Trial", "MyGroup", "", 10);
            textBox1.Text = Invitation.ConnectionString;
        }

        // share screen
        private void button4_Click(object sender, EventArgs e)
        {
            string Invitation = textBox1.Text; // "";// Interaction.InputBox("Insert Invitation ConnectionString", "Attention");
            axRDPViewer1.Connect(Invitation, "User1", "");
        }

        // stop sharing
        private void button5_Click(object sender, EventArgs e)
        {
            axRDPViewer1.Disconnect();
        }
    }
}
