Decrease or compress audio quality in CSCore - C#

I am creating an application which streams audio over UDP. Currently it works fine, but it uses a lot of bandwidth (up to 500 kbps). Is this normal? Is there a way I can compress the audio, or slightly reduce its quality, so it uses less bandwidth?
WasapiCapture capture = new WasapiLoopbackCapture();
capture.Initialize();
// Attach the handler before starting so no captured data is missed.
capture.DataAvailable += (object sender, DataAvailableEventArgs e) =>
{
    // Send data here (works fine)
};
capture.Start();

Yes, this is normal. It's capturing at the device's default format. You need to set your bitrate and codec.
This is what I am doing:
_soundIn = new WasapiLoopbackCapture(0, new CSCore.WaveFormat(44100, 16, 1, CSCore.AudioEncoding.MpegLayer3));
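If forcing the capture format does not work on your device (WASAPI loopback in shared mode usually captures in the shared mixer format), an alternative is to convert the captured stream before sending it. Here is a minimal sketch using CSCore's fluent conversion extensions; the 22.05 kHz mono 16-bit target and the commented-out UDP send call are assumptions:
using System;
using CSCore;
using CSCore.SoundIn;
using CSCore.Streams;

var capture = new WasapiLoopbackCapture();
capture.Initialize();

// Wrap the capture so the fluent conversion extensions can be applied.
var source = new SoundInSource(capture) { FillWithZeros = false };

// 22.05 kHz, mono, 16-bit PCM is ~44 kB/s, far less than the typical
// 44.1 kHz stereo 32-bit float mixer format (~350 kB/s).
var converted = source
    .ChangeSampleRate(22050) // uses the DMO resampler internally
    .ToSampleSource()
    .ToMono()
    .ToWaveSource(16);

byte[] buffer = new byte[converted.WaveFormat.BytesPerSecond / 2]; // half a second
source.DataAvailable += (s, e) =>
{
    int read;
    while ((read = converted.Read(buffer, 0, buffer.Length)) > 0)
    {
        // udpClient.Send(buffer, read, remoteEndPoint); // send the smaller payload
    }
};

capture.Start();
This only shrinks the PCM format; for true compression you would still put a codec (MP3, AAC) on top, for example via CSCore's MediaFoundation encoder classes, before sending the bytes.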

Related

Screen recording as video, audio on SharpAvi - Audio not recording

Requirement:
I am trying to capture audio/video of the Windows screen with the SharpAvi example, combined with a loopback audio stream based on the NAudio example.
I am using C# and WPF to achieve this.
A couple of NuGet packages:
SharpAvi - for video capturing
NAudio - for audio capturing
What has been achieved:
I have successfully integrated the provided sample, and I'm trying to capture the audio through NAudio alongside the SharpAvi video stream so the recording contains both video and audio.
Issue:
Whatever I write to the audio stream of the SharpAvi video, the output is recorded with video only; the audio is empty.
Checking the audio alone to make sure:
When I capture the audio as a separate file called "Out.wav", it is recorded as expected and I can hear the audio. So for now I'm concluding that the issue is only in the integration with video via SharpAvi:
writterx = new WaveFileWriter("Out.wav", audioSource.WaveFormat);
Full code to reproduce the issue:
https://drive.google.com/open?id=1H7Ziy_yrs37hdpYriWRF-nuRmmFbsfe-
Code glimpse from Recorder.cs
NAudio Initialization:
audioSource = new WasapiLoopbackCapture();
audioStream = CreateAudioStream(audioSource.WaveFormat, encodeAudio, audioBitRate);
audioSource.DataAvailable += audioSource_DataAvailable;
Capturing audio bytes and writing them to the SharpAvi audio stream:
private void audioSource_DataAvailable(object sender, WaveInEventArgs e)
{
    var signalled = WaitHandle.WaitAny(new WaitHandle[] { videoFrameWritten, stopThread });
    if (signalled == 0)
    {
        audioStream.WriteBlock(e.Buffer, 0, e.BytesRecorded);
        audioBlockWritten.Set();
        Debug.WriteLine("Bytes: " + e.BytesRecorded);
    }
}
Can you please help me out with this? Any other way to meet my requirement is also welcome.
Let me know if any further details are needed.
Obviously the author doesn't need it anymore, but since I ran into the same problem, others might.
The problem in my case was that audio arrived every 0.1 seconds, and I attempted to write both new video and audio at the same time. Grabbing the new video data (taking the screenshot) took too long, so each frame was added every 0.3 seconds instead of every 0.1. That caused the audio stream to drift out of sync with the video and not play properly in video players (or whatever it was). After optimizing the code a little to stay within the 0.1-second budget, the problem was gone.
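For reference, a minimal sketch of the video-side loop that pairs with the audioSource_DataAvailable handler above, following the wait-handle pattern from the SharpAvi sample. GetScreenshot(), the stream objects, and the wait handles are assumptions based on that sample:
var frameInterval = TimeSpan.FromSeconds(0.1); // must match the audio cadence
var stopwatch = Stopwatch.StartNew();
while (!stopThread.WaitOne(0))
{
    var frameStart = stopwatch.Elapsed;
    byte[] frame = GetScreenshot(); // keep this well under 100 ms
    videoStream.WriteFrame(true, frame, 0, frame.Length);
    videoFrameWritten.Set(); // unblocks the audio handler above

    // Wait until the matching audio block has been written (or we are stopping).
    WaitHandle.WaitAny(new WaitHandle[] { audioBlockWritten, stopThread });

    // Sleep off any remaining budget so frames stay on a 0.1-second grid.
    var remaining = frameInterval - (stopwatch.Elapsed - frameStart);
    if (remaining > TimeSpan.Zero)
        Thread.Sleep(remaining);
}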

1-second video stream latency with Emgu CV

I have a SightLine decoder device connected to my PC via Ethernet.
I used Emgu CV to capture the video stream and view it in an image box.
Here is part of the code:
_capture = new Capture("udp://#169.254.1.144:15004");
_capture.ImageGrabbed += ProcessFrame;

Image<Bgr, Byte> frame, frame1;

private void ProcessFrame(object sender, EventArgs arg)
{
    frame = _capture.RetrieveBgrFrame();
    pictureBox1.Image = frame.ToBitmap();
}
The video is shown in the image box, but with about 1 second of latency. I counted the frames reaching the ProcessFrame function: it is 12 fps, which is correct.
Does the ImageGrabbed event cause this latency?
Why does the latency occur?
Note: I used a USB camera instead of the SightLine and it worked fine; the SightLine's own player, which can play the camera stream over Ethernet, also works fine.
This is caused by the length of the default buffer used by the Capture object. Raw OpenCV has a CV_CAP_PROP_BUFFERSIZE property you can set to alter this value using .set().
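Emgu CV exposes this property too; a minimal sketch, assuming a 3.x+ version where Capture became VideoCapture and the flag is CapProp.Buffersize (whether it is honored depends on the capture backend behind the UDP stream):
using Emgu.CV;
using Emgu.CV.CvEnum;

var capture = new VideoCapture("udp://#169.254.1.144:15004"); // URL as in the question
capture.SetCaptureProperty(CapProp.Buffersize, 1); // keep at most one queued frame
capture.ImageGrabbed += ProcessFrame;
capture.Start();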

C# - what is the best way to record live frames as video

I have been looking around for a way to convert live frames into video, and I found the NReco.VideoConverter FFMpeg wrapper. The problem is that writing each frame to the ConvertLiveMediaTask (async live media conversion task) takes too long.
I have an event that provides raw frames (1920x1080, 25 fps) from an IP camera. Whenever I get a frame, I do the following:
// Image available event fired
// ...
// ...
// Record video is true
if (record)
{
    //////////////############# Time-taking part ##############//////////////////////
    var bd = frameBmp.LockBits(new Rectangle(0, 0, frameBmp.Width, frameBmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    var buf = new byte[bd.Stride * frameBmp.Height];
    Marshal.Copy(bd.Scan0, buf, 0, buf.Length);
    // Write to ConvertLiveMediaTask
    convertLiveMediaTask.Write(buf, 0, buf.Length); // ffMpegTask
    frameBmp.UnlockBits(bd);
    //////////////////////////////////////////////////////////////////////////////////
}
As the part above takes too much time, I am losing frames.
// Stop recording
convertLiveMediaTask.Stop(); // ffMpegTask
For stopping the recording I have used a BackgroundWorker, because saving the media to a file takes too much time.
My question is: how can I write the frames to the ConvertLiveMediaTask faster? Is there any possibility to write them in the background?
Please give me suggestions.
I'm sure most of the time is spent by FFMpeg encoding and compressing the raw bitmaps (if you encode them with H.264 or something like that) because of the Full HD resolution (NReco.VideoConverter is a wrapper around FFMpeg). You must know that real-time encoding of Full HD is a VERY CPU-consuming task; if your computer is not able to do that, you may try playing with the FFMpeg encoding parameters (decrease video quality / compression ratio, etc.) or use an encoder that requires less CPU.
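For example, a minimal sketch of passing cheaper encoding parameters through NReco.VideoConverter's ConvertSettings; the format strings and the x264 options here are assumptions to trade quality for CPU, so check them against the NReco docs:
var ffmpeg = new NReco.VideoConverter.FFMpegConverter();
var settings = new NReco.VideoConverter.ConvertSettings
{
    // Describe the raw input frames (bgr24 matches Format24bppRgb).
    CustomInputArgs = "-pix_fmt bgr24 -video_size 1920x1080 -framerate 25",
    // Cheaper x264 settings: fastest preset, higher CRF = lower quality.
    CustomOutputArgs = "-preset ultrafast -crf 28"
};
var task = ffmpeg.ConvertLiveMedia("rawvideo", "out.mp4",
    NReco.VideoConverter.Format.mp4, settings);
task.Start();
// Then task.Write(buf, 0, buf.Length) as in the code above, and task.Stop() at the end.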
If you need to record a limited-time live stream, you can split video capturing and compressing/saving into two threads.
Use, for example, a ConcurrentQueue to buffer live frames (enqueue) on one thread without delay, while the other thread saves those frames at whatever pace it can (dequeue). This way you will not lose frames; see the sketch below.
Obviously you will have some strain on RAM, and after stopping the live video there will be a delay while the saving thread finishes.
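A minimal sketch of that producer/consumer split, using BlockingCollection (a ConcurrentQueue with blocking and bounding built in); the queue capacity and the reuse of convertLiveMediaTask from the question are assumptions:
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading.Tasks;

var frames = new BlockingCollection<byte[]>(boundedCapacity: 250); // ~10 s at 25 fps

// Consumer thread: drains the queue at whatever pace FFMpeg can sustain.
var writerTask = Task.Run(() =>
{
    foreach (var buf in frames.GetConsumingEnumerable())
        convertLiveMediaTask.Write(buf, 0, buf.Length);
    convertLiveMediaTask.Stop(); // runs once the backlog is drained
});

// Producer (the frame event): only copy the bitmap bytes, enqueue, and return.
if (!frames.TryAdd(buf))
    Debug.WriteLine("Queue full, frame dropped"); // drop rather than block the camera event

// When recording stops:
frames.CompleteAdding(); // lets the writer finish the backlog
writerTask.Wait();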

Detect pause in SerialPort input

In my application, any phone can make a voice connection to my 3G USB modem, and the call gets picked up immediately. The application receives the audio as PCM (8000 Hz, 16-bit, mono) through a serial port and uses Microsoft's Speech Synthesizer to talk back to the caller.
The problem is that the application should talk back only when the caller has stopped speaking. How can I detect that?
I tried implementing a timer which resets itself whenever data is received from the serial port, so that when the timer 'ticks' it should mean the caller was silent for the whole interval. But it doesn't work that way. What did I do wrong?
private void DataRecdFromSerial(object sender, SerialDataReceivedEventArgs e)
{
    say.Stop(); say.Start(); // reset timer with interval 5000
    int n = usb.BytesToRead;
    byte[] comBuffer = new byte[n];
    usb.Read(comBuffer, 0, n);
    if (n > 0) // comBuffer.Length was always > 0; test the bytes actually read instead
    {
        wfw.Write(comBuffer, 0, n); // NAudio WaveFileWriter
    }
}
private void say_Tick(object sender, EventArgs e)
{
    // Caller stopped speaking for 5 seconds (not working)
}
Why would the data flow stop when there is silence? You will get a continuous stream as long as the line is connected; that is the most logical behavior one can expect from any software or electrical engineering implementation today. So you need to analyze the sound wave and compute the root-mean-square (RMS) amplitude to estimate the energy, then compare it to a threshold that you fix by empirical testing (because 'silence' is actually low-level noise that you have to accept).
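A minimal sketch of that RMS check for 16-bit mono PCM; the threshold value is an assumption to be tuned against real calls:
static bool IsSilence(byte[] buffer, int count, double threshold = 500.0)
{
    if (count < 2) return true;
    int samples = count / 2;
    double sumSquares = 0;
    for (int i = 0; i < samples; i++)
    {
        short sample = BitConverter.ToInt16(buffer, i * 2); // little-endian 16-bit PCM
        sumSquares += (double)sample * sample;
    }
    double rms = Math.Sqrt(sumSquares / samples);
    return rms < threshold; // below the noise floor -> treat as silence
}
In DataRecdFromSerial you would then reset the timer only when !IsSilence(comBuffer, n), so a tick really means the caller was quiet for the whole interval.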

Problem downloading pictures from a Canon camera to a PC

I connected an EOS Canon camera to a PC.
I have an application that can take a picture remotely and download the image to the PC,
but when I remove the SD card from the camera, I can't download the image from the buffer to the PC.
// register object event callback
err = EDSDK.EdsSetObjectEventHandler(obj.camdevice, EDSDK.ObjectEvent_All, objectEventHandler, new IntPtr(0));
if (err != EDSDK.EDS_ERR_OK)
    Debug.WriteLine("Error registering object event handler");
public uint objectEventHandler(uint inEvent, IntPtr inRef, IntPtr inContext)
{
    switch (inEvent)
    {
        case EDSDK.ObjectEvent_DirItemCreated:
            this.getCapturedItem(inRef);
            Debug.WriteLine("dir item created");
            break;
        case EDSDK.ObjectEvent_DirItemRequestTransfer:
            this.getCapturedItem(inRef);
            Debug.WriteLine("file transfer request event");
            break;
        default:
            Debug.WriteLine(String.Format("ObjectEventHandler: event {0}", inEvent));
            break;
    }
    return 0;
}
Can anyone help me figure out why this event is not called, or how I can download the image from the buffer to the PC without an SD card in the camera?
Thanks
You probably ran into the same problem as I did yesterday: the camera tries to store the image for a later download, finds no memory card to store it to and instantly discards the image.
To get your callback to fire, you need to set the camera to save images to the PC (kEdsSaveTo_Host) at some point during your camera initialization routine. In C++, it worked like this:
EdsInt32 saveTarget = kEdsSaveTo_Host;
err = EdsSetPropertyData( _camera, kEdsPropID_SaveTo, 0, 4, &saveTarget );
You probably need to build an IntPtr for this. At least, that's what Dmitriy Prozorovskiy did (prompted by a certain akadunno) in this thread.
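A hedged C# translation, assuming the commonly circulated EDSDK.cs wrapper (whose EdsSetPropertyData overload marshals a managed value for you; the constant and struct names may differ in your binding):
// kEdsSaveTo_Host: tell the camera to hand images to the connected PC.
uint saveTarget = (uint)EDSDK.EdsSaveTo.Host;
err = EDSDK.EdsSetPropertyData(obj.camdevice, EDSDK.PropID_SaveTo, 0, sizeof(uint), saveTarget);

// Many samples also report free disk space on the host afterwards so the
// camera does not refuse to shoot; the numbers here are placeholder values.
EDSDK.EdsCapacity capacity;
capacity.NumberOfFreeClusters = 0x7FFFFFFF;
capacity.BytesPerSector = 0x1000;
capacity.Reset = 1;
err = EDSDK.EdsSetCapacity(obj.camdevice, capacity);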
The SDK (as far as I know) only exposes the picture-taking event in the form of the object being created on the file system of the camera (i.e. the SD card). There is no way that I am aware of to capture from the buffer. In a way this makes sense: in an environment with only a small amount of onboard memory, it is important to keep the volatile memory clear so the camera can continue to take photographs. Once the buffer has been flushed to nonvolatile memory, you are then clear to interact with those bytes. Limiting, I know, but it is what it is.
The question asks for C#, but in Java one would have to set the property as:
NativeLongByReference number = new NativeLongByReference( new NativeLong( EdSdkLibrary.EdsSaveTo.kEdsSaveTo_Host ) );
EdsVoid data = new EdsVoid( number.getPointer() );
NativeLong l = EDSDK.EdsSetPropertyData(edsCamera, new NativeLong(EdSdkLibrary.kEdsPropID_SaveTo), new NativeLong(0), new NativeLong(NativeLong.SIZE), data);
And the usual download routine will do.
