I am developing an application that shows video from a webcam and an IP camera.
For the IP camera, the video streams for some time, but then it stops and the application hangs.
I am using the Emgu.CV library to grab frames and display them in a picture control.
I have tried the code below, which uses QueryFrame().
// Connect to the IP camera
Capture capture = new Capture(URL);
// Grab a frame
Image<Bgr, Byte> ImageFrame = capture.QueryFrame();
After some time, QueryFrame() returns null and the application hangs.
Can anyone tell me why this is happening and how I can handle it?
Thank you in advance.
Sorry about the delay, but I have provided an example that works with several public IP cameras. You will need to replace the EMGU reference with your current version, and the target build directory should be set to "EMGU Version\bin"; alternatively, extract it to the examples folder.
http://sourceforge.net/projects/emguexample/files/Capture/CameraCapture%20Public%20IP.zip/download
Rather than the older QueryFrame() method, it uses the RetrieveBgrFrame() method. It has worked reasonably well and I have had no null exceptions. However, if you do get them, replace the ProcessFrame() method with something like this:
private void ProcessFrame(object sender, EventArgs arg)
{
    // If you want to access the image data, use the following call instead:
    // Image<Bgr, Byte> frame = new Image<Bgr, byte>(_capture.RetrieveBgrFrame().ToBitmap());
    if (RetrieveBgrFrame.Checked)
    {
        Image<Bgr, Byte> frame = _capture.RetrieveBgrFrame();
        // Because we are using an autosize picturebox we need a thread-safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
    else if (RetrieveGrayFrame.Checked)
    {
        Image<Gray, Byte> frame = _capture.RetrieveGrayFrame();
        // Because we are using an autosize picturebox we need a thread-safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
}
Cheers
Chris
I'm using OpenCvSharp4 and need to capture from a webcam.
But sometimes the camera may be busy in another application, such as Skype or Teams.
I tried to use the IsOpened() method to detect that, but it returns true when the webcam is busy and produces a black capture. (I used the default Windows 10 Camera application to make my webcam busy.)
using (var videoCapture = new VideoCapture(0))
using (var frame = new Mat())
{
    if (videoCapture.Open(0) && videoCapture.IsOpened())
    {
        if (videoCapture.Read(frame) && !frame.Empty())
        {
            var image = frame.ToBitmap();
            return image;
        }
    }
}
I've also tried to use -1 instead of 0 as the parameter of the Open method, but that does not work at all.
Does anybody have an idea how I can deal with this issue and correctly detect that the camera is busy?
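One possible workaround (my assumption, not something confirmed above: a busy camera delivers an all-black frame) is to read a frame and check its mean intensity; near-zero brightness suggests the device is held by another application. The helper name and the threshold are hypothetical:

```csharp
using OpenCvSharp;

// Sketch: treat an (almost) all-black first frame as "camera busy".
// The 5.0 brightness threshold is a guess and may need tuning per device.
static bool CameraLooksBusy(int index)
{
    using (var cap = new VideoCapture(index))
    using (var frame = new Mat())
    {
        if (!cap.IsOpened() || !cap.Read(frame) || frame.Empty())
            return true;                       // cannot open or read at all

        Scalar mean = Cv2.Mean(frame);         // average B, G, R channel values
        double brightness = (mean.Val0 + mean.Val1 + mean.Val2) / 3.0;
        return brightness < 5.0;               // near-black: likely busy
    }
}
```

Some drivers also deliver a few dark frames right after opening, so it may be worth reading and discarding two or three frames before applying the check.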
I am trying to record video from a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance to process each frame individually. I tried additionally applying LowLagMediaRecording to simply save the camera stream to a file, but it seems you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader alone, where I can access each frame in the frame-arrived handler.
I have tried several approaches and found two working solutions using the MediaComposition class with MediaClip objects. You can either save each frame as a JPEG file and finally render all the images to a video file; this is very slow, since you constantly need to access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of each frame; this way you keep the data in RAM (first in GPU RAM, then system RAM when that is full) instead of on the hard drive, which in theory is much faster. The problem is that calling the RenderToFileAsync method of the MediaComposition class requires all MediaClips to have already been added to its internal list. This exceeds the available RAM after a fairly short recording time: after about 5 minutes of data, Windows had already created a 70 GB swap file, which defeats the reason for choosing this path in the first place.
I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that previously in Python without any problems. In UWP, though, I am not able to interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video.
So, to summarize: I need a way to render my camera-frame data to a video while the data is still coming in, so that I can dispose of every frame after it has been processed, the way a Python OpenCV implementation would. I am very thankful for every hint. Here are parts of my code for context:
private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }
        if (currentMode == RecordingMode)
        {
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(
                colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(
                MediaClip.CreateFromSurface(sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}
private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();

        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);

        composition.Clips.Clear();
        composition = null;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
Please create a custom video effect and add it to the MediaCapture object to implement your requirement. A custom video effect lets you process frames in the context of the MediaCapture object without using MediaFrameReader, so all of the MediaFrameReader limitations you mentioned go away.
Besides, there are also a number of built-in effects for analyzing camera frames. For more information, please check the following articles:
#MediaCapture.AddVideoEffectAsync
https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync
#Custom video effects
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
#Effects for analyzing camera frames
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
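A minimal skeleton of such an effect might look like the sketch below (the class name is hypothetical, and `MediaEncodingSubtypes.Bgra8` requires a recent Windows SDK; the effect must live in a separate Windows Runtime Component project so MediaCapture can activate it by type name):

```csharp
using System;
using System.Collections.Generic;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;

public sealed class FrameProcessingEffect : IBasicVideoEffect
{
    // Called once per frame; read context.InputFrame, write context.OutputFrame.
    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Per-frame processing goes here, replacing the MediaFrameReader handler.
    }

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
    public void SetProperties(IPropertySet configuration) { }
    public void DiscardQueuedFrames() { }
    public void Close(MediaEffectClosedReason reason) { }

    public bool IsReadOnly => false;
    public bool TimeIndependent => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
        new List<VideoEncodingProperties>
        {
            VideoEncodingProperties.CreateUncompressed(MediaEncodingSubtypes.Bgra8, 0, 0)
        };
}
```

It is then attached with something like `await mediaCapture.AddVideoEffectAsync(new VideoEffectDefinition(typeof(FrameProcessingEffect).FullName), MediaStreamType.VideoRecord);`, after which a LowLagMediaRecording can record the stream as usual.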
Thanks.
I have a C# desktop application in which a thread that I create continuously gets an image from a source (a digital camera, actually) and puts it on a panel (panel.Image = img) in the GUI, which runs on another thread since it is the code-behind of a control.
The application works, but on some machines I get the following error at random, unpredictable time intervals:
************** Exception Text **************
System.InvalidOperationException: The object is currently in use elsewhere.
Then the panel turns into a red cross (the red X invalid-picture icon). The application keeps working, but the panel is never updated.
From what I can tell, this error comes from the control's OnPaint event, where I draw something else on the picture.
I tried using a lock there, but no luck.
The way I call the function that puts the image on the panel is as follows:
if (this.ReceivedFrame != null)
{
    Delegate[] clients = this.ReceivedFrame.GetInvocationList();
    foreach (Delegate del in clients)
    {
        try
        {
            del.DynamicInvoke(new object[] { this, new StreamEventArgs(frame) });
        }
        catch { }
    }
}
This is the delegate:
public delegate void ReceivedFrameEventHandler(object sender, StreamEventArgs e);
public event ReceivedFrameEventHandler ReceivedFrame;
And this is how the function inside the control's code-behind subscribes to it:
Camera.ReceivedFrame +=
new Camera.ReceivedFrameEventHandler(camera_ReceivedFrame);
I also tried
del.Method.Invoke(del.Target, new object[] { this, new StreamEventArgs(b) });
instead of
del.DynamicInvoke(new object[] { this, new StreamEventArgs(frame) });
but no luck
Does anyone know how I can fix this error, or at least catch it somehow and make the thread put images on the panel once again?
This is because the GDI+ Image class is not thread-safe. However, you can avoid the InvalidOperationException by taking a lock every time you need to access the Image, for example when painting or reading the image size:
Image DummyImage;

// Paint
lock (DummyImage)
    e.Graphics.DrawImage(DummyImage, 10, 10);

// Access Image properties
Size ImageSize;
lock (DummyImage)
    ImageSize = DummyImage.Size;
BTW, invocation is not needed if you use the above pattern.
I had a similar problem with the same error message but try as I might, locking the bitmap didn't fix anything for me. Then I realized I was drawing a shape using a static brush. Sure enough, it was the brush that was causing the thread contention.
var location = new Rectangle(100, 100, 500, 500);
var brush = MyClass.RED_BRUSH;
lock (brush)
    e.Graphics.FillRectangle(brush, location);
This worked for my case and lesson learned: Check all the reference types being used at the point where thread contention is occurring.
It seems to me that the same camera buffer is used several times. Try using a new buffer for each received frame: while the picture box is drawing the current frame, your capture library may fill that same buffer again. On faster machines this might not show up, while on slower machines it does.
I once programmed something similar; after each received frame, we had to request the next frame and supply a NEW receive buffer in that request.
If you cannot do that, copy the received frame from the camera into a new buffer and append that buffer to a queue, or just use two alternating buffers and check for overruns. Either use myOutPutPanel.BeginInvoke to call the camera_ReceivedFrame method, or better, have a thread running that checks the queue and, whenever it holds a new entry, calls myOutPutPanel.BeginInvoke to set the new buffer as the image on the panel.
Furthermore, once you have received the buffer, use the panel's Invoke method to set the image; this guarantees it runs on the window thread and not on your capture library's thread.
The example below can be called from any thread (capture library or other separate thread):
void camera_ReceivedFrame(object sender, StreamEventArgs e)
{
    if (myOutPutPanel.InvokeRequired)
    {
        myOutPutPanel.BeginInvoke(
            new Camera.ReceivedFrameEventHandler(camera_ReceivedFrame),
            sender, e);
    }
    else
    {
        myOutPutPanel.Image = e.Image;
    }
}
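The buffer-queue variant described above could be sketched like this (everything except BeginInvoke is hypothetical naming; it assumes StreamEventArgs exposes an Image and that myOutPutPanel is a WinForms control):

```csharp
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.Threading;

// Sketch: the capture callback only copies and enqueues; a consumer
// thread marshals the newest frame to the UI via BeginInvoke.
ConcurrentQueue<Image> frameQueue = new ConcurrentQueue<Image>();

void camera_ReceivedFrame(object sender, StreamEventArgs e)
{
    frameQueue.Enqueue((Image)e.Image.Clone());   // copy into a fresh buffer
}

void ConsumerLoop()   // run this on a dedicated background thread
{
    while (true)
    {
        Image frame;
        while (frameQueue.TryDequeue(out frame))
        {
            if (frameQueue.IsEmpty)
            {
                // Newest frame: hand it to the window thread
                Image toShow = frame;
                myOutPutPanel.BeginInvoke((Action)(() => myOutPutPanel.Image = toShow));
            }
            else
            {
                frame.Dispose();                  // drop stale overrun frames
            }
        }
        Thread.Sleep(10);
    }
}
```

This way the capture library never shares a buffer with the paint code, which removes the contention that causes the InvalidOperationException.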
I think this is a multithreading problem.
Use the golden rule of Windows Forms and update the panel on the main thread using panel.Invoke.
This should overcome the cross-thread exception.
I have a SightLine video decoder device connected to my PC via Ethernet.
I used Emgu CV to capture the video stream and view it in an image box.
Here is part of the code:
_capture = new Capture("udp://#169.254.1.144:15004");
_capture.ImageGrabbed += ProcessFrame;

Image<Bgr, Byte> frame, frame1;

private void ProcessFrame(object sender, EventArgs arg)
{
    frame = _capture.RetrieveBgrFrame();
    pictureBox1.Image = frame.ToBitmap();
}
The video shows in the image box, but with a one-second latency. I counted the frames reaching the ProcessFrame function: 12 fps, which is correct.
Does the ImageGrabbed event cause this latency?
Why does the latency occur?
Note: I used a USB camera instead of the SightLine and it worked fine; the SightLine Plus application, which can play the camera over Ethernet, also works fine.
This is caused by the length of the default buffer used by the Capture object. Raw OpenCV has a CV_CAP_PROP_BUFFERSIZE property you can set to alter this value using .set().
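In Emgu CV this property is exposed through SetCaptureProperty; the exact enum name varies between versions (CapProp.Buffersize below is from the 3.x API, so treat it as an assumption), and not every capture backend honors it:

```csharp
// Sketch: shrink the internal frame buffer so RetrieveBgrFrame()
// returns the most recent frame instead of a queued, stale one.
// CapProp.Buffersize maps to OpenCV's CV_CAP_PROP_BUFFERSIZE.
_capture = new Capture("udp://#169.254.1.144:15004");
bool ok = _capture.SetCaptureProperty(Emgu.CV.CvEnum.CapProp.Buffersize, 1);
// If ok is false, the backend ignored the request and the
// latency has to be worked around by draining frames instead.
```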
I am currently trying to save not only the skeletal data but also the color frame images, for post-processing reasons. This is the section of code that handles the color video and outputs a color image in the UI; I figure this is where the saving of the color frame images has to take place.
private void ColorFrameEvent(ColorImageFrameReadyEventArgs colorImageFrame)
{
    // Get the raw image
    using (ColorImageFrame colorVideoFrame = colorImageFrame.OpenColorImageFrame())
    {
        if (colorVideoFrame != null)
        {
            // Create an array for the pixel data and copy it from the image frame
            Byte[] pixelData = new Byte[colorVideoFrame.PixelDataLength];
            colorVideoFrame.CopyPixelDataTo(pixelData);

            // Set alpha to 255
            for (int i = 3; i < pixelData.Length; i += 4)
            {
                pixelData[i] = (byte)255;
            }

            using (colorImage.GetBitmapContext())
            {
                colorImage.FromByteArray(pixelData);
            }
        }
    }
}
I have tried reading up on OpenCV, EmguCV, and multithreading, but I am pretty confused; it would be nice to have a solid explanation in one location. I feel the best way to do this without losing frames per second would be to save all the images, perhaps in a list of arrays, and when the program finishes do some post-processing in Matlab to convert arrays to images to video.
Can someone comment on how I would go about saving the color image stream to a file?
The ColorImageFrameReady event is triggered 30 times a second (30 fps) if everything goes smoothly. I think it is rather heavy to save every picture immediately.
I suggest you use a BackgroundWorker: check whether the worker is busy, and if not, pass the bytes to the BackgroundWorker and do your magic there.
You can easily save or make an image from a byte[]; just Google it.
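A minimal sketch of that idea (class name, frame size, and pixel format are my assumptions: BGRA data at the Kinect's 640x480 color resolution):

```csharp
using System.ComponentModel;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Sketch: off-load PNG saving to a BackgroundWorker so the Kinect event
// handler never blocks; frames arriving while the worker is busy are skipped.
class FrameSaver
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private int frameIndex;

    public FrameSaver()
    {
        worker.DoWork += (s, e) =>
        {
            byte[] pixels = (byte[])e.Argument;      // BGRA bytes from CopyPixelDataTo
            using (var bmp = new Bitmap(640, 480, PixelFormat.Format32bppRgb))
            {
                BitmapData data = bmp.LockBits(
                    new Rectangle(0, 0, bmp.Width, bmp.Height),
                    ImageLockMode.WriteOnly, bmp.PixelFormat);
                Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
                bmp.UnlockBits(data);
                bmp.Save("frame" + frameIndex++ + ".png", ImageFormat.Png);
            }
        };
    }

    // Call from ColorFrameEvent after CopyPixelDataTo(pixelData)
    public void TrySave(byte[] pixelData)
    {
        if (!worker.IsBusy)
            worker.RunWorkerAsync(pixelData);        // otherwise drop this frame
    }
}
```

Dropping frames while the worker is busy keeps the UI responsive at the cost of an uneven frame rate; if you need every frame, replace the busy check with a queue drained by the worker.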
http://www.codeproject.com/Articles/15460/C-Image-to-Byte-Array-and-Byte-Array-to-Image-Conv