I am trying to play a stream of bitmap images, using OpenCvSharp to generate the Bitmaps from YUV, but I am unable to display them as a video.
I found some links about using an AVI wrapper here, and some more about saving to the hard disk like here. FFmpeg might work well on Linux, but it is not so good on Windows.
I even tried the following code, but it only displays the last frame of the sequence, and I cannot use MediaElement because the bitmaps are generated by my program and there is no URI to point it at.
image.Source = ToBitmapSource(bitmapImage);
where
// requires: using System.Runtime.InteropServices;
[DllImport("gdi32.dll")]
static extern bool DeleteObject(IntPtr hObject);

public static BitmapSource ToBitmapSource(System.Drawing.Bitmap bitmap)
{
    IntPtr ip = bitmap.GetHbitmap();
    try
    {
        return System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
            ip, IntPtr.Zero, System.Windows.Int32Rect.Empty,
            System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
    }
    finally { DeleteObject(ip); } // the GDI handle is not freed automatically and would leak once per frame
}
I am trying to play the video (similar to streaming) without saving it to disk. Is DirectShow a must for this? I desperately need your help, my deadline is fast approaching!
You can use a DispatcherTimer (the WPF equivalent of a WinForms Timer):

DispatcherTimer dt = new DispatcherTimer();
dt.Interval = TimeSpan.FromMilliseconds(25); // 25 ms --> 40 frames per second
dt.Tick += delegate(object sender, EventArgs e)
{
    // get the next image and display it
};
dt.Start(); // start the playback
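To tie this back to the question, here is a minimal sketch of how the timer could drive the Image control, assuming your decoder pushes finished frames into a queue. The queue (frameQueue) and the control name (image) are my assumptions; ToBitmapSource is the method from the question:

// using System.Collections.Concurrent; — frames produced by the YUV decoder land here
private readonly ConcurrentQueue<System.Drawing.Bitmap> frameQueue =
    new ConcurrentQueue<System.Drawing.Bitmap>();

private void StartPlayback()
{
    DispatcherTimer dt = new DispatcherTimer();
    dt.Interval = TimeSpan.FromMilliseconds(25); // ~40 fps
    dt.Tick += (s, e) =>
    {
        System.Drawing.Bitmap frame;
        if (frameQueue.TryDequeue(out frame))
        {
            using (frame) // safe to dispose: ToBitmapSource copies the pixels
            {
                image.Source = ToBitmapSource(frame);
            }
        }
    };
    dt.Start();
}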
Related
I'm creating an application that takes a video and overlays sensor data from a drone.
You can watch it in action here:
https://youtu.be/eAOjImJci3M
But that doesn't edit the video file itself; the overlay exists only inside the application.
My goal is to produce a new video with the overlaid data on it. How can I capture an element of the UI over time and make a MediaClip out of it?
Short answer: take a screenshot of the UI at a fixed interval, then accumulate the screenshots to produce a video.
Long answer: let's name the UI element you want to record myGrid. To grab screenshots at an interval you can use a DispatcherTimer and handle the Tick event like this:
private async void Tm_Tick(object sender, object e)
{
    // Render the XAML element to a bitmap and grab its pixels
    RenderTargetBitmap rendertargetBitmap = new RenderTargetBitmap();
    await rendertargetBitmap.RenderAsync(myGrid);
    var bfr = await rendertargetBitmap.GetPixelsAsync();

    CanvasRenderTarget rendertarget = null;
    using (CanvasBitmap canvas = CanvasBitmap.CreateFromBytes(
        CanvasDevice.GetSharedDevice(), bfr,
        rendertargetBitmap.PixelWidth, rendertargetBitmap.PixelHeight,
        Windows.Graphics.DirectX.DirectXPixelFormat.B8G8R8A8UIntNormalized))
    {
        // Copy the screenshot into a CanvasRenderTarget (see the note below)
        rendertarget = new CanvasRenderTarget(CanvasDevice.GetSharedDevice(),
            canvas.SizeInPixels.Width, canvas.SizeInPixels.Height, 96);
        using (CanvasDrawingSession ds = rendertarget.CreateDrawingSession())
        {
            ds.Clear(Colors.Black);
            ds.DrawImage(canvas);
        }
    }

    // Each screenshot becomes an 80 ms clip in the composition
    MediaClip m = MediaClip.CreateFromSurface(rendertarget, TimeSpan.FromMilliseconds(80));
    mc.Clips.Add(m);
}
mc is the MediaComposition object I declared earlier.
When you are done recording, stop the DispatcherTimer and save the video to disk like this:
tm.Stop();
await mc.RenderToFileAsync(file, MediaTrimmingPreference.Precise, MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Vga));
tm is the DispatcherTimer I declared earlier and file is a StorageFile with an .mp4 extension.
This procedure doesn't require you to save each screenshot to disk.
If you wonder why I used an additional CanvasRenderTarget, it's because of this problem.
Hope that helps.
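For completeness, here is a minimal sketch of the setup the snippets above assume. The field names mc and tm match the answer; the timer interval and the way file is obtained are my assumptions:

private MediaComposition mc;
private DispatcherTimer tm;

private void StartRecording()
{
    mc = new MediaComposition();
    tm = new DispatcherTimer();
    tm.Interval = TimeSpan.FromMilliseconds(80); // matches the 80 ms clip duration above
    tm.Tick += Tm_Tick;
    tm.Start();
}

private async void StopAndSaveAsync()
{
    tm.Stop();
    // Assumption: saving into the Videos library (requires the Videos Library capability);
    // any StorageFile with an .mp4 extension works.
    StorageFile file = await KnownFolders.VideosLibrary.CreateFileAsync(
        "recording.mp4", CreationCollisionOption.GenerateUniqueName);
    await mc.RenderToFileAsync(file, MediaTrimmingPreference.Precise,
        MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Vga));
}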
I have a list of images in my program, and I am generating an AVI video from them. For that purpose I use the avifilewrapper_src library, which handles the creation of the video.
The process of creating it is:
Bitmap bitmap;
//load the first image
bitmap = (Bitmap)imageSequence[0];
//create a new AVI file
AviManager aviManager = new AviManager(paths.outputVideo, false);
//add a new video stream and one frame to the new file
VideoStream aviStream =
    aviManager.AddVideoStream(true, (double)nud_picturePerSec.Value, bitmap);
if (chb_audio.Checked)
    aviManager.AddAudioStream(paths.sampleAudio, 0);
int count = 0;
for (int n = 0; n < imageSequence.Count; n++)
{
    bitmap = (Bitmap)imageSequence[n];
    aviStream.AddFrame(bitmap);
    bitmap.Dispose();
    count++;
}
aviManager.Close();
If I keep giving it different images, it works fine. If, however, I put in two similar images, the video shows the second image upside down (the left/right orientation is correct). By two similar images I mean creating the second image as a copy of the first one.
I have a feeling that this is somehow related to streams, but I can't find out why the images are inverted.
Well, I didn't manage to find the cause of that behavior, but flipping the bitmap before each use corrects it:
bitmap.RotateFlip(RotateFlipType.RotateNoneFlipY);
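Applied to the frame loop from the question, a minimal sketch (assuming, as in this answer, that every frame needs the flip; RotateFlip mutates the bitmap in place, so call it just before AddFrame):

for (int n = 0; n < imageSequence.Count; n++)
{
    bitmap = (Bitmap)imageSequence[n];
    bitmap.RotateFlip(RotateFlipType.RotateNoneFlipY); // undo the inversion before writing
    aviStream.AddFrame(bitmap);
    bitmap.Dispose();
}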
I want to save video captured from my webcam to the local disk. I wrote code that shows the webcam feed, but it can't save to disk: the error is "Failed creating compressed stream.". What should I do here?
writer = new AVIWriter("wmv3");
writer.FrameRate = 30;
// ERROR IS HERE: **Failed creating compressed stream.**
writer.Open("video.avi", Convert.ToInt32(640), Convert.ToInt32(480));

//Create NewFrame event handler
//(this one triggers every time a new frame/image is captured)
videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);
//Start recording
videoSource.Start();

void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    //Cast the frame as a Bitmap object and don't forget to use ".Clone()", otherwise
    //you'll probably get access violation exceptions
    pictureBoxVideo.BackgroundImage = (Bitmap)eventArgs.Frame.Clone();
    writer.AddFrame((Bitmap)eventArgs.Frame.Clone());
}
Have you ever considered the size of the stream from your webcam? I had the same problem. I know you set your video size to 640 x 480, but the size of the video stream that comes from your webcam is not necessarily the same. I also guess you set your container (a PictureBox or ImageBox) to 640 x 480, but that doesn't mean the video stream matches it. I used a save dialog to check the video stream coming out of my webcam, and guess what? The size was 648 x 486. Who would ever pick such strange numbers? But I changed my code to this:
writer.Open("video.avi", Convert.ToInt32(648), Convert.ToInt32(486));
And it works fine!
I don't know whether the rest of your code is correct or not, but I'm sure my bug was in the size settings :)
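If you'd rather not hard-code a size you discovered by trial and error, a minimal sketch is to open the writer lazily from the first captured frame, so it always matches the real stream size (writer and videoSource as in the question; the writerOpened flag is my addition):

private bool writerOpened = false;

void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
    {
        if (!writerOpened)
        {
            // open the writer with the actual stream size instead of guessing
            writer.Open("video.avi", frame.Width, frame.Height);
            writerOpened = true;
        }
        writer.AddFrame(frame); // AddFrame copies the data, so disposing afterwards is safe
    }
}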
Four years later I'm having this same issue. After hours I figured out that if I didn't specify wmv3 for the codec and just left it blank, writer = new AVIWriter();, then everything worked (presumably it falls back to an uncompressed stream, so no particular codec has to be installed).
AVIWriter write = new AVIWriter(); // no codec specified — this is what made it work
write.Open("newTestVideo.avi", Convert.ToInt32(320), Convert.ToInt32(240));
Bitmap bit = new Bitmap(320, 240);
// draw one random-coloured pixel along the diagonal and write a frame each iteration
for (int tt = 0; tt < 240; tt++)
{
    bit.SetPixel(tt, tt, System.Drawing.Color.FromArgb(
        (int)(UnityEngine.Random.value * 255f),
        (int)(UnityEngine.Random.value * 255f),
        (int)(UnityEngine.Random.value * 255f)));
    write.AddFrame(bit);
}
write.Close();
I am having a problem with EmguCV. I used a demo application and edited it to fit my needs.
It involves the following function:
public override Image<Gray, byte> DetectSkin(Image<Bgr, byte> Img, IColor min, IColor max)
{
    Image<Hsv, Byte> currentHsvFrame = Img.Convert<Hsv, Byte>();
    Image<Gray, byte> skin = currentHsvFrame.InRange((Hsv)min, (Hsv)max);
    return skin;
}
In the demo application, the image comes from a video. The frame is captured from the video like this:
Image<Bgr, Byte> currentFrame;
grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");
grabber.QueryFrame();
currentFrame = grabber.QueryFrame();
In my application, the image comes from a Microsoft Kinect stream.
I use the following function:
private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);

            // Write the pixel data into our bitmap
            this.colorBitmap.WritePixels(
                new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                this.colorPixels,
                this.colorBitmap.PixelWidth * sizeof(int),
                0);

            Bitmap b = BitmapFromWriteableBitmap(this.colorBitmap);
            currentFrame = new Image<Bgr, byte>(b);
            currentFrameCopy = currentFrame.Copy();
            skinDetector = new YCrCbSkinDetector();
            Image<Gray, Byte> skin = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
        }
    }
}
private static System.Drawing.Bitmap BitmapFromWriteableBitmap(WriteableBitmap writeBmp)
{
    System.Drawing.Bitmap bmp;
    using (System.IO.MemoryStream outStream = new System.IO.MemoryStream())
    {
        BitmapEncoder enc = new BmpBitmapEncoder();
        enc.Frames.Add(BitmapFrame.Create((BitmapSource)writeBmp));
        enc.Save(outStream);
        bmp = new System.Drawing.Bitmap(outStream);
    }
    return bmp;
}
Now, the demo application works, and mine doesn't. Mine gives the following exception:
And the image here contains the following:
I really don't understand this exception. And when I run the demo, working application, the image contains:
Which is, in my eyes, exactly the same. I really don't understand this. Help is very welcome!
To make things easier I've uploaded a working WPF solution for you to the code reference SourceForge page I've been building:
http://sourceforge.net/projects/emguexample/files/Capture/Kinect_SkinDetector_WPF.zip/download
https://sourceforge.net/projects/emguexample/files/Capture/
This was designed and tested using EMGU x64 2.42, so in the Lib folder of the project you will find the referenced DLLs. If you are using a different version, you will need to delete the current references and replace them with the version you're using.
Secondly, the project is designed, like all projects from the code reference library, to be built from the Emgu.CV.Example folder into the ..\EMGU 2.X.X.X\bin.. global bin directory, where the compiled OpenCV libraries sit in either an x86 or x64 folder.
If you struggle to get the code working I can provide all the components, but I hate redistributing all the OpenCV files that you already have, so let me know if you want them.
You will need to resize the MainWindow manually to display both images, as I didn't spend too much time playing with the layout.
So the code...
In the form initialisation method I check for the Kinect sensor and set up the event handlers for the frames-ready events. I have left the original threshold values and skinDetector type in, although I don't use the EMGU version; I just forgot to remove it. You will need to play with the threshold values and so on.
// Look through all sensors and start the first connected one.
// This requires that a Kinect is connected at the time of app startup.
// To make your app robust against plug/unplug, it is recommended to use
// KinectSensorChooser provided in Microsoft.Kinect.Toolkit (see components in Toolkit Browser).
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
    if (potentialSensor.Status == KinectStatus.Connected)
    {
        this.KS = potentialSensor;
        break;
    }
}

// If we have a Kinect sensor we will set it up
if (null != KS)
{
    // Turn on the color stream to receive color frames
    KS.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    // Turn on the depth stream to receive depth frames
    KS.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    // Start the streaming process
    KS.Start();
    // Create a link to a callback to deal with the frames
    KS.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(KS_AllFramesReady);
    // We set up a thread to process the image/disparity map from the Kinect.
    // Why? AllFramesReady has a timeout; if it has not finished, the streams will simply stop.
    KinectBuffer = new Thread(ProcessBuffer);

    hsv_min = new Hsv(0, 45, 0);
    hsv_max = new Hsv(20, 255, 255);
    YCrCb_min = new Ycc(0, 131, 80);
    YCrCb_max = new Ycc(255, 185, 135);
    detector = new AdaptiveSkinDetector(1, AdaptiveSkinDetector.MorphingMethod.NONE);
    skinDetector = new YCrCbSkinDetector();
}
I always play with the Kinect data on a new thread for speed, but you may want to upgrade this to a BackgroundWorker if you plan to do any heavier processing, so it is better managed.
The thread calls the ProcessBuffer() method; you can ignore all the commented code, as it is the remnants of the code used to display the depth image. Again, I'm using the Marshal.Copy method to keep things fast, but the thing to look for is Dispatcher.BeginInvoke in WPF, which allows the images to be displayed from the Kinect thread. This is required because I'm not processing on the main thread.
//This takes the byte[] array from the Kinect and makes a bitmap from the colour data for us
byte[] pixeldata = new byte[CF.PixelDataLength];
CF.CopyPixelDataTo(pixeldata);
System.Drawing.Bitmap bmap = new System.Drawing.Bitmap(CF.Width, CF.Height,
    System.Drawing.Imaging.PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(
    new System.Drawing.Rectangle(0, 0, CF.Width, CF.Height),
    ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, CF.PixelDataLength);
bmap.UnlockBits(bmapdata);

//display our colour frame
currentFrame = new Image<Bgr, Byte>(bmap);
Image<Gray, Byte> skin2 = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
ExtractContourAndHull(skin2);
DrawAndComputeFingersNum();

//Display our images using WPF Dispatcher.BeginInvoke as this is a sub-thread
Dispatcher.BeginInvoke((Action)(() =>
{
    ColorImage.Source = BitmapSourceConvert.ToBitmapSource(currentFrame);
}), System.Windows.Threading.DispatcherPriority.Render, null);
Dispatcher.BeginInvoke((Action)(() =>
{
    SkinImage.Source = BitmapSourceConvert.ToBitmapSource(skin2);
}), System.Windows.Threading.DispatcherPriority.Render, null);
I hope this helps. I will at some point neaten up the code I uploaded.
Cheers
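One piece the snippets above don't show is how the colour frame (CF) reaches the worker thread from the AllFramesReady callback. As a rough sketch only, and an assumption about the uploaded project rather than a quote from it, the hand-off could look like this:

private ColorImageFrame CF;

private void KS_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // Skip this frame if the previous ProcessBuffer pass is still running
    if (KinectBuffer != null && KinectBuffer.IsAlive)
        return;

    CF = e.OpenColorImageFrame();
    if (CF == null)
        return;

    // ProcessBuffer copies the pixel data (see above) and must Dispose CF when done,
    // otherwise the Kinect colour stream will stall
    KinectBuffer = new Thread(ProcessBuffer);
    KinectBuffer.Start();
}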
I'm trying to extract a specific frame from a video file. I have a frame that I want to find when I play a video file with the AForge library. I handle the NewFrame event, and if the new frame matches my specific frame, it should show me the message "Frame Match". This specific frame can appear at a random position in the video file. Here is my code:
private void Form1_Load(object sender, EventArgs e)
{
    IVideoSource videoSource = new FileVideoSource(@"e:\media\test\a.mkv");
    playerControl.VideoSource = videoSource;
    playerControl.Start();
    videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(Video_NewFrame);
}
private void Video_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    //Create a Bitmap from the frame
    Bitmap FrameData = new Bitmap(eventArgs.Frame);
    //Show it in the PictureBox
    pictureBox1.Image = FrameData;
    //Compare the current frame to the specific frame
    if (pictureBox1.Image == pictureBox2.Image)
    {
        MessageBox.Show("Frame Match");
    }
}
pictureBox2.Image is the fixed frame that I want to match. This code works fine for playing video files and extracting new frames, but I am unable to compare the new frames to the specific frame. Please guide me on how to achieve this.
You can take a look at:
https://github.com/dajuric/accord-net-extensions
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
capture.Seek(<yourFrameIndex>, SeekOrigin.Begin);
var image = capture.ReadAs<Bgr, byte>();
or you can use standard IEnumerable like:
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
var image = capture.ElementAt(<yourFrameIndex>); // will actually just cast the image
Examples are included.
moved to: https://github.com/dajuric/dot-imaging
As far as I can understand your problem, the issue is that you can't compare one image to another this way: the == operator only compares object references, which will never be equal for two different Bitmap instances. I think you will find that the way to do this is to build a histogram for each image and then compare the histograms.
Some related things to look into are:
how to compare two images
the ImageComparer class from VS 2015 unit testing
The second one comes from a unit-testing library, so I'm not sure about its performance (I haven't tried it myself yet).
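To illustrate the histogram idea, here is a minimal sketch using plain System.Drawing (my own example, not the API of any of the libraries above; GetPixel is slow and is only used to keep the principle visible):

// Build a 256-bin grayscale histogram for a bitmap
static int[] Histogram(Bitmap bmp)
{
    int[] bins = new int[256];
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            int gray = (c.R + c.G + c.B) / 3;
            bins[gray]++;
        }
    }
    return bins;
}

// Normalized histogram intersection: 1.0 = identical distributions.
// Assumes both bitmaps have the same dimensions.
static double Similarity(Bitmap a, Bitmap b)
{
    int[] ha = Histogram(a), hb = Histogram(b);
    long intersection = 0, total = (long)a.Width * a.Height;
    for (int i = 0; i < 256; i++)
        intersection += Math.Min(ha[i], hb[i]);
    return (double)intersection / total;
}

// Usage inside Video_NewFrame, e.g.:
// if (Similarity(FrameData, (Bitmap)pictureBox2.Image) > 0.95) MessageBox.Show("Frame Match");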