Create video from UI controls in UWP using Composition - C#

I'm creating an application that takes a video and overlays sensor data from a drone.
You can watch it in action here:
https://youtu.be/eAOjImJci3M
However, that doesn't edit the video; the overlay only exists inside the application.
My goal is to produce a new video file with the data overlaid on it. How can I capture a UI element over time and make a MediaClip out of it?

Short answer: Take a screenshot of the UI at a fixed interval, then accumulate the screenshots to produce a video.
Long answer: Let's call the UI element you want to record myGrid. To take screenshots at an interval you can use a DispatcherTimer and handle the Tick event like this:
private async void Tm_Tick(object sender, object e)
{
    // Render the UI element to a bitmap and grab its pixels
    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
    await renderTargetBitmap.RenderAsync(myGrid);
    var bfr = await renderTargetBitmap.GetPixelsAsync();

    // Copy the pixels into a Win2D render target so they can be used as a Direct3D surface
    CanvasRenderTarget renderTarget = null;
    using (CanvasBitmap canvas = CanvasBitmap.CreateFromBytes(CanvasDevice.GetSharedDevice(), bfr, renderTargetBitmap.PixelWidth, renderTargetBitmap.PixelHeight, Windows.Graphics.DirectX.DirectXPixelFormat.B8G8R8A8UIntNormalized))
    {
        renderTarget = new CanvasRenderTarget(CanvasDevice.GetSharedDevice(), canvas.SizeInPixels.Width, canvas.SizeInPixels.Height, 96);
        using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
        {
            ds.Clear(Colors.Black);
            ds.DrawImage(canvas);
        }
    }

    // Wrap the surface in an 80 ms clip and append it to the composition
    MediaClip m = MediaClip.CreateFromSurface(renderTarget, TimeSpan.FromMilliseconds(80));
    mc.Clips.Add(m);
}
mc is the MediaComposition object I declared earlier.
When you are done recording, stop the DispatcherTimer and save the video to disk like this:
tm.Stop();
await mc.RenderToFileAsync(file, MediaTrimmingPreference.Precise, MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Vga));
tm is the DispatcherTimer I declared earlier and file is a StorageFile with mp4 extension.
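For completeness, here is a minimal sketch of the surrounding setup, assuming the same field names used above (mc, tm, myGrid). The FileSavePicker part is just one possible way of obtaining the StorageFile:

// Sketch of the fields and setup assumed by the snippets above.
private MediaComposition mc;
private DispatcherTimer tm;

private void StartRecording()
{
    mc = new MediaComposition();
    tm = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(80) }; // matches the 80 ms clip duration
    tm.Tick += Tm_Tick;
    tm.Start();
}

private async Task StopAndSaveAsync()
{
    tm.Stop();

    // One possible way to get the output StorageFile
    var picker = new Windows.Storage.Pickers.FileSavePicker();
    picker.FileTypeChoices.Add("MP4 video", new[] { ".mp4" });
    Windows.Storage.StorageFile file = await picker.PickSaveFileAsync();
    if (file != null)
    {
        await mc.RenderToFileAsync(file, MediaTrimmingPreference.Precise,
            MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Vga));
    }
}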
This procedure doesn't require you to save each screenshot to disk.
If you wonder why I used an additional CanvasRenderTarget, it's because of this problem.
Hope that helps.

Related

How to add contents of CanvasControl to a MediaClip

I'm creating a UWP app where text a user types is converted into a video overlay. The user loads a video, types in their text, gets a preview of the video with the text, and can then save the video with the overlay.
Right now it works with an image overlay, but not with text. I'm trying to use Win2D to format the text. However, I can't figure out how to get the Win2D CanvasControl into a Windows.Media.Editing MediaClip.
I thought I could go CanvasControl > bitmap of some sort > image > MediaClip, but media clips can only be created from storage files or Direct3D surfaces. Anything would be helpful! This is my first time trying to write something outside of a game engine.
The part that currently converts the canvas control to a bitmap:
private async void btnAddOverlay_Click(object sender, RoutedEventArgs e)
{
    var bitmap = new RenderTargetBitmap();
    await bitmap.RenderAsync(canvasControl);
    IBuffer pixelBuffer = await bitmap.GetPixelsAsync();
    byte[] pixels = pixelBuffer.ToArray();
    SoftwareBitmap outputBitmap = SoftwareBitmap.CreateCopyFromBuffer(
        pixelBuffer,
        BitmapPixelFormat.Bgra8,
        bitmap.PixelWidth,
        bitmap.PixelHeight);
}
The old part that added the image overlay to the video. (overlayImageFile is the storage file. I just don't know how to convert outputBitmap to a storage file to make a media clip from it.)
private async void CreateOverlays()
{
    // Create a clip from the video and add it to the composition
    var baseVideoClip = await MediaClip.CreateFromFileAsync(pickedFile);
    composition = new MediaComposition();
    composition.Clips.Add(baseVideoClip);

    // Create a clip from the image
    var overlayImageClip = await MediaClip.CreateFromImageFileAsync(overlayImageFile, timeSpan);

    // Put the image in the upper left corner, retaining its native aspect ratio
    Rect imageOverlayPosition;
    imageOverlayPosition.Height = mediaElement.ActualHeight / 3;
    imageOverlayPosition.Width = (double)imageBitmap.PixelWidth / (double)imageBitmap.PixelHeight * imageOverlayPosition.Height;
    imageOverlayPosition.X = 0;
    imageOverlayPosition.Y = 0;

    // Make the clip from the image an overlay
    var imageOverlay = new MediaOverlay(overlayImageClip);
    imageOverlay.Position = imageOverlayPosition;
    imageOverlay.Opacity = 0.8;

    // Make a new overlay layer and add the overlay to it
    var overlayLayer = new MediaOverlayLayer();
    overlayLayer.Overlays.Add(imageOverlay);
    composition.OverlayLayers.Add(overlayLayer);

    // Render to the MediaElement
    mediaElement.Position = TimeSpan.Zero;
    mediaStreamSource = composition.GeneratePreviewMediaStreamSource((int)mediaElement.ActualWidth, (int)mediaElement.ActualHeight);
    mediaElement.SetMediaStreamSource(mediaStreamSource);
    txbNotification.Text = "Overlay created";
}
Windows MediaClip Class: https://learn.microsoft.com/en-us/uwp/api/windows.media.editing.mediaclip?view=winrt-19041
I just don't know how to convert outputBitmap to a storage file to make a media clip from it.
After getting the RenderTargetBitmap, there is no need to convert to a SoftwareBitmap. You can save the pixel data to a file in the format of your choice through a BitmapEncoder. This is the method:
private async void btnAddOverlay_Click(object sender, RoutedEventArgs e)
{
    RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
    await renderTargetBitmap.RenderAsync(canvasControl);
    var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
    var pixels = pixelBuffer.ToArray();
    var displayInformation = DisplayInformation.GetForCurrentView();

    // Create the file in the local folder
    var file = await ApplicationData.Current.LocalFolder.CreateFileAsync("TempCanvas.png", CreationCollisionOption.ReplaceExisting);

    // Write the pixel data
    using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
    {
        var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
        encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
            (uint)renderTargetBitmap.PixelWidth, (uint)renderTargetBitmap.PixelHeight,
            displayInformation.RawDpiX, displayInformation.RawDpiY, pixels);
        await encoder.FlushAsync();
    }
}
When needed, you can use the following code to get the saved picture:
var file = await ApplicationData.Current.LocalFolder.GetFileAsync("TempCanvas.png");
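From there you can feed the saved file straight into the overlay pipeline you already have. A minimal sketch, assuming the same timeSpan and composition variables from the question:

// Sketch: use the PNG written above as the overlay source.
// 'timeSpan' and 'composition' are assumed from the question's code.
var file = await ApplicationData.Current.LocalFolder.GetFileAsync("TempCanvas.png");
var overlayImageClip = await MediaClip.CreateFromImageFileAsync(file, timeSpan);

var textOverlay = new MediaOverlay(overlayImageClip) { Opacity = 0.8 };
var overlayLayer = new MediaOverlayLayer();
overlayLayer.Overlays.Add(textOverlay);
composition.OverlayLayers.Add(overlayLayer);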
It's just another possible way: if you are using Win2D, you can use the MediaClip.CreateFromSurface(IDirect3DSurface, TimeSpan) method and pass a CanvasRenderTarget directly to it, so you don't have to save the bitmap to a file. Create an offscreen canvas device, then a CanvasRenderTarget, draw the text onto it, and pass that render target to the method; it will generate the media clip for the overlay.
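A minimal sketch of that approach; the overlayText string, the 640x480 size and the 5-second duration are illustrative assumptions, not anything from the question:

// Sketch: draw text into an offscreen Win2D render target and wrap it in a MediaClip.
var device = CanvasDevice.GetSharedDevice();
var renderTarget = new CanvasRenderTarget(device, 640, 480, 96);

using (CanvasDrawingSession ds = renderTarget.CreateDrawingSession())
{
    ds.Clear(Colors.Transparent);
    var format = new CanvasTextFormat { FontSize = 48, FontFamily = "Segoe UI" };
    ds.DrawText(overlayText, 20, 20, Colors.White, format);
}

// CanvasRenderTarget implements IDirect3DSurface, so it can be passed directly
MediaClip textClip = MediaClip.CreateFromSurface(renderTarget, TimeSpan.FromSeconds(5));
var overlay = new MediaOverlay(textClip) { Opacity = 0.8 };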

How can we capture and save in local disk with Aforge video?

I want to save video captured from my webcam to the local disk. I wrote code that shows the webcam feed, but it can't save to the local disk. The error is "Failed creating compressed stream." What should I do here?
writer = new AVIWriter("wmv3");
writer.FrameRate = 30;
writer.Open("video.avi", Convert.ToInt32(640), Convert.ToInt32(480)); // ERROR IS HERE: "Failed creating compressed stream."

        //Create NewFrame event handler
        //(this one triggers every time a new frame/image is captured)
        videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(videoSource_NewFrame);

        //Start recording
        videoSource.Start();
    }
}

void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    //Cast the frame as a Bitmap object and don't forget to use ".Clone()", otherwise
    //you'll probably get access violation exceptions
    pictureBoxVideo.BackgroundImage = (Bitmap)eventArgs.Frame.Clone();
    writer.AddFrame((Bitmap)eventArgs.Frame.Clone());
}
Have you ever considered the size of the stream coming from your webcam? I had the same problem. You set your video size to 640x480, but the video stream coming from the webcam may never be that size. I also guess you set your container (such as a PictureBox or ImageBox) to 640x480, but that doesn't mean the video stream will match. I used a save dialog to check the stream that comes out of my webcam, and guess what? The size was 648x486. Who would ever pick such a strange pair of numbers? But I changed my code to this:
writer.Open("video.avi", Convert.ToInt32(648), Convert.ToInt32(486));
And it works fine!
I don't know whether the rest of your code is correct or not, but I'm sure my bug was in the size setting :)
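One way to avoid guessing the numbers is to open the writer lazily from the first frame you actually receive. A sketch under that assumption, using only the AVIWriter calls already shown above; writer is assumed to be the field from the question and writerOpened an extra bool field:

// Sketch: open the AVIWriter with the real frame dimensions instead of hard-coded ones.
void videoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    Bitmap frame = (Bitmap)eventArgs.Frame.Clone();

    // Open the output file on the first frame, once the true stream size is known
    if (!writerOpened)
    {
        writer.Open("video.avi", frame.Width, frame.Height);
        writerOpened = true;
    }

    writer.AddFrame(frame);   // AddFrame copies the pixel data into the AVI stream
    frame.Dispose();          // so the clone can be released afterwards
}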
Four years later I had this same issue. After hours I figured out that if I didn't specify wmv3 for the codec and just left it blank, writer = new AVIWriter();, then everything worked.
AVIWriter write = new AVIWriter();
write.Open("newTestVideo.avi", Convert.ToInt32(320), Convert.ToInt32(240));

// Write 240 frames, each adding one random-colored pixel along the diagonal
Bitmap bit = new Bitmap(320, 240);
for (int tt = 0; tt < 240; tt++)
{
    bit.SetPixel(tt, tt, System.Drawing.Color.FromArgb(
        (int)(UnityEngine.Random.value * 255f),
        (int)(UnityEngine.Random.value * 255f),
        (int)(UnityEngine.Random.value * 255f)));
    write.AddFrame(bit);
}
write.Close();

IP Camera stop streaming after some time

I am working on an application where I want to use an IP camera to display a video stream and perform some other major operations on the images captured by the IP camera.
Library used for camera capture: Emgu.CV
Below is the code I am using in C#.
Variable Declaration
private Capture capture; //takes images from camera as image frames
private Emgu.CV.UI.ImageBox img; // Dynamic Picture Controls
private int nCam; // no of cameras
Code for Processing Image
private void ProcessFrame(object sender, EventArgs arg)
{
    try
    {
        // Live streaming display
        Image<Bgr, Byte> ImageFrame = capture.QueryFrame();

        // If it's an IP camera, try to reinitialize it
        if (ImageFrame == null)
        {
            capture.Dispose();
            capture = new Capture(URL);
            ImageFrame = capture.QueryFrame();
        }

        ImageFrame = ImageFrame.Resize(img.Width, img.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);
        img.Image = ImageFrame;

        // Here I am doing some other operations like:
        // 1. Save the image captured from the IP camera
        // 2. Detect faces in the image
        // 3. Draw face markers on the image
        // 4. Some database work based on the result of face detection
        // 5. Delete the image file
        // continue looping for other IP cameras
    }
    catch (NullReferenceException e)
    {
    }
}
Now, the problem is that after some time QueryFrame() returns null and the camera stops streaming.
Can anyone tell me why this is happening?
How can I resolve this problem?
If any more information is needed, please let me know.
Thanks in advance.
Sorry about the delay, but I have provided an example that works with several public IP cameras. It will need the EMGU reference replacing with your current version, and the target build directory should be set to "EMGU Version\bin"; alternatively, extract it to the examples folder.
http://sourceforge.net/projects/emguexample/files/Capture/CameraCapture%20Public%20IP.zip/download
Rather than using the older QueryFrame() method it uses the RetrieveBgrFrame() method. It has worked reasonably well and I have had no null exceptions. However, if you do get them, replace the ProcessFrame() method with something like this.
You should not attempt any operations if the returned frame is null. Image is a nullable type, so there should be no problem if _capture.RetrieveBgrFrame() returns null; if there is a problem, then there is a bigger issue.
private void ProcessFrame(object sender, EventArgs arg)
{
    //If you want to access the image data then use the following method call:
    //Image<Bgr, Byte> frame = new Image<Bgr,byte>(_capture.RetrieveBgrFrame().ToBitmap());
    if (RetrieveBgrFrame.Checked)
    {
        Image<Bgr, Byte> frame = _capture.RetrieveBgrFrame();
        //because we are using an autosize picturebox we need to do a thread-safe update
        if (frame != null)
        {
            DisplayImage(frame.ToBitmap());
            Image<Bgr, Byte> ImageFrame = frame.Resize(img.Width, img.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR);
            // Here I am doing some other operations like:
            // 1. Save the image captured from the IP camera
            // 2. Detect faces in the image
            // 3. Draw face markers on the image
            // 4. Some database work based on the result of face detection
            // 5. Delete the image file
            // continue looping for other IP cameras
        }
        //else do nothing as we have no image
    }
    else if (RetrieveGrayFrame.Checked)
    {
        Image<Gray, Byte> frame = _capture.RetrieveGrayFrame();
        //because we are using an autosize picturebox we need to do a thread-safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
}
On a separate note, your comment 'continue looping for other IP cameras' may cause several issues. You should have a separate Capture constructor for each camera you are using. How many cameras are you using, and which public IP camera are you using, so I can attempt to replicate the issue? The reason for the separate constructors is that IP cameras take a while to negotiate connections, and constantly disposing of the original construct and replacing it will play havoc with the garbage collector and introduce no end of timing issues.
Cheers
Chris
[EDIT]
If your camera is returning null frames after a timeout period, then I would check whether there is an issue with the setup, or maybe your connection is so slow that the camera disconnects you to reduce lag for others. There are various possible causes, but this is not a code problem. You can use C# alone to acquire the data into a bitmap and then pass this to an Image type variable. There is a great article here:
http://www.codeproject.com/Articles/15537/Camera-Vision-video-surveillance-on-C
I've adapted this so you can use an HttpWebRequest as a final check to see whether the stream is alive, although there are still null exceptions that can be produced here:
using System.Net;
using System.IO;

string url;

private void ProcessFrame(object sender, EventArgs arg)
{
    //***If you want to access the image data then use the following method call***/
    //Image<Bgr, Byte> frame = new Image<Bgr,byte>(_capture.RetrieveBgrFrame().ToBitmap());
    if (RetrieveBgrFrame.Checked)
    {
        Image<Bgr, Byte> frame = _capture.RetrieveBgrFrame();
        //because we are using an autosize picturebox we need to do a thread-safe update
        if (frame != null)
        {
            DisplayImage(frame.ToBitmap());
        }
        else
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
            // get the response
            WebResponse resp = req.GetResponse();
            // get the stream
            Stream stream = resp.GetResponseStream();
            if (!stream.CanRead)
            {
                //try reconnecting the camera
                captureButtonClick(null, null); //pause
                _capture.Dispose();             //get rid of the old capture
                captureButtonClick(null, null); //reconnect
            }
        }
    }
    else if (RetrieveGrayFrame.Checked)
    {
        Image<Gray, Byte> frame = _capture.RetrieveGrayFrame();
        //because we are using an autosize picturebox we need to do a thread-safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
}

private void captureButtonClick(object sender, EventArgs e)
{
    url = Camera_Selection.SelectedItem.ToString(); //add this
    ... the rest of the code
}
To display multiple webcams you would create a class to handle the Capture construct and the ProcessFrame event. Ideally you would raise a purpose-built event that includes a camera identifier, since the frame-ready event does not supply one. To make things easier I have created a form as an MDI parent and an object to manage the capture variables and the frame-ready event. The alpha version is available here:
http://sourceforge.net/projects/emguexample/files/Capture/CameraCapture%20Public%20IP%20Multipl%20Display%20Alpha.zip/download
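A rough sketch of the kind of per-camera wrapper described above. Only Capture(url) and RetrieveBgrFrame() come from the snippets in this answer; the class name, event shape and Poll() method are illustrative assumptions:

// Sketch: one Capture instance per IP camera, with a frame event that carries a camera id.
public class CameraFeed : IDisposable
{
    private readonly Capture _capture;
    public string CameraId { get; }

    public event Action<string, Image<Bgr, byte>> FrameReady;

    public CameraFeed(string cameraId, string url)
    {
        CameraId = cameraId;
        _capture = new Capture(url); // one constructor per camera, kept alive for the session
    }

    // Call this from a timer or the application's idle loop
    public void Poll()
    {
        Image<Bgr, byte> frame = _capture.RetrieveBgrFrame();
        if (frame != null)
        {
            FrameReady?.Invoke(CameraId, frame);
        }
        // else: do nothing; don't dispose and recreate the capture on every null frame
    }

    public void Dispose()
    {
        _capture.Dispose();
    }
}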

How To Extract specific Frame using AForge Library

I'm trying to extract a specific frame from a video file. I have a frame that I want to find when I play a video file with the AForge library. I handle the NewFrame event, and if the new frame matches my specific frame, it should show me the message "Frame Match". This specific frame appears at a random point in the video file. Here is my code:
private void Form1_Load(object sender, EventArgs e)
{
    IVideoSource videoSource = new FileVideoSource(@"e:\media\test\a.mkv");
    playerControl.VideoSource = videoSource;
    playerControl.Start();
    videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(Video_NewFrame);
}
private void Video_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    //Create a Bitmap from the frame
    Bitmap FrameData = new Bitmap(eventArgs.Frame);
    //Show it in the PictureBox
    pictureBox1.Image = FrameData;
    //Compare the current frame to the specific frame
    if (pictureBox1.Image == pictureBox2.Image)
    {
        MessageBox.Show("Frame Match");
    }
}
pictureBox2.Image is the fixed frame that I want to match. This code works fine when I play video files and extract new frames, but I am unable to compare the new frames to the specific frame. Please guide me on how to achieve this.
You can take a look at:
https://github.com/dajuric/accord-net-extensions
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
capture.Seek(<yourFrameIndex>, SeekOrigin.Begin);
var image = capture.ReadAs<Bgr, byte>();
or you can use standard IEnumerable like:
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
var image = capture.ElementAt(<yourFrameIndex>); //will actually just cast the image
Examples are included.
moved to: https://github.com/dajuric/dot-imaging
As far as I can understand your problem, the issue is that you can't compare images this way: the == operator only compares object references, not pixel contents. I think you will find that the way to do this is to build a histogram table and then compare the image histograms.
Some related things to look into are:
how to compare two images
the ImageComparer class from the VS 2015 unit testing framework
The second one is from a unit testing library, so I'm not sure about its performance (I haven't tried it myself yet).
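If an exact match is enough (the target frame is bit-identical to the reference), a plain pixel comparison also works. A minimal sketch, not tied to the question's variable names:

// Sketch: exact, pixel-by-pixel comparison of two bitmaps.
// Returns false on any size mismatch or differing pixel. For "visually similar"
// frames you would need a histogram or tolerance-based comparison instead.
// GetPixel is slow; LockBits would be faster, but this keeps the sketch short.
static bool FramesAreIdentical(Bitmap a, Bitmap b)
{
    if (a.Width != b.Width || a.Height != b.Height)
        return false;

    for (int y = 0; y < a.Height; y++)
    {
        for (int x = 0; x < a.Width; x++)
        {
            if (a.GetPixel(x, y) != b.GetPixel(x, y))
                return false;
        }
    }
    return true;
}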

Creating Video from generated bitmap images c#

I am trying to play a stream of bitmap images, using OpenCvSharp to generate the bitmaps from YUV, but I am unable to display it as a video.
I found some links about using an AVI wrapper here, and some more about saving to the hard disk like here; FFMPEG might work great on Linux, but it is not so good on Windows.
I even tried using the following code, but it just displays the last frame in the sequence, and I do not have a URI for using a MediaElement because the bitmaps are generated by my program.
image.Source = ToBitmapSource(bitmapImage);
where
public static BitmapSource ToBitmapSource(System.Drawing.Bitmap bitmap)
{
    // Note: the HBITMAP returned by GetHbitmap() should normally be released with
    // DeleteObject() afterwards, otherwise each call leaks a GDI handle.
    IntPtr ip = bitmap.GetHbitmap();
    BitmapSource bs = null;
    bs = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
        ip,
        IntPtr.Zero,
        System.Windows.Int32Rect.Empty,
        System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());
    return bs;
}
I am trying to play the video (similar to streaming) without saving it to the computer. Is DirectShow a must for this? I desperately need your help; my deadline is fast approaching!
You can use a DispatcherTimer (the WPF equivalent of a WinForms Timer):
DispatcherTimer dt = new DispatcherTimer();
dt.Interval = TimeSpan.FromMilliseconds(25); // 25 ms --> 40 frames per second
dt.Tick += delegate(object sender, EventArgs e)
{
    // get the next image and display it
};
dt.Start(); // start the "recording" loop
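A slightly fuller sketch of what the Tick body could look like, assuming your generated bitmaps are pushed into a queue by the producing code and using the ToBitmapSource helper from the question; 'frames' and 'image' are assumed names:

// Sketch: display generated bitmaps at a fixed rate without saving them to disk.
// 'frames' is a queue filled elsewhere by the code that generates the bitmaps;
// 'image' is the WPF Image control from the question. If the producer runs on
// another thread, use a thread-safe collection instead of Queue<T>.
Queue<System.Drawing.Bitmap> frames = new Queue<System.Drawing.Bitmap>();

DispatcherTimer dt = new DispatcherTimer();
dt.Interval = TimeSpan.FromMilliseconds(25); // ~40 frames per second
dt.Tick += (sender, e) =>
{
    if (frames.Count > 0)
    {
        image.Source = ToBitmapSource(frames.Dequeue());
    }
};
dt.Start();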
