I am developing a UWP desktop application in which I would like to capture video with my webcam and apply some video effects to it.
I am using MediaComposition, MediaClip, and MediaOverlay to combine multiple videos and intro PNGs into a single composition and put some overlays on it.
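For reference, the composition is built roughly like this (a simplified sketch; the file names and durations are placeholders):

    using System;
    using System.Threading.Tasks;
    using Windows.Media.Editing;
    using Windows.Storage;

    // Simplified sketch of the current setup: intro PNG + captured video,
    // with a PNG overlay layered on top of the whole composition.
    async Task<MediaComposition> BuildCompositionAsync()
    {
        var composition = new MediaComposition();

        StorageFolder local = ApplicationData.Current.LocalFolder;
        StorageFile introFile = await local.GetFileAsync("intro.png");
        StorageFile videoFile = await local.GetFileAsync("capture.mp4");
        StorageFile overlayFile = await local.GetFileAsync("logo.png");

        composition.Clips.Add(await MediaClip.CreateFromImageFileAsync(introFile, TimeSpan.FromSeconds(2)));
        composition.Clips.Add(await MediaClip.CreateFromFileAsync(videoFile));

        var overlayClip = await MediaClip.CreateFromImageFileAsync(overlayFile, composition.Duration);
        var overlayLayer = new MediaOverlayLayer();
        overlayLayer.Overlays.Add(new MediaOverlay(overlayClip));
        composition.OverlayLayers.Add(overlayLayer);

        return composition;
    }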
Now I am trying to implement slow-motion and reverse (boomerang/ping-pong) video effects on my composition. I expected there would already be some IBasicVideoEffect implementations available for this, but I have been searching for three days and cannot find anything similar.
There is also FFmpegInteropX, which I tried, but I still could not achieve my goal. I would also prefer to avoid using FFmpeg in my project.
Does anyone have any ideas on how to implement slow-motion and reverse video effects in UWP?
Thank you in advance.
Related
I'm an electronics engineer used to coding in embedded C and assembly, but I decided to start learning higher-level stuff like C#, .NET, etc., so I can start making software as a hobby. I have a great idea for one of my first projects, but after searching several forums for days on end, I'm left not really knowing what would be the easiest path forward.
The functionality that I'm looking to create is pretty similar to the idea of a photo slideshow, but applied to videos instead. The program would open a playlist or a folder full of videos and then play the videos in a random order, starting from a random starting position, and with a fixed duration (let's say 10 seconds as an example). You would end up being able to watch a sort of "video montage" that consisted of small clips from random parts of the videos in the playlist, shown in a random order, ad infinitum until the program is closed.
There are a number of ways I could tackle the problem:
Develop a standalone video player with the fixed functionality of showing "video slideshows." DirectX has the Microsoft.DirectX.AudioVideoPlayback API that could be a good starting point. I found an example here: http://www.dreamincode.net/forums/topic/111181-adding-video-to-an-application/
Modify an open source project to add the desired functionality. I've seen a few cool projects that could get me started, like this simple C# Movie Player: http://www.codeproject.com/Articles/18552/C-Movie-Player
Use a scripting interface to implement this functionality on an existing media player, like VLC or Winamp. You could also control VLC via C#, like the example here: Controlling VLC via c#
I realize that the obvious answer for most people would be to "use whatever you're most comfortable with," but since I'm a pure beginner, I don't really have any allegiances to a particular language or development environment. So, I was just curious if anybody had an idea of what might be the least painful option for a beginner.
I also apologize that this is not a very specific programming question. I'm sort of just testing the waters to get my footing. Hopefully, once I get started on the project, I'll be able to come back and post more intelligent and relevant questions!
While your background would lead you toward C#, I recommend investigating something like this and using WPF for the media player. You can then control the playback with a background worker in order to stop the video or queue up the next one. Some other .NET concepts that will be of use to you are the FileInfo and DirectoryInfo classes, which provide the necessary information about the files. I'm not sure if you've had experience with generic data structures in .NET, but the System.Collections.Generic namespace would be a good place to start to get a feel for the data structure you want to keep your playlist in. WPF will also be able to help you with transitions between video clips.
Admittedly WPF is easier with an understanding of the MVVM or MVC design patterns, but I think you'll be able to get something working without having to delve too far into that right up front.
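To make that concrete, here's a rough, untested sketch of the idea. It assumes a MediaElement named player declared in XAML with LoadedBehavior="Manual", a folder of WMV files, and a fixed 10-second clip length; all of those are placeholders:

    using System;
    using System.IO;
    using System.Windows.Controls;
    using System.Windows.Threading;

    // Plays 10-second clips from random positions in randomly chosen files.
    public class ClipShuffler
    {
        private readonly MediaElement _player;
        private readonly FileInfo[] _videos;
        private readonly Random _random = new Random();
        private readonly DispatcherTimer _timer =
            new DispatcherTimer { Interval = TimeSpan.FromSeconds(10) };

        public ClipShuffler(MediaElement player, string folder)
        {
            _player = player;
            _videos = new DirectoryInfo(folder).GetFiles("*.wmv");

            // Jump to a random start position once the clip's duration is known.
            _player.MediaOpened += (s, e) =>
            {
                if (_player.NaturalDuration.HasTimeSpan)
                {
                    double total = _player.NaturalDuration.TimeSpan.TotalSeconds;
                    _player.Position = TimeSpan.FromSeconds(_random.NextDouble() * Math.Max(0, total - 10));
                }
            };

            _timer.Tick += (s, e) => PlayNextClip();
        }

        public void PlayNextClip()
        {
            _timer.Stop();
            _player.Source = new Uri(_videos[_random.Next(_videos.Length)].FullName);
            _player.Play();
            _timer.Start();
        }
    }

(I've used a DispatcherTimer here in place of the background worker, since it fires on the UI thread that owns the MediaElement.)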
I am starting a new project to show live video from an attached webcam in a Windows Forms application and overlay that video with Windows controls (buttons, etc.). Additionally, I would like to do some image correction on the fly to remove distortion, and some edge detection.
I'm confused as to which library might be best suited for this.
OpenCVSharp - can handle the correction/detection; I'm not sure whether the overlay/live feed is possible.
DirectShow/DirectShow.Net - would I need to write filters for the overlay, and how would I handle edge detection?
AForge.net - it's been recommended, but I'm not sure it is as capable.
Does anyone have experience with these, or with other libraries that might be suitable and accessible from .Net?
If you only want to work with the vision part, then AForge.net is your best bet. I have used it in the past and it was pretty good for video/feed work. Don't expect to do anything with audio later on, though, since AForge.NET only supports vision-related functionality. Personally, I wouldn't use DirectShow, since it is pretty old and sometimes requires complex interop tricks to get what you want. If you do want to go the DirectShow route, at least use DirectShow.NET.
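To give you an idea, a minimal AForge.NET capture loop looks roughly like this (just a sketch - picking the first device and leaving the processing step empty are placeholders):

    using System.Drawing;
    using AForge.Video;
    using AForge.Video.DirectShow;

    // Grabs frames from the first webcam and hands each Bitmap to your own code
    // (edge detection, distortion correction, drawing an overlay, etc.).
    public class CameraFeed
    {
        private VideoCaptureDevice _camera;

        public void Start()
        {
            var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
            _camera = new VideoCaptureDevice(devices[0].MonikerString);
            _camera.NewFrame += OnNewFrame;
            _camera.Start();
        }

        private void OnNewFrame(object sender, NewFrameEventArgs eventArgs)
        {
            // Clone the frame before using it outside this handler.
            using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
            {
                // ...process the frame here...
            }
        }

        public void Stop()
        {
            if (_camera != null)
            {
                _camera.SignalToStop();
            }
        }
    }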
I believe you can accomplish that with OpenCVSharp and the instructions from transparent image overlay.
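For example, a very rough sketch of that overlay idea with OpenCvSharp (the file names are placeholders, and the linked answer shows how to respect a real alpha channel):

    using OpenCvSharp;

    // Blend a small overlay image onto a region of a frame.
    Mat frame = Cv2.ImRead("frame.png");      // e.g. one captured frame
    Mat overlay = Cv2.ImRead("overlay.png");  // must fit inside the frame

    // Region of the frame where the overlay goes (top-left corner here).
    var roi = new Rect(0, 0, overlay.Width, overlay.Height);
    using (Mat target = new Mat(frame, roi))
    {
        // 70% frame, 30% overlay; the result is written back into the ROI.
        Cv2.AddWeighted(target, 0.7, overlay, 0.3, 0, target);
    }
    Cv2.ImWrite("blended.png", frame);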
I'm trying to use a 2D camera to recognize the device/object a user is pointing at, so I've been looking for skeleton-tracking software that works with a 2D camera. Is there any open-source project that deals with skeleton tracking using 2D cameras?
(I've gone through tons of links on Google, and it seems like most of what's out there is just research papers, with no actual open-source projects.)
Thanks!
Skamleton could be an option. It's an open-source project in its early stages, but it implements a background subtractor, a skin-color classifier, blob tracking and face classification. There is a demo on YouTube.
Note that Skamleton uses simple cameras, not RGB-D (depth) cameras like the Kinect system (Kinect uses a structured-light device from PrimeSense).
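To illustrate what the first stage of such a pipeline looks like on a plain 2D webcam, here is a rough background-subtraction sketch (this uses OpenCvSharp rather than Skamleton's own code, purely as an illustration):

    using OpenCvSharp;

    // Show the moving (foreground) pixels from an ordinary 2D webcam.
    using (var capture = new VideoCapture(0))
    using (var subtractor = BackgroundSubtractorMOG2.Create())
    using (var frame = new Mat())
    using (var foregroundMask = new Mat())
    {
        while (capture.Read(frame) && !frame.Empty())
        {
            subtractor.Apply(frame, foregroundMask);   // moving pixels become white
            Cv2.ImShow("foreground", foregroundMask);  // blobs/skin would be classified from here
            if (Cv2.WaitKey(30) == 27) break;          // Esc quits
        }
    }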
It seems there's a kind of pre-release of an SDK for Kinect from Microsoft. Perhaps this might be helpful for you:
http://nuigroup.com/forums/viewthread/11249/
(Although I think this won't be open source. But since you are using C#, a Microsoft SDK might be OK for you.)
This seems like an old post, but in case anyone is still looking: Extreme Reality uses a regular webcam and does skeleton tracking. It's not open source, but I've played around with it a bit and it does seem to be fairly robust.
http://www.xtr3d.com/developers/resources/
I am making an object-tracking application. I have used Emgu CV 2.1.0.0 to load a video file into a PictureBox, and I have also taken the video stream from a web camera. Now I want to draw an unfilled square on the video stream with the mouse and then track the object enclosed by that square as the video continues to stream.
This is what people have suggested so far:
(1) .NET video overlay drawing (DirectX) - but this is for C++ users; the person who suggested it said there are .NET wrappers, but I had a hard time finding any.
(2) The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, it does not involve the mouse.
(3) GDI+ and mouse handling - an area where I do not have a clue.
As for tracking the object inside the square, I would appreciate it if someone could give me some research-paper links to read.
Any help with using the mouse to draw on a video is greatly appreciated.
Thank you for taking the time to read this.
Many Thanks
It sounds like you want to do image detection and/or tracking.
The EmguCV ( http://www.emgu.com/wiki/index.php/Main_Page ) library provides a good foundation for this sort of thing in .Net.
e.g. http://www.emgu.com/wiki/index.php/Tutorial#Examples
It's a pretty meaty subject, with quite a few years and several branches of research behind it, so I'm not sure anyone can give a definitive guide; however, reading up on neural networks and related topics would give you a pretty good grounding in the way EmguCV and related libraries manage it.
It should be noted that systems such as EmguCV are designed to recognise predefined items within a scene (such as a licence plate number) rather than an arbitrary feature within a scene.
For arbitrary tracking of a given feature, a search for research papers on edge detection and the like (in combination with a library such as EmguCV) is probably a good start.
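To make the tracking part a bit more concrete: one simple (if brittle) approach is template matching against the patch the user selects. A rough sketch, using Emgu CV 2.x-style names (which do change between versions), might look like this:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    // Remembers the pixels inside the user's rectangle, then finds the best
    // match for that patch in each new frame via template matching.
    public class PatchTracker
    {
        private Image<Bgr, byte> _template;

        public void SelectRegion(Image<Bgr, byte> frame, Rectangle selection)
        {
            // Copy the selected region out of the current frame.
            _template = frame.Copy(selection);
        }

        public Rectangle Track(Image<Bgr, byte> frame)
        {
            // Score every position in the frame against the stored patch.
            using (Image<Gray, float> result =
                frame.MatchTemplate(_template, TM_TYPE.CV_TM_CCOEFF_NORMED))
            {
                double[] minValues, maxValues;
                Point[] minLocations, maxLocations;
                result.MinMax(out minValues, out maxValues, out minLocations, out maxLocations);

                // The best match's top-left corner plus the template size gives the new box.
                return new Rectangle(maxLocations[0], _template.Size);
            }
        }
    }

Template matching will lose the object when it rotates or changes scale, which is where the research papers on more robust trackers (mean-shift/CamShift, feature-based methods) come in.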
(You may also want to sneak a peek at an existing application such as http://www.pfhoe.com/ to see if it fits your needs.)
I'm looking at creating a simple 'slider puzzle' game. You've seen the kind: you have an image and you shuffle the tiles.
However, I want to make one that plays back videos instead. What I'm trying to determine is whether it's possible to play back a video in C# and render it on different controls (probably buttons or panels). I've spotted the Microsoft.DirectX.AudioVideoPlayback classes but haven't found much documentation on them yet.
So to throw it up in the air, is this going to be possible to do without too much difficulty? Are there any useful (free) libraries that might help me along?
Have a look at DirectShowNet, which wraps the DirectShow API; on the samples page there is a sample called PlayWnd that shows how to play a video file.
Depending upon how large and how long your video sources are, you could accomplish this very simply by first converting your videos to animated GIFs. A .Net PictureBox control will display and animate a GIF automatically, and you could easily use PictureBoxes for your tiles.
One big advantage of this approach is that (thanks to Mono) your application could work unaltered on Windows, Mac and the iPhone (also Linux and a couple others).
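If you go down the GIF route, the one wrinkle is that each tile has to show its own region of the same clip. A rough sketch of one way to handle that (each tile animating its own copy of the GIF with ImageAnimator; the class and parameter names are my own) is:

    using System.Drawing;
    using System.Windows.Forms;

    // One puzzle tile: animates its own copy of the GIF and paints only the
    // region of the clip that this tile is responsible for.
    public class GifTile : Panel
    {
        private readonly Image _gif;
        private readonly Rectangle _sourceRegion;

        public GifTile(string gifPath, Rectangle sourceRegion)
        {
            _gif = Image.FromFile(gifPath);
            _sourceRegion = sourceRegion;
            DoubleBuffered = true;
            // Advance the GIF and repaint whenever a new frame is due.
            ImageAnimator.Animate(_gif, (s, e) => Invalidate());
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            ImageAnimator.UpdateFrames(_gif);
            e.Graphics.DrawImage(_gif, ClientRectangle, _sourceRegion, GraphicsUnit.Pixel);
        }
    }

Shuffling the puzzle is then just a matter of swapping the tiles' Location properties.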