Show webcam stream with delay - c#

I've been trying to find a way to show the stream of my webcam with a 30 second delay.
I found out that VLC is able to do this, but the problem is that the framerate is much too low and the result isn't smooth or viewable at all. I have to run this on a fairly old laptop with the best webcam I currently own, so I guess VLC is not an option.
I am somewhat familiar with C#/C++ and Python, so I thought I might build a solution of my own, as the task seems fairly easy. The problem, though, is that I don't know where to start.
My initial idea was to record the first 30 seconds of the stream to disk, then use VLC to view the partial file (AFAIK it's able to do that). Is that idea worth pursuing, or should I scrap it and use some sort of in-memory buffer for the video frames of the last 30 seconds?
Again, any nudges in the right direction will be much appreciated, thanks!

Take a look at OpenCV.
It can retrieve and show images from a webcam.
A lot of tutorials are also available; e.g. http://opencv.willowgarage.com/wiki/CameraCapture
So simply create an array (or similar structure) big enough to hold the number of frames expected in 30 seconds (FPS * 30).
Start retrieving images, and as soon as the array is filled, start playing from position 0.
Each newly retrieved image can then be stored in the slot of the frame that was just shown, so the array works as a ring buffer.
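Here is a minimal sketch of that ring buffer in C#, assuming the OpenCvSharp4 NuGet package (a .NET wrapper around OpenCV); the window title, FPS fallback, and Esc-to-quit handling are illustrative:

using OpenCvSharp; // NuGet: OpenCvSharp4 (assumed wrapper around OpenCV)

class DelayedViewer
{
    static void Main()
    {
        using var capture = new VideoCapture(0);
        double fps = capture.Fps > 0 ? capture.Fps : 30; // some drivers report 0
        int delayFrames = (int)(fps * 30);               // frames in 30 seconds

        var buffer = new Mat[delayFrames];               // ring buffer of frames
        int index = 0;
        bool filled = false;

        while (true)
        {
            var frame = new Mat();
            if (!capture.Read(frame) || frame.Empty())
                break;

            if (filled)
            {
                // Show the frame captured 30 seconds ago, then reuse its slot.
                Cv2.ImShow("delayed", buffer[index]);
                buffer[index].Dispose();
            }

            buffer[index] = frame;
            index = (index + 1) % delayFrames;
            if (index == 0) filled = true;

            if (Cv2.WaitKey(1) == 27) break;             // Esc to quit
        }
    }
}

Note that 30 seconds of raw frames takes a lot of RAM (roughly 800 MB at 640x480 and 30 fps), so on an old laptop it may be worth JPEG-encoding each frame before buffering, as the next answer does.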

Miguel Grinberg has written an excellent video streaming tutorial in Python which sends successive JPEG frames. Check his blog post here:
http://blog.miguelgrinberg.com/post/video-streaming-with-flask
Each of these JPEG frames can be held back briefly and then broadcast, which is where the required delay comes in.
As far as the input video feed is concerned, you can interface with a webcam using OpenCV. OpenCV's VideoCapture returns raw images as bytes; these bytes need to be encoded as JPEG and fed into Miguel's code.
import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capture raw images,
        # so we must encode it into JPEG in order to correctly display the
        # video stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
This approach will give you all the required features:
No internet required
Adjustable delay - easily control the delay and the processing you want to perform on each frame
Good quality
Record on demand - store the captured frames as needed
Record back - replay the last x seconds by keeping just the previous 24*x frames (for a 24 fps stream); see the sketch below
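As a sketch of that record-back idea in C# (RecordBackBuffer is a hypothetical class name; writing concatenated JPEGs is one simple output choice, which ffmpeg can read as an MJPEG stream):

using System.Collections.Generic;
using System.IO;

class RecordBackBuffer
{
    private readonly Queue<byte[]> _frames = new();
    private readonly int _capacity;

    // Keep the last `seconds` seconds of JPEG frames at the given fps.
    public RecordBackBuffer(int seconds, int fps = 24) => _capacity = seconds * fps;

    public void Push(byte[] jpeg)
    {
        _frames.Enqueue(jpeg);
        if (_frames.Count > _capacity)
            _frames.Dequeue(); // drop the oldest frame
    }

    // Dump the buffered frames as concatenated JPEGs, which tools like
    // ffmpeg can interpret as an MJPEG stream (e.g. ffmpeg -f mjpeg -i file).
    public void SaveTo(string path)
    {
        using var file = File.Create(path);
        foreach (var jpeg in _frames)
            file.Write(jpeg, 0, jpeg.Length);
    }
}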

Related

How can I transform a sequence of images into a playable video using LibVLCSharp?

I have a sequence of images that I was able to extract from a video using LibVLCSharp (this sample, to be more specific). I'm creating a small video library manager for learning purposes, and I would like to extract frames and create thumbnails to play when the user hovers the mouse over the previewer.
Using the aforementioned sample I was able to create a WPF UI around the same logic and extract the frames from a video file. What I want now, however, is to convert these extracted frames into a video file and use it as a preview for the video, just like what happens on YouTube.
I wasn't able, however, to find out how to achieve this using LibVLCSharp or plain LibVLC. Using this answer on Super User I was able to achieve my goal and put those frames together into a video using ffmpeg.
I haven't taken the time yet to study FFmpeg.AutoGen, so I don't know whether I could extract the frames from video files the same way I can with LibVLCSharp, but I'm not keen on using both libraries in my application, one to export the frames and one to assemble those frames into a video.
So, is there a way to get the output frames and convert them into a playable video using LibVLCSharp (or libvlc) itself?
I'm not keen on using both libraries in my application
You already are: LibVLC ships with ffmpeg.
So, is there a way to get the output frames and convert them into a playable video using LibVLCSharp (or libvlc) itself?
It is possible that there is a way, but I cannot find it right now. Using libvlc for this would be an awkward and inflexible solution. I would use ffmpeg.
You are not forced to use FFmpeg.AutoGen for conversion scenarios that you can achieve with ffmpeg.exe. I would start an ffmpeg process to do the conversion and read ffmpeg's stdout for the video data if you don't want to save it to disk.
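A minimal sketch of that approach, assuming ffmpeg is on the PATH and the extracted frames are numbered frame0001.jpg, frame0002.jpg, ... (the frame rate, filename pattern, and output name are illustrative):

using System;
using System.Diagnostics;

class FramesToVideo
{
    static void Main()
    {
        // -framerate: input rate; -i: printf-style pattern for the frames;
        // -c:v libx264 with -pix_fmt yuv420p produces a widely playable MP4.
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-framerate 24 -i frame%04d.jpg -c:v libx264 -pix_fmt yuv420p preview.mp4",
            UseShellExecute = false,
            RedirectStandardError = true, // ffmpeg writes its log to stderr
        };
        using var process = Process.Start(psi);
        Console.WriteLine(process.StandardError.ReadToEnd());
        process.WaitForExit();
    }
}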
I think there is a way to make VLC play images at a specific rate (look at the VLC CLI options), but I don't know how well it works, as I have never used it.

video record and playback at same time using DirectShow

I need to develop an application that records video and plays the recorded video back while recording continues (without stopping the graph). I know how to develop with DirectShow, but I need the architecture for my graph: what filters should I use?
I did it with raw video using DirectShow and SampleGrabber, but now I have to compress the video, and I have no idea how to do this with a compressed video file (one that is not closed, because recording continues).
I think MPEG-2 recording is the best fit for my application, but please advise me on the filters I should use.
Thanks
Have you tried the Stream Buffer Engine (SBE)?
Recording a file and watching it at the same time is a little problematic, because the demuxer needs to reparse the written file to learn about the newly appended parts; I have never seen a demuxer do this.
The other big problem is file locking! The DirectShow file writer and most similar filters lock the file for writing, so no other process can open the file for reading.
You are looking for a time-shift system. There are third-party SDKs for this, but it can also be implemented with your own DirectShow filters; you will need a lot of time and knowledge for that, though. I know it's possible because I have done it in my company's video player (utilius fairplay 5).
I worked on that issue for weeks; SBE does not work well with H.264 muxed into a transport stream.
I came up with a solution, so let me share it.
First, the encoder needs a small GOP, like 15, not 150; otherwise syncing takes a while and is perceived as a big hang.
The idea is to create a DirectShow filter, starting from a live source filter
(https://www.codeproject.com/Articles/158053/DirectShow-Filters-Development-Part-2-Live-Source)
Modify the filter so that frames are no longer pushed in from an external program.
Add winsock2 for UDP binding, or a multicast group.
Start receiving the data from the live source and deliver it as samples on the output pin.
I suggest always using the Elecard SDK for the setup.
On the sender you can use a network renderer, but there you cannot change the encoder GOP,
so open GraphEdit and build the sender graph:
desktop capture -> encoder -> mux -> network renderer.
Your new filter should be able to receive the data and display it on screen.
Once you have that working, you can continue by adding the time-shift capabilities to your filter.
Allocate a very big buffer, 200 MB to 1 GB, up to you (in RAM, of course).
Copy the same data that you send to the output pin into that buffer, and make the buffer circular with read (rd) and write (wr) indexes.
You need to add an interface to your filter, with functions like:
1. GoLive
2. SetSeekPoint
The way I did it is the following:
I created a callback from the filter into the host (C#) that sends the time and the wr pointer every second or so (depending on the accuracy I need).
In the host I keep a list of those two pieces of information, so in C# I have the list of wr pointers and their timestamps.
It is then easy to search that list and set the rd pointer back in the filter.
The filter then has two modes:
1. In live mode it sends the currently received data.
2. In seek mode it sends the data from the big buffer, following the rd pointer.
I hope that makes sense.
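A sketch of the host-side bookkeeping described above; GoLive/SetSeekPoint are the filter functions named in the answer, while the class name and callback wiring here are hypothetical:

using System;
using System.Collections.Generic;

class TimeShiftHost
{
    // One entry per callback from the filter: wall-clock time and the
    // circular buffer's write index at that moment.
    private readonly List<(DateTime Time, long WrIndex)> _marks = new();

    // Called by the DirectShow filter roughly once per second.
    public void OnFilterCallback(DateTime time, long wrIndex) =>
        _marks.Add((time, wrIndex));

    // Find the buffered position closest to the requested seek time;
    // the result is what you would pass back via SetSeekPoint.
    // Assumes at least one mark has been recorded.
    public long FindSeekPoint(DateTime target)
    {
        var best = _marks[0];
        foreach (var mark in _marks)
            if (Math.Abs((mark.Time - target).Ticks) < Math.Abs((best.Time - target).Ticks))
                best = mark;
        return best.WrIndex;
    }
}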

How to save all depth frames in OpenNI kinect

I've written a program to save all the depth frames of the Kinect depth stream in OpenNI, based on the SimpleViewer sample. The problem is that not all the frames get saved! I ran my program for 10 seconds and only around 20 images were saved, although the application is set to 30 fps!
Could anyone please advise?
My colleague uses a two-phase extraction. First, write the images in binary, to avoid losing time on encoding or conversions (you can use System.IO.FileStream and BinaryWriter for that). Then, in another program, read the binary files back to get the raw depth or color images; you might use Matlab, OpenCV, or another utility for this second part.
But keep in mind that even this approach might cause some skipped/dropped frames. Personally, I've never managed to sustain a constant 30 fps for a long period.
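A sketch of the first phase, assuming 16-bit depth frames of a known size (the resolution constants and file name are placeholders; the capture loop itself is elided):

using System.IO;

class DepthDumper
{
    const int Width = 640, Height = 480; // typical Kinect depth resolution

    // Append one raw 16-bit depth frame with no encoding, so the
    // per-frame cost is a single sequential write.
    static void WriteFrame(BinaryWriter writer, ushort[] depthPixels)
    {
        foreach (ushort d in depthPixels)
            writer.Write(d);
    }

    static void Main()
    {
        using var stream = new FileStream("depth.raw", FileMode.Create,
                                          FileAccess.Write, FileShare.None,
                                          1 << 20); // 1 MB write buffer
        using var writer = new BinaryWriter(stream);
        // For each captured frame: WriteFrame(writer, frame);
    }
}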

Capturing Desktop activities as recorded movie in C#

My requirement is to create an application that records desktop activity, along with audio, as a movie. After searching, I found that Microsoft Expression Encoder can be used to record desktop activity as a movie, but the output file is very large: around 30 to 40 MB for 10 seconds of video. It also uses the .xesc format.
Is there any other free API available to do this job?
Before you give up on Expression Encoder, try adjusting:
ScreenCaptureJob.ScreenCaptureVideoProfile.Quality
Reducing the quality can greatly reduce the file size. Try it and see if the results are acceptable to you.
Reducing the framerate is actually unhelpful; I guess it forces a fixed framerate, whereas the default is to use a variable framerate based on activity.
If you don't like .xesc files you can transcode the video after you've captured it.
But 30 to 40MB for ten seconds is still way too much. I recorded ten seconds of (admittedly not very large, 1366x768) full-screen video at the default quality. With not much going on it took 300K; with lots of activity (constantly switching between full-screen apps) it took at most 1.5MB.
Reducing the quality reduced file sizes by about 50%.
Unless you're playing a full-screen video and trying to record that, you shouldn't see anything like 30 to 40MB. Perhaps you should look at your audio settings.
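For reference, a minimal sketch of adjusting that property; Quality is the property the answer names, while the surrounding ScreenCaptureJob usage follows my recollection of the Expression Encoder SDK samples and may need checking:

using Microsoft.Expression.Encoder.ScreenCapture; // Expression Encoder SDK

class Recorder
{
    static void Main()
    {
        var job = new ScreenCaptureJob();
        job.OutputScreenCaptureFileName = @"C:\temp\capture.xesc"; // illustrative path
        job.ScreenCaptureVideoProfile.Quality = 50; // lower quality => smaller files
        job.Start();
        System.Threading.Thread.Sleep(10000); // record for 10 seconds
        job.Stop();
    }
}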
ScreenRecorderLib from NuGet is good; SharpAVI was taking too much of my disk space.
Be careful with ScreenRecorderLib, though: it needs some time to finish saving the MP4 file at the end.
Make sure your program doesn't exit before that happens.
I use FileInfo.Length to check whether the file size has stopped growing, which tells me whether saving is finished.
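A sketch of that polling check (the poll interval is arbitrary, and the method assumes the output file already exists when it is called):

using System.IO;
using System.Threading;

class SaveWatcher
{
    // Block until the output file stops growing, i.e. the recorder has
    // finished flushing the MP4 (the polling approach from the answer above).
    static void WaitUntilSaved(string path, int pollMs = 500)
    {
        long lastLength = -1;
        while (true)
        {
            long length = new FileInfo(path).Length;
            if (length == lastLength && length > 0)
                return; // size unchanged between polls: saving is done
            lastLength = length;
            Thread.Sleep(pollMs);
        }
    }
}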

High image compression

I'm trying to build a "remote desktop viewer".
For this I need to send the user's desktop, and that's a lot of information for sockets, especially if the resolution is high; a single frame can approach 5.3 MB (1680x1050).
So I started compressing with a GZip stream and the 5.3 MB became 500 KB. Then I added my own compression algorithm (I think it's called RLE, run-length encoding): I quantize each channel (red, green, blue) to 256 >> 3 = 32 levels and write how many pixels in a row have the same color, then GZip the result.
That brought the compressed size to 60-65 KB on average, up to 200 KB, and it can even drop below 5,000 bytes if the screen is totally white.
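For illustration, a sketch of that quantize-then-RLE scheme as I read it (the run layout of (count, value) byte pairs and the 255-pixel run cap are my own choices):

using System.Collections.Generic;
using System.IO;
using System.IO.Compression;

class Rle
{
    // Quantize each channel byte to 32 levels (256 >> 3) and run-length
    // encode: each run is stored as a (count, value) byte pair, then GZip'd.
    static byte[] Compress(byte[] pixels)
    {
        var runs = new List<byte>();
        int i = 0;
        while (i < pixels.Length)
        {
            byte value = (byte)(pixels[i] >> 3); // 0..31
            int count = 1;
            while (i + count < pixels.Length &&
                   (byte)(pixels[i + count] >> 3) == value &&
                   count < 255)
                count++;
            runs.Add((byte)count);
            runs.Add(value);
            i += count;
        }

        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
            gzip.Write(runs.ToArray(), 0, runs.Count);
        return output.ToArray();
    }
}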
Now I'm thinking (but haven't implemented it yet) about sending only the difference between consecutive frames: for each line, write where the difference between the pixels starts and how long it is.
Well, it could help; maybe I could get down to 30 KB per frame on average, but for sockets that's still a lot.
Has anyone ever succeeded in solving this problem? (And how, of course...)
There are standard algorithms for compressing images, e.g. JPEG.
A further optimization is to exploit knowledge about the image: on a desktop, items like the Windows Start button, application icons, and title-bar widgets are standard, so instead of sending their pixel values you can send their logical identifiers.
Yes, people have succeeded with this problem: the people who write remote desktop software, including the open-source VNC.
You may wish to review the source code of a VNC implementation; most VNC servers implement several different forms of compression.
