I need to develop an application that records video and plays back the recorded video while recording continues (without stopping the graph). I know how to develop with DirectShow, but I need the architecture of my graph. Which filters should I use?
I did this with raw video using DirectShow and the SampleGrabber, but now I need to compress the video, and I have no idea how to do it with a compressed video file (one that is not closed, because recording continues).
I think MPEG-2 recording is the best fit for my application, but please advise me on which filters to use.
Thanks
Have you tried the StreamBufferEngine?
Recording a file and watching it at the same time is a little problematic, because the demuxer needs to reparse the written file to learn about the new parts. I have never seen a demuxer do this.
The other big problem is file locking! The DirectShow File Writer and most other similar filters lock the file for writing, so no other process can open the file for reading.
What you are looking for is a timeshift system. There are some third-party SDKs for this, but it can also be implemented with your own DirectShow filters; you will need a lot of time and knowledge for that, though. I know it is possible, because I have done it in my company's video player (utilius fairplay 5).
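Following up on the StreamBufferEngine suggestion above: here is a minimal sketch of how an SBE timeshift pair could be wired up from C#. It assumes the DirectShowLib .NET wrappers expose the native SBE interfaces and coclasses (IStreamBufferSink, IStreamBufferSource); treat it as untested pseudocode for the graph topology, not a drop-in solution.

using DirectShowLib;      // assumption: DirectShowLib .NET wrappers
using DirectShowLib.SBE;  // assumption: the SBE interop definitions live here

// Sink graph: capture source -> MPEG-2 encoder -> StreamBufferSink
// (the sink writes the circular timeshift buffer to disk).
var sinkGraph = (IGraphBuilder)new FilterGraph();
var sbSink = (IBaseFilter)new StreamBufferSink();
sinkGraph.AddFilter(sbSink, "SBE Sink");
// ... add and connect the capture source and encoder upstream here ...
((IStreamBufferSink)sbSink).LockProfile(@"C:\timeshift\buffer.sbe");

// Source graph: StreamBufferSource -> decoder -> renderer
// (plays from the same buffer while recording continues).
var srcGraph = (IGraphBuilder)new FilterGraph();
var sbSource = (IBaseFilter)new StreamBufferSource();
srcGraph.AddFilter(sbSource, "SBE Source");
((IStreamBufferSource)sbSource).SetStreamSink((IStreamBufferSink)sbSink);
// Render the source filter's output pins, then run both graphs;
// IMediaSeeking on the source graph can then seek within the live buffer.

The key point is that there are two independent graphs: the sink graph never stops, and the source graph reads through the SBE rather than from the (locked) file directly, which sidesteps the demuxer-reparse and file-locking problems described above. SBE was designed for exactly this MPEG-2 timeshift scenario.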
I worked on this issue for weeks; SBE does not work well with H.264 muxed into a transport stream.
I came up with a solution, so let me share it.
First, the encoder needs a small GOP, like 15, not 150; otherwise syncing to the stream takes time and it is seen as a big hang.
The idea is to create a DirectShow filter. Start from a live source filter
(https://www.codeproject.com/Articles/158053/DirectShow-Filters-Development-Part-2-Live-Source)
and modify it so that frames are no longer pushed in from an external program.
Add Winsock2, for UDP binding or joining a multicast group.
Start receiving the data from the live source and deliver it as samples on the output pin.
I suggest always using the Elecard SDK for the setup.
On the sender you can use the network renderer, but there you cannot change the encoder GOP,
so open GraphEdit and build the sender graph:
desktop capture -> encoder -> mux -> network renderer.
Your new filter should be able to receive that data and display it on screen.
Once you have that working, you can then continue and add the timeshift capability to your filter.
Allocate a very big buffer, 200 MB to 1 GB, up to you (in RAM, of course).
Copy the same data that you send to the output pin into that buffer, and make it circular with read (rd) and write (wr) indexes.
You need to add an interface to your filter, with functions like:
1. GoLive
2. SetSeekPoint
The way I did it is the following:
I created a callback from the filter into the host (C#) that sends the time and the wr pointer every second or so (depending on the accuracy I need).
In the host I created a list of those two values.
Now in C# I have the list of wr pointers and their times,
so it is easy to search it and set the rd pointer back in the filter (a sketch of this follows below).
The filter then has two modes:
1. In live mode it sends the currently received data.
2. In seek mode it sends the data from the big buffer, following the rd pointer.
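To make the buffer and the host-side list concrete, here is a minimal C# sketch of the idea; the class and member names are illustrative, not from any SDK, and the byte-by-byte copy is deliberately simplistic.

using System;
using System.Collections.Generic;

// The circular timeshift buffer that lives inside the filter.
class TimeshiftBuffer
{
    readonly byte[] buf;
    long wr;   // total bytes ever written (wr % buf.Length = physical slot)
    long rd;   // current read position, used in seek mode

    public TimeshiftBuffer(int sizeBytes) { buf = new byte[sizeBytes]; }

    public long WritePointer => wr;

    // Called for every sample that is also delivered to the output pin.
    public void Write(byte[] data)
    {
        foreach (var b in data) { buf[wr % buf.Length] = b; wr++; }
    }

    // SetSeekPoint: clamp so we never read data the ring has already overwritten.
    public void SetSeekPoint(long pointer) => rd = Math.Max(pointer, wr - buf.Length);

    // Seek mode: deliver from rd onwards; live mode bypasses this entirely.
    public int Read(byte[] dest)
    {
        int n = 0;
        while (n < dest.Length && rd < wr) { dest[n++] = buf[rd % buf.Length]; rd++; }
        return n;
    }
}

// Host side (C#): the list built from the filter's once-per-second callback.
class SeekIndex
{
    readonly List<(DateTime time, long wr)> entries = new();

    public void OnCallback(DateTime time, long wr) => entries.Add((time, wr));

    // Find the wr pointer recorded closest before the requested time.
    public long Lookup(DateTime target)
    {
        long best = 0;
        foreach (var (time, wr) in entries)
            if (time <= target) best = wr;
        return best;
    }
}

In the real system the Write/Read side lives in C++ inside the filter; only the (time, wr) pairs cross over to the C# host through the callback, and SetSeekPoint carries the looked-up pointer back.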
I hope that is understandable.
I am a beginner in video streaming and handling.
I need to take the stream from a Basler GiGE camera and display it in a web container.
I am using the Basler.Pylon C# API to access the camera and grab images one by one. From the IGrabResult object returned, I can access various parameters such as width, height, and stride, and of course the byte buffer.
On my PC I can easily display that in an image window, but what do I need to do to display it in an ASP.NET web application?
EDIT
I am not looking for code but more for guidelines; if someone could explain how video streaming works in general, that would work too.
Video streaming is quite specialised, and in general I would say that if you want to stream high-quality video over the internet to multiple end users, it's easiest to use a dedicated video streaming server rather than try to build one yourself.
Dedicated video streaming servers can be provided via a hosted service (e.g. Vimeo), be a commercial server you install and run (e.g. Wowza), or be a freeware streaming server you install and run (e.g. GStreamer), so you have options there.
As a general rule, the streaming server will break your video into chunks and create multiple bit-rate copies of it. This allows a client to use adaptive bit rate (ABR) streaming and download your video chunk by chunk, selecting the bit-rate version for the next chunk depending on the current device and network conditions. HLS and MPEG-DASH are examples of ABR streaming protocols.
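As a concrete illustration, an HLS master playlist is just a small text file listing the available bit-rate renditions; the client picks one line and then fetches that rendition's own chunk list (the paths, bandwidths, and resolutions below are made-up examples):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8

Each rendition playlist in turn lists the chunk files (typically a few seconds each), which is what lets the player switch bit rates at chunk boundaries.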
On the web page you then need an HTML5 player which understands the streaming protocol; again there are many examples, such as the freeware Shaka and dash.js players. Integrating these into a web page is very straightforward.
You can observe this in use on services like Netflix and YouTube, which will often start at a lower bit rate to ensure a fast start and then 'step up' to higher bit rates until the optimal one for the current network conditions and device is reached. See here for some info on how you can see a graph of this when watching YouTube, for example:
https://stackoverflow.com/a/42365034/334402
Having said all the above, it is worth noting that your case seems to deal with a stream of still images. Although all video is really a stream of still images under the covers, it may be that your image changes infrequently, and hence you may not need some of the techniques above; a lot of the technology of video streaming exists to deal with the very large amount of data involved in streaming 30 or 60 high-quality frames per second from a server to a client.
If your stream of images, for example, was one every 30 seconds, then it may, as Nisus says, be just as easy to simply display the images in your web page and have the web page or app poll the server every 30 seconds (using ASP.NET AJAX in your case) to download the new image.
You have at least two options. The first is producing a series of JPEG images every few seconds and showing them one by one on the client using an <img> tag and simple JavaScript code. The second option is to generate and stream MP4 video and show it on the client with some COM Windows Media Player or HTML5 control.
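A minimal sketch of the server side of the first option, assuming ASP.NET Core MVC; ICameraService and LatestJpeg are hypothetical stand-ins for whatever component holds the most recent Basler frame already encoded as JPEG:

using Microsoft.AspNetCore.Mvc;

public class FrameController : Controller
{
    // Hypothetical service that keeps the most recent camera frame as JPEG bytes.
    readonly ICameraService camera;

    public FrameController(ICameraService camera) { this.camera = camera; }

    // The page polls GET /frame/latest periodically and swaps the <img> source.
    [HttpGet("frame/latest")]
    public IActionResult Latest()
    {
        byte[] jpeg = camera.LatestJpeg();
        if (jpeg == null) return NotFound();
        Response.Headers["Cache-Control"] = "no-store"; // force a fresh fetch each poll
        return File(jpeg, "image/jpeg");
    }
}

public interface ICameraService { byte[] LatestJpeg(); }

On the client, a timer simply resets the img element's src to /frame/latest?t=<now> every few seconds; the changing query string defeats browser caching.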
I asked a slightly similar question previously but never got a good handle on this specific piece. Here's the setup:
Webserver: I have coded my own web server. It works great, and I have no issues here. It can handle any known MIME type (that I define) and send files accordingly. It can also do so dynamically, to send information within the responses such as date/time, database results, etc. So I need no help here.
Webcam code: I have coded a Windows control that allows me to select whichever webcam I have hooked up and to view a preview inside the control. So I need no help here.
Here's where the issue arises: I want to connect these two components together so I can grab frames and stream them out through the web server to viewers who request video.
Reading around, I find that I probably want to write some sort of routine that takes regular "snapshots" at 5/10/20/30 frames per second and then pushes them on the fly to some sort of MPEG routine that converts them to MP4 and pushes the resulting bytes out to the viewer.
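For reference, the closest concrete technique I have found so far is MJPEG over HTTP (multipart/x-mixed-replace), where the server pushes one JPEG per frame down a single, never-ending response. A rough, untested sketch of what my server would write to the socket (GetNextFrameJpeg is a placeholder for a JPEG snapshot from my webcam control):

using System;
using System.IO;
using System.Text;

class MjpegWriter
{
    // Write a multipart/x-mixed-replace (MJPEG) response to an
    // already-accepted HTTP connection, one JPEG part per frame.
    static void StreamMjpeg(Stream client, Func<byte[]> getNextFrameJpeg)
    {
        const string boundary = "frameboundary";
        Write(client, "HTTP/1.1 200 OK\r\n" +
                      $"Content-Type: multipart/x-mixed-replace; boundary={boundary}\r\n\r\n");
        while (true) // runs until the client disconnects (Write then throws)
        {
            byte[] jpeg = getNextFrameJpeg();
            Write(client, $"--{boundary}\r\nContent-Type: image/jpeg\r\n" +
                          $"Content-Length: {jpeg.Length}\r\n\r\n");
            client.Write(jpeg, 0, jpeg.Length);
            Write(client, "\r\n");
            System.Threading.Thread.Sleep(100); // ~10 snapshots per second
        }
    }

    static void Write(Stream s, string text)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        s.Write(bytes, 0, bytes.Length);
    }
}

I realise MJPEG is not MP4; for real MP4 I would presumably need a proper muxer producing a fragmented layout, which is part of what I'm asking about.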
Can someone help enlighten me on how to do this? I don't know what the request should be from the client's end (presumably HTML5) and how to elegantly handle the stream. Thanks in advance!!!
I've been trying to find a way to show the stream of my webcam with a 30-second delay.
I found out that VLC is able to do this, but the problem is that the frame rate is much too low and it's not smooth or viewable at all. I have to run this on a fairly old laptop with a better webcam than the one I own now, so I guess VLC is not an option.
I am somewhat familiar with C#/C++ and Python, so I thought I might build a solution of my own, as the task seems fairly easy. The problem, though, is that I don't know where to start, and any nudges in the right direction would be much appreciated.
My initial idea was to record the first 30 seconds of the stream to disk and then use VLC to view the partial file (AFAIK it's able to do that). Is that idea worth working on, or should I scrap it and use some sort of buffer for the video frames of the last 30 seconds?
Again, any nudges in the right direction will be much appreciated, thanks!
Take a look at OpenCV.
It can retrieve and show images from a webcam.
A lot of tutorials are also available; e.g. http://opencv.willowgarage.com/wiki/CameraCapture
So simply create an array (or whatever) to hold the number of frames expected in 30 seconds (FPS * 30).
Start retrieving images, and as soon as the array is filled, start playing from position 0.
Each newly retrieved image can then be stored at the position of the image that was just shown.
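A minimal sketch of that ring-buffer delay, written here in C# with the OpenCvSharp wrapper for OpenCV (an assumption; the same few lines translate directly to Python or C++ OpenCV). The fixed 30 fps is also an assumption; measure your camera's real rate:

using OpenCvSharp; // assumption: the OpenCvSharp NuGet package

class DelayedPreview
{
    static void Main()
    {
        const int fps = 30, delaySeconds = 30;
        var buffer = new Mat[fps * delaySeconds]; // FPS * 30 slots, as described above
        using var capture = new VideoCapture(0);
        int i = 0;
        bool filled = false;

        while (true)
        {
            var frame = new Mat();
            if (!capture.Read(frame) || frame.Empty()) break;

            if (filled)
            {
                // Show the frame captured delaySeconds ago, then reuse its slot.
                Cv2.ImShow("delayed", buffer[i]);
                buffer[i].Dispose();
            }
            buffer[i] = frame;
            i = (i + 1) % buffer.Length;
            if (i == 0) filled = true;

            if (Cv2.WaitKey(1000 / fps) == 27) break; // Esc to quit
        }
    }
}

Nothing is shown for the first 30 seconds while the buffer fills; after that, every new frame displaces (and displays) the frame it is replacing.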
Miguel Grinberg has written an excellent video streaming tutorial in Python which sends JPEG frames successively. Check his blog post here:
http://blog.miguelgrinberg.com/post/video-streaming-with-flask
Each of these JPEG frames can be briefly held back and then broadcast [to take the required delay into consideration].
As far as getting the input video feed is concerned, you can interface with a webcam using OpenCV. OpenCV's VideoCapture returns raw images as bytes; these bytes need to be encoded to JPEG and interfaced with Miguel's code.
import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        if not success:
            # The camera read failed, so there is no frame to deliver.
            return None
        # We are using Motion JPEG, but OpenCV captures raw images by default,
        # so we must encode each frame into JPEG in order to correctly display
        # the video stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
This approach will help you cater for all the required features:
1. No internet required.
2. Adjustable delay: easily control the delay and the processing you want to perform on each frame.
3. Good quality.
4. Record on demand: store the captured frames as needed.
5. Play-back feature: just keep the previous 24*x frames (for a 24 fps stream, that is x seconds of history).
This is a bit of a weird question, but with the functionality of C++, C#, and Objective-C as we speak, is there any possible way for video content to be uploaded while it is still recording? So as you record the video, it would be compressed and uploaded to a website.
Would this involve cutting the video into small parts as you record, with hardly noticeable stops and starts during the recording?
If anyone knows if this is at all possible, please let me know.
Sorry for the odd question.
You've just asked for streaming media -- something that's been done for over a decade (and, if you overlook "television", something that's probably been underway in research settings for several decades).
Typically, the video recorder will feed the raw data through filters of some sort -- correct white balance, sharpen or soften the video, image stabilize, and then compress the raw data using a codec. Most codec designs will happily take a block of input, work on it, and then produce a block of encoded data ready for writing. Instead of writing to disk, you could "write" to a socket opened to a remote machine.
Or, if you're working with an API that only writes to disk, you could easily re-read the data off disk as it is being written and send the data to a remote site. You'd have to "follow" the writing using something like tail -f's magic ability to follow the file as it is written. (Heck, if you're just bodging something together for a one-off, I'd even recommend using tail -f as part of your system.)
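A minimal C# sketch of that follow-the-growing-file idea: open the file with a share mode that tolerates the writer, poll for newly appended bytes, and forward them over a TCP socket (the host, port, and file name are made-up examples):

using System.IO;
using System.Net.Sockets;
using System.Threading;

class FileFollower
{
    static void Main()
    {
        using var socket = new TcpClient("upload.example.com", 9000); // hypothetical endpoint
        using var net = socket.GetStream();
        // FileShare.ReadWrite lets us read while the recorder keeps writing.
        using var file = new FileStream("recording.mp4", FileMode.Open,
                                        FileAccess.Read, FileShare.ReadWrite);
        var buf = new byte[64 * 1024];
        while (true)
        {
            int n = file.Read(buf, 0, buf.Length);
            if (n > 0)
                net.Write(buf, 0, n);  // forward the newly written bytes
            else
                Thread.Sleep(200);     // at EOF: wait for the writer to append more
        }
    }
}

Note that this only works if the recording application does not lock the file exclusively.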
It depends on whether the application recording to disk locks the file. My guess is that, unless you wrote the recording software, the application locks the file (or doesn't even create the real file) until it stops recording. If you are writing the recording software as well, then yes, you can do this; you would just use synchronized threads.
I have a requirement to build a very simple streaming server. It needs to be able to capture video from a device and then stream that video via multicast to several clients on a LAN.
The capture part of this is pretty easy (in C#) thanks to a library someone wrote with DirectShow.Net (http://www.codeproject.com/KB/directx/directxcapture.aspx).
The question I have now is how to multicast this. This is the part I'm stuck on; I'm not sure what to do next or what steps to take.
There are no stock filters available that you can just plug in and use.
You need to do three things here:
1. Compress the video into MPEG-2 or MPEG-4.
2. Mux it into an MPEG transport stream.
3. Broadcast it.
There are lots of codecs available for part 1, and some devices can even output compressed video.
Part 3 is quite simple too.
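For example, here is a sketch of part 3 with .NET's UdpClient, sending TS-aligned chunks to a multicast group (the group address and port are arbitrary examples):

using System.Net;
using System.Net.Sockets;

class MulticastSender
{
    static void Main()
    {
        using var client = new UdpClient();
        client.Ttl = 8; // allow the datagrams to cross a few routers on the LAN
        var group = new IPEndPoint(IPAddress.Parse("239.1.1.1"), 1234); // example group

        // In a real graph this buffer would be fed by the muxer; here, a dummy
        // payload of 7 TS packets (7 * 188 = 1316 bytes fits in one UDP datagram).
        var payload = new byte[7 * 188];
        client.Send(payload, payload.Length, group);
    }
}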
The main problem is part 2, as the MPEG transport stream is patented. It is licensed such that you cannot develop free software based on it (VLC and FFmpeg violate that license), and you have to pay several hundred dollars just to obtain a copy of the specification.
If you have to develop it, you need to:
1. Obtain a copy of ISO/IEC 13818-1:2000 (you can download it as a PDF from their site); it describes the MPEG transport stream.
2. Develop a renderer filter that takes MPEG elementary streams and muxes them into a transport stream.
It has to be a renderer rather than a transform filter, because a transport stream carries some out-of-band data (program association tables and reference clocks) that needs to be sent on a regular basis, and you need to keep a worker thread to do that.
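To give a flavour of what such a muxer emits, here is a sketch of building the 4-byte header of a single 188-byte transport stream packet per ISO/IEC 13818-1; the real work (PES packetisation, PAT/PMT tables, PCR insertion) is omitted:

class TsPacket
{
    // Build one 188-byte MPEG-TS packet with an empty, stuffed payload.
    static byte[] Build(int pid, int continuityCounter, bool payloadStart)
    {
        var packet = new byte[188];
        packet[0] = 0x47;                                 // sync byte
        packet[1] = (byte)(((payloadStart ? 1 : 0) << 6)  // payload_unit_start_indicator
                           | ((pid >> 8) & 0x1F));        // PID, high 5 bits
        packet[2] = (byte)(pid & 0xFF);                   // PID, low 8 bits
        packet[3] = (byte)(0x10                           // adaptation_field_control = payload only
                           | (continuityCounter & 0x0F)); // 4-bit continuity counter
        for (int i = 4; i < 188; i++) packet[i] = 0xFF;   // stuffing instead of real payload
        return packet;
    }
}

The worker thread mentioned above would call something like this on a timer to keep emitting the PAT/PMT sections and PCR updates between media packets.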
To achieve that, you need to set up or write some kind of video streaming server.
I've used VideoCapX for the same purpose in my project. The documentation and support are not top notch, but they're good enough. It uses WMV streaming technology; the stream is called an MMS stream, and you can view it with almost any media player. I've tested with Windows Media Player, Media Player Classic, and VLC. If you would like to see its capabilities without writing any code just yet, take a look at U-Broadcast, which uses VideoCapX to do the job behind the scenes.
I've been using DirectShow.Net for almost 2 years, and I still find it hard to write a streaming server myself, due to the complexity of DirectShow technology.
Other than WMV, you can take a look at Helix Server or Apple's streaming server. The latter is not free, and neither is the WMV streaming server from Microsoft.
You can also take a look at VLC or Windows Media Encoder to stream straight from the application, but so far I find that U-Broadcast outdoes both of them: VLC has some compatibility issues with codecs and playback from non-VLC players, and WME has problems starting up the capture device.
Good Luck
NOTE: I'm not associated with VideoCapX or its company; I'm just a happy user of it.
http://www.codeproject.com/KB/directx/DShowStreamingServer.aspx might help, and http://en.wikipedia.org/wiki/VLC_media_player#cite_note-14
VLC also "should" be able to stream from any device natively.