I am a beginner in video streaming and handling.
I need to take the stream from a Basler GigE camera and display it in a web container.
I am using the Basler.Pylon C# API to access the camera and grab images one by one. From the IGrabResult object returned, I can access various parameters such as width, height, stride and, of course, the byte buffer.
On my PC, I can easily display that in an image window, but what do I need to do to display it in an ASP.NET web application?
EDIT
I am not looking for code but rather for guidelines; if someone could explain how video streaming works in general, that would work too.
Video streaming is quite specialised, and in general I would say that if you want to stream high-quality video over the internet to multiple end users, it's easiest to use a dedicated video streaming server rather than trying to build one yourself.
Dedicated video streaming servers can be provided via a hosted service (e.g. Vimeo), be a commercial server you install and run (e.g. Wowza), or be a freeware streaming server you install and run (e.g. GStreamer), so you have options there.
As a general rule, the streaming server will break your video into chunks and create multiple bit rate copies of your video. This allows a client to use Adaptive Bit Rate (ABR) streaming and download your video chunk by chunk, selecting the bit rate version for the next chunk depending on the current device and network conditions. HLS and MPEG-DASH are examples of ABR streaming protocols.
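For illustration, an HLS master playlist with three bit rate renditions looks roughly like this (the bandwidths, resolutions and file names are made-up examples); the player picks a variant per chunk based on measured throughput:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
video_360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
video_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
video_1080p.m3u8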
On a web page you then need an HTML5 player which understands this streaming protocol - again there are many examples, such as the freeware Shaka and dash.js players. Integrating these into a web page is very straightforward.
You can observe these in use on services like Netflix and YouTube which will often start at a lower bit rate to ensure a fast start and then 'step up' to higher bit rates until the optimal one for the current network conditions and device is reached. See here for some info on how you can see a graph of this when watching YouTube, for example:
https://stackoverflow.com/a/42365034/334402
Having said all the above, it is worth noting that your case seems to be dealing with a stream of still images. Although all video is really a stream of still images under the covers, it may be that your image changes infrequently, and hence you may not need some of the techniques above - a lot of the technology of video streaming exists to deal with the very large amount of data involved in streaming 30 or 60 high-quality frames per second from a server to a client.
If, for example, your stream of images was one image every 30 seconds, then it may, as Nisus says, be just as easy to simply display the images in your web page and have the web page or app poll the server every 30 seconds (using ASP.NET AJAX in your case) to download the new image.
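As a rough sketch of that polling approach (all names here are hypothetical, and FrameStore is assumed to be a thread-safe store that your Pylon grab loop fills from each IGrabResult), an ASP.NET handler could serve the latest frame as a JPEG, and the page would re-request it on a timer with a cache-busting query string:

using System.Drawing;
using System.Drawing.Imaging;
using System.Web;

public class LatestFrameHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // FrameStore.LatestBitmap is a hypothetical shared store updated
        // by the camera grab loop (locking omitted for brevity).
        Bitmap frame = FrameStore.LatestBitmap;
        context.Response.ContentType = "image/jpeg";
        frame.Save(context.Response.OutputStream, ImageFormat.Jpeg);
    }
}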
You have at least two options. The first is producing a series of JPEG images every few seconds and showing them one by one on the client using an img tag and simple JavaScript code. The second option is to generate and stream MP4 video and show it on the client with a COM Windows Media Player or HTML5 control.
Related
I'm doing a WhatsApp-like app and I need to send user videos (from camera/gallery).
I need to send video from iOS to Android and from Android to iOS (Windows Phone in the future).
The first thing I thought of was to use camera parameters to record the video in low resolution, but that won't help with videos already stored on the phone.
My second thought was to zip the video file, but I guess this is not enough for very large files.
Third: actually compressing the video file, generating a new file, and then zipping it before sending it through the network.
So this is what I need before actually sending the video:
1. Compress the video file, generating a new file that will play nicely on both platforms (iOS and Android)
2. Make the compressing process async (as I don't want to block the UI thread for a really long time)
3. Zip it (this is the easy part, just for the record)
Any ideas or help are appreciated.
1. You would best use your platform's framework to leverage existing hardware support for encoding (mainly H.264 hardware encoding). A PCL solution would eat too much battery, as it would need to run on the CPU only, giving you bad performance and even worse battery life.
2. This ties in with 1: just use your platform's native methods to execute the framework's methods asynchronously.
3. Skip this part. It will increase overhead and prevent video streaming, and there are virtually zero benefits from using a zip algorithm on top of an already compressed video stream.
Just make sure that you end up with a cross-platform-compatible video format like H.264.
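A minimal sketch of point 2 (the names here are hypothetical; the real encoder call would be AVAssetExportSession on iOS or MediaCodec/MediaMuxer on Android, invoked from your platform-specific project):

using System.IO;
using System.Threading.Tasks;

public async Task<string> PrepareVideoAsync(string inputPath)
{
    string outputPath = Path.ChangeExtension(inputPath, ".compressed.mp4");
    // NativeEncoder.Compress is a hypothetical wrapper around the platform's
    // hardware H.264 encoder; Task.Run keeps the long encode off the UI thread.
    await Task.Run(() => NativeEncoder.Compress(inputPath, outputPath));
    return outputPath;
}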
I need to develop an application that records video and plays the recorded video back while recording continues (without stopping the graph). I know how to develop using DirectShow, but I need the architecture of my graph. What filters should I use?
I did it with raw video using DirectShow and SampleGrabber, but I need to compress the video, and I have no idea how to do this with a compressed video file (which is not closed, because recording continues).
I think MPEG-2 recording is best for my application, but please guide me on which filters I should use.
Thanks
Have you tried the StreamBufferEngine?
Recording a file and watching it at the same time is a little bit problematic, because the demuxer needs to re-parse the written file to know about the new parts. I have never seen a demuxer doing this.
The other big problem is file locking! The DirectShow file writer and most other similar filters lock the file for writing, so no other process can open the file for reading.
You are searching for a TimeShift system. There are some third-party SDKs for this, but it can also be implemented with your own DirectShow filters - you will just need a lot of time and knowledge for it. I know it's possible because I have done it in the video player from my company (utilius fairplay 5).
I worked on that issue for weeks: SBE does not work well with H.264 muxed into a transport stream.
I came up with a solution; let me share it.
First, the encoder needs to have a small GOP, like 15, not 150; otherwise syncing takes time and it is seen as a big hang.
The idea is to create a DirectShow filter, starting from a live source filter
(https://www.codeproject.com/Articles/158053/DirectShow-Filters-Development-Part-2-Live-Source)
Modify the filter so that frames are not pushed in from an external program.
Add Winsock2 for UDP binding, or for joining a multicast group.
Start receiving the data from the live source and deliver it as samples on the output pin.
I suggest always using the Elecard SDK for setup.
On the sender you can use the network renderer, but there you cannot change the encoder GOP,
so open GraphEdit and build the sender graph:
desktop capture -> encoder -> mux -> network renderer.
Your new filter should know how to receive the data and display it on screen.
Once you have that working, you can then continue to add the timeshift capabilities to your filter.
Allocate a very big buffer, 200 MB to 1 GB, up to you (in RAM, of course).
Copy the same data that you send to the output pin into that buffer, making it circular with read (rd) and write (wr) indexes.
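A minimal sketch of that circular buffer (in C# for brevity, although the actual DirectShow filter would implement it in C++):

class CircularBuffer
{
    private readonly byte[] _buf;
    private long _wr;   // absolute write position; buffer index is _wr % _buf.Length
    private long _rd;   // absolute read position, used in seek mode

    public CircularBuffer(int sizeBytes) { _buf = new byte[sizeBytes]; }

    // Called with the same data that goes to the output pin in live mode.
    public void Write(byte[] data)
    {
        foreach (byte b in data)
            _buf[(int)(_wr++ % _buf.Length)] = b;
    }

    // In seek mode the output pin is fed from here instead.
    public int Read(byte[] dest)
    {
        int n = 0;
        while (_rd < _wr && n < dest.Length)
            dest[n++] = _buf[(int)(_rd++ % _buf.Length)];
        return n;
    }

    public long WritePosition { get { return _wr; } } // reported to the host periodically
    public void SetSeekPoint(long rd) { _rd = rd; }   // host maps a timestamp to a wr value
    public void GoLive() { _rd = _wr; }
}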
You need to add an interface to your filter, with functions like:
1. GoLive
2. SetSeekPoint
The way I did it is the following:
I created a callback from the filter into the host (C#) that sends the time and the wr pointer every second or so (depending on the accuracy I need).
In the host I keep a list of those two pieces of information.
Now in C# I have the list of wr pointers and their times, so it is easy to search it and set the rd pointer back in the filter.
The filter then has two modes:
1. In live mode it sends the currently received data.
2. In seek mode it sends the data from the big buffer, following the rd pointer.
I hope this makes sense.
I have a video analytics program that processes assorted frames from a video (several hours long).
The video is likely going to be an MP4 but may be other formats going forwards.
At the moment, I have a C# wrapper around an ffmpeg call to extract an individual frame at the requested time. (I'm using the ffmpeg.exe binary, not the libraries directly.)
At the moment, this all works. But it's slow. Very slow.
I've found ways to improve the speed by storing the extracted frames in a ramdisk while they're being processed, changing the stored image format, etc.
I just wanted to check whether anyone can think of another way to pull individual frames out, at split-second accuracy.
I know this is probably possible with DShow etc... I went straight to FFMPEG as I've used it before. But if DShow is likely to be faster I'll gladly change!
In Windows you have native APIs to process, and in particular read from, media files:
DirectShow
Media Foundation
Both provide support for MP4 (H.264 video): DirectShow as a framework extended by a third-party MP4 demultiplexer and H.264 decoder (if needed - Windows 7 also provides built-in ones), and Media Foundation either natively or extended by third-party extensions, depending on the OS version.
Both can be interfaced from .NET via the open source wrappers DirectShow.NET and Media Foundation .NET respectively. This works out way faster than the FFmpeg CLI for individual frames. Also note that you would be able to obtain frames incrementally without the need to seek to a specific time and do excessive duplicated work, not to mention process startup/initialization overhead. Alternatively, you could use the FFmpeg/Libav libraries through a C# wrapper and get similar performance.
You can change the position of the seek offset parameter. The order matters for speed: if the video contains valid metadata, ffmpeg can seek through the video faster.
If you put the offset before the input file, the offset is calculated from the bit rate, which is not always exact (in the case of a variable bit rate), but it is much faster. The accurate way is to walk through the video (offset parameter after the input file), but this takes time.
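For example, a sketch of the fast variant invoked from C# (paths and timestamp are placeholders); moving -ss after -i gives the slow but frame-accurate behaviour described above:

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg.exe",
    // -ss before -i: fast, keyframe/bit-rate based seek;
    // put -ss after -i instead for the slow, accurate walk.
    Arguments = "-ss 00:42:07.500 -i input.mp4 -frames:v 1 -q:v 2 frame.jpg",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var ffmpeg = Process.Start(psi))
    ffmpeg.WaitForExit();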
I am learning WCF, LINQ and a few other technologies by writing, from scratch, a custom remote control application like VNC. I am creating it with three main goals in mind:
The server will provide 'remote control' on an application level (i.e. seamless windows) instead of full desktop access.
The client can select any number of applications that are running on the server and receive a stream of images of each of them.
A client can connect to more than one server simultaneously.
Right now I am using WCF to send an array of bytes that represents the window being sent:
using (var ms = new MemoryStream()) {
    window.GetBitmap().Save(ms, ImageFormat.Jpeg);
    frame.Snapshot = ms.ToArray();
}
GetBitmap implementation:
var wRectangle = GetRectangle();
var image = new Bitmap(wRectangle.Width, wRectangle.Height);
// dispose the Graphics object when done drawing into the bitmap
using (var gfx = Graphics.FromImage(image))
{
    gfx.CopyFromScreen(wRectangle.Left, wRectangle.Top, 0, 0, wRectangle.Size, CopyPixelOperation.SourceCopy);
}
return image;
It is then sent via WCF (TCPBinding, and it will always be over LAN) to the client and reconstructed in a blank Windows Form with no border, like this:
using (var ms = new MemoryStream(_currentFrame.Snapshot))
{
    BackgroundImage = Image.FromStream(ms);
}
I would like to make this process as efficient as possible in both CPU and memory usage with bandwidth coming in third place. I am aiming to have the client connect to 5+ servers with 10+ applications per server.
Is my existing method the best approach (while continuing to use these technologies) and is there anything I can do to improve it?
Ideas that I am looking into (but I have no experience with):
Using an open source graphics library to capture and save the images instead of a .NET solution.
Saving as PNG or another image type rather than JPG.
Send image deltas instead of a full image every time.
Try to 'record' the windows and create a compressed video stream instead of picture snapshots (MPEG?).
You should be aware of these points:
Transport: TCP/binary message encoding will be the fastest way to transfer your image data.
Image capture: you can rely on P/Invoke to access your screen data, as this can be faster (though more memory-consuming). Some examples: Capturing the Screen Image in C# [P/Invoke], How to take a screen shot using .NET [Managed] and Capturing screenshots using C# (Managed).
You should reduce your image data before sending it:
choose your image format wisely, as some formats have native compression (such as JPG);
an example would be Find differences between images C#;
when sending only a diff image, you can crop it and send just the non-empty areas.
Try to inspect your WCF messages. This will help you understand how messages are formatted and identify how to make those messages smaller.
Only after passing through all these steps and being satisfied with your final code, you can download the VncSharp source code. It implements the RFB protocol (Wikipedia entry), "a simple protocol for remote access to graphical user interfaces. Because it works at the framebuffer level it is applicable to all windowing systems and applications, including X11, Windows and Macintosh. RFB is the protocol used in VNC (Virtual Network Computing)."
I worked on a similar project a while back. This was my general approach:
Rasterized the captured bitmap to tiles of 32x32
To determine which tiles had changed between frames, I used unsafe code to compare them 64 bits at a time (see the sketch after this list)
On the set of delta tiles I applied one of the PNG filters to improve compressibility, with the best results coming from the Paeth filter (also sketched below)
Used DeflateStream to compress the filtered deltas
Used a BinaryMessageEncoding custom binding on the service to transmit the data in binary instead of the default Base64-encoded version
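A sketch of that 64-bits-at-a-time comparison (my reconstruction, not the original code; it assumes both frames are locked as 32bpp bitmaps with equal stride, the pointers address the tile's top-left byte, so one 32x32 tile row is 128 bytes, i.e. sixteen 8-byte reads, and it requires compiling with /unsafe):

static unsafe bool TileChanged(byte* prev, byte* curr, int stride)
{
    const int tileSize = 32;                  // 32x32 tiles, 4 bytes per pixel
    const int ulongsPerRow = tileSize * 4 / 8;
    for (int y = 0; y < tileSize; y++)
    {
        ulong* a = (ulong*)(prev + y * stride);
        ulong* b = (ulong*)(curr + y * stride);
        for (int x = 0; x < ulongsPerRow; x++)
            if (a[x] != b[x])
                return true;                  // first 64-bit mismatch: tile is dirty
    }
    return false;
}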
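The Paeth predictor itself comes straight from the PNG specification: each byte is predicted from its left (a), above (b) and upper-left (c) neighbours, and the filter stores raw minus prediction, which then compresses well with Deflate:

static byte PaethPredictor(byte a, byte b, byte c)
{
    int p = a + b - c;                        // initial estimate
    int pa = Math.Abs(p - a);
    int pb = Math.Abs(p - b);
    int pc = Math.Abs(p - c);
    // return the neighbour closest to the estimate, breaking ties a, b, c
    if (pa <= pb && pa <= pc) return a;
    if (pb <= pc) return b;
    return c;
}

The filtered output for one byte is then (byte)(raw - PaethPredictor(left, up, upLeft)), and the filtered tile bytes are what get written through the DeflateStream.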
Some client-side considerations: when dealing with large amounts of data being transferred through a WCF service, I found that some parameters of the HttpTransportBinding and the XmlDictionaryReaderQuotas were set to pretty conservative values, so you will want to increase them.
Check out this: Large Data and Streaming (WCF)
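For instance, a sketch of raising those limits on a binary-over-TCP custom binding (the 16 MB values are arbitrary; tune them to your frame sizes):

using System.ServiceModel.Channels;

var binding = new CustomBinding(
    new BinaryMessageEncodingBindingElement
    {
        // the default reader quotas are conservative; raise them for large frames
        ReaderQuotas = { MaxArrayLength = 16 * 1024 * 1024 }
    },
    new TcpTransportBindingElement
    {
        MaxReceivedMessageSize = 16 * 1024 * 1024
    });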
The fastest way to send data between client and server is to send a byte array, or several byte arrays. That way WCF doesn't have to do any custom serialization on your data.
That said, you should use the new WPF/.NET 3.5 libraries to compress your images instead of the ones from System.Drawing. The functions in the System.Windows.Media.Imaging namespace are faster than the old ones and can still be used in WinForms.
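For example, a sketch of JPEG encoding with the WPF codecs (assumes a reference to PresentationCore and an existing BitmapSource; the QualityLevel of 70 is just an example value to tune):

using System.IO;
using System.Windows.Media.Imaging;

static byte[] EncodeJpeg(BitmapSource source)
{
    var encoder = new JpegBitmapEncoder { QualityLevel = 70 };
    encoder.Frames.Add(BitmapFrame.Create(source));
    using (var ms = new MemoryStream())
    {
        encoder.Save(ms);
        return ms.ToArray();   // this byte[] goes straight into the WCF message
    }
}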
In order to know whether compression is the way to go, you will have to benchmark your scenario to see how the compression/decompression time compares to transferring all the bytes uncompressed.
If you transfer the data over the internet, then compression will help for sure. Between components on the same machine or on a LAN, the benefit might not be so obvious.
You could also try compressing the image, then chunking the data and sending it asynchronously with a chunk ID which you puzzle together on the client. TCP connections start slow and increase in bandwidth over time, so starting two or four at the same time should cut the total transfer time (all depending on how much data you are sending). Chunking the compressed image bytes is also easier, logic-wise, compared to doing tiles in the actual images.
Summed up: System.Windows.Media.Imaging should help you both CPU- and bandwidth-wise compared to your current code. Memory-wise, I would guess about the same.
Instead of capturing the entire image, just send smaller subsections of it. Meaning: starting in the upper left corner, send a 10x10-pixel image, then 'move' ten pixels and send the next 10px square, and so on. You can then send dozens of small images and update the painted full image on the client. If you've used RDC to view images on a remote machine, you've probably seen it do this sort of screen painting.
Using the smaller image sections, you can then split up the deltas as well: if nothing has changed in the current section, you can safely skip it, inform the client that you're skipping it, and move on to the next section.
You'll definitely want to use compression for sending the images. However, you should check whether you get smaller file sizes from compression similar to gzip, or whether an image codec gives you better results. I've never run a comparison, so I can't say for certain one way or another.
Your solution looks fine to me, but I suggest (as others did) that you use tiles and compress the traffic when possible. In addition, I think you should send the entire image once in a while, just to be sure that the client's deltas have a common "base".
Perhaps you can use an existing solution for streaming, such as RTP/H.263 for video streaming. It works great, it uses compression, and it's well documented and widely used. You could then skip the WCF part and go directly to the streaming part (either over TCP or over UDP). If your solution goes to production, perhaps the H.263 streaming approach would be better in terms of responsiveness and network usage.
Bitmap scrImg = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
Graphics scr = Graphics.FromImage(scrImg); // draw the captured pixels into the bitmap
scr.CopyFromScreen(new Point(0, 0), new Point(0, 0), Screen.PrimaryScreen.Bounds.Size);
testPictureBox.Image = (Image)scrImg;
I use this code to capture my screen.
I have a requirement to build a very simple streaming server. It needs to be able to capture video from a device and then stream that video via multicast to several clients on a LAN.
The capture part of this is pretty easy (in C#) thanks to a library someone wrote with DirectShow.Net (http://www.codeproject.com/KB/directx/directxcapture.aspx).
The question I have now is how to multicast this. This is the part I'm stuck on; I'm not sure what to do next or what steps to take.
There are no filters available that you can just plug in and use.
You need to do three things here:
1. Compress the video into MPEG-2 or MPEG-4
2. Mux it into an MPEG Transport Stream
3. Broadcast it
There are lots of codecs available for part 1, and some devices can even output compressed video.
Part 3 is quite simple too.
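For part 3, a minimal sketch of the sending side using UdpClient (the multicast address and port are arbitrary examples, and GetNextTransportStreamChunk is a hypothetical source; a datagram typically carries seven 188-byte TS packets to fit a standard MTU):

using System.Net;
using System.Net.Sockets;

var sender = new UdpClient();
sender.Ttl = 1;   // keep the stream on the local LAN
var group = new IPEndPoint(IPAddress.Parse("239.255.0.1"), 1234);

byte[] tsChunk = GetNextTransportStreamChunk();   // e.g. 7 * 188 bytes from the mux
sender.Send(tsChunk, tsChunk.Length, group);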
The main problem is part 2, as the MPEG Transport Stream is patented. It is licensed so that you cannot develop free software based on it (VLC and FFmpeg violate that license), and you have to pay several hundred dollars just to obtain a copy of the specification.
If you have to develop it, you need to:
Obtain a copy of ISO/IEC 13818-1:2000 (you can download it as a PDF from their site); it describes the MPEG Transport Stream.
Develop a renderer filter that takes MPEG elementary streams and muxes them into a transport stream.
It has to be a renderer rather than a transform filter, because there is some out-of-band data (program association tables and reference clocks) that needs to be sent on a regular basis, and you need to keep a worker thread to do that.
To achieve that, you need to set up or write some kind of video streaming server.
I've used VideoCapX for the same purpose in my project. The documentation and support are not top-notch, but they're good enough. It uses WMV streaming technology; the stream is called an MMS stream, and you can view it with almost any media player. I've tested with Windows Media Player, Media Player Classic and VLC. If you would like to see its capabilities without writing any code just yet, take a look at U-Broadcast, which uses VideoCapX to do the job behind the scenes.
I've been using DirectShow.Net for almost 2 years, and I still find it hard to write a streaming server myself, due to the complexity of DirectShow technology.
Other than WMV, you can take a look at Helix Server or Apple Streaming Server. The latter is not free, and neither is the WMV streaming server from Microsoft.
You can also take a look at VLC or Windows Media Encoder to stream straight from the application, but so far I find U-Broadcast outdoes both of the above: VLC has some compatibility issues with codecs and playback from non-VLC players, and WME has problems starting up the capture device.
Good luck!
NOTE: I'm not associated with VideoCapX or its company; I'm just a happy user of it.
http://www.codeproject.com/KB/directx/DShowStreamingServer.aspx might help, and http://en.wikipedia.org/wiki/VLC_media_player#cite_note-14
VLC also "should" be able to stream from any device natively.