Efficient way to send images via WCF? - c#

I am learning WCF, LINQ and a few other technologies by writing, from scratch, a custom remote control application like VNC. I am creating it with three main goals in mind:
The server will provide 'remote control' on an application level (i.e. seamless windows) instead of full desktop access.
The client can select any number of applications that are running on the server and receive a stream of images of each of them.
A client can connect to more than one server simultaneously.
Right now I am using WCF to send an array of bytes representing the window being sent:
// Capture the window, encode it as JPEG, and store the bytes on the frame contract.
// The bitmap is disposed once encoded so GDI+ handles don't leak.
using (var ms = new MemoryStream())
using (var bmp = window.GetBitmap())
{
    bmp.Save(ms, ImageFormat.Jpeg);
    frame.Snapshot = ms.ToArray();
}
GetBitmap implementation:
var wRectangle = GetRectangle();
var image = new Bitmap(wRectangle.Width, wRectangle.Height);
using (var gfx = Graphics.FromImage(image))
{
    gfx.CopyFromScreen(wRectangle.Left, wRectangle.Top, 0, 0, wRectangle.Size, CopyPixelOperation.SourceCopy);
}
return image;
It is then sent via WCF (NetTcpBinding, and it will always be over LAN) to the client and reconstructed on a blank, borderless Windows Form like this:
using (var ms = new MemoryStream(_currentFrame.Snapshot))
using (var img = Image.FromStream(ms))
{
    // Clone into a new Bitmap: GDI+ requires the source stream to stay
    // open for the lifetime of an Image created with FromStream.
    BackgroundImage = new Bitmap(img);
}
I would like to make this process as efficient as possible in both CPU and memory usage with bandwidth coming in third place. I am aiming to have the client connect to 5+ servers with 10+ applications per server.
Is my existing method the best approach (while continuing to use these technologies) and is there anything I can do to improve it?
Ideas that I am looking into (but have no experience with):
Using an open-source graphics library to capture and save the images instead of the .NET solution.
Saving as PNG or another image type rather than JPEG.
Sending image deltas instead of a full image every time.
'Recording' the windows and creating a compressed video stream (MPEG?) instead of picture snapshots.

You should be aware of these points:
Transport: TCP with binary message encoding will be the fastest way to transfer your image data.
Image capture: you can rely on P/Invoke to access the screen data, as this can be faster and less memory-hungry than the managed API. Some examples: Capturing the Screen Image in C# [P/Invoke], How to take a screen shot using .NET [Managed] and Capturing screenshots using C# (Managed).
Reduce your image data before sending it:
choose your image format wisely, as some formats (such as JPEG) have native compression
for finding differences, an example is Find differences between images C#
when sending only a diff image, crop it and send just the non-empty areas (see the sketch at the end of this answer)
Inspect your WCF messages. This will help you understand how the messages are formatted and identify ways to make them smaller.
Only after passing through all these steps and being satisfied with your final code, you can download the VncSharp source code. It implements the RFB Protocol (Wikipedia entry), "a simple protocol for remote access to graphical user interfaces. Because it works at the framebuffer level it is applicable to all windowing systems and applications, including X11, Windows and Macintosh. RFB is the protocol used in VNC (Virtual Network Computing)."
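A minimal sketch of the diff-then-crop idea from the list above, assuming two 32bpp frames of identical size and a project compiled with /unsafe; the name ComputeDirtyRect is illustrative, not from any of the linked articles:

using System.Drawing;
using System.Drawing.Imaging;

static class FrameDiff
{
    // Returns the smallest rectangle containing every changed pixel,
    // or Rectangle.Empty when the two frames are identical.
    public static unsafe Rectangle ComputeDirtyRect(Bitmap prev, Bitmap curr)
    {
        var rect = new Rectangle(0, 0, prev.Width, prev.Height);
        var pd = prev.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        var cd = curr.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        int left = int.MaxValue, top = int.MaxValue, right = -1, bottom = -1;
        try
        {
            for (int y = 0; y < rect.Height; y++)
            {
                uint* p = (uint*)((byte*)pd.Scan0 + y * pd.Stride);
                uint* c = (uint*)((byte*)cd.Scan0 + y * cd.Stride);
                for (int x = 0; x < rect.Width; x++)
                {
                    if (p[x] == c[x]) continue;
                    if (x < left) left = x;
                    if (x > right) right = x;
                    if (y < top) top = y;
                    if (y > bottom) bottom = y;
                }
            }
        }
        finally
        {
            prev.UnlockBits(pd);
            curr.UnlockBits(cd);
        }
        return right < 0 ? Rectangle.Empty : Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
    }
}

The server can then Clone() the dirty rectangle out of the current frame, encode just that region, and send it together with its coordinates so the client knows where to paint it.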

I worked on a similar project a while back. This was my general approach:
Rasterized the captured bitmap into 32x32 tiles
To determine which tiles had changed between frames, I used unsafe code to compare them 64 bits at a time (see the sketch below)
On the set of delta tiles I applied one of the PNG filters to improve compressibility, and had the best results with the Paeth filter
Used DeflateStream to compress the filtered deltas
Used a BinaryMessageEncoding custom binding on the service to transmit the data in binary instead of the default Base64-encoded form
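A minimal sketch of that 64-bit tile comparison, assuming both frames are 32bpp, already locked with LockBits, and sized in multiples of the 32-pixel tile; the names are illustrative and the project must be compiled with /unsafe:

using System.Drawing.Imaging;

static class TileComparer
{
    const int TileSize = 32;
    const int BytesPerPixel = 4;

    // Compares one 32x32 tile of two 32bpp frames eight bytes (one ulong) at a time.
    public static unsafe bool TileChanged(BitmapData prev, BitmapData curr, int tileX, int tileY)
    {
        const int QwordsPerRow = TileSize * BytesPerPixel / sizeof(ulong); // 16
        for (int row = 0; row < TileSize; row++)
        {
            int y = tileY * TileSize + row;
            ulong* p = (ulong*)((byte*)prev.Scan0 + y * prev.Stride + tileX * TileSize * BytesPerPixel);
            ulong* c = (ulong*)((byte*)curr.Scan0 + y * curr.Stride + tileX * TileSize * BytesPerPixel);
            for (int i = 0; i < QwordsPerRow; i++)
                if (p[i] != c[i])
                    return true;
        }
        return false;
    }
}

Only the tiles for which TileChanged returns true need to be filtered, deflated, and sent.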
Some client-side considerations: when dealing with large amounts of data being transferred through a WCF service, I found that some parameters of the HttpTransportBindingElement and the XmlDictionaryReaderQuotas were set to pretty conservative values, so you will want to increase them.
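Since the question uses TCP, a code-based custom binding with those limits raised might look like this (the 16 MB figures are arbitrary placeholders; the same ReaderQuotas apply when using the HTTP transport):

using System.ServiceModel.Channels;

var encoding = new BinaryMessageEncodingBindingElement();
encoding.ReaderQuotas.MaxArrayLength = 16 * 1024 * 1024;   // allow large byte[] payloads
encoding.ReaderQuotas.MaxBytesPerRead = 16 * 1024 * 1024;

var transport = new TcpTransportBindingElement
{
    MaxReceivedMessageSize = 16 * 1024 * 1024
};

// Encoding element first, transport element last.
var binding = new CustomBinding(encoding, transport);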

Check out this: Large Data and Streaming (WCF)

The fastest way to send data between client and server is to send a byte array, or several byte arrays. That way WCF doesn't have to do any custom serialization on your data.
That said, you should use the new WPF/.NET 3.5 library to compress your images instead of the ones from System.Drawing. The functions in the System.Windows.Media.Imaging namespace are faster than the old ones and can still be used in WinForms.
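A minimal sketch of that approach, assuming your WinForms project references WindowsBase and PresentationCore so that the System.Windows.Media.Imaging types are available:

using System.IO;
using System.Windows.Media.Imaging;

static byte[] EncodeJpeg(BitmapSource frame, int quality)
{
    var encoder = new JpegBitmapEncoder { QualityLevel = quality };
    encoder.Frames.Add(BitmapFrame.Create(frame));
    using (var ms = new MemoryStream())
    {
        encoder.Save(ms);
        return ms.ToArray(); // the byte[] that WCF will carry
    }
}

An existing System.Drawing.Bitmap can be turned into a BitmapSource with System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap, though the HBITMAP it takes must be released with GDI's DeleteObject afterwards.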
To know whether compression is the way to go, you will have to benchmark your scenario: how does the compression/decompression time compare to transferring all the bytes uncompressed?
If you transfer the data over internet, then compression will help for sure. Between components on the same machine or on a LAN, the benefit might not be so obvious.
You could also try compressing the image, then chunking the data and sending the chunks asynchronously with a chunk id, which you reassemble on the client. TCP connections start slow and increase in bandwidth over time, so starting two or four at the same time should cut the total transfer time (all depending on how much data you are sending). Chunking the compressed image bytes is also simpler, logic-wise, than doing tiles in the actual images.
Summed up: System.Windows.Media.Imaging should help you both CPU- and bandwidth-wise compared to your current code. Memory-wise I would guess about the same.

Instead of capturing the entire image, just send smaller subsections of it. Meaning: starting in the upper left corner, send a 10x10 pixel image, then 'move' ten pixels and send the next 10px square, and so on. You can then send dozens of small images and update the painted full image on the client. If you've used RDC to view images on a remote machine, you've probably seen it do this sort of screen painting.
Using the smaller image sections you can then split up the deltas as well: if nothing has changed in the current section, you can safely skip it, inform the client that you're skipping it, and move on to the next section.
You'll definitely want to use compression when sending the images. However, you should check whether you get smaller sizes from general-purpose compression like GZip or from an image codec. I've never run a comparison, so I can't say for certain one way or the other.
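One quick way to run that comparison is a small GZipStream helper (a sketch using System.IO.Compression) to measure the compressed size of a raw section against the size of the same section re-encoded by an image codec:

using System.IO;
using System.IO.Compression;

static byte[] GZipCompress(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        using (var gz = new GZipStream(ms, CompressionMode.Compress))
            gz.Write(data, 0, data.Length);
        // The GZipStream must be disposed before reading ms,
        // so the compressed data is fully flushed.
        return ms.ToArray();
    }
}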

Your solution looks fine to me, but I suggest (as others did) that you use tiles and compress the traffic when possible. In addition, I think you should send the entire image once in a while, just to make sure the client's deltas have a common "base" (a keyframe, in effect).
Perhaps you can use an existing solution for streaming, such as RTP with H.263 for video streaming. It works great, it uses compression, and it's well documented and widely used. You could then skip the WCF part and go directly to the streaming part (either over TCP or over UDP). If your solution goes to production, the H.263 streaming approach may be better in terms of responsiveness and network usage.

Bitmap scrImg = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
using (Graphics scr = Graphics.FromImage(scrImg))
{
    // Copy the screen contents into the bitmap.
    scr.CopyFromScreen(new Point(0, 0), new Point(0, 0), Screen.PrimaryScreen.Bounds.Size);
}
testPictureBox.Image = scrImg;
I use this code to capture my screen.

Related

Generate a video stream from Basler camera grabbing loop

I am a beginner in video streaming and handling.
I need to take the stream from a Basler GiGE camera and display it in a web container.
I am using the Basler.Pylon C# API to access the camera and grab images one by one. From the IGrabResult object returned, I can access various parameters such as width, height, stride, and of course the byte buffer.
On my PC I can easily display that in an image window, but what do I need to do to display it in an ASP.NET web application?
EDIT
I am not looking for code so much as for guidelines; if someone could explain how video streaming works in general, that would work too.
Video streaming is quite specialised, and in general I would say that if you want to stream high-quality video over the internet to multiple end users, it's easiest to use a dedicated video streaming server rather than try to build one yourself.
Dedicated video streaming servers can be provided via a hosted service (e.g. Vimeo), be a commercial server you install and run (e.g. Wowza), or a freeware streaming server you install and run (e.g. GStreamer), so you have options there.
As a general rule, the streaming server will break your video into chunks and create multiple bit rate copies of your video. This allows a client to use Adaptive Bit Rate (ABR) streaming and download your video chunk by chunk, selecting the bit rate version for the next chunk depending on the current device and network conditions. HLS and MPEG-DASH are examples of ABR streaming protocols.
On a web page you then need an HTML5 player which understands the streaming protocol; again there are many examples, such as the freeware Shaka and dash.js players. Integrating these into a web page is very straightforward.
You can observe these in use on services like Netflix and YouTube which will often start at a lower bit rate to ensure a fast start and then 'step up' to higher bit rates until the optimal one for the current network conditions and device is reached. See here for some info on how you can see a graph of this when watching YouTube, for example:
https://stackoverflow.com/a/42365034/334402
Having said all the above, it is worth noting that your case seems to involve a stream of still images. Although all video is really a stream of still images under the covers, it may be that your image changes infrequently, so you may not need the techniques above; a lot of video streaming technology exists to handle the very large amount of data involved in streaming 30 or 60 high-quality frames per second from a server to a client.
If your stream of images, for example, was one every 30 seconds, then it may, as Nisus says, be just as easy to simply display the images in your web page and have the web page or app poll the server every 30 seconds (using ASP.NET AJAX in your case) to download the new image.
You have at least two options. The first is to produce a series of JPEG images every few seconds and show them one by one on the client using an <img> tag and simple JavaScript code. The second is to generate and stream MP4 video and show it on the client with a Windows Media Player COM control or an HTML5 <video> element.

how to compress Image List before downloading to client in silverlight application

In my Silverlight application there are thousands of JPG, BMP, and other-format images in a database (on the server side). Based on various conditions, a list of selected images (sometimes more than a thousand) should be transferred to the client and displayed to the end user.
To improve the process we use a paging method, so whenever the user clicks the 'Next Page' button, we get the next page of images from the server side.
I'm trying to improve the process further with something like image compression on the server side and decompression on the client (in the Silverlight application).
Is there a client-server compression algorithm, tool, or any other method to do this?
You'll get the biggest bandwidth savings by converting all images to a smaller size, a lower resolution, and the JPEG format with a higher compression level (lower quality).
Find out what size/resolution/compression level is just acceptable for the end-user.
If you send a JPEG over the wire, the decompression is done automatically by the JPEG renderer when displaying the image on the client side.
The library functions in .NET should allow you to do all of that.
See Set JPEG Compression Level (MSDN) as an example.
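Along the lines of that MSDN example, a minimal sketch of saving a JPEG with an explicit quality level using System.Drawing:

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

static void SaveJpeg(Image image, string path, long quality) // quality: 0-100
{
    // Find the GDI+ JPEG encoder.
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    using (var parameters = new EncoderParameters(1))
    {
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
        image.Save(path, jpegCodec, parameters);
    }
}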

How to reduce Base64 size?

I am new to web service development. I am working on a web service where I need to upload an image to the server; it will be called from an Android application.
My problem is that when the user selects an image on the Android device and clicks the upload button, a string is generated using the Base64 class, and its length is more than 3000 characters.
So I need to reduce the size of the Base64 string generated for the image.
Base-64 is, by definition, a fixed-size representation; n bytes of binary will always be the same number of base-64 characters (4 for every 3 bytes, rounded up). The most you can do is remove the final chunk's padding, which will save you a whopping 2 characters maximum.
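The overhead is easy to compute (a quick illustrative helper):

// 4 output characters for every 3 input bytes, rounded up to a whole block.
static int Base64Length(int byteCount)
{
    return 4 * ((byteCount + 2) / 3);
}
// e.g. 2250 bytes of image data encode to exactly 3000 base-64 characters.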
If you want your payload to be shorter, you could:
use a higher base (although you'll need to write the encode/decode manually, and you'll still only be able to get to about base-90 or something, so you won't drastically reduce the size)
use a pure binary post body rather than base-anything (this is effectively base-256)
have less data
by having smaller images
or by using compression (note: many image formats are already compressed, so this often won't help)
use multiple requests
Generally speaking, to reduce data size without losing any information you'll need lossless compression. A good starting point there might be Android's built-in zip classes, but you'll still need encoding across the wire.
If it's a captured image, however, changing the parameters of the JPEG compression (or the original resolution) will prove far more useful, as the additional compression you'll get on JPEG data that is then base-64'd is likely to be very low.
You need to reduce your actual image size first to be able to reduce your base64 size.

Image resizing on the fly in asp.net

For simplicity lets say that I have a web page that needs to display thumbnails of images. The images locations are stored in a database(the images are stored on Amazon S3). Is it possible to have my web server scale down the large image before it is delivered to the client? This way I don't have to store thumbnails of every image and the client can download a smaller file.
Every tutorial on this topic over-simplifies the situation and nearly all of them leak memory. It's a long read, but you should know about the 29 image resizing pitfalls so you can avoid them.
I wrote a library to do server-side dynamic image resizing safely. It's not something that can be done properly in 1 tutorial or even 10. You can solve 80% of the bugs, but not 100%. And when you're doing something this resource-intensive, you can't tolerate bugs or memory leaks.
The core library is free and open-source, but the Amazon S3 plugin is part of the Performance edition, which has a $249 license fee. The Performance Edition comes with source, examples, and documentation for S3, MS SQL, Azure, MongoDB GridFS, and CloudFront integration, as well as terabyte-scale disk caching and memcaching.
From the statistics I have access to, it appears that imageresizing.net is the most widely-used library of its kind. It runs at least 5 social networks and is used with image collections as large as 20TB. Most large sites use the S3 plugin, as local storage (or even a SAN) isn't very scalable.
Sure, no problem. There are plenty of resources on the web that show how to dish up an image from a database, so I won't duplicate that here.
Once you've loaded the image, you can easily shrink it using .NET. There is an example at the following URL. It doesn't do exactly what you are doing, but it does generate thumbnails of an image.
http://blackbeltcoder.com/Articles/graphics/creating-website-thumbnails-in-asp-net
You can achieve this using the WebImage class found in System.Web.Helpers.
You can use this great kit to output resized images on the fly.
Sample code:
public void GetPhotoThumbnail(int realtyId, int width, int height)
{
    // Load photo info from the database for the specified realty...
    var photos = DocumentSession.Query<File>().Where(f => f.RealtyId == realtyId);
    if (photos.Any())
    {
        var photo = photos.First();
        new WebImage(photo.Path)
            .Resize(width, height, false, true) // resize to the requested size on the fly
            .Crop(1, 1) // crop to remove the 1px border at the top and left sides (bug in WebImage)
            .Write();
        return; // don't fall through to the placeholder image below
    }
    // Load a default photo for realties that don't have one
    new WebImage(HostingEnvironment.MapPath(@"~/Content/images/no-photo100x100.png")).Write();
}
More about it here: Resize image on the fly with ASP.NET MVC
Here's a great tutorial that shows how to work with WebImage directly from the ASP.NET site:
Working with Images in an ASP.NET Web Pages (Razor) Site
Yes.
You make an ASP.NET page that does Response.Clear(), sets the Content-Type header on the Response, and sends the binary data of the image (also through the Response). The image can be resized on the fly, but I'd recommend caching it for some time on disk or so. Then you reference the image from HTML as <img src="http://server/yourimagepage.aspx">. For storing the image in memory before sending, you can use a MemoryStream.
I have sample code but not in front of me right now, sorry. :)
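In outline, such a page could look like the following sketch, written here as an IHttpHandler (.ashx) rather than a full page; the source path and the fixed 25% scale are placeholders:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web;

public class ThumbnailHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.Clear();
        context.Response.ContentType = "image/jpeg";
        using (var original = Image.FromFile(context.Server.MapPath("~/images/sample.jpg")))
        using (var thumb = new Bitmap(original, original.Width / 4, original.Height / 4))
        using (var ms = new MemoryStream())
        {
            // GDI+ prefers a seekable stream, so encode to memory first,
            // then copy the bytes to the response.
            thumb.Save(ms, ImageFormat.Jpeg);
            ms.WriteTo(context.Response.OutputStream);
        }
    }
}

The handler would then be referenced as <img src="ThumbnailHandler.ashx">, and as suggested above, the encoded bytes could be cached on disk instead of being regenerated on every request.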

high compress Image

I'm trying to do a "remote desktop viewer".
For this I need to send the user's desktop, and that's a lot of information for sockets... (especially if the resolution is high; the data can approach 5.3 MB at 1680x1050).
So I started compressing with a GZIP stream and the 5.3 MB became 500 KB. Then I added my own compression algorithm (I think it's called RLE): quantizing to 256 >> 3 = 32 levels for each of red, green, and blue, and writing how many pixels in a row share the same color, followed by GZIP.
That brought the average down to 60-65 KB (up to 200 KB, and under 5 KB if the screen is totally white).
Now I'm thinking (but haven't implemented it yet) about sending only the difference between frames: for each line, writing where the difference between the pixels starts and how long it runs.
That could help; maybe I could get to 30 KB per frame on average, but for sockets it's still a lot.
Has anyone ever dealt with this problem successfully? (And how, of course...)
There are standard algorithms for compressing images: e.g. JPEG.
A further optimization is to know something about the image: for example, on a desktop, items like the Windows Start button, various application icons, and widgets on the title bar are standard, so instead of sending their pixel values, you can send their logical identifiers.
Yes, people have succeeded with this problem: the people who write remote desktop software, including the open-source VNC.
You may wish to review the source code of a VNC.
Most VNC servers implement several different forms of compression.
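For reference, the quantize-then-run-length scheme described in the question might look roughly like this sketch (illustrative only, operating on packed 24bpp RGB bytes; the output is meant to be fed through GZipStream afterwards):

using System.Collections.Generic;

static class QuantizedRle
{
    // Quantizes each channel to 32 levels (value >> 3) and emits
    // (runLength, r, g, b) records for runs of identical quantized color.
    public static byte[] Encode(byte[] rgb)
    {
        var output = new List<byte>();
        int i = 0;
        while (i + 2 < rgb.Length)
        {
            byte r = (byte)(rgb[i] >> 3), g = (byte)(rgb[i + 1] >> 3), b = (byte)(rgb[i + 2] >> 3);
            int run = 1;
            while (run < 255 && i + run * 3 + 2 < rgb.Length
                   && (byte)(rgb[i + run * 3] >> 3) == r
                   && (byte)(rgb[i + run * 3 + 1] >> 3) == g
                   && (byte)(rgb[i + run * 3 + 2] >> 3) == b)
            {
                run++;
            }
            output.Add((byte)run);
            output.Add(r);
            output.Add(g);
            output.Add(b);
            i += run * 3;
        }
        return output.ToArray();
    }
}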
