Image resizing on the fly in ASP.NET - C#

For simplicity, let's say that I have a web page that needs to display thumbnails of images. The image locations are stored in a database (the images themselves are stored on Amazon S3). Is it possible to have my web server scale down the large image before it is delivered to the client? This way I don't have to store thumbnails of every image, and the client can download a smaller file.

Every tutorial on this topic over-simplifies the situation and nearly all of them leak memory. It's a long read, but you should know about the 29 image resizing pitfalls so you can avoid them.
I wrote a library to do server-side dynamic image resizing safely. It's not something that can be done properly in 1 tutorial or even 10. You can solve 80% of the bugs, but not 100%. And when you're doing something this resource-intensive, you can't tolerate bugs or memory leaks.
The core library is free and open-source, but the Amazon S3 plugin is part of the Performance edition, which has a $249 license fee. The Performance Edition comes with source, examples, and documentation for S3, MS SQL, Azure, MongoDB GridFS, and CloudFront integration, as well as terabyte-scale disk caching and memcaching.
From the statistics I have access to, it appears that imageresizing.net is the most widely-used library of its kind. It runs at least 5 social networks and is used with image collections as large as 20TB. Most large sites use the S3 plugin, as local storage (or even a SAN) isn't very scalable.

Sure, no problem. There are plenty of resources on the web that show how to serve up an image from a database, so I won't duplicate that here.
Once you've loaded the image, you can easily shrink it using .NET. There is an example at the following URL. It doesn't do exactly what you're asking for, but it does show how to generate thumbnails of an image.
http://blackbeltcoder.com/Articles/graphics/creating-website-thumbnails-in-asp-net
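For reference, the core shrinking step looks roughly like this. This is a minimal sketch assuming System.Drawing; the method name and target sizes are just examples, and caching/error handling are omitted:

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

// Sketch: scale a source image down to fit inside maxWidth x maxHeight,
// preserving the aspect ratio.
public static Bitmap CreateThumbnail(Image source, int maxWidth, int maxHeight)
{
    double scale = Math.Min((double)maxWidth / source.Width,
                            (double)maxHeight / source.Height);
    int width = (int)(source.Width * scale);
    int height = (int)(source.Height * scale);

    var thumbnail = new Bitmap(width, height);
    using (var g = Graphics.FromImage(thumbnail))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, width, height);
    }
    return thumbnail;
}

Remember to dispose both the source and the thumbnail once you've written the result to the response, or you will leak GDI+ handles under load.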

Using the WebImage class that comes in System.Web.Helpers, you can achieve this.
You can use this great kit to output resized images on the fly.
Sample code:
public void GetPhotoThumbnail(int realtyId, int width, int height)
{
    // Load photo info from the database for the specified Realty...
    var photos = DocumentSession.Query<File>().Where(f => f.RealtyId == realtyId);
    if (photos.Any())
    {
        var photo = photos.First();
        new WebImage(photo.Path)
            .Resize(width, height, false, true) // Resize the image on the fly...
            .Crop(1, 1) // Crop it to remove the 1px border at the top and left sides (bug in WebImage)
            .Write();
        return; // Don't fall through to the default photo once a real one has been written
    }
    // Load a default photo for realties that don't have a photo
    new WebImage(HostingEnvironment.MapPath(@"~/Content/images/no-photo100x100.png")).Write();
}
More about it here: Resize image on the fly with ASP.NET MVC
Here's a great tutorial that shows how to work with WebImage directly from the ASP.NET site:
Working with Images in an ASP.NET Web Pages (Razor) Site

Yes.
You make an ASP.NET page that calls Response.Clear(), sets the Content-Type header on the Response, and sends the binary data of the image (also through Response). The image can be resized on the fly, but I'd recommend caching it for some time on disk or so. Then you reference the image from HTML as <img src="http://server/yourimagepage.aspx">. For holding the image in memory before sending, you can use a MemoryStream.
I have sample code but not in front of me right now, sorry. :)
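In that spirit, here is a minimal sketch of the same idea as an IHttpHandler (an .aspx code-behind works the same way). LoadAndResizeImage is a placeholder for your own database lookup and resize logic:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.Clear();
        context.Response.ContentType = "image/jpeg";

        using (Bitmap image = LoadAndResizeImage(context.Request.QueryString["id"]))
        using (var ms = new MemoryStream())
        {
            image.Save(ms, ImageFormat.Jpeg);
            context.Response.BinaryWrite(ms.ToArray());
        }
    }

    public bool IsReusable { get { return true; } }

    private Bitmap LoadAndResizeImage(string id)
    {
        // Placeholder: load the source from the database/S3 and scale it down here.
        throw new System.NotImplementedException();
    }
}

You would then reference it as <img src="http://server/imagehandler.ashx?id=...">.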

Related

AWS S3 files: how to properly store and compress images

I am building a mobile app similar to Instagram in terms of working with images. I am using Amazon S3 to store the images and MySQL for the file paths. I'm trying to ensure users upload good-quality pictures while also keeping the file size reasonable. Should I compress the images? Does anyone have an idea of what an acceptable size for an image is?
The definition of "acceptable" is totally up to you!
You should certainly store a high-quality image (probably at the original resolution), but you would also want to have smaller images for thumbnails and web/app viewing. This will make it faster to serve images and will reduce bandwidth costs.
A common technique is to have Amazon S3 trigger an AWS Lambda function when a new image is uploaded. The Lambda function can then resize the image into multiple sizes. Later, when your app wishes to retrieve an image, it can point to a resized image rather than the original.
When resizing images, you can also consider lowering the image quality. This allows JPG files to shrink in size without needing to reduce their resolution.
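To illustrate the Lambda approach, here is a hedged C# sketch. The class name, the thumbnails/ prefix, and the 200px width are made up, and it assumes the AWSSDK.S3, Amazon.Lambda.S3Events, and SixLabors.ImageSharp NuGet packages; treat it as an outline, not a drop-in function:

using System.IO;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public class ThumbnailFunction
{
    private static readonly IAmazonS3 S3 = new AmazonS3Client();

    // Triggered by S3 "object created" events; writes a 200px-wide copy
    // under a separate prefix so the function doesn't re-trigger itself.
    public async Task Handler(S3Event evt, ILambdaContext context)
    {
        foreach (var record in evt.Records)
        {
            string bucket = record.S3.Bucket.Name;
            string key = record.S3.Object.Key;

            using (var response = await S3.GetObjectAsync(bucket, key))
            using (var image = Image.Load(response.ResponseStream))
            using (var output = new MemoryStream())
            {
                // A height of 0 tells ImageSharp to preserve the aspect ratio.
                image.Mutate(x => x.Resize(200, 0));
                image.SaveAsJpeg(output);
                output.Position = 0;

                await S3.PutObjectAsync(new Amazon.S3.Model.PutObjectRequest
                {
                    BucketName = bucket,
                    Key = "thumbnails/" + key,
                    InputStream = output
                });
            }
        }
    }
}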

Loading external images efficiently

I would like to know the most efficient way to load external images on my website.
For example:
My website's url is "www.mydomain.com". The external image is http://www.myimagedomain.com/image.jpg.
The most common way is to write a simple html image-tag like
<img src="http://www.myimagedomain.com/image.jpg" />.
The problem arises if the requested image is very large (8000x6000 pixels) but I only want to show it as a thumbnail/preview of about 200x200 pixels, e.g. on mobile devices.
Based on this, I wrote a little ashx (C#) handler that downloads the requested image and resizes it to the given width/height parameters, like this:
<img src="http://www.mydomain.com/img.ashx?imageUrl=http://www.myimagedomain.com/image.jpg&w=200&h=200" />
Now there is another problem: the HTTP handler always downloads the requested image on the fly.
My new approach is to generate a Base64 string from the resized image and save it in a database once?!
Would you recommend this, or is there another way to eliminate the repeated-download problem?
Maybe someone knows how Google Image Search avoids this problem?
I don't want to save the external images on my own server...
I would suggest using the ImageResizer library; it solves much of what you need in an efficient way, caching included:
http://www.nuget.org/packages/ImageResizer/
I think Google caches image thumbnails on its own servers for search.
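For reference, with the full ImageResizer pipeline installed the resize is usually done via URL (e.g. /image.jpg?width=200&height=200) with disk caching handled for you, but the programmatic API looks roughly like this (the paths are examples):

using ImageResizer;

// One-off resize: read a source file, write a JPEG constrained to 200x200.
ImageBuilder.Current.Build(
    @"C:\images\source.jpg",
    @"C:\images\thumb.jpg",
    new ResizeSettings("maxwidth=200&maxheight=200"));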

Resizing / caching JPEG images on Azure

I have a website written in .NET 4.5 / MVC 4 that allows users to upload images, among other things. These images can be displayed throughout the site in a variety of sizes. Currently, the way this works is as follows:
The image is uploaded and re-sized in memory to a max width of 640px (the largest the site will display).
The resized image is saved to disk at /assets/photos/source/{id}-{timestamphash}.jpg.
When a request for the image in various sizes comes through, I get the filename by combining {id}-{hash} where {hash} is the hash of a combination of ids, height, width and some other information I need to get the image.
If that image exists in /assets/photos/cache, I simply return it, otherwise I create it in memory using the source image and then save it to the cache directory.
I like this approach because it happens quickly and it all happens in-memory or via disk retrieval.
I'd like to eventually move my site to Azure. How would a workflow like this work in Azure, given that all of my images would be stored as blobs? Is it still efficient to use a resizing/caching strategy like this, or are there better alternatives? Wouldn't you incur network latency as the image is uploaded to Azure from the web server, whereas today it just gets saved to local disk, which is obviously a lot faster?
Just looking for some direction on how to migrate a workflow like this to something workable and scalable with Azure.
Given your comment above, why not create a background task that resizes the upload to all acceptable sizes on upload, storing each one in Azure blob storage? You are correct that if you resize on request, you would suffer some latency and lag, since you would need to download the source image, resize it, upload it to blob storage, and then redirect the user to the blob storage URL. Given the 'cheapness' of blob storage, I would submit that paying a few dimes more for extra storage outweighs the potential slowness of the scenario above.
Pseudo Code Below:
[HttpPost]
public ActionResult FileUpload(HttpPostedFileBase file)
{
    if (ValidateFile(file))
    {
        // Fire off a background task that resizes and stores each file size in Azure
        // blob storage. You could use a naming scheme such as id_size.imageTypeExtension
    }
    return View(); // pseudo code: return whatever fits your flow
}
Now, when asked for a file, you could still use your same routine, but instead of returning a file result, you would return a redirect result
public ActionResult GetImage(string hash)
{
    // Do stuff to get the image details
    return Redirect("http://yourAzureBlobStorageDomain.com/Assets/Images/Cache/" + imageDetails);
}
This is cool because you don't need to download the image to your web server and then serve it; you simply redirect the request directly to blob storage. This has the effect that an image tag such as
<img src='@Url.Action("GetImage", "Images", new { hash = hash })' />
would hit your web application, forcing a redirect to the actual image location in blob storage.
You are correct that you do not want to store anything on the Azure web role permanently as the web roles can be moved around at any time, losing any locally stored data.
This is just a simple way to keep your code base roughly the way it is now, with minimal changes. You could modify it to behave more like what you have today: query blob storage to check whether an image exists; if it does, redirect; if it does not, generate, store, and redirect. But I believe you would have more latency issues at that point, since you would need to download the source image, do your stuff, and then re-upload the image before instructing the user's browser where to find it.
However, it is up to you to decide whether the extra time taken to resize on demand is worth it versus the cost of storing multiple sizes of each image. As a side note, we have not noticed a significant latency issue when using blob storage from our web/worker roles. It is obviously higher than retrieval from disk, but it has not posed a significant increase that we have been able to see.
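A hedged sketch of the background task's body using the era-appropriate Microsoft.WindowsAzure.Storage client. The container name and naming scheme are made up, and ResizeToWidth stands in for your existing in-memory resize code:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Sketch: upload one pre-resized image stream as {id}_{size}.jpg.
public void StoreResizedImage(Stream sourceImage, string id, int size)
{
    CloudStorageAccount account =
        CloudStorageAccount.Parse("<your connection string>");
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("images-cache");
    container.CreateIfNotExists();

    CloudBlockBlob blob =
        container.GetBlockBlobReference(string.Format("{0}_{1}.jpg", id, size));
    blob.Properties.ContentType = "image/jpeg";

    using (Stream resized = ResizeToWidth(sourceImage, size))
    {
        blob.UploadFromStream(resized);
    }
}

private Stream ResizeToWidth(Stream source, int width)
{
    // Placeholder: your existing resize code goes here.
    throw new System.NotImplementedException();
}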

Efficient way to send images via WCF?

I am learning WCF, LINQ and a few other technologies by writing, from scratch, a custom remote control application like VNC. I am creating it with three main goals in mind:
The server will provide 'remote control' on an application level (i.e. seamless windows) instead of full desktop access.
The client can select any number of applications that are running on the server and receive a stream of images of each of them.
A client can connect to more than one server simultaneously.
Right now I am using WCF to send an array of Bytes that represents the window being sent:
using (var bitmap = window.GetBitmap())
using (var ms = new MemoryStream())
{
    // Dispose the bitmap as well, or it will leak GDI+ handles under load.
    bitmap.Save(ms, ImageFormat.Jpeg);
    frame.Snapshot = ms.ToArray();
}
GetBitmap implementation:
var wRectangle = GetRectangle();
var image = new Bitmap(wRectangle.Width, wRectangle.Height);
using (var gfx = Graphics.FromImage(image))
{
    gfx.CopyFromScreen(wRectangle.Left, wRectangle.Top, 0, 0, wRectangle.Size, CopyPixelOperation.SourceCopy);
}
return image;
It is then sent via WCF (TCPBinding and it will always be over LAN) to the client and reconstructed in a blank windows form with no border like this:
using (var ms = new MemoryStream(_currentFrame.Snapshot))
{
BackgroundImage = Image.FromStream(ms);
}
I would like to make this process as efficient as possible in both CPU and memory usage with bandwidth coming in third place. I am aiming to have the client connect to 5+ servers with 10+ applications per server.
Is my existing method the best approach (while continuing to use these technologies) and is there anything I can do to improve it?
Ideas that I am looking into (but I have no experience with):
Using an open source graphics library to capture and save the images instead of .Net solution.
Saving as PNG or another image type rather than JPG.
Send image deltas instead of a full image every time.
Try and 'record' the windows and create a compressed video stream instead of picture snapshots (mpeg?).
You should be aware of these points:
Transport: TCP/binary message encoding will be the fastest way to transfer your image data.
Image capture: you can rely on P/Invoke to access the screen data, as this can be faster and lighter on memory than the managed approach. Some examples: Capturing the Screen Image in C# [P/Invoke], How to take a screen shot using .NET [Managed] and Capturing screenshots using C# (Managed)
You should reduce your image data before sending it:
choose your image format wisely, as some formats have native compression (such as JPG)
an example would be Find differences between images C#
when sending only a diff image, you can crop it and send just the non-empty areas
Try to inspect your WCF messages. This will help you understand how the messages are formatted and identify ways to make them smaller.
Only after passing through all these steps and being satisfied with your final code, you can download the VncSharp source code. It implements the RFB Protocol (Wikipedia entry), "a simple protocol for remote access to graphical user interfaces. Because it works at the framebuffer level it is applicable to all windowing systems and applications, including X11, Windows and Macintosh. RFB is the protocol used in VNC (Virtual Network Computing)."
I worked on a similar project a while back. This was my general approach:
Rasterized the captured bitmap to tiles of 32x32
To determine which tiles had changed between frames I used unsafe code to compare them 64 bits at a time (see the sketch after this list)
On the set of delta tiles I applied one of the PNG filters to improve compressibility and had the best results with the Paeth filter
Used DeflateStream to compress the filtered deltas
Used a BinaryMessageEncoding custom binding on the service to transmit the data in binary instead of the default Base64-encoded form
Some client-side considerations: when dealing with large amounts of data being transferred through a WCF service, I found that some parameters of the HttpTransportBinding and the XmlDictionaryReaderQuotas were set to pretty conservative values, so you will want to increase them.
Check out this: Large Data and Streaming (WCF)
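To make the tile-comparison step concrete, here is a hedged sketch of comparing one 32x32 tile across two frames, 64 bits (two 32bpp pixels) at a time. It assumes both frames are the same size in 32bpp ARGB format, and requires compiling with /unsafe:

using System.Drawing;
using System.Drawing.Imaging;

// Returns true if the given 32x32 tile changed between frames.
static unsafe bool TileChanged(Bitmap previous, Bitmap current, int tileX, int tileY)
{
    var rect = new Rectangle(tileX * 32, tileY * 32, 32, 32);
    BitmapData prev = previous.LockBits(rect, ImageLockMode.ReadOnly,
                                        PixelFormat.Format32bppArgb);
    BitmapData curr = current.LockBits(rect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < 32; y++)
        {
            ulong* p = (ulong*)((byte*)prev.Scan0 + y * prev.Stride);
            ulong* c = (ulong*)((byte*)curr.Scan0 + y * curr.Stride);
            for (int x = 0; x < 16; x++)  // 16 ulongs = 32 pixels per row
            {
                if (p[x] != c[x]) return true;
            }
        }
        return false;
    }
    finally
    {
        previous.UnlockBits(prev);
        current.UnlockBits(curr);
    }
}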
The fastest way to send data between client and server is to send a byte array, or several byte arrays. That way WCF doesn't have to do any custom serialization on your data.
That said, you should use the newer WPF/.NET 3.5 imaging classes to compress your images instead of the ones from System.Drawing. The encoders in the System.Windows.Media.Imaging namespace are faster than the old ones and can still be used in WinForms (a small example follows at the end of this answer).
In order to know if compression is the way to go you will have to benchmark your scenario to know how the compression/decompression time compares to transferring all the bytes uncompressed.
If you transfer the data over internet, then compression will help for sure. Between components on the same machine or on a LAN, the benefit might not be so obvious.
You could also try compressing the image, then chunking the data and sending it asynchronously with a chunk id that you puzzle together on the client. TCP connections start slow and increase in bandwidth over time, so starting two or four at the same time should cut the total transfer time (all depending on how much data you are sending). Chunking the compressed image bytes is also easier logic-wise than doing tiles in the actual images.
Summed up: System.Windows.Media.Imaging should help you both CPU- and bandwidth-wise compared to your current code. Memory-wise, I would guess about the same.
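To illustrate, a minimal sketch of JPEG encoding with those classes (requires references to PresentationCore and WindowsBase; the quality value is an example). Converting an existing System.Drawing.Bitmap to a BitmapSource takes an interop call such as System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap.

using System.IO;
using System.Windows.Media.Imaging;

// Encode a BitmapSource (e.g. a captured frame) to JPEG bytes using the
// System.Windows.Media.Imaging codecs mentioned above.
static byte[] EncodeJpeg(BitmapSource frame, int quality)
{
    var encoder = new JpegBitmapEncoder { QualityLevel = quality };
    encoder.Frames.Add(BitmapFrame.Create(frame));
    using (var ms = new MemoryStream())
    {
        encoder.Save(ms);
        return ms.ToArray();
    }
}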
Instead of capturing the entire image, just send smaller subsections of it. Meaning: starting in the upper-left corner, send a 10x10-pixel image, then 'move' ten pixels and send the next 10px square, and so on. You can then send dozens of small images and update the painted full image on the client. If you've used RDC to view images on a remote machine, you've probably seen it do this sort of screen painting.
Using the smaller image sections you can then split up the deltas as well, so if nothing has changed in the current section you can safely skip it, inform the client that you're skipping it, and then move on to the next section.
You'll definitely want to use compression for sending the images. However, you should check whether you get smaller sizes from gzip-style compression or from an image codec. I've never run a comparison, so I can't say for certain one way or the other.
Your solution looks fine to me, but I suggest (as others did) that you use tiles and compress the traffic when possible. In addition, I think you should send the entire image once in a while, just to be sure the client's deltas have a common "base".
Perhaps you can use an existing solution for streaming, such as RTP-H263 for video streaming. It works great, it uses compression, and it's well documented and widely used. You can then skip the WCF part and go directly to the streaming part (either over TCP or over UDP). If your solution should go to production, perhaps the H263 streaming approach would be better in terms of responsiveness and network usage.
Bitmap scrImg = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
using (Graphics scr = Graphics.FromImage(scrImg))
{
    // Draw the screen contents into the bitmap; the Graphics must be
    // created from the bitmap and disposed afterwards.
    scr.CopyFromScreen(new Point(0, 0), new Point(0, 0), Screen.PrimaryScreen.Bounds.Size);
}
testPictureBox.Image = (Image)scrImg;
I use this code to capture my screen.

Dynamically reducing image dimension as well as image size in C#

I have an image gallery that is created using a repeater control. The repeater is bound in my code-behind file to a table that contains various image paths.
The images in my repeater are populated like this
<img src='<%# Eval("PicturePath") %>' height='200px' width='150px' />
(or something along those lines, I don't recall the exact syntax)
The problem is that sometimes the images themselves are massive, so the load times are a little ridiculous. Populating a 150x200px image definitely should not require a 3MB file.
Is there a way I can not only change the image dimensions, but shrink the file size down as well?
Thanks!
I would recommend creating a handler that can resize images for you on the fly and encode them in whatever format you like, kind of like a thumbnail generator. This will cost CPU on the server, but you can cache the images and severely reduce bandwidth costs, etc. Let me see if I can find the link to a good article I read on something similar.
You can look at this article; it isn't the one I had read, but it has some info about how you can go about implementing this.
You're looking for the GetThumbnailImage method of the Image class. You will either want to generate the thumbnail images ahead of time, or create each one the first time it is accessed and save it to disk for later use (so the first access would be slow but subsequent requests would be quick).
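A minimal sketch of that method in use (the paths and sizes are examples; note that GetThumbnailImage may return the low-resolution thumbnail embedded in the file's EXIF data, so quality can vary):

using System;
using System.Drawing;
using System.Drawing.Imaging;

static void SaveThumbnail(string sourcePath, string thumbPath)
{
    using (var source = Image.FromFile(sourcePath))
    {
        // The abort callback is required by the API but is never invoked
        // by GDI+, so a stub that returns false is conventional.
        Image.GetThumbnailImageAbort stub = () => false;
        using (var thumb = source.GetThumbnailImage(150, 200, stub, IntPtr.Zero))
        {
            thumb.Save(thumbPath, ImageFormat.Jpeg);
        }
    }
}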
You could try either of these 2 projects on CodePlex.com, both offer dynamic image generation with caching.
Dynamic Image Process
ASP.NET Image Generation
The latter is straight from Microsoft.
