What is the correct way to get thumbnails of images in C#? There must be some built-in method for that, but I seem to be unable to find it anywhere.
Right now I'm using a workaround, but it seems to be much heavier on the computing side: generating thumbnails for 50 images with parallel processing takes about 1–1.5 seconds, and during that time my CPU is at 100% load. Not to mention that it produces quite a lot of garbage, which later needs to be collected.
This is what my class currently looks like:
public class ImageData
{
    public const int THUMBNAIL_SIZE = 160;

    public string path;
    private Image _thumbnail;

    public string imageName { get { return Path.GetFileNameWithoutExtension(path); } }
    public string folder { get { return Path.GetDirectoryName(path); } }

    public Image image
    {
        get
        {
            try
            {
                // Copy the file into a MemoryStream so the FileStream can be
                // closed; GDI+ requires the stream backing a Bitmap to stay
                // open for the lifetime of the image.
                using (FileStream stream = new FileStream(path, FileMode.Open, FileAccess.Read))
                using (BinaryReader reader = new BinaryReader(stream))
                {
                    var memoryStream = new MemoryStream(reader.ReadBytes((int)stream.Length));
                    return new Bitmap(memoryStream);
                }
            }
            catch (Exception) { }
            return null;
        }
    }

    public Image thumbnail
    {
        get
        {
            if (_thumbnail == null)
                LoadThumbnail();
            return _thumbnail;
        }
    }

    public void LoadThumbnail()
    {
        if (_thumbnail != null) return;
        Image img = image;
        if (img == null) return;
        // Use the already-loaded img here; reading the image property again
        // would reload the file from disk on every access.
        float ratio = (float)img.Width / (float)img.Height;
        int h = THUMBNAIL_SIZE;
        int w = THUMBNAIL_SIZE;
        if (ratio > 1)
            h = (int)(THUMBNAIL_SIZE / ratio);
        else
            w = (int)(THUMBNAIL_SIZE * ratio);
        _thumbnail = new Bitmap(img, w, h);
    }
}
I cache the thumbnail once it is generated to save computing time later on. Meanwhile, I have an array of 50 PictureBox controls into which I inject the thumbnails.
Anyway, when I open a folder containing images, my PC certainly doesn't use 100% CPU for the thumbnails, so I am wondering what the correct method to generate them is.
Windows pregenerates thumbnails and stores them in the (hidden) thumbs.db file for later use.
So unless you either access the thumbs.db file (and are fine with relying on it being available) or cache the thumbnails yourself somewhere, you will always have to render them in some way or another.
That being said, you can probably rely on whatever UI framework you are using to display them scaled down, seeing as you load them into memory anyway.
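If you want to reuse the thumbnails Windows has already generated, one option is the Windows API Code Pack, which exposes the shell's thumbnail machinery. A minimal sketch, assuming the Microsoft.WindowsAPICodePack-Shell NuGet package is installed (the path is a placeholder):

```csharp
using System.Drawing;
using Microsoft.WindowsAPICodePack.Shell;

class ShellThumbnailExample
{
    static Bitmap GetShellThumbnail(string path)
    {
        // Ask the shell for the same thumbnail Explorer shows; this uses
        // the system thumbnail cache when a cached copy exists.
        using (ShellFile shellFile = ShellFile.FromFilePath(path))
        {
            return shellFile.Thumbnail.Bitmap;
        }
    }
}
```

This offloads the decode-and-resize work to the shell, which is why Explorer does not peg the CPU the way a hand-rolled resize loop does.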
I have a console application written in C# on top of .NET Core 2.2.
I want to create an asynchronous Task that writes a full-size image to storage. Additionally, the process needs to create a thumbnail and write it to the default storage as well.
Below is the method that implements this logic. I documented each line to explain what I believe is happening.
// This method accepts a FileObject and returns a task
// The generated task will write the file as-is to the default storage
// Then it'll create a thumbnail of that image and store it in the default storage
public async Task ProcessImage(FileObject file, int thumbnailWidth = 250)
{
    // The name of the full-size image
    string filename = string.Format("{0}{1}", file.ObjectId, file.Extension);
    // The name along with the exact path to the full-size image
    string path = Path.Combine(file.ContentId, filename);
    // Write the full-size image to the storage
    await Storage.CreateAsync(file.Content, path)
        .ContinueWith(task =>
        {
            // Reset the stream to the beginning since this will be the second time the stream is read
            file.Content.Seek(0, SeekOrigin.Begin);
            // Create original Image
            Image image = Image.FromStream(file.Content);
            // Calculate the height of the new thumbnail
            int height = (thumbnailWidth * image.Height) / image.Width;
            // Create the new thumbnail
            Image thumb = image.GetThumbnailImage(thumbnailWidth, height, null, IntPtr.Zero);
            using (MemoryStream thumbnailStream = new MemoryStream())
            {
                // Save the thumbnail to the memory stream
                thumb.Save(thumbnailStream, image.RawFormat);
                // The name of the new thumbnail
                string thumbnailFilename = string.Format("thumbnail_{0}", filename);
                // The name along with the exact path to the thumbnail
                string thumbnailPath = Path.Combine(file.ContentId, thumbnailFilename);
                // Write the thumbnail to storage
                Storage.CreateAsync(thumbnailStream, thumbnailPath);
            }
            // Dispose the file object to ensure the Stream is disposed
            file.Dispose();
            image.Dispose();
            thumb.Dispose();
        });
}
Here is my FileObject class, if needed:
public class FileObject : IDisposable
{
    public string ContentId { get; set; }
    public string ObjectId { get; set; }
    public ContentType ContentType { get; set; }
    public string Extension { get; set; }
    public Stream Content { get; set; }

    private bool IsDisposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (IsDisposed)
            return;
        if (disposing && Content != null)
        {
            Content.Close();
            Content.Dispose();
        }
        IsDisposed = true;
    }
}
The above code writes the correct full-size image to the storage drive. It also writes the thumbnail to storage. However, the thumbnail is always corrupted. In other words, the generated thumbnail file is always written with 0 bytes.
How can I correctly create my thumbnail from file.Content stream after writing the same stream to the storage?
I figured out the cause of the issue. The line thumb.Save(thumbnailStream, image.RawFormat); leaves thumbnailStream positioned at the end, so when it is written to storage nothing gets written.
The fix was to reset the stream position after writing to it, like this:
using (MemoryStream thumbnailStream = new MemoryStream())
{
    // Save the thumbnail to the memory stream
    thumb.Save(thumbnailStream, image.RawFormat);
    // Reset the seek position to the beginning
    thumbnailStream.Seek(0, SeekOrigin.Begin);
    // The name of the new thumbnail
    string thumbnailFilename = string.Format("thumbnail_{0}", filename);
    // The name along with the exact path to the thumbnail
    string thumbnailPath = Path.Combine(file.ContentId, thumbnailFilename);
    // Write the thumbnail to storage
    Storage.CreateAsync(thumbnailStream, thumbnailPath);
}
I am not sure what benefit is gained from thumb.Save(...) not resetting the position to 0 after writing into a new stream. I would expect it to do that, since it always writes a fresh stream rather than appending to an existing one.
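The behavior is easy to demonstrate in isolation: any Image.Save call advances the target stream to the end of the written data, so a reader handed the same stream afterwards sees zero remaining bytes. A minimal sketch:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

class StreamPositionDemo
{
    static void Main()
    {
        using (var bmp = new Bitmap(10, 10))
        using (var ms = new MemoryStream())
        {
            bmp.Save(ms, ImageFormat.Png);
            // After Save, the position sits at the end of the written bytes,
            // so a subsequent read from ms would return nothing.
            Console.WriteLine(ms.Position == ms.Length); // True
            ms.Position = 0; // rewind before handing the stream to a consumer
        }
    }
}
```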
I am looking for a way to efficiently prepare a large number of images for a website in .NET.
The images are often large, unedited, 2–5 MB, 4000×8000 px monsters straight from a phone camera.
I would like to generate thumbnails quickly and as efficiently as possible, without slowing down the CPU or the experience for the user.
I also have to consider caching.
Modern CMS systems use an image pre-processor that you can invoke via the front end. I want to build something like that myself, because in my case I can't use a CMS.
Here is my code. It is a static helper method from a Helper class, and I call it in Razor every time I need to render an image.
public static string GetImgThumbnail(string web_path, int _width = 0, int _height = 0)
{
    //Default parameters
    Image image;
    Image thumbnail;
    string thumb_url = "/img/noimg.png";
    try
    {
        // web_path example input: "/images/helloworld.jpg"
        // system_path is the file-system path of the original image, e.g. "C:\projects\websites\mysite\images\helloworld.jpg"
        string system_path = HttpContext.Current.Server.MapPath(web_path);
        image = Image.FromFile(system_path);
        //Original image dimensions
        int width = image.Width;
        int height = image.Height;
        //Get new image dimensions
        if (_height == 0)
        {
            if (_width == 0)
            {
                _width = 700;
            }
            _height = (_width * height) / width;
        }
        if (_width == 0)
        {
            if (_height == 0)
            {
                _height = 700;
            }
            _width = (_height * width) / height;
        }
        //Generate the thumbnail, passing in the new dimensions
        thumbnail = image.GetThumbnailImage(_width, _height, null, IntPtr.Zero);
        //Work out what to call the newly created thumbnail.
        //If the original image is "/images/helloworld.jpg", the thumbnail would be "/images/helloworld_thumb_700x250.jpg" (or the .png/.jpeg equivalent)
        //Suffix should be "_thumb_{width}x{height}"
        string suffix = string.Format("_thumb_{0}x{1}", _width.ToString(), _height.ToString());
        var bigImgFilename = String.Format("{0}{1}",
            Path.GetFileNameWithoutExtension(system_path), Path.GetExtension(system_path));
        var newImgFilename = String.Format("{0}{1}{2}",
            Path.GetFileNameWithoutExtension(system_path), suffix, Path.GetExtension(system_path));
        //New system path of the thumbnail, e.g. "C:\projects\websites\mysite\images\helloworld_thumb_700x250.jpg"
        var newpath = system_path.Replace(bigImgFilename, newImgFilename);
        //Set the new web path; expected to be "/images/helloworld_thumb_700x250.jpg"
        thumb_url = web_path.Replace(bigImgFilename, newImgFilename);
        //Check if the file exists; no need to overwrite it if it does.
        if (!File.Exists(newpath))
        {
            thumbnail.Save(newpath);
        }
    }
    catch (Exception)
    {
        // If something goes wrong, just return the fallback image.
        thumb_url = "/img/noimg.png";
    }
    // return the thumbnail URL
    return thumb_url;
}
I would love to hear some tips or suggestions: am I on the right path, or should I be doing this in a different way?
So in your code, you generate the thumbnail first, and only then calculate the filename and check whether generating it was necessary at all.
I would do it in a different order:
Load the image (since you need that to calculate the new filename)
Calculate the new filename
Check if the file is already there
If the file is already there, use it
If not, generate the thumbnail.
In code this would roughly look like the following:
public static string GetImgThumbnail(string web_path, int _width = 0, int _height = 0)
{
    [...]
    string system_path = HttpContext.Current.Server.MapPath(web_path);
    image = Image.FromFile(system_path);
    // calculate new width and height
    [...]
    // calculate new filename
    [...]
    //New system path of the thumbnail, e.g. "C:\projects\websites\mysite\images\helloworld_thumb_700x250.jpg"
    var newpath = system_path.Replace(bigImgFilename, newImgFilename);
    //Set the new web path; expected to be "/images/helloworld_thumb_700x250.jpg"
    thumb_url = web_path.Replace(bigImgFilename, newImgFilename);
    //Check if the file exists; no need to regenerate it if it does.
    if (!File.Exists(newpath))
    {
        //Generate the thumbnail, passing in the new dimensions
        thumbnail = image.GetThumbnailImage(_width, _height, null, IntPtr.Zero);
        thumbnail.Save(newpath);
    }
    [...]
}
My point is: loading an image and calculating the size doesn't use many resources. The resizing part is what is heavy on the CPU. So you will get faster responses if you only transform the image when necessary.
Here are some points you can consider:
I call it in razor every time I need to render an image.
This generates a thumbnail on every image view. That is very CPU-heavy and most likely not what you want. Consider creating the thumbnail only once, saving it to disk, and serving the pre-rendered version from then on.
Next issue:
//Check if the file exists; no need to overwrite it if it does.
if (!File.Exists(newpath))
{
    thumbnail.Save(newpath);
}
You first compute the thumbnail and then check whether the computation has already been done. It should be the other way round.
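Putting both suggestions together, the check-before-generate flow might look like the sketch below. This is a hypothetical helper, assuming the same naming scheme as the question; the filename logic stands in for the parts elided above:

```csharp
using System.Drawing;
using System.IO;

static class ThumbnailCache
{
    // Return the path of a cached thumbnail, generating it only on a cache miss.
    public static string GetOrCreate(string sourcePath, int width, int height)
    {
        // Hypothetical naming scheme: "name_thumb_{w}x{h}.ext" next to the source.
        string thumbPath = Path.Combine(
            Path.GetDirectoryName(sourcePath),
            Path.GetFileNameWithoutExtension(sourcePath)
                + string.Format("_thumb_{0}x{1}", width, height)
                + Path.GetExtension(sourcePath));

        // Cheap check first: skip the expensive decode and resize when cached.
        if (!File.Exists(thumbPath))
        {
            using (var image = Image.FromFile(sourcePath))
            using (var thumb = image.GetThumbnailImage(width, height, null, System.IntPtr.Zero))
            {
                thumb.Save(thumbPath);
            }
        }
        return thumbPath;
    }
}
```

Note that computing the target path requires no image decoding at all, so the happy path (thumbnail already cached) touches only the file system.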
What I am currently doing is:
Capture some screenshots
Copy all the captured screenshots into a List<Bitmap>
Save all the screenshots in the list to the hard drive
Feed all the pictures in the directory to the VideoWriter object of the DotImaging library.
My source code for writing the video is:
private void MakeVideo()
{
    var saveDialog = new SaveFileDialog { Filter = @"AVI Video(.avi)|*.avi" };
    if (saveDialog.ShowDialog() == DialogResult.OK)
    {
        using (var videoWriter = new VideoWriter(saveDialog.FileName, new Size(_screenWidth, _screenHeight), FrameRate, true))
        {
            var ir = new ImageDirectoryCapture(_path, "*.jpg");
            while (ir.Position < ir.Length)
            {
                IImage image = ir.Read();
                videoWriter.Write(image);
            }
            videoWriter.Close();
            DeleteFiles(); // Deletes the files from the hard drive
        }
    }
}
What I want to do is:
Skip saving the screenshots to the hard drive.
Feed the List<Bitmap> to the VideoWriter object directly.
I am unable to do so because it takes a directory path rather than the images themselves.
I want to do this because writing all the images to the hard drive and then making the video is much slower. Alternatives to DotImaging are also welcome.
Or maybe you can tell me whether I can convert Bitmap images to the IImage format that the VideoWriter.Write() method accepts as a parameter.
There is an extension library called DotImaging.BitmapInterop.
Once it's installed in your project, you can write something like:
IEnumerable<Bitmap> frames = GetFrames();
using (var writer = new VideoWriter(target, new Size(width, height)))
{
    foreach (var bitmap in frames)
    {
        var imageBuffer = bitmap.ToBgr();
        writer.Write(imageBuffer.Lock());
    }
}
I'm developing an application that uses a mobile device to take a photo and send it using a web service. But after I've taken 4 photos I get an OutOfMemoryException in the code below. I tried calling GC.Collect(), but it didn't help either. Maybe someone here could give me advice on how to handle this problem.
public static Bitmap TakePicture()
{
    var dialog = new CameraCaptureDialog
    {
        Resolution = new Size(1600, 1200),
        StillQuality = CameraCaptureStillQuality.Default
    };
    dialog.ShowDialog();

    // If the filename is empty the user took no picture
    if (string.IsNullOrEmpty(dialog.FileName))
        return null;

    // (!) The OutOfMemoryException is thrown here (!)
    var bitmap = new Bitmap(dialog.FileName);
    File.Delete(dialog.FileName);
    return bitmap;
}
The function is called by an event handler:
private void _pictureBox_Click(object sender, EventArgs e)
{
    _takePictureLinkLabel.Visible = false;
    var image = Camera.TakePicture();
    if (image == null)
        return;
    image = Camera.CutBitmap(image, 2.5);
    _pictureBox.Image = image;
    _image = Camera.ImageToByteArray(image);
}
I suspect you are holding onto references. As a minor cause, note that dialogs don't dispose themselves when you use ShowDialog, so you should be using the dialog (although I would expect the GC to still collect an undisposed but unreferenced dialog).
Likewise, you should probably be using the image; again, I'm not sure I'd expect this to make or break things, but it's worth a try:
public static Bitmap TakePicture()
{
    string filename;
    using (var dialog = new CameraCaptureDialog
    {
        Resolution = new Size(1600, 1200),
        StillQuality = CameraCaptureStillQuality.Default
    })
    {
        dialog.ShowDialog();
        filename = dialog.FileName;
    }

    // If the filename is empty the user took no picture
    if (string.IsNullOrEmpty(filename))
        return null;

    // (!) The OutOfMemoryException is thrown here (!)
    var bitmap = new Bitmap(filename);
    File.Delete(filename);
    return bitmap;
}
private void _pictureBox_Click(object sender, EventArgs e)
{
    _takePictureLinkLabel.Visible = false;
    using (var image = Camera.TakePicture())
    {
        if (image == null)
            return;
        // A using variable cannot be reassigned, so keep the cropped copy separate
        var cropped = Camera.CutBitmap(image, 2.5);
        _pictureBox.Image = cropped;
        _image = Camera.ImageToByteArray(cropped);
    }
}
I'd also be a little cautious of the CutBitmap etc, to ensure that things are released ASAP.
Your mobile device usually has no option to swap memory to disk, so since you chose to store your images as bitmaps in memory rather than as files on disk, you quickly consume your phone's memory. Your new Bitmap() line allocates a large chunk of memory, so it is very likely to throw the exception there. Another contender is Camera.ImageToByteArray, which also allocates a large amount of memory. These amounts probably aren't large compared to what you're used to on a desktop, but for a mobile device they are gigantic.
Try keeping the pictures on disk until you use them, i.e. until you send them to the web service. For displaying them, use the built-in controls; they are probably the most memory-efficient, and you can usually point them directly at the image files.
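A sketch of that suggestion: hold only the file path, let the control load from disk, and stream the file to the web service when sending. The names here (ShowPhoto, Send, the destination stream) are hypothetical, not part of the question's API:

```csharp
using System.Drawing;
using System.IO;
using System.Windows.Forms;

class OnDiskPhotoExample
{
    string _photoPath; // hold the path, not the Bitmap

    void ShowPhoto(PictureBox box, string path)
    {
        _photoPath = path;
        // Let the control own the image; nothing else keeps the full bitmap alive.
        box.Image = Image.FromFile(path);
    }

    void Send(Stream webServiceStream)
    {
        // Stream the file in small chunks instead of materializing a byte[]
        // the size of the whole photo in memory.
        using (var fs = File.OpenRead(_photoPath))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                webServiceStream.Write(buffer, 0, read);
        }
    }
}
```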
I have a little sample application I was working on, trying to get some of the new .NET 4.0 Parallel Extensions going (they are very nice). I'm running into a (probably really stupid) problem with an OutOfMemoryException. My main app, which I'm looking to plug this sample into, reads some data and lots of files, does some processing on them, and then writes them out somewhere. I was running into some issues with the files getting bigger (possibly GBs) and was concerned about memory, so I wanted to parallelize things, which led me down this path.
Now the code below gets an OOME on smaller files, and I think I'm just missing something. It will read in 10–15 files and write them out in parallel nicely, but then it chokes on the next one. It looks like it has read and written about 650 MB. A second set of eyes would be appreciated.
I'm reading into a MemoryStream from the FileStream because that is what the main application needs, and I'm just trying to replicate that to some degree. It reads data and files from all kinds of places and works on them as MemoryStreams.
This is using .NET 4.0 Beta 2 and VS 2010.
namespace ParellelJob
{
    class Program
    {
        BlockingCollection<FileHolder> serviceToSolutionShare;

        static void Main(string[] args)
        {
            Program p = new Program();
            p.serviceToSolutionShare = new BlockingCollection<FileHolder>();
            ServiceStage svc = new ServiceStage(ref p.serviceToSolutionShare);
            SolutionStage sol = new SolutionStage(ref p.serviceToSolutionShare);

            var svcTask = Task.Factory.StartNew(() => svc.Execute());
            var solTask = Task.Factory.StartNew(() => sol.Execute());

            while (!solTask.IsCompleted)
            {
            }
        }
    }

    class ServiceStage
    {
        BlockingCollection<FileHolder> outputCollection;

        public ServiceStage(ref BlockingCollection<FileHolder> output)
        {
            outputCollection = output;
        }

        public void Execute()
        {
            var di = new DirectoryInfo(@"C:\temp\testfiles");
            var files = di.GetFiles();
            foreach (FileInfo fi in files)
            {
                using (var fs = new FileStream(fi.FullName, FileMode.Open, FileAccess.Read))
                {
                    int b;
                    var ms = new MemoryStream();
                    while ((b = fs.ReadByte()) != -1)
                    {
                        ms.WriteByte((byte)b); //OutOfMemoryException occurs here
                    }
                    var f = new FileHolder();
                    f.filename = fi.Name;
                    f.contents = ms;

                    outputCollection.TryAdd(f);
                }
            }
            outputCollection.CompleteAdding();
        }
    }

    class SolutionStage
    {
        BlockingCollection<FileHolder> inputCollection;

        public SolutionStage(ref BlockingCollection<FileHolder> input)
        {
            inputCollection = input;
        }

        public void Execute()
        {
            FileHolder current;
            while (!inputCollection.IsCompleted)
            {
                if (inputCollection.TryTake(out current))
                {
                    using (var fs = new FileStream(String.Format(@"c:\temp\parellel\{0}", current.filename), FileMode.OpenOrCreate, FileAccess.Write))
                    {
                        using (MemoryStream ms = (MemoryStream)current.contents)
                        {
                            ms.WriteTo(fs);
                            current.contents.Close();
                        }
                    }
                }
            }
        }
    }

    class FileHolder
    {
        public string filename { get; set; }
        public Stream contents { get; set; }
    }
}
The main logic seems OK, but if that empty while-loop in Main is literal then you are burning unnecessary CPU cycles. Better to use solTask.Wait() instead.
But if individual files can run into gigabytes, you still have the problem of holding at least one completely in memory, and usually two (one being read, one being processed/written).
PS1: I just realized you don't pre-allocate the MemoryStream. That's bad; it will have to resize very often for a big file, and that costs a lot of memory. Better to use something like:
var ms = new MemoryStream((int)fs.Length);
And then, for big files, you have to consider the Large Object Heap (LOH). Are you sure you can't break a file up into segments and process them?
PS2: And you don't need the refs on the constructor parameters, but that's not the problem.
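The byte-at-a-time copy can also be replaced with a pre-sized stream and a block copy, which avoids both the per-byte call overhead and the repeated buffer growth. A sketch (Stream.CopyTo is available from .NET 4.0 on):

```csharp
using System.IO;

class PreSizedCopyExample
{
    static MemoryStream ReadFileIntoMemory(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            // Pre-allocate the backing buffer so the stream never has to grow
            // (each growth allocates a new, larger array and copies the data).
            var ms = new MemoryStream((int)fs.Length);
            fs.CopyTo(ms); // block copy instead of byte-at-a-time
            ms.Position = 0; // rewind for the consumer
            return ms;
        }
    }
}
```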
Just looking through quickly: inside your ServiceStage.Execute method you have
var ms = new MemoryStream();
I don't see where you close ms or put it in a using block. You do have the using in the other class. That's one thing to check.