I have a component for an image file upload.
However, I want to also be able to check DPI settings because these images will eventually be printed and submitted on paper.
Within ASP.net, I can usually do something like this:
using (var rawBitmap = new Bitmap(postedFile.InputStream))
{
    var dpi = (decimal)rawBitmap.VerticalResolution / rawBitmap.Height;
    // do other stuff.
}
However, within silverlight, I don't have access to the same libraries in order to do this (that said, this is my first stab at Silverlight, so if there is a way to get those dlls in, I'm all for it, but I couldn't get my utility wrapper imported).
I've seen lots of recommendations for FJcore (imagetools also wraps this library), a JPEG encoding/decoding utility. In theory, one loads up the JPEG stream into the decoder and gets information out.
I've tried this approach with FJcore, but all the files I'm saving out of Photoshop seem to be missing the correct header that indicates the start of the file, which causes the decoder to fail. I've also confirmed this issue using their unit tests.
Any ideas on how to pull image resolution out of a file upload in silverlight?
The DPI of an image is not always stored in the image. It is usually an extra property saved as metadata during capture by the scanner (or the camera). You can actually see that if you load a JPEG in C# using Bitmap and save it again, the DPI property is lost and set to the default 96.
So unfortunately this is not always a reliable option. I do not think there is any chance of getting it for all images. DPI is in fact irrelevant for pictures that are not created by scanners.
Try the FJCore assemblies to find the DPI of an image in Silverlight:
FileStream stream = imageFiles.OpenRead();
DecodedJpeg jpegImage = new JpegDecoder(stream).Decode();
int imageDpi = jpegImage.Image.DensityX; // horizontal pixel density read from the file's header
I am trying to work with TIFF images using C# in an ASP.NET environment. The catch: the images may have transparency, and any time I try to work with a transparent TIFF file I get either an Out of Memory exception or a "Parameter is not valid" error.
Here are the different ways I've gone about it:
string imagename = "Test.tif";
var image = Image.FromFile(@"C:\Path-to-File\" + imagename);
OR
Bitmap myBitmap = new Bitmap(@"C:\Path-to-File\" + imagename);
I've tried throwing it into a filestream and still receive errors. The TIFF files are coming from Photoshop, and I've definitely narrowed down transparency as the culprit.
This link does mention that the Image class does not support transparency
Looking for any sort of guidance...this shouldn't be as difficult as I'm finding.
Thanks to @Andrew Morton for suggesting FreeImage; it ended up leading me to the Magick.NET library.
Ultimately, FreeImage was nearly impossible to get working in a C# program with current versions of Visual Studio, so after some digging I found Magick.NET, which, after a quick search in NuGet, was very easy to install and get going.
Here is what I did:
Search for Magick in NuGet and select the desired architecture (x64 or x86).
Note: if you select x64, your project's build settings must target the same architecture, otherwise you will get a failure such as:
Could not load file or assembly 'Magick.NET.DLL' or one of its dependencies. The specified module could not be found.
Familiarize yourself with the basics in this documentation
That's it! Here is the code I used to get it working:
using (MagickImage image = new MagickImage(@"C:\FolderPathToFile\OriginalFile.tif"))
{
    // Magick.NET picks the output format from the file extension.
    image.Write(@"C:\FolderPathToFile\FinalOutput.png");
}
I have a number of web controls, which are made up of png images. The simplest is a button.
I need to be able to generate these controls with different colours depending on the colour selected by the client.
The images are .PSD files, layered, before being exported to PNG.
My idea was to let the client pick one colour, use a layer filter in the PSD to change the overall colour of the image, and programmatically export the PSD to PNG on the server. I looked into using the Photoshop CS interface via COM, but haven't got my head around it; has anyone else used it for a similar task?
Alternatively I could read the png into memory and perform colour replacement, but this seems really complex for what reads like a simple(ish) task.
Many thanks in advance
.PSD is quite a complicated and poorly documented file format that is constantly receiving new features from Adobe, so editing it is by no means an easy task.
One way is to use Photoshop batch processing, which means having Photoshop installed on the server, but since you were already prepared to drive it through COM, that should not be a problem.
One of the starting points may be: http://www.webdesignerdepot.com/2008/11/photoshop-droplets-and-imagemagick/
Another way would be to composite the layers using C#: you would have some layers ready (textures/borders/etc.), some would be created at runtime, and all those layers would be merged at runtime in C#.
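For the compositing route, something along these lines could be a starting point. This is only a rough sketch: the layer file names, the semi-transparent tint approach and the BuildButton helper are all illustrative, not an existing API.

using System.Drawing;
using System.Drawing.Imaging;

static void BuildButton(Color clientColour, string outputPath)
{
    // The layer PNGs are assumed to have been exported from the PSD ahead of time.
    using (var result = new Bitmap(@"C:\Layers\button-base.png"))
    using (var gfx = Graphics.FromImage(result))
    using (var tint = new SolidBrush(Color.FromArgb(128, clientColour)))
    using (var gloss = Image.FromFile(@"C:\Layers\button-gloss.png"))
    {
        // Tint the base layer with the client's colour, then draw the
        // remaining exported layer(s) on top.
        gfx.FillRectangle(tint, 0, 0, result.Width, result.Height);
        gfx.DrawImage(gloss, 0, 0, result.Width, result.Height);

        result.Save(outputPath, ImageFormat.Png);
    }
}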
I am learning WCF, LINQ and a few other technologies by writing, from scratch, a custom remote control application like VNC. I am creating it with three main goals in mind:
The server will provide 'remote control' on an application level (i.e. seamless windows) instead of full desktop access.
The client can select any number of applications that are running on the server and receive a stream of images of each of them.
A client can connect to more than one server simultaneously.
Right now I am using WCF to send an array of Bytes that represents the window being sent:
using (var ms = new MemoryStream()) {
window.GetBitmap().Save(ms, ImageFormat.Jpeg);
frame.Snapshot = ms.ToArray();
}
GetBitmap implementation:
var wRectangle = GetRectangle();
var image = new Bitmap(wRectangle.Width, wRectangle.Height);
using (var gfx = Graphics.FromImage(image))
{
    gfx.CopyFromScreen(wRectangle.Left, wRectangle.Top, 0, 0, wRectangle.Size, CopyPixelOperation.SourceCopy);
}
return image;
It is then sent via WCF (TCPBinding and it will always be over LAN) to the client and reconstructed in a blank windows form with no border like this:
using (var ms = new MemoryStream(_currentFrame.Snapshot))
using (var decoded = Image.FromStream(ms))
{
    // GDI+ expects the stream to stay open for the lifetime of the Image,
    // so the decoded frame is copied into a standalone Bitmap first.
    BackgroundImage = new Bitmap(decoded);
}
I would like to make this process as efficient as possible in both CPU and memory usage with bandwidth coming in third place. I am aiming to have the client connect to 5+ servers with 10+ applications per server.
Is my existing method the best approach (while continuing to use these technologies) and is there anything I can do to improve it?
Ideas that I am looking into (but I have no experience with):
Using an open source graphics library to capture and save the images instead of .Net solution.
Saving as PNG or another image type rather than JPG.
Send image deltas instead of a full image every time.
Try and 'record' the windows and create a compressed video stream instead of picture snapshots (mpeg?).
You should be aware of these points:
Transport: TCP/binary message encoding will be the fastest way to transfer your image data.
Image capture: you can rely on P/Invoke to access your screen data, as this can be faster (though potentially more memory-consuming). Some examples: Capturing the Screen Image in C# [P/Invoke], How to take a screen shot using .NET [Managed] and Capturing screenshots using C# (Managed).
You should reduce your image data before sending it:
choose your image format wisely, as some formats have native compression (such as JPG);
an example would be Find differences between images C#;
by sending only a diff image, you can crop it and send just the non-empty areas.
Try to inspect your WCF messages. This will help you understand how the messages are formatted and identify ways to make them smaller.
After going through all these steps and being satisfied with your final code, you can download the VncSharp source code. It implements the RFB protocol (Wikipedia entry), "a simple protocol for remote access to graphical user interfaces. Because it works at the framebuffer level it is applicable to all windowing systems and applications, including X11, Windows and Macintosh. RFB is the protocol used in VNC (Virtual Network Computing)."
I worked on a similar project a while back. This was my general approach:
Rasterized the captured bitmap into tiles of 32x32
To determine which tiles had changed between frames I used unsafe code to compare them 64 bits at a time (a sketch of this step follows the list)
On the set of delta tiles I applied one of the PNG filters to improve compressibility and had the best results with the Paeth filter
Used DeflateStream to compress the filtered deltas
Used a BinaryMessageEncoding custom binding on the service to transmit the data in binary instead of the default Base64-encoded form
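Here is a rough sketch of what the tile-comparison step could look like. It is not my original code: it assumes both frames are the same size and 32 bits per pixel, the FindChangedTiles name is illustrative, and the project must be compiled with /unsafe.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;

// Returns the top-left coordinates of 32x32 tiles whose pixels differ between two frames.
static unsafe List<Point> FindChangedTiles(Bitmap previous, Bitmap current, int tileSize = 32)
{
    var changed = new List<Point>();
    var rect = new Rectangle(0, 0, current.Width, current.Height);

    BitmapData prev = previous.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData curr = current.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        for (int ty = 0; ty < current.Height; ty += tileSize)
        for (int tx = 0; tx < current.Width; tx += tileSize)
        {
            int rows = Math.Min(tileSize, current.Height - ty);
            int cols = Math.Min(tileSize, current.Width - tx);
            bool differs = false;

            for (int y = 0; y < rows && !differs; y++)
            {
                // Two 32bpp pixels fit in one 64-bit read, so a 32-pixel-wide
                // tile row is covered by sixteen ulong comparisons.
                ulong* p = (ulong*)((byte*)prev.Scan0 + (ty + y) * prev.Stride + tx * 4);
                ulong* c = (ulong*)((byte*)curr.Scan0 + (ty + y) * curr.Stride + tx * 4);
                for (int x = 0; x < cols / 2; x++)
                    if (p[x] != c[x]) { differs = true; break; }
            }

            if (differs)
                changed.Add(new Point(tx, ty));
        }
    }
    finally
    {
        previous.UnlockBits(prev);
        current.UnlockBits(curr);
    }

    return changed;
}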
Some client-side considerations. When dealing with large amounts of data being transferred through a WCF service, I found that some parameters of the HttpTransportBindingElement and the XmlDictionaryReaderQuotas were set to pretty conservative values, so you will want to increase them.
Check out this: Large Data and Streaming (WCF)
The fastest way to send data between client and server is to send a byte array, or several byte arrays. That way WCF doesn't have to do any custom serialization on your data.
That said, you should use the newer WPF/.NET 3.5 library to compress your images instead of the ones from System.Drawing. The classes in the System.Windows.Media.Imaging namespace are faster than the old ones and can still be used in WinForms.
In order to know if compression is the way to go you will have to benchmark your scenario to know how the compression/decompression time compares to transferring all the bytes uncompressed.
If you transfer the data over internet, then compression will help for sure. Between components on the same machine or on a LAN, the benefit might not be so obvious.
You could also try compressing the image, then chunk the data and send asynchronously with a chunk id which you puzzle together on the client. Tcp connections start slow and increase in bandwidth over time, so starting two or four at the same time should cut the total transfer time (all depending on how much data you are sending). Chunking the compressed images bytes is also easier logic wise compared to doing tiles in the actual images.
Summed up: System.Windows.Media.Imaging should help you both CPU- and bandwidth-wise compared to your current code. Memory-wise I would guess it is about the same.
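For reference, here is a hedged sketch of what the System.Windows.Media.Imaging route might look like from a WinForms app. It needs references to PresentationCore and WindowsBase; the FrameEncoder class, the quality parameter and the HBitmap interop step are just one way to bridge from the existing GDI+ capture, not the only one.

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Media.Imaging;

static class FrameEncoder
{
    [DllImport("gdi32.dll")]
    private static extern bool DeleteObject(IntPtr hObject);

    // Encodes a captured GDI+ bitmap to JPEG bytes via the WPF imaging classes.
    public static byte[] Encode(System.Drawing.Bitmap captured, int quality = 75)
    {
        IntPtr hBitmap = captured.GetHbitmap();
        try
        {
            BitmapSource source = Imaging.CreateBitmapSourceFromHBitmap(
                hBitmap, IntPtr.Zero, Int32Rect.Empty, BitmapSizeOptions.FromEmptyOptions());

            var encoder = new JpegBitmapEncoder { QualityLevel = quality };
            encoder.Frames.Add(BitmapFrame.Create(source));

            using (var ms = new MemoryStream())
            {
                encoder.Save(ms);
                return ms.ToArray();
            }
        }
        finally
        {
            DeleteObject(hBitmap); // GetHbitmap allocates a GDI handle that must be released
        }
    }
}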
Instead of capturing the entire image just send smaller subsections of the image. Meaning: starting in the upper left corner, send a 10x10 pixel image, then 'move' ten pixels and send the next 10px square, and so on. You can then send dozens of small images and then update the painted full image on the client. If you've used RDC to view images on a remote machine you've probably seen it do this sort of screen painting.
Using the smaller image sections you can then split up the deltas as well, so if nothing has changed in the current section you can safely skip it, inform the client that you're skipping it, and then move onto the next section.
You'll definitely want to use compression for sending the images. However you should check to see if you get smaller file sizes from using compression similar to gZip, or if using an image codec gives you better results. I've never run a comparison, so I can't say for certain one way or another.
Your solution looks fine to me, but I suggest (as others did) that you use tiles and compress the traffic when possible. In addition, I think you should send the entire image once in a while, just to be sure that the client's deltas have a common "base".
Perhaps you can use an existing solution for streaming, such as RTP-H263 for video streaming. It works great, it uses compression, and it's well documented and widely used. You can then skip the WCF part and go directly to the streaming part (either over TCP or over UDP). If your solution should go to production, perhaps the H263 streaming approach would be better in terms of responsiveness and network usage.
// Capture the primary screen into a bitmap and display it in a PictureBox.
Bitmap scrImg = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
using (Graphics scr = Graphics.FromImage(scrImg))
{
    scr.CopyFromScreen(new Point(0, 0), new Point(0, 0), Screen.PrimaryScreen.Bounds.Size);
}
testPictureBox.Image = scrImg;
I use this code to capture my screen.
I have some JPEG files that I can't seem to load into my C# application. They load fine into other applications, like the GIMP. This is the line of code I'm using to load the image:
System.Drawing.Image img = System.Drawing.Image.FromFile(@"C:\Image.jpg");
The exception I get is: "A generic error occurred in GDI+.", which really isn't very helpful. Has anyone else run into this, or know a way around it?
Note: If you would like to test the problem you can download a test image that doesn't work in C#.
There's an exact answer to this problem. We ran into this at work today, and I was able to prove conclusively what's going on here.
The JPEG standard defines a file format: a file consists of a series of "chunks" of data (which the standard calls "segments"). Each chunk starts with an FF marker byte, followed by another marker byte that identifies what kind of chunk it is, followed by a pair of bytes that give the length of the chunk (a 16-bit big-endian value). Some chunks (like FFD8, "Start of Image") are critical to the file's usage, and some (like FFFE, "Comment") are utterly meaningless.
When the JPEG standard was defined, they also included the so-called "APP markers" --- types FFE0 through FFEF --- that were supposed to be used for "application-specific data." These are abused in various ways by various programs, but for the most part, they're meaningless, and can be safely ignored, with the exception of APP0 (FFE0), which is used for JFIF data: JFIF extends the JPEG standard slightly to include additional useful information like the DPI of the image.
The problem with your image is that it contains an FFE1 marker, with a size-zero chunk following that marker. It's otherwise unremarkable image data (a remarkable image, but unremarkable data) save for that weird little useless APP1 chunk. GDI+ is wrongly attempting to interpret that APP1 chunk, probably attempting to decode it as EXIF data, and it's blowing up. (My guess is that GDI+ is dying because it's attempting to actually process a size-zero array.) GDI+, if it was written correctly, would ignore any APPn chunks that it doesn't understand, but instead, it tries to make sense of data that is by definition nonstandard, and it bursts into flames.
So the solution is to write a little routine that will read your file into memory, strip out the unneeded APPn chunks (markers FFE1 through FFEF), and then feed the resulting "clean" image data into GDI+, which it will then process correctly.
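A minimal sketch of such a cleaning routine is shown below. It is not a production JPEG parser: it assumes the malformed chunk still carries its two length bytes, and the StripAppSegments name is purely illustrative. The cleaned bytes can then be handed to Image.FromStream via a MemoryStream.

using System.IO;

// Copies every segment except APP1..APP15 (FFE1-FFEF) and passes the
// entropy-coded data through untouched once it reaches Start-of-Scan.
static byte[] StripAppSegments(byte[] jpeg)
{
    using (var output = new MemoryStream())
    {
        output.Write(jpeg, 0, 2); // keep the SOI marker (FFD8)
        int pos = 2;

        while (pos < jpeg.Length - 3 && jpeg[pos] == 0xFF)
        {
            byte marker = jpeg[pos + 1];

            if (marker == 0xDA) // SOS: copy the rest of the file verbatim
            {
                output.Write(jpeg, pos, jpeg.Length - pos);
                break;
            }

            // Segment length is a 16-bit big-endian value that includes itself.
            int length = (jpeg[pos + 2] << 8) | jpeg[pos + 3];
            if (length < 2) length = 2; // guard against the zero-length chunk described above
            bool isAppSegment = marker >= 0xE1 && marker <= 0xEF;

            if (!isAppSegment)
                output.Write(jpeg, pos, 2 + length);

            pos += 2 + length;
        }

        return output.ToArray();
    }
}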
We currently have a contest underway here at work to see who can write the JPEG-cleaning routine the fastest, with fun prizes :-)
For the naysayers: That image is not "slightly nonstandard." The image uses APP1 for its own purposes, and GDI+ is very wrong to try to process that data. Other applications have no trouble reading the image because they rightly ignore the APP chunks like they're supposed to.
.NET isn't handling the format of that particular image, possibly because the JPEG data is slightly broken or non-standard. If you load the image into GIMP and save it to a new file, you can then load it with the Image class. Presumably GIMP is a bit more forgiving of file format problems.
This thread from MSDN Forums may be useful.
The error may mean the data is corrupt or there is some underlying stream that has been closed too early.
The error can be a permission problem, especially if your application is an ASP.NET application. Try moving the file to the same directory as your executable (if WinForms) or the root directory of your web application (if ASP.NET).
I have the same problem.
The only difference I've noticed is the compression. It works fine with "JPEG", but when the compression is "Progressive JPEG" I get the exception (A generic error occurred in GDI+).
At first I thought it could be a memory problem, because the images I mentioned were kind of big (about 5 MB on disk and maybe ~80 MB in memory), but then I noticed the difference in the compression type.
When I open/save the image file in another program like IrfanView or GIMP, the result is OK, but that's not the idea.
I am working on a system that stores many images in a database as byte[]. Each byte[] is a multi page tiff already, but I have a need to retrieve the images, converting them all to one multi page tiff. The system was previously using the System.Drawing.Image classes, with Save and SaveAdd - this was nice in that it saves the file progressively, and therefore memory usage was minimal, however GDI+ concurrency issues were encountered - this is running on a back end in COM+.
The methods were converted to use the System.Windows.Media.Imaging classes, TiffBitmapDecoder and TiffBitmapEncoder, with a bit of massaging in between. This resolved the concurrency issue, but I am struggling to find a way to save the image progressively (i.e. frame by frame) to limit memory usage, and therefore the size of images that can be manipulated is much lower (i.e. I created a test 1.2GB image using the GDI+ classes, and could have gone on, but could only create a ~600MB file using the other method).
Is there any way to progressively save a multi page tiff image to avoid memory issues? If Save is called on the TiffBitmapEncoder more than once an error is thrown.
I think I would use the standard .NET way to decode the TIFF images and write my own TIFF encoder that can write progressively to disk. The TIFF format specifications are public.
Decoding a TIFF is not that easy, which is why I would use the TiffBitmapDecoder for that part. Encoding is easier, so I think it is doable to write an encoder that you can feed with separate frames and that writes the necessary data progressively to disk. You'll probably have to update the header of the resulting TIFF once you are ready, to update the IFD (Image File Directory) entry.
Good luck!
I've done this via LibTIFF.NET; I can handle multi-gigabyte images this way with no pain. See my question at:
Using LibTIFF from c# to access tiled tiff images
Although I use it for tiled access, the memory issues are similar. LibTIFF allows full access to all TIFF functions, so files can be read and stored in a directory-like manner.
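For what it's worth, a rough sketch of a page-by-page write with LibTiff.Net (the BitMiracle port) might look like the following. The byte[][]-per-page shape and the ProgressiveTiffWriter name are just stand-ins for however the frames actually come out of the database; only one page's scanlines need to be in memory at a time because each page is flushed with WriteDirectory().

using System.Collections.Generic;
using BitMiracle.LibTiff.Classic;

static class ProgressiveTiffWriter
{
    // pages: one byte[] per row, 8-bit RGB, all pages the same size in this sketch.
    public static void Write(string path, IEnumerable<byte[][]> pages, int width, int height)
    {
        using (Tiff tiff = Tiff.Open(path, "w"))
        {
            foreach (byte[][] scanlines in pages)
            {
                tiff.SetField(TiffTag.IMAGEWIDTH, width);
                tiff.SetField(TiffTag.IMAGELENGTH, height);
                tiff.SetField(TiffTag.BITSPERSAMPLE, 8);
                tiff.SetField(TiffTag.SAMPLESPERPIXEL, 3);
                tiff.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB);
                tiff.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
                tiff.SetField(TiffTag.COMPRESSION, Compression.LZW);

                for (int row = 0; row < height; row++)
                    tiff.WriteScanline(scanlines[row], row);

                tiff.WriteDirectory(); // close this page before starting the next
            }
        }
    }
}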
The other thing worth noting is the differences between GDI on different windows versions. See GDI .NET exceptions.