I am using Aspose.Imaging 19.11.0.0 to manipulate TIFF images with JPEG compression.
With TIFF files larger than 10 MB (around 50 pages), rotating all of the pages takes 30 to 40 minutes and the application stops responding.
In my code, if a TIFF file has 50 pages, the client application iterates over the pages in a foreach loop and calls the rotate method on the server side once for each page.
I know that one factor in the slow performance is sending each page separately instead of all pages at once,
but while debugging I found that tiffImage.Save(Stream, tiffOptions) also takes a long time for each individual page.
Below is the server-side code for rotating a page using JPEG compression.
The RotatePageUsingAspose() method below is called once for each page that needs rotating;
for example, if I select only the 3rd page out of 50, it is called a single time with pageNumber = 3 and a rotation of 90 degrees.
Even in that case, rotating and saving that single page takes almost a minute, which is far too slow.
Server side code for rotation:
private void RotatePageUsingAspose(int pageNo, RotationDegrees rotationDegree)
{
    float angleOfRotation = (float)rotationDegree;

    // Auto mode is flexible and efficient.
    Cache.CacheType = CacheType.Auto;

    // The default cache max value is 0, which means that there is no upper limit.
    Cache.MaxDiskSpaceForCache = 1073741824; // 1 gigabyte
    Cache.MaxMemoryForCache = 1073741824;    // 1 gigabyte

    // Changing the following property will greatly affect performance.
    Cache.ExactReallocateOnly = false;

    TiffOptions tiffOptions = new TiffOptions(TiffExpectedFormat.TiffJpegRgb);

    // Set RGB color mode.
    tiffOptions.Photometric = TiffPhotometrics.Rgb;
    tiffOptions.BitsPerSample = new ushort[] { 8, 8, 8 };

    try
    {
        using (TiffImage tiffImage = (TiffImage)Image.Load(Stream))
        {
            TiffFrame selectedFrame = tiffImage.Frames[pageNo - 1];
            selectedFrame.Rotate(angleOfRotation);
            tiffImage.Save(Stream, tiffOptions);
        }
    }
    finally
    {
        tiffOptions.Dispose();
    }
}
I have raised the same question with the Aspose.Imaging team, but they have not provided a solution yet.
Please suggest improvements to the code above so that the pages are saved more efficiently.
If possible, please describe an approach to achieve this.
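For context, this is the direction I am considering: load the TIFF once, rotate every selected frame, and call Save a single time, so the whole file is re-encoded once instead of once per page. This is only a sketch built from the same Aspose.Imaging calls used above (Image.Load, TiffFrame.Rotate, TiffImage.Save); the method name RotatePagesUsingAspose is mine, and I have not yet measured whether it actually removes the slowness.

// Sketch only: rotate several pages in one load/save cycle instead of one call per page.
// Cache settings omitted here for brevity; pageNumbers are 1-based as in the method above.
private void RotatePagesUsingAspose(int[] pageNumbers, RotationDegrees rotationDegree)
{
    float angleOfRotation = (float)rotationDegree;

    using (TiffOptions tiffOptions = new TiffOptions(TiffExpectedFormat.TiffJpegRgb))
    {
        tiffOptions.Photometric = TiffPhotometrics.Rgb;
        tiffOptions.BitsPerSample = new ushort[] { 8, 8, 8 };

        using (TiffImage tiffImage = (TiffImage)Image.Load(Stream))
        {
            foreach (int pageNo in pageNumbers)
            {
                tiffImage.Frames[pageNo - 1].Rotate(angleOfRotation);
            }

            // A single Save re-encodes the file once for all rotated frames.
            tiffImage.Save(Stream, tiffOptions);
        }
    }
}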
I have a problem with ABCPdf when I try to convert PDF files into separate image files as fallbacks for old browsers.
I have working code that renders the page and resizes the rendering to the wanted size. My problem occurs when the PDF page is huge (7681 px wide by 10978 px high). It nearly kills my development machine, and the deployment machine cannot handle the file at all.
I normally render the page 1-to-1 with the PDF page and then use other algorithms to resize the resulting image. This is inefficient, since ABCPdf needs a lot of resources to produce an image that large.
I have the following code:
private byte[] GeneratePng(Doc pdfDoc, int dpi)
{
    var useDpi = dpi;
    pdfDoc.Rendering.DotsPerInch = useDpi;
    pdfDoc.Rendering.SaveQuality = 100;
    pdfDoc.Rect.String = pdfDoc.CropBox.String;
    pdfDoc.Rendering.ResizeImages = true;

    int attemptCount = 0;
    for (;;)
    {
        try
        {
            return pdfDoc.Rendering.GetData("defineFileTypeDummyString.png");
        }
        catch
        {
            if (++attemptCount == 3) throw;
        }
    }
}
I have tried the following solutions:
Resizing the page
pdfDoc.SetInfo(pdfDoc.Page, "/MediaBox:Rect", "0 0 200 300");
Resizing the page and then outputting it doesn't seem to make any difference at all.
Resizing the images before rendering it:
foreach (IndirectObject io in pdfDoc.ObjectSoup)
{
    if (io is PixMap)
    {
        PixMap pm = (PixMap)io;
        pm.Realize(); // eliminate indexed color images
        pm.Resize(pm.Width / 4, pm.Height / 4);
    }
}
This didn't do anything either and still resulted in a long load time.
Running the reduce-size operation before rendering:
using (ReduceSizeOperation op = new ReduceSizeOperation(pdfDoc))
    op.Compact(true);
That didn't help either; it went straight to rendering and still took a long time.
Can anyone help me here? Maybe point me to some ABCPdf resizing algorithm or something.
OK, so I talked to ABCPdf customer support, and they gave me the following.
doc1.Read(originalPDF);
// Specify size of output page. (This example scales the page, maintaining the aspect ratio,
// but you could set the MediaBox Height and Width to any desired value.)
doc2.MediaBox.Height = doc1.MediaBox.Height / 8;
doc2.MediaBox.Width = doc1.MediaBox.Width / 8;
doc2.Rect.SetRect(doc2.MediaBox);
doc2.Page = doc2.AddPage();
// Create the output image
doc2.AddImageDoc(doc1, 1, null);
doc2.Rendering.Save(savePath);
This is meant to be used with single-page PDFs, so if you have a PDF full of large pictures you should chop it up first, which you can do by following my other Q/A: Chop PDFs into single pages. (A sketch of that chopping step is below.)
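For reference, this is roughly how I split the document before scaling each page. Treat it as a sketch from memory rather than verified code: it assumes ABCPdf's Doc.Append and Doc.RemapPages work the way I recall (append all pages, then keep just one), originalPDF is the same path variable as above, and outputFolder is a hypothetical destination folder.

// Rough sketch: split a multi-page PDF into single-page files before scaling each one.
// Assumes Doc.Append(Doc) and Doc.RemapPages(string) behave as described in the lead-in.
using (Doc source = new Doc())
{
    source.Read(originalPDF);
    for (int pageNumber = 1; pageNumber <= source.PageCount; pageNumber++)
    {
        using (Doc single = new Doc())
        {
            single.Append(source);                     // copy all pages into the new Doc
            single.RemapPages(pageNumber.ToString());  // keep only the current page
            single.Save(Path.Combine(outputFolder, "page" + pageNumber + ".pdf"));
        }
    }
}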
The rendering algorithm used by the code above is auto-detected by ABCPdf and you cannot control it yourself (and they told me I wouldn't want to), so I put my faith in their code. I did a test, and the quality looks quite similar to InterpolationMode.HighQualityBicubic, differing only when zoomed in, so I wouldn't be too concerned about that either.
In the end the code above was roughly 10x faster than rendering at full size and then resizing, so it is really worth doing if you perform this operation a lot.
I'm trying to reduce the file sizes of the GIF animations I'm exporting, and I've read up on how to do it. Another thread suggested reducing the quality, adding compression, and slightly blurring the picture, which is what I'm trying to do like so:
using (MagickImageCollection col = new MagickImageCollection(@"C:/PathToGif"))
{
    for (int i = 0; i < col.Count; i++)
    {
        col[i].Quality = 85;
        col[i].CompressionMethod = CompressionMethod.LZW;
        col[i].Strip();
    }

    col.Write(@"C:/Path/To/Outputh");
}
The code runs, but the settings seem to be ignored, even though setting AnimationDelay the same way does work. I verify this by checking the quality and file size of the output, which are the same as when I don't apply any of the settings. Even setting the quality to 20 gives the same results.
I've also attempted to use QuantizeSettings, passing a value of 255 to the Colors property, but that just seemed to lock my application up at 50% CPU. (I gave it about 5 minutes before forcefully closing the application.)
My application takes a .GIF of about 950 kB and turns it into 5.3 MB, which is unacceptable. (Disclaimer: I add roughly 20 frames to the .GIF and draw an overlay on it.)
Could someone with experience with the Magick.NET library tell me if I'm doing something wrong and point me in the right direction? I was unable to find a different way of applying these settings.
The GIF coder does not use the Quality setting and the CompressionMethod will always be CompressionMethod.LZW. You should do the following if you want to optimize the output file:
using (MagickImageCollection col = new MagickImageCollection(@"C:/PathToGif"))
{
    col.Coalesce();
    AddOtherImages(col);
    col.Optimize();
    col.OptimizeTransparency();
    col.Write(@"C:/Path/To/Output");
}
Make sure you upgrade to the latest version; the Optimize/OptimizeTransparency methods were bugged in previous versions.
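If the output is still too large after optimizing, reducing the palette is the other lever your question mentions. Take this only as a hedged suggestion: it relies on MagickImageCollection.Quantize(QuantizeSettings) with the Colors property, it trades some color fidelity for size, and you should test it on your own GIFs.

using (MagickImageCollection col = new MagickImageCollection(@"C:/PathToGif"))
{
    col.Coalesce();
    AddOtherImages(col);

    // Assumption: quantizing the whole collection to one smaller shared palette
    // usually shrinks GIF output at the cost of some color fidelity.
    col.Quantize(new QuantizeSettings { Colors = 128 });

    col.Optimize();
    col.OptimizeTransparency();
    col.Write(@"C:/Path/To/Output");
}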
I am reading a particular TIF file that reports a zero scanline size. The read operation returns null.
tiff = Tiff.ClientOpen("image", Stream.Length == 0 ? "w" : "ra", Stream, new TIFFTruncStream());
The returned tiff is null, and the log contains a "Zero scanline size" trace message.
The .NET Framework and some other viewers cannot open the file, but we have managed to open the file(s) in some older IBM viewers. Is this definitely a corrupt file, or just a scenario unsupported by LibTiff.Net?
Thanks
A zero scanline size is definitely not supported by libtiff/LibTiff.Net, and I do not know of any other viewer that supports images with scanlines of zero length.
Jim sent us a couple of such files, and it turned out that the files are corrupt/broken: they specify a zero width for their first page.
I tried to open these files in some other image viewers, and only the Preview utility in Mac OS X Mavericks could open them. It opens both files but silently skips the first, broken page; it shows no errors and acts as if the files simply have one less page.
To achieve the same behavior (silently skip the first page), you can use the following workaround:
Open the TIFF in append mode
Set the current page to the first page
In a loop, check the size of each page
Skip any page with zero width or height
Below is sample code for the workaround.
// "a" is for append
using (Tiff inImage = Tiff.Open(put-file-name-here, "a"))
{
if (inImage == null)
return;
// move to the first page
inImage.SetDirectory(0);
do
{
FieldValue[] width = inImage.GetField(TiffTag.IMAGEWIDTH);
FieldValue[] height = inImage.GetField(TiffTag.IMAGEWIDTH);
if (width[0].ToInt() != 0 && height[0].ToInt() != 0)
{
// the page appears correct, do something with it
}
} while (inImage.ReadDirectory());
}
I'm using ImageMagick.NET to convert PDFs to JPGs. Here's my code:
MagickReadSettings settings = new MagickReadSettings();
settings.Density = new MagickGeometry(300, 300);

using (MagickImageCollection images = new MagickImageCollection())
{
    images.Read(pdfFilePathString, settings);

    MagickImage image = images.AppendVertically();
    image.Format = MagickFormat.Jpg;
    //image.Quality = 70;
    //if (image.Width > 1024)
    //{
    //    int heightRatio = Convert.ToInt32(Math.Round((decimal)(image.Height / (image.Width / 1024)), 0));
    //    image.Resize(1024, heightRatio);
    //}
    image.Write(tempFilePathString);
    image.Dispose();
}
The problem is that I keep getting insufficient-memory exceptions, which occur on image.Write(). It is clearly related to size: a small PDF works, but a multi-page PDF does not. The particular file I'm trying to run through it is a 12-page text PDF. I can get it to work if I set the density low, for example (100, 100), but the quality is terrible.
The commented-out lines were other solutions I was trying: one reduces the quality, the other reduces the image size. With those enabled, it runs for a long time (several minutes) without finishing, at least as far as my patience is concerned. The resulting images always come out very large, much larger than necessary.
If I could reduce the image size and/or quality before the file is written, that would be great. At minimum, I need to produce an image of decent quality without hitting memory issues. It doesn't seem like it should run out of memory here, as the file isn't enormous, although it is probably still bigger than desired for an image. The 12-page PDF, when I could get it to render, came in at around 6-7 MB.
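One idea I have not tried yet is to skip AppendVertically entirely and write each page as its own JPG, so only one page is held and encoded at a time. A rough sketch of what I mean (same Magick.NET calls as above; outputFolder is just a placeholder):

MagickReadSettings settings = new MagickReadSettings();
settings.Density = new MagickGeometry(300, 300);

using (MagickImageCollection images = new MagickImageCollection())
{
    images.Read(pdfFilePathString, settings);

    int pageIndex = 0;
    foreach (MagickImage page in images)
    {
        page.Format = MagickFormat.Jpg;
        page.Quality = 70; // quality is applied per page here

        // Only one page is encoded at a time, instead of one huge appended image.
        page.Write(Path.Combine(outputFolder, "page" + pageIndex + ".jpg"));
        pageIndex++;
    }
}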
I'm using 32-bit ImageMagick - I wonder if 64-bit would solve the issue, but there have been issues trying to get that version to run on a local environment - which is another issue entirely.
Anybody have any thoughts on anything else I can try?
Thanks
I believe with JPGs, the width and height information is stored within the first few bytes. What's the easiest way to get this information given an absolute URI?
First, you can request the first hundred bytes of an image using the Range header.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.AddRange(0, 100); // ask the server for only the first bytes of the resource
Next, you need to decode. The unix file command has a table of common formats, and the locations of key information. I'd suggest installing Cygwin and taking a look at /usr/share/file/magic.
For GIFs and PNGs, you can easily get the image dimensions from the first 32 bytes. However, for JPEGs, @Andrew is correct that you can't reliably get this information; you can only figure out whether it has a thumbnail, and the size of the thumbnail.
To get the actual JPEG size, you need to scan for the start-of-frame tag. Unfortunately, you can't reliably determine where that will be in advance, and a thumbnail could push it several thousand bytes into the file.
I'd recommend using the range request to get the first 32 bytes. This will let you determine the file type. After which, if it's a JPEG, then download the whole file, and use a library to get the size information.
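To illustrate the GIF/PNG part, here is a minimal sketch of pulling the dimensions out of the first 32 bytes, based on the published header layouts (PNG stores width/height big-endian at offsets 16 and 20; GIF stores them little-endian at offsets 6 and 8). It assumes header already holds at least the first 32 bytes of the file and does no validation beyond the signature check.

// Sketch: read the dimensions of a PNG or GIF from its first 32 bytes.
static bool TryGetPngOrGifSize(byte[] header, out int width, out int height)
{
    width = height = 0;

    // PNG signature starts with 89 50 4E 47; IHDR width/height are 4-byte big-endian values.
    if (header[0] == 0x89 && header[1] == 0x50 && header[2] == 0x4E && header[3] == 0x47)
    {
        width  = (header[16] << 24) | (header[17] << 16) | (header[18] << 8) | header[19];
        height = (header[20] << 24) | (header[21] << 16) | (header[22] << 8) | header[23];
        return true;
    }

    // GIF signature starts with "GIF"; logical screen width/height are 2-byte little-endian values.
    if (header[0] == 'G' && header[1] == 'I' && header[2] == 'F')
    {
        width  = header[6] | (header[7] << 8);
        height = header[8] | (header[9] << 8);
        return true;
    }

    return false;
}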
I am a bit rusty at this, but with JPEG it might not be as simple as it seems. A JPEG has a header within each segment of data, each with its own height/width and resolution, and JPEG was not designed to be streamed easily. You might need to read the entire image to find the width and height of each segment and so get the overall width and height.
If you absolutely need to stream an image, consider switching to another format that is easier to stream; JPEG is going to be tough.
You could do it if you can develop a server-side program that seeks forward and reads the header of each segment to compute that segment's width and height. (A sketch of that kind of marker scan follows.)
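Since I'm rusty, treat this as a rough sketch of the segment scan I mean, not production code: it walks the JPEG markers until it reaches a start-of-frame (SOFn) segment, which carries the image height and width, and it assumes the stream is seekable and positioned at the SOI marker.

// Sketch: scan JPEG markers for a start-of-frame (SOFn) segment and read its dimensions.
// Assumes 'stream' is seekable and positioned at the SOI marker (FF D8).
static bool TryGetJpegSize(Stream stream, out int width, out int height)
{
    width = height = 0;

    if (stream.ReadByte() != 0xFF || stream.ReadByte() != 0xD8)
        return false; // not a JPEG

    while (true)
    {
        // Find the next marker (a 0xFF byte followed by a non-0xFF marker id).
        int b = stream.ReadByte();
        if (b < 0) return false;
        if (b != 0xFF) continue;

        int marker = stream.ReadByte();
        while (marker == 0xFF) marker = stream.ReadByte();
        if (marker < 0) return false;

        // Stand-alone markers (TEM, RSTn, SOI, EOI) carry no length field.
        if (marker == 0x01 || (marker >= 0xD0 && marker <= 0xD9))
            continue;

        int length = (stream.ReadByte() << 8) | stream.ReadByte();

        // SOF0..SOF15, except DHT (C4), JPG (C8) and DAC (CC), describe the frame.
        bool isStartOfFrame = marker >= 0xC0 && marker <= 0xCF
                              && marker != 0xC4 && marker != 0xC8 && marker != 0xCC;
        if (isStartOfFrame)
        {
            stream.ReadByte(); // sample precision
            height = (stream.ReadByte() << 8) | stream.ReadByte();
            width = (stream.ReadByte() << 8) | stream.ReadByte();
            return true;
        }

        // Skip the rest of this segment's payload.
        stream.Seek(length - 2, SeekOrigin.Current);
    }
}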
It's a bit Heath Robinson, but since browsers seem to be able to do it, perhaps you could automate IE to download the image within a webpage and then interrogate the browser's DOM to reveal the dimensions before the image has finished downloading?
Use this code:
public static async Task<Size> GetUrlImageSizeAsync(string url)
{
    try
    {
        var request = WebRequest.Create(url);

        using (var response = await request.GetResponseAsync())
        using (var stream = response.GetResponseStream())
        using (var ms = new MemoryStream())
        {
            // Note: this downloads the whole image; System.Drawing needs a
            // complete stream to decode it, so a byte-range request is not enough.
            stream.CopyTo(ms);
            ms.Position = 0; // rewind before decoding

            using (var img = Image.FromStream(ms))
            {
                return img.Size;
            }
        }
    }
    catch
    {
        return new Size();
    }
}