C#: video compression using AForge.NET

My application receives a sequence of images (BitmapImage) from an external device at 30 fps.
I'm using the AForge.NET library to save the received stream to an .avi file.
I used the following code to initialize the AVIWriter:
AVIWriter writer;
writer = new AVIWriter("wmv3");
writer.FrameRate = 30;
writer.Open("test.avi", 320, 240);
For each received frame I add it to the video stream with the following line:
writer.AddFrame(ResizeBitmap(BitmapImage2Bitmap(e.ColorFrame.BitmapImage),320,240));
But the generated file is too large (10 seconds corresponds to about 3 MB).
I also tried setting a low writer.Quality, but the result seems about the same (just 5-7% smaller).
So, I need more efficient compression.
Which codecs does AForge.NET support, and which one should I use to reduce the size of the saved file?

I suspect that interframe compression is not used by AVIWriter (but I may be wrong).
You may try VideoFileWriter from AForge.Video.FFMPEG instead:
var writer = new VideoFileWriter();
writer.Open("test.mpg", 320, 240, 30, VideoCodec.Default, 1000);
// add your frame
writer.WriteVideoFrame(frame);
Remember to copy the DLLs from Externals/ffmpeg/bin in the AForge archive into your output directory.
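
For reference, here is a fuller sketch of the FFMPEG writer loop. The MPEG-4 codec, the 1 Mbps bit rate, and the GetFrames() frame source are assumptions to adapt to your own capture code:

using (var writer = new VideoFileWriter())
{
    // width, height, frame rate, codec, bit rate in bits per second
    writer.Open("test.mpg", 320, 240, 30, VideoCodec.MPEG4, 1000000);
    foreach (Bitmap frame in GetFrames()) // hypothetical frame source
    {
        writer.WriteVideoFrame(frame);
    }
    writer.Close();
}

The bit rate parameter is what actually controls the output size: at 1,000,000 bps, 10 seconds of video should come to roughly 1.2 MB regardless of resolution.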

IMagickImage.Crop() causes image to be blurry

I have the following C# code:
MagickImage pdfPage = MyCodeToGetPage();
String barcodePng = "tmp.png";
MagickGeometry barcodeArea = new MagickGeometry(350, 153, 208, 36);
IMagickImage barcodeImg = pdfPage.Clone();
barcodeImg.Crop(barcodeArea);
barcodeImg.Write(barcodePng);
It creates a tmp.png file, shown as the lower of the two barcode images.
The problem is that tmp.png is fuzzy, so my barcode detection logic will not detect the barcode. You can see that the upper image is clear and the lines are not merged.
The title says that Crop() is causing the problem, but it could also be the Write().
How do I crop the barcode out of the pdf without making tmp.png fuzzy?
This was not a problem when the source document was a .tif. More precisely, if I convert the .pdf to a .tif and then crop it, the resulting .png is clear enough that the barcode can be detected. I want to eliminate the intermediate .tif because producing it relies on a clumsy printer driver.
As you requested in your answer below:
Adding density on the read was what I had first suggested in my comment to your question. It increases the size of the rasterized version of the input; it is like scanning at a higher density. What I typically do in ImageMagick is to read the PDF at 4x the nominal density, which is 4x72=288, then resize down by 1/4=25%. This will generally give a much better quality result. So the command I would use in command-line ImageMagick is:
convert -density 288 input.pdf -resize 25% result.suffix
I would also add that Ghostscript cannot handle CMYK PDFs that have transparency. So one must change the colorspace to sRGB before reading the pdf file. So in this case, it would be:
convert -density 288 -colorspace sRGB input.pdf -resize 25% result.suffix
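In Magick.NET (the C# library the question uses), a minimal sketch of the same approach could look like this; the 288 density and sRGB colorspace mirror the commands above, and the file names are placeholders:

var settings = new MagickReadSettings
{
    Density = new Density(288),   // rasterize the PDF at 4x the nominal 72 dpi
    ColorSpace = ColorSpace.sRGB  // avoid Ghostscript's CMYK-with-transparency issue
};
using (var pages = new MagickImageCollection())
{
    pages.Read("input.pdf", settings);
    foreach (var page in pages)
    {
        page.Resize(new Percentage(25)); // scale back down by 1/4
    }
    pages.Write("result.png");
}

Note that if you crop at the higher density instead of resizing first, the crop geometry must be scaled by the same 4x factor, since all pixel coordinates grow with the density.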
Sorry, I do not code C#, so perhaps I misunderstand, but I do not see why increasing the density before reading a TIFF would make any difference.
This URL had the answer:
http://www.jiajianhudong.com/question/642668.html
To fix it I changed my code to this:
MagickImage pdfPage = MyCodeToGetPage();
String barcodePng = "tmp.png";
MagickGeometry barcodeArea = new MagickGeometry(350, 153, 208, 36);
IMagickImage barcodeImg = pdfPage.Clone();
barcodeImg.ColorType = ColorType.Bilevel;
barcodeImg.Depth = 1;
barcodeImg.Alpha(AlphaOption.Off);
barcodeImg.Crop(barcodeArea);
barcodeImg.Write(barcodePng);
And the most critical part of the fix was to change:
using (MagickImageCollection tiffPageCollection = new MagickImageCollection())
{
tiffPageCollection.Read(tifName);
to
var settings = new MagickReadSettings { Density = new Density(200) };
using (MagickImageCollection tiffPageCollection = new MagickImageCollection())
{
tiffPageCollection.Read(tifName, settings);
If someone wants to copy my answer and add a clear reason why adding Density on the read fixes the problem, I will give them the accepted answer.

PDFsharp: compress file size in C#

In my app I generate a PDF file with PdfSharp.Xamarin, which I got from this site:
https://github.com/roceh/PdfSharp.Xamarin
Everything is working fine.
My PDF document contains many images, which are already compressed.
But the file size of the PDF document is too large.
Is there a way to compress the PDF document before saving it?
How can I use the PdfSharp.SharpZipLib.Zip namespace to deflate the file size?
UPDATE:
Here is my Code:
document = new PdfDocument();
document.Info.Title = nameDok.Replace(" ", "");
document.Info.Author = "---";
document.Info.CreationDate = DateTime.Now;
document.Info.Subject = nameDok.Replace(" ", "");
//That is how i add Images:
XImage image = XImage.FromStream(lstr);
gfx.DrawImage(image, 465, YPrev - 2, newimagewf, newimagehf);
document.CustomValues.CompressionMode = PdfCustomValueCompressionMode.Compressed;
document.Options.FlateEncodeMode = PdfFlateEncodeMode.BestCompression;
document.Save(speicherPfad);
Thanks for everyone.
I only know the original PDFsharp, not the Xamarin port: images are deflated automatically using SharpZipLib.
Make sure to use appropriate source images (e.g. JPEG or PNG, depending on the image).
On the project start page they write:
"Currently all images created via XGraphics are converted to jpegs with 70% quality."
This could mean that images are re-compressed, maybe leading to larger files than before.
Take one JPEG file, convert it to PDF, and check the size of the image (in bytes) in the PDF file.
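A minimal way to run that test with the original PDFsharp API (the Xamarin port's API may differ; file names are placeholders):

var doc = new PdfDocument();
var page = doc.AddPage();
using (XGraphics gfx = XGraphics.FromPdfPage(page))
using (XImage image = XImage.FromFile("photo.jpg"))
{
    gfx.DrawImage(image, 0, 0, page.Width, page.Height);
}
doc.Save("test.pdf");
// If test.pdf is much larger than photo.jpg, the image is being
// re-encoded rather than embedded as-is.

If the resulting file is roughly the size of the JPEG plus a small overhead, the port passes JPEG data through unchanged and the size problem lies elsewhere.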

C# image compression for serial transmission

This is my first question, so I hope to provide what is needed to get a decent answer.
I want to send an image received by a webcam over a serial link.
The image is converted into a byte array and then written to the serial port.
The first issue I ran into was that trying to send the image led to a TimeoutException. Looking at the length of the byte array, it held around 1 MB of data to transmit. Shrinking the actual size of the image resulted in a much faster transmission, but then the image was far too small.
The second issue was that when I tried to compress the image using different methods, the transmitted size was always exactly the same.
I hope you can help me find a way to improve my implementation so that the transmission takes only a few seconds while still maintaining a reasonable image resolution. Thanks.
Specific Information
Webcam Image
The image from the webcam is received by the AForge library
The image is handled as a Bitmap
(Obviously) it doesn't transmit every frame, only on the click of a button
Serial Port
The port uses a baud rate of 57600 bps (defined by the underlying hardware)
The WriteTimeout value is set to 30 s, as it would be unacceptable to wait longer than that
Text transmission works with the default values of the SerialPort component in a WinForms app
Image Manipulation
I used different approaches to compress the image:
Simple methods like
public static byte[] getBytes(Bitmap img)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // Encode the bitmap as JPEG into the stream.
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
        return ms.ToArray();
    }
}
as well as more advanced methods like the one posted here, using not only Encoder.Quality but also Encoder.Compression. A sketch of the quality-based variant follows below.
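
For completeness, a sketch of the Encoder.Quality variant (System.Drawing.Imaging, plus System.Linq for First; the quality value of 25 is just an assumption to experiment with on a 57600 bps link):

public static byte[] GetJpegBytes(Bitmap img, long quality)
{
    // Find the built-in JPEG codec.
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    using (var ms = new MemoryStream())
    using (var parameters = new EncoderParameters(1))
    {
        // Quality runs from 0 (smallest) to 100 (best).
        parameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
        img.Save(ms, jpegCodec, parameters);
        return ms.ToArray();
    }
}

// e.g. byte[] data = GetJpegBytes(sendFrame, 25L);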
My Application
private void btn_Send(...)
{
    Bitmap currentFrame = getImageFromWebcam();
    //Bitmap sendFrame = new Bitmap(currentFrame, new Size(currentFrame.Width / 10, currentFrame.Height / 10));
    Bitmap sendFrame = compressImage(currentFrame);
    byte[] data = getBytes(sendFrame);
    serialPort.Write(data, 0, data.Length);
}
Changing the timeout property of the serial port would solve the timeout issue, as shown at this link: https://msdn.microsoft.com/en-us/library/system.io.ports.serialport.writetimeout(v=vs.110).aspx. File compression works by looking at blocks of data and associating similar blocks with each other within a given segment of blocks. If your image data is too unique, it will not compress well, depending on the compression software being used.
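As a rough sketch of sizing that timeout: with 8N1 framing, 57600 baud moves about 5,760 bytes per second, so the timeout can be derived from the payload size (the 2-second margin is an arbitrary safety allowance):

int bytesPerSecond = 57600 / 10; // 8 data bits + start + stop bit per byte
serialPort.WriteTimeout = (data.Length * 1000) / bytesPerSecond + 2000;

At that rate an uncompressed 1 MB frame needs roughly three minutes, which is why shrinking the payload with JPEG compression matters more than raising the timeout.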

.NET - Creating a looping .gif using GifBitmapEncoder

I'm trying to write some code to export animated .gifs from a WPF application using GifBitmapEncoder. What I have so far works fine, but when I view the resulting .gif it runs once and then stops; I'd like it to loop indefinitely.
I've found this previous similar question:
How do I make a GIF repeat in loop when generating with BitmapEncoder
However, he is using the BitmapEncoder from Windows.Graphics.Imaging rather than the System.Windows.Media.Imaging version, which seems to be somewhat different. Nonetheless, that gave me a direction, and after a bit more googling I came up with this:
Dim encoder As New GifBitmapEncoder
Dim metaData As New BitmapMetadata("gif")
metaData.SetQuery("/appext/Application", System.Text.Encoding.ASCII.GetBytes("NETSCAPE2.0"))
metaData.SetQuery("/appext/Data", New Byte() {3, 1, 0, 0, 0})
'The following line throws the exception "The designated BitmapEncoder does not support global metadata.":
'encoder.Metadata = metaData
If DrawingManager.Instance.SelectedFacing IsNot Nothing Then
    For Each Frame As Frame In DrawingManager.Instance.SelectedFacing.Frames
        Dim bmpFrame As BitmapFrame = BitmapFrame.Create(Frame.CombinedImage, Nothing, metaData, Nothing)
        encoder.Frames.Add(bmpFrame)
    Next
End If
Dim fs As New FileStream(newFileName, FileMode.Create)
encoder.Save(fs)
fs.Close()
Initially I tried adding the metadata directly to the encoder (the commented-out line in the code above), but at runtime that throws the exception "The designated BitmapEncoder does not support global metadata". I can instead attach the metadata to each frame, and although that doesn't crash, the resulting .gif doesn't loop either (and I would expect the looping metadata to need to be global anyway).
Can anyone offer any advice?
I finally got this to work after studying this article and examining the raw bytes of GIF files. If you want to do so yourself, you can get the bytes in hex format using PowerShell like so:
$bytes = [System.IO.File]::ReadAllBytes("C:\Users\Me\Desktop\SomeGif.gif")
[System.BitConverter]::ToString($bytes)
The GifBitmapEncoder appears to write the Header, the Logical Screen Descriptor, and then the Graphic Control Extension; the "NETSCAPE2.0" application extension is missing. In GIFs from other sources that do loop, the missing extension always appears right before the Graphic Control Extension.
So I just plugged in the bytes after the 13th byte, since the first two sections (the 6-byte header plus the 7-byte Logical Screen Descriptor) always have that length.
// After adding all frames to gifEncoder (the GifBitmapEncoder)...
using (var ms = new MemoryStream())
{
    gifEncoder.Save(ms);
    var fileBytes = ms.ToArray();
    // This is the NETSCAPE2.0 Application Extension:
    // 0x21 0xFF (extension introducer + application label), 0x0B (block size),
    // "NETSCAPE2.0", then 0x03 (sub-block size), 0x01 (loop sub-block id),
    // 0x00 0x00 (loop count, little-endian; 0 = loop forever), 0x00 (terminator).
    var applicationExtension = new byte[] { 33, 255, 11, 78, 69, 84, 83, 67, 65, 80, 69, 50, 46, 48, 3, 1, 0, 0, 0 };
    var newBytes = new List<byte>();
    newBytes.AddRange(fileBytes.Take(13)); // 6-byte header + 7-byte Logical Screen Descriptor
    newBytes.AddRange(applicationExtension);
    newBytes.AddRange(fileBytes.Skip(13));
    File.WriteAllBytes(saveFile, newBytes.ToArray());
}
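
For reference, the two bytes before the trailing terminator hold the loop count as a little-endian UInt16, with 0 meaning loop forever, so a finite repeat count could (hypothetically) be spliced in the same way:

// Hypothetical variant: loop exactly 5 times instead of forever.
ushort loopCount = 5;
applicationExtension[16] = (byte)(loopCount & 0xFF); // low byte
applicationExtension[17] = (byte)(loopCount >> 8);   // high byte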
Did you know that you can just download this functionality? Take a look at the WPF Animated GIF page on CodePlex. Alternatively, there is WPF Animated GIF 1.4.4 in the NuGet Gallery. If you prefer a tutorial, take a look at the GIF Animation in WPF page on the Code Project website.
@PaulJeffries, I do apologise; I misunderstood your question. I have used some code from a post here before to animate a .gif file. It is quite straightforward, and you might be able to 'reverse engineer' it for your purposes. Please take a look at the 'How do I get an animated gif to work in WPF?' post to see if that helps. (I am aware that the code's actual purpose is also to animate a .gif.)

iTextsharp - PDF file size after inserting image

I'm currently converting some legacy code to create PDF files using iTextSharp. We're creating a largish PDF file that contains a number of images, which I'm inserting like so:
Document doc = new Document(PageSize.A4, 50, 50, 25, 25);
PdfWriter writer = PdfWriter.GetInstance(doc, myStream);
writer.SetFullCompression();
doc.Open();
Image frontCover = iTextSharp.text.Image.GetInstance(@"C:\MyImage.png");
//Scale down from a 96 dpi image to standard itextsharp 72 dpi
frontCover.ScalePercent(75f);
frontCover.SetAbsolutePosition(0, 0);
doc.Add(frontCover);
doc.Close();
Inserting an image (a 20.8 KB PNG file) seems to increase the PDF file size by nearly 100 KB.
Is there a way of compressing the image before insertion (bearing in mind that it needs to be of reasonable print quality), or of further compressing the entire PDF? Am I even performing any compression in the example above?
The answer appears to be that you need to target an appropriate version of the PDF spec and then set the compression, as follows:
PdfWriter writer = PdfWriter.GetInstance(doc, ms);
PdfContentByte contentPlacer;
writer.SetPdfVersion(PdfWriter.PDF_VERSION_1_5);
writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
This brought my file size down considerably. I also found that PNGs gave me the best results with regard to the final size of the document.
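
Putting the question's snippet together with those settings, a minimal consolidated sketch (iTextSharp 5.x-style API, file names assumed) might look like:

using (var ms = new MemoryStream())
{
    var doc = new Document(PageSize.A4, 50, 50, 25, 25);
    PdfWriter writer = PdfWriter.GetInstance(doc, ms);
    writer.SetPdfVersion(PdfWriter.PDF_VERSION_1_5); // compressed streams need PDF 1.5+
    writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
    writer.SetFullCompression();
    doc.Open();
    var frontCover = iTextSharp.text.Image.GetInstance(@"C:\MyImage.png");
    frontCover.ScalePercent(75f);
    frontCover.SetAbsolutePosition(0, 0);
    doc.Add(frontCover);
    doc.Close();
    File.WriteAllBytes("output.pdf", ms.ToArray());
}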
I did some experiments this morning. My test image was 800x600 with a file size of 100.69 KB when saved as a PNG. I inserted it into a PDF (using iTextSharp and the usual GetInstance() method) and the file size increased from 301.71 KB to 402.63 KB. I then re-saved my test image as a raw bitmap with a file size of 1,440,054 bytes. I inserted that into the PDF and the file size went DOWN to 389.81 KB. Interesting!
I did some research on the web for a possible explanation and, based on what I found, it looks like iTextSharp does not compress images as such; rather, it compresses everything with some generic compression. In other words, the BMP is not converted to another file type; it is simply compressed, much as you would by ZIPping it. Whatever they're doing, it must be good, for it compressed better than the PNG encoding did. I assume iTextSharp would also try to compress the PNG but gain almost nothing, since it is already compressed. (This is inconsistent with the original author's observations, though: Paddy said his PDF size increased by much more than the size of the PNG. I'm not sure what to make of that; I can only go on my own experiments.)
Conclusions:
1) I don't need to add a fancy library to my project to convert my (eventually dynamically created) image to PNG; it actually does better to leave it totally uncompressed and let iTextSharp do all the compression work.
2) I also read claims on the web about iTextSharp saving images at a certain DPI. I did NOT see this problem. I used the ScalePercent() method to scale the bitmap to 1% and the file size stayed the same, with no loss of pixels in the bitmap; this confirms that iTextSharp applies a simple, generic, lossless compression.
It seems that PDF requires the PNG to be transcoded to something else, most probably JPEG.
See here: http://forums.adobe.com/message/2952201
The only thing I can think of is to convert the PNG to the smallest acceptable JPEG first, including the 75% downscale, and then import that file without scaling.
use:
var image = iTextSharp.text.Image.GetInstance(srcImage, ImageFormat.Jpeg);
image.ScaleToFit(document.PageSize.Width, document.PageSize.Height);
//image.ScalePercent(75f);
image.SetAbsolutePosition(0, 0);
document.Add(image);
document.NewPage();
