Big red X on image and I cannot detect the error - C#

To be brief, I'm trying to implement a motion detection algorithm in a UWP app, using the portable version of the AForge library for the image processing. Leaving the detection itself aside, my problem is converting the SoftwareBitmap object (which I get from a MediaFrameReader) to the Bitmap object that my motion detection code works with, and vice versa. As a consequence of this conversion, I get an otherwise correct image with a big red X in the foreground. Code below:
private async void FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    var frame = sender.TryAcquireLatestFrame();
    if (frame != null && !_detectingMotion)
    {
        SoftwareBitmap aForgeInputBitmap = null;
        var inputBitmap = frame.VideoMediaFrame?.SoftwareBitmap;
        if (inputBitmap != null)
        {
            _detectingMotion = true;

            // The XAML Image control can only display images in BGRA8 format with premultiplied or no alpha
            if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8
                && inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
            {
                aForgeInputBitmap = SoftwareBitmap.Copy(inputBitmap);
            }
            else
            {
                aForgeInputBitmap = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore);
            }

            await _aForgeHelper.MoveBackgrounds(aForgeInputBitmap);
            SoftwareBitmap aForgeOutputBitmap = await _aForgeHelper.DetectMotion();

            _frameRenderer.PresentSoftwareBitmap(aForgeOutputBitmap);
            _detectingMotion = false;
        }
    }
}
class AForgeHelper
{
    private Bitmap _background;
    private Bitmap _currentFrameBitmap;

    public async Task MoveBackgrounds(SoftwareBitmap currentFrame)
    {
        if (_background == null)
        {
            _background = TransformToGrayscale(await ConvertSoftwareBitmapToBitmap(currentFrame));
        }
        else
        {
            // modifying _background in compliance with the algorithm - irrelevant in this case
        }
    }

    public async Task<SoftwareBitmap> DetectMotion()
    {
        // to check only this conversion
        return await ConvertBitmapToSoftwareBitmap(_background);
    }

    private static async Task<Bitmap> ConvertSoftwareBitmapToBitmap(SoftwareBitmap input)
    {
        Bitmap output = null;
        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            WriteableBitmap tmpBitmap = new WriteableBitmap(input.PixelWidth, input.PixelHeight);
            input.CopyToBuffer(tmpBitmap.PixelBuffer);
            output = (Bitmap)tmpBitmap;
        });
        return output;
    }

    private static async Task<SoftwareBitmap> ConvertBitmapToSoftwareBitmap(Bitmap input)
    {
        SoftwareBitmap output = null;
        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
        {
            WriteableBitmap tmpBitmap = (WriteableBitmap)input;
            output = new SoftwareBitmap(BitmapPixelFormat.Bgra8, tmpBitmap.PixelWidth, tmpBitmap.PixelHeight,
                BitmapAlphaMode.Premultiplied);
            output.CopyFromBuffer(tmpBitmap.PixelBuffer);
        });
        return output;
    }

    private static Bitmap TransformToGrayscale(Bitmap input)
    {
        Grayscale grayscaleFilter = new Grayscale(0.2125, 0.7154, 0.0721);
        Bitmap output = grayscaleFilter.Apply(input);
        return output;
    }
}
Certainly, I've tried to catch any errors using try-catch blocks; I've found nothing. Thanks in advance.
EDIT (29/03/2018):
Generally, my app aims to provide some features associated with the Kinect sensor, and the user can choose a feature from a list. First of all, the app has to be available on Xbox One, which is why I've chosen UWP. For 'scientific' reasons I've implemented the MVVM pattern, using the MVVM Light framework. As for the PresentSoftwareBitmap() method, it comes from the Windows-universal-samples repo, and I paste the FrameRenderer helper class below:
[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}
class FrameRenderer
{
private Image _imageElement;
private SoftwareBitmap _backBuffer;
private bool _taskRunning = false;
public FrameRenderer(Image imageElement)
{
_imageElement = imageElement;
_imageElement.Source = new SoftwareBitmapSource();
}
// Processes a MediaFrameReference and displays it in a XAML image control
public void ProcessFrame(MediaFrameReference frame)
{
var softwareBitmap = FrameRenderer.ConvertToDisplayableImage(frame?.VideoMediaFrame);
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
// Function delegate that transforms a scanline from an input image to an output image.
private unsafe delegate void TransformScanline(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes);
/// <summary>
/// Determines the subtype to request from the MediaFrameReader that will result in
/// a frame that can be rendered by ConvertToDisplayableImage.
/// </summary>
/// <returns>Subtype string to request, or null if subtype is not renderable.</returns>
public static string GetSubtypeForFrameReader(MediaFrameSourceKind kind, MediaFrameFormat format)
{
// Note that media encoding subtypes may differ in case.
// https://learn.microsoft.com/en-us/uwp/api/Windows.Media.MediaProperties.MediaEncodingSubtypes
string subtype = format.Subtype;
switch (kind)
{
// For color sources, we accept anything and request that it be converted to Bgra8.
case MediaFrameSourceKind.Color:
return Windows.Media.MediaProperties.MediaEncodingSubtypes.Bgra8;
// The only depth format we can render is D16.
case MediaFrameSourceKind.Depth:
return String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.D16, StringComparison.OrdinalIgnoreCase) ? subtype : null;
// The only infrared formats we can render are L8 and L16.
case MediaFrameSourceKind.Infrared:
return (String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L8, StringComparison.OrdinalIgnoreCase) ||
String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L16, StringComparison.OrdinalIgnoreCase)) ? subtype : null;
// No other source kinds are supported by this class.
default:
return null;
}
}
/// <summary>
/// Converts a frame to a SoftwareBitmap of a valid format to display in an Image control.
/// </summary>
/// <param name="inputFrame">Frame to convert.</param>
public static unsafe SoftwareBitmap ConvertToDisplayableImage(VideoMediaFrame inputFrame)
{
SoftwareBitmap result = null;
using (var inputBitmap = inputFrame?.SoftwareBitmap)
{
if (inputBitmap != null)
{
switch (inputFrame.FrameReference.SourceKind)
{
case MediaFrameSourceKind.Color:
// XAML requires Bgra8 with premultiplied alpha.
// We requested Bgra8 from the MediaFrameReader, so all that's
// left is fixing the alpha channel if necessary.
if (inputBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8)
{
System.Diagnostics.Debug.WriteLine("Color frame in unexpected format.");
}
else if (inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
// Already in the correct format.
result = SoftwareBitmap.Copy(inputBitmap);
}
else
{
// Convert to premultiplied alpha.
result = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
break;
case MediaFrameSourceKind.Depth:
// We requested D16 from the MediaFrameReader, so the frame should
// be in Gray16 format.
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Gray16)
{
// Use a special pseudo color to render 16 bits depth frame.
var depthScale = (float)inputFrame.DepthMediaFrame.DepthFormat.DepthScaleInMeters;
var minReliableDepth = inputFrame.DepthMediaFrame.MinReliableDepth;
var maxReliableDepth = inputFrame.DepthMediaFrame.MaxReliableDepth;
result = TransformBitmap(inputBitmap, (w, i, o) => PseudoColorHelper.PseudoColorForDepth(w, i, o, depthScale, minReliableDepth, maxReliableDepth));
}
else
{
System.Diagnostics.Debug.WriteLine("Depth frame in unexpected format.");
}
break;
case MediaFrameSourceKind.Infrared:
// We requested L8 or L16 from the MediaFrameReader, so the frame should
// be in Gray8 or Gray16 format.
switch (inputBitmap.BitmapPixelFormat)
{
case BitmapPixelFormat.Gray16:
// Use pseudo color to render 16 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor16BitInfrared);
break;
case BitmapPixelFormat.Gray8:
// Use pseudo color to render 8 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor8BitInfrared);
break;
default:
System.Diagnostics.Debug.WriteLine("Infrared frame in unexpected format.");
break;
}
break;
}
}
}
return result;
}
/// <summary>
/// Transform image into Bgra8 image using given transform method.
/// </summary>
/// <param name="softwareBitmap">Input image to transform.</param>
/// <param name="transformScanline">Method to map pixels in a scanline.</param>
private static unsafe SoftwareBitmap TransformBitmap(SoftwareBitmap softwareBitmap, TransformScanline transformScanline)
{
// XAML Image control only supports premultiplied Bgra8 format.
var outputBitmap = new SoftwareBitmap(BitmapPixelFormat.Bgra8,
softwareBitmap.PixelWidth, softwareBitmap.PixelHeight, BitmapAlphaMode.Premultiplied);
using (var input = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
using (var output = outputBitmap.LockBuffer(BitmapBufferAccessMode.Write))
{
// Get stride values to calculate buffer position for a given pixel x and y position.
int inputStride = input.GetPlaneDescription(0).Stride;
int outputStride = output.GetPlaneDescription(0).Stride;
int pixelWidth = softwareBitmap.PixelWidth;
int pixelHeight = softwareBitmap.PixelHeight;
using (var outputReference = output.CreateReference())
using (var inputReference = input.CreateReference())
{
// Get input and output byte access buffers.
byte* inputBytes;
uint inputCapacity;
((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputBytes, out inputCapacity);
byte* outputBytes;
uint outputCapacity;
((IMemoryBufferByteAccess)outputReference).GetBuffer(out outputBytes, out outputCapacity);
// Iterate over all pixels and store converted value.
for (int y = 0; y < pixelHeight; y++)
{
byte* inputRowBytes = inputBytes + y * inputStride;
byte* outputRowBytes = outputBytes + y * outputStride;
transformScanline(pixelWidth, inputRowBytes, outputRowBytes);
}
}
}
return outputBitmap;
}
/// <summary>
/// A helper class to manage look-up-table for pseudo-colors.
/// </summary>
private static class PseudoColorHelper
{
#region Constructor, private members and methods
private const int TableSize = 1024; // Look up table size
private static readonly uint[] PseudoColorTable;
private static readonly uint[] InfraredRampTable;
// Color palette mapping value from 0 to 1 to blue to red colors.
private static readonly Color[] ColorRamp =
{
Color.FromArgb(a:0xFF, r:0x7F, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x7F, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0xFF, b:0x00),
Color.FromArgb(a:0xFF, r:0x7F, g:0xFF, b:0x7F),
Color.FromArgb(a:0xFF, r:0x00, g:0xFF, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x7F, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0x7F),
};
static PseudoColorHelper()
{
PseudoColorTable = InitializePseudoColorLut();
InfraredRampTable = InitializeInfraredRampLut();
}
/// <summary>
/// Maps an input infrared value between [0, 1] to corrected value between [0, 1].
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)] // Tell the compiler to inline this method to improve performance
private static uint InfraredColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return InfraredRampTable[index];
}
/// <summary>
/// Initializes the pseudo-color look up table for infrared pixels
/// </summary>
private static uint[] InitializeInfraredRampLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
var value = (float)i / TableSize;
// Adjust to increase color change between lower values in infrared images
var alpha = (float)Math.Pow(1 - value, 12);
lut[i] = ColorRampInterpolation(alpha);
}
return lut;
}
/// <summary>
/// Initializes pseudo-color look up table for depth pixels
/// </summary>
private static uint[] InitializePseudoColorLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
lut[i] = ColorRampInterpolation((float)i / TableSize);
}
return lut;
}
/// <summary>
/// Maps a float value to a pseudo-color pixel
/// </summary>
private static uint ColorRampInterpolation(float value)
{
// Map value to surrounding indexes on the color ramp
int rampSteps = ColorRamp.Length - 1;
float scaled = value * rampSteps;
int integer = (int)scaled;
int index =
integer < 0 ? 0 :
integer >= rampSteps - 1 ? rampSteps - 1 :
integer;
Color prev = ColorRamp[index];
Color next = ColorRamp[index + 1];
// Set color based on ratio of closeness between the surrounding colors
uint alpha = (uint)((scaled - integer) * 255);
uint beta = 255 - alpha;
return
((prev.A * beta + next.A * alpha) / 255) << 24 | // Alpha
((prev.R * beta + next.R * alpha) / 255) << 16 | // Red
((prev.G * beta + next.G * alpha) / 255) << 8 | // Green
((prev.B * beta + next.B * alpha) / 255); // Blue
}
/// <summary>
/// Maps a value in [0, 1] to a pseudo RGBA color.
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint PseudoColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return PseudoColorTable[index];
}
#endregion
/// <summary>
/// Maps each pixel in a scanline from a 16 bit depth value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
/// <param name="depthScale">Physical distance that corresponds to one unit in the input scanline.</param>
/// <param name="minReliableDepth">Shortest distance at which the sensor can provide reliable measurements.</param>
/// <param name="maxReliableDepth">Furthest distance at which the sensor can provide reliable measurements.</param>
public static unsafe void PseudoColorForDepth(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes, float depthScale, float minReliableDepth, float maxReliableDepth)
{
// Visualize space in front of your desktop.
float minInMeters = minReliableDepth * depthScale;
float maxInMeters = maxReliableDepth * depthScale;
float one_min = 1.0f / minInMeters;
float range = 1.0f / maxInMeters - one_min;
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
var depth = inputRow[x] * depthScale;
if (depth == 0)
{
// Map invalid depth values to transparent pixels.
// This happens when depth information cannot be calculated, e.g. when objects are too close.
outputRow[x] = 0;
}
else
{
var alpha = (1.0f / depth - one_min) / range;
outputRow[x] = PseudoColor(alpha * alpha);
}
}
}
/// <summary>
/// Maps each pixel in a scanline from a 8 bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor8BitInfrared(
int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
byte* inputRow = inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)Byte.MaxValue);
}
}
/// <summary>
/// Maps each pixel in a scanline from a 16 bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor16BitInfrared(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)UInt16.MaxValue);
}
}
}
// Displays the provided softwareBitmap in a XAML image control.
public void PresentSoftwareBitmap(SoftwareBitmap softwareBitmap)
{
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
}
This is the output image that I've got after the conversion and the grayscale process:
Output image
As for the VS version: Visual Studio Enterprise 2017, version 15.6.1, to be precise. Once more, thanks in advance for your help.

Related

Editing animated GIF in C#

I am attempting to modify an animated GIF. That is, I need to make modifications to certain sequences of frames. In my case I need to add some text depending on what a user has done.
Mathew Sachin provided sample animated GIF code for loading and saving an image in a similar question on 2012-11-13. This code is considerably more compact than other offerings and is reproduced here:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
/// <summary>
/// Uses default .net GIF encoding and adds animation headers.
/// </summary>
public class Gif : IDisposable, IEnumerable<Image>
{
#region Header Constants
const byte FileTrailer = 0x3b,
ApplicationBlockSize = 0x0b,
GraphicControlExtensionBlockSize = 0x04;
const int ApplicationExtensionBlockIdentifier = 0xff21,
GraphicControlExtensionBlockIdentifier = 0xf921;
const long SourceGlobalColorInfoPosition = 10,
SourceGraphicControlExtensionPosition = 781,
SourceGraphicControlExtensionLength = 8,
SourceImageBlockPosition = 789,
SourceImageBlockHeaderLength = 11,
SourceColorBlockPosition = 13,
SourceColorBlockLength = 768;
const string ApplicationIdentification = "NETSCAPE2.0",
FileType = "GIF",
FileVersion = "89a";
#endregion
class GifFrame
{
public GifFrame(Image image, double delay, int xOffset, int yOffset)
{
Image = image;
Delay = delay;
XOffset = xOffset;
YOffset = yOffset;
}
public Image Image;
public double Delay;
public int XOffset, YOffset;
}
List<GifFrame> Frames = new List<GifFrame>();
public Gif() { DefaultFrameDelay = 500; }
public Gif(Stream InStream, int Repeat = 0, int Delay = 500)
{
using (Image Animation = Bitmap.FromStream(InStream))
{
int Length = Animation.GetFrameCount(FrameDimension.Time);
DefaultFrameDelay = Delay;
this.Repeat = Repeat;
for (int i = 0; i < Length; ++i)
{
Animation.SelectActiveFrame(FrameDimension.Time, i);
var Frame = new Bitmap(Animation.Size.Width, Animation.Size.Height);
Graphics.FromImage(Frame).DrawImage(Animation, new Point(0, 0));
Frames.Add(new GifFrame(Frame, Delay, 0, 0));
}
}
}
#region Properties
public int DefaultWidth { get; set; }
public int DefaultHeight { get; set; }
public int Count { get { return Frames.Count; } }
/// <summary>
/// Default Delay in Milliseconds
/// </summary>
public int DefaultFrameDelay { get; set; }
public int Repeat { get; private set; }
#endregion
/// <summary>
/// Adds a frame to this animation.
/// </summary>
/// <param name="Image">The image to add</param>
/// <param name="XOffset">The positioning x offset this image should be displayed at.</param>
/// <param name="YOffset">The positioning y offset this image should be displayed at.</param>
public void AddFrame(Image Image, double? frameDelay = null, int XOffset = 0, int YOffset = 0)
{
Frames.Add(new GifFrame(Image, frameDelay ?? DefaultFrameDelay, XOffset, YOffset));
}
public void AddFrame(string FilePath, double? frameDelay = null, int XOffset = 0, int YOffset = 0)
{
AddFrame(new Bitmap(FilePath), frameDelay, XOffset, YOffset);
}
public void RemoveAt(int Index) { Frames.RemoveAt(Index); }
public void Clear() { Frames.Clear(); }
public void Save(Stream OutStream)
{
using (var Writer = new BinaryWriter(OutStream))
{
for (int i = 0; i < Count; ++i)
{
var Frame = Frames[i];
using (var gifStream = new MemoryStream())
{
Frame.Image.Save(gifStream, ImageFormat.Gif);
// Steal the global color table info
if (i == 0) InitHeader(gifStream, Writer, Frame.Image.Width, Frame.Image.Height);
WriteGraphicControlBlock(gifStream, Writer, Frame.Delay);
WriteImageBlock(gifStream, Writer, i != 0, Frame.XOffset, Frame.YOffset, Frame.Image.Width, Frame.Image.Height);
}
}
// Complete File
Writer.Write(FileTrailer);
}
}
#region Write
void InitHeader(Stream sourceGif, BinaryWriter Writer, int w, int h)
{
// File Header
Writer.Write(FileType.ToCharArray());
Writer.Write(FileVersion.ToCharArray());
Writer.Write((short)(DefaultWidth == 0 ? w : DefaultWidth)); // Initial Logical Width
Writer.Write((short)(DefaultHeight == 0 ? h : DefaultHeight)); // Initial Logical Height
sourceGif.Position = SourceGlobalColorInfoPosition;
Writer.Write((byte)sourceGif.ReadByte()); // Global Color Table Info
Writer.Write((byte)0); // Background Color Index
Writer.Write((byte)0); // Pixel aspect ratio
WriteColorTable(sourceGif, Writer);
// App Extension Header
unchecked { Writer.Write((short)ApplicationExtensionBlockIdentifier); };
Writer.Write((byte)ApplicationBlockSize);
Writer.Write(ApplicationIdentification.ToCharArray());
Writer.Write((byte)3); // Application block length
Writer.Write((byte)1);
Writer.Write((short)Repeat); // Repeat count for images.
Writer.Write((byte)0); // terminator
}
void WriteColorTable(Stream sourceGif, BinaryWriter Writer)
{
sourceGif.Position = SourceColorBlockPosition; // Locating the image color table
var colorTable = new byte[SourceColorBlockLength];
sourceGif.Read(colorTable, 0, colorTable.Length);
Writer.Write(colorTable, 0, colorTable.Length);
}
void WriteGraphicControlBlock(Stream sourceGif, BinaryWriter Writer, double frameDelay)
{
sourceGif.Position = SourceGraphicControlExtensionPosition; // Locating the source GCE
var blockhead = new byte[SourceGraphicControlExtensionLength];
sourceGif.Read(blockhead, 0, blockhead.Length); // Reading source GCE
unchecked { Writer.Write((short)GraphicControlExtensionBlockIdentifier); }; // Identifier
Writer.Write((byte)GraphicControlExtensionBlockSize); // Block Size
Writer.Write((byte)(blockhead[3] & 0xf7 | 0x08)); // Setting disposal flag
Writer.Write((short)(frameDelay / 10)); // Setting frame delay
Writer.Write((byte)blockhead[6]); // Transparent color index
Writer.Write((byte)0); // Terminator
}
void WriteImageBlock(Stream sourceGif, BinaryWriter Writer, bool includeColorTable, int x, int y, int w, int h)
{
sourceGif.Position = SourceImageBlockPosition; // Locating the image block
var header = new byte[SourceImageBlockHeaderLength];
sourceGif.Read(header, 0, header.Length);
Writer.Write((byte)header[0]); // Separator
Writer.Write((short)x); // Position X
Writer.Write((short)y); // Position Y
Writer.Write((short)w); // Width
Writer.Write((short)h); // Height
if (includeColorTable) // If first frame, use global color table - else use local
{
sourceGif.Position = SourceGlobalColorInfoPosition;
Writer.Write((byte)(sourceGif.ReadByte() & 0x3f | 0x80)); // Enabling local color table
WriteColorTable(sourceGif, Writer);
}
else Writer.Write((byte)(header[9] & 0x07 | 0x07)); // Disabling local color table
Writer.Write((byte)header[10]); // LZW Min Code Size
// Read/Write image data
sourceGif.Position = SourceImageBlockPosition + SourceImageBlockHeaderLength;
var dataLength = sourceGif.ReadByte();
while (dataLength > 0)
{
var imgData = new byte[dataLength];
sourceGif.Read(imgData, 0, dataLength);
Writer.Write((byte)dataLength);
Writer.Write(imgData, 0, dataLength);
dataLength = sourceGif.ReadByte();
}
Writer.Write((byte)0); // Terminator
}
#endregion
public void Dispose()
{
Frames.Clear();
Frames = null;
}
public Image this[int Index] { get { return Frames[Index].Image; } }
public IEnumerator<Image> GetEnumerator() { foreach (var Frame in Frames) yield return Frame.Image; }
IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
The constructor takes a stream and stores each frame in a list of GifFrame objects.
So given a GIF file in a stream I can store it internally with:-
Gif myGif = new Gif(imageStream);
I can then take the Image from any frame and modify it as required.
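For example, adding a caption to a single frame might look something like this (just a sketch; the frame index, text, font and position are placeholders, not from my real code):
using (Graphics g = Graphics.FromImage(myGif[50]))
using (Font font = new Font("Arial", 16))
using (Brush brush = new SolidBrush(Color.White))
{
    // Draw directly onto the frame's Image; the indexer returns the stored frame
    g.DrawString("Sample caption", font, brush, new PointF(10, 10));
}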
I know this works correctly because I can take a random frame and save it away as a single image as here:-
Image frame50 = myGif[50];
frame50.Save("C:\\frame50.gif", ImageFormat.Gif);
My question is specifically: having modified the animation, how do I obtain it as a stream or file to display to the user?
In theory the Save method does this.
Stream outStream = new MemoryStream();
myGif.Save(outStream);
Image img = Image.FromStream(outStream);
In practice this crashes with 'cannot access a closed stream'. I think this is because the 'using' clause within the method that loads each frame in turn closes the stream.
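If it turns out to be the BinaryWriter inside Save() that closes the stream (rather than the constructor), then I imagine something along these lines would keep it open, though I haven't verified it:
// Untested sketch: let the BinaryWriter leave the underlying stream open
// (the leaveOpen overload exists from .NET 4.5 onwards), then rewind before reading.
// Inside Gif.Save:
//     using (var Writer = new BinaryWriter(OutStream, System.Text.Encoding.UTF8, leaveOpen: true))
// Caller:
Stream outStream = new MemoryStream();
myGif.Save(outStream);
outStream.Position = 0;
Image img = Image.FromStream(outStream);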
Can anybody see a fix for this?
Many thanks
Tony Reynolds (UK)

Upload from IOS picture to .net app: Rotate

I have the code below for uploading and resizing pictures from iOS devices to my .NET application. Users usually take pictures in portrait orientation, and then all the pictures show up in my app with the wrong rotation. Any suggestion on how to fix this?
string fileName = Server.HtmlEncode(FileUploadFormbilde.FileName);
string extension = System.IO.Path.GetExtension(fileName);
System.Drawing.Image image_file = System.Drawing.Image.FromStream(FileUploadFormbilde.PostedFile.InputStream);
int image_height = image_file.Height;
int image_width = image_file.Width;
int max_height = 300;
int max_width = 300;
image_height = (image_height * max_width) / image_width;
image_width = max_width;
if (image_height > max_height)
{
image_width = (image_width * max_height) / image_height;
image_height = max_height;
}
Bitmap bitmap_file = new Bitmap(image_file, image_width, image_height);
System.IO.MemoryStream stream = new System.IO.MemoryStream();
bitmap_file.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
stream.Position = 0;
byte[] data = new byte[stream.Length + 1];
stream.Read(data, 0, data.Length);
Here you go my friend:
Image originalImage = Image.FromStream(new MemoryStream(data)); // Image.FromStream expects a Stream, so wrap the byte[] from the code above
if (originalImage.PropertyIdList.Contains(0x0112))
{
int rotationValue = originalImage.GetPropertyItem(0x0112).Value[0];
switch (rotationValue)
{
case 1: // landscape, do nothing
break;
case 8: // rotated 90 right
// de-rotate:
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate270FlipNone);
break;
case 3: // bottoms up
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate180FlipNone);
break;
case 6: // rotated 90 left
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate90FlipNone);
break;
}
}
You must read the image's Orientation value from the EXIF data in the Image.PropertyItems collection, and rotate it accordingly.
Here is a better solution, from an answer posted here. He wrote a simple helper class that does all of that; you can check the full source code here.
private System.Drawing.Image ResizeAndDraw(System.Drawing.Image objTempImage)
{
// call image helper to fix the orientation issue
var temp = ImageHelper.RotateImageByExifOrientationData(objTempImage, true);
Size objSize = new Size(150, 200);
Bitmap objBmp;
objBmp = new Bitmap(objSize.Width, objSize.Height);
Graphics g = Graphics.FromImage(objBmp);
g.SmoothingMode = SmoothingMode.HighQuality;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
//Rectangle rect = new Rectangle(x, y, thumbSize.Width, thumbSize.Height);
Rectangle rect = new Rectangle(0,0,150,200);
//g.DrawImage(objTempImage, rect, 0, 0, objTempImage.Width, objTempImage.Height, GraphicsUnit.Pixel);
g.DrawImage(objTempImage, rect);
return objBmp;
}
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;
public static class ImageHelper
{
/// <summary>
/// Rotate the given image file according to Exif Orientation data
/// </summary>
/// <param name="sourceFilePath">path of source file</param>
/// <param name="targetFilePath">path of target file</param>
/// <param name="targetFormat">target format</param>
/// <param name="updateExifData">set it to TRUE to update image Exif data after rotation (default is TRUE)</param>
/// <returns>The RotateFlipType value corresponding to the applied rotation. If no rotation occurred, RotateFlipType.RotateNoneFlipNone will be returned.</returns>
public static RotateFlipType RotateImageByExifOrientationData(string sourceFilePath, string targetFilePath, ImageFormat targetFormat, bool updateExifData = true)
{
// Rotate the image according to EXIF data
var bmp = new Bitmap(sourceFilePath);
RotateFlipType fType = RotateImageByExifOrientationData(bmp, updateExifData);
if (fType != RotateFlipType.RotateNoneFlipNone)
{
bmp.Save(targetFilePath, targetFormat);
}
return fType;
}
/// <summary>
/// Rotate the given bitmap according to Exif Orientation data
/// </summary>
/// <param name="img">source image</param>
/// <param name="updateExifData">set it to TRUE to update image Exif data after rotation (default is TRUE)</param>
/// <returns>The RotateFlipType value corresponding to the applied rotation. If no rotation occurred, RotateFlipType.RotateNoneFlipNone will be returned.</returns>
public static RotateFlipType RotateImageByExifOrientationData(Image img, bool updateExifData = true)
{
int orientationId = 0x0112;
var fType = RotateFlipType.RotateNoneFlipNone;
if (img.PropertyIdList.Contains(orientationId))
{
var pItem = img.GetPropertyItem(orientationId);
fType = GetRotateFlipTypeByExifOrientationData(pItem.Value[0]);
if (fType != RotateFlipType.RotateNoneFlipNone)
{
img.RotateFlip(fType);
// Remove Exif orientation tag (if requested)
if (updateExifData) img.RemovePropertyItem(orientationId);
}
}
return fType;
}
/// <summary>
/// Return the proper System.Drawing.RotateFlipType according to given orientation EXIF metadata
/// </summary>
/// <param name="orientation">Exif "Orientation"</param>
/// <returns>the corresponding System.Drawing.RotateFlipType enum value</returns>
public static RotateFlipType GetRotateFlipTypeByExifOrientationData(int orientation)
{
switch (orientation)
{
case 1:
default:
return RotateFlipType.RotateNoneFlipNone;
case 2:
return RotateFlipType.RotateNoneFlipX;
case 3:
return RotateFlipType.Rotate180FlipNone;
case 4:
return RotateFlipType.Rotate180FlipX;
case 5:
return RotateFlipType.Rotate90FlipX;
case 6:
return RotateFlipType.Rotate90FlipNone;
case 7:
return RotateFlipType.Rotate270FlipX;
case 8:
return RotateFlipType.Rotate270FlipNone;
}
}
}

Draggable selection rectangle

Before anybody points it out: I know that a question with the same title has already been asked here; it just doesn't answer my issue, I think.
I am working in .NET 3.5. As in that question, I am making an area selection component to select an area on a picture. The picture is displayed using a custom control in which the picture is drawn during OnPaint.
I have the following code for my selection rectangle:
internal class AreaSelection : Control
{
private Rectangle selection
{
get { return new Rectangle(Point.Empty, Size.Subtract(this.Size, new Size(1, 1))); }
}
private Size mouseStartLocation;
public AreaSelection()
{
this.Size = new Size(150, 150);
this.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.ResizeRedraw | ControlStyles.SupportsTransparentBackColor, true);
this.BackColor = Color.FromArgb(70, 200, 200, 200);
}
protected override void OnMouseEnter(EventArgs e)
{
this.Cursor = Cursors.SizeAll;
base.OnMouseEnter(e);
}
protected override void OnMouseDown(MouseEventArgs e)
{
this.mouseStartLocation = new Size(e.Location);
base.OnMouseDown(e);
}
protected override void OnMouseMove(MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
Point offset = e.Location - this.mouseStartLocation;
this.Left += offset.X;
this.Top += offset.Y;
}
base.OnMouseMove(e);
}
protected override void OnPaint(PaintEventArgs e)
{
e.Graphics.DrawRectangle(new Pen(Color.Black) { DashStyle = DashStyle.Dash }, this.selection);
Debug.WriteLine("Selection redrawn");
}
}
This gives me a nice semi-transparent rectangle that I can drag around. The problem I have is that, whilst dragging, the underlying image which shows through the rectangle lags behind the position of the rectangle.
This gets more noticeable the faster I move the rectangle. When I stop moving it the image catches up and everything aligns perfectly again.
I assume that there is something wrong with the way the rectangle draws, but I really can't figure out what it is...
Any help would be much appreciated.
EDIT:
I have noticed that the viewer gets redrawn twice as often as the selection area when I drag the selection area. Could this be the cause of the problem?
EDIT 2:
Here is the code for the viewer in case it is relevant:
public enum ImageViewerViewMode
{
Normal,
PrintSelection,
PrintPreview
}
public enum ImageViewerZoomMode
{
None,
OnClick,
Lens
}
public partial class ImageViewer : UserControl
{
/// <summary>
/// The current zoom factor. Note: Use SetZoom() to set the value.
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float ZoomFactor
{
get { return this.zoomFactor; }
private set
{
this.zoomFactor = value;
}
}
/// <summary>
/// The maximum zoom factor to use
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float MaximumZoomFactor
{
get
{
return this.maximumZoomFactor;
}
set
{
this.maximumZoomFactor = value;
this.SetZoomFactorLimits();
}
}
/// <summary>
/// The minimum zoom factor to use
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float MinimumZoomFactor
{
get
{
return this.minimumZoomFactor;
}
set
{
this.minimumZoomFactor = value;
this.SetZoomFactorLimits();
}
}
/// <summary>
/// The multiplying factor to apply to each ZoomIn/ZoomOut command
/// </summary>
[Category("Behavior")]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
[DefaultValue(2F)]
public float ZoomStep { get; set; }
/// <summary>
/// The image currently displayed by the control
/// </summary>
[Category("Data")]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public Image Image
{
get { return this.image; }
set
{
this.image = value;
this.ZoomExtents();
this.minimumZoomFactor = this.zoomFactor / 10;
this.MaximumZoomFactor = this.zoomFactor * 10;
}
}
public ImageViewerViewMode ViewMode { get; set; }
public ImageViewerZoomMode ZoomMode { get; set; }
private ImageViewerLens Lens { get; set; }
private float zoomFactor;
private float minimumZoomFactor;
private float maximumZoomFactor;
private bool panning;
private Point imageLocation;
private Point imageTranslation;
private Image image;
private AreaSelection areaSelection;
/// <summary>
/// Class constructor
/// </summary>
public ImageViewer()
{
this.DoubleBuffered = true;
this.MinimumZoomFactor = 0.1F;
this.MaximumZoomFactor = 10F;
this.ZoomStep = 2F;
this.UseScannerUI = true;
this.Lens = new ImageViewerLens();
this.ViewMode = ImageViewerViewMode.PrintSelection;
this.areaSelection = new AreaSelection();
this.Controls.Add(this.areaSelection);
// TWAIN
// Initialise twain
this.twain = new Twain(new WinFormsWindowMessageHook(this));
// Try to set the last used default scanner
if (this.AvailableScanners.Any())
{
this.twain.TransferImage += twain_TransferImage;
this.twain.ScanningComplete += twain_ScanningComplete;
if (!this.SetScanner(this.defaultScanner))
this.SetScanner(this.AvailableScanners.First());
}
}
/// <summary>
/// Saves the currently loaded image under the specified filename, in the specified format at the specified quality
/// </summary>
/// <param name="FileName">The file name (full file path) under which to save the file. File type extension is not required.</param>
/// <param name="Format">The file format under which to save the file</param>
/// <param name="Quality">The quality in percent of the image to save. This is optional and may or may not be used have an effect depending on the chosen file type. Default is maximum quality.</param>
public void SaveImage(string FileName, GraphicFormats Format, uint Quality = 100)
{
ImageCodecInfo encoder;
EncoderParameters encoderParameters;
if (FileName.IsNullOrEmpty())
throw new ArgumentNullException(FileName);
else
{
string extension = Path.GetExtension(FileName);
if (!string.IsNullOrEmpty(extension))
FileName = FileName.Replace(extension, string.Empty);
FileName += "." + Format.ToString();
}
Quality = Math.Min(Math.Max(1, Quality), 100);
if (!TryGetEncoder(Format, out encoder))
return;
encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, (int)Quality);
this.Image.Save(FileName, encoder, encoderParameters);
}
/// <summary>
/// Tries to retrieve the appropriate encoder for the chosen image format.
/// </summary>
/// <param name="Format">The image format for which to attempt retrieving the encoder</param>
/// <param name="Encoder">The encoder object in which to store the encoder if found</param>
/// <returns>True if the encoder was found, else false</returns>
private bool TryGetEncoder(GraphicFormats Format, out ImageCodecInfo Encoder)
{
ImageCodecInfo[] codecs;
codecs = ImageCodecInfo.GetImageEncoders();
Encoder = codecs.First(c => c.FormatDescription.Equals(Format.ToString(), StringComparison.CurrentCultureIgnoreCase));
return Encoder != null;
}
/// <summary>
/// Set the zoom level to view the entire image in the control
/// </summary>
public void ZoomExtents()
{
if (this.Image == null)
return;
this.ZoomFactor = (float)Math.Min((double)this.Width / this.Image.Width, (double)this.Height / this.Image.Height);
this.LimitBasePoint(imageLocation.X, imageLocation.Y);
this.Invalidate();
}
/// <summary>
/// Multiply the zoom
/// </summary>
/// <param name="NewZoomFactor">The zoom factor to set for the image</param>
public void SetZoom(float NewZoomFactor)
{
this.SetZoom(NewZoomFactor, Point.Empty);
}
/// <summary>
/// Multiply the zoom
/// </summary>
/// <param name="NewZoomFactor">The zoom factor to set for the image</param>
/// <param name="ZoomLocation">The point in which to zoom in</param>
public void SetZoom(float NewZoomFactor, Point ZoomLocation)
{
int x;
int y;
float multiplier;
multiplier = NewZoomFactor / this.ZoomFactor;
x = (int)((ZoomLocation.IsEmpty ? this.Width / 2 : ZoomLocation.X - imageLocation.X) / ZoomFactor);
y = (int)((ZoomLocation.IsEmpty ? this.Height / 2 : ZoomLocation.Y - imageLocation.Y) / ZoomFactor);
if ((multiplier < 1 && this.ZoomFactor > this.MinimumZoomFactor) || (multiplier > 1 && this.ZoomFactor < this.MaximumZoomFactor))
ZoomFactor *= multiplier;
else
return;
LimitBasePoint((int)(this.Width / 2 - x * ZoomFactor), (int)(this.Height / 2 - y * ZoomFactor));
this.Invalidate();
}
/// <summary>
/// Determines the base point for positioning the image
/// </summary>
/// <param name="x">The x coordinate based on which to determine the positioning</param>
/// <param name="y">The y coordinate based on which to determine the positioning</param>
private void LimitBasePoint(int x, int y)
{
int width;
int height;
if (this.Image == null)
return;
width = this.Width - (int)(Image.Width * ZoomFactor);
height = this.Height - (int)(Image.Height * ZoomFactor);
x = width < 0 ? Math.Max(Math.Min(x, 0), width) : width / 2;
y = height < 0 ? Math.Max(Math.Min(y, 0), height) : height / 2;
imageLocation = new Point(x, y);
}
/// <summary>
/// Verify that the maximum and minimum zoom are correctly set
/// </summary>
private void SetZoomFactorLimits()
{
float maximum = this.MaximumZoomFactor;
float minimum = this.minimumZoomFactor;
this.maximumZoomFactor = Math.Max(maximum, minimum);
this.minimumZoomFactor = Math.Min(maximum, minimum);
}
/// <summary>
/// Mouse button down event
/// </summary>
protected override void OnMouseDown(MouseEventArgs e)
{
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
switch (e.Button)
{
case MouseButtons.Left:
this.SetZoom(this.ZoomFactor * this.ZoomStep, e.Location);
break;
case MouseButtons.Middle:
this.panning = true;
this.Cursor = Cursors.NoMove2D;
this.imageTranslation = e.Location;
break;
case MouseButtons.Right:
this.SetZoom(this.ZoomFactor / this.ZoomStep, e.Location);
break;
}
break;
case ImageViewerZoomMode.Lens:
if (e.Button == MouseButtons.Left)
{
this.Cursor = Cursors.Cross;
this.Lens.Location = e.Location;
this.Lens.Visible = true;
}
else
{
this.Cursor = Cursors.Default;
this.Lens.Visible = false;
}
this.Invalidate();
break;
}
base.OnMouseDown(e);
}
/// <summary>
/// Mouse button up event
/// </summary>
protected override void OnMouseUp(MouseEventArgs e)
{
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
if (e.Button == MouseButtons.Middle)
{
panning = false;
this.Cursor = Cursors.Default;
}
break;
case ImageViewerZoomMode.Lens:
break;
}
base.OnMouseUp(e);
}
/// <summary>
/// Mouse move event
/// </summary>
protected override void OnMouseMove(MouseEventArgs e)
{
switch (this.ViewMode)
{
case ImageViewerViewMode.Normal:
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
if (panning)
{
LimitBasePoint(imageLocation.X + e.X - this.imageTranslation.X, imageLocation.Y + e.Y - this.imageTranslation.Y);
this.imageTranslation = e.Location;
}
break;
case ImageViewerZoomMode.Lens:
if (this.Lens.Visible)
{
this.Lens.Location = e.Location;
}
break;
}
break;
case ImageViewerViewMode.PrintSelection:
break;
case ImageViewerViewMode.PrintPreview:
break;
}
base.OnMouseMove(e);
}
/// <summary>
/// Resize event
/// </summary>
protected override void OnResize(EventArgs e)
{
LimitBasePoint(imageLocation.X, imageLocation.Y);
this.Invalidate();
base.OnResize(e);
}
/// <summary>
/// Paint event
/// </summary>
protected override void OnPaint(PaintEventArgs pe)
{
Rectangle src;
Rectangle dst;
pe.Graphics.Clear(this.BackColor);
if (this.Image != null)
{
switch (this.ViewMode)
{
case ImageViewerViewMode.Normal:
src = new Rectangle(Point.Empty, new Size(Image.Width, Image.Height));
dst = new Rectangle(this.imageLocation, new Size((int)(this.Image.Width * this.ZoomFactor), (int)(this.Image.Height * this.ZoomFactor)));
pe.Graphics.DrawImage(this.Image, dst, src, GraphicsUnit.Pixel);
this.Lens.Draw(pe.Graphics, this.Image, this.ZoomFactor, this.imageLocation);
break;
case ImageViewerViewMode.PrintSelection:
src = new Rectangle(Point.Empty, new Size(Image.Width, Image.Height));
dst = new Rectangle(this.imageLocation, new Size((int)(this.Image.Width * this.ZoomFactor), (int)(this.Image.Height * this.ZoomFactor)));
pe.Graphics.DrawImage(this.Image, dst, src, GraphicsUnit.Pixel);
break;
case ImageViewerViewMode.PrintPreview:
break;
}
}
//Debug.WriteLine("Viewer redrawn " + DateTime.Now);
base.OnPaint(pe);
}
}
EDIT 3:
I experience further graphics-related trouble when I set the height to something large. For example, if I set the height to 500 in the AreaSelection constructor, dragging the control really screws up the painting.
whilst dragging the underlying image which shows through the rectangle gets lags behind
This is rather inevitable: updating the rectangle also redraws the image. And if that's expensive, say more than 30 milliseconds, it becomes noticeable to the eye.
That's a lot of milliseconds for something as simple as an image on a modern machine. The only way it can take that long is when the image is large and needs to be rescaled to fit the picture box, and when the pixel format is incompatible with the pixel format of the video adapter, so that every single pixel has to be translated from the image's pixel format to the video adapter's pixel format. That can indeed add up to multiple milliseconds.
You'll need to help the PictureBox avoid burning that many CPU cycles every time the image gets painted. Do so by prescaling the image, turning it from a huge bitmap into one that better fits the control, and by altering the pixel format: 32bppPArgb is best by a long shot, since it matches the pixel format of the vast majority of video adapters. It draws ten times faster than all the other formats. You'll find boilerplate code to make this conversion in this answer.
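A minimal sketch of that prescale-and-convert step might look like this (the target size here is a placeholder; the important part is the Format32bppPArgb pixel format and doing the work once, up front, instead of on every paint):
// Sketch: prescale a large image into a 32bppPArgb bitmap once,
// so painting no longer rescales and converts pixel formats each time.
static Bitmap Prescale(Image source, Size target)
{
    var result = new Bitmap(target.Width, target.Height,
        System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, new Rectangle(Point.Empty, target));
    }
    return result;
}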

How to check if logo/image is in a image?

I want to compare an image of a document with a logo type or another picture to see if the logo is in the document.
The application to do this is going to use ASP.NET MVC 3 and C#.
After more searching I found a solution with an extension of Bitmap that uses AForge:
public static class BitmapExtensions
{
/// <summary>
/// See if bmp is contained in template with a small margin of error.
/// </summary>
/// <param name="template">The Bitmap that might contain.</param>
/// <param name="bmp">The Bitmap that might be contained in.</param>
/// <returns>You guess!</returns>
public static bool Contains(this Bitmap template, Bitmap bmp)
{
const Int32 divisor = 4;
const Int32 epsilon = 10;
ExhaustiveTemplateMatching etm = new ExhaustiveTemplateMatching(0.9f);
TemplateMatch[] tm = etm.ProcessImage(
new ResizeNearestNeighbor(template.Width / divisor, template.Height / divisor).Apply(template),
new ResizeNearestNeighbor(bmp.Width / divisor, bmp.Height / divisor).Apply(bmp)
);
if (tm.Length == 1)
{
Rectangle tempRect = tm[0].Rectangle;
if (Math.Abs(bmp.Width / divisor - tempRect.Width) < epsilon
&&
Math.Abs(bmp.Height / divisor - tempRect.Height) < epsilon)
{
return true;
}
}
return false;
}
}
I think that you want to use template matching functionality. I would suggest using OpenCV for that. This is similar to this question.
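If you go the OpenCV route, the core call is matchTemplate. Through a .NET wrapper such as OpenCvSharp (an assumption on my part, since your application is C#), the idea looks roughly like this; the file paths and the 0.8 threshold are placeholders:
using OpenCvSharp;

// Sketch: look for a logo inside a document image with template matching.
static bool ContainsLogo(string documentPath, string logoPath, double threshold = 0.8)
{
    using (var document = Cv2.ImRead(documentPath, ImreadModes.Grayscale))
    using (var logo = Cv2.ImRead(logoPath, ImreadModes.Grayscale))
    using (var result = new Mat())
    {
        Cv2.MatchTemplate(document, logo, result, TemplateMatchModes.CCoeffNormed);
        Cv2.MinMaxLoc(result, out double minVal, out double maxVal, out Point minLoc, out Point maxLoc);
        return maxVal >= threshold; // maxLoc is the top-left corner of the best match
    }
}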

What's a good pixelation algorithm in C# .NET?

What is a good algorithm for pixelating an image in C# .NET?
A simple, yet inefficient solution would be to resize the image to a smaller size, then resize it back up using pixel duplication.
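A rough sketch of that approach in GDI+ (nearest-neighbour interpolation on the way back up is what produces the blocky look; the block size is whatever you choose):
// Sketch: pixelate by shrinking the image, then scaling it back up
// with nearest-neighbour sampling.
static Bitmap Pixelate(Image source, int blockSize)
{
    int smallWidth = Math.Max(1, source.Width / blockSize);
    int smallHeight = Math.Max(1, source.Height / blockSize);
    using (var small = new Bitmap(source, smallWidth, smallHeight)) // downscale
    {
        var result = new Bitmap(source.Width, source.Height);
        using (var g = Graphics.FromImage(result))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;
            g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;
            g.DrawImage(small, new Rectangle(0, 0, result.Width, result.Height)); // upscale
        }
        return result;
    }
}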
A better solution would be (pseudo-code):
(Time O(n), Additional space (besides mutable source image): O(1))
// Pixelize in x axis (choose a whole k s.t. 1 <= k <= Width)
for (y = 0; y < Height; y++)
{
    var sum = 0;
    for (x = 0; x < Width + 1; x++)
    {
        // flush the current block at every k-th column and at the end of the row
        if (x > 0 && (x % k == 0 || x == Width))
        {
            var blockWidth = (x % k == 0) ? k : x % k; // the last block may be narrower
            sum /= blockWidth;
            for (xl = x - blockWidth; xl < x; xl++)
                Pixel[y, xl] = sum;
            sum = 0;
        }
        if (x == Width)
            break;
        sum += Pixel[y, x];
    }
}
// Now do the same in the y axis
// (make sure to keep y the outer loop - for better performance)
// If your image has more than one channel, then Pixel should be a struct.
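A runnable C# version of the horizontal pass, written for a single-channel byte[,] just to keep it short (extending it to colour means accumulating each channel separately, as the note above says):
// Sketch: the horizontal pass of the separable pixelation on a grayscale byte[,].
static void PixelateRows(byte[,] pixels, int k)
{
    int height = pixels.GetLength(0);
    int width = pixels.GetLength(1);
    for (int y = 0; y < height; y++)
    {
        for (int x0 = 0; x0 < width; x0 += k)
        {
            int blockWidth = Math.Min(k, width - x0); // last block may be narrower
            int sum = 0;
            for (int x = x0; x < x0 + blockWidth; x++)
                sum += pixels[y, x];
            byte average = (byte)(sum / blockWidth);
            for (int x = x0; x < x0 + blockWidth; x++)
                pixels[y, x] = average;
        }
    }
}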
The guy over at this forum has a pretty good algorithm. It works by taking the average of all of the colors in each "block."
I just used his implementation in C#/GDI+ today:
using System;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;
using System.Drawing;
using System.Linq;
using System.Text;
/// <summary>
/// Applies a pixelation effect to an image.
/// </summary>
[SuppressMessage(
"Microsoft.Naming",
"CA1704",
Justification = "'Pixelate' is a word in my book.")]
public class PixelateEffect : EffectBase
{
/// <summary>
/// Gets or sets the block size, in pixels.
/// </summary>
private int blockSize = 10;
/// <summary>
/// Gets or sets the block size, in pixels.
/// </summary>
public int BlockSize
{
get
{
return this.blockSize;
}
set
{
if (value <= 1)
{
throw new ArgumentOutOfRangeException("value");
}
this.blockSize = value;
}
}
/// <summary>
/// Applies the effect by rendering it onto the target bitmap.
/// </summary>
/// <param name="source">The source bitmap.</param>
/// <param name="target">The target bitmap.</param>
public override void DrawImage(Bitmap source, Bitmap target)
{
if (source == null)
{
throw new ArgumentNullException("source");
}
if (target == null)
{
throw new ArgumentNullException("target");
}
if (source.Size != target.Size)
{
throw new ArgumentException("The source bitmap and the target bitmap must be the same size.");
}
using (var graphics = Graphics.FromImage(target))
{
graphics.PageUnit = GraphicsUnit.Pixel;
for (int x = 0; x < source.Width; x += this.BlockSize)
{
for (int y = 0; y < source.Height; y += this.BlockSize)
{
var sums = new Sums();
for (int xx = 0; xx < this.BlockSize; ++xx)
{
for (int yy = 0; yy < this.BlockSize; ++yy)
{
if (x + xx >= source.Width || y + yy >= source.Height)
{
continue;
}
var color = source.GetPixel(x + xx, y + yy);
sums.A += color.A;
sums.R += color.R;
sums.G += color.G;
sums.B += color.B;
sums.T++;
}
}
var average = Color.FromArgb(
sums.A / sums.T,
sums.R / sums.T,
sums.G / sums.T,
sums.B / sums.T);
using (var brush = new SolidBrush(average))
{
graphics.FillRectangle(brush, x, y, this.BlockSize, this.BlockSize); // width and height of the block, not right/bottom coordinates
}
}
}
}
}
/// <summary>
/// A structure that holds sums for color averaging.
/// </summary>
private struct Sums
{
/// <summary>
/// Gets or sets the alpha component.
/// </summary>
public int A
{
get;
set;
}
/// <summary>
/// Gets or sets the red component.
/// </summary>
public int R
{
get;
set;
}
/// <summary>
/// Gets or sets the blue component.
/// </summary>
public int B
{
get;
set;
}
/// <summary>
/// Gets or sets the green component.
/// </summary>
public int G
{
get;
set;
}
/// <summary>
/// Gets or sets the total count.
/// </summary>
public int T
{
get;
set;
}
}
}
Caveat emptor, works on my machine, & etc.
While I don't know of a well-known algorithm for this, I did have to write something similar. The technique I used was pretty simple, though I suspect it is not very efficient for large images. Basically I would take the image, do color averaging in 5-pixel (or however big you want) blocks, and then make all the pixels in a block the same color. You could speed this up by doing the average on just the diagonal pixels, which would save a lot of cycles but be less accurate.
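To illustrate the diagonal shortcut: instead of summing every pixel in a block, you could sample only the block's diagonal (a sketch; as noted, it trades accuracy for speed):
// Sketch: approximate a block's average color by sampling only its diagonal.
static Color DiagonalAverage(Bitmap source, int x0, int y0, int blockSize)
{
    int r = 0, g = 0, b = 0, count = 0;
    for (int i = 0; i < blockSize; i++)
    {
        int x = x0 + i, y = y0 + i;
        if (x >= source.Width || y >= source.Height) break;
        Color c = source.GetPixel(x, y);
        r += c.R; g += c.G; b += c.B; count++;
    }
    return count == 0 ? Color.Black : Color.FromArgb(r / count, g / count, b / count);
}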
