Upload picture from iOS to .NET app: Rotate - C#

I have the code below for uploading and resizing pictures from iOS devices to my .NET application. Users usually take pictures in portrait orientation, and then all pictures show up in my app with the wrong rotation. Any suggestion how to fix this?
string fileName = Server.HtmlEncode(FileUploadFormbilde.FileName);
string extension = System.IO.Path.GetExtension(fileName);
System.Drawing.Image image_file = System.Drawing.Image.FromStream(FileUploadFormbilde.PostedFile.InputStream);
int image_height = image_file.Height;
int image_width = image_file.Width;
int max_height = 300;
int max_width = 300;
// Scale to the maximum width, then clamp the height if it still exceeds the maximum.
image_height = (image_height * max_width) / image_width;
image_width = max_width;
if (image_height > max_height)
{
    image_width = (image_width * max_height) / image_height;
    image_height = max_height;
}
Bitmap bitmap_file = new Bitmap(image_file, image_width, image_height);
System.IO.MemoryStream stream = new System.IO.MemoryStream();
bitmap_file.Save(stream, System.Drawing.Imaging.ImageFormat.Png);
// ToArray copies the whole buffer; it avoids the off-by-one and partial-read issues of Read().
byte[] data = stream.ToArray();

Here you go my friend:
Image originalImage = Image.FromStream(new MemoryStream(data)); // Image.FromStream expects a Stream, so wrap the byte[]
if (originalImage.PropertyIdList.Contains(0x0112))
{
int rotationValue = originalImage.GetPropertyItem(0x0112).Value[0];
switch (rotationValue)
{
case 1: // normal orientation, do nothing
break;
case 8: // rotated 90 right
// de-rotate:
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate270FlipNone);
break;
case 3: // bottoms up
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate180FlipNone);
break;
case 6: // rotated 90 left
originalImage.RotateFlip(rotateFlipType: RotateFlipType.Rotate90FlipNone);
break;
}
}

You must read the image's Orientation value from the EXIF data in the Image.PropertyItems collection, and rotate it accordingly.
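For reference, here is a minimal sketch of how that correction could be slotted into the upload code from the question, right after the image is loaded and before the thumbnail size is computed. This is only a sketch: the variable name image_file comes from the question, 0x0112 is the standard EXIF Orientation tag id, and PropertyIdList.Contains needs a using System.Linq directive.
// Sketch only: correct the EXIF orientation right after loading the uploaded image,
// before image_width/image_height are read for the resize calculation.
const int ExifOrientationId = 0x0112; // EXIF Orientation tag
if (image_file.PropertyIdList.Contains(ExifOrientationId)) // requires System.Linq
{
    switch (image_file.GetPropertyItem(ExifOrientationId).Value[0])
    {
        case 3: image_file.RotateFlip(RotateFlipType.Rotate180FlipNone); break;
        case 6: image_file.RotateFlip(RotateFlipType.Rotate90FlipNone); break;
        case 8: image_file.RotateFlip(RotateFlipType.Rotate270FlipNone); break;
    }
    // Remove the tag so viewers don't rotate the already-corrected pixels a second time.
    image_file.RemovePropertyItem(ExifOrientationId);
}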

Here is a better solution, from an answer posted here: the author wrote a simple helper class that does all of that.
You can check the full source code here.
private System.Drawing.Image ResizeAndDraw(System.Drawing.Image objTempImage)
{
    // Call the image helper to fix the orientation issue.
    // The image is rotated in place; the return value only reports which rotation was applied.
    ImageHelper.RotateImageByExifOrientationData(objTempImage, true);
    Size objSize = new Size(150, 200);
    Bitmap objBmp = new Bitmap(objSize.Width, objSize.Height);
    using (Graphics g = Graphics.FromImage(objBmp))
    {
        g.SmoothingMode = SmoothingMode.HighQuality;
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.PixelOffsetMode = PixelOffsetMode.HighQuality;
        Rectangle rect = new Rectangle(0, 0, objSize.Width, objSize.Height);
        g.DrawImage(objTempImage, rect);
    }
    return objBmp;
}
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;
public static class ImageHelper
{
/// <summary>
/// Rotate the given image file according to Exif Orientation data
/// </summary>
/// <param name="sourceFilePath">path of source file</param>
/// <param name="targetFilePath">path of target file</param>
/// <param name="targetFormat">target format</param>
/// <param name="updateExifData">set it to TRUE to update image Exif data after rotation (default is TRUE)</param>
/// <returns>The RotateFlipType value corresponding to the applied rotation. If no rotation occurred, RotateFlipType.RotateNoneFlipNone will be returned.</returns>
public static RotateFlipType RotateImageByExifOrientationData(string sourceFilePath, string targetFilePath, ImageFormat targetFormat, bool updateExifData = true)
{
// Rotate the image according to EXIF data
var bmp = new Bitmap(sourceFilePath);
RotateFlipType fType = RotateImageByExifOrientationData(bmp, updateExifData);
if (fType != RotateFlipType.RotateNoneFlipNone)
{
bmp.Save(targetFilePath, targetFormat);
}
return fType;
}
/// <summary>
/// Rotate the given bitmap according to Exif Orientation data
/// </summary>
/// <param name="img">source image</param>
/// <param name="updateExifData">set it to TRUE to update image Exif data after rotation (default is TRUE)</param>
/// <returns>The RotateFlipType value corresponding to the applied rotation. If no rotation occurred, RotateFlipType.RotateNoneFlipNone will be returned.</returns>
public static RotateFlipType RotateImageByExifOrientationData(Image img, bool updateExifData = true)
{
int orientationId = 0x0112;
var fType = RotateFlipType.RotateNoneFlipNone;
if (img.PropertyIdList.Contains(orientationId))
{
var pItem = img.GetPropertyItem(orientationId);
fType = GetRotateFlipTypeByExifOrientationData(pItem.Value[0]);
if (fType != RotateFlipType.RotateNoneFlipNone)
{
img.RotateFlip(fType);
// Remove Exif orientation tag (if requested)
if (updateExifData) img.RemovePropertyItem(orientationId);
}
}
return fType;
}
/// <summary>
/// Return the proper System.Drawing.RotateFlipType according to given orientation EXIF metadata
/// </summary>
/// <param name="orientation">Exif "Orientation"</param>
/// <returns>the corresponding System.Drawing.RotateFlipType enum value</returns>
public static RotateFlipType GetRotateFlipTypeByExifOrientationData(int orientation)
{
switch (orientation)
{
case 1:
default:
return RotateFlipType.RotateNoneFlipNone;
case 2:
return RotateFlipType.RotateNoneFlipX;
case 3:
return RotateFlipType.Rotate180FlipNone;
case 4:
return RotateFlipType.Rotate180FlipX;
case 5:
return RotateFlipType.Rotate90FlipX;
case 6:
return RotateFlipType.Rotate90FlipNone;
case 7:
return RotateFlipType.Rotate270FlipX;
case 8:
return RotateFlipType.Rotate270FlipNone;
}
}
}
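As a hypothetical usage example of the file-based overload above (the paths are placeholders), fixing a photo on disk is a single call:
// Sketch: rotate an uploaded JPEG according to its EXIF data and save the corrected
// copy as PNG. If no rotation was needed, the helper returns RotateNoneFlipNone and
// does not write the target file at all.
var applied = ImageHelper.RotateImageByExifOrientationData(
    @"C:\uploads\photo.jpg",        // placeholder source path
    @"C:\uploads\photo_fixed.png",  // placeholder target path
    ImageFormat.Png);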

Related

Why is the displayed picture rotated to the left? [duplicate]

As the question implies, when I load an image into a PictureBox (using a dialog box), it doesn't appear as it originally looks. In this screenshot, the image on the left is the one I loaded into the PictureBox (on the right).
Trying to figure out what causes this, I drew an image using the Paint app and rotated it using Windows Photo Viewer; the rotated image loaded as it is (rotated). That is it: some pictures load just fine, and others come in rotated, and I can't figure out why.
When you view an image in Windows Photo Viewer, it automatically corrects the image orientation if it has EXIF orientation data. PictureBox doesn't have built-in support for this, but you can create a custom PictureBox which shows images correctly even if they have orientation data:
using System.Linq;
using System.Windows.Forms;
using System.Drawing;
using System.ComponentModel;
public class MyPictureBox : PictureBox
{
private void CorrectExifOrientation(Image image)
{
if (image == null) return;
int orientationId = 0x0112;
if (image.PropertyIdList.Contains(orientationId))
{
var orientation = (int)image.GetPropertyItem(orientationId).Value[0];
var rotateFlip = RotateFlipType.RotateNoneFlipNone;
switch (orientation)
{
case 1: rotateFlip = RotateFlipType.RotateNoneFlipNone; break;
case 2: rotateFlip = RotateFlipType.RotateNoneFlipX; break;
case 3: rotateFlip = RotateFlipType.Rotate180FlipNone; break;
case 4: rotateFlip = RotateFlipType.Rotate180FlipX; break;
case 5: rotateFlip = RotateFlipType.Rotate90FlipX; break;
case 6: rotateFlip = RotateFlipType.Rotate90FlipNone; break;
case 7: rotateFlip = RotateFlipType.Rotate270FlipX; break;
case 8: rotateFlip = RotateFlipType.Rotate270FlipNone; break;
default: rotateFlip = RotateFlipType.RotateNoneFlipNone; break;
}
if (rotateFlip != RotateFlipType.RotateNoneFlipNone)
{
image.RotateFlip(rotateFlip);
image.RemovePropertyItem(orientationId);
}
}
}
[Localizable(true)]
[Bindable(true)]
public new Image Image
{
get { return base.Image; }
set { base.Image = value; CorrectExifOrientation(value); }
}
}
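A hypothetical usage sketch (the file path is a placeholder): drop the derived control onto a form in place of the stock PictureBox and assign its Image property as usual.
// Sketch: the overridden Image property corrects the orientation when the image is assigned.
var pictureBox = new MyPictureBox
{
    Dock = DockStyle.Fill,
    SizeMode = PictureBoxSizeMode.Zoom
};
this.Controls.Add(pictureBox);                                  // "this" is assumed to be a Form
pictureBox.Image = Image.FromFile(@"C:\photos\portrait.jpg");   // placeholder path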
Without the original image data, it's impossible to say for sure what's going on. But it's clear that at some point, some software involved in the processing of the image has used the EXIF orientation property to rotate the image, rather than actually modifying the image data itself. This may be Photo Viewer or some other tool that handled the photo at some point.
Here is code you can use to detect the orientation of the image, as recorded in the EXIF data by the camera that took the picture:
static ImageOrientation GetOrientation(this Image image)
{
PropertyItem pi = SafeGetPropertyItem(image, 0x112);
if (pi == null || pi.Type != 3)
{
return ImageOrientation.Original;
}
return (ImageOrientation)BitConverter.ToInt16(pi.Value, 0);
}
// A file without the desired EXIF property record will throw ArgumentException.
static PropertyItem SafeGetPropertyItem(Image image, int propid)
{
try
{
return image.GetPropertyItem(propid);
}
catch (ArgumentException)
{
return null;
}
}
where:
/// <summary>
/// Possible EXIF orientation values describing clockwise
/// rotation of the captured image due to camera orientation.
/// </summary>
/// <remarks>Reverse/undo these transformations to display an image correctly</remarks>
public enum ImageOrientation
{
/// <summary>
/// Image is correctly oriented
/// </summary>
Original = 1,
/// <summary>
/// Image has been mirrored horizontally
/// </summary>
MirrorOriginal = 2,
/// <summary>
/// Image has been rotated 180 degrees
/// </summary>
Half = 3,
/// <summary>
/// Image has been mirrored horizontally and rotated 180 degrees
/// </summary>
MirrorHalf = 4,
/// <summary>
/// Image has been mirrored horizontally and rotated 270 degrees clockwise
/// </summary>
MirrorThreeQuarter = 5,
/// <summary>
/// Image has been rotated 270 degrees clockwise
/// </summary>
ThreeQuarter = 6,
/// <summary>
/// Image has been mirrored horizontally and rotated 90 degrees clockwise.
/// </summary>
MirrorOneQuarter = 7,
/// <summary>
/// Image has been rotated 90 degrees clockwise.
/// </summary>
OneQuarter = 8
}
The GetOrientation() method above is written as an extension method, but of course you can call it as a plain static method. Either way, just pass it the Bitmap object you've just opened from a file, and it will return the EXIF orientation stored in the file, if any.
With that in hand, you can rotate the image according to your needs.
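If you then want to undo the transformation in code, here is a small sketch (assuming the enum and extension method above, which are not part of System.Drawing) that maps each ImageOrientation back to the RotateFlipType that corrects it:
// Sketch: map an EXIF orientation to the RotateFlipType that undoes it.
static RotateFlipType GetCorrection(ImageOrientation orientation)
{
    switch (orientation)
    {
        case ImageOrientation.MirrorOriginal:     return RotateFlipType.RotateNoneFlipX;
        case ImageOrientation.Half:               return RotateFlipType.Rotate180FlipNone;
        case ImageOrientation.MirrorHalf:         return RotateFlipType.Rotate180FlipX;
        case ImageOrientation.MirrorThreeQuarter: return RotateFlipType.Rotate90FlipX;
        case ImageOrientation.ThreeQuarter:       return RotateFlipType.Rotate90FlipNone;
        case ImageOrientation.MirrorOneQuarter:   return RotateFlipType.Rotate270FlipX;
        case ImageOrientation.OneQuarter:         return RotateFlipType.Rotate270FlipNone;
        default:                                  return RotateFlipType.RotateNoneFlipNone;
    }
}
// Usage: bitmap.RotateFlip(GetCorrection(bitmap.GetOrientation()));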
I load my image from the file explorer to the pictureBox like this:
private void btnBrowse_Click ( object sender, EventArgs e ) {
using( OpenFileDialog openFile = new OpenFileDialog() ) {
openFile.Title = "Select image for [user]";
openFile.Filter = "Image files (*.jpg, *.jpeg, *.jpe, *.jfif, *.png)|*.jpg; *.jpeg; *.jpe; *.jfif; *.png|All files (*.*)|*.*";
if( openFile.ShowDialog() == DialogResult.OK ) {
//image validation
try {
Bitmap bmp = new Bitmap( openFile.FileName );//to validate the image
if( bmp != null ) {//if image is valid
pictureBox1.Load( openFile.FileName );//display selected image file
pictureBox1.Image.RotateFlip( Rotate( bmp ) );//display image in proper orientation
bmp.Dispose();
}
} catch( ArgumentException ) {
MessageBox.Show( "The specified image file is invalid." );
} catch( FileNotFoundException ) {
MessageBox.Show( "The path to image is invalid." );
}
}
}
}
This is the method for displaying the image in its proper orientation:
//Change Image to Correct Orientation When displaying to PictureBox
public static RotateFlipType Rotate ( Image bmp ) {
const int OrientationId = 0x0112;
PropertyItem pi = bmp.PropertyItems.Select( x => x )
.FirstOrDefault( x => x.Id == OrientationId );
if( pi == null )
return RotateFlipType.RotateNoneFlipNone;
byte o = pi.Value[ 0 ];
//Orientations
if( o == 2 ) //TopRight
return RotateFlipType.RotateNoneFlipX;
if( o == 3 ) //BottomRight
return RotateFlipType.RotateNoneFlipXY;
if( o == 4 ) //BottomLeft
return RotateFlipType.RotateNoneFlipY;
if( o == 5 ) //LeftTop
return RotateFlipType.Rotate90FlipX;
if( o == 6 ) //RightTop
return RotateFlipType.Rotate90FlipNone;
if( o == 7 ) //RightBottom
return RotateFlipType.Rotate90FlipY;
if( o == 8 ) //LeftBottom
return RotateFlipType.Rotate90FlipXY;
return RotateFlipType.RotateNoneFlipNone; //TopLeft (what the image looks by default) [or] Unknown
}
These are my references:
Code reference (I modified the accepted answer): https://stackoverflow.com/a/42972969/11565087
Visual reference (found in the comments[I don't know how to link a comment here]) : http://csharphelper.com/blog/2016/07/read-an-image-files-exif-orientation-data-in-c/

Big red X on image and I cannot detect error

To be brief, I'm trying to implement a motion detection algorithm, and in my case I'm working on UWP using the portable version of the AForge library for image processing. Leaving that aside, I have a problem with converting the SoftwareBitmap object (that I get from MediaFrameReader) to the Bitmap object (and vice versa) that I use in my motion detection code. As a consequence of this conversion, I get the proper image but with a big red X in the foreground. Code below:
private async void FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
var frame = sender.TryAcquireLatestFrame();
if (frame != null && !_detectingMotion)
{
SoftwareBitmap aForgeInputBitmap = null;
var inputBitmap = frame.VideoMediaFrame?.SoftwareBitmap;
if (inputBitmap != null)
{
_detectingMotion = true;
//The XAML Image control can only display images in BRGA8 format with premultiplied or no alpha
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8
&& inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
aForgeInputBitmap = SoftwareBitmap.Copy(inputBitmap);
}
else
{
aForgeInputBitmap = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore);
}
await _aForgeHelper.MoveBackgrounds(aForgeInputBitmap);
SoftwareBitmap aForgeOutputBitmap = await _aForgeHelper.DetectMotion();
_frameRenderer.PresentSoftwareBitmap(aForgeOutputBitmap);
_detectingMotion = false;
}
}
}
class AForgeHelper
{
private Bitmap _background;
private Bitmap _currentFrameBitmap;
public async Task MoveBackgrounds(SoftwareBitmap currentFrame)
{
if (_background == null)
{
_background = TransformToGrayscale(await ConvertSoftwareBitmapToBitmap(currentFrame));
}
else
{
// modifying _background in compliance with algorithm - in this case irrelevant
}
}
public async Task<SoftwareBitmap> DetectMotion()
{
// to check only this conversion
return await ConvertBitmapToSoftwareBitmap(_background);
}
private static async Task<Bitmap> ConvertSoftwareBitmapToBitmap(SoftwareBitmap input)
{
Bitmap output = null;
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
WriteableBitmap tmpBitmap = new WriteableBitmap(input.PixelWidth, input.PixelHeight);
input.CopyToBuffer(tmpBitmap.PixelBuffer);
output = (Bitmap)tmpBitmap;
});
return output;
}
private static async Task<SoftwareBitmap> ConvertBitmapToSoftwareBitmap(Bitmap input)
{
SoftwareBitmap output = null;
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
WriteableBitmap tmpBitmap = (WriteableBitmap)input;
output = new SoftwareBitmap(BitmapPixelFormat.Bgra8, tmpBitmap.PixelWidth, tmpBitmap.PixelHeight,
BitmapAlphaMode.Premultiplied);
output.CopyFromBuffer(tmpBitmap.PixelBuffer);
});
return output;
}
private static Bitmap TransformToGrayscale(Bitmap input)
{
Grayscale grayscaleFilter = new Grayscale(0.2125, 0.7154, 0.0721);
Bitmap output = grayscaleFilter.Apply(input);
return output;
}
}
Of course, I've tried to catch errors with try-catch clauses and found nothing. Thanks in advance.
EDIT (29/03/2018):
Generally, my app aims to provide some features associated with the Kinect sensor. The user can choose a feature from a list of features. First of all, this app has to be available on Xbox One, which is why I've chosen UWP. For 'scientific' reasons I've implemented the MVVM pattern, using the MVVM Light framework. As for the PresentSoftwareBitmap() method, it comes from the Windows-universal-samples repo, and I paste the FrameRenderer helper class below:
[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}
class FrameRenderer
{
private Image _imageElement;
private SoftwareBitmap _backBuffer;
private bool _taskRunning = false;
public FrameRenderer(Image imageElement)
{
_imageElement = imageElement;
_imageElement.Source = new SoftwareBitmapSource();
}
// Processes a MediaFrameReference and displays it in a XAML image control
public void ProcessFrame(MediaFrameReference frame)
{
var softwareBitmap = FrameRenderer.ConvertToDisplayableImage(frame?.VideoMediaFrame);
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
// Function delegate that transforms a scanline from an input image to an output image.
private unsafe delegate void TransformScanline(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes);
/// <summary>
/// Determines the subtype to request from the MediaFrameReader that will result in
/// a frame that can be rendered by ConvertToDisplayableImage.
/// </summary>
/// <returns>Subtype string to request, or null if subtype is not renderable.</returns>
public static string GetSubtypeForFrameReader(MediaFrameSourceKind kind, MediaFrameFormat format)
{
// Note that media encoding subtypes may differ in case.
// https://learn.microsoft.com/en-us/uwp/api/Windows.Media.MediaProperties.MediaEncodingSubtypes
string subtype = format.Subtype;
switch (kind)
{
// For color sources, we accept anything and request that it be converted to Bgra8.
case MediaFrameSourceKind.Color:
return Windows.Media.MediaProperties.MediaEncodingSubtypes.Bgra8;
// The only depth format we can render is D16.
case MediaFrameSourceKind.Depth:
return String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.D16, StringComparison.OrdinalIgnoreCase) ? subtype : null;
// The only infrared formats we can render are L8 and L16.
case MediaFrameSourceKind.Infrared:
return (String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L8, StringComparison.OrdinalIgnoreCase) ||
String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L16, StringComparison.OrdinalIgnoreCase)) ? subtype : null;
// No other source kinds are supported by this class.
default:
return null;
}
}
/// <summary>
/// Converts a frame to a SoftwareBitmap of a valid format to display in an Image control.
/// </summary>
/// <param name="inputFrame">Frame to convert.</param>
public static unsafe SoftwareBitmap ConvertToDisplayableImage(VideoMediaFrame inputFrame)
{
SoftwareBitmap result = null;
using (var inputBitmap = inputFrame?.SoftwareBitmap)
{
if (inputBitmap != null)
{
switch (inputFrame.FrameReference.SourceKind)
{
case MediaFrameSourceKind.Color:
// XAML requires Bgra8 with premultiplied alpha.
// We requested Bgra8 from the MediaFrameReader, so all that's
// left is fixing the alpha channel if necessary.
if (inputBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8)
{
System.Diagnostics.Debug.WriteLine("Color frame in unexpected format.");
}
else if (inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
// Already in the correct format.
result = SoftwareBitmap.Copy(inputBitmap);
}
else
{
// Convert to premultiplied alpha.
result = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
break;
case MediaFrameSourceKind.Depth:
// We requested D16 from the MediaFrameReader, so the frame should
// be in Gray16 format.
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Gray16)
{
// Use a special pseudo color to render 16 bits depth frame.
var depthScale = (float)inputFrame.DepthMediaFrame.DepthFormat.DepthScaleInMeters;
var minReliableDepth = inputFrame.DepthMediaFrame.MinReliableDepth;
var maxReliableDepth = inputFrame.DepthMediaFrame.MaxReliableDepth;
result = TransformBitmap(inputBitmap, (w, i, o) => PseudoColorHelper.PseudoColorForDepth(w, i, o, depthScale, minReliableDepth, maxReliableDepth));
}
else
{
System.Diagnostics.Debug.WriteLine("Depth frame in unexpected format.");
}
break;
case MediaFrameSourceKind.Infrared:
// We requested L8 or L16 from the MediaFrameReader, so the frame should
// be in Gray8 or Gray16 format.
switch (inputBitmap.BitmapPixelFormat)
{
case BitmapPixelFormat.Gray16:
// Use pseudo color to render 16 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor16BitInfrared);
break;
case BitmapPixelFormat.Gray8:
// Use pseudo color to render 8 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor8BitInfrared);
break;
default:
System.Diagnostics.Debug.WriteLine("Infrared frame in unexpected format.");
break;
}
break;
}
}
}
return result;
}
/// <summary>
/// Transform image into Bgra8 image using given transform method.
/// </summary>
/// <param name="softwareBitmap">Input image to transform.</param>
/// <param name="transformScanline">Method to map pixels in a scanline.</param>
private static unsafe SoftwareBitmap TransformBitmap(SoftwareBitmap softwareBitmap, TransformScanline transformScanline)
{
// XAML Image control only supports premultiplied Bgra8 format.
var outputBitmap = new SoftwareBitmap(BitmapPixelFormat.Bgra8,
softwareBitmap.PixelWidth, softwareBitmap.PixelHeight, BitmapAlphaMode.Premultiplied);
using (var input = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
using (var output = outputBitmap.LockBuffer(BitmapBufferAccessMode.Write))
{
// Get stride values to calculate buffer position for a given pixel x and y position.
int inputStride = input.GetPlaneDescription(0).Stride;
int outputStride = output.GetPlaneDescription(0).Stride;
int pixelWidth = softwareBitmap.PixelWidth;
int pixelHeight = softwareBitmap.PixelHeight;
using (var outputReference = output.CreateReference())
using (var inputReference = input.CreateReference())
{
// Get input and output byte access buffers.
byte* inputBytes;
uint inputCapacity;
((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputBytes, out inputCapacity);
byte* outputBytes;
uint outputCapacity;
((IMemoryBufferByteAccess)outputReference).GetBuffer(out outputBytes, out outputCapacity);
// Iterate over all pixels and store converted value.
for (int y = 0; y < pixelHeight; y++)
{
byte* inputRowBytes = inputBytes + y * inputStride;
byte* outputRowBytes = outputBytes + y * outputStride;
transformScanline(pixelWidth, inputRowBytes, outputRowBytes);
}
}
}
return outputBitmap;
}
/// <summary>
/// A helper class to manage look-up-table for pseudo-colors.
/// </summary>
private static class PseudoColorHelper
{
#region Constructor, private members and methods
private const int TableSize = 1024; // Look up table size
private static readonly uint[] PseudoColorTable;
private static readonly uint[] InfraredRampTable;
// Color palette mapping value from 0 to 1 to blue to red colors.
private static readonly Color[] ColorRamp =
{
Color.FromArgb(a:0xFF, r:0x7F, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x7F, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0xFF, b:0x00),
Color.FromArgb(a:0xFF, r:0x7F, g:0xFF, b:0x7F),
Color.FromArgb(a:0xFF, r:0x00, g:0xFF, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x7F, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0x7F),
};
static PseudoColorHelper()
{
PseudoColorTable = InitializePseudoColorLut();
InfraredRampTable = InitializeInfraredRampLut();
}
/// <summary>
/// Maps an input infrared value between [0, 1] to corrected value between [0, 1].
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)] // Tell the compiler to inline this method to improve performance
private static uint InfraredColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return InfraredRampTable[index];
}
/// <summary>
/// Initializes the pseudo-color look up table for infrared pixels
/// </summary>
private static uint[] InitializeInfraredRampLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
var value = (float)i / TableSize;
// Adjust to increase color change between lower values in infrared images
var alpha = (float)Math.Pow(1 - value, 12);
lut[i] = ColorRampInterpolation(alpha);
}
return lut;
}
/// <summary>
/// Initializes pseudo-color look up table for depth pixels
/// </summary>
private static uint[] InitializePseudoColorLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
lut[i] = ColorRampInterpolation((float)i / TableSize);
}
return lut;
}
/// <summary>
/// Maps a float value to a pseudo-color pixel
/// </summary>
private static uint ColorRampInterpolation(float value)
{
// Map value to surrounding indexes on the color ramp
int rampSteps = ColorRamp.Length - 1;
float scaled = value * rampSteps;
int integer = (int)scaled;
int index =
integer < 0 ? 0 :
integer >= rampSteps - 1 ? rampSteps - 1 :
integer;
Color prev = ColorRamp[index];
Color next = ColorRamp[index + 1];
// Set color based on ratio of closeness between the surrounding colors
uint alpha = (uint)((scaled - integer) * 255);
uint beta = 255 - alpha;
return
((prev.A * beta + next.A * alpha) / 255) << 24 | // Alpha
((prev.R * beta + next.R * alpha) / 255) << 16 | // Red
((prev.G * beta + next.G * alpha) / 255) << 8 | // Green
((prev.B * beta + next.B * alpha) / 255); // Blue
}
/// <summary>
/// Maps a value in [0, 1] to a pseudo RGBA color.
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint PseudoColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return PseudoColorTable[index];
}
#endregion
/// <summary>
/// Maps each pixel in a scanline from a 16 bit depth value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
/// <param name="depthScale">Physical distance that corresponds to one unit in the input scanline.</param>
/// <param name="minReliableDepth">Shortest distance at which the sensor can provide reliable measurements.</param>
/// <param name="maxReliableDepth">Furthest distance at which the sensor can provide reliable measurements.</param>
public static unsafe void PseudoColorForDepth(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes, float depthScale, float minReliableDepth, float maxReliableDepth)
{
// Visualize space in front of your desktop.
float minInMeters = minReliableDepth * depthScale;
float maxInMeters = maxReliableDepth * depthScale;
float one_min = 1.0f / minInMeters;
float range = 1.0f / maxInMeters - one_min;
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
var depth = inputRow[x] * depthScale;
if (depth == 0)
{
// Map invalid depth values to transparent pixels.
// This happens when depth information cannot be calculated, e.g. when objects are too close.
outputRow[x] = 0;
}
else
{
var alpha = (1.0f / depth - one_min) / range;
outputRow[x] = PseudoColor(alpha * alpha);
}
}
}
/// <summary>
/// Maps each pixel in a scanline from a 8 bit infrared value to a pseudo-color pixel.
/// </summary>
/// /// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor8BitInfrared(
int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
byte* inputRow = inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)Byte.MaxValue);
}
}
/// <summary>
/// Maps each pixel in a scanline from a 16 bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor16BitInfrared(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)UInt16.MaxValue);
}
}
}
// Displays the provided softwareBitmap in a XAML image control.
public void PresentSoftwareBitmap(SoftwareBitmap softwareBitmap)
{
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
}
This is the output image that I get after the conversion and the grayscale processing (screenshot: output image).
As for the VS version: Visual Studio Enterprise 2017, version 15.6.1, to be precise. Once more, thanks in advance for the help.

Edge of the image is cut off when converting DIB to ImageSource

I use the code below to convert a DIB coming from a scanner via TWAIN to a BitmapSource, but I found that the edge of the image is cut off, and the image size differs from the source image. Is there anything wrong in the code below when handling the image?
/// <summary>
/// Get a managed BitmapSource from a DIB provided as a low-level Windows handle.
///
/// Notes:
/// Data is copied from the source, so the Windows handle can be safely discarded
/// even when the BitmapSource is in use.
///
/// Only a subset of possible DIB formats is supported.
///
/// </summary>
/// <param name="dibHandle"></param>
/// <returns>A copy of the image in a managed BitmapSource</returns>
///
public static BitmapSource FormHDib(IntPtr dibHandle)
{
BitmapSource bs = null;
IntPtr bmpPtr = IntPtr.Zero;
bool flip = true; // vertically flip the image
try {
bmpPtr = Win32.GlobalLock(dibHandle);
Win32.BITMAPINFOHEADER bmi = new Win32.BITMAPINFOHEADER();
Marshal.PtrToStructure(bmpPtr, bmi);
if (bmi.biSizeImage == 0)
bmi.biSizeImage = (uint)(((((bmi.biWidth * bmi.biBitCount) + 31) & ~31) >> 3) * bmi.biHeight);
int paletteSize = 0;
if (bmi.biClrUsed != 0)
    throw new NotSupportedException("DibToBitmap: DIB with palette is not supported");
// pointer to the beginning of the bitmap bits
IntPtr pixptr = (IntPtr)((int)bmpPtr + bmi.biSize + paletteSize);
// Define parameters used to create the BitmapSource.
PixelFormat pf = PixelFormats.Default;
switch (bmi.biBitCount) {
case 32:
pf = PixelFormats.Bgr32;
break;
case 24:
pf = PixelFormats.Bgr24;
break;
case 8:
pf = PixelFormats.Gray8;
break;
case 1:
pf = PixelFormats.BlackWhite;
break;
default: // not supported
throw new NotSupportedException("DibToBitmap: Can't determine picture format (biBitCount=" + bmi.biBitCount + ")");
// break;
}
int width = bmi.biWidth;
int height = bmi.biHeight;
int stride = (int)(bmi.biSizeImage / height);
byte[] imageBytes = new byte[stride * height];
//Debug: Initialize the image with random data.
//Random value = new Random();
//value.NextBytes(rawImage);
if (flip) {
for (int i = 0, j = 0, k = (height - 1) * stride; i < height; i++, j += stride, k -= stride)
Marshal.Copy(((IntPtr)((int)pixptr + j)), imageBytes, k, stride);
} else {
Marshal.Copy(pixptr, imageBytes, 0, imageBytes.Length);
}
int xDpi = (int)Math.Round(bmi.biXPelsPerMeter * 2.54 / 100); // pels per meter to dots per inch
int yDpi = (int)Math.Round(bmi.biYPelsPerMeter * 2.54 / 100);
// Create a BitmapSource.
bs = BitmapSource.Create(width, height, xDpi, yDpi, pf, null, imageBytes, stride);
Win32.GlobalUnlock(pixptr);
Win32.GlobalFree(pixptr);
pixptr = IntPtr.Zero;
imageBytes = null;
bmi = null;
} catch (Exception ex) {
//string msg = ex.Message;
}
finally {
// cleanup
if (bmpPtr != IntPtr.Zero) { // locked successfully
Win32.GlobalUnlock(dibHandle);
}
}
Win32.GlobalUnlock(bmpPtr);
Win32.GlobalFree(bmpPtr);
bmpPtr = IntPtr.Zero;
return bs;
}
This code is from here.
I found the root cause now: it was a wrong TWAIN setting.
Setting the edge padding to none solves this problem.

Draggable selection rectangle

Before anybody points it out: I know that a question with the same title has already been asked here; I just don't think it answers my issue.
I'm working in .NET 3.5. As in that question, I am making an area selection component to select an area on a picture. The picture is displayed using a custom control in which the picture is drawn during OnPaint.
I have the following code for my selection rectangle:
internal class AreaSelection : Control
{
private Rectangle selection
{
get { return new Rectangle(Point.Empty, Size.Subtract(this.Size, new Size(1, 1))); }
}
private Size mouseStartLocation;
public AreaSelection()
{
this.Size = new Size(150, 150);
this.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.ResizeRedraw | ControlStyles.SupportsTransparentBackColor, true);
this.BackColor = Color.FromArgb(70, 200, 200, 200);
}
protected override void OnMouseEnter(EventArgs e)
{
this.Cursor = Cursors.SizeAll;
base.OnMouseEnter(e);
}
protected override void OnMouseDown(MouseEventArgs e)
{
this.mouseStartLocation = new Size(e.Location);
base.OnMouseDown(e);
}
protected override void OnMouseMove(MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
Point offset = e.Location - this.mouseStartLocation;
this.Left += offset.X;
this.Top += offset.Y;
}
base.OnMouseMove(e);
}
protected override void OnPaint(PaintEventArgs e)
{
e.Graphics.DrawRectangle(new Pen(Color.Black) { DashStyle = DashStyle.Dash }, this.selection);
Debug.WriteLine("Selection redrawn");
}
}
This gives me a nice semi-transparent rectangle which I can drag around. The problem I have is that, whilst dragging, the underlying image which shows through the rectangle lags behind the position of the rectangle.
This gets more noticeable the faster I move the rectangle. When I stop moving it, the image catches up and everything aligns perfectly again.
I assume that there is something wrong with the way the rectangle draws, but I really can't figure out what it is...
Any help would be much appreciated.
EDIT:
I have noticed that the viewer gets redrawn twice as often as the selection area when I drag the selection area. Could this be the cause of the problem?
EDIT 2:
Here is the code for the viewer in case it is relevant:
public enum ImageViewerViewMode
{
Normal,
PrintSelection,
PrintPreview
}
public enum ImageViewerZoomMode
{
None,
OnClick,
Lens
}
public partial class ImageViewer : UserControl
{
/// <summary>
/// The current zoom factor. Note: Use SetZoom() to set the value.
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float ZoomFactor
{
get { return this.zoomFactor; }
private set
{
this.zoomFactor = value;
}
}
/// <summary>
/// The maximum zoom factor to use
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float MaximumZoomFactor
{
get
{
return this.maximumZoomFactor;
}
set
{
this.maximumZoomFactor = value;
this.SetZoomFactorLimits();
}
}
/// <summary>
/// The minimum zoom factor to use
/// </summary>
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public float MinimumZoomFactor
{
get
{
return this.minimumZoomFactor;
}
set
{
this.minimumZoomFactor = value;
this.SetZoomFactorLimits();
}
}
/// <summary>
/// The multiplying factor to apply to each ZoomIn/ZoomOut command
/// </summary>
[Category("Behavior")]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
[DefaultValue(2F)]
public float ZoomStep { get; set; }
/// <summary>
/// The image currently displayed by the control
/// </summary>
[Category("Data")]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public Image Image
{
get { return this.image; }
set
{
this.image = value;
this.ZoomExtents();
this.minimumZoomFactor = this.zoomFactor / 10;
this.MaximumZoomFactor = this.zoomFactor * 10;
}
}
public ImageViewerViewMode ViewMode { get; set; }
public ImageViewerZoomMode ZoomMode { get; set; }
private ImageViewerLens Lens { get; set; }
private float zoomFactor;
private float minimumZoomFactor;
private float maximumZoomFactor;
private bool panning;
private Point imageLocation;
private Point imageTranslation;
private Image image;
private AreaSelection areaSelection;
/// <summary>
/// Class constructor
/// </summary>
public ImageViewer()
{
this.DoubleBuffered = true;
this.MinimumZoomFactor = 0.1F;
this.MaximumZoomFactor = 10F;
this.ZoomStep = 2F;
this.UseScannerUI = true;
this.Lens = new ImageViewerLens();
this.ViewMode = ImageViewerViewMode.PrintSelection;
this.areaSelection = new AreaSelection();
this.Controls.Add(this.areaSelection);
// TWAIN
// Initialise twain
this.twain = new Twain(new WinFormsWindowMessageHook(this));
// Try to set the last used default scanner
if (this.AvailableScanners.Any())
{
this.twain.TransferImage += twain_TransferImage;
this.twain.ScanningComplete += twain_ScanningComplete;
if (!this.SetScanner(this.defaultScanner))
this.SetScanner(this.AvailableScanners.First());
}
}
/// <summary>
/// Saves the currently loaded image under the specified filename, in the specified format at the specified quality
/// </summary>
/// <param name="FileName">The file name (full file path) under which to save the file. File type extension is not required.</param>
/// <param name="Format">The file format under which to save the file</param>
/// <param name="Quality">The quality in percent of the image to save. This is optional and may or may not be used have an effect depending on the chosen file type. Default is maximum quality.</param>
public void SaveImage(string FileName, GraphicFormats Format, uint Quality = 100)
{
ImageCodecInfo encoder;
EncoderParameters encoderParameters;
if (FileName.IsNullOrEmpty())
throw new ArgumentNullException(FileName);
else
{
string extension = Path.GetExtension(FileName);
if (!string.IsNullOrEmpty(extension))
FileName = FileName.Replace(extension, string.Empty);
FileName += "." + Format.ToString();
}
Quality = Math.Min(Math.Max(1, Quality), 100);
if (!TryGetEncoder(Format, out encoder))
return;
encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, (int)Quality);
this.Image.Save(FileName, encoder, encoderParameters);
}
/// <summary>
/// Tries to retrieve the appropriate encoder for the chosen image format.
/// </summary>
/// <param name="Format">The image format for which to attempt retrieving the encoder</param>
/// <param name="Encoder">The encoder object in which to store the encoder if found</param>
/// <returns>True if the encoder was found, else false</returns>
private bool TryGetEncoder(GraphicFormats Format, out ImageCodecInfo Encoder)
{
ImageCodecInfo[] codecs;
codecs = ImageCodecInfo.GetImageEncoders();
Encoder = codecs.FirstOrDefault(c => c.FormatDescription.Equals(Format.ToString(), StringComparison.CurrentCultureIgnoreCase));
return Encoder != null;
}
/// <summary>
/// Set the zoom level to view the entire image in the control
/// </summary>
public void ZoomExtents()
{
if (this.Image == null)
return;
this.ZoomFactor = (float)Math.Min((double)this.Width / this.Image.Width, (double)this.Height / this.Image.Height);
this.LimitBasePoint(imageLocation.X, imageLocation.Y);
this.Invalidate();
}
/// <summary>
/// Multiply the zoom
/// </summary>
/// <param name="NewZoomFactor">The zoom factor to set for the image</param>
public void SetZoom(float NewZoomFactor)
{
this.SetZoom(NewZoomFactor, Point.Empty);
}
/// <summary>
/// Multiply the zoom
/// </summary>
/// <param name="NewZoomFactor">The zoom factor to set for the image</param>
/// <param name="ZoomLocation">The point in which to zoom in</param>
public void SetZoom(float NewZoomFactor, Point ZoomLocation)
{
int x;
int y;
float multiplier;
multiplier = NewZoomFactor / this.ZoomFactor;
x = (int)((ZoomLocation.IsEmpty ? this.Width / 2 : ZoomLocation.X - imageLocation.X) / ZoomFactor);
y = (int)((ZoomLocation.IsEmpty ? this.Height / 2 : ZoomLocation.Y - imageLocation.Y) / ZoomFactor);
if ((multiplier < 1 && this.ZoomFactor > this.MinimumZoomFactor) || (multiplier > 1 && this.ZoomFactor < this.MaximumZoomFactor))
ZoomFactor *= multiplier;
else
return;
LimitBasePoint((int)(this.Width / 2 - x * ZoomFactor), (int)(this.Height / 2 - y * ZoomFactor));
this.Invalidate();
}
/// <summary>
/// Determines the base point for positioning the image
/// </summary>
/// <param name="x">The x coordinate based on which to determine the positioning</param>
/// <param name="y">The y coordinate based on which to determine the positioning</param>
private void LimitBasePoint(int x, int y)
{
int width;
int height;
if (this.Image == null)
return;
width = this.Width - (int)(Image.Width * ZoomFactor);
height = this.Height - (int)(Image.Height * ZoomFactor);
x = width < 0 ? Math.Max(Math.Min(x, 0), width) : width / 2;
y = height < 0 ? Math.Max(Math.Min(y, 0), height) : height / 2;
imageLocation = new Point(x, y);
}
/// <summary>
/// Verify that the maximum and minimum zoom are correctly set
/// </summary>
private void SetZoomFactorLimits()
{
float maximum = this.MaximumZoomFactor;
float minimum = this.minimumZoomFactor;
this.maximumZoomFactor = Math.Max(maximum, minimum);
this.minimumZoomFactor = Math.Min(maximum, minimum);
}
/// <summary>
/// Mouse button down event
/// </summary>
protected override void OnMouseDown(MouseEventArgs e)
{
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
switch (e.Button)
{
case MouseButtons.Left:
this.SetZoom(this.ZoomFactor * this.ZoomStep, e.Location);
break;
case MouseButtons.Middle:
this.panning = true;
this.Cursor = Cursors.NoMove2D;
this.imageTranslation = e.Location;
break;
case MouseButtons.Right:
this.SetZoom(this.ZoomFactor / this.ZoomStep, e.Location);
break;
}
break;
case ImageViewerZoomMode.Lens:
if (e.Button == MouseButtons.Left)
{
this.Cursor = Cursors.Cross;
this.Lens.Location = e.Location;
this.Lens.Visible = true;
}
else
{
this.Cursor = Cursors.Default;
this.Lens.Visible = false;
}
this.Invalidate();
break;
}
base.OnMouseDown(e);
}
/// <summary>
/// Mouse button up event
/// </summary>
protected override void OnMouseUp(MouseEventArgs e)
{
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
if (e.Button == MouseButtons.Middle)
{
panning = false;
this.Cursor = Cursors.Default;
}
break;
case ImageViewerZoomMode.Lens:
break;
}
base.OnMouseUp(e);
}
/// <summary>
/// Mouse move event
/// </summary>
protected override void OnMouseMove(MouseEventArgs e)
{
switch (this.ViewMode)
{
case ImageViewerViewMode.Normal:
switch (this.ZoomMode)
{
case ImageViewerZoomMode.OnClick:
if (panning)
{
LimitBasePoint(imageLocation.X + e.X - this.imageTranslation.X, imageLocation.Y + e.Y - this.imageTranslation.Y);
this.imageTranslation = e.Location;
}
break;
case ImageViewerZoomMode.Lens:
if (this.Lens.Visible)
{
this.Lens.Location = e.Location;
}
break;
}
break;
case ImageViewerViewMode.PrintSelection:
break;
case ImageViewerViewMode.PrintPreview:
break;
}
base.OnMouseMove(e);
}
/// <summary>
/// Resize event
/// </summary>
protected override void OnResize(EventArgs e)
{
LimitBasePoint(imageLocation.X, imageLocation.Y);
this.Invalidate();
base.OnResize(e);
}
/// <summary>
/// Paint event
/// </summary>
protected override void OnPaint(PaintEventArgs pe)
{
Rectangle src;
Rectangle dst;
pe.Graphics.Clear(this.BackColor);
if (this.Image != null)
{
switch (this.ViewMode)
{
case ImageViewerViewMode.Normal:
src = new Rectangle(Point.Empty, new Size(Image.Width, Image.Height));
dst = new Rectangle(this.imageLocation, new Size((int)(this.Image.Width * this.ZoomFactor), (int)(this.Image.Height * this.ZoomFactor)));
pe.Graphics.DrawImage(this.Image, dst, src, GraphicsUnit.Pixel);
this.Lens.Draw(pe.Graphics, this.Image, this.ZoomFactor, this.imageLocation);
break;
case ImageViewerViewMode.PrintSelection:
src = new Rectangle(Point.Empty, new Size(Image.Width, Image.Height));
dst = new Rectangle(this.imageLocation, new Size((int)(this.Image.Width * this.ZoomFactor), (int)(this.Image.Height * this.ZoomFactor)));
pe.Graphics.DrawImage(this.Image, dst, src, GraphicsUnit.Pixel);
break;
case ImageViewerViewMode.PrintPreview:
break;
}
}
//Debug.WriteLine("Viewer redrawn " + DateTime.Now);
base.OnPaint(pe);
}
}
EDIT 3:
I experience further graphics-related trouble when setting the height to something large. For example, if in the AreaSelection constructor I set the height to 500, dragging the control really screws up the painting.
whilst dragging, the underlying image which shows through the rectangle lags behind
This is rather inevitable, updating the rectangle also redraws the image. And if that's expensive, say more than 30 milliseconds, then this can become noticeable to the eye.
That's a lot of milliseconds for something as simple as an image on a modern machine. The only way it can take that long is when the image is large and needs to be rescaled to fit the picture box, and when its pixel format is incompatible with the pixel format of the video adapter, so that every single pixel has to be translated from the image's pixel format to the video adapter's pixel format. That can indeed add up to multiple milliseconds.
You'll need to help PictureBox avoid burning that many CPU cycles every time the image gets painted. Do so by prescaling the image, turning it from a huge bitmap into one that better fits the control, and by altering the pixel format; the 32bppPArgb format is the best by a long shot since it matches the pixel format of the vast majority of video adapters. It draws ten times faster than all the other formats. You'll find boilerplate code to make this conversion in this answer.
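A minimal sketch of the kind of conversion meant here (this is not the linked boilerplate; the method name and sizes are made up): draw the original into a new bitmap that uses Format32bppPArgb and is already scaled to the display size.
// Sketch: prescale the image into a 32bppPArgb bitmap so PictureBox can blit it cheaply.
static Bitmap PrescaleForDisplay(Image original, int targetWidth, int targetHeight)
{
    var scaled = new Bitmap(targetWidth, targetHeight,
        System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(scaled))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.DrawImage(original, new Rectangle(0, 0, targetWidth, targetHeight));
    }
    return scaled;
}
// Usage: pictureBox.Image = PrescaleForDisplay(bigImage, pictureBox.ClientSize.Width, pictureBox.ClientSize.Height);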
