Crop an image with a different screen resolution - C#

What I need is to crop images at the same place but with different resolutions.
For example:
Image 1 created with 1024 x 768
Image 2 created with 1440 x 900
Now I have to crop the images at the same place; let's say it will be
X = 10%
Y = 10%
WIDTH = 30%
HEIGHT = 20%
I use the following code to do it, but it doesn't work the way I need.
Any clue?
THANK YOU!!!
int x = 0;
int y = 0;
int w = 0;
int h = 0;
int inputX = 10;
int inputY = 10;
int inputW = 20;
int inputH = 30;
x = int.Parse(Math.Round(decimal.Parse((__Bitmap.Width * inputX / 100).ToString()), 0).ToString());
y = int.Parse(Math.Round(decimal.Parse((__Bitmap.Height * inputY / 100).ToString()), 0).ToString());
w = int.Parse(Math.Round(decimal.Parse((__Bitmap.Width * inputW / 100).ToString()), 0).ToString());
h = int.Parse(Math.Round(decimal.Parse((__Bitmap.Height * inputH / 100).ToString()), 0).ToString());
Rectangle cropArea = new Rectangle(x, y, w,h);
Bitmap bmpCrop = __Bitmap.Clone(cropArea, __Bitmap.PixelFormat);
I mean, is there technically a logical way to do it?
I guess I can do something like this (pseudo-code):
if (Resolution == "1024x768")
int inputX = 10;
int inputY = 10;
int inputW = 20;
int inputH = 30;
else if (Resolution == "1440x900")
int inputX = 8;
int inputY = 8;
int inputW = 19;
int inputH = 28;
and so on...
I am not sure if there is any coefficient to recalculate the percentages depending on resolution... It would be like a crop factor.
UPDATE:

A quick and dirty example of what I mean by "you'll always get the same image section when calculating your crop window with percentages":
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
// 100x80 image
Image asdf = Image.FromFile("asdf.bmp", true);
// twice the size, 200x160
Image asdf2 = Image.FromFile("asdf2.bmp", true);
// same image, different aspect ratio: 200x80
Image asdf3 = Image.FromFile("asdf3.bmp", true);
Bitmap asdfBmp = new Bitmap(asdf);
Bitmap asdf2Bmp = new Bitmap(asdf2);
Bitmap asdf3Bmp = new Bitmap(asdf3);
pictureBox1.Image = cropImage(asdfBmp);
pictureBox2.Image = cropImage(asdf2Bmp);
pictureBox3.Image = cropImage(asdf3Bmp);
}
private Bitmap cropImage(Bitmap sourceBitmap)
{
double x = 0;
double y = 0;
double w = 0;
double h = 0;
double inputX = 10;
double inputY = 10;
double inputW = 50;
double inputH = 50;
// integer division " 10 / 100 " will return 0, use doubles or floats.
// furthermore you don't have to convert anything to a string or back here.
x = sourceBitmap.Width * inputX / 100.0;
y = sourceBitmap.Height * inputY / 100.0;
w = sourceBitmap.Width * inputW / 100.0;
h = sourceBitmap.Height * inputH / 100.0;
// casting to int will just cut off all decimal places. you could also round.
Rectangle cropArea = new Rectangle((int)x, (int)y, (int)w, (int)h);
return sourceBitmap.Clone(cropArea, sourceBitmap.PixelFormat);
}
}
Sources and result (screenshots omitted):
As you can see, all result images show the same section of the image. So I either still don't get what you're aiming at, or your error must be somewhere else.
Considering your unnecessary type conversions and the integer division bug, you should perhaps have a look at a C# tutorial about types.

First calculate the center of the crop. I assume that you somehow get the required x, y, w, h values. Then this center point needs to be recalculated for the second image: i.e. if the center is [25; 50], then for the 1024x768 image it is [25/1024; 50/768], which gives [2.44%; 6.51%]. So on the second image, say 1440x900, that gives [1440*2.44%; 900*6.51%] => [35; 59], in pixels of course.
Now you need the width and height of the new crop. If the aspect ratio is the same it is easy, because you can calculate the dimensions as cropWidth / firstImageWidth * secondImageWidth, but otherwise you need to multiply by the correct aspect ratio.
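A rough sketch of that recalculation (my illustration, not code from the question; it scales each axis by its own ratio, which is exactly what the percentage approach does):
// Uses System.Drawing's Rectangle and Size.
static Rectangle MapCrop(Rectangle cropOnFirst, Size firstImage, Size secondImage)
{
    // Centre of the crop as a fraction of the first image.
    double centreX = (cropOnFirst.X + cropOnFirst.Width / 2.0) / firstImage.Width;
    double centreY = (cropOnFirst.Y + cropOnFirst.Height / 2.0) / firstImage.Height;
    // Crop size scaled by the ratio of the image dimensions.
    int w = (int)Math.Round((double)cropOnFirst.Width / firstImage.Width * secondImage.Width);
    int h = (int)Math.Round((double)cropOnFirst.Height / firstImage.Height * secondImage.Height);
    // Place the same centre on the second image.
    int x = (int)Math.Round(centreX * secondImage.Width - w / 2.0);
    int y = (int)Math.Round(centreY * secondImage.Height - h / 2.0);
    return new Rectangle(x, y, w, h);
}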
Anyway, I don't think you understand the problem. If the aspect ratios of similar images are different, it is either a different part of the image or the image is distorted.
Below I've corrected your example. I won't explain it because it's quite obvious... I hope. Just take a look at the parts covered by the transparent black and white areas...

Related

Emgu crop image, crop wrong area

I'm practicing with the Emgu libraries, trying to crop an image to later apply other filters or searches. The problem is that I select a rectangle with the mouse in the ImageBox (the Emgu component, with Zoom selected in the SizeMode property) and load the cropped picture in another ImageBox, but the result is always a bit above the area I selected.
I checked the calculation with GIMP and I can see that the rectangle is OK, so I don't know what the problem can be.
Point f1=scaleCalculation(firstPoint, pIma.Size, imOri.Size);
Point f2= scaleCalculation(secondPoint, pIma.Size, imOri.Size);
imGray.ROI = new Rectangle(Math.Min(f1.X, f2.X), Math.Min(f1.Y, f2.Y)
, Math.Abs(f1.X - f2.X), Math.Abs(f1.Y-f2.Y));
imOri.ROI = imGray.ROI;
pRec.Image = imOri.Copy();
imOri.ROI = new Rectangle();
And here is the function
private Point scaleCalculation(Point real, Size pBox, Size imCalc) {
    double scale, spare;
    try {
        if (imCalc.Height > imCalc.Width) {
            scale = (double)imCalc.Height / pBox.Height;
            spare = pBox.Width - (imCalc.Width / scale);
            var x = (real.X * scale) - (spare / 4);
            x = (x < 0) ? 0 : x;
            return new Point((int)x, (int)(real.Y * scale));
        }
        else {
            scale = (double)imCalc.Width / pBox.Width;
            spare = pBox.Height - (imCalc.Height / scale);
            var y = (real.Y * scale) - (spare / 4);
            y = (y < 0) ? 0 : y;
            return new Point((int)(real.X * scale), (int)y);
        }
    }
    catch (Exception ex) {
        return new Point();
    }
}
After looking for a while I saw that the problem was in the scaleCalculation function.
private Point scaleCalculation(Point real, Size pBox, Size imCalc) {
    double scale, spare;
    try {
        if (imCalc.Height > imCalc.Width) {
            scale = (double)imCalc.Height / pBox.Height;
            spare = (pBox.Width - (imCalc.Width / scale)) / 2;
            var x = (real.X - spare);
            x = (x < 0) ? 0 : x;
            return new Point((int)(x * scale), (int)(real.Y * scale));
        }
        else {
            scale = (double)imCalc.Width / pBox.Width;
            spare = (pBox.Height - (imCalc.Height / scale)) / 2;
            var y = (real.Y - spare);
            y = (y < 0) ? 0 : y;
            return new Point((int)(real.X * scale), (int)(y * scale));
        }
    }
    catch (Exception ex) {
        return new Point();
    }
}
I am going to try to explain it here with the help of the picture.
(Xr, Yr): Coordinate that we want to know.
(Xm, Ym): Coordinate of the mouse.
(Wi, Hi): Size of the picture.
(Wp, Hp): Size of the ImageBox.
S: Space from the edge of the ImageBox to the picture.
Xr = Xm * scale
S = [Hp - (Hi / scale)] / 2
Yr = (Ym - S) * scale
(This explanation is for the case when the width is bigger than the height.)
The first thing I do is calculate the scale, using the width or the height depending on which is bigger.
To calculate S (spare), the height of the image has to be scaled to the ImageBox, subtracted from the ImageBox height, and the result divided by 2 to get the value of only one side.
From Ym (real.Y) the spare is subtracted to calculate y, and the result is checked so it is not negative.
Finally, Xm and y are each multiplied by the scale.
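To make the arithmetic concrete, a small made-up example (the sizes are mine, not from the question): an 800x600 ImageBox showing a 1600x900 image in Zoom mode, with the mouse at the centre of the box.
// Hypothetical sizes, only to illustrate the corrected formula above.
Size pBox = new Size(800, 600);     // ImageBox  (Wp, Hp)
Size imCalc = new Size(1600, 900);  // image     (Wi, Hi)
Point mouse = new Point(400, 300);  // centre of the ImageBox
double scale = (double)imCalc.Width / pBox.Width;          // 1600 / 800 = 2
double spare = (pBox.Height - imCalc.Height / scale) / 2;  // (600 - 450) / 2 = 75 (letterbox band)
int xr = (int)(mouse.X * scale);                           // 400 * 2 = 800
int yr = (int)((mouse.Y - spare) * scale);                 // (300 - 75) * 2 = 450, the image centre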

Scatter Graph In C# With PictureBox

There is an input of points of size n like below:
S = {x1,y1,x2,y2,...,xn,yn}
I want to display a scatter graph of the S sequence in a picture box. So to transform the points into picture box dimensions, I have normalized them and multiplied them by the width and height of the picture box, taking the picture box's Left and Top into account:
waveData = wave.GetWaveData();
normalizedData = GetSignedNormalized();
n = normalizedData.Count;
picW = pictureBox1.Width;
picH = pictureBox1.Height;
picL = pictureBox1.Left;
picT = pictureBox1.Top;
normalizedInPictureBox = new List<float>();
for (int i = 0; i < n; i += 2)
{
    float px = normalizedData[i];
    float py = normalizedData[i + 1];
    px = px * (picW - picL);
    py = py * (picH - picT);
    normalizedInPictureBox.Add(px);
    normalizedInPictureBox.Add(py);
}
The normalize method is:
public List<float> GetSignedNormalized()
{
    List<float> data = new List<float>();
    short max = waveData.Max();
    int m = waveData.Count;
    for (int i = 0; i < m; i++)
    {
        data.Add((float)waveData[i] / (float)max);
    }
    return data;
}
Now I am thinking the normalizedInPictureBox list contains vertices in the range of the picture box. Here is the code for drawing them,
in the Paint method of the picture box:
Graphics gr = e.Graphics;
gr.Clear(Color.Black);
for (int i = 0; i < n; i += 2)
{
    float x = normalizedInPictureBox[i];
    float y = normalizedInPictureBox[i + 1];
    gr.FillEllipse(Brushes.Green, new RectangleF(x, y, 2.25f, 2.25f));
}
But the result is shown below:
I don't know what's going wrong here, but I think the graph should be horizontal, not diagonal. The desired result is something like this:
I know that I can transform it to the center of the picture box after this, but how can I change my own result into the desired one?
Thanks in advance.
I don't really know why your code doesn't work correctly without having a look at the actual data and playing around with it, but having done chart drawing before, I suggest you go the full way: clearly define your axis ranges and do proper interpolating. It gets much clearer from there.
Here is what I came up with:
static Bitmap DrawChart(float[] Values, int Width, int Height)
{
var n = Values.Count();
if (n % 2 == 1) throw new Exception("Invalid data");
//Split the data into lists for easy access
var x = new List<float>();
var y = new List<float>();
for (int i = 0; i < n - 1; i += 2)
{
x.Add(Values[i]);
y.Add(Values[i + 1]);
}
//Chart axis limits, change here to get custom ranges like -1,+1
var minx = x.Min();
var miny = y.Min();
var maxx = x.Max();
var maxy = y.Max();
var dxOld = maxx - minx;
var dyOld = maxy - miny;
//Rescale the y-Range to add a border at the top and bottom
miny -= dyOld * 0.2f;
maxy += dyOld * 0.2f;
var dxNew = (float)Width;
var dyNew = (float)Height;
//Draw the data
Bitmap res = new Bitmap(Width, Height);
using (var g = Graphics.FromImage(res))
{
g.Clear(Color.Black);
for (int i = 0; i < x.Count; i++)
{
//Calculate the coordinates
var px = Interpolate(x[i], minx, maxx, 0, dxNew);
var py = Interpolate(y[i], miny, maxy, 0, dyNew);
//Draw, put the ellipse center around the point
g.FillEllipse(Brushes.ForestGreen, px - 1.0f, py - 1.0f, 2.0f, 2.0f);
}
}
return res;
}
static float Interpolate(float Value, float OldMin, float OldMax, float NewMin, float NewMax)
{
//Linear interpolation
return ((NewMax - NewMin) / (OldMax - OldMin)) * (Value - OldMin) + NewMin;
}
It should be relatively self-explanatory. You may consider drawing lines instead of single points; that depends on the look and feel you want to achieve. Draw other chart elements to your liking.
Important: the y-axis is actually inverted in the code above, so positive values go down and negative values go up; it is scaled like the screen coordinates. You'll figure out how to fix that :-)
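For reference, one way to do that with the code above is simply to flip the target range passed to Interpolate, so the largest y value lands at the top of the bitmap; the rest of the drawing code stays the same:
// Map miny to the bottom row and maxy to the top row instead of the other way round.
var py = Interpolate(y[i], miny, maxy, dyNew, 0);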
Example with 5000 random-y points (x is indexed):

How to draw an audio waveform to a bitmap

I am attempting to extract the audio content of a wav file and export the resultant waveform as an image (bmp/jpg/png).
So I have found the following code which draws a sine wave and works as expected:
string filename = @"C:\0\test.bmp";
int width = 640;
int height = 480;
Bitmap b = new Bitmap(width, height);
for (int i = 0; i < width; i++)
{
    int y = (int)((Math.Sin((double)i * 2.0 * Math.PI / width) + 1.0) * (height - 1) / 2.0);
    b.SetPixel(i, y, Color.Black);
}
b.Save(filename);
This works completely as expected; what I would like to do is replace
int y = (int)((Math.Sin((double)i * 2.0 * Math.PI / width) + 1.0) * (height - 1) / 2.0);
with something like
int y = converted and scaled float from monoWaveFileFloatValues
So how would I best go about doing this in the simplest manner possible?
I have 2 basic issues I need to deal with (I think):
converting float to int in a way which does not lose information; this is needed because of SetPixel(i, y, Color.Black), where x and y are both int
sample skipping on the x-axis so the waveform fits into the defined space: audio length / image width gives the number of samples to average the intensity over, which would then be represented by a single pixel
The other option is to find another method of plotting the waveform which does not rely on the approach noted above. Using a chart might be a good method, but I would like to be able to render the image directly if possible.
This is all to be run from a console application and I have the audio data (minus the header) already in a float array.
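Something along these lines is what I'm imagining (a rough sketch only, reusing b, width and height from the snippet above and assuming the samples in monoWaveFileFloatValues are already normalised to the range -1.0 .. +1.0):
// For each image column, pick a sample (later: average several) and map it to a pixel row.
for (int i = 0; i < width; i++)
{
    int sampleIndex = (int)((long)i * monoWaveFileFloatValues.Length / width); // x-axis skipping
    float sample = monoWaveFileFloatValues[sampleIndex];
    int y = (int)((1.0f - sample) * (height - 1) / 2.0f); // +1 maps to the top row, -1 to the bottom
    b.SetPixel(i, y, Color.Black);
}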
UPDATE 1
The following code enabled me to draw the required output using System.Windows.Forms.DataVisualization.Charting, but it took about 30 seconds to process 27776 samples, and whilst it does do what I need, it is far too slow to be useful. So I am still looking for a solution which will draw the bitmap directly.
System.Windows.Forms.DataVisualization.Charting.Chart chart = new System.Windows.Forms.DataVisualization.Charting.Chart();
chart.Size = new System.Drawing.Size(640, 320);
chart.ChartAreas.Add("ChartArea1");
chart.Legends.Add("legend1");
// Plot {sin(x), 0, 2pi}
chart.Series.Add("sin");
chart.Series["sin"].LegendText = args[0];
chart.Series["sin"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.Spline;
//for (double x = 0; x < 2 * Math.PI; x += 0.01)
for (int x = 0; x < audioDataLength; x ++)
{
//chart.Series["sin"].Points.AddXY(x, Math.Sin(x));
chart.Series["sin"].Points.AddXY(x, leftChannel[x]);
}
// Save sin_0_2pi.png image file
chart.SaveImage(@"c:\tmp\example.png", System.Drawing.Imaging.ImageFormat.Png);
Output shown below:
So I managed to figure it out using a code sample found here, though I made some minor changes to the way I interact with it.
public static Bitmap DrawNormalizedAudio(List<float> data, Color foreColor, Color backColor, Size imageSize, string imageFilename)
{
Bitmap bmp = new Bitmap(imageSize.Width, imageSize.Height);
int BORDER_WIDTH = 0;
float width = bmp.Width - (2 * BORDER_WIDTH);
float height = bmp.Height - (2 * BORDER_WIDTH);
using (Graphics g = Graphics.FromImage(bmp))
{
g.Clear(backColor);
Pen pen = new Pen(foreColor);
float size = data.Count;
for (float iPixel = 0; iPixel < width; iPixel += 1)
{
// determine start and end points within WAV
int start = (int)(iPixel * (size / width));
int end = (int)((iPixel + 1) * (size / width));
if (end > data.Count)
end = data.Count;
float posAvg, negAvg;
averages(data, start, end, out posAvg, out negAvg);
float yMax = BORDER_WIDTH + height - ((posAvg + 1) * .5f * height);
float yMin = BORDER_WIDTH + height - ((negAvg + 1) * .5f * height);
g.DrawLine(pen, iPixel + BORDER_WIDTH, yMax, iPixel + BORDER_WIDTH, yMin);
}
}
bmp.Save(imageFilename);
bmp.Dispose();
return null;
}
private static void averages(List<float> data, int startIndex, int endIndex, out float posAvg, out float negAvg)
{
posAvg = 0.0f;
negAvg = 0.0f;
int posCount = 0, negCount = 0;
for (int i = startIndex; i < endIndex; i++)
{
if (data[i] > 0)
{
posCount++;
posAvg += data[i];
}
else
{
negCount++;
negAvg += data[i];
}
}
if (posCount > 0)
posAvg /= posCount;
if (negCount > 0)
negAvg /= negCount;
}
In order to get it working I had to do a couple of things prior to calling the DrawNormalizedAudio method; you can see below what I needed to do:
Size imageSize = new Size();
imageSize.Width = 1000;
imageSize.Height = 500;
List<float> lst = leftChannel.OfType<float>().ToList(); //change float array to float list - see link below
DrawNormalizedAudio(lst, Color.Red, Color.White, imageSize, @"c:\tmp\example2.png");
* change float array to float list
The result of this is as follows, a waveform representation of a hand clap wav sample:
I am quite sure there needs to be some updates/revisions to the code, but it's a start and hopefully this will assist someone else who is trying to do the same thing I was.
If you can see any improvements that can be made, let me know.
UPDATES
The NaN issue mentioned in the comments is now resolved and the code above has been updated.
The waveform image has been updated to represent the output fixed by the removal of NaN values, as noted in point 1.
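For reference, one simple way to drop NaN samples before drawing is shown below; the actual fix applied above may have been different.
// Requires System.Linq; filter out NaN values before passing the list to DrawNormalizedAudio.
List<float> lst = leftChannel.Where(f => !float.IsNaN(f)).ToList();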
UPDATE 1
Average level (not RMS) was determined by summing the max level for each sample point and dividing by the total number of samples. Examples of this can be seen below:
Silent Wav File:
Hand Clap Wav File:
Brownian, Pink & White Noise Wav File:
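A sketch of how such an average level can be computed (treating the "max level for each sample point" as the absolute sample value; the exact calculation used above may differ):
// Requires System.Linq; lst is the same List<float> passed to DrawNormalizedAudio.
float averageLevel = lst.Sum(s => Math.Abs(s)) / lst.Count;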
Here is a variation you may want to study. It scales the Graphics object so it can use the float data directly.
Note how I translate (i.e. move) the drawing area twice so I can do the drawing more conveniently!
It also uses the DrawLines method for drawing. The benefit in addition to speed is that the lines may be semi-transparent or thicker than one pixel without getting artifacts at the joints. You can see the center line shine through.
To do this I convert the float data to a List<PointF> using a little LINQ magic.
I also make sure to put all GDI+ objects I create in using clauses so they will get disposed of properly.
...
using System.Windows.Forms;
using System.IO;
using System.Drawing;
using System.Drawing.Imaging;
using System.Drawing.Drawing2D;
..
..
class Program
{
static void Main(string[] args)
{
float[] data = initData(10000);
Size imgSize = new Size(1000, 400);
Bitmap bmp = drawGraph(data, imgSize , Color.Green, Color.Black);
bmp.Save("D:\\wave.png", ImageFormat.Png);
}
static float[] initData(int count)
{
float[] data = new float[count];
for (int i = 0; i < count; i++ )
{
data[i] = (float) ((Math.Sin(i / 12f) * 880 + Math.Sin(i / 15f) * 440
+ Math.Sin(i / 66) * 110) / Math.Pow( (i+1), 0.33f));
}
return data;
}
static Bitmap drawGraph(float[] data, Size size, Color ForeColor, Color BackColor)
{
Bitmap bmp = new System.Drawing.Bitmap(size.Width, size.Height,
PixelFormat.Format32bppArgb);
Padding borders = new Padding(20, 20, 10, 50);
Rectangle plotArea = new Rectangle(borders.Left, borders.Top,
size.Width - borders.Left - borders.Right,
size.Height - borders.Top - borders.Bottom);
using (Graphics g = Graphics.FromImage(bmp))
using (Pen pen = new Pen(Color.FromArgb(224, ForeColor),1.75f))
{
g.SmoothingMode = SmoothingMode.AntiAlias;
g.Clear(Color.Silver);
using (SolidBrush brush = new SolidBrush(BackColor))
g.FillRectangle(brush, plotArea);
g.DrawRectangle(Pens.LightGoldenrodYellow, plotArea);
g.TranslateTransform(plotArea.Left, plotArea.Top);
g.DrawLine(Pens.White, 0, plotArea.Height / 2,
plotArea.Width, plotArea.Height / 2);
float dataHeight = Math.Max( data.Max(), - data.Min()) * 2;
float yScale = 1f * plotArea.Height / dataHeight;
float xScale = 1f * plotArea.Width / data.Length;
g.ScaleTransform(xScale, yScale);
g.TranslateTransform(0, dataHeight / 2);
var points = data.ToList().Select((y, x) => new { x, y })
.Select(p => new PointF(p.x, p.y)).ToList();
g.DrawLines(pen, points.ToArray());
g.ResetTransform();
g.DrawString(data.Length.ToString("###,###,###,##0") + " points plotted.",
new Font("Consolas", 14f), Brushes.Black,
plotArea.Left, plotArea.Bottom + 2f);
}
return bmp;
}
}

Correctly executing bicubic resampling

I've been experimenting with the image bicubic resampling algorithm present in the AForge framework with the idea of introducing something similar into my image processing solution. See the original algorithm here and interpolation kernel here
Unfortunately I've hit a wall. It looks to me like somehow I am calculating the sample destination position incorrectly, probably due to the algorithm being designed for Format24bppRgb images, whereas I am using the Format32bppPArgb format.
Here's my code:
public Bitmap Resize(Bitmap source, int width, int height)
{
int sourceWidth = source.Width;
int sourceHeight = source.Height;
Bitmap destination = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
destination.SetResolution(source.HorizontalResolution, source.VerticalResolution);
using (FastBitmap sourceBitmap = new FastBitmap(source))
{
using (FastBitmap destinationBitmap = new FastBitmap(destination))
{
double heightFactor = sourceWidth / (double)width;
double widthFactor = sourceHeight / (double)height;
// Coordinates of source points
double ox, oy, dx, dy, k1, k2;
int ox1, oy1, ox2, oy2;
// Width and height decreased by 1
int maxHeight = height - 1;
int maxWidth = width - 1;
for (int y = 0; y < height; y++)
{
// Y coordinates
oy = (y * widthFactor) - 0.5;
oy1 = (int)oy;
dy = oy - oy1;
for (int x = 0; x < width; x++)
{
// X coordinates
ox = (x * heightFactor) - 0.5f;
ox1 = (int)ox;
dx = ox - ox1;
// Destination color components
double r = 0;
double g = 0;
double b = 0;
double a = 0;
for (int n = -1; n < 3; n++)
{
// Get Y cooefficient
k1 = Interpolation.BiCubicKernel(dy - n);
oy2 = oy1 + n;
if (oy2 < 0)
{
oy2 = 0;
}
if (oy2 > maxHeight)
{
oy2 = maxHeight;
}
for (int m = -1; m < 3; m++)
{
// Get X cooefficient
k2 = k1 * Interpolation.BiCubicKernel(m - dx);
ox2 = ox1 + m;
if (ox2 < 0)
{
ox2 = 0;
}
if (ox2 > maxWidth)
{
ox2 = maxWidth;
}
Color color = sourceBitmap.GetPixel(ox2, oy2);
r += k2 * color.R;
g += k2 * color.G;
b += k2 * color.B;
a += k2 * color.A;
}
}
destinationBitmap.SetPixel(
x,
y,
Color.FromArgb(a.ToByte(), r.ToByte(), g.ToByte(), b.ToByte()));
}
}
}
}
source.Dispose();
return destination;
}
And the kernel which should represent the given equation on Wikipedia
public static double BiCubicKernel(double x)
{
    if (x < 0)
    {
        x = -x;
    }
    double bicubicCoef = 0;
    if (x <= 1)
    {
        bicubicCoef = (1.5 * x - 2.5) * x * x + 1;
    }
    else if (x < 2)
    {
        bicubicCoef = ((-0.5 * x + 2.5) * x - 4) * x + 2;
    }
    return bicubicCoef;
}
Here's the original image at 500px x 667px.
And the image resized to 400px x 543px.
Visually it appears that the image is over-reduced and then the same pixels are repeatedly applied once we hit a particular point.
Can anyone give me some pointers here to solve this?
Note FastBitmap is a wrapper for Bitmap that uses LockBits to manipulate pixels in memory. It works well with everything else I apply it to.
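For anyone unfamiliar with the idea, here is a minimal sketch of what such a LockBits wrapper can look like. This is not my actual FastBitmap, just an illustration: it assumes 32bpp ARGB data, needs unsafe code enabled, and uses System.Drawing / System.Drawing.Imaging.
public sealed class SimpleFastBitmap : IDisposable
{
    private readonly Bitmap bitmap;
    private readonly BitmapData data;

    public SimpleFastBitmap(Bitmap bitmap)
    {
        this.bitmap = bitmap;
        // Lock the whole image once; GDI+ converts to 32bpp ARGB for us if needed.
        this.data = bitmap.LockBits(
            new Rectangle(0, 0, bitmap.Width, bitmap.Height),
            ImageLockMode.ReadWrite,
            PixelFormat.Format32bppArgb);
    }

    public unsafe Color GetPixel(int x, int y)
    {
        byte* row = (byte*)data.Scan0.ToPointer() + (y * data.Stride);
        return Color.FromArgb(((int*)row)[x]);
    }

    public unsafe void SetPixel(int x, int y, Color color)
    {
        byte* row = (byte*)data.Scan0.ToPointer() + (y * data.Stride);
        ((int*)row)[x] = color.ToArgb();
    }

    public void Dispose()
    {
        bitmap.UnlockBits(data);
    }
}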
Edit
As per request, here are the methods involved in ToByte:
public static byte ToByte(this double value)
{
    return Convert.ToByte(ImageMaths.Clamp(value, 0, 255));
}

public static T Clamp<T>(T value, T min, T max) where T : IComparable<T>
{
    if (value.CompareTo(min) < 0)
    {
        return min;
    }
    if (value.CompareTo(max) > 0)
    {
        return max;
    }
    return value;
}
You are limiting your ox2 and oy2 to destination image dimensions, instead of source dimensions.
Change this:
// Width and height decreased by 1
int maxHeight = height - 1;
int maxWidth = width - 1;
to this:
// Width and height decreased by 1
int maxHeight = sourceHeight - 1;
int maxWidth = sourceWidth - 1;
Well, I've come across a very strange thing, which might or might not be the source of the problem.
I started trying to implement a convolution matrix myself and encountered strange behaviour. I was testing the code on a small 4x4 pixel image. The code is the following:
var source = Bitmap.FromFile(@"C:\Users\Public\Pictures\Sample Pictures\Безымянный.png");
using (FastBitmap sourceBitmap = new FastBitmap(source))
{
    for (int TY = 0; TY < 4; TY++)
    {
        for (int TX = 0; TX < 4; TX++)
        {
            Color color = sourceBitmap.GetPixel(TX, TY);
            Console.Write(color.B.ToString().PadLeft(5));
        }
        Console.WriteLine();
    }
}
Although I'm printing out only the blue channel value, it's still clearly incorrect.
On the other hand, your solution partially works, which makes the thing I've found kind of irrelevant. One more guess I have: what is your system's DPI?
From what I have found helpful, here are some links:
C++ implementation of bicubic interpolation on a matrix
C# implementation of bicubic interpolation, lacking the part about rescaling
Thread on gamedev.net which has an almost working solution
That's my answer so far, but I will try further.

Scaling an image using the mouse in a WinForms application?

I'm trying to use the position of the mouse to calculate the scaling factor for scaling an image. Basically, the further you get from the center of the image, the bigger it gets; and the closer to the center you get, the smaller it gets. I have some code so far, but it's acting really strange and I have absolutely no more ideas. First, I'll let you know that one thing I was trying to do is average out 5 distances to get a smoother resize animation. Here's my code:
private void pictureBoxScale_MouseMove(object sender, MouseEventArgs e)
{
if (rotateScaleMode && isDraggingToScale)
{
// For Scaling
int sourceWidth = pictureBox1.Image.Width;
int sourceHeight = pictureBox1.Image.Height;
float dCurrCent = 0; // distance between the current mouse pos and the center of the image
float dPrevCent = 0; // distance between the previous mouse pos and the center of the image
System.Drawing.Point imgCenter = new System.Drawing.Point();
imgCenter.X = pictureBox1.Location.X + (sourceWidth / 2);
imgCenter.Y = pictureBox1.Location.Y + (sourceHeight / 2);
// Calculating the distance between the current mouse location and the center of the image
dCurrCent = (float)Math.Sqrt(Math.Pow(e.X - imgCenter.X, 2) + Math.Pow(e.Y - imgCenter.Y, 2));
// Calculating the distance between the previous mouse location and the center of the image
dPrevCent = (float)Math.Sqrt(Math.Pow(prevMouseLoc.X - imgCenter.X, 2) + Math.Pow(prevMouseLoc.Y - imgCenter.Y, 2));
if (smoothScaleCount < 5)
{
dCurrCentSmooth[smoothScaleCount] = dCurrCent;
dPrevCentSmooth[smoothScaleCount] = dPrevCent;
}
if (smoothScaleCount == 4)
{
float currCentSum = 0;
float prevCentSum = 0;
for (int i = 0; i < 4; i++)
{
currCentSum += dCurrCentSmooth[i];
}
for (int i = 0; i < 4; i++)
{
prevCentSum += dPrevCentSmooth[i];
}
float scaleAvg = (currCentSum / 5) / (prevCentSum / 5);
int destWidth = (int)(sourceWidth * scaleAvg);
int destHeight = (int)(sourceHeight * scaleAvg);
// If statement is for limiting the size of the image
if (destWidth > (currentRotatedImage.Width / 2) && destWidth < (currentRotatedImage.Width * 3) && destHeight > (currentRotatedImage.Height / 2) && destWidth < (currentRotatedImage.Width * 3))
{
AForge.Imaging.Filters.ResizeBilinear resizeFilter = new AForge.Imaging.Filters.ResizeBilinear(destWidth, destHeight);
pictureBox1.Image = resizeFilter.Apply((Bitmap)currentRotatedImage);
pictureBox1.Size = pictureBox1.Image.Size;
pictureBox1.Refresh();
}
smoothScaleCount = -1;
}
prevMouseLoc = e.Location;
currentScaledImage = pictureBox1.Image;
smoothScaleCount++;
}
}
EDIT: Thanks to Ben Voigt and Ray, everything works well now. The only thing wrong is that, with the way I'm doing it, the image doesn't keep its ratio; but I'll fix that later. Here's what I have, for those who want to know:
private void pictureBoxScale_MouseMove(object sender, MouseEventArgs e)
{
if (rotateScaleMode && isDraggingToScale)
{
// For Scaling
int sourceWidth = pictureBox1.Image.Width;
int sourceHeight = pictureBox1.Image.Height;
int scale = e.X + p0.X; //p0 is the location of the mouse when the button first came down
int destWidth = (int)(sourceWidth + (scale/10)); //I divide it by 10 to make it slower
int destHeight = (int)(sourceHeight + (scale/10));
if (destWidth > 20 && destWidth < 1000 && destHeight > 20 && destWidth < 1000)
{
AForge.Imaging.Filters.ResizeBilinear resizeFilter = new AForge.Imaging.Filters.ResizeBilinear(destWidth, destHeight);
pictureBox1.Image = resizeFilter.Apply((Bitmap)currentRotatedImage);
pictureBox1.Size = pictureBox1.Image.Size;
pictureBox1.Refresh();
}
currentScaledImage = pictureBox1.Image; // This is only so I can rotate the scaled image in another part of my program
}
}
Your scaling won't be smooth if you use the center of the image. Instead, use the initial mouse-down point (call it p0). Also, rather than using the distance from that point to the current drag point (e), just take the difference along one axis (e.g. exp(e.Y - p0.Y)).
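A rough sketch of that idea (p0 and originalImage are placeholder names; the untouched original should be scaled, as noted below):
// Scale factor grows/shrinks exponentially with the vertical drag distance.
// The divisor controls how quickly the zoom responds; tune to taste.
double scale = Math.Exp((e.Y - p0.Y) / 200.0);
int destWidth = (int)(originalImage.Width * scale);
int destHeight = (int)(originalImage.Height * scale);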
It looks to me (from the scaleAvg calculation) like you're rescaling the already-scaled image. This is a really bad idea because scaling is lossy and the errors will accumulate. Instead, keep a copy of the crisp original image and scale the original directly to the current size.
Also, I would suggest using a different norm, perhaps Manhattan distance, instead of the current Cartesian distance which is a two-norm.
If you do continue using the two-norm, consider getting rid of the Math.Pow calls. They are probably such a small part of the overall scaling complexity that it doesn't matter, but multiplying by itself should be much faster than Math.Pow for squaring a number.
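For example, the dCurrCent calculation can square by multiplying instead of calling Math.Pow:
float dx = e.X - imgCenter.X;
float dy = e.Y - imgCenter.Y;
dCurrCent = (float)Math.Sqrt(dx * dx + dy * dy);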
