Minimum size that matches aspect ratio - c#

I need to find the minimum size that has an aspect ratio of exactly (or within 0.001) some value. Are there any quick math tricks or framework tricks for doing this?
Here's the pseudocode for the current bad idea I had, running in O(n^2):
epsilon = 0.001;
from x = 1 to MAX_X
{
    from y = 1 to MAX_Y
    {
        if (Abs(x / y - aspectRatio) <= epsilon)
        {
            return new Size(x, y);
        }
    }
}
return Size.Empty;

You need to find the greatest common divisor and divide the width and height by it. The algorithm is Euclid's and is about two thousand three hundred years old. Details are here.

You can write aspectRatio as a fraction (if you want it up to a precision of 0.001, then you can use round(aspectRatio * 1000) / 1000).
Then, simplify this fraction. The resulting fraction is the x/y you're looking for.
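As a hedged sketch of that suggestion (the method name, the System.Drawing.Size return type and the choice of 1000 as the denominator are mine, matching the 0.001 precision mentioned above):
using System;
using System.Drawing;

static class AspectRatioHelper
{
    // Express the ratio as round(aspectRatio * 1000) / 1000, then reduce by the GCD.
    public static Size MinSizeForAspectRatio(double aspectRatio)
    {
        int numerator = (int)Math.Round(aspectRatio * 1000);
        int denominator = 1000;
        int gcd = Gcd(numerator, denominator);
        return new Size(numerator / gcd, denominator / gcd);
    }

    // Euclid's algorithm.
    private static int Gcd(int a, int b)
    {
        while (b != 0)
        {
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }
}
For example, aspectRatio = 1.5 becomes 1500/1000, which reduces to Size(3, 2).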

A quicker way, but still not formulaic, would be to only look at possible y values instead of iterating up to MAX_Y, e.g.:
static Size FindMinSize(double requiredRatio, double epsilon)
{
    int x = 1;
    do
    {
        int y = (int)(x * requiredRatio);
        if (Test(x, y, requiredRatio, epsilon))
        {
            return new Size(x, y);
        }
        y = (int)((x + 1) * requiredRatio);
        if (Test(x, y, requiredRatio, epsilon))
        {
            return new Size(x, y);
        }
        x++;
    } while (x != int.MaxValue);
    return new Size(0, 0);
}

static bool Test(int x, int y, double requiredRatio, double epsilon)
{
    double aspectRatio = ((double)y) / x;
    return Math.Abs(aspectRatio - requiredRatio) < epsilon;
}

Instead of testing all possible combinations, just increase the side that gets you closer to the aspect ratio:
public static Size GetSizeFromAspectRatio(double aspectRatio) {
    double epsilon = 0.001;
    int x = 1;
    int y = 1;
    while (true) {
        double a = (double)x / (double)y;
        if (Math.Abs(aspectRatio - a) < epsilon) break;
        if (a < aspectRatio) {
            x++;
        } else {
            y++;
        }
    }
    return new Size(x, y);
}

The aspect ratio is the ratio between x and y. You can define the aspect ratio as x / y or y / x.
The minimum size satisfying any ratio is the degenerate 0 / 0, so some other minimum has to be defined, either a minimum x or a minimum y:
min x = (min y * x) / y
min y = (min x * y) / x
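For example (my numbers): if the ratio is given as x : y = 1920 : 1080 and you require min y = 9, then min x = (9 * 1920) / 1080 = 16, i.e. a 16 x 9 minimum size.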


Moving hexagons with cube coordinates into a square formation

It works for most grid sizes:
The problem starts when the height is a lot larger than the width (3x9, 3x11, 5x11, etc.).
As you can see, the first line is out of place; increasing the height further will repeat this pattern.
Here is the code (note: my z and y for cube coordinates are swapped):
void SpawnHexGrid(int Width, int Height)
{
    int yStart = -Height / 2;
    int yEnd = yStart + Height;
    for (int y = yStart; y < yEnd; y++)
    {
        int xStart = -(Width + y) / 2;
        int xEnd = xStart + Width;
        if (Width % 2 == 0)
        {
            if (y % 2 == 0)
            {
                xStart++;
            }
        }
        else
        {
            if (y % 2 != 0)
            {
                xStart++;
            }
        }
        Debug.Log("y: " + y + " , Start: " + xStart + " , End: " + xEnd);
        for (int x = xStart; x < xEnd; x++)
        {
            SetHexagon(new Cube(x, y));
        }
    }
}
Edit:
After changing to @Idle_Mind's solution my grid looks like this:
Edit again:
I found a solution. After changing to @Idle_Mind's solution I corrected the tilting by using y again:
int xStart = -Width / 2 - (y / 2);
but this caused a similar problem as before. This time I realized it had something to do with the way an int is rounded: when y is negative, xStart would be 1 lower than expected, so I just add 1 whenever y is negative:
int add = 0;
if (y < 0)
{
    add = 1;
}
int xStart = -Width / 2 - ((y - add) / 2);
This works like a charm now, thanks everyone.
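For what it's worth, the rounding behaviour described above is C#'s integer division truncating toward zero rather than flooring; a quick illustration (not part of the original code):
Console.WriteLine(-3 / 2);                    // -1: truncation toward zero
Console.WriteLine((int)Math.Floor(-3 / 2.0)); // -2: the floor the grid code expects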
Change your SpawnHexGrid() to:
void SpawnHexGrid(int Width, int Height)
{
    int xStart = -Width / 2;
    int yStart = -Height / 2;
    int yEnd = yStart + Height;
    for (int y = yStart; y < yEnd; y++)
    {
        int xEnd = xStart + Width + (y % 2 == 0 ? 0 : -1);
        for (int x = xStart; x < xEnd; x++)
        {
            SetHexagon(new Cube(x, y));
        }
    }
}
My test rig:
---------- EDIT ----------
I don't understand why you're using the y value as part of your calculation for x. Make the x constant for a whole column, as you'd expect for a regular grid. In my code, the shorter rows still start at the SAME x coord as the longer ones; it's their length that changes. Then, when drawing, I simply calculate the position for a normal grid but add half the width of the hexagon for all odd y positions, producing the offset you need for the hexagons.
For example, here is a 5x5 grid drawn "normally" without offsetting the odd Y rows. It's clear that the starting X coordinate for all rows is the same:
So the stored x,y coord are all based on a normal grid, but the drawing code shifts the odd y rows. Here's where I change the X coord, only for drawing, of the odd y rows:
if (pt.Y % 2 != 0)
{
    center.Offset(Width / 2, 0);
}
So, after adding the offset (again, only at drawing time) for odd Y rows, it now looks like:
And here is the grid shown with the internal coord of each hexagon being displayed:
Hope that makes it clear how I approached it.
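A hedged sketch of that drawing-time calculation (the System.Drawing.Point return type, the hexWidth/hexHeight parameters and the 0.75 vertical packing factor are my assumptions, not taken from the answer):
// Stored grid coordinates stay rectangular; only the pixel position of odd
// rows is shifted right by half a hexagon width at draw time.
static Point GridToPixel(int x, int y, int hexWidth, int hexHeight)
{
    int px = x * hexWidth;
    int py = (int)(y * hexHeight * 0.75); // pointy-top rows overlap vertically
    if (y % 2 != 0)
    {
        px += hexWidth / 2;               // offset applies only when drawing
    }
    return new Point(px, py);
}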
I believe you're just alternating a different row size for a hexagonal map. If so, something like this should work:
class Program
{
    static void Main(string[] args)
    {
        const int Height = 4;
        const int Width = 4;
        for (int y = 0; y < Height; ++y)
        {
            int rowSize = y % 2 > 0 ? Width + 1 : Width;
            for (int x = 0; x < rowSize; ++x)
            {
                Console.WriteLine($"{x}:{y}");
            }
        }
        Console.ReadLine();
    }
}

Gabor Filter implementation in Frequency domain

Here we have the spatial-domain implementation of the Gabor filter. However, I need to implement a Gabor filter in the frequency domain for performance reasons.
I have found the frequency-domain equation of the Gabor filter:
I am actually in doubt about the correctness and/or applicability of this formula.
Source Code
So, I have implemented the following:
public partial class GaborFfftForm : Form
{
    private double Gabor(double u, double v, double f0, double theta, double a, double b)
    {
        double rad = Math.PI / 180 * theta;
        double uDash = u * Math.Cos(rad) + v * Math.Sin(rad);
        double vDash = (-1) * u * Math.Sin(rad) + v * Math.Cos(rad);
        return Math.Exp((-1) * Math.PI * Math.PI * ((uDash - f0) / (a * a)) + (vDash / (b * b)));
    }

    public Array2d<Complex> GaborKernelFft(int sizeX, int sizeY, double f0, double theta, double a, double b)
    {
        int halfX = sizeX / 2;
        int halfY = sizeY / 2;
        Array2d<Complex> kernel = new Array2d<Complex>(sizeX, sizeY);
        for (int u = -halfX; u < halfX; u++)
        {
            for (int v = -halfY; v < halfY; v++)
            {
                double g = Gabor(u, v, f0, theta, a, b);
                kernel[u + halfX, v + halfY] = new Complex(g, 0);
            }
        }
        return kernel;
    }

    public GaborFfftForm()
    {
        InitializeComponent();
        Bitmap image = DataConverter2d.ReadGray(StandardImage.LenaGray);
        Array2d<double> dImage = DataConverter2d.ToDouble(image);
        int newWidth = Tools.ToNextPowerOfTwo(dImage.Width) * 2;
        int newHeight = Tools.ToNextPowerOfTwo(dImage.Height) * 2;
        double u0 = 0.2;
        double v0 = 0.2;
        double alpha = 10;//1.5;
        double beta = alpha;
        Array2d<Complex> kernel2d = GaborKernelFft(newWidth, newHeight, u0, v0, alpha, beta);
        dImage.PadTo(newWidth, newHeight);
        Array2d<Complex> cImage = DataConverter2d.ToComplex(dImage);
        Array2d<Complex> fImage = FourierTransform.ForwardFft(cImage);
        // FFT convolution .................................................
        Array2d<Complex> fOutput = new Array2d<Complex>(newWidth, newHeight);
        for (int x = 0; x < newWidth; x++)
        {
            for (int y = 0; y < newHeight; y++)
            {
                fOutput[x, y] = fImage[x, y] * kernel2d[x, y];
            }
        }
        Array2d<Complex> cOutput = FourierTransform.InverseFft(fOutput);
        Array2d<double> dOutput = Rescale2d.Rescale(DataConverter2d.ToDouble(cOutput));
        //dOutput.CropBy((newWidth-image.Width)/2, (newHeight - image.Height)/2);
        Bitmap output = DataConverter2d.ToBitmap(dOutput, image.PixelFormat);
        Array2d<Complex> cKernel = FourierTransform.InverseFft(kernel2d);
        cKernel = FourierTransform.RemoveFFTShift(cKernel);
        Array2d<double> dKernel = DataConverter2d.ToDouble(cKernel);
        Array2d<double> dRescaledKernel = Rescale2d.Rescale(dKernel);
        Bitmap kernel = DataConverter2d.ToBitmap(dRescaledKernel, image.PixelFormat);
        pictureBox1.Image = image;
        pictureBox2.Image = kernel;
        pictureBox3.Image = output;
    }
}
Just concentrate on the algorithmic steps at this time.
I have generated a Gabor kernel in the frequency domain. Since the kernel is already in the frequency domain, I didn't apply the FFT to it, whereas the image is FFT-ed. Then I multiplied the kernel and the image to achieve FFT convolution. The result is inverse-FFT-ed and converted back to a Bitmap as usual.
Output
The kernel looks okay. But the filter output doesn't look very promising (or does it?).
The orientation (theta) doesn't have any effect on the kernel.
The calculation/formula frequently suffers from a divide-by-zero exception when values change.
How can I fix those problems?
Oh, and also:
what do the parameters α and β represent?
what should be the appropriate value of f0?
Update:
I have modified my code according to @Cris Luengo's answer.
private double Gabor(double u, double v, double u0, double v0, double a, double b)
{
    double p = (-2) * Math.PI * Math.PI;
    double q = (u - u0) / (a * a);
    double r = (v - v0) / (b * b);
    return Math.Exp(p * (q + r));
}

public Array2d<Complex> GaborKernelFft(int sizeX, int sizeY, double u0, double v0, double a, double b)
{
    double xx = sizeX;
    double yy = sizeY;
    double halfX = (xx - 1) / xx;
    double halfY = (yy - 1) / yy;
    Array2d<Complex> kernel = new Array2d<Complex>(sizeX, sizeY);
    for (double u = 0; u <= halfX; u += 0.1)
    {
        for (double v = 0; v <= halfY; v += 0.1)
        {
            double g = Gabor(u, v, u0, v0, a, b);
            int x = (int)(u * 10);
            int y = (int)(v * 10);
            kernel[x, y] = new Complex(g, 0);
        }
    }
    return kernel;
}
where,
double u0 = 0.2;
double v0 = 0.2;
double alpha = 10;//1.5;
double beta = alpha;
I am not sure whether this is a good output.
There seems to be a typo in the equation for the Gabor filter that you found. The Gabor filter is a translated Gaussian in the frequency domain. Hence, it needs to have u² and v² in the exponent.
Equation (2) in your link seems more sensible, but still misses a 2:
exp( -2(πσ)² (u-f₀)² )
This is the 1D case; it is the filter we want to use in the direction θ. We now multiply in the perpendicular direction, v, with a non-shifted Gaussian. I set α and β to be the inverses of the two sigmas:
exp( -2(π/α)² (u-f₀)² ) exp( -2(π/β)² v² ) = exp( -2π²((u-f₀)/α)² - 2π²(v/β)² )
You should implement the above equation with u and v rotated over θ, as you already do.
Also, u and v should run from -0.5 to 0.5, not from -sizeX/2 to sizeX/2. And that is assuming your FFT sets the origin in the middle of the image, which is not common. Typically the FFT algorithms set the origin in a corner of the image. So you should probably have your u and v run from 0 to (sizeX-1)/sizeX instead.
With a corrected implementation as above, you should set f₀ to be between 0 and 0.5 (try 0.2 to start with), and α and β should be small enough that the Gaussian doesn't reach the 0 frequency (you want the filter to be 0 there).
In the frequency domain, your filter will look like a rotated Gaussian away from the origin.
In the spatial domain, the magnitude of your filter should look again like a Gaussian. The imaginary component should look like this (picture links to Wikipedia page I found it on):
(i.e. it is anti-symmetric (odd) in the θ direction), possibly with more lobes depending on α, β and f₀. The real component should be similar but symmetric (even), with a maximum in the middle. Note that after IFFT, you might need to shift the origin from the top-left corner to the middle of the image (Google "fftshift").
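As a minimal sketch of that shift, using System.Numerics.Complex and a plain 2D array rather than the Array2d type from the question, and assuming even dimensions:
// Swap diagonally opposite quadrants so the zero-frequency sample moves
// between the top-left corner and the centre of the array (and back).
static Complex[,] FftShift(Complex[,] data)
{
    int w = data.GetLength(0), h = data.GetLength(1);
    var shifted = new Complex[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            shifted[(x + w / 2) % w, (y + h / 2) % h] = data[x, y];
    return shifted;
}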
Note that if you set α and β to be equal, the rotation of the u-v plane is irrelevant. In this case, you can use cartesian coordinates instead of polar coordinates to define the frequency. That is, instead of defining f₀ and θ as parameters, you define u₀ and v₀. In the exponent you then replace u-f₀ with u-u₀, and v with v-v₀.
The code after the edit of the question misses the square again. I would write the code as follows:
private double Gabor(double u, double v, double u0, double v0, double a, double b)
{
    double p = (-2) * Math.PI * Math.PI;
    double q = (u - u0) / a;
    double r = (v - v0) / b;
    return Math.Exp(p * (q * q + r * r));
}

public Array2d<Complex> GaborKernelFft(int sizeX, int sizeY, double u0, double v0, double a, double b)
{
    double halfX = sizeX / 2;
    double halfY = sizeY / 2;
    Array2d<Complex> kernel = new Array2d<Complex>(sizeX, sizeY);
    for (double y = 0; y < sizeY; y++)
    {
        double v = y / sizeY;
        // v -= HalfY; // whether this is necessary or not depends on your FFT
        for (double x = 0; x < sizeX; x++)
        {
            double u = x / sizeX;
            // u -= HalfX; // whether this is necessary or not depends on your FFT
            double g = Gabor(u, v, u0, v0, a, b);
            kernel[(int)x, (int)y] = new Complex(g, 0);
        }
    }
    return kernel;
}

How to choose random float in a square weighted against arbitrary zones?

I have a square, which goes from -1 to 1 in x and y.
Choosing a random point in this square is pretty easy:
Random r = new Random();
float x = (float)Math.Round(r.NextDouble() * 2 - 1, 4);
float y = (float)Math.Round(r.NextDouble() * 2 - 1, 4);
This gives me any point, with equal probability, in my square.
It would also be pretty easy to just remove a section of the square from the possibilities:
Random r = new Random();
float x = (float)Math.Round(r.NextDouble() * 1.5 - 1, 4);
float y = (float)Math.Round(r.NextDouble() * 2 - 1, 4);
But what I'm really struggling to do is to weight the random point towards a certain zone. Specifically, I would like the section highlighted here to be more likely, and everything else (except the red section, which is still off-limits) should have a lower probability depending on the distance from the highlighted line. The furthest point should have 0 chance, and the rest a chance which is higher the closer to the line, with points exactly on my line (since I round them to a specific decimal, there are points which are on the line) having the best odds.
Sorry for the ugly pictures. This is the best I could do in Paint to show my thoughts.
The "most likely" area is an empty diamond (just the one with the vertices (-1, 0), (0, -0.5), (1, 0), (0, 0.5)), with of course the red area overriding the weighting because it's off-limits. The red area is anything with x > 0.5.
Does anyone know how to do this? I'm working in C#, but honestly an algorithm in any non-esoteric language would do the trick. I'm completely lost as to how to proceed.
A commenter noted that adding the off-limits zone to the algorithm is an added difficulty with no real use.
You can assume that I'll take care of the off-limits section by myself after running the weighting algorithm. Since it's just 25% of the area, most of the time it wouldn't even make a difference performance-wise if I just did this:
while (x > 0.5)
{
    runAlgorithmAgain();
}
So you can safely ignore that part for answers.
OK, here are my thoughts on this matter. I would like to propose an algorithm which, with some rejections, might solve your problem. Note that, due to the need for acceptance-rejection, it might be slower than you expected.
We sample in a single quadrant (say, the lower-left one), then use reflection to put the point into any other quadrant, and then reject red-zone points.
Basically, sampling in a quadrant is a two-step process. First, we sample a position on the border line. As soon as we have a position on the line, we sample from a distribution with a bell-like shape (Gaussian or Laplace, for example) and move the point in the direction orthogonal to the border line.
The code compiles but is completely untested, so please check everything, starting with the numbers.
using System;

namespace diamond
{
    class Program
    {
        public const double SQRT_5 = 2.2360679774997896964091736687313;

        public static double gaussian((double mu, double sigma) N, Random rng) {
            var phi = 2.0 * Math.PI * rng.NextDouble();
            var r = Math.Sqrt(-2.0 * Math.Log(1.0 - rng.NextDouble()));
            return N.mu + N.sigma * r * Math.Sin(phi);
        }

        public static double laplace((double mu, double sigma) L, Random rng) {
            var v = -L.sigma * Math.Log(1.0 - rng.NextDouble());
            return L.mu + ((rng.NextDouble() < 0.5) ? v : -v);
        }

        public static double sample_length(double lmax, Random rng) {
            return lmax * rng.NextDouble();
        }

        public static (double, double) move_point((double x, double y) pos, (double wx, double wy) dir, double l) {
            return (pos.x + dir.wx * l, pos.y + dir.wy * l);
        }

        public static (double, double) sample_in_quadrant((double x0, double y0) pos, (double wx, double wy) dir, double lmax, double sigma, Random rng) {
            while (true) {
                var l = sample_length(lmax, rng);
                (double x, double y) = move_point(pos, dir, l);
                var dort = (dir.wy, -dir.wx); // orthogonal to the line direction
                var s = gaussian((0.0, sigma), rng); // could be laplace instead of gaussian
                (x, y) = move_point((x, y), dort, s);
                if (x >= -1.0 && x <= 0.0 && y >= 0.0 && y <= 1.0) // acceptance/rejection
                    return (x, y);
            }
        }

        public static (double, double) sample_in_plane((double x, double y) pos, (double wx, double wy) dir, double lmax, double sigma, Random rng) {
            (double x, double y) = sample_in_quadrant(pos, dir, lmax, sigma, rng);
            if (rng.NextDouble() < 0.25)
                return (x, y);
            if (rng.NextDouble() < 0.5) // reflection over X
                return (x, -y);
            if (rng.NextDouble() < 0.75) // reflection over Y
                return (-x, y);
            return (-x, -y); // reflection over X&Y
        }

        static void Main(string[] args) {
            var rng = new Random(32345);
            var L = 0.5 * SQRT_5 + 0.5 / SQRT_5; // sampling length, BIGGER THAN JUST A SEGMENT IN THE QUADRANT
            (double x0, double y0) pos = (-1.0, 0.0); // initial position
            (double wx, double wy) dir = (2.0 / SQRT_5, 1.0 / SQRT_5); // directional cosines, wx*wx + wy*wy = 1
            double sigma = 0.2; // that's a value to play with

            // last rejection stage
            (double x, double y) pt;
            while (true) {
                pt = sample_in_plane(pos, dir, L, sigma, rng);
                if (pt.x < 0.5) // reject points in the red area, accept otherwise
                    break;
            }
            Console.WriteLine(String.Format("{0} {1}", pt.x, pt.y));
        }
    }
}

C# - Joystick sensitivity formula

How do I calculate joystick sensitivity, taking into account the deadzone and the circular nature of the stick?
I'm working on a class that represents a stick of a gamepad. I'm having trouble with the mathematics of it, specifically with the sensitivity part. Sensitivity should make the joystick's distance from center non-linear. I applied sensitivity to an Xbox trigger without problems, but because a joystick has two axes (X and Y), I'm having trouble with the math involved.
I want to apply circular sensitivity to the stick, but I don't really know how to do that, especially taking into account the other calculations on the axes (like deadzone, distance from center, etc.). How should I accomplish that?
Additional details about the problem
Right now I have a temporary fix which is not working very well. It seems to work when the joystick direction is either horizontal or vertical, but when I move it in a diagonal direction it seems bugged. My Joystick class has a Distance property, which retrieves the stick's distance from center (a value from 0 to 1). My Distance property works well, but when I apply the sensitivity, the retrieved distance is less than 1 in diagonal directions if I move my joystick around, when it should be exactly 1 no matter the direction.
Below, I'm including a simplified version of my Joystick class, where I removed most of the irrelevant code. The calculated X and Y positions of the axes are retrieved by the ComputedX and ComputedY properties. Each of these properties should return its axis' final position (from -1 to 1), taking into account all the modifiers (deadzone, saturation, sensitivity, etc.).
public class Joystick
{
    // Properties

    // Physical axis positions
    public double X { get; set; }
    public double Y { get; set; }

    // Virtual axis positions, with all modifiers applied (like deadzone, sensitivity, etc.)
    public double ComputedX { get => ComputeX(); }
    public double ComputedY { get => ComputeY(); }

    // Joystick modifiers, which influence the computed axis positions
    public double DeadZone { get; set; }
    public double Saturation { get; set; }
    public double Sensitivity { get; set; }
    public double Range { get; set; }
    public bool InvertX { get; set; }
    public bool InvertY { get; set; }

    // Other properties
    public double Distance
    {
        get => CoerceValue(Math.Sqrt((ComputedX * ComputedX) + (ComputedY * ComputedY)), 0d, 1d);
    }
    public double Direction { get => ComputeDirection(); }

    // Methods
    private static double CoerceValue(double value, double minValue, double maxValue)
    {
        return (value < minValue) ? minValue : ((value > maxValue) ? maxValue : value);
    }

    protected virtual double ComputeX()
    {
        double value = X;
        value = CalculateDeadZoneAndSaturation(value, DeadZone, Saturation);
        value = CalculateSensitivity(value, Sensitivity);
        value = CalculateRange(value, Range);
        if (InvertX) value = -value;
        return CoerceValue(value, -1d, 1d);
    }

    protected virtual double ComputeY()
    {
        double value = Y;
        value = CalculateDeadZoneAndSaturation(value, DeadZone, Saturation);
        value = CalculateSensitivity(value, Sensitivity);
        value = CalculateRange(value, Range);
        if (InvertY) value = -value;
        return CoerceValue(value, -1d, 1d);
    }

    /// <summary>Gets the joystick's direction (from 0 to 1).</summary>
    private double ComputeDirection()
    {
        double x = ComputedX;
        double y = ComputedY;
        if (x != 0d && y != 0d)
        {
            double angle = Math.Atan2(x, y) / (Math.PI * 2d);
            if (angle < 0d) angle += 1d;
            return CoerceValue(angle, 0d, 1d);
        }
        return 0d;
    }

    private double CalculateDeadZoneAndSaturation(double value, double deadZone, double saturation)
    {
        deadZone = CoerceValue(deadZone, 0.0d, 1.0d);
        saturation = CoerceValue(saturation, 0.0d, 1.0d);
        if ((deadZone > 0) | (saturation < 1))
        {
            double distance = CoerceValue(Math.Sqrt((X * X) + (Y * Y)), 0.0d, 1.0d);
            double directionalDeadZone = Math.Abs(deadZone * (value / distance));
            double directionalSaturation = 1 - Math.Abs((1 - saturation) * (value / distance));
            double edgeSpace = (1 - directionalSaturation) + directionalDeadZone;
            double multiplier = 1 / (1 - edgeSpace);
            if (multiplier != 0)
            {
                if (value > 0)
                {
                    value = (value - directionalDeadZone) * multiplier;
                    value = CoerceValue(value, 0, 1);
                }
                else
                {
                    value = -((Math.Abs(value) - directionalDeadZone) * multiplier);
                    value = CoerceValue(value, -1, 0);
                }
            }
            else
            {
                if (value > 0)
                    value = CoerceValue(value, directionalDeadZone, directionalSaturation);
                else
                    value = CoerceValue(value, -directionalSaturation, -directionalDeadZone);
            }
            value = CoerceValue(value, -1, 1);
        }
        return value;
    }

    private double CalculateSensitivity(double value, double sensitivity)
    {
        value = CoerceValue(value, -1d, 1d);
        if (sensitivity != 0)
        {
            double axisLevel = value;
            axisLevel = axisLevel + ((axisLevel - Math.Sin(axisLevel * (Math.PI / 2))) * (sensitivity * 2));
            if ((value < 0) & (axisLevel > 0))
                axisLevel = 0;
            if ((value > 0) & (axisLevel < 0))
                axisLevel = 0;
            value = CoerceValue(axisLevel, -1d, 1d);
        }
        return value;
    }

    private double CalculateRange(double value, double range)
    {
        value = CoerceValue(value, -1.0d, 1.0d);
        range = CoerceValue(range, 0.0d, 1.0d);
        if (range < 1)
        {
            double distance = CoerceValue(Math.Sqrt((X * X) + (Y * Y)), 0d, 1d);
            double directionalRange = 1 - Math.Abs((1 - range) * (value / distance));
            value *= CoerceValue(directionalRange, 0d, 1d);
        }
        return value;
    }
}
I tried to make this question as short as possible, but it's hard for me to explain this specific problem without describing some details about it. I know I should keep it short, but I would like to write at least a few more words:
Thank you for having the time to read all this!
After searching a bit for geometry math on the Internet, I finally found the solution to my problem. I'm really bad at math, but now I know that it is actually very simple.
Instead of applying deadzone and sensitivity to each axis independently, I should apply them to the joystick radius. To do that, I just need to convert my joystick's cartesian coordinates (X and Y) to polar coordinates (radius and angle). Then I apply deadzone, sensitivity and all the other modifiers I want to the radius coordinate and convert it back to cartesian coordinates.
I'm posting the code I'm using now. It looks far simpler and cleaner than the code in my question above:
private void ComputeCoordinates()
{
    // Convert to polar coordinates.
    double r = CoerceValue(Math.Sqrt((X * X) + (Y * Y)), 0d, 1d); // Radius;
    double a = Math.Atan2(Y, X); // Angle (in radians);

    // Apply modifiers.
    double value = ComputeModifiers(r);

    // Convert to cartesian coordinates.
    double x = value * Math.Cos(a);
    double y = value * Math.Sin(a);

    // Apply axis independent modifiers.
    if (InvertX) x = -x;
    if (InvertY) y = -y;

    // Set calculated values to property values;
    _computedX = x;
    _computedY = y;
}

private double ComputeModifiers(double value)
{
    // Apply dead-zone and saturation.
    if (DeadZone > 0d || Saturation < 1d)
    {
        double edgeSpace = (1 - Saturation) + DeadZone;
        if (edgeSpace < 1d)
        {
            double multiplier = 1 / (1 - edgeSpace);
            value = (value - DeadZone) * multiplier;
            value = CoerceValue(value, 0d, 1d);
        }
        else
        {
            value = Math.Round(value);
        }
    }

    // Apply sensitivity.
    if (Sensitivity != 0d)
    {
        value = value + ((value - Math.Sin(value * (Math.PI / 2))) * (Sensitivity * 2));
        value = CoerceValue(value, 0d, 1d);
    }

    // Apply range.
    if (Range < 1d)
    {
        value = value * Range;
    }

    // Return calculated value.
    return CoerceValue(value, 0d, 1d);
}
Explanation of the code above
Convert the physical joystick's X and Y coordinates to polar coordinates;
Apply deadzone, saturation, sensitivity and range modifiers to the radius coordinate;
Convert back to cartesian coordinates (X and Y) using the original angle and the modified radius;
Optional: apply axis-independent modifiers to each of the new axes (in this case, I'm just inverting each axis if the user wants it inverted);
Done. Every modifier is now applied in a circular way, no matter the direction I move the joystick.
Well, this situation cost me about a day of work, because I didn't find anything related to my problem on the Internet and I didn't know very well how to search for the solution, but I hope other people getting to this question may find this useful.
Here are some references about cartesian and polar coordinate systems:
https://en.wikipedia.org/wiki/Cartesian_coordinate_system
https://en.wikipedia.org/wiki/Polar_coordinate_system
https://social.msdn.microsoft.com/Forums/vstudio/en-US/9f120a35-dcac-42ab-b763-c65f3c39afdc/conversion-between-cartesian-to-polar-coordinates-and-back?forum=vbgeneral
The below worked well for me. It takes a standard parabola (x^2) and makes sure the result is signed. You can probably adjust the curve to make it closer to what you need by using a graphing calculator.
As it is, f(-1) = -1, f(0) = 0, f(1) = 1, and the curve in between is not too sensitive.
Mathf.Pow(axes.x, 2) * (axes.x < 0 ? -1 : 1)
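A hedged plain-C# equivalent of the same idea (Mathf is Unity-specific); the helper name and the adjustable exponent are mine, with exponent 2 reproducing the parabola above:
// Signed power curve: keeps the input's sign, so f(-1) = -1, f(0) = 0, f(1) = 1.
// An exponent above 1 flattens the response near the centre; below 1 steepens it.
static double SignedPower(double value, double exponent)
{
    return Math.Sign(value) * Math.Pow(Math.Abs(value), exponent);
}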

Correctly executing bicubic resampling

I've been experimenting with the image bicubic resampling algorithm present in the AForge framework, with the idea of introducing something similar into my image processing solution. See the original algorithm here and the interpolation kernel here.
Unfortunately I've hit a wall. It looks to me like I am somehow calculating the sample destination position incorrectly, probably due to the algorithm being designed for Format24bppRgb images whereas I am using the Format32bppPArgb format.
Here's my code:
public Bitmap Resize(Bitmap source, int width, int height)
{
    int sourceWidth = source.Width;
    int sourceHeight = source.Height;
    Bitmap destination = new Bitmap(width, height, PixelFormat.Format32bppPArgb);
    destination.SetResolution(source.HorizontalResolution, source.VerticalResolution);
    using (FastBitmap sourceBitmap = new FastBitmap(source))
    {
        using (FastBitmap destinationBitmap = new FastBitmap(destination))
        {
            double heightFactor = sourceWidth / (double)width;
            double widthFactor = sourceHeight / (double)height;
            // Coordinates of source points
            double ox, oy, dx, dy, k1, k2;
            int ox1, oy1, ox2, oy2;
            // Width and height decreased by 1
            int maxHeight = height - 1;
            int maxWidth = width - 1;
            for (int y = 0; y < height; y++)
            {
                // Y coordinates
                oy = (y * widthFactor) - 0.5;
                oy1 = (int)oy;
                dy = oy - oy1;
                for (int x = 0; x < width; x++)
                {
                    // X coordinates
                    ox = (x * heightFactor) - 0.5f;
                    ox1 = (int)ox;
                    dx = ox - ox1;
                    // Destination color components
                    double r = 0;
                    double g = 0;
                    double b = 0;
                    double a = 0;
                    for (int n = -1; n < 3; n++)
                    {
                        // Get Y coefficient
                        k1 = Interpolation.BiCubicKernel(dy - n);
                        oy2 = oy1 + n;
                        if (oy2 < 0)
                        {
                            oy2 = 0;
                        }
                        if (oy2 > maxHeight)
                        {
                            oy2 = maxHeight;
                        }
                        for (int m = -1; m < 3; m++)
                        {
                            // Get X coefficient
                            k2 = k1 * Interpolation.BiCubicKernel(m - dx);
                            ox2 = ox1 + m;
                            if (ox2 < 0)
                            {
                                ox2 = 0;
                            }
                            if (ox2 > maxWidth)
                            {
                                ox2 = maxWidth;
                            }
                            Color color = sourceBitmap.GetPixel(ox2, oy2);
                            r += k2 * color.R;
                            g += k2 * color.G;
                            b += k2 * color.B;
                            a += k2 * color.A;
                        }
                    }
                    destinationBitmap.SetPixel(
                        x,
                        y,
                        Color.FromArgb(a.ToByte(), r.ToByte(), g.ToByte(), b.ToByte()));
                }
            }
        }
    }
    source.Dispose();
    return destination;
}
And here is the kernel, which should represent the equation given on Wikipedia:
public static double BiCubicKernel(double x)
{
    if (x < 0)
    {
        x = -x;
    }
    double bicubicCoef = 0;
    if (x <= 1)
    {
        bicubicCoef = (1.5 * x - 2.5) * x * x + 1;
    }
    else if (x < 2)
    {
        bicubicCoef = ((-0.5 * x + 2.5) * x - 4) * x + 2;
    }
    return bicubicCoef;
}
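For reference (this is my reading of the coefficients, not something stated in the post), the code matches the standard bicubic convolution kernel with a = -0.5:
W(x) = (a + 2)|x|³ - (a + 3)|x|² + 1   for |x| <= 1
W(x) = a|x|³ - 5a|x|² + 8a|x| - 4a     for 1 < |x| < 2
W(x) = 0                               otherwise
With a = -0.5 the two branches reduce to 1.5|x|³ - 2.5|x|² + 1 and -0.5|x|³ + 2.5|x|² - 4|x| + 2, which are exactly the expressions in BiCubicKernel above.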
Here's the original image at 500px x 667px.
And the image resized to 400px x 543px.
Visually, it appears that the image is over-reduced and then the same pixels are repeatedly applied once we hit a particular point.
Can anyone give me some pointers here to solve this?
Note FastBitmap is a wrapper for Bitmap that uses LockBits to manipulate pixels in memory. It works well with everything else I apply it to.
Edit
As requested, here are the methods involved in ToByte:
public static byte ToByte(this double value)
{
    return Convert.ToByte(ImageMaths.Clamp(value, 0, 255));
}

public static T Clamp<T>(T value, T min, T max) where T : IComparable<T>
{
    if (value.CompareTo(min) < 0)
    {
        return min;
    }
    if (value.CompareTo(max) > 0)
    {
        return max;
    }
    return value;
}
You are limiting your ox2 and oy2 to destination image dimensions, instead of source dimensions.
Change this:
// Width and height decreased by 1
int maxHeight = height - 1;
int maxWidth = width - 1;
to this:
// Width and height decreased by 1
int maxHeight = sourceHeight - 1;
int maxWidth = sourceWidth - 1;
Well, I've come across a very strange thing, which might or might not be the source of the problem.
I started trying to implement a convolution matrix by myself and encountered strange behaviour. I was testing the code on a small image, 4x4 pixels. The code is the following:
var source = Bitmap.FromFile(@"C:\Users\Public\Pictures\Sample Pictures\Безымянный.png");
using (FastBitmap sourceBitmap = new FastBitmap(source))
{
    for (int TY = 0; TY < 4; TY++)
    {
        for (int TX = 0; TX < 4; TX++)
        {
            Color color = sourceBitmap.GetPixel(TX, TY);
            Console.Write(color.B.ToString().PadLeft(5));
        }
        Console.WriteLine();
    }
}
Although I'm printing out only the blue channel value, it's still clearly incorrect.
On the other hand, your solution partially works, which makes the thing I've found somewhat irrelevant. One more guess I have: what is your system's DPI?
Here are some links I have found helpful:
C++ implementation of bicubic interpolation on a matrix
C# implementation of bicubic interpolation, lacking the part about rescaling
Thread on gamedev.net which has an almost working solution
That's my answer so far, but I will try further.
