I'm trying to parse the values coming from a device with an LSM6DSL chip (gyroscope and accelerometer), and I'm having a hard time converting the data properly for positioning and angle.
From the vendor I've received the information that the unit is running at a full-scale range of ±2000 dps for the gyro and ±8 g for the accelerometer.
I receive the data in bytes, which are converted to shorts by the following:
public int[] BufferToMotionData(byte[] buffer, int segments = 2)
{
    // Each segment holds three 16-bit axis values (X, Y, Z).
    int[] motionDataArray = new int[segments * 3];
    int offset = Constants.BufferSizeImage + Constants.CommandLength;
    for (int i = 0; i < motionDataArray.Length; i++)
    {
        // BitConverter.ToInt16 already sign-extends the two's complement value.
        motionDataArray[i] = BitConverter.ToInt16(buffer, offset + (i * 2));
    }
    return motionDataArray;
}
(Edit: cleaned-up version.)
This returns values in the range of (for example) 961, -16223, -1635, 664, -269, -597.
According to the spec sheet I'm supposed to multiply each vector by its corresponding sensitivity: 70f for the gyro, .488f for the accelerometer.
From the documentation I understand that the G forces are in milli-g and the gyro values in millidegrees per second?
// Gyro X,Y,Z
gx = Mathf.Deg2Rad * (motionData[0] * 70f / 1000f);
gy = Mathf.Deg2Rad * (motionData[1] * 70f / 1000f);
gz = Mathf.Deg2Rad * (motionData[2] * 70f / 1000f);
// Acc X,Y,Z
ax = motionData[3] * 0.488f / 1000f;
ay = motionData[4] * 0.488f / 1000f;
az = motionData[5] * 0.488f / 1000f;
Update(gx, gy, gz, ax, ay, az);
Update(..) is Madgwick's quaternion formula, although for velocity I use the acceleration vectors.
G force values that I'm getting at the moment after calculation:
X 0.047824 Y -0.320128 Z 0.006344
X 0.07076 Y -0.2562 Z 0.020008
X 0.099552 Y -0.063928 Z -0.13664
These look awfully low, and if applied as velocity the position just runs off in a given direction. I know I'm missing a gravity correction, although I'm not entirely sure how to apply it.
I'm under the assumption that I do not need to apply drag to my velocity vector, since changes should be reflected in the acceleration values received?
Is there anyone with experience with this type of chip who has actually applied the values to yaw/pitch/roll (or a quaternion) and used the G forces as linear acceleration?
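For what it's worth, here is a minimal sketch of the gravity correction in question. It is my own addition, assuming the filter exposes its orientation as a unit quaternion q0..q3 (w, x, y, z) representing the sensor-to-earth rotation (depending on your filter's convention you may need the conjugate). It rotates the measured acceleration into the earth frame and subtracts 1 g from the vertical axis:
// Hedged sketch: q0..q3 are assumed to come from the Madgwick filter;
// ax, ay, az are the accelerometer readings in g from the code above.
float ex = ax * (1 - 2 * (q2 * q2 + q3 * q3)) + ay * 2 * (q1 * q2 - q0 * q3) + az * 2 * (q1 * q3 + q0 * q2);
float ey = ax * 2 * (q1 * q2 + q0 * q3) + ay * (1 - 2 * (q1 * q1 + q3 * q3)) + az * 2 * (q2 * q3 - q0 * q1);
float ez = ax * 2 * (q1 * q3 - q0 * q2) + ay * 2 * (q2 * q3 + q0 * q1) + az * (1 - 2 * (q1 * q1 + q2 * q2));
ez -= 1f; // remove gravity (1 g along the earth-frame Z axis)
// ex, ey, ez are now linear acceleration in g; multiply by 9.81f for m/s^2.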
By looking at existing code on GitHub, it looks like the sensitivity factor for 8g is 244 µg/digit and not 488 µg/digit as you coded it.
It also looks like the raw values are shifted and are in [-r/2, r/2] instead of [0, r], so you have to add 500 µg or 500 µdps to them. (But maybe it's linked to a uint/int issue; anyway, are you sure about the endianness?)
See here for acc data and here for gyro data.
Based on that, the code should look like this:
// Gyro X,Y,Z (in rad/s)
gx = Mathf.Deg2Rad * (motionData[0] * 70000f + 500) / 1000000;
gy = Mathf.Deg2Rad * (motionData[1] * 70000f + 500) / 1000000;
gz = Mathf.Deg2Rad * (motionData[2] * 70000f + 500) / 1000000;
// Acc X,Y,Z (in g)
ax = (motionData[3] * 244f + 500) / 1000000;
ay = (motionData[4] * 244f + 500) / 1000000;
az = (motionData[5] * 244f + 500) / 1000000;
Update(gx, gy, gz, ax, ay, az);
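As a quick sanity check (my own suggestion, not from the datasheet): with the device lying still, the accelerometer vector computed this way should have a magnitude close to 1 g.
// With the device at rest, sqrt(ax^2 + ay^2 + az^2) should be close to 1 (in g).
float magnitude = Mathf.Sqrt(ax * ax + ay * ay + az * az);
bool plausible = Mathf.Abs(magnitude - 1f) < 0.1f; // the 10% tolerance is arbitrary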
Related
I have a line segment whose end points I know
Line1 (X1,Y1) (X2,Y2)
I have a second line
Line2 (X3,Y3) (X4,Y4)
I want to calculate new end points for line 1 such that the resulting line is parallel to Line2, and Line1's centre point remains at the same coordinates.
i.e. such that Line1 simply rotates so it is parallel to Line2
I know I can calculate each line's angle
var line1Angle = (Mathf.Atan2(x2 - x1, y2 - y1));
var line2Angle = (Mathf.Atan2(x4 - x3, y4 - y3));
I can also calculate the lengths
var len1 = Math.Sqrt((x2-x1)*(x2-x1)+ (y2-y1) * (y2-y1));
var len2 = Math.Sqrt((x4-x3)*(x4-x3)+ (y4-y3) * (y4-y3));
but everything I have tried seems to fail - either not rotating correctly, or rotating but with the incorrect length.
The closest code I have (below) rotates correctly, but the length of Line1 is not retained.
The code uses an 'offset', which the code this version is based on used to simply draw a parallel line 'offset' pixels from the destination line. I have set it to an arbitrary value, but I believe it should be the distance of Line1's centre point from the closest point on Line2.
I'd love it if someone could supply a code version rather than an explanation (or as well as!), as I've read and tried so many non-code solutions, and evidently my understanding / translation to code is flawed!
float len1 = (float)Math.Sqrt((x2-x1)*(x2-x1)+ (y2-y1) * (y2-y1));
float len2 = (float)Math.Sqrt((x4-x3)*(x4-x3)+ (y4-y3) * (y4-y3));
float offset = 3.0f; // This should be the dist from our center to the closest wall but I'm compromising for now!
float newX1 = x3 + offset * (y4 - y3) * (len1 / len2);
float newX2 = x4 + offset * (y4 - y3) * (len1 / len2);
float newY1 = y3 + offset * (x3 - x4) * (len1 / len2);
float newY2 = y4 + offset * (x3 - x4) * (len1 / len2);
Approach without angles: you know the segment lengths, so you can just form new segment ends with the same direction that line2 defines. In C#, using the question's variables:
float dx1 = x2 - x1;
float dy1 = y2 - y1;
float dx2 = x4 - x3;
float dy2 = y4 - y3;
float len1 = (float)Math.Sqrt(dx1 * dx1 + dy1 * dy1);
float len2 = (float)Math.Sqrt(dx2 * dx2 + dy2 * dy2);
float midx = (x1 + x2) / 2;
float midy = (y1 + y2) / 2;
float coeff = 0.5f * len1 / len2;
// Now we make a vector with the direction of line2
// and the length of half of line1.
float newx1 = midx - dx2 * coeff;
float newy1 = midy - dy2 * coeff;
float newx2 = midx + dx2 * coeff;
float newy2 = midy + dy2 * coeff;
I would highly recommend that you define actual types for your line segments and points. You can use Math.Net or System.Numerics.Vectors if you want something to start with.
Assuming your line segment has a StartPoint and an EndPoint, we can define Midpoint and Direction extension methods. I'm going to use the Math.Net types for the example, but it is not difficult to make your own types.
static Vector2D Midpoint(this Line2D l) => (l.StartPoint + l.EndPoint) / 2;
static Vector2D Direction(this Line2D l) => (l.EndPoint - l.StartPoint).Normalize();
We can also define a static method to create a new line from these methods/Properties:
static Line2D FromMidpointDirection(Vector2D midpoint, Vector2D direction, float length)
{
    var halfDir = direction * length / 2;
    return new Line2D(midpoint - halfDir, midpoint + halfDir);
}
Note that you might want to add comments or pick another name for Direction, since it is not obvious if this is normalized or not.
Then you can recreate your line:
var mid = sourceLine.Midpoint();
var dir = targetLine.Direction();
var newLine = FromMidpointDirection(mid, dir, sourceLine.Length);
Using higher-level types like this tends to make your code more reusable and easier to read and understand.
I am trying to recreate very simple GDI+ functions, such as scaling and rotating an image. The reason is that some GDI functions can't be used on multiple threads (I found a workaround using processes but didn't want to get into that), and processing thousands of images on one thread wasn't nearly cutting it.
Also my images are grayscale, so a custom function would only have to worry about one value instead of 4.
No matter what kind of function I try to recreate, even when highly optimized, it is always SEVERAL times slower, despite being greatly simplified compared to what GDI is doing (I am operating on a 1D array of bytes, one byte per pixel).
I thought maybe the way I was rotating each point could be the difference, so I took it out completely and basically had a function that goes through each pixel and just sets it to what it already is. Even then it was only roughly tied with the speed of GDI, even though GDI was doing an actual rotation and changing 4 different values per pixel.
What makes this possible? Is there a way to match it using your own function?
The GDI+ code is written in C/C++, or possibly even partially in assembly. Some GDI+ calls may use GDI, an old and well optimized API. You will find it difficult to match the performance, even if you know all the pixel manipulation tricks.
I am adding my own answer along with my code to help anyone else who may be looking to do this.
From a combination of pointers and using an approximation of Sine and Cosine instead of calling an outside function for the rotation, I have come pretty darn close to reaching GDI speeds. No outside functions are called at all.
It still takes about 50% more time than GDI, but my earlier implementation took over 10 times longer than GDI. And when you consider multi threading, this method can be 10 times faster than GDI. This function can rotate a 300x400 picture in 3 milliseconds on my machine.
Keep in mind that this is for grayscale images and each byte in the input array represents one pixel.
If you have any ideas to make it faster please share!
private unsafe byte[] rotate(byte[] input, int inputWidth, int inputHeight, int cx, int cy, double angle)
{
    byte[] result = new byte[input.Length];
    int tx, ty, ix, iy, x1, y1;
    double px, py, fx, fy, sin, cos, v;
    byte a, b;

    // Approximate sine and cosine of the angle (parabolic approximation, valid for angle in [-pi, pi])
    if (angle < 0)
        sin = 1.27323954 * angle + 0.405284735 * angle * angle;
    else
        sin = 1.27323954 * angle - 0.405284735 * angle * angle;
    angle += 1.57079632;
    if (angle > 3.14159265)
        angle -= 6.28318531;
    if (angle < 0)
        cos = 1.27323954 * angle + 0.405284735 * angle * angle;
    else
        cos = 1.27323954 * angle - 0.405284735 * angle * angle;
    angle -= 1.57079632;

    fixed (byte* pInput = input, pResult = result)
    {
        byte* pr = pResult;
        for (int x = 0; x < inputWidth; x++)
            for (int y = 0; y < inputHeight; y++)
            {
                // Rotate (x, y) about (cx, cy) to find the source coordinate
                tx = x - cx;
                ty = y - cy;
                px = tx * cos - ty * sin + cx;
                py = tx * sin + ty * cos + cy;
                ix = (int)px;
                iy = (int)py;
                fx = px - ix;
                fy = py - iy;
                if (ix < inputWidth && iy < inputHeight && ix >= 0 && iy >= 0)
                {
                    // Keep in array bounds
                    x1 = ix + 1;
                    y1 = iy + 1;
                    if (x1 >= inputWidth)
                        x1 = ix;
                    if (y1 >= inputHeight)
                        y1 = iy;
                    // Bilinear interpolation using pointers
                    a = *(pInput + (iy * inputWidth + ix));
                    b = *(pInput + (y1 * inputWidth + ix));
                    v = a + ((*(pInput + (iy * inputWidth + x1)) - a) * fx);
                    pr = (pResult + (y * inputWidth + x));
                    *pr = (byte)(v + (((b + ((*(pInput + (y1 * inputWidth + x1)) - b) * fx)) - v) * fy));
                }
            }
    }
    return result;
}
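Since the whole point was to escape the single-thread limitation, here is a rough driver for rotating many images in parallel. This is my own sketch, not part of the original answer; LoadFrames is a hypothetical helper standing in for however you obtain your images, and the sizes are examples:
// Hypothetical batch driver (assumed to live in the same class as rotate):
// rotates a set of 300x400 grayscale frames in parallel around their centres.
byte[][] frames = LoadFrames(); // placeholder for your own image source
byte[][] rotated = new byte[frames.Length][];
System.Threading.Tasks.Parallel.For(0, frames.Length, i =>
{
    rotated[i] = rotate(frames[i], 300, 400, 150, 200, 0.35);
});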
With the following extract from a GPS log:
$GPGGA,153500.009,5137.2603,N,00244.8715,W,1,10,0.8,50.6,M,51.4,M,,0000*71
$GPRMC,153500.009,A,5137.2603,N,00244.8715,W,037.7,101.7,300912,,,A*74
$GPGGA,153500.059,5137.2601,N,00244.8706,W,1,10,0.8,50.6,M,51.4,M,,0000*74
$GPRMC,153500.059,A,5137.2601,N,00244.8706,W,038.0,101.8,300912,,,A*76
$GPGGA,153500.109,5137.2600,N,00244.8697,W,1,10,0.8,50.6,M,51.4,M,,0000*78
$GPRMC,153500.109,A,5137.2600,N,00244.8697,W,038.3,101.9,300912,,,A*78
$GPGGA,153500.159,5137.2599,N,00244.8688,W,1,10,0.8,50.5,M,51.4,M,,0000*73
$GPRMC,153500.159,A,5137.2599,N,00244.8688,W,038.6,101.9,300912,,,A*75
$GPGGA,153500.209,5137.2597,N,00244.8679,W,1,10,0.8,50.5,M,51.4,M,,0000*75
$GPRMC,153500.209,A,5137.2597,N,00244.8679,W,038.9,102.0,300912,,,A*76
I am comparing the logged GPS bearing with a calculated bearing between the last and current position with the following code that loops through each line:
string[] splitline = line.Split(',');
course = Convert.ToDouble(splitline[8]);
Lat = Convert.ToDouble(splitline[3]);
Long = Convert.ToDouble(splitline[5]);
LatDeg = (Convert.ToInt16(Lat) / 100) + (Lat - (Convert.ToInt16(Lat) / 100) * 100) / 60;
LongDeg = (Convert.ToInt16(Long) / 100) + (Long - (Convert.ToInt16(Long) / 100) * 100) / 60;
lastLatDeg = (Convert.ToInt16(lastLat) / 100) + (lastLat - (Convert.ToInt16(lastLat) / 100) * 100) / 60;
lastLongDeg = (Convert.ToInt16(lastLong) / 100) + (lastLong - (Convert.ToInt16(lastLong) / 100) * 100) / 60;
var dLon = lastLongDeg - LongDeg;
var y = Math.Sin(dLon) * Math.Cos(lastLatDeg);
var x = Math.Cos(lastLatDeg) * Math.Sin(LatDeg) - Math.Sin(lastLatDeg) * Math.Cos(LatDeg) * Math.Cos(dLon);
Console.WriteLine(DEG_PER_RAD * Math.Atan2(y, x));
Console.WriteLine("> " + course + " <");
lastLat = Lat;
lastLong = Long;
lastcourse = course;
results in the following:
136.131182151555
> 101.8 <
117.480364881602
> 101.9 <
117.480186101881
> 101.9 <
136.130309531745
> 102 <
117.479649572813
> 102 <
Are my calculations out, as none of them seem to come close to the GPS-logged bearing of around 101 degrees?
Thanks
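For reference, here is a worked example of the ddmm.mmmm conversion the code above performs, using the latitude from the first GGA sentence (my own illustration, not part of the question):
// NMEA latitude 5137.2603 means 51 degrees + 37.2603 minutes.
double raw = 5137.2603;
int degrees = (int)(raw / 100);                   // 51 ((int) truncates; note Convert.ToInt16 rounds to nearest instead)
double minutes = raw - degrees * 100;             // 37.2603
double decimalDegrees = degrees + minutes / 60.0; // 51.6210050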
There are a few problems I spotted in the code. For a start, when interpreting the latitude and longitude you should look at which quadrant of the earth the positions fall into and convert to negative for south or west locations:
Lat = Convert.ToDouble(splitline[3]);
if (splitline[4] == "S")
    Lat = 0.0 - Lat;
Long = Convert.ToDouble(splitline[5]);
if (splitline[6] == "W")
    Long = 0.0 - Long;
The remainder of the problems stemmed from passing degrees rather than radians to the math functions, and the calculation of the longitude delta seemed reversed. I introduced a few helper functions and rewrote that section of code as follows:
public static double DegreesToRadians(double degrees)
{
    return degrees * (Math.PI / 180);
}

public static double RadiansToDegrees(double radians)
{
    return radians * 180 / Math.PI;
}
double dLon = DegreesToRadians(LongDeg - lastLongDeg);
double y = Math.Sin(dLon) * Math.Cos(DegreesToRadians(lastLatDeg));
double x = Math.Cos(DegreesToRadians(lastLatDeg)) * Math.Sin(DegreesToRadians(LatDeg)) - Math.Sin(DegreesToRadians(lastLatDeg)) * Math.Cos(DegreesToRadians(LatDeg)) * Math.Cos(dLon);
Console.WriteLine((RadiansToDegrees(Math.Atan2(y, x)) + 360.0) % 360);
Console.WriteLine("> " + course + " <");
That gave me the following results with your test data, ignoring the first invalid one where the bearing has not yet been determined:
109.693614586392
> 101.8 <
100.14641169874
> 101.9 <
100.146411372034
> 101.9 <
109.693611985053
> 102 <
I noticed from the GGA speeds that the unit seems to have been either stationary or moving very slowly. Some GPS receivers will filter or hold heading information under those circumstances so some variation can be expected. After the changes I ran through some GPS data I had from a moving vehicle and the results were within one degree of each other.
I have a simulation with multiple circles moving in 2D space.
There is collision detection between them, and the elastic collisions work 95% of the time. Occasionally, however, when two balls hit each other, they stick to each other and overlap, often orbiting each other while stuck together.
I'm unsure how to solve this problem.
My collision management function looks like this:
void manageCollision(Particle particleA, Particle particleB)
{
    float distanceX = particleA.Position.X - particleB.Position.X;
    float distanceY = particleA.Position.Y - particleB.Position.Y;
    double collisionAngle = Math.Atan2(distanceY, distanceX);
    double pA_magnitude = Math.Sqrt(particleA.Velocity.X * particleA.Velocity.X + particleA.Velocity.Y * particleA.Velocity.Y);
    double pB_magnitude = Math.Sqrt(particleB.Velocity.X * particleB.Velocity.X + particleB.Velocity.Y * particleB.Velocity.Y);
    double pA_direction = Math.Atan2(particleA.Velocity.Y, particleA.Velocity.X);
    double pB_direction = Math.Atan2(particleB.Velocity.Y, particleB.Velocity.X);
    // Rotate the velocities into the collision coordinate system
    double pA_newVelocityX = pA_magnitude * Math.Cos(pA_direction - collisionAngle);
    double pA_newVelocityY = pA_magnitude * Math.Sin(pA_direction - collisionAngle);
    double pB_newVelocityX = pB_magnitude * Math.Cos(pB_direction - collisionAngle);
    double pB_newVelocityY = pB_magnitude * Math.Sin(pB_direction - collisionAngle);
    // 1D elastic collision along the collision axis
    double pA_finalVelocityX = ((particleA.Mass - particleB.Mass) * pA_newVelocityX + (particleB.Mass + particleB.Mass) * pB_newVelocityX) / (particleA.Mass + particleB.Mass);
    double pB_finalVelocityX = ((particleA.Mass + particleA.Mass) * pA_newVelocityX + (particleB.Mass - particleA.Mass) * pB_newVelocityX) / (particleA.Mass + particleB.Mass);
    double pA_finalVelocityY = pA_newVelocityY;
    double pB_finalVelocityY = pB_newVelocityY;
    // Rotate back into world coordinates
    particleA.Velocity = new Vector2(
        (float)(Math.Cos(collisionAngle) * pA_finalVelocityX + Math.Cos(collisionAngle + Math.PI / 2) * pA_finalVelocityY),
        (float)(Math.Sin(collisionAngle) * pA_finalVelocityX + Math.Sin(collisionAngle + Math.PI / 2) * pA_finalVelocityY));
    particleB.Velocity = new Vector2(
        (float)(Math.Cos(collisionAngle) * pB_finalVelocityX + Math.Cos(collisionAngle + Math.PI / 2) * pB_finalVelocityY),
        (float)(Math.Sin(collisionAngle) * pB_finalVelocityX + Math.Sin(collisionAngle + Math.PI / 2) * pB_finalVelocityY));
}
Each ball or particle spawns with a random mass and radius.
The function is called within an update type of method, like this:
Particle pA = particles[i];
for (int k = i + 1; k < particles.Count(); k++)
{
Particle pB = particles[k];
Vector2 delta = pA.Position - pB.Position;
float dist = delta.Length();
if (dist < particles[i].Radius + particles[k].Radius && !particles[i].Colliding && !particles[k].Colliding)
{
particles[i].Colliding = true;
particles[k].Colliding = true;
manageCollision(particles[i], particles[k]);
particles[i].initColorTable(); // Upon collision, change the color
particles[k].initColorTable();
totalCollisions++;
}
else
{
particles[i].Colliding = false;
particles[k].Colliding = false;
}
}
This situation stems from the discrete computation and the large time step size.
When you sample the objects at some time interval dt, you can observe an intersection between two circles and call your collision method, but in the next time step they may still overlap even though they are moving apart after the collision in the previous step, so the collision response fires again.
To reduce this effect, you can try a smaller time step size so that the overlap between objects is reduced.
As a more elaborate solution, you can keep a list of collided object pairs for every step, and during iteration check this list to see whether the currently intersecting circles already had an "affair" in the previous step.
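A complementary technique for the overlap the answer describes (my own sketch, using the Particle members from the question) is positional correction: push the two circles apart along the line of centres before applying the impulse, so they never stay interpenetrated.
// Separate two overlapping circles along the line of centres,
// distributing the correction in proportion to the other particle's mass.
static void SeparateOverlap(Particle a, Particle b)
{
    Vector2 delta = a.Position - b.Position;
    float dist = delta.Length();
    float overlap = a.Radius + b.Radius - dist;
    if (overlap <= 0f || dist == 0f)
        return; // not overlapping, or exactly coincident
    Vector2 normal = delta / dist; // unit vector from B towards A
    float totalMass = a.Mass + b.Mass;
    a.Position += normal * (overlap * (b.Mass / totalMass));
    b.Position -= normal * (overlap * (a.Mass / totalMass));
}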
I'm trying to find a way to compare two colors to find out how much they are alike. I can't seem to find any resources about the subject, so I'm hoping to get some pointers here.
Ideally, I would like to get a score that tells how much they are alike. For example, 0 to 100, where 100 would be equal and 0 would be totally different.
Thanks!
Edit:
Getting to know a bit more about colors from the answers, I understand my question was a bit vague. I will try to explain what I needed this for.
I have pixel data (location and color) of an application window at 800x600 size, so I can find out if a certain window is open or not by checking every x-interval.
However, this method fails as soon as the application is resized (the contents are scaled, not moved). I can calculate where the pixels move, but because of rounding and antialiasing the color can be slightly different.
Pieter's solution was good enough for me in this case, although all the other responses were extremely helpful as well, so I just upvoted everyone. I do think that ColorEye's answer is the most accurate when looking at this from a professional viewpoint, so I marked it as the answer.
What you are looking for is called Delta-E.
http://www.colorwiki.com/wiki/Delta_E:_The_Color_Difference
It is the distance between two colors in the LAB color space. It is said that the human eye cannot distinguish colors that are less than 1 Delta-E apart (I find that my eyes can spot differences in colors below 1 Delta-E; each person is different).
There are 4 formulas for 'color difference':
Delta E (CIE 1976)
Delta E (CIE 1994)
Delta E (CIE 2000)
Delta E (CMC)
Check the math link on this site:
http://www.brucelindbloom.com/
So the proper answer is to convert your RGB to LAB using the formula given, then use Delta-E 1976 to determine the 'difference' in your colors. A result of 0 indicates identical colors. Any value higher than 0 can be judged by the rule 'a Delta-E of 1 or less is indistinguishable by most people'.
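To make that last step concrete, here is a minimal Delta-E 1976 sketch (my own addition; it assumes you have already converted RGB to Lab using the formulas on the site above):
// CIE76: straight Euclidean distance in Lab space; 0 means identical.
static double DeltaE76(double L1, double a1, double b1,
                       double L2, double a2, double b2)
{
    double dL = L1 - L2, da = a1 - a2, db = b1 - b2;
    return Math.Sqrt(dL * dL + da * da + db * db);
}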
There's an open-source .net library that lets you do this easily: https://github.com/hvalidi/ColorMine
The most common method for comparing colors is CIE76:
var a = new Rgb { R = 149, G = 13, B = 12 };
var b = new Rgb { R = 255, G = 13, B = 12 };
var deltaE = a.Compare(b, new Cie1976Comparison());
Colors have different weights affecting the human eye.
So convert the colors to grayscale using their calculated weights:

GrayColor = 0.11 * B + 0.59 * G + 0.30 * R

And your difference will be

difference = Math.Abs(GrayColor1 - GrayColor2) * 100.0 / 255.0

with the difference ranging from 0 to 100.
This is actually a commonly used and very simple approach for calculating image differences in image processing.
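As a sketch (my own, assuming System.Drawing's Color type), the whole comparison fits in a few lines:
// Weighted grayscale difference: 0 = identical, 100 = black vs white.
static double GrayDifference(Color c1, Color c2)
{
    double g1 = 0.11 * c1.B + 0.59 * c1.G + 0.30 * c1.R;
    double g2 = 0.11 * c2.B + 0.59 * c2.G + 0.30 * c2.R;
    return Math.Abs(g1 - g2) * 100.0 / 255.0;
}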
Edit: this is a very simple and still usable formula, even in commercial applications.
If you want to go deep you should check out the color difference methods called: CIE1976, CIE1994, CIE2000 and CMC
Here you can find some more detailed info:
http://en.wikipedia.org/wiki/Color_difference
Something like this:
public static int CompareColors(Color a, Color b)
{
    // Cast to int after scaling by 100; casting the 0..1 fraction
    // first would truncate everything to 0 or 1.
    return (int)(100 * (
        1.0 - (double)(
            Math.Abs(a.R - b.R) +
            Math.Abs(a.G - b.G) +
            Math.Abs(a.B - b.B)
        ) / (256.0 * 3)
    ));
}
Converting the RGB color to the HSL color space often produces good results. Check Wikipedia for the conversion formula. It is up to you to assign weights to the differences in H (the color), S (how 'deep' the color is) and L (how bright it is).
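To make that concrete, here is a rough sketch of my own using System.Drawing's built-in HSL accessors; the weights wH, wS and wL are arbitrary placeholders to tune for your application:
// HSL-weighted distance: 0 = identical, approaching 1 = very different.
static double HslDistance(Color a, Color b, double wH = 0.8, double wS = 0.1, double wL = 0.1)
{
    double dh = Math.Abs(a.GetHue() - b.GetHue());
    if (dh > 180) dh = 360 - dh; // hue is circular, take the short way round
    double ds = Math.Abs(a.GetSaturation() - b.GetSaturation());
    double dl = Math.Abs(a.GetBrightness() - b.GetBrightness());
    return wH * (dh / 180.0) + wS * ds + wL * dl;
}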
I found an interesting approach called Colour metric and adapted it to C#:
public static double ColourDistance(Color e1, Color e2)
{
    long rmean = ((long)e1.R + (long)e2.R) / 2;
    long r = (long)e1.R - (long)e2.R;
    long g = (long)e1.G - (long)e2.G;
    long b = (long)e1.B - (long)e2.B;
    return Math.Sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
}
Colour perception depends on many factors and similarity can be measured in many ways. Just comparing how similar the R, G and B components are generally gives results humans won't agree with.
There's some general material on colour comparisons on Wikipedia, and on working with natural colour spaces in C# in this question.
I've translated the code for DeltaE2000 on Bruce Lindbloom's page into C.
Here:
//
// deltae2000.c
//
// Translated by Dr Cube on 10/1/16.
// Translated to C from this javascript code written by Bruce LindBloom:
// http://www.brucelindbloom.com/index.html?Eqn_DeltaE_CIE2000.html
// http://www.brucelindbloom.com/javascript/ColorDiff.js
#include <stdio.h>
#include <math.h>
#define Lab2k struct Lab2kStruct

Lab2k
{
    float L;
    float a;
    float b;
};
// function expects Lab values where: 0 <= L <= 100.0, -100 <= a <= 100.0 and -100 <= b <= 100.0
float
DeltaE2000(Lab2k Lab1, Lab2k Lab2)
{
    float kL = 1.0;
    float kC = 1.0;
    float kH = 1.0;
    float lBarPrime = 0.5 * (Lab1.L + Lab2.L);
    float c1 = sqrtf(Lab1.a * Lab1.a + Lab1.b * Lab1.b);
    float c2 = sqrtf(Lab2.a * Lab2.a + Lab2.b * Lab2.b);
    float cBar = 0.5 * (c1 + c2);
    float cBar7 = cBar * cBar * cBar * cBar * cBar * cBar * cBar;
    float g = 0.5 * (1.0 - sqrtf(cBar7 / (cBar7 + 6103515625.0))); /* 6103515625 = 25^7 */
    float a1Prime = Lab1.a * (1.0 + g);
    float a2Prime = Lab2.a * (1.0 + g);
    float c1Prime = sqrtf(a1Prime * a1Prime + Lab1.b * Lab1.b);
    float c2Prime = sqrtf(a2Prime * a2Prime + Lab2.b * Lab2.b);
    float cBarPrime = 0.5 * (c1Prime + c2Prime);
    float h1Prime = (atan2f(Lab1.b, a1Prime) * 180.0) / M_PI;
    float dhPrime; // not initialized on purpose
    if (h1Prime < 0.0)
        h1Prime += 360.0;
    float h2Prime = (atan2f(Lab2.b, a2Prime) * 180.0) / M_PI;
    if (h2Prime < 0.0)
        h2Prime += 360.0;
    float hBarPrime = (fabsf(h1Prime - h2Prime) > 180.0) ? (0.5 * (h1Prime + h2Prime + 360.0)) : (0.5 * (h1Prime + h2Prime));
    float t = 1.0 -
        0.17 * cosf(M_PI * (      hBarPrime - 30.0) / 180.0) +
        0.24 * cosf(M_PI * (2.0 * hBarPrime       ) / 180.0) +
        0.32 * cosf(M_PI * (3.0 * hBarPrime +  6.0) / 180.0) -
        0.20 * cosf(M_PI * (4.0 * hBarPrime - 63.0) / 180.0);
    if (fabsf(h2Prime - h1Prime) <= 180.0)
        dhPrime = h2Prime - h1Prime;
    else
        dhPrime = (h2Prime <= h1Prime) ? (h2Prime - h1Prime + 360.0) : (h2Prime - h1Prime - 360.0);
    float dLPrime = Lab2.L - Lab1.L;
    float dCPrime = c2Prime - c1Prime;
    float dHPrime = 2.0 * sqrtf(c1Prime * c2Prime) * sinf(M_PI * (0.5 * dhPrime) / 180.0);
    float sL = 1.0 + ((0.015 * (lBarPrime - 50.0) * (lBarPrime - 50.0)) / sqrtf(20.0 + (lBarPrime - 50.0) * (lBarPrime - 50.0)));
    float sC = 1.0 + 0.045 * cBarPrime;
    float sH = 1.0 + 0.015 * cBarPrime * t;
    float dTheta = 30.0 * expf(-((hBarPrime - 275.0) / 25.0) * ((hBarPrime - 275.0) / 25.0));
    float cBarPrime7 = cBarPrime * cBarPrime * cBarPrime * cBarPrime * cBarPrime * cBarPrime * cBarPrime;
    float rC = sqrtf(cBarPrime7 / (cBarPrime7 + 6103515625.0));
    float rT = -2.0 * rC * sinf(M_PI * (2.0 * dTheta) / 180.0);
    return sqrtf(
        (dLPrime / (kL * sL)) * (dLPrime / (kL * sL)) +
        (dCPrime / (kC * sC)) * (dCPrime / (kC * sC)) +
        (dHPrime / (kH * sH)) * (dHPrime / (kH * sH)) +
        (dCPrime / (kC * sC)) * (dHPrime / (kH * sH)) * rT
    );
}