Currently, I'm developing some fuzzy logic stuff in C# and want to achieve this in a generic way. For simplicity, I can use float, double and decimal to process an interval [0, 1], but for performance, it would be better to use integers. Some thoughts about symmetry also led to the decision to omit the highest value in unsigned and the lowest value in signed integers. The lowest, non-omitted value maps to 0 and the highest, non-omitted value maps to 1. The omitted value is normalized to the next non-omitted value.
Now, I want to implement some compound calculations in the form of:
byte f(byte p1, byte p2, byte p3, byte p4)
{
    return (p1 * p2) / (p3 * p4);
}
where the byte values are interpreted as the [0, 1] interval mentioned above. This means p1 * p2 < p1 and p1 * p2 < p2, as opposed to numbers greater than 1, where this does not hold, e.g. 2 * 3 = 6, but 0.1 * 0.2 = 0.02.
Additionally, a problem is: p1 * p2 and p3 * p4 may exceed the range of the type byte. The result of the whole formula may not exceed this range, but the overflow would still occur in one or both parts. Of course, I can just cast to ushort and in the end back to byte, but for a ulong I wouldn't have this possibility without further effort, and I don't want to stick to 32 bits. On the other hand, if I return (p1 / p3) * (p2 / p4), I decrease the type escalation, but might run into a result of 0 where the actual result is non-zero.
So I thought of somehow simultaneously "shrinking" both products step by step until I have the result in the [0, 1] interpretation. I don't need an exact value; a heuristic with an error of less than 3 integer values off the correct value would be sufficient, and for a ulong an even higher error would certainly be OK.
So far, I have tried to convert the input to a decimal/float/double in the interval [0, 1] and calculated with that. But this is completely counterproductive regarding performance. I read up on division algorithms, but I couldn't find the one I saw once in class. It was about calculating quotient and remainder simultaneously, with an accumulator. I tried to reconstruct and extend it for factorized parts of the division with corrections, but this breaks where indivisibility occurs, and I get too big an error. I also made some notes and calculated some integer examples manually, trying to factor out, cancel out, split sums and such fancy derivation stuff, but nothing led to a satisfying result or steps for an algorithm.
Is there a performant way to multiply/divide signed (and unsigned) integers interpreted as the interval [0, 1] as above, without type promotion?
To answer your question as summarised: No.
You need to state (and rank) your overall goals explicitly (e.g., is symmetry more or less important than performance?). Your chances of getting a helpful answer improve if you state them succinctly in the question.
While I think Phil1970's "you can ignore scaling for … division" is overly optimistic, multiplication is enough of a problem: if you don't generate partial results bigger (twice as wide) than your "base type", you are stuck with multiplying parts of your operands and piecing the result together.
For ideas about piecing together "larger" results: AVR's Fractional Multiply.
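To make the piecing-together concrete, here is a generic C# sketch (my own illustration, not the AVR instruction; MulHigh is my name) that computes the high 64 bits of a 64x64-bit product using only 32-bit partial products:

static ulong MulHigh(ulong a, ulong b)
{
    // Split each operand into 32-bit halves.
    ulong aLo = (uint)a, aHi = a >> 32;
    ulong bLo = (uint)b, bHi = b >> 32;

    // Four partial products, none wider than 64 bits.
    ulong lo   = aLo * bLo;
    ulong mid1 = aHi * bLo;
    ulong mid2 = aLo * bHi;
    ulong hi   = aHi * bHi;

    // Carry out of bit 63 when the middle parts meet the low word.
    ulong carry = ((lo >> 32) + (uint)mid1 + (uint)mid2) >> 32;
    return hi + (mid1 >> 32) + (mid2 >> 32) + carry;
}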
Regarding "…in signed integers. The lowest, non-omitted value maps to 0…", I expect that you will find, e.g., excess-32767/32768-coded fractions even harder to handle than two's complement ones.
If you are not careful, you will lose more time doing conversions than it would have taken with regular operations.
That being said, an alternative that might make some sense would be to map values between 0 and 128 inclusive (or 0 and 32768 if you want more precision), so that all values are essentially stored multiplied by 128.
So if you have (0.5 * 0.75) / (0.125 * 0.25), the stored values for those numbers would be 64, 96, 16 and 32 respectively. Doing the computation in ushort gives (64 * 96) / (16 * 32) = 6144 / 512 = 12. Note that the scale factor cancels in the division, so 12 is already the unscaled result: the expression really does evaluate to 12.0.
By the way, you can ignore scaling for addition, subtraction and division. For multiplication, you would do the multiplication as usual and then divide by 128. So for 0.5 * 0.75 you would have 64 * 96 / 128 = 48, which corresponds to 48 / 128 = 0.375 as expected.
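As a minimal C# sketch of that multiplication rule (the scale of 128 is from above; the helper names are mine):

const int Scale = 128; // values are stored premultiplied by 128

// 0.5 * 0.75: multiply as usual, then divide the scale back out.
// 64 * 96 / 128 = 48, i.e. 0.375. (Byte operands promote to int in C#,
// so the intermediate product cannot overflow here.)
static byte MulScaled(byte a, byte b) => (byte)(a * b / Scale);

// Because the scale is a power of two, the divide can be a shift:
static byte MulScaledShift(byte a, byte b) => (byte)((a * b) >> 7);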
The code can be optimized for the platform, particularly if the platform is more efficient with narrow numbers. And if necessary, rounding could be added to the operations.
By the way, since the scaling is a power of 2, you can use bit shifting for scaling. You might prefer to use 256 instead of 128, particularly if you don't have one-cycle bit shifting, but then you need a larger width to handle some operations.
But you might be able to do some optimization, for example when the most significant bit is not set, so that you would only use the larger width when necessary.
I have this sensor fused data coming from a GYRO-ACC-MAG-hardware sensor.
Its data (YAW, PITCH, ROLL) goes from -180 to +180 or -90 to +90.
Which algorithm lets me offset this to an arbitrary position and also avoid any sign change to minus?
What I mean is: -180 to +180, for instance, leads to 0 to 359. And I want 0 not where 0 of the sensor is, but offset to a certain position. In other words, picture a circle. Now put a zero at an arbitrary position on that circle. Now rotate that circle around its center point. The zero rotates along, so it is now at a different position than it was before, right? That's the idea.
What I did:
YAW + 180 leads to 0 to 359. Pitch + 90 leads to 0 to 179. Roll + 180 leads to 0 to 359.
If I understand you correctly, you want to make use of the modulo operator:
double YAW360 = (YAW+180)%360;
See http://msdn.microsoft.com/en-us/library/0w4e0fzs.aspx
The modulo operator performs a division and returns the remainder:
17 / 5 = 3, remainder 2
Therefore:
17 % 5 = 2
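If you also want to move the zero to an arbitrary position on the circle, you can fold an offset into the same expression. A small sketch (ToOffsetHeading and offsetDegrees are my names, not from the question):

static double ToOffsetHeading(double yaw, double offsetDegrees)
{
    // Shift yaw from [-180, +180] into [0, 360) and rotate the zero
    // point by the desired offset.
    double shifted = (yaw + 180.0 + offsetDegrees) % 360.0;
    // C#'s % keeps the sign of the dividend, so normalize into [0, 360).
    return (shifted + 360.0) % 360.0;
}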
I need to convert any range to a -1 to 1 scale. I have multiple ranges that I am using and therefore need this equation to be dynamic. Below is my current equation. It works great for ranges where 0 is the center point, i.e. 200 to -200. I have another range, however, that isn't converting nicely: 6000 to 4000. I've also tested 0 to 360 and it works.
var offset = -1 + ((2 / yMax) * (point.Y));
One of the major issues that I may have is that sometimes I'll get a value that is outside the range, and as such, the converted value needs to also be outside the -1 to 1 range.
What this is for: I need to take a real-world value and plot it as an OpenGL point. I'm using .NET 4.0 and the Tao Framework.
rescaled = -1 + 2 * (point.Y - yMin) / (yMax - yMin);
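For example, in C# (yMin is the only new input; it is needed for ranges that are not centered on zero, like 4000 to 6000):

static double Rescale(double y, double yMin, double yMax)
{
    // Linear map of [yMin, yMax] onto [-1, +1]; values outside the input
    // range land outside [-1, +1], which the question requires.
    return -1.0 + 2.0 * (y - yMin) / (yMax - yMin);
}

// Rescale(5000, 4000, 6000) == 0.0
// Rescale(6500, 4000, 6000) == 1.5 (outside the range, as desired)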
However, in OpenGL you can do this with a projection matrix (or matrix multiply inside a shader). Read about glTranslatef and glScalef for how to use them or how to duplicate them with matrix multiply.
I'd like to store two float values in a single 32 bit float variable. The encoding will happen in C# while the decoding is to be done in a HLSL shader.
The best solution I've found so far is hard-wiring the position of the decimal point in the encoded values and storing them as the integer and fractional parts of the "carrier" float:
123.456 -> 12.3 and 45.6
It can't handle negative values but that's ok.
However I was wondering if there is a better way to do this.
EDIT: A few more details about the task:
I'm working with a fixed data structure in Unity where the vertex data is stored as floats. (Float2 for a UV, float3 for the normal, and so on.) Apparently there is no way to properly add extra data, so I have to work within these limits; that's why I figured it all came down to a more general issue of encoding data. For example, I could sacrifice the secondary UV data to transfer the 2x2 extra data channels.
The target is shader model 3.0 but I wouldn't mind if the decoding was working reasonably on SM2.0 too.
Data loss is fine as long as it's "reasonable". The expected value range is 0..64, but come to think of it, 0..1 would be fine too, since that is cheap to remap to any range inside the shader. The important thing is to keep precision as high as possible. Negative values are not important.
Following Gnietschow's recommendation, I adapted YellPika's algorithm. (It's C# for Unity 3d.)
float Pack(Vector2 input, int precision)
{
    // Quantize each [0, 1] component to (precision - 1) integer steps.
    Vector2 output = input;
    output.x = Mathf.Floor(output.x * (precision - 1));
    output.y = Mathf.Floor(output.y * (precision - 1));
    // Interleave: x goes into the "high digits", y into the "low digits".
    return (output.x * precision) + output.y;
}
Vector2 Unpack(float input, int precision)
{
    Vector2 output = Vector2.zero;
    // Split the packed value back into the two quantized components.
    output.y = input % precision;
    output.x = Mathf.Floor(input / precision);
    // Map the integer steps back to [0, 1].
    return output / (precision - 1);
}
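A quick (hypothetical) round trip at precision 4096:

Vector2 uv = new Vector2(0.25f, 0.75f);
float packed = Pack(uv, 4096);           // 1023 * 4096 + 3071 = 4193279
Vector2 restored = Unpack(packed, 4096); // ~ (0.2498, 0.7499)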
The quick and dirty testing produced the following stats (1 million random value pairs in the 0..1 range):
Precision: 2048 | Avg error: 0.00024424 | Max error: 0.00048852
Precision: 4096 | Avg error: 0.00012208 | Max error: 0.00024417
Precision: 8192 | Avg error: 0.00011035 | Max error: 0.99999940
Precision of 4096 seems to be the sweet spot. Note that both packing and unpacking in these tests ran on the CPU so the results could be worse on a GPU if it cuts corners with float precision.
Anyway, I don't know if this is the best algorithm but it seems good enough for my case.
I have a series of CSV files of timestamped coordinates (X, Y, and Z in mm). What would be the simplest way to extract motion data from them?
Measurables
The information I'd like to extract includes the following:
Number of direction changes
Initial acceleration of the first and last movements
...and the bearing (angle) of these movements
Average speed whilst non-stationary
Ideally, I'd eventually like to be able to categorise patterns of motion, so bonus points for anyone who can suggest a way of doing this. It strikes me that one way I could do this would be to generate pictures/videos of the motion from the coordinates and ask humans to categorise them - suggestions as to how I'd do this are very welcome.
Noise
A complication is the fact that the readings are polluted with noise. In order to overcome this, each recording is prefaced with at least 20 seconds of stillness which can serve as a sort of "noise profile". I'm not sure how to implement this though.
Specifics
If it helps, the motion being recorded is that of a person's hand during a simple grabbing task. The data is generated using a magnetic motion tracker attached to the wrist. Also, I'm using C#, but I guess the maths is language agnostic.
Edits
Magnetic tracker spec: http://www.ascension-tech.com/realtime/RTminiBIRD500_800.php
Sample data file: http://tdwright.co.uk/sample.csv
Bounty
For the bounty, I'd really like to see some (pseudo-)code examples.
Let's see what can be done with your example data.
Disclaimer: I didn't read your hardware specs (tl;dr :))
I'll work this out in Mathematica for convenience. The relevant algorithms (not many) will be provided as links.
The first observation is that all your measurements are equally spaced in time, which conveniently simplifies the approach and algorithms. We will refer to "time" or "ticks" (measurement indices) interchangeably, as they are equivalent here.
Let's first plot your position by axis, to see what the problem is about:
(* This is Mathematica code, don't mind, I am posting this only for
future reference *)
ListPlot[Transpose@(Take[p1[[All, 2 ;; 4]]][[1 ;;]]),
 PlotRange -> All,
 AxesLabel -> {Style["Ticks", Medium, Bold],
   Style["Position (X,Y,Z)", Medium, Bold]}]
Now, two observations:
Your movement starts around tick 1000
Your movement does not start at {0,0,0}
So, we will slightly transform your data subtracting a zero position and starting at tick 950.
ListLinePlot[
 Drop[Transpose@(x - Array[Mean@(x[[1 ;; 1000]]) &, Length@x]), {}, 950],
 PlotRange -> All,
 AxesLabel -> {Style["Ticks", Medium, Bold],
   Style["Position (X,Y,Z)", Medium, Bold]}]
As the curves have enough noise to spoil the calculations, we will convolve it with a Gaussian Kernel to denoise it:
kern = Table[Exp[-n^2/100]/Sqrt[2. Pi], {n, -10, 10}];
t = Take[p1[[All, 1]]];
x = Take[p1[[All, 2 ;; 4]]];
x1 = ListConvolve[kern, #] & /@
  Drop[Transpose@(x - Array[Mean@(x[[1 ;; 1000]]) &, Length@x]), {},
   950];
So you can see below the original and smoothed trajectories:
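Since the question mentions C#, the same smoothing step there would look roughly like this (a sketch; the kernel matches the one above, but is normalized here so the smoothed signal keeps its original scale):

static double[] SmoothGaussian(double[] signal, int radius = 10)
{
    // kern[n] ~ Exp[-n^2/100], n = -radius..radius, as in the Mathematica code.
    var kern = new double[2 * radius + 1];
    double sum = 0.0;
    for (int n = -radius; n <= radius; n++)
    {
        kern[n + radius] = Math.Exp(-n * n / 100.0);
        sum += kern[n + radius];
    }

    // "Valid"-region convolution, like ListConvolve: the output is
    // shorter than the input by 2 * radius samples.
    var smoothed = new double[signal.Length - 2 * radius];
    for (int i = 0; i < smoothed.Length; i++)
    {
        double acc = 0.0;
        for (int k = 0; k < kern.Length; k++)
            acc += kern[k] * signal[i + k];
        smoothed[i] = acc / sum;
    }
    return smoothed;
}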
Now we are ready to take Derivatives for the Velocity and Acceleration. We will use fourth order approximants for the first and second derivative. We also will smooth them using a Gaussian kernel, as before:
Vel = ListConvolve[kern, #] & /@
   Transpose@
    Table[Table[(-x1[[axis, i + 2]] + x1[[axis, i - 2]] -
         8 x1[[axis, i - 1]] +
         8 x1[[axis, i + 1]])/(12 (t[[i + 1]] - t[[i]])), {axis, 1, 3}],
     {i, 3, Length[x1[[1]]] - 2}];
Acc = ListConvolve[kern, #] & /@
   Transpose@
    Table[Table[(-x1[[axis, i + 2]] - x1[[axis, i - 2]] +
         16 x1[[axis, i - 1]] + 16 x1[[axis, i + 1]] -
         30 x1[[axis, i]])/(12 (t[[i + 1]] - t[[i]])^2), {axis, 1, 3}],
     {i, 3, Length[x1[[1]]] - 2}];
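For reference, the same fourth-order stencils in C# (a sketch assuming uniformly spaced samples with interval dt; the names are mine):

static double[] FirstDerivative(double[] f, double dt)
{
    // (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dt)
    var d = new double[f.Length - 4];
    for (int i = 2; i < f.Length - 2; i++)
        d[i - 2] = (-f[i + 2] + 8 * f[i + 1] - 8 * f[i - 1] + f[i - 2]) / (12 * dt);
    return d;
}

static double[] SecondDerivative(double[] f, double dt)
{
    // (-f[i+2] + 16 f[i+1] - 30 f[i] + 16 f[i-1] - f[i-2]) / (12 dt^2)
    var d = new double[f.Length - 4];
    for (int i = 2; i < f.Length - 2; i++)
        d[i - 2] = (-f[i + 2] + 16 * f[i + 1] - 30 * f[i] + 16 * f[i - 1] - f[i - 2])
                   / (12 * dt * dt);
    return d;
}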
And then we plot them:
Show[ListLinePlot[Vel, PlotRange -> All,
  AxesLabel -> {Style["Ticks", Medium, Bold],
    Style["Velocity (X,Y,Z)", Medium, Bold]}],
 ListPlot[Vel, PlotRange -> All]]
Show[ListLinePlot[Acc, PlotRange -> All,
  AxesLabel -> {Style["Ticks", Medium, Bold],
    Style["Acceleration (X,Y,Z)", Medium, Bold]}],
 ListPlot[Acc, PlotRange -> All]]
Now, we also have the speed and acceleration modulus:
ListLinePlot[Norm /@ (Transpose@Vel),
 AxesLabel -> {Style["Ticks", Medium, Bold],
   Style["Speed Modulus", Medium, Bold]},
 Filling -> Axis]
ListLinePlot[Norm /@ (Transpose@Acc),
 AxesLabel -> {Style["Ticks", Medium, Bold],
   Style["Acceleration Modulus", Medium, Bold]},
 Filling -> Axis]
And the Heading, as the direction of the Velocity:
Show[Graphics3D[
  {Line@(Normalize /@ (Transpose@Vel)),
   Opacity[.7], Sphere[{0, 0, 0}, .7]},
  Epilog -> Inset[Framed[Style["Heading", 20],
     Background -> LightYellow], {Right, Bottom}, {Right, Bottom}]]]
I think this is enough to get you started. Let me know if you need help calculating a particular parameter.
HTH!
Edit
Just as an example, suppose you want to calculate the mean speed while the hand is not at rest. So, we select all points whose speed is above a cutoff, for example 5, and calculate the mean:
Mean@Select[Norm /@ (Transpose@Vel), # > 5 &]
-> 148.085
The units for that magnitude depend on your time units, but I don't see them specified anywhere.
Please note that the cutoff speed is not "intuitive". You can search an appropriate value by plotting the mean speed vs the cutoff speed:
ListLinePlot[
 Table[Mean@Select[Norm /@ (Transpose@Vel), # > h &], {h, 1, 30}],
 AxesLabel -> {Style["Cutoff Speed", Medium, Bold],
   Style["Mean Speed", Medium, Bold]}]
So you see that 5 is an appropriate value.
One solution could be as simple as a state machine, where each state represents a direction. Sequences of movements are represented by sequences of directions. This approach would only work if the orientation of the sensor doesn't change with respect to the movements; otherwise you'll need a method of translating the movements into the correct orientation before calculating sequences of directions.
On the other end, you could use various AI techniques, although exactly what you'd use is beyond me.
To get the speed between any two coordinates:
avg speed = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2) / (t2 - t1)
To get the average speed for the whole motion, say you have 100 timestamped coordinates: use the above equation to calculate 99 speed values, then sum all the speeds and divide by the number of speeds (99).
To get the acceleration, the location at three moments is required, or the velocity at two moments.
Accel X = (x3 - 2*x2 + x1) / (t3 - t2)^2
Accel Y = (y3 - 2*y2 + y1) / (t3 - t2)^2
Accel Z = (z3 - 2*z2 + z1) / (t3 - t2)^2
(This is the central second difference, assuming equally spaced samples.)
Note: This all assumes per axis calculations: I have no experience with two-axis particle motion.
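Putting those formulas into code, a sketch (the tuple layout is illustrative):

static double AverageSpeed((double t, double x, double y, double z)[] s)
{
    // 100 timestamped coordinates yield 99 per-interval speeds.
    double total = 0.0;
    for (int i = 1; i < s.Length; i++)
    {
        double dx = s[i].x - s[i - 1].x;
        double dy = s[i].y - s[i - 1].y;
        double dz = s[i].z - s[i - 1].z;
        total += Math.Sqrt(dx * dx + dy * dy + dz * dz) / (s[i].t - s[i - 1].t);
    }
    return total / (s.Length - 1);
}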
You will have a much easier time with this if you first convert your position measurements to velocity measurements.
First step: Remove the noise. As you said, each recording is prefaced with 20 seconds of stillness. So, to find the actual measurements, search for 20 second intervals where the position doesn't change. Then, take the measurement directly after.
Second step: Calculate velocities using the slope formula, (x2 - x1)/(t2 - t1). The interval should match the interval of the recordings.
Calculations:
Direction change:
A direction change occurs where the velocity crosses zero. Use numeric integration of the acceleration to find these times: integrate from 0 until a time when the result of the integration (the accumulated velocity change) is zero. Record this time. Then integrate from the previous time until you get zero again. Repeat until you hit the end of the data.
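Equivalently, since you already have velocities, you can simply count sign changes per axis, which finds the same zero crossings (a sketch; the names are mine, and the deadband suppresses noise-level jitter):

static int CountDirectionChanges(double[] velocity, double deadband)
{
    int changes = 0, lastSign = 0;
    foreach (double v in velocity)
    {
        if (Math.Abs(v) < deadband) continue; // ignore near-zero noise
        int sign = v > 0 ? 1 : -1;
        if (lastSign != 0 && sign != lastSign) changes++;
        lastSign = sign;
    }
    return changes;
}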
Initial accelerations:
These are found using the slope formula again, substituting v for x.
Average speed:
The average speed formula is the slope formula. x1 and t1 should correspond to the first reading, and x2 and t2 should correspond to the final reading.