What is wrong with this Fourier transform implementation in C#?

I'm trying to implement a discrete Fourier transform, but it's not working. I've probably written a bug somewhere, but I haven't found it yet.
It's based on the following formula (the forward transform; the inverse flips the sign in the exponent):
Xk = sum(x[n] * exp(-i*2*pi/N * k*n)), summing over n = 0..N-1
This function does the first loop, looping over X0 through XN-1:
public Complex[] Transform(Complex[] data, bool reverse)
{
    var transformed = new Complex[data.Length];
    for (var i = 0; i < data.Length; i++)
    {
        // I created a method to calculate a single value
        transformed[i] = TransformSingle(i, data, reverse);
    }
    return transformed;
}
And the actual calculation; this is probably where the bug is:
private Complex TransformSingle(int k, Complex[] data, bool reverse)
{
    var sign = reverse ? 1.0 : -1.0;
    var transformed = Complex.Zero;
    var argument = sign * 2.0 * Math.PI * k / data.Length;
    for (var i = 0; i < data.Length; i++)
    {
        transformed += data[i] * Complex.FromPolarCoordinates(1, argument * i);
    }
    return transformed;
}
Next, an explanation of the rest of the code:
var sign = reverse ? 1.0 : -1.0; The inverse DFT has a positive sign in the exponent, while the forward DFT has a negative one.
var argument = sign * 2.0 * Math.PI * k / data.Length; This is the sign * 2*pi*k/N part of the exponent.
Then the last part multiplies each sample by exp(i * argument * n) and accumulates the sum:
transformed += data[i] * Complex.FromPolarCoordinates(1, argument * i);
I think I carefully copied the algorithm, so I don't see where I made the mistake...
Additional information
As Adam Gritt has shown in his answer, there is a nice implementation of this algorithm by AForge.net. I can probably solve this problem in 30 seconds by just copying their code. However, I still don't know what I have done wrong in my implementation.
I'm really curious where my flaw is, and what I have interpreted wrong.

My days of doing complex mathematics are a ways behind me right now, so I may be missing something myself. However, it appears to me that you are doing the following line:
transformed += data[i]*Complex.FromPolarCoordinates(1, argument*i);
when it should probably be more like:
transformed += data[i]*Math.Pow(Math.E, Complex.FromPolarCoordinates(1, argument*i));
Unless you have this wrapped up in the FromPolarCoordinates() method.
UPDATE:
I found the following bit of code in the AForge.NET Framework library, and it shows additional Cos/Sin operations being done that are not handled in your code. This code can be found in full context in Sources\Math\FourierTransform.cs, in the DFT method.
for (int i = 0; i < n; i++)
{
    dst[i] = Complex.Zero;
    arg = -(int) direction * 2.0 * System.Math.PI * (double) i / (double) n;
    // sum source elements
    for (int j = 0; j < n; j++)
    {
        cos = System.Math.Cos(j * arg);
        sin = System.Math.Sin(j * arg);
        dst[i].Re += (data[j].Re * cos - data[j].Im * sin);
        dst[i].Im += (data[j].Re * sin + data[j].Im * cos);
    }
}
It is using a custom Complex class (as it predates .NET 4.0). Most of the math is similar to what you have implemented, but the inner iteration performs additional operations on the Real and Imaginary portions.
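For reference, the two formulations agree because Complex.FromPolarCoordinates(1, theta) is just cos(theta) + i*sin(theta), so the product in your loop expands to exactly the Re/Im expressions above:
(a + b*i) * (cos + sin*i) = (a*cos - b*sin) + (a*sin + b*cos)*i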
FURTHER UPDATE:
After some implementation and testing, I found that the code above and the code provided in the question produce the same results. Based on the comments, I also found what the difference is between what this code generates and what is produced by WolframAlpha: Wolfram appears to apply a normalization of 1/sqrt(N) to the results. In the Wolfram link provided, if each value is multiplied by Sqrt(2), the values are the same as those generated by the above code (rounding errors aside). I tested this by passing 3, 4, and 5 values into Wolfram and found that my results differed by Sqrt(3), Sqrt(4), and Sqrt(5) respectively. The Wikipedia article on the Discrete Fourier Transform mentions a normalization that makes the DFT and IDFT unitary. This might be the avenue you need to look down, either to modify your code or to understand what Wolfram may be doing.

Your code is actually almost right (you are missing a 1/N on the inverse transform). The thing is, the formula you used is typically preferred for computation because it's lighter, but in purely theoretical settings (and in Wolfram) you would use a normalization by 1/sqrt(N) to make the transforms unitary.
i.e. your formulas would be:
Xk = 1/sqrt(N) * sum(x[n] * exp(-i*2*pi/N * k*n))
x[n] = 1/sqrt(N) * sum(Xk * exp(i*2*pi/N * k*n))
It's just a matter of convention in normalisation; only the amplitudes change, so your results weren't that bad (apart from the forgotten 1/N in the inverse transform).
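To make that concrete, here is a minimal sketch of the fix, reusing the TransformSingle from the question unchanged and only adding the conventional 1/N scaling on the inverse:
public Complex[] Transform(Complex[] data, bool reverse)
{
    var transformed = new Complex[data.Length];
    for (var i = 0; i < data.Length; i++)
    {
        transformed[i] = TransformSingle(i, data, reverse);
        if (reverse)
        {
            // The missing 1/N: with it, Transform(Transform(x, false), true)
            // round-trips to the original data. (The unitary convention would
            // instead scale both directions by 1/sqrt(N).)
            transformed[i] /= data.Length;
        }
    }
    return transformed;
}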
Cheers

Translate Excel Spreadsheets with SOLVER in C#

After several searches and many mistakes on my part, I finally managed to get a result out of the MSF solver.
However, it's not perfect: there is still a difference between my C# code and Excel.
In the Excel workbook I have 6 solvers, all roughly identical: only one solver per tab, but a lot of calculations.
In order to stick as closely as possible to the Excel workbook, I created one method per cell containing a formula.
My code works, in the sense that if I give it the same data as Excel I get the same results, but with the solver I get a small difference.
Here's what I did; please tell me if there's anything I can improve while keeping my methods (each representing one of my Excel cells).
Each cell is represented twice: I need the cell's plain value for other calculations, and it seems I can't pass methods returning a double to the solver.
Classic method:
private double Cell_I5()
{
    double res = (Math.Exp(-Var.Calc.Var4 * Var.Calc.De * M23) - 1) / (-Var.Calc.Var4 * Var.Calc.De * M23);
    return res;
}
Method for the solver:
private Term Solv_I5()
{
    Term res = (Model.Exp(-Var.Calc.Var4 * Var.Calc.De * Solver_M23) - 1) / (-Var.Calc.Var4 * Var.Calc.De * Solver_M23);
    return res;
}
'M23' is a double, 'Solver_M23' is a Decision, and 'Var4' and 'De' are doubles as well.
So I return a 'Term' instead of a double, and I change all the Math functions to their 'Model' equivalents, except Math.PI, which is a constant.
You can imagine that there are close to 60 to 70 methods involved like that.
My method for the solver:
public void StartSolver()
{
    var solver = SolverContext.GetContext();
    solver.ClearModel();
    var model = solver.CreateModel();
    // Instantiate the solver variables as Real (double), non-negative
    Solver_M22 = new Decision(Domain.RealNonnegative, "M22");
    Solver_M23 = new Decision(Domain.RealNonnegative, "M23");
    Solver_M24 = new Decision(Domain.RealNonnegative, "M24");
    Solver_M25 = new Decision(Domain.RealNonnegative, "M25");
    Solver_M26 = new Decision(Domain.RealNonnegative, "M26");
    model.AddDecision(Solver_M22);
    model.AddDecision(Solver_M23);
    model.AddDecision(Solver_M24);
    model.AddDecision(Solver_M25);
    model.AddDecision(Solver_M26);
    model.AddConstraint("M22a", Solver_M22 <= 4);
    model.AddConstraint("M22b", Solver_M22 >= 0);
    model.AddConstraint("M23a", Solver_M23 <= 2);
    model.AddConstraint("M23b", Solver_M23 >= 0.001);
    model.AddConstraint("M24a", Solver_M24 <= 2);
    model.AddConstraint("M24b", Solver_M24 >= 0);
    model.AddConstraint("M25a", Solver_M25 <= 2);
    model.AddConstraint("M25b", Solver_M25 >= 0);
    model.AddConstraint("M26a", Solver_M26 <= 2);
    model.AddConstraint("M26b", Solver_M26 >= 0.001);
    // Test with the classic calculation methods
    double test = Cell_H33() + Cell_H23();
    // Add the solver methods as the goal
    model.AddGoal("SommeDesCarresDesEquartsGlobal", GoalKind.Minimize, Solv_H33() + Solv_H23());
    // Solve our problem
    var solution = solver.Solve();
    // Get our decisions
    M22 = Solver_M22.ToDouble();
    M23 = Solver_M23.ToDouble();
    M24 = Solver_M24.ToDouble();
    M25 = Solver_M25.ToDouble();
    M26 = Solver_M26.ToDouble();
    string s = solution.Quality.ToString();
    // For testing
    double testSortie = Cell_H33() + Cell_H23();
}
Questions:
1)
At no point do I indicate whether the calculation is linear or not. How do I indicate this if necessary?
In Excel it is declared nonlinear.
I saw that the solver looks for the best method on its own.
2)
Is there something I'm not doing right, given that I don't get the same values as Excel? I checked all the methods one by one several times; with so many of them I may have missed something, so I'll recheck tomorrow.
3)
Apart from re-running the calculation with the classic methods, I have not found a way to get my result out of the 'solution' object. How do I extract it, if that's possible?
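What I would like is something along these lines (a sketch only; it assumes the Goal object returned by Model.AddGoal can be read back after Solve, which I have not verified):
Goal goal = model.AddGoal("SommeDesCarresDesEquartsGlobal", GoalKind.Minimize, Solv_H33() + Solv_H23());
Solution solution = solver.Solve();
double goalValue = goal.ToDouble();      // the minimized value, read directly
Console.WriteLine(solution.GetReport()); // which solver ran, quality, etc.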
4)
Here are the values of the 5 variables found by MSF in C#:
0.06014756519010750
0.07283670953453890
0.07479568348101340
0.02864805010533950
0.00100000002842722
And here is what the Excel solver finds:
0.0000
0.0010
0.0141
0.0000
0.0010
Is there a way to restrict the number of decimal places directly in the calculations? When I round manually (after the calculation), my result changes quite a bit.
Thank you.
[EDIT] I forgot to post this message; it was still pending.
This morning I ran the C# solver calculation again, and the result is very different, with a huge gap. I remind you that I want to minimize the result:
Excel = 3.92
C# = 8122.34
Not an acceptable result at all.
[EDIT 2]
I may have a clue:
When I do a simple calculation, such as:
private Term Solv_I5()
{
    Term res = Model.Exp(-Var.Calc.Var4 * Var.Calc.Den * Solver_M25);
    return res;
}
the result is:
{Exp(Times(-4176002161226263/70368744177664, M25))}
Why "Times"?
All formulas with multiplications contain 'Times'. For divisions there is 'Quotient', for additions 'Plus', but for multiplications: 'Times'!
Question 4)
Am I doing the multiplications wrong in a 'Term'?
Do you have an idea?
[EDIT 3]
I just saw that "Times" is not a strange term after all; another misunderstanding of English on my part, sorry.
So that doesn't solve my problem.
Can you help me, please?

Why is HashSet<Point> so much slower than HashSet<string>?

I wanted to store some pixel locations without allowing duplicates, so the first thing that comes to mind is HashSet&lt;Point&gt; or a similar class. However, this seems to be very slow compared to something like HashSet&lt;string&gt;.
For example, this code:
HashSet<Point> points = new HashSet<Point>();
using (Bitmap img = new Bitmap(1000, 1000))
{
    for (int x = 0; x < img.Width; x++)
    {
        for (int y = 0; y < img.Height; y++)
        {
            points.Add(new Point(x, y));
        }
    }
}
takes about 22.5 seconds.
While the following code (which is not a good choice for obvious reasons) takes only 1.6 seconds:
HashSet<string> points = new HashSet<string>();
using (Bitmap img = new Bitmap(1000, 1000))
{
    for (int x = 0; x < img.Width; x++)
    {
        for (int y = 0; y < img.Height; y++)
        {
            points.Add(x + "," + y);
        }
    }
}
So, my questions are:
Is there a reason for that? I checked this answer, but 22.5 sec is way more than the numbers shown in that answer.
Is there a better way to store points without duplicates?
There are two perf problems induced by the Point struct. Something you can see when you add Console.WriteLine(GC.CollectionCount(0)); to the test code. You'll see that the Point test requires ~3720 collections but the string test only needs ~18 collections. Not for free. When you see a value type induce so many collections then you need to conclude "uh-oh, too much boxing".
At issue is that HashSet<T> needs an IEqualityComparer<T> to get its job done. Since you did not provide one, it falls back to the one returned by EqualityComparer<T>.Default. That comparer can do a good job for string, which implements IEquatable<string>. But not for Point: it is a type that harks back to .NET 1.0 and never got the generics love. All it can do is use the Object methods.
The other issue is that Point.GetHashCode() does not do a stellar job in this test: too many collisions, so it hammers Object.Equals() pretty heavily. String has an excellent GetHashCode implementation.
You can solve both problems by providing the HashSet with a good comparer. Like this one:
class PointComparer : IEqualityComparer<Point>
{
    public bool Equals(Point x, Point y)
    {
        return x.X == y.X && x.Y == y.Y;
    }

    public int GetHashCode(Point obj)
    {
        // Perfect hash for practical bitmaps, their width/height is never >= 65536
        return (obj.Y << 16) ^ obj.X;
    }
}
And use it:
HashSet<Point> list = new HashSet<Point>(new PointComparer());
And it is now about 150 times faster, easily beating the string test.
The main reason for the performance drop is all the boxing going on (as already explained in Hans Passant's answer).
Apart from that, the hash code algorithm worsens the problem, because it causes more calls to Equals(object obj), thus increasing the number of boxing conversions.
Also note that the hash code of Point is computed by x ^ y. This produces very little dispersion in your data range, and therefore the buckets of the HashSet are overpopulated — something that doesn't happen with string, where the dispersion of the hashes is much larger.
You can solve that problem by implementing your own Point struct (trivial) and using a better hash algorithm for your expected data range, e.g. by shifting the coordinates:
(x << 16) ^ y
For some good advice when it comes to hash codes, read Eric Lippert's blog post on the subject.
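To illustrate, here is a minimal sketch of such a struct; the name PixelPoint and the 16-bit assumption are mine (any bitmap with width and height below 65536 satisfies it):
using System;

public readonly struct PixelPoint : IEquatable<PixelPoint>
{
    public readonly int X;
    public readonly int Y;

    public PixelPoint(int x, int y) { X = x; Y = y; }

    // EqualityComparer<PixelPoint>.Default calls this overload directly:
    // no boxing, no Object.Equals.
    public bool Equals(PixelPoint other) => X == other.X && Y == other.Y;

    public override bool Equals(object obj) => obj is PixelPoint p && Equals(p);

    // Distinct hash for every point of a 65536 x 65536 grid.
    public override int GetHashCode() => (X << 16) ^ Y;
}
With this, a plain new HashSet&lt;PixelPoint&gt;() needs no custom comparer: the default comparer picks up IEquatable&lt;PixelPoint&gt; and avoids boxing entirely.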

Finding all nth roots of a complex number in C#

Consider a generic complex number:
System.Numerics.Complex z = new System.Numerics.Complex(0,1); // z = i
And now consider the n-th root extraction operation on z. As you all know, an equation like w^n = z (with z and w complex numbers and n a positive nonzero integer) has n different complex solutions w, all residing on the circle of radius |z|^(1/n) in the complex plane.
In the System.Numerics namespace I could not find such a method. I obviously need some sort of method like this:
Complex[] NRoot(Complex number, int n);
How can I find this method? Do I really need to implement it myself?
How can I find this method?
You can't, it's not built into the Framework.
Do I really need to implement it myself?
Yes.
Sorry if this comes across as a tad flip; I don't mean it to, but I suspect you already knew this would be the answer.
That said, there's no magic to it:
using System;
using System.Diagnostics.Contracts;
using System.Linq;
using System.Numerics;

public static class ComplexExtensions
{
    public static Complex[] NthRoot(this Complex complex, int n)
    {
        Contract.Requires(n > 0);
        var phase = complex.Phase;
        var magnitude = complex.Magnitude;
        var nthRootOfMagnitude = Math.Pow(magnitude, 1.0 / n);
        return Enumerable.Range(0, n)
                         .Select(k => Complex.FromPolarCoordinates(
                             nthRootOfMagnitude,
                             phase / n + k * 2 * Math.PI / n))
                         .ToArray();
    }
}
Most of the work is offloaded to the Framework. I trust that they've implemented Complex.Phase and Complex.Magnitude correctly (the naive (Complex complex) => Math.Sqrt(complex.Real * complex.Real + complex.Imaginary * complex.Imaginary) is bad, Bad, BAD: the intermediate squares overflow even when the magnitude itself is representable) and Math.Pow correctly.
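A quick usage sketch (the expected values follow from writing i in polar form, at phase pi/2):
using System;
using System.Numerics;

class NthRootDemo
{
    static void Main()
    {
        // The two square roots of i = exp(i*pi/2).
        foreach (Complex root in new Complex(0, 1).NthRoot(2))
        {
            Console.WriteLine(root);
        }
        // Prints approximately (0.707, 0.707) and (-0.707, -0.707),
        // i.e. plus/minus (sqrt(2)/2)(1 + i).
    }
}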

Can managed code impact instruction level parallelism?

Is there any way I can impact instruction-level parallelism writing C# code? In other words, is there a way I can "help" the compiler produce code that makes the best use of ILP? I ask because I'm trying to abstract away from a few concepts of machine architecture, and I need to know whether this is possible. If not, then I'm justified in abstracting away from ILP.
EDIT: you will notice that I do not want to exploit ILP using C# in any way. My question is exactly the opposite. Paraphrasing: "I hope there's no way to exploit ILP from C#".
Thanks.
ILP is a feature of the CPU; you have no way to control it.
Compilers try their best to exploit it by breaking dependency chains.
This may include the .NET JIT compiler, though I have no evidence of that.
You are at the mercy of the JIT when it comes to instruction-level parallelism.
Who knows what optimisations the JIT actually does? I would pick another language, like C++, if I really needed this.
To best exploit ILP you need to break dependency chains; this should still apply. See this thread.
However, with all the abstraction, I doubt it is still possible to exploit this effectively except in the most extreme cases. What examples do you have where you need this?
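To illustrate what breaking a dependency chain looks like in C# (a sketch; the method names are mine, and any gain depends entirely on the CPU and the JIT):
// In SumSingle every addition depends on the previous one, so the adds
// must execute in sequence. In SumPaired the two accumulators form two
// independent chains that the CPU may execute in parallel.
static double SumSingle(double[] a)
{
    double sum = 0;
    for (int i = 0; i < a.Length; i++)
        sum += a[i];
    return sum;
}

static double SumPaired(double[] a)
{
    double s0 = 0, s1 = 0;
    int i = 0;
    for (; i + 1 < a.Length; i += 2)
    {
        s0 += a[i];     // independent of the next line
        s1 += a[i + 1]; // independent of the previous line
    }
    if (i < a.Length)
        s0 += a[i];     // odd-length tail
    return s0 + s1;
}
Note that floating-point addition is not associative, so the two methods can return slightly different results.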
There is NO explicit or direct way to influence or hint to the .NET compiler, in IL or C#, to do this. It is entirely the compiler's job.
The only influence you can have is to structure your program so that the compiler is more likely (although not guaranteed) to do it for you, and it would be difficult to know whether it even acted on that structure. This is well abstracted away from the .NET languages and IL.
You are able to use ILP in the CLI, so the short answer is no.
A bit longer:
I once wrote code for a simple image-processing task and used this kind of optimization to make my code a "bit" faster.
A "short" example:
static void Main(string[] args)
{
    const int ITERATION_NUMBER = 100;
    TimeSpan[] normal = new TimeSpan[ITERATION_NUMBER];
    TimeSpan[] ilp = new TimeSpan[ITERATION_NUMBER];
    int SIZE = 4000000;
    float[] data = new float[SIZE];
    float safe = 0.0f;
    // Normal for
    Stopwatch sw = new Stopwatch();
    for (int iteration = 0; iteration < ITERATION_NUMBER; iteration++)
    {
        // Initialization
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = 1.0f;
        }
        sw.Start();
        for (int index = 0; index < data.Length; index++)
        {
            data[index] /= 3.0f * data[index] > 2.0f / data[index] ? 2.0f / data[index] : 3.0f * data[index];
        }
        sw.Stop();
        normal[iteration] = sw.Elapsed;
        safe = data[0];
        // Initialization
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = 1.0f;
        }
        sw.Reset();
        // ILP for
        sw.Start();
        float ac1, ac2, ac3, ac4;
        int length = data.Length / 4;
        for (int i = 0; i < length; i++)
        {
            int index0 = i << 2;
            int index1 = index0;
            int index2 = index0 + 1;
            int index3 = index0 + 2;
            int index4 = index0 + 3;
            ac1 = 3.0f * data[index1] > 2.0f / data[index1] ? 2.0f / data[index1] : 3.0f * data[index1];
            ac2 = 3.0f * data[index2] > 2.0f / data[index2] ? 2.0f / data[index2] : 3.0f * data[index2];
            ac3 = 3.0f * data[index3] > 2.0f / data[index3] ? 2.0f / data[index3] : 3.0f * data[index3];
            ac4 = 3.0f * data[index4] > 2.0f / data[index4] ? 2.0f / data[index4] : 3.0f * data[index4];
            data[index1] /= ac1;
            data[index2] /= ac2;
            data[index3] /= ac3;
            data[index4] /= ac4;
        }
        sw.Stop();
        ilp[iteration] = sw.Elapsed;
        sw.Reset();
    }
    Console.WriteLine(data.All(item => item == data[0]));
    Console.WriteLine(data[0] == safe);
    Console.WriteLine();
    double normalElapsed = normal.Max(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("Normal Max.: {0}", normalElapsed));
    double ilpElapsed = ilp.Max(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("ILP Max.: {0}", ilpElapsed));
    Console.WriteLine();
    normalElapsed = normal.Average(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("Normal Avg.: {0}", normalElapsed));
    ilpElapsed = ilp.Average(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("ILP Avg.: {0}", ilpElapsed));
    Console.WriteLine();
    normalElapsed = normal.Min(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("Normal Min.: {0}", normalElapsed));
    ilpElapsed = ilp.Min(time => time.TotalMilliseconds);
    Console.WriteLine(String.Format("ILP Min.: {0}", ilpElapsed));
}
Results (on .NET Framework 4.0 Client Profile, Release build):
On a virtual machine (I think with no ILP):
True
True
Normal Max.: 111,1894
ILP Max.: 106,886
Normal Avg.: 78,163619
ILP Avg.: 77,682513
Normal Min.: 58,3035
ILP Min.: 56,7672
On a Xeon:
True
True
Normal Max.: 40,5892
ILP Max.: 30,8906
Normal Avg.: 35,637308
ILP Avg.: 25,45341
Normal Min.: 34,4247
ILP Min.: 23,7888
Explanation of results:
In Debug, no optimization is applied by the compiler, but the second for loop is better structured than the first, so there is a significant difference.
The answer seems to lie in the results of running the Release-mode builds. The IL compiler/JIT does its best to minimize performance consumption (I think even including ILP). But when you write code like the second for loop, you can reach better results in special cases, and the second loop can outperform the first on some architectures. But
You are at the mercy of the JIT
as mentioned, sadly. It is a pity the specification says nothing about implementations being allowed to apply further optimizations such as ILP (a short paragraph could be placed in the specification), but they cannot enumerate every form of architectural code optimization, and the CLI sits at a higher level:
This is well abstracted away from the .NET languages and IL.
This is a very complex problem that can only be answered experimentally, and I don't think we can get a much more precise answer this way. I also think the question is misleading, because it doesn't depend on C#; it depends on the implementation of the CLI.
There can be many influencing factors, which makes it hard to answer such a question correctly while treating the JIT as a black box.
I found material about loop vectorization and auto-threading on pages 512-513 of the ECMA-335 specification:
http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-335.pdf
I don't think the specification states explicitly how the JIT must behave in cases like this, so implementers can choose how to optimize. So I think you can have an impact: if you write more optimal code, the JIT will try to use ILP where it is possible and implemented. Because the specification doesn't pin this down, the possibility exists.
So the answer seems to be no: I believe you can't abstract away from ILP in the case of the CLI, since the specification doesn't say you can.
Update:
I had found this blog post before, but only tracked it down again now:
http://igoro.com/archive/gallery-of-processor-cache-effects/
Example four contains a short but proper answer to your question.

Is there an easy and fast way of checking if a polygon is self-intersecting?

I have a System.Windows.Shapes.Polygon object, whose layout is determined completely by a series of points. I need to determine if this polygon is self-intersecting, i.e., if any of the sides of the polygon intersect any of the other sides at a point which is not a vertex.
Is there an easy/fast way to compute this?
Easy, slow, low memory footprint: compare each segment against all others and check for intersections. Complexity O(n²).
Slightly faster, medium memory footprint (modified version of the above): store edges in spatial "buckets", then perform the above algorithm on a per-bucket basis. Complexity O(n²/m) for m buckets (assuming uniform distribution).
Fast, high memory footprint: use a spatial hash function to split edges into buckets, then check for collisions. Complexity O(n).
Fast, low memory footprint: use a sweep-line algorithm, such as the one described here (or here). Complexity O(n log n).
The last is my favorite as it has a good speed/memory balance, especially the Bentley-Ottmann algorithm. Implementation isn't too complicated either. A brute-force sketch of the first approach follows below.
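My own C# sketch of that brute-force approach (using System.Windows.Point; it only detects proper crossings, i.e. it ignores collinear overlaps and shared endpoints):
using System.Collections.Generic;
using System.Windows;

static class PolygonSelfIntersection
{
    public static bool HasSelfIntersection(IList<Point> v)
    {
        int n = v.Count;
        for (int i = 0; i < n; i++)
        {
            // j starts at i + 2 so adjacent sides (sharing a vertex) are skipped.
            for (int j = i + 2; j < n; j++)
            {
                // Side 0 and side n - 1 are adjacent too.
                if (i == 0 && j == n - 1)
                    continue;
                if (ProperlyIntersect(v[i], v[(i + 1) % n], v[j], v[(j + 1) % n]))
                    return true;
            }
        }
        return false;
    }

    // Cross product of (a - o) and (b - o); the sign tells which side of
    // the line through o and a the point b lies on.
    static double Cross(Point o, Point a, Point b)
    {
        return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
    }

    static bool ProperlyIntersect(Point p1, Point p2, Point q1, Point q2)
    {
        double d1 = Cross(q1, q2, p1);
        double d2 = Cross(q1, q2, p2);
        double d3 = Cross(p1, p2, q1);
        double d4 = Cross(p1, p2, q2);
        // Each segment must straddle the other's supporting line.
        return d1 * d2 < 0 && d3 * d4 < 0;
    }
}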
Check if any pair of non-contiguous line segments intersects.
For the sake of completeness, I'll add another algorithm to this discussion.
Assuming the reader knows about axis-aligned bounding boxes (AABBs; Google it if not), it can be very efficient to quickly find pairs of edges whose AABBs overlap using the "sweep and prune" algorithm (Google it). The exact intersection routines are then called only on those pairs.
The advantage here is that you can even intersect non-straight edges (circles and splines), and the approach is more general, while remaining almost as efficient.
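For reference, the AABB overlap test itself is cheap (a sketch; the struct name is mine):
// Two axis-aligned boxes overlap iff their intervals overlap on both axes.
struct Aabb
{
    public double MinX, MinY, MaxX, MaxY;

    public bool Overlaps(Aabb other)
    {
        return MinX <= other.MaxX && other.MinX <= MaxX
            && MinY <= other.MaxY && other.MinY <= MaxY;
    }
}
Sweep and prune sorts the boxes along one axis, so only boxes whose intervals overlap on that axis are ever compared on the other.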
I am a newbie here, and this question is old enough, but here is my Java code for determining whether any pair of sides of a polygon, defined by an ordered set of points, cross over. You can remove the print statements used for debugging. I have also not included the code for returning the first point of crossover found. I am using the Line2D class from the standard Java library.
/**
 * Checks if any two sides of a polygon cross over.
 * If so, returns that Point.
 *
 * The polygon is determined by the ordered sequence
 * of Points V.
 *
 * If not, returns null.
 *
 * @param V vertices of the polygon
 *
 * @return
 */
public static Point verify(Point[] V)
{
    if (V == null)
    {
        return null;
    }
    int len = V.length;
    /*
     * No cross-over if len < 4
     */
    if (len < 4)
    {
        return null;
    }
    System.out.printf("\nChecking %d Sided Polygon\n\n", len);
    for (int i = 0; i < len - 1; i++)
    {
        for (int j = i + 2; j < len; j++)
        {
            /*
             * Eliminate combinations already checked
             * or not valid
             */
            if ((i == 0) && (j == (len - 1)))
            {
                continue;
            }
            System.out.printf("\nChecking if Side %3d cuts Side %3d: ", i, j);
            boolean cut = Line2D.linesIntersect(
                V[i].X, V[i].Y,
                V[i + 1].X, V[i + 1].Y,
                V[j].X, V[j].Y,
                V[(j + 1) % len].X, V[(j + 1) % len].Y);
            if (cut)
            {
                System.out.printf("\nSide %3d CUTS Side %3d. Returning\n", i, j);
                return ( ... some point or the point of intersection....)
            }
            else
            {
                System.out.printf("NO\n");
            }
        }
    }
    return null;
}
