Arbitrary precision for decimals in C# - help?

Here is my current code for computing Pi using the Chudnovsky method in C#:
using System;
using System.Diagnostics;
using System.IO;
using java.math;
namespace pi.chudnovsky
{
public class Program
{
static Double Factorial(Double fact)
{
//begin factorial function
if (fact <= 1)
return 1;
else
return fact * Factorial(fact - 1); //recursively multiplies down to 1 to build up the factorial
}
static Double doSummation(Double maxPower)
{
//begin chudnovsky summation function
Double sum = 0;
for (int i = 0; i <= maxPower; i++) //starts at i=0
{
sum += ((Math.Pow(-1, i)) * Factorial(6 * i) * (13591409 + 545140134.0 * i)) / (Factorial(3 * i) * Factorial(i) * Factorial(i) * Factorial(i) * Math.Pow(640320, (3 * i + 1.5))); //Chudnovsky series term: (-1)^i (6i)! (13591409 + 545140134 i) / ((3i)! (i!)^3 640320^(3i + 3/2))
}
return sum;
}
static void Main(string[] args)
{
int num;
Console.WriteLine("Enter how many terms to compute Chudnovsky summation: ");
//begin stopwatch
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
//parse user input
num = Convert.ToInt32(Console.ReadLine());
//perform calculation
Double inv = 1 / (12 * doSummation(num));
//stop stopwatch
stopwatch.Stop();
//display info
Console.WriteLine(inv);
Console.WriteLine("3.14159265358979323846264338327950288419716939937510");
Console.WriteLine("Time elapsed: {0}", stopwatch.Elapsed.TotalMilliseconds);
//write to pi.txt
TextWriter pi = new StreamWriter("pi.txt");
pi.WriteLine(inv);
pi.Close();
//write to stats.txt
TextWriter stats = new StreamWriter("stats.txt");
stats.WriteLine(stopwatch.Elapsed.TotalMilliseconds);
stats.Close();
}
}
}
So, I've referenced the J# library and included java.math. Now, when I replace all the Doubles with BigDecimals, I get these compile errors:
http://f.cl.ly/items/1r2X26470d0d0n260p0p/Image%202011-11-14%20at%206.16.19%20PM.png
I know the problem isn't that I'm using int for the loops, as it worked perfectly with Doubles. My question is: how do you resolve these errors relating to int and BigDecimal, or can you recommend another arbitrary-precision library?
I've tried using XMPIR and got everything to compile, but I get:
http://f.cl.ly/items/1l3C371j2u3z3n2g3a0j/Image%202011-11-14%20at%206.20.24%20PM.png
So can I use P/Invoke to include XMPIR so that I can use whatever its BigDecimal equivalent is?
Thank you for your time!

Can you not convert your ints to BigDecimals before comparing them?
I assume you understand that the problem here is that the BigDecimal class has no overloads of the greater-than, less-than, etc. operators that accept an int.
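For example, a rough sketch (the exact constructor and method names are assumptions about the J# build of java.math.BigDecimal, which exposes the Java-style API): wrap the loop counter in a BigDecimal and use compareTo instead of the <= operator.
//maxPower is a java.math.BigDecimal; it has no < or <= operators usable from C#,
//so build a BigDecimal from the counter and compare with compareTo
for (int i = 0; new BigDecimal(i.ToString()).compareTo(maxPower) <= 0; i++)
{
    //summation body unchanged
}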

Alglib data fitting with minlmoptimize does not minimize the results. Full C# included

I'm having trouble implementing the Levenberg-Marquardt (LM) optimizer in the alglib library. I'm not sure why the parameters are hardly changing at all while I still receive an exit code of 4. I have been unable to determine what I am doing wrong from the alglib documentation. Below is the full source I am running:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Threading.Tasks;
namespace FBkineticsFitter
{
class Program
{
public static int Main(string[] args)
{
/*
* This code finds the parameters ka, kd, and Bmax from the minimization of the residuals using "V" mode of the Levenberg-Marquardt optimizer (alglib library).
* This optimizer is used because the equation is non-linear and this particular version of the optimizer does not require the ab initio calculation of partial
* derivatives, a Jacobian matrix, or other parameter-space definitions, so its implementation is simple.
*
* The equations being solved represent a model of a protein-protein interaction where protein in solution is interacting with immobilized protein on a sensor
* in a 1:1 stoichiometry. Mass transport limit is not taken into account. The details of this equation are described in:
* R.B.M. Schasfoort and Anna J. Tudos Handbook of Surface Plasmon Resonance, 2008, Chapter 5, ISBN: 978-0-85404-267-8
*
* Y=((ka*Cpro*Bmax)/(ka*Cpro+kd))*(1-exp(-1*X*(ka*Cpro+kd))) ; this equation describes the association phase
*
* Y=Req*exp(-1*X*kd) ; this equation describes the dissociation phase
*
* The data are fit globally such that Bmax and Req parameters are linked and kd parameters are linked during simultaneous optimization for the most robust fit
*
* Y= signal
* X= time
* ka= association constant
* kd= dissociation constant
* Bmax= maximum binding capacity at equilibrium
* Req=(Cpro/(Cpro+kobs))*Bmax :. in this case Req=Bmax because Cpro=0 during the dissociation step
* Cpro= concentration of protein in solution
*
* additional calculations:
* kobs=ka*Cpro
* kD=kd/ka
*/
GetRawDataXY(@"C:\Results.txt");
double epsg = .0000001;
double epsf = 0;
double epsx = 0;
int maxits = 0;
alglib.minlmstate state;
alglib.minlmreport rep;
alglib.minlmcreatev(2, GlobalVariables.param, 0.0001, out state);
alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
alglib.minlmoptimize(state, Calc_residuals, null, null);
alglib.minlmresults(state, out GlobalVariables.param, out rep);
System.Console.WriteLine("{0}", rep.terminationtype); ////1=relative function improvement is no more than EpsF. 2=relative step is no more than EpsX. 4=gradient norm is no more than EpsG. 5=MaxIts steps was taken. 7=stopping conditions are too stringent,further improvement is impossible, we return best X found so far. 8= terminated by user
System.Console.WriteLine("{0}", alglib.ap.format(GlobalVariables.param, 20));
System.Console.ReadLine();
return 0;
}
public static void Calc_residuals(double[] param, double[] fi, object obj)
{
/*calculate the difference of the model and the raw data at each X (I.E. residuals)
* the sum of the square of the residuals is returned to the optimized function to be minimized*/
fi[0] = 0;
fi[1] = 0;
for (int i = 0; i < GlobalVariables.rawXYdata[0].Count();i++ )
{
if (GlobalVariables.rawXYdata[1][i] <= GlobalVariables.breakpoint)
{
fi[0] += System.Math.Pow((kaEQN(GlobalVariables.rawXYdata[0][i]) - GlobalVariables.rawXYdata[1][i]), 2);
}
else
{
fi[1] += System.Math.Pow((kdEQN(GlobalVariables.rawXYdata[0][i]) - GlobalVariables.rawXYdata[1][i]), 2);
}
}
}
public static double kdEQN(double x)
{
/*Calculate kd Y value based on the incremented parameters*/
return GlobalVariables.param[2] * Math.Exp(-1 * x * GlobalVariables.param[1]);
}
public static double kaEQN(double x)
{
/*Calculate ka Y value based on the incremented parameters*/
return ((GlobalVariables.param[0] * GlobalVariables.Cpro * GlobalVariables.param[2]) / (GlobalVariables.param[0] * GlobalVariables.Cpro + GlobalVariables.param[1])) * (1 - Math.Exp(-1 * x * (GlobalVariables.param[0] * GlobalVariables.Cpro + GlobalVariables.param[1])));
}
public static void GetRawDataXY(string filename)
{
/*Read in Raw data From tab delim txt*/
string[] elements = { "x", "y" };
int count = 0;
GlobalVariables.rawXYdata[0] = new double[1798];
GlobalVariables.rawXYdata[1] = new double[1798];
using (StreamReader sr = new StreamReader(filename))
{
while (sr.Peek() >= 0)
{
elements = sr.ReadLine().Split('\t');
GlobalVariables.rawXYdata[0][count] = Convert.ToDouble(elements[0]);
GlobalVariables.rawXYdata[1][count] = Convert.ToDouble(elements[1]);
count++;
}
}
}
public class GlobalVariables
{
public static double[] param = new double[] { 1, .02, 0.13 }; ////ka,kd,Bmax these are initial guesses for the algorithm
public static double[][] rawXYdata = new double[2][];
public static double Cpro = 100E-9;
public static double kD = 0;
public static double breakpoint = 180;
}
}
}
According to Sergey Bochkanov, the issue is the following:
"You should use param[] array which is provided to you by optimizer. It creates its internal copy of your param, and updates this copy - not your param array.
From the optimizer point of view, it has function which never changes when it changes its internal copy of param. So, it terminates right after first iteration."
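In other words, the key change is in the residual callback (a simplified sketch of my own, not the full listing that follows): read the parameters from the param[] array that the optimizer passes in, rather than from the static array used to create the state.
public static void Calc_residuals(double[] param, double[] fi, object obj)
{
    //use the optimizer's own copy of the parameters...
    double ka = param[0];
    double kd = param[1];
    double rmax = param[2];
    //...not GlobalVariables.param, which the optimizer never updates;
    //then compute the residuals from ka, kd and rmax as in the full code below
}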
Here is the updated and working example code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Threading.Tasks;
namespace FBkineticsFitter
{
class Program
{
public static int Main(string[] args)
{
/*
* This code finds the parameters ka, kd, and Bmax from the minimization of the residuals using "V" mode of the Levenberg-Marquardt optimizer (alglib library).
* This optimizer is used because the equation is non-linear and this particular version of the optimizer does not require the ab initio calculation of partial
* derivatives, a Jacobian matrix, or other parameter-space definitions, so its implementation is simple.
*
* The equations being solved represent a model of a protein-protein interaction where protein in solution is interacting with immobilized protein on a sensor
* in a 1:1 stoichiometry. Mass transport limit is not taken into account. The details of this equation are described in:
* R.B.M. Schasfoort and Anna J. Tudos Handbook of Surface Plasmon Resonance, 2008, Chapter 5, ISBN: 978-0-85404-267-8
*
* Y=((Cpro*Rmax)/(Cpro+kd))*(1-exp(-1*X*(ka*Cpro+kd))) ; this equation describes the association phase
*
* Y=Req*exp(-1*X*kd)+NS ; this equation describes the dissociation phase
*
* According to ForteBio's Application Notes #14 the amplitudes of the data can be correctly accounted for by modifying the above equations as follows:
*
* Y=(Rmax*(1/(1+(kd/(ka*Cpro))))*(1-exp(((-1*Cpro)+kd)*X)) ; this equation describes the association phase
*
* Y=Y0*(exp(-1*kd*(X-X0))) ; this equation describes the dissociation phase
*
*
*
* The data are fit simultaneously such that all fitting parameters are linked during optimization for the most robust fit
*
* Y= signal
* X= time
* ka= association constant [fitting parameter 0]
* kd= dissociation constant [fitting parameter 1]
* Rmax= maximum binding capacity at equilibrium [fitting parameter 2]
* KD=kd/ka
* kobs=ka*Cpro+kd
* Req=(Cpro/(Cpro+KD))*Rmax
* Cpro= concentration of protein in solution
* NS= non-specific binding at time=infinity (constant correction for end point of fit) [this is taken into account in the amplitude corrected formula: Y0=Ylast]
* Y0= the initial value of Y for the first point of the dissociation curve (I.E. the last point of the association phase)
* X0= the initial value of X for the first point of the dissociation phase
*
*/
GetRawDataXY(@"C:\Results.txt");
double epsg = .00001;
double epsf = 0;
double epsx = 0;
int maxits = 10000;
alglib.minlmstate state;
alglib.minlmreport rep;
double[] param = new double[] { 1000000, .0100, 0.20 }; ////ka, kd, Rmax: these are initial guesses for the algorithm and should be mid-range for the expected data. The last parameter, Rmax, should be guessed as the maximum Y-value of the ka phase
double[] scaling= new double[] { 1E6,1,1};
alglib.minlmcreatev(2, param, 0.001, out state);
alglib.minlmsetcond(state, epsg, epsf, epsx, maxits);
alglib.minlmsetgradientcheck(state, 1);
alglib.minlmsetscale(state, scaling);
alglib.minlmoptimize(state, Calc_residuals, null, V.rawXYdata);
alglib.minlmresults(state, out param, out rep);
System.Console.WriteLine("{0}", rep.terminationtype); ////1=relative function improvement is no more than EpsF. 2=relative step is no more than EpsX. 4=gradient norm is no more than EpsG. 5=MaxIts steps was taken. 7=stopping conditions are too stringent,further improvement is impossible, we return best X found so far. 8= terminated by user
System.Console.WriteLine("{0}", alglib.ap.format(param, 25));
System.Console.ReadLine();
return 0;
}
public static void Calc_residuals(double[] param, double[] fi, object obj)
{
/*calculate the difference of the model and the raw data at each X (I.E. residuals)
* the sum of the square of the residuals is returned to the optimized function to be minimized*/
CalcVariables(param);
fi[0] = 0;
fi[1] = 0;
for (int i = 0; i < V.rawXYdata[0].Count(); i++)
{
if (V.rawXYdata[0][i] <= V.breakpoint)
{
fi[0] += System.Math.Pow((kaEQN(V.rawXYdata[0][i], param) - V.rawXYdata[1][i]), 2);
}
else
{
if (!V.breakpointreached)
{
V.breakpointreached = true;
V.X_0 = V.rawXYdata[0][i];
V.Y_0 = V.rawXYdata[1][i];
}
fi[1] += System.Math.Pow((kdEQN(V.rawXYdata[0][i], param) - V.rawXYdata[1][i]), 2);
}
}
if (param[0] <= 0 || param[1] <=0 || param[2] <= 0)////Exponentiates the error if the parameters go negative to favor positive non-zero values
{
fi[0] = Math.Pow(fi[0], 2);
fi[1] = Math.Pow(fi[1], 2);
}
System.Console.WriteLine("{0}"+" "+V.Cpro+" -->"+fi[0], alglib.ap.format(param, 5));
Console.WriteLine((kdEQN(V.rawXYdata[0][114], param)));
}
public static double kdEQN(double X, double[] param)
{
/*Calculate kd Y value based on the incremented parameters*/
return (V.Rmax * (1 / (1 + (V.kd / (V.ka * V.Cpro)))) * (1 - Math.Exp((-1 * V.ka * V.Cpro) * V.X_0))) * Math.Exp((-1 * V.kd) * (X - V.X_0));
}
public static double kaEQN(double X, double[] param)
{
/*Calculate ka Y value based on the incremented parameters*/
return ((V.Cpro * V.Rmax) / (V.Cpro + V.kd)) * (1 - Math.Exp(-1 * X * ((V.ka * V.Cpro) + V.kd)));
}
public static void GetRawDataXY(string filename)
{
/*Read in Raw data From tab delim txt*/
string[] elements = { "x", "y" };
int count = 0;
V.rawXYdata[0] = new double[226];
V.rawXYdata[1] = new double[226];
using (StreamReader sr = new StreamReader(filename))
{
while (sr.Peek() >= 0)
{
elements = sr.ReadLine().Split('\t');
V.rawXYdata[0][count] = Convert.ToDouble(elements[0]);
V.rawXYdata[1][count] = Convert.ToDouble(elements[1]);
count++;
}
}
}
public class V
{
/*Global Variables*/
public static double[][] rawXYdata = new double[2][];
public static double Cpro = 100E-9;
public static bool breakpointreached = false;
public static double X_0 = 0;
public static double Y_0 = 0;
public static double ka = 0;
public static double kd = 0;
public static double Rmax = 0;
public static double KD = 0;
public static double Kobs = 0;
public static double Req = 0;
public static double breakpoint = 180;
}
public static void CalcVariables(double[] param)
{
V.ka = param[0];
V.kd = param[1];
V.Rmax = param[2];
V.KD = param[1] / param[0];
V.Kobs = param[0] * V.Cpro + param[1];
V.Req = (V.Cpro / (V.Cpro + param[0] * V.Cpro + param[1])) * param[2];
}
}
}

Code sample that shows casting to uint is more efficient than range check

So I am looking at this question, and the general consensus is that the uint-cast version is more efficient than a range check against 0. Since the code is also in Microsoft's implementation of List<T>, I assume it is a real optimization. However, I have failed to produce a code sample that results in better performance for the uint version. I have tried different tests, and either something is missing or some other part of my code is dwarfing the time for the checks. My last attempt looks like this:
class TestType
{
public TestType(int size)
{
MaxSize = size;
Random rand = new Random(100);
for (int i = 0; i < MaxIterations; i++)
{
indexes[i] = rand.Next(0, MaxSize);
}
}
public const int MaxIterations = 10000000;
private int MaxSize;
private int[] indexes = new int[MaxIterations];
public void Test()
{
var timer = new Stopwatch();
int inRange = 0;
int outOfRange = 0;
timer.Start();
for (int i = 0; i < MaxIterations; i++)
{
int x = indexes[i];
if (x < 0 || x > MaxSize)
{
throw new Exception();
}
inRange += indexes[x];
}
timer.Stop();
Console.WriteLine("Comparison 1: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");
inRange = 0;
outOfRange = 0;
timer.Reset();
timer.Start();
for (int i = 0; i < MaxIterations; i++)
{
int x = indexes[i];
if ((uint)x > (uint)MaxSize)
{
throw new Exception();
}
inRange += indexes[x];
}
timer.Stop();
Console.WriteLine("Comparison 2: " + inRange + "/" + outOfRange + ", elapsed: " + timer.ElapsedMilliseconds + "ms");
}
}
class Program
{
static void Main()
{
TestType t = new TestType(TestType.MaxIterations);
t.Test();
TestType t2 = new TestType(TestType.MaxIterations);
t2.Test();
TestType t3 = new TestType(TestType.MaxIterations);
t3.Test();
}
}
The code is a bit of a mess because I tried many things to make the uint check perform faster, like moving the compared variable into a field of a class, generating random index access, and so on, but in every case the result seems to be the same for both versions. So is this change applicable on modern x86 processors, and can someone demonstrate it somehow?
Note that I am not asking for someone to fix my sample or explain what is wrong with it. I just want to see the case where the optimization does work.
if (x < 0 || x > MaxSize)
The comparison is performed by the CMP processor instruction (Compare). You'll want to take a look at Agner Fog's instruction tables document (PDF); it lists the cost of instructions. Find your processor in the list, then locate the CMP instruction.
For mine, Haswell, CMP takes 1 cycle of latency and 0.25 cycles of throughput.
A fractional cost like that needs an explanation: Haswell has 4 integer execution units that can execute instructions at the same time. When a program contains enough integer operations, like CMP, without interdependencies, they can all execute at the same time, in effect making the program 4 times faster. You don't always manage to keep all 4 of them busy with your code; that is actually pretty rare. But you do keep 2 of them busy in this case. In other words, two comparisons take just as long as a single one: 1 cycle.
There are other factors at play that make the execution times identical. One thing that helps is that the processor can predict the branch very well; it can speculatively execute x > MaxSize in spite of the short-circuit evaluation, and it will in fact end up using the result since the branch is never taken.
And the true bottleneck in this code is the array indexing; accessing memory is one of the slowest things the processor can do. So the "fast" version of the code isn't faster, even though it provides more opportunity for the processor to execute instructions concurrently. It isn't much of an opportunity today anyway; a processor has more execution units than most code can keep busy, which is otherwise the feature that makes HyperThreading work. In both cases the processor bogs down at the same rate.
On my machine, I have to write code that occupies more than 4 engines to make it slower. Silly code like this:
if (x < 0 || x > MaxSize || x > 10000000 || x > 20000000 || x > 3000000) {
outOfRange++;
}
else {
inRange++;
}
Using 5 compares, I can now see a difference: 61 vs 47 msec. In other words, this is a way to count the number of integer engines in the processor. Hehe :)
So this is a micro-optimization that probably used to pay off a decade ago. It doesn't anymore. Scratch it off your list of things to worry about :)
I would suggest writing code which does not throw an exception when the index is out of range. Exceptions are incredibly expensive and can completely throw off your benchmark results.
The code below does a timed-average benchmark of 1,000 iterations with 1,000,000 operations each.
using System;
using System.Diagnostics;
namespace BenchTest
{
class Program
{
const int LoopCount = 1000000;
const int AverageCount = 1000;
static void Main(string[] args)
{
Console.WriteLine("Starting Benchmark");
RunTest();
Console.WriteLine("Finished Benchmark");
Console.Write("Press any key to exit...");
Console.ReadKey();
}
static void RunTest()
{
int cursorRow = Console.CursorTop; int cursorCol = Console.CursorLeft;
long totalTime1 = 0; long totalTime2 = 0;
long invalidOperationCount1 = 0; long invalidOperationCount2 = 0;
for (int i = 0; i < AverageCount; i++)
{
Console.SetCursorPosition(cursorCol, cursorRow);
Console.WriteLine("Running iteration: {0}/{1}", i + 1, AverageCount);
int[] indexArgs = RandomFill(LoopCount, int.MinValue, int.MaxValue);
int[] sizeArgs = RandomFill(LoopCount, 0, int.MaxValue);
totalTime1 += RunLoop(TestMethod1, indexArgs, sizeArgs, ref invalidOperationCount1);
totalTime2 += RunLoop(TestMethod2, indexArgs, sizeArgs, ref invalidOperationCount2);
}
PrintResult("Test 1", TimeSpan.FromTicks(totalTime1 / AverageCount), invalidOperationCount1);
PrintResult("Test 2", TimeSpan.FromTicks(totalTime2 / AverageCount), invalidOperationCount2);
}
static void PrintResult(string testName, TimeSpan averageTime, long invalidOperationCount)
{
Console.WriteLine(testName);
Console.WriteLine(" Average Time: {0}", averageTime);
Console.WriteLine(" Invalid Operations: {0} ({1})", invalidOperationCount, (invalidOperationCount / (double)(AverageCount * LoopCount)).ToString("P3"));
}
static long RunLoop(Func<int, int, int> testMethod, int[] indexArgs, int[] sizeArgs, ref long invalidOperationCount)
{
Stopwatch sw = new Stopwatch();
Console.Write("Running {0} sub-iterations", LoopCount);
sw.Start();
long startTickCount = sw.ElapsedTicks;
for (int i = 0; i < LoopCount; i++)
{
invalidOperationCount += testMethod(indexArgs[i], sizeArgs[i]);
}
sw.Stop();
long stopTickCount = sw.ElapsedTicks;
long elapsedTickCount = stopTickCount - startTickCount;
Console.WriteLine(" - Time Taken: {0}", new TimeSpan(elapsedTickCount));
return elapsedTickCount;
}
static int[] RandomFill(int size, int minValue, int maxValue)
{
int[] randomArray = new int[size];
Random rng = new Random();
for (int i = 0; i < size; i++)
{
randomArray[i] = rng.Next(minValue, maxValue);
}
return randomArray;
}
static int TestMethod1(int index, int size)
{
return (index < 0 || index >= size) ? 1 : 0;
}
static int TestMethod2(int index, int size)
{
return ((uint)(index) >= (uint)(size)) ? 1 : 0;
}
}
}
You aren't comparing like with like.
The code you were talking about not only saved one branch by using the optimisation, but also 4 bytes of CIL in a small method.
In a small method, 4 bytes can be the difference between being inlined and not being inlined.
And if the method calling that method is also written to be small, then that can mean two (or more) method calls are jitted as one piece of inline code.
And maybe some of it, because it is inline and available for analysis by the jitter, is then optimised further again.
The real difference is not between index < 0 || index >= _size and (uint)index >= (uint)_size, but between code that has repeated efforts to minimise the method body size and code that does not. Look for example at how another method is used to throw the exception if necessary, further shaving off a couple of bytes of CIL.
(And no, that's not to say that I think all methods should be written like that, but there certainly can be performance differences when one does).
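For illustration, here is a minimal sketch in that style (my own example, not the actual List<T> source; the class and helper names are made up): the uint cast folds the two range checks into one compare, and the throw is pushed into a separate helper so the indexer body stays small enough for the JIT to consider inlining it.
using System;
public class TinyList<T>
{
    private T[] _items = new T[4];
    private int _size;
    public T this[int index]
    {
        get
        {
            //one unsigned compare covers both index < 0 and index >= _size
            if ((uint)index >= (uint)_size)
                ThrowIndexOutOfRange(); //throwing lives in a helper to keep this body small
            return _items[index];
        }
    }
    private static void ThrowIndexOutOfRange()
    {
        throw new ArgumentOutOfRangeException("index");
    }
}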

Sorting Complex Numbers

I have a struct called "Complex" in my project (I built it using C#) and, as the name of the struct implies, it's a struct for complex numbers. That struct has a built-in method called "Modulus" so that I can calculate the modulus of a complex number. Things are quite easy up to now.
The thing is, I create an array out of this struct and I want to sort the array according to the modulus of the complex numbers it contains (greater to smaller). Is there a way to do that? (Any algorithm suggestions are welcome.)
Thank you!!
Complex[] complexArray = ...
Complex[] sortedArray = complexArray.OrderByDescending(c => c.Modulus()).ToArray();
First of all, you can improve performance by comparing the squared modulus instead of the modulus.
You don't need the square root: "sqrt(a * a + b * b) >= sqrt(c * c + d * d)" is equivalent to "a * a + b * b >= c * c + d * d".
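(This assumes a ModulusSquared method added to the Complex struct, something like the one-liner below; the full struct appears further down in this answer.)
public double ModulusSquared() { return R * R + I * I; } //no Math.Sqrt needed when only comparing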
Then, you can write a comparer to sort complex numbers.
public class ComplexModulusComparer :
IComparer<Complex>,
IComparer
{
public static readonly ComplexModulusComparer Default = new ComplexModulusComparer();
public int Compare(Complex a, Complex b)
{
return a.ModulusSquared().CompareTo(b.ModulusSquared());
}
int IComparer.Compare(object a, object b)
{
return ((Complex)a).ModulusSquared().CompareTo(((Complex)b).ModulusSquared());
}
}
You can also write the reverse comparer, since you want to sort from greater to smaller.
public class ComplexModulusReverseComparer :
IComparer<Complex>,
IComparer
{
public static readonly ComplexModulusReverseComparer Default = new ComplexModulusReverseComparer();
public int Compare(Complex a, Complex b)
{
return - a.ModulusSquared().CompareTo(b.ModulusSquared());
}
int IComparer.Compare(object a, object b)
{
return - ((Complex)a).ModulusSquared().CompareTo(((Complex)b).ModulusSquared());
}
}
To sort an array you can then write two nice extension methods...
public static void SortByModulus(this Complex[] array)
{
Array.Sort(array, ComplexModulusComparer.Default);
}
public static void SortReverseByModulus(this Complex[] array)
{
Array.Sort(array, ComplexModulusReverseComparer.Default);
}
Then in your code...
Complex[] myArray ...;
myArray.SortReverseByModulus();
You can also implement IComparable, if you wish, but from my point of view a more correct and formal approach is to use an IComparer.
public struct Complex :
IComparable<Complex>
{
public double R;
public double I;
public double Modulus() { return Math.Sqrt(R * R + I * I); }
public double ModulusSquared() { return R * R + I * I; }
public int CompareTo(Complex other)
{
return this.ModulusSquared().CompareTo(other.ModulusSquared());
}
}
And then you can write a ReverseComparer that can apply to every kind of comparer:
public class ReverseComparer<T> :
IComparer<T>
{
private IComparer<T> comparer;
public static readonly ReverseComparer<T> Default = new ReverseComparer<T>();
public ReverseComparer() :
this(Comparer<T>.Default)
{
}
public ReverseComparer(IComparer<T> comparer)
{
this.comparer = comparer;
}
public int Compare(T a, T b)
{
return - this.comparer.Compare(a, b);
}
}
Then when you need to sort....
Complex[] array ...;
Array.Sort(array, ReverseComparer<Complex>.Default);
or in case you have another IComparer...
Complex[] array ...;
Array.Sort(array, new ReverseComparer<Complex>(myothercomparer));
RE-EDIT:
OK, I performed some speed tests.
Compiled with C# 4.0, in release mode, launched with all instances of Visual Studio closed.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace TestComplex
{
class Program
{
public struct Complex
{
public double R;
public double I;
public double ModulusSquared()
{
return this.R * this.R + this.I * this.I;
}
}
public class ComplexComparer :
IComparer<Complex>
{
public static readonly ComplexComparer Default = new ComplexComparer();
public int Compare(Complex x, Complex y)
{
return x.ModulusSquared().CompareTo(y.ModulusSquared());
}
}
private static void RandomComplexArray(Complex[] myArray)
{
// We always use the same seed to avoid differences in quicksort.
Random r = new Random(2323);
for (int i = 0; i < myArray.Length; ++i)
{
myArray[i].R = r.NextDouble() * 10;
myArray[i].I = r.NextDouble() * 10;
}
}
static void Main(string[] args)
{
// We perform some first operation to ensure JIT compiled and optimized everything before running the real test.
Stopwatch sw = new Stopwatch();
Complex[] tmp = new Complex[2];
for (int repeat = 0; repeat < 10; ++repeat)
{
sw.Start();
tmp[0] = new Complex() { R = 10, I = 20 };
tmp[1] = new Complex() { R = 30, I = 50 };
ComplexComparer.Default.Compare(tmp[0], tmp[1]);
tmp.OrderByDescending(c => c.ModulusSquared()).ToArray();
sw.Stop();
}
int[] testSizes = new int[] { 5, 100, 1000, 100000, 250000, 1000000 };
for (int testSizeIdx = 0; testSizeIdx < testSizes.Length; ++testSizeIdx)
{
Console.WriteLine("For " + testSizes[testSizeIdx].ToString() + " input ...");
// We create our big array
Complex[] myArray = new Complex[testSizes[testSizeIdx]];
double bestTime = double.MaxValue;
// Now we execute repeatCount times our test.
const int repeatCount = 15;
for (int repeat = 0; repeat < repeatCount; ++repeat)
{
// We fill our array with random data
RandomComplexArray(myArray);
// Now we perform our sorting.
sw.Reset();
sw.Start();
Array.Sort(myArray, ComplexComparer.Default);
sw.Stop();
double elapsed = sw.Elapsed.TotalMilliseconds;
if (elapsed < bestTime)
bestTime = elapsed;
}
Console.WriteLine("Array.Sort best time is " + bestTime.ToString());
// Now we perform our test using linq
bestTime = double.MaxValue; // i forgot this before
for (int repeat = 0; repeat < repeatCount; ++repeat)
{
// We fill our array with random data
RandomComplexArray(myArray);
// Now we perform our sorting.
sw.Reset();
sw.Start();
myArray = myArray.OrderByDescending(c => c.ModulusSquared()).ToArray();
sw.Stop();
double elapsed = sw.Elapsed.TotalMilliseconds;
if (elapsed < bestTime)
bestTime = elapsed;
}
Console.WriteLine("linq best time is " + bestTime.ToString());
Console.WriteLine();
}
Console.WriteLine("Press enter to quit.");
Console.ReadLine();
}
}
}
And here the results:
For 5 input ...
Array.Sort best time is 0,0004
linq best time is 0,0018
For 100 input ...
Array.Sort best time is 0,0267
linq best time is 0,0298
For 1000 input ...
Array.Sort best time is 0,3568
linq best time is 0,4107
For 100000 input ...
Array.Sort best time is 57,3536
linq best time is 64,0196
For 250000 input ...
Array.Sort best time is 157,8832
linq best time is 194,3723
For 1000000 input ...
Array.Sort best time is 692,8211
linq best time is 1058,3259
Press enter to quit.
My machine is an Intel i5, 64-bit Windows 7.
Sorry! I made a small, stupid bug in the previous edit!
ARRAY.SORT OUTPERFORMS LINQ, yes, by a very small amount, but as suspected this amount grows with n, and seemingly in a not-so-linear way. It seems to me to be both code overhead and a memory problem (cache misses, object allocation, GC ... I don't know).
You can always use SortedList :) Assuming modulus is int:
var complexNumbers = new SortedList<int, Complex>();
complexNumbers.Add(number.Modulus(), number);
public struct Complex: IComparable<Complex>
{
//complex rectangular number: a + bi
public decimal A;
public decimal B;
//synonymous with absolute value, or in geometric terms, distance
public decimal Modulus() { ... }
//CompareTo() is the default comparison used by most built-in sorts;
//all we have to do here is pass through to Decimal's IComparable implementation
//via the results of the Modulus() methods
public int CompareTo(Complex other){ return this.Modulus().CompareTo(other.Modulus()); }
}
You can now use any sorting method you choose on any collection of Complex instances: Array.Sort(), List.Sort(), Enumerable.OrderBy() (it doesn't use your IComparable, but if Complex were a member of a containing class you could sort the containing class by the Complex members without having to go the extra level down to comparing moduli), etc.
You stated you wanted to sort in descending order; you may consider multiplying the results of the Modulus() comparison by -1 before returning it. However, I would caution against this as it may be confusing; you would have to use a method that normally gives you descending order to get the list in ascending order. Instead, most sorting methods allow you to specify either a sorting direction, or a custom comparison which can still make use of the IComparable implementation:
//This will use your Comparison, but reverse the sort order based on its result
myEnumerableOfComplex.OrderByDescending(c=>c);
//This explicitly negates your comparison; you can also use b.CompareTo(a)
//which is equivalent
myListOfComplex.Sort((a, b) => a.CompareTo(b) * -1);
//DataGridView objects use a SortDirection enumeration to control and report
//sort order
myGridViewOfComplex.Sort(myGridViewOfComplex.Columns["ComplexColumn"], ListSortDirection.Descending);

C# ref for speed

I fully understand the ref keyword in .NET.
Since it uses the same variable, would it increase speed to use ref instead of making a copy?
I find the bottleneck to be in password generation.
Here is my code:
protected internal string GetSecurePasswordString(string legalChars, int length)
{
Random myRandom = new Random();
string myString = "";
for (int i = 0; i < length; i++)
{
int charPos = myRandom.Next(0, legalChars.Length - 1);
myString = myString + legalChars[charPos].ToString();
}
return myString;
}
Is it better to put ref before legalChars?
Passing a string by value does not copy the string. It only copies the reference to the string. There's no performance benefit to passing the string by reference instead of by value.
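To illustrate (a small sketch of my own, not part of the original answer): in both methods below only a reference-sized value is passed, and the character data of the string is never copied.
static void TakesStringByValue(string s)
{
    //s is a copy of the reference; the string's characters are shared, not copied
}
static void TakesStringByRef(ref string s)
{
    //s is an alias of the caller's variable; assigning to s would change the caller's variable
}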
No, you shouldn't pass the string reference by reference.
However, you are creating several strings pointlessly. If you're creating long passwords, that could be why it's a bottleneck. Here's a faster implementation:
protected internal string GetSecurePasswordString(string legalChars, int length)
{
Random myRandom = new Random();
char[] chars = new char[length];
for (int i = 0; i < length; i++)
{
int charPos = myRandom.Next(0, legalChars.Length - 1);
chars[i] = legalChars[charPos];
}
return new string(chars);
}
However, it still has three big flaws:
It creates a new instance of Random each time. If you call this method twice in quick succession, you'll get the same password twice. Bad idea.
The upper bound specified in a Random.Next() call is exclusive - so you'll never use the last character of legalChars.
It uses System.Random, which is not meant to be in any way cryptographically secure. Given that this is meant to be for a "secure password" you should consider using something like System.Security.Cryptography.RandomNumberGenerator. It's more work to do so because the API is harder, but you'll end up with a more secure system (if you do it properly).
You might also want to consider using SecureString, if you get really paranoid.
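For example, a rough sketch (my own, not from the answer) that addresses all three flaws at once, using one shared generator, the full range of legalChars, and a cryptographic source (the modulo mapping introduces a slight bias, ignored here for brevity):
using System;
using System.Security.Cryptography;
static class SecurePasswordGenerator
{
    //a single shared generator, so repeated calls never reuse a time-based seed
    private static readonly RandomNumberGenerator Rng = RandomNumberGenerator.Create();
    public static string GetSecurePasswordString(string legalChars, int length)
    {
        byte[] buffer = new byte[4];
        char[] chars = new char[length];
        for (int i = 0; i < length; i++)
        {
            Rng.GetBytes(buffer);
            //map the random 32-bit value onto all of legalChars, including its last character
            uint value = BitConverter.ToUInt32(buffer, 0);
            chars[i] = legalChars[(int)(value % (uint)legalChars.Length)];
        }
        return new string(chars);
    }
}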
Strings in .NET are immutable, so all modify operations on strings result in the creation (and garbage collection) of new strings. No performance gain would be achieved by using ref in this case. Instead, use StringBuilder.
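A minimal sketch of the same loop with StringBuilder (my illustration, not the answerer's code; the Random concerns raised above still apply):
using System.Text; //and using System for Random
protected internal string GetSecurePasswordString(string legalChars, int length)
{
    Random myRandom = new Random();
    //StringBuilder appends in place instead of allocating a new string per character
    StringBuilder sb = new StringBuilder(length);
    for (int i = 0; i < length; i++)
    {
        sb.Append(legalChars[myRandom.Next(0, legalChars.Length)]);
    }
    return sb.ToString();
}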
A word about the general performance gain of passing a string ByReference ("ref") instead of ByValue:
There is a performance gain, but it is very small!
Consider the program below, where a function is called 10,000,000 times with a string argument, by value and by reference. The average times measured were
ByValue: 249 milliseconds
ByReference: 226 milliseconds
In general "ref" is a little faster, but often it's not worth worrying about.
Here is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
namespace StringPerformanceTest
{
class Program
{
static void Main(string[] args)
{
const int n = 10000000;
int k;
string time, s1;
Stopwatch sw;
// List for testing ("1", "2", "3" ...)
List<string> list = new List<string>(n);
for (int i = 0; i < n; i++)
list.Add(i.ToString());
// Test ByVal
k = 0;
sw = Stopwatch.StartNew();
foreach (string s in list)
{
s1 = s;
if (StringTestSubVal(s1)) k++;
}
time = GetElapsedString(sw);
Console.WriteLine("ByVal: " + time);
Console.WriteLine("123 found " + k + " times.");
// Test ByRef
k = 0;
sw = Stopwatch.StartNew();
foreach (string s in list)
{
s1 = s;
if (StringTestSubRef(ref s1)) k++;
}
time = GetElapsedString(sw);
Console.WriteLine("Time ByRef: " + time);
Console.WriteLine("123 found " + k + " times.");
}
static bool StringTestSubVal(string s)
{
if (s == "123")
return true;
else
return false;
}
static bool StringTestSubRef(ref string s)
{
if (s == "123")
return true;
else
return false;
}
static string GetElapsedString(Stopwatch sw)
{
if (sw.IsRunning) sw.Stop();
TimeSpan ts = sw.Elapsed;
return String.Format("{0:00}:{1:00}:{2:00}.{3:000}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds);
}
}
}

Plinq gives different results from Linq - what am I doing wrong?

Can anyone tell me what the correct PLINQ code is for this? I'm adding up the square root of the absolute value of the sine of each element of a double array, but the PLINQ version is giving me the wrong result.
Output from this program is:
Linq aggregate = 75.8310477905274 (correct)
Plinq aggregate = 38.0263653589291 (about half what it should be)
I must be doing something wrong, but I can't work out what...
(I'm running this with Visual Studio 2008 on a Core 2 Duo Windows 7 x64 PC.)
Here's the code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
namespace ConsoleApplication1
{
class Program
{
static void Main()
{
double[] array = new double[100];
for (int i = 0; i < array.Length; ++i)
{
array[i] = i;
}
double sum1 = array.Aggregate((total, current) => total + Math.Sqrt(Math.Abs(Math.Sin(current))));
Console.WriteLine("Linq aggregate = " + sum1);
IParallelEnumerable<double> parray = array.AsParallel<double>();
double sum2 = parray.Aggregate((total, current) => total + Math.Sqrt(Math.Abs(Math.Sin(current))));
Console.WriteLine("Plinq aggregate = " + sum2);
}
}
}
Aggregate works slightly differently in PLINQ.
From MSDN Blogs:
Rather than expecting a value to initialize the accumulator to, the user gives us a factory function that generates the value:
public static double Average(this IEnumerable<int> source)
{
return source.AsParallel().Aggregate(
() => new double[2],
(acc, elem) => { acc[0] += elem; acc[1]++; return acc; },
(acc1, acc2) => { acc1[0] += acc2[0]; acc1[1] += acc2[1]; return acc1; },
acc => acc[0] / acc[1]);
}
Now, PLINQ can initialize an independent accumulator for each thread. Now that each thread gets its own accumulator, both the folding function and the accumulator-combining function are free to mutate the accumulators. PLINQ guarantees that accumulators will not be accessed concurrently from multiple threads.
So, in your case, you would also need to pass an accumulator-combining function which sums the outputs of the parallel partial aggregates (which is why you're seeing a result that is roughly half of what it should be).
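Applied to this question, a minimal sketch of that overload (assuming the seed/fold/combine/result-selector overload of Aggregate that the asker uses in the fix below):
double sum = array.AsParallel().Aggregate(
    0.0,                                                     //seed for each partition
    (total, x) => total + Math.Sqrt(Math.Abs(Math.Sin(x))),  //fold within a partition
    (subtotal1, subtotal2) => subtotal1 + subtotal2,         //combine the per-thread totals
    total => total);                                         //final projection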
Thank you MSDN Blogs. It now seems to be working correctly. I changed my code as follows:
using System;
using System.Linq;
namespace ConsoleApplication1
{
class Program
{
static void Main()
{
Test();
}
static void Test()
{
double[] array = new double[100];
for (int i = 0; i < array.Length; ++i)
{
array[i] = i;
}
double sum1 = array.Aggregate((total, current) => total + Math.Sqrt(Math.Abs(Math.Sin(current))));
Console.WriteLine("Linq aggregate = " + sum1);
IParallelEnumerable<double> parray = array.AsParallel();
double sum2 = parray.Aggregate
(
0.0,
(total1, current1) => total1 + Math.Sqrt(Math.Abs(Math.Sin(current1))),
(total2, current2) => total2 + current2,
acc => acc
);
Console.WriteLine("Plinq aggregate = " + sum2);
}
}
}
