Optimization of number grinding - C#

Long story short, I have to solve 20..200 block-tridiagonal linear systems during an iterative process. The systems are 50..100 blocks in size, each block being 50..100 x 50..100. I will write down my thoughts on it here, and I ask you to share your opinion on them, as it is possible that I am mistaken in one regard or another.
To solve those systems, I use a matrix version of the Thomas algorithm. It is exactly like the scalar one, except that instead of scalar coefficients in the equations I have matrices (i.e. instead of "a_i x_{i-1} + b_i x_i + c_i x_{i+1} = f_i" I have "A_i X_{i-1} + B_i X_i + C_i X_{i+1} = F_i", where A_i, B_i, C_i are matrices, and F_i and X_i are vectors).
The asymptotic complexity of this algorithm is O(N*M^3), where N is the size of the overall matrix in blocks and M is the size of each block.
Right now my bottleneck is the inversion operation. Deep inside nested loops I have to calculate /a lot/ of inversions of the form "(c_i - a_i * alpha_i)^-1", where alpha_i is a dense MxM matrix. I do it with the Gauss-Jordan algorithm, using additional memory (which I will need later in the program anyway) and O(M^3) operations.
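To make the structure concrete, here is a minimal sketch of the full sweep (MatMul, MatSub, MatVec, VecSub and GaussJordanInverse are placeholders for the actual routines; A, B, C are the sub-, main- and super-diagonal block rows per the equation above):

// Forward sweep of the block Thomas algorithm (sketch).
// A[0] and C[N-1] are zero blocks; F holds the block right-hand sides.
double[][,] alpha = new double[N][,];
double[][] beta = new double[N][];
double[][] X = new double[N][];

double[,] inv = GaussJordanInverse(B[0]);
alpha[0] = MatMul(inv, C[0]);
beta[0] = MatVec(inv, F[0]);

for (int i = 1; i < N; i++)
{
    // The inversion that dominates the cost: (B_i - A_i * alpha_{i-1})^-1, O(M^3).
    double[,] denom = MatSub(B[i], MatMul(A[i], alpha[i - 1]));
    inv = GaussJordanInverse(denom);
    alpha[i] = MatMul(inv, C[i]);
    beta[i] = MatVec(inv, VecSub(F[i], MatVec(A[i], beta[i - 1])));
}

// Back substitution: X_i = beta_i - alpha_i * X_{i+1}.
X[N - 1] = beta[N - 1];
for (int i = N - 2; i >= 0; i--)
    X[i] = VecSub(beta[i], MatVec(alpha[i], X[i + 1]));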
Trying to find information on how to optimize the inversion, I have found only threads about solving AX=B systems 'canonically', i.e. X = A^-1 B, with suggestions to use LU factorization instead. Sadly, as my inversion is part of the Thomas algorithm, if I resort to LU factorization I will have to apply it to an M*N x M*N matrix, which will raise the complexity of solving the linear system by an extra N^2, to O(N^3*M^3). That is a slowdown by a factor of 2500..10000, which is quite bad.
Approximate or iterative inversions are out of scope too, as the slightest residual relative to the exact inverse will accumulate very fast and cause the global iterative process to explode.
I do the calculations in parallel with Parallel.For(), solving each of the 20..200 systems separately.
Right now, solving 20 such systems with N,M = 50 takes 872 ms on average (i7-3630QM, 2.4 GHz, 4 cores, 8 logical with Hyper-Threading).
And finally, here come the questions.
Am I correct in what I wrote here? Is there an algorithm that would significantly speed up the calculations over what they are now?
In the number-grinding part of my program I use only for loops (most of them with constant bounds, the exception being one of the loops inside the inversion algorithm), double arithmetic (+, -, *, /) and standard arrays ([], [,], [,,]). Will there be any speed-up if I rewrite this part as unsafe code? Or as a library in C?
How much overhead does C# have on such tasks (grinding through double arrays)? Are C compilers better at optimizing such simple code than the C# compiler and JIT?
What should I look at when optimizing a number grinder in C#? Is the language suited for such a task at all?

Related

Dijkstra algorithm expanded with extra limit variable

I am having trouble implementing this into my current pathfinding algorithm.
Currently I have Dijkstra's algorithm written and working like it should, but I need to take it a step further and add a limit (range). I can explain better with an image:
Let's say I have a range of 80 and want to go from A to E. My current algorithm works as it should, so it results in A->B->E.
However, I need to travel only on edges whose weight is not more than the range, 80, which would mean that A->B->E is no longer an option, and A->C->D->B->E is taken instead (considering that the range/limit resets at every stop).
So far, I have implemented a bool named Possible which returns, for a single leg of the path (e.g. A->B), whether it is possible given my limit/range.
My main problem is that I do not know where/how to start. My only idea was to see where Possible is false (A->B on the total route A->B->E) and run the algorithm from A to E again, excluding the B stop/vertex.
Is this a good approach? Because of it, as far as I understand, my big-O complexity would double.
I see two ways of doing this:
Create a new graph G' that contains only the edges with weight < 80, and look for the shortest path there. The reduction takes O(V+E) time and O(V+E) additional memory.
Modify Dijkstra's algorithm to ignore edges with weight > 80: simply skip them when assigning values to neighbor vertices. The complexity and memory usage stay the same in this case.
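A minimal sketch of that modification, assuming an adjacency list of (target, weight) tuples and .NET 6's PriorityQueue (names are illustrative):

using System;
using System.Collections.Generic;

// Dijkstra that ignores edges heavier than a given range limit.
// graph[u] is the adjacency list of u; returns distances from source.
static int[] DijkstraWithLimit(List<(int To, int Weight)>[] graph, int source, int limit)
{
    int n = graph.Length;
    var dist = new int[n];
    Array.Fill(dist, int.MaxValue);
    dist[source] = 0;

    var pq = new PriorityQueue<int, int>();   // .NET 6+
    pq.Enqueue(source, 0);

    while (pq.TryDequeue(out int u, out int d))
    {
        if (d > dist[u]) continue;            // stale queue entry, skip it
        foreach (var (v, w) in graph[u])
        {
            if (w > limit) continue;          // the only change: ignore heavy edges
            if (dist[u] + w < dist[v])
            {
                dist[v] = dist[u] + w;
                pq.Enqueue(v, dist[v]);
            }
        }
    }
    return dist;
}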
Create a temporary version of your graph, and set all weights above the threshold to infinity. Then run the ordinary Dijkstra algorithm on it.
Complexity will increase or not, depending on your version of the algorithm:
if you have O(V^2) then it will increase to O(E + V^2)
if you have the O(ElogV) version then it will increase to O(E + ElogV)
if you have the O(E + VlogV) version it will remain the same
As noted by ArsenMkrt, you can just as well remove these edges, which makes even more sense but makes the complexity a bit worse. Modifying the algorithm to simply skip those edges, as he suggested in his answer and as sketched above, seems to be the best option though.

Making C# mandelbrot drawing more efficient

First of all, I am aware that this question really sounds as if I didn't search, but I did, a lot.
I wrote a small Mandelbrot drawing code for C#, it's basically a windows form with a PictureBox on which I draw the Mandelbrot set.
My problem is that it's pretty slow. Without a deep zoom it does a pretty good job, and moving around and zooming are pretty smooth, taking less than a second per drawing; but once I start to zoom in a little and get to places which require more calculations, it becomes really slow.
My computer does really fine in other Mandelbrot applications at places which are much slower in my application, so I'm guessing there is much I can do to improve the speed.
I did the following things to optimize it:
Instead of using the SetPixel and GetPixel methods on the bitmap object, I use the LockBits method to write directly to memory, which made things a lot faster.
Instead of using complex number objects (classes I made myself, not the built-in ones), I emulate complex numbers using two variables, re and im. This allows me to cut down on multiplications, because squaring the real part and the imaginary part is done a few times during the calculation, so I just save the squares in variables and reuse the results without recalculating them.
I use 4 threads to draw the Mandelbrot set; each thread does a different quarter of the image and they all work simultaneously. As I understand it, this means my CPU uses 4 of its cores to draw the image.
I use the escape time algorithm which, as I understand, is the fastest?
Here is how I move between the pixels and calculate; it's commented, so I hope it's understandable:
//Pixel by pixel loop:
for (int r = rRes; r < wTo; r++)
{
    for (int i = iRes; i < hTo; i++)
    {
        //These calculations determine which complex number corresponds to the (r,i) pixel.
        double re = (r - (w / 2)) * step + zeroX;
        double im = (i - (h / 2)) * step - zeroY;

        //Create the Z complex number.
        double zRe = 0;
        double zIm = 0;

        //Variables to store the squares of the real and imaginary parts.
        double multZre = 0;
        double multZim = 0;

        //Start iterating with the complex number to determine its escape time (mandelValue).
        int mandelValue = 0;
        while (multZre + multZim < 4 && mandelValue < iters)
        {
            /* The new real part equals re(z)^2 - im(z)^2 + re(c); we store it in a temp
             * variable tempRe because we still need re(z) in the next calculation.
             */
            double tempRe = multZre - multZim + re;

            /* The new imaginary part is equal to 2*re(z)*im(z) + im(c).
             * Instead of multiplying by 2, I add re(z) to itself and then multiply by im(z),
             * which means I do 1 multiplication instead of 2.
             */
            zRe += zRe;
            zIm = zRe * zIm + im;
            zRe = tempRe; // We can now put the temp value in its place.

            //Do the squaring now; the squares will be used in the next calculation.
            multZre = zRe * zRe;
            multZim = zIm * zIm;

            //Increase the mandelValue by one, because the iteration is now finished.
            mandelValue += 1;
        }

        //After the mandelValue is found, this colors its pixel accordingly (unsafe code,
        //accesses memory directly). (Unimportant for my question; I doubt the problem is
        //here, because my code becomes really slow as the number of ITERATIONS grows,
        //while this only executes more as the number of PIXELS grows.)
        byte* pos = px + (i * str) + (pixelSize * r);
        byte col = (byte)((1 - ((double)mandelValue / iters)) * 255);
        pos[0] = col;
        pos[1] = col;
        pos[2] = col;
    }
}
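For context, the px pointer, str stride and pixelSize used above come from a LockBits call along these lines (a sketch; the 24bpp pixel format here is an assumption):

// using System.Drawing; using System.Drawing.Imaging;
BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                               ImageLockMode.WriteOnly,
                               PixelFormat.Format24bppRgb);
byte* px = (byte*)data.Scan0;   // base address of the locked pixel buffer
int str = data.Stride;          // bytes per scanline (may include padding)
int pixelSize = 3;              // 3 bytes per pixel at 24bpp

// ... the pixel loop above runs here, inside an unsafe block ...

bmp.UnlockBits(data);           // release the buffer when done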
What can I do to improve this? Do you find any obvious optimization problems in my code?
Right now there are 2 ways I know I can improve it:
I need to use a different type for numbers: double is limited in accuracy, and I'm sure there are better non-built-in alternative types which are faster (they multiply and add faster) and have more accuracy. I just need someone to point me to where I should look and tell me whether this is true.
I can move the processing to the GPU. I have no idea how to do this (OpenGL maybe? DirectX? Is it even that simple, or will I need to learn a lot of stuff?). If someone could send me links to proper tutorials on this subject, or tell me about it in general, that would be great.
Thanks a lot for reading that far and hope you can help me :)
If you decide to move the processing to the GPU, you can choose from a number of options. Since you are using C#, XNA will allow you to use HLSL. RB Whitaker has the easiest XNA tutorials if you choose this option. Another option is OpenCL. OpenTK comes with a demo program of a Julia set fractal; it would be very simple to modify it to display the Mandelbrot set. See here
Just remember to find the GLSL shader that goes with the source code.
About the GPU, examples are no help for me because I have absolutely no idea about this topic - how does it even work, and what kind of calculations can the GPU do (or how is it even accessed)?
Different GPU software works differently, however ...
Typically a programmer will write a program for the GPU in a shader language such as HLSL, GLSL or OpenCL. The program written in C# will load the shader code and compile it, and then use API functions to send a job to the GPU and get the result back afterwards.
Take a look at FX Composer or RenderMonkey if you want some practice with shaders without having to worry about APIs.
If you are using HLSL, the rendering pipeline looks like this.
The vertex shader is responsible for taking points in 3D space and calculating their position in your 2D viewing field. (Not a big concern for you since you are working in 2D)
The pixel shader is responsible for applying shader effects to the pixels after the vertex shader is done.
OpenCL is a different story; it's geared towards general-purpose GPU computing (i.e. not just graphics). It's more powerful and can be used for GPUs, DSPs, and building supercomputers.
As for coding for the GPU, you can look at Cudafy.Net (it does OpenCL too, which is not tied to NVIDIA) to start getting an understanding of what's going on, and perhaps even do everything you need there. I quickly found it - and my graphics card - unsuitable for my needs, but for the Mandelbrot at the stage you're at, it should be fine.
In brief: you code for the GPU in a flavour of C (CUDA C or OpenCL, normally), then push the "kernel" (your compiled C method) to the GPU, followed by any source data, and then invoke that "kernel", often with parameters to say what data to use - or perhaps a few parameters to tell it where to place the results in its memory.
When I've been doing fractal rendering myself, I've avoided drawing to a bitmap for the reasons already outlined, and deferred the render phase. Besides that, I tend to write massively multithreaded code, which is really bad for trying to access a bitmap. Instead, I write to a common store - most recently I've used a MemoryMappedFile (a built-in .NET class), since it gives me pretty decent random-access speed and a huge addressable area. I also tend to write my results to a queue and have another thread deal with committing the data to storage; the compute times of each Mandelbrot pixel will be "ragged" - that is to say, they will not always take the same length of time. As a result, your pixel commit could be the bottleneck for very low iteration counts. Farming it out to another thread means your compute threads are never waiting for storage to complete.
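A minimal sketch of that queue hand-off, assuming .NET's BlockingCollection (PixelResult, CommitToStore and RenderPipeline are illustrative names, not from the post):

using System.Collections.Concurrent;
using System.Threading.Tasks;

struct PixelResult { public int X, Y, Iterations; }

class RenderPipeline
{
    // Bounded so compute threads back off if the writer falls far behind.
    readonly BlockingCollection<PixelResult> queue =
        new BlockingCollection<PixelResult>(boundedCapacity: 1 << 16);

    public Task StartWriter() => Task.Run(() =>
    {
        // Single consumer commits results, so compute threads never touch storage.
        foreach (var p in queue.GetConsumingEnumerable())
            CommitToStore(p);           // e.g. write into the MemoryMappedFile view
    });

    public void Report(PixelResult p) => queue.Add(p);  // called by compute threads
    public void Finish() => queue.CompleteAdding();     // then Wait() on the writer task

    void CommitToStore(PixelResult p) { /* storage-specific */ }
}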
I'm currently playing with the Buddhabrot visualisation of the Mandelbrot set, looking at using a GPU to scale out the rendering (since it's taking a very long time on the CPU) and dealing with a huge result set. I was thinking of targeting an 8-gigapixel image, but I've come to the realisation that I need to diverge from the constraints of pixels, and possibly away from floating-point arithmetic, due to precision issues. I'm also going to have to buy some new hardware so I can interact with the GPU differently - different compute jobs will finish at different times (as per my iteration-count comment earlier), so I can't just fire off batches of threads and wait for them all to complete without potentially wasting a lot of time waiting for one particularly high iteration count out of the whole batch.
Another point that I hardly ever see made about the Mandelbrot set is that it is symmetrical about the real axis: the conjugate of a point has the same escape time, so you might be doing twice as much calculating as you need to.
For moving the processing to the GPU, you have lots of excellent examples here:
https://www.shadertoy.com/results?query=mandelbrot
Note that you need a WebGL-capable browser to view that link. It works best in Chrome.
I'm no expert on fractals, but you seem to have come far already with the optimizations. Going beyond that may make the code much harder to read and maintain, so you should ask yourself whether it is worth it.
One technique I've often observed in other fractal programs is this: while zooming, calculate the fractal at a lower resolution and stretch it to full size during render. Then render at full resolution as soon as the zooming stops.
Another suggestion: when you use multiple threads, you should take care that the threads don't read/write memory belonging to other threads, because this will cause cache collisions and hurt performance. A good algorithm is to split the work up into scanlines (instead of four quarters, as you do now): create a number of threads, then, as long as there are lines left to process, assign a scanline to any thread that is available. Let each thread write the pixel data to a local piece of memory and copy it back to the main bitmap after each line (to avoid cache collisions); a sketch follows.
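A sketch of that scheme, assuming Interlocked for handing out line numbers (RenderLine, CopyLineToBitmap and the width/height/pixelSize variables are placeholders):

using System;
using System.Threading;
using System.Threading.Tasks;

// Each worker grabs the next unprocessed scanline, renders it into a
// thread-local buffer, then copies the finished line back under a lock.
int nextLine = -1;
object sync = new object();

Parallel.For(0, Environment.ProcessorCount, _ =>
{
    var lineBuffer = new byte[width * pixelSize];   // thread-local scanline
    int y;
    while ((y = Interlocked.Increment(ref nextLine)) < height)
    {
        RenderLine(y, lineBuffer);                  // escape-time loop for one row
        lock (sync)
            CopyLineToBitmap(y, lineBuffer);        // single short critical section
    }
});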

Multithreading and multiprocessing from imported MATLAB files

This is probably a long shot, but I asked a question earlier about converting one of the statistics toolbox codes into C#, realising that it was just a huge and lengthy process with not much available to automate it (which is really what I wanted, as the references I provided explained why it was so hard to do by hand; the comments I got were "why don't you try converting it and ask questions where you are stuck", so obviously my question wasn't understood!).
The reason I was looking to do this is the long processing time MATLAB needs to complete what I'm working on (k-means and Bayes classifiers on large data sets). So I thought, hey, why not just convert the code into C# and try my hand at multithreading and multiprocessing; this might provide a functional means to decrease the processing time. But obviously it's extremely hard to convert all of MATLAB's functions to C# by hand to accommodate this.
So my question is: if I import MATLAB files into C#, is it possible to have them used/run in a multithreaded and multiprocess fashion, or will the imported files just run like they do in MATLAB?
The reason (I think) it runs slowly in MATLAB is that only some of the functions in the statistics toolbox benefit from multithreading, specifically:
MATHEMATICS
Arrays and matrices
• Basic information: ISFINITE, ISINF, ISNAN, MAX, MIN
• Operators: +, -, .*, ./, .\, .^, *, ^, \ (MLDIVIDE), / (MRDIVIDE)
• Array operations: PROD, SUM
• Array manipulation: BSXFUN, SORT
Linear algebra
• Matrix Analysis: DET, RCOND
• Linear Equations: CHOL, INV, LINSOLVE, LU, QR
• Eigenvalues and singular values: EIG, HESS, SCHUR, SVD, QZ
Elementary math
• Trigonometric: ACOS, ACOSD, ACOSH, ASIN, ASIND, ASINH, ATAN, ATAND, ATANH, COS, COSD, COSH, HYPOT, SIN, SIND, SINH, TAN, TAND, TANH
• Exponential: EXP, POW2, SQRT
• Complex: ABS
• Rounding and remainder: CEIL, FIX, FLOOR, MOD, REM, ROUND
Special Functions
• ERF, ERFC, ERFCINV, ERFCX, ERFINV, GAMMA, GAMMALN
DATA ANALYSIS
• CONV2, FILTER, FFT and IFFT of multiple columns or long vectors, FFTN, IFFTN
So I'm not too sure how, or in what way, I could potentially decrease the processing time; the k-means and Bayes classifier, when processing near tens of thousands of records, really are just unbearable in their processing time (understandably).
This is not something you will be able to do easily. In fact I would say it is not possible.
If you attempt it, you have the following issues to deal with:
Find a (semi-)automated way to convert MATLAB functionality into C#.
This does not exist, to my knowledge.
Alter the resulting code to be multithreading-enabled.
Modifying a mathematical algorithm to support multiple threads is very difficult, and sometimes even impossible, due to the data structures used.
Also keep in mind that some mathematical problems do not scale with the number of processors, so you might not even get the benefit you expect.

C# LP/Lagrange with Bounded Variables

Summary: How would I go about solving this problem?
Hi there,
I'm working on a mixture-style maximization problem where my variables are going to be bounded by minima and maxima. A representative example of my problem might be:
maximize: (2x-3y+4z)/(x^2+y^2+z^2+3x+4y+5z+10)
subj. to: x+y+z=1
1 < x < 2
-2 < y < 3
5 < z < 8
where numerical coefficients and the minima/maxima are given.
My final project involves a more complicated problem similar to the one above. The structure of the problems won't change - only the coefficients and inputs will change. So with the example above, I would be looking for a set of functions that might allow a C# program to quickly determine x, then y, then z, like:
x = f(given inputs)
y = f(given inputs,x)
z = f(given inputs,x,y)
Would love to hear your thoughts on this one!
Thanks!
The standard optimization approach for your type of problem, non-linear minimization, is the Levenberg-Marquardt algorithm:
Levenberg–Marquardt algorithm
but unfortunately it does not directly support the linear constraints you have added. Many different approaches have been tried to add linear constraints to Levenberg-Marquardt with varying success.
Another algorithm I can recommend in this situation is the downhill simplex algorithm:
Nelder–Mead method
Like Levenberg-Marquardt, it works with non-linear equations, but it also handles linear constraints, which act like discontinuities. This could work well for your case above.
In either case, this is not so much a programming problem as an algorithm selection problem. The literature is rife with algorithms and you can find C# implementations of either of the above with a little searching.
You can also combine algorithms. For example, you can do a preliminary search with the simplex method with the constraints, and then refine it with Levenberg-Marquardt without the constraints.
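As a sketch of how the example above could be fed to such a solver, the objective can be negated (these methods minimize) and wrapped with stiff penalties for the bounds and the equality constraint; the resulting Objective is then handed to whatever Nelder-Mead implementation you pick (the penalty weights here are illustrative):

using System;

// The example objective, negated, plus quadratic penalties for the bounds
// and the x + y + z = 1 constraint.
static double Objective(double[] v)
{
    double x = v[0], y = v[1], z = v[2];
    double f = (2 * x - 3 * y + 4 * z) /
               (x * x + y * y + z * z + 3 * x + 4 * y + 5 * z + 10);

    double penalty = Bound(x, 1, 2) + Bound(y, -2, 3) + Bound(z, 5, 8)
                   + 1e6 * Math.Pow(x + y + z - 1, 2);

    return -f + penalty;    // minimizing this maximizes f subject to the penalties
}

static double Bound(double v, double lo, double hi) =>
    v < lo ? 1e6 * (lo - v) * (lo - v) :
    v > hi ? 1e6 * (v - hi) * (v - hi) : 0.0;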
If your problem is that you want to solve linear programming problems efficiently, you can use Cassowary.net or NSolver.
If your problem is implementing a linear programming algorithm efficiently, you may want to read Combinatorial Optimization: Algorithms and Complexity which covers the Simplex algorithm in most of the detail provided in the short text An Illustrated Guide to Linear Programming but also includes information on the Ellipsoid algorithm, which can be more efficient for more complex constraint systems.
There's nothing inherently C#-specific about your question, but tagging it with that implies you're looking for a solution in C#; accordingly, reviewing the source code to the two toolkits above may serve you well.

HLSL Computation - process pixels in order?

Imagine I want to, say, compute the first one million terms of the Fibonacci sequence using the GPU. (I realize this will exceed the precision limit of a 32-bit data type; it's just used as an example.)
Given a GPU with 40 shaders/stream processors, and cheating by using a reference book, I can break up the million terms into 40 blocks of 25,000 terms each, and seed each shader with its two start values:
unit 0: 1,1 (which then calculates 2,3,5,8,blah blah blah)
unit 1: the 25,000th and 25,001st terms
unit 2: the 50,000th and 50,001st terms
...
How, if possible, could I go about ensuring that pixels are processed in order? If the first few pixels in the input texture have values (in RGBA, for simplicity):
0,0,0,1 // initial condition
0,0,0,1 // initial condition
0,0,0,2
0,0,0,3
0,0,0,5
...
How can I ensure that I don't try to calculate the 5th term before the first four are ready?
I realize this could be done in multiple passes by setting a "ready" bit whenever a value is calculated, but that seems incredibly inefficient and sort of eliminates the benefit of performing this type of calculation on the GPU.
OpenCL/CUDA/etc probably provide nice ways to do this, but I'm trying (for my own edification) to get this to work with XNA/HLSL.
Links or examples are appreciated.
Update/Simplification
Is it possible to write a shader that uses values from one pixel to influence the values from a neighboring pixel?
You cannot determine the order in which the pixels are processed. If you could, that would break the massive pixel throughput of the shader pipelines. What you can do is calculate the Fibonacci sequence using the non-recursive, closed-form formula.
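For reference, the closed-form (Binet) expression makes every term independent of its neighbours, so it parallelizes trivially; a C# sketch (note that double precision is only exact up to roughly the 70th term, so the million-term example would need a wider number type):

using System;

// Closed-form Fibonacci: F(n) = round(phi^n / sqrt(5)), phi = (1 + sqrt(5)) / 2.
static long Fibonacci(int n)
{
    double sqrt5 = Math.Sqrt(5.0);
    double phi = (1.0 + sqrt5) / 2.0;
    return (long)Math.Round(Math.Pow(phi, n) / sqrt5);
}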
In your question, you are effectively trying to serialize the shader units to run one after another. You could just use the CPU right away, and it would be much faster.
By the way, multiple passes aren't as slow as you might think, but they won't help you in your case: you cannot calculate any next value without knowing the previous ones, which kills any parallelization.
