Although my question sounds trivial, it really is NOT. Hope you can help me.
I want to implement interval arithmetic in my .NET (C#) project. This means that every number is defined by a lower bound and an upper bound. This is helpful for problems like
1 / 3 = 0.333333333333333 (15 significant digits)
since you would then have
1 / 3 = [ 0.33333333333333 , 0.33333333333334 ] (14 significant digits each)
, so I know FOR SURE that the right answer lies between those two numbers. Without the interval representation I would already be carrying a rounding error (e.g. 0.0000000000000003).
To achieve this I wrote my own Interval type that overloads all standard operators like +-*/, etc. To make this type work correctly I need to be able to round the result of 1 / 3 in two directions. Rounding the result down will give me the lower bound for my interval, rounding the result up will give me the upper bound for my interval.
.NET has the Math.Round(double, int) method, which rounds the double to int decimal places. Looks great, but it can't be forced to round up or down: Math.Round(1.0/3.0, 14) happens to round down, but the up-rounding to 0.33...34 that I also need can't be achieved this way.
But there are Math.Ceiling and Math.Floor, you might say! Okay, those methods round to the next lower or upper integer. So if I want to round to 14 decimal places I first need to rescale my result:
1 / 3 = 0.333333333333333 -> * 1E14 -> 33333333333333.3
Now I can call Math.Ceiling and Math.Floor and get both rounded results after scaling back:
33333333333333 & 33333333333334 -> / 1E14 -> 0.33333333333333 & 0.33333333333334
Looks great, but: let's say my number is near double.MaxValue. I can't just multiply a value near double.MaxValue by 1E14, since that overflows to infinity. So this is no solution either.
And to top it all off: all of this fails even harder when trying to round 0.9999999999999999999999999 (more than 15 digits), since the internal representation has already been rounded to 1 before I can even start trying to round down.
I could try to somehow parse a string containing the double, but that won't help either, since (1.0/3.0 * 3.0).ToString() already prints 1 instead of 0.99...9.
Decimal does not work either: I don't need that much precision (14 digits are enough), but I still want the full double range!
In C++, where several interval arithmetic implementations exist, this problem can be solved by dynamically telling the processor to switch its rounding mode to, for example, "always down" or "always up". I couldn't find any way to do this in .NET.
So, do you have any ideas?
Thanks in advance!
Assume nextDown(x) is a function that returns the largest double that is less than x, and nextUp(x) is a function that returns the smallest double that is greater than x. See Get next smallest Double number for implementation ideas.
Where you would have rounded a lower bound result down, instead use the nextDown of the round-to-nearest result. Where you would have rounded an upper bound up, use the nextUp of the round-to-nearest result.
This method ensures the interval continues to contain the exact real number result. It introduces extra rounding error - in some cases the lower bound will be one ULP smaller than it should be, and/or the upper bound will be one ULP bigger. However, it is a minimal widening of the interval, much less widening than you would get working in decimal or by suppressing low significance bits.
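For illustration, here is a minimal C# sketch of such functions, built on the bit representation of a double (the names NextUp/NextDown are mine; newer .NET versions also provide Math.BitIncrement and Math.BitDecrement, which do the same job):
static double NextUp(double x)
{
    // NaN and +infinity have no successor; return them unchanged.
    if (double.IsNaN(x) || double.IsPositiveInfinity(x)) return x;
    if (x == 0.0) return double.Epsilon;               // smallest positive subnormal
    long bits = BitConverter.DoubleToInt64Bits(x);
    // For positive values the next larger double is the next bit pattern;
    // for negative values it is the previous one.
    bits = x > 0.0 ? bits + 1 : bits - 1;
    return BitConverter.Int64BitsToDouble(bits);
}

static double NextDown(double x) => -NextUp(-x);
For the 1/3 example, the interval would then be [NextDown(1.0 / 3.0), NextUp(1.0 / 3.0)], which is guaranteed to bracket the exact value.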
This might be more like a long comment than a real answer.
This code returns an "interval" (I just use Tuple<,>, you can use your own Interval type) based on truncating the seven least significant bits:
static Tuple<double, double> GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(double number)
{
    if (double.IsInfinity(number) || double.IsNaN(number))
        return Tuple.Create(number, number); // maybe treat this case differently

    var i = BitConverter.DoubleToInt64Bits(number);
    const int numberOfBitsToClear = 7; // your seven, can change this value, must be below 52
    const long precision = 1L << numberOfBitsToClear;
    const long bitMask = ~(precision - 1L);

    // truncate i
    i &= bitMask;

    return Tuple.Create(BitConverter.Int64BitsToDouble(i), BitConverter.Int64BitsToDouble(i + precision));
}
Disclaimer: I am not sure if this is useful for any purpose. In particular not sure it is useful for interval arithmetic.
With this code, GetMinMaxIntervalBasedOnBinaryNumbersThatAreRoundOnLastSevenBits(1.0 / 3.0) returns the tuple (0.333333333333329, 0.333333333333336).
This code, just like the code you ask for in your question, has the obvious "issue" that if the original value is close to (or even equal to) one of the "round" numbers we use, then the returned interval is "skewed", with the original number being close to one of the ends of the interval. For example, with input 42.0 (already round), you get out the tuple (42, 42.0000000000009).
One good thing about this code is I expect it to be extremely fast.
I'm writing a simple extension that calculates an average for an array. It works fine except when values are very big. So here is an example
const int div = 100;
double num = 0;
for (int i = 0; i < div; i++)
{
    num += double.MaxValue / div;
}
Console.WriteLine(num);
Console.WriteLine(double.MaxValue);
I expect to get double.MaxValue, but I get Infinity because of rounding error. Is it possible to change the algorithm or handle this situation? I know that there are techniques for working with floats (rounding to even, for example), but I'm looking for something that could be helpful in this very case.
I hope the answer isn't "No, you cannot; just humble yourself, you have no chance when you work with floats".
One possibility would be to not divide each term by the number of items, but to divide the sum once afterward, in order to accumulate less error...
Of course MaxValue + MaxValue would overflow in that case. Yes, but your library would cleverly mitigate this problem by detecting the overflow (trap the exception or test for an infinite result) and scaling the operands by 1/2 in this case.
At the end after the division, you would apply inverse scale by appropriate power of 2.
Yes, but the average of MaxValue, 3, -MaxValue might exhibit very bad accuracy (like answering zero instead of 1).
Ah, no problem, you can have a perfectly exact sum with code like this Precise sum of floating point numbers, that's easy, modulo the scale protection you'll have to mix with...
A small side effect is that your average is no longer O(n), just a bit more expensive (ahem...). Oh, bad luck: some real-time application might expect O(n), and since you're writing a general-purpose library...
So, in order to balance both expectations, you might choose to sum with a kind of Kahan sum, and sacrifice some accuracy for speed...
Oh but why? What are the expectations exactly? This is the main question you'd have to answer... Do you prefer a library that guarantees the best possible accuracy at the price of speed (think of crlibm vs libm), best speed at the price of a few corner cases, exception-free behaviour no matter how ill-conditioned the inputs are (your original question), or a mix of the above?
Unfortunately, you can't have them all together...
In any case, as Patricia said, document it.
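For concreteness, here is a minimal C# sketch of the "divide once at the end, scale by powers of two on overflow" idea from the start of this answer (the name ScaledMean and the bookkeeping are mine, and it assumes all inputs are finite):
static double ScaledMean(IReadOnlyList<double> values)
{
    double sum = 0.0;
    int scalePower = 0;                              // the stored sum is (true sum) / 2^scalePower

    foreach (var v in values)
    {
        double term = v / Math.Pow(2.0, scalePower); // bring the new value to the current scale
        while (double.IsInfinity(sum + term))        // overflow detected: halve everything
        {
            sum *= 0.5;
            term *= 0.5;
            scalePower++;
        }
        sum += term;
    }

    // Divide once at the end, then undo the scaling. Scaling by powers of two is exact.
    return sum / values.Count * Math.Pow(2.0, scalePower);
}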
In the loop you are dividing by zero, and that's why the value of num is Infinity.
Here's a silly fun question:
Let's say we have to perform a simple operation where we need half of the value of a variable. There are typically two ways of doing this:
y = x / 2.0;
// or...
y = x * 0.5;
Assuming we're using the standard operators provided with the language, which one has better performance?
I'm guessing multiplication is typically better so I try to stick to that when I code, but I would like to confirm this.
Although personally I'm interested in the answer for Python 2.4-2.5, feel free to also post an answer for other languages! And if you'd like, feel free to post other fancier ways (like using bitwise shift operators) as well.
Python:
time python -c 'for i in xrange(int(1e8)): t=12341234234.234 / 2.0'
real 0m26.676s
user 0m25.154s
sys 0m0.076s
time python -c 'for i in xrange(int(1e8)): t=12341234234.234 * 0.5'
real 0m17.932s
user 0m16.481s
sys 0m0.048s
multiplication is 33% faster
Lua:
time lua -e 'for i=1,1e8 do t=12341234234.234 / 2.0 end'
real 0m7.956s
user 0m7.332s
sys 0m0.032s
time lua -e 'for i=1,1e8 do t=12341234234.234 * 0.5 end'
real 0m7.997s
user 0m7.516s
sys 0m0.036s
=> no real difference
LuaJIT:
time luajit -O -e 'for i=1,1e8 do t=12341234234.234 / 2.0 end'
real 0m1.921s
user 0m1.668s
sys 0m0.004s
time luajit -O -e 'for i=1,1e8 do t=12341234234.234 * 0.5 end'
real 0m1.843s
user 0m1.676s
sys 0m0.000s
=> it's only 5% faster
Conclusions: in Python it's faster to multiply than to divide, but as you get closer to the CPU using more advanced VMs or JITs, the advantage disappears. It's quite possible that a future Python VM would make it irrelevant.
Always use whatever is the clearest. Anything else you do is trying to outsmart the compiler. If the compiler is at all intelligent, it will do its best to optimize the result, but nothing can make the next guy not hate you for your crappy bitshifting solution (I love bit manipulation by the way, it's fun. But fun != readable)
Premature optimization is the root of all evil. Always remember the three rules of optimization!
Don't optimize.
If you are an expert, see rule #1
If you are an expert and can justify the need, then use the following procedure:
Code it unoptimized
determine how fast is "Fast enough"--Note which user requirement/story requires that metric.
Write a speed test
Test existing code--If it's fast enough, you're done.
Recode it optimized
Test optimized code. If it doesn't meet the metric, throw it away and keep the original.
If it meets the test, keep the original code in as comments
Also, doing things like removing inner loops when they aren't required or choosing a linked list over an array for an insertion sort are not optimizations, just programming.
I think this is getting so nitpicky that you would be better off doing whatever makes the code more readable. Unless you perform the operations thousands, if not millions, of times, I doubt anyone will ever notice the difference.
If you really have to make the choice, benchmarking is the only way to go. Find what function(s) are giving you problems, then find out where in the function the problems occur, and fix those sections. However, I still doubt that a single mathematical operation (even one repeated many, many times) would be a cause of any bottleneck.
Multiplication is faster, division is more accurate. You'll lose some precision if your number isn't a power of 2:
y = x / 3.0;
y = x * 0.333333; // how many 3's should there be, and how will the compiler round?
Even if you let the compiler figure out the inverted constant to perfect precision, the answer can still be different.
x = 100.0;
x / 3.0 == x * (1.0/3.0) // is false in the test I just performed
The speed issue is only likely to matter in C/C++ or JIT languages, and even then only if the operation is in a loop at a bottleneck.
If you want to optimize your code but still be clear, try this:
y = x * (1.0 / 2.0);
The compiler should be able to do the divide at compile-time, so you get a multiply at run-time. I would expect the precision to be the same as in the y = x / 2.0 case.
Where this may matter a LOT is on embedded processors without a hardware FPU, where floating-point arithmetic has to be emulated in software.
Just going to add something for the "other languages" option.
C: Since this is just an academic exercise that really makes no difference, I thought I would contribute something different.
I compiled to assembly with no optimizations and looked at the result.
The code:
int main() {
    volatile int a;
    volatile int b;

    asm("## 5/2\n");
    a = 5;
    a = a / 2;

    asm("## 5*0.5");
    b = 5;
    b = b * 0.5;

    asm("## done");
    return a + b;
}
compiled with gcc tdiv.c -O1 -o tdiv.s -S
the division by 2:
movl $5, -4(%ebp)
movl -4(%ebp), %eax
movl %eax, %edx
shrl $31, %edx
addl %edx, %eax
sarl %eax
movl %eax, -4(%ebp)
and the multiplication by 0.5:
movl $5, -8(%ebp)
movl -8(%ebp), %eax
pushl %eax
fildl (%esp)
leal 4(%esp), %esp
fmuls LC0
fnstcw -10(%ebp)
movzwl -10(%ebp), %eax
orw $3072, %ax
movw %ax, -12(%ebp)
fldcw -12(%ebp)
fistpl -16(%ebp)
fldcw -10(%ebp)
movl -16(%ebp), %eax
movl %eax, -8(%ebp)
However, when I changed those ints to doubles (which is what python would probably do), I got this:
division:
flds LC0
fstl -8(%ebp)
fldl -8(%ebp)
flds LC1
fmul %st, %st(1)
fxch %st(1)
fstpl -8(%ebp)
fxch %st(1)
multiplication:
fstpl -16(%ebp)
fldl -16(%ebp)
fmulp %st, %st(1)
fstpl -16(%ebp)
I haven't benchmarked any of this code, but just by examining it you can see that using integers, division by 2 is shorter than multiplication by 0.5. Using doubles, multiplication is shorter because the compiler uses the processor's floating point opcodes, which probably run faster (but actually I don't know) than not using them for the same operation. So ultimately this answer has shown that the performance of multiplication by 0.5 vs. division by 2 depends on the implementation of the language and the platform it runs on. Ultimately the difference is negligible and is something you should virtually never worry about, except in terms of readability.
As a side note, you can see that in my program main() returns a + b. When I take the volatile keyword away, you'll never guess what the assembly looks like (excluding the program setup):
## 5/2
## 5*0.5
## done
movl $5, %eax
leave
ret
it did both the division, multiplication, AND addition in a single instruction! Clearly you don't have to worry about this if the optimizer is any kind of respectable.
Sorry for the overly long answer.
Firstly, unless you are working in C or ASSEMBLY, you're probably in a higher level language where memory stalls and general call overheads will absolutely dwarf the difference between multiply and divide to the point of irrelevance. So, just pick what reads better in that case.
If you're talking from a very high level it won't be measurably slower for anything you're likely to use it for. You'll see in other answers, people need to do a million multiply/divides just to measure some sub-millisecond difference between the two.
If you're still curious, from a low level optimisation point of view:
Divide tends to have a significantly longer pipeline than multiply. This means it takes longer to get the result, but if you can keep the processor busy with non-dependent tasks, then it doesn't end up costing you any more than a multiply.
How long the pipeline difference is is completely hardware dependent. The last hardware I used needed something like 9 cycles for an FPU multiply and 50 cycles for an FPU divide. Sounds like a lot, but then you'd lose 1000 cycles for a memory miss, so that puts things in perspective.
An analogy is putting a pie in a microwave while you watch a TV show. The total time it took you away from the TV show is how long it was to put it in the microwave, and take it out of the microwave. The rest of your time you still watched the TV show. So if the pie took 10 minutes to cook instead of 1 minute, it didn't actually use up any more of your tv watching time.
In practice, if you're going to get to the level of caring about the difference between multiply and divide, you need to understand pipelines, caches, branch stalls, out-of-order execution, and pipeline dependencies. If this doesn't sound like where you were intending to go with this question, then the correct answer is to ignore the difference between the two.
Many (many) years ago it was absolutely critical to avoid divides and always use multiplies, but back then memory hits were less relevant, and divides were much worse. These days I rate readability higher, but if there's no readability difference, I think it's a good habit to opt for multiplies.
Write whichever more clearly states your intent.
After your program works, figure out what's slow, and make that faster.
Don't do it the other way around.
Do whatever you need. Think of your reader first, do not worry about performance until you are sure you have a performance problem.
Let the compiler do the performance work for you.
Actually there is a good reason why, as a general rule of thumb, multiplication will be faster than division. Floating point division in hardware is done either with shift-and-conditional-subtract algorithms ("long division" with binary numbers) or - more likely these days - with iterations like Goldschmidt's algorithm. Shift and subtract needs at least one cycle per bit of precision (the iterations are essentially sequential, unlike the shift-and-add partial products of multiplication, which can be summed in parallel), and iterative algorithms do at least one multiplication per iteration. In either case, it's highly likely that the division will take more cycles. Of course this does not account for quirks in compilers, data movement, or precision. By and large, though, if you are coding an inner loop in a time-sensitive part of a program, writing 0.5 * x or 1.0/2.0 * x rather than x / 2.0 is a reasonable thing to do. The pedantry of "code what's clearest" is absolutely true, but all three of these are so close in readability that the pedantry is in this case just pedantic.
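To make the "one multiplication per iteration" point concrete, here is a rough C# sketch of Newton-Raphson reciprocal refinement, a close cousin of Goldschmidt's algorithm (illustrative only; it is not what any particular FPU literally does, and it assumes a reasonable initial guess, which hardware gets from a small lookup table):
// Approximate a / b by refining an estimate of 1/b, then multiplying once at the end.
static double DivideViaReciprocal(double a, double b, double reciprocalGuess)
{
    double r = reciprocalGuess;            // rough estimate of 1/b
    for (int i = 0; i < 5; i++)
    {
        // Each step costs two multiplications and one subtraction,
        // and roughly doubles the number of correct bits in r.
        r = r * (2.0 - b * r);
    }
    return a * r;
}
For example, DivideViaReciprocal(1.0, 3.0, 0.3) converges to 0.3333... within a handful of iterations, which is why a divide typically costs several multiply-latencies.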
If you are working with integers or other non-floating-point types, don't forget your bitshifting operators: << >>
int y = 10;
y = y >> 1;
Console.WriteLine("value halved: " + y);
y = y << 1;
Console.WriteLine("now value doubled: " + y);
Multiplication is usually faster - certainly never slower.
However, if it is not speed critical, write whichever is clearest.
I have always learned that multiplication is more efficient.
Floating-point division is (generally) especially slow, so while floating-point multiplication is also relatively slow, it's probably faster than floating-point division.
But I'm more inclined to answer "it doesn't really matter", unless profiling has shown that division is a bottleneck compared with multiplication. I'm guessing, though, that the choice of multiplication vs. division isn't going to have a big performance impact in your application.
This becomes more of a question when you are programming in assembly or perhaps C. I figure that with most modern languages, optimization such as this is being done for me.
Be wary of "guessing multiplication is typically better so I try to stick to that when I code,"
In the context of this specific question, "better" here means "faster", which is not very useful.
Thinking about speed can be a serious mistake. There are profound error implications in the specific algebraic form of the calculation.
See Floating Point arithmetic with error analysis. See Basic Issues in Floating Point Arithmetic and Error Analysis.
While some floating-point values are exact, most floating point values are an approximation; they are some ideal value plus some error. Every operation applies to the ideal value and the error value.
The biggest problems come from trying to manipulate two nearly-equal numbers. The right-most bits (the error bits) come to dominate the results.
>>> for i in range(7):
... a=1/(10.0**i)
... b=(1/10.0)**i
... print i, a, b, a-b
...
0 1.0 1.0 0.0
1 0.1 0.1 0.0
2 0.01 0.01 -1.73472347598e-18
3 0.001 0.001 -2.16840434497e-19
4 0.0001 0.0001 -1.35525271561e-20
5 1e-05 1e-05 -1.69406589451e-21
6 1e-06 1e-06 -4.23516473627e-22
In this example, you can see that as the values get smaller, the difference between nearly equal numbers creates non-zero results where the correct answer is zero.
I've read somewhere that multiplication is more efficient in C/C++; no idea regarding interpreted languages - the difference is probably negligible due to all the other overhead.
Unless it becomes an issue, stick with what is more maintainable/readable - I hate it when people tell me this, but it's so true.
I would suggest multiplication in general, because you don't have to spend the cycles ensuring that your divisor is not 0. This doesn't apply, of course, if your divisor is a constant.
As with posts #24 (multiplication is faster) and #30 - but sometimes they are both just as easy to understand:
1*1e-6F;
1/1e6F;
I find them both just as easy to read, and I have to repeat them billions of times, so it is useful to know that multiplication is usually faster.
There is a difference, but it is compiler dependent. At first, on vs2003 (C++), I got no significant difference for double types (64-bit floating point). However, running the tests again on vs2010, I detected a huge difference, up to a factor of 4 faster for multiplication. Tracking this down, it seems that vs2003 and vs2010 generate different FPU code.
On a Pentium 4, 2.8 GHz, vs2003:
Multiplication: 8.09
Division: 7.97
On a Xeon W3530, vs2003:
Multiplication: 4.68
Division: 4.64
On a Xeon W3530, vs2010:
Multiplication: 5.33
Division: 21.05
It seems that on vs2003 a division in a loop (so the divisor was used multiple times) was translated to a multiplication with the inverse. On vs2010 this optimization is not applied any more (I suppose because there is a slightly different result between the two methods). Note also that the CPU performs divisions faster as soon as your numerator is 0.0. I do not know the precise algorithm hardwired in the chip, but maybe it is number dependent.
Edit 18-03-2013: the observation for vs2010
Java (Android), profiled on a Samsung GT-S5830
public void Multiplication()
{
    float a = 1.0f;
    for (int i = 0; i < 1000000; i++)
    {
        a *= 0.5f;
    }
}

public void Division()
{
    float a = 1.0f;
    for (int i = 0; i < 1000000; i++)
    {
        a /= 2.0f;
    }
}
Results?
Multiplication(): time/call: 1524.375 ms
Division(): time/call: 1220.003 ms
Division is about 20% faster than multiplication (!)
After such a long and interesting discussion, here is my take on this: there is no final answer to this question. As some people pointed out, it depends on both the hardware (cf. piotrk and gast128) and the compiler (cf. Javier's tests). If speed is not critical and your application does not need to process huge amounts of data in real time, you may opt for clarity and use a division, whereas if processing speed or processor load is an issue, multiplication might be the safest choice.
Finally, unless you know exactly what platform your application will be deployed on, benchmarking is meaningless. And for code clarity, a single comment would do the job!
Here's a silly fun answer:
x / 2.0 is not equivalent to x * 0.5
Let's say you wrote this method on Oct 22, 2008.
double half(double x) => x / 2.0;
Now, 10 years later you learn that you can optimize this piece of code. The method is referenced in hundreds of formulas throughout your application. So you change it, and experience a remarkable 5% performance improvement.
double half(double x) => x * 0.5;
Was it the right decision to change the code? In maths, the two expressions are indeed equivalent. In computer science, that does not always hold true. Please read Minimizing the effect of accuracy problems for more details. If your calculated values are - at some point - compared with other values, you will change the outcome of edge cases. E.g.:
double quantize(double x)
{
    if (half(x) > threshold)
        return 1;
    else
        return -1;
}
Bottom line is: once you settle for either of the two, stick to it!
Well, if we assume that an add/subtract operation costs 1, then a multiply costs 5, and a divide costs about 20.
Technically there is no such thing as division, there is just multiplication by inverse elements. For example You never divide by 2, you in fact multiply by 0.5.
'Division' - let's kid ourselves that it exists for a second - is always harder than multiplication, because to 'divide' x by y one first needs to compute the value y^{-1} such that y*y^{-1} = 1 and then do the multiplication x*y^{-1}. If you already know y^{-1}, then not having to calculate it from y must be an optimization.
I'm messing around with Fourier transformations. Now I've created a class that does an implementation of the DFT (not doing anything like FFT atm). This is the implementation I've used:
public static Complex[] Dft(double[] data)
{
    int length = data.Length;
    Complex[] result = new Complex[length];

    for (int k = 1; k <= length; k++)
    {
        Complex c = Complex.Zero;
        for (int n = 1; n <= length; n++)
        {
            c += Complex.FromPolarCoordinates(data[n - 1], (-2 * Math.PI * n * k) / length);
        }
        result[k - 1] = 1 / Math.Sqrt(length) * c;
    }
    return result;
}
And these are the results I get from Dft({2,3,4})
Well it seems pretty okay, since those are the values I expect. There is only one thing I find confusing. And it all has to do with the rounding of doubles.
First of all, why are the first two numbers not exactly the same (0,8660..4438 vs 0,8660..443)? And why can't it calculate a zero where you'd expect one? I know 2.8E-15 is pretty close to zero, but it's not zero.
Does anyone know how these marginal errors occur, and whether I can (and should want to) do something about them?
It might seem that there's no real problem, because these are just small errors. However, how do you deal with these rounding errors if you are, for example, comparing two values:
5,2 + 0i != 5,1961524 + i2.828107*10^-15
Cheers
I think you've already explained it to yourself - limited precision means limited precision. End of story.
If you want to clean up the results, you can do some rounding of your own to a more reasonable number of significant digits - then your zeros will show up where you want them.
To answer the question raised by your comment, don't try to compare floating point numbers directly - use a range:
if (Math.Abs(float1 - float2) < 0.001) {
    // they're the same!
}
The comp.lang.c FAQ has a lot of questions & answers about floating point, which you might be interested in reading.
From http://support.microsoft.com/kb/125056
Emphasis mine.
There are many situations in which precision, rounding, and accuracy in floating-point calculations can work to generate results that are surprising to the programmer. There are four general rules that should be followed:
In a calculation involving both single and double precision, the result will not usually be any more accurate than single precision. If double precision is required, be certain all terms in the calculation, including constants, are specified in double precision.
Never assume that a simple numeric value is accurately represented in the computer. Most floating-point values can't be precisely represented as a finite binary value. For example .1 is .0001100110011... in binary (it repeats forever), so it can't be represented with complete accuracy on a computer using binary arithmetic, which includes all PCs.
Never assume that the result is accurate to the last decimal place. There are always small differences between the "true" answer and what can be calculated with the finite precision of any floating point processing unit.
Never compare two floating-point values to see if they are equal or not equal. This is a corollary to rule 3. There are almost always going to be small differences between numbers that "should" be equal. Instead, always check to see if the numbers are nearly equal. In other words, check to see if the difference between them is very small or insignificant.
Note that although I referenced a microsoft document, this is not a windows problem. It's a problem with using binary and is in the CPU itself.
And, as a second side note, I tend to use the Decimal datatype instead of double: See this related SO question: decimal vs double! - Which one should I use and when?
In C# you'll want to use the 'decimal' type, not double for accuracy with decimal points.
As to the 'why': representing fractions in different base systems gives different answers. For example, 1/3 in a base-10 system is 0.33333 recurring, but in a base-3 system it is 0.1.
A double is a binary value, i.e. base 2. When converting to base-10 decimal you can expect these rounding errors.
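A small illustration of that base-2 vs base-10 point in C# (the exact digits printed may vary slightly by runtime):
double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);           // False: 0.1, 0.2 and 0.3 are not exact in base 2
Console.WriteLine(d.ToString("R"));    // prints something like 0.30000000000000004

decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);          // True: these values are exact in base 10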
I have a large set of numbers, probably in the multiple gigabytes range. First issue is that I can't store all of these in memory. Second is that any attempt at addition of these will result in an overflow. I was thinking of using more of a rolling average, but it needs to be accurate. Any ideas?
These are all floating point numbers.
This is not read from a database; it is a CSV file collected from multiple sources. It has to be accurate, as it is stored as parts of a second (e.g. 0.293482888929), and a rolling average can be the difference between .2 and .3.
It is a set of numbers representing how long users took to respond to certain form actions. For example, when showing a message box, how long did it take them to press OK or Cancel? The data was sent to me stored as seconds.portions-of-a-second, 1.2347 seconds for example. If I convert it to milliseconds I overflow int, long, etc. rather quickly. Even if I don't convert it, I still overflow rather quickly. I guess the one answer below is correct, that maybe I don't have to be 100% accurate, just look within a certain range inside a specific StdDev and I would be close enough.
You can sample randomly from your set ("population") to get an average ("mean"). The accuracy will be determined by how much your samples vary (as determined by "standard deviation" or variance).
The advantage is that you have billions of observations, and you only have to sample a fraction of them to get a decent accuracy or the "confidence range" of your choice. If the conditions are right, this cuts down the amount of work you will be doing.
Here's a numerical library for C# that includes a random sequence generator. Just make a random sequence of numbers that reference indices in your array of elements (from 1 to x, the number of elements in your array). Dereference to get the values, and then calculate your mean and standard deviation.
If you want to test the distribution of your data, consider using the Chi-Squared Fit test or the K-S test, which you'll find in many spreadsheet and statistical packages (e.g., R). That will help confirm whether this approach is usable or not.
Integers or floats?
If they're integers, you need to accumulate a frequency distribution by reading the numbers and recording how many of each value you see. That can be averaged easily.
For floating point, this is a bit of a problem. Given the overall range of the floats, and the actual distribution, you have to work out a bin-size that preserves the accuracy you want without preserving all of the numbers.
Edit
First, you need to sample your data to get a mean and a standard deviation. A few thousand points should be good enough.
Then, you need to determine a respectable range. Folks pick things like ±6σ (standard deviations) around the mean. You'll divide this range into as many buckets as you can stand.
In effect, the number of buckets determines the number of significant digits in your average. So, pick 10,000 or 100,000 buckets to get 4 or 5 digits of precision. Since it's a measurement, odds are good that your measurements only have two or three digits.
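A minimal C# sketch of that binning idea (the method name, the clamping of outliers, and the choice to approximate each value by its bucket centre are my illustrative choices):
static double BinnedMean(IEnumerable<double> values, double lo, double hi, int bucketCount)
{
    var counts = new long[bucketCount];
    double width = (hi - lo) / bucketCount;

    foreach (var v in values)
    {
        int b = (int)((v - lo) / width);
        b = Math.Min(Math.Max(b, 0), bucketCount - 1);   // clamp outliers into the edge buckets
        counts[b]++;
    }

    // Approximate every value by the centre of its bucket; the error is at most half a bucket width.
    double weightedSum = 0;
    long total = 0;
    for (int b = 0; b < bucketCount; b++)
    {
        weightedSum += counts[b] * (lo + (b + 0.5) * width);
        total += counts[b];
    }
    return weightedSum / total;
}
With lo/hi set to mean ± 6σ from the initial sample and 100,000 buckets, this gives roughly the 5 digits of precision described above.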
Edit
What you'll discover is that the mean of your initial sample is very close to the mean of any other sample. And any sample mean is close to the population mean. You'll note that most (but not all) of your means are within 1 standard deviation of each other.
You should find that your measurement errors and inaccuracies are larger than your standard deviation.
This means that a sample mean is as useful as a population mean.
Wouldn't a rolling average be as accurate as anything else (discounting rounding errors, I mean)? It might be kind of slow because of all the dividing.
You could group batches of numbers and average them recursively. Like average 100 numbers 100 times, then average the result. This would be less thrashing and mostly addition.
In fact, if you added 256 or 512 at once you might be able to bit-shift the result by either 8 or 9 (I believe you could do this in a double by simply adjusting the floating point exponent) -- this would make your program extremely quick, and it could be written recursively in just a few lines of code (not counting the unsafe operation of the exponent tweak).
Perhaps dividing by 256 would already use this optimization? I may have to speed test dividing by 255 vs 256 and see if there is some massive improvement. I'm guessing not.
You mean an average of 32-bit and 64-bit numbers? But why not just use a proper rational bignum library? If you have that much data and you want an exact mean, then just code it.
// Bignum and RationalBignum are hypothetical arbitrary-precision types;
// the point is that such types exist (or can be written) beyond the built-in numeric types.
class RationalBignum {
    public Bignum Numerator { get; set; }
    public Bignum Denominator { get; set; }
}

class BigMeanr {
    public static int Main(string[] argv) {
        var sum = new RationalBignum(0);
        var n = new Bignum(0);
        using (var s = new FileStream(argv[0], FileMode.Open)) {
            using (var r = new BinaryReader(s)) {
                try {
                    while (true) {
                        var flt = r.ReadSingle();
                        var rat = new RationalBignum(flt);
                        sum += rat;
                        n++;
                    }
                }
                catch (EndOfStreamException) {
                    // reached the end of the file
                }
            }
        }
        Console.WriteLine("The mean is: {0}", sum / n);
        return 0;
    }
}
Just remember, there are more numeric types out there than the ones your compiler offers you.
You could break the data into sets of, say, 1000 numbers, average these, and then average the averages.
This is a classic divide-and-conquer type problem.
The key fact is that the average of a large set of numbers is the same
as the average of the first half of the set combined (weighted by element count) with the average of the second half of the set.
In other words, for an even split:
AVG(A[1..N]) == AVG( AVG(A[1..N/2]), AVG(A[N/2+1..N]) )
Here is a simple, recursive C# solution.
It's passed my tests, and should be completely correct.
public struct SubAverage
{
    public float Average;
    public int Count;
};

static SubAverage AverageMegaList(List<float> aList)
{
    if (aList.Count <= 500) // Brute-force average 500 numbers or less.
    {
        SubAverage avg;
        avg.Average = 0;
        avg.Count = aList.Count;
        foreach (float f in aList)
        {
            avg.Average += f;
        }
        avg.Average /= avg.Count;
        return avg;
    }

    // For more than 500 numbers, break the list into two sub-lists.
    SubAverage subAvg_A = AverageMegaList(aList.GetRange(0, aList.Count/2));
    SubAverage subAvg_B = AverageMegaList(aList.GetRange(aList.Count/2, aList.Count - aList.Count/2));

    SubAverage finalAnswer;
    finalAnswer.Average = subAvg_A.Average * subAvg_A.Count/aList.Count +
                          subAvg_B.Average * subAvg_B.Count/aList.Count;
    finalAnswer.Count = aList.Count;

    Console.WriteLine("The average of {0} numbers is {1}",
        finalAnswer.Count, finalAnswer.Average);
    return finalAnswer;
}
The trick is that you're worried about an overflow. In that case, it all comes down to order of execution. The basic formula is like this:
Given:
A = current avg
C = count of items
V = next value in the sequence
The next average (A1) is:
A1 = ((C * A) + V) / (C + 1)
The danger is that over the course of evaluating the sequence, while A should stay relatively manageable, C will become very large.
Eventually C * A will overflow the integer or double types.
One thing we can try is to re-write it like this, to reduce the chance of an overflow:
A1 = C/(C+1) * A/(C+1) + V/(C+1)
In this way, we never multiply C * A and only deal with smaller numbers. But the concern now is the result of the division operations. If C is very large, C/(C+1) (for example) may not be meaningful when constrained to normal floating point representations. The best I can suggest is to use the largest type possible for C here.
Here's one way to do it in pseudocode:
average = first
count = 1
while more:
    count += 1
    diff = next - average
    average += diff / count
return average
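The same incremental mean in C#, for reference (a direct translation of the pseudocode; the name is mine):
static double IncrementalMean(IEnumerable<double> values)
{
    double average = 0.0;
    long count = 0;
    foreach (var v in values)
    {
        count++;
        // Adding diff/count keeps every intermediate value near the data,
        // so nothing grows toward overflow the way a raw sum would.
        average += (v - average) / count;
    }
    return average;
}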
Sorry for the late comment, but isn't the formula above, provided by Joel Coehoorn, rewritten wrongly?
I mean, the basic formula is right:
Given:
A = current avg
C = count of items
V = next value in the sequence
The next average (A1) is:
A1 = ( (C * A) + V ) / ( C + 1 )
But instead of:
A1 = C/(C+1) * A/(C+1) + V/(C+1)
shouldn't we have:
A1 = C/(C+1) * A + V/(C+1)
That would explain kastermester's post:
"My math ticks off here - You have C, which you say "go towards infinity" or at least, a really big number, then: C/(C+1) goes towards 1. A /(C+1) goes towards 0. V/(C+1) goes towards 0. All in all: A1 = 1 * 0 + 0 So put shortly A1 goes towards 0 - seems a bit off. – kastermester"
Because we would have A1 = 1 * A + 0, i.e., A1 goes towards A, which is right.
I've been using such method for calculating averages for a long time and the aforementioned precision problems have never been an issue for me.
With floating point numbers the problem is not overflow, but loss of precision when the accumulated value gets large. Adding a small number to a huge accumulated value will result in losing most of the bits of the small number.
There is a clever solution by the author of the IEEE floating point standard himself, the Kahan summation algorithm, which deals exactly with this kind of problem by checking the error at each step and keeping a running compensation term that prevents losing the small values.
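For reference, a minimal C# sketch of Kahan (compensated) summation; the method name is mine:
static double KahanSum(IEnumerable<double> values)
{
    double sum = 0.0;
    double c = 0.0;                  // running compensation for lost low-order bits
    foreach (var v in values)
    {
        double y = v - c;            // re-inject the error recovered last round
        double t = sum + y;          // big + small: low-order bits of y are lost here...
        c = (t - sum) - y;           // ...and recovered here (algebraically zero, numerically not)
        sum = t;
    }
    return sum;
}
The mean is then KahanSum(values) divided by the count.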
If the numbers are ints, accumulate the total in a long. If the numbers are longs ... what language are you using? In Java you could accumulate the total in a BigInteger, which is an integer that will grow as large as it needs to be. You could always write your own class to reproduce this functionality. The gist of it is just to make an array of integers to hold each "big number". When you add two numbers, loop through starting with the low-order value. If the result of the addition sets the high-order bit, clear this bit and carry the one to the next column.
Another option would be to find the average of, say, 1000 numbers at a time. Hold these intermediate results, then when you're done average them all together.
Why is a sum of floating point numbers overflowing? In order for that to happen, you would need to have values near the max float value, which sounds odd.
If you were dealing with integers I'd suggest using a BigInteger, or breaking the set into multiple subsets, recursively averaging the subsets, then averaging the averages.
If you're dealing with floats, it gets a bit weird. A rolling average could become very inaccurate. I suggest using a rolling average which is only updated when you hit an overflow exception or the end of the set. So effectively dividing the set into non-overflowing sets.
Two ideas from me:
If the numbers are ints, use an arbitrary precision library like IntX - this could be too slow, though
If the numbers are floats and you know the total count, you can divide each entry by that count and add up the results (see the sketch below). If you use double, the precision should be sufficient.
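A tiny sketch of that second idea (assuming the values are in a List&lt;double&gt; called values and System.Linq is available):
// Each term is divided before it is added, so the running total stays near the
// final mean instead of growing toward double.MaxValue.
double mean = values.Sum(x => x / values.Count);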
Why not just scale the numbers (down) before computing the average?
If I were to find the mean of billions of doubles as accurately as possible, I would take the following approach (NOT TESTED):
Find 'M', an upper bound for log2(nb_of_input_data). If there are billions of data points, 50 may be a good candidate (capacity of more than 1,000,000 billion). Create an L1 array of M double elements. If you're not sure about M, using an extensible list will solve the issue, but it is slower.
Also create an associated L2 boolean array (all cells set to false by default).
For each incoming data D:
int i = 0;
double localMean = D;
while (L2[i]) {
    L2[i] = false;
    localMean = (localMean + L1[i]) / 2;
    i++;
}
L1[i] = localMean;
L2[i] = true;
And your final mean will be:
double sum = 0;
double totalWeight = 0;
for (int i = 0; i < M; i++) {
    if (L2[i]) {
        long weight = 1L << i;   // 1L: a plain int shift would overflow for i >= 31
        sum += L1[i] * weight;
        totalWeight += weight;
    }
}
return sum / totalWeight;
Notes:
Many proposed solutions in this thread miss the point of lost precision.
Using binary instead of 100-group-or-whatever provides better precision, and doubles can be safely doubled or halved without losing precision!
Try this
Iterate through the numbers incrementing a counter, and adding each number to a total, until adding the next number would result in an overflow, or you run out of numbers.
(It makes no difference whether the inputs are integers or floats - use the largest-precision float you can and convert each input to that type.)
Divide the total by the counter to get a mean (a floating point value), and add it to a temp array.
If you had run out of numbers, and there is only one element in temp, that's your result.
Start over using the temp array as input, i.e. recurse iteratively until you reach the end condition described earlier.
Depending on the range of the numbers, it might be a good idea to have an array where the subscript is your number and the value is the quantity of that number; you could then do your calculation from this.
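A minimal sketch of that counting approach in C#, assuming the inputs are non-negative integers with a known, modest maximum (the names are mine):
static double MeanByCounting(IEnumerable<int> values, int maxValue)
{
    var counts = new long[maxValue + 1];     // counts[v] = how many times v occurred
    foreach (var v in values)
        counts[v]++;

    double weightedSum = 0;
    long total = 0;
    for (int v = 0; v <= maxValue; v++)
    {
        weightedSum += (double)v * counts[v];
        total += counts[v];
    }
    return weightedSum / total;
}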