.NET 3.5 JIT not working when running the application - c#

The following code gives different output when running the release inside Visual Studio, and running the release outside Visual Studio. I'm using Visual Studio 2008 and targeting .NET 3.5. I've also tried .NET 3.5 SP1.
When running outside Visual Studio, the optimizing JIT kicks in because no debugger is attached. Either (a) there's something subtle going on with C# that I'm missing, or (b) the JIT is actually in error. I'm doubtful that the JIT can go wrong, but I'm running out of other possibilities...
Output when running inside Visual Studio:
0 0,
0 1,
1 0,
1 1,
Output when running release outside of Visual Studio:
0 2,
0 2,
1 2,
1 2,
What is the reason?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Test
{
    struct IntVec
    {
        public int x;
        public int y;
    }

    interface IDoSomething
    {
        void Do(IntVec o);
    }

    class DoSomething : IDoSomething
    {
        public void Do(IntVec o)
        {
            Console.WriteLine(o.x.ToString() + " " + o.y.ToString() + ",");
        }
    }

    class Program
    {
        static void Test(IDoSomething oDoesSomething)
        {
            IntVec oVec = new IntVec();
            for (oVec.x = 0; oVec.x < 2; oVec.x++)
            {
                for (oVec.y = 0; oVec.y < 2; oVec.y++)
                {
                    oDoesSomething.Do(oVec);
                }
            }
        }

        static void Main(string[] args)
        {
            Test(new DoSomething());
            Console.ReadLine();
        }
    }
}

It is a JIT optimizer bug. It is unrolling the inner loop but not updating the oVec.y value properly:
for (oVec.x = 0; oVec.x < 2; oVec.x++) {
0000000a xor esi,esi ; oVec.x = 0
for (oVec.y = 0; oVec.y < 2; oVec.y++) {
0000000c mov edi,2 ; oVec.y = 2, WRONG!
oDoesSomething.Do(oVec);
00000011 push edi
00000012 push esi
00000013 mov ecx,ebx
00000015 call dword ptr ds:[00170210h] ; first unrolled call
0000001b push edi ; WRONG! does not increment oVec.y
0000001c push esi
0000001d mov ecx,ebx
0000001f call dword ptr ds:[00170210h] ; second unrolled call
for (oVec.x = 0; oVec.x < 2; oVec.x++) {
00000025 inc esi
00000026 cmp esi,2
00000029 jl 0000000C
The bug disappears when you let oVec.y increment to 4; that's too many calls to unroll.
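For illustration, a sketch of the inner loop widened as described (four iterations is too many calls for the JIT to unroll, so the bad code is not generated; the bound of 4 is the only change):
for (oVec.x = 0; oVec.x < 2; oVec.x++) {
    for (oVec.y = 0; oVec.y < 4; oVec.y++) {
        oDoesSomething.Do(oVec);
    }
}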
One workaround is this:
for (int x = 0; x < 2; x++) {
    for (int y = 0; y < 2; y++) {
        oDoesSomething.Do(new IntVec(x, y));
    }
}
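Note that this workaround assumes IntVec gets a two-argument constructor, which the struct as posted does not declare; a minimal sketch:
struct IntVec
{
    public int x;
    public int y;

    public IntVec(int x, int y)
    {
        this.x = x;
        this.y = y;
    }
}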
UPDATE: re-checked in August 2012, this bug was fixed in the v4.0.30319 jitter but is still present in the v2.0.50727 jitter. It seems unlikely they'll fix this in the old version after this long.

I believe this is a genuine JIT compilation bug. I would report it to Microsoft and see what they say. Interestingly, I found that the x64 JIT does not have the same problem.
Here is my reading of the x86 JIT.
// save context
00000000 push ebp
00000001 mov ebp,esp
00000003 push edi
00000004 push esi
00000005 push ebx
// put oDoesSomething pointer in ebx
00000006 mov ebx,ecx
// zero out edi, this will store oVec.y
00000008 xor edi,edi
// zero out esi, this will store oVec.x
0000000a xor esi,esi
// NOTE: the inner loop is unrolled here.
// set oVec.y to 2
0000000c mov edi,2
// call oDoesSomething.Do(oVec) -- y is always 2!?!
00000011 push edi
00000012 push esi
00000013 mov ecx,ebx
00000015 call dword ptr ds:[002F0010h]
// call oDoesSomething.Do(oVec) -- y is always 2?!?!
0000001b push edi
0000001c push esi
0000001d mov ecx,ebx
0000001f call dword ptr ds:[002F0010h]
// increment oVec.x
00000025 inc esi
// loop back to 0000000C if oVec.x < 2
00000026 cmp esi,2
00000029 jl 0000000C
// restore context and return
0000002b pop ebx
0000002c pop esi
0000002d pop edi
0000002e pop ebp
0000002f ret
This looks like an optimization gone bad to me...

I copied your code into a new Console App.
Debug Build
Correct output with both debugger and no debugger
Switched to Release Build
Again, correct output both times
Created a new x86 configuration (I'm running x64 Windows 2008 and was using 'Any CPU')
Debug Build
Got the correct output with both F5 and Ctrl+F5
Release Build
Correct output with the debugger attached
No debugger - got the incorrect output
So it is the x86 JIT incorrectly generating the code. I have deleted my original text about reordering of loops etc. A few other answers here have confirmed that the JIT is unrolling the loop incorrectly on x86.
To fix the problem, you can change the declaration of IntVec to a class and it works in all flavours.
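A minimal sketch of that change (everything else stays as posted; IntVec becomes a reference type, so Do receives a reference rather than a by-value copy of the two fields):
class IntVec
{
    public int x;
    public int y;
}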
Think this needs to go on MS Connect....
-1 to Microsoft!

Related

C# Unexpected loop performance. Possible JIT bound check bug? [closed]

I noticed something odd when comparing the JIT-generated code of two methods that should perform the same.
To my surprise, the generated code had major differences, and its length was almost doubled for the supposedly simpler method M1.
The methods I compared were M1 and M2.
The number of assignments is the same, so the only difference should be how the bound checks are handled for each method.
using System;

public class C {
    static void M1(int[] left, int[] right)
    {
        for (int i = 0; i < 5; i++)
        {
            left[i] = 1;
            right[i] = 1;
        }
    }

    static void M2(int[] left, int[] right)
    {
        for (int i = 0; i < 10; i += 2)
        {
            left[i] = 1;
            right[i] = 1;
        }
    }
}
Generated JIT for each method:
C.M1(Int32[], Int32[])
L0000: sub rsp, 0x28
L0004: xor eax, eax
L0006: test rcx, rcx
L0009: setne r8b
L000d: movzx r8d, r8b
L0011: test rdx, rdx
L0014: setne r9b
L0018: movzx r9d, r9b
L001c: test r9d, r8d
L001f: je short L005c
L0021: cmp dword ptr [rcx+8], 5
L0025: setge r8b
L0029: movzx r8d, r8b
L002d: cmp dword ptr [rdx+8], 5
L0031: setge r9b
L0035: movzx r9d, r9b
L0039: test r9d, r8d
L003c: je short L005c
L003e: movsxd r8, eax
L0041: mov dword ptr [rcx+r8*4+0x10], 1
L004a: mov dword ptr [rdx+r8*4+0x10], 1
L0053: inc eax
L0055: cmp eax, 5
L0058: jl short L003e
L005a: jmp short L0082
L005c: cmp eax, [rcx+8]
L005f: jae short L0087
L0061: movsxd r8, eax
L0064: mov dword ptr [rcx+r8*4+0x10], 1
L006d: cmp eax, [rdx+8]
L0070: jae short L0087
L0072: mov dword ptr [rdx+r8*4+0x10], 1
L007b: inc eax
L007d: cmp eax, 5
L0080: jl short L005c
L0082: add rsp, 0x28
L0086: ret
L0087: call 0x00007ffc50fafc00
L008c: int3
C.M2(Int32[], Int32[])
L0000: sub rsp, 0x28
L0004: xor eax, eax
L0006: mov r8d, [rcx+8]
L000a: cmp eax, r8d
L000d: jae short L0036
L000f: movsxd r9, eax
L0012: mov dword ptr [rcx+r9*4+0x10], 1
L001b: cmp eax, [rdx+8]
L001e: jae short L0036
L0020: mov dword ptr [rdx+r9*4+0x10], 1
L0029: add eax, 2
L002c: cmp eax, 0xa
L002f: jl short L000a
L0031: add rsp, 0x28
L0035: ret
L0036: call 0x00007ffc50fafc00
L003b: int3
M1's length is double that of M2's!
What could explain this and is it some kind of bug?
EDIT
Figured out that M1 creates a version of the loop without bound checks, and that's why M1 is longer. Still, the question remains: why does M1 perform worse, even though it doesn't perform bound checking at all?
I also ran BenchmarkDotNet and verified that M2 performs about 20% - 30% faster than M1 for arrays of length 10.
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.14393.3930 (1607/AnniversaryUpdate/Redstone1)
Intel Core i7-4790 CPU 3.60GHz (Haswell), 1 CPU, 8 logical and 4 physical cores
Frequency=3515622 Hz, Resolution=284.4447 ns, Timer=TSC
.NET Core SDK=3.1.401
[Host] : .NET Core 3.1.7 (CoreCLR 4.700.20.36602, CoreFX 4.700.20.37001), X64 RyuJIT
DefaultJob : .NET Core 3.1.7 (CoreCLR 4.700.20.36602, CoreFX 4.700.20.37001), X64 RyuJIT
| Method | Mean | Error | StdDev | Ratio |
|-------- |---------:|----------:|----------:|------:|
| M1Bench | 4.372 ns | 0.0215 ns | 0.0201 ns | 1.00 |
| M2Bench | 3.350 ns | 0.0340 ns | 0.0301 ns | 0.77 |
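For reference, a minimal BenchmarkDotNet harness along these lines (a sketch: the array length of 10 matches the text above, and it assumes M1/M2 from the snippet are made public so they can be called here):
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class BoundCheckBench
{
    private readonly int[] _left = new int[10];
    private readonly int[] _right = new int[10];

    [Benchmark(Baseline = true)]
    public void M1Bench() => C.M1(_left, _right);   // assumes M1 is made public

    [Benchmark]
    public void M2Bench() => C.M2(_left, _right);   // assumes M2 is made public
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<BoundCheckBench>();
}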
But, there's a lot of overhead up front for M1() to know it can use
the "fast" path...if your arrays aren't large enough, the overhead
would dominate and produce counter-intuitive results.
Peter Duniho
The overhead of choosing the path (the up-front checks the JIT emits) for the optimized, bound-check-free version of loops of the type:
for (int i = 0; i < array.Length; i++)
won't be beneficial for smaller loops.
As loops grow larger, eliminating bound checks becomes more beneficial and eventually outweighs the cost of those up-front checks.
Examples of loops that do not get the optimized path:
for (int i = 0; i < array.Length; i += 2)
for (int i = 0; i <= array.Length; i++)
for (int i = 0; i < array.Length / 2; i++)
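For contrast, a minimal illustration of the canonical shape that does qualify (the counter starts at 0, strides by 1, and is compared against array.Length, so the JIT can drop the per-access bound check):
static void Fill(int[] array)
{
    for (int i = 0; i < array.Length; i++)
    {
        array[i] = 1; // no bound check needed: i is provably within [0, array.Length)
    }
}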

C# Performance on Small Functions

One of my co-workers has been reading Clean Code by Robert C. Martin and got to the section about using many small functions as opposed to fewer large functions. This led to a debate about the performance consequences of this methodology. So we wrote a quick program to test the performance and are confused by the results.
For starters here is the normal version of the function.
static double NormalFunction()
{
    double a = 0;
    for (int j = 0; j < s_OuterLoopCount; ++j)
    {
        for (int i = 0; i < s_InnerLoopCount; ++i)
        {
            double b = i * 2;
            a = a + b + 1;
        }
    }
    return a;
}
Here is the version I made that breaks the functionality into small functions.
static double TinyFunctions()
{
    double a = 0;
    for (int i = 0; i < s_OuterLoopCount; i++)
    {
        a = Loop(a);
    }
    return a;
}

static double Loop(double a)
{
    for (int i = 0; i < s_InnerLoopCount; i++)
    {
        double b = Double(i);
        a = Add(a, Add(b, 1));
    }
    return a;
}

static double Double(double a)
{
    return a * 2;
}

static double Add(double a, double b)
{
    return a + b;
}
I use the Stopwatch class to time the functions, and when I ran it in debug I got the following results.
s_OuterLoopCount = 10000;
s_InnerLoopCount = 10000;
NormalFunction Time = 377 ms;
TinyFunctions Time = 1322 ms;
These results make sense to me, especially in debug, as there is additional overhead in the function calls. It is when I run it in release that I get the following results.
s_OuterLoopCount = 10000;
s_InnerLoopCount = 10000;
NormalFunction Time = 173 ms;
TinyFunctions Time = 98 ms;
These results confuse me: even if the compiler were optimizing TinyFunctions by inlining all the function calls, how could that make it ~57% faster?
We have tried moving variable declarations around in NormalFunction, and it had basically no effect on the run time.
I was hoping that someone would know what is going on, and if the compiler can optimize TinyFunctions so well, why it can't apply similar optimizations to NormalFunction.
In looking around, we found a mention that having the functions broken out allows the JIT to better optimize what to put in registers, but NormalFunction only has 4 variables, so I find it hard to believe that explains the massive performance difference.
I'd be grateful for any insight someone can provide.
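For context, a sketch of one way the Stopwatch timing could be set up, as a helper placed alongside the methods above (the Time helper and the warm-up call are my own assumptions, not the original harness):
static double Time(string name, Func<double> f)
{
    f(); // warm-up call so JIT compilation is excluded from the measurement
    var sw = System.Diagnostics.Stopwatch.StartNew();
    double result = f();
    sw.Stop();
    Console.WriteLine("{0} Time = {1} ms", name, sw.ElapsedMilliseconds);
    return result;
}
// Usage: Time("NormalFunction", NormalFunction); Time("TinyFunctions", TinyFunctions);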
Update 1
As pointed out below by Kyle, changing the order of operations made a massive difference in the performance of NormalFunction.
static double NormalFunction()
{
    double a = 0;
    for (int j = 0; j < s_OuterLoopCount; ++j)
    {
        for (int i = 0; i < s_InnerLoopCount; ++i)
        {
            double b = i * 2;
            a = b + 1 + a;
        }
    }
    return a;
}
Here are the results with this configuration.
s_OuterLoopCount = 10000;
s_InnerLoopCount = 10000;
NormalFunction Time = 91 ms;
TinyFunctions Time = 102 ms;
This is more like what I expected, but it still leaves the question as to why the order of operations can cause a ~56% performance hit.
Furthermore, I then tried it with integer operations and we are back to it not making any sense.
s_OuterLoopCount = 10000;
s_InnerLoopCount = 10000;
NormalFunction Time = 87 ms;
TinyFunctions Time = 52 ms;
And this doesn't change regardless of the order of operations.
I can make performance match much better by changing one line of code:
a = a + b + 1;
Change it to:
a = b + 1 + a;
Or:
a += b + 1;
Now you'll find that NormalFunction might actually be slightly faster and you can "fix" that by changing the signature of the Double method to:
int Double( int a ) { return a * 2; }
I thought of these changes because this is what was different between the two implementations. After this, their performance is very similar with TinyFunctions being a few percent slower (as expected).
The second change is easy to explain: the NormalFunction implementation actually doubles an int and then converts it to a double (with an fild opcode at the machine code level). The original Double method loads a double first and then doubles it, which I would expect to be slightly slower.
But that doesn't account for the bulk of the runtime discrepancy. That comes down almost entirely to the order change I made first. Why? I don't really have any idea. The difference in machine code looks like this:
Original Changed
01070620 push ebp 01390620 push ebp
01070621 mov ebp,esp 01390621 mov ebp,esp
01070623 push edi 01390623 push edi
01070624 push esi 01390624 push esi
01070625 push eax 01390625 push eax
01070626 fldz 01390626 fldz
01070628 xor esi,esi 01390628 xor esi,esi
0107062A mov edi,dword ptr ds:[0FE43ACh] 0139062A mov edi,dword ptr ds:[12243ACh]
01070630 test edi,edi 01390630 test edi,edi
01070632 jle 0107065A 01390632 jle 0139065A
01070634 xor edx,edx 01390634 xor edx,edx
01070636 mov ecx,dword ptr ds:[0FE43B0h] 01390636 mov ecx,dword ptr ds:[12243B0h]
0107063C test ecx,ecx 0139063C test ecx,ecx
0107063E jle 01070655 0139063E jle 01390655
01070640 mov eax,edx 01390640 mov eax,edx
01070642 add eax,eax 01390642 add eax,eax
01070644 mov dword ptr [ebp-0Ch],eax 01390644 mov dword ptr [ebp-0Ch],eax
01070647 fild dword ptr [ebp-0Ch] 01390647 fild dword ptr [ebp-0Ch]
0107064A faddp st(1),st 0139064A fld1
0107064C fld1 0139064C faddp st(1),st
0107064E faddp st(1),st 0139064E faddp st(1),st
01070650 inc edx 01390650 inc edx
01070651 cmp edx,ecx 01390651 cmp edx,ecx
01070653 jl 01070640 01390653 jl 01390640
01070655 inc esi 01390655 inc esi
01070656 cmp esi,edi 01390656 cmp esi,edi
01070658 jl 01070634 01390658 jl 01390634
0107065A pop ecx 0139065A pop ecx
0107065B pop esi 0139065B pop esi
0107065C pop edi 0139065C pop edi
0107065D pop ebp 0139065D pop ebp
0107065E ret 0139065E ret
This is opcode-for-opcode identical except for the order of the floating-point operations. That makes a huge performance difference, but I don't know enough about x86 floating-point operations to know exactly why.
Update:
With the new integer version we see something else curious. In this case it seems the JIT is trying to be clever and apply an optimization because it turns this:
int b = 2 * i;
a = a + b + 1;
Into something like:
mov esi, eax ; b = i
add esi, esi ; b += b
lea ecx, [ecx + esi + 1] ; a = a + b + 1
Where a is stored in the ecx register, i in eax, and b in esi.
Whereas the TinyFunctions version gets turned into something like:
mov eax, edx
add eax, eax
inc eax
add ecx, eax
Where i is in edx, b is in eax, and a is in ecx this time around.
I suppose for our CPU architecture this LEA "trick" (explained here) ends up being slower than just using the ALU proper. It is still possible to change the code to get the performance between the two to line up:
int b = 2 * i + 1;
a += b;
This ends up forcing the NormalFunction approach to be turned into mov, add, inc, add, as it appears in the TinyFunctions approach.
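Putting that together, a sketch of the integer variant of NormalFunction with the rewrite applied (the method name NormalFunctionInt is my own label; s_OuterLoopCount and s_InnerLoopCount are the fields from the original program):
static int NormalFunctionInt()
{
    int a = 0;
    for (int j = 0; j < s_OuterLoopCount; ++j)
    {
        for (int i = 0; i < s_InnerLoopCount; ++i)
        {
            int b = 2 * i + 1; // fold the +1 into b...
            a += b;            // ...so the JIT emits mov/add/inc/add, as in TinyFunctions
        }
    }
    return a;
}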

Converting float NaN values from binary form and vice-versa results a mismatch

I perform a conversion "bytes[4] -> float number -> bytes[4]" without any arithmetic.
In bytes I have a single-precision number in IEEE-754 format (4 bytes per number, little-endian order, as on the machine).
I encounter an issue where bytes that represent a NaN value are not converted verbatim.
For example:
{ 0x1B, 0xC4, 0xAB, 0x7F } -> NaN -> { 0x1B, 0xC4, 0xEB, 0x7F }
Code for reproduction:
using System;
using System.Linq;

namespace StrangeFloat
{
    class Program
    {
        private static void PrintBytes(byte[] array)
        {
            foreach (byte b in array)
            {
                Console.Write("{0:X2}", b);
            }
            Console.WriteLine();
        }

        static void Main(string[] args)
        {
            byte[] strangeFloat = { 0x1B, 0xC4, 0xAB, 0x7F };
            float[] array = new float[1];
            Buffer.BlockCopy(strangeFloat, 0, array, 0, 4);
            byte[] bitConverterResult = BitConverter.GetBytes(array[0]);

            PrintBytes(strangeFloat);
            PrintBytes(bitConverterResult);

            bool isEqual = strangeFloat.SequenceEqual(bitConverterResult);
            Console.WriteLine("IsEqual: {0}", isEqual);
        }
    }
}
Result ( https://ideone.com/p5fsrE ):
1BC4AB7F
1BC4EB7F
IsEqual: False
This behaviour depends on platform and configuration: the code converts the number without errors on x64 in all configurations and on x86/Debug. On x86/Release the error appears.
Also, if I change
byte[] bitConverterResult = BitConverter.GetBytes(array[0]);
to
float f = array[0];
byte[] bitConverterResult = BitConverter.GetBytes(f);
then it is erroneous on x86/Debug as well.
I researched the problem and found that the compiler generates x86 code that uses the FPU registers (!) to hold the float value (FLD/FST instructions). But the FPU sets the high bit of the mantissa to 1 instead of 0, so it modifies the value even though the logic just passes the value through unchanged.
On the x64 platform the xmm0 register (SSE) is used and it works fine.
[Question]
What is this: is it documented somewhere as undefined behavior for NaN values, or is it a JIT/optimization bug?
Why does the compiler use the FPU and SSE when no arithmetic operations are performed?
Update 1
Debug configuration - the value is passed via the stack without side effects - correct result:
byte[] bitConverterResult = BitConverter.GetBytes(array[0]);
02232E45 mov eax,dword ptr [ebp-44h]
02232E48 cmp dword ptr [eax+4],0
02232E4C ja 02232E53
02232E4E call 71EAC65A
02232E53 push dword ptr [eax+8] // eax+8 points to "1b c4 ab 7f" CORRECT!
02232E56 call 7136D8E4
02232E5B mov dword ptr [ebp-5Ch],eax // eax points to managed
// array data "fc 35 d7 70 04 00 00 00 __1b c4 ab 7f__" and this is correct
02232E5E mov eax,dword ptr [ebp-5Ch]
02232E61 mov dword ptr [ebp-48h],eax
Release configuration - the optimizer or the JIT does a strange pass through the FPU registers and corrupts the data - incorrect result:
byte[] bitConverterResult = BitConverter.GetBytes(array[0]);
00B12DE8 cmp dword ptr [edi+4],0
00B12DEC jbe 00B12E3B
00B12DEE fld dword ptr [edi+8] // edi+8 points to "1b c4 ab 7f"
00B12DF1 fstp dword ptr [ebp-10h] // ebp-10h points to "1b c4 eb 7f" (FAIL)
00B12DF4 mov ecx,dword ptr [ebp-10h]
00B12DF7 call 70C75810
00B12DFC mov edi,eax
00B12DFE mov ecx,esi
00B12E00 call dword ptr ds:[4A70860h]
I'll just post #HansPassant's comment as the answer.
"The x86 jitter uses the FPU to handle floating point values. This is
not a bug. Your assumption that those byte values are a proper
argument to a method that takes a float argument is just wrong."
In other words, this is just a GIGO case (Garbage In, Garbage Out).
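If a bit-exact round trip is needed regardless, one way to sidestep the x87 path is to keep the payload in an integer type and only reinterpret it when a float is actually required (a sketch; integer moves never pass through the FPU, so the NaN payload is preserved):
byte[] strangeFloat = { 0x1B, 0xC4, 0xAB, 0x7F };

// Keep the bit pattern in an int instead of a float.
int bits = BitConverter.ToInt32(strangeFloat, 0);
byte[] roundTripped = BitConverter.GetBytes(bits);

Console.WriteLine(strangeFloat.SequenceEqual(roundTripped)); // True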

Run vs. Debug = different results in Assembler

I'm learning assembler. I practise with this code:
ASM:
;-------------------------------------------------------------------------
.586
.MODEL flat, stdcall
public srednia_harm
OPTION CASEMAP:NONE
INCLUDE include\windows.inc
INCLUDE include\user32.inc
INCLUDE include\kernel32.inc
.CODE
jeden dd 1.0
DllEntry PROC hInstDLL:HINSTANCE, reason:DWORD, reserved1:DWORD
mov eax, TRUE
ret
DllEntry ENDP
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
srednia_harm PROC
push ebp
mov esp,ebp
push esi
mov esi, [ebp+8] ; address of array
mov ecx, [ebp+12] ; the number of elements
finit
fldz ; the current value of the sum - st(0)=0
mianownik:
fld dword PTR jeden ;ST(0)=1, ST(1)=sum
fld dword PTR [esi] ;loading of array elements - ST(0)=tab[i], ST(1)=1 ST(2)=suma
fdivp st(1), st(0) ; st(1)=st(1)/(st0) -> ST(0)=1/tab[i], ST(1)=suma
faddp st(1),st(0) ; st(1)=st(0)+st(1) -> st(0)=suma+1/tab[i]
add esi,4
loop mianownik
pop esi
pop ebp
ret
srednia_harm ENDP
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
;-------------------------------------------------------------------------
END DllEntry
DEF:
LIBRARY "biblioteka"
EXPORTS
srednia_harm
C#:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
namespace GUI
{
unsafe class FunkcjeAsemblera //imports of assembler's function
{
[DllImport("bibliotekaASM.dll", CallingConvention = CallingConvention.StdCall)]
private static extern float srednia_harm(float[] table, int n);
public float wywolajTest(float[] table, int n)
{
float wynik = srednia_harm(table, n);
return wynik;
}
}
}
C#:
private void button6_Click(object sender, EventArgs e)
{
FunkcjeAsemblera funkcje = new FunkcjeAsemblera();
int n = 4;
float[] table = new float[n];
for (int i = 0; i < n; i++)
table[i] = 1;
float wynik = funkcje.wywolajTest(table, n);
textBox6.Text = wynik.ToString();
}
When I run this code everything is fine. The result is 4, as I expected. But I tried to understand the code, so I set a lot of breakpoints in the ASM function. Then the problems started. The array was exactly where it should be in memory, but the second parameter was lost. The address pointed to an empty field in memory. I tried a lot of combinations and changed types, and it was still the same.
I did some research but didn't find any clues. How is it possible that everything works fine when I run the program, but not in DEBUG?
Ok, I tested this in Debug and Release mode. I enabled Properties -> Debug -> Enable native code debugging. It works in both cases with Step Into (F11). The 'n' variable is accessed properly.
One problem I noticed is an improper PROC setup. The code above accesses the two variables relative to EBP but does not clean up the stack (with stdcall, the callee is responsible for cleaning up the stack, per Wikipedia).
push ebp
mov esp,ebp
push esi
mov esi,dword ptr [ebp+8]
mov ecx,dword ptr [ebp+0Ch]
wait
...
add esi,4
loop 6CC7101F
pop esi
pop ebp
ret <-- two params not cleaned up
The following is the code assembled by the PROC heading below:
push ebp
mov ebp,esp
push esi
mov esi,dword ptr [ebp+8]
mov ecx,dword ptr [ebp+0Ch]
wait
...
add esi,4
loop 6CC7101F
pop esi
leave <-- restores EBP
ret 8 <-- two params cleaned up
I suggest changing the PROC to
srednia_harm PROC uses esi lpArr: DWORD, num: DWORD
mov esi, lpArr
mov ecx, num
...
ret
srednia_harm ENDP
Maybe that has been the cause of some troubles.

Why looping in Delphi faster than C#?

Delphi:
procedure TForm1.Button1Click(Sender: TObject);
var
  I, Tick: Integer;
begin
  Tick := GetTickCount();
  for I := 0 to 1000000000 do
  begin
  end;
  Button1.Caption := IntToStr(GetTickCount() - Tick) + ' ms';
end;
C#:
private void button1_Click(object sender, EventArgs e)
{
    int tick = System.Environment.TickCount;
    for (int i = 0; i < 1000000000; ++i)
    {
    }
    tick = System.Environment.TickCount - tick;
    button1.Text = tick.ToString() + " ms";
}
Delphi gives around 515 ms
C# gives around 3775 ms
Delphi is compiled to native code, whereas C# is compiled to CIL, which is then translated at runtime. That said, C# does use JIT compilation, so you might expect the timings to be more similar, but it is not a given.
It would be useful if you could describe the hardware you ran this on (CPU, clock rate).
I do not have access to Delphi to repeat your experiment, but using native C++ vs C# and the following code:
VC++ 2008
#include <iostream>
#include <windows.h>

int main(void)
{
    int tick = GetTickCount();
    for (int i = 0; i < 1000000000; ++i)
    {
    }
    tick = GetTickCount() - tick;
    std::cout << tick << " ms" << std::endl;
}
C#
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int tick = System.Environment.TickCount;
            for (int i = 0; i < 1000000000; ++i)
            {
            }
            tick = System.Environment.TickCount - tick;
            Console.Write(tick.ToString() + " ms");
        }
    }
}
I initially got:
C++ 2792ms
C# 2980ms
However I then performed a Rebuild on the C# version and ran the executable in <project>\bin\release and <project>\bin\debug respectively directly from the command line. This yielded:
C# (release): 720ms
C# (debug): 3105ms
So I reckon that is where the difference truly lies: you were running the debug version of the C# code from the IDE.
In case you are thinking that C++ is then particularly slow, I ran that as an optimised release build and got:
C++ (Optimised): 0ms
This is not surprising because the loop is empty, and the control variable is not used outside the loop so the optimiser removes it altogether. To avoid that I declared i as a volatile with the following result:
C++ (volatile i): 2932ms
My guess is that the C# implementation also removed the loop and that the 720ms is from something else; this may explain most of the difference between the timings in the first test.
What Delphi is doing I cannot tell, you might look at the generated assembly code to see.
All the above tests on AMD Athlon Dual Core 5000B 2.60GHz, on Windows 7 32bit.
If this is intended as a benchmark, it's an exceptionally bad one, as in both cases the loop can be optimized away, so you have to look at the generated machine code to see what's going on. If you use release mode for C#, the following code
Stopwatch sw = Stopwatch.StartNew();
for (int i = 0; i < 1000000000; ++i){ }
sw.Stop();
Console.WriteLine(sw.Elapsed);
is transformed by the JITter to this:
push ebp
mov ebp,esp
push edi
push esi
call 67CDBBB0
mov edi,eax
xor eax,eax ; i = 0
inc eax ; ++i
cmp eax,3B9ACA00h ; i == 1000000000?
jl 0000000E ; false: jmp
mov ecx,edi
cmp dword ptr [ecx],ecx
call 67CDBC10
mov ecx,66DDAEDCh
call FFE8FBE0
mov esi,eax
mov ecx,edi
call 67CD75A8
mov ecx,eax
lea eax,[esi+4]
mov dword ptr [eax],ecx
mov dword ptr [eax+4],edx
call 66A94C90
mov ecx,eax
mov edx,esi
mov eax,dword ptr [ecx]
mov eax,dword ptr [eax+3Ch]
call dword ptr [eax+14h]
pop esi
pop edi
pop ebp
ret
TickCount is not a reliable timer; you should use .Net's Stopwatch class. (I don't know what the Delphi equivalent is).
Also, are you running a Release build?
Do you have a debugger attached?
The Delphi compiler counts the for loop downwards (when possible); the above code sample is compiled to:
Unit1.pas. 42: Tick := GetTickCount();
00489367 E8B802F8FF call GetTickCount
0048936C 8BF0 mov esi,eax
Unit1.pas.43: for I := 0 to 1000000000 do
0048936E B801CA9A3B mov eax,$3b9aca01
00489373 48 dec eax
00489374 75FD jnz $00489373
You are comparing native code against VM JIT-compiled code, and that is not fair. Native code will ALWAYS be faster, since the JITter cannot optimize the code like a native compiler can.
That said, comparing Delphi against C# is not fair at all; a Delphi binary will always win (faster, smaller, without any kind of dependencies, etc.).
Btw, I'm sadly amazed how many posters here don't know these differences... or maybe you just hurt some .NET zealots who try to defend C# against anything that shows there are better options out there.
This is the C# disassembly:
DEBUG:
// int i = 0; while (++i != 1000000000) ;//==for(int i ...blah blah blah)
0000004e 33 D2 xor edx,edx
00000050 89 55 B8 mov dword ptr [ebp-48h],edx
00000053 90 nop
00000054 EB 00 jmp 00000056
00000056 FF 45 B8 inc dword ptr [ebp-48h]
00000059 81 7D B8 00 CA 9A 3B cmp dword ptr [ebp-48h],3B9ACA00h
00000060 0F 95 C0 setne al
00000063 0F B6 C0 movzx eax,al
00000066 89 45 B4 mov dword ptr [ebp-4Ch],eax
00000069 83 7D B4 00 cmp dword ptr [ebp-4Ch],0
0000006d 75 E7 jne 00000056
As you can see, it is a waste of CPU.
EDIT:
RELEASE:
//unchecked
//{
//int i = 0; while (++i != 1000000000) ;//==for(int i ...blah blah blah)
00000032 33 D2 xor edx,edx
00000034 89 55 F4 mov dword ptr [ebp-0Ch],edx
00000037 FF 45 F4 inc dword ptr [ebp-0Ch]
0000003a 81 7D F4 00 CA 9A 3B cmp dword ptr [ebp-0Ch],3B9ACA00h
00000041 75 F4 jne 00000037
//}
EDIT:
And this is the C++ version, running about 9x faster on my machine:
__asm
{
PUSH ECX
PUSH EBX
XOR ECX, ECX
MOV EBX, 1000000000
NEXT: INC ECX
CMP ECX, EBX
JS NEXT
POP EBX
POP ECX
}
You should attach a debugger and take a look at the machine code generated by each.
Delphi would almost definitely optimise that loop to execute in reverse order (i.e. DOWNTO zero rather than FROM zero) - Delphi does this whenever it determines it is "safe" to do, presumably because either subtracting or checking against zero is faster than adding or checking against a non-zero number.
What happens if you try both cases specifying the loops to execute in reverse order?
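For the C# side, the count-down variant would look like this (a sketch of the suggested experiment; whether the JIT treats it differently is exactly what the test would show):
int tick = System.Environment.TickCount;
for (int i = 1000000000; i > 0; --i)
{
    // empty body, counting down to zero instead of up
}
tick = System.Environment.TickCount - tick;
button1.Text = tick.ToString() + " ms";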
In Delphi the loop bound is calculated only once, before the loop begins, whereas in C# the break condition is evaluated again on each loop pass.
That’s why the looping in Delphi is faster than in C#.
"// int i = 0; while (++i != 1000000000) ;"
That's interesting.
while (++i != x) is not the same as for (; i != x; i++)
The difference is that the while loop doesn't execute the loop for i = 0.
(Try it out: run something like this:
int i;
for (i = 0; i < 5; i++)
    Console.WriteLine(i);   // prints 0 1 2 3 4
i = 0;
while (++i != 5)
    Console.WriteLine(i);   // prints 1 2 3 4
)
