Why use flags+bitmasks rather than a series of booleans? - C#

Given a case where I have an object that may be in one or more true/false states, I've always been a little fuzzy on why programmers frequently use flags+bitmasks instead of just using several boolean values.
It's all over the .NET framework. Not sure if this is the best example, but the .NET framework has the following:
public enum AnchorStyles
{
    None = 0,
    Top = 1,
    Bottom = 2,
    Left = 4,
    Right = 8
}
So given an anchor style, we can use bitmasks to figure out which of the states are selected. However, it seems like you could accomplish the same thing with an AnchorStyle class/struct with bool properties defined for each possible value, or an array of individual enum values.
Of course the main reason for my question is that I'm wondering if I should follow a similar practice with my own code.
So, why use this approach?
Less memory consumption? (it doesn't seem like it would consume less than an array/struct of bools)
Better stack/heap performance than a struct or array?
Faster compare operations? Faster value addition/removal?
More convenient for the developer who wrote it?

It was traditionally a way of reducing memory usage. So, yes, it's quite obsolete in C# :-)
As a programming technique, it may be obsolete in today's systems, and you'd be quite alright to use an array of bools, but...
It is fast to compare values stored as a bitmask: use the bitwise AND and OR operators and compare the resulting ints.
It uses considerably less memory. Putting all 4 of your example values in a bitmask would use half a byte. Using an array of bools would most likely take a few bytes of overhead for the array object plus a byte for each bool. If you have to store a million values, you'll see exactly why a bitmask version is superior.
It is easier to manage: you only have to deal with a single integer value, whereas an array of bools would be stored quite differently in, say, a database.
And, because of the memory layout, it is much faster in every respect than an array; the bitmask is just a single 32-bit integer, and we all know that is as fast as you can get for operations on data.

Easy to set multiple flags in any order.
Easy to save and retrieve a series of bits like 0101011 to and from the database.
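For example, a minimal sketch of that round trip using the AnchorStyles values from the question (the variable names are just illustrative):
// Combine flags in any order and persist them as a single integer.
AnchorStyles style = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right;
int stored = (int)style;                              // 1 + 4 + 8 = 13, ready for an INT column
// Later, read the integer back and recover the full set of flags.
AnchorStyles restored = (AnchorStyles)stored;
bool hasLeft = (restored & AnchorStyles.Left) != 0;   // true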

Among other things, it's easier to add new bit meanings to a bitfield than to add new boolean values to a class. It's also easier to copy a bitfield from one instance to another than a series of booleans.

It can also make method signatures clearer. Imagine a method with 10 bool parameters vs. one bitmask.

Actually, it can have better performance, particularly if your enum derives from byte.
In that extreme case, each enum value is stored in a single byte, which covers all the combinations of up to 8 flags, i.e. 256 of them. Storing those flags as separate booleans would take at least 8 bytes.
But, even then, I don't think that is the real reason. The reason I prefer those is the power C# gives me to handle those enums. I can add several values with a single expression. I can remove them also. I can even compare several values at once with a single expression using the enum. With booleans, code can become, let's say, more verbose.
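For illustration, roughly what that looks like with the AnchorStyles enum from the question (just a sketch):
AnchorStyles anchors = AnchorStyles.Top | AnchorStyles.Left;   // set several values in one expression
anchors |= AnchorStyles.Right;                                 // add another flag
anchors &= ~AnchorStyles.Left;                                 // remove a flag
// compare several values at once with a single expression
AnchorStyles both = AnchorStyles.Top | AnchorStyles.Right;
bool topAndRight = (anchors & both) == both;                   // true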

From a domain model perspective, it just models reality better in some situations. If you have three booleans like AccountIsInDefault and IsPreferredCustomer and RequiresSalesTaxState, then it doesn't make sense to add them to a single [Flags]-decorated enumeration, because they are not three distinct values for the same domain model element.
But if you have a set of booleans like:
[Flags] enum AccountStatus { AccountIsInDefault = 1,
    AccountOverdue = 2, AccountFrozen = 4 }
or
[Flags] enum CargoState { ExceedsWeightLimit = 1,
    ContainsDangerousCargo = 2, IsFlammableCargo = 4,
    ContainsRadioactive = 8 }
Then it is useful to be able to store the total state of the Account, (or the cargo) in ONE variable... that represents ONE Domain Element whose value can represent any possible combination of states.

Raymond Chen has a blog post on this subject.
Sure, bitfields save data memory, but you have to balance it against the cost in code size, debuggability, and reduced multithreading.
As others have said, its time is largely past. It's tempting to still do it, because bit fiddling is fun and cool-looking, but it's no longer more efficient, it has serious drawbacks in terms of maintenance, it doesn't play nicely with databases, and unless you're working in an embedded world, you have enough memory.

I would suggest never using enum flags unless you are dealing with some pretty serious memory limitations (not likely). You should always write code optimized for maintenance.
Having several boolean properties makes it easier to read and understand the code, change the values, and provide IntelliSense comments, not to mention reduce the likelihood of bugs. If necessary, you can always use an enum flag field internally; just make sure you expose the setting/getting of the values with boolean properties.
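A minimal sketch of that last suggestion; the enum and property names here are made up for illustration:
[Flags]
enum PersonFlags { None = 0, IsActive = 1, IsAdmin = 2 }

class Person
{
    private PersonFlags flags;   // single compact field internally

    public bool IsActive
    {
        get { return (flags & PersonFlags.IsActive) != 0; }
        set { if (value) flags |= PersonFlags.IsActive; else flags &= ~PersonFlags.IsActive; }
    }

    public bool IsAdmin
    {
        get { return (flags & PersonFlags.IsAdmin) != 0; }
        set { if (value) flags |= PersonFlags.IsAdmin; else flags &= ~PersonFlags.IsAdmin; }
    }
}
Callers only ever see readable bool properties, while the storage stays as one compact field.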

Space efficiency - 1 bit
Time efficiency - bit comparisons are handled quickly by hardware.
Language independence - where the data may be handled by a number of different programs you don't need to worry about the implementation of booleans across different languages/platforms.
Most of the time, these are not worth the tradeoff in terms of maintenance. However, there are times when it is useful:
Network protocols - there will be a big saving in reduced size of messages
Legacy software - once I had to add some information for tracing into some legacy software.
Cost to modify the header: millions of dollars and years of effort.
Cost to shoehorn the information into 2 bytes in the header that weren't being used: 0.
Of course, there was the additional cost in the code that accessed and manipulated this information, but this was done by functions anyway, so once you had the accessors defined it was no less maintainable than using Booleans.

I have seen answers citing time efficiency and compatibility. Those are the reasons, but I do not think anyone explains why these are still sometimes necessary in times like ours. From the other answers, and from chatting with other engineers, I have seen it pictured as some sort of quirky old-time way of doing things that should just die because the new ways of doing things are better.
Yes, in very rare cases you may want to do it the "old way" for performance's sake, as in the classic million-iteration loop, but I would say that is the wrong way to frame it.
While it is true that you should not care at all and should use whatever the C# language throws at you as the new right-way™ to do things (enforced by some fancy AI code analysis slapping you whenever you do not meet its code style), you should understand deeply that low-level strategies are not there randomly. Even more, in many cases they are the only way to solve things when you have no help from a fancy framework. Your OS, your drivers, and even the .NET runtime itself (especially the garbage collector) are built using bitfields and transactional instructions. Your CPU's instruction set is itself a very complex bitfield, so JIT compilers encode their output using complex bit processing and a few hardcoded bitfields so that the CPU can execute it correctly.
When we talk about performance, these things have a much larger impact than people imagine, today more than ever, especially once you start considering multicore systems.
When multicore systems became common, all CPU manufacturers started to mitigate the problems of symmetric multiprocessing by adding dedicated transactional memory-access instructions. These were made specifically to mitigate the near-impossible task of making multiple CPUs cooperate at kernel level without a huge drop in performance, but they also provide an OS-independent way to speed up the low-level parts of most programs. Basically, your program can use CPU-assisted instructions to perform changes to integer-sized memory locations: a read-modify-write where the "modify" part can be anything you want, though the most common patterns are combinations of set/clear/increment.
Usually the CPU simply monitors whether any other CPU is accessing the same address location; if contention happens, it stops the operation from being committed to memory and signals the event to the application within the same instruction. This seems a trivial task, but superscalar CPUs (each core has multiple ALUs allowing instruction-level parallelism), multi-level caches (some private to each core, some shared by a cluster of cores), and Non-Uniform Memory Access systems (see Threadripper CPUs) make it difficult to keep everything coherent. Luckily, the smartest people in the world work to boost performance and keep all of this happening correctly; today's CPUs dedicate a large number of transistors to this task so that caches and our read-modify-write transactions work correctly.
C# exposes the most common transactional memory-access patterns through the Interlocked class (it is only a limited set; for example, a very useful "clear mask" combined with increment is missing, but you can always use CompareExchange instead, which gets very close to the same performance).
To achieve the same result with an array of booleans you must use some sort of lock, and under contention a lock is several orders of magnitude slower than the atomic instructions.
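As an example, here is a minimal sketch of an atomic "set bits" / "clear bits" on a shared int bitfield built from CompareExchange; the helper names are mine, not part of any framework API (newer runtimes also expose Interlocked.Or and Interlocked.And directly):
using System.Threading;

static class AtomicBits
{
    // Atomically set the given bits; returns the previous value.
    public static int Set(ref int field, int mask)
    {
        int old, updated;
        do
        {
            old = field;
            updated = old | mask;
        } while (Interlocked.CompareExchange(ref field, updated, old) != old);
        return old;
    }

    // Atomically clear the given bits; returns the previous value.
    public static int Clear(ref int field, int mask)
    {
        int old, updated;
        do
        {
            old = field;
            updated = old & ~mask;
        } while (Interlocked.CompareExchange(ref field, updated, old) != old);
        return old;
    }
}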
Here are some examples of highly appreciated HW-assisted transactional access using bitfields, which would require a completely different strategy without them. Of course these are outside C#'s scope:
Assume a DMA peripheral that has a set of DMA channels, say 20 (though any number up to the bit width of the interlocked integer will do). Any peripheral's interrupt, which might execute at any time, alongside your beloved OS and from any core of your 32-core latest-gen CPU, may want a DMA channel; you want to allocate a DMA channel (assign it to the peripheral) and use it. A bitfield covers all those requirements and uses just a dozen or so instructions to perform the allocation, which are inlineable into the requesting code. You basically cannot go faster than this, and your code is just a few functions; basically we delegate the hard part to the HW to solve the problem. Constraint: bitfield only.
Assume a peripheral that, to perform its duty, requires some working space in normal RAM. For example, assume a high-speed I/O peripheral that uses scatter-gather DMA; in short, it uses fixed-size blocks of RAM populated with the description of the next transfer (by the way, the descriptor is itself made of bitfields), chained one to the other, creating a FIFO queue of transfers in RAM. The application prepares the descriptors first and then chains them to the tail of the current transfers without ever pausing the controller (not even disabling interrupts). The allocation/deallocation of such descriptors can be done using a bitfield and transactional instructions, so when it is shared between different CPUs, and between the driver interrupt and the kernel, everything still works without conflicts. One usage case: the kernel allocates descriptors atomically without stopping or disabling interrupts and without additional locks (the bitfield itself is the lock), and the interrupt deallocates them when the transfer completes.
Most of the older strategies were to preallocate the resources and force the application to free them after usage.
If you ever need multitasking on steroids, C# lets you use Threads + Interlocked, and lately C# introduced lightweight Tasks; guess how they are made? Transactional memory access using the Interlocked class. So you likely do not need to reinvent the wheel; the low-level parts are already covered and well engineered.
So the idea is: let smart people (not me, I am a common developer like you) solve the hard parts for you, and just enjoy a general-purpose computing platform like C#. If you still see some remnants of these parts, it is because someone may still need to interface with worlds outside .NET and access some driver or system call, for example, requiring you to know how to build a descriptor and put each bit in the right place. Do not be mad at those people; they made our jobs possible.
In short: Interlocked + bitfields is incredibly powerful, but you will rarely need to use it yourself.

It is for speed and efficiency. Essentially all you are working with is a single int.
if ((flags & AnchorStyles.Top) == AnchorStyles.Top)
{
    // Do stuff
}

Related

Is C# really slower than say C++?

I've been wondering about this issue for a while now.
Of course there are things in C# that aren't optimized for speed, so using those objects or language features (like LINQ) may cause the code to be slower.
But if you don't use any of those features and just compare the same pieces of code in C# and C++ (it's easy to translate one to the other), will it really be that much slower?
I've seen comparisons that show that C# might be even faster in some cases, because in theory the JIT compiler should optimize the code in real time and get better results:
Managed Or Unmanaged?
We should remember that the JIT compiler compiles the code at run time, but that's a one-time overhead; the same code (once reached and compiled) doesn't need to be compiled again at run time.
The GC doesn't add a lot of overhead either, unless you create and destroy thousands of objects (like using String instead of StringBuilder). And doing that in C++ would also be costly.
Another point that I want to bring up is the better communication between DLLs introduced in .Net. The .Net platform communicates much better than Managed COM based DLLs.
I don't see any inherent reason why the language should be slower, and I don't really think that C# is slower than C++ (both from experience and from the lack of a good explanation).
So, will a piece of code written in C# be slower than the same code in C++?
And if so, then WHY?
Some other reference (which talks about this a bit, but with no explanation of WHY):
Why would you want to use C# if its slower than C++?
Warning: The question you've asked is really pretty complex -- probably much more so than you realize. As a result, this is a really long answer.
From a purely theoretical viewpoint, there's probably a simple answer to this: there's (probably) nothing about C# that truly prevents it from being as fast as C++. Despite the theory, however, there are some practical reasons that it is slower at some things under some circumstances.
I'll consider three basic areas of differences: language features, virtual machine execution, and garbage collection. The latter two often go together, but can be independent, so I'll look at them separately.
Language Features
C++ places a great deal of emphasis on templates, and features in the template system that are largely intended to allow as much as possible to be done at compile time, so from the viewpoint of the program, they're "static." Template meta-programming allows completely arbitrary computations to be carried out at compile time (I.e., the template system is Turing complete). As such, essentially anything that doesn't depend on input from the user can be computed at compile time, so at runtime it's simply a constant. Input to this can, however, include things like type information, so a great deal of what you'd do via reflection at runtime in C# is normally done at compile time via template metaprogramming in C++. There is definitely a trade-off between runtime speed and versatility though -- what templates can do, they do statically, but they simply can't do everything reflection can.
The differences in language features mean that almost any attempt at comparing the two languages simply by transliterating some C# into C++ (or vice versa) is likely to produce results somewhere between meaningless and misleading (and the same would be true for most other pairs of languages as well). The simple fact is that for anything larger than a couple lines of code or so, almost nobody is at all likely to use the languages the same way (or close enough to the same way) that such a comparison tells you anything about how those languages work in real life.
Virtual Machine
Like almost any reasonably modern VM, Microsoft's for .NET can and will do JIT (aka "dynamic") compilation. This represents a number of trade-offs though.
Primarily, optimizing code (like most other optimization problems) is largely an NP-complete problem. For anything but a truly trivial/toy program, you're pretty nearly guaranteed you won't truly "optimize" the result (i.e., you won't find the true optimum) -- the optimizer will simply make the code better than it was previously. Quite a few optimizations that are well known, however, take a substantial amount of time (and, often, memory) to execute. With a JIT compiler, the user is waiting while the compiler runs. Most of the more expensive optimization techniques are ruled out. Static compilation has two advantages: first of all, if it's slow (e.g., building a large system) it's typically carried out on a server, and nobody spends time waiting for it. Second, an executable can be generated once, and used many times by many people. The first minimizes the cost of optimization; the second amortizes the much smaller cost over a much larger number of executions.
As mentioned in the original question (and many other web sites) JIT compilation does have the possibility of greater awareness of the target environment, which should (at least theoretically) offset this advantage. There's no question that this factor can offset at least part of the disadvantage of static compilation. For a few rather specific types of code and target environments, it can even outweigh the advantages of static compilation, sometimes fairly dramatically. At least in my testing and experience, however, this is fairly unusual. Target dependent optimizations mostly seem to either make fairly small differences, or can only be applied (automatically, anyway) to fairly specific types of problems. Obvious times this would happen would be if you were running a relatively old program on a modern machine. An old program written in C++ would probably have been compiled to 32-bit code, and would continue to use 32-bit code even on a modern 64-bit processor. A program written in C# would have been compiled to byte code, which the VM would then compile to 64-bit machine code. If this program derived a substantial benefit from running as 64-bit code, that could give a substantial advantage. For a short time when 64-bit processors were fairly new, this happened a fair amount. Recent code that's likely to benefit from a 64-bit processor will usually be available compiled statically into 64-bit code though.
Using a VM also has a possibility of improving cache usage. Instructions for a VM are often more compact than native machine instructions. More of them can fit into a given amount of cache memory, so you stand a better chance of any given code being in cache when needed. This can help keep interpreted execution of VM code more competitive (in terms of speed) than most people would initially expect -- you can execute a lot of instructions on a modern CPU in the time taken by one cache miss.
It's also worth mentioning that this factor isn't necessarily different between the two at all. There's nothing preventing (for example) a C++ compiler from producing output intended to run on a virtual machine (with or without JIT). In fact, Microsoft's C++/CLI is nearly that -- an (almost) conforming C++ compiler (albeit, with a lot of extensions) that produces output intended to run on a virtual machine.
The reverse is also true: Microsoft now has .NET Native, which compiles C# (or VB.NET) code to a native executable. This gives performance that's generally much more like C++, but retains the features of C#/VB (e.g., C# compiled to native code still supports reflection). If you have performance intensive C# code, this may be helpful.
Garbage Collection
From what I've seen, I'd say garbage collection is the poorest-understood of these three factors. Just for an obvious example, the question here mentions: "GC doesn't add a lot of overhead either, unless you create and destroy thousands of objects [...]". In reality, if you create and destroy thousands of objects, the overhead from garbage collection will generally be fairly low. .NET uses a generational scavenger, which is a variety of copying collector. The garbage collector works by starting from "places" (e.g., registers and execution stack) that pointers/references are known to be accessible. It then "chases" those pointers to objects that have been allocated on the heap. It examines those objects for further pointers/references, until it has followed all of them to the ends of any chains, and found all the objects that are (at least potentially) accessible. In the next step, it takes all of the objects that are (or at least might be) in use, and compacts the heap by copying all of them into a contiguous chunk at one end of the memory being managed in the heap. The rest of the memory is then free (modulo finalizers having to be run, but at least in well-written code, they're rare enough that I'll ignore them for the moment).
What this means is that if you create and destroy lots of objects, garbage collection adds very little overhead. The time taken by a garbage collection cycle depends almost entirely on the number of objects that have been created but not destroyed. The primary consequence of creating and destroying objects in a hurry is simply that the GC has to run more often, but each cycle will still be fast. If you create objects and don't destroy them, the GC will run more often and each cycle will be substantially slower as it spends more time chasing pointers to potentially-live objects, and it spends more time copying objects that are still in use.
To combat this, generational scavenging works on the assumption that objects that have remained "alive" for quite a while are likely to continue remaining alive for quite a while longer. Based on this, it has a system where objects that survive some number of garbage collection cycles get "tenured", and the garbage collector starts to simply assume they're still in use, so instead of copying them at every cycle, it simply leaves them alone. This is a valid assumption often enough that generational scavenging typically has considerably lower overhead than most other forms of GC.
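If you want to see this for yourself, a small sketch like the following prints the generation of a surviving object before and after collections (exact promotion behaviour depends on the runtime and GC configuration, so treat the numbers as typical rather than guaranteed):
using System;

class GenerationDemo
{
    static void Main()
    {
        byte[] survivor = new byte[1024];                // stays referenced, so it survives collections
        Console.WriteLine(GC.GetGeneration(survivor));   // typically 0: freshly allocated

        GC.Collect();                                    // survivor is copied and promoted
        Console.WriteLine(GC.GetGeneration(survivor));   // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor));   // typically 2: effectively "tenured"
    }
}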
"Manual" memory management is often just as poorly understood. Just for one example, many attempts at comparison assume that all manual memory management follows one specific model as well (e.g., best-fit allocation). This is often little (if any) closer to reality than many peoples' beliefs about garbage collection (e.g., the widespread assumption that it's normally done using reference counting).
Given the variety of strategies for both garbage collection and manual memory management, it's quite difficult to compare the two in terms of overall speed. Attempting to compare the speed of allocating and/or freeing memory (by itself) is pretty nearly guaranteed to produce results that are meaningless at best, and outright misleading at worst.
Bonus Topic: Benchmarks
Since quite a few blogs, web sites, magazine articles, etc., claim to provide "objective" evidence in one direction or another, I'll put in my two-cents worth on that subject as well.
Most of these benchmarks are a bit like teenagers deciding to race their cars, and whoever wins gets to keep both cars. The web sites differ in one crucial way though: the guy who's publishing the benchmark gets to drive both cars. By some strange chance, his car always wins, and everybody else has to settle for "trust me, I was really driving your car as fast as it would go."
It's easy to write a poor benchmark that produces results that mean next to nothing. Almost anybody with anywhere close to the skill necessary to design a benchmark that produces anything meaningful, also has the skill to produce one that will give the results he's decided he wants. In fact it's probably easier to write code to produce a specific result than code that will really produce meaningful results.
As my friend James Kanze put it, "never trust a benchmark you didn't falsify yourself."
Conclusion
There is no simple answer. I'm reasonably certain that I could flip a coin to choose the winner, then pick a number between (say) 1 and 20 for the percentage it would win by, and write some code that would look like a reasonable and fair benchmark, and produced that foregone conclusion (at least on some target processor--a different processor might change the percentage a bit).
As others have pointed out, for most code, speed is almost irrelevant. The corollary to that (which is much more often ignored) is that in the little code where speed does matter, it usually matters a lot. At least in my experience, for the code where it really does matter, C++ is almost always the winner. There are definitely factors that favor C#, but in practice they seem to be outweighed by factors that favor C++. You can certainly find benchmarks that will indicate the outcome of your choice, but when you write real code, you can almost always make it faster in C++ than in C#. It might (or might not) take more skill and/or effort to write, but it's virtually always possible.
Because you don't always need to use the (and I use this loosely) "fastest" language? I don't drive to work in a Ferrari just because it's faster...
Circa 2005 two MS performance experts from both sides of the native/managed fence tried to answer the same question. Their method and process are still fascinating and the conclusions still hold today - and I'm not aware of any better attempt to give an informed answer. They noted that a discussion of potential reasons for differences in performance is hypothetical and futile, and a true discussion must have some empirical basis for the real world impact of such differences.
So Raymond Chen (of The Old New Thing) and Rico Mariani set rules for a friendly competition. A Chinese/English dictionary was chosen as a toy application context: simple enough to be coded as a hobby side-project, yet complex enough to demonstrate non-trivial data usage patterns. The rules started simple: Raymond coded a straightforward C++ implementation, Rico migrated it to C# line by line with no sophistication whatsoever, and both implementations ran a benchmark. Afterwards, several iterations of optimizations ensued.
The full details are here: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.
This dialogue of titans is exceptionally educational and I wholeheartedly recommend diving in - but if you lack the time or patience, Jeff Atwood compiled the bottom lines beautifully:
Eventually, C++ was 2x faster - but initially, it was 13x slower.
As Rico sums up:
So am I ashamed by my crushing defeat? Hardly. The managed code achieved a very good result for hardly any effort. To defeat the managed version, Raymond had to:
Write his own file/io stuff
Write his own string class
Write his own allocator
Write his own international mapping
Of course he used available lower level libraries to do this, but that's still a lot of work. Can you call what's left an STL program? I don't think so.
That is my experience still, 11 years and who knows how many C#/C++ versions later.
That is no coincidence, of course, as these two languages spectacularly achieve their vastly different design goals. C# wants to be used where development cost is the main consideration (still the majority of software), and C++ shines where you'd spare no expense to squeeze every last ounce of performance out of your machine: games, algo-trading, data centers, etc.
C++ always has an edge in performance. With C#, I don't have to handle memory myself, and I have literally tons of resources available to do my job.
What you need to question yourself is more about which one saves you time. Machines are incredibly powerful now and most of your code should be done in a language that allows you to get the most value in the least amount of time.
If there is a core processing that takes way too long in C#, you could then build a C++ and interop your way to it with C#.
Stop thinking about your code performance. Start building value.
C# is faster than C++. Faster to write. For execution times, nothing beats a profiler.
But C# does not have as many libraries as C++ can interface with easily.
And C# depends heavily on Windows...
BTW, time-critical applications are not coded in C# or Java, primarily due to the uncertainty of when garbage collection will be performed.
In modern times, application or execution speed is not as important as it was previously. Development schedules, correctness and robustness are higher priorities. A high speed version of an application is no good if it has lots of bugs, crashes a lot or, worse, misses an opportunity to get to market or be deployed.
Since development schedules are a priority, new languages are coming out that speed up development. C# is one of these. C# also assists in correctness and robustness by removing features from C++ that cause common problems: one example is pointers.
The differences in execution speed between an application developed with C# and one developed using C++ are negligible on most platforms. This is because the execution bottlenecks are not language dependent but usually depend on the operating system or I/O. For example, if C++ performs a function in 5 ms but C# uses 2 ms, and waiting for data takes 2 seconds, the time spent in the function is insignificant compared to the time waiting for data.
Choose a language that is best suited for the developers, platform and projects. Work towards the goals of correctness, robustness and deployment. The speed of an application should be treated as a bug: prioritize it, compare to other bugs, and fix as necessary.
A better way to look at it: everything is slower than C/C++ because it abstracts away from the bare metal rather than following the stick-and-mud paradigm. It's called systems programming for a reason: you program against the grain, or bare metal. Doing so also grants you speed you cannot achieve with other languages like C# or Java. But alas, C's roots are all about doing things the hard way, so you're mostly going to be writing more code and spending more time debugging it.
C is also case sensitive, and objects in C++ follow strict rule sets. For example, a purple ice cream cone may not be the same as a blue ice cream cone; though they might both be cones, they may not necessarily belong to the cone family, and if you forget to define what a cone is, you get a bug. Thus the properties of ice cream may or may not be clones. Now for the speed argument: C/C++ uses the stack-and-heap approach, and this is where bare metal gets its metal.
With the Boost library you can achieve incredible speeds; unfortunately, most game studios stick to the standard library. The other reason for this might be that software written in C/C++ tends to be massive in file size, as it's a giant collection of files instead of a single file. Also take note that all operating systems are written in C, so generally, why must we ask what could be faster?!
Also, caching is not faster than pure memory management; sorry, but this just doesn't pan out. Memory is something physical; caching is something software does in order to gain a kick in performance. One could also reason that without physical memory, caching would simply not exist. It doesn't void the fact that memory must be managed at some level, whether it's automated or manual.
Why would you write a small application that doesn't require much in the way of optimization in C++, if there is a faster route (C#)?
Getting an exact answer to your question is not really possible unless you perform benchmarks on specific systems. However, it is still interesting to think about some fundamental differences between programming languages like C# and C++.
Compilation
Executing C# code requires an additional step where the code is JIT'ed. With regard to performance that will be in favor of C++. Also, the JIT compiler is only able to optimize the generated code within the unit of code that is JIT'ed (e.g. a method) while a C++ compiler can optimize across method calls using more aggressive techniques.
However, the JIT compiler is able to optimize the generated machine code to closely match the underlying hardware, enabling it to take advantage of additional hardware features if they exist. To my knowledge the .NET JIT compiler doesn't do that, but it would conceivably be able to generate different code for an Atom as opposed to a Pentium CPU.
Memory access
The garbage collected architecture can in many cases create more optimal memory access patterns than standard C++ code. If the memory area used for the first generation is small enough, it can stay within the CPU cache, increasing performance. If you create and destroy a lot of small objects, the overhead of maintaining the managed heap may be smaller than what is required by the C++ runtime. Again, this is highly dependent on the application. A study of Python performance demonstrates that a specific managed Python application is able to scale much better than the compiled version as a result of more optimal memory access patterns.
Don't let this confuse you!
If a C# application is written in the best possible way and a C++ application is written in the best possible way, the C++ one is faster.
There are many reasons here why C++ is inherently faster than C#, such as C# using a virtual machine similar to Java's JVM. Basically, a higher-level language has lower performance (when both are used in the best case).
If you're an experienced professional C# programmer, just as you might be an experienced professional C++ programmer, developing an application using C# is much easier and faster than using C++.
Many other situations between these extremes are possible. For example, you can write a C# application and a C++ application such that the C# app runs faster than the C++ one.
For choosing a language you should consider the circumstances of the project and its subject. For a general business project you should use C#. For a project requiring high performance, like a video converter or image processing, you should choose C++.
Update:
OK, let's compare some practical reasons why the best possible speed of C++ is higher than that of C#. Consider a well-written C# application and the same application in C++:
C# uses a VM as a middle layer for executing the application, which has overhead.
AFAIK the CLR cannot optimize all C# code for the target machine, while a C++ application can be compiled on the target machine with full optimization.
In C#, the best possible runtime optimization means the fastest possible VM, and a VM has overhead anyway.
C# is a higher-level language and thus generates more instructions for the final process. (Consider the difference between an assembly application and a Ruby one; the same relationship holds between C++ and a higher-level language such as C#/Java.)
If you prefer some more practical information, see this. It is about Java, but it also applies to C#.
The primary concern would not be speed, but stability across Windows versions and upgrades. Win32 is mostly immune across Windows versions, making it highly stable.
When servers are decommissioned and software migrated, a lot of anxiety happens around anything using .NET, and usually a lot of panic about .NET versions, but a Win32 application built 10 years ago just keeps running like nothing happened.
I have been specializing in optimization for about 15 years, and I regularly rewrite C++ code, making heavy use of compiler intrinsics as much as possible, because C++ performance is often nowhere near what the CPU is capable of. Cache performance often needs to be considered. Many vector maths instructions are required to replace the standard C++ floating point code.
A great deal of STL code gets rewritten and often runs many times faster. Maths and code which makes heavy use of data can be rewritten with spectacular results, as the CPU approaches its optimum performance.
None of this is possible in C#. To compare their relative real-time performance is really a staggeringly ignorant question. The fastest piece of code in C++ will be when each single assembler instruction is optimised for the task in hand, with no unnecessary instructions at all. Where each piece of memory is used when it is required, and not copied n times because that's what the language design requires. Where each required memory movement works in harmony with the cache.
Where the final algorithm cannot be improved, based on the exact real time requirements, considering accuracy and functionality.
Then you will be approaching an optimal solution.
To compare C# with this ideal situation is staggering. C# can't compete. In fact, I am currently rewriting a whole bunch of C# code (and when I say rewriting I mean removing and replacing it completely) because it is not even in the same city, let alone ball park, when it comes to heavy-lifting real-time performance.
So please, stop fooling yourselves. C# is slow. Dead slow. All software is slowing down, and C# is making this speed decline worse. All software runs using the fetch-execute cycle in assembler (you know – on the CPU). You use 10 times as many instructions; it's going to go 10 times as slow. You cripple the cache; it's going to go even slower. You add garbage collection to a real-time piece of software and you are often fooled into thinking that the code runs 'ok'; there are just those few moments every now and then when the code goes 'a bit slow for a while'.
Try adding a garbage collection system to code where every cycle counts. I wonder if the stock market trading software has garbage collection (you know – on the system running on the new undersea cable which cost $300 million?). Can we spare 300 milliseconds every 2 seconds? What about flight control software on the space shuttle – is GC ok there? How about engine management software in performance vehicles? (Where victory in a season can be worth millions).
Garbage collection in real time is a complete failure.
So no, emphatically, C++ is much faster. C# is a leap backwards.

Performance gains in re-writing C# code in C/C++

I wrote part of a program that does some heavy work with strings in C#. I initially chose C# not only because it was easier to use .NET's data structures, but also because I need to use this program to analyse some 2-3 million text records in a database, and it is much easier to connect to databases using C#.
There was a part of the program that was slowing down the whole code, and I decided to rewrite it in C using pointers to access every character in the string, and now the part of the code that took some 119 seconds to analyse 10,000,000 strings in C# takes the C code only 5 seconds! Performance is a priority, so I am considering rewriting the whole program in C, compiling it into a dll (something which I didn't know how to do when I started writing the program) and using DllImport from C# to use its methods to work with the database strings.
Given that rewriting the whole program will take some time, and since using DllImport to work with C#'s strings requires marshalling and such things, my question is will the performance gains from the C dll's faster string handling outweigh the performance hit of having to repeatedly marshal strings to access the C dll from C#?
First, profile your code. You might find some real headsmacker that speeds the C# code up greatly.
Second, writing the code in C using pointers is not really a fair comparison. If you are going to use pointers, why not write it in assembly language and get real performance? (Not really, just reductio ad absurdum.) A better comparison for native code would be to use std::string. That way you still get a lot of help from the string class and C++ exception-safety.
Given that you have to read 2-3 million records from the DB to do this work, I very much doubt that the time spent cracking the strings is going to outweigh the elapsed time taken to load the data from the DB. So, consider instead how to structure your code so that you can begin string processing while the DB load is in progress.
If you use a SqlDataReader (say) to load the rows sequentially, it should be possible to batch up N rows as fast as possible and hand off to a separate thread for the post-processing that is your current headache and reason for this question. If you are on .Net 4.0 this is simplest to do using Task Parallel Library, and System.Collections.Concurrent could also be useful for collation of results between the threads.
This approach should mean that neither the DB latency nor the string processing is a show-stopping bottleneck, because they happen in parallel. This applies even if you are on a single-processor machine because your app can process strings while it's waiting for the next batch of data to come back from the DB over the network. If you find string processing is the slowest, use more threads (ie. Tasks) for that. If the DB is the bottleneck, then you have to look at external means to improve its performance - DB hardware or schema, network infrastructure. If you need some results in hand before processing more data, TPL allows dependencies to be created between Tasks and the coordinating thread.
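A rough sketch of that shape, assuming a hypothetical connection string, query and ProcessBatch method (none of this is the questioner's actual code); Task.Factory.StartNew and BlockingCollection are both available in .NET 4.0:
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;

class Pipeline
{
    static void Run(string connectionString, string query)
    {
        // Bounded queue so the DB reader can't run arbitrarily far ahead of the workers.
        using (var batches = new BlockingCollection<List<string>>(boundedCapacity: 8))
        {
            Task consumer = Task.Factory.StartNew(() =>
            {
                foreach (List<string> batch in batches.GetConsumingEnumerable())
                    ProcessBatch(batch);                 // the string crunching, off the DB thread
            });

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    var batch = new List<string>(10000);
                    while (reader.Read())
                    {
                        batch.Add(reader.GetString(0));
                        if (batch.Count == 10000)
                        {
                            batches.Add(batch);
                            batch = new List<string>(10000);
                        }
                    }
                    if (batch.Count > 0) batches.Add(batch);
                }
            }

            batches.CompleteAdding();                    // tell the consumer no more batches are coming
            consumer.Wait();
        }
    }

    static void ProcessBatch(List<string> batch) { /* hypothetical per-batch string analysis */ }
}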
My point is that I doubt it's worth the pain of re-engineering the entire app in native C or whatever. There are lots of ways to skin this cat.
One option is to rewrite the C code as unsafe C#, which ought to have roughly the same performance and won't incur any interop penalties.
There's no reason to write in C over C++, and C/C++ does not exist.
The performance implications of marshalling are fairly simple. If you have to marshal every string individually, then your performance is gonna suck. If you can marshal all ten million strings in one call, then marshalling isn't gonna make any difference at all. P/Invoke is not the fastest operation in the world but if you only invoke it a few times, it's not really gonna matter.
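For instance, a minimal sketch of the unsafe-C# route mentioned above (compile with /unsafe; CountChar is a made-up example, not the questioner's actual analysis):
static class FastStrings
{
    // Walks the string's characters through a pointer, much like the C version would,
    // but without leaving the .NET process or marshalling anything.
    public static unsafe int CountChar(string s, char target)
    {
        int count = 0;
        fixed (char* start = s)
        {
            char* p = start;
            char* end = start + s.Length;
            while (p < end)
            {
                if (*p == target) count++;
                p++;
            }
        }
        return count;
    }
}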
It might be easier to re-write your core application in C++ and then use C++/CLI to merge it with the C# database end.
There are some pretty good answers here already, especially @Steve Townsend's.
However, I felt it worth underlining a key point: there is intrinsically no reason why C code "will be faster" than C# code. That idea is a myth. Under the bonnet they both produce machine code that runs on the same CPU. As long as you don't ask the C# to do more work than the C, it can perform just as well.
By switching to C, you forced yourself to be more frugal (you avoided using high level features like managed strings, bounds-checking, garbage collection, exception handling, etc, and simply treated your strings as blocks of raw bytes). If you applied these low-level techniques to your C# code (i.e. treating your data as raw blocks of bytes as you did in C), you would find much less difference in the speed.
For example: Last week I re-wrote (in C#) a class that a junior had written (also in C#). I achieved a 25x speed improvement over the original code by applying the same approach I would use if I were writing it in C (i.e. thinking about performance). I achieved the same speedup you're claiming without having to change to a different language at all.
Finally, just because an isolated case can be made 24x faster, it does not mean you can make your whole program 24x faster across the board by porting it all to C. As Steve said, profile it to work out where it's slow, and expend your effort only where it'll provide significant benefits. If you blindly convert to C you'll probably find you've spent a lot of time making some already-working-code a lot less maintainable.
(P.S. My viewpoint comes from 29 years experience writing assembler, C, C++, and C# code, and understanding that the language is just a tool for generating machine-code - in the case of C# vs C++ vs C, it is primarily the programmer's skill, not the language used, that determines whether the code will run quickly or slowly. C/C++ programmers tend to be better than C# programmers because they have to be - C# allows you to be lazy and get the code written quickly, while C/C++ make you do more work and the code takes longer to write. But a good programmer can get great performance out of C#, and a poor programmer can wrest abysmal performance out of C/C++)
With strings being immutable in .NET, I have no doubt that an optimised C implementation will outperform an optimised C# implementation - no doubt!
P/Invoke does incur an overhead, but if you implement the bulk of the logic in C and only expose a very granular API to C#, I believe you are in a much better shape.
At the end of the day, writing an implementation in C means taking longer - but it will give you better performance if you are prepared for the extra development cost.
Make yourself familiar with mixed assemblies - this is better than Interop. Interop is a fast track way to deal with native libs, but mixed assemblies perform better.
Mixed assemblies on MSDN
As usual the main thing is testing and measuring...
For concatenation of long strings or multiple strings, always use StringBuilder. What not everybody knows is that StringBuilder can be used not only to make concatenating strings faster, but also for insertion, removal, and replacement of characters.
If this isn't fast enough for you, you can use char or byte arrays instead of strings and operate on these. If you are done with the manipulation, you can convert the array back to a string.
There is also the option in C# to use unsafe code to get a pointer to a string and modify the otherwise immutable string, but I wouldn't really recommend this.
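To illustrate the StringBuilder point, a minimal sketch:
using System.Text;

StringBuilder sb = new StringBuilder("Hello wrld, hello world");
sb.Insert(7, "o");             // fix the first word in place: "Hello world, hello world"
sb.Replace("hello", "HELLO");  // swap every occurrence of "hello" without allocating a new string per edit
sb.Remove(0, 6);               // drop the leading "Hello "
string result = sb.ToString(); // "world, HELLO world"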
As others have already said, you can use managed C++ (C++/CLI) to interoperate nicely between managed and native code.
Would you mind showing us the code, maybe there are other options for optimizing?
When you start to optimize a program at a late stage (the application was written without optimization in mind) then you have to identify the bottlenecks.
Profiling is the first step to see where all those CPU cycles are going.
Just keep in mind that C# profilers will only profile your .NET application, not the IIS server implemented in the kernel nor the network stack.
And this can be an invisible bottleneck that beats, by several orders of magnitude, what you are focusing on when trying to make progress.
You might think that you have no influence on IIS implemented as a kernel driver, and you are right.
But you can do without it, and save a lot of time and money.
Put your talent where it can make the difference, not where you are forced to run with your feet tied together.
The inherent differences are usually given as 2x less CPU, 5x memory. In practice, few people are good enough at C or C++ to gain the benefits.
There's an additional gain from skimping on Unicode support, but only you can know your application well enough to know if that's safe.
Use the profiler first, and make sure you're not I/O bound.

Does it really matter to distinguish between short, int, long?

In my C# app, I would like to know whether it is really important to use short for smaller numbers, int for bigger etc. Does the memory consumption really matter?
Unless you are packing large numbers of these together in some kind of structure, it will probably not affect the memory consumption at all. The best reason to use a particular integer type is compatibility with an API. Other than that, just make sure the type you pick has enough range to cover the values you need. Beyond that for simple local variables, it doesn't matter much.
The simple answer is that it's not really important.
The more complex answer is that it depends.
Obviously you need to choose a type that will hold your data structure without overflowing, and even if you're only storing smaller numbers then choosing int is probably the most sensible thing to do.
However, if your application loads a lot of data or runs on a device with limited memory then you might need to choose short for some values.
For C# apps that aren't trying to mirror some sort of structure from a file, you're better off using ints or whatever your native format is. The only other time it might matter is if using arrays on the order of millions of entries. Even then, I'd still consider ints.
Only you can be the judge of whether the memory consumption really matters to you. In most situations it won't make any discernible difference.
In general, I would recommend using int/Int32 where you can get away with it. If you really need to use short, long, byte, uint etc in a particular situation then do so.
This is entirely relative to the amount of memory you can afford to waste. If you aren't sure, it probably doesn't matter.
The answer is: it depends. The question of whether memory matters is entirely up to you. If you are writing a small application that has minimal storage and memory requirements, then no. If you are google, storing billions and billions of records on thousands of servers, then every byte can cost some real money.
There are a few cases where I really bother choosing.
When I have memory limitations
When I do bitshift operations
When I care about x86/x64 portability
Every other case is int all the way
Edit: About x86/x64
In C#, an int is always 32 bits, on x86 and on x64 alike. What changes between architectures is the size of pointers and of native types such as IntPtr (and the native long on some platforms).
So if you write "int" everywhere and interop with native code, moving from one architecture to another can lead to problems. For example, you call a 32-bit API that returns a pointer-sized value or a native long; you cast it to an int and everything is fine. But when you move to x64, all hell breaks loose.
Those native sizes are defined by the architecture, so when you change architecture you need to be aware that such casts might lead to potential problems.
That all depends on how you are using them and how many you have. Even if you only have a few in memory at a time - this might drive the data type in your backing store.
Memory consumption based on the type of integers you are storing is probably not an issue in a desktop or web app. In a game or a mobile device app, it may be more of an issue.
However, the real reason to differentiate between the types is the kind of numbers you need to store. If you have really big numbers, or high precision, you may need to use long to store it.
The context of the situation is very important here. You don't need to guess whether it is important or not, though; we are dealing with quantifiable things. We know that we save 2 bytes by using a short instead of an int.
What do you estimate the largest number of instances in memory at a given point in time will be? If there are a million, then you are saving ~2 MB of RAM. Is that a large amount of RAM? Again, it depends on the context: if the app is running on a desktop with 4 GB of RAM, you probably don't care too much about the 2 MB.
If there will be hundreds of millions of instances in memory, the savings get pretty big, but if that is the case you may simply not have enough RAM to deal with it, and you may have to store the structure on disk and work with parts of it at a time.
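If you want to put a rough number on it yourself, a quick sketch (GC.GetTotalMemory is only approximate, so expect the figures to be in the right ballpark rather than exact):
using System;

class SizeDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true);
        int[] asInts = new int[1000000];
        long intBytes = GC.GetTotalMemory(true) - before;     // roughly 4,000,000 bytes plus array overhead

        before = GC.GetTotalMemory(true);
        short[] asShorts = new short[1000000];
        long shortBytes = GC.GetTotalMemory(true) - before;   // roughly 2,000,000 bytes plus array overhead

        Console.WriteLine("{0} vs {1}", intBytes, shortBytes);
        GC.KeepAlive(asInts);                                  // keep both arrays reachable during measurement
        GC.KeepAlive(asShorts);
    }
}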
Int32 will be fine for almost anything. Exceptions include:
if you have specific needs where a different type is clearly better. Example: if you're writing a 16 bit emulator, Int16 (aka: short) would probably be better to represent some of the internals
when an API requires a certain type
one time, I had an invalid int cast and Visual Studio's first suggestion was to verify my value was less than infinity. I couldn't find a good type for that without using the pre-defined constants, so I used ulong since that was the closest I could come in .NET 2.0 :)

Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

I was going to write a long-winded post, but I'll boil it down here:
I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW, trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64FPS. The code I made to look-up what colors should be placed where, I get 1FPS.
I think it is because of my object-orented code.
Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm at least calling 360K getters per frame, which I think is a lot of overhead. Each class contains 1D and/or 2D arrays of custom enumerations, ints, Colors, Vector2Ds, or bytes.
What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess, and ditch the concepts of object-oriented coding, but the speed might be much faster.
I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will done in blocks. E.g., all writing to arrays will take place before any data is accessed from them.
As for casting, I stated that I'm using custom enumerations, ints, Colors, Vector2Ds, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think that constant casting might be a cause of slowdown here.
Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to use constant multiplication, addition, subtraction, division per frame. :)
There's a terrific presentation from GDC 2008 that is worth reading if you are an XNA developer. It's called Understanding XNA Framework Performance.
For your current architecture - you haven't really described it well enough to give a definite answer - you probably are doing too much unnecessary "stuff" in a tight loop. If I had to guess, I'd suggest that your current method is thrashing the cache - you need to fix your data layout.
In the ideal case you should have a nice big array of small-as-possible value types (structs not classes), and a heavily inlined loop that shoves data into it linearly.
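In XNA terms that might look something like the sketch below; screenTexture, spriteBatch, palette and LookUpColorIndex are assumed to exist elsewhere (a 256x240 Texture2D, the usual SpriteBatch, and whatever lookup the emulator uses), so treat it as an outline rather than drop-in code:
// One flat array of value types (Color is a struct), filled linearly, then pushed to the GPU in one call.
Color[] buffer = new Color[256 * 240];

for (int y = 0; y < 240; y++)
{
    int row = y * 256;
    for (int x = 0; x < 256; x++)
    {
        buffer[row + x] = palette[LookUpColorIndex(x, y)];   // hypothetical palette/lookup
    }
}

screenTexture.SetData(buffer);                               // one upload per frame instead of 65K per-pixel calls
spriteBatch.Draw(screenTexture, Vector2.Zero, Color.White);  // inside the usual Begin/End pair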
(Aside: regarding what is fast: Integer and floating point maths is very fast - in general, you shouldn't use lookup tables. Function calls are pretty fast - to the point that copying large structs when you pass them will be more significant. The JIT will inline simple getters and setters - although you shouldn't depend on it to inline anything else in very tight loops - like your blitter.)
HOWEVER - even if optimised - your current architecture sucks. What you are doing flies in the face of how a modern GPU works. You should be loading your sprites onto your GPU and letting it composite your scene.
If you want to manipulate your sprites at a pixel level (for example: palette swapping, as you have mentioned) then you should be using pixel shaders. The CPU on the 360 (and on PCs) is fast, but the GPU is so much faster when you're doing something like this!
The Sprite Effects XNA sample is a good place to get started.
Have you profiled your code to determine where the slowdown is? Before you go rewriting your application, you ought to at least know which parts need to be rewritten.
I strongly suspect that the overhead of the accessors and data conversions is trivial. It's much more likely that your algorithms are doing unnecessary work, recomputing values that they could cache, and other things that can be addressed without blowing up your object design.
Are you specifying a color and such for each pixel or something? If that is the case, I think you should really think about the architecture some more. Start using sprites; that will speed things up.
EDIT
Okay, I think your solution could be to load several sprites with different colours (a sprite of a few pixels) and reuse those. It is faster to point to the same sprite than to assign a different colour to each pixel, as the sprite has already been loaded into memory.
As with any performance problem, you should profile the application to identify the bottlenecks rather than trying to guess. I seriously doubt that getters and setters are at the root of your problem. The compiler almost always inlines these sorts of functions. I'm also curious what you have against math. Multiplying two integers, for instance, is one of the fastest things the computer can do.

What is faster- Java or C# (or good old C)? [closed]

I'm currently deciding on a platform to build a scientific computational product on, and am deciding on either C#, Java, or plain C with Intel compiler on Core2 Quad CPU's. It's mostly integer arithmetic.
My benchmarks so far show Java and C are about on par with each other, and .NET/C# trails by about 5%; however, a number of my coworkers are claiming that .NET with the right optimizations will beat both of these given enough time for the JIT to do its work.
I'd always assumed that the JIT would have done its job within a few minutes of the app starting (probably a few seconds in my case, as it's mostly tight loops), so I'm not sure whether to believe them.
Can anyone shed any light on the situation? Would .NET beat Java? (Or am I best just sticking with C at this point?).
The code is highly multithreaded and data sets are several terabytes in size.
Haskell/Erlang etc are not options in this case as there is a significant quantity of existing legacy C code that will be ported to the new system, and porting C to Java/C# is a lot simpler than to Haskell or Erlang. (Unless of course these provide a significant speedup).
Edit: We are considering moving to C# or Java because they may, in theory, be faster. Every percent we can shave off our processing time saves us tens of thousands of dollars per year. At this point we are just trying to evaluate whether C, Java, or C# would be faster.
The key piece of information in the question is this:
Every percent we can shave off our processing time saves us tens of thousands of dollars per year
So you need to consider how much it will cost to shave each percent off. If that optimization effort costs tens of thousands of dollars per year, then it isn't worth doing. You could make a bigger saving by firing a programmer.
With the right skills (which today are rarer and therefore more expensive) you can hand-craft assembler to get the fastest possible code. With slightly less rare (and expensive) skills, you can do almost as well with some really ugly-looking C code. And so on. The more performance you squeeze out of it, the more it will cost you in development effort, and there will be diminishing returns for ever greater effort. If the profit from this stays at "tens of thousands of dollars per year" then there will come a point where it is no longer worth the effort. In fact I would hazard a guess you're already at that point because "tens of thousands of dollars per year" is in the range of one salary, and probably not enough to buy the skills required to hand-optimize a complex program.
I would guess that if you have code already written in C, the effort of rewriting it all as a direct translation in another language will be 90% wasted effort. It will very likely perform slower simply because you won't be taking advantage of the capabilities of the platform, but instead working against them, e.g. trying to use Java as if it was C.
Also within your existing code, there will be parts that make a crucial contribution to the running time (they run frequently), and other parts that are totally irrelevant (they run rarely). So if you have some idea for speeding up the program, there is no economic sense in wasting time applying it to the parts of the program that don't affect the running time.
So use a profiler to find the hot spots, and see where time is being wasted in the existing code.
Update, having noticed the reference to the code being "multithreaded":
In that case, if you focus your effort on removing bottlenecks so that your program can scale well over a large number of cores, then it will automatically get faster every year at a rate that will dwarf any other optimization you can make. This time next year, quad cores will be standard on desktops. The year after that, 8 cores will be getting cheaper (I bought one over a year ago for a few thousand dollars), and I would predict that a 32 core machine will cost less than a developer by that time.
I'm sorry, but that is not a simple question. It would depend a lot on what exactly is going on. C# is certainly no slouch, and you'd be hard-pressed to say "Java is faster" or "C# is faster". C is a very different beast... it may have the potential to be faster - if you get it right - but in most cases it'll be about the same, just much harder to write.
It also depends on how you do it - locking strategies, how you do the parallelization, the main code body, etc.
Re JIT - you could use NGEN to flatten this, but yes; if you are hitting the same code it should be JITted very early on.
One very useful feature of C#/Java (over C) is that they have the potential to make better use of the local CPU (optimizations etc), without you having to worry about it.
Also - with .NET, consider things like "Parallel Extensions" (to be bundled in 4.0), which gives you a much stronger threading story (compared to .NET without PFX).
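For example, a minimal sketch of what that looks like with Parallel Extensions (the workload here is invented):

using System;
using System.Threading.Tasks;

class ParallelForSketch
{
    static void Main()
    {
        long[] results = new long[10000000];

        // Parallel.For partitions the index range across the available cores.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = (long)i * i % 97;   // stand-in for the real integer arithmetic
        });

        Console.WriteLine(results[12345]);
    }
}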
Don't worry about language; parallelize!
If you have a highly multithreaded, data-intensive scientific code, then I don't think worrying about language is the biggest issue for you. I think you should concentrate on making your application parallel, especially making it scale past a single node. This will get you far more performance than just switching languages.
As long as you're confined to a single node, you're going to be starved for compute power and bandwidth for your app. On upcoming many-core machines, it's not clear that you'll have the bandwidth you need to do data-intensive computing on all the cores. You can do computationally intensive work (like a GPU does), but you may not be able to feed all the cores if you need to stream a lot of data to every one of them.
I think you should consider two options:
MapReduce
Your problem sounds like a good match for something like Hadoop, which is designed for very data-intensive jobs.
Hadoop has scaled to 10,000 nodes on Linux, and you can shunt your work off either to someone else's (e.g. Amazon's, Microsoft's) or your own compute cloud. It's written in Java, so as far as porting goes, you can either call your existing C code from within Java, or you can port the whole thing to Java.
MPI
If you don't want to bother porting to MapReduce, or if for some reason your parallel paradigm doesn't fit the MapReduce model, you could consider adapting your app to use MPI. This would also allow you to scale out to potentially thousands of cores. MPI is the de-facto standard for computationally intensive, distributed-memory applications, and I believe there are Java bindings, but mostly people use MPI with C, C++, and Fortran. So you could keep your code in C and focus on parallelizing the performance-intensive parts. Take a look at OpenMPI for starters if you are interested.
I'm honestly surprised at those benchmarks.
In a computationally intensive product I would place a large wager on C to perform faster. You might write code that leaks memory like a sieve and has interesting threading-related defects, but it should be faster.
The only reason I could think that Java or C# would be faster is due to a short run length in the test. If little or no GC happened, you'll avoid the overhead of actually deallocating memory. If the process is iterative or parallel, try sticking a GC.Collect wherever you think you're done with a bunch of objects (after setting things to null or otherwise removing references).
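A hedged illustration of that suggestion (the helpers and sizes are invented; measure before adopting it for real):

using System;
using System.Collections.Generic;

class GcBatchSketch
{
    static void Main()
    {
        for (int batch = 0; batch < 10; batch++)
        {
            List<int[]> working = LoadBatch(batch);
            Process(working);

            working = null;                  // remove the references...
            GC.Collect();                    // ...then collect now, not in the middle of the next batch
            GC.WaitForPendingFinalizers();
        }
    }

    static List<int[]> LoadBatch(int batch)
    {
        var data = new List<int[]>();        // stand-in for loading a chunk of the data set
        for (int i = 0; i < 1000; i++)
            data.Add(new int[1000]);
        return data;
    }

    static void Process(List<int[]> data)
    {
        foreach (var block in data)
            block[0] = 1;                     // stand-in for the real work
    }
}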
Also, if you're dealing with terabytes of data, my opinion is you're going to be much better off with the deterministic memory allocation that you get with C. If you deallocate roughly close to when you allocate, your heap will stay largely unfragmented. With a GC environment you may very well end up with your program using far more memory after a decent run length than you would guess, just because of fragmentation.
To me this sounds like the sort of project where C would be the appropriate language, but would require a bit of extra attention to memory allocation/deallocation. My bet is that C# or Java will fail if run on a full data set.
Quite some time ago Raymond Chen and Rico Mariani had a series of blog posts incrementally optimising a file load into a dictionary tool. While .NET was quicker early on (i.e. easy to make quick) the C/Win32 approach eventually was significantly faster -- but at considerable complexity (e.g. using custom allocators).
In the end, the answer to which is faster will heavily depend on how much time you are willing to expend on eking every last microsecond out of each approach. That effort (assuming you do it properly, guided by real profiler data) will make a far greater difference than the choice of language/platform.
The first and last performance blog entries:
Chen part 1
Mariani part 1
Chen final part
Mariani final part
(The last link gives an overall summary of the results and some analysis.)
It is going to depend very much on what you are doing specifically. I have Java code that beats C code. I have Java code that is much slower than C++ code (I don't do C#/.NET so cannot speak to those).
So, it depends on what you are doing, I am sure you can find something that is faster in language X than language Y.
Have you tried running the C# code through a profiler to see where it is taking the most time (same with Java and C while you are at it)? Perhaps you need to do something different.
The Java HotSpot VM is more mature (roots of it going back to at least 1994) than the .NET one, so it may come down to the code generation abilities of both for that.
You say "the code is multithreaded" which implies that the algorithms are parallelisable. Also, you save the "data sets are several terabytes in size".
Optimising is all about finding and eliminating bottlenecks.
The obvious bottleneck is the bandwidth to the data sets. Given the size of the data, I'm guessing that the data is held on a server rather than on a desktop machine. You haven't given any details of the algorithms you're using. Is the time taken by the algorithm greater than the time taken to read/write the data/results? Does the algorithm work on subsets of the total data?
I'm going to assume that the algorithm works on chunks of data rather than the whole dataset.
You have two scenarios to consider:
The algorithm takes more time to process the data than it does to get the data. In this case, you need to optimise the algorithm.
The algorithm takes less time to process the data than it does to get the data. In this case, you need to increase the bandwidth between the algorithm and the data.
In the first case, you need a developer that can write good assembler code to get the most out of the processors you're using, leveraging SIMD, GPUs and multicore if they're available. Whatever you do, don't just crank up the number of threads, because as soon as the number of threads exceeds the number of cores, your code goes slower! This is due to the added overhead of switching thread contexts. Another option is to use a SETI-like distributed processing system (how many PCs in your organisation are used for admin purposes - think of all that spare processing power!). C#/Java, as bh213 mentioned, can be an order of magnitude slower than well-written C/C++ using SIMD, etc. But that is a niche skillset these days.
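On the thread-count point, a small C# sketch (the per-slice work is invented) of sizing the worker pool from the core count rather than a fixed number:

using System;
using System.Threading;

class CoreCountSketch
{
    static void Main()
    {
        int workerCount = Environment.ProcessorCount;   // one compute thread per core
        var threads = new Thread[workerCount];

        for (int i = 0; i < workerCount; i++)
        {
            int slice = i;   // capture a copy, not the loop variable
            threads[i] = new Thread(() =>
            {
                long sum = 0;
                for (int n = slice; n < 100000000; n += workerCount)
                    sum += n;                            // stand-in for the real work on this slice
                Console.WriteLine("worker " + slice + ": " + sum);
            });
            threads[i].Start();
        }

        foreach (var t in threads)
            t.Join();
    }
}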
In the latter case, where you're limited by bandwidth, you need to improve the network connecting the data to the processor. Here, make sure you're using the latest Ethernet equipment - 1 Gbps everywhere (PC cards, switches, routers, etc.). Don't use wireless, as that's slower. If there's lots of other traffic, consider a dedicated network in parallel with the 'office' network. Consider storing the data closer to the clients - for every five or so clients, use a dedicated server, connected directly to each of them, that mirrors the data from the main server.
If saving a few percent of processing time saves "tens of thousands of dollars" then seriously consider getting a consultant in, two actually - one software, one network. They should easily pay for themselves in the savings made. I'm sure there's many here that are suitably qualified to help.
But if reducing cost is the ultimate goal, then consider Google's approach - write code that keeps the CPU ticking over below 100%. This saves energy directly and indirectly through reduced cooling, thus costing less. You'll want more bang for your buck so it's C/C++ again - Java/C# have more overhead, overhead = more CPU work = more energy/heat = more cost.
So, in summary, when it comes to saving money there's a lot more to it than what language you're going to choose.
If there is already a significant quantity of legacy C code that will be added to the system, then why move to C# or Java?
In response to your latest edit about wanting to take advantage of any improvements in processing speed: your best bet would be to stick to C, as it runs closer to the hardware than C# and Java, which have the overhead of a runtime environment to deal with. The closer to the hardware you can get, the faster you should be able to run. Higher-level languages such as C# and Java will result in quicker development times, but C - or better yet assembly - will result in quicker processing time, at the cost of longer development time.
I participated in a few of TopCoder's Marathon matches, where performance was the key to victory.
My choice was C#. I think C# solutions placed slightly above Java and were slightly slower than C++... until somebody wrote code in C++ that was an order of magnitude faster. You were allowed to use the Intel compiler, and the winning code was full of SIMD instructions; you cannot replicate that in C# or Java. But if SIMD is not an option, C# and Java should be good enough, as long as you take care to use memory correctly (e.g. watch for cache misses and try to limit memory access to the size of the L2 cache).
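To illustrate the cache-miss point (an invented example, not competition code): the same array summed twice, once in a cache-friendly order and once with a large stride.

using System;
using System.Diagnostics;

class CacheOrderSketch
{
    static void Main()
    {
        const int n = 4096;
        var grid = new int[n, n];

        var sw = Stopwatch.StartNew();
        long rowMajor = 0;
        for (int row = 0; row < n; row++)          // sequential addresses: cache-friendly
            for (int col = 0; col < n; col++)
                rowMajor += grid[row, col];
        Console.WriteLine("row-major:    " + sw.ElapsedMilliseconds + " ms (" + rowMajor + ")");

        sw.Restart();
        long colMajor = 0;
        for (int col = 0; col < n; col++)          // strides by a whole row each step: cache-hostile
            for (int row = 0; row < n; row++)
                colMajor += grid[row, col];
        Console.WriteLine("column-major: " + sw.ElapsedMilliseconds + " ms (" + colMajor + ")");
    }
}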
Your question is poorly phrased (or at least the title is) because it implies this difference is endemic and holds true for all instances of Java/C#/C code.
Thankfully the body of the question is better phrased because it presents a reasonably detailed explanation of the sort of thing your code is doing. It doesn't state what versions (or providers) of c#/java runtimes you are using. Nor does it state the target architecture or machine the code will run on. These things make big differences.
You have done some benchmarking, this is good. Some suggestions as to why you see the results you do:
You aren't as good at writing performant C# code as you are at Java/C (this is not a criticism, or even likely, but it is a real possibility you should consider)
Later versions of the JVM have some serious optimizations to make uncontended locks extremely fast. This may skew things in your favour (And especially the comparison with the c implementation threading primitives you are using)
Since the java code seems to run well compared to the c code it is likely that you are not terribly dependent on the heap allocation strategy (profiling would tell you this).
Since the c# code runs less well than the java one (and assuming the code is comparable) then several possible reasons exist:
You are using (needlessly) virtual functions which the JVM will inline but the CLR will not
The latest JVM does escape analysis, which may make some code paths considerably more efficient (notably those involving string manipulation whose lifetime is stack-bound)
Only the very latest 32 bit CLR will inline methods involving non primitive structs
Some JVM JIT compilers use hotspot style mechanisms which attempt to detect the 'hotspots' of the code and spend more effort re-jitting them.
Without an understanding of what your code spends most of its time doing it is impossible to make specific suggestions. I can quite easily write code which performs much better under the CLR due to use of structs over objects or by targeting runtime specific features of the CLR like non boxed generics, this is hardly instructive as a general statement.
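As an invented example of those CLR-specific points (structs and non-boxed generics), not taken from the question's code:

using System;
using System.Collections;
using System.Collections.Generic;

struct Reading
{
    public int Value;   // a small value type
}

class StructGenericsSketch
{
    static void Main()
    {
        // List<Reading> keeps the structs inline in its backing array:
        // no per-element heap object and no boxing.
        var fast = new List<Reading>(1000000);
        for (int i = 0; i < 1000000; i++)
            fast.Add(new Reading { Value = i });

        // The pre-generics ArrayList boxes every struct into a separate heap object.
        var slow = new ArrayList(1000000);
        for (int i = 0; i < 1000000; i++)
            slow.Add(new Reading { Value = i });

        Console.WriteLine(fast.Count + " / " + slow.Count);
    }
}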
Actually it is 'Assembly language'.
Depends on what kind of application you are writing.
Try The Computer Language Benchmarks Game
http://shootout.alioth.debian.org/u32q/benchmark.php?test=all&lang=csharp&lang2=java&box=1
http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=csharp&lang2=java&box=1
To reiterate a comment, you should be using the GPU, not the CPU, if you are doing arithmetic-heavy scientific computing. Matlab with CUDA plugins would be much more awesome than Java or C# if Matlab licensing is not an issue. The nVidia documentation shows how to compile any CUDA function into a mex file. If you need free software, I like pycuda.
If however, GPUs are not an option, I personally like C for a lot of routines because the optimizations the compiler makes are not as complicated as JIT: you don't have to worry about whether a "class" becomes like a "struct" or not. In my experience, problems can usually be broken down such that higher-level things can be written in a very expressive language like Python (rich primitives, dynamic types, incredibly flexible reflection), and transformations can be written in something like C. Additionally, there's neat compiler software, like PLUTO (automatic loop parallelization and OpenMP code generation), and libraries like Hoard, tcmalloc, BLAS (CUBLAS for gpu), etc. if you choose to go the C/C++ route.
One thing to note is that IF your application(s) would benefit from lazy evaluation, a functional programming language like Haskell may yield speedups of a totally different magnitude than the theoretically optimal structured/OO code, just by not evaluating unnecessary branches.
Also, if you are talking about the monetary benefit of better performance, don't forget to add the cost of maintaining your software into the equation.
Surely the answer is to go and buy the latest PC with the most cores/processors you can afford. If you buy one of the latest 2x4 core PCs you will find not only does it have twice as many cores as a quad core but also they run 25-40% faster than the previous generation of processors/machines.
This will give you approximately a 150% speed up. Far more than choosing Java/C# or C.
And what's more, you get the same again every 18 months if you keep buying new boxes!
You can sit there for months rewriting your code, or I could go down to my local PC store this afternoon and be running faster than all your efforts the same day.
Improving code quality/efficiency is good but sometimes implementation dollars are better spent elsewhere.
Writing in one language or another will only give you small speed ups for a large amount of work. To really speed things up you might want to look at the following:
Buying the latest fastest Hardware.
Moving from 32 bit operating system to 64 bit.
Grid computing.
CUDA / OpenCL.
Using compiler optimisation like vectorization.
I would go with C# (or Java) because your development time will probably be much faster than with C. If you end up needing extra speed then you can always rewrite a section in C and call it as a module.
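A hedged sketch of that "rewrite a section in C and call it as a module" idea: the hot routine lives in a native library and is called from C# via P/Invoke. The library name and function signature here are hypothetical.

using System;
using System.Runtime.InteropServices;

class NativeHotspotSketch
{
    [DllImport("hotloops")]   // hypothetical native library built from the C code
    static extern long SumOfSquares(int[] data, int length);

    static void Main()
    {
        int[] data = { 1, 2, 3, 4 };
        Console.WriteLine(SumOfSquares(data, data.Length));
    }
}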
My preference would be C or C++ because I'm not separated from the machine language by a JIT compiler.
You want to do intense performance tuning, and that means stepping through the hot spots one instruction at a time to see what it is doing, and then tweaking the source code so as to generate optimal assembler.
If you can't get the compiler to generate what you consider good enough assembler code, then by all means write your own assembler for the hot spot(s). You're describing a situation where the need for performance is paramount.
What I would NOT do if I were in your shoes (or ever) is rely on anecdotal generalizations about one language being faster or slower than another. What I WOULD do is multiple passes of intense performance tuning along the lines of THIS and THIS and THIS. I have done this sort of thing numerous times, and the key is to iterate the cycle of diagnosis-and-repair because every slug fixed makes the remaining ones more evident, until you literally can't squeeze another cycle out of that turnip.
Good luck.
Added: Is it the case that there is some seldom-changing configuration information that determines how the bulk of the data is processed? If so, it may be that the program is spending a lot of its time re-interpreting the configuration info to figure out what to do next. If so, it is usually a big win to write a code generator that will read the configuration info and generate an ad-hoc program that can whizz through the data without constantly having to figure out what to do.
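A lighter-weight analogue of that idea (an invented example using a delegate rather than full code generation): interpret the configuration once, up front, so the per-record loop never re-examines it.

using System;

class ConfigOnceSketch
{
    // Build the processing step once from the configuration flags.
    static Func<int, int> BuildStep(bool scale, int offset)
    {
        if (scale)
            return x => x * 2 + offset;
        return x => x + offset;
    }

    static void Main()
    {
        Func<int, int> step = BuildStep(scale: true, offset: 7);   // decided once

        long total = 0;
        for (int i = 0; i < 1000000; i++)
            total += step(i);                                       // no config checks per item

        Console.WriteLine(total);
    }
}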
Depends what you benchmark and on what hardware. I assume it's speed rather than memory or CPU usage. But...
If you have a dedicated machine for an app only with very large amounts of memory then java might be 5% faster.
If you go down into the real world, with limited memory and more apps running on the same machine, .NET looks better at utilizing computing resources: see here.
If the hardware is very constrained, C/C++ wins hands down.
If you are using highly multithreaded code, I would recommend you take a look at the upcoming Task Parallel Library (TPL) for .NET and the Parallel Pattern Library (PPL) for native C++ applications. That will save you a lot of issues with threading/deadlocking and all the other problems that you would otherwise spend a lot of time digging into and solving yourself.
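For instance, a minimal TPL-style sketch (the workload is invented): express the work as tasks and let the library schedule them, rather than hand-rolling threads and locks.

using System;
using System.Linq;
using System.Threading.Tasks;

class TaskSketch
{
    static void Main()
    {
        int parts = Environment.ProcessorCount;

        Task<long>[] partials = Enumerable.Range(0, parts)
            .Select(p => Task.Factory.StartNew(() =>
            {
                long sum = 0;
                for (int i = p; i < 100000000; i += parts)
                    sum += i;                               // stand-in for the real work
                return sum;
            }))
            .ToArray();

        Task.WaitAll(partials);
        Console.WriteLine(partials.Sum(t => t.Result));
    }
}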
For myself, I truly believe that memory management in the managed world will be more efficient and beat native code in the long term.
If much of your code is in C why not keep it?
In principle and by design it's obvious that C is faster. They may close the gap over time, but they always have more levels of indirection and "safety". C is fast because it's "unsafe". Just think about bounds checking. Interfacing to C is supported in every language, so I cannot see why one would not just wrap the C code up, if it's still working, and use it from whatever language you like.
I would consider what everyone else uses - not the folks on this site, but the folks who write the same kind of massively parallel, or super high-performance applications.
I find they all write their code in C/C++. So, just for this fact alone (ie. regardless of any speed issues between the languages), I would go with C/C++. The tools they use and have developed will be of much more use to you if you're writing in the same language.
Aside from that, I've found C# apps to have somewhat less than optimal performance in some areas, multithreading being one. .NET will try to keep you safe from thread problems (probably a good thing in most cases), but this will cause problems in your specific case. To test: try writing a simple loop that accesses a shared object using lots of threads. Run that on a single-core PC and you get better performance than if you run it on a multiple-core box - .NET is adding its own locks to make sure you don't muck it up. (I used Jon Skeet's singleton benchmark: the static-lock one took 1.5 sec on my old laptop and 8.5 sec on my superfast desktop; the lock version is even worse - try it yourself.)
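A rough sketch of the kind of test described (not Jon Skeet's actual benchmark): many threads hammering one shared lock, so every increment fights for the same lock and, on a multi-core box, the contention can make it slower than on a single core.

using System;
using System.Diagnostics;
using System.Threading;

class LockContentionSketch
{
    static readonly object Gate = new object();
    static long counter;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        var threads = new Thread[8];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int n = 0; n < 1000000; n++)
                    lock (Gate) { counter++; }   // every increment contends for the same lock
            });
            threads[i].Start();
        }

        foreach (var t in threads)
            t.Join();

        Console.WriteLine(counter + " increments in " + sw.ElapsedMilliseconds + " ms");
    }
}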
The next point is that with C you tend to access memory and data directly - nothing gets in the way - whereas with C#/Java you will use some of the many classes that are provided. These are good in the general case, but you're after the best, most efficient way to access this data (which, in your case of multiple terabytes, is a big deal; those classes were not designed with such datasets in mind, they were designed for the common cases everyone else uses). So again, you would be safer using C for this - you'll never get the GC clogged up by a class that creates new strings internally while you read a couple of terabytes of data, if you write it in C!
So it may appear that C#/Java can give you benefits over a native application, but I think you'll find those benefits are only realised for the kind of line-of-business applications that are commonly written.
Note that for heavy computations there is a great advantage in having tight loops which can fit in the CPU's first level cache as it avoids having to go to slower memory repeatedly to get the instructions.
Even for level two cache a large program like Quake IV gets a 10% performance increase with 4 Mb level 2 cache versus 1 Mb level 2 cache - http://www.tomshardware.com/reviews/cache-size-matter,1709-5.html
For these tight loops C is most likely the best as you have the most control of the generated machine code, but for everything else you should go for the platform with the best libraries for the particular task you need to do. For instance the netlib libraries are reputed to have very good performance for a very large set of problems, and many ports to other languages are available.
If every percentage point will really save you tens of thousands of dollars, then you should bring in a domain expert to help with the project. Well-designed and well-written code with performance considered at the initial stages may be an order of magnitude faster, saving you 90%, or $900,000. I recently found a subtle flaw in some code that sped up a process by over 100 times. A colleague of mine found an algorithm that was running in O(n^3) that he rewrote to make it O(n log n). This tends to be where the huge performance savings are.
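As a simpler analogue of that kind of algorithmic win (an invented example, not the colleague's code): replacing a nested-loop membership test with a hash set changes the complexity class, which dwarfs any constant-factor language difference.

using System;
using System.Collections.Generic;

class ComplexitySketch
{
    static void Main()
    {
        var rnd = new Random(1);
        int[] haystack = new int[100000];
        for (int i = 0; i < haystack.Length; i++)
            haystack[i] = rnd.Next();

        int[] needles = { 42, 7, 1999 };

        // Slow: scan the whole array for every needle - proportional to needles * haystack.
        int slowHits = 0;
        foreach (int n in needles)
            foreach (int h in haystack)
                if (h == n) { slowHits++; break; }

        // Fast: build the set once, then each lookup is O(1) on average.
        var set = new HashSet<int>(haystack);
        int fastHits = 0;
        foreach (int n in needles)
            if (set.Contains(n)) fastHits++;

        Console.WriteLine(slowHits + " / " + fastHits);
    }
}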
If the problem is so simple that you are certain that a better algorithm cannot be employed giving you significant savings, then C is most likely your best language.
The most important things are already said here. I would add:
The developer utilizes a language which the compiler(s) utilize(s) to generate machine instructions which the processor(s) utilize(s) to use system resources. A program will be "fast" when ALL parts of the chain perform optimally.
So for the "best" language choice:
take that language which you are best able to control and
which is able to instruct the compiler sufficiently to
generate nearly optimal machine code so that
the processor on the target machine is able to utilize processing resources optimally.
If you are not a performance expert you will have a hard time achieving 'peak performance' in ANY language. Possibly C++ still provides the most options to control the machine instructions (especially SSE extensions and so on).
I suggest orienting yourself by the well-known 80:20 rule. It holds fairly well for everything: the hardware, the languages/platforms, and the developer effort.
Developers have always relied on the hardware to fix all performance issues automatically, e.g. through an upgrade to a faster processor. What might have worked in the past will not work in the (near) future. The developer now has the responsibility to structure her programs for parallelized execution. Languages for virtual machines and virtual runtime environments will show some advantage here. And even without massive parallelization, there is little to no reason why C# or Java shouldn't do similarly well to C++.
Edit: See this comparison of C#, Matlab and FORTRAN, where FORTRAN is not the clear winner!
Ref; "My benchmarks so far show Java and C are about on par with each other"
Then your benchmarks are severely flawed...
C will ALWAYS be orders of magnitude faster than both C# and Java unless you do something seriously wrong...!
PS!
Notice that this is not an attempt to bully either C# or Java; I like both Java and C#, and there are other reasons why, for many problems, you would choose either Java or C# instead of C. But neither Java nor C#, in a correctly written test, would ever be able to perform with the same speed as C...
Edited because of the sheer number of comments arguing against my rhetoric
Compare these two buggers...
C#
public class MyClass
{
    public int x;

    public static void Main()
    {
        MyClass[] y = new MyClass[1000000];
        for (int idx = 0; idx < 1000000; idx++)
        {
            y[idx] = new MyClass();
            y[idx].x = idx;
        }
    }
}
against this one (C)
struct MyClass
{
    int x;
};

int main(void)
{
    struct MyClass y[1000000];
    for (int idx = 0; idx < 1000000; idx++)
    {
        y[idx].x = idx;
    }
    return 0;
}
The C# version first of all needs to store its array on the heap. The C version stores the array on the stack. Storing stuff on the stack is merely a matter of changing the value of an integer, while storing stuff on the heap means finding a big enough chunk of memory, which potentially means traversing memory for a pretty long time.
Now, mostly C# and Java allocate huge chunks of memory up front which they keep spending until they run out, which makes this logic execute faster. But even then, comparing that against changing the value of an integer is like an F16 against an oil tanker, speed-wise...
Second of all, in the C version all those objects are already on the stack, so we don't need to explicitly create new objects within the loop. Yet again, for C# this is a "look for available memory" operation, while the C version is a zip (do-nothing) operation.
Third of all is the fact that the C version will automatically delete all these objects when they go out of scope. Yet again, this is an operation which ONLY CHANGES THE VALUE OF AN INTEGER, and which on most CPU architectures would take between 1 and 3 CPU cycles. The C# version doesn't do that, but when the garbage collector kicks in and needs to collect those items, my guess is that we're talking about MILLIONS of CPU cycles...
Also, the C version will instantly become x86 code (on an x86 CPU), while the C# version first becomes IL code. Then, when executed, it has to be JIT compiled, which alone probably takes orders of magnitude longer than simply executing the C version.
Now some wise guy could probably execute the above code and measure CPU cycles. However, there's basically no point in doing so, because mathematically the managed version would probably take several million times the number of CPU cycles of the C version. So my guess is that we're now talking about 5-8 orders of magnitude slower in this example. And sure, this is a "rigged test" in that I "looked for something to prove my point"; however, I challenge those that commented badly against me on this post to create a sample which does NOT execute faster in C, and which also doesn't use constructs which you would normally never use in C because "better alternatives" exist.
Note that C# and Java are GREAT languages. I prefer them over C ANY TIME OF THE DAY. But NOT because they're FASTER. Because they are NOT. They are ALWAYS slower than C and C++. Unless you've coded blindfolded in C or C++...
Edit:
C# of course has the struct keyword, which would seriously change the speed of the above C# version if we changed the C# class to a value type by using the keyword struct instead of class. The struct keyword means that C# would store new objects of the given type on the stack, which for the above sample would increase the speed seriously. Still, the above sample happens to also feature an array of these objects.
Even if we went through and optimized the C# version like this, we would still end up with something several orders of magnitude slower than the C version...
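For reference, the struct variant would look roughly like this (illustrative only): the array becomes a single allocation with the elements stored inline, so there is no per-element new inside the loop and far less garbage-collector pressure.

public struct MyStruct
{
    public int x;
}

public class Program
{
    public static void Main()
    {
        MyStruct[] y = new MyStruct[1000000];   // one allocation; elements stored inline
        for (int idx = 0; idx < 1000000; idx++)
        {
            y[idx].x = idx;                     // no object creation inside the loop
        }
    }
}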
A well-written piece of C code will ALWAYS be faster than C#, Java, Python, or whatever managed language you choose...
As I said, I love C#, and most of the work I do today is C# and not C. However, I don't use C# because it's faster than C. I use C# because I don't need the speed gain C gives me for most of my problems.
Both C# and Java are, though, ridiculously slower than C - and C++ for that matter...
