Which platform should I use: native C++ or C#? [closed] - c#

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I want to develop a Windows application. If I use native C++ and MFC for the user interface, the application will be very fast and tiny, but using MFC is very complicated. If I use C#, the application will be slower than native code and it requires the .NET Framework to run, but developing the GUI is very easy with WinForms. Which one do you prefer?

"fast" and "slow" are subjective, especially with today's PC's. I'm not saying deliberately make the thing slow, but there isn't nearly as much overhead in writing a managed application as you might think. The JIT etc work very well to make the code execute very fast. And you can also NGEN for extra start-up speed if you really need.
Actually, if you have time to learn it, you might want to consider WPF rather than WinForms. It is a different skill-set, but it allows you to make very good use of graphics hardware.
Also, the .NET Framework comes with new OS installs and is still very common on those that pre-date it. So for me it would be a fairly clear choice to develop with C#/.NET. The time to develop a robust and fully tested C++ app (with no leaks, etc.) is, for me at least, much greater than the same in C#.

Be careful not to optimize too early. You mention the speed of the code, but for most Windows user-interface operations the difference is not noticeable: the main bottlenecks of drawing and disk access are no different for either approach.
My recommendation is that you use C# and WPF or WinForms for your user interface. If you encounter slowdowns, use a profiler to determine where, and then consider replacing some business logic with native code, but only if there is a benefit.
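If you do end up hoisting a hot path into native code, the usual route on Windows is to export a plain C-style function from a DLL and call it from C# via P/Invoke. A minimal sketch of the native side (the file name, function name, and signature here are hypothetical, just to show the shape):

    // hotpath.cpp -- build as a DLL, e.g. cl /LD hotpath.cpp
    // The C# side would bind it with something like:
    //   [DllImport("hotpath.dll")] static extern double SumSquares(double[] data, int n);
    extern "C" __declspec(dllexport)
    double SumSquares(const double* data, int n) {
        // A tight numeric loop: the kind of code worth moving out of C#
        // only after a profiler has shown it to be the bottleneck.
        double acc = 0.0;
        for (int i = 0; i < n; ++i)
            acc += data[i] * data[i];
        return acc;
    }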

There are a great many possible Windows applications, and each has its own requirements.
If your application needs to be fast (and what I work on does), then native C++ is a good way to go.
If your application needs to be small (perhaps for really fast transmission over slow lines), then use whatever gets it small.
If your application is likely to be downloaded a lot, you probably want to be leery of later versions of .NET, which your users might not have yet.
If, like most, it will be fast and small enough anyway on the systems it's likely to be used on, use what will allow you to develop fastest and best.
In almost all cases, the right thing to optimize is developer effort. Use whatever gets a high-quality job done fastest and best.

First, though I'm a die-hard C++ coder, I have to admit C# is in most cases perfectly fine where speed and size are concerned. In some cases the application is even smaller, because the runtime is already on the target system. (Don't spam me on that one: an app plus a DLL is smaller than an app with everything built in, and Windows just happens to ship with the "DLL" already there.)
As to coding, I honestly don't think there is a significant difference. I don't spend a lot of my time typing code; most of it goes into thinking out the problem, and the typing part is quite small. Saving a few lines here and there is not an argument for me; if it were, I'd be working in APL. Learning the STL, MFC, and what have you is likely just as intensive as learning the C# libraries. In the end they're all the same.
C# does have one thing going for it: a market. It's the latest "hot" skill, so there's a market for it and jobs are easy to find. Keep in mind, though, that Java was a "hot" skill a few years back, and now every Tom, Dick, and Harry has it on their resume, which makes it harder to carve out a niche.
OK, all that said: I LOVE C++. There's nothing like getting dirty when I really need to. When the MFC libs don't do the job, I take a look at what they're sitting on, and so on and so on. It's a perennial language, and I believe it's still at or near the most-used language in the world. Yay, C++!

Note also that most Windows computers already have .NET installed, so that really shouldn't be a concern.
Also, aside from the .NET installation itself, .NET applications tend to be quite small.
And for most applications with a UI, the speed of the user is the real limiting factor.

C# applications are slower to start than MFC applications, but you might not notice a speed difference between the two once the application is loaded.

Having no information on the application you plan to develop, I vote for WPF.

In my opinion, the requirements should help you decide the platform. What is more important: an application that is easily maintainable, or one that must be extremely fast and small?
A large class of applications nowadays can be written using .NET and managed code, and this is in general beneficial to development in the long term. In my experience, .NET applications are usually fast enough for most use cases, and they are simpler to create.
Native C++ still has its uses, but being merely "faster and smaller" is not much of a justification when "fast enough and small enough" will do.

The speed argument between native and managed code is largely a non-issue at this point. Each release of the .NET Framework makes performance improvements over the previous ones and application performance is always a very high priority for the .NET development teams.
Starting with Windows Vista and Windows Server 2008, the .NET Framework is installed as part of the operating system. It is also delivered through Windows Update, so almost any Windows XP system will have it installed as well. If the requirement that the framework be installed on the target machine really is that much of a problem, there are also compilers that will essentially embed the required runtime portions into your application to generate a single exe, but they are expensive (and, in my opinion, not really worth the cost).

MFC isn't hard to learn; actually, it is very easy.
Almost as easy as C#.

The choice of a language or tool should be dictated by the functional and performance requirements of your project and by your expertise. If performance is a real consideration for you and you have done some analysis that favors C++ over C#, then you have your decision already. Note, though, that an MFC-based application is not terribly efficient either; on the other hand, the overheads of .NET applications are overstated.
Performance is really a function of how well you write your code and what scalability requirements exist. If you only ever have to work with one client and a database of at most 1K records, then we should not be talking about performance.
If ease of development and maintainability are more important, C# would certainly be the choice.
So I am not sure this question can be answered as a choice between A and B with the data you have provided. You need to analyze the functional and non-functional requirements and decide.

"Also if I use C#, then the application will be slower than native code and it requires the .NET Framework to run"
An MFC app requires the MFC DLLs to run (and probably the VC runtime as well!), so they might need to be installed; or, if they are statically linked, they add to the size of the exe.

.NET is easier to work with. Unless you'll lose users by using it or will have trouble with code migration, you should probably use .NET. It is highly unlikely that speed will be an issue for this. Size probably doesn't matter that much, either.

Which technology are you more familiar with?
The information you gave does not include anything that would help decide. Yes, MFC apps tend to be smaller (if you include the runtime size, which isn't a suitable measure in the long run), more responsive, and more costly to develop. So what?

Related

Converting a C++/Python desktop application to web app

I have a desktop application that's essentially a glorified Monte Carlo simulation. The simulation is written in C++ (<10,000 lines of code) and the GUI, which calls the .exe, is written in Python (don't ask, I had (bad) reasons). I'm looking to convert this to a web application on Azure, which I know is going to be a less-than-trivial task, but I'm hoping to get advice on how best to do this. As I see it, I have three options: modifying the C++ code and somehow converting it to a web app, rewriting it entirely in the .NET framework using C#, or rewriting it entirely with Django/Python. Regardless of what I do, I'll have a ton to learn, and I'll need to completely rewrite the front end.
Modifying the C++ code seemed like the best option at first. However, after doing some research, I think my modifications would be heavy, and C++ really isn't an ideal fit for web apps to begin with. I'm worried that the likely heavy modifications, plus the finagling I'll have to do with C++, will greatly outweigh the benefit of reusing a good amount of code.
The next option would be to completely rewrite the program using .NET and C#. I don't know C#, but I've read that it's not a tough transition after knowing C++. I also feel like "translating" certain chunks of C++ code would be manageable, as the languages are so similar.
The final option I see is Python and Django. Obviously, knowing Python should help, but I feel like I'd have to do even more rewriting, as it's less similar to C++. I'm also worried about performance, as I assume Python will be slower to complete the simulation than C#.
Obviously this is going to be a pretty large undertaking regardless of the option I choose, but minimizing the difficulty is certainly a top priority here. In addition, performance is a huge consideration, as the user is under a time crunch, and the more iterations the simulation can run, the better the result.
For the longer term, I would like this to be something that can be worked on in the future. Additionally, I hope to continue writing web apps after I complete this first project. These two considerations make me even more focused on doing this "right" and not just quick.

Is C# really slower than say C++?

I've been wondering about this issue for a while now.
Of course there are things in C# that aren't optimized for speed, so using those objects or language features (like LINQ) may cause the code to be slower.
But if you don't use any of those, and just compare the same pieces of code in C# and C++ (it's easy to translate one to the other), will it really be that much slower?
I've seen comparisons showing that C# might even be faster in some cases, because in theory the JIT compiler optimizes the code at run time and can get better results:
Managed Or Unmanaged?
We should remember that the JIT compiler compiles the code at run time, but that's a one-time overhead: the same code, once reached and compiled, doesn't need to be compiled again.
The GC doesn't add a lot of overhead either, unless you create and destroy thousands of objects (like using String instead of StringBuilder), and doing that in C++ would also be costly.
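(As a purely illustrative C++ sketch of that churn point: the naive pattern below creates and discards a string object on every pass, much like concatenating String in C#, while the second version reserves once and appends in place.)

    #include <string>

    // Naive: s = s + "x" builds a brand-new string (a fresh allocation)
    // on each iteration -- the C++ analogue of String concatenation.
    std::string churn(int n) {
        std::string s;
        for (int i = 0; i < n; ++i)
            s = s + "x";        // allocate, copy, discard: O(n^2) total work
        return s;
    }

    // Better: one up-front reservation and in-place appends,
    // the C++ analogue of using StringBuilder.
    std::string no_churn(int n) {
        std::string s;
        s.reserve(n);           // single allocation
        for (int i = 0; i < n; ++i)
            s += 'x';           // amortized O(1) append, no temporary objects
        return s;
    }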
Another point I want to bring up is the better communication between DLLs introduced in .NET: .NET assemblies communicate much better than COM-based DLLs.
I don't see any inherent reason why the language should be slower, and I don't really think that C# is slower than C++ (both from experience and from the lack of a good explanation).
So, will a piece of code written in C# be slower than the same code in C++? And if so, why?
Another reference (which touches on this a bit, but with no explanation of why):
Why would you want to use C# if its slower than C++?
Warning: The question you've asked is really pretty complex -- probably much more so than you realize. As a result, this is a really long answer.
From a purely theoretical viewpoint, there's probably a simple answer to this: there's (probably) nothing about C# that truly prevents it from being as fast as C++. Despite the theory, however, there are some practical reasons that it is slower at some things under some circumstances.
I'll consider three basic areas of differences: language features, virtual machine execution, and garbage collection. The latter two often go together, but can be independent, so I'll look at them separately.
Language Features
C++ places a great deal of emphasis on templates, and features in the template system that are largely intended to allow as much as possible to be done at compile time, so from the viewpoint of the program, they're "static." Template meta-programming allows completely arbitrary computations to be carried out at compile time (i.e., the template system is Turing-complete). As such, essentially anything that doesn't depend on input from the user can be computed at compile time, so at runtime it's simply a constant. Input to this can, however, include things like type information, so a great deal of what you'd do via reflection at runtime in C# is normally done at compile time via template metaprogramming in C++. There is definitely a trade-off between runtime speed and versatility though -- what templates can do, they do statically, but they simply can't do everything reflection can.
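As a toy illustration of that compile-time computation (a minimal sketch, not from the original answer): a C++ compiler folds the following recursion into the constant 120 before the program ever runs, so nothing is computed at runtime.

    #include <iostream>

    // Compile-time factorial via template metaprogramming:
    // the "computation" happens entirely inside the compiler.
    template <unsigned N>
    struct Factorial {
        static constexpr unsigned long long value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {       // base case stops the recursion
        static constexpr unsigned long long value = 1;
    };

    int main() {
        // Factorial<5>::value is already the constant 120 in the binary.
        std::cout << Factorial<5>::value << "\n";
    }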
The differences in language features mean that almost any attempt at comparing the two languages simply by transliterating some C# into C++ (or vice versa) is likely to produce results somewhere between meaningless and misleading (and the same would be true for most other pairs of languages as well). The simple fact is that for anything larger than a couple lines of code or so, almost nobody is at all likely to use the languages the same way (or close enough to the same way) that such a comparison tells you anything about how those languages work in real life.
Virtual Machine
Like almost any reasonably modern VM, Microsoft's for .NET can and will do JIT (aka "dynamic") compilation. This represents a number of trade-offs though.
Primarily, optimizing code (like most other optimization problems) is largely an NP-complete problem. For anything but a truly trivial/toy program, you're pretty nearly guaranteed you won't truly "optimize" the result (i.e., you won't find the true optimum) -- the optimizer will simply make the code better than it was previously. Quite a few optimizations that are well known, however, take a substantial amount of time (and, often, memory) to execute. With a JIT compiler, the user is waiting while the compiler runs. Most of the more expensive optimization techniques are ruled out. Static compilation has two advantages: first of all, if it's slow (e.g., building a large system) it's typically carried out on a server, and nobody spends time waiting for it. Second, an executable can be generated once, and used many times by many people. The first minimizes the cost of optimization; the second amortizes the much smaller cost over a much larger number of executions.
As mentioned in the original question (and many other web sites) JIT compilation does have the possibility of greater awareness of the target environment, which should (at least theoretically) offset this advantage. There's no question that this factor can offset at least part of the disadvantage of static compilation. For a few rather specific types of code and target environments, it can even outweigh the advantages of static compilation, sometimes fairly dramatically. At least in my testing and experience, however, this is fairly unusual. Target dependent optimizations mostly seem to either make fairly small differences, or can only be applied (automatically, anyway) to fairly specific types of problems. Obvious times this would happen would be if you were running a relatively old program on a modern machine. An old program written in C++ would probably have been compiled to 32-bit code, and would continue to use 32-bit code even on a modern 64-bit processor. A program written in C# would have been compiled to byte code, which the VM would then compile to 64-bit machine code. If this program derived a substantial benefit from running as 64-bit code, that could give a substantial advantage. For a short time when 64-bit processors were fairly new, this happened a fair amount. Recent code that's likely to benefit from a 64-bit processor will usually be available compiled statically into 64-bit code though.
Using a VM also has a possibility of improving cache usage. Instructions for a VM are often more compact than native machine instructions. More of them can fit into a given amount of cache memory, so you stand a better chance of any given code being in cache when needed. This can help keep interpreted execution of VM code more competitive (in terms of speed) than most people would initially expect -- you can execute a lot of instructions on a modern CPU in the time taken by one cache miss.
It's also worth mentioning that this factor isn't necessarily different between the two at all. There's nothing preventing (for example) a C++ compiler from producing output intended to run on a virtual machine (with or without JIT). In fact, Microsoft's C++/CLI is nearly that -- an (almost) conforming C++ compiler (albeit, with a lot of extensions) that produces output intended to run on a virtual machine.
The reverse is also true: Microsoft now has .NET Native, which compiles C# (or VB.NET) code to a native executable. This gives performance that's generally much more like C++, but retains the features of C#/VB (e.g., C# compiled to native code still supports reflection). If you have performance intensive C# code, this may be helpful.
Garbage Collection
From what I've seen, I'd say garbage collection is the poorest-understood of these three factors. Just for an obvious example, the question here mentions: "GC doesn't add a lot of overhead either, unless you create and destroy thousands of objects [...]". In reality, if you create and destroy thousands of objects, the overhead from garbage collection will generally be fairly low. .NET uses a generational scavenger, which is a variety of copying collector. The garbage collector works by starting from "places" (e.g., registers and execution stack) that pointers/references are known to be accessible. It then "chases" those pointers to objects that have been allocated on the heap. It examines those objects for further pointers/references, until it has followed all of them to the ends of any chains, and found all the objects that are (at least potentially) accessible. In the next step, it takes all of the objects that are (or at least might be) in use, and compacts the heap by copying all of them into a contiguous chunk at one end of the memory being managed in the heap. The rest of the memory is then free (modulo finalizers having to be run, but at least in well-written code, they're rare enough that I'll ignore them for the moment).
What this means is that if you create and destroy lots of objects, garbage collection adds very little overhead. The time taken by a garbage collection cycle depends almost entirely on the number of objects that have been created but not destroyed. The primary consequence of creating and destroying objects in a hurry is simply that the GC has to run more often, but each cycle will still be fast. If you create objects and don't destroy them, the GC will run more often and each cycle will be substantially slower as it spends more time chasing pointers to potentially-live objects, and it spends more time copying objects that are still in use.
To combat this, generational scavenging works on the assumption that objects that have remained "alive" for quite a while are likely to continue remaining alive for quite a while longer. Based on this, it has a system where objects that survive some number of garbage collection cycles get "tenured", and the garbage collector starts to simply assume they're still in use, so instead of copying them at every cycle, it simply leaves them alone. This is a valid assumption often enough that generational scavenging typically has considerably lower overhead than most other forms of GC.
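To make the copying scheme described above concrete, here is a heavily simplified, purely illustrative Cheney-style semispace collector in C++ (no generations, fixed-shape objects, explicit root registration -- all simplifications; .NET's real collector is far more elaborate):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Every object is a fixed-shape node with two reference fields.
    struct Node {
        Node* a = nullptr;
        Node* b = nullptr;
        Node* forward = nullptr;   // forwarding pointer, used only mid-collection
    };

    struct Heap {
        std::vector<Node> from, to;   // the two semispaces
        std::size_t next = 0;         // bump-pointer allocation index
        std::vector<Node**> roots;    // root slots (stand-ins for stack/registers)

        explicit Heap(std::size_t n) : from(n), to(n) {}

        Node* alloc() {
            if (next == from.size()) collect();       // full: collect first
            if (next == from.size()) return nullptr;  // still full: exhausted
            Node* p = &from[next++];
            *p = Node{};                              // hand out a zeroed node
            return p;
        }

        Node* copy(Node* n) {                 // move one object into to-space
            if (n == nullptr) return nullptr;
            if (n->forward) return n->forward;        // already moved this cycle
            Node* c = &to[next++];
            *c = *n;                                  // fields fixed up by the scan
            n->forward = c;
            return c;
        }

        void collect() {                      // Cheney's algorithm
            next = 0;
            for (Node** r : roots) *r = copy(*r);     // copy what the roots reach...
            for (std::size_t scan = 0; scan < next; ++scan) {
                to[scan].a = copy(to[scan].a);        // ...then chase pointers
                to[scan].b = copy(to[scan].b);        // breadth-first until done
            }
            for (Node& n : from) n.forward = nullptr; // reset for the next cycle
            from.swap(to);                            // flip the semispaces
        }
    };

    int main() {
        Heap h(64);
        Node* live = nullptr;
        h.roots.push_back(&live);                  // register one root slot
        live = h.alloc();                          // a single reachable object
        for (int i = 0; i < 10000; ++i) h.alloc(); // lots of instantly-dead garbage
        // Each collection copied only the one live object; the thousands of
        // dead nodes cost nothing to reclaim, exactly as described above.
        std::cout << "slots in use after churn: " << h.next << "\n";
    }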
"Manual" memory management is often just as poorly understood. Just for one example, many attempts at comparison assume that all manual memory management follows one specific model as well (e.g., best-fit allocation). This is often little (if any) closer to reality than many peoples' beliefs about garbage collection (e.g., the widespread assumption that it's normally done using reference counting).
Given the variety of strategies for both garbage collection and manual memory management, it's quite difficult to compare the two in terms of overall speed. Attempting to compare the speed of allocating and/or freeing memory (by itself) is pretty nearly guaranteed to produce results that are meaningless at best, and outright misleading at worst.
Bonus Topic: Benchmarks
Since quite a few blogs, web sites, magazine articles, etc., claim to provide "objective" evidence in one direction or another, I'll put in my two-cents worth on that subject as well.
Most of these benchmarks are a bit like teenagers deciding to race their cars, with whoever wins getting to keep both cars. The web sites differ in one crucial way, though: the guy who's publishing the benchmark gets to drive both cars. By some strange chance, his car always wins, and everybody else has to settle for "trust me, I was really driving your car as fast as it would go."
It's easy to write a poor benchmark that produces results that mean next to nothing. Almost anybody with anywhere close to the skill necessary to design a benchmark that produces anything meaningful, also has the skill to produce one that will give the results he's decided he wants. In fact it's probably easier to write code to produce a specific result than code that will really produce meaningful results.
As my friend James Kanze put it, "never trust a benchmark you didn't falsify yourself."
Conclusion
There is no simple answer. I'm reasonably certain that I could flip a coin to choose the winner, then pick a number between (say) 1 and 20 for the percentage it would win by, and write some code that would look like a reasonable and fair benchmark, and produced that foregone conclusion (at least on some target processor--a different processor might change the percentage a bit).
As others have pointed out, for most code, speed is almost irrelevant. The corollary to that (which is much more often ignored) is that in the little code where speed does matter, it usually matters a lot. At least in my experience, for the code where it really does matter, C++ is almost always the winner. There are definitely factors that favor C#, but in practice they seem to be outweighed by factors that favor C++. You can certainly find benchmarks that will indicate the outcome of your choice, but when you write real code, you can almost always make it faster in C++ than in C#. It might (or might not) take more skill and/or effort to write, but it's virtually always possible.
Because you don't always need to use the (and I use this loosely) "fastest" language? I don't drive to work in a Ferrari just because it's faster...
Circa 2005 two MS performance experts from both sides of the native/managed fence tried to answer the same question. Their method and process are still fascinating and the conclusions still hold today - and I'm not aware of any better attempt to give an informed answer. They noted that a discussion of potential reasons for differences in performance is hypothetical and futile, and a true discussion must have some empirical basis for the real world impact of such differences.
So Raymond Chen (of The Old New Thing) and Rico Mariani set rules for a friendly competition. A Chinese/English dictionary was chosen as a toy application context: simple enough to be coded as a hobby side-project, yet complex enough to demonstrate non-trivial data usage patterns. The rules started simple: Raymond coded a straightforward C++ implementation, Rico migrated it to C# line by line, with no sophistication whatsoever, and both implementations ran a benchmark. Afterwards, several iterations of optimizations ensued.
The full details are here: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.
This dialogue of titans is exceptionally educational, and I wholeheartedly recommend diving in; but if you lack the time or patience, Jeff Atwood compiled the bottom lines beautifully:
Eventually, C++ was 2x faster - but initially, it was 13x slower.
As Rico sums up:
So am I ashamed by my crushing defeat? Hardly. The managed code achieved a very good result for hardly any effort. To defeat the managed version, Raymond had to:
Write his own file/IO stuff
Write his own string class
Write his own allocator
Write his own international mapping
Of course he used available lower-level libraries to do this, but that's still a lot of work. Can you call what's left an STL program? I don't think so.
That is my experience still, 11 years and who knows how many C#/C++ versions later.
That is no coincidence, of course, as these two languages spectacularly achieve their vastly different design goals. C# wants to be used where development cost is the main consideration (still the majority of software), and C++ shines where you'd spare no expense to squeeze every last ounce of performance out of your machine: games, algo-trading, data centers, etc.
C++ always has an edge in performance. With C#, I don't have to handle memory, and I have literally tons of resources available to do my job.
What you need to ask yourself is which one saves you time. Machines are incredibly powerful now, and most of your code should be written in a language that allows you to get the most value in the least amount of time.
If there is a core process that takes way too long in C#, you could build it in C++ and interop your way to it from C#.
Stop thinking about your code performance. Start building value.
C# is faster than C++. Faster to write, that is. For execution times, nothing beats a profiler.
But C# does not have as many libraries as C++ can interface with easily.
And C# depends heavily on Windows...
BTW, time-critical applications are not coded in C# or Java, primarily due to the uncertainty of when garbage collection will be performed.
In modern times, application execution speed is not as important as it once was. Development schedules, correctness, and robustness are higher priorities. A high-speed version of an application is no good if it has lots of bugs, crashes a lot, or, worse, misses its opportunity to get to market or be deployed.
Since development schedules are a priority, new languages are coming out that speed up development. C# is one of these. C# also assists with correctness and robustness by removing features from C++ that cause common problems; one example is pointers.
The difference in execution speed between an application developed in C# and one developed in C++ is negligible on most platforms. This is because the execution bottlenecks are not language-dependent but usually depend on the operating system or I/O. For example, if C++ performs a function in 5 ms and C# takes 2 ms, but waiting for data takes 2 seconds, the time spent in the function is insignificant compared to the time spent waiting for data.
Choose a language that is best suited for the developers, platform and projects. Work towards the goals of correctness, robustness and deployment. The speed of an application should be treated as a bug: prioritize it, compare to other bugs, and fix as necessary.
A better way to look at it: everything is slower than C/C++ because it abstracts things away rather than following the stick-and-mud paradigm. It's called systems programming for a reason: you program against the grain, on bare metal. Doing so grants you speed you cannot achieve with languages like C# or Java. But alas, C's roots are all about doing things the hard way, so you're mostly going to be writing more code and spending more time debugging it.
C is also case-sensitive, and objects in C++ follow strict rule sets. For example, a purple ice cream cone may not be the same as a blue ice cream cone; though they might both be cones, they may not necessarily belong to the same cone family, and if you forget to define what a cone is, you get a bug. Thus the properties of ice cream may or may not be clones. Now for the speed argument: C/C++ uses the stack-and-heap approach, and this is where bare metal gets its "metal".
With the Boost library you can achieve incredible speeds; unfortunately, most game studios stick to the standard library. One reason for this might be that software written in C/C++ tends to be massive in file size, as it's a giant collection of files instead of a single file. Also note that all operating systems are written in C, so generally, why must we ask what could be faster?
Also, caching is not faster than pure memory management; sorry, but this just doesn't pan out. Memory is something physical; caching is something software does to gain a kick in performance. One could also reason that without physical memory, caching would simply not exist. That doesn't void the fact that memory must be managed at some level, whether automated or manual.
Why would you write a small application that doesn't require much in the way of optimization in C++, if there is a faster route (C#)?
Getting an exact answer to your question is not really possible unless you perform benchmarks on specific systems. However, it is still interesting to think about some fundamental differences between programming languages like C# and C++.
Compilation
Executing C# code requires an additional step in which the code is JIT-compiled. With regard to performance, that counts in favor of C++. Also, the JIT compiler is only able to optimize the generated code within the unit of code being JIT'ed (e.g., a method), while a C++ compiler can optimize across method calls using more aggressive techniques.
However, the JIT compiler is able to optimize the generated machine code to closely match the underlying hardware, enabling it to take advantage of additional hardware features where they exist. To my knowledge the .NET JIT compiler doesn't do that, but it would conceivably be able to generate different code for an Atom as opposed to a Pentium CPU.
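To illustrate the cross-call point with a tiny hypothetical example (not from the answer): a C++ compiler that can see both functions at the call site will typically inline and constant-fold the call away entirely, which a strictly method-at-a-time JIT has less room to do.

    // With optimizations on (e.g. -O2), square() is inlined into answer()
    // and folded at compile time: answer() compiles down to "return 1764;"
    // with no call instruction at all.
    inline int square(int x) { return x * x; }

    int answer() { return square(42); }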
Memory access
The garbage-collected architecture can in many cases create more optimal memory-access patterns than standard C++ code. If the memory area used for the first generation is small enough, it can stay within the CPU cache, increasing performance. If you create and destroy a lot of small objects, the overhead of maintaining the managed heap may be smaller than what is required by the C++ runtime. Again, this is highly dependent on the application. A study of Python performance demonstrates that a specific managed Python application is able to scale much better than the compiled version as a result of more optimal memory-access patterns.
Don't be confused!
If a C# application is written in the best possible way and a C++ application is written in the best possible way, the C++ one is faster.
There are many reasons why C++ is inherently faster than C#, such as C#'s use of a virtual machine similar to Java's JVM. Basically, a higher-level language has lower performance (when both are used in the best case).
If you're an experienced, professional C# programmer, just as you might be an experienced, professional C++ programmer, developing an application in C# is much easier and faster than in C++.
Many other situations between these extremes are possible. For example, you can write a C# application and a C++ application such that the C# app runs faster than the C++ one.
For choosing a language, you should consider the circumstances of the project and its subject. For a general business project, you should use C#. For a project requiring high performance, like a video converter or an image-processing project, you should choose C++.
Update:
OK, let's compare some practical reasons why the best possible speed of C++ exceeds that of C#. Consider a well-written C# application and the same application in C++:
C# uses a VM as a middle layer for executing the application; that has overhead.
AFAIK the CLR cannot optimize all C# code for the target machine, whereas a C++ application can be compiled on the target machine with full optimization.
In C#, the most you can optimize the runtime is to make the VM as fast as possible, and a VM has overhead anyway.
C# is a higher-level language, thus it generates more program code for the final process. (Consider the difference between an assembly application and a Ruby one; the same relationship holds between C++ and a higher-level language such as C#/Java.)
If you prefer to get some more info in practice as an expert, see this. It is about Java but it also applies to C#.
The primary concern would not be speed but stability across Windows versions and upgrades. Win32 is mostly immune to Windows version changes, which makes it highly stable.
When servers are decommissioned and software is migrated, a lot of anxiety surrounds anything using .NET, usually with a lot of panic about .NET versions, but a Win32 application built 10 years ago just keeps running like nothing happened.
I have been specializing in optimization for about 15 years, and I regularly rewrite C++ code, making heavy use of compiler intrinsics as much as possible, because C++ performance is often nowhere near what the CPU is capable of. Cache performance often needs to be considered, and many vector maths instructions are required to replace standard C++ floating-point code.
A great deal of STL code gets rewritten and often runs many times faster. Maths-heavy code and code that makes heavy use of data can be rewritten with spectacular results, as the CPU approaches its optimum performance.
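For a flavor of what replacing standard floating-point code with vector instructions looks like, here is a minimal sketch using SSE intrinsics (the function is invented for illustration and assumes an x86/x64 target):

    #include <immintrin.h>

    // Sum an array of floats four lanes at a time with SSE intrinsics.
    float sum_sse(const float* p, int n) {
        __m128 acc = _mm_setzero_ps();
        int i = 0;
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(p + i)); // 4 adds per instruction

        float lanes[4];
        _mm_storeu_ps(lanes, acc);                      // reduce the 4 partial sums
        float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];

        for (; i < n; ++i)      // scalar tail for the last n % 4 elements
            total += p[i];
        return total;
    }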
None of this is possible in C#. To compare their relative real-time performance is really a staggeringly ignorant question. The fastest piece of code in C++ is achieved when each single assembler instruction is optimised for the task in hand, with no unnecessary instructions at all; where each piece of memory is used when it is required, and not copied n times because that's what the language design requires; where each required memory movement works in harmony with the cache.
Where the final algorithm cannot be improved, based on the exact real time requirements, considering accuracy and functionality.
Then you will be approaching an optimal solution.
To compare C# with this ideal situation is staggering. C# can't compete. In fact, I am currently rewriting a whole bunch of C# code (and when I say rewriting, I mean removing and replacing it completely) because it is not even in the same city, let alone ballpark, when it comes to heavy-lifting real-time performance.
So please, stop fooling yourselves. C# is slow. Dead slow. All software is slowing down, and C# is making this decline worse. All software runs on the fetch-execute cycle in assembler (you know, on the CPU). Use 10 times as many instructions and it's going to go 10 times as slow. Cripple the cache and it's going to go even slower. Add garbage collection to a real-time piece of software and you are often fooled into thinking that the code runs 'OK', with just those few moments every now and then when the code goes 'a bit slow for a while'.
Try adding a garbage-collection system to code where every cycle counts. I wonder if stock-market trading software has garbage collection (you know, the software running on the new undersea cable that cost $300 million). Can we spare 300 milliseconds every 2 seconds? What about flight-control software on the Space Shuttle? Is GC OK there? How about engine-management software in performance vehicles (where victory in a season can be worth millions)?
Garbage collection in real time is a complete failure.
So no, emphatically, C++ is much faster. C# is a leap backwards.

Would Java be a realistic option for writing a music DAW (Digital Audio Workstation)?

I program in Java, doing a lot of web-related stuff, but I've been toying with the idea of creating a very simple DAW in some language. I considered C#, but it doesn't seem to support DirectX anymore (though there are some libraries that work with differing degrees of success). I was curious whether anyone out there had an opinion on playing a lot of multi-channel sounds through Java. I would also at some point need to hack in some VST support (which would probably not be trivial). I'm really afraid that my only option will be C++, and that would be unpleasant enough to make me not actually work on it (I know some C++, but not really enough to write something this intense).
Anyone have some ideas? Thanks
VST support in Java may be reasonably easy after all; I've heard of positive experiences with http://github.com/mhroth/jvsthost (that is to say, someone I had a conversation with on a forum seemed to be up and running with it pretty quickly, running a number of different synths successfully).
An aside: Personally, I'm developing some software in Java that uses SuperCollider as an audio backend (disclaimer: my actual experience of Java sound is limited). While it would probably be just about possible to build a DAW around SuperCollider, I wouldn't really recommend it as the tool for that job. However, I also don't quite understand why you want to build a DAW in the first place... should you decide you want to explore alternative means of making music with computers, you might give SC a look (also ChucK I found very easy to get started with and quite a lot of fun) :-)
Anyway, back to the question... while I tend to refer to Java specifically, much of this will go for C# as well:
Traditionally, garbage collection has been a source of concern doing anything where time is of the essence in Java; in a DAW, for example, this may manifest itself as inaccurate timing or clicks in the output where the GC interrupts the program long enough that it is not able to process a complete buffer. This will be particularly true if you want to use small buffers for low latency, and/or are not careful about the amount of garbage generated. However, I don't want to spread FUD about Java sound: as I mentioned, I haven't really used it heavily myself, and in any case I believe these issues are improving. It is certainly an issue you will need to be aware of, but probably not a show-stopper.
I imagine that a big bottleneck in any DAW will be file IO, which shouldn't suffer through Java as long as proper care is taken.
If you start doing intense DSP on many channels simultaneously, it may be that Java's computation performance isn't totally optimal (although probably not bad, really); however, if you mostly do basic mixing in your DAW code and any DSP with VSTs, then this should be a non-issue anyway.
In terms of actual audio IO, I see that there are also ASIO implementations for Java, should you be interested. I don't even have indirect experience of those, so I really won't vouch for them. Java 1.7 is supposed to have improved low-latency audio support, FWIW (although from what I've read, the applications they have in mind are not things like DAWs). DirectX support I don't think should be a major factor for a DAW. In that sense, you might not want to dismiss C#, as it is a very nice language.
There are already some DAWs that use the Java platform (Frinika or javaDAW, for example), so I think it's a reasonable option.
I'm working on something similar, so I would have to say it is possible. My laptop was stolen and I've had to start over, but I've rebuilt most of it. So far the track threads have been lining up pretty well, but I'm considering implementing something like LWJGL's timer for better precision. Tritonus is a very helpful library, and you can find it at jsresources.org along with some very helpful examples; I've learned a lot there. If you send me an e-mail, I'd be happy to share my code with you.

Why is C used for driver development rather than C#? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
C# driver development?
Why do we use C for device driver development rather than C#?
Because C# programs cannot run in kernel mode (Ring 0).
The main reason is that C is much better suited than C# for low-level (close-to-hardware) development. C was designed as a form of portable assembly code. Also, it may often be difficult or unsafe to use C#: the key case is when drivers run at ring 0, in kernel mode. Most applications are not meant to run in this mode, including the .NET runtime.
While it may be theoretically possible, C is often more suited to the task. Among the reasons are tighter control over what machine code is produced, and being closer to the processor is almost always better when working directly with hardware.
Ignoring C#'s managed-language limitations for a little bit, object-oriented languages such as C# often do things under the hood which may get in the way when developing drivers. Driver development -- actually reading and manipulating bits in hardware -- often has tight timing constraints and makes use of programming practices which are avoided in other types of programming, such as busy waits and dereferencing pointers which were set from integer constant values. Device drivers are often actually written in a mix of C and inline assembly (or make use of some other method of issuing instructions which C compilers don't normally produce). The low-level locking mechanisms alone (written in assembly) are enough to make using C# difficult. C# has lots of locking functionality, but when you dig down, it operates at the threading level. Drivers need to be able to block interrupts as well as other threads to perform some tasks.
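To make the busy-wait and fixed-address pointer point concrete, here is a small illustration in driver-style C++ (the device, register address, and status bit are entirely invented for the example):

    #include <cstdint>

    // Hypothetical memory-mapped device registers at a fixed bus address.
    constexpr std::uintptr_t STATUS_REG_ADDR = 0x40001000;
    constexpr std::uint32_t  READY_BIT       = 1u << 0;

    void wait_until_ready() {
        // The pointer is set from an integer constant; volatile ensures
        // every read really touches the hardware register.
        volatile std::uint32_t* status =
            reinterpret_cast<volatile std::uint32_t*>(STATUS_REG_ADDR);
        while ((*status & READY_BIT) == 0) {
            // busy-wait: spin until the device sets its ready bit
        }
    }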
Object oriented languages also tend to allocate and deallocate (via garbage collection) lots of memory over and over. Within an interrupt handler, though, your ability to access heap allocation functionality is severely restricted. This is both because heap allocation may trigger garbage collection (expensive) and because you have to avoid and deal with both the interrupt code and the non-interrupt code trying to allocate at the same time. Without lots of limitations placed on this code, the compiler's output, and on whatever is managing the code (VM or possibly compiled natively and only using libraries) you will likely end up with some very odd bugs.
Managed languages have to be managed by someone. This management probably relies on a functioning underlying operating system. This produces a bootstrap problem for using managed code for many (but not all) drivers.
It is entirely possible to write device drivers in C#. If you are driving a serial device, such as a GPS device, it may even be trivial (though you will be relying on a UART chip or USB driver somewhere lower down, probably written in C or assembly), since it can be done as an application. If you are writing a driver for an Ethernet card (one that isn't used for network booting), then it may be theoretically possible to use C# for some parts, but you would probably rely heavily on libraries written in other languages and/or an operating system's user-space driver functionality.
C is used for drivers because it has relatively predictable output. If you know a little bit of assembly for a processor, you can write some code in C, compile it for that processor, and have a pretty good guess at what the resulting assembly will look like. You will also know the determinism of those instructions: none of them are going to surprise you by kicking off the garbage collector or calling a destructor (or finalizer).
Because C# is a high-level language, it cannot talk to processors directly. C code compiles straight to native code that a processor can understand.
Just to build a little on Darin Dimitrov's answer: yes, C# programs cannot run in kernel mode.
But why can't they?
In Patrick Dussud's interview for Behind the Code, he describes an attempt made during the development of Vista to include the CLR at a low level*. The wall they hit was that the CLR takes dependencies on the OS security library, which in turn takes dependencies on the UI level. They were not able to resolve this for Vista. Apart from Singularity, I don't know of any other effort to do this.
*Note that while "low level" may not have been sufficient for being able to write drivers in C#, it was at the very least necessary.
C is a language with strong support for interfacing with other peripherals, so C is used to develop system software. The only problem is that you need to manage the memory yourself; it is a developer's nightmare. In C#, memory management is done easily and automatically (there are any number of differences between C and C#; try googling them).
Here's the honest truth: people who tend to be good at hardware or hardware interfaces are usually not very sophisticated programmers. They tend to stick to simpler languages like C. Otherwise, frameworks would have evolved that allow languages like C++ or even C# to be used at kernel level. Entire OSes have been written in C++ (eCos). So, IMHO, it is mostly tradition.
Now, there are some legitimate arguments against using more sophisticated languages in demanding code like drivers and kernels. There is the visibility aspect: C# and C++ compilers do a lot behind the scenes. An innocuous assignment statement may hide a shitload of code (operator overloads, properties). The cost of exceptions may not be clearly visible. Garbage collection makes the lifetime of objects and memory unclear. All the features that make programming easier are also rope to hang yourself with.
Then there is the required ecosystem for the language. If the required features pull in too many components, the size alone may become a factor. If the primitives in the language tend to be heavy (for the sake of useful software abstractions), that's a burden drivers and kernels may not be willing to carry.

Managed language for scientific computing software

Scientific computing is algorithm-intensive and can also be data-intensive. It often needs to use a lot of memory for one analysis and release it before continuing with the next; sometimes it also uses a memory pool to recycle memory across analyses. A managed language is interesting here because it allows the developer to concentrate on the application logic. Since it might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?
Python has become pretty big in scientific computing lately. It is a managed language, so you don't have to remember to free your memory. At the same time, it has packages for scientific and numerical computing (NumPy, SciPy) that give you performance similar to compiled languages. Python can also be integrated with C code pretty easily.
Python is a very expressive language, making it easier to write and read than many traditional languages. It also resembles MATLAB in some ways, making it easier to use for scientists than, say, C++ or Fortran.
The University of Oslo recently started teaching Python as the default language for all science students outside the department of informatics (who still learn Java).
Simula Research Laboratory, which is heavily into scientific computing, partial differential equations, etc., uses Python extensively.
You are asking a fundamentally flawed question. The entire point of managed languages is that you don't handle memory: that is handled by the garbage collector. You can take certain actions to help it do its job efficiently, but it is not your job to do its job.
The things you can do to improve performance, in a world where performance is not controlled by you, are simple: make sure you don't hold onto references you don't need, and use stack-based variables if you need more control over the situation.
F# seems to be somewhat targeted at this audience. There is actually a book called F# for scientists.
Also this question was asked over at Lambda the Ultimate.
You might be surprised at the number of people who use Matlab for this, and since it could be considered a programming language, and it certainly manages its own memory (with support for huge data sets, etc.), it should seriously be considered as a solution here.
Further, it can generate program code (this may require a separate plugin), so once you arrive at an algorithm you want to package up, you can have it generate the C code to perform the work you originally had in your M script or Simulink model.
-Adam
Not exactly sure what the question is, but you might want to check out Fortress
I think I would paraphrase the question this way: is the .NET memory manager capable of handling memory management for scientific computing, where traditionally hand-tuned routines have been used to improve memory performance, especially for very large (gigabyte) matrices?
The author of this article certainly believes that it is:
Harness the Features of C# to Power Your Scientific Computing Projects
As others have pointed out, a major point of managed code is that you don't need to deal with memory management tasks yourself. This is a major advantage as it allows you to concentrate on the algorithms.
I would think that functional languages would be best suited to this type of task.
BlackBox Component Builder, developed by Oberon microsystems, is a component-based development environment for the programming language "Component Pascal".
Due to its stability, performance and simplicity, BlackBox is perfectly suited for science and engineering applications.
http://www.oberon.ch/blackbox.html
(Disclosure: I work for Oberon microsystems)
Regards,
tamberg
The best option is Python with NumPy/SciPy/IPython. It has excellent performance because the core math happens in libraries written in highly optimized C and Fortran. Since you interact with it through Python, everything from your perspective is clean and managed, with extremely succinct, readable code and garbage collection.
The short answer is that you can control the memory and performance of programs written in managed languages by choosing a suitable language (like OCaml or F#) and learning how to optimize in that language. The long answer requires a book on the specific language you are using, such as OCaml for Scientists or Visual F# 2010 for Technical Computing.
The subjects you need to learn about are algorithmic optimizations, low-level optimizations, data structures and the internal representation of types in your chosen language. If you are writing parallel algorithms then it is also particularly important to learn about caches.
With a managed language you don't get that control as easily. The whole point of these languages is to handle malloc, garbage, and so on for you, and each managed language handles this differently.
With Perl, running out of memory is considered a fatal error. You can save the day in some small measure with $^M, but only if your Perl was compiled with that feature and you add code provisions for it.
Because of its overhead, a .NET application will incur a performance penalty relative to an unmanaged application. However, because this overhead is more-or-less a constant unrelated to the overall size of the application (WARNING: over-simplification), it becomes relatively less of a penalty the larger the application.
So I would go with .NET (as long as it provides you with the libraries you need). Managing memory is a pain, and you have to do it a lot to be good at it. Within .NET, choose whatever language you're most comfortable with, so long as it's not J# or VB.NET and is C#.
