Is C# code faster than Visual Basic.NET code? [closed]

Is C# code faster than Visual Basic.NET code, or is that a myth?

That is a myth. Both languages compile down to the same IL and run on the same CLR. However, the IL the two compilers emit for the same routine may come out slightly differently, so certain routines may be marginally (say 0.0000001%) faster in C# and vice versa for VB.NET. But since they are both running on the same common runtime, they are the same in performance where it counts.

The only reason the same code in VB.NET might be slower than in C# is that VB defaults to checked arithmetic and C# doesn't.
By default, integer arithmetic operations in Visual Basic are checked for overflow; in C#, they are not.
If you disable that, then the resulting IL is likely to be identical. To test this, take your code and run it through Reflector; you will see that it looks very similar if you switch between the C# and VB.NET views.
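If you want to see the difference without switching languages, here is a minimal C# sketch of the two behaviours; VB.NET's default corresponds to the checked version:

using System;

class OverflowDemo
{
    static void Main()
    {
        int x = int.MaxValue;

        // C#'s default: no overflow check, the addition silently wraps.
        int wrapped = unchecked(x + 1);
        Console.WriteLine(wrapped); // -2147483648

        // VB.NET's default corresponds to a checked context: it throws.
        try
        {
            Console.WriteLine(checked(x + 1));
        }
        catch (OverflowException)
        {
            Console.WriteLine("checked addition threw OverflowException");
        }
    }
}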
It is possible that an optimization (or just a difference in behaviour) in the C# compiler versus the VB.NET compiler might lead to one slightly favouring the other. This is:
Unlikely to be significant
Low-hanging fruit to fix if it were significant
Unlikely to happen in the first place
C# and VB.NET's abstract syntax trees are very close in structure. You could automatically transliterate a great deal of VB.NET into C# and vice versa; what is more, the result would stand a good chance of looking idiomatic.
There are a few constructs available in C# but not in VB.NET, such as unsafe pointers. Where used, they might provide some benefit, but only if they were actually used, and used properly. If you are down to that sort of optimization you should be benchmarking appropriately.
Frankly, if it makes a really big difference, then the question should not be "Which of C#/VB.NET should I use?"; you should instead be asking yourself why you don't move some code over to C++/CLI.
The only ways I can think of in which the different compilers could introduce serious, pervasive differences are if one chose to:
Implement tail calls in different places
These can make things faster or slower, and would certainly affect deeply recursive functions. The 4.0 JIT compiler on all platforms will now respect all tail-call instructions, even if it has to do a lot of work to achieve it.
Implement iterator blocks or anonymous lambdas significantly more efficiently
I believe both compilers are about as efficient at a high level as they are going to get in this regard. Both languages would require explicit support for the 'yield foreach' style available to F#'s sequence generators.
Box when it is not necessary, perhaps by not using the constrained opcode
I have never seen this happen but would love an example where it does.
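For reference, here is a minimal sketch of the kind of code where the constrained opcode matters, for anyone who wants to inspect the IL; calling through the generic parameter avoids the box that the interface-typed call creates:

using System;

interface IShape
{
    double Area();
}

struct Circle : IShape
{
    public double R;
    public double Area() { return Math.PI * R * R; }
}

class ConstrainedDemo
{
    // The generic version lets the compiler emit a constrained. callvirt,
    // so no box is created when T is a struct.
    static double AreaOf<T>(T shape) where T : IShape
    {
        return shape.Area();
    }

    // The interface-typed version boxes the struct at the call site.
    static double AreaOfBoxed(IShape shape)
    {
        return shape.Area();
    }

    static void Main()
    {
        var c = new Circle { R = 2.0 };
        Console.WriteLine(AreaOf(c));      // no boxing
        Console.WriteLine(AreaOfBoxed(c)); // boxes the Circle
    }
}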
Both the C# and VB.NET compilers currently leave optimization complexities such as enregistering of variables, calling conventions, inlining and unrolling entirely up to the common JIT compiler in the CLR. That is likely to have far more of an impact than anything else (especially now that the 32-bit and 64-bit JITs can behave quite differently).

The framework is written in C#, but that still says nothing about performance differences between C# and VB, as everything is compiled to IL, which is what actually gets executed (after being JIT-compiled and so on).
Each language compiler is responsible for what kind of IL it produces from the source code. If one compiler produces better-suited IL than the other, there could be a performance difference. I don't know of specific areas where they would produce drastically different IL, and I doubt the differences would be huge anyway.
A separate aspect entirely is C#'s ability to run unsafe code, such as using raw pointers, which can buy performance in special scenarios.
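For illustration, a minimal assumed sketch of that raw-pointer style (note it needs the /unsafe compiler switch):

using System;

class UnsafeSum
{
    // Summing through a raw pointer skips the per-access bounds checks
    // that the JIT would otherwise have to prove away.
    static unsafe long Sum(int[] data)
    {
        long total = 0;
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                total += p[i];
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new[] { 1, 2, 3, 4, 5 })); // 15
    }
}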

There might be a slight difference in the compiler optimization, but I'd say there is no noticeable difference. Both C# and VB.NET compile to Common Intermediate Language. In some cases, you may be able to get a significant performance gain in C# by using unsafe code, but under most circumstances I wouldn't recommend doing so. If you need something that performance critical, you shouldn't use C# either.
The myth probably started because of the huge difference in Visual Basic 6 performance compared to the average C++ application.

I was at a Microsoft conference, and the MS employees stated that C# is up to 8% faster than VB.NET. So if this is a myth, it was started by the people who work at MS. If I can find the slides that state this I will post them, but this was when C# had just come out. I think that even if it was true at one point in time, the only reason for one to be faster than the other is how things are configured by default, as ShuggyCoUk said.

It depends on what you're doing. I think there's no real difference between VB and C#. Both of them are .NET languages, and they're compiled to IL.
More info? Read this:
http://devlicio.us/blogs/robert_dunaway/archive/2006/10/19/To-use-or-not-use-Microsoft.VisualBasic.dll-_2800_all-.NET-Languages-could-benefit_3F002900_.aspx

As usual, the answer is that it depends... By itself, no, VB.NET is not slower than C#, at least not by anything you will notice. Yes, there will be slight differences in compiler optimization, but the IL generated will be essentially the same.
However, VB.NET comes with a compatibility library for programmers used to VB6. I'm thinking of the string methods like Left, Right, and Mid that old VB programmers would expect. Those string manipulation functions are slower. I'm not sure you would notice the impact, but depending on the intensity of their use, I'd bet the answer would be yes. Why are those methods slower than the "native" .NET string methods? Because they are less type-safe. Basically, you can throw almost anything at them and they will try to do what you want, just like in old VB6.
I'm thinking about string manipulation here, but if I think harder, I'm sure I'll remember more methods thrown into that compatibility layer (I don't remember the assembly's name, but I remember it is referenced by default in VB.NET) that would have a performance impact if used instead of their "native" .NET equivalents.
So if you keep on programming as if you were in VB6, you might notice an impact. If not, it's OK.
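To make the comparison concrete, here is a small sketch calling one of those compatibility functions from C# next to its "native" equivalent (it assumes a reference to Microsoft.VisualBasic.dll):

using System;
using Microsoft.VisualBasic; // the compatibility assembly VB.NET references by default

class CompatDemo
{
    static void Main()
    {
        string s = "Hello, World!";

        // VB6-style compatibility function: permissive, does extra checks.
        Console.WriteLine(Strings.Left(s, 5)); // "Hello"

        // "Native" .NET equivalent: stricter, and faster.
        Console.WriteLine(s.Substring(0, 5)); // "Hello"
    }
}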

It is not really a myth. While C# and VB.NET both compile to IL, the actual instructions produced are likely to be different because (1) the compilers may have different optimisations, and (2) VB.NET does extra checks by default, e.g. arithmetic overflow. So in many cases the performance will be the same, but in some cases C# will be faster. It's also possible that VB.NET might be quicker in rare circumstances.

There are some small differences in the generated code that may make C# slightly faster in some situations. For example VB.NET has some extra code to clear the local variables in a method while C# doesn't.
However, those differences are barely measurable, and most code is so far from optimal that switching languages to make it run faster is starting at the wrong end. You can take just about any CPU-intensive code and rather easily make it twice as fast. Some code can be made 10 times faster, other code perhaps 10,000 times faster. In that context, the few percent that you may gain by using C# instead of VB.NET is not worth the effort.
On the other hand, learning C# may very well be an effective way of speeding up your code. Not because C# can produce faster code, but because you will get a better understanding of both C# and VB.NET, enabling you to write code that performs better in either language.
Edit:
The C# and VB.NET compilers are obviously developed more or less in sync. The speed difference between C# 1 and C# 2 is something like 30%; the difference between parallel versions of C# and VB.NET is a lot less.

C# and VB.NET are both compiled to IL, as are C++/CLI and F#. In fact, the four languages I mentioned execute at roughly the same speed; there isn't "a faster language" among them. The real difference is between automatically garbage-collected languages (C#, VB.NET, F#, etc.) and those that aren't (such as native C++). The second group is often slower in practice, because the developer rarely knows how and when to collect garbage in heap memory; however, if you are well informed about heap memory, the program could end up faster in C++. Graphics-heavy programs are usually written in C++ (such as most Adobe products). You can also manually trigger a collection in C# (System.GC.Collect();) and in VB.NET (System.GC.Collect).
I know this answer is not entirely on topic, but I want to offer you several options. You can choose the right one for your programs.

Related

Are all languages used within .NET equally performant?

I know the "sales pitch" answer to this question is yes, but is it technically true?
The Common Intermediate Language (CIL) executed by the Common Language Runtime (CLR) is designed around Imperative Programming (IP), and this has obvious implications when dealing with Declarative Programming (DP).
So how efficient is a language based on a different paradigm than the imperative style when implemented on the CLR?
I also get the feeling that the step to DP would incur an extra level of abstraction that might not map to performant code at all; would that be a fair comment?
I have done some simple tests using F# and it all looks great, but am I missing something if the programs get more complex?
There is no guarantee that languages produce the same IL for equivalent code, so I can safely say that there is no guarantee that all .NET languages are equally performant.
However, if they produce the same IL output, then there is no difference.
First of all, the wide range of languages on the .NET platform definitely contains languages that generate code with different performance, so not all languages are equally performant. They all compile to the same intermediate language (IL), but the generated code may differ, and some languages rely on Reflection or the dynamic language runtime (DLR), etc.
However, it is true that the BCL (and other libraries used by the languages) will have the same performance regardless of what language you call them from. This means that if you use some library that does expensive calculations or rendering, without doing complex calculations yourself, it doesn't really matter which language you use to call it.
I think the best way to think about the problem is not to think about languages, but about different features and styles of programming available in those languages. The following lists some of them:
Unsafe code: You can use unsafe code in C++/CLI and, to some extent, also in C#. This is probably the most efficient way to write certain operations, but you lose some safety guarantees.
Statically typed, imperative: This is the usual style of programming in C# and VB.NET, but you can also use an imperative style in F#. Notably, many tail-recursive functions are compiled to statically typed, imperative IL code, so this also applies to some F# functions.
Statically typed, functional: This is used by most F# programs. The generated code is largely different from what the imperative category uses, but it is still statically typed, so there is no significant performance loss. Comparing imperative and functional is somewhat difficult, as the optimal implementation looks quite different in the two styles.
Dynamically typed: Languages like IronPython and IronRuby use the dynamic language runtime, which implements dynamic method calls etc. This is somewhat slower than statically typed code (but the DLR is optimized in many ways; see the sketch after this list). Note that code written using C# 4.0 dynamic also falls into this category.
There are many other languages that may not fall into any of these categories, but I believe the above list covers most of the common cases (and definitely covers all the Microsoft languages).
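As a small sketch of the last category, compare a statically bound call with one that goes through the DLR (the values here are just an assumed example):

using System;

class DynamicVsStatic
{
    static void Main()
    {
        // Statically typed: the + is bound at compile time.
        int a = 2, b = 3;
        Console.WriteLine(a + b);

        // Dynamically typed: the + is resolved at run time through the
        // DLR's call-site machinery (which caches the resolved binding,
        // but is still slower than the static add).
        dynamic x = 2, y = 3;
        Console.WriteLine(x + y);
    }
}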
I'm sure that there are scenarios where idiomatic code is slightly more performant when written in one .NET language than another. However, stepping back a bit, why does it matter? Do you have a performance target in mind? Even within a single language there are often choices which you can make that affect performance, and you will sometimes need to trade performance off against maintainability or development time. If you don't have a target for what constitutes acceptable performance, then it's impossible to evaluate whether any performance differences between languages are meaningful or negligible.
Additionally, compilers evolve, so what's true of the relative performance today won't necessarily hold going forward. And the JIT compiler is evolving too. Even processor designs are variable and evolving, so the same JITted native code can perform differently across processors with different cache hierarchies, pipeline sizes, branch prediction, etc.
Having said all of that, there are probably a few broad rules that largely hold true:
Algorithm differences are probably going to make a bigger difference than compiler differences (at least when comparing statically typed languages running on the CLR)
For problems which can be parallelized easily, languages which make it easy to take advantage of multiple processors/cores will provide a simple way to speed up your code.
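As a sketch of that second rule, here is one assumed way to parallelize a reduction in C# with Parallel.For (available from .NET 4):

using System;
using System.Threading.Tasks;

class ParallelSum
{
    static void Main()
    {
        var data = new double[1000000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        double total = 0;
        object gate = new object();

        // Each partition accumulates a local subtotal; the subtotals are
        // merged under a lock once per partition, not once per element.
        Parallel.For(0, data.Length,
            () => 0.0,
            (i, state, subtotal) => subtotal + data[i],
            subtotal => { lock (gate) { total += subtotal; } });

        Console.WriteLine(total); // 499999500000
    }
}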
At the end of the day, all programming languages are compiled into the native machine code of the CPU they're running on, so the same questions could be asked of any language at all (not just ones that compile to MSIL).
For languages that are essentially just syntactic variants of each other (e.g. C# vs. VB.NET), I wouldn't expect there to be much difference. But if the languages are too divergent (e.g. C# vs. F#), then you can't really make a valid comparison, because you can't really write two "equivalent" non-trivial code samples in both languages anyway.
The language can just be thought of as the "front-end" to the IL code, so the only difference between the languages is whether the compiler will produce the same IL code or less/more efficient code.
From most of what I've read online, it seems the managed C++ compiler does the best job of optimizing the IL code, although I haven't seen anything that shows a remarkable difference between the main languages C#/C++/VB.NET.
You could even try compiling the following to IL and taking a look!
F#
#light
open System
printfn "Hello, World!"
Console.ReadKey(true) |> ignore
C#
// Hello1.cs
public class Hello1
{
    public static void Main()
    {
        System.Console.WriteLine("Hello, World!");
        System.Console.ReadKey(true);
    }
}

Developing Math libraries

I am looking to create a custom math library for the project I am working on. The project is written in C#, and I am slightly concerned whether C# will be fast enough. The library will have a number of custom math formulas and equations to be applied to very large data sets. Simulations and matrix operations (e.g. Monte Carlo simulations) will be done as well, so it'd have to be fast.
One thought is to create the math library in C++ and reference this .dll within the C# project. I am wondering whether it is worth the effort?
The general rule of thumb is "don't optimize until you need to," so I would lean towards just writing it in C# and optimizing the code later on.
But, in this situation where optimizing might require reimplementing everything in another language, I would do some testing first. Write a small app using the most processor-intensive math you expect in both C# and C++, then compare the times to see if the C# one is acceptable.
If you will be using it from C#, then you might as well write it in C# to start with. You buy more with managed code than you save with pointer wrangling. If you are worried about memory and cache issues, then use arrays of value types instead of objects; that gives you more control over how the memory is laid out.
The optimizers and JIT compilers will buy you more than enough speed to make up for any inefficiencies.
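To illustrate the layout point, here is a minimal assumed sketch contrasting an array of value types with an array of objects:

using System;

struct PointStruct { public double X, Y; }
class PointClass { public double X, Y; }

class LayoutDemo
{
    static void Main()
    {
        // One contiguous block of 1000 * 16 bytes: cache-friendly.
        var packed = new PointStruct[1000];

        // An array of 1000 references to separately allocated objects,
        // potentially scattered across the heap.
        var scattered = new PointClass[1000];
        for (int i = 0; i < scattered.Length; i++)
            scattered[i] = new PointClass();

        packed[0].X = 1.0;
        scattered[0].X = 1.0;
        Console.WriteLine(packed[0].X + " " + scattered[0].X);
    }
}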
It's a little hard to say anything definitive one way or the other. I'd suggest sticking with C# if that's what you've started or what the rest of your project is based around. Keep some canonical data sets aside and establish some benchmarks as you develop. If you find performance to fall below some unacceptable threshold, and your profiling leads you to believe the problem is intrinsic to C#, then write a C++ component to solve those specific needs.
One thing to keep in mind is that using bytecode languages like C# or Java lets the JIT compiler in the runtime optimise your code. In practice, this means the runtime performance of your code can get better over time. Unlike C++, where the machine code is produced once at compile time and never changes, the performance of your C# code can continuously improve along with improvements to the underlying JIT compiler.
A serious amount of research is going into JIT compiler technology these days. Taking advantage of this now is an excellent approach.
One of the reasons I like using C# for numerical programming is that it is fairly easy to interface with native code. C# and the latest .NET runtimes and JIT compilers are pretty darn good, but sometimes you just can't beat highly optimized native code. For example, here is what I have done for linear algebra stuff. Write some nice object-oriented classes that hide the implementation of key operations. For me this meant creating Matrix and Vector classes with addition and multiplication functions/operators. When I encountered an algorithm that had to perform several matrix-matrix products and transpose products with rather large matrices (thousands of rows by hundreds of columns) over many iterations, things got too slow. I re-implemented the matrix multiplication function to call a highly optimized matrix-matrix multiplication function from Intel's Math Kernel Library (dgemm). This gave me a better than 20x speed-up. Plus, the unpleasant API for this native routine (dgemm takes no less than 13 parameters!) was hidden from users of the matrix class.
So, I would suggest using C# for your library and drop down to optimized native code when and where needed.
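As a sketch of that "drop down to native code" approach, something like the following hides a BLAS dgemm call behind a Matrix type; the library name ("mkl_rt") and binding details are assumptions you would check against your MKL installation:

using System.Runtime.InteropServices;

public sealed class Matrix
{
    public readonly int Rows, Cols;
    public readonly double[] Data; // row-major storage

    public Matrix(int rows, int cols)
    {
        Rows = rows; Cols = cols; Data = new double[rows * cols];
    }

    // Assumed binding to MKL's CBLAS entry point; verify the DLL name
    // and signature against your installation before relying on this.
    [DllImport("mkl_rt", EntryPoint = "cblas_dgemm")]
    static extern void Dgemm(
        int layout, int transA, int transB,
        int m, int n, int k,
        double alpha, double[] a, int lda,
        double[] b, int ldb,
        double beta, double[] c, int ldc);

    public static Matrix operator *(Matrix a, Matrix b)
    {
        var c = new Matrix(a.Rows, b.Cols);
        const int RowMajor = 101, NoTrans = 111; // CBLAS enum values

        // C = 1.0 * A * B + 0.0 * C; the 13-parameter API stays hidden.
        Dgemm(RowMajor, NoTrans, NoTrans,
              a.Rows, b.Cols, a.Cols,
              1.0, a.Data, a.Cols,
              b.Data, b.Cols,
              0.0, c.Data, b.Cols);
        return c;
    }
}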

Is there any reason C# does not support manual inline methods? And what about optional parameters?

Is there any design reason for that (like the reason they gave up multiple inheritance)?
Or was it just not important enough?
And the same question applies to optional parameters in methods... these were already in the first version of VB.NET, so it was surely not laziness that caused MS not to allow optional parameters; it was probably an architectural decision. And it seems they have had a change of heart about that, because C# 4 is going to include them.
What was the decision and why did they give it up?
Edit:
Maybe readers didn't fully understand me. I've been working lately on a calculation program (supporting numbers of any size, to the last digit), in which some methods are used millions of times per second.
Say I have a method called Add(int num), and this method is used quite a lot with 1 as the parameter (Add(1)). I've found it is faster to implement a special method just for one. And I don't mean overloading: I mean writing a new method called AddOne and literally copying the Add method into it, except that instead of using num I write 1. This might seem horribly weird to you, but it's actually faster.
(as ugly as it is)
That made me wonder why C# doesn't support manual inlining, which could be amazingly helpful here.
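To make the idea concrete, here is an assumed, simplified sketch of that manual specialisation (the real program's digit representation will differ):

using System;

class BigNumber
{
    // Simplified placeholder representation: one decimal digit per slot,
    // least significant digit first.
    int[] digits = new int[1000];

    public void Add(int num)
    {
        int carry = num;
        for (int i = 0; i < digits.Length && carry != 0; i++)
        {
            int sum = digits[i] + carry;
            digits[i] = sum % 10;
            carry = sum / 10;
        }
    }

    // Hand-specialised copy for the common Add(1) call: with the constant
    // folded in, the loop body collapses to an increment-and-test.
    public void AddOne()
    {
        for (int i = 0; i < digits.Length; i++)
        {
            if (digits[i] < 9) { digits[i]++; return; }
            digits[i] = 0; // carry ripples into the next digit
        }
    }

    static void Main()
    {
        var n = new BigNumber();
        n.Add(9);
        n.AddOne(); // same result as n.Add(1), but cheaper per call
    }
}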
Edit 2:
I asked myself whether or not to add this. I'm well aware of the weirdness (and disadvantages) of choosing a platform like .NET for such a project, but I think .NET's optimizations are more important than you think... especially features such as Any CPU.
To answer part of your question, see Eric Gunnerson's blog post: Why doesn't C# have an 'inline' keyword?
A quote from his post:
For C#, inlining happens at the JIT level, and the JIT generally makes a decent decision.
EDIT: I'm not sure of the reason for the delayed optional-parameter support; however, saying they "gave up" on it sounds as though they were expected to implement it based on our expectations of what other languages offered. I imagine it wasn't high on their priority list and they had deadlines to get certain features out the door for each version. It probably didn't rise in importance till now, especially since method overloading was an available alternative. Meanwhile we got generics (2.0) and the features that make LINQ possible, etc. (3.0). I'm happy with the progression of the language; the aforementioned features are more important to me than getting support for optional parameters early on.
Manual inlining would be almost useless. The JIT compiler inlines methods during native code compilation where appropriate, and I think in almost all cases the JIT compiler is better at guessing when it is appropriate than the programmer.
As for optional parameters, I don't know why they weren't there in previous versions. That said, I don't like that they are in C# 4, because I consider them somewhat harmful: the default values get baked into the consuming assembly, so you have to recompile it if you change the default values in a DLL and want the consuming assembly to use the new ones.
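A minimal sketch of why that baking matters (the Logger name is just an assumed example):

using System;

public static class Logger
{
    // If version 2 of this DLL changes the default to level = 2, callers
    // compiled against version 1 keep passing 1 until they are recompiled.
    public static void Log(string msg, int level = 1)
    {
        Console.WriteLine("[" + level + "] " + msg);
    }
}

class Program
{
    static void Main()
    {
        Logger.Log("hi"); // compiled as Logger.Log("hi", 1)
    }
}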
EDIT:
Some additional information about inlining: although you cannot force the JIT compiler to inline a method call, you can force it NOT to inline one. For this, you use System.Runtime.CompilerServices.MethodImplAttribute, like so:
using System.Runtime.CompilerServices;

internal static class MyClass
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void MyMethod()
    {
        // Powerful, magical code
    }

    // Other code
}
My educated guess: the reason earlier versions of C# didn't have optional parameters is bad experiences with them in C++. On the surface they look straightforward enough, but there are a few bothersome corner cases. I think one of Herb Sutter's books describes this in more detail; in general, it has to do with overriding virtual methods. Maximilian has mentioned one of the .NET corner cases in his answer.
You can also pretty much get by without them by manually writing multiple overloads; that may not be very nice for the author of the class, but clients will hardly notice the difference between overloads and optional parameters.
So after all these years w/o them, why did C# 4.0 add them? 1) improved parity with VB.NET, and 2) easier interop with COM.
I'm working lately on a calculation program (support numbers of any size, to the last digit), in which some methods are used literally millions of times per second.
Then you chose a wrong language. I assume you actually profiled your code (right?) and know that there is nothing apart from micro-optimisations that can help you. Also, you're using a high-performance native bigint library and not writing your own, right?
If that's true, don't use .NET. If you think you can gain speed from partial specialisation, go to Haskell, C, Fortran or any other language that either does it automatically, or exposes inlining so you can do it by hand.
If Add(1) really matters to you, heap allocations will matter too.
However, you should really look at what the profiler can tell you...
C# has added them in 4.0: http://msdn.microsoft.com/en-us/library/dd264739(VS.100).aspx
As to why they weren't done from the beginning, it's most likely because they felt method overloads gave more flexibility. With overloading you can specify multiple 'defaults' based on the other parameters that you're taking. It's also not that much more syntax.
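A small assumed sketch of such a 'default that depends on the other parameters', which a single optional parameter could not express:

using System;

class OverloadDefaults
{
    // The 'default' timeout depends on the host argument.
    static void Connect(string host)
    {
        Connect(host, host == "localhost" ? 1 : 30);
    }

    static void Connect(string host, int timeoutSeconds)
    {
        Console.WriteLine("Connecting to " + host +
                          " (timeout " + timeoutSeconds + "s)");
    }

    static void Main()
    {
        Connect("localhost");      // timeout 1
        Connect("example.com");    // timeout 30
        Connect("example.com", 5); // explicit
    }
}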
Even in languages like C++, inlining something doesn't guarantee that it'll happen; it's a hint to the compiler. The compiler can either take the hint, or do its own thing.
C# is another step removed from the generated assembly code (via IL + the JIT), so it becomes even harder to guarantee that something will inline. Furthermore, you have issues like the x86 + x64 implementations of the JIT differing in behaviour.
Java doesn't include an inline keyword either. The better Java JITs can inline even virtual methods, and the use of keywords like private or final makes no difference (it used to, but that is now ancient history).

What makes the Java compiler so fast?

I was wondering what makes the primary Java compiler (javac by Sun) so fast at compilation?
...as well as the C# .NET compiler from Microsoft.
I am comparing them with C++ compilers (such as G++), so maybe my question should have been: what makes C++ compilers so slow? :)
That question was nicely answered in this one: Why does C++ compilation take so long? (as jalf pointed out in the comments section)
Basically it's C++'s lack of a module concept, and the aggressive optimization done by the compiler.
I think the most difficult part is not the need to compile the header files (unless they are really big, but you can use precompiled headers in that case). The worst part is always the fact that C++'s grammar is too wildly context-sensitive. Despite the fact I like C++, I feel sorry for anybody who has to write a C++ parser.
There are a couple of things that make the C++ compiler slower than those of Java/C#. The grammar is much more complex, and generic programming support is much more powerful in C++, but at the same time more expensive to compile. Inclusion of files works differently from importing modules.
Inclusion of header files
First, whenever you include a file in C++, the contents of the file (a .h, usually) are injected into the current compilation unit (include guards avoid re-injecting the same header twice), and this is transitive. That is, if you include header a.h, which in turn includes b.h, your compilation unit will include all code in a.h and all code in b.h.
Java (and C#; I will talk about Java, but they are similar in this) doesn't have include files; it depends on the binaries from the compilation of the used classes. This means that whenever you compile a.java, which uses an object B defined in b.java, the compiler just checks the binary b.class; it does not need to go deeper to check the dependencies of B, so it can cut the process off earlier (with just one level of checking).
At the same time, included files contain only source-level declarations, and processing them takes time. When the Java/C# compiler reads a binary, it has the same information already processed by the compilation step that generated it.
So in the end, in C/C++ more files are included, and at the same time processing those includes is more expensive than processing binary modules.
Templates
Templates are special in their own way. They can be precompiled, but they usually are not (for a good set of reasons). This means that in every compilation unit that uses std::vector, the whole set of vector methods used (unused template methods don't get compiled) is processed and binary code is generated for it by the compiler. At a later step, during linking, redundant definitions of the same method are dropped, but during compilation they must all be processed.
Support for generics in Java is more limited in many ways. In the end, for example, there is only one Vector class binary, and whenever the compiler sees Vector in Java, it generates type-checking code before delegating to the real Vector implementation (which stores plain Object references) and which is not generic. The compiler does provide the type guarantees, but it does not compile Vector for each type.
In C# it is, once again, different. C#'s support for generics is more complex than Java's; in the end, generic classes are different from plain classes, but they still get compiled only once, as the binary format carries all the required information.
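You can see that reification directly from C#; in this small sketch the two constructed types are distinct at run time:

using System;
using System.Collections.Generic;

class ReifiedGenerics
{
    static void Main()
    {
        // Unlike Java's erasure, .NET generics are reified: List<int> and
        // List<string> are distinct types at run time, and the JIT produces
        // specialised native code for value-type instantiations.
        Console.WriteLine(typeof(List<int>));
        Console.WriteLine(typeof(List<string>));
        Console.WriteLine(typeof(List<int>) == typeof(List<string>)); // False
    }
}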
Because they do something quite different. The C++ compiler produces optimized native code, whereas the C#, VB.NET and Java compilers produce an intermediate language that is turned into native code when you first execute the application; that is why you get slow loading of applications in Java etc. the first time you execute them.
The C++ compiler has to do the full optimization up front, whereas the JITed languages optimize while you execute the application.
Someone might argue that you have to measure C++ compile time against Java compile time plus the time for JITing the first time you load the application if you want to be fair, but I don't think that would be right, because you would be comparing native languages to JITed ones, or apples to oranges.
The C++ compiler must repeatedly compile all the header files and there are lots of them, so this is one thing that slows it down.
One of the more time consuming tasks when compiling is code optimization.
Javac does very little optimization on the code when doing the compilation. Optimization is instead done by the JVM when running the application.
C/C++ code needs to be optimized at compile time, since optimizing compiled machine code afterwards is hard.
You got it right in your last sentence: it's not Java or C# that is fast to compile; it's C++ that is exceptionally slow to compile, due to its complex grammar and features, most importantly templates.
If you think javac is fast, try Jikes (see http://jikes.sourceforge.net/). It is a Java compiler written in C++. Unfortunately they haven't kept up with the latest Java compiler specs, but if you want to see fast, this is it.
Tony
I think part of it is the complexity of the languages. C++ is incredibly mutable, with the ability to override pretty much any operator or piece of syntax (such as the () operator). This means the compiler has to do a lot more work just to determine which operations to actually run, even for simple things. Java and C# don't have this issue, as their syntax is fixed and they are generally much simpler to parse.
It's a bit difficult comparing bytecode languages like java with natively compiled languages like C++. A better comparison is Delphi vs C++, where Delphi is much faster to compile. Since this has nothing to do with optimization or byte code, it must be due to differences in language syntax and the relative performance of includes vs. modules/units.
Is the Java compiler fast?
The Java-to-class translation should be blindingly fast, since it is just a glorified zip with some syntax checking. So, to be fair, compared with a real compiler that does optimization and object-code generation, the "translation" from Java to class files is trivial.
I did a comparison with a fairly small "hello world" program against GCC (C/C++/Ada) and found that javac was 30 times slower, and it got even worse at runtime.

Is F# really better than C# for math?

Unmanaged languages notwithstanding, is F# really better than C# for implementing math? And if that's the case, why?
I think most of the important points were already mentioned by someone else:
F# lets you solve problems in a way mathematicians think about them
Thanks to higher-order functions, you can use simpler concepts to solve difficult problems
Everything is immutable by default, which makes the program easier to understand (and also easier to parallelize)
It is definitely true that you can use some of the F# concepts in C# 3.0, but there are limitations. You cannot use deep recursive computations (because C# doesn't guarantee tail calls), and that is how you write primitive computations in a functional/mathematical way. Also, writing complex higher-order functions (that take other functions as arguments) in C# is difficult, because you have to write the types explicitly (while in F# types are inferred, and also automatically generalized, so you don't have to explicitly make a function generic).
Also, I think the following point from Marc Gravell isn't a valid objection:
From a maintenance angle, I'm of the view that suitably named properties etc are easier to use (over full life-cycle) than tuples and head/tail lists, but that might just be me.
This is of course true. However, the great thing about F# is that you can start writing the program using tuples & head/tail lists and later in the development process turn it into a program that uses .NET IEnumerables and types with properties (and that's how I believe a typical F# programmer works*). Tuples etc. and the F# interactive development tools give you a great way to quickly prototype solutions (and when doing something mathematical, this is essential, because most of the development is just experimenting when you're looking for the best solution). Once you have the prototype, you can use simple source code transformations to wrap the code inside an F# type (which can also be used from C# as an ordinary class). F# also gives you a lot of ways to optimize the code later in terms of performance.
This gives you the benefits of easy-to-use languages (e.g. Python) that many people use for the prototyping phase. However, you don't have to rewrite the whole program later, once you're done prototyping, in an efficient language (e.g. C++ or perhaps C#), because F# is both "easy to use" and "efficient", and you can fluently switch between the two styles.
(*) I also use this style in my functional programming book.
F# has many enormous benefits over C# in the context of mathematical programs:
F# interactive sessions let you run code on-the-fly to obtain results immediately and even visualize them, without having to build and execute a complete application.
F# supports some features that can provide massive performance improvements in the context of mathematics. Most notably, the combination of inline and higher-order functions allow mathematical code to be elegantly factored without adversely affecting performance. C# cannot express this.
F# supports some features that make it possible to implement mathematical concepts far more naturally than can be obtained in C#. For example, tail calls make it much easier to implement recurrence relations simply and reliably. C# cannot express this either.
Mathematical problems often require the use of more sophisticated data structures and algorithms. Expressing complicated solutions is vastly easier with F# compared to C#.
If you would like a case study, I converted an implementation of QR decomposition over System.Double from 2kLOC of C#. The F# was only 100 lines of code, runs over 10× faster and is generalized over the type of number so it works not only on float32, float and System.Numerics.Complex but can even be applied to symbolic matrices to obtain symbolic results!
FWIW, I write books on this subject as well as commercial software.
F# supports units of measure, which can be very useful for math work.
I'm from a maths background, and have looked at F#, but I still prefer C# for most purposes. There are a couple of things that F# makes easier, but in general I still prefer C# by a large margin.
Some of the touted F# benefits (immutability, higher-order functions, etc) can still be done in C# (using delegates etc for the latter). This is even more apparent when using C# 3.0 with lambda support, which makes it very easy and expressive to declare functional code.
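For example, a minimal C# 3.0 sketch of higher-order functions and a functional-style pipeline (the names here are just assumed examples):

using System;
using System.Linq;

class FunctionalCSharp
{
    static void Main()
    {
        // Higher-order functions via delegates and lambdas (C# 3.0).
        Func<double, double> square = x => x * x;
        Func<Func<double, double>, double, double> twice = (f, x) => f(f(x));
        Console.WriteLine(twice(square, 3.0)); // 81

        // A side-effect-free pipeline over a sequence.
        int sumOfSquares = Enumerable.Range(1, 10).Select(i => i * i).Sum();
        Console.WriteLine(sumOfSquares); // 385
    }
}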
From a maintenance angle, I'm of the view that suitably named properties etc are easier to use (over full life-cycle) than tuples and head/tail lists, but that might just be me.
One of the areas where C# lets itself down for maths is in generics and their support for operators, so I've spent some time addressing this ;-p My results are available in MiscUtil, with an overview here.
This post looks like it might be relevant: http://fsharpnews.blogspot.com/2007/05/ffts-again.html
Also: C# / F# Performance comparison
The biggest advantage for pure math is what PerpetualCoder said: F# looks more like a math problem, so it's going to be easier for a mathematician to write. It reminded me a lot of MATLAB when I looked at it.
I am not sure if it's better or worse, but there is certainly a difference in approach. Static languages over-specify how a problem will be solved. Functional languages like F# or Haskell do not, and are more tailored to how a mathematician would solve a particular problem. Then you have books like this that tout Python as good at it. If you are talking from a performance point of view, nothing beats C. If you are talking about libraries, I believe functional languages (F# and the like), Fortran (yes, it's not dead yet) and Python have excellent libraries for math.
One of the great advantages of functional languages is that, because data is immutable by default, they can run on multi-processor or multi-core systems in parallel, often without requiring you to change any code.
That means you can often speed up your algorithms by simply adding cores.
