Developing Math libraries - c#

I am looking to create a custom math library for the project I am working on. The project is written in C#, and I am slightly concerned whether C# will be fast enough. The library will have a number of custom math formulas and equations to be applied to very large data sets. Simulations and matrix operations (e.g. Monte Carlo simulations) will be done as well, so it has to be fast.
One thought is to create the math library in C++ and reference the resulting .dll within the C# project. I am wondering whether that is worth the effort.

The general rule of thumb is "don't optimize until you need to," so I would lean towards just writing it in C# and optimizing the code later on.
But, in this situation where optimizing might require reimplementing everything in another language, I would do some testing first. Write a small app using the most processor-intensive math you expect in both C# and C++, then compare the times to see if the C# one is acceptable.
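A minimal sketch of what the C# side of that test could look like, with a dot-product loop standing in for whatever formula you actually care about (the names and sizes are placeholders); the C++ side would be the equivalent loop timed the same way:

using System;
using System.Diagnostics;

class MathBenchmark
{
    // Stand-in for a processor-intensive kernel; swap in your own formula.
    static double DotProduct(double[] a, double[] b)
    {
        double sum = 0.0;
        for (int i = 0; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }

    static void Main()
    {
        const int n = 10000000;
        var a = new double[n];
        var b = new double[n];
        var rng = new Random(42);
        for (int i = 0; i < n; i++) { a[i] = rng.NextDouble(); b[i] = rng.NextDouble(); }

        DotProduct(a, b);                       // warm up the JIT before timing

        var sw = Stopwatch.StartNew();
        double result = DotProduct(a, b);
        sw.Stop();
        Console.WriteLine("{0} computed in {1} ms", result, sw.ElapsedMilliseconds);
    }
}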

If you will be using it in C#, then you might as well put it in C# to start with. You buy more with managed code than you save with pointer wrangling. If you are worried about memory and cache issues, then just use arrays of value types (structs) instead of objects. That gives you more control over how the memory is laid out.
The optimizers and JIT compilers will buy you more than enough speed to make up for any inefficiencies.
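As a rough illustration of the arrays-of-value-types idea (the Sample struct and its fields are made up for the example): an array of structs is one contiguous block of memory with no per-element allocation, unlike an array of class instances.

using System;

// A value type: an array of these is laid out contiguously in memory,
// which is friendlier to the CPU cache than an array of object references.
struct Sample
{
    public double X;
    public double Weight;
}

class LayoutDemo
{
    static void Main()
    {
        var samples = new Sample[1000000];      // one allocation, no per-element objects

        double total = 0.0;
        for (int i = 0; i < samples.Length; i++)
        {
            samples[i].X = i;
            samples[i].Weight = 1.0 / (i + 1);
            total += samples[i].X * samples[i].Weight;
        }
        Console.WriteLine(total);
    }
}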

It's a little hard to say anything definitive one way or the other. I'd suggest sticking with C# if that's what you've started or what the rest of your project is based around. Keep some canonical data sets aside and establish some benchmarks as you develop. If you find performance to fall below some unacceptable threshold, and your profiling leads you to believe the problem is intrinsic to C#, then write a C++ component to solve those specific needs.

One thing to keep in mind is that using bytecode languages like C# or Java lets the JIT compiler in the runtime optimise your code. In practice, this means that the runtime performance of your code only gets better over time. Unlike C++, where the machine code is produced once at compile time and never changes, the performance of your C# code can continuously improve along with improvements to the underlying JIT compiler.
A serious amount of research is going into JIT compiler technology these days. Taking advantage of this now is an excellent approach.

One of the reasons I like using C# for numerical programming is that it is fairly easy to interface with native code. C# and the latest .NET runtimes and JIT compilers are pretty darn good, but sometimes you just can't beat highly optimized native code. For example, here is what I have done for linear algebra stuff. Write some nice object-oriented classes that hide the implementation of key operations. For me this meant creating Matrix and Vector classes with addition and multiplication functions/operators. When I encountered an algorithm that had to perform several matrix-matrix products and transpose products with rather large matrices (thousands of rows by hundreds of columns) over many iterations, things got too slow. I re-implemented the matrix multiplication function to call a highly optimized matrix-matrix multiplication function from Intel's Math Kernel Library (dgemm). This gave me a better than 20x speedup. Plus, the unpleasant API for this native routine (dgemm takes no less than 13 parameters!) was hidden from users of the matrix class.
So, I would suggest using C# for your library and drop down to optimized native code when and where needed.
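A rough sketch of that pattern is below: a Matrix class whose multiplication operator calls into native BLAS. The answer above used the 13-parameter Fortran dgemm; this sketch assumes the CBLAS entry point cblas_dgemm and MKL's "mkl_rt" single dynamic library, so adjust the binding for however the native library is actually deployed on your machine.

using System;
using System.Runtime.InteropServices;

public class Matrix
{
    public readonly int Rows, Cols;
    internal readonly double[] Data;            // row-major storage

    public Matrix(int rows, int cols)
    {
        Rows = rows;
        Cols = cols;
        Data = new double[rows * cols];
    }

    public double this[int r, int c]
    {
        get { return Data[r * Cols + c]; }
        set { Data[r * Cols + c] = value; }
    }

    // CBLAS constants: row-major layout, no transposition.
    const int CblasRowMajor = 101, CblasNoTrans = 111;

    // Assumed P/Invoke binding to MKL's single dynamic library.
    [DllImport("mkl_rt", CallingConvention = CallingConvention.Cdecl)]
    static extern void cblas_dgemm(int layout, int transA, int transB,
        int m, int n, int k, double alpha, double[] a, int lda,
        double[] b, int ldb, double beta, double[] c, int ldc);

    // Users just see operator *; the unpleasant native API stays hidden in here.
    public static Matrix operator *(Matrix a, Matrix b)
    {
        if (a.Cols != b.Rows) throw new ArgumentException("Dimension mismatch");
        var c = new Matrix(a.Rows, b.Cols);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            a.Rows, b.Cols, a.Cols, 1.0, a.Data, a.Cols,
            b.Data, b.Cols, 0.0, c.Data, b.Cols);
        return c;
    }
}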

Is it a good idea to run dynamically generated code on my GPU with the use of OpenCL, or there are better ways?

Introduction
So, let me introduce a problem. Currently, I'm writing a program in C# that has a lot of computations in it (more precisely, it's a neural network library). So far I've used standard arrays to store matrices, but I thought it would be better to create 2D and 3D matrix classes to encapsulate all the matrix operations I need and clean up the loops in my code.
As you may know, this is pretty easy to accomplish with basic operator overloading, but I ran into another problem: it would be slower than plain for loops over arrays, because with a big equation the intermediate objects produced by the overloaded operators cause overhead. I googled it and found an article that turned out to be very useful for me. In short, the writer uses additional classes to first build an equation tree and then compile it into a single C# method using MSIL (Microsoft Intermediate Language), so the whole equation is solved in one pass.
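For reference, the same "compile the whole equation once, then reuse the delegate" idea can also be sketched with System.Linq.Expressions instead of hand-emitted MSIL; the element-wise formula below is just a placeholder:

using System;
using System.Linq.Expressions;

class CompiledEquationDemo
{
    static void Main()
    {
        // Build the equation tree for f(a, b, c) = a * b + c once...
        var a = Expression.Parameter(typeof(double), "a");
        var b = Expression.Parameter(typeof(double), "b");
        var c = Expression.Parameter(typeof(double), "c");
        var body = Expression.Add(Expression.Multiply(a, b), c);
        var f = Expression.Lambda<Func<double, double, double, double>>(body, a, b, c).Compile();

        // ...then apply the compiled delegate element by element, with no
        // temporary matrix objects created inside the loop.
        var x = new double[1000];
        var y = new double[1000];
        var z = new double[1000];
        var result = new double[1000];
        for (int i = 0; i < result.Length; i++)
            result[i] = f(x[i], y[i], z[i]);
        Console.WriteLine(result[0]);
    }
}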
But then I thought about running the matrix calculations on my GPU, as that would be even faster. I came across a NuGet package, Cloo, which is a wrapper around OpenCL (I'd like it to work on any video card, not only NVidia with its CUDA) that lets you run C code on your GPU, but as I've just said, that C code has to be written as a string.
Question
Finally, my question: is it a good idea to generate the C code string dynamically from the equation tree to calculate my optimized equations on a GPU, or are there other ways to accomplish that?
First of all, the right answer to your question is: implement it both ways and write a benchmark, because we don't know what GPU is used, what CPU is used, what the matrix sizes are, etc.
Theoretically, the GPU should be faster if you are able to use a SIMD/SIMT approach without branches (i.e. without ifs). So if you can write flat code that operates on plain arrays, then the GPU (even an embedded one) will work faster. However, the key word here is theoretically.
Practically:
.Net-based (and JVM-based) code is much simpler to support.
OpenCL code behaves differently on NVidia/AMD/Intel GPUs (because sometimes you can store a lot of data in local memory, sometimes you can't; sometimes you can rely on fast GDDR6, sometimes the video card (an embedded one, for example) just shares the computer's RAM).
Some GPU memory profiling requires freezing the video output (to mitigate fluctuations). And you will run into many other interesting issues during GPU development.
However to help you:
Try TensorFlow matrix multiplication first. It has .NET bindings and can do math operations on both the CPU and the GPU.
Compare the .NET code with native code, for example Rust/Kotlin Native/C++. You can probably just move all computations into the native part (all of these options are much simpler than writing and supporting OpenCL code).
From my perspective (no proof here, sorry), it is much simpler to write code in multiple languages than to generate code in language X from language Y.

Are there any open source projects that implement the same functionality with and without exceptions?

I'm working on my thesis about the impact of using exceptions on code complexity. It would be really great if I had a few thousand LOC that implement the same functionality once with good old error codes and once with exceptions. I don't even know where to start googling. Any C#, Java, C++ or D project would suffice. My best guess is a project that switched to exceptions at a given version. Any help is appreciated.
Considering that in both Java and C# exception handling is essential for pretty much the complete base libraries, I doubt it.
Java is pretty much completely out of the loop, because without out parameters you have to resort to extremely strange constructs (e.g. you either always return Object arrays or implement classes that bundle a status code with the value that should be returned).
In C# you could theoretically avoid exceptions and use error codes if you ignore the base library, but I still doubt anyone would want to program that way. For both languages it's just integrated far too deeply into the core concept.
So your best bet among the named languages would be C++, but then C++ exceptions have a whole lot of problems compared to more modern implementations - they're really no fun to use. You might look around for, e.g., Python programs; I could imagine someone programming Python without exceptions.
Anyway, it's extremely unlikely (independent of language, although C++ is probably the only one where I could imagine it at all) to find a project that changed from error codes to exception handling - after all, that would be pretty much a complete rewrite.
I don't think you will find such projects. Even if some project switched at some point, the two versions will still differ in many other ways, so you would be comparing apples and oranges anyway. A thesis is not supposed to be based on anecdotal information, questionable testing, and unwarranted conclusions.
You can approach this topic from two angles. One is to discuss the theoretical implications of the two approaches to error handling and illustrate that with three-liners. Another is to conduct a controlled experiment: write a fairly short (~1000 lines) test case based on a real-life scenario and analyse it, followed by a discussion of whether or not it would scale to larger systems. And of course, if you have the time (at least a couple of years) and money (at least a couple of million dollars) to hire a group of experienced developers and provide them with large-scale problems, you can gather some valuable statistics.
Not sure it fits, but:
GTK+, the C library, uses error codes whereas gtkmm, its C++ wrapper, wraps them in exceptions. (Example: GTK+ g_thread_create() vs gtkmm Glib::Thread::create()) Both are object-oriented.

What Python features will excite the interest of a C# developer? [closed]

As someone who's been happily programming in C# for quite some time now and is planning to learn a new language, I find the Python community more closely knit than many others.
Personally, dynamic typing puts me off, but I am fascinated by the way the Python community rallies around it. There are a lot of other things I expect I would miss in Python (LINQ, expression trees, etc.).
What are the good things about Python that developers love? Stuff that’ll excite me more than C#.
For me it's the flexibility and elegance, though there are a handful of things I wish could be pulled in from other languages (better threading, more robust expressions).
Typically I can write a little bit of code in Python and do a lot more than the same number of lines would in many other languages. Also, in Python code form is of utmost importance, and the syntax lends itself to highly readable, clean-looking code. That of course helps with maintenance.
I love having a command line interpreter that I can quickly prototype an algorithm in, rather than having to start up a new project, code, compile, test, repeat. Not to mention the fact that I can use it to help me automate my server maintenance as well (I double as an SA for my company).
The last thing that comes to mind immediately is the vast number of libraries. There are a lot of things already solved out there; the built-in library has a lot to offer, and the third-party ones are often very good (not always, though).
Being able to type in some code and get the result back immediately.
(Disclaimer: I use both C# and Python regularly, and I think both have their good and bad points.)
I'm primarily a .NET developer and use Python for my personal projects.
What are the good things about Python that developers love?
I can say for myself - Python is like a breath of fresh air.
1) It's simple to learn; it took me about a week in the evenings. I'm talking about Python + Django. Python syntax is quite simple.
2) It's simple to use. No troubles installing Python + Django on Windows at all.
3) It can be run on Windows and UNIX.
4) I need it for web, so I get cheaper hosting than ASP.NET.
5) All the advantages of Python language over C#. Like tuples - so useful!
The only thing I don't like is that my favorite IDE Visual Studio doesn't support it (I know about IronPython, don't you worry).
I'm a very heavy user of both C# and Python; I've built very complicated applications in both languages, and I've also embedded Python scripting in my major C# application. I'm not using either to do much in the way of web work right now, but other than that I feel like I'm pretty qualified to answer the question.
The things about Python that excite me, in particular:
The deep integration of generators into the language. This was the first thing that made me realize that I needed to take a long, serious look at Python. My appreciation for this has deepened considerably since I've become conversant with the itertools module, which looks like a nifty set of tools but is in fact a new way of life.
The coupling of dynamic typing and the fact that everything's an object makes pretty sophisticated techniques extremely simple to implement. It's so easy to replace logic with tables in Python (e.g. o = class_map[k]() instead of if k == 'foo': o = Foo()) that it becomes a basic technique. It's so normal in Python to write methods that take methods as parameters that you don't raise an eyebrow when you see d = defaultdict(list).
zip, and the methods that are designed with it in mind. It takes a while before you can intuitively grasp what dict(zip(k, v)) and d.update(zip(k, v)) are doing, but it's a paradigm-shifting moment when you get there. An entire universe of uninteresting and potentially error-laden code eliminated, just by using one function. Then you start designing functions and classes with the expectation that they'll be used in conjunction with zip, and suddenly your code gets simpler and easier. (Protip: Or itertools.izip. Or itertools.izip_longest.)
Speaking of dictionaries, the way that they're deeply integrated into the language. Understanding what a line of code like self.__dict__.update(**kwargs) does is another one of those paradigm-shifting moments.
List comprehensions and generator expressions, of course.
Inexpensive exceptions.
An interactive interpreter.
Function decorators.
IronPython, which is so much simpler to use than we have any right to expect.
And that's without even getting into the remarkable array of functionality in the standard modules, or the ridiculous bounty of third-party tools like BeautifulSoup or SQL Alchemy or Pylons.
One of the most direct benefits that I've gotten from getting deeply into Python is that it has greatly improved my C# code. I could generally understand code that had a variable of type Dictionary<string, Action<Foo>> in it, but it didn't seem natural to write it. (I use static dictionaries to replace hard-coded logic far more frequently today than I did a year ago.) I have no difficulty understanding what LINQ is doing now, or how IEnumerable<T> and yield return work.
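To make that concrete, here is a small sketch of the kind of C# that habit produces; Foo and the dictionary keys are made up for the example:

using System;
using System.Collections.Generic;

class Foo
{
    public void Process() { Console.WriteLine("processing a Foo"); }
}

class TableDrivenDemo
{
    // A static dictionary replacing hard-coded if/else dispatch logic.
    static readonly Dictionary<string, Action<Foo>> Handlers =
        new Dictionary<string, Action<Foo>>
        {
            { "process", foo => foo.Process() },
            { "ignore",  foo => { } },
        };

    // An iterator block: lazily yields values much like a Python generator.
    static IEnumerable<int> Squares(int count)
    {
        for (int i = 0; i < count; i++)
            yield return i * i;
    }

    static void Main()
    {
        Handlers["process"](new Foo());
        foreach (int sq in Squares(5))
            Console.WriteLine(sq);
    }
}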
So what don't I like about Python?
Dynamic typing really limits what you can do with static code analysis. Not only isn't there a tool like Resharper for Python, in a language where it's possible to write getattr(x, y)() there really can't be.
It has a bunch of inelegant conventions. How I would love to be able to go back in time and try to talk GVR out of the idea that lambda expressions should be introduced with the word lambda - it's pretty damning that something as fundamental as lambda expressions should be more concise in C# than they are in Python. The leading and trailing double-underscore convention is horrible, and the fact that people mutely acquiesce to it is testimony to Dostoevsky's observation that man is the animal who can get used to anything. And don't get me started on the fact that a module with the name of StringIO was allowed to get out the door.
Some of the features that make Python work on multiple platforms also make it kind of baffling. It's easy to use import, but it's really not easy to understand what the hell it's actually doing. (Where is it looking? What does __init__.py do? Etc.)
The amazingly rich library of standard modules is so amazingly rich that it's hard to know what's in it. It's often easier to write a function than it is to find out whether or not there's something in the standard library that does the same thing - I'm looking at you, itertools.chain.
Your question is kind of like a plumber asking why carpenters are always going on and on about hammers. After all, the plumber doesn't have a hammer and has never missed it. Python (even IronPython) and C# target different types of developers and different types of programs. I am very comfortable in Python and enjoy the freedom to focus on the business rules without being distracted by the syntax requirements of the language. On the other hand, I have written some fairly substantial code in C# and would be very concerned about the lack of type safety had I taken on the same task in Python. This is not to say that Python is a "toy" language. You can write (and people have written) a complete medium or large application in Python. You have the freedom of dynamic typing, but you also have the responsibility to keep it all straight (frameworks help here). Similarly, you can write a small application in C#, but you will bring along some overhead you likely do not need.
So if the problem is a nail, use a hammer; if the problem is a screw, use a screwdriver. In other words, spend some time learning Python, get to know its strengths (text processing, quick coding cycles, simple clean code, etc.), and then when you are looking at tackling a new problem, ask whether you would be better off in Python or C#. One thing is certain: so long as C# is the only programming language you know, it is the only one you will ever use.
Pat O
My language of choice is C#, and I didn't quite see the point for me to learn Python so far. This talk from PDC09 really piqued my interest: the guy demonstrates how you can use IronPython (or IronRuby) to make a C# app scriptable (in his demo, drop a Python script in a text box, and it works with/extends your C# code). I found this really fascinating: I don't even know where I would start to do something similar in C#, and this made me at least appreciate that it brings something different to the table, which could really enrich what I can develop!
I'm an asymmetrical user of both languages, in a sense that I use C# mostly professionally and Python for all my "fun" projects (not that work is never fun, but... you know...)
This difference of context may skew my perspective, including my opinion that they are two distinct types (pun intended) of languages for, generally, distinct purposes.
This said, it may not be a coincidence that Python is, at this point in time, [one of?] the languages of choice for all kinds of cutting-edge, somewhat scholarly, technology/science-oriented projects. (And BTW, this "scholarly" keyword does NOT imply that Python is a university toy; plenty of "serious" applications in plenty of domains/industries are proof to the contrary.) This may be due to several factors:
(I don't develop most points, readily well expressed in other responses)
the openness and quasi universal availability of Python (unlike C# !)
the lightweight / ease of use / low learning curve
the extensive, high-quality "standard" library and the even more extensive (and occasionally of dubious quality, but on the whole available, open-sourced, etc.) additional libraries
the wide array of open source projects in Python language
the relative ease to bind with C/C++ for reusing legacy code, but also for placing performance-critical portions of a project
the generally higher level of abstraction of many constructs of the language
the multi-paradigms (imperative, object oriented and functional)
the availability of practitioners in so many domain of science and technology
and, yes, the
"herd mentality effect" mentioned in a remark, possibly in a [self?] deriding way. The fact that a language attracts a broad, "closely knit" community, makes it attractive too, beyond the superficial ("look cool" and such) traits of herd mentality. Put in broader context, sometimes the best technology/language to use is not measured on the its intrinsic merits but on the overall "picture", including the user community.
I like all the stuff with [] and {}, and slice selectors like [-1:1]. And the possibility to write less code that says more, which makes writing Models and other declarative things very DRY.
Like any programming language, it is just a tool in the box or a brush by which you may paint your creation. Any creative endeavour requires that the artist loves the tools he uses; otherwise, the outcome suffers. Some people like Python for the same reason others love Perl. Incidentally, I have found that most Python lovers loathe Perl's flexible and expressive syntax. As a Perl lover, I don't hate Python, but consider it to be overly structured and restrictive.
If you ask me, all of these throngs of people who seem to love Python were silently suffering under the tool choices before Python came into being. Some suffered under Perl, others under something else. In other words, I believe that when Python came along, it found a large group of silent sufferers longing for a tool like Python.
I can't program in Python because I can't "think" in Python. I can "think" in Perl, therefore, it is the tool I prefer. The silently suffering mass of, now, Python users seem to have found some long lost salvation. Now if they could only keep their evangelism to themselves :).
If you are familiar with the .NET CLR and prefer a statically-typed language, but you like Python's lightweight syntax, then perhaps Boo is the language for you.
Don't get me wrong, I am and will always be a devoted fan of C#.
But sometimes there are things I can't do in C#. Although C# keeps reducing those gaps, Python is still the language I go to in order to fill them.
It's dynamic, flexible, powerful, and clean. Lovely language. Whenever I need to script or build dynamic or functional (as in functional programming) software, I go Python.
For me Python is the most elegant language I've used. The syntax is minimalist (significantly less punctuation than most) and intentionally modeled after the pseudo-code conventions that programmers ubiquitously use to outline their intentions.
Python's if __name__ == '__main__': suite encourages re-use and test driven development.
For example, the night before last I hacked together a script to run thousands of ssh jobs (with about 100 running concurrently) and gather up all the results (output, error messages, exit values) ... and record the time taken on each. It also handles timeouts (an ssh command can stall indefinitely on connection to a thrashing system --- its connection timeout and retry options don't apply after the socket connection is made, no matter whether the authentication stalls). This only takes a few dozen lines of Python, and it really is easiest to create it as a class (defined above the __main__ suite) and do my command line parsing in a simple wrapper down inside __main__. That's sufficient to do the job at hand (I ran the script on 25,000 hosts the next day, in about two hours). I can now use this code in other scripts as easily as:
from sshwrap import SSHJobMan
cmd = '/etc/init.d/foo restart'
targets = queryDB(some_criteria)
job = SSHJobMan(cmd, targets)
job.start()
while not job.done():
    completed = job.poll()
    # ...
    # Deal with incremental disposition of completed jobs
for each in sorted(job.results):
    # ...
    # Summarize results
... and so on.
So my script can be used for simple jobs ... and it can be imported as a module for more specialized work that couldn't be described on my wrapper's command line. (For example I could start up "consumer" subprocesses for handling other work on each host where the job was successful while spitting out service tickets or automated reboot requests for all hosts reporting timeouts or failures, etc).
For modules which have no standalone usage I can use the __main__ suite to contain unit-tests. Thus every module can contain its own tests ... which, in fact, can be integrated into the "doc strings" using the doctest module from the standard libraries. (Which, incidentally, means that properly formatted examples in the documentary comments can be kept in sync with the implementation ... since they are parts of the unit-test suite).
The main thing I like about Python is its very concise, readable syntax. Though using indentation as a block delimiter can seem strange at first, once you begin to code a lot in the language I find it begins to make sense. Though the core language is quite simple, its more advanced features, e.g. list comprehension, decorators and generators, are rather useful too.
In addition, the Python standard library is just fantastic; its documentation is very well written, and it contains a lot of very useful packages. I also find that there are plenty of good bindings for C libraries, such as PyGTK, Webkit and Qt, to name but a few.
One caveat is that Python, like most dynamic languages, is quite slow in comparison with compiled, statically-typed languages. However, you can easily extend it with C, allowing you to write code requiring better performance in C and the rest in Python.
It's a great language overall, and (for me at least) makes coding more productive and enjoyable.

Is C# code faster than Visual Basic.NET code? [closed]

Is C# code faster than Visual Basic.NET code, or that is a myth?
That is a myth. They both compile down to IL and run on the same CLR. However, the IL each compiler emits for the same routine may come out slightly different. So certain routines may be slightly (0.0000001%) faster in C# and vice versa for VB.NET, but they are both running on the same common runtime, so they are the same in performance where it counts.
The only reason that the same code in VB.NET might be slower than in C# is that VB defaults to having checked arithmetic on and C# doesn't.
By default, arithmetic operations in Visual Basic are checked for overflow; in C#, they are not.
If you disable that, the resulting IL is likely to be identical. To test this, take your code and run it through Reflector; you will see that it looks very similar if you switch between the C# and VB.NET views.
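To illustrate the difference those checks make, here is a small C# sketch; a checked block (or the /checked compiler switch) gives C# the VB.NET default behaviour, and unchecked is what C# does out of the box:

using System;

class OverflowDemo
{
    static void Main()
    {
        int big = int.MaxValue;

        // C# default: integer arithmetic silently wraps around.
        unchecked
        {
            Console.WriteLine(big + 1);         // prints -2147483648
        }

        // VB.NET default: every integer operation is tested for overflow.
        try
        {
            checked
            {
                Console.WriteLine(big + 1);     // throws instead of wrapping
            }
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflowed");
        }
    }
}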
It is possible that an optimization (or just a difference in behaviour) in the C# compiler versus the VB.NET compiler might lead to one slightly favouring the other. This is:
Unlikely to be significant
If it were, it would be low-hanging fruit to fix
Unlikely to happen.
C# and VB.NET's abstract syntax trees are very close in structure. You could automatically transliterate a great deal of VB.NET into C# and vice versa. What is more, the result would stand a good chance of looking idiomatic.
There are a few constructs in C# that are not in VB.NET, such as unsafe pointers. They might provide some benefit, but only if they are actually used, and used properly. If you are down to that sort of optimization you should be benchmarking appropriately.
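For what it's worth, this is the sort of thing unsafe pointers mean in practice; it needs the /unsafe compiler switch and is only worth it in measured hot spots:

using System;

class UnsafeSumDemo
{
    // Sums an array through a raw pointer; the fixed block pins the array
    // so the garbage collector cannot move it while we hold the pointer.
    static unsafe double Sum(double[] values)
    {
        double total = 0.0;
        fixed (double* p = values)
        {
            for (int i = 0; i < values.Length; i++)
                total += p[i];
        }
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new[] { 1.0, 2.0, 3.0 }));
    }
}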
Frankly, if it makes a really big difference then the question should not be "Which of C#/VB.NET should I use?"; you should instead be asking yourself why you don't move some code over to C++/CLI.
The only way I could think of that the different compilers could introduce serious, pervasive differences is if one chose to:
Implement tail calls in different places
These can make things faster or slower and certainly would affect things on deeply recursive functions. The 4.0 JIT compiler on all platforms will now respect all tail call instructions even if it has to do a lot of work to achieve it.
Implement iterator blocks or anonymous lambdas significantly more efficiently
I believe both compilers are about as efficient at a high level as they are going to get in this regard, though. Both languages would require explicit support for the 'yield foreach' style available to F#'s sequence generators.
Box when it is not necessary, perhaps by not using the constrained opcode
I have never seen this happen but would love an example where it does.
Both the C# and VB.NET compilers currently leave such optimization complexities as enregistering of variables, calling conventions, inlining and unrolling entirely up to the common JIT compiler in the CLR. This is likely to have far more of an impact than anything else (especially now that the 32-bit and 64-bit JITs can behave quite differently).
The framework is written in C#, but that still does not tell us anything about performance differences between C# and VB, as everything is compiled to IL, which is what actually gets executed (after being JITted and so on).
The responsibility lies with each language's compiler for what kind of IL it produces from the source code. If one compiler produces better-suited IL than the other, then there could be a performance difference. I don't know whether there are areas where they would produce drastically different IL, but I doubt the differences would be huge.
A completely different aspect is C#'s ability to run unsafe code, like using raw pointers etc., which can give extra performance in special scenarios.
There might be a slight difference in the compiler optimization, but I'd say there is no noticeable difference. Both C# and VB.NET compile to Common Intermediate Language. In some cases, you may be able to get a significant performance gain in C# by using unsafe code, but under most circumstances I wouldn't recommend doing so. If you need something that performance critical, you shouldn't use C# either.
The myth probably started because of the huge difference in Visual Basic 6 performance compared to the average C++ application.
I was at a Microsoft conference and the MS employees stated that C# is up to 8% faster than VB.NET. So if this is a myth, it was started by the people that work at MS. If I can find the slides that state this I will post them, but this was when C# had just come out. I think that even if it was true at one point in time, the only reason for one to be faster than the other is how things are configured by default, as ShuggyCoUk said.
It depends on what you're doing. I think there's no real difference between VB and C#. Both of them are .NET languages and they compile to IL.
More info? Read this:
http://devlicio.us/blogs/robert_dunaway/archive/2006/10/19/To-use-or-not-use-Microsoft.VisualBasic.dll-_2800_all-.NET-Languages-could-benefit_3F002900_.aspx
As usual the answer is that it depends... By itself, no, VB.NET is not slower than C#, at least not by anything you will notice. Yes, there will be slight differences in compiler optimization, but the IL generated will be essentially the same.
However, VB.NET comes with a compatibility library for programmers used to VB6. I am thinking of those string methods like Left, Right and Mid that old VB programmers would expect. Those string manipulation functions are slower. I'm not sure you would notice an impact, but depending on the intensity of their use, I'd bet the answer would be yes. Why are those methods slower than the "native" .NET string methods? Because they are less type-safe. Basically, you can throw almost anything at them and they will try to do what you want them to, just like in old VB6.
I am thinking about string manipulation, but if I think harder, I'm sure I'll remember more methods thrown into that compatibility layer (I don't remember the assembly's name, but I remember it is referenced by default in VB.NET) that would have a performance impact if used instead of their "native" .NET equivalents.
So, if you keep on programming as if you were in VB6, then you might notice an impact. If not, it's OK.
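To make the comparison concrete: the compatibility helpers live in the Microsoft.VisualBasic assembly (the Strings module) and use 1-based positions, while the native framework methods are 0-based. A rough sketch, callable from C# too if you add the assembly reference:

using System;
using Microsoft.VisualBasic;    // the VB6-compatibility assembly discussed above

class StringCompatDemo
{
    static void Main()
    {
        string s = "performance";

        // VB6-style helpers: 1-based and forgiving about their arguments.
        Console.WriteLine(Strings.Left(s, 4));      // "perf"
        Console.WriteLine(Strings.Mid(s, 4, 4));    // "form"

        // Native .NET equivalents: 0-based and type-safe.
        Console.WriteLine(s.Substring(0, 4));       // "perf"
        Console.WriteLine(s.Substring(3, 4));       // "form"
    }
}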
It is not really a myth. While C# and VB.NET both compile to IL, the actual instructions produced are likely to be different, because 1. the compilers may have different optimisations and 2. of the extra checks that VB.NET does by default, e.g. for arithmetic overflow. So in many cases the performance will be the same, but in some cases C# will be faster. It's also possible VB.NET might be quicker in rare circumstances.
There are some small differences in the generated code that may make C# slightly faster in some situations. For example VB.NET has some extra code to clear the local variables in a method while C# doesn't.
However, those differences are barely measurable, and most code is so far from optimal that you are just starting in the wrong end by switching language to make the code run faster. You can take just about any CPU intensive code and rather easily make it twice as fast. Some code can be made 10 times faster, other code perhaps 10000 times faster. In that context the few percent that you may gain by using C# instead of VB.NET is not worth the effort.
On the other hand, learning C# may very well be an effective way of speeding up your code. Not because C# can produce faster code, but because you will get a better understanding of both C# and VB.NET, enabling you to write code that performs better in either language.
Edit:
The C# and VB.NET compilers are obviously developed more or less in sync. The speed difference between C# 1 and C# 2 is something like 30%, the difference between the parallel versions of C# and VB.NET is a lot less.
C# and VB.NET are both compiled to IL. C++ (as C++/CLI) and F# are compiled to it as well. In fact, the four languages I mentioned execute at essentially the same speed. There isn't a "faster language" among these: the real difference is between automatically garbage-collected languages (C#, VB.NET, F#, etc.) and those that aren't (such as native C++). This second group is generally slower, because the developer rarely knows how and when to collect the garbage in heap memory; however, if you are well informed about heap memory, the program can end up faster in C++. Graphics-heavy programs are usually written in C++ (such as most of Adobe's programs). You can also manually trigger garbage collection in C# (System.GC.Collect();) and in VB.NET (System.GC.Collect).
I know this answer is not entirely on topic for the question, but I want to offer you several options and choices. You choose the right way for your programs.

Is F# really better than C# for math?

Unmanaged languages notwithstanding, is F# really better than C# for implementing math? And if that's the case, why?
I think most of the important points were already mentioned by someone else:
F# lets you solve problems in a way mathematicians think about them
Thanks to higher-order functions, you can use simpler concepts to solve difficult problems
Everything is immutable by default, which makes the program easier to understand (and also easier to parallelize)
It is definitely true that you can use some of the F# concepts in C# 3.0, but there are limitations. You cannot really use recursive computations (because C# doesn't have tail recursion), and that is how you write primitive computations in a functional/mathematical way. Also, writing complex higher-order functions (that take other functions as arguments) in C# is difficult, because you have to write the types explicitly (while in F#, types are inferred but also automatically generalized, so you don't have to explicitly make a function generic).
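To illustrate the recursion point in C# terms: the tail-recursive definition below is the kind F# would happily turn into a loop, while the C# compiler emits no tail-call instruction, so in practice you rewrite it with an explicit loop (the names here are just for the example):

using System;

class TailRecursionDemo
{
    // Functional-style definition with an accumulator; the recursive call is
    // in tail position, but the C# compiler does not emit a tail call, so a
    // deep enough input can still exhaust the stack.
    static long SumRecursive(long n, long acc)
    {
        return n == 0 ? acc : SumRecursive(n - 1, acc + n);
    }

    // The usual C# rewrite: carry the accumulator in an explicit loop.
    static long SumIterative(long n)
    {
        long acc = 0;
        for (long i = 1; i <= n; i++)
            acc += i;
        return acc;
    }

    static void Main()
    {
        Console.WriteLine(SumIterative(1000000));
        Console.WriteLine(SumRecursive(10000, 0));   // keep n modest to stay within the stack
    }
}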
Also, I think the following point from Marc Gravell isn't a valid objection:
From a maintenance angle, I'm of the view that suitably named properties etc are easier to use (over full life-cycle) than tuples and head/tail lists, but that might just be me.
This is of course true. However, the great thing about F# is that you can start writing the program using tuples & head/tail lists and later in the development process turn it into a program that uses .NET IEnumerables and types with properties (and that's how I believe a typical F# programmer works*). Tuples etc. and F#'s interactive development tools give you a great way to quickly prototype solutions (and when doing something mathematical, this is essential, because most of the development is just experimenting while you're looking for the best solution). Once you have the prototype, you can use simple source code transformations to wrap the code inside an F# type (which can also be used from C# as an ordinary class). F# also gives you a lot of ways to optimize the code later in terms of performance.
This gives you the benefits of easy-to-use languages (e.g. Python), which many people use for the prototyping phase. However, you don't have to rewrite the whole program in an efficient language (e.g. C++ or perhaps C#) once you're done prototyping, because F# is both "easy to use" and "efficient", and you can fluently switch between these two styles.
(*) I also use this style in my functional programming book.
F# has many enormous benefits over C# in the context of mathematical programs:
F# interactive sessions let you run code on-the-fly to obtain results immediately and even visualize them, without having to build and execute a complete application.
F# supports some features that can provide massive performance improvements in the context of mathematics. Most notably, the combination of inline and higher-order functions allow mathematical code to be elegantly factored without adversely affecting performance. C# cannot express this.
F# supports some features that make it possible to implement mathematical concepts far more naturally than can be obtained in C#. For example, tail calls make it much easier to implement recurrence relations simply and reliably. C# cannot express this either.
Mathematical problems often require the use of more sophisticated data structures and algorithms. Expressing complicated solutions is vastly easier with F# compared to C#.
If you would like a case study, I converted an implementation of QR decomposition over System.Double from 2kLOC of C#. The F# was only 100 lines of code, runs over 10× faster and is generalized over the type of number so it works not only on float32, float and System.Numerics.Complex but can even be applied to symbolic matrices to obtain symbolic results!
FWIW, I write books on this subject as well as commercial software.
F# supports units of measure, which can be very useful for math work.
I'm from a maths background, and have looked at F#, but I still prefer C# for most purposes. There are a couple of things that F# makes easier, but in general I still prefer C# by a large margin.
Some of the touted F# benefits (immutability, higher-order functions, etc) can still be done in C# (using delegates etc for the latter). This is even more apparent when using C# 3.0 with lambda support, which makes it very easy and expressive to declare functional code.
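For example, a fair amount of the higher-order, expression-oriented style carries straight over into C# 3.0; a small sketch:

using System;
using System.Linq;

class FunctionalStyleDemo
{
    static void Main()
    {
        // A higher-order function: takes a function and returns a new one.
        Func<Func<double, double>, Func<double, double>> twice = f => x => f(f(x));

        Func<double, double> addOne = x => x + 1.0;
        Console.WriteLine(twice(addOne)(3.0));      // 5

        // A declarative pipeline over a sequence, with no mutated loop state.
        double sumOfSquares = Enumerable.Range(1, 10)
                                        .Select(i => (double)(i * i))
                                        .Sum();
        Console.WriteLine(sumOfSquares);            // 385
    }
}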
From a maintenance angle, I'm of the view that suitably named properties etc are easier to use (over full life-cycle) than tuples and head/tail lists, but that might just be me.
One of the areas where C# lets itself down for maths is in generics and their support for operators. So I spent some time addressing this ;-p My results are available in MiscUtil, with an overview here.
This post looks like it might be relevant: http://fsharpnews.blogspot.com/2007/05/ffts-again.html
Also: C# / F# Performance comparison
The biggest advantage for pure math is what PerpetualCoder said: F# looks more like a math problem, so it's going to be easier for a mathematician to write. It reminded me a lot of MATLAB when I looked at it.
I am not sure if it's better or worse, but there is certainly a difference in approach. Static languages overspecify how a problem will be solved. Functional languages like F# or Haskell do not do that and are more tailored to how a mathematician would solve a particular problem. Then you have books like this that tout Python as being good at it. From a performance point of view, nothing can beat C. From a libraries point of view, I believe functional languages (F# and the like), Fortran (yes, it's not dead yet) and Python have excellent libraries for math.
One of the great advantages of functional languages is the fact that they can run on multi-processor or multi-core systems, in parallel, without requiring you to change any code.
That means you can speed up your algorithms by simply adding cores.
