Good language to develop a game server in? - C#

I was just wondering what language would be a good choice for developing a game server to support a large number (thousands) of users? I dabbled in Python, but realized that it would just be too much trouble since its threads can't run in parallel across cores thanks to the GIL (meaning an 8-core server = a 1-core server). I also didn't really like the language (that "self" stuff grossed me out).
I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax, and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed-wise, it would be good for the job. Keep in mind that since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .NET, but my experience with it is limited. Are there any frameworks out there for it, or anything to ease the development?
Please help me and my picky self arrive at a solution.
UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++; Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8-core server, why should I need to make 8 processes? Is there a more elegant solution?

I hate to say it, and I know I'm risking a downmod here, but it doesn't sound like there's a language out there for you. All programming languages have their quirks, and programmers simply have to adapt to them. It's completely possible to write a working server in Python without classes (eliminating the "self" references), and likewise just as easy to write C++ with clean syntax.
If you're looking to deploy cross-platform and want to develop cross-platform as well, your best bet would probably be Java. It offers shorter development cycles than compiled languages like C and C++, but higher performance (arguably - I've always been anti-Java =P) than interpreted languages like Python and Perl, and you don't have to work with unofficial implementations like Mono that may from time to time not support all of a language's features.

I might be going slightly off-topic here, but the topic interests me as I have (hobby-wise) worked on quite a few game servers (MMORPG servers) - on others' code as well as mine. There is literature out there that will be of interest to you, drop me a note if you want some references.
One thing that strikes me in your question is the desire to serve thousands of users from a single multithreaded application. From my humble experience, that does not work too well. :-)
When you serve thousands of users you want a design that is as modular as possible, because one of your primary goals will be to keep the service as a whole up and running. Game servers tend to be rather complex, so there will be quite a few show-stopping bugs. Don't make your life miserable with a single point of failure (one application!).
Instead, try to build multiple processes that can run on a multitude of hosts. My humble suggestion is the following:
Make them independent, so a failing process will be irrelevant to the service.
Make them small, so that the different parts of the service and how they interact are easy to grasp.
Don't let users communicate with the game logic OR the DB directly. Write a proxy - network stacks can and will show odd behaviour on different architectures when you have a multitude of users. Also make sure that you can later "clean"/filter what the proxies forward.
Have a process that does nothing but monitor the other processes to see if they are still working properly, with the ability to restart parts (see the watchdog sketch after this list).
Make them distributable. Coordinate processes via TCP from the start or you will run into scalability problems.
If you have large landscapes, consider means to dynamically divide load by dividing servers by geography. Don't have every backend process hold all the data in memory.
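Just to make the watchdog idea concrete, here is a minimal, hypothetical C# sketch. The worker executable names are invented, and a real supervisor would add logging, restart backoff and the TCP health checks mentioned above:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Threading;

    // Hypothetical watchdog: "GameProxy.exe" and "GameLogic.exe" are
    // placeholder worker binaries, not part of any real engine.
    class Watchdog
    {
        static Process StartWorker(string path)
        {
            Console.WriteLine("starting " + path);
            return Process.Start(path);
        }

        static void Main()
        {
            var workers = new Dictionary<string, Process>
            {
                { "proxy", StartWorker("GameProxy.exe") },
                { "logic", StartWorker("GameLogic.exe") }
            };

            while (true)
            {
                Thread.Sleep(5000); // poll; real health checks would go over TCP
                foreach (var name in new List<string>(workers.Keys))
                {
                    var p = workers[name];
                    if (p.HasExited)
                    {
                        Console.WriteLine(name + " exited (" + p.ExitCode + "), restarting");
                        workers[name] = StartWorker(p.StartInfo.FileName);
                    }
                }
            }
        }
    }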
I have ported a few such engines written in C++ and C# to hosts running Linux, FreeBSD and also Solaris (on an old UltraSPARC IIi - yes, Mono still runs there :). From my experience, C# is easily fast enough, considering the ancient hardware it runs on in the case of that SPARC machine.
The industry (as far as I know) tends to use a lot of C++ for the serving work and embeds scripting languages for the actual game logic. Ah, written too much already - way cool topic.

Erlang is a language which is designed around concurrency and distribution over several servers, which is perfect for server software. Some links about Erlang and game-servers:
http://www.devmaster.net/articles/mmo-scalable-server/
http://www.erlang-consulting.com/euc2005/mmog/mmog_in_erlang.htm
I'm thinking of writing a game-server in Erlang myself.

Speaking of pure performance: if you can run Java 6, you get roughly 1:1 performance compared to optimized C++ (special cases notwithstanding; sometimes Java is faster, sometimes C++). The only problems you will have are with things like database libraries, interconnectivity, scalability and such. I believe there's a variety of good-to-great solutions available for each of these problems, but you won't find one language which solves everything for you, so I have to give you the age-old advice: choose the language you like and use that one.
Oh, you're still reading this? :) Well, here's some extra pointers.
EVE Online uses Python for its client and server side code and it's both bug-ridden and laggy as something I don't think I should write here so that'd be an example of how Python can be extended to (poorly) serve vast amounts of users.
While Java has some good to great solutions to various related problems, it's really not the best language out there for vast numbers of users; it doesn't scale well to extremes without tuning. However, there are multi-VM solutions to this which somewhat fix the issue; for example, Terracotta is said to do the job well.
While C++ is rather cumbersome, it allows for such a low-level interaction with the system that you may actually find yourself doing things you thought you couldn't do. I'm thinking of something like dynamic per-core microclustering of runtime code blocks for "filling" every possible clock cycle of the processor as efficiently as possible for maximum performance and things like that.
Mono is far behind the .NET VM/equivalent on Windows platforms, so you wouldn't be able to use the latest and fanciest features of C#. However, Windows XP (x64) OEM licenses are so laughably cheap at the moment that with a small investment you could get a bunch of those and run your code on the platform it was meant for. And don't fall for the Linux hype: Linux is your saviour only if you really know how to use it, and XP especially is pretty damn fast and stable nowadays.

What kind of performance do you need?
Twisted is great for servers that need lots of concurrency, as is Erlang. Either supports massive concurrency easily and has facilities for distributed computing.
If you want to span more than one core in a python app, do the same thing you'd do if you wanted to span more than one machine — run more than one process.

More details about this game server might help folks better answer your question. Is this a game server in the sense of something like a Counter Strike dedicated server which sits in the background and hosts multiplayer interactions or are you writing something which will be hosted on an HTTP webserver?
Personally, if it were me, I'd be considering Java or C++. My personal preference and skill set would probably lead me towards C++ because I find Java clumsy to work with on both platforms (moreso on Linux) and don't have the confidence that C# is ready for prime-time in Linux yet.
That said, you also need a pretty significant community hammering on said server before the performance of your language becomes the problem. My advice would be to write it in whatever language you can at the moment, and if your game grows to sufficient size, invest in a rewrite at that time.

You could as well use Java and compile the code using GCC to a native executable.
That way you don't get the performance hit of the bytecode engine. (Yes, I know - Java out of the box is as fast as C++; it must be just me who always measures a factor-of-5 performance difference.) The drawback is that the GCC Java front end (GCJ) does not support all of the Java 1.6 language features.
Another option would be to use your language of choice, get the code working first, and then move the performance-critical stuff into native code. Nearly all languages support binding to compiled libraries.
That does not solve your "Python does not multithread well" problem, but it gives you more choices.
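In C#, for instance, binding to a compiled library is a single P/Invoke declaration. A minimal sketch, assuming a hypothetical native library "gamephysics" that exports a step_world function (the names are made up; the interop pattern is standard .NET):

    using System;
    using System.Runtime.InteropServices;

    static class NativePhysics
    {
        // Resolves to gamephysics.dll on Windows or libgamephysics.so under Mono/Linux
        [DllImport("gamephysics")]
        private static extern int step_world(double dt);

        static void Main()
        {
            int collisions = step_world(0.016); // one 16 ms simulation step, run natively
            Console.WriteLine("collisions this tick: " + collisions);
        }
    }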

The obvious candidates are Java and Erlang:
Pro Java:
ease of development
good development environments
stability, good stack traces
well-known (easy to find experienced programmers, lots of libraries, books, ...)
quite fast, mature VM
Pro Erlang:
proven in systems that need >99.9% uptime
ability to have software updates without downtime
scalable (not only multi-core, but also multi-machine)
Contra Erlang:
unfamiliar syntax and programming paradigm
not so well known; hard to get experienced programmers for
VM is not nearly as fast as Java's
If your game server mainly works as an event dispatcher (with a bit of a database tacked on), Erlang's message-driven paradigm should be a good match.
In this day and age, I would not consider using an unmanaged language (like C or C++); the marginal performance benefits simply aren't worth the hassle.

It may depend a lot on what language your "game logic" (you may know this term as "business logic") is best expressed in. For example, if the game logic is best expressed in Python (or any other particular language) it might be best to just write it in Python and deal with the performance issues the hard way, with either multi-threading or clustering. Even though it may cost you a lot of time to get the performance you want out of Python, it will be less than the time it will take you to express "player A now casts a level 70 Spell of Darkness in a radius of 7 units affecting all units that have spoken with player B and ..." in C++.
Something else to consider is what protocol you will be using to communicate with the clients. If you have a complex binary protocol, C++ may be easier (esp. if you already have experience doing it), while JSON (or similar) may be easier to parse in Python. Yes, I know C++ and Python aren't languages you are limited to (or even considering), but I'm referring to them generally here.
It probably comes down to what language you are best at. A poorly written program which you hated writing will be worse than one written in a language you know and enjoy, even if the poorly written program was in an arguably more powerful language.

You could also look at JRuby. It comes with lots of the benefits of Java and lots of the benefits of Ruby in one neat package. You'll have access to huge libraries from both languages.

What are your objectives? Not the creation of the game itself, but why are you creating it?
If you're doing it to learn a new language, then pick the one that seems the most interesting to you (i.e., the one you most want to learn).
If it is for any other reason, then the best language will be the one that you already know best and enjoy using most. This will allow you to focus on working out the game logic and getting something up and running so that you can see progress and remain motivated to continue, rather than getting bogged down in details of the language you're using and losing interest.
If your favorite language proves inadequate in some ways (too slow, not expressive enough, whatever), then you can rewrite the problem sections in a more suitable language when issues come up - and you won't know the best language to address the specific problems until you know what the problems end up being. Even if your chosen language proves entirely unsuitable for final production use and the whole thing has to be rewritten, it will give you a working prototype with tested game logic, which will make dealing with the new language far easier.

You could take a look at Stackless Python. It's an alternative Python interpreter that provides greater support for concurrency. Both EVE Online's server and client software use Stackless Python.
Disclaimer: I haven't used Stackless Python extensively myself, so I can't provide any first-hand accounts of its effectiveness.

There is a pretty cool framework in development that addresses all your needs:
Project Darkstar from Sun. So I'd say Java seems to be a good language for game server development :-)

I know facebook uses a combination of Erlang and C++ for their chat engine.
Whatever you decide, if you choose a combination of languages, check out facebook's thrift framework for cross language services deployment. They open sourced it about a year+ back:
http://incubator.apache.org/thrift/

C++ and Java are quite slow compared to C. The language should be a tool but not a crutch.

Related

Where/When do C# and the .NET Framework fail to be the right tool?

In my non-programming life, I always attempt to use the appropriate tool for the job, and I feel that I do the same in my programming life, but I find that I am choosing C# and .NET for almost everything. I'm finding it hard to come up with (realistic business) needs that cannot be met by .NET and C#.
Obviously embedded systems might require something less bloated than the .NET Micro Framework, but I'm really looking for line of business type situations where .NET is not the best tool.
I'm primarily a C# and .NET guy since it's what I'm most comfortable in, but I know a fair amount of C++, PHP, VB, PowerShell, batch files, and Java, as well as being versed in the web technologies (JavaScript, HTML, and CSS). But I'm open-minded about my skill set, and I'm looking for cases where C# and .NET are not the right tool for the job.
I choose .NET and C# because I'm comfortable with it, but I'm looking for cases where it isn't appropriate.
C# and the .NET Framework might not be the best choice for a hard real-time application. Your application will stall on the first garbage collection, and real-time systems often have memory constraints that make the full-blown .NET Framework unsuitable.
That said, there are ways around these problems, see here: http://www.windowsfordevices.com/c/a/Windows-For-Devices-Articles/Adding-Realtime-to-Windows-Embedded/
C# might not be a good choice for complex algorithms, especially those benefiting from parallelism, which would be better expressed using a functional language like F#.
See: https://stackoverflow.com/questions/141985/why-should-a-net-developer-learn-f
There isn't, really, all that much difference between the problem domains served by different programming languages. Rather, the choice usually comes down to
What languages do you/your team already know you can be productive in?
What is available in the libraries (built-in or available from elsewhere) for the language?
The answer to this question will therefore depend on you. For example, if I personally was doing a quick text processing task I'd whip it up in Perl, because I know Perl well and can do that sort of task efficiently: if you asked me to do it in C# I'd say that was the wrong tool for me, because I can do it quicker in Perl.
If you are looking to learn and diversify your programming toolbox -- which is a good idea -- then rather than asking where C# is the wrong tool, you need to ask which language is most appropriate for each task, and make an effort to learn that language better. Otherwise C# will always be the best tool for all jobs, for you.
You've asked an interesting question.
I'll rephrase it: Why Object Oriented? And why .NET? And when not?
I suppose the thing to keep in mind is why OO is so popular. In the modern world, much of the demand for programs is essentially for business. This is why object oriented paradigms are so popular; it is often the most straightforward way to turn a business problem into a program. You basically take a look at a business, break down what the interacting parts (people, machines, places, etc) are, and write something that mimics it in code. So OO is popular because it allows you to mimic many real world situations.
.NET I suspect is popular because it seems so comprehensive. You get loads of components with it, and all you're really doing is mimicking a business issue by writing some connective tissue between these components. Add to that the fact that there's a huge community of people using it already, and the network effect speaks in .NET's favour.
Finally, when would you NOT use .NET?
If your problem is not a business problem, i.e. isn't merely an issue of connecting some premade components, you might need something different. For instance, if you're writing a driver for a new piece of hardware, that driver is really a layer below the business layer, because
1) It needs to work regardless of what the composition of components is used for
2) The business layer doesn't really care how it works
There's plenty of programming problems where you wouldn't use an OO model, but I suspect OO is useful because it connects all the parts (which aren't OO, like databases and drivers) to create a whole.
I would not use C# for applications that make heavy use of resources and need close access to hardware, i.e. high-profile computer games.
Real-time applications (let's say an app that monitors the temperature in a nuclear plant, unless of course Homer Simpson is running it) have been mentioned, but not games.
World-class, 3D-intensive, AI-intensive games are best served by C++ (at least their cores), because you need to be close to the procedural paradigm and the hardware, and you need to tell the computer what to do and how to do it, without anything in the middle (the CLR).
C# and .NET are not the right solution if you work in a heterogeneous environment with many platforms. For all practical purposes .NET is a Microsoft-only solution (yes, I know about Mono and I stand by my statement) that locks you in to one vendor and hardware architecture. If your workplace has Macs and Linux boxes and SPARC servers and PowerPC blade servers, etc. etc. etc. then C#/.NET is not going to do you a whole lot of good.
You also have the problem of vendor lock-in. Let's say you write a server application in C# and .NET. Now let's say ARM's recent foray into server-grade components pans out and ARM-equipped server kit hits the market like a thunderbolt. If you use C#/.NET for your app you're hosed until Microsoft ports their stuff over to the ARM-based architectures (if ever -- NT once supported many more architectures than it does now: the trend is toward shrinking the Windows ecosphere, not expanding it). By locking yourself in to one vendor-specific technology you've made yourself less able to survive market shifts.
They move the programming paradigm away from small interoperating tools toward large monolithic applications, because their load times, minimum working sets and interactions with the file system are much higher than those of programs written in compiled-to-host languages.
That's not always a bad thing, but it does mean that the highly efficient (albeit high learning curve) command-line based programming models - a la UNIX - become less used, which is a shame in my opinion.
London Stock Exchange was originally written in .NET
http://blogs.computerworld.com/london_stock_exchange_suffers_net_crash
You can gain some insight into why .NET, and any runtime with non-deterministic memory release (a.k.a. garbage collection) for that matter, makes things like real-time systems a poor fit; see the following link. The humor is a bonus.
http://tech.slashdot.org/tech/08/09/08/185238.shtml
And they ditched .NET and chose MillenniumIT, which uses C++
http://linux.slashdot.org/story/09/10/06/1742203/London-Stock-Exchange-Rejects-NET-For-Open-Source?from=rss
For anything like high-volume money transactions or life-critical matters (an embedded device for a car engine), you cannot let garbage collection just kick in randomly and incessantly.
[EDIT]
Why the downvote? Are there any Microsoft shills here? I was just talking in general terms about the underlying paradigm .NET was architected on (managed and garbage-collected). Perhaps if I had just said that the only places .NET should not be used are car engines and machines connected to humans (e.g. heart pacemakers, dialysis machines), I would not have been downvoted.
I recently watched an InfoQ presentation where Neal Ford presents a ThoughtWorks project that chose Ruby on Rails over .NET because of the alleged greater flexibility of Rails and Ruby. Take a look there for their take on the subject.

How to decide between MonoTouch and Objective-C? [closed]

After sitting through a session today on Mono at a local .Net event, the use of MonoTouch was 'touched' upon as an alternative for iPhone development. Being very comfortable in C# and .Net, it seems like an appealing option, despite some of the quirkiness of the Mono stack. However, since MonoTouch costs $400, I'm somewhat torn on if this is the way to go for iPhone development.
Anyone have experience developing with both MonoTouch and Objective-C? If so, is developing with MonoTouch that much simpler and quicker than learning Objective-C, and in turn worth the $400?
I've seen this question (and variations on it) a lot lately. What amazes me is how often people respond, but how few answer.
I have my preferences (I enjoy both stacks), but this is where most "answers" start to go wrong. It shouldn't be about what I want (or what anybody else wants).
Here's how I'd go about determining the value of MonoTouch - I can't be objective, obviously, but I think this is pretty zealotry-free:
Is this for fun or business? If you wanted to get into consulting in this area, you could make your $399 back very quickly.
Do you want to learn the platform inside-out, or do you "just" want to write apps for it?
Do you like .Net enough that using a different dev stack would take the fun out of it for you? Again, I like both stacks (Apple and Mono), but for me MonoTouch makes the experience that much more fun. I haven't stopped using Apple's tools, but that's mainly because I really do enjoy both stacks. I love the iPhone, and I love .Net. In that case, for me, MonoTouch was a no-brainer.
Do you feel comfortable working with C? I don't mean Objective-C, but C - it matters because Objective-C is C. It's a nice, fancy, friendly OO version, but if pointers give you the heebie-jeebies, MonoTouch is your friend. And don't listen to the naysayers who think you're a dev wuss if it happens that you don't like pointers (or C, etc.). I used to walk around with a copy of the IBM ROM BIOS Pocket Reference, and when I was writing assembly and forcing my computer into funny video modes and writing my own font rendering bits for them and (admittedly trashy) windowing systems, I didn't think the QuickBasic devs were wusses. I was a QuickBasic dev (in addition to the rest). Never give in to nerd machismo. If you don't like C, and if you don't like pointers, and if you want to stay as far away from manual memory management as possible (and, to be fair, it's not bad at all in ObjC), then... MonoTouch. And don't take any guff for it.
Would you like to target users or businesses? It doesn't matter much to me, but there are still people out there on Edge, and the fact is: you can create a far smaller download package if you use Apple's stack. I've been playing around with MonoTouch, and I have a decent little app going that, once compressed, gets down to about 2.7 MB (when submitting your app for distribution, you zip it - when apps are downloaded from the store, they're zipped - so when figuring out if your app is going to come in under the 10MB OTA limit, zip the sucker first - you WILL be pleasantly surprised with MonoTouch). But, MT happiness aside, half a meg vs. nearly three (for example) is something that might be important to you if you're targeting end users. If you're thinking of enterprise work, a few MB won't matter at all. And, just to be clear - I'm going to be submitting a MT-based app to the store soonishly, and I have no problem whatsoever with the size. Doesn't bother me at all. But if that's something that would concern you, then Apple's stack wins this one.
Doing any XML work? MonoTouch. Period.
String manipulation? Date manipulation? A million other little things we've gotten used to with .Net's everything-AND-the-kitchen-sink frameworks? MonoTouch.
Web services? MonoTouch.
Syntactically, they both have their advantages. Objective-C tends to be more verbose where you have to write it. You'll find yourself writing code with C# you wouldn't have to write with ObjC, but it goes both ways. This particular topic could fill a book. I prefer C# syntax, but after getting over my initial this-is-otherworldly reaction to Objective-C, I've learned to enjoy it quite a bit. I make fun of it a bit in talks (it is weird for devs who're used to C#/Java/etc.), but the truth is that I have an Objective-C shaped spot in my heart that makes me happy.
Do you plan to use Interface Builder? Because, even in this early version, I find myself doing far less work to build my UIs with IB and then using them in code. It feels like entire steps are missing from the Objective-C/IB way of doing things, and I'm pretty sure it's because entire steps are missing from the Objective-C/IB way of doing things. So far, and I don't think I've sufficiently tested, but so far, MonoTouch is the winner here for how much less work you have to do.
Do you think it's fun to learn new languages and platforms? If so, the iPhone has a lot to offer, and Apple's stack will likely get you out of your comfort-zone - which, for some devs, is fun (Hi - I'm one of those devs - I joke about it and give Apple a hard time, but I've had a lot of fun learning iPhone development through Apple's tools).
There are so many things to consider. Value is so abstract. If we're talking about cost and whether it's worth it, the answer comes down to my first bullet item: if this is for business, and if you can get the work, you'll make your money right back.
So... that's about as objective as I can be. This is a short list of what you might ask yourself, but it's a starting point.
Personally (let's drop the objectivity for a moment), I love and use both. And I'm glad I learned the Apple stack first. It was easier for me to get up and running with MonoTouch when I already knew my way around Apple's world. As others have said, you're still going to be working with CocoaTouch - it's just going to be in a .Net-ized environment.
But there's more than that. The people who haven't used MonoTouch tend to stop there - "It's a wrapper blah blah blah" - that's not MonoTouch.
MonoTouch gives you access to what CocoaTouch has to offer while also giving you access to what (a subset of) .Net has to offer, an IDE some people feel more comfortable with (I'm one of them), better integration with Interface Builder, and although you don't get to completely forget about memory-management, you get a nice degree of leeway.
If you aren't sure, grab Apple's stack (it's free), and grab the MonoTouch eval stack (it's free). Until you join Apple's dev program, both will only run against the simulator, but that's enough to help you figure out if you vastly prefer one to the other, and possibly whether MonoTouch is, for you, worth the $399.
And don't listen to the zealots - they tend to be the ones who haven't used the technology they're railing against :)
There is a lot of hearsay in this post from developers who have not tried both MonoTouch and Objective-C. It seems to mostly be Objective-C developers who have never tried MonoTouch.
I am obviously biased, but you can check out what the MonoTouch community has been up to in:
http://xamarin.com
There you will find several articles from developers that have developed in both Objective-C and C#.
So, my answer to a previous similar question is to learn Objective-C. (Also, don't forget about debugging support)
This will probably offend some, but to be honest: if you are going to do any serious development, you should learn Objective-C. Not knowing Objective-C in iPhone development will just be a hindrance. You won't be able to understand many examples, and you have to deal with the quirks of Mono, whereas with a working knowledge of Objective-C you could get a lot more out of the platform documentation. Personally, I don't understand the position that favors increasing the amount of information you need in order to use Mono over the platform's native language. It seems somewhat counterproductive to me. I think that if learning a new language is a very expensive proposition for you, it may be worthwhile spending some time on fundamental programming concepts so that learning new languages becomes a fairly cheap proposition.
Another user also wrote this:
MonoTouch is easier for you now, but harder later.
For example, what happens when new seeds come out you need to test against but break MonoTouch for some reason?
By sticking with Mono, any time you are looking up resources for frameworks you have to translate mentally into how you are going to use them with Mono. Your app binaries will be larger, your development time not that much faster after a few months into Objective-C, and other app developers will have that much more of an advantage over you because they are using the native platform.
Another consideration is that you are looking to use C# because you are more familiar with the language than Objective-C. But the vast majority of the learning curve for the iPhone is not Objective-C, it is the frameworks - which you will have to call into with C# as well.
For any platform, you should use the platform that directly expresses the design philosophy of that platform - on the iPhone, that is Objective-C. Think about this from the reverse angle, if a Linux developer used to programming in GTK wanted to write Windows apps would you seriously recommend that they not use C# and stick to GTK because it was "easier" for them to do so?
Using Mono is not a crutch. There are many things it adds to the iPhone OS: LINQ, WCF, code sharable between a Silverlight app, an ASP.NET page, a WPF app and a Windows Forms app; there's also Mono for Android, and it will work for Windows Mobile as well.
So you can spend a bunch of time writing Objective-C (you'll see from many comparisons that the exact same sample code is significantly shorter in C# than in Objective-C) and then DUPLICATE it all for other platforms. For me, I chose MonoTouch because the cloud app I'm writing will have many interfaces, the iPhone being only one of them. Having WCF data streaming from the cloud to a MonoTouch app is insanely simple. I have core libraries that are shared among the various platforms, and then only need to write a simple presentation layer for the iPhone/WinMobile/Android/Silverlight/WPF/ASP.NET deployments. Recreating it all in Objective-C would be an enormous waste of time, both for initial dev and for maintenance as the product continues to move forward, since all functionality would have to be replicated rather than reused.
The people who are insulting MonoTouch or insinuating that users of it need a crutch are lacking the Big Picture of what it means to have the .NET framework at your fingertips and maybe don't understand proper separation of logic from presentation done in a way that can be reused across platforms and devices.
Objective-C is interesting and very different from many common languages. I like a challenge and learning different approaches... but not when doing so impedes my progress or creates unnecessary re-coding. There are some really great things about the iPhone SDK framework, but all that greatness is fully supported with MonoTouch and cuts out all the manual memory management, reduces the amount of code required to perform the same tasks, allows me to reuse my assemblies, and keeps my options open to be able to move to other devices and platforms.
I switched. MonoTouch lets me write apps at least 3-4 times as fast (4 apps per month compared to my old 1 per month in Obj-C).
Lots less typing.
Just my experience.
If this is the only iPhone app you will ever develop, and you also have zero interest in developing Mac applications, ever, then MonoTouch is probably worth the cost.
If you think you'll ever develop more iPhone apps, or will ever want to do some Mac native development, it's probably worth it to learn Objective-C and the associated frameworks. Plus, if you're the type of programmer that enjoys learning new things, it's a fun new paradigm to study.
Personally I think you'll have a better time just learning Objective-C.
In short:
"Learning Objective-C" is not a daunting as you might think, you may even enjoy it after just the first few weeks
You are already familiar with the "C style" syntax with lots of *&(){}; everywhere
Apple has done a very good job of documenting things
You'll be interacting with the iPhone the way Apple intended, which means you'll get the benefits directly from the source not through some filter.
I have found that projects like Unity and MonoTouch are supposed to "save you time", but ultimately you'll need to learn their domain-specific conventions anyway and will have to side-step things at times. All of that will probably take you just as long (in calendar time) as learning the language you were trying to avoid. In the end you didn't save any time, and you are tightly coupled to some product.
EDIT: I never meant to imply anything negative about .NET; I happen to be a big fan of it. My point is that adding more layers of complexity just because you aren't yet comfortable with the quirky Obj-C bracket notation doesn't really make much sense to me.
2019 update: It's 7 years later. I still feel the same way if not more so. Sure, 'domain specific language' may have been the wrong term to use, but I still believe it's much better to write directly for the platform you are working with and avoid compatibility layers and abstractions as much as possible. If you are worried about code reuse and re-work, generally speaking any functionality your cross platform app needs to perform can probably be accomplished with modern web technologies.
To add to what others have already said (well!): my feeling is that you're basically doubling the number of bugs you have to worry about, adding the ones in MonoTouch to the ones already in iPhone OS. Updating for new OS versions will be even more painful than normal. Yuck, all around.
The only compelling case I can see for MonoTouch is organizations that have lots and lots of C# programmers and C# code lying around that they must leverage on iPhone. (The sort of shop that won't even blink at $3500.)
But for anyone starting out from scratch, I really can't see it as worthwhile or wise.
Three words: LINQ to SQL
Yes it is well worth the $.
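For anyone who hasn't seen it: the draw is writing statically typed queries in C# that get translated to SQL for you. A minimal sketch, with a made-up Customers table and connection string:

    using System;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Hypothetical mapping; the schema is invented for illustration.
    [Table(Name = "Customers")]
    public class Customer
    {
        [Column(IsPrimaryKey = true)] public int Id;
        [Column] public string Name;
        [Column] public string City;
    }

    class LinqToSqlDemo
    {
        static void Main()
        {
            var db = new DataContext(@"Data Source=.\SQLEXPRESS;Initial Catalog=Shop");

            // Compiled to SQL and executed on the server, not in memory
            var londoners = from c in db.GetTable<Customer>()
                            where c.City == "London"
                            orderby c.Name
                            select c.Name;

            foreach (var name in londoners)
                Console.WriteLine(name);
        }
    }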
Something I'd like to add, even though there's an accepted answer: who is to say that Apple won't just reject apps that show signs of being built with MonoTouch?
I would invest the time in Objective-C, mainly because of all the help you can get from sites like this one. One of the strengths of Objective-C is that you can use C and C++ code, and there are a lot of projects out there that are well tested.
Another thing is that your code (language of choice) will be supported by Apple. What if iOS 5.x, for instance, removes support for a third-party solution like MonoTouch? What will you tell your customers then?
Maybe it's better to use a platform-independent solution like HTML5 if you're not entirely ready to move to Objective-C?
I've been using MonoTouch for a few months now. I ported my half-finished app from Objective-C so I could support Android at some point in the future.
Here's my experience:
Bad bits:
Xamarin Studio. Indie developers such as myself are forced into using Xamarin Studio. It is getting better every week, the developers are very active on the forums identifying and fixing bugs, but it's still very slow, frequently hangs, has a lot of bugs and debugging is pretty slow also.
Build times. Building my large (linked) app to debug on a device can take a few minutes, this is compared to XCode which deploys almost immediately. Building for the simulator (non-linked) is a bit quicker.
MonoTouch issues. I've experienced memory-leak issues caused by the event handling, and have had to put in some pretty ugly workarounds to prevent the leaks, such as attaching and detaching events when entering and leaving views (see the sketch after this list). The Xamarin developers are actively looking into issues like this.
3rd party libraries. I've spent quite some time converting/binding Objective-C libraries for use in my app, although this is getting better with automated tools such as Objective Sharpie.
Larger binaries. This doesn't really bother me but thought I'd mention it. IMO a couple of extra Mb is nothing these days.
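For the event-handling leaks mentioned above, the workaround looks roughly like this; a sketch only, with an invented button and handler:

    using System;
    using MonoTouch.UIKit;

    public class EditorViewController : UIViewController
    {
        UIButton saveButton; // assume it is created in ViewDidLoad or loaded from a XIB

        public override void ViewWillAppear(bool animated)
        {
            base.ViewWillAppear(animated);
            saveButton.TouchUpInside += OnSave; // subscribe only while on screen
        }

        public override void ViewWillDisappear(bool animated)
        {
            saveButton.TouchUpInside -= OnSave; // detach so the controller can be collected
            base.ViewWillDisappear(animated);
        }

        void OnSave(object sender, EventArgs e)
        {
            // handle the tap
        }
    }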
Good bits:
Multi-platform. My friend is happily creating an Android version of my app from my core codebase, we're developing in parallel and are committing to a remote Git repository on Dropbox, it's going well.
.Net. Working in C# .Net is much nicer than Objective C IMO.
MonoTouch. Pretty much everything in iOS is mirrored in .NET, and it's fairly straightforward to get things working.
Xamarin. You can see that these guys are really working to improve everything, making development smoother and easier.
I definitely recommend Xamarin for cross platform development, especially if you have the money to use the Business or Enterprise editions that work with Visual Studio.
If you're solely creating an iPhone app that will never be needed on another platform, and you're an Indie developer, I'd stick with XCode and Objective C for now.
As someone with experience with both C# as well as Objective-C, I'd say for most people Xamarin will be well worth the money.
C# is a really well-designed language, and the C# APIs are well designed too. Of course the Cocoa Touch APIs (including UIKit) have great design as well, yet the language could be improved in several ways. When writing in C# you will likely be more productive than writing the same code in Objective-C. This is due to several reasons, but some would be:
C# has type inference. Type inference makes writing code quicker, since you don't have to "know" the type on the left-hand side of an assignment. It also makes refactoring easier and safer.
C# has generics, which will reduce errors compared to equivalent Objective-C code (though there are some work-arounds in Objective-C, in most situations developers will avoid them).
Recently Xamarin added support for async/await, which makes writing asynchronous code very easy (a short sketch combining these features follows this list).
You'll be able to reuse part of the code base on iOS, Android and Windows Phone.
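A tiny sketch of those features working together (illustrative only; the URLs are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class FeatureDemo
    {
        static void Main()
        {
            FetchAll().Wait();
        }

        static async Task FetchAll()
        {
            // Type inference: no need to repeat List<string> on the left
            var urls = new List<string> { "http://example.com/a", "http://example.com/b" };

            // Generics: statically typed collections and clients, no casts
            var client = new HttpClient();

            foreach (var url in urls)
            {
                // async/await: the calling thread is free while the request is in flight
                string body = await client.GetStringAsync(url);
                Console.WriteLine(url + ": " + body.Length + " chars");
            }
        }
    }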
MonoTouch largely implements the CocoaTouch APIs in a very straightforward way. E.g.: if you've got experience with CocoaTouch, you'll know where to find the classes for controls in MonoTouch (MonoTouch.UIKit contains classes for UIButton, UIView, UINavigationController, etc.; likewise MonoTouch.Foundation has classes for NSString, NSData, etc.).
Xamarin will give users a native experience, unlike solutions like PhoneGap or Titanium.
Now, Objective-C has some advantages over C#, but in most situations writing apps in C# will generally result in less development time, cleaner code, and less work to port the same app to other platforms. One notable exception might be high-performance games that rely on OpenGL.
The cost of the MonoTouch library is entirely beside the point. The reason you shouldn't use Mono for your iPhone apps, is that it is a crutch. If you can't be bothered to learn the native tools, then I have no reason to believe that your product is worth downloading.
Edit: 4/14/2010 Applications written with MonoTouch aren't eligible for the iTunes Store. This is as it should be. Apple saw plenty of shallow ports on the Mac, using cross-platform toolkits like Qt, or Adobe's own partial re-implementation of the System 7 toolbox, and the long and short of it is they're just not good enough.

Migrating a project from C# to Java

With some changes in the staffing at the office, the level of C# expertise has dropped off precipitously and there are now more Java developers. It has gotten to the point where the higher-ups are considering moving an existing .NET project written in C# into the Java world.
Aside from the obvious problem of starting completely from scratch what are the possible ways that this company can accomplish a successful move of development on a project from .NET C# into Java?
Here are things to consider:
Is this big project? If Yes, try to stick with C#
Is this medium sized project with components? If No, try to stick with C#
Is this small project meant to be deployed on windows only? If yes, try to stick with C#
Is this old source code? If Yes, try to stick with C#
Do you use windows OS specific APIs? If Yes, try to stick with C#
Do you use any third party APIs without Java counterpart? If Yes, try to stick with C#
Do you use .NET in depth (data binding, user controls, etc.)? If yes, try to stick with C#
Is the migration time more acceptable than hiring or retraining C# developers? If no, try to stick with C#
Do you think end users will not be receptive of changes, if you are to use Java framework which will change presentation? If yes, try to stick with C#
Check the commercial considerations
If you decide to convert:
Go per component
Go per layer
Have lots of tests
Check if there are tools to help (however small the help may be) with the migration
Just to add to Brian and Eric's opinions: picking up C# should be straightforward for a Java developer, in my opinion. They are conceptually very similar languages, and I would suggest training your Java developers to gain some C# skills so you won't be forced to go through the hassle of a migration process.
I subscribe to Joel's view that a total rewrite is almost always a mistake. Other posters are right: C# and Java are similar enough that any competent Java developer should be able to become competent in C# in a matter of weeks or months. That's not to say they will be experts. That takes longer but as long as you have some C# developers who can guide the process then you should be OK.
It's hard to comment on whether or not such a transition is a good or bad idea without knowing specifics of your application: size, type of application, industry and so on.
I would be extremely hesitant about such a switch because, in my humble opinion, C# is now a much more modern language than Java, and I say this as someone who has been a Java developer for over a decade (since the 1.0.2/1.1 days).
That's not to say that Java is bad. It's not. But Sun does have a cloud hanging over it and has demonstrated an unwillingness or inability to drive the platform forward in recent years.
Regardless of the languages involved, the management of this company sounds insane. For anything other than a trivial application, how can it be economically sensible to rewrite an entire code base from scratch instead of just hiring a single person with some skills in the right language? Is this a business with that well-known problem: too much spare cash?!
How long has the existing code been in development? If it's barely started, I could understand this. If it's seen a release and has active users, it will never make sense to throw it away. If you donated the C# code to a start-up with the right skills, think how much of a head start they would have over you.
Before you finish converting the .NET project into Java, all those Java developers who were part of the conversion project will have learned C#. At that point you no longer need to convert it to Java (and you can throw away all the Java code produced in the conversion), because now you have a development team which can do both Java and C#. Problem solved. :D
Have a look at Net2Java, which purports to assist in converting your code from C# to Java. I doubt it'll be perfect, but it's one way to remove a lot of drudgery from the task, leaving you with the kinks of incompatible framework calls and language features to iron out.
Once you've done that, your task is like any other large migration project: test, test and test again. Unit tests, system integration tests, then end-user tests. You should have the tests you used with the original application already in place; apart from the unit tests, they will still be relevant.
If there are any components that are already isolated or any of it uses a service-oriented architecture, you could conceivably migrate one component at a time (where each individual component is a rewrite) and still have the components talk to one another using the same interoperable network protocols. Probably depends on what type of app we're talking about.
Make sure you have tons of tests, because such a migration will bite you where you expect it least.
Do you have more .NET or more Java applications in production? If you already have a substantial investment in .NET servers and applications, why not ask for volunteers among the Java developers to move to .NET? The language and syntax are very similar, so the hard part would be learning the framework, and unless they spend all their time doing UI development, even learning the framework is not that hard.
In our office we have a number of very good developers who move back and forth between Java and .Net as needed.
I am not a Java expert, but from my experience working with Java code while being a C# fan, the following are some of the possible headaches:
Generics are implemented differently in Java (type erasure) and C# (kept at runtime); see the sketch after this list.
Boxing/Unboxing behaviors are different between Java and C#
Java class naming conventions + lots of generated C# code
String handling (i.e. Unicode/ASCII concerns) can be problematic depending on the quality of Java/C# code being ported over.
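A small C# sketch of the first two items, for anyone doing such a port:

    using System;
    using System.Collections.Generic;

    class MigrationGotchas
    {
        static void Main()
        {
            // Reified generics: C# keeps type arguments at runtime, so these
            // runtime types differ. In Java, type erasure gives both ArrayLists
            // the same runtime class, so the equivalent comparison is true.
            var ints = new List<int>();
            var strs = new List<string>();
            Console.WriteLine(ints.GetType() == strs.GetType()); // False

            // Boxing: List<int> stores unboxed ints (no Integer wrappers as in
            // Java), but assigning a value type to object still boxes it.
            object boxed = 42;        // box
            int unboxed = (int)boxed; // unbox; a wrong target type would throw
            Console.WriteLine(unboxed);
        }
    }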
Personally, I don't think writing from scratch is a bad idea at all, since you already have a working architecture.
In order to prove it to management, you always need to talk in terms of ROI and numbers. Show them that moving these applications will take a tremendous amount of time and QA resources, and can easily take a back seat if it gets de-prioritized when some other project or new development takes importance.
I had success when I showed them the timelines, ROI, work involved, money involved, etc.
So now coming to the actual point: I do think the Java developers would be able to support C#, unless they have some fundamental mental block against Microsoft technologies.
Possibly you could use jni4net, an open-source bridge?
Or see the list of other options I know of.
I'm somewhat surprised no one has even suggested rejecting the migration.
I do not believe a C# developer can be forced to switch to Java (or vice versa) just because he was told to (well, if he's threatened with a gun, then maybe). It takes much time, exercise and passion to master even one technology stack. You just can't start overnight with a new technology and expect to provide the same quality.
I'd personally not bother until told to start migration. At which point I'd tell the manager that I'm .NET guy and won't switch to another technology just because they decided to.
As for the technical side, it's not the language syntax that differs so much as the libraries and their features. Of course, if all the latest bells and whistles of .NET 3.5 have been in extensive use, then the language difference will present you with a real challenge.
That's certainly a funny way to do it: just decide to migrate applications from .NET to Java. Someone has no idea of the hassle involved...
I realize that this is an old question but to anyone else going down this path, you could try this open source C# to Java Converter:
http://www.cs2j.com/

PHP/Rails/Django/ASP websites should have been written in C++?

I was looking at a SO member's open source project. It was a web framework written in C++.
Now you are all probably ready to respond about how C++ is a horrible language to do websites in, and that in websites, the bottleneck is in the database.
But... I read this post:
http://art-blog.no-ip.info/cppcms/blog/post/42
and in there he makes the case that in a large site such as Wikipedia, the database servers make up only 10% of all servers. Therefore C++ would be better than languages like PHP, Python, Ruby or C#.
Are his points valid?
The problem with the article you link to is that the author clearly doesn't really know what he's talking about when he asks where the "bottleneck" is; the fact that someone has more web servers than database servers doesn't mean "the database can't be where the problem is". What's generally meant by "the database is the bottleneck" is the same thing that's been learned by everyone who ever does run-time profiling of a web application.
Consider an application which takes half a second to return a full response. Suppose you sit down and profile it, and find that that half second of processing time breaks down as follows:
Parsing incoming request: 50ms
Querying database: 350ms
Rendering HTML for response: 50ms
Sending response back out: 50ms
If you saw a breakdown like that, where database queries constitute 70% of the actual running time of the application, you'd rightly conclude that the database is the bottleneck. And that's exactly what most people find when they do profile their applications (and, generally, the database so completely dominates the processing time that the choice of language for the rest of the processing doesn't make any difference anyone will notice).
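Numbers like those come from nothing fancier than timing each stage of a request. A minimal sketch (C# here, but any language's profiler does the same job; the sleeps stand in for real work):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class RequestProfiler
    {
        static void Time(string stage, Action work)
        {
            var sw = Stopwatch.StartNew();
            work();
            Console.WriteLine(stage + ": " + sw.ElapsedMilliseconds + " ms");
        }

        static void Main()
        {
            Time("parse request",  () => Thread.Sleep(50));
            Time("query database", () => Thread.Sleep(350));
            Time("render HTML",    () => Thread.Sleep(50));
            Time("send response",  () => Thread.Sleep(50));
        }
    }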
The number of database servers involved turns out not to matter too much; the famous quote here is that people like the author of the post you've linked are the types who hear that it takes one woman nine months to have a baby, and assume that nine women working together could do it in one month. In database terms: if a given query takes 100ms to execute on a given DB, then adding more DB servers isn't going to make any one of them be able to execute that query any faster. The reason for adding more database servers is to be able to handle more concurrent requests and keep your DB from getting overloaded, not to make isolated requests go any faster.
And from there you go into the usual dance of scaling an application: caching to cut down on the total time spent retrieving data or rendering responses, load-balancing to increase the number of concurrent requests you can serve, sharding and more advanced database-design schemes to keep from bogging down under load, etc., etc.
But, you'll note, none of this has anything whatsoever to do with the programming language in use because, once again, the amount of time spent or saved by other factors grossly outweighs the amount of time gained or lost by a "fast" or a "slow" language (and, of course, there's really no such thing; so much depends on the problem domain and the skill of the programmer that you just can't have a meaningful general comparison).
Anyway, this is getting kind of long and rambling, so I'll just wrap it up with a general guideline: if you see someone arguing that "you should build in Language X because it runs faster", it's a dead giveaway that they don't really know anything about real-world performance or scaling. Because, after all, if it just came down to "write in the fastest language", they'd be recommending that we all use assembly :)
Servers are a one-time fixed cost of a few grand. Programmer time costs a lot more than that. Sure, writing websites in C++ would reduce hardware costs, but would greatly slow down development. So if you can shave one man-month of time off your development by using Ruby instead of C++, that pays for an extra server.
Better means a lot more than "faster".
There are so many more problems you face when writing in compiled, statically typed languages like C++, and these can all affect development. Some of the primary reasons scripting languages like Ruby or PHP were invented is so that we as programmers can get more efficiency out of the languages and toolsets we work in.
Yes, websites would be faster if they were written in a language such as C++. Yes, they could serve more people, be more scalable and be more efficient. But is this good enough reason to lose out on all the benefits that interpreted languages give us? Programmer happiness, development time, ease of use, portability, and so many more I can't mention.
The right tool for the right job, and for C++, website development isn't it.
I would be reluctant to say that servers are a one-time cost of a few grand, since some cost hundreds of thousands, and I would venture to say millions. A number of sources suggest that the biggest cost to the IT industry is hardware, not manpower. But to compare the languages we need to compare the languages themselves, not the hardware.
The idea behind languages like Ruby, Python, PHP, and Groovy is essentially Rapid Application Development (RAD). The frameworks, Ruby on Rails, Django, CakePHP, and Grails are there to better facilitate RAD. The languages are easy to use and enable developers to setup and develop with little cost and the timeline involved is reduced in comparison to other languages or setups.
Is it right or wrong? It's all personal opinion but in the end the needs of the project will define which tool set is best suited. If your application has low traffic and you want to have it developed and live in a few months then it would be ideal to use one of the previously mentioned languages and/or framework.
But what about C++, .NET, and J2EE? There's a place for everything. These tend to have higher up-front costs associated with the time and energy needed to architect the project, set up the development environment, and do the actual development. But the languages and the frameworks built on them are better suited for scalability, to accommodate heavy traffic or computation.
Look at Facebook as an example. The original site was prototyped and deployed as a PHP application and the user base was relatively small. As the site grew into the monster we know it as today they scaled their application by implementing J2EE on the back end while using the existing PHP scripting for the front end.
As someone with experience in J2EE development and Python/PHP there is a very apparent set of advantages and disadvantages. I can build a blog with PHP in a few days ready for public access but the same project in J2EE may take much longer.
Forgive my terminology, but enterprise languages (J2EE, .NET) require a significant amount of configuration and deployment effort. Ruby, PHP, and Python do not: you simply open Notepad, write your code, save it with the correct extension, and you're ready to upload.
Does that help?
Programmer happiness, development time, ease of use, portability, and so many more I can't mention
In fact... it is not such a horrible and slow process to develop in C++, given the right tools.
The wiki that this project runs on was written and brought up in a few days (and yes, in C++)... Not bad for the horrible C++ language ;-)
Take a look:
First Commit According to SVN Logs: 24/10/2008 http://cppcms.svn.sourceforge.net/viewvc/cppcms/wikipp/trunk/Makefile.am?view=log
First production page according to history: 30/10/2008 http://art-blog.no-ip.info/wikipp/en/changes/9
I don't think that's a slow pace for evenings-only work to create such a wiki: http://art-blog.no-ip.info/wikipp/en/page/main
No, his points aren't valid. It isn't that one is better than the other; take C# and Java, for instance: they're just more advanced, modern languages that were designed with C++'s problems in mind.
It doesn't make any sense to bother with manual memory management if you don't have to. Also, the various web-framework languages have so many components, controls, modules, and other pieces, open source or otherwise, already written for them by other developers that you would be reinventing wheels that exist in other languages all over the place.
I'll always have a place in my heart for C++, but it's kind of idiotic or troll-worthy to suggest you're better off writing websites in C++.
(EDIT: Maybe if C++ is the only language you know, or you have a lot of self-made components in C++ for the web; but you're still probably better off learning a C++-descended language for website development, which is pretty easy to do once you know C++/C.)
His points are NOT valid for 99% of web applications. Web applications benefit from rapid iteration and friendly interfaces, both of which are better achieved with languages like PHP, Python and Ruby. If you are good or lucky enough to develop a highly used service, you will have little trouble engineering it to scale.
Last I checked, PHP, Ruby, and Python are all written in C. Doesn't that count?

Why would you want to use C# if it's slower than C++? [closed]

I'm looking for a new language to learn after C++ and Java. I was going to try C#, but a bunch of people say it's really slow because it's a high-level language. So why would anybody use C#? Isn't C++ much faster? Does it make development easier but just produce a slower final product?
Also, what can C# be used for? You use it with a lot of .NET stuff on Windows, and with ASP.NET, but in what other situations would one use C#? Will there be a lot of job opportunities for it?
Who exactly is this "bunch of people"? What are they comparing it against?
For the vast majority of things, C++ is not "much faster" than C#. It certainly has benefits in various situations, particularly where you want more deterministic memory handling, but in my experience the bottleneck in most applications isn't in places where C++ would help. As spoulson says, a lot of performance is in the design instead of the exact implementation - and there, it helps to be able to try different designs easily.
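One note on the deterministic point, since it trips people up: the garbage collector reclaims memory whenever it pleases, but for non-memory resources C# does give you deterministic cleanup through using/IDisposable. A minimal sketch (the class name and "log.txt" are made up for illustration):

    using System;
    using System.IO;

    class DisposeDemo
    {
        static void Main()
        {
            // 'using' guarantees Dispose() runs when the block exits,
            // so the file handle is released deterministically, even though
            // the memory itself is reclaimed later by the garbage collector.
            using (StreamWriter writer = new StreamWriter("log.txt"))
            {
                writer.WriteLine("deterministic resource cleanup");
            }
        }
    }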
Why would we use C# when it's a bit slower than C++? Because it's generally reckoned (i.e. some disagree :) to be a lot easier to develop in without shooting yourself in the foot.
As for what C# can be used for... what do you want to use it for? Unless you want to develop drivers and kernels, it may well be fine for you. (Even OS development has some folks using C#...)
Job opportunities? Loads.
Downsides? Well, .NET itself is only available on Microsoft platforms. There's Mono, but it doesn't have quite the same degree of portability as Java (no doubt another "slow" language according to the same bunch of people).
Code written in assembly can be blazingly fast. Why not just write in assembly?
Don't believe everything you hear. C# has been plenty fast for all my projects. Typically, performance is more a factor of design than raw platform performance.
I'd have to say the people you were talking to simply don't know what they're talking about. Plain and simple.
Many enterprise-level applications are built on top of C# and the other .Net languages. There is nothing inherently slow about them. Yes, they tend to have slower startup times, but that's pretty much where it ends.
I noticed you mentioned Java in your list of languages. If you're comfortable with the speed of Java, C# will not present any issues; generally speaking, C# performs at least as well as Java on many different kinds of benchmarks.
My last company was founded by 5 C++ veterans with 15+ years experience each. They spent over a month building a certain Windows service. One of them found and dabbled in C#. Within a week he'd gotten further than the collective had in their month. Shortly after, they all switched to C#.
Why C# if it may perform slower: what price do you put on that degree of rapid development?
Why should it be slow?
Indeed, C# is compiled to 'Intermediate Language', which is JIT'ed at runtime, but this can actually give you a performance advantage, since the runtime can generate code optimized for the exact platform it is running on...
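To see what the JIT costs (and that you only pay it once per method), here's a tiny sketch; nothing canonical, just a stopwatch around two calls to a made-up method. The first call includes the one-off compilation of IL to native code; the second runs the already-compiled result:

    using System;
    using System.Diagnostics;

    class JitDemo
    {
        // Trivial work; with almost no real work to do,
        // the JIT cost dominates the first call.
        static long SumSquares(int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++)
                total += (long)i * i;
            return total;
        }

        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            SumSquares(1);  // first call: includes JIT compilation of the method
            sw.Stop();
            Console.WriteLine("First call (JIT included): {0} ms", sw.Elapsed.TotalMilliseconds);

            sw.Reset();
            sw.Start();
            SumSquares(1);  // second call: runs the already-compiled native code
            sw.Stop();
            Console.WriteLine("Second call (native code): {0} ms", sw.Elapsed.TotalMilliseconds);
        }
    }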
Depending on the application that you want to write, the 'speed' of the language will have a minor impact.
The performance of your application will mostly be determined by the way you design it and by how well you use the tools and technologies involved, etc...
Sure, C# is not a silver bullet, and there are projects where you shouldn't use it simply because it is not the right tool for the job, but it will do just fine for most business/enterprise apps.
I haven't found many instances where C# isn't a good choice of language, or where .Net (or Mono) isn't a decent platform. Notable exceptions are kernel-level development and drivers; there are areas where low-level control and raw performance are needed. For most, if not all, business or enterprise applications, C# is one of the better choices for development. It's well supported, works with many other systems, and has libraries, communication channels, and components already available, not to mention being a fairly nice language (esp. 3.5) to work with.
ASP.Net isn't a bad platform, though I generally find the object/control stack to be one of its shortcomings for complex interactions. I think ASP.Net MVC is a better fit for more scalable web-based applications. Just the same, it's better than many other systems I've worked with in the past.
In terms of service layers and even GUI development it's pretty nice. I've got a lot more experience with web-based applications and service/communications/business layers than with desktop GUI applications, so I can't comment much there. I feel a lot of GUI development comes down to the IDE/toolkit more than the particular language.
As for being slow: you specifically list Java, and in most instances C#/.Net is as fast as or faster than Java. IMHO, development simply goes more smoothly with C# (Visual Studio) than with Java (Eclipse). For web-based apps, I like ASP.Net MVC (and even ASP.Net) over the Java equivalents. That's just me, though.
The people you talked to don't know what they are talking about. C# is a very similar language to Java, all told; it has most of the same benefits and drawbacks. The way it all works is pretty similar (Java/C# is compiled into an intermediate language/bytecode that is interpreted or JIT compiled to native code, with various similar optimizations that you don't need to worry about as a programmer). It's used in a lot of the same situations as Java, and is really aimed for the same market. It's moving a lot faster and bringing in a lot of innovation as a language, but it's (in practice) pretty much Windows-only, if that's a concern of yours. The job market is similar. Both are very popular languages.
As for a language to learn, I would suggest something DIFFERENT. You say you know C++ and Java; C# shouldn't be hard to pick up. Potential employers will know this. Try Scala or Python. Both will give you some new perspectives on things (C# not so much), and make you a better programmer by teaching you new ways to think, rather than just adding another tool to your box.
C# is not always slower - in many cases, it can perform just as well as any language you listed. Usually the algorithm has more to do with the speed than the choice of language.
However, C# is very expressive, and has a great base class library to work with, and super-fast compilation. This means that it's very easy to work with, and can allow you to be much more productive than many other languages, especially C++. For example, I just had a small project that I would have budgeted 1 man-week for in C++, and we finished it up in less than a day in C#, mainly because the base class libraries simplified so many of the tasks.
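To give a flavour of what the base class library buys you (not the actual project above, just an illustrative sketch): counting the ten most common words in a file is a handful of lines with the BCL and LINQ. The class name is invented; it expects a file path as its first command-line argument:

    using System;
    using System.IO;
    using System.Linq;

    class WordCount
    {
        static void Main(string[] args)
        {
            // Split the file into words, group them case-insensitively,
            // and print the ten most frequent.
            var top = File.ReadAllText(args[0])
                .Split(new[] { ' ', '\t', '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                .GroupBy(w => w.ToLowerInvariant())
                .OrderByDescending(g => g.Count())
                .Take(10);

            foreach (var g in top)
                Console.WriteLine("{0}: {1}", g.Key, g.Count());
        }
    }

The equivalent C++ would mean hand-rolling the file reading, tokenizing, and sorting, which is exactly where the man-week goes.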
One big reason to use C# over C++: look at all those buffer overflow problems C++ has, where people take over your machine by injecting executable code into your strings.
Or memory leaks... the garbage collector is quite handy, IMHO.
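To make that concrete, a toy example (names invented): every array access is bounds-checked by the runtime, so an out-of-bounds write turns into a catchable exception instead of silent memory corruption:

    using System;

    class BoundsDemo
    {
        static void Main()
        {
            byte[] buffer = new byte[16];
            try
            {
                buffer[32] = 1;  // out of range: the runtime checks the index
            }
            catch (IndexOutOfRangeException)
            {
                // In C/C++ the same write could silently stomp on adjacent
                // memory, the classic buffer overflow. Here it just throws.
                Console.WriteLine("Out-of-bounds write caught by the runtime.");
            }
        }
    }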
Even if C++ code runs faster, the difference isn't big on today's computers. That's why it's mainly on mobile devices, where CPU and memory are rather limited, that C/C++ is still widely used.
But think about development time instead. If it takes two months using C++ and one month using C#, which would you go for? And when big modifications or refactoring need to be done, a higher-level language makes the work significantly faster and easier!
I would definitely use C#. It is actually average in speed, and if you liked Java, it's almost exactly the same, as both are based on C++.
