The project I work on is basically a data collector. By way of orientation, it may help to think of it as Wireshark (or equivalent) with parsing/analysis capabilities at the Application layer (OSI Layer 7). The current version is a legacy MFC application with 15+ years under its belt. It still works, but maintenance, stability, and scalability are real concerns. The project leadership team has recently agreed that we need to begin developing the next generation of the product, and we are targeting .NET, since this is strictly a Windows desktop product.
Given that our users routinely analyze log files collected literally around the world, message timestamping is very important. The current product uses _ftime_s() to assign a timestamp, and I had assumed we would simply use System.DateTime.UtcNow to assign timestamps on the .NET side. That is, until I read about noda-time. Now I'm thinking that our problem domain requires more care around time-related functionality than I had ever cared to consider.
So a few questions.
From the description I've provided above, does it make sense to incorporate noda-time using NodaTime.Instant for timestamping?
Given a choice, I'd much prefer to pay for dedicated support rather than use an open-source project, for fear (paranoia?) that the project will be abandoned. Any thoughts or guidance on this point from those more inclined to embrace the open-source philosophy?
noda-time is currently in its 2nd beta. Is there a target date for NodaTime 1.0.0?
As Matt says, you could easily just use DateTimeOffset to represent an instant in time. I'd argue it's not as clear as using Instant, in that it suggests you might actually be interested in a local time and an offset, rather than it really just being a timestamp - but if this is the only reason you'd use Noda Time, it would make sense to stick with DateTimeOffset.
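For illustration, here's a minimal sketch of the two options for timestamping, assuming the Noda Time 1.x API (where IClock exposes a Now property; later versions rename it to GetCurrentInstant()):

```csharp
using System;
using NodaTime;

class TimestampExample
{
    static void Main()
    {
        // Plain BCL: an instant represented as a DateTimeOffset with a zero offset.
        DateTimeOffset bclTimestamp = DateTimeOffset.UtcNow;

        // Noda Time: an Instant is just a point on the global timeline, with no
        // offset or calendar attached, which is exactly what a log timestamp is.
        IClock clock = SystemClock.Instance;   // inject IClock in real code for testability
        Instant nodaTimestamp = clock.Now;     // 1.x API; 2.x uses clock.GetCurrentInstant()

        Console.WriteLine(bclTimestamp);
        Console.WriteLine(nodaTimestamp);
    }
}
```

Either one captures the same information; the Instant version just can't be mistaken for a local time.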
This is a reasonable fear, but you have my personal word that I'm not about to abandon Noda Time. Of course, the reverse argument is that if I did abandon it, you'd still be able to patch it - whereas if you used a commercial product and the company folded, you'd be stuck :) I do understand the concern though.
As it happens, I'm hoping to release v1.0.0 today :)
This isn't like ordinary versioning, where newer versions still maintain backwards compatibility.
What I mean is something like this: when the designers of C#, .NET, or the CLR realize that they made a mistake, or overlooked something that could be hugely beneficial but that they can no longer pursue because of backwards compatibility, they could branch the appropriate product, say by designating it in a different manner (horizontal versioning).
Wouldn't this be more future-proof?
You could say this would be a nightmare, but there would be restrictions: you couldn't mix and match different branches, unlike today's languages, which are all compatible with each other; within the same branch everything would remain compatible.
This way you might be using, say, C# 4.0, then find something very beneficial in C# 4.0 B1 (branch 1) and simply switch to that, even though it might require some porting effort.
Is this not a healthy development strategy, where new projects could always start using the latest and the greatest, meaning the latest version and the latest branch of a particular language (C# 6.0 B4 for example)?
I don't see much extra hassle in keeping track of things: with newer languages you already have to know what changed in each version anyway, so this just adds another dimension (horizontal versions) to vertical versioning.
What would be the potential pros/cons for this development strategy?
There is a huge benefit to having a large set of libraries available for a platform. Currently, if I write a .NET 4.0 application I can reference libraries which were created way back on .NET 1.1. This means that there's a lot of existing code that I can take advantage of, and this is one of the major selling points of .NET.
If I understand your proposal correctly, then if library A is written against C# 4.0 B1 and library B is written against C# 4.0 B2, there is no way my application can reference both library A and library B. This would fragment the platform and make it much harder to justify the investment in writing C# applications or libraries.
Of course, there are also costs associated with backwards compatibility (look no further than Java's implementation of generics...), but in my opinion the benefits clearly outweigh them. Having a vibrant community using a language or platform makes it easier to hire developers, to find libraries with useful functionality, to get training and support, etc. These network effects are all put at risk by creating incompatible islands within the platform.
Actually, this is a bit like the way Open Source detractors used to argue Open Source projects would end up going. After all, you, I or anyone else could take any Open Source project and fork it into a different branch tomorrow.
Thankfully, the only case where this would gain any traction in the community, though, is where we either took it into a specialist role (so that it didn't really compete with the original project, just built on a common ancestor), or where there was massive dissatisfaction with the way the original project was run (where it's sort of a nuclear-strike option in Open Source politics).
The reason it was used as a bogeyman argument against Open Source is that it would just be impossible to keep track of which version of which version of which version of which version of a given library, framework, language, component, etc. could work with which version of which version of which version of which version of another.
Luckily, whether open or closed, such branches would die a natural death in the face of the "market" (whether that market was economic or otherwise).
Many (larger) shops have enough trouble keeping up with the major revisions of .Net that are already coming out. This strategy might work for a platform that is not the predominant development platform worldwide, but for Microsoft (and many developers) it would be hell.
Backward compatibility is taken very seriously at Microsoft, especially where developers (the lifeblood of the company since Windows took off in the mid-90s) are concerned. Breaking changes have unknowable impact due to the scale of Windows, and latterly .Net, adoption. Would you want to be the guy who had to explain to Steve Ballmer why your cool fix in the new minor version of .Net broke the apps on which GE (say) run their business? There is immense effort put into making sure that legacy apps and devices will continue to run. Increasing the matrix of versions to be tested would inevitably lead to corners being cut, and we all know what happens next, right?
You can counterargue that nobody has to adopt the latest. But who here does not install Windows SPs as soon as they come out, to avoid the drip-drip of hotfixes for security issues? There is a natural inclination to want the latest, though that has to be balanced vs stability concerns.
.Net has gone a good way to removing DLL hell from the Windows developer lexicon, and to some degree decoupled developer platform progression from OS releases. I don't think the majority of Windows developers are in a hurry to see that change. Love them or hate them, Microsoft have gotten very good at managing large, infrequent releases of what is still the world's de facto desktop standard. It will be interesting to see how Google manage the same problem, as Android takes that spot in the mobile market over the next year or two.
I've been wandering around the net and I encountered a lot of messages like this one:
Link
ODP.NET and its stability.
Wow, ODP.NET 10.2.0.20 is not horribly stable yet.
I find it rather astonishing that such a big vendor would release a product that is not stable, yet that seems to be the case. So should ODP.NET be used? Is there anything more reliable?
I always had the impression that Oracle doesn't care a great deal about .NET and MS technologies in general (which is understandable). ODP.NET doesn't fit the ADO.NET 2.0 model very well (the way it handles transactions, command parameters bound by order rather than name...). But anyway, since Microsoft has deprecated its own Oracle provider (System.Data.Oracle), you don't really have a choice... Sure, there are third-party ADO.NET providers for Oracle (like the one from DevArt), but they're not free. So if you want a free ADO.NET Oracle provider, you're stuck with ODP.NET anyway.
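As an illustration of the bind-by-order quirk, here's a minimal sketch of opting in to binding by name with ODP.NET's BindByName property (the connection string, table, and parameter names are made up for the example):

```csharp
using System;
using Oracle.DataAccess.Client; // ODP.NET provider

class BindByNameExample
{
    static void Main()
    {
        // Placeholder connection string; table and column names are illustrative.
        using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        using (var cmd = new OracleCommand(
            "SELECT COUNT(*) FROM messages WHERE device_id = :deviceId AND logged_at > :since", conn))
        {
            // ODP.NET binds parameters by position unless you opt in to binding by name.
            cmd.BindByName = true;
            cmd.Parameters.Add("deviceId", OracleDbType.Int32).Value = 42;
            cmd.Parameters.Add("since", OracleDbType.Date).Value = DateTime.UtcNow.AddDays(-1);

            conn.Open();
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}
```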
To answer the question more directly : I wouldn't say that ODP.NET is not stable. It might have a few bugs, but it's very widely used in production environments all over the world, which wouldn't be the case if it was really unstable. So I'd say it's quite unlikely that you will encounter major bugs.
The post you are quoting is 2 years old. The current stable version is 11.1.0.7.20. I haven't used it personally, but I would use it for production.
If there are problems with it (or any other component of your application), testing should find them before deployment.
Get the latest stable version from Oracle's site. But even then you'd realize that there are still connection leaks. Even though the following steps might not solve your problems completely, they certainly help:
Use the using(){} construct (see the sketch at the end of this answer)
Write a batch process to drop all connections to your db on a regular basis. In my experience, every other week for a VERY BUSY website.
Let me know if there is a better way!
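Here's a minimal sketch of the using(){} pattern with ODP.NET (connection string and query are placeholders); Dispose() runs even if an exception is thrown, which is what keeps connections from leaking out of the pool:

```csharp
using System;
using Oracle.DataAccess.Client; // ODP.NET provider

class UsingPatternExample
{
    static void Main()
    {
        // Placeholder connection string: replace with your own data source.
        using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
        using (var cmd = new OracleCommand("SELECT SYSDATE FROM DUAL", conn))
        {
            conn.Open();
            Console.WriteLine(cmd.ExecuteScalar());
        } // both Dispose() calls run here, closing the connection and returning it to the pool
    }
}
```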
I have finally decided to go with the Entity Framework since it has the best performance out of all the ORMs. But before I start reading and writing code I just want to know if there are any high traffic websites out there that use ORMs.
Currently, the released version of EF, v1.0 in .NET 3.5, has terrible performance. I did extensive testing and had several long email discussions with Microsoft on the subject over a year ago when it was first released. EF's current efficiency leaves a LOT to be desired, and in many cases it can generate absolutely atrocious SQL queries that decimate your performance.
Entity Framework v4.0 in .NET 4.0 is a LOT better. They have fixed most, if not all, of the poor SQL generation issues that plague EF v1.0 (including the issues I presented to them a year ago). Whether EF v4.0 has the best performance is really yet to be seen. It is more complex than LINQ to SQL, as it provides much greater flexibility. As a release version is not yet available, it's impossible to say whether EF v4.0 will be the fastest or not.
An objective answer to this would require an unbiased comparison between the major ORM contenders, such as EF, LINQ to SQL, NHibernate (preferably with a LINQ provider), LLBLGen, and even some of the newcomers, such as Telerik's ORM, SubSonic, and the like.
As for large-scale, high-volume production systems that use ORMs: I would suggest looking at StackOverflow.com itself, which uses LINQ to SQL. SO has become one of, if not the, top programmer communities on the Internet. Definitely high volume here, and this site performs wonderfully. As for other sites, I couldn't really say. The internal implementation details of most major web applications are generally a mystery. Most uses of ORMs that I know of are also for internal, enterprise systems: financial systems, health care, etc. Object databases are also used in the same kinds of systems, although much less frequently. I would do some searches for ORM use on high-volume web sites.
One thing to note in your search: make sure the reviews you find are current. The ORM scene has changed a LOT in the last two years. Performance, efficiency, capabilities, the ability to tune the generated SQL for your RDBMS, etc. have all improved significantly since ORMs were first created around a decade ago.
I know in one of the podcasts, Jeff mentioned that stackoverflow uses Linq-to-SQL
The really high traffic websites are in fact moving away from SQL databases altogether, because with the write-heavy workloads common in today's apps it's nearly impossible to make them scale beyond one machine, ORM or no ORM. This has been dubbed the "NoSQL movement".
However, while this is a very fashionable topic, it's completely irrelevant for sites that don't have thousands of active concurrent users. And worrying about ORM performance is a similar matter: most sites are not in fact "high traffic" enough for an ORM to become a problem (unless grossly misimplemented or -applied).
While not directly addressing which ORM is faster (something that, as Ayende (NHibernate author) points out, can be very easy to do wrong, or at least to slant the way you want it to), here are some apps out there that are using ORMs as part of their applications.
Twitter is (was?) using Ruby on Rails (RoR), which uses an ORM. The 37signals guys use RoR for their apps... I know these aren't .NET, but as mentioned by kuoson, L2S is employed by SO, and there are a lot of people out there using NHibernate, like Jeffrey Palermo and Headspring. I wouldn't be surprised to find many recently developed web apps are employing an ORM.
Even if an ORM does cost you a hit on performance, most ORMs allow you to customize the SQL used when necessary. Most people suggest using the ORM and then fixing bottlenecks as they arise. Additionally, a good ORM solves so much for you that writing your own DAL is becoming a much tougher sell these days.
jrista is right on. I just want to add that you should seriously consider LINQ to SQL. From both a simplicity and a performance standpoint it is the better technology (for now). It is very fast and reasonably capable out-of-the-box. If you want to further improve LINQ to SQL, check out the PLINQO framework.
PLINQO is a framework that sits around standard LINQ to SQL and adds a ton of features, including some very elegant bulk operations and caching. Best of all, PLINQO adapts to changes in your database schema while preserving your custom code, which is VERY slick and, in my opinion, the most valuable aspect.
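For a rough picture of the out-of-the-box LINQ to SQL usage being recommended here, a minimal sketch (the Customer mapping and connection string are made up for the example):

```csharp
using System;
using System.Data.Linq;          // LINQ to SQL runtime
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical table mapping for illustration only.
[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id { get; set; }
    [Column] public string Name { get; set; }
}

class LinqToSqlExample
{
    static void Main()
    {
        // Placeholder connection string.
        using (var db = new DataContext("Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        {
            var customers = db.GetTable<Customer>();

            // Translated to a single parameterized SELECT when enumerated.
            var matching = from c in customers
                           where c.Name.StartsWith("A")
                           orderby c.Name
                           select c;

            foreach (var c in matching)
                Console.WriteLine("{0}: {1}", c.Id, c.Name);
        }
    }
}
```

And when a query does become a bottleneck, DataContext.ExecuteQuery<T>() lets you drop down to hand-written SQL for just that query, which fits the "fix bottlenecks as they arise" advice earlier in this thread.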
Sure, reddit uses parts of SQLAlchemy (for reasons unknown, I believe they rewrote most of it :/). Most, if not all, of the large Django websites use the ORM (including the former Pownce and Curse).
I know Joel says to never do it, and I agree with this in most cases. I do think there are cases where it is justified.
We have a large C++ application (around 250,000 total lines of code) that uses a MFC front end and a Windows service as the core components. We are thinking about moving the project to C#.
The reasons we are thinking about rewriting are:
Faster development time
Use of WCF and other .NET built-in features
More consistent operation on various systems
Easier 64 bit support
Many nice .NET libraries and components out there
Has anyone done a rewrite like this? Was it successful?
EDIT:
The project is almost 10 years old now, and we are getting to the point where adding the new features we want would mean writing significant functionality that .NET already has built in.
Have you thought about, instead of rewriting from scratch, separating out the GUI and back-end layers (if they aren't already separated)? Then you can start to write pieces of it in C#.
The 250,000 lines were not written overnight; they represent many man-years of effort, so nobody sane would suggest rewriting it all from scratch at once.
The best approach, if you are intent on doing it, is piece by piece. Otherwise, ask your management for several years of development effort during which no new features are implemented in your existing product (basically stagnating in front of the competition).
My company actually did that. We had a C++ code base of roughly that size, and everybody (programmers, management, customers) more or less agreed that it wasn't the best piece of software. We wanted some features that would have been extremely hard to implement in the old code base, so we decided (after many discussions and test projects) to rewrite it in .NET. We reused the code that was modular enough using C++/CLI (about 20% of it - mostly performance-critical number-crunching stuff that should have been written in C++ anyway), but the rest was re-written from scratch. It took about 2 man-years, but that number really depends a lot on the kind of application, the size of your team and on your programmers, of course. I would consider the whole thing a success: We were able to re-architect the whole system to enable new features that would have been near-impossible with the old code base. We also could avoid problems we often had in the old software by re-designing around them. Also, the new system is much more flexible and modular in the places where we learned that flexibility was needed. (Actually I'm sometimes surprised at how easily new features can be incorporated into the new system even though we never thought of them when we designed it.)
So in a nutshell: for a medium-sized project (100k-500k LOC), a rewrite is an option, but you should definitely be aware of the price and risk you're taking. I would only do it if the old codebase is really low-quality and resists refactoring.
Also, there are two mistakes you shouldn't make:
Hire a new .NET programmer and let him/her do the rewrite - someone new can help, but most of the work and especially the design has to be done by developers who have enough experience with the old code, so they have a solid understanding of the requirements. Otherwise, you'll just repeat your old mistakes (plus a couple of new ones) in a different language.
Let a C++ programmer do the rewrite as their first C# project. That's a recipe for disaster, for obvious reasons. When you tackle a project of that size, you must have a solid understanding of the framework you're using.
(I think these two mistakes might be reasons why so many rewrites fail.)
It's been tried before, not only C++ => C#, but VB6 => VB.NET, C++ => Java, and any other old => new that you can think of. It never really worked. I think that because people don't consider the transformation for what it really is (a total rewrite), they tend to take it lightly.
The migration story from C++ => .NET should be through C++/CLI, carefully deciding what becomes managed and what remains unmanaged, and s-l-o-w-l-y "fixing" piece by piece.
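The answers here mention C++/CLI as the bridge; as one hedged sketch of what the managed side of such a boundary might look like, here is the alternative P/Invoke route from C# into an engine that stays native (the DLL name, export, and signature are all made up for illustration):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

// The parsing engine stays native; the new C# layer calls it across a thin boundary.
// "LegacyEngine.dll" and "ParseMessage" are hypothetical names.
static class LegacyEngine
{
    [DllImport("LegacyEngine.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
    private static extern int ParseMessage(string rawMessage, StringBuilder parsed, int parsedSize);

    // Managed-friendly wrapper so callers never see the raw P/Invoke signature.
    public static string Parse(string rawMessage)
    {
        var buffer = new StringBuilder(4096);
        int rc = ParseMessage(rawMessage, buffer, buffer.Capacity);
        if (rc != 0)
            throw new InvalidOperationException("Native parser failed with code " + rc);
        return buffer.ToString();
    }
}
```

Whether you go through P/Invoke or C++/CLI, the idea is the same: keep the boundary small and well-defined, and move pieces across it one at a time.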
Expression Blend was originally an MFC app. The current version uses WPF for the UI but the engine is still all native. I saw a great talk by principal architect Henry Sowizral about a year ago where he described the process of the migration. Make the engine UI agnostic and you will be able to support whatever the latest UI technology is. The Expression team at one point had what he referred to as the hydra-headed version. Two front-end UIs running simultaneously with one underlying engine - in this way they could see where behavior had unintentionally deviated from the previous version. Since the UI subscribed to events and notifications, changes made in a WPF toolwindow were reflected in the old MFC toolwindow.
EDIT: Looks like some powerpoints are available here or as html here.
I've been through a project that did exactly what you're describing with approximately the same size codebase. Initially, I was completely onboard with the rewrite. It ended up taking 3+ years and nearly turned into a death march. In general, I now agree far more with the incrementalists.
Based on our experience, though, I will say that such a rewrite (especially if you're able to reuse some C++ business logic code in .NET), is not as technically dangerous as it may seem. However, it can be very socially dangerous!
First, you have to make sure that everyone fully understands that what you are undertaking initially is a "rewrite" (or "remake") not an upgrade or "reimagining." The 1998 Psycho was a shot-for-shot remake of the 1960 original. The 2003 Battlestar Galactica was a reimagining of the 1978 original. See the difference?
In our case, the initial plan was to recreate the existing product in .NET. That would not have been technically daunting, since we understood the original well. However, in practice, the urge to add and fix and improve just a few things proved irresistible, and ultimately added 2-3 years to the timeline.
Second, you have to make sure that everyone from the execs to the sales staff to the end users is OK with your current product remaining unchanged during the development of the remake. If your market is moving in such a way that you won't be able to sustain your business during that period, then don't do it.
So the main obstacles for us turned out to be social, rather than technical. Users and business interests became very frustrated with the lack of visible progress. Everyone felt compelled to push for their own pet improvements and features, too, so our final product bore only a superficial resemblance to the original. It was definitely a reimagining rather than a remake.
In the end it seems to have turned out ok for us, but it was a real grind, and not something we'd choose to do again. We burned through a lot of goodwill and patience (both internal and external), which could've largely been avoided with an incremental approach.
C++ won't automatically translate to C# (not so you'd want to maintain it, anyway), and you're talking about using different frameworks.
That means you're doing a total rewrite of 250K lines of code. This is effectively the same as a new 250K-line project, except that you've got the requirements nicely spec'd out to start with. Well, not "nicely"; there's doubtless some difficult-to-understand code in there, some likely because of important issues that made elegance difficult, and the overall structure will be somewhat obscured.
That's a very large project. At the end, what you'll have is code that does the same thing, likely with more bugs, probably fairly badly structured (although you can refactor that over time), with more potential for future development. It won't have any of the new features people have been asking for during the project (unless you like living dangerously).
I'm not saying not to do it. I'm saying that you should know what you're proposing, what the cost will be, and what the benefits would be. In most cases, this adds up to "Don't do that!"
I did something similar. Part of my job involves developing & supporting some software called ContractEdge. It was originally developed in Visual C++ 6 by a team in India. Then I took over the development role after it was basically done in 2004. Later on, when Windows Vista was made available as a Beta I discovered that ContractEdge would crash in Vista. The same thing happened in the release candidate.
So I was faced with a decision. Either hunt for the problem in tens of thousands of lines of mostly unfamiliar code, or take the opportunity to rewrite it in .NET. Well, I rewrote it in VB.NET 2.0 in about 2 months. I approached it as a total rewrite, essentially scrapping everything and I simply focused on duplicating the functionality with a different language. As it turns out I only had to write about 1/10th the number of lines of code as the original. Then we held a one month long beta program to iron out any remaining bugs. Immediately after that we launched it and it's been a big success ever since, with fewer problems than the C++ version it replaced.
In our particular scenario I think the rewrite worked out well. The decision was made easier based on the fact that nobody on our team was as familiar with C++ as they were with .NET. So from that perspective, maintainability is now far easier. Nowadays I do think C++ is too low-level of a language for most business software. You really can get a lot more done in .NET with less code. I wrote about this subject on my blog.
Total rewrite for the sake of rewrite? I would not recommend it.
In addition to other responses, I would not take "faster development time" for granted. Sure, for most "business" data-centric applications it will probably be the case, but there are many areas where .NET will not bring in significant productivity increases, plus you need to take the learning curve into account.
We've done a big C++ >> C# migration as we moved to .NET. It's quite a tough project. Management will hardly bite on funding it in full, so you have to go for a compromise. The best approach is to leave the innermost (or lowest) layers in C++ and cover the upper part with C#, with better APIs designed with newer concepts like readability and API usability in mind, safeguarded with unit tests and advanced tools like FxCop. These are obviously great wins.
It also helps you layer your components a bit better, as it forces certain cuts. The end product is not as nice as you might hope: you end up copying a lot of the C++ code, because years and years of coding contain many bug fixes and many undocumented, hard-to-understand optimizations. Add to that all the pointer tricks you could do in C (our code has evolved from C into C++ over time). As you stabilize, you find yourself more and more reading the C++ code and moving it into C#, as opposed to the 'cleaner design' goals you had in mind in the beginning.
Then you find out that interop performance sucks. That may call for a second rewrite - maybe use unsafe C# code now. Grrr!
If all the team members come from C++, the new code will also look like a C++ design. Try to go for a mix of C# and C++ developers on the team, so you can get a more .NET-like API at the end.
After a while, the project may lose momentum and management may not fund the entire rewrite, so you end up with C#-sugarcoated C++ code, and you may still have Unicode/64-bit issues unresolved. It really calls for very, very careful planning.
I was involved in a very similar-sized project. It was necessary to rewrite the GUI front end because of new hardware and new requirements. We decided to port it to .NET using C++/CLI. We were able to reuse more than half of the code, and the port worked quite well.
We were able to take advantage of .NET where it made the most sense. This made major parts of the code much cleaner. We found the book "Pro Visual C++/CLI and the .NET 2.0 platform" by Stephen R. G. Fraser very helpful.
Have you considered a port to C++.NET? It might be less painful.
I'm currently rewriting a rather large web application.
One thing to remember is that when converting from one language to another, especially something like C++ to .NET, you may end up with less, and probably cleaner, code, due either to language advances or to framework code.
That's one advantage for future maintainability, even aside from the opportunity to re-architect the less robust aspects of the old application.
Some additional comments.
Depending on the lifespan of your application you may be forced to rewrite it in a modern language since I suspect that C++ developers will become increasingly hard to find.
Just moving the app to a new language will not reap rewards that great on its own. You'll probably want to do a redesign of the app as well! Do not underestimate the effort required to do this. I would guess the effort for a redesign + rewrite could be as much as 50% of the effort for the original implementation. (Of course, 50% is a totally unscientific guess.)
It's way too easy to fool yourself into thinking "Well, C# and WPF are just so much more productive that rewriting this mess would be a piece of cake!"
Interestingly most of the answers from people who have done this seem positive. The most important thing IMO is to have good unit tests so that you can be sure your rewrite does what you want it to do (which may not be exactly the same as what the old code did).
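One way to do that is with characterization tests that pin the legacy behaviour down before the rewrite starts; a minimal sketch, assuming NUnit (LegacyParser and NewParser are illustrative stand-ins, not names from any of the posts above):

```csharp
using NUnit.Framework;

// Illustrative stand-ins only: in a real migration these would be the wrapped
// legacy engine and the rewritten .NET implementation.
static class LegacyParser
{
    public static string Parse(string raw) { return raw.Trim().ToUpperInvariant(); }
}

static class NewParser
{
    public static string Parse(string raw) { return raw.Trim().ToUpperInvariant(); }
}

[TestFixture]
public class ParserRewriteTests
{
    [TestCase("  login alice ")]
    [TestCase("heartbeat")]
    public void NewParserMatchesLegacyOutput(string raw)
    {
        // Characterization test: the rewrite must reproduce the legacy output exactly.
        Assert.AreEqual(LegacyParser.Parse(raw), NewParser.Parse(raw));
    }
}
```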