Automatically check for binary incompatibility between versions of a C# DLL? - c#

I'm looking for any automatable way to check for binary incompatibility between versions of a C# DLL.
Something like Unix C++'s abi-compliance-checker but for .NET, and in particular for C#, would be great. This SO answer gives a screenshot of exactly the kind of info that I'd love to be able to get regarding two different versions of a C# API.
If there isn't any direct equivalent, then it would also be helpful to get any information at all about any automatable way to get some assurance that binary compatibility isn't broken between versions of a library.
This question seems to be the authoritative list on Stack Overflow of ways to break your API. There's also an article by Jon Skeet which discusses some of these issues. (And of course there are multiple other discussions of this issue on the internet; though there is a lot more of it focused on C++ than on C# and .NET, which is what I'm asking about here.)
Amongst other things, those articles help make clear that:
Binary and compilation compatibility and incompatibility are NOT trivial problems!
Binary (in)compatibility does not always entail compilation (in)compatibility, nor vice versa (see the sketch just after this list for a concrete example)
Even though (it might be worth noting here) at least one Microsoft article on the topic incorrectly/misleadingly states that it does in one direction
There are different levels of incompatibility: in particular, there are some obscure things, particularly to do with reflection, that users of a library can do which will make certain API changes binary incompatible, where they wouldn't have been otherwise
As a side note, this pretty much makes it seem like real world incompatibility isn't a yes/no question: there'll be certain theoretically incompatible changes which you might well be prepared to make within a major version of your API, if you don't expect normal users of the API to be doing these more obscure things with it
That fits with the different severities of incompatibility shown in the abi-compliance-checker screenshot in the SO answer linked above
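To make that binary-vs-compilation distinction concrete, here is a minimal made-up example (the Calculator type is hypothetical) of a change that is compilation-compatible but binary-incompatible:

    // Version 1.0 of a hypothetical library:
    namespace V1
    {
        public class Calculator
        {
            public int Add(int a, int b) { return a + b; }
        }
    }

    // Version 1.1 adds an optional parameter.
    // Compilation-compatible: existing calls like calc.Add(1, 2) still
    // compile, because the compiler fills in the default argument at the
    // call site. Binary-INcompatible: an assembly compiled against 1.0
    // references Add(Int32, Int32), which no longer exists in 1.1, so it
    // fails at run time with a MissingMethodException.
    namespace V2
    {
        public class Calculator
        {
            public int Add(int a, int b, int c = 0) { return a + b + c; }
        }
    }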
So, given that everybody these days is supposed to be updating their APIs using semantic versioning, and also given that everybody is using test suites and probably devops as well, how on earth is everybody - or even anybody?! - automatically testing for binary incompatibility changes in .NET?
I can see that careful use of a test suite is enough to let you know if you've broken compilation compatibility, and most projects already have this - which is why my question is about binary-incompatibility changes, which don't seem to be automatically detected at all in any projects I've looked at or worked on.
Just reading the above articles and being careful probably isn't enough... (though of course it's some kind of start) but I can't find any documentation about any other approach.

As a partial answer to my own question, and on the basis that something is better than nothing at all, it looks to me as if one possible pragmatic answer (i.e. it definitely won't catch all possible problems, but it should at least catch some...) would be to store compiled versions of old test suites and then to check whether these still run against compiled newer versions of the API.
In essentially all OSS projects I've worked on, the tests and the library are compiled at the same time. Anything which is a binary incompatibility but not a compilation incompatibility would never cause a failure when just running normal tests in the normal way, in such a project. But at least some such problems would show up with what I've suggested.
So I'm not just answering my question by saying 'have a test suite'! I'm suggesting something different from what I've seen normally done with test suites, that can (and does, I now use this approach) help with the problem my question is about.
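To give a flavour of what such automation can look like beyond re-running old test binaries, here is a rough reflection-based sketch (all names are mine, not from any existing tool) that diffs the public surface of two compiled versions. It only catches removed or re-signatured public members, not the subtler breaks discussed above, so treat it as a first-pass filter:

    // Minimal sketch of an automated surface diff between two versions of
    // an assembly. It flags public members present in the old version but
    // missing from the new one - a common source of MissingMethodException.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    class ApiSurfaceDiff
    {
        static void Main(string[] args)
        {
            // args[0] = path to old DLL, args[1] = path to new DLL
            var oldMembers = PublicSurface(Assembly.LoadFrom(args[0]));
            var newMembers = new HashSet<string>(PublicSurface(Assembly.LoadFrom(args[1])));

            foreach (var member in oldMembers.Where(m => !newMembers.Contains(m)))
                Console.WriteLine("MISSING IN NEW VERSION: " + member);
        }

        static IEnumerable<string> PublicSurface(Assembly asm) =>
            asm.GetExportedTypes()
               .SelectMany(t => t.GetMembers(BindingFlags.Public | BindingFlags.Instance |
                                             BindingFlags.Static | BindingFlags.DeclaredOnly)
                                 .Select(m => t.FullName + "::" + m));
    }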

Related

Protect code of WPF application written in .Net 4.7

I know this question has been asked many times; I read every question but didn't find a solution for my case.
Our team made an application in .NET 4.7.2 and in a few days we have to deploy it. We are using web services, so even if a user cracks the license system, they won't be able to access the services. Our only concern is to prevent its duplication (someone could resell it under his own brand, and this happened to our previous versions): since these web services (simple CRUD operations) are very easy to implement, someone could change the URL to point at their own servers and duplicate the services. For protection against this, we are using encrypted calls to the server. The problem we are facing now is to protect this encryption algorithm, and obfuscation is not enough for this.
Again, our only concern is to protect the code. Sorry for bad English.
I know about .NET Reactor, but there are many unpackers that can unpack .NET Reactor-protected applications. I don't know if these unpackers work on the current version.
Should I use .NET Reactor?
Is there any solution out there to convert .NET 4.7 code to native code, or any other way to prevent this (except for obfuscation or ahead-of-time compilation)?
Code you distribute can/will be analyzed (even copied/cloned) by all sorts of people, no way around that. Even distributing only compiled binaries is not a real hurdle for a determined adversary. Semi-compiled languages like Java's JVM or .NET often keep a lot of source information in the binary, to the point that sometimes decompiling to understandable source is more or less automatic. Source obfuscation can help a bit here, but it introduces another step (and can introduce bugs!), and an attacker will probably only be interested in localized swaths of code anyway.
If the services are "easy to duplicate", as you state, I wonder if they are really that valuable. Most extremely valuable 'net services use simple, even well-known and publicly available protocols (as in "download a library to use our services here") to access them; but if I created my own clone of e.g. YouTube I'd get nowhere - the value is not in the interface but in the service offered.
Re keeping the encryption secret: never forget Kerckhoffs's principle. In particular, homebrew encryption is usually ridiculously easy to break; getting at the exact algorithm is possible with some ingenuity even if it exists only in hardware (like the MiFare card hack), and unless it has been carefully designed, it will be broken in short order. Do use the accepted cryptographic tools, like AES, Diffie-Hellman, RSA. Yes, it might incur some extra costs (in any case there are free/open source alternatives available for everything of interest), but it is much, much more secure than anything you could come up with.
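For illustration, a minimal sketch of leaning on the framework's vetted AES implementation rather than a homebrew cipher. Key management and message authentication are the genuinely hard parts and are deliberately not shown here:

    // Sketch only: encrypt a payload with the framework's AES implementation
    // (System.Security.Cryptography) instead of a homebrew algorithm.
    // Key distribution/rotation and integrity (e.g. an HMAC or an AEAD mode)
    // still need to be handled separately.
    using System.Security.Cryptography;

    static class PayloadCrypto
    {
        public static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv)
        {
            using (var aes = Aes.Create())
            {
                aes.Key = key;       // e.g. 32 bytes for AES-256
                aes.GenerateIV();    // never reuse an IV with the same key
                iv = aes.IV;
                using (var encryptor = aes.CreateEncryptor())
                    return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
            }
        }
    }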

Protobuf-net on UWP/.NET Native and iOS

I have a Xamarin.Forms App based on .NET Standard 1.4 that uses protobuf-net to store objects in the database that will be sent to a WCF service at a later time.
On Android and UWP "managed" everything works fine, but - after searching through repositories, articles and blog posts that can no longer be accessed, and after trying and failing to get the precompilation tool to work - I have one simple (probably not) question: how do I get protobuf-net to work in "restricted" environments like UWP/.NET Native and iOS/Xamarin?
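For context, typical attribute-based protobuf-net usage looks like the sketch below (the Order type is made up). By default the serializer is generated with IL emit at run time on first use, which is exactly what these AOT platforms restrict:

    // Typical protobuf-net usage. By default the serializer is generated
    // via runtime IL emit on first use, which fails on AOT platforms such
    // as iOS and .NET Native. The Order type is made up for illustration.
    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class Order
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public string Customer { get; set; }
    }

    public static class OrderStore
    {
        public static byte[] ToBytes(Order order)
        {
            using (var ms = new MemoryStream())
            {
                Serializer.Serialize(ms, order);
                return ms.ToArray();
            }
        }
    }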
Right now I don't have a great solution for this scenario. I know some people have made it work, but I'm not expert enough in UWP / Native / iOS to give you reliable "here's the path to success" instructions.
UWP / .NET Native and iOS share (as you know) a common issue: lack of full runtime emit. I understand why this is. It is just: tricky.
Historically, protobuf-net has tried to solve this problem via a build tool that repeated the existing IL emit (usually done at runtime) as a build-time step. This was ugly and nasty, but it worked. Kind of. To hack around some platform restrictions, protobuf-net used some of the IKVM tooling to help with this, but as the .NET framework scene has continued to expand this is basically no longer viable. Plus: the IKVM tool is now abandoned and won't be maintained.
In parallel with this, there is increasing impetus to investigate some newer concepts:
full async/await for asynchronous IO sources: note that this is extremely unfriendly to IL emit, but is almost embarrassingly easy to implement in C#
"pipelines" / "channels" / "streams 2" - whatever it is being called this week; but: the new allocation-free IO concept that is being used in Kestrel (I helped kick this ball around a little bit when it was in the early stages, so I'm familiar with what needs doing) - note that this also ties into async/await
and of course: how all of the above relates to pre-generation
Right now, I'm very much of the opinion that the best route forward is for the pre-gen scenario to switch to emitting C# via build-time tooling. I have repeatedly petitioned MS for improved automated C# emit based on Roslyn, but so far: no joy (vexingly: the asp.net stuff even had a fully working proof-of-concept, but it is shelved). So right now I'm thinking: we need to assume that isn't going to happen, and basically write it independently. This isn't necessarily as complex as it sounds (and: codegen of various forms is very familiar to me). The advantage of C# emit here is that I don't need to fight the intricacies of every framework - I just need to make it compile (well, and run, obviously).
So: what's holding me back? In theory: nothing. I just need to get this stuff written and deployed. In reality: life, time, etc. I am guilty of prioritising things that impact me daily, and the reality is that I'm not really a daily user of those platforms, which means I'm not feeling the pain that you're feeling. But: I hear you loud and clear, and I am trying to ramp up the v3 work that should address these points. I genuinely want to have a good story for those things - and my aim is that by moving to a C#-emit model (for pre-gen, at least), it helps me too. And if it helps me, I know it won't become the forgotten toy in the attic / basement that I know is there but can never find the motivation to go and dig out.

Best-practice approaches for reverse engineering VB6 code without knowledge of the domain

Target state: porting the VB6 code to C#, undertaking the whole project with all conceivable processes that are involved.
What would be your approach if you do not have knowledge about the domain?
There is hardly any documentation, just legacy code: 100,000-300,000 lines of VB6 code and comments, with individual files containing up to 14,000 lines.
Disclaimer: I work for Great Migrations
We rewrite large VB6/ASP/COM applications to .NET (primarily C#) for a living, and we have developed a software analysis and reengineering tool to help us do it. This tool is essentially a VB6/ASP/COM compiler plus a decompiler that authors .NET code. Of course, since the VB6 platform is very different from .NET, a direct compile/decompile is not desirable or viable, so our tool has an "analyzer" that implements various code reengineering algorithms to deal with VB6-C# incompatibilities. There is also a programmable "author" that allows the migration team to prescribe rules for setting up .NET code files, restructuring the code, and doing things like replacing COM APIs and ActiveX controls with .NET classes -- depending on what the team needs or wants.
As a by-product of compiling and analyzing the code, our tool produces a model of the entire VB6/ASP/COM system being upgraded. This model can be used to produce extremely detailed reports about the internal structure of the system. These models can help you reverse-engineer the code -- if you know the right questions to ask; and you will need to understand the problem domain to do a good job.
Of course, once you have build-complete .NET, you can use the various analytics and code review tools that work off assemblies. Some versions of Visual Studio have these tools, and there are open source tools such as FxCop and NDepend. There are also some fantastic dynamic analysis tools (EQATEC Tracer) that I have used.
In the end, though, migration teams are going to be very hard pressed to verify any unknown system. Even if you are staying on the same platform, you will be unable to prove an application is "correct" if you do not know how to run it, how to set up and enter expected inputs, and how to find and verify expected outputs. We normally leave this to the customer!
If we are doing verification for the customer, we rely heavily on side-by-side testing to validate the new version of the system: assuming we know how to run the legacy application, then given the same sets of inputs and use cases, the new system should exhibit the same behaviors and produce the same results. I have heard this called "Approval Tests" in unit testing circles.
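A minimal sketch of that side-by-side idea, where the two IEngine implementations would be hypothetical adapters around the legacy and migrated systems:

    // Sketch of side-by-side ("approval") testing: feed the same inputs to
    // the legacy and migrated systems and report any divergence. The
    // IEngine adapters around the two applications are hypothetical.
    using System;

    public interface IEngine
    {
        string Run(string input);
    }

    public static class SideBySideHarness
    {
        public static void Verify(IEngine legacy, IEngine migrated, string[] inputs)
        {
            foreach (var input in inputs)
            {
                var expected = legacy.Run(input);
                var actual = migrated.Run(input);
                if (expected != actual)
                    Console.WriteLine($"DIVERGENCE for '{input}': legacy='{expected}', new='{actual}'");
            }
        }
    }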
I admit we also rely heavily on the knowledge that the VB6/COM code is a complete, detailed, formal and production-tested description of the data structures and logic of the system, and that we are putting this information through a tested and retested systematic transformation. We have been developing compilers since 1977, and we have worked very hard on this VB6/ASP compiler to make sure the .NET code that we generate preserves the semantics of the original VB6. It is not 100% every time - but it is getting closer all the time. Then again, doing things by hand does not guarantee 100% correct code on the first try either...
mark's answer about Great Migrations is excellent. Do be aware that there are competing automated tools, which also have a very good reputation.
Artinsoft's VB Upgrade Companion
Francesco Balena's VB Migration Partner

Ever done a total rewrite of a large C++ application in C#? [closed]

Closed 10 years ago.
I know Joel says to never do it, and I agree with this in most cases. I do think there are cases where it is justified.
We have a large C++ application (around 250,000 total lines of code) that uses a MFC front end and a Windows service as the core components. We are thinking about moving the project to C#.
The reasons we are thinking about rewriting are:
Faster development time
Use of WCF and other .NET built-in features
More consistent operation on various systems
Easier 64-bit support
Many nice .NET libraries and components out there
Has anyone done a rewrite like this? Was it successful?
EDIT:
The project is almost 10 years old now, and we are getting to the point that adding new features we want would be writing significant functionality that .NET already has built-in.
Have you thought about, instead of rewriting from scratch, starting to separate out the GUI and back-end layers (if they aren't already separate)? Then you can start to rewrite pieces of it in C#.
The 250,000 lines were not written overnight; they represent many man-years of effort, so nobody sane would suggest rewriting it all from scratch at once.
The best approach, if you are intent on doing it, is piece by piece. Otherwise, ask your management for several years of development effort during which no new features are implemented in your existing product (basically stagnating in front of the competition).
My company actually did that. We had a C++ code base of roughly that size, and everybody (programmers, management, customers) more or less agreed that it wasn't the best piece of software. We wanted some features that would have been extremely hard to implement in the old code base, so we decided (after many discussions and test projects) to rewrite it in .NET. We reused the code that was modular enough using C++/CLI (about 20% of it - mostly performance-critical number-crunching stuff that should have been written in C++ anyway), but the rest was re-written from scratch. It took about 2 man-years, but that number really depends a lot on the kind of application, the size of your team and on your programmers, of course. I would consider the whole thing a success: We were able to re-architect the whole system to enable new features that would have been near-impossible with the old code base. We also could avoid problems we often had in the old software by re-designing around them. Also, the new system is much more flexible and modular in the places where we learned that flexibility was needed. (Actually I'm sometimes surprised at how easily new features can be incorporated into the new system even though we never thought of them when we designed it.)
So in a nutshell: For a medium-sized project (100k-500k loc) a rewrite is an option, but you should definitely be aware of the price and risk you're taking. I would only do it if the old codebase is really low-quality and resists refactoring.
Also, there are two mistakes you shouldn't make:
Hire a new .NET programmer and let him/her do the rewrite - someone new can help, but most of the work and especially the design has to be done by developers who have enough experience with the old code, so they have a solid understanding of the requirements. Otherwise, you'll just repeat your old mistakes (plus a couple of new ones) in a different language.
Let a C++ programmer do the rewrite as their first C# project. That's a recipe for disaster, for obvious reasons. When you tackle a project of that size, you must have a solid understanding of the framework you're using.
(I think these two mistakes might be reasons why so many rewrites fail.)
It's been tried before, not only C++ => C#, but VB6 => VB.NET, C++ => Java and any other old => new that you can think of. It never really worked. I think that because people don't consider the transformation for what it really is (a total rewrite), they tend to take it lightly.
The migration story from C++ => .NET should be through C++/CLI, carefully deciding what becomes managed and what remains unmanaged, and s-l-o-w-l-y "fixing" piece by piece.
Expression Blend was originally an MFC app. The current version uses WPF for the UI but the engine is still all native. I saw a great talk by principal architect Henry Sowizral about a year ago where he described the process of the migration. Make the engine UI agnostic and you will be able to support whatever the latest UI technology is. The Expression team at one point had what he referred to as the hydra-headed version. Two front-end UIs running simultaneously with one underlying engine - in this way they could see where behavior had unintentionally deviated from the previous version. Since the UI subscribed to events and notifications, changes made in a WPF toolwindow were reflected in the old MFC toolwindow.
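A small sketch of that UI-agnostic-engine idea (all names illustrative, not from the actual Expression Blend code): the engine publishes changes as events and any number of front ends subscribe, so two UIs can run against one engine and be compared:

    // The engine knows nothing about any UI; each front end (WPF,
    // MFC-hosted, ...) is a thin subscriber, so two UIs can run side by
    // side against one engine - which is how behavioral drift was spotted.
    using System;

    public class DocumentEngine
    {
        public event EventHandler<string> PropertyChanged;

        private string title;
        public string Title
        {
            get { return title; }
            set
            {
                title = value;
                PropertyChanged?.Invoke(this, nameof(Title));
            }
        }
    }

    public class WpfToolWindow
    {
        public WpfToolWindow(DocumentEngine engine)
        {
            engine.PropertyChanged += (sender, prop) => Refresh(prop);
        }

        private void Refresh(string prop)
        {
            // Re-read the relevant engine state and update the WPF view.
        }
    }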
EDIT: Looks like some powerpoints are available here or as html here.
I've been through a project that did exactly what you're describing with approximately the same size codebase. Initially, I was completely onboard with the rewrite. It ended up taking 3+ years and nearly turned into a death march. In general, I now agree far more with the incrementalists.
Based on our experience, though, I will say that such a rewrite (especially if you're able to reuse some C++ business logic code in .NET), is not as technically dangerous as it may seem. However, it can be very socially dangerous!
First, you have to make sure that everyone fully understands that what you are undertaking initially is a "rewrite" (or "remake") not an upgrade or "reimagining." The 1998 Psycho was a shot-for-shot remake of the 1960 original. The 2003 Battlestar Galactica was a reimagining of the 1978 original. See the difference?
In our case, the initial plan was to recreate the existing product in .NET. That would not have been technically daunting, since we understood the original well. However, in practice, the urge to add and fix and improve just a few things proved irresistible, and ultimately added 2-3 years to the timeline.
Second, you have to make sure that everyone from the execs to the sales staff to the end users is ok with your current product remaining unchanged during the development of the remake. If your market is moving in such a way that you won't be able to sustain your business during that period, then don't do it.
So the main obstacles for us turned out to be social, rather than technical. Users and business interests became very frustrated with the lack of visible progress. Everyone felt compelled to push for their own pet improvements and features, too, so our final product bore only a superficial resemblance to the original. It was definitely a reimagining rather than a remake.
In the end it seems to have turned out ok for us, but it was a real grind, and not something we'd choose to do again. We burned through a lot of goodwill and patience (both internal and external), which could've largely been avoided with an incremental approach.
C++ won't automatically translate to C# (not so you'd want to maintain it, anyway), and you're talking about using different frameworks.
That means you're doing a total rewrite of 250K lines of code. This is effectively the same as a new 250K-line project, except that you've got the requirements nicely spec'd out to start with. Well, not "nicely"; there's doubtless some difficult-to-understand code in there, some likely because of important issues that made elegance difficult, and the overall structure will be somewhat obscured.
That's a very large project. At the end, what you'll have is code that does the same thing, likely with more bugs, probably fairly badly structured (although you can refactor that over time), with more potential for future development. It won't have any of the new features people have been asking for during the project (unless you like living dangerously).
I'm not saying not to do it. I'm saying that you should know what you're proposing, what the cost will be, and what the benefits would be. In most cases, this adds up to "Don't do that!"
I did something similar. Part of my job involves developing & supporting some software called ContractEdge. It was originally developed in Visual C++ 6 by a team in India. Then I took over the development role after it was basically done in 2004. Later on, when Windows Vista was made available as a Beta I discovered that ContractEdge would crash in Vista. The same thing happened in the release candidate.
So I was faced with a decision. Either hunt for the problem in tens of thousands of lines of mostly unfamiliar code, or take the opportunity to rewrite it in .NET. Well, I rewrote it in VB.NET 2.0 in about 2 months. I approached it as a total rewrite, essentially scrapping everything and I simply focused on duplicating the functionality with a different language. As it turns out I only had to write about 1/10th the number of lines of code as the original. Then we held a one month long beta program to iron out any remaining bugs. Immediately after that we launched it and it's been a big success ever since, with fewer problems than the C++ version it replaced.
In our particular scenario I think the rewrite worked out well. The decision was made easier based on the fact that nobody on our team was as familiar with C++ as they were with .NET. So from that perspective, maintainability is now far easier. Nowadays I do think C++ is too low-level of a language for most business software. You really can get a lot more done in .NET with less code. I wrote about this subject on my blog.
Total rewrite for the sake of rewrite? I would not recommend it.
In addition to other responses, I would not take "faster development time" for granted. Sure, for most "business" data-centric applications it will probably be the case, but there are many areas where .NET will not bring in significant productivity increases, plus you need to take the learning curve into account.
We've done a big C++ >> C# migration as we moved to .NET. It's quite a tough project. Management will hardly bite at the funding for it, so you have to go for a compromise. The best approach is to leave the innermost (or lowest) layers in C++ and cover the upper part with C#, with better APIs designed with newer concepts like readability and API usability in mind, safeguarded with unit tests and advanced tools like FxCop. These are obviously great wins.
It also helps you layer your components a bit better, as it forces certain cuts. The end product is not as clean as you'd like, though: you may end up copying a lot of the C++ code, because years and years of coding contain many bug fixes and many undocumented and hard-to-understand optimizations. Add to that all the pointer tricks you could do in C (our code has evolved from C into C++ over time). As you stabilize, you find yourself more and more reading the C++ code and moving it into the C# - as opposed to the 'cleaner design' goals you had in mind at the beginning.
Then you find out that interop performance sucks. That may call for a second rewrite - maybe use unsafe C# code now. Grrr!
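For what it's worth, a minimal sketch of that layering (all names hypothetical): the legacy core stays native and the C# layer crosses the interop boundary once per batch rather than once per element, which is the usual way to keep the interop cost tolerable:

    // Hypothetical example of keeping the legacy core native and exposing
    // it to the new C# layer via P/Invoke. Crossing the boundary once per
    // batch (rather than per element) keeps interop overhead down.
    using System;
    using System.Runtime.InteropServices;

    internal static class LegacyCore
    {
        // Matches a hypothetical native export:
        // extern "C" int ComputeBatch(const double* input, int n, double* output);
        [DllImport("LegacyCore.dll", CallingConvention = CallingConvention.Cdecl)]
        private static extern int ComputeBatch(double[] input, int n, double[] output);

        public static double[] Compute(double[] input)
        {
            var output = new double[input.Length];
            if (ComputeBatch(input, input.Length, output) != 0)
                throw new InvalidOperationException("Native core reported an error.");
            return output;
        }
    }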
If all the team members come from C++, the new code will also look like a C++ design. Try to go for a mix of C# and C++ developers in the team, so you can get a more .NET-like API at the end.
After a while, the project may lose interest and management may not fund the entire rewrite, so you end up with C#-sugarcoated C++ code, and you may still have unicode/64-bit issues unresolved. It really calls for very, very careful planning.
I was involved in a project of very similar size. It was necessary to rewrite the GUI front end because of new hardware and new requirements. We decided to port this to .NET using C++/CLI. We were able to reuse more than half of the code, and the port worked quite well.
We were able to take advantage of .NET where it made the most sense. This made major parts of the code much cleaner. We found the book "Pro Visual C++/CLI and the .NET 2.0 platform" by Stephen R. G. Fraser very helpful.
Have you considered a port to C++.NET? It might be less painful.
I'm currently rewriting a rather large web application.
One thing to remember is that when converting from one language to another, especially something like C++ to .NET, you may end up with less, and probably cleaner, code, due either to language advances or to framework code.
That's one advantage for future maintainability, even aside from the opportunity to re-architect the less robust aspects of the old application.
Some additional comments.
Depending on the lifespan of your application you may be forced to rewrite it in a modern language since I suspect that C++ developers will become increasingly hard to find.
Just moving the app to a new language will not reap that great rewards. You'll probably want to do a redesign of the app as well! Do not underestimate the effort required to do this. I would guess the effort for a redesign + rewrite could be as much as 50% of the effort for the original implementation. (Of course, 50% is a totally unscientific guess).
It's way too easy to fool yourself into thinking "Well, C# and WPF are just so much more productive that rewriting this mess would be a piece of cake!"
Interestingly most of the answers from people who have done this seem positive. The most important thing IMO is to have good unit tests so that you can be sure your rewrite does what you want it to do (which may not be exactly the same as what the old code did).

Should you obfuscate a commercial .Net application?

I was thinking about obfuscating a commercial .Net application. But is it really worth the effort to select, buy and use such a tool? Are the obfuscated binaries really safe from reverse engineering?
You may not have to buy a tool - Visual Studio.NET comes with a community version of Dotfuscator. Other free obfuscation tools are listed here, and they may meet your needs.
It's possible that the obfuscated binaries aren't safe from reverse engineering, just like it's possible that your bike lock might be breakable/pickable. However, it's often the case that a small inconvenience is enough to deter would be code/bicycle thieves.
Also, if ever it comes time to assert your rights to a piece of code in court, having been seen to make an effort to protect it (by obfuscating it) may give you extra points. :-)
You do have to consider the downsides, though - it can be more difficult to use reflection with obfuscated code, and if you're using something like log4net to generate parts of log lines based on the name of the class involved, these messages can become much more difficult to interpret.
Remember that obfuscation is only a barrier to the casual examiner of your code. If someone is serious about figuring out what you wrote, you will have a very hard time stopping them.
If you have secrets in your code (like passwords), you're doing it wrong.
If you're worried someone might produce their own software with your ideas, you'll have more luck in the marketplace by providing new versions that your customers want, with technical support, and by being a partner to them. Good business wins.
At our company we evaluated several different obfuscation technologies, but they all had problems. The biggest problem was that we rely a lot on reflection, e.g. to dynamically create grids based upon property names.
All of the obfuscators rename things; you can disable that, of course, but then you lose a lot of the benefit of obfuscation.
Also, in our code we have a lot of NUnit tests which rely on many more of the methods and properties being public, and this prevented some of the obfuscators from being able to obfuscate those classes.
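One partial mitigation (a sketch; whether a given tool honors it must be verified) is the framework's ObfuscationAttribute, which many renaming obfuscators respect, letting reflection-dependent types opt out:

    // Sketch: System.Reflection.ObfuscationAttribute is honored by many
    // obfuscators and lets reflection-dependent types opt out of renaming
    // (grids bound by property name, NUnit fixtures, log4net loggers, ...).
    using System.Reflection;

    [Obfuscation(Exclude = true, ApplyToMembers = true)]
    public class CustomerRow
    {
        // These property names must survive obfuscation because the grid
        // binds to them by string at run time.
        public string Name { get; set; }
        public decimal Balance { get; set; }
    }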
In the end we settled on a product called .NET Reactor.
It works very well, and we don't have any of the problems associated with the other products.
"In contrast to obfuscators .NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code. In detail, .NET Reactor builds a native wall between potential hackers and your .NET code. The result is a standard Windows based, not MSIL compatible, file. The original .NET code remains intact, well protected by native code and invisible for prying eyes. The original .NET code is not copied on harddisk at any time. There is no tool which is able to decompile .NET Reactor protected assemblies."
The fact that you actually can reverse engineer it does not make obfuscation useless. It does raise the bar significantly.
An unobfuscated .NET assembly will show you all the source, highlighted and all, just by downloading .NET Reflector. Add obfuscation to that and you'll very significantly reduce the number of people who'll be able to modify the code.
It depends on whom you are protecting yourself from. If you ship it unobfuscated, you might as well open-source the application and benefit from the marketing. Shipping it obfuscated will only allow people to relatively easily generate modified binaries through patches, instead of being able to steal your code and create a direct competitor. Getting the actual source from obfuscated code is very hard, depending on the obfuscator, of course.
I think that it depends on the type of your product. If it is directed to be used by developers, obfuscation will hurt your customers. We've been using the ArcGIS products at work, and all the DLLs are obfuscated. It makes our job a lot harder, since we can't use Reflector to decipher weird behaviors. And we're paying customers who paid thousands of dollars for the product.
So please, don't obfuscate unless you really have to.
Things you should take into account:
Obfuscation does not protect your code or logic. It just makes it harder to read and understand.
Obfuscation does not stop anyone from reverse engineering. It just slows the process down.
Your intellectual property is protected by law in most countries, so if a competitor uses your code or your specific implementation, you can sue them.
The one and only problem obfuscation can solve is that someone creates a 1:1 (or close to 1:1) copy of your specific implementation.
Also, in an ideal world, reverse engineering of an obfuscated application is economically unattractive.
But back to reality:
There exists no tool on this planet that stops someone from copying the user interfaces, behaviors or results that any application provides or produces. Obfuscation is, in these situations, 100% useless.
The best obfuscator on the market cannot stop someone from using some kind of disassembler or hex editor, and for some geeks these are quite good enough to look into the heart of an application. It's just harder than with unobfuscated code.
So the reality is that you can make it harder and more time-consuming to look into your application, but you won't really get any reliable protection, regardless of whether you use a free or a commercial product.
Advanced technologies like control flow obfuscation or code virtualization may help to make understanding the logic really hard at times, but they can also cause a lot of funny, hard-to-debug problems. So they are sometimes more of an additional problem than a solution.
From my point of view, obfuscation is not worth the money some companies charge for their products. If you want to nag casual developers, open source obfuscators are good enough. If you want to make it as hard as possible to look into the heart of your applications, you need to use cryptographic containers with virtual execution environments and virtual filesystems, but these also provide attack vectors and may themselves be the source of a bag full of problems.
Your intellectual property and your products are protected by law in most countries. So if there's a competitor analyzing and copying your code, you can sue them. If a bad guy or a hacker or cracker takes your application, you're out of luck either way - an obfuscator does not make a difference.
So you should first think about your targets, your market and what you want to achieve with an obfuscator. As you can read here (and at other places), obfuscation does not really solve the problem of reverse engineering. It only makes it harder and more time-consuming. But if this is what you want, you may have a look at open source obfuscators like e.g. sharpObfuscator or obfuscar, which may be good enough to nag casual coders (a list can be found here: List of .NET Obfuscators on Wikipedia).
If it is possible in your scenario, you might also be interested in SaaS concepts. This means that you provide access to your software but not the software itself, so the customer normally has no access to your assemblies. But depending on service level, security and user base, it can be expensive, complex and difficult to realize a reliable, trusted and performant SaaS service.
No; it has been shown that obfuscation does not prevent someone from being able to decipher the compiled code. It makes it more difficult to do so, but not impossible.
I am very comfortable reading x86 assembly code; what about people who have been working with assembly for more than 20 years?
You will always find someone who only needs a minute to see what your C# or C code is doing...
Just a note to anyone else reading this years later - I just skimmed through the Dotfuscator Community Edition (that comes with VS2008) license a few hours ago, and I believe that you cannot use this version to distribute a commercial product, or to obfuscate code from a project that involves any developers other than yourself. So for commercial app developers, it's really just a trial version.
...snip... "these messages can become much more difficult to interpret"
Yes, but the free community edition that comes with Visual Studio has a map functionality. With that you can backtrack from the obfuscated method names to the original names.
I've had success putting the output from one free obfuscator into a different obfuscator. In Dotfuscator CE, only some of the obfuscation tricks are included, so using a second obfuscator that has different tricks makes it more obfuscated.
It's quite simple to reverse engineer a .net app using .net reflector - since the app will generate VB, VC and C# code straight from the MSIL, and it's possible to pull out all kinds of useful gems.
Code obfuscators hide code quite well from most reverse engineering hacks, and would be a good idea to use on proprietary and competitive code that adds value to your app.
There's a pretty good article on obfuscation and its workings here.
This post and the surrounding question have some discussion which might be of value. It isn't a yes-or-no issue.
Yes, you definitely should. Not to protect it from a determined person, but to get some profit and have customers. By the way, if you reach a point where someone tries to crack your software, that means you sell popular software.
The problem is what tool to choose for the job. Check out my experience with commercial obfuscators: https://stackoverflow.com/questions/337134/what-is-the-best-net-obfuscator-on-the-market/2356575#2356575
Yes, we do. We use BitHelmet obfuscator. It's new, but it works really well.
But is it really worth the effort to select, buy and use such a tool?
I found Eazfuscator cheap (free), and easy to use: took about a day.
I already had extensive automated tests (good coverage), so I reckon I could find any bugs that are/were introduced by obfuscation.
