Executable vs wrapper class - C#

I am working on a project using ASP.NET and C#, and I need to pull in something like wkhtmltopdf. I realize that several good wrapper classes have been written to simplify calls to the DLLs from C#. But is there a reason why I should not invoke the executable directly? Is there any performance or security gain from using a wrapper library?
Although my specific need right now is wkhtmltopdf, I have had the same question in the past when using libraries like ImageMagick as well.

It's a matter of preference. Using the wrapper classes you mentioned reduces the work you spend implementing components you may not be familiar with, freeing up your valuable time to concentrate on the aspects of the application where you add the most value, such as the overall architecture and design, or the application's business logic.
If you choose to write all the code yourself, then you may find that you're a less productive developer than your competition.
And, as @UweKeim points out in his comment, performance may be a factor as well. If the wrapper code does not perform to your needs, you may well need to bypass it and go straight to the component or code library you're calling.
It's important to strike a balance between using code that others have written and writing your own. Important factors include how well the third-party code is written, how well it is supported, and how well it performs. Choose wisely!
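For reference, invoking the executable directly from C# takes only a few lines with System.Diagnostics.Process. A minimal sketch, assuming wkhtmltopdf.exe is installed at a known path (the paths and class name here are hypothetical):

    using System;
    using System.Diagnostics;

    public static class PdfRunner
    {
        public static void Convert(string url, string outputPath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\tools\wkhtmltopdf\wkhtmltopdf.exe",  // hypothetical install path
                Arguments = "\"" + url + "\" \"" + outputPath + "\"",
                UseShellExecute = false,
                RedirectStandardError = true,
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                string errors = process.StandardError.ReadToEnd();
                process.WaitForExit();
                if (process.ExitCode != 0)
                    throw new InvalidOperationException("wkhtmltopdf failed: " + errors);
            }
        }
    }

Bear in mind that under ASP.NET each call spawns a new process, with the startup cost, permission, and timeout handling that implies; a good part of what the wrapper libraries give you is exactly this plumbing.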

How do you access the Profiler API from pure managed C# code?

Background
I am developing a library called Harmony that currently uses detouring at the assembler level to monkey-patch methods at runtime. This works fine, and I have it working on all combinations of hardware and .NET, but it is something of an ugly hack that does not work when methods are inlined.
Scope
I know roughly what the Profiler API can do and that you can alter the IL before it reaches the JIT compiler. My library already provides a high-level way to get the modified IL method body, so all I need is to feed it in through the Profiler API.
Asking around, I was advised to write a small C/C++ module to access it, but I would rather do this from my managed code. Reasons: my library should ship as one final DLL, I would rather not deal with C/C++ code, and different environments might make a native component hard to support.
Question
Is this approach possible? Is there an easier way? Is there a profiler library that hides this implementation and just gives me a high level replacement callback?
Note
Please do not question the motives of my library; I am fully aware that this falls outside of any “normal” C# programming. It's mainly for patching and modding games, and with 1500+ stars on GitHub it has been a very successful project so far. I just want to lift it to an even more compatible level.

Unit testing legacy code: limits of "extract and override" vs JustMock/TypeMock/Moles?

Given the following conditions:
a very old, big C# legacy code base with no test coverage whatsoever
(almost) every class derives from some interface
nothing is sealed
What are the practical benefits of using profiler-API-driven solutions like JustMock and TypeMock, compared to using extract & override plus, e.g., RhinoMocks? Are there cases I'm not aware of, besides circumventing private/protected members, where TypeMock/JustMock etc. are really needed? I'd especially welcome experience from people who have switched to one of these products.
Using extract & override seems to solve all the problems when handling old legacy code, the refactoring seems dead simple, and the possibility of introducing bugs seems very minor. Is the benefit writing less test code? More beautiful classes with less virtual protected stuff? Right now I don't 'get it', although I understand it's very helpful to first test private methods in isolation, as public methods may be too large under the hood in such old legacy code bases.
If you don't know what extract&override is: see here.
There are many differences between the frameworks that have nothing to do with the technology they are built on.
For example:
API - every framework has different notations and defaults (e.g. strict defaults vs. relaxed defaults)
Support - the proprietary frameworks usually offer support with the licenses
Price - not a matter of usage, but it does require budget
The main advantage of Extract & Override is that it requires some refactoring: if the code you're working on is neglected, it gives you a good opportunity to go over it and refactor it toward better code, not just for testability.
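To illustrate the pattern under discussion, here is a minimal extract-and-override sketch (the class and member names are hypothetical): the awkward dependency is extracted into a virtual method, and a test subclass overrides it.

    public class InvoiceService
    {
        public decimal TotalWithTax(decimal net)
        {
            return net * (1 + GetTaxRate());
        }

        // "extract": the dependency is isolated behind a virtual method
        protected virtual decimal GetTaxRate()
        {
            // imagine a database or web-service call here
            return 0.175m;
        }
    }

    // "override": the test subclass replaces the dependency
    internal class TestableInvoiceService : InvoiceService
    {
        protected override decimal GetTaxRate() { return 0.2m; }
    }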
The main advantage of using an isolation framework is that you do not need to change the code under test (if it's a large code base, it could take a long time just to refactor it for testability). In addition, isolation frameworks do not force you into a specific design, which can be helpful if the legacy code fits its existing design better. Another feature that is useful in legacy code is swapping instances created inside the code under test; refactoring instantiations usually takes more effort, and this can be saved. The last thing is faking third-party code: with isolation frameworks you can isolate code that is not yours without writing wrapper classes.
Disclaimer - I work at Typemock

Recommended migration strategy for C++ project in Visual Studio 6

For a large application written in C++ using Visual Studio 6, what is the best way to move into the modern era?
I'd like to take an incremental approach where we slowly move portions of the code, write new features in C# for example, and compile that into a library or DLL that can be referenced from the legacy application.
Is this possible and what is the best way to do it?
Edit: At this point we are limited to the Express editions, which I believe don't allow use of the MFC libraries that are heavily used in our current app. It's also quite a large app with a lot of hardware dependencies, so I don't think a wholesale migration is in the cards.
Edit2: We've looked into writing COM-wrapped components in C#, but having no COM experience, this is scary and complicated. Is it possible to generate a C# DLL with a straight-C interface, with all the managed goodness hidden inside? Or is COM a necessary evil?
I'd like to take an incremental approach where we slowly move portions of the code
That's the only realistic way to do it.
First, what kind of version control do you use? (A branching version control system allows you to experiment and see what works while minimizing the risk of compromising your code; others are OK also, but you'll have to be really careful, depending on what you are using.)
Edit: I just saw you are using SVN. It may be worthwhile to move to Mercurial or Git if you have the liberty to do that (the change provides a quantum leap in what you can do with the code base).
and write new features into C# for example and compile that into a library or dll that can be referenced from the legacy application.
That's ... not necessarily a good idea. C# code can expose COM interfaces that are accessible from C++. Writing client code in C++ for modules written in C# can be fun, but you may find it taxing (in terms of effort-to-benefit ratio); it is also slow and error-prone (compared to writing C# client code for modules written in C++).
Consider instead creating an application framework in C# and using the modules already written in C++ for the core functionality.
Is this possible and what is the best way to do it?
Yes, it's possible.
How many people are involved in the project?
If there are many, the best way would be to have a few (two? four?) work on the new application framework and have the rest continue as usual.
If there are few, you can consider having either a person in charge of this, or more people working part-time on it.
The percentage of people/effort assigned on each (old code maintenance and new code development) should depend on the size of the team and your priorities (Is the transition a low priority issue? Is it necessary to be finished by a given date?)
The best way to do this would be to start adapting modules of the code to be usable in multiple scenarios (with both the old code and the new one) and continue development in parallel (again, this would be greatly eased by using a branching distributed version control system).
Here's how I would go about it (iterative development, with small steps and lots of validity checks in between):
Pick a functional module (something that is not GUI-related) in the old code-base.
Remove references to MFC (and to other libraries not available in VS2010 Express, like ATL) from the module picked in step 1.
Do not attempt to rewrite MFC/ATL functionality with custom code except for small changes (that is, it is not feasible to create your own GUI framework, but it is OK to write your own COM interface pointer wrapper similar to ATL's CComPtr).
If the code is heavily dependent on a library, it is better to separate it as much as possible, then mark it down to be rewritten at a future point using new technologies. Either way, for a module heavily dependent on MFC you're better off rewriting the code using something else (C#?).
Reduce coupling with the chosen module as much as possible (make sure the code is in a separate library, and decide clearly what functionality the module exposes to client code), and access the delimited functionality only through the decided exposed interface (in the old code).
Make sure the old code base still works with the modified module (test - eventually automate the testing for this module) - this is critical if you need to still stay in the market until you can ship the new version.
While maintaining the current application, start a new project (C# based?) that implements the GUI and other parts you need to modernize (like the parts heavily-dependent on MFC). This should be a thin-layer application, preferably agnostic of the business logic (which should remain in the legacy code as much as possible).
Depending on what the old code does and the interfaces you define, it may make sense to use C++/CLI instead of C# for parts of the code (it can work with both native C++ pointers and managed code, allowing an easy transition when communicating between managed .NET code and native C++ code).
Make the new application use the module picked in step 1.
Pick a new module, go back to step 2.
Advantages:
refactoring will be performed (necessary for the separation of modules)
at the end you should have a battery of tests for your functional modules (if you do not already).
you still have something to ship in between.
A few notes:
If you do not use a distributed branching version control system, you're better off working on one module at a time. If you use branching/distributed source control, you can distribute different modules to different team members, and centralize the changes every time something new has been ported.
It is very important that each step is clearly delimited (so that you can roll back your changes to the last stable version, try new things and so on). This is another issue that is difficult with SVN and easy with Mercurial / Git.
Before starting, change the names of all your project files to have a .2005.vcproj extension, and do the same for the solution file. When creating the new project file, do the same with .2010.vcxproj for the project files and solution (you should still do this if you convert the solutions/projects). The idea is that you should have both in parallel and open whichever you want at any point. You shouldn't have to make a source-tree update to a different label/tag/date in source control just to switch IDEs.
Edit2: We've looked into writing COM-wrapped components in C# but having no COM experience this is scary and complicated.
You can still do it by writing wrapper code (a small templated smart-pointer class for COM interfaces wouldn't go amiss, for example - similar to CComPtr in ATL). If you isolate the COM code behind some wrappers, you can write client code (agnostic of COM) with (almost) no problems.
Is it possible to generate a C# dll with a straight-C interface with all the managed goodness hidden inside? Or is COM a necessary evil?
Not that I know of. I think COM will be a necessary evil if you plan to use server code written in C# and client code in C++.
It is possible the other way around.
Faced with the same task, my strategy would be something like:
Identify what we hope to gain by moving to 2010 development - it could be
improved quality assurance: unit testing and mocking are part of modern development tools
slicker UI: WPF provides a modern look and feel.
productivity: in some areas, .NET development is more productive than C++ development
support: new tools are supported with improvements and bugfixes.
Identify which parts of the system will not gain from being moved to C#:
hardware access, low-level algorithmic code
pretty much most bespoke non-UI working code - no point throwing it out if it already works
Identify which parts of the system need to be migrated to C#. For these parts, ensure that the current C++ implementation is decoupled and modular so that those parts can be swapped out. If the app is a monolith, then considerable work will be needed refactoring the app so that it can be broken up and selected pieces reimplemented in C#. (It is also possible to refactor nothing, and instead just focus on implementing new application functionality in C#.)
Now that you've identified which parts will remain in C++ and which parts will be implemented in C# (or have simply stipulated that new features are in C#), the focus turns to how to integrate C# and C++ into a single solution:
Use COM wrappers - if your existing C++ project makes good use of OO, this is often not as difficult as it may seem. With MSVC 6 you can use the ATL classes to expose your classes as COM components.
Integrate the native and C# code directly. Integrating "legacy" compiled code requires an intermediate DLL - see here for details.
Mixing the MFC UI and a C# UI is probably not achievable, and not advisable either, as it would produce a UI mix of two distinct styles (1990s grey and a 2010 vibe). It is simpler to focus on incremental migration, such as implementing new application code in C# and calling it from the native C++ code. This keeps the amount of migrated C# code small to begin with. As you get further into 2010 development, you can then take on the larger chunks that cannot be migrated incrementally, such as the UI.
First, your definition of "modern era" is controversial. There's no reason to assume C# is better in any sense than C++. A lot has been said on whether C# helps you avoid memory management errors, but this is hardly the case given modern facilities in C++, and it's very easy to make a mess in C# in terms of resource acquisition timing, which may depend on what other programs are doing.
If you move straight from 6 to 2010 you may end up with some messed up project settings. If this isn't a fairly large project, and it's one of few that you need to convert, then that should be fine. Just open it in 2010, and follow the conversion wizard. Make sure to back up your project first, and verify your project settings when you're done.
In my opinion though the best way is to convert it step by step through each iteration of Visual Studio. I had to modernize 1400 projects from 2003 to 2010, and the best way that I found was to convert everything to 2005, then to 2008, and then finally to 2010. This caused the least amount of issues to arise for me.
If you only have VS6 and the newest Visual Studio, you may end up having to go straight to the new one using the wizard. Expect some manual cleanup before everything builds correctly again.
Also, one more time, BACK IT UP FIRST! :)
High-level C++ code calling low-level C# code doesn't look like a good idea. The areas where .NET languages are better are user interface, database access, networking, and XML file handling. Low-level stuff like calculations, hardware access, etc. is better kept as native C++ code.
Moving to .NET, in most cases it is better to rewrite the UI completely, using WPF or Windows Forms. Low-level stuff remains native, and different interoperability technologies are used to connect C# and native code: P/Invoke, C++/CLI wrappers, or COM interop. After some time, you may decide to rewrite low-level native components in C#, but only if it is really necessary.
About compiling native C++ code in VS2010 - I don't see any problems. Just fix all compilation errors: new compilers have stricter type checking and syntax restrictions, and catch many more bugs at compile time.
Not sure why so many folks are advocating for COM. If you haven't already got a lot of COM in there, learning how to do it on the C++ side is going to hurt, and then you're using the slowest possible interop from the managed side. Not my first choice.
Ideally you have refactored your UI away from your business logic. You can then build a new UI (WPF, WinForms, ASP.NET, web services that support some other client, whatever) and call into your business logic through P/Invoke or a C++/CLI wrapper. @mdma has good advice for you, assuming that the refactoring is possible.
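For completeness, the P/Invoke side of that advice looks roughly like this, assuming the business logic has been compiled into a native DLL with C-style exports (the DLL name and export below are hypothetical):

    using System.Runtime.InteropServices;

    internal static class LegacyInterop
    {
        // assumes the C++ side exports:
        //   extern "C" __declspec(dllexport) int CalculateTotal(int orderId);
        [DllImport("LegacyCore.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int CalculateTotal(int orderId);
    }

    // usage from the new C# code:
    // int total = LegacyInterop.CalculateTotal(42);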
However if you were paying me to come in and help you my very first question would be why do you want to do this? Some clients say they don't want to pay C++ devs any more, so they want all the C++ code gone. This is a scary objective because we all hate to touch code that works. Some clients want to expose their logic to ASP.NET or Reporting Services or something, so for them we concentrate on the refactoring. And some say "it looks so 1999" and for them I show them what MFC looks like now. Colours, skinning/theming including office and win7 looks, ribbon, floating/docking panes and windows, Windows 7 taskbar integration ... if you just want to look different, take a look at MFC in VS 2010 and you might not have to adjust any code at all.
Finally, to make non-Express versions of VS 2010 affordable, look into the Microsoft Partner Program. If you have sold your software to at least 3 customers who still speak to you, and can get through the Windows 7 logo self-test (I have got VB6 apps through that in a day or two), then you can have 5-10 copies of everything (Windows, Office, VS) for $1900 or so a year, depending on where you live.
To start I'd try and keep as much code as possible to avoid a rewrite. I'd also remove all unused code before starting the conversion.
Since VC++ 6.0, Microsoft has changed both the MFC libraries and the C++ Standard Library.
I recommend starting by building your DLLs that have no dependencies, then looking at your third-party libraries, and then rebuilding one dependent DLL/EXE at a time.
Introduce unit tests to make sure the behaviour of code does not change.
If you have a mixed build using different versions of VC++, you need to guard against passing resources (such as file handles) between DLLs that use different versions of the VC runtime.
If at all financially possible, I would strongly consider just paying for the version of Visual Studio that you need, because you could very well lose more money on the time you spend working around the limitations. I do not know enough about the Express editions to give a good answer on them, but when integrating some code from a subcontractor that was written in C++, I used C++/CLI. You will probably be able to reuse most of your code base and will be familiar with the language, but you will also have access to managed code and libraries. Also, if you want to start writing new code in C#, you can do that. The biggest problem I had with it was that in VS 2010 there is no IntelliSense for C++/CLI.
Visual Studio 6 is legendary for being buggy and slow. Moving into the modern era would best be done by getting a new compiler. Probably the easiest thing to do is to build the legacy app into a DLL, then write your exe in C# and use P/Invoke. Then you never have to touch the old code again - you can just write more and more in C# and use less and less of the old DLL.
If your old code is very heavily OO, you can use C++/CLI to write wrapper classes that allow .NET to call methods on C++ objects, and collect them too if you use a reference counted smart pointer.
You can use C# to write your new components with a COM or COM+ (System.EnterpriseServices) wrapper, which will be callable from your existing C++ code.
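A rough sketch of what the C# side of such a COM-visible component can look like (the names and GUIDs are placeholders, not a definitive recipe):

    using System;
    using System.Runtime.InteropServices;

    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]   // placeholder GUID - generate your own
    public interface IReportEngine
    {
        string Render(string input);
    }

    [ComVisible(true)]
    [Guid("66666666-7777-8888-9999-000000000000")]   // placeholder GUID - generate your own
    [ClassInterface(ClassInterfaceType.None)]
    public class ReportEngine : IReportEngine
    {
        public string Render(string input)
        {
            return "<rendered>" + input + "</rendered>";
        }
    }

After registering the assembly with regasm /tlb, the existing C++ code can #import the generated type library and call the interface through a COM smart pointer.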

Logging Framework, a good idea?

First of all, apologies for the subjective sounding title. This is intended as a direct question.
At present I am working on a suite of tools:
A C# Windows Service, to primarily maintain an Oracle database.
A C# Windows Service (which will be used on multiple node sites) to process content of the database.
An ASP.NET web interface to facilitate management of the overall "system".
Currently the two Windows Services have been developed as console applications (to ease debugging/development) and I am in the midst of converting them to services. After testing for a couple of days with these services, I'm finding that I would like to increase the granularity of my logging. I miss Console.WriteLine(), and I would like to provide an alternate log target, like a flat file, for this type of output. This has led me to think: "Should I be using a framework, or have I got enough?"
The reason I have mentioned the components I am developing is to give insight into my situation. A "Core" DLL has been created, common across all components, abstracting the interaction layer between the applications and the database. Within this DLL a class has been created which attempts to log to a table in the database and, on failure, falls back to logging to the local event log. That's it; that's the extent of the logging.
Throughout the aforementioned tools, there are multiple instances of logging not dissimilar to:
Log.LogError("Code", e.Message + "\n" + e.StackTrace);
Although quite basic, this method does make use of reflection to identify the source of the error.
My Question
Looking at my current logging solution, it appears "sufficient" in terms of what it does and how it is integrated with all my solutions. However, I've been looking at logging frameworks (notably log4net), and their features impress me. The ability to add another output target (such as an SMTP appender) if needed in the future sounds kind of cool to me! :)
What I would like to know is: what are the benefits of moving to a framework like log4net? How much will I have to adapt my code? Am I just looking at greener grass on the other side? And finally, but probably most importantly, am I doing the right thing? Should I just add a "LogDebug" method to my Log class and be done with it? The last thing I want to do is completely overhaul my suite just for a "basic" feature, but if there are other benefits (to design, reliability, good practice, etc.), I'm interested.
Thanks,
Yes. Using an existing, proven logging framework (such as Log4net) is a good idea.
Log4Net is configurable at runtime (great for tracking down issues in production code).
As a commenter pointed out, it's also very simple to use.
Proper logging is especially beneficial when running code on multiple remote systems. As far as I recall, log4net will let you send your logs to a remote syslog server without much coding overhead (meaning you can view your logs from all machines in one centralized place). Doing this will massively reduce the time it takes to get information about a bug or problem with the system, and should also give you an indication of how prevalent the issue is.
As mentioned in other posts, log4net also allows multiple appenders and multiple log levels, so deciding where certain log information should be stored (i.e. in a database or in a local flat file - log4net even lets you spit logs out over telnet) is an absolute doddle.
As for implementing it, there are several good sites talking you through the setup. How you actually make use of the logger objects that log4net gives you is an architectural choice, but you could simply change an object's constructor to take a log4net logger and, within that object, use it as you would Console.WriteLine.
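To give a feel for the adaptation effort, typical log4net usage looks roughly like this (a sketch only; the class here is hypothetical and configuration details vary):

    using System;
    using log4net;
    using log4net.Config;

    public class ContentProcessor
    {
        private static readonly ILog Log =
            LogManager.GetLogger(typeof(ContentProcessor));

        public void Process()
        {
            Log.Debug("Starting processing run");       // the Console.WriteLine replacement
            try
            {
                // ... do the work ...
            }
            catch (Exception e)
            {
                Log.Error("Processing failed", e);      // logs message, exception and stack trace
            }
        }
    }

    // called once at service startup; appenders and levels live in the config file:
    // XmlConfigurator.Configure();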
I find the tutorial series here particularly useful, and it'll also go into more depth than I can here about the benefits and the different ways of configuring log4net.
Yes, you definitely want to use a logging framework. A logging framework will allow you to:
Set the logging levels for the different logger instances.
Set the "appenders" or output for each of the different logger instances.
Perhaps most importantly, if you use a logging framework, it is very easy to swap one implementation for another (perhaps a null implementation that simply discards messages); whereas if you write all your logging statements directly against your own class, swapping out the implementation will be a nightmare.
I think you should use log4net, simply because it's always better to reuse than to build your own thing. log4net has been used by a lot of developers and is pretty mature.
Think about your maintenance prospects; one or two months down the road, you might need to tweak your custom logging class a bit, to add some multithreading support, etc. And when you are fixing the bugs that arise from your logging class, you will miss log4net.
Well, one of the bigger benefits is not having to maintain the code yourself. Most of the time, logging frameworks have a lot more functionality than your own solution. Because they are so focused on logging, those frameworks are usually pretty complete in both functionality and ways to implement it. And then there's reliability; there's nothing worse than a logging framework that's not logging anything because it's bugged. ;)
Take for example ELMAH for ASP.net applications. It also includes notifications, exports to various target formats, etc. Things that are pretty handy but you'll never build yourself unless you really need it.
How many changes to your code are needed obviously depends on both your code and the framework of choice. It's hard to say anything about that.
I am going to give a shout-out to NLog (http://nlog-project.org/home), as it doesn't suffer from the 'straight Java port, then rewrite' syndrome of most OSS .NET libs.
Some key benefits for us were the very fast Logger.IsFooEnabled (volatile read) and the overall performance of the system.
To each their own though, but I personally prefer NLog for my projects (and some of my clients' too).
Cheers,
Florian
The advantage of using a good logging framework like log4net is that it has a small impact on your code in terms of the lines you have to alter (in other words, you only have to touch the existing logging lines).
Also, if you are concerned about altering your code if you change frameworks, or if you feel you want to roll your own, then you could always create your own interface to a logging framework. Then you only ever have to change your code in one place after that.
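A sketch of that indirection (the interface shape is just one possibility):

    using System;

    public interface IAppLog
    {
        void Debug(string message);
        void Error(string message, Exception ex);
    }

    // a thin adapter over log4net; swapping frameworks later means rewriting only this class
    public class Log4NetAppLog : IAppLog
    {
        private readonly log4net.ILog _log;

        public Log4NetAppLog(log4net.ILog log) { _log = log; }

        public void Debug(string message) { _log.Debug(message); }
        public void Error(string message, Exception ex) { _log.Error(message, ex); }
    }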
I think sysadmins expect services to log to the application event log in Windows.
Look up System.Diagnostics.EventLog, although log4net will write to that too.
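Writing to the event log directly is only a few lines (the source name below is hypothetical):

    using System.Diagnostics;

    public static class ServiceEventLog
    {
        // creating a source requires admin rights and is typically done once, at install time
        public static void LogInfo(string message)
        {
            const string source = "MyDbMaintenanceService";
            if (!EventLog.SourceExists(source))
                EventLog.CreateEventSource(source, "Application");

            EventLog.WriteEntry(source, message, EventLogEntryType.Information);
        }
    }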
The introductory statement on the log4j website might help with some of your questions; the underlying principles are the same in log4net:
With log4j it is possible to enable logging at runtime without modifying the application binary. The log4j package is designed so that these statements can remain in shipped code without incurring a heavy performance cost. Logging behavior can be controlled by editing a configuration file, without touching the application binary.
Using a logger hierarchy it is possible to control which log statements are output at arbitrarily fine granularity but also great ease. This helps reduce the volume of logged output and minimize the cost of logging.
In this case there's clearly no need to reinvent the wheel. Most logging frameworks are somewhat straightforward, so the extent of the changes will most likely depend on the size of your existing programs.
If you write your logger class properly, it will be easily extensible to any of your needs. Any framework can impress you with many features, but another framework is another variable in your debugging process, as it can report an error that does not exist or produce an error of its own in combination with your application. If you are prepared to do beta testing for an open-source software project, that is fine...
In your place I would write a log class with the ability to extend it with the features you find interesting for your project, based on the feature lists of the known frameworks. I don't see any problem with logging something to a file and then sending it over SMTP; just one small function does the job.
Moreover, you can write your own fairly abstract class and put your basic code in there; if you ever need to use an external framework for testing, your class would be able to use it with minimal impact on the code. Just take a look at how these frameworks are implemented at the code level.
Also think of the fact that you will need to learn how to use these frameworks properly, when all you currently need is a very small part of what they offer...

Duplicate Functionality Amongst Multiple Projects

I'm currently working on two social networking sites that have a lot in common, yet are distinctly different. I find myself writing a lot of the same code for both (including UI), and was wondering if there is a best practice that will limit duplicated code.
One of the main problems is that these projects are very independent of each other and will likely have more differences than similarities soon. Also, once the initial work is done, they might be handed off to other programmers, so having shared code libraries might end up being a big problem.
Any suggestions from people who have had to deal with a similar situation?
PS: I'm the only developer on both of these projects, and it looks like it's going to stay that way for a while.
Abstracting shared functionality back to a framework or library with defined interfaces and default implementations is a common way to handle this. For example, your plugin architecture, if you choose to support one, is probably something that could be shared among all of your projects. Most of the time the things you want to share are pretty basic functionality or relatively abstract functionality that can be easily customized. The former are easier to recognize and factor out to common libraries. The latter may sometimes be more work than simply re-implementing the code with minor changes (sharing patterns rather than code).
One thing you want to be careful of is to let the actual re-use drive the design of common libraries rather than coming up with a shared architecture in advance. It's very tempting to get caught up in framework design and abstracting it out for shared use. Unfortunately you often find that the shared use never develops or develops in a different direction than you expected and you end up rewriting or throwing away much of the framework -- or even worse, keeping and maintaining unused code. Let YAGNI (you aren't gonna need it) be your guide and delay refactoring to common libraries until you actually have a need.
There are a couple (at least) of different approaches here, and you could certainly use both. First, you could pull some common code into a separate project and just call that code statically. This is pretty easy to do, and I sometimes take this approach with simple helper functions that probably don't belong in a class in my main project - a good example would be a math library or something like that. The other approach is to extract common functionality into a class or interface which you then inherit from and extend. Depending on what code you are looking to reuse, you might use either (or both) of these approaches.
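For the static-call approach, the shared project can be as small as this sketch (the names are hypothetical):

    // a Shared.Utilities class library referenced by both sites
    namespace Shared.Utilities
    {
        public static class TextHelpers
        {
            // one copy of logic the two sites previously duplicated
            public static string Truncate(string value, int maxLength)
            {
                if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                    return value;
                return value.Substring(0, maxLength) + "...";
            }
        }
    }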
I suspect you will find it easier than you think. Try it with some simple code, set up a new project in the same solution, reference your library from your existing code and see how it goes. There is also no reason not to reference your shared project in multiple solutions either.
Having shared code libraries need not be a problem if the development gets handed off. For now you can have your two sites reference the same library (or libraries), which you maintain, but if and when you split the projects out to other teams, you can give a copy of the shared code to each team.
