My team and I are looking for evidence to support either a multi-library approach for grouping similar functionality, or condensing all of this functionality into a single service layer. It's important to note that this will sit behind a web API; either approach is valid, but we need to decide which holds more benefit. To illustrate the layer(s) we are looking at, the following is what we'll have:
Solution
WebAPI
Services ---- This is what we're looking at
DataAccess
Bear in mind that if we did use the multi-library approach we would still have a Services project, but it would be much thinner and have more specific functionality. We are not planning to deploy these libraries independently; everything that needs them will either reference them within the same solution or access them via the web API.
What the rest of us would propose is something like the following:
Solution
WebAPI
Services
Services.Geography ---
Services.Membership --- This is the alternative approach
Services.ProductDelivery ---
etc...
The benefits we see in the first option are having all of this code organized within a single library, which allows for easier extraction of duplicate code, potentially unit testing, and perhaps some relief from the build process.
The benefits of option two are having a clear delineation of functionality between projects, having isolated code that is potentially portable should the need arise, and generally being able to independently work on and configure different facets of the application.
The drawbacks we see in option one are that the Service layer becomes responsible for every facet of the application, which bloats that library and, in my opinion, somewhat violates Single Responsibility. We realize that rule is not as applicable to libraries as it is to methods and classes, but it still seems like there are other benefits to be had by separating functionality. There's also the potential to mistakenly place code where it doesn't belong, or to use classes available to the entire project where they may not apply.
The drawbacks in option two are an obvious increase in overhead on project builds, working in configs (even though this may be desirable), and potentially cluttering the solution with more projects than necessary. I think we'd plan to consolidate like functionality into single projects (e.g., we might build multiple implementations of ProductDelivery within that project to be able to switch between them or use different ones for different reasons, roughly as sketched below).
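To illustrate what we mean by multiple implementations in one project (the names here are hypothetical, just to show the shape we have in mind):

    public interface IProductDelivery
    {
        void Deliver(int orderId);
    }

    // Two interchangeable implementations living in the same project,
    // selected by configuration or context.
    public class CourierProductDelivery : IProductDelivery
    {
        public void Deliver(int orderId) { /* courier-specific logic */ }
    }

    public class DigitalProductDelivery : IProductDelivery
    {
        public void Deliver(int orderId) { /* download/licensing logic */ }
    }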
We realize all of our business rules can be accomplished with either approach, we just have reached an impasse in deciding which approach is better practice.
So the question is, which of these two approaches is better practice?
There are two things that would make me choose the first option:
Your services use only one data layer library.
Your services are really short (something like implementing just the CRUD)
Split into libraries, a class of only a few lines occupying an entire library can be awkward, unless you know it will grow a lot afterwards (into several classes, of course).
If not, I would say option 2 is better, because:
it's easier to replace a part of the service: change the library you want, and it's done.
it should be more abstracted, if you want to avoid strong coupling between the libraries
it should be easier to test a specific part of it.
it should be more configurable, and you can configure all of them in the project that references them all (even if one or several libraries don't change much).
it should be less like a god library
it might be more reusable for other projects, depending on how specialized your libraries are.
And I disagree with these points:
a single library which allows for easier extraction of duplicate code
If you are careful, your duplicate code can be extracted into a parent library, common to all the others. So all your duplicate code should still get extracted (except if there is a lack of communication, or people prefer to copy/paste code; but one library would not change that. It might even be harder to find where the code already exists).
potentially unit testing
Why would several libraries be more difficult to test?
If you have several libraries, you will have to make them more abstracted to allow for change. And then your testing should be easy.
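For example (a minimal sketch with made-up names), once a service depends on an abstraction instead of a concrete data layer, a hand-rolled stub is enough to test it in isolation:

    public interface IMemberRepository
    {
        bool Exists(string email);
    }

    public class MembershipService
    {
        private readonly IMemberRepository _repository;

        public MembershipService(IMemberRepository repository)
        {
            _repository = repository;
        }

        public bool CanRegister(string email)
        {
            return !_repository.Exists(email);
        }
    }

    // In the test project: no database, no mocking framework required.
    public class StubMemberRepository : IMemberRepository
    {
        public bool Exists(string email)
        {
            return email == "taken@example.com";
        }
    }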
and perhaps some relief from the build process.
Why? If all your libraries are well named, where would your problems be?
Deploying one DLL or several DLLs shouldn't be that hard.
If it's about the configuration: one library or more, you will still have the same configuration to write, not necessarily more (but probably a bit more).
I also disagree that single responsibility doesn't apply to libraries. It does.
Each library should be responsible for one business concern, not all of them. If you end up with a set of libraries, it can become a framework. Even a framework ends up having a single responsibility, just a much more general one than the responsibility of methods, classes, libraries, etc.
But you might want an opinion from a more experienced architect/developer than me.
If someone disagrees with me, don't hesitate to comment on my answer. I would be happy to learn from your knowledge.
With the comments from my first answer in mind:
The current plan is to have a single data layer. Many of our potential other libraries would be third party API wrappers that don't necessarily need to interact with the database. Those that do could potentially have their own data layer which may or may not interact with the same database or an independent database. I think doing that makes them self contained and able to exist without the rest of the solution. Still not totally sure if this is the approach we want to take yet though.
Dependency injection?
StructureMap as our IoC dependency resolver
You will end up with several libraries, unless all the libraries you use have to be used together.
You will either have your services become a kind of proxy for the third party libraries, or your services will use proxies for the third party libraries.
Either way, the proxy parts should not live together in the same library; it would be harder to change the third party library if you did that.
If you choose the solution where your services use proxies for the third party libraries, you can inject these proxies into your services easily, thanks to dependency injection.
If you change the third party library, change the proxy implementation and the injection, and it's done.
But if you choose to make your services the proxies, it's almost the same, except you have one layer less, and your service implementations have to be exported into different libraries. You will also have to be more careful when changing your services, because you could end up breaking things elsewhere in your app.
For that last point, having a proxy layer used by your services sounds better to me at the moment.
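To make the proxy idea concrete, here is a minimal sketch using StructureMap 2.x registration (the types are made up for illustration; only the For/Use registration is the actual StructureMap API):

    using StructureMap;

    // Proxy abstraction over a third party PDF library (hypothetical example).
    public interface IPdfRenderer
    {
        byte[] Render(string html);
    }

    public class WkHtmlToPdfRenderer : IPdfRenderer
    {
        public byte[] Render(string html)
        {
            // calls into the actual third party library here
            return new byte[0];
        }
    }

    public static class Bootstrapper
    {
        public static void Configure()
        {
            // Swapping the third party library later means changing only
            // the proxy implementation and this one registration.
            ObjectFactory.Initialize(x =>
            {
                x.For<IPdfRenderer>().Use<WkHtmlToPdfRenderer>();
            });
        }
    }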
I'm still thinking. More edits to come, I imagine.
I am working on a project using ASP.NET and C# and I need to pull in something like wkhtmltopdf. I realize that several good wrapper classes have been written to simplify calls to the DLLs from C#. But is there a reason why I should not invoke the executable directly? Is there any performance or security gain from using a wrapper library?
Although my specific need right now is wkhtmltopdf, I have had the same question in the past when using libraries like ImageMagick as well.
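For concreteness, by "invoking the executable directly" I mean something like this (the exe path and arguments are just examples):

    using System.Diagnostics;

    public static class PdfConverter
    {
        public static int ConvertToPdf(string inputUrl, string outputPath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\tools\wkhtmltopdf\wkhtmltopdf.exe",
                Arguments = "\"" + inputUrl + "\" \"" + outputPath + "\"",
                UseShellExecute = false,
                RedirectStandardError = true,   // capture errors rather than show a console
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                string errors = process.StandardError.ReadToEnd();
                process.WaitForExit();
                return process.ExitCode;        // non-zero means the conversion failed
            }
        }
    }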
It's a matter of preference. By using the wrapper classes you mentioned, the work that you do implementing components that you may not be so familiar with is reduced, thereby freeing up your valuable time to concentrate on those aspects of the application where perhaps you can make your strongest value-add, such as the overall application architecture and design, or perhaps the application's business logic.
If you choose to write all the code yourself, then you may find that you're a less productive developer than your competition.
And, as @UweKeim points out in his comment, performance may be a factor as well. If the wrapper code does not perform to your needs, you may well need to bypass it and go straight to the component/code library you're calling.
It's important to strike a balance between using code that others have written and writing your own. Important factors include how well the 3rd party code is written, how well it is supported, how well it performs, etc. Choose wisely!
I'm starting a new project which would greatly benefit from program add-ons. The program in its most basic form reads data from a serial port and parses it into database records. Examples of add-ons that could be written would be an auto-archive add-on, an add-on to filter records, etc. I'm writing both the program and the add-ons, but some customers need custom solutions, so instead of branching off and making a completely separate program, add-ons would be great. The simplest add-on would probably be a form whose constructor takes an object reference, manipulates the object in some way, then closes.
Unfortunately, I have absolutely no idea where to start coding, and almost as little idea where to search. Everything I search for turns up browser add-ons. From what I have gathered, I need to look into dynamically loading DLLs. Beyond that, I'm clueless. Does anyone have any good resources or examples that they know of?
I'm happy to provide more details, but this project is in its inception, so I don't have a ton of specific details (specifics kind of defeat the point of add-ons, too.)
You should seriously consider using the Managed Extensibility Framework (MEF) to handle your plugin architecture. It requires thinking about things a little differently, but it is well worth the mind-stretch.
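A minimal sketch of the approach (the IRecordFilter interface is made up to match the record-filtering add-on mentioned in the question; the attributes and catalog types are the actual MEF API from System.ComponentModel.Composition):

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IRecordFilter
    {
        bool ShouldKeep(string record);
    }

    // In a plugin assembly: export the implementation.
    [Export(typeof(IRecordFilter))]
    public class ArchiveFilter : IRecordFilter
    {
        public bool ShouldKeep(string record) { return true; }
    }

    // In the host application: discover and compose all exported filters.
    public class PluginHost
    {
        [ImportMany]
        public IEnumerable<IRecordFilter> Filters { get; set; }

        public void Load(string pluginFolder)
        {
            var catalog = new DirectoryCatalog(pluginFolder);   // scans DLLs in the folder
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);                       // fills the [ImportMany] list
        }
    }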
This is a simple example to illustrate the basic technique.
codeproject.com - Plugin Architecture using C#
This article demonstrates to you how to incorporate ... as a plugin for another application or use it as a standalone application.
In .NET 4 you now have the Managed Extensibility Framework (MEF) to do much of the plumbing.
In .NET 3.5 you had the System.AddIn but it was deemed by many to be far too complex.
codeproject.com - AddIn Enabled Applications with System.AddIn
AddIns (sometimes called Plugins) are separately compiled components that an application can locate, load and make use of at runtime (dynamically). An application that has been designed to use AddIns can be enhanced (by developing more AddIns) without the need for the original application to be modified or recompiled and tested.
You really need to look at Managed Extensibility Framework (MEF). This is specifically designed to help support add-ons and other extensibility.
A very basic description (basically, your plugins must implement a special interface):
http://martinfowler.com/eaaCatalog/plugin.html
Much better article, in C#:
http://www.drdobbs.com/184403942;jsessionid=TVLM2PGYFZZB1QE1GHPCKHWATMY32JVN
I think Reflection will play a major role.
I experimented with an app that had a plugin folder. A FileSystemWatcher would watch the folder, and when a new DLL was placed in it, the app would use reflection to determine which plugin types it contained, load them, add them to the list of available classes, and so on.
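From memory, the core of it looked roughly like this (IPlugin and the folder layout are placeholders):

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public interface IPlugin
    {
        void Initialize();
    }

    public class PluginWatcher
    {
        private FileSystemWatcher _watcher;   // kept as a field so it isn't collected

        public void Start(string pluginFolder)
        {
            _watcher = new FileSystemWatcher(pluginFolder, "*.dll");
            _watcher.Created += (sender, e) => LoadPlugins(e.FullPath);
            _watcher.EnableRaisingEvents = true;
        }

        private static void LoadPlugins(string dllPath)
        {
            Assembly assembly = Assembly.LoadFrom(dllPath);

            // Find every concrete type in the new DLL that implements IPlugin.
            var pluginTypes = assembly.GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (Type type in pluginTypes)
            {
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Initialize();   // or add it to the list of available plugins
            }
        }
    }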
Try using the term 'add-in' or 'plug-in' for your research instead of 'add-on'. That should help some.
If you're using .NET 4, there's an add-in namespace in the framework that will get you partway there.
Writing plug-in support for an app is no simple task. You'll have to maintain pretty strict separation-of-concerns across your interfaces, you'll need to provide an interop library that defines ALL of the supported plug-in types, and you'll want to do some research into dependency injection & inversion of control, in addition to the previously-suggested reflection research.
It sounds like you might have a busy weekend doing research.
One of the first things I learned when I started with C# was also the most important: you can decompile any .NET assembly with Reflector or other tools. Many developers are not aware of this fact, and most of them are shocked when I show them their own source code.
Protection against decompilation is still a difficult task. I am still looking for a fast, easy and secure way to do it. I don't want to obfuscate my code so that my method names become a, b, c and so on. Reflector or other tools should be unable to recognize my application as a .NET assembly at all. I know about some tools already, but they are very expensive. Is there any other way to protect my applications?
EDIT:
The reason for my question is not to prevent piracy. I only want to stop competitors from reading my code. I know they will, and they already have. They even told me so.
Maybe I am a bit paranoid but business rivals reading my code doesn't make me feel good.
One thing to keep in mind is that you want to do this in a way that makes business sense. To do that, you need to define your goals. So, exactly what are your goals?
Preventing piracy? That goal is not achievable. Even native code can be decompiled or cracked; the multitude of warez available online (even for products like Windows and Photoshop) is proof that a determined hacker can always gain access.
If you can't prevent piracy, then how about merely reducing it? This, too, is misguided. It only takes one person cracking your code for it to be available to everyone. You have to be lucky every time. The pirates only have to be lucky once.
I put it to you that the goal should be to maximize profits. You appear to believe that stopping piracy is necessary to this endeavor. It is not. Profit is simply revenue minus costs. Stopping piracy increases costs. It takes effort, which means adding cost somewhere in the process, and so reduces that side of the equation. Protecting your product also fails to increase your revenue. I know you look at all those pirates and see all the money you could make if only they would pay your license fees instead, but the reality is this will never happen. There is some hyperbole here, but it generally holds that pirates who are unable to crack your security will either find a similar product they can crack or do without. They will never buy it instead, and therefore they do not represent lost sales.
Additionally, securing your product actually reduces revenue. There are two reasons for this. One is the small percentage of customers who have trouble with your activation or security, and therefore decide not to buy again or ask for their money back. The other is the small percentage of people who actually try a pirated version of software to make sure it works before buying. Limiting the pirated distribution of your product (if you are somehow able to succeed at this) prevents these people from ever trying your product, and so they will never buy it. Moreover, piracy can also help your product spread to a wider audience, thus reaching more people who will be willing to pay for it.
A better strategy is to assume that your product will be pirated, and think about ways to take advantage of the situation. A couple more links on the topic:
How do i prevent my code from being stolen?
Securing a .NET Application
At work here we use Dotfuscator from PreEmptive Solutions.
Although it's impossible to protect .NET assemblies 100%, Dotfuscator makes it hard enough, I think.
It comes with a lot of obfuscation techniques:
Cross Assembly Renaming
Renaming Schemes
Renaming Prefix
Enhanced Overload Induction
Incremental Obfuscation
HTML Renaming Report
Control Flow
String Encryption
And it turned out that they're not very expensive: they have special pricing for small companies.
(No I'm not working for PreEmptive ;-))
There are freeware alternatives as well, of course.
Host your service in any cloud service provider.
How to prevent decompilation of any C# application
Pretty much describes the entire situation.
At some point the code will have to be translated to VM bytecode, and the user can get at it then.
Machine code isn't that much different, either. A good interactive disassembler/debugger like IDA Pro makes just about any native application transparent. The debugger is smart enough to identify common APIs, compiler optimizations, etc., and it allows the user to meticulously rebuild higher-level constructs from the assembly generated from the machine code.
And IDA Pro supports .Net to some extent too.
Honestly, after working on a reverse engineering (for compatibility) project for a few years, the main thing I took away from my experience is that I probably shouldn't worry too much about people stealing my code. If anyone wants it, it will never be very hard to get, no matter what scheme I implement.
No obfuscator can fully protect your application, not even any of those described here. See this link; it's a deobfuscator that can undo almost every obfuscator out there.
https://github.com/0xd4d/de4dot
The best approach that can help you (but remember it is also not foolproof) is to use mixed code: write your most important code in an unmanaged language and build it into a DLL, as in C or C++, and then protect it with Armageddon or Themida.
Themida is not beatable by every cracker; it's one of the best protectors on the market, and it can also protect your .NET software.
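The managed half of that mixed-code approach is just a P/Invoke declaration; the DLL and function names below are made up for illustration:

    using System.Runtime.InteropServices;

    public static class NativeSecrets
    {
        // The sensitive algorithm lives in secret.dll, written in C/C++ and
        // wrapped with a native protector; the .NET assembly only exposes
        // this thin call surface, so there is little of value to decompile.
        [DllImport("secret.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int ValidateLicense(string key);
    }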
I know you don't want to obfuscate, but maybe you should check out Dotfuscator; it will take your compiled assemblies and obfuscate them for you. I think it can even encrypt them.
I've heard about some projects that directly compile IL into native code.
You can get some additional info from this post:
Is it possible to compile .NET IL code to machine code?
We use SmartAssembly for .NET protection of an enterprise level distributed application, and it has worked great for us.
If you want to fully protect your app from decompilation, look at Aladdin's HASP. You can wrap your assemblies in an encrypted shell that can only be accessed by your application. Of course, one wonders how they're able to do this, but it works. I don't know, however, whether they protect your app from runtime attachment/reflection, which is what Crack.NET is able to do.
-- Edit
Also be careful of compiling to native code as a solution...there are decompilers for native code as well.
Do you API?
Instead of trying to protect one DLL in one of your products on all of your customers' devices, why not put your precious product features behind an API service? Let the actual product that is installed on a device consume that API to deliver the product as you want it.
I think this way you are 100% sure that your code is not decompiled, and you set your own limits in your API so that developers/hackers can't consume it in a way you don't want.
Sure, it's some more work, but in the end, you are in control.
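A sketch of the idea (the endpoint and names are hypothetical): the shipped client contains only an HTTP call, while the valuable logic stays on your server.

    using System.Net.Http;
    using System.Threading.Tasks;

    public class PricingClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // The proprietary pricing algorithm runs server-side behind this
        // endpoint; decompiling the client reveals nothing but a URL.
        public async Task<string> GetQuoteAsync(string productId)
        {
            return await Http.GetStringAsync(
                "https://api.example.com/v1/quotes/" + productId);
        }
    }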
If someone has to steal your code, it likely means your business model is not working. What do I mean by that? For example: I buy your product and then ask for support. You're too busy, or you believe my request is not valid and a waste of your time. So I decode your product in order to support my own business. Your product becomes more valuable to me, and I prioritize my time toward fixing the business model by leveraging your product. I recode and re-brand your product, then go out and make the money that you decided to leave on the table. There are reasons for protecting code, but most likely you are looking at the problem from the wrong perspective. Of course you are. You're the "coder", and I'm the business man. ;-) Cheers!
ps. I'm also a developer. i.e. "coder"
I know this is old, but Themida is the most advanced anti-cracking software I've ever used.
It's not free, though.
Besides the third party products listed here, there is another one: NetLib Encryptionizer. However, it works in a different way than the obfuscators. Obfuscators modify the assembly itself, with a deobfuscation "engine" built into it. Encryptionizer encrypts the DLLs (managed or unmanaged) at the file level, so it does not modify the DLL except to encrypt it. The "engine" in this case is a kernel-mode driver that sits between your application and the operating system. (Disclaimer: I am from NetLib Security.)
First of all, apologies for the subjective sounding title. This is intended as a direct question.
At present I am working on a suite of tools:
A C# Windows Service, primarily to maintain an Oracle database.
A C# Windows Service (which will be used on multiple node sites) to process content of the database.
An ASP.NET web interface to facilitate management of the overall "system"
Currently the two Windows Services have been developed as console applications (to ease debugging/development) and I am in the midst of converting them to services. After testing for a couple of days with these services, I'm finding that I would like to increase the granularity of my logging. I miss Console.WriteLine(), and I would like to provide an alternate log target, like a flat file, for this type of output. This has led me to think: "Should I be using a framework, or have I got enough?"
The reason I have mentioned the components I am developing is to provide insight into my situation. A "Core" DLL has been created, common across all components, abstracting the interaction layer between the applications and the database. It is within this DLL that a class has been created which will attempt to "log to a table in the database", and on failure "log to the local Event Log". That's it; that's the extent of the logging.
Throughout the aforementioned tools, there are multiple instances of logging not dissimilar to:
Log.LogError("Code", e.Message + "\n" + e.StackTrace);
Although quite basic, this method does make use of reflection to identify the source of the error.
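The Log class itself isn't shown above, but a sketch of how such a helper can use the call stack to identify the error source might look like this (the database/Event Log targets are stubbed out as console output):

    using System;
    using System.Diagnostics;

    public static class Log
    {
        public static void LogError(string code, string message)
        {
            // Skip one frame to find the method that called LogError.
            var caller = new StackTrace(1).GetFrame(0).GetMethod();
            string source = caller.DeclaringType.FullName + "." + caller.Name;

            // Real implementation: try the database table first, then fall
            // back to the local Event Log. Shown here as console output.
            Console.WriteLine("{0} [{1}] {2}: {3}", DateTime.Now, code, source, message);
        }
    }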
My Question
Looking at my current logging solution, it appears "sufficient" in terms of what it does and how it is integrated with all my solutions. However, I've been looking at logging frameworks (notably log4net) and their features impress me. The ability to add another output target (such as an SMTP server) in the future, if needed, sounds kind of cool to me! :)
What I would like to know is: what are the benefits of moving to a framework (like log4net)? To what extent will I have to adapt my code? Am I just looking at greener grass on the other side? And finally, but probably most importantly, am I doing the right thing? Should I just add a "LogDebug" ability to my Log class and be done with it? The last thing I want to do is completely overhaul my suite just for a "basic" feature, but if there are other benefits (to design, reliability, good practice, etc.) I'm interested.
Thanks,
Yes. Using an existing, proven logging framework (such as Log4net) is a good idea.
Log4Net is configurable at runtime (great for tracking down issues in production code).
As a commenter pointed out, it's also very simple to use.
Proper logging is especially beneficial when running code on multiple remote systems. As far as I recall, log4net will let you send your logs to a remote syslog server without much coding overhead (meaning you can view the logs from all machines in one centralized place). Doing this will massively reduce the time it takes to get information about a bug or problem with the system, and should also give you an indication of how prevalent the issue is.
As mentioned in other posts, log4net also allows for multiple appenders and multiple log levels, so determining where certain log information should be stored (e.g., in a database or in a local flat file; log4net even lets you spit logs out over telnet) is an absolute doddle.
As for implementing it, there are several good sites walking you through the setup. How you actually make use of the logger objects that log4net gives you is an architectural choice, but you could simply change the constructor of an object to take a log4net logger and, within that object, use it as you would Console.WriteLine.
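The per-class pattern is only a couple of lines; this is standard log4net usage, with the appenders and levels coming from the configuration file (the job class is just an example):

    using System;
    using log4net;
    using log4net.Config;

    public class DatabaseMaintenanceJob
    {
        private static readonly ILog Log =
            LogManager.GetLogger(typeof(DatabaseMaintenanceJob));

        public void Run()
        {
            Log.Debug("Starting maintenance pass");   // the Console.WriteLine replacement
            Log.Error("Maintenance failed", new InvalidOperationException("example"));
        }
    }

    // Once at startup (e.g. in Main): read appenders/levels from the config file.
    //   XmlConfigurator.Configure();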
I find the tutorial series here particularly useful, and it also goes into more depth than I can here about the benefits and the different ways of configuring log4net.
Yes, you definitely want to use a logging framework. A logging framework will allow you to:
Set the logging levels for the different logger instances.
Set the "appenders" or output for each of the different logger instances.
Perhaps most importantly, if you use a logging framework, it is very easy to swap out one implementation for another (perhaps a null implementation that simply discards messages); whereas, if you write all your logging statements directly, swapping out the implementation will be a nightmare.
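A minimal sketch of that indirection (the interface name is mine, not from any framework):

    using System;

    public interface IAppLogger
    {
        void Error(string message);
    }

    // Adapter over log4net (assumes the log4net assembly is referenced).
    public class Log4NetLogger : IAppLogger
    {
        private readonly log4net.ILog _log;

        public Log4NetLogger(Type source)
        {
            _log = log4net.LogManager.GetLogger(source);
        }

        public void Error(string message)
        {
            _log.Error(message);
        }
    }

    // Null implementation: swap this in to silently discard all messages.
    public class NullLogger : IAppLogger
    {
        public void Error(string message) { }
    }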
I think you should use log4net, simply because it's always better to reuse than to build your own thing. log4net has been used by a lot of developers and is pretty mature.
Think about your maintenance prospects: one or two months down the road, you might need to tweak your custom logging class a bit, to add some multithreading support, etc. And while you are fixing the bugs arising from your logging class, you will miss log4net.
Well, one of the bigger benefits is not having to maintain the code yourself. Most of the time, logging frameworks have a lot more functionality than your own solution. Because they are so focused on logging, those frameworks are usually pretty complete in both functionality and ways to implement it. And then there's reliability; there's nothing worse than a logging framework that's not logging anything because it's bugged. ;)
Take, for example, ELMAH for ASP.NET applications. It also includes notifications, exports to various target formats, etc. Things that are pretty handy but that you'll never build yourself unless you really need them.
How many changes to your code are needed obviously depends on both your code and the framework of choice. It's hard to say anything about that.
I am going to give a shout-out to NLog (http://nlog-project.org/home), as it doesn't suffer from the 'straight Java port, then rewrite' syndrome of most OSS .NET libs.
Some key benefits for us were the very fast Logger.IsFooEnabled (volatile read) and the overall performance of the system.
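That guard pattern looks like this in NLog (standard NLog API; the class is an example):

    using NLog;

    public class ContentProcessor
    {
        private static readonly Logger Log = LogManager.GetCurrentClassLogger();

        public void Process(int recordCount)
        {
            // The IsDebugEnabled check avoids building the message string
            // at all when debug logging is switched off.
            if (Log.IsDebugEnabled)
                Log.Debug("Processing " + recordCount + " records");
        }
    }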
To each his own, though; I personally prefer NLog for my projects (and so do some of my clients).
Cheers,
Florian
The advantage of using a good logging framework like log4net is that it has a small impact on your code in terms of the lines you have to alter (in other words, you only have to alter each existing logging line).
Also, if you are concerned about altering your code when you change frameworks, or if you feel you want to roll your own, then you could always create your own interface to a logging framework. After that, you only ever have to change your code in one place.
I think sysadmins expect services to log to the Application event log in Windows.
Look up System.Diagnostics.EventLog, although log4net will write to that too.
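Writing there directly is straightforward (the source name is an example; creating an event source needs admin rights, so it is usually done by the installer):

    using System.Diagnostics;

    public static class ServiceLog
    {
        private const string Source = "MyMaintenanceService";   // example name

        public static void WriteError(string message)
        {
            if (!EventLog.SourceExists(Source))
                EventLog.CreateEventSource(Source, "Application");

            EventLog.WriteEntry(Source, message, EventLogEntryType.Error);
        }
    }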
The introductory statement on the log4j website might help with some of your questions; the underlying principles of log4net are the same:
With log4j it is possible to enable logging at runtime without modifying the application binary. The log4j package is designed so that these statements can remain in shipped code without incurring a heavy performance cost. Logging behavior can be controlled by editing a configuration file, without touching the application binary.
Using a logger hierarchy it is possible to control which log statements are output at arbitrarily fine granularity but also great ease. This helps reduce the volume of logged output and minimize the cost of logging.
In this case there's clearly no need to reinvent the wheel. Most logging frameworks are somewhat straightforward, so the extent of the changes will most likely depend on the size of your existing programs.
If you write your logger class properly, it will be easily extensible to any of your needs. Any framework could impress you with many features, but another framework is another variable in your debugging process: it can report an error that does not exist, or produce an error by itself in combination with your application. If you are ready to do beta testing for an open source software project, that is fine...
In your place I would write a log class with the ability to extend it with the features you find interesting for your project, based on the feature lists of the known frameworks. I don't see any problem with logging something to a file and then sending it over SMTP; one small function does the job.
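That "one small function" could look like this (the SMTP host and addresses are placeholders):

    using System.Net.Mail;

    public static class LogMailer
    {
        public static void MailLogFile(string logPath)
        {
            // Attach the flat-file log and send it via a placeholder SMTP host.
            using (var message = new MailMessage("service@example.com", "admin@example.com"))
            using (var client = new SmtpClient("smtp.example.com"))
            {
                message.Subject = "Daily service log";
                message.Attachments.Add(new Attachment(logPath));
                client.Send(message);
            }
        }
    }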
Moreover, you can write your own class that is fairly abstract and put your basic code in there; if you ever need to use an external framework, your class would be able to adopt it with minimal impact on your code. Just take a look at how these frameworks are implemented at the code level.
Think about it: you would need to learn how to use these frameworks properly, when your only need for now is to log a very small part of what they cover...