Static methods in the Data Access Layer - C#

I have used a lot of static methods in my Data Access Layer (DAL), like:
public static DataTable GetClientNames()
{
    return CommonDAL.GetDataTable("....");
}
But I found out that some developers don't like the idea of static methods inside the DAL. I just need some reasons for or against using static methods in the DAL.
Thanks
Tony

From a purist's point of view, this violates all kinds of best practices (dependency on implementation, tight coupling, opaque dependencies, etc.). I would have said this myself, but recently I tend to move towards simpler solutions without diving too much into "enterprisey" features and buzzwords. Therefore, if it's fine with you to write code like this, if this architecture allows for fast development and is testable and, most importantly, solves your business problem -- it's just fine.

If I had to pick one reason not to use static methods, it would be that it limits your ability to write unit tests against your code. For example, creating mocks for your DAL will be more difficult because there is no actual interface to code against, just a bunch of static methods. This further limits you if/when you decide to adopt frameworks that require interfaces to support things like IoC, dependency injection, etc.
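For illustration, here is a minimal sketch of what an interface-based DAL could look like (the IClientRepository and ClientListPresenter names are invented for this example, not taken from the question):
using System.Data;

public interface IClientRepository
{
    DataTable GetClientNames();
}

public class ClientRepository : IClientRepository
{
    public DataTable GetClientNames()
    {
        // Same query as before, just behind an instance method.
        return CommonDAL.GetDataTable("....");
    }
}

// Consumers depend on the interface, so tests can hand them a mock or stub.
public class ClientListPresenter
{
    private readonly IClientRepository _repository;

    public ClientListPresenter(IClientRepository repository)
    {
        _repository = repository;
    }

    public DataTable LoadClients()
    {
        return _repository.GetClientNames();
    }
}
In a test, IClientRepository can be replaced with a mocking-framework stub or a hand-rolled fake without ever touching a database.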

That's Unit of Work, just static, isn't it?

Related

Is there a better way than using static classes or singletons for MVVM?

I've found that in a lot of cases, it seems (at least superficially) to make sense to use Singletons or Static classes for models in my WPF-MVVM applications. Mostly, this is because most of my models need to be accessed throughout the application. Making my models static makes for a simple way of satisfying this requirement.
And yet, I'm conflicted because everyone on the planet seems to hate singletons. So I'm wondering if there isn't a better way, or if I'm doing something obviously wrong?
There are a couple of issues with Singletons. I'll outline them below, and then propose some alternative solutions. I am not a "Never use a singleton, or you're a crap coder" kind of guy as I believe they do have their uses. But those uses are rare.
Thread safety. If you have a global-static Singleton, then it has to be thread-safe because anything can access it at any time. This is additional overhead.
Unit Testing is more difficult with Singletons.
It's a cheap replacement for global variables (I mean, that's what a singleton is at the end of the day, although it may have methods and other fancy things).
See, it's not that Singletons are "horrid abominations" per se, but that it's the first design pattern many new programmers get to deal with, and its convenience obfuscates its pitfalls (just stick some gears on it and call it steampunk).
In your case, you're referring to Models and these are always "instances" as they naturally reflect the data. Perhaps you are worried about the cost of obtaining these instances. Believe me, they should be negligible (down to data-access design, obviously).
So, alternatives? Pass the Model to the places that require it. This makes unit testing easier, and allows you to swap out the fundamentals of that model in a heartbeat. This also means you might want to have a look at interfaces - these denote a contract. You can then create concrete objects that implement these interfaces and voila - your code is easily unit-testable and modifiable.
In the singleton world, a single change to that singleton could fundamentally break everything in the code-base. Not a good thing.
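A rough sketch of the "pass the model in" approach (IAccountModel and AccountViewModel are made-up names for illustration):
public interface IAccountModel
{
    decimal Balance { get; }
    void Deposit(decimal amount);
}

public class AccountModel : IAccountModel
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
}

// The view model receives the model it needs instead of reaching for a static or singleton.
public class AccountViewModel
{
    private readonly IAccountModel _account;

    public AccountViewModel(IAccountModel account)
    {
        _account = account;
    }

    public decimal Balance
    {
        get { return _account.Balance; }
    }
}
A unit test can hand the view model a stub implementation of IAccountModel, and swapping out the real model later doesn't ripple through the code base.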
I think the accepted solution to this problem is to use dependency injection. With dependency injection, you can define your models as regular, non-static classes and then have an inversion of control container "inject" an instance of your model when you want it.
There is a nice tutorial at wpftutorial.net that shows how to do dependency injection in WPF: http://wpftutorial.net/ReferenceArchitecture.html
The only static class I use is a MessageBroker because I need the same instance all through the application.
But even then it is very possible to use Unity (or another Dependency Injection/Container) to manage instances of classes that you only need one of.
This allows you to inject different versions when needed (e.g. during unit tests)
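For example, a rough sketch with Unity (this assumes the classic Microsoft.Practices.Unity API; IMessageBroker and MessageBroker are placeholder names):
using Microsoft.Practices.Unity;

public static class Bootstrapper
{
    public static IUnityContainer Configure()
    {
        var container = new UnityContainer();

        // One shared instance for the whole application, but still replaceable in tests.
        container.RegisterType<IMessageBroker, MessageBroker>(
            new ContainerControlledLifetimeManager());

        return container;
    }
}

// At startup:
// var container = Bootstrapper.Configure();
// var broker = container.Resolve<IMessageBroker>();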
BTW: static is not the same as singleton. But I am not getting into that here. Just search stackoverflow for more fun questions on that :)

facade design pattern

Lately I've been trying to follow the TDD methodology, and this results in lot of subclasses so that one can easily mock dependencies, etc.
For example, I could have, say, a RecurringProfile which in turn has methods/operations that can be applied to it, like MarkAsCancel, RenewProfile, MarkAsExpired, etc. Due to TDD, these are implemented as 'small' classes, like MarkAsCancelService, etc.
Does it make sense to create a 'facade' (singleton) for the various methods/operations which can be performed on, say, a RecurringProfile - for example a class RecurringProfileFacade? This would then contain methods which delegate work to the actual sub-classes, e.g.:
public class RecurringProfileFacade
{
    public void MarkAsCancelled(IRecurringProfile profile)
    {
        MarkAsCancelledService service = new MarkAsCancelledService();
        service.MarkAsCancelled(profile);
    }

    public void RenewProfile(IRecurringProfile profile)
    {
        RenewProfileService service = new RenewProfileService();
        service.Renew(profile);
    }
    ...
}
Note that the above code is not actual code, and the actual code would use constructor-injected dependencies. The idea behind this is that the consumer of such code would not need to know the inner details about which classes/sub-classes they need to call, but just access the respective 'Facade'.
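With the constructor-injected dependencies mentioned above, the facade might look roughly like this (the service interfaces are assumed, not shown in the question):
public class RecurringProfileFacade
{
    private readonly IMarkAsCancelledService _markAsCancelled;
    private readonly IRenewProfileService _renewProfile;

    public RecurringProfileFacade(IMarkAsCancelledService markAsCancelled,
                                  IRenewProfileService renewProfile)
    {
        _markAsCancelled = markAsCancelled;
        _renewProfile = renewProfile;
    }

    public void MarkAsCancelled(IRecurringProfile profile)
    {
        _markAsCancelled.MarkAsCancelled(profile);
    }

    public void RenewProfile(IRecurringProfile profile)
    {
        _renewProfile.Renew(profile);
    }
}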
First of all, is this the 'Facade' pattern, or is it some other form of design pattern?
The other question which would follow if the above makes sense is - would you do unit-tests on such methods, considering that they do not have any particular business logic function?
I would only create a facade like this if you intend to expose your code to others as a library. You can create a facade which is the interface everyone else uses.
This will give you some capability later to change the implementation.
If this is not the case, then what purpose does this facade provide? If a piece of code wants to call one method on the facade, it will have a dependency on the entire facade. It's best to keep dependencies small, so calling code would be better off with a dependency on MarkAsCancelledService than one on RecurringProfileFacade.
In my opinion, this is kind of the facade pattern, since you are abstracting your services behind simple methods, though a facade usually has a bit more logic behind its methods, I think. The reason is that the purpose of the facade pattern is to offer a simplified interface to a larger body of code.
As for your second question, I always unit test everything. Though, in your case, it depends: does it change state when you cancel or renew a profile? Because then you could assert that the state did change as you expected.
If your design "tells" you that you could use a Singleton to do some work for you, then it's probably bad design. TDD should lead you far away from thinking about using singletons.
Reasons why it's a bad idea (or when it can be an OK one) can be found on Wikipedia.
My answer to your questions is: look at other patterns! For example UnitOfWork, Strategy and Mediator - try to achieve the same functionality with these patterns and you'll be able to compare the benefits of each one. You'll probably end up with a UnitOfStrategicMediationFacade or something ;-)
Consider posting this question over at Code Review for more in-depth analysis.
When facing that kind of issue, I usually try to reason from a YAGNI/KISS perspective:
Do you need MarkAsCancelService, MarkAsExpiredService and the like in the first place ? Wouldn't these operations have a better home in RecurringProfile itself ?
You say these services are byproducts of your TDD process, but TDD (1) doesn't mean stripping business entities of all logic, and (2) if you do externalize some logic, it doesn't have to go into a Service. If you can't come up with a better name than [BehaviorName]Service for your extracted class, it's often a sign that you should stop and reconsider whether the behavior should really be extracted.
In short, your objects should remain cohesive, which means they shouldn't encapsulate too many responsibilities, but not become anemic either.
Assuming these services are justified, do you really need a Facade for them ? If it's only a convenient shortcut for developers, is it worth the additional maintenance (a change in one of the services will generate a change in the facade) ? Wouldn't it be simpler if each consumer of one of the services knows how to leverage that service directly ?
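As a sketch of what "a better home in RecurringProfile itself" might look like (the members shown here are invented for illustration):
public enum ProfileStatus { Active, Cancelled, Expired }

public class RecurringProfile
{
    public ProfileStatus Status { get; private set; }

    public void MarkAsCancelled()
    {
        // The rule lives next to the data it operates on, instead of in a separate *Service class.
        if (Status == ProfileStatus.Expired)
            throw new System.InvalidOperationException("An expired profile cannot be cancelled.");

        Status = ProfileStatus.Cancelled;
    }
}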
The other question which would follow if the above makes sense is - would you do unit-tests on such methods, considering that they do not have any particular business logic function?
Unit testing boilerplate code can be a real pain indeed. Some will take that pain, others consider it not worth it. Due to the repetitive and predictable nature of such code, one good compromise is to generate your tests automatically, or even better, all your boilerplate code automatically.

Unit tests in same class (with conditional compilation)?

I'm aware of (and agree with) the usual arguments for placing unit tests in a separate assembly. However, of late I've been experiencing some situations where I really want to be testing private methods. The behind-the-scenes logic in question is complex enough that testing the public and internal interfaces doesn't quite get the job done. Testing only against the class's public interface feels overwrought, and I see several spots where a few tests against privates would get the job done more simply and effectively.
In the past I've tackled these kinds of situations by making the stuff I need to test protected, and creating a subclass that I can use to get at it in the test framework. But that doesn't work so well on classes that should be sealed. Not to mention bloating the test framework with all that scaffolding.
So I'm thinking of doing this instead: Place some tests in the class, where they can get at the private members. But keep them out of the production code using `#if DEBUG`.
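Concretely, the idea would look something like this (a rough sketch assuming an NUnit-style framework is referenced in debug builds; PriceCalculator is an invented example):
public sealed class PriceCalculator
{
    private decimal ApplyDiscount(decimal price)
    {
        return price * 0.9m;
    }

#if DEBUG
    // The nested test class can reach the private method, but is compiled out of release builds.
    [NUnit.Framework.TestFixture]
    public class PriceCalculatorTests
    {
        [NUnit.Framework.Test]
        public void ApplyDiscount_TakesTenPercentOff()
        {
            var calculator = new PriceCalculator();
            NUnit.Framework.Assert.AreEqual(90m, calculator.ApplyDiscount(100m));
        }
    }
#endif
}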
Does this seem like a good idea?
Before anybody asks...
The solution to the OP's problem is to properly incorporate IoC with DI and eliminate the need to test private methods altogether (as Joel Martinez noted). As has been mentioned multiple times, unit testing private members is not the way to go.
However, sometimes you just can't change the code (legacy systems, risk of breaking changes - you name it), nor can you use tools that allow private member testing (like Typemock, which is a paid product). For such cases, you can either not test at all, or cut corners. Which I believe is the situation the OP is facing.
Leaving private methods testing discussion aside...
Remember you can use reflection to access and invoke private members.
In my opinion, placing conditionally compiled tests in the class itself is a rather bad idea - it adds noise (as in, something unrelated) to the class code. Sure, it will be gone in release, but you (and possibly other programmers) will have to deal with it on a daily basis.
I realize your idea might sound good on paper - a simple test wrapped in a conditional compilation block. But in reality, tests quickly grow to use extra variables (those will also have to be placed in the class code), utility code (extra references, custom types), testing frameworks (even more references) and what not. All of this will have to be somehow connected to the class code. Put it all together, and you quickly end up with an unmaintainable monster.
Are you sure you want to deal with that? Especially considering that throwing together a simple reflection-based utility is probably not that hard.
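For completeness, a minimal sketch of the reflection route, reusing the hypothetical PriceCalculator from the question's sketch above:
using System.Reflection;

public class PriceCalculatorReflectionTests
{
    [NUnit.Framework.Test]
    public void ApplyDiscount_CanBeExercisedViaReflection()
    {
        var target = new PriceCalculator();

        // Look up the private instance method by name - brittle, but no change to the class is needed.
        MethodInfo method = typeof(PriceCalculator).GetMethod(
            "ApplyDiscount",
            BindingFlags.Instance | BindingFlags.NonPublic);

        var result = (decimal)method.Invoke(target, new object[] { 100m });

        NUnit.Framework.Assert.AreEqual(90m, result);
    }
}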
Everything you're referring to can be solved with just two concepts: Single Responsibility Principle, and Dependency Injection. It definitely sounds like you need to simplify your classes. Mind you, that doesn't mean the class must offer less value, it just means that the internals need to be simpler and some functionality may have to be delegated to others.
If you need to test this method independently of the public API of the class, then it sounds like a candidate for being removed from the class itself.
You could say the class is dependent on the private method (as is arguably evident by the need to test it separately from the class public API).
If this dependency cannot be satisfied through testing the public API of the type alone then have the class instead delegate this dependency to another type. You can either instantiate this type internally or have this type injected / resolved.
This new type can then have its own unit tests, as its public API will now express what was previously a private method.
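A rough sketch of that extraction, continuing the hypothetical discount example (all names invented):
// The logic that used to be a private method gets its own type with a public, testable API.
public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

public class TenPercentDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price)
    {
        return price * 0.9m;
    }
}

public class PriceCalculator
{
    private readonly IDiscountPolicy _discountPolicy;

    public PriceCalculator(IDiscountPolicy discountPolicy)
    {
        _discountPolicy = discountPolicy;
    }

    public decimal Quote(decimal listPrice)
    {
        return _discountPolicy.Apply(listPrice);
    }
}
TenPercentDiscountPolicy now gets ordinary unit tests against its public API, and PriceCalculator can be tested with a stub policy.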

Cross-DLL Extension of an Entity Class

What I want to achieve:
A service assembly (project) that holds entity classes - pure data.
A GUI assembly that extends those entities for its own purposes - runtime information for the GUI.
What I tried:
Derivation (the GUI defines class ExtendedEntity : Service.BaseEntity)
seems to be the most common and only practicable way to me, but:
converting Service.BaseEntity to ExtendedEntity after retrieving data from the service is painful. I can work around this by using reflection to generate new ExtendedEntity instances based on the base entity instances, but that can't be the 'proper' solution.
Partial classes
is exactly what I'm looking for, except for the fact that it does not work cross-assembly.
I'd greatly appreciate any hints helping me to find a proper & clean solution without reflection cheating =)
This is not a direct answer, but you may want to think a little more about your design. Why does your GUI need intimate knowledge of the mechanics of data storage? Typically we work very hard to make sure that the UI and the data access are loosely coupled, so we can make changes to either without fear of breaking what already works. The design you are looking to implement can lead to unforeseen problems later.
One common pattern that works well for this type of thing is called the Repository pattern. Essentially the service assembly (repository) would contain all of the knowledge required to push data into and out of a particular data store. The 'shape' of the data is well known, and shared between the GUI and the repository. The service assembly would make the CRUD operations available to the GUI, and the GUI would hold a reference to the repository and call methods on it to fetch, create and update the data it needs (a rough sketch follows the links below).
Here are some links to get started on the ideas of loose coupling, the repository pattern, and dependency injection.
Cohesion and coupling
What is dependency injection
What's a good repository pattern tutorial
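A bare-bones sketch of the repository idea, using the BaseEntity from the question (the repository interface and its members are invented for illustration):
using System.Collections.Generic;

// The 'shape' of the data, shared between the service assembly and the GUI.
public class BaseEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Lives in the service assembly; the GUI holds a reference to this contract only.
public interface IEntityRepository
{
    BaseEntity GetById(int id);
    IEnumerable<BaseEntity> GetAll();
    void Save(BaseEntity entity);
    void Delete(int id);
}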
Is decompilation an option? If yes, you can use e.g. PostSharp or Mono Cecil to rewrite the classes in question and add the code you want them to have.
I am curious why you do not want to use the standard OO approach like derivation. It is definitely not hacking.
The "cleanest" OO solution is to use aggregation and encapsulate the Entity classes inside objects where you can fully control what you can do with the data and how you want to manipulate or query it. You have reached "heaven" when your aggregation class does not need to expose the internal Entity class anymore because your class is powerful enough to support all necessary operations with the right abstractions.
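Roughly, the aggregation idea could look like this on the GUI side (ProfileViewItem and the Name property are assumed for illustration):
// GUI assembly: wraps the service entity instead of inheriting from it.
public class ProfileViewItem
{
    private readonly Service.BaseEntity _entity;

    // GUI-only runtime state lives here, not on the entity itself.
    public bool IsSelected { get; set; }

    public ProfileViewItem(Service.BaseEntity entity)
    {
        _entity = entity;
    }

    public string DisplayName
    {
        get { return _entity.Name; }
    }
}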
If the classes you want to extend are sealed then you need to think hard why the writers of these classes did not want you to extend them.
Eric Lippert has a nice post about the usages of the sealed keyword.
...
Now, I recognize that developers are highly practical people who just
want to get stuff done. Being able to extend any class is convenient,
sure. Typical developers say "IS-A-SHMIZ-A, I just want to slap a
Confusticator into the Froboznicator class". That developer could
write up a hash table to map one to the other, but then you have to
worry about when to remove the items, etc, etc, etc -- it's not rocket
science, but it is work.
Obviously there is a tradeoff here. The tradeoff is between letting
developers save a little time by allowing them to treat any old object
as a property bag on the one hand, and developing a well-designed,
OOPtacular, fully-featured, robust, secure, predictable, testable
framework in a reasonable amount of time -- and I'm going to lean
heavily towards the latter. Because you know what? Those same
developers are going to complain bitterly if the framework we give
them slows them down because it is half-baked, brittle, insecure, and
not fully tested!
...
Yours,
Alois Kraus
You could have your GUI assembly define extension methods on the entity classes. With appropriate using directives, this would mean the consuming code would not know or care where the methods were actually defined.
A slight annoyance would be the non-existence of extension properties, so even things that are conceptually properties would have to be implemented as methods.
It would look a little like this:
In Service assembly
public class FooDTO
{
    public string Name { get; set; }
}
In GUI assembly
internal static class Extensions
{
    // Artificial example!
    public static int GetNameLength(this FooDTO foo)
    {
        return foo.Name.Length;
    }
}
// Consuming code
int myFooNameLength = myFooDTO.GetNameLength();

Tramp Data vs. Testability

I'm not doing much new development right now, but a lot of refactoring of older C# subsystems whose original requirements no longer cover new - and, I'll add, unexpected - requirements. I'm also now using Rhino Mocks and unit tests where possible (VS 2008).
The dilemma for me is that to make the methods testable and mockable, I need to define clear "contracts" using interfaces. However, if I do this, a lot of the global data that many of the classes use turns into tramp data, passed from method to method until it gets to its intended user; this looks ugly and is against my sensibilities, but... it can be mocked. Making a mixed-bag class with a lot of static global properties is a more attractive option, but not Rhino-testable. Is there a middle ground between the two? Testable, but not too trampy? A pattern, perhaps?
You should also understand that these applications run on an in-house corporate-developed platform, so there are a lot of helper classes and services that are instantiated once per application and then used throughout the application - for example, a database accessor helper class. Another example is configuration files that are read once and then used throughout the application by different methods for various reasons.
Your thoughts appreciated.
What you might want to look at here is some form of the Service Locator Pattern. Make them classes find their own tramps.
Some other reasonable options would include wrapping up the bulk of the commonly used stuff in an "application context" class of some sort.
You also might wish to look into dependency injection if you haven't done so yet.
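As a middle ground, one option is to gather the shared helpers behind a single interface and pass (or inject) that one object, rather than each individual value (all the names below are illustrative):
// Placeholders standing in for the platform's shared helpers.
public interface IDatabaseAccessor { }
public interface IConfigurationReader { }

// One injectable 'application context' instead of many tramp parameters or static globals.
public interface IApplicationContext
{
    IDatabaseAccessor Database { get; }
    IConfigurationReader Configuration { get; }
}

public class ApplicationContext : IApplicationContext
{
    private readonly IDatabaseAccessor _database;
    private readonly IConfigurationReader _configuration;

    public ApplicationContext(IDatabaseAccessor database, IConfigurationReader configuration)
    {
        _database = database;
        _configuration = configuration;
    }

    public IDatabaseAccessor Database { get { return _database; } }
    public IConfigurationReader Configuration { get { return _configuration; } }
}

// A consumer takes the context once instead of tramping each value through every call.
public class OrderProcessor
{
    private readonly IApplicationContext _context;

    public OrderProcessor(IApplicationContext context)
    {
        _context = context;
    }
}
Rhino Mocks can stub IApplicationContext in a test, while production code still sees the same once-per-application instances.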
