Is this design pattern (extended view helper) acceptable? - C#

The class (ViewHelper) that takes care of the user input and sends it back to the model is getting bigger, and I want to create an extended class (ExtendedViewHelper) that inherits from the ViewHelper class. The problem is that I don't know whether this follows pure OO design. Here is a class diagram:
Now some code to simplify it even more:
//ViewHelper class
public ViewHelper(View tempForm)
{
    xForm = tempForm;
    //some more code
}
//ExtendedViewHelper class
public ExtendedViewHelper(View yForm) : base(yForm)
{
    //some more code
}
//And the View
public View()
{
//Instantiating the object to ExtendedViewHelper
viewHelper = new ExtendedViewHelper(this);
//Calling method from class ViewHelper
viewHelper.OnButtonClicked();
//and from ExtendedViewHelper
((ExtendedViewHelper)viewHelper).OnSecondBtnClicked();
}
Would you say that this is a good solution to the problem (if it's even considered a problem), or am I overengineering things? Is there a better solution, or should I just keep everything in ViewHelper (~700 lines of code)?

The best solutions are the ones that create the least amount of coupling and the simplest possible classes.
Your View currently depends on its ViewHelper. This is acceptable.
However, if your View ever casts something as an ExtendedViewHelper, it is then coupled to two objects, which could give the system two reasons to change and two places where things can break. This violates the Single Responsibility Principle.
The one role of the View should be to display things. It should not be concerned with where the system functionality exists or how to process commands.
The ViewHelper also should have one role. It should act as the go-between from the View to the Controller/Services/Functionality Layer. The ViewHelper should never have implementation details of how any operations are performed.
So a better solution looks like this:
public View()
{
//Instantiating the object to ExtendedViewHelper
viewHelper = new ExtendedViewHelper(this);
//Calling method from class ViewHelper
viewHelper.OnButtonClicked();
//and from ExtendedViewHelper
viewHelper.OnSecondBtnClicked();
}
//ViewHelper constructor
public ViewHelper(View tempForm, OldFunctionalityService oldService)
{
    xForm = tempForm;
    xService = oldService;
}
//First button implementation
public void OnButtonClicked()
{
    xService.DoStuff();
}
//ExtendedViewHelper constructor
public ExtendedViewHelper(View tempForm, OldFunctionalityService oldService, NewFunctionalityService newService)
    : base(tempForm, oldService)
{
    xNewService = newService;
}
//Second button implementation
public void OnSecondBtnClicked()
{
    xNewService.DoStuff();
}
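One detail the snippet glosses over: for that last call to compile without a cast, the viewHelper field has to be declared as a type that exposes OnSecondBtnClicked. The answer doesn't show that declaration, so the following is only a sketch of one way to wire it up; the field type and the way the services reach the View are assumptions, not part of the original answer.

//Sketch only: declaring the field as ExtendedViewHelper is what makes the cast unnecessary.
public class View
{
    private readonly ExtendedViewHelper viewHelper;

    public View(OldFunctionalityService oldService, NewFunctionalityService newService)
    {
        //The services are supplied by whoever composes the View
        viewHelper = new ExtendedViewHelper(this, oldService, newService);
        viewHelper.OnButtonClicked();
        viewHelper.OnSecondBtnClicked();
    }
}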


Does/why does WebViewPage need to be inherited from twice, for generic+non-generic?

I've inherited a codebase and found some code whose purpose I can't figure out (or whether it's needed at all).
It's a custom ViewPage, but we have the exact same code repeated twice - once for ViewPage and once for ViewPage<T>.
Basically:
public class MyPageBase : WebViewPage
{
//A whole bunch of properties intended to be accessible on every page
}
public class MyPageBase<T> : WebViewPage<T>
{
//The exact same properties. Doesn't actually use T anywhere. The code is literally identical.
}
Having so much repeated code is far from ideal. Worse, I don't understand why it's needed. A few tests have shown that the top one doesn't seem to do anything, but I'm unable to do a comprehensive search of all views (this MyPageBase is used in dozens of apps).
So my question is: Does/why does WebViewPage need to be inherited from twice, for generic+non-generic?
First of all, it's not inherited twice; you have two implementations of the same class, one generic and one non-generic.
You haven't provided the inner code, so here is an example. Say you have something like this:
public class MyPageBase<T> : WebViewPage<T> where T : class
{
    //The exact same properties
    private DbContext db;
    public MyPageBase()
    {
        db = new MPContext();
    }
    public List<T> Fill()
    {
        return db.Set<T>().ToList();
    }
    public T FillBy(object id)
    {
        return db.Set<T>().Find(id);
    }
}
So why do you need a generic page?
If there are some tasks that are common to all pages, you just write a generic method to do the job. Below is a very simple usage.
Say you have USERS and ORDERS tables in your DbContext:
public class UsersPage : MyPageBase<USERS>
{
    public void Index()
    {
        var filledData = Fill();
    }
}
public class OrdersPage : MyPageBase<ORDERS>
{
    public void Index()
    {
        var filledData = Fill();
    }
}
Of course you could easily do this with
var filledData = db.USERS.ToList();
and you may ask: why all the fuss of implementing the generic methods? Because sometimes there will be more complex scenarios than fetching all the records. Say you have 20+ tables and you decide to fetch only 5 records from each table. Without a generic implementation you now have to go over all 20+ pages and change your code from
var filledData = db.TABLE_TYPE.ToList();
to
var filledData = db.TABLE_TYPE.Take(5).ToList();
With generics, however, you can just fix it in the method below (you could even make it parametric):
public List<T> Fill()
{
return db.Set<T>().Take(5).ToList();
}
and you are safe.
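The "parametric" variant mentioned above might look like this (a sketch; the parameter name and its default value are my assumptions, not part of the original answer):

public List<T> Fill(int count = 5)
{
    // take only the requested number of records from the table for T
    return db.Set<T>().Take(count).ToList();
}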
If you were to use the non-generic implementation of MyPageBase, all of this work would have to be written over and over again.
Of course, writing more and more code gives you experience, but after a while, especially when working on a large-scale program, you want to keep things as simple, understandable and maintainable as possible.
Apologies for my English; I hope this was clear and helps you!

How can this class be designed better?

We have a Web API library that calls into a Business/Service library (where our business logic is located), which in turn calls a Data Access library (Repository).
We use this type of data transfer object all over the place. It has a "Payers" property that we may have to filter (meaning, manipulate its value). I have gone about implementing that check as shown below, but it feels dirty to me, as I'm calling the same function all over the place. I have thought about either:
Using an attribute filter to handle this, or
Making the RequestData a property on the class and doing the filtering in the constructor.
Are there any additional thoughts or design patterns by which this could be designed more efficiently:
public class Example
{
    private MyRepository _repo = new MyRepository();
    private void FilterRequestData(RequestData data)
    {
        //will call into another class that may or may not alter RequestData.Payers
    }
    public List<ReturnData> GetMyDataExample1(RequestData data)
    {
        FilterRequestData(data);
        return _repo.GetMyDataExample1(data);
    }
    public List<ReturnData> GetMyDataExample2(RequestData data)
    {
        FilterRequestData(data);
        return _repo.GetMyDataExample2(data);
    }
    public List<ReturnData> GetMyDataExample3(RequestData data)
    {
        FilterRequestData(data);
        return _repo.GetMyDataExample3(data);
    }
}
public class RequestData
{
List<string> Payers {get;set;}
}
One way of dealing with repeated code like that is to use a strategy pattern with a Func (and potentially some generics, depending on your specific case). You could refactor that into separate classes and everything, but the basic idea looks like this:
public class MyRepository
{
internal List<ReturnData> GetMyDataExample1(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample2(RequestData arg) { return new List<ReturnData>(); }
internal List<ReturnData> GetMyDataExample3(RequestData arg) { return new List<ReturnData>(); }
}
public class ReturnData { }
public class Example
{
private MyRepository _repo = new MyRepository();
private List<ReturnData> FilterRequestDataAndExecute(RequestData data, Func<RequestData, List<ReturnData>> action)
{
// call into another class that may or may not alter RequestData.Payers
// and then execute the actual code, potentially with some standardized exception management around it
// or logging or anything else really that would otherwise be repeated
return action(data);
}
public List<ReturnData> GetMyDataExample1(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample1);
}
public List<ReturnData> GetMyDataExample2(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample2);
}
public List<ReturnData> GetMyDataExample3(RequestData data)
{
// call the shared filtering/logging/exception mgmt/whatever code and pass some additional code to execute
return FilterRequestDataAndExecute(data, _repo.GetMyDataExample3);
}
}
public class RequestData
{
List<string> Payers { get; set; }
}
This sort of thinking naturally leads to aspect-oriented programming.
It's specifically designed to handle cross-cutting concerns (e.g. here, your filter function cuts across your query logic).
As @dnickless suggests, you can do this in an ad-hoc way by refactoring your calls to remove the duplicated code.
More general solutions exist, such as PostSharp, which gives you a slightly cleaner way of structuring code along aspects. It is proprietary, but I believe the free tier gives you enough to investigate an example like this. At the very least it's interesting to see how it would look in PostSharp, and whether you think it improves things at all! (It makes strong use of attributes, which extends the first suggestion in the question.)
(N.B. I'm not seriously suggesting installing another library for a simple case like this, but highlighting how these types of problems might be examined in general.)
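As a rough illustration of the PostSharp flavour (the aspect name is made up, and the exact plumbing depends on the PostSharp version), the filter could be written once as an OnMethodBoundaryAspect and applied as an attribute:

using System;
using System.Collections.Generic;
using PostSharp.Aspects;

//Sketch only: FilterRequestDataAspect is a hypothetical aspect, not code from the answer.
[Serializable]
public class FilterRequestDataAspect : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        //Runs before the decorated method; filter the RequestData argument if present.
        if (args.Arguments.Count > 0 && args.Arguments[0] is RequestData data)
        {
            //call into the class that may or may not alter data.Payers
        }
    }
}

public class Example
{
    private MyRepository _repo = new MyRepository();

    [FilterRequestDataAspect]
    public List<ReturnData> GetMyDataExample1(RequestData data)
    {
        //the filtering now happens in the aspect, not in the method body
        return _repo.GetMyDataExample1(data);
    }
}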

Is it possible to get the calling instance inside a method?

The general reason I want to do this is:
class MovieApiController : ApiController
{
    public string CurrentUser { get; set; }
    // ...
    public string Index()
    {
        return Resources.GetText("Color");
    }
}
class Resources
{
    public static string GetText(string id)
    {
        var caller = ??? as MovieApiController;
        if (caller != null && caller.CurrentUser == "Bob")
        {
            return "Red";
        }
        else
        {
            return "Blue";
        }
    }
}
I don't need this to be 100% dependable. It seems like the callstack should have this information, but StackFrame doesn't seem to expose any information about the specific object on which each frame executes.
It is generally a bad idea for a method to try to "sniff" its surroundings, and produce different results based on who is making the call.
A better approach is to make your Resources class aware of whatever it needs to know in order to make its decision, and configure it in a place where all relevant information is known, for example
class MovieApiController : ApiController {
private string currentUser;
private Resources resources;
public string CurrentUser {
get {
return currentUser;
}
set {
currentUser = value;
resources = new Resources(currentUser);
}
}
// ...
public string Index() {
return resources.GetText("Color");
}
}
class Resources {
private string currentUser;
public Resources(string currentUser) {
this.currentUser = currentUser;
}
public string GetText(string id) {
if (currentUser == "Bob") {
return "Red";
} else {
return "Blue";
}
}
}
CurrentUser should be available at HttpContext.Current.User and you can leave your controller out of the resource class.
It seems like the callstack should have this information,
Why? The call stack indicates what methods were called to get where you are; it does not have any information about instances.
Rethink your parameters by deciding what information the method needs to do its job. Reaching outside of the class (e.g. by using the call stack or taking advantage of static members like HttpContext.Current) limits the re-usability of your code.
From what you've shown, all you need is the current user name (you don't even show where you use the id value). If you want to return different things based on what's passed in, then maybe you need separate methods?
As a side note, the optimizer has a great deal of latitude in reorganizing code to make it more efficient, so there are no guarantees that the call stack even contains what you think it should from the source code.
Short answer - you can't, short of creating a custom controller factory that stores the current controller as a property of the current HttpContext, and even that could prove unpredictable.
But it's really not good for a class to behave differently by attempting to inspect its caller. When a method depends on another class it needs to get the correct behavior by depending on the right class, calling the right method, and passing the right parameters.
So in this case you could:
1.) have a parameter that you pass to GetText that tells it what it needs to know in order to return the correct string,
2.) create a more specific version of the Resources class that does what you need, or
3.) declare
public interface IResources
{
string GetText(string id);
}
and have multiple classes that implement IResources, using dependency injection to provide the correct implementation to this controller. Ideally that's the best scenario. MovieApiController doesn't know anything about the implementation of IResources; it just knows that there's an instance of IResources that will do what it needs. And the Resources class doesn't know anything about what is calling it; it behaves the same no matter what calls it.
That would look like this:
public class MovieApiController : ApiController
{
private readonly IResources _resources;
public MovieApiController(IResources resources)
{
_resources = resources;
}
public string Index()
{
return _resources.GetText("Color");
}
}
Notice how the controller doesn't know anything about the Resources class. It just knows that it has something that implements IResources and it uses it.
If you're using ASP.NET Core then dependency injection is built in. (There's some good reading in there on the general concept.) If you're using anything older then you can still add it in.
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-dependency-injection - This has a picture that is worth 1000 words for describing the concept.
http://www.c-sharpcorner.com/UploadFile/dacca2/implement-ioc-using-unity-in-mvc-5/
Some of these recommend understanding "inversion of control" first. You might find it easier to just implement something according to the example without trying to understand it first. The understanding comes when you see what it does.
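If it helps to see the ASP.NET Core flavour concretely, the wiring is just a registration in ConfigureServices. This is a sketch; UserAwareResources is a hypothetical implementation of IResources, not a class from the question:

//Startup.cs (ASP.NET Core) - sketch only.
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    //register the hypothetical implementation; the container then injects it into MovieApiController
    services.AddScoped<IResources, UserAwareResources>();
}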

Refactoring a C# Save Command Handler

I have the following command handler. The handler takes a command object and uses its properties to either create or update an entity.
It decides this by the Id property on the command object, which is nullable: if it is null, create; if not, update.
public class SaveCategoryCommandHandler : ICommandHandler<SaveCategoryCommand>
{
public SaveCategoryCommandHandler(
ICategoryRepository<Category> categoryRepository,
ITracker<User> tracker,
IMapProcessor mapProcessor,
IUnitOfWork unitOfWork,
IPostCommitRegistrator registrator)
{
// Private fields are set up. The definitions for the fields have been removed for brevity.
}
public override void Handle(SaveCategoryCommand command)
{
// The only thing here that is important to the question is the below ternary operator.
var category = command.Id.HasValue ? GetForUpdate(command) : Create(command);
// Below code is not important to the question. It is common to both create and update operations though.
MapProcessor.Map(command, category);
UnitOfWork.Commit();
Registrator.Committed += () =>
{
command.Id = category.Id;
};
}
private Category GetForUpdate(SaveCategoryCommand command)
{
// Category is retrieved and tracking information added
}
private Category Create(SaveCategoryCommand command)
{
// Category is created via the ICategoryRepository and some other stuff happens too.
}
}
I used to have two handlers, one for creating and one for updating, along with two commands for creating and updating. Everything was wired up using IoC.
After refactoring into one class to reduce the amount of code duplication, I ended up with the above handler class. Another motivation for refactoring was to avoid having two commands (UpdateCategoryCommand and CreateCategoryCommand), which was leading to more duplication with validation and similar.
One example of this was having to have two validation decorators for what were effectively the same command (as they differed only by having an Id property). The decorators did use inheritance, but it is still a pain when there are a lot of commands to deal with.
There are a few things that bug me about the refactored handler.
One is the number of dependencies being injected. Another is that there is a lot going on in the class. The ternary bothers me - it seems like a bit of a code smell.
One option is to inject some sort of helper class into the handler. This could implement some sort of ICategoryHelper interface with concrete Create and Update implementations. This would mean the ICategoryRepository and ITracker dependencies could be replaced with a single dependency on ICategoryHelper.
The only potential issue is that this would require some sort of conditional injection from the IoC container based on whether the Id field on the Command was null or not.
I am using SimpleInjector and am unsure of the syntax of how to do this or even if it can be done at all.
Is doing this via IoC also a smell, or should it be the handler's responsibility?
Are there any other patterns or approaches for solving this problem? I had thought a decorator could possibly be used but I can't really think of how to approach doing it that way.
My experience is that having two separate commands (SaveCategoryCommand and UpdateCategoryCommand) with one command handler gives the best results (although two separate command handlers might sometimes be okay as well).
The commands should not inherit from a CategoryCommandBase base class, but instead the data that both commands share should be extracted to a DTO class that is exposed as a property on both classes (composition over inheritance). The command handler should implement both interfaces and this allows it to contain shared functionality.
[Permission(Permissions.CreateCategories)]
class SaveCategory {
[Required, ValidateObject]
public CategoryData Data;
// Assuming name can't be changed after creation
[Required, StringLength(50)]
public string Name;
}
[Permission(Permissions.ManageCategories)]
class UpdateCategory {
[NonEmptyGuid]
public Guid CategoryId;
[Required, ValidateObject]
public CategoryData Data;
}
class CategoryData {
[NonEmptyGuid]
public Guid CategoryTypeId;
[Required, StringLength(250)]
public string Description;
}
Having two commands works best because, when every action has its own command, it is easier to log them and you can give them different permissions (using attributes, for instance, as shown above). Having a shared data object works best because it allows you to pass it around in the command handler and allows the view to bind to it. And inheritance is almost always ugly.
class CategoryCommandHandler :
    ICommandHandler<SaveCategory>,
    ICommandHandler<UpdateCategory> {
    private readonly ICategoryRepository<Category> repository;
    public CategoryCommandHandler(ICategoryRepository<Category> repository) {
        this.repository = repository;
    }
    public void Handle(SaveCategory command) {
        var c = new Category { Name = command.Name };
        UpdateCategory(c, command.Data);
        // persist the new category here (e.g. add it to the repository / unit of work)
    }
    public void Handle(UpdateCategory command) {
        var c = this.repository.GetById(command.CategoryId);
        UpdateCategory(c, command.Data);
    }
    private void UpdateCategory(Category cat, CategoryData data) {
        cat.CategoryTypeId = data.CategoryTypeId;
        cat.Description = data.Description;
    }
}
Do note that CRUDy operations will always result in solutions that seem less clean than task-based operations. That's one of the many reasons I push developers and requirement engineers to think about the tasks they want to perform. This results in better UI, greater UX, more expressive audit trails, more pleasant design, and better overall software. But some parts of your application will always be CRUDy, no matter what you do.
I think you can separate this command into two well-defined commands, e.g. CreateCategory and UpdateCategory (of course, you should choose the most appropriate names). Also, design both commands via the Template Method design pattern: in the base class you define a protected abstract method for obtaining the category, call that method from Handle, and then run the remaining logic of the original Handle method:
public abstract class %YOUR_NAME%CategoryBaseCommandHandler<T> : ICommandHandler<T>
{
public override void Handle(T command)
{
var category = LoadCategory(command);
MapProcessor.Map(command, category);
UnitOfWork.Commit();
Registrator.Committed += () =>
{
command.Id = category.Id;
};
}
protected abstract Category LoadCategory(T command);
}
In the derived classes you just override the LoadCategory method.
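For illustration, the two derived handlers might look roughly like this. The concrete class names, the base class name (CategoryBaseCommandHandler in place of the %YOUR_NAME% placeholder), the command types and the GetById repository call are all assumptions, since the question elides those details:

//Sketch only.
public class CreateCategoryCommandHandler : CategoryBaseCommandHandler<CreateCategory>
{
    protected override Category LoadCategory(CreateCategory command)
    {
        //build the new Category (the question does this via ICategoryRepository)
        return new Category();
    }
}

public class UpdateCategoryCommandHandler : CategoryBaseCommandHandler<UpdateCategory>
{
    private readonly ICategoryRepository<Category> categoryRepository; //assumed injected, as in the question's handler

    public UpdateCategoryCommandHandler(ICategoryRepository<Category> categoryRepository)
    {
        this.categoryRepository = categoryRepository;
    }

    protected override Category LoadCategory(UpdateCategory command)
    {
        //retrieve the existing Category and add tracking information; GetById is an assumed repository method
        return categoryRepository.GetById(command.Id);
    }
}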

DataContext in static class in desktop application

I am using a 3-tier architecture in my WinForms application, so I have a static class which handles the equipment operations:
public static class Equipments
{
    public static void AddEquipment(string name, decimal dimLength)
    {
        DBClassesDataContext db = new DBClassesDataContext();
        Equipment equipment = new Equipment();
        equipment.Name = name;
        equipment.DimLength = dimLength;
        db.Equipments.InsertOnSubmit(equipment);
        db.SubmitChanges();
    }
    public static void UpdateEquipment(int equipmentID, string name, decimal dimLength)
    {
        DBClassesDataContext db = new DBClassesDataContext();
        Equipment oldEquipment;
        oldEquipment = db.Equipments.Where(e => e.EquipmentID == equipmentID).SingleOrDefault();
        oldEquipment.Name = name;
        oldEquipment.DimLength = dimLength;
        db.SubmitChanges();
    }
}
So my questions are:
Do I need to create an instance of DBClassesDataContext in each method? When I made a global static DBClassesDataContext it didn't work correctly.
Is there any better way to handle DBClassesDataContext instead of creating it each time inside the method (i.e. creating a new DBClassesDataContext each time I run a method from this class)?
Thanks
Do I need to create an instance of DBClassesDataContext in each method?
You should do, absolutely - just like you should normally create a new SqlConnection each time you want to access the database in non-LINQ code. In general, avoid global state - it's almost always a bad idea.
Is there any better way to handle DBClassesDataContext instead of creating it each time inside the method?
No - that's exactly the right approach. Why would you try to avoid just creating it each time?
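To make the "new context each time" idiom explicit - this sketch is mine, not part of the original answer - the per-operation context is usually wrapped in a using block so it is disposed when the operation finishes:

//Sketch only: same types as in the question, with the context disposed per operation.
public static void AddEquipment(string name, decimal dimLength)
{
    using (var db = new DBClassesDataContext())
    {
        var equipment = new Equipment { Name = name, DimLength = dimLength };
        db.Equipments.InsertOnSubmit(equipment);
        db.SubmitChanges();
    }
}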
Even though I'll probably get stoned to death for disagreeing with the Jon Skeet, I'll post this anyway.
You definitely don't need to create the instance in every single method, or at least not like this. There's a principle I like to follow called DRY - don't repeat yourself - and repeating the same line over and over, when it can be avoided, clearly violates this principle.
You have multiple options here:
1.) define the methods as instance methods, maybe something like this:
internal class MyDbActions
{
    private MyDbContext _myDbContext;
    private MyDbContext Db
    {
        get
        {
            if (_myDbContext == null) _myDbContext = new MyDbContext();
            return _myDbContext;
        }
    }
    internal void Add(SomeClass c)
    {
        Db.Table.AddObject(c);
        Db.SubmitChanges();
        Db.Dispose();
        _myDbContext = null; // reset so the next call gets a fresh context instead of a disposed one
    }
}
Or something like that, you get the idea. This can be modified to whatever you need.
2.) you can use dependency injection for your methods, so consider something like this:
public static class Equipments
{
public static void AddEquipment(DBClassesDataContext db, string name, decimal dimLength)
{
Equipment equipment = new Equipment();
equipment.Name = name;
equipment.DimLength = dimLength;
db.Equipments.InsertOnSubmit(equipment);
db.SubmitChanges();
}
}
You'd manage your datacontext outside this class.
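For completeness, a caller using that second option might look like this (the argument values are made up for illustration):

//Sketch: the caller owns and disposes the context.
using (var db = new DBClassesDataContext())
{
    Equipments.AddEquipment(db, "Crane", 12.5m);
}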
3.) you can utilize the Repository pattern, the Unit of Work pattern and IoC. I won't post the example code here because it's quite lengthy, but here's one link to give you an idea:
Repository pattern with Linq to SQL using IoC, Dependency Injection, Unit of Work
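To give just a rough idea of the shape that third option takes - a minimal sketch with made-up names (IEquipmentRepository, EquipmentService), far short of the full pattern in the linked article:

//Sketch only: names and members are assumptions for illustration.
public interface IEquipmentRepository
{
    void Add(Equipment equipment);
    Equipment GetById(int equipmentID);
}

public class EquipmentService
{
    private readonly IEquipmentRepository _repository;

    public EquipmentService(IEquipmentRepository repository)
    {
        _repository = repository;
    }

    public void AddEquipment(string name, decimal dimLength)
    {
        //the repository (and its data context) is injected, so this class never creates one itself
        _repository.Add(new Equipment { Name = name, DimLength = dimLength });
    }
}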
