I'm working on a C# project that has several tiers with data type encapsulations. Whenever I add a field to a model in a top-level layer (say, an Application Service), I need to remember everywhere else I have to make changes to keep my application working properly.
I'm looking for a pattern or method to prevent the logical errors that follow from not updating my mapping classes. I think if I can require my mapping classes to resolve newly added fields (for example, by throwing an exception if they're not resolved), the problem will be solved.
Any ideas for a solution, or for how I can implement my own idea?
You can use a library like AutoMapper, which will give you an error if not all properties are mapped properly (http://docs.automapper.org/en/stable/Configuration-validation.html); plus, it saves you from writing all the code to map each object.
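For example, a minimal sketch of that validation (UserEntity and UserDto are hypothetical stand-ins for your own types):

using AutoMapper;

// Hypothetical types standing in for your own layers.
public class UserEntity { public string Name { get; set; } public string Email { get; set; } }
public class UserDto { public string Name { get; set; } public string Email { get; set; } }

public static class MappingConfig
{
    // Typically called once at startup, or from a unit test.
    public static MapperConfiguration Build()
    {
        var config = new MapperConfiguration(cfg => cfg.CreateMap<UserEntity, UserDto>());

        // Throws AutoMapperConfigurationException if any UserDto property
        // has no matching source member -- e.g. after you add a field to
        // UserDto but forget to update the mapping.
        config.AssertConfigurationIsValid();
        return config;
    }
}

Running AssertConfigurationIsValid from a unit test gives you exactly the "fail loudly on unmapped fields" behaviour you described.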
If you don't want to use a library, make sure to wrap the mappings in factories so that, at least, the code is centralised and easily discoverable, but that's still error prone. Using constructors instead of object initialisers also helps you find mappings at compile time.
Related
I'm currently working on a project with a team, and we have come across a design scenario that we are trying to find a solution for.
Background Info of Current Project Implementation:
This problem involves three main projects in our solution: Repository, Models, and Services. In case it isn't obvious, the purpose of each project is as follows. The Models project contains models of all the data we store in our database. The Repository project has one main database access class that uses generics to interact with different tables depending on the model passed in. Lastly, the Services project contains classes that pass data between the front-end and the repository; in general, each service class maps 1-to-1 to a model class. As one would expect, the build dependencies are: Repository depends on Models, and Services depends on both projects.
The Issue:
The current issue we are encountering is that we need a way to ensure that if a developer attempts to query or interact with a specific type of object (call it ModelA), a specific set of filters is always included by default (these filters are partially based on whether a particular user has permission to view certain objects in a list). A developer should be able to override this filter.
What we want to avoid doing is having an if clause in the repository classes that says "if you're updating this model type, add these filters".
Solutions we have thought of / considered:
One solution we are currently considering is having a function in ServiceA (the service corresponding to ModelA) that appends these filters to a given query, and then requiring that anyone who requests the db context for a model passes in a function that manipulates filtering in some fashion (in other words, to interact with ModelA, they would pass in the filter function from ServiceA). The issue with this solution is that a developer always needs to be aware that whenever they interact with ModelA, they must pass in the function from ServiceA. Also, because we don't want every model to enforce certain filter options, we would likely make this an optional parameter, which might then cause issues where developers simply forget to include this function when interacting with ModelA.
Another solution we considered is to have an attribute (let's call it DefaultFilterAttribute) on ModelA that stores a class type that should implement a particular interface (called IFilterProvider). ServiceA would implement this interface, and ModelA's attribute would be given ServiceA as a type. Then the repository methods can check if the entity passed in has a DefaultFilterAttribute on it, and then simply call the method implemented by the class attached to the attribute. Unfortunately, as some of you might have noticed, the way our project dependencies are currently set up, we can't really implement a solution like this.
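Roughly, the sketch we had in mind (all names illustrative):

using System;
using System.Linq;

// Sketch only: for the Repository project to use this, IFilterProvider
// would have to live in (or below) the Models project -- which is
// exactly the dependency problem described above.
public interface IFilterProvider<T>
{
    IQueryable<T> ApplyDefaultFilters(IQueryable<T> query);
}

[AttributeUsage(AttributeTargets.Class)]
public sealed class DefaultFilterAttribute : Attribute
{
    public Type ProviderType { get; private set; }

    public DefaultFilterAttribute(Type providerType)
    {
        ProviderType = providerType;
    }
}

// On the model: [DefaultFilter(typeof(ServiceA))]
// In the generic repository, before running a query:
//
//   var attr = (DefaultFilterAttribute)Attribute.GetCustomAttribute(
//       typeof(TModel), typeof(DefaultFilterAttribute));
//   if (attr != null)
//   {
//       var provider = (IFilterProvider<TModel>)Activator.CreateInstance(attr.ProviderType);
//       query = provider.ApplyDefaultFilters(query);
//   }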
So I'm wondering if there is a clean solution to this problem, or if potentially we are thinking about the problem and/or design pattern incorrectly, and should be taking a completely different approach.
I think you're making this unnecessarily complex. What you're describing is pretty much the entire purpose of a service layer. Presumably, you'd have something like GetModelAList (that's actually a pretty bad method for a service, but just for illustration). The logic of applying certain filters automatically, then, is encapsulated in that method. The application doesn't know or care how that data is retrieved; it just knows if it wants a list of ModelA instances, it calls that method.
If you then want a way to not apply those filters, you can provide another method such as GetModelAListUnfiltered, or pass a boolean into the original method that determines whether filters are automatically applied. Really you can handle this however you want, but the point is that it's all encapsulated in your service.
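For illustration, a minimal sketch of that (ModelA, User, and the filter predicate are hypothetical placeholders):

using System.Collections.Generic;
using System.Linq;

// Hypothetical placeholder types standing in for your real ones.
public class User { public int Id { get; set; } public bool CanViewAll { get; set; } }
public class ModelA { public int Id { get; set; } public int OwnerId { get; set; } }

public class ModelAService
{
    private readonly IQueryable<ModelA> _modelAs; // however the repository exposes them

    public ModelAService(IQueryable<ModelA> modelAs) { _modelAs = modelAs; }

    // The default entry point: permission-based filters are always applied.
    public List<ModelA> GetModelAList(User currentUser)
    {
        return ApplyDefaultFilters(_modelAs, currentUser).ToList();
    }

    // Explicit, discoverable escape hatch for cases that must override the defaults.
    public List<ModelA> GetModelAListUnfiltered()
    {
        return _modelAs.ToList();
    }

    private static IQueryable<ModelA> ApplyDefaultFilters(IQueryable<ModelA> query, User user)
    {
        // Placeholder predicate: only rows the user is allowed to see.
        return query.Where(m => user.CanViewAll || m.OwnerId == user.Id);
    }
}

Callers who want ModelA data go through the service, so nobody has to remember to pass a filter function in.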
Lastly, you haven't specified exactly what your repository is doing, but a repository should be extremely simple, really just returning sets of objects. Logic like what you're talking about belongs in a service layer. However, even that only applies if you're doing direct database access using something like Dapper or ADO.NET. If you're using a full-fledged ORM like Entity Framework, throw away your repository layer entirely. Yes, you heard me correctly: throw it away completely. A repository wrapped around an ORM is a useless abstraction, only serving to add more code that needs to be maintained and tested for no good reason whatsoever. The benefits some level of abstraction gives you are already provided by the service layer.
I have a VS2015 solution with a webUI project and a class lib project which acts as the domain. The class lib holds nothing more than 20 EF DB First generated classes (edmx model) and also 20 repos which act on these classes. When I need to change the underlying db, I throw away the edmx model and regenerate it. One of these classes is Domain.DbEntities.plc. My webUI references this domain lib.
After some time I added an extra project, PlcCommunicator, to the solution, which has a reference to the Domain lib and has methods, some accepting Domain.DbEntities.plc as a parameter and some returning wrapper classes which also use Domain.DbEntities.plc. My webUI project references the PlcCommunicator project and everything works fine.
The solution is growing larger, and over time I added more projects to it, all referring to and using the same Domain lib. But now I have added another project called PlcMonitoringLogger, and I decided to create another, smaller domain, just a subset of the main domain, which holds 5 classes which are all also just EF DB First generated edmx classes, generated on the same db as the main domain. One of these classes is PlcMonitoringDomain.DbEntities.plc. (Note the difference from Domain.DbEntities.plc.)
Now I need my PlcMonitoringLogger project to use the PlcCommunicator project. But PlcCommunicator works with Domain.DbEntities.plc, and PlcMonitoringLogger only knows PlcMonitoringDomain.DbEntities.plc. So there is the problem I face. I can change the parameters of the PlcCommunicator methods to accept plc IDs instead of Domain.DbEntities.plc objects, and also just return plc IDs instead of Domain.DbEntities.plc objects. However, is this the right approach? Are there any pitfalls? What are the pros and cons? Another solution might be to create a base plc class, but this doesn't seem right. I want to decouple things from each other, and creating base classes just doesn't feel right.
I read some stuff about bounded contexts, but I can't and don't want to change all my existing projects to use this design pattern right away, not least because I have no experience with it yet and it's hard for a beginner. I think taking "baby steps" toward using some aspects of bounded contexts is the best approach, not doing total rebuilds!
So if anybody has some ideas on this topic or something useful to say, please respond!
I'm not sure what you were trying to achieve by creating a subdomain, but the consequence is that the two domains are not interchangeable. So if combining components would result in a domain mix-up, you cannot do it.
IMHO, the solution is to get rid of the sub-domain and integrate it into the current main domain. Any project/component using the domain can then reference other components that use the domain as well, without problems: no mix-up of multiple domains.
Let's say I have a project where I use Entity Framework, but I want to use my own classes instead of the EF classes.
Reasons for using my own classes:
Easy to add properties in code
Easy to derive and inherit
Less binding to the database
Now, my database has table names like User and Conference.
However, in my domain project, I also call my files User.cs and Conference.cs.
That means I suddenly have two classes with the same name, which is usually very annoying to work with, because you have to qualify them with namespaces all the time to tell them apart.
My question is how to solve this problem?
My ideas:
Prefix all database tables with 'db'. I usually do this, but in this case, I cannot change the database
Prefix or postfix all C# classes with "Poco" or something similar
I just don't like any of my ideas.
How do you usually do this?
It's difficult to tell without more background, but it sounds like you are using the Entity Framework designer to generate EF classes. This is known as the "Model First" workflow (or "Database First" when the classes are generated from an existing database). Have you considered using the Code First / Code Only workflow? When doing code first you can have POCO classes that have no knowledge of the database, EF, or data annotations. The mapping between the database and your POCOs can be done externally in the DbContext or in EntityTypeConfiguration classes.
You should be able to achieve your goal of decoupling from EF with just one set of objects via code first.
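For illustration, a minimal EF6-style sketch (names hypothetical): the POCO stays clean, and the table mapping lives in an EntityTypeConfiguration registered with the DbContext.

using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Plain POCO: no EF attributes, no base class.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// All database knowledge lives here, not on the class itself.
public class UserConfiguration : EntityTypeConfiguration<User>
{
    public UserConfiguration()
    {
        ToTable("User"); // maps to the existing table name
        HasKey(u => u.Id);
        Property(u => u.Name).HasMaxLength(100);
    }
}

public class MyDbContext : DbContext
{
    public DbSet<User> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new UserConfiguration());
    }
}

Because the table name only appears in the configuration class, the C# class and the table can share the name User without clashing.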
To extend the above answer, the database table name User (or Users as many DB designers prefer) is the identifier for the persistence store for the object User that's defined in your code file User.cs. None of these identifiers share the same space, so there should be no confusion. Indeed, they are named similarly to create a loose coupling across spaces (data store, code, development environment) so you can maintain sanity and others can read your code.
I am new to NHibernate/FluentNHibernate. I use FNH for my coding now, as I find it easier to use. However, I am working with an existing code base written in NHibernate. Today I found a bug where the database wasn't getting updated as expected. After about 30 minutes I found out that I hadn't updated the mapping XML even though I had added a new class variable, so that row in the table wasn't getting updated. My question is: is there a way to easily identify such incomplete mappings with NHibernate, so that I don't have to manually check the mappings every time something goes wrong? E.g. a warning message if I am updating an object which has non-default data for any fields which aren't mapped?
Take a look at the PersistenceSpecification class in FluentNHibernate: http://wiki.fluentnhibernate.org/Persistence_specification_testing
You could wrap this up using reflection to test every property if that makes sense for your system.
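For example, a minimal sketch of such a test (the User entity and its properties are hypothetical):

using FluentNHibernate.Testing;
using NHibernate;

// Hypothetical entity; in practice this is one of your mapped classes.
public class User
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual string Email { get; set; }
}

public static class UserMappingTests
{
    // Call from a unit test with an open ISession; it inserts, reloads,
    // and compares every checked property, failing on any mismatch.
    public static void UserMappingRoundTrips(ISession session)
    {
        new PersistenceSpecification<User>(session)
            .CheckProperty(u => u.Name, "Jane")
            .CheckProperty(u => u.Email, "jane@example.com")
            .VerifyTheMappings();
    }
}

A property you added to the class but forgot to map will fail the round trip, which is exactly the early warning you're after.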
You could also try to use the NHibernate mapping metadata and search for unmapped properties via reflection in a unit test.
By using the metadata, it is transparent to your application whether you are using Fluent NHibernate or other means to create the NHibernate mapping.
If you test your mappings in unit tests, you will know at test time, not at application startup, whether your mappings are alright.
This question seems to be related and this shows how to query the metadata.
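A rough sketch of such a unit-test helper (assuming you have an ISessionFactory available in the test):

using System;
using System.Linq;
using System.Reflection;
using NHibernate;

// Compares an entity's public properties against what the NHibernate
// metadata says is mapped, regardless of whether the mapping came from
// XML or Fluent NHibernate.
public static class MappingCoverage
{
    public static string[] FindUnmappedProperties(ISessionFactory factory, Type entityType)
    {
        var metadata = factory.GetClassMetadata(entityType);
        var mapped = metadata.PropertyNames.ToList();
        if (metadata.IdentifierPropertyName != null)
            mapped.Add(metadata.IdentifierPropertyName);

        return entityType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Select(p => p.Name)
            .Where(name => !mapped.Contains(name))
            .ToArray(); // assert this is empty in your unit test
    }
}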
The bug where the database did not get updated can be caused by issues other than an unmapped field or property. There may be other mapping mistakes that are impossible to catch using reflection. What if you used the wrong cascade or the wrong ID generator? Or forgot an association mapping?
If you want to catch the majority of mapping issues, you should create an integration test that executes against a real or in-memory database. A good overview of this approach is here.
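A minimal sketch of such a round-trip test, assuming you already have an ISessionFactory wired to a real or in-memory (e.g. SQLite) database with the schema created; Order is a hypothetical mapped entity:

using NHibernate;

public class Order
{
    public virtual int Id { get; set; }
    public virtual string Number { get; set; }
}

public static class OrderPersistenceTests
{
    public static void SavedOrderComesBackIntact(ISessionFactory factory)
    {
        object id;
        using (var session = factory.OpenSession())
        {
            id = session.Save(new Order { Number = "ORD-1" });
            session.Flush(); // force the INSERT so mapping mistakes surface here
        }

        // A second session guarantees we actually hit the database,
        // not the first session's cache.
        using (var session = factory.OpenSession())
        {
            var loaded = session.Get<Order>(id);
            // Assert.AreEqual("ORD-1", loaded.Number); -- with your test framework
        }
    }
}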
I'm relatively new to NHibernate, but have been using it for the last few programs and I'm in love. I've come to a situation where I need to aggregate data from 4-5 databases into a single database. Specifically it is serial number data. Each database will have its own mapping file, but ultimately the entities all share the same basic structure (Serial class).
I understand NHibernate wants a mapping per class, and so my initial thought was to have a base Serial Class and then inherit from it for each different database and create a unique mapping file (the inherited class would have zero content). This should work great for grabbing all the data and populating the objects. What I would then like to do is save these inherited classes (not sure what the proper term is) to the base class table using the base class mapping.
The problem is I have no idea how to force NHibernate to use a specific mapping file for an object. Casting the inherited class to the base class does nothing when using session.Save() (it complains of no mapping).
Is there a way to explicitly specify which mapping to use? Or is there just some OOP principle I am missing to more specifically cast an inherited class to the base class? Or is this idea just a bad one?
All of the inheritance stuff I could find with regards to NHibernate (Chapter 8) doesn't seem to be totally applicable to this function, but I could be wrong (the table-per-concrete-class looks maybe useful, but I can't wrap my head around it totally with regards to how NHibernate figures out what to do).
I don't know if this'll help, but I wouldn't be trying to do that, basically.
Essentially, I think you're possibly suffering from "golden hammer" syndrome: when you have a REALLY REALLY nice hammer (i.e. NHibernate, and I share your opinion on it; it's a MAGNIFICENT tool), everything looks like a nail.
I'd generally try to have a "manual conversion" class, i.e. one which has constructors that take the NHibernate classes for your individual Serial classes and simply copy the data over to its own specific format; then NHibernate can serialize it to the (single) database using its own mapping.
Effectively, the reason I think this is a better solution is that what you're trying to do is have asymmetric serialization in your class; i.e. read from one database in your derived class, write to another database in your base class. Nothing too horrible about that, really, except that it's fundamentally a unidirectional process; if you really want conversion from one database to the other, simply do the conversion and be done with it.
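A minimal sketch of what I mean (all names hypothetical):

// Hypothetical per-database classes with slightly different shapes.
public class DatabaseASerial
{
    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }
}

public class DatabaseBSerial
{
    public virtual string SerialNo { get; set; }
    public virtual string Item { get; set; }
}

// The "manual conversion" class: one constructor per source type, and its
// own (single) NHibernate mapping for the aggregate database.
public class AggregatedSerial
{
    public virtual int Id { get; set; }
    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }

    public AggregatedSerial() { } // required by NHibernate

    public AggregatedSerial(DatabaseASerial source)
    {
        SerialNumber = source.SerialNumber;
        ItemNumber = source.ItemNumber;
    }

    public AggregatedSerial(DatabaseBSerial source)
    {
        // Naming differences between databases are absorbed here.
        SerialNumber = source.SerialNo;
        ItemNumber = source.Item;
    }
}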
This might help:
Using NHibernate with Multiple Databases
From the article:
Introduction
...
described using NHibernate with ASP.NET; it offered guidelines for communicating with a single database. But it is sometimes necessary to communicate with multiple databases concurrently. For NHibernate to do this, a session factory needs to exist for each database that you will be communicating with. But, as is often the case with multiple databases, some of the databases are rarely used. So it may be a good idea to not create session factories until they're actually needed. This article picks up where the previous NHibernate with ASP.NET article left off and describes the implementation details of this simple-sounding approach. Although the previous article focused on ASP.NET, the below suggestion is supported in both ASP.NET and .NET.
...
The first thing to do when working with multiple databases is to configure proper communications. Create a separate config file for each database, put them all into a central config folder, and then reference them from the web/app.config.
...
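To illustrate the approach from the article, a minimal sketch caching one lazily-built session factory per config file (the file names are hypothetical):

using System.Collections.Concurrent;
using NHibernate;
using NHibernate.Cfg;

public static class SessionFactories
{
    private static readonly ConcurrentDictionary<string, ISessionFactory> Cache =
        new ConcurrentDictionary<string, ISessionFactory>();

    public static ISessionFactory For(string configFile)
    {
        // Building a session factory is expensive, so each one is
        // created on first use and then cached.
        return Cache.GetOrAdd(configFile, file =>
            new Configuration().Configure(file).BuildSessionFactory());
    }
}

// Usage:
// using (var session = SessionFactories.For("Config/DatabaseA.cfg.xml").OpenSession())
// {
//     // query database A
// }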
I'm not 100% sure this will do what I need, but I found this googling today about NHibernate and anonymous types:
http://infozerk.com/averyblog/refactoring-using-object-constructors-in-hql-with-nhibernate/
The interesting part (to me, I'm new to this) is the 'new' keyword in the HQL select clause. So what I could do is select the SerialX from DatabaseX using mappingX, and pass it to a constructor for SerialY (the general/base Serial). So now I have SerialY generated from mappingX/databaseX, and (hopefully) I could then session.save and NHibernate will use mappingY/databaseY.
The reason I like this is simply not having two classes with the same data persisted (I think!). There is really no functional difference between this and returning a list of SerialX, iterating through it and generating SerialY and adding it to a new list (the first and best answer given).
This doesn't have the more general benefit of making useful cases for NHibernate mappings with inheritance, but I think it will do the limited stuff I want.
While it's true you will need a mapping file/class for each of those tables, there's nothing that stops you from making all of those classes implement a common interface.
You can then aggregate them all together into a single collection in your application layer (i.e. a List<ISerial>, where each of those classes implements the ISerial interface).
You will probably have to write some plumbing to keep track of which session to store each object under (since you're targeting multiple databases) if you wish to do updates. But the process for doing that will vary based on how you have things set up.
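A minimal sketch of that (interface and class names hypothetical):

using System.Collections.Generic;
using NHibernate;

// Common interface implemented by each per-database class.
public interface ISerial
{
    string SerialNumber { get; set; }
}

public class SerialA : ISerial { public virtual string SerialNumber { get; set; } }
public class SerialB : ISerial { public virtual string SerialNumber { get; set; } }

public static class SerialAggregator
{
    // One session per source database; results land in a single list.
    // (AddRange works here via IEnumerable<T> covariance, C# 4+.)
    public static IList<ISerial> LoadAll(ISession sessionA, ISession sessionB)
    {
        var all = new List<ISerial>();
        all.AddRange(sessionA.CreateCriteria<SerialA>().List<SerialA>());
        all.AddRange(sessionB.CreateCriteria<SerialB>().List<SerialB>());
        return all;
    }
}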
I wrote a really long post with code and everything to respond to Dan. In the end, I think I missed the obvious.
public class Serial
{
    public virtual string SerialNumber { get; set; }
    public virtual string ItemNumber { get; set; }
    public virtual string OrderNumber { get; set; }
}
...
// sessionX comes from database X's factory, sessionY from database Y's.
Serial serial = sessionX.Get<Serial>(someID);
sessionY.Save(serial);
NHibernate should use mappingX for the Get and mappingY for the Save, since the sessions aren't being shared and the mapping is tied to the session. So I can have two mappings pointing to the same class, because in any particular session there is only a single mapping-to-class relationship.
At least I think that is the case (can't test atm).
Unfortunately this specific case is really boring and not useful. In a different program in the same domain I derive from the base class for a specific portion of business logic. I didn't want to create a mapping file, since it was just to make a small chunk of code easier. Anyway, I couldn't make it work in NHibernate for the same reasons as in my first question, and ended up using the method McWafflestix describes to get around it (since it was minor).
That said I have found this via google:
http://jira.nhibernate.org/browse/NH-662
That is exactly the same situation, and it appears (possibly) addressed in NH 2.1+? I haven't followed up on it yet.
(Note: Dan, in my case I am getting from several DBs and only writing to one. I'm still interested in your suggestion about the interface because I think that is a good idea for other cases. Would you define the mapping against the interface? If I try to save a class that implements the interface but doesn't have a mapping definition, would NHibernate use the interface mapping? Or would I have to declare empty subclasses in the mapping for each class that implements the interface?)