Communication between classes of multiple domains on the same underlying DB table - C#

I have a VS2015 solution with a webUI project and a class lib project which acts as the domain. The class lib holds nothing more than 20 EF DB First generated classes (edmx model) and 20 repos which act on these classes. When I need to change the underlying db I throw away the edmx model and regenerate it. One of these classes is Domain.DbEntities.plc. My webUI references this domain lib.
After some time I added an extra project, PlcCommunicator, to the solution, which has a reference to the Domain lib and has methods, some accepting Domain.DbEntities.plc as a parameter and some returning wrapper classes which also use Domain.DbEntities.plc. My webUI project references the PlcCommunicator project and everything works fine.
The solution is growing larger, and over time I added more projects to it, all referring to and using the same Domain lib. But now I have added another project called PlcMonitoringLogger, and I decided to create another, smaller domain, just a subset of the main domain, which holds 5 classes which are also just EF DB First generated edmx classes, generated on the same db as the main Domain. One of these classes is PlcMonitoringDomain.DbEntities.plc. (Note the difference from Domain.DbEntities.plc.)
Now I need my PlcMonitoringLogger project to use the PlcCommunicator project. But PlcCommunicator works with Domain.DbEntities.plc, and PlcMonitoringLogger only knows PlcMonitoringDomain.DbEntities.plc. That is the problem I face. I could change the parameters of the PlcCommunicator methods to accept plc IDs instead of Domain.DbEntities.plc objects and also return plc IDs instead of Domain.DbEntities.plc objects. However, is this the right approach? Are there any pitfalls? What are the pros and cons? Another solution might be to create a base plc class, but this doesn't seem right. I want to decouple things from each other, and creating base classes just doesn't feel right.
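For illustration, a minimal sketch of what the ID-based PlcCommunicator surface might look like (the PlcStatus wrapper and GetStatus method are hypothetical, not taken from the actual projects):

```csharp
// Hypothetical sketch only: PlcCommunicator exposed in terms of plc IDs,
// so callers from either domain can use it without referencing
// Domain.DbEntities.plc or PlcMonitoringDomain.DbEntities.plc.
public class PlcStatus
{
    public int PlcId { get; set; }
    public bool IsOnline { get; set; }
}

public class PlcCommunicator
{
    // Accepts a plc ID instead of a Domain.DbEntities.plc entity.
    public PlcStatus GetStatus(int plcId)
    {
        // Internally this may still load the full entity from the main domain;
        // callers never see that type.
        return new PlcStatus { PlcId = plcId, IsOnline = true }; // placeholder result
    }
}
```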
I have read some things about bounded contexts, but I can't and don't want to change all my existing projects right away to use this design pattern, not least because I have no experience with it yet and it's hard for a beginner. I think taking "baby steps" towards using some aspects of bounded contexts is the best approach, not doing total rebuilds!
So if anybody has some ideas on this topic or something useful to say, please respond!

I'm not sure what you were trying to achieve by creating a subdomain, but the consequence is that the two domains are not interchangeable. So if combining components results in a domain mix-up, you cannot do it.
IMHO, the solution is to get rid of the sub-domain and integrate it into the current main domain. Any project/component using the domain can then reference other components that use the same domain without problems. No multiple-domain mix-up.

How can I structure an ASP.NET MVC application with a "Core" database and individual derived databases using Entity Framework?

I had a hard time naming and wording this question, as there's a lot to unpack, so I apologize in advance - for anyone who spends the time to review and respond to this, I very much appreciate you.
Background:
I have a relatively large ASP.NET MVC5 application using Entity Framework 6 and a SQL Server database. Currently, the solution is split into a few projects, mostly by layer (business, data, etc.). There is a single .edmx file and dbContext for the application, and it points to a single database at the moment.
The code/solution above represents the "core" of the system being built. However, this application is customized per client, so each client could have their own modules, pages, logic, etc. Because of this, we have a project in the solution for each client (only a couple right now, but there will eventually be 50+ - is that an issue? Should the solution be split up, maybe?). The intention is to be able to deploy just that client's code along with the core, or to deploy just the core as well.
In addition to the custom modules in the code, they may also have their own custom database, again derived from the Core database. The custom database will always be kept up to date with the core db, but may have additional objects (tables, stored procedures, etc.). One thing to note: I do not have the option of veering away from this approach - each client will definitely have their own copy of the "core", but it will be kept up to date using a push tool developed in-house.
Problem/Question:
With that said, each client will have their own database, which will essentially be the Core database with the potential for extra objects added in for that client's implementation.
The issue I'm struggling with is how to implement this in Entity Framework in a way which does not require me to add all of those custom db objects to the Core database, or which at the very least keeps them logically separated and relegated to the client projects. What would be the best way to go about this?
My Idea For Implementation
This is definitely where I am struggling at the moment. I am not really sure if my current idea will work, but I am still investigating and trying to come up with better options.
My current idea is as follows: since I can target a specific schema when generating an EDMX, I could place client-specific objects in a schema for their project and use those to generate a dbContext in each client project/database which inherits from the Core's dbContext implementation (containing all the "core" objects). This would mean ClientA's project would have an edmx file with just their custom tables/objects, inheriting all of the core's objects but keeping them separate from other clients' objects.
I'm not completely certain whether this approach will work (I'm playing with it now); my initial concern is that Entity Framework doesn't appear to generate foreign keys between the contexts. For example, if ClientA's table has a foreign key pointing to a core table, the generation tool doesn't appear to generate that relationship. That said, could I implement this manually and effectively? The core code is database first; however, I could implement the smaller, client-specific items code-first, which I believe would give me far more flexibility. Would this be an effective approach? If not, is there a better approach I could use?
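For what it's worth, here is a rough code-first style sketch of the inherited-context idea (all class, table and schema names are hypothetical; a database-first EDMX context handles its model differently, so treat this purely as an illustration of the shape of the idea):

```csharp
using System.Data.Entity;

// Hypothetical entities; in the real solution these would come from the core model.
public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class ClientAReport { public int Id { get; set; } public int CustomerId { get; set; } }

// Core context: holds only the shared "core" entity sets.
public class CoreContext : DbContext
{
    public CoreContext(string nameOrConnectionString) : base(nameOrConnectionString) { }

    public DbSet<Customer> Customers { get; set; }
}

// ClientA context: inherits everything from CoreContext and adds the
// client-specific objects, kept in a "ClientA" schema.
public class ClientAContext : CoreContext
{
    public ClientAContext(string nameOrConnectionString) : base(nameOrConnectionString) { }

    public DbSet<ClientAReport> ClientAReports { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<ClientAReport>().ToTable("Reports", "ClientA");
    }
}
```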
As a developer in a very similar situation (6 years on a project for multiple clients) I can say that your approach is full of pain. Customising your code per client is a road to hell.
You need to deploy the same code to every client. The core stays the same. Satellite modules developed for a specific client should be made as generic as possible (so you can re-sell them multiple times) and also deployed to everyone. The trick is to have a good toggle system that enables only the right functionality per client.
E.g. there is a controller that saves, say, company information. Everyone gets the same code, but if a customer BobTheBuilder Ltd. requires special validation for companies, then that code goes into the MyApp.BobTheBuilder.* namespace, and your configuration code should know that it should be executed instead of your general code. Needless to say, this should be done via a DI container, and implementations should be replaced by injecting objects that implement a common interface.
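As a rough sketch of that override mechanism (interface, class and container names are illustrative; the registration call depends on whichever DI container you use):

```csharp
// Hypothetical common interface plus a general and a tenant-specific implementation.
public class Company { public string Name { get; set; } }

public interface ICompanyValidator
{
    void Validate(Company company);
}

public class DefaultCompanyValidator : ICompanyValidator
{
    public void Validate(Company company)
    {
        // general validation rules shared by all tenants
    }
}

namespace MyApp.BobTheBuilder
{
    public class BobTheBuilderCompanyValidator : ICompanyValidator
    {
        public void Validate(Company company)
        {
            // extra validation required only by BobTheBuilder Ltd.
        }
    }
}

// At startup, per-tenant configuration decides which implementation the
// container injects into the controller (pseudo-registration, container-specific):
//
//   if (tenantConfig.IsEnabled("BobTheBuilderValidation"))
//       container.Register<ICompanyValidator, MyApp.BobTheBuilder.BobTheBuilderCompanyValidator>();
//   else
//       container.Register<ICompanyValidator, DefaultCompanyValidator>();
```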
As for the database - you can have multiple DbContexts that represent your database modules. They can live in the same database, but it's best to separate modules by schema name. So yes, all those objects go into your codebase. But not every tenant will get all the tables - only enabled modules should be activated and create their tenant tables.
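A small sketch of the schema-per-module idea (module, entity and schema names are made up):

```csharp
using System.Data.Entity;

public class Invoice { public int Id { get; set; } }
public class Report { public int Id { get; set; } }

// Each module gets its own context; both live in the same database,
// separated by schema name.
public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultSchema("billing");
        base.OnModelCreating(modelBuilder);
    }
}

public class ReportingContext : DbContext
{
    public DbSet<Report> Reports { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultSchema("reporting");
        base.OnModelCreating(modelBuilder);
    }
}
```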
As for a project per customer - that's also a big pain. Imagine you have more than 10 customers and need to update the Newtonsoft.Json package - that usually takes a bit more than forever! We tried that and fell back to per-customer namespace overrides.
Generally, here is our scheme:
Tenants all get the same codebase deployed to them, but functionality is disabled by toggles
Tenants each get their own database with all the tables and enabled schemas (modules)
Do not customise your core per tenant. All customisations go into modules.
CQRS is recommended, but you can live without it. Though life is a lot easier when you have only a handful of interfaces to think about.
DI is a must. Can't make all that happen without a good container that supports multi-tenancy.
There are modules that do specific stuff developed per customer. Each module has its own toggles and is very configurable - so multiple tenants can get the same module but can reconfigure it independently.
You can implement inheritance with the Entity Framework in an ASP.NET MVC Application:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/implementing-inheritance-with-the-entity-framework-in-an-asp-net-mvc-application
There are a few approaches: Table-per-Hierarchy (TPH) inheritance, Table-per-Type (TPT) inheritance and Table-per-Concrete-Class (TPC) inheritance.
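For example, a minimal Table-per-Type sketch in code-first style (the entity names follow the linked tutorial's school example but are otherwise illustrative):

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// [Table] on the derived types gives Table-per-Type: each derived type gets
// its own table joined to the Person table by primary key.
[Table("Instructor")]
public class Instructor : Person
{
    public DateTime HireDate { get; set; }
}

[Table("Student")]
public class Student : Person
{
    public DateTime EnrollmentDate { get; set; }
}

public class SchoolContext : DbContext
{
    // One DbSet for the hierarchy root; queries against People can return
    // Instructor or Student instances.
    public DbSet<Person> People { get; set; }
}
```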
You might also consider a microservice architecture if you're concerned about how the different schemas will integrate.
"Entity Framework doesn't appear to generate foreign keys between the contexts."
That approach sounds painful. Using microservices to encapsulate the Core and client DBs as their own entities, you could then use message queues to broker communication between them.

C# Class Structure to Ensure Certain Filters Are Always Applied

I'm currently working on a project with a team, and we have come across a design scenario that we are trying to find a solution for.
Background Info of Current Project Implementation:
This problem involves three main projects in our solution: Repository, Models, and Services project. In case it isn't obvious, the purpose of each project is as follows. The Models project contains models of all the data we store in our database. The Repository project has one main database access class that uses generics to interact with different tables depending on the model passed in. Lastly the Services project contains classes that interface data between the front-end and repository, and in general each service class maps 1-to-1 to a model class. As one would expect, the build dependencies are: Repository relies on Models, and Services relies on both projects.
The Issue:
The current issue is that we need a way to ensure that if a developer attempts to query or interact with a specific type of object (call it ModelA), a specific set of filters is always included by default (these filters are partially based on whether a particular user has permission to view certain objects in a list). A developer should be able to override this filter.
What we want to avoid doing is having an if clause in the repository classes that says "if you're updating this model type, add these filters".
Solutions we have thought of / considered:
One solution we are currently considering is having a function in ServiceA (the service corresponding to ModelA) that appends these filters to a given query, and then making it so that if anyone requests the db context of a model, they must pass in a function that manipulates filtering in some fashion (in other words, if they want to interact with ModelA, they would pass in the filter function from ServiceA). The issue with this solution is that a developer always needs to be aware that if they ever interact with ModelA, they must pass in the function from ServiceA. Also, because we don't want every model to enforce certain filter options, we would likely want this to be an optional parameter, which might then cause issues where developers simply forget to include this function when interacting with ModelA.
Another solution we considered is to have an attribute (let's call it DefaultFilterAttribute) on ModelA that stores a class type which should implement a particular interface (called IFilterProvider). ServiceA would implement this interface, and ModelA's attribute would be given ServiceA as a type. Then the repository methods could check whether the entity passed in has a DefaultFilterAttribute and simply call the method implemented by the class attached to the attribute. Unfortunately, as some of you may have noticed, with the way our project dependencies are currently set up we can't really implement a solution like this.
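For concreteness, here is a rough sketch of that attribute idea (names are taken from the description above; making it work would require the attribute and interface to live in a project that both Models and Repository can reference):

```csharp
using System;
using System.Linq;

// Would need to live in a shared project referenced by both Models and Repository.
public interface IFilterProvider
{
    IQueryable<T> ApplyDefaultFilters<T>(IQueryable<T> query) where T : class;
}

[AttributeUsage(AttributeTargets.Class)]
public class DefaultFilterAttribute : Attribute
{
    public Type FilterProviderType { get; private set; }

    public DefaultFilterAttribute(Type filterProviderType)
    {
        FilterProviderType = filterProviderType;
    }
}

// Usage on the model (ServiceA would implement IFilterProvider):
//
//   [DefaultFilter(typeof(ServiceA))]
//   public class ModelA { ... }
//
// And in the generic repository, before executing a query:
//
//   var attr = (DefaultFilterAttribute)Attribute.GetCustomAttribute(
//       typeof(T), typeof(DefaultFilterAttribute));
//   if (attr != null)
//   {
//       var provider = (IFilterProvider)Activator.CreateInstance(attr.FilterProviderType);
//       query = provider.ApplyDefaultFilters(query);
//   }
```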
So I'm wondering if there is a clean solution to this problem, or if potentially we are thinking about the problem and/or design pattern incorrectly, and should be taking a completely different approach.
I think you're making this unnecessarily complex. What you're describing is pretty much the entire purpose of a service layer. Presumably, you'd have something like GetModelAList (that's actually a pretty bad method for a service, but just for illustration). The logic of applying certain filters automatically, then, is encapsulated in that method. The application doesn't know or care how that data is retrieved; it just knows if it wants a list of ModelA instances, it calls that method.
If you then want a way to not apply those filters, you can provide another method such as GetModelAListUnfiltered, or pass a boolean or something into the original method that determines whether filters are automatically applied. Really you can handle this however you want, but the point is that it's all encapsulated in your service.
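A minimal sketch of what that encapsulation could look like (the repository and user-context interfaces here are placeholders, not your actual types):

```csharp
using System.Collections.Generic;
using System.Linq;

// Placeholder types standing in for the real project's abstractions.
public class ModelA { public int Id { get; set; } public int OwnerId { get; set; } public bool IsArchived { get; set; } }
public interface IRepository<T> { IQueryable<T> Query(); }
public interface IUserContext { int UserId { get; } bool IsAdmin { get; } }

public class ModelAService
{
    private readonly IRepository<ModelA> _repository;
    private readonly IUserContext _user;

    public ModelAService(IRepository<ModelA> repository, IUserContext user)
    {
        _repository = repository;
        _user = user;
    }

    // The default, permission-based filters are applied here and nowhere else.
    public IList<ModelA> GetModelAList()
    {
        var userId = _user.UserId;
        var isAdmin = _user.IsAdmin;
        return _repository.Query()
            .Where(m => !m.IsArchived && (isAdmin || m.OwnerId == userId))
            .ToList();
    }

    // Explicit opt-out for callers that really need everything.
    public IList<ModelA> GetModelAListUnfiltered()
    {
        return _repository.Query().ToList();
    }
}
```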
Lastly, you haven't specified exactly what your repository is doing, but a repository should be extremely simple, really just returning sets of all objects. Logic like what you're talking about belongs in a service layer. However, even that only applies if you're doing direct database access using something like Dapper or ADO.NET. If you're using a full-fledged ORM like Entity Framework, throw away your repository layer entirely. Yes, you heard me correctly: throw it away completely. A repository wrapped around an ORM is a useless abstraction, only serving to add more code that needs to be maintained and tested for no good reason whatsoever. The benefit some level of abstraction gives you is already provided by the service layer.

DAL framework used by many programs on multiple computers

In our project, we have many small programs spread across multiple computers, each responsible for a specific task against the legacy database. Today, they are written in Fortran 77 and use a very outdated framework for database access.
We are thinking of starting to develop in C# and are investigating how best to create a database framework that all applications can use. We cannot use any existing frameworks, because the old real-time database does not support SQL.
I am thinking of generating a DAL with T4 from the database definition.
The problem I see, however, is what happens when the database changes and the DAL must be recompiled. Is it enough to copy the dll file containing the DAL to all computers, or do we have to recompile the applications?
The actual structure of the database does not change very often. However, a lot of lookup constants are changed regularly. Normally, no constants are deleted, but they can get new values or new ones can be added. And if a constant is removed, the programs that use it must be rewritten anyway.
I fear that it may become a maintenance problem and am looking for a better solution.
Edit
Primary keys are not constant in the database, but are regenerated once a year. To enable programs to find the correct row in the database, lookup constants are used. In existing programs, the constants are used directly in the code in the form <TableName(RowName)>. The constants are then replaced with the current primary key value by a preprocessor. This means that all applications must be recompiled when the database is rebuilt.
It is therefore not possible to use e.g. GetByKey(int key) in the BLL, as the key is not constant.
I see a number of different solutions to this, as listed below. Which are good and which are bad? Please tell me if you see any better solutions:
Define the lookup constants in the DAL:
BLL.TableName.GetByKey(DAL.TableNameLookup.RowName)
Pros: The constant is defined in the DAL and I don't have to replace the BLL if the lookups are changed.
Cons: Verbose syntax
Define the lookups in the BLL:
BLL.TableName.RowName
Pros: Simple syntax
Cons: I have to update the BLL when the lookup constants are changed.
This might be solved both with code generation (T4) and with DynamicObject. If DynamicObject is used, the constants could perhaps be defined in an XML file that can be easily updated. However, it will be significantly slower.
I do not think any of these ways is good. Please help me to come up with something better.
Use one more layer between the DAL and the application (e.g. a BL - business layer).
Call the DAL methods from the BL, and from the application call the BL. Doing this you avoid a direct dependency between the DAL and the application. Then whenever you change the database, you only change the DAL and replace its dll. There is no need to recompile the application on every change in the DAL.
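A small sketch of that layering (all type and member names here are invented for illustration):

```csharp
// --- DAL assembly: regenerated (e.g. with T4) whenever the database changes ---
public class CustomerRow
{
    public int Key { get; set; }
    public string Name { get; set; }
}

public class CustomerTable
{
    // Talks to the legacy real-time database; details omitted.
    public CustomerRow GetByKey(int key) { return null; }

    // Maps a stable lookup constant/name to the current (regenerated) primary key.
    public int ResolveLookup(string lookupName) { return 0; }
}

// --- BL assembly: the stable API that all the small programs compile against ---
public class CustomerService
{
    private readonly CustomerTable _table = new CustomerTable();

    public CustomerRow GetByLookup(string lookupName)
    {
        // The BL hides the non-constant primary keys behind stable lookup names,
        // so the applications do not need recompiling when keys are regenerated.
        return _table.GetByKey(_table.ResolveLookup(lookupName));
    }
}
```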

C# - Creating and managing application specific project files

One of the specifications for an application I'm developing is that it must work with project files.
My problem is how I'm going to fulfill this requirement: since I'm working toward making the application as loosely coupled as possible using Prism and Unity, I can't decide which implementation to use for creating and managing the project files (loading, saving, etc.).
The application is going to be an SEO helper and will mostly handle text information, like URIs and strings it will fetch from the internet.
I have thought of some possible implementations:
a - The framework's System.Configuration namespace. This was my first option, since I could easily plug new ConfigurationSections into the Configuration object. The downside is that it leaves no opportunity (or at least I couldn't figure out how) to use interfaces for abstraction.
b - Create a database for each project and save it in a file. With this implementation I could use a database framework such as NHibernate or any other (open to suggestions) to handle the object-to-db mapping.
c - Add your own here.
My question is: what do you think would be the better approach for handling different configuration/settings for every module that I plug in, and for persisting big lists of URLs (about 10k-100k URLs) along with other settings?
Thanks in advance!
The simplest way is to define your own type (class), like ProjectSettings { ... }, and simply have it serialized/deserialized with your preferred serializer (XML, for example).
Then you simply don't need any fancy ORMs or configurations.
Don't introduce complexity where you don't need it ;)
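For example, a minimal sketch of that approach using XmlSerializer (the ProjectSettings properties are just examples):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class ProjectSettings
{
    public string ProjectName { get; set; }
    public List<string> Urls { get; set; }

    public ProjectSettings() { Urls = new List<string>(); }
}

public static class ProjectFile
{
    private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(ProjectSettings));

    // Saves the project to the given file path.
    public static void Save(ProjectSettings settings, string path)
    {
        using (var stream = File.Create(path))
            Serializer.Serialize(stream, settings);
    }

    // Loads a project back from the file.
    public static ProjectSettings Load(string path)
    {
        using (var stream = File.OpenRead(path))
            return (ProjectSettings)Serializer.Deserialize(stream);
    }
}
```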
A configuration file is a good solution when you have a few dozen configuration variables.
But here it's better to have a database. Why? Because if you want to modify 10k-100k URIs, it will be hard and errors will be easy to make.
With a database (one table for projects, another for string connections, another for URIs...), it will be easy to query it, update it, CRUD it.
You have to use a database when the data is too big for a file, and in this case also because of the relationships between entities (one project has many string connections, many URIs...).
For the ORM, Entity Framework 4.0, because it supports POCO entities (no mapping metadata on the mapped entity classes).
Best regards

Where to put DTOs, Result Objects etc?

I have a fairly clean ASP.NET MVC project structure. However, I'm struggling with how to organize the mass of classes which are DTOs (data transfer objects), e.g. those that just encapsulate post data from forms (viewmodels) but don't represent full domain objects or anything near that yet, and then the many "result" objects I have which communicate complex result information from my service layer back to the controller. Where do you put these and how do you organize them? I have one folder with well over 60 classes now and it's getting cluttered. Suggestions appreciated!
Domain objects should live in a separate Domain Model library. Anything that supports the Domain Model in a framework-neutral way (i.e. no references to ASP.NET MVC, WCF, WPF, etc.) belongs in the Domain Model.
Classes that perform translation between the Domain Model and the specific interface framework (ASP.NET MVC in your case) belong in that particular project (your ASP.NET MVC project).
You can have your mappers etc. in a separate Mappers folder, but personally, I think it is much more valuable to structure code along features instead of infrastructure.
I use <CompanyName>.<ProjectName>.Core to store all project-specific classes which do not strictly pertain to the particular project interface that I am writing. So DTOs, DAOs, and other project-specific classes all go in there.
I also use <CompanyName>.<DotNetLibraryNamespace> to store general-purpose classes that could be reused across projects and are not specific to this project's domain. For example, string manipulation classes could go in the <CompanyName>.Text namespace. I always mirror the .NET namespace structure names so that anyone who uses the .NET class library has an easy time finding my stuff.
