Design Pattern for Data Consolidation Layer (ETL) - C#

I have to design software using ASP.NET Core which collects data from various data sources (see picture below).
For example, DataSource1 and DataSource2 contain product data such as attributes, while DataSource3 contains the assets of those products.
My first thought was to collect the data from each data source and persist it in a data store of my own, using the entity defined below.
That gives me the advantage later, when translating or transforming the data, of working with one abstract entity.
My question: which pattern would be a good fit for this system? Repository, Pipeline, ...?
Could you show me some pseudo code?
And what about DI if I use interfaces but need multiple instances of the data sources?

A pattern (or a set of patterns) should be applied to solve a specific problem/complexity.
I think the pattern you need here is Facade.
The problem it will solve is hiding the complexity of the 'three data sources' from your client.
Within the Facade you would merge the data into a reasonable entity.
Additionally, you could make use of the Proxy pattern, which could give you the 'cache' functionality for the 'merged' entities, which could solve the second complexity you describe.
I am not sure I understand the idea of persisting these items into a fourth datastore; that might be overkill - but in any case, that can also be achieved with the proxy class - it's just that the cache would be more permanent, if your domain 'allows' it.
As for the Repository pattern - well, I believe it's very likely that any reasonable solution you apply that hides the details of your data access will end up being an implementation of a Repository.
I wouldn't be too strict about naming the patterns and sticking to sample code in books or articles. Patterns are high level guidelines that can be adjusted to needs.
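To make this a bit more concrete, here is a minimal sketch of such a Facade in C#. All names here (IProductDataSource, IAssetDataSource, ProductData, Asset, Product, the adapter classes) are placeholders I made up for illustration, not anything from the question. It also answers the DI part: register several implementations of the same interface and take an IEnumerable of it in the constructor.

using System.Collections.Generic;
using System.Threading.Tasks;

// Placeholder abstractions for the individual data sources.
public interface IProductDataSource
{
    Task<IReadOnlyList<ProductData>> GetProductsAsync();
}

public interface IAssetDataSource
{
    Task<IReadOnlyList<Asset>> GetAssetsAsync(string productId);
}

// Placeholder shapes for the raw and merged data.
public class ProductData
{
    public string Id { get; set; }
    public IDictionary<string, string> Attributes { get; set; }
}

public class Asset
{
    public string Uri { get; set; }
}

// The single abstract entity everything is merged into.
public class Product
{
    public string Id { get; set; }
    public IDictionary<string, string> Attributes { get; set; }
    public IReadOnlyList<Asset> Assets { get; set; }
}

// The Facade hides the three data sources behind one call and merges
// their results into the abstract Product entity.
public class ProductConsolidationFacade
{
    private readonly IEnumerable<IProductDataSource> _productSources; // DataSource1, DataSource2, ...
    private readonly IAssetDataSource _assetSource;                   // DataSource3

    public ProductConsolidationFacade(
        IEnumerable<IProductDataSource> productSources,
        IAssetDataSource assetSource)
    {
        _productSources = productSources;
        _assetSource = assetSource;
    }

    public async Task<IReadOnlyList<Product>> GetConsolidatedProductsAsync()
    {
        var products = new List<Product>();
        foreach (var source in _productSources)
        {
            foreach (var data in await source.GetProductsAsync())
            {
                products.Add(new Product
                {
                    Id = data.Id,
                    Attributes = data.Attributes,
                    Assets = await _assetSource.GetAssetsAsync(data.Id)
                });
            }
        }
        return products;
    }
}

With ASP.NET Core's built-in container, registering several implementations of the same interface makes the whole set available as IEnumerable<IProductDataSource> in the Facade's constructor (adapter class names are again placeholders):

// In Startup.ConfigureServices (IServiceCollection services):
services.AddScoped<IProductDataSource, DataSource1Adapter>();
services.AddScoped<IProductDataSource, DataSource2Adapter>();
services.AddScoped<IAssetDataSource, DataSource3Adapter>();
services.AddScoped<ProductConsolidationFacade>();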

Related

dynamic business document creation

I am preparing a C#.Net project for our company and would like to know which design pattern is the best fit for creating all those business documents.
I have studied some of the available design patterns, but to be honest I have problems applying them to my real-world problem. To get concrete, here is my scenario: different types of documents have to be created, read in, and maintained through Windows Forms and finally stored back to a database, e.g. invoices (sales and purchase), contracts (sales and purchase), bills of lading, letters of credit, various inventory and warehouse documents, and maybe later a bunch of accounting documents.
At first I thought the factory method would do the job, but I am not sure it is the right choice for this task.
I guess the best approach is to have an abstract class called "Document" with all the common fields (like docId, docDate, docNumber, docIssuer, etc.) as my base and then dive into the concrete creation of the desired document object.
What options are there? In the case of factories: do I need to define a concrete class for each and every document and create the object (which would be simple inheritance, wouldn't it?), or what should a factory approach look like with regard to my problem?
Isn't it better to define each and every document spec (which would ultimately correspond to each database table field) as its own class and use the Builder or Composite pattern to assemble the desired document at runtime?
Or are there any other approaches available?
I imagine that many business-related programs have to make this decision, but I could not find any prior questions on Stack Overflow about this rather common issue.
As said before, we are in the planning phase and this issue may be considered a crucial pillar of the architecture, so any constructive advice would be highly appreciated.
You can use table-per-hierarchy as your database design, then use an ORM like NHibernate or Entity Framework to construct the documents (instead of using a factory).
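As a rough sketch of that approach with Entity Framework Code First (only the common fields come from the question; the concrete document types and context name below are invented examples), the abstract base class and its subclasses map by default to one table with a discriminator column, i.e. table-per-hierarchy:

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

public abstract class Document
{
    [Key]
    public int DocId { get; set; }
    public DateTime DocDate { get; set; }
    public string DocNumber { get; set; }
    public string DocIssuer { get; set; }
}

// Invented example subclasses - one per concrete document type.
public class SalesInvoice : Document
{
    public decimal TotalAmount { get; set; }
}

public class BillOfLading : Document
{
    public string Carrier { get; set; }
}

public class DocumentContext : DbContext
{
    // Exposing only the base type gives a table-per-hierarchy mapping by
    // default: one Documents table plus a discriminator column recording
    // the concrete type of each row.
    public DbSet<Document> Documents { get; set; }
}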

Looking for basic pointers on Repository Pattern for highly related entities

I'm writing a database app in C# using SQL Server CE 3.5 and would like to implement the Repository pattern. I've done several searches both on Google and SO; however, I cannot find an implementation that matches my needs, so I will ask the SO community directly.
The key business objects in my app are: video, actor, tag category and tag. The basic business rules are as follows:
Every tag belongs to a tag category.
A video may or may not have multiple actors and tags associated with it.
Actors and tags may or may not have multiple videos associated with them.
Here is where things get fuzzy for me:
Should I implement a video repository that includes actors, tag categories, and tags or should each of these business objects have their own repositories? Given these objects can exist independently, I'm inclined to create a repository for each one.
If each object should have its own repository, how do I relate them? For example, should the video repository include a property that queries the tag repository for matches?
I'm looking for some guidelines or best practices for setting this up. I understand the basics of the repository pattern, but I need some advice as to how to connect them together.
You should only have a repository for your aggregate roots.
I would not recommend using the repository as a way of encapsulating all your queries. Repositories are not big dumping grounds for queries - they are a specific tool for use in scenarios where DDD is most applicable. See this article for some more info: http://ayende.com/blog/3955/repository-is-the-new-singleton
There should be no need to 'connect' or 'relate' repositories.
If you want to write a query such as "Load all the tags for videos that this user has borrowed", it is probably best not to put it in the repository. This query is most likely specific to a certain case, e.g. a UI, and should be written inside or close to the class for which the query is required. The output of the query would probably be mapped to read-only Data Transfer Objects specifically created for the UI's requirement, not to your entities.
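To illustrate the distinction (all names below are placeholders based on the question's entities), the repository stays scoped to the aggregate root, while the case-specific query lives near the code that needs it and returns read-only DTOs:

using System.Collections.Generic;

// Minimal stand-in for the Video aggregate root.
public class Video
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Repository for the aggregate root only.
public interface IVideoRepository
{
    Video GetById(int id);
    void Add(Video video);
    void Remove(Video video);
}

// Read-only DTO shaped for one particular screen.
public class BorrowedVideoTagDto
{
    public string VideoTitle { get; set; }
    public string TagName { get; set; }
}

// A UI-specific query; it lives near the UI, not in the repository.
public class BorrowedVideoTagsQuery
{
    public IList<BorrowedVideoTagDto> Execute(int userId)
    {
        // However the data access is done (LINQ, SQL, ...), the results
        // are mapped straight to DTOs rather than to domain entities.
        return new List<BorrowedVideoTagDto>();
    }
}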

Rich domain model with ORM

I seem to be missing something and extensive use of google didn't help to improve my understanding...
Here is my problem:
I like to create my domain model in a persistence ignorant manner, for example:
I don't want to add virtual if I don't need it otherwise.
I don't like to add a default constructor, because I like my objects to always be fully constructed. Furthermore, the need for a default constructor is problematic in the context of dependency injection.
I don't want to use overly complicated mappings, because my domain model uses interfaces or other constructs not readily supported by the ORM.
One solution to this would be to have separate domain objects and data entities. Retrieval of the constructed domain objects could easily be solved using the repository pattern and building the domain object from the data entity returned by the ORM. Using AutoMapper, this would be trivial and not too much code overhead.
But I have one big problem with this approach: It seems that I can't really support lazy loading without writing code for it myself. Additionally, I would have quite a lot of classes for the same "thing", especially in the extended context of WCF and UI:
Data entity (mapped to the ORM)
Domain model
WCF DTO
View model
So, my question is: What am I missing? How is this problem generally solved?
UPDATE:
The answers so far suggest what I already feared: It looks like I have two options:
Make compromises on the domain model to match the prerequisites of the ORM and thus have a domain model the ORM leaks into
Create a lot of additional code
UPDATE:
In addition to the accepted answer, please see my answer for concrete information on how I solved those problems for me.
I would question that matching the prereqs of an ORM is necessarily "making compromises". However, some of these are fair points from the standpoint of a highly SOLID, loosely-coupled architecture.
An ORM framework exists for one sole reason: to take a domain model implemented by you and persist it into a similar DB structure, without you having to implement a large number of bug-prone, near-impossible-to-unit-test SQL strings or stored procedures. ORMs also easily implement concepts like lazy loading: hydrating an object at the last minute, just before it is needed, instead of building a large object graph yourself.
If you want stored procs, or have them and need to use them (whether you want to or not), most ORMs are not the right tool for the job. If you have a very complex domain structure such that the ORM cannot map the relationship between a field and its data source, I would seriously question why you are using that domain and that data source. And if you want 100% POCO objects, with no knowledge of the persistence mechanism behind, then you will likely end up doing an end run around most of the power of an ORM, because if the domain doesn't have virtual members or child collections that can be replaced with proxies, then you are forced to eager-load the entire object graph (which may well be impossible if you have a massive interlinked object graph).
While ORMs do require the domain design to have some knowledge of the persistence mechanism, an ORM still results in much more SOLID designs, IMO. Without an ORM, these are your options:
Roll your own Repository that contains a method to produce and persist every type of "top-level" object in your domain (a "God Object" anti-pattern)
Create DAOs that each work on a different object type. These types require you to hard-code the get and set between ADO DataReaders and your objects; in the average case a mapping greatly simplifies the process. The DAOs also have to know about each other; to persist an Invoice you need the DAO for the Invoice, which needs a DAO for the InvoiceLine, Customer and GeneralLedger objects as well. And, there must be a common, abstracted transaction control mechanism built into all of this.
Set up an ActiveRecord pattern where objects persist themselves (and put even more knowledge about the persistence mechanism into your domain)
Overall, the second option is the most SOLID, but more often than not it turns into a beast-and-two-thirds to maintain, especially when dealing with a domain containing backreferences and circular references. For instance, for fast retrieval and/or traversal, an InvoiceLineDetail record (perhaps containing shipping notes or tax information) might refer directly to the Invoice as well as the InvoiceLine to which it belongs. That creates a 3-node circular reference that requires either an O(n^2) algorithm to detect that the object has been handled already, or hard-coded logic concerning a "cascade" behavior for the backreference. I've had to implement "graph walkers" before; trust me, you DO NOT WANT to do this if there is ANY other way of doing the job.
So, in conclusion, my opinion is that ORMs are the least of all evils given a sufficiently complex domain. They encapsulate much of what is not SOLID about persistence mechanisms, and reduce knowledge of the domain about its persistence to very high-level implementation details that break down to simple rules ("all domain objects must have all their public members marked virtual").
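As a small sketch of that rule (the entity names here are invented), this is the kind of domain class that lets an ORM substitute lazy-loading proxies:

using System.Collections.Generic;

public class Customer { public virtual int Id { get; set; } }
public class InvoiceLine { public virtual int Id { get; set; } }

public class Invoice
{
    public Invoice()
    {
        Lines = new List<InvoiceLine>();
    }

    public virtual int Id { get; protected set; }
    public virtual Customer Customer { get; set; }

    // Virtual navigation members are what allow the ORM to swap in
    // lazy-loading proxies, so the object graph is only hydrated when
    // it is actually accessed.
    public virtual ICollection<InvoiceLine> Lines { get; protected set; }
}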
In short - it is not solved
All good points.
I don't have an answer (this started as a comment, but it got too long when I decided to add something about stored procs), except to say that my philosophy seems to be identical to yours, and I either hand-code or code-generate.
Things like partial classes make this a lot easier than it used to be in the early .NET days. But ORMs (as a distinct "thing" as opposed to something that just gets done in getting to and from the database) still require a LOT of compromises and they are, frankly, too leaky of an abstraction for me. And I'm not big on having a lot of dupe classes because my designs tend to have a very long life and change a lot over the years (decades, even).
As far as the database side goes, stored procs are a necessity in my view. I know that ORMs support them, but the tendency among most ORM users is not to use them, and that is a huge negative for me - because they talk about a best practice and then couple to a table-based design, even if it is created from a code-first model. It seems to me they should look at an object datastore if they don't want to use a relational database in a way that utilizes its strengths. I believe in Code AND Database first - i.e. model the database and the object model simultaneously, back and forth, and then work inwards from both ends. I'm going to lay it out right here:
If you let your developers code an ORM against your tables, your app is going to have problems living for years. Tables need to change. More and more people are going to want to knock up against those entities, and now they are all using an ORM generated from tables. And you are going to want to refactor your tables over time. In addition, only stored procedures are going to give you any kind of usable role-based manageability without dealing with every table on a per-column GRANT basis - which is super painful. If you program well in OO, you have to understand the benefits of controlled coupling. That's all stored procedures are - USE THEM so your database has a well-defined interface. Or don't use a relational database if you just want a "dumb" datastore.
Have you looked at the Entity Framework 4.1 Code First? IIRC, the domain objects are pure POCOs.
This is what we did on our latest project, and it worked out pretty well:
We use EF 4.1 with virtual keywords on our business objects and have our own custom implementation of the T4 template, wrapping the ObjectContext behind an interface for repository-style data access.
We use AutoMapper to convert between BO and DTO.
We use AutoMapper to convert between ViewModel and DTO.
You would think that view models, DTOs, and business objects are the same thing, and they might look the same, but they have a very clear separation in terms of concerns.
View models are more about the UI screen, DTOs are more about the task you are accomplishing, and business objects are primarily concerned with the domain.
There are some compromises along the way, but if you want EF, the benefits outweigh the things you give up.
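A minimal sketch of that mapping chain with AutoMapper (the Video types are invented for illustration, and this uses the newer MapperConfiguration API rather than the static Mapper.CreateMap calls that were common around EF 4.1):

using AutoMapper;

public class Video          { public int Id { get; set; } public string Title { get; set; } }
public class VideoDto       { public int Id { get; set; } public string Title { get; set; } }
public class VideoViewModel { public string Title { get; set; } }

public static class MappingConfig
{
    public static IMapper Create()
    {
        // One configuration covers both hops: BO -> DTO and DTO -> ViewModel.
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Video, VideoDto>();
            cfg.CreateMap<VideoDto, VideoViewModel>();
        });
        return config.CreateMapper();
    }
}

// Usage:
//   var mapper = MappingConfig.Create();
//   var dto = mapper.Map<VideoDto>(videoEntity);
//   var vm  = mapper.Map<VideoViewModel>(dto);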
Over a year later, I have solved these problems for me now.
Using NHibernate, I am able to map fairly complex Domain Models to reasonable database designs that wouldn't make a DBA cringe.
Sometimes it is necessary to create a new implementation of the IUserType interface so that NHibernate can correctly persist a custom type. Thanks to NHibernate's extensible nature, that is no big deal.
I found no way to avoid adding virtual to my properties without losing lazy loading. I still don't particularly like it, especially because of all the warnings from Code Analysis about virtual properties without derived classes overriding them, but out of pragmatism, I can now live with it.
For the default constructor I also found a solution I can live with. I add the constructors I need as public constructors and I add an obsolete protected constructor for NHibernate to use:
[Obsolete("This constructor exists because of NHibernate. Do not use.")]
protected DataExportForeignKey()
{
}

How should I implement my Repository (DDD) in C# to handle multiple calls for the same Aggregate Root

Which class in my project should be responsible for keeping track of which Aggregate Roots have already been created, so as not to create two instances for the same Entity? Should my repository keep a list of all aggregates it has created? If yes, how do I do that? Do I use a singleton repository (doesn't sound right to me)? Would another option be to encapsulate this "caching" somewhere else, in some other class? What would this class look like, and into what pattern would it fit?
I'm not using an O/R mapper, so if there is a technology out there that handles this, I'd need to know how it does it (as in what pattern it uses) to be able to use it.
Thanks!
I believe you are thinking about the Identity Map pattern, as described by Martin Fowler.
In his full description of the pattern (in his book), Fowler discusses implementation concerns for read/write entities (those that participate in transactions) and read-only (reference data, which ideally should be read only once and subsequently cached in memory).
I suggest obtaining his excellent book, but the excerpt describing this pattern is readable on Google Books (look for "fowler identity map").
Basically, an identity map is an object that stores, for example in a hashtable, the entity objects loaded from the database. The map itself is stored in the context of the current session (request), preferably in a Unit of Work (for read/write entities). For read-only entities, the map need not be tied to the session and may be stored in the context of the process (global state).
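A bare-bones sketch of the idea in C# (the type names are invented, and a real unit of work would do more, e.g. track dirty state), keyed by entity id:

using System;
using System.Collections.Generic;

public class IdentityMap<TEntity> where TEntity : class
{
    private readonly Dictionary<int, TEntity> _loaded = new Dictionary<int, TEntity>();

    // Returns the instance already loaded in this session if there is one;
    // otherwise loads it, remembers it, and returns it, so two calls for
    // the same id always yield the same object reference.
    public TEntity Get(int id, Func<int, TEntity> loadFromDatabase)
    {
        TEntity entity;
        if (_loaded.TryGetValue(id, out entity))
            return entity;

        entity = loadFromDatabase(id);
        _loaded[id] = entity;
        return entity;
    }
}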
I consider caching to be something that happens at the Service level, rather than in a repository. Repositories should be "dumb" and just do basic CRUD operations. Services can be smart enough to work with caching as necessary (which is probably more of a business rule than a low-level data access rule).
To put it simply, I don't let any code use Repositories directly - only Services can do that. Then everything else consumes the relevant services as interfaces. That gives you a nice wrapper for putting in biz logic, caching, etc.
I would say that if this "caching" is managed anywhere other than the Repository, then you are letting concerns leak out.
The repository is your collection of items. No code consuming the repository should have to decide whether to retrieve the object from the repository or from somewhere else.
Singleton sounds like the wrong lifetime; it likely should be per-request. This is easy to manage if you are using an IoC/DI container.
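For example, with a container that supports scoped lifetimes (ASP.NET Core's built-in container shown here; the interface and class names are placeholders), per-request is a one-line registration:

// In ConfigureServices: one repository (and with it one identity
// map / unit of work) per web request.
services.AddScoped<IVideoRepository, VideoRepository>();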
It seems to me that the fact that you even have to consider multiple calls for the same aggregate evidences an architecture/design problem. I would be interested to hear an example of what those first and second calls might be, and why they require the exact same instance of your AR.

ASP.NET C#: Which Design Pattern should I use and why?

I am developing an app in ASP.NET C# and came across the following scenario:
I will have to create some maintenance screens for different entities (tables)
Those entities will basically have the same behaviour within the UI: Search, GetById, Save, Create and GetAll
The entities may have different structure i.e. different properties (fields)
As I am talking about 20-plus admin screens, which design pattern could I take advantage of in order to minimize the amount of code I will have to write?
I thought of the Bridge pattern, but I am a little confused about how to implement it...
A little bit of the technology background I am using:
ASP.NET classic (n-tier)
LINQ to SQL and DAO objects
SQL Server 2005
For a set of admin screens that are just doing CRUD (Create, Read, Update, Delete) operations and with little in the way of business logic, I'd be quite tempted to more or less eschew design patterns and take a look at asp.net dynamic data. This is especially true if you want to minimise the amount of code you want to write.
This is not a design pattern... but I would strongly suggest using Dynamic Data. Jonathan Carter has some great articles about it: http://lostintangent.com/index.php?s=dynamic+data
If you're really just doing some basic stuff like this: Search, GetById, Save, Create and GetAll, I would recommend you use repositories. If done wrong repositories can get really bad and nasty, but if you're really primarily limited to this set of operations you've basically described a repository with that set of operations.
You'll want to look at ways in which you can extract the extra logic for example of searching so that you're not creating duplicate logic.
Repositories are nice and testable as long as you make sure not to let them get out of control. I give you this warning only because I've seen far too many people create monster classes out of repositories.
The repositories work with your objects. They are basically the intermediary which handles the persistence of your data. This abstraction allows you to hide from the rest of your code how you're persisting your data. In this case the implementations of your repositories will be using LinqToSql as I believe that is what you said you were using.
There are plenty of resources explaining the repository pattern.
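Since all 20-plus screens share the same five operations, a generic repository interface captures them once (the member names follow the question; the interface name and the LINQ to SQL-backed implementation mentioned in the comment are only a sketch):

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public interface IRepository<TEntity> where TEntity : class
{
    TEntity GetById(int id);
    IEnumerable<TEntity> GetAll();
    IEnumerable<TEntity> Search(Expression<Func<TEntity, bool>> predicate);
    TEntity Create();
    void Save(TEntity entity);
}

// A LINQ to SQL implementation would wrap the DataContext and implement
// these members once; each admin screen then works against
// IRepository<TEntity> for its own entity type instead of repeating the
// CRUD plumbing per screen.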
What you want is not a design pattern. You are looking for an ORM with scaffolding. I have used and highly recommend SubSonic - http://subsonicproject.com. You can read about its scaffolding features here: http://subsonicproject.com/web-forms-controls/the-scaffold/
