I am new to the Hot Towel SPA template and I am trying to use multiple connection strings (multiple database contexts). Most examples online use only one, as follows:
[BreezeController]
public class MyController : ApiController
{
    readonly EFContextProvider<ContextClass> _contextProvider =
        new EFContextProvider<ContextClass>();

    [HttpGet]
    public string Metadata()
    {
        return _contextProvider.Metadata();
    }

    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle)
    {
        return _contextProvider.SaveChanges(saveBundle);
    }
    ...
What is the best way of implementing something like this where I use a repository and unit of work, and am able to join objects from different contexts?
Let's separate (1) the Repo/Unit-of-Work issue from (2) the multiple-database issue.
Server-side coding practices
What you describe appears only in demos. The server code exists to facilitate the client-development story which is the point of these BreezeJS samples. We made no effort to illustrate proper server side coding practices. In fact, we're often showing bad practices.
We're not ashamed. We want your full attention on the Breeze client. A functional server is all we need and "doing it right" would require more code and distractions that have nothing to do with writing a Breeze client.
For the record, I would NEVER create an EFContextProvider in the controller of a real app. The controller should concentrate on mediating client requests. It should be thin. Repository or UoW classes (and helpers) in separate projects would handle business logic and persistence.
The EFContextProvider itself is production grade and appropriate for most applications that turn Breeze client requests into Entity Framework persistence operations.
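To make that concrete, here is a minimal sketch of the thin-controller idea, assuming a hypothetical NorthwindUnitOfWork class that wraps the EFContextProvider and the repositories. None of these names come from the Breeze samples; they only illustrate the delegation.

[BreezeController]
public class OrdersController : ApiController
{
    // In a real app the UoW would typically be constructor-injected by your DI container.
    private readonly NorthwindUnitOfWork _uow = new NorthwindUnitOfWork();

    [HttpGet]
    public string Metadata()
    {
        return _uow.Metadata;            // the UoW wraps the EFContextProvider internally
    }

    [HttpGet]
    public IQueryable<Order> Orders()
    {
        return _uow.Orders.All();        // a repository exposed by the UoW
    }

    [HttpPost]
    public SaveResult SaveChanges(JObject saveBundle)
    {
        return _uow.Commit(saveBundle);  // business rules + persistence live behind the UoW
    }
}

The controller knows nothing about EF; swapping the persistence strategy only touches the UoW project.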
Multiple databases
If you do all of the joining on the server, your web api (little "w", little "a") can present a single face to the client and you'll only need one set of metadata to describe a model that persists across multiple databases. That's great for client development and is probably the most productive approach, given that you'll have to validate and coordinate cross-database query and save operations on the server no matter what you do on the client.
There is no escaping the fact that your server will be much more complicated than if you had one model, one DbContext, and one database. Too bad you can't separate the workflows so that you have only one model/context/database per feature area. You'll probably lose the ability to use IQueryable and have to write specific endpoints for each query that your client app requires. Oh well.
I smell a deeper architectural problem. But the real world presents us with stinky situations; we must hold our noses and persevere if we possibly can.
At least you can shield the client by managing the mess entirely on the server. That's what I'd try to do.
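One hedged way to sketch that server-side shielding is a unit-of-work that owns both contexts and does the cross-database "join" in memory, projecting into a single shape the client understands. Every context, entity, and property name below is invented for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: SalesContext, HrContext and their entity sets are invented names.
public class OrderWithSalesRep
{
    public int OrderId { get; set; }
    public DateTime OrderDate { get; set; }
    public string SalesRepName { get; set; }
}

public class CombinedUnitOfWork : IDisposable
{
    private readonly SalesContext _sales = new SalesContext();  // database #1
    private readonly HrContext _hr = new HrContext();           // database #2

    // Cross-database join done in memory on the server, returned as one client-facing shape.
    public IEnumerable<OrderWithSalesRep> OrdersWithSalesReps()
    {
        var orders = _sales.Orders.ToList();                               // query database #1
        var repIds = orders.Select(o => o.SalesRepId).Distinct().ToList();
        var reps = _hr.Employees
                      .Where(e => repIds.Contains(e.Id))
                      .ToDictionary(e => e.Id);                            // query database #2

        return orders.Select(o => new OrderWithSalesRep
        {
            OrderId = o.Id,
            OrderDate = o.OrderDate,
            SalesRepName = reps.ContainsKey(o.SalesRepId) ? reps[o.SalesRepId].FullName : null
        });
    }

    public void Dispose()
    {
        _sales.Dispose();
        _hr.Dispose();
    }
}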
Related
I have a situation where I need to create an application which supports multiple databases. Multiple databases means the client can use any database, such as Oracle, SQL Server, MySQL, or PostgreSQL, from the start.
I was trying to use an ORM like NHibernate or MyBatis. But they have their limitations and need expertise to use.
So I decided to use the data providers provided by Microsoft, like ADO.NET, OLE DB, ODP.NET, etc.
Is there any way to keep my database logic the same for all the databases? I have tried IDbConnection, IDbCommand, etc., but they have a problem in the case of Oracle (ref cursors).
Is there any way to achieve this? A link or guide would be appreciated.
Edit:
There is a problem with the DbTypes because they are enums defined differently by the different data providers.
Well, real-life applications are complicated like that. Before you know it, you want to replace the UI with an app, expose your logic as a WCF service, swap the e-mail service for another provider, test pieces of your code while mocking the DAL, and swap the database for another one.
The usual way to deal with this is to pass all calls through an interface that separates the implementation from the caller. After that, you can implement the different DALs.
Personally I usually go with this approach:
First create a single DLL that contains all interfaces. Basically the idea is to expose all calls that your UI, App or whatever needs through the interface. From now on, your UI doesn't talk to databases or e-mail providers anymore.
If you need to get access to the interface, you use a factory pattern. Never use 'new'; that will get you in trouble in the long run.
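As a bare-bones illustration (every name here is hypothetical), the shared interface assembly and its consumption from the UI might look like this:

// In the shared "interfaces" assembly: the only types the UI ever references.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IDataAccess
{
    Customer GetCustomer(int id);
    void SaveCustomer(Customer customer);
}

public interface IMailService
{
    void Send(string to, string subject, string body);
}

// In the UI layer: never 'new' a concrete DAL, always ask the factory.
public class CustomerScreenLoader
{
    public Customer Load(int id)
    {
        IDataAccess dal = DataAccessFactory.Create();  // implementation chosen elsewhere
        return dal.GetCustomer(id);
    }
}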
It's not trivial to create this, and needs proper crafting. Usually I begin with a bare minimum version, hack everything else in the UI as a first version, then move everything that touches a DB or a service into the right project while creating interfaces and finally re-engineer everything until I'm 100% satisfied.
Interfaces should be built to last. Sure, changes will happen over time, but you really want to minimize these. Think about what the future will hold, read up on what other people came up with and ensure your interfaces reflect that.
Basically you now have a working piece of software that works with a single database, mail provider, etc. So far so good.
Next, re-engineer the factory. Basically you want to use the configuration settings to pick the right provider (the right DLL that implements your interface) for your data. A simple switch can suffice in most cases.
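For instance, the DataAccessFactory sketched above could read a provider name from app.config and switch on it, under the assumption that each provider DLL contains a class implementing IDataAccess (the key and class names are made up):

using System;
using System.Configuration;   // reference System.Configuration.dll

public static class DataAccessFactory
{
    public static IDataAccess Create()
    {
        // e.g. <appSettings><add key="DatabaseProvider" value="SqlServer" /></appSettings>
        var provider = ConfigurationManager.AppSettings["DatabaseProvider"];

        switch (provider)
        {
            case "SqlServer":  return new SqlServerDataAccess();   // lives in the SQL Server DLL
            case "Oracle":     return new OracleDataAccess();      // lives in the Oracle DLL
            case "PostgreSql": return new PostgreSqlDataAccess();  // lives in the PostgreSQL DLL
            default:
                throw new ConfigurationErrorsException(
                    "Unknown DatabaseProvider: " + provider);
        }
    }
}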
At this point I usually make it a habit to make a ton of unit tests for the interfaces.
The last step is to create DLLs for the different database providers. One of these will be loaded at run-time in your application.
I prefer simple LINQ to SQL (I also use the library from LinqConnect) because it's pretty fast. I simply start by copy-pasting the other database provider, and then re-engineer it until it works. Personally I don't believe in a magic 'support all SQL databases' solution anymore: in my experience, some databases will handle certain queries much, much faster than other databases, which means that you will probably end up with some custom code for each database anyway.
This is also the point where your unit tests are really going to pay off. Basically, you can just start with copy-paste and give it a test. If you're lucky, everything will run right away with decent performance... if not, you know where to start.
Build to last
Build things to last. Things will change:
Think about updates and test them. Prefer automatic tests.
You don't want to tinker with your Factory every day. Use Reflection, Expressions, Code generation or whatever your poison is to save yourself the trouble of changing code.
Spend time writing tests. Make sure you cover the bulk. I cannot stress this enough; under pressure people usually 'save' time by not writing tests. You'll notice that this time that you 'save' will double back on you as support when you've gone live. Every month.
What about Entity Framework
I've seen a lot of my customers get into trouble with performance because of this. In the many times that I've tested it, I had the same experience. I noticed customers hacking around EF for a lot of queries to get a bit of decent performance.
To be fair, I gave up a few years ago, and I know they have made considerable performance improvements. Still, I would test it (especially with complex queries) before considering it.
If I would use EF, I'd implement all EF stuff in a 'database common DLL', and then derive classes from that. As I said, not all databases are the same with queries - and you might want to implement some hacks that are necessary to get decent performance. Your tests will tell.
Bonuses
Programming through interfaces has a lot of other advantages in combination with proxies. To name a few, you can easily create log sinks, caching, statistics, WCF, etc. by simply implementing the same interface. And if you end up hating your current OR mapper some day, you can just throw it away without touching a single line of your app.
I believe Microsoft's Data Access Components would be suitable for you.
https://en.wikipedia.org/wiki/Microsoft_Data_Access_Components
How about writing microservices and connecting them by using a REST API?
You (and maybe your team) could provide a core application which handles the logic and the UI. This is still based on your current technology. But instead of adding some kind of database connection directly, you could provide multiple types of microservices (based on ASP.NET or ASP.NET Core), each exposing a REST API. You get your data from each database through such a microservice. So you would develop one microservice for, say, MySQL and another one for SQL Server, and when a new customer comes up with Oracle, you write a new small microservice which implements the expected API.
More info (based on .net core) is here: https://docs.asp.net/en/latest/tutorials/first-web-api.html
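As a rough illustration (the controller, route, and ICustomerStore interface are invented for this sketch), each per-database microservice could expose the same minimal REST contract:

using Microsoft.AspNetCore.Mvc;

// Hypothetical per-database microservice endpoint (ASP.NET Core);
// an identical contract would be implemented by the MySQL, SQL Server and Oracle services.
[Route("api/[controller]")]
public class CustomersController : Controller
{
    private readonly ICustomerStore _store;   // implemented against this service's database

    public CustomersController(ICustomerStore store)
    {
        _store = store;
    }

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        var customer = _store.Find(id);
        return customer == null ? (IActionResult)NotFound() : Ok(customer);
    }
}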
I think which technology you decide to use is a team discussion. But today I would recommend writing microservices. It also makes attaching a new app, e.g. for a mobile device, much easier :)
Yes, it's possible.
Right now I am working with the same scenario, where all my logic-related data (you could call it metadata) resides in one DB and the data resides in another DB.
What you need to do is keep the connection-related parameters in two different files (call them prop files). Then you need a concrete connection class which takes its parameters from these prop files. Wherever you need to create a connection, you just supply the right prop file and it will create the DB connection as desired.
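A minimal sketch of that idea, assuming the two connection strings live in standard .NET configuration rather than custom prop files (the names "MetadataDb" and "DataDb" are made up):

using System.Configuration;
using System.Data;
using System.Data.Common;

// Illustrative only: reads a named connection string ("MetadataDb" or "DataDb") and uses the
// providerName attribute to create the right ADO.NET connection, so calling code never
// hard-codes which database or provider it talks to.
public static class ConnectionFactory
{
    public static IDbConnection Open(string name)
    {
        ConnectionStringSettings settings = ConfigurationManager.ConnectionStrings[name];
        if (settings == null)
            throw new ConfigurationErrorsException("No connection string named " + name);

        DbProviderFactory factory = DbProviderFactories.GetFactory(settings.ProviderName);
        DbConnection connection = factory.CreateConnection();
        connection.ConnectionString = settings.ConnectionString;
        connection.Open();
        return connection;
    }
}

// Usage: one call per database.
// var metadataDb = ConnectionFactory.Open("MetadataDb");
// var dataDb     = ConnectionFactory.Open("DataDb");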
Using this:
https://genericunitofworkandrepositories.codeplex.com/
and the following set of blog posts:
http://blog.longle.net/2013/05/11/genericizing-the-unit-of-work-pattern-repository-pattern-with-entity-framework-in-mvc/
We are trying to use those repositories with Breeze since it handles client side javascript and OData very well.
I was wondering how we could use these with Breeze to handle overriding the BeforeSaveEntity correctly.
We have quite a bit of business logic that needs to happen during the save (modifying properties like ModifiedBy, ModifiedTime, CreatedBy, etc.), but when we change those they aren't updated by Breeze, so we have to re-query after the save (we've tried manually mapping the changes back, but it requires us to duplicate all of the business logic).
Our second option was to check the type of each entity and then request the correct repository for it, handle the save internally, and then do a new get request on the client to get the updated information. This is chatty, though, so we were hoping there is a better way. What would be the correct way of updating these objects while bypassing Breeze's save, without returning an error or having to re-get the data afterward?
Any examples of Breeze with Business Logic during the save would be very helpful, especially if it happens in a service, repository or something else besides directly in the BeforeSaveEntity method.
This is many questions rolled into one and each is a big topic. The best I can do is point you in some directions.
Before I get rolling, let me explain why you're not seeing the effects of setting properties like ModifiedBy, ModifiedTime, CreatedBy, etc. The EFContextProvider does not update every property of the modified entities but rather only those properties mentioned in the EntityInfo.OriginalValuesMap, a dictionary of the property names and original values of just the properties that have changed. If you want to save a property that is only set on the server, just add it to the original values map:
var map = entityInfo.OriginalValuesMap; // the EntityInfo passed to BeforeSaveEntity
map["ModifiedBy"] = null;   // the original value does not matter
map["ModifiedTime"] = null;
Now Breeze knows to save these properties as well and their new values will be returned to the client.
Let's return to the bigger picture.
Breeze is first and foremost a client-side JavaScript library. You can do pretty much whatever you want on the server side and make Breeze happy about it, as long as your server speaks HTTP and JSON.
Writing a server that provides all the capabilities you need is not trivial no matter what technology you favor. The authors of Breeze offer some .NET components out of the box to make your job easier, especially when you choose the Web API, EF and SQL Server stacks.
Our .NET demos typically throw everything into one web application. That's not how we roll in practice. In real life we would never instantiate a Breeze EFContextProvider in our Web API controller. That controller (or multiple controllers) would delegate to an external class that is responsible for business logic and data access, perhaps a repository or unit-of-work (UoW) class.
Repository pattern with Breeze .NET components
We tend to create separate projects for the model (POCOs usually), data access (ORM), and web (Web API plus client assets). You'll see this kind of separation in the DocCode sample and also in John Papa's Code Camper sample, the companion to his Pluralsight course "Building Apps with Angular and Breeze".
Those samples also demonstrate an implementation of the repository pattern that blends the responsibilities of multiple repositories and UoW in one class. This makes sense for the small models in these samples. There is nothing to stop you from refactoring the repositories into separate classes.
We keep our repository class in the same project as the EF data access material as we see no particular value in creating yet another project for this small purpose. It's not difficult to refactor into a separate project if you're determined to do so.
Both the Breeze and Code Camper samples concentrate on Breeze client development. They are thin on server-side logic. That said, you will find valuable clues for applying custom business logic in the BeforeSaveEntities extension point in the "NorthwindRepository.cs" and "NorthwindEntitySaveGuard.cs" files in the DocCode sample. You'll see how to restrict saves to certain types and certain records of those types based on the user who is making the request.
The logic can be overwhelming if you try to channel all save-changes requests through a single endpoint. You don't have to do that. You could have several save endpoints, each dedicated to a particular business operation that is limited to inserting/updating/deleting entities of just a few types in a highly specific manner. You can be as granular as you please. See "Named Saves" in the "Saving Entities" topic.
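For illustration, the server side of a named save is just another Web API action that accepts a save bundle and installs its own validation before delegating to the context provider. The action and delegate names below are invented, though the DocCode sample contains similar ones; _contextProvider is assumed to be the controller's EFContextProvider.

// A dedicated save endpoint for one business operation.
// The Breeze client targets it via the saveOptions resourceName ("Named Save").
[HttpPost]
public SaveResult SaveOrderAndDetails(JObject saveBundle)
{
    // Restrict and validate this particular change-set before EF persists it.
    _contextProvider.BeforeSaveEntitiesDelegate = ValidateOrderAndDetails;
    return _contextProvider.SaveChanges(saveBundle);
}

private Dictionary<Type, List<EntityInfo>> ValidateOrderAndDetails(
    Dictionary<Type, List<EntityInfo>> saveMap)
{
    // e.g. reject the save if it touches any type other than Order/OrderDetail
    return saveMap;
}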
Have it your way
Now there are a gazillion ways to implement repository and UoW patterns.
You could go the way set forth by the post you cited. In that case, you don't need the Breeze .NET components. It's pretty trivial to wire up your Web API query methods (IQueryable or not) to repository methods that return IQueryable (or just objects). The Web API doesn't have to know if you've got a Breeze EFContextProvider behind the scenes or something completely different.
Handling the Breeze client's SaveChanges request is a bit trickier. Maybe you can derive from ContextProvider or EFContextProvider; maybe not. Study the "ContextProvider.cs" documentation and the source code, especially the SaveChanges method, and you'll see what you need to do to keep Breeze client happy and interface with however you want to handle change-set saves with your UoW.
Assuming you change nothing on the client-side (that's an assumption, not a given ... you can change the save protocol if you want), your SaveChanges needs to do only two things:
Interpret the "saveBundle" from the client.
Return something structurally similar to the SaveResult
The saveBundle is a JSON package that you probably don't want to unpack yourself. Fortunately, you can derive a class from ContextProvider that you use simply to turn the saveBundle into a "SaveMap", a dictionary of EntityInfo objects keyed by entity type, which is pretty much what anyone would want to work with when analyzing a change-set for validation and save.
The following might do the trick:
using System;
using System.Collections.Generic;
using System.Data;
using Breeze.ContextProvider;
using Newtonsoft.Json.Linq;

public class SaveBundleToSaveMap : ContextProvider
{
    // Never create a public instance
    private SaveBundleToSaveMap() { }

    /// <summary>
    /// Convert a saveBundle into a SaveMap
    /// </summary>
    public static Dictionary<Type, List<EntityInfo>> Convert(JObject saveBundle)
    {
        var dynSaveBundle = (dynamic) saveBundle;
        var entitiesArray = (JArray) dynSaveBundle.entities;
        var provider = new SaveBundleToSaveMap();
        var saveWorkState = new SaveWorkState(provider, entitiesArray);
        return saveWorkState.SaveMap;
    }

    // override abstract members but DO NOT USE ANY OF THEM
}
Then it's up to you how you make use of the "SaveMap" and dispatch to your business logic.
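For example, a save endpoint that bypasses the EFContextProvider's own save might look roughly like this; _unitOfWork and its Save method are placeholders for whatever business-logic class you dispatch to:

[HttpPost]
public SaveResult SaveChanges(JObject saveBundle)
{
    // Turn the raw JSON bundle into something workable.
    Dictionary<Type, List<EntityInfo>> saveMap = SaveBundleToSaveMap.Convert(saveBundle);

    // Hand the change-set to your own unit of work / business logic.
    return _unitOfWork.Save(saveMap);   // must return a SaveResult-shaped object
}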
The SaveResult is a simple structure:
public class SaveResult {
    public List<Object> Entities;      // entities of each type you serialize to the client
    public List<KeyMapping> KeyMappings;
    public List<Object> Errors;
}

public class KeyMapping {
    public String EntityTypeName;
    public Object TempValue;
    public Object RealValue;
}
Use these classes as is or construct your own. Breeze client cares about the JSON, not these types.
I am creating a WPF application that uses a WCF service to interact with the data source. I use DI for both the client and the WCF server to ensure decoupled code, but I am unsure how to handle the data transfer from backend to user interface.
To keep the layers separate data is currently transferred from the database to the UI through several mapping steps. On the server side data entities are mapped to domain objects which again are mapped to service data contracts. On the client side WCF proxy classes are mapped to viewmodels.
Some developers at work claim that this "copying" of data between seemingly identical classes creates a maintenance problem, because so many classes must be updated when a change is introduced. Instead, they say, you should use shared classes across the layers, since we control both the client application and the WCF service. I too worry about the amount of work involved and see a potential performance penalty, but on the other hand, the way I see it, using a shared class across the layers/abstractions might create tight coupling. What is the best approach?
Using DTOs as business objects is not the best decision you can make.
From my experience I can say that when your objects are identical across all the layers, there is usually a problem with the architecture somewhere.
In a real business scenario it is quite unlikely that the business logic on the server and the business logic on the client have the same context and operate with the same objects. And if they have exactly the same structure as the database... hmmm... it sounds like a data-driven application.
But if it is a data-driven application, where clients access some data, modify it, and save it back, then you probably don't really need this complicated layering. It sounds simple, so let's keep it simple: why not just create a WCF Data Services context on top of your database and let it do all the dirty work for you, so that you just access your data over WCF without even thinking about DTOs, mappings, etc.?
If it is not a data-driven application, then you probably have some complicated business logic on your server side, and this business logic usually operates with objects that make sense only in its context. It just doesn't make sense to push these objects all the way through to the UI.
Instead, the UI will probably send commands to the server in order to ask the system to do something. For example, it will send a "DisableAccount(id=123)" command instead of loading AccountDTO, changing its IsEnabled flag to false and pushing it back.
If there is business logic, it will probably be triggered by such a command from the client, which does not need to know how to disable accounts or how to do other things. It just knows it can command the system to do something.
So in this scenario the client (the UI) just does not need the same objects the server has. It may need some data to display to the user, but that data will definitely be in a format that makes sense for the client's view, not for the business logic. It will probably contain some denormalized data, combined somehow.
Say, a User for the UI is not a DTO mapped to the Users table. It is another DTO, containing user data and statistics from different tables, processed somehow. The client does not need to know the internal structure of the server's data storage, so there is no need to expose it. Get the relevant data and send the appropriate commands; that's it.
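To make the contrast concrete, a command and a view-shaped read DTO might look roughly like this (all names invented):

using System;

// A command carries intent, not entity state.
public class DisableAccountCommand
{
    public int AccountId { get; set; }
    public string Reason { get; set; }
}

// A read DTO is shaped for the screen, possibly denormalized across several tables.
public class UserSummaryDto
{
    public string DisplayName { get; set; }
    public int OpenOrderCount { get; set; }      // aggregated elsewhere on the server
    public DateTime LastLoginUtc { get; set; }
}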
Having said all this, I should underline that it is NOT a binary choice you make. For simple features you may use a simple approach; for the features where having business logic makes sense, you may do the other thing.
You do not have to choose one approach for everything. So you do not have to always create three similar objects just because it is "The Way", or always pass entities all the way through to the UI.
But what you will have to do is to clearly separate contexts and to define where which approach is going to be used.
In 80% of cases you will probably end up with something simple (like WCF Data Services) where you don't need to do anything, and that is fine, since in a lot of operations you just want to change the data.
But in the other 20% (the "core" of your application), where the real business logic lives, you may want this kind of separation not only for objects but also for responsibilities between your layers.
All that mapping indeed creates a maintenance burden. Whether or not it's warranted depends on what you are building, and how complex the business logic is.
However, it's very important to realize that once you start sharing data structures across layers and tiers, the architecture is no longer decoupled. If you do that, you'd essentially be building a monolithic application. Don't get me wrong: there's nothing wrong with building a monolithic application if all you're doing is a glorified CRUD application, but it's essential to make that decision explicitly.
There's at least these alternatives:
Maintain strict layering. The mapping cost remains, but the code is decoupled.
Build a monolithic application. Collapse everything you can collapse. Keep it as simple as possible. It's going to be tightly coupled, but it just may become so simple that it doesn't matter.
Do something radically different, like building a CQRS application or a SOA mashup.
Personally, I prefer the third option these days.
I see nothing sacred about layers. Having layer-specific versions of each and every entity in the model would increase the number of classes a great deal. It's unnecessary, in my view. It violates the DRY principle: why keep repeating yourself?
What does layer purity buy you?
So I'd say the best approach is to pass those model entities around without fear.
I'm in the process of doing the analysis of a potentially big web site, and I have a number of questions.
The web site is going to be written in ASP.NET MVC 3 with razor view engine. In most examples I find that controllers directly use the underlying database (using domain/repository pattern), so there's no WCF service in between. My first question is: is this architecture suitable for a big site with a lot of traffic? It's always possible to load balance the site, but is this a good approach? Or should I make the site use WCF services that interact with the data?
Question 2: I would like to adopt CQS principles, which means that I want to separate the querying part from the commanding part. So this means that the querying part will have a different model (optimized for the views) than the commanding part (optimized for business intent and containing only the properties that are needed for completing the command), but both act on the same database. Do you think this is a good idea?
Thanks for the advice!
For scalability, it helps to separate back-end code from front-end code. So if you put UI code in the MVC project and as much processing code as possible in one or more separate WCF and business logic projects, not only will your code be clearer but you will also be able to scale the layers/tiers independently of each other.
CQRS is great for high-traffic websites. I think CQRS, properly combined with a good base library for DDD, is good even for low-traffic sites because it makes business logic easier to implement. The separation of data into a read-optimized model and a write-optimized model also makes sense from an architectural point of view, because it makes changes easier to do (maybe a bit more work, but it's definitely easier to make changes without breaking something).
However, if both act on the same database, I would make sure that the read model consists entirely of Views so that you can modify entities as needed without breaking the Read code. This has the advantage that you'll need to write less code, but your write model will still consist of a full-fledged entity model rather than just an event store.
EDIT to answer your extra questions:
What I like to do is use a WCF Data Service for the Read model. This technology (specific to .NET 4.0) builds an OData (= REST + Atom with LINQ support) web service on top of a data model, such as an Entity Framework EDMX.
So, I build a Read model in SQL Server (Views), then build an Entity Framework model from that, then build a WCF Data Service on top of that, in read-only mode. That sounds a lot more complicated than it is, it only takes a few minutes. You don't need to create yet another model, just expose the EDMX as read-only. See also http://msdn.microsoft.com/en-us/library/cc668794.aspx.
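Making the service read-only is essentially one access rule in the data service's InitializeService method. Here is a minimal sketch, where ReadModelEntities stands for the hypothetical EF context generated from the views:

using System.Data.Services;
using System.Data.Services.Common;

// ReadModelEntities is the EF context generated from the read-only database views.
public class ReadService : DataService<ReadModelEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only: every entity set can be queried, nothing can be written.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}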
The Command service is then just a one-way regular WCF service, the Read service is the WCF Data Service, and your MVC application consumes them both.
ADO.NET Data Services is the next generation of the data access layer within applications. I have seen a lot of examples using it directly from a UI layer such as Silverlight or Ajax to get data. This is almost like having a two-tiered system, with the business layer completely removed. Should the DAL be accessed by the business layer, and not directly from the UI?
ADO.NET Data Services is one more tool to be evaluated in order to move data. .NET RIA Services is another one, and a much better one, I would say.

I see ADO.NET Data Services as a low-level service to be used by some higher-level framework. I would not let my UI talk directly to it. The main problem I see with ADO.NET Data Services has more to do with security than with anything else.

For simple/quick tasks, in an intranet, and if you are not too picky about your design, it can be useful (IMO). It can be quite handy when you need to quickly expose data from an existing database. I say handy, but it would not be my first choice, as I avoid "quick and dirty" solutions as much as I can. Those solutions are like ghosts; they always come back to haunt you.
ADO.NET Data Services is the next generation of the data access layer within applications
I have no idea where you got that from! Perhaps you're confusing ADO.NET Data Services with ADO.NET Entity Framework?
One shouldn't assume that everything Microsoft produces is of value to every developer. In my opinion, ADO.NET Data Services is a quick way to create CRUD services, which may have a few other operations defined on the entities, but those operations are all stored procedures. If all you need is a database-oriented service, then this may be what you want. Certainly, there's relatively little reason to do any coding for a service like this, except in the database.
But that doesn't mean that ADO.NET Data Services "has a place in the overall design" of every project. It's something that fills a need of enough customers that Microsoft thought it worthwhile to spend money developing and maintaining it.
For that matter, they also thought ASP.NET MVC was a good idea...
:-)
In my opinion, the other answers underestimate the importance of ADO.NET Data Services. Though using it directly in your application brings some similarity to a two-tiered system, other Microsoft products such as .NET RIA Services and Windows Azure Storage Services are based on it. Contrary to the phrase in one of the other answers, "For simple/quick tasks, in an intranet, and if you are not too picky about your design, it can be useful", it may be useful for public websites, including websites in ASP.NET MVC.
Dino Esposito describes the driving force for ADO.NET Data Services in his blog:
http://weblogs.asp.net/despos/archive/2008/04/21/the-quot-driving-force-quot-pattern-part-1-of-n.aspx
"ADO.NET Data Services (aka, Astoria)
Driving force: the need of building richly interactive Web systems.
What's that in abstract: New set of tools for building a middle-tier or, better yet, the service layer on top of a middle-tier in any sort of application, including enterprise class applications.
What's that in concrete: provides you with URLs to invoke from hyperlinks to bring data down to the client. Better for scenarios where a client needs a direct|partially filtered access to data. Not ideal for querying data from IE, but ideal for building a new generation of Web controls that breath AJAX. And just that."