Dapper and n-tier application (navigation properties) - c#

I'm struggling to get my head around how I should implement Dapper in my application. I have an n-tier MVC application and some experience with EF. Even though I think EF is good, I haven't gotten past the learning curve to the point where it flows easily and I'm not fighting performance. In the new project we decided to give Dapper a go, mostly to get control over the SQL and hopefully get good performance.
Background
I created a layered application (core) with these layers:
Web - mvc
Service - Business layer to handle the business logic
Data - datalayer to access the ms sql server
I went ahead and started implementing a UnitOfWork and generic Repositories in the datalayer.
A normal structure in the Database would be
Order
ref to User
ref to Address
OrderLine
ref to Product
And in many cases I want to retrieve multiple orders with all lines and products.
So what I did was put navigation properties on the entity models, as you would in EF, and populate them with Dapper using either a multiquery or by splitting the result into the different entities and mapping them onto the graph.
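For illustration, the multiquery variant looked roughly like this (a sketch only; the table and property names are simplified):

// Rough sketch of the multiquery approach using Dapper's QueryMultiple.
const string sql = @"
SELECT * FROM [Order] WHERE OrderId = @orderId;
SELECT * FROM OrderLine WHERE OrderId = @orderId;";

using (var multi = connection.QueryMultiple(sql, new { orderId }))
{
    var order = multi.ReadSingle<Order>();
    order.OrderLines = multi.Read<OrderLine>().ToList();
    return order;
}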
The problem
The problem I run into is when I do an insert. I have a SQL extension that maps the properties to table columns, but by default the navigation properties get mapped as well. I realize that I can decorate them with attributes and check for those in the mapping (a rough sketch of that idea is below). But as I google, I'm becoming aware that maybe I should drop the UnitOfWork pattern, and the repositories too, making the data layer "super thin" and just exposing the connection.
Then the service layer would call Dapper with the correct SQL, which is kind of what I do today, but through repositories.
I would also drop the navigation properties, fetch each entity on its own, and combine them in the ViewModel.
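The attribute idea I mentioned above would look roughly like this (IgnoreOnInsert is a made-up attribute name; the real SQL extension would read it via reflection when building the column list):

[AttributeUsage(AttributeTargets.Property)]
public class IgnoreOnInsertAttribute : Attribute { }

public class Order
{
    public int OrderId { get; set; }
    public int AddressId { get; set; }

    [IgnoreOnInsert] // navigation property, not a column
    public List<OrderLine> OrderLines { get; set; }
}

// Inside the SQL extension: skip decorated properties when generating the INSERT.
var columns = typeof(Order).GetProperties()
    .Where(p => !p.IsDefined(typeof(IgnoreOnInsertAttribute), false))
    .Select(p => p.Name)
    .ToList();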
My problem with this: if we take the order table above, I would have to do something like the following to get a full list (normally paged; I also removed the User/Address for brevity):
var listModel = new OrderListViewModel();
var orders = orderService.GetAll();
foreach (var order in orders)
{
    var orderModel = new OrderViewModel(); // also map fields
    var orderLines = orderService.GetOrderLinesForOrder(order.OrderId);
    foreach (var orderLine in orderLines)
    {
        var orderLineModel = new OrderLineViewModel(); // also map fields
        var product = productService.GetProduct(orderLine.ProductId);
        orderLineModel.Product = new ProductViewModel(); // also map fields from product
        orderModel.OrderLines.Add(orderLineModel);
    }
    listModel.Orders.Add(orderModel);
}
This will generate a LOT of queries (almost like EF lazy loading). So instead I could do a mapping thing:
var orders = orderService.GetAll();
var orderLines = orderService.GetOrderLinesForOrders(orders.Select(o => o.OrderId).ToArray()); // get all order lines for all orders
var products = productService.GetProductsForOrderLines(orderLines.Select(l => l.OrderLineId).ToArray()); // get all products for all order lines
foreach (var order in orders)
{
    var orderModel = new OrderViewModel(); // also map fields
    var linesForOrder = orderLines.Where(l => l.OrderId == order.OrderId);
    foreach (var orderLine in linesForOrder)
    {
        var orderLineModel = new OrderLineViewModel(); // also map fields
        var product = products.First(p => p.ProductId == orderLine.ProductId);
        orderLineModel.Product = new ProductViewModel(); // also map fields from product
        orderModel.OrderLines.Add(orderLineModel);
    }
    listModel.Orders.Add(orderModel);
}
This will generate a lot fewer SQL queries and is close to optimal in performance, I think. I know there can be a problem with more than 2100 parameters (the SQL Server limit), but I don't think that will be an issue in my case.
The problem is that many of our tables have different statuses and many relations to other tables, so I would have to write a lot of these queries all the time.
When I first did the repository with navigation properties, I would do it like:
repo.Get<Order, OrderLine, Product, Order>(sqlThatWouldJoinAllTables);
// split and map the structure into an Order entity and just return that
That way I could just call orderService.GetAll() and retrieve a graph of orders, order lines and products.
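Behind that repository call it was just Dapper's multi-mapping, roughly like this (a sketch; the join and split columns are simplified):

const string sql = @"
SELECT o.*, ol.*, p.*
FROM [Order] o
JOIN OrderLine ol ON ol.OrderId = o.OrderId
JOIN Product p ON p.ProductId = ol.ProductId";

var lookup = new Dictionary<int, Order>();
connection.Query<Order, OrderLine, Product, Order>(sql,
    (order, line, product) =>
    {
        if (!lookup.TryGetValue(order.OrderId, out var existing))
        {
            existing = order;
            existing.OrderLines = new List<OrderLine>();
            lookup.Add(existing.OrderId, existing);
        }
        line.Product = product;
        existing.OrderLines.Add(line);
        return existing;
    },
    splitOn: "OrderLineId,ProductId");

var orders = lookup.Values.ToList();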
I don't know which of the solutions is "best practice". I've tried to find a good open source project using layers and dapper to get some real world usage, but without success.
The approach of removing navigation properties also removes some of the purpose of the service layer, since I'm in a way moving some of the business logic into the MVC controller.
I can't find a good practice for how to go forward, so please advise.

If the RDBMS you're using supports JSON, I would suggest wrapping everything you need to insert into JSON and sending it to a stored procedure with just one call. The same technique can be used to return a graph of related objects with just one call. The Unit of Work, really a transaction, is taken care of in the stored procedure itself, which IMHO is also the right place to deal with transactions that operate on data.
This helps enormously to reduce round-trips at the expense of more CPU used on the database. That is usually not a problem unless you expect a really huge load (more than several thousand concurrent queries per second).
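As a rough sketch of the idea (the procedure name and parameter below are placeholders, not from a real schema), the client side reduces to one serialization and one call; the stored procedure then uses OPENJSON (or the equivalent) to shred the graph inside a single transaction:

// Sketch only: "dbo.InsertOrder" and @OrderJson are assumed names.
var json = JsonConvert.SerializeObject(order); // the order object includes its lines
using (var connection = new SqlConnection(connectionString))
{
    connection.Execute(
        "dbo.InsertOrder",
        new { OrderJson = json },
        commandType: CommandType.StoredProcedure);
}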
I have written extensively about this here:
https://medium.com/dapper-net/one-to-many-mapping-with-dapper-55ae6a65cfd4
and more specifically the "Complex Custom Handling" sample shows exactly what I mentioned.

Related

Live Transfer of data from one provider to another in Entity Framework

I apologise if this has been asked already, I am struggling greatly with the terminology of what I am trying to find out about as it conflicts with functionality in Entity Framework.
What I am trying to do:
I would like to create an application that, on setup, gives the user the option to use one database as a "trial"/"startup" database, i.e. a non-production database. This would allow a user to trial the application, but it would not have backups etc.; in no way would this be a "production" database. This could be SQLite, for example.
When the user is then ready, they could click "convert to production" (or similar) and give it the target of the new database machine/database. This would be considered the "production" environment. This could be something like MySQL, SQL Server or... whatever else EF connects to these days.
The question:
Does EF support this type of migration/data transfer live? Would it need another app where you could configure the EF source and EF destination for it to then run through the process of conversion/seeding/population of the data source to another data source?
Why I have asked here:
I have tried to search for things around this topic, but transferring/migration brings up subjects that are totally unrelated, so any help would be much appreciated.
From what you describe I don't think there is anything out of the box to support that. You can map a DbContext to either database, then it would be a matter of fetching and detaching entities from the evaluation DbContext and attaching them to the production one.
For a relatively simple schema / object graph this would be fairly straight-forward to implement.
ICollection<Customer> customers = new List<Customer>();
using (var context = new AppDbContext(evalConnectionString))
{
    customers = context.Customers.AsNoTracking().ToList();
}
using (var context = new AppDbContext(productionConnectionString))
{   // Assuming an empty database...
    context.Customers.AddRange(customers);
    context.SaveChanges();
}
Though for more complex models this could take some work, especially when dealing with things like existing lookups/references. Where you want to move objects that might share the same reference to another object you would need to query the destination DbContext for existing relatives and substitute them before saving the "parent" entity.
ICollection<Order> orders = new List<Order>();
using (var context = new AppDbContext(evalConnectionString))
{
    orders = context.Orders
        .Include(x => x.Customer)
        .AsNoTracking()
        .ToList();
}
using (var context = new AppDbContext(productionConnectionString))
{
    var customerIds = orders.Select(x => x.Customer.CustomerId)
        .Distinct().ToList();
    var existingCustomers = context.Customers
        .Where(x => customerIds.Contains(x.CustomerId))
        .ToList();
    foreach (var order in orders)
    {   // Assuming all customers were loaded
        var existingCustomer = existingCustomers.SingleOrDefault(x => x.CustomerId == order.Customer.CustomerId);
        if (existingCustomer != null)
            order.Customer = existingCustomer;
        else
            existingCustomers.Add(order.Customer);
        context.Orders.Add(order);
    }
    context.SaveChanges();
}
This is a very simple example to outline how to handle scenarios where you may be inserting data with references that may or may not exist in the target DbContext. If we are copying across Orders and want to deal with their respective Customers, we first need to check whether a tracked customer reference already exists and use that reference, to avoid a duplicate row being inserted or an exception being thrown.
Normally, loading the orders and related references from one DbContext would ensure that multiple orders referencing the same Customer entity all share the same entity reference. However, because we load detached entities (via AsNoTracking()) so we can associate them with the new DbContext, detached references to the same record will not be the same object reference, so we need to treat these with care.
For example where there are 2 orders for the same customer:
var ordersA = context.Orders.Include(x => x.Customer).ToList();
Assert.AreSame(ordersA[0].Customer, ordersA[1].Customer); // Passes
var ordersB = context.Orders.Include(x => x.Customer).AsNoTracking().ToList();
Assert.AreSame(ordersB[0].Customer, ordersB[1].Customer); // Fails
Even though in the second example both orders are for the same customer, each will have a Customer reference with the same ID but a different object reference, because the DbContext is not tracking the references used. This is one of the several "gotchas" with detached entities and efforts to boost performance etc. Using tracked references isn't ideal either, since those entities will still think they are associated with another DbContext. We can detach them, but that means diving through the object graph and detaching all references. (Doable, but messy compared to just loading them detached.)
Where it can also get complicated is when possibly migrating data in batches (disposing of a DbContext regularly to avoid performance pitfalls for larger data volumes) or synchronizing data over time. It is generally advisable to first check the destination DbContext for matching records and use those to avoid duplicate data being inserted. (or throwing exceptions)
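A rough sketch of the batching idea (assuming the entities can be paged by key; batch size and names are placeholders):

const int batchSize = 1000;
int skip = 0;
while (true)
{
    List<Customer> batch;
    using (var source = new AppDbContext(evalConnectionString))
    {
        // Page the source data by key so each batch is bounded in size.
        batch = source.Customers.AsNoTracking()
            .OrderBy(x => x.CustomerId)
            .Skip(skip).Take(batchSize)
            .ToList();
    }
    if (batch.Count == 0) break;

    using (var target = new AppDbContext(productionConnectionString))
    {
        // A fresh target context per batch keeps the change tracker small.
        target.Customers.AddRange(batch);
        target.SaveChanges();
    }
    skip += batchSize;
}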
So for simple data models this is fairly straightforward. For more complex ones, where there is more data to bring across and more relationships between that data, it's more complicated. For those systems I'd probably look at generating a database-to-database migration, such as creating INSERT statements for the desired target DB from the data in the source database. There it is just a matter of inserting the data in relational order to comply with the data constraints. (Either using a tool or rolling your own script generation.)

Is there a way to automatically create CRUD for EF Model (DB First currently)

I am creating a WPF app and I have an existing DB that I would like to use and NOT recreate. I will if I have to, but I would rather not. The DB is SQLite, and when I add it to my data layer and create a DataModel based on the DB, I get the model and the DbContext; however, there are no methods created for CRUD, or for instance .ToList() so I can return all of the items in a table.
Do I need to create all of these manually or is there a way to do it like the way that MVC can scaffold?
I am using VS 2017, WPF, EF6 and SQLite installed with NuGet.
To answer the question in the title.
No.
There is no click-a-button method of scaffolding out UI like you get with MVC.
If you just deal with a table at a time then you could build a generic repository that returns a List for a given table. That won't save you much coding, but you could do it.
If you made that return an IQueryable rather than just a List, then you could "chain" such a query. LINQ queries aren't turned into SQL until you force iteration, so you can base one query on another, adding criteria, choosing what to select, etc., for flexibility.
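For example, a bare-bones version of that (a sketch only, EF6 style; the Customer entity and db instance are placeholders) could look like:

public class GenericRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public GenericRepository(DbContext context)
    {
        _context = context;
    }

    // Returning IQueryable lets callers chain Where/OrderBy/Select;
    // nothing is translated to SQL until the query is enumerated.
    public IQueryable<TEntity> Query()
    {
        return _context.Set<TEntity>();
    }
}

// Usage: the database is only hit when ToList() forces enumeration.
var active = new GenericRepository<Customer>(db)
    .Query()
    .Where(c => c.IsActive)
    .ToList();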
In the body of your post you ask about methods to read and write data. This seems to be almost totally unrelated from the other question because it's data access rather than UI.
"there are no methods created for CRUD or for instance .ToList() so I can return all of the items on the table."
There are methods available in the form of LINQ extension methods.
ToList() is one of these, though it is usual to use async/await and ToListAsync.
Where and Select are other extension methods.
You would be writing any model layer that exposed the results of those though.
I'm not clear whether you are just unaware of LINQ or what, but here's an example query:
var customers = await (from c in db.Customers
                       orderby c.CustomerName
                       select c)
                      .Include(x => x.Orders) //.Include("Orders") is the alternate syntax
                      .ToListAsync();
EF uses "lazy loading" of related entities; that Include makes it read the Orders for each customer up front.
Entity Framework is an Object Relational Mapper, which means it will map your C# objects to tables.
Whenever you create a model from the DB, it will create a context class which inherits from DbContext. In this class you will find each table exposed as a DbSet<TableName> TableName { get; set; } property. Basically, that set represents the rows of the table, and the operations performed on it are applied to the DB when SaveChanges is called.
Example for CRUD:
public DbSet<Student> Students { get; set; }

// Create
using (var context = new YourDataContext())
{
    var std = new Student()
    {
        Name = "Avinash"
    };
    context.Students.Add(std);
    context.SaveChanges();
} // Saving will add a row to the Student table with the Name field set to "Avinash"

// Delete
using (var context = new YourDataContext())
{
    var currentStudent = context.Students.FirstOrDefault(x => x.Name == "Avinash");
    context.Students.Remove(currentStudent);
    context.SaveChanges();
}
Note: the changes are only reflected in the DB when SaveChanges is called.

Manipulating large quantities of data in ASP.NET MVC 5

I am currently working towards implementing a charting library with a database that contains a large amount of data. For the table I am using, the raw data is spread out across 148 columns of data, with over 1000 rows. As I have only created models for tables that contain a few columns, I am unsure how to go about implementing a model for this particular table. My usual method of creating a model and using the Entity Framework to connect it to a database doesn't seem practical, as implementing 148 properties for each column does not seem like an efficient method.
My questions are:
What would be a good method to implement this table into an MVC project so that there are read actions that allow one to pull the data from the table?
How would one structure a model so that one could read 148 columns of data from it without having to declare 148 properties?
Is the Entity Framework an efficient way of achieving this goal?
Entity Framework Database First sounds like the perfect solution for your problem.
Database First models are what they sound like: the database exists before the code does. Entity Framework will create the models as partial classes for you, based on the tables you point it at.
Additionally, exceptions won't be thrown if the table changes (as long as nothing is accessing a field that doesn't exist), which can be extremely beneficial in a lot of cases. Migrations are not necessary. Instead, all you have to do is right click on the generated model and click "Update Model from Database" and it works like magic. The whole process can be significantly faster than Code First.
Here is another tutorial to help you.
Yes, with Database First you can create the entities very quickly. Also remember that it is good practice to return only the fields you really need: your entity has 148 columns, but if your app only needs 10 fields, convert the original entity to a model or view model and use that.
One excellent tool that can help you with this is AutoMapper.
Wow, that's a lot of columns!
Given your circumstances a few thoughts come to mind:
1: If your problem is the legwork of creating that many properties, you could look at Entity Framework Power Tools. EF Power Tools can reverse engineer a database and create the necessary models/entity relation mappings for you, saving you a lot of the grunt work.
To save you pulling all of that data out in one go you can then use projections like so:
var result = DbContext.ChartingData.Select(x => new PartialDto
{
    Property1 = x.Column1,
    Property50 = x.Column50,
    Property109 = x.Column109
});
A tool like AutoMapper will allow you to do this with ease via simple, configurable mapping profiles:
var result = DbContext.ChartingData.Project().To<PartialDto>().ToList();
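The mapping configuration behind that projection might look roughly like this (a sketch in the older static-API style; newer AutoMapper versions use MapperConfiguration and ProjectTo instead):

Mapper.Initialize(cfg =>
{
    // Map only the columns the view actually needs onto the DTO.
    cfg.CreateMap<ChartingData, PartialDto>()
        .ForMember(d => d.Property1, o => o.MapFrom(s => s.Column1))
        .ForMember(d => d.Property50, o => o.MapFrom(s => s.Column50))
        .ForMember(d => d.Property109, o => o.MapFrom(s => s.Column109));
});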
2: If you have concerns with the performance of manipulating such large entities through Entity Framework then you could also look at using something like Dapper (which will happily work alongside Entity Framework).
This would save you the hassle of modelling the entities for the larger tables but allow you to easily query/update specific columns:
public class ModelledDataColumns
{
    public string Property1 { get; set; }
    public string Property50 { get; set; }
    public string Property109 { get; set; }
}

const string sqlCommand = "SELECT Property1, Property50, Property109 FROM YourTable WHERE Id = @Id";
IEnumerable<ModelledDataColumns> collection = connection.Query<ModelledDataColumns>(sqlCommand, new { Id = 5 }).ToList();
Ultimately if you're keen to go the Entity Framework route then as far as I'm aware there's no way to pull that data from the database without having to create all of the properties one way or another.

Entity Framework updating two databases

As I've mentioned in a couple other questions, I'm currently trying to replace a home-grown ORM with the Entity Framework, now that our database can support it.
Currently, we have certain objects set up such that they are mapped to a table in our internal database and a table in the database that runs our website (which is not even in the same state, let alone on the same server). So, for example:
Part p = new Part(12345);
p.Name = "Renamed part";
p.Update();
will update both the internal and the web databases simultaneously to reflect that the part with ID 12345 is now named "Renamed part". This logic only needs to go one direction (internal -> web) for the time being. We access the web database through a LINQ-to-SQL DBML and its objects.
I think my question has two parts, although it's possible I'm not asking the right question in the first place.
Is there any kind of "OnUpdate()" event/method that I can use to trigger validation of "Should this be pushed to the web?" and then do the pushing? If there isn't anything by default, is there any other way I can insert logic between .SaveChanges() and when it hits the database?
Is there any way that I can specify for each object which DBML object it maps to, and for each EF auto-generated property which property on the L2S object to map to? The names often match up, but not always so I can't rely on that. Alternatively, can I modify the L2S objects in a generic way so that they can populate themselves from the EF object?
Sounds like a job for SQL Server replication.
You don't need to inter-connect the two together as it seems you're saying with question 2.
Just have the two separate databases with their own EF or L2S models and abstract them away using repositories with domain objects.
This is the solution I ended up going with. Note that the implementation of IAdvantageWebTable is inherited from the existing base class, so nothing special needed to be done for EF-based classes, once the T4 template was modified to inherit correctly.
public partial class EntityContext
{
    public override int SaveChanges(System.Data.Objects.SaveOptions options)
    {
        var modified = this.ObjectStateManager.GetObjectStateEntries(EntityState.Modified | EntityState.Added); // Get the list of things to update
        var result = base.SaveChanges(options); // Call the base SaveChanges, which clears that list.
        using (var context = new WebDataContext()) // This is the second database context.
        {
            foreach (var obj in modified)
            {
                var table = obj.Entity as IAdvantageWebTable;
                if (table != null)
                {
                    table.UpdateWeb(context); // This is IAdvantageWebTable.UpdateWeb(), which calls all the existing logic I've had in place for years.
                }
            }
            context.SubmitChanges();
        }
        return result;
    }
}

LinqToSQL - no supported translation to SQL

I have been puzzling over a problem this morning with LinqToSQL. I'll try and summarise with the abbreviated example below to explain my point.
I have two DB tables:
table Parent
{
ParentId
}
table Child
{
ChildId
ParentId [FK]
Name
Age
}
These have LinqToSQL equivalent classes in my project, however, I have written two custom model classes that I want my UI to use, instead of using the LinqToSQL classes.
My data access from the front end goes through a service class, which in turn calls a repository class, which queries the data via linq.
At the repository level I return an IQueryable by:
return from data in _data.Children
       select new CustomModel.Child
       {
           ChildId = data.ChildId,
           ParentId = data.ParentId
       };
My service layer then adds an additional query restriction by parent before returning the list of children for that parent.
return _repository.GetAllChildren().Where(c => c.Parent.ParentId == parentId).ToList();
So at this point I get the "method has no supported translation to SQL" error when I run everything, as the c.Parent property of my custom model cannot be converted. [The c.Parent property is an object reference to the linked parent model class.]
That all makes sense so my question is this:
Can you provide the querying process with some rules that will convert a predicate expression into the correct piece of SQL to run at the database, and therefore not trigger an error?
I haven't done much work with linq up to now so forgive my lack of experience if I haven't explained this well enough.
Also, for those commenting on my choice of architecture, I have changed it to get around this problem and I am just playing around with ideas at this stage. I'd like to know if there is an answer for future reference.
Many thanks if anyone can help.
Firstly, it begs the question: why is the repository returning the UI types? If the repo returned the database types, this wouldn't be an issue. Consider refactoring so that the repo deals only with the data model, and the UI does the translation at the end (after any composition).
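In other words, something along these lines (a sketch only, reusing the names from the question):

// Repository: return the LINQ-to-SQL entities so the query stays composable.
public IQueryable<Child> GetAllChildren()
{
    return _data.Children;
}

// Service: compose on the data model, then translate to the UI model at the end.
public List<CustomModel.Child> GetChildrenForParent(int parentId)
{
    return _repository.GetAllChildren()
        .Where(c => c.ParentId == parentId)    // translated to SQL
        .Select(c => new CustomModel.Child     // projection is also translated
        {
            ChildId = c.ChildId,
            ParentId = c.ParentId
        })
        .ToList();
}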
If you mean "and have it translate down to the database" - then basically, no. Composable queries can only use types defined in the LINQ-to-SQL model, and a handful of supported standard functions. Something similar came up recently on a related question, see here.
For some scenarios (unusual logic, but using the typed defined in the LINQ-to-SQL model), you can use UDFs at the database, and write the logic yourself (in TSQL) - but only with LINQ-to-SQL (not EF).
If the volume isn't high, you can use LINQ-to-Objects for the last bit. Just add an .AsEnumerable() before the affected Where - this will do this bit of logic back in managed .NET code (but the predicate won't be used in the database query):
return _repository.GetAllChildren().AsEnumerable()
    .Where(c => c.Parent.ParentId == parentId).ToList();
