Linq-to-SQL and WCF service - data transfer objects - c#

I'm curious about best practice when developing an n-tier application with LINQ to SQL and a WCF service.
In particular, I'm interested in how to return data from two related tables to the presentation tier. Consider the following (much simplified) situation:
Database has tables:
Orders (id, OrderName)
OrderDetails (id, orderid, DetailName)
The middle tier has CRUD methods for OrderDetails, so I need a way to rebuild the entity and attach it to the context for update or insert when it comes back from the presentation layer.
In the presentation layer I need to display a list of OrderDetails with the corresponding OrderName from the parent table.
There are two approaches for the classes returned from the service:
Use a custom DTO class that encapsulates data from both tables and populate it by projection:
class OrderDetailDTO
{
    public int Id { get; set; }
    public string DetailName { get; set; }
    public string OrderName { get; set; }
}

IEnumerable<OrderDetailDTO> GetOrderDetails()
{
    var db = new LinqDataContext();
    return (from od in db.OrderDetails
            select new OrderDetailDTO
            {
                Id = od.id,
                DetailName = od.DetailName,
                OrderName = od.Order.OrderName
            }).ToList();
}
Cons: every field that matters to the presentation layer has to be assigned in both directions (when returning data, and again when creating a new entity to attach to the context when the data comes back from the presentation layer).
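For illustration, the update path for option 1 could look roughly like this (a sketch only: the Attach(entity, true) overload assumes the table has a version/timestamp column or UpdateCheck set to Never on its members, and the DTO would also have to carry the order id to rebuild the foreign key):

void UpdateOrderDetail(OrderDetailDTO dto)
{
    using (var db = new LinqDataContext())
    {
        var entity = new OrderDetail
        {
            id = dto.Id,
            DetailName = dto.DetailName
            // orderid would need to come from the DTO as well to rebuild the relation
        };
        db.OrderDetails.Attach(entity, true); // attach as modified
        db.SubmitChanges();
    }
}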
Use a customized LINQ to SQL entity partial class:
partial class OrderDetail
{
    [DataMember]
    public string OrderName
    {
        get
        {
            return this.Order.OrderName; // return value from the related entity
        }
        set { }
    }
}
IEnumerable<OrderDetail> GetOrderDetails()
{
    var db = new LinqDataContext();
    var loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<OrderDetail>(item => item.Order);
    db.LoadOptions = loadOptions;
    return (from od in db.OrderDetails
            select od).ToList();
}
Cons: the database query will include all columns from the Orders table, and LINQ to SQL will materialize the whole Order entity, although I need only one field from it.
Sorry for such a long story. Maybe I missed something? I will appreciate any suggestions.

I would say use DTOs and AutoMapper; it's not a good idea to expose DB entities as data contracts.
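One way that suggestion could look (a rough sketch only; the mapping configuration would normally be registered once at startup, and the flattening of Order.OrderName into OrderName relies on AutoMapper's naming convention):

IEnumerable<OrderDetailDTO> GetOrderDetails()
{
    // Configure the entity-to-DTO map; shown inline here only to keep the sketch self-contained.
    var config = new MapperConfiguration(cfg => cfg.CreateMap<OrderDetail, OrderDetailDTO>());
    var mapper = config.CreateMapper();

    using (var db = new LinqDataContext())
    {
        var loadOptions = new DataLoadOptions();
        loadOptions.LoadWith<OrderDetail>(od => od.Order); // avoid N+1 queries when flattening OrderName
        db.LoadOptions = loadOptions;
        return mapper.Map<List<OrderDetailDTO>>(db.OrderDetails.ToList());
    }
}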

Is usage of LINQ to SQL a requirement for you, or are you still at the design stage where you can choose technologies? If the latter, I would suggest using Entity Framework with Self-Tracking Entities (STEs). Then when you get an entity back from the client, all client changes will be handled for you automatically by the STEs; you will just have to call Save. Including related entities is also easy then: (...some query...).Orders.Include(c => c.OrderDetails)

Related

How to add to an intermediary table in Entity Framework

I have the following entity:
class Car
{
    public string make;
    public string model;
    public string registration;
}
We have multiple dealerships and we want to ensure that dealer1 can only see cars that belong to them, and dealer2 can only see their cars.
We don't want to implement this check in the everyday business logic of our applications since it could lead to inconsistent enforcement of the rules, so I'm creating a thin wrapper around Entity Framework which does that.
I have another entity:
class Dealer
{
    public Guid id;
}
I don't want car to reference dealer, so instead I plan to have my wrapper code look like this:
void AddCar(Car car, Dealer dealer)
{
    // Some authorization logic goes here
    // Add the dealer if not already added
    context.Add(car);
    // Add the link between car and dealer to a third table
}
Is there any way to add data to a third link table without defining a new class to represent that link for every type of entity? E.g. can I just do like a dumb table insert or something like that?
I've tried to simplify my example as much as possible for clarity, but the reality is that I'm trying to make the wrapper generic, as I have no idea what entities exist across all the microservices it will be used in (and nor should I).
You can execute SQL statements in Entity Framework by using ExecuteSql.
int carId = 1;
int dealerId = 1;
using (var context = new AppDbContext())
{
    // Passing the interpolated string directly gives ExecuteSql a FormattableString,
    // so carId and dealerId are sent as parameters rather than concatenated into the SQL.
    var rowsModified = context.Database.ExecuteSql(
        $"INSERT INTO [CarDealer] ([CarId], [DealerId]) VALUES ({carId}, {dealerId})");
}

LINQ query optimisation using EF6

I'm trying my hand at LINQ for the first time and just wanted to post a small question to make sure this is the best way to go about it. I want a list of every value in a table. So far this is what I have, and it works, but is this the best way to collect everything in a LINQ-friendly way?
public static List<Table1> GetAllDatainTable()
{
    List<Table1> Alldata = new List<Table1>();
    using (var context = new EFContext())
    {
        Alldata = context.Tablename.ToList();
    }
    return Alldata;
}
For simple entities, that is, entities that have no references to other entities (navigation properties), your approach is essentially fine. It can be condensed down to:
public static List<Table1> GetAllDatainTable()
{
    using (var context = new EFContext())
    {
        return context.Table1s.ToList();
    }
}
However, in most real-world scenarios you are going to want to leverage things like navigation properties for the relationships between entities. I.e. an Order references a Customer with Address details, and contains OrderLines which each reference a Product, etc. Returning entities this way becomes problematic because any code that accepts the entities returned by a method like this should be getting either complete, or completable entities.
For instance, if I have a method that returns an order, various pieces of code might use that order information: some might try to get info about the order's customer, others might be interested in the products. EF supports lazy loading so that related data can be pulled if and when needed; however, that only works within the lifespan of the DbContext. A method like this disposes the DbContext, so lazy loading is off the cards.
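For example, something like this fails once the context that loaded the order has been disposed (illustrative names, not from the question):

// Hypothetical repository-style method as discussed above: the DbContext is disposed before returning.
public static Order GetOrderById(int orderId)
{
    using (var context = new EFContext())
    {
        return context.Orders.Single(o => o.OrderId == orderId);
    }
}

// Later, in consuming code:
var order = GetOrderById(42);
var customerName = order.Customer.Name; // lazy load after disposal: ObjectDisposedException,
                                        // or Customer is simply null if proxies are disabled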
One option is to eager load everything:
using (var context = new EFContext())
{
    // EF6 eager-loading syntax: nested relationships are reached via Select inside Include
    // (requires using System.Data.Entity for the lambda overloads).
    var order = context.Orders
        .Include(o => o.Customer.Addresses)
        .Include(o => o.OrderLines.Select(ol => ol.Product))
        .Single(o => o.OrderId == orderId);
    return order;
}
However, there are two drawbacks to this approach. Firstly, it means loading considerably more data every time we fetch an order. The consuming code may not care about the customer or order lines, but we've loaded it all anyway. Secondly, as systems evolve, new relationships may be introduced that older code won't necessarily be updated to include, leading to potential NullReferenceExceptions, bugs, or performance issues as more and more related data gets pulled in. The view or whatever initially consumes this entity may not expect to reference these new relationships, but once you start passing entities to views, from views, and to other methods, any code accepting an entity should be able to rely on the entity being complete or completable. It can be a nightmare to have an Order floating around in various levels of "completeness", with code having to handle whether data is loaded or not. As a general recommendation, I advise not to pass entities around outside of the scope of the DbContext that loaded them.
The better solution is to leverage projection to populate view models, suited to your code's consumption, from the entities. WPF often utilizes the MVVM pattern, so this means using EF's Select method or AutoMapper's ProjectTo method to populate view models based on each consumer's needs. When your code works with view models containing just the data the views need, loading and populating entities only where needed, you can produce far more efficient (fast) and resilient queries to get data out.
If I have a view that lists orders with a created date, customer name, and a list of products with quantities, we define a view model for the view:
[Serializable]
public class OrderSummary
{
    public int OrderId { get; set; }
    public string OrderNumber { get; set; }
    public DateTime CreatedAt { get; set; }
    public string CustomerName { get; set; }
    public ICollection<OrderLineSummary> OrderLines { get; set; } = new List<OrderLineSummary>();
}

[Serializable]
public class OrderLineSummary
{
    public int OrderLineId { get; set; }
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public int Quantity { get; set; }
}
Then project into the view models in the LINQ query:
using (var context = new EFContext())
{
    var orders = context.Orders
        // add filters & such with Where() / OrderBy() etc.
        .Select(o => new OrderSummary
        {
            OrderId = o.OrderId,
            OrderNumber = o.OrderNumber,
            CreatedAt = o.CreatedAt,
            CustomerName = o.Customer.Name,
            OrderLines = o.OrderLines.Select(ol => new OrderLineSummary
            {
                OrderLineId = ol.OrderLineId,
                ProductId = ol.Product.ProductId,
                ProductName = ol.Product.Name,
                Quantity = ol.Quantity
            }).ToList()
        }).ToList();
    return orders;
}
Note that we don't need to worry about eagerly loading related entities, and if down the road an order or customer gains new relationships, the above query will continue to work, only needing an update if the new relationship information is useful for the view(s) it serves. It composes a faster, less memory-intensive query, fetching fewer fields to be passed over the wire from the database to the application, and indexes can be employed to tune this even further for high-use queries.
Update:
Additional performance tips: generally avoid methods like GetAll*() as a lowest-common-denominator data access method. Far too many performance issues I come across stem from code like this:
var ordersToShip = GetAllOrders()
    .Where(o => o.OrderStatus == OrderStatus.Pending)
    .ToList();

foreach (var order in ordersToShip)
{
    // do something that only needs order.OrderId.
}
Where GetAllOrders() returns List<Order> or IEnumerable<Order>. Sometimes there is code like GetAllOrders().Count() > 0 or such.
Code like this is extremely inefficient because GetAllOrders() fetches all records from the database, only to load them into memory in the application to later be filtered down or counted, etc.
If you're following a path of abstracting the EF DbContext and entities away behind a service or repository, then you should ensure that the service exposes methods that produce efficient queries, or forgo the abstraction and leverage the DbContext directly where data is needed. For example:
var orderIdsToShip = context.Orders
    .Where(o => o.OrderStatus == OrderStatus.Pending)
    .Select(o => o.OrderId)
    .ToList();

var customerOrderCount = context.Customers
    .Where(c => c.CustomerId == customerId)
    .Select(c => c.Orders.Count())
    .Single();
EF is extremely powerful, and when selected to serve your application it should be embraced as part of the application to get the maximum benefit. I recommend avoiding abstracting it away purely for the sake of abstraction, unless you are looking to employ unit testing to isolate the dependency on data with mocks. In that case I recommend a unit-of-work wrapper for the DbContext and a repository pattern that exposes IQueryable, to make isolating business logic simple.
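A minimal sketch of what such a repository could look like (illustrative names, not from the answer above):

public interface IOrderRepository
{
    // Exposing IQueryable lets callers compose Where/Select/ToList themselves,
    // keeping the SQL efficient while tests can supply an in-memory IQueryable instead.
    IQueryable<Order> Orders { get; }
}

public class OrderRepository : IOrderRepository
{
    private readonly EFContext _context;

    public OrderRepository(EFContext context)
    {
        _context = context;
    }

    public IQueryable<Order> Orders => _context.Orders;
}

Business logic can then build queries such as repository.Orders.Where(o => o.OrderStatus == OrderStatus.Pending).Select(o => o.OrderId).ToList() without knowing whether the data comes from EF or a test double.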

Does Entity size matter for performance in EF Core with DB first?

I'm writing an ASP.NET Core Web API project. As a data source it will be using an existing (and pretty big) database, but not the entire database. The API will use only some of the tables, and even in those tables it will not use all the columns.
Using reverse engineering and scaffolding I was able to generate the DbContext and entity classes... and it got me thinking. There is a table with 30 columns (or more). I'm using this table, but I only need 5 columns.
My question is:
Is there any advantage to removing the 25 unused columns from the C# entity class? Does it really matter?
The advantage of leaving them there unused is that if someone wants to add new functionality that needs one of them, they will not need to go to the DB and reverse engineer the needed columns (they are already there).
The advantage of removing the unused ones is...?
EDIT: Here is the sample code:
public class FooContext : DbContext
{
    public FooContext(DbContextOptions<FooContext> options)
        : base(options)
    {
    }

    public DbSet<Item> Items { get; set; }
}

[Table("item")]
public class Item
{
    [Key]
    [Column("itemID", TypeName = "int")]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    [Column("name", TypeName = "varchar(255)")]
    public string Name { get; set; }
}
Sample usage:
public ItemDto GetItem(int id)
{
    var item = _fooContext.Items.Where(i => i.Id == id).FirstOrDefault();
    // Here I have item with two fields: Id and Name.
    var itemDto = _mapper.Map<ItemDto>(item);
    return itemDto;
}
Obviously I'm curious about more complex operations, like when the Item entity is included by another entity, for example:
_foo.Warehouse.Include(i => i.Items)
or other more complex functions on the Item entity.
Your entity needs to match what's in the database, i.e. you need a property to match each column (neglecting any shadow properties). There's no choice here, as EF will complain otherwise.
However, when you actually query, you can select only the columns you actually need via something like:
var foos = await _context.Foos
    .Select(x => new
    {
        Bar = x.Bar,
        Baz = x.Baz
    })
    .ToListAsync();
Alternatively, if you don't need to be able to insert/update the table, you can opt to use DbQuery<T> instead of DbSet<T>. With DbQuery<T>, you can use any class you want and project the values however you like, via FromSql.
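A rough sketch of that idea, assuming the EF Core 2.x-era query types this refers to (later EF Core versions replace DbQuery<T> with keyless entity types configured via HasNoKey):

// A slim read-only shape holding just the columns the API needs.
public class ItemSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class FooContext : DbContext
{
    public DbQuery<ItemSummary> ItemSummaries { get; set; }
}

// Usage: project only the needed columns with raw SQL; no insert/update tracking.
var summaries = context.ItemSummaries
    .FromSql("SELECT itemID AS Id, name AS Name FROM item")
    .ToList();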

Using SQL to pull data from multiple tables and display it in one view

I've created my application using the database-first approach. I used the ADO.NET Entity Data Model to add my database to the project and then added controllers to it; that way I don't have any models so to speak, I think? But anyway, I've added controllers and CRUD was added for each entity automatically.
My problem is, I want to view data from many tables on one web page. I can do this via SQL, but how do I use SQL to pull the data I need and display it on the screen?
To build a little bit on Dave A's answer:
I personally like doing this type of retrieval with database-first EF.
After building the EDMX, create a simple, straightforward POCO that mimics what you want to return. A simple example would be:
public class ComplexModelFromMultipleTables
{
    public List<Car> Cars { get; set; }
    public List<Bike> Bikes { get; set; }
    public List<Boat> Boats { get; set; }
}
Once you've built the relationships in your database, which are reflected in your EDMX, access it in a provider via your favorite pattern. A simple using pattern is always good, though I've built more complex objects with a mapper.
public ComplexModelFromMultipleTables GetObject()
{
    using (var db = new DBContext())
    {
        var model = new ComplexModelFromMultipleTables
        {
            Cars = db.Cars.Where(x => x.CarType == whateverYouWant).ToList(),
            Bikes = db.Bikes.Where(x => x.AnotherProperty == whateverYouWant).ToList(),
            Boats = db.Boats.Where(x => x.SomethingElse == whateverYouWant).ToList()
        };
        return model;
    }
}
Call this provider from your controller and strongly type the view with
@model ComplexModelFromMultipleTables
at the top of your view.
I assume that you are referring to the EF Code First approach to creating an entity model.
If I'm right, you're conflicted by the common practice of using entities (table-mapped classes) as models for your views.
I often use entities as models when I first scaffold a page, but as you've found, they are rarely appropriate, and I often find myself migrating to more robust models.
I recommend you create a model class in your Models directory and make several entities members of that class.
For example, you can have a CustomeActivityModel which has Customers, Sales, and Orders as members:
class CustomeActivityModel
{
    public IEnumerable<Customers> CustomerList { get; set; }
    public IEnumerable<Sales> SalesList { get; set; }
    public IEnumerable<Orders> OrdersList { get; set; }
}
Within your controller, you would populate them:
public ViewResult Index()
{
    var model = new CustomeActivityModel();
    model.CustomerList = EFContext.Customers;
    model.SalesList = EFContext.Sales;
    model.OrdersList = EFContext.Orders;
    return View(model);
}
Or, alternatively, you can use EF's LINQ ability to include entities that have key relationships (assuming Sales has foreign keys to Customers and Orders):
public ViewResult Index()
{
    var model = EFContext.Sales.Include("Customers").Include("Orders");
    return View(model);
}

How to insert 2 new related DTOs using RIA Service?

I am using RIA Services in our Silverlight application. Database entities are not directly exposed to the client; instead I have a set of POCO classes for them. In the CRUD methods for these POCO classes they are converted to database entities and saved to the database.
The problem arises on the server side when the client creates 2 new related POCO entities. The insert method is called on the server for each POCO entity separately, and I can create the corresponding new database entities there and add them to the object context. But I see no way to add the relation between these created database entities. Is there a solution for that?
For example, I have these 2 POCO entities (simplified):
[DataContract(IsReference = true)]
public partial class Process
{
    [DataMember]
    [Key]
    public string Name { get; set; }

    [DataMember]
    public long StepId { get; set; }

    [DataMember]
    [Association("StepProcess", "StepId", "Id", IsForeignKey = true)]
    public Step Step { get; set; }
}

[DataContract(IsReference = true)]
public partial class Step
{
    [DataMember]
    [Key]
    public long Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}
And I have these 2 Insert methods in my domain service class:
public void InsertProcess(Process process)
{
    var dbProcess = new DBProcess();
    dbProcess.Name = process.Name;
    //dbProcess.StepId = process.StepId; Cannot do that!
    this.ObjectContext.AddToDBProcess(dbProcess);
}

public void InsertStep(Step step)
{
    var dbStep = new DBStep();
    dbStep.Name = step.Name;
    this.ObjectContext.AddToDBSteps(dbStep);
    this.ChangeSet.Associate<Step, DBStep>(step, dbStep, (dto, entity) =>
    {
        dto.Id = entity.Id;
    });
}
The client adds a new Process, then creates and adds a new Step to it, and then calls SubmitChanges(). Process.StepId is not filled with a correct value, as there is no correct Step.Id for the newly created step yet, so I cannot just copy this value to the database entity.
So the question is: how do I recreate the relations between newly created database entities so they match the relations in the newly created DTOs?
I know about the Composition attribute, but it is not suitable for us. Both Process and Step are independent entities (i.e. steps may exist without a process).
There are two ways to solve this:
Have each call return the primary key for the item after it is created, then you can store the resulting PKey in the other POCO to call the second service.
Create a Service method that takes both POCOs as parameters and does the work of relating them for you.
Thanks. Both of these suggestions are valid, but they are only applicable to simple and small object hierarchies, which is not my case. I ended up using an approach similar to this: I keep a map from POCOs to database objects. If both the Process and the Step are new, the process.Step navigation property is filled with the new step in the InsertProcess method (otherwise StepId can be used, as it references an existing step). So if this process.Step is already in the map I just fill the corresponding navigation property on DBProcess; otherwise I create a new instance of DBStep, put it into the map and set it on the DBProcess.Step navigation property. This new, empty DBStep will be filled in the InsertStep method later.
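A rough sketch of that map-based approach, following the snippets above (the dictionary field and the EntityState guard are assumptions for illustration, not code from the actual solution):

// Maps new POCOs from the changeset to the database entities created for them.
private readonly Dictionary<object, object> newEntityMap = new Dictionary<object, object>();

public void InsertProcess(Process process)
{
    var dbProcess = new DBProcess();
    dbProcess.Name = process.Name;
    if (process.Step != null)
    {
        // The related Step is new too: reuse (or pre-create) its database entity via the map.
        if (!newEntityMap.ContainsKey(process.Step))
        {
            newEntityMap[process.Step] = new DBStep();
        }
        dbProcess.Step = (DBStep)newEntityMap[process.Step];
    }
    else
    {
        dbProcess.StepId = process.StepId; // Step already exists in the database
    }
    this.ObjectContext.AddToDBProcess(dbProcess);
}

public void InsertStep(Step step)
{
    if (!newEntityMap.ContainsKey(step))
    {
        newEntityMap[step] = new DBStep();
    }
    var dbStep = (DBStep)newEntityMap[step];
    dbStep.Name = step.Name; // fill the (possibly pre-created) database entity
    if (dbStep.EntityState == EntityState.Detached)
    {
        // Only add it if InsertProcess hasn't already attached it via the navigation property.
        this.ObjectContext.AddToDBSteps(dbStep);
    }
    this.ChangeSet.Associate<Step, DBStep>(step, dbStep, (dto, entity) => { dto.Id = entity.Id; });
}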
