LinqToSQL - no supported translation to SQL - c#

I have been puzzling over a problem this morning with LinqToSQL. I'll try and summarise with the abbreviated example below to explain my point.
I have two DB tables:
table Parent
{
ParentId
}
table Child
{
ChildId
ParentId [FK]
Name
Age
}
These have LinqToSQL equivalent classes in my project; however, I have written two custom model classes that I want my UI to use instead of the LinqToSQL classes.
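For illustration, the custom model classes look roughly like this (a hypothetical sketch; the Parent reference is the property referred to further down):
namespace CustomModel
{
    public class Parent
    {
        public int ParentId { get; set; }
    }

    public class Child
    {
        public int ChildId { get; set; }
        public int ParentId { get; set; }
        public Parent Parent { get; set; } // object reference to the linked parent model class
    }
}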
My data access from the front end goes through a service class, which in turn calls a repository class, which queries the data via linq.
At the repository level I return an IQueryable by:
return from data in _data.Children
       select new CustomModel.Child
       {
           ChildId = data.ChildId,
           ParentId = data.ParentId
       };
My service layer then adds an additional query restriction by parent before returning the list of children for that parent.
return _repository.GetAllChildren().Where(c => c.Parent.ParentId == parentId).ToList();
So at this point, when I run everything, I get the "method has no supported translation to SQL" error, because the c.Parent property of my custom model cannot be converted. [The c.Parent property is an object reference to the linked parent model class.]
That all makes sense so my question is this:
Can you provide the querying process with some rules that will convert a predicate expression into the correct piece of SQL to run at the database and therefore not trigger an error?
I haven't done much work with linq up to now so forgive my lack of experience if I haven't explained this well enough.
Also, for those commenting on my choice of architecture, I have changed it to get around this problem and I am just playing around with ideas at this stage. I'd like to know if there is an answer for future reference.
Many thanks if anyone can help.

Firstly, it begs the question: why is the repository returning the UI types? If the repo returned the database types, this wouldn't be an issue. Consider refactoring so that the repo deals only with the data model, and the UI does the translation at the end (after any composition).
If you mean "and have it translate down to the database" - then basically, no. Composable queries can only use types defined in the LINQ-to-SQL model, and a handful of supported standard functions. Something similar came up recently on a related question, see here.
For some scenarios (unusual logic, but using the types defined in the LINQ-to-SQL model), you can use UDFs at the database and write the logic yourself (in TSQL) - but only with LINQ-to-SQL (not EF).
If the volume isn't high, you can use LINQ-to-Objects for the last bit. Just add an .AsEnumerable() before the affected Where - this will do this bit of logic back in managed .NET code (but the predicate won't be used in the database query):
return _repository.GetAllChildren().AsEnumerable()
.Where(c => c.Parent.ParentId == parentId).ToList();
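For completeness, here is a minimal sketch of the refactoring suggested above, assuming the names from the question: the repository filters on the LINQ-to-SQL types (so the predicate translates to SQL) and projects to the UI type last.
public IQueryable<CustomModel.Child> GetChildrenByParent(int parentId)
{
    // The Where targets the LINQ-to-SQL Child type, so it is translated into the database query.
    return from data in _data.Children
           where data.ParentId == parentId
           select new CustomModel.Child
           {
               ChildId = data.ChildId,
               ParentId = data.ParentId
           };
}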

Related

Dynamic table name EF CORE 2.2

I want to make a universal method for working with tables. I have studied these links:
Dynamically Instantiate Model object in Entity Framework DB first by passing type as parameter
Dynamically access table in EF Core 2.0
As an example, the ASP.NET Core controller for one of the SQL tables is shown below. There are many tables, and such methods (DELETE, ADD, CHANGE) have to be implemented for each one:
[Authorize(Roles = "Administrator")]
[HttpPost]
public ActionResult DeleteToDB(string id)
{
    webtm_mng_16Context db = new webtm_mng_16Context();
    var Obj_item1 = (from o1 in db.IT_bar
                     where o1.id == int.Parse(id)
                     select o1).SingleOrDefault();
    if (Obj_item1 != null)
    {
        db.IT_bar.Remove(Obj_item1);
        db.SaveChanges();
    }
    var Result = "ok";
    return Json(Result);
}
I want to get a universal method for all such operations, with the ability to change the name of the table dynamically - ideally, setting the table name as a string. I know that this can be done using SQL inserts, but is there really no simple way to implement this in EF Core?
Sorry, but you need to rework your model.
It is possible to do something generic as long as you have one table per type - you can go into the configuration and change the database table. OpenIddict allows that. You can override the constructors of the DbContext and do whatever you want with the object model, and that includes changing table names.
What you can also do is write a generic base class that takes the classes you deal with as type parameters. I have those - taking (a) the DB entity type and (b) the API-side DTO type - and then using some generic functions and AutoMapper to map between them. A sketch of that idea follows.
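This is only a rough sketch built from the controller in the question: it assumes an integer primary key and constructor injection for the context, neither of which is stated in the original post.
// One abstract controller carries the generic operations; each table gets a thin subclass.
[Authorize(Roles = "Administrator")]
public abstract class TableController<TEntity> : Controller where TEntity : class
{
    private readonly webtm_mng_16Context _db;

    protected TableController(webtm_mng_16Context db)
    {
        _db = db;
    }

    [HttpPost]
    public ActionResult DeleteToDB(string id)
    {
        // Set<TEntity>() resolves the DbSet (and therefore the mapped table) from the entity type.
        var entity = _db.Set<TEntity>().Find(int.Parse(id));
        if (entity != null)
        {
            _db.Set<TEntity>().Remove(entity);
            _db.SaveChanges();
        }
        return Json("ok");
    }
}

// Usage: one small class per table, with no per-table delete/add/change code.
public class ItBarController : TableController<IT_bar>
{
    public ItBarController(webtm_mng_16Context db) : base(db) { }
}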
But the moment you need to grab the table name dynamically, you are in a world of pain. EF's standard architecture assumes that an object type is mapped to a database entity. As such, an ID is unique within a table - the whole relational model depends on that. Id 44 has to be unique for a specific object, not for an object plus the table it happened to be loaded from at that moment.
You also miss out significantly on actual logic, e.g. for delete. I hate to tell you, but while you can implement security on other layers for reading, every single one of my write/update methods is handwritten. Now, it may seem that "Authorize" works - but no, it does not. Or rather, it does if your application is "Hello World" complex. I sometimes run pages of testing code to decide whether an operation is allowed in a specific business context, and this IS specific - for example, whether the user has set an override switch (which may or may not be valid depending on who he is) to bypass certain business rules. All of that is specific anyway.
Oh, and here is something else you can do, because you seem to have a lot of tables: do NOT write one class - generate them. Scaffolding is not that complex. I hardly remember when I last generated EF Core database classes by hand - nowadays they all come out of Entity Developer (a tool from Devart), while the DB is handled with change scripts (I work DB first - I actually want to USE the database, and that means filtered indices, triggers, some SPs and views with specific SQL), so migrations do not really work at all.
But overriding the table name dynamically - while keeping the same object in the background - will bite you quite fast. It likely only works for extremely simplistic things - you know, the "Hello World" example - and breaks apart the moment you actually have logic.

entity framework 4: where is my load method and IsLoaded property?

I am using EF4, and I am using the System.Data.Entity DLL in my class. However, I can't see the Load method on my entity's navigation property.
How can I access this method?
What I am doing is creating a .NET 4.0 project, creating my edmx from a database, right-clicking the edmx model and generating a DbContext.
However, I can access the Local property of my context, which I think is a feature of EF4.
Thanks.
Daimroc.
For the DbContext approach, the IsLoaded property and Load method have been moved:
context.Entry(yourEntity)
.Collection(entity => entity.NavigationProperty)
.IsLoaded;
context.Entry(yourEntity)
.Collection(entity => entity.NavigationProperty)
.Load();
Entry method: http://msdn.microsoft.com/en-us/library/gg696578(v=vs.103).aspx
Collection method: http://msdn.microsoft.com/en-us/library/gg671324(v=vs.103).aspx
IsLoaded property: http://msdn.microsoft.com/en-us/library/gg696146(v=vs.103).aspx
Load method: http://msdn.microsoft.com/en-us/library/gg696220(v=vs.103).aspx
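For a single-valued (reference) navigation property the pattern is the same, just with Reference instead of Collection - a quick sketch with placeholder names:
context.Entry(yourEntity)
    .Reference(entity => entity.SingleNavigationProperty)
    .Load();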
When you write the query (all this works with lambda syntax as well, btw)
from a in context.EntitySet.Include("NavigationPropertyName")
select a
and all will be there ;) .. or more to the point the navigation properties will be populated.
It can be weird with lots of joins, though.
You can also do:
from a in context.EntitySet.Include("Navigation1.Navigation2.Navigation3")
    .Include("SomeOtherNavigation")
select a
where Navigation1 and SomeOtherNavigation are navigation properties on the entity set.
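With the DbContext API (EF 4.1 and later) there is also a strongly typed Include extension in the System.Data.Entity namespace, so the string paths above can be written as lambdas. A sketch, assuming Navigation1 is a collection and the other names are placeholders:
using System.Data.Entity; // brings the lambda-based Include extension into scope

var query = from a in context.EntitySet
                .Include(x => x.Navigation1.Select(n => n.Navigation2))
                .Include(x => x.SomeOtherNavigation)
            select a;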
Failing that, you can turn on lazy loading in the context. But it's evil, IMO.
Also, it doesn't really scale well when you go for an SOA approach, as you need to Include in the repository before you know what would otherwise be lazy loaded.
I.e. you run the query in your repository, send the results over the service, and then want the navigation properties in the front end ... by which time it's too late to lazy load ... e.g. you are using a WCF service that doesn't use Data Services.
It is important to note that you have to use Include on the EntitySet; it is not on an IQueryable<>.
A real example:
var csg = from ucsg in c.UserCarShareGroups.Include("UserCarShareGroups.User.UserVehicles")
          where ucsg.User.UserName.ToLower() == userName.ToLower()
                && ucsg.CarShareGroup.Carpark.Name.ToLower().Equals(carparkName.ToLower())
                && ucsg.DeletedBy == null
          select ucsg.CarShareGroup;
(the database has case sensitive collation for some reason)
An alternate approach (and maybe a more relevant one) is here:
Entity Framework - Trouble with .Load()
However I like doing it the way I said because it is explicit and I know exactly what is being pulled out. Especially when dealing with large datasets.
An answer to a question that combines my concerns with Load() is here:
Entity Framework 4 load and include combine

Nhibernate Polymorphic Query - Eager Load Associations Without Polymorphic Fetch

I will start by saying I have already looked thoroughly on Stack Overflow, nhusers and in the documentation for a possible solution to my issue.
I need to be able to query only the base class table in parts of my multi/future query when eagerly loading associations (although from the research I have done I don't think this is possible).
I have started to map an existing schema using Fluent NHibernate as a proof of concept. I have mapped an inheritance hierarchy using table-per-subclass (the mappings all work perfectly fine, so I won't paste them all in here). The hierarchy has around 15 subclasses and the base class has some additional associations, e.g.:
Base
Dictionary<string, Attribute> Attributes
List<EntityChange> Changes
I need to eagerly load both of the collections as in the given scenario they are required for post processing and lazily loading them causes performance issues. I am eagerly loading them by a multi / future query:
var baseQuery = session.CreateCriteria<Base>("b")
    .CreateCriteria("Nested", JoinType.LeftOuterJoin)
    .CreateCriteria("Nested2", JoinType.LeftOuterJoin)
    .CreateCriteria("Nested2.AdditionalNested", JoinType.LeftOuterJoin);

var logsQuery = session.CreateCriteria<Base>("b")
    .CreateAlias("Changes", "c", JoinType.LeftOuterJoin,
        Expression.And(Expression.Ge("c.EntryDate", changesStartDate),
                       Expression.Le("c.EntryDate", changesEndDate)))
    .AddOrder(Order.Desc("c.EntryDate"));

var attributesQuery = session.CreateCriteria<Base>("t")
    .SetFetchMode("Attributes", FetchMode.Join);

logsQuery.Future<Base>();
attributesQuery.Future<Base>();

var results = baseQuery.Future<Base>().ToList();
The queries execute and return the correct results. But eagerly loading the associations in this manner means that the attributes and changes queries have to perform a polymorphic fetch (adding about 15 left outer joins per query that aren't required). I know this is required for polymorphic querying, but the base query will return the hierarchy that I desire; the other parts of the multi query that issue a polymorphic query are redundant.
I haven't yet mapped the whole of the hierarchy, so there will be additional unnecessary joins being performed, and there are also other associations that could be loaded up front. These two combined, even without any increase in volume, will lead to performance issues. This query currently takes about 6 seconds (which admittedly is better than the 20 the existing implementation takes), but by messing around a bit with the query and taking out the extra joins I can get it down to about 2 seconds (this is a common query, so getting it as low as possible is beneficial, not just pleasing to me; it will also be run from multiple distributed machines, so I would rather not get into a discussion about caching / 2nd-level caching).
I have tried:
1. Using the class modifier in the query, 'class = base'. I initially did this blindly, but believe it is for discriminator values. Even if it is used for the case statement in the SQL, it will not prevent the extra joins.
2. Doing everything in a single query. This is slower than splitting it up as above and gives the Cartesian product.
3. Using 'Polymorphism.Explicit();' in the fluent mappings. This has no effect, as I am using ClassMap with SubclassMaps, so it is ignored. I tried changing all the maps to ClassMaps and using Join, but this didn't give the desired behaviour.
4. Trying to trick NHibernate into joining the base class table onto itself for loading associations (basically loading a more concrete type to prevent the polymorphic query) - creating a derived class 'BaseOnlyLoading' which uses the same table and primary key as the base class. This was obviously a hack, but I was just trying to see what's possible. NHibernate doesn't allow the class and subclass to use the same table.
5. Defining the BaseOnlyLoadingMap to be a ClassMap with the same associations as the BaseMap, with a join back onto Base. This was hopeful, as association collections are resolved in the context based on full type name.
6. Using an interceptor which modifies the SQL before it is executed. I wouldn't use this in production and just tried it out of interest. I passed an interceptor into a local session. This caused issues and I didn't proceed.
7. The HQL 'Type' query operator as explained here, although I am not sure this has been implemented in the .NET version, and it might behave similarly to 1.
8. There is a comment on the highest-rated answer (How to perform a non-polymorphic HQL query in Hibernate?) which suggests overriding IsExplicitPolymorphism on the persister. I had a quick look, and from what I remember the persister was either global per entity or created in the SessionImpl from a static factory, which would prevent doing this. Even if it were possible, I am not sure what sort of side effects it would have.
9. I tried using some SQL to load everything, but even if I use a stored proc I am not sure how NHibernate will piece the graph back together. Maybe I could specify all the entities and aliases?
Specifying explicit polymorphism per query would be nice. Any suggestions?
Thanks in advance.

C# linq to sql - selecting tables dynamically

I have the following scenario: there is a database that generates a new logTable every year. It started in 2001 and now has 11 tables. They all have the same structure, thus the same fields, indexes, PKs, etc.
I have some classes called managers that - as the name says - manage every operation on this DB. For each different table I have a manager, except for these logTables, for which I have only one manager.
I've read a lot and tried different things, like using ITable to get tables dynamically or an interface that all my tables implement. Unfortunately, I lose strongly-typed properties, and with that I can't do any searches or updates or anything, since I can't use logTable.Where(q => q.ID == paramId).
Considering that those tables have the same structure, a query that searches logs from 2010 can be exactly the same as one that searches logs from 2011 onwards.
I'm only asking this because I wouldn't like to rewrite the same code for each table, since they are identical in structure.
EDIT
I'm using LINQ to SQL as my ORM, and these tables are used for all DB operations, not just select.
Consider putting all your logs in one table and using partitioning to maintain performance. If that is not feasible, you could create a view that unions all the log tables together and use that when selecting log data. That way, when you add a new log table, you just update the view to include the new table.
EDIT Further to the most recent comment:
Sounds like you need a new DBA if he won't let you create new SPs. Yes, I think you could define an ILogTable interface and then make your log table classes implement it, but that would not allow you to do GetTable<ILogTable>(). You would have to have some kind of DAL class with a method that creates a union query, e.g.:
public IEnumerable<ILogTable> GetLogs()
{
    // Cast each year's rows to the shared interface and concatenate the sequences.
    var log2010 = from log in DBContext.Logs2010
                  select (ILogTable)log;
    var log2011 = from log in DBContext.Logs2011
                  select (ILogTable)log;
    return log2010.Concat(log2011);
}
Above code is completely untested and may fail horribly ;-)
Edited to keep #AS-CII happy ;-)
You might want to look into the CodePlex Fluent Linq to SQL project. I've never used it, but I'm familiar with the ideas from using similar mapping techniques in EF4. You could create a single object and map it dynamically to different tables using syntax such as:
public class LogMapping : Mapping<Log> {
    public LogMapping(int year) {
        Named("Logs" + year);
        //Column mappings...
    }
}
As long as each of your queries returns the same shape, you can use ExecuteQuery<Log>("Select cols From LogTable" + instance). Just be aware that ExecuteQuery is one case where LINQ to SQL allows for SQL injection. I discuss how to parameterize ExecuteQuery at http://www.thinqlinq.com/Post.aspx/Title/Does-LINQ-to-SQL-eliminate-the-possibility-of-SQL-Injection.
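As a rough illustration of that point (the column names and the year variable are assumptions): the table name has to be concatenated in, so it must come from trusted code, while the values can go through the {0}-style placeholders that ExecuteQuery turns into SQL parameters:
// The table name is string-built (keep it out of user hands); the date is passed
// as a {0} parameter, which LINQ to SQL sends as a real SQL parameter.
string table = "LogTable" + year;
var logs = context.ExecuteQuery<Log>(
    "SELECT LogId, Message, CreatedOn FROM " + table + " WHERE CreatedOn >= {0}",
    cutoffDate).ToList();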

Adding behavior to LINQ to Entities models

What's the preferred approach when using L2E to add behavior to the objects in the data model?
Having a wrapper class that implements the behavior you need with only the data you need
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds
                select new FFFeed(feed.name, new Uri(feed.uri), feed.refresh);
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (FFFeed feed in feedList) { feed.doX(); }
Using directly the data model instances and have a method that operates over the IEnumerable of those instances
using (var dbh = new ffEntities())
{
    var query = from feed in dbh.feeds select feed;
    return query.ToList();
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { doX(feed); }
Using extension methods on the data model class so it ends up having the extra methods the wrapper would have.
public static class dataModelExtensions {
    public static void doX(this feeds source) {
        //do X
    }
}
//Later in a separate place, not even in the same class
foreach (feeds feed in feedList) { feed.doX(); }
Which one is best? I tend to favor the last approach, as it's clean and doesn't interfere with the CRUD facilities (I can just use it to insert/update/delete directly, no need to wrap things back), but I wonder if there's a downside I haven't seen.
Is there a fourth approach? I fail at grasping LINQ's philosophy a bit, especially regarding LINQ to Entities.
The Entity classes are partial classes as far as I know, so you can add another file extending them directly using the partial keyword.
Otherwise, I usually have a wrapper class, i.e. my ViewModel (I'm using WPF with MVVM). I also have some generic helper classes with fluent interfaces that I use to add specific query filters to my ViewModel.
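For example, a minimal sketch of the partial-class approach (the class name feeds mirrors the generated entity from the question; doX is hypothetical):
// In a separate file, in the same namespace as the generated entity class.
public partial class feeds
{
    public void doX()
    {
        // behavior that works against this entity's own mapped properties
    }
}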
I think it's a mistake to put behaviors on entity types at all.
The Entity Framework is based around the Entity Data Model, described by one of its architects as "very close to the object data model of .NET, modulo the behaviors." Put another way, your entity model is designed to map relational data into object space, but it should not be extended with methods. Save your methods for business types.
Unlike some other ORMs, you are not stuck with whatever object type comes out of the black box. You can project to nearly any type with LINQ, even if it is shaped differently than your entity types. So use entity types for mapping only, not for business code, data transfer, or presentation models.
Entity types are declared partial when code is generated. This leads some developers to attempt to extend them into business types. This is a mistake. Indeed, it is rarely a good idea to extend entity types. The properties created within your entity model can be queried in LINQ to Entities; properties or methods you add to the partial class cannot be included in a query.
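For instance, a hypothetical illustration of that last point (TotalHours here is an added .NET-only property, not a mapped column):
public partial class TimeRecord
{
    // Computed in .NET code only; the Entity Framework knows nothing about it.
    public double TotalHours
    {
        get { return (Stop - Start).TotalHours; }
    }
}

// This throws NotSupportedException: TotalHours cannot be translated to SQL.
var longDays = Context.TimeRecords.Where(tr => tr.TotalHours > 10).ToList();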
Consider these examples of a business method:
public Decimal CalculateEarnings(Guid id)
{
    var timeRecord = (from tr in Context.TimeRecords
                          .Include("Employee.Person")
                          .Include("Job.Steps")
                          .Include("TheWorld.And.ItsDog")
                      where tr.Id == id
                      select tr).First();

    // Calculate has deep knowledge of entity model
    return EarningsHelpers.Calculate(timeRecord);
}
What's wrong with this method? The generated SQL is going to be ferociously complex, because we have asked the Entity Framework to materialize instances of entire objects merely to get at the minority of properties required by the Calculate method. The code is also fragile. Changing the model will not only break the eager loading (via the Include calls), but will also break the Calculate method.
The Single Responsibility Principle states that a class should have only one reason to change. In the example above, the EarningsHelpers type has the responsibility both of actually calculating earnings and of keeping up to date with changes to the entity model. The first responsibility seems correct; the second doesn't sound right. Let's see if we can fix that.
public Decimal CalculateEarnings(Guid id)
{
    var timeData = (from tr in Context.TimeRecords
                    where tr.Id == id
                    select new EarningsCalculationContext
                    {
                        Salary = tr.Employee.Salary,
                        StepRates = from s in tr.Job.Steps
                                    select s.Rate,
                        TotalHours = tr.Stop - tr.Start
                    }).First();

    // Calculate has no knowledge of entity model
    return EarningsHelpers.Calculate(timeData);
}
In this second example, I have rewritten the LINQ query to pick out only the bits of information required by the Calculate method, and to project that information onto a type which rolls up the arguments for the Calculate method. If writing a new type just to pass arguments to a method seems like too much work, I could also have projected onto an anonymous type and passed Salary, StepRates, and TotalHours as individual arguments. But either way, we have fixed the dependency of EarningsHelpers on the entity model, and as a free bonus we've gotten more efficient SQL as well.
You might look at this code and wonder what would happen if the Job property of TimeRecord were nullable. Wouldn't I get a null reference exception?
No, I would not. This code will not be compiled and executed as IL; it will be translated to SQL. LINQ to Entities coalesces null references. In the example query above, StepRates would simply return null if Job was null. You can think of this as being identical to lazy loading, except without the extra database queries. The code says, "If there is a job, then load the rates from its steps."
An additional benefit of this kind of architecture is that it makes unit testing of the Web assembly very easy. Unit tests should not access a database, generally speaking (put another way, tests which do access a database are integration tests rather than unit tests). It's quite easy to write a mock repository which returns arrays of objects as Queryables rather than actually going to the Entity Framework.
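A minimal sketch of that idea (every name here is hypothetical): a fake repository that serves in-memory data as IQueryable, so query logic can be unit tested without touching the Entity Framework or a database.
using System;
using System.Collections.Generic;
using System.Linq;

public class TimeRecord
{
    public Guid Id { get; set; }
    public DateTime Start { get; set; }
    public DateTime Stop { get; set; }
}

public interface ITimeRecordRepository
{
    IQueryable<TimeRecord> TimeRecords { get; }
}

public class FakeTimeRecordRepository : ITimeRecordRepository
{
    private readonly List<TimeRecord> _records = new List<TimeRecord>
    {
        new TimeRecord { Id = Guid.NewGuid(), Start = DateTime.Today, Stop = DateTime.Today.AddHours(8) }
    };

    // Queries written against this IQueryable run in memory (LINQ to Objects)
    // instead of being translated to SQL.
    public IQueryable<TimeRecord> TimeRecords
    {
        get { return _records.AsQueryable(); }
    }
}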
