EF7 Proxy Collections Not Generating - c#

I'm trying ASP.NET vNext / Entity Framework 7.
If I have 2 classes, A and B, where one A connects to many Bs, Entity Framework does not generate any proxy collection or proxy class. Thus, when trying to access the collection property, it is always empty unless I add to the collection manually. How does one implement lazily (or even eagerly) loaded collections in EF7?
public class A
{
    public Guid UniqueId { get; set; }

    private ICollection<B> _backing;

    public virtual ICollection<B> OneToManyRelationship
    {
        get { return _backing ?? (_backing = new Collection<B>()); }
        set { _backing = value; }
    }
}
public class B
{
    public A Owner { get; set; }
    public string UniqueIdentifier { get; set; }
    public int SomeImportantData { get; set; }
}

EF 7 does not support lazy loading upon initial release.
An example of this is lazy loading support. We know this is a critical feature for a number of developers, but at the same time there are many applications that can be developed without this feature. Rather than making everyone wait until it is implemented, we will ship when we have a stable code base and are confident that we have the correct factoring in our core components. To be clear, it's not that we are planning to remove lazy loading support from EF7, just that some apps can start taking advantage of the benefits of EF7 before lazy loading is implemented.
Though I have not yet migrated to EF 7, you should be able to eagerly load your object graph, e.g.
using System.Data.Entity; // note: in the EF7 pre-releases the Include extension lives in Microsoft.Data.Entity instead
...
var query = ctx.A.Include(a => a.OneToManyRelationship);

Related

What is a proper way of writing entity POCO classes in Entity Framework Core?

EF Core has a "code first mentality" by default, i.e. it is supposed to be used in a code-first manner; even though the database-first approach is supported, it is described as nothing more than reverse-engineering the existing database and creating a code-first representation of it. What I mean is, the model (POCO classes) should be identical whether it is created in code "by hand" (code first) or generated from the database (by the Scaffold-DbContext command).
Surprisingly, official EF Core docs demonstrate significant differences. Here is an example of creating the model in code: https://ef.readthedocs.io/en/latest/platforms/aspnetcore/new-db.html And here is the example of reverse-engineering it from existing database: https://ef.readthedocs.io/en/latest/platforms/aspnetcore/existing-db.html
This is the entity class in first case:
public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
    public List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
    public Blog Blog { get; set; }
}
and this is the entity class in second case:
public partial class Blog
{
    public Blog()
    {
        Post = new HashSet<Post>();
    }

    public int BlogId { get; set; }
    public string Url { get; set; }
    public virtual ICollection<Post> Post { get; set; }
}
The first example is a very simple, quite obvious POCO class. It is shown everywhere in the documentation (except for the examples generated from the database). The second example, though, has some additions:
The class is declared partial (even though there is no other partial definition of it to be seen anywhere).
The navigation property is of type ICollection<T> instead of just List<T>.
The navigation property is initialized to new HashSet<T>() in the constructor. There is no such initialization in the code-first example.
The navigation property is declared virtual.
The DbSet members in the generated context class are also virtual.
I've tried scaffolding the model from database (latest tooling as of this writing) and it generates entities exactly as shown, so this is not an outdated documentation issue. So the official tooling generates different code, and the official documentation suggests writing different (trivial) code - without partial class, virtual members, construction initialization, etc.
My question is, when building the model in code, how should I write my classes? I like using ICollection instead of List because it is more generic, but other than that, I'm not sure whether I should follow the docs or the MS tools. Do I need to declare the properties as virtual? Do I need to initialize them in a constructor? etc.
I know from the old EF times that virtual navigation properties allow lazy loading, but that is not even supported (yet) in EF Core, and I don't know of any other uses. Maybe it affects performance? Maybe the tools try to generate future-proof code, so that when lazy loading is implemented, the POCO classes and context will be able to support it? If so, can I ditch them, since I don't need lazy loading (all data querying is encapsulated in a repo)?
In short, please help me understand why there is a difference, and which style I should use when building the model in code.
I'll try to give a short answer to each point you mentioned:
partial classes are especially useful for tool-generated code. Suppose you want to implement a model-only derived property. For code first, you would just do it, wherever you want. For database first, the class file will be rewritten whenever you update your model, so if you want to keep your extension code, you have to place it in a different file outside the managed model. This is where partial helps you extend the class without tweaking the auto-generated code by hand.
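To illustrate (a minimal sketch; the file split and the Host property are hypothetical, not produced by the scaffolder):

```csharp
using System;

// Blog.cs: the scaffolded half, rewritten every time the model
// is re-generated from the database.
public partial class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }
}

// Blog.Custom.cs: the hand-written half in a separate file, which
// survives re-scaffolding because both parts merge into one class.
public partial class Blog
{
    // Model-only derived property, never touched by the generator.
    public string Host => new Uri(Url).Host;
}
```

Both declarations compile into a single Blog class, so the derived property is available everywhere the entity is used.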
ICollection is definitely a suitable choice, even for code first. Your database won't return rows in a defined order anyway unless the query includes a sorting clause.
Constructor initialization is a convenience, at the least. Suppose the collection is empty database-wise, or you didn't load the property at all: without the constructor, you have to handle null cases explicitly at arbitrary points in code. Whether you should go with List or HashSet is something I can't answer right now.
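As a sketch of the null-handling point (the Author/Book names are just illustrative):

```csharp
using System.Collections.Generic;

public class Author
{
    public Author()
    {
        // Initialized up front, so an empty or not-yet-loaded
        // relationship is an empty collection rather than null.
        Books = new HashSet<Book>();
    }

    public int AuthorId { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}

public class Book
{
    public int BookId { get; set; }
}
```

Without the constructor, code like author.Books.Count would throw a NullReferenceException until something assigned the collection.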
virtual enables proxy creation for the database entities, which can help with two things: lazy loading, as you already mentioned, and change tracking. A proxy object can track changes to virtual properties immediately in the setter, while normal objects in the context need to be inspected on SaveChanges. In some cases this can be more efficient (though not in general).
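A hand-rolled sketch of what such a proxy effectively does (the real proxies are generated at runtime; CustomerProxy here is purely illustrative):

```csharp
public class Customer
{
    public virtual string Name { get; set; }
}

// Roughly what a generated change-tracking proxy adds: the virtual
// setter is overridden so the change is recorded the moment it
// happens, instead of being discovered by comparing snapshots of
// every tracked object at SaveChanges.
public class CustomerProxy : Customer
{
    public bool IsModified { get; private set; }

    public override string Name
    {
        get { return base.Name; }
        set { base.Name = value; IsModified = true; }
    }
}
```

This is also why the property must be virtual: a non-virtual setter could not be intercepted by the derived proxy.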
virtual DbSet members on the context allow easier design of mock contexts for unit tests. Other use cases might also exist.

How to keep separation of concerns in ASP.NET Boilerplate template and use spatial information query?

I'm using ASP.NET Boilerplate in the "Core" library I have this class:
public class Post : Entity<Guid>
{
    public Post()
    {
        Id = Guid.NewGuid();
        Hashtags = new HashSet<Hashtag>();
    }

    public string Body { get; set; }
    public Location Location { get; set; }
    public virtual ICollection<Hashtag> Hashtags { get; set; }
}
I searched and found that Entity Framework can handle spatial data queries better using the DbGeography class. The problem is that I don't want to use Entity Framework in the Core library...
Is there any way around this?
Is there any way around this?
Short answer: no.
If you want to use a type defined in the EF library, you need to reference it.
But I see no reason not to reference the EF library in the core library to use its types, if it will be loaded by other projects in your application anyhow.
I have about the same scenario here: my entities project references EF just to "have access" to the data annotations, but it is not "using" EF in the sense of defining a DbContext or executing any initialization or queries, and so it does not break the separation of concerns.
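For illustration, an entity in the core project can carry persistence metadata via attributes while the project defines no DbContext and runs no queries (this Post and its members are hypothetical; the attributes are the standard System.ComponentModel.DataAnnotations ones):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Lives in the core/entities project: annotated for persistence,
// but nothing in this project defines a DbContext or executes a query.
public class Post
{
    [Key]
    public Guid Id { get; set; }

    [Required, MaxLength(280)]
    public string Body { get; set; }
}
```

The EF-aware project (the one with the DbContext) picks the attributes up at model-building time; the core project stays free of any EF initialization.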

Methods for breaking apart a large DbContext with many relationships

A project I'm working on has a DbContext that tracks a lot of different entities. Due to the large number of relationships involved, the first query against the context takes a long time while it generates its views. In order to reduce the startup time, and to better organize contexts into functional areas, I'm looking for ways to split it apart.
These are some methods I've tried so far, and problems I've seen with them:
Create a new smaller Context with a subset of DbSets from the huge Context.
This doesn't help, since EF seems to crawl through all the navigation properties and include all related entities anyway (according to LINQPad at least, which shows all the entities related to the context when it's expanded in the connection panel). We have a few far-reaching top-level entities, so there are very few subsets that can be fully isolated without removing navigation properties and doing a good amount of refactoring.
Split Entities into classes that include navigation properties, and ones that are just db fields, like so:
public class PersonLight
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int JobId { get; set; }
}

public class Person : PersonLight
{
    public Job Job { get; set; }
}

public class ContextLight : DbContext
{
    public virtual DbSet<PersonLight> People { get; set; }
}
No dice here either. Even though Person isn't used at all, EF (or again, possibly just LINQPad) includes Person despite the fact that it can't be used. I assume this is because EF supports inheritance patterns, so it ends up crawling related entities in this direction as well.
Do the same as #2, but with PersonLight and Person in different projects (or use partial classes in different projects). This is the best option so far, but it would be nice to have PersonLight right next to Person for easy reference.
So my questions are:
Are there any better ways to do this that I'm missing?
Why, in #3, does putting them in different projects seem to separate them enough that EF doesn't try to include both? I've tried putting them in different namespaces, but that doesn't do the trick.
Thanks.
Options to speed things along:
Generated views
Bounded Contexts
Ironically, an IIS app pool only needs to generate the views once.
A command-line run, based on my tests, generates the views each time.
Not sure what LINQPad does.
BTW, I didn't originally add this link since you tagged the question EF6.
But in case others aren't on EF6: there are some performance improvements reported. More information here:
EF6 Ninja edition

How to insert an ObservableCollection property to a local sqlite-net db?

I have a quick question about the sqlite-net library, which can be found here: https://github.com/praeclarum/sqlite-net.
The thing is, I have no idea how collections and custom objects will be inserted into the database, and how to convert them back when querying, if needed.
Take this model for example:
[PrimaryKey, AutoIncrement]
public int Id { get; set; }

private string _name; // The name of the subject, i.e. "Physics"
private ObservableCollection<Lesson> _lessons;
Preface: I've not used sqlite-net; rather, I spent some time simply reviewing the source code on the github link posted in the question.
From the first page on the sqlite-net github site, there are two bullet points that should help in some high level understanding:
Very simple methods for executing CRUD operations and queries safely (using parameters) and for retrieving the results of those queries in a strongly typed fashion
In other words, sqlite-net will work well with non-complex models; will probably work best with flattened models.
Works with your data model without forcing you to change your classes. (Contains a small reflection-driven ORM layer.)
In other words, sqlite-net will transform/map the result set of the SQL query to your model; again, will probably work best with flattened models.
Looking at the primary source code of SQLite.cs, there is an InsertAll method and a few overloads that will insert a collection.
When querying for data, you should be able to use the Get<T> method and the Table<T> method, and there is also a Query<T> method you could take a look at as well. Each should map the results to the type parameter.
Finally, take a look at the examples and tests for a more in-depth look at using the framework.
I've worked quite a bit with SQLite-net in the past few months (including this presentation yesterday).
how collections, and custom objects will be inserted into the database
I think the answer is they won't.
While it is a very capable database and ORM, SQLite-net is targeting lightweight mobile apps. Because of this lightweight focus, the classes used are generally very simple flattened objects like:
public class Course
{
public int CourseId { get; set; }
public string Name { get; set; }
}
public class Lesson
{
public int LessonId { get; set; }
public string Name { get; set; }
public int CourseId { get; set; }
}
If you then need to join these back together, and to handle insertion and deletion of related objects, that's down to you, the app developer. There's no auto-tracking of related objects like there is in a larger, more complicated ORM stack.
In practice, I've not found this a problem. I find SQLite-net very useful in my mobile apps.
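Stitching flattened rows back together is then plain LINQ in app code, e.g. grouping lessons under their course id (an in-memory sketch reusing the Lesson shape above; with sqlite-net the input would come from a query such as conn.Table<Lesson>()):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Lesson
{
    public int LessonId { get; set; }
    public string Name { get; set; }
    public int CourseId { get; set; }
}

public static class LessonQueries
{
    // The "join" the lightweight ORM won't do for you:
    // group the child rows under their foreign key.
    public static Dictionary<int, List<Lesson>> ByCourse(IEnumerable<Lesson> lessons)
    {
        return lessons
            .GroupBy(l => l.CourseId)
            .ToDictionary(g => g.Key, g => g.ToList());
    }
}
```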

NHibernate: Lazy Loading Properties

So, according to Ayende, Lazy Loading Properties are already in the NHibernate trunk.
My problem is: I can't use the trunk, because I have Fluent NHibernate and LINQ to NHibernate, so I depend on the version they are linked against (version 2.x). I can't and don't want to build all the assemblies myself against the newest version of NHibernate.
So, has someone got information about when NHibernate 3.0 will leave beta and the auxiliaries (LINQ etc.) will be compiled against it?
I appreciate any estimate!
I need this feature so I can use it on blob fields. I don't want to use workarounds that destroy my object model.
You can compile Fluent with the NH 3.0 binaries, and you don't need L2NH anymore; there's a new integrated provider.
Alternatively it isn't much of a model change. Make a new class, Blob, that has Id, Version and Bytes properties, make a new table to match. Add the new class as a protected property to each of your classes that currently has a blob. Use it like a backing store. Change your mapping to map the underlying property instead of the public one.
public class MyClass
{
    public MyClass()
    {
        MyBlobProperty_Blob = new Blob();
    }

    public virtual byte[] MyBlobProperty
    {
        get { return MyBlobProperty_Blob.Bytes; }
    }

    protected virtual Blob MyBlobProperty_Blob { get; private set; }
}
It is a significant schema change however. This particular solution moves all your binary data into one table.
