DevForce navigation property issue - C#

We are currently rewriting our application. We have hundreds of tables, each of which has a CreatedById and a ModifiedById column, and each of these has a foreign key to our Users table - around 411 foreign keys / navigation properties pointing at that one table.
We use DevForce and EF for our DB access / entity management, and during testing of the migration from our current application to this new one we are getting errors / stack overflows in the serializer and/or extremely long load times (5 seconds for 2000 entities).
I created a test app with the same number of tables but with all 411 of those foreign keys removed; the load time dropped to under a second and the serialization errors also went away.
The issue I now have is that I do actually need to reach the Users table from a number of other entities using dot navigation / nav properties, so I was wondering if anyone knows how to add these to the buddy class via code.
I have Googled and found an old DevForce forum post that mentions some sample code for doing this, but there was no link to the actual sample.
If anyone has any ideas / suggestions I would really appreciate it.
Thanks in advance

I'm curious why you're seeing the serialization errors and poor load time. If you have the time, please open a support case at the DevForce support site with additional information so we can take a look at it.
The simplest approach to adding hand-coded navigation properties is to use the fact that the entity classes are partial classes. The "buddy" class is intended more to provide overrides for property attributes, so it isn't a good fit here.
To hand-code a navigation property, a simple example (without validation) could look something like this:
public partial class Customer {
    private User _creationUser;

    public User CreationUser {
        get {
            // Lazily fetch the related User the first time the property is read.
            if (_creationUser == null) {
                var query = new EntityKeyQuery(new EntityKey(typeof(User), this.CreatedById));
                _creationUser = this.EntityAspect.EntityManager
                    .ExecuteQuery(query).Cast<User>().FirstOrDefault();
            }
            return _creationUser;
        }
        set {
            _creationUser = value;
            // Keep the foreign key in sync; guard against a null assignment.
            if (value != null) {
                this.CreatedById = value.UserId;
            }
        }
    }
}
The problem here is that you have hundreds of entity classes which need these properties, so implementing these on a base entity class which your other entities extend would be a good idea. The DevForce Resource Center has more information on using a base entity class.
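A hedged sketch of that idea, assuming code generation is configured to use the custom base class as the DRC describes (BaseEntity and GetReference are illustrative names; the DevForce calls mirror the example above):

public abstract class BaseEntity : Entity {
    // Shared lookup helper so each per-entity navigation property stays a one-liner.
    protected T GetReference<T>(object keyValue) where T : class {
        var query = new EntityKeyQuery(new EntityKey(typeof(T), keyValue));
        return this.EntityAspect.EntityManager.ExecuteQuery(query).Cast<T>().FirstOrDefault();
    }
}

public partial class Customer : BaseEntity {
    public User CreationUser {
        get { return GetReference<User>(this.CreatedById); }
        set { if (value != null) this.CreatedById = value.UserId; }
    }
}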
Another option is to override the template generation to generate custom properties on each entity. The DRC also has some information on this.

Related

Manipulating large quantities of data in ASP.NET MVC 5

I am currently working towards implementing a charting library with a database that contains a large amount of data. For the table I am using, the raw data is spread out across 148 columns, with over 1000 rows. As I have only created models for tables that contain a few columns, I am unsure how to go about implementing a model for this particular table. My usual method of creating a model and using the Entity Framework to connect it to the database doesn't seem practical, as hand-writing 148 properties, one for each column, does not seem efficient.
My questions are:
What would be a good method to implement this table into an MVC project so that there are read actions that allow one to pull the data from the table?
How would one structure a model so that one could read 148 columns of data from it without having to declare 148 properties?
Is the Entity Framework an efficient way of achieving this goal?
Entity Framework Database First sounds like the perfect solution for your problem.
Database First models are just what they sound like: the database exists before the code does. Entity Framework will create the models as partial classes for you based on the tables you point it at.
Additionally, exceptions won't be thrown if the table changes (as long as nothing is accessing a field that doesn't exist), which can be extremely beneficial in a lot of cases. Migrations are not necessary. Instead, all you have to do is right click on the generated model and click "Update Model from Database" and it works like magic. The whole process can be significantly faster than Code First.
Yes, with Database First you can create the entities very quickly. Also remember that it is good practice to return only the fields you really need: your entity has 148 columns, but if your app needs only 10 fields, convert the original entity to a model or viewmodel and use that instead!
One excellent tool that can help you here is AutoMapper.
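For example, a minimal sketch of that conversion with AutoMapper's configuration API (ChartRow and ChartViewModel are placeholder names, not from the question):

using AutoMapper;

// Configure the mapping once at startup...
var config = new MapperConfiguration(cfg => cfg.CreateMap<ChartRow, ChartViewModel>());
var mapper = config.CreateMapper();

// ...then convert entities to the slimmer viewmodel before returning them.
ChartViewModel vm = mapper.Map<ChartViewModel>(entityFromDatabase);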
Regards,
Wow, that's a lot of columns!
Given your circumstances a few thoughts come to mind:
1: If your problem is the leg work of creating that many properties you could look at Entity Framework Power Tools. EF Tools is able to reverse engineer a database and create the necessary models/entity relation mappings for you, saving you a lot of the grunt work.
To save you pulling all of that data out in one go you can then use projections like so:
var result = DbContext.ChartingData.Select(x => new PartialDto {
    Property1 = x.Column1,
    Property50 = x.Column50,
    Property109 = x.Column109
});
A tool like AutoMapper will allow you to do this with ease via simple, configurable mapping profiles:
var result = DbContext.ChartingData.Project().To<PartialDto>().ToList();
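For that projection to work, a map has to be configured once up front; a minimal sketch using the static API that shipped alongside Project().To<T>() (ChartingRow is an assumed name for the entity type behind ChartingData):

using AutoMapper;
using AutoMapper.QueryableExtensions; // provides Project().To<T>()

// One-time configuration, e.g. at application startup:
Mapper.CreateMap<ChartingRow, PartialDto>();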
2: If you have concerns with the performance of manipulating such large entities through Entity Framework then you could also look at using something like Dapper (which will happily work alongside Entity Framework).
This would save you the hassle of modelling the entities for the larger tables but allow you to easily query/update specific columns:
public class ModelledDataColumns
{
    public string Property1 { get; set; }
    public string Property50 { get; set; }
    public string Property109 { get; set; }
}

const string sqlCommand = "SELECT Property1, Property50, Property109 FROM YourTable WHERE Id = @Id";

// Assumes an open SqlConnection named "connection".
IEnumerable<ModelledDataColumns> collection = connection.Query<ModelledDataColumns>(sqlCommand, new { Id = 5 }).ToList();
Ultimately if you're keen to go the Entity Framework route then as far as I'm aware there's no way to pull that data from the database without having to create all of the properties one way or another.

Archive data based on conditions

We've been using the Entity Framework code-first approach and the Fluent API, and have this requirement: an entity with multiple navigation properties and potentially a very large number of rows.
This entity reflects the data of a process, and a field captures whether the entity is active in the process. I've provided an example below.
public class ProcessEntity
{
    // Other properties and navigation properties
    public bool IsInProcess { get; set; }
}
What I've been trying to do is have another table (a mapping table or something) that contains only the ProcessEntity items whose IsInProcess property is set to true, i.e., a table that provides just the ProcessEntities that are active in the process.
The whole idea behind this segregation is that a lot of queries and reports are generated only on the items that are still in process, and querying the whole table every time with a Where clause would be a performance bottleneck. Please correct me if I'm wrong.
I thought of having a mapping table but the entries have to be manually added and removed based on the condition.
Is there any other solution or alternative design ideas for this requirement?
Consider using an index.
Your second table is what an index would do.
Let the DB do its job.
Given that a boolean isn't a great differentiator, a date or similar as part of the index may also be useful.
e.g. How to create index in Entity Framework 6.2 with code first
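For example, a hedged sketch using the fluent HasIndex support added in EF 6.2 (LastUpdated is an assumed date column playing the role of the extra differentiator):

// In OnModelCreating: a composite index so "still in process" queries
// filtered by date can be served from the index.
modelBuilder.Entity<ProcessEntity>()
    .HasIndex(p => new { p.IsInProcess, p.LastUpdated })
    .HasName("IX_ProcessEntity_InProcess_LastUpdated");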

Prevent cached objects from ending up in the database with Entity Framework

We have an ASP.NET project with Entity Framework and SQL Azure.
A big part of our data only needs to be updated a few times a day, other data is very volatile.
The data that barely changes we cache in memory at startup, detach from the context and then use mainly for reading, drastically lowering the number of database requests we have to make.
The volatile data is requested every time, by a DbContext per HTTP request.
When we do an update to the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server.
So far, so good.
Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data, and did a SaveChanges.
Well, that was quite a mess.
The whole data tree was added again and again on every update, corrupting the whole database with masses of duplicated data.
As a complete hack I added a completely arbitrary column with a UniqueConstraint and some gibberish data to one of the root tables, in the hope that SaveChanges() will fail the next time we introduce such a bug, because it will violate the unique constraint.
But it is of course hacky, and I'm still pretty scared ;P
Are there any better ways to prevent whole trees of cached objects from ending up in the database?
More information
Project is ASP.NET MVC
I cache this data because it is mainly read-only, and this saves a ton of extra database calls per HTTP request.
This is a high-traffic website with a lot of personally customized views. Having the POCO data in memory works really well for what I want, except for the problem I mentioned.
It is a bit more complicated, but a simplified version is that I cache the objects via a singleton, e.g.:
EntityCache.Instance.LolCats = new DbContext().LolCats.AsNoTracking().ToList();
This cache I dependency-inject into my controllers.
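Roughly, the singleton looks something like this (a simplified sketch; everything beyond EntityCache, Instance, LolCats, and the line above is illustrative):

public sealed class EntityCache
{
    private static readonly EntityCache _instance = new EntityCache();
    private EntityCache() { }

    public static EntityCache Instance { get { return _instance; } }

    // Populated at startup and refreshed when an update message arrives;
    // AsNoTracking means the entities are already detached POCOs.
    public List<LolCat> LolCats { get; set; }
}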
You can solve it like this:
1) Create an interface like this:
public interface IIsReadOnly
{
bool IsReadOnly { get; set; }
}
2) Implement this interface in all of the entities that can be cached. When you read and cache them, set the IsReadOnly property to true. This flag will be used when SaveChanges is invoked. Remember to decorate this property with the [NotMapped] attribute, or use any other means to make EF ignore it.
public class ACacheableEntitySample : IIsReadOnly
{
    [NotMapped]
    public bool IsReadOnly { get; set; }

    // define the "regular" entity properties
}
NOTE: you can include the property directly in the class definition (if using Code First), or use partial classes (for Db First, Model First, or Code First).
NOTE: alternatively you can make EF ignore the IsReadOnly property using the Fluent API, or even better a custom convention (EF 6+)
3) Override your inherited DbContext.SaveChanges method. In the overridden method, review all the entries with pending changes, and if they are read only, change their state to Unchanged:
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries())
    {
        var cacheable = entry.Entity as IIsReadOnly; // if it's a cacheable entity
        if (cacheable != null && cacheable.IsReadOnly) // and it was marked as readonly when caching
        {
            entry.State = EntityState.Unchanged; // so that it's not updated
        }
    }
    return base.SaveChanges();
}
NOTE: This is sample code to explain what you need to do. In your final implementation you can do it with a simple LINQ sentence that gets all the IIsReadOnly entities which have IsReadOnly set to true, and sets their state to Unchanged.
You can use the IIsReadOnly entities in another DbContext and manipulate them in the usual way. For example, if you get one of these entities, update it, and call SaveChanges, the changes will be saved because IsReadOnly will have the default false value. But you'll easily avoid accidentally saving changes to cached data, simply by setting the IsReadOnly property to true when caching.
Original answer deleted because it was a waste of time.
Your post and preceding comments are a perfect example of the XY Problem.
You say:
I really need a solution for the problem, not for the architecture
What if the architecture is the problem?
The problem you presented
A caching solution you implemented that violates at least a half dozen best practices has (surprise!) blown up in your face. You've managed to stop it from blowing up again via a spectacular (not in a good way) hack, but you want to know how to do it in a way that won't require such a hack.
The problem you had
You needed to cache some data because it was getting too expensive to hit the database for every request.
The answers that were offered
Use foreign keys instead of navigation properties
This is a perfectly valid answer and, surprise, a best practice. Navigation properties can change any time you regenerate the code in your Entity Data Model and are often ambiguous. With a bit of effort you could have used this and never had to worry about EF's handling of object relationships again.
Cache models instead of Entity objects
Another valid answer, and one that requires the least amount of actual work. MVC applications usually require some redundancy between viewmodels and entity objects and if you ever write a proper multi-tier application you'll practically drown in redundant objects. And nobody will accidentally add these objects to a DbContext ever again - because they can't.
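For example, a minimal sketch of that approach (LolCatModel is a hypothetical DTO; LolCats and the cache singleton come from the question, with the cache property now holding models instead of entities):

// A plain model with no EF ties; it cannot wander into a DbContext.
public class LolCatModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Project straight into models when (re)building the cache.
using (var db = new MyDbContext())
{
    EntityCache.Instance.LolCats = db.LolCats
        .Select(c => new LolCatModel { Id = c.Id, Name = c.Name })
        .ToList();
}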
Criticism
You have offered up very little useful information. From what I can tell your approach from the get-go was wrong.
Firstly, dumping whole tables into memory at App_Start is at best a temporary solution. If the table was too big to hit on every request, it's too big to hit on App_Start. What happens if something important breaks while people are using your application and you need to deploy a bug fix ASAP? What happens when your tables get really big and you start getting timeouts from EF while trying to dump them into memory? What happens if 95% of your users only really ever need 10% of that big table you've dumped into memory? Is the memory on your web/cache server going to be enough to accommodate the increasing size of your tables? For how long?
Secondly, no Entity object should remain anywhere after its originating DbContext is disposed. Entity objects behave in a convenient way while their DbContext is in scope and become troublesome POCOs when it's out of scope. I say troublesome because the 'magic' DbContext does with change tracking tends to fool people unfamiliar with the inner workings of EF into thinking that an Entity object is directly connected to a table row in the database. The problem you had illustrates this point perfectly.
Thirdly, it looks like you need to delete and re-dump a whole table to memory, even if you only update a single column in a single row. That's immensely wasteful to both the memory and CPU on your web server, and to your Azure SQL instance(s). What happens when a small bit of data comes in wrong and needs to be updated in a hurry? What if one of your nightly update jobs fails but you need fresh data in the morning?
You may not worry about any of this stuff now, but your solution blowing up in your face should have at the very least raised some red flags. I've had to deal with a lot of caching in projects I've worked on in the past few years, and everything I say here comes from experience.
Proposed solution - On-demand caching
If you've put a little effort into organizing your code, all of your CRUD operations on the database should be in specialized helper classes which I call repositories. Your controller calls its specialized repository (StuffController - StuffRepository), receives a model and binds that model to a view, kinda like this:
public class StuffController : Controller
{
    private MyDbContext _db;
    private StuffRepository _repo;

    public StuffController()
    {
        _db = new MyDbContext();
        _repo = new StuffRepository(_db);
    }

    // ...

    public ActionResult Details(int id)
    {
        var model = _repo.ReadDetails(id);
        // ...
        return View(model);
    }

    protected override void Dispose(bool disposing)
    {
        _db.Dispose();
        base.Dispose(disposing);
    }
}
What on-demand caching would do is wrap that call to the repository in such a way that if the result of that method was already in the cache and it was not stale, it would return it from the cache. Otherwise it would hit the database.
Here's a simplified (and probably nonfunctional) example of a CacheWrapper class so you can understand what it does, using HttpRuntime.Cache:
public static class CacheWrapper
{
    private static readonly List<string> _keys = new List<string>();

    public static List<string> Keys
    {
        get { lock (_keys) { return _keys.ToList(); } }
    }

    public static T Fetch<T>(string key, Func<T> dlgt, bool refresh = false) where T : class
    {
        var result = HttpRuntime.Cache.Get(key) as T;
        if (result != null && !refresh) return result;

        lock (HttpRuntime.Cache)
        {
            // Re-check inside the lock in case another request filled the cache first.
            result = HttpRuntime.Cache.Get(key) as T;
            if (result == null || refresh)
            {
                result = dlgt();
                HttpRuntime.Cache.Insert(key, result); // expiration/priority params elided

                lock (_keys)
                {
                    if (!_keys.Contains(key)) _keys.Add(key);
                }
            }
        }
        return result;
    }
}
And the new way to call things from the controller:
public ActionResult Details(int id)
{
    var model = CacheWrapper.Fetch("StuffDetails_" + id, () => _repo.ReadDetails(id));
    // ...
    return View(model);
}
A slightly more complex version of this is in production on a public web application as we speak and working quite well.

How does your custom class relate to the database?

Okay, so I've studied C# and ASP.NET long enough and would like to know how all these custom classes I created relate to the database. For example,
I have a class called Employee:
public class Employee
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string EmailAddress { get; set; }
}
and I have a database table with the following 4 fields:
ID
Name
EmailAddress
PhoneNumber
It seems like the custom class is my database. And in ASP.NET I can simply run the LINQ to SQL command on my database and get the whole schema of my class without typing out a custom class with getters and setters.
So let's just say that now I am running a query to retrieve a list of employees. I would like to know: how does my application map my Employee class to my database?
By itself, it doesn't. But add any ORM or similar and you start to get closer. For example, with LINQ-to-SQL (which I mention because it is easy to get working with Visual Studio), you typically get (given to you by the tooling) a custom "data context" class, which you use as:
using (var ctx = new MyDatabase()) {
    foreach (var emp in ctx.Employees) {
        // ... work with each employee ...
    }
}
This is generating TSQL and mapping the data to objects automatically. By default the tooling creates a separate Employee class, but you can tweak this via partial classes. This also supports inserts, data changes and deletion.
There are also tools that allow re-use of your existing domain objects; either approach can be successful - each has advantages and disadvantages.
If you only want to read data, then it is even easier; a micro-ORM such as dapper-dot-net allows you to use your own type with TSQL that you write, with Dapper handling the tedious materialisation code.
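For instance, a minimal Dapper sketch against the Employee class from the question (it assumes an open SqlConnection named connection):

var employees = connection.Query<Employee>(
    "SELECT ID, Name, EmailAddress FROM Employee").ToList();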
Your question is a little vague, IMO, but what you are referring to is the Model of the MVC (Model-View-Controller) architecture.
The Model (your Employee class) manages the data of the application. So it can not only get and set (save / update) your data, but it can also be used to notify of a data change (usually to the view).
You mentioned you were using SQL, so more than likely you could create and save an entire employee record by sending an associative array of the table data to the database. The Model class would handle the SQL syntax needed to INSERT the data. In larger MVC frameworks, the Model of your application inherits from several other classes that handle saving properly to back ends other than MS SQL.
Models will also normally have functions to handle finding and updating records. Typically you specify a search field and get the matching record back, including its ID, which you would then pass back into a save / update function to make changes to the record. You could also hook into this level of the Model to create revisions of the data you are saving.
So how the Model correlates to your SQL structure depends on how you write it, or on which framework you decide to use. I believe a common one for ASP.NET is Microsoft's ASP.NET MVC.
Your class cannot be directly mapped to the database without an ORM tool. The ORM tool reads your configuration and maps your class to a DB row according to your mappings, automatically. That means you don't need to read the row and set the class fields explicitly, but you do have to provide mapping files and go through the ORM framework to load the entities; the framework takes care of the rest.
You can check nHibernate and here is getting started on nHibernate.
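As a rough sketch of that flow with NHibernate (it assumes hibernate.cfg.xml and an Employee mapping file already exist):

// Reads hibernate.cfg.xml plus the Employee mapping, then materializes a row.
var cfg = new NHibernate.Cfg.Configuration().Configure();
using (var factory = cfg.BuildSessionFactory())
using (var session = factory.OpenSession())
{
    Employee emp = session.Get<Employee>(1); // loads the row with ID = 1 and maps it
}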

Lookup tables in Entity Framework 4.1 (the proper way) in C#

With the new release of Entity Framework 4.1, I thought it would be a good time to learn how to utilise it in my coding. I've started off well but I seem to have hit a brick wall and I don't know what the best approach is.
My issue is with lookup tables: I can't see how to keep my data as objects (rather than lists, anonymous types, etc.) when pulling in data from a lookup table.
I have looked around on Google but most of the posts I find are prior to the latest release of EF 4.1 and I am assuming that there is a better way to do it.
I have a simple 'invoice header' and 'customer' situation so I have set the mappings up as you would expect (the invoice header has the Id of the customer it relates to).
If I pull in data from only the invoice table then I get a true object that I can bind to a datagrid and later save changes, but this doesn't pull in the customer name. For example:
var results = from c in context.InvoiceHeaders
              select c;
If I restructure the query to pull back specific columns, drilling down into the customer table to get the customer name directly, then I get the data I want, but it's no longer the type of object I would expect (an invoice object). Like this:
var results = from c in context.InvoiceHeaders
              select new { c.CreatedBy, c.Customer.Name };
But it now becomes an anonymous type and it seems to lose its bindings back to the database (hope I'm making sense)
So - my question is, "what is the best/official way to use lookup tables in EF 4.1" and/or "can I use lookup tables and keep my bindings"?
Please let me know if you need me to post any code, but on this occasion, as it was a general question, I didn't feel I needed to.
Thanks in advance,
James
EF classes are partial, so you can extend them:
public partial class InvoiceHeaders
{
    public string CustomerName
    {
        get
        {
            // Guard against a missing Customer reference instead of
            // swallowing a NullReferenceException with a catch-all.
            return this.Customer != null ? this.Customer.Name : string.Empty;
        }
        private set { } // empty setter so binding treats the property as writable
    }
}
But when designing forms, the data-binding tools do not correctly pick up this extension, so you should define a new class and use that class as the data source when binding a component:
public partial class InvoiceHeadersEx : InvoiceHeaders
{
}
and in the Form.Load event change the binding data source:
private void Form1_Load(object sender, EventArgs e)
{
    InvoiceHeadersExDataGridView.DataSource = InvoiceHeadersBindingSource;
    InvoiceHeadersBindingSource.DataSource = context.InvoiceHeaders;
}
I think the answer to this is to make sure you're using reference objects (I think that's what EF calls them) in your structure, so that an Invoice doesn't just have public int ClientId { get; set; } but also has public virtual Client Client { get; set; }. This gives you a direct link to the actual client, and queries should still return Invoice objects.
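A minimal sketch of that shape, using the names from the question (CustomerId/Customer rather than ClientId/Client; the exact property list is assumed):

public class InvoiceHeader
{
    public int Id { get; set; }
    public string CreatedBy { get; set; }

    public int CustomerId { get; set; }            // foreign key column
    public virtual Customer Customer { get; set; } // navigation property (lazy-loaded)
}

// Queries keep returning real InvoiceHeader entities, and the customer
// name is still reachable via dot navigation:
var invoices = context.InvoiceHeaders.ToList();
var name = invoices.First().Customer.Name;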
Oh, I get the problem now. When you create an anonymous type, it's basically a new class (it has a type definition and everything). Because it's a new type, generated by the compiler rather than by EF, it's not an entity type or linked to a data context.
Your best bet is to return the entire Customer object. I appreciate this can cause performance issues when you have large objects; all I can say is, keep your objects smallish.
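If loading the whole Customer eagerly is the goal, a hedged sketch using EF 4.1's Include (the lambda overload comes from the System.Data.Entity namespace):

using System.Data.Entity; // for the lambda Include extension

var results = context.InvoiceHeaders
                     .Include(i => i.Customer)
                     .ToList();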
