Simplest way to do Lazy Loading with LINQ2SQL - c#

I have a read-only database, so I am turning off ObjectTracking (thus implicitly turning off DeferredLoading).
I wish to do lazy loading and not use LoadWith<>.
What is the simplest way to explicitly tell LINQ to go and lazily fetch a relation just before I need the data itself?
For example: a simple dbml
If I have the following code:
TestDbDataContext context = new TestDbDataContext(Settings.Default.TestersConnectionString);
context.ObjectTrackingEnabled = false;
var result = context.Employees.ToList();
foreach (var employee in result)
{
    // HERE Should load gift list
    foreach (var gift in employee.Gifts)
    {
        Console.WriteLine(gift.Name);
    }
}
I know I can write a full query again, but I hope we can find together a better way.

You are fighting the system... 2 thoughts:
if you know you need the other data (nested foreach), why wouldn't you want to use LoadWith? That is pretty much the textbook use case
since you know (from your post) that object tracking is required for lazy loading, why not just enable object tracking? Data contexts should usually be treated as "units of work" (i.e. short-lived), so this isn't likely to hurt much in reality.
See the official replies here for why these two options (object tracking and deferred loading) are linked.
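For reference, the LoadWith route from the first point might look like this. A minimal sketch, assuming the Employee/Gift classes generated from the question's dbml; whether object tracking can stay disabled alongside LoadOptions is worth verifying against your LINQ to SQL version:

```csharp
using System;
using System.Data.Linq; // DataLoadOptions lives here

var context = new TestDbDataContext(Settings.Default.TestersConnectionString);
context.ObjectTrackingEnabled = false; // still read-only

var options = new DataLoadOptions();
options.LoadWith<Employee>(e => e.Gifts); // eager-load Gifts alongside each Employee
context.LoadOptions = options;

var result = context.Employees.ToList(); // Gifts come back with the employees
foreach (var employee in result)
    foreach (var gift in employee.Gifts)
        Console.WriteLine(gift.Name);
```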

Use a LazyList:
http://blog.wekeroad.com/blog/lazy-loading-with-the-lazylist/
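The idea behind the LazyList is a collection that holds on to the query and only materializes it the first time it is touched. A minimal sketch of that pattern (wrapping IEnumerable&lt;T&gt; here for brevity; the version in the linked post wraps IQueryable&lt;T&gt;):

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Defers materialization of a query until the list is first accessed.
public class LazyList<T> : IEnumerable<T>
{
    private readonly IEnumerable<T> _source;
    private IList<T> _inner;

    public LazyList(IEnumerable<T> source) { _source = source; }

    // First access runs the query; later accesses reuse the buffered list.
    private IList<T> Inner => _inner ?? (_inner = _source.ToList());

    public T this[int index] => Inner[index];
    public int Count => Inner.Count;

    public IEnumerator<T> GetEnumerator() => Inner.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

Handing a relation out as a LazyList&lt;T&gt; built from a deferred query keeps the fetch from happening until the inner foreach actually runs.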


Db4O activation depth, Faq, Best Practise for Web Application

Our database contains 4,000,000 records (SQL Server) and its physical size is 550 MB.
Entities in the database are related to each other in a graph style. When I load an entity from the db with 5-level depth, there is a problem (all records are loaded).
Is there any mechanism like Entity Framework's Include("MyProperty.ItsProperty")?
What are the best types to use with db4o databases?
Are there any issues with Guid or generic collections?
Is there any best practice for a web application with db4o? Session containers + embedded db4o, or client/server db4o?
Thanks for the help.
Thanks for the good explanation. I want to give my exact problem as a sample:
I have three entities (N-N relationship; B is an intersection entity. Concept: graph):
class A
{
    public B[] BList;
    public int Number;
    public R R;
}

class B
{
    public A A;
    public C C;
    public D D;
    public int Number;
}

class C
{
    public B[] BList;
    public E E;
    public F F;
    public int Number;
}
I want to query dbContext.A.Include("BList.C.BList.A").Include("BList.C.E.G").Where(....)
I want to get: A.BList.C.BList.A.R
But I don't want to get: A.R
I want to get: A.BList.C.E.G
But I don't want to get: A.BList.C.F
I want to get: A.BList.C.E.G
But I don't want to get: A.BList.D
Note: these requirements can change from one query to another.
Extra question is there any possibility to load
A.BList[#Number<120].C.BList.A[#Number>100] Super syntax :)
Activation: As you said, db4o uses its activation mechanism to control which objects are loaded. To prevent too many objects from being loaded, there are different strategies.
Lower the global default activation depth: configuration.Common.ActivationDepth = 2. Then use the strategies below to activate objects on demand.
Use class-specific activation configuration, like cascading activation, minimum and maximum activation depth, etc.
Activate objects explicitly on demand: container.Activate(theObject, 5)
However, all this is rather painful on complex object graphs. The only strategy to escape that pain is transparent activation. Create an attribute like TransparentlyActivated, use this attribute to mark your stored classes, then use the db4o tool to enhance your classes. Add the db4o tool command to the post-build events in Visual Studio, like: PathTo\Db4oTool.exe -ta -debug -by-attribute:YourNamespace.TransparentlyActivated $(TargetPath)
Guid, Generic Collections:
No (in version 7.12 or 8.0). However, if you store your own structs: those are handled very poorly by db4o.
WebApplication: I recommend an embedded container, and then a session container for each request.
Update for extended question part
To your case: for such a complex activation schema I would use transparent activation.
I assume you are using properties and not public fields in your real scenario; otherwise transparent persistence doesn't work.
Transparent activation basically loads an object the moment a method/property is called for the first time. So when you access the property A.R, A itself is loaded, but not the referenced objects. I'll just go through a few of your access patterns to show what I mean:
Getting 'A.BList.C.BList.A.R'
A is loaded when you access A.BList. The BList array is filled with unactivated objects.
You keep navigating further to BList.C. At this moment the BList object is loaded.
Then you access C.BList. db4o loads the C object.
And so on and so forth.
So when you get 'A.BList.C.BList.A.R', 'A.R' isn't loaded.
An unloaded object is represented by an 'empty' shell object, which has all values set to null or the default value. Arrays are always fully loaded, but filled at first with unactivated objects.
Note that there's no real query syntax to do this kind of elaborate load request. You load your start object and then pull stuff in as you need it.
I also need to mention that this kind of access will perform terribly over the network with db4o.
Yet another hint: if you want to do elaborate work on a graph structure, you should also take a look at graph databases, like Neo4j or the Sones GraphDB.

Copying from EntityCollection to EntityCollection impossible?

How would you do this (pseudo code): product1.Orders.AddRange(product2.Orders);
However, the function "AddRange" does not exist, so how would you copy all items in the EntityCollection "Orders" from product2 to product1?
Should be simple, but it is not...
The problem is deeper than you think.
Your foreach attempt fails, because when you call product1.Orders.Add, the entity gets removed from product2.Orders, thus rendering the existing enumerator invalid, which causes the exception you see.
So why does the entity get removed from product2? Well, it seems quite simple: because an Order can only belong to one product at a time. The Entity Framework takes care of data integrity by enforcing rules like this.
If I understand correctly, your aim here is to actually copy the orders from one product to another, am I correct?
If so, then you have to explicitly create a copy of each order inside your foreach loop, and then add that copy to product1.
For some reason that is rather obscure to me, there is no automated way to create a copy of an entity. Therefore, you pretty much have to copy all of the Order's properties manually, one by one. You can make the code look somewhat neater by incorporating this logic into the Order class itself: create a method named Clone() that copies all the properties. Be sure, though, not to copy the "owner product" reference property, because your whole point is to give the copy another owner product, isn't it?
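A Clone() along those lines might look like the following. This is a minimal sketch with a hypothetical, hand-written Order; the real class is designer-generated, and the property names here are assumptions:

```csharp
using System;

public class Product { }

public class Order
{
    public int Quantity { get; set; }
    public DateTime PlacedOn { get; set; }
    public Product Product { get; set; } // the "owner product" reference

    // Copies the value properties, but deliberately NOT the owner
    // product reference, so the copy can be attached to a new product.
    public Order Clone()
    {
        return new Order
        {
            Quantity = Quantity,
            PlacedOn = PlacedOn
            // Product intentionally left null
        };
    }
}
```

With that in place, the copy loop becomes: foreach (var o in product2.Orders) product1.Orders.Add(o.Clone());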
Anyway, do not hesitate to ask more questions if something is unclear. And good luck.
Fyodor
Based on the previous two answers, I came up with the following working solution:
public static void AddRange<T>(this EntityCollection<T> destinationEntityCollection,
    EntityCollection<T> sourceEntityCollection) where T : class
{
    // Snapshot the source first: Add() re-parents each entity, removing it
    // from the source collection and invalidating any live enumerator.
    var array = new T[sourceEntityCollection.Count];
    sourceEntityCollection.CopyTo(array, 0);
    foreach (var entity in array)
    {
        destinationEntityCollection.Add(entity);
    }
}
Yes, the usual collection related functions are not there.
But,
1. Did you check the CopyTo method?
2. Do you see any problem with using the iterator? You know, GetEnumerator: go through the collection and copy the entities.
The above two can solve your problem. But I'm sure in .NET 3.0+ there are more compact solutions.
My answers relate to .NET 2.0.

Common problem for me in C#, is my solution good, stupid, reasonable? (Advanced Beginner)

Ok, understand that I come from Cold Fusion so I tend to think of things in a CF sort of way, and C# and CF are as different as can be in general approach.
So the problem is: I want to pull a "table" (thats how I think of it) of data from a SQL database via LINQ and then I want to do some computations on it in memory. This "table" contains 6 or 7 values of a couple different types.
Right now, my solution is that I do the LINQ query using a generic List of a custom type. So my example is the RelevanceTable. I pull some data out that I want to evaluate, starting with .Contains. It appears that .Contains wants to act on the whole list or nothing. So I can use it if I have List<string>, but if I have List<ReferenceTableEntry> where ReferenceTableEntry is my custom type, I would need to implement IEquatable and tell the compiler what exactly "Equals" means.
While this doesn't seem unreasonable, it does seem like a long way to go for a simple problem, so I have this sneaking suspicion that my approach is flawed from the get-go.
If I want to use LINQ and .Contains, is implementing the interface the only way? It seems like there should just be a way to say which field to operate on. Is there another collection type besides List that has this ability? I have started using List a lot for this, and while I have looked and looked, I see some other but not necessarily superior approaches.
I'm not looking for some fine point of performance or compactness or readability, just wondering if I am using a Phillips-head screwdriver on a hex screw. If my approach is a "decent" one, but not the best, of course I'd like to know a better one, but just knowing that it's in the ballpark would give me a little "Yeah! I'm not stupid!" and I would at least finish what I am doing completely before switching to another method.
Hope I explained that well enough. Thanks for your help.
What exactly is it you want to do with the table? It isn't clear. However, the standard LINQ (-to-Objects) methods will be available on any typed collection (including List<T>), allowing the full range of Where, First, Any, All, etc.
So: what is it you are trying to do? If you had the table, what value(s) would you want?
As a guess (based on the Contains stuff) - do you just want:
bool found = table.Any(x => x.Foo == foo); // or someObj.Foo
?
There are overloads of some of the methods in the List<T> class that take a delegate (optionally in the form of a lambda expression), which you can use to specify what field to look for.
For example, to look for the item where the Id property is 42:
ReferenceTableEntry found = theList.Find(r => r.Id == 42);
The found variable will have a reference to the first item that matches, or null if no item matched.
There are also LINQ extension methods that take a delegate or an expression. This will do the same as the Find method:
ReferenceTableEntry found = theList.FirstOrDefault(r => r.Id == 42);
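Putting both answers together in a runnable form, with a stand-in ReferenceTableEntry type (the real one lives in the asker's project, so the members here are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-in for the asker's custom type.
public class ReferenceTableEntry
{
    public int Id { get; set; }
    public string Word { get; set; }
}

public static class LookupDemo
{
    public static ReferenceTableEntry Lookup(List<ReferenceTableEntry> theList, int id)
    {
        // Both List<T>.Find and Enumerable.FirstOrDefault take a predicate,
        // so no IEquatable implementation is needed to match on one field.
        return theList.FirstOrDefault(r => r.Id == id);
    }
}
```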
OK, so if I'm reading this correctly, you want to use the Contains method. When using this with collections of objects (such as ReferenceTableEntry) you need to be careful, because what you're really asking is whether the collection contains an object that IS the same object as the one you're comparing against.
If you use the .Find() or .FindAll() method you can specify the criteria that you want to match on using an anonymous method.
So, for example, if you want to find all ReferenceTableEntry records in your list that have an Id greater than 1, you could do something like this:
List<ReferenceTableEntry> listToSearch = // populate list here
var matches = listToSearch.FindAll(x => x.Id > 1);
matches will be a list of the ReferenceTableEntry records that have an Id greater than 1.
Having said all that, it's not completely clear that this is what you're trying to do.
Here is the LINQ query involved that creates the object I am talking about, and the problem line is:
.Where (searchWord => queryTerms.Contains(searchWord.Word))
List<queryTerm> queryTerms = MakeQueryTermList();

public static List<RelevanceTableEntry> CreateRelevanceTable(List<queryTerm> queryTerms)
{
    SearchDataContext myContext = new SearchDataContext();
    var productRelevance = (from pwords in myContext.SearchWordOccuranceProducts
                            where (myContext.SearchUniqueWords
                                .Where(searchWord => queryTerms.Contains(searchWord.Word))
                                .Select(searchWord => searchWord.Id)).Contains(pwords.WordId)
                            orderby pwords.WordId
                            select new { pwords.WordId, pwords.Weight, pwords.Position, pwords.ProductId });
}
This query returns a list of WordIds that match the submitted search string (when it was a List<string> and it was just the word, that worked fine because, as an answerer mentioned before, they were the same type of objects). My custom type here is queryTerms, a List that contains WordId, ProductId, Position, and Weight. From there I go about calculating the relevance by doing various operations on the created object: sum Weight by product, use position matches to bump up Weights, etc. My point in keeping this separate was that the rules for those operations will change, but the basic factors involved will not. I would rather it be even MORE separate (I'm still learning, I don't want to get fancy), but the rules for local versus interpreted LINQ queries seem to trip me up when I do.
Since CF has supported queries of queries forever, that's how I tend to lean: pull the data you need from the db, then do your operations (which include queries with aggregate functions) on the in-memory table.
I hope that makes it more clear.

Dynamic "WHERE" like queries on memory objects

What would be the best approach to allow users to define a WHERE-like constraints on objects which are defined like this:
Collection<object[]> data
Collection<string> columnNames
where object[] is a single row.
I was thinking about dynamically creating a strongly typed wrapper and just using Dynamic LINQ, but maybe there is a simpler solution?
DataSets are not really an option since the collections are rather large (40,000+ records) and I don't want to create a DataTable and populate it every time I run a query.
What kind of queries do you need to run? If it's just equality, that's relatively easy:
public static IEnumerable<object[]> WhereEqual(
    this IEnumerable<object[]> source,
    Collection<string> columnNames,
    string column,
    object value)
{
    int columnIndex = columnNames.IndexOf(column);
    if (columnIndex == -1)
    {
        throw new ArgumentException("Unknown column: " + column);
    }
    return source.Where(row => Object.Equals(row[columnIndex], value));
}
If you need something more complicated, please give us an example of what you'd like to be able to write.
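For something more complicated, user-chosen conditions can be composed as predicates over the rows. A sketch of that idea (the class and method names are made up), which maps directly onto expressions like ((col1 = "abc") or (col2 = "xyz")) and (col3 = "123"):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Composable predicates over object[] rows addressed by column name.
public static class RowQuery
{
    public static Func<object[], bool> And(
        Func<object[], bool> left, Func<object[], bool> right)
        => row => left(row) && right(row);

    public static Func<object[], bool> Or(
        Func<object[], bool> left, Func<object[], bool> right)
        => row => left(row) || right(row);

    public static Func<object[], bool> Equal(
        IList<string> columnNames, string column, object value)
    {
        int i = columnNames.IndexOf(column);
        if (i == -1) throw new ArgumentException("Unknown column: " + column);
        // Object.Equals handles nulls and boxed value types alike.
        return row => Object.Equals(row[i], value);
    }
}
```

Usage: build the predicate tree from whatever the user entered, then pass it to rows.Where(pred).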
If I get your point: you'd like to support users writing the where clause externally. I mean the users are real users, not developers, so you're seeking a solution for the UI-control-to-where-condition bridge. I only thought this because you mentioned Dynamic LINQ.
So if I'm correct, what you really want to do is:
give the user the ability to use column names
give them the ability to describe a bool function (which will serve as the where criteria)
compose the query dynamically and run it
For this task let me propose: Rules from the System.Workflow.Activities.Rules namespace. There are several designers available for rules, not to mention the ones shipped with Visual Studio (the web is another question, but there are several for that too). I'd start with rules without workflow, then examine the examples on MSDN. It's a very flexible and customizable engine.
One other thing: LINQ is connected to this problem, since a function returning IQueryable can defer query execution. You can define a query in one place, and in another part of the code extend the returned queryable based on the user's condition (which can then be attached with extension methods).
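That deferred-composition point can be sketched like this; Person and the filter names are illustrative, not from the original post:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Queries
{
    // Returns a deferred query; nothing executes until it's enumerated.
    public static IQueryable<Person> Adults(IQueryable<Person> all)
        => all.Where(p => p.Age >= 18);
}
```

Elsewhere in the code, the caller can keep stacking conditions before anything runs: var q = Queries.Adults(people.AsQueryable()); q = q.Where(p => p.Name.StartsWith(prefix)); var result = q.ToList(); executes once with both filters applied.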
When just using object, LINQ isn't really going to help you very much... is it worth the pain? And Dynamic LINQ is certainly overkill. What is the expected way of using this? I can think of a few ways of adding basic Where operations.... but I'm not sure how helpful it would be.
How about embedding something like IronPython in your project? We use that to allow users to define their own expressions (filters and otherwise) inside a sandbox.
I'm thinking about something like this:
((col1 = "abc") or (col2 = "xyz")) and (col3 = "123")
Ultimately it would be nice to have support for a LIKE operator with the % wildcard.
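For just that LIKE piece, a % wildcard can be translated to a regular expression. A minimal sketch (the class and method names are made up):

```csharp
using System;
using System.Text.RegularExpressions;

public static class Like
{
    // Turns a SQL-style pattern ("ab%yz") into a case-insensitive predicate.
    public static Func<string, bool> ToPredicate(string pattern)
    {
        // Escape regex metacharacters first, then let % match any run of characters.
        string rx = "^" + Regex.Escape(pattern).Replace("%", ".*") + "$";
        return s => s != null && Regex.IsMatch(s, rx, RegexOptions.IgnoreCase);
    }
}
```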
Thank you all guys - I've finally found it. It's called NQuery and it's available from CodePlex. In its documentation there is even an example which contains a binding to my very structure - list of column names + list of object[]. Plus fully functional SQL query engine.
Just perfect.

Pros & cons between LINQ and traditional collection based approaches

Being relatively new to the .net game, I was wondering, has anyone had any experience of the pros / cons between the use of LINQ and what could be considered more traditional methods working with lists / collections?
For a specific example from a project I'm working on: a list of unique id/name pairs is being retrieved from a remote web service.
this list will change infrequently (once per day),
will be read-only from the point of view of the application where it is being used
will be stored at the application level for all requests to access
Given those points, I plan to store the returned values at the application level in a singleton class.
My initial approach was to iterate through the list returned from the remote service and store it in a NameValueCollection in a singleton class, with methods to retrieve from the collection based on an id:
sugarsoap soapService = new sugarsoap();
branch_summary[] branchList = soapService.getBranches();

foreach (branch_summary aBranch in branchList)
{
    branchNameList.Add(aBranch.id, aBranch.name);
}
The alternative using LINQ is to simply add a method that works on the list directly once it has been retrieved:
public string branchName(string branchId)
{
    //branchList populated in the constructor
    branch_summary bs = from b in branchList where b.id == branchId select b;
    return branch_summary.name;
}
Is either better than the other - is there a third way? I'm open to all answers, for both approaches and both in terms of solutions that offer elegance, and those which benefit performance.
I don't think the LINQ you wrote would compile; it'd have to be:
public string branchName(string branchId)
{
    //branchList populated in the constructor
    branch_summary bs = (from b in branchList where b.id == branchId select b).FirstOrDefault();
    return bs == null ? null : bs.name;
}
Note the .FirstOrDefault().
I'd rather use LINQ, for the reason that it can be used in other places to write more complex filters on your data. I also think it's easier to read than the NameValueCollection alternative.
That's my $0.02.
In general, your simple one-line for/foreach loop will be faster than using LINQ. Also, LINQ doesn't always offer significant readability improvements in this case. Here is the general rule I code by:
If the algorithm is simple enough to write and maintain without Linq, and you don't need delayed evaluation, and Linq doesn't offer sufficient maintainability improvements, then don't use it. However, there are times where Linq immensely improves the readability and correctness of your code, as shown in two examples I posted here and here.
I'm not sure a singleton class is absolutely necessary, do you absolutely need global access at all times? Is the list large?
I assume you will have a refresh method on the singleton class for when the properties need to change and also that you have some way of notifying the singleton to update when the list changes.
Both solutions are viable. I think LINQ will populate the collection faster in the constructor (but not noticeably faster). Traditional collection based approaches are fine. Personally, I would choose the LINQ version if only because it is new tech and I like to use it. Assuming your deployment environment has .NET 3.5...
Do you have a method on your webservice for getting branches by Id? That would be the third option if the branch info is needed infrequently.
Shortened and fixed up:
public string BranchName(string branchId)
{
    var bs = branchList.FirstOrDefault(b => b.Id == branchId);
    return bs == null ? null : bs.Name;
}
