I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?
Why?: We are using a SaaS product for tracking our time, projects, etc. and would like to send new service requests to this product to prevent our team from duplicating data entry.
Context A: This db stores service request information. It is a third party DB and we are not able to make changes to the structure of this DB as it could have unintended non-supportable consequences downstream.
Context B: This DB stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure, etc. Unprocessed service requests should find their way into this DB, and another process will identify them as not yet processed and send the records to the SaaS product.
This is the query that I am looking to modify. Initially I was able to do a !list.Contains(c.swHDCaseId), but that cannot handle more than 2100 items (SQL Server's parameter limit). Is there a way to add a join to the other context?
var query = (from c in contextA.Cases
             where monitoredInboxList.Contains(c.INBOXES.inboxName)
             //right join d in contextB.CaseLog on d.ID = c.ID....
             select new
             {
                 //setup fields here...
             });
You could try using GetTable. I think this loads all of contextB.TableB's data first, though I'm not 100% sure on that. I don't have an environment set up to play around in or test this out, so let me know if it works =)
from a in contextA.TableA
join b in contextB.GetTable<TableB>() on a.id equals b.id
select new { a, b }
Your best bet, outside of database solutions, is to join using LINQ (to objects) after execution.
I realize this isn't the solution you were hoping for. At least at this level, you won't have to worry about the IN-list limitation of .Contains.
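A rough sketch of what that could look like, reusing the names from the question (Cases, CaseLog, the ID columns, and the int key type are assumptions based on the snippets above): both sides are materialized first, then the filtering happens in LINQ to Objects.

// Executes against context A: only the monitored inboxes
var casesFromA = (from c in contextA.Cases
                  where monitoredInboxList.Contains(c.INBOXES.inboxName)
                  select c).ToList();

// Executes against context B: just the already-logged IDs
var loggedIds = new HashSet<int>(contextB.CaseLog.Select(d => d.ID));

// LINQ to Objects from here on, so there is no 2100-parameter limit
var unprocessed = casesFromA.Where(c => !loggedIds.Contains(c.ID)).ToList();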
Edit:
"outside of database solutions" above really points to linked-server solutions, where you allow the table/view from context A to exist in the database behind context B.
If you cannot extract the 2 tables into List objects and then join them, then you will probably have to do something database-side. I would recommend creating a linked server and a view on the DB server you have control of. You can then do the join in the view, and you would have a very simple LINQ query that just retrieves the view. I am not sure how LINQ to SQL could ever do a join between 2 data contexts pointing to 2 different servers.
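For illustration only: assuming such a view is created and mapped into context B under a hypothetical name like CasesWithLogs (the property names below are made up as well), the LINQ side collapses to a single-table query because the cross-server join already happened inside the view.

// Hypothetical view mapped into context B; the linked-server join lives in the view definition.
var query = from v in contextB.CasesWithLogs
            where monitoredInboxList.Contains(v.InboxName)
            select v;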
I am having difficulty trying to use LINQ to query a sql database in such a way to group all objects (b) in one table associated with an object (a) in another table into an anonymous type with both (a) and a list of (b)s. Essentially, I have a database with a table of offers, and another table with histories of actions taken related to those offers. What I'd like to be able to do is group them in such a way that I have a list of an anonymous type that contains every offer, and a list of every action taken on that offer, so the signature would be:
List<'a>
where 'a is new { Offer offer, List<OfferHistories> offerHistories}
Here is what I tried initially, which obviously will not work
var query = (from offer in context.Offers
join offerHistory in context.OffersHistories on offer.TransactionId equals offerHistory.TransactionId
group offerHistory by offerHistory.TransactionId into offerHistories
select { offer, offerHistories.ToList() }).ToList();
Normally I wouldn't come to SE with this little information but I have tried many different ways and am at a loss for how to proceed.
Try to avoid .ToList() calls; only use them when really necessary. One important question: do you really need all columns of OffersHistories? Grouping a full object is very expensive, so try grouping only the necessary columns instead. If you really do need all offerHistories for one offer, then I suggest writing a sub-select (this also costs more in terms of performance):
var query = (from offer in context.Offers
select new { offer, offerHistories = (from offerHistory in context.OffersHistories
where offerHistory.TransactionId == offer.TransactionId
select offerHistory) });
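If grouping server-side is acceptable, a group join (join ... into) is another way to get the offer-plus-histories shape the question asks for; a sketch using the same names (note it may still pull every OffersHistories column, which is the cost mentioned above):

var query = (from offer in context.Offers
             join offerHistory in context.OffersHistories
                 on offer.TransactionId equals offerHistory.TransactionId into offerHistories
             select new { offer, offerHistories }).ToList();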
P.S.: it's a good idea to create indexes on foreign key columns and on columns used in where and group by clauses; those will make the query faster.
I'm porting some existing (dynamic) SQL to C# via the SMO namespace. I'm having a little trouble figuring out how to join an existing database to my AlwaysOn Availability Group, though. The SMO namespace has a Database object and an AvailabilityDatabase object, but the two seem to be somewhat orthogonal...I can't see a way to move back and forth between the two concepts. In our existing implementation we create the database, perform some operations on it, create a full backup and then join it to the Availability Group. I'm trying to recreate this workflow in SMO, but getting hung up at the join to Availability Group step. If I do this...
AvailabilityGroup ag = new AvailabilityGroup(sqlServer, myExistingAgName);
AvailabilityDatabase agDb = new AvailabilityDatabase(ag, myExistingDbName);
agDb.JoinAvailablityGroup();
The operation fails and tells me that the AvailabilityDatabase hasn't been created yet. However, if I do this...
AvailabilityGroup ag = new AvailabilityGroup(sqlServer, myExistingAgName);
AvailabilityDatabase agDb = new AvailabilityDatabase(ag, myExistingDbName);
agDb.Create();
agDb.JoinAvailablityGroup();
The operation fails and tells me that the creation of the AvailabilityDatabase failed. Although the error message doesn't explicitly state this, I would assume the reason for the failure was that a DB by the name of myExistingDbName already exists, which is expected. I'm sure I'm just missing something fundamental here, but the MSDN documentation isn't very illustrative, and I'm not having any luck finding any tutorials/examples of this sort of thing online.
To add the database to the Availability Group on primary, I used the following...
// note that the Availability Group is instantiated differently than above
AvailabilityGroup ag = sqlServer.AvailabilityGroups[availabilityGroup];
AvailabilityDatabase agDb = new AvailabilityDatabase(ag, database);
agDb.Create();
After restoring the database on the secondary server(s), here's what I used to join it to the Availability Group...
AvailabilityGroup ag = sqlServer.AvailabilityGroups[availabilityGroup];
ag.AvailabilityDatabases[database].JoinAvailablityGroup();
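For the restore step on the secondary that the answer only mentions in passing, a sketch using SMO's Restore class might look like the following (the backup path is a placeholder and secondaryServer is assumed to be a Server object connected to the secondary replica); the key point is NoRecovery = true, so the database stays in the RESTORING state and can be joined to the group.

// Restore the full backup on the secondary WITH NORECOVERY before joining.
Restore restore = new Restore
{
    Database = database,
    Action = RestoreActionType.Database,
    NoRecovery = true,          // leave the DB in RESTORING state
    ReplaceDatabase = true
};
restore.Devices.AddDevice(@"\\backupShare\MyDatabase.bak", DeviceType.File);
restore.SqlRestore(secondaryServer);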
This question can actually be applied to any language.
It is similar to this one, but not quite the same.
I have a website application that will be displaying data from database.
Three DB tables:
tblProfessor(Id,FirstName,LastName)
tblStudent(Id,FirstName,LastName)
tblProfessorStudent(Id,StudentId,ProfessorId)
So we have Students and Professors. Students can be taught by multiple professors and professors can teach multiple students.
Two ways of querying data:
1. Return a join of all three tables, in which case we transfer some duplicate data.
2. Return three result sets, one for each table. I know multiple sets of data can be returned in one call from my web application. I'm not clear about the mechanics of that call, but I think it will be just one connection to the DB (in contrast to the similar question mentioned above).
The query in the first case:
select
ProfessorId = p.Id
,ProfessorFirstName = p.FirstName
,ProfessorLastName = p.LastName
,StudentId = s.Id
,StudentFirstName = s.FirstName
,StudentLastName = s.LastName
from tblProfessorStudent ps
inner join tblProfessor p
on p.id = ps.ProfessorId
inner join tblStudent s
on s.id = ps.StudentId
The duplication that I am talking about is returning the first and last names of the student and professor in each row - the combination of "student is taught by professor" and "professor teaches student". The duplication results in extra kilobytes that need to be transferred from the DB to the app.
The query in the second case will be as simple as this:
select <columns> from tblProfessor
select <columns> from tblStudent
select <columns> from tblProfessorStudent
How should I approach querying data for my app from the performance perspective?
From a pure performance perspective, nothing beats SQL Server's ability to join data sets in T-SQL, especially when we are talking about large data sets.
Its sole purpose in life is to manage data and data sets, and it does that where the source of the data is.
Joining "over the wire"/on the client will introduce a great deal of (network) overhead and redundant data traffic, and there is little to no way that fancy client-side algorithms can overcome this.
Of course, and as usual: YMMV, "it depends" is always applicable to my statement.
If you are concerned about performance, then you should not return all rows from your tables. Once the database grows, this will cause the application to slow down. You should filter your data to get only the rows you need to display to the user. You can also consider implementing paging, so that you don't display a lot of rows at once.
I think that what matters most in this case is how you are using the data. If you have the correct indexes implemented, SQL Server will join the tables just fine, don't worry about it. I'm pretty sure it will be faster than running 3 selects.

You said you are worried about duplicate data, but what sort of duplication? If you join the 3 tables you'll have the real data, I mean, teachers that teach X students and students that are taught by X teachers. No duplication!

So again, it depends on how you are using the result sets. Are you simply displaying a list of students and a list of teachers? In this case go with option 2. If you need to show that Teacher A has the following students, then go with the join in option 1, because if you choose option 2 you will have to manipulate the ProfessorStudent data set (which I assume has only IDs) to get the names from the other 2 data sets, and that is too much trouble in my opinion.
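For what it's worth, if option 2 is chosen, the client-side stitching that last paragraph warns about might look roughly like this (professors, students, and professorStudents are assumed to be the three already-loaded result sets):

// Build a lookup so each student can be resolved by Id in O(1).
var studentsById = students.ToDictionary(s => s.Id);

// Shape "professor -> list of students" from the matching table.
var professorsWithStudents = professors
    .Select(p => new
    {
        Professor = p,
        Students = professorStudents
            .Where(ps => ps.ProfessorId == p.Id)
            .Select(ps => studentsById[ps.StudentId])
            .ToList()
    })
    .ToList();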
My database structure is this: an OptiUser belongs to multiple UserGroups through the IdentityMap table, which is a matching table (many to many) with some additional properties attached to it. Each UserGroup has multiple OptiDashboards.
I have a GUID string which identifies a particular user (wlid in this code). I want to get an IEnumerable of all of the OptiDashboards for the user identified by wlid.
Which of these two Linq-to-Entities queries is the most efficient? Do they run the same way on the back-end?
Also, can I shorten option 2's Include statements to just .Include("IdentityMaps.UserGroup.OptiDashboards")?
using (OptiEntities db = new OptiEntities())
{
// option 1
IEnumerable<OptiDashboard> dashboards = db.OptiDashboards
.Where(d => d.UserGroups
.Any(u => u.IdentityMaps
.Any(i => i.OptiUser.WinLiveIDToken == wlid)));
// option 2
OptiUser user = db.OptiUsers
.Include("IdentityMaps")
.Include("IdentityMaps.UserGroup")
.Include("IdentityMaps.UserGroup.OptiDashboards")
.Where(r => r.WinLiveIDToken == wlid).FirstOrDefault();
// then I would get the dashboards through user.IdentityMaps.UserGroup.OptiDashboards
// (through foreach loops...)
}
You may be misunderstanding what the Include function actually does. Option 1 is purely query syntax and has no effect on what is returned by Entity Framework. Option 2, with the Include function, instructs Entity Framework to eagerly fetch the related rows from the database when returning the results of the query.
So option 1 will result in some joins, but the "select" part of the query will be restricted to the OptiDashboards table.
Option 2 will result in joins as well, but in this case it will be returning the results from all the included tables, which obviously is going to introduce more of a performance hit. But at the same time, the results will include all the related entities you need, avoiding the [possible] need for more round-trips to the database.
I think the Include will render as joins and you will be able to access the data from those tables on your user object (eager loading the properties).
The Any query will render as an EXISTS and will not load the user object with info from the other tables.
For best performance, if you don't need the additional info, use the Any query.
As has already been pointed out, the first option would almost certainly perform better, simply because it would be retrieving less information. Besides that, I wanted to point out that you could also write the query this way:
var dashboards =
from u in db.OptiUsers where u.WinLiveIDToken == wlid
from im in u.IdentityMaps
from d in im.UserGroup.OptiDashboards
select d;
I would expect the above to perform similarly to the first option, but you may (or may not) prefer the above form.
I am very new to LINQ to SQL, so please forgive me if its a layman sort of question.
I see at many places that we use "select new" keyword in a query.
For e.g.
var orders = from o in db.Orders
             select new
             {
                 o.OrderID,
                 o.CustomerID,
                 o.EmployeeID,
                 o.ShippedDate
             };
Why don't we just remove select new and just use "select o"
var orders = from o in db.Orders select o;
The only difference I can see is in performance, i.e. the second query will take more time to execute than the first one.
Are there any other "differences" or "better to use" concepts between them ?
With the new keyword they are building an anonymous object with only those four fields. Perhaps Orders has 1000 fields, and they only need 4 fields.
If you are doing it in LINQ-to-SQL or Entity Framework (or other similar ORMs), the SELECT it builds and sends to SQL Server will only load those 4 fields (note that NHibernate doesn't exactly support projections at the db level: when you load an entity, you have to load it completely). Less data is transmitted over the network, AND there is a small chance that this data is contained in an index (loading data from an index is normally faster than loading from the table, because the table could have 1000 fields while the index could contain EXACTLY those 4 fields).
The operation of selecting only some columns in SQL terminology is called PROJECTION.
A concrete case: let's say you build a file system on top of SQL. The fields are:
filename VARCHAR(100)
data BLOB
Now you want to read the list of the files. That's a simple SELECT filename FROM files in SQL. It would be useless to load the data for each file when you only need the filename. And remember that the data part could "weigh" megabytes, while the filename part is at most 100 characters.
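In LINQ to SQL, that "filename only" read is exactly a select new projection (Files and Filename are illustrative names for this sketch):

// Only the filename column is read; the BLOB is never transferred.
var fileNames = (from f in db.Files
                 select new { f.Filename }).ToList();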
After reading how much "fun" using new with anonymous objects is, remember to read what #pleun has written, and remember: ORMs are like icebergs: 7/8 of their workings are hidden below the surface and ready to bite you back.
The answer given is fine; however, I would like to add another aspect.
By using select new { }, you disconnect from the DataContext, and that makes you lose the change-tracking mechanism of LINQ to SQL.
So for only displaying data it is fine and will lead to a performance increase.
BUT if you want to do updates, it is a no-go.
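To make that concrete, a minimal sketch (someOrderId is a placeholder): entities returned by select o stay attached to the DataContext, so edits are picked up by SubmitChanges, whereas an anonymous projection gives you nothing to update.

// Tracked: the Order entity is attached to the DataContext.
var order = db.Orders.First(o => o.OrderID == someOrderId);
order.ShippedDate = DateTime.Today;
db.SubmitChanges();                       // generates the UPDATE

// Not tracked: an anonymous type has no identity in the DataContext,
// so there is nothing SubmitChanges could write back.
var projection = db.Orders
                   .Select(o => new { o.OrderID, o.ShippedDate })
                   .First(o => o.OrderID == someOrderId);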
In the select new, we're creating a new anonymous type with only the properties you need. They'll all get the property names and values from the matching Orders. This is helpful when you don't want to pull back all the properties from the source. Some may be large (think varchar(max), binary, or xml datatypes), and we might want to exclude those from our query.
If you were to select o, then you'd be selecting an Order with all its properties and behaviours.