Is it possible to compare two fields in the same IEnumerable using LINQ? - c#

In researching this question, I found numerous answers for comparing two different lists. That is not my scenario. I have an IEnumerable of a class with several fields, and I need to filter where one field is greater than another field on the same item.
I can envision many uses for that type of comparison, but I am keeping things very simple in this example.
To give you a better context, here's a simple table made in T-SQL.
T-SQL code:
create table #GetMia1ASummaryBar (
    Id int not null identity(1,1),
    ShrCreditRate float null,
    NonShrCreditRate float null
);
insert into #GetMia1ASummaryBar(ShrCreditRate,NonShrCreditRate)
values (null,1.5),(2.5,0.75),(2,2),(1,null);
-- to see the entire table
select * from #GetMia1ASummaryBar;
-- to filter where the first field is greater than the second
select * from #GetMia1ASummaryBar t where t.ShrCreditRate>t.NonShrCreditRate;
drop table #GetMia1ASummaryBar;
Using LINQ, I would like to be able to do what I can do very easily in T-SQL: select * from #GetMia1ASummaryBar t where t.ShrCreditRate>t.NonShrCreditRate;
Along those lines, I tried this.
// select where first field is greater than second field
var list = repo.GetMia1ASummaryBar(campus)
.Where(l => l.ShrCreditRate > l.NonShrCreditRate);
While I received no compile errors, I also received no records, when I should have received at least one.
So instead of this,
Id ShrCreditRate NonShrCreditRate
----------- ---------------------- ----------------------
1 NULL 1.5
2 2.5 0.75
3 2 2
4 1 NULL
I'd like to filter to receive this.
Id ShrCreditRate NonShrCreditRate
----------- ---------------------- ----------------------
2 2.5 0.75
I'm really trying to avoid creating a separate list populated by a for-each loop; that would be a last resort. Is there a simple way to do the type of LINQ comparison I am trying to make?

Thanks to everyone who contributed in the comments. The short story is that this syntax is indeed valid.
// select where first field is greater than second field
var list = repo.GetMia1ASummaryBar(campus)
.Where(l => l.ShrCreditRate > l.NonShrCreditRate);
The reason the list was empty was an underlying dependency that had a filter on it. I uncovered this unexpected behavior in an integration test, which once more shows the value of integration tests. (My unit test didn't uncover the unexpected behavior.)
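For reference, here is a self-contained sketch (an in-memory list, with a hypothetical SummaryRow class standing in for the repository's row type) showing that the comparison itself behaves like the T-SQL filter, nulls included:
using System;
using System.Collections.Generic;
using System.Linq;

class SummaryRow
{
    public int Id { get; set; }
    public double? ShrCreditRate { get; set; }
    public double? NonShrCreditRate { get; set; }
}

class Program
{
    static void Main()
    {
        var rows = new List<SummaryRow>
        {
            new SummaryRow { Id = 1, ShrCreditRate = null, NonShrCreditRate = 1.5 },
            new SummaryRow { Id = 2, ShrCreditRate = 2.5,  NonShrCreditRate = 0.75 },
            new SummaryRow { Id = 3, ShrCreditRate = 2,    NonShrCreditRate = 2 },
            new SummaryRow { Id = 4, ShrCreditRate = 1,    NonShrCreditRate = null },
        };

        // A comparison involving a null nullable double evaluates to false,
        // so rows 1 and 4 drop out, and row 3 drops out because 2 > 2 is false.
        var filtered = rows.Where(r => r.ShrCreditRate > r.NonShrCreditRate);

        foreach (var r in filtered)
            Console.WriteLine($"{r.Id}: {r.ShrCreditRate} > {r.NonShrCreditRate}");
        // Prints only: 2: 2.5 > 0.75
    }
}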

Related

Entity Framework does not return duplicate matching items

I found an interesting issue in Entity Framework. Check the code below. I am using Contains() to find all matching Id rows from table Test1, but when I add the same id multiple times it returns only one item rather than duplicating it. I want to get the duplicate items too. How can I do this?
var ids = new List<int>();
ids.Add(1);
ids.Add(1);
var foo = ctx.Test1.Include("Test2").Where(x => ids.Contains(x.Id)).ToList();
You can not. You really need to learn the basics of how SQL and queries work, because your question rests on a fundamental misunderstanding.
when i add same id multiple times it returns only 1 item not duplicating items
Because the table STILL contains only 1 item. If you add the same ID multiple times, why would you expect it to return the row multiple times?
The way it is evaluated is:
Take a row.
Check whether its ID matches any value in the provided list.
Move to the next row.
So, regardless of how often you put the ID into the list of approved IDs, it obviously will only return one row. You do not get duplicate items because you do not have duplicate rows to start with.
Like so often with anything EF related, it also helps to intercept and look at the generated SQL and the generated query plan - that at least makes it clear that you cannot get the same row twice. Contains translates to an IN clause containing the list of values. Like I said above, Contains filters rows; it will not magically duplicate them.
I would suggest doing the duplication manually after the query - though in 25 years I have never seen this requirement come up, so I would strongly suggest you first check whether what you are trying to do makes logical sense from a higher perspective.
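If the duplication really is needed, a client-side sketch (reusing ctx and ids from the question) could re-expand the distinct results by joining the id list back onto them:
// Query once with the (effectively distinct) ids, then re-expand client-side:
// each entry in ids, duplicates included, contributes one output row.
var rows = ctx.Test1
    .Include("Test2")
    .Where(x => ids.Contains(x.Id))
    .ToList();

var duplicated = ids
    .Join(rows, id => id, row => row.Id, (id, row) => row)
    .ToList();
// With ids = { 1, 1 }, duplicated now holds the row with Id 1 twice.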
Why would it be any other way? Your EF Contains call takes the SQL "IN" form:
SELECT
...
FROM ...
WHERE ... IN (1, 1)

Page results from Azure Application Insights Analytics API

Is it possible to "page" the results from the Analytics API?
If I use the following Query (via http POST)
{
"query":"customEvents | project customDimensions.FilePath, timestamp
| where timestamp > now(-100d) | order by timestamp desc | limit 25"
}
I get up to 10,000 results back in one result set. Is there any way to use something similar to the $skip that they have for the events API? Like "SKIP 75 TAKE 25" or something to get the 4th page of results.
[Edit: this answer is now out of date; a row_number function has since been added to the query language. This answer is left for historical purposes in case anyone runs into strange queries that look like the one below.]
Not easily.
If you can use the /events OData query path instead of the /query path, that supports paging, but not really custom queries like the one you have.
To get something like paging on the /query path, you need to make a complicated query: use summarize and makelist to invent a rowNum field, then use mvexpand to re-expand the lists and filter by that rowNum. It's pretty complicated and unintuitive, something like:
customEvents
| project customDimensions.FilePath, timestamp
| where timestamp > now(-100d)
| order by timestamp desc
// squishes things down to 1 row where each column is huge list of values
| summarize filePath=makelist(customDimensions.FilePath, 1000000)
, timestamp=makelist(timestamp, 1000000)
// make up a row number, not sure this part is correct
, rowNum = range(1,count(strcat(filePath,timestamp)),1)
// expands the single rows into real rows
| mvexpand filePath,timestamp,rowNum limit 1000000
| where rowNum > 0 and rowNum <= 100 // you'd change these values to page
I believe there's already a request on the Application Insights UserVoice to support paging operators in the query language.
The other assumption here is that the data isn't changing in the underlying table while you're paging. If new data appears between your calls, like:
1. give me rows 0-99
2. 50 new rows appear
3. give me rows 100-199
then step 3 actually gives you back 50 duplicate rows that you just got in step 1.
There's a more correct way to do this now, using new operators that were added to the query language since my previous answer.
The two operators are serialize and row_number().
serialize ensures the data is in a shape and order that works with row_number(). Some of the existing operators like order by already create serialized data.
There are also prev() and next() operators that can get the values from previous or next rows in a serialized result.
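With those operators, a paged version of the original query might look something like this (a sketch; the rowNum range is just an example for the 4th page of 25 rows):
customEvents
| project customDimensions.FilePath, timestamp
| where timestamp > now(-100d)
| order by timestamp desc          // order by already produces a serialized result
| extend rowNum = row_number()     // row_number() requires serialized input
| where rowNum between (76 .. 100) // change the range to page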

Linq query for ordering by ascending order except for one row

I am pulling data from a table like the example below:
status_id status_description
1 Unknown
2 Personal
3 Terminated
4 Relocated
6 Other
7 LOP
8 Continuing
I want to get the results into an IEnumerable, which I am then returning to the front end to display the descriptions in a dropdown.
I want to sort this alphabetically and have the "Other" option always show up at the bottom of the dropdown.
Is there any way to do this in the backend? Currently I have this:
IEnumerable<employee_map> map = await (from emp in db.emp_status_map
                                       orderby emp.status_description
                                       select emp).ToListAsync();
Simply order on two values, first on whether the description is Other, then on the actual description itself:
orderby emp.status_description == "Other", emp.status_description
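A self-contained sketch of the same idea against an in-memory list (false sorts before true, so every description other than "Other" comes first):
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var descriptions = new[] { "Unknown", "Personal", "Terminated", "Relocated", "Other", "LOP", "Continuing" };

        // First key: is it "Other"? (false before true); second key: alphabetical.
        var ordered = descriptions
            .OrderBy(d => d == "Other")
            .ThenBy(d => d);

        Console.WriteLine(string.Join(", ", ordered));
        // Continuing, LOP, Personal, Relocated, Terminated, Unknown, Other
    }
}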
Servy's answer is fine; it works and fulfils your requirements. A slightly different solution would be to add a field called "DisplayOrder", for example, set it to 1 for all the rows except "Other", and set it to 2 (or whatever number you want) for "Other". Then you just order by DisplayOrder, Description.
It's highly probable that this solution will be much faster if you define an index on DisplayOrder, Description.
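Assuming such a hypothetical DisplayOrder column is added to the table and mapped onto the entity, the LINQ side then becomes just:
orderby emp.DisplayOrder, emp.status_description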

Access entity array by index c#

I have a table that represents a matrix:
CustType DiscountGroup1 DiscountGroup2 DiscountGroup3
Wholesale 32 10 15
Retail 10 15 0
All my stock items have a corresponding discount group code 1, 2 or 3.
At the time of invoicing I want to lookup the discount the customer type gets on the item(s) being invoiced.
The table needs to be able to grow to include new customer types and new discount groups so nothing can be hardcoded.
I figured I would pull the data into an array so I could select the column by index but I am getting stumped by my entities being too intelligent...
var disc = (from d in context.CustDiscountGroups
            where d.CustType == Wholesale
            select d).ToArray();
I can only access the columns by name, i.e. disc[0].DiscountGroup1.
If I try disc[0,1] I get an error saying wrong number of indices inside [].
What am I missing? I feel like it is something ridiculously fundamental. My only other thought was naming the columns 1, 2, 3, etc. and building a SQL select string where I can use a variable to denote a column name.
The database is in the design stage as well, so the table(s) can be remade in any way needed; I'm struggling to get my head wrapped around which way to approach the problem.
Your entity CustDiscountGroups has the properties CustType, DiscountGroup1, DiscountGroup2, and DiscountGroup3, and your query returns an array of CustDiscountGroups, so you can't access it like [0,1] - there is no 2D array.
If you need to access the first item, you can get it as disc[0], and then you can get any of the discount group's properties by property name, like:
disc[0].CustType, disc[0].DiscountGroup1, disc[0].DiscountGroup2, disc[0].DiscountGroup3
If you want an array of arrays, you can get the values using reflection, as below. If your entity exposes public fields:
var disc = context.CustDiscountGroups.Where(c => c.CustType == Wholesale)
    .Select(v => typeof(CustDiscountGroups)
        .GetFields(System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance)
        .Select(f => f.GetValue(v)).ToArray())
    .ToArray();
Or, since EF entities usually expose properties rather than fields:
var disc = context.CustDiscountGroups.Where(c => c.CustType == Wholesale)
    .Select(v => typeof(CustDiscountGroups)
        .GetProperties()
        .Select(p => p.GetValue(v, null)).ToArray())
    .ToArray();
Now you can access values like disc[0][1].
Please note: I haven't compiled and tested the above code; please take the idea and adapt it as you need.
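One caveat worth adding (an assumption on my part, not from the original answer): Entity Framework cannot translate reflection calls such as GetValue into SQL, so a version that materializes the rows first might look like this:
var disc = context.CustDiscountGroups
    .Where(c => c.CustType == Wholesale)
    .AsEnumerable()                        // run the query, then shape the rows client-side
    .Select(v => typeof(CustDiscountGroups)
        .GetProperties()
        .Select(p => p.GetValue(v, null))
        .ToArray())
    .ToArray();
// disc[0][1] is the value of the second mapped property of the first row.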

SELECT * Not returning all rows, unless I ORDER BY id DESC

The application presented a "Sequence Contains More Than One Entity" error. Knowing this is usually the result of a .SingleOrDefault() call in LINQ, I started investigating. I can verify that the production server has many instances of duplicate keywords, so that's where I began.
I have the following Keyword table:
id INT (NOT NULL, PRIMARY KEY)
text NVARCHAR(512)
active INT
active is just a way to "enable/disable" data if the need strikes me. I'm using LINQ to SQL and have the following method implemented:
public Keyword GetKeyword(String keywordText)
{
    return db.Keywords.SingleOrDefault(k => (k.text.ToUpper() == keywordText.ToUpper()));
}
The idea is that I attach keywords through an association table so that multiple objects can reference the same keyword. There should be no duplicate text entries within the Keyword table. This is not enforced on the database, but rather through code. This might not be best practice, but that is the least of my problems at the moment. So when I create my object, I do this:
Keyword keyword = GetKeyword(keywordText);
if (keyword == null)
{
    keyword = new Keyword();
    keyword.text = keywordText;
    keyword.active = Globals.ACTIVE;
    db.Keywords.InsertOnSubmit(keyword);
}
KeywordReference reference = new KeywordReference();
reference.keywordId = keyword.id;
myObject.KeywordReferences.Add(reference);
db.SubmitChanges();
This code is paraphrased - I use a repository pattern, so the relevant code is much longer. However, I can assure you the code is working as intended, as I've tested it extensively. The problem seems to be happening at the database level.
So I run a few tests and manual queries on my test database and find that the keyword I passed in my tests is not in the database, yet if I do a JOIN, I see that it is. So I dig a little deeper and opt to manually scan through the whole list:
SELECT * FROM Keyword
returns 737 results. I decided I didn't want to look through them all, so I ORDER BY text and get 737 results. I look for the word I recently added, and it does not show up in the list.
Confused, I look up all keywordIds associated with the object I recently worked on and see a few with ids over 1,000 (the id is set to autoincrement by 1). Since I never actually delete a keyword (hence the "active" column), all ids from 1 up to at least those numbers over 1,000 should be present, so I do another query:
SELECT * FROM Keyword ORDER BY id
returns 737 results, max ID stops at 737. So I try:
SELECT * FROM Keyword ORDER BY id DESC
returns 1308 rows
I've seen disparities like this before when there is no primary key or unique identifier, but I have confirmed that the id column is, in fact, both. I'm not really sure where to go from here, and I now have the additional problem of 4,000+ duplicate keywords in production, and several more objects that reference different instances of each.
If you can somehow get all your data back, then your table is fine but the index is corrupted.
Try rebuilding the indexes on your table:
ALTER INDEX ALL ON Your.Table
REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON);
In rare situations you can see errors like this if your hard drive failed to write data due to a power loss; chkdsk should help.
