I have two classes, A and B. B inherits from A and has some additional properties.
I have an IEnumerable<A> that contains only B objects.
What I want to do is:
list.Single(b => b.PropertyThatOnlyExistOnB == "something")
I would have expected something like this to work:
list.Single((B) b => b.PropertyThatOnlyExistOnB == "something")
But it doesn't compile. For now I'm just doing:
B result = null;
foreach (var b in list)
{
    if (((B)b).PropertyThatOnlyExistOnB == "something")
    {
        result = (B)b;
    }
}
Is there a shorter way?
Thanks
Use the Enumerable.OfType<TResult> extension method to filter and cast.
list.OfType<B>().Single(b => b.PropertyThatOnlyExistOnB == "something")
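For context, here is a minimal, self-contained sketch of how that reads end to end; the classes and the property name are just the question's placeholders:
using System.Collections.Generic;
using System.Linq;

// Hypothetical classes mirroring the question: B derives from A
// and adds a property that only exists on B.
class A { }
class B : A { public string PropertyThatOnlyExistOnB { get; set; } }

class Example
{
    static void Main()
    {
        IEnumerable<A> list = new List<A>
        {
            new B { PropertyThatOnlyExistOnB = "something" },
            new B { PropertyThatOnlyExistOnB = "other" }
        };

        // OfType<B>() filters the sequence down to the B instances (casting
        // as it goes), so the predicate can use members declared only on B.
        B result = list.OfType<B>()
                       .Single(b => b.PropertyThatOnlyExistOnB == "something");
    }
}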
Although I like #VirtualBlackFox's answer best, for completeness' sake, here is how to get your idea to work:
list.Single(b => ((B)b).PropertyThatOnlyExistOnB == "something");
You weren't that far off track, except that you got some of the syntax confused. The b => EXPRESSION syntax denotes a lambda expression. You can't start altering the stuff before the =>, unless you want to add (or remove) arguments:
* `x => LAMBDA_WITH_ONE_PARAMETER`
* `(x) => LAMBDA_WITH_ONE_PARAMETER`
* `() => LAMBDA_WITH_NO_PARAMETERS`
* `(x, y, z) => LAMBDA_WITH_THREE_PARAMETERS`
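In other words, the cast goes inside the lambda body rather than on the parameter. A quick sketch, reusing the question's hypothetical list and property:
// Does not compile: the lambda's parameter must be an A, because Single on
// an IEnumerable<A> expects a Func<A, bool>; you can't retype it to B here.
// list.Single((B) b => b.PropertyThatOnlyExistOnB == "something");

// Compiles: keep the parameter as A and cast the element in the body.
var result = list.Single(b => ((B)b).PropertyThatOnlyExistOnB == "something");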
I have IEnumerable<A> that contains only B objects.
I would question this statement about your variable. You've specified that it is an IEnumerable<A>, yet it contains only instances of B. What is the purpose of this? If you explicitly require only instances of B in all circumstances, it would be better for this to be an IEnumerable<B>, as that allows problems to be caught at compile time rather than at runtime.
Consider the following, I would imagine that you may have some code similar to:
var setOfA = // Get a set of A.
DoSomethingWithA(setOfA);
var instanceOfB = GetInstanceOfB(setOfA);
In this case, I can understand that an IEnumerable<A> is perfectly valid, except when you want to perform the latter operation, GetInstanceOfB. Let's imagine, the definition is:
B GetInstanceOfB(IEnumerable<A> setOfA)
{
    return // The answer to your question.
}
Now, the initial problem, I hope you can see, is that you're putting all your cards on the notion that your list (setOfA in my example) is always only going to contain instances of B. While you may guarantee that from your developer point of view, the compiler can make no such assumption; it can only guarantee that setOfA (list) is an IEnumerable<A>, and therein lies the potential issue.
Looking at the answers provided (all of which are perfectly valid [#VirtualBlackFox being the safest answer] given your notion):
I have IEnumerable<A> that contains only B objects.
What if, in some future change, setOfA also contains an instance of C (a potential future subclass of A)? Given this answer:
list.Single(b => ((B)b).PropertyThatOnlyExistOnB == "something");
What if setOfA is actually [C, B, B]? You can see that the explicit cast (B)b will cause an InvalidCastException to be thrown. Because of the nature of the Single operation, it keeps enumerating, applying the predicate (PropertyThatOnlyExistOnB == "something") to each element, so here the cast fails and the exception is thrown, which is unexpected and likely unhandled. This answer is similar to:
list.Cast<B>().Single(b => b.PropertyThatOnlyExistOnB == "something");
Given this answer:
list.Single<A>(b => (b as B).PropertyThatOnlyExistOnB == "something")
In the same situation, the exception would instead be a NullReferenceException, because the as cast yields null for the instance of C, and the property access then dereferences that null.
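To make the two failure modes concrete, here is a small sketch reusing the question's hypothetical A and B, plus an illustrative subclass C that is not in the original question:
// Hypothetical extra subclass, only here to illustrate the failure modes.
class C : A { }

IEnumerable<A> setOfA = new List<A> { new C(), new B { PropertyThatOnlyExistOnB = "something" } };

// Throws InvalidCastException as soon as enumeration reaches the C instance.
// var viaCast = setOfA.Cast<B>().Single(b => b.PropertyThatOnlyExistOnB == "something");

// Throws NullReferenceException: (a as B) is null for the C instance,
// and the property access dereferences that null.
// var viaAs = setOfA.Single(a => (a as B).PropertyThatOnlyExistOnB == "something");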
Now, don't get me wrong, I am not picking holes with those answers, as I said they are perfectly valid given the remit of your question. But in circumstances where your code changes, those perfectly valid answers become potential future issues.
Given this answer:
list.OfType<B>().Single(b => b.PropertyThatOnlyExistOnB == "something");
This safely filters down to the subset of A that are in fact B, and the compiler can guarantee that your predicate is only applied to an IEnumerable<B>.
But this would lead me to the observation that at this juncture your code is handling an IEnumerable<A> while performing an operation for which you really want an IEnumerable<B>. In which case, shouldn't you refactor this code to have an explicit method:
B GetMatchingInstanceOfB(IEnumerable<B> setOfB)
{
    if (setOfB == null) throw new ArgumentNullException("setOfB");
    return setOfB.Single(b => b.PropertyThatOnlyExistOnB == "something");
}
The change in the design of the method ensures that it will only explicitly accept a valid set of B, and you don't have to worry about your cast within that method. The method is responsible only for matching a single item of B.
This of course means you need to push your cast out to a different level, but that still is much more explicit:
var b = GetMatchingInstanceOfB(setOfA.OfType<B>());
I'm also assuming that you have sufficient error handling in place for the circumstances where the predicate fails even when all instances are B, e.g. when more than one item (or no item) satisfies PropertyThatOnlyExistOnB == "something".
This might have been a pointless rant about reviewing your code, but I think it is worth considering unexpected situations that could arise, and how tweaking your variable types now can save you a headache in the future.
This should work fine:
list.Single<A>(b => (b as B).PropertyThatOnlyExistOnB == "something")
If you don't want to risk an exception being thrown, you can do this:
list.Single<A>(b => (b is B) && ((b as B).PropertyThatOnlyExistOnB == "something"))
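On newer compilers (C# 7 and later), pattern matching expresses the same check-then-cast more tidily; this is a sketch, not part of the original answers:
// "a is B b" both checks the runtime type and introduces a typed variable,
// so the property is only touched on actual B instances.
var result = list.Single(a => a is B b && b.PropertyThatOnlyExistOnB == "something");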
public void UpdateCredentialSelect(ClientCredentials credential, bool selected)
{
    onsSelectedCredentials.RemoveAll(x => x.Equals(null));
    if (selected && !onsSelectedCredentials.Exists(x => x.name.Equals(credential.name)))
    {
        onsSelectedCredentials.Add(credential);
    }
    else
    {
        onsSelectedCredentials.Remove(credential);
    }
    onsSecurityScreen.UpdateDynamicItems();
    onsSecurityScreen.UpdateSelectAllCheckmark();
}
I'm running through Coverity reports and it is having issues with the "onsSelectedCredentials.RemoveAll(x => x.Equals(null));" line here, stating "check_after_deref: Null-checking x suggests that it may be null, but it has already been dereferenced on all paths leading to the check." The purpose of that line of code is to read through the current values in the list and strip out any that have become null; there's no null check happening as far as I can tell. Is this a false positive from Coverity, or should I do something to fix this?
The expression x.Equals(null) will throw NullReferenceException if x is null. It can never evaluate to true (unless Equals has been overridden to do something screwy).
Coverity is correctly telling you that, albeit in a somewhat indirect way. Specifically, it understands that Equals is meant to test equality, and that you're comparing x to null as if they might be the same (the "check"), but you can't get into Equals (the "path") at all because of the NullReferenceException. It calls x.Equals() a "dereference", unfortunately using C/C++ terminology (for historical reasons).
To fix the bug in the code and also make Coverity happy, as suggested by derHugo in a comment, change the RemoveAll line to something like this:
onsSelectedCredentials.RemoveAll(x => (x == null));
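If you are on C# 7 or later, "is null" is an equivalent spelling that also sidesteps any user-defined == overload on the element type (whether ClientCredentials defines one is not stated in the question):
// Removes entries that have become null; "x is null" cannot be affected
// by an overloaded == operator on ClientCredentials.
onsSelectedCredentials.RemoveAll(x => x is null);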
I have some weird behaviour with a foreach-loop:
IEnumerable<Compound> loadedCompounds;
...
// Loop through the peaks.
foreach (GCPeak p in peaks)
{
    // Loop through the compounds.
    foreach (Compound c in loadedCompounds)
    {
        if (c.IsInRange(p) && c.SignalType == p.SignalType)
        {
            c.AddPeak(p);
        }
    }
}
So what I'd like to do: Loop through all the GCPeaks (it is a class) and sort them to their corresponding compounds.
AddPeak just adds the GCPeak to a SortedList. Code compiles and runs without exceptions, but the problem is:
After c.AddPeak(p) the SortedList in c contains the GCPeak (checked with Debugger), while the SortedLists in loadedCompounds remains empty.
I am quite confused by this bug I produced:
What is the reason for this behavior? Both Compound and GCPeak are classes, so I'd expect references (not copies) of my objects, and I'd expect my code to work.
How to do what I'd like to do properly?
EDIT:
This is how I obtain the IEnumerables (the whole thing is coming from an XML file, via LINQ to XML). Compounds are obtained basically the same way.
IEnumerable<GCPeak> peaksFromSignal = from p in signal.Descendants("IntegrationResults")
                                      select new GCPeak()
                                      {
                                          SignalType = signaltype,
                                          RunInformation = runInformation,
                                          RetentionTime = XmlConvert.ToDouble(p.Element("RetTime").Value),
                                          PeakArea = XmlConvert.ToDouble(p.Element("Area").Value),
                                      };
Thanks!
An IEnumerable like this won't hold a hard reference to a materialized list of results. This causes two potential problems for you.
1) What you are enumerating might not be there anymore. For example, if you were enumerating a list of Facebook posts using a lazy technique like IEnumerable, but your connection to Facebook is closed, it may evaluate to an empty enumerable. The same would occur if you were enumerating over a database collection but that DB connection was closed.
2) Using an enumerable like that could lead you, either here or elsewhere, to do a multiple enumeration, which can have issues. ReSharper typically warns against this (to prevent unintended consequences). See here for more info: Handling warning for possible multiple enumeration of IEnumerable
What you can do to debug your situation is use the LINQ extension .ToList() to force early evaluation of your IEnumerable. This will let you see what is in the IEnumerable more easily and will let you follow it through your code. Do note that calling .ToList() has performance implications compared to the lazy reference you have currently, but it forces a hard reference earlier, helps you debug your scenario, and avoids the scenarios mentioned above causing problems for you.
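Applied to the code in the question, that could look something like the following sketch. Here compoundQuery stands in for however loadedCompounds is currently being built from the XML, which isn't shown in the question:
// Force the query to run once and keep hard references to the resulting
// Compound objects, so the loops below mutate the same instances you
// inspect afterwards.
List<Compound> loadedCompounds = compoundQuery.ToList();

foreach (GCPeak p in peaks)
{
    foreach (Compound c in loadedCompounds)
    {
        if (c.IsInRange(p) && c.SignalType == p.SignalType)
        {
            c.AddPeak(p); // adds to the same Compound instance every time
        }
    }
}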
Thanks for your comments.
Indeed, converting my loadedCompounds to a List<> worked.
Lesson learned: Be careful with IEnumerable.
EDIT
As requested, I am adding the implementation of AddPeak:
public void AddPeak(GCPeak peak)
{
    if (peak != null)
    {
        peaks.Add(peak.RunInformation.InjectionDateTime, peak);
    }
}
RunInformation is a struct.
I sometimes use LINQ constructs in my C# source. I use VS 2010 with ReSharper. Now I'm getting "Possible multiple enumeration of IEnumerable" warning from ReSharper.
I would like to refactor it according to the best practices. Here's briefly what it does:
IEnumerable<String> codesMatching = from c in codes where conditions select c;
String theCode = null;
if (codesMatching.Any())
{
    theCode = codesMatching.First();
}
if ((theCode == null) || (codesMatching.Count() != 1))
{
    throw new Exception("Matching code either not found or is not unique.");
}
// OK - do something with theCode.
A question:
Should I first store the result of the LINQ expression in a List?
(I'm pretty sure it won't return more than a couple of rows - say 10 at the most.)
Any hints appreciated.
Thanks
Pavel
Since you want to verify that the matching code is unique, you can try this (and yes, you must store the result):
var matchingCodes = (from c in codes where conditions select c).Take(2).ToArray();
if (matchingCodes.Length != 1)
{
    throw new Exception("Matching code either not found or is not unique.");
}
var code = matchingCodes[0];
Yes, you need to store the result as a List/Array and then use it. In that case it won't be enumerated multiple times.
In your case, if you need to be sure that there is just one item that satisfies the condition, you can use Single: if more than one item satisfies the condition it will throw an exception, and if there are no items at all it will also throw.
And your code will be easier:
string theCode = (from c in codes where conditions select c).Single();
But in that case you can't change the exception text, unless you wrap it in your own try/catch block and rethrow with a custom text or exception.
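If you do want your own message, a small wrapper along these lines works (a sketch; the exception type and wording are placeholders):
string theCode;
try
{
    theCode = (from c in codes where conditions select c).Single();
}
catch (InvalidOperationException ex)
{
    // Single throws InvalidOperationException both when no element matches
    // and when more than one element matches.
    throw new Exception("Matching code either not found or is not unique.", ex);
}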
Finalizing the enumerable with .ToList()/.ToArray() would get rid of the warning, but whether that is better than multiple enumeration depends on the codes and conditions implementations. .Any() and .First() are lazy primitives that won't execute past the first element, and .Count() might not be hit at all, so converting to a list might be more wasteful than getting a new enumerator.
Can someone explain whether these two LINQ statements are the same, or whether they differ in terms of execution? I am guessing the result of their execution is the same, but please correct me if I am wrong.
var k = db.MySet.Where(a => a.Id == id).SingleOrDefault().Username;
var mo = db.MySet.SingleOrDefault(a => a.Id == id).Username;
Yes, both instructions are functionally equivalent and return the same result. The second is just a shortcut.
However, I wouldn't recommend writing it like this, because SingleOrDefault will return null if there is no item with the specified Id. This will result in a NullReferenceException when you access Username. If you don't expect it to return null, use Single, not SingleOrDefault, because it will give you a more useful error message if your expectation is not met. If you're not sure that a user with that Id exists, use SingleOrDefault, but check the result before accessing its members.
Yes, these two LINQ statements are the same, but I suggest you write code like this:
var mo = db.MySet.SingleOrDefault(a => a.Id == id);
if (mo != null)
{
    string username = mo.Username;
}
var k = db.MySet.Where(a => a.Id == id).SingleOrDefault().Username;
var mo = db.MySet.SingleOrDefault(a => a.Id == id).Username;
You asked if they are equivalent...
Yes, they will return the same result, both in LINQ-to-Objects and in LINQ-to-SQL/Entity Framework.
No, they aren't equal in LINQ-to-Objects: someone benchmarked them and discovered that the first one is a little faster (because .Where() has special optimizations based on the type of db.MySet). Reference: https://stackoverflow.com/a/8664387/613130
They're different in the terms of actual code they will execute, but I can't see a situation in which they will give different results. In fact, if you've got Resharper installed, it will recommend you change the former into the latter.
However, I'd in general question why you ever want to do SingleOrDefault() without immediately following it with a null check.
Instead of checking for null, I always check for default(T), as the name of the LINQ function implies too. In my opinion it's a bit more maintainable code in case the type is changed to a struct or class.
They will both return the same result (or both will throw a NullReferenceException). However, the second one may be more efficient.
The first version will need to enumerate all values that meet the where condition, and then it will check that this returned only one value. Thus this version might need to enumerate over hundreds of values.
The second version checks for only one value that meets the condition as it goes. As soon as it finds two matching values, it will throw an exception, so it does not have the overhead of enumerating (possibly) hundreds of values that will never be used.
I'm trying to wrap my head around what the C# compiler does when I'm chaining linq methods, particularly when chaining the same method multiple times.
Simple example: Let's say I'm trying to filter a sequence of ints based on two conditions.
The most obvious thing to do is something like this:
IEnumerable<int> Method1(IEnumerable<int> input)
{
    return input.Where(i => i % 3 == 0 && i % 5 == 0);
}
But we could also chain the where methods, with a single condition in each:
IEnumerable<int> Method2(IEnumerable<int> input)
{
    return input.Where(i => i % 3 == 0).Where(i => i % 5 == 0);
}
I had a look at the IL in Reflector; it is obviously different for the two methods, but analysing it further is beyond my knowledge at the moment :)
I would like to find out:
a) what the compiler does differently in each instance, and why.
b) are there any performance implications (not trying to micro-optimize; just curious!)
The answer to (a) is short, but I'll go into more detail below:
The compiler doesn't actually do the chaining; it happens at runtime, through the normal organization of the objects! There's far less magic here than might appear at first glance. Jon Skeet recently completed the "Where clause" step in his blog series, Re-implementing LINQ to Objects; I'd recommend reading through that.
In very short terms, what happens is this: each time you call the Where extension method, it returns a new WhereEnumerable object that has two things - a reference to the previous IEnumerable (the one you called Where on), and the lambda you provided.
When you start iterating over this WhereEnumerable (for example, in a foreach later down in your code), internally it simply begins iterating on the IEnumerable that it has referenced.
"This foreach just asked me for the next element in my sequence, so I'm turning around and asking you for the next element in your sequence".
That goes all the way down the chain until we hit the origin, which is actually some kind of array or storage of real elements. As each Enumerable then says "OK, here's my element" passing it back up the chain, it also applies its own custom logic. For a Where, it applies the lambda to see if the element passes the criteria. If so, it allows it to continue on to the next caller. If it fails, it stops at that point, turns back to its referenced Enumerable, and asks for the next element.
This keeps happening until everyone's MoveNext returns false, which means the enumeration is complete and there are no more elements.
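Jon Skeet's series shows this in detail; as a rough, simplified sketch of the idea (not the real Enumerable.Where, which adds argument validation and several optimized internal iterator classes), a Where built on an iterator block looks something like:
using System;
using System.Collections.Generic;

static class MyEnumerable
{
    public static IEnumerable<T> MyWhere<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)   // pull the next element from the previous stage
        {
            if (predicate(item))     // apply this stage's filter
            {
                yield return item;   // hand the element up to whoever is iterating us
            }
        }
    }
}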
To answer (b), there's always a difference, but here it's far too trivial to bother with. Don't worry about it :)
The first will use one iterator, the second will use two. That is, the first sets up a pipeline with one stage, the second will involve two stages.
Two iterators have a slight performance disadvantage to one.
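Concretely, both forms lower to plain static calls on Enumerable, with the chained version wrapping one iterator around the other. This is a hand-written approximation, not the exact compiler output:
// Method1: a single Where stage with one combined predicate.
IEnumerable<int> one = Enumerable.Where(input, i => i % 3 == 0 && i % 5 == 0);

// Method2: two nested Where stages; every element that passes the inner
// filter is handed to the outer filter, so two iterators are involved.
IEnumerable<int> two = Enumerable.Where(Enumerable.Where(input, i => i % 3 == 0), i => i % 5 == 0);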