Replace for-switch loop with a LINQ query - C#

I have a Message object which wraps a message format I do not have control over. The format is a simple list of Key/Value pairs. I want to extract a list of Users from a given Message. For example, given the following message...
1. 200->....
2. 300->....
3. ....
4. 405->....
5. 001->first_user_name
6. 002->first_user_phone
7. 003->first_user_fax
8. 001->second_user_name
9. 001->third_user_name
10. 002->third_user_phone
11. 003->third_user_fax
12. 004->third_user_address
13. .....
14. 001->last_user_name
15. 003->last_user_fax
I want to extract four Users with the provided properties set. The initial keys 200/300/.../405 represent fields I don't need and can skip to get to the User data.
Each user's data is in consecutive fields, but the number of fields varies depending on how much information is known about a user. The following method does what I'm looking for. It uses an enumeration of possible key types and a method to find the index of the first field with user data.
private List<User> ParseUsers( Message message )
{
    List<User> users = new List<User>( );
    User user = null;
    String val = String.Empty;
    for( Int32 i = message.IndexOfFirst( Keys.Name ); i < message.Count; i++ )
    {
        val = message[ i ].Val;
        switch( message[ i ].Key )
        {
            case Keys.Name:
                user = new User( val );
                users.Add( user );
                break;
            case Keys.Phone:
                user.Phone = val;
                break;
            case Keys.Fax:
                user.Fax = val;
                break;
            case Keys.Address:
                user.Address = val;
                break;
            default:
                break;
        }
    }
    return users;
}
I'm wondering if it's possible to replace the method with a LINQ query. I'm having trouble telling LINQ to select a new user and populate its fields with all matching data until it finds the start of the next user entry.
Note: Relative key numbers are random (not 1,2,3,4) in the real message format.
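For reference, the User type used in the code above and in the answers below isn't shown in the original post; it is assumed to look roughly like this (a hypothetical sketch):
public class User
{
    public User( String name ) { Name = name; }

    public String Name { get; }
    public String Phone { get; set; }
    public String Fax { get; set; }
    public String Address { get; set; }
}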

I don't see the benefit in changing your code to a LINQ query, but it's definitely possible:
private List<User> ParseUsers(Message message)
{
    return Enumerable
        .Range(0, message.Count)
        .Select(i => message[i])
        .SkipWhile(x => x.Key != Keys.Name)
        .GroupAdjacent((g, x) => x.Key != Keys.Name)
        .Select(g => g.ToDictionary(x => x.Key, x => x.Val))
        .Select(d => new User(d[Keys.Name])
        {
            Phone = d.ContainsKey(Keys.Phone) ? d[Keys.Phone] : null,
            Fax = d.ContainsKey(Keys.Fax) ? d[Keys.Fax] : null,
            Address = d.ContainsKey(Keys.Address) ? d[Keys.Address] : null,
        })
        .ToList();
}
using this helper extension method:
static IEnumerable<IEnumerable<T>> GroupAdjacent<T>(
    this IEnumerable<T> source, Func<IEnumerable<T>, T, bool> adjacent)
{
    var g = new List<T>();
    foreach (var x in source)
    {
        if (g.Count != 0 && !adjacent(g, x))
        {
            yield return g;
            g = new List<T>();
        }
        g.Add(x);
    }
    yield return g;
}
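To see what GroupAdjacent does here, a tiny illustration with plain integers (hypothetical input, not from the question):
var runs = new[] { 1, 2, 3, 1, 1, 2 }
    .GroupAdjacent((g, x) => x != 1); // a new group starts whenever a 1 is encountered
// runs: [1, 2, 3], [1], [1, 2]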

No, and the reason is that, in general, most LINQ operators, like SQL queries, deal with unordered data: they make no assumptions about the order of the incoming elements, which gives them the flexibility to be parallelized, etc. Your data has intrinsic order, so it doesn't fit the querying model.

How about splitting the message into a List<List<KeyValuePair<int, string>>> where each List<KeyValuePair<int, string>> represents a single user? You could then do something like:
// SplitToUserLists would need a sensible implementation.
List<List<KeyValuePair<int,string>>> splitMessage = message.SplitToUserLists();
IEnumerable<User> users = splitMessage.Select(ConstructUser);
With
private User ConstructUser(List<KeyValuePair<int, string>> userList)
{
    // Assumes User exposes an int-keyed indexer mapping a key to the matching property.
    return userList.Aggregate(new User(), (user, pair) => { user[pair.Key] = pair.Value; return user; });
}
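A sensible SplitToUserLists might look roughly like this (a sketch only, assuming the message exposes indexed entries with Key/Val as in the question and that every user record starts with Keys.Name):
public static class MessageExtensions
{
    public static List<List<KeyValuePair<int, string>>> SplitToUserLists(this Message message)
    {
        var lists = new List<List<KeyValuePair<int, string>>>();
        List<KeyValuePair<int, string>> current = null;
        for (int i = message.IndexOfFirst(Keys.Name); i < message.Count; i++)
        {
            // A new user record starts at each Keys.Name entry.
            if (message[i].Key == Keys.Name)
            {
                current = new List<KeyValuePair<int, string>>();
                lists.Add(current);
            }
            current?.Add(new KeyValuePair<int, string>((int)message[i].Key, message[i].Val));
        }
        return lists;
    }
}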

I don't think there is any performance benefit, but it increases readability a lot in my opinion.
A possible solution could look like this:
var data = File.ReadAllLines("data.txt")
.Select(line => line.Split(new[] {"->"}, StringSplitOptions.RemoveEmptyEntries))
.GroupByOrder(ele => ele[0]);
The real magic is happening behind GroupByOrder, which is an extension method.
public static IEnumerable<IEnumerable<T>> GroupByOrder<T, K>(this IEnumerable<T> source, Func<T, K> keySelector) where K : IComparable {
    var captured = new List<T>();
    var prevKey = default(K);
    foreach (var curr in source) {
        var key = keySelector(curr);
        // Start a new group whenever the key stops increasing.
        if (captured.Count > 0 && key.CompareTo(prevKey) <= 0) {
            yield return captured;
            captured = new List<T>();
        }
        captured.Add(curr);
        prevKey = key;
    }
    if (captured.Count > 0)
        yield return captured;
}
(Disclaimer: idea stolen from Tomas Petricek)
Your sample data yields the following groups, which now just have to be parsed into your User object (a sketch of that follows the listing).
User:
first_user_name
first_user_phone
first_user_fax
User:
second_user_name
User:
third_user_name
third_user_phone
third_user_fax
third_user_address
User:
last_user_name
last_user_fax
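For completeness, turning those groups into User objects could look like this (a sketch, reusing the User type from the question and assuming the sample key strings 001-004 map to name, phone, fax and address; the real key numbers may differ, as the question notes):
var users = data
    .Where(group => group.Any(ele => ele[0] == "001")) // keep only groups that contain a name field
    .Select(group =>
    {
        var fields = group.ToDictionary(ele => ele[0], ele => ele[1]);
        return new User(fields["001"])
        {
            Phone = fields.TryGetValue("002", out var phone) ? phone : null,
            Fax = fields.TryGetValue("003", out var fax) ? fax : null,
            Address = fields.TryGetValue("004", out var address) ? address : null,
        };
    })
    .ToList();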

Related

Is there a best practice to obtain elements/variables from a collection based on different conditions

Or, in general, how to filter elements from a collection based on different and complex conditions in a single pass.
Let's say we have a collection of elements:
var cats = new List<Cat>{ new Cat("Fluffy"), new Cat("Meowista"), new Cat("Scratchy")};
And somewhere we use this collection
public CatFightResult MarchBoxing(List<Cat> cats, string redCatName, string blueCatName)
{
    var redCat = cats.First(cat => cat.Name == redCatName);
    var blueCat = cats.First(cat => cat.Name == blueCatName);
    var redValue = redCat.FightValue();
    var blueValue = blueCat.FightValue();
    if (Cat.FightValuesEqualWithEpsilon(redValue, blueValue))
        return new CatFightResult { IsDraw = true };
    return new CatFightResult { Winner = redValue > blueValue ? redCat : blueCat };
}
Question: Is there a nice way to obtain multiple variables from collection based on some condition(s)?
The question probably requires some sort of uniqueness in the collection; let's first assume there is some (i.e. a HashSet/Dictionary).
AND preferably:
a SINGLE pass/cycle over the collection (the most important reason for the question; as you can see, there are 2 filter operations in the method above)
a one-liner or close to it, with readability, and the shorter the better
a generic way (IEnumerable<T> I think, or ICollection<T>)
safe against typos and changes/additions (minimal repetition of the actual conditions in code, preferably checked)
null/exception safe, because my intention is that null is a valid result for an obtained variable
It would also be cool to be able to provide custom conditions, which could probably be done via Func parameters, but I haven't tested that yet.
There are my attempts, which I've posted in my repo https://github.com/phomm/TreeBalancer/blob/master/TreeTraverse/Program.cs
Here is the adaptation to example with Cats:
public void MarchBoxing(List<Cat> cats, string redCatName, string blueCatName)
{
    Cat redCat = null;
    Cat blueCat = null;
    //1 kinda a one-liner, but hard to read and error-prone
    foreach (var n in cats) _ = n.Name == redCatName ? redCat = n : n.Name == blueCatName ? blueCat = n : null;
    //2 very good try, because it is easy to read (and to find a mistake in an assignment), but not a one-liner and not elegant (but fast); redundant fetching and not single pass at all, up to O(N*N) with FirstOrDefault
    var filter = new [] { redCatName, blueCatName }.ToDictionary(x => x, x => cats.FirstOrDefault(n => n.Name == x));
    redCat = filter[redCatName];
    blueCat = filter[blueCatName];
    //3 with readability and checks for mistakenly written search keys (dictionary internal dupe key check), but not a one-liner and not actually single pass
    var dic = new Dictionary<string, Func<Cat, Cat>> { { redCatName, n => redCat = n }, { blueCatName, n => blueCat = n } };
    cats.All(n => dic.TryGetValue(n.Name, out var func) ? func(n) is null : true);
    //4 best approach, BUT not generic (of course one can write a simple generic IEnumerable<T> ForEach extension method, and it would be a strong candidate to win)
    cats.ForEach(n => _ = n.Name == redCatName ? redCat = n : n.Name == blueCatName ? blueCat = n : null);
    //5 nice approach, but not single pass, enumerating the collection twice
    cats.Zip(cats, (n, s) => n.Name == redCatName ? redCat = n : n.Name == blueCatName ? blueCat = n : null);
    //6 the one I prefer best, however it's arguable due to breaking the functional approach of LINQ by causing side effects
    cats.All(n => (n.Name == redCatName ? redCat = n : n.Name == blueCatName ? blueCat = n : null) is null);
}
All the ternary-operator options are not easily extensible and are relatively error-prone, but they are quite short and LINQ-ish; they also rely (a trade-off that invites confusion) on not returning/using the actual result of the ternary (via the discard "_" or "is null" as a bool). I think the Dictionary-of-Funcs approach is a good candidate for implementing custom conditions: just bake them in together with the variables.
Thank you, looking forward to your solutions! :)
I'm not sure whether it's possible with LINQ out of the box, but if writing a custom extension once is an option for you, retrieving values from a collection with an arbitrary number of conditions can later be written in a pretty concise manner.
For example, you may write something like
var (redCat, blueCat) = cats.FindFirsts(
x => x.Name == redCatName,
x => x.Name == blueCatName);
If you introduce the FindFirsts() extension as follows:
public static class FindExtensions
{
    public static T[] FindFirsts<T>(this IEnumerable<T> collection,
        params Func<T, bool>[] conditions)
    {
        if (conditions.Length == 0)
            return new T[] { };
        var unmatchedConditions = conditions.Length;
        var lookupWork = conditions
            .Select(c => (
                value: default(T),
                found: false,
                cond: c
            ))
            .ToArray();
        foreach (var item in collection)
        {
            for (var i = 0; i < lookupWork.Length; i++)
            {
                if (!lookupWork[i].found && lookupWork[i].cond(item))
                {
                    lookupWork[i].found = true;
                    lookupWork[i].value = item;
                    unmatchedConditions--;
                }
            }
            if (unmatchedConditions <= 0)
                break;
        }
        return lookupWork.Select(x => x.value).ToArray();
    }
}
The full demo can be found here: https://dotnetfiddle.net/QdVJUd
Note: In order to deconstruct the result array (i.e. use var (redCat, blueCat) = ...), you have to define a deconstruction extension. I borrowed some code from this thread to do so.
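One possible version of that deconstruction extension (a sketch; the exact code from the linked thread isn't reproduced here):
public static class DeconstructExtensions
{
    // Enables: var (first, second) = someArray;
    public static void Deconstruct<T>(this IList<T> items, out T first, out T second)
    {
        first = items.Count > 0 ? items[0] : default;
        second = items.Count > 1 ? items[1] : default;
    }
}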

Returning value from out modifier to a collection in C#

Let's suppose I receive a collection of strings from a user. I need to convert them to GUIDs for further processing. There is a chance that the user may enter invalid data (not a correct GUID), so I need to validate the input. Additionally, I can only run the business process if all of the uploaded data are valid GUID values. Here is my code:
IEnumerable<string> userUploadedValues = /* some logic */;
bool canParseUserInputToGuid = userUploadedValues.All(p => Guid.TryParse(p, out var x));
if (canParseUserInputToGuid)
{
    var parsedUserInput = userUploadedValues.Select(p => Guid.Parse(p));
}
This logic works pretty well, but I don't like it, as I am actually doing the work twice. In the second line, Guid.TryParse(p, out var x) already writes the parsed GUID to the x variable. Is there an approach that combines the validating and mapping logic - if all sequence elements satisfy the condition (All), then map the elements to a new collection (Select) in one query? It is also important for me in terms of performance, as the client may send a large amount of data (1,000,000+ elements) and doing the work twice is a bit inefficient.
You can do something like this in one Select:
var parsedUserInput = userUploadedValues.Select(p => Guid.TryParse(p, out var x) ? x : default)
.Where(p => p != default);
For this one, you need to be sure that there is no Guid.Empty input from the user.
Otherwise, you can return a nullable Guid if parsing doesn't succeed:
var parsedUserInput = userUploadedValues.Select(p => Guid.TryParse(p, out var x) ? x : default(Guid?))
.Where(p => p != null);
Another solution by creating an extension method, for example:
public static class MyExtensions
{
    public static Guid? ToGuid(this string arg)
    {
        Guid? result = null;
        if (Guid.TryParse(arg, out Guid guid))
            result = guid;
        return result;
    }
}
and usage:
var parsedUserInput2 = userUploadedValues.Select(p => p.ToGuid())
.Where(p => p != null);
But keep in mind that in these cases you will have a collection of nullable Guids.
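If you need plain Guid values afterwards, you can project the non-null entries back (a small follow-up to parsedUserInput2 above):
var guids = parsedUserInput2.Select(p => p.Value).ToList();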
Your out var x variable will be Guid.Empty in the case where it is not a valid Guid. So you can just do this:
IEnumerable<string> userUploadedValues = new[]
{
"guids.."
};
var maybeGuids = userUploadedValues.Select( x => {
    Guid.TryParse( x, out var guid );
    return guid;
} );
if ( maybeGuids.All( x => x != Guid.Empty ) )
{
//all the maybe guids are guids
}
You can optimize the validation and conversion like below,
IEnumerable<string> userUploadedValues = /* some logic */;
var parsedGuids = userUploadedValues.Where(p => Guid.TryParse(p, out var x));
if(userUploadedValues.Count() != parsedGuids.Count())
{
//Some conversion failed,
}
If the counts of both sequences are the same, then every input is a valid GUID. Note that parsedGuids still contains the original strings, so you would still need to map them with Guid.Parse to get Guid values.
Sometimes the non-LINQ method is just easier to read and no longer.
var parsedUserInput = new List<Guid>();
foreach (var value in userUploadedValues)
{
    if (Guid.TryParse(value, out var x)) parsedUserInput.Add(x);
    else...
}

Grouping lists into groups of X items per group

I'm having a problem knowing the best way to make a method to group a list of items into groups of (for example) no more than 3 items. I've created the method below, but without doing a ToList on the group before I return it, I have a problem with it if the list is enumerated multiple times.
The first time it's enumerated is correct, but any additional enumeration is thrown off because the two variables (i and groupKey) appear to be remembered between the iterations.
So the questions are:
Is there a better way to do what I'm trying to achieve?
Is simply ToListing the resulting group before it leaves this method
really such a bad idea?
public static IEnumerable<IGrouping<int, TSource>> GroupBy<TSource>
    (this IEnumerable<TSource> source, int itemsPerGroup)
{
    const int initial = 1;
    int i = initial;
    int groupKey = 0;
    var groups = source.GroupBy(x =>
    {
        if (i == initial)
        {
            groupKey = 0;
        }
        if (i > initial)
        {
            //Increase the group key if we've counted past the items per group
            if (itemsPerGroup == initial || i % itemsPerGroup == 1)
            {
                groupKey++;
            }
        }
        i++;
        return groupKey;
    });
    return groups;
}
Here's one way to do this using LINQ...
public static IEnumerable<IGrouping<int, TSource>> GroupBy<TSource>
(this IEnumerable<TSource> source, int itemsPerGroup)
{
return source.Zip(Enumerable.Range(0, source.Count()),
(s, r) => new { Group = r / itemsPerGroup, Item = s })
.GroupBy(i => i.Group, g => g.Item)
.ToList();
}
Live Demo
I think you are looking for something like this:
return source.Select((x, idx) => new { x, idx })
.GroupBy(x => x.idx / itemsPerGroup)
.Select(g => g.Select(a => a.x));
You need to change your return type to IEnumerable<IEnumerable<TSource>>.
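Put together with that signature, the complete extension is just the snippet above wrapped in a method:
public static IEnumerable<IEnumerable<TSource>> GroupBy<TSource>
    (this IEnumerable<TSource> source, int itemsPerGroup)
{
    return source.Select((x, idx) => new { x, idx })
                 .GroupBy(x => x.idx / itemsPerGroup)
                 .Select(g => g.Select(a => a.x));
}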
The problem with using GroupBy() is that unless it somehow has knowledge under the hood that the input is ordered by key value, it has to read the entire sequence and allocate everything to its bucket before it can emit a single group. That's overkill in this case, since the key is a function of its ordinal position within the sequence.
I like the source.Skip(m).Take(n) approach, but that makes assumptions that items in source can be directly addressed. If that's not true or Skip() and Take() have no knowledge of the underlying implementation, then the production of each group is going to be an O(n/2) operation on the average, as it repeatedly iterates over source to produce the group.
This makes the overall partitioning operation, potentially quite expensive.
IF producing a group is an O(n/2) operation on the average, and
Given a group size of s, the production of approximately n/s groups is required,
Then the total cost of the operation is something like O(n²/2s), right?
So, I would do something like this, an O(n) operation (feel free to use an IGrouping implementation if you'd like):
public static IEnumerable<KeyValuePair<int,T[]>> Partition<T>( this IEnumerable<T> source , int partitionSize )
{
    if ( source == null ) throw new ArgumentNullException("source") ;
    if ( partitionSize < 1 ) throw new ArgumentOutOfRangeException("partitionSize") ;
    int i = 0 ;
    List<T> partition = new List<T>( partitionSize ) ;
    foreach( T item in source )
    {
        partition.Add(item) ;
        if ( partition.Count == partitionSize )
        {
            yield return new KeyValuePair<int,T[]>( ++i , partition.ToArray() ) ;
            partition.Clear() ;
        }
    }
    // return the last partition if necessary
    if ( partition.Count > 0 )
    {
        yield return new KeyValuePair<int,T[]>( ++i , partition.ToArray() ) ;
    }
}
.net Fiddle
Essentially you have an IEnumerable, and you want to group it into an IEnumerable of IGroupings, each of which contains the key as an index and the group as the values. Your version does seem to accomplish that on the first pass, but I think you can definitely streamline it a little bit.
Using Skip and Take is the most desirable way to accomplish this in my opinion, but the custom key for grouping is where there is an issue. There is a way around this, which is to create your own class as a grouping template (seen in this answer: https://stackoverflow.com/a/5073144/1026459).
The end result is this:
public static class GroupExtension
{
    public static IEnumerable<IGrouping<int, T>> GroupAt<T>(this IEnumerable<T> source, int itemsPerGroup)
    {
        for (int i = 0; i < (int)Math.Ceiling((double)source.Count() / itemsPerGroup); i++)
        {
            var currentGroup = new Grouping<int, T> { Key = i };
            currentGroup.AddRange(source.Skip(itemsPerGroup * i).Take(itemsPerGroup));
            yield return currentGroup;
        }
    }

    private class Grouping<TKey, TElement> : List<TElement>, IGrouping<TKey, TElement>
    {
        public TKey Key { get; set; }
    }
}
And here is the demo in the fiddle which consumes it on a simple string
public class Program
{
public void Main(){
foreach(var p in getLine().Select(s => s).GroupAt(3))
Console.WriteLine(p.Aggregate("",(s,val) => s += val));
}
public string getLine(){ return "Hello World, how are you doing, this just some text to show how the grouping works"; }
}
Edit: alternatively, as just an IEnumerable of IEnumerable:
public static IEnumerable<IEnumerable<T>> GroupAt<T>(this IEnumerable<T> source, int itemsPerGroup)
{
for(int i = 0; i < (int)Math.Ceiling( (double)source.Count() / itemsPerGroup ); i++)
yield return source.Skip(itemsPerGroup*i).Take(itemsPerGroup);
}
This is based on Selman's Select with index idea, but using ToLookup to combine both the GroupBy and Select together as one:
public static IEnumerable<IEnumerable<TSource>> GroupBy<TSource>
(this IEnumerable<TSource> source, int itemsPerGroup)
{
return source.Select((x, idx) => new { x, idx })
.ToLookup(q => q.idx / itemsPerGroup, q => q.x);
}
The main difference though is that ToLookup actually evaluates results immediately (as concisely explained here: https://stackoverflow.com/a/11969517/7270462), which may or may not be desired.
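Either variant is used the same way; for example (a quick sketch, assuming one of the GroupBy extensions above is in scope):
var numbers = Enumerable.Range(1, 10);
foreach (var group in numbers.GroupBy(3))
    Console.WriteLine(string.Join(", ", group));
// 1, 2, 3
// 4, 5, 6
// 7, 8, 9
// 10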

Linq Order by a specific number first then show all rest in order

If I have a list of numbers:
1,2,3,4,5,6,7,8
and I want to order by a specific number and then show the rest.
For example, if I pick '3' the list should be:
3,1,2,4,5,6,7,8
Looking for LINQ and C#.
Thank you
You can use a comparison in OrderBy or ThenBy to perform a conditional sorting.
list.OrderByDescending(i => i == 3).ThenBy(i => i);
I use OrderByDescending because I want matching results first (true is "higher" than false).
A couple of answers already sort the remaining numbers (which may be correct, since you're only showing an already sorted list). If you want the "unselected" numbers to be displayed in their original, not necessarily sorted order instead, you can do:
int num = 3;
var result = list.Where(x => x == num).Concat(list.Where(x => x != num));
As @DuaneTheriot points out, IEnumerable's extension method OrderBy does a stable sort and won't change the order of elements that have an equal key. In other words:
var result = list.OrderBy(x => x != 3);
works just as well to sort 3 first and keep the order of all other elements.
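A quick check of that behaviour (a small sketch):
var list = new List<int> { 1, 5, 2, 3, 4, 8, 6, 7 };
var result = list.OrderBy(x => x != 3).ToList();
// result: 3, 1, 5, 2, 4, 8, 6, 7 (3 first, everything else in its original order)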
Maybe something like this:
List<int> ls = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8 };
int nbr = 3;
var result = ls.OrderBy(l => (l == nbr ? int.MinValue : l));
public static IEnumerable<T> TakeAndOrder<T>(this IEnumerable<T> items, Func<T, bool> f)
{
    foreach (var item in items.Where(f))
        yield return item;
    foreach (var item in items.Where(i => !f(i)).OrderBy(i => i))
        yield return item;
}
var items = new [] {1, 4, 2, 5, 3};
items.TakeAndOrder(i=> i == 4);
Using @joachim-isaksson's idea, I came up with this extension method:
public static IOrderedEnumerable<TSource> OrderByWithGivenValueFirst<TSource, TKey>(
this IEnumerable<TSource> source,
Func<TSource, TKey> keySelector,
TKey value
)
=> source.OrderBy(x => !keySelector(x).Equals(value));
Test:
[TestFixture]
public class when_ordering_by_with_given_value_first
{
[Test]
public void given_value_is_first_in_the_collection()
{
var languages = new TestRecord[] {new("cs-CZ"), new("en-US"), new("de-DE"), new("sk-SK")};
languages.OrderByWithGivenValueFirst(x => x.Language, "en-US")
.ShouldBe(new TestRecord[] {new("en-US"), new("cs-CZ"), new("de-DE"), new("sk-SK")});
}
private record TestRecord(string Language);
}
You can try the code below with a list of string values:
var defaultSortingInternalTrades = new List<string> { "E", "F", "G" };
var itemsToSort = new List<string> { "A", "B", "C", "D", "E", ... };
var fData = itemsToSort.Where(d => defaultSortingInternalTrades.Contains(d))
    .OrderBy(d => defaultSortingInternalTrades.IndexOf(d)).ToList();
var oData = itemsToSort.Where(d => !defaultSortingInternalTrades.Contains(d)).ToList();
fData.AddRange(oData);

How to perform .Max() on a property of all objects in a collection and return the object with maximum value [duplicate]

This question already has answers here:
How to use LINQ to select object with minimum or maximum property value
(20 answers)
Closed 7 years ago.
I have a list of objects that have two int properties. The list is the output of another linq query. The object:
public class DimensionPair
{
public int Height { get; set; }
public int Width { get; set; }
}
I want to find and return the object in the list which has the largest Height property value.
I can manage to get the highest value of the Height value but not the object itself.
Can I do this with Linq? How?
We have an extension method to do exactly this in MoreLINQ. You can look at the implementation there, but basically it's a case of iterating through the data, remembering the maximum element we've seen so far and the maximum value it produced under the projection.
In your case you'd do something like:
var item = items.MaxBy(x => x.Height);
This is better (IMO) than any of the solutions presented here other than Mehrdad's second solution (which is basically the same as MaxBy):
It's O(n) unlike the previous accepted answer which finds the maximum value on every iteration (making it O(n^2))
The ordering solution is O(n log n)
Taking the Max value and then finding the first element with that value is O(n), but iterates over the sequence twice. Where possible, you should use LINQ in a single-pass fashion.
It's a lot simpler to read and understand than the aggregate version, and only evaluates the projection once per element
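As a side note, .NET 6 and later ship a built-in Enumerable.MaxBy with the same shape, so on modern runtimes the call looks identical without the MoreLINQ dependency:
// .NET 6+: System.Linq provides MaxBy out of the box.
var item = items.MaxBy(x => x.Height);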
This would require a sort (O(n log n)) but is very simple and flexible. Another advantage is being able to use it with LINQ to SQL:
var maxObject = list.OrderByDescending(item => item.Height).First();
Note that this has the advantage of enumerating the list sequence just once. While it might not matter if list is a List<T> that doesn't change in the meantime, it could matter for arbitrary IEnumerable<T> objects. Nothing guarantees that the sequence doesn't change in different enumerations so methods that are doing it multiple times can be dangerous (and inefficient, depending on the nature of the sequence). However, it's still a less than ideal solution for large sequences. I suggest writing your own MaxObject extension manually if you have a large set of items to be able to do it in one pass without sorting and other stuff whatsoever (O(n)):
static class EnumerableExtensions {
    public static T MaxObject<T,U>(this IEnumerable<T> source, Func<T,U> selector)
        where U : IComparable<U> {
        if (source == null) throw new ArgumentNullException("source");
        bool first = true;
        T maxObj = default(T);
        U maxKey = default(U);
        foreach (var item in source) {
            if (first) {
                maxObj = item;
                maxKey = selector(maxObj);
                first = false;
            } else {
                U currentKey = selector(item);
                if (currentKey.CompareTo(maxKey) > 0) {
                    maxKey = currentKey;
                    maxObj = item;
                }
            }
        }
        if (first) throw new InvalidOperationException("Sequence is empty.");
        return maxObj;
    }
}
and use it with:
var maxObject = list.MaxObject(item => item.Height);
Doing an ordering and then selecting the first item is wasting a lot of time ordering the items after the first one. You don't care about the order of those.
Instead you can use the aggregate function to select the best item based on what you're looking for.
var maxHeight = dimensions
.Aggregate((agg, next) =>
next.Height > agg.Height ? next : agg);
var maxHeightAndWidth = dimensions
.Aggregate((agg, next) =>
next.Height >= agg.Height && next.Width >= agg.Width ? next: agg);
Why don't you try this:
var itemsMax = items.Where(x => x.Height == items.Max(y => y.Height));
Or, more optimised:
var itemMaxHeight = items.Max(y => y.Height);
var itemsMax = items.Where(x => x.Height == itemMaxHeight);
The answers so far are great! But I see a need for a solution with the following constraints:
Plain, concise LINQ;
O(n) complexity;
Do not evaluate the property more than once per element.
Here it is:
public static T MaxBy<T, R>(this IEnumerable<T> en, Func<T, R> evaluate) where R : IComparable<R> {
return en.Select(t => new Tuple<T, R>(t, evaluate(t)))
.Aggregate((max, next) => next.Item2.CompareTo(max.Item2) > 0 ? next : max).Item1;
}
public static T MinBy<T, R>(this IEnumerable<T> en, Func<T, R> evaluate) where R : IComparable<R> {
return en.Select(t => new Tuple<T, R>(t, evaluate(t)))
.Aggregate((max, next) => next.Item2.CompareTo(max.Item2) < 0 ? next : max).Item1;
}
Usage:
IEnumerable<Tuple<string, int>> list = new[] {
new Tuple<string, int>("other", 2),
new Tuple<string, int>("max", 4),
new Tuple<string, int>("min", 1),
new Tuple<string, int>("other", 3),
};
Tuple<string, int> min = list.MinBy(x => x.Item2); // "min", 1
Tuple<string, int> max = list.MaxBy(x => x.Item2); // "max", 4
I believe that sorting by the column you want to get the MAX of and then grabbing the first should work. However, if there are multiple objects with the same MAX value, only one will be grabbed:
private void Test()
{
test v1 = new test();
v1.Id = 12;
test v2 = new test();
v2.Id = 12;
test v3 = new test();
v3.Id = 12;
List<test> arr = new List<test>();
arr.Add(v1);
arr.Add(v2);
arr.Add(v3);
test max = arr.OrderByDescending(t => t.Id).First();
}
class test
{
public int Id { get; set; }
}
In NHibernate (with NHibernate.Linq) you could do it as follows:
return session.Query<T>()
.Single(a => a.Filter == filter &&
a.Id == session.Query<T>()
.Where(a2 => a2.Filter == filter)
.Max(a2 => a2.Id));
Which will generate SQL like follows:
select *
from TableName foo
where foo.Filter = 'Filter On String'
and foo.Id = (select cast(max(bar.RowVersion) as INT)
from TableName bar
where bar.Name = 'Filter On String')
Which seems pretty efficient to me.
Based on Cameron's answer, here is what I've just added to my enhanced version of the SilverFlow library's FloatingWindowHost (copied from FloatingWindowHost.cs in the http://clipflair.codeplex.com source code):
public int MaxZIndex
{
get {
return FloatingWindows.Aggregate(-1, (maxZIndex, window) => {
int w = Canvas.GetZIndex(window);
return (w > maxZIndex) ? w : maxZIndex;
});
}
}
private void SetTopmost(UIElement element)
{
if (element == null)
throw new ArgumentNullException("element");
Canvas.SetZIndex(element, MaxZIndex + 1);
}
Worth noting regarding the code above: Canvas.ZIndex is an attached property available for UIElements in various containers, not just when they are hosted in a Canvas (see "Controlling rendering order (ZOrder) in Silverlight without using the Canvas control"). I guess one could even make SetTopmost and SetBottomMost static extension methods for UIElement by adapting this code, as sketched below.
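Such an extension could be sketched roughly like this (hypothetical, adapting the code above; the caller supplies the sibling elements to scan):
public static class UIElementZIndexExtensions
{
    public static void SetTopmost(this UIElement element, IEnumerable<UIElement> siblings)
    {
        if (element == null)
            throw new ArgumentNullException("element");
        // Find the highest Canvas.ZIndex among the siblings and go one above it.
        int maxZIndex = siblings.Aggregate(-1, (max, e) => Math.Max(max, Canvas.GetZIndex(e)));
        Canvas.SetZIndex(element, maxZIndex + 1);
    }
}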
You can also improve on Mehrdad Afshari's solution by rewriting the extension method into a faster (and better looking) one:
static class EnumerableExtensions
{
    public static T MaxElement<T, R>(this IEnumerable<T> container, Func<T, R> valuingFoo) where R : IComparable
    {
        var enumerator = container.GetEnumerator();
        if (!enumerator.MoveNext())
            throw new ArgumentException("Container is empty!");
        var maxElem = enumerator.Current;
        var maxVal = valuingFoo(maxElem);
        while (enumerator.MoveNext())
        {
            var currVal = valuingFoo(enumerator.Current);
            if (currVal.CompareTo(maxVal) > 0)
            {
                maxVal = currVal;
                maxElem = enumerator.Current;
            }
        }
        return maxElem;
    }
}
And then just use it:
var maxObject = list.MaxElement(item => item.Height);
That name will be clear to people using C++ (because there is std::max_element in there).
