ParallelEnumerable.Aggregate for several methods - c#

I've started learning multithreading. I have three methods that calculate the sum, average, and product of square roots of an array.
At first I made three separate blocking calls using PLINQ. Then I thought it would be nice to make a single call and return an object with the sum, product, and average all at once. I read that ParallelEnumerable.Aggregate can help me with this, but I don't know how to use it.
I would be really grateful for an explanation of how to use this function in my case, and the good/bad aspects of this approach.
public static double Average(double[] array, string tool)
{
    if (array == null) throw new ArgumentNullException(nameof(array));
    double sum = Enumerable.Sum(array);
    double result = sum / array.Length;
    Print(tool, result);
    return result;
}

public static double Sum(double[] array, string tool)
{
    if (array == null) throw new ArgumentNullException(nameof(array));
    double sum = Enumerable.Sum(array);
    Print(tool, sum);
    return sum;
}

public static void ProductOfSquareRoots(double[] array, string tool)
{
    if (array == null) throw new ArgumentNullException(nameof(array));
    double result = 1;
    foreach (var number in array)
    {
        result = result * Math.Sqrt(number);
    }
    Print(tool, result);
}

The three aggregated values (average, sum and product of square roots) that you want can each be computed in a single pass over the numbers. Instead of doing this three times (once per aggregated value), you can do it once and update all three values inside the loop (this should save time).
The average is the sum divided by the count, and as you are already computing the sum you only need the count in addition to get the average. If you know the size of the input you don't even have to count the items, but here I assume that the size of the input is unknown in advance.
If you want to use LINQ you can use Aggregate:
var aggregate = numbers.Aggregate(
    // Starting value for the accumulator.
    (Count: 0, Sum: 0D, ProductOfSquareRoots: 1D),
    // Update the accumulator with a specific number.
    (accumulator, number) =>
    {
        accumulator.Count += 1;
        accumulator.Sum += number;
        accumulator.ProductOfSquareRoots *= Math.Sqrt(number);
        return accumulator;
    });
The variable aggregate is a ValueTuple<int, double, double> with the items Count, Sum and ProductOfSquareRoots. Before C# 7 you would use an anonymous type. However, that would require an allocation for each value in the input sequence slowing down the aggregation. By using a mutable value tuple the aggregation should become faster.
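All three requested results can then be read off the tuple, for example:
double sum = aggregate.Sum;
double average = aggregate.Sum / aggregate.Count; // NaN for an empty sequence
double product = aggregate.ProductOfSquareRoots;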
Aggregate works with PLINQ, so if numbers is of type ParallelQuery<T> and not IEnumerable<T> then the aggregation will be performed in parallel. Notice that this requires the aggregation to be both associative (e.g. (a + b) + c = a + (b + c)) and commutative (e.g. a + b = b + a), which is true in your case.
PLINQ has an overhead, so it might not perform better than single-threaded LINQ depending on the number of elements in your sequence and how complex the calculations are. You will have to measure this yourself to determine if PLINQ speeds things up. However, you can use the same Aggregate expression in both LINQ and PLINQ, making it easy to switch your code from single-threaded to parallel by inserting AsParallel() in the right place.
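For reference, PLINQ also offers an Aggregate overload designed specifically for parallel aggregation, taking a per-partition seed factory, an update function, a combiner that merges partition accumulators, and a result selector. A minimal sketch with the same tuple accumulator:
var aggregate = numbers
    .AsParallel()
    .Aggregate(
        // Seed factory: each partition starts with its own accumulator.
        () => (Count: 0, Sum: 0D, ProductOfSquareRoots: 1D),
        // Fold one element into the partition's accumulator.
        (acc, number) => (acc.Count + 1, acc.Sum + number, acc.ProductOfSquareRoots * Math.Sqrt(number)),
        // Merge the accumulators of two partitions (valid because the
        // operations are associative and commutative).
        (left, right) => (left.Count + right.Count, left.Sum + right.Sum, left.ProductOfSquareRoots * right.ProductOfSquareRoots),
        // Final projection of the merged accumulator.
        acc => acc);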

Note: you must initialize the result variable with the value 1, because otherwise you will always get 0.
Note 2: instead of Enumerable.Sum(array), just write array.Sum().
No, the Aggregate method won't help you calculate the three functions at the same time; see Martin Liversage's answer.
KISS ;)
if (array == null) throw new ArgumentNullException(nameof(array));
var sum = array.Sum();
var average = array.Average();
var product = array.Aggregate(1.0, (acc, val) => acc * Math.Sqrt(val));
Can be simplified:
var average = sum / array.Length;
This eliminates an extra pass through the array.
Want to parallelize?
var sum = array.AsParallel().Sum();
//var average = array.AsParallel().Average(); // Extra pass!
var average = sum / array.Length; // Faster, really!
var product = array.AsParallel().Aggregate(1.0, (acc, val) => acc * Math.Sqrt(val));
However, it will probably be slower than the previous method. Such parallelization is justified only for very large collections, with billions of elements.
Each pass through the collection takes time; the fewer passes, the better the performance. We already got rid of one pass when computing the average. Let's get it down to just one.
double sum = 0;
double product = 1;
foreach (var number in array)
{
    sum += number;
    product = product * Math.Sqrt(number);
}
double average = sum / array.Length;
Three results in one pass! We are the best!
Let's get back to the subject.
The Parallel.Invoke method allows you to execute several delegates in parallel and waits for them all to finish, but the delegates cannot return values, so any results have to be captured in outer variables yourself.
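For example, a minimal sketch that captures the results in outer variables (each delegate writes only its own variable, so no lock is needed; Parallel.Invoke blocks until all three have finished):
double sum = 0, average = 0, product = 1;
Parallel.Invoke(
    () => sum = array.Sum(),
    () => average = array.Average(),
    () => product = array.Aggregate(1.0, (acc, val) => acc * Math.Sqrt(val)));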
Alternatively, we can parallelize the computation by running multiple tasks, using Task.WhenAll to wait until they have all completed and then collect the results.
var results = await Task.WhenAll(
    Task.Run(() => array.Sum()),
    Task.Run(() => array.Average()),
    Task.Run(() => array.Aggregate(1.0, (acc, val) => acc * Math.Sqrt(val)))
);
var sum = results[0];
var average = results[1];
var product = results[2];
It is also not effective for small collections, but it may be more efficient than AsParallel in some cases.
Here is another way of writing this approach with tasks; perhaps it will seem clearer.
var sumTask = Task.Run(() => array.Sum());
var avgTask = Task.Run(() => array.Average());
var prodTask = Task.Run(() => array.Aggregate(1.0, (acc, val) => acc * Math.Sqrt(val)));
Task.WaitAll(sumTask, avgTask, prodTask);
sum = sumTask.Result;
average = avgTask.Result;
product = prodTask.Result;

Related

Is there a way to find the closest number in an array to another inputted number?

So I have a Windows Forms app with a NumericUpDown control that allows users to input a 5-digit number such as 09456. I need to compare that number to an already existing array of similar 5-digit numbers; essentially I need to take the inputted number and find the closest number to it in the array.
var numbers = new List<float> {89456f, 23467f, 86453f, };
// the list is way longer but you get the idea
var target = numericUpDown.3 ;
var closest = numbers.Select(n => new { n, (n - target) })
.OrderBy(p => p.distance)
.First().n;
But the first problem I encounter is that I cannot use a "-" operation on a float. Is there any way I can avoid that error and be able to still find the closest input?
Anonymous type members need names, and you need to use the absolute value of the difference. eg
var numbers = new List<float> { 89456f, 23467f, 86453f, };
var target = 3;
var closest = numbers.Select(n => new { n, distance = Math.Abs(n - target) })
    .OrderBy(p => p.distance)
    .First().n;
Well, apart from some issues in your sample (like there being no distance property to order by), it should work:
int target = 55555;
float closest = numbers.OrderBy(f => Math.Abs(f - target)).First();
Demo: https://dotnetfiddle.net/gqS50L
The answers that use OrderBy are correct, but have less than optimal performance. OrderBy is an O(N log N) operation, but why sort the whole collection when you only need the top element? By contrast, MinBy will give you the result in O(N) time:
var closest = numbers.MinBy(n => Math.Abs(n - target));
Apart from the compilation errors, using LINQ with OrderBy for this is slow and wasteful: the entire list has to be scanned once to compute the distances, then it needs to be sorted, which scans it all over again and caches the results before returning them in order.
Before .NET 6
A faster way would be to iterate only once, calculating the distance of the current item from the target, and keep track of which number is closest. That's how eg Min and Max work.
public static float? Closest(this IEnumerable<float> list, float target)
{
    float? closest = null;
    float bestDist = float.MaxValue;
    foreach (var n in list)
    {
        var dist = Math.Abs(n - target);
        if (dist < bestDist)
        {
            bestDist = dist;
            closest = n;
        }
    }
    return closest;
}
This will return the closest number in a single pass.
var numbers = new List<float> { 89456f, 23467f, 86453f, };
var closest = numbers.Closest(20000);
Console.WriteLine($"Closest is {closest}");
------------------
Closest is 23467
Using MoreLINQ and MinBy
The same can be done in a single line using the MinBy extension method from the MoreLINQ library:
var closest = numbers.MinBy(n => Math.Abs(n - target));
Using MinBy
In .NET 6 and later, Enumerable.MinBy was added to the BCL:
var closest = numbers.MinBy(n => Math.Abs(n - target));
The code is similar to the explicit loop once you look past the generic key selectors and comparers:
while (e.MoveNext())
{
    TSource nextValue = e.Current;
    TKey nextKey = keySelector(nextValue);
    if (nextKey != null && comparer.Compare(nextKey, key) < 0)
    {
        key = nextKey;
        value = nextValue;
    }
}

Getting Min, Max, Sum with a single parallel for loop

I am trying to get the minimum, maximum and sum (for the average) from a large array. I would love to substitute my regular for loop with Parallel.For:
UInt16 tempMin = (UInt16)(Math.Pow(2, mfvm.cameras[openCamIndex].bitDepth) - 1);
UInt16 tempMax = 0;
UInt64 tempSum = 0;
for (int i = 0; i < acquisition.frameDataShorts.Length; i++)
{
    if (acquisition.frameDataShorts[i] < tempMin)
        tempMin = acquisition.frameDataShorts[i];
    if (acquisition.frameDataShorts[i] > tempMax)
        tempMax = acquisition.frameDataShorts[i];
    tempSum += acquisition.frameDataShorts[i];
}
I know how to solve this using Tasks by cutting the array up myself. However, I would love to learn how to use Parallel.For for this, since as I understand it, it should be able to do this very elegantly.
I found this tutorial from MSDN for calculating the sum, but I have no idea how to extend it to do all three things (min, max and sum) in a single pass.
Results:
OK, I tried the PLINQ solution and I have seen some serious improvements.
Three passes (min, max, sum) are about 4x faster than the sequential approach on my i7 (2x4 cores). However, I tried the same code on a Xeon (2x8 cores) and the results are completely different: parallel (again three passes) is actually twice as slow as the sequential approach (which is about 5x faster there than on my i7).
In the end I partitioned the array myself with the Task Factory, and got slightly better results on all computers.
I assume that the main issue here is that three different variables have to be tracked on each iteration. You can utilize Tuple for this purpose:
var lockObject = new object();
var arr = Enumerable.Range(0, 1000000).ToArray();
long total = 0;
var min = arr[0];
var max = arr[0];
Parallel.For(0, arr.Length,
    // localInit: each worker starts with its own (sum, min, max) accumulator.
    () => new Tuple<long, int, int>(0, arr[0], arr[0]),
    // body: fold the current element into the worker-local accumulator.
    (i, loop, temp) => new Tuple<long, int, int>(
        temp.Item1 + arr[i],
        Math.Min(temp.Item2, arr[i]),
        Math.Max(temp.Item3, arr[i])),
    // localFinally: merge each worker's accumulator into the shared result.
    x =>
    {
        lock (lockObject)
        {
            total += x.Item1;
            min = Math.Min(min, x.Item2);
            max = Math.Max(max, x.Item3);
        }
    }
);
I must warn you, though, that this implementation runs about 10x slower (on my machine) than the simple for loop approach you demonstrated in your question, so proceed with caution.
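Much of that slowdown likely comes from allocating a new Tuple<long, int, int> on the heap for every element. With C# 7's value tuples the accumulator stays off the heap; a sketch of the same loop under that assumption:
var lockObject = new object();
var arr = Enumerable.Range(0, 1000000).ToArray();
long total = 0;
var min = arr[0];
var max = arr[0];
Parallel.For(0, arr.Length,
    // localInit: a value-tuple accumulator per worker, no heap allocation.
    () => (Sum: 0L, Min: arr[0], Max: arr[0]),
    // body: fold the current element into the worker-local accumulator.
    (i, state, local) => (local.Sum + arr[i], Math.Min(local.Min, arr[i]), Math.Max(local.Max, arr[i])),
    // localFinally: merge each worker's result under the lock.
    local =>
    {
        lock (lockObject)
        {
            total += local.Sum;
            min = Math.Min(min, local.Min);
            max = Math.Max(max, local.Max);
        }
    });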
I don't think Parallel.For is a good fit here, but try this out:
public class MyArrayHandler {
    public async Task GetMinMaxSum() {
        var myArray = Enumerable.Range(0, 1000);
        var maxTask = Task.Run(() => myArray.Max());
        var minTask = Task.Run(() => myArray.Min());
        var sumTask = Task.Run(() => myArray.Sum());
        var results = await Task.WhenAll(maxTask,
            minTask,
            sumTask);
        var max = results[0];
        var min = results[1];
        var sum = results[2];
    }
}
Edit
Just for fun, due to the comments regarding performance, I took a couple of measurements. Also, I found this: Fastest way to find sum.
#10,000,000 values
GetMinMax: 218ms
GetMinMaxAsync: 308ms
public class MinMaxSumTests {
    [Test]
    public async Task GetMinMaxSumAsync() {
        var myArray = Enumerable.Range(0, 10000000).Select(x => (long)x).ToArray();
        var sw = new Stopwatch();
        sw.Start();
        var maxTask = Task.Run(() => myArray.Max());
        var minTask = Task.Run(() => myArray.Min());
        var sumTask = Task.Run(() => myArray.Sum());
        var results = await Task.WhenAll(maxTask,
            minTask,
            sumTask);
        var max = results[0];
        var min = results[1];
        var sum = results[2];
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds);
    }

    [Test]
    public void GetMinMaxSum() {
        var myArray = Enumerable.Range(0, 10000000).Select(x => (long)x).ToArray();
        var sw = new Stopwatch();
        sw.Start();
        // Note: seeding the min with 0 only works because this data starts at 0;
        // for general input, seed with myArray[0] instead.
        long tempMin = 0;
        long tempMax = 0;
        long tempSum = 0;
        for (int i = 0; i < myArray.Length; i++) {
            if (myArray[i] < tempMin)
                tempMin = myArray[i];
            if (myArray[i] > tempMax)
                tempMax = myArray[i];
            tempSum += myArray[i];
        }
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds);
    }
}
Do not reinvent the wheel; Min, Max, Sum and similar operations are aggregations. Since .NET 3.5 you have handy LINQ extension methods which already provide the solution:
using System.Linq;
var sequence = Enumerable.Range(0, 10).Select(s => (uint)s).ToList();
Console.WriteLine(sequence.Sum(s => (double)s));
Console.WriteLine(sequence.Max());
Console.WriteLine(sequence.Min());
Though they are declared as extensions for IEnumerable<T>, they have internal optimizations for IList<T> and array types, so you should measure how your code performs on those types as well as on plain IEnumerable<T>.
In your case this isn't enough, as you clearly do not want to iterate over the array more than once, so the magic goes here: PLINQ (a.k.a. Parallel LINQ). You need to add only one method call to aggregate your array in parallel:
var sequence = Enumerable.Range(0, 10000000).Select(s => (uint)s).AsParallel();
Console.WriteLine(sequence.Sum(s => (double)s));
Console.WriteLine(sequence.Max());
Console.WriteLine(sequence.Min());
This option adds some overhead for synchronizing the items, but it scales well, providing similar behavior for both small and large enumerations. From MSDN:
PLINQ is usually the recommended approach whenever you need to apply the parallel aggregation pattern to .NET applications. Its declarative nature makes it less prone to error than other approaches, and its performance on multicore computers is competitive with them.
Implementing parallel aggregation with PLINQ doesn't require adding locks in your code. Instead, all the synchronization occurs internally, within PLINQ.
However, if you still want to investigate the performance of different types of operations, you can use the Parallel.For and Parallel.ForEach method overloads with an aggregation approach, something like this:
double[] sequence = ...
object lockObject = new object();
double sum = 0.0d;
Parallel.ForEach(
    // The values to be aggregated
    sequence,
    // The local initial partial result
    () => 0.0d,
    // The loop body
    (x, loopState, partialResult) =>
    {
        return Normalize(x) + partialResult;
    },
    // The final step of each local context
    (localPartialSum) =>
    {
        // Enforce serial access to single, shared result
        lock (lockObject)
        {
            sum += localPartialSum;
        }
    }
);
return sum;
If you need additional partition for your data, you can use a Partitioner for the methods:
var rangePartitioner = Partitioner.Create(0, sequence.Length);
Parallel.ForEach(
// The input intervals
rangePartitioner,
// same code here);
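The original sample elides the body; a sketch of how the range-partitioned sum might be completed (assuming the same sequence, lockObject, sum and Normalize as above; Partitioner lives in System.Collections.Concurrent):
Parallel.ForEach(
    // The input intervals
    rangePartitioner,
    // The local initial partial result
    () => 0.0d,
    // The loop body: each worker sums a contiguous [from, to) range,
    // which is cache-friendly and reduces per-element delegate overhead.
    (range, loopState, partialResult) =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
        {
            partialResult += Normalize(sequence[i]);
        }
        return partialResult;
    },
    // The final step of each local context
    localPartialSum =>
    {
        lock (lockObject)
        {
            sum += localPartialSum;
        }
    }
);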
The Aggregate method can also be used with PLINQ, with some merge logic (MSDN illustrates this with a diagram that is not reproduced here).
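For instance, a sketch of the question's min/max/sum as a single parallel Aggregate pass (assuming frameDataShorts is a UInt16[] as in the question):
var stats = acquisition.frameDataShorts.AsParallel().Aggregate(
    // Seed factory: one accumulator per partition.
    () => (Min: UInt16.MaxValue, Max: UInt16.MinValue, Sum: 0UL),
    // Fold one element into the partition's accumulator.
    (acc, v) => (Math.Min(acc.Min, v), Math.Max(acc.Max, v), acc.Sum + v),
    // Merge two partition accumulators.
    (a, b) => (Math.Min(a.Min, b.Min), Math.Max(a.Max, b.Max), a.Sum + b.Sum),
    // Final projection.
    acc => acc);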
Useful links:
Parallel Aggregation
Enumerable.Min<TSource>(IEnumerable<TSource>) method
Enumerable.Sum method
Enumerable.Max<TSource> (IEnumerable<TSource>) method

Project sequence so that each element will become sum of all before it with LINQ

I have the following code that transforms each element of an array into the sum of itself and all elements before it. The procedural implementation is as follows:
float[] items = {1, 5, 10, 100}; //for example
float[] sums = new float[items.Length];
float total = 0;
for (int i = 0; i < items.Length; i++)
{
    total += items[i];
    sums[i] = total;
}
How would I implement this as a LINQ one-liner?
I know it can be done for example as
items.Select((x, i) => items.Take(i + 1).Sum())
but I think it's not very efficient when the array size grows, as it has to do Sum() for each element.
LINQ doesn't support this case very cleanly, to be honest - you want a mixture of aggregation and projection. You can do it with side-effects, which is horrible:
// Don't use this!
float sum = 0f;
var sums = items.Select(x => sum += x).ToArray();
Side-effects in LINQ are nasty. Likewise you can do it using Take/Sum as shown by RePierre and L.B - but that takes an operation which is naturally O(N) and converts it into an operation which is O(N^2).
The MoreLINQ project I started a while ago does have support for this, in its Scan and PreScan members. In this case you want Scan, I believe:
var sums = items.Scan((x, y) => x + y);
If you don't want to use a third-party library, don't want to use side-effects, don't want the inefficiency of the Take solution, and only need addition, and only need it for a single type (e.g. float in your case) you can easily introduce your own method:
public static IEnumerable<float> RunningSum(this IEnumerable<float> source)
{
    if (source == null)
    {
        throw new ArgumentNullException(nameof(source));
    }
    float sum = 0f;
    foreach (var item in source)
    {
        sum += item;
        yield return sum;
    }
}
As you'll have noticed, this is basically the same code as your original - but is lazily evaluated and applies to any sequence of floats.
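For example (results shown in the comment):
float[] items = { 1, 5, 10, 100 };
var sums = items.RunningSum().ToArray(); // 1, 6, 16, 116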
var result = items.Select((item, index) => items.Take(index).Sum() + item);
EDIT
You can use Aggregate method to create the sums:
var result = items.Aggregate(new List<float>(), (seed, item) =>
{
    seed.Add(seed.LastOrDefault() + item);
    return seed;
});
The Reactive Extensions team at Microsoft released an "Interactive Extensions" library that adds many useful extensions to IEnumerable<T>. One of them is Scan which does exactly what you want.
Here's the IX way of doing a running total:
IEnumerable<float> results = items.Scan(0.0f, (x1, x2) => x1 + x2);

Is there a better performing functional version of this iterative algorithm in C#?

I was hoping to figure out a way to write the below in a functional style with extension functions. Ideally this functional style would perform well compared to the iterative/loop version. I'm guessing that there isn't a way. Probably because of the many additional function calls and stack allocations, etc.
Fundamentally I think the pattern which is making it troublesome is that it both calculates a value to use for the Predicate and then needs that calculated value again as part of the resulting collection.
// This is what is passed to each function.
// Do not assume the array is in order.
var a = (0).To(999999).ToArray().Shuffle();
// Approx times in release mode (on my machine):
// Functional is avg 20ms per call
// Iterative is avg 5ms per call
// Linq is avg 14ms per call
private static List<int> Iterative(int[] a)
{
    var squares = new List<int>(a.Length);
    for (int i = 0; i < a.Length; i++)
    {
        var n = a[i];
        if (n % 2 == 0)
        {
            int square = n * n;
            if (square < 1000000)
            {
                squares.Add(square);
            }
        }
    }
    return squares;
}

private static List<int> Functional(int[] a)
{
    return a
        .Where(x => x % 2 == 0 && x * x < 1000000)
        .Select(x => x * x)
        .ToList();
}

private static List<int> Linq(int[] a)
{
    var squares =
        from num in a
        where num % 2 == 0 && num * num < 1000000
        select num * num;
    return squares.ToList();
}
An alternative to Konrad's suggestion. This avoids the double calculation, but also avoids even calculating the square when it doesn't have to:
return a.Where(x => x % 2 == 0)
.Select(x => x * x)
.Where(square => square < 1000000)
.ToList();
Personally, I wouldn't sweat the difference in performance until I'd seen it be significant in a larger context.
(I'm assuming that this is just an example, by the way. Normally you'd possibly compute the square root of 1000000 once and then just compare n with that, to shave off a few milliseconds. It does require two comparisons or an Abs operation though, of course.)
EDIT: Note that a more functional version would avoid using ToList at all. Return IEnumerable<int> instead, and let the caller transform it into a List<T> if they want to. If they don't, they don't take the hit. If they only want the first 5 values, they can call Take(5). That laziness could be a big performance win over the original version, depending on the context.
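A sketch of that lazier shape (the method name is illustrative):
private static IEnumerable<int> FunctionalLazy(int[] a)
{
    // No ToList: nothing is computed until the caller enumerates.
    return a.Where(x => x % 2 == 0)
            .Select(x => x * x)
            .Where(square => square < 1000000);
}
// The caller pays only for what it consumes, e.g.:
// var firstFive = FunctionalLazy(a).Take(5).ToList();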
Just solving your problem of the double calculation:
return (from x in a
        let sq = x * x
        where x % 2 == 0 && sq < 1000000
        select sq).ToList();
That said, I’m not sure that this will lead to much performance improvement. Is the functional variant actually noticeably faster than the iterative one? The code offers quite a lot of potential for automated optimisation.
How about some parallel processing? Or does the solution have to be LINQ (which I consider slow)?
var squares = new List<int>(a.Length);
Parallel.ForEach(a, n =>
{
    // Warning: List<T>.Add is not thread-safe, so this needs a lock
    // or a concurrent collection (e.g. ConcurrentBag<int>) to be correct.
    if (n < 1000 && n % 2 == 0) squares.Add(n * n);
});
The Linq version would be:
return a.AsParallel()
.Where(n => n < 1000 && n % 2 == 0)
.Select(n => n * n)
.ToList();
I don't think there's a functional solution that will be completely on-par with the iterative solution performance-wise. In my timings (see below) the 'functional' implementation from the OP appears to be around twice as slow as the iterative implementation.
Micro-benchmarks like this one are prone to all manner of issues. A common tactic in dealing with variability problems is to repeatedly call the method being timed and compute an average time per call - like this:
// from main
Time(Functional, "Functional", a);
Time(Linq, "Linq", a);
Time(Iterative, "Iterative", a);
// ...
static int reps = 1000;

private static List<int> Time(Func<int[], List<int>> func, string name, int[] a)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    List<int> ret = null;
    for (int i = 0; i < reps; ++i)
    {
        ret = func(a);
    }
    sw.Stop();
    Console.WriteLine(
        "{0} per call timings - {1} ticks, {2} ms",
        name,
        sw.ElapsedTicks / (double)reps,
        sw.ElapsedMilliseconds / (double)reps);
    return ret;
}
Here are the timings from one session:
Functional per call timings - 46493.541 ticks, 16.945 ms
Linq per call timings - 46526.734 ticks, 16.958 ms
Iterative per call timings - 21971.274 ticks, 8.008 ms
There are a host of other challenges as well: strobe effects with timer use, how and when the just-in-time compiler does its thing, the garbage collector running its collections, the order in which competing algorithms are run, the type of CPU, the OS swapping other processes in and out, etc.
I tried my hand at a little optimization. I removed the square from the test (num * num < 1000000), changing it to (num < 1000), which seemed safe since there are no negatives in the input; that is, I took the square root of both sides of the inequality. Surprisingly, I got different results compared to the methods in the OP: there were only 500 items in my optimized output, versus 241,849 from the three implementations in the OP. So why the difference? Much of the input, when squared, overflows 32-bit integers, so those extra 241,349 items came from numbers whose squares overflowed to either negative numbers or numbers under 1 million while still passing the evenness test.
optimized (functional) timing:
Optimized per call timings - 16849.529 ticks, 6.141 ms
This was one of the functional implementations altered as suggested. It output the 500 items passing the criteria as expected. It is deceptively "faster" only because it outputs fewer items than the iterative solution.
We can make the original implementations blow up with an OverflowException by adding a checked block around their implementations. Here is a checked block added to the "Iterative" method:
private static List<int> Iterative(int[] a)
{
    checked
    {
        var squares = new List<int>(a.Length);
        // rest of method omitted for brevity...
        return squares;
    }
}

c# lambda expression complexity

A simple question regarding lambda expressions.
I want to get the average of all trades in the following code. The formula I am using is:
(price1 * qty1 + price2 * qty2 + ... + pricen * qtyn) / (qty1 + qty2 + ... + qtyn)
In the following code I use Sum once to calculate the total of (price * qty), which is O(n), and once more to add up all qty, which is O(n) again. So, is there a way to find both summations in O(n), i.e. a single lambda expression that calculates both results?
Using a for loop, I can calculate both results in O(n).
class Program
{
    static void Main(string[] args)
    {
        List<Trade> trades = new List<Trade>()
        {
            new Trade() { price = 2, qty = 2 },
            new Trade() { price = 3, qty = 3 }
        };

        // using lambda
        int price = trades.Sum(x => x.price * x.qty);
        int qty = trades.Sum(x => x.qty);

        // using for loop
        int totalPriceQty = 0, totalQty = 0;
        for (int i = 0; i < trades.Count; ++i)
        {
            totalPriceQty += trades[i].price * trades[i].qty;
            totalQty += trades[i].qty;
        }

        Console.WriteLine("Average {0}", qty != 0 ? price / qty : 0);
        Console.Read();
    }
}

class Trade
{
    public int price;
    public int qty;
}
Edit: I know that the constant coefficient does not count. Let me rephrase the question: with the lambdas we go through each element in the list twice, while with the for loop we go through each element only once. Is there any solution with a lambda that does not have to go through the list elements twice?
As mentioned the Big-O is unchanged, whether you iterate once or twice. If you wanted to use Linq to iterate just once you could use a custom aggregator (since you are reducing to the same properties we can just use an instance of Trade for aggregation):
var ag = trades.Aggregate(new Trade(),
    (agg, trade) =>
    {
        agg.price += trade.price * trade.qty;
        agg.qty += trade.qty;
        return agg;
    });
int price = ag.price;
int qty = ag.qty;
At this point personally I would just use a foreach loop or the simple lambdas you already have - unless performance is crucial here (measure it!)
Big-O complexity does not take constant coefficients into account; O(n) + O(n) still gives O(n).
If you're determined you want to do it in a lambda, here's an example using the Aggregate operator. It looks contrived, and I wouldn't recommend it over a traditional for loop.
var result = trades.Aggregate(
    Tuple.Create(0, 0),
    (acc, trade) => Tuple.Create(acc.Item1 + trade.price * trade.qty, acc.Item2 + trade.qty));
int totalPrice = result.Item1;
int totalQuantity = result.Item2;
To expand on BrokenGlass's answer, you could also use an anonymous type as your aggregator like so:
var result = trades.Aggregate(
    new { TotalValue = 0L, TotalQuantity = 0L },
    (acc, trade) => new
    {
        TotalValue = acc.TotalValue + trade.price * trade.qty,
        TotalQuantity = acc.TotalQuantity + trade.qty
    }
);
There are two small upsides to this approach:
If you're doing this kind of calculation on a large number of trades (or on large trades), it's possible that you would overflow the ints that keep track of the total value of trades and total shares. This approach lets you specify long as your data type (so it would take longer to overflow).
The object you get back from this aggregation will have more meaningful properties than you would if you simply returned a Trade object.
The big downside is that it's kinda weird to look at.
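On C# 7 and later, a named ValueTuple gives you the same meaningful member names without allocating a new anonymous object per element. A minimal sketch of that variant:
var (totalValue, totalQuantity) = trades.Aggregate(
    // Named value-tuple accumulator: no per-element heap allocation.
    (TotalValue: 0L, TotalQuantity: 0L),
    (acc, trade) => (acc.TotalValue + (long)trade.price * trade.qty,
                     acc.TotalQuantity + trade.qty));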
