for (var keyValue = 0; keyValue < dwhSessionDto.KeyValues.Count; keyValue++)
{...}
var count = dwhSessionDto.KeyValues.Count;
for (var keyValue = 0; keyValue < count; keyValue++)
{...}
I know there's a difference between the two, but is one of them faster than the other? I would think the second is faster.
Yes, the first version is much slower. After all, I'm assuming you're dealing with types like this:
public class SlowCountProvider
{
public int Count
{
get
{
Thread.Sleep(1000);
return 10;
}
}
}
public class KeyValuesWithSlowCountProvider
{
public SlowCountProvider KeyValues
{
get { return new SlowCountProvider(); }
}
}
Here, your first loop will take ~10 seconds, whereas your second loop will take ~1 second.
Of course, you might argue that the assumption that you're using this code is unjustified - but my point is that the right answer will depend on the types involved, and the question doesn't state what those types are.
Now if you're actually dealing with a type where accessing KeyValues and Count is cheap (which is quite likely) I wouldn't expect there to be much difference. Mind you, I'd almost always prefer to use foreach where possible:
foreach (var pair in dwhSessionDto.KeyValues)
{
// Use pair here
}
That way you never need the count. But then, you haven't said what you're trying to do inside the loop either. (Hint: to get more useful answers, provide more information.)
It depends on how difficult it is to compute dwhSessionDto.KeyValues.Count. If it's just a pointer to an int, then the speed of each version will be the same. However, if the Count value needs to be calculated, it will be calculated on every iteration and therefore impede performance.
EDIT -- here's some code to demonstrate that the condition is always re-evaluated:
public class Temp
{
public int Count { get; set; }
}
static void Main(string[] args)
{
var t = new Temp() {Count = 5};
for (int i = 0; i < t.Count; i++)
{
Console.WriteLine(i);
t.Count--;
}
Console.ReadLine();
}
The output is 0, 1, 2 only - the loop condition re-reads t.Count, which has been decremented on each pass.
See comments for reasons why this answer is wrong.
If there is a difference, it's the other way round: indeed, the first one might be faster. That's because the compiler recognizes that you are iterating from 0 to the end of the array, and it can therefore elide bounds checks within the loop (i.e. when you access dwhSessionDto.KeyValues[i]).
However, I believe the compiler only applies this optimization to arrays so there probably will be no difference here.
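For illustration, here's a hedged sketch of the array pattern the JIT is known to recognize (exact behaviour depends on the runtime version; the array and values are made up):

int[] values = { 3, 1, 4, 1, 5, 9 };
long sum = 0;

// Comparing the index against values.Length directly lets the JIT prove that
// values[i] is always in range, so the per-access bounds check can be elided.
for (int i = 0; i < values.Length; i++)
{
    sum += values[i];
}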
It is impossible to say without knowing the implementation of dwhSessionDto.KeyValues.Count and the loop body.
Assume a global variable bool foo = false; and then the following implementations:
/* Loop body... */
{
if(foo) Thread.Sleep(1000);
}
/* ... */
public int Count
{
get
{
foo = !foo;
return 10;
}
}
/* ... */
Now, the first loop will perform approximately twice as fast as the second ;D
However, assuming non-moronic implementation, the second one is indeed more likely to be faster.
No. There is no performance difference between these two loops. With JIT and Code Optimization, it does not make any difference.
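If you want to check this for yourself, a rough Stopwatch-based micro-benchmark along these lines is easy to write (the List<int> and its size are made up for illustration; absolute numbers will vary by machine and runtime):

using System;
using System.Collections.Generic;
using System.Diagnostics;

class LoopBenchmark
{
    static void Main()
    {
        var keyValues = new List<int>(new int[10000000]);

        var sw = Stopwatch.StartNew();
        long sum1 = 0;
        for (var i = 0; i < keyValues.Count; i++)   // Count read on every iteration
            sum1 += keyValues[i];
        sw.Stop();
        Console.WriteLine("Count in condition: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        long sum2 = 0;
        var count = keyValues.Count;                // Count hoisted into a local
        for (var i = 0; i < count; i++)
            sum2 += keyValues[i];
        sw.Stop();
        Console.WriteLine("Hoisted count: {0} ms", sw.ElapsedMilliseconds);
    }
}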
There is no difference, but why do you think there is one? Can you please post your findings?
If you look at the implementation of item insertion in Dictionary using Reflector:
private void Insert(TKey key, TValue value, bool add)
{
int freeList;
if (key == null)
{
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.key);
}
if (this.buckets == null)
{
this.Initialize(0);
}
int num = this.comparer.GetHashCode(key) & 0x7fffffff;
int index = num % this.buckets.Length;
for (int i = this.buckets[index]; i >= 0; i = this.entries[i].next)
{
if ((this.entries[i].hashCode == num) && this.comparer.Equals(this.entries[i].key, key))
{
if (add)
{
ThrowHelper.ThrowArgumentException(ExceptionResource.Argument_AddingDuplicate);
}
this.entries[i].value = value;
this.version++;
return;
}
}
if (this.freeCount > 0)
{
freeList = this.freeList;
this.freeList = this.entries[freeList].next;
this.freeCount--;
}
else
{
if (this.count == this.entries.Length)
{
this.Resize();
index = num % this.buckets.Length;
}
freeList = this.count;
this.count++;
}
this.entries[freeList].hashCode = num;
this.entries[freeList].next = this.buckets[index];
this.entries[freeList].key = key;
this.entries[freeList].value = value;
this.buckets[index] = freeList;
this.version++;
}
Count is an internal member of this class which is incremented each time you insert an item into the dictionary, so I believe that there is no difference at all.
The second version can sometimes be faster. The point is that the condition is re-evaluated after every iteration, so if, e.g., the getter of "Count" actually counts the elements of an IEnumerable, or interrogates a database, etc., this will slow things down.
So I'd say that as long as you don't affect the value of "Count" inside the for loop, the second version is safer.
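To make that concrete, here is a hedged sketch of a Count getter that walks an IEnumerable on every read (the class and field names are made up):

using System.Collections.Generic;
using System.Linq;

class LazyKeyValues
{
    private readonly IEnumerable<int> source = Enumerable.Range(0, 100000);

    // Every read of Count enumerates the whole sequence: O(n) per access.
    public int Count
    {
        get { return source.Count(); }
    }
}

With a type like this, reading Count in the loop condition makes the loop quadratic, whereas hoisting it into a local before the loop keeps it linear.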
Related
I am performing two updates on a value I get from TryGetValue. I would like to know which of these is better:
Option 1: Locking only the out value?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
lock (onlineinfo)
{
onlineinfo.SessionRequestId = 0;
onlineinfo.AudioSessionRequestId = 0;
onlineinfo.VideoSessionRequestId = 0;
}
}
Option 2: Locking whole dictionary?
if (HubMemory.AppUsers.TryGetValue(ConID, out OnlineInfo onlineinfo))
{
lock (HubMemory.AppUsers)
{
onlineinfo.SessionRequestId = 0;
onlineinfo.AudioSessionRequestId = 0;
onlineinfo.VideoSessionRequestId = 0;
}
}
I'm going to suggest something different.
Firstly, you should be storing immutable types in the dictionary to avoid a lot of threading issues. As it is, any code could modify the contents of any items in the dictionary just by retrieving an item from it and changing its properties.
Secondly, ConcurrentDictionary provides the TryUpdate() method to allow you to update values in the dictionary without having to implement explicit locking.
TryUpdate() requires three parameters: the key of the item to update, the updated item, and the original item that you got from the dictionary (the one you based the update on).
TryUpdate() then checks that the original has NOT been updated by comparing the value currently in the dictionary with the original that you pass to it. Only if it is the SAME does it actually update it with the new value and return true. Otherwise it returns false without updating it.
This allows you to detect and respond appropriately to cases where some other thread changed the value of the item you're updating while you were updating it. You can either ignore this (in which case the first change gets priority) or try again until you succeed (in which case the last change gets priority). What you do depends on your situation.
Note that this requires that your type implements IEquatable<T>, since that is used by the ConcurrentDictionary to compare values.
Here's a sample console app that demonstrates this:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
namespace Demo
{
sealed class Test: IEquatable<Test>
{
public Test(int value1, int value2, int value3)
{
Value1 = value1;
Value2 = value2;
Value3 = value3;
}
public Test(Test other) // Copy ctor.
{
Value1 = other.Value1;
Value2 = other.Value2;
Value3 = other.Value3;
}
public int Value1 { get; }
public int Value2 { get; }
public int Value3 { get; }
#region IEquatable<Test> implementation (generated using Resharper)
public bool Equals(Test other)
{
if (other is null)
return false;
if (ReferenceEquals(this, other))
return true;
return Value1 == other.Value1 && Value2 == other.Value2 && Value3 == other.Value3;
}
public override bool Equals(object obj)
{
return ReferenceEquals(this, obj) || obj is Test other && Equals(other);
}
public override int GetHashCode()
{
unchecked
{
return (Value1 * 397) ^ Value2;
}
}
public static bool operator ==(Test left, Test right)
{
return Equals(left, right);
}
public static bool operator !=(Test left, Test right)
{
return !Equals(left, right);
}
#endregion
}
static class Program
{
static void Main()
{
var dict = new ConcurrentDictionary<int, Test>();
dict.TryAdd(0, new Test(1000, 2000, 3000));
dict.TryAdd(1, new Test(4000, 5000, 6000));
dict.TryAdd(2, new Test(7000, 8000, 9000));
Parallel.Invoke(() => update(dict), () => update(dict));
}
static void update(ConcurrentDictionary<int, Test> dict)
{
for (int i = 0; i < 100000; ++i)
{
for (int attempt = 0 ;; ++attempt)
{
var original = dict[1];
var modified = new Test(original.Value1 + 1, original.Value2 + 1, original.Value3 + 1);
var updatedOk = dict.TryUpdate(1, modified, original);
if (updatedOk) // Updated OK so don't try again.
break; // In some cases you might not care, so you would never try again.
Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
}
}
}
}
}
There's a lot of boilerplate code there to support the IEquatable<T> implementation and also to support the immutability.
Fortunately, C# 9 has introduced the record type, which makes immutable types much easier to implement. Here's the same sample console app using a record instead. Note that record types are immutable and also implement IEquatable<T> for you:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
namespace System.Runtime.CompilerServices // Remove this if compiling with .Net 5
{ // This is to allow earlier versions of .Net to use records.
class IsExternalInit {}
}
namespace Demo
{
record Test(int Value1, int Value2, int Value3);
static class Program
{
static void Main()
{
var dict = new ConcurrentDictionary<int, Test>();
dict.TryAdd(0, new Test(1000, 2000, 3000));
dict.TryAdd(1, new Test(4000, 5000, 6000));
dict.TryAdd(2, new Test(7000, 8000, 9000));
Parallel.Invoke(() => update(dict), () => update(dict));
}
static void update(ConcurrentDictionary<int, Test> dict)
{
for (int i = 0; i < 100000; ++i)
{
for (int attempt = 0 ;; ++attempt)
{
var original = dict[1];
var modified = original with
{
Value1 = original.Value1 + 1,
Value2 = original.Value2 + 1,
Value3 = original.Value3 + 1
};
var updatedOk = dict.TryUpdate(1, modified, original);
if (updatedOk) // Updated OK so don't try again.
break; // In some cases you might not care, so you would never try again.
Console.WriteLine($"dict.TryUpdate() returned false in iteration {i} attempt {attempt} on thread {Thread.CurrentThread.ManagedThreadId}");
}
}
}
}
}
Note how much shorter record Test is compared to class Test, even though it provides the same functionality. (Also note that I added class IsExternalInit to allow records to be used with .Net versions prior to .Net 5. If you're using .Net 5, you don't need that.)
Finally, note that you don't need to make your class immutable. The code I posted for the first example will work perfectly well if your class is mutable; it just won't stop other code from breaking things.
Addendum 1:
You may look at the output and wonder why so many retry attempts are made when the TryUpdate() fails. You might expect it to only need to retry a few times (depending on how many threads are concurrently attempting to modify the data). The answer to this is simply that the Console.WriteLine() takes so long that it's much more likely that some other thread changed the value in the dictionary again while we were writing to the console.
We can change the code slightly to only print the number of attempts OUTSIDE the loop like so (modifying the second example):
static void update(ConcurrentDictionary<int, Test> dict)
{
for (int i = 0; i < 100000; ++i)
{
int attempt = 0;
while (true)
{
var original = dict[1];
var modified = original with
{
Value1 = original.Value1 + 1,
Value2 = original.Value2 + 1,
Value3 = original.Value3 + 1
};
var updatedOk = dict.TryUpdate(1, modified, original);
if (updatedOk) // Updated OK so don't try again.
break; // In some cases you might not care, so you would never try again.
++attempt;
}
if (attempt > 0)
Console.WriteLine($"dict.TryUpdate() took {attempt} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
}
}
With this change, we see that the number of retry attempts drops significantly. This shows the importance of minimising the amount of time spent in code between TryUpdate() attempts.
Addendum 2:
As noted by Theodor Zoulias below, you could also use ConcurrentDictionary<TKey,TValue>.AddOrUpdate(), as the example below shows. This is probably a better approach, but it is slightly harder to understand:
static void update(ConcurrentDictionary<int, Test> dict)
{
for (int i = 0; i < 100000; ++i)
{
int attempt = 0;
dict.AddOrUpdate(
1, // Key to update.
key => new Test(1, 2, 3), // Create new element; won't actually be called for this example.
(key, existing) => // Update existing element. Key not needed for this example.
{
++attempt;
return existing with
{
Value1 = existing.Value1 + 1,
Value2 = existing.Value2 + 1,
Value3 = existing.Value3 + 1
};
}
);
if (attempt > 1)
Console.WriteLine($"dict.TryUpdate() took {attempt-1} retries in iteration {i} on thread {Thread.CurrentThread.ManagedThreadId}");
}
}
If you just need to lock the dictionary value, for instance to make sure the three values are set at the same time, then it doesn't really matter which reference type you lock on, as long as it is a reference type, it's the same instance, and everything else that needs to read or modify those values also locks on the same instance.
You can read more on how the Microsoft CLR implementation deals with locking and how and why locks work with a reference types here
Why Do Locks Require Instances In C#?
If, on the other hand, you are trying to maintain consistency between the dictionary and the values, that is to say, to protect not only the internal consistency of the dictionary but also the setting and reading of the objects stored in it, then your lock is not appropriate at all.
You would need to place a lock around the entire statement (including the TryGetValue) and every other place where you add to the dictionary or read/modify the value (see the sketch after the notes below). Once again, the object you lock on is not important, as long as it's used consistently.
Note 1: it is normal to use a dedicated instance to lock on (i.e. some instantiated object), either static or an instance member depending on your needs, as there is less chance of you shooting yourself in the foot.
Note 2: there are a lot more ways to implement thread safety here, depending on your needs, whether you are happy with stale values, whether you need every ounce of performance, whether you have a degree in minimal-lock coding, and how much effort and innate safety you want to bake in. That is entirely up to you and your solution.
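For illustration, here is a hedged sketch of what "a lock around the entire statement" could look like, using a dedicated lock object as suggested in Note 1 (the method name is made up; HubMemory, AppUsers and OnlineInfo are the types from the question):

private static readonly object _appUsersLock = new object();

public void ResetSessionIds(string conId)   // hypothetical helper
{
    lock (_appUsersLock)
    {
        // Both the lookup and the mutation happen under the same lock, and every
        // other reader/writer of these values must take the same lock.
        if (HubMemory.AppUsers.TryGetValue(conId, out OnlineInfo onlineinfo))
        {
            onlineinfo.SessionRequestId = 0;
            onlineinfo.AudioSessionRequestId = 0;
            onlineinfo.VideoSessionRequestId = 0;
        }
    }
}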
The first option (locking on the entry of the dictionary) is more efficient because it is unlikely to create significant contention for the lock. For that to happen, two threads would have to try to update the same entry at the same time. The second option (locking on the entire dictionary) is quite likely to create contention under heavy usage, because two threads will be synchronized even when they try to update different entries concurrently.
The first option is also more in the spirit of using a ConcurrentDictionary<K,V> in the first place. If you are going to lock on the entire dictionary, you might as well use a normal Dictionary<K,V> instead. Regarding this dilemma, you may find this question interesting: When should I use ConcurrentDictionary and Dictionary?
I have a general question concerning performance and best practice.
When working with a List (or any other data type) from a different class, which is better practice: copying it at the beginning, working with the local copy and then re-copying it to the original, or always accessing the original?
An Example:
access the original:
public class A
{
public static List<int> list = new List<int>();
}
public class B
{
public static void insertString(int i)
{
// insert at right place
int count = A.list.Count;
if (count == 0)
{
A.list.Add(i);
}
else
{
for (int j = 0; j < count; j++)
{
if (A.list[j] >= i)
{
A.list.Insert(j, i);
break;
}
if (j == count - 1)
{
A.list.Add(i);
}
}
}
}
}
As you can see, I access the original list A.list several times. Here's the alternative:
Copying:
public class A
{
public static List<int> list = new List<int>();
}
public class B
{
public static void insertString(int i)
{
List<int> localList = A.list;
// insert at right place
int count = localList.Count;
if (count == 0)
{
localList.Add(i);
}
else
{
for (int j = 0; j < count; j++)
{
if (localList[j] >= i)
{
localList.Insert(j, i);
break;
}
if (j == count - 1)
{
localList.Add(i);
}
}
}
A.list = localList;
}
}
Here I access the list in the other class only twice (getting it at the beginning and setting it at the end). Which would be better?
Please note that this is a general question and that the algorithm is only an example.
I won't bother thinking about performance here and instead focus on best practice:
Giving out the whole List violates encapsulation. B can modify the List and all its elements without A noticing (This is not a problem if A never uses the List itself but then A wouldn't even need to store it).
A simple example: A creates the List and immediately adds one element. Subsequently, A never bothers to check List.Count, because it knows that the List cannot be empty. Now B comes along and empties the List...
So any time B is changed, you need to also check A to see if all the assumptions of A are still correct. This is enough of a headache if you have full control over the code. If another programmer uses your class A, he may do something unexpected with the List and never check if that's ok.
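A hedged sketch of that failure mode, reusing the A/B names from the question (purely illustrative):

using System;
using System.Collections.Generic;

public class A
{
    public static List<int> list = new List<int>();

    static A()
    {
        list.Add(42);                     // A now assumes the list is never empty
    }

    public static int FirstValue()
    {
        return list[0];                   // throws if someone else emptied the list
    }
}

public class B
{
    public static void Clobber()
    {
        A.list.Clear();                   // nothing stops B from doing this
    }
}

After B.Clobber() runs, A.FirstValue() throws, even though nothing in A itself changed.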
Solution(s):
If B only needs to iterate over the elements, write an IEnumerable accessor. If B mustn't modify the elements, make the accessor deliver copies.
If B needs to modify the List (add/remove elements), either give B a copy of the List (containing copies of the elements if they needn't be modified) and accept a new List from B or use an accessor as before and implement the necessary List operations. In both cases, A will know if B modifies the List and can react accordingly.
Example:
class A
{
private List<ItemType> internalList;
public IEnumerable<ItemType> Items()
{
foreach (var item in internalList)
yield return item;
// or maybe item.Copy();
// new ItemType(item);
// depending on ItemType
}
public void RemoveFromList(ItemType toRemove)
{
internalList.Remove(toRemove);
// do other things necessary to keep A in a consistent state
}
}
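A sketch of how B might consume that accessor (assuming the A class and the ItemType placeholder from the example above; the method is hypothetical):

using System;

class B
{
    // B can enumerate the items, but it cannot add or remove elements behind A's back.
    public static int CountMatching(A a, Func<ItemType, bool> predicate)
    {
        int matches = 0;
        foreach (var item in a.Items())
        {
            if (predicate(item))
                matches++;
        }
        return matches;
    }
}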
Suppose the following code:
if (myDictionary.ContainsKey(aKey))
myDictionary[aKey] = aValue;
else
myDictionary.Add(aKey, aValue);
This code accesses the dictionary two times, once for determining whether aKey exist, another time for updating (if exists) or adding (if does not exist). I guess the performance of this method is "acceptable" when this code is executed only a few times. However, in my application similar code is executed roughly 500K times. I profiled my code, and it shows 80% of CPU time spent on this section (see the following figure), so this motivates an improvement.
Note that the dictionary values are lambdas.
First workaround is simply:
myDictionary[aKey] = aValue;
If aKey exists, its value is replaced with aValue; if it does not exist, a KeyValuePair with aKey as key and aValue as value is added to myDictionary. However, this method has two drawbacks:
First, you don't know whether aKey existed or not, which prevents you from adding extra logic. For instance, you cannot rewrite the following code using this workaround:
int addCounter = 0, updateCounter = 0;
if (myDictionary.ContainsKey(aKey))
{
myDictionary[aKey] = aValue;
updateCounter++;
}
else
{
myDictionary.Add(aKey, aValue);
addCounter++;
}
Second, the update cannot be a function of the old value. For instance, you cannot implement logic similar to:
if (myDictionary.ContainsKey(aKey))
myDictionary[aKey] = (myDictionary[aKey] * 2) + aValue;
else
myDictionary.Add(aKey, aValue);
The second workaround is to use ConcurrentDictionary. It's clear that by using delegates we can solve the second aforementioned issue; however, it is still not clear to me how we can address the first one.
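(For reference, the delegate-based version would look roughly like this; AddOrUpdate is the standard ConcurrentDictionary API, and aKey/aValue are just placeholder values standing in for the snippets above, assuming int values:)

using System.Collections.Concurrent;

var myDictionary = new ConcurrentDictionary<string, int>();
string aKey = "total";   // placeholder key
int aValue = 5;          // placeholder value

// The update delegate receives the existing value, so the new value can be
// a function of the old one - which addresses the second issue.
myDictionary.AddOrUpdate(
    aKey,
    aValue,                                     // added if the key is absent
    (key, oldValue) => (oldValue * 2) + aValue  // stored if the key already exists
);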
Just to remind you, my concern is speed. Given that only one thread uses this procedure, I don't think the concurrency penalty (locking) of ConcurrentDictionary is worth paying.
Am I missing a point? Does anyone have a better suggestion?
If you really want an AddOrUpdate method like the one on ConcurrentDictionary, but without the performance implications of using one, you will have to implement such a dictionary yourself.
The good news is that since CoreCLR is open source, you can take the actual .NET Dictionary source from the CoreCLR repository and apply your own modifications. It doesn't look too hard; take a look at the Insert private method there.
One possible implementation would be (untested):
public void AddOrUpdate(TKey key, Func<TKey, TValue> adder, Func<TKey, TValue, TValue> updater) {
if( key == null ) {
ThrowHelper.ThrowArgumentNullException(ExceptionArgument.key);
}
if (buckets == null) Initialize(0);
int hashCode = comparer.GetHashCode(key) & 0x7FFFFFFF;
int targetBucket = hashCode % buckets.Length;
for (int i = buckets[targetBucket]; i >= 0; i = entries[i].next) {
if (entries[i].hashCode == hashCode && comparer.Equals(entries[i].key, key)) {
entries[i].value = updater(key, entries[i].value);
version++;
return;
}
}
int index;
if (freeCount > 0) {
index = freeList;
freeList = entries[index].next;
freeCount--;
}
else {
if (count == entries.Length)
{
Resize();
targetBucket = hashCode % buckets.Length;
}
index = count;
count++;
}
entries[index].hashCode = hashCode;
entries[index].next = buckets[targetBucket];
entries[index].key = key;
entries[index].value = adder(key);
buckets[targetBucket] = index;
version++;
}
Since .NET 6 there is a new method CollectionsMarshal.GetValueRefOrAddDefault to do just that.
Sample usage:
Dictionary<string, string> dictionary = new Dictionary<string, string>();
ref string? dictionaryValue = ref CollectionsMarshal.GetValueRefOrAddDefault(dictionary, "key", out bool exists);
//variable 'exists' is true if key was present, and false if it had to be added
if (exists)
{
//Update the value of dictionaryValue variable
dictionaryValue = dictionaryValue?.ToLowerInvariant();
}
else
{
//assign new value
dictionaryValue = "test";
}
The only drawback is that you cannot decide not to add the new value after invoking this method. It always creates a placeholder default value (null for reference types) in your dictionary if the key is missing, so you basically have to assign a real value or you are left with that default entry in your dictionary.
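Applied to the add/update counters from the question, a sketch could look like this (assuming int values; CollectionsMarshal lives in System.Runtime.InteropServices, and aKey/aValue are placeholders):

using System.Collections.Generic;
using System.Runtime.InteropServices;

var myDictionary = new Dictionary<string, int>();
int addCounter = 0, updateCounter = 0;
string aKey = "total";   // placeholder key
int aValue = 5;          // placeholder value

// A single lookup covers both the existence check and the add/update.
ref int slot = ref CollectionsMarshal.GetValueRefOrAddDefault(myDictionary, aKey, out bool exists);
if (exists)
{
    slot = (slot * 2) + aValue;   // the update can depend on the old value
    updateCounter++;
}
else
{
    slot = aValue;                // fill in the freshly added entry
    addCounter++;
}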
The algorithm creates all possible variants of a sequence from the variants for each member of the sequence. The Scala implementation below turned out to be dramatically slower than the C# one - why?
C# code :
static void Main(string[] args)
{
var arg = new List<List<int>>();
int i = 0;
for (int j = 0; j < 5; j++)
{
arg.Add(new List<int>());
for (int j1 = i; j1 < i + 3; j1++)
{
//if (j1 != 5)
arg[j].Add(j1);
}
i += 3;
}
List<Utils<int>.Variant<int>> b2 = new List<Utils<int>.Variant<int>>();
//int[][] bN;
var s = System.Diagnostics.Stopwatch.StartNew();
//for(int j = 0; j < 10;j++)
b2 = Utils<int>.Produce2(arg);
s.Stop();
Console.WriteLine(s.ElapsedMilliseconds);
}
public class Variant<T>
{
public T element;
public Variant<T> previous;
}
public static List<Variant<T>> Produce2(List<List<T>> input)
{
var ret = new List<Variant<T>>();
foreach (var form in input)
{
var newRet = new List<Variant<T>>(ret.Count * form.Count);
foreach (var el in form)
{
if (ret.Count == 0)
{
newRet.Add(new Variant<T>{ element = el, previous = null });
}
else
{
foreach (var variant in ret)
{
var buf = new Variant<T> { previous = variant, element = el };
newRet.Add(buf);
}
}
}
ret = newRet;
}
return ret;
}
Scala code :
object test {
def main() {
var arg = new Array[Array[Int]](5)
var i = 0
var init = 0
while (i<5)
{
var buf = new Array[Int](3)
var j = 0
while (j<3)
{
buf(j) = init
init = init+1
j = j + 1
}
arg(i)=buf
i = i + 1
}
println("Hello, world!")
val start = System.currentTimeMillis
var res = Produce(arg)
val stop = System.currentTimeMillis
println(stop-start)
/*for(list <- res)
{
for(el <- list)
print(el+" ")
println
}*/
println(res.length)
}
def Produce[T](input:Array[Array[T]]):Array[Variant[T]]=
{
var ret = new Array[Variant[T]](1)
for(val forms <- input)
{
if(forms!=null)
{
var newRet = new Array[Variant[T]](forms.length*ret.length)
if(ret.length>0)
{
for(val prev <-ret)
if(prev!=null)
for(val el <-forms)
{
newRet = newRet:+new Variant[T](el,prev)
}
}
else
{
for(val el <- forms)
{
newRet = newRet:+new Variant[T](el,null)
}
}
ret = newRet
}
}
return ret
}
}
class Variant[T](var element:T, previous:Variant[T])
{
}
As others have said, the difference is in how you're using the collections. Array in Scala is the same thing as Java's primitive array, [], which is the same as C#'s primitive array []. Scala is clever enough to do what you ask (namely, copy the entire array with a new element on the end), but not so clever as to tell you that you'd be better off using a different collection. For example, if you just change Array to ArrayBuffer it should be much faster (comparable to C#).
Actually, though, you'd be better off not using for loops at all. One of the strengths of Scala's collections library is that you have a wide variety of powerful operations at your disposal. In this case, you want to take every item from forms and convert it into a Variant. That's what map does.
Also, your Scala code doesn't seem to actually work.
If you want all possible variants from each member, you really want to use recursion. This implementation does what you say you want:
object test {
def produce[T](input: Array[Array[T]], index: Int = 0): Array[List[T]] = {
if (index >= input.length) Array()
else if (index == input.length-1) input(index).map(elem => List(elem))
else {
produce(input, index+1).flatMap(variant => {
input(index).map(elem => elem :: variant)
})
}
}
def main() {
val arg = Array.tabulate(5,3)((i,j) => i*3+j)
println("Hello, world!")
val start = System.nanoTime
var res = produce(arg)
val stop = System.nanoTime
println("Time elapsed (ms): " + (stop-start)/1000000L)
println("Result length: " + res.length)
println(res.deep)
}
}
Let's unpack this a little. First, we've replaced your entire construction of the initial variants with a single tabulate instruction. tabulate takes a target size (5x3, here), and then a function that maps from the indices into that rectangle into the final value.
We've also made produce a recursive function. (Normally we'd make it tail-recursive, but let's keep things as simple as we can for now.) How do you generate all variants? Well, all variants is clearly (every possibility at this position) + (all variants from later positions). So we write that down recursively.
Note that if we build variants recursively like this, all the tails of the variants end up the same, which makes List a perfect data structure: it's a singly-linked immutable list, so instead of having to copy all those tails over and over again, we just point to them.
Now, how do we actually do the recursion? Well, if there's no data at all, we had better return an empty array (i.e. if index is past the end of the array). If we're on the last element of the array of variations, we basically want each element to turn into a list of length 1, so we use map to do exactly that (elem => List(elem)). Finally, if we are not at the end, we get the results from the rest (which is produce(input, index+1)) and make variants with each element.
Let's take the inner loop first: input(index).map(elem => elem :: variant). This takes each element from variants in position index and sticks them onto an existing variant. So this will give us a new batch of variants. Fair enough, but where do we get the new variant from? We produce it from the rest of the list: produce(input, index+1), and then the only trick is that we need to use flatMap--this takes each element, produces a collection out of it, and glues all those collections together.
I encourage you to throw printlns in various places to see what's going on.
Finally, note that with your test size, it's actually an insignificant amount of work; you can't measure it accurately, even if you switch to using the more accurate System.nanoTime as I did. You'd need something like tabulate(12,3) before it gets significant (500,000 variants produced).
The :+ method of Array (more precisely of ArrayOps) will always create a copy of the array. So instead of a constant-time operation you have one that is more or less O(n).
You do this within nested loops, so the whole thing ends up an order of magnitude slower.
This way you more or less emulate an immutable data structure with a mutable one (which was not designed for it).
To fix it you can either use Array as a mutable data structure (but then try to avoid the endless copying), or you can switch to an immutable one. I did not check your code very carefully, but the first thing to try is usually List; check the scaladoc of the various methods to see their performance characteristics.
ret.length is not 0 all the time; right before the return it is 243. The size of the array should not need to change, and List in .NET is an abstraction on top of an array. BUT thank you for the pointer - the problem was that I used the :+ operator with Array, which, as I now understand, copies the whole array on every append.
I have a List class, and I would like to override GetEnumerator() to return my own Enumerator class. This Enumerator class would have two additional properties that would be updated as the Enumerator is used.
For simplicity (this isn't the exact business case), let's say those properties were CurrentIndex and RunningTotal.
I could manage these properties within the foreach loop manually, but I would rather encapsulate this functionality for reuse, and the Enumerator seems to be the right spot.
The problem: foreach hides all the Enumerator business, so is there a way to, within a foreach statement, access the current Enumerator so I can retrieve my properties? Or would I have to forgo foreach, use a nasty old while loop, and manipulate the Enumerator myself?
Strictly speaking, I would say that if you want to do exactly what you're saying, then yes, you would need to call GetEnumerator and control the enumerator yourself with a while loop.
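For illustration, a minimal sketch of that approach with a small wrapper that exposes the two extra properties (all names here are made up):

using System;
using System.Collections.Generic;

// Hypothetical wrapper that tracks CurrentIndex and RunningTotal while enumerating.
class TrackingEnumerator
{
    private readonly IEnumerator<decimal> inner;

    public int CurrentIndex { get; private set; }
    public decimal RunningTotal { get; private set; }
    public decimal Current { get { return inner.Current; } }

    public TrackingEnumerator(IEnumerable<decimal> source)
    {
        inner = source.GetEnumerator();
        CurrentIndex = -1;
    }

    public bool MoveNext()
    {
        if (!inner.MoveNext()) return false;
        CurrentIndex++;
        RunningTotal += inner.Current;
        return true;
    }
}

class Demo
{
    static void Main()
    {
        var values = new List<decimal> { 1.5m, 2.5m, 3m };
        var e = new TrackingEnumerator(values);
        while (e.MoveNext())
        {
            Console.WriteLine("{0}: {1} (running total {2})", e.CurrentIndex, e.Current, e.RunningTotal);
        }
    }
}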
Without knowing too much about your business requirement, you might be able to take advantage of an iterator function, such as something like this:
public static IEnumerable<decimal> IgnoreSmallValues(List<decimal> list)
{
decimal runningTotal = 0M;
foreach (decimal value in list)
{
// if the value is less than 1% of the running total, then ignore it
if (runningTotal == 0M || value >= 0.01M * runningTotal)
{
runningTotal += value;
yield return value;
}
}
}
Then you can do this:
List<decimal> payments = new List<decimal>() {
123.45M,
234.56M,
.01M,
345.67M,
1.23M,
456.78M
};
foreach (decimal largePayment in IgnoreSmallValues(payments))
{
// handle the large payments so that I can divert all the small payments to my own bank account. Mwahaha!
}
Updated:
Ok, so here's a follow-up with what I've termed my "fishing hook" solution. Now, let me add a disclaimer that I can't really think of a good reason to do something this way, but your situation may differ.
The idea is that you simply create a "fishing hook" object (reference type) that you pass to your iterator function. The iterator function manipulates your fishing hook object, and since you still have a reference to it in your code outside, you have visibility into what's going on:
public class FishingHook
{
public int Index { get; set; }
public decimal RunningTotal { get; set; }
public Func<decimal, bool> Criteria { get; set; }
}
public static IEnumerable<decimal> FishingHookIteration(IEnumerable<decimal> list, FishingHook hook)
{
hook.Index = 0;
hook.RunningTotal = 0;
foreach(decimal value in list)
{
// the hook object may define a Criteria delegate that
// determines whether to skip the current value
if (hook.Criteria == null || hook.Criteria(value))
{
hook.RunningTotal += value;
yield return value;
hook.Index++;
}
}
}
You would utilize it like this:
List<decimal> payments = new List<decimal>() {
123.45M,
.01M,
345.67M,
234.56M,
1.23M,
456.78M
};
FishingHook hook = new FishingHook();
decimal min = 0;
hook.Criteria = x => x > min; // exclude any values that are less than/equal to the defined minimum
foreach (decimal value in FishingHookIteration(payments, hook))
{
// update the minimum
if (value > min) min = value;
Console.WriteLine("Index: {0}, Value: {1}, Running Total: {2}", hook.Index, value, hook.RunningTotal);
}
// Resulting output is:
//Index: 0, Value: 123.45, Running Total: 123.45
//Index: 1, Value: 345.67, Running Total: 469.12
//Index: 2, Value: 456.78, Running Total: 925.90
// we've skipped the values .01, 234.56, and 1.23
Essentially, the FishingHook object gives you some control over how the iterator executes. The impression I got from the question was that you needed some way to access the inner workings of the iterator so that you could manipulate how it iterates while you are in the middle of iterating, but if this is not the case, then this solution might be overkill for what you need.
With foreach you indeed can't get the enumerator - you could, however, have the enumerator return (yield) a tuple that includes that data; in fact, you could probably use LINQ to do it for you...
(I couldn't cleanly get the index using LINQ - can get the total and current value via Aggregate, though; so here's the tuple approach)
using System.Collections;
using System.Collections.Generic;
using System;
class MyTuple
{
public int Value {get;private set;}
public int Index { get; private set; }
public int RunningTotal { get; private set; }
public MyTuple(int value, int index, int runningTotal)
{
Value = value; Index = index; RunningTotal = runningTotal;
}
static IEnumerable<MyTuple> SomeMethod(IEnumerable<int> data)
{
int index = 0, total = 0;
foreach (int value in data)
{
yield return new MyTuple(value, index++,
total = total + value);
}
}
static void Main()
{
int[] data = { 1, 2, 3 };
foreach (var tuple in SomeMethod(data))
{
Console.WriteLine("{0}: {1} ; {2}", tuple.Index,
tuple.Value, tuple.RunningTotal);
}
}
}
You can also do something like this in a more functional way, depending on your requirements. What you are asking for can be thought of as "zipping" together multiple sequences and then iterating through them all at once. The three sequences for the example you gave would be:
The "value" sequence
The "index" sequence
The "Running Total" Sequence
The next step would be to specify each of these sequences separately:
List<decimal> ValueList
var Indexes = Enumerable.Range(0, ValueList.Count)
The last one is more fun... the two approaches I can think of are either to keep a temporary variable that accumulates the sum, or to recalculate the sum for each item. The second is obviously much less performant, so I would rather use the temporary:
decimal Sum = 0;
var RunningTotals = ValueList.Select(v => Sum = Sum + v);
The last step would be to zip these all together. .Net 4 will have the Zip operator built in, in which case it will look like this:
var ZippedSequence = ValueList.Zip(Indexes, (value, index) => new {value, index}).Zip(RunningTotals, (temp, total) => new {temp.value, temp.index, total});
This obviously gets noisier the more things you try to zip together.
In the last link, there is source for implementing the Zip function yourself. It really is a simple little bit of code.
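For reference, a hand-rolled Zip along those lines is only a few lines; here is a hedged sketch, roughly matching the shape of the .NET 4 operator:

using System;
using System.Collections.Generic;

static class ZipExtensions
{
    // Pairs up elements from two sequences until either one runs out.
    public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
        this IEnumerable<TFirst> first,
        IEnumerable<TSecond> second,
        Func<TFirst, TSecond, TResult> resultSelector)
    {
        using (var e1 = first.GetEnumerator())
        using (var e2 = second.GetEnumerator())
        {
            while (e1.MoveNext() && e2.MoveNext())
            {
                yield return resultSelector(e1.Current, e2.Current);
            }
        }
    }
}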