For example, I have this as a List<List<int>>:
[2,4,4,2,5]
[1,3,6,3,8]
[0,3,9,0,0]
It should return the column-wise sums, assuming that the cell count is always the same:
[3, 10, 19, 5, 13]
I am trying to find an easy way to solve this using LINQ, if possible, because right now I am doing it with a lot of for loops and if conditions and I am overcomplicating things.
Is there a way to achieve this using LINQ?
LINQ approach
List<List<int>> items = new List<List<int>>() {
new List<int> { 2, 4, 4, 2, 5 },
new List<int> { 1, 3, 6, 3, 8 },
new List<int> { 0, 3, 9, 0, 0 } };
List<int> result = Enumerable
    .Range(0, items.Min(x => x.Count))
    .Select(i => items.Sum(row => row[i]))
    .ToList();
var xx = new List<List<int>>() {
new List<int>() { 2, 4, 4, 2, 5 },
new List<int>() { 1, 3, 6, 3, 8 },
new List<int>() { 0, 3, 9, 0, 0 },
};
var y = xx.Aggregate((r, x) => r.Zip(x).Select(p => p.First + p.Second).ToList());
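Note that the tuple-returning Zip(x) overload used above is only available on .NET Core 3.0 / .NET Standard 2.1 and later. On older frameworks, the same idea can be written with the result-selector overload of Zip:
var y = xx.Aggregate((r, x) => r.Zip(x, (a, b) => a + b).ToList());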
I am doing it with a lot of for loops and if conditions and I am overcomplicating things.
You can accomplish it with a couple of simple loops and no conditions at all.
Two possible approaches to achieve that are:
Approach 1
Creating an array with a capacity equal to the size of any of the lists in the original list collection
Filling the array with 0s
Looping through all lists in the original list collection, aggregating the sum for each index
Approach 2
Creating a list based on the first list in the original list collection
Looping through all subsequent lists in the original list collection, aggregating the sum for each index
Both approaches benefit from the assumption given in the question post:
[...] assuming that the cell count is always the same
If your original list collection is defined as a List<List<int>>:
List<List<int>> valuesCollection = new()
{
new() { 2, 4, 4, 2, 5 },
new() { 1, 3, 6, 3, 8 },
new() { 0, 3, 9, 0, 0 },
};
then the two approaches may be implemented as follows:
Approach 1
var indexCount = valuesCollection[0].Count;
var sums = new int[indexCount];
Array.Fill(sums, 0); // optional: a new int[] is already zero-initialized
foreach (var values in valuesCollection)
{
for (var i = 0; i < sums.Length; i++)
{
sums[i] += values[i];
}
}
Approach 2
Note: Uses namespace System.Linq
var sums = valuesCollection[0].ToList();
foreach (var values in valuesCollection.Skip(1))
{
for (var i = 0; i < sums.Count; i++)
{
sums[i] += values[i];
}
}
Using either approach, the resulting content of sums will be { 3, 10, 19, 5, 13 }.
I have an array A with values: {10, 12, 6, 14, 7} and I have an array B with values: {1, 8, 2}
I have sorted the array B in ascending order and then combined both arrays into a new array C, as shown in the following code:
static void Main()
{
int[] A = { 10, 12, 6, 14, 7 };
int[] B = { 1, 8, 2 };
Array.Sort(B);
var myList = new List<int>();
myList.AddRange(A);
myList.AddRange(B);
int[] C = myList.ToArray();
//need values in this order: 10, 1, 12, 2, 8, 6, 14, 7
}
Now I want to sort the array C this way: 10, 1, 12, 2, 8, 6, 14, 7
The smaller values should sit between the larger values, for example: 1 is between 10 and 12, 2 is between 12 and 8, 6 is between 8 and 14, and so on.
How can I do this in C#?
If recursion is needed, how can I add it to the code?
What I understood from your example is that you are trying to alternate between large and small values, such that each small value is smaller than the numbers to its left and right. I wrote an algorithm below to do that; it does not yield the exact same result you requested, but I believe it meets the requirement.
The straggling 7 is treated as the next smallest number in the sequence even though no number follows it. Based on your example, that appears to be allowed.
To Invoke
int[] A = { 10, 12, 6, 14, 7 };
int[] B = { 1, 8, 2 };
var result = Sort(A, B);
Sort Method
public static int[] Sort(int[] A, int[] B)
{
var result = new int[A.Length + B.Length];
var resultIndex = 0;
Array.Sort(A);
Array.Sort(B);
//'Pointer' for lower index, higher index
var aLeft = 0;
var aRight = A.Length - 1;
var bLeft = 0;
var bRight = B.Length - 1;
//When Items remain in both arrays
while (aRight >= aLeft && bRight >= bLeft)
{
//Add smallest
if (resultIndex % 2 > 0)
{
if (A[aLeft] < B[bLeft])
result[resultIndex++] = A[aLeft++];
else
result[resultIndex++] = B[bLeft++];
}
//Add largest
else
{
if (A[aRight] > B[bRight])
result[resultIndex++] = A[aRight--];
else
result[resultIndex++] = B[bRight--];
}
}
//When items only in array A
while (aRight >= aLeft)
{
//Add smallest
if (resultIndex % 2 > 0)
result[resultIndex++] = A[aLeft++];
//Add largest
else
result[resultIndex++] = A[aRight--];
}
//When items remain only in B
while (bRight >= bLeft)
{
//Add smallest
if (resultIndex % 2 > 0)
result[resultIndex++] = B[bLeft++];
//Add largest
else
result[resultIndex++] = B[bRight--];
}
return result;
}
Result
[14, 1, 12, 2, 10, 6, 8, 7]
I am trying to add a dropout layer to my model, but I get this error from the Train method:
Volume should have a Shape [1] to be converter to a System.Double
What did I do wrong? I would also like to know how to "disable" the dropout layer when I'm not training (i.e. during testing).
SgdTrainer trainer;
int numFeatures = 3;
Net<double> net = new Net<double>();
Volume<double> inputVolume, outputVolume;
trainer = new SgdTrainer(net) { LearningRate = 0.0001, BatchSize = 128 };
// 4 test cases with 3 features each
double[] inputData = new double[12] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 6, 7, 8 };
// binary classification: 0,1 = is class; 1,0 = not class
double[] outputData = new double[8] { 0, 1, 1, 0, 0, 1, 1, 0 };
net.AddLayer(new InputLayer(1, 1, numFeatures));
net.AddLayer(new FullyConnLayer(10));
net.AddLayer(new ReluLayer());
net.AddLayer(new DropoutLayer(0.5)); // (ಠ_ಠ)
net.AddLayer(new FullyConnLayer(2));
net.AddLayer(new SoftmaxLayer(2));
inputVolume = BuilderInstance.Volume.From(inputData, new Shape(1, 1, numFeatures, inputData.Length / numFeatures));
outputVolume = BuilderInstance.Volume.From(outputData, new Shape(1, 1, 2, outputData.Length / 2));
trainer.Train(inputVolume, outputVolume); // get error if there is dropout above
Volume should have a Shape [1] to be converter to a System.Double
This error was due to a bug recently introduced in ConvNetSharp. It was fixed in PR #133.
I would also like to know how to "disable" the dropout layer when I'm not in training
The dropout layer knows whether you are training or evaluating the model. It will drop and scale inputs only when appropriate.
So I want to simulate a roulette wheel to prove that the house always wins.
I'm almost done, but I've stumbled upon a problem. I can enter how many times to play, and it works fine: I get different numbers, and it also tells me whether they are red or black.
However, the number 0 never shows up in the results. I don't know how to fix this; the code looks fine to me.
Code:
namespace ConsoleApplication9
{
class Program
{
static void Main(string[] args)
{
int[] Null = new int[1] { 0 };
int[] Rote = new int[18] { 1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36 };
int[] Schwarze = new int[18] { 2, 4, 6, 8, 10, 11, 13, 15, 17, 20, 22, 24, 26, 28, 29, 31, 33, 35 };
// 0, no table limit
var list = new List<int>();
list.AddRange(Rote);
list.AddRange(Schwarze);
list.AddRange(Null);
Console.WriteLine("Wie oft soll gespielt werden?");
int Anzahl = Convert.ToInt32(Console.ReadLine());
Random zufall = new Random();
for (int i = 0; i < Anzahl; ++i)
{
int number = list[zufall.Next(0, list.Count - 1)];
if (Rote.Contains(number))
{
Console.WriteLine("Rot" + number);
}
if (Schwarze.Contains(number))
{
Console.WriteLine("Schwarz" + number);
}
if (Null.Contains(number))
{
Console.WriteLine("Null" + number);
}
}
Console.ReadLine();
}
}
}
OK, the thing is that the Random.Next(Int32, Int32) method treats the upper bound as exclusive. You have 0 as the last element of list, and passing list.Count - 1 generates values between 0 and list.Count - 2, so the last element is simply ignored: the last index, list.Count - 1, is never generated. You need to pass list.Count to the Next method:
int number = list[zufall.Next(0, list.Count)];
https://msdn.microsoft.com/en-us/library/2dx6wyd4(v=vs.110).aspx
The Next(Int32, Int32) overload returns random integers that range from minValue to maxValue - 1
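Even with that fix, keep in mind that 0 is only one of the 37 entries in list, so on average it will still appear only about once every 37 spins. A small, hypothetical snippet illustrating the off-by-one itself:
var rng = new Random();
var counts = new int[3];
for (int i = 0; i < 1000; i++)
{
    // rng.Next(0, counts.Length - 1) would only ever return 0 or 1,
    // leaving counts[2] at zero; Next(0, counts.Length) covers all indices.
    counts[rng.Next(0, counts.Length)]++;
}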
I have need of a sort of specialized dictionary. My use case is this: The user wants to specify ranges of values (the range could be a single point as well) and assign a value to a particular range. We then want to perform a lookup using a single value as a key. If this single value occurs within one of the ranges then we will return the value associated to the range.
For example:
// represents the keyed value
struct Interval
{
public int Min;
public int Max;
}
// some code elsewhere in the program
var dictionary = new Dictionary<Interval, double>();
dictionary.Add(new Interval { Min = 0, Max = 10 }, 9.0);
var result = dictionary[1];
if (result == 9.0) JumpForJoy();
This is obviously just some code to illustrate what I'm looking for. Does anyone know of an algorithm to implement such a thing? If so could they point me towards it, please?
I have already tried implementing a custom IEqualityComparer object and overloading Equals() and GetHashCode() on Interval but to no avail so far. It may be that I'm doing something wrong though.
A dictionary is not the appropriate data structure for the operations you are describing.
If the intervals are required to never overlap then you can just build a sorted list of intervals and binary search it.
If the intervals can overlap then you have a more difficult problem to solve. To solve that problem efficiently you'll want to build an interval tree:
http://en.wikipedia.org/wiki/Interval_tree
This is a well-known data structure. See "Introduction To Algorithms" or any other decent undergraduate text on data structures.
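For the non-overlapping case, a minimal sketch of the sorted-list-plus-binary-search idea might look like this (IntervalLookup is a made-up name; it reuses the Interval struct from the question and assumes intervals are inclusive on both ends and are added in ascending order of Min):
using System.Collections.Generic;

public class IntervalLookup<TValue>
{
    private readonly List<int> mins = new List<int>(); // sorted lower bounds
    private readonly List<(Interval Interval, TValue Value)> entries
        = new List<(Interval Interval, TValue Value)>();

    // Assumes intervals never overlap and are added in ascending order of Min.
    public void Add(Interval interval, TValue value)
    {
        mins.Add(interval.Min);
        entries.Add((interval, value));
    }

    public bool TryGetValue(int key, out TValue value)
    {
        int index = mins.BinarySearch(key);
        if (index < 0)
            index = ~index - 1; // nearest interval starting at or below key
        if (index >= 0 && key <= entries[index].Interval.Max)
        {
            value = entries[index].Value;
            return true;
        }
        value = default(TValue);
        return false;
    }
}
With that, adding new Interval { Min = 0, Max = 10 } mapped to 9.0 makes TryGetValue(1, out var result) return true with result == 9.0, matching the example in the question.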
This is only going to work when the intervals don't overlap, and your main problem seems to be converting from a single key value to an interval.
I would write a wrapper around SortedList. SortedList.Keys.IndexOf() would find you an index that can be used to verify whether the interval is valid, and you can then use it.
This isn't exactly what you want, but I think it may be the closest you can expect.
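A rough sketch of such a wrapper, assuming non-overlapping intervals keyed by their inclusive lower bound (Keys.IndexOf() only finds exact keys, so this sketch walks the sorted keys instead; all names are hypothetical):
using System.Collections.Generic;

public class IntervalSortedList
{
    // Key: inclusive lower bound; value: (inclusive upper bound, mapped value).
    private readonly SortedList<int, (int Max, double Value)> inner
        = new SortedList<int, (int Max, double Value)>();

    public void Add(int min, int max, double value) => inner.Add(min, (max, value));

    public bool TryGetValue(int key, out double value)
    {
        value = 0.0;
        int candidate = -1;
        // Keys are kept sorted, so the last lower bound <= key is the only candidate.
        for (int i = 0; i < inner.Count && inner.Keys[i] <= key; i++)
            candidate = i;
        if (candidate < 0 || key > inner.Values[candidate].Max)
            return false; // key is below all intervals or falls in a gap
        value = inner.Values[candidate].Value;
        return true;
    }
}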
You can of course do better than this (Was I drinking earlier?). But you have to admit it is nice and simple.
var map = new Dictionary<Func<double, bool>, double>()
{
{ d => d >= 0.0 && d <= 10.0, 9.0 }
};
var key = map.Keys.Single(test => test(1.0));
var value = map[key];
I have solved a similar problem by ensuring that the collection is contiguous where the intervals never overlap and never have gaps between them. Each interval is defined as a lower boundary and any value lies in that interval if it is equal to or greater than that boundary and less than the lower boundary of the next interval. Anything below the lowest boundary is a special case bin.
This simplifies the problem somewhat. We also then optimized key searches by implementing a binary chop. I can't share the code, unfortunately.
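A minimal sketch of that scheme (not their code; the names and boundary layout are assumptions), where each bin is defined by its inclusive lower boundary:
// Lower boundaries, sorted ascending; bin i covers [bounds[i], bounds[i + 1]).
static int FindBin(double[] bounds, double key)
{
    int index = Array.BinarySearch(bounds, key);
    return index >= 0 ? index : ~index - 1; // -1 = below the lowest boundary (the special-case bin)
}
For bounds = { 0.0, 10.0, 20.0 }, FindBin(bounds, 15.0) returns 1, and FindBin(bounds, -1.0) returns -1.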
I would make a little Interval class, which would look something like this:
public class Interval
{
public int Start {get; set;}
public int End {get; set;}
public int Step {get; set;}
public double Value {get; set;}
public void WriteToDictionary(Dictionary<int, double> dict)
{
for(int i = Start; i < End; i += Step)
{
dict.Add(i, Value);
}
}
}
So you can still do a normal lookup within your dictionary. Maybe you should also perform some checks before calling Add(), or implement some kind of rollback in case a value is already in the dictionary.
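Hypothetical usage of that class (the Step of 1 and the exclusive End are carried over from the loop above):
var dict = new Dictionary<int, double>();
var interval = new Interval { Start = 0, End = 11, Step = 1, Value = 9.0 };
interval.WriteToDictionary(dict); // fills keys 0 through 10 with 9.0
double result = dict[1];          // 9.0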
You can find a Java flavored C# implementation of an interval tree in the Open Geospatial Library. It needs some minor tweaks to solve your problem and it could also really use some C#-ification.
It's Open Source but I don't know under what license.
I adapted some ideas for Dictionary and Func, like the idea ChaosPandion gave in his earlier post above.
I got the code working, but when I try to refactor it
I run into an amazing problem/bug/lack of understanding:
Dictionary<Func<string, double, bool>, double> map = new Dictionary<Func<string, double, bool>, double>()
{
{ (a, b) => a == "2018" && b == 4, 815.72},
{ (a, b) => a == "2018" && b == 6, 715.72}
};
What it does is: I call the map with a search like "2018" (year) and 4 (month), and the result is the double value 815.72.
When I check the map entries, they have unique keys, so that's the original behaviour; everything is fine so far.
Then I try to refactor it to this:
Dictionary<Func<string, double, bool>, double> map =
new Dictionary<Func<string, double, bool>, double>();
WS22(map, values2018, "2018");
private void WS22(Dictionary<Func<string, double, bool>, double> map, double[] valuesByYear, string strYear)
{
int iMonth = 1;
// step by step this works:
map.Add((a, b) => (a == strYear) && (b == 1), valuesByYear[0]);
map.Add((a, b) => (a == strYear) && (b == 2), valuesByYear[1]);
// do it more elegant...
foreach (double dValue in valuesByYear)
{
//this does not work: exception after second iteration of foreach run
map.Add((a, b) => (a == strYear) && (b == iMonth), dValue );
iMonth+=1;
}
}
This works (when I use b == 1 and b == 2); this does not work (the map throws an exception on Add during the second iteration).
So I think the problem is that the map does not end up with a unique key while adding to the dictionary. The thing is, I don't see my error: why does b == 1 work while b == iMonth does not?
Thanks for any help that opens my eyes :)
Using binary search, I created an MSTest v2 test case that approaches the solution. It assumes that the index is the actual number you are looking for, which might not suit the description given by the OP.
Note that the ranges do not overlap, and that the ranges are:
(negative infinity, 0)
[0, 5]
(5, 15]
(15, 30]
(30, 100]
(100, 500]
(500, positive infinity)
The values passed as minimumValues are already sorted, since they are constants in my domain. If these values can change, the minimumValues list should be sorted again.
Finally, there is a test that uses if statements to get to the same result (which is probably more flexible if you need something other than the index).
[TestClass]
public class RangeUnitTests
{
[DataTestMethod]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, -1, 0)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 0, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 1, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 5, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 7, 2)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 15, 2)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 16, 3)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 30, 3)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 31, 4)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 100, 4)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 101, 5)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 500, 5)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 501, 6)]
public void Use_BinarySearch_To_Determine_Range(int[] minimumValues, int inputValue, int expectedRange)
{
var list = minimumValues.ToList();
var index = list.BinarySearch(inputValue);
if (index < 0)
{
index = ~index;
}
Assert.AreEqual(expectedRange, index);
}
[DataTestMethod]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, -1, 0)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 0, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 1, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 5, 1)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 7, 2)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 15, 2)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 16, 3)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 30, 3)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 31, 4)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 100, 4)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 101, 5)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 500, 5)]
[DataRow(new[] { -1, 5, 15, 30, 100, 500 }, 501, 6)]
public void Use_Ifs_To_Determine_Range(int[] _, int inputValue, int expectedRange)
{
int actualRange = 6;
if (inputValue < 0)
{
actualRange = 0;
}
else if (inputValue <= 5)
{
actualRange = 1;
}
else if (inputValue <= 15)
{
actualRange = 2;
}
else if (inputValue <= 30)
{
actualRange = 3;
}
else if (inputValue <= 100)
{
actualRange = 4;
}
else if (inputValue <= 500)
{
actualRange = 5;
}
Assert.AreEqual(expectedRange, actualRange);
}
}
I did a little performance testing by duplicating the initial set of [DataRow]s several times (up to 260 test cases for each method). I did not see a significant difference in performance with these parameters. Note that I ran each [DataTestMethod] in a separate session; hopefully this balances out any start-up costs that the test framework might add to the first test executed.
You could check out the PowerCollections library on CodePlex, which has a collection that can do what you are looking for.
Hope this helps,
Best regards,
Tom.