Why are these not equal? It's the same with CollectionAssert too.
var a = new[] { new[] { 1, 2 }, new[] { 3, 4 } };
var b = new[] { new[] { 1, 2 }, new[] { 3, 4 } };
// if you comment these two lines the test passes
a[0] = a[1];
b[0] = b[1];
Assert.That(a, Is.EqualTo(b));
Gives:
Expected and actual are both <System.Int32[2][]>
Values differ at index [1]
Expected and actual are both <System.Int32[2]>
I'm using NUnit 2.6.4.14350, run from the ReSharper test runner in a VS .NET 4.5 project.
The same is reproducible with the standalone NUnit test runner (2.6.4).
I reported this bug but it's closed as won't fix: https://github.com/nunit/nunit/issues/1209
So you can either use NUnit 3.x or accept that it's just broken in NUnit 2.6.x.
Although a and b are both of type Int32[2][], that does not mean they are equal: Equals returns true only if the references of your arrays are identical, which they are not. What you want is to check whether their content is the same, e.g. with SequenceEqual. Note that for a jagged array a plain a.SequenceEqual(b) still compares the inner arrays by reference, so you need to compare the inner arrays element by element.
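For reference, a small sketch of the difference between reference equality and an element-wise comparison for the jagged arrays from the question (the nested SequenceEqual is needed because a plain SequenceEqual compares the inner int[] instances by reference):

```csharp
using System;
using System.Linq;

var a = new[] { new[] { 1, 2 }, new[] { 3, 4 } };
var b = new[] { new[] { 1, 2 }, new[] { 3, 4 } };

// Object.Equals on arrays is reference equality: false here.
bool refEqual = a.Equals(b);

// SequenceEqual compares the *outer* elements with the default
// comparer, which for int[] is again reference equality: still false.
bool shallowEqual = a.SequenceEqual(b);

// Compare the inner arrays element by element instead.
bool deepEqual = a.Length == b.Length
    && a.Zip(b, (x, y) => x.SequenceEqual(y)).All(eq => eq);

Console.WriteLine($"{refEqual} {shallowEqual} {deepEqual}"); // False False True
```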
I would like to configure my EditorConfig (https://editorconfig.org/) configuration in a way that the C# code snippet var v = new Object(kind) {Id = num++}; is automatically reformatted to var v = new Object(kind) { Id = num++ }; by adding spaces after the opening and before the closing bracket. I went through the documentation and also checked the C# manual (https://learn.microsoft.com/en-us/visualstudio/ide/editorconfig-code-style-settings-reference?view=vs-2017#example-editorconfig-file) but couldn't quite find a solution yet.
Microsoft has implemented its own csharp- and dotnet-specific options for the .editorconfig.
Here are the up-to-date spacing options.
I guess you want to do something like this:
// csharp_space_before_open_square_brackets = true
int [] numbers = new int [] { 1, 2, 3, 4, 5 };
// csharp_space_between_empty_square_brackets = true
int[ ] numbers = new int[ ] { 1, 2, 3, 4, 5 };
// csharp_space_between_square_brackets = true
int index = numbers[ 0 ];
Disclaimer: for brackets, there are currently only options for square brackets; there is no option for spaces inside the curly braces of an object initializer.
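Putting those options together, a minimal .editorconfig sketch might look like this (the option names are the documented csharp_space_* settings; note again that none of them affect object-initializer braces):

```ini
# at the root of the repository
root = true

[*.cs]
# square-bracket spacing only; no brace-spacing option exists
csharp_space_before_open_square_brackets = false
csharp_space_between_empty_square_brackets = false
csharp_space_between_square_brackets = false
```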
Apologies if I'm missing something very basic.
For a given lattice array, in which the lattice values represent the minimum for their bucket, what is the best way to group an array of values?
e.g.
double[] lattice = { 2.3, 2.8, 4.1, 4.7 };
double[] values = { 2.35, 2.4, 2.6, 3, 3.8, 4.5, 5.0, 8.1 };
GroupByLattice(values, lattice);
such that GroupByLattice returns IGroupings that look like:
2.3 : { 2.35, 2.4, 2.6 }
2.8 : { 3, 3.8 }
4.1 : { 4.5 }
4.7 : { 5.0, 8.1 }
edit:
I'm green enough with LINQ queries that this is the best I can come up with:
values.GroupBy( curr => lattice.First( lat => curr > lat) )
Issues with this:
Everything ends up in the first bucket. I can understand why (the first bucket's predicate is also satisfied by every value that belongs to a later bucket), but I'm having a hard time wrapping my head around these in-place operations to get the predicate that I actually want.
I suspect that having a LINQ query inside of a LINQ query will not be very performant
Post-Mortem Solution and Results:
Dmitry Bychenko provided a great answer, I just wanted to provide some followup for those who may come across this answer in the future. I had originally been trying to solve: How can I simplify a huge dataset for plotting?
For starters, my first attempt was actually pretty close. With my lattice being already ordered I simply needed to change a .First( ... ) to a .Last( ... )
i.e.
values.GroupBy( curr => lattice.Last( lat => curr > lat) )
That's all well and good, but was curious about how much better Dmitry's solution would perform. I tested it with a random set of 10000 doubles, with a lattice at a 0.25 spacing. (I pulled out the .Select(...) transform from Dmitry's solution to keep it fair)
The average of 20 runs spit out the result:
Mine: 602ms
Dmitrys: 3ms
Uh ... WOW! That's a 200x increase in speed. 200x! I had to run this a few times and inspect in the debugger just to be certain that the LINQ statement was evaluating before the timestamp (trusty .ToArray() to the rescue). I'm going to say it now: anyone looking to accomplish this same task should most certainly use this methodology.
Provided that lattice is sorted (it's easy to sort the array with Array.Sort(lattice)) you can use Array.BinarySearch:
double[] lattice = { 2.3, 2.8, 4.1, 4.7 };
double[] values = { 2.35, 2.4, 2.6, 3, 3.8, 4.5, 5.0, 8.1 };
var result = values
    .GroupBy(item => {
        int index = Array.BinarySearch(lattice, item);
        // a negative result is the bitwise complement of the index of the
        // next larger element, so ~index - 1 is the bucket just below
        return index >= 0 ? lattice[index] : lattice[~index - 1];
    })
    .Select(chunk => String.Format("{0} : [{1}]",
        chunk.Key, String.Join(", ", chunk)));
Test
Console.Write(String.Join(Environment.NewLine, result));
Outcome
2.3 : [2.35, 2.4, 2.6]
2.8 : [3, 3.8]
4.1 : [4.5]
4.7 : [5, 8.1]
If you ever need it faster, you can iterate the arrays only once if both of them are sorted:
double[] lattice = { 2.3, 2.8, 4.1, 4.7 };
double[] values = { 2.35, 2.4, 2.6, 3, 3.8, 4.5, 5.0, 8.1 };
var result = new List<double>[lattice.Length]; // array of lists
for (int l = lattice.Length - 1, v = values.Length - 1; l >= 0; l--) // start from the last elements
{
    result[l] = new List<double>(values.Length / lattice.Length * 2); // optional initial capacity of the list
    for (; v >= 0 && values[v] >= lattice[l]; v--)
    {
        result[l].Insert(0, values[v]);
    }
}
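For reference, here is a runnable sketch of that single-pass idea; it swaps the Insert(0, ...) calls (each O(n) in the list length) for Add plus one Reverse per bucket:

```csharp
using System;
using System.Collections.Generic;

double[] lattice = { 2.3, 2.8, 4.1, 4.7 };
double[] values = { 2.35, 2.4, 2.6, 3, 3.8, 4.5, 5.0, 8.1 };

var result = new List<double>[lattice.Length];
int v = values.Length - 1;
for (int l = lattice.Length - 1; l >= 0; l--) // walk both arrays from the end
{
    result[l] = new List<double>();
    // consume every value that belongs to bucket l (both arrays are sorted)
    for (; v >= 0 && values[v] >= lattice[l]; v--)
        result[l].Add(values[v]);
    result[l].Reverse(); // restore ascending order within the bucket
}

for (int l = 0; l < lattice.Length; l++)
    Console.WriteLine($"{lattice[l]} : [{string.Join(", ", result[l])}]");
```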
I checked the section of the C# language specification regarding enums, but was unable to explain the output for the following code:
enum en {
    a = 1, b = 1, c = 1,
    d = 2, e = 2, f = 2,
    g = 3, h = 3, i = 3,
    j = 4, k = 4, l = 4
}
en[] list = new en[] {
    en.a, en.b, en.c,
    en.d, en.e, en.f,
    en.g, en.h, en.i,
    en.j, en.k, en.l
};
foreach (en ele in list) {
    Console.WriteLine("{1}: {0}", (int)ele, ele);
}
It outputs:
c: 1
c: 1
c: 1
d: 2
d: 2
d: 2
g: 3
g: 3
g: 3
k: 4
k: 4
k: 4
Now, why would it select the third "1", the first "2" and "3", but the second "4"? Is this undefined behavior, or am I missing something obvious?
This is specifically documented to be undocumented behaviour.
There is probably something in the way the code is written that will end up picking the same thing every time but the documentation of Enum.ToString states this:
If multiple enumeration members have the same underlying value and you attempt to retrieve the string representation of an enumeration member's name based on its underlying value, your code should not make any assumptions about which name the method will return.
(my emphasis)
As mentioned in a comment, a different .NET runtime might return different values, but the whole problem with undocumented behaviour is that it is prone to change for no (seemingly) good reason. It could change depending on the weather, the time, the mood of the programmer, or even in a hotfix to the .NET runtime. You cannot rely on undocumented behavior.
Note that in your example, ToString is exactly what you want to look at since you're printing the value, which will in turn convert it to a string.
If you try to do a comparison, all the enum values with the same underlying numerical value are equivalent, and you cannot tell which one you stored in the variable in the first place.
In other words, if you do this:
var x = en.a;
there is no way to afterwards deduce that you wrote en.a and not en.b or en.c as they all compare equal, they all have the same underlying value. Well, short of creating a program that reads its own source.
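A minimal sketch of that equivalence, reusing a trimmed-down en from the question:

```csharp
using System;

var x = en.a;

// All three names share the underlying value 1, so they compare equal;
// nothing records that x was assigned en.a rather than en.b or en.c.
Console.WriteLine(x == en.b); // True
Console.WriteLine(x == en.c); // True
Console.WriteLine((int)x);    // 1

enum en { a = 1, b = 1, c = 1 }
```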
I have a data source like -4,-3,-3,-2,-1,0,1,2,2,3,4, and a function that captures repeated numbers; for example, in this data source -3 and 2 are repeated. The repeated numbers are reported at the end of the program.
I couldn't find a good example (I spent 3 hours).
How can I implement a unit test with NUnit that tests this situation and tells me the results? If you have an example, it would be very useful to me. (Really appreciated.)
You can use TestCase attributes for simple data like what you've described.
[Test]
[TestCase(new[] { -4, -3, -3, -2, -1, 0, 1, 2, 2, 3, 4 }, new[] { -3, 2 })]
public void YourTest(int[] given, int[] expected)
{ ... }
Note: ReSharper (at least my version of it) doesn't honor multiple test cases like this one so I had to confirm multiple test cases with the NUnit GUI.
First things first - get a working test. Something like this:
[Test]
public void DetectsMinusThreeAndTwo()
{
    RepeatingDigitsDetector target = new RepeatingDigitsDetector();
    int[] source = new int[] { -4, -3, -3, -2, -1, 0, 1, 2, 2, 3, 4 };
    int[] expected = new int[] { -3, 2 };
    int[] actual = target.GetRepeats(source);
    Assert.AreEqual(expected.Length, actual.Length, "checking lengths");
    for (int i = 0; i < expected.Length; i++)
    {
        Assert.AreEqual(expected[i], actual[i], "checking element {0}", i);
    }
}
Later, you can start adding in goodies like the TestCase or TestCaseSource attributes. But if you're trying to do TDD (as the tdd tag implies), you need to start with a test.
I would recommend TestCaseSource in this instance: with several tests, the data can become hard to read inside TestCase attributes.
As your test data gets complex, it will be difficult to handle.
Consider storing your data in another source such as excel, json or Database.
I personally like storing test data in embedded json files.
The package JsonSectionReader provides good support for this.
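A sketch of the TestCaseSource approach. The detector class and method names here are hypothetical, standing in for whatever the real implementation is called, and a throwaway LINQ implementation is included only so the example is self-contained:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Minimal stand-in detector so the test has something to exercise;
// substitute your own implementation.
public class RepeatingDigitsDetector
{
    public int[] GetRepeats(IEnumerable<int> source) =>
        source.GroupBy(n => n)
              .Where(g => g.Count() > 1)
              .Select(g => g.Key)
              .ToArray();
}

public class RepeatDetectorTests
{
    // Each TestCaseData pairs an input array with its expected repeats.
    private static IEnumerable<TestCaseData> RepeatCases()
    {
        yield return new TestCaseData(
            new[] { -4, -3, -3, -2, -1, 0, 1, 2, 2, 3, 4 },
            new[] { -3, 2 });
        yield return new TestCaseData(new[] { 1, 2, 3 }, new int[0]);
    }

    [TestCaseSource(nameof(RepeatCases))]
    public void FindsRepeats(int[] given, int[] expected)
    {
        var target = new RepeatingDigitsDetector();
        Assert.That(target.GetRepeats(given), Is.EqualTo(expected));
    }
}
```

Because the data lives in a named method, each case can grow (or move to a JSON file or database) without cluttering the attribute itself.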
I have a Vector class, and I was testing the following unit test (using nUnit).
1 Vector test1 = new Vector(new double[] { 6, 3, 4, 5 });
2 Vector test2 = test1;
3 Assert.AreEqual(test1, test2, "Reference test");
4 test2 = new Vector(new double[] { 3, 3, 4, 5 });
5 Assert.AreEqual(test1, test2, "Reference test");
The first test in line 3 passes, but the second test in line 5 fails. Shouldn't test2 also point to the same memory as test1, since I did the assignment statement in line 2? My Vector is defined as a class, so it is a reference type. On the other hand, the following tests pass:
1 Vector test1 = new Vector(new double[] { 6, 3, 4, 5 });
2 Vector test2 = test1;
3 Assert.AreEqual(test1, test2, "Reference test");
4 test2[1] = 4;
5 Assert.AreEqual(test1, test2, "Reference test");
Does that mean that, when I use the new operator to define a new object, old assignments are no longer valid? Any other (or correct - if I am wrong) explanation?
The line
test2 = new Vector(new double[] { 3, 3, 4, 5 });
creates a new instance of Vector on the heap and assigns its address to the test2 variable. test2 will point to a new, completely distinct object after that.
In contrast, the line
test2[1] = 4;
does not change the test2 variable itself (which is a reference to some object on the heap). Rather, it's changing the object it points to. test2 still refers to the same location on the heap.
To summarize, in the former, you are changing the reference while in the latter, you are altering the referent.
When you assign a variable like:
test2 = new Vector(new double[] { 3, 3, 4, 5 });
You are changing the value of test2 to be the new reference returned by the right-hand side of the assignment operator. Of course the reference returned here is different from the one in test1, because it comes from a separate invocation of the constructor; the references could not possibly be the same, as the Vector is being constructed with different arguments.
Yes, when you use the new operator to define a new object, old assignments are no longer valid.
Your vector IS a reference type, but when you say test2 = something you are saying "now test2 points to something else".
As an aside, if you want two different Vector objects with the same internal values to be considered equal, you can get that by implementing IEquatable on your Vector class, but that's another question...
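A sketch of what that IEquatable&lt;T&gt; implementation might look like on a hypothetical Vector (the real class in the question may store its data differently):

```csharp
using System;
using System.Linq;

var a = new Vector(new double[] { 6, 3, 4, 5 });
var b = new Vector(new double[] { 6, 3, 4, 5 });
Console.WriteLine(a.Equals(b));           // True: same component values
Console.WriteLine(ReferenceEquals(a, b)); // False: distinct instances

// A hypothetical Vector with value equality; only the pieces
// relevant to equality are shown.
public class Vector : IEquatable<Vector>
{
    private readonly double[] components;

    public Vector(double[] components) => this.components = components;

    public double this[int index]
    {
        get => components[index];
        set => components[index] = value;
    }

    public bool Equals(Vector other) =>
        other != null && components.SequenceEqual(other.components);

    public override bool Equals(object obj) => Equals(obj as Vector);

    public override int GetHashCode() =>
        components.Aggregate(17, (h, c) => h * 31 + c.GetHashCode());
}
```

With this in place, Assert.AreEqual treats two vectors with the same components as equal, while Assert.AreSame (reference identity) still distinguishes them.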
Equals compares the values to see if they match, whereas if you want to compare references you need to use ReferenceEquals.
Check out http://msdn.microsoft.com/en-us/library/dd183752.aspx
Vector test1 = new Vector(new double[] { 6, 3, 4, 5 });
is creating two things: the vector object itself, and the reference to it.
Imagine you have a list of items on a page.
When you use new Vector you effectively write a new line on the page which contains the vector.
Objects
{1,2,3,4,5}
You also have a list of references: (Vector test1 = new Vector) references the first page, and (test2 = test1) copies that same reference.
References
test1-> 1
test2-> 1
When you say test2 = new Vector {5,4,3,2,1}, you end up with a new vector object on the first page and change which vector test2 is referring to.
Objects
{1,2,3,4,5}
{5,4,3,2,1}
References
test1 -> 1
test2 -> 2
In your second example both test1 and test2 are still pointing to the same object, which is why the test passes.