Negating a variable shortcut - C#

This may be a trivial question, but I couldn't find an answer easily.
There are some shortcuts like
i = i + 1;
i++;
i = i+20;
i += 20;
But is there something to negate a variable in place?
MyClass.MyVeryLongSubClass.MoreStuff.MyBooleanHere = !MyClass.MyVeryLongSubClass.MoreStuff.MyBooleanHere;

Something like:
x ^= true;
It's a bit obscure, though, which is why people generally don't use it.

A purely numeric (and not obscure) solution would be:
x *= -1;
UPDATE
Note also that the assignment operation yields a value and can be used within an expression.
instead of
x = -x;
y = 100 * x;
you can write
y = 100 * (x = -x);
or even
y = 100 * (x *= -1);
But I prefer the first version; the second and third are not as easy to understand.

Related

How can I make the C++ compiler follow the same precedence, associativity and order of evaluation that C# has in this assignment statement?

Consider the following piece of code:
int x = 1;
int y = 2;
y = x + (x = y);
When this runs in C#, the variables end up assigned with these values:
x = 2
y = 3
On the other hand, when the same code runs in C++, the variables end up like this:
x = 2
y = 4
Clearly, the C++ compiler is using different precedence, associativity and order-of-evaluation rules than C# (as explained in this article by Eric Lippert).
So, the question is:
Is it possible to rewrite the assignment statement to force C++ to evaluate the same as C# does?
The only restriction is to keep it as a one-liner. I know this can be rewritten by splitting the assignments into two separate lines, but the goal is to maintain it in a single line.
Yes, it is possible indeed.
x = (y+=x) - x;
So simple.
Is it possible to rewrite the assignment statement to force C++ to evaluate the same as C# does?
Yes. To clarify, the rule in C# is that most of the time, the left side of an expression is evaluated fully before the right side. So C# guarantees that the evaluation of x on the left of the + happens before the side effect of the x = y on the right side. C++ does not make this guarantee; compilers can disagree as to the meaning of this expression.
(Challenge: I said "most" for a reason. Give an example of a simple expression in C# where a side effect on the left is executed after a side effect on the right.)
I know this can be rewritten by splitting the assignments into two separate lines
Right. You wish to have a statement that has two side effects: increment y by the original value of x, and assign the original value of y to x. So you could write that as
int t = y;
y = y + x;
x = t;
But:
the goal is to maintain it in a single line.
I assume by "line" you mean "statement". Fine. Just use the comma operator.
int t = ((t = y), (y = y + x), (x = t));
Easy peasy. Why you would want to, I don't know. I assume this is a puzzle designed to elicit signal on whether or not you know about the comma operator.
Extra credit: How many of the parentheses in that statement are required?
Super bonus: what if we don't want to use the comma operator? We can use other operators as sequencing operators.
int t = ((t = y) & 0) || ((y = y + x) & 0) || (x = t);
We execute the left of the ||, and it turns out to be false, so we execute the right side, and so on.
Similarly, ((expr1) & 0) ? 0 : (expr2) guarantees you that expr1 runs before expr2.
Basically what you need here is a guarantee that a number of subexpressions happen in a particular order. Look up "sequence points" to see which operators in C and C++ produce sequence points; you can use them to sequence expressions, hence the name.
You could write:
int x = 1;
int y = 2;
y = std::exchange(x,y) + y;
But I highly recommend against it: you have to stop and think even about this simple operation.

What's wrong with this implementation of an arcsine approximation in C#?

This is a formula to approximate arcsine(x) using Taylor series from this blog
This is my implementation in C#. I don't know where the mistake is; the code gives a wrong result when run:
When i = 0, the division will be 1/x. So I assign temp = 1/x at startup. For each iteration, I change "temp" after "i".
I loop until two consecutive values are very "near" each other; when the delta between them is very small, I return the value.
My test case:
Input is x = 1, so the expected arcsin(x) will be arcsin(1) = PI/2 = 1.57079633 rad.
using System;

class Arc
{
    static double abs(double x)
    {
        return x >= 0 ? x : -x;
    }

    static double pow(double mu, long n)
    {
        double kq = mu;
        for (long i = 2; i <= n; i++)
        {
            kq *= mu;
        }
        return kq;
    }

    static long fact(long n)
    {
        long gt = 1;
        for (long i = 2; i <= n; i++)
        {
            gt *= i;
        }
        return gt;
    }

    #region arcsin
    static double arcsinX(double x)
    {
        int i = 0;
        double temp = 0;
        while (true)
        {
            //i++;
            var iFactSquare = fact(i) * fact(i);
            var tempNew = (double)fact(2 * i) / (pow(4, i) * iFactSquare * (2 * i + 1)) * pow(x, 2 * i + 1);
            if (abs(tempNew - temp) < 0.00000001)
            {
                return tempNew;
            }
            temp = tempNew;
            i++;
        }
    }
    #endregion

    public static void Main()
    {
        Console.WriteLine(arcsinX(1));
        Console.ReadLine();
    }
}
In many series evaluations, it is often convenient to use the quotient between terms to update the term. The quotient here is
 a[n]     (2n)! * x^(2n+1)        4^(n-1) * ((n-1)!)^2 * (2n-1)
------ = ----------------------- * -----------------------------
a[n-1]   4^n * (n!)^2 * (2n+1)     (2n-2)! * x^(2n-1)

       = (2n * (2n-1)^2 * x^2) / (4 * n^2 * (2n+1))
       = ((2n-1)^2 * x^2) / (2n * (2n+1))
Thus a loop to compute the series value is
sum = 1;
term = 1;
n = 1;
while (1 != 1 + term) {
    term *= (n - 0.5) * (n - 0.5) * x * x / (n * (n + 0.5));
    sum += term;
    n += 1;
}
return x * sum;
The convergence is only guaranteed for abs(x)<1, for the evaluation at x=1 you have to employ angle halving, which in general is a good idea to speed up convergence.
You are saving two different temp values (temp and tempNew) to check whether continuing the computation is worthwhile. This is good, except that you are not saving the sum of these values.
This is a summation. You need to add every new calculated value to the total. You are only keeping track of the most recently calculated value. You can only ever return the last calculated value of the series. So you will always get an extremely small number as your result. Turn this into a summation and the problem should go away.
NOTE: I've made this a community wiki answer because I was hardly the first person to think of this (just the first to put it down in a comment). If you feel that more needs to be added to make the answer complete, just edit it in!
The general suspicion is that this is down to integer overflow, namely that one of your values (probably the return of fact() or iFactSquare) is getting too big for the type you have chosen. It goes negative because you are using signed types: when the value exceeds the largest representable positive number, it wraps around into the negatives.
Try tracking how large n gets during your calculation, and figure out how big a number you would get if you ran that value through your fact, pow and iFactSquare computations. If it's bigger than the maximum 64-bit long value, as we suspect (assuming you're using 64-bit; the limit is much smaller for 32-bit), then try using a double instead.

Can't get cost function for logistic regression to work

I'm trying to implement logistic regression by myself, writing the code in C#. I found a library (Accord.NET) that I use to minimize the cost function. However, I always get different minima. Therefore I think something may be wrong in the cost function that I wrote.
static double costfunction(double[] thetas)
{
    int i = 0;
    double sum = 0;
    double[][] theta_matrix_transposed = MatrixCreate(1, thetas.Length);
    while (i != thetas.Length) { theta_matrix_transposed[0][i] = thetas[i]; i++; }
    i = 0;
    while (i != m) // m is the number of examples
    {
        int z = 0;
        double[][] x_matrix = MatrixCreate(thetas.Length, 1);
        while (z != thetas.Length) { x_matrix[z][0] = x[z][i]; z++; } // put values from the training set into the matrix
        double p = MatrixProduct(theta_matrix_transposed, x_matrix)[0][0];
        sum += y[i] * Math.Log(sigmoid(p)) + (1 - y[i]) * Math.Log(1 - sigmoid(p));
        i++;
    }
    double value = (-1 / m) * sum;
    return value;
}

static double sigmoid(double z)
{
    return 1 / (1 + Math.Exp(-z));
}
x is a list of lists that represent the training set, one list for each feature. What's wrong with the code? Why am I getting different results every time I run the L-BFGS? Thank you for your patience, I'm just getting started with machine learning!
That is very common with these optimization algorithms: the minimum you arrive at depends on your weight initialization. The fact that you are getting different minima doesn't necessarily mean something is wrong with your implementation. Instead, check your gradients to make sure they are correct using the finite-differences method, and also look at your train/validation/test accuracy to see whether it is acceptable.

Need help with search optimization

I am fairly new to programming and I need some help with optimizing.
Basically a part of my method does:
for (int i = 0; i < Tiles.Length; i++)
{
    x = Tiles[i].WorldPosition.x;
    y = Tiles[i].WorldPosition.y;
    z = Tiles[i].WorldPosition.z;
    Tile topsearch = Array.Find(Tiles,
        search => search.WorldPosition == Tiles[i].WorldPosition +
                  new Vector3Int(0, 1, 0));
    if (topsearch.isEmpty)
    {
        // DoMyThing
    }
}
So I am searching for a Tile at the position one unit above the current Tile.
My problem is that the whole method takes 0.1 s, which causes a small hiccup. Without Array.Find the method takes 0.01 s.
I tried with a for loop also, but still no great result, because I also need 3 more checks for the bottom, left and right.
Can somebody help me out and point me to a way of getting faster results?
Maybe I should go with something like threading?
You could create a 3-dimensional array so that you can look up a tile at a specific location by just looking at what's in Tiles[x, y + 1, z].
You can then iterate through your data in 2 loops: one to build up Tiles and one to do the checks you are doing in your code above, which would then just be:
for (int i = 0; i < Tiles.Length; i++)
{
    Tile toFind = Tiles[Tiles[i].x, Tiles[i].y + 1, Tiles[i].z];
    if (toFind != null) ...
}
You would have to dimension the array so that you have 1 extra row in the y so that Tiles[x, y + 1, z] doesn't cause an index-out-of-range exception.
Adding to Roy's solution, if the space is not continuous, as it might be, you could put a hashcode of WorldPosition (the x, y and z coordinates) to some good use here.
I mean you could override WorldPosition's GetHashCode with your own implementation like that:
public class WorldPosition
{
    public int X;
    public int Y;
    public int Z;

    public override int GetHashCode()
    {
        int result = X.GetHashCode();
        result = (result * 397) ^ Y.GetHashCode();
        result = (result * 397) ^ Z.GetHashCode();
        return result;
    }
}
See Why is '397' used for ReSharper GetHashCode override? for explanation.
Then you can put your tiles in a Dictionary<WorldPosition, Tile>.
This would allow for quickly looking up for dict[new WorldPosition(x, y, z + 1)] etc. Dictionaries use hashcode for keys, so it would be fast.
First, as @Roy suggested, try storing the values in an array so you can access them with x, y, z coordinates.
Another thing you could do is change the search to
Tile topsearch = Array.Find(Tiles,
    search => search.WorldPosition.x == Tiles[i].WorldPosition.x &&
              search.WorldPosition.y == Tiles[i].WorldPosition.y + 1 &&
              search.WorldPosition.z == Tiles[i].WorldPosition.z);
This might be faster as well, depending on how many fields your WorldPosition has.

Which is fast : Query Syntax vs. Loops

The following code provides two approaches that generate pairs of integers whose sum is less than 100, and they're arranged in descending order based on their distance from (0,0).
//approach 1
private static IEnumerable<Tuple<int,int>> ProduceIndices3()
{
var storage = new List<Tuple<int, int>>();
for (int x = 0; x < 100; x++)
{
for (int y = 0; y < 100; y++)
{
if (x + y < 100)
storage.Add(Tuple.Create(x, y));
}
}
storage.Sort((p1,p2) =>
(p2.Item1 * p2.Item1 +
p2.Item2 * p2.Item2).CompareTo(
p1.Item1 * p1.Item1 +
p1.Item2 * p1.Item2));
return storage;
}
//approach 2
private static IEnumerable<Tuple<int, int>> QueryIndices3()
{
return from x in Enumerable.Range(0, 100)
from y in Enumerable.Range(0, 100)
where x + y < 100
orderby (x * x + y * y) descending
select Tuple.Create(x, y);
}
This code is taken from the book Effective C# by Bill Wagner, Item 8. Throughout the item, the author focuses on the syntax, compactness and readability of the code, but pays very little attention to performance and barely discusses it.
So I basically want to know, which approach is faster? And what is usually better at performance (in general) : Query Syntax or Manual Loops?
Please discuss them in detail, providing references if any. :-)
Profiling is truth, but my gut feeling would be that the loops are probably faster. The important thing is that 99 times out of 100 the performance difference just doesn't matter in the grand scheme of things. Use the more readable version and your future self will thank you when you need to maintain it later.
Running each function 1000 times:
for loop: 2623 ms
query: 2821 ms
That makes sense, since the second one is just syntactic sugar for the first. But I would use the second one for its readability.
Though this doesn't strictly answer your question, performance-wise I would suggest merging the x + y test into the loop bound, thus:
for (int x = 0; x < 100; x++)
    for (int y = 0; y < 100 - x; y++)
        storage.Add(Tuple.Create(x, y));
