I have 2 sets of 3 conditions each.
Let a,b,c be one set of conditions and x,y,z be the other set. I need to evaluate the following
if(a && x)
.....
else if(a && y)
.....
else if(a && z)
.....
In the same way 3 conditions with b and 3 with c.
What is the best way to evaluate the conditions without writing all the combinations?
Update:
I am making a game in Unity. The conditions a, b, c check whether the x position of the player is within a certain range relative to a ball's position, and conditions x, y, z do the same for the y position. Based on the matching combination (one of ax, ay, az, bx, by, bz, cx, cy, cz), one animation state (out of 9) is selected, so each combination produces a different result.
Update:
I ended up making two functions that evaluate the two sets of conditions and each returns an enum on which bitwise OR is done to get the final state.
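For anyone curious, here is a minimal sketch of what that enum + bitwise-OR approach could look like; the enum names, threshold, and helper methods are illustrative guesses, not the OP's actual code:
using System;

[Flags]
public enum BallZone
{
    Left  = 1 << 0, Center = 1 << 1, Right = 1 << 2,  // first set of conditions (a, b, c): player x vs. ball x
    Below = 1 << 3, Level  = 1 << 4, Above = 1 << 5   // second set of conditions (x, y, z): player y vs. ball y
}

public static class BallZones
{
    const float Threshold = 0.5f;  // made-up range

    public static BallZone EvaluateX(float playerX, float ballX) =>
        playerX < ballX - Threshold ? BallZone.Left :
        playerX > ballX + Threshold ? BallZone.Right : BallZone.Center;

    public static BallZone EvaluateY(float playerY, float ballY) =>
        playerY < ballY - Threshold ? BallZone.Below :
        playerY > ballY + Threshold ? BallZone.Above : BallZone.Level;
}

// Usage: OR the two independent results to get one of the nine combined states,
// then switch on it to pick the animation.
//
//   BallZone state = BallZones.EvaluateX(px, bx) | BallZones.EvaluateY(py, by);
//   switch (state)
//   {
//       case BallZone.Left | BallZone.Below: /* animation 1 */ break;
//       case BallZone.Left | BallZone.Level: /* animation 2 */ break;
//       // ...remaining seven combinations
//   }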
My first thought would be to test the first group of conditions first, and then test the second group of conditions afterwards. While this is not an answer that avoids writing all nine if statements, it will on average reduce the number of condition checks performed.
if (a)
{
if (x) { }
else if (y) { }
else if (z) { }
}
else if (b)
{
if (x) { }
else if (y) { }
else if (z) { }
}
else if (c)
{
if (x) { }
else if (y) { }
else if (z) { }
}
Generate all possible combinations. Since you are dealing with Booleans, each gives you 2 options, so the simplest case of 2 bools results in 2 x 2 = 4 combinations. Generally speaking, K = 2^n, where n represents the number of booleans you deal with.
If I were you, I would stick to Tuple<bool, bool, bool> and nested for (var i = 0; i < 2; i++) loops (for n parameters, n nested loops are needed).
Then I would build a list of Tuple<Predicate<Tuple<bool, bool, bool>>, Action>, which pairs each predicate with the callback that should trigger when the predicate evaluates to true.
The rest could be achieved with LINQ. Something like possibleCombinationsList.Select(tripleOfBooleans => predicatesAndCallbacksList.Single(predicateAndCallback => predicateAndCallback.Item1(tripleOfBooleans) == true)).Single().Item2();
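A rough, self-contained sketch of that predicate/callback pairing (the names and the example state are purely illustrative, and here the matching callback is looked up for the current state rather than for every possible combination):
using System;
using System.Collections.Generic;
using System.Linq;

class PredicateCallbackDemo
{
    static void Main()
    {
        // The current values of the three booleans we care about.
        var state = Tuple.Create(true, false, false);

        // Pair each predicate with the callback to run when it matches.
        var predicatesAndCallbacks = new List<Tuple<Predicate<Tuple<bool, bool, bool>>, Action>>
        {
            Tuple.Create<Predicate<Tuple<bool, bool, bool>>, Action>(
                t => t.Item1 && !t.Item2 && !t.Item3, () => Console.WriteLine("case A")),
            Tuple.Create<Predicate<Tuple<bool, bool, bool>>, Action>(
                t => !t.Item1 && t.Item2 && !t.Item3, () => Console.WriteLine("case B")),
            // ...one entry per combination you care about
        };

        // Find the single predicate that matches the current state and invoke its callback.
        predicatesAndCallbacks.Single(pair => pair.Item1(state)).Item2();
    }
}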
Have fun.
This is one way of doing it using LINQ expressions. LINQ expressions let you define expressions dynamically and then perform multiple operations on them. I used a fake conditional expression that takes a number as an argument, just for the sake of demonstration. First, define each condition in a method or directly within your code. Then put those conditions into arrays, one array per set of conditions. Finally, two foreach loops execute the expressions.
int num = 20;
ConditionalExpression[] firstSet = { ExpressionA(num), ExpressionB(num), ExpressionC(num) };
ConditionalExpression[] secondSet = { ExpressionX(num), ExpressionY(num), ExpressionZ(num) };
foreach(var firstSetExpression in firstSet)
foreach (var secondSetExpression in secondSet)
{
var result1 = Expression.Lambda<Func<bool>>(firstSetExpression.Test).Compile();
var result2 = Expression.Lambda<Func<bool>>(secondSetExpression.Test).Compile();
if (result1.Invoke() && result2.Invoke())
{
// do your thing here
}
}
You define the expressions for each condition within a method, for example like this:
private static ConditionalExpression ExpressionA(int num)
{
return Expression.Condition(
Expression.Constant(num > 10),
Expression.Constant("num is greater than 10"),
Expression.Constant("num is smaller than 10")
);
}
This can be optimized in many ways, but it should be enough to get you started.
Update: Here is another way, for those who don't like compiling at runtime, using delegates:
Func<int, bool>[] firstSet = new Func<int,bool> [] { ExpressionA(), ExpressionA(), ExpressionA() };
Func<int, bool>[] secondSet = new Func<int, bool>[] { ExpressionA(), ExpressionA(), ExpressionA() };
foreach(var firstSetExpression in firstSet)
foreach (var secondSetExpression in secondSet)
{
if (firstSetExpression.Invoke(20) && secondSetExpression.Invoke(20))
{
// do your thing here
}
}
...
...
...
private static Func<int, bool> ExpressionA()
{
return (x) => x > 10;
}
Good Luck.
Related
I've got a static List<long> primes of all known primes up to a certain point, and a function like this:
static bool isPrime(long p)
{
double rootP = Math.Sqrt(p);
foreach (long q in primes)
{
if (p % q == 0)
return false;
if (q >= rootP)
return true;
}
return true;
}
which could be parallelised like this:
static bool isPrime(long p)
{
double rootP = Math.Sqrt(p);
primes.AsParallel().ForAll(q =>
{
if (p % q == 0)
return false;
if (q > rootP)
break;
});
return true;
}
However, this gives a compile-time error saying some return types in my block aren't implicitly convertible to the delegate return type.
I'm somewhat new to LINQ, especially PLINQ. This, to me, seems like a good candidate for parallelism, since the checking of each known prime against the candidate prime is an independent process.
Is there an easy way to fix my block so that it works, or do I need to attack this problem in a completely different way?
Syntax-wise, you're making two mistakes in your code:
A return in a lambda doesn't return from the enclosing method, just from the lambda, so your return false; wouldn't work correctly.
You can't break out of a lambda. Even if that lambda is executed in a loop, that loop is basically invisible to you; you certainly can't control it directly.
I think the best way to fix your code is to use a LINQ method made exactly for this purpose: Any(), along with TakeWhile() to filter out the primes that are too large:
static bool IsPrime(long p)
{
double rootP = Math.Sqrt(p);
return !primes.AsParallel().TakeWhile(q => q <= rootP).Any(q => p % q == 0);
}
But there is also a flaw in your reasoning:
This, to me, seems like a good candidate for parallelism, since the checking of each known prime against the candidate prime is an independent process.
It's not as simple as that. Checking each prime is also an extremely simple process. This means that the overhead of simple parallelization (like the one I suggested above) is likely to be bigger than the performance gains. A more complicated solution (like the one suggested by Matthew Watson in a comment) could help with that.
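To illustrate what "more complicated" might look like (this is my own sketch, not necessarily what that comment proposed), you can amortise the per-item delegate overhead by letting each parallel work item test a whole chunk of primes; this assumes the primes list and using System.Linq from the question:
static bool IsPrimeChunked(long p)
{
    double rootP = Math.Sqrt(p);
    const int chunkSize = 1024;
    int chunkCount = (primes.Count + chunkSize - 1) / chunkSize;

    // The candidate is composite if any chunk contains a divisor <= sqrt(p).
    return !Enumerable.Range(0, chunkCount)
        .AsParallel()
        .Any(chunk => primes
            .Skip(chunk * chunkSize)
            .Take(chunkSize)
            .TakeWhile(q => q <= rootP)
            .Any(q => p % q == 0));
}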
This is a solution to your problem:
static List<long> primes = new List<long>() {
2,3,5,7,11,13,17,19,23
};
static bool isPrime(long p) {
var sqrt = (long)Math.Sqrt(p);
if (sqrt * sqrt == p)
return (false);
return (primes.AsParallel().All(n => n > sqrt || p % n != 0));
}
It even short-circuits for perfect squares, and it will stop checking further candidates as soon as the first one that divides p is found.
The error occurs because if your condition
p % q == 0
is true the closure will return false and when
q > rootP
it breaks and returns nothing. This will work in PHP but not in .NET :P
A lambda is a full anonymous function, and return types always have to be consistent.
You have to redesign your code here. You have done it right in your non-parallelised example... Just replace break with return true (it won't be prettier then, but it should work).
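Reading that advice loosely (my interpretation, not the answerer's literal code), "return true instead of break" only makes sense inside a construct whose lambda is expected to return a bool, for example All:
static bool isPrime(long p)
{
    double rootP = Math.Sqrt(p);
    // Each prime "passes" if it is past the square root or is not a divisor;
    // All short-circuits (per partition) on the first failure.
    return primes.AsParallel().All(q => q > rootP || p % q != 0);
}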
It may be easier to use Parallel.For instead. Note that the loop variable is an index into primes, not the prime itself, and the shared result flag must be reset on every call:
static volatile bool result;
static bool isPrime(long p)
{
    result = true;
    double rootP = Math.Sqrt(p);
    Parallel.For(0, primes.Count, (i, state) =>
    {
        long q = primes[i];
        if (p % q == 0)
        {
            result = false;
            state.Break();
        }
        if (q > rootP)
            state.Break();
    });
    return result;
}
I have a generic GetMinimum method. It accepts an array of an IComparable type (so it may be string[] or double[]). In the case of double[], how can I implement this method to ignore the double.NaN values? (I'm looking for good practices.)
When I pass this array
double[] inputArray = { double.NaN, double.NegativeInfinity, -2.3, 3 };
it returns double.NaN!
public T GetMinimum<T>(T[] array) where T : IComparable<T>
{
T result = array[0];
foreach (T item in array)
{
if (result.CompareTo(item) > 0)
{
result = item;
}
}
return result;
}
Since both NaN < x and NaN > x will always be false, asking for the minimum of a collection that can contain NaN is simply not defined. It is like dividing by zero: there is no valid answer.
So the logical approach would be to pre-filter the values. That will not be generic but that should be OK.
var results = inputArray.EliminateNaN().GetMinimum();
Separation of concerns: the filtering should not be the responsibility (and burden) of GetMinimum().
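EliminateNaN is not a framework method; a minimal version of such a filter (name borrowed from the line above) might look like this:
using System.Collections.Generic;
using System.Linq;

public static class DoubleSequenceExtensions
{
    // Strips NaN values so the comparison-based minimum search never sees them.
    public static IEnumerable<double> EliminateNaN(this IEnumerable<double> source)
    {
        return source.Where(d => !double.IsNaN(d));
    }
}
Since the question's GetMinimum takes a T[], the call site would then be something like GetMinimum(inputArray.EliminateNaN().ToArray()).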
You can't from inside the method. The reason is that you have no idea what T can be from inside the method. Maybe you could with a little casting, but ideally this should be your approach:
public T GetMinimum<T>(T[] array, params T[] ignorables) where T : IComparable<T>
{
T result = array[0]; //some issue with the logic here.. what if array is empty
foreach (T item in array)
{
if (ignorables.Contains(item))
continue;
if (result.CompareTo(item) > 0)
{
result = item;
}
}
return result;
}
Now call this:
double[] inputArray = { double.NaN, double.NegativeInfinity, -2.3, 3 };
GetMinimum(inputArray, double.NaN);
If you're sure there is only one item to be ignored, then you can take just T as the second parameter (perhaps as an optional parameter).
Or, for a shorter approach, just:
inputArray.Where(x => !x.Equals(double.NaN)).Min();
According to this Q&A, it's complicated: Sorting an array of Doubles with NaN in it
Fortunately, you can hack around it:
if (item is Single || item is Double) {
    Double fitem = Convert.ToDouble(item);
    // NaN never compares equal to anything, including itself, so use IsNaN rather than ==.
    if (Double.IsNaN(fitem)) continue;
}
I have the following code
int someCount = 0;
for ( int i =0 ; i < intarr.Length;i++ )
{
if ( intarr[i] % 2 == 0 )
{
someCount++;
continue;
}
// Some other logic for those not satisfying the condition
}
Is it possible to use any of the Array.Where or Array.SkipWhile methods to achieve the same?
foreach(int i in intarr.Where(<<condition>> + increment for failures) )
{
// Some other logic for those not satisfying the condition
}
Use LINQ:
int someCount = intarr.Count(val => val % 2 == 0);
I definitely prefer #nneonneo's way for short statements (and it uses an explicit lambda), but if you want to build a more elaborate query, you can use the LINQ query syntax:
var count = ( from val in intarr
where val % 2 == 0
select val ).Count();
Obviously this is probably a poor choice when the query can be expressed with a single lambda expression, but I find it useful when composing larger queries.
More examples: http://code.msdn.microsoft.com/101-LINQ-Samples-3fb9811b
Nothing (much) prevents you from rolling your own Where that counts the failures. "Nothing much" because neither lambdas nor methods with yield return statements are allowed to reference out/ref parameters, so the desired extension with the following signature won't work:
// dead-end/bad signature, do not attempt
IEnumerable<T> Where<T>(
this IEnumerable<T> self,
Func<T,bool> predicate,
out int failures)
However, we can declare a local variable for the failure-count and return a Func<int> that can get the failure-count, and a local variable is completely valid to reference from lambdas. Thus, here's a possible (tested) implementation:
public static class EnumerableExtensions
{
public static IEnumerable<T> Where<T>(
this IEnumerable<T> self,
Func<T,bool> predicate,
out Func<int> getFailureCount)
{
if (self == null) throw new ArgumentNullException("self");
if (predicate == null) throw new ArgumentNullException("predicate");
int failures = 0;
getFailureCount = () => failures;
return self.Where(i =>
{
bool res = predicate(i);
if (!res)
{
++failures;
}
return res;
});
}
}
...and here's some test code that exercises it:
Func<int> getFailureCount;
int[] items = { 0, 1, 2, 3, 4 };
foreach(int i in items.Where(i => i % 2 == 0, out getFailureCount))
{
Console.WriteLine(i);
}
Console.WriteLine("Failures = " + getFailureCount());
The above test, when run outputs:
0
2
4
Failures = 2
There are a couple of caveats I feel obligated to warn about. Since you could break out of the loop prematurely without having walked the entire IEnumerable<>, the failure-count would only reflect the failures encountered so far, not the total number of failures as in #nneonneo's solution (which I prefer). Also, if the implementation of LINQ's Where extension were to change in a way that called the predicate more than once per item, then the failure count would be incorrect. One more point of interest is that, from within your loop body, you can call the getFailureCount Func to get the current running failure count so far.
I presented this solution to show that we are not locked-into the existing prepackaged solutions. The language and framework provides us with lots of opportunities to extend it to suit our needs.
Say I have a list of integers:
List<int> myInts = new List<int>() {1,2,3,5,8,13,21};
I would like to get the next available integer, ordered by increasing integer. Not the last or highest one, but in this case the next integer that is not in this list. In this case the number is 4.
Is there a LINQ statement that would give me this? As in:
var nextAvailable = myInts.SomeCoolLinqMethod();
Edit: Crap. I said the answer should be 2 but I meant 4. I apologize for that!
For example: Imagine that you are responsible for handing out process IDs. You want to get the list of current process IDs, and issue a next one, but the next one should not just be the highest value plus one. Rather, it should be the next one available from an ordered list of process IDs. You could get the next available starting with the highest, it does not really matter.
I see a lot of answers that write a custom extension method, but it is possible to solve this problem with the standard linq extension methods and the static Enumerable class:
List<int> myInts = new List<int>() {1,2,3,5,8,13,21};
// This will set firstAvailable to 4.
int firstAvailable = Enumerable.Range(1, Int32.MaxValue).Except(myInts).First();
The answer provided by #Kevin has an undesirable performance profile. The logic will access the source sequence numerous times: once for the .Count call, once for the .FirstOrDefault call, and once for each .Contains call. If the IEnumerable<int> instance is a deferred sequence, such as the result of a .Select call, this will cause at least 2 calculations of the sequence, plus one for each number checked. Even if you pass a list to the method, it will potentially go through the entire list for each checked number. Imagine running it on the sequence { 1, 1000000 } and you can see how it would not perform well.
LINQ strives to iterate source sequences no more than once. This is possible in general and can have a big impact on the performance of your code. Below is an extension method which will iterate the sequence exactly once. It does so by looking for the difference between each successive pair, then adds 1 to the first lower number which is more than 1 away from the next number:
public static int? FirstMissing(this IEnumerable<int> numbers)
{
int? priorNumber = null;
foreach(var number in numbers.OrderBy(n => n))
{
var difference = number - priorNumber;
if(difference != null && difference > 1)
{
return priorNumber + 1;
}
priorNumber = number;
}
return priorNumber == null ? (int?) null : priorNumber + 1;
}
Since this extension method can be called on any arbitrary sequence of integers, we make sure to order them before we iterate. We then calculate the difference between the current number and the prior number. If this is the first number in the list, priorNumber will be null and thus difference will be null. If this is not the first number in the list, we check to see if the difference from the prior number is exactly 1. If not, we know there is a gap and we can add 1 to the prior number.
You can adjust the return statement to handle sequences with 0 or 1 items as you see fit; I chose to return null for empty sequences and n + 1 for the sequence { n }.
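For example, with the list from the question (assuming the extension method above is in scope):
var myInts = new List<int> { 1, 2, 3, 5, 8, 13, 21 };

Console.WriteLine(myInts.FirstMissing());              // 4
Console.WriteLine(new List<int> { 7 }.FirstMissing()); // 8 (sequence { n } yields n + 1)
Console.WriteLine(new List<int>().FirstMissing());     // prints nothing (null for an empty sequence)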
This will be fairly efficient:
static int Next(this IEnumerable<int> source)
{
int? last = null;
foreach (var next in source.OrderBy(_ => _))
{
if (last.HasValue && last.Value + 1 != next)
{
return last.Value + 1;
}
last = next;
}
return last.HasValue ? last.Value + 1 : Int32.MaxValue;
}
public static class IntExtensions
{
public static int? SomeCoolLinqMethod(this IEnumerable<int> ints)
{
int counter = ints.Count() > 0 ? ints.First() : -1;
while (counter < int.MaxValue)
{
if (!ints.Contains(++counter)) return counter;
}
return null;
}
}
Usage:
var nextAvailable = myInts.SomeCoolLinqMethod();
Ok, here is the solution that I came up with that works for me.
var nextAvailableInteger = Enumerable.Range(myInts.Min(),myInts.Max()).FirstOrDefault( r=> !myInts.Contains(r));
If anyone has a more elegant solution I would be happy to accept that one. But for now, this is what I'm putting in my code and moving on.
Edit: this is what I implemented after Kevin's suggestion to add an extension method. And that was the real answer - that no single LINQ extension would do so it makes more sense to add my own. That is really what I was looking for.
public static int NextAvailableInteger(this IEnumerable<int> ints)
{
return NextAvailableInteger(ints, 1); // by default we use one
}
public static int NextAvailableInteger(this IEnumerable<int> ints, int defaultValue)
{
if (ints == null || ints.Count() == 0) return defaultValue;
var ordered = ints.OrderBy(v => v);
int counter = ints.Min();
int max = ints.Max();
while (counter < max)
{
if (!ordered.Contains(++counter)) return counter;
}
return (++counter);
}
Not sure if this qualifies as a cool Linq method, but using the left outer join idea from This SO Answer
var thelist = new List<int> {1,2,3,4,5,100,101};
var nextAvailable = (from curr in thelist
join next in thelist
on curr + 1 equals next into g
from newlist in g.DefaultIfEmpty()
where !g.Any ()
orderby curr
select curr + 1).First();
This puts the processing on the sql server side if you're using Linq to Sql, and allows you to not have to pull the ID lists from the server to memory.
var nextAvailable = myInts.Prepend(0).TakeWhile((x,i) => x == i).Last() + 1;
It is 7 years later, but there are better ways of doing this than the selected answer or the answer with the most votes.
The list is already in order, and based on the example 0 doesn't count. We can just prepend 0 and check whether each item matches its index. TakeWhile will stop evaluating once it hits a number that doesn't match, or at the end of the list.
The answer is the last item that matches, plus 1.
TakeWhile is more efficient than enumerating all the possible numbers and then excluding the existing ones with Except, because TakeWhile only goes through the list until it finds the first available number, and the resulting enumerable has at most n items.
The answer using Except generates an entire enumerable of answers that are not needed just to grab the first one. LINQ can do some optimization with First(), but it is still much slower and more memory-intensive than TakeWhile.
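To make the mechanics concrete, here is roughly what the pipeline sees for the list from the question (note that Enumerable.Prepend is only available on newer framework versions):
var myInts = new List<int> { 1, 2, 3, 5, 8, 13, 21 };

// After Prepend(0):          0, 1, 2, 3, 5, 8, 13, 21
// Paired with its index i:  (0,0) (1,1) (2,2) (3,3) (5,4) ...
// TakeWhile(x == i) keeps:   0, 1, 2, 3        -- stops at (5,4)
// Last() + 1:                3 + 1 = 4
var nextAvailable = myInts.Prepend(0).TakeWhile((x, i) => x == i).Last() + 1;   // 4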
I'm working on some Project Euler questions and need some help understanding a solution I found.
My question is: Where the heck is X being set in the SkipWhile method call?? When I break the code during runtime and step through to that point I never see a value being set for it. Yet the code will work all the way through. I checked the definition for SkipWhile and maybe I just don't understand how the arguments being passed in the call satisfy the 3 parameter method definition. Same thing for Math.Pow - Where is that X getting set!?
public long FindGreatestPrimeFactor(long factorGreaterThan, long number)
{
long upperBound = (long)Math.Ceiling(Math.Sqrt(number));
// find next factor of number
long nextFactor = Range(factorGreaterThan + 1, upperBound)
.SkipWhile(x => number % x > 0).FirstOrDefault();
// if no other factor was found, then the number must be prime
if (nextFactor == 0)
{
return number;
}
else
{
// find the multiplicity of the factor
long multiplicity = Enumerable.Range(1, Int32.MaxValue)
.TakeWhile(x => number % (long)Math.Pow(nextFactor, x) == 0)
.Last();
long quotient = number / (long)Math.Pow(nextFactor, multiplicity);
if (quotient == 1)
{
return nextFactor;
}
else
{
return FindGreatestPrimeFactor(nextFactor, quotient);
}
}
}
private IEnumerable<long> Range(long first, long last)
{
for (long i = first; i <= last; i++)
{
yield return i;
}
}
I believe you are talking about the lambda expression:
x => number % x > 0
All lambda expressions use the lambda operator =>, which is read as "goes to". The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block.
In a LINQ expression, each item, when iterated over, is supplied to the lambda. In the body of the lambda, if you wish to refer to the item, you need to give it a name. In this case the parameter ends up named x.
The expressions that look like this:
x => number % x > 0
are called lambda expressions. They actually are functions, and x is a parameter. SkipWhile takes a function, and then executes it with different values for its parameters.
Here is how the lambda expression would be written as a function:
bool Foobar(long x)
{
return number % x > 0;
}
In SkipWhile, I believe the function is called with x being the first item in the list. If it returns true, the function is called again with the second item in the list, and so on, until the function returns false.
In this case, SkipWhile is asking for a function that will convert a value of the type in the list to a bool. Lambda expressions are a concise way to express this.
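A tiny standalone example of that mechanism, unrelated to the prime code above:
int[] numbers = { 1, 2, 5, 1, 2 };

// x is bound to 1, then 2 (both satisfy x < 3, so they are skipped),
// then 5 (the predicate is false, so skipping stops and the rest is yielded).
var rest = numbers.SkipWhile(x => x < 3);   // { 5, 1, 2 }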
SkipWhile is retrieving its input values (the x) from the Range method, which in turn returns numbers from factorGreaterThan + 1 up to upperBound. Not sure why the author decided to write a method for this, since something very similar is built in with the Enumerable.Range method (though Enumerable.Range works with int and an element count rather than an inclusive long upper bound).
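As a rough sketch, the built-in method could replace the custom one like this, assuming the bounds fit in Int32:
private static IEnumerable<long> Range(long first, long last)
{
    // Enumerable.Range takes (start, count), not an inclusive upper bound,
    // and works with int, so the values are cast and then projected back to long.
    return Enumerable.Range((int)first, (int)(last - first + 1)).Select(i => (long)i);
}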