Math.Net system of linear equations with a 0 value in solution - c#

I am trying to solve a matrix equation in Math.NET where one of the actual solution values is 0, but I am getting NaN as the result.
Here is an example matrix which has already been reduced for simplicity.
1 0 1 | 10000
0 1 -1 | 1000
0 0 0 | 0
Code example:
public void DoExample()
{
    Matrix<double> A = Matrix<double>.Build.DenseOfArray(new double[,] {
        { 1, 0, 1 },
        { 0, 1, -1 },
        { 0, 0, 0 },
    });
    Vector<double> B = Vector<double>.Build.Dense(new double[] { 10000, 1000, 0 });
    var result = A.Solve(B);
}
The solution I am hoping to get to is [ 10000, 1000, 0 ].
As you can see, the result I want is already the augmented vector. This is because I simplified the matrix to reduced row echelon form (RREF) by hand using Gauss-Jordan for this example. If I could somehow use Gauss-Jordan operations within Math.Net to do this, I could check for the scenario where an all-zero row exists in the RREF matrix. Can this be done?
Otherwise, is there any way I can recognize when 0 is the only possible solution for one of the variables using the existing Math.Net linear algebra solver operations?
Thanks!

This is a degenerate matrix with rank 2, so you cannot expect to get a unique solution (there are infinitely many solutions).

The iterative solver can actually handle this, for example
using MathNet.Numerics.LinearAlgebra.Double.Solvers;
A.SolveIterative(B, new MlkBiCgStab());
returns
[10000, 1000, 0]
Interestingly, with the MKL Native Provider this also works with the normal Solve routine, but not with the managed provider (as you have found out) nor with e.g. the OpenBLAS native provider.
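To recognize that situation up front, one option is to check the matrix rank before solving. A minimal sketch, assuming MathNet.Numerics is referenced (the helper name SolveWithRankCheck is my own illustration, not a Math.NET API):
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearAlgebra.Double.Solvers;

public static Vector<double> SolveWithRankCheck(Matrix<double> A, Vector<double> B)
{
    // Rank() < RowCount means the system is rank-deficient, i.e. the RREF
    // contains at least one all-zero row and there is no unique solution.
    if (A.Rank() < A.RowCount)
    {
        // Fall back to the iterative solver, which returns one particular
        // solution (for the example above: [10000, 1000, 0]).
        return A.SolveIterative(B, new MlkBiCgStab());
    }

    // Full rank: the direct solver gives the unique solution.
    return A.Solve(B);
}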

Related

Given N, for consecutive numbers from 1 to N create a recursive C# method to print out all possible '+' and '-' combinations that result in X

I'm dealing with the following assignment, and as I'm not very familiar with recursion I'm at a loss. I'd greatly appreciate it if someone with more experience could point me in the right direction.
The assignment reads as follows:
Write a console application that determines all combinations of '+' and '-' operators which can be put between natural numbers ranging from 1 to a given N >=2, so that the result of the expression is a given X number. If there is no possible combination, the application will output 'N/A'.
Ex:
For inputs:
6 //N
3 //X
The console will read:
1 + 2 + 3 - 4 - 5 + 6 = 3
1 + 2 - 3 + 4 + 5 - 6 = 3
1 - 2 - 3 - 4 + 5 + 6 = 3
Given the circumstances of the assignment, I'm not allowed to use any directive other than 'System'. I've found several versions of this 'Coin Change' problem, but mostly in C++ or Python, and also quite different from my current assignment. I'm not asking anyone to do my assignment for me, I'm simply looking for some valid pointers so I know where to start.
This code sample should help you. You can adapt this recursion to your needs, since it only counts the number of such combinations.
Take into account that this approach is quite slow; you can find DP (dynamic programming) solutions that are much faster.
private static int Search(int start, int end, int cur, int searched)
{
    if (start > end)
    {
        return Convert.ToInt32(cur == searched);
    }
    return Search(start + 1, end, cur + start, searched) + Search(start + 1, end, cur - start, searched);
}

static void Main(string[] args)
{
    int result = Search(2, 6, 1, 3);
}
I would make a function (let's call it f) that takes 5 parameters: the current number, the end number (N), the desired result (X), the formula so far, and the result of the formula so far.
In the function you first test if the current number is the end number. If it is, then test if the result of the formula is the desired number. If it is, print the formula.
If you're not at the end yet, then call the function itself twice. Once where you add the next number and once where you subtract it.
The first call to the function would be
f(1, 6, 3, "1", 1). It would then call itself twice with
f(2, 6, 3, "1 + 2", 3) and
f(2, 6, 3, "1 - 2", -1)
Then it would continue like that until it reaches the calls with 6 numbers in the formula where it would check if the result is 3.
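A minimal sketch of that approach in C# (the method name f and its parameters follow the description above; this is an illustration, not code from the assignment):
using System;

static void f(int current, int end, int target, string formula, int value)
{
    if (current == end)
    {
        // Reached the last number: print the formula if it hits the target.
        if (value == target)
            Console.WriteLine(formula + " = " + target);
        return;
    }

    int next = current + 1;
    f(next, end, target, formula + " + " + next, value + next); // add the next number
    f(next, end, target, formula + " - " + next, value - next); // subtract the next number
}

// First call, as described above:
// f(1, 6, 3, "1", 1);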
Hope that helps you get started.
This answer strives to be the pinnacle in the art of helping without helping.
Which is what was asked for.
So here is the complete, albeit a bit obfuscated, solution in a language you might never have heard of. But if you think about what you see long enough, you might get an idea of how to solve this in C#.
data Expr = Lit Integer | Add | Sub deriving (Show)

compute :: [Expr] -> Integer -> Integer
compute [] value = value
compute (Add : Lit x : rest) value = compute rest (value + x)
compute (Sub : Lit x : rest) value = compute rest (value - x)
compute (Lit x1 : Add : Lit x2 : rest) value = compute rest (value + x1 + x2)
compute (Lit x1 : Sub : Lit x2 : rest) value = compute rest (value + x1 - x2)
compute [Lit x] _ = x

solve :: Integer -> [Integer] -> [Expr] -> [[Expr]] -> [[Expr]]
solve goal [] current found
  | goal == compute current 0 = current : found
  | otherwise = found
solve goal (n:ns) current found =
  solve goal ns (current ++ [Add, Lit n]) []
    ++ solve goal ns (current ++ [Sub, Lit n]) []

prettyFormula :: [Expr] -> String
prettyFormula f =
  concat $ fmap (\xp -> case xp of
    Lit n -> show n
    Add -> "+"
    Sub -> "-") f
With that loaded and with fmap prettyFormula (solve 3 [2..6] [Lit 1] []) in the REPL, you get your result:
["1+2+3-4-5+6","1+2-3+4+5-6","1-2-3-4+5+6"]

C# formula to determine index

I have a maths issue within my program. I think the problem is simple but I'm not sure what terms to use, hence my own searches returned nothing useful.
I receive some values in a method; the only thing I know (in terms of logic) is that the numbers will be something which can be duplicated.
In other words, the numbers I could receive are predictable and would be one of the following
1
2
4
16
256
65536
etc
I need to know at what index they appear. In other words, 1 is always at index 0, 2 at index 1, 4 at index 2, 16 at index 3, etc.
I know I could write a big switch statement, but I was hoping a formula would be tidier. Do you know if one exists, or any clues as to the names of the math formulas I should be using?
The numbers you listed are powers of two. The inverse function of raising a number to a power is the logarithm, so that's what you use to go backwards from (using your terminology here) a number to an index.
var num = 256;
var ind = Math.Log(num, 2);
Above, ind is the base-2 logarithm of num. This code will work for any base; just substitute that base for 2. If you are only going to be working with powers of 2 then you can use a special-case solution that is faster based on the bitwise representation of your input; see What's the quickest way to compute log2 of an integer in C#?
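As an illustration of that special case (my own sketch, not code from the linked question), a loop over the bits works for exact powers of two and avoids floating-point rounding:
// Returns floor(log2(n)); for an exact power of two this is the index of the set bit.
static int Log2(ulong n)
{
    int index = 0;
    while (n > 1)
    {
        n >>= 1;   // drop one bit per iteration
        index++;
    }
    return index;
}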
Try
Math.Log(num, base)
where base is 2
MSDN: http://msdn.microsoft.com/en-us/library/hd50b6h5.aspx
The logarithm gives you the power of the base that produces your number.
But that only holds if your number really is a power of 2;
otherwise you have to understand exactly what you have and what you need.
It also looks like the numbers are repeatedly squared, so try this:
private static int getIndexOfSeries(UInt64 aNum)
{
    if (aNum == 1)
        return 0;
    else if (aNum == 2)
        return 1;
    else
    {
        int lNum = (int)Math.Log(aNum, 2);
        return 1 + (int)Math.Log(lNum, 2);
    }
}
Result for UInt64[] Arr = new UInt64[] { 1, 2, 4, 16, 256, 65536, 4294967296 } is:
Num[0] = 1
Num[1] = 2
Num[2] = 4
Num[3] = 16
Num[4] = 256
Num[5] = 65536
Num[6] = 4294967296 //65536*65536
where [i] is the index
You should calculate the base 2 logarithm of the number
Hint: For the results:
x   result
0   2
1   4
2   16
3   256
4   65536
5   4294967296
etc.
The formula is, for a given integer x:
Math.Pow(2, Math.Pow(2, x));
that is
2 to the power (2 to the power (x) )
Once the formula is known, one could solve it for x (I won't go through that since you already got an answer).
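For reference, that inversion is just taking the base-2 logarithm twice; a one-line sketch (my own illustration, using the hint's indexing where x = 0 corresponds to 2):
// For value = 2, 4, 16, 256, ... this yields x = 0, 1, 2, 3, ...
int x = (int)Math.Round(Math.Log(Math.Log(value, 2), 2));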

C# Left Shift Operator

There's a statement a co-worker of mine wrote which I don't completely understand. Unfortunately he's not available right now, so here it is (with modified names, we're working on a game in Unity).
private readonly int FRUIT_LAYERS =
(1 << LayerMask.NameToLayer("Apple"))
| (1 << LayerMask.NameToLayer("Banana"));
NameToLayer takes a string and returns an integer. I've always seen left shift operators used with the constant integer on the right side, not the left, and all the examples I'm finding via Google follow that approach. In this case, I think he's pushing Apple and Banana onto the same relative layer (which I'll use later for filtering). In the future there would be more "fruits" to filter by. Any brilliant stackoverflowers who can give me an explanation of what's happening on those lines?
1 << x is essentially saying "give me a number where the (x+1)-th bit is one and the rest of the bits are all zero."
x | y is a bitwise OR, so it will go through each bit from 1 to n, and if that bit is one in either x or y then that bit will be one in the result; otherwise it will be zero.
So if LayerMask.NameToLayer("Apple") returns 2 and LayerMask.NameToLayer("Banana") returns 3 then FRUIT_LAYERS will be a number with the 3rd and 4th bits set, which is 1100 in binary, or 12 in base 10.
Your coworker is essentially using an int in place of a bool[32] to try to save on space. The block of code you show is analogous to
bool[] FRUIT_LAYERS = new bool[32];
FRUIT_LAYERS[LayerMask.NameToLayer("Apple")] = true;
FRUIT_LAYERS[LayerMask.NameToLayer("Banana")] = true;
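And to complete the picture, a small sketch (my own illustration, not from your co-worker's code) of how the original int mask is typically tested later when filtering:
// Is the layer of some object contained in the mask?
int layer = LayerMask.NameToLayer("Apple");          // e.g. 2
bool isFruit = (FRUIT_LAYERS & (1 << layer)) != 0;   // true if that layer's bit is set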
You might want to consider a pattern more like this:
[Flags]
enum FruitLayers : int
{
    Apple  = 1 << 0,
    Banana = 1 << 1,
    Kiwi   = 1 << 2,
    ...
}

private readonly FruitLayers FRUIT_LAYERS = FruitLayers.Apple | FruitLayers.Banana;
The code is shifting the binary value 1 to the left; the number of binary places to shift is determined by the layer indices of Apple and Banana. After both values are shifted, they are ORed bitwise.
Example:
Assume Apple returns 2 and Banana returns 3; you get:
1 << 2, which is 0100 (that means 4 in decimal)
1 << 3, which is 1000 (that means 8 in decimal)
Now 0100 bitwise OR with 1000 is 1100, which means 12.
1 << n is basically equivalent to 2^n (2 raised to the power n).

Custom order by, is it possible?

I have the following collection:
-3, -2, -1, 0, 1, 2, 3
How can I, in a single order-by statement, sort them into the following form:
the negative numbers first, ordered by their absolute value, followed by the positive numbers.
-1, -2, -3, 0, 1, 2, 3
Combination sorting, first by the sign, then by the absolute value:
list.OrderBy(x => Math.Sign(x)).ThenBy(x => Math.Abs(x));
or:
from x in list
orderby Math.Sign(x), Math.Abs(x)
select x;
This is conceptually similar to the SQL statement:
SELECT x
FROM list
ORDER BY SIGN(x), ABS(x)
In LINQ-to-Objects, the sort is performed only once, not twice.
WARNING: Math.Abs(x) will fail if x == int.MinValue. If this marginal case is important, then you have to handle it separately.
var numbers = new[] { -3, -2, -1, 0, 1, 2, 3 };
var customSorted = numbers.OrderBy(n => n < 0 ? int.MinValue - n : n);
The idea here is to compare non-negative numbers by their own value, and to compare negative numbers by the key int.MinValue - n, which is -2147483648 - n. Because n is negative, the closer n is to zero, the lower its key, so the negatives come out ordered by absolute value and all of them sort before the non-negatives.
It doesn't work when the list itself contains int.MinValue, because that key evaluates to 0, which collides with the key for 0 itself. As Richard proposed, it could be done with longs if you need the full range, but performance will be slightly impaired by this.
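A small sketch of that long-based variant (my own illustration), which also handles int.MinValue because the subtraction is done in 64 bits:
// Negatives map to keys just above long.MinValue (ordered by absolute value),
// non-negatives keep their own value, so no collisions occur.
var customSortedFull = numbers.OrderBy(n => n < 0 ? long.MinValue - n : (long)n);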
Try something like (VB.Net example):
OrderBy(Function(x) IIf(x < 0, Math.Abs(x), x + 1000))
...if the absolute values are < 1000.
You could express it in LINQ, but if I were reading the code two years later, I'd prefer to see something like:
list.OrderBy(i=>i, new NegativeThenPositiveByAscendingAbsoluteValueComparer());
You will need to implement IComparer.
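Since the answer only names the comparer, here is a minimal sketch of what its implementation might look like (an assumption on my part, not existing code):
using System;
using System.Collections.Generic;

class NegativeThenPositiveByAscendingAbsoluteValueComparer : IComparer<int>
{
    public int Compare(int x, int y)
    {
        bool xNeg = x < 0, yNeg = y < 0;
        if (xNeg != yNeg)
            return xNeg ? -1 : 1; // negatives sort before non-negatives

        if (xNeg)
            // Compare negatives by absolute value; use long to avoid
            // overflow on int.MinValue.
            return Math.Abs((long)x).CompareTo(Math.Abs((long)y));

        return x.CompareTo(y); // non-negatives ascending
    }
}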

Applying hidden Markov model to multiple simultaneous bit sequences

This excellent article on implementing a Hidden Markov Model in C# does a fair job of classifying a single bit sequence based on training data.
How can I modify the algorithm, or build it out (multiple HMMs?), to support the classification of multiple simultaneous bit sequences?
Example
Instead of classifying just one stream:
double t1 = hmm.Evaluate(new int[] { 0,1 }); // 0.49999423004045024
double t2 = hmm.Evaluate(new int[] { 0,1,1,1 }); // 0.11458685045803882
Rather classify a dual bit stream:
double t1 = hmm.Evaluate(new int[] { [0, 0], [0, 1] });
double t2 = hmm.Evaluate(new int[] { [0, 0], [1, 1], [0, 1], [1, 1] });
Or even better, three streams:
double t1 = hmm.Evaluate(new int[] { [0, 0, 1], [0, 0, 1] });
double t2 = hmm.Evaluate(new int[] { [0, 0, 1], [1, 1, 0], [0, 1, 1], [1, 1, 1] });
Obviously the training data would also be expanded.
The trick is to model the set of observations as the n-ary Cartesian product of all possible values of each sequence; in your case the HMM will have 2^n output symbols, where n is the number of bit sequences.
Example: for three bit sequences, the 8 symbols are 000 001 010 011 100 101 110 111, as if we created a mega-variable whose values are all the possible tuples of values of the individual observation sequences (0/1 from each bit sequence).
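A minimal sketch of that encoding in C# (my own illustration; it assumes the per-step bits arrive as int arrays and produces the int[] symbol sequence that the article's Evaluate call expects):
// Pack each time step's n bits into a single symbol in the range [0, 2^n).
static int[] Encode(int[][] steps)
{
    var symbols = new int[steps.Length];
    for (int t = 0; t < steps.Length; t++)
    {
        int symbol = 0;
        foreach (int bit in steps[t])
            symbol = (symbol << 1) | bit;   // e.g. { 0, 1, 1 } -> binary 011 = 3
        symbols[t] = symbol;
    }
    return symbols;
}

// Usage, mirroring the three-stream example above:
// double t2 = hmm.Evaluate(Encode(new[] { new[] { 0, 0, 1 }, new[] { 1, 1, 0 },
//                                         new[] { 0, 1, 1 }, new[] { 1, 1, 1 } }));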
The article mentioned deals with the hidden Markov model implementation in the Accord.NET Framework. When using the complete version of the framework, and not just the subproject available in that article, one can use the generic HiddenMarkovModel model with any suitable emission symbol distribution. To express the joint probability between two or three discrete variables, it would be worthwhile to use the JointDistribution class.
If, however, there are many symbol variables, such that expressing all possible variable combinations is not practical, it is better to use a continuous representation for the features and a Multivariate Normal distribution instead.
An example would be:
// Specify an initial normal distribution for the samples.
var initialDensity = new MultivariateNormalDistribution(3); // 3 dimensions

// Create a continuous hidden Markov model with two states (ergodic topology here)
// and an underlying multivariate Normal distribution as the probability density.
var model = new HiddenMarkovModel<MultivariateNormalDistribution>(new Ergodic(2), initialDensity);

// Configure the learning algorithm to train the sequence classifier until the
// difference in the average log-likelihood changes only by as little as 0.0001.
var teacher = new BaumWelchLearning<MultivariateNormalDistribution>(model)
{
    Tolerance = 0.0001,
    Iterations = 0,
};

// Fit the model (sequences holds the multivariate training observations).
double likelihood = teacher.Run(sequences);
