I have an integer input representing a 32 bit mask.
This mask goes alongside a string array of length 32.
Sometimes the string array will contain nulls or empty strings. In these situations I would like to "remove" the null-or-empty-string bit from the mask.
This mapping would involve shifting all subsequent bits to the right:
"d" "c" null "a" -> "d" "c" "a"
1 1 1 1 -> 0 1 1 1
Finally, I would like to be able to unmap this mask when I'm finished with it. In this case I would be inserting zeroes wherever a null-or-empty-string exists in the string array.
The string array does not change between these mappings, so I am not concerned about the data becoming desynchronised.
Without using LINQ or lambda expressions, how can I implement these methods?
private static readonly string[] m_Names;
public static int MapToNames(int mask)
{
}
public static int MapFromNames(int mask)
{
}
I have been looking a little at the BitArray class, but it doesn't seem very helpful as it doesn't provide methods for converting to/from an integer.
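For illustration, one possible (untested) sketch of these two methods, assuming m_Names is populated elsewhere (the local names and loop structure are just placeholders):
public static int MapToNames(int mask)
{
    int mapped = 0;
    int target = 0;                        // next bit position in the compacted mask
    for (int i = 0; i < m_Names.Length; i++)
    {
        if (string.IsNullOrEmpty(m_Names[i]))
            continue;                      // null/empty slots contribute no bit
        if ((mask & (1 << i)) != 0)
            mapped |= 1 << target;         // copy the surviving bit, shifted down
        target++;
    }
    return mapped;
}
public static int MapFromNames(int mask)
{
    int unmapped = 0;
    int source = 0;                        // next bit position in the compacted mask
    for (int i = 0; i < m_Names.Length; i++)
    {
        if (string.IsNullOrEmpty(m_Names[i]))
            continue;                      // null/empty slots stay zero when unmapping
        if ((mask & (1 << source)) != 0)
            unmapped |= 1 << i;            // put the bit back in its original position
        source++;
    }
    return unmapped;
}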
I know that this is not the answer to your question, but think about storing the bit mask in an integer field as the sum of the individual flag values. It is easy to control and maintain. To check whether a flag is set in the mask:
if((bitmaskIntValue & value) != 0)
{
}
Example: you have bitmaskIntValue = 10 -> 2^1 + 2^3
if((bitmaskIntValue & 2) != 0)
{
//the first or second input is set (depending on whether you start counting from 2^0 or 2^1)
}
Note: This is more of a logic/math problem than a specific C# problem.
I have my own class called Number - it simply contains two separate byte arrays called Whole and Decimal. Each of these byte arrays represents an essentially arbitrarily large whole number, but, put together, the idea is that they form a single number with a whole part and a decimal part.
The bytes are stored in a little-endian format, representing a number. I'm creating a method called AddNumbers which will add two of these Numbers together.
This method relies on another method called PerformAdd, which just adds two arrays together. It simply takes in a pointer to the final byte array, a pointer to one array to add, and a pointer to the second array to add - as well as the length of each of them. The two arrays are just named "larger" and "smaller". Here is the code for this method:
private static unsafe void PerformAdd(byte* finalPointer, byte* largerPointer, byte* smallerPointer, int largerLength, int smallerLength)
{
int carry = 0;
// Go through all the items that can be added, and work them out.
for (int i = 0; i < smallerLength; i++)
{
var add = *largerPointer-- + *smallerPointer-- + carry;
// Stick the result of this addition in the "final" array.
*finalPointer-- = (byte)(add & 0xFF);
// Now, set a carry from this.
carry = add >> 8;
}
// Now, go through all the remaining items (which don't need to be added), and add them to the "final" - still working with the carry.
for (int i = smallerLength; i < largerLength; i++)
{
var wcarry = *largerPointer-- + carry;
// Stick the result of this addition in the "final" array.
*finalPointer-- = (byte)(wcarry & 0xFF);
// Now, set a carry from this.
carry = wcarry >> 8;
}
// Now, if we have anything still left to carry, carry it into a new byte.
if (carry > 0)
*finalPointer-- = (byte)carry;
}
This method isn't where the problem lies - the problem is with how I use it from the AddNumbers method. The way that works is fine - it sorts the two byte arrays into the "larger" (meaning the one with more bytes) and the "smaller", and then creates pointers; it does this for Whole and Decimal separately. The problem is with the decimal part.
Let's say we're adding the numbers 1251 and 2185 together: you get 3436, so that works perfectly!
Take another example: add 4.6 and 1.2 - once again this works fine, and you get 5.8. The problem comes with the next example.
Add 15.673 and 1.783: you would expect 17.456, but this actually returns 16.1456, because the "1" from the decimal addition is never carried over into the whole part.
So, this is my problem: How would I implement a way that knows when and how to do this? Here's the code for my AddNumbers method:
public static unsafe Number AddNumbers(Number num1, Number num2)
{
// Store the final result.
Number final = new Number(new byte[num1.Whole.Length + num2.Whole.Length], new byte[num1.Decimal.Length + num2.Decimal.Length]);
// We're going to figure out which number (num1 or num2) has more bytes, and then we'll create pointers to smallest and largest.
fixed (byte* num1FixedWholePointer = num1.Whole, num1FixedDecPointer = num1.Decimal, num2FixedWholePointer = num2.Whole, num2FixedDecPointer = num2.Decimal,
finalFixedWholePointer = final.Whole, finalFixedDecimalPointer = final.Decimal)
{
// Create a pointer and figure out which whole number has the most bytes.
var finalWholePointer = finalFixedWholePointer + (final.Whole.Length - 1);
var num1WholeLarger = num1.Whole.Length > num2.Whole.Length ? true : false;
// Store the larger/smaller whole number lengths.
var largerLength = num1WholeLarger ? num1.Whole.Length : num2.Whole.Length;
var smallerLength = num1WholeLarger ? num2.Whole.Length : num1.Whole.Length;
// Create pointers to the whole numbers (the largest amount of bytes and smallest amount of bytes).
var largerWholePointer = num1WholeLarger ? num1FixedWholePointer + (num1.Whole.Length - 1) : num2FixedWholePointer + (num2.Whole.Length - 1);
var smallerWholePointer = num1WholeLarger ? num2FixedWholePointer + (num2.Whole.Length - 1) : num1FixedWholePointer + (num1.Whole.Length - 1);
// Handle decimal numbers.
if (num1.Decimal.Length > 0 || num2.Decimal.Length > 0)
{
// Create a pointer and figure out which decimal has the most bytes.
var finalDecPointer = finalFixedDecimalPointer + (final.Decimal.Length - 1);
var num1DecLarger = num1.Decimal.Length > num2.Decimal.Length ? true : false;
// Store the larger/smaller decimal lengths.
var largerDecLength = num1DecLarger ? num1.Decimal.Length : num2.Decimal.Length;
var smallerDecLength = num1DecLarger ? num2.Decimal.Length : num1.Decimal.Length;
// Store pointers for decimals as well.
var largerDecPointer = num1DecLarger ? num1FixedDecPointer + (num1.Decimal.Length - 1) : num2FixedDecPointer + (num2.Decimal.Length - 1);
var smallerDecPointer = num1DecLarger ? num2FixedDecPointer + (num2.Decimal.Length - 1) : num1FixedDecPointer + (num1.Decimal.Length - 1);
// Add the decimals first.
PerformAdd(finalDecPointer, largerDecPointer, smallerDecPointer, largerDecLength, smallerDecLength);
}
// Add the whole number now.
PerformAdd(finalWholePointer, largerWholePointer, smallerWholePointer, largerLength, smallerLength);
}
return final;
}
The format you selected is fundamentally hard to work with, and I'm not aware of anyone who uses the same format for this task. For example, multiplication and division in that format must be very hard to implement.
Actually, I don't think you store enough information to uniquely restore the value in the first place. How do the stored representations in your format differ for 0.1 and 0.01? I don't think you can distinguish those two values.
The issue you are facing is a lesser side effect of the same problem: you store binary representations of decimal values and expect to be able to infer a unique size (number of digits) for the decimal representation. You can't, because when a decimal overflow happens you are not guaranteed to get an overflow in your base-256 stored value as well; in fact, more often than not the two don't happen simultaneously.
I don't think you can resolve this issue in any way other than explicitly storing something equivalent to the number of digits after the decimal point. And if you are going to do that anyway, why not switch to a much simpler format: a single BigInteger (yes, it is part of the standard library, although there is nothing like BigDecimal) plus a scale? This is the format used by many similar libraries. In that format 123.45 is stored as the pair 12345 and -2 (for the decimal position), while 1.2345 is stored as the pair 12345 and -4. Multiplication in that format is almost trivial (given that BigInteger already implements multiplication, you just need to be able to truncate zeros at the end). Addition and subtraction are less trivial: first match the scales of the two numbers by multiplying one of them by a power of 10, then use the standard BigInteger addition, and then normalize back (remove zeros at the end). Division is still hard, and you have to decide which rounding strategies you want to support, because the quotient of two numbers is not guaranteed to fit in a number of fixed precision.
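For illustration only, a minimal sketch of that value-and-scale idea (the type and member names here are mine, not from the question):
using System;
using System.Numerics;

// Sketch: a number is stored as Value * 10^Scale, e.g. 123.45 => (12345, -2)
// and 1.2345 => (12345, -4).
struct ScaledNumber
{
    public BigInteger Value;
    public int Scale;

    public static ScaledNumber Add(ScaledNumber a, ScaledNumber b)
    {
        // Bring both values to the smaller (more negative) scale by multiplying
        // with a power of ten, then let BigInteger do the actual addition.
        int scale = Math.Min(a.Scale, b.Scale);
        BigInteger av = a.Value * BigInteger.Pow(10, a.Scale - scale);
        BigInteger bv = b.Value * BigInteger.Pow(10, b.Scale - scale);
        return new ScaledNumber { Value = av + bv, Scale = scale };
    }
}
// Example: 123.45 + 1.2345 => Value 1246845, Scale -4, i.e. 124.6845.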
If you just need a BigDecimal in C#, I would suggest finding and using an existing implementation, for example https://gist.github.com/nberardi/2667136 (I am not the author, but it seems fine).
If you HAVE to implement it yourself for some reason (school, etc.), even then I would resort to using BigInteger.
If you have to implement it with byte arrays... you can still benefit from the idea of using a scale. You obviously have to take any extra digits produced by operations such as PerformAdd and carry them over into the main number.
However, the problems don't stop there. When you begin implementing multiplication you will run into more issues, and you will inevitably have to start mixing the decimal and integer parts.
8.73*0.11 -> 0.9603
0.12*0.026 -> 0.00312
As you can see, the integer and decimal parts mix, and the decimal part grows into a longer sequence;
however, if you represent these as:
873|2 * 11|2 -> 873*11|4 -> 9603|4 -> 0.9603
12|2 * 26|3 -> 12*26|5 -> 312|5 -> 0.00312
these problems disappear.
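A sketch of that multiplication rule in code (the names and tuple shape are my own; BigInteger is used only to hold the digits):
using System.Numerics;

// Sketch: a number is (Digits, DecimalDigits), e.g. 8.73 => (873, 2) and 0.11 => (11, 2).
// Multiply the digit values and add the decimal-digit counts.
static (BigInteger Digits, int DecimalDigits) Multiply(
    (BigInteger Digits, int DecimalDigits) a,
    (BigInteger Digits, int DecimalDigits) b)
{
    return (a.Digits * b.Digits, a.DecimalDigits + b.DecimalDigits);
}
// (873, 2) * (11, 2) -> (9603, 4) -> 0.9603
// (12, 2)  * (26, 3) -> (312, 5)  -> 0.00312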
I've been programming for many years, but until now I have never needed to use bitwise operations much, or to deal with data at the bit or even byte level. So, please forgive my lack of knowledge.
I'm having to process streaming message frame data that I'm getting via socket communication. The message frames are a series of bytes (shown here in hex), encoded big-endian, which I read into a byte array called byteArray. Take the following 2 bytes for example:
0x03 0x20
The data I need is represented in the first 14 bits - meaning I need to convert the first 14 bits into an int value. (The last 2 bits represent 2 other bool values). I have coded the following to accomplish this:
if (BitConverter.IsLittleEndian)
{
Array.Reverse(byteArray);
}
BitArray bitArray = GetBitArrayFromRange(new BitArray(byteArray), 0, 14);
int dataValue = GetIntFromBitArray(bitArray);
The dataValue variable ends up with the correct result which is: 800
The two functions I'm calling are here:
private static BitArray GetBitArrayFromRange(BitArray bitArray, int startIndex, int length)
{
var newBitArray = new BitArray(length);
for (int i = startIndex; i < length; i++)
{
newBitArray[i] = bitArray.Get(i);
}
return newBitArray;
}
private static int GetIntFromBitArray(BitArray bitArray)
{
int[] array = new int[1];
bitArray.CopyTo(array, 0);
return array[0];
}
Since I have a lack of experience in this area, my question is: Does this code look correct/reasonable? Or, is there a more preferred/conventional way of accomplishing what I need?
Thanks!
"The dataValue variable ends up with the correct result which is: 800"
Shouldn't that correct result be actually 200?
1) 00000011 00100001 : the integer 0x0321 (now skip the trailing two bits, 01, which are your two bools)
2) xx000000 11001000 : the remaining 14 bits, shifted into place (the two missing xx bits count as zero)
3) 00000000 11001000 : the expected final result of the 14-bit extraction = 200
At present it looks like you have an empty (zero-filled) 16 bits into which you put the 14 bits, but you are putting them in the exact same positions (left-aligned instead of right-aligned):
Original bits : 00000011 00100001
Slots 16 bit : XXXXXXXX XXXXXXXX
Instead of this : XX000000 11001000 //correct way
You have done : 00000011 001000XX //wrong way
Your right-hand-side XX bits are zero, so your result is 00000011 00100000, which gives 800 - but that's wrong, because it's not the true value of the specific 14 bits you extracted.
"Is there a more preferred/conventional way of accomplishing what I
need?"
I guess bit-shifting is the conventional way...
Solution (pseudo-code) :
var myShort = 0x0321; //Short means 2 bytes
var answer = (myShort >> 2); //bitshift to right-hand side by 2 places
By nudging everything 2 places towards the right you can see how the two now-empty places at the far left become the XX bits (automatically zero until you change them), and the same nudge has removed the 2 right-hand bits you wanted to ignore - leaving you with the correct 14-bit value.
PS:
Regarding your code... I've not had a chance to test it all, but the logic below seems more appropriate for your GetBitArrayFromRange function:
for (int i = 0; i < length; i++)
{
newBitArray[i] = bitArray.Get(startIndex + i);
}
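For completeness, here is a shift-and-mask version of the whole extraction under this interpretation (the two flag bits being the least significant bits); the variable names are mine:
byte[] byteArray = { 0x03, 0x20 };
// Combine the two big-endian bytes into a single 16-bit value (first byte is the high byte).
int raw = (byteArray[0] << 8) | byteArray[1];
// The upper 14 bits are the data value; the lowest 2 bits are the two bools.
int dataValue = raw >> 2;
bool flag1 = (raw & 2) != 0;
bool flag2 = (raw & 1) != 0;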
8654 -> 8653; 1000 -> 0999; 0100 -> 0099; 0024 -> 0023; 0010 -> 0009; 0007 -> 0006 etc.
I have a string variable of fixed length 4; its characters are always digits. I want to subtract from it while obeying the rule that its length of 4 must be preserved.
What I tried: Convert.ToInt32, .Length operations, etc. In my code I always ran into some sort of error.
I devised that I can do this via 3 steps:
1. Convert the string value to an (int) integer
2. Subtract "1" in that integer to find a new integer
3. Add "4 - length of new integer" times "0" to the beginning.
Anyway, independent of the plan sketched above (since I am a newbie, my line of thought may well stray from the normal, plausible approach), isn't there a way to perform the above via a function or something else in C#?
A number doesn't have a format; its string representation has a format.
The steps you outlined for performing the arithmetic and outputting the result are correct. I would suggest using PadLeft to output the result in the desired format:
int myInt = int.Parse("0100");
myInt = myInt - 1;
string output = myInt.ToString().PadLeft(4, '0');
//Will output 0099
Your steps are almost right; however, there is an easier way to get the leading 0's: use numeric string formatting. With the format string "D4" it will behave exactly as you want.
var oldString = "1000";
var oldNum = Convert.ToInt32(oldString);
var newNum = oldNum - 1;
var newString = newNum.ToString("D4");
Console.WriteLine(newString); //prints "0999"
You could also use the custom formatting string "0000".
Well, I think the others have implemented what you had already implemented yourself - probably because you didn't post your code. But none of the answers addresses your main question...
Your approach is totally fine. To make it reusable, you need to put it into a method. A method can look like this:
private string SubtractOnePreserveLength(string originalNumber)
{
// 1. Convert the string value to an (int) integer
// 2. Subtract "1" in that integer to find a new integer
// 3. Add "4 - length of new integer" times "0" to the beginning.
return <result of step 3>;
}
You can then use the method like this:
string result = SubtractOnePreserveLength("0100");
// result is 0099
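For reference, one possible (untested) way to fill in that body, following the three steps above:
private string SubtractOnePreserveLength(string originalNumber)
{
    // 1. Convert the string value to an integer.
    int value = int.Parse(originalNumber);
    // 2. Subtract 1 to get the new integer.
    value = value - 1;
    // 3. Pad with leading zeros back up to the original length.
    return value.ToString().PadLeft(originalNumber.Length, '0');
}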
There are notations for writing numbers in C# that say whether what you wrote is a float, a double, an integer and so on.
I would like to write a binary number - how do I do that?
Say I have a byte:
byte Number = 10011000 //(8 bits)
How can I write it without having to work out that 10011000 in binary = 152 in decimal?
P.S.: Parsing a string is completely out of the question (I need performance).
As of C# 7.0 you can use the 0b prefix to write binary literals, similar to 0x for hex:
int x = 0b1010000; //binary value of 80
int seventyFive = 0b1001011; //binary value of 75
give it a shot
You can write this:
int binaryNotation = 0b_1001_1000;
In C# 7.0 and later, you can use the underscore '_' as a digit separator in decimal, binary, or hexadecimal notation to improve legibility (placing it directly after the 0b prefix, as above, requires C# 7.2).
There's no way to do it other than parsing a string, I'm afraid:
byte number = (byte) Convert.ToInt32("10011000", 2);
Unfortunately you will be unable to assign constant values like that, of course.
If you find yourself doing that a lot, I guess you could write an extension method on string to make things more readable:
public static class StringExt
{
public static byte AsByte(this string self)
{
return (byte)Convert.ToInt32(self, 2);
}
}
Then the code would look like this:
byte number = "10011000".AsByte();
I'm not sure that would be a good idea though...
Personally, I just use hex initializers, e.g.
byte number = 0x98;
What does the CreateMask() function of BitVector32 do?
I did not get what a Mask is.
I would like to understand the following lines of code. Does CreateMask just set a bit to true?
// Creates and initializes a BitVector32 with all bit flags set to FALSE.
BitVector32 myBV = new BitVector32( 0 );
// Creates masks to isolate each of the first five bit flags.
int myBit1 = BitVector32.CreateMask();
int myBit2 = BitVector32.CreateMask( myBit1 );
int myBit3 = BitVector32.CreateMask( myBit2 );
int myBit4 = BitVector32.CreateMask( myBit3 );
int myBit5 = BitVector32.CreateMask( myBit4 );
// Sets the alternating bits to TRUE.
Console.WriteLine( "Setting alternating bits to TRUE:" );
Console.WriteLine( " Initial: {0}", myBV.ToString() );
myBV[myBit1] = true;
Console.WriteLine( " myBit1 = TRUE: {0}", myBV.ToString() );
myBV[myBit3] = true;
Console.WriteLine( " myBit3 = TRUE: {0}", myBV.ToString() );
myBV[myBit5] = true;
Console.WriteLine( " myBit5 = TRUE: {0}", myBV.ToString() );
What is the practical application of this?
It returns a mask which you can use to retrieve the bits you are interested in more easily.
You might want to check out Wikipedia for what a mask is.
In short: a mask is a pattern, in the form of an array of bits, with 1s for the bits you are interested in and 0s for the others.
If you have something like 01010 and you are interested in getting the last 3 bits, your mask would look like 00111. When you perform a bitwise AND on 01010 and 00111 you get just those last three bits (00010), since AND is only 1 where both bits are set, and none of the bits besides the last three are set in the mask.
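In code (a minimal illustration using C# 7 binary literals):
int value = 0b01010;
int mask  = 0b00111;        // interested only in the last three bits
int last3 = value & mask;   // 0b00010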
An example might be easier to understand:
BitVector32.CreateMask() => 1 (binary 1)
BitVector32.CreateMask(1) => 2 (binary 10)
BitVector32.CreateMask(2) => 4 (binary 100)
BitVector32.CreateMask(4) => 8 (binary 1000)
CreateMask(int) returns the given number multiplied by 2.
NOTE: The first bit is the least significant bit, i.e. the bit farthest to the right.
BitVector32.CreateMask() is a substitution for the left shift operator (<<), which in most cases results in multiplication by 2 (left shift is not circular, so you may start losing digits)
BitVector32 vector = new BitVector32();
int bit1 = BitVector32.CreateMask();
int bit2 = BitVector32.CreateMask(bit1);
int bit3 = 1 << 2;
int bit5 = 1 << 4;
Console.WriteLine(vector.ToString());
vector[bit1 | bit2 | bit3 | bit5] = true;
Console.WriteLine(vector.ToString());
Output:
BitVector32{00000000000000000000000000000000}
BitVector32{00000000000000000000000000010111}
Also, CreateMask does not return the given number multiplied by 2.
CreateMask creates a bit mask based on a specific position in the 32-bit word (that's the parameter that you are passing), which is generally a power of two (2^x) when you are talking about a single bit (flag).
I stumbled upon this question trying to find out what CreateMask does exactly. I did not feel the current answers answered the question for me. After some reading and experimenting, I would like to share my findings:
Basically what Maksymilian says is almost correct: "BitVector32.CreateMask is a substitution for the left shift operator (<<) which in most cases results in multiplication by 2".
Because << is a binary operator and CreateMask only takes one argument, I would like to add that BitVector32.CreateMask(x) is equivalent to x << 1.
Border cases
However, BitVector32.CreateMask(x) is not equivalent to x << 1 for two border cases:
BitVector32.CreateMask(int.MinValue):
An InvalidOperationException will be thrown. int.MinValue corresponds to 10000000000000000000000000000000. This seems a bit odd, especially considering that every other value with a 1 as the leftmost bit (i.e. every other negative number) works fine. In contrast, int.MinValue << 1 would not throw an exception and would just return 0.
When you call BitVector32.CreateMask(0) (or BitVector32.CreateMask()), it will return 1
(i.e. 00000000000000000000000000000000 becomes 00000000000000000000000000000001),
whereas 0 << 1 would just return 0.
Multiplication by 2
CreateMask almost always is equivalent to multiplication by 2. Other than the above two special cases, it differs when the second bit from the left is different from the leftmost bit. An int is signed, so the leftmost bit indicates the sign. In that scenario the sign is flipped. E.g. CreateMask(-1) (or 11111111111111111111111111111111) results in -2 (or 11111111111111111111111111111110), but CreateMask(int.MaxValue) (or 01111111111111111111111111111111) also results in -2.
Anyway, you probably shouldn't use it for this purpose. As I understand, when you use a BitVector32, you really should only consider it a sequence of 32 bits. The fact that they use ints in combination with the BitVector32 is probably just because it's convenient.
When is CreateMask useful?
I honestly don't know. It seems from the documentation and the name "previous" of the argument of the function that they intended it to be used in some kind of sequence: "Use CreateMask() to create the first mask in a series and CreateMask(int) for all subsequent masks.".
However, in the code example, they use it to create the masks for the first 5 bits, to subsequently do some operations on those bits. I cannot imagine they expect you to write 32 calls in a row to CreateMask to be able to do some stuff with the bits near the left.
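For what it's worth, if you did want the full series, a loop is the obvious alternative to 32 chained calls (a small sketch of my own, not from the documentation):
using System.Collections.Specialized;

// Build all 32 masks in sequence instead of writing 32 CreateMask calls by hand.
int[] masks = new int[32];
masks[0] = BitVector32.CreateMask();                  // ...0001
for (int i = 1; i < masks.Length; i++)
{
    masks[i] = BitVector32.CreateMask(masks[i - 1]);  // next bit to the left
}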