Replacing the 1st digit with the 2nd digit in a byte - C#

What I'm trying to do is loop through the values in a byte array, take the first digit of each value, and swap its place with the second digit, so 35 would become 53 and 24 would become 42. I can almost do this, but I have to convert everything to strings, and that seems like overkill.
I've tried it for a while, but so far I've only figured out that I can convert everything to a string and work on that; it just seems a little clunky.

It sounds like you want to swap the high and low nibble in each byte.
0x35; // High nibble = 3, Low Nibble = 5
To do this, you want to shift the high nibble right 4 bits (to make it the low nibble), and shift the low nibble left 4 bits (to make it the high nibble).
var ar = new byte[] { 0x35, 0x24 };
for (int i = 0; i < ar.Length; i++)
{
    byte b = ar[i];
    b = (byte)((b >> 4) | ((b & 0x0F) << 4)); // cast needed: C# promotes the operands to int
    ar[i] = b;
}
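As a quick sanity check (hypothetical driver code, not from the question), printing the array afterwards should show the swapped nibbles:
foreach (var x in ar)
    Console.WriteLine(x.ToString("X2")); // prints 53, then 42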

byte nmbBase = 16; // or 10 for decimal digits
byte firstDigit = (byte)(number / nmbBase);
byte secondDigit = (byte)(number % nmbBase);
number = (byte)(secondDigit * nmbBase + firstDigit);
This was written on a cellphone, sorry for any mistakes. It should give you an idea of which direction to go.
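A minimal self-contained sketch of that approach (assuming two-digit decimal values; the sample data is hypothetical):
byte[] values = { 35, 24 };
byte nmbBase = 10;
for (int i = 0; i < values.Length; i++)
{
    byte firstDigit = (byte)(values[i] / nmbBase);  // 35 -> 3
    byte secondDigit = (byte)(values[i] % nmbBase); // 35 -> 5
    values[i] = (byte)(secondDigit * nmbBase + firstDigit); // 35 -> 53
}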

Related

C# - What's the fastest way to convert a number to the smallest BitArray

I would like to convert a number to a BitArray, with the resulting BitArray only being as big as it needs to be.
For instance:
BitArray tooBig = new BitArray(new int[] { 9 });
results in a BitArray with a length of 32 bits; however, for the value 9 only 4 bits are required. How can I create BitArrays which are only as long as they need to be? So in this example, 4 bits. Or for the number 260 I'd expect the BitArray to be 9 bits long.
You can figure out the bits one at a time: check whether the least significant bit is 1 or 0, then right-shift until the number is 0. Note this will not work for negative numbers, where the 32nd bit would be 1 to indicate the sign.
public BitArray ToShortestBitArray(int x)
{
    var bits = new List<bool>();
    while (x > 0)
    {
        bits.Add((x & 1) == 1); // record the least significant bit
        x >>= 1;                // then shift it away
    }
    return new BitArray(bits.ToArray());
}
Assuming you are working exclusively with unsigned integers, the number of bits you need is the base-2 logarithm of (number + 1), rounded up.
Counting the bits is probably the easiest solution.
In JavaScript for example...
// count bits needed to store a positive number
const bits = (x, b = 0) => x > 0 ? bits(x >> 1, b + 1) : b;
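The same count in C# (a small sketch mirroring the JavaScript above):
// count bits needed to store a positive number
static int Bits(int x) => x > 0 ? Bits(x >> 1) + 1 : 0;
// Bits(9) == 4, Bits(260) == 9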

What is the conventional way to convert a range of bits to an integer value in c# .net?

I've been programming for many years, but have never needed to use bitwise operations much or deal with data at the bit or even byte level, until now. So, please forgive my lack of knowledge.
I'm having to process streaming message frame data that I'm getting via socket communication. The message frames are a series of hex bytes encoded Big Endian which I read into a byte array called byteArray. Take the following 2 bytes for example:
0x03 0x20
The data I need is represented in the first 14 bits - meaning I need to convert the first 14 bits into an int value. (The last 2 bits represent 2 other bool values). I have coded the following to accomplish this:
if (BitConverter.IsLittleEndian)
{
Array.Reverse(byteArray);
}
BitArray bitArray = GetBitArrayFromRange(new BitArray(byteArray), 0, 14);
int dataValue = GetIntFromBitArray(bitArray);
The dataValue variable ends up with the correct result which is: 800
The two functions I'm calling are here:
private static BitArray GetBitArrayFromRange(BitArray bitArray, int startIndex, int length)
{
    var newBitArray = new BitArray(length);
    for (int i = startIndex; i < length; i++)
    {
        newBitArray[i] = bitArray.Get(i);
    }
    return newBitArray;
}
private static int GetIntFromBitArray(BitArray bitArray)
{
    int[] array = new int[1];
    bitArray.CopyTo(array, 0);
    return array[0];
}
Since I have a lack of experience in this area, my question is: Does this code look correct/reasonable? Or, is there a more preferred/conventional way of accomplishing what I need?
Thanks!
"The dataValue variable ends up with the correct result which is: 800"
Shouldn't that correct result actually be 200?
1) 00000011 00100001 : the integer 0x0321 (now drop the trailing two bits, 01)
2) xx000000 11001000 : the extracted 14 bits (the missing 2 bits, xx, count as zero)
3) 00000000 11001000 : the expected final result of the 14-bit extraction = 200
At present it looks like you have an empty (zero-filled) 16 bits into which you put the 14 bits, but you are putting them in the exact same positions (aligned to the left-hand side instead of the right-hand side):
Original bits   : 00000011 00100001
Slots (16 bit)  : XXXXXXXX XXXXXXXX
Instead of this : XX000000 11001000 // correct way
You have done   : 00000011 001000XX // wrong way
Your right-hand-side XX are zero, so your result is 00000011 00100000, which gives 800; but that's wrong, because it's not the true value of the specific 14 bits you extracted.
"Is there a more preferred/conventional way of accomplishing what I
need?"
I guess bit-shifting is the conventional way...
Solution (pseudo-code):
var myShort = 0x0321; // "short" meaning 2 bytes
var answer = (myShort >> 2); // bit-shift right by 2 places
By nudging everything 2 places to the right, the two now-empty places at the far left become the XX (automatically zero until you change them), and the shift has also removed the (right-side) 2 bits you wanted to ignore, leaving you with the correct 14-bit value.
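Put together as a small self-contained sketch (the byte values and bit layout are from the question; the variable names are mine):
byte[] byteArray = { 0x03, 0x20 };                         // big-endian frame bytes
ushort raw = (ushort)((byteArray[0] << 8) | byteArray[1]); // 0x0320
int dataValue = raw >> 2;        // upper 14 bits -> 200
bool flag1 = (raw & 0b10) != 0;  // bit 1
bool flag0 = (raw & 0b01) != 0;  // bit 0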
PS:
Regarding your code... I've not had a chance to test it all, but the logic below seems more appropriate for your GetBitArrayFromRange function:
for (int i = 0; i < length; i++)
{
    newBitArray[i] = bitArray.Get(startIndex + i);
}

Converting non-digit text to double and then back to string

I have a unique situation where I have to write code on top of an already established platform, so I am trying to figure out a hack to make something work.
The problem is that I have a user-defined string, basically naming a signal. I need to get this into another program, but the only channel available is a double value. Below is what I have tried without managing to get it to work: I convert the string to a byte array, then create a new string by looping over the bytes. Then I convert that string to a double. Then I use BitConverter to get it back to a byte array and try to recover the string.
Not sure if this can even be achieved. Any ideas?
string signal = "R3MEXA";
string newId = "1";
byte[] asciiBytes = System.Text.Encoding.ASCII.GetBytes(signal);
foreach (byte b in asciiBytes)
newId += b.ToString();
double signalInt = Double.Parse(newId);
byte[] bytes = BitConverter.GetBytes(signalInt);
string result = System.Text.Encoding.ASCII.GetString(bytes);
Assuming your string consists of ASCII characters (7-bit):
Convert your string into a bit array, seven bits per character.
Convert this bit array into a string of digits, using 3 bits for each digit (so the digits are 0..7).
Convert this string of digits to a double number.
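A sketch of that scheme (the helper names are mine, and it assumes the signal is short enough that the resulting digit string, read as a decimal number, stays below 2^53 so the double holds it exactly):
// (uses System.Linq)
static double Encode(string signal)
{
    // 7 bits per ASCII character, then one octal digit per 3 bits
    string bits = string.Concat(signal.Select(c => Convert.ToString(c, 2).PadLeft(7, '0')));
    bits = bits.PadRight((bits.Length + 2) / 3 * 3, '0'); // pad to a multiple of 3
    string digits = string.Concat(Enumerable.Range(0, bits.Length / 3)
        .Select(i => Convert.ToInt32(bits.Substring(i * 3, 3), 2)));
    return double.Parse("1" + digits); // leading "1" preserves leading zero digits
}
static string Decode(double value)
{
    string digits = ((long)value).ToString().Substring(1); // drop the leading "1"
    string bits = string.Concat(digits.Select(d => Convert.ToString(d - '0', 2).PadLeft(3, '0')));
    return string.Concat(Enumerable.Range(0, bits.Length / 7)
        .Select(i => (char)Convert.ToInt32(bits.Substring(i * 7, 7), 2)));
}
For the six-character "R3MEXA" this yields 14 octal digits, i.e. a 15-digit integer, which is still exactly representable in a double.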
You initially set newId to "1", which means that when you do the conversion back later, you're not going to get the right output unless you account for that "1" again.
It doesn't work because, when you convert back, you don't know how many digits each byte contributed. So I padded every byte to a length of 3 digits.
string signal = "R3MEXA";
string newId = "1";
byte[] asciiBytes = System.Text.Encoding.ASCII.GetBytes(signal);
foreach (byte b in asciiBytes)
    newId += b.ToString().PadLeft(3, '0'); // pad with zeros if the byte has fewer than 3 digits
double signalInt = Double.Parse(newId);
// Convert it back
List<byte> bytes = new List<byte>(); // we don't know in advance how many bytes will come (at most _signal.Length / 3)
//string _signal = signalInt.ToString("F0"); // maybe there is a better way to turn the double into a string without scientific notation
// This is my workaround to get the integer part out of the double.
// It's not perfect, but I don't know another way at the moment that doesn't lose information:
string _signal = "";
while (signalInt > 1)
{
    int _int = (int)(signalInt % 10);
    _signal += _int.ToString();
    signalInt /= 10;
}
_signal = String.Join("", _signal.Reverse());
for (int i = 1; i < _signal.Length; i += 3)
{
    byte b = Convert.ToByte(_signal.Substring(i, 3)); // turn 3 digits back into one byte
    if (b != 0) // with ToString("F0") it is possible that empty bytes appear at the end
        bytes.Add(b);
}
string result = System.Text.Encoding.ASCII.GetString(bytes.ToArray()); // yields "R3MEX"; the "A" is lost because a double can't hold that much precision
What can be improved?
Not every PadLeft is necessary. Work from back to front: if the third digit of a would-be byte is greater than 2, you know that the byte has only two digits. (Sorry for my English; here is an example.)
Example
194 | 68 | 75 | 13
194687513
Reversed:
315786491
31  // 5 is too big -> the byte is 13
57  // 8 is too big -> the byte is 75
86  // 4 is too big -> the byte is 68
491 // 1 is ok      -> the byte is 194
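A rough sketch of that back-to-front idea (my own code, untested against edge cases; note the heuristic is inherently ambiguous, e.g. when a two-digit byte is followed by a byte whose last digit is 0, 1 or 2):
// digits: the concatenated decimal digits, without the leading "1"
static List<byte> BytesFromDigits(string digits)
{
    var bytes = new List<byte>();
    string rev = new string(digits.Reverse().ToArray());
    int i = 0;
    while (i < rev.Length)
    {
        int take = Math.Min(2, rev.Length - i);
        // take a 3rd digit only if the resulting byte would still be valid (<= 255)
        if (i + 2 < rev.Length && rev[i + 2] <= '2')
        {
            int value = (rev[i + 2] - '0') * 100 + (rev[i + 1] - '0') * 10 + (rev[i] - '0');
            if (value <= 255) take = 3;
        }
        bytes.Add(Convert.ToByte(new string(rev.Substring(i, take).Reverse().ToArray())));
        i += take;
    }
    bytes.Reverse(); // we decoded the last byte first
    return bytes;
}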

How to work with the bits in a byte

I have a single byte which contains two values. Here's the documentation:
The authority byte is split into two fields. The three least significant bits carry the user's authority level (0-5). The five most significant bits carry an override reject threshold. If these bits are set to zero, the system reject threshold is used to determine whether a score for this user is considered an accept or reject. If they are not zero, then the value of these bits multiplied by ten will be the threshold score for this user.
Authority Byte:
7 6 5 4 3 ......... 2 1 0
Reject Threshold .. Authority
I don't have any experience working with bits in C#.
Can someone please help me take such a byte apart and get the two values described above?
I've tried the following code:
BitArray BA = new BitArray(mybyte);
But the length comes back as 29, and I would have expected 8, one for each bit in the byte.
-- Thanks for everyone's quick help. Got it working now! Awesome internet.
Instead of BitArray, you can more easily use the built-in bitwise AND and right-shift operator as follows:
byte authorityByte = ...
int authorityLevel = authorityByte & 7;
int rejectThreshold = authorityByte >> 3;
To get the single byte back, you can use the bitwise OR and left-shift operator:
int authorityLevel = ...
int rejectThreshold = ...
Debug.Assert(authorityLevel >= 0 && authorityLevel <= 7);
Debug.Assert(rejectThreshold >= 0 && rejectThreshold <= 31);
byte authorityByte = (byte)((rejectThreshold << 3) | authorityLevel);
Your use of the BitArray is incorrect. This:
BitArray BA = new BitArray(mybyte);
..will be implicitly converted to an int. When that happens, you're triggering this constructor:
BitArray(int length);
..therefore, it's creating it with a specific length.
Looking at MSDN (http://msdn.microsoft.com/en-us/library/x1xda43a.aspx) you want this:
BitArray BA = new BitArray(new byte[] { myByte });
Length will then be 8 (as expected).
To get a value of the five most significant bits in a byte as an integer, shift the byte to the right by 3 (i.e. by 8-5), and set the three upper bits to zero using bitwise AND operation, like this:
byte orig = ...
int rejThreshold = (orig >> 3) & 0x1F;
>> is the "shift right" operator. It moves bits 7..3 into positions 4..0, dropping the three lower bits.
0x1F is the binary number 00011111, which has the upper three bits set to zero, and the lower five bits set to one. AND-ing with this number zeroes out three upper bits.
This technique can be generalized to get other bit patterns and other integral data types. You shift the bits that you want into the least-significant position, and apply a mask that "cuts out" the number of bits that you want. In some cases, shifting would not be necessary (e.g. when you get the least significant group of bits). In other cases, such as above, the masking would not be necessary, because you get the most significant group of bits in an unsigned type (if the type is signed, ANDing would be required).
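For example, a generalized helper along those lines (hypothetical, for unsigned bit fields inside a byte):
// extract `count` bits starting at bit `start` (0 = least significant)
static int GetBits(byte value, int start, int count)
{
    return (value >> start) & ((1 << count) - 1); // shift down, then mask off everything above
}
// GetBits(authorityByte, 0, 3) -> authority level; GetBits(authorityByte, 3, 5) -> reject threshold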
You're using the wrong constructor (probably).
The one that you're using is probably this one, while you need this one:
var bitArray = new BitArray(new [] { myByte } );

Convert 2 bytes to a number

I have a control that has a byte array in it.
Every now and then there are two bytes that tell me some info about the number of future items in the array.
So as an example I could have:
...
...
Item [4] = 7
Item [5] = 0
...
...
The value of this is clearly 7.
But what about this?
...
...
Item [4] = 0
Item [5] = 7
...
...
Any idea what that equates to (as a normal int)?
I converted to binary and thought it might be 11100000000, which equals 1792. But I don't know if that is how it really works (i.e. whether it uses the whole 8 bits of each byte).
Is there any way to know this without testing?
Note: I am using C# 3.0 and visual studio 2008
BitConverter can easily convert the two bytes in a two-byte integer value:
// assumes byte[] Item = someObject.GetBytes():
short num = BitConverter.ToInt16(Item, 4); // makes a short
// out of Item[4] and Item[5]
A two-byte number has a low and a high byte. The high byte is worth 256 times as much as the low byte:
value = 256 * high + low;
So, for high=0 and low=7, the value is 7. But for high=7 and low=0, the value becomes 1792.
This of course assumes that the number is a simple 16-bit integer. If it's anything fancier, the above won't be enough. Then you need more knowledge about how the number is encoded, in order to decode it.
The order in which the high and low bytes appear is determined by the endianness of the byte stream. In big-endian, you will see high before low (at a lower address), in little-endian it's the other way around.
You say "this value is clearly 7", but it depends entirely on the encoding. If we assume full-width bytes, then in little-endian, yes: 7, 0 is 7. But in big-endian it isn't.
For little-endian, what you want is
int value = buffer[i] | (buffer[i + 1] << 8);
and for big-endian:
int value = (buffer[i] << 8) | buffer[i + 1];
But other encoding schemes are available; for example, some schemes use 7-bit arithmetic, with the 8th bit as a continuation bit. Other schemes (such as UTF-8) put the length information in the first byte (so the first byte has only limited room for data bits) and mark the remaining bytes of the sequence as continuations.
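As an illustration of the continuation-bit idea (a sketch of the common LEB128-style scheme, not anything specified by the question):
// decode an unsigned value stored 7 bits per byte; the 8th bit says "another byte follows"
static int ReadVarint(byte[] buffer, ref int pos)
{
    int result = 0, shift = 0;
    byte b;
    do
    {
        b = buffer[pos++];
        result |= (b & 0x7F) << shift; // 7 data bits per byte
        shift += 7;
    } while ((b & 0x80) != 0);         // continuation bit set?
    return result;
}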
If you simply want to put those two bytes next to each other in binary format and see what that big number is in decimal, then you need code like this:
ushort num;
if (BitConverter.IsLittleEndian)
{
    byte[] tempByteArray = new byte[2] { Item[5], Item[4] };
    num = BitConverter.ToUInt16(tempByteArray, 0);
}
else
{
    num = BitConverter.ToUInt16(Item, 4);
}
If you use short num = BitConverter.ToInt16(Item, 4); as seen in the accepted answer, you are assuming that the first bit of those two bytes is a sign bit (1 = negative, 0 = positive). That answer also assumes you are on a big-endian system. See this for more info on the sign bit.
If those bytes are the "parts" of an integer, it works like that. But beware: the order of the bytes is platform-specific, and it also depends on the length of the integer (16 bit = 2 bytes, 32 bit = 4 bytes, ...).
In case Item[5] is the MSB:
ushort result = BitConverter.ToUInt16(new byte[2] { Item[4], Item[5] }, 0); // on a little-endian machine
or, independent of machine endianness:
int result = 256 * Item[5] + Item[4];
