I don't know why, but when you run the following code, you never get back the original byte array:
var b = new byte[] {252, 2, 56, 8, 9};
var g = System.Text.Encoding.ASCII.GetChars(b);
var f = System.Text.Encoding.ASCII.GetBytes(g);
If you run this code you will see that b != f. Why?!
Is there any way to convert bytes to chars and then back to bytes and get the same as the original byte array?
A byte value can be 0 to 255, but ASCII is a 7-bit encoding that only covers 0 to 127.
When the byte value is > 127, the result of
System.Text.Encoding.ASCII.GetChars()
is always '?', which has the value 63.
Therefore,
System.Text.Encoding.ASCII.GetBytes()
always returns 63 (the wrong value) for those bytes whose initial value was > 127.
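A quick sketch makes the substitution visible (the commented values are what this prints):
var b = new byte[] { 252, 2, 56, 8, 9 };
var g = System.Text.Encoding.ASCII.GetChars(b);
var f = System.Text.Encoding.ASCII.GetBytes(g);
Console.WriteLine((int)g[0]); // 63, because 252 was replaced by '?'
Console.WriteLine(f[0]);      // 63, not the original 252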
If you need the extended ASCII table (code page 437), you can do the following:
var b = new byte[] { 252, 2, 56, 8, 9 };
// use another encoding
var e = Encoding.GetEncoding("437");
// 252 in that table is ⁿ, and now you have it
var g = e.GetString(b);
// now you can get the byte value 252 back
var f = e.GetBytes(g);
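Note that on .NET Core and .NET 5+, code page 437 is not available by default. A minimal sketch, assuming the System.Text.Encoding.CodePages NuGet package is referenced:
// register the code-page provider once at startup
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
var e = Encoding.GetEncoding(437);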
Similar posts you can read:
How to convert the byte 255 to a signed char in C#
How can I convert extended ascii to a System.String?
Why not use chars?
var b = new byte[] {252, 2, 56, 8, 9};
var g = new char[b.Length];
var f = new byte[g.Length]; // could also be b.Length, doesn't really matter
for (int i = 0; i < b.Length; i++)
{
g[i] = Convert.ToChar(b[i]);
}
for (int i = 0; i < f.Length; i++)
{
f[i] = Convert.ToByte(g[i]);
}
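This round-trips because Convert.ToChar maps each byte value 0–255 straight to the Unicode code points U+0000–U+00FF, and Convert.ToByte reverses that mapping. The same idea as a LINQ sketch (requires System.Linq):
var g = b.Select(Convert.ToChar).ToArray();
var f = g.Select(Convert.ToByte).ToArray();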
The only difference is the first byte, 252. An ASCII char is a 1-byte signed char whose value range is -128 to 127, so your input is actually invalid: a signed char can't be 252.
I'm trying to convert a hex value back to a string, but it's not working.
I have the following value 0x01BB92E7F716F55B144768FCB2EA40187AE6CF6B2E52A64F7331D0539507441F7D770112510D679F0B310116B0D709E049A19467672FFA532A7C30DFB72
The result I hope for would be this:
but executing the function below displays this result instead:
»’ Ç ÷ õ [Ghü²ê # zæÏk.R¦Os1ÐS • D} w Q gŸ 1 ° × àI¡ ”gg / úS * | 0ß ·) = ¤
Any idea how I can extract the information as expected?
public static string Hex2String(string input)
{
    var builder = new StringBuilder();
    for (int i = 0; i < input.Length; i += 2)
    {
        // throws an exception if not properly formatted
        string hexdec = input.Substring(i, 2);
        int number = Int32.Parse(hexdec, NumberStyles.HexNumber);
        char charToAdd = (char)number;
        builder.Append(charToAdd);
    }
    return builder.ToString();
}
The result you're expecting is Base64-encoded. Base64 is a way of taking a byte array and turning it into human-readable characters.
Your code tries to take these raw bytes and cast them to chars, but not all byte values are valid printable characters: some are control characters, some can't be printed, etc.
Instead, let's turn the hex string into a byte array, and then turn that byte array into a base64 string.
string input = "01BB92E7F716F55B144768FCB2EA40187AE6CF6B2E52A64F7331D0539507441F7D770112510D679F0B310116B0D709E049A19467672FFA532A7C30DFB72";
byte[] bytes = new byte[input.Length / 2];
for (int i = 0; i < bytes.Length; i++)
{
bytes[i] = byte.Parse(input.Substring(i * 2, 2), NumberStyles.HexNumber);
}
string result = Convert.ToBase64String(bytes);
This results in:
AbuS5/cW9VsUR2j8supAGHrmz2suUqZPczHQU5UHRB99dwESUQ1nnwsxARaw1wngSaGUZ2cv+lMqfDDftw==
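If you are on .NET 5 or later, the manual hex loop can be replaced with a built-in helper; a shorter sketch:
byte[] bytes = Convert.FromHexString(input);
string result = Convert.ToBase64String(bytes);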
I want to take decimal integers from an array, convert each one to 4-bit binary, and store each bit as an element of an array in C#.
static void Main()
{
    int[] input = { 7, 0, 0, 0, 2, 0, 4, 4, 0 };
    ArrayList Actor = new ArrayList();
    for (int i = 0; i < input.Length; i++)
    {
        int x = input[i];
        string result = Convert.ToString(x, 2);
        int[] bits = result.PadLeft(4, '0').Select(c => int.Parse(c.ToString())).ToArray();
        Actor.Add(bits);
    }
}
The ArrayList Actor ends up holding 9 arrays, each containing the binary digits of one number, but I want each bit to be an individual element of a single array or ArrayList: {0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0}
You can write a method to get the "bits" of a number like this
// note: as an extension method, this must be declared in a static class
private static IEnumerable<int> ToBitSequence(this int num, int size)
{
    // yields the least significant bit first
    while (size > 0)
    {
        yield return num & 1;
        size--;
        num >>= 1;
    }
}
Then you can use it in the following way to get your desired results.
int[] input = { 7, 0, 0, 0, 2, 0, 4, 4, 0 };
var bits = input.Reverse().SelectMany(i => i.ToBitSequence(4)).Reverse();
Console.WriteLine(string.Join(",", bits));
Results in
0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0
The reason for the two Reverse calls is that ToBitSequence returns the least significant bit first; by feeding the numbers in reverse order and then reversing the result, you get the bits from most significant to least significant, starting with the first number in your list.
This is preferable to all the parsing and formatting between char, string, and int that you're currently doing.
However, if you just change Actor to List<int> and do Actor.AddRange(bits), that would also work.
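For completeness, that variant might look like this (a minimal sketch keeping your original parsing; requires System.Linq):
var actor = new List<int>();
foreach (int x in input)
{
    string result = Convert.ToString(x, 2).PadLeft(4, '0');
    actor.AddRange(result.Select(c => c - '0')); // '0'/'1' chars to 0/1 ints
}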
Use BitArray:
BitArray b = new BitArray(new byte[] { (byte)x });
int[] bits = b.Cast<bool>().Select(bit => bit ? 1 : 0).ToArray();
This will give you the bits; then use
bits.Take(4).Reverse()
to get the least significant 4 bits in most-significant-first order for each number.
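Put together for the whole input array, that could look something like this sketch (requires System.Collections and System.Linq):
int[] input = { 7, 0, 0, 0, 2, 0, 4, 4, 0 };
var allBits = new List<int>();
foreach (int x in input)
{
    BitArray b = new BitArray(new byte[] { (byte)x });
    int[] bits = b.Cast<bool>().Select(bit => bit ? 1 : 0).ToArray();
    allBits.AddRange(bits.Take(4).Reverse()); // 4 least significant bits, most significant first
}
Console.WriteLine(string.Join(",", allBits));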
I was wondering, is there a way to convert a BitArray into a byte (as opposed to a byte array)? I'll have 8 bits in the BitArray.
BitArray b = new BitArray(8);
// in this section of my code I manipulate some of the bits in the byte my method was given
byte[] bytes = new byte[1];
b.CopyTo(bytes, 0);
This is what I have so far. It doesn't matter whether I have to change the byte array into a byte or whether I can change the BitArray directly into a byte, though I'd prefer converting the BitArray directly. Any ideas?
You can write an extension method
static byte GetByte(this BitArray array)
{
    // index 0 is treated as the least significant bit
    byte byt = 0;
    for (int i = 7; i >= 0; i--)
        byt = (byte)((byt << 1) | (array[i] ? 1 : 0));
    return byt;
}
You can use it like so
var array = new BitArray(8);
array[0] = true;
array[1] = false;
array[2] = false;
array[3] = true;
Console.WriteLine(array.GetByte()); // prints 9
9 decimal = 1001 in binary
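By the way, your original CopyTo approach also works; you just need to pull the single element out of the array afterwards. A sketch:
BitArray b = new BitArray(8);
b[0] = true;
b[3] = true;
byte[] bytes = new byte[1];
b.CopyTo(bytes, 0); // BitArray supports copying into a byte[]
byte result = bytes[0]; // 9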
I will have the 11 hex digits on the left-hand side as input, and the output should be the text shown next to them. Can anyone tell me how I can get that output by treating those inputs as a byte array?
Example of the first line:
// changed to support the dot
string[] line = "82 44 b4 2e 39 39 39 39 39 35".Split(' ');
byte[] bytes = new byte[line.Length];
for (int i = 0; i < line.Length; i++) {
    int candidate = Convert.ToInt32(line[i], 16);
    if (candidate < 0x20 || candidate > 127)
        candidate = 46; // '.'
    bytes[i] = Convert.ToByte(candidate);
}
string s = System.Text.Encoding.ASCII.GetString(bytes);
If you already have the input as a byte array, then this should give you the string;
string result = Encoding.ASCII.GetString(input); // where "input" is your byte[]
This is far from optimized, but this should work
int code = Convert.ToInt32("82", 16); // this is your first char
code = code >= 0x20 && code < 0x7f ? code : 0x2E; // replace non-printable with '.'
byte b = Convert.ToByte(code);
string txt = System.Text.Encoding.UTF8.GetString(new byte[] { b });
That encoding seems to be ASCII, but some of the characters have their high bit set for some reason. You can clear that bit and use Encoding.ASCII to build a string from your byte array:
string result = Encoding.ASCII.GetString(
yourByteArray.Select(b => (byte) (b & 0x7F)).ToArray());
EDIT: If you can't use LINQ, you can do:
for (int i = 0; i < yourByteArray.Length; ++i) {
yourByteArray[i] &= 0x7F;
}
string result = Encoding.ASCII.GetString(yourByteArray);
To get you started:
// Convert the number expressed in base-16 to an integer.
int value = Convert.ToInt32(hex, 16);
This was taken from MSDN
byte[] bytes = new byte[] { 0xf3, 0x28, 0x48, 0x78, 0x98 };
var output = string.Concat(
    bytes.Select(b => (char)(b >= 0x20 && b < 0x7f ? b : (byte)'.')).ToArray()
);
Assumption: converting a byte[] from little endian to big endian means inverting the order of the bits in each byte of the byte[].
Assuming this is correct, I tried the following to understand this:
byte[] data = new byte[] { 1, 2, 3, 4, 5, 15, 24 };
byte[] inverted = ToBig(data);
var little = new BitArray(data);
var big = new BitArray(inverted);
int i = 1;
foreach (bool b in little)
{
Console.Write(b ? "1" : "0");
if (i == 8)
{
i = 0;
Console.Write(" ");
}
i++;
}
Console.WriteLine();
i = 1;
foreach (bool b in big)
{
Console.Write(b ? "1" : "0");
if (i == 8)
{
i = 0;
Console.Write(" ");
}
i++;
}
Console.WriteLine();
Console.WriteLine(BitConverter.ToString(data));
Console.WriteLine(BitConverter.ToString(ToBig(data)));
foreach (byte b in data)
{
Console.Write("{0} ", b);
}
Console.WriteLine();
foreach (byte b in inverted)
{
Console.Write("{0} ", b);
}
The convert method:
private static byte[] ToBig(byte[] data)
{
    for (int i = 0; i < data.Length; i++)
    {
        var bits = new BitArray(new byte[] { data[i] });
        var invertedBits = new BitArray(bits.Count);
        int x = 0;
        for (int p = bits.Count - 1; p >= 0; p--)
        {
            invertedBits[x] = bits[p];
            x++;
        }
        invertedBits.CopyTo(data, i);
    }
    return data;
}
The output of this little application is different from what I expected:
00000001 00000010 00000011 00000100 00000101 00001111 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
80-40-C0-20-A0-F0-18
01-02-03-04-05-0F-18
1 2 3 4 5 15 24
1 2 3 4 5 15 24
For some reason the data remains the same, unless printed using BitConverter.
What am I not understanding?
Update
New code produces the following output:
10000000 01000000 11000000 00100000 10100000 11110000 00011000
00000001 00000010 00000011 00000100 00000101 00001111 00011000
01-02-03-04-05-0F-18
80-40-C0-20-A0-F0-18
1 2 3 4 5 15 24
128 64 192 32 160 240 24
But as I have been told now, my method is incorrect anyway, because I should invert the bytes and not the bits? The hardware developer I'm working with told me to invert the bits because he cannot read the data.
Context where I'm using this:
The application that will use this does not really work with numbers. I'm supposed to save a stream of bits to a file, where 1 = white and 0 = black. They represent the pixels of a 256x64 bitmap: bytes 0 to 31 represent the first row of pixels, bytes 32 to 63 the second row.
I have code that outputs these bits, but the developer is telling me they are in the wrong order. He says the bytes are fine but the bits are not. So I'm left confused :p
No. Endianness refers to the order of bytes, not bits. Big endian systems store the most-significant byte first and little-endian systems store the least-significant first. The bits within a byte remain in the same order.
Your ToBig() function is returning the original data rather than the bit-swapped data, it seems.
Your method may be correct at this point. There are different meanings of endianness, and it depends on the hardware.
Typically, it's used when converting between computing platforms. Most CPU vendors (now) use the same bit ordering but different byte ordering for different chipsets. This means that if you are passing a 2-byte int from one system to another, you leave the bits alone but swap bytes 1 and 2, i.e.:
int somenumber -> byte[2]: somenumber[high],somenumber[low] ->
byte[2]: somenumber[low],somenumber[high] -> int newNumber
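In C#, that two-byte swap might look like the following sketch (on newer runtimes, System.Buffers.Binary.BinaryPrimitives.ReverseEndianness does the same thing for you):
ushort value = 0x0102;
// move the low byte up and the high byte down
ushort swapped = (ushort)((value << 8) | (value >> 8));
Console.WriteLine(swapped.ToString("X4")); // prints 0201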
However, this isn't always true. Some hardware still uses inverted BIT ordering, so what you have may be correct. You'll need to either trust your hardware developer or look into it further.
I recommend reading up on this on Wikipedia - always a great source of info:
http://en.wikipedia.org/wiki/Endianness
Your ToBig method has a bug.
At the end:
invertedBits.CopyTo(data, i);
}
return data;
You need to change that to:
byte[] newData = new byte[data.Length];
invertedBits.CopyTo(newData, i);
}
return newData;
You're overwriting your input data, so you receive both arrays inverted. The problem is that arrays are reference types, so the method can modify the original data.
As greyfade already said, endianness is not about bit ordering.
The reason your code doesn't do what you expect is that the ToBig method changes the array that you send to it. That means that after calling the method, the array is inverted, and data and inverted are just two references pointing to the same array.
Here's a corrected version of the method.
private static byte[] ToBig(byte[] data) {
byte[] result = new byte[data.Length];
for (int i = 0; i < data.Length; i++) {
var bits = new BitArray(new byte[] { data[i] });
var invertedBits = new BitArray(bits.Count);
int x = 0;
for (int p = bits.Count - 1; p >= 0; p--) {
invertedBits[x] = bits[p];
x++;
}
invertedBits.CopyTo(result, i);
}
return result;
}
Edit:
Here's a method that changes endianness for a byte array:
static byte[] ConvertEndianness(byte[] data, int wordSize) {
if (data.Length % wordSize != 0) throw new ArgumentException("The data length does not divide into an even number of words.");
byte[] result = new byte[data.Length];
int offset = wordSize - 1;
for (int i = 0; i < data.Length; i++) {
result[i + offset] = data[i];
offset -= 2;
if (offset < -wordSize) {
offset += wordSize * 2;
}
}
return result;
}
Example:
byte[] data = { 1,2,3,4,5,6 };
byte[] inverted = ConvertEndianness(data, 2);
Console.WriteLine(BitConverter.ToString(inverted));
Output:
02-01-04-03-06-05
The second parameter is the word size. As endianness is the ordering of bytes in a word, you have to specify how large the words are.
Edit 2:
Here is a more efficient method for reversing the bits:
static byte[] ReverseBits(byte[] data) {
byte[] result = new byte[data.Length];
for (int i = 0; i < data.Length; i++) {
int b = data[i];
int r = 0;
for (int j = 0; j < 8; j++) {
r <<= 1;
r |= b & 1;
b >>= 1;
}
result[i] = (byte)r;
}
return result;
}
One big problem I see is that ToBig changes the contents of the data[] array that is passed to it. You're calling ToBig on an array named data and assigning the result to inverted, but since you didn't create a new array inside ToBig, you modified both. You then proceed to treat data and inverted as different arrays when in reality they are not.