C# problem with byte[]

I am loading a file into a byte[]. My understanding is that each element of the byte[] should be a byte (8 bits). But when I print each byte, not all of them are 8 bits long (i.e. they don't have a length of 8).
My Code:
FileStream stream = File.OpenRead(@"C:\Image\Img.jpg");
byte[] fileByte = new byte[stream.Length];
stream.Read(fileByte, 0, fileByte.Length);
for (int i = 0; i <= fileByte.Length - 1; i++)
{
    Console.WriteLine(Convert.ToString(fileByte[i], 2));
}
Output:
10001110
11101011
10001100
1000111
10011010
10010011
1001010
11000000
1001001
100100
I think my understanding is wrong here. Can you please let me know where I am going wrong (or point me to some tutorial links)?

Leading 0's don't get printed.

When converting a numeric to a string, you lose any leading zeros. (Note that all of your entries start with "1".) You can use PadLeft to put them back in.
FileStream stream = File.OpenRead(@"C:\Image\Img.jpg");
byte[] fileByte = new byte[stream.Length];
stream.Read(fileByte, 0, fileByte.Length);
for (int i = 0; i <= fileByte.Length - 1; i++)
{
    Console.WriteLine(Convert.ToString(fileByte[i], 2).PadLeft(8, '0'));
}

They all have 8 bits, but the non-significant zeros (the zeros on the left) are not printed.

It is simply that the leading zeros are not included...

Are those bytes just missing their leading zeros? You picked a slightly awkward example because we do not know the decimal values you are displaying (OK, maybe someone who knows the header structure of a .jpg file does). I'm willing to bet the leading zeros are simply not displayed in the binary equivalents.
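For instance (a minimal illustration using the value 71, which appears as 1000111 in the output above):
Console.WriteLine(Convert.ToString(71, 2));                 // prints "1000111" - only 7 characters
Console.WriteLine(Convert.ToString(71, 2).PadLeft(8, '0')); // prints "01000111" - all 8 bits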

Related

Most efficient way to save binary code to file

I have a string that only contains 1 and 0 and I need to save this to a .txt file.
I also want it to be as small as possible. Since I have binary code, I can turn it into pretty much anything. Saving it as-is is not an option, since apparently every character takes up a whole byte, even if it's just a 1 or a 0.
I thought about turning my string into an array of bytes, but trying to convert "11111111" to a Byte gave me a System.OverflowException.
My next thought was using an ASCII code page or something, but I don't know how reliable that would be. Alternatively, I could turn every 8-bit piece of my string into the corresponding number: 8 characters would turn into at most 3 (255), which seems pretty nice to me. And since I know the highest individual number will be 255, I don't even need any delimiter for decoding.
But I'm sure there's a better way.
So:
What exactly is the best/most efficient way to store a string that only contains 1 and 0?
You could represent all your data as 64 bit integers and then write them to a binary file:
// The string we are working with.
string str = @"1010101010010100010101101";
// The number of bits in a 64 bit integer!
int size = 64;
// Pad the end of the string with zeros so the length of the string is divisible by 64.
if (str.Length % size != 0)
    str += new string('0', size - str.Length % size);
// Convert each 64 character segment into a 64 bit integer.
long[] binary = new long[str.Length / size]
.Select((x, idx) => Convert.ToInt64(str.Substring(idx * size, size), 2)).ToArray();
// Copy the result to a byte array.
byte[] bytes = new byte[binary.Length * sizeof(long)];
Buffer.BlockCopy(binary, 0, bytes, 0, bytes.Length);
// Write the result to file.
File.WriteAllBytes("MyFile.bin", bytes);
EDIT:
If you're only writing 64 bits then it's a one-liner:
File.WriteAllBytes("MyFile.bin", BitConverter.GetBytes(Convert.ToUInt64(str, 2)));
I would suggest using BinaryWriter. Like this:
BinaryWriter writer = new BinaryWriter(File.Open(fileName, FileMode.Create));
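To illustrate, here is a rough sketch of how the bit string could be packed into bytes and written with a BinaryWriter (assuming the string's length is a multiple of 8; the input string and file name here are placeholders):
string bits = "1111111100000001"; // placeholder input, length assumed to be a multiple of 8
using (BinaryWriter writer = new BinaryWriter(File.Open("MyFile.bin", FileMode.Create)))
{
    for (int i = 0; i < bits.Length; i += 8)
    {
        // Convert.ToByte with base 2 parses "11111111" as 255 instead of throwing an OverflowException
        writer.Write(Convert.ToByte(bits.Substring(i, 8), 2));
    }
}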

String.Format of a ByteArray converts 0x00 to 0, how can I retain 00

I am trying to print each byte in an array (byte array) using a for loop. However, since I am using String.Format, it converts the 0x00 in the byte array to a single 0. How can I print it as 00?
Trace.Write("\n--->");
for (int K = 1; K <= j; K++)
    Debug.Write(string.Format("{0:X}", FrameByteArray[K]));
I know it should be simple, but I have a hard time figuring it out.
Please advise.
Just use {0:X2} instead - this will ensure the number will always have at least two characters.
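For instance (a minimal illustration, assuming a byte value of 0x0A):
Console.WriteLine(string.Format("{0:X}", (byte)0x0A));  // prints "A"
Console.WriteLine(string.Format("{0:X2}", (byte)0x0A)); // prints "0A"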

Limit UTF-8 encoded bytes length from string

I need to limit the length of a byte[] produced with UTF-8 encoding, e.g. the byte[] length must be less than or equal to 1000. First I wrote the following code:
int maxValue = 1000;
if (text.Length > maxValue)
text = text.Substring(0, maxValue);
var textInBytes = Encoding.UTF8.GetBytes(text);
This works well if the string only uses ASCII characters, because each character is 1 byte. But once characters go beyond that, each can take 2, 3, or even 4 bytes. That would be a problem with the above code, so to fix it I wrote this:
List<byte> textInBytesList = new List<byte>();
char[] textInChars = text.ToCharArray();
for (int a = 0; a < textInChars.Length; a++)
{
    byte[] valueInBytes = Encoding.UTF8.GetBytes(textInChars, a, 1);
    if ((textInBytesList.Count + valueInBytes.Length) > maxValue)
        break;
    textInBytesList.AddRange(valueInBytes);
}
I haven't tested the code, but I'm sure it will work as I want. However, I don't like the way it is done. Is there a better way to do this? Something I'm missing or not aware of?
Thank you.
My first posting on Stack Overflow, so be gentle! This method should take care of things pretty quickly for you..
public static byte[] GetBytes(string text, int maxArraySize, Encoding encoding) {
    if (string.IsNullOrEmpty(text)) return null;
    // No more than maxArraySize characters can possibly fit (every character is at least one byte).
    int tail = Math.Min(text.Length, maxArraySize);
    int size = encoding.GetByteCount(text.Substring(0, tail));
    // Drop characters from the end until the encoded size fits.
    while (tail >= 0 && size > maxArraySize) {
        size -= encoding.GetByteCount(text.Substring(tail - 1, 1));
        --tail;
    }
    return encoding.GetBytes(text.Substring(0, tail));
}
It's similar to what you're doing, but without the added overhead of the List or having to count from the beginning of the string every time. I start from the other end of the string, and the assumption is, of course, that all characters must be at least one byte. So there's no sense in starting to iterate down through the string any farther in than maxArraySize (or the total length of the string).
Then you can call the method like so..
byte[] bytes = GetBytes(text, 1000, Encoding.UTF8);

Difference between using Encoding.GetBytes or cast to byte [duplicate]

(This question already has answers at "Encoding used in cast from char to byte".)
I was wondering if there's any difference between converting characters to bytes with Encoding.UTF8.GetBytes and manually casting each character with (byte) to convert it to a byte.
For example, look at the following code:
public static byte[] ConvertStringToByteArray(string str)
{
    int i, n;
    n = str.Length;
    byte[] x = new byte[n];
    for (i = 0; i < n; i++)
    {
        x[i] = (byte)str[i];
    }
    return x;
}
var arrBytes = ConvertStringToByteArray("Hello world");
or
var arrBytes = Encoding.UTF8.GetBytes("Hello world");
I liked the question, so I executed your code on ANSI text in Hebrew that I read from a text file.
The text was "שועל"
string text = System.IO.File.ReadAllText(@"d:\test.txt");
var arrBytes = ConvertStringToByteArray(text);
var arrBytes1 = Encoding.UTF8.GetBytes(text);
The results differed: there is a difference as soon as the code point of any of your characters exceeds the 0-255 range of a byte.
Your ConvertStringToByteArray method is incorrect.
You are casting each char to a byte. A char's numerical value is its UTF-16 code unit, which can be larger than a byte, so the cast simply throws away the high-order bits (or raises an overflow error in a checked context).
Your example works because you've used characters with code points within the byte range.
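A minimal sketch of that difference, assuming the Hebrew letter ש (U+05E9) used in the answer above:
char c = 'ש';                                             // UTF-16 value 0x05E9
byte castResult = (byte)c;                                // 0xE9 - the high-order bits are silently dropped
byte[] utf8Result = Encoding.UTF8.GetBytes(c.ToString()); // { 0xD7, 0xA9 } - the actual UTF-8 encoding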
When the characters you want to convert fall outside that range, you can't use the first approach; you have to explicitly choose an encoding standard.
Yes, there is a difference. All .NET strings are stored as UTF-16 LE.
Use this code to make a test string, so you get high-order bytes in your chars, i.e. chars that have a different representation in UTF-8 and UTF-16.
var testString = new string(
    Enumerable.Range(char.MinValue, char.MaxValue - char.MinValue)
        .Select(Convert.ToChar)
        .ToArray());
This makes a string with every possible char value. If you do
ConvertStringToByteArray(testString).SequenceEqual(
Encoding.UTF8.GetBytes(testString));
It will return false, demonstrating that the results differ.

Creating a .wav File in C#

As an excuse to learn C#, I have been trying to code a simple project: creating audio files. To start, I want to make sure that I can write files that meet the WAVE format. I have researched the format online (for example, here), but whenever I try to play back a file, it won't open correctly. Here is my code. Is something missing or incorrect?
uint numsamples = 44100;
ushort numchannels = 1;
ushort samplelength = 1; // in bytes
uint samplerate = 22050;
FileStream f = new FileStream("a.wav", FileMode.Create);
BinaryWriter wr = new BinaryWriter(f);
wr.Write("RIFF");
wr.Write(36 + numsamples * numchannels * samplelength);
wr.Write("WAVEfmt ");
wr.Write(16);
wr.Write((ushort)1);
wr.Write(numchannels);
wr.Write(samplerate);
wr.Write(samplerate * samplelength * numchannels);
wr.Write(samplelength * numchannels);
wr.Write((ushort)(8 * samplelength));
wr.Write("data");
wr.Write(numsamples * samplelength);
// for now, just a square wave
Waveform a = new Waveform(440, 50);
double t = 0.0;
for (int i = 0; i < numsamples; i++, t += 1.0 / samplerate)
{
    wr.Write((byte)((a.sample(t) + (samplelength == 1 ? 128 : 0)) & 0xff));
}
The major problem is:
BinaryWriter.Write(string) writes a string that is prefixed with its length so that BinaryReader can read it back. It is not intended to be used the way you use it here. You need to write the bytes directly instead of using BinaryWriter.Write(string).
What you should do:
Convert the string into bytes and then write the bytes directly.
byte[] data = System.Text.Encoding.ASCII.GetBytes("RIFF");
binaryWriter.Write(data);
or make it one line:
binaryWriter.Write(System.Text.Encoding.ASCII.GetBytes("RIFF"));
There may also be other problems, e.g. the integers you are writing may not be of the size the format requires. You should check them carefully.
As for endianness, the page you linked states that the data are little-endian, and BinaryWriter writes little-endian, so that should not be a problem.
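For example, the BinaryWriter overload that gets picked decides how many bytes land in the file (a small illustration using the wr writer from the question; the values are arbitrary):
wr.Write(16);         // int    -> writes 4 bytes
wr.Write((ushort)16); // ushort -> writes 2 bytes
wr.Write((byte)16);   // byte   -> writes 1 byte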
The simplest way possible is to change:
wr.Write("RIFF");
to:
wr.Write("RIFF".ToArray());
When you write a string to a binary file, BinaryWriter includes the length of the string so that it can be deserialized back into a string later. In this case you just want the four characters to be written as four bytes, and converting the string to a char array does just that.
I lack the proper WAV data, but try replacing the part of your code where you generate the header with this code (replace appropriately):
wr.Write(Encoding.ASCII.GetBytes("RIFF"));
wr.Write((int)(38 + numsamples * numchannels * samplelength)); // RIFF chunk size
wr.Write(Encoding.ASCII.GetBytes("WAVE"));
wr.Write(Encoding.ASCII.GetBytes("fmt "));
wr.Write(18);                                                  // fmt chunk size
wr.Write((short)1);                                            // Encoding (PCM)
wr.Write((short)numchannels);                                  // Channels
wr.Write((int)(samplerate));                                   // Sample rate
wr.Write((int)(samplerate * samplelength * numchannels));      // Average bytes per second
wr.Write((short)(samplelength * numchannels));                 // Block align
wr.Write((short)(8 * samplelength));                           // Bits per sample
wr.Write((short)0);                                            // Extra size
wr.Write(Encoding.ASCII.GetBytes("data"));
wr.Write((int)(numsamples * numchannels * samplelength));      // data chunk size
Alvin Wong's answer works perfectly. Just wanted to add another suggestion, although it's a few more lines:
binaryWriter.Write('R');
binaryWriter.Write('I');
binaryWriter.Write('F');
binaryWriter.Write('F');
