How can I write all bits of a file using C#?
For example, writing 0 to all bits.
Please provide me with a sample.
I'm not sure why you'd want to do this, but this will overwrite a file with data that is the same length but contains only zero-valued bytes:
File.WriteAllBytes(filePath, new byte[new FileInfo(filePath).Length]);
Definitely has the foul stench of homework to it.
Hint: think about why someone might want to do this. Just deleting the file and replacing it with a file of 0s of the correct length might not be what you're after.
Have a look at System.IO.FileInfo; you'll need to open a writable stream for the file you're interested in and then write as many bytes (with value 0, in your example) as the file already contains (which you can ascertain via FileInfo.Length). Be sure to dispose of the stream once you're done with it; using constructs are useful for this purpose.
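A minimal sketch of that approach (assuming filePath points at an existing file):

long length = new FileInfo(filePath).Length;
using (var stream = new FileStream(filePath, FileMode.Open, FileAccess.Write))
{
    byte[] zeros = new byte[4096];               // a reusable buffer of zero-valued bytes
    long remaining = length;
    while (remaining > 0)
    {
        int chunk = (int)Math.Min(zeros.Length, remaining);
        stream.Write(zeros, 0, chunk);           // overwrite the next chunk with zeros
        remaining -= chunk;
    }
}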
Consider using the BinaryWriter available in the .NET framework
using (BinaryWriter binWriter =
    new BinaryWriter(File.Open(fileName, FileMode.Create)))
{
    binWriter.Write("Hello world");
}
When you say write all bits to a file, I'll assume you mean bits as in nybble, bit, byte. That's just writing integers to a file. As far as I know you can't have a 4-bit file, so the smallest denomination will be a byte.
You probably don't want to be responsible for serializing yourself, so your easiest option would be to use the BinaryReader and BinaryWriter classes, and then manipulate the bits inside your C#.
Note that BinaryWriter's Write(int) overload writes a 4-byte integer (there is also a Write(byte) overload if you only need single bytes). For example
writer.Write( 1 );         // 01
writer.Write( 10 );        // 0a
writer.Write( 100 );       // 64
writer.Write( 1000 );      // 3e8
writer.Write( 10000 );     // 2710
writer.Write( 123456789 ); // 75bcd15
is written to file as
01 00 00 00 0a 00 00 00 64 00 00 00 e8 03 00 00 10 27 00 00 15 cd 5b 07
Read the data back a byte at a time, then test against powers of two (bit masks) to get each of the bits in that byte.
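A minimal sketch of that bit test (assuming fileName is the file written above):

using (var reader = new BinaryReader(File.OpenRead(fileName)))
{
    byte b = reader.ReadByte();
    for (int i = 7; i >= 0; i--)
    {
        int bit = (b >> i) & 1;   // masking with a power of two isolates bit i
        Console.Write(bit);
    }
}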
Related
I'm communicating to a device that returns uuencoded data:
ASCII: EZQAEgETAhMQIBwIAUkAAABj
HEX: 45-5A-51-41-45-67-45-54-41-68-4D-51-49-42-77-49-41-55-6B-41-41-41-42-6A
The documentation for this device states the above is uuencoded, but I can't figure out how to decode it. The final result won't be a human-readable string, but the first byte reveals the number of bytes of the product data that follows. (Which would be 23 or 24?)
I've tried using Crypt2 to decode it; it doesn't seem to match the 644, 666, or 744 modes.
I've tried to hand-write it out following the Wikipedia article: https://en.wikipedia.org/wiki/Uuencoding#Formatting_mechanism
It doesn't make sense! How do I decode this uuencoded data?
I agree with @canton7 that it looks like it's base64 encoded. You can decode it like this:
byte[] decoded = Convert.FromBase64String("EZQAEgETAhMQIBwIAUkAAABj");
and if you want, you can print the hex values like this
Console.WriteLine(BitConverter.ToString(decoded));
which prints
11-94-00-12-01-13-02-13-10-20-1C-08-01-49-00-00-00-63
As @HansKilian says in the comments, this is not uuencoded.
If you base64-decode it you get (in hex):
11 94 00 12 01 13 02 13 10 20 1c 08 01 49 00 00 00 63
The first byte, 0x11 or 17 in decimal, is the same as the number of bytes following it, which matches:
The final result won't be a human readable string but the first byte reveals the number of bytes for the following product data.
(@HansKilian made the original call that it was base64-encoded. This answer just provides confirmation by looking at the first decoded byte, so please accept his answer.)
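For completeness, that check can be made in code as well (reusing the Convert.FromBase64String call from the answer above):

byte[] decoded = Convert.FromBase64String("EZQAEgETAhMQIBwIAUkAAABj");
Console.WriteLine(decoded.Length);   // 18 bytes in total
Console.WriteLine(decoded[0]);       // 17 (0x11): the count byte, matching the 17 bytes that follow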
I am currently implementing code using AES CBC in Microsoft .NET's System.Security.Cryptography and noticed something odd: there are 16 bytes at the end of each ciphertext which don't seem to belong there.
Trying to find more information about what those 16 bytes might represent, or what data they might hold, I've searched all over the internet for related information, without any success.
Trying to figure it out nevertheless, I even ran some experiments setting the key, the IV and the plain text to 16 0x00s. By comparing the ciphertexts with another platform's AES CBC implementation, I verified that the first bytes are valid AES CBC ciphertext. The only difference is that NET seems to add 16 bytes at the end of the ciphertext.
Now, I don't believe it is padding, because:
the ciphertext for an additional block of sixteen 0x00 bytes is different,
as are the results of manually entering the padding for PKCS7, ANSI X.923, and ISO 7816-4.
Further research leads me to conclude that the padding modes in M$ .NET appear to ignore the "PaddingMode" setting:
0000000000000000000000000000000000000000000000000000000000000000 Plain Text
66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC5C047616756FDC1C32E0DF6E8C59BB2A None
66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC5C047616756FDC1C32E0DF6E8C59BB2A Zeros
66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC5C047616756FDC1C32E0DF6E8C59BB2A PKCS7
66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC5C047616756FDC1C32E0DF6E8C59BB2A ANSIX923
66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC5C047616756FDC1C32E0DF6E8C59BB2A ISO7816
Besides that, I’m optimistically assuming those bytes aren’t merely something only Microsoft knows about. Is there some paper, reference, or documentation I failed to find, which might explain those last 16 bytes? What am I missing?
If you decrypt without padding, you will see this:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
That is how PKCS#5/PKCS#7 padding looks. Here a whole block of padding was added, 16 bytes, and thus each padding byte has the value 16 (0x10).
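A small sketch that reproduces this view, assuming the all-zero 16-byte key and IV from the experiment in the question (the types live in System.Security.Cryptography; Convert.FromHexString requires .NET 5+):

byte[] cipher = Convert.FromHexString(
    "66E94BD4EF8A2C3B884CFA59CA342B2EF795BD4A52E29ED713D313FA20E98DBC" +
    "5C047616756FDC1C32E0DF6E8C59BB2A");
using (Aes aes = Aes.Create())
{
    aes.Mode = CipherMode.CBC;
    aes.Padding = PaddingMode.None;   // leave the padding bytes visible in the output
    aes.Key = new byte[16];           // all-zero key, as in the question's experiment
    aes.IV = new byte[16];            // all-zero IV
    using (ICryptoTransform decryptor = aes.CreateDecryptor())
    {
        byte[] plain = decryptor.TransformFinalBlock(cipher, 0, cipher.Length);
        Console.WriteLine(BitConverter.ToString(plain));
        // ends in ...10-10-10-10-10-10-10-10-10-10-10-10-10-10-10-10
    }
}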
The extra 16 bytes are PKCS#7 padding. The padding is added prior to encryption so it is also encrypted.
If you encrypt with PKCS#7 padding, the result is exactly the result you get. The default for AES CBC in Microsoft .NET's System.Security.Cryptography is PKCS#7 (née PKCS#5) padding.
Since the data to be encrypted is an exact multiple of the block size an entire block of padding is added.
See PKCS#7 padding.
In any online encryption tool you can see the same thing: the trailing 10101010101010101010101010101010 is the padding added to the data to be encrypted.
How to convert a simple string to a null-terminated one?
Example:
Example string: "Test message"
Here are the bytes:
54 65 73 74 20 6D 65 73 73 61 67 65
I need a string whose bytes look like this:
54 00 65 00 73 00 74 00 20 00 6D 00 65 00 73 00 73 00 61 00 67 00 65 00 00
I could use a loop, but that would be ugly code. How can I do this conversion with built-in methods?
It looks like you want a null-terminated Unicode string. If the string is stored in a variable str, this should work:
var bytes = System.Text.Encoding.Unicode.GetBytes(str + "\0");
Note that the resulting array will have three zero bytes at the end. This is because Unicode represents characters using two bytes: the first zero is the second half of the last character in the original string, and the next two are how Unicode encodes the null character '\0'. (In other words, the result ends with one more zero byte than what you originally specified, but this is probably what you actually want.)
A little background on C# strings is a good place to start.
The internal structure of a C# string is different from a C string:
a) It is Unicode, as is a 'char'
b) It is not null-terminated
c) It includes many utility functions that in C/C++ you would require a library for
How does it get away with no null termination? Simple! Internally, a C# string manages a char array, and C# arrays know their own length; they are not bare pointers as in C/C++. The null termination in C/C++ is required so that string utility functions like strcmp() are able to detect the end of the string in memory.
The null character does exist in C#.
string content = "This is a message!" + '\0';
This will give you a string that ends with a null terminator. Importantly, the null character is invisible and will not show up in most output. It will show in the debugger windows, and it will be present when you convert the string to a byte array (for saving to disk and other I/O operations), but if you do Console.WriteLine(content) it will not be visible.
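A quick way to see this for yourself (a small sketch; Encoding.Unicode is in System.Text):

string content = "This is a message!" + '\0';
Console.WriteLine(content.Length);                      // 19: the '\0' counts as a character
Console.WriteLine(content);                             // prints with no visible terminator
byte[] raw = System.Text.Encoding.Unicode.GetBytes(content);
Console.WriteLine(raw.Length);                          // 38: two bytes per char, '\0' included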
You should understand why you want that null terminator, and why you want to avoid using a loop construct to get what you are after. A null-terminated string is fairly useless in C# unless you end up converting it to a byte array; generally you will only do that if you want to send your string to a native method, over a network, or to a USB device.
It is also important to be aware of how you are getting your bytes. In C/C++, a char is stored as 1 byte (8 bits) and the encoding is typically ANSI. In C#, a char is Unicode (UTF-16), so it takes two bytes (16 bits). Jon Skeet's answer shows you how to get the bytes in that encoding.
Tongue in cheek but potentially useful answer.
If you are after hex output on your screen, as you have shown, you want to follow these steps:
Convert the string (with the null character '\0' on the end) to a byte array
Convert the bytes to their string representations encoded in hex
Interleave with spaces
Print to screen
Try this:
using System;
using System.Linq;
using System.Text;

namespace stringlulz
{
    class Program
    {
        static void Main(string[] args)
        {
            string original = "Test message";
            byte[] bytes = System.Text.Encoding.Unicode.GetBytes(original + '\0');
            // Append each byte as a two-digit hex pair plus a space, then trim the trailing space.
            var output = bytes.Aggregate(
                new StringBuilder(),
                (s, p) => s.Append(p.ToString("x2") + ' '),
                s => { s.Length--; return s; });
            Console.WriteLine(output.ToString().ToUpper());
            Console.ReadLine();
        }
    }
}
The output is:
54 00 65 00 73 00 74 00 20 00 6D 00 65 00 73 00 73 00 61 00 67 00 65 00 00 00
Here's a tested C# sample of an XML command, null-terminated, that works great:
strCmd = @"<?xml version=""1.0"" encoding=""utf-8""?><Command name=""SerialNumber"" />";
sendB = System.Text.Encoding.UTF8.GetBytes(strCmd + "\0");
sportin.Send = sendB;
So here's my issue. I have a binary file that I want to edit. I can of course use a hex editor to edit it, but I need to make a program to edit this particular file. Say that I know a certain hex value I want to edit, and I know its address, etc. Let's say that it's a 16-bit binary, the address is 00000000, it's in row 04, and it has a value of 02. How could I create a program that would change the value of that hex value, and only that one, with the click of a button?
I've found resources that talk about similar things, but I can't for the life of me find help with the exact issue.
Any help would be appreciated, and please, don't just tell me the answer if there is one but try and explain a bit.
I think this is best explained with a specific example. Here are the first 32 bytes of an executable file as shown in Visual Studio's hex editor:
00000000 4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00
00000010 B8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
Now a file is really just a linear sequence of bytes. The rows that you see in a hex editor are just there to make things easier to read. When you want to manipulate the bytes in a file using code, you need to identify the bytes by their 0-based positions. In the above example, the positions of the non-zero bytes are as follows:
Position Value
-------- ------
0 0x4D
1 0x5A
2 0x90
4 0x03
8 0x04
12 0xFF
13 0xFF
16 0xB8
24 0x40
In the hex editor representation shown above, the numbers on the left represent the positions of the first byte in the corresponding line. The editor is showing 16 bytes per line, so they increment by 16 (0x10) at each line.
If you simply want to take one of the bytes in the file and change its value, the most efficient approach that I see would be to open the file using a FileStream, seek to the appropriate position, and overwrite the byte. For example, the following will change the 0x40 at position 24 to 0x04:
using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
{
    stream.Position = 24;
    stream.WriteByte(0x04);
}
Well, the first thing would probably be to understand the conversions. Hex to decimal probably isn't as important (unless of course you need to convert from a decimal value first, but that's a simple conversion formula), but hex to binary will be important, seeing as each hex character (0-9, A-F) corresponds to a specific 4-bit binary pattern.
After understanding that, the next step is to figure out exactly what you are searching for, make the proper conversion, and replace that exact byte sequence. If the buffer wouldn't be too large, I would recommend taking the entire file contents and doing the replacement there, to avoid overwriting a duplicate binary sequence elsewhere; see the sketch below.
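A rough sketch of that search-and-replace idea, assuming the file fits comfortably in memory; the pattern and the replacement byte here are hypothetical:

byte[] data = File.ReadAllBytes(path);         // the entire file as one byte array
byte[] pattern = { 0x40, 0x00, 0x00, 0x00 };   // hypothetical sequence to locate
for (int i = 0; i <= data.Length - pattern.Length; i++)
{
    bool match = true;
    for (int j = 0; j < pattern.Length; j++)
    {
        if (data[i + j] != pattern[j]) { match = false; break; }
    }
    if (match)
    {
        data[i] = 0x04;   // patch the first byte of the match
        break;            // stop after the first occurrence to avoid duplicates
    }
}
File.WriteAllBytes(path, data);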
Hope that helps!
I want to create a very simple piece of software in C# .NET that I can pass a folder's path to and have it detect all MP3 files with a sampling frequency below a given threshold. Any pointers on how I would do this?
You have to read the mp3 files, and to do that you have to find the specifications for them.
Generally, an mp3 file is wrapped in an ID3 tag, so you have to read the tag, find its length, and skip it. Let's take ID3v2.3 as an example:
ID3v2/file identifier "ID3"
ID3v2 version $03 00
ID3v2 flags %abc00000
ID3v2 size 4 * %0xxxxxxx
so bytes 6, 7, 8, 9 store the tag length in big-endian, synchsafe form (7 bits per byte). Here is a sample from the start of one file:
0 1 2 3 4 5 6 7 8 9 A B C D E F
49 44 33 03 00 00 00 00 07 76 54 43 4f 4e 00 00
07 76 is the size. You need to shift the first of those bytes left by 7 bits, so the actual size is 0x3F6. Then add the 10-byte (0xA) header to get the offset: 0x400. This is the address of the start of the mp3 frame header.
Then you take the description of the mp3 frame header:
The bits are AAAAAAAA AAABBCCD EEEEFFGH IIJJKLMM. We need FF, the sampling rate index, and we convert it to an actual frequency:
bits MPEG1 MPEG2 MPEG2.5
00 44100 22050 11025
01 48000 24000 12000
10 32000 16000 8000
11 reserv. reserv. reserv.
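Putting the two steps together, here is a rough sketch (the method name GetSampleRate is mine; it assumes an MPEG-1 file with an optional ID3v2 tag, handles no edge cases, and only reads the MPEG-1 column of the table above):

static int GetSampleRate(string path)
{
    using (FileStream fs = File.OpenRead(path))
    {
        byte[] header = new byte[10];
        fs.Read(header, 0, header.Length);
        if (header[0] == 'I' && header[1] == 'D' && header[2] == '3')
        {
            // ID3v2 size: bytes 6..9, big-endian synchsafe (7 bits per byte)
            int tagSize = (header[6] << 21) | (header[7] << 14)
                        | (header[8] << 7) | header[9];
            fs.Position = 10 + tagSize;   // skip the tag; this lands on the frame header
        }
        else
        {
            fs.Position = 0;              // no ID3v2 tag, frame header starts at 0
        }
        byte[] frame = new byte[4];
        fs.Read(frame, 0, frame.Length);
        int freqIndex = (frame[2] >> 2) & 0x3;         // the FF bits of EEEEFFGH
        int[] mpeg1Rates = { 44100, 48000, 32000, 0 }; // MPEG-1 column; 0 = reserved
        return mpeg1Rates[freqIndex];
    }
}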
You can use UltraID3Lib to get mp3 metadata (bitrate, frequency)
Check the value of the frequency bits in each file. There is more info about the mp3 format available online.