Can someone explain how I can convert a float (Vector3.x) to a byte array with C# and decode it with Node.js?
I read on the internet that Vector3.x is a System.Single data type and uses 4 bytes (32 bits). I use BitConverter to convert it to a byte array. In Node.js I use readFloatBE().
I don't know what I'm doing wrong, but I constantly get a bad result in Node.js with console.log().
Unity C#:
public static int FloatToBit(int offset, ref byte[] data, Single number)
{
    byte[] byteArray = System.BitConverter.GetBytes(number);
    for (int i = 0; i < 4; i++)
    {
        data[offset + i] = byteArray[i];
    }
    return 4;
}
Node.js:
readFloat: function (offset, data) {
    var b = new Buffer(4);
    for (var i = 0; i < 4; i++) {
        b[i] = data[offset + i];
    }
    return data.readFloatLE(b, 0);
},
If I send -2.5, the Unity output is: 0 0 32 192; with -1 the Unity output is: 0 0 128 191.
The Node.js output with readFloatLE is: 3.60133705331478e-43
Here's a working set of data from front to back.
C#:
Single fl = 2.5F;
var bytes = System.BitConverter.GetBytes(fl);
var str = BitConverter.ToString(bytes); // 00-00-20-40
Node.js:
let buffer = Buffer.from([ 0x00, 0x00, 0x20, 0x40 ]);
let float = buffer.readFloatLE(); // 2.5
Note especially the method I used to create the buffer in Node.js (also tested and verified with -1, but I left that code out for brevity).
Thanks for the replies.
I posted the same question over here: Unity Questions
Somebody gave me the answer to use:
readFloat: function (offset, data) {
    return data.readFloatLE(offset);
},
instead of creating a new buffer.
The parameter is a buffer. This works for me. I still didn't fully understand why my example was not working; most likely it's because readFloatLE expects an offset as its argument, so data.readFloatLE(b, 0) passes the new buffer b where an offset belongs and the copied bytes in b are never read - b.readFloatLE(0) would have worked.
Related
This could be a long one. I have a binary file that contains some information.
What I want to do:
1. Read the file (binary) via OpenFileDialog.
2. Search for specific bytes in this file.
3. Get the offset of that byte pattern, then check the byte value at offset+2.
4. Basic if (if the value at offset+2 is 0x08, then do this; if not, do something else).
5. Search for the offset of another byte pattern.
6. Copy everything from that offset to the end of the file.
7. Save the copied byte array to a file.
So, here's my code for every step.
Step one:
Byte[] bytes;
OpenFileDialog ofd = new OpenFileDialog();
ofd.ShowDialog();
path = ofd.FileName;
bytes = File.ReadAllBytes(path);
Step two: search for the specific pattern in the file. I used some help here on Stack Overflow and ended up with this method:
static public List<int> SearchBytePattern(byte[] pattern, byte[] bytes)
{
    List<int> positions = new List<int>();
    int patternLength = pattern.Length;
    int totalLength = bytes.Length;
    byte firstMatchByte = pattern[0];
    for (int i = 0; i < totalLength; i++)
    {
        if (firstMatchByte == bytes[i] && totalLength - i >= patternLength)
        {
            byte[] match = new byte[patternLength];
            Array.Copy(bytes, i, match, 0, patternLength);
            if (match.SequenceEqual<byte>(pattern))
            {
                positions.Add(i);
                i += patternLength - 1;
            }
        }
    }
    return positions;
}
My method that searches for the pattern:
void CheckCamera()
{
    Byte[] szukajkamera = { 0x02, 0x00, 0x08, 0x00, 0x20 };
    List<int> positions = SearchBytePattern(szukajkamera, bytes);
    foreach (var item in positions)
    {
        MessageBox.Show(item.ToString("X2"));
        IndexCamera = item;
    }
    int OffsetCameraCheck = IndexCamera + 2;
}
item is now my offset: the position where 02 00 08 00 20 sits in the file.
Now, how do I check whether the byte at offset IndexCamera+2 equals 0x08?
I could use Array.IndexOf, but there are plenty of 08 bytes before the 08 I'm looking for.
For step 5 I'm doing a similar thing, but it becomes impossible for me when Buffer.BlockCopy asks me for a length.
For step 5 and onward I need to search this same file again for another pattern, get its offset and copy from that offset to the end. To do that I would have to Buffer.BlockCopy into a pre-sized byte array, but I don't know what length to give it. I'm totally lost. Please help me.
Thank you!
how do I do bytes(offset=IndexCamera+2) == 0x08 ?
if(bytes[IndexCamera+2] == 0x08)....
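As for the Buffer.BlockCopy length in steps 5-7: the destination array has to be allocated up front, and its length is simply the number of bytes from the second pattern's offset to the end of the file. Here is a minimal sketch reusing SearchBytePattern from above (the second pattern and the output path are hypothetical placeholders, not values from the question):
// Steps 5-7 sketch: find a second pattern, copy from its offset to the end of the
// file, and save the result. szukajdruga and "output.bin" are placeholder names.
byte[] szukajdruga = { 0x12, 0x34, 0x56 };
List<int> tailPositions = SearchBytePattern(szukajdruga, bytes);
if (tailPositions.Count > 0)
{
    int start = tailPositions[0];                    // offset of the second pattern
    byte[] tail = new byte[bytes.Length - start];    // exactly the remaining bytes
    Buffer.BlockCopy(bytes, start, tail, 0, tail.Length);
    File.WriteAllBytes("output.bin", tail);
}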
When doing pattern searching, the above answer does work; however, you need to adapt it to search for more of the pattern.
E.g.:
If you are looking for the location of 08 1D 1A AA 43 88 33,
then you would need something like:
public static unsafe long IndexOf(this byte[] haystack, byte[] needle, long startOffset = 0)
{
    fixed (byte* h = haystack) fixed (byte* n = needle)
    {
        for (byte* hNext = h + startOffset, hEnd = h + haystack.LongLength + 1 - needle.LongLength, nEnd = n + needle.LongLength; hNext < hEnd; hNext++)
            for (byte* hInc = hNext, nInc = n; *nInc == *hInc; hInc++)
                if (++nInc == nEnd)
                    return hNext - h;
        return -1;
    }
}
Note: credit to Dylan Nicholson, who wrote this code.
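A quick usage sketch (my illustration, assuming the extension method above sits in a static class and the project is compiled with unsafe code enabled):
// Usage sketch: find the 7-byte pattern and check the byte two positions past it.
byte[] needle = { 0x08, 0x1D, 0x1A, 0xAA, 0x43, 0x88, 0x33 };
long pos = bytes.IndexOf(needle);    // returns -1 when the pattern is not found
if (pos >= 0 && bytes[pos + 2] == 0x08)
{
    // pattern found and the byte at pos + 2 matches
}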
I want to hash passwords using mx.utils.SHA256 (or another SHA256-based algorithm) in ActionScript for the hashed passwords in my local SQLite database, so that I can match the entered password against the stored hashed password. I am using a salt as well.
I want to do the same thing in ActionScript that I have already done in VB code.
How can I translate the following from VB.NET to ActionScript?
Encoding.UTF8.GetBytes("String")
Salt is a String parameter:
System.Text.Encoding.Default.GetBytes(Salt.ToString.ToCharArray)
HashOut is a Byte array parameter:
Convert.ToBase64String(HashOut)
The Array.Copy() method copies one byte array into another up to the specified length; I need the ActionScript equivalent of this concatenation of arrays:
Array.Copy(Data, DataAndSalt, Data.Length)
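For context, this is roughly what the VB.NET side does end to end, sketched in C# (the variable names are illustrative, not the poster's original code):
// Rough C# sketch of the VB.NET hashing steps being translated:
// password bytes + salt bytes are concatenated, hashed with SHA256,
// and the digest is stored as a Base64 string.
using System;
using System.Security.Cryptography;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes(password);
byte[] saltBytes = Encoding.Default.GetBytes(salt);
byte[] dataAndSalt = new byte[data.Length + saltBytes.Length];
Array.Copy(data, dataAndSalt, data.Length);                           // password first
Array.Copy(saltBytes, 0, dataAndSalt, data.Length, saltBytes.Length); // then the salt
byte[] hashOut = SHA256.Create().ComputeHash(dataAndSalt);
string stored = Convert.ToBase64String(hashOut);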
Fairly simple process, but the documentation of ActionScript's SHA256 class is pretty lackluster. What you need to do is:
Write your salted string to a ByteArray
Call SHA256.computeDigest()
E.g.:
public function hashMyString(mySaltedInput:String):String
{
    var bytes:ByteArray = new ByteArray();
    bytes.writeUTFBytes(mySaltedInput);
    return SHA256.computeDigest(bytes);
}
I have now written the whole thing myself, according to my requirements and matching what was done in VB, and both produce the same results.
Encoding.UTF8.GetBytes("String") VB code in ActionScript is
yourByteArray.writeMultiByte("String", "iso-8859-1");
System.Text.Encoding.Default.GetBytes(Salt.ToString.ToCharArray))
VB code in ActionScript is
byterrSalt.writeMultiByte(Salt,Salt);
Array.Copy(Data, DataAndSalt, Data.Length) was for concatenating byte arrays, which in ActionScript is done by:
var DataAndSalt:ByteArray = new ByteArray();
DataAndSalt.writeBytes(Data);
DataAndSalt.writeBytes(Salt);
DataAndSalt will now contain both byte arrays (Data + Salt).
Data is a ByteArray, and you can concatenate as many byte arrays as you like with .writeBytes(YourByteArray).
Convert.ToBase64String(HashOut) is done by the following function:
private static const BASE64_CHARS:String = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
public static function encodeByteArray(data:ByteArray):String {
    // Initialise output
    var output:String = "";
    // Create data and output buffers
    var dataBuffer:Array;
    var outputBuffer:Array = new Array(4);
    // Rewind ByteArray
    data.position = 0;
    // While there are still bytes to be processed
    while (data.bytesAvailable > 0) {
        // Create new data buffer and populate next 3 bytes from data
        dataBuffer = new Array();
        for (var i:uint = 0; i < 3 && data.bytesAvailable > 0; i++) {
            dataBuffer[i] = data.readUnsignedByte();
        }
        // Convert the data buffer to Base64 character positions and
        // store them in the output buffer
        outputBuffer[0] = (dataBuffer[0] & 0xfc) >> 2;
        outputBuffer[1] = ((dataBuffer[0] & 0x03) << 4) | ((dataBuffer[1]) >> 4);
        outputBuffer[2] = ((dataBuffer[1] & 0x0f) << 2) | ((dataBuffer[2]) >> 6);
        outputBuffer[3] = dataBuffer[2] & 0x3f;
        // If the data buffer was short (i.e. not 3 bytes), set the remaining
        // positions in the output buffer to the index of the '=' symbol.
        // This is necessary because Base64 output is always a multiple of
        // 4 characters and is padded with '=' symbols.
        for (var j:uint = dataBuffer.length; j < 3; j++) {
            outputBuffer[j + 1] = 64;
        }
        // Loop through the output buffer and add the Base64 character
        // for each position to the encoded data string
        for (var k:uint = 0; k < outputBuffer.length; k++) {
            output += BASE64_CHARS.charAt(outputBuffer[k]);
        }
    }
    // Return encoded data
    return output;
}
Thank You
Udit Bhardwaj
I am honestly really confused about reading binary files in C#.
I have C++ code that reads a binary file:
FILE *pFile = fopen(filename, "rb");
uint n = 1024;
uint readC = 0;
do {
    short* pChunk = new short[n];
    readC = fread(pChunk, sizeof(short), n, pFile);
} while (readC > 0);
and it reads the following data:
-156, -154, -116, -69, -42, -36, -42, -41, -89, -178, -243, -276, -306,...
I tried to convert this code to C#, but I cannot read the same data. Here is the code:
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    sbyte[] buffer = new sbyte[1024];
    for (int i = 0; i < 1024; i++)
    {
        buffer[i] = reader.ReadSByte();
    }
}
and I get the following data:
100, -1, 102, -1, -116, -1, -69, -1, -42, -1, -36
How can I get the same data?
A short is not a signed byte; it's a signed 16-bit value.
short[] buffer = new short[1024];
for (int i = 0; i < 1024; i++)
{
    buffer[i] = reader.ReadInt16();
}
That's because in C++ you're reading shorts and in C# you're reading signed bytes (that's what SByte means). You should use reader.ReadInt16().
Your C++ code reads 2 bytes at a time (you're using sizeof(short)), while your C# code reads one byte at a time. An SByte (see http://msdn.microsoft.com/en-us/library/d86he86x(v=vs.71).aspx) uses 8 bits of storage.
You should use the same data type to get the correct output, or cast to a new type.
In C++ you are using short (I assume the file was also written with shorts), so use short in C# as well, or you can use System.Int16.
You are getting different values because short and sbyte are not equivalent: short is 2 bytes and sbyte is 1 byte.
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    System.Int16[] buffer = new System.Int16[1024];
    for (int i = 0; i < 1024; i++)
    {
        buffer[i] = reader.ReadInt16();
    }
}
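To mirror the C++ loop, which keeps reading 1024-short chunks until fread returns 0, a rough sketch that reads every 16-bit value in the file might look like this (my sketch, assuming the file length is an even number of bytes):
// Sketch: read the whole file as 16-bit values, mirroring the C++ fread loop.
using (var reader = new BinaryReader(File.Open(filename, FileMode.Open)))
{
    long count = reader.BaseStream.Length / sizeof(short);
    short[] samples = new short[count];
    for (long i = 0; i < count; i++)
    {
        samples[i] = reader.ReadInt16();
    }
}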
I'm trying to convert two bytes into an unsigned short so I can retrieve the actual server port value. I'm basing it on this protocol specification, under Reply Format. I tried using BitConverter.ToUInt16() for this, but the problem is that it doesn't seem to return the expected value. See below for a sample implementation:
int bytesRead = 0;
while (bytesRead < ms.Length)
{
    int first = ms.ReadByte() & 0xFF;
    int second = ms.ReadByte() & 0xFF;
    int third = ms.ReadByte() & 0xFF;
    int fourth = ms.ReadByte() & 0xFF;
    int port1 = ms.ReadByte();
    int port2 = ms.ReadByte();
    int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
    string ip = String.Format("{0}.{1}.{2}.{3}:{4}-{5} = {6}", first, second, third, fourth, port1, port2, actualPort);
    Debug.WriteLine(ip);
    bytesRead += 6;
}
As sample data, say the two port byte values are 105 and 135. The expected port value after conversion is 27015, but instead I get 34665 from BitConverter.
Am I doing it the wrong way?
If you reverse the values in the BitConverter call, you should get the expected result:
int actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
On a little-endian architecture, BitConverter expects the low-order byte first in the array, while this protocol sends the port in big-endian (network) order, so the two bytes have to be swapped. And as lasseespeholt points out in the comments, you would need to reverse the order again on a big-endian architecture. That could be checked with the BitConverter.IsLittleEndian property. Or it might be a better solution overall to use IPAddress.HostToNetworkOrder (convert the value first and then call that method to put the bytes in the correct order regardless of the endianness).
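For incoming data the mirror method is IPAddress.NetworkToHostOrder; a small sketch of that approach (my illustration, requires using System.Net, and assumes the two bytes arrive high byte first as in this protocol):
// Endianness-agnostic sketch: read the bytes as they arrived, then swap to host order.
ushort rawPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
int actualPort = (ushort)IPAddress.NetworkToHostOrder((short)rawPort);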
BitConverter is doing the right thing; you just have the low byte and high byte mixed up - you can verify using a manual bit shift:
byte port1 = 105;
byte port2 = 135;
ushort value = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
ushort value2 = (ushort)(port1 + (port2 << 8)); //same output
To work on both little- and big-endian architectures, you must do something like:
if (BitConverter.IsLittleEndian)
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port2, (byte)port1 }, 0);
else
    actualPort = BitConverter.ToUInt16(new byte[2] { (byte)port1, (byte)port2 }, 0);
I'm trying to convert an int value to a byte array, but I'm using the bytes for MIDI information, and the 0x00 bytes that GetBytes returns act as separators, which renders my MIDI information useless.
I would like to convert the int to an array that leaves out the 0x00 bytes and contains only bytes with actual values. How can I do this?
You've completely misunderstood what you need, but luckily you mentioned MIDI. You need to use the multi-byte (variable-length quantity) encoding that MIDI defines, which is somewhat similar to UTF-8 in that fewer than 8 bits of data are placed into each octet, with the remaining bit indicating whether more octets follow.
See the description on Wikipedia. Pay close attention to the fact that protobuf uses a very similar encoding; you can probably reuse some of Google's code.
Based on the info Ben added, this should do what you require:
static byte[] VlqEncode(int value)
{
    uint uvalue = (uint)value;
    if (uvalue < 128) return new byte[] { (byte)uvalue }; // simplest case
    // calculate length of buffer required
    int len = 0;
    do
    {
        len++;
        uvalue >>= 7;
    } while (uvalue != 0);
    // encode (this is untested, following the VLQ/MIDI/protobuf confusion)
    uvalue = (uint)value;
    byte[] buffer = new byte[len];
    for (int offset = len - 1; offset >= 0; offset--)
    {
        buffer[offset] = (byte)(128 | (uvalue & 127)); // only the last 7 bits
        uvalue >>= 7;
    }
    buffer[len - 1] &= 127; // clear the continuation bit on the final byte
    return buffer;
}
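For completeness, a decode counterpart is sketched below (my addition, not part of the original answer; it assumes the format produced by VlqEncode: big-endian 7-bit groups with the high bit set on every byte except the last):
// Decode sketch matching VlqEncode above.
static int VlqDecode(byte[] buffer, ref int offset)
{
    uint value = 0;
    byte b;
    do
    {
        b = buffer[offset];
        offset++;
        value = (value << 7) | (uint)(b & 127); // append the next 7 data bits
    } while ((b & 128) != 0);                   // stop after the byte whose high bit is clear
    return (int)value;
}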