Send float array from C++ server to C# client

I'm trying to send some data from a C++ server to a C# client. I was able to send char arrays over fine, but there is some problem with float arrays.
This is the code on the C++ server side:
float* arr;
arr = new float[12];
//array init...
if((bytecount = send(*csock, (const char*)arr, 12*sizeof(float), 0))==SOCKET_ERROR){
}
So yes, I'm trying to send over a float array of size 12.
Here's the code for the client side. (It seemed strange that there was no easy way to get floats out of the stream in the first place. I have never used C# before; maybe there's something better?)
//get the data in a char array
streamReader.Read(temp, 0, temp.Length);
//**the problem lies right here in receiving the data itself
//now convert the char array to byte array
for (int i = 0; i < (elems*4); i++) //elems = size of the float array
{
    byteArray = BitConverter.GetBytes(temp[i]);
    byteMain[i] = byteArray[0];
}
//finally convert it to a float array
for (int i = 0; i < elems; i++)
{
    float val = BitConverter.ToSingle(byteMain, i * 4);
    myarray[i] = val;
}
Let's look at the memory dump on both sides and the problem will be clear:
//c++ bytes corresponding to the first 5 floats in the array
//(2.1 9.9 12.1 94.9 2.1 ...)
66 66 06 40 66 66 1e 41 9a 99 41 41 cd cc bd 42 66 66 06 40
//c# - this is what i get in the byteMain array
66 66 06 40 66 66 1e 41 fd fd 41 41 fd 3d ? 42 66 66 06 40
There are 2 problems here on the C# side:
1) First, it does not handle anything above 0x80 (above 127). (Incompatible structures?)
2) For some unbelievable reason it drops a byte!
And this happens in 'temp' right at the time of receiving the data.
I've been trying to figure something out, but so far nothing.
Do you have any idea why this might be happening? I'm sure I'm doing something wrong...
Suggestions for a better approach?
Thanks a lot

It's not clear from your code what the streamReader variable points to (i.e. what is its type?), but I would suggest you use a BinaryReader instead. That way, you can just read data one float at a time and never bother with the byte[] array at all:
var reader = new BinaryReader(/* put source stream here */);
var myFloat = reader.ReadSingle();
// do stuff with myFloat...
// then you can read another
myFloat = reader.ReadSingle();
// etc.
Different readers do different things with the data. For instance, a text reader (such as StreamReader) assumes everything is text in a particular encoding (like UTF-8) and may reinterpret the raw bytes in a way you didn't expect - which is exactly what mangles your bytes above 0x7F. The BinaryReader will not do that, as it was designed to let you specify exactly the data types you want to read out of your stream.
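For instance, here is a minimal sketch of the receiving side (the helper name is illustrative; it assumes a connected TcpClient and that both machines are little-endian, which is true for x86/x64 on both ends):

using System.IO;
using System.Net.Sockets;

// Illustrative helper: read the 12 floats the C++ server sends.
// Assumes both ends use the same (little-endian) byte order.
static float[] ReceiveFloats(TcpClient client, int count)
{
    var floats = new float[count];
    var reader = new BinaryReader(client.GetStream());
    for (int i = 0; i < count; i++)
        floats[i] = reader.ReadSingle(); // reads exactly 4 bytes per float
    return floats;
}

Note that no text decoding happens anywhere, which is the point.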

I'm not sure about C#, but C++ makes no guarantees about the internal, binary representations of floats (or any other data type). For all you know, 0.42 might be represented using these 4 bytes: '0', '.', '4', '2'.
The easiest solution would be transferring human-readable strings such as "2.1 9.9 12.1 94.9 2.1" and using cin/cout/printf/scanf and friends.

When sending numbers over a network, you should always convert them into a common format and then read them back; in other words, any data other than raw bytes should be encapsulated. This holds regardless of your programming language. I cannot comment on what is wrong with your code, but this might solve your issue and will save some headaches later on. Consider whether the architecture is 64-bit, or whether it uses a different endianness.
EDIT:
I guess your problem lies with signed/unsigned handling and can be solved with Isak's answer, but still mind what I said.
If you need help with encapsulation, check Beej's Network Guide. It should have a sample of how to encode floats over a network.
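For example, here is an illustrative C# sketch (mine, not from Beej) of writing a float in a fixed wire order regardless of the host's native byte order:

using System;

// Illustrative only: write a float into a buffer in a fixed
// (little-endian) wire order, regardless of host endianness.
static void WriteFloatLittleEndian(byte[] buffer, int offset, float value)
{
    byte[] bytes = BitConverter.GetBytes(value); // native byte order
    if (!BitConverter.IsLittleEndian)
        Array.Reverse(bytes);                    // normalize to little-endian
    Buffer.BlockCopy(bytes, 0, buffer, offset, 4);
}

The receiver does the mirror image: read 4 bytes, reverse them if the local machine is big-endian, then call BitConverter.ToSingle.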

Related

Creating Bit Array in Powershell/C# from integers

I'm trying to reverse engineer a game database and have come to a roadblock.
I can load all the tables/fields/records, however I'm stuck when it comes to converting the record values to hex or bits.
The values (in game) are as follows: (15 bits) 192 - (10 bits) 20 - (5 bits) 19 - (5 bits) 2
In the db file, it shows 00 C0 - 00 0A - A6 - 00
This is strange, because only the first value (00 C0) matches its hex representation (192).
The other values are different; I'm guessing this is because they are not full bytes (10 and 5 bits respectively), so it must be using a bit array.
This guess is further supported when I change the final value from 2 to 31: the last 2 values in the db change, and the hex string becomes 00 C0 - 00 0A - E6 - 07
So what's the best way to get these 4 integers into a bit array in PowerShell so I can try to determine what's going on here, in case it is not obvious to any more experienced programmers what is at play? If required I could also use C#, however I'm less experienced with it.
Thanks
I am not sure what you want to achieve; 5-bit words are literally odd.
It could be that there is no clean conversion here but something like a hash. Anyway, you could technically count from 0 to 2^35 - 1, poke each value into your game, and look up the result in your database.
Let me give you a few conversion methods:
To bit array:
$Bits =
    [convert]::ToString(192, 2).PadLeft(15, '0') +
    [convert]::ToString( 20, 2).PadLeft(10, '0') +
    [convert]::ToString( 19, 2).PadLeft( 5, '0') +
    [convert]::ToString(  2, 2).PadLeft( 5, '0')
$Bits
00000001100000000000101001001100010
And back:
if ($Bits -Match '(.{15})(.{10})(.{5})(.{5})') {
    $Matches[1..4].Foreach{ [convert]::ToByte($_, 2) }
}
192
20
19
2
To Int64:
$Int64 = [convert]::ToInt64($Bits, 2)
$Int64
201347682
To bytes:
$Bytes = [BitConverter]::GetBytes($Int64)
[System.BitConverter]::ToString($Bytes)
62-52-00-0C-00-00-00-00
Note that the byte list is in reverse order:
[convert]::ToString(0x62, 2)
1100010
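Since you mention you could also use C#, here is a rough equivalent of the PowerShell above (just a sketch using your example values):

using System;

// Pack the four game values into one 35-bit string, then into an Int64.
string bits =
    Convert.ToString(192, 2).PadLeft(15, '0') +
    Convert.ToString( 20, 2).PadLeft(10, '0') +
    Convert.ToString( 19, 2).PadLeft( 5, '0') +
    Convert.ToString(  2, 2).PadLeft( 5, '0');

long packed  = Convert.ToInt64(bits, 2);         // 201347682
byte[] bytes = BitConverter.GetBytes(packed);    // little-endian on x86/x64
Console.WriteLine(BitConverter.ToString(bytes)); // 62-52-00-0C-00-00-00-00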

C# Int vs Byte performance & SQL Int vs Binary performance

In a C# windows app I handle HEX Strings. A single HEX string will have 5-30 HEX parts.
07 82 51 2A F1 C9 63 69 17 C1 1B BA C7 7A 18 20 20 8A 95 7A 54 5A E0 2E D4 3D 29
Currently I take this string and parse it into N integers using Convert.ToInt32(string, 16). I then add these int values to a database. When I extract these values from the database, I extract them as ints and then convert them back into HEX strings.
Would it be better performance wise to convert these string to bytes and then add them as binary data types within the database?
EDIT:
The 5-30 HEX parts correspond to specific tables where all the parts make up 1 record with individual parts. For instance, if I had 5 HEX values, they correspond to 5 separate columns of 1 record.
EDIT:
To clarify (sorry):
I have 9 tables. Each table has a set number of columns.
table1:30
table2:18
table3:18
table4:18
table5:18
table6:13
table7:27
table8:5
table9:11
Each of these columns in every table corresponds to a specific HEX value.
For example, my app will receive a "payload" of 13 HEX components in a single string format: 07 82 51 2A F1 C9 63 69 17 C1 1B BA C7. Currently I take this string and parse the individual HEX components and convert them to ints, storing them in an int array. I then take these int values and store them in the corresponding table and columns in the database. When I read these values I get them as ints and then convert them to HEX strings.
What I am wondering is if I should convert the HEX string into a byte array and store the bytes as SQL binary variable types.
Well in terms of performance, you should of course test both ways.
However, in terms of readability, if this is just arbitrary data, I'd certainly suggest using a byte array. If it's actually meant to represent a sequence of integers, that's fine - but why would you represent an arbitrary byte array using a collection of 4-byte integers? It doesn't fit in well with anything else:
You have to consider padding if your input data isn't a multiple of 4 bytes
It's a pain to work with in terms of reading and writing the data with streams
It's not clear how you're storing the integers in the database, but I'd expect a blob to be more efficient if you're just trying to store the whole thing
I would suggest writing the code the more natural way, keeping your data close to the kind of thing it's really trying to represent, and then measuring the performance. If it's good enough, then you don't need to look any further. If it's not, you'll have a good basis for tweaking.
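For reference, the hex-string/byte[] round trip itself is short; a sketch, assuming the parts are space-separated hex pairs as in your example (helper names are illustrative):

using System;
using System.Linq;

// Sketch: "07 82 51 2A" <-> new byte[] { 0x07, 0x82, 0x51, 0x2A }
static byte[] HexToBytes(string hex) =>
    hex.Split(' ').Select(part => Convert.ToByte(part, 16)).ToArray();

static string BytesToHex(byte[] bytes) =>
    string.Join(" ", bytes.Select(b => b.ToString("X2")));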
Yes, by far. Inserting many rows is far worse than inserting a few bigger rows.
A data model often depends not on just how you want to write, but also how you want to find and read the data.
Some considerations:
If you ever have a need to find a particular "HEX part", even when not at the start of the "HEX string", then each "HEX part" will need to be in a separate row so a database index can pick it up.
Depending on your DBMS/API, it may not be easy to seek through a BLOB or byte array. This may be important for loading non-prefix "HEX parts" or performing modifications in the middle of the "HEX string".
If the "HEX string" needs to be a PRIMARY, UNIQUE or FOREIGN KEY, or needs to be searchable by prefix, then you'll typically need a database type that is actually indexable (BLOBs typically aren't, but most DBMSes have alternate types for smaller byte arrays that are).
All in all, a byte array is probably what you need, but beware of the considerations above.

How to edit a binary file's hex value using C#

So here's my issue. I have a binary file that I want to edit. I can use a hex editor to edit it of course, but I need to make a program to edit this particular file. Say that I know a certain hex value I want to edit and I know its address, etc. Let's say that it's a 16-bit binary, the address is 00000000, it's on row 04, and it has a value of 02. How could I create a program that would change the value at that address, and only that address, with the click of a button?
I've found resources that talk about similar things, but I can't for the life of me find help with the exact issue.
Any help would be appreciated, and please, don't just tell me the answer if there is one but try and explain a bit.
I think this is best explained with a specific example. Here are the first 32 bytes of an executable file as shown in Visual Studio's hex editor:
00000000 4D 5A 90 00 03 00 00 00 04 00 00 00 FF FF 00 00
00000010 B8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
Now a file is really just a linear sequence of bytes. The rows that you see in a hex editor are just there to make things easier to read. When you want to manipulate the bytes in a file using code, you need to identify the bytes by their 0-based positions. In the above example, the positions of the non-zero bytes are as follows:
Position   Value
--------   -----
       0   0x4D
       1   0x5A
       2   0x90
       4   0x03
       8   0x04
      12   0xFF
      13   0xFF
      16   0xB8
      24   0x40
In the hex editor representation shown above, the numbers on the left represent the positions of the first byte in the corresponding line. The editor is showing 16 bytes per line, so they increment by 16 (0x10) at each line.
If you simply want to take one of the bytes in the file and change its value, the most efficient approach that I see would be to open the file using a FileStream, seek to the appropriate position, and overwrite the byte. For example, the following will change the 0x40 at position 24 to 0x04:
using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite)) {
    stream.Position = 24;
    stream.WriteByte(0x04);
}
Well, the first thing would probably be to understand the conversions. Hex to decimal probably isn't as important (unless of course you need to start from a decimal value, but that's a simple conversion formula), but hex to binary will be important, seeing as each hex character (0-9, A-F) corresponds to a specific 4-bit binary pattern.
After understanding that stuff, the next step is to figure out exactly what you are searching for, make the proper conversion, and replace that exact byte sequence. I would recommend (if the buffer wouldn't be too large) taking the entire hex dump and doing the replacement there, to avoid overwriting a duplicate binary sequence elsewhere in the file.
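A sketch of that idea (illustrative only; it loads the whole file, so it is only suitable when the file comfortably fits in memory, and it assumes the replacement has the same length as the pattern):

using System;
using System.IO;

// Find the first occurrence of 'pattern' in the file and overwrite it in place.
static bool ReplaceFirst(string path, byte[] pattern, byte[] replacement)
{
    byte[] data = File.ReadAllBytes(path);
    for (int i = 0; i <= data.Length - pattern.Length; i++)
    {
        int j = 0;
        while (j < pattern.Length && data[i + j] == pattern[j]) j++;
        if (j == pattern.Length)
        {
            Buffer.BlockCopy(replacement, 0, data, i, replacement.Length);
            File.WriteAllBytes(path, data);
            return true;
        }
    }
    return false; // pattern not found
}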
Hope that helps!
Regards,
Dennis M.

How to analyse contents of binary serialization stream?

I'm using binary serialization (BinaryFormatter) as a temporary mechanism to store state information in a file for a relatively complex (game) object structure; the files are coming out much larger than I expect, and my data structure includes recursive references - so I'm wondering whether the BinaryFormatter is actually storing multiple copies of the same objects, or whether my basic "number of objects and values I should have" arithmetic is way off-base, or where else the excessive size is coming from.
Searching on stack overflow I was able to find the specification for Microsoft's binary remoting format:
http://msdn.microsoft.com/en-us/library/cc236844(PROT.10).aspx
What I can't find is any existing viewer that enables you to "peek" into the contents of a BinaryFormatter output file - get object counts and total bytes for different object types in the file, etc.
I feel like this must be my "google-fu" failing me (what little I have) - can anyone help? This must have been done before, right??
UPDATE: I could not find one and got no answers, so I put something relatively quick together (link to downloadable project below); I can confirm the BinaryFormatter does not store multiple copies of the same object, but it does write quite a lot of metadata to the stream. If you need efficient storage, build your own custom serialization methods.
Because it may be of interest to someone, I decided to write this post about what the binary format of serialized .NET objects looks like and how we can interpret it correctly.
I have based all my research on the .NET Remoting: Binary Format Data Structure specification.
Example class:
To have a working example, I have created a simple class called A which contains 2 properties, one string and one integer value, called SomeString and SomeValue.
Class A looks like this:
[Serializable()]
public class A
{
    public string SomeString
    {
        get;
        set;
    }

    public int SomeValue
    {
        get;
        set;
    }
}
For the serialization I used the BinaryFormatter of course:
BinaryFormatter bf = new BinaryFormatter();
StreamWriter sw = new StreamWriter("test.txt");
bf.Serialize(sw.BaseStream, new A() { SomeString = "abc", SomeValue = 123 });
sw.Close();
As can be seen, I passed a new instance of class A containing abc and 123 as values.
Example result data:
If we look at the serialized result in a hex editor, we get something like this:
Let us interpret the example result data:
According to the above mentioned specification (here is the direct link to the PDF: [MS-NRBF].pdf) every record within the stream is identified by the RecordTypeEnumeration. Section 2.1.2.1 RecordTypeEnumeration states:
This enumeration identifies the type of the record. Each record (except for MemberPrimitiveUnTyped) starts with a record type enumeration. The size of the enumeration is one BYTE.
SerializationHeaderRecord:
So if we look back at the data we got, we can start interpreting the first byte:
As stated in 2.1.2.1 RecordTypeEnumeration a value of 0 identifies the SerializationHeaderRecord which is specified in 2.6.1 SerializationHeaderRecord:
The SerializationHeaderRecord record MUST be the first record in a binary serialization. This record has the major and minor version of the format and the IDs of the top object and the headers.
It consists of:
RecordTypeEnum (1 byte)
RootId (4 bytes)
HeaderId (4 bytes)
MajorVersion (4 bytes)
MinorVersion (4 bytes)
With that knowledge we can interpret the record containing 17 bytes:
00 represents the RecordTypeEnumeration which is SerializationHeaderRecord in our case.
01 00 00 00 represents the RootId
If neither the BinaryMethodCall nor BinaryMethodReturn record is present in the serialization stream, the value of this field MUST contain the ObjectId of a Class, Array, or BinaryObjectString record contained in the serialization stream.
So in our case this should be the ObjectId with the value 1 (because the data is serialized using little-endian) which we will hopefully see again ;-)
FF FF FF FF represents the HeaderId
01 00 00 00 represents the MajorVersion
00 00 00 00 represents the MinorVersion
BinaryLibrary:
As specified, each record must begin with the RecordTypeEnumeration. As the last record is complete, we must assume that a new one begins.
Let us interpret the next byte:
As we can see, in our example the SerializationHeaderRecord is followed by the BinaryLibrary record:
The BinaryLibrary record associates an INT32 ID (as specified in [MS-DTYP] section 2.2.22) with a Library name. This allows other records to reference the Library name by using the ID. This approach reduces the wire size when there are multiple records that reference the same Library name.
It consists of:
RecordTypeEnum (1 byte)
LibraryId (4 bytes)
LibraryName (variable number of bytes (which is a LengthPrefixedString))
As stated in 2.1.1.6 LengthPrefixedString...
The LengthPrefixedString represents a string value. The string is prefixed by the length of the UTF-8 encoded string in bytes. The length is encoded in a variable-length field with a minimum of 1 byte and a maximum of 5 bytes. To minimize the wire size, length is encoded as a variable-length field.
In our simple example the length is always encoded using 1 byte. With that knowledge we can continue the interpretation of the bytes in the stream:
0C represents the RecordTypeEnumeration which identifies the BinaryLibrary record.
02 00 00 00 represents the LibraryId which is 2 in our case.
Now the LengthPrefixedString follows:
42 represents the length information of the LengthPrefixedString which contains the LibraryName.
In our case the length information of 42 (decimal 66) tells us that we need to read the next 66 bytes and interpret them as the LibraryName.
As already stated, the string is UTF-8 encoded, so the result of the bytes above would be something like: _WorkSpace_, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
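As an aside, that variable-length length field carries 7 payload bits per byte, with the high bit signalling that another length byte follows (the same scheme BinaryReader uses internally for Read7BitEncodedInt). A minimal decoder, for illustration:

using System.IO;

// Decode the variable-length integer that prefixes a LengthPrefixedString.
static int ReadLength(BinaryReader reader)
{
    int value = 0, shift = 0;
    byte b;
    do
    {
        b = reader.ReadByte();
        value |= (b & 0x7F) << shift; // 7 payload bits per byte
        shift += 7;
    } while ((b & 0x80) != 0);        // high bit set = more bytes follow
    return value;
}

In our example every length fits in a single byte (high bit clear), which is why we could read them directly.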
ClassWithMembersAndTypes:
Again, the record is complete so we interpret the RecordTypeEnumeration of the next one:
05 identifies a ClassWithMembersAndTypes record. Section 2.3.2.1 ClassWithMembersAndTypes states:
The ClassWithMembersAndTypes record is the most verbose of the Class records. It contains metadata about Members, including the names and Remoting Types of the Members. It also contains a Library ID that references the Library Name of the Class.
It consists of:
RecordTypeEnum (1 byte)
ClassInfo (variable number of bytes)
MemberTypeInfo (variable number of bytes)
LibraryId (4 bytes)
ClassInfo:
As stated in 2.3.1.1 ClassInfo the record consists of:
ObjectId (4 bytes)
Name (variable number of bytes (which is again a LengthPrefixedString))
MemberCount (4 bytes)
MemberNames (which is a sequence of LengthPrefixedString's where the number of items MUST be equal to the value specified in the MemberCount field.)
Back to the raw data, step by step:
01 00 00 00 represents the ObjectId. We've already seen this one, it was specified as the RootId in the SerializationHeaderRecord.
0F 53 74 61 63 6B 4F 76 65 72 46 6C 6F 77 2E 41 represents the Name of the class which is represented by using a LengthPrefixedString. As mentioned, in our example the length of the string is defined with 1 byte, so the first byte 0F specifies that 15 bytes must be read and decoded using UTF-8. The result looks something like this: StackOverFlow.A - so obviously I used StackOverFlow as the name of the namespace.
02 00 00 00 represents the MemberCount; it tells us that 2 members, both represented with LengthPrefixedStrings, will follow.
Name of the first member:
1B 3C 53 6F 6D 65 53 74 72 69 6E 67 3E 6B 5F 5F 42 61 63 6B 69 6E 67 46 69 65 6C 64 represents the first MemberName; 1B is again the length of the string, which is 27 bytes long and results in something like this: <SomeString>k__BackingField.
Name of the second member:
1A 3C 53 6F 6D 65 56 61 6C 75 65 3E 6B 5F 5F 42 61 63 6B 69 6E 67 46 69 65 6C 64 represents the second MemberName, 1A specifies that the string is 26 bytes long. It results in something like this: <SomeValue>k__BackingField.
MemberTypeInfo:
After the ClassInfo the MemberTypeInfo follows.
Section 2.3.1.2 - MemberTypeInfo states, that the structure contains:
BinaryTypeEnums (variable in length)
A sequence of BinaryTypeEnumeration values that represents the Member Types that are being transferred. The Array MUST:
Have the same number of items as the MemberNames field of the ClassInfo structure.
Be ordered such that the BinaryTypeEnumeration corresponds to the Member name in the MemberNames field of the ClassInfo structure.
AdditionalInfos (variable in length); depending on the BinaryTypeEnum, additional info may or may not be present.
| BinaryTypeEnum | AdditionalInfos |
|----------------+--------------------------|
| Primitive | PrimitiveTypeEnumeration |
| String | None |
So taking that into consideration we are almost there...
We expect 2 BinaryTypeEnumeration values (because we had 2 members in the MemberNames).
Again, back to the raw data of the complete MemberTypeInfo record:
01 represents the BinaryTypeEnumeration of the first member, according to 2.1.2.2 BinaryTypeEnumeration we can expect a String and it is represented using a LengthPrefixedString.
00 represents the BinaryTypeEnumeration of the second member and, again according to the specification, it is a Primitive. As stated above, Primitives are followed by additional information, in this case a PrimitiveTypeEnumeration. That's why we need to read the next byte, which is 08, match it against the table in 2.1.2.3 PrimitiveTypeEnumeration, and be surprised to notice that we can expect an Int32, which is represented by 4 bytes, as stated in some other document about basic datatypes.
LibraryId:
After the MemberTypeInfo, the LibraryId follows; it is represented by 4 bytes:
02 00 00 00 represents the LibraryId which is 2.
The values:
As specified in 2.3 Class Records:
The values of the Members of the Class MUST be serialized as records that follow this record, as specified in section 2.7. The order of the records MUST match the order of MemberNames as specified in the ClassInfo (section 2.3.1.1) structure.
That's why we can now expect the values of the members.
Let us look at the last few bytes:
06 identifies a BinaryObjectString. It represents the value of our SomeString property (the <SomeString>k__BackingField to be exact).
According to 2.5.7 BinaryObjectString it contains:
RecordTypeEnum (1 byte)
ObjectId (4 bytes)
Value (variable length, represented as a LengthPrefixedString)
So knowing that, we can clearly identify that
03 00 00 00 represents the ObjectId.
03 61 62 63 represents the Value where 03 is the length of the string itself and 61 62 63 are the content bytes that translate to abc.
Hopefully you can remember that there was a second member, an Int32. Knowing that an Int32 is represented by 4 bytes, we can conclude that the remaining 7B 00 00 00 must be the Value of our second member. 7B hexadecimal equals 123 decimal, which fits our example code.
So here is the complete ClassWithMembersAndTypes record:
MessageEnd:
Finally the last byte 0B represents the MessageEnd record.
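To make the walkthrough concrete, here is a minimal sketch that reads back just the 17-byte SerializationHeaderRecord from the stream (illustration only, not a full parser):

using System;
using System.IO;

// All multi-byte fields are little-endian, which matches BinaryReader.
using (var reader = new BinaryReader(File.OpenRead("test.txt")))
{
    byte recordType = reader.ReadByte();  // 00 = SerializationHeaderRecord
    int  rootId     = reader.ReadInt32(); // 1 in our example
    int  headerId   = reader.ReadInt32(); // -1 (FF FF FF FF)
    int  major      = reader.ReadInt32(); // 1
    int  minor      = reader.ReadInt32(); // 0
    Console.WriteLine($"record {recordType}: root={rootId}, v{major}.{minor}");
}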
Vasiliy is right in that I will ultimately need to implement my own formatter/serialization process to better handle versioning and to output a much more compact stream (before compression).
I did want to understand what was happening in the stream, however, so I wrote up a (relatively) quick class that does what I wanted:
parses its way through the stream, building collections of object names, counts, and sizes
once done, outputs a quick summary of what it found - classes, counts and total sizes in the stream
It's not useful enough for me to put it somewhere visible like codeproject, so I just dumped the project in a zip file on my website: http://www.architectshack.com/BinarySerializationAnalysis.ashx
In my specific case it turns out that the problem was twofold:
The BinaryFormatter is VERY verbose (this is known, I just didn't realize the extent)
I did have issues in my class; it turned out I was storing objects that I didn't want to store
Hope this helps someone at some point!
Update: Ian Wright contacted me with a problem with the original code, where it crashed when the source object(s) contained "decimal" values. This is now corrected, and I've used the occasion to move the code to GitHub and give it a (permissive, BSD) license.
Our application operates on massive data; it can take up to 1-2 GB of RAM, like your game. We hit the same "storing multiple copies of the same objects" problem, and binary serialization also stores too much metadata. When it was first implemented, the serialized file took about 1-2 GB; nowadays I have managed to decrease that to 50-100 MB. Here is what we did.
The short answer: do not use the .NET binary serialization; create your own binary serialization mechanism. We have our own BinaryFormatter class and an ISerializable interface (with two methods, Serialize and Deserialize).
The same object should not be serialized more than once; we save its unique ID and restore the object from a cache.
I can share some code if you ask.
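The idea, as a rough sketch (illustrative, not our actual code):

using System.Collections.Generic;

// Sketch of the ID cache: the first time an object is seen its payload is
// written and an ID assigned; on later sightings only the ID is written.
class ObjectIdCache
{
    private readonly Dictionary<object, int> _ids = new Dictionary<object, int>();

    // Returns true if the caller still needs to write the object's payload.
    public bool TryRegister(object obj, out int id)
    {
        if (_ids.TryGetValue(obj, out id))
            return false; // already serialized: write just the ID
        id = _ids.Count + 1;
        _ids[obj] = id;
        return true;      // first sighting: write ID + payload
    }
}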
EDIT: It seems you are correct. See the following code - it proves I was wrong.
[Serializable]
public class Item
{
    public string Data { get; set; }
}

[Serializable]
public class ItemHolder
{
    public Item Item1 { get; set; }
    public Item Item2 { get; set; }
}

public class Program
{
    public static void Main(params string[] args)
    {
        {
            Item item0 = new Item() { Data = "0000000000" };
            ItemHolder holderOneInstance = new ItemHolder() { Item1 = item0, Item2 = item0 };

            var fs0 = File.Create("temp-file0.txt");
            var formatter0 = new BinaryFormatter();
            formatter0.Serialize(fs0, holderOneInstance);
            fs0.Close();

            Console.WriteLine("One instance: " + new FileInfo(fs0.Name).Length); // 335
            //File.Delete(fs0.Name);
        }
        {
            Item item1 = new Item() { Data = "1111111111" };
            Item item2 = new Item() { Data = "2222222222" };
            ItemHolder holderTwoInstances = new ItemHolder() { Item1 = item1, Item2 = item2 };

            var fs1 = File.Create("temp-file1.txt");
            var formatter1 = new BinaryFormatter();
            formatter1.Serialize(fs1, holderTwoInstances);
            fs1.Close();

            Console.WriteLine("Two instances: " + new FileInfo(fs1.Name).Length); // 360
            //File.Delete(fs1.Name);
        }
    }
}
Looks like BinaryFormatter uses object.Equals to find same objects.
Have you ever looked inside the generated files? If you open "temp-file0.txt" and "temp-file1.txt" from the code example, you'll see they contain lots of metadata. That's why I recommended creating your own serialization mechanism.
Sorry for being confusing.
Maybe you could run your program in debug mode and try adding a breakpoint.
If that is impossible due to the size of the game or other dependencies, you can always code a simple/small app that includes the deserialization code and peek from debug mode there.

Binary to Ascii and back again

I'm trying to interface with a hardware device via the serial port. When I use software like Portmon to see the messages they look like this:
42 21 21 21 21 41 45 21 26 21 29 21 26 59 5F 41 30 21 2B 21 27
42 21 21 21 21 41 47 21 27 21 28 21 27 59 5D 41 32 21 2A 21 28
When I run them through a hex-to-ASCII converter the commands don't make sense. Are these messages in fact something other than hex? My hope was to see the messages the device is passing and emulate them using C#. What can I do to find out exactly what the messages are?
Does the hardware device specify a protocol? Just because it's a serial port connection doesn't mean it has to be ASCII/readable English text. It could just as well be a sequence of bytes where, for example, 42 is a command and 21 21 21 21 is data for that command. It could be an initialization sequence or whatever.
At the end of the day, all you work with is a series of bytes. The meaning of them can be found in a protocol specification or if you don't have one, you need to manually look at each command. Issue a command to the device, capture the input, issue another command.
Look for patterns. Common Initialization? What could be the commands? What data gets passed?
Yes, it's tedious, but reverse engineering is rarely easy.
The ASCII for the Hex is this:
B!!!!AE!&!)!&Y_A0!+!'
B!!!!AG!'!(!'Y]A2!*!(
That does look like some sort of protocol to me, with some Initialization Sequence (B!!!!) and commands (AE and AG), but that's just guessing.
The device is sending data to the computer. All digital data has the form of ones and zeroes, such as 10101001010110010... Most often one combines groups of eight such bits (binary digits) into bytes, so all data consists of bytes. One byte can thus represent any of the 2^8 values 0 to 2^8 - 1 = 255, or, in hexadecimal notation, any of the numbers 0x00 to 0xFF.
Sometimes the bytes represent a string of alphanumerical (and other) characters, often ASCII encoded. This data format assigns a character to each value from 0 to 127. But all data is not ASCII-encoded characters.
For instance, if the device is a light-intensity sensor, then each byte could give the light intensity as a number between 0 (pitch-black) and 255 (as bright as it gets). Or, the data could be a bitmap image. Then the data would start with a couple of well-defined structures (namely this and this) specifying the colour depth (number of bits per pixel, i.e. more or less the number of colours), the width, the height, and the compression of the bitmap. Then the pixel data would begin. Typically the bytes would go BBGGRRBBGGRRBBGGRR where the first BB is the blue intensity of the first pixel, the first GG is the green intensity of the first pixel, the first RR is the red intensity of the first pixel, the second BB is the blue intensity of the second pixel, and so on.
In fact the data could mean anything. What kind of device is it? Does it have an open specification?
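If you want to capture the traffic from your own C# code rather than through Portmon, here is a minimal sketch using SerialPort (the port name and settings are placeholders; use your device's actual ones):

using System;
using System.IO.Ports;

// Dump every byte the device sends as hex, Portmon-style.
// "COM1", 9600, 8N1 are placeholder settings.
using (var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One))
{
    port.Open();
    while (true)
    {
        int b = port.ReadByte();     // blocks until a byte arrives
        Console.Write("{0:X2} ", b);
    }
}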
