I have a TCP client, which puts a packet into a structure:
using System.Runtime.InteropServices;
[StructLayoutAttribute(LayoutKind.Sequential)]
public struct tPacket_5000_E
{
public Int16 size;
public Int16 opcode;
public byte securityCount;
public byte securityCRC;
public byte flag;
[MarshalAsAttribute(UnmanagedType.ByValArray, SizeConst = 8, ArraySubType = UnmanagedType.I1)]
public byte[] blowfish;
public UInt32 seedCount;
public UInt32 seedCRC;
[MarshalAsAttribute(UnmanagedType.ByValArray, SizeConst = 5, ArraySubType = UnmanagedType.I1)]
public UInt32[] seedsecurity;
}
The code I use to put the packet in the structure is:
tPacket_5000_E packet = new tPacket_5000_E();
GCHandle pin = GCHandle.Alloc(data, GCHandleType.Pinned);
packet = (tPacket_5000_E)Marshal.PtrToStructure(pin.AddrOfPinnedObject(), typeof(tPacket_5000_E));
pin.Free();
Now, before I continue, I must tell you that I'm translating this project from C++ to C#.
This is the problem:
The last three members of tPacket_5000_E are UInt32 (I tried Int32 too), which is DWORD in C++.
The values before those three members, which are not UInt32, are equal to those in C++ (I inject the same packet into both the C++ and the C# project).
However, those three members have different values.
In C++ the values are (correct):
seedCount:0x00000079
seedCRC:0x000000d1
SeedSecurity:
-[0]:0x548ac099
-[1]:0x03c4d378
-[2]:0x292e9eab
-[3]:0x4eee5ee3
-[4]:0x1071206e
In C# the values are (incorrect):
seedCount:0xd1000000
seedCRC:0x99000000
SeedSecurity:
-[0]: 0x78548ac0
-[1]: 0xab03c4d3
-[2]: 0xe3292e9e
-[3]: 0x6e4eee5e
-[4]: 0x00107120
The packet is identical in both applications:
byte[] data = new byte[] {
0x25, 0x00, 0x00, 0x50, 0x00, 0x00, 0x0E, 0x10,
0xCE, 0xEF, 0x47, 0xDA, 0xC3, 0xFE, 0xFF, 0x79,
0x00, 0x00, 0x00, 0xD1, 0x00, 0x00, 0x00, 0x99,
0xC0, 0x8A, 0x54, 0x78, 0xD3, 0xC4, 0x03, 0xAB,
0x9E, 0x2E, 0x29, 0xE3, 0x5E, 0xEE, 0x4E, 0x6E,
0x20, 0x71, 0x10};
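For reference, here is how those 43 bytes map onto the packed (unpadded) layout the C++ side expects:
offset  0-1  : size          = 0x0025
offset  2-3  : opcode        = 0x5000
offset  4    : securityCount = 0x00
offset  5    : securityCRC   = 0x00
offset  6    : flag          = 0x0E
offset  7-14 : blowfish      = 10 CE EF 47 DA C3 FE FF
offset 15-18 : seedCount     = 0x00000079
offset 19-22 : seedCRC       = 0x000000D1
offset 23-42 : seedsecurity  = 5 x UInt32 (0x548AC099, ...)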
Why are the last three members of the struct different, and how can I fix them?
Thanks in advance!
I'd expect that the root of your problem is that the three byte values
public byte securityCount;
public byte securityCRC;
public byte flag;
cause the next 32-bit values not to be word-aligned, and your two sets of code are adding (or not adding) internal padding differently.
I expect that the different packings look something like this:
C++ C#
================================ ================================
[size ][opcode ] [size ][opcode ]
[secCnt][secCrc][flag ][blow0 ] [secCnt][secCrc][flag ][blow0 ]
[blow1 ][blow2 ][blow3 ][blow4 ] [blow1 ][blow2 ][blow3 ][blow4 ]
[blow5 ][blow6 ][blow7 ][seedCou [blow5 ][blow6 ][blow7 ]..PAD...
nt ][seedCRC [seedCount ]
][seedSec [seedCRC ]
urity0 ][seedSec [seedSecurity0 ]
urity1 ][seedSec [seedSecurity1 ]
urity2 ][seedSec [seedSecurity2 ]
urity3 ][seedSec [seedSecurity3 ]
urity4 ] [seedSecurity4 ]
... with C# inserting a byte of padding which causes later values to be one byte off.
You can try using
[StructLayout(LayoutKind.Sequential, Pack = 1)]
before your struct definition, which should use the minimum amount of space possible.
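For example, the struct from the question with the packing applied might look like this; note that I've also changed seedsecurity's ArraySubType to U4 to match its UInt32 elements, which is an extra assumption beyond the original code:
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct tPacket_5000_E
{
    public Int16 size;
    public Int16 opcode;
    public byte securityCount;
    public byte securityCRC;
    public byte flag;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 8, ArraySubType = UnmanagedType.I1)]
    public byte[] blowfish;
    public UInt32 seedCount;
    public UInt32 seedCRC;
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5, ArraySubType = UnmanagedType.U4)]
    public UInt32[] seedsecurity;
}

// Sanity check: with Pack = 1 the marshaled size should match the
// 43-byte packet (2 + 2 + 1 + 1 + 1 + 8 + 4 + 4 + 20 = 43):
// Console.WriteLine(Marshal.SizeOf(typeof(tPacket_5000_E)));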
Mastering Structs in C# has some good information on how/why this happens.
I suspect that Daniel L is on the right track in his answer.
I would try adding a fourth byte after the flag. My guess is that your C++ compiler is aligning the values on word boundaries; that would "align" the C# version as well.
Related
I am coding the part of the game that generates the AOB (array-of-bytes) pattern, for use in a game where modding is allowed (to be precise, third-party programs are allowed).
When I read these bytes in my program,
{ 0xBA, 0x79, 0x03, 0x00, 0x00, 0x48, 0x8D, 0x4C, 0x24, 0x28, ... }
a byte array like this is created, and I need to divide it like this:
{
{0xBA}, {0x79, 0x03, 0x00, 0x00}
},
{
{0x48}, {0x8D}, {0x4C}, {0x24}, {0x28}
},
{
{0xE8}, {0x9F, 0x40, 0x51, 0x00}
}, ...
I succeeded in using SharpDisasm (https://github.com/justinstenning/SharpDisasm) to divide it into the following:
var asm = new SharpDisasm.Disassembler(br.ReadBytes(2048), mode, 0, true);
var disasm = asm.Disassemble();
foreach (var inst in disasm)
{
// Printing inst.Bytes produces the output shown below.
Console.WriteLine(string.Join(", ", inst.Bytes.Select(b => "0x" + b.ToString("X2"))));
}
{0xBA, 0x79, 0x03, 0x00, 0x00}
{0x48, 0x8D, 0x4C, 0x24, 0x28}
{0xE8, 0x9F, 0x40, 0x51, 0x00}
...
The array must be divided into opcode and operand bytes (I don't know if that terminology is correct) to make it look like this:
{0xBA, null, null, null, null, 0x48, 0x8D, 0x4C, 0x24, null, 0xE8, null, null, null, null}
However, SharpDisasm doesn't seem to support anything similar to that.
Is there another library, or a good way to handle it?
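For what it's worth, the wildcarding step itself is simple once you know, for each instruction, how many of its leading bytes are opcode rather than operand. A rough sketch, where OpcodeLength is a hypothetical input (SharpDisasm does not appear to expose it directly, so it would have to come from somewhere else, e.g. an opcode-table lookup):
using System.Collections.Generic;

// Keep the opcode bytes of each instruction and wildcard (null)
// the operand bytes that follow them.
static List<byte?> BuildPattern(IEnumerable<(byte[] Bytes, int OpcodeLength)> instructions)
{
    var pattern = new List<byte?>();
    foreach (var (bytes, opcodeLength) in instructions)
    {
        for (int i = 0; i < bytes.Length; i++)
            pattern.Add(i < opcodeLength ? bytes[i] : (byte?)null);
    }
    return pattern;
}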
I want to share code from a C# project inside a VB.Net project.
I want to reference a public class and its variables from VB,
so I've put both the VB and C# projects in the same solution.
Here is the declaration of the C# class inside the C# project:
public class MyUtils
{
public static byte[] zeroArray = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
....
When I want to reference it inside VB, I get the error below:
'MyUtils' is not accessible in this context because it is 'Friend'.
I have changed the accessibility of every object to public in C#, but I don't know how to allow access to the C# class. I should add that I am not very familiar with VB and its inheritance mechanisms.
I created a C# console app named "ConsoleApp2" using .NET Framework 4.8 and added a class named "MyUtils":
namespace ConsoleApp2
{
public class MyUtils
{
public static byte[] zeroArray = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
}
}
I built the project to make sure that worked. (A class declared without an access modifier defaults to internal, which VB reports as Friend, so the explicit public modifier is what makes the class visible to the VB project.)
Then I added a VB.NET console app project named "ConsoleApp1" to the same solution, added a reference to the ConsoleApp2 project, and used this code:
Module Module1
Sub Main()
Dim bb = ConsoleApp2.MyUtils.zeroArray
Console.WriteLine(String.Join(" ", bb.Select(Function(b) b.ToString("X2"))))
Console.ReadLine()
End Sub
End Module
and ran it to get the output:
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
I've been searching for a function to set a registry key with C#.
Obviously there is the method Registry.SetValue(KEY, valueName, value, registryValueKind)
[... where valueName is the name of the value being edited, KEY is the full key name, and registryValueKind is the type of the data to be written.]
If you have a closer look at the RegistryValueKind enum, there are 8 types:
DWord, String, ExpandString, Binary, MultiString, QWord, Unknown and
None.
In an MSDN article the different data types are described:
REG_SZ, REG_MULTI_SZ, REG_DWORD, REG_QWORD, REG_BINARY, REG_EXPAND_SZ, REG_HEX.
So I wonder how to store a hex(7) value [therefore a REG_HEX value] with the help of Registry.SetValue().
Further, I wonder how to save a value like hex(7):56,00,45,00,4e,00,30,00,00,00,4c,00,4f,00,4f,00,50,00,42,\
00,41,00,43,00,4b,00,00,00,00,00, which is, in addition to being of type hex(7), separated by a "\".
Thanks in advance!
There is no such thing as a "hexadecimal value"; hexadecimal is just a textual representation of a binary value. (The trailing "\" in your example is simply the line-continuation character used in .reg files, not part of the data.)
What you want is:
Registry.SetValue(
"HKEY_CURRENT_USER\\MyKeyName",
"MyValue",
new byte[] { 0x56, 0x00, 0x45, 0x00, 0x4e, 0x00, 0x30, 0x00, 0x00, 0x00, 0x4c, 0x00, 0x4f, 0x00, 0x4f, 0x00, 0x50, 0x00, 0x42, 0x00, 0x41, 0x00, 0x43, 0x00, 0x4b, 0x00, 0x00, 0x00, 0x00, 0x00 },
RegistryValueKind.Binary);
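One caveat: in .reg file syntax, hex(7) actually denotes REG_MULTI_SZ, and the bytes in the question decode to the UTF-16 strings "VEN0" and "LOOPBACK". So if the goal is to reproduce the hex(7) value exactly as the registry sees it, a sketch using RegistryValueKind.MultiString instead (key and value names are placeholders):
using Microsoft.Win32;

Registry.SetValue(
    "HKEY_CURRENT_USER\\MyKeyName",
    "MyValue",
    new string[] { "VEN0", "LOOPBACK" },
    RegistryValueKind.MultiString);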
I have a hex value of 0x1047F71 and I want to put it in a byte array of 4 bytes. Is this the right way to do it:
byte[] sync_welcome_sent = new byte[4] { 0x10, 0x47, 0xF7, 0x01 };
or
byte[] sync_welcome_sent = new byte[4] { 0x01, 0x04, 0x7F, 0x71 };
I would appreciate any help.
If you want to be compatible with Intel little-endian, the answer is "None of the above", because the answer would be "71h, 7fh, 04h, 01h".
For big-endian, the second answer above is correct: "01h, 04h, 7fh, 71h".
You can get the bytes with the following code:
uint test = 0x1047F71;
var bytes = BitConverter.GetBytes(test);
If you want big-endian, you can just reverse the bytes using LINQ, like so:
var bytes = BitConverter.GetBytes(test).Reverse().ToArray();
However, if you are running the code on a big-endian system, reversing the bytes will not be necessary, since BitConverter.GetBytes() will return them as big-endian on a big-endian system.
Therefore you should write the code as follows:
uint test = 0x1047F71;
var bytes = BitConverter.GetBytes(test);
if (BitConverter.IsLittleEndian)
bytes = bytes.Reverse().ToArray();
// now bytes[] are big-endian no matter what system the code is running on.
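As an aside, on runtimes that include System.Buffers.Binary (.NET Core 2.1+ / .NET Standard 2.1), BinaryPrimitives can write big-endian bytes directly, regardless of host endianness; a minimal sketch:
using System.Buffers.Binary;

uint test = 0x1047F71;
byte[] bytes = new byte[4];
BinaryPrimitives.WriteUInt32BigEndian(bytes, test);
// bytes is now { 0x01, 0x04, 0x7F, 0x71 } on any platform.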
My goal is to get a 64-bit value, hence a byte array of size 8. However, my problem is that I want to set the first 20 bits myself and then have the rest be 0s. Can this be done with the shorthand byte-array initialisation?
E.g. if I wanted all 0s I would say:
byte[] test = new byte[] {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
What I've thought about/tried:
So each hexadecimal digit corresponds to 4 binary digits. Hence, if I want to specify the first 20 bits, I specify the first 5 hexadecimal digits? But I'm not sure how to do this:
byte[] test = new byte[] {0xAF, 0x17, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00};
That would mean that I've specified the first 24 bits, right? Not 20.
I could use a BitArray and do it that way but I'm just wondering whether it can be done in the above way.
How about:
byte byte1 = 0xFF;
byte byte2 = 0xFF;
byte byte3 = 0xFF;
// 8bits 8bits 4bits : total = 20 bits
// 11111111 11111111 11110000
byte[] test = new byte[] { byte1, byte2, (byte)(byte3 & 0xF0), 0x00, 0x00, 0x00, 0x00, 0x00 };
You can write your bytes backward and use BitConverter.GetBytes(long); note the L suffix, which makes the literal a long so that you get all 8 bytes:
var bytes = BitConverter.GetBytes(0x117AFL);
// on a little-endian system: AF 17 01 00 00 00 00 00
Since each hex digit corresponds to a single four-bit nibble, you can initialize data in "increments" of four bits. However, data written in reverse will almost certainly be less clear to human readers of your code.
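If you do want the constant in its natural reading order, here is a small sketch using shifts (bits20 is a hypothetical 20-bit value matching the 0xAF, 0x17, 0x10 example from the question):
uint bits20 = 0xAF171;                 // hypothetical 20-bit value
byte[] test = new byte[8];
test[0] = (byte)(bits20 >> 12);        // top 8 bits             -> 0xAF
test[1] = (byte)(bits20 >> 4);         // middle 8 bits          -> 0x17
test[2] = (byte)((bits20 & 0xF) << 4); // last 4 bits, high nibble -> 0x10
// test[3] through test[7] remain 0x00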