I have an application that shows a WebBrowser component, which contains a Flash application that creates an XMLSocket connection to a server.
I'm now trying to hook recv (using a LocalHook) for logging purposes, but when I try to read the socket content I get only strange characters; if I set the hook with SpyStudio, however, I get readable strings.
Here is the code I use:
I set the hook with
CreateRecvHook = LocalHook.Create(
    LocalHook.GetProcAddress("ws2_32.dll", "recv"),
    new Drecv(recv_Hooked),
    this);
CreateRecvHook.ThreadACL.SetExclusiveACL(new Int32[] { 0 });
I set up everything I need with
[DllImport("ws2_32.dll")]
static extern int recv(
    IntPtr socketHandle,
    IntPtr buf,
    int count,
    int socketFlags
);

[UnmanagedFunctionPointer(CallingConvention.StdCall,
    CharSet = CharSet.Unicode,
    SetLastError = true)]
delegate int Drecv(
    IntPtr socketHandle,
    IntPtr buf,
    int count,
    int socketFlags
);
static int recv_Hooked(
    IntPtr socketHandle,
    IntPtr buf,
    int count,
    int socketFlags)
{
    byte[] test = new byte[count];
    Marshal.Copy(buf, test, 0, count);

    IntPtr ptr = IntPtr.Zero;
    ptr = Marshal.AllocHGlobal(count);
    Marshal.Copy(test, 0, ptr, count);

    string s = System.Text.UnicodeEncoding.Unicode.GetString(test);
    Debug.WriteLine(s);

    System.IO.StreamWriter file = new System.IO.StreamWriter("log.txt");
    file.WriteLine(s);
    file.Close();

    return recv(socketHandle, buf, count, socketFlags);
}
I've already tried different encodings without success. As a side note, the WebBrowser control doesn't seem to have any problem.
You're saving the content of the uninitialized buffer; no wonder it's garbage.
There is nothing in that buffer until after recv (the real one) fills it in. You also can't know how many bytes are actually valid except by inspecting the return value of the real recv.
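A minimal sketch of a corrected hook, assuming the same recv import and Drecv delegate as in the question: call the real recv first, then copy only the bytes it actually returned. XMLSocket traffic is plain XML text, typically ASCII/UTF-8 rather than UTF-16, so UTF-8 decoding is used here as an assumption.
static int recv_Hooked(
    IntPtr socketHandle,
    IntPtr buf,
    int count,
    int socketFlags)
{
    // Let the real recv fill the buffer first.
    int bytesReceived = recv(socketHandle, buf, count, socketFlags);

    // Only bytesReceived bytes are valid; 0 means the connection closed,
    // SOCKET_ERROR (-1) means failure.
    if (bytesReceived > 0)
    {
        byte[] data = new byte[bytesReceived];
        Marshal.Copy(buf, data, 0, bytesReceived);

        // Assumption: the Flash XMLSocket payload is UTF-8/ASCII text.
        string s = System.Text.Encoding.UTF8.GetString(data);
        Debug.WriteLine(s);
    }

    return bytesReceived;
}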
I'm struggling a bit on this part...
I want to do in my C# app what I can already do in Cheat Engine (that is, read the value 20).
However, my code is not working...
[DllImport("kernel32.dll")]
public static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer, int dwSize, ref IntPtr lpNumberOfBytesRead);
public int ReadInt32(IntPtr address, int[] pointers)
{
    /* FOR REFERENCE ONLY! PSEUDO-CODE
    ReadProcessMemory(..., ModuleBaseAddress + 0x010F418, Temporary, ..., ...); // -> 0x02A917F8
    ReadProcessMemory(..., 0x02A917F8+0x48, Temporary, .....,.); // -> 0x02A9A488
    [02A9A488] = 20
    */
    IntPtr bytesRead = IntPtr.Zero;
    byte[] _buff = new byte[sizeof(int)];
    int offIndex = 0;
    IntPtr finalval = address;

    Console.WriteLine("[BASE] {0:x}", (int)address);

    foreach (int PointerOffs in pointers)
    {
        ReadProcessMemory(hProcess, address, _buff, _buff.Length, ref bytesRead);
        finalval += pointers[offIndex];
        Console.WriteLine("[Curr ADDRESS] {0:x}", finalval);
        offIndex++;
    }

    return BitConverter.ToInt32(_buff, 0);
}
And this is how I call the method:
int currAmmo = (int) pReader.ReadInt32((IntPtr)LocalPlayer.BaseAddress, LocalPlayer.oMGAmmo);
Console.Write("[AMMO] {0}\n", currAmmo);
Your function has enough problems to warrant a replacement; I tried to fix it, but it was easier to just start fresh. The important part is to dereference the current pointer first and only then add the offset, which is how a pointer chain is meant to be walked.
public static int ReadInt32(IntPtr hProc, IntPtr ptr, int[] offsets)
{
    IntPtr addr = ptr;
    var buffer = new byte[4];

    for (int i = 0; i < offsets.Length; ++i)
    {
        // Dereference the current address, then add the next offset.
        ReadProcessMemory(hProc, addr, buffer, buffer.Length, out var read1);
        addr = IntPtr.Add(new IntPtr(BitConverter.ToInt32(buffer, 0)), offsets[i]);
    }

    // addr now points at the value itself; read the final 4 bytes.
    ReadProcessMemory(hProc, addr, buffer, 4, out var read);
    return BitConverter.ToInt32(buffer, 0);
}
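Note that the out var arguments only compile against a ReadProcessMemory declaration whose last parameter is out rather than ref (the question's version uses ref IntPtr); a matching declaration would be:
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool ReadProcessMemory(
    IntPtr hProcess,
    IntPtr lpBaseAddress,
    byte[] lpBuffer,
    int dwSize,
    out IntPtr lpNumberOfBytesRead);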
I learned C# today just to answer this question :)
How does one read memory using a process module's base address and offsets? I have grabbed the desired module's base address with the following:
Process process = Process.GetProcessesByName("process")[0];
ProcessModule bClient;
ProcessModuleCollection bModules = process.Modules;
IntPtr processHandle = OpenProcess(0x10, false, process.Id);
int firstOffset = 0xA4C58C;
int anotherOffset = 0xFC;

for (int i = 0; i < bModules.Count; i++)
{
    bClient = bModules[i];
    if (bClient.ModuleName == "module.dll")
    {
        IntPtr baseAddress = bClient.BaseAddress;
        Console.WriteLine("Base address: " + baseAddress);
    }
}
After that I added the first offset to the base address:
IntPtr firstPointer = IntPtr.Add(baseAddress, (int)firstOffset);
This gives me a pointer: 440911244 in this case.
I can use this pointer in Cheat Engine, for instance, to browse its memory region and find the value that anotherPointer points to, but I can't figure out the proper way to add the offset to firstPointer.
My question is, do I have to use ReadProcessMemory just before adding the final anotherOffset to the pointer? If so, what is the proper way of using it in this case?
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool ReadProcessMemory(
    IntPtr hProcess,
    IntPtr lpBaseAddress,
    IntPtr lpBuffer,
    int dwSize,
    out IntPtr lpNumberOfBytesRead);
Change the ReadProcessMemory lpBuffer parameter to:
byte[] lpBuffer,
then
byte[] buffer = new byte[sizeof(float)];
IntPtr bytesRead = IntPtr.Zero;

IntPtr readAddress = IntPtr.Add(baseAddress, firstOffset);
readAddress = IntPtr.Add(readAddress, anotherOffset);

ReadProcessMemory(processHandle, readAddress, buffer, buffer.Length, out bytesRead);
float value = BitConverter.ToSingle(buffer, 0);
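If module.dll+0xA4C58C holds a pointer rather than the value itself (the usual Cheat Engine pointer-chain case), you would dereference it first and only then add anotherOffset. A sketch, assuming a 32-bit target process and the byte[] overload above:
// Read the 4-byte pointer stored at baseAddress + firstOffset.
byte[] ptrBuffer = new byte[sizeof(int)];
IntPtr bytesRead = IntPtr.Zero;
ReadProcessMemory(processHandle, IntPtr.Add(baseAddress, firstOffset), ptrBuffer, ptrBuffer.Length, out bytesRead);
IntPtr intermediate = new IntPtr(BitConverter.ToInt32(ptrBuffer, 0));

// Add the second offset to the dereferenced pointer and read the float there.
byte[] valueBuffer = new byte[sizeof(float)];
ReadProcessMemory(processHandle, IntPtr.Add(intermediate, anotherOffset), valueBuffer, valueBuffer.Length, out bytesRead);
float value = BitConverter.ToSingle(valueBuffer, 0);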
I am using FFMPEG in C# and have the following function prototype:
public static extern AVIOContext* avio_alloc_context(byte* buffer, int buffer_size, int write_flag, void* opaque, IntPtr read_packet, IntPtr write_packet, IntPtr seek);
In C/C++ this function is declared as follows:
avio_alloc_context(unsigned char *buffer,
                   int buffer_size,
                   int write_flag,
                   void *opaque,
                   int (*read_packet)(void *opaque, uint8_t *buf, int buf_size),
                   int (*write_packet)(void *opaque, uint8_t *buf, int buf_size),
                   int64_t (*seek)(void *opaque, int64_t offset, int whence))
In C/C++ I can do the following to call this function:
int readFunction(void* opaque, uint8_t* buf, int buf_size)
{
    // Do something here
    int numBytes = CalcBytes();
    return numBytes;
}

int64_t seekFunction(void* opaque, int64_t offset, int whence)
{
    // Do seeking here
    return pos;
}

AVIOContext* avioContext = avio_alloc_context(ioBuffer, ioBufferSize, 0, (void*)(&fileStream), &readFunction, NULL, &seekFunction);
Where the readFunction and seekFunction are callback functions that are used in reading/seeking etc.
I am unsure how to copy this behaviour in the C# version of the code when it expects an IntPtr. How can I create the callback functions and pass them in the C# version?
Turns out you can do this; however, it is not entirely intuitive.
First you need to create a delegate with the UnmanagedFunctionPointer attribute and ensure the parameters can be passed back from the callee to the caller after being modified, using [In, Out]:
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
public delegate int av_read_function_callback(IntPtr opaque, [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 2), In, Out] byte[] endData, int bufSize);
In the function we can then marshal this delegate as follows:
// Keep the delegate in a field so the GC cannot collect it while native code
// still holds the function pointer returned by GetFunctionPointerForDelegate.
private av_read_function_callback mReadCallbackFunc;

mReadCallbackFunc = new av_read_function_callback(ReadPacket);
mAvioContext = FFmpegInvoke.avio_alloc_context(mReadBuffer, mBufferSize, 0, null, Marshal.GetFunctionPointerForDelegate(mReadCallbackFunc), IntPtr.Zero, IntPtr.Zero);
where ReadPacket looks something like
public int ReadPacket(IntPtr opaque, byte[] endData, int bufSize)
{
    // Do stuff here
}
This results in the same behaviour as a function pointer in C++.
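For completeness, a sketch of what the body might look like when the data comes from a managed stream. mFileStream is an assumed field here; FFmpeg expects the number of bytes placed in the buffer as the return value.
public int ReadPacket(IntPtr opaque, byte[] endData, int bufSize)
{
    // Fill the buffer FFmpeg handed us with up to bufSize bytes from the managed stream.
    int bytesRead = mFileStream.Read(endData, 0, bufSize);

    // 0 signals end of stream (newer FFmpeg versions prefer AVERROR_EOF here).
    return bytesRead;
}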
I am working with the Lotus Notes API, and in the process I came to a function call like this:
bytesRead = fread (Buffer, 1, (WORD) Length, hCDFile);
Now I have found a C# equivalent method (below), which runs inside a while loop. On the first iteration the method seems to work fine (the results are the same when I debug the C version of the code and the C# version). But on the second iteration, say the value of dwLengthHost is 35;
before this method I called another method,
NSFDUMPReadFromFile(hCDFile, ref RecordTypeCanonicalPtr, sizeof(ushort)), which calls the fread function and yields RecordTypeCanonicalPtr = 149. But when the same method is called again later, the RecordTypeCanonicalPtr and dwLengthHost values change by themselves.
[DllImport("msvcrt.dll")]
public static extern UInt32 fread(ref IntPtr Buffer, uint Size, uint Count, IntPtr Stream);

private bool NSFDUMPReadFromFile(IntPtr hCDFile,
                                 ref IntPtr Buffer,
                                 UInt32 Length)
{
    UInt32 bytesRead = NotesApi.fread(ref Buffer, 1, (uint)Length, hCDFile);
    /* Read bytes from the file */
    if (bytesRead == Length)
        return true;
    else
        return false;
}
Looks like you need to use FileStream.
You can create one by using File.Open.
Exactly the same behavior as:
bytesRead = fread (Buffer, 1, (WORD) Length, hCDFile);
is provided by the following C# code:
bytesRead = file.Read(Buffer, 0, Length)
A full example might look like this:
using (var file = File.Open("test.bin", FileMode.Open))
{
    var length = 256;
    var buffer = new byte[length];
    var bytesRead = file.Read(buffer, 0, length);
}
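Applied to the helper from the question, a hedged sketch (assuming the hCDFile handle can be replaced by a FileStream opened on the same file, and that the caller passes a byte[] instead of an IntPtr) could look like this:
// cdFile is a FileStream opened with File.Open on the file the C code reads.
private bool NSFDUMPReadFromFile(FileStream cdFile, byte[] buffer, int length)
{
    // FileStream.Read returns the number of bytes actually read.
    int bytesRead = cdFile.Read(buffer, 0, length);
    return bytesRead == length;
}

// Example: read a 2-byte canonical record type.
// byte[] recordTypeBytes = new byte[sizeof(ushort)];
// NSFDUMPReadFromFile(cdFile, recordTypeBytes, recordTypeBytes.Length);
// ushort recordTypeCanonical = BitConverter.ToUInt16(recordTypeBytes, 0);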
I am trying to PInvoke this function (GetPackageId) from kernel32.dll:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh446607(v=vs.85).aspx
I defined the structs and imports as follows:
[StructLayout(LayoutKind.Sequential)]
public struct PACKAGE_ID
{
    uint reserved;
    uint processorArchitecture;
    PACKAGE_VERSION version;
    String name;
    String publisher;
    String resourceId;
    String publisherId;
}
[StructLayout(LayoutKind.Explicit)]
public struct PACKAGE_VERSION
{
    [FieldOffset(0)] public UInt64 Version;
    [FieldOffset(0)] public ushort Revision;
    [FieldOffset(2)] public ushort Build;
    [FieldOffset(4)] public ushort Minor;
    [FieldOffset(6)] public ushort Major;
}
[DllImport("kernel32.dll", EntryPoint = "GetPackageId", SetLastError = true)]
static extern int GetPackageId(IntPtr hProcess, out uint bufferLength, out PACKAGE_ID pBuffer);
And calling it like this:
PACKAGE_ID buffer = new PACKAGE_ID();
result = GetPackageId(hProcess, out bufferLength, out buffer);
However I get a return value of 122 (ERROR_INSUFFICIENT_BUFFER). I am rather new to PInvoke and am not quite sure how to proceed from here. Do I need to initialize the strings before calling the function?
You are going to need to change the p/invoke:
[DllImport("kernel32.dll", SetLastError = true)]
static extern int GetPackageId(
    IntPtr hProcess,
    ref int bufferLength,
    IntPtr pBuffer
);
You call it once passing 0 for the length:
int len = 0;
int retval = GetPackageId(hProcess, ref len, IntPtr.Zero);
Then you need to check that retval equals ERROR_INSUFFICIENT_BUFFER. If it does not then you have an error.
if (retval != ERROR_INSUFFICIENT_BUFFER)
throw new Win32Exception();
Otherwise you can continue.
IntPtr buffer = Marshal.AllocHGlobal(len);
retval = GetPackageId(hProcess, ref len, buffer);
Now you can check retval against ERROR_SUCCESS.
if (retval != ERROR_SUCCESS)
throw new Win32Exception();
And finally we can convert the buffer to a PACKAGE_ID.
PACKAGE_ID packageID = (PACKAGE_ID)Marshal.PtrToStructure(buffer,
typeof(PACKAGE_ID));
Put it all together and it looks like this:
int len = 0;
int retval = GetPackageId(hProcess, ref len, IntPtr.Zero);
if (retval != ERROR_INSUFFICIENT_BUFFER)
    throw new Win32Exception();

IntPtr buffer = Marshal.AllocHGlobal(len);
try
{
    retval = GetPackageId(hProcess, ref len, buffer);
    if (retval != ERROR_SUCCESS)
        throw new Win32Exception();

    PACKAGE_ID packageID = (PACKAGE_ID)Marshal.PtrToStructure(buffer,
        typeof(PACKAGE_ID));
}
finally
{
    Marshal.FreeHGlobal(buffer);
}
From the comments it appears that we also need to make changes to the way the PACKAGE_ID struct is marshalled.
I suggest the following:
[StructLayout(LayoutKind.Sequential)]
public struct PACKAGE_ID
{
    uint reserved;
    uint processorArchitecture;
    PACKAGE_VERSION version;
    IntPtr name;
    IntPtr publisher;
    IntPtr resourceId;
    IntPtr publisherId;
}
followed by calls to Marshal.PtrToStringUni to convert the IntPtr string fields into C# strings. Naturally this conversion needs to happen before the call to FreeHGlobal.
My guess is that the API actually allocates the string buffers in the space beyond the end of PACKAGE_ID, which is why you have to ask how much memory to allocate. I don't have Windows 8 at hand to test this hypothesis.
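For example, after the successful second call to GetPackageId, the string fields could be pulled out like this (a sketch, assuming the IntPtr-based struct above with its fields made accessible, and done before FreeHGlobal):
PACKAGE_ID packageID = (PACKAGE_ID)Marshal.PtrToStructure(buffer, typeof(PACKAGE_ID));

// The IntPtr fields point at null-terminated UTF-16 strings inside the buffer,
// so convert them before Marshal.FreeHGlobal(buffer) runs.
string name = Marshal.PtrToStringUni(packageID.name);
string publisher = Marshal.PtrToStringUni(packageID.publisher);
string resourceId = Marshal.PtrToStringUni(packageID.resourceId);
string publisherId = Marshal.PtrToStringUni(packageID.publisherId);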
From the docs for GetPackageId it seems you should pass the size of the buffer as an argument when calling, i.e. bufferLength should be initialized with the size of the buffer you pass in.
On return, bufferLength will tell you the size of the returned buffer.
Or did I misread the docs?