I am getting the following error when unlocking a file:
Arithmetic operation resulted in an overflow
System.IntPtr.ToInt32
I suspect it is the following line, specifically the pBuffer.ToInt32() call:
IntPtr iPtr = new IntPtr(pBuffer.ToInt32() + (i * Marshal.SizeOf(fi3)));
I am unable to reproduce the error myself and the error is not displaying the correct line number. I am looking for a way to reproduce this or any feedback on the possible cause. Thanks
public void Close()
{
    const int MAX_PREFERRED_LENGTH = -1;
    int readEntries;
    int totalEntries;
    IntPtr pBuffer = IntPtr.Zero;
    FILE_INFO_3 fi3 = new FILE_INFO_3();

    int iStatus = NetFileEnum(this.HostName, this.HostPathToShare + this.FileNameFromShare, null, 3,
        ref pBuffer, MAX_PREFERRED_LENGTH, out readEntries, out totalEntries, pBuffer);

    if (iStatus == 0)
    {
        for (int i = 0; i < readEntries; i++)
        {
            IntPtr iPtr = new IntPtr(pBuffer.ToInt32() + (i * Marshal.SizeOf(fi3)));
            fi3 = (FILE_INFO_3)Marshal.PtrToStructure(iPtr, typeof(FILE_INFO_3));
            NetFileClose(this.HostName, fi3.fi3_id);
        }
    }
    NetApiBufferFree(pBuffer);
}

[DllImport("netapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern int NetFileClose(
    string servername,
    int id);

[DllImport("Netapi32.dll", SetLastError = true)]
static extern int NetApiBufferFree(IntPtr Buffer);

[DllImport("netapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
static extern int NetFileEnum(
    string servername,
    string basepath,
    string username,
    int level,
    ref IntPtr bufptr,
    int prefmaxlen,
    out int entriesread,
    out int totalentries,
    IntPtr resume_handle);
Update
I added the Win32 API declarations.
The answers below look correct and the machine is 64-bit, but I am unable to reproduce the error on the dev server even though the dev environment is also 64-bit. Any ideas on how to reproduce it?
The error is caused by your code running in a 64-bit process and getting back a pointer address that lies outside the range addressable with 32 bits, so .ToInt32() throws.
Check Environment.Is64BitProcess to detect whether your process is running as 32-bit or 64-bit, and convert the address accordingly:
long pointerAddress;
if (Environment.Is64BitProcess)
{
pointerAddress = pBuffer.ToInt64();
}
else
{
pointerAddress = pBuffer.ToInt32();
}
var fileInfoPointer = new IntPtr(pointerAddress + (i * Marshal.SizeOf(fi3)));
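On .NET 4 or later you can also sidestep the branch entirely: IntPtr.Add performs the offset arithmetic safely in both 32-bit and 64-bit processes (just a sketch of the same idea):

// Sketch: no ToInt32()/ToInt64() conversion needed at all.
IntPtr fileInfoPointer = IntPtr.Add(pBuffer, i * Marshal.SizeOf(fi3));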
I see two errors in the code right off:
1. You set pBuffer to IntPtr.Zero and never actually allocate it. It should fail when you pass it to NetFileEnum, although that is a Win32 API function, so it may not notice.
2. You convert pBuffer with .ToInt32(), which should work when compiled specifically for x86, but if your target platform is Any CPU or x64, this is going to be a problem as well.
Related
I've been trying to create my own memory reader in C# based on a couple of articles I've seen on CodeProject. I got everything working as I would have liked when compiled for 32-bit Windows, but the issues began when I tried to convert over to a 64-bit build.
I was able to get VirtualQueryEx to work in a 64-bit C++ test project, where the MEMORY_BASIC_INFORMATION is already defined, and the BaseAddress member is a PVOID.
When I tried to move over to C#, I have to define MEMORY_BASIC_INFORMATION myself as indicated in the CodeProject example above that I am using as guidance.
The below code works for applications that have small memory profiles, but for larger applications, the MI.BaseAddress variable below seems to truncate to 2,147,356,672, and the program is stuck in an infinite loop, where currentAddress is always equal to MI.BaseAddress + MI.RegionSize (before casting to ulong).
Any guidance would be greatly appreciated. Thank you!
public struct MEMORY_BASIC_INFORMATION
{
    public ulong BaseAddress;
    public ulong AllocationBase;
    public int AllocationProtect;
    public ulong RegionSize;
    public int State;
    public ulong Protect;
    public ulong Type;
}
-
[DllImport("kernel32.dll", SetLastError=true)]
static extern int VirtualQueryEx(IntPtr hProcess, IntPtr lpAddress, out MEMORY_BASIC_INFORMATION lpBuffer, uint dwLength);
-
public void getBlocks(int pid)
{
    totalBytes = 0;

    if (addresses != null)
        addresses = null;
    addresses = new List<UIntPtr>();

    if (sizes != null)
        sizes = null;
    sizes = new List<UInt64>();

    setPID(pid);
    getHandle();

    ulong currentAddress;
    ulong maxAddress;

    SYSTEM_INFO SI;
    GetSystemInfo(out SI);
    maxAddress = (ulong)SI.maximumApplicationAddress;
    currentAddress = (ulong)SI.minimumApplicationAddress;

    MEMORY_BASIC_INFORMATION MI;
    while (currentAddress < maxAddress)
    {
        VirtualQueryEx(hProc, (IntPtr)currentAddress, out MI, (uint)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION)));
        if (MI.State == MEM_COMMIT)
        {
            totalBytes += (ulong)MI.RegionSize;
        }
        currentAddress = (ulong)MI.BaseAddress + (ulong)MI.RegionSize;
    }
}
I have made a method that reads a multi-level pointer chain given the desired offsets plus the start address (code below). To summarize, I'm trying to streamline this method, and below I explain my problem.
I have been struggling with the conversion. The parameter is an IntPtr and the output of a read address is a byte array. My first idea was: "Convert the byte array to an IntPtr, reprocess it, and finally convert the last read address into an Int32 (since the last address is not a pointer it will never be dereferenced, so converting to Int32 there should be all right)."
However, that did not give a nice result, so currently I'm stuck with converting the byte array to an Int32 and then the Int32 to an IntPtr. People say BitConverter is a bad approach because it can cause issues on 64-bit platforms, and I also believe there is an approach with better performance (since I'm converting the value twice).
Finally, if anyone thinks it would be possible to write a similar function in C++ and then P/Invoke it from C# (I guess it would be more efficient that way?), please tell me. (I'm trying to broaden my programming knowledge and find combining languages very interesting.)
[DllImport("kernel32.dll", EntryPoint = "ReadProcessMemory")]
public static extern Int32 ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress,
[In, Out] byte[] buffer, Int32 sizeout, out IntPtr lpNumberOfBytesRead);
public Int32 ReadBytes(IntPtr Handle, IntPtr Address, int[] Offsets, int BytesToRead = 4)
{
    IntPtr ptrBytesRead;
    byte[] value = new byte[BytesToRead];

    ReadProcessMemory(Handle, Address, value, BytesToRead, out ptrBytesRead);

    // Read offsets
    for (int i = 0; i < Offsets.Length; i++)
    {
        ReadProcessMemory(Handle,
            new IntPtr(BitConverter.ToInt32(value, 0) + Offsets[i]),
            value,
            BytesToRead,
            out ptrBytesRead);
    }
    return BitConverter.ToInt32(value, 0);
}
Any ideas to streamline this method would be very much appreciated! Thanks in advance!
As #usr stated, the performance of the code will be entirely dominated by the calls to ReadProcessMemory. You should not expect to improve the performance from its current level.
However, you can make the code much easier to read by avoiding byte arrays and BitConverter. Like this:
[DllImport("kernel32.dll", SetLastError = true)]
public static extern int ReadProcessMemory(
IntPtr hProcess,
IntPtr lpBaseAddress,
out IntPtr lpBuffer,
IntPtr nSize,
out IntPtr lpNumberOfBytesRead
);
private static IntPtr ReadProcessPointer(IntPtr hProcess, IntPtr Address)
{
IntPtr result;
IntPtr NumberOfBytesRead;
if (ReadProcessMemory(hProcess, Address, out result, (IntPtr)IntPtr.Size, out NumberOfBytesRead) == 0)
throw new Win32Exception();
return result;
}
public static IntPtr FollowPointers(IntPtr hProcess, IntPtr Address, int[] Offsets)
{
IntPtr ptr = ReadProcessPointer(hProcess, Address);
for (int i = 0; i < Offsets.Length; i++)
ptr = ReadProcessPointer(hProcess, ptr + Offsets[i]);
return ptr;
}
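For example, usage could look roughly like this (a sketch; processHandle, baseAddress and the offsets are placeholders for values you would already have):

// Follow the pointer chain; one more sized ReadProcessMemory call at the
// resulting address then retrieves the actual value.
IntPtr finalAddress = FollowPointers(processHandle, baseAddress, new[] { 0x10, 0x24 });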
I posted this question a few days ago, and I have some follow-up doubts about marshaling an IntPtr to a struct.
The thing goes like this:
As stated in the question I am referencing, I make calls to asynchronous methods in a native DLL. These methods communicate their completion with Windows messages. I now receive the Windows message correctly and, within it, an lParam property (of type IntPtr).
According to the documentation I am following, this lParam points to the struct that has the results of the execution of the method. As a particular example, one of the structures I am trying to fill is defined as follows:
Original C signature:
typedef struct _wfs_result {
    ULONG RequestID;
    USHORT hService;
    TIMESTAMP tsTimestamp;   /* Win32 SYSTEMTIME structure according to the documentation */
    LONG hResult;
    union {
        DWORD dwCommandCode;
        DWORD dwEventID;
    } u;
    LPVOID lpBuffer;
} WFSRESULT, *LPWFSRESULT;
My C# definition:
[StructLayout(LayoutKind.Sequential), Serializable]
public struct Timestamp
{
    public ushort wYear;
    public ushort wMonth;
    public ushort wDayOfWeek;
    public ushort wDay;
    public ushort wHour;
    public ushort wMinute;
    public ushort wSecond;
    public ushort wMilliseconds;
}

[StructLayout(LayoutKind.Explicit), Serializable]
public struct WFSResult
{
    [FieldOffset(0), MarshalAs(UnmanagedType.U4)]
    public uint RequestID;

    [FieldOffset(4), MarshalAs(UnmanagedType.U2)]
    public ushort hService;

    [FieldOffset(6), MarshalAs(UnmanagedType.Struct, SizeConst = 16)]
    public Timestamp tsTimestamp;

    [FieldOffset(22), MarshalAs(UnmanagedType.U4)]
    public int hResult;

    [FieldOffset(26), MarshalAs(UnmanagedType.U4)]
    public UInt32 dwCommandCode;

    [FieldOffset(26), MarshalAs(UnmanagedType.U4)]
    public UInt32 dwEventID;

    [FieldOffset(30), MarshalAs(UnmanagedType.U4)]
    public Int32 lpBuffer;
}
Now the fun part: the native DLL I am calling belongs to an independent process, FWMAIN32.EXE, which is running on the same machine (single instance). I believe the window message that I receive, which is application-specific (above WM_USER), carries an lParam that is not really pointing to the struct I am expecting, and that the struct resides somewhere in the memory space of the FWMAIN32.EXE process.
Initially, I tried to just call Marshal.PtrToStructure (with little hope, actually) and the struct got filled with garbage data. I also tried GetLParam with the same outcome. Finally, I tried to go across process boundaries with the ReadProcessMemory API, as explained in these posts:
C# p/invoke, Reading data from an Owner Drawn List Box
http://www.codeproject.com/KB/trace/minememoryreader.aspx
I get the exception code 299 (ERROR_PARTIAL_COPY: Only part of a ReadProcessMemory or WriteProcessMemory request was completed.)
And additionally the byte[] I get from using ReadProcessMemory is: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
My code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using System.ComponentModel;
using System.Runtime.InteropServices;

namespace XFSInteropMidleware
{
    public class CrossBoundaryManager
    {
        [DllImport("kernel32")]
        static extern IntPtr OpenProcess(UInt32 dwDesiredAccess, Int32 bInheritHandle, UInt32 dwProcessId);

        [DllImport("kernel32")]
        static extern Int32 ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, [In, Out] byte[] lpBuffer, UInt32 dwSize, out IntPtr lpNumberOfBytesRead);

        [DllImport("kernel32")]
        static extern Int32 CloseHandle(IntPtr hObject);

        [DllImport("kernel32")]
        static extern int GetLastError();

        private const string nativeProcessName = "FWMAIN32";
        private IntPtr hProcess = IntPtr.Zero;

        const uint PROCESS_ALL_ACCESS = (uint)(0x000F0000L | 0x00100000L | 0xFFF);
        static int dwSize = 34; // The size of the struct I want to fill
        byte[] lpBuffer = new byte[dwSize];

        public void OpenProcess()
        {
            Process[] ProcessesByName = Process.GetProcessesByName(nativeProcessName);
            hProcess = CrossBoundaryManager.OpenProcess(CrossBoundaryManager.PROCESS_ALL_ACCESS, 1, (uint)ProcessesByName[0].Id);
        }

        public byte[] ReadMemory(IntPtr lParam, ref int lastError)
        {
            try
            {
                IntPtr ptrBytesReaded;
                OpenProcess();
                Int32 result = CrossBoundaryManager.ReadProcessMemory(hProcess, lParam, lpBuffer, (uint)lpBuffer.Length, out ptrBytesReaded);
                return lpBuffer;
            }
            finally
            {
                int processLastError = GetLastError();
                if (processLastError != 0)
                {
                    lastError = processLastError;
                }
                if (hProcess != IntPtr.Zero)
                    CloseHandle(hProcess);
            }
        }

        public void CloseProcessHandle()
        {
            int iRetValue;
            iRetValue = CrossBoundaryManager.CloseHandle(hProcess);
            if (iRetValue == 0)
                throw new Exception("CloseHandle failed");
        }
    }
}
And I use it like this:
protected override void WndProc(ref Message m)
{
    StringBuilder sb = new StringBuilder();
    switch (m.Msg)
    {
        case OPEN_SESSION_COMPLETE:
            GCHandle openCompleteResultGCH = GCHandle.Alloc(m.LParam); // So the GC does not eat the pointer before I can use it
            CrossBoundaryManager manager = new CrossBoundaryManager();
            int lastError = 0;
            byte[] result = manager.ReadMemory(m.LParam, ref lastError);
            if (lastError != 0)
            {
                txtState.Text = "Last error: " + lastError.ToString();
            }
            StringBuilder byteResult = new StringBuilder();
            for (int i = 0; i < result.Length; i++)
            {
                byteResult.Append(result[i].ToString() + " ");
            }
            sb.AppendLine("Memory Read Result: " + byteResult.ToString());
            sb.AppendLine("Request ID: " + BitConverter.ToInt32(result, 0).ToString());
            txtResult.Text += sb.ToString();
            manager.CloseProcessHandle();
            break;
    }
    base.WndProc(ref m);
}
Is it correct to go across process boundaries in this case?
Is it correct to use lParam as the base address for ReadProcessMemory?
Is the CLR turning lParam into something I cannot use?
Why am I getting the 299 error?
I correctly get the process ID of FWMAIN32.EXE, but how can I be sure the lParam is pointing inside its memory space?
Should I consider the use of "unsafe"? Could anyone recommend that approach?
Are there any other ways to custom marshal the struct?
Too many questions on a single post, I know, but I think they all point to resolving this issue. Thank you all for your help in advance, and sorry I had to make it so long.
I guess I have to take this one myself. So, as stated in the comments above, removing the
GCHandle openCompleteResultGCH = GCHandle.Alloc(m.LParam);
line did the trick. I had understood that when a pointer in the managed context points to a struct in the unmanaged context, the GC would collect it because the pointer really had nothing at its address. It is in fact the other way around: when, in the managed context, we hold an object or struct that is being pointed to from an unmanaged context, the GC could collect it because no pointer in the managed context references it, hence the need to pin it in order to keep the GC at a distance.
So, in the end, there was no need to go across process boundaries in this case. I removed the calls to the Kernel32 methods, as the CLR handles the marshalling quite well and Marshal.PtrToStructure was all I needed.
Credit goes to Jim and David who pointed me in the right direction.
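For reference, the call that ended up working is essentially just this (a sketch, assuming the WFSResult definition above and, as concluded here, that the pointer is valid in the receiving process):

case OPEN_SESSION_COMPLETE:
    // No pinning and no ReadProcessMemory needed; the marshaler copies the struct directly.
    WFSResult openResult = (WFSResult)Marshal.PtrToStructure(m.LParam, typeof(WFSResult));
    break;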
I've been working on this the whole day, and I'm still stuck.
I ported this code from C/C++ to C#. I'm so close, but I get these exceptions:
Exception of type 'System.ExecutionEngineException' was thrown.
and
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Here is the code. It is not cleaned up/optimized yet because I'm still testing it.
public unsafe static void GetHash(string data, byte[] hash)
{
    byte[] input = System.Text.UnicodeEncoding.Unicode.GetBytes(data);
    hash = new byte[128];
    IntPtr hProv = IntPtr.Zero;
    IntPtr hHash = IntPtr.Zero;

    Crypto.CryptAcquireContext(ref hProv, string.Empty, string.Empty, Crypto.PROV_RSA_FULL, 0);
    if (Crypto.CryptCreateHash(hProv, Crypto.CALG_SHA1, IntPtr.Zero, 0, ref hHash))
    {
        if (Crypto.CryptHashData(hHash, input, ((input.Length) + 1) * 2, 0))
        {
            byte[] buffer = new byte[20];
            IntPtr pBuffer = IntPtr.Zero;
            int length = 20;
            if (Crypto.CryptGetHashParam(hHash, Crypto.HP_HASHVAL, ref pBuffer, ref length, 0))
            {
                Crypto.CryptDestroyHash(hHash);
                Crypto.CryptReleaseContext(hProv, 0);
                byte tail = 0;
                unsafe
                {
                    // No matter what I do it stops here!!!!! :(
                    // One error is "Exception of type 'System.ExecutionEngineException' was thrown."
                    // The other is "System.AccessViolationException crossed a native/managed boundary:
                    // Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
                    try
                    {
                        //-------------------------- This is where the exceptions start.
                        // I have commented the code out because I'm getting kind of tired of this exception.
                        // I tried 2 ways of getting a byte[] from a pointer.

                        // The 1st way does not work:
                        //for (int i = 0; i < length; i++)
                        //    buffer[i] = (byte)Marshal.ReadByte(pBuffer, i);

                        // The 2nd way does not work either:
                        //System.Runtime.InteropServices.Marshal.Copy(pBuffer, buffer, 0, 20);
                        //--------------------------
                    }
                    catch (Exception ex)
                    {
                    }
                }
                // There is more code here, but I removed it
                // since I only want to show how far the code gets.
            }
        }
    }
}
Hope anybody can help me out here.
Thanks in advance,
JB
I fixed it without the use of unsafe or the fixed statement; what I did was quite simple, as is the case with most coding issues.
I have a class Crypto where I keep all the advapi32.dll functions. The function returned a pointer to the byte array in memory, and this is what the declaration looked like before my change:
[DllImport("advapi32.dll", SetLastError = true)]
public static extern bool CryptGetHashParam(
IntPtr hHash,
Int32 dwParam,
ref IntPtr pbData, // this is where my problem was!!!!
ref Int32 pdwDataLen,
Int32 dwFlags
I changed the function to:
[DllImport("advapi32.dll", SetLastError = true)]
public static extern bool CryptGetHashParam(
IntPtr hHash,
Int32 dwParam,
Byte[] pbData, //i changed it from IntPtr to byte array
ref Int32 pdwDataLen,
Int32 dwFlags
and that solved my corrupt memory issue
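With that signature, the call site looks roughly like this (a sketch, assuming hHash was created as in the question and HP_HASHVAL requests the raw hash value):

byte[] hashValue = new byte[20];    // SHA-1 hashes are 20 bytes
int hashLength = hashValue.Length;
if (Crypto.CryptGetHashParam(hHash, Crypto.HP_HASHVAL, hashValue, ref hashLength, 0))
{
    // hashValue now contains the hash; no Marshal.Copy or unsafe code required.
}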
Hope this helps somebody else working with CryptGetHashParam.
I ported this code from C/C++ because there was no C# sample on the net, so here is one of the first.
Thanks all for trying to help me out, but I fixed it myself.
JB
I'm not certain, but it's likely because your .NET objects aren't pinned in memory. See this: http://dotnet.dzone.com/news/net-memory-control-use-gchandl. The gist of it is that .NET objects can move around in memory after you've passed them through interop, and when that happens stuff starts getting crazy.
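Something along these lines is what I mean (an untested sketch; buffer stands in for whatever managed array you hand to the native side):

byte[] buffer = new byte[20];
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned); // pin so the GC cannot move it
try
{
    IntPtr pBuffer = handle.AddrOfPinnedObject();
    // hand pBuffer to the native function here
}
finally
{
    handle.Free(); // always release the pin
}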
Unfortunately I'm on a netbook at the moment and can't try it myself. Does that help?
The problem below is related to my previous question:
Converting static link library to dynamic dll
My first step was to develop a DLL; that is done. (Thanks John Knoeller and prakash, your input was very helpful.)
Now when I call the function in the DLL from my C# application, I get the error:
"Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
Here is the C++ definition
extern "C" DEMO2_API void Decompress(char* inp_buff, unsigned short*
inp_len, char* buffer_decomp,unsigned *output_len,unsigned short* errorCode);
My C# conversion (P/Invoke):
private static extern void Decompress(
byte[] inp_buff,
ref ushort inp_len,
byte[] buffer_decomp,
ref int output_len,
ref ushort errorCode
);
And I am calling it as below
byte[] dst = new byte[2048];
int outlen = 2048;
ushort errorCode = 0;
Decompress(src, (ushort )src.Length, dst, ref outlen,ref errorCode);
return dst;
What is wrong?
I see a signature mismatch on the inp_len parameter. In the C++ definition you use a pointer to a short unsigned int, while in the C# method you use a ushort.
For pointers you must use the IntPtr .NET type.
#necrostaz
It is not necessary to use IntPtr for pointers.
Look below: all four of these declarations are valid, and I am currently using them.
[DllImport("user32.dll")]
public static extern IntPtr SendMessage(IntPtr hWnd, int msg, int wParam, String lParam);
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)]
public static extern IntPtr SendMessage(IntPtr hWnd, Int32 Msg, IntPtr wParam, IntPtr lParam);
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)]
public static extern IntPtr SendMessage(IntPtr hWnd, Int32 Msg, IntPtr wParam, StringBuilder lParam);
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = false)]
public static extern IntPtr SendMessage(IntPtr hWnd, Int32 Msg, IntPtr wParam, String lParam);
The question is still open.
In addition to the missing "ref" on the inp_len declaration that Maurits pointed out, you need to make sure that your pointer sizes match.
If you're running on a 32-bit operating system you should be OK, but if your code runs on 64-bit too, then you need to ensure that either:
You mark your .NET entry assembly as x86 (not Any CPU)
or
You supply both 32-bit and 64-bit builds of the C++ DLL and install the correct one for the interop to call.
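If it helps, a cheap runtime guard (just a sketch) makes a bitness mismatch fail fast instead of corrupting memory:

// IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
if (IntPtr.Size != 4)
    throw new InvalidOperationException("This process must run as 32-bit (x86) to match the native DLL.");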
I had the same problem two years ago. In my case the reason for the access violation was that the memory was allocated outside the DLL. As a solution, I added two functions for memory allocation and deallocation to the DLL.
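On the managed side that pattern looks roughly like this (a sketch; the export names and DLL name are hypothetical and depend on your DLL):

// Hypothetical exports: the DLL both allocates and frees the buffer,
// so managed and native code agree on the allocator.
[DllImport("demo2.dll")]
static extern IntPtr AllocNativeBuffer(int size);

[DllImport("demo2.dll")]
static extern void FreeNativeBuffer(IntPtr buffer);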
Another solution could be a change to the .NET security settings. Some keywords are "Code Access Security Policy Tool" (caspol.exe) and ".NET Framework Configuration Tool" (mscorcfg.msc). In VS there is also a Security tab in the project properties dialog. I'm not an expert in .NET security, so someone else may know more details.
The following code runs without any problems. It's very similar to yours:
C++:
extern "C" __declspec(dllexport) void TestFunction(char* inp_buff,
unsigned short* inp_len,
char* buffer_decomp,
unsigned *output_len,
unsigned short* errorCode)
{
//copy input buffer to output buffer
int size = min(*inp_len,*output_len);
for(int i=0; i<size; i++)
buffer_decomp[i] = inp_buff[i];
errorCode = 0;
}
C#:
using System;
using System.Runtime.InteropServices;

class Program
{
    [DllImport("TEST.DLL")]
    public static extern void TestFunction(byte[] inp_buff,
        ref ushort inp_len,
        byte[] out_buff,
        ref int out_len,
        ref ushort errorCode);

    static void Main(string[] args)
    {
        // prepare input buffer
        byte[] inp_buff = new byte[20];
        inp_buff[0] = (byte)'T';
        inp_buff[1] = (byte)'E';
        inp_buff[2] = (byte)'S';
        inp_buff[3] = (byte)'T';
        ushort inp_len = (ushort)inp_buff.Length;

        // prepare output buffer
        byte[] out_buff = new byte[20];
        int out_len = out_buff.Length;

        ushort errorCode = 0;

        TestFunction(inp_buff, ref inp_len, out_buff, ref out_len, ref errorCode);

        // see if copying was successful
        for (int i = 0; i < out_len; i++)
            Console.Out.Write(out_buff[i]);
    }
}
Try it out. I have taken a look at the open parts of the library you are using. Here is a direct excerpt of the function lzo_decomp:
in = lzo_malloc(IN_LEN);
out = lzo_malloc(OUT_LEN);
wrkmem = lzo_malloc(LZO1Z_999_MEM_COMPRESS);
if (in == NULL || out == NULL || wrkmem == NULL)
{
printf("out of memory\n");
}
in_len = IN_LEN;
lzo_memset(in,0,in_len );
lzo_memset ( out, 0, OUT_LEN );
memcpy ( out, &input_buffer, inp_buff_len);
lzo_free(wrkmem);
lzo_free(out);
lzo_free(in);
r = lzo1z_decompress(out,*inp_len,in,&out_len,NULL );
To be clear: "in" and "out" are not the function arguments for the input and output buffers but temporary pointers. What can you see (besides badly formatted code)? The only two buffers lzo1z_decompress is called with are "in" and "out", and these two buffers are freed before the call. I'm not surprised that there is an access violation. I can only underline nobugz's advice: contact the author.
The 4th parameter needs to be passed using out mode instead of ref. That solved the problem.
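For completeness, a declaration reflecting both fixes (ref on inp_len as Maurits pointed out, and out on output_len) would look roughly like this; the DLL name is a placeholder:

[DllImport("demo2.dll")] // hypothetical module name; use the actual DLL
private static extern void Decompress(
    byte[] inp_buff,
    ref ushort inp_len,
    byte[] buffer_decomp,
    out int output_len,
    ref ushort errorCode);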