gDebugger doesn't show allocated textures with OpenTK - c#

I'm using the OpenTK wrapper for C#. My shaders weren't running correctly (I want to write a vertex displacement shader driven by textures), so I intended to use a GPU debugger to see what was happening.
The application is quite simple: it creates a game window, loads shaders and textures, and renders textured cubes. That part works fine; I discovered the problem while trying to do vertex displacement.
I used gDebugger and AMD CodeXL with the same results. The debugger detects shaders, VBOs, etc., but never sees any allocated textures. This makes no sense, because when I run the application I see a textured cube spinning around the screen, and the debugger renders the object on the back/front buffer.
For reference, here is the texture-loading function:
int loadImage(Bitmap image, int tex)   // note: the 'tex' parameter is currently unused
{
    // Generate and bind a new texture object
    int texID = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, texID);

    // Lock the bitmap bits and upload them to the GPU
    System.Drawing.Imaging.BitmapData data = image.LockBits(
        new System.Drawing.Rectangle(0, 0, image.Width, image.Height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

    image.UnlockBits(data);
    return texID;
}
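(Note: this function doesn't set any sampling parameters. The cube renders, so filtering is presumably handled elsewhere; if not, and no mipmaps are generated, a texture needs at least something like the following to be complete:)

// Without mipmaps, the default MinFilter (NearestMipmapLinear) leaves the texture incomplete,
// so switch to plain linear filtering.
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);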
I searched for more information but couldn't find anything about this problem. I'm not sure whether the issue is in the wrapper, in the function, or whether there is something else I need to take into account.
EDIT:
It seems the problem is in the wrapper: OpenTK's GL.BindTexture is not the same entry point as the native glBindTexture, so the profiler can't catch the call, and that's why the textures are not shown. The next step is to find a way to make native GL calls while using OpenTK.
PROPOSED SOLUTION:
As I said, some functions of the OpenTK wrapper do not map directly to the native GL calls. For functions like GL.BindTexture and GL.GenTexture (I suppose there are more, but I don't know yet), OpenTK uses overloads that don't match the original entry points, so profilers can't intercept them.
It's easy to check: call GL.GenTexture or GL.BindTexture, add breakpoints on those functions in the profiler, and they will never break.
Now, the solution. After thinking about it, my conclusion was to replace some OpenTK calls with native opengl32.dll calls resolved through GetProcAddress.
This gives me some ideas:
http://blogs.msdn.com/b/jonathanswift/archive/2006/10/03/dynamically-calling-an-unmanaged-dll-from-.net-_2800_c_23002900_.aspx
Using the opengl32.dll included with the profiler, I use the same structure as in the link above:
static class NativeMethods
{
    [DllImport("kernel32.dll")]
    public static extern IntPtr LoadLibrary(string dllToLoad);

    [DllImport("kernel32.dll")]
    public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

    [DllImport("kernel32.dll")]
    public static extern bool FreeLibrary(IntPtr hModule);
}
This is added in the GameWindow class:
// opengl32.dll exports use the stdcall convention on 32-bit Windows (on x64 the distinction is ignored)
[UnmanagedFunctionPointer(CallingConvention.StdCall)]
private delegate void BindTexture(OpenTK.Graphics.OpenGL.TextureTarget target, int texID);

[UnmanagedFunctionPointer(CallingConvention.StdCall)]
private delegate void GenTexture(int n, int[] arr_text);
And here is the new bindTexture code:
// Load the opengl32.dll that ships with the profiler and resolve glBindTexture directly
IntPtr pDll = NativeMethods.LoadLibrary(@"....\opengl32.dll");
IntPtr pAddressOfFunctionToCall = NativeMethods.GetProcAddress(pDll, "glBindTexture");
BindTexture bindTexture = (BindTexture)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(BindTexture));
bindTexture(OpenTK.Graphics.OpenGL.TextureTarget.Texture2D, arr_texture[0]);
Now, if you try again with breakpoints in the profiler, it will break on glGenTextures and glBindTexture, and the allocated textures are recognised.
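To avoid repeating the LoadLibrary/GetProcAddress boilerplate for every entry point, the lookup can be wrapped in a small helper. This is only a sketch under the same assumptions as above (the helper name is mine); note that GetProcAddress on opengl32.dll only resolves core GL 1.1 functions such as glBindTexture and glGenTextures, while extension functions would have to go through wglGetProcAddress instead.

static class NativeGL
{
    // Same dll as above
    private static readonly IntPtr glDll = NativeMethods.LoadLibrary(@"....\opengl32.dll");

    // Resolve an exported GL 1.1 entry point and wrap it in a typed delegate
    public static T GetFunction<T>(string name) where T : class
    {
        IntPtr proc = NativeMethods.GetProcAddress(glDll, name);
        if (proc == IntPtr.Zero)
            throw new EntryPointNotFoundException(name);
        return (T)(object)Marshal.GetDelegateForFunctionPointer(proc, typeof(T));
    }
}

// Usage:
BindTexture bindTexture = NativeGL.GetFunction<BindTexture>("glBindTexture");
bindTexture(OpenTK.Graphics.OpenGL.TextureTarget.Texture2D, arr_texture[0]);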
I hope it helps.

Related

SetConsoleActiveScreenBuffer does not display screen buffer

I am currently trying to write a console application in C# with two screen buffers, which should be swapped back and forth (much like VSync on a modern GPU). Since the System.Console class does not provide a way to switch buffers, I had to P/Invoke several methods from kernel32.dll.
This is my current code, grossly simplified:
static void Main(string[] args)
{
    IntPtr oldBuffer = GetStdHandle(-11); // Gets the handle for the default console buffer
    IntPtr newBuffer = CreateConsoleScreenBuffer(0, 0x00000001, IntPtr.Zero, 1, 0); // Creates a new console buffer
    /* Write data to newBuffer */
    SetConsoleActiveScreenBuffer(newBuffer);
}
The following things occurred:
The screen remains empty, even though it should be displaying newBuffer.
When writing to oldBuffer instead of newBuffer, the data appears immediately, so my way of writing into the buffer should be correct.
Upon calling SetConsoleActiveScreenBuffer(newBuffer), the error code is 6, which means invalid handle. This is strange, as the handle is not -1, which the documentation describes as invalid.
I should note that I very rarely worked with the Win32 API directly and have very little understanding of common Win32-related problems. I would appreciate any sort of help.
As IInspectable points out in the comments, you're setting dwDesiredAccess to zero. That gives you a handle with no access permissions. There are some edge cases where such a handle is useful, but this isn't one of them.
The only slight oddity is that you're getting "invalid handle" rather than "access denied". I'm guessing you're running Windows 7, so the handle is a user-mode object (a "pseudohandle") rather than a kernel handle.
At any rate, you need to set dwDesiredAccess to GENERIC_READ | GENERIC_WRITE as shown in the sample code.
Also, as Hans pointed out in the comments, the declaration on pinvoke.net was incorrect, specifying the last argument as a four-byte integer rather than a pointer-sized integer. I believe the correct declaration is
[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr CreateConsoleScreenBuffer(
uint dwDesiredAccess,
uint dwShareMode,
IntPtr lpSecurityAttributes,
uint dwFlags,
IntPtr lpScreenBufferData
);
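For completeness, a minimal corrected version of the original Main, putting both fixes together, might look like this (0x80000000 and 0x40000000 are GENERIC_READ and GENERIC_WRITE; the other arguments are kept from your code, with the last one now passed as IntPtr.Zero to match the corrected declaration):

static void Main(string[] args)
{
    const uint GENERIC_READ = 0x80000000;
    const uint GENERIC_WRITE = 0x40000000;

    IntPtr newBuffer = CreateConsoleScreenBuffer(
        GENERIC_READ | GENERIC_WRITE, // request read/write access instead of 0
        0x00000001,                   // share mode, as in the original code
        IntPtr.Zero,                  // default security attributes
        1,                            // CONSOLE_TEXTMODE_BUFFER
        IntPtr.Zero);                 // reserved, must be zero

    /* Write data to newBuffer */
    SetConsoleActiveScreenBuffer(newBuffer);
}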

Array from C++ to C#

I am trying to pass a double array (it's actually a std::vector, but converted at the boundary) from a C++ DLL into a C# script (Unity).
I'm using the approach outlined here: https://stackoverflow.com/a/31418775.
I can successfully get the size of the array printing on my console in Unity, however I am not able to use CoTaskMemAlloc to allocate memory for the array, since I am building with Xcode and it doesn't seem to have COM.
For a little more background: this array is part of a control for a GUI. C++ creates it and the user edits it with the C# GUI, so the plan is to pass the array back to C++ once it has been edited.
C++ code
extern "C" ABA_API void getArray(long* len, double **data){
*len = delArray.size();
auto size = (*len)*sizeof(double);
*data = static_cast<double*>(CoTaskMemAlloc(size));
memcpy(*data, delArray.data(), size);
}
C# code
[DllImport("AudioPluginSpecDelay")]
private static extern void getArray (out int length,[MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 2)] out double[] array);
int theSize;
double[] theArray;
getArray(out theSize, out theArray);
If I leave out the code concerning the array, the int passes just fine, so I believe the approach is the right one; it's just a question of getting around the lack of CoTaskMemAlloc.
You should be able to allocate memory in XCode using malloc and free it in C# using Marshal.FreeCoTaskMem. To be able to free it however, you need to have the IntPtr for it:
C++ code
extern "C" ABA_API void getArray(long* len, double **data)
{
*len = delArray.size();
auto size = (*len)*sizeof(double);
*data = static_cast<double*>(malloc(size));
memcpy(*data, delArray.data(), size);
}
C# code
[DllImport("AudioPluginSpecDelay")]
private static extern void getArray(out int length, out IntPtr array);
int theSize;
IntPtr theArrayPtr;
double[] theArray;
getArray(out theSize, out theArrayPtr);
Marshal.Copy(theArrayPtr, theArray, 0, theSize);
Marshal.FreeCoTaskMem(theArrayPtr);
// theArray is a valid managed object while the native array is already freed
Edit
From Memory Management I gathered that Marshal.FreeCoTaskMem would most likely be implemented using free(), so the fitting allocator would be malloc().
There are two ways to be really sure:
Allocate the memory in the CLI using Marshal.AllocCoTaskMem, pass it to native code to have it filled, and then free it in the CLI again using Marshal.FreeCoTaskMem.
Leave it as it is (native allocates the memory with malloc()), but do not free the memory in the CLI. Instead, have another native function like freeArray(double **data) and have it free() the array for you once the CLI is done using it (see the sketch after this list).
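A minimal C# sketch of the second option, assuming the native library gains the suggested freeArray(double **data) export that simply does free(*data):

[DllImport("AudioPluginSpecDelay")]
private static extern void getArray(out int length, out IntPtr array);

// Hypothetical native export: extern "C" ABA_API void freeArray(double **data) { free(*data); *data = nullptr; }
[DllImport("AudioPluginSpecDelay")]
private static extern void freeArray(ref IntPtr array);

int theSize;
IntPtr theArrayPtr;
getArray(out theSize, out theArrayPtr);

// Copy into a managed array while the native buffer is still alive
double[] theArray = new double[theSize];
Marshal.Copy(theArrayPtr, theArray, 0, theSize);

// Let the side that allocated the buffer (malloc) also free it
freeArray(ref theArrayPtr);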
I am not an expert on Unity, but it seems that Unity relies on Mono for its C# scripting support. Take a look at this documentation page:
Memory Management in Mono
We can assume from there that you will need platform-dependent code on your C++ side: CoTaskMemAlloc/CoTaskMemFree on Windows and the GLib memory functions g_malloc() and g_free() on Unix-like platforms (iOS, Android, etc.).
If you have control over all your code, C++ and C#, the easiest way to implement this is to do all the memory allocation/deallocation in the C# script.
Sample code (untested):
//C++ code
extern "C" ABA_API long getArrayLength(){
    return delArray.size();
}

extern "C" ABA_API void getArray(long len, double *data){
    if (delArray.size() <= len)
        memcpy(data, delArray.data(), delArray.size() * sizeof(double));
}
// C# code
[DllImport("AudioPluginSpecDelay")]
private static extern int getArrayLength();

[DllImport("AudioPluginSpecDelay")]
private static extern void getArray(int length, [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 0)] double[] array);

int theSize = getArrayLength();
double[] theArray = new double[theSize];
getArray(theSize, theArray);

Using CUDA in a DLL with a C# Application

What I am trying to do is write a C# application to generate pictures of fractals (Mandelbrot and Julia sets). I am using unmanaged C++ with CUDA to do the heavy lifting and C# for the user interface. When I try to run this code, I am not able to call the method I wrote in the DLL; I get an unhandled exception for an invalid parameter.
The C++ DLL is designed to return a pointer to the pixel data for a bitmap, which is used by the .NET Bitmap to create a bitmap and display it in a PictureBox control.
Here is the relevant code:
C++: (CUDA methods omitted for conciseness)
extern "C" __declspec(dllexport) int* generateBitmap(int width, int height)
{
int *bmpData = (int*)malloc(3*width*height*sizeof(int));
int *dev_bmp;
gpuErrchk(cudaMalloc((void**)&dev_bmp, (3*width*height*sizeof(int))));
kernel<<<BLOCKS_PER_GRID, THREADS_PER_BLOCK>>>(dev_bmp, width, height);
gpuErrchk(cudaPeekAtLastError());
gpuErrchk(cudaDeviceSynchronize());
cudaFree(dev_bmp);
return bmpData;
}
C#:
public class NativeMethods
{
    [DllImport(@"C:\...\FractalMaxUnmanaged.dll")]
    public static unsafe extern int* generateBitmap(int width, int height);
}
//...
private unsafe void mandlebrotButton_Click(object sender, EventArgs e)
{
    int* ptr = NativeMethods.generateBitmap(FractalBox1.Width, FractalBox1.Height);
    IntPtr iptr = new IntPtr(ptr);
    fractalBitmap = new Bitmap(
        FractalBox1.Width,
        FractalBox1.Height,
        3,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb,
        iptr);
    FractalBox1.Image = fractalBitmap;
}
Error:
************** Exception Text **************
Managed Debugging Assistant 'PInvokeStackImbalance' has detected a problem in 'C:\...WindowsFormsApplication1.vshost.exe'.
I believe the problem I am having is with the IntPtr - is this the correct method to pass a pointer from unmanaged C++ to a C# application? Is there a better method? Is passing a pointer the best method to accomplish what I am trying to do or is there a better way to pass the pixel data from unmanaged C++ w/ CUDA to C#?
EDIT:
From what I gather from the error I get when I debug the application, PInvokeStackImbalance implies that the signatures for the unmanaged and managed code don't match. However, they sure look like they match to me.
I feel like I'm missing something obvious here, any help or recommended reading would be appreciated.
You need to define the same calling convention in C and C#:
In C:
extern "C" __declspec(dllexport) int* __cdecl generateBitmap(int width, int height)
In C#:
[DllImport(#"C:\...\FractalMaxUnmanaged.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr generateBitmap(int width, int height);
Instead of cdecl you can also use stdcall; it only needs to be the same on both sides.
Having handled a lot of managed/unmanaged interop myself, I also advise you to pass the image array as an argument and do the memory allocation in C#. That way you don't need to worry about manually freeing memory allocated by native code.
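A rough sketch of that approach, assuming the native export is changed to fill a caller-supplied buffer (the name fillBitmap and its exact signature are mine, and the 3-ints-per-pixel layout of the original code is kept):

// Hypothetical native side:
//   extern "C" __declspec(dllexport) void __cdecl fillBitmap(int width, int height, int* buffer);
[DllImport(@"C:\...\FractalMaxUnmanaged.dll", CallingConvention = CallingConvention.Cdecl)]
private static extern void fillBitmap(int width, int height, int[] buffer);

private void mandlebrotButton_Click(object sender, EventArgs e)
{
    int width = FractalBox1.Width;
    int height = FractalBox1.Height;

    // The managed side owns this buffer, so nothing has to be freed by hand
    int[] pixels = new int[3 * width * height];
    fillBitmap(width, height, pixels);

    // ...build the Bitmap from 'pixels' (e.g. Bitmap.LockBits + Marshal.Copy)...
}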

Can not pass winform control size into unmanaged code

I use unmanaged libraries to obtain a video stream from an IP camera.
There is this function:
[DllImport("client.dll", EntryPoint = "Network_ClientStartLive", SetLastError = true)]
protected static extern int Network_ClientStartLive(
ref IntPtr pStream,
IntPtr hDev,
IntPtr pClientInfo,
[MarshalAs(UnmanagedType.FunctionPtr)] ReadDatacbf lpfnCallbackFunc = null,
UInt32 dwUserData = 0
);
pClientInfo is a pointer to a structure of this type:
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
protected struct LiveConnect
{
    public UInt32 dwChannel;
    public IntPtr hPlayWnd;
    public UInt32 dwConnectMode;
}
where hPlayWnd is the handle of the window in which the video stream is to be rendered.
The library detects the video resolution from the size of this window during the call to Network_ClientStartLive. I verified this in a C++ MFC program, where the output window was a Picture control and resizing it with MoveWindow determined the output video resolution.
In the C# version of this program I'm using a PictureBox control to draw the video stream. The video is displayed, but the size of the PictureBox does not affect the video stream resolution. I tried several methods to change the PictureBox size:
setting pictureBox.Size
using the WinAPI SetWindowPos:
[DllImport("user32.dll")]
private static extern bool SetWindowPos(
IntPtr hWnd,
IntPtr hWndInsertAfter,
int x,
int y,
int width,
int height,
uint uFlags);
In both cases the size of the control changed, but the camera library continued to output the video stream at maximum resolution.
How can I solve this problem?
Thanks!
Every control in Windows Forms has a SizeChanged event (http://msdn.microsoft.com/en-us/library/system.windows.forms.control.sizechanged(v=vs.110).aspx). Maybe it's possible to change the video resolution manually in that event handler. If not, the PictureBox handle you provide may not be sending the WM_SIZE messages the unmanaged library is looking for. As mentioned in one of the comments, Spy++ (included with Visual Studio) is a useful tool for monitoring the messages and making sure the handle values and events are what you expect them to be.
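As a sketch of the first suggestion, assuming the library only reads the window size when the stream is started, you could restart the stream whenever the PictureBox is resized (RestartStream is a hypothetical helper that stops the current stream and calls Network_ClientStartLive again with hPlayWnd set to the PictureBox handle):

public Form1()
{
    InitializeComponent();
    // Re-negotiate the stream whenever the render window changes size
    pictureBox1.SizeChanged += OnPictureBoxSizeChanged;
}

private void OnPictureBoxSizeChanged(object sender, EventArgs e)
{
    // Hypothetical helper: stop the stream, then call Network_ClientStartLive again
    // so the library re-reads the size of pictureBox1.Handle
    RestartStream(pictureBox1.Handle);
}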

Call a function from an injected DLL

First off, I would like to say that I am not trying to hack a game; I am actually employed by the company whose process I am trying to inject. :)
I would like to know how to call a function from an already injected DLL.
So, I have successfully injected and loaded my DLL in the target using CreateRemoteThread(). Below you can see a snippet of the injection:
private static bool Inject(Process pToBeInjected, string sDllPath, out string sError, out IntPtr hwnd, out IntPtr hLibModule)
{
    IntPtr zeroPtr = (IntPtr)0;
    hLibModule = zeroPtr;

    IntPtr hProcess = NativeUtils.OpenProcess(
        (0x2 | 0x8 | 0x10 | 0x20 | 0x400), // create thread, query info, operation, write, and read
        1,
        (uint)pToBeInjected.Id);
    hwnd = hProcess;

    IntPtr loadLibH = NativeUtils.GetProcAddress(NativeUtils.GetModuleHandle("kernel32.dll"), "LoadLibraryA");

    IntPtr dllAddress = NativeUtils.VirtualAllocEx(
        hProcess,
        IntPtr.Zero,
        (IntPtr)sDllPath.Length, // 520 bytes should be enough
        (uint)NativeUtils.AllocationType.Commit |
        (uint)NativeUtils.AllocationType.Reserve,
        (uint)NativeUtils.MemoryProtection.ExecuteReadWrite);

    byte[] bytes = CalcBytes(sDllPath);
    IntPtr ipTmp = IntPtr.Zero;

    NativeUtils.WriteProcessMemory(
        hProcess,
        dllAddress,
        bytes,
        (uint)bytes.Length,
        out ipTmp);

    IntPtr hThread = NativeUtils.CreateRemoteThread(
        hProcess,
        IntPtr.Zero,
        (IntPtr)0,
        loadLibH,   // handle to the LoadLibrary function
        dllAddress, // address of the dll path in the remote process
        0,
        IntPtr.Zero);

    uint retV = NativeUtils.WaitForSingleObject(hThread, NativeUtils.INFINITE_WAIT);
    bool exitR = NativeUtils.GetExitCodeThread(hThread, out hLibModule);

    return true;
}
Note: Error checking and freeing resources were removed for brevity, but rest assured I check all the pointers and free my resources.
After the function above exits, I have a non-zero module handle to my DLL returned by LoadLibrary through hLibModule, meaning that the DLL was loaded correctly.
My DLL is a C# class library meant to show a message box (for testing). I have tried testing the function and the message box pops up. It looks like this:
public class Class1
{
    public static void ThreadFunc(IntPtr param)
    {
        IntPtr libPtr = LoadLibrary("user32.dll");
        MessageBox(IntPtr.Zero, "I'm ALIVE!!!!", "InjectedDll", 0);
    }

    [DllImport("kernel32", SetLastError = true)]
    public static extern IntPtr LoadLibrary(string lpFileName);

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, String text, String caption, int options);
}
I compile it from Visual Studio and the DLL appears in the Debug folder. I then pass the full path of my DLL to the injector.
After injection into the target process, I don't know how to call my ThreadFunc from the injected DLL, so it never executes.
I cannot use GetProcAddress(hLibModule, "ThreadFunc") since I am out of process, so the answer must lie in calling CreateRemoteThread() somehow. Also, I have read that DllMain is no longer allowed for .NET DLLs, so I cannot get any free execution that way either.
Does anyone have any idea how to call a function from an injected DLL?
Thank you in advance.
Well, you already got a thread running inside that process. You make it do something boring: it only loads a DLL. This works completely by accident; LoadLibrary just happens to have the correct function signature.
It can do much more. That, however, had better be unmanaged code; just like LoadLibrary(), you cannot count on any managed code running properly. That takes a heck of a lot more work: you have to load and initialize the CLR and tell it to load and execute the assembly you want to run. And no, you cannot load the CLR in DllMain().
Keywords to look for are CorBindToRuntimeEx() and ICLRRuntimeHost::ExecuteInAppDomain(). This is gritty stuff to get going but I've seen it done. COM and C++ skills and generous helpings of luck required.
