Adjust brightness the native way in Windows - C#

I was wondering: what is the native way to adjust the brightness in Windows?
By native, I mean the method that also displays the brightness overlay in the top-left corner in Windows 8, 8.1, and 10, as if the special brightness keys had been pressed.
I have looked all over the Internet for this; some solutions do adjust the brightness, but no overlay is shown. Any ideas? Is there something like
SendMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, 2);
which turns off the monitor, but for brightness, that can be used from C++ or C#? Thanks.
Update: here is the sample code.
HMONITOR hMonitor = NULL;
DWORD cPhysicalMonitors;
LPPHYSICAL_MONITOR pPhysicalMonitors = NULL;

// Get the monitor handle.
hMonitor = MonitorFromWindow(GetForegroundWindow(), MONITOR_DEFAULTTOPRIMARY);

// Get the number of physical monitors.
BOOL bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR(
    hMonitor,
    &cPhysicalMonitors
);

if (bSuccess)
{
    // Allocate the array of PHYSICAL_MONITOR structures.
    pPhysicalMonitors = (LPPHYSICAL_MONITOR)malloc(
        cPhysicalMonitors * sizeof(PHYSICAL_MONITOR));
    if (pPhysicalMonitors != NULL)
    {
        // Get the array.
        bSuccess = GetPhysicalMonitorsFromHMONITOR(
            hMonitor, cPhysicalMonitors, pPhysicalMonitors);

        // Use the monitor handles (not shown).
        DWORD pdwMinimumBrightness = 0;
        DWORD pdwCurrentBrightness = 0;
        DWORD pdwMaximumBrightness = 0;
        DWORD dwMonitorCapabilities = 0;
        DWORD dwSupportedColorTemperatures = 0;

        // Note: GetMonitorCapabilities takes the physical monitor HANDLE,
        // not the PHYSICAL_MONITOR array pointer.
        bSuccess = GetMonitorCapabilities(pPhysicalMonitors[0].hPhysicalMonitor,
            &dwMonitorCapabilities, &dwSupportedColorTemperatures);
        cout << bSuccess << endl;

        // Close the monitor handles.
        bSuccess = DestroyPhysicalMonitors(
            cPhysicalMonitors,
            pPhysicalMonitors);

        // Free the array.
        free(pPhysicalMonitors);
    }
}
cout << bSuccess always prints 0 to the terminal.
GetMonitorCapabilities fails with error -1071241854 (0xC0262582: "An error occurred while transmitting data to the device on the I2C bus."). Then how do the brightness keys on my computer work?

Related

Getting VCP codes from external monitor in C#

I need to get various VCP codes from a DDC/CI-enabled external monitor in a C# application running on a Windows machine.
I have found the Win32 API that does exactly this, but it is unmanaged code, specifically C++. I have also searched for native C# ways to do it, and only one solution came up: the NuGet package MonitorControl. However, that package only supports the high-level, not the low-level, API, and when I downloaded it I wasn't able to use it due to missing files.
So from this website (Using the Low-Level Monitor Configuration Functions) I have been able to come up with some C++ code (largely copy/pasted from the example on the website), which I have implemented successfully.
#include "Windows.h"
#include <iostream> // added: needed for std::cout
#include "PhysicalMonitorEnumerationAPI.h"
#include "lowlevelmonitorconfigurationapi.h"
#pragma comment(lib, "Dxva2.lib")

void readVCPFromMonitor(BYTE vcpCode)
{
    HMONITOR hMonitor = NULL;
    DWORD cPhysicalMonitors;
    LPPHYSICAL_MONITOR pPhysicalMonitors = NULL;
    HWND hWnd = ::GetDesktopWindow();

    // Get the monitor handle.
    hMonitor = MonitorFromWindow(hWnd, MONITOR_DEFAULTTOPRIMARY);

    // Get the number of physical monitors.
    BOOL bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR(hMonitor, &cPhysicalMonitors);
    if (bSuccess)
    {
        // Allocate the array of PHYSICAL_MONITOR structures.
        pPhysicalMonitors = (LPPHYSICAL_MONITOR)malloc(cPhysicalMonitors * sizeof(PHYSICAL_MONITOR));
        if (pPhysicalMonitors != NULL)
        {
            // Get the array.
            bSuccess = GetPhysicalMonitorsFromHMONITOR(hMonitor, cPhysicalMonitors, pPhysicalMonitors);

            // Get the physical monitor handle.
            HANDLE hPhysicalMonitor = pPhysicalMonitors[0].hPhysicalMonitor;

            MC_VCP_CODE_TYPE pvct = MC_SET_PARAMETER;
            DWORD pdwCurrentValue = 0;
            DWORD pdwMaximumValue = 0;
            bSuccess = GetVCPFeatureAndVCPFeatureReply(hPhysicalMonitor, vcpCode, &pvct, &pdwCurrentValue, &pdwMaximumValue);
            if (bSuccess == FALSE)
            {
                std::cout << "Error getting VCP Code" << std::endl;
            }
            else
            {
                std::cout << "VCP Code Value : " << std::dec << pdwCurrentValue << std::endl;
                if (pvct != 0)
                {
                    std::cout << "VCP Code Type: " << std::dec << pvct << std::endl;
                }
                if (pdwMaximumValue != 0)
                {
                    std::cout << "VCP Code Max value: " << std::dec << pdwMaximumValue << std::endl;
                }
            }

            // Close the monitor handles.
            bSuccess = DestroyPhysicalMonitors(cPhysicalMonitors, pPhysicalMonitors);

            // Free the array.
            free(pPhysicalMonitors);
        }
    }
}
This code can be called with a specific VCP code as a parameter and spits out the result to the console, just for testing purposes. But now I need to "translate" this code to C#, and I don't know how to do the proper marshalling.
Any pointers, example code or other help would be greatly appreciated!
Final note: I have also found the NuGet package CsWin32, which should automate much of the marshalling, but I have not yet succeeded with it.
If anyone knows of a library or anything else that I can use, I am also very open to that.

Looping over data in CUDA kernel causes app to abort

issue:
As I increase the amount of data processed by the loop inside the CUDA kernel, the app aborts!
exception:
ManagedCuda.CudaException: 'ErrorLaunchFailed: An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory.'
question:
I would appreciate it if somebody could shed some light on the limitations I am hitting with my current implementation and on what exactly causes the app to crash.
Alternatively, I am attaching the full kernel code, in case somebody can suggest how it can be remodelled so that no exceptions are thrown. The idea is that the kernel accepts combinations and then performs calculations on the same set of data (in a loop). Therefore, the calculations inside the loop must be sequential. The order in which the kernel itself executes is irrelevant. It's a combinatorics problem.
Any bit of advice is welcomed.
code (Short version, which is enough to abort the app):
extern "C"
{
    __device__ __constant__ int arraySize;

    __global__ void myKernel(
        unsigned char* __restrict__ output,
        const int* __restrict__ in1,
        const int* __restrict__ in2,
        const double* __restrict__ in3,
        const unsigned char* __restrict__ in4)
    {
        for (int row = 0; row < arraySize; row++)
        {
            // looping over sequential data.
        }
    }
}
In the example above, if arraySize is somewhere close to 50_000, the app starts to abort. With the same kind of input parameters, if we hardcode arraySize to 10_000, the code finishes successfully.
code - kernel (full version)
#include <cuda.h>
#include "cuda_runtime.h"
#include <device_launch_parameters.h>
#include <texture_fetch_functions.h>
#include <builtin_types.h>

#define _SIZE_T_DEFINED
#ifndef __CUDACC__
#define __CUDACC__
#endif
#ifndef __cplusplus
#define __cplusplus
#endif

texture<float2, 2> texref;

extern "C"
{
    __device__ __constant__ int width;
    __device__ __constant__ int limit;
    __device__ __constant__ int arraySize;

    __global__ void myKernel(
        unsigned char* __restrict__ output,
        const int* __restrict__ in1,
        const int* __restrict__ in2,
        const double* __restrict__ in3,
        const unsigned char* __restrict__ in4)
    {
        int index = blockIdx.x * blockDim.x + threadIdx.x;
        if (index >= limit)
            return;

        bool isTrue = false;
        int varA = in1[index];
        int varB = in2[index];
        double calculatable = 0;

        for (int row = 0; row < arraySize; row++)
        {
            if (isTrue)
            {
                int idx = width * row + varA;
                if (!in4[idx])
                    continue;
                calculatable = calculatable + in3[row];
                isTrue = false;
            }
            else
            {
                int idx = width * row + varB;
                if (!in4[idx])
                    continue;
                calculatable = calculatable - in3[row];
                isTrue = true;
            }
        }

        if (calculatable >= 0) {
            output[index] = 1;
        }
    }
}
code - host (full version)
public static void test()
{
    int N = 10_245_456; // size of the output
    CudaContext cntxt = new CudaContext();
    CUmodule cumodule = cntxt.LoadModule(@"kernel.ptx");
    CudaKernel myKernel = new CudaKernel("myKernel", cumodule, cntxt);
    myKernel.GridDimensions = (N + 255) / 256;
    myKernel.BlockDimensions = Math.Min(N, 256);

    // output
    byte[] out_host = new byte[N]; // i.e. bool
    var out_dev = new CudaDeviceVariable<byte>(out_host.Length);

    // input
    int[] in1_host = new int[N];
    int[] in2_host = new int[N];
    double[] in3_host = new double[50_000]; // change it to 10k and it's OK
    byte[] in4_host = new byte[10_000_000]; // i.e. bool
    var in1_dev = new CudaDeviceVariable<int>(in1_host.Length);
    var in2_dev = new CudaDeviceVariable<int>(in2_host.Length);
    var in3_dev = new CudaDeviceVariable<double>(in3_host.Length);
    var in4_dev = new CudaDeviceVariable<byte>(in4_host.Length);

    // copy input parameters
    in1_dev.CopyToDevice(in1_host);
    in2_dev.CopyToDevice(in2_host);
    in3_dev.CopyToDevice(in3_host);
    in4_dev.CopyToDevice(in4_host);
    myKernel.SetConstantVariable("width", 2);
    myKernel.SetConstantVariable("limit", N);
    myKernel.SetConstantVariable("arraySize", in3_host.Length);

    // exception is thrown here
    myKernel.Run(out_dev.DevicePointer, in1_dev.DevicePointer, in2_dev.DevicePointer, in3_dev.DevicePointer, in4_dev.DevicePointer);
    out_dev.CopyToHost(out_host);
}
analysis
My initial assumption was that I was having memory issues; however, according to the VS debugger I am hitting a little above 500 MB of data on the host. So no matter how much data I copy to the GPU, it shouldn't exceed 1 GB, let alone the maximum of 11 GB. Later on I noticed that the crash only happens when the loop inside the kernel has many records of data to process. It makes me believe that I am hitting some kind of kernel time-out limitation or something of that sort, though without solid proof.
system
My system specs are 16 GB of RAM and a GeForce 1080 Ti 11 GB.
Using CUDA 9.1 and managedCuda version 8.0.22 (also tried the 9.x version from the master branch).
edit 1: 26.04.2018. Just tested the same logic on OpenCL. The code not only finished successfully but also performs 1.5-5x better than the CUDA version, depending on the input parameter sizes:
kernel void Test (global bool* output, global const int* in1, global const int* in2, global const double* in3, global const bool* in4, const int width, const int arraySize)
{
    int index = get_global_id(0);
    bool isTrue = false;
    int varA = in1[index];
    int varB = in2[index];
    double calculatable = 0;

    for (int row = 0; row < arraySize; row++)
    {
        if (isTrue)
        {
            int idx = width * row + varA;
            if (!in4[idx]) {
                continue;
            }
            calculatable = calculatable + in3[row];
            isTrue = false;
        }
        else
        {
            int idx = width * row + varB;
            if (!in4[idx]) {
                continue;
            }
            calculatable = calculatable - in3[row];
            isTrue = true;
        }
    }

    if (calculatable >= 0)
    {
        output[index] = true;
    }
}
I don't really want to start OpenCL/CUDA war here. If there is anything I should be concerned about in my original CUDA implementation - please let me know.
edit 2: 26.04.2018. After following suggestions from the comment section I was able to increase the amount of data processed before an exception is thrown by 3x. I achieved that by switching to a .ptx generated in Release mode rather than Debug mode. The improvement could be related to the fact that the Debug settings also have Generate GPU Debug Information set to Yes, plus other unnecessary settings that could affect performance. I will now search for information on how the time-out can be increased for the kernel. I am still not reaching the results of OpenCL, but I'm getting close.
For the CUDA file generation I am using VS2017 Community, a CUDA 9.1 project, the v140 toolset, building for the x64 platform, post-build events disabled, configuration type: utility. Code generation is set to compute_30,sm_30. I am not sure why it's not sm_70, for example; I don't have other options.
I have managed to improve the CUDA performance over OpenCL. And, more importantly, the code can now finish executing without exceptions. Credit goes to Robert Crovella. Thank you!
Before showing the results here are some specs:
CPU Intel i7 8700k 12 cores (6+6)
GPU GeForce 1080 Ti 11Gb
Here are my results (library/technology):
CPU parallel for loop: 607907 ms (default)
GPU (Alea, CUDA): 9905 ms (x61)
GPU (managedCuda, CUDA): 6272 ms (x97)
GPU (Cloo, OpenCL): 8277 ms (x73)
Solution 1:
The solution was to increase the WDDM TDR delay from the default 2 seconds to 10 seconds. As easy as that.
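For reference, the TDR delay is controlled by documented registry values under the GraphicsDrivers key; a sketch of a .reg file that sets a 10-second delay (apply at your own risk; a reboot is required for it to take effect):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
; Number of seconds the GPU may spend on a single workload
; before Windows resets the display driver (default is 2).
"TdrDelay"=dword:0000000a
```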
Solution 2:
I was able to squeeze out a bit more of performance by:
updating the compute_30,sm_30 settings to compute_61,sm_61 in CUDA project properties
using the Release settings instead of Debug
using .cubin file instead of .ptx
If anyone still wants to suggest ideas on how to improve the performance further, please share them! I am open to ideas. This question has been resolved, though!
p.s. if your display blinks in the same fashion as described here, then try increasing the delay as well.

DLL won't work in C#

Whenever I try to reference my own DLL in C# through Visual Studio, it tells me it was unable to make a reference to the DLL, as it's not a COM library.
I've searched around the Internet for a solution to this with no clear answer or help anywhere, really. It's a rather "simple" DLL which captures the raw picture data from a fingerprint scanner. I have tested that the C++ code worked just fine before I tried to make it into a DLL, just so you know.
I followed Microsoft's guide on how to make a DLL, and here is what I ended up with:
JTBioCaptureFuncsDll.h
JTBioCaptureFuncsDll.cpp
JTBioCapture.cpp
JTBioCaptureFuncsDll.h
#ifdef JTBIOCAPTUREFUNCSDLL_EXPORTS
#define JTBIOCAPTUREFUNCSDLL_API __declspec(dllexport)
#else
#define JTBIOCAPTUREFUNCSDLL_API __declspec(dllimport)
#endif
using byte = unsigned char*;

struct BioCaptureSample {
    INT32 Width;
    INT32 Height;
    INT32 PixelDepth;
    byte Buffer;
};
JTBioCaptureFuncsDll.cpp
// JTBioCapture.cpp : Defines the exported functions for the DLL application.
//
#include "stdafx.h"

namespace JTBioCapture
{
    using byte = unsigned char*;

    class JTBioCapture
    {
    public:
        // Returns a Struct with Information Regarding the Fingerprint Sample
        static JTBIOCAPTUREFUNCSDLL_API BioCaptureSample CaptureSample();
    };
}
JTBioCapture.cpp
/*
 * Courtesy of WinBio God Satish Agrawal on Stackoverflow
 */
BioCaptureSample CaptureSample()
{
    HRESULT hr = S_OK;
    WINBIO_SESSION_HANDLE sessionHandle = NULL;
    WINBIO_UNIT_ID unitId = 0;
    WINBIO_REJECT_DETAIL rejectDetail = 0;
    PWINBIO_BIR sample = NULL;
    SIZE_T sampleSize = 0;

    // Connect to the system pool.
    hr = WinBioOpenSession(
        WINBIO_TYPE_FINGERPRINT,    // Service provider
        WINBIO_POOL_SYSTEM,         // Pool type
        WINBIO_FLAG_RAW,            // Access: Capture raw data
        NULL,                       // Array of biometric unit IDs
        0,                          // Count of biometric unit IDs
        WINBIO_DB_DEFAULT,          // Default database
        &sessionHandle              // [out] Session handle
    );
    if (FAILED(hr))
    {
        wprintf_s(L"\n WinBioOpenSession failed. hr = 0x%x\n", hr);
        goto e_Exit;
    }

    // Capture a biometric sample.
    wprintf_s(L"\n Calling WinBioCaptureSample - Swipe sensor...\n");
    hr = WinBioCaptureSample(
        sessionHandle,
        WINBIO_NO_PURPOSE_AVAILABLE,
        WINBIO_DATA_FLAG_RAW,
        &unitId,
        &sample,
        &sampleSize,
        &rejectDetail
    );
    if (FAILED(hr))
    {
        if (hr == WINBIO_E_BAD_CAPTURE)
        {
            wprintf_s(L"\n Bad capture; reason: %d\n", rejectDetail);
        }
        else
        {
            wprintf_s(L"\n WinBioCaptureSample failed. hr = 0x%x\n", hr);
        }
        goto e_Exit;
    }

    wprintf_s(L"\n Swipe processed - Unit ID: %d\n", unitId);
    wprintf_s(L"\n Captured %d bytes.\n", sampleSize);

    // Courtesy of Art "Messiah" Baker at Microsoft
    PWINBIO_BIR_HEADER BirHeader = (PWINBIO_BIR_HEADER)(((PBYTE)sample) + sample->HeaderBlock.Offset);
    PWINBIO_BDB_ANSI_381_HEADER AnsiBdbHeader = (PWINBIO_BDB_ANSI_381_HEADER)(((PBYTE)sample) + sample->StandardDataBlock.Offset);
    PWINBIO_BDB_ANSI_381_RECORD AnsiBdbRecord = (PWINBIO_BDB_ANSI_381_RECORD)(((PBYTE)AnsiBdbHeader) + sizeof(WINBIO_BDB_ANSI_381_HEADER));
    PBYTE firstPixel = (PBYTE)((PBYTE)AnsiBdbRecord) + sizeof(WINBIO_BDB_ANSI_381_RECORD);
    int width = AnsiBdbRecord->HorizontalLineLength;
    int height = AnsiBdbRecord->VerticalLineLength;

    wprintf_s(L"\n ID: %d\n", AnsiBdbHeader->ProductId.Owner);
    wprintf_s(L"\n Width: %d\n", AnsiBdbRecord->HorizontalLineLength);
    wprintf_s(L"\n Height: %d\n", AnsiBdbRecord->VerticalLineLength);

    BioCaptureSample returnSample;
    byte byteBuffer;
    // Allocate the buffer before copying into it; the original code wrote
    // through an uninitialized pointer, which is undefined behavior.
    byteBuffer = (byte)malloc(AnsiBdbRecord->BlockLength);
    for (int i = 0; i < AnsiBdbRecord->BlockLength; i++) {
        byteBuffer[i] = firstPixel[i];
    }
    returnSample.Buffer = byteBuffer;
    returnSample.Height = height;
    returnSample.Width = width;
    returnSample.PixelDepth = AnsiBdbHeader->PixelDepth;

    /*
     * NOTE: (width / 3) is necessary because we ask for a 24-bit BMP but are only provided
     * a greyscale image which is 8-bit. So we have to cut the bytes by a factor of 3.
     */
    // Commented out as we only need the byte buffer. Comment it back in should you need to save a BMP of the fingerprint.
    // bool b = SaveBMP(firstPixel, (width / 3), height, AnsiBdbRecord->BlockLength, L"C:\\Users\\smf\\Desktop\\fingerprint.bmp");
    // wprintf_s(L"\n Success: %d\n", b);

e_Exit:
    if (sample != NULL)
    {
        WinBioFree(sample);
        sample = NULL;
    }
    if (sessionHandle != NULL)
    {
        WinBioCloseSession(sessionHandle);
        sessionHandle = NULL;
    }
    wprintf_s(L"\n Press any key to exit...");
    _getch();
    return returnSample;
}
The idea is that in C# you call CaptureSample(), and the code then attempts to capture a fingerprint scan. When it gets a scan, a struct should be returned to C# that it can work with, holding:
Byte buffer
Image height
Image width
Image pixel depth
But when I try to reference the DLL in my C# project I get the following error:
I have also tried to use the TlbImp.exe tool, but to no avail. It tells me that the DLL is not a valid type library.
So I'm a bit lost here. I'm new to C++, so making an Interop/COM component is not something I've done before, nor is making a DLL for use in C#.
You cannot reference a library of unmanaged code written in C++ in a .NET project.
So to call code from such a library you have to either use DllImport or a wrapper class.
I referred to this answer: https://stackoverflow.com/a/574810/4546874.

Getting Text from SysListView32 in 64bit

Here is my code:
public static string ReadListViewItem(IntPtr lstview, int item)
{
    const int dwBufferSize = 1024;
    int dwProcessID;
    LV_ITEM lvItem;
    string retval;
    bool bSuccess;
    IntPtr hProcess = IntPtr.Zero;
    IntPtr lpRemoteBuffer = IntPtr.Zero;
    IntPtr lpLocalBuffer = IntPtr.Zero;
    IntPtr threadId = IntPtr.Zero;
    try
    {
        lvItem = new LV_ITEM();
        lpLocalBuffer = Marshal.AllocHGlobal(dwBufferSize);

        // Get the process id owning the window
        threadId = GetWindowThreadProcessId(lstview, out dwProcessID);
        if ((threadId == IntPtr.Zero) || (dwProcessID == 0))
            throw new ArgumentException("hWnd");

        // Open the process with all access
        hProcess = OpenProcess(PROCESS_ALL_ACCESS, false, dwProcessID);
        if (hProcess == IntPtr.Zero)
            throw new ApplicationException("Failed to access process");

        // Allocate a buffer in the remote process
        lpRemoteBuffer = VirtualAllocEx(hProcess, IntPtr.Zero, dwBufferSize, MEM_COMMIT,
            PAGE_READWRITE);
        if (lpRemoteBuffer == IntPtr.Zero)
            throw new SystemException("Failed to allocate memory in remote process");

        // Fill in the LVITEM struct; this is in your own process.
        // Set the pszText member to somewhere in the remote buffer;
        // for the example I used the address immediately following the LVITEM struct.
        lvItem.mask = LVIF_TEXT;
        lvItem.iItem = item;
        lvItem.iSubItem = 2;
        lvItem.pszText = (IntPtr)(lpRemoteBuffer.ToInt32() + Marshal.SizeOf(typeof(LV_ITEM)));
        lvItem.cchTextMax = 50;

        // Copy the local LVITEM to the remote buffer
        bSuccess = WriteProcessMemory(hProcess, lpRemoteBuffer, ref lvItem,
            Marshal.SizeOf(typeof(LV_ITEM)), IntPtr.Zero);
        if (!bSuccess)
            throw new SystemException("Failed to write to process memory");

        // Send the message to the remote window with the address of the remote buffer
        SendMessage(lstview, LVM_GETITEMText, 0, lpRemoteBuffer);

        // Read the struct back from the remote process into the local buffer
        bSuccess = ReadProcessMemory(hProcess, lpRemoteBuffer, lpLocalBuffer, dwBufferSize, IntPtr.Zero);
        if (!bSuccess)
            throw new SystemException("Failed to read from process memory");

        // At this point lpLocalBuffer contains the returned LV_ITEM structure;
        // the next line extracts the text from the buffer into a managed string.
        retval = Marshal.PtrToStringAnsi((IntPtr)(lpLocalBuffer +
            Marshal.SizeOf(typeof(LV_ITEM))));
    }
    finally
    {
        if (lpLocalBuffer != IntPtr.Zero)
            Marshal.FreeHGlobal(lpLocalBuffer);
        if (lpRemoteBuffer != IntPtr.Zero)
            VirtualFreeEx(hProcess, lpRemoteBuffer, 0, MEM_RELEASE);
        if (hProcess != IntPtr.Zero)
            CloseHandle(hProcess);
    }
    return retval;
}
No matter what I do, retval comes back empty, although lpLocalBuffer doesn't.
Here is the definition of LV_ITEM:
[StructLayout(LayoutKind.Sequential)]
private struct LV_ITEM
{
    public int mask;
    public int iItem;
    public int iSubItem;
    public int state;
    public int stateMask;
    public IntPtr pszText;
    public int cchTextMax;
    public int iImage;
    internal int lParam;
    internal int iIndent;
}
I tried compiling for x86, 64-bit, Any CPU; nothing seems to work at all!
Any idea why this might be happening?
C# + .NET 4, Windows 7 64-bit.
Here's a different approach to doing this: use UI Automation. It does the cross-process, cross-bitness work for you and will work against listviews, listboxes, or pretty much any other standard Windows UI. Here's a sample app that gets the HWND of the listview under the mouse pointer and dumps the items in it. It dumps just the name of each item; with listviews, I think you can recurse into the fields of each item if you want.
// Compile using: csc ReadListView.cs /r:UIAutomationClient.dll
using System;
using System.Windows.Automation;
using System.Runtime.InteropServices;

class ReadListView
{
    public static void Main()
    {
        Console.WriteLine("Place pointer over listview and hit return...");
        Console.ReadLine();

        // Get cursor position, then the window handle at that point...
        POINT pt;
        GetCursorPos(out pt);
        IntPtr hwnd = WindowFromPoint(pt);

        // Get the AutomationElement that represents the window handle...
        AutomationElement el = AutomationElement.FromHandle(hwnd);

        // Walk the automation element tree using content view, so we only see
        // list items, not scrollbars and headers. (Use ControlViewWalker if you
        // want to traverse those also.)
        TreeWalker walker = TreeWalker.ContentViewWalker;
        int i = 0;
        for (AutomationElement child = walker.GetFirstChild(el);
             child != null;
             child = walker.GetNextSibling(child))
        {
            // Print out the type of the item and its name
            Console.WriteLine("item {0} is a \"{1}\" with name \"{2}\"", i++, child.Current.LocalizedControlType, child.Current.Name);
        }
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct POINT
    {
        public int x;
        public int y;
    };

    [DllImport("user32.dll")]
    private static extern IntPtr WindowFromPoint(POINT pt);

    [DllImport("user32.dll")]
    private static extern int GetCursorPos(out POINT pt);
}
I know this is old, but I found it while trying to solve my problem and hopefully this will help someone else.
I used the recommendation in this question, which was in C++, and slightly modified the LV_ITEM structure to make it work with 64-bit in VB.NET (I haven't tested in C#, but I imagine the solution is quite similar).
Public Structure LV_ITEM64
    Public mask As Integer
    Public iItem As Integer
    Public iSubItem As Integer
    Public state As Integer
    Public stateMask As Integer
    Public placeholder1 As Integer
    Public pszText As Integer
    Public placeholder2 As Integer
    Public cchTextMax As Integer
    Public iImage As Integer
End Structure
Then, when declaring the instance of the structure, I used the following code to choose between the 64-bit and 32-bit structures:
Dim lvi As Object
If IntPtr.Size = 4 Then
    lvi = New LV_ITEM
Else
    lvi = New LV_ITEM64
End If
You have clarified that you are trying to read items from a list view control in a 32-bit process into a different, 64-bit process.
I have seen many questions on this topic in various forums, and not one ever seemed to reach a successful outcome.
I think your best option is to create a 32-bit executable which will be able to read out of the other program's list view.
There is at least one obstacle to overcome if your program is 32-bit and the target program is 64-bit. Or the other way around. The LVITEM declaration will be wrong, IntPtr has the wrong number of bits. Which makes Marshal.SizeOf() return the wrong value. Alignment is okay, I think, by accident. Changing the field to either int or long can fix the problem, depending on the bitness of the target program. Which you can find out by looking at the Taskmgr.exe, Processes tab. The process name is post-fixed with "*32" if it is a 32-bit process. Or simply stay out of trouble by setting your project's Target platform setting to match the target process (x86 or AnyCPU).
Debug this by using Debug + Windows + Memory + Memory1. Put "lpLocalBuffer" in the Address box and observe what you see vs what your code reads. You should definitely be able to tell from the hex view that you got the string properly. Note that if you see zeros between the string characters then the target process uses the Unicode version of the list view. Marshal.PtrToStringUnicode is then required to read it.
Sorry my response is so late, but I just came across the same issue. Here is the structure I used for VB.NET, which works on both 32- and 64-bit systems.
<StructLayout(LayoutKind.Sequential, Pack:=1)> _
Public Structure LV_ITEM
    Public Mask As UInteger
    Public Index As Integer
    Public SubIndex As Integer
    Public State As Integer
    Public StateMask As IntPtr
    Public Text As String
    Public TextLength As Integer
    Public ImageIndex As Integer
    Public LParam As IntPtr
End Structure

Convert array of structs to IntPtr

I am trying to convert an array of the RECT structure (given below) into an IntPtr, so I can send the pointer using PostMessage to another application.
[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int Left;
    public int Top;
    public int Right;
    public int Bottom;
    // lots of functions snipped here
}
// so we have something to send, in reality I have real data here
// also, the length of the array is not constant
RECT[] foo = new RECT[4];
IntPtr ptr = Marshal.AllocHGlobal(Marshal.SizeOf(foo[0]) * 4);
Marshal.StructureToPtr(foo, ptr, true); // -- FAILS
This gives an ArgumentException on the last line ("The specified structure must be blittable or have layout information."). I need to somehow get this array of RECTs over to another application using PostMessage, so I really need a pointer to this data.
What are my options here?
UPDATE: This seems to work:
IntPtr result = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Win32.RECT)) * foo.Length);
IntPtr c = new IntPtr(result.ToInt32());
for (int i = 0; i < foo.Length; i++)
{
    Marshal.StructureToPtr(foo[i], c, true);
    c = new IntPtr(c.ToInt32() + Marshal.SizeOf(typeof(Win32.RECT)));
}
UPDATED AGAIN to fix what arbiter commented on.
StructureToPtr expects a struct object, and foo is not a structure, it is an array; that is why the exception occurs.
I can suggest writing the structures in a cycle (sadly, StructureToPtr does not have an overload with an index):
long LongPtr = ptr.ToInt64(); // Must work both on x86 and x64
for (int I = 0; I < foo.Length; I++)
{
    IntPtr RectPtr = new IntPtr(LongPtr);
    Marshal.StructureToPtr(foo[I], RectPtr, false); // You do not need to erase the struct in this case
    LongPtr += Marshal.SizeOf(typeof(Rect));
}
Another option is to write each structure as four integers, using Marshal.WriteInt32:
for (int I = 0; I < foo.Length; I++)
{
    int Base = I * sizeof(int) * 4;
    Marshal.WriteInt32(ptr, Base + 0, foo[I].Left);
    Marshal.WriteInt32(ptr, Base + sizeof(int), foo[I].Top);
    Marshal.WriteInt32(ptr, Base + sizeof(int) * 2, foo[I].Right);
    Marshal.WriteInt32(ptr, Base + sizeof(int) * 3, foo[I].Bottom);
}
And lastly, you can use the unsafe keyword and work with pointers directly.
Arbiter has given you one good answer for how to marshal arrays of structs. For blittable structs like these I, personally, would use unsafe code rather than manually marshaling each element to unmanaged memory. Something like this:
RECT[] foo = new RECT[4];
unsafe
{
    fixed (RECT* pBuffer = foo)
    {
        // Do work with pointer
    }
}
or you could pin the array using a GCHandle.
Unfortunately, you say you need to send this information to another process. If the message you are posting is not one of the ones for which Windows provides automatic marshaling, then you have another problem. Since the pointer is relative to the local process, it means nothing in the remote process, and posting a message with this pointer will cause unexpected behavior, likely including a program crash. So what you need to do is write the RECT array into the other process's memory, not your own. To do this you need to use OpenProcess to get a handle to the process, VirtualAllocEx to allocate the memory in the other process, and then WriteProcessMemory to write the array into the other process's virtual memory.
Unfortunately again, if you are going from a 32-bit process to a 32-bit process, or from a 64-bit process to a 64-bit process, things are quite straightforward; but from a 32-bit process to a 64-bit process things can get a little hairy. VirtualAllocEx and WriteProcessMemory are not really supported from 32 to 64. You may have success by forcing VirtualAllocEx to allocate its memory in the bottom 4 GB of the 64-bit address space, so that the resulting pointer is valid for the 32-bit process API calls, and then writing with that pointer. In addition, you may have struct size and packing differences between the two process types. With RECT there is no problem, but some other structs with packing or alignment issues might need to be written field by field into the 64-bit process in order to match the 64-bit struct layout.
You could try the following:
RECT[] rects = new RECT[4];
IntPtr[] pointers = new IntPtr[4];
IntPtr result = Marshal.AllocHGlobal(IntPtr.Size * rects.Length);
for (int i = 0; i < rects.Length; i++)
{
    // Each element needs room for a whole RECT, not just a pointer.
    pointers[i] = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(RECT)));
    Marshal.StructureToPtr(rects[i], pointers[i], true);
    Marshal.WriteIntPtr(result, i * IntPtr.Size, pointers[i]);
}
// the array that you need is stored in result
And don't forget to free everything after you are finished.
I was unable to get this solution to work, so I did some searching, and the solution given here worked for me:
http://social.msdn.microsoft.com/Forums/en-US/clr/thread/dcfd6310-b03b-4552-b4c7-6c11c115eb45
