I need to listen for keyboard events from a specific keyboard device and know which key is pressed in a (C#) WPF application. Ideally this shouldn't be window dependent and should work as long as the application has focus.
Unfortunately I can't think of, or find, any way to do this.
Any ideas?
D.R
Edit: I've been looking into the OpenTK.Input library, which has a nice interface for keys... Does anybody know how to get a KeyboardDevice without creating a GameWindow?
Info: Just by the way, this is for a barcode scanner which emulates a keyboard... whose bright idea was that, eh?
I'm actually working on a project that does exactly that. Check out Kaptivate. It installs a global keyboard hook, ties it together (using magic) with the Raw Input API, and then invokes a callback function so that YOU can decide (1) is this the device I'm after? and (2) should I allow other apps to see the keystroke, or keep it for myself? Right now it's only C++, but one of the goals is eventually to have C# wrappers.
For keyboards, each generated event tells you the vkey, scan code, and source device.
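This isn't Kaptivate itself, but a rough C# sketch of the Raw Input half of what it does, assuming a plain WPF window: register for keyboard raw input and read the vkey, scan code and source device handle out of each WM_INPUT message. The struct layouts and constants follow the Win32 documentation, not any Kaptivate API, and the class name is made up for illustration.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public class RawKeyboardWindow : Window
{
    private const int WM_INPUT = 0x00FF;
    private const uint RID_INPUT = 0x10000003;
    private const uint RIDEV_INPUTSINK = 0x00000100; // deliver input even when unfocused
    private const uint RIM_TYPEKEYBOARD = 1;

    [StructLayout(LayoutKind.Sequential)]
    private struct RAWINPUTDEVICE { public ushort UsagePage, Usage; public uint Flags; public IntPtr Target; }

    [StructLayout(LayoutKind.Sequential)]
    private struct RAWINPUTHEADER { public uint Type, Size; public IntPtr Device, WParam; }

    [StructLayout(LayoutKind.Sequential)]
    private struct RAWKEYBOARD
    {
        public ushort MakeCode;          // scan code
        public ushort Flags, Reserved;
        public ushort VKey;              // virtual-key code
        public uint Message, ExtraInformation;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct RAWINPUT { public RAWINPUTHEADER Header; public RAWKEYBOARD Keyboard; }

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool RegisterRawInputDevices(RAWINPUTDEVICE[] devices, uint count, uint size);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern uint GetRawInputData(IntPtr input, uint command, IntPtr data, ref uint size, uint headerSize);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);

        // Usage page 0x01 / usage 0x06 is "generic desktop controls / keyboard".
        var devices = new[]
        {
            new RAWINPUTDEVICE { UsagePage = 0x01, Usage = 0x06, Flags = RIDEV_INPUTSINK, Target = source.Handle }
        };
        RegisterRawInputDevices(devices, 1, (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE)));
    }

    private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg != WM_INPUT) return IntPtr.Zero;

        uint size = 0;
        uint headerSize = (uint)Marshal.SizeOf(typeof(RAWINPUTHEADER));
        GetRawInputData(lParam, RID_INPUT, IntPtr.Zero, ref size, headerSize);

        IntPtr buffer = Marshal.AllocHGlobal((int)size);
        try
        {
            if (GetRawInputData(lParam, RID_INPUT, buffer, ref size, headerSize) == size)
            {
                var input = (RAWINPUT)Marshal.PtrToStructure(buffer, typeof(RAWINPUT));
                if (input.Header.Type == RIM_TYPEKEYBOARD)
                {
                    // Header.Device is what tells the barcode scanner apart from the "real" keyboard.
                    Console.WriteLine("Device {0}: vkey {1}, scan {2}",
                        input.Header.Device, input.Keyboard.VKey, input.Keyboard.MakeCode);
                }
            }
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
        return IntPtr.Zero;
    }
}
```

In a real application you would compare Header.Device against the handle of the scanner, which can be looked up in advance with GetRawInputDeviceList/GetRawInputDeviceInfo.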
Have a look at the two links below.
Using global keyboard hook (WH_KEYBOARD_LL) in WPF / C#
https://gist.github.com/471698
Both should be exactly what you want.
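For reference, here is a minimal sketch of the WH_KEYBOARD_LL approach those links describe (standard SetWindowsHookEx boilerplate, not code lifted from either article). Note that a low-level hook reports the key but not which physical keyboard sent it; telling devices apart still needs the Raw Input API.

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

public static class GlobalKeyboardHook
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;

    private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    // Keep a reference to the delegate so the GC doesn't collect it while the hook is installed.
    private static readonly LowLevelKeyboardProc _proc = HookCallback;
    private static IntPtr _hookId = IntPtr.Zero;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr GetModuleHandle(string lpModuleName);

    public static void Start()
    {
        using (var curProcess = Process.GetCurrentProcess())
        using (var curModule = curProcess.MainModule)
        {
            _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(curModule.ModuleName), 0);
        }
    }

    public static void Stop()
    {
        UnhookWindowsHookEx(_hookId);
    }

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);   // first field of KBDLLHOOKSTRUCT
            Debug.WriteLine("Key down: " + vkCode);
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam);
    }
}
```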
Managed to find this tutorial on MSDN, along with a sample Scanner object which comes with the POS.Net SDK.
I haven't really had the time to pick apart how it works yet to give a proper overview, but it seems I should be able to write a custom service object abstraction for any "keyboard wedge" HID device.
Related
I have an extension for a WPF program, and I need to simulate a series of keypresses for it. WinForms cannot be used as a reference. I cannot show code, but none of my code is relevant to this anyway. I just need to simulate a keypress at the most basic level, so the program thinks the user is typing. External libraries are also out, because the extension is a single .DLL file deployed to hundreds of systems, so whole libraries cannot come with it. Thanks - Ethan
You can accomplish this through the use of a technique called Platform Invoke (P/Invoke for short). This lets higher-level code interact with the lower levels of code used by the platform it's running on.
The "higher-level code" in this case is your C# code, the "platform" is the Windows OS and the "lower-level code" are the native libraries that are part of the OS. These native libraries include code to do such things as send keyboard and mouse input, and since these libraries are part of Windows itself, you don't have to package them with your program.
The method you will want to p/invoke is called SendInput. It's used to send keyboard and mouse messages to the active program, just as if they were sent by a physical keyboard or mouse. Take a look at this page on pinvoke.net for instructions and an example on how to implement that method. That site is a good general resource for all your p/invoke needs.
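As a rough illustration (not the pinvoke.net sample verbatim), a single key-tap via SendInput might look like the sketch below. The declarations follow the Win32 INPUT/KEYBDINPUT documentation; the MOUSEINPUT member is included only so the union has the correct native size on 64-bit.

```csharp
using System;
using System.Runtime.InteropServices;

public static class KeyboardSimulator
{
    private const uint INPUT_KEYBOARD = 1;
    private const uint KEYEVENTF_KEYUP = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    private struct KEYBDINPUT
    {
        public ushort wVk;
        public ushort wScan;
        public uint dwFlags;
        public uint time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Explicit)]
    private struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;   // largest member keeps the native union size correct
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct INPUT
    {
        public uint type;
        public InputUnion u;
    }

    [DllImport("user32.dll", SetLastError = true)]
    private static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    // Press and release a single virtual key, as if typed on a physical keyboard.
    public static void TapKey(ushort virtualKey)
    {
        var inputs = new[]
        {
            new INPUT { type = INPUT_KEYBOARD, u = new InputUnion { ki = new KEYBDINPUT { wVk = virtualKey } } },
            new INPUT { type = INPUT_KEYBOARD, u = new InputUnion { ki = new KEYBDINPUT { wVk = virtualKey, dwFlags = KEYEVENTF_KEYUP } } }
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}

// Usage: KeyboardSimulator.TapKey(0x41);  // 0x41 = 'A'
```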
I'm trying to create a Windows application that can manage multiple mice and tell which device is generating the input (something like Microsoft's Multimouse, but without a UI).
The problem is that I don't want to use a UI, because this will be a plug-in for Unity3D, which imposes a number of limitations.
I also thought about a separate service that talks to Unity, but in that case I would prefer not to block mouse input for the whole system.
In this particular case I have to manage 12 of these devices: AimTrak units, which are recognized as mouse input devices.
So does anyone have an idea how I can do this without hacking the devices and rewriting their driver?
OK, after three days of googling I have found a way to do this!
You need to set up a Windows system hook through user32.dll in order to catch all mouse input.
I need to block any screen capture software on the computer from taking screenshots. Since all of them rely on standard API functions, I think I could monitor and block those calls.
I need to use C#.
All I have found is how to monitor and block the calls within one specific program (a screen capture program): you look up a function in that program and then replace its address with the address of your own function.
But how can I do that when I don't have a specific program to target? I need to block anything that tries to take a screenshot.
Whether your end goal is possible I don't know, but I can help you out with the API-hooking portion.
I have used the library EasyHook many times in the past, this will let you hook and intercept system function calls from C# code fairly easily. Just read through the PDF tutorial for setup instructions.
For actually finding the APIs I recommend Rohitab's API Monitor. It's still in alpha, but it works really well and is free. You just attach it to a process and it tells you every external DLL call the process makes (with the parameters passed, if you have the XML definition file for the DLL; the program comes with almost all of the Windows API DLLs pre-defined).
The combination of EasyHook and API Monitor is a great 1-2 punch for mucking with other program's calls.
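To give a flavour of the EasyHook side, here is a minimal in-process sketch that hooks gdi32!BitBlt (one of the calls capture tools commonly use) and swallows it. This only affects the process it runs in; intercepting other processes additionally needs EasyHook's remote injection, which the PDF tutorial walks through. The class and delegate names are my own, not part of EasyHook.

```csharp
using System;
using System.Runtime.InteropServices;
using EasyHook;

public class BitBltBlocker : IDisposable
{
    // Delegate matching gdi32!BitBlt, the blit call many capture tools rely on.
    [UnmanagedFunctionPointer(CallingConvention.StdCall, SetLastError = true)]
    private delegate bool BitBltDelegate(
        IntPtr hdcDest, int xDest, int yDest, int width, int height,
        IntPtr hdcSrc, int xSrc, int ySrc, uint rop);

    // The real function, in case the hook decides to let a call through.
    [DllImport("gdi32.dll", SetLastError = true)]
    private static extern bool BitBlt(
        IntPtr hdcDest, int xDest, int yDest, int width, int height,
        IntPtr hdcSrc, int xSrc, int ySrc, uint rop);

    private readonly LocalHook _hook;

    public BitBltBlocker()
    {
        _hook = LocalHook.Create(
            LocalHook.GetProcAddress("gdi32.dll", "BitBlt"),
            new BitBltDelegate(BitBlt_Hooked),
            null);

        // Activate the hook on all threads except the current one.
        _hook.ThreadACL.SetExclusiveACL(new int[] { 0 });
    }

    private static bool BitBlt_Hooked(
        IntPtr hdcDest, int xDest, int yDest, int width, int height,
        IntPtr hdcSrc, int xSrc, int ySrc, uint rop)
    {
        // Returning false swallows the blit; call the real BitBlt(...) instead
        // if you decide the caller should be allowed through.
        return false;
    }

    public void Dispose()
    {
        _hook.Dispose();
    }
}
```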
It is not possible to prevent screenshots from being taken. The battle is already lost because of the DWM (Desktop Window Manager). It's lower level than Win32 and device contexts.
If you want to protect the text in your program, note that there are far easier ways to extract it than screenshots and OCR: TextOut/Direct2D hooking and the accessibility APIs, for example.
If there's a lot of IP in your program, don't make it all available on screen. Make sure it's tedious to crawl the GUI for text and hard to automate, and don't load whole texts into the program's memory.
Possible solutions:
1. To prevent copying of text, draw the text as an image.
2. To defeat accessibility technologies such as screen readers, override WndProc in your control and handle and ignore the window message WM_GETOBJECT (see the sketch after this list).
3. To make OCR harder, draw graphics behind the text: still human readable, but much harder for a machine to interpret.
None of these methods is invasive for the user.
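As a rough, WinForms-flavoured sketch of suggestion 2 (a WPF control would instead intercept the message through an HwndSource hook):

```csharp
using System;
using System.Windows.Forms;

public class NoAutomationTextBox : TextBox
{
    private const int WM_GETOBJECT = 0x003D;

    protected override void WndProc(ref Message m)
    {
        // Swallow accessibility queries so screen readers / UI Automation
        // clients cannot walk this control's text.
        if (m.Msg == WM_GETOBJECT)
        {
            m.Result = IntPtr.Zero;
            return;
        }
        base.WndProc(ref m);
    }
}
```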
** A very invasive suggestion **:
If you are really serious about preventing anyone from "stealing" your content:
Implement mouse and keyboard hooks. Filter out typical copy shortcuts. Prevent the mouse from leaving the boundaries of your application.
Allow your application to only run when the OS runs well-known processes and services.
If any process starts which you don't recognize, black out the application, notify the user about it, and request that the user close it. And of course make sure no one is simply spoofing a well-known process.
Monitor the clipboard as you suggested yourself.
You can of course soften some of these suggestions based on the context of your application.
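For the clipboard-monitoring point, here is one hedged sketch (assuming Vista or later and a WPF window) using AddClipboardFormatListener; what you do when a copy is detected is up to you.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public class ClipboardWatcherWindow : Window
{
    private const int WM_CLIPBOARDUPDATE = 0x031D;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool AddClipboardFormatListener(IntPtr hwnd);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);
        AddClipboardFormatListener(source.Handle);   // Vista and later
    }

    private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg == WM_CLIPBOARDUPDATE)
        {
            // A copy happened somewhere in the session; inspect, log or clear it here.
            if (Clipboard.ContainsText())
                Console.WriteLine("Clipboard now holds text.");
        }
        return IntPtr.Zero;
    }
}
```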
As Scott just posted, it can likely be mitigated with API hooks that check that paint events only go to desktop-bound handles and refuse to paint otherwise. However, you need to consider the following scenarios and decide whether they are a relevant threat to your approach:
Your software may be running in a virtual machine like VMware. Such software can capture the screen at the "virtual hardware" level, and your API hooks will not be able to detect it - this would be the easiest approach if I wanted to bypass your protections.
As another post here suggests, nothing prevents someone from plugging the monitor cable into another computer's capture card and taking the screenshot that way. Again, your hooks will be helpless.
Bottom line: you can make it somewhat harder to do, but bypassing such protection may be a pretty trivial thing to do.
My 2c.
There are many questions relating to simulating mouse/keyboard input in WPF (and Windows, for that matter). I have something a little different than the usual question, I think, and I'd like your input. Most posts I've seen have a specific higher-level action in mind: I want to click this, I want to move the mouse here, etc. To emulate these, one can simply use routed events.
However, I'm hoping to operate a mouse from a remote app, and would like to inject mouse events at a low level: the current mouse position is x,y and the button state is such and such. My target framework is WPF, but if something like a generic virtual mouse driver is the way to go, I'm cool with that too. I do not have security concerns: the apps receiving the messages will be coded by me at a higher level, so I don't need crazy hacks.
I'm willing to use managed or unmanaged code and take the rabbit hole as deep as it needs to go to make this work, but I don't want to reinvent the wheel. I can host my apps in an HwndHost or some such too, in case I need access to window messages.
Thoughts?
WPF has some built-in automation capabilities. It's a bit complicated, and I've never actually tried it myself, but I've been reading about it recently - it might be worth checking out:
http://msdn.microsoft.com/en-us/library/ms747327.aspx
Or search Google for "WPF automation".
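As a small taste of that API (my own toy example, not from the article): you reference the UIAutomationClient and UIAutomationTypes assemblies, find an element by name, and drive it through a control pattern. The window and button names below are placeholders.

```csharp
using System.Windows.Automation;

class AutomationDemo
{
    static void Main()
    {
        // Find a top-level window by its name (placeholder: "Calculator").
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Calculator"));
        if (window == null) return;

        // Find a button inside it by name (placeholder: "7").
        AutomationElement button = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "7"));
        if (button == null) return;

        // InvokePattern performs the control's default action (a button click).
        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}
```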
I want to detect when my touchpad is clicked!
I normally use a USB mouse, so I don't use the touchpad for anything. Instead I'd like to make it possible to perform an action in .NET when the touchpad is clicked. That way I can use it as a shortcut: one tap and something cool happens.
Is this possible, and if yes, any clue how? I'd prefer if it could be working in VB.NET or C#.
My theory is that I'd have to make a mouse hook, which then somehow determines which device the click is coming from. If the click is determined to be from the touchpad, then cancel the click and doWhatever().
Thanks!
* EDIT *
Well, it's "solved", sort of :) In an odd coincidence, Synaptics released their latest driver and software for their touchpads a few days ago with some new functionality. As my laptop has a Synaptics touchpad, I tried out the software and, interestingly enough, the functionality for designating clicks on the trackpad to perform an action was built in.
So the desired function has been achieved, without a line of code (my own code anyway :).
The answer goes to Adrian, though, for the link to the RawInputSharp library. I tinkered with it yesterday, and I'm 90% sure it could be used for this purpose in the event a laptop doesn't have a Synaptics trackpad.
Have a look at the RawInputSharp library from this page. It uses p/invokes into user32.dll to get hold of input device information. Using it you can detect which device (i.e. which mouse) the input is coming from.
Having had a little play with it, I managed to extract some code that displays just the device id - a different value depending on whether I use my USB mouse or my internal touchpad. The tricky part would be automatically identifying which device id belongs to your touchpad, but you could configure this manually in your application.
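For reference, here is a rough sketch of the two user32 calls that this kind of enumeration relies on (my own illustration, not RawInputSharp's API). The device name string is usually enough to spot the touchpad, since it looks quite different from a USB mouse's VID/PID path.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

static class RawDeviceLister
{
    private const uint RIDI_DEVICENAME = 0x20000007;
    private const uint RIM_TYPEMOUSE = 0;

    [StructLayout(LayoutKind.Sequential)]
    private struct RAWINPUTDEVICELIST
    {
        public IntPtr hDevice;
        public uint dwType;   // 0 = mouse, 1 = keyboard, 2 = other HID
    }

    [DllImport("user32.dll", SetLastError = true)]
    private static extern uint GetRawInputDeviceList(
        [In, Out] RAWINPUTDEVICELIST[] deviceList, ref uint deviceCount, uint size);

    [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern uint GetRawInputDeviceInfo(
        IntPtr device, uint command, StringBuilder data, ref uint size);

    public static void ListMice()
    {
        uint count = 0;
        uint structSize = (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICELIST));
        GetRawInputDeviceList(null, ref count, structSize);   // first call: just the count

        var devices = new RAWINPUTDEVICELIST[count];
        GetRawInputDeviceList(devices, ref count, structSize);

        foreach (var device in devices)
        {
            if (device.dwType != RIM_TYPEMOUSE) continue;

            uint nameLength = 0;
            GetRawInputDeviceInfo(device.hDevice, RIDI_DEVICENAME, null, ref nameLength);
            var name = new StringBuilder((int)nameLength);
            GetRawInputDeviceInfo(device.hDevice, RIDI_DEVICENAME, name, ref nameLength);

            // A Synaptics touchpad usually shows an ACPI/vendor-looking path here,
            // while a USB mouse shows a USB VID/PID path.
            Console.WriteLine("{0}: {1}", device.hDevice, name);
        }
    }
}
```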