I want to detect when my touchpad is clicked!
I normally use a USB mouse, so I don't use the touchpad for anything. Instead, I'd like to make it possible to perform an action in .NET when the touchpad is clicked. That way I can use it as a shortcut: one tap and something cool happens.
Is this possible, and if so, any clue how? I'd prefer a solution in VB.NET or C#.
My theory is that I'd have to set up a mouse hook, which then somehow determines which device the click is coming from. If the click is determined to be from the touchpad, cancel the click and doWhatever().
Thanks!
EDIT
Well, it's "solved", sort of :) In a odd coincidence, Synaptics released their latest driver and software for their touchpads a few days ago with some new functionality. As my laptop has a synaptics touchpad, I tried out the software and interestingly enough, the functionality for designating clicks on the trackpad to perform an action, was built-in.
So the desired function has been achieved, without a line of code (my own code anyway :).
Answer goes to Adrian though for the link to the RawInputSharp library. I tinkered with it yesterday, and I'm 90% sure it would be possible to use for this purpose, in the event a laptop doesn't have a synaptics trackpad.
Have a look at the RawInputSharp library from this page. It uses P/Invoke calls into User32.dll to get hold of input device information. Using it, you can detect which device (i.e. which mouse) input is coming from.
Having had a little play with it, I managed to extract some code that displays just the device ID - a different value depending on whether I use my USB mouse or my internal touchpad. The tricky part would be identifying which device ID belongs to your touchpad automatically, but you could configure this manually in your application.
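For reference, here is a minimal sketch of the kind of thing RawInputSharp is wrapping, assuming a WinForms window: register for mouse raw input, then read the device handle out of each WM_INPUT message. The constants and struct layouts follow the Win32 documentation, but treat the code as untested.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class RawMouseForm : Form
{
    const int WM_INPUT = 0x00FF;
    const uint RID_INPUT = 0x10000003;
    const uint RIM_TYPEMOUSE = 0;

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICE
    {
        public ushort UsagePage;
        public ushort Usage;
        public uint Flags;
        public IntPtr Target;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTHEADER
    {
        public uint Type;
        public uint Size;
        public IntPtr Device;   // differs per physical device (USB mouse vs. touchpad)
        public IntPtr WParam;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(RAWINPUTDEVICE[] devices, uint count, uint size);

    [DllImport("user32.dll")]
    static extern uint GetRawInputData(IntPtr hRawInput, uint command,
                                       IntPtr data, ref uint size, uint headerSize);

    public RawMouseForm()
    {
        var rid = new RAWINPUTDEVICE
        {
            UsagePage = 0x01,   // generic desktop controls
            Usage = 0x02,       // mouse
            Flags = 0,          // deliver input while this window is in the foreground
            Target = Handle
        };
        RegisterRawInputDevices(new[] { rid }, 1, (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE)));
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_INPUT)
        {
            uint size = 0;
            uint headerSize = (uint)Marshal.SizeOf(typeof(RAWINPUTHEADER));
            GetRawInputData(m.LParam, RID_INPUT, IntPtr.Zero, ref size, headerSize);
            IntPtr buffer = Marshal.AllocHGlobal((int)size);
            try
            {
                if (GetRawInputData(m.LParam, RID_INPUT, buffer, ref size, headerSize) == size)
                {
                    var header = Marshal.PtrToStructure<RAWINPUTHEADER>(buffer);
                    if (header.Type == RIM_TYPEMOUSE)
                        Console.WriteLine("Mouse input from device handle " + header.Device);
                    // Compare header.Device against the handle you recorded for the touchpad,
                    // then trigger your action when it matches.
                }
            }
            finally { Marshal.FreeHGlobal(buffer); }
        }
        base.WndProc(ref m);
    }
}
```

The device handle isn't guaranteed to stay the same across sessions, so in practice you would probably enumerate devices with GetRawInputDeviceList/GetRawInputDeviceInfo (or just have the user tap each device once) and store which handle is the touchpad.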
Related
I am looking into writing something similar to Steady Mouse. My grandpa has pretty bad tremors, and they prevent him from doing much on the computer. Unfortunately, Steady Mouse doesn't appear to work on Windows 10, and it seems the developer has discontinued the project. Seeing as I am looking for a project to add to my portfolio, I figured I would see if I could hack something together. The only problem is that I've never done anything this low level before, so I am unsure of where to begin.
It seems the Kalman filter is my best bet as a filtering algorithm, but I am unsure of how to provide the input. I've never used the Windows API; is this something it provides? Or do I instead hook directly into the mouse device itself, and how would that work? Am I even on the right track here?
I am assuming this would best be a background process launched on startup that filters the device input before the OS draws the cursor on the screen. Obviously, it would need access to all mouse events and movements regardless of which program is being used.
Investigate the Windows message hook functions; they make it possible to intercept and change Windows messages such as WM_MOUSEMOVE.
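To make that concrete, here is a minimal, untested sketch of a low-level mouse hook (WH_MOUSE_LL) in C# that sees every WM_MOUSEMOVE before the cursor moves. The Smooth() method is a placeholder for whatever filter (Kalman, low-pass) you end up writing, and the swallow-and-reinject step is an assumption about how a Steady Mouse-style filter would work.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class MouseFilter
{
    const int WH_MOUSE_LL = 14;
    const int WM_MOUSEMOVE = 0x0200;
    const uint LLMHF_INJECTED = 0x00000001;

    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X; public int Y; }

    [StructLayout(LayoutKind.Sequential)]
    struct MSLLHOOKSTRUCT
    {
        public POINT pt;
        public uint mouseData;
        public uint flags;
        public uint time;
        public IntPtr dwExtraInfo;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    static readonly HookProc _proc = Callback;   // keep a reference so the GC can't collect the delegate
    static IntPtr _hook;

    static void Main()
    {
        _hook = SetWindowsHookEx(WH_MOUSE_LL, _proc, GetModuleHandle(null), 0);
        Application.Run();   // a low-level hook needs a message loop on the installing thread
    }

    static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && (int)wParam == WM_MOUSEMOVE)
        {
            var info = Marshal.PtrToStructure<MSLLHOOKSTRUCT>(lParam);
            if ((info.flags & LLMHF_INJECTED) == 0)    // skip moves we injected ourselves, or we'd loop
            {
                POINT smoothed = Smooth(info.pt);      // hypothetical filtering step
                SetCursorPos(smoothed.X, smoothed.Y);  // re-inject the filtered position
                return (IntPtr)1;                      // swallow the original, jittery move
            }
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }

    static POINT Smooth(POINT raw) => raw;   // replace with a real Kalman/low-pass filter
}
```

One thing to verify is whether the moves generated by SetCursorPos come back through the hook with the injected flag set; if they don't, re-injecting through SendInput instead is the safer route.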
So I have received an external motion controller device (Myo), and I wish to create an application where certain motions simulate a keystroke or key press globally (regardless of which application is in the foreground). This will happen while my program is running in the background, so it can receive motion inputs and output them as keyboard presses.
An example: if I were playing a baseball game in the foreground (full screen) and did a pitching motion, the program would output whichever key performs a pitch in the game.
I have looked into the SendKeys class in C#, but I feel there might be limitations as to what it can do (specifically, sending key presses globally).
Is there a good way to write a program in C# that maps my motion controller's actions to key presses? It would also be good if it could send key_down and key_up separately, so keys can be held.
The most direct way to accomplish truly global key presses is to emulate a keyboard. This would involve creating a keyboard driver that somehow provides access to your background program; however, that means kernel programming, which is quite complex.
An alternative is to use the SendKeys API combined with some logic to find the currently active application.
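As a rough, untested sketch of that second approach (the gesture-handler name and the window-title check are placeholders), a callback fired by your motion-recognition code could look something like this:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;
using System.Windows.Forms;

static class MotionToKey
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxLength);

    // Call this from wherever your motion-recognition code detects a pitch gesture.
    public static void OnPitchGesture()
    {
        var title = new StringBuilder(256);
        GetWindowText(GetForegroundWindow(), title, title.Capacity);

        // Only fire when the game window is active; the title text is a placeholder.
        if (title.ToString().Contains("Baseball"))
            SendKeys.SendWait("p");   // goes to whichever window currently has keyboard focus
    }
}
```

Note that SendKeys cannot hold a key down; for separate key_down/key_up events you would need keybd_event or SendInput, and a full-screen DirectX game may ignore synthetic input either way.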
I know this isn't a C# solution, but the Myo Script interface in Myo Connect was essentially built for this purpose and would probably be the easiest way of testing things out if nothing else.
To send a keyboard command using Myo Script you can use myo.keyboard() (docs here).
If you want the script to be active at all times, you will need to consistently return true in onForegroundWindowChange() and pay attention to the script's location in the application manager. Scripts at the top of the application manager will be checked first, so your script may lose out if there is another one above it that 'wants' control of a given application.
I'm working on a small tool for a DirectX game and I want to prevent the user from pressing a certain key (F12 in this case) for a certain period.
I could find many options for simulating key presses, but what are the options when it comes to nulling out a keystroke before the game reads it?
The language doesn't really matter, although I would prefer a C# or C++ solution, or just a nudge in the right direction :)
Thanks in advance!
The good news is, I've done this before so I can say that it is possible and it does work.
The bad news is that it's not simple. It requires a lot of complicated code, and will likely take a long time to implement, but I'll explain how you can do it.
Applications like DirectX games usually register for raw input.
Since you want to stop a keyboard event from reaching the application, you need a way to insert your code between the raw input and the game so you can check the raw input and decide whether to allow it to be passed to the game:
So you want to change the flow from:
Raw Input --> Game
to
Raw Input --> Your Code --> Game
Without having access to the source code of the game, you have to find a way to insert your code.
When there is keyboard input available, the game will call the WinAPI function GetRawInputData, which tells it about the keyboard event. Ideally, when the game calls this function, it would actually call our code instead of the WinAPI function. Then we could decide what to tell the game about the keyboard event; we could tell it anything we want (e.g. ignore F12). Sounds great, right? Here's where it gets interesting...
We can take advantage of how Windows loads executables into memory. Typically, a program uses (or 'imports') calls to functions in other DLLs (such as GetRawInputData in User32.dll). When the program gets loaded into memory, Windows fills in a table (the Import Address Table, or IAT) with pointers to the executable code in the appropriate DLLs. This means that when the program calls the function, it gets directed to the executable code in User32.dll in memory to run it.
Wouldn't it be great if we could write/patch the address of one of our functions into that table, so that when the game calls GetRawInputData, it actually gets directed to our function for us to process? Well we can! It's called Import Address Table Patching.
There's a pretty good article on it here with some working code in C++. You should first read it to understand in more detail how it works; then you can modify it to suit your needs. It will work, but it's probably more work (much more work) than you were hoping for - essentially you're hacking the application, which is never easy to do.
It's worth doing, even just to gain a better understanding of Windows behind the scenes.
Good luck!
EDIT
As Simon said, Windows hooks are a much simpler way to do it if the game isn't using raw input. DirectX games tend to be a special case that doesn't work too well with standard hooks, as they use special methods to get input from the user. By all means give it a go, though; it will be a lot easier if it works.
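For completeness, here is a minimal, untested sketch of that simpler route in C#: a WH_KEYBOARD_LL hook that swallows F12 before most applications see it.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class BlockF12
{
    const int WH_KEYBOARD_LL = 13;
    const int VK_F12 = 0x7B;

    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    static readonly HookProc _proc = Callback;   // keep alive so the delegate isn't garbage-collected
    static IntPtr _hook;

    static void Main()
    {
        _hook = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(null), 0);
        Application.Run();   // the hook needs a message loop on this thread
    }

    static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0)
        {
            int vkCode = Marshal.ReadInt32(lParam);   // first field of KBDLLHOOKSTRUCT is vkCode
            if (vkCode == VK_F12)
                return (IntPtr)1;                     // a non-zero return swallows the keystroke
        }
        return CallNextHookEx(_hook, nCode, wParam, lParam);
    }
}
```

If the key still reaches the game, that's a sign it is reading raw input or DirectInput directly, and you're back to the IAT-patching approach above.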
There are many questions relating to simulating mouse/keyboard input in WPF (and Windows, for that matter). I have something a little different from the usual question, I think, and I'd like your input. Most posts I've seen have a specific higher-level action in mind: I want to click this, I want to move the mouse here, etc. To emulate these, one can simply use routed events.
However, I'm hoping to operate a mouse from a remote app and would like to inject mouse events at a low level: the current mouse position is x,y and the button state is such and such. My target framework is WPF, but if something like a generic virtual mouse driver is the way to go, I'm cool with that too. I do not have security concerns: the apps receiving the messages will be coded by me at a higher level, so I don't need crazy hacks.
I'm willing to use managed or unmanaged code and take the rabbit hole as deep as it needs to go to make this work, but I don't want to reinvent the wheel. I can host my apps in an HwndHost or some such too, in case I need access to Windows messages.
Thoughts?
WPF has some built-in automation capabilities. It's a bit complicated, and I've never actually tried it myself, but I've been reading about it recently - it might be worth checking out:
http://msdn.microsoft.com/en-us/library/ms747327.aspx
Or search Google for "WPF automation".
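As a rough idea of what the UI Automation route looks like (untested; the window title "Target App" and the button name "OK" are placeholders), the remote side can find an element and invoke it without synthesising any mouse state at all:

```csharp
using System;
using System.Windows.Automation;   // requires references to UIAutomationClient and UIAutomationTypes

static class RemoteClick
{
    static void Main()
    {
        // Locate the target top-level window by its title (placeholder name).
        var window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Target App"));
        if (window == null) return;

        // Find a button named "OK" anywhere under that window.
        var button = window.FindFirst(
            TreeScope.Descendants,
            new AndCondition(
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
                new PropertyCondition(AutomationElement.NameProperty, "OK")));
        if (button == null) return;

        // InvokePattern performs the click without moving the real cursor.
        var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}
```

If you really do need raw cursor position and button state rather than semantic actions, SendInput is the usual low-level fallback.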
I need to listen for keyboard events from a specific keyboard device and know which key is pressed in a (C#) WPF application. Ideally, this shouldn't be window-dependent and should work as long as the application has focus.
Unfortunately, I can't think of or find any way to do this.
Any ideas?
D.R
Edit: I've been looking into the OpenTK.Input library, which has a nice interface for keys... Does anybody know how to get a KeyboardDevice without creating a GameWindow?
Info: Just by the way, this is for a barcode scanner which emulates a keyboard... whose bright idea was that, eh?
I'm actually working on a project that does exactly that. Check out Kaptivate. It installs a global keyboard hook, ties together (using magic) the raw input API, and then invokes a callback function so that YOU can decide (1) is this the device I'm after? and (2) should I allow other apps to see it, or keep the keystroke for myself? Right now it's C++ only, but one of the goals is eventually to have C# wrappers.
For keyboards, each generated event tells you the vkey, scan code, and source device.
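That device/vkey/scan-code triple comes from the raw input API underneath. To give a sense of how you could reach it from managed code without Kaptivate, here is an untested WPF-side sketch; the struct layouts follow the Win32 documentation, and the RAWINPUTKEYBOARD composite is my own naming, not a Win32 type.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public partial class MainWindow : Window
{
    const int WM_INPUT = 0x00FF;
    const uint RID_INPUT = 0x10000003;
    const uint RIM_TYPEKEYBOARD = 1;
    const uint RIDEV_INPUTSINK = 0x00000100;   // receive input even when the window isn't in the foreground

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICE
    {
        public ushort UsagePage;
        public ushort Usage;
        public uint Flags;
        public IntPtr Target;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTHEADER
    {
        public uint Type;
        public uint Size;
        public IntPtr Device;     // identifies the physical keyboard (e.g. the barcode scanner)
        public IntPtr WParam;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWKEYBOARD
    {
        public ushort MakeCode;   // hardware scan code
        public ushort Flags;
        public ushort Reserved;
        public ushort VKey;       // virtual key code
        public uint Message;
        public uint ExtraInformation;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTKEYBOARD       // header + keyboard payload; name is mine, not a Win32 type
    {
        public RAWINPUTHEADER Header;
        public RAWKEYBOARD Keyboard;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(RAWINPUTDEVICE[] devices, uint count, uint size);

    [DllImport("user32.dll")]
    static extern uint GetRawInputData(IntPtr hRawInput, uint command,
                                       out RAWINPUTKEYBOARD data, ref uint size, uint headerSize);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);

        var rid = new RAWINPUTDEVICE
        {
            UsagePage = 0x01,          // generic desktop controls
            Usage = 0x06,              // keyboard
            Flags = RIDEV_INPUTSINK,
            Target = source.Handle
        };
        RegisterRawInputDevices(new[] { rid }, 1, (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE)));
    }

    IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg == WM_INPUT)
        {
            uint size = (uint)Marshal.SizeOf(typeof(RAWINPUTKEYBOARD));
            uint copied = GetRawInputData(lParam, RID_INPUT, out RAWINPUTKEYBOARD input,
                                          ref size, (uint)Marshal.SizeOf(typeof(RAWINPUTHEADER)));
            if (copied != uint.MaxValue && input.Header.Type == RIM_TYPEKEYBOARD)
            {
                // Header.Device tells you which keyboard it was; Keyboard.VKey which key was pressed.
                Console.WriteLine($"Device {input.Header.Device}: vkey {input.Keyboard.VKey}");
            }
        }
        return IntPtr.Zero;
    }
}
```

Filtering to the barcode scanner then becomes a matter of comparing Header.Device against the handle you recorded for it.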
Have a look at the two articles below.
Using global keyboard hook (WH_KEYBOARD_LL) in WPF / C#
https://gist.github.com/471698
Both should be exactly what you want...
Managed to find this tutorial on MSDN, along with a sample Scanner object which comes with the POS.Net SDK.
I haven't really had the time to pick apart how it works yet to give a proper overview, but it seems I should be able to write a custom service object abstraction for any "keyboard wedge" HID device.