I have received an external motion controller (a Myo) and I want to create an application where certain motions simulate a keystroke globally, regardless of which application is in the foreground. My program will run in the background, receiving motion input and emitting keyboard presses.
For example, if I am playing a baseball game in the foreground (full screen) and I make a pitching motion, the program should send whichever key triggers a pitch in the game.
I have looked into the SendKeys class in C#, but it seems to have limitations, specifically around sending keypresses globally.
Is there a good way to write a C# program that maps my motion controller's actions to keypresses? Ideally it would also support separate key-down and key-up events so keys can be held.
The most direct way to accomplish truly global keypresses is to emulate a keyboard. That means writing a keyboard driver that your background program can feed, which involves kernel programming and is quite complex.
An alternative is to use the SendKeys API combined with some logic to find the currently active application.
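SendKeys cannot, however, hold a key down, which the question asks for. A common alternative (sketched below; this is standard Win32, not a drop-in solution from any answer here) is to P/Invoke the SendInput function, which lets you send the key-down and key-up events separately:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

// Minimal sketch: global key-down/key-up via Win32 SendInput.
class KeySender
{
    [StructLayout(LayoutKind.Sequential)]
    struct INPUT
    {
        public uint type;   // 1 = INPUT_KEYBOARD
        public InputUnion u;
    }

    // The native INPUT struct contains a union; overlaying the mouse and
    // keyboard variants keeps the struct size correct on x86 and x64.
    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT
    {
        public ushort wVk;   // virtual-key code, e.g. 0x57 for 'W'
        public ushort wScan;
        public uint dwFlags; // 0 = key down, 2 = KEYEVENTF_KEYUP
        public uint time;
        public IntPtr dwExtraInfo;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    const uint KEYEVENTF_KEYUP = 0x0002;

    static INPUT Key(ushort vk, uint flags) => new INPUT
    {
        type = 1,
        u = new InputUnion { ki = new KEYBDINPUT { wVk = vk, dwFlags = flags } }
    };

    // Press, hold, and release a key; the key stays logically held down
    // between the two SendInput calls.
    public static void HoldKey(ushort vk, int milliseconds)
    {
        SendInput(1, new[] { Key(vk, 0) }, Marshal.SizeOf(typeof(INPUT)));
        Thread.Sleep(milliseconds);
        SendInput(1, new[] { Key(vk, KEYEVENTF_KEYUP) }, Marshal.SizeOf(typeof(INPUT)));
    }
}

// e.g. KeySender.HoldKey(0x57, 500); // hold 'W' for half a second
```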
I know this isn't a C# solution, but the Myo Script interface in Myo Connect was essentially built for this purpose and would probably be the easiest way of testing things out if nothing else.
To send a keyboard command using Myo Script you can use myo.keyboard() (docs here).
If you want the script to be active at all times, you will need to consistently return true in onForegroundWindowChange() and pay attention to the script's location in the application manager. Scripts at the top of the application manager will be checked first, so your script may lose out if there is another one above it that 'wants' control of a given application.
Related
I am developing a C# application that lets the user define specific keystrokes, which the application then reproduces when run. While I was testing against Notepad and other simple programs it worked fine using SendInput or InputSimulator, but as soon as I tried it with a video game, the emulated input was not received by the game.
The strange thing is that the input actually is received by the game, but only when I am using the in-game chat.
To be clearer, here is an example:
I configure my application to reproduce the W key.
I launch my application and then launch an FPS game like Counter-Strike.
When the application emulates the press of the W key, my character in the game doesn't move! But if I click on the chat and try to write something in the chatbox, the emulated input is recognized and I can see the "w"s being written in the chatbox.
Game engines usually don't get their input from the usual Windows messaging API, because it only lets you read one typed character at a time. In a game, however, you can press several keys at once; for instance, you might press W and A together to move your character diagonally forward-left. Games therefore check the keyboard state on each pass of the game loop rather than reading the characters that have been typed: the character "w" might have been typed only once, yet the W key might have been in the pressed state for eight loops. Game engines access the keyboard through lower-level functions, which is why commands like SendKeys, which inject their keystrokes at a higher level, have no effect.
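To illustrate the state-polling model described above, here is a rough C# sketch (the loop, key choices, and timing are made up for the example) that samples the keyboard the way a game loop does, using GetAsyncKeyState. Note how it can see W and A held down simultaneously, which the character-based message stream cannot express:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

class KeyStatePolling
{
    [DllImport("user32.dll")]
    static extern short GetAsyncKeyState(int vKey);

    const int VK_W = 0x57;
    const int VK_A = 0x41;

    static bool IsDown(int vKey) =>
        (GetAsyncKeyState(vKey) & 0x8000) != 0; // high bit = currently pressed

    static void Main()
    {
        while (true) // stand-in for the game loop
        {
            if (IsDown(VK_W) && IsDown(VK_A))
                Console.WriteLine("moving forward-left");
            else if (IsDown(VK_W))
                Console.WriteLine("moving forward");

            Thread.Sleep(16); // roughly 60 iterations per second
        }
    }
}
```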
I am currently making a little application that lets me use my Intuos Pro as a keyboard because it is touch enabled. Wacom has released an API that allows the access of touch data so getting the core functionality has not been a problem. I have hit a bit of a snag, though. The API allows you to listen for data in two modes:
Consumer Mode means that the application will not pass touch information onto any other applications or to the driver for gesture recognition. It will only listen if the window has keyboard focus.
Observer Mode means that the application will pass touch information onto other applications and will always listen for data regardless of focus.
Here's my problem. The keyboard needs to be running all the time, but when I'm typing on my touchpad, I don't want two finger scrolls or tap clicking or anything to happen. But if I'm typing into something, the thing I'm typing into has to have keyboard focus - not my application.
I can't really see the point of Observer Mode if there's no way to destroy data so that gesture recognition doesn't get in the way. And in their FAQ, Wacom hinted at the possibility of being able to destroy data in Observer Mode:
If the application chooses to pass the touch data through to the driver in Observer mode, then the tablet driver will interpret touch data and recognize gestures as appropriate for the tablet and operating system.
So I suspect a solution exists. I was wondering if anyone has had any experience with this, or would be able to take a look and see if they can figure out a solution? I'm okay with something hacky if need be, as this is more of a personal thing than anything else.
I am using their MTDN variety in C# in Visual Studio 2013 and my application is currently a WPF application.
I'm working on a small tool for a DirectX game and I want to prevent the user from pressing a certain key (F12 in this case) for a certain period.
I could find many options for simulating keypresses but what are the options when it comes to nulling out a keystroke before the game reads it?
The language doesn't really matter, although I would prefer a C# or C++ solution, or just a nudge in the right direction :)
Thanks in advance!
The good news is, I've done this before so I can say that it is possible and it does work.
The bad news is that it's not simple. It requires a lot of complicated code, and will likely take a long time to implement, but I'll explain how you can do it.
Applications like DirectX games usually register for raw input.
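For reference, registering for raw input looks roughly like this (a hedged C# sketch using RegisterRawInputDevices; usage page 0x01 with usage 0x06 is the standard HID pair for keyboards). Keyboard events then arrive as WM_INPUT messages, which the window reads with GetRawInputData:

```csharp
using System;
using System.Runtime.InteropServices;

class RawInputRegistration
{
    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICE
    {
        public ushort usUsagePage;
        public ushort usUsage;
        public uint dwFlags;
        public IntPtr hwndTarget;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(
        RAWINPUTDEVICE[] pRawInputDevices, uint uiNumDevices, uint cbSize);

    const uint RIDEV_INPUTSINK = 0x00000100; // receive input even when unfocused

    public static void RegisterKeyboard(IntPtr windowHandle)
    {
        var devices = new[]
        {
            new RAWINPUTDEVICE
            {
                usUsagePage = 0x01,         // generic desktop controls
                usUsage     = 0x06,         // keyboard
                dwFlags     = RIDEV_INPUTSINK,
                hwndTarget  = windowHandle, // window that receives WM_INPUT
            }
        };

        if (!RegisterRawInputDevices(devices, (uint)devices.Length,
                                     (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE))))
            throw new InvalidOperationException("raw input registration failed");
    }
}
```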
Since you want to stop a keyboard event from reaching the application, you need a way to insert your code between the raw input and the game so you can check the raw input and decide whether to allow it to be passed to the game:
So you want to change the flow from:
Raw Input --> Game
to
Raw Input --> Your Code --> Game
Without having access to the source code of the game, you have to find a way to insert your code.
When there is keyboard input available, the game will call the WinAPI function GetRawInputData, which tells it about the keyboard event. Ideally, when the game calls this function, we want our code to run instead of the WinAPI function. Then we can decide what to tell the game about the keyboard event; we could tell it anything we want (e.g. to ignore F12). Sounds great, right? Here's where it gets interesting...
We can take advantage of how Windows loads executables into memory. Typically, a program uses (or 'imports') functions from other DLLs (such as GetRawInputData in User32.dll). When the program is loaded into memory, Windows fills in a table, the Import Address Table (IAT), with pointers to the executable code in the appropriate DLLs. This means that when the program calls the function, it gets directed to the executable code in User32.dll in memory to run it.
Wouldn't it be great if we could write/patch the address of one of our functions into that table, so that when the game calls GetRawInputData, it actually gets directed to our function for us to process? Well we can! It's called Import Address Table Patching.
There's a pretty good article on it here with some working code in C++. You should first read it to understand in more detail how it works, then modify it to suit your needs. It will work, but it's probably more work (much more work) than you were hoping for; essentially you're hacking the application, which is never easy to do.
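To make the mechanism more concrete, here is an illustrative C# sketch of the core IAT walk. It patches the calling process's own table, assumes a native PE32+ (64-bit) image, and skips edge cases such as bound imports; a real tool would inject a DLL into the game and run equivalent logic there:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Illustrative only: finds and overwrites one Import Address Table entry
// in the current process.
class IatPatch
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize,
                                      uint flNewProtect, out uint lpflOldProtect);

    const uint PAGE_READWRITE = 0x04;

    // Returns the address of the IAT slot for `function` imported from `dll`.
    static IntPtr FindIatSlot(string dll, string function)
    {
        IntPtr baseAddr = Process.GetCurrentProcess().MainModule.BaseAddress;
        int e_lfanew = Marshal.ReadInt32(baseAddr, 0x3C);  // DOS header -> PE header
        int optHeader = e_lfanew + 24;                     // skip signature + file header
        int importRva = Marshal.ReadInt32(baseAddr, optHeader + 0x78); // data directory [1]

        for (int desc = importRva; ; desc += 20)           // IMAGE_IMPORT_DESCRIPTOR
        {
            int nameRva = Marshal.ReadInt32(baseAddr, desc + 12);
            if (nameRva == 0) break;                       // end of descriptor list
            string module = Marshal.PtrToStringAnsi(baseAddr + nameRva);
            if (!string.Equals(module, dll, StringComparison.OrdinalIgnoreCase))
                continue;

            int nameThunks = Marshal.ReadInt32(baseAddr, desc);      // OriginalFirstThunk
            int iatThunks  = Marshal.ReadInt32(baseAddr, desc + 16); // FirstThunk (the IAT)
            if (nameThunks == 0) nameThunks = iatThunks;   // some linkers omit it

            for (int i = 0; ; i++)
            {
                long entry = Marshal.ReadInt64(baseAddr, nameThunks + i * 8);
                if (entry == 0) break;                     // end of thunk list
                if (entry < 0) continue;                   // imported by ordinal
                // IMAGE_IMPORT_BY_NAME: 2-byte hint, then a null-terminated name.
                string fn = Marshal.PtrToStringAnsi(baseAddr + (int)entry + 2);
                if (fn == function)
                    return baseAddr + iatThunks + i * 8;
            }
        }
        return IntPtr.Zero;
    }

    public static void Patch(string dll, string function, IntPtr replacement)
    {
        IntPtr slot = FindIatSlot(dll, function);
        if (slot == IntPtr.Zero)
            throw new InvalidOperationException("import not found");
        VirtualProtect(slot, (UIntPtr)8, PAGE_READWRITE, out uint oldProtect);
        Marshal.WriteIntPtr(slot, replacement);            // redirect the call
        VirtualProtect(slot, (UIntPtr)8, oldProtect, out _);
    }
}
```

The replacement pointer would come from Marshal.GetFunctionPointerForDelegate on a delegate matching GetRawInputData's signature; keep a reference to that delegate alive or the patched pointer will dangle.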
It's worth doing, even just to gain a better understanding of Windows behind the scenes.
Good luck!
EDIT
As Simon said, Windows hooks are a much simpler way to do it if the game isn't using raw input. DirectX games tend to be a special case that doesn't work well with standard hooks, as they use special methods to get input from the user. By all means give it a go, though; it will be a lot easier if it works.
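If the game does take its input through the normal message path, a low-level keyboard hook is enough to swallow F12 system-wide. A minimal C# sketch (assuming the installing thread pumps Windows messages, which a WinForms or WPF app does automatically):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Hedged sketch: a WH_KEYBOARD_LL hook that blocks F12 before applications see it.
class F12Blocker
{
    delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, HookProc lpfn,
                                          IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode,
                                        IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    const int WH_KEYBOARD_LL = 13;
    const int VK_F12 = 0x7B;

    static IntPtr hook;
    static readonly HookProc proc = Callback; // kept alive so the GC can't collect it

    public static void Install()
    {
        using (var process = Process.GetCurrentProcess())
        using (var module = process.MainModule)
        {
            hook = SetWindowsHookEx(WH_KEYBOARD_LL, proc,
                                    GetModuleHandle(module.ModuleName), 0);
        }
    }

    static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0)
        {
            // The first DWORD of KBDLLHOOKSTRUCT is the virtual-key code.
            if (Marshal.ReadInt32(lParam) == VK_F12)
                return (IntPtr)1; // non-zero return swallows the keystroke
        }
        return CallNextHookEx(hook, nCode, wParam, lParam);
    }
}
```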
There are many questions relating to simulating mouse/keyboard input in WPF (and Windows, for that matter). I have something a little different from the usual question, I think, and I'd like your input. Most posts I've seen have a specific higher-level action in mind: I want to click this, I want to move the mouse here, etc. To emulate these, one can simply use routed events.

However, I'm hoping to operate a mouse from a remote app, and would like to input mouse events at a low level: the current mouse position is x,y and the button state is such and such. My target framework is WPF, but if something like a generic virtual mouse driver is the way to go, I'm cool with that too. I do not have security concerns: the apps receiving the messages will be coded by me at a higher level, so I don't need crazy hacks.

I'm willing to use managed or unmanaged code and take the rabbit hole as deep as it needs to go to make this work, but I don't want to reinvent the wheel. I can host my apps in an HwndHost or some such too, in case I need access to Windows messages.
Thoughts?
WPF has some built-in automation capabilities. It's a bit complicated, and I've never actually tried it myself, but I've been reading about it recently; it might be worth checking out:
http://msdn.microsoft.com/en-us/library/ms747327.aspx
Or search Google for "WPF automation".
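For the low-level route the question describes (set the cursor position to x,y, set the button state), the usual unmanaged entry point is the Win32 SendInput function rather than anything WPF-specific. A hedged sketch; the screen dimensions are supplied by the caller, and coordinates are normalized to the 0..65535 range that MOUSEEVENTF_ABSOLUTE expects:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: move the cursor to a pixel position and click the left button.
class MouseInjector
{
    [StructLayout(LayoutKind.Sequential)]
    struct INPUT
    {
        public uint type;    // 0 = INPUT_MOUSE (the default for a new struct)
        public MOUSEINPUT mi;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT
    {
        public int dx;
        public int dy;
        public uint mouseData;
        public uint dwFlags;
        public uint time;
        public IntPtr dwExtraInfo;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    const uint MOUSEEVENTF_MOVE     = 0x0001;
    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP   = 0x0004;
    const uint MOUSEEVENTF_ABSOLUTE = 0x8000;

    public static void MoveAndClick(int screenX, int screenY,
                                    int screenWidth, int screenHeight)
    {
        // Normalize pixel coordinates to the 0..65535 absolute range.
        int nx = screenX * 65535 / screenWidth;
        int ny = screenY * 65535 / screenHeight;

        var events = new[]
        {
            new INPUT { mi = new MOUSEINPUT { dx = nx, dy = ny,
                dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE } },
            new INPUT { mi = new MOUSEINPUT { dwFlags = MOUSEEVENTF_LEFTDOWN } },
            new INPUT { mi = new MOUSEINPUT { dwFlags = MOUSEEVENTF_LEFTUP } },
        };
        SendInput((uint)events.Length, events, Marshal.SizeOf(typeof(INPUT)));
    }
}
```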
I want to detect when my touchpad is clicked!
I normally use a USB mouse, so I don't use the touchpad for anything. Instead, I'd like to make it possible to perform an action in .NET when the touchpad is clicked. That way I can use it as a shortcut: one tap, and something cool happens.
Is this possible, and if yes, any clue how? I'd prefer if it could be working in VB.NET or C#.
My theory is that I'd have to install a mouse hook, which then somehow determines which device the click is coming from. If the click is determined to be from the touchpad, it cancels the click and calls doWhatever().
Thanks!
* EDIT *
Well, it's "solved", sort of :) In an odd coincidence, Synaptics released their latest driver and software for their touchpads a few days ago with some new functionality. As my laptop has a Synaptics touchpad, I tried out the software, and interestingly enough, the ability to designate clicks on the trackpad to perform an action was built in.
So the desired function has been achieved, without a line of code (my own code anyway :).
The answer goes to Adrian, though, for the link to the RawInputSharp library. I tinkered with it yesterday, and I'm 90% sure it could be used for this purpose in the event that a laptop doesn't have a Synaptics trackpad.
Have a look at the RawInputSharp library from this page. It uses P/Invoke calls into User32.dll to get hold of input device information. Using it, you can detect which device (i.e. which mouse) input is coming from.
Having had a little play with it, I managed to extract some code that displays just the device ID, which is a different value depending on whether I use my USB mouse or my internal touchpad. The tricky part would be automatically identifying which device ID belongs to your touchpad, but you could configure this manually in your application.
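For anyone reproducing this without the library, the enumeration RawInputSharp performs boils down to the Win32 GetRawInputDeviceList and GetRawInputDeviceInfo functions. A rough sketch that lists each mouse-type device's handle and name (the handle is what a WM_INPUT message reports as its source, so it's the value you would compare against):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class RawDeviceLister
{
    [StructLayout(LayoutKind.Sequential)]
    struct RAWINPUTDEVICELIST
    {
        public IntPtr hDevice;
        public uint dwType; // 0 = mouse, 1 = keyboard, 2 = other HID
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint GetRawInputDeviceList(
        [Out] RAWINPUTDEVICELIST[] pRawInputDeviceList,
        ref uint puiNumDevices, uint cbSize);

    [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern uint GetRawInputDeviceInfo(IntPtr hDevice, uint uiCommand,
                                             StringBuilder pData, ref uint pcbSize);

    const uint RIDI_DEVICENAME = 0x20000007;

    static void Main()
    {
        uint count = 0;
        uint size = (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICELIST));
        GetRawInputDeviceList(null, ref count, size); // first call: get device count

        var devices = new RAWINPUTDEVICELIST[count];
        GetRawInputDeviceList(devices, ref count, size);

        foreach (var dev in devices)
        {
            if (dev.dwType != 0) continue; // mice only

            uint nameLen = 0;
            GetRawInputDeviceInfo(dev.hDevice, RIDI_DEVICENAME, null, ref nameLen);
            var name = new StringBuilder((int)nameLen);
            GetRawInputDeviceInfo(dev.hDevice, RIDI_DEVICENAME, name, ref nameLen);

            Console.WriteLine($"{dev.hDevice}: {name}");
        }
    }
}
```

Running this with both devices attached should print two mouse entries; the device path string usually contains vendor identifiers, which is one way to pick out the touchpad for manual configuration.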