Global HotKey in Universal Windows App - C#

So I created a UWP app that can record several audio lines and save the recordings to MP3 files, for in-game multi-line recording that I can later edit separately (game audio, microphone, game comms, voice comms), since NVidia ShadowPlay/Share does not support this yet. I achieve this multi-line setup with VAC.
I have a version of this tool written as a regular Windows WPF C# application, with a system-wide hotkey Ctrl+Alt+R that starts/stops recording, so when I'm in a full-screen game I can start/stop recording without leaving full-screen mode (switching window focus).
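For context, a system-wide hotkey in WPF is typically registered through the Win32 RegisterHotKey API; a minimal sketch (not my exact code, and ToggleRecording is just a placeholder):

using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public partial class MainWindow : Window
{
    const int WM_HOTKEY = 0x0312;
    const uint MOD_ALT = 0x0001, MOD_CONTROL = 0x0002;
    const uint VK_R = 0x52;
    const int HOTKEY_ID = 1;

    [DllImport("user32.dll")]
    static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);
    [DllImport("user32.dll")]
    static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);
        // Ctrl+Alt+R is delivered as WM_HOTKEY even when another window has focus.
        RegisterHotKey(source.Handle, HOTKEY_ID, MOD_CONTROL | MOD_ALT, VK_R);
    }

    IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg == WM_HOTKEY && wParam.ToInt32() == HOTKEY_ID)
        {
            ToggleRecording(); // placeholder for the start/stop logic
            handled = true;
        }
        return IntPtr.Zero;
    }

    void ToggleRecording() { /* start or stop the recorders */ }
}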
Can a global (system-wide, app window not in focus) hotkey that triggers some in-app event be achieved in a UWP app? I know the functionality is not supported on other platforms, but I only need it to run on Windows 10 desktop, and the hotkey support is mandatory. Or can I achieve my goal in any other way for UWP apps?
GOAL: A system-wide key combination that triggers an in-app event in a UWP app, without switching window focus and messing with full-screen games.

At the moment it is not possible to solve this task completely. You are facing two limitations of UWP, and they can only be partially worked around:
Lifecycle: UWP apps go into a suspended state when they are not focused. They simply "freeze" to consume fewer resources (and battery). This is a great feature for mobile devices, but it is bad news for your project. You can work around it by requesting an ExtendedExecutionSession, which guarantees that your app never falls asleep when out of focus, as long as the device is attached to wall power.
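A minimal sketch of requesting such a session (the description string and the Revoked handler body are illustrative):

using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.ExtendedExecution;

async Task<bool> RequestExtendedExecutionAsync()
{
    var session = new ExtendedExecutionSession
    {
        Reason = ExtendedExecutionReason.Unspecified,
        Description = "Keep recording while a game has focus"
    };
    // The OS may revoke the session at any time (e.g. on battery); stop cleanly if so.
    session.Revoked += (s, e) => { /* stop recording gracefully */ };
    ExtendedExecutionResult result = await session.RequestExtensionAsync();
    return result == ExtendedExecutionResult.Allowed;
}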
Detect input without focus: it's clearly stated on MSDN that UWP doesn't support keyboard hooks (this refers to SetWindowsHookEx). They reinvented GetAsyncKeyState, and it now works only while the window is focused; you can find it as CoreWindow.GetAsyncKeyState() (a usage sketch follows below).
If you only need to use F-keys as hotkeys you can still do something, like "press F2 while the app is minimized to activate a function".
See Stefan Wick's example; he solved part of the problem.
If instead you need to listen to many keys (or mouse events), there is no way to do it right now.
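For completeness, here is roughly how the focused-only CoreWindow.GetAsyncKeyState() from point 2 is used:

using Windows.System;
using Windows.UI.Core;

bool IsRecordKeyDown()
{
    // Only meaningful while this app's window has focus.
    CoreWindow window = CoreWindow.GetForCurrentThread();
    CoreVirtualKeyStates state = window.GetAsyncKeyState(VirtualKey.R);
    return state.HasFlag(CoreVirtualKeyStates.Down);
}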
Curiosity
UWP has restricted capabilities, one of which is called "InputObservation".
At the moment it is not documented and impossible to implement (unless you are a select Microsoft partner), but it should allow apps to access system input (keyboard/mouse...) without any limitation and regardless of its final destination.
I think this feature is the key to system-wide input detection, but I have not been able to find a way to implement it.
Kind Regards

Related

C# .NET - Difference between SendKeys, SendInput, SendMessage, InputInjector and Cursor.Position

I am learning and building my first UWP test app, and need a way to simulate:
relative mouse movement
absolute mouse positioning
keyboard typing (not necessarily key presses/releases)
fine-tuned x and y scrolling (so I can scroll by any amount)
I have come across the following methods for doing this, but can't figure out which ones are modern / best for UWP apps or best in general for my purposes:
SendKeys (A C# wrapper for SendInput of some sort?)
SendInput (A win32 API for simulating events, but is it best for UWP?)
SendMessage (Used for directly typing into focused applications?)
InputInjector (A more modern but limited way of simulating inputs, can't absolutely position cursor?)
Cursor.Position (A function for cursor movement and positing)
There are so many methods and approaches to this problem, and I'm not entirely sure which of these is most supported or recommended for UWP apps, or yields the best results.
The purpose of this project is to be able to control my PC (move the mouse, type) by interacting with it through my phone. For example, my phone becomes a trackpad, or I can type on my phone's soft keyboard and it types into my PC. The PC hosts a server on the local network, and the phone sends input data packets to this server. The server receives these input data packets and executes them (which is where I need the ability to simulate keyboard/mouse events). Very similar to Remote Mouse.
So my questions are:
What are the differences between these methods? (Like Windows Forms vs. Win32?)
Which is best for UWP apps / my need here?
Are there any better (not listed) solutions?
This is my first look into this stuff (C#, .NET, Windows dev) so any and all information is very helpful.
Thanks for your help!
Dan :D
Edit
Further research has shown that InputInjector is under the UWP reference, while SendKeys and Cursor.Position are both under the .NET reference. Does this mean that InputInjector is the most suitable?
After researching some more, I found that InjectedInput is the only one of these included in the UWP API.
To clarify: when developing a Windows application in Visual Studio, you must select one "type" to use, be it WPF, Windows Forms, Win32 or UWP. UWP is (mostly) the only one that can be uploaded to the Microsoft Store.
This meant that I could only use methods inside the UWP API; in this case WinRT is a part of UWP, and InjectedInput is a part of WinRT.
It supports absolute mouse positioning with the "Absolute" option, relative mouse movement with the "Move" option, and scrolling with the "Wheel" and "HWheel" options, all set via InjectedInputMouseOptions alongside InjectedInputMouseInfo. Keyboard input can be done with InjectedInputKeyOptions alongside InjectedInputKeyboardInfo.
Use the "Option" variant class to modify the effect of the input (such as selecting which options to change), and then use InputInjector with its TryCreate() method to instantiate it, along with the relevant InjectMouseInput or InjectKeyboardInput to execute the input injection.
This sample code, alongside its related blog post, is fantastic for understanding the basic usage; it cuts straight to the chase.

Unity: Passing information between two programs [duplicate]

This question already has answers here:
Send message from one program to another in Unity
(2 answers)
Closed 4 years ago.
I'm trying to design an app launcher system, and have figured out how to make the launcher run an app at path/app.exe, as well as how to attach a listener to that process to detect when it has been closed.
Obviously, during this process the launcher will lose focus as it moves into the background. Is there a way for it to continue to listen for keyboard input and perform an action in the background? It would be ideal if the launcher could then pass information to the running application, all without the application losing focus.
It is worth noting that both the launcher and the app will be made in Unity with C#, so a solution that involves the app setting up some sort of listener for the launcher would also be fine, but for security reasons the launcher must be the process that listens for the key press.
There are several approaches to interprocess communication in the .NET Framework, of which you can pick your poison:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365574.aspx
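For example, named pipes are one of the simpler picks from that list in managed code; a minimal sketch ("launcher-pipe" is just an illustrative pipe name):

using System.IO;
using System.IO.Pipes;

// Launcher side: wait for the app to connect, then push it a message.
static void LauncherSide()
{
    using (var server = new NamedPipeServerStream("launcher-pipe"))
    using (var writer = new StreamWriter(server) { AutoFlush = true })
    {
        server.WaitForConnection();
        writer.WriteLine("hotkey-pressed");
    }
}

// App side (in the launched process): connect and read the message.
static void AppSide()
{
    using (var client = new NamedPipeClientStream(".", "launcher-pipe"))
    using (var reader = new StreamReader(client))
    {
        client.Connect();
        string message = reader.ReadLine();
    }
}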
It is also possible to intercept keyboard events via low-level keyboard hooks. However, those require an unmanaged Windows API, which in turn means you have left managed code: https://www.codeproject.com/Articles/19004/A-Simple-C-Global-Low-Level-Keyboard-Hook
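The approach in that article boils down to something like this sketch (note that the installing thread must pump Windows messages for the callback to fire, which is yet another open question inside Unity):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class GlobalKeyboardHook
{
    const int WH_KEYBOARD_LL = 13;
    const int WM_KEYDOWN = 0x0100;

    delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    static readonly LowLevelKeyboardProc _proc = HookCallback; // keep a reference so the GC doesn't collect the delegate
    static IntPtr _hookId = IntPtr.Zero;

    public static void Start()
    {
        using (var module = Process.GetCurrentProcess().MainModule)
            _hookId = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module.ModuleName), 0);
    }

    public static void Stop() => UnhookWindowsHookEx(_hookId);

    static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam); // vkCode is the first field of KBDLLHOOKSTRUCT
            Console.WriteLine("Key down: " + vkCode);
        }
        return CallNextHookEx(_hookId, nCode, wParam, lParam); // always pass the event on
    }
}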
The biggest uncertainty here is Unity. Usually Unity targets the Mono framework, and I am not sure which IPC options Mono has. Also, Unity is usually used for programs that run on more than one OS (Linux/Windows/Mac compatibility). That will limit your IPC options even more, and ruin your chances of getting keyboard hooks going.

Do not pass data to OS in observer mode in Wacom Feel Multi Touch API

I am currently making a little application that lets me use my Intuos Pro as a keyboard because it is touch enabled. Wacom has released an API that allows the access of touch data so getting the core functionality has not been a problem. I have hit a bit of a snag, though. The API allows you to listen for data in two modes:
Consumer Mode means that the application will not pass touch information on to any other applications or to the driver for gesture recognition. It will only listen if its window has keyboard focus.
Observer Mode means that the application will pass touch information on to other applications and will always listen for data regardless of focus.
Here's my problem: the keyboard needs to be running all the time, but when I'm typing on my touchpad, I don't want two-finger scrolls or tap-clicking or anything else to happen. But if I'm typing into something, the thing I'm typing into has to have keyboard focus - not my application.
I can't really see the point of Observer Mode if there's no way to destroy data so that gesture recognition doesn't get in the way. And in their FAQ, Wacom hinted at the possibility of being able to destroy data in Observer Mode:
If the application chooses to pass the touch data through to the driver in Observer mode, then the tablet driver will interpret touch data and recognize gestures as appropriate for the tablet and operating system.
So I suspect there is a solution. I was wondering if anyone has had any experience with this, or would be able to take a look and see if they can figure out a solution. I'm okay with something hacky if need be, as this is more of a personal thing than anything else.
I am using their MTDN variety in C# in Visual Studio 2013 and my application is currently a WPF application.

Minimize Windows 8 Store App while tracking location with GPS

I'm working on a Metro app and am having trouble finding out how not to show the application.
We recently deployed tablets to our field reps and need to add GPS tracking. GPS is much easier to deal with in the Metro libs (it's like 4 lines of code vs. unmanaged), so we're hoping to push a Metro app instead of spending time coding a WinForms/WPF desktop app (the tablets run the full version of Windows, so that's an option if we can't hide a Metro app; I feel like it should be possible, though, as the start screen tiles update automatically without opening the main program).
In WPF, it's fairly simple to make a window invisible. I'm creating the Metro app the same way I would in WPF, but it uses different libs than desktop and I may just not know how to do it.
In desktop programs, you do something along the lines of:
<Page
    x:Class="xxxxxxxx"
    ...
    Visibility="Hidden">
Unfortunately, with Metro the only options I have are Collapsed and Visible. Collapsed doesn't seem to have any effect, unless that's just because it's not deployed and Visual Studio shows it anyway...
So basically I'm trying to figure out if there's a way to start the program minimized or hidden so that it doesn't interrupt the field reps every time it takes their location.
If you really want to make a Metro app and want it to run "minimised", you will need to look at background tasks. To start the background task the user would still need to start the app at least once; furthermore, background tasks have limitations on how often and how long they can run. Also, there are a lot of constraints on deploying a Windows Store app if you cannot publish it in the Store.
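Registration looks roughly like this sketch ("LocationTask.UploadTask" is a hypothetical entry-point class that must also be declared in the app manifest):

using System.Threading.Tasks;
using Windows.ApplicationModel.Background;

async Task RegisterGpsTaskAsync()
{
    await BackgroundExecutionManager.RequestAccessAsync();
    var builder = new BackgroundTaskBuilder
    {
        Name = "GpsUpload",
        TaskEntryPoint = "LocationTask.UploadTask" // hypothetical IBackgroundTask implementation
    };
    builder.SetTrigger(new TimeTrigger(15, false)); // 15 minutes is the minimum interval
    BackgroundTaskRegistration registration = builder.Register();
}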
If your goal is just to have access to GPS through C# APIs, GPS is actually one of the WinRT APIs you can use from the desktop; you can find a tutorial on how to access WinRT APIs from the desktop here.
Here is the complete list of WinRT APIs accessible from the desktop (you can find the Geoposition class among them).
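For reference, the WinRT call itself really is only a few lines (a sketch):

using System;
using System.Threading.Tasks;
using Windows.Devices.Geolocation;

async Task ReportPositionAsync()
{
    var locator = new Geolocator { DesiredAccuracy = PositionAccuracy.High };
    Geoposition pos = await locator.GetGeopositionAsync();
    Console.WriteLine(pos.Coordinate.Latitude + ", " + pos.Coordinate.Longitude);
}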
Have you looked into creating a background task that transmits GPS? The background task can run without the app running.
I am not entirely certain you can voluntarily minimize a Windows Store App on a user's behalf. I see nothing in IntelliSense about it, nor have I found anything online or see any app do it.
However, be aware that deploying the app without using the Windows Store -- sideloading -- requires Windows 8 Enterprise edition computers joined to the domain, OR Windows 8 Pro with a sideloading key ($30 per key, purchased in packs of at least 100). Perhaps a WPF app with unmanaged code is worth the money and effort.

Can I subscribe to window-docking events in windows 7 from C#

I am wondering if it is possible to create an application that receives a notification whenever any other application/window is docked with the new Windows 7 docking feature (e.g. Win+Left Arrow).
The purpose of my application would be to set up custom rules for certain windows. For example, if I am using Chrome and press Win+Left, my application would receive a notification and could decide that the window should not resize to 50% of the screen, but should use 70% instead.
I am not very familiar with writing windows applications (mostly do web), so any pointers at how this might be achieved are very welcome.
Thanks.
Given that the Win+Left/Right combinations work with applications that pre-date Windows 7, I strongly suspect the feature is just a combination of WM_SIZE and WM_MOVE messages with coordinates worked out by the shell, and that there is nothing to directly distinguish such a resize from any other within the application.
You can use Spy++ from the SDK to confirm this.
You could apply heuristics based on the screen size to detect these events.
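One starting point is the Win32 SetWinEventHook API, which can notify you when any window finishes a user move/size; keyboard-initiated snaps may instead surface as EVENT_OBJECT_LOCATIONCHANGE (0x800B), so treat this as a sketch rather than a verified recipe:

using System;
using System.Runtime.InteropServices;

class MoveSizeWatcher
{
    const uint EVENT_SYSTEM_MOVESIZEEND = 0x000B;
    const uint WINEVENT_OUTOFCONTEXT = 0x0000;

    delegate void WinEventDelegate(IntPtr hWinEventHook, uint eventType,
        IntPtr hwnd, int idObject, int idChild, uint dwEventThread, uint dwmsEventTime);

    [DllImport("user32.dll")]
    static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax,
        IntPtr hmodWinEventProc, WinEventDelegate pfnWinEventProc,
        uint idProcess, uint idThread, uint dwFlags);

    [DllImport("user32.dll")]
    static extern bool UnhookWinEvent(IntPtr hWinEventHook);

    static readonly WinEventDelegate _callback = OnMoveSizeEnd; // keep alive for the GC
    static IntPtr _hook;

    public static void Start() =>
        _hook = SetWinEventHook(EVENT_SYSTEM_MOVESIZEEND, EVENT_SYSTEM_MOVESIZEEND,
            IntPtr.Zero, _callback, 0, 0, WINEVENT_OUTOFCONTEXT); // 0, 0 = all processes/threads

    public static void Stop() => UnhookWinEvent(_hook);

    static void OnMoveSizeEnd(IntPtr hook, uint eventType, IntPtr hwnd,
        int idObject, int idChild, uint thread, uint time)
    {
        // hwnd just finished a move/size: compare its bounds to the monitor
        // work area to guess whether it was snapped, then resize it if desired.
        Console.WriteLine("Window 0x" + hwnd.ToInt64().ToString("X") + " finished a move/size.");
    }
}

From the callback you could read the window's bounds with GetWindowRect, compare them to the monitor work area, and reposition the window with SetWindowPos if it looks snapped.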
