I am wondering if it is possible to create an application that could receive a notification when any other application/window is docked with the new Windows 7 docking feature (e.g. Win key + left arrow).
The purpose of my application would be to set up custom rules for certain windows. So, for example, if I am using Chrome and I press Win+Left, my application would receive a notification and would be able to say that the window should not resize to 50% of the screen, but should use 70% instead.
I am not very familiar with writing windows applications (mostly do web), so any pointers at how this might be achieved are very welcome.
Thanks.
Given that the Win+Left/Right combinations work with applications that pre-date Windows 7, I strongly suspect it is just a combination of WM_SIZE and WM_MOVE messages with coordinates worked out by the shell, and that there is nothing within the application to directly distinguish such a resize from any other.
Use Spy++ from the SDK to verify this.
You could apply heuristics based on the screen size to detect these events.
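Such a heuristic might look like the following WinForms sketch. This is only an illustration of the idea, not a tested solution: it assumes a snapped window always matches the monitor work-area height and roughly half its width, and `SnapAwareForm` and the 70% rule are hypothetical names taken from the question.

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

class SnapAwareForm : Form
{
    const int WM_SIZE = 0x0005;
    bool adjusting; // guard against re-entrancy when we resize ourselves

    protected override void WndProc(ref Message m)
    {
        base.WndProc(ref m);
        if (m.Msg == WM_SIZE && !adjusting)
        {
            Rectangle work = Screen.FromControl(this).WorkingArea;
            // Heuristic: full work-area height, half its width,
            // flush against the left or right edge.
            bool looksSnapped =
                Bounds.Height == work.Height &&
                Math.Abs(Bounds.Width - work.Width / 2) <= 1 &&
                (Bounds.Left == work.Left || Bounds.Right == work.Right);
            if (looksSnapped)
            {
                adjusting = true;
                Width = (int)(work.Width * 0.7); // the custom 70% rule
                adjusting = false;
            }
        }
    }
}
```

Since the shell sends an ordinary resize, this fires for manual resizes that happen to hit the same geometry too; there is no flag that says "this came from Win+Left".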
I am learning and building my first UWP test app, and need a way to simulate:
relative mouse movement
absolute mouse positioning
keyboard typing (not necessarily key presses/releases)
fine-tuned x & y scrolling (so I can scroll by any amount)
I have come across the following methods for doing this, but can't figure out which ones are modern / best for UWP apps or best in general for my purposes:
SendKeys (A C# wrapper for SendInput of some sort?)
SendInput (A win32 API for simulating events, but is it best for UWP?)
SendMessage (Used for directly typing into focused applications?)
InputInjector (A more modern but limited way of simulating inputs, can't absolutely position cursor?)
Cursor.Position (A property for getting and setting the cursor position)
There are so many methods and approaches to this problem, and I'm not entirely sure which of these is most supported or recommended for UWP apps, or yields the best results.
The purpose of this project is to be able to control my PC (move the mouse, type) through my phone. For example, my phone becomes a trackpad, or I can type on my phone's soft keyboard and it types into my PC. The PC hosts a server on the local network, and the phone sends input data packets to this server. The server receives these input data packets and executes them (which is where I need the ability to simulate keyboard/mouse events). Very similar to Remote Mouse.
So my questions are:
What are the differences between these methods? (Like Windows Forms or Win32??)
Which is best for UWP apps / my need here?
Are there any better (not listed) solutions?
This is my first look into this stuff (C#, .NET, Windows dev) so any and all information is very helpful.
Thanks for your help!
Dan :D
Edit
Further research has shown that InputInjector is under the UWP reference, SendKeys and Cursor.Position are both under the .NET reference. Does this mean that InputInjector is the most ideal?
After researching some more, I found that InjectedInput is the only one included in the UWP API.
To clarify, when developing a Windows application, in Visual Studio you must select one "type" to use, be it WPF, Windows Forms, Win32 or UWP. UWP is the only one (mostly) that can be uploaded to the Microsoft Store.
This meant that I could only use methods inside the UWP API; in this case, WinRT is a part of UWP, and InjectedInput is a part of WinRT.
It supports absolute mouse positioning with the "Absolute" option, relative mouse movement with the "Move" option, and scrolling with the "Wheel" and "HWheel" options, all set via InjectedInputMouseOptions on an InjectedInputMouseInfo. Keyboard input can be done with InjectedInputKeyOptions alongside InjectedInputKeyboardInfo.
Use the options class to modify the effect of the input (such as selecting which options to apply), then use InputInjector with its TryCreate() method to instantiate an injector, and call InjectMouseInput or InjectKeyboardInput to execute the input injection.
This sample code alongside its related blog post is fantastic for understanding the basic usage; it cuts straight to the chase.
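A minimal sketch of that flow, for reference (this assumes the app manifest declares the `inputInjectionBrokered` restricted capability, which injection requires; values like the wheel delta and centre coordinates are illustrative):

```csharp
using Windows.System;
using Windows.UI.Input.Preview.Injection;

class InjectionDemo
{
    public static void Run()
    {
        InputInjector injector = InputInjector.TryCreate();
        if (injector == null) return; // injection unavailable on this system

        // Relative mouse move: 10 px right, 5 px down.
        injector.InjectMouseInput(new[] { new InjectedInputMouseInfo
        {
            MouseOptions = InjectedInputMouseOptions.Move,
            DeltaX = 10,
            DeltaY = 5
        }});

        // Absolute positioning: with the Absolute option, DeltaX/DeltaY
        // are normalized coordinates in the range 0-65535.
        injector.InjectMouseInput(new[] { new InjectedInputMouseInfo
        {
            MouseOptions = InjectedInputMouseOptions.Absolute,
            DeltaX = 32768, // horizontal centre
            DeltaY = 32768  // vertical centre
        }});

        // Vertical scroll: MouseData carries the wheel delta (120 = one notch).
        injector.InjectMouseInput(new[] { new InjectedInputMouseInfo
        {
            MouseOptions = InjectedInputMouseOptions.Wheel,
            MouseData = 120
        }});

        // Type the letter "a": key down, then key up.
        injector.InjectKeyboardInput(new[] { new InjectedInputKeyboardInfo
        {
            VirtualKey = (ushort)VirtualKey.A
        }});
        injector.InjectKeyboardInput(new[] { new InjectedInputKeyboardInfo
        {
            VirtualKey = (ushort)VirtualKey.A,
            KeyOptions = InjectedInputKeyOptions.KeyUp
        }});
    }
}
```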
So I created a UWP app that can record several audio lines and save the recordings to MP3 files, for in-game multi-line recording that I can later edit separately (game audio, microphone, game comms, voice comms), since NVidia ShadowPlay/Share does not support this yet. I achieve this multi-line setup with VAC.
I have a version of this tool written in regular Windows WPF C#, with a system-wide hotkey Ctrl+Alt+R that starts/stops recording, so when I'm in a full-screen game I can start/stop recording without exiting full-screen mode (switching window focus).
Can a global (system wide, app window not in focus) HotKey that triggers some in-App event be achieved in a UWP App? I know the functionality is not supported for other platforms. But I only need it to run on Windows 10 Desktop and the HotKey support is mandatory. Or can I achieve my goal in any other way for UWP Apps?
GOAL: System wide key combination to trigger in UWP app event without switching Window focus and messing with full-screen games.
At the moment, it is not possible to solve this task thoroughly.
You are facing two limitations of UWP, and they can only be partially solved:
Lifecycle: UWP apps go into a suspended state when they are not focused. They simply "freeze" to consume fewer resources (and battery). This is a great feature for mobile devices, but it is bad news for your project. You can work around it by requesting an ExtendedExecutionSession, which guarantees that your app never falls asleep when out of focus while attached to wall power.
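A sketch of that request (the reason and description strings here are placeholders; the OS can still revoke the grant, which the `Revoked` handler must cope with):

```csharp
using Windows.ApplicationModel.ExtendedExecution;

class KeepAlive
{
    static ExtendedExecutionSession session;

    public static async System.Threading.Tasks.Task RequestAsync()
    {
        session = new ExtendedExecutionSession
        {
            Reason = ExtendedExecutionReason.Unspecified,
            Description = "Keep recording while another window has focus"
        };
        // The OS may take the grant back (e.g. on battery saver).
        session.Revoked += (s, e) => session.Dispose();

        ExtendedExecutionResult result = await session.RequestExtensionAsync();
        if (result != ExtendedExecutionResult.Allowed)
        {
            // Denied: the app will still suspend when it loses focus.
            session.Dispose();
        }
    }
}
```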
Detect input without focus: it is clearly stated on MSDN that UWP doesn't support keyboard hooks (this refers to SetWindowsHookEx). They reinvented GetAsyncKeyState, but now it works only when the window is focused. Indeed, you can find it under CoreWindow.GetAsyncKeyState().
If you only need to use F keys as hotkeys, you can still do something, like "press F2 when the app is minimized to activate a function".
Use Stefan Wick example. He solved part of the problem.
Instead if you need to listen to lots of keys (or mouse events) there isn't a way. You can't right now.
Curiosity
UWP has restricted capabilities, one of which is called "InputObservation".
At the moment it is not documented and impossible to implement (unless you are a select Microsoft partner), but it should allow apps to access system input (keyboard/mouse...) without any limitation and regardless of its final destination.
I think this feature is the key to system-wide input detection.
However, I have not been able to find a way to implement it.
Kind Regards
My app needs floating palette windows. I've already implemented this behavior on my own and it worked great for when it was a single document app, but now my app requires multiple document windows. My attempts to adapt the palette windowing system for this now makes the implementation too hacky and doesn't work very well when switching documents.
Windows has an extended window style, WS_EX_PALETTEWINDOW, which I have tried applying by overriding CreateParams, but this keeps my floating windows on top of EVERY other running app. I need them to stay on top of my app only and go away when another app enters the foreground.
Any suggestions?
Edit: Preferably, solutions should not involve the use of MDI containers, as I need document windows to be top-level windows in their own right.
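For what it's worth, WS_EX_PALETTEWINDOW includes WS_EX_TOPMOST, which is why the palettes float over every app. One common alternative, sketched below, is to use WS_EX_TOOLWINDOW without WS_EX_TOPMOST and show the palette as an owned window, so it stays above its owner only (`PaletteForm` and `documentForm` are hypothetical names):

```csharp
using System.Windows.Forms;

// Hypothetical palette form: tool-window chrome, but deliberately NOT topmost.
class PaletteForm : Form
{
    const int WS_EX_TOOLWINDOW = 0x00000080; // note: no WS_EX_TOPMOST (0x00000008)

    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= WS_EX_TOOLWINDOW;
            return cp;
        }
    }
}

// Usage: showing the palette owned by a document window keeps it above
// that window only; it drops back when another application is activated.
// var palette = new PaletteForm();
// palette.Show(documentForm);
```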
Use DockPanel Suite. It is a ready-made library for handling tool windows.
In addition you can enable user customizable docking of the tool windows if you like.
I need to capture the visual output (like a screenshot) of a DirectX window.
Currently, I use this approach.
But when the window is in the background, it captures whatever is in front of it.
I see that DirectX windows render even when minimized or in the background, so this should be possible.
But how? (It also needs to be fast, and it needs to work on Windows XP too, unfortunately...)
Edit: I am very busy these days... Don't worry, I'll put the bounty back if it expires.
To capture Direct3D windows that are in the background (or moved off screen), I believe you have the following options:
Inject and hook Direct3D within the target application via the link you have already posted, or this more up-to-date example (EasyHook can be difficult to get set up, but it does work really well) - you can always ask for help getting it working. I have used that technique for capturing in a number of games without issues (most recently for an ambilight-clone project). The problem with this approach is your concern about game protection causing bans; however, FRAPS also uses hooking to achieve this, so perhaps your concerns are exaggerated? I guess gamers being banned for a screenshot is an expensive way of finding out.
For windowed applications on Vista/Win 7 - you could inject and hook the DWM and make your capture requests through its shared surface. I have had this working on Vista, but have not finished getting it working on Windows 7, here is an example of it working for Windows 7 http://www.youtube.com/watch?v=G75WKeXqXkc. The main problem with this approach is the use of undocumented API's which could mean your application breaks without any warning upon a windows patch release - also you would have to redo the technique for each new major Windows flavour. This also does not address your need to capture in Windows XP.
Also within the DWM, there is a thumbnail API. This has limitations depending on what you're trying to do. There is some information on this API along with other DWM APIs here: http://blogs.msdn.com/b/greg_schechter/archive/2006/09/14/753605.aspx
There are other techniques for intercepting the Direct3D calls without using EasyHook, such as substituting the various DLL's with wrappers. You will find various other game hooking/interception techniques here: http://www.gamedeception.net/
Simply bring the Direct3D application to the foreground (which I guess is undesirable in your situation) - this wouldn't work for off-screen windows unless you also move the window.
Unfortunately the only solution for Windows XP that I can think of is intercepting the Direct3D API in some form.
Just a clarification on Direct3D rendering while minimized. During my fairly limited testing on this matter, I have found this to be application-dependent; it is generally not recommended that rendering take place while the application is minimized (see also this reference), but it does continue to render while in the background.
UPDATED: provided additional link to more up-to-date injection example for point 1.
A quick Google search and I found this Code Project article, which relates to Windows XP. I don't know if you can apply this knowledge to Windows Vista and 7?
http://www.codeproject.com/Articles/5051/Various-methods-for-capturing-the-screen
EDIT:
I found this article as well:
http://www.codeproject.com/Articles/20651/Capturing-Minimized-Window-A-Kid-s-Trick
This links off from Justin's blog post here, from the comments. It seems he was working on this with someone (I see that's your link above).
http://spazzarama.com/2009/02/07/screencapture-with-direct3d/
The code that you linked to (from spazzarama), which you said you were using in your project, captures the front buffer of your DirectX device. Have you tried capturing the back buffer instead? Going from the code on your linked site, you would change line 90 from
device.GetFrontBufferData(0, surface);
to
Surface backbuffer = device.GetBackBuffer(0, 0, BackBufferType.Mono);
SurfaceLoader.Save("Screenshot.bmp", ImageFileFormat.Bmp, backbuffer);
This would also involve removing lines 96-98 in your linked example. The backbuffer might be generated without the obstructing window.
EDIT
Never mind all of that. I just realized that your linked sample code is using the window handle to define a region of the screen, and not actually doing anything with the DirectX window. Your sample code won't work around the obstruction, because your region is already drawn with the other window in front of it by the time you access it.
Your best bet to salvage the application is probably to bring the DirectX window to the top of the screen before running the code to capture the image. You can use the Win32 API BringWindowToTop function to do that (http://msdn.microsoft.com/en-us/library/ms632673%28VS.85%29.aspx).
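From C#, that is a one-line P/Invoke; a sketch (`directXWindowHandle` and `CaptureRegion` stand in for the handle and capture routine from the linked sample):

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // P/Invoke declaration for the Win32 BringWindowToTop function.
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool BringWindowToTop(IntPtr hWnd);
}

// Usage: raise the DirectX window before grabbing the screen region.
// NativeMethods.BringWindowToTop(directXWindowHandle);
// CaptureRegion(directXWindowHandle); // the capture code from the linked sample
```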
How does VNC send REPAINT messages to windows even when a user is not active?
I would like to implement this in C# - I've had a look at the PrintWindow and SendMessage methods, and none of them achieve the same thing as VNC (tested by capturing images, and it's black), but with VNC I get the full picture.
What techniques are they using to do this and can this be implemented in C sharp to get windows to always repaint even when a user is not active (i.e. RDP is closed, minimised or similar).
Thanks all
You could use the technique used by video games, which consists of permanently redrawing a window during CPU idle time.
I found a C# implementation here.
You just have to adapt it to your needs.
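The pattern behind that implementation is roughly the classic WinForms "render while idle" loop: hook `Application.Idle` and keep drawing as long as the message queue is empty, checked via `PeekMessage`. A sketch, with `renderFrame` standing in for whatever repaint routine you plug in:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class IdleRenderLoop
{
    [StructLayout(LayoutKind.Sequential)]
    struct NativeMessage
    {
        public IntPtr hWnd;
        public uint msg;
        public IntPtr wParam, lParam;
        public uint time;
        public System.Drawing.Point pt;
    }

    [DllImport("user32.dll")]
    static extern bool PeekMessage(out NativeMessage msg, IntPtr hWnd,
        uint filterMin, uint filterMax, uint remove);

    // True while no message is waiting, i.e. the app is still idle.
    static bool AppStillIdle
    {
        get
        {
            NativeMessage msg;
            return !PeekMessage(out msg, IntPtr.Zero, 0, 0, 0);
        }
    }

    public static void Attach(Action renderFrame)
    {
        Application.Idle += (s, e) =>
        {
            while (AppStillIdle)
                renderFrame(); // repaint even though no input arrived
        };
    }
}
```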
VNC does NOT send WM_PAINT messages
Windows does (and it does not care whether a user is active). See also
Is it possible to screenshot a minimized application
How to get the screenshot of a minimized application programmatically?
Capturing screenshots of a minimized remote desktop