Palette windows/floating tool windows (C# .NET)

My app needs floating palette windows. I had already implemented this behavior myself and it worked great while the app was single-document, but now my app requires multiple document windows. My attempts to adapt the palette windowing system have made the implementation too hacky, and it doesn't work well when switching documents.
Windows has an extended window style, WS_EX_PALETTEWINDOW, which I have tried applying by overriding CreateParams, but this keeps my floating windows on top of EVERY other running app. I need them to stay on top of my app only and go away when another app enters the foreground.
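For reference, the CreateParams override is essentially the standard pattern below (the constant is just WS_EX_TOPMOST | WS_EX_TOOLWINDOW | WS_EX_WINDOWEDGE, and that WS_EX_TOPMOST bit is presumably why the palettes sit above every running app):

// Standard way to apply WS_EX_PALETTEWINDOW to a WinForms Form.
protected override CreateParams CreateParams
{
    get
    {
        // WS_EX_PALETTEWINDOW = WS_EX_WINDOWEDGE | WS_EX_TOOLWINDOW | WS_EX_TOPMOST
        const int WS_EX_PALETTEWINDOW = 0x00000188;
        CreateParams cp = base.CreateParams;
        cp.ExStyle |= WS_EX_PALETTEWINDOW;
        return cp;
    }
}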
Any suggestions?
Edit: Preferably, solutions should not involve MDI containers, as I need document windows to be top-level windows in their own right.

Use DockPanel Suite. It is a ready-made library for handling tool windows.
In addition, you can enable user-customizable docking of the tool windows if you like.
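For illustration, a minimal sketch of what this might look like with DockPanel Suite (the WeifenLuo.WinFormsUI.Docking package); the form and palette names are placeholders, and DocumentStyle.DockingWindow is used so no MDI container is required:

using System.Windows.Forms;
using WeifenLuo.WinFormsUI.Docking;

// Illustrative names only: MainForm hosts the DockPanel, PaletteWindow is a tool window.
public class MainForm : Form
{
    private readonly DockPanel dockPanel = new DockPanel
    {
        Dock = DockStyle.Fill,
        DocumentStyle = DocumentStyle.DockingWindow   // avoids the MDI container requirement
    };

    public MainForm()
    {
        Controls.Add(dockPanel);

        // Tool windows derive from DockContent, so they can float or dock.
        var palette = new PaletteWindow();
        palette.Show(dockPanel, DockState.Float);     // floats above this application only
    }
}

public class PaletteWindow : DockContent
{
    public PaletteWindow()
    {
        Text = "Tools";
        // Restrict where the user may place this window, if desired.
        DockAreas = DockAreas.Float | DockAreas.DockLeft | DockAreas.DockRight;
    }
}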

Related

Code for copying the text from another window

Is it possible to write code that will copy text values from a window belonging to another application?
I have an application that gives me live results (text only) every 5 minutes, and I cannot copy and paste them by hand every 5 minutes.
Maybe.
It depends on how the target application is exposing its text to the OS.
If the application is using a private 2D/drawing library to render text by itself into an in-memory or in-VRAM buffer, then no. You'll need to grab a screenshot and perform OCR on it, or inject your own code into the target process and intercept those 2D/drawing library calls to get the text being rendered.
If the application is using the Windows-provided GDI, then there are ways of intercepting those calls to get the text. I believe Direct2D and DirectWrite offer straightforward ways of intercepting/profiling their calls as well.
If the application is using a GUI framework or platform like WinForms or WPF then there are ways of inspecting the rendered view's object-model to extract data and text - this is how various "Spy" utilities work. "Spy++" (spyxx.exe included in Visual Studio and the Windows SDK) can inspect native Win32 hWnd windows and "Snoop" is a very powerful tool for inspecting WPF applications (Visual Studio's built-in Visual Inspector does the same thing).
Additionally, GUI frameworks and platforms often support the OS's built-in accessibility platform and will expose on-screen data as machine-readable structured data, for use both by screen readers for the blind and visually impaired and by automation software. Windows' built-in platforms are Active Accessibility and Windows UI Automation. There are premade tools you can download to inspect Active Accessibility data (a sketch using UI Automation follows below these cases).
If it's an HTML application (e.g. a Windows HTA, an Electron app, a Chrome desktop app, etc.), then that's another topic.
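For example, here is a rough sketch of the accessibility route using the managed UI Automation API (System.Windows.Automation; reference the UIAutomationClient and UIAutomationTypes assemblies). The window title is a placeholder, and what you actually get back depends entirely on what the target application exposes:

using System;
using System.Windows.Automation;   // UIAutomationClient + UIAutomationTypes assemblies

class TextScraper
{
    static void Main()
    {
        // "Target Window Title" is a placeholder - use Inspect or Spy++ to find the real one.
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Target Window Title"));
        if (window == null) return;

        // Walk every descendant element and print whatever text it exposes.
        foreach (AutomationElement element in
                 window.FindAll(TreeScope.Descendants, Condition.TrueCondition))
        {
            object pattern;
            if (element.TryGetCurrentPattern(ValuePattern.Pattern, out pattern))
                Console.WriteLine(((ValuePattern)pattern).Current.Value);
            else if (!string.IsNullOrEmpty(element.Current.Name))
                Console.WriteLine(element.Current.Name);
        }
    }
}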

Windows 8 and C#: what technology is needed to move windows programmatically, even in the z direction?

I'm interested in being able to manipulate windows programmatically. Perhaps by clicking on a window so it has focus, then by using some key combination, I can move the window. I'd also like to move windows in the z-direction, which would mean a window would appear to get smaller as it went deeper into the screen and bigger as it moved toward me.
I would like this to apply to any existing window, be it a text editor window, a browser window, or even the calculator program window.
The problem is that I have no idea what technology would be needed to accomplish that.
Any ideas?
You'd need to use the Win32 API (using P/Invoke).
"Manipulating" a window would need several different API functions depending on what you want to do... these are a few:
FindWindow (pinvoke.net link) will allow you to find the window handle so you can feed it into the other functions (there are more ways to find a window handle depending on your needs, but this one is by far the easiest)
MoveWindow (pinvoke.net link) allows you to set position and size
SetWindowPos (pinvoke.net link) to set the z-order of top-level windows
etc.
Use http://pinvoke.net to find out how to call Win32 API functions from C#, and use MSDN (this link in particular: http://msdn.microsoft.com/en-us/library/windows/desktop/ff468919(v=vs.85).aspx) for a reference of all the window-handling functions.
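As a minimal sketch, the P/Invoke declarations and a call that finds another application's window, moves/resizes it, and pushes it to the top of the z-order might look like this (the window title is just an example):

using System;
using System.Runtime.InteropServices;

static class WindowMover
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool MoveWindow(IntPtr hWnd, int x, int y, int width, int height, bool repaint);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint flags);

    static readonly IntPtr HWND_TOP = IntPtr.Zero;
    const uint SWP_NOSIZE = 0x0001, SWP_NOMOVE = 0x0002;

    static void Main()
    {
        // "Calculator" is just an example title; use Spy++ to find the real one.
        IntPtr hWnd = FindWindow(null, "Calculator");
        if (hWnd == IntPtr.Zero) return;

        // Move/resize the window, then bring it to the top of the z-order.
        MoveWindow(hWnd, 100, 100, 800, 600, true);
        SetWindowPos(hWnd, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    }
}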
Update
Rereading your question, it looks like you want to "simulate" a 3D-like effect on your windows. This is not in the API and there's no standardized way to do it as far as I know (the modern accelerated DWM does it, but I don't think you can access any functions to do that via its API).
You could research capturing the window contents to a bitmap and rendering that bitmap scaled into your own window. It's not impossible, but it's not exactly easy, and it would take far too long to explain here.
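If you do go down that road, one rough sketch (assuming the target window cooperates, which is not guaranteed) is to have PrintWindow copy the window's contents into a GDI+ bitmap that you can then draw scaled into your own window:

using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class WindowSnapshot
{
    [DllImport("user32.dll")]
    static extern bool PrintWindow(IntPtr hWnd, IntPtr hdcBlt, uint nFlags);

    [DllImport("user32.dll")]
    static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    // Returns a bitmap of the window's current contents, which the caller
    // can then draw scaled into its own window (e.g. via Graphics.DrawImage).
    public static Bitmap Capture(IntPtr hWnd)
    {
        GetWindowRect(hWnd, out RECT r);
        var bmp = new Bitmap(r.Right - r.Left, r.Bottom - r.Top);
        using (Graphics g = Graphics.FromImage(bmp))
        {
            IntPtr hdc = g.GetHdc();
            PrintWindow(hWnd, hdc, 0);
            g.ReleaseHdc(hdc);
        }
        return bmp;
    }
}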
Update 2
There's actually a DWM API (link to MSDN), but even with it I doubt you can do what you want in a practical manner.

How do I utilize the functionality of a multi-monitor setup without physical hardware?

I've spent the past few days researching whether it's possible to use the Windows API (preferably Windows 8) to develop an application that can utilize the features of a multiple-physical-monitor configuration from a single physical monitor. As far as I can tell you simply cannot do it, or it's just not documented at all. Below I will present my problem and the research I've undertaken, in the hopes that someone can provide some knowledge I have not yet encountered.
The Problem
In Windows 7+, multi-monitor configurations can use some cool desktop features, such as a single large desktop that spans multiple monitors, seamless application dragging between them, the ability to toggle whether the taskbar spans them, etc.
The Virtual Screen (MSDN link).
I would like to gain access to this API and allow my application to use it to allow the user to effectively have multiple virtual desktops from a single physical monitor. Simple as that.
The Solution
Here I will present a number of proposed solutions I have found, and why they will not work (As far as I can tell).
1. Use the Window Station & Desktop API to create entirely new desktops and flip between them.
"A window station is a securable object that is associated with a process, and contains a clipboard, an atom table, and one or more desktop objects.
A desktop is a securable object contained within a window station. A desktop has a logical display surface and contains user interface objects such as windows, menus, and hooks."
MSDN Link.
This is a really clean and simple way to effectively create multiple desktops in Windows and let the user switch between them on a single monitor. However, it has the following large caveat:
"Windows doesn't provide a way to move a window from one desktop object to another, and because a separate Explorer process must run on each desktop to provide a taskbar and start menu, most tray applications are only visible on the first desktop." Sysinternals on TechNET.
2. Attempt to create a fake display driver to force Windows to believe it has more than one monitor.
This appears to have been a valid option for a couple of existing similar applications such as ZoneScreen. However, in Windows 7 it became difficult to install the unsigned driver, and in Windows 8 it appears to be flat-out impossible.
3. Fake it by attempting to track applications and force them to hide between user-defined monitor groups.
Both commercial and free applications such as DisplayFusion and Finestra Virtual Desktops appear to use a highly convoluted and complex system of tracking launched applications and attempting to hide and unhide them as the user switches between virtual monitors.
This is the most workable solution, as it largely meets all the requirements. But it's a hack: some applications don't really work with it, and there are many corner cases where it will fail.
What am I missing here? Is any of my research incorrect thus far? Are there areas of the API that I haven't yet plumbed?
develop an application that can utilize the features in a multiple physical monitor configuration, from a single physical monitor
The Windows API ties each desktop to an Explorer process, and the taskbar, notifications, etc. are managed on a per-desktop basis. It is possible to create new virtual desktops using this API by creating a new desktop object. However, if you are trying to create something that is the equivalent of workspaces in Linux distros, then you are out of luck. The desktop object manages the applications launched under a process tree, and moving applications between these desktop objects is not possible due to the way Windows Explorer works.
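For illustration, creating and switching to a second desktop object comes down to a few user32 calls; this sketch uses a generic access mask and, as noted above, cannot move existing windows onto the new desktop:

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class DesktopDemo
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr CreateDesktop(string lpszDesktop, IntPtr lpszDevice,
        IntPtr pDevmode, int dwFlags, uint dwDesiredAccess, IntPtr lpsa);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SwitchDesktop(IntPtr hDesktop);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool CloseDesktop(IntPtr hDesktop);

    [DllImport("user32.dll")]
    static extern IntPtr GetThreadDesktop(uint dwThreadId);

    [DllImport("kernel32.dll")]
    static extern uint GetCurrentThreadId();

    const uint GENERIC_ALL = 0x10000000;

    static void Main()
    {
        // Remember the desktop we started on so we can switch back.
        IntPtr original = GetThreadDesktop(GetCurrentThreadId());

        // Create and show a second, empty desktop ("MySecondDesktop" is an arbitrary name).
        IntPtr second = CreateDesktop("MySecondDesktop", IntPtr.Zero, IntPtr.Zero,
                                      0, GENERIC_ALL, IntPtr.Zero);
        if (second == IntPtr.Zero) return;

        SwitchDesktop(second);
        Thread.Sleep(3000);     // nothing runs here unless you launch a process on this desktop
        SwitchDesktop(original);
        CloseDesktop(second);
    }
}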
The only way to achieve something close to workspaces is to fake it: each workspace and its processes have to be shown in the taskbar/notification area in their own slots. But this is very tough to achieve, and games, fullscreen apps, etc. are bound to break. I am not aware of how this will work out in Win8 either. So yes, workspaces in Windows are going to suck from the get-go.

Capture visual output of a DirectX application - even in background?

I need to capture the visual output (like a screenshot) of a DirectX window.
Currently, I use this approach.
But when the window is in the background, it captures whatever is in front of it.
I see that DirectX windows render even when minimized or in the background, so this should be possible.
But, how? (It also needs to be fast, and it needs to work on Windows XP too, unfortunately...)
Edit: I am very busy these days... Don't worry, I'll put the bounty back if it expires.
To capture Direct3D windows that are in the background (or moved off screen), I believe you have the following options:
1. Inject and hook Direct3D within the target application via the link you have already posted or this more up-to-date example (EasyHook can be difficult to get set up, but it does work really well) - you can always ask for help with getting it working. I have used that technique for capturing in a number of games without issues (most recently for an ambilight-clone project). The problem with this approach is your concern about game protection causing bans; however, Fraps also uses hooking to achieve this, so perhaps your concerns are exaggerated? I guess gamers being banned for a screenshot is an expensive way of finding out.
2. For windowed applications on Vista/Win 7, you could inject and hook the DWM and make your capture requests through its shared surface. I have had this working on Vista but have not finished getting it working on Windows 7; here is an example of it working on Windows 7: http://www.youtube.com/watch?v=G75WKeXqXkc. The main problem with this approach is the use of undocumented APIs, which could mean your application breaks without any warning upon a Windows patch release - you would also have to redo the technique for each new major Windows flavour. This also does not address your need to capture on Windows XP.
3. Also within the DWM, there is a thumbnail API (a sketch follows below). It has limitations depending on what you're trying to do. There is some information on this API, along with other DWM APIs, here: http://blogs.msdn.com/b/greg_schechter/archive/2006/09/14/753605.aspx
4. There are other techniques for intercepting the Direct3D calls without using EasyHook, such as substituting the various DLLs with wrappers. You will find various other game hooking/interception techniques here: http://www.gamedeception.net/
5. Simply bring the Direct3D application to the foreground (which I guess is undesirable in your situation) - this wouldn't work for off-screen windows unless you also move the window.
Unfortunately the only solution for Windows XP that I can think of is intercepting the Direct3D API in some form.
Just a clarification on Direct3D rendering while minimised: during my fairly limited testing on this matter I have found this to be application-dependent. It is generally not recommended that rendering take place while the application is minimized (also this reference); it does continue to render while in the background, however.
UPDATED: provided additional link to more up-to-date injection example for point 1.
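For completeness, a rough sketch of the thumbnail API mentioned in point 3. Note that the DWM draws the live preview directly into a window you own; your code never gets access to the pixels, which is the main limitation for a capture scenario:

using System;
using System.Runtime.InteropServices;

static class DwmThumbnail
{
    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    [StructLayout(LayoutKind.Sequential)]
    struct DWM_THUMBNAIL_PROPERTIES
    {
        public int dwFlags;
        public RECT rcDestination;
        public RECT rcSource;
        public byte opacity;
        public bool fVisible;
        public bool fSourceClientAreaOnly;
    }

    const int DWM_TNP_RECTDESTINATION = 0x1;
    const int DWM_TNP_VISIBLE = 0x8;

    [DllImport("dwmapi.dll")]
    static extern int DwmRegisterThumbnail(IntPtr dest, IntPtr src, out IntPtr thumb);

    [DllImport("dwmapi.dll")]
    static extern int DwmUpdateThumbnailProperties(IntPtr thumb,
        ref DWM_THUMBNAIL_PROPERTIES props);

    // Draws a live preview of `source` into the top-left corner of `host`
    // (host being a window you own, e.g. a WinForms Form.Handle).
    public static void ShowPreview(IntPtr host, IntPtr source, int width, int height)
    {
        IntPtr thumb;
        if (DwmRegisterThumbnail(host, source, out thumb) != 0) return;

        var props = new DWM_THUMBNAIL_PROPERTIES
        {
            dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE,
            rcDestination = new RECT { Left = 0, Top = 0, Right = width, Bottom = height },
            fVisible = true
        };
        DwmUpdateThumbnailProperties(thumb, ref props);
    }
}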
A quick Google and I found this CodeProject article, which relates to Windows XP. I don't know if you can apply this knowledge to Windows Vista and 7?
http://www.codeproject.com/Articles/5051/Various-methods-for-capturing-the-screen
EDIT:
I found this article as well:
http://www.codeproject.com/Articles/20651/Capturing-Minimized-Window-A-Kid-s-Trick
This is linked from the comments on Justin's blog post here. It seems he was working on this with someone (I see that's your link above).
http://spazzarama.com/2009/02/07/screencapture-with-direct3d/
The code that you linked to (from spazzarama), which you said you were using in your project, captures the front buffer of your DirectX device. Have you tried capturing the back buffer instead? Going from the code on your linked site, you would change line 90 from
device.GetFrontBufferData(0, surface);
to
Surface backbuffer = device.GetBackBuffer(0, 0, BackBufferType.Mono);
SurfaceLoader.Save("Screenshot.bmp", ImageFileFormat.Bmp, backbuffer);
This would also involve removing lines 96-98 in your linked example. The backbuffer might be generated without the obstructing window.
EDIT
Never mind all of that. I just realized that your linked sample code is using the window handle to define a region of the screen, and not actually doing anything with the DirectX window. Your sample code won't work around the obstruction because your region is already drawn with the other window in front of it by the time you access it.
Your best bet to salvage the application is probably to bring the DirectX window to the top of the screen before running the code to capture the image. You can use the Win32 API BringWindowToTop function to do that (http://msdn.microsoft.com/en-us/library/ms632673%28VS.85%29.aspx).
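For reference, the P/Invoke declaration is a one-liner; directXWindowHandle below is hypothetical and stands in for however your capture code already obtains the handle:

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Raises the given window to the top of the z-order.
    [DllImport("user32.dll")]
    public static extern bool BringWindowToTop(IntPtr hWnd);
}

// Usage:
//   NativeMethods.BringWindowToTop(directXWindowHandle);
//   ... then run the existing screen-region capture.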

Can I subscribe to window-docking events in windows 7 from C#

I am wondering if it is possible to create an application that receives a notification when any other application/window is docked with the new Windows 7 docking feature (e.g. Win key + Left Arrow).
The purpose of my application would be to set up custom rules for certain windows. So, for example, if I am using Chrome and I press Win+Left, my application would receive a notification and would be able to say that the window should not resize to 50% of the screen, but should use 70% instead.
I am not very familiar with writing windows applications (mostly do web), so any pointers at how this might be achieved are very welcome.
Thanks.
Given that the Win+Left/Right combinations work with applications that pre-date Windows 7, I strongly suspect it is just a combination of WM_SIZE and WM_MOVE messages with coordinates worked out by the shell, and there is nothing within the application to directly distinguish such a resize from any other.
Use Spy++ from the SDK to prove this.
You could apply heuristics based on the screen size to detect these events.
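As a hedged sketch of that heuristic approach: SetWinEventHook can tell you whenever a top-level window's location changes, and you can then compare the window's rectangle against half the monitor's work area (the event choice, the 20-pixel tolerance, and the console output are illustrative only):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class SnapWatcher
{
    delegate void WinEventDelegate(IntPtr hHook, uint eventType, IntPtr hwnd,
        int idObject, int idChild, uint idThread, uint time);

    [DllImport("user32.dll")]
    static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax, IntPtr hmod,
        WinEventDelegate callback, uint idProcess, uint idThread, uint flags);

    [DllImport("user32.dll")]
    static extern bool GetWindowRect(IntPtr hwnd, out RECT rect);

    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    const uint EVENT_OBJECT_LOCATIONCHANGE = 0x800B;
    const uint WINEVENT_OUTOFCONTEXT = 0x0000;
    const int OBJID_WINDOW = 0;

    static WinEventDelegate _callback;   // keep a reference so the GC doesn't collect it

    static void Main()
    {
        _callback = OnLocationChange;
        // Listen to location changes for all processes; this fires very often,
        // so real code would filter and debounce more aggressively.
        SetWinEventHook(EVENT_OBJECT_LOCATIONCHANGE, EVENT_OBJECT_LOCATIONCHANGE,
                        IntPtr.Zero, _callback, 0, 0, WINEVENT_OUTOFCONTEXT);
        Application.Run();               // the hook requires a message loop
    }

    static void OnLocationChange(IntPtr hHook, uint eventType, IntPtr hwnd,
        int idObject, int idChild, uint idThread, uint time)
    {
        if (idObject != OBJID_WINDOW || hwnd == IntPtr.Zero) return;
        if (!GetWindowRect(hwnd, out RECT r)) return;

        var work = Screen.FromHandle(hwnd).WorkingArea;

        // Heuristic: roughly half the work area wide and full work-area height
        // usually means the window was just snapped with Win+Left/Right.
        bool looksSnapped = Math.Abs((r.Right - r.Left) - work.Width / 2) <= 20
                         && Math.Abs((r.Bottom - r.Top) - work.Height) <= 20;
        if (looksSnapped)
            Console.WriteLine("Window 0x{0:X} looks snapped", hwnd.ToInt64());
    }
}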
