I've got an MDI application that allows users to drag and move windows around the screen. I would like to detect when two windows are near each other, but I am unsure how to go about this. I am using the WPF MDI library to handle the MDI behaviour; however, I am not sure whether it has this functionality built in.
If not, what would be the standard approach to tackling this issue?
What I thought of doing is this: while a window is being dragged, the X and Y co-ordinates of the surrounding windows are constantly checked to see whether they are within close proximity. However, if I have many windows, this might end up causing some lag. Any ideas?
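In sketch form, that proximity check could be as simple as comparing inflated bounding rectangles. This is plain WPF Rect math; the threshold value and the collection of window bounds are placeholders, and nothing here is specific to the MDI library:

    using System.Collections.Generic;
    using System.Linq;
    using System.Windows; // Rect

    static class ProximityCheck
    {
        // True if two window bounds come within 'threshold' pixels of each other.
        public static bool AreNear(Rect a, Rect b, double threshold)
        {
            Rect grown = a;
            grown.Inflate(threshold, threshold); // pad the dragged window's bounds
            return grown.IntersectsWith(b);
        }

        // Called while a window is being dragged: test it against all the others.
        public static IEnumerable<Rect> NearbyWindows(Rect dragged, IEnumerable<Rect> others, double threshold)
        {
            return others.Where(o => AreNear(dragged, o, threshold));
        }
    }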
Just in case anyone looks this up later on, this is the approach I took:
I started a DoDragDrop operation from the source window being dragged and set the other window as the drop target. It might be specific to my requirements, but the end result works perfectly!
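For anyone curious, a rough sketch of that kind of wiring (not the poster's actual code), assuming plain WPF Window instances rather than the MDI library's own child type; the helper name and the drag payload are made up:

    using System.Windows;
    using System.Windows.Input;

    static class DragProximityWiring
    {
        // 'source' is the window being dragged, 'target' the one we want to detect.
        public static void Wire(Window source, Window target)
        {
            source.MouseMove += (s, e) =>
            {
                if (e.LeftButton == MouseButtonState.Pressed)
                    DragDrop.DoDragDrop(source, source.Title, DragDropEffects.Move);
            };

            target.AllowDrop = true;
            target.DragEnter += (s, e) =>
            {
                // The drag has entered the target's bounds, so treat the two
                // windows as being close to each other.
                e.Effects = DragDropEffects.Move;
            };
        }
    }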
I tried writing code several different times, but I came to an error with each one.
Basically, I'm trying to make "windows" similar to, say, Explorer, Paint or Media Player, where you can drag them around, interact with them, minimize and close them. Of course, if you click on a window, the one below it (they can overlap) shouldn't get affected.
I know how to do this: I have a list of the class I call Window, loop through it, and only interact with the first window that contains the location of the mouse click. This way, other overlapping windows won't get affected.[1]
Next, I had to make it so that two overlapping buttons don't both get activated when the user clicks in the intersection of the two. I handled this using the same method as above.[2]
But the problem I'm facing now is this: if I press the left button on a button and then decide not to click it, I drag the mouse away from the button and release the left button so that the button-click event won't fire. However, when I move the mouse out of the boundaries of that button and, say, into another one, the new button gets activated, which it should not.[3]
My set up is like this:
I have a class called Window.
In Window, I have a list of the class called Interface (similar to the Control class in WinForms).
And each Interface has a struct in it that contains 4 bools: whether the left/right mouse button is currently down, and whether it was down in the previous update (prevLeft, prevRight, currLeft, currRight).
So, I'm ready to discard that (I have not yet, so I still have the source code), but I need a good structure for making an object-oriented application of this type. However, I am not using WinForms. I need help with the structure alone, so no actual code is necessary; a description is enough. I need to avoid the 3 problems I mentioned above.
Creating your own Window Manager is not an easy task. I know it because I'm making one too ;)
You can use an existing (though maybe not the best) solution, for example Nuclex.UI, which I personally rejected when I first saw it; but if you're not dead set on making your own WM, I suggest using that or a hybrid WinForms-XNA approach.
But if you're really dead set on implementing a custom Window Manager, you have to understand how any other WM works. Since we're talking about XNA, it means Windows, and that means Windows Explorer, which is a great thing to learn from.
You have to recognize how the simplest things work, and it's really not so hard. The hard part is figuring out what logic is updated when, and how to not spend all the CPU on only UI updates. Let me just give you a few hints on how to solve the problems you mention in your question.
To keep track of all windows, I'm using a Dictionary<string, Window>, where Window is a custom class, and the string is its unique name for rare cases where I have to call windows by name. Think of it as a window GUID or Handle. But you can just make it so that a "Form" can only appear once, and store all references in static variables.
To make the WM understand which control you're clicking, I use rectangles and check whether they contain a point at the cursor coordinates (effectively a 1x1 pixel rectangle), which is probably about the same way it's done in Windows Explorer. To do that, your WM needs to know in which order to update the active windows. Usually you'd want to start from the topmost window and continue towards the end of the list of active windows. For that you can just iterate through the list with a foreach loop.
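A rough sketch of that topmost-first hit test, using System.Drawing's Rectangle/Point as stand-ins for the XNA types (the WmWindow class is an invented placeholder):

    using System.Collections.Generic;
    using System.Drawing; // Rectangle/Point stand-ins for the XNA types

    class WmWindow
    {
        public string Name;
        public Rectangle Bounds;
    }

    static class HitTesting
    {
        // 'windows' is ordered topmost-first; the first window whose rectangle
        // contains the cursor wins, and everything underneath is ignored.
        public static WmWindow WindowUnderCursor(IEnumerable<WmWindow> windows, Point cursor)
        {
            foreach (WmWindow w in windows)
                if (w.Bounds.Contains(cursor))
                    return w;
            return null; // the click landed on empty desktop space
        }
    }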
But that's not all, because every window itself is a Container, which means it contains other controls, some of which may even be Containers themselves, like the WinForms Panel class. This means you have to iterate through each window's child controls. The update order should make sense too: update from the topmost child to the bottommost, recursively for Container controls, in case they also have Containers in them. This basically means you'd want to implement a recursive GetAllControls() method for your WindowManager class that iterates through all Containers and returns a list of all Controls.
Drawing all those Controls should be done in reverse order of updating them, so you can just GetAllControls().Reverse() and iterate through that in a foreach loop.
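For example, a minimal version of that recursive walk plus the reversed draw order (UiControl/UiContainer are simplified stand-ins for whatever your WM actually defines):

    using System.Collections.Generic;
    using System.Linq;

    class UiControl { }
    class UiContainer : UiControl { public List<UiControl> Children = new List<UiControl>(); }

    static class ControlWalker
    {
        // Depth-first, topmost child first; nested containers are expanded recursively.
        public static List<UiControl> GetAllControls(UiContainer root)
        {
            var result = new List<UiControl>();
            foreach (UiControl child in root.Children)
            {
                result.Add(child);
                if (child is UiContainer nested)
                    result.AddRange(GetAllControls(nested));
            }
            return result;
        }

        // Update front-to-back, draw back-to-front.
        public static IEnumerable<UiControl> DrawOrder(UiContainer root)
        {
            return GetAllControls(root).AsEnumerable().Reverse();
        }
    }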
Where to draw and what to update depends on all the parent containers the current container has and their combined offset from the top-left corner of the game window. I solve this by storing a ParentContainer reference in all child controls to get the appropriate DrawRectangles and update areas via recursive properties.
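A tiny sketch of that kind of recursive offset property (the names are illustrative, and System.Drawing types again stand in for XNA's):

    using System.Drawing; // Point/Size/Rectangle stand-ins for the XNA types

    class ChildControl
    {
        public ChildControl ParentContainer;   // null for a top-level window
        public Point LocalPosition;            // offset inside the parent container
        public Size Size;

        // Screen position = this control's offset plus the combined offset of all parents.
        public Point AbsolutePosition =>
            ParentContainer == null
                ? LocalPosition
                : new Point(ParentContainer.AbsolutePosition.X + LocalPosition.X,
                            ParentContainer.AbsolutePosition.Y + LocalPosition.Y);

        // The rectangle used both for drawing and for hit testing / update areas.
        public Rectangle DrawRectangle => new Rectangle(AbsolutePosition, Size);
    }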
When you click somewhere on the screen and a click is registered on a Control, make the WindowManager remember that (bool clickRegistered) and not run any OnClick events on any underlying Controls.
Windows Explorer remembers the control you clicked and will activate its OnRelease event only if the cursor is then released in the update area of that very same control. So basically the Window Manager only does something when you release the mouse button. You can make your WindowManager and Controls handle click events differently, like firing an event right after you press the mouse button, i.e. OnMouseDown. But remember that Microsoft aren't noobs, and there's a reason for that behavior in Windows Explorer: if you accidentally press the mouse button somewhere you didn't intend to, you can still fix it by moving the cursor outside the pressed control's update area so that its action never runs.
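In sketch form, that press/release behaviour might look like this (the interface and class names are invented; the hit-test results come from whatever your WM's cursor lookup returns):

    interface IClickable
    {
        void RaiseClick();
    }

    class ClickTracker
    {
        IClickable pressed; // the control under the cursor when the button went down

        public void OnMouseDown(IClickable hitControl)
        {
            pressed = hitControl;   // remember it, but fire nothing yet
        }

        public void OnMouseUp(IClickable hitControlOnRelease)
        {
            // Fire the click only if the button is released over the same control
            // it was pressed on; releasing anywhere else silently cancels the click.
            if (pressed != null && ReferenceEquals(pressed, hitControlOnRelease))
                pressed.RaiseClick();
            pressed = null;
        }
    }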
At this point you might be thinking "Is it really worth implementing all this?" For me the answer was "maybe", because I was a total noob in both C# and XNA at the time I started, and now I know my game, which was originally supposed to use some Window Manager, is going to benefit from my own WM implementation far more than from ready third-party solutions. And besides, it's a great exercise in logic and programming.
But if you'd like to think of yourself as a game developer, you should think in terms of reaching your goal as quickly as possible, i.e. actually making a game, and not the game engine. So in this case, better make use of existing solutions and start selling your product.
Instead of having the structure with the 4 booleans (similar to XNA), how about you make a way to tell where the mouse "is"? So, in a sense, the mouse is in window number 5, which is Paint, and the user is holding the mouse down on interface/control number 2, which is a button.
That sounds like it could work.
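To make that suggestion a bit more concrete, here is one possible shape for a single, central mouse-focus record (all names are invented). It also naturally avoids problem [3], because the focused control is only retargeted while no button is held:

    class MouseFocus
    {
        public int WindowIndex = -1;   // e.g. 5 -> the "Paint" window; -1 -> no window
        public int ControlIndex = -1;  // e.g. 2 -> a button inside that window; -1 -> none
        public bool LeftHeld;          // true while the left button is held down

        // Only retarget the focus while no button is held, so a press that started
        // on one button can never "leak" onto a different button before release.
        public void Update(int hitWindow, int hitControl, bool leftDown)
        {
            if (!LeftHeld)
            {
                WindowIndex = hitWindow;
                ControlIndex = hitControl;
            }
            LeftHeld = leftDown;
        }
    }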
I am currently working on a desktop C# WPF application where the goal is to make it look and feel like a "real" Windows Store App.
I want to add an appbar that should be shown when the user swipes up from the bottom. To do this in a normal app you just position your finger outside the screen area, and swipe up.
But if I do that in a fullscreen WPF program I don't receive any TouchDown or TouchMove events - probably because the finger is already down when entering the actual screen area.
I have tried with the Manipulation framework also, but same result here. Even when I hook directly into the message queue using WndProc or other hooks I get no events at all.
The funny thing is that I can see the "touch cursor" move around the screen, so at least something in the underlying framework is notified.
Does anyone have an idea how to do this?
P.S. It is not an option for me to just use a Windows Store app instead, because of hardware connectivity issues ;-)
You will need to keep track of the cursor coordinates and detect when the cursor (the swipe) starts at the edge of the screen and moves inwards. When that condition triggers (with whatever trigger you want, most likely the distance covered), you can fire up your appbar.
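One way to sketch that is to poll the cursor position (which the touch contact also drives, since the touch cursor visibly moves) and treat an upward move that starts near the bottom edge as a swipe. The thresholds below are guesses, and reset/lift handling is omitted:

    using System;
    using System.Runtime.InteropServices;
    using System.Windows;
    using System.Windows.Threading;

    public class EdgeSwipeWatcher
    {
        [StructLayout(LayoutKind.Sequential)]
        struct POINT { public int X; public int Y; }

        [DllImport("user32.dll")]
        static extern bool GetCursorPos(out POINT lpPoint);

        readonly DispatcherTimer timer =
            new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(50) };
        double? startY;                       // set when the contact appears near the bottom edge
        public event Action SwipeUpDetected;  // raised once the contact has travelled far enough up

        public EdgeSwipeWatcher()
        {
            double bottom = SystemParameters.PrimaryScreenHeight;
            timer.Tick += (s, e) =>
            {
                POINT pt;
                if (!GetCursorPos(out pt)) return;

                if (!startY.HasValue && pt.Y > bottom - 20)
                    startY = pt.Y;                    // contact started at the very bottom
                else if (startY.HasValue && startY.Value - pt.Y > 100)
                {
                    startY = null;                    // moved ~100px upwards: call it a swipe
                    SwipeUpDetected?.Invoke();
                }
            };
            timer.Start();
        }
    }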
There was a similar question asked on MSDN:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/d85dcde7-839a-44d3-9f2a-8b47b947576c/swipe-gesture-and-page-change?forum=wpf
I need to create an application similar to the ones that come preinstalled when you buy a laptop. It should be visible only when the mouse pointer reaches the top of the screen. How can I do this using C# 4.0?
http://www.notebookcheck.net/uploads/pics/win2_12.jpg
At that link you can see the kind of application I mean; I need to create something of that type.
If you have any ideas, please share. Thanks.
I suppose there are several different ways to achieve this effect:
You can place part of your application's window above the visible screen area, so that only a part of it is visible (let's say you can see only its bottom edge). Then you need to handle the events fired when the mouse enters (MouseEnter) and leaves (MouseLeave) the form, and move the form down and up accordingly (a bare-bones sketch of this follows after this answer).
You can use a background thread to call the GetCursorPos method at a set interval (e.g. every 500 ms) to check where the mouse currently is. See this link for more information and sample code: http://www.pinvoke.net/default.aspx/user32.getcursorpos.
(If you only need to check the mouse position, you can use a timer to simplify your application.)
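To make the first option concrete, here is a bare-bones WinForms sketch that parks the form almost entirely above the screen and slides it down/up on MouseEnter/MouseLeave (all sizes and offsets are arbitrary):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class SlideDownForm : Form
    {
        public SlideDownForm()
        {
            FormBorderStyle = FormBorderStyle.None;
            StartPosition = FormStartPosition.Manual;
            Size = new Size(400, 120);

            // Park the form almost entirely above the screen; only a 10px strip stays visible.
            Location = new Point(200, -Height + 10);

            MouseEnter += (s, e) => Location = new Point(Left, 0);            // slide into view
            MouseLeave += (s, e) => Location = new Point(Left, -Height + 10); // slide back up
        }

        [STAThread]
        static void Main() => Application.Run(new SlideDownForm());
    }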
When you hit the limits of what's possible with plain C#, you can always start invoking native code, such as the Windows API. Since you don't ask a specific question, I'll leave you with:
Position your app where you want it to appear and hide it.
Capture the mouse position with the Windows API (see this SO answer).
When the mouse is at the screen corner/top, etc., make your app visible, as sketched after this list.
Now make sure all this works with a dual-screen setup, and you are done.
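Putting those steps together, a rough single-screen sketch that polls GetCursorPos with a timer and toggles the form's visibility (the 5-pixel threshold and 200 ms interval are arbitrary):

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class TopEdgeWatcher
    {
        [StructLayout(LayoutKind.Sequential)]
        struct POINT { public int X; public int Y; }

        [DllImport("user32.dll")]
        static extern bool GetCursorPos(out POINT lpPoint);

        // Polls the cursor every 200 ms and shows the form only while the
        // pointer hugs the top edge of the primary screen.
        public static void Watch(Form formToReveal)
        {
            var timer = new Timer { Interval = 200 };
            timer.Tick += (s, e) =>
            {
                POINT pt;
                if (!GetCursorPos(out pt)) return;
                formToReveal.Visible = pt.Y <= 5;
            };
            timer.Start();
        }
    }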
Assume I have an empty form, 100px by 100px, at coordinates 0,0 on the screen. It has no border style. Is there any way to have it positioned BEHIND the desktop icons?
I would assume this would involve the Progman process, because that's what contains the desktop icons. But no matter what I try (getting window handles, changing parents, etc.), I can't seem to get the window to appear behind the icons.
Any ideas?
Essentially you want to draw on the desktop wallpaper. The desktop hierarchy looks like this:
"Program Manager" Progman
"" SHELLDLL_DefView
"FolderView" SysListView32
It's the SysListView32 that actually draws the desktop icons, so that's what you have to hook. And you can't just stick your form on top of it; you have to grab a WindowDC to that handle and draw on the DC.
It can be done (it has been done), but you're going to be using a lot of interop. Forget about doing this with a traditional WinForms Form. I don't think I've even seen it done in C#, although somebody did it in Python, if that helps. I'm not a Python coder myself, but the code is pretty short and easy to understand.
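To give a feel for the interop involved, here is a sketch that walks that hierarchy and grabs a device context for the icon list view. Error handling is omitted; note that on some setups the SHELLDLL_DefView/SysListView32 pair lives under a WorkerW window rather than directly under Progman, and that anything drawn this way is wiped on the next repaint:

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;

    class DesktopDc
    {
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindowEx(IntPtr parent, IntPtr childAfter, string className, string windowTitle);

        [DllImport("user32.dll")]
        static extern IntPtr GetWindowDC(IntPtr hWnd);

        [DllImport("user32.dll")]
        static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);

        static void Main()
        {
            // Walk Progman -> SHELLDLL_DefView -> SysListView32 (the icon list view).
            IntPtr progman  = FindWindow("Progman", null);
            IntPtr defView  = FindWindowEx(progman, IntPtr.Zero, "SHELLDLL_DefView", null);
            IntPtr listView = FindWindowEx(defView, IntPtr.Zero, "SysListView32", "FolderView");

            IntPtr hdc = GetWindowDC(listView);
            using (Graphics g = Graphics.FromHdc(hdc))
            {
                // Anything drawn here lands on the icon layer until the next repaint.
                g.FillEllipse(Brushes.Red, 50, 50, 100, 100);
            }
            ReleaseDC(listView, hdc);
        }
    }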
There is a solution to this problem, at least for Windows 8. I posted it in the form of an article on CodeProject, so you can read about it here:
http://www.codeproject.com/Articles/856020/Draw-behind-Desktop-Icons-in-Windows
This works for simple drawing, Windows Forms, WPF, DirectX, etc. The solution presented in that article is only for Windows 8.
Google-fu led me to this MSDN forum question:
http://social.msdn.microsoft.com/Forums/en/winformsdesigner/thread/c61d0705-d9ec-436a-b0a6-6ffa0ecec0cc
And this is a blog post regarding the major pitfalls of using GetDesktopWindow() or dealing with the desktop handle (as per your other question: C# Position Window On Desktop):
http://blogs.msdn.com/oldnewthing/archive/2004/02/24/79212.aspx
You also don't want to pass GetDesktopWindow() as your hwndParent. If you create a child window whose parent is GetDesktopWindow(), your window is now glued to the desktop window. If your window then calls something like MessageBox(), well that's a modal dialog, and then the rules above kick in and the desktop gets disabled and the machine is toast.
Anyway, I suspect that it probably CAN be done, but whether you should is another question.
I'm sure others have run into this problem too...
I often watch videos in a small VLC window while working on other tasks, but no matter where the window is placed, I eventually need to access something in the GUI behind it, and have to manually reposition the video window first.
This could be solved by having the VLC window snap to another corner whenever the mouse pointer is moved over it. I haven't found an app that does this, so I would like to write one. What technologies could I use to do this? Cross-platform might be harder, so what about just on Windows?
I'd prefer something in C# (or Python), but am willing to learn something new if need be.
Here is a Windows-only solution. You don't need to actually put the mouse over the window. All you need to do is find the window by its title and move it (for example with MoveWindow, which causes WM_MOVE to be sent to it). I don't know the title of the window VLC uses; you could use Spy++ to find it.
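As a rough illustration (the window caption "VLC media player" is a guess that you'd confirm with Spy++, and the target position/size are arbitrary):

    using System;
    using System.Runtime.InteropServices;

    class SnapVlc
    {
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

        [DllImport("user32.dll", SetLastError = true)]
        static extern bool MoveWindow(IntPtr hWnd, int x, int y, int width, int height, bool repaint);

        static void Main()
        {
            // "VLC media player" is a placeholder - check the real caption with Spy++ first.
            IntPtr vlc = FindWindow(null, "VLC media player");
            if (vlc == IntPtr.Zero) return;

            // Snap the window into the top-left corner; MoveWindow repositions it,
            // which in turn makes Windows send WM_MOVE to VLC.
            MoveWindow(vlc, 0, 0, 480, 270, true);
        }
    }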
This is a bit off-topic, but in Windows 7, shaking the active window will minimize the others to reveal the desktop (and so will clicking/hovering the rightmost taskbar button). Instead of hiding/moving VLC, you could just temporarily reveal the whole desktop. Shaking the active window again brings everything back.