I am currently making a little application that lets me use my Intuos Pro as a keyboard, since it is touch-enabled. Wacom has released an API that provides access to the touch data, so getting the core functionality working has not been a problem. I have hit a bit of a snag, though. The API lets you listen for data in two modes:
Consumer Mode means the application does not pass touch data on to other applications or to the driver for gesture recognition. It only receives data while its window has keyboard focus.
Observer Mode means the application passes touch data on to other applications, and it always receives data regardless of focus.
Here's my problem. The keyboard needs to be running all the time, but while I'm typing on the touchpad I don't want two-finger scrolls, tap-to-click, or any other gestures to fire. Yet if I'm typing into something, the thing I'm typing into has to have keyboard focus - not my application.
I can't really see the point of Observer Mode if there's no way to consume (destroy) the data so that gesture recognition doesn't get in the way. And in their FAQ, Wacom hints at the possibility of being able to destroy data in Observer Mode:
If the application chooses to pass the touch data through to the driver in Observer mode, then the tablet driver will interpret touch data and recognize gestures as appropriate for the tablet and operating system.
So I suspect a solution exists. I was wondering if anyone has had experience with this, or would be able to take a look and see if they can figure one out? I'm okay with something hacky if need be, as this is more of a personal project than anything else.
I am using their MTDN (.NET) variety in C# in Visual Studio 2013, and my application is currently a WPF application.
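For completeness, the key-injection half isn't the hard part and doesn't depend on Wacom at all: keystrokes synthesized with Win32 SendInput are delivered to whichever window currently has keyboard focus, so the keyboard app itself never needs focus. A minimal P/Invoke sketch (standard Win32, nothing Wacom-specific):

    using System;
    using System.Runtime.InteropServices;

    // Sketch: inject a key press into whatever window has keyboard focus.
    static class KeySender
    {
        const uint INPUT_KEYBOARD = 1;
        const uint KEYEVENTF_KEYUP = 0x0002;

        [StructLayout(LayoutKind.Sequential)]
        struct KEYBDINPUT
        {
            public ushort wVk;
            public ushort wScan;
            public uint dwFlags;
            public uint time;
            public IntPtr dwExtraInfo;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct MOUSEINPUT
        {
            public int dx, dy;
            public uint mouseData, dwFlags, time;
            public IntPtr dwExtraInfo;
        }

        // INPUT is a tagged union; overlay the two variants so the struct
        // has the size Windows expects on both x86 and x64.
        [StructLayout(LayoutKind.Explicit)]
        struct InputUnion
        {
            [FieldOffset(0)] public MOUSEINPUT mi;
            [FieldOffset(0)] public KEYBDINPUT ki;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct INPUT
        {
            public uint type;
            public InputUnion U;
        }

        [DllImport("user32.dll", SetLastError = true)]
        static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

        // Press and release one virtual key (e.g. 0x41 for 'A').
        public static void Tap(ushort virtualKey)
        {
            var down = new INPUT { type = INPUT_KEYBOARD };
            down.U.ki.wVk = virtualKey;

            var up = down;
            up.U.ki.dwFlags = KEYEVENTF_KEYUP;

            SendInput(2, new[] { down, up }, Marshal.SizeOf(typeof(INPUT)));
        }
    }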
First a little background info on the problem:
My kiosk application must block user access to the computer by putting up a full-screen image blocking all windows and the desktop. Users must not be able to get around this block. I can easily do this by putting a full-screen window up, OR by creating a new "virtual desktop" and switching to it. This is the easy part. Let's call this up-front image/window/desktop THE BLOCKER.
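For reference, the window flavour of the BLOCKER is just a borderless, topmost, maximized window; a minimal WPF sketch (kiosk hardening - Alt+Tab, Ctrl+Alt+Del, and so on - is a separate problem):

    using System.Windows;
    using System.Windows.Input;
    using System.Windows.Media;

    // Minimal BLOCKER sketch: call from inside a running WPF Application.
    public static class Blocker
    {
        public static Window Show()
        {
            var blocker = new Window
            {
                WindowStyle = WindowStyle.None,      // no title bar or borders
                ResizeMode = ResizeMode.NoResize,
                WindowState = WindowState.Maximized, // cover the whole screen
                Topmost = true,                      // stay above normal windows
                ShowInTaskbar = false,
                Background = Brushes.Black,
                Cursor = Cursors.None                // hide the mouse cursor too
            };
            blocker.Show();
            return blocker;
        }
    }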
What I need help with is allowing a remote desktop user or VNC user to operate the machine behind the BLOCKER, hidden from the user standing in front of the machine. I do not have a video switch involved (although if there is a cheap one that is remote controllable I might be interested...I need 12 of them). What I really want is a software solution.
VNC clients only show the current input desktop and do not have the option to ignore certain windows (the BLOCKER), so they don't seem useful for this. I do not know whether RDP allows you to log in to a hidden desktop or whether it can operate behind the BLOCKER.
If anyone knows how this could be accomplished either via commercial software or knows of a software library that does something like this (we use .NET as our dev platform), I would appreciate the help.
I would like to use a monitor which is actually marked "disconnected" in the Windows Control Panel under "Change display settings". (I do NOT mean a physically disconnected monitor.)
I know how to add a second monitor in Windows and make it part of desktop. I also know how to make my application run on a primary or on secondary monitor when they are part of desktop.
I have a piece of equipment attached to the PC which has a touch screen on it. The touch screen is connected to the PC over USB, appears as an ordinary USB monitor, and I can make it part of my Windows desktop. But that is not what I want.
What I would like to do is make sure that only one special application can run on this monitor. I also do not want a Windows desktop on it, because then the user could move any window onto it, which is not what I want. The idea behind all this is to use the touch screen for an application which controls this external piece of equipment. The user would only have to power on the PC, not log in. I was thinking about starting the app from a Windows service before the Windows desktop is loaded. And once the user logs in, I do not want him to be able to use the touch screen for anything else except this special application. That is why the touch screen must not be part of the Windows desktop but "deactivated".
I am using .NET 4.0 and C# for my application, but I will use C++ or whatever comes in handy.
Any help or idea is appreciated. Thank you!
It seems WDDM does not support independent displays any more. Here are a few links in case somebody wishes to take a look for himself:
(old MSDN link) = /windows/win32/gdi/multiple-display-monitors
(old MSDN link) = /windows/win32/gdi/using-multiple-monitors-as-independent-displays
The important part is this note from the second link:
Using other monitors as independent displays isn't supported on drivers that are implemented to the Windows Display Driver Model (WDDM).
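For anyone who wants to see what the system reports, monitors that are not attached to the desktop still show up when you enumerate display devices; a small P/Invoke sketch:

    using System;
    using System.Runtime.InteropServices;

    // Sketch: list display devices and whether each is attached to the desktop.
    class DisplayList
    {
        const uint DISPLAY_DEVICE_ATTACHED_TO_DESKTOP = 0x1;

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct DISPLAY_DEVICE
        {
            public int cb;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public string DeviceName;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)] public string DeviceString;
            public uint StateFlags;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)] public string DeviceID;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)] public string DeviceKey;
        }

        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern bool EnumDisplayDevices(string lpDevice, uint iDevNum,
                                              ref DISPLAY_DEVICE lpDisplayDevice, uint dwFlags);

        static void Main()
        {
            var dd = new DISPLAY_DEVICE { cb = Marshal.SizeOf(typeof(DISPLAY_DEVICE)) };
            for (uint i = 0; EnumDisplayDevices(null, i, ref dd, 0); i++)
            {
                bool onDesktop = (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) != 0;
                Console.WriteLine("{0} ({1}) attached to desktop: {2}",
                                  dd.DeviceName, dd.DeviceString, onDesktop);
            }
        }
    }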
I have been looking around for this but I can't find out if it is possible or not. I have some software that uses LogMeIn Rescue. As it stands, the software connects to our servers to get a PIN generated by LogMeIn's PHP API. This PIN is then sent back to the application, which posts the form with the PIN at logmein123.com. That downloads the responder application, which runs so that our support techs can access the hardware. The problem is that the software actively cuts off control of the Windows OS behind the application: no taskbar, no Start button, no driver installation; so the software then has to provide a way for our support to re-enable everything.
So I'm looking for a way to leave the display available to the LogMeIn software on the tills but stop the screen from showing anything. Is there a way I can tell the screen to turn off without stopping the display from working? The hardware is powered by a Windows 7-based OS. If this is possible, could someone point me to a tutorial on how to do such a thing?
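Turning the screen off without disabling the display is exactly what the SC_MONITORPOWER system command does; a minimal sketch (note that any local mouse or keyboard input will wake the screen again, so you may need to swallow local input as well):

    using System;
    using System.Runtime.InteropServices;

    // Sketch: put the monitor into its off state while the display itself
    // stays active, so a LogMeIn session keeps rendering normally.
    static class MonitorPower
    {
        const int WM_SYSCOMMAND   = 0x0112;
        const int SC_MONITORPOWER = 0xF170;
        const int MONITOR_OFF     = 2;   // 1 = low power, -1 = back on

        static readonly IntPtr HWND_BROADCAST = new IntPtr(0xFFFF);

        [DllImport("user32.dll")]
        static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

        public static void TurnOff()
        {
            SendMessage(HWND_BROADCAST, WM_SYSCOMMAND,
                        (IntPtr)SC_MONITORPOWER, (IntPtr)MONITOR_OFF);
        }
    }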
I'm making a charity Windows Mobile 6 app in C# to help those affected by Alzheimer's.
The aim is for this app to let the carer set a boundary by tapping points in Google Maps. The carer would then put the Windows Mobile device in the patient's handbag or coat, so that when the patient walks out on their own, thinking that they are "going home", the carer receives an SMS text with their position, heading and speed.
However, I don't know how to...
Switch from my app to Google Maps for Mobile
Tap to select points
Import the coordinates of those points into my C# program
Use the coordinates to calculate the boundary (see the sketch after this list)
Send the text with the position information
Switch back to my C# program
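For point 4, once you have the boundary points, the standard ray-casting point-in-polygon test is enough to decide whether a GPS fix is still inside the boundary. A sketch (it treats lat/long as planar, which is fine for a boundary a few kilometres across):

    // Ray-casting point-in-polygon test over lat/long vertices.
    static bool IsInsideBoundary(double lat, double lon,
                                 double[] boundaryLat, double[] boundaryLon)
    {
        bool inside = false;
        int n = boundaryLat.Length;
        for (int i = 0, j = n - 1; i < n; j = i++)
        {
            // Does the edge (j -> i) straddle the test point's longitude,
            // and does a ray from the point cross it?
            bool straddles = (boundaryLon[i] > lon) != (boundaryLon[j] > lon);
            if (straddles &&
                lat < (boundaryLat[j] - boundaryLat[i]) * (lon - boundaryLon[i]) /
                      (boundaryLon[j] - boundaryLon[i]) + boundaryLat[i])
            {
                inside = !inside;
            }
        }
        return inside;
    }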
HTC's HD2 comes with a compass app that offers this "tap to select a point, then return to the app" functionality, so surely it's possible for us too?
If anyone would be able to give me a hand with this I would be EXTREMELY grateful, as this will help all those affected by Alzheimer's and other similar conditions. My Gran, for example, recently started trying to walk back to the property she lived in 20 years ago...
Thanks everyone! This means sooo much! I'll even come and buy you a drink to say thanks!
James
Whatever technical issues you're considering, I think you should realize that this type of usage is, AFAICS, contrary to the Google Maps terms of service. See:
http://code.google.com/apis/maps/terms.html
That is, you may only use the Google Maps content if it's accessible to everybody, not just whoever you hand your program out to:
Your Maps API Implementation must be generally accessible to users without charge.
If you're building it as a web app, it must be accessible through the internet, not an intranet:
[your Maps API Implementation must not:] operate only behind a firewall or only on an internal network (except during the development and testing phase).
Some of the terms in header 10 also seem applicable:
[you must not (nor may you permit anyone else to):]
10.8 use the Static Maps API other than in an implementation in a web browser;
10.9 use the Service or Content with any products, systems, or applications for or in connection with:
(a) real time navigation or route guidance, including but not limited to turn-by-turn route guidance that is synchronized to the position of a user's sensor-enabled device;
Why would you want to kludge something together like that? Trying to have your app interface with another application for which you don't have source, whether it's Google Maps for Mobile or anything else, is difficult and should only be done as a last resort.
If this app is going to be free and not require users to log in, you can use the Bing Maps Web Service API directly from your application without cost. You could then use built-in GPS through the GPSID APIs as well, and you'd have control over what data goes where, what maps to draw, etc.
This seems like a much easier path to achieve what you're after.
As a side note, I gave a link above for the GPSID sample from Microsoft. I'd recommend looking at it and the native GPSID APIs, but the managed wrapper Microsoft provided is, IMO, pure garbage, so you might consider wrapping the lower-level APIs yourself.
To restate the problem I believe you're trying to solve:
You have a use case where a carer will set up a "virtual boundary" on a device. If that device leaves the bounded area, you'd like an alert sent via SMS to a predefined recipient, saying where that device is.
My suggestion would be to use something like OpenStreetMap maps (as they're free) when setting up the virtual boundary. For their tiles (each 256 px square), there is a relatively trivial method for converting between lat/long and pixel co-ordinates.
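For what it's worth, that conversion is the standard "slippy map" formula documented on the OpenStreetMap wiki; in C#:

    using System;

    // Lat/long to global pixel coordinates at a given zoom level.
    // Divide the results by 256 to get tile numbers.
    static void LatLonToPixel(double lat, double lon, int zoom,
                              out double px, out double py)
    {
        double world = 256.0 * (1 << zoom);   // world size in pixels at this zoom
        double latRad = lat * Math.PI / 180.0;
        px = (lon + 180.0) / 360.0 * world;
        py = (1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI)
             / 2.0 * world;
    }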
You might also be able to do what you want by cannibalising one of their existing Windows Mobile applications intended for surveying, such as OSMtracker, which already includes the map controls, downloads and the like, leaving just point 5 and part of point 4 on your list to tackle.
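Point 5 is nearly free on Windows Mobile 6 through the managed PocketOutlook API (the phone number and message text below are placeholders):

    // Requires a reference to Microsoft.WindowsMobile.PocketOutlook.dll (WM6 SDK).
    using Microsoft.WindowsMobile.PocketOutlook;

    static void SendBoundaryAlert(double lat, double lon)
    {
        // Recipient number is a placeholder; position comes from the GPS fix.
        var sms = new SmsMessage("+441234567890",
            string.Format("Alert: device left the boundary at {0:F4},{1:F4}", lat, lon));
        sms.Send();
    }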