I am trying to detect how hard someone is pushing the WP7 screen for a drawing application. Is there a way to detect how big the surface area is where the screen is being touched? I reckon that would be a reasonably accurate way to determine how hard the screen is being touched: a light touch would have a small touch surface area, while a hard press would have a bigger touch area.
Has anyone ever tried something like this?
You can determine this with a very simple equation.
Pressure = Force / Area
To solve this, you would need to know at least two of the variables. Suppose you can find the area from the phone's sensors: you would still need to know the pressure in order to calculate the force, in other words, how hard the user is pressing the screen.
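If you want to experiment with the area side of that equation, Silverlight's low-level touch API does expose a Size on each touch point. Here is a minimal sketch; whether WP7 hardware actually reports a meaningful contact size, rather than a constant, is an assumption you would have to verify on a real device:

```csharp
using System.Windows.Input;
using Microsoft.Phone.Controls;

public partial class MainPage : PhoneApplicationPage
{
    public MainPage()
    {
        InitializeComponent();
        // Low-level touch input; raised once per touch frame.
        Touch.FrameReported += OnFrameReported;
    }

    private void OnFrameReported(object sender, TouchFrameEventArgs e)
    {
        foreach (TouchPoint tp in e.GetTouchPoints(this))
        {
            // Size is the reported contact rectangle. On some hardware
            // this is a constant (e.g. 1x1), in which case it carries
            // no pressure information at all.
            double area = tp.Size.Width * tp.Size.Height;
            // e.g. scale the brush width by area in your drawing code.
        }
    }
}
```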
Hope this helps you!
I'm working on a tiny piece of software (C# and WPF) that does some filtering of the screen output. Examples of the filtering I'm talking about:
Customizable blue light filtering.
Increasing/decreasing screen brightness (by changing the colors, not the actual brightness)
Inverting screen colors.
And some more; the basic idea is to tweak the pixel values of the screen after everything else has finished rendering, but before the result is shown.
Q1: How can I actually access (and then edit) the screen image/frame?
I've heard of getting and editing the screen DC directly (Win32), but I don't know whether that's a good approach, nor how to do it exactly.
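From what I can tell, the managed equivalent of the screen-DC route is Graphics.CopyFromScreen. A minimal sketch of what I mean; note it copies the screen rather than intercepting rendering, so it would have to run every frame:

```csharp
using System.Drawing;
using System.Windows.Forms; // reference needed for Screen, even from WPF

static Bitmap CaptureScreen()
{
    Rectangle bounds = Screen.PrimaryScreen.Bounds;
    var bmp = new Bitmap(bounds.Width, bounds.Height);
    using (Graphics g = Graphics.FromImage(bmp))
    {
        // Blits the visible desktop (a BitBlt from the screen DC
        // under the hood) into the bitmap.
        g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
    }
    return bmp;
}
```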
Q2: What is the best way of applying these filters?
I think that tweaking every single pixel individually is a really bad way of doing it. (My code runs on the CPU and this process needs to be done every frame!)
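To illustrate what I mean by not tweaking every pixel individually: GDI+ can at least apply a whole-image color transform in a single draw call via a ColorMatrix. Here is a sketch of an invert filter done that way (brightness or blue-light filters would just be different matrix values), though I don't know whether it is fast enough per frame:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static void ApplyInvert(Bitmap source, Graphics destination)
{
    // 5x5 matrix: negate R, G and B, then translate each by 1 to invert.
    var invert = new ColorMatrix(new float[][]
    {
        new float[] { -1,  0,  0, 0, 0 },
        new float[] {  0, -1,  0, 0, 0 },
        new float[] {  0,  0, -1, 0, 0 },
        new float[] {  0,  0,  0, 1, 0 },
        new float[] {  1,  1,  1, 0, 1 },
    });

    using (var attrs = new ImageAttributes())
    {
        attrs.SetColorMatrix(invert);
        var dest = new Rectangle(0, 0, source.Width, source.Height);
        // A single draw call applies the matrix to every pixel.
        destination.DrawImage(source, dest, 0, 0, source.Width,
            source.Height, GraphicsUnit.Pixel, attrs);
    }
}
```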
Q3: Is there an event or hook for when Windows is refreshing/updating the screen?
If yes, how can I register this process to it?
Note: I want a really performant way so that I don't lose any frame rate if possible.
Note: My preferred language for this is C#; C++ is OK.
Thanks a ton.
I've been trying to figure out how to use the RTS plugin from the Unity Asset Store, but there seems to be a problem. I have no idea why this is happening, but the z and y values are mixed up. I can scroll left and right perfectly, but whenever I press "W" the screen zooms out instead of moving upwards. The same applies to scrolling: when I scroll, the screen moves upwards/downwards and doesn't zoom like it's supposed to. I tried creating my own RTS camera via Brackeys and the same thing happened: his game would zoom, mine would just move upwards. I'm not sure what's wrong. I'm fairly new to all this Unity jazz. Any help would be appreciated.
It's a little hard to know exactly why this is happening with the information you have provided, but if I were you, the first thing I would check is your Input Manager! Have you made sure your inputs correlate to exactly what you want?
It sounds like your settings may not be the defaults, which would cause the differences.
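For comparison, here is a minimal sketch of how an RTS camera usually maps those inputs: the Vertical axis (W/S) pans along the world XZ plane and the scroll wheel zooms along Y. The axis names are the Input Manager defaults, and the speed constants are made up:

```csharp
using UnityEngine;

public class SimpleRtsCamera : MonoBehaviour
{
    public float panSpeed = 20f;   // made-up value, tune to taste
    public float zoomSpeed = 200f; // made-up value, tune to taste

    void Update()
    {
        // W/S and A/D pan across the ground plane (XZ), not up/down.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.Translate(new Vector3(h, 0f, v) * panSpeed * Time.deltaTime,
            Space.World);

        // The scroll wheel changes height (zoom), not forward position.
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        transform.Translate(Vector3.down * scroll * zoomSpeed * Time.deltaTime,
            Space.World);
    }
}
```

If W zooms instead of panning in your project, that mapping (or the plugin's equivalent of it) has the axes swapped somewhere.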
I am developing a WPF app that uses Kinect v2, and I use the hand to simulate the mouse. It works, but I have a little problem: when I close the hand to simulate a click, the cursor drops its position a little relative to when the hand was open, and sometimes it ends up clicking the wrong button or place.
Any ideas on how can I solve this?
I already tried to track the wrist and the thumbs instead of the hand but the problem still happens.
Thanks!
Here are some ideas:
Filter and smooth the hand position data a bit more. For a UI/menu system, some latency should be acceptable, since a menu doesn't need low latency as much as other uses do (see the sketch after this list).
Modify the hand position based on the hand's open/close state. Introduce a constant to bump the hand position up when the hand is closed, with appropriate smoothing to make this feel and look correct.
Keep a list of hand positions and use the data from a few frames before (though it might be tricky to get this to feel and look correct).
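For the first two ideas, here is a minimal sketch of an exponential smoothing filter with a closed-hand offset; the 0.7 weight and the 15-pixel lift are made-up starting values you would tune on the device:

```csharp
// Smooths a 2D cursor position with an exponential moving average and
// lifts it while the hand is closed, to cancel the downward drop.
public class HandCursorFilter
{
    private double _x, _y;
    private bool _initialized;

    private readonly double _smoothing;      // 0 = raw input, 1 = frozen
    private readonly double _closedHandLift; // in pixels, tune on device

    public HandCursorFilter(double smoothing, double closedHandLift)
    {
        _smoothing = smoothing;
        _closedHandLift = closedHandLift;
    }

    public void Update(double rawX, double rawY, bool handClosed,
                       out double x, out double y)
    {
        if (!_initialized)
        {
            _x = rawX;
            _y = rawY;
            _initialized = true;
        }

        // Exponential moving average: keep most of the previous position
        // and blend in a little of the new one, which damps jitter.
        _x = _smoothing * _x + (1 - _smoothing) * rawX;
        _y = _smoothing * _y + (1 - _smoothing) * rawY;

        x = _x;
        // Compensate for the drop that closing the hand introduces.
        y = handClosed ? _y - _closedHandLift : _y;
    }
}
```

Feed it the hand position you have already mapped to screen coordinates once per body frame, with handClosed set from something like body.HandRightState == HandState.Closed, starting from new HandCursorFilter(0.7, 15).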
As a note, also consider these points:
Use bigger buttons. Buttons should have appropriate spacing, placement, and sizes. The app's UI should be specifically designed for a Kinect application.
Use a different gesture for a mouse click, such as push or press, which is the recommended approach in the Kinect Human Interface Guidelines 2.0.
I am completely new to Windows Phone development. I am planning to develop a Windows Phone 8 app that uses the camera to measure an object's dimensions, i.e. its height, width, distance from the phone, etc.
Is there any way in Windows Phone that makes this possible? I am fed up with searching for the topic, still with no results in hand. Please tell me, is it possible? If yes, how shall I proceed, and which APIs and methods would I use?
Your help would be greatly appreciated.
Thanks in advance.
As far as I know, Windows Phone doesn't include a sensor that can directly measure distances, as explained in this MSDN article:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202968%28v=vs.105%29.aspx
However, with a clever use of trigonometry you might be able to combine the sensors' capabilities to do so (see the sketch after the links below).
Here is the class library documentation for each sensor:
Gyroscope:
http://msdn.microsoft.com/library/windowsphone/develop/microsoft.devices.sensors.gyroscope.aspx
Compass:
http://msdn.microsoft.com/library/windowsphone/develop/microsoft.devices.sensors.compass.aspx
And Accelerometer:
http://msdn.microsoft.com/library/windowsphone/develop/microsoft.devices.sensors.accelerometer.aspx
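To make the trigonometry concrete: if the user holds the phone at a known height h above the ground and tilts it until the camera's center line points at the base of the object, the accelerometer gives you the tilt angle θ below the horizontal, and the ground distance is d = h / tan(θ). A sketch under those assumptions; the known-height requirement and the pitch extraction are simplifications a real app would need to calibrate:

```csharp
using System;

static class DistanceEstimator
{
    // Estimates the horizontal distance to the point on the ground the
    // camera's center line is aimed at.
    //   phoneHeightMeters: assumed height at which the user holds the phone.
    //   pitchRadians: tilt of the camera axis below the horizontal,
    //                 derived from the accelerometer's gravity vector;
    //                 must be greater than zero (tilted downward).
    public static double EstimateDistance(double phoneHeightMeters,
                                          double pitchRadians)
    {
        // Right triangle: tan(pitch) = height / distance,
        // so distance = height / tan(pitch).
        return phoneHeightMeters / Math.Tan(pitchRadians);
    }
}
```

For example, a phone held 1.5 m up and tilted 20° downward gives roughly 1.5 / tan(20°) ≈ 4.1 m.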
Best of luck!
To measure distance, you need to use simple mathematics plus data from the accelerometer.
The answer appears currently to be "no", but contrary to what others (jimpanzer, for example) would seem to indicate, this should be possible completely without any new hardware being added to the phone. The act of focusing your camera on an object would normally give the camera information on the distance to the subject focused on.
Take a look at any SLR lens of some quality: once you have focused on something, the distance scale on the lens will tell you how far away the subject is. This distance gets less and less accurate the further away the object is, but it should be possible for the camera to tell you approximately how far away a focused object is.
So, I guess the answer, for those of us who would find this immensely useful, is to ask Microsoft/Nokia etc to provide this information in the camera API.
I have a C# application that has an existing WinForm that I now need to display upside down.
The application will be displayed on a touchscreen Windows 7 device. If two people are using the device, one person is viewing it right-side-up while another user will be simultaneously viewing it upside-down. I will need to have one control displayed right-side-up while another control is displayed upside-down, the two being duplicates of the same form. Both need to be functional. It is not necessary for the title bar and the Windows close, maximize, and minimize buttons to be rotated.
Is there a way to easily rotate this Form and all of its contents without having to rewrite it from scratch?
Unfortunately, rotating controls is not directly possible in WinForms.
At least, not if you want them to retain their functionality. It would be relatively simple to draw the control into a bitmap, rotate the bitmap, and then draw that back to the desired location on the form. But you would obviously lose the ability to interact with the controls. They would just be static representations of their original selves.
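Here is a minimal sketch of that snapshot approach, just so it's clear what you would and wouldn't get; the result is a picture, not a working control:

```csharp
using System.Drawing;
using System.Windows.Forms;

static Bitmap RenderUpsideDown(Control control)
{
    // Render the control (including its children) into a bitmap.
    var bmp = new Bitmap(control.Width, control.Height);
    control.DrawToBitmap(bmp, new Rectangle(0, 0, bmp.Width, bmp.Height));

    // Rotate the snapshot 180 degrees; all interactivity is lost.
    bmp.RotateFlip(RotateFlipType.Rotate180FlipNone);
    return bmp;
}
```

You could then paint the result into a PictureBox or in an OnPaint override.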
But making functional upside-down controls just isn't going to happen. I mean, you could try to write a bunch of custom drawing code for owner-drawn controls, but you'll still run into a bunch of bugs, corner cases, and compatibility problems. The Win32 controls that WinForms is based on just don't support this. No big surprise, really, considering they were invented some 20–25 years before anyone thought of computer screens that you could carry around in your pocket and rotate in any direction. There is a good reason that UI technologies like WPF came out around the time that touch screens and tablets did.
There are some possibilities that can be explored when it comes to flipping the entire screen, but that's not going to help when you want different controls going different directions. (And I guess it betrays my vantage point as a desktop app guy when I say this, but that just sounds like an incredibly confusing UI.)
If you absolutely have to have this, someone else is going to have to give you another route to hack it, perhaps along the lines of Dhawalk's comment: hosting the WinForms control inside of a WPF app that does provide built-in support for rotated controls. I don't know enough about this to make any concrete suggestions down that path. From a few minutes of searching, it appears that WindowsFormsHost does not actually support rotation transforms, so this may be a non-starter anyway.
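For reference, this is what the built-in support looks like on the WPF side; a rotated WPF element stays fully interactive, which is exactly what Win32 controls can't do. A sketch only, using a plain WPF element rather than WinForms content:

```csharp
using System.Windows;
using System.Windows.Media;

static void FlipUpsideDown(FrameworkElement element)
{
    // A transformed WPF element keeps full hit-testing and input,
    // unlike a Win32/WinForms control (or WindowsFormsHost content).
    element.LayoutTransform = new RotateTransform(180);
}
```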