I am completely new to Windows Phone development. I am planning to develop a Windows Phone 8 app that uses the camera to measure an object's dimensions, i.e. its height, width, distance from the phone, etc.
Is there any way to do this on Windows Phone? I have searched the topic thoroughly but still have no results in hand. Please tell me: is it possible? If yes, how should I proceed, and which APIs and methods would I use?
Your help would be greatly appreciated.
Thanks in advance.
As far as I know, Windows Phone doesn't include a sensor that can directly measure distance, as explained in this MSDN article:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202968%28v=vs.105%29.aspx
However, with a clever use of trigonometry you might be able to combine the sensors' capabilities to estimate it.
Here is the class library documentation for each sensor:
Gyroscope:
http://msdn.microsoft.com/library/windowsphone/develop/hh202968%28v=vs.105%29.aspx
Compass:
msdn.microsoft.com/library/windowsphone/develop/microsoft.devices.sensors.compass.aspx
And Accelerometer:
msdn.microsoft.com/library/windowsphone/develop/microsoft.devices.sensors.accelerometer.aspx
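To make the trigonometry idea concrete, here is a minimal C# sketch of one possible approach: read the accelerometer to get the phone's tilt and estimate the distance to the point on the floor the camera is aimed at, given a known holding height. The holding height, the portrait-orientation assumption, and the axis handling are all assumptions you would need to verify on a real device.

    using System;
    using Microsoft.Devices.Sensors;   // Windows Phone sensor APIs

    public class DistanceEstimator
    {
        // Assumption: the phone is held at a known height (metres) above the floor
        // and the camera is aimed at the point where the object meets the floor.
        private const double PhoneHeightMetres = 1.5;   // hypothetical value

        private readonly Accelerometer accelerometer = new Accelerometer();

        public double LastDistanceMetres { get; private set; }

        public void Start()
        {
            accelerometer.CurrentValueChanged += OnReadingChanged;
            accelerometer.Start();
        }

        private void OnReadingChanged(object sender, SensorReadingEventArgs<AccelerometerReading> e)
        {
            var a = e.SensorReading.Acceleration;   // gravity vector in phone coordinates

            // Rough tilt of the camera axis below the horizontal, assuming portrait
            // orientation (flat face-up reads roughly Z = -1, upright roughly Y = -1).
            double tilt = Math.Atan2(-a.Z, Math.Sqrt(a.X * a.X + a.Y * a.Y));

            if (tilt > 0.01)   // avoid division by ~zero when aiming at the horizon
            {
                // Simple right-triangle estimate: distance = height / tan(tilt).
                LastDistanceMetres = PhoneHeightMetres / Math.Tan(tilt);
            }
        }
    }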
Best of luck!
To measure distance you need to use simple mathematics plus data from the accelerometer.
The answer currently appears to be "no", but contrary to what others (jimpanzer, for example) seem to indicate, this should be possible without any new hardware being added to the phone. The act of focusing the camera on an object would normally give the camera information about the distance to the subject in focus.
Take a look at any SLR lens of reasonable quality: once you have focused on something, the distance scale on the lens tells you how far away the subject is. This reading gets less and less accurate the further away the object is, but the camera should still be able to tell you approximately how far away a focused object is.
So, I guess the answer, for those of us who would find this immensely useful, is to ask Microsoft/Nokia etc to provide this information in the camera API.
Related
I'm making an app and I want to restrict it to the United States via GPS on the user's phone. (I know a permission will have to be added to get access to GPS.) Geolocking on Google Play doesn't really do anything because of the easy access to VPNs that bypass it.
How would I go about getting their location via GPS and making sure they are ACTUALLY in the United States?
This game is being coded in Unity in C#. I don't expect full-on copy-paste code but more of a direction to be pointed towards, whether it's documentation or other examples that you have seen. Just a nice little nudge in the right direction.
GPS is a polar coordinate system. Just find the min/max ranges you want your users to be restricted to and treat it like a bounding box (but, obviously, with polar coordinates).
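For illustration, here is a minimal Unity C# sketch of that bounding-box check; the latitude/longitude limits are rough assumed values for the contiguous United States (they ignore Alaska and Hawaii), so adjust them as needed.

    using UnityEngine;

    public static class UsRegionCheck
    {
        // Rough bounding box for the contiguous United States (assumed values; tune as needed).
        const float MinLat = 24.5f, MaxLat = 49.4f;
        const float MinLon = -125.0f, MaxLon = -66.9f;

        // Call this after Input.location.Start() has reached LocationServiceStatus.Running.
        public static bool IsInUnitedStates()
        {
            if (Input.location.status != LocationServiceStatus.Running)
                return false;   // no fix yet, or the user denied location access

            LocationInfo loc = Input.location.lastData;
            return loc.latitude  >= MinLat && loc.latitude  <= MaxLat &&
                   loc.longitude >= MinLon && loc.longitude <= MaxLon;
        }
    }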
That said, given that spoofing a GPS is even easier than spoofing a Geo-IP, I have to ask, why is this such a big deal for you?
Spoofing GPS is easy, starting with a long list of Fake GPS apps and all the way to actually transmitting RF GPS signals using a software-defined radio and GitHub's gps-sdr-sim. When spoofing is done right, it is not trivial to know you are being spoofed. There are several software solutions out there trying to solve this problem - a quick Google search will give you some options.
I'm working on a small WPF desktop app to track a robot. I have a Kinect for Windows on my desk and I was able to implement the basic features and run the depth camera stream and the RGB camera stream.
What I need is to track a robot on the floor, but I have no idea where to start. I found out that I should use EMGU (an OpenCV wrapper).
What I want to do is track the robot and find its location using the depth camera. Basically, it's for localization of the robot using stereo triangulation. Then, using TCP over Wi-Fi, I will send the robot commands to move it from one place to another using both the RGB and depth cameras. The RGB camera will also be used to map the objects in the area so that the robot can take the best path and avoid them.
The problem is that I have never worked with computer vision before; this is actually my first attempt. I'm not tied to a deadline and I'm more than willing to learn all the related material to finish this project.
I'm looking for details, explanation, hints, links or tutorials to achieve my need.
Thanks.
Robot localization is a very tricky problem and I myself have been struggling with it for months now. I can tell you what I have achieved, but you have a number of options:
Optical Flow Based Odometry (also known as visual odometry):
Extract keypoints or features from one image (I used Shi-Tomasi, via cvGoodFeaturesToTrack)
Do the same for a consecutive image
Match these features (I used Lucas-Kanade)
Extract depth information from Kinect
Calculate transformation between two 3D point clouds.
What the above algorithm does is estimate the camera motion between two frames, which tells you the position of the robot.
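As a rough illustration of steps 4 and 5, here is a sketch that back-projects matched pixels into 3D camera space using the depth image and a pinhole model, then estimates the translation between the two point clouds. The intrinsics are placeholder values, and a real implementation would also estimate rotation (e.g. with Horn's method or an ICP step).

    using System;
    using System.Collections.Generic;

    public struct Point3
    {
        public double X, Y, Z;
        public Point3(double x, double y, double z) { X = x; Y = y; Z = z; }
    }

    public static class VisualOdometrySketch
    {
        // Placeholder depth-camera intrinsics (assumed values; calibrate your own device).
        const double Fx = 571.26, Fy = 571.26, Cx = 320.0, Cy = 240.0;

        // Back-project a pixel (u, v) with depth in metres into camera space.
        public static Point3 BackProject(double u, double v, double depth)
        {
            return new Point3((u - Cx) * depth / Fx, (v - Cy) * depth / Fy, depth);
        }

        // Very rough motion estimate: translation between the matched point clouds.
        // It ignores rotation, which a real visual-odometry system must also estimate.
        public static Point3 EstimateTranslation(IList<Point3> previous, IList<Point3> current)
        {
            if (previous.Count == 0 || previous.Count != current.Count)
                throw new ArgumentException("Need the same non-zero number of matched points.");

            double dx = 0, dy = 0, dz = 0;
            for (int i = 0; i < previous.Count; i++)
            {
                dx += current[i].X - previous[i].X;
                dy += current[i].Y - previous[i].Y;
                dz += current[i].Z - previous[i].Z;
            }
            return new Point3(dx / previous.Count, dy / previous.Count, dz / previous.Count);
        }
    }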
Monte Carlo Localization: This is rather simpler, but you should also use wheel odometry with it.
Check this paper out for a C#-based approach.
The method above uses probabilistic models to determine the robot's location.
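To give a feel for how Monte Carlo Localization works, here is a heavily simplified particle-filter sketch in C#. The motion-noise values and the likelihood function are hypothetical placeholders; in a real system the likelihood would compare an expected Kinect range scan against the measured one.

    using System;
    using System.Linq;

    public class Particle { public double X, Y, Theta, Weight; }

    public class MonteCarloLocalizer
    {
        readonly Particle[] particles;
        readonly Random rng = new Random();

        public MonteCarloLocalizer(int count, double worldWidth, double worldHeight)
        {
            // Start with particles spread uniformly over the map.
            particles = Enumerable.Range(0, count).Select(_ => new Particle
            {
                X = rng.NextDouble() * worldWidth,
                Y = rng.NextDouble() * worldHeight,
                Theta = rng.NextDouble() * 2 * Math.PI,
                Weight = 1.0 / count
            }).ToArray();
        }

        // Motion update: apply wheel-odometry deltas plus noise to every particle.
        public void Predict(double forward, double turn)
        {
            foreach (var p in particles)
            {
                p.Theta += turn + Gaussian(0.02);
                double d = forward + Gaussian(0.05);
                p.X += d * Math.Cos(p.Theta);
                p.Y += d * Math.Sin(p.Theta);
            }
        }

        // Measurement update: weight each particle by how well the observation fits
        // (the likelihood function is supplied by you), then resample.
        public void Update(Func<Particle, double> likelihood)
        {
            double total = 0;
            foreach (var p in particles) { p.Weight = likelihood(p); total += p.Weight; }
            if (total <= 0) { foreach (var p in particles) p.Weight = 1.0 / particles.Length; return; }
            foreach (var p in particles) p.Weight /= total;
            Resample();
        }

        void Resample()
        {
            var cumulative = new double[particles.Length];
            double sum = 0;
            for (int i = 0; i < particles.Length; i++) { sum += particles[i].Weight; cumulative[i] = sum; }

            var next = new Particle[particles.Length];
            for (int i = 0; i < particles.Length; i++)
            {
                double r = rng.NextDouble();
                int j = Array.FindIndex(cumulative, c => c >= r);
                var src = particles[j < 0 ? particles.Length - 1 : j];
                next[i] = new Particle { X = src.X, Y = src.Y, Theta = src.Theta, Weight = 1.0 / particles.Length };
            }
            Array.Copy(next, particles, particles.Length);
        }

        double Gaussian(double sigma)
        {
            // Box-Muller transform for zero-mean Gaussian noise.
            double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
            return sigma * Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
        }
    }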
The sad part is that even though C++ libraries exist to do what you need very easily, wrapping them for C# is a herculean task. If you can write a wrapper, however, then 90% of your work is done; the key libraries to use are PCL and MRPT.
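If you do go the wrapper route, the usual pattern is to expose a flat C API around the C++ library and P/Invoke it from C#. A minimal sketch follows; note that pcl_icp_align and my_pcl_wrapper.dll are purely hypothetical names for a wrapper you would write yourself, not part of PCL's actual API.

    using System;
    using System.Runtime.InteropServices;

    static class NativePclWrapper
    {
        // Hypothetical C export you would write yourself around PCL's ICP code,
        // compiled into my_pcl_wrapper.dll. PCL does not ship this function.
        [DllImport("my_pcl_wrapper.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int pcl_icp_align(
            float[] sourceXyz, int sourceCount,
            float[] targetXyz, int targetCount,
            float[] outTransform4x4);

        public static float[] Align(float[] sourceXyz, float[] targetXyz)
        {
            var transform = new float[16];   // row-major 4x4 matrix filled in by native code
            int result = pcl_icp_align(sourceXyz, sourceXyz.Length / 3,
                                       targetXyz, targetXyz.Length / 3, transform);
            if (result != 0)
                throw new InvalidOperationException("Native ICP failed with code " + result);
            return transform;
        }
    }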
The last option (which is by far the easiest, but also the least accurate) is to use KinectFusion, built into the Kinect SDK 1.7. But my experiences with it for robot localization have been very bad.
You must read SLAM for Dummies; it will make Monte Carlo Localization much clearer.
The hard reality is that this is very tricky, and you will most probably end up doing it yourself. I hope you dive into this vast topic and learn some awesome stuff.
For further information, or for wrappers that I have written, just comment below... :-)
Best
Not sure if this would help you or not... but I put together a Python module that might.
http://letsmakerobots.com/node/38883#comments
I am trying to detect how hard someone is pressing the WP7 screen for a drawing application. Is there a way to detect how big the contact area is where the screen is being touched? I reckon that would be a reasonably accurate way to determine how hard the screen is being pressed: a light touch would have a small contact area while a hard press would have a bigger one.
Has anyone ever tried something like this?
You can determine this with a very simple equation.
Pressure = Force / Area
To solve this, you would need to know at least two of the variables. Suppose you can get the area from the phone's sensors; you would still need to know the pressure in order to calculate the force, or in other words, how hard the user is pressing the screen.
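If you want to see what the platform actually reports for the contact area, Silverlight exposes a Size on each touch point via Touch.FrameReported. A minimal sketch follows; be warned that many Windows Phone devices report a constant size here, so treat the value as experimental.

    using System.Diagnostics;
    using System.Windows.Input;   // Touch, TouchFrameEventArgs, TouchPoint

    public class TouchAreaProbe
    {
        public void Attach()
        {
            Touch.FrameReported += OnFrameReported;
        }

        private void OnFrameReported(object sender, TouchFrameEventArgs e)
        {
            foreach (TouchPoint point in e.GetTouchPoints(null))
            {
                // Size is the reported contact area; if the device always reports the
                // same value, you cannot infer pressure from it.
                double area = point.Size.Width * point.Size.Height;
                Debug.WriteLine("Contact area: " + area);
            }
        }
    }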
Hope this helps you!
I have a question: how do I detect a change on the screen? Its position is not necessary, but if it is possible to get its position, that would be helpful. I searched the internet but found no suitable answer. I am writing a program in C# and I have to detect changes on the screen. I tried capturing four screenshots per second and comparing them. This method works, but it badly affects the performance of the PC.
I think it would be easy to do in C or assembly language (x86), because in assembly we can access video memory directly.
Is it possible to do in C#?
Code sample will be appreciated.
Project: detect any change on the full screen for camera monitoring software.
Are you really looking for just a simple difference of what you see on your monitor? I doubt that would do the job.
For motion detection from cam input you can take a look at Motion Detection Algorithms article on CodeProject.
Aside from taking screen captures and comparing them at intervals (which would cause performance issues), the only solution I can think of is hooking into system events, specifically the "redraw" kind of events.
You will need to choose which events to hook your program to.
This CodeProject tutorial might help:
http://www.codeproject.com/KB/system/WilsonSystemGlobalHooks.aspx
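As a rough starting point for the event-hook approach, here is a sketch using SetWinEventHook from user32. EVENT_OBJECT_LOCATIONCHANGE is just one example event; you would pick the events relevant to your monitoring scenario, and the callback must run on a thread with a message loop (e.g. a WinForms or WPF UI thread).

    using System;
    using System.Runtime.InteropServices;

    class RedrawWatcher
    {
        delegate void WinEventDelegate(IntPtr hWinEventHook, uint eventType, IntPtr hwnd,
                                       int idObject, int idChild, uint dwEventThread, uint dwmsEventTime);

        [DllImport("user32.dll")]
        static extern IntPtr SetWinEventHook(uint eventMin, uint eventMax, IntPtr hmodWinEventProc,
                                             WinEventDelegate lpfnWinEventProc, uint idProcess,
                                             uint idThread, uint dwFlags);

        [DllImport("user32.dll")]
        static extern bool UnhookWinEvent(IntPtr hWinEventHook);

        const uint EVENT_OBJECT_LOCATIONCHANGE = 0x800B;   // example event; choose what you need
        const uint WINEVENT_OUTOFCONTEXT = 0x0000;         // deliver the callback in our own process

        readonly WinEventDelegate callback;                // keep a reference so the GC doesn't collect it
        IntPtr hook;

        public RedrawWatcher()
        {
            callback = OnWinEvent;
        }

        public void Start()
        {
            hook = SetWinEventHook(EVENT_OBJECT_LOCATIONCHANGE, EVENT_OBJECT_LOCATIONCHANGE,
                                   IntPtr.Zero, callback, 0, 0, WINEVENT_OUTOFCONTEXT);
        }

        public void Stop()
        {
            if (hook != IntPtr.Zero) UnhookWinEvent(hook);
        }

        void OnWinEvent(IntPtr hWinEventHook, uint eventType, IntPtr hwnd,
                        int idObject, int idChild, uint dwEventThread, uint dwmsEventTime)
        {
            // Something on screen moved or changed; react here, e.g. capture only that window's area.
        }
    }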
I'm trying to implement a simple game. I've written a dial control, but I'm having trouble writing an on-screen thumbstick in Silverlight for Windows Phone. This would be a large circle, say 150px wide, with a 25px circle which, when held down, moves around the centre much like a real thumbstick, such as the Xbox 360 controller thumbsticks.
I'm finding this a little tricky to create. Are there any examples, such as a joystick control, that I could shrink down? I've been trying to create something for ages and can't seem to figure it out. The centre circle is loaded from an image, and the larger one too, so it can be customised; getting the two to be centred within each other is the easy part!
As discussed, I'd suggest using XNA for this, since it's considerably easier. With Mango you can combine XNA and Silverlight and therefore satisfy your need for some Silverlight too.
Look at this example:
http://create.msdn.com/en-US/sample/touchthumbsticks
It shows how to easily create a thumbstick control. To restrict the area which can be touched, just create a new Rectangle at the position of the thumbstick with the size you desire and use the .Contains(...) overload to check whether the position of the tap is inside it, then act accordingly (update the stick, or ignore the input).
Check out the .Contains(...) function and its overloads:
http://msdn.microsoft.com/de-de/library/microsoft.xna.framework.rectangle.contains.aspx
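If it helps, here is a cut-down sketch of that idea; the 150px region and its on-screen position are placeholder values taken from your question.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Input.Touch;

    public class ThumbstickSketch
    {
        // The thumbstick's touch region, matching the question's 150px outer circle.
        readonly Rectangle stickArea = new Rectangle(20, 300, 150, 150);
        readonly Vector2 stickCenter;

        public Vector2 Direction { get; private set; }   // -1..1 on both axes

        public ThumbstickSketch()
        {
            stickCenter = new Vector2(stickArea.Center.X, stickArea.Center.Y);
        }

        public void Update()
        {
            Direction = Vector2.Zero;
            foreach (TouchLocation touch in TouchPanel.GetState())
            {
                if (touch.State == TouchLocationState.Released)
                    continue;

                var point = new Point((int)touch.Position.X, (int)touch.Position.Y);
                if (!stickArea.Contains(point))
                    continue;   // ignore touches outside the thumbstick region

                // Offset of the finger from the stick centre, clamped to the outer radius.
                Vector2 offset = touch.Position - stickCenter;
                float radius = stickArea.Width / 2f;
                if (offset.Length() > radius)
                    offset = Vector2.Normalize(offset) * radius;

                Direction = offset / radius;   // behaves like a real thumbstick axis pair
                break;
            }
        }
    }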
I have found that many programmers tend to stick to Silverlight because they think XNA is some kind of holy grail and is complex to program. It is not. It just takes a bit of getting used to, but you will surely enjoy the ride to XNA enlightenment. I can tell you, I did :) It's fun! Just trust a stranger on the internet!
If you need to stick to Silverlight and pre-Mango, I fear I can offer nothing of value, and I fear you will suffer pain trying to recreate the functionality XNA already offers programmers for free.