I'm currently working on an app that should be able to navigate you inside a building. The basic idea is to have the building schematics, floor by floor; on every floor it tells you where to go to get to the next one. Once you are on the next floor, you press a button and its schematic appears.
While this works fine, I came across an idea to automate it: use the building's WiFi access points, measure the signal strength and triangulate your position. However, I read that in WP7 there is no way to access the available WiFi networks. Does this still apply in Mango? Or is there some workaround?
Or is there any other idea for automating navigation inside a building?
There's no API for WiFi (not even in Mango), and as such you can't triangulate any positions from it.
And there's no workaround.
I have a problem with the project I am working on in Unity and have been trying to solve it for days.
I am using the Meta 2 and the HTC Vive base station. I am continuing the work of a colleague who was working on another computer, and I am trying to recreate the project he created. He gave me the project and everything included, but he also doesn't know how to solve my problem.
It looks like this:
I can open and play the project. When I play it, I can see everything I want to see in 3D with the Meta 2, but I cannot look around. When I move the Meta 2 I see some kind of "vibrating" movement on the screen, but in the end everything stays in the same position.
I also connected the Sense Gloves to my computer. I can move the fingers of the Sense Gloves on the screen (they are shown as blue circles which I can move), BUT I cannot move the whole hand. It stays where it was initialized in the room.
I think it is a problem with the transfer of the Vive tracking data to Unity (the position of the tracker coordinate system in Unity is not moving even though it should).
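For reference, this is a minimal sketch of the kind of check I mean, to see whether any tracking data arrives in Unity at all (assuming the Vive tracking comes in through Unity's XR pipeline; the class name is just for this example):

    using UnityEngine;
    using UnityEngine.XR;

    // Attach to any GameObject and watch the console while moving the headset/tracker.
    // If the logged pose never changes, the tracking data is not arriving in Unity at all
    // (driver / XR settings problem) rather than being applied to the wrong objects.
    public class TrackingProbe : MonoBehaviour
    {
        void Update()
        {
            Vector3 headPos = InputTracking.GetLocalPosition(XRNode.Head);
            Quaternion headRot = InputTracking.GetLocalRotation(XRNode.Head);
            Debug.Log("Head pose: " + headPos + " / " + headRot.eulerAngles);
        }
    }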
I hope this explanation is accurate enough. If you need more information, do not hesitate to ask me, please.
Kind regards
Alex
So in this maze kind of game I'm making, I will show the maze to the player for 30 seconds.
What I don't want is the player taking a screenshot of the maze.
I want to do something like Snapchat or Instagram, which detect when you take a screenshot of a snap/story.
I'm using C#. A solution that prevents the user from taking a screenshot would also be fine; I don't mind either way.
Is there a possible way to detect when the user takes a screenshot, or to prevent it, in Unity?
No, you can't detect this reliably. They could also just take a photo with a digital camera. Furthermore, there are endless ways to create a screenshot, and the OS has no "callback" to inform an application about it. You could try to detect the "Print Screen" key, but as I said, there are other screenshot / screen recording tools which could use any hotkey, or no hotkey at all. I have never used Snapchat, but it seems it's not safe either.
There are even monitors and video projectors which have a freeze mode to keep the current image. You could also run your browser in a virtual machine; there you can actually freeze the whole virtual PC or take screenshots of the virtual screen, and an application running inside the VM has no way to even detect or prevent that.
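For completeness, a minimal sketch of the "detect the Print Screen key" idea in Unity (desktop builds only; the class name is just for this example, and as said above it will not catch external tools or cameras):

    using UnityEngine;

    // Watches for the stock Print Screen key on desktop builds.
    // Third-party capture tools, screen recorders or a phone camera are not detectable this way.
    public class PrintScreenWatcher : MonoBehaviour
    {
        void Update()
        {
            // Unity maps the key as Print on some platforms and SysReq on others.
            if (Input.GetKeyDown(KeyCode.Print) || Input.GetKeyDown(KeyCode.SysReq))
            {
                Debug.Log("Print Screen pressed - hide or reshuffle the maze here.");
            }
        }
    }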
I once had to do something similar. If you just want to do what Snapchat does, it can be done, but remember that as long as the app is running on someone else's device instead of your server, it can be decompiled, modified and compiled again, so this screenshot detection can be circumvented.
First of all, you need to know this Apple rule:
2.5.9 Apps that alter or disable the functions of standard switches, such as the Volume Up/Down and Ring/Silent switches, or other native user interface elements or behaviors will be rejected.
So, the idea of altering what happens when you take a screenshot is eliminated.
What you do is start the game and, while you are showing the maze to the player for 30 seconds, do the following:
On iOS:
Continuously check if the player presses the power and home buttons at the same time. If this happens, restart the game and show the maze to the player for 30 seconds again. Do this over and over until the player stops doing it. You can even disconnect or ban the player if you detect the power + home button press.
On Android:
Continuously check if the player presses the power and volume-down buttons at the same time, and perform the same action described above.
You cannot do this with C# alone. You have to make plugins for both iOS and Android. The plugin should use Java to do the detection on Android and Objective-C to do the detection on iOS, because the required APIs are not available in C#. You can then call the Java and Objective-C functions from C#.
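As one concrete illustration of calling into the native side from C#: on Android you can ask the OS to block screenshots and screen recording for your window with the standard FLAG_SECURE window flag, going through Unity's AndroidJavaClass bridge instead of a separate Java plugin. This is only a sketch of the prevention half; the class and method names here are just for this example:

    using UnityEngine;

    public static class AndroidScreenshotBlocker
    {
        // Asks Android to block screenshots / screen recording for the Unity activity window.
        public static void Enable()
        {
    #if UNITY_ANDROID && !UNITY_EDITOR
            var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
            var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");

            // Window flags must be changed on the Android UI thread, not Unity's main thread.
            activity.Call("runOnUiThread", new AndroidJavaRunnable(() =>
            {
                const int FLAG_SECURE = 0x2000; // WindowManager.LayoutParams.FLAG_SECURE
                var window = activity.Call<AndroidJavaObject>("getWindow");
                window.Call("setFlags", FLAG_SECURE, FLAG_SECURE);
            }));
    #endif
        }
    }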
Other improvements to make:
- Check for external display devices and disable them while you are showing the maze to the player for 30 seconds; re-enable them afterwards.
- When you detect the screenshot button press as described above, immediately take your own screenshot too (see the sketch after this list). Loop through the images in the player's picture gallery and load all the images taken that day. Compare them with the screenshot you just took and see if they match. If they do, you can be fairly sure that the player is trying to cheat. Take action such as banning the player, restarting the game, or even trolling the player by sending their screenshot to the other players. You can also use it as proof that the user was cheating when they complain after being banned.
- Finally, you can go even deeper by using OpenCV. While you are showing the player the maze for 30 seconds, start the front camera of the device and use OpenCV to continuously check whether any object other than the player's head is in front of the camera. If so, the player is probably trying to take a picture of the screen with another device; take action immediately. You can use machine learning to train this.
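A small sketch for the "take your own screenshot at the same moment" step (the file name is just an example; comparing it against the device gallery still needs the native plugin described earlier):

    using UnityEngine;

    public static class ReferenceShot
    {
        // Saves a reference screenshot to compare against the player's gallery later.
        // On mobile platforms Unity writes the file relative to Application.persistentDataPath.
        public static string Capture()
        {
            string fileName = "reference_" + System.DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".png";
            ScreenCapture.CaptureScreenshot(fileName); // Application.CaptureScreenshot on pre-2017 Unity
            return fileName;
        }
    }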
How far you go depends on how much time you want to spend and how much you care about players cheating online. The only thing to worry about is players decompiling the game and removing these features, but it is still worth implementing.
My Android phone takes screenshots differently. I swipe down from the top of the screen and select the "Capture" option.
Nothing is always the same on Android; this is different on some older or different Android devices. You can detect swipe patterns on the screen. The best way to do this is to build a profile that handles each Android device from the different manufacturers.
For those commenting: this is possible to do, and you should do it, especially if it is a multiplayer game. Just because a game can be hacked does not mean that a programmer should not implement basic hack-prevention mechanisms. Basic hack prevention should be implemented and then improved as you get feedback from players.
Objective:
I have a pre-scanned spatial map of a room (captured through an onboarding process).
We take that map and add holographic locations/markers/digital twins to it in Unity, at pre-defined static locations, e.g. wall space, fittings, etc.
The app is then launched and contains all the holographic data in the correct locations, irrespective of the user's physical start location.
In short, I want an app to start, with pre-defined holograms at set locations in the real world, irrespective of where the app is started within that room.
I have read lots of tutorials and walkthroughs etc on Spatial Mapping, Spatial Understanding etc. but they do not seem to solve my problem.
I have already downloaded the 3D spatial map of the room using the HoloLens web browser interface, and placed holograms using Unity, with their respective scripts etc.
Now, when I start the app, all of the holograms are created correctly relative to each other, but they are only in the right place if I start the app standing at a set point, looking in a set direction.
The main idea has been to find the spatial anchors for the room (I don't know where to get these from in the created spatial map!) and then, once they are found, rotate/translate the holographic world to match the live-scanned spatial anchors.
Other methods include:
- placing all the objects manually in some config first-run of the app
- creating qr codes and placing them in set locations to act in the same way as the spatial anchors in the main idea above.
Has anyone done this, and is there a better way of spawning pre-defined holograms at real-world locations every app run?
Other questions looking for similar answers, but not solving my use-case:
https://forums.hololens.com/discussion/2938/position-independent-object-placement
https://forum.unity.com/threads/how-do-i-refer-to-a-specific-space-in-a-spatial-mapped-room.425525/
You will need to set up World Anchors to let the HoloLens remember the position of holograms in your scanned space. (More info)
I tried it and this works pretty well.
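A minimal sketch of what that looks like in code, assuming Unity's built-in WSA APIs (the anchor id "myHologramAnchor" is just an example name):

    using UnityEngine;
    using UnityEngine.XR.WSA;             // WorldAnchor (UnityEngine.VR.WSA on pre-2017.2 Unity)
    using UnityEngine.XR.WSA.Persistence; // WorldAnchorStore

    // Attach to a hologram to lock it to a real-world pose and restore it on later app runs.
    public class PersistentAnchor : MonoBehaviour
    {
        const string AnchorId = "myHologramAnchor"; // example id, use one per hologram

        void Start()
        {
            WorldAnchorStore.GetAsync(store =>
            {
                // Try to restore a previously saved anchor for this object.
                WorldAnchor anchor = store.Load(AnchorId, gameObject);
                if (anchor == null)
                {
                    // First run: lock the object at its current pose and persist it.
                    anchor = gameObject.AddComponent<WorldAnchor>();
                    store.Save(AnchorId, anchor);
                }
            });
        }
    }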
placing all the objects manually in some config first-run of the app
This is the easiest course of action you could take. Basically, I would add a TapToPlace script to all of the holograms that you want to anchor. When you first launch the app, they will be in whatever place you have them in Unity. However, once you close the application and open it back up, they will be in the same spots in which you put them. If you don't want the user to be able to move them so easily, add some kind of method that disables TapToPlace with a button click or a speech command (see the sketch below).
You can find the TapToPlace script in the MRTK. This is a very easy way because you don't have to learn about attaching and removing World Anchors; that is done for you already.
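If you want the speech-command lock mentioned above, here is a rough sketch using Unity's KeywordRecognizer. The phrase is arbitrary, and the namespace for TapToPlace may differ between toolkit versions (it lives in HoloToolkit.Unity.SpatialMapping in the classic HoloToolkit), so adjust to match yours:

    using UnityEngine;
    using UnityEngine.Windows.Speech;
    using HoloToolkit.Unity.SpatialMapping; // adjust to your MRTK/HoloToolkit version

    // Saying "lock holograms" disables TapToPlace on every hologram so they can no longer be moved.
    public class HologramLock : MonoBehaviour
    {
        KeywordRecognizer recognizer;

        void Start()
        {
            recognizer = new KeywordRecognizer(new[] { "lock holograms" });
            recognizer.OnPhraseRecognized += OnPhraseRecognized;
            recognizer.Start();
        }

        void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            foreach (var tap in FindObjectsOfType<TapToPlace>())
            {
                tap.enabled = false;
            }
        }

        void OnDestroy()
        {
            if (recognizer != null)
            {
                recognizer.Stop();
                recognizer.Dispose();
            }
        }
    }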
So, after thorough research on the MSDN forums, I have decided to put my question here for the experts.
My goal is very simple but unusual. Before I post the question, here's the scenario in which I want to implement the requirement:
There is a huge table (a large tablet, like Microsoft PixelSense or the old Surface machine) of nearly 41" which runs my app on Windows 8.1 smoothly.
This table (tablet) will be placed between two chairs, one for the customer while the other one is for the agent.
The app opens from the customer's side and the customer enters his required details in the usual way, like anyone filling in a form.
The unusual part is that the agent sitting on the other side cannot use the keyboard from his side, because Windows (the OS) opens the keyboard in only one orientation (unlike Microsoft PixelSense).
I have managed to invert the layout of the whole app for the agent, but I am unable to change the keyboard's layout (rotate it 180 degrees).
My question is: can I change this in any way? Is it possible?
I don't want to go for custom keyboard as of now.
I am creating an app with multiple functionalities, one of which needs access to the front-facing camera. I do not mark it as a requirement in the manifest, because I want the user to have access to the other functionalities even if a front-facing camera is not present. However, I do need to notify the user, whenever he starts that functionality, that a front-facing camera is needed and it cannot run. How can that be done programmatically?
I have searched around the web and only found ways to exclude devices that do not have a front-facing camera. That is not what I need, however, and I am wondering if it is even possible to do.
The Microsoft.Devices.Camera class offers this information, e.g.:

    Camera.IsCameraTypeSupported(CameraType.FrontFacing)
I've found more info about creating and manipulating cameras here on MSDN.
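For example, before starting that functionality you could do a check along these lines (the message text and method name are just examples):

    using System.Windows;
    using Microsoft.Devices;

    // Called when the user starts the functionality that needs the front camera.
    private void StartFrontCameraFeature()
    {
        if (Camera.IsCameraTypeSupported(CameraType.FrontFacing))
        {
            // Safe to initialize and use the front-facing camera here.
        }
        else
        {
            MessageBox.Show("This feature needs a front-facing camera, which this device does not have.");
        }
    }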