I would like to detect whether the user is holding a finger on the screen when my app starts, so that I can perform some different functionality.
Most of what I've seen relates to gestures, but I don't think that would work, since the user will already have their finger held down when the app starts. It would be extra cool if I could detect how many fingers, so that I could go directly to a different page for two fingers.
Thanks for any ideas.
http://invokeit.wordpress.com/2012/04/27/high-performance-touch-interface-wpdev-wp7dev/
The TouchFrameEventArgs passed to the event provides a mechanism to query multiple touch points:
http://msdn.microsoft.com/en-us/library/system.windows.input.touchframeeventargs.gettouchpoints(v=vs.95).aspx
In my post I only deal with the primary touch point, but the GetTouchPoints method will give you all touch points as a collection.
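Putting those two pieces together, here is a minimal sketch for WP7 Silverlight. It hooks the global Touch.FrameReported event in the page constructor; how quickly a finger that is already held down at launch produces its first frame is worth verifying on a real device.

```csharp
using System.Windows.Input;

public partial class MainPage
{
    public MainPage()
    {
        InitializeComponent();
        // Hook this as early as possible so touches present at startup
        // are seen as soon as the runtime reports a frame.
        Touch.FrameReported += OnFrameReported;
    }

    private void OnFrameReported(object sender, TouchFrameEventArgs e)
    {
        // Passing null returns the points relative to the top-level content.
        TouchPointCollection points = e.GetTouchPoints(null);

        if (points.Count >= 2)
        {
            // Two or more fingers are on the screen: navigate to the
            // alternate page here.
        }
    }
}
```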
I'm trying to make a simple but unusual 2D game for Android using Unity, in which the player is supposed to move two different objects in two different halves of the screen (yeah, that sounds stupid, sorry) AT THE SAME TIME. Most of the work is done, but I found a problem I have no idea how to deal with: I can't swipe in both halves of the screen at the same time. My script for swipes checks the position of the first touch and then waits for the end of the swipe, so if I touch both halves of the screen and then finish a swipe by lifting my fingers, the script only detects the swipe in the half I touched first.
Sorry for the bad description. :(
Any thoughts?
The most obvious approach (in case I understood your problem correctly) is to change your input code to work with several touches, not just a single one. It should work like this:
detect a new touch (check whether Input.touchCount is greater than the number of currently tracked touches)
store info about the handled touch (you can use the fingerId value from the Touch structure)
find the object that will be affected by this touch (it seems you already have this functionality)
listen to ALL updates for ALL touches and affect the corresponding objects as needed
remove the handled touch from the tracked touches when the finger is released
So you should work with ALL touches, not just the first one; a minimal sketch of this pattern is below. Also, make sure that you have Input.multiTouchEnabled set to true.
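Here is a minimal sketch of that per-finger bookkeeping, assuming the legacy Input manager and that each half of the screen owns one movable object; MultiSwipeInput and ApplySwipe are hypothetical names standing in for your existing swipe logic.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class MultiSwipeInput : MonoBehaviour
{
    // Maps a fingerId to the position where that touch began.
    private readonly Dictionary<int, Vector2> trackedTouches = new Dictionary<int, Vector2>();

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);

            if (touch.phase == TouchPhase.Began)
            {
                // New finger: remember where it started.
                trackedTouches[touch.fingerId] = touch.position;
            }
            else if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
            {
                Vector2 start;
                if (trackedTouches.TryGetValue(touch.fingerId, out start))
                {
                    // Route the swipe to whichever half it started in.
                    bool leftHalf = start.x < Screen.width * 0.5f;
                    ApplySwipe(leftHalf, touch.position - start);
                    trackedTouches.Remove(touch.fingerId);
                }
            }
        }
    }

    private void ApplySwipe(bool leftHalf, Vector2 swipe)
    {
        // Placeholder: move the object that belongs to this half of the screen.
    }
}
```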
Introduction
I am making a game in Unity that was designed for the player to use all the letters on the keyboard. The game shows a battle between two fighters, prompts a key, and the player must mash that key to make their character push the other.
The fun part is that, eventually, the key to be pressed changes to another one at random.
This game is about learning the keyboard, for those who haven't learned it or haven't got used to their flashy new keyboard just yet.
I want the game itself to be playable in more ways, so I made a port to Android that works by touching the sides of the screen; there are flashing button panels that change just as the keys would.
Issue
Now the game has reached the point where I want it to be playable with controllers/gamepads/joysticks, but I'm worried about two issues.
First and foremost: how can I tell the number of button inputs a controller has? I don't want my game to ask the player to press "JoystickButton19" when their physical controller only has 12 or so buttons.
I would add to the question: "How do I know the actual name of the button itself?" For example, how do I know whether the controller is a JoyCon? I noticed that in Fire Pro Wrestling World on Steam, the game recognizes the "SL" and "SR" buttons, so it has a JoyCon-specific layout there. If this has to be answered in a new topic, I will make one.
This is my first question. Thanks in advance!
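For the identification half of that question, a minimal sketch under the assumption that the game uses Unity's legacy Input manager: Input.GetJoystickNames() reports the name of each connected device, which can help recognize a specific controller model, though to my knowledge it does not expose how many buttons a device has.

```csharp
using UnityEngine;

public class ControllerProbe : MonoBehaviour
{
    void Start()
    {
        // One entry per controller slot; empty strings mark slots whose
        // device has been disconnected.
        string[] names = Input.GetJoystickNames();
        for (int i = 0; i < names.Length; i++)
        {
            Debug.Log("Joystick " + (i + 1) + ": " + names[i]);
        }
    }
}
```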
So in this maze kind of game I'm making, I show the maze to the player for 30 seconds.
What I don't want is the player taking a screenshot of the maze.
I want to do something like Snapchat or Instagram, which detect when you take a screenshot of a snap/story.
I'm using C#. Preventing the user from taking a screenshot would also be fine; I don't mind.
Is there a way to detect when the user takes a screenshot, or to prevent it, in Unity?
No, you can't detect this reliably. The user could also take a photo with a digital camera. Furthermore, there are endless ways to create a screenshot, and the OS has no "callback" to inform an application about it. You could try to detect the "print screen" key, but as I said, there are other screenshot / screen recording tools which could use any hotkey, or no hotkey at all. I have never used Snapchat, but it seems it's not safe either.
There are even monitors and video projectors which have a freeze mode to keep the current image. You could also run the app in a virtual machine; there you can actually freeze the whole virtual PC or take screenshots of the virtual screen, and an application running inside the VM has no way to even detect, let alone prevent, that.
I once had to do something similar. If you just want to do what Snapchat does, it can be done, but remember that as long as the app is running on someone's device instead of your server, it can be decompiled, modified, and compiled again, so this screenshot detection can be circumvented.
First of all, you need to know this rule of Apple's:
2.5.9 Apps that alter or disable the functions of standard switches, such as the Volume Up/Down and Ring/Silent switches, or other native user interface elements or behaviors will be rejected.
So, the idea of altering what happens when you take a screenshot is eliminated.
What you do is start the game, then do the following while you are showing the maze to the player for 30 seconds:
On iOS:
Continuously check whether the player presses the power and home buttons at the same time. If this happens, restart the game and show the maze to the player for 30 seconds again. Do it over and over until the player stops doing it. You can even disconnect or ban the player if you detect the power + home button press.
On Android:
Continuously check whether the player presses the power and volume down buttons at the same time. Perform the same action described above.
You cannot do this with C# alone. You have to make plugins for both iOS and Android. The plugin should use Java to do the detection on Android and Objective-C to do the detection on iOS, because the required APIs are not available in C#. You can then call the Java and Objective-C functions from C#.
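A minimal sketch of the C# side of that bridge, assuming Unity; the class name com.example.ScreenshotDetector, its method names, and ScreenshotWatcher are hypothetical placeholders, and the actual button-press detection lives in the native Java / Objective-C plugin:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

public class ScreenshotWatcher : MonoBehaviour
{
#if UNITY_IOS
    // Implemented in the Objective-C plugin and statically linked into the player.
    [DllImport("__Internal")]
    private static extern void StartScreenshotDetection();
#endif

    void Start()
    {
#if UNITY_ANDROID
        // Instantiates a hypothetical Java class shipped as a plugin; the Java
        // side keeps whatever long-lived state the detection needs.
        using (var detector = new AndroidJavaObject("com.example.ScreenshotDetector"))
        {
            detector.Call("startDetection");
        }
#elif UNITY_IOS
        StartScreenshotDetection();
#endif
    }

    // The native side can call back with
    // UnitySendMessage("ScreenshotWatcher", "OnScreenshotDetected", ""),
    // which Unity routes to this method on a GameObject of that name.
    public void OnScreenshotDetected(string payload)
    {
        Debug.Log("Screenshot attempt detected - restart the maze or take other action.");
    }
}
```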
Other improvements to make:
Check for external display devices and disable them while you are showing the maze to the player for 30 seconds, then enable them again afterwards.
When you detect the screenshot button press as described above, immediately take your own screenshot too (a capture sketch follows this list). Loop through the images in the player's picture gallery and load all the images taken that day. Compare them with the screenshot you just took and see whether they match. If they do, you can be quite sure the player is trying to cheat. Take action, such as banning the player, restarting the game, or even trolling the player by sending their screenshot to the other player. You can also use it as proof that the user was cheating if they complain after being banned.
Finally, you can go even deeper by using OpenCV. While you are showing the player the maze for 30 seconds, start the front camera of the device and use OpenCV to continuously check whether any object other than the player's head is in front of the camera. If so, the player is likely trying to take a screenshot with another device; take action immediately. You can use machine learning to train this.
How far to go depends on how much time you want to spend and how much you care about players cheating online. The only thing to worry about is players decompiling the game and removing those features, but it is still worth implementing.
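As for the "take your own screenshot too" step in the list above, here is a minimal Unity sketch, assuming the detection callback has just fired; ReferenceCapture is a hypothetical name, and comparing the pixels against gallery images would again be plugin territory:

```csharp
using System.Collections;
using UnityEngine;

public class ReferenceCapture : MonoBehaviour
{
    // Call with StartCoroutine(CaptureFrame()) when detection fires.
    public IEnumerator CaptureFrame()
    {
        // ReadPixels is only valid once the frame has finished rendering.
        yield return new WaitForEndOfFrame();

        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();

        // Hand the pixels to whatever compares them with the gallery images.
        byte[] png = tex.EncodeToPNG();
        Destroy(tex);
        // ... store or compare 'png' here ...
    }
}
```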
My Android phone takes screenshots differently. I swipe down from the top of the screen and select the "Capture" option.
Nothing is always the same on Android. This differs on some older or otherwise different Android devices. You can detect swipe patterns on the screen. The best way to do this is to build a profile that handles each Android device from the different manufacturers.
For those commenting: this is possible to do. You should do it, especially if it is a multiplayer game. Just because a game can be hacked does not mean that a programmer should not implement a basic hack prevention mechanism. Basic hack prevention should be implemented, then improved as you get feedback from players.
So I have received an external motion controller device (a Myo), and I wish to create an application where certain motions simulate a keystroke or keypress globally (no matter which application is in the foreground). This will happen while my program is running in the background, so it can receive motion inputs and output them as keyboard presses.
An example: if I were playing a baseball game in the foreground (also full screen) and I made a pitching motion, the program would output the key that performs a pitch in the game (whichever key it might be).
I have looked into the SendKeys class in C#, but I feel there might be limitations on what it can do (specifically, sending keypresses globally).
Is there a good way to write a program that maps the actions of my motion controller to keypresses using C#? It would also be good if it could do key_down and key_up for key holds.
The most direct way to accomplish truly global key presses is to emulate a keyboard. This would involve creating a keyboard driver that somehow provides access to your background program; however, that means kernel programming, which is quite complex.
An alternative is to use the SendKeys API combined with some logic to find the currently active application.
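For the non-driver route, here is a minimal C# sketch, assuming a Windows desktop app with a reference to System.Windows.Forms. SendKeys targets whatever window has focus (and has some quirks outside a normal WinForms message loop), while the Win32 keybd_event call provides the separate key-down / key-up events asked about:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public static class KeySender
{
    [DllImport("user32.dll")]
    private static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

    private const uint KEYEVENTF_KEYUP = 0x0002;

    // One-shot press, routed to the foreground window, e.g. Tap("p") or Tap("{ENTER}").
    public static void Tap(string keys)
    {
        SendKeys.SendWait(keys);
    }

    // Hold a key down, e.g. Hold((byte)Keys.P) while charging a pitch.
    public static void Hold(byte virtualKey)
    {
        keybd_event(virtualKey, 0, 0, UIntPtr.Zero);               // key down
    }

    // Release a previously held key.
    public static void Release(byte virtualKey)
    {
        keybd_event(virtualKey, 0, KEYEVENTF_KEYUP, UIntPtr.Zero); // key up
    }
}
```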
I know this isn't a C# solution, but the Myo Script interface in Myo Connect was essentially built for this purpose and would probably be the easiest way of testing things out if nothing else.
To send a keyboard command using Myo Script you can use myo.keyboard() (docs here).
If you want the script to be active at all times, you will need to consistently return true in onForegroundWindowChange() and pay attention to the script's location in the application manager. Scripts at the top of the application manager will be checked first, so your script may lose out if there is another one above it that 'wants' control of a given application.
I am writing a Windows Store app where you can draw lines on the screen and need to avoid obstacles. To detect where exactly the user is touching at the moment, I use a Canvas from Windows.UI.Xaml.Controls, because it provides useful events. The problem is that the PointerMoved event fires too infrequently, so it's easy for the user to draw over an obstacle without the app detecting it (if they draw over the obstacle fast enough).
Is there any way to speed it up, for example to force this event to be raised after every (even tiny) pointer position change?
Thanks for any help!
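Rather than forcing the event to fire more often, one way to cope is to use the samples the platform already collected and to test whole segments instead of single points. A minimal sketch, assuming a Store XAML page where drawCanvas is the Canvas and SegmentHitsObstacle stands in for the app's own collision test:

```csharp
// Two complementary ideas:
// 1) PointerRoutedEventArgs.GetIntermediatePoints returns the pointer samples
//    that arrived between PointerMoved events, so fewer positions are missed.
// 2) Consecutive positions are treated as a line segment and the segment is
//    tested against obstacles, so a fast stroke cannot jump over one.
using Windows.Foundation;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;

public sealed partial class DrawPage : Page
{
    private Point? lastPoint;

    private void DrawCanvas_PointerMoved(object sender, PointerRoutedEventArgs e)
    {
        var points = e.GetIntermediatePoints(drawCanvas);

        // The list is documented as most-recent-first, so walk it backwards
        // to process the samples in chronological order.
        for (int i = points.Count - 1; i >= 0; i--)
        {
            Point current = points[i].Position;

            if (lastPoint.HasValue && SegmentHitsObstacle(lastPoint.Value, current))
            {
                // The stroke crossed an obstacle between two samples.
            }

            lastPoint = current;
        }
    }

    private bool SegmentHitsObstacle(Point a, Point b)
    {
        // Intersect the segment a -> b with each obstacle's bounds here.
        return false; // placeholder for the game's own collision test
    }
}
```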