Implementing zoom using MonoGame and two-fingers gesture - c#

I want to implement a zoom feature using the two-finger pinch in/out gesture that is common in games such as Angry Birds. Right now I'm using a slider for zoom, and it doesn't feel as good as the simple gesture. I've tried looking at the gesture implementation in MonoGame but haven't figured out what can actually help me achieve the described behaviour.
Any help will be appreciated, thanks!

Short answer: you need to use the TouchPanel gesture functionality to detect the Pinch gesture, then process the resultant gestures.
The longer answer...
You will get multiple GestureType.Pinch gesture events per user gesture, followed by a GestureType.PinchComplete when the user releases one or both fingers. Each Pinch event will have two pairs of vectors - a current position and a position change for each touch point. To calculate the actual change for the pinch you need to back-calculate the prior positions of each touch point, get the distance between the touch points at prior and current states, then subtract to get the total change. Compare this to the distance of the original pinch touch points (the original positions of the touch points from the first pinch event) to get a total distance difference.
First, make sure you initialize the TouchPanel.EnabledGestures property to include GestureType.Pinch and optionally GestureType.PinchComplete depending on whether you want to capture the end of the user's pinch gesture.
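For example, somewhere in your Game's Initialize method (this mirrors the standard MonoGame TouchPanel setup):

```csharp
// Enable only the gestures we intend to process.
TouchPanel.EnabledGestures = GestureType.Pinch | GestureType.PinchComplete;
```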
Next, use something similar to this (called from your Game class's Update method) to process the events
bool _pinching = false;
float _pinchInitialDistance;

private void HandleTouchInput()
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.GetGesture();
        if (gesture.GestureType == GestureType.Pinch)
        {
            // current positions
            Vector2 a = gesture.Position;
            Vector2 b = gesture.Position2;
            float dist = Vector2.Distance(a, b);

            // prior positions
            Vector2 aOld = gesture.Position - gesture.Delta;
            Vector2 bOld = gesture.Position2 - gesture.Delta2;
            float distOld = Vector2.Distance(aOld, bOld);

            if (!_pinching)
            {
                // start of pinch, record original distance
                _pinching = true;
                _pinchInitialDistance = distOld;
            }

            // work out zoom amount based on the change in pinch distance...
            float scale = (distOld - dist) * 0.05f;
            ZoomBy(scale);
        }
        else if (gesture.GestureType == GestureType.PinchComplete)
        {
            // end of pinch
            _pinching = false;
        }
    }
}
The fun part is working out the zoom amounts. There are two basic options:
1) As shown above, use a scaling factor to alter the zoom based on the raw change in distance represented by the current Pinch event. This is fairly simple and probably does what you need. In this case you can drop the _pinching and _pinchInitialDistance fields and the related code.
2) Track the distance between the original touch points and set the zoom based on the current distance as a percentage of the initial distance (float zoom = dist / _pinchInitialDistance; ZoomTo(zoom);).
Which one you choose depends on how you're handling zoom at the moment.
In either case, you might also want to record the central point between the touch points to use as the center of your zoom rather than pinning the zoom point to the center of the screen. Or if you want to get really silly with it, record the original touch points (aOld and bOld from the first Pinch event) and do translation, rotation and scaling operations to have those two points follow the current touch points.
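A minimal sketch of the ratio-based variant combined with a pinch-centre point, assuming a hypothetical ZoomTo(float zoom, Vector2 center) helper on your camera (the helper name and signature are placeholders, not MonoGame APIs):

```csharp
if (gesture.GestureType == GestureType.Pinch)
{
    float dist = Vector2.Distance(gesture.Position, gesture.Position2);
    if (!_pinching)
    {
        // First Pinch event of this gesture: remember the starting distance.
        _pinching = true;
        _pinchInitialDistance = dist;
    }

    // Zoom relative to where the pinch started, centred between the fingers.
    Vector2 center = (gesture.Position + gesture.Position2) / 2f;
    float zoom = dist / _pinchInitialDistance;
    ZoomTo(zoom, center);
}
```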


How do I prevent the player from dragging an image outside a specified area?

I am trying to drag an image within a certain area. For that I am using IDragHandler. To prevent the image from going outside the area, I put four box colliders in a square shape. The box was still moving out, so I set the fixed timestep to 0.0001. Now, when the image goes out of the boundary, it is pushed back into the specified area, which is fine, but I want the image to stop moving the moment it touches the edge of the boundary.
Here's my code:
public class Draggable : MonoBehaviour, IDragHandler
{
    public GameObject box;

    private void OnCollisionEnter2D(Collision2D collision)
    {
        Debug.Log("Triggered");
    }

    public void OnDrag(PointerEventData eventData)
    {
        box.transform.position = eventData.position;
    }
}
Don't use physics and the fixed timestep to solve this problem; reducing the timestep is a performance killer.
One way to do it: use Physics.OverlapBox (see the documentation: https://docs.unity3d.com/ScriptReference/Physics.OverlapBox.html).
Do this test between your draggable object and the limits of your draggable zone in your OnDrag method. Calculate the "wanted" position based on the event you receive; if your object would overlap your borders then don't move it, otherwise you are safe and can move it.
Better yet, represent your boundaries as simple numbers. Maybe create a rectangle and take its corners. Then you get the best user experience by using min and max functions, effectively "clamping" the allowed coordinates before they are actually set.
It makes sense to clamp x and y separately. That way the user can still drag along the Y axis if there is room, despite the mouse being outside the boundary on the X axis.
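A minimal sketch of the clamping approach, assuming the allowed area is given as a Rect in the same space as eventData.position (the boundary field is an assumption, not part of the original code):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class Draggable : MonoBehaviour, IDragHandler
{
    public Rect boundary; // allowed area, in the same space as eventData.position

    public void OnDrag(PointerEventData eventData)
    {
        // Clamp x and y independently, so dragging along one axis
        // still works while the other axis is pinned at the border.
        float x = Mathf.Clamp(eventData.position.x, boundary.xMin, boundary.xMax);
        float y = Mathf.Clamp(eventData.position.y, boundary.yMin, boundary.yMax);
        transform.position = new Vector2(x, y);
    }
}
```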

Touch and Canvas element location do not match

I'm trying to create a game like WordCookie. I want to use a LineRenderer, so I cannot use the Screen Space - Overlay canvas render mode. When using either Screen Space - Camera or World Space, my touch position doesn't match the position I get from my RectTransforms.
I access my buttons' transform position by logging:
foreach (RectTransform child in buttons.transform)
{
    Debug.Log("assigning" + child.transform);
}
For the touch coordinates, I simply log touch.position.
To clarify: I want to trigger my LineRenderer when the distance between the position vectors is smaller than a certain float. However, whenever I tap on my button to test this, the button logs at (1.2, -2.6) and my touch at (212.2, 250.4).
What could be causing this?
touch.position returns a value in pixel (screen) coordinates.
To convert to world coordinates you need to use Camera.ScreenToWorldPoint.
It should be something like this (note that Input.GetTouch returns a Touch struct, not a Vector2, and the z component passed to ScreenToWorldPoint is the distance from the camera, not a world z coordinate):
Touch touch = Input.GetTouch(0);
// distance from the camera to the plane the buttons sit on
float z = button.transform.position.z - camera.transform.position.z;
Vector3 worldPosition = camera.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, z));
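With the touch converted to world space, the distance check described in the question becomes straightforward (threshold and the line-renderer trigger are placeholders, not part of the original code):

```csharp
foreach (RectTransform child in buttons.transform)
{
    // Both positions are now in the same (world) space, so the distance is meaningful.
    if (Vector2.Distance(worldPosition, child.position) < threshold)
    {
        // trigger the LineRenderer for this button...
    }
}
```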

Discontinuity in RaycastHit.textureCoord values (getting coordinates on texture right)

Hi, I'm trying to get a particular coordinate on a texture (under the mouse cursor). So on a mouse event I perform:
Ray outRay = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit clickRayHit = new RaycastHit();
if (!Physics.Raycast(outRay, out clickRayHit))
{
    return;
}
Vector2 textureCoordinate = clickRayHit.textureCoord;
Texture2D objectTexture = gameObject.GetComponent<Renderer>().material.mainTexture as Texture2D;
int xCord = (int)(objectTexture.width * textureCoordinate.x);
int yCord = (int)(objectTexture.height * textureCoordinate.y);
But the problem is that the coordinate I'm getting is not precisely under the cursor but "somewhat near it". It's not as if the coordinates are consistently shifted in one direction, yet the shift is not random either. They are shifted differently at different points of the texture, but:
they remain somewhere in the area of the real cursor,
and the coordinates are shifted the same way whenever the cursor is over the same point.
Here is part of coordinates log: http://pastebin.ca/3029357
If I haven't described the problem well enough, I can record a short screencast.
The GameObject is a Plane.
If it is relevant, the mouse event is generated by a Windows mouse hook (an application-specific thing).
What am I doing wrong?
UPD: I've decided to record a screencast - https://youtu.be/LC71dAr_tCM?t=42. Here you can see a Paint window imaged through my application. In the bottom left corner you can see that Paint is displaying the coordinates of the mouse (I'm getting these coordinates in the way I described earlier: the location of a point on the texture). As I move the mouse cursor you can see how the coordinates change.
UPD2:
I just want to emphasize one more time that this shift is not constant or linear. There can be "jumps" around the mouse coordinates (but not only jumps). The video above explains it better.
So it was a scaling problem after all. One of the scale values was negative, which caused this behaviour.

Scale Sprite up and Down to give illusion of a jump

I have some code that I wrote that works, but I feel it could be better and wanted to get some feedback.
The goal is to have a sprite scale up and back down in a timely fashion when a button is pushed, so that it gives the illusion of jumping in a "top-down" view of the game, like the character is jumping towards the screen. I already know how to draw scaled images; I'm more interested in the logic of the timing aspect.
This works, I'm just not sure it's the best approach. I thought maybe there was some equation; a math friend told me maybe a linear equation, or a parabola / second-order equation. I'm not great with math.
Anyway.
Class Properties
private double _jumpingStartedAt;
private double _totalJumpTimeInSeconds = 0.7;
private double _totalJumpFrames = 14;
private double _timeSinceLastScale;
private double _jumpingHalfWayAt;
When the button is pushed for the first time I start the "jump logic". This runs once per jump. My thought was that I'd mark the "start" time and determine the "halfway" time from _totalJumpTimeInSeconds. (Note: the method is named SecondsBetweenFrame below, so that's what I call here.)
_jumpingStartedAt = gameTime.TotalGameTime.TotalSeconds;
_jumpingHalfWayAt = _jumpingStartedAt + SecondsBetweenFrame() * (_totalJumpFrames / 2);
And then this runs on each Update() until my "jump" is complete (_isJumping = false). The logic is that I scale up every "frame" until the halfway point, then scale back down.
_timeSinceLastScale += gameTime.ElapsedGameTime.TotalSeconds;
if (_timeSinceLastScale > SecondsBetweenFrame() && gameTime.TotalGameTime.TotalSeconds < _jumpingHalfWayAt)
{
    Scale += 0.2f;
    _timeSinceLastScale = 0;
}
else if (gameTime.TotalGameTime.TotalSeconds > _jumpingHalfWayAt)
{
    Scale -= 0.2f;
    if (Scale < 1.0f) Scale = 1f; // clamp in case we overshoot past 1
    if (Scale == 1.0f) _isJumping = false;
}
private double SecondsBetweenFrame()
{
    return _totalJumpTimeInSeconds / this._totalJumpFrames;
}
Now this works, but seems a little convoluted to me.
Stretching image when jumping - side view
Yeah, what you created is pretty complicated.
I assume your sprite is also moving up and down when jumping: that you have some sort of Vector2 velocity which you change by dv = gravityAcceleration * dt in every update, and then change the Vector2 position by dp = velocity * dt. If so, I would rather use the velocity.Y value to calculate how the sprite should stretch. I think it's more natural, and your code will become much simpler.
Here's an image to describe better what I mean:
However, you will probably face another problem here: just at the beginning of the jump your sprite suddenly gets a high velocity while still being near the ground, which can cause it to clip through the floor for a moment. To prevent that you can artificially move your sprite upwards by the smallest needed value for the duration of the jump. The problem is described by the image below:
As you can see, the first stretched ball moved upwards a little bit, but not enough. You have to calculate the difference between the sizes before and after stretching and then move your sprite up by that distance.
If you do it like that, your Update should shorten to just a few lines. I believe you can do simple calculations on your own.
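A minimal sketch of that velocity-based stretch, assuming the velocity/position integration described above (velocity, gravityAcceleration, dt, StretchFactor and the base scale of 1 are assumptions carried over from this discussion, not a definitive implementation):

```csharp
// Integrate simple gravity physics.
velocity.Y += gravityAcceleration * dt;
Position += velocity * dt;

// Stretch proportionally to vertical speed:
// fast (start/end of jump) = stretched, slow (apex) = round.
Scale = 1f + StretchFactor * Math.Abs(velocity.Y);
```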
Easier approach
...Unless you'd rather have your sprite behave the way you described. Then you could modify the scale according to your Y position:
if (KeyboardState.IsKeyDown(Keys.Space))
{
    isJumping = true;
    jumpStartPosition = Position;
}

if (!isJumping) Scale = 1f;
else
{
    // base scale of 1, plus stretch proportional to the height gained
    Scale = 1f + StretchFactor * (Position.Y - jumpStartPosition.Y);
}
where:
- isJumping is a bool,
- jumpStartPosition is a Vector2,
- Position is a Vector2 property of your sprite,
- StretchFactor is a float property of your sprite telling how much does it stretch.
And you also need an end-of-jump condition, for example when the sprite's Position.Y becomes smaller than jumpStartPosition.Y. But generally this solution (as well as yours) has one disadvantage: there will be problems if you want to start the jump at one height and end it at another:
so I would rather recommend my first solution. There you can make the stop-jump condition a collision check.
Stretching image when jumping - top-down view
Bummer. Since it wasn't originally specified that this is a top-down game, like the first GTAs, I misunderstood the question, so the answer above doesn't fit well. Here is a new answer.
If you want it to be realistic, you should use some basic principles of perspective. As we look at the jumping character from the top, it gets closer to us, so its image grows. Why is that? Look at the picture below.
Two things are needed for perspective to work: the center of perspective and the screen. The center of perspective is the point where all the "rays" cross. A "ray" is a line from any point in the world to the center of our eye. The screen is the plane where the image of the 3d world is formed. Points in the real world are cast onto the screen along their rays. Of course your game is pseudo-3d, but that shouldn't matter in this case.
As z grows, the sprite comes closer to the center of perspective. If you imagine a ray from the center of perspective to the edge of the sprite, the angle of the ray changes as its distance to the center of perspective becomes smaller, and that change of angle makes the point's image move on the screen. That's why the image grows or shrinks.
Now we can wonder: OK, how do we put this into numbers? Look at the picture below:
I deliberately translated the whole world by -C so the z coordinate of the center of perspective becomes 0. That makes the calculations simpler. What we are trying to find is x', the coordinate of the point on the screen. Let Z* = |z - C|. Looking at this picture, it becomes clear that we can find what we need by a pretty simple proportion: x' / L = x / Z*, so x' = x * L / Z*.
Using the same method you can calculate y'. If your character is always at the center of the screen, all you need is x'/x = y'/y = L/Z* = S, i.e. your scale. That's because x in this scenario is, in fact, the half-width of the sprite, and y is the half-height. However, if your character can move freely around the screen, you may want to scale & translate it so it looks more natural:
The white square is the on-the-ground sprite, the gray square is the jumping sprite. In this case you will have to know l (left), r (right), t (top) and b (bottom) coords of the sprite's boundaries (top-bottom means Y-axis, not Z-axis). Then using the same proportion you can get l', r', t' and b' - boundaries of the sprite's image on screen. From this data you should be able to calculate both scale and translation.
Note: L is a parameter of the calculation which you have to choose yourself. Assuming the screen has constant width Ws and height Hs, L corresponds directly to the FOV (field of view). You can derive it with the same kind of proportion: tan(FOV/2) = (Ws/2) / L, so L = Ws / (2 * tan(FOV/2)). I would recommend FOV = 60 degrees. If you make the FOV too big, you may face the fisheye problem.
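As a sketch of the whole idea in code (C, FOV, and the jump height z are the parameters described above; the function names are placeholders, not library code):

```csharp
// Distance from the center of perspective to the screen, derived from the FOV.
float ScreenDistance(float screenWidth, float fovRadians)
{
    return screenWidth / (2f * (float)Math.Tan(fovRadians / 2f));
}

// Scale of the sprite's image when the character is at height z,
// with the center of perspective at height C above the ground.
float PerspectiveScale(float z, float C, float L)
{
    float zStar = Math.Abs(z - C);
    return L / zStar; // S = x'/x = y'/y
}
```

The on-the-ground scale is then PerspectiveScale(0, C, L), so the relative growth of the sprite during the jump is PerspectiveScale(z, C, L) divided by that value.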

How to automatically change kinect sensor angle?

Is it possible to change the elevation angle of the Kinect motor automatically?
I mean, until now I have this code (C#):
private void CameraAngleSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    int angle = (int)CameraAngleSlider.Value;
    KinectSensorManager.ElevationAngle = angle;
}
and I change the angle manually using a slider called CameraAngleSlider.
Just an example: I would imagine that when the Kinect session starts, I stand in front of the Kinect and the sensor tries to adjust the angle according to my position.
This is perfectly possible, but you will have to program it manually and take into account that the adjustment is not very fast. Also, the Kinect sensor doesn't push any data while it is adjusting its angle.
You have 2 situations:
1) a skeleton is already being tracked
=> move the angle up when the head is too close to the top of the screen
=> move the angle down when the head is below half the screen height
2) no skeleton is being tracked
=> you will have to guess. I'd suggest moving the angle up and down until you get a tracked skeleton (and aborting after a few tries so it doesn't just keep adjusting)
There is no flag or function you can set on the Kinect to have it seek out the best position for tracking based on player position. The Kinect does detect clipping, however, and can be queried as to whether a portion of the player is outside the FOV.
The two functions that will allow you to detect clipping of the skeleton are:
FrameEdges Enumeration
Skeleton.ClippedEdges
Depending on how your application operates, you can monitor either of these and, when clipping of the skeleton is detected, adjust KinectSensor.ElevationAngle accordingly.
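A minimal sketch of that monitoring step, assuming the Kinect for Windows SDK v1 (the 5-degree step, the sensor instance, and the tracked skeleton are assumptions; call this sparingly, since the motor is slow and the sensor stops pushing data while moving):

```csharp
// Nudge the motor when the tracked skeleton is clipped at the top or
// bottom of the sensor's field of view.
void AdjustElevation(KinectSensor sensor, Skeleton skeleton)
{
    const int step = 5; // degrees per adjustment (assumption)

    if (skeleton.ClippedEdges.HasFlag(FrameEdges.Top))
    {
        sensor.ElevationAngle =
            Math.Min(sensor.ElevationAngle + step, sensor.MaxElevationAngle);
    }
    else if (skeleton.ClippedEdges.HasFlag(FrameEdges.Bottom))
    {
        sensor.ElevationAngle =
            Math.Max(sensor.ElevationAngle - step, sensor.MinElevationAngle);
    }
}
```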
