Viewport3D disable rescaling / resizing - c#

How can I disable the following strange resize/rescale behaviour of a Viewport3D in WPF?
Notice that I do not change the window height:
How can I disable this "feature" of WPF (or why does it behave this way)?

To anyone stumbling upon this question, here's the answer by Mike Danes:
http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/642253a6-e7a5-4ce2-bc08-e91f2634605b/
This auto-scaling is a consequence of the way perspective works.
The default field of view is 45 degrees and the default near plane distance is
0.125. This means that a point (x, 0, 0.125) will be visible at the right side of the viewport when its x coordinate is tan(45°/2) * 0.125
= 0.051776. Note how the viewport width doesn't play a part in this computation: if you make the viewport wider, the point (0.051776, 0, 0.125) will
still be visible at the right side of the viewport, which is why the
teapot appears larger.
You could compensate for a width increase by changing the field of
view and near plane distance, something along these lines:
double newWidth = grid.ActualWidth;
double originalWidth = 509.0;
double originalNearPlaneDistance = 0.125;
double originalFieldOfView = 45.0;
double scale = newWidth / originalWidth;
// Widen the FOV so the visible width at the near plane grows with the viewport width.
double fov = Math.Atan(Math.Tan(originalFieldOfView / 2.0 / 180.0 * Math.PI) * scale) * 2.0;
camera.FieldOfView = fov / Math.PI * 180.0; // back to degrees
camera.NearPlaneDistance = originalNearPlaneDistance * scale;
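To sanity-check that this compensation really cancels the auto-scaling, here is the same math as a small, framework-free sketch (Python purely for illustration; the 509 px width, 45° FOV and the 0.051776 reference point are the values from the answer above):

```python
import math

def compensated_fov_deg(original_fov_deg, scale):
    """Widen the horizontal FOV so the visible width at the near plane scales with the viewport."""
    half = math.radians(original_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half) * scale))

def pixel_x(x, z, fov_deg, viewport_width):
    """Horizontal pixel offset from the viewport center for a point at (x, 0, z)."""
    ndc = x / (z * math.tan(math.radians(fov_deg) / 2.0))  # -1..1 across the viewport
    return ndc * viewport_width / 2.0

# The reference point from the answer, right at the edge of a 509 px wide viewport:
x, z = 0.051776, 0.125
print(pixel_x(x, z, 45.0, 509.0))  # ~254.5, i.e. the right edge

# Double the viewport width; with the compensated FOV the point stays at the same pixel,
# so the teapot no longer grows with the window.
fov2 = compensated_fov_deg(45.0, 2.0)
print(pixel_x(x, z, fov2, 1018.0))  # ~254.5 again
```

The key identity is tan(newFov/2) = tan(oldFov/2) * scale, so the normalized coordinate shrinks by exactly the factor the viewport grew by.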

I know this is an old post, but I've come across this a few times, and I'm not sure the accepted answer is correct. Near plane distance defines the point at which objects are too close to render; I'm not sure how it would affect the scale of the Viewport3D and its children. What I have found is that modifying Viewport3D.Width affects the scale of the 3D scene, while modifying the Height only causes clipping (or adds white space when it grows). This is not intuitive in my opinion, but maybe it's a 3D programming convention I'm not aware of.
My solution for the above problem would be to remove the Viewport3D from the Grid it's in and place it inside a Canvas instead. This prevents window resizing from changing the width of the viewport, so it stays the same size.

Related

joystick positioning relative to screen size

I want the joystick fixed at the bottom-right of the screen, no matter the resolution/window size. Although my solution works, there must be cleaner code for this, or some canvas option I don't know about.
I tried and tested a position that works at a set resolution and then factored the new resolution against it. Tested at resolutions 434x494 and 1920x1080.
For a 434-pixel-wide screen, a joystick position.x of 434 - 115 = 319 worked.
The factor was too big, so I had to take its square root for the width.
For the height, 120 worked for a 494-pixel-tall screen; I just did some hocus pocus to make it work for 1920x1080.
public Joystick joystick;
public Vector3 m_myPosition;
public RectTransform m_NewTransform;

void Start()
{
    // Note: the original divisions (Screen.width / 434) were integer divisions,
    // which silently truncate; the `f` suffixes below make them floating-point.
    double width = Screen.width - Mathf.Sqrt(Screen.width / 434f) * 115;
    double height = 120 + 4 * Mathf.Exp(Screen.width / 494f);
    m_myPosition.x = (float)width;
    m_myPosition.y = (float)height;
    m_NewTransform.position = m_myPosition;
}
I shouldn't rely on iffy code like this, any solutions?
G'day folks.
The solution is everywhere on YouTube...
Just change the Canvas Scaler's UI Scale Mode to 'Scale With Screen Size' and then anchor the UI joystick object to a corner.
For my joystick I just anchored it to the bottom-right and done!
In case you want in-depth info on how to, I followed this tutorial: https://www.youtube.com/watch?v=w3sMD-3OJro
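The reason anchoring works where per-resolution fudge factors don't is plain arithmetic: an anchored element's position is a fixed offset from a screen corner, so it is resolution-independent by construction. A minimal illustration of the idea (Python sketch; the 115 and 120 pixel margins are the numbers from the question above):

```python
def bottom_right_position(screen_w, screen_h, margin_x=115, margin_y=120):
    """Position measured from the bottom-right corner, like a bottom-right-anchored element."""
    return (screen_w - margin_x, margin_y)

# The margins stay constant at every resolution -- no sqrt/exp fudge factors needed.
for w, h in [(434, 494), (1920, 1080)]:
    x, y = bottom_right_position(w, h)
    print((x, y), "margin from right edge:", w - x)
```

This is exactly what a bottom-right anchor does for you automatically, so the hand-tuned formulas become unnecessary.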

Change the size of camera to fit a GameObject in Unity/C#

Say I have a GameObject inside a 2D scene and a camera, and I want to change the size and position of the camera so that even when the screen resolution changes, the object will still be visible. How can I do that?
TL;DR: Scroll down to the bottom for the code.
First up, we must set the position of the camera to the middle of the object so the scaling of the camera would be easier.
Second, to scale the camera, we're going to change the orthographicSize of the camera in our script (Which is the Size attribute in the Camera component). But how do we calculate the attribute?
Basically, the Size attribute here is half the height of the camera's view. So, for example, if we set the Size to 5, that means the height of the camera's view is going to be 10 Unity units (a term I made up so it's easier to understand).
So, it seems like we just have to get the height of the object, divide it by 2 and set the Size of the camera to the result, right? (1)
Well, not really. You see, while it might work in some cases, when the object is much wider relative to its height than the screen is, the camera won't be able to see all of the object.
But why is that? Let's say our camera has a width/height ratio of 16/9, and our object is 100/18. If we scale using the height, our camera's view would be 32/18, and while that's enough to cover the height, it isn't enough to cover the width. So, another approach is to calculate using the width:
by taking the width of the object, dividing it by the width of the camera (in pixels) and then multiplying by the height of the camera (then, of course, dividing by 2), we are able to fit the whole width of the object, because the ratios match. (2)
BUT AGAIN, this has the same problem as our first approach, just with the object being too tall instead of too wide.
So, to solve this, we just add a check to the first approach (see (1)): if the object would overflow, we use the second approach instead (see (2)). And that's it.
And here's the code btw:
// replace the `cam` variable with your camera
float w = <the_width_of_object>;
float h = <the_height_of_object>;

// center the camera on the object
float x = w * 0.5f - 0.5f;
float y = h * 0.5f - 0.5f;
cam.transform.position = new Vector3(x, y, -10f);

// fit by width (w / aspect) when the object is too wide for the camera's
// aspect ratio, otherwise fit by height; orthographicSize is half the height
cam.orthographicSize = ((w > h * cam.aspect) ? w / cam.aspect : h) / 2f;

// to add padding, just add a number to the result of the
// `orthographicSize` calculation, like this:
// cam.orthographicSize = ... / 2f + 1f;
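The ternary in that snippet boils down to size = max(h, w / aspect) / 2. A quick standalone check of the fitting logic (Python sketch, using the 16:9 camera and 100x18 object from the explanation above):

```python
def fit_orthographic_size(obj_w, obj_h, aspect, padding=0.0):
    """Half-height the camera needs so a centered obj_w x obj_h object is fully visible."""
    needed_height = max(obj_h, obj_w / aspect)  # fit by width when the object is too wide
    return needed_height / 2.0 + padding

# 16:9 camera, 100 x 18 object: fitting by height alone (size 9) would only show
# 2 * 9 * 16/9 = 32 world units of width, so we must fit by width instead.
print(fit_orthographic_size(100, 18, 16 / 9))  # 28.125

# A 16 x 18 object, on the other hand, is fitted by height: size 9.
print(fit_orthographic_size(16, 18, 16 / 9))  # 9.0
```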

How to position objects correctly in different resolutions in Unity?

I'm writing an Android game, and as I want it to be played in portrait mode, I want the scale of the objects to remain the same relative to the screen width. I think I managed to do that with this code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Android;
public class PowerPosition : MonoBehaviour
{
    void Start()
    {
        float w = Screen.width;
        float h = Screen.height;
        Vector2 position = Camera.main.ScreenToWorldPoint(new Vector2(w * 0.9f, w * 0.53f));
        float ratio = (w / h) / (9f / 16f);
        gameObject.transform.position = new Vector3(position.x * ratio, position.y * ratio, 0);
        gameObject.transform.localScale = new Vector3(gameObject.transform.localScale.x * ratio, gameObject.transform.localScale.y * ratio, gameObject.transform.localScale.z * ratio);
    }
}
The issue is with the positioning. It changes, and objects that should keep relative distances from each other get messed up: objects that should appear one above the other become too close and overlap.
The "position * ratio" part was a test; it doesn't work well either with or without it. Here, for example, I had two items that I wanted to keep at a consistent distance from the lower-right corner of the screen.
How can you fix that?
You can use percentage-based calculation relative to the screen as the basis for scaling. That gives you relational scalability.
There is a false friend in your reasoning: displays have different dots per inch, so that value should be included in your calculation.
Let me try to explain it differently.
A device has a screen resolution (X axis and Y axis), a color depth, and a ppi/dpi value.
You can buy a 1920x1080 display at 6" or at 32". Large displays have a lower pixel density than small displays of the same resolution.
A device itself only swaps X and Y if it has an orientation sensor:
landscape 1920x1080 becomes portrait 1080x1920.
If I take an iPhone, its display has a higher pixel density than cheaper Android devices.
In mathematics there is the well-known "rule of three", which is suited to calculating proportions.
You can query the pixel density value from the system and use it to rescale your buttons.
I mostly work with horizontal centering; that way I keep the symmetry.
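As a concrete example of that rule of three applied to pixel density (Python sketch; the 160 dpi reference and the button sizes are made-up numbers):

```python
def scale_for_dpi(size_px_at_ref, ref_dpi, device_dpi):
    """Rule of three: size / ref_dpi = scaled / device_dpi."""
    return size_px_at_ref * device_dpi / ref_dpi

# A 96 px button designed for a 160 dpi screen should be 192 px on a
# 320 dpi screen to keep the same physical size on the glass.
print(scale_for_dpi(96, 160, 320))  # 192.0
```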

Calculate pixel position between two reference points

I'm actually facing a problem I can't solve myself. So I'm asking you guys for help. Hope somebody can help me.
The Problem:
My task is to graphically display measured values. I have two reference points. I created a sketch which might explain the problem better:
As you can see in the picture above, the two lines (0.20 and 0.05) are my reference points. As you know, the canvas' coordinate system is inverted, so the point (0|0) is in the upper left corner.
What I need is one (or maybe more) formula(s) to calculate the pixel position of, e.g., the point 0.13. I made many attempts to set up a formula myself, with no luck. The points drawn in the image are variable; the height and the reference points are pretty much static.
Thanks for your help in advance!
Given that yMin and yMax are the lower and upper bounds of the visible range of your measurement values (which might be -0.05 and 0.3 in the picture's graph), you would calculate the y value of a position relative to the Canvas origin like this:
var y = 0.13;
var canvasY = canvas.ActualHeight * (1.0 - (y - yMin) / (yMax - yMin));
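Worked through with the numbers from the question (Python sketch; the 350 px canvas height is an assumed value):

```python
def canvas_y(y, y_min, y_max, canvas_height):
    """Map a measurement y to a pixel row, with the canvas y axis pointing down."""
    return canvas_height * (1.0 - (y - y_min) / (y_max - y_min))

# With a visible range of -0.05 .. 0.30 on a 350 px tall canvas:
print(canvas_y(0.13, -0.05, 0.30, 350.0))   # ~170, measured from the top
print(canvas_y(0.30, -0.05, 0.30, 350.0))   # 0.0   -> top edge
print(canvas_y(-0.05, -0.05, 0.30, 350.0))  # 350.0 -> bottom edge
```

The `1.0 - ...` flips the axis: the largest measurement lands at pixel row 0 (the top), matching the inverted canvas coordinate system.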

Check if a point is in a rotated rectangle (C#)

I have a program in C# (Windows Forms) which draws some rectangles on a picturebox. They can be drawn at an angle too (rotated).
I know each of the rectangles' start point (upper-left corner), their size (width + height) and their angle. Because of the rotation, the start point is not necessarily the upper-left corner, but that does not matter here.
Then when I click the picturebox, I need to check in which rectangle (if any) I have clicked.
So I need some way of checking if a point is in a rectangle, but I also need to take into account the rotation of each rectangle.
Does anybody know of a way to do this in C#?
Is it possible to apply the same rotation applied to the rectangle to the point, in reverse?
For example, if Rectangle A is rotated 45 degrees clockwise about its origin (upper-left corner), you would rotate point B around the same origin 45 degrees COUNTER-clockwise, then check whether it falls within Rectangle A pre-rotation.
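That inverse-rotation idea, written out as a framework-neutral sketch (Python; assumes the rectangle is rotated about its upper-left corner, matching the setup in the question):

```python
import math

def rotate(px, py, ox, oy, angle):
    """Rotate point (px, py) about (ox, oy) by `angle` radians."""
    dx, dy = px - ox, py - oy
    c, s = math.cos(angle), math.sin(angle)
    return (ox + dx * c - dy * s, oy + dx * s + dy * c)

def point_in_rotated_rect(px, py, rx, ry, rw, rh, angle):
    """Un-rotate the point about the rectangle's corner (rx, ry), then do a plain AABB test."""
    ux, uy = rotate(px, py, rx, ry, -angle)
    return rx <= ux <= rx + rw and ry <= uy <= ry + rh

# A 10 x 4 rectangle at (3, 2), rotated 45 degrees about its corner. A point that sat
# at the rectangle's center before rotation moves to the rotated center -- still a hit.
cx, cy = rotate(3 + 5, 2 + 2, 3, 2, math.radians(45))
print(point_in_rotated_rect(cx, cy, 3, 2, 10, 4, math.radians(45)))   # True
print(point_in_rotated_rect(100, 100, 3, 2, 10, 4, math.radians(45))) # False
```

Because we rotate the click point instead of the rectangle, the containment test stays a cheap axis-aligned comparison.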
You could keep a second, undisplayed image where you draw duplicates of the rectangles, each uniquely colored. When the user clicks on the picturebox, find the color of the corresponding pixel in the 2nd image, which will identify which rectangle was clicked.
Edit: After looking back, I realize I'm using MonoGame while the OP is using Windows Forms. The following is for MonoGame.
I've been messing with this for a while now and have found a couple of answers, just none of them actually worked. Here is a C# function that does exactly what the OP describes; if not for the OP, then for other people Googling like I was.
It was a headache to figure this out. A lot of the typical guesswork.
bool PointIsInRotatedRectangle(Vector2 P, Rectangle rect, float rotation)
{
    // Un-rotate the point about the rectangle's top-left corner,
    // then do a plain axis-aligned Contains test.
    Matrix rotMat = Matrix.CreateRotationZ(-rotation);
    Vector2 localPoint = P - rect.Location.ToVector2();
    localPoint = Vector2.Transform(localPoint, rotMat);
    localPoint += rect.Location.ToVector2();
    return rect.Contains(localPoint);
}
And here it is in a single line of code. Probably faster to use.
bool PointIsInRotatedRectangle(Vector2 P, Rectangle rect, float rotation)
{
return rect.Contains(Vector2.Transform(P - (rect.Location).ToVector2(), Matrix.CreateRotationZ(-rotation)) + (rect.Location).ToVector2());
}
I know this was already answered but I had to do something similar a while ago. I created an extension method for the System.Windows.Point class that helped do exactly what Neil suggested:
public static double GetAngle(this Point pt)
{
    // angle of the point around (0, 0), in degrees, clockwise from straight up
    return Math.Atan2(pt.X, -pt.Y) * 180 / Math.PI;
}
public static Point SetAngle(this Point pt, double angle)
{
var rads = angle * (Math.PI / 180);
var dist = Math.Sqrt(pt.X * pt.X + pt.Y * pt.Y);
pt.X = Math.Sin(rads) * dist;
pt.Y = -(Math.Cos(rads) * dist);
return pt;
}
This allowed me to work with the angles of points around (0, 0). So if you know the center of the rect that you are testing, you would offset the point by the negative of this value (for example: pt.X -= 32; pt.Y -= 32), and then apply the negative of the rectangle's rotation (as suggested by Neil: pt = pt.SetAngle(-45);)...
Now if the point is within (64, 64) you know you hit the rectangle. More specifically, I was checking a pixel of a rotated image to make sure I hit a pixel of a specific color.
Would the rectangles be allowed to overlap?
If so, would you want all the rectangles in a point, or just the one in the top layer?
If you know the coordinates of the corners of the rectangle, this is a fast, elegant solution that involves only a couple of dot and scalar products: https://math.stackexchange.com/a/190373/178768
Treat the rectangle's edges as a list of vectors, each linking a corner to the next, with the corners sorted clockwise. If the point is inside the rectangle, it must lie to the right of all of the edge vectors.
This can be solved with vector products, and it boils down to the following.
Iterate over the rectangle's corners:
the point to be checked is P = [px, py]
the current corner is C = [cx, cy] and the next corner is N = [nx, ny]
if px*ny + cx*py + nx*cy < py*nx + cy*px + ny*cx, the point is outside the rectangle.
This actually works for every convex polygon.
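The same test written with explicit cross products, in a winding-agnostic form (Python sketch): a point is inside a convex polygon exactly when the cross product of each edge with the vector from the edge's start to the point has a consistent sign.

```python
def point_in_convex_polygon(px, py, corners):
    """corners: list of (x, y) vertices in consistent order (clockwise or counter-clockwise)."""
    sign = 0
    n = len(corners)
    for i in range(n):
        cx, cy = corners[i]
        nx, ny = corners[(i + 1) % n]
        # z component of (N - C) x (P - C): tells which side of edge C->N the point is on
        cross = (nx - cx) * (py - cy) - (ny - cy) * (px - cx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # point switched sides -> outside
    return True

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_convex_polygon(2, 2, square))  # True
print(point_in_convex_polygon(5, 2, square))  # False
```

Checking for a consistent sign rather than a fixed inequality means you don't have to care whether the corners were sorted clockwise or counter-clockwise.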
