Problem with RenderTarget and Transformation Matrix in MonoGame - c#

I've been trying to find a good solution for handling different resolutions, but nothing has worked well: either the sprites get distorted, everything gets offset, or a variety of other shenanigans occur.
The best solution I found uses a RenderTarget and a transformation matrix to scale everything according to the resolution. However, when the aspect ratio is not the same as the virtual resolution's, things get offset on the Y axis (see the gif of it happening). Here's the Draw code:
GraphicsDevice.SetRenderTarget(RenderTarget);

var scaleX = (float)ScreenWidths[CurrentResolution] / 1920;
var scaleY = (float)ScreenHeights[CurrentResolution] / 1080;
var matrix = Matrix.CreateScale(scaleX, scaleX, 1.0f);

spriteBatch.Begin(transformMatrix: matrix);
GraphicsDevice.Clear(BackgroundColor);
foreach (var uiElement in UIElements)
{
    uiElement.Draw(gameTime, spriteBatch);
}
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    SamplerState.LinearClamp, DepthStencilState.Default,
    RasterizerState.CullNone);
var offsetX = ScreenWidths[CurrentResolution] / 2 - 1920 / 2 * scaleX;
var offsetY = ScreenHeights[CurrentResolution] / 2 - 1080 / 2 * scaleY;
spriteBatch.Draw(RenderTarget, new Rectangle((int)offsetX, (int)offsetY, 1920, 1080), Color.White);
spriteBatch.End();

var mouseState = Mouse.GetState();
MousePosition = Vector2.Transform(new Vector2(mouseState.X, mouseState.Y), Matrix.Invert(matrix));

base.Draw(gameTime);
This is in Initialize:
ScreenWidths = new int[] { 1920, 960, 1366, 1280, 1280, 1366 };
ScreenHeights = new int[] { 1080, 540, 768, 1024, 720, 680 };
RenderTarget = new RenderTarget2D(
    GraphicsDevice,
    GraphicsDevice.PresentationParameters.BackBufferWidth,
    GraphicsDevice.PresentationParameters.BackBufferHeight,
    false,
    GraphicsDevice.PresentationParameters.BackBufferFormat,
    DepthFormat.Depth24);
And this is the code for the button:
if (Main.CurrentResolution >= 0 && Main.CurrentResolution < Main.ScreenWidths.Length - 1)
{
    Main.CurrentResolution++;
    Main.graphics.PreferredBackBufferWidth = Main.ScreenWidths[Main.CurrentResolution];
    Main.graphics.PreferredBackBufferHeight = Main.ScreenHeights[Main.CurrentResolution];
    Main.graphics.ApplyChanges();
}
How would I fix this offset on the Y axis? Or even what would be a better way to go about different resolutions?

In this example, you can see how to:
put a point in relative coordinates, placed at the same relative position on screen
put a point in absolute coordinates, placed at the same relative position on screen
Things are then correctly positioned in a relative manner no matter the aspect ratio difference.
The next thing you want is a uniform scale; here it's the minimum of X/Y, but you can also force it to be on a specific axis.
Also, the matrix you want will likely be: scale, then rotate, then translate.
You may want to adjust all of this to what you're really looking for.
Result:
resolution: <960, 540>, pointRelative: <480.000, 270.000>, pointAbsolute: <50.000, 50.000>, scaleAbsolute: <0.500, 0.500>, scaleUniform: 0.500
resolution: <960, 680>, pointRelative: <480.000, 340.000>, pointAbsolute: <50.000, 62.963>, scaleAbsolute: <0.500, 0.630>, scaleUniform: 0.500
resolution: <960, 720>, pointRelative: <480.000, 360.000>, pointAbsolute: <50.000, 66.667>, scaleAbsolute: <0.500, 0.667>, scaleUniform: 0.500
resolution: <960, 768>, pointRelative: <480.000, 384.000>, pointAbsolute: <50.000, 71.111>, scaleAbsolute: <0.500, 0.711>, scaleUniform: 0.500
resolution: <960, 1024>, pointRelative: <480.000, 512.000>, pointAbsolute: <50.000, 94.815>, scaleAbsolute: <0.500, 0.948>, scaleUniform: 0.500
resolution: <960, 1080>, pointRelative: <480.000, 540.000>, pointAbsolute: <50.000, 100.000>, scaleAbsolute: <0.500, 1.000>, scaleUniform: 0.500
resolution: <1280, 540>, pointRelative: <640.000, 270.000>, pointAbsolute: <66.667, 50.000>, scaleAbsolute: <0.667, 0.500>, scaleUniform: 0.500
resolution: <1280, 680>, pointRelative: <640.000, 340.000>, pointAbsolute: <66.667, 62.963>, scaleAbsolute: <0.667, 0.630>, scaleUniform: 0.630
resolution: <1280, 720>, pointRelative: <640.000, 360.000>, pointAbsolute: <66.667, 66.667>, scaleAbsolute: <0.667, 0.667>, scaleUniform: 0.667
resolution: <1280, 768>, pointRelative: <640.000, 384.000>, pointAbsolute: <66.667, 71.111>, scaleAbsolute: <0.667, 0.711>, scaleUniform: 0.667
resolution: <1280, 1024>, pointRelative: <640.000, 512.000>, pointAbsolute: <66.667, 94.815>, scaleAbsolute: <0.667, 0.948>, scaleUniform: 0.667
resolution: <1280, 1080>, pointRelative: <640.000, 540.000>, pointAbsolute: <66.667, 100.000>, scaleAbsolute: <0.667, 1.000>, scaleUniform: 0.667
resolution: <1366, 540>, pointRelative: <683.000, 270.000>, pointAbsolute: <71.146, 50.000>, scaleAbsolute: <0.711, 0.500>, scaleUniform: 0.500
resolution: <1366, 680>, pointRelative: <683.000, 340.000>, pointAbsolute: <71.146, 62.963>, scaleAbsolute: <0.711, 0.630>, scaleUniform: 0.630
resolution: <1366, 720>, pointRelative: <683.000, 360.000>, pointAbsolute: <71.146, 66.667>, scaleAbsolute: <0.711, 0.667>, scaleUniform: 0.667
resolution: <1366, 768>, pointRelative: <683.000, 384.000>, pointAbsolute: <71.146, 71.111>, scaleAbsolute: <0.711, 0.711>, scaleUniform: 0.711
resolution: <1366, 1024>, pointRelative: <683.000, 512.000>, pointAbsolute: <71.146, 94.815>, scaleAbsolute: <0.711, 0.948>, scaleUniform: 0.711
resolution: <1366, 1080>, pointRelative: <683.000, 540.000>, pointAbsolute: <71.146, 100.000>, scaleAbsolute: <0.711, 1.000>, scaleUniform: 0.711
resolution: <1920, 540>, pointRelative: <960.000, 270.000>, pointAbsolute: <100.000, 50.000>, scaleAbsolute: <1.000, 0.500>, scaleUniform: 0.500
resolution: <1920, 680>, pointRelative: <960.000, 340.000>, pointAbsolute: <100.000, 62.963>, scaleAbsolute: <1.000, 0.630>, scaleUniform: 0.630
resolution: <1920, 720>, pointRelative: <960.000, 360.000>, pointAbsolute: <100.000, 66.667>, scaleAbsolute: <1.000, 0.667>, scaleUniform: 0.667
resolution: <1920, 768>, pointRelative: <960.000, 384.000>, pointAbsolute: <100.000, 71.111>, scaleAbsolute: <1.000, 0.711>, scaleUniform: 0.711
resolution: <1920, 1024>, pointRelative: <960.000, 512.000>, pointAbsolute: <100.000, 94.815>, scaleAbsolute: <1.000, 0.948>, scaleUniform: 0.948
resolution: <1920, 1080>, pointRelative: <960.000, 540.000>, pointAbsolute: <100.000, 100.000>, scaleAbsolute: <1.000, 1.000>, scaleUniform: 1.000
Code:
public void Test()
{
    // Requires: using System; using System.Linq;
    // and a Vector2 type (e.g. Microsoft.Xna.Framework or System.Numerics).
    var sx = new[] { 1920, 1366, 1280, 960 };
    var sy = new[] { 1080, 1024, 768, 720, 680, 540 };
    var pt1 = new Vector2(0.5f, 0.5f);
    var pt2 = new Vector2(100, 100);
    foreach (var w in sx.Reverse())
    {
        foreach (var h in sy.Reverse())
        {
            var scaleX = w / 1920.0f;
            var scaleY = h / 1080.0f;
            var resolution = new Vector2(w, h);
            var scaleUniform = Math.Min(scaleX, scaleY);
            var scaleAbsolute = new Vector2(scaleX, scaleY);
            var pointRelative = pt1 * resolution;
            var pointAbsolute = pt2 * scaleAbsolute;
            Console.WriteLine(
                $"{nameof(resolution)}: {resolution,12}, " +
                $"{nameof(pointRelative)}: {pointRelative,20:F3}, " +
                $"{nameof(pointAbsolute)}: {pointAbsolute,20:F3}, " +
                $"{nameof(scaleAbsolute)}: {scaleAbsolute:F3}, " +
                $"{nameof(scaleUniform)}: {scaleUniform:F3}"
            );
        }
    }
}

Without clipping or stretching, a single image will be distorted when drawn.
You need to define your virtual screen sources in terms of aspect-ratio groups and size groups instead of resolutions.
Make source assets for the major aspect ratios: 5:4, 16:9(10), 32:9(10). If designing for mobile (or possible PC screens) in portrait mode, the inverses as well. The reason for the (10) is that screens vary from 9 to 11, so targeting 10 means less distortion for the majority of screens.
A single set of resources will alias if the source is too large, and pixelate if too small. (Mipmaps solve this problem in 3D, but would be wasteful for discrete resolutions.)
So the size groups need two sizes, small and medium; large would be applicable only for native 4K and 8K resolutions (both are greater than human perception at a reasonable distance). The eye has a limited ~5K focus zone across a small region.
Since halving and doubling resolutions produce minimal artefacts, I propose making the smallest dimension size 800 and the medium 1600.
Not all size/aspect-ratio sources need to fit this recommendation.
I would suggest calculating, and storing in class-level variables, the aspect-ratio and size selection in Initialize and in the Resize event.
These variables determine the source assets for the first render pass. The final draw scales, with minimal distortion, to fit and fill the screen (10 vs 9 or 11).
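A minimal sketch of that selection step. The group names, the candidate ratios, and the 1200-pixel cutover between the 800 and 1600 source sizes are all illustrative assumptions, not something the answer above prescribes:

```csharp
using System;

public static class AssetGroups
{
    // Candidate aspect-ratio groups (width / height); labels are hypothetical.
    static readonly (string Name, float Ratio)[] Aspects =
    {
        ("5:4", 5f / 4f),
        ("16:10", 16f / 10f),
        ("32:10", 32f / 10f),
    };

    // Pick the aspect group whose ratio is closest to the actual screen's.
    public static string PickAspectGroup(int width, int height)
    {
        float actual = (float)width / height;
        string best = Aspects[0].Name;
        float bestDiff = float.MaxValue;
        foreach (var (name, ratio) in Aspects)
        {
            float diff = Math.Abs(actual - ratio);
            if (diff < bestDiff) { bestDiff = diff; best = name; }
        }
        return best;
    }

    // Pick the source size from the smallest screen dimension,
    // cutting over halfway between the 800 and 1600 asset sizes.
    public static int PickSizeGroup(int width, int height) =>
        Math.Min(width, height) <= 1200 ? 800 : 1600;
}
```

Run once in Initialize and again in the Resize handler, and store the results in class-level variables as suggested above.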

How image px convert to word(docx) pic - ext - cx and cy?
e.g. the image below is 400 x 400 px; how can it be converted to the word (docx) pic ext cx and cy values?
word xml demo code
<pic:spPr>
<a:xfrm>
<a:off x="0" y="0" />
<a:ext cx="????" cy="?????" />
</a:xfrm>
<a:prstGeom prst="rect">
<a:avLst />
</a:prstGeom>
</pic:spPr>
The ext values (cx, cy) are in EMUs (English Metric Units): 914,400 EMUs per inch, which is 12,700 per point and 9,525 per pixel at 96 DPI. So depending on what your number measures:
(yourNumber is 400 points) x 12700 = 5080000
(yourNumber is 400 pixels) x 9525 = 3810000
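As a small sketch of that conversion (the 9,525 factor assumes a 96 DPI image, so the DPI is exposed as a parameter; the class name is illustrative):

```csharp
using System;

public static class Emu
{
    public const int PerInch = 914400;    // EMUs per inch
    public const int PerPoint = 12700;    // 914400 / 72
    public const int PerPixel96 = 9525;   // 914400 / 96

    // Convert a pixel length (at the given DPI) to EMUs for <a:ext cx/cy>.
    public static long FromPixels(int pixels, double dpi = 96.0) =>
        (long)Math.Round(pixels * 914400.0 / dpi);
}
```

For the 400 px example above, Emu.FromPixels(400) gives 3810000, which is what goes into cx and cy.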

Why is there no Matrix3x3 in the System.Numerics namespace? c#

I am trying to make a little GUI library in C#, but after doing some research on matrix transformations I found out that I need a Matrix3x3 to store the rotation, scale, and translation of a Vector2. But in System.Numerics there is only a Matrix3x2 or a Matrix4x4. Could I use one of those instead? If so, how would I go about it? And why isn't there a Matrix3x3 in the standard library?
I am very new to matrix and vector programming, so sorry if this is a stupid question.
Thanks in advance.
You can use Matrix4x4. Start with an identity matrix and fill the top-left 3×3 elements with your matrix.
For example, to solve a 3×3 system of equations:
// Find the barycentric coordinates
//
// Solve for (wA,wB,wC):
//   | px |   | ax bx cx |   | wA |
//   | py | = | ay by cy | * | wB |
//   | pz |   | az bz cz |   | wC |
var m = new Matrix4x4(
    A.X, B.X, C.X, 0,
    A.Y, B.Y, C.Y, 0,
    A.Z, B.Z, C.Z, 0,
    0, 0, 0, 1);
if (Matrix4x4.Invert(m, out Matrix4x4 u))
{
    var w = new Vector3(
        u.M11*P.X + u.M12*P.Y + u.M13*P.Z,
        u.M21*P.X + u.M22*P.Y + u.M23*P.Z,
        u.M31*P.X + u.M32*P.Y + u.M33*P.Z);
    // ...
}
As for the reasons: the intent of System.Numerics is clearly computer graphics, since it uses homogeneous coordinates, in which 3D vectors contain 4 elements: three regular coordinates and a scalar weight factor. The math with homogeneous coordinates for computer graphics is vastly simplified. The only reason there is a Vector3 is that a Vector4 should be treated as a vector of 3 elements plus a scalar, and thus Vector3 should be used in composing and decomposing homogeneous coordinates. It means that not all 4 elements can be treated equally, and sometimes you need to work with the vector (first three elements) separately from the scalar (fourth element).
Also, System.Numerics uses single-precision float elements, which are almost never used in scientific computing but are universally applied in computer graphics for their speed. Hopefully one day, when the CLR supports AVX-512, there will be double-precision numeric intrinsics that scientists can actually use.

Convert Resolutions depending on which Resolution the user is running

I'm making a program that automates fast inputs for a game. Right now my tool only works at 1920x1080, and I want to get it going for multiple resolutions. This is what I have right now for 1920x1080:
SetCursorPos(105, 640);
System.Threading.Thread.Sleep(30);
sim.Mouse.LeftButtonClick();
System.Threading.Thread.Sleep(30);
SetCursorPos(274, 547);
System.Threading.Thread.Sleep(30);
sim.Mouse.LeftButtonClick();
System.Threading.Thread.Sleep(1560);
sim.Keyboard.KeyPress(VirtualKeyCode.VK_T);
System.Threading.Thread.Sleep(50);
SetCursorPos(274, 547);
sim.Mouse.LeftButtonClick();
System.Threading.Thread.Sleep(1610);
SetCursorPos(274, 547);
sim.Mouse.LeftButtonClick();
System.Threading.Thread.Sleep(1610);
SetCursorPos(274, 547);
sim.Mouse.LeftButtonClick();
SetCursorPos(960, 540);
I kinda want the program to detect the actual screen resolution and convert the pixel locations from 1920x1080 to the needed locations.
Theoretically...store your (x, y) coordinates as decimal numbers representing the "percentage" of the original resolution you developed them in.
For instance, your first point is (105, 640). As a "percentage point", divide the x-coordinate by 1920, and the y-coordinate by 1080 to get (0.0546875, 0.5925925925925926). This can be stored using the PointF structure.
Now you can use those decimal percentage numbers to get the desired equivalent point in any resolution by simply multiplying them by the width/height of the screen.
You can get the current screen resolution using Screen.PrimaryScreen.Bounds:
Rectangle rc = Screen.PrimaryScreen.Bounds;
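A minimal sketch of that idea, hardcoding the question's 1920x1080 design resolution (the class and method names are illustrative):

```csharp
using System.Drawing;

public static class CursorScaling
{
    // Convert an absolute 1920x1080 design-time point into a
    // resolution-independent "percentage point".
    public static PointF ToRelative(Point p) =>
        new PointF(p.X / 1920f, p.Y / 1080f);

    // Map a relative point back to absolute pixels for the given screen size.
    public static Point ToAbsolute(PointF rel, int width, int height) =>
        new Point((int)(rel.X * width), (int)(rel.Y * height));
}
```

For example, the question's (105, 640) becomes roughly (0.0547, 0.5926) relative, which maps to (70, 426) on a 1280x720 screen.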
What you need is scaling.
You've coded for a fixed resolution of 1920x1080, i.e. 1920 pixels width and 1080 pixels height.
If you need to scale this, you can get the current screen resolution and then calculate the ratio.
Let's say the resolution is 640x480. Then you'd calculate the X (or width) ratio as:
640 / 1920 = 0.3333...
and the Y (or height) ratio as:
480 / 1080 = 0.4444...
To scale, you now multiply each coordinate by the respective ratio:
SetCursorPos(105 * 0.3333, 640 * 0.4444)
In code it would look something like this (note the floating-point division; integer division would truncate the ratios to 0):
double currentWidth = SystemParameters.PrimaryScreenWidth;
double currentHeight = SystemParameters.PrimaryScreenHeight;
var xScale = currentWidth / 1920.0;
var yScale = currentHeight / 1080.0;
SetCursorPos((int)(105 * xScale), (int)(640 * yScale));

Decompose 2D Transformation Matrix

So, I have a Direct2D Matrix3x2F that I use to store transformations on geometries. I want these transformations to be user-editable, and I don't want the user to have to edit a matrix directly. Is it possible to decompose a 3x2 matrix into scaling, rotation, skewing, and translation?
This is the solution I found for a Direct2D transformation matrix:
scale x       = sqrt(M11 * M11 + M12 * M12)
rotation      = atan2(M12, M11)
shear (y)     = atan2(M22, M21) - PI/2 - rotation
scale y       = sqrt(M21 * M21 + M22 * M22) * cos(shear)
translation x = M31
translation y = M32
If you multiply these values back together in the order scale(x, y) * skew(0, shear) * rotate(angle) * translate(x, y) you will get a matrix that performs an equivalent transformation.
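As a sketch (not from the answer itself), here are those formulas applied to a System.Numerics.Matrix3x2, which shares Direct2D's 3x2 row layout, together with a recomposition built directly from the recovered axis angles and lengths rather than from a matrix product:

```csharp
using System;
using System.Numerics;

public static class Decomposer
{
    // Decompose a 3x2 (Direct2D-style) matrix into scale, rotation,
    // shear and translation using the formulas above.
    public static (Vector2 Scale, float Rotation, float Shear, Vector2 Translation)
        Decompose(Matrix3x2 m)
    {
        float rotation = MathF.Atan2(m.M12, m.M11);
        float shear = MathF.Atan2(m.M22, m.M21) - MathF.PI / 2f - rotation;
        float scaleX = MathF.Sqrt(m.M11 * m.M11 + m.M12 * m.M12);
        float scaleY = MathF.Sqrt(m.M21 * m.M21 + m.M22 * m.M22) * MathF.Cos(shear);
        return (new Vector2(scaleX, scaleY), rotation, shear, new Vector2(m.M31, m.M32));
    }

    // Rebuild an equivalent matrix: the X axis points along 'rotation',
    // the Y axis along rotation + 90 degrees + shear.
    public static Matrix3x2 Recompose(Vector2 scale, float rotation, float shear, Vector2 translation)
    {
        float yAngle = rotation + MathF.PI / 2f + shear;
        float yLen = scale.Y / MathF.Cos(shear);
        return new Matrix3x2(
            scale.X * MathF.Cos(rotation), scale.X * MathF.Sin(rotation),
            yLen * MathF.Cos(yAngle), yLen * MathF.Sin(yAngle),
            translation.X, translation.Y);
    }
}
```

A round trip (decompose, then recompose) reproduces the original matrix for non-degenerate transforms, which is an easy way to sanity-check the formulas.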
Decomposition
Yes, you can (at least partially). A 3x2 transform matrix represents a 2D homogeneous 3x3 transform matrix without projections. Such a transform matrix is either OpenGL style:
| Xx Yx Ox |
| Xy Yy Oy |
or DirectX style:
| Xx Xy |
| Yx Yy |
| Ox Oy |
As you tagged Direct2D and are using a 3x2 matrix, the second is the one you've got. There are 3 vectors:
X = (Xx, Xy) ... X axis vector
Y = (Yx, Yy) ... Y axis vector
O = (Ox, Oy) ... origin of the coordinate system
Now let's assume that there is no skew present and the matrix is orthogonal...
Scaling
is very simple: just obtain the lengths of the axis basis vectors.
scalex = sqrt( Xx^2 + Xy^2 );
scaley = sqrt( Yx^2 + Yy^2 );
If a scale coefficient is >1 the matrix scales up, and if <1 it scales down.
Rotation
You can use:
rotation_ang = atan2(Xy, Xx);
Translation
The offset is O, so if it is non-zero you've got a translation present.
Skew
In 2D, skew does not complicate things too much and the bullets above still apply (not the case in 3D). The skew angle is the angle between the axes minus 90 degrees, so:
skew_angle = acos((X.Y)/(|X|*|Y|)) - 0.5*PI;
skew_angle = acos((Xx*Yx + Xy*Yy)/sqrt(( Xx^2 + Xy^2 )*( Yx^2 + Yy^2 ))) - 0.5*PI;
Also beware: if your transform matrix does not represent your coordinate system but its inverse, then you need to invert your matrix before applying this...
So first compute the inverse of:
| Xx Xy 0 |
| Yx Yy 0 |
| Ox Oy 1 |
and apply the above to the result.
For more info about this topic see:
Understanding 4x4 homogeneous transform matrices
especially the difference between column-major and row-major order (OpenGL vs. DirectX notation).
Store the primary transformations in a class with editable properties
scaling
rotation
skewing
translation
and then build the final transform matrix from those. It will be easier that way. However, if you must, there are algorithms for decomposing a matrix. They are not as simple as you might think.
System.Numerics has a method for decomposing 3D transform matrices:
https://github.com/dotnet/corefx/blob/master/src/System.Numerics.Vectors/src/System/Numerics/Matrix4x4.cs#L1497
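For reference, that 3D routine is exposed as Matrix4x4.Decompose, which splits a matrix into a scale vector, a rotation quaternion, and a translation vector. A small round-trip sketch (the helper name is illustrative):

```csharp
using System;
using System.Numerics;

public static class Decompose3D
{
    // Build a scale-rotate-translate matrix, then recover its parts
    // with the built-in Matrix4x4.Decompose.
    public static bool TryRoundTrip(
        Vector3 scale, Quaternion rotation, Vector3 translation,
        out Vector3 s, out Quaternion r, out Vector3 t)
    {
        var m = Matrix4x4.CreateScale(scale)
              * Matrix4x4.CreateFromQuaternion(rotation)
              * Matrix4x4.CreateTranslation(translation);
        return Matrix4x4.Decompose(m, out s, out r, out t);
    }
}
```

Note that Decompose returns false for degenerate matrices (e.g. a zero scale), and the recovered quaternion may be the negation of the original, which represents the same rotation.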

Relationship between projected and unprojected Z-Values in Direct3D

I've been trying to figure this relationship out, but I can't; maybe I'm just not searching for the right thing. If I project a world-space coordinate to clip space using Vector3.Project, the X and Y coordinates make sense, but I can't figure out how it's computing the Z (0..1) coordinate. For instance, if my near plane is 1 and far plane is 1000 and I project a Vector3 of (0, 0, 500) (camera center, 50% of the distance to the far plane) to screen space, I get (1050, 500, .9994785).
The resulting X and Y coordinates make perfect sense, but I don't understand where it's getting the resulting Z value.
I need this because I'm actually trying to UNPROJECT screen-space coordinates, and I need to be able to pick a Z value that tells it how far from the camera I want the world-space coordinate to be, but I don't understand the relationship between clip-space Z (0-1) and world-space Z (nearplane-farplane).
In case this helps, my transformation matrices are:
World = Matrix.Identity;
// basically centered at 0,0,0 looking into the screen
View = Matrix.LookAtLH(
    new Vector3(0, 0, 0),  // camera position
    new Vector3(0, 0, 1),  // look target
    new Vector3(0, 1, 0)); // up vector
Projection = Matrix.PerspectiveFovLH(
    (float)(Math.PI / 4),  // FieldOfViewY
    1.6f,                  // AspectRatio
    1,                     // NearPlane
    1000);                 // FarPlane
Standard perspective projection creates a reciprocal relationship between the scene depth and the depth buffer value, not a linear one. This causes a higher percentage of buffer precision to be applied to objects closer to the near plane than those closer to the far plane, which is typically desired. As for the actual math, here's the breakdown:
The bottom-right 2x2 elements (corresponding to z and w) of the projection matrix are:
[ far / (far - near)          1 ]
[ -far * near / (far - near)  0 ]
This means that after multiplying, z' = z * far / (far - near) - far * near / (far - near) and w' = z. After this step, there is the perspective divide, z'' = z' / w'.
In your specific case, the math works out to the value you got:
z = 500
z' = z * 1000 / (1000 - 1) - 1000 * 1 / (1000 - 1) = 499.499499499...
w' = z = 500
z'' = z' / w' = 0.998998998...
To recover the original depth, simply reverse the operations:
z = (far * near / (far - near)) / ((far / (far - near)) - z'')
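A small sketch of both directions of that mapping (view-space depth to the 0..1 buffer value and back), using the question's near = 1 and far = 1000; the class and method names are illustrative:

```csharp
using System;

public static class DepthMapping
{
    // View-space z -> normalized depth-buffer value (0..1, D3D convention).
    public static double ProjectZ(double z, double near, double far)
    {
        double zPrime = z * far / (far - near) - far * near / (far - near);
        return zPrime / z; // perspective divide, since w' = z
    }

    // Normalized depth-buffer value -> view-space z (inverse of the above).
    public static double UnprojectZ(double zBuffer, double near, double far)
    {
        double a = far / (far - near);
        return far * near / (far - near) / (a - zBuffer);
    }
}
```

ProjectZ(500, 1, 1000) reproduces the answer's 0.998998..., and feeding that back through UnprojectZ recovers 500, which is exactly the inversion needed for picking a world-space distance when unprojecting.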
