XNA stretching and filling 2D and 3D content - c#

I've written out a detailed description of my problem here:
http://www.codebot.org/articles/?doc=9574
The basic gist of my question is: what is the best way to get XNA to behave like my OpenGL apps, where content is stretched to fill the window based on my designed proportions rather than the actual window size?
Further information: this problem relates to varying window or viewport size. In my previous OpenGL apps I would allow users to switch between windowed and fullscreen mode, and I'd also allow windows to be resized. The problem I am running into with XNA is handling different fullscreen and windowed sizes. In OpenGL I'd detect when the window was resized and adjust the viewport so that the field of view always matched a fixed aspect ratio. I would also set up a 2D projection with the glOrtho function so that 2D drawing used a fixed resolution.
In the XNA examples I've worked through using SpriteBatch and SpriteFont, text and sprites seem to render in screen pixels. That is, all 2D output is rendered with square pixels and no stretching. In my XNA apps I'd rather everything stretched to fill the window in the proportions I've designed. My question is: how can I best do this kind of 2D and 3D stretching and filling in XNA, as I did in OpenGL?

For 3D content using BasicEffect (and other effects that implement IEffectMatrices as explained here) you can use the appropriate members to set your World, View and Projection matrices as you like.
So where in your OpenGL code you have this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(FieldOfView, Width / Height, 1, 1000);
The equivalent in XNA is to set a projection matrix on the effect, like so:
effect.Projection = Matrix.CreatePerspectiveFieldOfView(
FieldOfView, Width / Height, 1, 1000);
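One thing to watch when porting that line (assuming FieldOfView, Width and Height are plain numeric fields, as in the question): gluPerspective takes the field of view in degrees, while CreatePerspectiveFieldOfView expects radians, and if Width and Height are integers the division truncates the aspect ratio. A safer sketch:
effect.Projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.ToRadians(FieldOfView), // XNA expects radians, not degrees
    (float)Width / Height,             // avoid integer division
    1f, 1000f);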
Now, for 2D. Here's what you might have in OpenGL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, height, 0, 0, 1);
If you're using an effect (even with SpriteBatch, as explained here), the basic idea is the same as with 3D:
effect.Projection = Matrix.CreateOrthographicOffCenter(
0, width, height, 0, 0, 1);
Now, if you're using SpriteBatch without a custom effect (and I would recommend this, if you don't actually need a custom effect), you have to bear in mind that, by default, SpriteBatch uses a projection equivalent to:
effect.Projection = Matrix.CreateTranslation(-0.5f, -0.5f, 0)
* Matrix.CreateOrthographicOffCenter(0,
GraphicsDevice.Viewport.Width,
GraphicsDevice.Viewport.Height, 0, 0, 1);
Which gives a "client space" (top left is (0,0)) coordinate system, aligned to pixel centres.
If you want to adjust that space, you may pass in a transformation matrix to SpriteBatch.Begin (this overload).
So to get the effect you are after (where a fixed number of world units appears on screen, no matter the screen size), you can counteract the built-in client-space projection with this transformation:
Matrix.CreateScale(GraphicsDevice.Viewport.Width / 640f,
GraphicsDevice.Viewport.Height / 480f, 1f);
(Assuming you want your visible world space to be 640 by 480.)
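Putting the two pieces together, the Begin call would look roughly like this (a sketch using the seven-argument XNA 4.0 overload, with the other states left at their defaults and spriteBatch standing in for your SpriteBatch instance):
Matrix scale = Matrix.CreateScale(GraphicsDevice.Viewport.Width / 640f,
    GraphicsDevice.Viewport.Height / 480f, 1f);

spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, scale);
// ... draw sprites using 640x480 "virtual" coordinates ...
spriteBatch.End();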
I recommend having a look through the documentation for XNA's Matrix on MSDN, to see what kind of matrices you can create.

For 2D drawing I added this to my LoadContent() method, where effect is a private field in my Game class ...
effect = new BasicEffect(GraphicsDevice)
{
    TextureEnabled = true,
    VertexColorEnabled = true
};
And then added this inside my Draw() method ...
effect.Projection = Matrix.CreateTranslation(-0.5f, -0.5f, 0) *
Matrix.CreateOrthographicOffCenter(0, 640, 480, 0, 0, 1);
batch.Begin(0, null, null, null, RasterizerState.CullNone, effect);
It seems to work fine. Now 2D images and fonts are scaled correctly when the window is resized. You recommended not using a custom effect. Is creating an instance of BasicEffect what you meant, or did you mean something else? That is, I don't see how to create a custom projection matrix without using an effect instance.

Related

Scale all sprites on screen based on resolution

I decided to move my game from windowed to fullscreen mode and that's the first problem I face. I'm looking for a way of resizing all of my sprites based on the screen resolution. My background is now at the (0, 0) coordinates, but I need it and all sprites to scale with some kind of fixed aspect ratio (16:9 preferred), and to resize by the same proportion that the background is stretched to fill the screen. Not more, not less.
I've looked into some online tutorials but I really couldn't understand the concept they used. Can you explain how you would do that? I read that using a RenderTarget2D and passing it to a spriteBatch.Begin() call has some kind of effect, but there has to be more code to it.
I'm not looking to support a resolution change option, just to adapt the sprites to the current resolution.
It sounds like you're talking about resolution independence.
The general idea is to make your game using a virtual resolution and scale it up or down to fit the actual resolution of the screen.
var scaleX = (float)ActualWidth / VirtualWidth;
var scaleY = (float)ActualHeight / VirtualHeight;
var matrix = Matrix.CreateScale(scaleX, scaleY, 1.0f);
_spriteBatch.Begin(transformMatrix: matrix);
For example, if your virtual resolution was 800x480 you would simply render all your sprites relative to that. Then before rendering the sprite batch, create a transformation matrix to pass into the Begin call.
The other thing you should know is that you'll need to scale the mouse / touch input coordinates in reverse to deal with them in the virtual resolution. In the Update method you can scale the mouse position in reverse like this:
var mouseState = Mouse.GetState(); // you're probably already doing this
var mousePosition = new Vector2(mouseState.X, mouseState.Y);
var scaledMousePosition = Vector2.Transform(mousePosition, Matrix.Invert(matrix));
Then you can use the scaled value in all the places you're currently using mouseState.X and mouseState.Y.
It gets more complicated if you want to implement letterboxing or pillarboxing. Take a look at the Viewport Adapters in MonoGame.Extended if you want to know how that works.
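For reference, a bare-bones letterbox/pillarbox sketch (not the MonoGame.Extended viewport adapter itself, just the underlying idea, reusing the hypothetical ActualWidth/VirtualWidth fields from above): pick the smaller of the two scale factors so the whole virtual area fits, then centre it with a viewport.
float scale = Math.Min((float)ActualWidth / VirtualWidth,
                       (float)ActualHeight / VirtualHeight);

int barX = (int)((ActualWidth - VirtualWidth * scale) / 2);   // pillarbox bars
int barY = (int)((ActualHeight - VirtualHeight * scale) / 2); // letterbox bars

GraphicsDevice.Viewport = new Viewport(barX, barY,
    (int)(VirtualWidth * scale), (int)(VirtualHeight * scale));

_spriteBatch.Begin(transformMatrix: Matrix.CreateScale(scale, scale, 1f));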
You have a texture with size (W, H), to be placed at position (X, Y), according to a scale (sW, sH). Initially the scale is (1, 1), so the sprite is positioned in the rectangle (X, Y, W, H).
Now, let's say the initial resolution was 800x600, but you now want a resolution of 1440x900. If 800 corresponds to sW = 1, then 1440 corresponds to sW = 1440/800 = 1.8. Similarly, the new sH is 900/600 = 1.5.
What this is saying is: if something was supposed to be at X-coordinate 500 in the initial resolution, it is now at X = 500 * 1.8 = 900 in the new resolution. This is clearest at the edge: if something was at X = 800 previously, it is now at 800 * 1.8 = 1440, still on the edge of the screen!
All said and done, we simply have to multiply. Going back to the first paragraph, we can say that a rectangle (X, Y, W, H) can be rescaled by a scale (sW, sH) to (X * sW, Y * sH, W * sW, H * sH).
This is of course calculated by assuming the original resolution is scaled by (1, 1), don't forget this!
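In code the whole argument boils down to one multiplication per component. A quick sketch using the 800x600 to 1440x900 example above:
Vector2 scale = new Vector2(1440f / 800f, 900f / 600f); // (1.8, 1.5)
Rectangle original = new Rectangle(500, 100, 64, 64);
Rectangle rescaled = new Rectangle(
    (int)(original.X * scale.X), (int)(original.Y * scale.Y),
    (int)(original.Width * scale.X), (int)(original.Height * scale.Y));
// rescaled is (900, 150, 115, 96) -- the X = 500 example lands on 900 as described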

Scaling a Texture2D without Draw()

I am trying to scale a Texture2D without using the Draw() method. The reason being
I am not going to be drawing the Texture2D until I perform further manipulations. I would be saving the Texture2D as a field.
I don't know what kind of image manipulations you want to perform, but I highly recommend not scaling before you do those manipulations. If there is any way possible to do so, manipulate your image before you scale it. XNA has already taken care of all the dirty work of scaling for you.
If you want to perform specific pixel operations, Texture2D.GetData will work for you but only in small quantities. If you're doing this to hundreds of images, you'll slow down your game drastically. I highly recommend doing some post-processing effects using a customized Effect.
Edit: I just thought of a way to do this the way you want to do it. What you can do is draw your scaled texture to a RenderTarget2D object and then get the color data from it and manipulate the data however you'd like. An example below:
RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice,
    textureWidth, textureHeight, false,
    GraphicsDevice.PresentationParameters.BackBufferFormat, DepthFormat.Depth24);
GraphicsDevice.SetRenderTarget(renderTarget);
spriteBatch.Begin();
//scale and draw your texture here
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
This draws the texture to the render target which you can then draw later just like you would any other texture:
spriteBatch.Draw(renderTarget, new Rectangle(0, 0, renderTarget.Width, renderTarget.Height), Color.White);
You can use renderTarget.GetData to get color data just like you would with Texture2D and manipulate it to your liking.
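For that last step, a minimal sketch (assuming the render target uses a Color surface format, which is what BackBufferFormat normally is):
Color[] pixels = new Color[renderTarget.Width * renderTarget.Height];
renderTarget.GetData(pixels);
// ... manipulate the pixel data here ...

// If you need a plain texture afterwards, copy the data into one and keep it as a field
Texture2D scaledTexture = new Texture2D(GraphicsDevice, renderTarget.Width, renderTarget.Height);
scaledTexture.SetData(pixels);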
The first thing I can think of is to use Texture2D.GetData to store your texture in an array of Color, uint or whatever, and then perform your scale on that data.
This requires some basic computer graphics knowledge, and I don't think it's the best way to do it.
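If you do go the CPU route anyway, a rough nearest-neighbour version might look like this (a sketch; newWidth and newHeight are hypothetical target dimensions, and this gets slow if you do it for many or large textures):
Color[] src = new Color[texture.Width * texture.Height];
texture.GetData(src);

Color[] dst = new Color[newWidth * newHeight];
for (int y = 0; y < newHeight; y++)
{
    for (int x = 0; x < newWidth; x++)
    {
        int sx = x * texture.Width / newWidth;   // nearest source column
        int sy = y * texture.Height / newHeight; // nearest source row
        dst[y * newWidth + x] = src[sy * texture.Width + sx];
    }
}

Texture2D scaled = new Texture2D(GraphicsDevice, newWidth, newHeight);
scaled.SetData(dst);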

Scrolling texture in cycle

I'm using C# and XNA and I would like to make a scrolling background in my game.
I'm trying to figure out the best way to implement a scrolling texture that moves in some direction indefinitely. Let's say a space background with stars. So, when the ship moves, so does the texture, but in the opposite direction. Kind of like a "tiling" mode.
My only guess so far is to render two copies of the texture which are, let's say, moving left, and then make the leftmost one jump to the right when it moves out of view, or something similar to that.
So, I was wondering: is there some simple way to do it in XNA, maybe some render mode, or is the way I described it good enough? I just don't want to overcomplicate things. I obviously tried to Google first, but found pretty much nothing, which is strange considering that many games use similar techniques.
Theory
A scrolling background image is easy to implement with the XNA SpriteBatch class. There are several overloads of the Draw method which let the caller specify a source rectangle. This source rectangle defines the section of the texture that is drawn to the specified destination rectangle on screen:
Changing the position of the source rectangle will change the section of the texture displayed in the destination rectangle.
In order to have the sprite cover the whole screen use the following destination rectangle:
var destination = new Rectangle(0, 0, screenWidth, screenHeight);
If the whole texture should be displayed use the following source rectangle:
var source = new Rectangle(0, 0, textureWidth, textureHeight);
Then all you have to do is animate the source rectangle's X and Y coordinates and you are done.
Well, almost done. The texture should repeat when the source rectangle moves outside the texture area. To do that you have to set a SamplerState that uses texture wrapping. Fortunately the Begin method of the SpriteBatch allows the use of a custom SamplerState. You can use one of the following:
// Either one of the three is fine, the only difference is the filter quality
SamplerState sampler;
sampler = SamplerState.PointWrap;
sampler = SamplerState.LinearWrap;
sampler = SamplerState.AnisotropicWrap;
Example
// Begin drawing with the default states
// Except the SamplerState should be set to PointWrap, LinearWrap or AnisotropicWrap
spriteBatch.Begin(
    SpriteSortMode.Deferred,
    BlendState.Opaque,
    SamplerState.AnisotropicWrap, // Make the texture wrap
    DepthStencilState.Default,
    RasterizerState.CullCounterClockwise
);
// Rectangle over the whole game screen
var screenArea = new Rectangle(0, 0, 800, 600);
// Calculate the current offset into the texture
// For this example I use the game time
var offset = (int)gameTime.TotalGameTime.TotalMilliseconds;
// The offset increases over time, so the texture appears to move from the bottom to the top of the screen
var source = new Rectangle(0, offset, texture.Width, texture.Height);
// Draw the texture; screenArea is the destination, source is the (animated) source rectangle
spriteBatch.Draw(
    texture,
    screenArea,
    source,
    Color.White
);
spriteBatch.End();
Microsoft has an XNA tutorial that does exactly this; you can grab the source code and read up on the actual programming logic behind a scrolling background. Bonus points: they do parallax scrolling for a nice effect.
Link: http://xbox.create.msdn.com/en-US/education/tutorial/2dgame/getting_started

How do I draw a rectangle from one render target onto another without losing data?

I asked a question earlier in the week (here) regarding transferring data from a rendertarget to the CPU so that collisions could be tested on it. From the responses I got there, I decided that drawing a section of the render target to a smaller render target and then retrieving that data would be a valid solution and tried to implement that.
After drawing the full sized render target (1280x1024), I switch the active render target to a smaller one (59x47, the collision size of my players) and try to draw the section of the large render target which falls under the player onto the smaller target. At first, I thought this had worked, but then I noticed that collisions would sometimes be wildly inaccurate.
I drew the smaller render target to the screen to inspect its contents and found that it only had a handful of pixels occupied at any one time, and that these were always in the top left corner.
// Render rectangle under player from foreground to collision layer
Rectangle sourceRectangle = new Rectangle(sourceX, sourceY,
(int)sizeX, (int)sizeY);
gd.SetRenderTarget(collisionLayer[i]);
gd.Clear(Color.Transparent);
spriteBatch.Draw(foregroundLayer, collisionLayer[i].Bounds,
sourceRectangle, Color.White);
gd.SetRenderTarget(null);
Drawing the full large render target to the smaller target results in the data carrying over successfully (though scaled, obviously), so I'm not sure what is going wrong.
// Render full target onto collision layer
gd.SetRenderTarget(collisionLayer[i]);
gd.Clear(Color.Transparent);
spriteBatch.Draw(foregroundLayer, new Vector2(0.0f), Color.White);
gd.SetRenderTarget(null);
The image below shows the large target with the small target superimposed on the top left corner. The large target's full data seems to be drawn correctly.
You're not showing your spriteBatch.Begin and spriteBatch.End calls, but it looks like you need to move them. Specifically, you're setting your render target to null before calling spriteBatch.End, or at least it appears so from your sample code.
I set up a sample project, and when I do the following everything works as expected...
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
spriteBatch.Draw(texture, target.Bounds, new Rectangle(50, 500, 100, 100), Color.White);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
spriteBatch.Draw(texture, new Rectangle(0, 0, 640, 480), texture.Bounds, Color.White);
spriteBatch.Draw(target, Vector2.Zero, target.Bounds, Color.White);
spriteBatch.End();
But if I move spriteBatch.End to after SetRenderTarget(null), my render target is empty.
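Applied to the snippet from the question, that ordering would look something like this (a sketch using the question's own variable names; the key point is that spriteBatch.End() must run while the collision layer is still the active render target, because SpriteBatch only submits its batched draws at End):
gd.SetRenderTarget(collisionLayer[i]);
gd.Clear(Color.Transparent);
spriteBatch.Begin();
spriteBatch.Draw(foregroundLayer, collisionLayer[i].Bounds,
    sourceRectangle, Color.White);
spriteBatch.End();          // flush the batch while the small target is still bound
gd.SetRenderTarget(null);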

Issue with transparent texture on 3D primitive, XNA 4.0

I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have parts of transparency. The cubes that are behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides if the transparency works or not, which is something I want to avoid. Look here:
cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"];
Block[] cubes = new Block[4];
cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3));
cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3));
cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4));
cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4));
foreach (Block b in cubes)
{
    b.shape.RenderShape(GraphicsDevice, cubeEffect);
}
This is the code in the Draw method. It produces this result:
As you can see, the textures behind the leaf cube are not visible on the other side. When I reverse indices 3 and 0 in the array, I get this:
It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.
You are relying on depth buffering to achieve occlusion. This technique only works for opaque objects.
To achieve correct occlusion for a scene containing transparent objects:
1. Set DepthBufferEnable and DepthBufferWriteEnable to true
2. Draw all opaque geometry
3. Leave DepthBufferEnable set to true, but change DepthBufferWriteEnable to false
4. Sort alpha blended objects by distance from the camera, then draw them in order from back to front
Extract from Depth sorting alpha blended objects by Shawn Hargreaves
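In XNA 4.0 terms (the quote uses the older RenderState property names), that recipe maps onto the built-in state objects roughly like this. This is only a sketch: DrawOpaqueCubes, transparentCubes, cameraPosition and the Position property are hypothetical stand-ins for your own scene data, and OrderByDescending needs using System.Linq.
// Steps 1-2: depth test and depth writes on, draw everything opaque
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;
DrawOpaqueCubes();

// Steps 3-4: keep the depth test, stop writing depth, blend, and draw back to front
GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
GraphicsDevice.BlendState = BlendState.AlphaBlend;
foreach (Block b in transparentCubes
    .OrderByDescending(c => Vector3.DistanceSquared(cameraPosition, c.Position)))
{
    b.shape.RenderShape(GraphicsDevice, cubeEffect);
}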
Drawing transparent objects properly is harder than drawing regular ones. The reason is that when a face is rendered, by default it marks its pixels as drawn at a certain depth, and as a result pixels behind it will not be drawn at all. I'd recommend getting a book on 3D rendering and looking through it for more details.
The easiest approach you have already found: draw transparent objects AFTER non-transparent ones. This works for transparent and semi-transparent objects. Note that transparent objects need to be sorted to be drawn correctly (unlike non-transparent ones).
In your particular case (no semi-transparency) you can change the texture rendering to NOT render anything at all for the transparent parts.
You may be able to use alpha testing if you don't have semi-transparent pixels on the objects: each pixel is either rendered completely solid or discarded, so it never writes to the Z-buffer.
As in Riemers Alpha Testing.
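If the stock effects fit your setup, XNA 4.0 ships an AlphaTestEffect that does exactly this kind of cut-out transparency (a sketch; whether it can replace your custom "Textured" technique depends on what that technique does, and world/view/projection stand in for your existing matrices):
AlphaTestEffect alphaTest = new AlphaTestEffect(GraphicsDevice)
{
    AlphaFunction = CompareFunction.Greater,
    ReferenceAlpha = 128, // pixels with alpha <= 128 are discarded and never touch the depth buffer
    World = world,
    View = view,
    Projection = projection
};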
XNA (and DirectX and all major 3D libraries) takes into consideration something called culling. Although I cannot tell for sure from your code, from the images I think this is your problem. The polygons that you don't see have their vertices in the wrong order. If this is the problem, you have two solutions:
either turn culling off (device.RenderState.CullMode = CullMode.None; if I remember correctly)
or apply your texture twice, with the points of the polygon in both clockwise and counter-clockwise order
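For reference, the first option uses the XNA 3.x syntax; in XNA 4.0 (which this question targets) the render states were moved into state objects, so it would look something like this:
// Cull mode now lives on RasterizerState rather than device.RenderState
GraphicsDevice.RasterizerState = RasterizerState.CullNone;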
