I'm trying to make procedural textures in my XNA 4.0 game, mainly for buttons but for other textures as well. Here's an image describing what I want:
Hope you understand what I want to do; if you don't, here's a description in words:
I want to make objects in my game that all use the same texture but can be resized. When an object is resized, its texture should not simply be scaled so that the pixels get "stretched"; instead, the pieces should be placed procedurally.
The general way to do this is to have one texture for the middle, four for the corners, and four for the edges. The vertical edges and the middle are stretched vertically, and the horizontal edges and the middle are stretched horizontally.
You could pack it into 1 texture for easy editing. You'd define the corners and edges implicitly with a border distance, which would define the parts of the texture that should not scale.
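As a rough illustration of that approach, here is a minimal XNA sketch. The helper name DrawNineSlice and the uniform border value are just assumptions, and it expects SpriteBatch.Begin to have been called already:
void DrawNineSlice(SpriteBatch batch, Texture2D tex, Rectangle dest, int border)
{
    int w = tex.Width, h = tex.Height;
    // Cut positions in the source texture and in the destination rectangle.
    int[] srcX = { 0, border, w - border, w };
    int[] srcY = { 0, border, h - border, h };
    int[] dstX = { dest.Left, dest.Left + border, dest.Right - border, dest.Right };
    int[] dstY = { dest.Top, dest.Top + border, dest.Bottom - border, dest.Bottom };
    for (int row = 0; row < 3; row++)
        for (int col = 0; col < 3; col++)
        {
            var src = new Rectangle(srcX[col], srcY[row], srcX[col + 1] - srcX[col], srcY[row + 1] - srcY[row]);
            var dst = new Rectangle(dstX[col], dstY[row], dstX[col + 1] - dstX[col], dstY[row + 1] - dstY[row]);
            batch.Draw(tex, dst, src, Color.White); // corners keep their size, edges and middle stretch
        }
}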
I would recommend splitting it up into five textures: one for each side and one single-colored texture. You just stretch the single-colored texture and draw the frame textures around it.
I hope I could help you.
I'm trying to render to a texture using an FBO with OpenTK in C#.
When I try to render, everything shows up just fine, except that the texture appears in the bottom-left corner; I'm expecting it in the top-left corner.
The texture also appears flipped on the Y axis, so I need to modify the texture matrix after binding the texture target.
If I just bind an ordinary texture and draw the vertices, the sprite appears in the top-left corner.
The code I use looks exactly the same as in the official documentation.
I have two questions:
1. Is modifying the texture matrix the right way to make the target texture show up properly?
2. How do I make the texture target appear in the top-left corner?
Thanks in advance!
The origin is actually at the bottom left, which is why the FBO is displayed in the bottom-left corner.
In normal images, texture space (0,0) is at the top, which is why you don't see them flipped.
So you have to adjust the texture matrix to make the two spaces match.
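For example, with OpenTK's legacy fixed-function calls, flipping V through the texture matrix looks roughly like this (a sketch of the idea, not your exact code):
GL.MatrixMode(MatrixMode.Texture);
GL.LoadIdentity();
GL.Translate(0.0, 1.0, 0.0);  // shift the flipped range back into [0, 1]
GL.Scale(1.0, -1.0, 1.0);     // v' = 1 - v
GL.MatrixMode(MatrixMode.Modelview);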
When I try to render, everything shows up just fine, except that the texture is shown in the bottom-left corner,
Yes. In OpenGL the origin (0,0) of 2D images (the viewport, textures, render buffers) is in the lower left.
I'm expecting it in the top-left corner.
Why? The origin (as far as OpenGL is concerned) is in the lower left. Why would you expect it at the top?
I encountered similar problems the first time I tried FBOs, so here's my answer:
Although there are several ways to work around this upside-down problem, modifying the texture matrix isn't a bad idea at all. It can also be handy in other situations, e.g. for using non-normalized texture coordinates, so you could add such features to your texture-binding function.
It seems like a projection/viewport issue. If you are sure that a normal sprite appears at the top-left, try re-setting up your projection/view/camera before unbinding the FBO handle.
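For reference, a projection with a top-left origin for the sprite pass can be set up like this in legacy-GL OpenTK (only a sketch; width and height stand for your framebuffer size):
GL.Viewport(0, 0, width, height);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, width, height, 0, -1, 1);  // y increases downward, so (0, 0) is the top-left corner
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();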
I have a 2D tile-based lighting system which is drawn onto a Render Target.
I am also drawing a background which involves mountains, sun/moon, and clouds.
There is also the unlit game itself: blocks, character, etc.
Here is the order in which everything is drawn:
1. Background
2. Game World
3. Lighting
The problem is that the LIGHTING is covering up the BACKGROUND when it's dark:
Although it's perfectly fine in the day because the light is full:
You might ask: well, why don't you just blend each block instead of drawing the lighting on a RenderTarget?
Because that would prevent me from performing smooth lighting, as seen in the pictures. To produce the smooth lighting, I draw squares of light onto a RenderTarget, perform a Gaussian blur and then draw that RenderTarget.
How about not drawing a square of light in empty spaces?
The light blurs onto any adjacent blocks, and not all objects in my game that are affected by lighting are square, so they would end up looking like sprites with blurry squares on top of them.
Light NOT being drawn in empty spaces:
Light being drawn everywhere:
Is there any way to keep the background visible, or something else that would help my predicament?
I'd suggest you do the following:
Render lighting to rendertarget
Render scene to rendertarget (leave the background transparent)
Multiply scene with the lighting rendertarget
Render background to screen
Render scene (with the lighting already applied) to the screen
Because the background is added after applying the lighting, it will not be affected.
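A minimal XNA sketch of those passes; lightTarget, sceneTarget and the Draw* helpers are placeholders for your own lighting, world and background drawing:
// Multiply blend: destination colour * lighting colour; the scene's alpha is preserved
// so the transparent background survives.
BlendState multiply = new BlendState
{
    ColorBlendFunction = BlendFunction.Add,
    ColorSourceBlend = Blend.DestinationColor,
    ColorDestinationBlend = Blend.Zero,
    AlphaSourceBlend = Blend.Zero,
    AlphaDestinationBlend = Blend.One
};
// 1. Lighting to its own render target (blur it here as you already do).
GraphicsDevice.SetRenderTarget(lightTarget);
GraphicsDevice.Clear(Color.Black);
DrawLights(spriteBatch);
// 2. Scene to its own render target, background left transparent.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Transparent);
DrawGameWorld(spriteBatch);
// 3. Multiply the lighting onto the scene target only.
spriteBatch.Begin(SpriteSortMode.Immediate, multiply);
spriteBatch.Draw(lightTarget, Vector2.Zero, Color.White);
spriteBatch.End();
// 4 + 5. Back buffer: background first, then the lit scene on top.
GraphicsDevice.SetRenderTarget(null);
DrawBackground(spriteBatch);
spriteBatch.Begin();
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);
spriteBatch.End();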
I've got a problem. Using SpriteBatch, I can only draw a rectangular area from my source Texture2D.
Please help me find a way to draw a polygonal or circular area from my source texture.
I'm creating 2d sprite game.
Thanks in advance,
Denis
You could construct these shapes with dynamic vertices, i.e. build your own shapes [1].
But if you just want to draw non-rectangular shapes, it would be much easier to use transparency. You still take a rectangular region from your texture, but only the circle/polygon is visible.
This can be done easily by using PNG or TGA images with baked-in transparency. There are also a lot of questions dealing with this on SO:
[2] [3]
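With the transparency route, the draw call itself stays rectangular; only the opaque pixels baked into the image show up. A tiny sketch, where circleSheet and position are just placeholders:
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
spriteBatch.Draw(circleSheet, position, new Rectangle(0, 0, 64, 64), Color.White); // rectangular source, but only the circle is opaque
spriteBatch.End();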
In my game I need to draw a circle made of squares, each the size of a game tile. I could just draw monochrome square textures in the shape of a jagged circle each frame, but that consumes a significant amount of resources. What I'd like to do is draw it somewhere in memory just once, save it, and then draw that each frame.
I could simply draw the circle myself and use it as a ready-made texture, but my circle is not always the same. Its size changes throughout the game (and half of the time it's not really a circle, but I've got the algorithm that says where to draw), so it has to be drawn programmatically.
First you render the circle to a custom RenderTarget2D. You can set a custom render target like this:
GraphicsDevice.SetRenderTarget(renderTarget);
After rendering your circle to the render target, you can use it as a Texture2D (in XNA 4.0, RenderTarget2D derives from Texture2D, so the cast is all you need):
texture = (Texture2D)renderTarget;
Read more: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Render_to_texture.php
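Putting it together, a sketch for XNA 4.0 might look like this. DrawCircleTiles stands in for whatever places your squares, and width/height are whatever size the circle needs; the key extra step is switching back to the back buffer before using the target as a texture:
RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, width, height);
GraphicsDevice.SetRenderTarget(renderTarget);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
DrawCircleTiles(spriteBatch);          // draw the square tiles once
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);  // back to the back buffer
Texture2D circleTexture = renderTarget; // RenderTarget2D is a Texture2D in XNA 4.0
// From now on, each frame: spriteBatch.Draw(circleTexture, position, Color.White);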
Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides and top and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF etc.).
In WPF, using the Viewport3D class, you can create a cuboid that is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (5x1), and the top and bottom faces (8x1) using texture coordinates. The front-face coordinates should be (1/9, 1/6), (8/9, 1/6), (1/9, 5/6) and (8/9, 5/6); each side face uses the coordinates running from its nearest image edge to those points, e.g. for the left side: (0, 1/6), (1/9, 1/6), (0, 5/6) and (1/9, 5/6).
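A sketch of just the front face in code, using those texture coordinates (WPF texture coordinates have their origin at the top-left of the image; the other faces are built the same way with their own coordinates, and "painting.jpg" is a placeholder):
// Uses System.Windows.Media, System.Windows.Media.Media3D and System.Windows.Media.Imaging.
var mesh = new MeshGeometry3D();
mesh.Positions = new Point3DCollection
{
    new Point3D(0, 0, 0), new Point3D(8, 0, 0),  // bottom-left, bottom-right
    new Point3D(0, 5, 0), new Point3D(8, 5, 0)   // top-left, top-right
};
mesh.TriangleIndices = new Int32Collection { 0, 1, 2, 2, 1, 3 };
mesh.TextureCoordinates = new PointCollection
{
    new Point(1.0 / 9, 5.0 / 6), new Point(8.0 / 9, 5.0 / 6),  // bottom edge of the front face
    new Point(1.0 / 9, 1.0 / 6), new Point(8.0 / 9, 1.0 / 6)   // top edge of the front face
};
var material = new DiffuseMaterial(new ImageBrush(new BitmapImage(new Uri("painting.jpg", UriKind.Relative))));
var frontFace = new GeometryModel3D(mesh, material);
// Add frontFace to a ModelVisual3D inside the Viewport3D, along with the other faces.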
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you don't need to do real 3D, only to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right (going by your sketch in the question). Scale and shear the strips (I'm not expert enough at .net/wpf to know how, but it can do it). The top would be scaled vertically by a factor of 0.5 (a guess - choose to fit the desired final 3D-looking image) and sheared horizontally. The result is composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to do texture mapping and rasterizing the model into a final image, which amounts to doing the same math. The fun part is figuring out how to adjust the scaling and shearing parameters.
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be useful reference http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
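A rough GDI+ sketch of the scale-and-shear idea for the left strip; leftStrip is assumed to be the already-cropped strip, and the factors are guesses to be tuned:
// Uses System.Drawing and System.Drawing.Drawing2D.
void DrawLeftSide(Graphics g, Bitmap leftStrip)
{
    var m = new Matrix();
    m.Scale(0.5f, 1f);   // squash the strip horizontally
    m.Shear(0f, 0.4f);   // shear it vertically to fake the receding side
    g.Transform = m;
    g.DrawImage(leftStrip, 0, 0);
    g.ResetTransform();
}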
I don't have any experience in this kind of stuff, but when I saw this question, the first thing that came to my mind was the funny Unicornify for SO.
In this making-of article by balpha, he explains how the 2D unicorn sphere is rotated in 3D space.
The code is written in Python; if you are interested, you can take a look at it. But I'm not exactly sure it will help you.
The brute-force approach (which might be the easiest) is to map the u,v texture coordinates for each of the three faces onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternatively, you can move the camera position in 3-space with a transform, rather than the canvas. Six of one, half a dozen of the other, as they say.
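To make the billboard idea concrete, here is a minimal sketch in XNA terms (any 3D API works the same way); basicEffect and paintingTexture are assumed to exist, and the UVs reuse the 1/9 and 1/6 borders from the WPF answer above:
// One billboard: the front face as a two-triangle strip with its own UVs.
VertexPositionTexture[] front =
{
    new VertexPositionTexture(new Vector3(-4, -2.5f, 0), new Vector2(1f / 9, 5f / 6)),
    new VertexPositionTexture(new Vector3(-4,  2.5f, 0), new Vector2(1f / 9, 1f / 6)),
    new VertexPositionTexture(new Vector3( 4, -2.5f, 0), new Vector2(8f / 9, 5f / 6)),
    new VertexPositionTexture(new Vector3( 4,  2.5f, 0), new Vector2(8f / 9, 1f / 6)),
};
basicEffect.TextureEnabled = true;
basicEffect.Texture = paintingTexture;
basicEffect.World = Matrix.CreateRotationY(MathHelper.ToRadians(25)); // rotate all three billboards together
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, front, 0, 2);
}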