I have a very large texture and would like to set the pixel data of a specific part of it. I need this to be as fast as possible, so is it possible to pass SetData a rectangle of data instead of the entire texture data?
Yes. However, updating part of the texture rather than the whole thing is unlikely to make a difference, except perhaps when the texture is huge.
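For reference, a minimal sketch of a partial update, assuming the XNA 4.0 `Texture2D.SetData` overload that takes a nullable `Rectangle` (the names `texture` and `region` here are placeholders):

```csharp
// Update only a 64x64 region of a large texture (XNA 4.0 API sketch).
// `texture` is assumed to be an existing Texture2D.
Rectangle region = new Rectangle(128, 256, 64, 64);
Color[] pixels = new Color[64 * 64];
for (int i = 0; i < pixels.Length; i++)
    pixels[i] = Color.Red; // whatever data you want to write

// Level 0 is the base mip; elementCount must match the rectangle's area.
texture.SetData(0, region, pixels, 0, pixels.Length);
```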
Edit: The bad-angle picture was due to depth testing. I still need the 'right' way to do 3D text rendering, though!
I got 2D text rendering in OpenTK working. It's very simple: I use the .NET Graphics class to draw a string to a Bitmap, which I can then load onto the GPU via OpenTK. OK, great, but how about 3D?
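The 2D step I described might look roughly like this (a sketch; the string, font, and bitmap size are placeholder values):

```csharp
// Sketch: render a string into a Bitmap with System.Drawing, then upload
// it as an OpenGL texture via OpenTK.
using (Bitmap bmp = new Bitmap(256, 64, System.Drawing.Imaging.PixelFormat.Format32bppArgb))
{
    using (Graphics g = Graphics.FromImage(bmp))
    {
        g.Clear(Color.Transparent);
        g.DrawString("Hello 3D", new Font("Arial", 24), Brushes.White, 0, 0);
    }

    var data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                            System.Drawing.Imaging.ImageLockMode.ReadOnly,
                            System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    int tex = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, tex);
    // GDI+ gives BGRA byte order, so upload as Bgra.
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                  bmp.Width, bmp.Height, 0,
                  OpenTK.Graphics.OpenGL.PixelFormat.Bgra,
                  PixelType.UnsignedByte, data.Scan0);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                    (int)TextureMinFilter.Linear);
    bmp.UnlockBits(data);
}
```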
I'm thinking about this in terms of a cylinder. What is a cylinder? It's just a circle stretched out over a certain height. That's EXACTLY what I want to do here! I researched a bunch...but surprisingly there isn't all that much info readily available IMO for such a basic task.
Here's what I've tried:
1) Rendering the bitmap 100 times from Z = 0.0f to 1.0f. This actually works pretty well! For certain rotations anyway.
2) Drawing 16x16x16 voxels (well, I think I'm drawing voxels). Basically the idea is to use the typical GL.TexCoord3 and GL.Vertex3 methods for drawing the SURFACE of a cube, but because we are drawing so freakin' many of them, I figured it would actually give depth to my text. It sort of does, but the results are actually worse than attempt 1.
I want to get this working with a really simple solution, if one exists. I'm using Immediate mode, and if possible I'd like to keep using that.
This is what solution 1 looks like at a good angle:
Bad Angle:
I know that my method is inherently flawed because these bitmaps don't actually have any depth when I draw them, which is why, at certain critical angles, the text either looks flat or disappears from view.
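For reference, attempt 1 can be sketched in immediate mode like this (`textTexture`, the layer count, and the quad size are placeholders):

```csharp
// Sketch of attempt 1: draw the text bitmap as N stacked quads along Z.
// `textTexture` is assumed to hold the uploaded text bitmap.
const int layers = 100;
GL.BindTexture(TextureTarget.Texture2D, textTexture);
GL.Begin(PrimitiveType.Quads);
for (int i = 0; i < layers; i++)
{
    float z = i / (float)(layers - 1); // spread layers from 0.0f to 1.0f
    GL.TexCoord2(0f, 0f); GL.Vertex3(0f, 0f, z);
    GL.TexCoord2(1f, 0f); GL.Vertex3(1f, 0f, z);
    GL.TexCoord2(1f, 1f); GL.Vertex3(1f, 1f, z);
    GL.TexCoord2(0f, 1f); GL.Vertex3(0f, 1f, z);
}
GL.End();
```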
I'm making an RTS game in Unity. In such games, players usually determine a unit's allegiance by its color markings. I'm now trying to implement a system that will remap a purple marker color to the unit owner's color.
One idea was to designate a color that will be used as a mask and can then be recolored to any target color. It could be done using some hue-distribution function:
Solution 1
I used a function based on max(). You can see the plot there.
hue = min(hue, pow(Math.abs(hue-MASK_HUE),8)*5000000+RESULT_HUE)
This solution has two big flaws:
Purple can't be used (I don't like it anyway)
Only full colours are applicable (no brown, black, white...)
What you see above is just my fiddling. The actual project would run on Unity engine in C#.
Solution 2
My friend proposed a different approach: every image should use a map, either faded or just a true/false array, to mark where the colours should be applied. I haven't tried this yet, as it's more complicated to test ad hoc.
Altering textures in Unity
It seems that the texture of a material can be easily altered in Unity, so the question is:
Q: How should I implement dynamic texture coloring (generation) in Unity? Is either of my methods a good approach? How should I produce the new textures, and using what functions?
Rather than full code, I need information about which functions I should use. I hope other users will also profit from the general info.
To help you answer, I'd guess the process will have three important parts:
Somehow get the texture of the model's material (we only know the GameObject here)
If it hasn't been recolored already, use some algorithm to change the image appropriately. We'll need some pixel manipulation here
Apply the texture back to the model material
If you go the texture manipulation route, you'll need to keep an additional copy of the texture in memory for each color, which will increase the "loading" time of your scene. You can access a GameObject's texture through renderer.material.mainTexture on the GameObject that has the Renderer component. Then you can use all sorts of pixel manipulation options, such as SetPixel, or SetPixels to work in batches for better performance.
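A minimal sketch of that pixel-manipulation route, assuming Unity's Color.RGBToHSV/HSVToRGB helpers (available since Unity 5.3), a readable, uncompressed source texture, and placeholder values for `go`, `maskHue`, and `ownerHue`:

```csharp
// Sketch: copy a unit's texture and remap pixels near the marker hue.
// `go` is the unit's GameObject; the hue values and threshold are made up.
float maskHue = 0.78f;  // roughly purple
float ownerHue = 0.0f;  // e.g. red for player 1

Renderer rend = go.GetComponent<Renderer>();
Texture2D src = (Texture2D)rend.material.mainTexture; // must be readable
Texture2D copy = new Texture2D(src.width, src.height, TextureFormat.RGBA32, false);

Color[] pixels = src.GetPixels();
for (int i = 0; i < pixels.Length; i++)
{
    float h, s, v;
    Color.RGBToHSV(pixels[i], out h, out s, out v);
    if (Mathf.Abs(h - maskHue) < 0.05f)             // pixel is "purple enough"
        pixels[i] = Color.HSVToRGB(ownerHue, s, v); // keep shading, swap hue
}
copy.SetPixels(pixels);
copy.Apply();                       // upload the changes to the GPU
rend.material.mainTexture = copy;   // assign the recolored copy back
```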
There is, however, a third option that I would recommend. If you write or modify a custom shader, you can perform the color replacement at render time without significantly decreasing performance. This can be accomplished by adding a step that converts your color output from RGB to HSV, changes the hue and saturation, and then converts back to RGB.
By making the Hue and Saturation external parameters, you should be able to use a full range of colors including whatever you used for your marker color.
This post from the Unity forums should help with the hue shift shader.
I'm working on an isometric 2D tile engine for an RTS game. I see two ways to draw the floor: one big image (for example 8000×8000 px, about 10 MB), or drawing individual tile images, only in the visible area.
My question is: which is better for performance?
Performance-wise and memory-wise, a tiled approach is better.
Memory-wise: If you can use a single spritesheet to hold the textures of every tile you need to render, then the amount of memory used would decrease tremendously - as opposed to redefining textures for tiles you want to render more than once. Also, on every texture there is an attribute called "pitch". This attribute tells us how much more memory is being used than the image actually needs. What? Why would my program be doing this? Back in the good old days, when Ben Kenobi was still called Obi Wan Kenobi, textures took up the memory they were supposed to. But now, with hardware acceleration, the GPU adds some padding to your texture to make it align with boundaries that it can process faster. This is memory you can reduce with the use of a spritesheet.
From a performance standpoint: Whenever you draw a regular sprite to the screen, the graphics hardware requires three main pieces of information: 1) The texture you want to render from. 2) What part of that texture you want to render from. 3) Where on the screen you want to render to. Repeat for every object you want to render. With a spritesheet, it only passes data once - a big performance increase because passing data from the CPU to the GPU (and vice-versa) is really slow.
And I disagree with the two comments, actually. Making a change of this caliber would be difficult when your program is mature.
Take a look at this image:
http://www.sprites-inc.co.uk/files/X/X/X4-X6/mmx_x4_x_sheet.gif
My question is: what is the best way to use this as a spritesheet? The usual FrameCount / FrameWidth calculation won't work well here. My brother suggested that I could assign each frame a position, size, and number. That sounded like a great idea, but the problem is that I have no idea how!
All I know how to do, once those three are assigned, is change the current frame based on its number.
So: what is the best way to do this, and how do I do it? Do you know a site with such a tutorial? I have searched but can't find anything. Or could you give me some hints here?
Thanks for responding!
What you are looking for is a texture atlas, which is essentially an index of the images contained inside your spritesheet. You can then import that data into your environment to process the sprites contained in your image.
Typically you would keep all the sprites separate and use a sprite-packing tool, which would create a spritesheet and a texture atlas for you.
In this case you already have a packed spritesheet, so you will need to build the atlas yourself.
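A hand-rolled atlas can be as simple as a dictionary mapping frame names to source rectangles; a sketch in XNA-style C# (the rectangle values below are made up, you would measure them from your sheet):

```csharp
// Sketch: a hand-built texture atlas for an irregularly packed sheet.
// Each frame gets a name and a source rectangle within the sheet.
Dictionary<string, Rectangle> atlas = new Dictionary<string, Rectangle>
{
    { "idle_0", new Rectangle(  3,  5, 32, 44) },
    { "idle_1", new Rectangle( 40,  5, 32, 44) },
    { "dash_0", new Rectangle(  3, 60, 48, 40) },
};

// Drawing a named frame with SpriteBatch (sheetTexture is the whole sheet):
Vector2 position = new Vector2(100, 100);
Rectangle src = atlas["idle_0"];
spriteBatch.Draw(sheetTexture, position, src, Color.White);
```

Swapping animation frames then just means looking up a different key, e.g. `"idle_" + frameNumber`, each tick.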
Is there a way to load images into OpenGL with the y-coordinates flipped (upside down)? I'm using the .NET Bitmap and BitmapData classes, and passing BitmapData.Scan0 to OpenGL.
Flipping the Bitmap on the CPU using .RotateFlip() is too slow.
Aside from flipping all of the texcoords, can I solve this problem in our engine?
If you render using a fragment shader, you get to interpret the u, v coordinates any way you want. Turning them upside down should be trivial and (nearly) free.
Other than that, just flip your texture coordinates. It should not be difficult to achieve this.
You should be able to just alter your texture coordinates to achieve the desired flip.
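For example, a vertical flip done purely in texture coordinates might look like this in immediate mode (a sketch, not tied to any particular engine):

```csharp
// Sketch: flip an image vertically at draw time by swapping the v
// texture coordinates, so the quad's bottom samples the image's top.
GL.Begin(PrimitiveType.Quads);
GL.TexCoord2(0f, 1f); GL.Vertex2(0f, 0f); // bottom-left samples top row
GL.TexCoord2(1f, 1f); GL.Vertex2(1f, 0f);
GL.TexCoord2(1f, 0f); GL.Vertex2(1f, 1f); // top edge samples bottom row
GL.TexCoord2(0f, 0f); GL.Vertex2(0f, 1f);
GL.End();
```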
I think the answer is no. OpenGL takes a pointer to an image as texture data, and I've never seen any way to tell it to flip the lines.
(Edit: disregard, the question is about textures, not DrawPixels.)
This worked for me in C++.
Q: "How can I make glDrawPixels() draw an image flipped upside down?"
A: "Try glPixelZoom( 1.0, -1.0 ). Similarly, an image can be flipped left to right with glPixelZoom(). Note that you may have to adjust your raster position to position the image correctly."
src: http://www.mesa3d.org/brianp/sig97/gotchas.htm