Is there a way to load images into OpenGL with the y-coordinates flipped (upside down)? I'm using the .NET Bitmap and BitmapData classes, and passing BitmapData.Scan0 to OpenGL.
Flipping the Bitmap on the CPU using .RotateFlip() is too slow.
Aside from flipping all of the texcoords, can I solve this problem in our engine?
If you render using a fragment shader, you get to interpret the u, v coordinates any way you want. Turning them upside down should be trivial and (nearly) free.
Other than that, just flip your texture coordinates. It should not be difficult to achieve this.
You should be able to just alter your texture coordinates to achieve the desired flip.
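Either way, the flip amounts to v' = 1 - v on each vertex. A minimal sketch, assuming a simple interleaved vertex layout (the Vertex struct and flipV helper are illustrative, not from any particular engine):

```cpp
// Hypothetical vertex layout: 2D position plus u, v texture coordinates.
struct Vertex { float x, y, u, v; };

// Flip the image vertically at draw time: v' = 1 - v maps the top of the
// texture to the bottom of the quad, so the upload order no longer matters.
void flipV(Vertex* verts, int count) {
    for (int i = 0; i < count; ++i)
        verts[i].v = 1.0f - verts[i].v;
}
```

Run once over the quad's vertices before uploading them (or bake the flip into whatever code generates the texcoords).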
I think the answer is no. OpenGL takes a pointer to an image as texture data, and I've never seen any way to tell it to flip the rows.
(edit: disregard; the question is about textures, not glDrawPixels)
This worked for me in C++.
Q: "How can I make glDrawPixels() draw an image flipped upside down?"
A: "Try glPixelZoom( 1.0, -1.0 ). Similarly, an image can be flipped left to right with glPixelZoom(). Note that you may have to adjust your raster position to position the image correctly."
src: http://www.mesa3d.org/brianp/sig97/gotchas.htm
In XNA I am trying to create a game using old-style Super Mario sprites, but if I try to make them bigger, they get very blurry. I have tried saving the PNG sprites with nearest-neighbor, bicubic, and bilinear resampling in Photoshop, but they all appear equally blurry. I have also tried compressing the PNG online, which also didn't help.
My knowledge of XNA is somewhat basic, so unless your answer is code I can simply copy-paste into my 'main' class, please explain how to use it.
In your Draw function you should only need to change one line of code.
Where you call spriteBatch.Begin();, instead call
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullCounterClockwise);
This sets your GraphicsDevice to sample textures with point (nearest-neighbor) filtering, so colors are not interpolated between whole pixels when the sprites get zoomed.
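What point filtering does can be sketched outside XNA: each destination pixel simply copies the single nearest source pixel, so edges stay hard and blocky. (scaleNearest below is an illustrative helper, not an XNA call; it assumes a flat row-major pixel array.)

```cpp
#include <vector>

// Nearest-neighbor upscale by an integer factor: every output pixel picks
// one source pixel instead of blending neighbors, which is why scaled-up
// sprites keep crisp pixel-art edges.
std::vector<int> scaleNearest(const std::vector<int>& src,
                              int w, int h, int factor) {
    std::vector<int> dst(w * factor * h * factor);
    for (int y = 0; y < h * factor; ++y)
        for (int x = 0; x < w * factor; ++x)
            dst[y * w * factor + x] = src[(y / factor) * w + (x / factor)];
    return dst;
}
```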
One other thing that you could do is to use a render target with the resolution you want the game to "emulate", draw everything onto the render target, and finally draw that to the screen.
This is a bit out of the scope of this question, but it is ideal if you want to create a genuine old-school experience.
Does anyone know if it's possible to display a video deformed, i.e. not in a rectangle but in a parallelogram?
If that's not possible, is it possible to mask part of the video, so I can put another video under it?
With your favourite 3D rendering API, you would render the video to a texture and then use this texture on a parallelogram-shaped quad.
You could of course also do it by hand: copy each scanline, and for each scanline increase the horizontal offset a bit.
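The by-hand approach can be sketched like this (shearRows is an illustrative helper; it assumes a flat row-major pixel array and shifts each row right in proportion to its row index, leaving 0 as the background):

```cpp
#include <vector>

// Shear a w x h image into a parallelogram: row y is shifted right by
// shearPerRow * y pixels. The output is widened so no pixels are clipped;
// uncovered pixels stay 0 (background / mask colour).
std::vector<int> shearRows(const std::vector<int>& src,
                           int w, int h, int shearPerRow) {
    int outW = w + shearPerRow * (h - 1);
    std::vector<int> dst(outW * h, 0);
    for (int y = 0; y < h; ++y) {
        int offset = shearPerRow * y;       // this row's horizontal shift
        for (int x = 0; x < w; ++x)
            dst[y * outW + offset + x] = src[y * w + x];
    }
    return dst;
}
```

The zero pixels double as a mask: anything drawn underneath shows through wherever the sheared video doesn't cover.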
I tried to cut out the player image using the Kinect depth image, but there are two problems with that. First, when I'm using depthStreamWithPlayerIndex, only a 320x240 resolution can be used for the depth stream. Second, the function that retrieves the matching color pixel for a depth pixel only works up to 640x480. Because of these two problems, the cut-out image doesn't look good if you want to show it at a high resolution. Is there any way to fix these two problems, or an algorithm to smooth the output image, something like anti-aliasing?
A couple of things I can think of.
If you want to even out the edges of the person, then you could do this:
Make a mask that is 255 where the player is, 0 everywhere else
Smooth the mask (using Gaussian blurring with an empirically determined parameter)
Use this mask when composing the original player image with the new background
You could replace the smoothing step with morphological operations (e.g. dilation, open/close).
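The three steps above can be sketched as follows. A 3-tap horizontal box filter stands in for the Gaussian blur (a real implementation would blur in 2D and tune the kernel size); both helpers are illustrative, not Kinect SDK calls:

```cpp
#include <vector>
#include <algorithm>

// Step 2: soften a 0/255 player mask so its edges become gradual alpha
// values instead of a hard jagged boundary.
std::vector<int> blurMask(const std::vector<int>& mask, int w, int h) {
    std::vector<int> out(mask.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int left  = mask[y * w + std::max(x - 1, 0)];
            int mid   = mask[y * w + x];
            int right = mask[y * w + std::min(x + 1, w - 1)];
            out[y * w + x] = (left + mid + right) / 3;
        }
    return out;
}

// Step 3: blend one channel of the player over the background using the
// blurred mask as per-pixel alpha: out = p*a/255 + b*(255-a)/255.
int composite(int player, int background, int alpha) {
    return (player * alpha + background * (255 - alpha)) / 255;
}
```

Where the blurred mask falls between 0 and 255, the player's edge pixels are partially transparent, which is exactly the anti-aliased look being asked for.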
Once you've put the player on the new background, you could "feather" the player edges to make them stand out a bit less:
Apply the Canny edge detector to the mask from above to get an edge mask
Dilate the edge mask. You now have a mask that covers the border of the player
Blur the parts of the composed image that are under this mask
I have a very large texture and would like to set the pixel data of a specific part of it. I need it to be as fast as possible so is it possible to give SetData a rectangle of data instead of the entire texture data?
Yes; Texture2D.SetData has overloads that take a Rectangle and update only that region. However, setting part of the texture rather than the whole thing is unlikely to make a difference except maybe if the texture is huge.
I am working on a C# program to process images (given as int[,]).
I have a 2D array of pixels, and I need to rotate them around a point, then scale them down to fit the original array. I already found articles about using a matrix to translate to a point, rotate, and then translate back. What remains is to scale the resultant image to fit an array of the original size.
How can that be done? (Preferably with two equations, one for x and one for y.)
The Matrix class has both RotateAt and Scale methods. What else would you need?
Have a look here. That should give you all the math behind doing coordinate rotations.
You need to find a transform from the resultant array back to the original image: transform each point in the destination to a point in the source image and copy the pixel. Anti-aliasing via oversampling is also an option. Your rotation matrix can also apply a scaling; just multiply the matrix by the scale factor (this assumes a 2x2 matrix). If you're using a 3x3 matrix for rotation, scaling, and translation, then just multiply the upper-left 2x2 by the scale factor.
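That inverse mapping can be sketched as follows. rotateScale is an illustrative helper using nearest-neighbour sampling (no anti-aliasing); the two per-pixel equations the question asked for are in the comments:

```cpp
#include <vector>
#include <cmath>

// For every destination pixel (x, y), apply the inverse transform
// (rotate by -angle about the image centre, divide by the scale) and copy
// the nearest source pixel. Pixels mapping outside the source become 0.
//   sx = cx + ( cos(a) * (x - cx) + sin(a) * (y - cy)) / scale
//   sy = cy + (-sin(a) * (x - cx) + cos(a) * (y - cy)) / scale
std::vector<int> rotateScale(const std::vector<int>& src, int w, int h,
                             double angle, double scale) {
    std::vector<int> dst(w * h, 0);
    double cx = (w - 1) / 2.0, cy = (h - 1) / 2.0;
    double c = std::cos(angle), s = std::sin(angle);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sx = cx + ( c * (x - cx) + s * (y - cy)) / scale;
            double sy = cy + (-s * (x - cx) + c * (y - cy)) / scale;
            int ix = (int)std::lround(sx), iy = (int)std::lround(sy);
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                dst[y * w + x] = src[iy * w + ix];
        }
    return dst;
}
```

Working destination-to-source (rather than source-to-destination) guarantees every output pixel gets exactly one value, with no holes.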
Lastly, at the risk of dating myself, here is a link to some old TP6/asm DOS code I wrote for doing full-screen roto-zooming. Strange the stuff that sticks around on the net:
http://www.hornet.org/cgi-bin/scene-search.cgi?search=Paul%20H.%20Kahler
Everything you need to do can be done with Bitmap images in GDI+ (using the System.Drawing... namespaces). These classes are designed and optimized for doing exactly this sort of thing (image manipulation). Is there any particular reason you can't work with an actual Bitmap instead of an int[,]? You could even write a very simple routine to create a Bitmap from an int[,], do whatever you need to do on the Bitmap, and then convert it back to int[,] at the end.