I'm stuck on mapping a texture to a square in OpenGL ES. I'm trying to display a JPG image on the screen, and to do that I draw a square that I then want to map the image onto. However, all I get as output is a white square, and I don't know what I'm doing wrong. This problem is preventing me from moving forward with my project. I'm using the Managed OpenGL ES wrapper for Windows Mobile.
I verified that the texture is loading correctly, but I can't apply it to my object. I uploaded a sample project that shows my problem here. You would need VS2008 with the Windows Mobile 6 SDK to be able to run it. I'm also posting the code of the Form that renders and textures the object below. Any suggestions would be much appreciated, since I've been stuck on this problem for a while and can't figure out what I'm doing wrong.
public partial class Form1 : Form
{
[DllImport("coredll")]
extern static IntPtr GetDC(IntPtr hwnd);
EGLDisplay myDisplay;
EGLSurface mySurface;
EGLContext myContext;
public Form1()
{
InitializeComponent();
myDisplay = egl.GetDisplay(new EGLNativeDisplayType(this));
int major, minor;
egl.Initialize(myDisplay, out major, out minor);
EGLConfig[] configs = new EGLConfig[10];
int[] attribList = new int[]
{
egl.EGL_RED_SIZE, 5,
egl.EGL_GREEN_SIZE, 6,
egl.EGL_BLUE_SIZE, 5,
egl.EGL_DEPTH_SIZE, 16 ,
egl.EGL_SURFACE_TYPE, egl.EGL_WINDOW_BIT,
egl.EGL_STENCIL_SIZE, egl.EGL_DONT_CARE,
egl.EGL_NONE, egl.EGL_NONE
};
int numConfig;
if (!egl.ChooseConfig(myDisplay, attribList, configs, configs.Length, out numConfig) || numConfig < 1)
throw new InvalidOperationException("Unable to choose config.");
EGLConfig config = configs[0];
mySurface = egl.CreateWindowSurface(myDisplay, config, Handle, null);
myContext = egl.CreateContext(myDisplay, config, EGLContext.None, null);
egl.MakeCurrent(myDisplay, mySurface, mySurface, myContext);
gl.ClearColor(0, 0, 0, 0);
InitGL();
}
void InitGL()
{
gl.ShadeModel(gl.GL_SMOOTH);
gl.ClearColor(0.0f, 0.0f, 0.0f, 0.5f);
gl.BlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA);
gl.Hint(gl.GL_PERSPECTIVE_CORRECTION_HINT, gl.GL_NICEST);
}
public unsafe void DrawGLScene()
{
gl.MatrixMode(gl.GL_PROJECTION);
gl.LoadIdentity();
gl.Orthof(0, ClientSize.Width, ClientSize.Height, 0, 0, 1);
gl.Disable(gl.GL_DEPTH_TEST);
gl.MatrixMode(gl.GL_MODELVIEW);
gl.LoadIdentity();
Texture myImage;
Bitmap Image = new Bitmap(@"\Storage Card\Texture.jpg");
using (MemoryStream ms = new MemoryStream())
{
Image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
myImage = Texture.LoadStream(ms, false);
}
float[] rectangle = new float[] {
0, 0,
myImage.Width, 0,
0, myImage.Height,
myImage.Width, myImage.Height
};
float[] texturePosition = new float[] {
0, 0,
myImage.Width, 0,
0, myImage.Height,
myImage.Width, myImage.Height
};
//Bind texture
gl.BindTexture(gl.GL_TEXTURE_2D, myImage.Name);
gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR);
gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR);
gl.EnableClientState(gl.GL_TEXTURE_COORD_ARRAY);
gl.EnableClientState(gl.GL_VERTEX_ARRAY);
//draw square and texture it.
fixed (float* rectanglePointer = &rectangle[0], positionPointer = &texturePosition[0])
{
gl.TexCoordPointer(2, gl.GL_FLOAT, 0, (IntPtr)positionPointer);
gl.VertexPointer(2, gl.GL_FLOAT, 0, (IntPtr)rectanglePointer);
gl.DrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4);
}
gl.DisableClientState(gl.GL_TEXTURE_COORD_ARRAY);
gl.DisableClientState(gl.GL_VERTEX_ARRAY);
}
protected override void OnPaintBackground(PaintEventArgs e)
{
}
protected override void OnPaint(PaintEventArgs e)
{
base.OnPaint(e);
gl.Clear(gl.GL_COLOR_BUFFER_BIT);
DrawGLScene();
egl.SwapBuffers(myDisplay, mySurface);
gl.Clear(gl.GL_COLOR_BUFFER_BIT);
}
protected override void OnClosing(CancelEventArgs e)
{
if (!egl.DestroySurface(myDisplay, mySurface))
throw new Exception("Error while destroying surface.");
if (!egl.DestroyContext(myDisplay, myContext))
throw new Exception("Error while destroying context.");
if (!egl.Terminate(myDisplay))
throw new Exception("Error while terminating display.");
base.OnClosing(e);
}
}
You need to enable texturing:
glEnable( GL_TEXTURE_2D );
before rendering the square.
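In the managed wrapper used in the question that call maps to gl.Enable (assuming the wrapper mirrors the raw API the same way as the other gl calls in your code). A minimal sketch of one place it could go, here the question's InitGL, although any point before the DrawArrays call works as long as the texture is bound when you draw:
void InitGL()
{
    gl.ShadeModel(gl.GL_SMOOTH);
    gl.ClearColor(0.0f, 0.0f, 0.0f, 0.5f);
    gl.Enable(gl.GL_TEXTURE_2D);   // enable 2D texturing once during setup
    gl.BlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA);
    gl.Hint(gl.GL_PERSPECTIVE_CORRECTION_HINT, gl.GL_NICEST);
}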
If you work with OpenGL ES, also take a look at whether the glDrawTexImage extension is supported (well, it should be - it's a core extension and required, but you never know...).
It won't help with your problem directly (you still have to enable texturing), but glDrawTexImage is a lot more efficient than polygon rendering, and it takes less code to write as well.
If you are loading textures from PNG or JPG files using UIImage, CGImage and CGContext, it is very important to set GL_TEXTURE_MIN_FILTER to GL_LINEAR or GL_NEAREST before creating the textures. The default minification filter expects mipmaps, so a texture created without them is incomplete, and every texture except the last one bound will render as blank white.
Thanks for the help! However, your suggestion didn't fix the issue. The square is now black instead of white, but there's still no texture. I've tried adding gl.Enable(gl.GL_TEXTURE_2D) at every possible position, but the result is still a black square.
EDIT:
Oops, sorry - the top-left corner of my image was black, which is why I didn't see anything. I changed the image to have different colors, and now I can see part of the image rendered. It's not mapped properly yet, but I can figure that part out.
Thanks a lot for the help!
I'm trying to create a screenshot/bitmap of my screen. I wrote this function:
public static Bitmap CreateScreenshot(Rectangle bounds)
{
var bmpScreenshot = new Bitmap(bounds.Width, bounds.Height,
PixelFormat.Format32bppArgb);
var gfxScreenshot = Graphics.FromImage(bmpScreenshot);
gfxScreenshot.CopyFromScreen(bounds.X, bounds.Y,
0, 0,
new Size(bounds.Size.Width, bounds.Size.Height),
CopyPixelOperation.SourceCopy);
return bmpScreenshot;
}
This function is being called in my overlay form that should draw the bitmap onto itself. I'm currently using GDI+ for the whole process.
private void ScreenshotOverlay_Load(object sender, EventArgs e)
{
foreach (Screen screen in Screen.AllScreens)
Size += screen.Bounds.Size;
Location = Screen.PrimaryScreen.Bounds.Location;
_screenshot = BitmapHelper.CreateScreenshot(new Rectangle(new Point(0, 0), Size));
Invalidate(); // The screenshot/bitmap is drawn here
}
Yep, I dispose the bitmap later, so don't worry. ;)
On my laptop and desktop computer this works fine. I've tested this with different resolutions and the calculations are correct. I can see an image of the screen on the form.
The problem starts with the Surface 3. All elements are scaled by a factor of 1.5 (150%), which consequently means the DPI changes. If I try to take a screenshot there, it only captures roughly the upper-left part of the screen, not the whole thing.
I've made my way through Google and StackOverflow and tried out different things:
Get the DPI, divide it by 96 and multiply the size components (X and Y) of the screen with this factor.
Add an entry to application.manifest to make the application DPI-aware.
The first approach did not bring the desired result. The second one did, but then the whole application would have to be adjusted for DPI awareness, which is quite complicated in Windows Forms.
Now my question is: is there any way to capture a screenshot of the whole screen, even when it has a scaling factor higher than 1 (a higher DPI)?
There must be a way to do this in order to make it work everywhere.
But at this point I had no real search results that could help me.
Thanks in advance.
Try this approach, which is found within SharpAVI's library. It works well on devices regardless of resolution scale, and I have tested it on a Surface 3 at 150%.
System.Windows.Media.Matrix toDevice;
using (var source = new HwndSource(new HwndSourceParameters()))
{
toDevice = source.CompositionTarget.TransformToDevice;
}
screenWidth = (int)Math.Round(SystemParameters.PrimaryScreenWidth * toDevice.M11);
screenHeight = (int)Math.Round(SystemParameters.PrimaryScreenHeight * toDevice.M22);
SharpAVI can be found here: https://github.com/baSSiLL/SharpAvi It is for video, but it uses a similar CopyFromScreen call when grabbing each frame:
graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
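Putting those two pieces together, here is a minimal sketch of a full-screen capture helper. The method name is mine, and it assumes references to WindowsBase/PresentationCore (for HwndSource and SystemParameters) plus System.Drawing:
static System.Drawing.Bitmap CaptureFullScreen()
{
    // Ask WPF for the logical-to-device transform; M11/M22 are the X/Y scale factors.
    System.Windows.Media.Matrix toDevice;
    using (var source = new System.Windows.Interop.HwndSource(new System.Windows.Interop.HwndSourceParameters()))
    {
        toDevice = source.CompositionTarget.TransformToDevice;
    }
    // Convert the logical screen size to physical pixels (e.g. x1.5 on the Surface 3).
    int screenWidth = (int)Math.Round(System.Windows.SystemParameters.PrimaryScreenWidth * toDevice.M11);
    int screenHeight = (int)Math.Round(System.Windows.SystemParameters.PrimaryScreenHeight * toDevice.M22);
    var bmp = new System.Drawing.Bitmap(screenWidth, screenHeight);
    using (var graphics = System.Drawing.Graphics.FromImage(bmp))
    {
        graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
    }
    return bmp;
}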
Before taking your screen shot, you can make the process DPI aware:
[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern bool SetProcessDPIAware();
private static Bitmap Screenshot()
{
    // Opt the process into DPI awareness so Screen.Bounds reports physical pixels.
    SetProcessDPIAware();
    var screen = System.Windows.Forms.Screen.PrimaryScreen;
    var size = screen.Bounds.Size;
    Bitmap bmpScreenshot = new Bitmap(size.Width, size.Height);
    using (Graphics g = Graphics.FromImage(bmpScreenshot))
    {
        g.CopyFromScreen(0, 0, 0, 0, size);
    }
    return bmpScreenshot;
}
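Note that SetProcessDPIAware affects the whole process and should ideally be called early, before any windows are created. A quick usage sketch (the file path is just an example):
using (Bitmap shot = Screenshot())
{
    shot.Save(@"C:\temp\fullscreen.png", System.Drawing.Imaging.ImageFormat.Png);
}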
I am trying to draw a crosshair ("plus sign") with inverted colors over an image to show the location of a selected point within the image. This is how I do it:
private static void DrawInvertedCrosshair(Graphics g, Image img, PointF location, float length, float width)
{
float halfLength = length / 2f;
float halfWidth = width / 2f;
Rectangle absHorizRect = Rectangle.Round(new RectangleF(location.X - halfLength, location.Y - halfWidth, length, width));
Rectangle absVertRect = Rectangle.Round(new RectangleF(location.X - halfWidth, location.Y - halfLength, width, length));
ImageAttributes attributes = new ImageAttributes();
float[][] invertMatrix =
{
new float[] {-1, 0, 0, 0, 0 },
new float[] { 0, -1, 0, 0, 0 },
new float[] { 0, 0, -1, 0, 0 },
new float[] { 0, 0, 0, 1, 0 },
new float[] { 1, 1, 1, 0, 1 }
};
ColorMatrix matrix = new ColorMatrix(invertMatrix);
attributes.SetColorMatrix(matrix, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
g.DrawImage(img, absHorizRect, absHorizRect.X, absHorizRect.Y, absHorizRect.Width, absHorizRect.Height, GraphicsUnit.Pixel, attributes);
g.DrawImage(img, absVertRect, absVertRect.X, absVertRect.Y, absVertRect.Width, absVertRect.Height, GraphicsUnit.Pixel, attributes);
}
It works as expected, however, it is really slow. I want the user to be able to move the selected location around with their mouse by setting the location to the cursor's location whenever it moves. Unfortunately, on my computer, it can update only around once per second for big images.
So, I am looking for an alternative to using Graphics.DrawImage to invert a region of an image. Are there any ways to do this with speeds proportional to the selected region area rather than the entire image area?
Sounds to me like you are focusing on the wrong problem. Painting the image is what's slow, not painting the "cross-hairs".
Large images can certainly be very expensive to paint when you don't help, and System.Drawing makes it very easy to not help. There are two basic things you want to do to make the image paint faster; getting it more than 20 times faster is quite achievable:
avoid forcing the image painting code to rescale the image on every paint. Instead, do it just once so the image can be drawn directly one-to-one without any rescaling. The best time to do so is when you load the image, and possibly again in the control's Resize event handler.
pay attention to the pixel format of the image. The fastest one by a long shot is the pixel format that's directly compatible with the way the image needs to be stored in the video adapter, so the image data can be copied to video RAM without having to adjust each individual pixel. That format is PixelFormat.Format32bppPArgb on 99% of all modern machines. It makes a huge difference; it is ten times faster than all the other ones.
A simple helper method that accomplishes both without otherwise dealing with the aspect ratio:
private static Bitmap Resample(Image img, Size size) {
var bmp = new Bitmap(size.Width, size.Height, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
using (var gr = Graphics.FromImage(bmp)) {
gr.DrawImage(img, new Rectangle(Point.Empty, size));
}
return bmp;
}
Draw the image once onto Graphics g, then draw the crosshair directly on g instead of into the image. You can optionally keep track of the places the user clicked so you can save them, either in the image or elsewhere, as needed.
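A rough sketch of that approach, assuming the control keeps a prescaled copy in _displayBitmap (produced once by Resample above) and the selected point in _selection; the names are made up for illustration. Note this swaps the color-matrix inversion for a plain contrasting pen, which is part of what makes it fast - if you really need inverted pixels, run DrawInvertedCrosshair against just the two small regions of the prescaled bitmap instead of the full-size image:
private Bitmap _displayBitmap;   // prescaled Format32bppPArgb copy, created on load/resize
private PointF _selection;       // point picked by the user, in control coordinates
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    // Cheap: blit the already-scaled, already-converted bitmap one-to-one.
    e.Graphics.DrawImage(_displayBitmap, Point.Empty);
    // Cheap: draw the crosshair on the control surface, not into the image.
    using (var pen = new Pen(Color.Red, 2f))
    {
        e.Graphics.DrawLine(pen, _selection.X - 10, _selection.Y, _selection.X + 10, _selection.Y);
        e.Graphics.DrawLine(pen, _selection.X, _selection.Y - 10, _selection.X, _selection.Y + 10);
    }
}
private void OnMouseMoveSelect(object sender, MouseEventArgs e)
{
    _selection = e.Location;
    Invalidate();   // only the overlay changes; the cached bitmap is reused as-is
}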
I'm facing a really perplexing problem..
I have a .Net 2.0 C# WinForms project.
I'm trying to stretch a bitmap onto a drawing area, but for some reason it is not stretched properly - I get alpha channel gradient on the right and bottom margins of my drawing area.
It took me quite a while to isolate this problem. I wrote a few lines of code that reproduce it (see the code snippet and screenshot below).
Can anyone please shed some light over this matter?
Thanks in advance.
--
private void Form1_Paint( object sender, PaintEventArgs e )
{
// Create a black bitmap resource sized 10x10
Image resourceImg = new Bitmap( 10, 10 );
Graphics g = Graphics.FromImage( resourceImg );
g.FillRectangle( Brushes.Black, 0, 0, resourceImg.Width, resourceImg.Height );
Rectangle drawingArea = new Rectangle( 0, 0, 200, 200 ); // Set the size of the drawing area
e.Graphics.FillRectangle( Brushes.Aqua, drawingArea ); // Fill an aqua colored rectangle
e.Graphics.DrawImage( resourceImg, drawingArea ); // Stretch the resource image
// Expected result: the resource image completely covers the aqua rectangle.
// Actual result: the right and bottom edges fade to transparent (revealing the aqua rectangle underneath).
}
The behavior has to do with how GDI+ handles edges. In this case, you're scaling a very small image over a large area, and you haven't told GDI+ how to handle the edge. If you use the ImageAttributes class and set the WrapMode appropriately, you can get around this issue.
For example:
private void Form1_Paint(object sender, PaintEventArgs e)
{
using (var resourceImg = new Bitmap(10, 10))
{
using (var g = Graphics.FromImage(resourceImg))
{
g.FillRectangle(Brushes.Black, 0, 0,
resourceImg.Width, resourceImg.Height);
}
var drawingArea = new Rectangle(0, 0, 200, 200);
e.Graphics.FillRectangle(Brushes.Aqua, drawingArea);
using (var attribs = new ImageAttributes())
{
attribs.SetWrapMode(WrapMode.TileFlipXY);
e.Graphics.DrawImage(resourceImg, drawingArea,
0, 0, resourceImg.Width, resourceImg.Height,
GraphicsUnit.Pixel, attribs);
}
}
}
The above code should produce an all black image. If you comment out the attribs.SetWrapMode(WrapMode.TileFlipXY); statement, you should see the blue gradient. With the wrap mode set, you're telling GDI+ to flip the image at the edges, so it will pick up more black and not fade things out at the edge when it scales the image.
I've written some code using SlimDX and WPF where I would expect the end result to be a red screen.
Unfortunately all I get is a black screen.
This is on windows 7.
Can anyone see anything major I'm missing?
The reason I'm using a separate surface as the backbuffer for the D3DImage is that I'm going to need multiple viewports. I thought that rendering to separate surfaces instead of the device's initial backbuffer would be the best way to achieve that.
anyway, on with the code..
Disclaimer: Please ignore the bad code, this is written entirely as throw-away code just so I can figure out how to do achieve what I'm after.
Here's my window class:
namespace SlimDXWithWpf
{
/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
SlimDXRenderer controller;
public MainWindow()
{
InitializeComponent();
controller = new SlimDXRenderer();
controller.Initialize();
D3DImage image = new D3DImage();
image.Lock();
controller.RenderToSurface();
image.SetBackBuffer(D3DResourceType.IDirect3DSurface9, controller.SurfacePointer);
image.AddDirtyRect(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight));
image.Unlock();
Background = new ImageBrush(image);
}
}
}
And heres my "renderer" class
namespace SlimDXWithWpf
{
public class SlimDXRenderer : IDisposable
{
Direct3DEx directX;
DeviceEx device;
Surface surface;
Surface backBuffer;
IntPtr surfacePointer;
public IntPtr SurfacePointer
{
get
{
return surfacePointer;
}
}
public void Initialize()
{
directX = new Direct3DEx();
HwndSource hwnd = new HwndSource(0, 0, 0, 0, 0, 640, 480, "SlimDXControl", IntPtr.Zero);
PresentParameters pp = new PresentParameters()
{
BackBufferCount = 1,
BackBufferFormat = Format.A8R8G8B8,
BackBufferWidth = 640,
BackBufferHeight = 480,
DeviceWindowHandle = hwnd.Handle,
PresentationInterval = PresentInterval.Immediate,
Windowed = true,
SwapEffect = SwapEffect.Discard
};
device = new DeviceEx(directX, 0, DeviceType.Hardware, hwnd.Handle, CreateFlags.HardwareVertexProcessing, pp);
backBuffer = device.GetRenderTarget(0);
surface = Surface.CreateRenderTarget(device, 1024, 768, Format.A8R8G8B8, MultisampleType.None, 1, false);
surfacePointer = surface.ComPointer;
}
public void RenderToSurface()
{
device.SetRenderTarget(0, surface);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 0f, 0);
device.BeginScene();
device.EndScene();
}
public void Dispose()
{
surface.Dispose();
device.Dispose();
directX.Dispose();
}
}
}
-- Edit: For a second I thought I'd solved it, but it seems it only works when my second render target (the one I'm trying to clear to red) is 640x480. Any thoughts?
Did you base some of this code on the SlimDX WPF sample? It looks like you might have, which is why your Clear() call is using 0.0f for the Z clear value... which is a bug in our sample. It should be 1.0f.
Beyond that, the only potential issue I see is that your surface render target is a different size than your back buffer, but that should not actually cause problems. Have you tried rendering to the device's backbuffer (Device.GetBackBuffer()) instead of a new surface to see what impact that has?
In your device.Clear call, change the first numeric argument from 0f to 1f. That's the z-depth, which ranges from 0 to 1; clearing it to 0 effectively does nothing.
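That is, keeping everything else in the question's RenderToSurface the same:
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 1f, 0);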
I'm building an application that captures video frames from a camera (30 fps @ 640x480), processes them, and then displays them on a Windows Form. I was initially using DrawImage (see the code below), but the performance was terrible. Even with the processing step disabled, the best I can get is 20 fps on a 2.8 GHz Core 2 Duo machine. Double buffering is enabled on the Windows Form; otherwise I get tearing.
Note: The image used is a Bitmap of format Format24bppRgb. I know that DrawImage is supposed to be faster with a Format32bppArgb formatted image but I am restricted by the format that comes out of the frame grabber.
private void CameraViewForm_Paint(object sender, PaintEventArgs e)
{
Graphics g = e.Graphics;
// Maximize performance
g.CompositingMode = CompositingMode.SourceOver;
g.PixelOffsetMode = PixelOffsetMode.HighSpeed;
g.CompositingQuality = CompositingQuality.HighSpeed;
g.InterpolationMode = InterpolationMode.NearestNeighbor;
g.SmoothingMode = SmoothingMode.None;
g.DrawImage(currentFrame, displayRectangle);
}
I tried using Managed DirectX 9 with Textures and Sprites (see below), but the performance was even worse. I'm very new to DirectX programming, so this may not be the best DirectX code.
private void CameraViewForm_Paint(object sender, PaintEventArgs e)
{
device.Clear(ClearFlags.Target, Color.Black, 1.0f, 0);
device.BeginScene();
Texture texture = new Texture(device, currentFrame, Usage.None, Pool.Managed);
Rectangle textureSize;
using (Surface surface = texture.GetSurfaceLevel(0))
{
SurfaceDescription surfaceDescription = surface.Description;
textureSize = new Rectangle(0, 0, surfaceDescription.Width, surfaceDescription.Height);
}
Sprite sprite = new Sprite(device);
sprite.Begin(SpriteFlags.None);
sprite.Draw(texture, textureSize, new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);
sprite.End();
device.EndScene();
device.Present();
sprite.Dispose();
texture.Dispose();
}
I need this to work on XP, Vista and Windows 7. I don't know if it's worth trying XNA or OpenGL. This seems like it should be a very simple thing to accomplish.
The answer is right in front of you. This is a late answer to an old question, but someone might need it, so...
Instead of declaring new textures and rectangles inside your draw loop (which is quite heavy on resources), why not create the texture outside that scope? In place of your version, for example, try this:
Texture texture;
Rectangle textureSize;
private void InitiateTexture()
{
texture = new Texture(device, new Bitmap("CAR.jpg"), Usage.None, Pool.Managed);
using (Surface surface = texture.GetSurfaceLevel(0))
{
SurfaceDescription surfaceDescription = surface.Description;
textureSize = new Rectangle(0, 0,
surfaceDescription.Width,
surfaceDescription.Height);
}
}
protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
{
device.Clear(ClearFlags.Target, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
Sprite sprite = new Sprite(device);
sprite.Begin(SpriteFlags.None);
sprite.Draw(texture, textureSize,
new Vector3(0, 0, 0),
new Vector3(0, 0, 0), Color.White);
sprite.End();
device.EndScene();
device.Present();
}
And then, if you need to initialize multiple textures, load them into a list and paint from that list. It saves allocating resources and freeing them on every paint.
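For example (a small wiring sketch - where exactly it goes depends on your setup), call it once right after the device is created instead of on every frame:
// during initialization, after the Device has been created
InitiateTexture();
// OnPaint then only binds and draws the prebuilt texture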
Is CameraViewForm your entire viewer window? If so, then on Paint, the entire window is redrawn, including all buttons, progress bars, etc. This will be more or less expensive depending on the number of controls on your form, and the visual doo-dads you have enabled for desktop elements in various OSes (e.g. window bar transparency).
Try single-buffering the entire form, but give the Panel (from which I assume you get displayRectangle) its own BufferedGraphicsContext by declaring a new one when you call CreateGraphics() or before calling Invalidate() on the Panel. This allows the Panel to be double-buffered separately from the Form.
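A minimal sketch of that idea, assuming the video is drawn into a Panel named displayPanel and System.Drawing.Drawing2D is imported; the names here are illustrative, not from the original code:
private BufferedGraphicsContext _panelContext;
private BufferedGraphics _panelBuffer;
private void InitPanelBuffer()
{
    // A dedicated context so the panel is double-buffered independently of the form.
    _panelContext = new BufferedGraphicsContext();
    _panelBuffer = _panelContext.Allocate(displayPanel.CreateGraphics(), displayPanel.ClientRectangle);
}
private void DrawFrame(Bitmap currentFrame)
{
    Graphics g = _panelBuffer.Graphics;
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.DrawImage(currentFrame, displayPanel.ClientRectangle);
    _panelBuffer.Render();   // blit the back buffer to the panel in one step
}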