I have a function that gets called several times in a loop and returns a view controller (each view controller is like a slide of a presentation). Each UIViewController has subviews such as a UIScrollView, a UIWebView and a UIImageView. As the number of view controllers increases, I get memory warnings and the app crashes. I have wrapped the function in an NSAutoreleasePool and I am using a "using" block to dispose the UIImages, but it doesn't help.
Can you look at the code and tell me what's wrong and how I can solve the memory problem?
public UIViewController GetPage(int i){
    using (var pool = new NSAutoreleasePool ()) {
        UIViewController c = new UIViewController ();
        c.View.Frame = new RectangleF (0, 0, 1024, 743);
        UIScrollView scr = new UIScrollView (new RectangleF (0, 0, 1024, 743));
        scr.ContentSize = new SizeF (1024, 748);
        string contentDirectoryPath = // path string
        UIWebView asset = new UIWebView (new RectangleF (0, 0, 1024, 748));
        asset.ScalesPageToFit = true;
        UIScrollView scrl = new UIScrollView (new RectangleF (0, 0, 1024, 748));
        scrl.ContentSize = new SizeF (1024, 768);
        using (UIImageView imgView = new UIImageView (new RectangleF (0, 0, 1024, 748))) {
            using (var img = UIImage.FromFile (contentDirectoryPath)) {
                imgView.Image = img;
                img.Dispose ();
            }
            //UIImage img = UIImage.FromFile (contentDirectoryPath);
            //imgView.Image = img;
            imgView.ContentMode = UIViewContentMode.ScaleAspectFit;
            float widthRatio = imgView.Bounds.Size.Width / imgView.Image.Size.Width;
            float heightRatio = imgView.Bounds.Size.Height / imgView.Image.Size.Height;
            float scale = Math.Min (widthRatio, heightRatio);
            float imageWidth = scale * imgView.Image.Size.Width;
            float imageHeight = scale * imgView.Image.Size.Height;
            imgView.Frame = new RectangleF (0, 0, imageWidth, imageHeight);
            scrl.AddSubview (imgView);
            imgView.Center = imgView.Superview.Center;
            imgView.Dispose ();
        }
        asset.AddSubview (scrl);
        scrl.Center = scrl.Superview.Center;
        asset.ScalesPageToFit = true;
        scr.AddSubview (asset);
        c.View.AddSubview (scr);
        scr.Center = scr.Superview.Center;
        return c;
    }
}
Using NSAutoreleasePool won't help you, and neither will disposing the image or the image view (in fact, your code disposes the UIImageView you're adding to the UIScrollView; I'm actually wondering why this works at all).
If you are creating tons of instances of a UIViewController that hosts a lot of subviews and especially large images, you will eventually run out of memory as long as you keep references to these controllers somewhere.
From what I see, your image is 1024x768, probably at 32-bit color depth; that's 3 megabytes per image in memory.
You will have to get rid of the view controllers you're creating, or make them smarter (for instance, load and dispose the images in ViewWillAppear() and ViewWillDisappear(), as sketched below).
The question is why are you creating multiple view controllers with the same view (different content)? Why not create a single view controller and then change its content in ViewDidLoad()? I don't see you using the passed integer anywhere, so is this your actual code or just a sample of what you are doing?
If you want to change the content of the controller without reloading, create a function to change the values. For example:
public void Bind(UIImage image)
{
    this.imgView.Image = image;
}
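To illustrate the other suggestion above (loading in ViewWillAppear() and disposing in ViewWillDisappear()), here is a minimal sketch. It assumes the controller keeps the image path and the image view in fields; the names PageController, contentDirectoryPath and imgView are placeholders, not taken from your code:
public class PageController : UIViewController
{
    string contentDirectoryPath; // path to this slide's image (assumed field)
    UIImageView imgView;         // created in ViewDidLoad (not shown)

    public override void ViewWillAppear (bool animated)
    {
        base.ViewWillAppear (animated);
        // Load the large bitmap only when the slide is about to become visible.
        imgView.Image = UIImage.FromFile (contentDirectoryPath);
    }

    public override void ViewWillDisappear (bool animated)
    {
        base.ViewWillDisappear (animated);
        // Release the bitmap as soon as the slide goes off-screen.
        if (imgView.Image != null) {
            imgView.Image.Dispose ();
            imgView.Image = null;
        }
    }
}
That way only the slides that are actually on screen hold their bitmaps in memory.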
I would like to implement hardware acceleration for a C# WinForms application. The reason is that I have to draw 150 720p images, and the five PictureBox controls take too long (scaling and drawing the images), which leads to problems with disposing and reloading.
So I looked into SharpDX.
But now I'm stuck and do not know how to draw the 2D texture. To test the code I just have a test button and one PictureBox.
When I run the code, DirectX (2D or 3D) is loaded into the PictureBox; I can see the black background. But I do not understand how the texture must be drawn.
String imageFile = "Image.JPG";
Control TargetControl = this.pictureBoxCurrentFrameL;
int TotalWidth = TargetControl.Width;
int TotalHeight = TargetControl.Height;
SharpDX.Direct3D11.Device defaultDevice = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware, SharpDX.Direct3D11.DeviceCreationFlags.Debug);
SharpDX.Toolkit.Graphics.GraphicsDevice graphicsDevice = SharpDX.Toolkit.Graphics.GraphicsDevice.New(defaultDevice);
SharpDX.Toolkit.Graphics.PresentationParameters presentationParameters = new SharpDX.Toolkit.Graphics.PresentationParameters();
presentationParameters.DeviceWindowHandle = this.pictureBoxCurrentFrameL.Handle;
presentationParameters.BackBufferWidth = TotalWidth;
presentationParameters.BackBufferHeight = TotalHeight;
SharpDX.Toolkit.Graphics.SwapChainGraphicsPresenter swapChainGraphicsPresenter = new SharpDX.Toolkit.Graphics.SwapChainGraphicsPresenter(graphicsDevice, presentationParameters);
SharpDX.Toolkit.Graphics.Texture2D texture2D = SharpDX.Toolkit.Graphics.Texture2D.Load(graphicsDevice, imageFile);
//Now I should draw. But how?
swapChainGraphicsPresenter.Present();
Using Microsoft Visual Studio Community 2015 (.NET 4, C# WinForms) on Windows 10 and SharpDX-SDK-2.6.3.
Thank you for your assistance.
I solved the problem by simply switching to SlimDX (SlimDX Runtime .NET 4.0 x64 January 2012.msi, .NET 4, Windows 10, MS Visual Studio Community 2015, WinForms app). There are several useful tutorials.
To use SlimDX, just reference the single SlimDX.dll in your project. After installing SlimDX you will find the SlimDX.dll file somewhere on your C: drive.
It is important to understand that you need at least one factory and a render target for Direct2D. The render target points to the object to be drawn on (control, form, etc.) and takes over the drawing.
A swap chain is not needed; it is probably used internally by the render target. The biggest part is converting a bitmap into a usable Direct2D bitmap (for drawing). Alternatively, you can also process the bitmap data from a MemoryStream.
For those who are looking for a solution too:
Control targetControl = this.pictureBoxCurrentFrameL;
String imageFile = "Image.JPG";
//Update control styles; works for forms, not for controls. I solve this differently later.
//this.SetStyle(ControlStyles.AllPaintingInWmPaint, true);
//this.SetStyle(ControlStyles.Opaque, true);
//this.SetStyle(ControlStyles.ResizeRedraw, true);
//Get requested debug level
SlimDX.Direct2D.DebugLevel debugLevel = SlimDX.Direct2D.DebugLevel.None;
//Resources for Direct2D rendering
SlimDX.Direct2D.Factory d2dFactory = new SlimDX.Direct2D.Factory(SlimDX.Direct2D.FactoryType.Multithreaded, debugLevel);
//Create the render target
SlimDX.Direct2D.WindowRenderTarget d2dWindowRenderTarget = new SlimDX.Direct2D.WindowRenderTarget(d2dFactory, new SlimDX.Direct2D.WindowRenderTargetProperties() {
    Handle = targetControl.Handle,
    PixelSize = targetControl.Size,
    PresentOptions = SlimDX.Direct2D.PresentOptions.Immediately
});
//Paint!
d2dWindowRenderTarget.BeginDraw();
d2dWindowRenderTarget.Clear(new SlimDX.Color4(Color.LightSteelBlue));
//Convert System.Drawing.Bitmap into SlimDX.Direct2D.Bitmap!
System.Drawing.Bitmap bitmap = (System.Drawing.Bitmap)Properties.Resources.Image_720p;//loaded from embedded resource, can be changed to Bitmap.FromFile(imageFile); to load from hdd!
SlimDX.Direct2D.Bitmap d2dBitmap = null;
System.Drawing.Imaging.BitmapData bitmapData = bitmap.LockBits(new Rectangle(new Point(0, 0), bitmap.Size), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);//TODO: PixelFormat is very important!!! Check!
SlimDX.DataStream dataStream = new SlimDX.DataStream(bitmapData.Scan0, bitmapData.Stride * bitmapData.Height, true, false);
SlimDX.Direct2D.PixelFormat d2dPixelFormat = new SlimDX.Direct2D.PixelFormat(SlimDX.DXGI.Format.B8G8R8A8_UNorm, SlimDX.Direct2D.AlphaMode.Premultiplied);
SlimDX.Direct2D.BitmapProperties d2dBitmapProperties = new SlimDX.Direct2D.BitmapProperties();
d2dBitmapProperties.PixelFormat = d2dPixelFormat;
d2dBitmap = new SlimDX.Direct2D.Bitmap(d2dWindowRenderTarget, new Size(bitmap.Width, bitmap.Height), dataStream, bitmapData.Stride, d2dBitmapProperties);
bitmap.UnlockBits(bitmapData);
//Draw SlimDX.Direct2D.Bitmap
d2dWindowRenderTarget.DrawBitmap(d2dBitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height));
d2dWindowRenderTarget.EndDraw();
//Dispose everything you don't need anymore.
//bitmap.Dispose();//......
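For completeness, here is a sketch of the cleanup once the control no longer needs the Direct2D resources, using the names from the code above:
//Dispose in roughly reverse order of creation when you are done rendering.
d2dBitmap.Dispose();
dataStream.Dispose();
bitmap.Dispose();
d2dWindowRenderTarget.Dispose();
d2dFactory.Dispose();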
So it is super simple to use Direct2D; all the code can be compressed to two main lines plus the drawing:
SlimDX.Direct2D.Factory d2dFactory = new SlimDX.Direct2D.Factory(SlimDX.Direct2D.FactoryType.Multithreaded, SlimDX.Direct2D.DebugLevel.None);
SlimDX.Direct2D.WindowRenderTarget d2dWindowRenderTarget = new SlimDX.Direct2D.WindowRenderTarget(d2dFactory, new SlimDX.Direct2D.WindowRenderTargetProperties() { Handle = targetControl.Handle, PixelSize = targetControl.Size, PresentOptions = SlimDX.Direct2D.PresentOptions.Immediately });
d2dWindowRenderTarget.BeginDraw();
d2dWindowRenderTarget.Clear(new SlimDX.Color4(Color.LightSteelBlue));
d2dWindowRenderTarget.DrawRectangle(new SlimDX.Direct2D.SolidColorBrush(d2dWindowRenderTarget, new SlimDX.Color4(Color.Red)), new Rectangle(20,20, targetControl.Width-40, targetControl.Height-40));
d2dWindowRenderTarget.EndDraw();
I'm trying to create a screenshot/bitmap of my screen. I wrote this function:
public static Bitmap CreateScreenshot(Rectangle bounds)
{
    var bmpScreenshot = new Bitmap(bounds.Width, bounds.Height,
        PixelFormat.Format32bppArgb);
    var gfxScreenshot = Graphics.FromImage(bmpScreenshot);
    gfxScreenshot.CopyFromScreen(bounds.X, bounds.Y,
        0, 0,
        new Size(bounds.Size.Width, bounds.Size.Height),
        CopyPixelOperation.SourceCopy);
    return bmpScreenshot;
}
This function is being called in my overlay form that should draw the bitmap onto itself. I'm currently using GDI+ for the whole process.
private void ScreenshotOverlay_Load(object sender, EventArgs e)
{
    foreach (Screen screen in Screen.AllScreens)
        Size += screen.Bounds.Size;
    Location = Screen.PrimaryScreen.Bounds.Location;
    _screenshot = BitmapHelper.CreateScreenshot(new Rectangle(new Point(0, 0), Size));
    Invalidate(); // The screenshot/bitmap is drawn here
}
Yep, I dispose the bitmap later, so don't worry. ;)
On my laptop and desktop computer this works fine. I've tested this with different resolutions and the calculations are correct. I can see an image of the screen on the form.
The problem starts with the Surface 3. All elements are scaled by a factor of 1.5 (150%), which means the DPI changes. If I try to take a screenshot there, it only captures roughly the upper-left part of the screen, not the whole thing.
I've made my way through Google and StackOverflow and tried out different things:
Get the DPI, divide it by 96 and multiply the size components (X and Y) of the screen by this factor.
Add an entry to application.manifest to make the application DPI-aware.
The first approach did not bring the desired result. The second one did, but then the whole application would have to be adjusted for DPI awareness, which is quite complicated in Windows Forms.
Now my question would be: is there any way to capture a screenshot of the whole screen, even if it has a scaling factor higher than 1 (higher DPI)?
There must be a way to do this in order to make it work everywhere.
But so far I have not found any search results that could help me.
Thanks in advance.
Try this, which comes from the SharpAVI library. It works well on devices regardless of the resolution scale, and I have tested it on a Surface 3 at 150%.
System.Windows.Media.Matrix toDevice;
using (var source = new HwndSource(new HwndSourceParameters()))
{
toDevice = source.CompositionTarget.TransformToDevice;
}
int screenWidth = (int)Math.Round(SystemParameters.PrimaryScreenWidth * toDevice.M11);
int screenHeight = (int)Math.Round(SystemParameters.PrimaryScreenHeight * toDevice.M22);
SharpAVI can be found here: https://github.com/baSSiLL/SharpAvi. It is for recording video, but it uses a similar CopyFromScreen call when grabbing each frame:
graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
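Put together, here is a minimal sketch of a DPI-aware capture helper based on the snippets above. It assumes the project references the WPF assemblies for HwndSource and SystemParameters in addition to System.Drawing; the ScreenCapture class and the CaptureFullScreen method name are just illustrations:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows;          // SystemParameters
using System.Windows.Interop;  // HwndSource

static class ScreenCapture
{
    public static Bitmap CaptureFullScreen()
    {
        // Ask WPF for the logical-to-device transform so the capture size
        // matches the physical pixel size even at 150% scaling.
        System.Windows.Media.Matrix toDevice;
        using (var source = new HwndSource(new HwndSourceParameters()))
        {
            toDevice = source.CompositionTarget.TransformToDevice;
        }
        int screenWidth = (int)Math.Round(SystemParameters.PrimaryScreenWidth * toDevice.M11);
        int screenHeight = (int)Math.Round(SystemParameters.PrimaryScreenHeight * toDevice.M22);

        var bmp = new Bitmap(screenWidth, screenHeight, PixelFormat.Format32bppArgb);
        using (var graphics = Graphics.FromImage(bmp))
        {
            graphics.CopyFromScreen(0, 0, 0, 0, new System.Drawing.Size(screenWidth, screenHeight));
        }
        return bmp;
    }
}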
Before taking your screenshot, you can make the process DPI aware:
[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern bool SetProcessDPIAware();
private static Bitmap Screenshot()
{
    SetProcessDPIAware();
    var screen = System.Windows.Forms.Screen.PrimaryScreen;
    var rect = screen.Bounds;
    var size = rect.Size;
    Bitmap bmpScreenshot = new Bitmap(size.Width, size.Height);
    Graphics g = Graphics.FromImage(bmpScreenshot);
    g.CopyFromScreen(0, 0, 0, 0, size);
    return bmpScreenshot;
}
I have a collection of bitmap images that I am looping through and writing them all to one new bitmap. Basically I am taking a loose collection of bitmaps and writing them all to one bitmap, one after another, so that they are visible as one image.
When I call dc.DrawImage from one of the bitmaps in the collection onto the new bitmap, my WinForm shows a big red X. When I set PictureBox.Image to the newly drawn bitmap, I get a big red X.
For some reason I cannot find the error anywhere; I am not able to locate it with debugging.
Now, if I just set PictureBox.Image to one of the images in the collection, without looping and drawing onto a new bitmap, everything works fine.
To make things easier, I am only working with one bitmap from the collection and drawing that one bitmap onto the new bitmap. Once I have that one bitmap working, I can add the other ones.
The first image below shows what the form looks like if I just set PictureBox.Image to the image in the collection.
The second image shows the error that appears after I loop and draw the bitmap in the collection onto another bitmap.
The code below is what needs to work, but throws an error.
Notice where I am setting the PictureBox.Image property like so:
this.picBx.Image = schedule; this causes the error.
But if I set PictureBox.Image like so:
this.picBx.Image = schedules[0].Door; it works just fine.
DoorSchedules schedules = GetDoorDrawing(elev, projInfo.ProjectName);
int prevWidth = 0;
//
using (Bitmap schedule = new Bitmap(schedules.Width + 50, schedules.Height + 50))
{
    using (Graphics dc = Graphics.FromImage(schedule))
    {
        using (Pen pen = new Pen(LINE_COLOR))
        {
            pen.Width = 4;
            pen.Color =
                Color.FromArgb(50, LINE_COLOR.R, LINE_COLOR.G, LINE_COLOR.B);
            //
            for (byte i = 0; i < schedules.Count; i++)
            {
                if (i > 0)
                {
                    dc.DrawLine(pen, prevWidth - 25, 0,
                        prevWidth - 25, schedule.Height);
                };
                dc.DrawImage(schedules[i].Door, prevWidth, 0);
                prevWidth += schedules[i].Door.Width;
            };
        };
    };
    this.picBx.Image = schedule;
    this.picBx.BackColor = BACK_COLOR;
    this.Size = new System.Drawing.Size(schedule.Width, schedule.Height);
};
You have Bitmap schedule defined in a using statement.
When that using block ends, the bitmap is disposed.
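One way to fix it, keeping the variable names from the question, is to stop wrapping the bitmap you hand to the PictureBox in a using block and instead dispose only the previously displayed image. A sketch, not tested against the rest of your code:
// Only the Graphics and Pen are temporary; the bitmap must outlive this method
// because the PictureBox keeps drawing from it.
Bitmap schedule = new Bitmap(schedules.Width + 50, schedules.Height + 50);
using (Graphics dc = Graphics.FromImage(schedule))
using (Pen pen = new Pen(Color.FromArgb(50, LINE_COLOR.R, LINE_COLOR.G, LINE_COLOR.B), 4))
{
    int prevWidth = 0;
    for (int i = 0; i < schedules.Count; i++)
    {
        if (i > 0)
            dc.DrawLine(pen, prevWidth - 25, 0, prevWidth - 25, schedule.Height);
        dc.DrawImage(schedules[i].Door, prevWidth, 0);
        prevWidth += schedules[i].Door.Width;
    }
}
if (this.picBx.Image != null)
    this.picBx.Image.Dispose(); // release the bitmap shown previously, if any
this.picBx.Image = schedule;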
I've written some code using SlimDX and WPF where I would expect the end result to be a red screen.
Unfortunately all I get is a black screen.
This is on windows 7.
Can anyone see anything major I'm missing?
The reason I'm using a separate surface as the backbuffer for the D3DImage is that I am going to need multiple viewports. I thought that rendering to separate surfaces instead of the device's initial backbuffer would be the best way to achieve that.
Anyway, on with the code...
Disclaimer: Please ignore the bad code; this is written entirely as throw-away code just so I can figure out how to achieve what I'm after.
Here's my window class:
namespace SlimDXWithWpf
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        SlimDXRenderer controller;

        public MainWindow()
        {
            InitializeComponent();
            controller = new SlimDXRenderer();
            controller.Initialize();
            D3DImage image = new D3DImage();
            image.Lock();
            controller.RenderToSurface();
            image.SetBackBuffer(D3DResourceType.IDirect3DSurface9, controller.SurfacePointer);
            image.AddDirtyRect(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight));
            image.Unlock();
            Background = new ImageBrush(image);
        }
    }
}
And heres my "renderer" class
namespace SlimDXWithWpf
{
    public class SlimDXRenderer : IDisposable
    {
        Direct3DEx directX;
        DeviceEx device;
        Surface surface;
        Surface backBuffer;
        IntPtr surfacePointer;

        public IntPtr SurfacePointer
        {
            get
            {
                return surfacePointer;
            }
        }

        public void Initialize()
        {
            directX = new Direct3DEx();
            HwndSource hwnd = new HwndSource(0, 0, 0, 0, 0, 640, 480, "SlimDXControl", IntPtr.Zero);
            PresentParameters pp = new PresentParameters()
            {
                BackBufferCount = 1,
                BackBufferFormat = Format.A8R8G8B8,
                BackBufferWidth = 640,
                BackBufferHeight = 480,
                DeviceWindowHandle = hwnd.Handle,
                PresentationInterval = PresentInterval.Immediate,
                Windowed = true,
                SwapEffect = SwapEffect.Discard
            };
            device = new DeviceEx(directX, 0, DeviceType.Hardware, hwnd.Handle, CreateFlags.HardwareVertexProcessing, pp);
            backBuffer = device.GetRenderTarget(0);
            surface = Surface.CreateRenderTarget(device, 1024, 768, Format.A8R8G8B8, MultisampleType.None, 1, false);
            surfacePointer = surface.ComPointer;
        }

        public void RenderToSurface()
        {
            device.SetRenderTarget(0, surface);
            device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 0f, 0);
            device.BeginScene();
            device.EndScene();
        }

        public void Dispose()
        {
            surface.Dispose();
            device.Dispose();
            directX.Dispose();
        }
    }
}
-- Edit: For a second I had thought I'd solved it, but it seems it will only work when my second render target (the one I'm trying to clear red) is 640x480. Any thoughts?
Did you base some of this code on the SlimDX WPF sample? It looks like you might have, which is why your Clear() call is using 0.0f for the Z clear value... which is a bug in our sample. It should be 1.0f.
Beyond that, the only potential issue I see is that your surface render target is a different size than your back buffer, but that should not actually cause problems. Have you tried rendering to the device's backbuffer (Device.GetBackBuffer()) instead of a new surface to see what impact that has?
In your device.Clear call, change the first numeric argument from 0f to 1f. That's the z-depth which ranges from 0 to 1. Specifying a z-depth of 0 effectively does nothing.
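For reference, the corrected Clear call from the question's RenderToSurface() would look like this:
// The third argument is the z-depth to clear to; it must be 1.0f, not 0.0f.
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, new Color4(Color.Red), 1.0f, 0);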
I'm stuck at not being able to map a texture onto a square in OpenGL ES. I'm trying to display a JPG image on the screen, and in order to do that I draw a square that I then want to map the image onto. However, all I get as output is a white square. I don't know what I'm doing wrong, and this problem is preventing me from moving forward with my project. I'm using the managed OpenGL ES wrapper for Windows Mobile.
I verified that the texture is loading correctly, but I can't apply it to my object. I uploaded a sample project that shows my problem here. You would need VS2008 with the Windows Mobile 6 SDK to be able to run it. I'm also posting the code of the form that renders and textures the object here. Any suggestions would be much appreciated, since I've been stuck on this problem for a while and I can't figure out what I'm doing wrong.
public partial class Form1 : Form
{
    [DllImport("coredll")]
    extern static IntPtr GetDC(IntPtr hwnd);

    EGLDisplay myDisplay;
    EGLSurface mySurface;
    EGLContext myContext;

    public Form1()
    {
        InitializeComponent();
        myDisplay = egl.GetDisplay(new EGLNativeDisplayType(this));
        int major, minor;
        egl.Initialize(myDisplay, out major, out minor);
        EGLConfig[] configs = new EGLConfig[10];
        int[] attribList = new int[]
        {
            egl.EGL_RED_SIZE, 5,
            egl.EGL_GREEN_SIZE, 6,
            egl.EGL_BLUE_SIZE, 5,
            egl.EGL_DEPTH_SIZE, 16,
            egl.EGL_SURFACE_TYPE, egl.EGL_WINDOW_BIT,
            egl.EGL_STENCIL_SIZE, egl.EGL_DONT_CARE,
            egl.EGL_NONE, egl.EGL_NONE
        };
        int numConfig;
        if (!egl.ChooseConfig(myDisplay, attribList, configs, configs.Length, out numConfig) || numConfig < 1)
            throw new InvalidOperationException("Unable to choose config.");
        EGLConfig config = configs[0];
        mySurface = egl.CreateWindowSurface(myDisplay, config, Handle, null);
        myContext = egl.CreateContext(myDisplay, config, EGLContext.None, null);
        egl.MakeCurrent(myDisplay, mySurface, mySurface, myContext);
        gl.ClearColor(0, 0, 0, 0);
        InitGL();
    }

    void InitGL()
    {
        gl.ShadeModel(gl.GL_SMOOTH);
        gl.ClearColor(0.0f, 0.0f, 0.0f, 0.5f);
        gl.BlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA);
        gl.Hint(gl.GL_PERSPECTIVE_CORRECTION_HINT, gl.GL_NICEST);
    }

    public unsafe void DrawGLScene()
    {
        gl.MatrixMode(gl.GL_PROJECTION);
        gl.LoadIdentity();
        gl.Orthof(0, ClientSize.Width, ClientSize.Height, 0, 0, 1);
        gl.Disable(gl.GL_DEPTH_TEST);
        gl.MatrixMode(gl.GL_MODELVIEW);
        gl.LoadIdentity();
        Texture myImage;
        Bitmap Image = new Bitmap(@"\Storage Card\Texture.jpg");
        using (MemoryStream ms = new MemoryStream())
        {
            Image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);
            myImage = Texture.LoadStream(ms, false);
        }
        float[] rectangle = new float[] {
            0, 0,
            myImage.Width, 0,
            0, myImage.Height,
            myImage.Width, myImage.Height
        };
        float[] texturePosition = new float[] {
            0, 0,
            myImage.Width, 0,
            0, myImage.Height,
            myImage.Width, myImage.Height
        };
        //Bind texture
        gl.BindTexture(gl.GL_TEXTURE_2D, myImage.Name);
        gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR);
        gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR);
        gl.EnableClientState(gl.GL_TEXTURE_COORD_ARRAY);
        gl.EnableClientState(gl.GL_VERTEX_ARRAY);
        //draw square and texture it.
        fixed (float* rectanglePointer = &rectangle[0], positionPointer = &texturePosition[0])
        {
            gl.TexCoordPointer(2, gl.GL_FLOAT, 0, (IntPtr)positionPointer);
            gl.VertexPointer(2, gl.GL_FLOAT, 0, (IntPtr)rectanglePointer);
            gl.DrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4);
        }
        gl.DisableClientState(gl.GL_TEXTURE_COORD_ARRAY);
        gl.DisableClientState(gl.GL_VERTEX_ARRAY);
    }

    protected override void OnPaintBackground(PaintEventArgs e)
    {
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        gl.Clear(gl.GL_COLOR_BUFFER_BIT);
        DrawGLScene();
        egl.SwapBuffers(myDisplay, mySurface);
        gl.Clear(gl.GL_COLOR_BUFFER_BIT);
    }

    protected override void OnClosing(CancelEventArgs e)
    {
        if (!egl.DestroySurface(myDisplay, mySurface))
            throw new Exception("Error while destroying surface.");
        if (!egl.DestroyContext(myDisplay, myContext))
            throw new Exception("Error while destroying context.");
        if (!egl.Terminate(myDisplay))
            throw new Exception("Error while terminating display.");
        base.OnClosing(e);
    }
}
You need to enable texturing:
glEnable( GL_TEXTURE_2D );
before rendering the square.
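In the managed wrapper used in the question, that call looks something like the following (a sketch; the follow-up comment below confirms the wrapper exposes it as gl.Enable), placed in InitGL() or at least before the textured draw call:
// Enable 2D texturing before drawing the textured square.
gl.Enable(gl.GL_TEXTURE_2D);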
If you work with OpenGL ES, also take a look at whether the glDrawTexImage extension is supported (well, it should be; it's a core extension and required, but you never know...).
It won't help you with your problem directly (you still have to enable texturing), but glDrawTexImage is a lot more efficient than polygon rendering, and it takes less code to write as well.
If you are loading textures from PNG or JPG files using UIImage, CGImage and CGContext, it is very important to set GL_TEXTURE_MIN_FILTER to GL_LINEAR or GL_NEAREST before creating the textures, because if you don't, all your textures except the last one bound will show up as blank white.
Thanks for the help! However, your suggestion didn't fix the issue. Now the square is black instead of white, but there is still no texture. I've tried adding gl.Enable(gl.GL_TEXTURE_2D) at every possible position, but the result is still a black square.
EDIT:
Oops, sorry, the top-left corner of my image was black, which is why I didn't see anything. I changed the image to have different colors, and now I can see part of the image rendered. It's not mapped properly, but I can figure that part out.
Thanks a lot for the help!