OpenGL ES 2.0 / MonoTouch: Texture is colorized red - c#

I'm currently loading a cube-map into my application but it's shown in a red tone.
Edit: The channel problem is also present when using 2D textures; it seems the channels are not in the correct order. Is there any way to change the order of the channels using the iOS methods?
This is the code for texture loading:
public TextureCube (Generic3DView device, UIImage right, UIImage left, UIImage top, UIImage bottom, UIImage front, UIImage back)
    : base(device)
{
    _Device = device;
    GL.GenTextures (1, ref _Handle);
    GL.BindTexture (TextureType, _Handle);

    LoadTexture(All.TextureCubeMapPositiveX, right);
    LoadTexture(All.TextureCubeMapNegativeX, left);
    LoadTexture(All.TextureCubeMapPositiveY, top);
    LoadTexture(All.TextureCubeMapNegativeY, bottom);
    LoadTexture(All.TextureCubeMapPositiveZ, front);
    LoadTexture(All.TextureCubeMapNegativeZ, back);

    GL.TexParameter(All.TextureCubeMap, All.TextureMinFilter, (Int32)All.LinearMipmapLinear);
    GL.TexParameter(All.TextureCubeMap, All.TextureMagFilter, (Int32)All.Linear);
    GL.GenerateMipmap(All.TextureCubeMap);
}
private void LoadTexture(All usage, UIImage image)
{
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Rgba, All.UnsignedByte, RequestImagePixelData(image));
}
protected CGBitmapContext CreateARGBBitmapContext (CGImage inImage)
{
    var pixelsWide = inImage.Width;
    var pixelsHigh = inImage.Height;
    var bitmapBytesPerRow = pixelsWide * 4;
    var bitmapByteCount = bitmapBytesPerRow * pixelsHigh;

    // Note implicit colorSpace.Dispose()
    using (var colorSpace = CGColorSpace.CreateDeviceRGB()) {
        // Allocate the bitmap and create the context
        var bitmapData = Marshal.AllocHGlobal (bitmapByteCount);
        if (bitmapData == IntPtr.Zero) {
            throw new Exception ("Memory not allocated.");
        }

        var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
        if (context == null) {
            throw new Exception ("Context not created");
        }
        return context;
    }
}
// Store pixel data as an ARGB bitmap
protected IntPtr RequestImagePixelData (UIImage inImage)
{
    var imageSize = inImage.Size;
    CGBitmapContext ctxt = CreateARGBBitmapContext (inImage.CGImage);
    var rect = new RectangleF (0.0f, 0.0f, imageSize.Width, imageSize.Height);
    ctxt.DrawImage (rect, inImage.CGImage);
    var data = ctxt.Data;
    return data;
}
I think the channels are swapped, but maybe there is a way to reorder them without writing custom code.
This is the image that is rendered (ignore the fancy model in front of it):
And the expected image:
Edit:
The GL_INVALID_OPERATION issue has been fixed, but it does not solve the issue with the red texture.
The vertex-shader:
attribute vec3 position;
uniform mat4 modelViewMatrix;
varying mediump vec3 texture;

void main()
{
    texture = position.xyz;
    gl_Position = modelViewMatrix * vec4(position.xyz, 1.0);
}
The fragment-shader:
varying mediump vec3 texture;
uniform samplerCube cubeMap;

void main()
{
    mediump vec3 cube = vec3(textureCube(cubeMap, texture));
    gl_FragColor = vec4(cube.xyz, 1.0);
}

The problem is in your CreateARGBBitmapContext function, specifically this line:
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
If you change
CGImageAlphaInfo.PremultipliedFirst
to
CGImageAlphaInfo.PremultipliedLast
that should fix your code.
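For clarity, the only change is the alpha-info flag passed to the CGBitmapContext constructor, so the pixel bytes end up in the RGBA order that the GL.TexImage2D call expects:
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
    bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast); // was PremultipliedFirst (ARGB)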

After some testing I decided to use the texture-loading code from "XnaTouch", which solves the problem with the red texture.
Of course that was not the end of it, because the alpha channel was missing when loading PNG images. Since that is not acceptable and was consuming too much time, I decided to write a DDS loader (based on code from http://humus.name/).

Did you make the program current (with glUseProgram) before calling glUniform? glUniform only affects the currently active program, so calling it without one generates exactly that error.
You can also check the possible causes of that GL error at the end of the glUniform man page.
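A minimal sketch of the expected call order in the OpenTK ES 2.0 bindings, assuming a linked program handle named _program and a sampler uniform called "cubeMap" (these names are illustrative, not from the original code):
GL.UseProgram (_program);                                    // make the program current first
int cubeMapLocation = GL.GetUniformLocation (_program, "cubeMap");
GL.Uniform1 (cubeMapLocation, 0);                            // cube map bound to texture unit 0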

I see that you are using RGBA for both format parameters in your TexImage2D call. Judging by how blue your original image is and how red your resulting image is, I suggest swapping one of them for BGRA.
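A sketch of that suggestion, under two assumptions: the pixel buffer really is in B,G,R,A byte order, and the device exposes the iOS BGRA texture extension (GL_BGRA_EXT = 0x80E1; the MonoTouch enum member name may differ, so the raw value is cast here):
private void LoadTexture(All usage, UIImage image)
{
    const int BgraExt = 0x80E1; // GL_BGRA_EXT; not necessarily exposed as an All member
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, (All)BgraExt, All.UnsignedByte,
        RequestImagePixelData(image)); // assumes the buffer is BGRA-ordered
}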

Related

SharpGL Low Resolution Textures

I am loading textures using the following code:
var texture = new SharpGL.SceneGraph.Assets.Texture();
texture.Create(gl, filename);
But when I render them onto a polygon they are extremely low resolution. It looks like about 100x100 but the source image is much higher resolution than that.
To apply the texture I later call:
gl.Enable(OpenGL.GL_TEXTURE_2D);
gl.BindTexture(OpenGL.GL_TEXTURE_2D, 0);
Those are all the texture commands I call, other than supplying each vertex with a gl.TexCoord.
This all works fine, but the displayed image is very pixelated and blurry.
Is there some OpenGL setting that I must use to enable higher resolution textures?
So the answer was that the Create method in SharpGL.SceneGraph that creates textures downsamples them to the next lowest power of 2 in height and width, and does a poor job of it. For example, an image that's 428x612 gets downsampled to 256x512... badly.
I wrote this extension method that will import a bitmap into a texture and retain the full resolution.
public static bool CreateTexture(this OpenGL gl, Bitmap image, out uint id)
{
    if (image == null)
    {
        id = 0;
        return false;
    }

    var texture = new SharpGL.SceneGraph.Assets.Texture();
    texture.Create(gl);
    id = texture.TextureName;

    BitmapData bitmapData = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    var width = image.Width;
    var height = image.Height;

    gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture.TextureName);
    // 32993 = GL_BGRA, 5121 = GL_UNSIGNED_BYTE: upload the locked 32bpp ARGB bits as-is, at full resolution
    gl.TexImage2D(OpenGL.GL_TEXTURE_2D, 0, OpenGL.GL_RGB, width, height, 0, 32993u, 5121u, bitmapData.Scan0);
    image.UnlockBits(bitmapData);
    image.Dispose();

    gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_S, new[] { OpenGL.GL_CLAMP_TO_EDGE });
    gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_WRAP_T, new[] { OpenGL.GL_CLAMP_TO_EDGE });
    gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, new[] { OpenGL.GL_LINEAR });
    gl.TexParameterI(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, new[] { OpenGL.GL_LINEAR });
    return true;
}
I suppose I wouldn't have encountered this problem if I had supplied image files that were already scaled to powers of two, but that wasn't obvious.
Usage
if (gl.CreateTexture(bitmap, out var id))
{
    // Do something on success
}

LoadRawTextureData() not enough data provided error in Unity

I am working on a project using ARCore.
I need the real-world image that is visible through the ARCore camera; previously I used the approach of hiding the UI and capturing the screen.
But that was so slow that I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API.
It works normally in the Unity Editor.
But when I build it and run it on the phone, the texture is null.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
What is the reason? Maybe a CPU problem?
I also tried a different approach:
public class TestFrameCamera : MonoBehaviour
{
    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    // Use this for initialization
    void Start()
    {
        _texture = new Texture2D(Screen.width, Screen.height, _format, false, false);
    }

    // Update is called once per frame
    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            int size = image.Width * image.Height;
            byte[] yBuff = new byte[size];
            System.Runtime.InteropServices.Marshal.Copy(image.Y, yBuff, 0, size);

            _texture.LoadRawTextureData(yBuff);
            _texture.Apply();
            this.GetComponent<RawImage>().texture = _texture;
        }
    }
}
But if I change the texture format, the image shows up:
private TextureFormat _format = TextureFormat.R8;
That works, but I don't want a red image, I want an RGB color image.
What should I do?
R8 holds just one channel (red), which is why you only see the Y data in red.
You can keep TextureFormat.RGBA32, but then you have to supply four bytes per pixel, for example by allocating a buffer like this:
IntPtr _buff = Marshal.AllocHGlobal(width * height * 4);
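As a minimal sketch of that idea (my own, not from the answer above): expand the Y plane into an RGBA32 buffer so LoadRawTextureData receives the amount of data the texture format expects. This gives a grayscale image; full color would additionally require converting the U/V planes.
void Update()
{
    using (var image = Frame.CameraImage.AcquireCameraImageBytes())
    {
        if (!image.IsAvailable) return;

        int width = image.Width;
        int height = image.Height;
        int size = width * height;

        byte[] yBuff = new byte[size];
        System.Runtime.InteropServices.Marshal.Copy(image.Y, yBuff, 0, size);

        // RGBA32 needs 4 bytes per pixel; replicate the luminance into R, G and B.
        byte[] rgba = new byte[size * 4];
        for (int i = 0; i < size; i++)
        {
            byte y = yBuff[i];
            rgba[i * 4 + 0] = y;
            rgba[i * 4 + 1] = y;
            rgba[i * 4 + 2] = y;
            rgba[i * 4 + 3] = 255;
        }

        if (_texture == null || _texture.width != width || _texture.height != height)
            _texture = new Texture2D(width, height, TextureFormat.RGBA32, false, false);

        _texture.LoadRawTextureData(rgba);
        _texture.Apply();
        GetComponent<RawImage>().texture = _texture;
    }
}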

Render to texture fails after resize

In a graphics application I am rendering an image to a texture, then I use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making it smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resized smaller
Not resized
Resized bigger (1 pixel bigger is enough to make the image disappear)
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}

private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();

    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimension as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);

    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // * errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see my mistake. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very simple mistake.
I wasn't calling the correct resize function.
Basically, the Renderer2D class has a DoResize function that resizes only the 2D buffers, while the abstract Renderer class resizes the rest of the buffers. The mistake was that in the parent class I was calling the wrong base resize function!
Parent class:
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;

    base.DoResize(width, height); // Here I was calling base.Resize (which was deprecated after a change in the application architecture)

    _camera.Width = width;
    _camera.Height = height;

    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);

    ResizePending = false;
}
Base class:
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);

    _renderTarget.Dispose();
    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");

    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);

    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can help someone who is trying to do render-to-texture, since I see there are always people who can't get it to work :)

EmguCV Capture Mat error in Unity3D

I'm trying to get a webcam capture with Unity3d and EmguCV 3.0, but I'm stumbling into some weird problems. To start off, I'm trying to get a simple capture going by doing:
Capture cap = new Capture(0);
Mat currentFrame = cap.QueryFrame();
But unfortunately this throws an error:
error CS0029: Cannot implicitly convert type `Emgu.CV.Mat' to `Emgu.CV.Mat'
That doesn't really make any sense to me. I tried to cast it, but that doesn't work either. The documentation shows that QueryFrame returns a Mat: http://www.emgu.com/wiki/files/3.0.0/document/html/18b6eba7-f18b-fa87-8bf2-2acff68988cb.htm
Have you considered getting a Color32 array from a WebCamTexture with GetPixels32? You could then convert that array into a Mat (you will need to look at the Mat constructors). I'm in the same boat as you; I've been trying to get EmguCV in Unity to export properly to Xcode for an iOS app.
It looks like there is a Mat.SetTo(data) method that takes an array and applies the data to the Mat instance.
Not an answer to the implicit-conversion error, but you can get things going with the code below; it should be added to a RawImage UI object:
private WebCamTexture webcamTexture;
private Color32[] colors;
private int width = 640;
private int height = 480;
private Texture2D tex;
private byte[] bytes;

void Start ()
{
    WebCamDevice[] devices = WebCamTexture.devices;
    int cameraCount = devices.Length;
    if (cameraCount > 0)
    {
        webcamTexture = new WebCamTexture(devices[0].name, width, height);
        webcamTexture.Play();
        colors = new Color32[webcamTexture.width * webcamTexture.height];
        bytes = new byte[colors.Length * 3];
        tex = new Texture2D (webcamTexture.width, webcamTexture.height, TextureFormat.RGB24, false);
        gameObject.GetComponent<RawImage> ().texture = tex;
        CvInvoke.CheckLibraryLoaded();
    }
    else
    {
        Debug.LogError("No Camera found!");
    }
}

void Update ()
{
    if (webcamTexture.didUpdateThisFrame)
    {
        webcamTexture.GetPixels32(colors);
        GCHandle imageHandle = GCHandle.Alloc(colors, GCHandleType.Pinned);
        GCHandle matHandle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        using (Image<Bgra, byte> image = new Image<Bgra, byte>(webcamTexture.width, webcamTexture.height, webcamTexture.width * 4, imageHandle.AddrOfPinnedObject()))
        {
            using (Mat mat = new Mat(webcamTexture.height, webcamTexture.width, DepthType.Cv8U, 3, matHandle.AddrOfPinnedObject(), webcamTexture.width * 3))
            {
                CvInvoke.CvtColor(image, mat, ColorConversion.Bgra2Bgr);
            }
        }
        imageHandle.Free();
        matHandle.Free();
        tex.LoadRawTextureData(bytes);
        tex.Apply();
    }
}
The next challenge is to convert the Bgra image to a Gray image; by changing the conversion method and depth type I can get an image, but it is a third of the width...
This is a workaround for your problem.
Capture capWebcam = new Capture();
Image<Bgr, byte> imgSceneColor = capWebcam.QueryFrame().ToImage<Bgr, byte>();
When you need a Mat from imgSceneColor you can just use the .Mat property, like this:
Mat imgMat = imgSceneColor.Mat;

Generating an alpha mask from a texture

I am trying to create a function in one of my helper classes that will take a texture (left) and then generate an alpha mask (right).
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);

    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };

        batch.Begin(SpriteSortMode.Deferred, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }

    Device.SetRenderTarget(null);
    return target;
}
What should be happening is that if alpha=0, then the pixel is black, and if alpha=1, then the pixel is white (and interpolated between these values if needed).
However, I can't seem to make it go "whiter" than the base image on the left. That is, if I set it to blend white, then at most it will go to the grey tones that I have, but never brighter. This isn't something I can create in advance, either, as it must be calculated during the game.
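A minimal CPU-side sketch of one way to get that behaviour, sidestepping the blend state entirely (my own suggestion, assuming XNA's Texture2D.GetData/SetData is acceptable for your use case and that Device is the same GraphicsDevice property used above):
public Texture2D CreateAlphaMaskCpu(Texture2D texture)
{
    if (texture == null)
        return null;

    // Read the source pixels back, then write the alpha value into R, G and B.
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    for (int i = 0; i < pixels.Length; i++)
    {
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a, 255); // alpha 0 -> black, alpha 255 -> white
    }

    Texture2D mask = new Texture2D(Device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}
If the mask has to be regenerated on the GPU every frame, a small pixel shader that outputs float4(a, a, a, 1) would avoid the read-back, but the CPU version above is the simplest way to verify the expected result.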
