Render to texture fails after resize - C#

In a graphics application I am rendering an image to a texture and then using that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view I render into and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making the view smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resized smaller
Not resized
Resized bigger (1 pixel bigger is enough to make the image disappear)
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}
private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();
    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimensions as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);
    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // *errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!

As it (almost) always turns out, I was making a very stupid mistake.
I wasn't calling the correct resize function.
Basically, the Renderer2D class has a DoResize function that resizes only the 2D buffers, while the abstract Renderer base class resizes the rest of the buffers. The mistake is that in the derived class I was calling the wrong base resize function!
Derived class (Renderer2D):
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;
    base.DoResize(width, height); // Here I was calling base.Resize (which was deprecated after a change in the application architecture)
    _camera.Width = width;
    _camera.Height = height;
    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);
    ResizePending = false;
}
Base class (Renderer):
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);
    _renderTarget.Dispose();
    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");
    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);
    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can help someone who is trying to do some render-to-texture work, since I see there are always people who can't make it work :)

Related

How to create a 2D map in Unity using a single image?

I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create a map, and I am not allowed to crop the image. So how do I create this map using only one image?
I am a bit new to Unity. I tried searching but didn't find exactly what I am looking for. I also tried a tilemap using a palette, but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate section of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // reset to valid rect values (the rect must stay inside the texture)
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class that generates all needed sprites once, e.g. in a list, like:
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for the sprites, also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // reset to valid rect values (each rect must stay inside the texture)
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(pixelCoordinates[i].x, 0, texture.width);
            rect.y = Mathf.Clamp(pixelCoordinates[i].y, 0, texture.height);
            rect.width = Mathf.Clamp(pixelCoordinates[i].width, 0, texture.width - pixelCoordinates[i].x);
            rect.height = Mathf.Clamp(pixelCoordinates[i].height, 0, texture.height - pixelCoordinates[i].y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 x 395, so I'm not using all of it here (I just added the frames the Sprites are going to use).
So when hitting Play, the following sprites are created:

Resize an image to fill a picture box without stretching

I want an image to fill a picture box without leaving any whitespace, cutting off parts of the image when its aspect ratio doesn't match the PictureBox, and adjusting as the user resizes the window/PictureBox. The existing options don't do this: SizeMode = Zoom leaves whitespace, as it's afraid to cut off any of the image, and SizeMode = StretchImage stretches the image, distorting it.
The only way I can think of doing this is writing an algorithm that resizes the image while keeping its aspect ratio, setting the width or height of the image to the PictureBox width or height, and running it once a frame in some runtime loop. That seems performance-heavy for what it does, and kind of hackish. Is there a better option?
Edit:
For anyone coming by, I implemented Ivan Stoev's solution slightly differently:
class ImageHandling
{
    public static Rectangle GetScaledRectangle(Image img, Rectangle thumbRect)
    {
        Size sourceSize = img.Size;
        Size targetSize = thumbRect.Size;
        float scale = Math.Max((float)targetSize.Width / sourceSize.Width, (float)targetSize.Height / sourceSize.Height);
        var rect = new RectangleF();
        rect.Width = scale * sourceSize.Width;
        rect.Height = scale * sourceSize.Height;
        rect.X = (targetSize.Width - rect.Width) / 2;
        rect.Y = (targetSize.Height - rect.Height) / 2;
        return Rectangle.Round(rect);
    }

    public static Image GetResizedImage(Image img, Rectangle rect)
    {
        Bitmap b = new Bitmap(rect.Width, rect.Height);
        Graphics g = Graphics.FromImage((Image)b);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(img, 0, 0, rect.Width, rect.Height);
        g.Dispose();
        try
        {
            return (Image)b.Clone();
        }
        finally
        {
            b.Dispose();
            b = null;
            g = null;
        }
    }
}

// ... and call it from the form like this:

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        updateMainBackground();
    }

    void updateMainBackground()
    {
        Image img = Properties.Resources.BackgroundMain;
        Rectangle newRect = ImageHandling.GetScaledRectangle(img, mainBackground.ClientRectangle);
        mainBackground.Image = ImageHandling.GetResizedImage(img, newRect);
    }

    private void Form1_Resize(object sender, EventArgs e)
    {
        updateMainBackground();
    }
}
If I understand correctly, you are looking for a "Fill" mode (similar to the Windows background picture). There is no standard way of doing that, but it's not so hard to make your own with the help of a small calculation and GDI+:
using System;
using System.Drawing;
using System.IO;
using System.Net;
using System.Windows.Forms;

namespace Samples
{
    public class ImageFillBox : Control
    {
        public ImageFillBox()
        {
            SetStyle(ControlStyles.Selectable | ControlStyles.SupportsTransparentBackColor, false);
            SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer | ControlStyles.Opaque | ControlStyles.UserPaint | ControlStyles.ResizeRedraw, true);
        }

        private Image image;
        public Image Image
        {
            get { return image; }
            set
            {
                if (image == value) return;
                image = value;
                Invalidate();
            }
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            if (image == null)
                e.Graphics.Clear(BackColor);
            else
            {
                Size sourceSize = image.Size, targetSize = ClientSize;
                float scale = Math.Max((float)targetSize.Width / sourceSize.Width, (float)targetSize.Height / sourceSize.Height);
                var rect = new RectangleF();
                rect.Width = scale * sourceSize.Width;
                rect.Height = scale * sourceSize.Height;
                rect.X = (targetSize.Width - rect.Width) / 2;
                rect.Y = (targetSize.Height - rect.Height) / 2;
                e.Graphics.DrawImage(image, rect);
            }
        }
    }

    static class Test
    {
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            var testForm = new Form();
            testForm.Controls.Add(new ImageFillBox
            {
                Dock = DockStyle.Fill,
                Image = GetImage(@"http://www.celebrityrockstarguitars.com/rock/images/Metall_1.jpg")
            });
            Application.Run(testForm);
        }

        static Image GetImage(string path)
        {
            var uri = new Uri(path);
            if (uri.IsFile) return Image.FromFile(path);
            using (var client = new WebClient())
                return Image.FromStream(new MemoryStream(client.DownloadData(uri)));
        }
    }
}
According to the PictureBoxSizeMode documentation, you can specify PictureBoxSizeMode.Zoom to make the image keep its aspect ratio. It will zoom as big as possible without any part of the image overflowing the picture box.
And you can play with the Dock property (setting DockStyle.Fill) to get the picture box to resize to the size of its container.
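For completeness, a minimal sketch of that setup; the control name pictureBox1 and the image path are placeholders:
// Zoom keeps the aspect ratio and never crops or distorts the image.
pictureBox1.SizeMode = PictureBoxSizeMode.Zoom;
// Fill makes the picture box track the size of its parent container.
pictureBox1.Dock = DockStyle.Fill;
pictureBox1.Image = Image.FromFile("background.jpg");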
There is a fairly simple solution to this. PictureBox.SizeMode has a few settings that will help us out: Zoom will adjust the image to fit in the box, and Normal will place the picture without resizing. What we want to do is check the height and width of the image; if it is larger than the PictureBox we use Zoom, and if not, we use Normal. See below:
if (image.Height > pctbx_ImageRecognition.Height || image.Width > pctbx_ImageRecognition.Width)
    pctbx_ImageRecognition.SizeMode = PictureBoxSizeMode.Zoom;
else
    pctbx_ImageRecognition.SizeMode = PictureBoxSizeMode.Normal;

Generating an alpha mask from a texture

I am trying to create a function in one of my helper classes that will take a texture (left) and generate an alpha mask from it (right).
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);
    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };
        batch.Begin(SpriteSortMode.Deferred, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }
    Device.SetRenderTarget(null);
    return target;
}
What should happen is that if alpha = 0 the pixel is black, and if alpha = 1 the pixel is white (interpolated between these values as needed).
However, I can't seem to make it go "whiter" than the base image on the left. That is, if I set it to blend white, it goes at most to the grey tones of the original, but never brighter. This isn't something I can create in advance, either, as it must be calculated during the game.
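For comparison, here is a CPU-side sketch that produces the intended mapping directly with GetData/SetData instead of a blend state; it is slower and stalls the pipeline, but it makes the alpha-to-grayscale rule explicit. The method name is hypothetical and it assumes the same Device property as the code above:
public Texture2D CreateAlphaMaskCpu(Texture2D texture)
{
    if (texture == null)
        return null;

    // Read back the pixels and replace each color with its alpha as a gray value.
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);
    for (int i = 0; i < pixels.Length; i++)
    {
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a, (byte)255);
    }

    Texture2D mask = new Texture2D(Device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}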

OpenGL ES 2.0 / MonoTouch: Texture is colorized red

I'm currently loading a cube map into my application, but it's shown in a red tone.
Edit: The channel problem is also present when using 2D textures; it seems the channels are not in the correct order. Is there any way to change the order of the channels using the iOS methods?
That's the code for texture loading:
public TextureCube (Generic3DView device, UIImage right, UIImage left, UIImage top, UIImage bottom, UIImage front, UIImage back)
    : base(device)
{
    _Device = device;
    GL.GenTextures (1, ref _Handle);
    GL.BindTexture (TextureType, _Handle);

    LoadTexture(All.TextureCubeMapPositiveX, right);
    LoadTexture(All.TextureCubeMapNegativeX, left);
    LoadTexture(All.TextureCubeMapPositiveY, top);
    LoadTexture(All.TextureCubeMapNegativeY, bottom);
    LoadTexture(All.TextureCubeMapPositiveZ, front);
    LoadTexture(All.TextureCubeMapNegativeZ, back);

    GL.TexParameter(All.TextureCubeMap, All.TextureMinFilter, (Int32)All.LinearMipmapLinear);
    GL.TexParameter(All.TextureCubeMap, All.TextureMagFilter, (Int32)All.Linear);
    GL.GenerateMipmap(All.TextureCubeMap);
}

private void LoadTexture(All usage, UIImage image)
{
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Rgba, All.UnsignedByte, RequestImagePixelData(image));
}

protected CGBitmapContext CreateARGBBitmapContext (CGImage inImage)
{
    var pixelsWide = inImage.Width;
    var pixelsHigh = inImage.Height;
    var bitmapBytesPerRow = pixelsWide * 4;
    var bitmapByteCount = bitmapBytesPerRow * pixelsHigh;

    // Note implicit colorSpace.Dispose()
    using (var colorSpace = CGColorSpace.CreateDeviceRGB())
    {
        // Allocate the bitmap and create context
        var bitmapData = Marshal.AllocHGlobal (bitmapByteCount);
        if (bitmapData == IntPtr.Zero)
        {
            throw new Exception ("Memory not allocated.");
        }

        var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
        if (context == null)
        {
            throw new Exception ("Context not created");
        }
        return context;
    }
}

// Store pixel data as an ARGB bitmap
protected IntPtr RequestImagePixelData (UIImage inImage)
{
    var imageSize = inImage.Size;
    CGBitmapContext ctxt = CreateARGBBitmapContext (inImage.CGImage);
    var rect = new RectangleF (0.0f, 0.0f, imageSize.Width, imageSize.Height);
    ctxt.DrawImage (rect, inImage.CGImage);
    var data = ctxt.Data;
    return data;
}
I think the channels are inverted, but maybe there is a way to swap them in the bitmap without custom code.
This is the image that is rendered (ignore the fancy model in front of it):
And the expected image:
Edit:
The GL_INVALID_OPERATION issue has been fixed, but it does not solve the issue with the red texture.
The vertex-shader:
attribute vec3 position;
uniform mat4 modelViewMatrix;
varying mediump vec3 texture;

void main()
{
    texture = position.xyz;
    gl_Position = modelViewMatrix * vec4(position.xyz, 1.0);
}
The fragment-shader:
varying mediump vec3 texture;
uniform samplerCube cubeMap;

void main()
{
    mediump vec3 cube = vec3(textureCube(cubeMap, texture));
    gl_FragColor = vec4(cube.xyz, 1.0);
}
The problem is in your function CreateARGBBitmapContext, on this line:
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
If you change
CGImageAlphaInfo.PremultipliedFirst
to
CGImageAlphaInfo.PremultipliedLast
that should fix your code.
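That is, the corrected line would read:
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
    bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast);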
After some testing I decided to use the code from "XnaTouch" to load textures; this solves the problem with the red texture.
Of course this was not the end, because there was no alpha channel when loading PNG images. Because this was not acceptable and consumed too much time, I decided to write a DDS loader (based on code from http://humus.name/).
Did you use the program (with glUseProgram) before calling glUniform? If not, the call doesn't work and generates exactly that error.
You can also check the causes of that GL error in the glUniform man page (at the end).
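In the OpenTK-style bindings used above, the order would look roughly like this (the program handle and uniform name are placeholders):
GL.UseProgram(_program); // the program must be active before setting its uniforms
int location = GL.GetUniformLocation(_program, "cubeMap");
GL.Uniform1(location, 0); // bind the sampler to texture unit 0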
I see that you are using RGBA for both format arguments in your TexImage2D call. Judging by how blue your original image is and how red the resulting image is, I suggest swapping one of them to BGRA.
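Applied to the LoadTexture method above, that would be something like the following; this assumes the All enum exposes Bgra (on iOS, GL_BGRA comes from an Apple extension, and the internal format must remain Rgba in ES 2.0):
private void LoadTexture(All usage, UIImage image)
{
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Bgra, All.UnsignedByte, RequestImagePixelData(image));
}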

How do I scale an image in a DirectX device?

I have a control (a PictureBox) in which I want to draw a 2D image with the help of DirectX.
I have a device and a sprite, and it works fine.
// Some code here...
_device = new Device(0, DeviceType.Hardware, pictureBox1,
                     CreateFlags.SoftwareVertexProcessing,
                     presentParams);
_sprite = new Sprite(_device);
// ...
_sprite.Draw(texture, textureSize, _center, position, Color.White);
The problem is that the texture size is much larger than the PictureBox, and I just want to find a way to fit it there.
I tried to set the device.Transform property, but it doesn't help:
var transform = GetTransformationMatrix(textureSize.Width, textureSize.Height);
_device.SetTransform(TransformType.Texture0, transform);
Here is the GetTransformationMatrix method:
Matrix GetTransformationMatrix(float width, float height)
{
    var scaleWidth = pictureBox1.Width / width;
    var scaleHeight = pictureBox1.Height / height;
    var transform = new Matrix();
    transform.M11 = scaleWidth;
    transform.M12 = 0;
    transform.M21 = 0;
    transform.M22 = scaleHeight;
    return transform;
}
Thanks for any solution || help.
The solution was just to use this:
var transformMatrix = Matrix.Scaling(scaleWidth, scaleHeight, 0.0F);
instead of the hand-made method.
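Applied to the code above, the usage would be roughly this (assuming the Sprite class exposes a Transform property, as the Managed DirectX/SlimDX sprite API does):
var transformMatrix = Matrix.Scaling(scaleWidth, scaleHeight, 0.0f);
_sprite.Transform = transformMatrix; // scales everything the sprite draws
_sprite.Draw(texture, textureSize, _center, position, Color.White);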
