Texture2D to get Rect Transform properties of the Image - c#

If any of you have dealt with the same problem, could you please tell me the solution?
The user can upload his own image from the phone gallery into a UI Image, then move it, scale it, rotate it, and mirror it. After such manipulations he can save the Texture2D of this Image to persistentDataPath with the following code. The problem is that no matter which rotation, position, or scale the UI Image has, the saved Texture2D still comes out with the defaults: position 0, rotation 0, scale 1 (which makes sense, since the texture itself is unchanged; I only changed the RectTransform of the Image).
public void SaveClick()
{
    CropSprite();
    SaveSprite();
}
private Texture2D output;

public void CropSprite()
{
    Texture2D MaskTexture = MaskImage.sprite.texture;
    // MaskImage is the parent of the Image the user is editing; it masks the editable area
    // (see the attached image for what is the mask and what is the editable area)
    Texture2D originalTextureTexture = TextureMarble.sprite.texture;
    Texture2D TextureTexture = Resize(originalTextureTexture, 250, 250);
    // Rescale the editable texture to the size of the mask one,
    // otherwise large images will be saved incorrectly
    output = new Texture2D(TextureTexture.width, TextureTexture.height);
    for (int i = 0; i < TextureTexture.width; i++)
    {
        for (int j = 0; j < TextureTexture.height; j++)
        {
            if (MaskTexture.GetPixel(i, j).a != 0)
                output.SetPixel(i, j, TextureTexture.GetPixel(i, j));
            else
                output.SetPixel(i, j, new Color(1f, 1f, 1f, 0f));
        }
    }
    // Keep only the part of the editable texture that overlaps the mask image
    output.Apply();
}
Texture2D Resize(Texture2D texture2D, int targetX, int targetY)
{
    RenderTexture rt = new RenderTexture(targetX, targetY, 24);
    RenderTexture previous = RenderTexture.active;
    Graphics.Blit(texture2D, rt);
    RenderTexture.active = rt;
    Texture2D result = new Texture2D(targetX, targetY);
    result.ReadPixels(new Rect(0, 0, targetX, targetY), 0, 0);
    result.Apply();
    // restore the previous render target and free the temporary RenderTexture
    RenderTexture.active = previous;
    rt.Release();
    return result;
}
public void SaveSprite()
{
    byte[] bytesToSave = output.EncodeToPNG();
    File.WriteAllBytes(Application.persistentDataPath + "/yourTexture1.png", bytesToSave);
}
(Not necessary, but for those of you who didn't understand what the mask is in my case.)
So how can I save the Texture2D with the RectTransform properties of the Image applied?
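One common way to get the transformed pixels (a sketch of a general approach, not code from the question — the camera reference and output size here are assumptions) is to let a camera render the manipulated UI into a RenderTexture, so the RectTransform's position, rotation, and scale are baked into the pixels before saving:

```csharp
// Hypothetical sketch: bake the Image's RectTransform into actual pixels by
// rendering the UI through a camera into a RenderTexture and reading it back.
// "captureCamera" is assumed to be an extra camera that sees only the masked UI layer.
public Texture2D CaptureTransformedImage(Camera captureCamera, int width, int height)
{
    RenderTexture rt = new RenderTexture(width, height, 24);
    RenderTexture previous = RenderTexture.active;

    captureCamera.targetTexture = rt;
    captureCamera.Render(); // renders the UI with all RectTransform changes applied

    RenderTexture.active = rt;
    Texture2D result = new Texture2D(width, height, TextureFormat.RGBA32, false);
    result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    result.Apply();

    captureCamera.targetTexture = null;
    RenderTexture.active = previous;
    rt.Release();
    return result;
}
```

The result can then be passed to EncodeToPNG() exactly like the output texture in the code above.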

Related

How to create 2D map in unity using single image?

I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create a map, and I am not allowed to crop the image. So, how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Palette but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate snippet of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // clamp to valid rect values within the texture
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class for generating all needed sprites at once, e.g. in a list like:
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for the sprites, also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // clamp to valid rect values within the texture
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(rect.x, 0, texture.width);
            rect.y = Mathf.Clamp(rect.y, 0, texture.height);
            rect.width = Mathf.Clamp(rect.width, 0, texture.width - rect.x);
            rect.height = Mathf.Clamp(rect.height, 0, texture.height - rect.y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 x 395, so I'm not using all of it here (I just added frames showing the areas the Sprites are going to use).
So when hitting Play, the following sprites are created:

LoadRawTextureData() not enough data provided error in Unity

I am working on a project using ARCore.
I need the real-world view that is visible through the ARCore camera; previously I used the approach of hiding the UI and capturing the screen.
But that was so slow that I looked for an alternative and found Frame.CameraImage.Texture in the ARCore API.
It worked normally in the Unity Editor environment.
But if I build it on my phone and check, the Texture is null.
Texture2D snap = (Texture2D)Frame.CameraImage.Texture;
What is the reason? Maybe a CPU problem?
So I tried a different approach.
public class TestFrameCamera : MonoBehaviour
{
    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    // Use this for initialization
    void Start()
    {
        _texture = new Texture2D(Screen.width, Screen.height, _format, false, false);
    }

    // Update is called once per frame
    void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            int size = image.Width * image.Height;
            byte[] yBuff = new byte[size];
            System.Runtime.InteropServices.Marshal.Copy(image.Y, yBuff, 0, size);
            _texture.LoadRawTextureData(yBuff);
            _texture.Apply();
            this.GetComponent<RawImage>().texture = _texture;
        }
    }
}
But if I change the texture format, the image does show up:
private TextureFormat _format = TextureFormat.R8;
That works, but I don't want a red-only image; I want an RGB color image.
What should I do?
R8 holds just single-channel (red) data, which is why the Y plane renders as red.
You can use TextureFormat.RGBA32 and allocate a buffer with four bytes per pixel, like this:
IntPtr _buff = Marshal.AllocHGlobal(width * height * 4);
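The answer stops at allocating the buffer. A minimal sketch of one way to continue (my own assumption, not from the answer) is to expand each Y (luminance) byte into an RGBA32 pixel, which yields a grayscale image that LoadRawTextureData will accept; true color would additionally require a YUV-to-RGB conversion using the U and V planes:

```csharp
// Hypothetical continuation: expand the single-channel Y buffer into RGBA32.
// This gives grayscale; real color needs the U/V planes and a YUV -> RGB conversion.
byte[] ExpandYToRgba(byte[] yBuff, int width, int height)
{
    byte[] rgba = new byte[width * height * 4];
    for (int i = 0; i < width * height; i++)
    {
        byte y = yBuff[i];
        rgba[i * 4 + 0] = y;   // R
        rgba[i * 4 + 1] = y;   // G
        rgba[i * 4 + 2] = y;   // B
        rgba[i * 4 + 3] = 255; // A (fully opaque)
    }
    return rgba;
}
```

The result is then four bytes per pixel, matching the RGBA32 texture format, so `_texture.LoadRawTextureData(rgba)` no longer reports "not enough data provided".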

Render to texture fails after resize

In a graphics application I am rendering an image to a texture, then I use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making the view smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resize smaller
Not resized
Resize bigger; 1 pixel bigger is enough to make the image disappear.
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}
private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();
    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimensions as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);
    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // *errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very silly mistake.
I wasn't calling the correct resize function.
Basically, in the Renderer2D class there is a DoResize function that resizes the 2D-only buffers, while the rest of the buffer resizing lives in the abstract Renderer class. The mistake was that in the derived class I was calling the wrong base resize function!
Derived class:
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;

    base.DoResize(width, height); // Here I was calling base.Resize (which was deprecated after a change in the application architecture)
    _camera.Width = width;
    _camera.Height = height;
    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);
    ResizePending = false;
}
Base class:
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);
    _renderTarget.Dispose();

    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");

    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);

    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can help someone who is trying to do render-to-texture, since I see there are always people who can't make it work :)

Generating an alpha mask from a texture

I am trying to create a function in one of my helper classes that will take a texture (left) and generate an alpha mask from it (right).
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);

    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };
        batch.Begin(0, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }

    Device.SetRenderTarget(null);
    return target;
}
What should happen is: if alpha = 0, the pixel is black, and if alpha = 1, the pixel is white (interpolated between these values as needed).
However, I can't seem to make it go "whiter" than the base image on the left. That is, if I set it to blend with white, at most I get the grey tones the image already has, never brighter. This isn't something I can create in advance either, as it must be computed during the game.
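No answer is included for this question here, but one straightforward alternative (my own sketch, not from the question) sidesteps blend states entirely and builds the mask on the CPU with GetData/SetData, copying each pixel's alpha into the RGB channels:

```csharp
// Hypothetical CPU-side alternative: build the mask directly from the alpha channel.
// Slower than a render-target approach, but exact: alpha 0 -> black, alpha 255 -> white.
public Texture2D CreateAlphaMaskCpu(GraphicsDevice device, Texture2D texture)
{
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    for (int i = 0; i < pixels.Length; i++)
    {
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a, 255); // grayscale from alpha, fully opaque
    }

    Texture2D mask = new Texture2D(device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}
```

Since this avoids the sprite batch and blending altogether, the output brightness depends only on the source alpha, not on the source colors, so it can go fully white regardless of how dark the base image is.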

XNA C# Destructible Terrain

I am currently working on a game in which players can destroy the terrain. Unfortunately, I am getting this exception after using the SetData method on my terrain texture:
You may not call SetData on a resource while it is actively set on the
GraphicsDevice. Unset it from the device before calling SetData.
Now, before anyone points out that there are other topics on this problem, I have looked at all of those. They all say to make sure not to call the method within Draw(), but I only use it in Update() anyway. Here is the code I am currently using to destroy the terrain:
public class Terrain
{
    private Texture2D Image;
    public Rectangle Bounds { get; protected set; }

    public Terrain(ContentManager Content)
    {
        Image = Content.Load<Texture2D>("Terrain");
        Bounds = new Rectangle(0, 400, Image.Width, Image.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(Image, Bounds, Color.White);
    }

    public void Update()
    {
        if (Globals.newState.LeftButton == ButtonState.Pressed)
        {
            Point mousePosition = new Point(Globals.newState.X, Globals.newState.Y);
            if (Bounds.Contains(mousePosition))
            {
                Color[] imageData = new Color[Image.Width * Image.Height];
                Image.GetData(imageData);

                for (int i = 0; i < imageData.Length; i++)
                {
                    if (Vector2.Distance(new Vector2(mousePosition.X, mousePosition.Y), GetPositionOfTextureData(i, imageData)) < 20)
                    {
                        imageData[i] = Color.Transparent;
                    }
                }

                Image.SetData(imageData);
            }
        }
    }

    private Vector2 GetPositionOfTextureData(int index, Color[] colorData)
    {
        // use the texture width rather than a hard-coded 800
        float x = index % Image.Width;
        float y = (index - x) / Image.Width;
        return new Vector2(x + Bounds.X, y + Bounds.Y);
    }
}
Whenever the mouse clicks on the terrain, I want all pixels in the image within a 20-pixel radius to become transparent. All GetPositionOfTextureData() does is return a Vector2 containing the position of a pixel within the texture data.
All help would be greatly appreciated.
You must unbind your texture from the GraphicsDevice by calling:
graphicsDevice.Textures[0] = null;
before trying to write to it with SetData.
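Applied to the Update() method above, the fix is a one-line unbind just before the write (a sketch; "graphicsDevice" stands in for however your game exposes its GraphicsDevice):

```csharp
// Sketch: the texture is still bound to the device from the last Draw call,
// so unset it before SetData is allowed to modify it.
graphicsDevice.Textures[0] = null; // unbind from sampler slot 0
Image.SetData(imageData);          // now the write succeeds
```

The texture is re-bound automatically the next time spriteBatch.Draw uses it, so nothing else needs to change.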
