XNA C# Destructible Terrain

I am currently working on a game in which players can destruct the terrain. Unfortunately, I am getting this exception after using the SetData method on my terrain texture:
You may not call SetData on a resource while it is actively set on the
GraphicsDevice. Unset it from the device before calling SetData.
Now, before anyone says that there are other topics on this problem: I have looked at all of those. They all say to make sure not to call the method within Draw(), but I only use it in Update() anyway. Here is the code I am currently using to destruct the terrain:
public class Terrain
{
    private Texture2D Image;

    public Rectangle Bounds { get; protected set; }

    public Terrain(ContentManager Content)
    {
        Image = Content.Load<Texture2D>("Terrain");
        Bounds = new Rectangle(0, 400, Image.Width, Image.Height);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        spriteBatch.Draw(Image, Bounds, Color.White);
    }

    public void Update()
    {
        if (Globals.newState.LeftButton == ButtonState.Pressed)
        {
            Point mousePosition = new Point(Globals.newState.X, Globals.newState.Y);
            if (Bounds.Contains(mousePosition))
            {
                Color[] imageData = new Color[Image.Width * Image.Height];
                Image.GetData(imageData);
                for (int i = 0; i < imageData.Length; i++)
                {
                    if (Vector2.Distance(new Vector2(mousePosition.X, mousePosition.Y), GetPositionOfTextureData(i, imageData)) < 20)
                    {
                        imageData[i] = Color.Transparent;
                    }
                }
                Image.SetData(imageData);
            }
        }
    }

    private Vector2 GetPositionOfTextureData(int index, Color[] colorData)
    {
        // Use the texture width rather than a hard-coded 800
        float x = index % Image.Width;
        float y = (index - x) / Image.Width;
        return new Vector2(x + Bounds.X, y + Bounds.Y);
    }
}
Whenever the mouse clicks on the terrain, I want to change all pixels in the image within a 20 pixel radius to become transparent. All GetPositionOfTextureData() does is return a Vector2 containing the position of a pixel within the texture data.
All help would be greatly appreciated.

You must unbind your texture from the GraphicsDevice by calling:
graphicsDevice.Textures[0] = null;
before trying to write to it with SetData.
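Applied to the Terrain class above, that looks roughly like this (a sketch; the Globals.graphicsDevice reference is an assumption, since the original code does not show how the class reaches the GraphicsDevice):

```csharp
public void Update()
{
    if (Globals.newState.LeftButton == ButtonState.Pressed)
    {
        Point mousePosition = new Point(Globals.newState.X, Globals.newState.Y);
        if (Bounds.Contains(mousePosition))
        {
            Color[] imageData = new Color[Image.Width * Image.Height];
            Image.GetData(imageData);
            // ... set pixels within the radius to Color.Transparent as before ...

            // Unset the texture from the device before writing to it;
            // it is still bound from the previous frame's spriteBatch.Draw call.
            Globals.graphicsDevice.Textures[0] = null;
            Image.SetData(imageData);
        }
    }
}
```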


Texture2D to get Rect Transform properties of the Image

If any of you have dealt with the same problem, could you please tell me the solution?
The user can upload his own image from the phone gallery to a UI Image; he can then move it, scale it, rotate it, and mirror it. After these manipulations he can save the Texture2D of this Image to persistentDataPath with the following code. The problem is that no matter which rotation, position or scale properties the UI Image has, the Texture2D remains at the defaults (position 0, rotation 0, scale 1), which actually makes sense, because the texture itself is unchanged; I only changed the RectTransform of the Image.
public void SaveClick()
{
    CropSprite();
    SaveSprite();
}

private Texture2D output;

public void CropSprite()
{
    // MaskImage is the parent of the Image the user is actually editing;
    // it masks the editable area (see the attached picture for what is
    // the mask and what is the editable area).
    Texture2D MaskTexture = MaskImage.sprite.texture;
    Texture2D originalTextureTexture = TextureMarble.sprite.texture;
    // Rescale the editable texture to the size of the mask one;
    // otherwise large images are saved incorrectly.
    Texture2D TextureTexture = Resize(originalTextureTexture, 250, 250);
    output = new Texture2D(TextureTexture.width, TextureTexture.height);
    for (int i = 0; i < TextureTexture.width; i++)
    {
        for (int j = 0; j < TextureTexture.height; j++)
        {
            if (MaskTexture.GetPixel(i, j).a != 0)
                output.SetPixel(i, j, TextureTexture.GetPixel(i, j));
            else
                output.SetPixel(i, j, new Color(1f, 1f, 1f, 0f));
        }
    }
    // Keep only the part of the editable texture that overlaps the mask image.
    output.Apply();
}

Texture2D Resize(Texture2D texture2D, int targetX, int targetY)
{
    RenderTexture rt = new RenderTexture(targetX, targetY, 24);
    Graphics.Blit(texture2D, rt);
    RenderTexture.active = rt;
    Texture2D result = new Texture2D(targetX, targetY);
    result.ReadPixels(new Rect(0, 0, targetX, targetY), 0, 0);
    result.Apply();
    // Restore the active render texture and free the temporary one.
    RenderTexture.active = null;
    rt.Release();
    return result;
}

public void SaveSprite()
{
    byte[] bytesToSave = output.EncodeToPNG();
    File.WriteAllBytes(Application.persistentDataPath + "/yourTexture1.png", bytesToSave);
}
Not necessary, but for those of you who didn't understand what the mask is in my case:
So, how do I save the Texture2D with the RectTransform properties of the Image applied?

DrawLine method doesn't work when using inside a loop

I'm trying to make a program that reads a DXF file and plots its contents, but when I try to draw the figures in the window's Paint event, nothing is drawn unless I call this.Invalidate(); and that doesn't completely work either, because the objects blink on the screen. The coordinates used to draw the lines are stored in a list declared in the window class.
private void InitialWindow_Paint(object sender, PaintEventArgs e)
{
    Graphics g = e.Graphics;
    Pen blackPen = new Pen(Color.Black, 1);
    if (entities.Count != 0)
    {
        for (int i = 0; i < entities.Count; i++)
        {
            for (int k = 0; k < entities[i].path.Count - 1; k++)
            {
                g.DrawLine(blackPen, D2F(entities[i].path[k]), D2F(entities[i].path[k + 1]));
            }
            g.DrawLine(blackPen, D2F(entities[i].path[0]), D2F(entities[i].path.Last()));
        }
    }
}
D2F is a function that converts a point-like type to PointF so it can be used in DrawLine. If I draw outside the for loops, the lines are displayed correctly on the screen. Thanks in advance for any help.
I suggest you create a class to handle the model-to-pixel conversions (and the reverse) instead of a function D2F(). A class is going to give you a lot more flexibility, and it retains the state of the current scaling values.
public class Canvas
{
    public Canvas()
    {
        Target = new Rectangle(0, 0, 1, 1);
        Scale = 1;
    }

    public Rectangle Target { get; set; }
    public float Scale { get; set; }

    public void SetModelBounds(float width, float height)
    {
        Scale = Math.Min(Target.Width / width, Target.Height / height);
    }

    public PointF ToPixel(Vector2 point)
    {
        var center = new PointF(Target.Left + Target.Width / 2, Target.Top + Target.Height / 2);
        return new PointF(center.X + Scale * point.X, center.Y - Scale * point.Y);
    }

    public Vector2 FromPixel(Point pixel)
    {
        var center = new PointF(Target.Left + Target.Width / 2, Target.Top + Target.Height / 2);
        return new Vector2((pixel.X - center.X) / Scale, -(pixel.Y - center.Y) / Scale);
    }
}
This is set up for each paint event by calling, for example:
Canvas.Target = this.ClientRectangle;
Canvas.SetModelBounds(2f, 2f);
The above code is going to place coordinates (-1,-1) on the bottom left of the form surface, and (1,1) on the top right. The pixels per model unit are kept the same for x-axis and y-axis (see Canvas.Scale).
Now for the drawing, also use an Entity class to hold the drawing geometry and various other properties, such as color and whether the shape is closed or not. Note that if it defines a Render(Graphics g, Canvas canvas) method, it can be called by the form and each entity can handle its own drawing (the most modular design).
Here is an example:
public class Entity
{
    Entity(bool closed, Color color, params Vector2[] path)
    {
        Color = color;
        Path = new List<Vector2>(path);
        Closed = closed;
    }

    public Color Color { get; set; }
    public List<Vector2> Path { get; }
    public bool Closed { get; set; }

    public void Render(Graphics g, Canvas canvas)
    {
        using (Pen pen = new Pen(Color, 1))
        {
            var points = Path.Select(pt => canvas.ToPixel(pt)).ToArray();
            if (Closed)
            {
                g.DrawPolygon(pen, points);
            }
            else
            {
                g.DrawLines(pen, points);
            }
        }
    }

    public static Entity Triangle(Color color, Vector2 center, float width, float height)
    {
        return new Entity(true, color, new Vector2[] {
            new Vector2(center.X - width/2, center.Y - height/3),
            new Vector2(center.X + width/2, center.Y - height/3),
            new Vector2(center.X, center.Y + 2*height/3) });
    }

    public static Entity Rectange(Color color, Vector2 center, float width, float height)
    {
        return new Entity(true, color, new Vector2[] {
            new Vector2(center.X - width/2, center.Y - height/2),
            new Vector2(center.X + width/2, center.Y - height/2),
            new Vector2(center.X + width/2, center.Y + height/2),
            new Vector2(center.X - width/2, center.Y + height/2) });
    }

    public static Entity Polygon(Color color, params Vector2[] path)
        => new Entity(true, color, path);

    public static Entity Polyline(Color color, params Vector2[] path)
        => new Entity(false, color, path);
}
The above is used in the form as follows
public partial class Form1 : Form
{
    public Canvas Canvas { get; }
    public List<Entity> Entities { get; }

    public Form1()
    {
        InitializeComponent();
        Entities = new List<Entity>();
        Canvas = new Canvas();
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        this.Resize += (s, ev) => Invalidate();
        this.Paint += (s, ev) =>
        {
            ev.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
            Canvas.Target = ClientRectangle;
            Canvas.SetModelBounds(2f, 2f);
            foreach (var item in Entities)
            {
                item.Render(ev.Graphics, Canvas);
            }
        };
        Entities.Add( .. )
    }
}
As you can see, the Paint event calls the Render() method of each entity.
In fact, you can generalize this model using an IRender interface. In the example below, besides Entity (which implements the interface), I define a class called Axes that draws the coordinate axes.
public interface IRender
{
    void Render(Graphics g, Canvas canvas);
}

public class Axes : IRender
{
    public void Render(Graphics g, Canvas canvas)
    {
        PointF origin = canvas.ToPixel(Vector2.Zero);
        PointF xpoint = canvas.ToPixel(Vector2.UnitX);
        PointF ypoint = canvas.ToPixel(Vector2.UnitY);
        using (Pen pen = new Pen(Color.Black, 0))
        {
            pen.CustomEndCap = new AdjustableArrowCap(2f, 5f);
            g.DrawLine(pen, origin, xpoint);
            g.DrawLine(pen, origin, ypoint);
        }
    }
}

public class Entity : IRender
{
    ...
}
and now you can draw quite a variety of things on the screen with the above framework. Here is an example that draws the axes (of unit size) and a few sample entities.
Entities.Add(Entity.Polygon(Color.Orange, Vector2.Zero, -Vector2.UnitX, -Vector2.One));
Entities.Add(Entity.Rectange(Color.Blue, new Vector2(0.5f, 0.5f), 0.25f, 0.25f));
Entities.Add(Entity.Triangle(Color.Red, new Vector2(0.5f, 0.75f), 0.25f, 0.25f));
Entities.Add(Entity.Polyline(Color.Magenta,
    new Vector2(0f, -0.2f), new Vector2(0f, -0.4f), new Vector2(0.2f, -0.4f),
    new Vector2(0.2f, -0.2f), new Vector2(0.4f, -0.2f), new Vector2(0.4f, -0.4f)));

How to create 2D map in unity using single image?

I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create a map, and I am not allowed to crop the image. So, how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Palette, but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You define which part of a given Texture2D shall be used for the Sprite via Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate section of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // reset to valid rect values (width/height may not exceed the remaining texture size)
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class that generates all needed sprites once, e.g. into a list:
public class Example : MonoBehaviour
{
    // your texture e.g. from a public field via the inspector
    public Texture2D texture;

    // define which pixel coordinates to use for each sprite, also via the inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // reset to valid rect values (width/height may not exceed the remaining texture size)
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(pixelCoordinates[i].x, 0, texture.width);
            rect.y = Mathf.Clamp(pixelCoordinates[i].y, 0, texture.height);
            rect.width = Mathf.Clamp(pixelCoordinates[i].width, 0, texture.width - rect.x);
            rect.height = Mathf.Clamp(pixelCoordinates[i].height, 0, texture.height - rect.y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 x 395, so I'm not using all of it here (I just added frames showing the regions the Sprites are going to use).
So when hitting Play, the following sprites are created:

How do I animate a GIF which is being pulled from the resource folder?

The "shooter" subclass
class Shooter : Box
{
    public bool goleft;
    public bool goright;

    public Shooter(float startx, float starty)
    {
        pic = resizeImage(Properties.Resources.shooter, new Size(50, 50)); // resize image
        goleft = false;
        goright = false;
        x = startx;
        y = starty;
    }
}
And this is the base class from which it inherits.
class Box
{
    public Image pic;
    public float x;
    public float y;
    public int speed;

    public Box()
    {
        x = 0;
        y = 0;
        speed = 0;
    }

    // Image resizing code
    public static Image resizeImage(Image imgToResize, Size size)
    {
        return (Image)(new Bitmap(imgToResize, size));
    }

    public void Draw(Graphics g)
    {
        g.DrawImage(pic, x, y);
    }

    // some code below that basically defines the borders and hit boxes.
}
So, yeah, I'm just trying to figure out how to animate a GIF that is essentially built by a constructor... The shooter shows up and I can move it around, but the problem is that it's just not spinnin'. Hope you guys can figure it out. Thanks :D
Your GIF is not being animated because your Box class simply doesn't support it.
If you want to animate the image, you can't open it as a Bitmap, you need to get the image data and do the animation manually, or use a PictureBox to display the image. Here's an example of how to do it manually. Note that in order to resize the GIF, you also need to do it frame by frame.
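A minimal sketch of the manual approach with System.Drawing's ImageAnimator (the Shooter and pic names come from the question; the repaint callback body is an assumption, since the question does not show how the game loop requests redraws). Note the GIF must be kept as-is: copying it into a new Bitmap, as resizeImage does, flattens it to a single frame, so the resize happens at draw time instead:

```csharp
class Shooter : Box
{
    public Shooter(float startx, float starty)
    {
        // Keep the original GIF; a Bitmap copy would lose the animation frames.
        pic = Properties.Resources.shooter;
        x = startx;
        y = starty;

        // Register a frame-changed callback; it fires on the GIF's own timing.
        if (ImageAnimator.CanAnimate(pic))
            ImageAnimator.Animate(pic, (s, e) => { /* request a repaint here, e.g. form.Invalidate() */ });
    }

    public new void Draw(Graphics g)
    {
        ImageAnimator.UpdateFrames(pic); // advance the image to its current frame
        g.DrawImage(pic, x, y, 50, 50);  // resize at draw time instead of pre-copying
    }
}
```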

Matrix / coordinate transformation in C#

I have an array of coordinates that reflect known positions on an image. Let's call this the template image. It has a unique barcode and orientation markers (which are also in the coordinate array).
The image is printed, scanned and fed back into my application to be detected. During printing and scanning, the image could be transformed in three ways; translation, rotation and scale.
Assuming that I can find the orientation markers on the distorted image, how can I use matrix transformation to get the relative positions of the remaining coordinates?
I posted this question on SO before but made it too complicated to understand what I wanted.
EDIT
namespace MatrixTest
{
    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Collections.Generic;

    public static class Program
    {
        public static void Main()
        {
            Template template = new Template(); // Original template image.
            Document document = new Document(); // Printed and scanned distorted image.
            template.CreateTemplateImage();
            // The template image is printed and scanned. This method generates an example scan for this question.
            document.CreateDistortedImageFromTemplateImage();
            // Stuck here.
            document.Transform();
            // Draw transformed points on the image to verify that the transformation is successful.
            document.DrawPoints();
            System.Diagnostics.Process.Start(new System.IO.FileInfo(System.Reflection.Assembly.GetExecutingAssembly().Location).Directory.FullName);
        }
    }

    public class Page
    {
        public Bitmap Image { get; set; }
        public Point[] Markers = new Point[3]; // Orientation markers: 1=TopLeft, 2=TopRight, 3=BottomRight.
        public Point[] Points = new Point[100]; // Coordinates to transform in the TemplateScanned derived class!
    }

    // This class represents the original template image.
    public class Template : Page
    {
        public Template()
        {
            this.Image = new Bitmap(300, 400);
            // Known dimensions for marker rectangles.
            this.Markers[0] = new Point(10, 10);
            this.Markers[1] = new Point(this.Image.Width - 20 - 10, 10);
            this.Markers[2] = new Point(this.Image.Width - 20 - 10, this.Image.Height - 20 - 10);
            // Known points of interest. Consider them hardcoded.
            int index = 0;
            for (int y = 0; y < 10; y++)
                for (int x = 0; x < 10; x++)
                    this.Points[index++] = new Point((this.Image.Width / 10) + (x * 20), (this.Image.Height / 10) + (y * 20));
        }

        public void CreateTemplateImage()
        {
            using (Graphics graphics = Graphics.FromImage(this.Image))
            {
                graphics.Clear(Color.White);
                for (int i = 0; i < this.Markers.Length; i++)
                    graphics.FillRectangle(Brushes.Black, this.Markers[i].X, this.Markers[i].Y, 20, 20);
                for (int i = 0; i < this.Points.Length; i++)
                    graphics.DrawRectangle(Pens.Red, this.Points[i].X, this.Points[i].Y, 5, 5);
            }
            this.Image.Save("Document Original.png");
        }
    }

    // This class represents the scanned image.
    public class Document : Page
    {
        public struct StructTransformation
        {
            public float AngleOfRotation;
            public SizeF ScaleRatio;
            public SizeF TranslationOffset;
        }

        private Template Template = new Template();
        private StructTransformation Transformation = new StructTransformation();

        public Document()
        {
            this.Template = new Template();
            this.Transformation = new StructTransformation { AngleOfRotation = 5f, ScaleRatio = new SizeF(.8f, .7f), TranslationOffset = new SizeF(100f, 30f) };
            this.Template.CreateTemplateImage();
            // Copy points from template.
            for (int i = 0; i < this.Template.Markers.Length; i++)
                this.Markers[i] = this.Template.Markers[i];
            for (int i = 0; i < this.Points.Length; i++)
                this.Points[i] = this.Template.Points[i];
        }

        // Just distorts the original template image as if it had been read from a scanner.
        public void CreateDistortedImageFromTemplateImage()
        {
            // Distort coordinates.
            Matrix matrix = new Matrix();
            matrix.Rotate(this.Transformation.AngleOfRotation);
            matrix.Scale(this.Transformation.ScaleRatio.Width, this.Transformation.ScaleRatio.Height);
            matrix.Translate(this.Transformation.TranslationOffset.Width, this.Transformation.TranslationOffset.Height);
            matrix.TransformPoints(this.Markers);
            matrix.TransformPoints(this.Points);
            // Distort and save the image for visual reference.
            this.Image = new Bitmap(this.Template.Image.Width, this.Template.Image.Height);
            using (Graphics graphics = Graphics.FromImage(this.Image))
            {
                graphics.Clear(Color.White);
                graphics.RotateTransform(this.Transformation.AngleOfRotation);
                graphics.ScaleTransform(this.Transformation.ScaleRatio.Width, this.Transformation.ScaleRatio.Height);
                graphics.TranslateTransform(this.Transformation.TranslationOffset.Width, this.Transformation.TranslationOffset.Height);
                graphics.DrawImage(this.Template.Image, 0, 0);
            }
            this.Image.Save("Document Scanned.png");
        }

        public void Transform()
        {
            // The rectangles of the scanned document are not known at this time. They would obviously be relative to the three orientation markers.
            // I can't figure out how to use the following code properly, i.e. how to use Matrix to apply all three transformations.
            Matrix matrix = new Matrix();
            matrix.Rotate(-this.Transformation.AngleOfRotation);
            matrix.Scale(1f / this.Transformation.ScaleRatio.Width, 1f / this.Transformation.ScaleRatio.Height);
            matrix.Translate(-this.Transformation.TranslationOffset.Width, -this.Transformation.TranslationOffset.Height);
            matrix.TransformPoints(this.Markers);
            matrix.TransformPoints(this.Points);
        }

        public void DrawPoints()
        {
            using (Graphics graphics = Graphics.FromImage(this.Image))
            {
                graphics.Clear(Color.White);
                for (int i = 0; i < this.Markers.Length; i++)
                    graphics.FillRectangle(Brushes.Blue, this.Markers[i].X, this.Markers[i].Y, 20, 20);
                for (int i = 0; i < this.Points.Length; i++)
                    graphics.DrawRectangle(Pens.Purple, this.Points[i].X, this.Points[i].Y, 5, 5);
            }
            this.Image.Save("Document Fixed.png");
        }
    }
}
I'm assuming you want to transform the image to the unit square ((0, 0) - (1.0, 1.0)).
You need three points: one is the origin, one will be transformed to the x-axis point (1.0, 0), and one to the y-axis point (0, 1.0).
In the original coordinate system:
The origin is (Ox, Oy)
The X-axis point is (X1, Y1)
The Y-axis point is (X2, Y2)
The X-axis point relative to the origin, (X1-Ox, Y1-Oy), will be shortened to (RX1, RY1)
The Y-axis point relative to the origin, (X2-Ox, Y2-Oy), will be shortened to (RX2, RY2)
First we will shift the origin to (0,0) in homogeneous coordinates the transform matrix will be
(1 0 -Ox)
(0 1 -Oy)
(0 0 1)
The transform from the new space to the old one is represented by the following matrix:
(RX1 RX2 0)
(RY1 RY2 0)
( 0 0 1)
Because you want the inverse transformation, from the old space to the new one, we need to invert this matrix:
Let's shorten (RX1*RY2-RX2*RY1) as D
(RY2/D -RX2/D 0)
(-RY1/D RX1/D 0)
( 0 0 1)
Now you can multiply both matrices: first apply the translation, then use the second matrix to transform the basis.
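As a sketch in code, using System.Drawing.Drawing2D.Matrix (the marker coordinates here are made-up example values; note that GDI+ matrices use row vectors, p' = p * M, so the inverse basis is entered transposed relative to the column notation above):

```csharp
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

class BasisTransformDemo
{
    static void Main()
    {
        // Hypothetical marker positions found on the scanned image:
        // the origin, the point that should map to (1,0), and the point for (0,1).
        PointF O = new PointF(30f, 40f);
        PointF XPt = new PointF(130f, 60f);
        PointF YPt = new PointF(10f, 140f);

        float RX1 = XPt.X - O.X, RY1 = XPt.Y - O.Y;
        float RX2 = YPt.X - O.X, RY2 = YPt.Y - O.Y;
        float D = RX1 * RY2 - RX2 * RY1; // determinant of the basis matrix

        // Inverse basis in Matrix(m11, m12, m21, m22, dx, dy) form.
        Matrix toUnit = new Matrix(RY2 / D, -RY1 / D, -RX2 / D, RX1 / D, 0f, 0f);
        // Prepend the translation so points are shifted to the origin first.
        toUnit.Translate(-O.X, -O.Y);

        PointF[] pts = { O, XPt, YPt };
        toUnit.TransformPoints(pts);
        // O maps to ~(0,0), XPt to ~(1,0), YPt to ~(0,1).
        Console.WriteLine($"{pts[0]} {pts[1]} {pts[2]}");
    }
}
```

The same toUnit matrix can then be applied to all remaining coordinates via TransformPoints to recover their positions relative to the markers.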
