I have been trying to make my first ever game engine (in OpenTK) and I'm stuck. I am trying to incorporate animations, and I want to use sprite sheets because they would greatly decrease the file size.
I know that I need to use a for loop to draw all the sprites in sequence, but I don't have a clue what I should do to make it work with sprite sheets.
Here is my current drawing code:
// The base function for all drawing done in the application
public static void Draw(Texture2D texture, Vector2 position, Vector2 scale, Color color, Vector2 origin, RectangleF? SourceRec = null)
{
    Vector2[] vertices = new Vector2[4]
    {
        new Vector2(0, 0),
        new Vector2(1, 0),
        new Vector2(1, 1),
        new Vector2(0, 1),
    };

    // Default to the whole texture when no source rectangle is supplied
    if (SourceRec == null)
        SourceRec = new RectangleF(0, 0, texture.Width, texture.Height);

    GL.BindTexture(TextureTarget.Texture2D, texture.ID);

    // Beginning to draw on the screen
    GL.Begin(PrimitiveType.Quads);
    GL.Color3(color);
    for (int i = 0; i < 4; i++)
    {
        // X texture coordinate comes from Left, Y from Top
        GL.TexCoord2((SourceRec.Value.Left + vertices[i].X * SourceRec.Value.Width) / texture.Width,
                     (SourceRec.Value.Top + vertices[i].Y * SourceRec.Value.Height) / texture.Height);
        vertices[i].X *= texture.Width;
        vertices[i].Y *= texture.Height;
        vertices[i] -= origin;
        vertices[i] *= scale;
        vertices[i] += position;
        GL.Vertex2(vertices[i]);
    }
    GL.End();
}
public static void Begin(int screenWidth, int screenHeight)
{
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(-screenWidth / 2, screenWidth / 2, screenHeight / 2, -screenHeight / 2, 0f, 1f);
}
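Roughly, I imagine the sprite-sheet part comes down to computing a per-frame source rectangle and handing it to the Draw method above, something like this (totalSeconds, spriteSheetTexture and position are placeholders here, and the layout numbers are made up):
// Hypothetical layout values, just to show the idea
int frameWidth = 32, frameHeight = 32, columns = 8, totalFrames = 16;

// Advance roughly 10 animation frames per second
int frameIndex = (int)(totalSeconds * 10) % totalFrames;

// Cut the current frame out of the sheet
RectangleF sourceRec = new RectangleF(
    (frameIndex % columns) * frameWidth,
    (frameIndex / columns) * frameHeight,
    frameWidth,
    frameHeight);

Draw(spriteSheetTexture, position, Vector2.One, Color.White, Vector2.Zero, sourceRec);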
Thanks in advance!!! :)
Firstly, I beg you, abandon the fixed-function pipeline. It blinds you and restricts your understanding and creativity. Move to core 3.1+; its structural perfection is matched only by its potentiality, and the collaborative forethought and innovation will truly move you.
Now, you'll want to store your sprite sheet images as a Texture2DArray. It took me some time to figure out how to load these correctly:
public SpriteSheet(string filename, int spriteWidth, int spriteHeight)
{
    // Assign ID and get name
    this.textureID = GL.GenTexture();
    this.spriteSheetName = Path.GetFileNameWithoutExtension(filename);

    // Bind the Texture Array and set appropriate parameters
    GL.BindTexture(TextureTarget.Texture2DArray, textureID);
    GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
    GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
    GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
    GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);

    // Load the image file
    Bitmap image = new Bitmap(@"Tiles/" + filename);
    BitmapData data = image.LockBits(new System.Drawing.Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    // Determine columns and rows
    int spriteSheetwidth = image.Width;
    int spriteSheetheight = image.Height;
    int columns = spriteSheetwidth / spriteWidth;
    int rows = spriteSheetheight / spriteHeight;

    // Allocate storage
    GL.TexStorage3D(TextureTarget3d.Texture2DArray, 1, SizedInternalFormat.Rgba8, spriteWidth, spriteHeight, rows * columns);

    // Split the loaded image into individual Texture2D slices
    GL.PixelStore(PixelStoreParameter.UnpackRowLength, spriteSheetwidth);
    for (int i = 0; i < columns * rows; i++)
    {
        GL.TexSubImage3D(TextureTarget.Texture2DArray,
            0, 0, 0, i, spriteWidth, spriteHeight, 1,
            OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte,
            data.Scan0 + (spriteWidth * 4 * (i % columns)) + (spriteSheetwidth * 4 * spriteHeight * (i / columns))); // 4 bytes in a Bgra value.
    }
    image.UnlockBits(data);
}
Then you can simply draw a quad, and the Z texture co-ordinate is the Texture2D index in the Texture2DArray. Note frag_texcoord is of type vec3.
#version 330 core

in vec3 frag_texcoord;
out vec4 outColor;

uniform sampler2DArray tex;

void main()
{
    outColor = texture(tex, vec3(frag_texcoord));
}
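On the C# side, animation is then just stepping the layer index over time and getting it into frag_texcoord.z, for example via a uniform the vertex shader forwards. A rough sketch, where frameTimer, currentFrame, frameCount, elapsedSeconds, layerUniformLocation and the TextureID property are all assumptions:
// Rough sketch: advance one layer of the Texture2DArray every frameDuration
// seconds and upload the current layer index for the shader to use as texcoord.z.
double frameDuration = 0.1;               // 10 animation frames per second
frameTimer += elapsedSeconds;
if (frameTimer >= frameDuration)
{
    frameTimer -= frameDuration;
    currentFrame = (currentFrame + 1) % frameCount;
}
GL.BindTexture(TextureTarget.Texture2DArray, spriteSheet.TextureID);
GL.Uniform1(layerUniformLocation, (float)currentFrame);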
I am trying to slice a Sprite (type-cast to Texture2D) from script while the project is running on the Android or iOS platform. Is that possible from script?
I tried using the UnityEditor classes and it works on the computer, but when I build for Android or iOS it fails.
void OnPreprocessTexture()
{
TextureImporter textureImporter = (TextureImporter)assetImporter;
textureImporter.textureType = TextureImporterType.Sprite;
textureImporter.spriteImportMode = SpriteImportMode.Multiple;
textureImporter.mipmapEnabled = false;
textureImporter.filterMode = FilterMode.Point;
}
public void OnPostprocessTexture(Texture2D texture)
{
Debug.Log("Texture2D: (" + texture.width + "x" + texture.height + ")");
int spriteSize = 350;
int colCount = texture.width / spriteSize;
int rowCount = texture.height / spriteSize;
List<SpriteMetaData> metas = new List<SpriteMetaData>();
for (int r = 0; r < rowCount; ++r)
{
for (int c = 0; c < colCount; ++c)
{
SpriteMetaData meta = new SpriteMetaData();
meta.rect = new Rect(c * spriteSize, r * spriteSize, spriteSize, spriteSize);
meta.name = c + "-" + r;
metas.Add(meta);
}
}
TextureImporter textureImporter = (TextureImporter)assetImporter;
textureImporter.spritesheet = metas.ToArray();
}
public void OnPostprocessSprites(Texture2D texture, Sprite[] sprites)
{
Debug.Log("Sprites: " + sprites.Length);
}
It is not working when running the project on Android or iOS.
[What I want]
Procedure, while running on the Android or iOS platform:
1) Receive some images from a server (URL or file)
2) Load the images in a C# script
3) Convert the images to a Texture or Sprite, etc.
4) Slice the images (without using the Editor)
5) Save the pieces of each image
6) Use the pieces of each image
What I want is for this whole procedure to work from script.
TextureImporter belongs to the UnityEditor namespace, which doesn't exist in a built app but only within the Unity Editor itself. → You cannot use this!
What you can do is use Sprite.Create to generate a sprite from a given Texture2D.
Cropping
If it is actually only about cutting out a certain part of the texture to use as a sprite, then you only need to define, in the rect parameter, the part of the texture you want to use from that image.
// Wherever you get the texture from
Texture2D texture = ...;
// E.g. for using the center of the image
// but half of the size
var rect = new Rect(texture.width / 4, texture.height / 4, texture.width / 2, texture.height / 2);
// Create the sprite
var sprite = Sprite.Create(texture, rect, Vector2.one * 0.5f);
where rect is the
Location of the Sprite on the original Texture, specified in pixels.
Slicing
If you additionally want a slicing border (which you usually define in the Sprite Editor within Unity) there is an overload of Sprite.Create that additionally takes a border parameter e.g.
var borders = new Vector4(2, 2, 2, 2);
var sprite = Sprite.Create(texture, rect, Vector2.one * 0.5f, 100, SpriteMeshType.FullRect, borders);
where border
Returns the border sizes of the sprite.
X=left, Y=bottom, Z=right, W=top.
The API doesn't say it, but I guess that, like the rect values, those values are also in pixels.
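Tying this back to your list above (download at runtime, then slice without the Editor), a rough sketch could look like the coroutine below; the URL, cell size and class name are placeholders, and error handling is omitted:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Networking;

public class RuntimeSlicer : MonoBehaviour
{
    // Rough sketch with placeholder values: download a sheet at runtime, then
    // cut it into cellSize x cellSize sprites with Sprite.Create (no Editor APIs).
    public string url = "https://example.com/sheet.png";
    public int cellSize = 350;
    public List<Sprite> slices = new List<Sprite>();

    IEnumerator Start()
    {
        using (UnityWebRequest uwr = UnityWebRequestTexture.GetTexture(url))
        {
            yield return uwr.SendWebRequest();
            // (error handling omitted for brevity)
            Texture2D sheet = DownloadHandlerTexture.GetContent(uwr);

            for (int y = 0; y + cellSize <= sheet.height; y += cellSize)
            {
                for (int x = 0; x + cellSize <= sheet.width; x += cellSize)
                {
                    Rect rect = new Rect(x, y, cellSize, cellSize);
                    slices.Add(Sprite.Create(sheet, rect, new Vector2(0.5f, 0.5f)));
                }
            }
        }
    }
}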
I solved it using a rect and a SpriteRenderer.
Here is my code:
void Start()
{
rect = new Rect(0f, 0f, 255, 255);
this.GetComponent<SpriteRenderer>().sprite = Sprite.Create(img, rect, new Vector2(0.5f, 0.5f));
this.GetComponent<RectTransform>().localScale = new Vector3(100, 100, 0);
StartCoroutine(Update());
}
/*
* rect = new Rect(i, 0, 255, 255);
this.GetComponent<SpriteRenderer>().sprite = Sprite.Create(img, rect, new Vector2(0.5f, 0.5f));*/
IEnumerator Update()
{
while (true)
{
if (i < 1000)
{
i += 255;
if (i > 770)
{
i = 0;
}
}
yield return new WaitForSeconds(0.25f);
rect = new Rect(i, 0f, 255, 255);
this.GetComponent<SpriteRenderer>().sprite = Sprite.Create(img, rect, new Vector2(0.5f, 0.5f));
}
}
I cannot render triangles for the life of me with a VBO in OpenTK. I am loading my data into the VBO in the glControl_Load() event. I get a background screen with no triangles when running. The data comes from a mesh: m.OpenGLArrays(out data, out indices) outputs a list of floats and a list of ints. The list of floats holds the vertices T1v1, T1v2, T1v3, T2v1, T2v2, T2v3, ..., that is, all three vertices of each triangle back to back.
However, this code gives a blank screen, while if I switch to the immediate-mode rendering code (commented out below) everything renders fine...??? What am I doing wrong?
private void glControl_Load(object sender, EventArgs e)
{
loaded = true;
glControl.MouseMove += new MouseEventHandler(glControl_MouseMove);
glControl.MouseWheel += new MouseEventHandler(glControl_MouseWheel);
GL.ClearColor(Color.DarkSlateGray);
GL.Color3(1f, 1f, 1f);
m.OpenGLArrays(out data, out indices);
this.indicesSize = (uint)indices.Length;
GL.GenBuffers(1, out VBOid[0]);
GL.GenBuffers(1, out VBOid[1]);
SetupViewport();
}
private void SetupViewport()
{
if (this.WindowState == FormWindowState.Minimized) return;
glControl.Width = this.Width - 32;
glControl.Height = this.Height - 80;
Frame_label.Location = new System.Drawing.Point(glControl.Width / 2, glControl.Height + 25);
GL.MatrixMode(MatrixMode.Projection);
//GL.LoadIdentity();
GL.Ortho(0, glControl.Width, 0, glControl.Height, -1, 1); // Bottom-left corner pixel has coordinate (0, 0)
GL.Viewport(0, 0, glControl.Width, glControl.Height); // Use all of the glControl painting area
GL.Enable(EnableCap.DepthTest);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid[0]);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(data.Length * sizeof(float)), data, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
float aspect_ratio = this.Width / (float)this.Height;
projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 1024);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadMatrix(ref projection);
}
private void glControl_Paint(object sender, PaintEventArgs e)
{
if (loaded)
{
GL.Clear(ClearBufferMask.ColorBufferBit |
ClearBufferMask.DepthBufferBit |
ClearBufferMask.StencilBufferBit);
modelview = Matrix4.LookAt(0f, 0f, -200f + zoomFactor, 0, 0, 0, 0.0f, 1.0f, 0.0f);
var aspect_ratio = Width / (float)Height;
projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 512);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadMatrix(ref projection);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref modelview);
GL.Rotate(angleY, 1.0f, 0, 0);
GL.Rotate(angleX, 0, 1.0f, 0);
GL.EnableClientState(ArrayCap.VertexArray);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid[0]);
GL.Color3(Color.Yellow);
GL.VertexPointer(3, VertexPointerType.Float, Vector3.SizeInBytes, new IntPtr(0));
GL.DrawArrays(PrimitiveType.Triangles, 0, data.Length);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.DisableClientState(ArrayCap.VertexArray);
//GL.Color3(Color.Yellow);
//GL.PolygonMode(MaterialFace.Front, PolygonMode.Fill);
//GL.Begin(PrimitiveType.Triangles);
//for (int i = 0; i < this.md.mesh.Count; i++)
//{
// GL.Normal3(this.md.mesh[i].normal);
// GL.Vertex3(this.md.mesh[i].vertices[0]);
// GL.Vertex3(this.md.mesh[i].vertices[1]);
// GL.Vertex3(this.md.mesh[i].vertices[2]);
//}
//GL.End();
//GL.EndList();
glControl.SwapBuffers();
Frame_label.Text = "Frame: " + frameNum++;
}
}
If something doesn't seem right, then it probably isn't. I seriously questioned my understanding of OpenGL and spent hours looking at this. However, it was just a simple error: I had forgotten to increment a count variable in the for loop that transfers the mesh from one object to another, so each triangle had identical vertices! Always expect the unexpected when it comes to debugging!
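For anyone hitting the same wall, the kind of bug I mean (a destination index that never advances while copying the mesh) looks roughly like this made-up snippet:
// Hypothetical illustration only: 'k' is meant to walk through the flat float
// array but is never incremented, so every triangle overwrites the same slots.
int k = 0;
foreach (var tri in mesh.Triangles)
{
    foreach (var v in tri.Vertices)
    {
        data[k + 0] = v.X;
        data[k + 1] = v.Y;
        data[k + 2] = v.Z;
        // BUG: k is never advanced here (k += 3 is missing)
    }
}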
I would like to flip an image and create a new Texture2D of that flipped image.
I have done some research and found that this code will create a new Texture2D:
RenderTarget2D newTarget = new RenderTarget2D(GraphicsDevice, partWidth, partHeight)
GraphicsDevice.SetRenderTarget(newTarget);
SpriteBatch.Draw(original, Vector2.Zero, partSourceRectangle, ... )
GraphicsDevice.SetRenderTarget(null);
Texture2D newTexture = (Texture2D)newTarget;
I understand that newTexture is susceptible to being removed from memory, so I am advised that I need to use GetData/SetData to create a more permanent texture. Could anyone advise me on the specific syntax to do this?
The following method saves the flipped texture to a new Texture2D:
public Texture2D SaveAsFlippedTexture2D(Texture2D input, bool vertical, bool horizontal)
{
Texture2D flipped = new Texture2D(input.GraphicsDevice, input.Width, input.Height);
Color[] data = new Color[input.Width * input.Height];
Color[] flipped_data = new Color[data.Length];
input.GetData<Color>(data);
for (int x = 0; x < input.Width; x++)
{
for (int y = 0; y < input.Height; y++)
{
int index = 0;
if (horizontal && vertical)
index = input.Width - 1 - x + (input.Height - 1 - y) * input.Width;
else if (horizontal && !vertical)
index = input.Width - 1 - x + y * input.Width;
else if (!horizontal && vertical)
index = x + (input.Height - 1 - y) * input.Width;
else if (!horizontal && !vertical)
index = x + y * input.Width;
flipped_data[x + y * input.Width] = data[index];
}
}
flipped.SetData<Color>(flipped_data);
return flipped;
}
Example:
Load your texture, then call the method, passing the texture as a parameter, and assign the returned flipped texture to another Texture2D. You can load your content inside the game's Update() method as well.
Texture2D texture;
Texture2D flippedTextureHorizontal;
Texture2D flippedTextureVertical;
Texture2D flippedTextureVerticalHorizontal;
protected override void LoadContent()
{
// Create a new SpriteBatch, which can be used to draw textures.
spriteBatch = new SpriteBatch(GraphicsDevice);
texture = Content.Load<Texture2D>("kitty-cat");
flippedTextureHorizontal = SaveAsFlippedTexture2D(texture, false, true);
flippedTextureVertical = SaveAsFlippedTexture2D(texture, true, false);
flippedTextureVerticalHorizontal = SaveAsFlippedTexture2D(texture, true, true);
}
Draw method:
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearWrap, DepthStencilState.None, RasterizerState.CullCounterClockwise);
spriteBatch.Draw(texture, Vector2.Zero, null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);
spriteBatch.Draw(flippedTextureHorizontal, new Vector2(texture.Width + offset, 0), null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);
spriteBatch.Draw(flippedTextureVertical, new Vector2(texture.Width * 2 + offset * 2, 0), null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);
spriteBatch.Draw(flippedTextureVerticalHorizontal, new Vector2(texture.Width * 3 + offset * 3, 0), null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 0);
spriteBatch.End();
base.Draw(gameTime);
}
Output:
The algorithm can be found here as well.
The same result as above can be achieved by using the code below for horizontal and vertical flipping at the same time, but I'm not sure it will be 100% correct:
test = new Texture2D(GraphicsDevice, texture.Width, texture.Height);
int size = texture.Width * texture.Height;
Color[] data = new Color[size];
texture.GetData<Color>(data);
Array.Reverse(data, texture.Width, size - texture.Width);
test.SetData<Color>(data);
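For what it's worth, reversing the whole array (rather than starting at texture.Width) maps pixel (x, y) to (Width - 1 - x, Height - 1 - y), which is exactly the combined horizontal and vertical flip; a minimal sketch:
// Minimal sketch: a full Array.Reverse flips the image on both axes, since
// index x + y * Width becomes (Width - 1 - x) + (Height - 1 - y) * Width.
Texture2D both = new Texture2D(GraphicsDevice, texture.Width, texture.Height);
Color[] pixels = new Color[texture.Width * texture.Height];
texture.GetData<Color>(pixels);
Array.Reverse(pixels);
both.SetData<Color>(pixels);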
I'm trying to draw an indexed square using SlimDX and Direct3D11. I've managed to draw a square without indices, but when I swap to my indexed version I just get a blank screen.
My input layout is set to only take position data (I'm essentially extending from the third tutorial on the SlimDX website) and to draw Triangle Lists.
My render loop code is as follows. I am using the triangle.fx pixel and vertex shaders from the tutorial; they take vertex positions (in screen coordinates) and paint them yellow. D3D is shorthand for SlimDX.Direct3D11.
//clear the render target
context.ClearRenderTargetView(renderTarget, new Color4(0.5f, 0.5f, 1.0f));
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer,12, 0));
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
swapChain.Present(0, PresentFlags.None);
"mesh" is a struct that holds a Vertex buffer, Index buffer and vertex count. The data is filled here:
Vertex[] vertexes = new Vertex[4];
vertexes[0].Position = new Vector3(0, 0, 0.5f);
vertexes[1].Position = new Vector3(0, 0.5f, 0.5f);
vertexes[2].Position = new Vector3(0.5f, 0, 0.5f);
vertexes[3].Position = new Vector3(0.5f, 0.5f, 0.5f);
UInt16[] indexes = { 0, 1, 2, 1, 3, 2 };
DataStream vertices = new DataStream(12 * 4, true, true);
foreach (Vertex vertex in vertexes)
{
vertices.Write(vertex.Position);
}
vertices.Position = 0;
DataStream indices = new DataStream(sizeof(int) * 6, true, true);
foreach (UInt16 index in indexes)
{
indices.Write(index);
}
indices.Position = 0;
mesh = new Mesh();
D3D.Buffer vertexBuffer = new D3D.Buffer(device, vertices, 12 * 4, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.VertexBuffer = vertexBuffer;
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.vertices = vertexes.GetLength(0);
mesh.indices = indexes.Length;
All of this is nearly identical to my unindexed square method (with the addition of index buffers and indices, and the removal of two duplicate vertices that aren't needed with indexing), but while the unindexed method draws a square, the indexed method doesn't.
My current theory is that there is either something wrong with this line:
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Or these lines:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
Why don't you just use a vertex and index buffer for this simple example?
Like this (DirectX 9):
VertexBuffer vb;
IndexBuffer ib;
vertices = new PositionColored[WIDTH * HEIGHT];
//vertex creation
vb = new VertexBuffer(device, HEIGHT * WIDTH * PositionColored.SizeInBytes, Usage.WriteOnly, PositionColored.Format, Pool.Default);
DataStream stream = vb.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
vb.Unlock();
indices = new short[(WIDTH - 1) * (HEIGHT - 1) * 6];
//indicies creation
ib = new IndexBuffer(device, sizeof(int) * (WIDTH - 1) * (HEIGHT - 1) * 6, Usage.WriteOnly, Pool.Default, false);
DataStream stream = ib.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
ib.Unlock();
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetStreamSource(0, vb, 0, PositionColored.SizeInBytes);
device.Indices = ib;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, indices.Length / 3);
device.EndScene();
device.Present();
I use the mesh in another way (DirectX 9 code again):
private void CreateMesh()
{
meshTerrain = new Mesh(device, (WIDTH - 1) * (HEIGHT - 1) * 2, WIDTH * HEIGHT, MeshFlags.Managed, PositionColored.Format);
DataStream stream = meshTerrain.VertexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
meshTerrain.VertexBuffer.Unlock();
stream.Close();
stream = meshTerrain.IndexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
meshTerrain.IndexBuffer.Unlock();
stream.Close();
meshTerrain.GenerateAdjacency(0.5f);
meshTerrain.OptimizeInPlace(MeshOptimizeFlags.VertexCache);
meshTerrain = meshTerrain.Clone(device, MeshFlags.Dynamic, PositionNormalColored.Format);
meshTerrain.ComputeNormals();
}
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
int numSubSets = meshTerrain.GetAttributeTable().Length;
for (int i = 0; i < numSubSets; i++)
{
meshTerrain.DrawSubset(i);
}
device.EndScene();
device.Present();
I am writing a small 2d game engine in C# for my own purposes, and it works fine except for the sprite collision detection. I've decided to make it a per-pixel detection (easiest for me to implement), but it is not working the way it's supposed to. The code detects a collision long before it happens. I've examined every component of the detection, but I can't find the problem.
The collision detection method:
public static bool CheckForCollision(Sprite s1, Sprite s2, bool perpixel) {
if(!perpixel) {
return s1.CollisionBox.IntersectsWith(s2.CollisionBox);
}
else {
Rectangle rect;
Image img1 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s1.Image, s1.Position, s1.Origin, s1.Rotation, out rect), s1.Scale);
int posx1 = rect.X;
int posy1 = rect.Y;
Image img2 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s2.Image, s2.Position, s2.Origin, s2.Rotation, out rect), s2.Scale);
int posx2 = rect.X;
int posy2 = rect.Y;
Rectangle abounds = new Rectangle(posx1, posy1, (int)img1.Width, (int)img1.Height);
Rectangle bbounds = new Rectangle(posx2, posy2, (int)img2.Width, (int)img2.Height);
if(Utilities.RectangleIntersects(abounds, bbounds)) {
uint[] bitsA = s1.GetPixelData(false);
uint[] bitsB = s2.GetPixelData(false);
int x1 = Math.Max(abounds.X, bbounds.X);
int x2 = Math.Min(abounds.X + abounds.Width, bbounds.X + bbounds.Width);
int y1 = Math.Max(abounds.Y, bbounds.Y);
int y2 = Math.Min(abounds.Y + abounds.Height, bbounds.Y + bbounds.Height);
for(int y = y1; y < y2; ++y) {
for(int x = x1; x < x2; ++x) {
if(((bitsA[(x - abounds.X) + (y - abounds.Y) * abounds.Width] & 0xFF000000) >> 24) > 20 &&
((bitsB[(x - bbounds.X) + (y - bbounds.Y) * bbounds.Width] & 0xFF000000) >> 24) > 20)
return true;
}
}
}
return false;
}
}
The image rotation method:
internal static Image RotateImagePoint(Image img, Vector pos, Vector orig, double rotation, out Rectangle sz) {
if(!(new Rectangle(new Point(0), img.Size).Contains(new Point((int)orig.X, (int)orig.Y))))
Console.WriteLine("Origin point is not present in image bound; unwanted cropping might occur");
rotation = (double)ra_de((double)rotation);
sz = GetRotateDimensions((int)pos.X, (int)pos.Y, img.Width, img.Height, rotation, false);
Bitmap bmp = new Bitmap(sz.Width, sz.Height);
Graphics g = Graphics.FromImage(bmp);
g.SmoothingMode = SmoothingMode.AntiAlias;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
g.RotateTransform((float)rotation);
g.TranslateTransform(sz.Width / 2, sz.Height / 2, MatrixOrder.Append);
g.DrawImage(img, (float)-orig.X, (float)-orig.Y);
g.Dispose();
return bmp;
}
internal static Rectangle GetRotateDimensions(int imgx, int imgy, int imgwidth, int imgheight, double rotation, bool Crop) {
Rectangle sz = new Rectangle();
if (Crop == true) {
// absolute trig values goes for all angles
double dera = de_ra(rotation);
double sin = Math.Abs(Math.Sin(dera));
double cos = Math.Abs(Math.Cos(dera));
// general trig rules:
// length(adjacent) = cos(theta) * length(hypotenuse)
// length(opposite) = sin(theta) * length(hypotenuse)
// applied width = lo(img height) + la(img width)
sz.Width = (int)(sin * imgheight + cos * imgwidth);
// applied height = lo(img width) + la(img height)
sz.Height = (int)(sin * imgwidth + cos * imgheight);
}
else {
// get image diagonal to fit any rotation (w & h =diagonal)
sz.X = imgx - (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0));
sz.Y = imgy - (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0));
sz.Width = (int)Math.Sqrt(Math.Pow(imgwidth, 2.0) + Math.Pow(imgheight, 2.0)) * 2;
sz.Height = sz.Width;
}
return sz;
}
The pixel retrieval method:
public uint[] GetPixelData(bool useBaseImage) {
Rectangle rect;
Image image;
if (useBaseImage)
image = Image;
else
image = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(Image, Position, Origin, Rotation, out rect), Scale);
BitmapData data;
try {
data = ((Bitmap)image).LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
}
catch (ArgumentException) {
data = ((Bitmap)image).LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, image.PixelFormat);
}
byte[] rawdata = new byte[data.Stride * image.Height];
Marshal.Copy(data.Scan0, rawdata, 0, data.Stride * image.Height);
((Bitmap)image).UnlockBits(data);
int pixelsize = 4;
if (data.PixelFormat == PixelFormat.Format24bppRgb)
pixelsize = 3;
else if (data.PixelFormat == PixelFormat.Format32bppArgb || data.PixelFormat == PixelFormat.Format32bppRgb)
pixelsize = 4;
double intdatasize = Math.Ceiling((double)rawdata.Length / pixelsize);
uint[] intdata = new uint[(int)intdatasize];
Buffer.BlockCopy(rawdata, 0, intdata, 0, rawdata.Length);
return intdata;
}
The pixel retrieval method works, and the rotation method works as well, so the only place that the code might be wrong is the collision detection code, but I really have no idea where the problem might be.
I don't think many people here will bother to scrutinize your code to figure out what exactly is wrong, but I can offer some hints on how you can find the problem.
If collision happens long before it is supposed to, I suggest your bounding box check isn't working properly.
I would change the code to dump out all the data about the rectangles at collision, so you can write some code that displays the situation at the moment of collision. That might be easier than poring over the numbers.
Apart from that, I doubt that per-pixel collision detection is easier for you to implement. When you allow for rotation and scaling, it quickly becomes difficult to get right. I would do polygon-based collision detection instead.
I have made my own 2D engine like you, but I used polygon-based collision detection and that worked fine.
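For reference, the polygon-based approach usually comes down to a separating-axis test on each sprite's rotated, scaled corner points; here is a self-contained sketch of that check, not tied to your engine:
using System.Drawing; // PointF, matching the System.Drawing types used above

static class PolygonCollision
{
    // Separating Axis Theorem for two convex polygons (e.g. the four rotated
    // corners of each sprite). Vertices must be listed in order around the shape.
    public static bool Overlap(PointF[] a, PointF[] b)
    {
        foreach (PointF[] poly in new[] { a, b })
        {
            for (int i = 0; i < poly.Length; i++)
            {
                PointF p1 = poly[i];
                PointF p2 = poly[(i + 1) % poly.Length];

                // Axis perpendicular to the current edge
                float axisX = -(p2.Y - p1.Y);
                float axisY = p2.X - p1.X;

                Project(a, axisX, axisY, out float minA, out float maxA);
                Project(b, axisX, axisY, out float minB, out float maxB);

                // A gap along any axis means the polygons do not intersect
                if (maxA < minB || maxB < minA)
                    return false;
            }
        }
        return true; // no separating axis found
    }

    private static void Project(PointF[] poly, float axisX, float axisY,
                                out float min, out float max)
    {
        min = max = poly[0].X * axisX + poly[0].Y * axisY;
        for (int i = 1; i < poly.Length; i++)
        {
            float d = poly[i].X * axisX + poly[i].Y * axisY;
            if (d < min) min = d;
            if (d > max) max = d;
        }
    }
}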
I think I've found your problem.
internal static Rectangle GetRotateDimensions(int imgx, int imgy, int imgwidth, int imgheight, double rotation, bool Crop) {
Rectangle sz = new Rectangle(); // <-- Default constructed rect here.
if (Crop == true) {
// absolute trig values goes for all angles
double dera = de_ra(rotation);
double sin = Math.Abs(Math.Sin(dera));
double cos = Math.Abs(Math.Cos(dera));
// general trig rules:
// length(adjacent) = cos(theta) * length(hypotenuse)
// length(opposite) = sin(theta) * length(hypotenuse)
// applied width = lo(img height) + la(img width)
sz.Width = (int)(sin * imgheight + cos * imgwidth);
// applied height = lo(img width) + la(img height)
sz.Height = (int)(sin * imgwidth + cos * imgheight);
// <-- Never gets the X & Y assigned !!!
}
Since you never assigned imgx and imgy to the X and Y coordinates of the Rectangle, every call of GetRotateDimensions will produce a Rectangle with the same location. They may be of differing sizes, but they will always be in the default X,Y position. This would cause the really early collisions that you are seeing because any time you tried to detect collisions on two sprites, GetRotateDimensions would put their bounds in the same position regardless of where they actually are.
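In other words, the Crop branch presumably needs the position filled in as well, something like:
// Presumed fix, following the note above: give the cropped rect a position too
// (possibly offset the same way the non-crop branch offsets it).
sz.X = imgx;
sz.Y = imgy;
sz.Width = (int)(sin * imgheight + cos * imgwidth);
sz.Height = (int)(sin * imgwidth + cos * imgheight);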
Once you have corrected that problem, you may run into another error:
Rectangle rect;
Image img1 = GraphicsHandler.ResizeImage(GraphicsHandler.RotateImagePoint(s1.Image, s1.Position, s1.Origin, s1.Rotation, out rect), s1.Scale);
// <-- Should resize rect here.
int posx1 = rect.X;
int posy1 = rect.Y;
You get your boundary rect from the RotateImagePoint function, but you then resize the image. The X and Y from the rect are probably not exactly the same as that of the resized boundaries of the image. I'm guessing that you mean for the center of the image to remain in place while all points contract toward or expand from the center in the resize. If this is the case, then you need to resize rect as well as the image in order to get the correct position.
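A rough sketch of that adjustment, assuming the resize keeps the image centre fixed and that Scale exposes X/Y factors:
// Hypothetical adjustment: scale the boundary rect about its own centre so it
// matches the resized image.
float cx = rect.X + rect.Width / 2f;
float cy = rect.Y + rect.Height / 2f;
int newW = (int)(rect.Width * s1.Scale.X);
int newH = (int)(rect.Height * s1.Scale.Y);
rect = new Rectangle((int)(cx - newW / 2f), (int)(cy - newH / 2f), newW, newH);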
I doubt this is the actual problem, but LockBits doesn't guarantee that the bits data is aligned to the image's Width.
I.e., there may be some padding. You need to access the image using data[x + y * stride] and not data[x + y * width]. The Stride is also part of the BitmapData.
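If you want to keep the x + y * width indexing in the collision loop, one option is to repack the locked bits row by row into a tight buffer (this sketch assumes a 32bpp format):
// Hypothetical repacking: copy each row (Stride bytes wide, possibly padded)
// into a tight uint[width * height] buffer so x + y * width indexing is safe.
uint[] pixels = new uint[image.Width * image.Height];
for (int y = 0; y < image.Height; y++)
{
    Buffer.BlockCopy(rawdata, y * data.Stride,
                     pixels, y * image.Width * sizeof(uint),
                     image.Width * sizeof(uint));
}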