I am using the FBX file format with the XNA Framework, without the SkinnedModelProcessor.
I followed some posts, like this one, about rotating or translating individual bones to create gestures on a human body. I used the XNA sample to display the FBX model, and I was even able to move the full body left and right.
Now my problem: I wrote code to rotate a valid bone (bone 9), but the plain model is displayed instead of the model with the rotated bone.
This part of the code was written in SkinnedModelProcessor.cs:
List<Matrix> bindPose = new List<Matrix>();
List<Matrix> inverseBindPose = new List<Matrix>();
List<int> skeletonHierarchy = new List<int>();

foreach (BoneContent bone in bones)
{
    bindPose.Add(bone.Transform);
    inverseBindPose.Add(Matrix.Invert(bone.AbsoluteTransform));
    skeletonHierarchy.Add(bones.IndexOf(bone.Parent as BoneContent));
}

// code for updating the position of a particular bone
int idbone = 9;
bindPose[idbone] = bindPose[idbone] * Matrix.CreateRotationX(MathHelper.ToRadians(45.0f));
This is my Draw method:
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    Matrix[] bones = animationPlayer.GetSkinTransforms();

    Matrix view = Matrix.CreateTranslation(0, -40, 0) *
                  Matrix.CreateRotationY(MathHelper.ToRadians(camerarotation)) *
                  Matrix.CreateRotationX(MathHelper.ToRadians(cameraArc)) *
                  Matrix.CreateLookAt(new Vector3(0, 0, -cameraArc), new Vector3(0, 0, 0), Vector3.Up);

    Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 1, 10000);

    // loop over the meshes
    foreach (ModelMesh mesh in firstmodel.Meshes)
    {
        foreach (SkinnedEffect effect in mesh.Effects)
        {
            effect.SetBoneTransforms(bones);
            effect.EnableDefaultLighting();
            effect.World = worldtransformations[mesh.ParentBone.Index];
            effect.View = view;
            effect.Projection = projection;
            effect.SpecularColor = new Vector3(0.25f);
            effect.SpecularPower = 16;
        }
        mesh.Draw();
    }
}
It looks like I screwed up somewhere.
I have three models; for the first and third an error pops up saying "no skinning data", even though I loaded everything (set the texture, changed the content processor).
The second model just gives a blank screen.
I have run into an interesting problem.
I want to get a rendered object's window coordinates. When I use this in the OpenGLDraw event:
var modelview = new double[16];
gl.GetDouble(OpenGL.GL_MODELVIEW_MATRIX, modelview);
drawing stops working.
Environment: VS 2019 Community Edition, SharpGL.WinForms 3.1.1, C# WinForms project, .NET Framework 4.6.1.
OpenGLControl events:
Init (I only need 2D space):
private void RenderPanel_OpenGLInitialized(object sender, EventArgs e)
{
    gl = RenderPanel.OpenGL;
    gl.Disable(OpenGL.GL_DEPTH_TEST);
    gl.Enable(OpenGL.GL_BLEND);
    gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA);
    gl.LoadIdentity();
    gl.MatrixMode(OpenGL.GL_PROJECTION);
    gl.LoadIdentity();
    gl.Viewport(0, 0, RenderPanel.Width, RenderPanel.Height);
    gl.Ortho(0, RenderPanel.Width, RenderPanel.Height, 0, 1, -1);
    gl.MatrixMode(OpenGL.GL_MODELVIEW);
    gl.LoadIdentity();
}
Draw: this works fine and draws the objects:
private void RenderPanel_OpenGLDraw(object sender, RenderEventArgs args)
{
    gl.ClearColor(1, 1, 1, 1);
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
    gl.LoadIdentity();
    glColor(Color.White);
    gl.Enable(OpenGL.GL_TEXTURE_2D);
    foreach (ImageItem img in ImageItems.Items)
    {
        img.Draw();
    }
}
I want to use this in the foreach loop, after img.Draw() (part of the code):
var modelview = new double[16];
var projection = new double[16];
var viewport = new int[4];
var winx = new double[1];
var winy = new double[1];
var winz = new double[1];
gl.GetDouble(OpenGL.GL_MODELVIEW_MATRIX, modelview);
gl.GetDouble(OpenGL.GL_PROJECTION_MATRIX, projection);
gl.GetInteger(OpenGL.GL_VIEWPORT, viewport);
gl.Project(0, 0, 0, modelview, projection, viewport, winx, winy, winz);
I tried to debug it by commenting it out row by row. I can see that as soon as I use gl.GetDouble(), the drawing is gone. I do get correct window coordinates, but the last object disappears.
.Draw() is simple:
gl.BindTexture(OpenGL.GL_TEXTURE_2D, TextureID);
gl.LoadIdentity();
gl.Color(1f, 1f, 1f, pos.a);
gl.Translate(pos.x, pos.y, 0);
gl.Rotate(pos.r, 0, 0);
OpenGLDraw.DrawQuad(pos.w, pos.h);
It looks like a bug. I added these two rows to some sample projects and the drawing broke there as well (some objects disappeared).
I tried lots of things. Finally I changed the RenderContextType, and the last object appeared. I don't think that is a good solution, but it works.
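As a side note: since the projection here is a fixed gl.Ortho(0, width, height, 0, 1, -1) and the modelview in .Draw() is essentially a translation per image, the window coordinates can also be computed on the CPU without querying GL state at all, which sidesteps the gl.GetDouble() problem entirely. A minimal sketch of that idea (plain C#, no SharpGL types; the rotation from gl.Rotate is assumed to be zero for simplicity):

```csharp
using System;

public class OrthoProject
{
    // With gl.Ortho(0, width, height, 0, 1, -1) and a modelview that is a
    // pure gl.Translate(tx, ty, 0), a local point (objX, objY) lands at
    // pixel (objX + tx, objY + ty) measured from the TOP-left (WinForms style).
    // gl.Project() reports the same point from the BOTTOM-left instead,
    // i.e. winY = viewportHeight - topY.
    public static (double x, double yTopLeft, double yBottomLeft) Project(
        double objX, double objY, double tx, double ty, double viewportHeight)
    {
        double x = objX + tx;
        double yTop = objY + ty;
        return (x, yTop, viewportHeight - yTop);
    }

    static void Main()
    {
        // an image translated to (100, 50) in a 600-pixel-tall viewport
        var (x, yTop, yBottom) = Project(0, 0, 100, 50, 600);
        Console.WriteLine($"{x} {yTop} {yBottom}"); // 100 50 550
    }
}
```

This only replaces gl.Project for this specific ortho setup; once rotation or a non-trivial modelview is involved, the full matrix pipeline is needed again.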
I have to create a 2D map in Unity using a single image. I have one .png file with 5 different pieces, out of which I have to create the map, and I am not allowed to crop the image. So, how do I create this map using only one image?
I am a bit new to Unity; I tried searching but didn't find exactly what I am looking for. I also tried a Tilemap using a Palette, but couldn't figure out how to extract only one portion of the image.
You can create various Sprites from the given texture on the fly in code.
You can define which part of a given Texture2D shall be used for the Sprite using Sprite.Create, providing the rect in pixel coordinates of the given image. Remember, however, that in Unity texture coordinates start at the bottom left.
Example: use the given pixel-coordinate section of a texture for the attached UI.Image component:
[RequireComponent(typeof(Image))]
public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the Inspector
    public Texture2D texture;

    // define which pixel coordinates to use for this sprite, also via the Inspector
    public Rect pixelCoordinates;

    private void Start()
    {
        var newSprite = Sprite.Create(texture, pixelCoordinates, Vector2.one / 2f);
        GetComponent<Image>().sprite = newSprite;
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            pixelCoordinates = new Rect();
            return;
        }

        // reset to valid rect values (the rect must stay inside the texture)
        pixelCoordinates.x = Mathf.Clamp(pixelCoordinates.x, 0, texture.width);
        pixelCoordinates.y = Mathf.Clamp(pixelCoordinates.y, 0, texture.height);
        pixelCoordinates.width = Mathf.Clamp(pixelCoordinates.width, 0, texture.width - pixelCoordinates.x);
        pixelCoordinates.height = Mathf.Clamp(pixelCoordinates.height, 0, texture.height - pixelCoordinates.y);
    }
}
Or you can make a kind of manager class to generate all needed sprites once, e.g. in a list:
public class Example : MonoBehaviour
{
    // your texture, e.g. from a public field via the Inspector
    public Texture2D texture;

    // define which pixel coordinates to use for each sprite, also via the Inspector
    public List<Rect> pixelCoordinates = new List<Rect>();

    // OUTPUT
    public List<Sprite> resultSprites = new List<Sprite>();

    private void Start()
    {
        foreach (var coordinates in pixelCoordinates)
        {
            var newSprite = Sprite.Create(texture, coordinates, Vector2.one / 2f);
            resultSprites.Add(newSprite);
        }
    }

    // called every time something is changed in the Inspector
    private void OnValidate()
    {
        if (!texture)
        {
            for (var i = 0; i < pixelCoordinates.Count; i++)
            {
                pixelCoordinates[i] = new Rect();
            }
            return;
        }

        for (var i = 0; i < pixelCoordinates.Count; i++)
        {
            // reset to valid rect values (each rect must stay inside the texture)
            var rect = pixelCoordinates[i];
            rect.x = Mathf.Clamp(pixelCoordinates[i].x, 0, texture.width);
            rect.y = Mathf.Clamp(pixelCoordinates[i].y, 0, texture.height);
            rect.width = Mathf.Clamp(pixelCoordinates[i].width, 0, texture.width - pixelCoordinates[i].x);
            rect.height = Mathf.Clamp(pixelCoordinates[i].height, 0, texture.height - pixelCoordinates[i].y);
            pixelCoordinates[i] = rect;
        }
    }
}
Example:
I have 4 Image instances and configured them so the pixelCoordinates are:
imageBottomLeft: X=0, Y=0, W=100, H=100
imageTopLeft: X=0, Y=100, W=100, H=100
imageBottomRight: X=100, Y=0, W=100, H=100
imageTopRight: X=100, Y=100, W=100, H=100
The texture I used is 386 × 395, so I'm not using all of it here (I just added the frames the Sprites are going to use).
So when hitting Play, the following sprites are created:
I am currently making a level editor where the user imports tiles from a file. It currently works, except that I want the pixels per unit of each imported sprite to change to 32.
Here is my code:
// Get tiles from file
StreamReader reader = new StreamReader(Application.dataPath + "/../Maps/" + mapName + "/Tiles/tiles.txt");
string line = reader.ReadLine();
while (!string.IsNullOrEmpty(line))
{
    string[] param = line.Split(',');
    foreach (TileTexture t in tileTextures)
    {
        if (t.name == param[0])
        {
            Sprite sprite = Sprite.Create(t.texture, new Rect(0, 0, t.texture.width, t.texture.height), new Vector2(0, 0));
            sprite.pixelsPerUnit = 32; // this line doesn't work; it gives a read-only error
            Tile tile = new Tile(param[0], sprite, new Vector2(float.Parse(param[1]), float.Parse(param[2])));
            tile.sprite.texture.filterMode = FilterMode.Point;
            tiles.Add(tile);
        }
    }
    line = reader.ReadLine();
}
Looking at the function Sprite.Create() we see that the function signature is
public static Sprite Create(Texture2D texture,
Rect rect,
Vector2 pivot,
float pixelsPerUnit = 100.0f,
uint extrude = 0,
SpriteMeshType meshType = SpriteMeshType.Tight,
Vector4 border = Vector4.zero);
We see that we can pass pixelsPerUnit as an optional parameter to the function. You can only set it here; you cannot change it later because, as you found out, pixelsPerUnit is a readonly field (meaning it cannot be changed after creation). So you just need to pass in your 32f here. The correct code would be:
if (t.name == param[0])
{
    Sprite sprite = Sprite.Create(t.texture, new Rect(0, 0, t.texture.width, t.texture.height), new Vector2(0, 0), 32f);
    Tile tile = new Tile(param[0], sprite, new Vector2(float.Parse(param[1]), float.Parse(param[2])));
    tile.sprite.texture.filterMode = FilterMode.Point;
    tiles.Add(tile);
}
I've recently begun playing around with MonoGame (an open-source remake of XNA). My current task is to write a simple shadow-mapping shader. After following Riemer's XNA tutorial I got the shadows to show up, but interestingly only around where the "player" (camera) is. When I'm outside of the light's radius there is no light showing up around the camera (which is good), but the light from my spotlight should of course still be there. It's not: the shadows and the bright lighting from my spotlight only appear around the player.
As I said, my HLSL code is exactly the same as Riemer's (except that I wrote SV_POSITION instead of POSITION0 in the VertexShaderInput struct, because MonoGame only supports pixel/vertex shader model 4.0 and above).
I also have a feeling that my light view/projection matrices might somehow be wrong:
Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(lightPos.X, lightPos.Y-1, lightPos.Z), new Vector3(0, 0, 1));
Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45F), 1f, 1f, 10f);
(I know that my "up" vector is not pointing upwards; that was the only way I got the light to point downwards from its position. Otherwise it would always point at (0, -1, 0), even when I wrote it so that the light's coordinates were used and it would go y - 1 from the light. Really weird...)
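The up-vector behavior is actually expected: CreateLookAt builds its camera basis from the cross product of the view direction and the up vector, and here the light looks straight down along (0, -1, 0), which is parallel to Vector3.Up. The cross product of parallel vectors is zero, so the basis degenerates, which is why a perpendicular vector like (0, 0, 1) has to be used instead. A minimal check of that reasoning (plain C#, no XNA types, for illustration):

```csharp
using System;

public class LookAtUpCheck
{
    // cross product of two 3-vectors
    public static (double x, double y, double z) Cross(
        (double x, double y, double z) a, (double x, double y, double z) b)
        => (a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x);

    static void Main()
    {
        var forward = (x: 0.0, y: -1.0, z: 0.0); // light looks straight down

        // Vector3.Up is parallel to the view direction: the cross product is
        // zero, so a look-at basis built from it is degenerate
        Console.WriteLine(Cross(forward, (0.0, 1.0, 0.0))); // (0, 0, 0)

        // (0, 0, 1) is perpendicular to the view direction: a valid side axis
        Console.WriteLine(Cross(forward, (0.0, 0.0, 1.0))); // (-1, 0, 0)
    }
}
```

So passing new Vector3(0, 0, 1) as "up" for a straight-down light is the standard fix, not a hack.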
I really hope someone knows what's going on here; this issue has kept me from doing anything else in my code for a couple of days now...
PS: Here's my rendering code:
//// In my Model class
private void DrawModel(string tech)
{
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);

    foreach (var mesh in model.Meshes)
    {
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            part.Effect = Game1.effect;
            Matrix tempWorld = ((transforms[mesh.ParentBone.Index] * Matrix.CreateScale(scale)) * rotationMatrix) * Matrix.CreateTranslation(pos);
            part.Effect.CurrentTechnique = part.Effect.Techniques[tech];
            part.Effect.Parameters["xWorldViewProjection"].SetValue(tempWorld * Game1.camera.View * Game1.camera.Projektion);
            part.Effect.Parameters["xTexture"].SetValue(textures[part]);
            part.Effect.Parameters["xWorld"].SetValue(tempWorld);
            part.Effect.Parameters["xLightPos"].SetValue(Game1.currentLevel.lightPos);
            part.Effect.Parameters["xLightPower"].SetValue(Game1.currentLevel.lightPower);
            part.Effect.Parameters["xAmbient"].SetValue(Game1.currentLevel.ambientPower);
            part.Effect.Parameters["xLightsWorldViewProjection"].SetValue(tempWorld * Game1.currentLevel.lightsViewProjectionMatrix);
            part.Effect.Parameters["xShadowMap"].SetValue(Game1.currentLevel.shadowMap);
        }
        mesh.Draw();
    }
}
//// In my Level class
public void Render()
{
    var device = Game1.graphics.GraphicsDevice;
    device.SetRenderTarget(renderTarget);
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
    DrawScene("ShadowMap");
    device.SetRenderTarget(null);
    shadowMap = (Texture2D)renderTarget;
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
    DrawScene("ShadowedScene");
}

private void DrawScene(string tech)
{
    foreach (var model in models)
    {
        model.Render(tech);
    }
}
and the light-update code:
private void UpdateLightData()
{
    ambientPower = 0.2f;
    lightPos = new Vector3(-0.2F, 1.2F, 1.1F);
    lightPower = 1.8f;

    Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(lightPos.X, lightPos.Y - 1, lightPos.Z), new Vector3(0, 0, 1));
    Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45F), 1f, 1f, 10f);
    lightsViewProjectionMatrix = lightsView * lightsProjection;
}
I'm using a single sprite-sheet image as the main texture for my Breakout game. The image is this:
My code is a little confusing, since for each element I'm creating, from the same texture: a Point to represent the element's size, another Point for its position on the sheet, a Vector2 to represent its position on the viewport, and a Rectangle that represents the element itself.
Texture2D sheet;
Point paddleSize = new Point(112, 24);
Point paddleSheetPosition = new Point(0, 240);
Vector2 paddleViewportPosition;
Rectangle paddleRectangle;
Point ballSize = new Point(24, 24);
Point ballSheetPosition = new Point(160, 240);
Vector2 ballViewportPosition;
Rectangle ballRectangle;
Vector2 ballVelocity;
My initialization is a little confusing as well, but it works as expected:
paddleViewportPosition = new Vector2((GraphicsDevice.Viewport.Bounds.Width - paddleSize.X) / 2, GraphicsDevice.Viewport.Bounds.Height - (paddleSize.Y * 2));
paddleRectangle = new Rectangle(paddleSheetPosition.X, paddleSheetPosition.Y, paddleSize.X, paddleSize.Y);
Random random = new Random();
ballViewportPosition = new Vector2(random.Next(GraphicsDevice.Viewport.Bounds.Width), random.Next(GraphicsDevice.Viewport.Bounds.Top, GraphicsDevice.Viewport.Bounds.Height / 2));
ballRectangle = new Rectangle(ballSheetPosition.X, ballSheetPosition.Y, ballSize.X, ballSize.Y);
ballVelocity = new Vector2(3f, 3f);
And the drawing:
spriteBatch.Draw(sheet, paddleViewportPosition, paddleRectangle, Color.White);
spriteBatch.Draw(sheet, ballViewportPosition, ballRectangle, Color.White);
The problem is I can't detect the collision properly, using this code:
if(ballRectangle.Intersects(paddleRectangle))
{
ballVelocity.Y = -ballVelocity.Y;
}
What am I doing wrong?
You're testing collision based on the source rectangles for the sprite-sheet texture. Those rectangles (paddleRectangle, ballRectangle) are defined in texture coordinates, that is, where those sprites are on the sheet. It makes no sense to test those rectangles for collision.
You need to use screen coordinates for collision; that is, you need different rectangles defined from the screen positions:
Rectangle paddleViewportRectangle = new Rectangle((int)paddleViewportPosition.X,
                                                  (int)paddleViewportPosition.Y,
                                                  paddleSize.X,
                                                  paddleSize.Y);
Rectangle ballViewportRectangle = new Rectangle((int)ballViewportPosition.X,
                                                (int)ballViewportPosition.Y,
                                                ballSize.X,
                                                ballSize.Y);
(The casts are needed because the Rectangle constructor takes int arguments while Vector2 components are float.)
if(ballViewportRectangle.Intersects(paddleViewportRectangle))
{
ballVelocity.Y = -ballVelocity.Y;
}
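Rectangle.Intersects is just an axis-aligned overlap test on whatever numbers the rectangle holds, which is why it only gives a meaningful answer once both rectangles are in the same (screen) space, and why the viewport rectangles have to be rebuilt from the current positions every frame. A minimal sketch of the test itself (plain C#, no XNA types, for illustration; the sizes match the paddle and ball above):

```csharp
using System;

public class AabbOverlap
{
    // Axis-aligned overlap: two rectangles (x, y, w, h) intersect when they
    // overlap on both the X axis and the Y axis.
    public static bool Intersects(
        int ax, int ay, int aw, int ah,
        int bx, int by, int bw, int bh)
        => ax < bx + bw && bx < ax + aw &&
           ay < by + bh && by < ay + ah;

    static void Main()
    {
        // ball (24x24) touching the paddle (112x24) in screen space
        Console.WriteLine(Intersects(100, 450, 24, 24, 90, 460, 112, 24)); // True

        // same sizes, but the ball is far above the paddle: no overlap
        Console.WriteLine(Intersects(100, 100, 24, 24, 90, 460, 112, 24)); // False
    }
}
```

Run with the sheet-space rectangles from the question, the same test compares fixed sprite positions on the texture, which never change, so it could never track the ball.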