I am writing a camera module for my XNA project and recently ran into a problem.
I added camera rotation to the module and got a fabulous bug - every time I draw multiple sprites at the same position, SpriteBatch draws one of them in front on one frame and the other in front on the next, and, even funnier, sometimes both are shown with different alphas.
I've run a few experiments:
When I set the SpriteSortMode to Deferred everything is fine - but then I lose access to the z-index.
When I draw the whole 500x500 tile array (all the sprites loaded), everything is also fine, but when I draw only a 50x50 square taken from the array (containing everything the screen should show), it gets buggy.
Finally, the same angle always reproduces the same state.
I should add that I do the translation myself, in order to get double precision. Here is the translation method:
public void DrawSprite(Sprite toDraw)
{
    // TODO: Think about a converting constructor
    Vector2D drawingPosition;   // double-precision position (custom type)
    Vector2 drawingPos;
    drawingPosition = toDraw.Position - Position;
    drawingPos.X = (float)drawingPosition.X * GraphicsManager.Instance.UnitToPixels;
    drawingPos.Y = (float)drawingPosition.Y * GraphicsManager.Instance.UnitToPixels;
    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth);
}
My ideas for the problem:
A) Fix the bug somehow (if it's possible).
B) Force XNA to sort first by z-index and then by drawing order.
C) Apply z-indexes everywhere overlapping can occur (don't like this).
D) Abandon the z-index (don't want that either).
As lukegravitt suggested, you should use SpriteSortMode.FrontToBack:
Sprites are sorted by depth in front-to-back order prior to drawing.
This procedure is recommended when drawing opaque sprites of varying
depths.
Reference MSDN.
So you can easily set the z-index with the last parameter of SpriteBatch.Draw:
float layerDepth
The depth of a layer. By default, 0 represents the front layer and 1
represents a back layer. Use SpriteSortMode if you want sprites to be
sorted during drawing.
Reference MSDN.
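For example, a minimal sketch of beginning the batch so that layerDepth is honoured (the BlendState here is an assumption - use whatever your game already needs):

spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
// ... all Draw calls in here supply a meaningful layerDepth ...
spriteBatch.End();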
I finally picked option C); here is how it works in my code:
public void DrawSprite(Sprite toDraw)
{
    // TODO: Think about a converting constructor
    Vector2D drawingPosition;
    Vector2 drawingPos;
    drawingPosition = toDraw.Position - Position;
    drawingPos.X = (float)drawingPosition.X * GraphicsManager.Instance.UnitToPixels;
    drawingPos.Y = (float)drawingPosition.Y * GraphicsManager.Instance.UnitToPixels;
    // advance to the next z-index
    zsortingValue += 0.000001f;
    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth + zsortingValue);
}
where zsortingValue is reset to zero at the start of every frame. This way each sprite gets its own sorting value, which only ever increases, so sprites with equal LayerDepth keep a stable draw order.
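For completeness, a minimal sketch of the per-frame reset (everything except zsortingValue is named by assumption):

private float zsortingValue;

public void BeginFrame()
{
    zsortingValue = 0f; // restart the tie-breaking offsets each frame
    spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
}

public void EndFrame()
{
    spriteBatch.End();
}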
So, this seems to be a strange one (to me). I'm relatively new to Unity, so I'm sure this is me misunderstanding something.
I'm working on a VSEPR tutorial module in Unity. VSEPR is the model by which electron pairs repel each other and determine the shape (geometry) of a molecule.
I'm simulating this by making (in this case) 4 rods (cylinder primitives) and using rigidbody.AddForce to apply an equal force against all of them. This works beautifully, as long as the force is equal. In the following image you'll see the rods are beautifully equidistant at 109.47 degrees (actually you can "see" their attached objects, two lone pairs and two electron bonds...the rods are obscured in the atom shell.)
(BTW the atom's shell is just a sphere primitive - painted all pretty.)
HOWEVER, in the real world, the lone pairs actually exert SLIGHTLY more force...so when I add this additional force to the model...instead of just pushing the other electron rods a little farther away, it pushes the ENTIRE 4-bond structure outside the atom's shell.
THE REASON THIS SEEMS ODD IS...two things.
All the rods are children of the shell...so I thought that made them somewhat immobile relative to the shell (i.e. if they moved, the shell would move with them...which would be fine).
I have a ConfigurableJoint holding the rods to the centre of the atom's shell (0,0,0). The x/y/z-motion is set to Locked (see the sketch below). I thought this should keep the rods fairly immutably attached to the 0,0,0 centre of the shell...but I guess not.
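Roughly, the joint is set up like this (a sketch with placeholder names, not my exact code):

// Lock the rod's linear motion to the shell's centre
ConfigurableJoint joint = rod.AddComponent<ConfigurableJoint>();
joint.connectedBody = shell.GetComponent<Rigidbody>();
joint.anchor = Vector3.zero; // rod's local attachment point
joint.xMotion = ConfigurableJointMotion.Locked;
joint.yMotion = ConfigurableJointMotion.Locked;
joint.zMotion = ConfigurableJointMotion.Locked;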
POINTS OF INTEREST:
The rods only push on each other, via a script attached to one rod that adds force to adjacent rods. They do not AddForce to the atom shell or anything else.
CODE:
void RepulseLike() {
    if (control.disableRepulsionForce) { return; } // No force while dragging

    //////////////////// DETERMINE IF FORCE APPLIED //////////////////////////////
    // Scroll through each Collider that this BondStick bumps
    foreach (Collider found in Physics.OverlapSphere(transform.position, 1f)) {
        // Don't repel self
        if (found == this.collider) { continue; }

        // Check for charged particle
        if (found.gameObject.tag.IndexOf("Charge") < 0) { continue; } // Tag doesn't contain "Charge", not a charged particle

        /////////////// APPLY FORCE ///////////////
        // F = k(q1*q2/r^2)
        // where
        //   k = Coulomb's constant, represented here by repulseChargeFactor
        //   r = distance
        //   q1 and q2 are the signed magnitudes of the charges,
        //   in this case -1 for electrons and +1 for protons

        // Swap from local to member variable so other methods can use it
        other = found;

        /////////////////////////////////
        // Calculate pushPoints for SingleBonds
        forceDirection = (other.transform.position - transform.position) * magnetism; // magnetism = 1
        float distance = forceDirection.magnitude; // r in the formula above

        // F = k(q1*q2/distance^2); q1*q2 is always one in this scenario, k is arbitrary
        force = control.repulseChargeFactor * (1f / Mathf.Pow(distance, 2));
        found.rigidbody.AddForce(forceDirection.normalized * force); // * Time.fixedDeltaTime);
    } // foreach Collider
} // RepulseLike method
You may want to use spheres to represent the electrons, apply the forces to those spheres, and then re-orient (rotate) the rods to match the spheres.
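For instance, something along these lines (all object names here are assumptions, not from the question):

// Point the rod's long (Y) axis from the atom's centre toward its electron sphere
Vector3 toElectron = (electronSphere.transform.position - atomCentre.position).normalized;
rod.transform.rotation = Quaternion.FromToRotation(Vector3.up, toElectron);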
I think I found my own answer...unless someone has suggestions for a better way to handle this.
I simply reduced the mass of the electron rods and increased the mass of the sphere. I'm not sure if this is the "best practices" solution or a kludge...so input is still welcome :-)
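In code terms the change is tiny; something like this (the exact values are arbitrary and the names are placeholders):

// Heavy shell, light rods: the repulsion now reorients the rods instead of shoving the shell
shell.GetComponent<Rigidbody>().mass = 10f;
foreach (GameObject rod in rods) {
    rod.GetComponent<Rigidbody>().mass = 0.1f;
}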
My question is whether there is a way to know the coordinates of the corners of a GameObject. What I have is three Vuforia AR targets and a GameObject, a cube in this case.
What I need to achieve is that when I move the targets around, the corners of the cube follow the targets, e.g. the cube is exactly as wide as the space between the targets.
Right now it checks the distance between the targets and sets the scale of the cube according to it. This results in the cube always expanding from its set position, which makes the positioning of the targets awkward in real life.
Here's a couple of pictures showing what it does now, taken during execution:
Here is the code attached to the cube-object:
using UnityEngine;
using System.Collections;

public class show : MonoBehaviour {

    float distancex;
    float distancez;

    // Use this for initialization
    void Start () {
        renderer.enabled = false;
    }

    // Update is called once per frame
    void Update () {
        if (GameObject.Find ("target1").renderer.enabled && GameObject.Find ("target2").renderer.enabled &&
            GameObject.Find ("target3").renderer.enabled && GameObject.Find ("target4").renderer.enabled)
        {
            renderer.enabled = true;
        }

        distancex = Vector3.Distance (GameObject.Find ("target1").transform.position, GameObject.Find ("target2").transform.position);
        distancez = Vector3.Distance (GameObject.Find ("target2").transform.position, GameObject.Find ("target3").transform.position);
        Debug.Log (distancex);
        Debug.Log (distancez);

        // Stretch the cube to span the measured distances
        transform.localScale = new Vector3 (distancex, transform.localScale.y, distancez);
        Debug.Log (transform.localScale);
    }
}
How do I make the corners of the object follow the targets? I need the width of each side to equal the distance between the targets. Is there any way to achieve this without using the scale?
I know this is quite some time after you asked the question, but I came across this as I was looking to sort something similar out myself.
What I found I needed (and it may help others) is Renderer.bounds.
What it looks like in practice for me:
void Update () {
    // currentGameObject is whichever object's corners you want
    Renderer rend = currentGameObject.GetComponent<Renderer>();
    Debug.Log(rend.bounds.max);
    Debug.Log(rend.bounds.min);
}
My object in this case was a quad at position (0, 0, 200) with a scale of (200, 200, 1). The output of rend.bounds.max was (100.0, 100.0, 200.0); the min was (-100.0, -100.0, 200.0). This gives you the position of each corner (granted, my example is in 2D space, but this should work for 3D as well).
To make it a little more specific for your example: if you wanted the top-right corner, you could take the X and Y of renderer.bounds.max; for the top-left, you would take renderer.bounds.max.y and renderer.bounds.min.x. Hope that helps!
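Spelled out as code (a sketch for the 2D case; combine the Z components the same way for full 3D):

Vector3 max = rend.bounds.max;
Vector3 min = rend.bounds.min;
Vector2 topRight    = new Vector2(max.x, max.y);
Vector2 topLeft     = new Vector2(min.x, max.y);
Vector2 bottomRight = new Vector2(max.x, min.y);
Vector2 bottomLeft  = new Vector2(min.x, min.y);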
Cheers!
1. Create 8 empty game objects.
2. Make them children of the "cube" object to be tracked.
3. Move them in the Editor to the 8 corners of your tracked game object. Since they are children of the tracked object, their positions will be {+/- 0.5, +/- 0.5, +/- 0.5}.
4. Name them memorably (e.g., "top-left-back corner").
5. These 8 objects will move and scale with the tracked object, staying at its corners. Any time you want to know where a corner is, simply reference one or more of the 8 game objects by name and read each one's transform.position.
Note: If you're doing step 5 repeatedly, you may want to avoid potentially costly GameObject.Find() calls (or similar) each time. Do them once and cache the 8 game objects in a variable, as in the sketch below.
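A minimal sketch of that caching (this assumes the corner markers are the cube's only children; the class name is a placeholder):

using UnityEngine;

public class CornerTracker : MonoBehaviour {
    private Transform[] corners;

    void Start () {
        // Find the corner children once instead of every frame
        corners = new Transform[transform.childCount];
        for (int i = 0; i < transform.childCount; i++) {
            corners[i] = transform.GetChild(i);
        }
    }

    void Update () {
        foreach (Transform corner in corners) {
            Debug.Log(corner.position); // world-space corner position
        }
    }
}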
I worked my way around the problem by taking the positions of the targets and computing their mean.
This is not flawless, however, since it is difficult to determine the scale factor with which the models would accurately follow the targets' corners when bringing external models into the system.
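Roughly, the workaround looks like this (target1 through target4 are placeholders for the target transforms):

// Centre the object on the mean of the four target positions
Vector3 mean = (target1.position + target2.position +
                target3.position + target4.position) / 4f;
transform.position = mean;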
I'm trying to rotate a polygon in Windows Forms using C#; the code is below.
Please tell me what's wrong with the code - there is no output on the Windows form.
The polygon is not visible either before or after the rotation.
public void RotatePolygon(PaintEventArgs e)
{
    pp[0] = new Point(624, 414);
    pp[1] = new Point(614, 416);
    pp[2] = new Point(626, 404);
    e.Graphics.DrawPolygon(myPen2, pp);
    System.Drawing.Drawing2D.Matrix myMatrix1 = new System.Drawing.Drawing2D.Matrix();
    myMatrix.Rotate(45, MatrixOrder.Append);
    myMatrix.TransformPoints(pp);
    e.Graphics.Transform = myMatrix1;
    e.Graphics.DrawPolygon(myPen, pp);
}
Thanks
Your code does not compile if left unmodified. There are two matrices in use - one declared in your method (myMatrix1) and attached to the Graphics object, and one declared outside your method (myMatrix, without the 1) used to explicitly transform the point array.
I tried the code with the required changes and it works flawlessly - I used myMatrix1 for both transformations, and the effective rotation angle was, as expected, twice the one specified. So I guess you are applying two transformations that cancel out, so the transformed points end up where they began.
There could be these problems:
[1] Your pens have no colour/thickness (where do you define them?).
[2] Your polygon is too big, so you only see its inside but not its border. Try the Graphics.FillPolygon method to see whether [2] is the case.
You're both transforming the points and changing the transform matrix for the Graphics object - you need to do one or the other.
You also need to think about the fact that the rotation is going to happen about (0,0) rather than about some part of the object - you may well need a translate in there too.
Bear in mind that TransformPoints just manipulates some numbers in an array - which you can easily inspect with the debugger - this will be a more effective technique than displaying an invisible object and wondering where it went.
Starting with a much smaller rotation angle (10 deg, perhaps?) may also help with the problem of losing the object - it will be easier to work out what's happening if you haven't moved so far.
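For instance, a sketch of rotating about a point on the polygon instead of the origin (the pen colours, points and angle are stand-ins, not your originals):

using System.Drawing;
using System.Drawing.Drawing2D;

Point[] pts = { new Point(624, 414), new Point(614, 416), new Point(626, 404) };
e.Graphics.DrawPolygon(Pens.Blue, pts);  // before rotation

using (Matrix m = new Matrix())
{
    // Rotate 10 degrees about the first vertex instead of (0,0)
    m.RotateAt(10f, new PointF(pts[0].X, pts[0].Y));
    m.TransformPoints(pts);  // transform the array only; do not also set e.Graphics.Transform
}
e.Graphics.DrawPolygon(Pens.Red, pts);   // after rotation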
I am trying to minimize the difference between sets of square markers in 3d space with a set of unknown parameters.
I have a model set of these square markers (represented by 3d position and rotation) which should at the end of optimization match up with a set of observed square markers.
I am using Levenberg–Marquardt to optimize the set of unknown parameters, these parameters will alter the position and rotation of the model 3d markers until they match (more or less) with the observed 3d marker positions.
The observed 3d markers come from a computer vision marker detection algorithm. It gives the id of each marker seen in a frame and its transformation from the camera (using coplanar POSIT). Each 'frame' only sees a small number of markers out of the total set, and there will also be inaccuracies in the transformations.
I have thought about how to construct my minimization function: the idea is to compare the relative rotations between pairs of markers and, in each iteration of the LM optimisation, minimize the difference between the model rotations and the observed ones.
Essentially:
foreach (Marker m1 in markers)
{
    foreach (Marker m2 in markers)
    {
        Vector3 eulerRotation = getRotation(m1, m2);

        ObservedMarker observed1 = getMatchingObserved(m1);
        ObservedMarker observed2 = getMatchingObserved(m2);

        Vector3 eulerRotationObserved = getRotation(observed1, observed2);

        double diffX = Math.Abs(eulerRotation.X - eulerRotationObserved.X);
        double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
        double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
    }
}
Where diffX, diffY and diffZ are the values to be minimized.
I am using the following to calculate the angles:
Vector3 axis = Vector3.Cross(getNormal(m1), getNormal(m2));
axis.Normalize();
double angle = Math.Acos(Vector3.Dot(getNormal(m1), getNormal(m2)));
Vector3 modelRotation = calculateEulerAngle(axis, angle);
getNormal(Marker m) calculates the normal to the plane that the square marker lies on.
I am sure I am doing something wrong here, though. Throwing this all into the LM optimiser (I am using ALGLib) doesn't seem to do anything: it goes through one iteration and finishes without changing any of the unknown parameters (initially all 0).
I am thinking that something is wrong with the function I am trying to minimize. It seems the angle calculation (the third line above) sometimes returns NaN (I am currently setting this case to return diffX, diffY and diffZ as 0). Is it even valid to compare Euler angles like this?
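If the NaN comes from the dot product drifting just outside [-1, 1] through floating-point error, clamping before Acos would guard against it - a sketch:

// Clamp the dot product into Acos's valid domain before taking the angle
double dot = Vector3.Dot(getNormal(m1), getNormal(m2));
dot = Math.Max(-1.0, Math.Min(1.0, dot));
double angle = Math.Acos(dot);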
Any help would be greatly appreciated.
Further information:
Program is written in C#, I am using XNA as well.
The model markers are each represented by their four corners in 3D coordinates.
All the model markers are in the same coordinate space.
Observed markers are the four corners given as translations from the camera position, in camera coordinate space.
If m1 and m2 have the same marker id, or if either of them is not observed, I set all the diffs to 0 (no difference).
At first I thought this might be a typo, but then I realized that this could be a bug, having been a victim of similar cases myself in the past.
Shouldn't diffY and diffZ be:
double diffY = Math.Abs(eulerRotation.Y - eulerRotationObserved.Y);
double diffZ = Math.Abs(eulerRotation.Z - eulerRotationObserved.Z);
I don't have enough reputation to post this as a comment, hence posting it as an answer!
Any luck with this? Is it correct to assume that you want to minimize the "sum" of all the diffs over all marker combinations? I think that if you want to use LM you should not use Math.Abs, since it is not differentiable at zero and LM assumes smooth residuals.
One alternative would be to formulate your objective function manually and use another optimizer. I have recently ported two non-linear optimizers to C# which do not even require you to compute derivatives:
COBYLA2, which supports non-linear constraints but requires more iterations.
BOBYQA, which is limited to variable-bounds constraints but provides a considerably more efficient iteration scheme.
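If you do stay with LM, a sketch of a residual formulation without Math.Abs (the indexing is schematic; LM squares each residual internally, so signed values are fine):

// Three residuals per marker pair; no absolute values
residuals[i * 3 + 0] = eulerRotation.X - eulerRotationObserved.X;
residuals[i * 3 + 1] = eulerRotation.Y - eulerRotationObserved.Y;
residuals[i * 3 + 2] = eulerRotation.Z - eulerRotationObserved.Z;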
I'm working with Unity 3 and my total level size will be just a quadrilateral area with a wall on each side. Within this will be two additional walls that create an enclosure that restricts the player to a portion of the level that gradually expands.
I call these two walls ZBoundary and XBoundary, and so far, for a prototype, I have mapped their movement to some keyboard keys. What I want is that when one wall moves, the other grows in length so that they are always joined at a perpendicular angle. So I want to be able to linearly interpolate between, for example, the ZBoundary's Z-coordinate and the XBoundary's Z-coordinate so that they always meet and form a join. This creates a problem as well, because I don't know how to change the size of a GameObject programmatically, and I keep getting errors.
I know Vector3.Lerp may help, but I'm drowning in transforms and different scales, so I would appreciate the help.
I'm halfway there by now, having got the XBoundary to adjust its position so that it always locks to the ZBoundary at 90 degrees:
ZBoundaryRef.transform.Translate((-1 * distancePerZMovement), 0, 0, Space.World);
// using Space.World so transforms use the global coordinate system!

// What I've done here is move the XBoundary so that it aligns with the ZBoundary.
// The origin of an object is at its centre point, so we need to take that into account
// to ensure they join at the corners, not in a T-shape!
float difference = (ZBoundaryRef.transform.position.x - XBoundaryRef.transform.position.x);
float adjustment = difference + (ZBoundaryRef.transform.localScale.x / 2);
XBoundaryRef.transform.Translate(adjustment, 0, 0, Space.World);
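For the other half - resizing a wall so the two always span the gap - something like this might work (enclosureCorner is an assumed fixed corner position, not an object from my scene):

// Stretch the XBoundary along X so it always reaches from the fixed corner to the ZBoundary
float span = Mathf.Abs(ZBoundaryRef.transform.position.x - enclosureCorner.x);
Vector3 scale = XBoundaryRef.transform.localScale;
scale.x = span;
XBoundaryRef.transform.localScale = scale;

// Re-centre the wall over its span so its ends meet the corner and the ZBoundary
Vector3 pos = XBoundaryRef.transform.position;
pos.x = (ZBoundaryRef.transform.position.x + enclosureCorner.x) / 2f;
XBoundaryRef.transform.position = pos;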