I'm trying to set up an area that slows down any object that enters it.
This is what I've got so far:
// Create a static body at (x, y) to represent the slow-down area.
PhysicsBody = BodyFactory.CreateBody(World, new Vector2(x, y));
PhysicsBody.BodyType = BodyType.Static;

// Decompose the outline into convex pieces and attach them as fixtures.
List<Vertices> vertList = EarclipDecomposer.ConvexPartition(verts);
Fixtures = FixtureFactory.AttachCompoundPolygon(vertList, density, PhysicsBody);
What setting do I need for the area to slow down other objects? Is it friction?
This post has several solutions for you.
http://farseerphysics.codeplex.com/discussions/240883
You could use friction, drag coefficients, LinearDamping, the VelocityLimitController, or even run two engines and switch between them.
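For instance, here is a minimal sketch of the LinearDamping option, assuming Farseer 3.x (the exact delegate signatures vary between versions) and reusing PhysicsBody and Fixtures from the question; the damping values are made-up placeholders:

// Sketch only: mark the area's fixtures as sensors so they detect overlap
// without physically colliding, then damp any body that enters.
const float slowDamping = 5f;   // placeholder, tune to taste
const float normalDamping = 0f; // placeholder, whatever your bodies normally use

foreach (Fixture f in Fixtures)
{
    f.IsSensor = true;

    f.OnCollision += (fixtureA, fixtureB, contact) =>
    {
        fixtureB.Body.LinearDamping = slowDamping; // entering: slow down
        return true;
    };

    f.OnSeparation += (fixtureA, fixtureB) =>
    {
        fixtureB.Body.LinearDamping = normalDamping; // leaving: restore
    };
}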
I'm building an app that measures outdoor environments at a larger scale (imagine the size of a typical personal driveway) using AR.
To increase the reliability of the system, I want to implement a loop-closing algorithm that lets the user set a fixpoint (e.g. an image marker), walk around, and return to the fixpoint. The system should then compute the offset (drift) between the original marker position and the new position upon returning, and transform the measuring points in such a way that the drift is accounted for.
As far as I know, this kind of system is called "loop closure".
Where do I start with implementing such a thing in ARFoundation? Are there built-in methods? What are keywords or names of specific algorithms that I could be researching?
Any help is greatly appreciated. Thank you!
I experimented with two things:
Adding a simple standalone image trigger to the scene and checking whether that already has an impact on other ARAnchors in the scene. This doesn't seem to be the case: the transforms of the other anchors are not affected when the image trigger updates its position.
Attaching all measurement points as children of the image trigger. This obviously applies the updated image-trigger transformation uniformly to all other points. However, drift does not accumulate uniformly over the runtime of the system.
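Since drift accumulates over time rather than uniformly, a common first approximation (not an ARFoundation built-in; this is a hypothetical sketch) is to record a timestamp with each measurement point and distribute the observed loop-closure offset linearly along the recorded path:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical type: each measured point remembers when it was captured.
public struct MeasuredPoint
{
    public Vector3 Position;
    public float Time; // seconds since the fixpoint was set
}

public static class DriftCorrection
{
    // Distribute the loop-closure offset linearly over the walk: points
    // captured late (near the moment of loop closure) receive most of the
    // correction, points captured early receive almost none.
    public static List<Vector3> Correct(List<MeasuredPoint> points,
        Vector3 drift,       // markerPosOnReturn - markerPosAtStart
        float loopDuration)  // seconds between setting and revisiting the marker
    {
        var corrected = new List<Vector3>(points.Count);
        foreach (var p in points)
        {
            float t = Mathf.Clamp01(p.Time / loopDuration);
            corrected.Add(p.Position - drift * t);
        }
        return corrected;
    }
}

This is only a linear approximation of the drift. Proper loop closure in the SLAM literature (keywords to research: pose-graph optimization, g2o, bundle adjustment, loop-closure detection) redistributes the error over the whole trajectory instead; as far as I know, ARFoundation exposes no such API, so this bookkeeping has to be done on top of ARAnchors.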
The issue I am dealing with is that I cannot seem to find an alternative to PickPoint for SectionViews.
In the Revit 2019 API, I've been trying to create a small script that draws a DetailLine between two points. I want these points to be selected by the user, which PickPoint is perfect for. However, since I need this to work in section views too, I'm at a roadblock.
The relevant code is below:
XYZ p1 = uiDoc.Selection.PickPoint();
XYZ p2 = uiDoc.Selection.PickPoint();
DetailLine l = uiDoc.Document.Create.NewDetailCurve(
    uiDoc.Document.ActiveView,
    Line.CreateBound(p1, p2)) as DetailLine;
This throws an Autodesk.Revit.Exceptions.InvalidOperationException in a section view, since I don't have a work plane.
The part that confuses me is that we can very easily draw a DetailLine in Revit itself, but I can't seem to do this in my own add-in.
I figured it out, but I'll leave my solution here for whoever might need help with it.
Basically, Revit doesn't allow you to pick points without an active work plane at all. This is because Revit's coordinate system is three-dimensional, while a mouse click only specifies two dimensions. To keep everything precise, Revit forces you to have a plane on which the point is picked.
The workaround is a blatant hack, but it works: create a sketch plane, set it as the view's work plane, pick your point on it, and then delete the sketch plane afterwards.
Since you're creating and deleting stuff, this requires a Transaction.
XYZ pickPoint;
using (Transaction t = new Transaction(Document))
{
    t.Start("Test Transaction");

    // Create a sketch plane facing the view, passing through its origin.
    SketchPlane sp = SketchPlane.Create(Document, Plane.CreateByNormalAndOrigin(
        Document.ActiveView.ViewDirection,
        Document.ActiveView.Origin));
    Document.ActiveView.SketchPlane = sp;

    // Finally, we are able to PickPoint().
    // Note: Selection lives on the UIDocument, not the Document.
    pickPoint = uiDoc.Selection.PickPoint();

    // Don't forget to clean up.
    Document.Delete(sp.Id);
    t.Commit();
}
// Draw whatever you want with this point now.
Hope this helps someone out.
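For example (a sketch, not tested): wrapping the workaround above in a hypothetical helper called PickPointOnTemporaryPlane, the original two-point snippet becomes:

// PickPointOnTemporaryPlane is a hypothetical helper that wraps the
// transaction-based workaround above and returns the picked point.
XYZ p1 = PickPointOnTemporaryPlane(uiDoc);
XYZ p2 = PickPointOnTemporaryPlane(uiDoc);

using (Transaction t = new Transaction(uiDoc.Document))
{
    t.Start("Draw Detail Line");
    DetailLine l = uiDoc.Document.Create.NewDetailCurve(
        uiDoc.Document.ActiveView,
        Line.CreateBound(p1, p2)) as DetailLine;
    t.Commit();
}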
I have implemented an annotation feature, which is similar to drawing in VR. The drawing is a Unity trail, and its shape depends on its trajectory. This is where the real problem comes in. We are synchronising the drawing in real time using PhotonTransformView, which syncs the world position of the trail, but the synchronised drawing looks very different from the original one.
Here is sync configuration code:
public void SetupSync(int viewId, int controllingPlayer)
{
    if (PhotonNetwork.inRoom)
    {
        photonView = gameObject.AddComponent<PhotonView>();
        photonView.ownershipTransfer = OwnershipOption.Takeover;
        photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
        photonView.viewID = viewId;

        photonTransformView = gameObject.AddComponent<PhotonTransformView>();
        photonTransformView.m_PositionModel.SynchronizeEnabled = true;

        photonView.ObservedComponents = new List<Component>();
        photonView.ObservedComponents.Add(photonTransformView);
        photonView.TransferOwnership(controllingPlayer);
    }
}
How can we make the drawing on two systems more similar? I have seen cases where people have been able to synchronise these perfectly. Check this. What are they doing?
Yes, PhotonTransformView is not suitable for this.
You could send a reliable RPC every x milliseconds with the list of points accumulated since the last RPC. That covers the live-drawing phase; when the drawing is finished, you cache the whole drawing definition in a database under a drawing ID. Drawings can then be retrieved later by players who join the room after the drawing was done, loaded from a list of drawings, or fetched by any arbitrary logic.
All in all, you need two different systems: one for while the drawing is live, and one for when the drawing is done.
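As a rough sketch of the live phase, assuming PUN Classic (to match the PhotonNetwork.inRoom call in the question); the flush interval and method names are made up:

using System.Collections.Generic;
using UnityEngine;

public class TrailSync : Photon.MonoBehaviour
{
    const float FlushInterval = 0.1f; // send pending points every 100 ms

    readonly List<Vector3> pending = new List<Vector3>();
    float lastFlush;

    // Call this from the drawing code each time a new trail point is added.
    public void AddPoint(Vector3 worldPos)
    {
        pending.Add(worldPos);
        if (Time.time - lastFlush >= FlushInterval)
        {
            // PUN RPCs are reliable and ordered by default.
            photonView.RPC("ReceivePoints", PhotonTargets.Others, pending.ToArray());
            pending.Clear();
            lastFlush = Time.time;
        }
    }

    [PunRPC]
    void ReceivePoints(Vector3[] points)
    {
        // Append the received points to the local copy of the trail here.
    }
}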
Martjin Pieters's answer is the correct way to do it.
But for those who have the same problem in a different situation, it comes from this line:
photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
It basically compresses the data and doesn't send a new value if it's too close to the last one sent. Just switch it to Unreliable and all the data will be sent directly.
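In the setup code from the question, that is a one-line change:

// Send every positional update instead of delta-compressing against the last one.
photonView.synchronization = ViewSynchronization.Unreliable;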
I'm currently trying to implement a complex search field in a Unity project I'm working on, where a user has the ability to spawn points into a scene. My goal is to create a custom shape for that user based on the points they created, and then detect whether or not other objects are inside that shape, similar to detecting a point inside a convex hull (I'm still shaky on the theory behind that, but an example can be found here). If possible, I'd also like the shape to update itself if the points are later moved, giving it an almost elastic, stretchy feel.
So far, every tutorial or resource I've found online does the exact same basic example, where a script assigns new verts, UVs and triangles to a custom mesh to make a plane out of two triangles. This is frustratingly simple, and decidedly unhelpful when I simply don't know what the final shape will look like, or which triangles to actually draw, even when the user has as few as five points in the scene.
As of right now, the closest visual representation I could come up with has a List keeping track of the user's points, and a script that just draws a bunch of pseudo-triangles using LineRenderers to connect every point, even ones that aren't on exterior faces, by iterating through the List multiple times. While this looks close to what I want, it isn't actually useful in any way, as I don't know how to 'fill' those faces, and I'm still relatively lost when it comes to whether or not an object is inside that hull, like the red sphere shown in the example below.
I can also destroy and redraw those lines repeatedly during the Update() method, which allows me to grab a point and move it around so the shape changes dynamically, but this causes an undesirable flashing effect that I'd sooner avoid for now.
As this is such a broad question, I've also included the method I'm using to draw these lines below, which parents a bunch of lines shaped like triangles to an empty game object for easy destruction and recreation:
void drawHull()
{
    // Parent object so all the triangle lines can be destroyed at once.
    if (!GameObject.Find("hullHold"))
    {
        hullHold = new GameObject();
        hullHold.name = "hullHold";
    }

    for (int p = 0; p < points.Count; p++)
    {
        for (int i = p; i < points.Count - 2; i++)
        {
            lineEdge = Instantiate(lineReference);
            lineEdge.name = "Triangle" + i;

            // Cache the component instead of calling GetComponent repeatedly.
            LineRenderer lr = lineEdge.GetComponent<LineRenderer>();
            lr.startColor = Color.black;
            lr.endColor = Color.black;
            lr.positionCount = 3;
            lr.SetPosition(0, points[p].transform.position);
            lr.SetPosition(1, points[i + 1].transform.position);
            lr.SetPosition(2, points[i + 2].transform.position);
            lr.loop = true;

            lineEdge.SetActive(true);
            lineEdge.transform.SetParent(hullHold.transform);
        }
    }
}
If anyone has encountered a similar problem somewhere else that I simply couldn't find, please let me know! Anything from more knowledge on creating a custom mesh to a more in-depth, beginner-friendly explanation of determining whether a point is inside a convex hull would be quite helpful. If it's at all relevant, I am working in VR and running version 2018.2.6f1 to ensure that the Oculus Rift package and Unity play nice, but I haven't had any issues working in an environment a few months behind.
Thanks!
You do mention assigning verts and tris to a mesh. This is the 'right' way to do it in Unity, as it ties in with the highly optimized MeshRenderer, and I believe Colliders are able to use meshes as their shapes too, so you should be able to just plug into PhysX and query it for colliders overlapping with your mesh. Doing it by hand, i.e. iterating through the faces and establishing the actual bounds of your object, is actually pretty hard to do effectively.
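A hedged sketch of that collider route: assuming the user's points have already been triangulated into a convex mesh (Unity has no built-in 3D convex-hull builder, so that step is left out here), a convex MeshCollider lets PhysX answer the containment question via Collider.ClosestPoint, which returns the query point itself when it is already inside:

using UnityEngine;

public static class HullQuery
{
    // Build a convex collider from a mesh of the user's points. How the
    // mesh's triangles are generated (e.g. a convex-hull algorithm such
    // as quickhull) is assumed to happen elsewhere.
    public static MeshCollider MakeConvexCollider(GameObject host, Mesh hullMesh)
    {
        var col = host.AddComponent<MeshCollider>();
        col.sharedMesh = hullMesh;
        col.convex = true; // containment queries need a convex collider
        return col;
    }

    // For a convex collider, ClosestPoint returns the point itself when
    // the point is inside; compare with a small tolerance for floats.
    public static bool Contains(MeshCollider convexCollider, Vector3 worldPoint)
    {
        return (convexCollider.ClosestPoint(worldPoint) - worldPoint).sqrMagnitude < 1e-6f;
    }
}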
I'm trying to develop a Pentago game in C#.
Right now I have a two-player mode, which is working just fine.
The problem is that I want a one-player mode (against the computer), but unfortunately, all the implementations of minimax / negamax I've found assume a single action per "move" (placing a marble, moving a game piece).
But in Pentago, every player needs to do two things each turn: place a marble and rotate one of the inner boards.
I haven't figured out how to implement both the rotation and the marble placement, and I would love for someone to guide me through this.
If you're not familiar with the game, here's a link to it.
If anyone wants, I can upload my code somewhere if that's relevant.
Thank you very much in advance.
If a single legal move consists of two sub-moves, then your "move" for game-algorithm purposes is simply a tuple where the first item is the marble placement and the second item is the board rotation, e.g.:
var marbleMove = new MarbleMove(fromRow, fromCol, toRow, toCol);
var boardRotation = new BoardRotation(subBoard, rotationDirection);
var move = new Tuple<MarbleMove, BoardRotation>(marbleMove, boardRotation);
Typically a game-playing algorithm will require you to enumerate all possible moves for a given position. In this case, you must enumerate all possible pairs of sub-moves. With this list in hand you can move on to using standard game-playing approaches.
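For instance, a sketch of enumerating the pairs (Board, GetEmptyCells, SubBoards and Rotation are hypothetical placeholders for whatever your board class exposes; MarbleMove here takes just the placement cell, since in Pentago marbles are placed rather than moved):

using System;
using System.Collections.Generic;

// Sketch only: yields every (placement, rotation) pair as one move.
IEnumerable<Tuple<MarbleMove, BoardRotation>> EnumerateMoves(Board board)
{
    foreach (var cell in board.GetEmptyCells())
    {
        foreach (var subBoard in board.SubBoards)
        {
            foreach (var direction in new[] { Rotation.Clockwise, Rotation.CounterClockwise })
            {
                yield return Tuple.Create(
                    new MarbleMove(cell.Row, cell.Col),
                    new BoardRotation(subBoard, direction));
            }
        }
    }
}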
Rick suggested tuples above, but you might want to actually just have each player make two independent moves, so it remains their turn twice in a row. This can make move ordering easier, but may complicate your search algorithm, depending on which one you are using.
In an algorithm like UCT (which is likely to outperform minimax for simple implementations), breaking the turn into two moves can be more efficient, because the algorithm can first figure out which placements are good and then figure out which rotation is best. (Googling UCT doesn't turn up much. The original research paper isn't very insightful, but this page might be better: http://senseis.xmp.net/?UCT)