I have implemented an annotation feature which is similar to drawing in VR. The drawing is a Unity trail, and its shape depends on its trajectory. This is where the real problem comes in. We are synchronising the drawing in real time using PhotonTransformView, which syncs the world position of the trail. But here is the output: the synchronised drawing looks very different from the original one.
Here is the sync configuration code:
public void SetupSync(int viewId, int controllingPlayer)
{
    if (PhotonNetwork.inRoom)
    {
        photonView = gameObject.AddComponent<PhotonView>();
        photonView.ownershipTransfer = OwnershipOption.Takeover;
        photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
        photonView.viewID = viewId;

        photonTransformView = gameObject.AddComponent<PhotonTransformView>();
        photonTransformView.m_PositionModel.SynchronizeEnabled = true;

        photonView.ObservedComponents = new List<Component>();
        photonView.ObservedComponents.Add(photonTransformView);
        photonView.TransferOwnership(controllingPlayer);
    }
}
How can we make the drawing on the two systems more similar? I have seen cases where people have been able to synchronise these perfectly. Check this. What are they doing?
Yes, PhotonTransformView is not suitable for this.
You could send a reliable RPC every x milliseconds with the list of points added since the last RPC. That covers the drawing while it is being made live; when the drawing is finished, you cache the whole drawing definition in a database under a drawing ID. Drawings can then be retrieved later by players who join the room after the drawing was done, loaded from a list of drawings, or fetched by any other logic.
All in all you need two different systems: one for when the drawing is live, and one for when the drawing is done.
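A minimal sketch of the live path, assuming PUN 1 to match the setup code above; the class name, the 50 ms batch interval and the method names (AddPoint, ReceivePoints) are illustrative, not something Photon provides:

using System.Collections.Generic;
using UnityEngine;

public class DrawingSync : Photon.MonoBehaviour
{
    const float BatchInterval = 0.05f;                  // send a batch every 50 ms
    readonly List<Vector3> pendingPoints = new List<Vector3>();
    float timer;

    // Called by the local drawing code for every new trail point.
    public void AddPoint(Vector3 worldPoint)
    {
        pendingPoints.Add(worldPoint);
    }

    void Update()
    {
        timer += Time.deltaTime;
        if (photonView.isMine && timer >= BatchInterval && pendingPoints.Count > 0)
        {
            // RPCs are delivered reliably and in order, so no batch is lost or reordered.
            photonView.RPC("ReceivePoints", PhotonTargets.Others, (object)pendingPoints.ToArray());
            pendingPoints.Clear();
            timer = 0f;
        }
    }

    [PunRPC]
    void ReceivePoints(Vector3[] points)
    {
        // Append the received points to the remote copy of the trail here.
    }
}

Once the drawing is finished, the accumulated point list is what you would persist under the drawing ID, so late joiners fetch it from storage instead of replaying RPCs.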
Martjin Pieters's answer is the correct way to do it.
But for those who have the same problem in a different situation, it comes from this line:
photonView.synchronization = ViewSynchronization.ReliableDeltaCompressed;
It basically compresses the data and doesn't send a new value if it's too close to the last one sent. Just switch it to Unreliable and all the data will be sent directly.
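In the setup code above, that means:

photonView.synchronization = ViewSynchronization.Unreliable;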
The issue I am dealing with is that I cannot seem to find an alternative to PickPoint for SectionViews.
In the Revit 2019 API, I've been trying to create a small script which draws a DetailLine between two points. I want these points to be selected by the user, which PickPoint is perfect for. However, I need this to work in Section Views too, and that's where I've hit a roadblock.
The relevant code is given below:
XYZ p1 = uiDoc.Selection.PickPoint();
XYZ p2 = uiDoc.Selection.PickPoint();
DetailLine l = uiDoc.Document.Create.NewDetailCurve(
    uiDoc.Document.ActiveView,
    Line.CreateBound(p1, p2)) as DetailLine;
This throws an Autodesk.Revit.Exceptions.InvalidOperationException in a Section View, since I don't have a Work Plane.
The part that confuses me is that we can very easily draw a DetailLine in Revit itself, but I can't seem to do this in my own AddIn.
I figured it out, but I'll leave my solution here for whoever might need help with it.
Basically, Revit doesn't allow you to pick points without an active Work Plane at all. The coordinate system in Revit is three-dimensional, but when you're picking a point with the mouse you can only specify two dimensions, so to keep the result unambiguous Revit forces you to have a plane on which the point will be picked.
The work-around is a blatant hack: create a sketch plane, set it as the view's Work Plane, pick the point on that plane, and then delete the sketch plane afterwards. It's dirty, but it works.
Since you're creating and deleting stuff, this requires a Transaction.
XYZ pickPoint;
Document doc = uiDoc.Document;

using (Transaction t = new Transaction(doc))
{
    t.Start("Test Transaction");

    // Create a sketch plane aligned with the section view,
    // so the picked point lands in the view's plane.
    SketchPlane sp = SketchPlane.Create(doc, Plane.CreateByNormalAndOrigin(
        doc.ActiveView.ViewDirection,
        doc.ActiveView.Origin));
    doc.ActiveView.SketchPlane = sp;

    // Finally, we are able to PickPoint()
    pickPoint = uiDoc.Selection.PickPoint();

    // Don't forget to clean up
    doc.Delete(sp.Id);
    t.Commit();
}
// Draw whatever you want with this point now.
Hope this helps someone out.
I'm coding a server for a multi-player RPG, and I'm currently struggling with implementing a sight range. Since some maps are rather large, I have to limit what the client sees. My approach:
When I get new coordinates from a client, I save them as the destination, together with a move start time. Then, every x ms, I go through all creatures in the world and update their current position, after saving the position they had at the previous update; the new position is calculated from the move start time and the speed, written into the current-position variables, and a new start time is saved. Once this update is done, I go through all creatures which moved, i.e. those whose position differs from the last update. In a sub-loop I go through all creatures/clients again, to check whether I have to notify them about a creature appearing or disappearing. At the moment I'm running this update every 100 ms.
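To make that concrete, here is a rough sketch of such an update pass; Creature, SightRange and NotifyAppearDisappear are illustrative names, not the actual server code:

using System;
using System.Collections.Generic;
using System.Numerics;

class Creature
{
    // StartPosition, Destination and MoveStartTime are set when the client sends new coordinates.
    public Vector2 StartPosition, Destination, Position, LastPosition;
    public DateTime MoveStartTime;
    public float Speed;   // units per second
}

class World
{
    const float SightRange = 50f;
    readonly List<Creature> creatures = new List<Creature>();

    public void UpdateTick(DateTime now)
    {
        // 1. Advance every creature along its current path.
        foreach (var c in creatures)
        {
            c.LastPosition = c.Position;
            float traveled = (float)(now - c.MoveStartTime).TotalSeconds * c.Speed;
            Vector2 toGoal = c.Destination - c.StartPosition;
            float total = toGoal.Length();
            c.Position = (total == 0f || traveled >= total)
                ? c.Destination
                : c.StartPosition + toGoal * (traveled / total);
        }

        // 2. Notify observers about creatures entering or leaving their sight range.
        foreach (var mover in creatures)
        {
            if (mover.Position == mover.LastPosition) continue;   // didn't move this tick
            foreach (var observer in creatures)
            {
                bool wasVisible = Vector2.Distance(observer.Position, mover.LastPosition) <= SightRange;
                bool isVisible = Vector2.Distance(observer.Position, mover.Position) <= SightRange;
                if (wasVisible != isVisible)
                    NotifyAppearDisappear(observer, mover, isVisible);
            }
        }
    }

    void NotifyAppearDisappear(Creature observer, Creature mover, bool appeared)
    {
        // Send the (dis)appear packet to the observer's client here.
    }
}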
This approach is working, but I have a feeling it's not the best way to do this. And I'm not sure what will happen once I have a few thousand creatures (players, monsters, etc.) in the world which have to be updated and checked.
Since I wasn't able to find resources about this particular problem, I'm asking here.
Is this approach okay? Will I run into problems soon? What's the standard to do this? What's the best way?
Eric Lippert had a really good series of posts on shadowcasting that might be helpful in approaching/solving this.
You may want to consider using quadtrees to split the game world into sections based on the areas that player characters can see. Then you don't need to loop over every creature in the game all the time; you only need to loop over the ones within the section that the player character in question is located in, and any adjacent ones in case something crossed the boundary.
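A minimal quadtree sketch of that idea; the Creature type, the capacity/depth limits and the use of System.Drawing.RectangleF are assumptions for illustration only:

using System.Collections.Generic;
using System.Drawing;
using System.Numerics;

class Creature { public Vector2 Position; }   // same illustrative type as in the sketch above

public class QuadTree
{
    const int Capacity = 16;   // max creatures per node before it splits
    const int MaxDepth = 8;    // stop splitting to avoid degenerate cases
    readonly RectangleF bounds;
    readonly int depth;
    readonly List<Creature> creatures = new List<Creature>();
    QuadTree[] children;       // null while this node is a leaf

    public QuadTree(RectangleF bounds, int depth = 0)
    {
        this.bounds = bounds;
        this.depth = depth;
    }

    public void Insert(Creature c)
    {
        if (!bounds.Contains(c.Position.X, c.Position.Y)) return;
        if (children == null && (creatures.Count < Capacity || depth >= MaxDepth))
        {
            creatures.Add(c);
            return;
        }
        if (children == null) Split();
        foreach (var child in children) child.Insert(c);
    }

    // Collect every creature inside 'area', e.g. the sight rectangle around one player.
    public void Query(RectangleF area, List<Creature> result)
    {
        if (!bounds.IntersectsWith(area)) return;
        foreach (var c in creatures)
            if (area.Contains(c.Position.X, c.Position.Y)) result.Add(c);
        if (children != null)
            foreach (var child in children) child.Query(area, result);
    }

    void Split()
    {
        float w = bounds.Width / 2f, h = bounds.Height / 2f;
        children = new[]
        {
            new QuadTree(new RectangleF(bounds.Left,     bounds.Top,     w, h), depth + 1),
            new QuadTree(new RectangleF(bounds.Left + w, bounds.Top,     w, h), depth + 1),
            new QuadTree(new RectangleF(bounds.Left,     bounds.Top + h, w, h), depth + 1),
            new QuadTree(new RectangleF(bounds.Left + w, bounds.Top + h, w, h), depth + 1),
        };
        // Push the creatures stored in this node down into the children.
        foreach (var c in creatures)
            foreach (var child in children)
                child.Insert(c);
        creatures.Clear();
    }
}

Rebuilding (or updating) the tree each tick and then calling Query with each player's sight rectangle replaces the all-pairs sub-loop.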
I haven't done this sort of coding personally myself, but I did work with someone who did this in a space combat game for which I was developing a GUI!
I'm trying to set up an area so that any object that enters it is slowed down.
This is what I've got so far:
PhysicsBody = BodyFactory.CreateBody(World, new Vector2(x,y));
PhysicsBody.BodyType = BodyType.Static;
List<Vertices> vertList = EarclipDecomposer.ConvexPartition(verts);
Fixtures = FixtureFactory.AttachCompoundPolygon(vertList, density, PhysicsBody);
What setting do I need for the area to cause a slow down to other objects - is it friction?
This post has several solutions for you.
http://farseerphysics.codeplex.com/discussions/240883
You could use friction, drag coefficients, LinearDamping, the VelocityLimitController, or just have two engines and switch between the two.
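A rough sketch of the LinearDamping route, assuming Farseer 3.x: mark the area's fixtures as sensors and raise the damping of any body that enters. The damping values are arbitrary:

foreach (Fixture f in Fixtures)
{
    f.IsSensor = true;   // the area should not physically block bodies

    f.OnCollision += (fixtureA, fixtureB, contact) =>
    {
        fixtureB.Body.LinearDamping = 5f;   // slow the intruding body down
        return true;
    };

    f.OnSeparation += (fixtureA, fixtureB) =>
    {
        fixtureB.Body.LinearDamping = 0f;   // restore normal movement on exit
    };
}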
Until recently, our game checked collisions by getting the colour data from a section of the background texture of the scene. This worked very well, but as the design changed, we needed to check against multiple textures and it was decided to render these all to a single RenderTarget2D and check collisions on that.
public bool TouchingBlackPixel(GameObject p)
{
    /*
        Calculate rectangle under the player...
        SourceX, SourceY: Position of top left corner of rectangle
        SizeX, SizeY: Approximated (cast to int from float) size of box
    */
    Rectangle sourceRectangle = new Rectangle(sourceX, sourceY,
        (int)sizeX, (int)sizeY);
    Color[] retrievedColor = new Color[(int)(sizeX * sizeY)];
    p.tileCurrentlyOn.collisionLayer.GetData(0, sourceRectangle, retrievedColor,
        0, retrievedColor.Count());

    /*
        Check collisions
    */
}
The problem that we've been having is that, since moving to the render target, we are experiencing massive reductions in FPS.
From what we've read, it seems as if the issue is that in order to get data from the RenderTarget2D, you need to transfer data from the GPU to the CPU and that this is slow. This is compounded by us needing to run the same function twice (once for each player) and not being able to keep the same data (they may not be on the same tile).
We've tried moving the GetData calls to the tile's Draw function and storing the data in a member array, but this does not seem to have solved the problem (As we are still calling GetData on a tile quite regularly - down from twice every update to once every draw).
Any help you could give us would be great: the power this collision system affords us is fantastic, but the overhead the render target has introduced will make it impossible to keep.
The simple answer is: Don't do that.
It sounds like offloading the compositing of your collision data to the GPU was a performance optimisation that didn't work - so the correct course of action would be to revert the change.
You should simply do your collision checks all on the CPU. And I would further suggest that it is probably faster to run your collision algorithm multiple times and determine a collision response by combining the results, rather than compositing the whole scene onto a single layer and running collision detection once.
This is particularly the case if you are using the render target to support transformations before doing collision.
(For simple 2D pixel collision detection, see this sample. If you need support for transformations, see this sample as well.)
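For reference, the per-pixel test in those samples looks roughly like this; instead of compositing the layers, you would call it once per layer (against colour data cached on the CPU) and OR the results:

// using System; using Microsoft.Xna.Framework;
static bool IntersectPixels(Rectangle rectA, Color[] dataA, Rectangle rectB, Color[] dataB)
{
    // Only scan the overlapping region of the two rectangles.
    int top = Math.Max(rectA.Top, rectB.Top);
    int bottom = Math.Min(rectA.Bottom, rectB.Bottom);
    int left = Math.Max(rectA.Left, rectB.Left);
    int right = Math.Min(rectA.Right, rectB.Right);

    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            Color a = dataA[(x - rectA.Left) + (y - rectA.Top) * rectA.Width];
            Color b = dataB[(x - rectB.Left) + (y - rectB.Top) * rectB.Width];
            if (a.A != 0 && b.A != 0)   // both pixels are opaque: collision
                return true;
        }
    }
    return false;
}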
I suppose your tile's collision layer does not change, or at least does not change very frequently. So you can store the colors for each tile in an array or another structure. This would decrease the amount of data transferred from the GPU to the CPU, but requires that the additional data stored in RAM is not too big.
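A rough sketch of that caching idea; Tile, collisionCache and RefreshCollisionCache are illustrative names. The GPU-to-CPU transfer happens once when the layer is (re)rendered, and the per-frame checks only read the cached array:

// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
public class Tile
{
    public RenderTarget2D collisionLayer;
    Color[] collisionCache;   // CPU-side copy of the collision layer

    // Call this after (re)rendering the collision layer, not every frame.
    public void RefreshCollisionCache()
    {
        if (collisionCache == null)
            collisionCache = new Color[collisionLayer.Width * collisionLayer.Height];
        collisionLayer.GetData(collisionCache);   // the single GPU-to-CPU transfer
    }

    // Per-frame collision checks read from the cache instead of calling GetData.
    public Color GetPixel(int x, int y)
    {
        return collisionCache[y * collisionLayer.Width + x];
    }
}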
I am trying to extract a human from a video source so that I can use his image later. I need to extract only the human body and ignore the environment. The good thing is that the background is static. I have tried to use AForge and applied the CustomFrameDifferenceDetector filter, which compares the current frame to the static background image and extracts the pixels that differ (difference > threshold). It works well, but there is a problem when the skin or part of the clothing has a similar color to the background. In those cases the filter ignores these parts and the result has various holes in the body. Simply decreasing the threshold doesn't solve the problem, since body shadows and other noise increase (even with noise suppression enabled).
Do you know of any known solution to this problem? Or is it still unsolved problem?
It's a hard-to-solve issue (and one of the reasons why Microsoft's Kinect doesn't rely on visible light alone, and why blue/green screening is still so popular). I'd try to remove the holes (you should be able to predict where the body has to be). If you've got the processing power, use different thresholds and merge the results. You could also try filtering across successive frames (e.g. add the current frame to the last frame and normalize the result); that way you can track shapes you lose for a single frame much more consistently.
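One cheap way to close small holes in the extracted mask, assuming you can get the difference mask as a Bitmap, is a morphological closing (dilation followed by erosion) with AForge:

// using System.Drawing; using AForge.Imaging.Filters;
Bitmap CloseHoles(Bitmap motionMask)
{
    var closing = new Closing();        // default 3x3 structuring element
    return closing.Apply(motionMask);   // fills gaps smaller than the element
}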
A different approach: use the detected shape/region only for estimating the position of the body. I.e. ignore its specific shape and place a premade shape above/around the estimated body position. This most likely won't work if you'd like to do some kind of bluescreen-like behaviour, but it might also help with closing holes.
Alturos.Yolo does exactly what you are looking for.
Yolo learns from annotated images how to detect the objects you are looking for. First you need to install the project, along with a set of pre-trained weights, using the NuGet Package Manager. In your case the YOLOv2-tiny model should suffice:
Install-Package Alturos.Yolo
Install-Package Alturos.YoloV2TinyVocData
Once installed, you can use it like this to detect a human in your image:
using (var yoloWrapper = new YoloWrapper("yolov2-tiny-voc.cfg", "yolov2-tiny-voc.weights", "voc.names"))
{
    var items = yoloWrapper.Detect(@"your_image.jpg");
    //if (items[0].Type == "Person") { ... }
}
The items array will contain information about all the objects found. You can check there if it's a human you are looking at, using the Type property.
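For example (assuming the usual YoloItem properties; the class string comes from voc.names, so check the exact casing there):

// using System.Linq;
var people = items.Where(item => item.Type == "Person" && item.Confidence > 0.5);
foreach (var person in people)
    Console.WriteLine($"{person.Type} at {person.X},{person.Y}, {person.Width}x{person.Height}");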