How to implement a loop closure algorithm for ARFoundation - c#

I'm building an app that is used for measuring outdoor environments at a larger scale (imagine the size of a regular personal driveway) using AR.
In order to increase the reliability of the system, I want to implement a loop-closing algorithm that allows the user to set a fixed reference point (e.g. an image marker) --> walk around --> return to that point. The system should then figure out the offset (drift) between the original marker position and the new position upon returning, and apply a transformation to the measurement points in such a way that it accounts for the drift.
As far as I know this kind of system is called a "loop closure".
Where do I start with implementing such a thing in ARFoundation? Are there built-in methods? What are keywords / names of specific algorithms that I could be researching?
Any help is greatly appreciated. Thank you!
I experimented with two things:
Adding a simple standalone image trigger to the scene and checking whether that already has an impact on other ARAnchors in the scene -> this doesn't seem to be the case. The transformations of the other anchors do not seem to be affected when the image trigger updates its position.
Attaching all measurement points as children of the image trigger --> this obviously applies the updated image-trigger transformation uniformly to all other points. However, drift does not accumulate uniformly over the runtime of the system.
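To make the goal concrete, this is roughly the single-shot correction I have in mind (a sketch with made-up names such as DriftCorrection; this is not ARFoundation API). The open question is how to do better than applying it uniformly:
using UnityEngine;

public static class DriftCorrection
{
    // originalPose: the marker pose cached when it was first detected.
    // currentPose:  the marker pose re-detected after walking the loop.
    public static void Apply(Pose originalPose, Pose currentPose, Transform[] measurementPoints)
    {
        // Rigid transform that maps the drifted marker pose back onto the original one.
        Quaternion deltaRotation = originalPose.rotation * Quaternion.Inverse(currentPose.rotation);
        Vector3 deltaTranslation = originalPose.position - deltaRotation * currentPose.position;

        foreach (var point in measurementPoints)
        {
            // Applies the full correction uniformly to every point; a real loop-closure
            // step would distribute the error along the trajectory instead.
            point.position = deltaRotation * point.position + deltaTranslation;
            point.rotation = deltaRotation * point.rotation;
        }
    }
}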

Related

Trouble implementing grid-based pathfinding with limited turns

I'm in the process of making a 2D, grid-based game and have reached a standstill due to some challenging code. I need to navigate from one cell in the grid to another if possible, but with a maximum of 2 turns.
The red ball is the goal, and the green paths are the valid ones; "turns" are highlighted by a blue circle.
Without brute-forcing the issue and checking every possible path, how could this be done? I've experimented with a few ideas along with an A* implementation, but no luck so far. Any ideas, using Unity's API or anything else, are highly appreciated.
This can be solved using normal A* by creating a specially-designed directed weighted graph from your original grid.
The trick is to create a graph with multiple "layers". Layer 0 represents 0 turns having been made so far, layer 1 represents 1 turn made, and layer 2 is 2 turns. A node connects to its neighbors on the same layer if they can be reached without turning, and its neighbors on the next layer if they require a turn.
Hopefully this is enough information for you to create the graph, but if not, the explicit steps would be:
Create 6 copies of the graph, Layer_0_Horizontal, Layer_0_Vertical, Layer_1_Horizontal, Layer_1_Vertical, Layer_2_Horizontal, Layer_2_Vertical.
For each node in a Horizontal layer, remove its edges to its vertical neighbors and replace them with edges to nodes in the next layer down, e.g. with Layer_1_Vertical being below Layer_0_Horizontal. Edges in Layer_2 won't be replaced. Do the same thing for Vertical layers / horizontal edges.
Create a fake 'start' node, and connect it to the two Layer_0 nodes that represent that same grid-square with 0-weight edges. If your A* implementation only supports one goal-node, do the same with the goal.
If you want to prefer longer paths with fewer turns over shorter paths with more turns (is that even possible with only two turns?), give the edges between layers an extremely large weight.
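Here is a rough sketch of the construction in C#, assuming a plain adjacency dictionary keyed by (row, col, turnsUsed, direction) that you can feed to your existing A*; the names and the grid representation are illustrative:
using System.Collections.Generic;

static class TurnLimitedGraph
{
    public enum Dir { Horizontal, Vertical }

    public static Dictionary<(int, int, int, Dir), List<((int, int, int, Dir), int)>> Build(bool[,] walkable)
    {
        var graph = new Dictionary<(int, int, int, Dir), List<((int, int, int, Dir), int)>>();
        int rows = walkable.GetLength(0), cols = walkable.GetLength(1);

        bool Open(int r, int c) => r >= 0 && r < rows && c >= 0 && c < cols && walkable[r, c];
        void AddEdge((int, int, int, Dir) from, (int, int, int, Dir) to, int weight)
        {
            if (!graph.TryGetValue(from, out var list)) graph[from] = list = new List<((int, int, int, Dir), int)>();
            list.Add((to, weight));
        }

        for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
        {
            if (!walkable[r, c]) continue;
            for (int t = 0; t <= 2; t++)
            {
                // Same-layer edges: keep moving in the current direction.
                if (Open(r, c - 1)) AddEdge((r, c, t, Dir.Horizontal), (r, c - 1, t, Dir.Horizontal), 1);
                if (Open(r, c + 1)) AddEdge((r, c, t, Dir.Horizontal), (r, c + 1, t, Dir.Horizontal), 1);
                if (Open(r - 1, c)) AddEdge((r, c, t, Dir.Vertical), (r - 1, c, t, Dir.Vertical), 1);
                if (Open(r + 1, c)) AddEdge((r, c, t, Dir.Vertical), (r + 1, c, t, Dir.Vertical), 1);

                if (t == 2) continue;   // a third turn is not allowed

                // Turn edges: step perpendicular and drop down one layer
                // (give these a large weight if you want to penalise turns).
                if (Open(r - 1, c)) AddEdge((r, c, t, Dir.Horizontal), (r - 1, c, t + 1, Dir.Vertical), 1);
                if (Open(r + 1, c)) AddEdge((r, c, t, Dir.Horizontal), (r + 1, c, t + 1, Dir.Vertical), 1);
                if (Open(r, c - 1)) AddEdge((r, c, t, Dir.Vertical), (r, c - 1, t + 1, Dir.Horizontal), 1);
                if (Open(r, c + 1)) AddEdge((r, c, t, Dir.Vertical), (r, c + 1, t + 1, Dir.Horizontal), 1);
            }
        }
        return graph;
    }
}
Add the fake start/goal nodes on top of this and run A* unchanged.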

Layered Sprites With Predetermined Offset / Rotation

Probably not a very descriptive title, but I'm doing my best. It's my first time posting on StackOverflow, and I'm relatively new to programming in C# (first started around a year ago using Unity, and decided a few days ago to upgrade to XNA). That being said, please be kind to me.
I'm planning out the mechanics of a 2D game that I'm designing, and while most of it seems straightforward after playing around in XNA, there's one issue that I keep coming back to that I have yet to come up with a satisfactory answer for. The issue involves the layering of sprites into composite / complex sprites. For example, a character in the game might be wielding one or two of any number of weapons. I did do a bit of research on the topic, and found some people recommending to use the RenderTarget class to draw a series of sprites as one, and some recommending simply drawing the sprites on top of one another during Draw(). These topics, however, were mostly focused on the relatively simple case of having a single character in the game.
In my case, the game will have a number of sprite-based characters who have totally different postures / animations. There are around 10 right now, and there will probably be more added later in development. There will likewise be a largish number of weapons (probably around 20 to start) that will be composited onto the characters. That much I'm comfortable with. However, the problem is that each of the characters would require the weapon sprites to be drawn in different locations and with different rotations during each frame of a character's animation.
I've considered a couple approaches to how to pull this off, but they all have pretty massive drawbacks.
The first was simply drawing a spritesheet of each weapon for each character, which would be the same size as the appropriate character. The benefit of this approach would be the ease of just adding the call to draw the additional sprite on top of the base character without having to do any calculations. The downside would be that it creates an inordinate number of extra sprite sheets (200 extra sheets for 10 characters x 20 weapons).
The second was creating a class to handle the weapon sprite. The WeaponSprite class would be attached to a single texture for each weapon, and would then store information about the offset / rotation to use when drawn, based on the character that it is attached to. The problem with this is that organizing the offsets / rotations on a per-frame basis would be incredibly tedious, and I can't think of any easy way to pull the information based on the frame required. (I had the idea of making an AnimationFrame class to keep track of the animation name, directional facing and frame number of each character, and then using a dictionary in the weapon class to load the proper data based on the name of the current frame, but something about the idea seemed really ill-conceived). This method also has the drawback of requiring a relatively large amount of memory to pull off (assuming a Vector2 is 8 bytes and a float is 4, having 10 characters and 20 weapons would require 192KB of memory given the current number of frames being used, which would only get larger as more weapons were added). I had an offshoot of that idea (which I sort of stole from another post on here about the same topic) of using a reserved alpha value pixel to link the offset and the 'origin' of each weapon, calculating the position at runtime and then only having to store the rotational float in the aforementioned dictionary.
Since I'm new to XNA (and still pretty green on C#), I figured I'd post and let the experts chime in. Am I on the right track with my methods, or am I missing something really simple? Thank you very much in advance for your help, and please let me know if you need any additional information.
Wow, big question. I can't really tell you exactly how to implement this. But I can give you some helpful nuggets of advice:
Advice #0: Whenever any kind of compositing problem comes up, people come out of the woodwork recommending "render targets" as some kind of compositing panacea. They are usually wrong. Avoid using render targets if you can. You only need them if you are doing effects on the final, composite image (blends, blurs, etc). Otherwise just draw your sprites over the top of each other directly to the backbuffer.
Advice #1: You want to pack all your sprites onto a single sprite sheet, if possible. If you exceed the texture size limit, you'll have to be clever about how you partition your sprites across sheets. The reason is performance - you want to limit the number of texture swaps - see this answer for details.
You may be able to use an existing sprite-packer for XNA. If you can find a suitable one, I recommend you use it. A good one will allow you to treat a packed sprite just as you would treat a texture when calling SpriteBatch.Draw.
Advice #2: Do not worry about how much space your positioning data takes at runtime. 192kb is almost nothing - the size of a small texture.
The upshot of this, and #1, is to store as much as possible in your positioning meta-data, and avoid duplicate textures.
How you store your meta-data almost doesn't matter.
Advice #3: You can change both your storage requirements and content creation story from an n × m problem to an n + m problem (n characters and m weapons). Simply store weapons with only an "origin", and store characters with an "origin" and a "hand position & rotation". Simply render such that the origin of the weapon lines up with the hand of the character (the maths is very simple).
Then you can add characters without worrying about what weapons exist, and add weapons without worrying about what characters exist.
Here's an example of how much space this needs: 10 characters × 20 bytes + 20 weapons × 8 bytes = 360 bytes. Nice and small! (Although you'll probably want many more attachment points - different kinds for different weapons, hats, whatever. Edit: oops I didn't include animation frames - but it's still a relatively small amount of data.)
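As a concrete sketch of the draw step (WeaponData and HandAttachment are made-up types; the SpriteBatch.Draw overload with origin and rotation does the actual work):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct WeaponData
{
    public Texture2D Texture;
    public Vector2 Origin;       // grip point, in weapon-sprite pixels
}

public struct HandAttachment     // stored per character animation frame
{
    public Vector2 Position;     // hand position, in character-sprite pixels
    public float Rotation;       // hand rotation, in radians
}

public static class WeaponRenderer
{
    // Draw the weapon so its grip sits exactly on the character's hand.
    public static void Draw(SpriteBatch spriteBatch, Vector2 characterPosition,
                            WeaponData weapon, HandAttachment hand)
    {
        spriteBatch.Draw(
            weapon.Texture,
            characterPosition + hand.Position,  // world position of the hand
            null,                               // full source rectangle
            Color.White,
            hand.Rotation,                      // rotate the weapon with the hand
            weapon.Origin,                      // pivot around the weapon's grip point
            1f,                                 // scale
            SpriteEffects.None,
            0f);                                // layer depth
    }
}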
Advice #4: The trickiest part, as you seem to be hinting at in your post, is content creation.
As you hint at, ideally you would want to be able to edit the attachment points directly in your image editor. This is a compelling idea. Special alpha values are only appropriate if your sprites have no anti-aliasing. You could theoretically do something with layers and different colours. The hardest part is figuring out how to encode rotation.
You could use an XNA content pipeline processor to extract data from the image at build-time. However this gets very expensive to implement (especially if you've not done it before - the content pipeline is badly under-documented). Unless your art requirements are truly enormous, it is almost certainly not worth the extra development time required to make the content pipeline extension. By the time you're done, you could have hand-coded the positioning data several times over.
My recommendation, then, is to store the extra data in an easy-to-edit XML file. I recommend using XNA's XML Content Importer. It can be tricky to get the hang of the formatting at first, and you have to remember to include the appropriate assembly referencing. But once you know how to use it, it's the easiest way to get structured data into XNA quickly.
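For example (WeaponAttachment and its field names are invented for illustration; the XnaContent / Asset wrapper is the shape the built-in XML importer expects):
using Microsoft.Xna.Framework;

// Defined in a game library that both the game and the content project reference.
namespace MyGame
{
    public class WeaponAttachment
    {
        public string CharacterName;
        public int Frame;
        public Vector2 HandPosition;   // pixels, relative to the character sprite
        public float HandRotation;     // radians
    }
}

// WeaponAttachments.xml, added to the content project:
//
// <XnaContent>
//   <Asset Type="MyGame.WeaponAttachment[]">
//     <Item>
//       <CharacterName>Knight</CharacterName>
//       <Frame>3</Frame>
//       <HandPosition>17 42</HandPosition>
//       <HandRotation>0.35</HandRotation>
//     </Item>
//   </Asset>
// </XnaContent>
//
// At runtime:
// WeaponAttachment[] attachments = Content.Load<WeaponAttachment[]>("WeaponAttachments");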

Farseer inertia or velocity cap

Not sure if this is a Farseer thing to do with inertia, or if it's my code, but I've simplified the code quite a bit and I still can't find the cause.
Scenario: I've got a Body with a Mass of 10 (kg, I assume). I use ApplyLinearImpulse to scoot the object to the right, using a vector like (1,0) and a constant like 5.
Problem: It does move to the right, but it seems to be capped. The LinearVelocity property does go up as I increase the value fed to ApplyLinearImpulse, but the actual change in Position does not. As soon as I call world.step(msDelta), the LinearVelocity drops back to some tiny value.
Am I doing this wrong, or is there an internal cap based on my mass?
There is a maximum movement limit of 2.0 units per time step, set in the file b2Settings.h in the Box2D source code. You can change this value (b2_maxTranslation) if you need to, just be aware that the values in this file are tuned to work well together so you may get other problems if you change it too much.
Note that this is a #define'd constant used throughout Box2D so you'll need to recompile the library itself to make the change take effect fully. I don't know enough about Farseer to tell you whether this is easy or not :)
Generally if you feel the need to change this value you might like to first consider scaling all your physics dimensions down so that your bodies don't need to move faster than 2 physics units per time step.
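For example, a small conversion helper keeps rendering in pixels while only meter-scale values ever reach the physics world (the 64 pixels-per-meter ratio is an arbitrary choice, not something Farseer mandates):
using Microsoft.Xna.Framework;

public static class Units
{
    private const float PixelsPerMeter = 64f;

    public static float ToSim(float pixels) { return pixels / PixelsPerMeter; }
    public static Vector2 ToSim(Vector2 pixels) { return pixels / PixelsPerMeter; }

    public static float ToDisplay(float meters) { return meters * PixelsPerMeter; }
    public static Vector2 ToDisplay(Vector2 meters) { return meters * PixelsPerMeter; }
}

// e.g. body.ApplyLinearImpulse(Units.ToSim(new Vector2(500f, 0f)));
//      spritePosition = Units.ToDisplay(body.Position);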
You might be interested in some other common 'gotchas' here: http://www.iforce2d.net/b2dtut/gotchas

Dealing with imprecision in CAD drawing

I have a CAD application that allows the user to draw lines and polygons and all that.
One thorny problem that I face is that user drawings can be highly imprecise. For example, a user might want to draw two rectangles that are connected to each other, so there should be one line shared by the two rectangles. However, it's easy for the user, instead of drawing one line, to draw two lines that are very close to each other - so close that, looking at the screen, you would mistake them for the same line, except that they aren't when you zoom in a little bit.
My application requires the user to draw the lines properly (or my preprocessing must be able to do auto-correction), or else my internal algorithm (let's call it The Algorithm) would not be able to process the inputs correctly.
What is the best strategy to combat this kind of problem? I am thinking about rounding the point coordinates to a certain degree of precision. Although I can't exactly pinpoint what's wrong with this approach, I feel that it is not the correct way of doing things and that it will introduce a new set of problems.
Edit: For the sake of argument, snapping isn't an available option. For that matter, all sorts of "input-side" guidance are not available. The correction must be done via preprocessing in my code, when the drawing is finished, but just before I submit it to my algorithm.
A crazy restriction, you say. But a user can construct their input either in my application or in other CAD software and then submit it to my engine to do the calculation. I can't control how they input it in other CAD software.
Edit 2: I can let the user specify the "cluster radius", but the important point is that I would need to make sure that my preprocessing algorithm is consistent and won't introduce a new set of problems.
Any idea?
One problem I see is that your clustering/snapping algorithm would have to decide on its own which point to move onto which other point.
During live input snapping is simple: the first point stays put, the second point is snapped onto the first. If in offline mode you get a bunch of points that you know should be snapped together, you have no idea where the resulting point should lie. Calculate the average, possibly resulting in a completely new point? Choose the most central point out of all the candidates? Pick one at random? Try to align your point with some other points on the x/y/z-axis?
If your program allows any user interaction at all, you could detect point clusters that might be candidates for merging, and present the user with different merge target points to choose from.
Otherwise, you could make this kind of behaviour configurable: take a merge radius ("if two or more points are within n units of one another...") and a merging algorithm ("... merge them into the most central of the points given") as parameters and read them from a config file.
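A rough sketch of such a configurable merge pass, using a naive O(n^2) grouping and the centroid as the merge rule (any of the strategies above could be substituted):
using System.Collections.Generic;
using System.Numerics;

static List<Vector2> MergeClusters(IReadOnlyList<Vector2> points, float mergeRadius)
{
    var merged = new List<Vector2>();
    var used = new bool[points.Count];

    for (int i = 0; i < points.Count; i++)
    {
        if (used[i]) continue;

        // Collect every not-yet-used point within mergeRadius of point i.
        var cluster = new List<Vector2> { points[i] };
        used[i] = true;
        for (int j = i + 1; j < points.Count; j++)
        {
            if (!used[j] && Vector2.Distance(points[i], points[j]) <= mergeRadius)
            {
                cluster.Add(points[j]);
                used[j] = true;
            }
        }

        // Merge rule: snap the whole cluster to its centroid.
        var centroid = Vector2.Zero;
        foreach (var p in cluster) centroid += p;
        merged.Add(centroid / cluster.Count);
    }
    return merged;
}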
Snapping points. The user should be able to snap to end points (and many more); then, when you detect a snap, just change the point the user clicked to the snap point. Check AutoCAD's functions like End, Middle and so on.
EDIT: If you want offline snapping then you need to check every pair of points to see if they are near each other. The problem is that a naive pairwise check is O(n^2), so it can take a long time on large inputs. The algorithms you need to research fall under "clustering".
EDIT 2: I think you shouldn't assume that the input data is bad. But if you really want to do this, the simplest way is to take each point, check if there are other points within the user-defined radius, and if so find the whole group that should merge into one point, compute the average of their coordinates, and snap all of them to that point. But remember - most designers KNOW what snap points are for, and if they don't use them they have a valid reason for it.
Your basic problem seems to me (I hope I understood correctly) to be determining whether two lines are the "same" line.
From my own experience, your feeling is correct: rounding the coordinates in the input might prove not to be a good idea.
Maybe you should leave the coordinates in the input as they are, but implement a function - let's name it IsSameLine - that you use in "The Algorithm" (which, among other things, determines if two rectangles are connected, if I understood your description correctly).
IsSameLine could transform the endpoints of the input lines from source coordinates to screen coordinates considering a certain (possibly configurable) screen resolution and check if they are the same in screen coordinates.
E.g. let's say you have an input file with the extent (lowerLeft, upperRight) = ((10,10), (24,53)). The question would be how far apart the points (11,15) and (11.1, 15.1) would be if drawn at "zoom to extents" level on a 1600x1200 pixel screen. So you can determine a transform from source coordinates to "screen coordinates". You then use this transformation in IsSameLine as described above.
I'm not sure however this would be actually a good solution for you.
Another (maybe better?) possibility is to implement IsSameLine to return true if the points of the two lines are at most epsilon distance apart. The epsilon could have a default value computed from the extent of the input vector data, and it would probably be a good idea to let the user supply another value for it.
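A minimal sketch of that check, comparing endpoints only (handling overlapping collinear segments would need more than this):
using System.Numerics;

static bool IsSameLine(Vector2 a1, Vector2 a2, Vector2 b1, Vector2 b2, float epsilon)
{
    bool SamePoint(Vector2 p, Vector2 q) { return Vector2.Distance(p, q) <= epsilon; }

    // The two segments count as "the same" if their endpoints match within epsilon,
    // regardless of the order in which the endpoints were stored.
    return (SamePoint(a1, b1) && SamePoint(a2, b2)) ||
           (SamePoint(a1, b2) && SamePoint(a2, b1));
}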

How to implement Pentago AI algorithm

I'm trying to develop a Pentago game in C#.
Right now I have a two-player mode which is working just fine.
The problem is that I want a one-player mode (against the computer), but unfortunately, all the implementations of minimax / negamax I've found assume a single action per "move" (placing a marble, moving a game piece).
But in Pentago, every player needs to do two things per turn (place a marble, and rotate one of the inner boards).
I haven't figured out how to implement both the rotation part and the marble placement, and I would love for someone to guide me with this.
If you're not familiar with the game, here's a link to the game.
If anyone wants, I can upload my code somewhere if that's relevant.
Thank you very much in advance.
If a single legal move consists of two sub-moves, then your "move" for game-algorithm purposes is simply a tuple where the first item is the marble placement and the second item is the board rotation, e.g.:
var marbleMove = new MarbleMove(row, col);   // where the marble is placed
var boardRotation = new BoardRotation(subBoard, rotationDirection);
var move = new Tuple<MarbleMove, BoardRotation>(marbleMove, boardRotation);
Typically a game-playing algorithm will require you to enumerate all possible moves for a given position. In this case, you must enumerate all possible pairs of sub-moves. With this list in hand you can move on to using standard computer game-playing approaches.
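A sketch of what that enumeration might look like, reusing the types above and assuming a 6x6 board stored as int[,] with 0 for empty cells (the RotationDirection enum is illustrative):
using System;
using System.Collections.Generic;

enum RotationDirection { Clockwise, CounterClockwise }

static IEnumerable<Tuple<MarbleMove, BoardRotation>> EnumerateMoves(int[,] board)
{
    for (int row = 0; row < 6; row++)
    for (int col = 0; col < 6; col++)
    {
        if (board[row, col] != 0) continue;   // cell already occupied

        for (int subBoard = 0; subBoard < 4; subBoard++)   // four 3x3 sub-boards
        {
            yield return Tuple.Create(new MarbleMove(row, col),
                                      new BoardRotation(subBoard, RotationDirection.Clockwise));
            yield return Tuple.Create(new MarbleMove(row, col),
                                      new BoardRotation(subBoard, RotationDirection.CounterClockwise));
        }
    }
}
That is at most 36 empty cells x 8 rotations = 288 candidate moves per position.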
Rick suggested tuples above, but you might want to actually just have each player make two independent moves, so it remains their turn twice in a row. This can make move ordering easier, but may complicate your search algorithm, depending on which one you are using.
In an algorithm like UCT (which is likely to outperform minimax for simple implementations), breaking the turn into two moves can be more efficient because the algorithm can first figure out which placements are good, and then figure out which rotation is best. (Googling UCT doesn't give much. The original research paper isn't very insightful, but this page might be better: http://senseis.xmp.net/?UCT)
