The issue I am dealing with is that I cannot seem to find an alternative to PickPoint for SectionViews.
In the Revit 2019 API, I've been trying to write a small script that draws a DetailLine between two points. I want these points to be selected by the user, which PickPoint is perfect for. However, I need this to work in Section Views too, and that's where I hit a roadblock.
The relevant code is below:
XYZ p1 = uiDoc.Selection.PickPoint();
XYZ p2 = uiDoc.Selection.PickPoint();
DetailLine l = uiDoc.Document.Create.NewDetailCurve(
    uiDoc.Document.ActiveView,
    Line.CreateBound(p1, p2)) as DetailLine;
This throws an Autodesk.Revit.Exceptions.InvalidOperationException in a Section View, since there is no Work Plane there.
The part that confuses me is that we can very easily draw a DetailLine in Revit itself, but I can't seem to do the same from my own add-in.
I figured it out, but I'll leave my solution here for whoever might need help with it.
Basically, Revit doesn't allow you to pick points without an active Work Plane at all. Revit's coordinate system is three-dimensional, but a mouse click only specifies two dimensions, so to keep everything precise and unambiguous, Revit forces you to have a plane onto which the picked point is projected.
The work-around is a blatant hack, but it works: create a sketch plane, set it as the view's work plane, pick the point on it, and then delete the sketch plane afterwards. It's dirty, but it works.
Since you're creating and deleting stuff, this requires a Transaction.
// doc is the active Document, uiDoc the active UIDocument.
XYZ pickPoint;
using (Transaction t = new Transaction(doc))
{
    t.Start("Test Transaction");
    // A plane parallel to the view, through the view's origin
    SketchPlane sp = SketchPlane.Create(doc, Plane.CreateByNormalAndOrigin(
        doc.ActiveView.ViewDirection,
        doc.ActiveView.Origin));
    doc.ActiveView.SketchPlane = sp;
    // Finally, we are able to PickPoint()
    // (note: Selection lives on the UIDocument, not the Document)
    pickPoint = uiDoc.Selection.PickPoint();
    // Don't forget to clean up
    doc.Delete(sp.Id);
    t.Commit();
}
// Draw whatever you want with this point now.
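For completeness, here is a sketch of how the original goal then comes together. PickPointOnViewPlane is a hypothetical wrapper around the snippet above; creating the DetailLine needs its own transaction:

XYZ p1 = PickPointOnViewPlane(uiDoc); // hypothetical wrapper around the code above
XYZ p2 = PickPointOnViewPlane(uiDoc);

using (Transaction t = new Transaction(doc))
{
    t.Start("Draw detail line");
    DetailLine line = doc.Create.NewDetailCurve(
        doc.ActiveView,
        Line.CreateBound(p1, p2)) as DetailLine;
    t.Commit();
}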
Hope this helps someone out.
I'm currently trying to implement a complex search field in a Unity project I'm working on, where a user has the ability to spawn points into a scene. My goal is to create a custom shape for that user based on the points they created, and then to detect whether or not other objects are inside that shape, similar to detecting a point inside a convex hull (I'm still shaky on the theory behind that, but an example can be found here). If possible, I'd also like the shape to update itself if the points are later moved, giving it an almost elastic, stretchy feel.
So far, every tutorial or resource I've found online does the exact same basic example, where a script assigns new verts, UVs, and triangles to a custom mesh to make a plane out of two triangles. This is frustratingly simple, and decidedly unhelpful when I simply don't know what the final shape will look like, or which triangles to actually draw, even when the user has as few as five points in the scene.
As of right now, the closest visual representation I could come up with has a List keep track of the user's points, and a script that draws a bunch of pseudo-triangles by connecting every point with LineRenderers (even points that aren't on exterior faces), iterating through the List multiple times. While this looks close to what I want, it isn't actually useful in any way: I don't know how to 'fill' those faces, and I'm still relatively lost when it comes to deciding whether or not an object is inside that hull, like the red sphere shown in the example below.
I can also destroy and redraw those lines repeatedly in Update(), which lets me grab a point and move it around with the shape changing dynamically, but this causes an undesirable flashing effect that I'd sooner avoid for now.
As this is such a broad question, I've also included the method I'm using to draw these lines below, which parents a bunch of lines shaped like triangles to an empty game object for easy destruction and recreation:
void drawHull()
{
    // Parent object that holds all the line "triangles", so they can be
    // destroyed and recreated in one go.
    if (!GameObject.Find("hullHold"))
    {
        hullHold = new GameObject();
        hullHold.name = "hullHold";
    }
    foreach (GameObject point in points)
    {
        for (int i = points.IndexOf(point); i < points.Count - 2; i++)
        {
            lineEdge = Instantiate(lineReference);
            lineEdge.name = "Triangle" + i;

            // Cache the LineRenderer instead of calling GetComponent repeatedly.
            LineRenderer lr = lineEdge.GetComponent<LineRenderer>();
            lr.startColor = Color.black;
            lr.endColor = Color.black;
            lr.positionCount = 3;
            lr.SetPosition(0, point.transform.position);
            lr.SetPosition(1, points[i + 1].transform.position);
            lr.SetPosition(2, points[i + 2].transform.position);
            lr.loop = true;

            lineEdge.SetActive(true);
            lineEdge.transform.SetParent(hullHold.transform);
        }
    }
}
If anyone has encountered a similar problem somewhere else that I simply couldn't find, please let me know! Anything from more knowledge on creating a custom mesh to a more in-depth, beginner-friendly explanation of determining whether a point is inside a convex hull would be quite helpful. If it's at all relevant, I am working in VR and running version 2018.2.6f1 to ensure that the Oculus Rift package and Unity play nice, but I haven't had any issues working in an environment a few months behind.
Thanks!
You do mention assigning verts and tris to a mesh - this is the 'right' way to do it in Unity, as it ties in with the highly optimized MeshRenderer. Colliders are also able to use meshes shaped like that, so you should be able to just plug into PhysX and query it for colliders overlapping with your mesh. Doing it by hand, i.e. iterating through faces and establishing the actual bounds of your object, is actually pretty hard to do efficiently.
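A minimal sketch of that approach, assuming you already have a triangulation of the hull (the verts and tris arrays below would come from a convex hull algorithm, which is not shown here):

using UnityEngine;

// Sketch: build a Mesh from known vertices/triangles and hand it to a
// convex MeshCollider so PhysX can answer overlap queries.
// Attach to an object that has a MeshFilter, MeshRenderer and MeshCollider.
public class HullMesh : MonoBehaviour
{
    public void Rebuild(Vector3[] verts, int[] tris)
    {
        var mesh = new Mesh();
        mesh.vertices = verts;
        mesh.triangles = tris;      // three indices per triangle
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();

        GetComponent<MeshFilter>().mesh = mesh;

        var col = GetComponent<MeshCollider>();
        col.sharedMesh = mesh;
        col.convex = true;          // convex colliders support overlap queries
    }
}

Calling Rebuild again whenever a point moves updates the shape without the destroy-and-recreate flashing. Note that Unity limits convex MeshColliders to 255 triangles, which is plenty for a hand-placed point cloud; with the collider marked as a trigger, OnTriggerStay will report objects overlapping (including fully inside) the hull.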
I'm trying to rotate a polygon in Windows Forms using C#; the code is below.
Please tell me what's wrong with it - there is no output on the form.
Neither the polygon before rotation nor the one after is visible.
public void RotatePolygon(PaintEventArgs e)
{
pp[0] = new Point(624, 414);
pp[1] = new Point(614, 416);
pp[2] = new Point(626, 404);
e.Graphics.DrawPolygon(myPen2, pp);
System.Drawing.Drawing2D.Matrix myMatrix1 = new System.Drawing.Drawing2D.Matrix();
myMatrix.Rotate(45, MatrixOrder.Append);
myMatrix.TransformPoints(pp);
e.Graphics.Transform = myMatrix1;
e.Graphics.DrawPolygon(myPen, pp);
}
Thanks
Your code does not compile if left unmodified. There are two matrices used - one declared in your method (myMatrix1) and attached to the Graphics object, and one declared outside your method (myMatrix, without the 1) used to explicitly transform the point array.
I tried the code with the required changes and it works flawlessly - I used myMatrix1 for both transformations, and the effective rotation angle was, as expected, twice the one specified. So I guess you are using two transformations that cancel out, so the transformed points end up where they began.
There could be these problems:
[1] Your pens have no color/thickness (where do you define them?)
[2] Your polygon is too big, so you only see its inside but not the border. --> Test the Graphics.FillPolygon method to see whether [2] is the case.
You're both transforming the points and changing the transform matrix for the Graphics object - you need to do one or the other.
You also need to think about the fact that the rotation is going to happen about (0,0) rather than about some part of the object - you may well need a translate in there too.
Bear in mind that TransformPoints just manipulates some numbers in an array - which you can easily inspect with the debugger - this will be a more effective technique than displaying an invisible object and wondering where it went.
Starting with a much smaller rotation angle (10 deg, perhaps?) may also help with the problem of losing the object - it will be easier to work out what's happening if you haven't moved so far.
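Putting all of that together, a minimal corrected sketch might look like the following - one matrix, stock pens (to rule out pen problems), and Matrix.RotateAt so the rotation happens about a vertex rather than about (0,0):

using System.Drawing;
using System.Drawing.Drawing2D;

public void RotatePolygon(PaintEventArgs e)
{
    Point[] pp =
    {
        new Point(624, 414),
        new Point(614, 416),
        new Point(626, 404),
    };
    e.Graphics.DrawPolygon(Pens.Black, pp);     // the original triangle

    using (Matrix m = new Matrix())
    {
        // Rotate about the first vertex instead of the origin, so the
        // polygon stays near where it was drawn.
        m.RotateAt(45, pp[0], MatrixOrder.Append);
        m.TransformPoints(pp);                  // transform the array only
    }
    e.Graphics.DrawPolygon(Pens.Red, pp);       // the rotated copy
}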
I'm trying to make a truncated icosahedron, though with more subdivision (so more hexagons).
In the game where I use it, each pentagon and hexagon is a separate object. So after generating the icosahedron, I just use the generated points to place either a pentagon or a hexagon on each of them (instead of doing the find-middles-of-each-triangle thing; I do this since I need them to be separate objects anyway). I have some questions about it though, and Google doesn't really help, so I'm hoping there are some smart math-knowing people here :D
Here we go:
Am I assured that the length of each side is equal?
Since each hexa/pentagon is a separate object, I need to rotate them to get them positioned properly - any help with this?
Assuming I have hexa/pentagons with a radius of 1 (one), how far from the middle do I have to position them? (Basically, what's the relationship between the radius of my hexa/pentagons and the radius of my truncated icosahedron?)
Here's my first test: I generated an icosahedron and then put a pentagon model on each point, rotated so it points away from the middle. As you can see, they still need to be rotated to fit together (question 2) and their distance to the middle has to be tweaked as well (question 3).
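For reference, the place-and-point-outward step might look roughly like this in Unity (pentagonPrefab, sphereRadius, and the assumption that the prefab's local +Z axis is its outward face normal are all mine, not from the question):

// Sketch: place a pentagon at an icosahedron vertex, facing outward.
Vector3 dir = vertex.normalized;              // direction from the centre
GameObject pent = Instantiate(pentagonPrefab);
pent.transform.position = dir * sphereRadius; // the distance is question 3
pent.transform.rotation = Quaternion.LookRotation(dir);
// A further roll around 'dir' is still needed to line the edges up with
// the neighbouring tiles (question 2).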
I'll continue working on this too, but all help will be appreciated! (I'm making this in Unity, using C#, so if you give sample code, it would be really awesome if you use that.)
Thanks a lot!
Well, this isn't the answer to your questions, but maybe it's worth thinking about:
Wouldn't it be easier to start with a ready-made Blender, Maya, ... model of a soccer ball, like for example this one on Blend Swap, and change it to fit your needs? Or do it on your own, as there are a couple of YouTube tutorials. Then you will have far more options, like LOD and materials. You can design it in Blender with each pentagon/hexagon as a single object, and it will be imported into Unity that way.
I have a CAD application that allows the user to draw lines and polygons and all that.
One thorny problem I face is that user drawings can be highly imprecise. For example, a user might want to draw two rectangles that are connected to each other, so there should be one line shared by the two rectangles. However, it's easy for the user to draw, instead of one line, two lines that are so close to each other that on screen you would mistake them for the same line - except that they aren't when you zoom in a little.
My application requires the user to draw the lines properly (or my preprocessing must be able to auto-correct them), or else my internal algorithm (let's call it The Algorithm) won't be able to process the inputs correctly.
What is the best strategy to combat this kind of problem? I am thinking about rounding the point coordinates to a certain precision. Although I can't exactly pinpoint the problem with this approach, I feel it is not the correct way of doing things and will introduce a new set of problems.
Edit: For the sake of argument, snapping isn't an available option. For that matter, all sorts of "input-side" guidance are unavailable. The correction must be done by preprocessing in my code, after the drawing is finished but just before I submit it to my algorithm.
A crazy restriction, you say. But a user can construct their input either in my application or in other CAD software and then submit it to my engine for calculation, and I can't control how they draw in other CAD software.
Edit 2: I can let the user specify the "cluster radius" within which merging occurs, but the important point is that my preprocessing algorithm must be consistent and must not itself introduce a new set of problems.
Any idea?
One problem I see is that your clustering/snapping algorithm would have to decide on its own which point to move onto which other point.
During live input, snapping is simple: the first point stays put, and the second point is snapped onto the first. If in offline mode you get a bunch of points that you know should be snapped together, you have no idea where the resulting point should lie. Calculate the average, possibly resulting in a completely new point? Choose the most central point out of all the candidates? Pick one at random? Try to align your point with some other points on the x/y/z-axis?
If your program allows any user interaction at all, you could detect point clusters that might be candidates for merging, and present the user with different merge target points to choose from.
Otherwise, you could make this kind of behaviour configurable: take a merge radius ("if two or more points are within n units of one another...") and a merging algorithm ("... merge them into the most central of the points given") as parameters and read them from a config file.
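A naive sketch of that configurable merge, with the radius as a parameter and "merge to the centroid" as the policy (Point2D is an assumed simple immutable point type, not from any particular library):

using System.Collections.Generic;

// Naive O(n^2) pass: any points within 'radius' of the current point are
// collapsed into their common centroid.
static List<Point2D> MergeClusters(IReadOnlyList<Point2D> pts, double radius)
{
    var merged = new List<Point2D>();
    var used = new bool[pts.Count];
    for (int i = 0; i < pts.Count; i++)
    {
        if (used[i]) continue;
        double sumX = pts[i].X, sumY = pts[i].Y;
        int count = 1;
        for (int j = i + 1; j < pts.Count; j++)
        {
            if (used[j]) continue;
            double dx = pts[j].X - pts[i].X;
            double dy = pts[j].Y - pts[i].Y;
            if (dx * dx + dy * dy <= radius * radius)
            {
                used[j] = true;
                sumX += pts[j].X;
                sumY += pts[j].Y;
                count++;
            }
        }
        merged.Add(new Point2D(sumX / count, sumY / count));
    }
    return merged;
}

Note that this greedy single pass is order-dependent: a chain of points, each within the radius of its neighbour but not of the first point, will not all merge - which is exactly the kind of consistency question the asker raised.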
Snapping points: the user should be able to snap to end points (and many more); when you detect a snap, just change the point the user clicked to the snap point. Check AutoCAD functions like End, Middle and so on.
EDIT: If you want offline snapping, then you need to check points pairwise to see which are near each other. Done naively that is O(n^2) in the number of points, so it can take a while on large drawings (a spatial index such as a grid or k-d tree brings it down). The algorithm you need falls under "clustering".
EDIT 2: I don't think you should assume the input data is bad. But if you really want to do this, the simplest way is to take each point, check whether there are other points within the user-defined radius, and if so find the whole group that should merge into one point, take the average of their coordinates, and move all of them to that point. But remember - most designers KNOW what snap points are for, and if they don't use them, they have a valid reason.
Your basic problem seems to me (I hope I understood correctly) to be determining whether two lines are the "same" line.
Out of my own experience, your feeling is correct: rounding the coordinates in the input might prove not to be a good idea.
Maybe you should leave the coordinates in the input as they are, but implement a function - let's name it IsSameLine - that you use in "The Algorithm" (which, among other things, determines whether two rectangles are connected, if I understood your description correctly).
IsSameLine could transform the endpoints of the input lines from source coordinates to screen coordinates, assuming a certain (possibly configurable) screen resolution, and check whether they are the same in screen coordinates.
I.e. let's say you have an input file with the extents (lower left, upper right) = ((10,10), (24,53)). The question would be how far apart the points (11,15) and (11.1, 15.1) would be if drawn at "zoom to extents" level on a 1600x1200-pixel screen. From that you can determine a transform from source coordinates to "screen coordinates", and you then use this transform in IsSameLine as described above.
I'm not sure, however, that this would actually be a good solution for you.
Another (maybe better?) possibility is to implement IsSameLine to return true if the endpoints of the two lines are at most some distance epsilon apart. The epsilon could have a default value computed from the extents of the input vector data, and it would probably be a good idea to let the user override it.
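A sketch of the epsilon variant - two segments count as the same line when their endpoints match pairwise within eps, in either orientation (Pt is an assumed simple point type with X and Y):

using System;

static bool IsSameLine(Pt a1, Pt a2, Pt b1, Pt b2, double eps)
{
    // Same line if the endpoints coincide within eps, allowing for the
    // second segment to be stored in the reverse direction.
    return (Dist(a1, b1) <= eps && Dist(a2, b2) <= eps)
        || (Dist(a1, b2) <= eps && Dist(a2, b1) <= eps);
}

static double Dist(Pt p, Pt q)
{
    double dx = p.X - q.X, dy = p.Y - q.Y;
    return Math.Sqrt(dx * dx + dy * dy);
}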
I'm trying to develop the game Pentago in C#.
Right now I have a two-player mode, which is working just fine.
The problem is that I want a one-player mode (against the computer), but unfortunately all the implementations of minimax/negamax I've found compute one "move" at a time (placing a marble, moving a game piece).
But in Pentago, every player needs to do two things per turn (place a marble, and rotate one of the inner boards).
I haven't figured out how to implement both the rotation part and the marble placement, and I would love someone to guide me through this.
If you're not familiar with the game, here's a link to it.
If anyone wants, I can upload my code somewhere if that's relevant.
Thank you very much in advance.
If a single legal move consists of two sub-moves, then your "move" for game-algorithm purposes is simply a tuple where the first item is the marble placement and the second item is the board rotation, e.g.:
var marbleMove = new MarbleMove(fromRow, fromCol, toRow, toCol);
var boardRotation = new BoardRotation(subBoard, rotationDirection);
var move = new Tuple<MarbleMove, BoardRotation>(marbleMove, boardRotation);
Typically a game-playing algorithm will require you to enumerate all possible moves for a given position. In this case, you must enumerate all possible pairs of sub-moves. With that list in hand you can move on to standard computer game-playing approaches.
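A sketch of that enumeration for Pentago's 6x6 board, assuming a placement-only MarbleMove(row, col) constructor and that SubBoard and RotationDirection are the enums behind BoardRotation (these names are assumptions, not from the answer above):

using System;
using System.Collections.Generic;

// Sketch: every legal move is one (placement, rotation) pair.
List<Tuple<MarbleMove, BoardRotation>> EnumerateMoves(Board board)
{
    var moves = new List<Tuple<MarbleMove, BoardRotation>>();
    for (int row = 0; row < 6; row++)
        for (int col = 0; col < 6; col++)
        {
            if (!board.IsEmpty(row, col)) continue;   // hypothetical accessor
            foreach (SubBoard sub in Enum.GetValues(typeof(SubBoard)))
                foreach (RotationDirection dir in Enum.GetValues(typeof(RotationDirection)))
                    moves.Add(Tuple.Create(new MarbleMove(row, col),
                                           new BoardRotation(sub, dir)));
        }
    return moves;
}

That gives 4 sub-boards x 2 directions = 8 rotations per empty cell, so up to 288 moves from an empty board.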
Rick suggested tuples above, but you might want to actually just have each player make two independent moves, so it remains their turn twice in a row. This can make move ordering easier, but may complicate your search algorithm, depending on which one you are using.
In an algorithm like UCT (which is likely to outperform minimax for simple implementations), breaking a turn into two moves can be more efficient, because the algorithm can first figure out which placements are good and then figure out which rotation is best. (Googling UCT doesn't give much. The original research paper isn't very insightful, but this page might be better: http://senseis.xmp.net/?UCT)