Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides, top, and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF, etc.).
In WPF, using the Viewport3D class, you can create a cuboid which is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (1x5), and the top and bottom faces (8x1) using texture coordinates. If the source image has a 1-unit wrap strip on every edge (so the full image is 10x7 units), the front face coordinates should be (1/10, 1/7), (9/10, 1/7), (1/10, 6/7) and (9/10, 6/7), and each side face runs from its nearest image edge to those coordinates, e.g. (0, 1/7), (1/10, 1/7), (0, 6/7) and (1/10, 6/7) for the left side.
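To illustrate, here is a minimal C# sketch of just the front face as a MeshGeometry3D. The positions, the 10x7 image layout and painting.jpg are assumptions for illustration; the other five faces follow the same pattern:

    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;

    // Sketch: front face of the 8x5x1 canvas cuboid, textured from a
    // 10x7-unit image that has a 1-unit wrap strip on every edge.
    var mesh = new MeshGeometry3D();

    // Corners of the front face (z = 0.5 is the front plane).
    mesh.Positions.Add(new Point3D(0, 0, 0.5));   // bottom-left
    mesh.Positions.Add(new Point3D(8, 0, 0.5));   // bottom-right
    mesh.Positions.Add(new Point3D(8, 5, 0.5));   // top-right
    mesh.Positions.Add(new Point3D(0, 5, 0.5));   // top-left

    // Texture coordinates into the 10x7 image (v runs top-down in WPF).
    mesh.TextureCoordinates.Add(new Point(1.0 / 10, 6.0 / 7));
    mesh.TextureCoordinates.Add(new Point(9.0 / 10, 6.0 / 7));
    mesh.TextureCoordinates.Add(new Point(9.0 / 10, 1.0 / 7));
    mesh.TextureCoordinates.Add(new Point(1.0 / 10, 1.0 / 7));

    // Two triangles make up the quad.
    mesh.TriangleIndices = new Int32Collection(new[] { 0, 1, 2, 0, 2, 3 });

    var material = new DiffuseMaterial(new ImageBrush(
        new BitmapImage(new Uri("painting.jpg", UriKind.Relative))));
    var frontFace = new GeometryModel3D(mesh, material);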
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you don't need to do real 3D, only to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right (going by your sketch in the question). Scale and shear the strips (I'm not expert enough at .NET/WPF to know the exact calls, but it can do it). The top strip would be scaled vertically by a factor of 0.5 (a guess; choose it to fit the desired final 3D-looking image) and sheared horizontally, and the result composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
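In GDI+ terms, a minimal sketch of the top strip's transform, assuming you've already cropped the strip into its own Bitmap (the 0.3 shear and 0.5 scale are the guess-and-tune parameters mentioned above):

    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Composite a cropped top strip onto the output, scaled and sheared
    // so it reads as the top side of the canvas. Factors are guesses;
    // tweak them until the fake perspective looks right.
    static void DrawTopStrip(Graphics g, Bitmap topStrip, float destX, float destY)
    {
        var m = new Matrix();
        m.Translate(destX, destY);
        m.Shear(0.3f, 0f);      // horizontal shear
        m.Scale(1f, 0.5f);      // squash vertically by half
        g.Transform = m;
        g.DrawImage(topStrip, 0, 0);
        g.ResetTransform();
    }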
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to texture-map and rasterize the model into a final image, which amounts to doing the same math anyway. The fun part is figuring out how to adjust the scaling and shearing parameters.
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be a useful reference: http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
I don't have any experience in this kind of stuff, but when I saw this question, the first thing that came to mind was the funny Unicornify for SO.
In this making-of article, balpha explains how the 2D unicorn sphere is rotated in 3D space.
The code is written in Python; if you are interested, you can take a look, though I'm not exactly sure it would help you.
The brute-force approach (which might also be the easiest) is to map the u,v texture coordinates for each of the three visible faces onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternately, you can move the 3-space camera position with a transform, rather than the canvas. Six of one, half a dozen of the other, as they say.
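In WPF that rotation is nearly a one-liner; a sketch, assuming canvasGroup is the Model3DGroup holding the three billboards:

    using System.Windows.Media.Media3D;

    // Rotate all three billboards together, 25 degrees about the y axis.
    // canvasGroup (a Model3DGroup holding them) is an assumed name.
    var rotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 25);
    canvasGroup.Transform = new RotateTransform3D(rotation);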
Related
I am trying to segment arms from a Kinect depth image in my app.
I tried using joint positions to get the vector between elbow and wrist/hand-tip, and created a rotated 2D bounding rectangle between these two joints, then removed all pixels outside the rectangle. The problem is that, depending on the distance from the sensor, this rectangle changes width and can become trapezoidal (e.g. if the hand is closer to the camera), so it basically only lets me discard parts of the image before doing the actual processing.
When the hand is near the body (like my left arm below), I need to detect the edge of the hand, presumably by checking the depth gradient. But I couldn't find a flood-fill algorithm that "stops" at gradients.
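To be concrete, this is the kind of gradient-limited flood fill I have in mind (a sketch only; depth is assumed to be a row-major array of millimetre values, and maxStep is the depth jump I would treat as an edge):

    using System;
    using System.Collections.Generic;

    // Flood fill over a depth image that stops where the depth difference
    // between neighbouring pixels exceeds maxStep (i.e. at steep gradients).
    static bool[] FloodFillDepth(ushort[] depth, int width, int height,
                                 int seedX, int seedY, int maxStep)
    {
        var visited = new bool[width * height];
        var stack = new Stack<(int x, int y)>();
        stack.Push((seedX, seedY));
        visited[seedY * width + seedX] = true;

        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            int d = depth[y * width + x];
            foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
            {
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int i = ny * width + nx;
                if (visited[i]) continue;
                // Stop at steep depth gradients: treat them as the hand's edge.
                if (Math.Abs(depth[i] - d) > maxStep) continue;
                visited[i] = true;
                stack.Push((nx, ny));
            }
        }
        return visited;
    }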
Is there a better approach perhaps? I could use an algorithm idea.
I have an application I'm working on that requires a fair amount of 3D graphics programming. I have a series of lines that create both text and 3D cylindrical holes (see images).
I would like to be able to click and drag the objects in question using my mouse through the X,Y plane (Z constant). My understanding is that in order for the bounding boxes to be set up correctly, I have to have everything using 3D polygons (triangles). I would like to be able to do collision detection without this conversion. Is this possible? If I must convert, can anyone point me to a piece of code that does this rather painlessly?
You can treat each line segment as a cylinder, and test those cylinders for collision.
Here's the math, as well as more alternatives.
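Since your drag plane keeps Z constant, the core test can even run in 2D: project to the X,Y plane and compare the point-to-segment distance against the cylinder radius. A sketch of that test (my own, not code from the linked page):

    using System;

    // 2D point-to-segment distance test: returns true when the mouse point
    // is within 'radius' of segment (a, b). With Z constant, this is enough
    // to hit-test a line rendered as a cylinder.
    static bool HitsSegment(float px, float py,
                            float ax, float ay, float bx, float by,
                            float radius)
    {
        float dx = bx - ax, dy = by - ay;
        float lenSq = dx * dx + dy * dy;
        // Parameter t of the projection of P onto AB, clamped to [0, 1].
        float t = lenSq > 0 ? ((px - ax) * dx + (py - ay) * dy) / lenSq : 0;
        t = Math.Max(0, Math.Min(1, t));
        float cx = ax + t * dx, cy = ay + t * dy;   // closest point on segment
        float distSq = (px - cx) * (px - cx) + (py - cy) * (py - cy);
        return distSq <= radius * radius;
    }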
I am attempting to create a function that takes a plane in 3D space and returns a plane which will fit in its entirety inside one division of a grid on the screen.
The grid on the screen is fixed and is defined by either divisions in X and Y, or by a set of lines across the screen.
The original plane can be any size or orientation on the screen, though it will never take the whole screen.
I am working in Unity 3.5.2f2 with C#. I have posted this on SO as it is quite heavily math-based, as opposed to just general Unity knowledge. Ideally a solution will not use external libraries, though that is a possibility.
I have a few methods in mind and would appreciate any input:
1. Project the plane to screen space and get the min/max x and y values of the mesh (its bounding box). Use the difference in height/width between the mesh and a screen division to calculate a scale transform, then re-project into world space after snapping two edges of the mesh to a selected division.
2. As the divisions are rectangular in nature, create several view frustums, and come up with some method of scaling/translating the plane in 3D space to fit the frustum.
The function prototype would be:

    Plane adjustPlaneToFitScreens(Plane _plane)
Any thoughts?
I solved this issue using method 1 above. Unity provides several handy functions that made the math easy, and calculating scaling and translation in pixel/screen space was far easier than in 3D space while having to take into account view angle/FOV.
There are issues with the re-projection into world space after the scaling, but this particular application doesn't move the camera when viewing the scaled object, so the issues are not actually noticeable in black-box testing.
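For reference, a sketch of the screen-space scaling step under that approach (names are illustrative, not from my actual code; cell is the target grid division in pixels):

    using UnityEngine;

    // Sketch: compute the uniform scale factor that fits a plane's
    // screen-space bounding box inside one grid cell. 'corners' are the
    // plane's four world-space corner points; 'cell' is the cell in pixels.
    static float FitScaleForCell(Camera cam, Vector3[] corners, Rect cell)
    {
        Vector2 min = new Vector2(float.MaxValue, float.MaxValue);
        Vector2 max = new Vector2(float.MinValue, float.MinValue);
        foreach (var c in corners)
        {
            Vector3 s = cam.WorldToScreenPoint(c);   // project to pixels
            min = Vector2.Min(min, new Vector2(s.x, s.y));
            max = Vector2.Max(max, new Vector2(s.x, s.y));
        }
        // Scale so the larger screen-space extent still fits the cell.
        return Mathf.Min(cell.width / (max.x - min.x),
                         cell.height / (max.y - min.y));
    }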
My current project has required me to learn face detection/tracking and image processing. Given my experience in C#, I chose Emgu CV as my library of choice for face detection and tracking. From what I've learned so far, I can do face detection, tracking, and basic image processing.
My goal is to be able to place virtual hair on the detected face. What I want to achieve is similar to this video: http://www.youtube.com/watch?v=BdPmECfUFcI.
What I would like to know is the technique(s) to use for placing different kinds of hairstyles on the detected face. In what image format do I store the hair?
After watching the video I noticed it treats the head as a flat rectangle rather than as a rectangular prism (the 3D object), so it doesn't use perspective transformations, and I won't consider them either. This is a limitation, but it serves as a decent first step for such placements. Note that it is not simply a matter of taking perspective into consideration: your face-tracking algorithm also needs to handle more complicated configurations (the eyes might not be fully visible, for example).
So, the first thing you want is a bounding rectangle aligned to the angle the eye line makes with the x axis, illustrated in the right figure below (the red segment indicates the connection between the eyes). The left figure shows a typical axis-aligned bounding box, which doesn't work for this problem.
The problem is also simplified once you assume the head is symmetric, so you know the top middle point in the figure above is the middle of the top of the head. Also, considering that a typical head is likely wider at the top than at the bottom, you end up with something like the following figure, where the width of the rectangle is close to the width of the forehead. You could also consider a bounding rectangle over only the upper half of the head, for example.
Now all that is left is positioning some object in this rectangle. For that, you need to augment the object's description so it is not purely pixels. We can define an "entrance width" (EW) and an "entrance middle point" (EM). The EW establishes the width needed in the other rectangle (the head's) to position the object: if EW is smaller than the needed value, you upscale the object, and downscale it when EW is larger. Note that the full width of the head's rectangle is usually an overestimate for positioning the object, so you can experiment with percentages of it. The EM value tells you how to position the object over the head. In the following figure, EW is the horizontal blue dashed segment, and EM is the middle point on it. The vertical blue line indicates how far above EM you want to move the object relative to the top segment of the head's rectangle.
The only other special thing this object needs is a value that is treated as background, so when painting the object it is easy to decide whether to make a point fully transparent (the background value) or fully opaque (anything else). This is the sketch I had in mind of what basically needs to be done.
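To make the placement step concrete, a minimal sketch using System.Drawing, where a PNG alpha channel plays the role of the background value (eyeL/eyeR, hairEW and hairEM are illustrative names, not any face-tracking API):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Sketch: scale a hair PNG so its entrance width (EW) matches a fraction
    // of the head rectangle's width, rotate it to the eye-line angle, and
    // draw it so its entrance middle point (EM) sits on the head's top edge.
    static void DrawHair(Graphics g, Bitmap hair, float hairEW, PointF hairEM,
                         PointF eyeL, PointF eyeR, float headWidth, PointF headTopMiddle)
    {
        // Angle of the eye line with the x axis, in degrees.
        float angle = (float)(Math.Atan2(eyeR.Y - eyeL.Y, eyeR.X - eyeL.X) * 180 / Math.PI);
        // Scale so EW matches ~90% of the head width (a percentage to tune).
        float scale = headWidth * 0.9f / hairEW;

        var m = new Matrix();
        m.Translate(headTopMiddle.X, headTopMiddle.Y);
        m.Rotate(angle);
        m.Scale(scale, scale);
        m.Translate(-hairEM.X, -hairEM.Y);  // put EM at the anchor point
        g.Transform = m;
        g.DrawImage(hair, 0, 0);            // PNG alpha handles the background
        g.ResetTransform();
    }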
I have a 3D model loaded with model = Content.Load<Model>("cube") and I need to get the size of that object after it gets projected to the viewport.
I know that I can use Viewport.Project(), but that works for a single point, and what I need is a rectangle, something I can draw square brackets around.
I can think of a couple of ways of doing this. My suggestion would be to find an upper corner and a lower corner of the model, and project those onto the viewport.
You could do this using the BoundingSphere of the model's Meshes, for example. If it's a cube, as above, you could just go through the vertices one by one (obviously after transforming them relative to the camera). Using that, you can draw a rectangle in screen space that will at least encompass the entire model, though it may cover a larger area as well, depending on the shape.
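A sketch of that idea in XNA, using the bounding sphere's axis-aligned extremes as the corners (conservative rather than tight, as noted):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch: project each mesh's bounding-sphere extremes to screen space
    // and return the enclosing rectangle.
    static Rectangle ScreenBounds(Model model, Viewport viewport,
                                  Matrix world, Matrix view, Matrix projection)
    {
        Vector2 min = new Vector2(float.MaxValue), max = new Vector2(float.MinValue);
        foreach (ModelMesh mesh in model.Meshes)
        {
            BoundingSphere s = mesh.BoundingSphere;
            // Project the 8 corners of the sphere's axis-aligned box.
            for (int i = 0; i < 8; i++)
            {
                Vector3 corner = s.Center + s.Radius * new Vector3(
                    (i & 1) == 0 ? -1 : 1, (i & 2) == 0 ? -1 : 1, (i & 4) == 0 ? -1 : 1);
                Vector3 p = viewport.Project(corner, projection, view, world);
                min = Vector2.Min(min, new Vector2(p.X, p.Y));
                max = Vector2.Max(max, new Vector2(p.X, p.Y));
            }
        }
        return new Rectangle((int)min.X, (int)min.Y,
                             (int)(max.X - min.X), (int)(max.Y - min.Y));
    }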