Can anyone help me with vtkWarpLens?
What I am trying to do is implement a distortion pattern on the camera to modify how the data is seen.
Here is the meat of the code (it's in C#; I'm using ActiViz):
vtkPolyData pd = vtkPolyData.New();
CreateFromFile(pd); // this creates a triangle representation of a height field
double [] sr = pd.GetScalarRange();
vtkLookupTable lut = vtkLookupTable.New();
lut.SetNumberOfColors(16);
lut.SetHueRange(0.667, 0.0);
lut.Build();
wl = vtkWarpLens.New();
wl.SetInputConnection(pd.GetProducerPort());
wl.SetPrincipalPoint(0.5, 0.5);
wl.SetFormatWidth(1);
wl.SetFormatHeight(1);
wl.SetImageWidth(1000);
wl.SetImageHeight(1000);
wl.SetK1(0.01307);
wl.SetK2(0.0003102);
wl.SetP1(1.953e-005);
wl.SetP2(-9.655e-005);
vtkDataSetMapper dsmDistorted = vtkDataSetMapper.New();
dsmDistorted.SetInputConnection(wl.GetOutputPort());
dsmDistorted.SetLookupTable(lut);
dsmDistorted.SetScalarRange(sr[0]+20, sr[1]);
vtkActor dsDistortedActor = vtkActor.New();
dsDistortedActor.SetMapper(dsmDistorted);
m_renDistorted.AddActor(dsDistortedActor);
m_renWin.SetDesiredUpdateRate(0);
m_renWin.Render();
m_renDistorted.ResetCamera();
So, basically, I am creating a terrain representation using polygons, passing it through vtkWarpLens, passing that through a dataset mapper to give it pretty colors, and then displaying it.
The issue is that the "warp" appears fixed to the terrain rather than to the camera. I'm pretty new to VTK, so it's possible that I don't understand how the Interactor and the Camera are related.
Can someone help?
I am trying to use openCV.NET to read scanned forms. The problem is that sometimes the positions of the relevant regions of interest and the alignment may differ depending on the printer the form was printed from and the way the user scanned it.
So I thought I could use an ArUco marker as a reference point as there are libraries (ArUco.NET) already built to recognize them. I was hoping to find out how much the ArUco code is rotated and then rotate the form backwards by that amount to make sure the text is straight. Then I can use the center of the ArUco code as a reference point to use OCR on specific regions on the form.
I am using the following code to get the OpenGL modelViewMatrix. However, it always seems to be the same numbers no matter which angle the ArUco code is rotated. I only just started with all of these libraries but I thought that the modelViewMatrix would give me different values depending on the rotation of the marker. Why would it always be the same?
Mat cameraMatrix = new Mat(3, 3, Depth.F32, 1);
Mat distortion = new Mat(1, 4, Depth.F32, 1);
using (Mat image2 = OpenCV.Net.CV.LoadImageM("./image.tif", LoadImageFlags.Grayscale))
{
    using (var detector = new MarkerDetector())
    {
        detector.ThresholdMethod = ThresholdMethod.AdaptiveThreshold;
        detector.Param1 = 7.0;
        detector.Param2 = 7.0;
        detector.MinSize = 0.01f;
        detector.MaxSize = 0.5f;
        detector.CornerRefinement = CornerRefinementMethod.Lines;
        var markerSize = 10;
        IList<Marker> detectedMarkers = detector.Detect(image2, cameraMatrix, distortion);
        foreach (Marker marker in detectedMarkers)
        {
            Console.WriteLine("Detected a marker top left at: " + marker[0].X + " " + marker[0].Y);
            // The upper 3x3 of the modelview matrix (elements 0,4,8, 1,5,9, 2,6,10) is the rotation matrix.
            double[] modelViewMatrix = marker.GetGLModelViewMatrix();
        }
    }
}
It looks like you have not initialized your camera parameters.
cameraMatrix and distortion are the intrinsic parameters of your camera. You can use OpenCV to find them.
This is for OpenCV 2.4 but it will help you understand the basics:
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
Once you have found them, you should be able to get sensible pose values from the markers.
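For illustration, initializing them with values from a prior calibration might look like the sketch below. The fx/fy/cx/cy numbers are placeholders, not real calibration results, and Mat.FromArray is assumed to be available in OpenCV.Net (otherwise populate the Mat element-wise):
// Placeholder intrinsics: fx, fy (focal lengths in pixels) and cx, cy
// (principal point) must come from calibrating your actual camera.
float fx = 800f, fy = 800f;
float cx = 640f, cy = 480f;
Mat cameraMatrix = Mat.FromArray(new float[,]
{
    { fx, 0f, cx },
    { 0f, fy, cy },
    { 0f, 0f, 1f }
});
// Distortion coefficients k1, k2, p1, p2 from the same calibration;
// zeros shown here only to make the sketch complete.
Mat distortion = Mat.FromArray(new float[,] { { 0f, 0f, 0f, 0f } });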
I'm writing a game using the 2d features of Unity.
I'm designing a sort of inventory for the player character, and I have a gameobject with a number of placeholder images inside it. The intention is that when I actually load this gameobject, I'll replace the sprites of the placeholder images and I'll display what I want.
My issue is that when I change the sprite using code like this:
var ren = item1.GetComponentInChildren<SpriteRenderer>();
ren.sprite = Resources.Load<Sprite>("DifferentSprite");
The correct image is loaded, but the placeholder's scaling is applied to the new sprite. The problem is that these sprites all have different sizes, so whilst the original placeholder image takes up a small square, the replacement might be tiny, or massive enough to cover the whole screen, depending on how the actual png was sized.
I basically want the sprite to replace the other and scale itself such that it has the same width and height as the placeholder image did. How can I do this?
EDIT - I've tried playing around with ratios. It's still not working perfectly, but it's close.
var ren = item1.GetComponentInChildren<SpriteRenderer>();
Vector2 beforeSize = ren.transform.renderer.bounds.size;
ren.sprite = Resources.Load<Sprite>("Day0/LampOn");
Vector2 spriteSize = ren.sprite.bounds.size;
//Get the ratio between the wanted size and the sprite's size ratios
Vector3 scaleMultiplier = new Vector3 (beforeSize.x/spriteSize.x, beforeSize.y/spriteSize.y);
//Multiple the scale by this multiplier
ren.transform.localScale = new Vector3(ren.transform.localScale.x * scaleMultiplier.x,ren.transform.localScale.y * scaleMultiplier.y);
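One likely source of the remaining error: beforeSize is a world-space size, so it already includes the current localScale; multiplying the ratio by localScale again applies the old scale twice. Setting the scale directly to the ratio should land exactly. A sketch, assuming the parent transform is unscaled:
var ren = item1.GetComponentInChildren<SpriteRenderer>();
Vector2 beforeSize = ren.bounds.size; // world-space size of the placeholder sprite
ren.sprite = Resources.Load<Sprite>("Day0/LampOn");
Vector2 spriteSize = ren.sprite.bounds.size; // unscaled size of the new sprite
// beforeSize already reflects the old scale, so assign the ratio directly
ren.transform.localScale = new Vector3(beforeSize.x / spriteSize.x, beforeSize.y / spriteSize.y, 1f);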
puzzle.Sprite = Sprite.Create(t, new Rect(0, 0, t.width, t.height), Vector2.one / 2, 256);
The last int argument is the pixelsPerUnit, and it is what causes the size change.
That fixed it in my project: my sprites used pixelsPerUnit = 256 while Unity's default is 100, so they rendered 2.56 times bigger.
Tell me if it helps.
How to modify width and height (for a UI Image): use RectTransform.sizeDelta.
Example:
Vector3 originalDelta = imageToSwap.rectTransform.sizeDelta; //get the original Image's rectTransform's size delta and store it in a Vector called originalDelta
imageToSwap.sprite = newSprite; //swap out the new image
imageToSwap.rectTransform.sizeDelta = originalDelta; //size it as the old one was.
More about sizeDeltas here
I would expose two public variables in the script, with some default values:
public int placeholderWidth = 80;
public int placeholderHeight = 80;
Then, whenever you change sprites, set the width & height to those pre-defined values.
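A sketch of that, assuming a SpriteRenderer and sizes expressed in world units (the placeholderWidth/placeholderHeight values above are illustrative):
// After swapping the sprite, rescale so the rendered size matches the
// pre-defined placeholder dimensions.
var ren = item1.GetComponentInChildren<SpriteRenderer>();
ren.sprite = Resources.Load<Sprite>("DifferentSprite");
Vector2 spriteSize = ren.sprite.bounds.size; // unscaled sprite size in world units
ren.transform.localScale = new Vector3(placeholderWidth / spriteSize.x, placeholderHeight / spriteSize.y, 1f);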
I have implemented basic Hardware model instancing method in XNA code by following this short tutorial:
http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/
I have created the needed shader (without texture atlas though, single texture only) and I am trying to use this method to draw a simple tree I generated using 3DS Max 2013 and exported via FBX format.
The results I'm seeing have left me without a clue as to what is going on.
Back when I was using no instancing, just calling Draw on the mesh for every tree on a level, the whole tree was shown.
I have made absolutely sure that the Model contains only one Mesh and that Mesh contains only one MeshPart.
I am using a vertex extraction method, reading the Model's vertex and index buffers via GetData<>(), with the correct number of vertices and indices, hence the correct number of primitives is rendered. Correct texture coordinates and normals for lighting are also extracted, as is visible in the part of the tree that does render.
The parts of the tree are also in their correct places.
They are simply missing some 1000 or so polygons for absolutely no reason whatsoever. I have break-pointed at every step of vertex extraction and shader parameter generation, and I cannot for the life of me figure out what I am doing wrong.
My Shader's Vertex Transformation function:
VertexShaderOutput VertexShaderFunction2(VertexShaderInput IN, float4x4 instanceTransform : TEXCOORD1)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(IN.Position, transpose(instanceTransform));
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.texCoord = IN.texCoord;
    output.Normal = IN.Normal;
    return output;
}
Vertex bindings and index buffer generation:
instanceBuffer = new VertexBuffer(Game1.graphics.GraphicsDevice, Core.VertexData.InstanceVertex.vertexDeclaration, counter, BufferUsage.WriteOnly);
instanceVertices = new Core.VertexData.InstanceVertex[counter];
for (int i = 0; i < counter; i++)
{
    instanceVertices[i] = new Core.VertexData.InstanceVertex(locations[i]);
}
instanceBuffer.SetData(instanceVertices);
bufferBinding[0] = new VertexBufferBinding(vBuffer, 0, 0);
bufferBinding[1] = new VertexBufferBinding(instanceBuffer, 0, 1);
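For reference, Core.VertexData.InstanceVertex isn't shown in the post; a typical layout for this pattern (a sketch, not the actual struct) is a per-instance world matrix split across four texture-coordinate channels, matching the float4x4 : TEXCOORD1 input in the shader above:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Sketch of a per-instance vertex type: one world matrix per instance,
// delivered to the shader as four float4s in TEXCOORD1..TEXCOORD4.
public struct InstanceVertex : IVertexType
{
    public Matrix World;

    public static readonly VertexDeclaration vertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));

    // Assuming locations[] holds per-tree world positions:
    public InstanceVertex(Vector3 location)
    {
        World = Matrix.CreateTranslation(location);
    }

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return vertexDeclaration; }
    }
}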
Vertex extraction method used to get all the vertex info (this part I'm sure works correctly, as I have used it before to load test geometric shapes such as boxes and spheres into levels for testing various shaders, and to construct bounding boxes around them using the extracted vertex data, all of which was correct):
public void getVertexData(ModelMeshPart part)
{
    modelVertices = new VertexPositionNormalTexture[part.NumVertices];
    rawData = new Vector3[modelVertices.Length];
    modelIndices32 = new uint[rawData.Length];
    modelIndices16 = new ushort[rawData.Length];
    int stride = part.VertexBuffer.VertexDeclaration.VertexStride;
    VertexPositionNormalTexture[] vertexData = new VertexPositionNormalTexture[part.NumVertices];
    part.VertexBuffer.GetData(part.VertexOffset * stride, vertexData, 0, part.NumVertices, stride);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.ThirtyTwoBits)
        part.IndexBuffer.GetData<uint>(modelIndices32);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.SixteenBits)
        part.IndexBuffer.GetData<ushort>(modelIndices16);
    for (int i = 0; i < modelVertices.Length; i++)
    {
        rawData[i] = vertexData[i].Position;
        modelVertices[i].Position = rawData[i];
        modelVertices[i].TextureCoordinate = vertexData[i].TextureCoordinate;
        modelVertices[i].Normal = vertexData[i].Normal;
        counter++;
    }
}
This is the rendering code for the object batch (trees in this particular case):
public void RenderHW()
{
    Game1.graphics.GraphicsDevice.RasterizerState = rState;
    treeBatchShader.CurrentTechnique.Passes[0].Apply();
    Game1.graphics.GraphicsDevice.SetVertexBuffers(bufferBinding);
    Game1.graphics.GraphicsDevice.Indices = iBuffer;
    Game1.graphics.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, treeMesh.Length, 0, primitive, counter);
    Game1.graphics.GraphicsDevice.RasterizerState = rState2;
}
If anybody has any idea where to even start looking for errors, just post whatever comes to mind, as I'm completely stumped as to what's going on.
This even runs counter to all my previous experience: when I messed something up in shader code or vertex generation, I'd get an absolute mess on screen, with numerous graphical artifacts such as elongated triangles originating where the mesh should be but with one tip stretching back to (0,0,0), black textures, incorrect positioning (often outside the skybox or below the terrain), incorrect scaling...
This is something different, almost as if it works: the part of the tree that is visible is correct in every single aspect (location, rotation, scale, texture, shading), except that a part is missing. What makes it weirder is that the missing part is seemingly logically segmented: only the tree trunk's primitives and some leaves off the lowest branches are missing, leaving all other primitives correctly rendered with no artifacts. Basically, they're... correctly missing.
Solved. Of course it was the one part I was 100% sure was correct that turned out not to be. An index buffer generally holds more entries than there are vertices (indices are reused across triangles), so sizing the index arrays by the vertex count silently truncated them, dropping exactly the primitives whose indices fell past the end.
modelIndices32 = new uint[rawData.Length];
modelIndices16 = new ushort[rawData.Length];
Change that into:
modelIndices32 = new uint[part.IndexBuffer.IndexCount];
modelIndices16 = new ushort[part.IndexBuffer.IndexCount];
Now I just have to figure out why 3 draw calls rendering 300 trees are slower than 300 draw calls rendering 1 tree each (i.e. why I wasted an entire afternoon creating a new problem).
I am trying to set up an extremely simple XNA game with a 3d terrain and some 2d GUI objects on top. I chose Nuclex since that seems to be one of the few 2d GUIs that's currently active.
My problem: adding a Nuclex "screen" class to my game causes the 3d terrain, and the lines drawn on top of it, to render incorrectly. I have three layers of 3d objects: the terrain, a wireframe outline of the terrain, and some "routes" that hug the terrain. With the screen added, routes that are partially submerged in the terrain appear entirely on top of it, and the wireframe grid appears thicker.
This is the tail-end of my Game.Initialize method, as made by following the sample on the Nuclex GuiManager page:
InputManager im = new InputManager();
IGraphicsDeviceService igds = Nuclex.Graphics.GraphicsDeviceServiceHelper.MakeDummyGraphicsDeviceService(GraphicsDevice);
gui = new GuiManager(igds, im);
gui.Initialize();
Viewport vp = GraphicsDevice.Viewport;
Screen main_screen = new Screen(vp.Width, vp.Height);
this.gui.Screen = main_screen;
main_screen.Desktop.Bounds = new UniRectangle(
new UniScalar(0.1f, 0.0f), new UniScalar(0.1f, 0.0f), // x and y
new UniScalar(0.8f, 0.0f), new UniScalar(0.8f, 0.0f) // width and height
);
LabelControl text = new LabelControl("hello");
text.Bounds = new UniRectangle(10, 10, 200, 30);
main_screen.Desktop.Children.Add(text);
Components.Add(gui);
base.Initialize();
So, any ideas as to what I'm doing wrong?
Thanks!
Right, so thanks to catflier who pointed me in the direction of render states, I fixed the problem:
* First, "RenderState" is an XNA3 term; XNA4 simply allows a few state flags directly on the GraphicsDevice object.
* The problem is that Nuclex is secretly setting the GraphicsDevice.DepthStencilState and leaving it instead of restoring it to its previous value after use (naughty!!).
* So the solution is, at the top of your Draw method, to add a line like so:
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
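In context, that line goes at the top of the game's Draw override, before any 3d rendering. A minimal sketch (the BlendState reset is an extra precaution, not strictly required by the answer):
protected override void Draw(GameTime gameTime)
{
    // Nuclex's GuiManager leaves its own states on the device, so restore
    // the defaults before drawing the 3d scene.
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.Opaque;

    // ... draw terrain, wireframe, and routes here ...

    base.Draw(gameTime); // the GuiManager component draws the UI last
}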
I have an application that is very "connection-based", i.e. multiple inputs/outputs.
The UI concept of a "cable" is exactly what I'm looking for to make the concept clear to the user. Propellerhead took a similar approach in their Reason software for audio components, illustrated in this YouTube video (fast forward to 2m:50s).
I can make this concept work in GDI by painting a spline from point A to point B, but there's got to be a more elegant way to do this with Paths or something in WPF; where do you start? Also, is there a good way to simulate the animation of the cable swinging when you grab it and shake it?
I'm also open to control libraries (commercial or open source) if this wheel has already been invented for WPF.
Update: Thanks to the links in the answers so far, I'm almost there.
I've created a BezierCurve programmatically, with Point 1 being (0, 0), Point 2 being the bottom "hang" point, and Point 3 being wherever the mouse cursor is. I've created a PointAnimation for Point 2 with an ElasticEase easing function applied to it to give the "Swinging" effect (i.e., bouncing the middle point around a bit).
Only problem is, the animation seems to run a little late. I'm starting the Storyboard each time the mouse moves; is there a better way to do this animation? My solution so far is located here:
Bezier Curve Playground
Code:
private Path _path = null;
private BezierSegment _bs = null;
private PathFigure _pFigure = null;
private Storyboard _sb = null;
private PointAnimation _paPoint2 = null;
private ElasticEase _eEase = null;

private void cvCanvas_MouseMove(object sender, MouseEventArgs e)
{
    var position = e.GetPosition(cvCanvas);
    AdjustPath(position.X, position.Y);
}

// basic idea: when the mouse moves, call AdjustPath and draw a line from (0,0) to the mouse position with a "hang" in the middle
private void AdjustPath(double x, double y)
{
    if (_path == null)
    {
        _path = new Path();
        _path.Stroke = new SolidColorBrush(Colors.Blue);
        _path.StrokeThickness = 2;
        cvCanvas.Children.Add(_path);
        _bs = new BezierSegment(new Point(0, 0), new Point(0, 0), new Point(0, 0), true);
        PathSegmentCollection psCollection = new PathSegmentCollection();
        psCollection.Add(_bs);
        _pFigure = new PathFigure();
        _pFigure.Segments = psCollection;
        _pFigure.StartPoint = new Point(0, 0);
        PathFigureCollection pfCollection = new PathFigureCollection();
        pfCollection.Add(_pFigure);
        PathGeometry pathGeometry = new PathGeometry();
        pathGeometry.Figures = pfCollection;
        _path.Data = pathGeometry;
    }
    double bottomOfCurveX = x / 2;
    double bottomOfCurveY = y + (x * 1.25);
    _bs.Point3 = new Point(x, y);
    if (_sb == null)
    {
        _paPoint2 = new PointAnimation();
        _paPoint2.From = _bs.Point2;
        _paPoint2.To = new Point(bottomOfCurveX, bottomOfCurveY);
        _paPoint2.Duration = new Duration(TimeSpan.FromMilliseconds(1000));
        _eEase = new ElasticEase();
        _paPoint2.EasingFunction = _eEase;
        _sb = new Storyboard();
        Storyboard.SetTarget(_paPoint2, _path);
        Storyboard.SetTargetProperty(_paPoint2, new PropertyPath("Data.Figures[0].Segments[0].Point2"));
        _sb.Children.Add(_paPoint2);
        _sb.Begin(this);
    }
    _paPoint2.From = _bs.Point2;
    _paPoint2.To = new Point(bottomOfCurveX, bottomOfCurveY);
    _sb.Begin(this);
}
If you want true dynamic motion (i.e., when you "shake" the mouse pointer you can create waves that travel along the cord), you will need to use finite element techniques. However, if you are ok with static behavior, you can simply use Bezier curves.
First I'll briefly describe the finite element approach, then go into more detail on the static approach.
Dynamic approach
Divide your "cord" into a large number (1000 or so) "elements", each with a position and velocity Vector. Use the CompositionTarget.Rendering event to compute each element position as follows:
Compute the pull on each element along the "cord" from adjacent elements, which is proportional to the distance between elements. Assume the cord itself is massless.
Compute the net force vector on each "element" which consists of the pull from each adjacent element along the cord, plus the constant force of gravity.
Use a mass constant to convert the force vector to accelaration, and update the position and velocity using the equations of motion.
Draw the line using a StreamGeometry build with a BeginFigure followed by a PolyLineTo. With so many points there is little reason to do the extra computations to create a cubic bezier approximation.
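A minimal sketch of that update loop (element count, stiffness, mass, gravity, and the fixed time step are all illustrative assumptions):
// Each element carries a position and a velocity; each frame, apply the
// spring pull from both neighbors plus gravity, then integrate.
Point[] pos = new Point[1000];
Vector[] vel = new Vector[1000];
const double Stiffness = 900.0; // pull per unit distance between neighbors
const double Mass = 0.01;       // per-element mass
static readonly Vector Gravity = new Vector(0, 400);

// Hooked up once: CompositionTarget.Rendering += OnRendering;
void OnRendering(object sender, EventArgs e)
{
    const double dt = 1.0 / 60.0;             // fixed step for simplicity
    for (int i = 1; i < pos.Length - 1; i++)  // endpoints stay pinned
    {
        Vector force = Stiffness * ((pos[i - 1] - pos[i]) + (pos[i + 1] - pos[i]))
                     + Mass * Gravity;
        vel[i] += (force / Mass) * dt; // F = ma, then integrate velocity
        pos[i] += vel[i] * dt;         // integrate position
    }
    // Then rebuild a StreamGeometry from pos[] with BeginFigure + PolyLineTo.
}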
Static approach
Divide your cord into perhaps 30 segments, each a cubic bezier approximation to the catenary y = a*cosh(x/a). Your end control points should lie on the catenary curve, the control lines should be tangent to it, and the control line lengths should be set based on the second derivative of the catenary.
In this case you will probably also want to render a StreamGeometry, using BeginFigure and PolyBezierTo to build it.
I would implement this as a custom Shape subclass "Catenary", similar to Rectangle and Ellipse. In that case, all you have to override is the DefiningGeometry property. For efficiency I would also override CacheDefiningGeometry, GetDefiningGeometryBounds, and GetNaturalSize.
You would first decide how to parameterize your catenary, then add DependencyProperties for all your parameters. Make sure you set the AffectsMeasure and AffectsRender flags in your FrameworkPropertyMetadata.
One possible parameterization would be XOffset, YOffset, Length. Another might be XOffset, YOffset, SagRelativeToWidth. It would depend on what would be easiest to bind to.
Once your DependencyProperties are defined, implement your DefiningGeometry property to compute the cubic bezier control points, construct the StreamGeometry, and return it.
If you do this, you can drop a Catenary control anywhere and get a catenary curve.
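A skeleton of that Shape subclass might look like this (the Sag parameterization and default value are illustrative; the bezier math is elided):
using System.Windows;
using System.Windows.Media;
using System.Windows.Shapes;

public class Catenary : Shape
{
    public static readonly DependencyProperty SagProperty =
        DependencyProperty.Register("Sag", typeof(double), typeof(Catenary),
            new FrameworkPropertyMetadata(50.0,
                FrameworkPropertyMetadataOptions.AffectsMeasure |
                FrameworkPropertyMetadataOptions.AffectsRender));

    public double Sag
    {
        get { return (double)GetValue(SagProperty); }
        set { SetValue(SagProperty, value); }
    }

    protected override Geometry DefiningGeometry
    {
        get
        {
            var geometry = new StreamGeometry();
            using (StreamGeometryContext ctx = geometry.Open())
            {
                ctx.BeginFigure(new Point(0, 0), false, false);
                // Compute cubic bezier control points approximating
                // y = a*cosh(x/a) here, then:
                // ctx.PolyBezierTo(controlPoints, true, true);
            }
            geometry.Freeze();
            return geometry;
        }
    }
}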
Use bezier curve segments in a path.
http://www.c-sharpcorner.com/UploadFile/dbeniwal321/WPFBezier01302009015211AM/WPFBezier.aspx
IMHO 'hanging' (physically simulated) cables are a case of over-doing it: favouring looks over usability.
Are you sure you're not just cluttering the user experience?
In a node/connection-based UI I find clear connections (like in Quartz Composer: http://ellington.tvu.ac.uk/ma/wp-content/uploads/2006/05/images/Quartz%20Composer_screenshot_011.png ) way more important than eye-candy like swinging cables that head in a different direction (down, due to gravity) than where the actual connection point is, and that meanwhile eat up CPU cycles for a simulation that could be more useful elsewhere.
Just my $0.02