Display point cloud continuously in C# with Intel RealSense

This may be a continuation of my previous question about displaying a ply file with the Helix Toolkit in C#. The problem with that solution is that the display is not continuous, and writing a ply file slows the program down a lot.
My code for making the point cloud looks like:
// CopyVertices is extensible, any of these will do:
var vertices = new float[points.Count * 3];
// var vertices = new Intel.RealSense.Math.Vertex[points.Count];
// var vertices = new UnityEngine.Vector3[points.Count];
// var vertices = new System.Numerics.Vector3[points.Count]; // SIMD
// var vertices = new GlmSharp.vec3[points.Count];
// var vertices = new byte[points.Count * 3 * sizeof(float)];
points.CopyVertices(vertices);
And the ply file is made with the line:
points.ExportToPLY("pointcloud.ply", colorFrame);
The helix toolkit is used like this:
Model3DGroup model1 = import.Load("pointcloud.ply");
model.Content = model1;
The rest of the code follows the C# wrapper of librealsense:
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/csharp
Does anyone have an idea how to make this point cloud display continuous?

Are you using HelixToolkit.Wpf or HelixToolkit.SharpDX.Wpf?
Try the HelixToolkit.SharpDX version if your point cloud is big.
Also try to avoid exporting and importing while doing continuous updates. You can convert your point cloud directly into a HelixToolkit-supported points format and update the point model.
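A minimal sketch of that direct-update approach, assuming HelixToolkit.Wpf with a PointsVisual3D (here called pointsVisual, a hypothetical name) hosted in a HelixViewport3D, and the vertices float array from the question:

```csharp
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Sketch (HelixToolkit.Wpf assumed): rebuild the Point3DCollection from the
// raw vertex array on every frame instead of round-tripping through a .ply file.
void UpdatePointCloud(PointsVisual3D pointsVisual, float[] vertices)
{
    var positions = new Point3DCollection(vertices.Length / 3);
    for (int i = 0; i + 2 < vertices.Length; i += 3)
        positions.Add(new Point3D(vertices[i], vertices[i + 1], vertices[i + 2]));
    positions.Freeze(); // frozen collections render faster and skip change events
    pointsVisual.Points = positions;
}
```

Call this from the frame callback, marshalled to the UI thread (e.g. with Dispatcher.Invoke), since RealSense frames arrive on a worker thread.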

Related

Is it possible to create a new Block from a VectorView of a 3D model?

Assume I have a 3D model imported from a step file. I have
Design design1 to work with a 3D imported model.
Drawing drawing where I create my 2D VectorView topView
Design design2 where I work on my actual design
I would like to create a Block from this topView to use in Design design2, so that changing the model in design1 and/or creating another VectorView on drawing does not affect design2. The current workaround is to save the topView as a 2D CAD file and import it back.
My code to read the 3D step file and place it to design1
var rf = new ReadSTEP(@"C:\Sample3DModel.stp");
rf.DoWork();
rf.AddToScene(design1);
My code to create a vector view viewType.Top
drawing.Sheets.Clear();
//Empty sheet
var sheet1 = new Sheet(linearUnitsType.Millimeters, 100, 100, "Sheet 1");
var topView = new VectorView(80, 80, viewType.Top, sheet1.Scale, "Top");
topView.HiddenSegments = false;
topView.Selectable = false;
sheet1.Entities.Add(topView);
drawing.Sheets.Add(sheet1);
drawing.Rebuild(design1);
drawing.ActiveSheet = sheet1;
drawing.Invalidate();
I tried to collect entities from topView with var entities = topView.GetEntities(new BlockKeyedCollection()); but got the error: 'A Block with name Top does not exist.'
Please try design1.CopyTo(design2) and you'll get an exact (deep) copy of what you have on the design1 control.
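A short sketch of that suggestion, using the CopyTo call named in the answer (the Invalidate call to force a redraw is an assumption, mirroring the drawing.Invalidate() in the question):

```csharp
// Deep-copies everything from the design1 control into design2, so later
// edits to design1 (or new VectorViews on drawing) no longer affect design2.
design1.CopyTo(design2);
design2.Invalidate(); // refresh the target viewport
```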

What is the best way to load a model in Tensorflow.NET

I saved a tensorflow.keras model in Python and need to use it in C# / TensorFlow.NET 0.15.
var net = tf.keras.models.load_model(net_name) does not seem to be implemented
var session = tf.Session.LoadFromSavedModel(net_name);
var graph = session.graph;
seems to work, but then I have a session/graph, not a Keras model.
I would ideally like to call something like net.predict(x), how can I get there from a graph/session ?
The best way is to convert your model to the ONNX format. ONNX is an open-source format that is supposed to run on any framework (TensorFlow, Torch, ...).
In python, add the package onnx and keras2onnx:
import onnx
import keras2onnx
import onnxruntime
net_onnx = keras2onnx.convert_keras(net_keras)
onnx.save_model(net_onnx, onnx_name)
Then in C# .NET, install the NuGet packages Microsoft.ML and Microsoft.ML.OnnxRuntime.
var context = new MLContext();
var session = new InferenceSession(filename);
float[] sample; // fill with your input values
int[] dims = new int[] { 1, sample_size };
var tensor = new DenseTensor<float>(sample, dims);
var xs = new List<NamedOnnxValue>()
{
NamedOnnxValue.CreateFromTensor<float>("dense_input", tensor),
};
using (var results = session.Run(xs))
{
// manipulate the results
}
Note that you need to explicitly name the first (input) layer of your network when passing in the sample; it is best to give it a clear name in Keras. You can check the name in Python by running net_keras.summary().
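To get something close to net.predict(x), the output tensor can be read back inside the using block; a sketch assuming the Microsoft.ML.OnnxRuntime API and the "dense_input" name from above:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Sketch: wraps session.Run so it behaves like Keras' net.predict(x).
static float[] Predict(InferenceSession session, float[] sample, int sampleSize)
{
    var tensor = new DenseTensor<float>(sample, new[] { 1, sampleSize });
    var inputs = new List<NamedOnnxValue>
    {
        NamedOnnxValue.CreateFromTensor("dense_input", tensor),
    };
    using (var results = session.Run(inputs))
    {
        // First() is the model's first declared output; copy it out before
        // the result collection is disposed.
        return results.First().AsEnumerable<float>().ToArray();
    }
}
```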

How to implement a pipeline?

The idea of this module is to be able to graphically represent data inside of a pipeline.
For example, data can look like this:
1,4
This would be a function y=f(x), where:
4=f(1)
I need to use this line
TODO: WritePointToHTML(rawData);
The basic idea of this is to generate HTML file, with code which will draw required line.
I tried to draw a line using HTML, but I am not able to understand how to represent it in a pipeline:
var canvas = document.getElementById('Canvas');
var context = canvas.getContext('2d');
I'm assuming that the original poster is looking to draw lines, not use c# pipelines.
var can;
var ctx;

function init() {
    can = document.getElementById("Canvas");
    ctx = can.getContext("2d");
    ctx.canvas.width = window.innerWidth;
    ctx.canvas.height = window.innerHeight;
    DrawSlope(1, 20);
}

function DrawSlope(x, y) {
    var firstPoint = [x, 0];
    var secondPoint = [1, x + y];
    WritePointToHTML(firstPoint[0], firstPoint[1], secondPoint[0], secondPoint[1]);
}

function WritePointToHTML(x, y, xTwo, yTwo) {
    ctx.beginPath();
    ctx.moveTo(x, y);
    ctx.lineTo(xTwo, yTwo);
    ctx.stroke();
}
// added "draw slope" to factor in slope formula.
https://codepen.io/hollyeplyler/pen/gqYyZx
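Since the TODO in the question is a C# method that emits the HTML, here is a hedged sketch (all names hypothetical) that writes a self-contained page drawing one line per "x,y" row of raw data, using the same moveTo/lineTo pattern as the script above:

```csharp
using System.IO;
using System.Text;

// Sketch: WritePointsToHtml turns rows like "1,4" into drawLine(...) calls
// embedded in a generated HTML page with a <canvas>.
static void WritePointsToHtml(string[] rawData, string path)
{
    var calls = new StringBuilder();
    foreach (var row in rawData)
    {
        var p = row.Split(',');
        calls.AppendLine($"drawLine(0, 0, {p[0]}, {p[1]});");
    }
    string html =
        "<canvas id=\"Canvas\" width=\"400\" height=\"400\"></canvas>\n" +
        "<script>\n" +
        "var ctx = document.getElementById('Canvas').getContext('2d');\n" +
        "function drawLine(x1, y1, x2, y2) {\n" +
        "  ctx.beginPath(); ctx.moveTo(x1, y1); ctx.lineTo(x2, y2); ctx.stroke();\n" +
        "}\n" +
        calls +
        "</script>\n";
    File.WriteAllText(path, html);
}
```

Open the generated file in a browser to see the plotted lines.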

How can I add a customised Carto map marker via the Carto Mobile SDK (UWP)?

I'm implementing a Universal Windows Platform (UWP) app, and I'm using the Carto Mobile SDK (UWP). However, I don't know how to add a .png image as a map marker programmatically. Here is my code:
/// Preparation - create layer and datasource
// projection will be needed later
Projection projection = map.Options.BaseProjection;
// Initialize an local data source - a bucket for your objects created in code
LocalVectorDataSource datasource = new LocalVectorDataSource(projection);
// Initialize a vector layer with the previous data source
VectorLayer layer = new VectorLayer(datasource);
// Add layer to map
map.Layers.Add(layer);
/// Now we start adding objects
// Create marker style
MarkerStyleBuilder builder = new MarkerStyleBuilder();
builder.Size = 20;
BinaryData iconBytes = AssetUtils.LoadAsset("Z:/FolderName/ProjectName/Assets/markers_mdpi/mapmarker.png");
byte[] bytearray = iconBytes.GetData();
int size = Marshal.SizeOf(bytearray[0]) * bytearray.Length;
IntPtr pnt = Marshal.AllocHGlobal(size);
builder.Bitmap = new Bitmap(pnt, true);
MarkerStyle style = null;
style = builder.BuildStyle();
// Create a marker with the style we defined previously and add it to the source
Marker marker = new Marker(position, style);
datasource.Add(marker);
The official Carto Mobile SDK technical documentation didn't help at all. When I installed the official SDK.UWP via NuGet, the library didn't contain any of the relevant functions mentioned in the document.
Can anyone help me solve this problem? Otherwise there is no point in developing this UWP app further. Many thanks.
Okay, I just solved this problem, and the Carto support team replied to me as well. The official technical documentation is out of date, which misleads people working with the Carto map (especially the UWP SDK) for the first time.
The solution is:
/// Preparation - create layer and datasource
// projection will be needed later
Projection projection = map.Options.BaseProjection;
// Initialize an local data source - a bucket for your objects created in code
LocalVectorDataSource datasource = new LocalVectorDataSource(projection);
// Initialize a vector layer with the previous data source
VectorLayer layer = new VectorLayer(datasource);
// Add layer to map
map.Layers.Add(layer);
/// Now we start adding objects
// Create marker style
MarkerStyleBuilder builder = new MarkerStyleBuilder();
builder.Size = 30;
// here we generate a filePath string, then pass it into AssetUtils.LoadAsset
string filePath = System.IO.Path.Combine("SubfolderName", "imagefileName.png");
var data = AssetUtils.LoadAsset(filePath);
var bitmap = Bitmap.CreateFromCompressed(data);
if (bitmap != null)
{
builder.Bitmap = bitmap;
bitmap.Dispose();
}
MarkerStyle style = builder.BuildStyle();
// Create a marker with the style we defined previously and add it to the source
Marker marker = new Marker(position, style);
datasource.Add(marker);
Please make sure all the files/sources come from the Assets folder.

SharpDX 2.5 in DirectX11 in WPF

I'm trying to implement DirectX 11 using SharpDX 2.5 into WPF.
Sadly, http://directx4wpf.codeplex.com/ and http://sharpdxwpf.codeplex.com/ don't work properly with SharpDX 2.5. I was also not able to port the WPFHost DX10 sample to DX11, and the full code package of that example is down: http://www.indiedev.de/wiki/DirectX_in_WPF_integrieren
Can someone suggest another way of implementing?
SharpDX supports WPF via SharpDXElement.
Take a look in the Samples repository at the Toolkit.sln - all projects that have WPF in their name use SharpDXElement as rendering surface:
MiniCube.WPF - demonstrates basic SharpDX-WPF integration;
MiniCube.SwitchContext.WPF - demonstrates the basic scenario where the lifetime of the Game instance differs from the lifetime of the SharpDXElement (in other words, when there is a need to switch game rendering to another surface).
MiniCube.SwitchContext.WPF.MVVM - same as above, but more 'MVVM-way'.
Update: SharpDX.Toolkit has been deprecated and is no longer maintained. It was moved to a separate repository. The Toolkit samples were deleted; however, I changed the link to a changeset where they are still present.
You can still use http://sharpdxwpf.codeplex.com/.
In order to work properly with SharpDX 2.5.0 you need to do a few modifications.
1) In the project Sharp.WPF, in the file DXUtils.cs, in the method
Direct3D11.Buffer CreateBuffer<T>(this Direct3D11.Device device, T[] range)
add this line
stream.Position = 0;
just after
stream.WriteRange(range);
So fixed method looks like this:
public static Direct3D11.Buffer CreateBuffer<T>(this Direct3D11.Device device, T[] range)
where T : struct
{
int sizeInBytes = Marshal.SizeOf(typeof(T));
using (var stream = new DataStream(range.Length * sizeInBytes, true, true))
{
stream.WriteRange(range);
stream.Position = 0; // fix
return new Direct3D11.Buffer(device, stream, new Direct3D11.BufferDescription
{
BindFlags = Direct3D11.BindFlags.VertexBuffer,
SizeInBytes = (int)stream.Length,
CpuAccessFlags = Direct3D11.CpuAccessFlags.None,
OptionFlags = Direct3D11.ResourceOptionFlags.None,
StructureByteStride = 0,
Usage = Direct3D11.ResourceUsage.Default,
});
}
}
2) And in the class D3D11 in the file D3D11.cs,
rename this
m_device.ImmediateContext.Rasterizer.SetViewports(new Viewport(0, 0, w, h, 0.0f, 1.0f));
into this
m_device.ImmediateContext.Rasterizer.SetViewport(new Viewport(0, 0, w, h, 0.0f, 1.0f));
i.e. SetViewports into SetViewport.
And it should work now.
