I am building an AR navigation app and am modifying it to show rest points, so I decided to add a 3D graphics overlay to the Esri map while building the app in Visual Studio. The problem is that I can see a 2D point, but when I try to display a 3D point on a scene it does not show up in the map or scene. I have tried building the app for Windows, Android, and iOS, but found no solution. Adding layers works fine, yet every time I use SimpleMarkerSceneSymbol nothing appears; not even basic points are reflected in the map/scene.
So this is the developer documentation tutorial on how to create a graphic on a scene:
https://developers.arcgis.com/net/scenes-3d/add-graphics-to-a-scene-view/
This is the code I have written, following the tutorial exactly to avoid any mistakes in generating the graphic:
var go = new GraphicsOverlay();
var pierPoint = new MapPoint(xx.xxxx, xx.xxxxxx, SpatialReferences.Wgs84);

// Create a new symbol for the graphic.
var redSphereSymbol = new SimpleMarkerSceneSymbol(SimpleMarkerSceneSymbolStyle.Sphere,
                                                  System.Drawing.Color.Red,
                                                  500, 500, 500,
                                                  SceneSymbolAnchorPosition.Center);

// Create a new graphic.
var santaMonicaPierGraphic = new Graphic(pierPoint, redSphereSymbol);

// Add attribute values (if needed).
santaMonicaPierGraphic.Attributes.Add("Name", "Santa Monica Pier");
santaMonicaPierGraphic.Attributes.Add("type", "pier");

// Add the graphic to the graphics overlay's graphics collection.
go.Graphics.Add(santaMonicaPierGraphic);

// Add the graphics overlay to the geo view's graphics overlay collection.
arSceneView.GraphicsOverlays.Add(go);
arSceneView.Scene = scene;
arSceneView is an instance of the SceneView class, and xx.xxxx / xx.xxxxxx are the latitude/longitude of the reference point.
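For completeness, here is a minimal sketch of the same graphic with an explicit elevation (z value) on the point and an absolute surface placement on the overlay; both are assumptions for testing and are not taken from the tutorial:
// Minimal test sketch (ArcGIS Runtime .NET): same sphere as above, but the point
// gets an explicit z value and the overlay an absolute surface placement, so the
// symbol is not clamped at or hidden below the terrain surface.
var overlay = new GraphicsOverlay();
overlay.SceneProperties.SurfacePlacement = SurfacePlacement.Absolute;

// 100 m is an arbitrary test elevation; pierPoint is the point created above.
var elevatedPoint = new MapPoint(pierPoint.X, pierPoint.Y, 100, SpatialReferences.Wgs84);

var sphere = new SimpleMarkerSceneSymbol(SimpleMarkerSceneSymbolStyle.Sphere,
                                         System.Drawing.Color.Red,
                                         500, 500, 500,
                                         SceneSymbolAnchorPosition.Center);

overlay.Graphics.Add(new Graphic(elevatedPoint, sphere));
arSceneView.GraphicsOverlays.Add(overlay);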
I'm developing a custom control in .NET MAUI. In my case I have to update hundreds of points on each invalidate, so I'm going for native rendering. On Android I render the points to a bitmap and draw that bitmap once, and this performance is fine for me; now the same has to be done on iOS.
I'm new to iOS native APIs, and I tried to achieve the same as above using an image context, as below:
UIGraphics.BeginImageContextWithOptions(image.Size, false, 0);
image.Draw(new CGPoint(0, 0));
// Draw the needed shapes here using the image context.
image = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
Finally, I draw the stored image to the screen, but this doesn't look performance-effective in my case. What I want is to store the existing rendering in one object and then render only the current points on top of it. Please suggest whether this can be achieved in some other way.
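To make the intent clearer, here is a minimal sketch of the incremental approach I have in mind; the DrawPoints helper, its parameters and the point size are hypothetical, and it assumes the Xamarin/MAUI iOS bindings (UIKit, CoreGraphics):
// Hypothetical sketch: keep the previously rendered result in a cached UIImage and,
// on each pass, redraw that single image plus only the new points.
// Assumes: using UIKit; using CoreGraphics; using System.Collections.Generic;
UIImage cachedImage; // everything rendered so far

UIImage DrawPoints(CGSize size, IEnumerable<CGPoint> newPoints)
{
    UIGraphics.BeginImageContextWithOptions(size, false, 0);

    // Redraw the cached result once instead of redrawing every old point.
    cachedImage?.Draw(new CGRect(0, 0, size.Width, size.Height));

    var ctx = UIGraphics.GetCurrentContext();
    ctx.SetFillColor(UIColor.Red.CGColor);
    foreach (var p in newPoints)
        ctx.FillEllipseInRect(new CGRect(p.X - 2, p.Y - 2, 4, 4));

    cachedImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return cachedImage;
}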
I am creating a mobile app using Unity. I would like to place a photo that fills the screen and then place some 3D objects on top of this photo. The photo is created at runtime and is saved to the persistent data folder.
What is the best method to achieve this? The options I see are Raw Image, Image, or Sprite; however, I have been unable to achieve the above goal using any of these component types.
What about making a panel on a canvas, setting the image as its background, and setting the canvas to fit the screen size?
Then place the objects in front of it, manually or as children of said canvas but higher in the hierarchy.
I don't fully understand what you intend to do; just trying to help.
Editing my original answer: you need to use RawImage to be able to easily load images that are not marked as Sprite in the editor.
You load images from a file like this:
Texture2D texture = new Texture2D(1, 1); // the following needs a non-null starting point
var path = System.IO.Path.Combine(Application.streamingAssetsPath, "your_file.jpg");
byte[] bytes = System.IO.File.ReadAllBytes(path);
texture.LoadImage(bytes);
rawImage.texture = texture;
As far as keeping it filling the screen, the AspectRatioFitter component is likely to do the job.
To get 3D objects rendering in front, set your canvas to 'Screen Space - Camera' and point it at your camera. This will behave similarly to World Space in regard to rendering order (it will get sorted with the 3D geometry), but the canvas size will match the camera viewport, so AspectRatioFitter will still be able to do its job.
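To tie these pieces together, here is a minimal sketch of a component that does all of the above; the field names, the file name and the use of Application.persistentDataPath are assumptions based on the question:
// Sketch: load the runtime photo into a RawImage on a Screen Space - Camera canvas,
// keep it filling the screen with an AspectRatioFitter, and let 3D objects render in front.
using UnityEngine;
using UnityEngine.UI;

public class PhotoBackground : MonoBehaviour
{
    public RawImage rawImage;        // RawImage sitting on the canvas
    public AspectRatioFitter fitter; // on the same GameObject as the RawImage
    public Canvas canvas;

    void Start()
    {
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = Camera.main;
        canvas.planeDistance = 100f; // push the canvas behind the 3D objects

        // File name and folder are assumptions; the photo is written there at runtime.
        var path = System.IO.Path.Combine(Application.persistentDataPath, "your_file.jpg");
        var texture = new Texture2D(1, 1);
        texture.LoadImage(System.IO.File.ReadAllBytes(path));

        rawImage.texture = texture;
        fitter.aspectMode = AspectRatioFitter.AspectMode.EnvelopeParent;
        fitter.aspectRatio = (float)texture.width / texture.height;
    }
}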
I was wondering how PowerPoint slides can be automatically annotated using digital ink in .NET (using C#). Currently I'm doing this with free-form shapes, which is straightforward but has some issues. When selecting Office.MsoEditingType.msoEditingAuto as the editing type, the free-forms are smooth, but constructing them and then converting them into a shape (when they consist of more than a couple of points) takes a very long time; the following call alone can take about 5 seconds:
PowerPoint.Shape Shape = builder.ConvertToShape();
When using Office.MsoEditingType.msoEditingCorner, the shape is generated much more quickly, but the resulting shapes are jagged (no surprise).
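For context, the free-form construction looks roughly like this; slide and points stand in for my actual add-in objects (points being an array of PointF), so treat it as a simplified sketch rather than the exact code:
// Simplified sketch of the current free-form approach.
// 'slide' is the PowerPoint.Slide being annotated, 'points' a PointF[] of ink samples.
var builder = slide.Shapes.BuildFreeform(
    Office.MsoEditingType.msoEditingAuto, points[0].X, points[0].Y);

for (int i = 1; i < points.Length; i++)
{
    // msoEditingAuto gives smooth curves, but makes ConvertToShape() very slow
    // once the stroke contains more than a handful of points.
    builder.AddNodes(Office.MsoSegmentType.msoSegmentCurve,
                     Office.MsoEditingType.msoEditingAuto,
                     points[i].X, points[i].Y);
}

PowerPoint.Shape Shape = builder.ConvertToShape(); // the ~5 s call mentioned above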
I found the following code sample for doing the same using digital ink:
DrawingAttributes drawingAttributes1 = new DrawingAttributes();
drawingAttributes1.Color = Colors.Green;
StylusPoint stylusPoint1 = new StylusPoint(100, 100);
StylusPoint stylusPoint2 = new StylusPoint(100, 200);
StylusPoint stylusPoint3 = new StylusPoint(200, 200);
StylusPoint stylusPoint4 = new StylusPoint(200, 100);
StylusPoint stylusPoint5 = new StylusPoint(100, 100);
StylusPointCollection points = new StylusPointCollection(
    new StylusPoint[] { stylusPoint1, stylusPoint2, stylusPoint3,
                        stylusPoint4, stylusPoint5 });
Stroke newStroke = new Stroke(points, drawingAttributes1);
InkPresenter inkPres = new InkPresenter();
inkPres.Strokes.Add(newStroke);
However, not being a PowerPoint add-in expert (hardly even a beginner, actually), I don't know how to attach the InkPresenter to the current slide. Ideally, a new InkPresenter would be created and kept per slide (so I don't have to worry about re-drawing on each slide navigation).
I understand it's possible to create an ink canvas using the designer and then draw on that, but would that canvas be attached to the entire presentation or just to the current slide? And would it allow users to draw on the canvas (which is not the goal; the drawing would be done automatically)?
I spent quite some time looking for relevant code samples, but none of them seemed to do what I intend. As mentioned, I'm not planning to let users draw on the slide; the slide should be annotated automatically.
Thanks,
William
I have come across a problem that seems like a bug to me. I'm making an app that visualizes atoms in a crystal. The problem is that a transparent object is drawn and hides the objects behind it.
Here is the code:
foreach (var atom in filteredAtoms)
{
    var color = new Color();
    color.ScR = (float)atom.AluminiumProbability;
    //color.G = 50;
    color.ScB = (float)atom.MagnesiumProbability;
    // Setting the alpha channel (Opacity doesn't work either).
    color.ScA = (float)(1.0 - atom.VacancyProbability); //(float)1.0;//
    DiffuseMaterial material = new DiffuseMaterial(new SolidColorBrush(color));
    //material.Brush.Opacity = 1.0 - atom.VacancyProbability;
    // Make a visual for the atom and add it to the builder.
    atomBuldier.Add(new Point3D(atom.X * Atom.ToAngstrom, atom.Y * Atom.ToAngstrom, atom.Z * Atom.ToAngstrom), material);
}
When I change the material to e.g. EmissiveMaterial, there are no "cut" atoms. I googled and found this post, but the advice given there doesn't apply to this case.
Is this a bug with a 2D brush applied to 3D?
The full source code can be found at http://alloysvisualisation.codeplex.com; the DLL and a test file are at http://alloysvisualisation.codeplex.com/releases (beta link).
Steps to reproduce:
Launch the app
Click the Open file button
Open the test file (xyzT2000.chmc)
Click the Mask button
Check 11 (a series of atoms that are almost transparent)
Click Redraw
For the transparent atoms, you must disable z-buffer writing. I'm unfamiliar with WPF, but you can probably set this in an Appearance or Material object or similar.
The problem occurs because of the following:
When a transparent atom is rendered, it writes its depth to the z-buffer. Non-transparent atoms rendered afterwards, which should still appear, never make it into the frame buffer, because their z-values fail the z-test against the values the transparent atom already wrote to the z-buffer.
In short, the graphics card treats the transparent atom as opaque and hides anything behind it.
Edit: upon looking into WPF, it seems pretty high-level, without direct control over z-buffer behavior.
According to this link, emissive and specular materials do not write to the z-buffer, so using those is your solution when working with transparent objects.
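A rough sketch of how that suggestion could be applied to the loop from the question; filteredAtoms, atomBuldier and the probability properties come from the question, while the 0.05 threshold is an arbitrary assumption:
// Sketch: use an EmissiveMaterial for (nearly) transparent atoms, which according to
// the link above does not write to the z-buffer, and keep DiffuseMaterial for opaque ones.
foreach (var atom in filteredAtoms)
{
    var color = new Color();
    color.ScR = (float)atom.AluminiumProbability;
    color.ScB = (float)atom.MagnesiumProbability;
    color.ScA = (float)(1.0 - atom.VacancyProbability);

    var brush = new SolidColorBrush(color);
    Material material = atom.VacancyProbability > 0.05   // arbitrary transparency threshold
        ? (Material)new EmissiveMaterial(brush)
        : new DiffuseMaterial(brush);

    atomBuldier.Add(new Point3D(atom.X * Atom.ToAngstrom,
                                atom.Y * Atom.ToAngstrom,
                                atom.Z * Atom.ToAngstrom), material);
}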
I have an XNA project that uses Windows.Forms to create the GUI. Our GUI consists of a left panel and a right panel, each with an image laid over it (let's call them the panel images). Those images have buttons with images over them. The panel images don't completely cover the panels. What we want to do is make the panels invisible or transparent so you only see the panel images. In the picture below I circled what I want to be transparent/invisible. On the upper part it already looks transparent, but that is only because it blends in with the background of the XNA scene. On the bottom, where the panel is over the ground, you can see how the panel extends further than the panel image. Does anyone know how I can make those parts invisible/transparent?
We've already tried setting the panel color to Color.Transparent and to magenta (XNA's transparent color), but those attempts haven't worked. Any input/advice is welcome and much appreciated.
Here is the code that sets up the panel:
this.pnlLeftSide.BackgroundImage = global::Referenceator_UI.Resources.LeftBar;
this.pnlLeftSide.BackgroundImageLayout = System.Windows.Forms.ImageLayout.None;
this.pnlLeftSide.Controls.Add(this.btnScreenShot);
this.pnlLeftSide.Controls.Add(this.btnScale);
this.pnlLeftSide.Controls.Add(this.btnMove);
this.pnlLeftSide.Controls.Add(this.btnRotate);
this.pnlLeftSide.Controls.Add(this.btnSelect);
this.pnlLeftSide.Location = new System.Drawing.Point(0, 0);
this.pnlLeftSide.Name = "pnlLeftSide";
this.pnlLeftSide.Size = new System.Drawing.Size(197, Screen.PrimaryScreen.WorkingArea.Height);
this.pnlLeftSide.Dock = DockStyle.Left;
this.pnlLeftSide.BackColor = controlColor; // this is the part we want invisible/transparent
-Thank you stackoverflow community
Try setting the Region property of your panels. You can create the necessary Region objects manually (by enumerating the lines that describe the visible polygon) or use a method which converts an image with a transparency color key into a Region (easily googled; see https://stackoverflow.com/questions/886968/how-do-i-convert-an-images-transparency-into-a-region for example).
Since the geometry of your panels does not seem too complex, you can create the Region manually in the following way:
using (var gp = new System.Drawing.Drawing2D.GraphicsPath())
{
    // Here goes a series of AddLine() calls tracing the outline of the visible area, e.g.:
    // gp.AddLine(0, 0, leftPanel.Width, 0);
    // ...
    gp.CloseFigure();
    return new Region(gp);
}
Note that you'll get sharp edges with this method (even if it works). Consider rendering all that GUI using XNA.
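And a sketch of the other option mentioned above, building the Region from the panel image's color-keyed pixels and clipping the panel to it; the RegionFromBitmap helper and the magenta color key are assumptions, while LeftBar and pnlLeftSide come from the question:
// Build a Region that covers only the non-color-keyed pixels of the panel image,
// then clip the panel to it. GetPixel is slow, but this only runs once at startup.
// Assumes: using System.Drawing; using System.Drawing.Drawing2D;
static Region RegionFromBitmap(Bitmap bmp, Color colorKey)
{
    using (var path = new GraphicsPath())
    {
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                if (bmp.GetPixel(x, y).ToArgb() != colorKey.ToArgb())
                    path.AddRectangle(new Rectangle(x, y, 1, 1));

        return new Region(path);
    }
}

// Usage, assuming the panel image uses magenta as its transparency color key:
// this.pnlLeftSide.Region = RegionFromBitmap(
//     (Bitmap)global::Referenceator_UI.Resources.LeftBar, Color.Magenta);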