Ways to dynamically render a real-world 3D environment in Unity3D - C#

Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, and my app will then have to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location.
How would you suggest I achieve this in Unity3D? What methodology would you use to achieve this?
NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many steps to achieve this. I just don't know all the technologies out there and which would best suit my needs.
For example:
Suggested methodology 1:
1. Prompt the user to specify GPS coordinates.
2. Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location (I'm not sure Google Earth provides that capability; does it?).
3. Unzip the .kmz so I have the .dae file.
4. Convert that file to a .3ds file using some third-party converter (does such a converter exist?).
5. Import the .3ds into Unity3D at runtime as a plane (is this possible?).
Suggested methodology 2:
1. Prompt the user to specify GPS coordinates.
2. Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location (I'm not sure Google Earth provides that capability; does it?).
3. Unzip the .kmz so I have the .dae file.
4. Parse the .dae file using my own C# parser that I will write (do you think it's possible to write a .dae parser that can parse the .dae into an array of Vector3 describing the height map of that location?).
5. Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane?).
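On the mesh-versus-plane point in methodology 2, a procedural Mesh is the usual approach in Unity. Below is a minimal sketch, assuming you have already obtained an N x N grid of height samples (`heights`) from your parsed data; the class and method names are illustrative, not part of any existing API:

```csharp
using UnityEngine;

// Sketch: build a terrain mesh at runtime from a grid of sampled heights.
// Attach to a GameObject that has a MeshFilter and MeshRenderer.
public class RuntimeTerrain : MonoBehaviour
{
    public void BuildMesh(float[,] heights, float cellSize)
    {
        int n = heights.GetLength(0);
        var vertices = new Vector3[n * n];
        var triangles = new int[(n - 1) * (n - 1) * 6];

        // One vertex per height sample.
        for (int z = 0; z < n; z++)
            for (int x = 0; x < n; x++)
                vertices[z * n + x] = new Vector3(x * cellSize, heights[z, x], z * cellSize);

        // Two triangles per grid cell, wound clockwise so they face up.
        int t = 0;
        for (int z = 0; z < n - 1; z++)
            for (int x = 0; x < n - 1; x++)
            {
                int i = z * n + x;
                triangles[t++] = i;     triangles[t++] = i + n; triangles[t++] = i + 1;
                triangles[t++] = i + 1; triangles[t++] = i + n; triangles[t++] = i + n + 1;
            }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

A 500 m x 500 m patch sampled every 5 metres would be a 101 x 101 grid, which is well within the default 65,535-vertex limit of a single Unity mesh.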
Can you think of any other ways I could render a real world 3d environment in Unity3D?

No available service has a detailed 3D map of an area given GPS coordinates. Google has a very simple height map that they draw and then map their aerial pictures onto, but even if you could get programmatic access to it, it would look terrible at street level. If you're okay with a horrid level of detail (say you're going to be viewing this from high up), it might be feasible, but not as any kind of first-person street-level experience.

Related

How to extract geometric positions from 2d .dwg using Forge AutoDesk APIs?

Using the Model Derivative API I am able to get the geometric properties of a 3D DWG file, but for a 2D DWG I am facing an issue (Unrecoverable exit code from extractor: -1073741831) when extracting geometric properties.
I also understand that the Model Derivative API doesn't provide support for extracting 2D geometries.
Is there any other way to extract the geometry of a 2D file using a programming API (C#)?
EDIT
I have added the ObjectTree JSON file and the POST URL of "Extract Geometry for Selected Objects into an OBJ File" to the following GitHub link.
https://github.com/Jothipandiyan-jp1/Autodesk
From the error, it seems that your 2D drawing is somehow broken, or was not uploaded correctly. Or is it from a vertical product, like Plant 3D or Map 3D?
The Model Derivative API should extract the 2D view; you can try the file on the A360 Viewer or via the API in this sample (C# source).
EDIT
From the comments, it seems you are trying to extract the .obj from a single objectId in the 2D DWG. This should not trigger errors, but it may return an empty file, as the OBJ format is intended for 3D shapes. Can you update your question with the full POST job used in your code? Make sure the modelGuid and objectIds parameters are correct.
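For reference, the Model Derivative OBJ extraction job is posted to the `/modelderivative/v2/designdata/job` endpoint with a payload shaped roughly like the following; the urn, modelGuid, and objectIds values here are placeholders, not values from this question:

```json
{
  "input": {
    "urn": "<base64-encoded source file URN>"
  },
  "output": {
    "formats": [
      {
        "type": "obj",
        "advanced": {
          "modelGuid": "<guid of the target view>",
          "objectIds": [ 1234 ]
        }
      }
    ]
  }
}
```

If modelGuid points at a 2D view, the job may complete but yield an empty OBJ for the reason given above.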

Object Detection from image with training data using auto learning algorithm

I am trying to make an application which will do 2 tasks:
1. Get some object from an image, e.g. a rectangle which is actually a traffic light.
2. Find this selected object in the training data; the training data is actually a bulk of images.
I have searched and found the OpenCV library, which can be used, but how do I start? How can I detect a specific shape in an image and find it in the training data with a matching probability?
Also, is there any algorithm which is self-learning?
You would need to have stored the coordinates of the rectangle in a CSV file (for example) along with the path to the image. You would then load the image along with the coordinates to get the traffic light as a subimage. This, I think, answers question 1.
You would then feed these subimages, which would be your positive dataset along with some negative data, which could be random portions of the image that don't overlap with the traffic light, into a machine learning algorithm like a HOG SVM. There are some nice tutorials in Python here: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
This, I think, would lead you to solving question 2.
Does that answer your question? Or have I misinterpreted it?
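Since this thread is otherwise C#, here is a minimal sketch of the CSV-plus-subimage step using Emgu CV. The file name `annotations.csv` and the column order (path, x, y, width, height) are assumptions for illustration:

```csharp
using System;
using System.Drawing;
using System.IO;
using Emgu.CV;
using Emgu.CV.Structure;

// Sketch: cut each annotated traffic light out of its source image
// to build the positive training set described above.
foreach (string line in File.ReadLines("annotations.csv"))
{
    string[] f = line.Split(',');
    var img = new Image<Bgr, byte>(f[0]);
    img.ROI = new Rectangle(int.Parse(f[1]), int.Parse(f[2]),
                            int.Parse(f[3]), int.Parse(f[4]));

    // Copy() respects the ROI, so this is just the annotated box.
    Image<Bgr, byte> positive = img.Copy();
    positive.Save(Path.Combine("positives", Path.GetFileName(f[0])));
}
```

Negative samples can be produced the same way by picking random rectangles that don't overlap the annotated box.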

How to blur part of an Image using EmguCV?

We have a large number of images taken from a car for a project. To satisfy privacy norms, we need to detect faces & license plates and then blur those areas. I came to know of the Emgucv project, and the tutorial given at http://www.emgu.com/wiki/index.php/License_Plate_Recognition_in_CSharp has been very useful for detecting license plates.
Is there a way of blurring this region using Emgu itself?
I don't believe that there is something built-in like what you are looking for.
What you will have to do, like with openCV, is to blur a whole copy of your source image and then copy back the license plate part to the original image.
You can do this using the SmoothBlur method first and then the Copy method that accepts a mask as its second argument.
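An ROI-based variant of the same blur-and-copy-back idea is often simpler than building a mask. This is a sketch, assuming `plateRect` comes from your license-plate detector and `frame.jpg` stands in for one of your images:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

Image<Bgr, byte> src = new Image<Bgr, byte>("frame.jpg");
Rectangle plateRect = new Rectangle(100, 200, 160, 40); // from the detector

src.ROI = plateRect;                            // restrict work to the plate
Image<Bgr, byte> blurred = src.SmoothBlur(25, 25);
blurred.CopyTo(src);                            // paste blur back into the ROI
src.ROI = Rectangle.Empty;                      // restore the full image
src.Save("frame_blurred.jpg");
```

Because only the ROI is blurred, this also avoids smoothing the entire image when the plates are small.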

Getting info of ir field returned to kinect

I've been messing around with the Beta Kinect SDK and was wondering if there is any way to directly access the IR speckle field info that is returned to the Kinect. I want to try to map a person's body (not just the skeleton) using triangulation of points on the body. I may be going about this the wrong way, but I was thinking that since the Kinect is already processing info about thousands of dots on its target, I could use a subset of these as my vertex set rather than generating the points myself.
Does anyone know if this is possible? I would prefer to use c# but would be willing to dust off my c++ skills (and learn a few more) if necessary.
There is no way, currently, to access the raw IR data using the SDK. And trust me when I say you probably don't want to/need to. The IR pattern thrown by the light built into the Kinect is not a simple uniformly spaced pattern. A Google search for "Kinect IR Pattern" will show you that the pattern isn't even perfectly rectangular.
What you should use is the depth map computed by the Kinect. It takes the input from the thousands of IR dots and converts it into an easy-to-use (albeit somewhat noisy) 640 x 480 (or 320 x 240, or 80 x 60) image. This should suffice for mapping a person's body, especially since there are methods in the SDK to translate between points on a skeleton and the corresponding points in the depth map.
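For concreteness, reading the depth map looks roughly like this against the later v1 SDK (the Beta namespace and types differ slightly, so treat this as a sketch of the shape of the code rather than a drop-in sample):

```csharp
using Microsoft.Kinect;

// Sketch: subscribe to the Kinect depth stream instead of raw IR.
KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.DepthFrameReady += (s, e) =>
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;
        short[] depth = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(depth);
        // Shift out the player-index bits to get depth in millimetres;
        // these per-pixel depths can serve as your vertex set.
        int firstPixelMm = depth[0] >> DepthImageFrame.PlayerIndexBitmaskWidth;
    }
};
sensor.Start();
```

Each depth pixel can then be back-projected to a 3D point, which replaces the triangulation over raw speckles you were considering.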

Trying to work with tiled maps

I know I am probably being dense here but I need some help.
I am working on a program that handles mapping of an area. I need the map to be georeferenced so I can gather the MGRS coordinates for any point on the map. I already have a library I wrote that does this, working with images I import one by one using their upper-left and bottom-right coordinates; I then simply calculate the number of pixels and their offset from the top-left and bottom-right of the image.
What I am trying to do is create a draggable map like Google Maps or any number of other mapping systems.
Here's the kicker. The system is running on a closed network with no access to Google or any other online resource for the maps.
I have 500 GB worth of map data that I can work with, but the format is something I am not familiar with: an XML file with some georeferencing data, and a truckload of files with a .tileset extension.
I assume I need to create some sort of tile-stitching routine similar to what you would see in a game engine, but I have no experience with such engines.
Can anyone give me some advice, libraries, or directions to start researching how to parse and use these tileset files and get this working?
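The corner-based pixel-offset scheme described above amounts to linear interpolation between the tile's corner coordinates. A minimal sketch of that mapping, assuming an unprojected tile where latitude and longitude vary linearly across the image (a rough approximation that degrades over large areas; all names here are illustrative):

```csharp
// Sketch: map a pixel position in a georeferenced tile to lat/lon.
public struct GeoBounds
{
    public double TopLat, LeftLon, BottomLat, RightLon;
}

public static class GeoRef
{
    public static (double lat, double lon) PixelToGeo(
        GeoBounds b, int px, int py, int widthPx, int heightPx)
    {
        // Interpolate between the corner coordinates; note the operand
        // order keeps the arithmetic in floating point.
        double lon = b.LeftLon + (b.RightLon - b.LeftLon) * px / (widthPx - 1);
        double lat = b.TopLat + (b.BottomLat - b.TopLat) * py / (heightPx - 1);
        return (lat, lon);
    }
}
```

The same per-tile mapping works for a stitched draggable map: each tile keeps its own GeoBounds, and a screen position is first resolved to a tile plus a pixel offset within it before converting to coordinates (and then to MGRS).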
