Trying to work with tiled maps - C#

I know I am probably being dense here but I need some help.
I am working on a program that maps an area. I need the map to be geo-referenced so I can obtain the MGRS coordinates for any point on it. I already have a library I wrote that does this, working with images I import one by one using their upper-left and bottom-right coordinates; from those I simply calculate each pixel's offset from the top-left and bottom-right corners of the image.
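For context, the corner-based approach described above amounts to a linear interpolation between the two known corners. A minimal sketch, with all names illustrative; the MGRS conversion itself is assumed to live in the existing library and is omitted here:

```csharp
using System;

// Sketch of corner-based geo-referencing: two known corners, linear
// interpolation in between. Assumes north-up imagery with no rotation.
class GeoRefImage
{
    public double TopLat, LeftLon, BottomLat, RightLon;
    public int WidthPx, HeightPx;

    // Linearly interpolate a pixel position to latitude/longitude.
    public (double Lat, double Lon) PixelToLatLon(int x, int y)
    {
        double lon = LeftLon + (RightLon - LeftLon) * x / (WidthPx - 1);
        double lat = TopLat + (BottomLat - TopLat) * y / (HeightPx - 1);
        return (lat, lon);
    }
}
```

The same arithmetic, inverted, maps a latitude/longitude back to a pixel, which is what a draggable view needs when centring on a coordinate.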
What I am trying to do is create a draggable map like Google Maps or any number of other mapping systems.
Here's the kicker. The system is running on a closed network with no access to Google or any other online resource for the maps.
I have 500 GB of map data that I can work with, but the format is something I am not familiar with: an XML file with some georeferencing data, and a truckload of files with a .tileset extension.
I assume I need to create some sort of tile stitching routine similar to what you would see in a game engine, but I have no experience with such engines.
Can anyone give me some advice, libraries, or directions for research so I can parse these tileset files and get this function going?
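Whatever the .tileset format turns out to be, the core of a tile-stitching routine is just index arithmetic: given the scroll offset in pixels, work out which tiles intersect the viewport and where to draw the first one. A rough sketch (tile size and all names are made up; decoding the actual tile files is a separate problem):

```csharp
using System;

static class TileMath
{
    // Given a non-negative scroll offset in pixels, a viewport size, and a
    // tile size, compute which tile indices are visible and the screen
    // position of the first (top-left) visible tile.
    public static (int FirstCol, int FirstRow, int Cols, int Rows, int DrawX, int DrawY)
        VisibleTiles(int offsetX, int offsetY, int viewW, int viewH, int tileSize)
    {
        int firstCol = offsetX / tileSize;
        int firstRow = offsetY / tileSize;
        int drawX = -(offsetX % tileSize);   // first tile is partly off-screen
        int drawY = -(offsetY % tileSize);
        // Round up so partially visible tiles at the right/bottom are included.
        int cols = (viewW - drawX + tileSize - 1) / tileSize;
        int rows = (viewH - drawY + tileSize - 1) / tileSize;
        return (firstCol, firstRow, cols, rows, drawX, drawY);
    }
}
```

On each drag event you then draw only `Cols × Rows` tiles starting at tile `(FirstCol, FirstRow)` at screen position `(DrawX, DrawY)`; everything else stays on disk, which is what makes the 500 GB dataset manageable.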

Related

Tesseract OCR C#: Training the network for unknown font

So I am using Tesseract with C# to read English text and it works like a charm. I use pre-trained data from the Tesseract repo: https://github.com/tesseract-ocr/tessdata
So far, so good. However, I fail to understand how to solve the following situation: I have an image with a maximum of three numbers on it:
I also followed this tutorial in order to train my own data, but I failed to understand what exactly I was doing mid-way: https://pretius.com/how-to-prepare-training-files-for-tesseract-ocr-and-improve-characters-recognition/
In this tutorial, they take an existing font and train their network accordingly. However, I do not know what this font is. I tried to figure it out myself but was overwhelmed by the huge amount of information about Tesseract and have no idea where to start.
I was wondering if the following would be possible: I have lots of pictures looking like this (in fact, every possible character in every possible color; the only difference is that the background varies):
etc...
And with those pictures, I want to train the network, without using any existing font files.
My algorithm right now does not use Tesseract; it just screenshots the position of the numbers and compares pixel-wise. I do not like this approach though, as the accuracy is only around 60%.
Thanks for your help in advance

Object Detection from image with training data using auto learning algorithm

I am trying to make an application which will do 2 tasks:

1. Extract some object from an image, e.g. a rectangle which is actually a traffic light.
2. Find this selected object in the training data; the training data is actually a bulk of images.

I have searched and found the OpenCV library, which can be used, but how do I start? How can I detect a specific shape in an image and find it in the training data with a matching probability?
Also, is there an algorithm which is self-learning?
You would need to have stored the coordinates of the rectangle in a CSV file (for example) along with the path to the image. You would then load the image along with the coordinates to get the traffic light as a subimage. This, I think, answers question 1.
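A sketch of that first step, assuming a made-up CSV layout of `path,x,y,width,height` (System.Drawing is used here for brevity; an OpenCV wrapper would work the same way):

```csharp
using System;
using System.Drawing;

static class CropHelper
{
    // Parse one CSV line of the assumed form "path,x,y,w,h"
    // into an image path and a crop rectangle.
    public static (string Path, Rectangle Rect) ParseLine(string line)
    {
        var parts = line.Split(',');
        return (parts[0], new Rectangle(
            int.Parse(parts[1]), int.Parse(parts[2]),
            int.Parse(parts[3]), int.Parse(parts[4])));
    }

    // Load the image and cut out the labelled region as a subimage.
    public static Bitmap CropSubimage(string line)
    {
        var (path, rect) = ParseLine(line);
        using (var src = new Bitmap(path))
            return src.Clone(rect, src.PixelFormat);
    }
}
```

Each cropped `Bitmap` becomes one positive training sample; negatives can be cut the same way from regions that don't overlap the labelled rectangle.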
You would then feed these subimages, which would be your positive dataset, along with some negative data, which could be random portions of the image that don't overlap the traffic light, into a machine learning algorithm like a HOG + SVM. There is a nice tutorial in Python here: http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/
This, I think, would lead you to solving question 2.
Does that answer your question? Or have I misinterpreted it?

Ways to dynamically render a real world 3d environment in Unity3D

Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location; my app then has to generate a 3D plane (it doesn't have to be a plane) of that location. The plane will show a 500-metre-by-500-metre 3D snapshot of that location.
How would you suggest I achieve this in Unity3D? What methodology would you use to achieve this?
NOTE: I understand that this is a very difficult endeavour (dynamically rendering real-world locations in Unity3D), so I expect to perform many steps to achieve it. I just don't know all the technologies out there and which would best fit my needs.
For example:
Suggested methodology 1:

1. Prompt the user to specify GPS coords.
2. Use the Google Earth API over HTTP to programmatically obtain a .kmz file describing that location (not sure if Google Earth provides that capability; does it?)
3. Unzip the .kmz so I have the .dae file.
4. Convert that file to a .3ds file using ??? third-party converter (is there a converter that exists?)
5. Import the .3ds into Unity3D at runtime as a plane (is this possible?)
Suggested methodology 2:

1. Prompt the user to specify GPS coords.
2. Use the Google Earth API over HTTP to programmatically obtain a .kmz file describing that location (not sure if Google Earth provides that capability; does it?)
3. Unzip the .kmz so I have the .dae file.
4. Parse the .dae file using my own C# parser that I will write (do you think it's possible to write a .dae parser that can parse the .dae into an array of Vector3 describing the height map of that location?)
5. Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way? Maybe I am meant to create a mesh instead of a plane?)
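On that last point: yes, the usual approach is to build a mesh rather than use the built-in plane. A hedged sketch of turning a square grid of height samples (however you end up obtaining them) into a Unity mesh; `cellSize` and the sampling scheme are assumptions:

```csharp
using UnityEngine;

// Sketch: build a Mesh from an n-by-n grid of height samples.
// Each grid cell becomes two triangles.
public class HeightmapMesh : MonoBehaviour
{
    public Mesh Build(float[,] heights, float cellSize)
    {
        int n = heights.GetLength(0);
        var verts = new Vector3[n * n];
        for (int z = 0; z < n; z++)
            for (int x = 0; x < n; x++)
                verts[z * n + x] = new Vector3(x * cellSize, heights[z, x], z * cellSize);

        var tris = new int[(n - 1) * (n - 1) * 6];
        int t = 0;
        for (int z = 0; z < n - 1; z++)
            for (int x = 0; x < n - 1; x++)
            {
                int i = z * n + x;
                // Two triangles per cell, wound so the surface faces up.
                tris[t++] = i;     tris[t++] = i + n; tris[t++] = i + 1;
                tris[t++] = i + 1; tris[t++] = i + n; tris[t++] = i + n + 1;
            }

        var mesh = new Mesh { vertices = verts, triangles = tris };
        mesh.RecalculateNormals();
        return mesh;
    }
}
```

Assign the result to a GameObject's MeshFilter (with a MeshRenderer alongside). At, say, one sample every 5 metres, your 500 m × 500 m area is a 101 × 101 grid, comfortably under Unity's default 65k-vertex limit per mesh.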
Can you think of any other ways I could render a real world 3d environment in Unity3D?
No available service has a detailed 3D map of an area given GPS coordinates. Google has a super-simple height map that they draw and then map their aerial pictures onto, but even if you could get programmatic access to it, it would look terrible at street level. If you're okay with a horrid level of detail (say you're going to view this from high up), maybe it would be feasible, but not as any kind of first-person, street-level experience.

XNA: Embedding (too) large image as a Map

I am trying to create a map application, something like Google Maps, that shows a portion of a large map and lets you navigate north, south, east, and west, zoom in and out, etc.
I encountered a critical problem at the very beginning: XNA does not allow importing images larger than a maximum size limit, even in HiDef mode, and my map image is much larger than that limit.
I was thinking I could split the map (manually, in Photoshop) into smaller pieces and paste them one by one into the game, so they make up the whole map.
Is there a better way to do that?
Yes. That is a better way of doing it.
If you wanted to get fancy, you could probably do it in a content processor / importer (rather than doing it manually each time the image changed).
This would involve creating a type that contained a collection of your tiles.
You'd then create a new content importer that could take an image file, and split it up into chunks (maybe of configurable size).
It would produce an instance of your newly created type, which you could load at runtime.
Check out the Content Pipeline posts on Shawn Hargreaves' blog.
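Outside the content pipeline, the splitting itself is straightforward. A sketch using System.Drawing at build time (the tile size is an arbitrary choice here; in XNA proper you would do this inside a custom ContentImporter as described on that blog):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

static class TileSplitter
{
    // Split a large bitmap into tileSize-by-tileSize chunks, row by row.
    // Edge tiles are smaller when the image size isn't a multiple of tileSize.
    public static List<Bitmap> SplitIntoTiles(Bitmap source, int tileSize)
    {
        var tiles = new List<Bitmap>();
        for (int y = 0; y < source.Height; y += tileSize)
            for (int x = 0; x < source.Width; x += tileSize)
            {
                int w = Math.Min(tileSize, source.Width - x);
                int h = Math.Min(tileSize, source.Height - y);
                tiles.Add(source.Clone(new Rectangle(x, y, w, h), source.PixelFormat));
            }
        return tiles;
    }
}
```

At runtime you then load each tile as its own Texture2D and draw only the ones intersecting the camera, which is also what keeps you under the texture size limit.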

C# Stitching small pictures into one large one

I have an objective: I need to join, for example, 2 pictures like http://imgur.com/9G0fV and http://imgur.com/69HUg. The result has to be an image like http://imgur.com/SCG1X, not http://imgur.com/LO4fh.
I'll explain in words: I have some images that share a common area; I need to find that area, crop it from one image, and then join them.
Take a look at this article; it explains a possible solution using the AForge.NET image processing library for C#.
What you want to do is read the pixel values into arrays, then find the overlapping area using an algorithm like correlation or min-cut. After finding the coordinates of the overlap, write both images out into a new array, using coordinates relative to the large image, minus the position of the overlap in that source image, plus the position in the destination image.
C# is not a factor in solving this, unless you meant to ask about existing .NET frameworks that can help.
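As a concrete illustration of the correlation idea (a toy version: real stitching would search 2-D offsets and tolerate noise, and all names here are made up), slide one image's right edge over the other's left edge and keep the overlap width with the lowest average squared difference:

```csharp
using System;

static class Overlap
{
    // Find the horizontal overlap (in columns) between two grayscale
    // images of equal height, stored as [row, col] byte arrays, by
    // minimizing the area-normalized sum of squared differences.
    public static int FindOverlapWidth(byte[,] left, byte[,] right)
    {
        int h = left.GetLength(0);
        int wL = left.GetLength(1), wR = right.GetLength(1);
        int best = 1;
        long bestCost = long.MaxValue;
        int maxOv = Math.Min(wL, wR);
        for (int ov = 1; ov <= maxOv; ov++)
        {
            long cost = 0;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < ov; x++)
                {
                    int d = left[y, wL - ov + x] - right[y, x];
                    cost += d * d;
                }
            long norm = cost / ov;  // crude normalization by overlap width
            if (norm < bestCost) { bestCost = norm; best = ov; }
        }
        return best;
    }
}
```

With the overlap width known, the stitched width is `wL + wR - overlap`, and the right image is copied in starting at column `wL - overlap`, which matches the coordinate arithmetic described above.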
I am developing a .NET library called SharpStitch (commercial) which can do the job.
It uses feature-based image alignment for general purpose image stitching.
