I want to restart with data structures (and AI; I want to clear up all my misconceptions too. ;P )
For now I want to know how I would put the given pictorial information into an algorithm using a C# structure. Image processing is not required here; I just need to feed the data in.
Please feel free to edit this question if it is not clear. :|
Say Arad is a city in Romania from which I have to travel to another city, Bucharest.
The map also shows how far each connecting city is from any given city.
How would I use this information in a program as the starting point for a searching or sorting algorithm?
Any pointer will be helpful. Also, can this be done with something other than a struct? Something like a node, perhaps; I don't know.
Please bear in mind that I want to learn: I'm using C# for its ease of use, not for its built-in searching and sorting functions. I might use those later to verify my results.
The way you typically solve this problem is to create a node class and an edge class. Each node has a set of edges that have "lengths", and each edge connects two nodes. You then write a shortest-path algorithm that determines the least-total-length set of edges that connects two nodes.
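A minimal sketch of that node/edge representation might look like the following. The class and member names here are my own invention (not from any particular article), and the distances in the usage example are from the usual AI-textbook Romania road map:

```csharp
using System.Collections.Generic;

// A city on the map.
class Node
{
    public string Name;
    public List<Edge> Edges = new List<Edge>();
}

// A road between two cities, with a distance ("length").
class Edge
{
    public Node A;
    public Node B;
    public int Length;
}

class Map
{
    readonly Dictionary<string, Node> nodes = new Dictionary<string, Node>();

    public Node GetOrAdd(string name)
    {
        Node n;
        if (!nodes.TryGetValue(name, out n))
            nodes[name] = n = new Node { Name = name };
        return n;
    }

    // Roads are two-way, so a single edge serves both directions.
    public void AddRoad(string from, string to, int length)
    {
        var a = GetOrAdd(from);
        var b = GetOrAdd(to);
        var e = new Edge { A = a, B = b, Length = length };
        a.Edges.Add(e);
        b.Edges.Add(e);
    }
}
```

Feeding the map in is then just a series of calls such as `map.AddRoad("Arad", "Zerind", 75);`, `map.AddRoad("Arad", "Sibiu", 140);`, `map.AddRoad("Arad", "Timisoara", 118);`, one per road on the picture.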
For a brief tutorial on how to do that, see my series of articles on the A* algorithm:
http://blogs.msdn.com/b/ericlippert/archive/tags/astar/
Although it's not exactly what you're looking for, Eric Lippert's series on graph colouring is an excellent step-by-step example of designing data structures and implementing algorithms (efficiently!) in C#. It has helped me a lot; I highly recommend reading it. Once you've worked your way through that, you will know much more about C# and you will understand some of the specific design tradeoffs that you may encounter based on your specific problem, including what data structure to use for a particular problem.
If you just want to look at raw algorithms, many algorithms have been defined for the shortest-path problem over the years. I would recommend implementing the classic Dijkstra's algorithm first. The Wikipedia article has pseudocode; with what you get out of Eric Lippert's series, you should be in good shape to develop an implementation of this. If you still want more step-by-step guidance, try a search for "Dijkstra's algorithm in C#".
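To give a feel for the shape of it, here is a rough sketch of Dijkstra's algorithm over a simple adjacency-list graph. This is the naive version that scans for the closest unvisited city each round rather than using a priority queue, to keep it short:

```csharp
using System;
using System.Collections.Generic;

static class Dijkstra
{
    // graph[city] = list of (neighbour, distance) pairs.
    // Every city, including neighbours, must appear as a key.
    public static Dictionary<string, int> ShortestDistances(
        Dictionary<string, List<KeyValuePair<string, int>>> graph, string start)
    {
        var dist = new Dictionary<string, int>();
        var visited = new HashSet<string>();
        foreach (var city in graph.Keys)
            dist[city] = int.MaxValue;
        dist[start] = 0;

        while (visited.Count < graph.Count)
        {
            // Pick the unvisited city with the smallest tentative distance.
            string current = null;
            foreach (var kv in dist)
                if (!visited.Contains(kv.Key) &&
                    (current == null || kv.Value < dist[current]))
                    current = kv.Key;
            if (current == null || dist[current] == int.MaxValue)
                break; // remaining cities are unreachable

            visited.Add(current);

            // Relax each edge leaving the current city.
            foreach (var edge in graph[current])
                if (dist[current] + edge.Value < dist[edge.Key])
                    dist[edge.Key] = dist[current] + edge.Value;
        }
        return dist;
    }
}
```

To recover the actual route (not just its length), you would also record, during relaxation, which city each improvement came from, then walk that chain backwards from Bucharest to Arad.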
Hope that helps!
Related
What algorithm can I use to produce a weighted cartogram such as the one below?
I can generate a shapefile plot using code in R, .NET libraries, and PostGIS. However, I can't find the search terms that would lead me to an implementation of the algorithm used to produce these warped shapefile plots. I'm not necessarily looking to plot only world maps, so it must be able to work with an arbitrary shapefile.
So, as mentioned in the comments on the OP, these are called area cartograms. The neatest little implementation I know of is cartogram.js, which relies on the magical D3 library. If that page ever goes down, you should be able to find a similar one by Googling "D3 area cartograms", and if that doesn't get you anywhere, the original paper on the topic is Dougenik 1985.
Andy's D3.js answer is excellent; however, just for completeness, there is an implementation here, Cartogram algorithm, which comes from a Python plugin for the excellent open-source GIS application QGIS. The original paper and algorithm are cited in the comments. The full source code directory for the QGIS plugin is: https://code.google.com/p/ftools-qgis/source/browse/trunk/cartogram/?r=115
I realize that you asked for C#, and there are some QGIS geometry objects in the code, but the TransformGeometry method does illustrate how the algorithm works. QGIS reads shapefiles, and if you ever want to do any other GIS-style processing, QGIS would be a good option.
I know, there are already so many 'how to compare 2 images' questions out there.
I have looked at many of them and could not find anything relevant to my particular need. I apologize if I have missed something I shouldn't have!
So,
Firstly, I am familiar with C# but completely unfamiliar with image processing and recognition.
Secondly, I am not looking for someone to hand me a complete answer; I am simply seeking to point myself in the right direction to tackle the job in hand.
Objective:
I am seeking to identify the location of certain physical structures on images taken from google maps.
Unfortunately I cannot tell or show what those structures are, but we can use an example, let's say it's a round swimming pool.
Key point maybe, I'm looking to find a small object 'within' a large image.
Considering I've never tackled image processing before, I appear to be completely overwhelmed by the options of available libraries, the names and terminology of functions and capabilities... and I seem to be spending hours going down dead-end avenues.
A lot of what I've read seems to be about comparing one image with another, rather than finding an image 'within' an image.
So far AForge and OpenCV are the obvious names that have come up a lot, but I really can't work out which of them will do this specific job.
Simply, could someone be kind enough to point me in the right direction to get started?
I'm really trying to narrow reading down to subject matter that is relevant to my case.
Basic Principles
Libraries with the required capability.
Any guidance much appreciated.
Many Thanks
Simon
If you want to detect shapes, for example, you can use OpenCV.
Unfortunately OpenCV is written in C++, so you would have to write your own wrapper.
Instead you can use EmguCV, a .NET wrapper for OpenCV. Try taking a look at this: link
Consider a program that asks you questions, like "What is the last site you visited?", where the answer would be "stackoverflow". The user is asked this question and gives the answer "stakovervlow" or "overflowstack". I still need the program to count it as a correct answer.
To compare normal strings I would use the StringComparer class, but that wouldn't work in this case. I've searched the internet and found some articles about SOUNDEX and some algorithms that compare every character in the string and calculate a similarity percentage (like the Damerau-Levenshtein distance), but I don't really know which is best.
Does anyone know if there is a class in .NET to accomplish this, or what the best way is to compare the user's answer with the correct answer?
From the docs there is the SpellCheck class. You can also add custom dictionaries for words like "StackOverflow" that are not in the standard dictionary.
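Note that SpellCheck is a WPF feature attached to text-editing controls, so it only applies directly if you have a TextBox around. A rough sketch of the idea (the `.lex` file path here is a made-up example; a `.lex` custom dictionary is just a plain-text file with one word per line):

```csharp
using System;
using System.Windows.Controls;

// Inside a WPF window or user control:
var box = new TextBox();
box.SpellCheck.IsEnabled = true;

// Add a custom dictionary containing e.g. "stackoverflow";
// the path below is hypothetical.
box.SpellCheck.CustomDictionaries.Add(new Uri(@"C:\dictionaries\sites.lex"));
```

If your program is a console app rather than WPF, this class won't help, and an edit-distance approach is probably the better fit.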
What you are trying to do is quite difficult. The easy but tedious way is to create a dictionary, or a table in your database, that lists common misspellings.
The difficult way is to write code that does natural language processing. The two most successful endeavors in this area are Google's semantic search and IBM's Watson supercomputer. I gather you won't be duplicating their methodology anytime soon.
I am trying to implement a program like Intelligent Scissors. The difference is that it will not follow the edges; instead, it will tend to pass between two edges.
I found seam carving useful for this purpose. What I need to do is calculate the energy of an image and find seams in it. But I couldn't work out how to implement it from the paper, and couldn't find an implementation either. Can anybody recommend an easier source that I can understand and implement, or an implementation that I can try, to see whether it will work for my purpose or not?
The best implementation of CAIR I've found is here. The link includes a pretty simple and straightforward explanation of how the algorithm works.
Does this article on Wikipedia help at all? http://en.wikipedia.org/wiki/Seam_carving
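For the energy step specifically: the simplest energy function in the seam-carving paper is just the gradient magnitude, |∂I/∂x| + |∂I/∂y|, computed per pixel. A sketch over a plain 2D brightness array (no image library assumed; how you fill the array from your image is up to you):

```csharp
using System;

static class SeamEnergy
{
    // brightness[y, x] holds pixel intensity (e.g. 0..255).
    // Returns e(x, y) = |dI/dx| + |dI/dy| per pixel.
    public static double[,] Compute(double[,] brightness)
    {
        int h = brightness.GetLength(0), w = brightness.GetLength(1);
        var energy = new double[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                // Central differences, clamped at the image borders.
                double dx = brightness[y, Math.Min(x + 1, w - 1)]
                          - brightness[y, Math.Max(x - 1, 0)];
                double dy = brightness[Math.Min(y + 1, h - 1), x]
                          - brightness[Math.Max(y - 1, 0), x];
                energy[y, x] = Math.Abs(dx) + Math.Abs(dy);
            }
        return energy;
    }
}
```

A vertical seam is then found by dynamic programming over this grid: M(x, y) = e(x, y) + min of the three M values in the row above, with the seam traced back from the smallest M in the bottom row. Since low energy means "far from edges", the minimal seam naturally passes between edges, which sounds like exactly what you want.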
I am trying to find information (and hopefully C# source code) about creating a basic AI tool that can understand English words, grammar, and context.
The idea is to train the AI on as many written documents as possible and then, based on these documents, have the AI produce its own creative writing in proper English that makes sense to a human.
While the idea is simple, I do realise that the hurdles are huge; any starting points or good resources will be appreciated.
A basic AI tool that you can use to do something like this is a Markov Chain. It's actually not too tricky to write!
See: http://pscode.com/vb/scripts/ShowCode.asp?txtCodeId=2031&lngWId=10
If that's not enough, you might be able to store WordNet synsets in your Markov chain instead of just words. This gives you some sense of the meaning of the words.
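A word-level Markov chain really is short enough to sketch in full. Train it on text, then take a random walk through the chain (this is an order-1 chain keyed on single words; real text generators usually key on two- or three-word prefixes for more coherent output):

```csharp
using System;
using System.Collections.Generic;

class MarkovChain
{
    // For each word, every word observed to follow it.
    // Duplicates are kept so frequent successors are picked more often.
    readonly Dictionary<string, List<string>> followers =
        new Dictionary<string, List<string>>();
    readonly Random rng = new Random();

    public void Train(string text)
    {
        var words = text.Split(new[] { ' ', '\n', '\t' },
                               StringSplitOptions.RemoveEmptyEntries);
        for (int i = 0; i < words.Length - 1; i++)
        {
            List<string> list;
            if (!followers.TryGetValue(words[i], out list))
                followers[words[i]] = list = new List<string>();
            list.Add(words[i + 1]);
        }
    }

    public string Generate(string start, int maxWords)
    {
        var output = new List<string> { start };
        var current = start;
        for (int i = 1; i < maxWords; i++)
        {
            List<string> next;
            if (!followers.TryGetValue(current, out next))
                break; // dead end: word never seen mid-sentence
            current = next[rng.Next(next.Count)];
            output.Add(current);
        }
        return string.Join(" ", output);
    }
}
```

Train it on a few books and `Generate("The", 50)` will already produce locally plausible (if globally nonsensical) English, which demonstrates both the power and the limits of the technique.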
To be able to recompose a document you are going to have to have a way to filter out the bad results.
Which means:
You are going to have to write a program that can evaluate whether the output is valid (grammatically and syntactically valid is the best you can do reliably); this would be NLP
You would need lots of training data and test data
You would need to watch out for overtraining (take a look at ROC curves)
Instead of writing a tool you could:
Manually score the output (it will take a long time to properly train the algorithm)
For this, using Amazon Mechanical Turk might be a good idea
The irony of this: the computer would have a difficult time "creatively" composing something new. Everything it produces will be based on its previous experiences [training data]
There are some good references and reading in this Natural Language article.
As others have said, a Markov chain seems to be the most suitable tool for such a task. A nice description of implementing a Markov chain can be found in Kernighan & Pike, The Practice of Programming, section 3.1. A nice description of text generation is also present in Programming Pearls.
One option, though not quite what you need, would be a Markov chain of words. Here's a link I found with a quick search: http://blog.figmentengine.com/2008/10/markov-chain-code.html, but you can find much more information by searching for the term.
Take a look at http://www.nltk.org/ (Natural Language Toolkit), lots of powerful tools there. They use Python (not C#) but Python is easy enough to pick up. Much easier to pick up than the breadth and depth of natural language processing, at least.
I agree that you will have trouble creating something creative. You could also use a keyword spinner on certain words. You might additionally want to implement a stop-word filter to remove anything colloquial.