Seam carving energy calculation method and seam finding - C#

I am trying to implement a program like intelligent scissors. The difference is that it will not follow the edges, but will tend to pass between two edges.
I found seam carving useful for this purpose. What I need to do is calculate the energy of an image and find seams on it. But I couldn't work out how to implement it from the paper, and couldn't find an implementation either. Can anybody recommend an easier source that I can understand and implement, or an implementation which I can try to see whether it will work for my purpose or not?

The best implementation of CAIR I've found is here. The link includes a pretty simple and straightforward explanation of how the algorithm works.

Does this article on Wikipedia help at all? http://en.wikipedia.org/wiki/Seam_carving
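For your specific two sub-problems (energy calculation and seam finding), the core algorithm is compact enough to experiment with directly. Below is a rough, untested C# sketch; it assumes you have already read the image into a 2D array of grayscale intensities, uses a simple gradient-magnitude energy, and finds one vertical seam with dynamic programming. The method and array names are just placeholders.

    using System;

    static class SeamCarving
    {
        // Finds the 8-connected top-to-bottom path with the lowest total energy.
        // gray[y, x] is assumed to hold grayscale intensities.
        public static int[] FindVerticalSeam(double[,] gray)
        {
            int h = gray.GetLength(0), w = gray.GetLength(1);

            // 1. Energy map: |horizontal gradient| + |vertical gradient| per pixel.
            double[,] energy = new double[h, w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    double dx = gray[y, Math.Min(x + 1, w - 1)] - gray[y, Math.Max(x - 1, 0)];
                    double dy = gray[Math.Min(y + 1, h - 1), x] - gray[Math.Max(y - 1, 0), x];
                    energy[y, x] = Math.Abs(dx) + Math.Abs(dy);
                }

            // 2. Dynamic programming: cost[y, x] = cheapest path from the top row to (x, y).
            double[,] cost = new double[h, w];
            int[,] cameFrom = new int[h, w]; // x-coordinate in the row above that we came from
            for (int x = 0; x < w; x++) cost[0, x] = energy[0, x];
            for (int y = 1; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    int best = x;
                    if (x > 0 && cost[y - 1, x - 1] < cost[y - 1, best]) best = x - 1;
                    if (x < w - 1 && cost[y - 1, x + 1] < cost[y - 1, best]) best = x + 1;
                    cost[y, x] = energy[y, x] + cost[y - 1, best];
                    cameFrom[y, x] = best;
                }

            // 3. Backtrack from the cheapest pixel in the bottom row.
            int[] seam = new int[h];
            int end = 0;
            for (int x = 1; x < w; x++) if (cost[h - 1, x] < cost[h - 1, end]) end = x;
            seam[h - 1] = end;
            for (int y = h - 1; y > 0; y--) seam[y - 1] = cameFrom[y, seam[y]];
            return seam; // seam[y] = x-coordinate of the seam in row y
        }
    }

For classic seam carving you would remove that seam and repeat; for your scissors-like tool the seam itself is what you want, since by construction it follows the lowest-energy route, i.e. it passes between edges rather than along them.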

Related

Image Recognition Intro: Identifying objects on Google Maps

I know, there are already so many 'how to compare 2 images' questions out there.
I have looked at many and could not find anything relevant to my particular need - I apologize if I have missed something I should not have!
So,
Firstly, I am familiar with C#, completely unfamiliar with image processing and recognition.
Secondly, I am not looking for someone to hand me a complete answer, simply seeking to point myself in the right direction to tackle the job in hand.
Objective:
I am seeking to identify the location of certain physical structures on images taken from google maps.
Unfortunately I cannot tell or show what those structures are, but we can use an example, let's say it's a round swimming pool.
A key point, maybe: I'm looking to find a small object 'within' a large image.
Considering I've never tackled image processing before, I appear to be completely overwhelmed with the options of libraries available, names and terminology of functions and capabilities... and I seem to be spending hours going down dead-end avenues.
A lot of reading seems to be comparing 1 image with another, rather than an image 'within' an image.
So far AForge & OpenCV are the obvious names that have come up a lot... but I really can't work out which of them will do this specific job.
Simply, could someone be kind enough to point me in the right direction to get started?
I'm really trying to narrow reading down to subject matter that is relevant to my case.
Basic Principles
Libraries with the required capability.
Any guidance much appreciated.
Many Thanks
Simon
If you want to detect shapes, for example, you can use OpenCV.
Unfortunately OpenCV is written in C++ and you would have to write your own wrapper.
Instead you can use EmguCV. Try to look at this: link
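To make the 'image within an image' part concrete: the most basic technique is template matching, where you slide a small reference image (e.g. a cropped pool) over the large map tile and score every position. The sketch below is deliberately library-free C#, using grayscale byte arrays and a sum-of-absolute-differences score (both my own assumptions); EmguCV/OpenCV offer the same idea, much faster, plus feature- and shape-based detectors that cope with rotation and scale.

    using System;

    static class TemplateMatcher
    {
        // Naive template matching: returns (via out params) the top-left position in
        // 'image' where 'template' matches best, using sum of absolute differences.
        public static long FindTemplate(byte[,] image, byte[,] template,
                                        out int bestX, out int bestY)
        {
            int ih = image.GetLength(0), iw = image.GetLength(1);
            int th = template.GetLength(0), tw = template.GetLength(1);

            bestX = 0;
            bestY = 0;
            long bestScore = long.MaxValue;

            for (int y = 0; y <= ih - th; y++)
                for (int x = 0; x <= iw - tw; x++)
                {
                    long score = 0;
                    for (int ty = 0; ty < th && score < bestScore; ty++)
                        for (int tx = 0; tx < tw; tx++)
                            score += Math.Abs(image[y + ty, x + tx] - template[ty, tx]);

                    if (score < bestScore)
                    {
                        bestScore = score;
                        bestX = x;
                        bestY = y;
                    }
                }
            return bestScore; // lower = better match
        }
    }

This is slow and assumes the structure always looks roughly the same, but it is a quick way to check whether your target is distinctive enough before investing in a full OpenCV/EmguCV pipeline.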

How to feed in data, shown as pictorial info, using a struct in C#?

I want to restart with data structures (and AI + I want to clear all my misconceptions too. ;P)
For now I want to know how I would put the given pictorial info into an algorithm using a C# structure. Image processing is not required here. I just need to feed the data in.
Please also point out if this question isn't clear, so I can improve it. :|
Say Arad is a city in Romania from which I have to go to another city, Bucharest.
This map also has info on how far all connecting cities are from any given city.
How would I use this info in a program to start with any searching or sorting algorithm?
Any pointers will be helpful. Say if this can be done using anything other than a struct - something like a node or something. I don't know.
Please consider that I want to learn things, so I'm using C# for ease of use, not for its built-in searching and sorting functions. Later, to confirm my results, I might use them.
The way you typically solve this problem is to create a node class and an edge class. Each node has a set of edges that have "lengths", and each edge connects two nodes. You then write a shortest-path algorithm that determines the least-total-length set of edges that connects two nodes.
For a brief tutorial on how to do that, see my series of articles on the A* algorithm:
http://blogs.msdn.com/b/ericlippert/archive/tags/astar/
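To make that concrete, a bare-bones version of the node/edge structure, loaded with a couple of cities from the Romania map, could look like the sketch below (the class names, and the illustrative distances, are mine, not taken from the articles):

    using System.Collections.Generic;

    class Node
    {
        public string Name;
        public List<Edge> Edges = new List<Edge>();

        public Node(string name) { Name = name; }

        // Creates an edge of the given length and registers it with both endpoints.
        public void ConnectTo(Node other, int length)
        {
            Edge edge = new Edge { A = this, B = other, Length = length };
            Edges.Add(edge);
            other.Edges.Add(edge);
        }
    }

    class Edge
    {
        public Node A, B;
        public int Length;

        // Given one endpoint, returns the node at the other end of this edge.
        public Node OtherEnd(Node node) { return node == A ? B : A; }
    }

    // Feeding the map data in:
    // var arad = new Node("Arad");
    // var sibiu = new Node("Sibiu");
    // var timisoara = new Node("Timisoara");
    // arad.ConnectTo(sibiu, 140);
    // arad.ConnectTo(timisoara, 118);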
Although it's not exactly what you're looking for, Eric Lippert's series on graph colouring is an excellent step-by-step example of designing data structures and implementing algorithms (efficiently!) in C#. It has helped me a lot; I highly recommend reading it. Once you've worked your way through that, you will know much more about C# and you will understand some of the specific design tradeoffs that you may encounter based on your specific problem, including what data structure to use for a particular problem.
If you just want to look at raw algorithms, the shortest path problem has many algorithms defined for it over the years. I would recommend implementing the common Dijkstra's algorithm first. The Wikipedia article has pseudocode; with what you get out of Eric Lippert's series, you should be in good shape to develop an implementation of this. If you still want more step-by-step guidance, try a search for "Dijkstra's algorithm in C#".
Hope that helps!
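As a rough illustration of that Dijkstra suggestion (my own sketch, not code from either series), here is a plain version over the Node/Edge classes shown above; it linearly scans for the nearest unvisited node instead of using a priority queue, which is perfectly adequate for a map-sized graph:

    using System.Collections.Generic;
    using System.Linq;

    static class ShortestPaths
    {
        // Returns the length of the shortest path from start to goal, or -1 if unreachable.
        // Assumes allNodes contains every node in the graph.
        public static int Dijkstra(Node start, Node goal, IEnumerable<Node> allNodes)
        {
            Dictionary<Node, int> distance = allNodes.ToDictionary(n => n, n => int.MaxValue);
            HashSet<Node> visited = new HashSet<Node>();
            distance[start] = 0;

            while (true)
            {
                // Pick the unvisited node with the smallest known distance so far.
                Node current = null;
                foreach (Node n in distance.Keys)
                    if (!visited.Contains(n) && distance[n] != int.MaxValue &&
                        (current == null || distance[n] < distance[current]))
                        current = n;

                if (current == null) return -1;          // nothing reachable is left
                if (current == goal) return distance[current];
                visited.Add(current);

                // Relax every edge leaving the current node.
                foreach (Edge edge in current.Edges)
                {
                    Node neighbour = edge.OtherEnd(current);
                    int candidate = distance[current] + edge.Length;
                    if (candidate < distance[neighbour])
                        distance[neighbour] = candidate;
                }
            }
        }
    }

Once this works, swapping the linear scan for a priority queue, or the plain distance for distance plus a heuristic, gets you to A*, which is exactly where the articles above pick up.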

Learning about Hidden Features

Today I found out about an interface I'd never heard of before: IGrouping
IEnumerable<IGrouping<YourCategory, YourDataItem>>
I am fortunate to have access to some of the best programming books available, but seldom do I come across these kinds of gems in those books. Blogs and podcasts work, but that approach is somewhat scattershot. Is there a better way to learn these things, or do I need to sift through the entire MSDN library to discover them?
Eric Lippert's blog. The real guts of C# - why there are some limitations which might seem arbitrary at first sight, how design decisions are made, etc.
Alternatively, for more variety, look at the Visual C# Developer Center - there's a whole range of blogs and articles there.
Oh, and read the C# spec. No, I mean it - some bits can be hard to wade through (I'm looking at you, generic type inference!) but there's some very interesting stuff in there.
The best place to start is Jon Skeet's C# Coding blog: http://msmvps.com/blogs/jon_skeet/
He regularly covers stuff you won't see anywhere else.
How about the Hidden Features series of questions?
Hidden Features of C#
Hidden Features of ASP.NET
And many more...
I personally like the way of discovering hidden features on my own while solving a specific problem. In the end, a hidden feature that you never needed to get something done is of questionable value. It just adds clutter to the brain.
The way to do it is to use the MSDN library to look things up. Then take a little time to look around what you found.
That's especially important with the pure API documentation. For instance, I just browsed to http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.aspx (note how that URL is formed). When I look in the Contents pane on the left, I see everything from XmlDocument (and XmlDocumentFragment) all the way down to XmlReader. In the middle are some things I rarely or never use, like XmlNamespaceScope and XmlNodeOrder.
From time to time, spend a little time on "abstract knowledge". Sometimes, it's good to look up from the trees to learn your way around the forest. You never know when you'll need something you've learned to get you out of the woods.
For the people who don't know IGrouping:
http://msdn.microsoft.com/en-us/library/bb344977.aspx
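A small, self-contained example of where IGrouping shows up in practice (the data here is made up): any call to GroupBy hands back an IEnumerable<IGrouping<TKey, TElement>>, and each group is itself a sequence you can iterate:

    using System;
    using System.Linq;

    class GroupingDemo
    {
        static void Main()
        {
            string[] words = { "apple", "avocado", "banana", "blueberry", "cherry" };

            // GroupBy returns IEnumerable<IGrouping<char, string>>:
            // each IGrouping<char, string> has a Key plus the items sharing that key.
            var byFirstLetter = words.GroupBy(w => w[0]);

            foreach (var group in byFirstLetter)
                Console.WriteLine("{0}: {1}", group.Key, string.Join(", ", group.ToArray()));

            // Output:
            // a: apple, avocado
            // b: banana, blueberry
            // c: cherry
        }
    }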
I often read useful stuff on the Visual Studio start page and then start clicking around to other keywords/areas. Not to promote Stack Overflow too much, but you'll find some hidden gems here as well, simply by looking at how other people write code.
For example:
Hidden Features of C#?

artificial intelligence - Creative Writing

I am trying to find information (and hopefully C# source code) about creating a basic AI tool that can understand English words, grammar and context.
The idea is to train the AI on as many written documents as possible and then, based on those documents, have the AI create its own creative writing in proper English that makes sense to a human.
While the idea is simple, I do realise that the hurdles are huge; any starting points or good resources will be appreciated.
A basic AI tool that you can use to do something like this is a Markov Chain. It's actually not too tricky to write!
See: http://pscode.com/vb/scripts/ShowCode.asp?txtCodeId=2031&lngWId=10
If that's not enough, you might be able to store WordNet synsets in your Markov chain instead of just words. This gives you some sense of the meaning of the words.
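To show how little code the basic idea needs, here is a rough order-1, word-level Markov chain in C# (each next word is chosen by looking only at the current word; real generators usually key on two- or three-word prefixes, and all names here are placeholders):

    using System;
    using System.Collections.Generic;

    class MarkovChain
    {
        // For each word seen in the training text, the list of words observed after it.
        // Duplicates are kept on purpose so frequent successors are picked more often.
        private readonly Dictionary<string, List<string>> transitions =
            new Dictionary<string, List<string>>();
        private readonly Random random = new Random();

        public void Train(string text)
        {
            string[] words = text.Split(new[] { ' ', '\r', '\n', '\t' },
                                        StringSplitOptions.RemoveEmptyEntries);
            for (int i = 0; i < words.Length - 1; i++)
            {
                List<string> followers;
                if (!transitions.TryGetValue(words[i], out followers))
                    transitions[words[i]] = followers = new List<string>();
                followers.Add(words[i + 1]);
            }
        }

        public string Generate(string startWord, int maxWords)
        {
            List<string> output = new List<string> { startWord };
            string current = startWord;
            for (int i = 1; i < maxWords; i++)
            {
                List<string> followers;
                if (!transitions.TryGetValue(current, out followers) || followers.Count == 0)
                    break; // dead end: nothing has ever followed this word
                current = followers[random.Next(followers.Count)];
                output.Add(current);
            }
            return string.Join(" ", output.ToArray());
        }
    }

Train it on a pile of documents and Generate will produce text that is locally plausible but globally meaningless, which is exactly the gap the other answers describe: grammar, coherence and 'making sense to a human' need real NLP on top of this.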
To be able to recompose a document you are going to have to have a way to filter through the bad results.
Which means:
You are going to have to write a program that can evaluate whether the output is valid (grammatically and syntactically is the best you can do reliably) (this would be NLP)
You would need lots of training data and test data
You would need to watch out for overtraining (take a look at ROC curves)
Instead of writing a tool you could:
Manually score the output (it will take a long time to properly train the algorithm)
For this, using Amazon Mechanical Turk might be a good idea
The irony of this: the computer would have a difficult time "creatively" composing something new. All of its output will be based on its previous experiences [training data]
Some good references and reading at this Natural Language article.
As others said, a Markov chain seems to be most suitable for such a task. A nice description of implementing a Markov chain can be found in Kernighan & Pike, The Practice of Programming, section 3.1. A nice description of text generation is also present in Programming Pearls.
One thing, though not quite what you need, would be a Markov chain of words. Here's a link I found by a quick search: http://blog.figmentengine.com/2008/10/markov-chain-code.html, but you can find much more information by searching for it.
Take a look at http://www.nltk.org/ (Natural Language Toolkit), lots of powerful tools there. They use Python (not C#) but Python is easy enough to pick up. Much easier to pick up than the breadth and depth of natural language processing, at least.
I agree that you will have trouble creating something creative. You could possibly also use a keyword spinner on certain words. You might also want to implement a stop-word filter to remove anything colloquial.

An effective algorithm for buffering a polyline to create a polygon?

I need to write some code that will buffer a line to create a polygon as shown below.
http://www.sli.unimelb.edu.au/gisweb/BuffersModule/Buff_line.htm
From following the steps outlined, I can create polygon shapes around simple lines that do not cross themselves or have overly tight curves, but as the lines I'm trying to buffer are squiggly, swirly hurricane tracks, it's really not good enough.
I know there's a function in SQL Server 2008 that can do this, but I'm afraid that's currently a no go.
Can anyone point me in the direction of a more complete algorithm I can follow, or any background info that could help me figure this out?
Although this is called buffering in GIS, apparently the mathematicians who work on algorithms call it the Minkowski sum. Googling found this page by algorithm expert Steven Skiena that links to several algorithm implementations and some books. Hope this helps!
One of the algorithm implementations it links to right now (March '09) is CGAL, an open source C++ library.
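If you would rather not implement the Minkowski sum yourself, another practical route in C# (my suggestion, not from the answer above) is NetTopologySuite, the .NET port of JTS, which exposes buffering directly; the sketch below shows the rough shape of the call, though class names may differ slightly between library versions:

    using NetTopologySuite.Geometries;

    static class TrackBuffer
    {
        // Buffers a polyline (e.g. a hurricane track) by 'distance', producing a polygon.
        // The library resolves the tight curves and self-intersections that break the
        // simple offset-each-segment approach.
        public static Geometry BufferTrack(Coordinate[] trackPoints, double distance)
        {
            GeometryFactory factory = new GeometryFactory();
            LineString track = factory.CreateLineString(trackPoints);
            return track.Buffer(distance);
        }
    }

One caveat: the buffer distance is in the same units as the coordinates, so for lat/long hurricane tracks you would normally project to a metric coordinate system first.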
