I have a question. I want to write a chess-like program applying the following rules:
It should have just a king and a queen on one side and the other side should have just a king.
The first side should mate the second side with the lowest number of moves possible.
I want to know your thoughts about how to approach this project. For example, which way of writing the code is easier (object-oriented or structured, ...)? (I have a little knowledge of object-oriented programming.) And can you tell me about writing its algorithm - for example, where should I begin writing the code?
The good news here is that your problem is quite restricted in scope, as you only have three pieces to contend with. You're not really implementing a game here so much as solving a logical puzzle. I'd approach it like this:
Figure out how to represent the three pieces in a simple way. You really don't need a UI here (other than for testing), since you're just trying to solve a puzzle. The simplest way is probably a simple Row,Column position for each of the three pieces.
If you haven't written an object-oriented program before, you'll probably want to stick with a procedural model and simply define variables for the data you'll need to represent. The problem scope is small, so you can get away with this. If you have some OOP experience, you can split up the problem appropriately, though you probably won't need any inheritance relationships.
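If you go the procedural route, the whole game state can be a handful of fields. A minimal sketch (the type and field names here are purely illustrative, not part of the original answer):

// Minimal sketch of the state: each piece is just a row/column pair on an 8x8 board.
struct Square
{
    public int Row;   // 0..7
    public int Col;   // 0..7
    public Square(int row, int col) { Row = row; Col = col; }
}

struct Position
{
    public Square WhiteKing;
    public Square WhiteQueen;
    public Square BlackKing;
    public bool BlackToMove;   // whose turn it is
}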
Write the code for generating possible moves and determine whether a given move makes any sense at all. A legal King move is any move that does not check the King. Most queen moves should be permissible, but you probably also want to exclude moves that would allow the enemy King to take the Queen.
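As a rough idea of the kind of checks involved, here is a sketch covering only the defending king's side of the rules (it assumes the Square/Position sketch above plus using System; the attacking side is analogous):

// Chebyshev distance <= 1, i.e. 'dest' is on or next to 'other'.
static bool IsNextTo(Square dest, Square other)
{
    return Math.Abs(dest.Row - other.Row) <= 1 && Math.Abs(dest.Col - other.Col) <= 1;
}

// Does the white queen attack 'target'? The only possible blocker is the white king.
static bool QueenAttacks(Position p, Square target)
{
    int dr = target.Row - p.WhiteQueen.Row, dc = target.Col - p.WhiteQueen.Col;
    if (dr == 0 && dc == 0) return false;
    if (dr != 0 && dc != 0 && Math.Abs(dr) != Math.Abs(dc)) return false;  // not on a rank, file or diagonal
    int stepR = Math.Sign(dr), stepC = Math.Sign(dc);
    for (var s = new Square(p.WhiteQueen.Row + stepR, p.WhiteQueen.Col + stepC);
         s.Row != target.Row || s.Col != target.Col;
         s = new Square(s.Row + stepR, s.Col + stepC))
    {
        if (s.Row == p.WhiteKing.Row && s.Col == p.WhiteKing.Col) return false;  // line is blocked
    }
    return true;
}

// A black king move to 'dest' is legal if it doesn't step into the white king's zone
// and doesn't land on a square the queen attacks (capturing an undefended queen is fine).
static bool BlackKingMoveIsLegal(Position p, Square dest)
{
    if (IsNextTo(dest, p.WhiteKing)) return false;
    bool capturesQueen = dest.Row == p.WhiteQueen.Row && dest.Col == p.WhiteQueen.Col;
    return capturesQueen || !QueenAttacks(p, dest);
}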
Now you need to determine a strategy for how to put together a sequence of moves that will solve the puzzle. If you need to find the true optimal solution (not merely a good solution), you may need to do a brute-force search. This may be feasible for this problem. You'll probably want to perform a depth-first search (if you don't know what this means, that's your first topic to research), as once you find a possible solution, that limits the depth at which all other solutions must be considered.
If you can get brute force functional and need to make things faster, consider if there are moves you can prove will have no benefit. If so, you can exclude these moves immediately from your search, saving on the number of branches you need to consider. You can also work to optimize your evaluation functions, as a faster evaluation is very beneficial when you are doing billions of them. Finally, you might come up with some heuristics to evaluate which of the branches to try first. The faster you can converge to a 'good' solution, the fewer cases you need to consider to find the optimal solution.
One side note I realized is that the problem is very different if you assume that the enemy King is trying to avoid checkmate. The simple depth-first pruning only works if you are allowed to move the enemy King in the way that best checkmates it. If the enemy King is attempting to avoid checkmate, that complicates the problem, as you have conflicting optimization goals (you want it to occur in as few moves as possible, yet your enemy King wants to postpone as long as possible.) You might be limited to characterizing a range of possibilities (say, 3 moves best case if King is perfectly cooperative, 8 moves best worst-case if King is perfectly evasive.)
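To make the "forced mate" case concrete, a depth-limited search for a mate in n white moves looks roughly like the sketch below. It assumes the enemy king defends as well as it can, and the move-generation and mate-detection helpers (GenerateWhiteMoves, GenerateBlackReplies, IsCheckmate, IsStalemate) are placeholders, not shown:

// Is there a forced mate within 'movesLeft' white moves? (White to move in 'p'.)
// GenerateWhiteMoves, GenerateBlackReplies, IsCheckmate and IsStalemate are assumed helpers, not shown.
static bool CanMateWithin(Position p, int movesLeft)
{
    if (movesLeft == 0) return false;
    foreach (Position afterWhite in GenerateWhiteMoves(p))
    {
        if (IsCheckmate(afterWhite)) return true;           // mate on this move
        if (movesLeft == 1) continue;                        // no time left for a longer line
        if (IsStalemate(afterWhite)) continue;               // draw, not a win
        bool everyReplyStillLoses = true;
        foreach (Position afterBlack in GenerateBlackReplies(afterWhite))
        {
            if (!CanMateWithin(afterBlack, movesLeft - 1)) { everyReplyStillLoses = false; break; }
        }
        if (everyReplyStillLoses) return true;               // this white move forces mate
    }
    return false;
}
// The shortest forced mate is the smallest n for which CanMateWithin(start, n) returns true,
// so you can simply try n = 1, 2, 3, ... (iterative deepening).

The "cooperative king" best case differs only in relaxing the inner loop: instead of requiring every black reply to lose, you require just one, which is why the two numbers can differ so much.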
Take a look at this SO question (Programming a chess AI).
From the answers to that question, I think this C# Chess Game Starter Kit would be a good start, but I would also look at the other articles referenced as well for some interesting history/information.
This is the simplest possible example of an endgame database. There are fewer than 64^3 = 262144 positions, so you can easily store the score of each position. In this case, we can define the score as the number of moves to checkmate for a winning position, or 255 for a drawn position. Here is an outline (a rough code sketch follows it):
1) Set all scores to 255.
2) Look for all checkmate positions, and score them as 0.
3) Set depth = 1.
4) For each drawn position (score = 255), see if a move exists into a won position (more precisely, see if a move exists into a position from which all of the opponent's moves are losing). If so, set its score to depth.
5) If no new position was found in step 4, you're done.
6) Increment depth, and go to step 4.
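A rough translation of the outline into code could look like the sketch below. Like the outline, it glosses over side-to-move and the chess rules themselves: IsCheckmate and WinningMoveExists are assumed helpers (WinningMoveExists would check whether some move leads to a position from which every opposing reply already has a score below the current depth), and the index packing is simply wk*4096 + wq*64 + bk.

using System;

static class Tablebase
{
    const byte Unknown = 255;   // 255 = no forced mate found (drawn), otherwise moves to mate

    // Assumed helpers, not implemented here:
    static bool IsCheckmate(int index) => throw new NotImplementedException();
    static bool WinningMoveExists(int index, byte[] score, byte depth) => throw new NotImplementedException();

    static byte[] BuildTable()
    {
        var score = new byte[64 * 64 * 64];

        for (int i = 0; i < score.Length; i++) score[i] = Unknown;   // step 1
        for (int i = 0; i < score.Length; i++)
            if (IsCheckmate(i)) score[i] = 0;                        // step 2

        for (byte depth = 1; ; depth++)                              // steps 3 and 6
        {
            bool changed = false;
            for (int i = 0; i < score.Length; i++)                   // step 4
            {
                if (score[i] != Unknown) continue;
                if (WinningMoveExists(i, score, depth)) { score[i] = depth; changed = true; }
            }
            if (!changed) break;                                     // step 5
        }
        return score;
    }
}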
Now you have a 250k table that you can save to disk (not that it should take more than a few seconds to generate it from scratch). If space is important, you can reduce this significantly by various tricks. Wikipedia has a nice article on all this -- search for "Endgame tablebase".
A poster here suggests that Stockfish would be a good start, but it is a C++ project, whereas you are asking about C#.
The solution depends on your requirements. If you are interested in "just make it work", you could complete the project without writing more than 200 lines of code. You could embed an open source C# project and ask the engine to report the number of moves to mate. If the open source project supports UCI, the following command will do the job:
go mate x
where x is the number of moves to mate.
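For instance, if the engine you embed runs as a separate executable, you can drive it over standard input/output roughly like this (engine.exe and the FEN position are placeholders; a robust version would also wait for the engine's uciok/readyok replies before sending further commands):

using System;
using System.Diagnostics;

class UciMateQuery
{
    static void Main()
    {
        // Placeholder path to whatever UCI engine executable you embed.
        var engine = new Process
        {
            StartInfo = new ProcessStartInfo("engine.exe")
            {
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                UseShellExecute = false
            }
        };
        engine.Start();

        var stdin = engine.StandardInput;
        stdin.WriteLine("uci");
        stdin.WriteLine("isready");
        // Example KQ vs K position as a FEN string (white: Ke1, Qe2; black: Kd5).
        stdin.WriteLine("position fen 8/8/8/3k4/8/8/4Q3/4K3 w - - 0 1");
        stdin.WriteLine("go mate 10");   // ask for a mate in at most 10 moves
        stdin.Flush();

        string line;
        while ((line = engine.StandardOutput.ReadLine()) != null)
        {
            Console.WriteLine(line);     // "info ... score mate N ..." lines report the distance
            if (line.StartsWith("bestmove")) break;
        }
        stdin.WriteLine("quit");
        engine.WaitForExit();
    }
}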
However, if you need to do the thinking yourself, you will need to choose between an efficient bitboard representation and an object-oriented one. Bitboards are a much faster representation but harder to program, and practically all serious chess engines use them. In your project, representation efficiency is not much of a concern, so you could choose the OO representation.
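To give a feel for what that means, a bitboard packs each piece set into a 64-bit integer, one bit per square; something like this fragment (square numbering and variable names are just illustrative):

// One 64-bit word per piece, with bit (row * 8 + col) set when that square is occupied.
ulong whiteKing   = 1UL << (0 * 8 + 4);   // e1
ulong whiteQueen  = 1UL << (1 * 8 + 4);   // e2
ulong blackKing   = 1UL << (4 * 8 + 3);   // d5
ulong occupied    = whiteKing | whiteQueen | blackKing;
bool e2IsOccupied = (occupied & whiteQueen) != 0;   // set tests and unions are single bitwise ops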
I am implementing full-text search on a single entity, Document, which contains a name and content. The content can be quite big (20+ pages of text). I am wondering how to do it.
Currently I am looking at using Redis and RedisSearch, but I am not sure if it can handle search in big chunks of text. We are talking about a multitenant application with each customer having more than 1000 documents that are quite big.
TL;DR: What should I use to search big chunks of text content?
This space is a bit unclear to me, sorry for the confusion. Will update the question when I have more clarity.
I can't tell you what the right answer is, but I can give you some ideas about how to decide.
Normally if I had documents/content in a DB I'd be inclined to search there - assuming that the search functionality I could implement was (a) functionally effective enough, (b) didn't require code that was super ugly, and (c) wasn't going to kill the database. There's usually a lot of messing around trying to implement the search features and filters that you want to provide to the user - UI components, logic components, and then translating that into how the database & query language actually work.
So, based on what you've said, the key trade-offs are probably:
Functionality / functional fit (creating the features you need, to work in a way that's useful).
Ease of development & maintenance.
Performance - purely on the basis that gathering search results across "documents" is not necessarily the fastest thing you can do with an IT system.
Have you tried doing a simple whiteboard "options analysis" exercise? If not try this:
Get a small number of interested and smart people around a whiteboard. You can do this exercise alone, but bouncing ideas around with others is almost always better.
Agree what the high level options are. In your case you could start with two: one based on MSSQL, the other based on Redis.
Draw up a big table - each option has its own column (starting at column 2).
In Column 1 list out all the important things which will drive your decision. E.g. functional fit, Ease of development & maintenance, performance, cost, etc.
For each driver in column 1, do a score for each option.
How you do it is up to you: you could use a 1-5 point system (optionally you could use planning poker type approach to avoid anchoring) or you could write down a few key notes.
Be ready to note down any questions that come up, important assumptions, etc so they don't get lost.
Sometimes as you work through the exercise the answer becomes obvious. If it's really close you can rely on scores - but that's not ideal. It's more likely that of all the drivers listed some will be more important than others, so don't ignore the significance of those.
I am looking for some kind of intelligent library (I was thinking AI or neural networks) that I can feed a list of historical data, and which will predict the next sequence of outputs.
As an example I would like to feed the library the following figures 1,2,3,4,5
and based on this, it should predict the next sequence is 6,7,8,9,10 etc.
The inputs will be a lot more complex and contain much more information.
This will be used in a C# application.
If you have any recommendations or warnings, that will be great.
Thanks
EDIT
What I am trying to do is use historical sales data to predict what amount a specific client is most likely going to spend in the next period.
I do understand that there are dozens of external factors that can influence a client's purchases, but for now I need to merely base it on the sales history and then plot a graph showing past sales and predicted sales.
If you're looking for a .NET API, then I would recommend you try AForge.NET http://code.google.com/p/aforge/
If you just want to try various machine learning algorithms on a data set that you have at your disposal, then I would recommend that you play around with Weka; it's (relatively) easy to use and it implements a lot of ML/AI algorithms. Run multiple runs with different settings for each algorithm and try as many algorithms as you can. Most of them will have some predictive power and if you combine the right ones, then you might really get something useful.
If I understand your question correctly, you want to approximate and extrapolate an unknown function. In your example, you know the function values
f(0) = 1
f(1) = 2
f(2) = 3
f(3) = 4
f(4) = 5
A good approximation for these points would be f(x) = x+1, and that would yield f(5) = 6... as expected. The problem is, you can't solve this without knowledge about the function you want to extrapolate: Is it linear? Is it a polynomial? Is it smooth? Is it (approximately or exactly) cyclic? What is the range and domain of the function? The more you know about the function you want to extrapolate, the better your predictions will be.
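If you do assume the function is linear, the extrapolation is just a least-squares line fit. Here is a minimal, library-free sketch that recovers f(x) = x + 1 from the sample points and then predicts the next values (for the sales question, x would be the period index and y the amount spent):

using System;

class LinearFit
{
    static void Main()
    {
        // Known samples: f(0)=1, f(1)=2, ..., f(4)=5
        double[] x = { 0, 1, 2, 3, 4 };
        double[] y = { 1, 2, 3, 4, 5 };

        int n = x.Length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n; meanY /= n;

        // Closed-form least-squares slope and intercept.
        double num = 0, den = 0;
        for (int i = 0; i < n; i++)
        {
            num += (x[i] - meanX) * (y[i] - meanY);
            den += (x[i] - meanX) * (x[i] - meanX);
        }
        double slope = num / den;                  // 1.0 for this data
        double intercept = meanY - slope * meanX;  // 1.0 for this data

        // Extrapolate the next few values: 6, 7, 8, ... (valid only if the data really is linear).
        for (int xi = 5; xi <= 9; xi++)
            Console.WriteLine($"f({xi}) ≈ {slope * xi + intercept}");
    }
}

The point of the answer stands, though: the prediction of 6, 7, 8, ... is only as good as the assumption of linearity.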
I just have a warning, sorry. =)
Mathematically, there is no reason for your sequence above to be followed by a "6". I can easily give you a simple function whose next value is any value you like. It's just that humans like simple rules, and therefore tend to see a connection in these sequences that in reality is not there. Therefore, this is an impossible task for a computer if you do not feed it additional information.
Edit:
In the case that you suspect your data to have a known functional dependence, and there are uncontrollable outside factors, maybe regression analysis will have good results. To start easy, look at linear regression first.
If you cannot assume linear dependence, there is a nice application that looks for functions fitting your historical data... I'll update this post with its name as soon as I remember. =)
We have some example pictures.
And we have an input set of pictures. Every input picture is one of the examples after a combination of the following:
1) Rotating
2) Scaling
3) Cutting part of it
4) Adding noise
5) Using filter of some color
It is guaranteed that a human can recognize the picture easily.
I need a simple but effective algorithm to recognize which of the base examples an input picture came from.
I am writing in C# and Java
I don't think there is a single simple algorithm which will enable you to recognise images under all the conditions you mention.
One technique which might cover most is to Fourier transform the image, but this can't be described as simple by any stretch of the imagination, and will involve some pretty heavy mathematical concepts.
You might find it useful to search in the field of Digital Signal Processing which includes image processing since they're just two dimensional signals.
EDIT: Apparently the problem is limited to recognising MONEY (notes and coins) so the first problem of searching becomes avoiding websites which mention money as the result of using their image-recognition product, rather than as the source of the images.
Anyway, I found more useful hits by searching for 'Currency Image Recognition', including some which mention Hidden Markov Models (whatever that means). It may be the algorithm you're searching for.
The problem is simplified by having a small set of target images, but complicated by the need to detect counterfeits.
I still don't think there's a 'simple algorithm' for this job. Good luck in your searching.
There is some good research going on in the field of computer vision. One of the problems being solved is identification of an object irrespective of scale changes, added noise, and the skew introduced when a photo has been taken from a different view. I did a small assignment on this two years back as part of a computer vision course. There is a transform called the scale-invariant feature transform (SIFT) with which you can extract various features for the corner points. Corner points are those which are different from all of their neighbouring pixels. As you can observe, if a photo has been taken from two different views, some edges may disappear or appear to be something else, but the corners remain almost the same. This transform explains how a feature vector of size 128 can be extracted for each corner point, and how to use these feature vectors to find the similarity between two images. In your case:
You can extract those features for each of the currency notes you have, and check for the existence of these corner points in the test image you are supposed to classify.
As this transform is robust to rotation, scaling, cropping, noise addition and color filtering, I guess this is the best I can suggest to you. You can check this demo to get a better picture of what I explained.
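The matching step described above ("check for the existence of these corner points in the test image") usually comes down to nearest-neighbour comparison of the 128-element descriptors. A bare-bones sketch of just that comparison (the descriptor extraction itself is the hard part and is not shown; the threshold is a made-up tuning knob - real matchers tend to use Lowe's ratio test instead):

using System;
using System.Collections.Generic;

static class DescriptorMatching
{
    // Euclidean distance between two 128-element SIFT-style descriptors.
    static double Distance(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
        {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.Sqrt(sum);
    }

    // Count how many descriptors from the test image have a close match in one reference note.
    public static int CountMatches(List<double[]> referenceDescriptors,
                                   List<double[]> testDescriptors,
                                   double threshold)
    {
        int matches = 0;
        foreach (var t in testDescriptors)
        {
            double best = double.MaxValue;
            foreach (var r in referenceDescriptors)
                best = Math.Min(best, Distance(t, r));
            if (best < threshold) matches++;
        }
        return matches;
    }
    // The reference note with the highest match count is the recognition result.
}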
OpenCV has lots of algorithms and features; I guess it should be suitable for your problem. However, you'll have to play with P/Invoke to consume it from C# (it's a C library) - doable, but it requires some work.
You would need to build a set of functions that compute the probability of a particular transform between two images f(A,B). A number of transforms have previously been suggested as answers, e.g. Fourier. You would probably not be able to compute the probability of multiple transforms in one go fgh(A,B) with any reliability. So, you would compute the probability that each transform was independently applied f(A,B) g(A,B) h(A,B) and those with P above a threshold are the solution.
If the order is important, i.e you need to know that f(A,B) then g(f,B) then h(g,B) was performed, then you would need to adopt a state based probability framework such as Hidden Markov Models or a Bayesian Network (well, this is a generalization of HMMs) to model the likelihood of moving between states. See the BNT toolbox for Matlab (http://people.cs.ubc.ca/~murphyk/Software/BNT/bnt.html) for more details on these or any good modern AI book.
Say you want to write a Tetris clone, and you just started planning.
How do you decide what should be a class? Do you make individual blocks a class or just the different block-types?
I'm asking this because I often find myself writing either too many classes, or writing too few classes.
Take a step back.
I suspect that you're putting the cart before the horse here. OOP isn't a Good Thing in its own right, it's a technique for effectively solving problems. Problems like: "I have a large multiple-team organization of programmers with diverse skill sets and expertise. We are building large-scale complex software where many subsystems interact with each other. We have a limited budget."
OOP is good for this problem space because it emphasizes abstraction, encapsulation, polymorphism and inheritance. Each of those works well in the many-teams-writing-large-software space. Abstraction allows one team to use the work of another without having to understand the implementation details, thereby lowering the communication cost. Encapsulation allows one team to know that they can make changes to their internal structures to make them better without worrying about the costs of impacting another team. Polymorphism lowers the cost of using many different implementations of a given abstraction, depending on the current need. Inheritance allows one team to build upon the work of another, cleanly re-using existing code rather than spending time and money re-inventing it.
All of these things are good not in and of themselves, but because they lower costs in large-team-complex-software scenarios.
Do they lower costs in one-guy-trivial-software scenarios? I don't think they do; I think they raise costs. The point of inheritance is to save time through code re-use; if you spend more time messing around with getting the perfect inheritance hierarchy than the time you save through code re-use, it's not a net win, it's a net loss. Similarly with all the others: if you don't have many different implementations of the same thing then spending time on polymorphism is a loss. If you don't have anyone who is going to consume your abstraction, or anyone from whom you need to protect your internal state, then abstraction and encapsulation are costs with no associated benefits.
If what you want to do is write Tetris in an OO style for practice writing in that style, by all means go right ahead and don't let me stop you. I'm just saying: don't feel that you have a moral requirement to use OOP to solve a problem that OOP is not well-suited to solve; OOP is not the be-all-and-end-all of software development styles.
You might want to check out How do you design object oriented projects?. The accepted solution is a good start. I would also pick up a design patterns book as well.
For a Tetris clone, I'd say you're going to be better off creating a single block class and using an enum or similar to record what shape of piece it is. The reason is that all blocks act in the same way - they fall, they react to user input by rotating or falling faster, and they use collision detection to determine when to stop falling and trigger the next piece.
If you have a class per block-type then there'd be so little difference between each class that it would be a waste of time.
In another situation where you have a lot of similar concepts (like many different types of animals, etc.), it might make more sense to have a class per sub-type, all inheriting from a parent class, if the sub-types were more different from each other.
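A minimal sketch of that single-class-plus-enum approach (all names are illustrative, not taken from any particular Tetris codebase):

// Illustrative sketch: one Piece class, with the shape recorded as an enum.
enum PieceShape { I, O, T, S, Z, L, J }

class Piece
{
    public PieceShape Shape { get; }
    public int Row { get; private set; }
    public int Col { get; private set; }
    public int Rotation { get; private set; }   // 0..3, quarter turns

    public Piece(PieceShape shape, int row, int col)
    {
        Shape = shape;
        Row = row;
        Col = col;
    }

    // All shapes fall, shift and rotate the same way; only the occupied
    // cells (looked up from the shape) differ.
    public void MoveLeft()  => Col--;
    public void MoveRight() => Col++;
    public void MoveDown()  => Row++;
    public void Rotate()    => Rotation = (Rotation + 1) % 4;
}

Where the shapes actually differ - the cells each shape occupies in each orientation - you would look up from a table keyed by Shape and Rotation rather than override behaviour per class.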
Depends on your development methodology.
Assuming you do agile, then you can start with writing the classes you think you'll need. And then as you start filling in the implementation, you'll discover that some classes are obsolete or others need to be split out.
Assuming a more design-first-then-build approach (dsdm/rup/waterfall...), then you'd want to go for a design based on the "user story", see SwDevMan81's link for an example.
I would make a base class Piece, because they each have similar functionality: move right, move left, move down, rotate CW, rotate CCW, color, position, and the list goes on. Then each piece should be a subclass, like ZPiece, LPiece, SquarePiece, IPiece, BackwardsLPiece, etc... You do end up with many classes, but then there are many different types of pieces.
The point of OOP you are asking about is inheritance. You don't want to reinvent the wheel when it comes to functions like move left/right/down, nor do you want to repeat the exact same code in multiple locations. Those functions shouldn't change depending on the piece, so put them in the base class. Each piece rotates differently, but the rotate operation is still declared in the base class, because each subclass should implement its own version of it.
Basically, anything all pieces have in common should be in a base class. Then everything that makes a piece unique should be in the class itself. Yes, I think making a block class, with each piece owning four of them, is a bit much, but there are those who would disagree with me.
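A small sketch of that inheritance-based layout, again with illustrative names: the shared movement lives in the base class, and each concrete piece overrides only rotation:

abstract class Piece
{
    public int Row { get; protected set; }
    public int Col { get; protected set; }

    // Movement is identical for every piece, so it lives here.
    public void MoveLeft()  => Col--;
    public void MoveRight() => Col++;
    public void MoveDown()  => Row++;

    // Each concrete piece supplies its own rotation behaviour.
    public abstract void RotateClockwise();
}

class SquarePiece : Piece
{
    // The O-piece looks the same in every orientation.
    public override void RotateClockwise() { }
}

class ZPiece : Piece
{
    int orientation;   // 0 or 1: the Z-piece only has two distinct orientations
    public override void RotateClockwise() => orientation = 1 - orientation;
}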
I found this very cool C++ sample, literally the "Hello World!" of genetic algorithms.
I so decided to re-code the whole thing in C# and this is the result.
Now I am asking myself: is there any practical application along the lines of generating a target string starting from a population of random strings?
EDIT: my buddy on twitter just tweeted that "is useful for transcription type things such as translation. Does not have to be Monkey's". I wish I had a clue.
Is there any practical application along the lines of generating a target string starting from a population of random strings?
Sure. Imagine any scenario in which you know how to evaluate the fitness of a particular string, and in which the choices are discrete and constrained in some way (a toy fitness sketch follows these examples):
Picking pronounceable names ("Xhjkxc" has low fitness; "Artekzo" has high fitness)
Trying out a series of chess moves
Guessing the combination to a safe, assuming you can tell how close you are to unlocking each tumbler
Picking phone numbers that evaluate to words (e.g. "843-2378" has high fitness because it spells "THE-BEST")
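As a toy illustration of the first example, a fitness function for "pronounceable names" might just reward vowel/consonant alternation; the scoring rules below are entirely made up, and the GA machinery around it (selection, crossover, mutation) stays the same as in the string demo:

// Toy fitness function for the "pronounceable name" example: reward strings
// that alternate consonants and vowels, and penalise awkward consonant clusters.
static double PronounceabilityFitness(string candidate)
{
    const string vowels = "aeiou";
    double score = 0;
    for (int i = 1; i < candidate.Length; i++)
    {
        bool prevVowel = vowels.IndexOf(char.ToLower(candidate[i - 1])) >= 0;
        bool currVowel = vowels.IndexOf(char.ToLower(candidate[i])) >= 0;
        if (prevVowel != currVowel) score += 1.0;        // consonant/vowel alternation reads well
        else if (!prevVowel && !currVowel) score -= 0.5; // consonant clusters are harder to say
    }
    return score;   // "Artekzo" scores higher than "Xhjkxc"
}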
No. Each time you run the GA, you are giving it the eventual answer. This is great for showing how a GA works and to show how powerful it can be, but it does not have any purpose beyond that.
You could write an EA that writes code in a dynamic language like IronPython with the goal of creating code that a) executes without crashing and b) analyzes the stock market and intelligently buys and sells stock.
That's a very simplistic take on what would be necessary, but it's possible. You would need a host that provides a lot of methods for the IronPython code (technical indicators, etc) and a database of ticks.
It would also be smart to not just generate any old random code, lest you format your own hard drive. You need a sandbox, you need to limit the namespaces that are accessible, and you would need to impose a time limit to avoid infinite loops. You could also provide semantic guidelines that allow it to choose appropriate approved keywords instead of just stringing random letters together -- this would greatly speed up evolution.
So, I was involved with a project that did everything but the EA. We had a satellite dish that got real-time stock ticks from the NASDAQ, a service for trading that had an API, and a primitive decision making "brain" that made decisions as the ticks came in.
Sadly, one of the partners flipped out, quit his job, forked the project (got his own dish, etc), and started trading with logic that wasn't ready. He lost a bunch of money. It turns out that for some people this type of project is only a step away from common gambling. But anyway, the project kind of fizzled out after that. Evolving the logic part is the missing link though. And I know there are people out there doing this type of thing.
I have used GA in 2 real life research problems.
One was a power optimization problem (maximize number of appliances turned on, meeting the available power constraint and service guarantee for each appliance)
Another was for radio network optimization, maximizing the coverage area given a fixed equipment budget
GAs have one main disadvantage: they usually work at "genetic speed" (slow, iterative convergence), so using them in seriously time-dependent projects is quite risky.