Surface - Card Game WPF - C#

I've been asked to make a simple card game in WPF designed to work with Microsoft Pixel Sense Tables (Version 1).
I've had experience creating user applications in WPF, and even for the Surface, but I am quite new to building games. I've had a look at XNA and am going through the documentation as we speak.
The card game has three stages, and I am looking at completing Stage 1 for now. Stage 1 involves creating two sets of cards: one set has a word on each card, and the other set has a phrase related to a word in the first set. The cards are jumbled, and the students then have to match the right pairs.
Now I know part of the job has been simplified thanks to the ScatterView in the Surface SDK. I have also decided that the best way is to create a UserControl which is then added to a ScatterViewItem at runtime. The words and the associated phrases are stored in a MySQL or Access database.
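For reference, a minimal sketch of that setup, assuming a ScatterView named MainScatterView in the XAML and a hypothetical CardControl UserControl exposing a Text property (both names are illustrative):

    using System.Windows;
    using Microsoft.Surface.Presentation.Controls;

    // Creates one card per word/phrase row pulled from the database.
    private void AddCard(string text, Point startCenter)
    {
        var card = new CardControl { Text = text };

        var item = new ScatterViewItem
        {
            Content = card,
            Center = startCenter,   // initial position on the table
            Orientation = 0,        // initial rotation, in degrees
            CanScale = false        // keep all cards the same size
        };

        MainScatterView.Items.Add(item);
    }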
But this is where I am having trouble: when the students do pair up a matching set of cards, how do I attach the two cards together? I looked at the ScatterPuzzle sample included in the SDK, but it seems a bit too complex for this game at this stage. I wanted to create the cards to resemble two torn pieces of paper which, when attached, form a bigger piece of paper, but again I am not quite sure how to achieve this.

Related

AR Foundation / ARCore / ARKit fiducial markers generation

I am trying to develop an AR application in Unity using the new AR Foundation.
This application would need to use two features:
It needs to use a large number of tracking images
It needs to properly identify the tracked image (marker); only one image will be visible at any given moment
What I need is to dynamically generate the fiducial markers, preferably with the tracking part the same for all markers and only a specific part carrying the marker's ID. Preferably the AR code would be similar to the ARToolkit one from this image:
Do these markers work well with AR Foundation (the abstraction over ARCore and ARKit)?
Let's say I add 100 of these generated codes into the XRImage library. Is it possible that AR Foundation image targets get "confused" and mix up tracked images? Could I, in theory, use QR codes as markers and simply encode the ID information into the QR code?
In a project I searched for a good way to implement a lot of different markers to identify a vast number of different real-world objects. At first I tried QR codes and added them to the image database in ARFoundation.
It worked, but sometimes markers got mixed up, and this happened with only 4 QR codes containing the words ("left", "right", "up", "down"). The problem is that ARFoundation relies on ARCore, ARKit, etc., depending on the platform you build for.
Excerpt from the ARCore guide:
Avoid images that contain a large number of geometric features, or very few features (e.g. barcodes, QR codes, logos and other line art), as this will result in poor detection and tracking performance.
The next thing I tried was to combine OpenCV with ARFoundation and use ArUco markers for detection. The detection works much better and faster than the image recognition. This is done by accessing the camera image and running OpenCV's marker detection on it. In ARFoundation you can access the camera image using public bool TryAcquireLatestCpuImage(out XRCpuImage cpuImage).
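A condensed sketch of that camera-image path (the buffer handling follows the pattern from Unity's documentation; the conversion settings are just one reasonable choice):

    using System;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    public class CpuImageGrabber : MonoBehaviour
    {
        [SerializeField] ARCameraManager cameraManager;   // assign in the Inspector

        unsafe void Update()   // requires "Allow 'unsafe' code" in the player settings
        {
            // Returns false when no new camera frame is available.
            if (!cameraManager.TryAcquireLatestCpuImage(out XRCpuImage image))
                return;

            using (image)   // XRCpuImage wraps a native resource; always dispose it
            {
                var conversionParams = new XRCpuImage.ConversionParams(
                    image, TextureFormat.RGBA32, XRCpuImage.Transformation.MirrorY);

                var buffer = new NativeArray<byte>(
                    image.GetConvertedDataSize(conversionParams), Allocator.Temp);

                image.Convert(conversionParams,
                    new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

                // buffer now holds raw RGBA32 pixels that can be handed
                // to OpenCV's ArUco detector.

                buffer.Dispose();
            }
        }
    }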
The problem with this method:
This is a resource-intensive process that impacts performance...
On an iPad Pro 13" 2020, the performance of my application dropped from a constant 60 FPS to around 25 FPS. For me, this was too serious a performance drop.
A solution could be to create a collection of images with large variation and a perfect score, but I am unsure how images with all these aspects in mind could be generated. (It is probably also limited to 1,000 images per reference database; see the ARCore guide.)
If you want to check whether these markers work well in ARCore, go to this link and download the arcoreimg tool.
The tool will give you a score that will let you know whether an image is trackable or not. Though the site recommends a score of 75, I have tested it with scores as low as 15. Here is a quick demo if you are interested. The router image in the demo has a score of 15.
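For reference, once you have the tool, evaluating a candidate image is a single command (the file name is just an example):

    arcoreimg eval-img --input_image_path=marker.png

It prints a quality score from 0 to 100.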

Generating navmesh (3D) from walkable points

Here's the deal: I'm working on an algorithm/library that is able to generate a navigation mesh in virtually any environment where I can get coordinates for the controlled agent and/or for other agents within the same static environment. The only input I have is a collection of points where an agent has been.
(See the image here to hopefully understand what I mean)
I have already got to the point where I can create navmeshes manually and navigate on them well enough. However, in larger environments, with only the coordinates of, say, the controlled agent, it's really tedious and time-consuming to do this manually.
The uses of such an algorithm/library are obvious to me, and I have put a lot of thought into it already, so I'll list a couple of things I'd like to accomplish:
Robotics (scans environment, only gets distance from self to a point, hence getting coordinates - no need for complicated image/video processing)
AI that is able to navigate an unknown and unseen maze (any shape or size) by exploring it
Recording walked areas and creating AI for games that don't know certain places unless they've been there
Now you hopefully see what kind of solutions I'm looking for.
I have tried a couple of things but couldn't figure them out. The most successful approach I've tried is giving a range to each individual point (creating a circle) and then looking for places with overlapping circles: you can most likely move in those areas. The problems with this approach started with the triangulation of the areas. The resulting mesh may be a little inaccurate, but it must be able to connect to existing ("discovered") parts of the mesh seamlessly (not everything has to be interconnected, as agents can disappear and reappear, but within reasonable proximity the mesh should connect).
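To make the circle idea concrete, here is a naive sketch of the overlap test (O(n²); a spatial grid would speed it up; names are illustrative):

    using System.Collections.Generic;

    // A visited point from an agent's trail.
    public struct Sample { public double X, Y; }

    public static class WalkableGraph
    {
        // Connects samples whose "walkable circles" overlap, i.e. whose
        // centers are closer than twice the chosen radius. The resulting
        // edge list is a crude walkability graph that a triangulation
        // step could later turn into an actual mesh.
        public static List<(int a, int b)> Build(IList<Sample> samples, double radius)
        {
            var edges = new List<(int a, int b)>();
            double maxDistSq = (2 * radius) * (2 * radius);

            for (int i = 0; i < samples.Count; i++)
            for (int j = i + 1; j < samples.Count; j++)
            {
                double dx = samples[i].X - samples[j].X;
                double dy = samples[i].Y - samples[j].Y;
                if (dx * dx + dy * dy <= maxDistSq)
                    edges.Add((i, j));
            }
            return edges;
        }
    }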
Some more information: I'm working in C#, though solutions in Java, C++/C, Objective-C, pseudocode, etc. are equally acceptable.
P.S. I'm not interested at all in answers like "just use this library" or "use this other language/environment". I want an algorithm. Thank you in advance.
I can help with 2D pathfinding. You need to find the red outline. Then you can use a Voronoi diagram with the red outline (not the agents' points). Remove all edges outside the red outline, and the remaining edges can be used to navigate the shape. Read about it here: http://www.cs.columbia.edu/~pblaer/projects/path_planner/
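A sketch of the filtering step (the Voronoi edges themselves are assumed to come from an existing computational-geometry routine; this only keeps the edges inside the outline, using a standard ray-casting point-in-polygon test):

    using System.Collections.Generic;

    public struct Pt { public double X, Y; }
    public struct Edge { public Pt A, B; }

    public static class VoronoiFilter
    {
        // Keeps only the Voronoi edges whose endpoints both lie inside
        // the red outline polygon; the survivors form the navigation graph.
        public static List<Edge> InsideOutline(IEnumerable<Edge> edges, IList<Pt> outline)
        {
            var kept = new List<Edge>();
            foreach (var e in edges)
                if (Contains(outline, e.A) && Contains(outline, e.B))
                    kept.Add(e);
            return kept;
        }

        // Standard ray-casting point-in-polygon test.
        static bool Contains(IList<Pt> poly, Pt p)
        {
            bool inside = false;
            for (int i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
            {
                bool crosses = (poly[i].Y > p.Y) != (poly[j].Y > p.Y);
                if (crosses && p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                                     (poly[j].Y - poly[i].Y) + poly[i].X)
                    inside = !inside;
            }
            return inside;
        }
    }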

Feature extraction in 3D gesture recognition for HMM with Kinect data

I have a set of 3D points mapped onto [0, 1] segments. These points represent simple gestures like circles, waving, etc. Now I want to use Hidden Markov Models to recognize these gestures. The first step is to extract features from the (X, Y, Z) data. I tried to search for something useful and found a couple of examples: SIFT, SURF, some kind of Fast Fourier Transform, etc.
I'm confused about which one I should use in my project. I want to recognize gestures using data from the Kinect controller, so I don't need to track joints algorithmically.
I had to implement an HMM for gesture recognition a year or two ago for a paper on different machine learning methods. I came across the Accord.NET Framework, which implements many of the methods I was looking into, including HMMs. It's fairly easy to use, and its creator is active on the forums.
To train the HMM I created a Kinect application that would start recording a gesture once a body part had been stationary for 3 seconds; it would then record all the points to an output file until said part stopped again for 3 seconds. I then selected the best attempts at the gestures I wanted to train and used them as my training set.
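A rough sketch of the Accord.NET side (the class names are Accord's; the toy sequences below stand in for real recordings, which would first be quantized into a small alphabet of discrete symbols, e.g. direction codes):

    using Accord.Statistics.Models.Markov;
    using Accord.Statistics.Models.Markov.Learning;
    using Accord.Statistics.Models.Markov.Topology;

    // Each recorded gesture is quantized into a sequence of discrete
    // symbols; these tiny sequences are placeholders for real recordings.
    int[][] sequences =
    {
        new[] { 0, 1, 2, 3 },   // attempts at gesture 0
        new[] { 0, 1, 1, 3 },
        new[] { 4, 5, 6, 7 },   // attempts at gesture 1
        new[] { 4, 4, 6, 7 },
    };
    int[] labels = { 0, 0, 1, 1 };

    // One forward-topology HMM per gesture class.
    var classifier = new HiddenMarkovClassifier(
        classes: 2, topology: new Forward(states: 3), symbols: 8);

    var teacher = new HiddenMarkovClassifierLearning(classifier,
        modelIndex => new BaumWelchLearning(classifier.Models[modelIndex])
        {
            Tolerance = 0.001,
            Iterations = 0   // iterate until convergence
        });

    teacher.Run(sequences, labels);

    // Classify a new, unseen sequence.
    int recognized = classifier.Compute(new[] { 0, 1, 2, 3 });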
If you are new to Kinect gesture recognition and don't need to use an HMM, I would suggest looking into template matching, as it's a lot simpler and I found it can be very effective for simple gestures.
I'm working on a similar problem. So far the best material I have found is the Kinect Toolbox from David Catuhe. It has some basic code for gesture recognition, Kinect data recording, and replay.
You can start reading here: http://blogs.msdn.com/b/eternalcoding/archive/2011/07/04/gestures-and-tools-for-kinect.aspx
Have you considered a [trained] Support Vector Machine?
See the LibSVM library: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
The idea would be to define your gesture as an n-dimensional training problem, then simply train for each gesture (a multi-class SVM). Once trained, you map any user gesture to an n-dimensional vector and attempt to classify it with the trained model.
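A sketch of that idea using Accord.NET's multi-class SVM in place of LibSVM (any SVM implementation works the same way; the feature vectors below are toy placeholders for flattened gesture recordings):

    using Accord.MachineLearning.VectorMachines.Learning;
    using Accord.Statistics.Kernels;

    // Each row is one recorded gesture flattened into a fixed-length
    // n-dimensional feature vector; toy values for illustration.
    double[][] inputs =
    {
        new[] { 0.1, 0.9, 0.2, 0.8 },
        new[] { 0.2, 0.8, 0.1, 0.9 },
        new[] { 0.9, 0.1, 0.8, 0.2 },
        new[] { 0.8, 0.2, 0.9, 0.1 },
    };
    int[] outputs = { 0, 0, 1, 1 };   // gesture class labels

    // Multi-class SVM with a Gaussian kernel, trained via SMO.
    var teacher = new MulticlassSupportVectorLearning<Gaussian>()
    {
        Learner = p => new SequentialMinimalOptimization<Gaussian>()
    };
    var machine = teacher.Learn(inputs, outputs);

    // Map a new user gesture to the same vector space and classify it.
    int predicted = machine.Decide(new[] { 0.15, 0.85, 0.2, 0.8 });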

Shopping Mall Map Directory Editing [closed]

I have a shopping mall directory image (for example: http://www.westfield.com/annapolis/map/ ) and want to make an application like this one http://www.youtube.com/watch?v=LNj8D8JKv-4&feature=related where I can draw the path between two locations without needing to make many images for the paths.
So, what techniques (libraries, programming techniques, software, etc.) do you suggest for doing this in a .NET (Windows Forms/WPF) application?
EDIT for BOUNTY
I am looking for a starting point. For example: I am on the 3rd floor, and I have an image of the 3rd floor. There is an entrance point on the 3rd-floor map, and there are 29 seats on the floor. I want to show, with a line path, where somebody's seat is. I want to do it through a web app (C# MVC, .NET 4.5). Where should I start? Any sample code would be very helpful.
I solved a similar problem some time ago. In short, I was making a game like Heroes of Might and Magic, and since I didn't want to draw the background myself, I decided to use a static image which I would simply manipulate along with player moves.
So we found an open-source game with a map editor that we used to create the map background. Then we created a mapper that would load an image produced by the map editor, and a user would mark places on a grid. The following image is a screenshot from our map mapper.
There was the image, and a drawn grid with an adjustable cell size so that we could map objects very precisely. The yellow boxes with an item inside are mapped objects (some of them don't really make sense; it is just a proof of concept). When we were content with the result, we would save the positions and information about all objects to an XML file so that it could be used from our game application.
In our game we defined a class called TileMap that was aware of all objects on the map (stored as instances of a class Tile), and all player move requests were permitted or forbidden by it.
Finally, this is how I would solve your problem; I have only positive experience with this solution. Create an application that allows you to specify where there is a path, where there is a store and what it is called, and so forth. Serialize the values to XML or another format you are comfortable with. Then create an application that works with these objects, define a tile map, and you are nearly finished. Now you just need to implement pathfinding and drawing. Pathfinding is easy; there are plenty of algorithms with different efficiency and speed. Once you know which tiles you have to cross to reach a destination, simply draw stars, arrows, or whatever you like above the tiles.
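A minimal sketch of such a tile map (class and member names are illustrative, not the ones from our game):

    using System.Collections.Generic;

    // One cell of the mapped image: either walkable path or a blocked
    // object such as a store.
    public class Tile
    {
        public int X, Y;
        public bool Walkable;
        public string Label;   // e.g. a store name; empty for plain path
    }

    // Knows every tile loaded from the mapper's XML file and answers
    // move requests.
    public class TileMap
    {
        private readonly Dictionary<(int x, int y), Tile> tiles =
            new Dictionary<(int x, int y), Tile>();

        public void Add(Tile tile) => tiles[(tile.X, tile.Y)] = tile;

        public bool CanMoveTo(int x, int y) =>
            tiles.TryGetValue((x, y), out var tile) && tile.Walkable;
    }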
Cons
you need to create two applications
Pros
no need to generate more images; you are drawing on a transparent layer, and you can save already-generated paths in memory or to a file
quite easy implementation, just a lot of code
feel free to choose a technology - we used WinRT, but WPF and even WinForms are also suitable
if you like GUI applications, you will have plenty of fun while doing this :)
Feel free to ask any design or even implementation questions.
1) Prepare the list of all places that you want to be searchable. Give each one an ID. The stairs and elevators should have IDs too.
2) Create the map floor masks. The easy way is the "coloring book" technique. Create a semi-transparent layer in Photoshop and overlay it above the map. Then draw the walls with some fixed "color 1". Then draw each distinct place with a distinct color: just convert the place ID to hex and use that hex value as the color. Draw the mask for each floor in a separate image.
3) Create the map loader. It should load the mask images and extract the object positions and passability information. You have to find the position (x, y, floor) of each place by scanning the masks and looking at the pixel color hex values (see the sketch after this list). You also have to locate the stairs and elevators, their positions, and the floors they connect.
4) Implement a pathfinding algorithm like A*. It's quite easy and looks like viscous water flowing to the destination's low point. From each position (x, y, floor) one can move in any of the four directions where there are no walls, and if there is an elevator at that point, one can move to another floor. The algorithm can quickly find the best route between any two (x, y, floor) points.
5) When the user gives you the name of the place they want to go to, you need to find the user's position coordinates (you may try to use GPS) and the destination position (use the object location table you prepared when you loaded the map in step 3). Give these two (x, y, floor) points to the pathfinding algorithm to get back the route: the sequence of (x, y, floor) points leading to the destination.
6) Analyze the route. You need to scan the route and find out which floors it passes through. Split the route into chunks where each chunk belongs to a single floor. Now you have a list of floors and the route points for each floor.
7) Visualize the floors and their route parts by drawing the route points over the floor maps. To increase the route point spacing, you may just draw every 10th point or so. With HTML5, the sequence of route points can be drawn as an SVG or canvas overlay on top of the floor map image background.
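A rough sketch of the mask-scanning part of step 3 (System.Drawing's GetPixel is slow, so a real loader would use LockBits, but it shows the idea; IDs 0 and 1 are reserved for background and walls):

    using System.Collections.Generic;
    using System.Drawing;

    public static class MaskLoader
    {
        // Scans one floor's mask image and collects every pixel position
        // belonging to each place ID, where a place's ID is encoded as
        // its pixel color (the ID converted to a hex RGB value).
        public static Dictionary<int, List<Point>> Scan(Bitmap mask)
        {
            var places = new Dictionary<int, List<Point>>();

            for (int y = 0; y < mask.Height; y++)
            for (int x = 0; x < mask.Width; x++)
            {
                Color c = mask.GetPixel(x, y);
                int id = (c.R << 16) | (c.G << 8) | c.B;   // hex color back to ID

                if (id == 0 || id == 1) continue;   // background and walls

                if (!places.TryGetValue(id, out var points))
                    places[id] = points = new List<Point>();
                points.Add(new Point(x, y));
            }
            return places;
        }
    }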
The best way to set up something like this is to pick up a book on game programming, as it will give you a lot of information on setting up paths around "solid objects". The actual UI technology should not matter as much, so choose WPF, Windows Forms, HTML5, etc. Of the choices you have, I would probably aim for WPF or Silverlight, as they give you much more flexibility in creating the UI. But I would not be averse to HTML5 either.
You can definitely do the whole thing in WPF. You're looking at drawing simple paths, so you need to chunk your project into multiple sub-issues:
1) How to draw the UI area (I assume you already know that).
2) How to draw the map; this heavily depends on the map data you have, but it could be as simple as a single image. Please add more detail about your source.
3) How to figure out the path; for this you will need some form of pathfinding algorithm (one of the simplest is A*, but there is a myriad of algorithms for different needs). A minimal sketch follows after this answer.
4) How to draw the path; this depends on what you're looking for, once again.
I know it isn't much of an answer yet, but depending on your needs (please add a comment below) I'll edit it to help you the best I can.
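To illustrate point 3, here is a minimal breadth-first pathfinder over a walkable grid; A* adds a distance heuristic on top of the same idea (all names here are illustrative):

    using System.Collections.Generic;

    public static class GridPathfinder
    {
        // walkable[x, y] marks cells that are not blocked; returns the
        // cell sequence from start to goal, or null if unreachable.
        public static List<(int x, int y)> FindPath(
            bool[,] walkable, (int x, int y) start, (int x, int y) goal)
        {
            var previous = new Dictionary<(int, int), (int, int)>();
            var queue = new Queue<(int x, int y)>();
            queue.Enqueue(start);
            previous[start] = start;

            var steps = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

            while (queue.Count > 0)
            {
                var cell = queue.Dequeue();
                if (cell == goal) break;

                foreach (var (dx, dy) in steps)
                {
                    var next = (x: cell.x + dx, y: cell.y + dy);
                    if (next.x < 0 || next.y < 0 ||
                        next.x >= walkable.GetLength(0) ||
                        next.y >= walkable.GetLength(1) ||
                        !walkable[next.x, next.y] ||
                        previous.ContainsKey(next))
                        continue;

                    previous[next] = cell;
                    queue.Enqueue(next);
                }
            }

            if (!previous.ContainsKey(goal)) return null;   // unreachable

            // Walk backwards from the goal to rebuild the path.
            var path = new List<(int x, int y)>();
            for (var cell = goal; cell != start; cell = previous[cell])
                path.Add(cell);
            path.Add(start);
            path.Reverse();
            return path;
        }
    }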
It would be a pretty significant undertaking if you tried to do this whole thing by yourself. All you have is an image of a shopping mall. From there you need to convert it into vector data, which is in itself a fairly significant project. Then you will need to design navigation algorithms, and then you need to build the UX for the whole thing. IMO, all of this would require a lot of resources and research if you want to do it nicely and accurately.
Fortunately, here is the good news: Google has been trying to do the same thing for quite some time, throwing a lot more resources at it than are likely at your disposal. Their effort is called "indoor maps", and you can pretty much leverage it out of the box for your scenario. I'm going to give some pointers here to start you off, as you asked.
First, visit the Google blog to get familiar with their indoor maps initiative. Then try adding your floor plan to Google Maps here. You are basically uploading an image and aligning it with the building in Google Maps at this point. Here's another tutorial. Note that this does not yet make your floor plan navigable and may not show the user's location on it, because to do that you need data from Wi-Fi/cell towers to triangulate the user's location on the floor plan. We'll go over that next. If you have tons of these floor plans, I'd suggest using Mechanical Turk or a similar service to have other humans do it for you cheaply. Google Maps allows you to keep a floor plan private by using overlays, but you likely don't want to do that, so users can access it from anywhere.
Next, you want to make your user locatable on your floor plan. This involves gathering data such as Wi-Fi/cell tower signals at different points on your floor plan. Google has an app for this. And here's a little demo. You can also use SketchUp to add vector data and polygons.
Next, you want to embed Google Maps in your app so it becomes an integral part of your app, instead of users having to go through the Google Maps website. To do this, look at the Maps SDK (here's the link for iOS, and a snippet for indoor maps).
Good luck!
You can make this job easier by looking into Google's indoor maps solution: http://maps.google.com/help/maps/indoormaps/
No programming except a web page is needed.

What is the best way to implement scrollable maps (like Google Maps) in Microsoft Silverlight?

Problem Overview
I am working on a game application and need to be able to implement scrollable maps in Silverlight, similar to those found in Google Maps. However, I am unsure how to implement this effectively. The following paragraphs provide additional detail. Any ideas or guidance are greatly appreciated!
Problem Detail
I have been working on a new MMOG (massively multiplayer online game). The game will implement a coordinate-based (x, y) map. Only a very small fraction (less than 0.1%) of the map will be displayed on the screen at any given time. The player should have the ability to click on the map and drag the mouse to scroll and view map areas which are not presently visible. (This is somewhat similar to Google Maps.)
The map background is made up of a series of stitched (repeating) images. These images are woven together to give the basic appearance of the game's "world". A standard set of additional graphics is then superimposed, as appropriate, at each of the coordinate locations. For example, point (0,0) might be a lake, (0,1) might be a city, and (0,2) might be a forest. The respective images for a lake, a city, and a forest would be superimposed on the background.
It is important to mention that the entire map is NOT stored on the local client machine. Rather, as a player scrolls to or opens a specific location, the appropriate map information is retrieved from the remote game server. It is infeasible for us to build the entire game world map ahead of time due to its size and the fact that portions of the map are constantly changing.
I have toyed with the idea of building a bitmap of the map on the fly each time the player moves. However, I think there may be a much better way to add to the map as the player scrolls.
When scrolling, movement of the map should not, if possible, result in a "flickered" refresh of the screen. I believe recreating a bitmap every time the player moves even one or two pixels would almost certainly result in flicker.
I am open to 3rd party tools and solutions. However, to the degree possible, I would prefer to use standard Microsoft libraries or open source tools rather than commercial tools.
What are some ideas as to the best way to implement this functionality so that it performs well, is reliable, and transitions to new areas of the map appear seamless to the player?
Thank you in advance for all your help!
Update
Here are a few pieces of additional information that may prove helpful.
Since my initial post, I have been introduced to the concept of a "tile engine". (Many thanks to Michael and Paul for pointing me towards Bing and BruTile.)
My understanding is that a tile engine basically breaks larger images into sections and renders them side by side. As a user scrolls, additional tiles are rendered as others are removed from view. This is very much what I am looking for.
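To check my understanding, here is a rough sketch of what I picture the core of such an engine looking like: a Canvas that repositions tile images from a world offset as the user drags (TileSize and GetTileImage are illustrative placeholders for the game's own tile data):

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Input;
    using System.Windows.Media;

    public class TileSurface : Canvas
    {
        private const double TileSize = 256;
        private Point worldOffset;   // top-left corner of the view, in world pixels
        private Point? dragStart;

        public TileSurface()
        {
            MouseLeftButtonDown += (s, e) => { dragStart = e.GetPosition(this); CaptureMouse(); };
            MouseLeftButtonUp += (s, e) => { dragStart = null; ReleaseMouseCapture(); };
            MouseMove += OnDrag;
        }

        private void OnDrag(object sender, MouseEventArgs e)
        {
            if (dragStart == null) return;
            Point now = e.GetPosition(this);
            worldOffset.X -= now.X - dragStart.Value.X;   // dragging right moves the world left
            worldOffset.Y -= now.Y - dragStart.Value.Y;
            dragStart = now;
            Relayout();
        }

        private void Relayout()
        {
            Children.Clear();   // a real engine would recycle tiles instead

            int firstCol = (int)Math.Floor(worldOffset.X / TileSize);
            int firstRow = (int)Math.Floor(worldOffset.Y / TileSize);

            for (int row = firstRow; row * TileSize < worldOffset.Y + ActualHeight; row++)
            for (int col = firstCol; col * TileSize < worldOffset.X + ActualWidth; col++)
            {
                var tile = new Image
                {
                    Source = GetTileImage(col, row),
                    Width = TileSize,
                    Height = TileSize
                };
                SetLeft(tile, col * TileSize - worldOffset.X);
                SetTop(tile, row * TileSize - worldOffset.Y);
                Children.Add(tile);
            }
        }

        // Placeholder: look up the pre-downloaded background for this tile
        // and superimpose the lake/city/forest graphics for its coordinates.
        private ImageSource GetTileImage(int col, int row) => null;
    }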
However, there are a couple of wrinkles that affect my use of a standard tile engine. All of the graphics for the game, including the backgrounds displayed on any tile, will already have been downloaded to the client. It is important that the tile engine not retrieve the graphics from a server, as this would consume significant unnecessary bandwidth.
Other graphics (e.g. a lake, forest, or hill), which represent objects from the game world, must be superimposed when the tiles are rendered on the screen. Tile engines such as Bing appear to provide the ability to superimpose custom images. Whatever tile engine is used must not only support this feature but also allow exact placement of these superimposed images.
Finally, there is a requirement to support popup descriptions when the user mouses over one of the superimposed graphics. Unlike the graphics, which are already stored on the client, the descriptions contain information which must be downloaded from the game server. BruTile, while excellent in many ways, does not appear to support these popup descriptions yet.
We are making great progress. Thanks for all your help so far!
For an open-source solution you could look at BruTile. It, too, has all the features you describe. It can also be used on the Microsoft Surface and on Windows Phone (for your marketplace version).
Use the Bing Maps control or the MultiScaleImage (Deep Zoom) control which it uses.
To see an example, go here. You can use the Deep Zoom Composer to create maps or topologies using your own photos and images.
Here is the SDK for the control.
