I want to render a custom display from an emulation, something like the dot matrix display (DMD) on a pinball machine.
How would I go about this efficiently? (I assume writing to a texture pixel by pixel will run far too slowly.)
There must be a good way to render this, but I'm having trouble finding one that actually performs well.
There are many options, but without further details (DMD resolution, number of colors, animated or not, etc.) it's not easy to help. Here are a bunch of options that popped into my mind; hopefully the one you're looking for is somewhere among them :)
1) There was a similar question; you can find it, along with its answer, here
2) If you want to display text only, there's a wide range of sites offering DMD fonts for free, e.g. here
3) You can also edit/extend the font set you download and display 'special characters' as graphics, or just use the standard ASCII table if that's enough for your needs, e.g. ▓ █ ╔ ═ ╗ and similar "drawing characters"
You can find inspiration and ASCII art (including animated pieces) e.g. here
4) Might be slow (again, "it depends"), but you can write your frames into a Texture2D with SetPixels and draw that with DrawTexture; see the sketch after this list
5) A bit "hacky", but you can store your animation phases as bitmap data/arrays (read-only/constant variables for example, or read from disk in a managed way, or drawn with the help of a free asset from the store, like this one here, etc.) and render them with Graphics.DrawTexture
6) If the thing you want to display is 100% static (i.e. not actual data like a score, but "hardcoded" animations like a "TILT" text or such), you can create a Sprite Animation
7) You can mix the above and e.g. use a font (#2) to display the dynamic data on a canvas while playing the static animation around it, making the whole thing look dynamic
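For #4, here's a minimal sketch (Unity; the 128x32 size is my assumption, it's the classic pinball DMD resolution). At that size the texture is tiny, so one bulk SetPixels32 + Apply per frame is cheap, and point filtering keeps the blocky dot-matrix look when the display is scaled up:

    using UnityEngine;

    public class DmdRenderer : MonoBehaviour
    {
        const int Width = 128, Height = 32;   // assumed DMD resolution
        Texture2D dmdTex;
        Color32[] buffer = new Color32[Width * Height];

        void Start()
        {
            dmdTex = new Texture2D(Width, Height, TextureFormat.RGBA32, false);
            dmdTex.filterMode = FilterMode.Point; // hard pixel edges, no blur
            GetComponent<Renderer>().material.mainTexture = dmdTex;
        }

        void Update()
        {
            // Fill 'buffer' from your emulation each frame; this demo just blinks.
            byte v = (byte)(Time.time % 1f < 0.5f ? 255 : 32);
            for (int i = 0; i < buffer.Length; i++)
                buffer[i] = new Color32(v, (byte)(v / 3), 0, 255); // amber tint

            dmdTex.SetPixels32(buffer); // one bulk upload, not per-pixel SetPixel calls
            dmdTex.Apply(false);        // false: no mipmaps to regenerate
        }
    }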
Hm. That's all right off the top of my head :)
Hope this helps!
I am writing a geoscience visualization application that uses WPF 3D. The user needs to be able to zoom deep into detail and back out quickly with minimal resource use. I've decided to divide my slice (ModelVisual3D) into subrectangles (GeometryModel3D), so that each has its own texture that changes when the camera zooms in (similar to Google Maps).
The problem is that "cracks" appear between the subrectangles, even though there is actually no empty space between them.
How can I hide these? Or is there another way to assign multiple materials with different sizes to one ModelVisual3D?
PS: I've tried making the background gray, light gray, silver and white-smoke. It helps a little, but it's not acceptable. I've also tried overlapping the subrectangles, with no result.
Instead of your current setup you might want to create several textures at different resolutions and switch between them depending on the zoom level (mipmaps).
When the camera gets really close, you might swap the entire object for a much smaller one and use a highly detailed texture.
It requires a bit more pre-processing, but you will be able to use a single geometry.
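A minimal sketch of the switching logic (the helper name and the power-of-two halving scheme are my assumptions):

    using System;
    using System.Windows.Media;

    // Pick one of several pre-generated texture resolutions from the camera
    // distance. levels[0] is full resolution; each following entry is assumed
    // to be half the previous size.
    static ImageSource SelectTextureLevel(double cameraDistance, ImageSource[] levels)
    {
        int index = (int)Math.Log(Math.Max(cameraDistance, 1.0), 2.0);
        return levels[Math.Min(index, levels.Length - 1)];
    }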
It seems that changing the ImageBrush's Stretch to Stretch.None and using textures larger than the subsquare helps, although now I need more precise control over the texture coordinates of the surface.
We're currently creating a simple image-manipulation application in Silverlight, and we've hit a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it) and then apply effects to the pixels within that selection.
Creating the selection is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure there must be a more efficient way than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a line-crossing algorithm.
The type of thing seen here (there's a sketch after this list).
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
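For option 2, a minimal sketch of the line-crossing (ray casting) test: a point is inside the polygon if a horizontal ray cast from it crosses the outline an odd number of times. (The array-of-points input is an assumption.)

    using System.Windows;

    static bool Contains(Point[] poly, double px, double py)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            // Count edge (j -> i) when it straddles the ray's Y and the
            // crossing point lies to the right of (px, py).
            if ((poly[i].Y > py) != (poly[j].Y > py) &&
                px < (poly[j].X - poly[i].X) * (py - poly[i].Y) /
                     (poly[j].Y - poly[i].Y) + poly[i].X)
                inside = !inside;
        }
        return inside;
    }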
This must be a commonly solved problem, so we're guessing there are plenty more solutions we haven't even thought of.
What would you recommend?
A flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
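In case it helps, here's a minimal sketch of the plain queue-based version; the linked Queue-Linear variant is faster because it processes whole horizontal runs per queue entry, but the idea is the same. (The bool-array representation and seed point are my assumptions.)

    using System.Collections.Generic;

    // Mark every pixel inside the rasterised selection outline ('boundary'),
    // starting from a seed point known to be inside it.
    static bool[,] FillSelection(bool[,] boundary, int seedX, int seedY)
    {
        int h = boundary.GetLength(0), w = boundary.GetLength(1);
        var selected = new bool[h, w];
        var queue = new Queue<(int X, int Y)>();
        queue.Enqueue((seedX, seedY));
        selected[seedY, seedX] = true;

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
            {
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (boundary[ny, nx] || selected[ny, nx]) continue; // stop at the outline
                selected[ny, nx] = true;
                queue.Enqueue((nx, ny));
            }
        }
        return selected; // true = pixel is inside the user's selection
    }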
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (modifying all pixels is usually easier than modifying only some), then use the clipping path to render only the desired area of the changes back over the original image (probably using an extra buffer bitmap for the result).
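A minimal sketch of that flow (the overlay element and the point-list input are my assumptions):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;

    // Show the modified copy only inside the user's polygon by clipping an
    // overlay Image element layered on top of the original.
    static void ApplySelectionClip(Image overlayImage, ImageSource modifiedCopy,
                                   Point[] points)
    {
        var figure = new PathFigure { StartPoint = points[0], IsClosed = true };
        for (int i = 1; i < points.Length; i++)
            figure.Segments.Add(new LineSegment { Point = points[i] });

        var clip = new PathGeometry();
        clip.Figures.Add(figure);

        overlayImage.Source = modifiedCopy; // whole-image copy with the effect applied
        overlayImage.Clip = clip;           // only the selected region shows through
    }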
Hope this helps. Just throwing ideas out to see if any stick :)
Do you know of any techniques for speeding up the drawing of 2D primitives such as lines and circles?
I'm developing an application for editing images that contain such primitives. They can be moved and selected in the same way as Windows desktop icons (including group selection by rectangle), and the objects under the cursor are highlighted.
It seems that many display updates are involved when the mouse is used, so I need to do this smartly.
I know that:
changing from GDI+ to D3D can speed up display greatly
dirty rects restrict updates to only those rectangles that changed (major drawback: the bounding rectangle of a line can be as big as the whole display area); there's a sketch of this after the list
the XOR technique clears a primitive by drawing it a second time (drawback: it seems useless with multicolor images and primitives that share points)
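For the dirty-rect approach, a minimal WinForms/GDI+ sketch (the helper and its signature are my assumptions):

    using System.Drawing;
    using System.Windows.Forms;

    // When a primitive moves, repaint only the union of its old and new
    // bounding boxes instead of the whole canvas.
    static void MovePrimitive(Control canvas, ref Rectangle bounds, Point delta)
    {
        Rectangle old = bounds;
        bounds.Offset(delta);
        canvas.Invalidate(Rectangle.Union(old, bounds)); // the dirty rect
        // In OnPaint, check e.ClipRectangle and redraw only the primitives
        // whose bounds intersect it.
    }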
Thanks for any useful tips & links.
Take a look at Michael Abrash's Graphics Programming Black Book
My program works with fax documents stored as separate bitmaps.
I wonder if there is a way to automatically detect the page orientation (vertical or horizontal) so the image preview can be shown to the user the right way up (i.e. rotated if necessary).
Any advice is much appreciated!
EDIT: Clarification:
When the fax machine receives a multi-page document, it saves each page as a separate TIFF file.
My app has a built-in viewer displaying those files. All files are scaled to A4 format and saved as TIFF, so there is no chance of detecting the orientation from the height/width parameters.
My viewer displays images in portrait mode by default.
What I'd like to do is automagically detect when the original document was printed in landscape mode (e.g. wide Excel tables), and then show a rotated preview to the end user to speed up the preview process.
Obviously there are four possible fax orientations: portrait/landscape x 2 kinds of rotation.
I'd even be interested in a simplified solution that only detects whether the original document was landscape or portrait (I've noticed most landscape documents need to be rotated clockwise).
EDIT 2: An idea
I think this might work:
I could draw horizontal and vertical lines across the page and check whether each line cuts any black pixel. Then I could compare which kind of uncut line (horizontal or vertical) is more common, and that would decide the page orientation.
What do you think? (There's a sketch of this below.)
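A minimal sketch of that idea (the binarised bool-array input is my assumption): portrait text has many clear horizontal gaps between text lines, so clear rows should dominate; if clear columns dominate instead, the page is probably landscape.

    // Count "clear" scan lines, i.e. rows and columns crossing no black pixel.
    static bool LooksLandscape(bool[,] black) // black[y, x] = true for a dark pixel
    {
        int h = black.GetLength(0), w = black.GetLength(1);
        int clearRows = 0, clearCols = 0;

        for (int y = 0; y < h; y++)
        {
            bool clear = true;
            for (int x = 0; x < w && clear; x++) clear = !black[y, x];
            if (clear) clearRows++;
        }
        for (int x = 0; x < w; x++)
        {
            bool clear = true;
            for (int y = 0; y < h && clear; y++) clear = !black[y, x];
            if (clear) clearCols++;
        }

        // Compare proportions so the page's aspect ratio doesn't bias the vote.
        return clearCols * (long)h > clearRows * (long)w;
    }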
You could perform a Fast Fourier Transform (FFT) to convert your spatial image into a frequency/angle representation, then find the angle with the most prominent frequency. It sounds complicated, but it's not that hard, it's pretty efficient, and in effect it tests every possible angle at once instead of being a hard-coded hack that only works for specific angles. Search for a sample implementation with terms like "Numerical Recipes" and "FFT".
You'd need OCR for that. Rolling your own OCR would be a bit difficult, but there might be a library or something out there worth looking into. Also, even with good OCR, it's not a 100% reliable solution.
I wonder if there are some properties of text you could use to help you do this.
For instance, at a quick glance there are far more vertical lines in text (l, j, k, m, n, etc.) than horizontal ones, so maybe you could start with that.
But even detecting these isn't straightforward; you'd need some sort of filter like a Sobel or Prewitt. Both come in horizontal and vertical versions; see here for more info.
Of course, the vertical/horizontal lines of an Excel spreadsheet would be the strongest edges, so you'd have to ignore those and look only at the text.
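A minimal sketch of that measurement (the grayscale array input is my assumption). A strong horizontal gradient (gx) marks a vertical edge, i.e. an upright text stroke, and vice versa:

    using System;

    // Sum the absolute Sobel responses over the page (ignoring the borders).
    static (long verticalEdges, long horizontalEdges) EdgeEnergy(byte[,] gray) // gray[y, x]
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        long gxSum = 0, gySum = 0;
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
            {
                int gx = -gray[y - 1, x - 1] + gray[y - 1, x + 1]
                         - 2 * gray[y, x - 1] + 2 * gray[y, x + 1]
                         - gray[y + 1, x - 1] + gray[y + 1, x + 1];
                int gy = -gray[y - 1, x - 1] - 2 * gray[y - 1, x] - gray[y - 1, x + 1]
                         + gray[y + 1, x - 1] + 2 * gray[y + 1, x] + gray[y + 1, x + 1];
                gxSum += Math.Abs(gx); // responds to vertical edges
                gySum += Math.Abs(gy); // responds to horizontal edges
            }
        return (gxSum, gySum);
    }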
Alternative: can you not just give the user an easy way to rotate the images, like the arrows in Windows Picture Viewer, or show four thumbnail previews they can click on? You might need to cache the four rotated versions so it's quick, but only if speed turns out to be an issue.
Here's a paper entitled "Combined Script and Page Orientation Estimation using the Tesseract OCR engine" [pdf]
I haven't been able to find an implementation of their work, but the approach looks good to me:
The basic idea behind the proposed approach is simple.
A shape classifier is trained on characters (classes) from all the scripts of interest. At run-time, the classifier is run independently on each connected component (CC) in the image and the process is repeated after rotating each CC into three other candidate orientations (90°, 180° and 270° from the input orientation).
The algorithm keeps track of the estimated number of characters in each script for a given orientation, and the accumulated classifier confidence score across all candidate orientations. The estimate of page orientation is chosen as the one with the highest cumulative confidence score, and the estimate of script is chosen as the one with the highest number of characters in that script for the best orientation estimate.
I have a bitmap with a black background and some random objects in white. How can I identify these separate objects and extract them from the bitmap?
It should be pretty simple to find the connected white pixel coordinates in the image if the pixels are either black or white. Start scanning pixels row by row until you find a white pixel. Keep track of where you found it, and create a new data structure to hold its connected object. Do a recursive search from that pixel to its surrounding pixels, adding each connected white pixel's coordinates to the data structure. When your search can't find any more connected white pixels, "end" that object.
Go back to where you started and continue scanning pixels. Each time you find a white pixel, see if it is already in one of your existing "objects". If not, create a new object and repeat your search, adding connected white pixels as you go.
When you are done, you should have a set of data structures representing collections of connected white pixels. These are your objects. If you need to identify what they are or simplify them into shapes, you'll need to do some googling -- I can't help you there. It's been too long since I took that computer vision course.
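A minimal sketch of that scan, using an explicit stack rather than recursion so a large object can't overflow the call stack (the bool-array input is my assumption):

    using System.Collections.Generic;

    // Returns one list of pixel coordinates per connected white object.
    static List<List<(int X, int Y)>> FindObjects(bool[,] white) // white[y, x]
    {
        int h = white.GetLength(0), w = white.GetLength(1);
        var visited = new bool[h, w];
        var objects = new List<List<(int X, int Y)>>();

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (!white[y, x] || visited[y, x]) continue;

                var blob = new List<(int X, int Y)>();
                var stack = new Stack<(int X, int Y)>();
                stack.Push((x, y));
                visited[y, x] = true;

                while (stack.Count > 0)
                {
                    var (cx, cy) = stack.Pop();
                    blob.Add((cx, cy));
                    // 4-connected neighbours; add diagonals for 8-connectivity.
                    foreach (var (nx, ny) in new[] { (cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1) })
                    {
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (!white[ny, nx] || visited[ny, nx]) continue;
                        visited[ny, nx] = true;
                        stack.Push((nx, ny));
                    }
                }
                objects.Add(blob);
            }
        return objects;
    }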
Feature extraction is a really complex topic, and your question doesn't explain the issues you face or the nature of the objects you want to extract.
Usually morphological operators help a lot with such problems (reducing noise, filling gaps, ...). I hope you've already discovered AForge; before you reinvent the wheel, have a look at it. "Shape recognition" and "blob analysis" are buzzwords you can google to get ideas for solutions to your problem.
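If AForge fits, blob analysis is mostly done for you. From memory (so check it against the current AForge docs), extracting the bounding boxes of connected white regions looks roughly like this:

    using System.Drawing;
    using AForge.Imaging;

    // Find connected white regions in a binary bitmap and return their bounds.
    static Rectangle[] FindBlobBounds(Bitmap binaryImage)
    {
        var counter = new BlobCounter();
        counter.ProcessImage(binaryImage);          // label connected components

        Blob[] blobs = counter.GetObjectsInformation();
        var bounds = new Rectangle[blobs.Length];
        for (int i = 0; i < blobs.Length; i++)
            bounds[i] = blobs[i].Rectangle;         // bounding box of each object
        return bounds;
    }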
There are several articles on CodeProject that deal with these kinds of image filters. Unfortunately, I have no idea how they work (and if I did, the answer would probably be too long for here ;P ).
1) Morphological operations to make the objects appear "better"
2) Segmentation
3) Classification
Each topic is a big one. There are simple approaches, but your description is not detailed enough...