I'm using WPF MediaKit with two IDS Ethernet cameras over the uEye drivers, and sometimes when I launch the app the video feed is upside down. I am using the controls just as shown in the documentation on the project's front page. I have observed two things:
The video feed is perfectly normal when used with USB cameras (Microsoft, Logitech, and uEye USB cameras).
When I use the demo application, the video feed is always correct.
Has anyone experienced similar issues?
When the image is flipped upside down in DirectShow, there is one common cause for this artifact:
The normal image row order for RGB images is bottom-to-top, that is, the last row comes first. The top-to-bottom format also exists and is indicated by a negative value in the biHeight field of the underlying media type. It is quite rare, and some components might ignore it. A similar but even rarer issue is that YUV images are always top-to-bottom, regardless of the biHeight sign, and some buggy components incorrectly flip such images.
All in all, somewhere in the pipeline the top-to-bottom order is likely to be confused with bottom-to-top and as a result the image is flipped.
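If you end up compensating in your own code (for example in a sample-grabber callback), a vertical flip is cheap to do by hand. Below is a minimal sketch assuming a tightly packed RGB24 buffer with no row padding; FlipVertically and its parameters are illustrative, not part of WPF MediaKit, and a real DirectShow buffer may have stride alignment you must account for.

    using System;

    static class FrameFlipper
    {
        // Swap rows top-for-bottom in place; assumes the buffer holds
        // width * height * 3 bytes with no padding at the end of each row.
        public static void FlipVertically(byte[] buffer, int width, int height)
        {
            int stride = width * 3; // bytes per row for RGB24
            var rowTemp = new byte[stride];
            for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--)
            {
                Array.Copy(buffer, top * stride, rowTemp, 0, stride);
                Array.Copy(buffer, bottom * stride, buffer, top * stride, stride);
                Array.Copy(rowTemp, 0, buffer, bottom * stride, stride);
            }
        }
    }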
After putting together enough code to parse a PDF file, I'm now stuck on how to handle the decoded stream content, which describes how to "draw" the actual page content. Apart from the operators that "draw this or that, or move from here to there", which are mostly self-explanatory, I can't grasp the idea of user space or device space. I simply do not understand what they are, and how I should represent them in code. Can anyone point me to a good source of technical information on the subject (maybe a book RATHER than the sea of words known as the "PDF specs")? Thank you in advance.
This is a slightly out-of-the-box suggestion, but you should try reading the Apple Quartz 2D documentation. Obviously, you are not on OS X (since you have tagged c#), but I make this suggestion because the Quartz 2D drawing model is almost the same as the PDF drawing model. In fact, rendering a PDF content stream on OS X (and iOS) is very easy because every PDF operator has an equivalent Quartz call (via a framework called Core Graphics).
Start with this.
(The reason for this similarity is that the initial Mac OS/NeXTSTEP drawing model was based on something called Display PostScript.)
As for user space and device space, they are pretty intuitive. Device space is just the coordinate system of the device: where the origin is, and which direction the axes go. On OS X, for example, a screen's origin is at the top left hand corner of the screen, whereas PDF page space (usually) has its origin at the bottom left hand corner of the page. This means that everything you draw has to be transformed appropriately, which seems pretty cumbersome, except that this so-called CTM (current transformation matrix) can be applied once: in the OS X case it involves a scale transform to flip the page, and a translate to slide it down. In the Quartz case, once you have applied these two transforms to the drawing context, you can forget about the problem. I imagine that the Windows API you are using has a very similar solution.
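To make the flip-and-slide concrete, here is a minimal C# sketch of that one-time transform using GDI+ (System.Drawing); ApplyPdfPageTransform and pageHeightPoints are illustrative names, and a real renderer would take the height from the page's MediaBox:

    using System.Drawing;

    static class PageSetup
    {
        // Map PDF page space (origin bottom-left, y up) onto GDI+ device
        // space (origin top-left, y down) before replaying the operators.
        public static void ApplyPdfPageTransform(Graphics g, float pageHeightPoints)
        {
            g.TranslateTransform(0, pageHeightPoints); // slide the origin to the bottom edge
            g.ScaleTransform(1, -1);                   // flip the y axis so it points up
            // A PDF point (x, y) now lands at device (x, pageHeight - y),
            // so content-stream coordinates can be used as-is from here on.
        }
    }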
It would also help to read the Wikipedia entry on affine transforms.
Hi,
I take two pictures from a webcam and split each into 9 pieces. Then I match the pieces of the two pictures. The problem is that my webcam has picture noise, so my program thinks that something has changed in every piece of the second picture.
I need a nudge in the right direction to solve this problem. Please help.
The pictures from the webcam will never match exactly; even the slightest change in lighting will cause a difference. For this kind of picture matching you have to use a forgiving algorithm that tolerates at least some change and still declares a match. Creating a histogram of each piece and then calculating the difference between the histograms seems to be a promising approach.
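As a starting point, here is a minimal sketch of such a histogram comparison for grayscale tiles; HistogramDifference, the bin count, and whatever threshold you pick are illustrative and will need tuning for your camera's noise level:

    using System;

    static class TileComparer
    {
        // Returns a normalized difference score; small values mean
        // "probably unchanged" despite sensor noise.
        public static double HistogramDifference(byte[] tileA, byte[] tileB, int bins = 32)
        {
            var histA = new int[bins];
            var histB = new int[bins];
            foreach (byte p in tileA) histA[p * bins / 256]++;
            foreach (byte p in tileB) histB[p * bins / 256]++;

            double diff = 0;
            for (int i = 0; i < bins; i++)
                diff += Math.Abs(histA[i] - histB[i]);

            // Normalize by pixel count so the score is comparable across tile sizes.
            return diff / (tileA.Length + tileB.Length);
        }
    }

A piece would then count as "changed" only when the score exceeds a threshold you calibrate against two frames of a static scene.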
See the following threads on SO (just for examples, there are many more threads):
Image comparison - fast algorithm
Image comparison algorithm
Also, I would check out Emgu if you are working with .NET; it is a .NET wrapper for OpenCV, a computer vision library.
I am making an object tracking application. I have used Emgu CV 2.1.0.0 to load a video file into a PictureBox. I have also taken the video stream from a web camera. Now I want to draw an unfilled square on the video stream with the mouse and then track the object enclosed by that square as the video continues to stream.
This is what people have suggested so far:
(1) .NET video overlay drawing (DirectX) - but this is for C++ users; the suggester said that there are .NET wrappers, but I had a hard time finding any.
(2) The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, this does not use a mouse.
(3) GDI+ and mouse handling - this is an area where I do not have a clue.
And for tracking the object in the square, I would appreciate it if someone could give me some research paper links to read.
Any help as to using the mouse to draw on a video is greatly appreciated.
Thank you for taking the time to read this.
Many Thanks
It sounds like you want to do image detection and / or tracking.
The EmguCV ( http://www.emgu.com/wiki/index.php/Main_Page ) library provides a good foundation for this sort of thing in .Net.
e.g. http://www.emgu.com/wiki/index.php/Tutorial#Examples
It's a pretty meaty subject, with quite a few years and different branches of research associated with it, so I'm not sure anyone can give the definitive guide to such things, but reading up on neural networks and related topics would give you a pretty good grounding in the way EmguCV and related libraries manage it.
It should be noted that systems such as EmguCV are designed to recognise predefined items within a scene (such as a licence plate number) rather than an arbitrary feature within a scene.
For arbitrary tracking of a given feature, a search for research papers on edge detection and the like (in combination with a library such as EmguCV) is probably a good start.
(You also may want to sneak a peek at an existing application such as http://www.pfhoe.com/ to see if it fits your needs)
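As for the mouse-drawing half of the question, option (3) from your list is less daunting than it sounds. Below is a minimal WinForms sketch that lets the user drag out a rubber-band rectangle over a PictureBox showing video frames; SelectionOverlay and every name in it are illustrative, not from any particular library:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class SelectionOverlay
    {
        private readonly PictureBox _video;
        private Point _start;
        private Rectangle _selection;
        private bool _dragging;

        public Rectangle Selection => _selection; // hand this to your tracker

        public SelectionOverlay(PictureBox video)
        {
            _video = video;
            _video.MouseDown += (s, e) => { _dragging = true; _start = e.Location; };
            _video.MouseMove += (s, e) =>
            {
                if (!_dragging) return;
                _selection = Rectangle.FromLTRB(
                    Math.Min(_start.X, e.X), Math.Min(_start.Y, e.Y),
                    Math.Max(_start.X, e.X), Math.Max(_start.Y, e.Y));
                _video.Invalidate(); // repaint so the rubber band updates
            };
            _video.MouseUp += (s, e) => _dragging = false;
            _video.Paint += (s, e) =>
            {
                if (_selection.Width > 0 && _selection.Height > 0)
                    e.Graphics.DrawRectangle(Pens.Red, _selection); // unfilled rectangle
            };
        }
    }

Because the rectangle is drawn in the Paint handler, it survives each new frame; on MouseUp you would pass Selection to whatever tracking code you settle on.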
I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be possible because you could get direct access to the video buffer. That is typically not available on today's systems, as video memory is managed by the video driver and the OS. Further, there really isn't a 1:1 mapping between a video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and the other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to a back buffer and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used the System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
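For example, here is a minimal sketch of the WriteableBitmap approach, treating the bitmap as a simulated framebuffer you can poke pixels into; it assumes a WPF window whose XAML contains an Image element named VideoImage (that name is my assumption, not standard WPF boilerplate):

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    public partial class MainWindow : Window
    {
        // A 320x240 BGRA "framebuffer" that WPF displays for us.
        private readonly WriteableBitmap _framebuffer =
            new WriteableBitmap(320, 240, 96, 96, PixelFormats.Bgra32, null);

        public MainWindow()
        {
            InitializeComponent();
            VideoImage.Source = _framebuffer;
        }

        // "Poke" a single pixel, loosely like writing to video memory.
        private void SetPixel(int x, int y, byte r, byte g, byte b)
        {
            var pixel = new byte[] { b, g, r, 255 }; // BGRA byte order
            _framebuffer.WritePixels(new Int32Rect(x, y, 1, 1), pixel, 4, 0);
        }
    }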
It's a full-featured topic that's outside the scope of this answer (it won't fit; not that I wouldn't ramble about it for hours :-), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation, although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
This is for an arrivals/departures flight display. The display is difficult to read because of blurry fonts. The current setup is an ASP.NET page displayed using Internet Explorer on an HDTV.
From the software side, what can I do to produce good-looking fonts? I've noticed that PowerPoint presentations have nicely rendered fonts even at smaller resolutions. Refactoring as a Windows application is an option.
Note: I know there is an issue with the hardware that needs to be worked out, but I want to make sure I'm displaying the best fonts possible. The current hardware setup is a VGA output into hardware that converts it to component video, then a long cable run to an HDTV.
Use ClearType.
If it's an LCD connected with DVI or VGA, set it to the native resolution.
If PowerPoint looks good, then I can assume your display is set up reasonably well. Make sure "Always use ClearType for HTML" is checked in IE under Tools / Internet Options / Advanced.
Note you need to restart the browser for this to take effect.
Edit: To know when you've got it right, load the PowerPoint that looks good, and use the exact same font face in your web app. Then compare them side by side so you know when they look exactly the same.
"The current hardware setup is a VGA output into hardware that converts it to component video, then a long cable run to an HDTV."
One more thing: as a rule, component video comes in several predefined resolutions, namely 720x576 (576p), 1280x720 (720p), and 1920x1080 (1080p).
It seems that your VGA-to-YPbPr hardware rescales the picture.
Set your VGA resolution to one of those mentioned above.
Are you sure it has to do with the software at all? Have you checked the connection to the HDTV, the resolution used, and any upscaling the HDTV performs to map the incoming signal to its (potentially larger) number of display pixels?
It's not on the software side, but probably the biggest factor in font readability is whether the display shows the picture with 1:1 pixel mapping between the display and the source, and whether any overscan is going on, causing unnecessary interpolation.