I'm writing an animation app in C#/WinForms (see this question). Basically, the animation in my application is smooth but shows tearing effects; when I take the same animation and render it to an AVI file and play it with Windows Media Player, the animation shows no tearing effects at all. I know WMP is not changing the frame rate because the animation is synchronized with music.
I assume WMP uses DirectX or some other technology that is aware of the monitor's refresh rate and scanline position etc., but I always assumed that programming to the refresh rate would constrain the frame rate. Obviously this isn't the case with WMP.
Does anyone know anything about how WMP (or other video players) renders video internally? I've searched but I can't seem to find any details about this.
It's been a while since I did any DirectX programming, so this may be out of date.
From what I remember, with DirectX you could set up a flipping chain of buffers, usually three: the buffer being displayed, the buffer about to be displayed, and the buffer being written to. On an update, DirectX will wait for a v-sync before flipping in the next buffer. This causes a discrepancy between the displayed image and the image that should be displayed, but it is, at most, one refresh, about 1/60th of a second, so you're unlikely to notice.
Some ASCII art to show what I mean:
|-|-|-|-|-|-|-|-|-|-|-|-|-|-| - screen refresh
|----|----|----|----|----|--- - animation
|-----|---|-----|---|-----|-- - displayed
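In code, setting up a chain like that might look roughly like the following. This is a minimal sketch, assuming the SharpDX wrapper (my choice for illustration; any managed DirectX layer works similarly, and 'form' is assumed to be your render window). The important part is the syncInterval argument to Present, which makes the flip wait for the next v-sync:

    // Minimal sketch using SharpDX (an assumed wrapper, not part of the original answer).
    using SharpDX.Direct3D;
    using SharpDX.Direct3D11;
    using SharpDX.DXGI;
    using Device = SharpDX.Direct3D11.Device;

    var desc = new SwapChainDescription
    {
        BufferCount = 2,                // two back buffers + the front buffer = a chain of three
        ModeDescription = new ModeDescription(1280, 720, new Rational(60, 1), Format.R8G8B8A8_UNorm),
        IsWindowed = true,
        OutputHandle = form.Handle,     // 'form' is assumed to be your render window
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput
    };

    Device device;
    SwapChain swapChain;
    Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None,
                               desc, out device, out swapChain);

    // Each frame: render into the current back buffer, then present.
    // syncInterval = 1 tells DXGI to wait for the next v-sync before flipping,
    // so the displayed buffer only ever changes between refreshes (no tearing).
    swapChain.Present(1, PresentFlags.None);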
Are you painting each frame of your animation to a memory bitmap first, and then blitting the bitmap to your window? If not, this might be the solution for you.
(this is, of course, in addition to double-buffering)
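For illustration, the pattern might look like this (a minimal sketch; the control name is made up). Each frame is composed entirely in a memory Bitmap, and the paint handler does nothing but copy that finished bitmap to the window:

    using System.Drawing;
    using System.Windows.Forms;

    public class AnimationPanel : Panel
    {
        private readonly Bitmap offscreen;   // memory bitmap the frames are painted into

        public AnimationPanel(int width, int height)
        {
            offscreen = new Bitmap(width, height);
            DoubleBuffered = true;           // in addition to the offscreen bitmap, as noted above
        }

        // Compose the next frame entirely in memory...
        public void RenderFrame(Image cel)
        {
            using (var g = Graphics.FromImage(offscreen))
            {
                g.DrawImage(cel, 0, 0);
            }
            Invalidate();                    // ...then ask for a single repaint
        }

        // ...and blit the finished frame to the window in one operation.
        protected override void OnPaint(PaintEventArgs e)
        {
            e.Graphics.DrawImageUnscaled(offscreen, 0, 0);
        }
    }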
I need a very simple video player in my C# app. It only has to loop a video from a file and nothing more. Since I'm developing a WPF application, I've tried to use System.Windows.Controls.MediaElement. It has all the functions I need, but works quite poorly: I've played some full HD videos on it, and it's always lagging and spiking.
To make sure it wasn't a problem with my app, I created two test applications. The first is a WinForms borderless 1920x1080 window with only an AxWMPLib.AxWindowsMediaPlayer control. The second is a borderless WPF window of the same size with a System.Windows.Controls.MediaElement.
Then I ran two videos on both players. Here are their specs:
1: 1920x1080, 12000kb/s, 25 FPS, wmv
2: 1920x1080, 5730kb/s, 25 FPS, mp4
On AxWindowsMediaPlayer everything looks fine. But MediaElement seems to drop some frames and ignore vertical sync (during fast scene changes you can see parts of one frame on top of another). So it's completely unsuitable, and it shouldn't behave like that, but I've found nothing about the problem in Microsoft's official docs (they only suggest using MediaElement instead of AxWindowsMediaPlayer in WPF apps). Is it possible to make it work more smoothly, or is using an additional WinForms Form with AxWindowsMediaPlayer the only solution?
It was written over five years ago (look for James Dailey's messages in the thread); there have possibly been some improvements since, but overall I suppose the statements are still in good standing. I'll pick out some relevant quotes:
As you know the WPF environment is constructed from the ground up to offer developers a very rich “graphics first” environment. The MediaElement in particular was designed to allow you to mix video with various other UI components seamlessly. This solution will give you the flicker free, “draw over video” solution that you are looking for. The best part is you can do all of this in C#. The bad part of this solution is that the MediaElement is not designed for displaying time sensitive media content. In other words, the MediaElement is notorious for dropping and delaying the display of video frames. There are ways to minimize this such as using SD rather than HD content, use a video accelerated codec, etc.
also:
Unfortunately you can’t really tell the WPF MediaElement to never drop frames. The term we use for this class of issues is “disparate clocks”. In this case WPF is updating the screen at a certain rate (clock 1). The MediaElement (based on WMP) is cranking out video frames at a slightly different rate (clock 2). Given the underlying technologies there is currently no way to synchronize the two clocks and force them to “tick” at the same rate. Since the display will only be updated according to the WPF clock, multiple frames of video may be sent from the MediaElement to WPF between clock ticks. Because of this the MediaElement may appear to drop frames. This is a very common problem in multimedia development and there is no simple solution.
Windows Media Player uses the Media Foundation and DirectShow APIs, which provide media playback with a high-quality video experience.
I'm making a Windows game using XNA 4.0. I have a quick little intro screen that shows our studio logo and plays a sound. It lasts 1.5 seconds and looks and works as desired in windowed mode.
We want to run the game full screen. So all I added was "graphics.IsFullScreen = true" to the Game subclass constructor, after the GraphicsDeviceManager is instantiated and we've set the preferred backbuffer dimensions. When the game starts, the video card glitches my monitors for 1 or 2 seconds while switching resolutions and so on. That's all a customary and understandable delay while the video card, the device drivers, and my monitors figure out the change, but XNA is running the game loop while all this nonsense is going on.
This means my intro starts and runs before the system gets around to actually displaying what I'm drawing; by the time it does, the intro is over. What I'd really like is a way to detect when the video card is actually rendering before I start drawing, playing sound, and timing things on the assumption that the player can see them. Searching around online, I've seen references to a "graphics.EnsureDevice()" call that seems to have been deprecated and is no longer available in XNA 4.0.
I guess this is kind of dirty, but you could fire off a thread that continually checks whether IsFullScreen is true in the active graphics device's presentation parameters, after you set GraphicsDeviceManager.IsFullScreen to true.
Once it is set, then you would start your game loop.
You could also write it in a way that the thread would only fire off if you set GraphicsDeviceManager.IsFullscreen to true. That way it would support both modes.
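A rough sketch of that idea (hedged: the flag name and polling interval are mine, and this is a workaround, not an official API):

    using System.Threading;
    using Microsoft.Xna.Framework;

    public class MyGame : Game
    {
        private readonly GraphicsDeviceManager graphics;
        private volatile bool fullscreenReady;   // illustrative name, set by the watcher thread

        public MyGame()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 1920;
            graphics.PreferredBackBufferHeight = 1080;
            graphics.IsFullScreen = true;

            // Poll until the device has actually reached fullscreen.
            new Thread(() =>
            {
                while (graphics.GraphicsDevice == null ||
                       !graphics.GraphicsDevice.PresentationParameters.IsFullScreen)
                {
                    Thread.Sleep(50);
                }
                fullscreenReady = true;
            }) { IsBackground = true }.Start();
        }

        protected override void Update(GameTime gameTime)
        {
            if (!fullscreenReady)
                return;                          // hold the intro until the display is really up
            // ... start the intro timing, sound, and drawing from here ...
            base.Update(gameTime);
        }
    }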
I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. That's typically not available on today's systems, as video memory is managed by the video driver and the OS. Further, there really isn't a 1:1 mapping between a video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and the other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
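For example, the LockBits route might look like this (a minimal sketch; the gradient fill is just placeholder pixel data):

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    // Fill a Bitmap by writing raw BGRA bytes into its locked pixel buffer.
    static void FillWithGradient(Bitmap bmp)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
        try
        {
            var row = new byte[data.Stride];
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width; x++)
                {
                    row[x * 4 + 0] = (byte)x;      // blue
                    row[x * 4 + 1] = (byte)y;      // green
                    row[x * 4 + 2] = 0;            // red
                    row[x * 4 + 3] = 255;          // alpha
                }
                Marshal.Copy(row, 0, data.Scan0 + y * data.Stride, data.Stride);
            }
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }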
It's a full-featured topic that's outside the scope of this answer (it won't fit, not that I wouldn't ramble about it for hours :-), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation, although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is weird). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh.
When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio.
Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) in order to enable my program to be aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX for displaying the cels? Or do I have to fully use DirectX for rendering in order to deal with this problem?
Update: I forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by switching to BitBlt?
Update 2: the effect I'm seeing is definitely not flicker (which is different from tearing). My panel is double-buffered, sets the control styles for AllPaintingInWmPaint, UserPaint, OptimizedDoubleBuffer, etc., overrides OnPaintBackground, and so on. All of these are necessary to eliminate flicker, but the tearing problem remains. It is especially pronounced when the animation has very fast-moving objects or objects that change from light to dark very quickly. When objects are slow-moving and don't change color rapidly, the tearing effect is much less noticeable (because consecutive cels are always very similar to each other).
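For reference, that setup looks something like this in the panel (a minimal sketch of what's described above; the class name is made up):

    using System.Windows.Forms;

    class FlickerFreePanel : Panel
    {
        public FlickerFreePanel()
        {
            // The flicker-elimination styles mentioned above:
            SetStyle(ControlStyles.AllPaintingInWmPaint
                   | ControlStyles.UserPaint
                   | ControlStyles.OptimizedDoubleBuffer, true);
            UpdateStyles();
        }

        // Overridden to suppress background erasure, another source of flicker.
        protected override void OnPaintBackground(PaintEventArgs e) { }
    }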
Tearing occurs when your image update is not in sync with the refresh rate of the monitor. The monitor shows part of the previous image, part of the new image. When the objects in the image move fast, the effect is quite noticeable.
It isn't fixable in Windows Forms; you can't get at the video adapter's v-sync signal. You can in a DirectX app.
I tried the double buffering idea on the project I'm working on at the moment, but I didn't get very good results with it. In the end, I created the following:
1. A System.Drawing.Bitmap for my offscreen buffer. The animation is decoded into this bitmap.
2. A UserControl the same size as the image in (1), where the OnPaintBackground method is empty (no drawing, no call to the base class) and OnPaint does a Graphics.DrawImage to copy the offscreen image to the screen.
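Roughly like this (a minimal sketch under the setup above; names are illustrative):

    using System.Drawing;
    using System.Windows.Forms;

    public class CelControl : UserControl
    {
        public Bitmap Offscreen { get; private set; }   // (1) the offscreen buffer the animation decodes into

        public CelControl(int width, int height)
        {
            Offscreen = new Bitmap(width, height);
            Size = new Size(width, height);             // (2) control matches the image size
        }

        // Empty on purpose: no background erase, no call to the base class.
        protected override void OnPaintBackground(PaintEventArgs e) { }

        // Copy the offscreen image to the screen in a single call.
        protected override void OnPaint(PaintEventArgs e)
        {
            e.Graphics.DrawImage(Offscreen, 0, 0);
        }
    }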
Now, you've got a weird animation rate, so the tearing is almost certainly due to a mismatch between the screen update rate and the screen refresh rate. You are updating the screen midway through the screen's refresh, so the monitor draws the old frame at the top of the screen and the new frame at the bottom. If you can synchronise your frame rate with the display refresh rate, the tearing should disappear.
You would be better off using DirectX (or OpenGL) for such tasks. But if you want to use only WinForms, use the DoubleBuffered property.
Double buffer it.
You can enable double buffering using window styles or, what's probably easier, draw to an offscreen picture and then swap them.
If this doesn't work, then the best thing to do is BitBlt and double buffer.
Essentially it's the same idea.
Keep references to two bitmaps: one is the screen, the other is the buffer.
You draw to the buffer first, then blit the entire thing to the screen.
This way you only ever write live data to the buffer. The screen simply shows something you made earlier (Blue Peter style).
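If you do go the BitBlt route, here's a rough sketch of the P/Invoke involved (a minimal example; error handling omitted):

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class Blitter
    {
        private const uint SRCCOPY = 0x00CC0020; // raster op: straight copy

        [DllImport("gdi32.dll")]
        private static extern bool BitBlt(IntPtr hdcDest, int xDest, int yDest, int width, int height,
                                          IntPtr hdcSrc, int xSrc, int ySrc, uint rop);

        // Blit a fully drawn buffer bitmap onto a control in one GDI call.
        public static void Blit(Control target, Bitmap buffer)
        {
            using (Graphics dst = target.CreateGraphics())
            using (Graphics src = Graphics.FromImage(buffer))
            {
                IntPtr hdcDest = dst.GetHdc();
                IntPtr hdcSrc = src.GetHdc();
                try
                {
                    BitBlt(hdcDest, 0, 0, buffer.Width, buffer.Height, hdcSrc, 0, 0, SRCCOPY);
                }
                finally
                {
                    src.ReleaseHdc(hdcSrc);
                    dst.ReleaseHdc(hdcDest);
                }
            }
        }
    }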
Tearing is an artifact of one frame being drawn on top of another. The only safe ways of avoiding it are to a) wait for v-sync or b) draw behind the beam (which is rather tricky). Double buffering alone doesn't guarantee against tearing, since you can double buffer yet still draw with v-sync off. Some cards may also have the v-sync wait option "forced off". You need to check the documentation regarding v-sync and how to query its state; this is the only safe way to do it. Also, keep in mind that waiting for v-sync will lock your frame rate.
I have an XNA game (it's a slot machine).
I have some really cool animations my artist made for me that are more or less 1600x1000 and over 50 frames.
For all of the animations so far I have been using sprite sheets (where all the frames are in one image file, and when it's rendered it chooses what part of the image to show).
The problem is that you can only load an image of a certain size: 2k x 2k or 4k x 4k, depending on your video card. Obviously putting all the frames into one file is out of the question for an animation this large.
Can you just load each image individually and display them in order? (That is what I used to do for the smaller animations anyway before I found out that isn't how you were supposed to do it)
My Questions:
What if any is a good way to play these large animations?
Is there a benefit to having a sprite sheet instead of loading the frames individually as Texture2Ds?
Is there a (free) way to play fullscreen videos in XNA?
Apparently, XNA 3.1 "now supports the ability to play back video that can be used for such purposes as opening splash and logo scenes, cut scenes, or in-game video displays." That is what you'll want to use - the sizes you're talking about are far too big for conventional animation techniques. Some sample code is here.
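For reference, playback through XNA's VideoPlayer looks roughly like this (a minimal sketch; "intro" is an assumed asset name in your content project):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Media;

    // Inside your Game subclass:
    Video video;
    VideoPlayer player;
    SpriteBatch spriteBatch;

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        video = Content.Load<Video>("intro");    // "intro" is an assumed asset name
        player = new VideoPlayer();
        player.IsLooped = false;
        player.Play(video);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        if (player.State != MediaState.Stopped)
        {
            // Each video frame is exposed as a texture you draw like any sprite,
            // so scaling to fullscreen is just a destination rectangle.
            Texture2D frame = player.GetTexture();
            spriteBatch.Begin();
            spriteBatch.Draw(frame, GraphicsDevice.Viewport.Bounds, Color.White);
            spriteBatch.End();
        }
        base.Draw(gameTime);
    }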