Delay starting XNA game: switching to fullscreen messes up intro screen - C#

I'm making a Windows game using XNA 4.0. I have a quick little intro screen that shows our studio logo and plays a sound. It lasts 1.5 seconds and looks and works as desired in windowed mode.
We want to run the game full screen, so all I added was "graphics.IsFullScreen = true" in the Game subclass constructor, after the GraphicsDeviceManager is instantiated and we've set the preferred backbuffer dimensions. When the game starts, the video card glitches my monitors for a second or two while switching resolutions - that's a customary and understandable delay while the video card, the device drivers and my monitors all sort out the change, but XNA is already running the game loop while all this nonsense is going on.
This means my intro starts, runs and is over by the time the system actually gets around to displaying what I'm drawing. What I'd really like is a way to detect when the video card is actually rendering, so that I only start drawing, playing sound and timing things once the player can see them. Searching around online, I've seen references to a "graphics.EnsureDevice()" call, but that seems to have been deprecated and is no longer available in XNA 4.0.

I guess this is kind of dirty, but you could fire off a thread that continually checks whether IsFullScreen is set to true in the active graphics device's presentation parameters, after you set the GraphicsDeviceManager.IsFullScreen property to true.
Once it is set, you would start your game loop.
You could also write it so that the thread only fires off if you set GraphicsDeviceManager.IsFullScreen to true. That way it would support both modes.
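A minimal sketch of that polling idea, done in Update rather than on a separate thread (which avoids touching the GraphicsDevice cross-thread); introStarted, introTimer and PlayIntroSound are hypothetical names of your own, while the IsFullScreen check is XNA 4.0's PresentationParameters property:

protected override void Update(GameTime gameTime)
{
    if (!introStarted)
    {
        // In windowed mode start immediately; in fullscreen, wait until the
        // device actually reports the switch as completed.
        if (!graphics.IsFullScreen || GraphicsDevice.PresentationParameters.IsFullScreen)
        {
            introStarted = true;
            introTimer = TimeSpan.FromSeconds(1.5); // intro length from the question
            // PlayIntroSound(); // hypothetical helper
        }
    }
    else
    {
        // normal intro / game update logic here
    }

    base.Update(gameTime);
}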

Related

How to deal with high-resolution devices when porting to Android?

I recently developed a car parking game using Unity 2018, and I was seeing a lot of stutter on my Note 4. Later I turned the device resolution down from 1440p to 1080p, and the stuttering was gone. I would like to know how I can detect whether a device's resolution is above 1080p and downscale it; if there is a better way of handling this, please let me know.
You can use Screen.SetResolution to set a specific value (do it in Awake() in some script in your first scene), and then make sure you set the Blit Type in Player Settings to Always, or you'll run into this problem.
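A minimal sketch of that, assuming it sits on an object in the first scene; the ResolutionCap name and the 1080 cap are just illustrative:

using UnityEngine;

public class ResolutionCap : MonoBehaviour
{
    private const int MaxHeight = 1080;

    private void Awake()
    {
        // Only downscale devices whose native height exceeds the cap.
        if (Screen.height > MaxHeight)
        {
            float aspect = (float)Screen.width / Screen.height;
            int targetWidth = Mathf.RoundToInt(MaxHeight * aspect);
            Screen.SetResolution(targetWidth, MaxHeight, true);
        }
    }
}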

SteamVR loading-screen flicker when recording with FFMPEG in Unity

I have a heatmap plugin integrated into my Unity VR game (with SteamVR). The plugin uses eye-tracking information to generate live heatmaps as the user gazes at different elements in the game. As the heatmaps are generated, the whole camera view (the user's) is overlaid with the heatmap info and written to an MP4 file using FFMPEG.
The whole process works fine, but I have an annoying problem: during the recording, the user's camera view is not stable and keeps flickering, and it stops only when the recording is stopped. It interrupts the smooth flow of the game.
For now, I've narrowed the trouble down to this code:
public void Write(byte[] data)
{
    if (_subprocess == null) return;
    _stdin.Write(data);
    _stdin.Flush();
}
From my understanding, this is the part of the code where standard input is used to write to the file system, so I surmise that the problem must be with accessing the file system, which in turn causes a delay each time a frame is written in the update method. Correct me if I am wrong here.
The loading screen which appears during every frame write looks something like the above. It interrupts the smooth flow of the game and also makes the recording useless, as the user keeps focusing on the flicker rather than the actual objects of interest. I would really be grateful if someone could shed light on the issue here.
Accessing the file system (or a process pipe) is slow compared to a single frame, and doing it on the main thread stalls rendering. Try offloading that work to another thread or a coroutine.
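A minimal sketch of the threaded approach, assuming _stdin is the FFMPEG process's standard input stream from the code above; AsyncFrameWriter and its members are hypothetical names:

using System.Collections.Concurrent;
using System.IO;
using System.Threading;

public class AsyncFrameWriter
{
    private readonly BlockingCollection<byte[]> _frames = new BlockingCollection<byte[]>();
    private readonly Stream _stdin;
    private readonly Thread _writerThread;

    public AsyncFrameWriter(Stream stdin)
    {
        _stdin = stdin;
        _writerThread = new Thread(WriteLoop) { IsBackground = true };
        _writerThread.Start();
    }

    // Called from Update(): just enqueue, never block the render thread.
    public void Write(byte[] data) => _frames.Add(data);

    private void WriteLoop()
    {
        foreach (var frame in _frames.GetConsumingEnumerable())
        {
            _stdin.Write(frame, 0, frame.Length);
            _stdin.Flush();
        }
    }

    // Call when recording stops so the writer thread can drain and exit.
    public void Stop() => _frames.CompleteAdding();
}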

Detect when user takes a screenshot in Unity

So in this maze kind of game I'm making, I will show the maze to the player for 30 seconds.
What I don't want is the player taking a screenshot of the maze.
I want to do something like Snapchat, or Instagram, how it detects when you take a screenshot of a snap/story.
I'm using C#. Preventing the user from taking a screenshot would also be fine; I don't mind either way.
Is there a possible way to detect when the user takes screenshots or prevent it in Unity?
No, you can't detect this reliably. The player could also just take a photo with a separate camera. Furthermore, there are endless ways to create a screenshot, and the OS has no "callback" to inform an application about it. You could try to detect the "Print Screen" key, but as I said, there are other screenshot and screen-recording tools which could use any hotkey or no hotkey at all. I have never used Snapchat, but it seems it's not safe either.
There are even monitors and video projectors which have a freeze mode to keep the current image. You could also run the app in a virtual machine; there you can actually freeze the whole virtual PC or take screenshots of the virtual screen, and an application running inside the VM has no way to even detect or prevent that.
I once had to do something similar. If you just want to do what Snapchat does, it can be done, but remember that as long as the app is running on someone's device instead of your server, it can be de-compiled, modified and compiled again, so this screenshot detection can be circumvented.
First of all you need to know this about Apple's rule:
2.5.9 Apps that alter or disable the functions of standard switches, such as the Volume Up/Down and Ring/Silent switches, or other native user interface elements or behaviors will be rejected.
So, the idea of altering what happens when you take a screenshot is eliminated.
What you do is start the game, then do the following while you are showing the maze to the player for 30 seconds:
On iOS:
Continuously check if the player presses the power and home buttons at the same time. If this happens, restart the game and show the maze to the player for 30 seconds again. Do it over and over again until the player stops doing it. You can even disconnect or ban the player if you detect the power + home button press.
On Android:
Continuously check if the player presses the power and volume-down buttons at the same time, and perform the same action described above.
You cannot do this with C# alone. You have to make plugins for both iOS and Android. The plugin should use Java to do the detection on Android and Objective-C to do the detection on iOS, because the required APIs are not available in C#. You can then call the Java and Objective-C functions from C#.
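A rough sketch of what the C# side of such a bridge could look like; ScreenshotWatcher, startWatching, StartScreenshotWatcher and OnScreenshotDetected are all hypothetical names, and the actual button-combination detection would live in the native Java/Objective-C code:

using System.Runtime.InteropServices;
using UnityEngine;

public class ScreenshotDetector : MonoBehaviour
{
#if UNITY_IOS && !UNITY_EDITOR
    [DllImport("__Internal")]
    private static extern void StartScreenshotWatcher(); // hypothetical native function
#endif

    private void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Hypothetical plugin class shipped as an .aar/.jar in Plugins/Android.
        using (var watcher = new AndroidJavaClass("com.example.ScreenshotWatcher"))
        {
            watcher.CallStatic("startWatching");
        }
#elif UNITY_IOS && !UNITY_EDITOR
        StartScreenshotWatcher();
#endif
    }

    // Called back from the native side, e.g. via
    // UnitySendMessage("ScreenshotDetector", "OnScreenshotDetected", "").
    public void OnScreenshotDetected(string message)
    {
        Debug.Log("Screenshot attempt detected - restart the maze, ban, etc.");
    }
}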
Other improvements to make:
Check for external display devices and disable them while you are showing the maze to the player for those 30 seconds, then re-enable them afterwards.
When you detect the screenshot button press as described above, immediately take your own screenshot too. Loop through the images in the player's picture gallery and load all the images taken that day, then compare each with the screenshot you just took and see if they match (a rough comparison sketch follows this list). If they do, you can be very sure that the player is trying to cheat. Take action such as banning the player, restarting the game, or even trolling the player by sending their screenshot to the other player. You can also use it as proof that the user was cheating if they complain after being banned.
Finally, you can go even deeper by using OpenCV. While you are showing the player the maze for 30 seconds, start the device's front camera and use OpenCV to continuously check whether any object other than the player's head is in front of the camera. If there is, the player may be trying to photograph the screen with another device, so take action immediately. You can use machine learning to train this.
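A rough sketch of the comparison step from the gallery idea above, assuming both images are already loaded into readable Texture2D objects of the same size (pulling gallery images into Unity still needs a native plugin; ScreenshotComparer and RoughlyMatches are hypothetical names):

using UnityEngine;

public static class ScreenshotComparer
{
    // Returns true when the fraction of exactly matching pixels exceeds the threshold.
    public static bool RoughlyMatches(Texture2D a, Texture2D b, float threshold = 0.95f)
    {
        if (a.width != b.width || a.height != b.height) return false;

        Color32[] pa = a.GetPixels32();
        Color32[] pb = b.GetPixels32();
        int matches = 0;

        for (int i = 0; i < pa.Length; i++)
        {
            if (pa[i].r == pb[i].r && pa[i].g == pb[i].g && pa[i].b == pb[i].b)
                matches++;
        }

        return (float)matches / pa.Length >= threshold;
    }
}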
How far you go depends on how much time you want to spend and how much you care about players cheating online. The only thing to worry about is players de-compiling the game and removing these features, but it is still worth implementing them.
My Android phone takes screenshots differently. I swipe down from the top of the screen and select the "Capture" option.
Nothing is ever the same across Android devices; this is different on some older or different devices. You can detect swipe patterns on the screen. The best way to do this is to build a profile that handles the devices from each manufacturer.
For those commenting: this is possible to do, and you should do it, especially if it is a multiplayer game. Just because a game can be hacked does not mean a programmer should not implement basic hack-prevention mechanisms. Implement the basics, then improve them as you get feedback from players.

How does Windows Media Player deal with monitor refresh rates?

I'm writing an animation app in C#/WinForms (see this question). Basically, the animation in my application is smooth but shows tearing effects; when I take the same animation and render it to an AVI file and play it with Windows Media Player, the animation shows no tearing effects at all. I know WMP is not changing the frame rate because the animation is synchronized with music.
I assume WMP uses DirectX or some other technology that is aware of the monitor's refresh rate and scanline position etc., but I always assumed that programming to the refresh rate would constrain the frame rate. Obviously this isn't the case with WMP.
Does anyone know anything about how WMP (or other video players) renders video internally? I've searched but I can't seem to find any details about this.
It's been a while since I did any DirectX programming, so this may be out of date.
From what I remember, with DirectX you could set up a flipping chain of buffers, usually three: the buffer being displayed, the buffer to be displayed next, and the buffer being written to. On an update, DirectX will wait for a v-sync before updating the displayed buffer. Now, this will cause a discrepancy between the displayed image and the image that should be displayed, but it will be at most one refresh, about 1/60th of a second, so you're unlikely to notice.
Some ASCII art to show what I mean:
|-|-|-|-|-|-|-|-|-|-|-|-|-|-| - screen refresh
|----|----|----|----|----|--- - animation
|-----|---|-----|---|-----|-- - displayed
Are you painting each frame of your animation to a memory bitmap first, and then blitting the bitmap to your window? If not, this might be the solution for you.
(this is, of course, in addition to double-buffering)
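A minimal sketch of that approach, assuming a double-buffered Panel subclass; DrawFrame is a hypothetical method that renders one cel of the animation into the offscreen bitmap:

using System.Drawing;
using System.Windows.Forms;

public class AnimationPanel : Panel
{
    private readonly Bitmap _backBuffer;

    public AnimationPanel(int width, int height)
    {
        DoubleBuffered = true;
        _backBuffer = new Bitmap(width, height);
    }

    public void RenderFrame()
    {
        using (var g = Graphics.FromImage(_backBuffer))
        {
            // DrawFrame(g); // hypothetical: draw the current cel into the bitmap
        }
        Invalidate(); // request a repaint; OnPaint blits the finished bitmap
    }

    protected override void OnPaintBackground(PaintEventArgs e)
    {
        // Intentionally empty: no background erase, which avoids flicker.
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.DrawImage(_backBuffer, Point.Empty);
    }
}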

How to eliminate tearing from animation?

I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is weird). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh.
When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio.
Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) in order to enable my program to be aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX for displaying the cels? Or do I have to fully use DirectX for rendering in order to deal with this problem?
Update: I forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by using BitBlt?
Update 2: the effect I'm seeing is definitely not flicker (which is different from tearing). My panel is double-buffered, sets the control styles for AllPaintingInWmPaint, UserPaint, OptimizedDoubleBuffer etc., overrides OnPaintBackground and so on. All of this is necessary to eliminate flicker, but the tearing problem remains. It is especially pronounced when the animation has very fast-moving objects or objects that change from light to dark very quickly. When objects are slow-moving and don't change color rapidly, the tearing effect is much less noticeable (because consecutive cels are always very similar to each other).
Tearing occurs when your image update is not in sync with the refresh rate of the monitor. The monitor shows part of the previous image, part of the new image. When the objects in the image move fast, the effect is quite noticeable.
It isn't fixable in Windows Forms, you can't get to the video adapter's v-sync signal. You can in a DirectX app.
I tried the double-buffering idea on the project I'm working on at the moment, but I didn't get very good results with it. In the end, I created the following:
1. A System.Drawing.Bitmap for my offscreen buffer. Decode the animation into this bitmap.
2. A UserControl the same size as the image in (1), where the OnPaintBackground method was empty (no drawing, no call to the base class) and OnPaint did a Graphics.DrawImage to copy the offscreen image to the screen.
Now, you've got a weird animation rate so the tearing is almost certainly to do with a mismatch between screen update rate and screen refresh rate. You are updating the screen midway through the screen's refresh so the screen is drawing the old frame at the top of the screen and the new frame at the bottom of the screen. If you can synchronise the frame rate with the display refresh rate, the tearing should disappear.
You would be better off using DirectX (or OpenGL) for tasks like this. But if you want to stick to WinForms only, use the DoubleBuffered property.
Double buffer it.
You can enable double buffering using window styles or, what's probably easier, draw to an offscreen picture and then swap them.
If this doesn't work, then the best thing to do is BitBlt and double buffer.
Essentially it's the same thing.
Have a reference to two bitmaps, one is the screen, the other is the buffer.
You draw to the buffer first, then blit that entire thing to the screen.
This way you only ever write live data to the buffer. The screen simply shows something you made earlier (Blue Peter style).
Tearing is an artifact of one frame being drawn on top of another. The only safe ways of avoiding it are a) waiting for vsync or b) drawing behind the beam (which is rather tricky). Double buffering alone doesn't guarantee against tearing, since you can have double buffering but still draw with vsync off. Some cards might also have the vsync wait option "forced off". You need to check the documentation regarding vsync and how to check where the scan line is. This is the only safe way to do it. Also, keep in mind that waiting for vsync will lock your framerate.
