I wrote a simple OpenTK program based on the official OpenTK tutorial (here) that draws a cube on the screen, applies a mixed texture, and rotates it based on elapsed time.
What I found is that when I set GameWindowSettings' RenderFrequency and UpdateFrequency, weird things happen (all times/frequencies are measured with System.Diagnostics.Stopwatch):
1. If I don't set any frequency, the program runs at max speed and I see a smooth textured cube rotating in the window. The measured FPS is about 1000 and updates run at about 1000 per second. The cube rotates smoothly and the CPU/GPU run at 100%.
2. If I set RenderFrequency to, say, 60.0 Hz and UpdateFrequency to, say, 1.0 Hz, the magic happens: the actual UpdateFrequency is about 1 Hz, but the actual RenderFrequency is about half the set value, around 30 Hz. The most stunning thing is that the cube still rotates, but instead of rotating smoothly it stays still for around 1 second and then suddenly moves, with all the rotation happening in a short burst, yet smoothly (I don't see any twitchy movement of the image; it simply rotates very fast and smoothly). CPU and GPU run at about 10-15%.
3. If I leave RenderFrequency at its default value and set only UpdateFrequency, the actual update rate matches the set value, RenderFrequency goes to 1500-2000 Hz, and the cube rotates smoothly. The CPU is at about 50% and the GPU goes up to 90-100%.
4. If I leave UpdateFrequency at its default and set RenderFrequency to 60.0 Hz, the actual UpdateFrequency goes to more than 100 kHz, RenderFrequency matches the set 60 Hz, and the cube rotates as in case 2. The CPU runs at about 30% (why not the max?) and the GPU at around 10%.
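For reference, a minimal sketch of how I set the frequencies (assuming OpenTK 4.x; the variable names are just illustrative):

using OpenTK.Windowing.Desktop;

// Case 2 from the list above: 60 Hz rendering, 1 Hz updates.
var gameWindowSettings = new GameWindowSettings
{
    UpdateFrequency = 1.0,   // updates per second
    RenderFrequency = 60.0   // frames per second (measured ~30 Hz in practice)
};
using var window = new GameWindow(gameWindowSettings, NativeWindowSettings.Default);
window.Run();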
OpenTK's behavior, the CPU/GPU usage, and the rotating cube I see make no sense to me.
I run the software on a Lifebook U Series Laptop with Win10.
Any clue?
I've lost a whole night to this; many thanks in advance.
I've been following a tutorial to make a top-down RPG in Unity. Up until now everything was going well, until I tried to make a new scene: my tilemap wasn't filling tiles properly like it did in the last scene, so I tried to adjust the scale and such to fix it. However, after scrapping the scene and going back to the original scene, my character moves much more slowly. What did I do wrong?
Here you have to understand the frame of reference.
Suppose you are walking on the earth at a speed of 4 km/h towards your destination, and suddenly someone scales up the earth so that its surface area increases, and the distance to your destination increases with it. Someone observing you from outside the earth [your camera in the game] will notice that you now appear to walk very slowly compared to before.
This is what you have done in your game: you scaled the tilemap so that the movable area increased, but the speed of your player is still the same as before.
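One way to compensate is to scale the player's speed by the same factor you scaled the world. A minimal sketch (assuming a typical Update-based movement script; moveSpeed and worldScale are illustrative names, not from your code):

using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    public float moveSpeed = 4f;   // speed tuned for the original tilemap scale
    public float worldScale = 1f;  // set to the factor you scaled the tilemap by

    void Update()
    {
        Vector2 input = new Vector2(Input.GetAxisRaw("Horizontal"),
                                    Input.GetAxisRaw("Vertical"));
        // Scaling the speed by the same factor as the world keeps the
        // player covering the same relative distance per second.
        transform.Translate(input.normalized * moveSpeed * worldScale * Time.deltaTime);
    }
}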
I spent two weeks googling and trying to understand my problem, but I am defeated, so I am asking for help.
The main problem is that, after a certain amount of time on devices with a Mali GPU, my custom unlit fragment shader starts to drop FPS. I see it via an FPS counter in the CPU section; I don't know why it shows up there and not in the render section, but even without the FPS counter I can still feel the frame drop.
On a Samsung S4, Samsung SM-T380, Xiaomi Redmi Note 5, and Samsung Tab S2 everything is OK; the drop happened only on the Samsung S6. To work around it I just halved the resolution with the Screen.SetResolution function.
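Roughly like this (a sketch; assuming it runs once at startup):

// Workaround: halve the rendering resolution.
Screen.SetResolution(Screen.width / 2, Screen.height / 2, true);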
I thought that drawing a full-screen sprite on such a high-resolution device was the cause.
But then I was able to test the app on a Samsung SM-T580, and the drop was there too, even though that device has a lower resolution than the S6.
So I assume the problem is in the ARM Mali GPU - or, more likely, in my own hands.
So again: I am rendering a full-screen sprite with draw mode = Tiled (Continuous) and 3 textures, just changing their UVs in the vertex function and applying them in the fragment function.
This creates a distortion effect.
The shader stats show that I have 10 math operations and 4 texture reads - yes, 3 textures, but I do 4 samples.
The textures are 256x256 with repeat mode.
I tried different texture compressions (DXT5, ETC1/ETC2, Alpha8, R5) - no effect.
I tried different precisions (half and float) in different parts of the shader - no effect.
I suspected alpha, so I took textures without alpha and stopped reading it from tex2D - no effect.
Even the size of the textures doesn't matter; the drop happens even at 64x64.
The dance with ZTest, ZWrite, Blend, Queue, RenderType and other params - no effect.
My guess was that maybe on Mali the uv : TEXCOORD0 must be only float2, not float3 or float4 - no effect.
I applied _Time.y and also tried replacing it with a float set from a script - no effect.
Only reducing the number of tex2D calls in the shader works.
Why?
What am I doing wrong?
I'm making a 2D platform game engine with C# and MonoGame that uses floating-point numbers for all of its maths. It works well so far; however, I'd like to have the option of using the engine for a retro-style pixel-art game, which is obviously best done with integer maths. Is there a simple way to achieve this?
The simplest method I can come up with is to do all the calculations with floating-point numbers, but then, when I draw each sprite, round the position to the nearest multiple of the pixel-art scale (for example, to the nearest 5 pixels for pixel art scaled 5x). This works, but the movement of the player character on screen doesn't feel smooth.
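Roughly like this (a sketch; Snap is an illustrative helper, not from my actual code):

using System;
using Microsoft.Xna.Framework;

// Round a position to the nearest multiple of the pixel-art scale.
static Vector2 Snap(Vector2 position, int pixelScale)
{
    return new Vector2(
        MathF.Round(position.X / pixelScale) * pixelScale,
        MathF.Round(position.Y / pixelScale) * pixelScale);
}

// At draw time:
// spriteBatch.Draw(texture, Snap(playerPosition, 5), Color.White);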
I've tried rounding the player position itself each time I update it, but this breaks my collision detection, causing the player to get stuck on the floor.
Is there a standard way people solve this?
Thanks :)
Apologies for resurrecting an ancient question, but I think there's a very simple way to do this, for both the programmer and the hardware, using GPU triangle rasterization (I'm assuming you're using a GPU pipeline, as this is trivial otherwise). The approach that most directly simulates the look and feel of old hardware rendering at 200p or so is to render your entire game to an offscreen texture at that actual 200p resolution, and to perform your entire game logic, including collision detection, at that resolution. If you have to work in floating point, you can shift all the coordinates by half a pixel, which, combined with nearest-neighbor sampling, should get them to plot precisely at the desired pixel.
Then just draw a rectangle with the offscreen texture to the screen, scaled to the full display resolution using nearest-neighbor sampling and integer scale factors (2x, 3x, 5x, etc.). That should be much simpler than scaling each individual sprite and tile.
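In MonoGame this might look something like the following (a minimal sketch, assuming a 320x200 internal resolution and a 5x integer scale; lowResTarget is an illustrative name):

RenderTarget2D lowResTarget;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // Offscreen texture at the "retro" resolution; all game logic
    // and collision detection work in these coordinates.
    lowResTarget = new RenderTarget2D(GraphicsDevice, 320, 200);
}

protected override void Draw(GameTime gameTime)
{
    // 1) Render the whole scene at 320x200.
    GraphicsDevice.SetRenderTarget(lowResTarget);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin(samplerState: SamplerState.PointClamp);
    // ... draw sprites and tiles in low-res coordinates ...
    spriteBatch.End();

    // 2) Blit the offscreen texture to the backbuffer at an integer scale
    //    with nearest-neighbor (point) sampling.
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin(samplerState: SamplerState.PointClamp);
    spriteBatch.Draw(lowResTarget, new Rectangle(0, 0, 320 * 5, 200 * 5), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}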
This is assuming you want a full retro look and feel where you can't draw things at what would normally be sub-pixel positions (1-pixel increments after scaling 5x instead of 5-pixel increments, or even sub-pixel at full resolution). If you want something that feels more modern, where the scaled-up art can transform (move, rotate, scale) and animate in single-pixel increments or even sub-pixel with floating point, then you do need to scale up every individual sprite and tile. I don't think that would feel so retro and low-res, though; more like something very modern with scaled-up blocky pixel art. Operating at the original low resolution tends to affect not just the look of the game but the feel as well.
I pretty much put the question in the title. Would it help performance-wise if I stopped drawing textures that aren't on the screen anymore? What I mean by this is:
// Sketch: skip the draw when the sprite's bounds don't intersect the
// viewport (spriteBounds/texture/position are illustrative names).
if (GraphicsDevice.Viewport.Bounds.Intersects(spriteBounds))
{
    spriteBatch.Draw(texture, position, Color.White);
}
Or is it so insignificant (if it matters at all) that it doesn't make a difference?
Thanks,
Shyy
It depends. Ultimately, the time spent comes down to three things: sending data to the GPU, vertex shading, and pixel shading.
If the texture is located on a spritesheet containing other textures that are being drawn on screen, and the offscreen draw call is inside the same .Begin()/.End() block as those others, it won't hurt performance, since it takes just as long to send the data and set up the GPU for the spritesheet either way. The 4 off-screen vertices will run through the vertex shader, but that is not a bottleneck. The graphics pipeline culls offscreen primitives between the vertex shader and the pixel shader, so no time is spent in the pixel shader.
But if it is a standalone texture, or in its own .Begin()/.End() block, it will cost time sending its data to the GPU even though the GPU will cull it.
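For example (a sketch; spriteSheet, the source rectangles, and the positions are illustrative):

// Both draws share one GPU setup because they use the same spritesheet
// inside the same Begin/End block; the offscreen one is culled after
// the vertex stage and never reaches the pixel shader.
spriteBatch.Begin();
spriteBatch.Draw(spriteSheet, onScreenPosition, srcRectA, Color.White);
spriteBatch.Draw(spriteSheet, offScreenPosition, srcRectB, Color.White);
spriteBatch.End();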
Whether that cost is significant, only profiling can tell you.
I have a timer. When it ticks, the positions of 12 panels are recalculated from formulas and updated.
The problem is that, although the timer's interval is 1 millisecond, the movement is very slow. There are many calculations. What can be done to improve the speed: using a drawing class, or something else?
The GUI shows the positions; I can move the panels by clicking, and thereby set the values. If the correct way is a drawing class, would I still be able to move the rectangles by clicking and read their values?
although the timer's interval is 1 millisecond
That's the core problem: a Timer cannot tick that fast. The actual timer resolution is constrained by the operating system's clock interrupt rate, which ticks 64 times per second on most Windows machines, or once every 15.625 milliseconds. The smallest interval you can hope to get is therefore 16 msec. So these panels probably now move 16 times slower than you hoped they would.
Keep in mind how this is observed: you only need to keep human eyes happy. They can't perceive anything that changes every 1 msec; anything that updates faster than about 25 times per second just looks like a blur. That's something TV and cinema take advantage of: a movie updates at 24 frames per second, once every 42 milliseconds.
So a sane setting for Timer.Interval is a hair below three times the clock interrupt rate: 46 milliseconds. The actual tick interval will be 3 x 15.625 = 46.875 msec on a regular machine, and still close to 46 msec if the machine runs with a higher clock interrupt rate. You'll get an equivalent frame rate of 21 fps, right on the edge of a blur to human eyes. The next lower sane setting is two times the interrupt rate, or 31 msec for 32 fps. Making it any smaller doesn't make sense; it isn't observable and just burns CPU time for no benefit.
And, importantly, the rate at which a panel moves is now determined by how much you change its Location property in the Tick event handler. The interval is fixed, so the amount of motion is determined by the increment in the position, which will need to be more than the one pixel you are probably using now.
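A minimal sketch of the idea (Windows Forms; moveTimer, panel1, and the 3-pixel step are illustrative):

using System.Drawing;
using System.Windows.Forms;

// Inside your Form: animate a panel with a sane interval and a larger step.
var moveTimer = new Timer();
moveTimer.Interval = 46;   // actual tick ~3 x 15.625 = 46.875 msec, ~21 fps
moveTimer.Tick += (sender, e) =>
{
    // The tick rate is fixed, so speed comes from the per-tick increment:
    // 3 pixels per ~47 msec tick is roughly 64 pixels per second.
    panel1.Location = new Point(panel1.Location.X + 3, panel1.Location.Y);
};
moveTimer.Start();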