Is it conceivable to make a game in System.Drawing? - C#

A while ago I started making a simple game in C#. After some time I started to add more and more objects to the screen and, sadly, I saw my framerate getting really slow.
Bitmap bmp = new Bitmap(1200, 700);               // off-screen back buffer
Graphics FrameGraphics = Graphics.FromImage(bmp); // draws onto the back buffer
while (true)
{
    FrameGraphics.Clear(Color.White);
    // draw every object onto the back buffer
    foreach (GraphicalObject S in ObjectControll.ToList())
    {
        FrameGraphics.DrawImage(S.picture, (float)S.X, (float)S.Y);
    }
    // copy the finished frame to the target Graphics
    DrawHandle.DrawImage(bmp, 0, 0);
    frames++;
}
Why do most 3D games run faster than this loop with only 20 objects in it? Do I need to use other software to make a game?

You could possibly create a game using System.Drawing, however it's not really what it's designed for, and with a complex game and a lot of objects I imagine you will run into problems.
Your best bet is to use the correct tool for the job, something designed for games. Look into something like Unity; personally, I have found the XNA framework very easy to get into as a C# programmer, although it is unsupported and has been superseded by MonoGame.
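For a sense of what that looks like, here is a rough sketch (not from the original answer) of the same kind of draw loop in MonoGame, where images are drawn as GPU textures through a SpriteBatch. GraphicalObject and ObjectControll are borrowed from the question, and the Texture property (a Texture2D) is a hypothetical stand-in for the picture bitmap:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        // load Texture2D objects here, e.g. with Content.Load<Texture2D>(...)
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.White); // equivalent of FrameGraphics.Clear

        spriteBatch.Begin();
        foreach (GraphicalObject s in ObjectControll)
            spriteBatch.Draw(s.Texture, new Vector2((float)s.X, (float)s.Y), Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }
}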

Related

How do I create an OpenGL GrContext for SkiaSharp?

I'm trying to render a SkiaSharp element in my WPF application using an OpenGL backend in the hopes of making it faster. I found this documentation explaining how to work with the resulting surface created, but it has this extremely unhelpful sentence in it:
Skia does not create a OpenGL context or Vulkan device for you. In OpenGL mode it also assumes that the correct OpenGL context has been made current to the current thread when Skia calls are made.
The documentation then proceeds to assume that I know how to create said OpenGL context or Vulkan device. After Googling this for an hour or so, I can tell you there are (apparently) a few ways to do this, involving OpenTK, creating the context manually, or using a WindowsFormsHost to host my drawing element, but I have been completely unable to find any specific information on what I need to do to make these things happen. If I have found this info in my Google searches, I lack the knowledge to recognize it as the answer.
To be 100% clear, what I'm asking is this: What do I need to do before the following lines of code will work?
var context = GRContext.CreateGl();
var gpuSurface = SKSurface.Create(context, true, new SKImageInfo(PageData.Instance.GetTotalWidth(), PageData.Instance.GetTotalHeight()));
I ended up going with the following after installing OpenTK and OpenTK.GLControl from NuGet. I'm still looking for alternatives that don't require adding a 5 MB DLL just to use one object, but this is what I have for now:
public SKSurface GetOpenGlSurface(int width, int height)
{
    if (GPUContext == null)
    {
        // Create an OpenTK control purely to get an OpenGL context,
        // then make that context current on this thread so Skia can use it.
        GLControl control = new GLControl(new GraphicsMode(32, 24, 8, 4));
        control.MakeCurrent();
        GPUContext = GRContext.CreateGl();
    }

    // GPU-backed surface that renders through the context created above
    var gpuSurface = SKSurface.Create(GPUContext, true, new SKImageInfo(width, height));
    return gpuSurface;
}
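A possible way to use it (a sketch, not part of the original answer): draw with the GPU-backed canvas, flush, then snapshot and encode the result so it can be saved or copied into a WPF bitmap:
using (SKSurface surface = GetOpenGlSurface(1024, 768))
{
    SKCanvas canvas = surface.Canvas;
    canvas.Clear(SKColors.White);
    canvas.DrawCircle(512, 384, 100, new SKPaint { Color = SKColors.SteelBlue, IsAntialias = true });
    canvas.Flush(); // submit the GL commands before reading the surface back

    using (SKImage image = surface.Snapshot())  // image still lives on the GPU
    using (SKData data = image.Encode())        // pulls the pixels back and encodes them as PNG
    {
        System.IO.File.WriteAllBytes("test.png", data.ToArray());
    }
}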
It's worth noting that doing this and rendering various images using the GPU instead actually drastically cut performance, even with a decent video card. This is probably because I'm terrible at this, and I'm introducing bottlenecks. I suspect someone who knows what they're doing can avoid these pitfalls just fine.

How to dynamically show or hide meshes from a Burstable ForEach job using Hybrid Renderer V2 in Unity 2020?

I have a specific way of generating meshes and structuring my scene that makes occlusion culling very straightforward and optimal. Now all I need is to know how to actually show or hide a mesh efficiently using the ECS hybrid renderer. I considered changing the layer to a hidden layer in the RenderMesh component, but RenderMesh is an ISharedComponentData and so does not support jobification or Burst. I saw the Unity BatchRendererGroup API and it looked promising with its OnPerformCulling callback, but I don't know if it is possible to hook into the HybridRenderSystem's internal BatchRendererGroup. I also saw the DisableRendering IComponentData tag, which I guess disables an entity's rendering; however, again, this can only be done from the main thread. I could write my own solution to render meshes using Graphics.DrawMesh or something like it, but I would prefer to integrate natively with the Hybrid Renderer in order to also cull meshes that are not related to my procedural meshes.
Is any of this possible? What is the intended use?
I'm not sure it's the best option, but you could try a parallel command buffer:
var ecb = new EntityCommandBuffer( Allocator.TempJob );
var cmd = ecb.AsParallelWriter();

/* job 1 executes with burst & cmd adds/removes Disabled or DisableRendering tags */

// (main thread) job 2 executes produced commands:
Job
    .WithName("playback_commands")
    .WithCode( () =>
    {
        ecb.Playback( EntityManager );
        ecb.Dispose();
    } )
    .WithoutBurst().Run();
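What "job 1" might look like is sketched below (illustrative only, assuming this runs inside a SystemBase.OnUpdate and that cmd is the parallel writer from above; the visibility test is a placeholder): a Burst-compiled Entities.ForEach that adds or removes the DisableRendering tag through the command buffer:
// sketch of "job 1" (assumed names; replace the bounds test with real culling logic)
Entities
    .WithName("toggle_rendering")
    .ForEach( ( Entity entity , int entityInQueryIndex , in WorldRenderBounds bounds ) =>
    {
        bool visible = bounds.Value.Center.y < 100f; // placeholder visibility test
        if( visible )
            cmd.RemoveComponent<DisableRendering>( entityInQueryIndex , entity );
        else
            cmd.AddComponent<DisableRendering>( entityInQueryIndex , entity );
    } )
    .ScheduleParallel();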
There is another way of hiding/showing entities, but it requires you to group adjacent entities in chunks spatially (you're probably doing that already). Then you will be occluding not specific entities one by one but entire chunks of them (sectors of space). It's possible thanks to the fabulous powers of chunk component data:
var queryEnabled = EntityManager.CreateEntityQuery(
      ComponentType.ReadOnly<RenderMesh>()
    , ComponentType.ReadOnly<SharedSector>() // the shared-component filter below requires this type in the query
    , ComponentType.Exclude<Disabled>()
);
queryEnabled.SetSharedComponentFilter( new SharedSector {
    Value = new int3{ x=4 , y=1 , z=7 }
} );
EntityManager.AddChunkComponentData( queryEnabled , default(Disabled) );
// EntityManager.RemoveChunkComponentData<Disabled>( queryDisabled );

public struct SharedSector : ISharedComponentData
{
    public int3 Value;
}
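For the commented-out RemoveChunkComponentData call, the complementary query could look roughly like this (a sketch under the same assumptions; it matches chunks of a sector that currently carry the Disabled chunk component):
var queryDisabled = EntityManager.CreateEntityQuery(
      ComponentType.ReadOnly<RenderMesh>()
    , ComponentType.ReadOnly<SharedSector>()
    , ComponentType.ChunkComponentReadOnly<Disabled>()
);
queryDisabled.SetSharedComponentFilter( new SharedSector {
    Value = new int3{ x=4 , y=1 , z=7 }
} );
EntityManager.RemoveChunkComponentData<Disabled>( queryDisabled );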
The answer is that you can't, and you shouldn't! Unity Hybrid rendering gets its speed by laying out data in sequence. If there were a boolean for each mesh that allowed you to show or hide it, Unity would still have to evaluate it, which I guess is not in their design philosophy. This whole design philosophy in general did not work out for me, as I found out.
My world is made up of chunks of procedurally generated terrain meshes (think Minecraft but better ;)). The problem with this is that each chunk has its own RenderMesh with a unique mesh, meaning that each chunk gets its own... chunk... in memory xD. Which, as appropriate as that sounds, is extremely inefficient.
I decided to abandon Hybrid ECS altogether and use good old game objects. With this change alone I saw a performance boost of 4x (going from 200 to 800 fps). I just used the MeshRenderer.enabled property to efficiently enable and disable rendering. To jobify this I simply stored an array of the mesh bounds and a boolean for whether each one is visible or not. This array I could then evaluate in a job and spit back out an index list of all the chunks that needed their visibility changed. This leaves only setting a few boolean values for the main thread, which is not very expensive at all.
It is not the ECS-friendly solution I was looking for, but from the looks of it, ECS was not exactly my friend here. Having unique meshes for each section of my world was clearly not the intended use case of Hybrid ECS.
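A rough sketch of that job (names and the frustum test are illustrative, not the original code): it compares each chunk's stored bounds against precomputed camera planes and emits the indices whose MeshRenderer.enabled flag needs to flip, leaving only those toggles for the main thread:
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

[BurstCompile]
public struct ChunkVisibilityJob : IJob
{
    [ReadOnly] public NativeArray<float3> Centers;          // chunk bounds centers
    [ReadOnly] public NativeArray<float3> Extents;          // chunk bounds extents
    [ReadOnly] public NativeArray<bool>   CurrentlyVisible; // current MeshRenderer.enabled state
    [ReadOnly] public NativeArray<float4> FrustumPlanes;    // (normal.xyz, distance) per camera plane
    public NativeList<int> ChangedIndices;                  // chunks whose visibility must flip

    public void Execute()
    {
        for (int i = 0; i < Centers.Length; i++)
        {
            bool shouldBeVisible = Inside(Centers[i], Extents[i]);
            if (shouldBeVisible != CurrentlyVisible[i])
                ChangedIndices.Add(i);
        }
    }

    bool Inside(float3 center, float3 extents)
    {
        for (int p = 0; p < FrustumPlanes.Length; p++)
        {
            float3 n = FrustumPlanes[p].xyz;
            float distance = math.dot(n, center) + FrustumPlanes[p].w;
            float radius = math.dot(math.abs(n), extents);
            if (distance + radius < 0f)
                return false; // the whole chunk is behind this plane
        }
        return true;
    }
}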

What is the best way to dynamically generate 3D objects on a grid in Unity?

I am an inexperienced programmer looking for advice on a new Unity project:
I need to generate terrain for a 3D game from fairly large tiles. For now I only need one type of tile, but I was thinking I had better set up a registry system now and dynamically generate that default tile in an infinite grid. I have a few concerns, though, like whether the objects will continue to load as the character moves into the render distance of a new tile (or chunk, if you rather). Also, all the tutorials I have found are wrong for me in some way: they only work in 2D and don't have collision, or they use a static registry and don't allow changing the contents of the tiles in-game.
Right now I don't even know what the code looks like to place a 3D object in the scene without building it from vectors, which maybe I could do. I also don't know how I would want to trigger the code.
Could someone give me an idea of what the code would look like / terminology to look up / a tutorial that gives me what I need?
This looks like a pretty big scope for a new programmer, but let's give it a shot. Generating terrain is a large learning experience when it comes to performance and optimization if you don't know what you're doing.
First off, you'll probably want to make a script that acts as a controller for generating your objects and put it on the player. I would start by generating only a small area, or one chunk, and then move on to generating multiple chunks once you understand what you're doing. To 'place' an object in your scene you will want to make an instance of the object. I'd start by trying to make your grid of objects; for testing purposes this can be done pretty easily on initialization (in the Start() function) with a for loop. For example, if you are trying to make 16x16 squares like Minecraft, have a for loop that runs 16 times (for the x) and a for loop inside of that which runs 16 times (for the z). That way you can make a complete square of, in this case, cubes. Here is some very untested code just to give you an example of what I'm talking about:
public GameObject cube; // Cube you want to make a copy of; this will appear in the editor

void Start()
{
    for (var x = 0; x < 16; x++)
    {
        for (var z = 0; z < 16; z++)
        {
            GameObject newCube = Instantiate(cube); // Creates an instance of the 'cube' object, think of this like a copy.
            newCube.transform.position = new Vector3(x, 0, z); // Places the cube at the x and z from the for loops
        }
    }
}
Now, where you go from here will differ a lot depending on what exactly you're trying to do, but you can start by looking into Perlin noise to add a randomized y level that looks good. It's very easy to use once you grasp the general concept, and the example I provided should help you understand how to use it. The Unity docs even give good examples of how to use it.
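As a minimal sketch of that idea (the scale and height values here are arbitrary, just for illustration), the inner loop above could sample Mathf.PerlinNoise to pick a height:
float scale = 0.1f;            // how quickly the terrain varies across the grid
float heightMultiplier = 5f;   // how tall the hills get
float y = Mathf.PerlinNoise(x * scale, z * scale) * heightMultiplier;
newCube.transform.position = new Vector3(x, y, z);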
In all, programming is all about learning. You'll have to learn how to take only the parts of resources that you need for what you're trying to create. I think what I provided should give you a good start on what you want to create, but it will take a deeper understanding to carry things out on your own. Just test different things, truly try to understand how they work, and you'll be able to incorporate parts of them into your own project.
I hope this helps, good luck!

How to move a PowerPack OvalShape in Visual Studio using threads

I'm trying to make a model that acts like sub-atomic particles for an interesting project. I understand that C# isn't exactly the best language for this kind of undertaking, but since I'm pretty new to coding and only know C# and Java, I decided I would just try it in C#.
Currently what I'm trying to do is get the OvalShape, representing a particle, to passively move towards oppositely charged particles. To get this to run in the background all the time I decided to use threads, but when I ran my simple proof-of-concept test of moving the OvalShape with a thread, it didn't move at all and I don't understand why.
I set up the thread normally using
Thread thrFOA = new Thread(new ThreadStart(ForceOfAttraction));
and then made my test background function:
public void ForceOfAttraction()
{
    ovalShape1.SetBounds(ovalShape1.Location.X + 1, ovalShape1.Location.Y, ovalShape1.Width, ovalShape1.Height);
    ovalShape1.Refresh();
}
So I thought this should make the oval slowly move across the screen, one pixel at a time, however there was no movement at all and I don't understand why not. I also tried this.Refresh(); instead of ovalShape1.Refresh(); since it is in the form class, but got the same result.
I appreciate the time and help, thanks!
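For reference, a minimal sketch of how a background thread is typically wired up to move a WinForms control (assuming a standard form with the ovalShape1 control and the designer hooking up the Load event; this is illustrative, not the original code): the thread has to be started explicitly, run in a loop, and marshal UI changes back to the UI thread.
using System;
using System.Threading;
using System.Windows.Forms;

public partial class Form1 : Form
{
    Thread thrFOA;

    private void Form1_Load(object sender, EventArgs e)
    {
        thrFOA = new Thread(ForceOfAttraction);
        thrFOA.IsBackground = true; // let the app exit even if the thread is still looping
        thrFOA.Start();             // the thread does nothing until Start() is called
    }

    public void ForceOfAttraction()
    {
        while (true)
        {
            // WinForms controls may only be touched from the thread that created them,
            // so marshal each move back to the UI thread.
            Invoke((MethodInvoker)(() =>
                ovalShape1.SetBounds(ovalShape1.Location.X + 1, ovalShape1.Location.Y,
                                     ovalShape1.Width, ovalShape1.Height)));
            Thread.Sleep(16); // roughly 60 moves per second
        }
    }
}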

Texture related memory leak in XNA 4.0 / C#

I found a memory leak in my XNA 4.0 application written in C#. The program needs to run for a long time (days) but it runs out of memory over the course of several hours and crashes. Opening Task Manager and watching the memory footprint, every second another 20-30 KB of memory is allocated to my program until it runs out. I believe the memory leak occurs when I set the BasicEffect.Texture property because that is the statement that finally throws the OutOfMemory exception.
The program has around 300 large (512px) textures stored in memory as Texture2D objects. The textures are not square or even powers of 2 - e.g. can be 512x431 - one side is always 512px. These objects are created only at initialization, so I am fairly confident it is not caused by creating/destroying Texture2D objects dynamically. Some interface elements create their own textures, but only ever in a constructor, and these interface elements are never removed from the program.
I am rendering texture mapped triangles. Before each object is rendered with triangles, I set the BasicEffect.Texture property to the already created Texture2D object and the BasicEffect.TextureEnabled property to true. I apply the BasicEffect in between each of these calls with BasicEffect.CurrentTechnique.Passes[0].Apply() - I'm aware that I'm calling Apply() twice as much as I should, but the code is wrapped inside of a helper class that calls Apply() whenever any property of BasicEffect changes.
I am using a single BasicEffect class for the entire application and I change its properties and call Apply() any time I render an object.
First, could it be that changing the BasicEffect.Texture property and calling Apply() so many times is leaking memory?
Second, is this the proper way to render triangles with different textures? E.g. using a single BasicEffect and updating its properties?
This code is taken from a helper class so I've removed all the fluff and only included the pertinent XNA calls:
// single BasicEffect object for the entire application
BasicEffect effect = new BasicEffect(graphicsDevice);

// loaded from file at initialization (before any Draw() is called)
Texture2D texture1 = Texture2D.FromStream(graphicsDevice, File.OpenRead("image1.jpg"));
Texture2D texture2 = Texture2D.FromStream(graphicsDevice, File.OpenRead("image2.jpg"));

// render object 1
if (effect.Texture != texture1) // effect.Texture eventually throws the OutOfMemory exception
    effect.Texture = texture1;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices1, 0, numVertices1, indices1, 0, numTriangles1);

// render object 2
if (effect.Texture != texture2)
    effect.Texture = texture2;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices2, 0, numVertices2, indices2, 0, numTriangles2);
It's an XNA application, so 60 times per second I call my Draw method, which renders all my various interface elements. This means that I could be drawing between 100-200 textures per frame, and the more textures I draw, the faster I run out of memory, even though I am not calling new anywhere in the update/draw loops. I am more experienced with OpenGL than DirectX, so clearly something is happening behind the scenes that is creating unmanaged memory I'm not aware of.
My only suggestion is to group your textures into atlases rather than using them one by one. It will speed up your rendering time and reduce the load on the GPU, provided that you are rendering a massive number of textures per frame. The reason is that the GPU will not have to swap textures so often, which is an expensive operation. I am drawing on my knowledge of OpenGL here, but XNA is based on DirectX and I am assuming they load textures in a similar manner (unless you are using MonoGame, which lets you use OpenGL).
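To illustrate the atlas idea in the question's own terms (a sketch only; atlasTexture, atlasRegion, and the AtlasUv helper are hypothetical): each object's UVs are remapped into its image's sub-rectangle of one big texture, so effect.Texture is set once and never switched between draw calls:
// Hypothetical helper: remap an object's local UV (0..1) into its atlas sub-rectangle.
Vector2 AtlasUv(Vector2 localUv, Rectangle atlasRegion, int atlasWidth, int atlasHeight)
{
    float u = (atlasRegion.X + localUv.X * atlasRegion.Width)  / (float)atlasWidth;
    float v = (atlasRegion.Y + localUv.Y * atlasRegion.Height) / (float)atlasHeight;
    return new Vector2(u, v);
}

// Set the atlas once, then draw every object without touching effect.Texture again:
effect.Texture = atlasTexture;
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices1, 0, numVertices1, indices1, 0, numTriangles1);
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices2, 0, numVertices2, indices2, 0, numTriangles2);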
That said, you did not provide very much information. The memory leak could be coming from the texture switching, but it could also be coming from somewhere else. Most of your memory is occupied by textures, which is probably why you are getting a crash there rather than somewhere else. I am guessing one of a few things is happening here:
The garbage collector is not working fast enough to pick up all the RAM being allocated inside the rendering functions
You have a memory leak somewhere else in your code, and it is showing up here instead
Once again, it is difficult to figure out what is going on here given how little I know about your code. But to the best of my ability, I have a few suggestions for you:
Run through your code and look at how you are referencing things. Make sure that you don't have any temporary references inside of classes and structs. If you use something, pass it around to different classes, and later consider it "discarded", there is a good chance that something is still holding onto that object, preventing it from being deleted.
Do a search for all the "new" keywords you have in the solution. If you have something that constantly gets created with the "new" keyword, this can be a huge memory leak, as it creates a massive number of objects on the heap. The garbage collector SHOULD be picking them up, but I wouldn't say I trust the garbage collector very much. Worst case, the garbage collector is not coming around often enough to deal with it.
Find ways to reduce the sizes of your textures. Atlasing is a solution that will reduce the overhead of packaging each texture in its own Texture2D. This can be a bit more work because you will need a system for pulling sub-images out of the same file, but in your case it may very well be worth it.
If you are convinced that there is an issue within XNA, try a different implementation called MonoGame. It follows the exact same structure as XNA but is maintained by a community. As a result, the guts of the libraries you are using have been rewritten, and there is a good chance that whatever is destroying your heap has been fixed.
My advice to you? If you are really familiar with OpenGL and what you are doing is fairly simple, then I would check out OpenTK. It is a thin binding layer that "ports" OpenGL into C#. All the commands are exactly the same, and you have the flexibility of using the entire .NET library for all the extra fluff.
I hope this helps!
