I'm planning to use XNA to generate a model made up of multiple components. The model is created at run time from dynamic data. My model would be similar to a tree. The tree has a trunk, branches, leaves, cells, etc. I was planning to create classes for each content type such as branches that would encapsulate the leaves and so forth.
My question is: where should I manage the primitives for drawing the model?
Option 1) Each object manages its own primitives or drawing objects. Drawing the tree would start with a call to tree->draw, which would call trunk->draw, which would call branches->draw, and so on.
Option 2) A 'master' method would traverse the tree, collecting primitives into a collection, and then draw the collection independently of the tree.
There are benefits to both options, but which follows a typical architecture for 3D graphics?
Thanks in advance.
That really depends on the logical approach you use to store the data. If your tree has access to all of its children, then it is logically correct to call the tree.Draw() method once and have all subsequent draw methods called automatically according to your internal logic.
Moreover, this approach can be efficient if you render the tree textures from a single spritesheet (e.g. many subsequent draw calls using deferred rendering and one or more spritesheets).
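To illustrate option 1 under that scene-graph assumption, a minimal sketch (TreeComponent, DrawSelf and LocalTransform are names I made up, not something from the question):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

abstract class TreeComponent
{
    protected readonly List<TreeComponent> Children = new List<TreeComponent>();

    // transform relative to the parent, so branches move with the trunk
    public Matrix LocalTransform = Matrix.Identity;

    public virtual void Draw(GraphicsDevice device, Matrix parentWorld)
    {
        Matrix world = LocalTransform * parentWorld;
        DrawSelf(device, world);            // this component's own primitives
        foreach (var child in Children)     // then recurse into children
            child.Draw(device, world);
    }

    protected virtual void DrawSelf(GraphicsDevice device, Matrix world) { }
}

class Trunk : TreeComponent { /* owns its own vertex/index buffers */ }
class Branch : TreeComponent { /* holds Leaf children */ }

A single call to the root's Draw() then walks the whole hierarchy, which is also what makes batching everything from one spritesheet practical.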
I have a specific way of generating meshes and structuring my scene that makes occlusion culling very straightforward and optimal. Now all I need is to know how to actually show or hide a mesh efficiently using the ECS hybrid renderer.
I considered changing the layer to a hidden layer in the RenderMesh component, but RenderMesh is an ISharedComponentData and so does not support jobification or Burst. I saw the Unity BatchRendererGroup API, and it looked promising with its OnPerformCulling callback, but I don't know if it is possible to hook into the HybridRenderSystem's internal BatchRendererGroup. I also saw the DisableRendering IComponentData tag, which I guess disables an entity's rendering. However, again, this can only be done from the main thread.
I could write my own solution to render meshes using Graphics.DrawMesh or something like it, but I would prefer to integrate natively with the HybridRenderer in order to also cull meshes that are not related to my procedural meshes.
Is any of this possible? What is the intended use?
I'm not sure it's the best option, but you could try a parallel command buffer:
var ecb = new EntityCommandBuffer( Allocator.TempJob );
var cmd = ecb.AsParallelWriter();

/* job 1 executes with Burst & cmd adds/removes Disabled or DisableRendering tags */

// (main thread) job 2 executes the produced commands:
Job
    .WithName( "playback_commands" )
    .WithCode( () =>
    {
        ecb.Playback( EntityManager );
        ecb.Dispose();
    }
    ).WithoutBurst().Run();
There is another way of hiding/showing entities, but it requires you to group adjacent entities into chunks spatially (you're probably doing that already). Then you will be occluding not specific entities one by one but entire chunks of them (sectors of space). It's possible thanks to the fabulous powers of:
chunk component data
var queryEnabled = EntityManager.CreateEntityQuery(
    ComponentType.ReadOnly<RenderMesh>()
    , ComponentType.Exclude<Disabled>()
);
queryEnabled.SetSharedComponentFilter( new SharedSector {
    Value = new int3{ x=4 , y=1 , z=7 }
} );
EntityManager.AddChunkComponentData( queryEnabled , default(Disabled) );
// EntityManager.RemoveChunkComponentData<Disabled>( queryDisabled );

public struct SharedSector : ISharedComponentData
{
    public int3 Value;
}
The answer is that you can't, and you shouldn't! Unity Hybrid rendering gets its speed by laying out data in sequence. If there were a boolean for each mesh that allowed you to show or hide it, Unity would still have to evaluate it, which I guess is not in their design philosophy. As I found out, that whole design philosophy did not work out for me in general anyway.
My world is made up of chunks of procedurally generated terrain meshes (think Minecraft, but better ;)). The problem with this is that each chunk has its own RenderMesh with a unique mesh... meaning that each chunk gets its own... chunk... in memory xD. Which, as appropriate as that sounds, is extremely inefficient. I decided to abandon Hybrid ECS altogether and use good old game objects. With this change alone I saw a 4x performance boost (going from 200 to 800 fps).
I just used the MeshRenderer.enabled property to efficiently enable and disable rendering. To jobify this, I simply stored an array of the mesh bounds and a boolean for whether each one is visible. I could then evaluate this array in a job and spit back out an index list of all the chunks that needed their visibility changed. That leaves only setting a few boolean values for the main thread, which is not very expensive at all. It is not the ECS-friendly solution I was looking for, but from the looks of it, ECS was not exactly my friend here. Having unique meshes for each section of my world was clearly not the intended use case of Hybrid ECS.
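For what it's worth, the jobified check described above could look roughly like this (a sketch only; I'm using a plain distance test and my own names such as ChunkVisibilityJob, since the post doesn't show the actual code):

using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;
using UnityEngine;

[BurstCompile]
struct ChunkVisibilityJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<Bounds> ChunkBounds;      // one entry per chunk
    [ReadOnly] public NativeArray<bool> CurrentlyVisible;   // mirrors MeshRenderer.enabled
    public float3 ViewerPosition;
    public float ViewRange;
    public NativeArray<bool> NeedsToggle;                   // true where visibility must change

    public void Execute(int i)
    {
        // simple distance test as a stand-in; a frustum test would slot in here instead
        bool wantVisible = math.distance((float3)ChunkBounds[i].center, ViewerPosition) <= ViewRange;
        NeedsToggle[i] = wantVisible != CurrentlyVisible[i];
    }
}

// Main thread, after the job completes: only the handful of changed renderers are touched.
// for (int i = 0; i < needsToggle.Length; i++)
//     if (needsToggle[i]) { renderers[i].enabled = !renderers[i].enabled; visible[i] = !visible[i]; }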
I'm trying to create a system in which numbers are changed according to the time elapsed since the last update, their target/destination value, etc.
In an earlier project, I created an abstract class for animation data which contained Tick(), IsDone(), etc, which were then implemented for each facet of the game object I wanted to animate, such as position and opacity. The animation data were held by the game object, with the Tick() function being called by the game object's Tick() function, which is called by the engine's logic update loop using a list of all game objects.
However, I now have more things I want to animate, and I was looking into doing so with as few classes as possible, and definitely not with N+1 classes.
I looked into saving references to the variables being animated, using the fact that all the variables I wanted to animate were floats. However, it appears C# pointers are analogous to C pointers, so I cannot save pointers to the values being animated, which appears to be impossible in the CLR in the first place.
The only alternative I could think of is to use reflection to record the argument being passed, then use reflection again each tick to find and alter the value. But making dozens of reflection calls 60 times a second did not appeal to me.
Since animating objects is a near-universal feature in games, I was wondering if there are established practices.
I need to transfer .NET objects (with hierarchy) over the network (multiplayer game). To save bandwidth, I'd like to transfer only the fields (and/or properties) that change, so fields that won't change aren't transferred.
I also need some mechanism to match the proper objects on the other client side (a global object identifier... something like an object ID?).
I need some suggestions on how to do it.
Would you use reflection? (Performance is critical.)
I also need a mechanism to transfer IList deltas (added objects, removed objects).
How is MMO networking done; do they transfer whole objects?
(Maybe my idea of per-field transfer is stupid.)
EDIT:
To make it clear: I've already got a mechanism to track changes (let's say every field has a property whose setter adds the field to some sort of list or dictionary containing the changes - the structure is not final yet).
What I don't know is how to serialize this list and then deserialize it on the other client - and mostly how to do it efficiently and how to update the proper objects.
There are about one hundred objects, so I'm trying to avoid a situation where I would have to write a special function for each object. Decorating fields or properties with attributes would be fine (for example, to specify a serializer, field ID or something similar).
More about the objects: each object has 5 fields on average. Some objects inherit from others.
Thank you for all answers.
Another approach: don't try to serialize complex data changes; instead, send just the actual commands to apply (in a terse form), for example:
move 12432 134, 146
remove 25727
(which would move 1 object and remove another).
You would then apply the commands at the receiver, allowing for a full resync if they get out of sync.
I don't propose you would actually use text for this - that is just to make the example clearer.
One nice thing about this: it also provides "replay" functionality for free.
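In code, a terse binary framing of those commands might look something like this (the NetCommand names and layout are purely illustrative):

using System.IO;

enum NetCommand : byte { Move = 1, Remove = 2 }

static class CommandWriter
{
    public static void WriteMove(BinaryWriter w, int objectId, short x, short y)
    {
        w.Write((byte)NetCommand.Move);
        w.Write(objectId);
        w.Write(x);
        w.Write(y);
    }

    public static void WriteRemove(BinaryWriter w, int objectId)
    {
        w.Write((byte)NetCommand.Remove);
        w.Write(objectId);
    }
}

// The receiver reads the leading byte, switches on NetCommand, looks the object up
// by objectId, and applies the command.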
The cheapest way to track dirty fields is to make it a key feature of your object model, i.e. with a "fooDirty" field for every data field "foo" that you set to true in the setter (if the value differs). This could also be twinned with conditional serialization, perhaps the "ShouldSerializeFoo()" pattern observed by a few serializers. I'm not aware of any libraries that match exactly what you describe (unless we include DataTable, but... think of the kittens!)
Perhaps another issue is the need to track all the objects for merge during deserialization; that by itself doesn't come for free.
All things considered, though, I think you could do something along the above lines (fooDirty/ShouldSerializeFoo) and use protobuf-net as the serializer, because (importantly) it supports both conditional serialization and merge. I would also suggest an interface like:
public interface ISomeName
{
    int Key { get; }
    bool IsDirty { get; }
}
The IsDirty would allow you to quickly check all your objects for those with changes, then write the key to a stream, followed by the (conditional) serialization. The caller would read the key, obtain the object needed (or allocate a new one with that key), and then use the merge-enabled deserialize (passing in the existing/new object).
Not a full walk-through, but if it was me, that is the approach I would be looking at. Note: the addition/removal/ordering of objects in child-collections is a tricky area, that might need thought.
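A rough sketch of the fooDirty/ShouldSerializeFoo idea wired up with protobuf-net (PlayerState, Score and MarkClean are illustrative names, not part of the question):

using ProtoBuf;

[ProtoContract]
public class PlayerState : ISomeName
{
    [ProtoMember(1)] public int Key { get; set; }

    private int score;
    private bool scoreDirty;

    [ProtoMember(2)]
    public int Score
    {
        get { return score; }
        set { if (score != value) { score = value; scoreDirty = true; } }
    }

    // protobuf-net calls this to decide whether to write Score at all
    public bool ShouldSerializeScore() { return scoreDirty; }

    public bool IsDirty { get { return scoreDirty; } }
    public void MarkClean() { scoreDirty = false; }
}

// Sender:   Serializer.Serialize(stream, state);            // writes only dirty members
// Receiver: Serializer.Merge(stream, existingStateForKey);  // merges into the existing object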
I'll just say up front that Marc Gravell's suggestion is really the correct approach. He glosses over some minor details, like conflict resolution (you might want to read up on Leslie Lamport's work; he has basically spent his whole career describing different approaches to conflict resolution in distributed systems), but the idea is sound.
If you do want to transmit state snapshots, instead of procedural descriptions of state changes, then I suggest you look into building snapshot diffs as prefix trees. The basic idea is that you construct a hierarchy of objects and fields. When you change a group of fields, any common prefix they have is only included once. This might look like:
world -> player 1 -> lives: 1
... -> points: 1337
... -> location -> X: 100
... -> Y: 32
... -> player 2 -> lives: 3
(everything in a "..." is only transmitted once).
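One way to build such a diff in code is a nested node structure, where each shared prefix exists only once (a sketch; DiffNode and its members are my own names):

using System.Collections.Generic;

class DiffNode
{
    public Dictionary<string, DiffNode> Children = new Dictionary<string, DiffNode>();
    public object Value;   // set only on leaves that actually changed

    public DiffNode Child(string name)
    {
        if (!Children.TryGetValue(name, out var child))
            Children[name] = child = new DiffNode();
        return child;
    }
}

// Building the example from above: "world -> player 1" is created once and reused.
// var root = new DiffNode();
// var p1 = root.Child("world").Child("player 1");
// p1.Child("lives").Value = 1;
// p1.Child("points").Value = 1337;
// p1.Child("location").Child("X").Value = 100;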
It is not logical to transfer only the changed fields, because you would be wasting your time detecting which fields changed and which didn't, and on reconstructing them on the receiver's side, which will add a lot of latency to your game and make it unplayable online.
My proposed solution is for you to decompose your objects to the minimum and send these small objects, which is fast. You can also use compression to reduce bandwidth usage.
For the object ID, you can use a static ID which is incremented when you construct a new object.
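A minimal sketch of that (the class name is mine; Interlocked just keeps it safe if objects can be constructed from several threads):

using System.Threading;

public class NetworkObject
{
    private static int nextId;

    // each new instance gets the next ID in sequence
    public int Id { get; } = Interlocked.Increment(ref nextId);
}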
Hope this answer helps.
You will need to do this by hand. Automatically keeping track of property and instance changes in a hierarchy of objects is going to be very slow compared to anything crafted by hand.
If you decide to try it out anyway, I would try to map your objects to a DataSet and use its built-in modification tracking mechanisms.
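For reference, the DataSet/DataTable change tracking that this refers to works roughly like this (table and column names are made up):

using System.Data;

var table = new DataTable("Players");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Score", typeof(int));

table.Rows.Add(1, 0);
table.AcceptChanges();                     // baseline: nothing counts as changed yet

table.Rows[0]["Score"] = 1337;             // row's RowState becomes Modified
DataTable delta = table.GetChanges(DataRowState.Modified);   // only the changed rows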
I still think you should do this by hand, though.
I am making a Sims-like game and right now I am trying to figure out how to structure my objects.
Right now I am thinking of creating a class called GameObject; the pseudocode is below:
public class GameObject {
    name:String
    width:int
    height:int
}
This way I could create objects like bushes, trees, and buildings. But then I began to think: what if I wanted to create multiple buildings and trees of the same type? I would have to keep making instances of GameObject and giving each a new name, height, and width. The properties would have to have the same values in order for me to duplicate one object. That seems a little tedious. Then I figured maybe that isn't the right way to go. So I was thinking I would have to extend GameObject like below:
public class Tree extends GameObject {
    birdHouse:Boolean
}

public class Building extends GameObject {
    packingGarage:Boolean
    stories:Number
}

public class House extends GameObject {
    garage:Boolean
    stories:Number
}
Now this way I can just create multiple instances of House or Tree without creating properties that specify that it is indeed a house or a tree. This seems more logical, but at the same time it seems to allocate more memory because I am creating more classes.
I just need to know the best practices for dealing with objects like this, so I'd appreciate any help. Also, if you know any resources on best practices for reducing loading in games (or any application, for that matter), please share. I also want to use interfaces; the second concept seems more reasonable, and I was thinking about having the parent implement an interface like below:
public class GameObject implements IGameObject {
    name:String
    width:int
    height:int
}
Now this way I can create a class with a method that loosely accepts any type that inherits GameObject.
Selector.loadObject(gObject:IGameObject);
Depending on what type it is (i.e. tree, building, house), I can use a case statement to figure out which type it is and evaluate it accordingly.
I also created a Tile class that will pass through the loadObject method. It will also be a child of the GameObject class. If the case statement finds that it is of type Tile, it will highlight whichever tile my mouse is over.
My second question is: if a class inherits a class that implements an interface, is that child class considered to be an IGameObject as well, or does it have to implement the interface directly?
Does all this sound like I am going in the right direction, as far as organization is concerned?
Thanks for all of your help, guys!
One thing you could think about is using composition of objects over inheritance. This sort of goes along with the Flyweight answer. Rather than having all your game objects inherit properties from GameObject, have each game object just hold a reference to an object or interface that has the properties it needs. For example, all your game objects probably have some sort of "size" property - rather than inheriting it from a base class, have each game object reference a "Size" class, so that the Size object can potentially be shared among similar objects.
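As a rough C# illustration of that composition idea (the names are mine, not from the question):

public class Size
{
    public int Width;
    public int Height;
}

public class GameEntity
{
    public string Name;
    public Size Size;            // many entities can point at one shared Size instance
}

// var oakSize = new Size { Width = 2, Height = 5 };
// var oak1 = new GameEntity { Name = "Oak", Size = oakSize };
// var oak2 = new GameEntity { Name = "Oak", Size = oakSize };   // shares the same Size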
You should look into the Flyweight pattern
From wikipedia:
Flyweight is a software design pattern. A flyweight is an object that minimizes memory use by sharing as much data as possible with other similar objects; it is a way to use objects in large numbers when a simple repeated representation would use an unacceptable amount of memory.
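Applied to the question's objects, a flyweight split might look roughly like this (TreeType and PlacedTree are illustrative names): the heavy, shared data lives in one flyweight instance, and each placed tree keeps only its own state.

public class TreeType                     // intrinsic, shared state
{
    public string Name;
    public int Width;
    public int Height;
    public bool HasBirdHouse;
}

public class PlacedTree                   // extrinsic, per-instance state
{
    public TreeType Type;                 // shared among all trees of this kind
    public int X;
    public int Y;
}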
As for your second question, the answer is yes. All subclasses of a class can be said to implement all the interfaces that the parent class implements.
This seems more logical, but at the same time it seems it allocates more memory because I am creating more classes.
Creating new classes doesn't use a significant amount of memory. It's creating instances that uses memory - but again, the amount will be negligible compared to the memory used by loading your graphics, etc. Don't worry about memory. Your only concern at this stage should be good code organisation.
You should have separate classes when they have different behaviour. If they have the same behaviour but different properties, then you use the same class and set the properties accordingly.
In this case, you don't appear to have significantly different behaviour, but if separating it into Tree, Building, and House makes life easier for you when managing which items can be included in others etc, do it.
I'm writing a plug-in for a 3D modeling program. There is a feature of the API where you can intercept the display pipeline and insert additional geometry that will be displayed without actually being in the model (you can see it, but you can't select/move/delete it, etc.).
Part of this feature of the API is a method that gets called on every screen refresh and is used to tell the program what extra geometry to display. Right now I have a HashSet that is iterated through with a foreach statement (OnBrep is the generic geometry class of the API).
I have an additional command that will dump the "ghost" geometry into the actual model. I've found that if the geometry is actually in the model, the display speeds up a lot. So I'm wondering if there is a faster way to provide the list of objects to the program. Would a simple one-dimensional array be significantly faster than a HashSet<>?
The fastest way to return a collection of objects is to return either (a) the actual physical type that was used internally to build up the collection, or (b) a type that can be cast to in such a way that data is not copied in memory. As soon as you start copying data (e.g. CopyTo, ToArray, ToList, a copy constructor, etc) you have lost time.
Having said that, unless the number of items is large, this will be a micro-optimisation and therefore probably not worth doing. In that case, just return the collection type that would be of most use to the calling code. If you are unsure, do some timing tests rather than taking a guess.
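If in doubt, a quick Stopwatch comparison along these lines will tell you whether iterating the HashSet is even measurable (the types here are illustrative; substitute your OnBrep collection):

using System;
using System.Collections.Generic;
using System.Diagnostics;

var hashSet = new HashSet<int>();
var array = new int[100_000];
for (int i = 0; i < array.Length; i++) { hashSet.Add(i); array[i] = i; }

var sw = Stopwatch.StartNew();
long sum1 = 0;
foreach (var item in hashSet) sum1 += item;          // iterate the HashSet
Console.WriteLine($"HashSet foreach: {sw.ElapsedTicks} ticks");

sw.Restart();
long sum2 = 0;
foreach (var item in array) sum2 += item;            // iterate the plain array
Console.WriteLine($"Array foreach:   {sw.ElapsedTicks} ticks");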
This here is an extensive study on the performance of HashSet/Dictionary/generic List, but it's about key lookups.
Personally, I think that a normal or generic list is faster for a foreach operation, since it involves no indexed items/overhead (especially inserting, etc., should be faster)... but this is just a gut feeling.
Usually when working with 3D graphics, you get the best performance if you manage to reduce the draw calls/state changes as much as possible.
In your case I'd try to reduce the draw calls to a minimum by merging your adorned geometry or trying to use some sort of batching feature if it's available.
It's very likely that the frame drop is not caused by using a HashSet/dictionary instead of an array (unless there's a broken/expensive hashing function somewhere...).