For my Game Programming class, I am designing a class for a player character. The player can collect power-ups throughout the game, which are held onto and can be used whenever the player chooses.
I thought a dynamic array would work well, but my background is in C++ (we are using Unity in class, which is scripted in C#, formerly also UnityScript), and I know that memory deallocation is handled differently in C#: there is a garbage collector, but I don't really know much about how it functions. After looking around the web for a while, I couldn't find anything that seemed to fit the functionality I need (or if I did, it was over my head and I didn't realize it).
If someone can list a C# structure, or structures, that would be good for storing a growing collection of objects, it would be extremely helpful.
List<T> is probably the simplest structure you could use. It is like a dynamic array that automatically grows as you add things to it. It is strongly typed, so it will only contain objects of the same type (if you have some interface for your power-ups, you can have a List<IPowerUp> and store any implementation in it).
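A minimal sketch of that idea, assuming a hypothetical IPowerUp interface and two invented power-up types:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical power-up interface; all names here are invented for illustration.
public interface IPowerUp
{
    string Name { get; }
}

public class SpeedBoost : IPowerUp { public string Name => "Speed Boost"; }
public class Shield : IPowerUp { public string Name => "Shield"; }

public static class Inventory
{
    // A List<T> grows automatically as power-ups are collected.
    public static List<IPowerUp> Collect()
    {
        var powerUps = new List<IPowerUp>();
        powerUps.Add(new SpeedBoost());
        powerUps.Add(new Shield());
        return powerUps;
    }
}
```

Using a power-up later is just a RemoveAt() on the chosen index; the list shrinks and grows as needed, and the garbage collector reclaims the removed object once nothing references it.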
Start by looking at .NET's generic collections. (.NET generics are similar in concept to C++ templates.) Either List<T> or Dictionary<TKey, TValue> will likely be useful to you, depending on how you need to store and retrieve your objects.
Since you'll probably have several types of objects, you may consider using a Dictionary of Lists, where the key is a string identifier or an enumeration of the types of collected objects, and the value is a list of object instances.
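A sketch of that Dictionary-of-Lists layout (the enum and the PowerUp class are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical categories of collectable objects.
public enum PowerUpKind { Health, Speed, Weapon }

public class PowerUp
{
    public PowerUpKind Kind;
    public PowerUp(PowerUpKind kind) { Kind = kind; }
}

public static class PowerUpBag
{
    // One list of instances per kind of collected object.
    public static Dictionary<PowerUpKind, List<PowerUp>> Group(IEnumerable<PowerUp> collected)
    {
        var byKind = new Dictionary<PowerUpKind, List<PowerUp>>();
        foreach (var p in collected)
        {
            // Create the list for this kind the first time it is seen.
            if (!byKind.TryGetValue(p.Kind, out var list))
            {
                list = new List<PowerUp>();
                byKind[p.Kind] = list;
            }
            list.Add(p);
        }
        return byKind;
    }
}
```

Retrieval by category is then a single dictionary lookup, while each category still keeps its own growing list.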
For context, using C# inside the Unity3D Editor.
I have more and more often started using enums to loosely couple things to settings.
For example, I am setting up an item and want to give it a visual from a pool of defined visuals. That visual is basically a class that contains a sprite, a color, and a model, attached to an integer unique ID. From this unique ID, I generate an enum. It takes some effort to verify that the unique ID is actually unique, and to catch some edge cases around that.
The benefit of doing the above is that the enum is all that has to be stored on the item to link it to the visual. At runtime, a dictionary is created to look up the enum and request that the stored visual be loaded/used. This loosely couples the visuals to the item, so loading the item list does not automatically load all of the visual assets associated with the items. That automatic loading is Unity's default behavior; it is really annoying, slows down the game, and consumes a massive amount of RAM.
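The lookup pattern described above might look roughly like this (all names are invented for illustration; the real Visual class would hold actual asset references):

```csharp
using System;
using System.Collections.Generic;

// Generated enum: each value mirrors a visual's unique ID.
public enum VisualId { Sword = 1, Potion = 2 }

// Stand-in for the class holding a sprite, a color and a model.
public class Visual
{
    public int UniqueId;
    public string SpriteName; // placeholder for the real asset reference
}

public static class VisualRegistry
{
    // Built at runtime, so items only store the enum, not the asset itself.
    private static readonly Dictionary<VisualId, Visual> Lookup = new Dictionary<VisualId, Visual>
    {
        { VisualId.Sword,  new Visual { UniqueId = 1, SpriteName = "sword_sprite" } },
        { VisualId.Potion, new Visual { UniqueId = 2, SpriteName = "potion_sprite" } },
    };

    public static Visual Resolve(VisualId id)
    {
        // Safety catch: the enum may reference a visual that was removed.
        if (!Lookup.TryGetValue(id, out var visual))
            throw new KeyNotFoundException($"No visual registered for {id}");
        return visual;
    }
}
```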
As a result we have a lot of those enums for various purposes and a lot of lookup stuff happening. And currently we are having no big problems with it.
However, the editing/generation of those enums is error prone: when values are removed, the items (and any other interested parties) are none the wiser, which then has to be either caught by testing before a build, or hit a safety catch/error at runtime.
My question is: is this a blatant abuse of enums? And if so, what would be a better way of approaching this problem of loose coupling?
If it is not, what would be a better way to set up and manage these enums safely, so that alarm bells go off if anything using the enum now holds an invalid value, or the meaning of a value changes? I imagine that is hardly possible, and would require code all over the place to "self check" on recompile.
Or does this all just boil down to team discipline: managing the values well and knowing what the enums mean and represent? In that case, it would never be designer friendly unless I write a custom editor for each and every one of these.
Thanks for any insights you might be able to provide.
If I understand you correctly, you're trying to associate each item with one of multiple static visuals? If that is the case, you can simply write each visual as a static readonly object inside the visuals class. In your "item" objects you can then add a field called e.g. "visual" and set it to reference the right visual.
I don't know what makes the visuals load, but if the constructor does, then I believe they will load when the visual class is first used at runtime.
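A short sketch of that setup, with invented names (in C#, the static fields of a class are initialized when the class is first used, so the visuals would not load just because an item list loads):

```csharp
using System;

// Stand-in for the class holding sprite/color/model data.
public class Visual
{
    public string SpriteName;
    public Visual(string spriteName) { SpriteName = spriteName; }
}

public static class Visuals
{
    // Constructed lazily, when the Visuals class is first touched.
    public static readonly Visual Sword  = new Visual("sword_sprite");
    public static readonly Visual Potion = new Visual("potion_sprite");
}

public class Item
{
    // Direct reference to one of the static visuals.
    public Visual Visual;
}

public static class ItemFactory
{
    public static Item MakeSword() => new Item { Visual = Visuals.Sword };
}
```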
I am currently working on an ability system for an RPG in Unity.
I have decided I was going to have an Ability system made of Ability class instances which each contain an array of Effects.
When an ability is used, a Use() method on the Ability is called, which loops through the Effect[] array and calls an Apply() method on each element.
The way I see it, I will have to make sure that each effect can fetch the information it needs and store it in variables on its own class, as I cannot pass specific arguments to effects through the Use() > Apply() calls.
Assuming that I can find a way to do that, would it be more efficient to have only one instance of each Effect shared amongst all the Abilities, or separate instances?
I can see drawbacks for both:
1) With shared instances, I would expect that the Use() calls would have to be queued (am I right in thinking that?), which could slow the game down when many calls are made.
2) With individual instances, I would not have that problem, but I would constantly have a lot more instances loaded in memory.
Which is best, in terms of general programming practice and performance?
I'm actually in a similar situation and thought I would share some of my thoughts.
I don't think that using a Use() function would really slow anything down, as the same basic code has to run in either implementation. I also don't think a queue would be necessary: you could make the functions static, so no instance of the class is required to call them.
I think the answer mainly depends on what kind of information you need passed into the Use() function. If each character needs to store parameters specific to each ability, then having individual instances won't really save memory, it just changes how the memory is organized, so I would go for individual instances for ease of use. If all the parameters are character specific rather than ability specific, then I think the other implementation would be the better one for memory, as you would only need to store something to point you to the proper Use() function.
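A rough sketch of the two layouts described above (all names invented; real effects would take a target/character object rather than a bare int):

```csharp
using System;

// Option 1: one shared, stateless effect. No queue is needed, because
// everything it operates on arrives as arguments, not as instance state.
public static class DamageEffect
{
    public static int Apply(int targetHealth, int amount) => targetHealth - amount;
}

// Option 2: a per-ability instance that carries its own parameters,
// at the cost of one object per ability that uses the effect.
public class DamageEffectInstance
{
    public int Amount;
    public DamageEffectInstance(int amount) { Amount = amount; }
    public int Apply(int targetHealth) => targetHealth - Amount;
}
```

Both produce the same result; the difference is only where the parameters live, which matches the point about memory being reorganized rather than saved.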
I hope this helps! I don't really know that this the type of answer you were looking for, but I figured it's better than no answer. I would love to know what you choose to use!
Well, this is basically like a generic binary writer... let's say you have an object, and you don't know what it is, but you have it. How do you write its binary data to a binary file so you can retrieve it later?
My original idea that I don't know how to do was:
Figure out all the members of the object somehow (reflection maybe)
Unless the members are of types writable by the BinaryWriter, repeat step 1 on the member
Make a header that states the types of the members and how they are assembled into the object (somehow)
Write the header thing
Write all the core level members
I don't know how to use Reflection much so I'm not sure how to do most of the above.
It should be quite doable however.
How should I do this, if it's possible? Or how should I implement the above?
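A very simplified sketch of steps 1-5: walk the fields of an unknown object with reflection and write the primitive ones with a BinaryWriter, preceded by a small header. This is only a toy (no nesting, no shared references, no real versioning); the class names are invented:

```csharp
using System;
using System.IO;
using System.Reflection;

public static class NaiveSerializer
{
    public static byte[] Write(object obj)
    {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            Type type = obj.GetType();
            // Header: the type name, so a reader knows what to rebuild.
            writer.Write(type.FullName);
            foreach (FieldInfo field in type.GetFields())
            {
                object value = field.GetValue(obj);
                writer.Write(field.Name);
                // Only a few primitive field types are handled in this sketch;
                // step 2 (recursing into non-primitive members) is omitted.
                if (value is int i) writer.Write(i);
                else if (value is string s) writer.Write(s);
                else if (value is bool b) writer.Write(b);
                else throw new NotSupportedException(field.FieldType.Name);
            }
            return stream.ToArray();
        }
    }
}

public class Example { public int Hp = 42; public string Name = "hero"; }
```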
The simplest approach is to use BinaryFormatter. However, you should be very careful with any changes to your classes if you want to load instances saved by previous versions of your application.
The hard aspect is not writing out objects, but reading them back. The .NET framework provides various techniques for serialization and deserialization of class types which are supposed to automate the process, but all of the built-in techniques I'm familiar with have various limitations.
A major problem is that .NET makes no distinction between a storage location which holds a reference to an object for the purpose of identifying an object which is used by other code, for the purpose of only identifying immutable aspects of the object's state other than identity, or for the purpose of encapsulating the object's mutable state. Without knowing what a field is supposed to represent, it's not possible to know how it should be serialized or deserialized. For example, suppose that a particular type has a field of type int[], which holds a reference to a single-element array which holds the value 23. It may be that the purpose of that field is to hold the value 23, or it may be that the purpose of that field is to identify an array whose first element should be incremented every time something happens. In the former scenario, serialization should write out the fact that it's a single element array containing the value 23. In the latter scenario, if serialization is going to be possible at all, it will require knowing what is significant about the array to which the field holds a reference.
While various people have written various methods to automatically serialize various classes, I tend to be skeptical of such things. If one doesn't know what the fields of a class are used for, one should be cautious making any assumptions about what state is encapsulated thereby.
It might be possible with BinaryFormatter. But think of an object structure where many of your unknown objects all reference a common object. If you serialize each unknown object separately, you end up with as many copies of the common object as there are unknown objects.
And there might be many fields of the unknown object which are not relevant because they are set by the constructor or by other classes; those could be in an inconsistent state when deserialized.
So it might be not so hard to serialize them, but how do you want to deserialize them?
I want to understand all the advantages of singly rooted class (object) hierarchy in languages like .NET, Java.
I can think of one advantage. Let's say I have a function which I want to accept all data types (or references thereof). In that case, instead of writing a function for each data type, I can write a single function:
public void MyFun(object obj)
{
// Some code
}
What other advantages do we get from such a hierarchy?
I'll quote some lines from a nice book - Thinking in Java by Bruce Eckel:
All objects in a singly rooted hierarchy have an interface in common, so they are all ultimately the same type. The alternative (provided by C++) is that you don't know that everything is the same fundamental type. From a backward-compatibility standpoint this fits the model of C better and can be thought of as less restrictive, but when you want to do full-on object-oriented programming you must then build your own hierarchy to provide the same convenience that's built into other OOP languages. And in any new class library you acquire, some other incompatible interface will be used. It requires effort (and possibly multiple inheritance) to work the new interface into your design. Is the extra "flexibility" of C++ worth it? If you need it—if you have a large investment in C—it's quite valuable. If you're starting from scratch, other alternatives such as Java can often be more productive.
All objects in a singly rooted hierarchy (such as Java provides) can be guaranteed to have certain functionality. You know you can perform certain basic operations on every object in your system. A singly rooted hierarchy, along with creating all objects on the heap, greatly simplifies argument passing.
A singly rooted hierarchy makes it much easier to implement a garbage collector (which is conveniently built into Java). The necessary support can be installed in the base class, and the garbage collector can thus send the appropriate messages to every object in the system. Without a singly rooted hierarchy and a system to manipulate an object via a reference, it is difficult to implement a garbage collector.
Since run-time type information is guaranteed to be in all objects, you'll never end up with an object whose type you cannot determine. This is especially important with system level operations, such as exception handling, and to allow greater flexibility in programming.
A single-rooted hierarchy is not about passing your objects to methods but rather about a common interface all your objects implement.
For example, in C# the System.Object class implements a few members which are inherited down the hierarchy.
One of them is ToString(), which is used to get a string representation of your object. You are guaranteed that ToString() will succeed for every object. At the language level, you can use this feature to get strings from expressions like (4-11).ToString().
Another example is GetType(), which returns an object of type System.Type representing the type of the object the method is invoked on. Because this member is defined at the top of the hierarchy, reflection is easier and more uniform than in, for example, C++.
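Both guarantees can be shown in a few lines; the Describe helper below is invented, but ToString() and GetType() are the real System.Object members:

```csharp
using System;

public static class RootDemo
{
    // Works for any object at all, because both members
    // come from System.Object at the root of the hierarchy.
    public static string Describe(object obj)
    {
        return obj.GetType().Name + ": " + obj.ToString();
    }
}
```

For instance, Describe(4 - 11) yields "Int32: -7", since the integer expression is boxed into an object on the way in.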
It provides a base for everything. For example, in C# the Object class is the root, and it has methods such as ToString() and GetType() which are very useful if you're not sure what specific objects you will be dealing with.
Also - not sure if it would be a good idea, but you could create extension methods on the Object class, and then every instance of every class would be able to use the method.
For example, you could create an extension method called WriteToLogFile(this Object o) and then have it use reflection on the object to write details of its instance members to your log. There are of course better ways to log things, but it is just an example.
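A sketch of that idea follows. WriteToLogFile is the invented name from the paragraph above; to stay self-contained it builds the log text instead of writing a real file:

```csharp
using System;
using System.Reflection;
using System.Text;

public static class ObjectExtensions
{
    // An extension method on object is callable on every instance of every class.
    public static string WriteToLogFile(this object o)
    {
        var sb = new StringBuilder();
        sb.AppendLine(o.GetType().FullName);
        // Reflect over the public instance fields and dump name/value pairs.
        foreach (FieldInfo field in o.GetType().GetFields(
                     BindingFlags.Public | BindingFlags.Instance))
        {
            sb.AppendLine($"  {field.Name} = {field.GetValue(o)}");
        }
        return sb.ToString();
    }
}

public class Enemy { public int Hp = 10; }
```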
A singly rooted hierarchy enables platform developers to have some minimum knowledge about all objects, which simplifies the development of libraries that can be used with any object.
Think about Collections without GetHashCode(), Reflection without GetType() etc.
How are elements stored in containers in .Net?
For example, a C++ vector stores its elements sequentially, while a C++ list doesn't.
How are they implemented for .Net containers (Array, ArrayList, ...)?
Thanks.
It depends on the container. A C++ vector is equivalent to a C# List<T>, and a C++ list<T> is equivalent to a C# LinkedList<T>.
The C# ArrayList is pretty much a C# List<object>.
Wikipedia lists many data structures, and I suggest you have a look there, to see how the different ones are implemented.
So:
C++       C#                  How
vector    ArrayList / List    Array (sequential)
list      LinkedList          Linked list (non-sequential, i.e. linked)
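The difference shows up in the API: the array-backed containers have an indexer, the linked one does not. A small sketch (the demo class is invented):

```csharp
using System;
using System.Collections.Generic;

public static class ContainerDemo
{
    // Array-backed: O(1) indexing, amortized O(1) Add at the end.
    public static int ListDemo()
    {
        var list = new List<int> { 1, 2, 3 };
        return list[1]; // direct index into the backing array
    }

    // Node-based: O(1) insert at either end, but no indexer;
    // you walk the nodes via First/Last and Next/Previous.
    public static int LinkedDemo()
    {
        var linked = new LinkedList<int>();
        linked.AddLast(2);
        linked.AddFirst(1);
        return linked.First.Value;
    }
}
```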
It varies by container. Because the implementation is proprietary, there are no public specs mandating what the underlying data structure must be.
If you're interested in taking the time, I've used the following tools before to understand how MS implemented this class or that:
Debugging with the Microsoft .NET Framework debugging symbols.
Inspecting assemblies with Reflector.
In .NET, containers (even arrays) handle all access to them. You don't use pointers to work with them (except in extremely rare cases you probably won't ever get into), so it often doesn't matter how they store info. In many cases, it's not even specified how things work behind the scenes, so the implementation can be changed to something "better" without breaking code that relies on those details for some stupid reason.
Last I heard, though, arrays store their entries sequentially -- with the caveat that for reference-type objects (everything that's not a struct), the "entries" are the references, not the objects themselves. The object data could be anywhere in memory. Think of it more like an array of references than an array of objects.
ArrayLists, being based on arrays, should store their stuff the same way.
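That reference semantics is easy to demonstrate: two array slots can point at the same object, so a change through one slot is visible through the other. A small sketch with an invented Box class:

```csharp
using System;

public class Box { public int Value; }

public static class RefArrayDemo
{
    public static bool SharedReference()
    {
        var shared = new Box { Value = 1 };
        // The array holds two references to the same object,
        // not two copies of the object's data.
        Box[] boxes = { shared, shared };
        boxes[0].Value = 99;
        return boxes[1].Value == 99; // true: both slots see the change
    }
}
```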