An enumeration's default underlying type is int. However, when we use an enumeration we usually don't need that many values. So my questions are:
enum TYPE : byte { HORIZONTAL, DIAGONAL } // uses 1 byte
enum TYPE { HORIZONTAL, DIAGONAL }        // int by default; uses 4 bytes
1) Does saving 3 bytes really gain us much space? How does it affect today's computers?
2) If yes, why isn't the default underlying type byte?
3) What should a good programmer do?
P.S. I apologize for my bad English; it is not my native language.
1) Does saving 3 bytes really gain us much space? How does it affect today's computers?
Whether it actually saves space depends on a lot of other things. For example, if you use your enum as a field inside another class, the fields might (or might not) get memory-aligned. I am not sure about the specifics in C# (even in C++ this can be a complex topic); maybe this SO answer clarifies some things.
Even if it were guaranteed that using a byte enum always saves you three bytes, consider how many enum instances you use in a typical application. Ten? One hundred? Maybe a thousand? We are talking on the order of kilobytes here, while modern computers have at least gigabytes of RAM; that is six orders of magnitude more.
2) If yes, why isn't the default underlying type byte?
This is a decision by the team that designed C#; we can only assume they had their reasons.
Yes, a byte can only hold values up to 255. Probably most enumerations don't have more than 255 values, but a [Flags] enum backed by a byte gives you only 8 possible flags, and you may need more. Also, since enums are basically just integer numbers, you want to be able to do integer or bitwise operations on them (like + or |); this is again especially true for [Flags] enums, and those operations work on full 32- or 64-bit integers anyway. In fact, they may even be slower for bytes, because the value has to be widened to a full integer and then truncated again.
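For example, a default int-backed [Flags] enum leaves room for 32 independent flags, and combining or testing them is plain bitwise arithmetic. A minimal sketch (FilePermissions is a made-up name):

[Flags]
enum FilePermissions          // int-backed by default: room for up to 32 flags (byte: only 8)
{
    None    = 0,
    Read    = 1 << 0,
    Write   = 1 << 1,
    Execute = 1 << 2,
    Delete  = 1 << 3
}

class FlagsDemo
{
    static void Main()
    {
        var perms = FilePermissions.Read | FilePermissions.Write; // OR combines flags
        bool canRead = (perms & FilePermissions.Read) != 0;       // AND tests a flag
        System.Console.WriteLine(canRead);                        // True
    }
}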
3) What should a good programmer do?
As usual, unless you are writing performance- or memory-critical applications, don't worry about it. Especially in C#, with its smart compilers, it is easy to apply premature "optimizations" that actually slow down performance. So unless you have very convincing evidence that you actually need those three extra bytes, just write the code for legibility, not for speed. Even if the theoretical 3-byte difference actually turns out to exist in practice, you probably lose many times that to other "inefficiencies" such as padded classes, inefficient string operations, copies of local variables, and so on.
I've never experienced an issue defining enums with the default underlying type, and you don't seem to have an actual problem yourself. If you ever experience a memory issue in your system, it is more likely to be caused by something else than by choosing int over byte for your enums.
Such micro-enhancements rarely have any impact on today's computers with their cheap, plentiful memory.
Unless you're working on embedded systems with limited memory, try not to focus on those micro-optimizations.
Related
What is the purpose of the LongLength property for arrays in .NET? Using a standard integer for length, you could accommodate up to 2 billion indices. Are there really people using .NET to maintain a single array with more than 2 billion elements? Even if each element were a single byte, that would still be 2 GB of data. Is it feasible to use such a large array in .NET?
For example, if you had a > 2 GB file and needed to read it all into memory at once, that would call for such an array. Not that this is necessarily a recommended approach most of the time, but there could well be some case (on a powerful enough 64-bit system with a lot of memory) where it might be required, perhaps for performance reasons.
Edit: Of course, it should be noted that as of CLR 2.0, an array > 2 GB isn't actually supported (all the implementation of LongLength does is cast Length to a long, and attempting to create a bigger array will fail)... but maybe Microsoft is planning to add support later?
There's a school of thought known as the "0, 1 or N" school which believes that you should have either none of something; one of something; or any number of something, as resources permit.
In other words, don't set arbitrary limits if you don't have to. Arbitrary limits have given us such monstrosities as:
the 640K limit in early PCs.
buffer overflow vulnerabilities.
the hideous CHS disk addressing scheme.
Keep in mind that even two billion 64-bit integers only take up 17,179,869,184 bytes of the 18,446,744,073,709,551,616 bytes of 64-bit address space available. That's less than one thousand-millionth (10⁻⁹), so you could have many millions of these massive arrays before running out of address space.
Plus, LongLength returns the total number of elements across all the dimensions of the array, so a multi-dimensional array whose individual dimensions are each well under 2 billion can still need a 64-bit total.
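A minimal sketch of the Length/LongLength relationship on a multi-dimensional array (kept small enough to dodge the 2 GB limit mentioned above):

using System;

class LongLengthDemo
{
    static void Main()
    {
        // Length counts elements across all dimensions; LongLength just widens it to long.
        byte[,] grid = new byte[20_000, 10_000];   // 200 million elements (~200 MB)
        Console.WriteLine(grid.Length);            // 200000000
        Console.WriteLine(grid.LongLength);        // 200000000, but typed as long
        // Anything approaching 2^31 total bytes runs into the 2 GB object limit
        // mentioned above (relaxed later, in .NET 4.5, by <gcAllowVeryLargeObjects>).
    }
}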
It's very possible to have an array with more than 2 billion entries in a 64-bit scenario. LongLength is indeed meant to support such scenarios.
As to whether or not that is actually used. I can say with certainty that there is some customer, somewhere, that considers this a vital business need. Customers find uses for features that you've never thought possible.
I am wondering about the performance of different primitive types, primarily in C#. Now, I realize this is not strictly a language-specific concept, since performance ultimately depends on how the machine handles each type.
I have read the following two questions:
performance of byte vs. int in .NET
Why should I use int instead of a byte or short in C#
Nevertheless, I need a few clarifications.
I know that on a 32-bit machine an int is faster than both short and byte, since int is native to the platform. However, what happens on 64-bit systems? Is it better, performance-wise, to use a long instead of an int?
Also, what happens with floating point types? Is double better than float?
The answer may or may not be language specific. I assume there aren't many differences for different languages regarding this issue. However, if there are, it would be nice to have an explanation of why.
Actually, in most cases you will get the same performance with anything at or below your machine's native word size (i.e. int, short, byte on 32-bit), as internally the code will just use a 32-bit value to process them.
The same applies on 64-bit systems: there is no reason not to use an int (it will run at the same speed as a long on 64-bit, and faster than a long on 32-bit) unless you need the extra range. If you think about it, 64-bit systems had to run 32-bit code fast, as otherwise no one would have made the transition.
The smaller sizes only pay off when packing multiple copies of the primitive into structures such as arrays. There you do get a small slowdown from unpacking them, but in most cases that will be more than compensated for by better cache/memory coherence.
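A rough way to see the packing effect (a sketch only; GC.GetTotalMemory is approximate and exact numbers vary by runtime):

using System;

class PackingDemo
{
    static void Main()
    {
        const int N = 1_000_000;
        long before = GC.GetTotalMemory(forceFullCollection: true);
        byte[] bytes = new byte[N];   // ~1 MB: bytes pack tightly in arrays
        long mid = GC.GetTotalMemory(true);
        int[] ints = new int[N];      // ~4 MB: four bytes per element
        long after = GC.GetTotalMemory(true);
        Console.WriteLine($"byte[]: ~{mid - before} bytes, int[]: ~{after - mid} bytes");
        GC.KeepAlive(bytes);          // keep the arrays alive past the measurements
        GC.KeepAlive(ints);
    }
}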
In 10, or even 5 years there will be no [Edit2: server or desktop] 32-bit CPUs.
So, are there any advantages in using int (32-bit) over long (64-bit)?
And are there any disadvantages in using int ?
Edit:
By 10 or 5 years I meant the vast majority of places where those languages are used.
I meant which type to use by default. These days I won't even bother to consider whether to use short as a loop counter; I just write for(int i.... In the same way, long counters may already have won.
Registers are already 64-bit, so there is no longer any gain from 32-bit types. And I think there is some loss with 8-bit types (you have to operate on more bits than you're using).
32-bit is still a completely valid data type, just like 16-bit values and bytes are still around. We didn't throw out 16-bit or 8-bit numbers when we moved to 32-bit processors. A 32-bit number is half the size of a 64-bit integer in terms of storage. If I were modeling a database and I knew the value couldn't exceed what a 32-bit integer can store, I would use a 32-bit integer for storage purposes, and I'd do the same thing with a 16-bit number. A 64-bit number takes more space in memory as well, albeit nothing significant given that today's personal laptops can ship with 8 GB of memory.
There is no disadvantage to int other than it being a smaller data type. It's like asking, "Where should I store my sugar? In a sugar bowl, or a silo?" Well, that depends entirely on how much sugar you have.
Processor architecture shouldn't have much to do with what size data type you use. Use what fits. When we have 512-bit processors, we'll still have bytes.
EDIT:
To address some comments / edits..
I'm not sure about "there will be no 32-bit desktop CPUs". ARM is currently 32-bit and has declared little interest in 64-bit, for now. That doesn't fit too well with "desktop" in your description, but I also think that in 5-10 years the landscape of devices we write software for will change drastically. Tablets can't be ignored; people will want C# and Java apps to run on them, considering Microsoft has officially ported Windows 8 to ARM.
If you want to start using long, go ahead. There is no reason not to. If we are only looking at the CPU (ignoring storage size) and assuming an x86-64 architecture, then it doesn't make much difference.
Assuming we are sticking with the x86 family, that's true as well. You may end up with a slightly larger stack, depending on the framework you are using.
If you're on a 64-bit processor, and you've compiled your code for 64-bit, then at least some of the time, long is likely to be more efficient because it matches the register size. But whether that will really impact your program much is debatable. Also, if you're using long all over the place, you're generally going to use more memory - both on the stack and on the heap - which could negatively impact performance. There are too many variables to know for sure how well your program will perform using long by default instead of int. There are reasons why it could be faster and reasons why it could be slower. It could be a total wash.
The typical thing to do is to just use int if you don't care about the size of the integer. If you need a 64-bit integer, then you use long. If you're trying to use less memory and int is far more than you need, then you use byte or short.
x86_64 CPUs are going to be designed to be efficient at processing 32-bit programs and so it's not like using int is going to seriously degrade performance. Some things will be faster due to better alignment when you use 64-bit integers on a 64-bit CPU, but other things will be slower due to the increased memory requirements. And there are probably a variety of other factors involved which could definitely affect performance in either direction.
If you really want to know which is going to do better for your particular application in your particular environment, you're going to need to profile it. This is not a case where there is a clear advantage of one over the other.
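If you do decide to measure, even a crude Stopwatch probe along these lines is a starting point (a sketch only; a real harness such as BenchmarkDotNet gives far more trustworthy numbers):

using System;
using System.Diagnostics;

class IntVsLong
{
    static void Main()
    {
        const int Iterations = 100_000_000;

        var sw = Stopwatch.StartNew();
        int a = 0;
        for (int i = 0; i < Iterations; i++) a += i;
        sw.Stop();
        Console.WriteLine($"int:  {sw.ElapsedMilliseconds} ms (sum {a})");

        sw.Restart();
        long b = 0;
        for (long i = 0; i < Iterations; i++) b += i;
        sw.Stop();
        Console.WriteLine($"long: {sw.ElapsedMilliseconds} ms (sum {b})");
        // Expect a small or nonexistent difference on x86-64; JIT optimizations
        // and int overflow wrap-around make this only a rough probe.
    }
}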
Personally, I would advise that you follow the typical route of using int when you don't care about the size of the integer and to use the other types when you do.
Sorry for the C++ answer.
If the size of the type matters use a sized type:
uint8_t
int32_t
int64_t
etc
If the size doesn't matter use an expressive type:
size_t
ptrdiff_t
ssize_t
etc
I know that D has sized types and size_t. I'm not sure about Java or C#.
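For the record, C# has both kinds as well; a quick reference (the C# keywords are aliases for the sized System types):

// C# sized types (keyword = BCL alias):
int   i = 0;   // System.Int32 -- always 32 bits
long  l = 0;   // System.Int64 -- always 64 bits
short s = 0;   // System.Int16 -- always 16 bits
byte  b = 0;   // System.Byte  -- always 8 bits, unsigned

// Pointer-sized / "expressive" types:
IntPtr p = IntPtr.Zero;  // 32 or 64 bits, depending on the process
// (C# 9 later added nint/nuint as native-sized integer keywords.)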
In my C# app, I would like to know whether it is really important to use short for smaller numbers, int for bigger ones, and so on. Does the memory consumption really matter?
Unless you are packing large numbers of these together in some kind of structure, it will probably not affect the memory consumption at all. The best reason to use a particular integer type is compatibility with an API. Other than that, just make sure the type you pick has enough range to cover the values you need. Beyond that for simple local variables, it doesn't matter much.
The simple answer is that it's not really important.
The more complex answer is that it depends.
Obviously you need to choose a type that will hold your data structure without overflowing, and even if you're only storing smaller numbers, choosing int is probably the most sensible thing to do.
However, if your application loads a lot of data or runs on a device with limited memory then you might need to choose short for some values.
For C# apps that aren't trying to mirror some sort of structure from a file, you're better off using ints or whatever your native format is. The only other time it might matter is if using arrays on the order of millions of entries. Even then, I'd still consider ints.
Only you can be the judge of whether the memory consumption really matters to you. In most situations it won't make any discernible difference.
In general, I would recommend using int/Int32 where you can get away with it. If you really need to use short, long, byte, uint etc in a particular situation then do so.
This is entirely relative to the amount of memory you can afford to waste. If you aren't sure, it probably doesn't matter.
The answer is: it depends. Whether memory matters is entirely up to you. If you are writing a small application with minimal storage and memory requirements, then no. If you are Google, storing billions and billions of records on thousands of servers, then every byte can cost real money.
There are a few cases where I really bother to choose:
When I have memory limitations
When I do bitshift operations
When I care about x86/x64 portability
Every other case is int all the way
Edit : About x86/x64
In C#, an int is 32 bits on both x86 and x64; what actually changes between the architectures is the size of pointers and of some native types you interop with.
If you write "int" everywhere and marshal native values carelessly, moving from one architecture to the other can lead to problems. For example, say a 32-bit API exports a pointer-sized value and you cast it to an int: everything is fine on x86, but when you move to x64, all hell breaks loose.
Those native sizes are defined by the architecture, so when you change architectures you need to be aware that such casts can become a problem.
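A hedged sketch of that interop pitfall (NativeLib.dll and get_handle are made-up names):

using System;
using System.Runtime.InteropServices;

class InteropDemo
{
    // Hypothetical native function returning a pointer-sized handle.
    // Declaring the return type as IntPtr works on both x86 and x64;
    // declaring it as int would silently truncate the value on x64.
    [DllImport("NativeLib.dll", EntryPoint = "get_handle")]
    static extern IntPtr GetHandle();

    static void Main()
    {
        Console.WriteLine($"Pointer size in this process: {IntPtr.Size} bytes"); // 4 on x86, 8 on x64
    }
}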
That all depends on how you are using them and how many you have. Even if you only have a few in memory at a time - this might drive the data type in your backing store.
Memory consumption based on the type of integers you are storing is probably not an issue in a desktop or web app. In a game or a mobile device app, it may be more of an issue.
However, the real reason to differentiate between the types is the kind of numbers you need to store. If you have really big numbers, or high precision, you may need to use long to store it.
The context of the situation is very important here, but you don't need to guess whether it matters; we are dealing with quantifiable things. We know that we save 2 bytes per value by using a short instead of an int.
What do you estimate the largest number of instances in memory at any one time will be? If there are a million, then you are saving ~2 MB of RAM. Is that a lot? Again, it depends on the context: if the app is running on a desktop with 4 GB of RAM, you probably don't care much about 2 MB.
If there will be hundreds of millions of instances in memory, the savings get pretty big; but in that case you may simply not have enough RAM anyway, and you may have to keep the structure on disk and work with parts of it at a time.
Int32 will be fine for almost anything. Exceptions include:
if you have specific needs where a different type is clearly better. Example: if you're writing a 16-bit emulator, Int16 (aka short) would probably be better for representing some of the internals
when an API requires a certain type
one time, I had an invalid int cast, and Visual Studio's first suggestion was to verify that my value was less than infinity. I couldn't find a good type for that without using the pre-defined constants, so I used ulong since that was the closest I could come in .NET 2.0 :)
Given a case where an object may be in one or more true/false states, I've always been a little fuzzy on why programmers frequently use flags plus bitmasks instead of just using several boolean values.
It's all over the .NET framework. Not sure if this is the best example, but the .NET framework has the following:
[Flags]
public enum AnchorStyles
{
    None = 0,
    Top = 1,
    Bottom = 2,
    Left = 4,
    Right = 8
}
So given an anchor style, we can use bitmasks to figure out which of the states are selected. However, it seems like you could accomplish the same thing with an AnchorStyle class/struct with bool properties defined for each possible value, or an array of individual enum values.
Of course the main reason for my question is that I'm wondering if I should follow a similar practice with my own code.
So, why use this approach?
Less memory consumption? (it doesn't seem like it would consume less than an array/struct of bools)
Better stack/heap performance than a struct or array?
Faster compare operations? Faster value addition/removal?
More convenient for the developer who wrote it?
It was traditionally a way of reducing memory usage. So, yes, it's quite obsolete in C# :-)
As a programming technique, it may be obsolete in today's systems, and you'd be quite alright to use an array of bools, but...
It is fast to compare values stored as a bitmask: use the AND and OR logical operators and compare the resulting two ints.
It uses considerably less memory. Putting all 4 of your example values in a bitmask uses half a byte. An array of bools most likely uses a few bytes for the array object itself plus a byte for each bool. If you have to store a million values, you'll see exactly why the bitmask version is superior.
It is easier to manage: you only have to deal with a single integer value, whereas an array of bools would be stored quite differently in, say, a database.
And, because of the memory layout, it is much faster in every respect than an array; it's nearly as fast as operating on a single 32-bit integer, which is about as fast as operations on data get.
Easy to set multiple flags in any order.
Easy to save and retrieve the whole series of flags, like 0101011, to and from a database.
Among other things, it's easier to add new bit meanings to a bitfield than to add new boolean values to a class. It's also easier to copy a bitfield from one instance to another than a series of booleans.
It can also make methods clearer. Imagine a method with 10 bools vs. 1 bitmask.
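For example, a sketch using the AnchorStyles enum from the question above (the method and parameter names are made up):

class LayoutOptions
{
    // Ten booleans: easy to mis-order at the call site.
    public void Configure(bool top, bool bottom, bool left, bool right,
                          bool fillWidth, bool fillHeight, bool visible,
                          bool enabled, bool docked, bool floating) { /* ... */ }

    // One flags value: self-describing and extensible.
    public void Configure(AnchorStyles anchors) { /* ... */ }
}

// Call sites:
//   options.Configure(true, false, true, false, false, false, true, true, false, false); // which bool is which?
//   options.Configure(AnchorStyles.Top | AnchorStyles.Left);                             // obvious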
Actually, it can have better performance, mainly if your enum derives from byte.
In that extreme case, the enum value is a single byte that can represent any of 256 combinations of up to 8 flags; storing those 8 flags as separate booleans would take at least 8 bytes.
But even then, I don't think that is the real reason. The reason I prefer flag enums is the power C# gives me to handle them: I can add several values with a single expression, I can remove them, and I can even compare several values at once in a single expression. With booleans, the code can become, let's say, more verbose.
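A quick illustration of those single expressions (again borrowing AnchorStyles from above):

var anchors = AnchorStyles.Top | AnchorStyles.Left;   // set several flags at once
anchors |= AnchorStyles.Right;                        // add one
anchors &= ~AnchorStyles.Left;                        // remove one
bool topAndRight = (anchors & (AnchorStyles.Top | AnchorStyles.Right))
                       == (AnchorStyles.Top | AnchorStyles.Right);  // test several at once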
From a domain-model perspective, it simply models reality better in some situations. If you have three booleans like AccountIsInDefault, IsPreferredCustomer, and RequiresSalesTaxState, it doesn't make sense to put them into a single [Flags]-decorated enumeration, because they are not three states of the same domain-model element.
But if you have a set of booleans like:
[Flags] enum AccountStatus { AccountIsInDefault = 1,
                             AccountOverdue = 2,
                             AccountFrozen = 4 }
or
[Flags] enum CargoState { ExceedsWeightLimit = 1,
                          ContainsDangerousCargo = 2,
                          IsFlammableCargo = 4,
                          ContainsRadioactive = 8 }
then it is useful to be able to store the total state of the account (or the cargo) in ONE variable... one that represents ONE domain element whose value can be any combination of states.
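For instance, with the CargoState enum above:

var state = CargoState.ExceedsWeightLimit | CargoState.ContainsDangerousCargo; // one variable, whole state
bool mustIsolate = (state & CargoState.ContainsRadioactive) != 0;              // false here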
Raymond Chen has a blog post on this subject.
Sure, bitfields save data memory, but you have to balance it against the cost in code size, debuggability, and reduced multithreading.
As others have said, its time is largely past. It's tempting to still do it, because bit fiddling is fun and cool-looking, but it's no longer more efficient, it has serious drawbacks in terms of maintenance, it doesn't play nicely with databases, and unless you're working in the embedded world, you have enough memory.
I would suggest never using enum flags unless you are dealing with some pretty serious memory limitations (not likely). You should always write code optimized for maintenance.
Having several boolean properties makes it easier to read and understand the code, change the values, and provide IntelliSense comments, not to mention reducing the likelihood of bugs. If necessary, you can always use a flags enum field internally; just make sure you expose the setting and getting of the values through boolean properties.
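A sketch of that pattern (AccountFlags and Account are made-up names):

[Flags]
enum AccountFlags { None = 0, InDefault = 1, Overdue = 2, Frozen = 4 }

class Account
{
    private AccountFlags _flags;   // compact internal representation

    // Readable, IntelliSense-friendly public surface:
    public bool IsInDefault
    {
        get => (_flags & AccountFlags.InDefault) != 0;
        set => _flags = value ? _flags | AccountFlags.InDefault
                              : _flags & ~AccountFlags.InDefault;
    }
}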
Space efficiency - 1 bit per flag
Time efficiency - bit comparisons are handled quickly by hardware.
Language independence - where the data may be handled by a number of different programs you don't need to worry about the implementation of booleans across different languages/platforms.
Most of the time, these are not worth the tradeoff in terms of maintenance. However, there are times when it is useful:
Network protocols - there will be a big saving in reduced size of messages
Legacy software - once I had to add some information for tracing into some legacy software.
Cost to modify the header: millions of dollars and years of effort.
Cost to shoehorn the information into 2 bytes in the header that weren't being used: 0.
Of course, there was the additional cost in the code that accessed and manipulated this information, but these were done by functions anyways so once you had the accessors defined it was no less maintainable than using Booleans.
I have seen answers citing time efficiency and compatibility. Those are indeed the reasons, but I don't think anyone has explained why they are still sometimes necessary in times like ours. From all the answers, and from chatting with other engineers, I have seen bitfields pictured as some quirky old-time way of doing things that should just die because the new ways are better.
Yes, in very rare cases you may want to do it the "old way" for performance's sake, like the classic million-iteration loop. But I'd say that's the wrong way to frame things.
While it is true that you should not care at all and should use whatever the C# language hands you as the new right way™ to do things (enforced by some fancy AI code analysis slapping you whenever you don't meet its code style), you should understand deeply that low-level strategies aren't there at random. In many cases they are the only way to solve a problem when you have no help from a fancy framework. Your OS, your drivers, and even .NET itself (especially the garbage collector) are built using bitfields and transactional instructions. Your CPU's instruction set is itself a very complex bitfield, so JIT compilers encode their output using complex bit processing and a few hardcoded bitfields so that the CPU can execute it correctly.
When we talk about performance, these things have a much larger impact than people imagine, today more than ever, especially once you start considering multicore.
When multicore systems started to become common, all the CPU manufacturers began mitigating the problems of SMP by adding dedicated transactional memory-access instructions. These were made specifically to tame the near-impossible task of making multiple CPUs cooperate at the kernel level without a huge drop in performance, but they also provide an OS-independent way to speed up the low-level parts of most programs. Basically, your program can use CPU-assisted instructions to modify integer-sized memory locations: a read-modify-write where the "modify" part can be anything you want, though the most common patterns are combinations of set, clear, and increment.
Usually the CPU simply monitors whether any other CPU is accessing the same address; if contention happens, it stops the operation from being committed to memory and signals the event to the application within the same instruction. This sounds like a trivial task, but superscalar CPUs (each core has multiple ALUs allowing instruction-level parallelism), multi-level caches (some private to each core, some shared across a cluster of cores), and Non-Uniform Memory Access systems (see the Threadripper CPUs) make it hard to keep everything coherent. Luckily, the smartest people in the world work on boosting performance while keeping all of this correct: today's CPUs dedicate a large number of transistors to making the caches and our read-modify-write transactions work correctly.
C# exposes the most common transactional memory-access patterns through the Interlocked class (it is only a limited set; for example, a very useful clear-mask-and-increment is missing, but you can always use CompareExchange instead, which gets very close to the same performance).
To achieve the same result with an array of booleans, you would have to use some sort of lock, and under contention a lock is orders of magnitude less performant than the atomic instructions.
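A sketch of atomically setting bits in a shared bitfield with CompareExchange (the classic compare-and-swap retry loop; AtomicBits is a made-up name):

using System.Threading;

static class AtomicBits
{
    private static int _flags;   // shared bitfield

    // Atomically sets the bits in `mask`; returns the previous value.
    public static int SetBits(int mask)
    {
        int oldValue, newValue;
        do
        {
            oldValue = Volatile.Read(ref _flags);
            newValue = oldValue | mask;
        }
        while (Interlocked.CompareExchange(ref _flags, newValue, oldValue) != oldValue);
        return oldValue;
        // No lock needed: if another thread changed _flags between the read
        // and the CompareExchange, the loop simply retries.
    }
}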
Here are some examples of much-appreciated hardware-assisted transactional access using bitfields, which would require a completely different strategy without them. Of course, these are outside C#'s scope:
Assume a DMA peripheral that has a set of DMA channels, say 20 (though any number up to the bit width of the interlocked integer will do). When any peripheral's interrupt handler, which may run at any time, under your beloved OS, on any core of your 32-core latest-gen CPU, wants a DMA channel, it has to allocate one (assign it to the peripheral) and use it. A bitfield covers all those requirements and needs just a dozen instructions to perform the allocation, all inlineable into the requesting code. You basically cannot go faster than this, and your code stays a few small functions; the hard part of the problem is delegated to the hardware. Constraint: bitfield only.
Assume a peripheral that requires some working space in normal RAM to do its duty, for example a high-speed I/O peripheral that uses scatter-gather DMA. In short, it uses fixed-size blocks of RAM, each populated with the description of the next transfer (the descriptor is itself made of bitfields) and chained one to the next, creating a FIFO queue of transfers in RAM. The application prepares descriptors first and then chains them onto the tail of the current transfers without ever pausing the controller (not even disabling interrupts). The allocation and deallocation of such descriptors can be done with a bitfield and transactional instructions, so that even when they are shared between different CPUs, and between the driver interrupt and the kernel, everything still works without conflicts. In one usage pattern, the kernel allocates descriptors atomically, without stopping anything, without disabling interrupts, and without additional locks (the bitfield itself is the lock), and the interrupt handler deallocates them when the transfer completes.
Most older strategies were to preallocate the resources and force the application to free them after use.
If you ever need multitasking on steroids, C# lets you use Threads + Interlocked, and lately C# introduced lightweight Tasks. Guess how they are built? Transactional memory access using the Interlocked class. So you likely don't need to reinvent the wheel; the low-level parts are already covered and well engineered.
So the idea is: let smart people (not me, I am a common developer like you) solve the hard parts for you, and just enjoy a general-purpose computing platform like C#. If you still see remnants of these techniques, it is because someone may still need to interface with the world outside .NET, accessing a driver or a system call that requires you to know how to build a descriptor and put each bit in the right place. Don't be mad at those people; they make our jobs possible.
In short: Interlocked + bitfields. Incredibly powerful; don't use it.
It is for speed and efficiency. Essentially all you are working with is a single int.
if ((flags & AnchorStyles.Top) == AnchorStyles.Top)
{
    // Do stuff
}
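Since .NET 4 there is also Enum.HasFlag, which reads more clearly (at the cost of boxing overhead on older runtimes):

if (flags.HasFlag(AnchorStyles.Top))
{
    // Do stuff
}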