Difference between IntPtr and UIntPtr - c#
I was looking at the P/Invoke declaration of RegOpenKeyEx when I noticed this comment on the page:
Changed IntPtr to UIntPtr: When invoking with IntPtr for the handles, you will run into an Overflow. UIntPtr is the right choice if you wish this to work correctly on 32 and 64 bit platforms.
This doesn't make much sense to me: both IntPtr and UIntPtr are supposed to represent pointers so their size should match the bitness of the OS - either 32 bits or 64 bits. Since these are not numbers but pointers, their signed numeric values shouldn't matter, only the bits that represent the address they point to. I cannot think of any reason why there would be a difference between these two but this comment made me uncertain.
Is there a specific reason to use UIntPtr instead of IntPtr? According to the documentation:
The IntPtr type is CLS-compliant, while the UIntPtr type is not. Only the IntPtr type is used in the common language runtime. The UIntPtr type is provided mostly to maintain architectural symmetry with the IntPtr type.
This, of course, implies that there's no difference (as long as someone doesn't try to convert the values to integers). So is the above comment from pinvoke.net incorrect?
Edit:
After reading MarkH's answer, I did a bit of checking and found out that .NET applications are not large address aware and can only handle a 2GB virtual address space when compiled in 32-bit mode. (One can use a hack to turn on the large address aware flag but MarkH's answer shows that checks inside the .NET Framework will break things because the address space is assumed to be only 2GB, not 3GB.)
This means that all correct virtual memory addresses a pointer can have (as far as the .NET Framework is concerned) will be between 0x00000000 and 0x7FFFFFFF. When this range is translated to signed int, no values would be negative because the highest bit is not set. This reinforces my belief that there's no difference in using IntPtr vs UIntPtr. Is my reasoning correct?
Fermat2357 pointed out that the above edit is wrong.
Internally, UIntPtr and IntPtr are implemented as
private unsafe void* m_value;
You are right: both simply manage the bits that represent an address.
The only place I can think of an overflow issue is pointer arithmetic. Both types support adding and subtracting offsets, but even in that case the binary representation should be fine after such an operation.
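For illustration, a minimal sketch of that point (the addresses here are made-up values, not real allocations); IntPtr.Add and UIntPtr.Add both take an int offset and simply adjust the stored bits:

using System;

class PointerArithmeticDemo
{
    static void Main()
    {
        IntPtr signed = new IntPtr(0x1000);
        UIntPtr unsigned = new UIntPtr(0x1000u);

        // Both Add methods shift the stored address by the same amount.
        IntPtr signedMoved = IntPtr.Add(signed, 0x10);
        UIntPtr unsignedMoved = UIntPtr.Add(unsigned, 0x10);

        Console.WriteLine(signedMoved);   // 4112 (0x1010)
        Console.WriteLine(unsignedMoved); // 4112 (0x1010)
    }
}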
From my experience I would also prefer UIntPtr, because I think of a pointer as an unsigned object. But that is not relevant here and is only my opinion.
It seems to make no difference whether you use IntPtr or UIntPtr in your case.
EDIT:
IntPtr is CLS-compliant because there are languages on top of the CLR that do not support unsigned types.
This, of course, implies that there's no difference (as long as someone doesn't try to convert the values to integers).
Unfortunately, the framework attempts to do precisely this (when compiled specifically for x86). Both the IntPtr(long) constructor and the ToInt32() method attempt to cast the value to an int inside a checked expression. Here's the implementation as seen when using the framework debugging symbols.
public unsafe IntPtr(long value)
{
#if WIN32
    m_value = (void*)checked((int)value);
#else
    m_value = (void*)value;
#endif
}
Of course, the checked expression will throw an exception if the value is out of range. UIntPtr doesn't overflow for the same value, because it attempts to cast to uint instead.
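To see the difference in practice with the RegOpenKeyEx scenario from the question, consider the predefined key HKEY_LOCAL_MACHINE (0x80000002). A minimal sketch, assuming the reference-source constructors quoted above and an x86 build:

using System;

class HandleOverflowDemo
{
    static void Main()
    {
        const long hklm = 0x80000002; // value of HKEY_LOCAL_MACHINE

        // Fine on both 32- and 64-bit: 0x80000002 fits in a uint.
        UIntPtr asUnsigned = new UIntPtr((ulong)hklm);
        Console.WriteLine(asUnsigned);

        try
        {
            // On an x86 build this throws OverflowException, because
            // checked((int)0x80000002L) is out of range for a signed int.
            IntPtr asSigned = new IntPtr(hklm);
            Console.WriteLine(asSigned);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow: handle value does not fit in a signed 32-bit IntPtr");
        }
    }
}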
The difference between IntPtr and UIntPtr is the same as the difference between Int32 and UInt32 (i.e., it's all about how the numbers get interpreted).
Usually, it doesn't matter which you choose, but, as mentioned, in some cases, it can come back to bite you.
(I'm not sure why MS chose IntPtr to begin with (CLS compliance, etc.). Memory is handled as a DWORD (u32), meaning it's unsigned, so the preferred type should be UIntPtr, not IntPtr, right?)
Even UIntPtr.Add() seems wrong to me: it takes a UIntPtr as the pointer and an 'int' for the offset, when 'uint' would make much more sense to me. Why feed a signed value to an unsigned method, when more than likely the code casts it to 'uint' under the hood? /facepalm
I would personally prefer UIntPtr over IntPtr simply because the unsigned values match the values of the underlying memory which I'm working with. :)
Btw, I'll likely end up creating my own pointer type (using UInt32), built specifically for working directly with memory. (I'm guessing that UIntPtr isn't going to catch all the possible bad memory issues, e.g. 0xBADF00D, etc., which is a CTD waiting to happen. I'll have to see how the built-in type handles things first; hopefully a null/zero check is properly filtering out stuff like this.)
Related
Casting IntPtr to int only works sometimes
Consider this code:

IntPtr p = (IntPtr)(long.MaxValue); // Not a valid ptr in 32 bit,
                                    // but this is to demonstrate the exception for 64 bit
Console.WriteLine((int)(long)p);
Console.WriteLine((int)p);

The second WriteLine throws an OverflowException when compiling and running on 64 bit. This is documented here. My question is: why? When converting a pointer to an Int32 you conceptually lose all pointer semantics, and reduce the pointer semantically to just its integer representation. Then why throw an exception, instead of truncating the value to fit inside an integer? That would be the most sensible thing to do imho. Because if the programmer really wanted to avoid truncation at all cost, why put it into an int in the first place? Is this design on purpose, or is this an incomplete implementation of the conversion operator?

I feel some clarification is necessary after reading the first comments. My question can also be read as follows: Is there a realistic use case where you would put an IntPtr into an int and later translate it back to an IntPtr, such that it is still valid after that?

As a reply to people asking for use cases, let me give the only use case for putting an IntPtr into an int that I can come up with: implementing GetHashCode for a managed wrapper object around an unmanaged object.
The implementation of the cast operator for IntPtr to int is as follows (from ReferenceSource):

[System.Security.SecuritySafeCritical]  // auto-generated
[System.Runtime.Versioning.NonVersionable]
public unsafe static explicit operator int(IntPtr value)
{
#if WIN32
    return (int)value.m_value;
#else
    long l = (long)value.m_value;
    return checked((int)l);
#endif
}

As you can see, for Windows x64 it will be explicitly using checked to force an exception on overflow. (And I'm very glad that it does!) Conversely, when you cast it to a long first and then cast to an int yourself, by default it will be done unchecked and therefore no exception will be thrown. I can only guess at the reason they did this, but it seems fairly obvious that truncating a pointer is almost always A Bad Thing, so they decided that the code should throw an exception if truncation happens.
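For the GetHashCode use case mentioned in the question, a common workaround is to route the conversion through long and truncate explicitly, so the checked x64 operator never runs. A minimal sketch; the wrapper class and field are hypothetical:

using System;

// Hypothetical managed wrapper around an unmanaged handle, shown only
// to illustrate deliberate truncation of an IntPtr for hashing.
class NativeResource
{
    private readonly IntPtr _handle;

    public NativeResource(IntPtr handle)
    {
        _handle = handle;
    }

    public override int GetHashCode()
    {
        // Going through long and truncating in an unchecked context avoids
        // the OverflowException that (int)_handle can throw on 64-bit.
        return unchecked((int)(long)_handle);
    }
}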
What is the difference between [In, Out] and ref when using pinvoke in C#?
Is there a difference between using [In, Out] and just using ref when passing parameters from C# to C++? I've found a couple different SO posts, and some stuff from MSDN as well that comes close to my question but doesn't quite answer it. My guess is that I can safely use ref just like I would use [In, Out], and that the marshaller won't act any differently. My concern is that it will act differently, and that C++ won't be happy with my C# struct being passed. I've seen both things done in the code base I'm working in...

Here are the posts I've found and have been reading through:

Are P/Invoke [In, Out] attributes optional for marshaling arrays? Makes me think I should use [In, Out].
MSDN: InAttribute
MSDN: OutAttribute
MSDN: Directional Attributes

These three posts make me think that I should use [In, Out], but that I can use ref instead and it will have the same machine code. That makes me think I'm wrong -- hence asking here.
The usage of ref or out is not arbitrary. If the native code requires pass-by-reference (a pointer) then you must use those keywords if the parameter type is a value type, so that the jitter knows to generate a pointer to the value. And you must omit them if the parameter type is a reference type (class); objects are already pointers under the hood.

The [In] and [Out] attributes are then necessary to resolve the ambiguity about pointers; they don't specify the data flow. [In] is always implied by the pinvoke marshaller, so it doesn't have to be stated explicitly. But you must use [Out] if you expect to see any changes made by the native code to a struct or class member back in your code. The pinvoke marshaller avoids copying back automatically to avoid the expense.

A further quirk is that [Out] is often not necessary. That happens when the value is blittable, an expensive word that means the managed value or object layout is identical to the native layout. The pinvoke marshaller can then take a shortcut: pinning the object and passing a pointer to the managed object's storage. You'll inevitably see changes then, since the native code is directly modifying the managed object. That is something you in general strongly want to pursue; it is very efficient. You help by giving the type the [StructLayout(LayoutKind.Sequential)] attribute, which suppresses an optimization the CLR uses to rearrange the fields to get the smallest object, and by using only fields of simple value types or fixed-size buffers, albeit that you don't often have that choice. Never use a bool; use byte instead.

There is no easy way to find out whether a type is blittable, other than it not working correctly or by using the debugger and comparing pointer values. Just be explicit and always use [Out] when you need it. It doesn't cost anything if it turns out not to be necessary, it is self-documenting, and you can feel good that it will still work if the architecture of the native code changes.
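For illustration, a minimal declaration combining the pieces above; the DLL name, function and struct here are hypothetical, only the shape of the declaration matters:

using System;
using System.Runtime.InteropServices;

// Sequential layout so the managed field order matches the native struct.
[StructLayout(LayoutKind.Sequential)]
struct DeviceInfo
{
    public int Id;
    public byte Status;   // byte instead of bool to keep the struct blittable
}

static class NativeMethods
{
    // Hypothetical native signature: void FillDeviceInfo(DEVICE_INFO* info);
    // ref tells the jitter to pass a pointer to the value type,
    // [In, Out] documents the data flow and forces copy-back for
    // non-blittable variations of the struct.
    [DllImport("device.dll")]
    public static extern void FillDeviceInfo([In, Out] ref DeviceInfo info);
}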
CLS-compliant alternative for ulong property
Background

I am writing a managed x64 assembler (which is also a library), so it has multiple classes which define an unsigned 64-bit integer property for use as addresses and offsets. Some are file offsets, others are absolute addresses (relative to main memory) and again others are relative virtual addresses.

Problem

I use the ulong datatype for the properties in the mentioned scenarios, and this works fine. However, such properties are not CLS-compliant. I can mark them as [CLSCompliant(false)], but then I need to provide a CLS-compliant alternative to users of the library.

Options and questions

Some suggest providing an alternative property with a bigger data type, but this is not an option because there is no bigger signed integer primitive which could hold all values from 0 to UInt64.MaxValue. I would rather not mark my entire assembly as non-CLS-compliant, because in most usage scenarios, not all the possible values up to UInt64.MaxValue are used.

So, for e.g. Address I could provide an alternative long property AddressAlternative, which only accepts positive values. However, what should happen when Address somehow contains a value above Int64.MaxValue? Should AddressAlternative throw some exception? And what would be an appropriate name for AddressAlternative? Providing an alternative for every usage of ulong would result in many 'double' properties. Is there a better way to do this? Note that not all usages of ulong properties have the same semantics, so a single struct would not cut it.

And finally, I have the same CLS compliance problem in constructor parameters. So should I provide an alternative overload accepting long for such a parameter? I do not mind restricting the use of (some functionality of) the library when it is used from a CLS-only context, as long as it can be used in most scenarios.
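For illustration, one possible shape of the 'double property' approach the question describes; the class name, property names and the overflow behaviour are just one hypothetical choice:

using System;

public class SectionHeader
{
    // The real property: full unsigned 64-bit range, not CLS-compliant.
    [CLSCompliant(false)]
    public ulong Address { get; set; }

    // CLS-compliant alternative exposing the same value as a long.
    public long AddressSigned
    {
        get
        {
            // Throws OverflowException if Address is above Int64.MaxValue.
            return checked((long)Address);
        }
        set
        {
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value));
            Address = (ulong)value;
        }
    }
}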
"but when it represents an unsigned address above Int64.MaxValue"

You are using the wrong type; addresses must be stored in IntPtr or UIntPtr. There is just no way your problem is realistic. If you can't afford to lose the single bit in UInt64, then you are way too close to overflow. If this number represents an index, then a plain Int32 will be fine; .NET memory blobs are limited to 2 gigabytes, even on a 64-bit machine. If it is an address, then IntPtr will be fine for a very, very long time. Currently available hardware is 4.5 orders of magnitude away from reaching that limit. Very drastic hardware redesign will be needed to get close, and you'll have much bigger problems to worry about if that day ever comes. Nine exabytes of virtual memory is enough for everybody until I retire.
Microsoft defines a 64-bit address as Int64, not UInt64, so you can still be CLS compliant. Please refer to http://msdn.microsoft.com/en-us/library/837ksy6h.aspx, which basically says:

IntPtr Constructor (Int64)
Initializes a new instance of IntPtr using the specified 64-bit pointer.
Parameters
value
Type: System.Int64
A pointer or handle contained in a 64-bit signed integer.

And yes, I just did a quick test and the following worked fine in a project targeted for either x64 or Any CPU. I placed a breakpoint in the code and examined x. However, when targeted for only x86, it will throw an exception.

using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            IntPtr x = new IntPtr(long.MaxValue);
        }
    }
}

But if it turns out that you really need that extra bit, you could provide two libraries: one that is CLS-compliant and one that is not, the user's choice. This could be accomplished by using #if statements and Conditional Compilation Symbols. This way, you can define the same variable name, but with different definitions. http://msdn.microsoft.com/en-us/library/4y6tbswk.aspx
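The two-libraries idea could look roughly like this; the conditional compilation symbol CLS_ONLY and the property are made up for the sketch:

public class Assembler
{
#if CLS_ONLY
    // Built with the CLS_ONLY symbol defined: CLS-compliant surface.
    public long Address { get; set; }
#else
    // Default build: full unsigned range, flagged as non-compliant.
    [System.CLSCompliant(false)]
    public ulong Address { get; set; }
#endif
}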
64 Bit P/Invoke Idiosyncrasy
I am trying to properly marshal some structs for a P/Invoke, but am finding strange behavior when testing on a 64 bit OS. I have a struct defined as:

/// <summary>http://msdn.microsoft.com/en-us/library/aa366870(v=VS.85).aspx</summary>
[StructLayout(LayoutKind.Sequential)]
private struct MIB_IPNETTABLE
{
    [MarshalAs(UnmanagedType.U4)]
    public UInt32 dwNumEntries;
    public IntPtr table; //MIB_IPNETROW[]
}

Now, to get the address of the table, I would like to do a Marshal.OffsetOf() call like so:

IntPtr offset = Marshal.OffsetOf(typeof(MIB_IPNETTABLE), "table");

This should be 4 - I have dumped the bytes of the buffer to confirm this, as well as replacing the above call with a hard-coded 4 in my pointer arithmetic, which yielded correct results. I do get the expected 4 if I instantiate MIB_IPNETTABLE and perform the following call:

IntPtr offset = (IntPtr)Marshal.SizeOf(ipNetTable.dwNumEntries);

Now, in a sequential struct the offset of a field should be the sum of the sizes of the preceding fields, correct? Or is it the case that when it is an unmanaged structure the offset really is 8 (on an x64 system), but becomes 4 only after marshalling magic? Is there a way to get the OffsetOf() call to give me the correct offset? I can limp along using calls to SizeOf(), but OffsetOf() is simpler for larger structs.
In a 64-bit C/C++ build the offset of your table field would be 8 due to alignment requirements (unless you forced it otherwise). I suspect that the CLR is doing the same to you: http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.layoutkind.aspx

"The members of the object are laid out sequentially, in the order in which they appear when exported to unmanaged memory. The members are laid out according to the packing specified in StructLayoutAttribute.Pack, and can be noncontiguous."

You may want to use that attribute, or use the LayoutKind.Explicit attribute along with the FieldOffset attribute on each field, if you need that level of control.
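For example, forcing 4-byte packing should make OffsetOf report 4 for the table field on x64 as well; a sketch based on the struct from the question (LayoutKind.Explicit with FieldOffset on each field is the other route):

using System;
using System.Runtime.InteropServices;

// Pack = 4 stops the IntPtr field from being aligned to 8 bytes on x64.
[StructLayout(LayoutKind.Sequential, Pack = 4)]
struct MIB_IPNETTABLE
{
    [MarshalAs(UnmanagedType.U4)]
    public UInt32 dwNumEntries;
    public IntPtr table; // MIB_IPNETROW[]
}

class OffsetDemo
{
    static void Main()
    {
        // Expected to print 4 on both x86 and x64 with Pack = 4.
        Console.WriteLine(Marshal.OffsetOf(typeof(MIB_IPNETTABLE), "table"));
    }
}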
Should I use int or Int32
In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care?
The two are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32s in the same way. The resulting code will be identical: the difference is purely one of readability or code appearance.
ECMA-334:2006 C# Language Specification (p18): Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.
They both declare 32 bit integers, and as other posters stated, which one you use is mostly a matter of syntactic style. However they don't always behave the same way. For instance, the C# compiler won't allow this:

public enum MyEnum : Int32
{
    member1 = 0
}

but it will allow this:

public enum MyEnum : int
{
    member1 = 0
}

Go figure.
I always use the system types - e.g., Int32 instead of int. I adopted this practice after reading Applied .NET Framework Programming - author Jeffrey Richter makes a good case for using the full type names. Here are the two points that stuck with me:

Type names can vary between .NET languages. For example, in C#, long maps to System.Int64 while in C++ with managed extensions, long maps to Int32. Since languages can be mixed-and-matched while using .NET, you can be sure that using the explicit class name will always be clearer, no matter the reader's preferred language.

Many framework methods have type names as part of their method names:

BinaryReader br = new BinaryReader( /* ... */ );
float val = br.ReadSingle();     // OK, but it looks a little odd...
Single val = br.ReadSingle();    // OK, and is easier to read
int is a C# keyword and is unambiguous. Most of the time it doesn't matter, but two things go against Int32:

You need to have a "using System;" statement. Using "int" requires no using statement.
It is possible to define your own class called Int32 (which would be silly and confusing). int always means int.
As already stated, int = Int32. To be safe, be sure to always use int.MinValue/int.MaxValue when implementing anything that cares about the data type boundaries. Suppose .NET decided that int would now be Int64, your code would be less dependent on the bounds.
Byte size for types is not too interesting when you only have to deal with a single language (and for code which you don't have to remind yourself about math overflows). The part that becomes interesting is when you bridge between one language and another, C# to a COM object, etc., or you're doing some bit-shifting or masking and you need to remind yourself (and your code-review co-workers) of the size of the data.

In practice, I usually use Int32 just to remind myself what size they are, because I do write managed C++ (to bridge to C#, for example) as well as unmanaged/native C++. As you probably know, long in C# is 64 bits, but in native C++ it ends up as 32 bits, and char is Unicode/16 bits in C# while in C++ it is 8 bits. But how do we know this? The answer is, because we've looked it up in the manual and it said so.

With time and experience, you will start to be more type-conscientious when you write code to bridge between C# and other languages (some readers here are thinking "why would you?"), but IMHO I believe it is a better practice because I cannot remember what I coded last week (or I don't have to specify in my API document that "this parameter is a 32-bit integer").

In F# (although I've never used it), they define int, int32, and nativeint. The same question should arise: "which one do I use?". As others have mentioned, in most cases, it should not matter (it should be transparent). But I for one would choose int32 and uint32 just to remove the ambiguities. I guess it would just depend on what applications you are coding, who's using it, what coding practices you and your team follow, etc. to justify when to use Int32.

Addendum: Incidentally, since I answered this question a few years ago, I've started using both F# and Rust. In F#, it's all about type inference, and when bridging/interop'ing between C# and F#, the native types match, so there is no concern; I've rarely had to explicitly define types in F# (it's almost a sin if you don't use type inference). In Rust, they have completely removed such ambiguities and you have to use i32 vs u32; all in all, reducing ambiguities helps reduce bugs.
There is no difference between int and Int32, but as int is a language keyword many people prefer it stylistically (just as with string vs String).
In my experience it's been a convention thing. I'm not aware of any technical reason to use int over Int32, but it's: Quicker to type. More familiar to the typical C# developer. A different color in the default visual studio syntax highlighting. I'm especially fond of that last one. :)
I always use the aliased types (int, string, etc.) when defining a variable and use the real name when accessing a static method:

int x, y;
...
String.Format("{0}x{1}", x, y);

It just seems ugly to see something like int.TryParse(). There's no other reason I do this other than style.
Though they are (mostly) identical (see below for the one [bug] difference), you definitely should care, and you should use Int32.

The name for a 16-bit integer is Int16. For a 64-bit integer it's Int64, and for a 32-bit integer the intuitive choice is: int or Int32?

The question of the size of a variable of type Int16, Int32, or Int64 is self-referencing, but the question of the size of a variable of type int is a perfectly valid question, and questions, no matter how trivial, are distracting, lead to confusion, waste time, hinder discussion, etc. (the fact this question exists proves the point).

Using Int32 promotes that the developer is conscious of their choice of type. How big is an int again? Oh yeah, 32. The likelihood that the size of the type will actually be considered is greater when the size is included in the name. Using Int32 also promotes knowledge of the other choices. When people aren't forced to at least recognize there are alternatives, it becomes far too easy for int to become "THE integer type".

The class within the framework intended to interact with 32-bit integers is named Int32. Once again, which is more intuitive, less confusing, lacks an (unnecessary) translation (not a translation in the system, but in the mind of the developer), etc.:

int lMax = Int32.MaxValue or Int32 lMax = Int32.MaxValue?

int isn't a keyword in all .NET languages. Although there are arguments why it's not likely to ever change, int may not always be an Int32.

The drawbacks are two extra characters to type and [bug]. This won't compile:

public enum MyEnum : Int32
{
    AEnum = 0
}

But this will:

public enum MyEnum : int
{
    AEnum = 0
}
I know that the best practice is to use int, and all MSDN code uses int. However, there's not a reason beyond standardisation and consistency as far as I know.
You shouldn't care. You should use int most of the time. It will help the porting of your program to a wider architecture in the future (currently int is an alias for System.Int32 but that could change). Only when the bit width of the variable matters (for instance, to control the layout in memory of a struct) should you use Int32 and the others (with the associated "using System;").
int is the C# language's shortcut for System.Int32 Whilst this does mean that Microsoft could change this mapping, a post on FogCreek's discussions stated [source] "On the 64 bit issue -- Microsoft is indeed working on a 64-bit version of the .NET Framework but I'm pretty sure int will NOT map to 64 bit on that system. Reasons: 1. The C# ECMA standard specifically says that int is 32 bit and long is 64 bit. 2. Microsoft introduced additional properties & methods in Framework version 1.1 that return long values instead of int values, such as Array.GetLongLength in addition to Array.GetLength. So I think it's safe to say that all built-in C# types will keep their current mapping."
int is the same as System.Int32 and when compiled it will turn into the same thing in CIL. We use int by convention in C# since C# wants to look like C and C++ (and Java) and that is what we use there... BTW, I do end up using System.Int32 when declaring imports of various Windows API functions. I am not sure if this is a defined convention or not, but it reminds me that I am going to an external DLL...
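For example, a declaration in that spirit (GetWindowTextLength is just a convenient Win32 function to demonstrate with):

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Spelling out Int32 makes the 32-bit contract with the native
    // signature (int WINAPI GetWindowTextLength(HWND)) explicit.
    [DllImport("user32.dll", SetLastError = true)]
    public static extern Int32 GetWindowTextLength(IntPtr hWnd);
}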
Once upon a time, the int datatype was pegged to the register size of the machine targeted by the compiler. So, for example, a compiler for a 16-bit system would use a 16-bit integer. However, we thankfully don't see much 16-bit any more, and when 64-bit started to get popular people were more concerned with making it compatible with older software and 32-bit had been around so long that for most compilers an int is just assumed to be 32 bits.
I'd recommend using Microsoft's StyleCop. It is like FxCop, but for style-related issues. The default configuration matches Microsoft's internal style guides, but it can be customised for your project. It can take a bit to get used to, but it definitely makes your code nicer. You can include it in your build process to automatically check for violations.
It makes no difference in practice and in time you will adopt your own convention. I tend to use the keyword when assigning a type, and the class version when using static methods and such: int total = Int32.Parse("1009");
int and Int32 is the same. int is an alias for Int32.
You should not care. If size is a concern I would use byte, short, int, then long. The only reason you would use an int larger than int32 is if you need a number higher than 2147483647 or lower than -2147483648. Other than that I wouldn't care, there are plenty of other items to be concerned with.
int is an alias for System.Int32, as defined in this table: Built-In Types Table (C# Reference)
I use int in the event that Microsoft changes the default implementation for an integer to some new fangled version (let's call it Int32b). Microsoft can then change the int alias to Int32b, and I don't have to change any of my code to take advantage of their new (and hopefully improved) integer implementation. The same goes for any of the type keywords.
You should not care in most programming languages, unless you need to write very specific mathematical functions, or code optimized for one specific architecture... Just make sure the size of the type is enough for you (use something bigger than an Int if you know you'll need more than 32-bits for example)
It doesn't matter. int is the language keyword and Int32 its actual system type. See also my answer here to a related question.
Use of int or Int32 is the same; int is just sugar to simplify the code for the reader. Use the nullable variant int? or Int32? when you work with database fields containing null. That will save you from a lot of runtime issues.
Some compilers have different sizes for int on different platforms (not C# specific). Some coding standards (MISRA C) require that all types used are size-specified (i.e. Int32 and not int). It is also good to specify prefixes for different type variables (e.g. b for 8-bit byte, w for 16-bit word, and l for 32-bit long word => Int32 lMyVariable).

You should care because it makes your code more portable and more maintainable. Portable may not be applicable to C# if you are always going to use C# and the C# specification will never change in this regard. Maintainable imho will always be applicable, because the person maintaining your code may not be aware of this particular C# specification, and may miss a bug where the int occasionally becomes more than 2147483647. In a simple for-loop that counts, for example, the months of the year, you won't care, but when you use the variable in a context where it could possibly overflow, you should care. You should also care if you are going to do bit-wise operations on it.
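A tiny sketch of the overflow the last paragraph warns about: by default the addition silently wraps, and only a checked context surfaces the bug:

using System;

class OverflowDemo
{
    static void Main()
    {
        Int32 counter = Int32.MaxValue;   // 2147483647

        // Unchecked by default: wraps around to -2147483648 without any error.
        Console.WriteLine(counter + 1);

        try
        {
            // The same addition in a checked context throws instead.
            Console.WriteLine(checked(counter + 1));
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}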
Using the Int32 type requires a namespace reference to System, or fully qualifying (System.Int32). I tend toward int, because it doesn't require a namespace import, therefore reducing the chance of namespace collision in some cases. When compiled to IL, there is no difference between the two.
According to the Immediate Window in Visual Studio 2012, Int32 is int and Int64 is long. Here is the output:

sizeof(int)
4
sizeof(Int32)
4
sizeof(Int64)
8

Int32
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648

Int64
long
    base {System.ValueType}: System.ValueType
    MaxValue: 9223372036854775807
    MinValue: -9223372036854775808

int
int
    base {System.ValueType}: System.ValueType
    MaxValue: 2147483647
    MinValue: -2147483648
Also consider Int16. If you need to store an integer in memory in your application and you are concerned about the amount of memory used, then you could go with Int16 since it uses less memory and has a smaller min/max range than Int32 (which is what int is).
It's 2021 and I've read all the answers. Most say it's basically the same (it's an alias), or "it depends on what you like", or "by convention use int...". No answer gives you a clear when, where and why to use Int32 over int. That's why I'm here.

98% of the time, you can get away with int, and that's perfectly fine. What are the other 2%? IO with records (structs, native types, organization and compression). Someone said a useless application is one that can read and manipulate data, but is not actually capable of writing new data to a defined storage. But in order not to reinvent the wheel, at some point, those dealing with old data have to retrieve the documentation on how to read it. And chances are it was compiled in an era where a long was always a 32-bit integer.

It happened before, where some had trouble remembering that a db is a byte, a dw is a word, a dd is a double word, but how many bits was that about? And that will likely happen again on C# 43.0 on a 256-bit platform... where the (future) boys never heard of "by convention, use int instead of Int32". That's the 2% where Int32 matters over int. MSDN saying today it's recommended to use int is irrelevant; it usually works with the current C# version, but that may get dropped in future MSDN pages, in 2028, or 2034? Fewer and fewer people have WORD and DWORD encounters today, yet two decades ago they were common. The same thing will happen to int, in the very case of dealing with precise fixed-length data.

In memory, a ushort (UInt16) can be a Decimal as long as its fractional part is null, it is positive or null, and it does not exceed 65535. But inside a file, it must be a short, 16 bits long. And when you read documentation about a file structure from another era (inside the source code), you realize there are 3545 record definitions, some nested inside others, each record having between a couple and hundreds of fields of varying types. Somewhere in 2028 a boy thought he could just get away with Ctrl-H-ing int to Int32, whole word only and match case... ~67000 changes in the whole solution. Hit Run and still get CTDs. Clap clap clap. Go figure which ints you should have changed to Int32 and which ones you should have changed to var.

Also worth pointing out: pointers are useful when you deal with terabytes of data (have a virtual representation of an entire planet on some cloud, download on demand, and render to the user's screen). Pointers are really fast in the ~1% of cases where there is so much data to compute in realtime that you must trade with unsafe code. Again, it's to come up with an actually useful application, instead of being fancy and wasting time porting to managed. So, be careful: is IntPtr 32 bits or 64 bits already? Could you get away with your code without caring how many bytes you read/skip? Or just go (Int32*) int32Ptr = (Int32*) int64Ptr;...

An even more factual example is a file containing data processing and their respective commands (methods in the source code), like internal branching (a conditional continue, or a jump if the test fails). An IfTest record in the file says: if value equals someConstant, jump to address, where address is a 16-bit integer representing a relative pointer inside the file (you can go back towards the start of the file up to 32768 bytes, or up to 32767 bytes further down). But 10 years later, platforms can handle larger files and larger data, and now you have a 32-bit relative address. Your method in the source code was named IfTestMethod(...); now how would you name the new one? IfTestMethodInt() or IfTestMethod32()? Would you also rename the old method IfTestMethodShort() or IfTestMethod16()? Then a decade later, you get a new command with a long (Int64) relative address... What about a 128-bit command some 10 years later? Be consistent! Principles are great, but sometimes logic is better.

The problem is not me or you writing code today that looks okay to us. It is being in the place of the one guy trying to understand what we wrote, 10 or 20 years later: how much does it cost in time (= money) to come up with a working updated code? Being explicit or writing redundant comments will actually save time. Which one do you prefer? Int32 val; or var val; // 32-bits.

Also, working with foreign data from other platforms or compile directives is a concept (today involving Interop, COM, PInvoke...). And that's a concept we cannot get rid of, whatever the era, because it takes time to update (reformat) data (via serialization, for example). Upgrading DLLs to managed code also takes time. We took time to leave assembler behind and go full C. We are taking time to move from 32-bit data to 64-bit, yet we still need to care about 8 and 16 bits. What next in the future? Move from 128 bits to 256, or directly to 1024? Do not assume a keyword explicit to you will remain explicit for the guys reading your documentation 20 years later (and documentation usually contains errors, mainly because of copy/paste).

So here it is: where to use Int32 today over int? It's when you are producing code that is data-size sensitive (IO, network, cross-platform data...), and at some point in the future - could be decades later - someone will have to understand and port your code. The key reason is era-based. For 1000 lines of code, it's okay to use int; for 100000 lines, it's not anymore. That's a rare duty only a few will have to do, and hell yeah, they struggle, if only some had been a little more explicit instead of relying on "by convention", or "it looks pretty in the IDE, Int32 is so ugly", or "they are the same, don't bother, it's a waste of time to write those two numbers and hold the shift key", or "int is unambiguous", or "those who don't like int are just VB fanboys - go learn C# you noob" (yeah, that's the underlying meaning of a few comments right here).

Do not take what I wrote as a generalized perception, nor as an attempt to promote Int32 in all cases. I clearly stated the specific case (as it seems to me this was not clear from other answers), to advocate for the few ones getting blamed by their supervisors for being fancy writing Int32, while at the same time the very same supervisor does not understand what takes so long to rewrite that C DLL in C#. It's an edge case, but at least for those reading, "Int32" has at least one purpose in its life.

The point can be further discussed by turning the question the other way around: why not just get rid of Int32, Int64 and all the other variants in future C# specifications? What would that imply?
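As a small sketch of the 'IO with records' case described above: when a file format pins a field to an exact width, the framework type names keep that contract visible in the code (the record layout here is invented for the example):

using System;
using System.IO;

// Hypothetical on-disk record: a 16-bit type tag followed by a 32-bit relative address.
struct CommandRecord
{
    public Int16 TypeTag;          // exactly 2 bytes in the file
    public Int32 RelativeAddress;  // exactly 4 bytes in the file
}

static class RecordReader
{
    public static CommandRecord Read(BinaryReader reader)
    {
        CommandRecord record;
        record.TypeTag = reader.ReadInt16();
        record.RelativeAddress = reader.ReadInt32();
        return record;
    }
}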