Is there a warning (error) similar to C4061 for C#?

Usually, if I use switch for enums in C#, I have to write something like that:
switch (e)
{
case E.Value1:
//...
break;
case E.Value2:
//...
break;
//...
default:
throw new NotImplementedException("...");
}
In C++ (for VS) I could enable warnings C4061 and C4062 for this switch, make them errors and have a compile-time check. In C# I have to move this check to runtime...
Does anyone know how in C# I can have this checked in compile time? Maybe there is a warning, disabled by default, which I missed, or some other way?

No, there isn't a compile-time check - it's legitimate to have a switch/case which only handles some of the named values. It would have been possible to include such a check, but there are some issues.
Firstly, it's entirely valid (unfortunately) for an enum value not to have any of the "named" values:
enum Foo
{
Bar = 0,
Baz = 1
}
...
Foo nastyValue = (Foo) 50;
Given that any value is feasible within the switch/case, the compiler can't know that you didn't mean to try to handle an unnamed value.
Secondly, it wouldn't work well with Flags enums - the compiler doesn't really know which values are meant to be convenient combinations. It could infer that, but it would be a bit icky.
Thirdly, it's not always what you want - sometimes you really do only want to respond to a few cases. I wouldn't want to have to suppress warnings on a reasonably regular basis.
You can use Enum.IsDefined to check for this up front, but that's relatively inefficient.
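For completeness, a minimal sketch of that up-front runtime check, reusing the Foo enum and nastyValue from the example above:
if (!Enum.IsDefined(typeof(Foo), nastyValue))
{
    throw new ArgumentOutOfRangeException(nameof(nastyValue), "not a named Foo value");
}
(Enum.IsDefined uses reflection internally, which is where the inefficiency comes from, and it returns false for combined flag values that aren't named members.)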
I agree that all of this is a bit of a pain - enums are a bit of a nasty area when it comes to .NET :(

I understand that this is necroposting, but nothing has really changed in this area of the compiler since then, so I made a Roslyn analyzer for switch statements.
You can download the SwitchAnalyzer package.
It is a Roslyn analyzer and it supports:
Enums, including the | and & operations on them, so you can check flags as well (but not as a single combined int value).
Interface implementations (pattern matching) in the current data context.
Pattern matching for classes is not implemented in version 0.4 yet (but I hope to implement it soon).
To use it, just add the package to your project; you will get warnings for all uncovered cases if you don't have a default branch or if the default branch just throws an exception. And of course, you can enable the "Treat warnings as errors" option for your project, for all or for specific warnings. Feel free to contact me if you find any bugs.
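For illustration only (the enum and method here are made up), this is the kind of switch that, per the description above, the analyzer is meant to flag as having an uncovered value:
enum Color { Red, Green, Blue }

static string Name(Color color)
{
    switch (color)
    {
        case Color.Red: return "red";
        case Color.Green: return "green";
        default: throw new NotImplementedException(); // per the description above, Color.Blue would be reported as uncovered
    }
}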

Why can't I get the count of enums during compile time?

I have asked "How can I get the number of enums as a constant?", and I found out that I cannot get the count of enums during compile time, because C# uses reflection to do so.
I read What is reflection and why is it useful?, so I have a very basic understanding of reflection.
To get the count of enums, I can use Enum.GetNames(typeof(Item.Type)).Length, and this happens at runtime using reflection.
I don't see any runtime knowledge needed to get the count of enums, because as far as I know, the count of enums cannot be changed during runtime.
Why does C# have to use reflection to get the count of enums? Why can't it do so during compile time?
Just because something can be evaluated at compile time doesn't mean that someone has programmed the compiler to do so. string.Format("{0:N2}", Math.PI) is another example.
The only way at present to get the number of values of an enum is by using reflection (Enum.GetNames or something similar). So it is not a constant expression, although technically the compiler could just evaluate the expression at compile time and determine what the result is.
nameof is a perfect example. It is constant at compile-time, but there was no mechanism to extract the result at compile time until someone designed, developed, tested, documented, and shipped the feature. None of those are free, and thus the idea must compete for valuable resources (people, time, money) against other features that may be more valuable.
So if you feel that a compile-time construct like enumcount(Item.Type) is a valuable addition to the language, then you are more than welcome to post a suggestion on Connect and see if it makes it to the top of the feature list.
But I need this number as a constant, so that I can use it in Unity's [Range(int, int)] attribute.
One non-ideal workaround is to define a constant that matches the current number of enum items, and throw an exception at run-time if the counts do not match:
Define a public constant right next to your enum, commenting it so that developers know to update it:
// Update this value any time the Type enum is updated
public const int TypeCount = 5;
public enum Type
{
Bar1,
Bar2,
Bar3,
Bar4,
Bar5,
}
use it in your attribute:
[Range(0, Item.TypeCount)]
public int blahBlahBlah;
and check it at the start of your app:
public static void Main()
{
    if (Enum.GetNames(typeof(Item.Type)).Length != Item.TypeCount)
        throw new ApplicationException("TypeCount and number of Types do not match.\nPlease update the TypeCount constant.");
}
I think, in simple terms:
An enum is a "type definition", and .NET uses reflection whenever "type descriptor navigation" is needed.
So an enum is a Type, and at runtime, if you want to count its defined entries, you need to use reflection.
I don't see any runtime knowledge needed to get the count of enums,
because as far as I know, the count of enums cannot be changed during
runtime.
Here is the mistake in your reasoning: yes, the count of enums cannot be changed during runtime. However, it can change between runs:
A.dll - version 1
public enum Foo { A }
A.dll - version 2
public enum Foo { Bar, Baz }
Replace version 1 of A.dll with version 2. The count of enums has changed (and the names of the values as well).
Why does C# have to use reflection to get the count of enums? Why
can't it do so during compile time?
It could do so. But then you would run into the problem above. The compile-time calculated value could become incorrect.

Dynamically create compiler error based on API usage?

I did not know how to search for an answer to this, but I am wondering if there is a way in a compiled language such as Java, C#, Scala etc. to force a compiler error where an API is used incorrectly.
Say you have some sort of API you are working with, and you know you need to call a specific setup method X before calling some other method Y. Is it possible to set things up so that the compiler will catch the error, rather than having to do so at run time?
It would be fairly useful for enforcing some code standards or fixing broken APIs. No idea if it's even possible though.
Tools like NDepend (for .NET) or JArchitect (for Java) let you write custom code rules as LINQ queries that can emit a warning or error at analysis time (in the IDE, or during the build process). For example, the following CQLinq code rule enforces that if a method is calling MyMethodA(), it must also call MyMethodB():
warnif count > 0
from m in Application.Methods where
m.IsUsing("MyNamespace.MyClass.MyMethodA()") &&
!m.IsUsing("MyNamespace.MyClass.MyMethodB()")
select m
Generally, you can't create arbitrary custom compile-time errors in most static languages. There are a few exceptions (like #error directives in C and C++, but even then the preprocessor comes strictly before the main compilation so that won't help).
However, you can utilize a language feature specifically designed to catch errors at compile time:
The type system is your friend.
Requiring methods to be called in a particular order is unfriendly to API consumers, and is a code smell in a high-level language.
A trivial solution is to expose a single method that performs both operations in the correct order. More generally (e.g. if the 2nd method doesn't have to be called), have the 2nd method call the 1st method at the top. But presumably you want the calls to be under the control of the API consumer.
In that case, refactor your code so that it is not possible to call methods in the wrong order without creating a type error. For example, suppose you're writing a regex library where regexes have to be compiled before matching. In Scala, refactor:
class Regex(pattern : String) {
private[this] var compiled : Option[CompiledData] = None
def compile() {
// do stuff
this.compiled = ...
}
def search(s : String) : MatchResult = compiled match {
case Some(c) => ... // match the string
case None => throw new IllegalStateException("must compile regex first")
}
}
to
class Regex(pattern : String) {
def compile : CompiledRegex = {
// do stuff
new CompiledRegex(...)
}
}
class CompiledRegex(c : CompiledData) {
def search(s : String) : MatchResult = {
... // match the string
}
}
Since the search method is now only available on CompiledRegex, and CompiledRegex is obtained by calling Regex.compile, it's impossible to call search before that action is made valid by compiling the regex.
This setup also helps out API consumers, because if the user needs an object of type MatchResult and has already typed new Regex("[abc123]*"), a good IDE can autocomplete or suggest the needed method .compile; that would not be possible with the original setup.
(In this example, it also no longer needs mutable state, which is often avoided in Scala.)
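For readers coming from C#, here is a rough sketch of the same refactoring; all names are illustrative, not a real API:
public sealed class RegexSource
{
    private readonly string pattern;
    public RegexSource(string pattern) { this.pattern = pattern; }

    // The only way to obtain a CompiledRegex is to compile first.
    public CompiledRegex Compile()
    {
        // ... build the compiled representation ...
        return new CompiledRegex(pattern);
    }
}

public sealed class CompiledRegex
{
    private readonly string compiled;
    internal CompiledRegex(string compiled) { this.compiled = compiled; }

    // Search exists only on CompiledRegex, so "search before compile" is a type error.
    public bool Search(string input)
    {
        return input.Contains(compiled); // placeholder matching logic
    }
}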
Static assertions
As an alternative solution, some languages (I think D and C++11, though not the ones you listed) support static assertions.
The usage of specific functions is on another level than the compiler's operation, therefore I don't think that a compiler should do things like that.
On the other hand, compilers are commonly integrated into IDEs, so why not put that into an IDE module instead? Most IDEs allow pre-compile operations.
You could write some sort of checking tool (based on CodeDOM or something similar) to test for specific function usage and, based on that check result, abort the compiler run if the quality criteria are not met.
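As a sketch of such a pre-compile checking tool, here is one way it could look using Roslyn's syntax API rather than CodeDOM; the method names MethodX/MethodY and the purely text-based matching are assumptions for illustration, not a real rule set:
// Requires the Microsoft.CodeAnalysis.CSharp NuGet package.
using System;
using System.IO;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class ApiUsageCheck
{
    // A non-zero exit code can be used to fail the build step that runs this tool.
    static int Main(string[] args)
    {
        var root = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0])).GetRoot();

        var offenders = root.DescendantNodes()
            .OfType<MethodDeclarationSyntax>()
            .Where(m =>
            {
                var calls = m.DescendantNodes()
                             .OfType<InvocationExpressionSyntax>()
                             .Select(i => i.Expression.ToString())
                             .ToList();
                // Naive, text-based rule: calling MethodY requires also calling MethodX.
                return calls.Any(c => c.EndsWith("MethodY")) &&
                       !calls.Any(c => c.EndsWith("MethodX"));
            })
            .ToList();

        foreach (var m in offenders)
            Console.Error.WriteLine($"error: {m.Identifier.Text} calls MethodY without calling MethodX");

        return offenders.Count == 0 ? 0 : 1;
    }
}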

Automatically exchange explicit type with var-keyword

I want to automatically remove all explicit types and exchange them with the var keyword in a big solution, e.g. instead of
int a = 1;
I want to have:
var a = 1;
This is just cosmetic; the code in the solution works perfectly fine. I just want to have things consistent, as I started out using explicit types but later on used the var keyword.
I'm guessing I would have to write some sort of code parser - sounds a little cumbersome. Does anybody know an easy solution to this?
Cheers,
Chris
This isn't an answer per se, but it's too long for a comment.
You should strongly consider not doing this. There's no stylistic concern with mixing explicit and inferential typing (you should infer types when you need to, either when using anonymous types or when it makes the code easier to read), and there are plenty of potential issues you'll encounter with this:
Declarations without assignment are ineligible
Declarations that are assigned to null are ineligible
Declarations that are of a supertype but initialized to an instance of a subtype (or compatible but different type) would change their meaning.
For example:
object foo = "test";
...
foo = 2;
Obviously, this is a simple (and unlikely) example, but changing foo from object to var would result in foo being typed as a string instead of object, and would change the semantics of the code (it wouldn't even compile in this case, but you could easily run into more difficult to find scenarios where it changes overload resolution but doesn't produce a compile-time error).
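A small, self-contained illustration of that overload-resolution point; the Print overloads are hypothetical:
using System;

class OverloadDemo
{
    static void Print(object o) { Console.WriteLine("object overload"); }
    static void Print(string s) { Console.WriteLine("string overload"); }

    static void Main()
    {
        object a = "hello";
        Print(a);         // prints "object overload": a is statically typed as object

        var b = "hello";  // b is inferred as string
        Print(b);         // prints "string overload": same initializer, different overload
    }
}
Both versions compile, so a blind object-to-var replacement would silently change which overload is called.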
In other words, don't do this, please.
Firstly, this is probably not such a good idea. There is no advantage to var over int; many declarations will be almost as simple.
But if you must...
A partly manual solution is to turn ReSharper's "Use var" hint into a warning and get it to fix them all up. I don't know if ReSharper will do it en masse, but I often rifle through a badly-done piece of third-party code with a rapid sequence of Alt+PgDn, Alt+Enter.
This has the significant advantage that ReSharper respects the semantics of your code. It won't replace types indiscriminately, and I'm pretty sure it will only make changes that don't affect the meaning of your program. E.g.: It won't replace object o = "hello"; (I think; I'm not in front of VS to check this).
Look into Lex & Yacc. You could combine that with a perl or awk script to mechanically edit your source.
You could also do this in emacs, using CEDET. It parses code modules and produces a table of its code analysis.
In either case you will need to come up with an analysis of the code that describes... class declarations (class name, parent types, start and end points), method declarations (similar), variable declarations, and so on. Then you will write some code (perl, awk, powershell, elisp, whatever) that walks the table, and does the replace on each appropriate variable declaration.
I'd be wary of doing this in an automated fashion. There are places where this may actually change the semantics of the program or introduce errors. For example,
IEnumerable<string> list = MethodThatReturnsListType();
or
string foo = null;
if (!dict.TryGetValue( "bar", out foo ))
{
foo = "default";
}
Since these aren't errors, I would simply replace them as you touch the code for other reasons. That way you can inspect the surrounding code and make sure you aren't changing the semantics and avoid introducing errors that need to be fixed.
What about search/replace in the Visual Studio IDE?
For example, search for 'int ' and replace it with 'var '.

What is the point of a constant in C#

Can anyone tell what is the point of a constant in C#?
For example, what is the advantage of doing
const int months = 12;
as opposed to
int months = 12;
I get that constants can't be changed, but then why not just... not change its value after you initialize it?
If the compiler knows that a value is constant and will never change, it can compile the value directly into your program.
If you declare pi to be a constant, then every time it sees pi / 2 the compiler can do the computation and insert 1.57... directly into the compiled code. If you declare pi as a variable, then every time your program uses pi / 2 the computer has to read the pi variable and divide it by 2, which is obviously slower.
I should also add that C# has readonly, which is for values that the compiler cannot compute but which cannot change for the duration of your program's execution. For example, if you wanted a constant ProgramStartTime, you would have to declare it as readonly DateTime ProgramStartTime = DateTime.Now; because it has to be evaluated when the program starts.
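A minimal sketch of that const / readonly distinction (the names are illustrative):
using System;

class Config
{
    public const double Pi = 3.14159;                                 // baked in at compile time
    public static readonly DateTime ProgramStartTime = DateTime.Now;  // evaluated once, at run time
}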
Finally, you can create a read-only property by giving it a getter but no setter, like this:
int Months { get { return 12; } } but being a property it doesn't have to have the same value every time you read it, like this:
int DaysInFebruary { get { return IsLeapYear ? 29 : 28; } }
The difference between "can't change" and "won't change" only really becomes apparent when one of the following situations obtains:
other developers begin working on your code,
third-parties begin using your code, or
a year passes and you return to that code.
Very similar questions arise when talking about data accessibility. The idea is that you want your code to provide only as much flexibility as you intend, because otherwise someone (possibly you) will come along and do something you did not intend, which leads to bugs!
If you never make mistakes, nobody on any team you work with ever makes mistakes, you never forget the exact purpose of a variable you've defined even after coming back to code you haven't looked at in months or years, and you and everyone you work with 100% reliably recognizes, understands and follows your intention of never changing a const value when you haven't bothered to use a built-in language construct that both clearly indicates and enforces constness, then no, there's no point.
If any one of those things ever turns out not to be the case, that's the point.
As for me, I find that even when I'm at my most lucid, remembering more than seven things at once is pretty close to impossible, so I'll take all the help I can get to prevent mistakes, especially when the cost is a single keyword. If you're smarter than me, and you never have to work on a team with someone less smart than you, then do whatever you want.
In some cases, there may be some compiler optimizations that can be done based on constness (constant folding, for one, which collapses expressions consisting of constants at compile-time). But usually, that's not the point.
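For example, a constant expression like the following can be folded by the compiler rather than computed at run time (a small illustration, not specific to any codebase):
const int SecondsPerDay = 60 * 60 * 24;   // the compiler stores 86400 directly
const double HalfPi = Math.PI / 2;        // folded to 1.5707963... at compile time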
Keep in mind you may not be the only person using the value. Not only can't you change it, but nobody using your code (as a library, for example) can change it.
And marking it as constant also makes your intent clear.
That's it. You tell the compiler the value can never change, and the compiler can optimise much better knowing that it is immutable.
Using constants gives programmers the advantage of readability over a raw value, like
const double PI = 3.14159;
It also speeds up computation compared to variables, because the value is inserted at compile time rather than read from a register or memory location.
For several reasons:
You want to differentiate certain values that have certain meaning from other variables.
You may later forget you are not meant to change a value, and cause unforeseen behavior.
Your code may be used by other people and they may change it (especially if you're developing a library or an API).
Because everything that can go wrong usually will - so prevent it by making such errors discoverable at compile time rather than at runtime.
Declaring a value 'const' brings the compiler into play to help you enforce personal discipline, by not allowing any changes to that value.
Also, it will catch unexpected errors due to side-effects of passing a value (that you intend to treat as a constant) into a method that takes a 'ref' parameter and could conceivably alter the value unexpectedly.
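A short sketch of that point, with made-up names: a constant cannot be passed by ref at all, so this whole class of accidental modification is rejected at compile time:
using System;

class RefDemo
{
    const int MaxRetries = 3;

    static void Bump(ref int value) { value++; }

    static void Main()
    {
        int retries = MaxRetries;
        Bump(ref retries);          // fine: retries is an ordinary variable, now 4
        // Bump(ref MaxRetries);    // compile-time error: a constant is not a variable
        Console.WriteLine(retries);
    }
}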
Strictly speaking, "const" isn't necessary. Python, for example, has neither "const" nor "private"; you specify your intent with the naming convention of THIS_IS_A_CONSTANT and _this_is_private.
C#, however, has a design philosophy of catching errors at compile time rather than runtime.
Everything said about readability and different semantics for the programmer is true, and you should know all of it.
But constants in C# (rather, in .NET) have very different semantics (in terms of implementation) compared with ordinary variables.
Because a constant value never changes, constants are always considered to be part of the defining type. In other words, constants are always considered to be static members, not instance members. Defining a constant causes the creation of metadata. When code refers to a constant symbol, compilers embed the value in the emitted Intermediate Language (IL) code. These constraints mean that constants don't have a good cross-assembly versioning story, so you should use them only when you know that the value of a symbol will never change.
The "point" is so that you can use a single program-wide variable and have only one spot to change it.
Imagine if you were making a video game that relied on knowing the FPS (frames per second) in 100 different code files. So pretend the FPS was 50... and you had the number "50" in all those files and functions... then you realize that you wanted to make it 60 FPS... instead of changing those 100 files, you would just change this:
const int FRAMES_PER_SECOND = 60;
and in those files / functions, you would use the FRAMES_PER_SECOND constant.
By the way, this has nothing to do with C#... constants are in tons of languages.
It is a case of being explicit about your intention.
If you intend to change the value, then do not use 'const'.
If you do not intend to change the value, then use 'const'.
That way both the compiler and third parties (or you yourself, reading your code after a long time) know what you intended. If you or somebody else makes the mistake of changing the value, the compiler can detect it.
And anyway, using 'const' is not mandatory. If you think you can handle 'constancy' yourself (without wanting the compiler to detect mistakes), then do not use 'const' ;-).
You should define a value as const when you know it will remain constant throughout your application. Once you define a const, its value must be determinable at compile time, and this value is saved into the assembly's metadata. Some more important points about const:
A const can be defined only for types the compiler treats as primitive types.
Constants are always considered to be part of the defining type.
They are always considered to be static members.
Constant values are always embedded directly into the code, so constants don't require any memory to be allocated for them at run time.
Because the value is embedded into the calling code, when somebody changes the value of the const (in the assembly where the const is defined) due to versioning or some other requirement, the users of that DLL have to recompile their own assemblies. This issue can be avoided by using the readonly keyword.
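A hedged sketch of that versioning issue (assembly and member names are hypothetical):
// --- LibraryA.dll, version 1 ---
public static class Limits
{
    public const int MaxUsers = 100;                 // value is copied into every caller's IL
    public static readonly int MaxSessions = 100;    // value is read from LibraryA at run time
}

// --- App.exe, compiled against version 1 ---
// Console.WriteLine(Limits.MaxUsers);    // keeps printing 100 after LibraryA is updated to 200,
//                                        // until App.exe itself is recompiled
// Console.WriteLine(Limits.MaxSessions); // picks up the new value from the replaced DLL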
One of the rules that many programmers follow is: never hard-code any value, except 0.
That means, instead of doing
for(int i=0; i<100; i++) {}
should do
const int loopCount = 100;
....
for (int i=0; i<loopCount; i++) {}
I think using const is a good way to do this. And indeed, there are more reasons for it:
It lets the compiler optimize - memory, performance.
It tells the programmer who follows your work that this is a CONSTANT.
If you need to refactor, or change your mind about the number, you know where to go, and you can make sure no other place in the code changes it.

Why do I need to use break?

I was wondering why C# requires me to use break in a switch statement although fall-through semantics are by definition not allowed. Hence, the compiler could generate the break at the end of each case block and save me the hassle.
However, there is one scenario (which has already been discussed on this site) which I could come up with that might be the reason for the explicit usage of break:
switch (foo) {
case 0:
case 1:
bar();
break;
default:
break;
}
Here, the method bar() is called if foo has either the value 0 or 1.
This option would break, in the truest sense of the word, if the compiler generated break statements by itself. Is this it? Is this the reason why the break is compulsory, or are there any other good reasons?
The question presupposes a falsehood and therefore cannot be answered. The language does NOT require a break at the end of a switch section. The language requires that the statement list of a switch section must have an unreachable end point. "break" is just the most commonly used statement that has this property. It is perfectly legal to end a switch section with "return;", "goto case 2;", "throw new Exception();" or "while (true) {}" (or in some cases, continue, though that's usually a bad idea.)
The reason we require a statement list with an unreachable endpoint is to enforce the "no fall through" rule.
If your question is better stated as "why doesn't the compiler automatically fix my error when I fail to produce a switch section with a statement list with an unreachable end point, by inserting a break statement on my behalf?", then the answer is that C# is not a "make a guess about what the developer probably meant and muddle on through" sort of language. Some languages are -- JScript, for example -- but C# is not.
We believe that C# programmers who make mistakes want to be told about those mistakes so that they can intentionally and deliberately correct them, and that this is a good thing. Our customers tell us that they do not want the compiler to make a guess about what they meant and hope for the best. That's why C# also does not make a guess about where you meant to put that missing semicolon, as JScript does, or silently fail when you attempt to modify a read-only variable, as JScript does, and so on.
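To make that concrete, here is a small made-up example in which no switch section ends with break, yet every statement list has an unreachable end point, so it compiles without complaint:
using System;

class SwitchEndings
{
    static string Describe(int code)
    {
        switch (code)
        {
            case 0:
                return "zero";                            // return ends the section
            case 1:
                goto case 0;                              // explicit jump to another section
            case 2:
                throw new ArgumentException("reserved");  // throw ends the section
            default:
                while (true) { }                          // an infinite loop also qualifies
        }
    }
}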
I suspect that the reason C# requires the developer to place a break or terminal statement at the end of each case is for clarity.
It keeps newcomers to the language from assuming that switch() in C# behaves like switch in C or C++, where fall-through behavior occurs. Only in the case of adjacent empty cases does fall-through occur in C# - which is relatively obvious.
EDIT: Actually, in C# fallthrough is always illegal. What is legal, however, is having a single case associated with two or more labels. Eric Lippert writes at length about this behavior and how it differs from C/C++ switch statements.
You may be interested in reading this article on Eric Lippert's blog.
By making break optional, you open yourself up to bugs like this:
switch (thing_result)
{
case GOOD_THING:
IssueReward();
// UH-OH, missing break!
case BAD_THING:
IssuePunishment();
break;
}
The C# language designers have tried to help programmers avoid pitfalls in the languages that came before it, generally by forcing the programmer to be more explicit (e.g. explicit use of the 'override' or 'new' keyword for implementing or shadowing virtual members).
My understanding of the matter is that it was included to match C++ syntax.
If you didn't need to append a break, there would be a problem:
switch (number)
{
case 0:
case 1:
DoSomething();
}
What happens if number == 0? Is this an empty case which does not do anything, or will DoSomething() be executed?
