Flag enum speed vs booleans? - c#

[Flags]
enum Flaggy { None = 0, A=1, B=2, C=4, D=8}
Flaggy test;
or
bool A, B, C, D;
Is the flagged enum more efficient than the booleans or does it not really matter? In terms of cpu?
EDIT:
Yes, I know that [Flags] doesn't really do anything compared to a non-flags enum aside from changing how .ToString() renders the value and adding some readability. Well, this piece of code is checked 25000+ times per second, so even a micro gain would be worth it. But a Flags enum is nicer to read in the code compared to something like 20 booleans, and with .NET 4.0 the HasFlag() method makes up for the previously annoying checking of the flag values. But a method call instead of an if-check is another micro CPU drain.
But reading the answers that came so quickly I guess it's more a choice of readability than performance.

The flags enum will be backed by a single Int32 in memory, whereas the booleans will be stored as separate Boolean fields (one byte each), so with four flags both occupy roughly the same amount of memory.
In terms of CPU, with the enum you need to perform bitwise operations to extract the values, whereas with the booleans it's a simple if, so I would guess the booleans are marginally faster. But that's a premature optimization you shouldn't be concerned with at all. Both will be fast enough, so pick the one that makes your code more readable.
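For illustration, a minimal sketch of what each check looks like (the class and field names are invented for the example); the bitwise test is a single AND plus a compare, the boolean test a single load plus a compare, and the .NET 4.0 HasFlag() convenience method historically boxes its argument, so the manual bitwise test is the safer choice in hot paths:
[Flags]
enum Flaggy { None = 0, A = 1, B = 2, C = 4, D = 8 }   // the enum from the question

class FlagDemo
{
    Flaggy flags = Flaggy.A | Flaggy.C;
    bool c = true;

    void Check()
    {
        // Flags enum: mask out the bit and compare.
        if ((flags & Flaggy.C) != 0) { /* ... */ }

        // .NET 4.0 convenience method; boxes its argument on older runtimes.
        if (flags.HasFlag(Flaggy.C)) { /* ... */ }

        // Plain boolean: just read the field.
        if (c) { /* ... */ }
    }
}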

First off, AFAIK there is no [Flagged] attribute - the best guess is that you mean [Flags]; see MSDN.
Whether it is faster in terms of CPU time depends very much on what exactly you do with them... but I think this is "micro optimization" (which is usually a bad idea)... run the code with a profiler and see where the bottlenecks really are before guessing where/what to optimize...

I prefer flags over booleans in this case. Sure, the performance difference is negligible, and bit-level optimization is not a good idea anyway because it reduces code readability and maintainability.

Related

Are there any performance benefits in C# discards?

Consider this code:
var (mult, sum) = MultSum(a, b);
and
var (_, sum) = MultSum(a, b);
Question 1.
If I use a discard instead of a variable name, does it have a performance benefit, e.g. by reducing assignment operations?
Question 2.
Is there any way to write the MultSum smart enough so it doesn't calculate the discards!?
If I use a discard instead of a variable name, does it have a performance benefit, e.g. by reducing assignment operations?
In your particular case it is unlikely that there would be a benefit in performance. The tuple that is returned is assigned to temporary storage; you've just not given a name to one part of that storage.
Now, if you had an expression that had discards that were entire values, not fragments of a tuple, then the compiler and the jitter can be smart about not allocating any storage on the short-term pool for the result, or re-using existing storage that was already allocated. Note that by "short-term pool" I effectively mean "activation record on the stack" or "registers". This could, in theory, lead to better register allocation or smaller frames (and therefore better locality of reference) and that in turn could save you entire nanoseconds.
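To make the distinction concrete, here is a small sketch (MultSum here is a hypothetical implementation invented for the example, not code from the question):
using System;

class DiscardDemo
{
    static (int Mult, int Sum) MultSum(int a, int b) => (a * b, a + b);

    static void Main()
    {
        int a = 3, b = 4;

        // Tuple-fragment discard: the whole tuple is still produced and held
        // in temporary storage; "_" merely leaves one slot of it unnamed.
        var (_, sum) = MultSum(a, b);

        // Whole-value discard: here the compiler/jitter is free not to
        // allocate any named storage for the result at all.
        _ = MultSum(a, b);

        Console.WriteLine(sum);
    }
}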
Nano-optimizations are generally not worth it; there is almost always a better bang-for-buck performance problem to attack. But if you think it might be relevant for your scenario, measure it and see. That is the only way to know if there is a relevant performance difference. Get out a nano-scale stopwatch, run the code both ways, and see which one is faster.
The benefit you should be attempting to accrue by using discards is the "make my program easier to understand" benefit. Programmers are expensive; optimize for making your code easy for future programmers to read, understand and modify.
Is there any way to write the MultSum smart enough so it doesn't calculate the discards!?
Yes. Write your program in Haskell. Haskell will avoid performing calculations whose results are never used. C# is not such a language.

Which code-flow pattern is more efficient in C#/.NET?

Consider the situation in which the main logic of a method should only actually run given a certain condition. As far as I know, there are two basic ways to achieve this:
If inverse condition is true, simply return:
public void aMethod()
{
    if (!aBoolean) return;
    // rest of method code goes here
}
or
If original condition is true, continue execution:
public void aMethod()
{
    if (aBoolean)
    {
        // rest of method code goes here
    }
}
Now, I would guess that which of these implementations is more efficient depends on the language it's written in and/or on how if statements, return statements, and possibly method calls are implemented by the compiler/interpreter/VM (depending on the language); so the first part of my question is, is this true?
The second part of my question is: if the answer to the first part is "yes", which of the above code-flow patterns is more efficient specifically in C#/.NET 4.6.x?
Edit:
In reference to Dark Falcon's comment: the purpose of this question is not actually to fix performance issues or optimize any real code I've written; I am just curious about how each piece of each pattern is implemented by the compiler, e.g. for argument's sake, if it were compiled verbatim with no compiler optimizations, which would be more efficient?
TL;DR It doesn't make a difference. Current generations of processors (circa Ivy Bridge and later) don't use a static branch-prediction algorithm that you can reason about anymore, so there is no possible performance gain in using one form or the other.
On most older processors, the static branch-prediction strategy is generally that forward conditional jumps are assumed not to be taken, while backwards conditional jumps are assumed taken. Therefore, there might be a small performance advantage to be gained the first time the code is executed by arranging for the fall-through case to be the most likely, i.e.,
if { expected } else { unexpected }.
But the fact is, this kind of low-level performance analysis makes very little sense when writing in a managed, JIT-compiled language like C#.
You're getting a lot of answers that say readability and maintainability should be your primary concern when writing code. This is regrettably common with "performance" questions, and while it is completely true and unarguable, it mostly skirts the question instead of answering it.
Moreover, it isn't clear why form "A" would be intrinsically more readable than form "B", or vice versa. There are just as many arguments one way or the other—do all parameter validation at the top of the function, or ensure there is only a single return point—and it ultimately gets down to doing what your style guide says, except in really egregious cases where you'd have to contort the code in all sorts of terrible ways, and then you should obviously do what is most readable.
Beyond being a completely reasonable question to ask on conceptual/theoretical grounds, understanding the performance implications also seems like an excellent way to make an informed decision about which general form to adopt when writing your style guide.
The remainder of the existing answers consist of misguided speculation, or downright incorrect information. Of course, that makes sense. Branch prediction is complicated, and as processors get smarter, it only gets harder to understand what is actually happening (or going to happen) under the hood.
First, let's get a couple of things straight. You make reference in the question to analyzing the performance of unoptimized code. No, you don't ever want to do that. It is a waste of time; you'll get meaningless data that does not reflect real-world usage, and then you'll try and draw conclusions from that data, which will end up being wrong (or maybe right, but for the wrong reasons, which is just as bad). Unless you're shipping unoptimized code to your clients (which you shouldn't be doing), then you don't care how unoptimized code performs.
When writing in C#, there are effectively two levels of optimization. The first is performed by the C# compiler when it is generating the intermediate language (IL). This is controlled by the optimization switch in the project settings. The second level of optimization is performed by the JIT compiler when it translates the IL into machine code. This is a separate setting, and you can actually analyze the JITed machine code with optimization enabled or disabled. When you're profiling or benchmarking, or even analyzing the generated machine code, you need to have both levels of optimizations enabled.
But benchmarking optimized code is difficult, because the optimization often interferes with the thing you're trying to test. If you tried to benchmark code like that shown in the question, an optimizing compiler would likely notice that neither one of them is actually doing anything useful and transform them into no-ops. One no-op is equally fast as another no-op—or maybe it's not, and that's actually worse, because then all you're benchmarking is noise that has nothing to do with performance.
The best way to go here is to actually understand, on a conceptual level, how the code is going to be transformed by a compiler into machine code. Not only does that allow you to escape the difficulties of creating a good benchmark, but it also has value above and beyond the numbers. A decent programmer knows how to write code that produces correct results; a good programmer knows what is happening under the hood (and then makes an informed decision about whether or not they need to care).
There has been some speculation about whether the compiler will transform form "A" and form "B" into equivalent code. It turns out that the answer is complicated. The IL will almost certainly be different because it will be a more or less literal translation of the C# code that you actually write, regardless of whether or not optimizations are enabled. But it turns out that you really don't care about that, because IL isn't executed directly. It's only executed after the JIT compiler gets done with it, and the JIT compiler will apply its own set of optimizations. The exact optimizations depend on exactly what type of code you've written. If you have:
int A1(bool condition)
{
    if (condition) return 42;
    return 0;
}
int A2(bool condition)
{
    if (!condition) return 0;
    return 42;
}
it is very likely that the optimized machine code will be the same. In fact, even something like:
void B1(bool condition)
{
    if (condition)
    {
        DoComplicatedThingA();
        DoComplicatedThingB();
    }
    else
    {
        throw new ArgumentException();
    }
}
void B2(bool condition)
{
    if (!condition)
    {
        throw new ArgumentException();
    }
    DoComplicatedThingA();
    DoComplicatedThingB();
}
will be treated as equivalent in the hands of a sufficiently capable optimizer. It is easy to see why: they are equivalent. It is trivial to prove that one form can be rewritten in the other without changing the semantics or behavior, and that is precisely what an optimizer's job is.
But let's assume that they did give you different machine code, either because you wrote complicated enough code that the optimizer couldn't prove that they were equivalent, or because your optimizer was just falling down on the job (which can sometimes happen with a JIT optimizer, since it prioritizes speed of code generation over maximally efficient generated code). For expository purposes, we'll imagine that the machine code is something like the following (vastly simplified):
C1:
cmp condition, 0 // test the value of the bool parameter against 0 (false)
jne ConditionWasTrue // if true (condition != 0), jump elsewhere;
// otherwise, fall through
call DoComplicatedStuff // condition was false, so do some stuff
ret // return
ConditionWasTrue:
call ThrowException // condition was true, throw an exception and never return
C2:
cmp condition, 0 // test the value of the bool parameter against 0 (false)
je ConditionWasFalse // if false (condition == 0), jump elsewhere;
// otherwise, fall through
call DoComplicatedStuff // condition was true, so do some stuff
ret // return
ConditionWasFalse:
call ThrowException // condition was false, throw an exception and never return
That cmp instruction is equivalent to your if test: it checks the value of condition and determines whether it's true or false, implicitly setting some flags inside the CPU. The next instruction is a conditional branch: it branches to the specified location/label based on the values of one or more flags. In this case, je is going to jump if the "equals" flag is set, while jne is going to jump if the "equals" flag is not set. Simple enough, right? This is exactly how it works on the x86 family of processors, which is probably the CPU for which your JIT compiler is emitting code.
And now we get to the heart of the question that you're really trying to ask; namely, does it matter whether we execute a je instruction to jump if the comparison set the equal flag, or whether we execute a jne instruction to jump if the comparison did not set the equal flag? Again, unfortunately, the answer is complicated, but enlightening.
Before continuing, we need to develop some understanding of branch prediction. These conditional jumps are branches to some arbitrary section in the code. A branch can either be taken (which means the branch actually happens, and the processor begins executing code found at a completely different location), or it can be not taken (which means that execution falls through to the next instruction as if the branch instruction wasn't even there). Branch prediction is very important because mispredicted branches are very expensive on modern processors with deep pipelines that use speculative execution. If it predicts right, it continues uninterrupted; however, if it predicts wrong, it has to throw away all of the code that it speculatively executed and start over. Therefore, a common low-level optimization technique is replacing branches with clever branchless code in cases where the branch is likely to be mispredicted. A sufficiently smart optimizer would turn if (condition) { return 42; } else { return 0; } into a conditional move that didn't use a branch at all, regardless of which way you wrote the if statement, making branch prediction irrelevant. But we're imagining that this didn't happen, and you actually have code with a conditional branch—how does it get predicted?
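(As an aside, a hedged sketch of that branchless idea in C# terms: you cannot force the JIT to emit a conditional move, but a value-selecting conditional written as a single expression is the shape an optimizer is most likely to turn into one.)
// Branchy form: two paths joined by a conditional jump.
static int WithBranch(bool condition)
{
    if (condition) { return 42; }
    return 0;
}

// Same semantics as a single expression; an optimizer may (or may not)
// emit a cmov-style, branch-free instruction sequence for this.
static int Branchless(bool condition)
{
    return condition ? 42 : 0;
}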
How branch prediction works is complicated, and getting more complicated all the time as CPU vendors continue to improve the circuitry and logic inside of their processors. Improving branch prediction logic is a significant way that hardware vendors add value and speed to the things they're trying to sell, and every vendor uses different and proprietary branch-prediction mechanisms. Worse, every generation of processor uses slightly different branch-prediction mechanisms, so reasoning about it in the "general case" is exceedingly difficult. Static compilers offer options that allow you to optimize the code they generate for a particular generation of microprocessor, but this doesn't generalize well when shipping code to a large number of clients. You have little choice but to resort to a "general purpose" optimization strategy, although this usually works pretty well. The big promise of a JIT compiler is that, because it compiles the code on your machine right before you use it, it can optimize for your specific machine, just like a static compiler invoked with the perfect options. This promise hasn't exactly been reached, but I won't digress down that rabbit hole.
All modern processors have dynamic branch prediction, but how exactly they implement it varies. Basically, they "remember" whether a particular (recent) branch was taken or not taken, and then predict that it will go the same way the next time. There are all kinds of pathological cases that you can imagine here, and there are, correspondingly, all kinds of refinements to the branch-prediction logic that help to mitigate the possible damage. Unfortunately, there isn't really anything you can do yourself when writing code to mitigate this problem, except getting rid of branches entirely, which isn't even an option available to you when writing in C# or other managed languages. The optimizer will do whatever it will; you just have to cross your fingers and hope that it is the most optimal thing. In the code we're considering, then, dynamic branch prediction is basically irrelevant and we won't talk about it any more.
What is important is static branch prediction—what prediction is the processor going to make the first time it executes this code, the first time it encounters this branch, when it doesn't have any real basis on which to make a decision? There are a bunch of plausible static prediction algorithms:
Predict all branches are not taken (some early processors did, in fact, use this).
Assume "backwards" conditional branches are taken, while "forwards" conditional branches are not taken. The improvement here is that loops (which jump backwards in the execution stream) will be correctly predicted most of the time. This is the static branch-prediction strategy used by most Intel x86 processors, up to about Sandy Bridge.
Because this strategy was used for so long, the standard advice was to arrange your if statements accordingly:
if (condition)
{
    // most likely case
}
else
{
    // least likely case
}
This possibly looks counter-intuitive, but you have to go back to what the machine code looks like that this C# code will be transformed into. Compilers will generally transform the if statement into a comparison and a conditional branch into the else block. This static branch prediction algorithm will predict that branch as "not taken", since it's a forward branch. The if block will just fall through without taking the branch, which is why you want to put the "most likely" case there.
If you get into the habit of writing code this way, it might have a performance advantage on certain processors, but it's never enough of an advantage to sacrifice readability. Especially since it only matters the first time the code is executed (after that, dynamic branch prediction kicks in), and executing code for the first time is always slow in a JIT-compiled language!
Always use the dynamic predictor's result, even for never-seen branches.
This strategy is pretty strange, but it's actually what most modern Intel processors use (circa Ivy Bridge and later). Basically, even though the dynamic branch-predictor may have never seen this branch and therefore may not have any information about it, the processor still queries it and uses the prediction that it returns. You can imagine this as being equivalent to an arbitrary static-prediction algorithm.
In this case, it absolutely does not matter how you arrange the conditions of an if statement, because the initial prediction is essentially going to be random. Some 50% of the time, you'll pay the penalty of a mispredicted branch, while the other 50% of the time, you'll benefit from a correctly predicted branch. And that's only the first time—after that, the odds get even better because the dynamic predictor now has more information about the nature of the branch.
This answer has already gotten way too long, so I'll refrain from discussing static prediction hints (implemented only in the Pentium 4) and other such interesting topics, bringing our exploration of branch prediction to a close. If you're interested in more, examine the CPU vendor's technical manuals (although most of what we know has to be empirically determined), read Agner Fog's optimization guides (for x86 processors), search online for various white-papers and blog posts, and/or ask additional questions about it.
The takeaway is probably that it doesn't matter, except on processors that use a certain static branch-prediction strategy, and even there, it hardly matters when you're writing code in a JIT-compiled language like C# because the first-time compilation delay exceeds the cost of a single mispredicted branch (which may not even be mispredicted).
Same issue when validating parameters to functions.
It's much cleaner to act like a night-club bouncer, kicking the no-hopers out as soon as possible.
public void aMethod(SomeParam p)
{
    if (!aBoolean || p == null)
        return;

    // Write code in the knowledge that everything is fine
}
Letting them in only causes trouble later on.
public void aMethod(SomeParam p)
{
    if (aBoolean)
    {
        if (p != null)
        {
            // Write code, but now you're indented
            // and other if statements will be added later
        }
        // Later on, someone else could add code here by mistake.
    }
    // or here...
}
The C# language prioritizes safety (bug prevention) over speed. In other words, almost everything has been slowed down to prevent bugs, one way or another.
If you need speed so badly that you start worrying about if statements, then perhaps a faster language would suit your purposes better, possibly C++
Compiler writers can and do make use of statistics to optimize code, for example "else clauses are only executed 30% of the time".
However, the hardware guys probably do a better job of predicting execution paths. I would guess that these days, the most effective optimizations happen within the CPU, with their L1 and L2 caches, and compiler writers don't need to do a thing.
I am just curious about how each piece of each pattern is implemented by the compiler, e.g. for argument's sake, if it was compiled verbatim with no compiler optimizations, which would be more efficient?
The best way to test efficiency in this way is to run benchmarks on the code samples you're concerned with. With C# in particular it is not going to be obvious what the JIT is doing with these scenarios.
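For instance, a minimal sketch using BenchmarkDotNet (a tool choice of mine, not something the answer names; it assumes the NuGet package is referenced, the method bodies are stand-ins, and as noted elsewhere a good optimizer may collapse work this trivial):
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class GuardClauseBenchmarks
{
    private readonly bool aBoolean = true;
    private int counter;

    [Benchmark]
    public void EarlyReturn()
    {
        if (!aBoolean) return;
        counter++;   // stand-in for the rest of the method body
    }

    [Benchmark]
    public void WrappedInIf()
    {
        if (aBoolean)
        {
            counter++;   // stand-in for the rest of the method body
        }
    }

    public static void Main() => BenchmarkRunner.Run<GuardClauseBenchmarks>();
}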
As a side note, I'll throw in a +1 for the other answers that point out that efficiency isn't determined only at the compiler level - code maintainability will buy you orders of magnitude more efficiency than you'll ever get from this specific sort of pattern choice.
As Dark Falcon mentioned, you should not be concerned with micro-optimization of little bits of code; the compiler will most probably optimize both approaches to the same thing.
Instead, you should be very concerned about your program's maintainability and ease of reading.
From this perspective you should choose B for two reasons:
It only has one exit point (just one return)
The if block is surrounded by curly braces
Edit: But hey! As noted in the comments, that is just my opinion and what I consider good practice.

Which is better in accessing a property value?

Which is better in accessing a property value?
Accessing like this
propertyobjA.objB.Prop1
propertyobjA.objB.Prop2
or assign to var
var objB = propertyobjA.objB;
then call objB.Prop1 and objB.Prop2
Which one improves performance in c#?
To be perfectly honest, the answer is likely that the second will be faster, but I can pretty much guarantee that it will not matter in the slightest. You should be careful of thinking too hard about optimisation too early. 99% of all performance issues are down to much larger issues, such as hitting a database too frequently, not trivial issues like this. Even if there were a tiny difference between the two cases, unless this is some of the most time-critical software on the planet, what matters is readability (not that either is hard to read in this case), not which is faster.
It depends on what objB is. If the getter is calculating something (which it shouldn't, but it can), then of course assigning the result to a variable will yield better performance.
Another note, you should avoid having dependencies on sub properties of a variable, since you are putting a higher coupling between the classes.
I think this won't make a big difference performance-wise (the second alternative might be a bit faster). But this is not where your performance problems (if any) come from.
UPDATE: Thinking about it, the value of propertyobjA.objB could change between getting Prop1 and Prop2, so the two alternatives cannot be considered the same code.
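A hedged sketch of that point, with hypothetical types invented for the example (imagine another thread, or a getter with side effects, swapping objB between the two reads):
class Inner { public int Prop1 { get; set; } public int Prop2 { get; set; } }
class Outer { public Inner objB { get; set; } }

static class PropertyReadDemo
{
    static void Demo(Outer propertyobjA)
    {
        // Two reads of the property: objB could be replaced in between, so
        // Prop1 and Prop2 may come from different Inner instances.
        var p1 = propertyobjA.objB.Prop1;
        var p2 = propertyobjA.objB.Prop2;

        // One read: both values are guaranteed to come from the same instance.
        var objB = propertyobjA.objB;
        var q1 = objB.Prop1;
        var q2 = objB.Prop2;
    }
}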
The impact to performance largely depends on the implementation of the propertyObjA.objB property getter. For instance, if it is simply implemented as:
public Foo objB { get { return this._objB; } }
Then calling that twice will have a negligible impact on performance.
If, however, that same property did something computationally expensive, then your second suggestion would perform better.
That being said, the framework guidelines state that you should not use property getters to hide potentially computationally expensive operations, preferring a method call instead, e.g.:
public Foo ComputeB();
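A hedged sketch of that guideline (Container, Foo and the simulated cost are invented for illustration): cheap state stays behind a property, while expensive work is exposed as a method so callers know to cache the result themselves:
class Foo { }

class Container
{
    private readonly Foo _objB = new Foo();

    // Cheap accessor: fine to call repeatedly.
    public Foo objB { get { return _objB; } }

    // Expensive operation: a method name signals the cost to callers,
    // who can hoist the result into a local if they need it more than once.
    public Foo ComputeB()
    {
        System.Threading.Thread.Sleep(100);   // stand-in for real work
        return new Foo();
    }
}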
You really ought not to concern yourself with things like that when writing code in a higher-level language such as C#.
Modern compilers for languages such as C# and Java are extremely sophisticated and will perform all kinds of optimizations on your code. The end result for you as a developer is that you will never see a difference in performance when writing a particular trivial piece of code one way or the other. The compiler will pick the most optimal way.
Everything else is down to preference. If you like to chain several property accesses, that's fine. If you like to assign an intermediate result to a variable to improve readability of your code, that's fine too.

Faster assignment or check for bool values

The question is simple: which is faster, CalledOften1 or CalledOften2?
class MyTest
{
    public bool test = false;
    void CalledOften1()
    {
        if (!test) test = true;
        DoSomething();
    }
    void CalledOften2()
    {
        test = true;
        DoSomething();
    }
}
Is the compiler optimized (if possible) to avoid future assignments of test if it's already true?
UPDATE:
This question is just for information; I will not use the if (bla) style if I can simply write test = true, as I prefer code readability.
I prefer to measure for these sorts of questions rather than guess:
CalledOften1: 52 million operations per second
CalledOften2: 53 million operations per second
So they are nearly the same. If anything, the simpler method is also the faster.
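(The exact harness isn't shown in the answer; a minimal sketch of the kind of Stopwatch loop that produces numbers like these, with an arbitrary iteration count, might look roughly like this. As noted elsewhere, the JIT can collapse trivial loops, so treat such numbers with care.)
using System;
using System.Diagnostics;

class Harness
{
    static bool test;

    static void Main()
    {
        const int iterations = 100000000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (!test) test = true;       // CalledOften1 style
        }
        sw.Stop();
        Console.WriteLine("check-then-set: {0:N0} ops/sec", iterations / sw.Elapsed.TotalSeconds);

        test = false;
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            test = true;                  // CalledOften2 style
        }
        sw.Stop();
        Console.WriteLine("plain set:      {0:N0} ops/sec", iterations / sw.Elapsed.TotalSeconds);
    }
}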
This is a perfect example of premature optimization.
If you want to set test to true every time, just set it. Don't complicate your code for a theorized speedup.
That being said, the second example executes fewer instructions, and along with being simpler and more maintainable, it is most likely faster because it avoids the branch. A single assignment of a bool is a very fast operation. If you really need to know how much faster it may be, profile it yourself. However, I suspect that either would be fast enough in any case.
I would expect the second version to be slightly faster, given that it doesn't involve any branching. It also expresses the intention of "make sure the variable is true, whatever it was before" more clearly IMO. However:
I doubt that it's significant
Any number of actual changes in context could make the results change (including your code, or the version of the framework you're running against)
Write the clearest code first, and optimize later
Benchmark this against your real code, under realistic conditions before you decide to change anything
The compiler only optimizes what is definite at compile time. test is changed at runtime, so the answer is no. The compiler could optimize the check if you were testing against a constant. CalledOften1 is faster, but the margin is so small that you would not notice. This is the kind of micro-optimisation you should avoid.
If I had to guess, I would say that CalledOften2 is more optimized, as there is no logic test operation done.
In the end, if you are looking at this level of optimization, then your application will probably go as fast as it can. Any performance gain you get out of this type of optimization will likely never be noticed by anyone.
My two cents,
Brian
Premature optimization is the root of all evil. Use the one that expresses your intent most clearly.
(I'm guessing a read+branch is going to be more expensive than just a write, but don't really know the CLR. The important thing is that computers are increasing in speed exponentially, and programmers aren't. Algorithmic improvements in performance bottlenecks are worth exploring, barely measurable constant-time improvements for their own sake aren't.)

How to get optimization from a "pure function" in C#?

If I have the following function, it is considered pure in that it has no side effects and will always produce the same result given the same input x.
public static int AddOne(int x) { return x + 1; }
As I understand it, if the runtime understood the functional purity it could optimize execution so that return values wouldn't have to be re-calculated.
Is there a way to achieve this kind of runtime optimization in C#? And I assume there is a name for this kind of optimization. What's it called?
Edit: Obviously, my example function wouldn't have a lot of benefit from this kind of optimization. The example was given to express the type of purity I had in mind rather than the real-world example.
As others have noted, if you want to save on the cost of re-computing a result you've already computed, then you can memoize the function. This trades increased memory usage for increased speed -- remember to clear your cache occasionally if you suspect that you might run out of memory should the cache grow without bound.
However, there are other optimizations one can perform on pure functions than memoizing their results. For example, pure functions, having no side effects, are usually safe to call on other threads. Algorithms which use a lot of pure functions can often be parallelized to take advantage of multiple cores.
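A small sketch of that second benefit (illustrative only; for work as cheap as AddOne the parallel overhead would dominate): because the function has no side effects, it is safe to fan calls out across cores with PLINQ.
using System;
using System.Linq;

class Program
{
    public static int AddOne(int x) { return x + 1; }

    static void Main()
    {
        // No call depends on shared mutable state, so no locking is needed.
        int[] results = Enumerable.Range(0, 1000)
                                  .AsParallel()
                                  .Select(AddOne)
                                  .ToArray();

        Console.WriteLine(results.Sum());
    }
}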
This area will become increasingly important as massively multi-core machines become less expensive and more common. We have a long-term research goal for the C# language to figure out some way to take advantage of the power of pure functions (and impure but "isolated" functions) in the language, compiler and runtime. But doing so involves many difficult problems, problems about which there is little consensus in industry or academia as to the best approach. Top minds are thinking about it, but do not expect any major results any time soon.
If the calculation were a costly one, you could cache the result in a dictionary:
static Dictionary<int, int> cache = new Dictionary<int, int>();
public static int AddOne(int x)
{
    int result;
    if (!cache.TryGetValue(x, out result))
    {
        result = x + 1;
        cache[x] = result;
    }
    return result;
}
of course, the dictionary lookup in this case is more costly than the add :)
There's another much cooler way to do functional memoization explained by Wes Dyer here: http://blogs.msdn.com/wesdyer/archive/2007/01/26/function-memoization.aspx - if you do a LOT of this caching, then his Memoize function might save you a lot of code...
I think you're looking for functional memoization
The technique you are after is memoization: cache the results of execution, keyed off the arguments passed in to the function, in an array or dictionary. Runtimes do not tend to apply it automatically, although there are certainly cases where they would. Neither C# nor .NET applies memoization automatically. You can implement memoization yourself - it's rather easy - but doing so is generally useful only for slower pure functions, where you tend to repeat calculations and where you have enough memory.
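For reference, a hedged sketch of a general-purpose memoizer along the lines Wes Dyer describes above (single-threaded; concurrent callers would need a ConcurrentDictionary and some care):
using System;
using System.Collections.Generic;

static class Memoizer
{
    // Wraps a pure single-argument function with a result cache.
    public static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> f)
    {
        var cache = new Dictionary<TArg, TResult>();
        return arg =>
        {
            TResult result;
            if (!cache.TryGetValue(arg, out result))
            {
                result = f(arg);
                cache[arg] = result;
            }
            return result;
        };
    }
}

// Usage:
// var addOne = Memoizer.Memoize<int, int>(x => x + 1);
// addOne(5);   // computed
// addOne(5);   // returned from the cache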
This will probably be inlined (aka inline expansion) by the compiler ...
Just make sure you compile your code with the "Optimize Code" flag set (in VS : project properties / build tab / Optimize Code)
The other thing you can do is to cache the results (aka memoization). However, there is a huge initial performance hit due to your lookup logic, so this is interesting only for slow functions (ie not an int addition).
There is also a memory impact, but this can be managed through a clever use of weak references.
As I understand it, if the runtime understood the functional purity it could optimize execution so that return values wouldn't have to be re-calculated.
In your example, the runtime WILL have to compute the result, unless x is known at compile time. In that case, your code will be further optimized through the use of constant folding
How could the compiler do that? How does it know what values of x are going to be passed in at runtime?
and re: other answers that mention inlining...
My understanding is that inlining (as an optimization) is warranted for small functions that are used only once (or only a very few times...) not because they have no side effects...
A compiler can optimize this function through a combination of inlining (replacing a function call with the body of that function at the call site) and constant propagation (replacing an expression with no free variables with the result of that expression). For example, in this bit of code:
AddOne(5);
AddOne can be inlined:
5 + 1;
Constant propagation can then simplify the expression:
6;
(Dead code elimination can then simplify this expression even further, but this is just an example).
Knowing that AddOne() has no side effects might also enable a compiler to perform common subexpression elimination, so that:
AddOne(3) + AddOne(3)
may be transformed to:
int x = AddOne(3);
x + x;
or by strength reduction, even:
2*AddOne(3);
There is no way to command the c# JIT compiler to perform these optimizations; it optimizes at its own discretion. But it's pretty smart, and you should feel comfortable relying on it to perform these sorts of transformations without your intervention.
Another option is to use the Fody plugin MethodCache (https://github.com/Dresel/MethodCache):
you can decorate methods that should be cached. When using this you should, of course, take into consideration all the comments mentioned in the other answers.
