Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I was just wondering whether there are any downsides to using the in modifier on method arguments, introduced in C# 7.2.
I know it is recommended for passing structs, and they say it won't improve performance by much, but from what I can tell it has no negative impact on the code and might even help with stack overflow exceptions in deep recursion.
Does anyone know of a good reason why every method parameter should not be marked as in?
I found explanations of why people recommend avoiding ref and out, but those arguments don't apply here.
Since this is a performance-related question (and in was, after all, introduced for performance reasons), it's hard to give a single correct answer without actually measuring performance on a concrete code sample.
But...
We know that in passes a reference, which is 32 bits wide in a 32-bit (x86) process and 64 bits wide in a 64-bit (x64) process.
Now consider a structure like
struct Token
{
    char x;
}
On x64, copying this one-byte structure on the stack will likely execute faster than creating a 64-bit reference to the same data.
Plus, do not forget that in implies read-only semantics on the instance, which goes beyond performance reasoning and targets your design directly. So while some of the performance arguments may be debatable, from a design point of view in has distinct and clear semantics that target specific use cases.
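To make the trade-off concrete, here is a minimal sketch (the struct names and field layout are illustrative, not from the question): a large struct where in avoids copying several fields, next to a tiny struct where a plain by-value copy is likely just as cheap as the pointer-sized reference in would pass.

```csharp
using System;

// Marking the struct 'readonly' matters: with a non-readonly struct,
// member access through an 'in' parameter can force defensive copies.
public readonly struct BigValue
{
    public readonly double A, B, C, D;
    public BigValue(double a, double b, double c, double d)
    {
        A = a; B = b; C = c; D = d;
    }
}

public struct Token
{
    public char X;
}

public static class Demo
{
    // 'in' passes a read-only reference: the 32 bytes of BigValue are not
    // copied, and the callee cannot reassign or mutate the argument.
    public static double Sum(in BigValue v) => v.A + v.B + v.C + v.D;

    // For a one-char struct, copying by value is likely as fast as (or
    // faster than) the extra indirection 'in' introduces.
    public static char First(Token t) => t.X;

    public static void Main()
    {
        var big = new BigValue(1, 2, 3, 4);
        Console.WriteLine(Sum(in big));                  // prints 10
        Console.WriteLine(First(new Token { X = 'x' })); // prints x
    }
}
```

Note that at the call site the in keyword is optional; the compiler will pass by reference either way once the parameter is declared in.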
Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
I am working on adding threading to my C# program. Each thread requires access to the same array, but only needs to read from it, not write to it. Should this array be deep copied for each thread?
The reason I think this might matter (from my very limited knowledge of threading) is that threads on different CPU cores could then store copies of the array in their own caches instead of constantly requesting data from the cache of the single core where the array is stored; but perhaps the compiler or something else will optimise away this inefficiency?
Any insight would be much appreciated.
Since you haven't specified the hardware architecture you are running on, I'm going to assume it is either an Intel or AMD x64 processor, in which case I recommend trusting the processor to handle this situation correctly: each core's cache will independently keep the lines of the array it actually reads. By creating multiple copies that the runtime cannot know are duplicates, you would force the processor to use more memory and spread the available cache space over a larger working set, lessening its effectiveness.
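A minimal sketch of the no-copy approach described above (the array contents and worker count are made up for illustration): several parallel workers read the same array instance, which is safe because nothing writes to it after initialization.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class SharedReadDemo
{
    // Sums a shared, read-only array across 'workers' parallel readers.
    // No copying: every worker reads the same array instance.
    public static long SumShared(int[] data, int workers)
    {
        long[] partials = new long[workers];
        Parallel.For(0, workers, i =>
        {
            long sum = 0;
            // Strided reads; since nobody writes to 'data', there is no
            // data race and each core simply caches the lines it touches.
            for (int j = i; j < data.Length; j += workers) sum += data[j];
            partials[i] = sum; // adjacent writes here can false-share,
                               // which is harmless for correctness
        });
        return partials.Sum();
    }

    public static void Main()
    {
        int[] data = Enumerable.Range(1, 1_000_000).ToArray();
        Console.WriteLine(SumShared(data, 4)); // prints 500000500000
    }
}
```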
Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I bet this question has been asked before, but I can't really find what I'm looking for, so excuse me in advance :)
Is there a difference (programmatically speaking OR overhead speaking) between this:
var data = GetProducts();
GetAllData(data);
and this:
GetAllData(GetProducts());
What are the pros and cons of both approaches, if any? Is there a more elegant/correct way of achieving this (say, with Func<>)?
thanks in advance,
Rotem
Doing it in two lines makes it easier to debug, because you can break on the second line and observe the value assigned on the first line.
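A quick sketch of the debugging point (GetProducts and GetAllData are stand-ins here, since the question never shows their bodies):

```csharp
using System;
using System.Collections.Generic;

public static class StyleDemo
{
    // Hypothetical implementations, just to make both call styles runnable.
    public static List<string> GetProducts() => new List<string> { "a", "b" };
    public static int GetAllData(List<string> products) => products.Count;

    public static void Main()
    {
        // Two-line form: a breakpoint on the second line lets you
        // inspect 'data' before it is passed along.
        var data = GetProducts();
        Console.WriteLine(GetAllData(data));          // prints 2

        // One-line form: identical behavior, but no intermediate
        // value to inspect in the debugger.
        Console.WriteLine(GetAllData(GetProducts())); // prints 2
    }
}
```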
The compiler will optimize them both into the same CIL anyway, so it's not a matter of efficiency. It's all a matter of preference.
There is no functional difference, and when the code is translated into machine language (or byte code or whatever), it will result in more or less the same low-level code.
The main difference is a matter of (a) aesthetics and (b) maintainability of the code. With respect to aesthetics, some may argue that the second form is prettier. It's largely personal choice, but I would argue that if the expression weren't as simple as GetProducts() but instead very long (e.g. GetContext().GetProductService().GetProductsFor(GetContext().GetCurrentUser())), then breaking it up into two lines with an intermediate variable would be more readable.
With respect to maintainability, I think you will find that having fewer variables is always better for future maintenance. You are less likely to encounter bugs relating to side effects or changing assumptions. In other languages you can use constructs like const or final to use the compiler to help protect against code rot, but I would still argue that it's cleaner to have fewer lines of code.
Hope this helps!
Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I explored the C# reference source and came across an interesting mention of an IArithmetic<T> interface. For example, Int32 and Double contain commented-out implementations of IArithmetic. I'm intrigued by these details. As I understand it, it was an attempt to add support for arithmetic operations. But why are they commented out? Is it a bad way to add support for generic "operators"?
It was probably scrapped for performance reasons and limited usability.
Primitive types supporting arithmetic operations through an interface is really not a very attractive scenario; performance would be horrible compared to simply using the value type itself due to the necessary boxing and unboxing.
What possible uses? Well, the first one to spring to mind would be the following scenario:
public class Matrix<T> where T : IArithmetic<T>
or some such. Although this could be interesting, for performance reasons it would probably need to be solved some other way, not through interfaces; read this for very educated musings on the subject.
On top of all that, if you really need something similar to Arithmetic<T> you can always build your own with an added level of indirection.
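One classic shape that added level of indirection can take (the interface and struct names below are illustrative, not from the reference source): a generic "calculator" struct. Constraining the calculator to a struct type lets the JIT specialize each instantiation and call Add directly, avoiding the boxing that an interface on the value type itself would cause.

```csharp
using System;

// Describe arithmetic for T via a separate type instead of
// implementing an interface on the primitive itself.
public interface ICalculator<T>
{
    T Add(T a, T b);
}

public struct IntCalculator : ICalculator<int>
{
    public int Add(int a, int b) => a + b;
}

public struct DoubleCalculator : ICalculator<double>
{
    public double Add(double a, double b) => a + b;
}

public static class Numeric
{
    // Because C is constrained to a struct, the JIT generates a
    // specialized, non-boxing body per instantiation.
    public static T Sum<T, C>(T[] items) where C : struct, ICalculator<T>
    {
        var calc = default(C);
        T total = default;
        foreach (var item in items) total = calc.Add(total, item);
        return total;
    }

    public static void Main()
    {
        Console.WriteLine(Sum<int, IntCalculator>(new[] { 1, 2, 3 }));        // prints 6
        Console.WriteLine(Sum<double, DoubleCalculator>(new[] { 1.5, 2.5 })); // prints 4
    }
}
```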
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 9 years ago.
I'm planning on gaining some insights into inheritance usage for .NET systems written in C#. I want to analyse Intermediate Language code instead of C# code to make it possible to also look at compiled code. Is there any information available on which optimizations the C# compiler may do when the optimize code flag is enabled?
I'm analysing call behavior related to inheritance graphs (e.g. using polymorphism, reuse methods from base class, etc).
Most questions and resources on the internet say 'minor optimizations' and other vague things. I need to specifically know the changes in semantics that might occur when compiling for release mode. I am not interested in the performance of code.
For example, Scott Hanselman posts in his blog that method inlining will occur in release mode. But that is just one example. This means that What is /optimize C# compiler key intended for? does not answer my question.
http://blogs.msdn.com/b/ericlippert/archive/2009/06/11/what-does-the-optimize-switch-do.aspx
Eric Lippert (a former principal developer on the C# compiler team) answered this on his blog. A few of the remarks:
- It eliminates dead code (branches that are never reached, checks that always evaluate the same way, ...).
- Null-check optimization.
- Branch-chain shortening: a branch whose target is just another branch (A -> B -> C -> D) is rewritten to jump straight to the final target (A -> D).
- Similar rewriting of branches whose target is a return instruction.
The entire blog has many more examples and I urge you to read it if you want to know about this.
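As a small illustration of the dead-code case from the list above (this example is mine, not from the blog post): a branch guarded by a compile-time false constant can be dropped entirely from the emitted IL when optimizations are on.

```csharp
using System;

public static class DeadCodeDemo
{
    // A compile-time constant: the compiler knows this branch can
    // never be taken (it will also warn about unreachable code).
    const bool Verbose = false;

    public static int Compute(int x)
    {
        // With /optimize, this whole block is dead and can be
        // omitted from the generated IL; behavior is unchanged.
        if (Verbose)
        {
            Console.WriteLine("computing...");
        }
        return x * 2;
    }

    public static void Main() => Console.WriteLine(Compute(21)); // prints 42
}
```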
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
What are the ways to identify syntactic sugar? For example, if someone unaware of version 1.0 of C# starts out learning version 4.0, how should he or she go about detecting syntactic sugar? Is looking at the disassembled code the only way?
OR
Should someone learning C# 4.0 go all the way down to 1.0, peeling off one layer of abstraction after another, to see the inner details?
An example:
When dealing with Events in C#, the following comes to my mind (courtesy Hans Passant)
the += operator calls add(), -= calls remove()
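A small sketch of that lowering (the Button type here is made up for illustration): a field-like event declaration causes the compiler to synthesize a backing delegate field plus add_Clicked/remove_Clicked accessors, and the += / -= operators compile to calls to those accessors.

```csharp
using System;

public class Button
{
    // Field-like event: the compiler generates a hidden delegate field
    // and the add_Clicked / remove_Clicked accessor methods for it.
    public event EventHandler Clicked;

    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}

public static class EventDemo
{
    public static int Count;

    public static void Main()
    {
        var button = new Button();
        EventHandler handler = (s, e) => Count++;

        button.Clicked += handler; // compiles to a call to add_Clicked
        button.SimulateClick();    // Count becomes 1
        button.Clicked -= handler; // compiles to a call to remove_Clicked
        button.SimulateClick();    // no subscribers left, Count stays 1

        Console.WriteLine(Count);  // prints 1
    }
}
```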
P.S.
The questions I ask may be just too hilarious for the extremely knowledgeable crowd here. But making sense of the ever evolving software world is no "walk in the park". It is like finding your way out of the Amazon (I mean the Brazilian jungle. The website is easy to navigate.).
Everything atop electric current is syntactic sugar for us humans. The computer doesn't need any of it; the alphabet and words as we know them mean nothing to it.
Your job as a programmer is to use whatever feature makes your job easier. There is no bad feature if it makes you faster or better, preferably both.
You'll easily detect sugar by going backwards from .NET 4.0 and noticing that simple "sugared" language constructs from higher versions get more and more complicated to express in lower ones.
Sugar is nothing bad; you just need to know that there are many ways to accomplish a given task.