Do unused properties cause overhead? - c#

Suppose we had a generated class with a lot of redundant accessors. They are just accessors, they are not fields. They are not called anywhere. They just sit there being redundant and ugly.
For example:
public class ContrivedExample
{
    public int ThisOneIsUsed { get; set; }
    public int ThisOneIsNeverCalled0 { get { /* Large amounts of logic go here, but it is never called. */ return 0; } }
    public int ThisOneIsNeverCalled1 { get { /* Large amounts of logic go here, but it is never called. */ return 0; } }
    public int ThisOneIsNeverCalled2 { get { /* Large amounts of logic go here, but it is never called. */ return 0; } }
    // ...
    public int ThisOneIsNeverCalled99 { get { /* Large amounts of logic go here, but it is never called. */ return 0; } }
}
ContrivedExample c = new ContrivedExample { ThisOneIsUsed = 5 };
The only overhead I can think of is that it would make the .DLL larger. I expect that there would be zero runtime penalties.
Does this cause any other overhead? Even a tiny overhead of any kind?

It is unlikely to have any measurable run-time overhead. In any case, since this is a performance question, measure your usage and make the decision for your case if in doubt.
Unreferenced methods are never JIT-compiled, so they cause no direct run-time overhead.
Metadata for the class will be bigger (along with the size of the assembly, as you've mentioned).
You may see an indirect impact if the class is used in code that involves a lot of reflection; then again, code that repeatedly reflects over the same class is likely wrong in itself.
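For instance, anything that reflects over the type pays for every one of those dead accessors (a minimal sketch; ContrivedExample is the class from the question):

using System;
using System.Reflection;

class ReflectionCostDemo
{
    static void Main()
    {
        // GetProperties materializes a PropertyInfo for every public
        // property, used or not, so all 101 accessors show up here.
        PropertyInfo[] props = typeof(ContrivedExample).GetProperties();
        Console.WriteLine(props.Length); // 101
    }
}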

Related

C# Struct this() initializer - Memory, Performance, and cleanliness

ReSharper recommended a change to my .NET struct that I was unaware of. I am having a hard time finding Microsoft documentation about the this() initializer on a struct.
I have a constructor on my struct where I pass in the values, but I want the struct properties to be read-only once the struct has been created. The ReSharper-proposed way makes the code look much cleaner.
Questions:
Memory: I want to avoid generating any extra garbage if possible. I worry using this() may pre-initialize my value types, prior to setting them.
Performance: I worry that using the this() will first initialize the struct values with defaults, then set the values. An unnecessary operation. It would be nice to avoid that.
Cleanliness: It's obvious that using the : this() makes the struct much cleaner. Any reason why we wouldn't want to use it?
Example:
public struct MyContainer
{
    public MyContainer(int myValue) : this()
    {
        MyValue = myValue;
    }

    public int MyValue { get; private set; }
}

public struct MyContainer2
{
    private readonly int _myValue;

    public MyContainer2(int myValue)
    {
        _myValue = myValue;
    }

    public int MyValue
    {
        get { return _myValue; }
    }
}
If you are trying to optimize for performance and less .NET garbage, which is the correct route to go? Is there even a difference once it gets compiled?
I don't want to blindly accept using this() when I am creating millions of structs for data processing. They are short-lived container objects, so .NET garbage and performance matter.
I created a quick benchmark of a struct with the "this()" initializer and one without, like this:
struct Data
{
    public Data(long big, long big2, int small)
    {
        big_data = big;
        big_data2 = big2;
        small_data = small;
    }

    public long big_data;
    public long big_data2;
    public int small_data;
}
I benchmarked by initializing 5 billion structs of each type. I found that in debug mode, the test without the "this()" initializer was measurably faster. In release mode, they were almost equal. I am assuming that in release mode the "this()" is optimized out, while in debug it actually runs the "this()" and possibly even initializes the struct fields to their defaults first.
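A minimal sketch of such a benchmark harness (the Stopwatch loop and iteration count are illustrative, not the original test; the : this() variant would be a second copy of Data with that constructor chain):

using System;
using System.Diagnostics;

class StructBenchmark
{
    static void Main()
    {
        const long iterations = 1000000000L; // scale to taste
        var sw = Stopwatch.StartNew();
        for (long n = 0; n < iterations; n++)
        {
            var d = new Data(n, n, (int)n); // struct without : this()
        }
        sw.Stop();
        Console.WriteLine("without this(): " + sw.ElapsedMilliseconds + " ms");
        // Run the same loop against a Data variant whose constructor is
        // declared ": this()" and compare Debug vs. Release timings.
        // Beware: in Release the JIT may eliminate the unused struct
        // entirely, which can make both loops look equally fast.
    }
}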
This is a shortcoming of the language concerning auto-implemented properties and structs. It's fixed in C# 6, where the explicit call to this() is not necessary, and you can even do away with the private setter:
public struct MyContainer
{
    public int MyValue { get; }

    public MyContainer(int value)
    {
        MyValue = value; // readonly properties can be set in the constructor, similar to how readonly fields behave
    }
}
As to performance, I'd be very much surprised if there were a noticeable difference between the two (I can't currently check the generated IL for differences). (As per the comments, the next bit of the answer is irrelevant: calling this() will not generate additional "garbage".) Also, if the objects are short-lived as you claim, I wouldn't worry about garbage at all, as they would be stored on the stack, not in heap memory.

Singleton List in Abstract Class ... Am I doing it wrong?

I'm loading in lots of parts that need to be assigned materials.
I have a list of Materials that I don't want to run to the database each time to get and thought that putting them inside an inherited base might make it easier for me to carry around.
public abstract class StoreBase : SecurityBase
{
    internal static IEnumerable<AppEntities.Material> materialsList { get; set; }

    internal static IEnumerable<AppEntities.Material> MaterialsList
    {
        get
        {
            if (materialsList == null)
            {
                materialsList = MaterialService.Get().Result;
            }
            return materialsList;
        }
    }
}
Now anytime I need to load it.. I just call
var mats = MaterialsList;
However, I'm loading thousands of parts at a time, and my material list is almost 3k items.
I suspect this is causing some of my memory problems, as I can really only load a few thousand parts before it bottoms out (x parts times 3k items seems to add up fast).
I know there has to be a better solution, but I don't know how to achieve it without blowing up the database with queries.
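One hedged suggestion: if the null check itself is a worry (two threads can race it and both hit the database), Lazy<T> gives the same deferred load with thread safety for free. A sketch, reusing the names from the question and materializing the sequence once so repeated enumeration doesn't re-run the query:

using System;
using System.Collections.Generic;
using System.Linq;

public abstract class StoreBase : SecurityBase
{
    // Initialized at most once, on first access, even under concurrency.
    private static readonly Lazy<IReadOnlyList<AppEntities.Material>> materials =
        new Lazy<IReadOnlyList<AppEntities.Material>>(
            () => MaterialService.Get().Result.ToList());

    internal static IReadOnlyList<AppEntities.Material> MaterialsList
    {
        get { return materials.Value; }
    }
}

Every caller then shares the one list instance; assigning it to thousands of parts copies only the reference, not the 3k items.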

How can I design small getter-based classes without multiple execution?

I like to write many small getter properties that describe exactly what they mean.
However, this can lead to repeated, expensive calculations.
Every way I know to avoid this makes the code less readable.
A pseudocode example:
ctor(decimal grossAmount, taxRateCalculator, itemCategory)
{
    // store the ctor args as member variables
}

public decimal GrossAmount { get { return _grossAmount; } }
private decimal TaxRate { get { return _taxRateCalculator.GetTaxRateFor(_itemCategory); } } // expensive calculation
public decimal TaxAmount { get { return GrossAmount * TaxRate; } }
public decimal NetAmount { get { return GrossAmount - TaxAmount; } }
In this example it is very obvious what each of the properties does, because they are simple accessors. TaxRate has been refactored into its own property so that it, too, is obvious what it does. If the _taxRateCalculator operation is very expensive, how can I avoid repeated execution without junking up the code?
In my real scenario I might have ten fields that would need to be treated this way, so ten sets of _backing or Lazy fields would be ugly.
Cache the value
private decimal? _taxRate = null;
...

private decimal TaxRate
{
    get
    {
        if (!this._taxRate.HasValue)
        {
            this._taxRate = _taxRateCalculator.GetTaxRateFor(_itemCategory);
        }
        return this._taxRate.Value;
    }
}
You could create a method which recalculates your values every time you call it.
The advantages of this solution are:
you can recalculate your properties manually
you can implement some sort of event listener which could easily trigger the recalculation any time something happens
ctor(decimal grossAmount, taxRateCalculator, itemCategory)
{
    // store the ctor args as member variables
    Recalculate();
}

public decimal GrossAmount { get; private set; }
public decimal TaxAmount { get; private set; }
public decimal NetAmount { get; private set; }

public void Recalculate()
{
    // expensive calculation
    var taxRate = _taxRateCalculator.GetTaxRateFor(_itemCategory);
    GrossAmount = _grossAmount;
    TaxAmount = GrossAmount * taxRate;
    NetAmount = GrossAmount - TaxAmount;
}
Profile. Find out where the application is spending most of the time, and you'll have places to improve. Focus on the worst offenders, there's no point in performance optimizing something unless it has significant impact on the user experience.
When you identify the hotspots, you've got a decision to make - is the optimization worth the costs? If it is, go ahead. Having a backing field to store an expensive calculation is quite a standard way of avoiding repeated expensive calculations. Just make sure to only apply it where it matters, to keep the code simple.
This really is a common practice, and as long as you make sure you're consistent (e.g. if some change could change the tax rate, you'd want to invalidate the stored value to make sure it gets recalculated), you're going to be fine. Just make sure you're fixing a real performance problem, rather than just going after some performance ideal =)
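For instance, the invalidation can be as small as resetting the backing field whenever an input changes (a sketch using the names from the question; ItemCategory's type is an assumption):

private decimal? _taxRate;
private string _itemCategory;

public string ItemCategory
{
    get { return _itemCategory; }
    set
    {
        _itemCategory = value;
        _taxRate = null; // stored rate is stale; the next read recalculates
    }
}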
I don’t think there is a way. You want to cache data without a cache. You could build caching functionality into your GetTaxRateFor(…) method, but I doubt that is the only expensive method you call.
So the shortest way would be a backing field. What I usually do is this:
private decimal? _backingField;

public decimal Property { get { return _backingField ?? (_backingField = ExpensiveMethod()).Value; } }
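If the null-coalescing trick feels too clever, Lazy<T> from the BCL expresses the same once-only intent. A sketch (Invoice and ITaxRateCalculator are illustrative names, not from the question):

using System;

public interface ITaxRateCalculator
{
    decimal GetTaxRateFor(string itemCategory);
}

public class Invoice
{
    private readonly Lazy<decimal> _taxRate;

    public Invoice(decimal grossAmount, ITaxRateCalculator taxRateCalculator, string itemCategory)
    {
        GrossAmount = grossAmount;
        // The lambda runs at most once, on first access to _taxRate.Value;
        // Lazy<T> is also thread-safe by default.
        _taxRate = new Lazy<decimal>(() => taxRateCalculator.GetTaxRateFor(itemCategory));
    }

    public decimal GrossAmount { get; private set; }
    private decimal TaxRate { get { return _taxRate.Value; } }
    public decimal TaxAmount { get { return GrossAmount * TaxRate; } }
    public decimal NetAmount { get { return GrossAmount - TaxAmount; } }
}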

Object memory optimization question

Please pardon me for a n00bish question.
Please consider the following code:
public class SampleClass
{
    public string sampleString { get; set; }
    public int sampleInt { get; set; }
}
class Program
{
    SampleClass objSample;

    public void SampleMethod()
    {
        for (int i = 0; i < 10; i++)
        {
            objSample = new SampleClass();
            objSample.sampleInt = i;
            objSample.sampleString = "string" + i;
            ObjSampleHandler(objSample);
        }
    }

    private void ObjSampleHandler(SampleClass objSample)
    {
        // Some code here
    }
}
In the given example code, each time SampleMethod() is called, it iterates 10 times, and on each pass allocates new memory for an instance of SampleClass and assigns it to the objSample field.
I wonder:
Is this a bad approach, as a lot of memory space is being wasted by it?
If that is the case, is there a better approach to reuse/optimize the allocated memory?
Or am I getting worried for no reason at all and getting into unnecessary micro-optimisation mode? :)
Edit: Also consider the situation where such a method is used in a multi-threaded environment. Would that change anything?
The technical term for what you are doing is premature optimization.
You're definitely doing well to think about the performance implications of things. But in this case, the .NET Garbage Collector will handle the memory fine. And .NET is very good at creating objects fast.
As long as your class's constructor isn't doing a lot of complex, time-consuming things, this won't be a big problem.
Second option.
You shouldn't be concerned with this kind of optimization unless you're having a performance issue.
And even if you are, it would depend on what you do with the object after you create it; for example, if ObjSampleHandler() stores the objects to use them later, you simply cannot avoid what you're doing.
Remember, "premature optimization is the root of all evil", or so they say ;)
As you are creating a new object (objSample = new SampleClass();), you are not reusing that object. You are only reusing the reference to an instance of SampleClass.
But now you are forcing that reference to be a member variable of your class Program, where it could have been a local variable of the method SampleMethod.
Assuming your code in the ObjSampleHandler method doesn't create any non-local references to objSample, each object will become eligible for garbage collection once the method finishes, which is quite memory-efficient and unlikely to be of concern.
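Concretely, the same method with a local reference (a sketch of just that change):

public void SampleMethod()
{
    for (int i = 0; i < 10; i++)
    {
        // local reference: nothing outlives the iteration unless
        // ObjSampleHandler stores the instance somewhere
        var objSample = new SampleClass();
        objSample.sampleInt = i;
        objSample.sampleString = "string" + i;
        ObjSampleHandler(objSample);
    }
}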
However, if you are having problems specifically with the managed heap because of this type of code, then you could change your class to a struct, and it will be stored on the stack rather than the heap, which can be more efficient. Please remember, though, that structs are copied by value rather than by reference, and you need to understand the consequences of this in the remainder of your code.
public struct SampleClass
{
    public string sampleString { get; set; }
    public int sampleInt { get; set; }
}

Changing internal representation at runtime

UPDATE
The main questions remain the ones under the example, but I guess it boils down to:
**If you have a type where 99% of the values could be represented in one fast, powerful type, and only 1% in a very heavy type (say, int vs. BigInteger), how do you represent it?**
At school we learned a lot about internal representations, but never about changing one at runtime. I mean: suppose you have a class representing a decimal, but you use an integer to represent it internally until you actually need a bigger value than the integer can hold, and only then change representation...
I had never thought of this before, and when thinking about it I assumed it would never work, since all the checks would kill it. But I'm too curious for my own good, so I just did a test, and there do exist situations where changing representation is more performant. Given this interface:
interface INumber
{
    void Add1000();
    void SetValue(decimal d);
    decimal GetValue();
}
I found the latter of the two implementations to be more performant in a lot of situations, including this one, which I composed to attract as many ideas as I could on the matter (no rep to be gained, it's community wiki):
1. Representation by only a decimal
public class Number1 : INumber
{
    private decimal d { get; set; }

    public void Add1000()
    {
        d += 1000;
    }

    public decimal GetValue()
    {
        return d;
    }

    public void SetValue(decimal d)
    {
        this.d = d;
    }
}
2. Representation by a decimal and an int
public class Number2 : INumber
{
    private bool usedecimal;
    private int i;
    private decimal d;

    public void Add1000()
    {
        if (usedecimal)
        {
            d += 1000;
            return;
        }
        i += 1000;
        if (i > 2147480000)
        {
            // close to int.MaxValue: switch to the decimal representation
            d = i;
            usedecimal = true;
        }
    }

    public void SetValue(decimal d)
    {
        try
        {
            i = (int)d;
            usedecimal = false;
        }
        catch (OverflowException)
        {
            this.d = d;
            usedecimal = true;
        }
    }

    public decimal GetValue()
    {
        return usedecimal ? d : i;
    }
}
My questions are the following:
This seems like something I have been missing, but it must be bleeding obvious. Can anyone help me out with this?
Are there guidelines for mixed representations, when to use them, when not?
How to have a hunch when a mixed representation can be faster without benchmarking?
Any examples?
Any patterns?
Any ideas on the matter?
If you have a type where 99% of the values could be represented in one fast, powerful type, and only 1% in a very heavy type (say, int vs. BigInteger), how do you represent it?
BigInteger implementations typically do exactly that; they keep everything in ints or longs until something overflows, and only then do they go to the heavier-weight implementation.
There's any number of ways to represent it. A pattern I like is:
public abstract class Thing
{
    private class LightThing : Thing
    { ... }

    private class HeavyThing : Thing
    { ... }

    public static Thing MakeThing(whatever)
    { /* make a heavy or light thing, depending */ }

    ... etc ...
}
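Filled in for the int/BigInteger case, the pattern might look like this (a sketch under assumed names; only addition is shown, and the long/BigInteger split stands in for whatever representations apply):

using System;
using System.Numerics;

public abstract class Number
{
    public abstract Number Add(Number other);
    public abstract BigInteger Value { get; }

    public static Number Make(BigInteger value)
    {
        // pick the light representation whenever the value fits in a long
        if (value >= long.MinValue && value <= long.MaxValue)
            return new LightNumber((long)value);
        return new HeavyNumber(value);
    }

    private sealed class LightNumber : Number
    {
        private readonly long _value;
        public LightNumber(long value) { _value = value; }
        public override BigInteger Value { get { return _value; } }

        public override Number Add(Number other)
        {
            var otherLight = other as LightNumber;
            long sum;
            // stay light as long as the addition does not overflow
            if (otherLight != null && !OverflowsOnAdd(_value, otherLight._value, out sum))
                return new LightNumber(sum);
            return Make(Value + other.Value); // fall back to the heavy path
        }

        private static bool OverflowsOnAdd(long a, long b, out long sum)
        {
            try { sum = checked(a + b); return false; }
            catch (OverflowException) { sum = 0; return true; }
        }
    }

    private sealed class HeavyNumber : Number
    {
        private readonly BigInteger _value;
        public HeavyNumber(BigInteger value) { _value = value; }
        public override BigInteger Value { get { return _value; } }
        public override Number Add(Number other) { return Make(_value + other.Value); }
    }
}

Note that HeavyNumber.Add routes through Make, so a result that shrinks back into long range automatically returns to the light representation.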
Are there guidelines for mixed representations, when to use them, when not?
Sure. We can easily compile such a list. This technique makes sense if:
(1) the lightweight implementation is much lighter than the heavyweight implementation
(2) the typical usage falls into the lightweight code path most of the time
(3) the cost of detecting the transition is not a significant cost compared to the cost of the heavyweight solution
(4) the more complex two-representation solution is necessary in order to achieve a customer-focused, realistic performance goal.
How to have a hunch when a mixed representation can be faster without benchmarking?
Don't. Making performance decisions based on hunches is reasoning in advance of facts. Drive performance decisions from realistic, customer-focused, data-driven analysis, not from hunches. If I've learned one thing about performance analysis over the years, it's that my hunches are usually wrong.
Any examples?
Any number of implementations of BigInteger.
Any patterns?
Beats the heck out of me. I'm not much of one for memorizing pattern taxonomies.
Any ideas on the matter?
See above.
Perhaps you're looking for the Bridge pattern.
