I'm basically a C++ guy trying to venture into C#. From the basic tutorial of C#, I happen to find that all objects are created and stored dynamically (also true for Java) and are accessed by references and hence there's no need for copy constructors. There is also no need of bitwise copy when passing objects to a function or returning objects from a function. This makes C# much simpler than C++.
However, I read somewhere that operating on objects exclusively through references imposes limitations on the type of operations that one can perform thus restricting the programmer of complete control. One limitation is that the programmer cannot precisely specify when an object can be destroyed.
Can someone please elaborate on other limitations? (with a sample code if required)
Most of the "limitations" are by design rather than considered a deficiency (you may not agree of course)
You cannot determine, or don't have to worry about:
when an object is destroyed
where the object is in memory
how big it is (unless you are tuning the application)
pointer arithmetic (it isn't available)
accessing memory outside an object
accessing an object as the wrong type
sharing objects between threads (this becomes simpler)
whether the object is on the stack or the heap (the stack is being used more and more in Java)
fragmentation of memory (this is not true of all collectors)
Because garbage collection in Java is non-deterministic, you cannot predict when an object will be destroyed, but the collector does perform the work a destructor would.
If you want to free up resources deterministically, you can use a finally block.
try {
    // ... work with the resource ...
} finally {
    // dispose resources.
}
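In C#, the more idiomatic equivalent for deterministic cleanup is a using block over an IDisposable resource; a minimal sketch (StreamReader and the file name are just placeholders):

using System.IO;

// Dispose runs when the using block exits, even if an exception is thrown;
// the memory itself is still reclaimed later by the garbage collector.
using (var reader = new StreamReader("data.txt"))
{
    string firstLine = reader.ReadLine();
}   // reader.Dispose() has already run here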
Having made a similar transition, the more you look into it, the more you do have to think about C#'s GC behaviour in all but the most straightforward cases. This is especially true when trying to handle unmanaged resources from managed code.
This article highlights a lot of the issues you may be interested in.
Personally I miss a reference counted alternative to IDisposable (more like shared_ptr), but that's probably a hangover from a C++ background.
The more I have to write my own plumbing to support C++-style programming, the more likely it is that there is another C# mechanism I've overlooked, or I end up getting frustrated with C#. For example, swap and move are not common idioms in C# as far as I've seen, and I miss them; other programmers with a C# background may well disagree about how useful those idioms are.
Sorry for being confused: in C++ I know that returning a reference or pointer to a local variable is dangerous because it leaves the caller with a dangling reference. I am not sure how this works in C#?
For example:
List<StringBuilder> logs = new List<StringBuilder>();

void function(string log)
{
    StringBuilder sb = new StringBuilder();
    sb.Append(log);   // put the message into the local StringBuilder
    logs.Add(sb);     // keep a reference to it in the list
}
In this function a local object is created and stored in a list. Is that bad, or must it be done in another way? I am really sorry for asking this, but I am confused after coding C++ for 2 months.
Thanks.
Your C# code doesn't return an object reference, so it doesn't quite match your concern. In any case, the problem you describe doesn't exist in C#: the CLR doesn't let you create objects on the stack, only on the heap, and the garbage collector makes sure that object references stay valid.
In C#, the garbage collector manages all the (managed) objects you create. It will not delete one unless there are no longer any references to it.
So that code is perfectly valid. logs keeps a reference to the StringBuilder. The garbage collector knows this, so it will not clean it up even after the context in which it was originally created goes out of scope.
In C#, object lifetime is managed for you by the CLR, compared to C++ where you have to match each new with a delete.
However in C# you can't do
void fun()
{
    SomeObject sb(10);   // C++-style stack allocation: this does not compile in C#
    logs.Add(sb);
}
i.e. you cannot allocate an object on the stack with that syntax; you have to use new. So in this respect C# and C++ look similar, except when it comes to releasing / freeing the object reference.
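A sketch of the C# equivalent, assuming SomeObject is a class with a matching constructor:

void fun()
{
    // Reference-type instances are always allocated with new on the managed heap.
    SomeObject sb = new SomeObject(10);
    logs.Add(sb);   // the GC keeps sb alive for as long as logs references it
}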
It is still possible to leak memory in C# - but it's harder than in C++.
There's nothing wrong with the code you've written. This is mostly because C#, like any .NET language, is a "managed" language that does a lot of memory management for you. To get the same effect in C++ you would need to explicitly use some third-party library.
To clear up some of the basics for you:
In C#, you rarely deal with "pointers" or "references" directly. You can deal with pointers if you need to, but that is "unsafe" code and you should avoid that kind of thing unless you know what you're doing. In the few cases where you do deal with references (e.g. ref or out parameters) the language hides all the details from you and lets you treat them as normal variables.
Instead, objects in C# are defined as instances of reference types; whenever you use an instance of a reference type, it is similar to using a pointer except that you don't have to worry about the details. You create new instances of reference types in C# in the same way that you create new instances of objects in C++, using the new operator, which allocates memory, runs constructors, and so on. In your code sample, both StringBuilder and List<StringBuilder> are reference types.
The key aspect of managed languages that is important here is automatic garbage collection. At runtime, the .NET Framework "knows" which objects you have created, because you're always creating them from its own internally managed heap (no direct malloc or anything like that in C#). It also "knows" when an object is no longer reachable, i.e. when there are no more references to it anywhere in your program. Once that happens, the runtime is free to reclaim the memory whenever it wants to, typically when it starts to run low on free memory, and you never have to do it yourself. In fact, there is no way in C# to explicitly destroy a managed object (though you do have to clean up unmanaged resources if you use them).
In your example, the runtime knows that you've created a StringBuilder and put it into a List<>; it will keep track of that object, and as long as it's in the List<> it will stick around. Once you either remove it from the List<>, or the List<> itself goes away, the runtime will automatically clean up the StringBuilder for you.
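If you want to convince yourself of this, here is a small sketch using WeakReference showing that the list alone keeps the StringBuilder alive (the exact timing of collection is up to the GC, so treat the output as typical rather than guaranteed):

using System;
using System.Collections.Generic;
using System.Text;

var logs = new List<StringBuilder>();
var sb = new StringBuilder("hello");
logs.Add(sb);

var weak = new WeakReference(sb);
sb = null;                          // drop the local reference
GC.Collect();
Console.WriteLine(weak.IsAlive);    // True: the list still references the object

logs.Clear();
GC.Collect();
Console.WriteLine(weak.IsAlive);    // typically False: nothing references it any more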
I am originally a native C++ programmer. In C++, everything your program does is bound to your code: nothing happens unless you make it happen, and every bit of memory is allocated (and deallocated) according to what you wrote. So performance is entirely your responsibility; if you do a good job, you get great performance.
(Note: please don't object that you didn't write code such as the STL yourself; it is still unmanaged C++ code, and that is the significant part.)
But in managed code, such as Java or C#, you don't control every operation, and memory is "hidden", or not under your control, to some extent. That makes performance relatively unpredictable, and mostly you fear bad performance.
So my question is: what issues and rules of thumb should I look out for and keep in mind to achieve good performance in managed code?
I could think only of some practices such as:
Being aware of boxing and unboxing.
Choosing the correct collection that best suits your needs and has the lowest operation cost.
But these never seem to be enough, or even all that convincing! In fact, perhaps I shouldn't have mentioned them.
Please note I am not asking for a C++ VS C# (or Java) code comparing, I just mentioned C++ to explain the problem.
There is no single answer here. The only way to answer this is: profile. Measure early and often. The bottlenecks are usually not where you expect them. Optimize the things that actually hurt. We use mvc-mini-profiler for this, but any similar tool will work.
You seem to be focusing on GC; now, that can sometimes be an issue, but usually only in specific cases; for the majority of systems the generational GC works great.
Obviously external resources will be slow; caching may be critical. In odd scenarios with very long-lived data there are tricks you can do with structs to avoid long gen-2 collections; serialization (files, network, etc.), materialization (ORM), or just a bad collection/algorithm choice may be the biggest issue - you cannot know until you measure.
Two things though:
make sure you understand what IDisposable and "using" mean
don't concatenate strings in loops; mass concatenation is the job of StringBuilder
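A small sketch of the second point, assuming lines is some IEnumerable<string> and System.Text is imported:

// Naive concatenation copies the whole string on every iteration (quadratic cost):
string s = "";
foreach (var line in lines)
    s += line;

// StringBuilder appends into a growable buffer instead:
var sb = new StringBuilder();
foreach (var line in lines)
    sb.Append(line);
string result = sb.ToString();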
Reusing large objects is very important in my experience.
Objects on the large object heap are implicitly generation 2, and thus require a full GC to clean up. And that's expensive.
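For instance, arrays of roughly 85,000 bytes or more land on the large object heap; a hedged sketch of what reuse might look like, assuming a handler that needs a large scratch buffer (thread safety is ignored here; a real implementation would pool per thread or guard access):

// Allocate the large buffer once and reuse it, instead of allocating a fresh
// array per request and creating repeated large-object-heap garbage.
private static readonly byte[] ScratchBuffer = new byte[256 * 1024];

void HandleRequest(Stream input)
{
    int bytesRead = input.Read(ScratchBuffer, 0, ScratchBuffer.Length);
    // ... process the bytes that were read ...
}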
The main thing to keep in mind with performance with managed languages is that your code can change structure at runtime to be better optimized.
For example, the default JVM most people use is Sun's HotSpot VM, which will actually optimize your code as it runs by compiling parts of the program to native code, inlining on the fly, and applying other optimizations (the CLR and other managed runtimes do the same), which you will never get using C++.
Additionally, HotSpot will also detect which parts of your code are used the most and optimize accordingly.
So as you can see optimising performance on a managed system is slightly harder than on an un-managed system because you have an intermediate layer that can make code faster without your intervention.
I am going to invoke the law of premature optimization here and say that you should first create the correct solution then, if performance becomes an issue, go back and measure what is actually slow before attempting to optimize.
I would suggest understanding better garbage collection algorithms. You can find good books on that matter, e.g. The Garbage Collection Handbook (by Richard Jones, Antony Hosking, Eliot Moss).
Then, your question is practically related to particular implementation, and perhaps even to a specific version of it. For instance, Mono used (e.g. in version 2.4) to use Boehm's garbage collector, but now uses a copying generational one.
And don't forget that some GC techniques can be remarkably efficient. Remember A.Appel's old paper Garbage Collection can be faster than stack allocation (but today, the cache performance matters much much more, so details are different).
I think that being aware of boxing (& unboxing) and allocation is enough. Some compilers are able to optimize these (by avoiding some of them).
Don't forget that GC performance can vary widely. There are good GCs (for your application) and bad ones.
And some GC implementations are quite fast, for example the one inside OCaml.
I would not bother that much: premature optimization is evil.
(And C++ memory management, even with smart pointers or reference counting, can often be viewed as a poor man's garbage collection technique; and you don't have full control over what C++ is doing (unless you re-implement your ::operator new using operating-system-specific system calls), so you don't really know its performance a priori.)
.NET Generics don't specialize on reference types, which severely limits how much inlining can be done. It may (in certain performance hotspots) make sense to forgo a generic container type in favor of a specific implementation that will be better optimized. (Note: this doesn't mean to use .NET 1.x containers with element type object).
Could someone explain to a C++ programmer the most important differences between Java (and C#) references and shared_ptr (from Boost or from C++0x)?
I am more or less aware of how shared_ptr is implemented. I am curious about the differences in the following areas:
1) Performance.
2) Cycles. shared_ptr can form a cycle (A and B hold pointers to each other). Are cycles possible in Java?
3) Anything else?
Thank you.
Performance: shared_ptr performs pretty well, but in my experience it is slightly less efficient than explicit memory management, mostly because it is reference counted and the reference count has to be allocated as well. How well it performs depends on a lot of factors, and how well it compares to the Java/C# garbage collectors can only be determined on a per-use-case basis (it depends on the language implementation among other factors).
Cycles: a cycle of shared_ptrs will never be reclaimed; you have to break the cycle with weak_ptr. Java allows cycles without further ado; its garbage collector will collect them. My guess is that C# does the same.
Anything else: the object pointed to by a shared_ptr is destroyed as soon as the last reference to it goes out of scope. The destructor is called immediately. In Java, the finalizer may not be called immediately. I don't know how C# behaves on this point.
The key difference is that when the shared pointer's use count goes to zero, the object it points to is destroyed (destructor is called and object is deallocated), immediately. In Java and C# the deallocation of the object is postponed until the Garbage Collector chooses to deallocate the object (i.e., it is non-deterministic).
With regard to cycles, I am not sure I understand what you mean. It is quite common in Java and C# to have two objects that contain member fields that refer to each other, thus creating a cycle. For example a car and an engine - the car refers to the engine via an engine field and the engine can refer to its car via a car field.
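To make the cycle point concrete in C# terms (Car and Engine here are just the hypothetical classes from the example above):

class Engine { public Car Car; }
class Car    { public Engine Engine; }

void Demo()
{
    var car = new Car();
    var engine = new Engine();
    car.Engine = engine;    // car -> engine
    engine.Car = car;       // engine -> car: a reference cycle

    // When Demo returns, nothing outside the cycle references either object,
    // so the tracing GC can reclaim both. A pair of shared_ptrs linked like
    // this would leak instead, unless one link were a weak_ptr.
}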
Nobody has pointed out that the memory manager can move objects around in managed memory. So in C# there are no simple references/pointers; they work like IDs describing an object, which the manager resolves to its current location.
In C++ you can't achieve this with shared_ptr, because the object stays in the same location after it has been created.
First of all, Java/C# have only pointers, not references, though they call them that. References are a unique C++ feature. Garbage collection in Java/C# basically means infinite lifetime. shared_ptr, on the other hand, provides sharing and deterministic destruction when the count goes to zero. Therefore shared_ptr can be used to automatically manage any resource, not just memory allocation. In a sense (just like any RAII design) it turns pointer semantics into more powerful value semantics.
Cyclical references with C++ reference-counted pointers will not be disposed. You can use weak pointers to work around this. Cyclical references in Java or C# may be disposed, when the garbage collector feels like it.
When the count in a C++ reference-counted pointer drops to zero, the destructor is called. When a Java object is no longer reachable, its finalizer may not be called promptly or ever. Therefore, for objects which require explicit disposal of external resources, some form of explicit call is required.
We have certain application written in C# and we would like it to stay this way. The application manipulates many small and short lived chunks of dynamic memory. It also appears to be sensitive to GC interruptions.
We think that one way to reduce GC is to allocate 100K chunks and then allocate memory from them using a custom memory manager. Has anyone encountered custom memory manager implementations in C#?
Perhaps you should consider using some sort of pooling architecture, where you preallocate a number of items up front and then lease them from the pool. This keeps the memory requirements nicely bounded. There are a few implementations on MSDN that might serve as reference:
http://msdn2.microsoft.com/en-us/library/bb517542.aspx
http://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.socketasynceventargs.aspx
...or I can offer my generic implementation if required.
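For illustration only (not the implementations from the MSDN links above), a minimal generic pool might look something like this:

using System.Collections.Concurrent;

// Preallocate instances up front, then lease and return them instead of
// allocating a new object per use.
public sealed class SimplePool<T> where T : class, new()
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();

    public SimplePool(int preallocate)
    {
        for (int i = 0; i < preallocate; i++)
            _items.Add(new T());
    }

    // Take an item from the pool, falling back to a fresh allocation if empty.
    public T Rent()
    {
        return _items.TryTake(out T item) ? item : new T();
    }

    // Return an item so it can be reused (the caller must reset its state).
    public void Return(T item)
    {
        _items.Add(item);
    }
}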
Memory management of all types that descend from System.Object is performed by the garbage collector (with the exception of structures/primitives stored on the stack). In Microsoft's implementation of the CLR, the garbage collector cannot be replaced.
You can allocate some primitives on the heap manually inside of an unsafe block and then access them via pointers, but it's not recommended.
Likely, you should profile and migrate classes to structures accordingly.
The obvious option is using Marshal.AllocHGlobal and Marshal.FreeHGlobal. I also have a copy of the DougLeaAllocator (dlmalloc) written and battle-tested in C#. If you want, I can get that out to you. Either way will require careful, consistent usage of IDisposable.
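The rough shape of that approach (the slicing logic is left out, and freeing must be guaranteed, typically via IDisposable):

using System;
using System.Runtime.InteropServices;

// Grab a 100 KB block of unmanaged memory. The GC never sees or moves it,
// so it causes no collection pressure, but you must free it yourself.
IntPtr block = Marshal.AllocHGlobal(100 * 1024);
try
{
    // ... hand out pieces of 'block' from your custom allocator ...
}
finally
{
    Marshal.FreeHGlobal(block);
}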
Objects are only collected by the garbage collector when there are no more references to them.
You could use a static class or a similar long-lived root to keep a reference to an object for the lifetime of the application.
If you want to manage your own memory it is possible using unsafe in C#, but you would be better to choose a language that wasn't managed like C++.
Although I don't have experience with it, you can try writing unsafe code in C#.
Or you can make sure the GC does not collect an object before a specific point in your code by calling
GC.KeepAlive(obj);
This question has been puzzling me for a long time now. I come from a heavy and long C++ background, and since I started programming in C# and dealing with garbage collection I always had the feeling that such 'magic' would come at a cost.
I recently started working on a big MMO project written in Java (server side). My main task is to optimize memory consumption and CPU usage. Hundreds of thousands of messages per second are being sent, and the same number of objects are created as well. After a lot of profiling, we discovered that the VM's garbage collector was eating a lot of CPU time (due to constant collections) and decided to try to minimize object creation, using pools where applicable and reusing everything we can. This has proven to be a really good optimization so far.
So, from what I've learned, having a garbage collector is awesome, but you can't just pretend it does not exist, and you still need to take care about object creation and what it implies (at least in Java and a big application like this).
So, is this also true for .NET? If it is, to what extent?
I often write pairs of functions like these:
// Combines two envelopes; the result is stored in a new envelope.
public static Envelope Combine( Envelope a, Envelope b )
{
    var envelope = new Envelope( a.Length, 0, 1, 1 );
    Combine( a, b, envelope );
    return envelope;
}

// Combines two envelopes; the result is 'written' to the specified envelope.
public static void Combine( Envelope a, Envelope b, Envelope result )
{
    result.Clear();
    // ...
}
A second function is provided in case someone has an already made Envelope that may be reused to store the result, but I find this a little odd.
I also sometimes write structs when I'd rather use classes, just because I know there'll be tens of thousands of instances being constantly created and disposed, and this feels really odd to me.
I know that as a .NET developer I shouldn't be worrying about these kinds of issues, but my experience with Java and common sense tell me that I should.
Any light and thoughts on this matter would be much appreciated. Thanks in advance.
Yes, it's true of .NET as well. Most of us have the luxury of ignoring the details of memory management, but in your case, or in cases where high volume is causing memory pressure, some optimization is called for.
One optimization you might consider for your case (something I've been thinking about writing an article about, actually) is the combination of structs and ref for real deterministic disposal.
Since you come from a C++ background, you know that in C++ you can instantiate an object either on the heap (using the new keyword and getting back a pointer) or on the stack (by instantiating it like a primitive type, i.e. MyType myType;). You can pass stack-allocated items by reference to functions and methods by telling the function to accept a reference (using the & keyword before the parameter name in your declaration). Doing this keeps your stack-allocated object in memory for as long as the method in which it was allocated remains in scope; once it goes out of scope, the object is reclaimed, the destructor is called, ba-da-bing, ba-da-boom, Bob's yer Uncle, and all done without pointers.
I used that trick to create some amazingly performant code in my C++ days -- at the expense of a larger stack and the risk of a stack overflow, naturally, but careful analysis managed to keep that risk very minimal.
My point is that you can do the same trick in C# using structs and refs. The tradeoffs? In addition to the risk of a stack overflow if you're not careful or if you use large objects, you are limited to no inheritance, and you tightly couple your code, making it less testable and less maintainable. Additionally, you still have to deal with issues whenever you use core library calls.
Still, it might be worth a look-see in your case.
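A minimal sketch of that struct-and-ref idea, using a hypothetical value-type envelope (not the Envelope class from the question):

using System;

// A value type: instances live inline on the stack (or inside their container),
// so there is no per-instance heap allocation and nothing for the GC to trace.
public struct EnvelopeValue
{
    public double Min;
    public double Max;

    public void Clear() { Min = 0; Max = 0; }
}

public static class Enveloper
{
    // Passing by ref lets the callee write into the caller's storage directly,
    // instead of allocating and returning a new instance.
    public static void Combine(ref EnvelopeValue a, ref EnvelopeValue b, ref EnvelopeValue result)
    {
        result.Clear();
        result.Min = Math.Min(a.Min, b.Min);
        result.Max = Math.Max(a.Max, b.Max);
    }
}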
This is one of those issues where it is really hard to pin down a definitive answer in a way that will help you. The .NET GC is very good at tuning itself to the memory needs of you application. Is it good enough that your application can be coded without you needing to worry about memory management? I don't know.
There are definitely some common-sense things you can do to ensure that you don't hammer the GC. Using value types is definitely one way of accomplishing this but you need to be careful that you don't introduce other issues with poorly-written structs.
For the most part however I would say that the GC will do a good job managing all this stuff for you.
I've seen too many cases where people "optimize" the crap out of their code without much concern for how well it's written or how well it works even. I think the first thought should go towards making code solve the business problem at hand. The code should be well crafted and easily maintainable as well as properly tested.
After all of that, optimization should be considered, if testing indicates it's needed.
Random advice:
Someone mentioned putting dead objects in a queue to be reused instead of letting the GC at them... but be careful, as this means the GC may have more crap to move around when it consolidates the heap, and may not actually help you. Also, the GC is possibly already using techniques like this. Also, I know of at least one project where the engineers tried pooling and it actually hurt performance. It's tough to get a deep intuition about the GC. I'd recommend having a pooled and unpooled setting so you can always measure the perf differences between the two.
Another technique you might use in C# is dropping down to native C++ for key parts that aren't performing well enough... and then use the Dispose pattern in C# or C++/CLI for managed objects which hold unmanaged resources.
Also, be sure when you use value types that you are not using them in ways that implicitly box them and put them on the heap, which might be easy to do coming from Java (a small illustration follows this list).
Finally, be sure to find a good memory profiler.
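A small illustration of the boxing point from the list above:

using System.Collections;
using System.Collections.Generic;

int i = 42;

object boxed = i;             // implicit boxing: allocates an object on the heap
var untyped = new ArrayList();
untyped.Add(i);               // boxes as well, since ArrayList stores object

var typed = new List<int>();
typed.Add(i);                 // no boxing: the generic list stores ints directly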
I have the same thought all the time.
The truth is, though we were taught to watch out for unnecessary CPU cycles and memory consumption, the cost of little imperfections in our code is usually negligible in practice.
If you are aware of that and keep an eye on it, I believe you are okay writing code that is not perfect. If you started with .NET/Java and have no prior experience in low-level programming, the chances are you will write very wasteful and inefficient code without realizing it.
And anyway, as they say, premature optimization is the root of all evil. You can spend hours optimizing one little function and then discover that some other part of the code is the real bottleneck. Just keep a balance: keep it simple, but don't be careless.
Although the garbage collector is there, bad code remains bad code. Therefore I would say yes, as a .NET developer you should still care about how many objects you create and, more importantly, about writing optimized code.
I have seen a considerable amount of projects get rejected because of this reason in Code Reviews inside our environment, and I strongly believe it is important.
.NET Memory management is very good and being able to programmatically tweak the GC if you need to is good.
I like the fact that you can create your own Dispose methods on classes by implementing IDisposable and tailoring them to your needs. This is great for making sure that connections to networks/files/databases are always cleaned up and not leaked. There is also the opposite worry of cleaning up too early.
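A minimal sketch of that pattern, with a hypothetical LogFile wrapper around a file handle:

using System;
using System.IO;

public sealed class LogFile : IDisposable
{
    private readonly StreamWriter _writer;

    public LogFile(string path)
    {
        _writer = new StreamWriter(path);
    }

    public void Write(string line)
    {
        _writer.WriteLine(line);
    }

    // Runs deterministically from a using block (or an explicit call),
    // rather than whenever the GC happens to finalize the object.
    public void Dispose()
    {
        _writer.Dispose();
    }
}

// usage:
using (var log = new LogFile("app.log"))
{
    log.Write("connected");
}   // Dispose has run here, even if an exception was thrown inside the block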
You might consider writing a set of object caches. Instead of creating new instances, you could keep a list of available objects somewhere. It would help you reduce the amount of garbage collection work.
I agree with all points said above: the garbage collector is great, but it shouldn't be used as a crutch.
I've sat through many wasted hours in code reviews debating the finer points of the CLR. The best answer is to develop a culture of performance in your organization and actively profile your application using a tool. Bottlenecks will appear, and you can address them as needed.
I think you answered your own question -- if it becomes a problem, then yes! I don't think this is a .Net vs. Java question, it's really a "should we go to exceptional lengths to avoid doing certain types of work" question. If you need better performance than you have, and after doing some profiling you find that object instantiation or garbage collection is taking tons of time, then that's the time to try some unusual approach (like the pooling you mentioned).
I wish I were a "software legend" and could speak of this in my own voice and breath, but since I'm not, I rely upon SL's for such things.
I suggest the following blog post by Andrew Hunter on .NET GC would be helpful:
http://www.simple-talk.com/dotnet/.net-framework/understanding-garbage-collection-in-.net/
Even beyond performance aspects, the semantics of a method which modifies a passed-in mutable object will often be cleaner than those of a method which returns a new mutable object based upon an old one. The statements:
munger.Munge(someThing, otherParams);
someThing = munger.ComputeMungedVersion(someThing, otherParams);
may in some cases behave identically, but while the former does one thing, the latter will do two, equivalent to:
someThing = someThing.Clone(); // Or duplicate it via some other means
munger.Munge(someThing, otherParams);
If someThing is the only reference, anywhere in the universe, to a particular object, then replacing it with a reference to a clone will be a no-op, and so modifying a passed-in object will be equivalent to returning a new one. If, however, someThing identifies an object to which other references exist, the former statement would modify the object identified by all those references, leaving all the references attached to it, while the latter would cause someThing to become "detached".
Depending upon the type of someThing and how it is used, its attachment or detachment may be moot issues. Attachment would be relevant if some object which holds a reference to the object could modify it while other references exist. Attachment is moot if the object will never be modified, or if no references outside someThing itself can possibly exist. If one can show that either of the latter conditions will apply, then replacing someThing with a reference to a new object will be fine. Unless the type of someThing is immutable, however, such a demonstration would require documentation beyond the declaration of someThing, since .NET provides no standard means of annotating that a particular reference will identify an object which, despite being of a mutable type, nobody is allowed to modify, nor of annotating that a particular reference should be the only one anywhere in the universe that identifies a particular object.