My app uses Entity Framework. As I perform operations on the Context, such as inserts/deletes/updates, I'm sure that, due to its unit-of-work behavior, it occupies more and more memory as these operations take place.
My question here is: Is there a way to find out how much memory the context is holding at a given moment?
Details:
No Lazy Loading being used
No Proxy Creation
EF 4
Needing to monitor the memory and CPU usage of objects within an application or service is synonymous with needing a profiler.
There are many options here; just search for ".NET profiler" on Google.
Note I'm redirecting you to Google to avoid spam.
Answering #Servy's concern:
Profilers are fine for a debugging tool during development. It's not really something that you can use during the execution of the program outside of development. I get the impression that that's what he's asking about.
That's when the requirement to implement some kind of load testing arises. The OP should implement test cases that mimic real-world scenarios, to get stats as close as possible to the actual production execution of the same code.
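That said, if you just want a rough number without a profiler: there is no built-in API that reports how much memory an ObjectContext holds, but you can count the entries it is tracking and sample the managed heap around a unit of work. A minimal sketch, assuming an EF 4 ObjectContext and accepting that the heap figure is process-wide, so only a trend indicator:

using System;
using System.Data;            // EntityState (EF 4)
using System.Data.Objects;    // ObjectContext / ObjectStateManager
using System.Linq;

static class ContextDiagnostics
{
    public static void ReportContextLoad(ObjectContext context)
    {
        // Number of entities the unit of work is currently tracking
        int tracked = context.ObjectStateManager
            .GetObjectStateEntries(EntityState.Added | EntityState.Modified |
                                   EntityState.Deleted | EntityState.Unchanged)
            .Count();

        // Total managed heap after a full collection; process-wide,
        // not per-context, so treat it only as a trend indicator
        long totalBytes = GC.GetTotalMemory(true);

        Console.WriteLine("Tracked entities: {0}, managed heap: {1:N0} bytes",
                          tracked, totalBytes);
    }
}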
Related
I'm working on an existing large enterprise application. This application has a small asynchronous method framework built into its ViewModel base class. These async methods are similar to APM and the event-based asynchronous pattern; little bits from both established patterns were borrowed.
I've been assigned to profile the performance of a particularly slow view in the application. I have been given a license to Redgate ANTS Performance Profiler for the job.
From what I have read today, ANTS is normally capable of linking async/await calls to the actual work done. However, since the application I am working on does not follow the async/await pattern, I believe I am missing out on this automatic linking of async calls to their execution and their completion handlers.
The actual work being performed is done by a service that is central to the application, so there are hundreds of things that are causing this service to perform work, constantly.
Because of this, what ANTS shows me is that the worker method is extremely slow, but it gives me zero feedback on what inside the view is actually causing this slow work to be done.
I spoke to a coworker about this problem, and he told me this is why he doesn't bother with performance profilers. What he would do instead is put time-stamped logging calls all over the view and then write a quick-and-dirty tool to filter the data into something consumable by a human. But this is pretty much exactly what the profiler should be doing for me.
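For what it's worth, a minimal sketch of that kind of quick-and-dirty instrumentation, assuming you can touch the call sites; the TraceTimer name, the log format, and the service call in the usage comment are all made up:

using System;
using System.Diagnostics;

// Hypothetical helper: stamp an operation at start and completion with a
// shared correlation id so begin/end pairs can be matched up later by a
// small filtering tool.
public static class TraceTimer
{
    public static Guid Begin(string operation)
    {
        Guid id = Guid.NewGuid();
        Trace.WriteLine(string.Format("{0:O}\tBEGIN\t{1}\t{2}",
            DateTime.UtcNow, id, operation));
        return id;
    }

    public static void End(Guid id, string operation)
    {
        Trace.WriteLine(string.Format("{0:O}\tEND\t{1}\t{2}",
            DateTime.UtcNow, id, operation));
    }
}

// Usage around an async call and its completion handler:
//   Guid id = TraceTimer.Begin("LoadCustomersView");
//   service.BeginLoadCustomers(result =>
//   {
//       TraceTimer.End(id, "LoadCustomersView");
//       // ... handle result ...
//   });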
We talked about this for a while and concluded that for a tool to be effective with Async calls, it would either have to support a specific standard, or it would have to support something in the actual code, perhaps such as an attribute, that allows you to mark the async call and the completion handler.
Do you agree with what I've said here? If so, are there any such performance profilers for .NET that have custom attributes to annotate your problematic code with for profiling? If not, could you please enlighten me as to how I can interpret this data to determine the actual cause of the issue?
Thank you for any help.
In which situations are CERs useful? I mean, real-life situations, not some abstract examples.
Do you personally use them? I haven't seen them used except in examples in books and articles. That, for sure, could be because of my insufficient programming experience. So I am also interested in how widespread the technique is.
What are the pros and cons for using them?
In which situations are CERs useful? I mean, real-life situations, not some abstract examples.
When building software that has stringent reliability requirements. Database servers, for example, must not leak resources, must not corrupt internal data structures, and must keep running, period, end of story, even in the face of godawful scenarios like thread aborts.
Building managed code that cannot possibly leak, that maintains consistent data structures when aborts can happen at arbitrary places, and keeps the service going is a difficult task. CERs are one of the tools in the toolbox for building such reliable services; in this case, by restricting where aborts can occur.
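To make the mechanism concrete, here is a minimal sketch of a CER protecting a cleanup path; the class and method names are placeholders, not code from any real service:

using System.Runtime.CompilerServices;
using System.Runtime.ConstrainedExecution;

public class HandleOwner
{
    public void DoWork()
    {
        RuntimeHelpers.PrepareConstrainedRegions();
        try
        {
            // A thread abort can still interrupt the try block...
        }
        finally
        {
            // ...but the CLR prepares this block up front, and asynchronous
            // aborts are delayed until it completes, so the cleanup cannot
            // be skipped halfway through.
            ReleaseHandleSafely();
        }
    }

    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    private void ReleaseHandleSafely()
    {
        // cleanup that allocates nothing and cannot fail
    }
}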
One can imagine other services that must stay reliable in difficult circumstances. Software that, say, finds efficient routes for ambulances, or moves robot arms around in factories, has higher reliability constraints than your average end user code running on a desktop machine.
Do you personally use them?
No. I build compilers that run on end-user machines. If the compiler fails halfway through a compilation, that's unfortunate but it is not likely to have a human life safety impact or result in the destruction of important data.
I am also interested how wide-spread technique it is.
I have no idea.
What are the pros and cons for using them?
I don't understand the question. You might as well ask what the pros and cons of a roofing hatchet are; unless you state the task that you intend to use the hatchet for, it's hard to say what the pros and cons of the tool are. What task do you wish to perform with CERs? Once we know the task we can describe the pros and cons of using any particular tool to accomplish that task.
I've been badly let down and have received an application that in certain situations is at least 100 times too slow, which I have to release to our customers very soon (a matter of weeks).
Through some very simple profiling I have discovered that the bottleneck is its use of .NET Remoting to transfer data between a Windows service and the graphical front-end - both running on the same machine.
Microsoft guidelines say "Minimize round trips and avoid chatty interfaces": write
MyComponent.SaveCustomer("bob", "smith");
rather than
MyComponent.FirstName = "bob";
MyComponent.LastName = "smith";
MyComponent.SaveCustomer();
I think this is the root of the problem in our application. Unfortunately calls to MyComponent.* (the profiler shows that 99.999% of the time is spent in such statements) are scattered liberally throughout the source code and I don't see any hope of redesigning the interface in accordance with the guidelines above.
Edit: In fact, most of the time the front-end reads properties from MyComponent rather than writes to it. But I suspect that MyComponent can change at any time in the back-end.
I looked to see if I can read all properties from MyComponent in one go and then cache them locally (ignoring the change-at-any-time issue above), but that would involve altering hundreds of lines of code.
My question is: Are there any 'quick-win' things I can try to improve performance?
I need at least a 100-times speed-up. I am a C/C++/Delphi programmer and am pretty much unfamiliar with C#/.NET/Remoting other than what I have read up on in the last couple of days. I'm looking for things that can be completed in a few days - a major restructuring of the code is not an option.
Just for starters, I have already confirmed that it is using BinaryFormatter.
(Sorry, this is probably a terrible question along the lines of 'How can I feasibly fix X if I rule out all of the feasible options'… but I'm desperate!)
Edit 2
In response to Richard's comment below: I think my question boils down to:
Is there any setting I can change to reduce the cost of a .NET Remoting round-trip when both ends of the connection are on the same machine?
Is there any setting I can change to reduce the number of round-trips - so that each invocation of a remote object property doesn't result in a separate round-trip? And might this break anything?
Under .NET Remoting you have three ways of communicating: HTTP, TCP, and IPC. If the communication is on the same PC, I suggest using IPC channels; it will speed up your calls.
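A minimal sketch of registering IPC channels on both ends; the port name is made up, and MyComponent stands in for whatever MarshalByRefObject-derived type you currently expose:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

static class IpcSetup
{
    // Call from the Windows service's startup code
    public static void RegisterServer()
    {
        IpcChannel serverChannel = new IpcChannel("MyAppIpcPort"); // port name is an assumption
        ChannelServices.RegisterChannel(serverChannel, false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(MyComponent),          // must derive from MarshalByRefObject
            "MyComponent.rem",
            WellKnownObjectMode.Singleton);
    }

    // Call from the front-end's startup code, using the same port name
    public static MyComponent ConnectClient()
    {
        IpcChannel clientChannel = new IpcChannel();
        ChannelServices.RegisterChannel(clientChannel, false);
        return (MyComponent)Activator.GetObject(
            typeof(MyComponent), "ipc://MyAppIpcPort/MyComponent.rem");
    }
}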
In short, no there are no quick wins here. Personally I would not make MyComponent (as a DTO) a MarshalByRefObject (which is presumably the problem), as those round trips are going to cripple you. I would keep it as a regular class, and just move a few key methods to pump them around (i.e. have a MarshalByRef manager/repository/etc class).
That should reduce round trips. If you still have problems then it is probably bandwidth related, which is easier to fix, for example by changing the serializer. protobuf-net allows you to do this easily by simply implementing ISerializable and forwarding the two methods (one from the interface, plus the constructor) to ProtoBuf.Serializer - it then does all the work for you, and works with remoting. I can provide examples of this if you like.
Actually, protobuf-net may help with CPU usage too, as it is a much more CPU-efficient serializer.
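For illustration, a sketch of the forwarding described above, assuming a plain DTO; the CustomerDto type and its members are made up:

using System;
using System.Runtime.Serialization;
using ProtoBuf;

[Serializable, ProtoContract]
public class CustomerDto : ISerializable
{
    [ProtoMember(1)] public string FirstName { get; set; }
    [ProtoMember(2)] public string LastName { get; set; }

    public CustomerDto() { }

    // Deserialization constructor: let protobuf-net rebuild the instance
    protected CustomerDto(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }

    // Serialization: let protobuf-net write the instance
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }
}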
Could you make MyComponent a class that will cache the values and only submit them when SaveCustomer() is called?
You can try compressing the traffic. If not a 100-times increase, you'll still gain some performance benefit.
If you need the latest data (always see the real value), and the cost of getting the data each time dominates the runtime then you need to be radical.
How about changing from polling to push? Rather than calling the remote side each time you need a value, have the remote side push all changes and cache the latest values locally.
Local lookups (after the initial get) are always up to date with all remoting overhead being done in the background (on another thread). Just be careful about thread safety for non-atomic types.
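A minimal sketch of the local cache that the pushed changes would feed, with a lock to keep reads of non-atomic types safe; the class and member names are made up:

using System.Collections.Generic;

// Hypothetical local cache: the remote side pushes changed values into it
// (e.g. via a callback on a background thread), and the UI reads the latest
// value locally instead of making a remoting round trip per property read.
public class PropertyCache
{
    private readonly object _sync = new object();
    private readonly Dictionary<string, object> _values =
        new Dictionary<string, object>();

    // Called from the background/remoting thread when the server pushes a change
    public void Update(string name, object value)
    {
        lock (_sync) { _values[name] = value; }
    }

    // Called from the UI thread; the lock keeps reads of non-atomic types safe
    public object Get(string name)
    {
        lock (_sync)
        {
            object value;
            return _values.TryGetValue(name, out value) ? value : null;
        }
    }
}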
I have written various C# console-based applications, some long running and some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects, by type, that are contributing to this footprint, and which of those are f-reachable and which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required, and that I am disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method after I have released a large number of objects (for example, held in a collection which has just been cleared). However, I am not so sure that this made much difference, so I threw that code away.
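As a rough first check before reaching for a profiler, you can sample the managed heap around the point where you drop such a collection; a minimal sketch, where the List<byte[]> parameter stands in for whatever you cleared:

using System;
using System.Collections.Generic;

static class MemoryCheck
{
    // Rough check of whether clearing a large collection actually frees memory
    public static void ReportReclaimed(List<byte[]> largeCollection)
    {
        long before = GC.GetTotalMemory(false);

        largeCollection.Clear();   // drop the references the collection was holding

        // Force a full, blocking collection - diagnostics only, don't ship this
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        long after = GC.GetTotalMemory(false);
        Console.WriteLine("Reclaimed approximately {0:N0} bytes", before - after);
    }
}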
I am guessing that there are tools in the Sysinternals suite that can help resolve these memory questions, but I am not sure which ones or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace, but I need to make sure that I've explored the free options first before going cap in hand to my manager.
There is the CLR Profiler that lets you review various object graphs (I've never used it):
http://www.microsoft.com/downloads/details.aspx?FamilyID=86ce6052-d7f4-4aeb-9b7a-94635beebdda&displaylang=en
Of course, ANTS Profiler (free trial) is an often-recommended profiler. You shouldn't need to garbage collect manually; the GC is a very intelligently built solution that will likely be more effective than any manual calls you make. A better approach is to be minimalist with the number of objects you keep in memory - and be rid of memory-heavy objects as soon as possible if memory is a priority.
I have written a WinForms application in C#. How can I check the performance of my code? By that I mean, how can I check which form references are active at a given time or event, so that I can remove them if they are not required (making them available for garbage collection)? Is there a way to do this using VS 2005 or any free tool? Any tutorials or guides would be useful.
[Edit] Sorry if my question is confusing. I am not looking for a professional tool, but ways to know/understand the working of my code better and code more efficiently.
Thanks
Making code efficient is always a secondary step for me. First I write the code so that it works. Next, I profile it if I am unhappy with the performance. The truth is that most applications run fast enough after being written the first time. Sometimes, though, better performance is needed. Performance can be gained in many different ways; it all depends on your application. I write LOB apps mainly, so I deal with a lot of IO to databases, services and storage. These calls are all very expensive and need to be limited, so they are my first area to optimize. I optimize by lazy-loading, eager-loading, batching calls, making less frequent calls and so on. I recently had a WinForms app that created hundreds of controls dynamically and it took a long time; that's another bottleneck that I have to address. I use a profiler to measure the performance of my applications.
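As one small illustration of the lazy-loading idea, a sketch that defers an expensive load until a property is first read; the types here are made up, not from any particular app:

using System;
using System.Collections.Generic;

public class Order { /* placeholder domain type */ }

// The expensive database call is deferred until Orders is first read,
// and the result is cached afterwards (lazy-loading).
public class CustomerViewModel
{
    private readonly Lazy<IList<Order>> _orders;

    public CustomerViewModel(Func<IList<Order>> loadOrders)
    {
        _orders = new Lazy<IList<Order>>(loadOrders);
    }

    public IList<Order> Orders
    {
        get { return _orders.Value; }
    }
}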
Use the free EQATEC profiler. It will show you how long calls take and how many times a call is made. The profiler gives a nice report and visual display that lets you drill down into the call stacks.
Red Gate Performance Profiler
...it's been said here a million times before. If you suspect performance issues, profile your application. It will tell you how long calls are taking and point out the bottlenecks in your code.
Kobra,
What you're looking for is called a memory profiler. There happens to be one (paid) tool for .NET aptly named ".NET Memory Profiler". I've not used it extensively, but it should answer the questions you're asking. There are a few other ones that will do basically the same thing, like giving you instance counts of loaded types, and help you identify when instances are not being garbage collected for one reason or another (i.e. event handler references, static properties, etc.).
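As a concrete example of the event-handler case, here is a sketch of the kind of reference such a profiler typically surfaces; the names are made up:

using System;
using System.Windows.Forms;

public static class AppEvents
{
    // Static event: every subscriber stays rooted for the lifetime of the app
    public static event EventHandler SettingsChanged;
}

public class DetailsForm : Form
{
    public DetailsForm()
    {
        // Subscribing ties this form's lifetime to the static event...
        AppEvents.SettingsChanged += OnSettingsChanged;

        // ...so unsubscribe when the form closes, otherwise the instance can
        // never be garbage collected even after Close()/Dispose().
        FormClosed += delegate { AppEvents.SettingsChanged -= OnSettingsChanged; };
    }

    private void OnSettingsChanged(object sender, EventArgs e)
    {
        // refresh the UI
    }
}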
Hope this helps,
Dylan