C#: Changing AssemblyVersion to an Earlier Version Affects Performance

I've been looking into this bug for a week or so, and after all my attempts to figure out how to fix it, I still have not done so. The issue is the one in the title of this post: when I go into my C# project and increment the AssemblyVersion to a newer version, I get a small performance boost; when I decrement the AssemblyVersion, I get a large, noticeable performance boost.
I am currently working on a 64-bit machine with an AMD processor, and if I switch to an Intel machine (also 64-bit), the issue does not occur. The project I am working on relies on 2 or 3 Microsoft DLLs and 1 DLL (built for x86) created by someone who worked on this project in the past and whom I cannot contact.
In Visual Studio 2013 I analyzed the performance when the assembly version number was incremented, decremented, and kept the same. From what I could see, the application was, on average, using fewer threads in the slower version and having more collisions.
I will be honest and say that I am picking up a project someone else was working on, and I am definitely a C# amateur. Because of this I have spent the past week or so researching exactly what AssemblyVersion and AssemblyFileVersion are used for, how AssemblyInfo.cs is built with the project, and how DLLs are built with the project, though I am still a little hazy on the details. Here are a few places I have looked in my research (see also the sketch after this list):
DLL hell
Differences between assemblyversion assemblyfileversion and assemblyinformationalversion
Process Interoperability
Cross assembly causes performance hit
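For reference, this is roughly what the relevant lines in AssemblyInfo.cs look like (the version numbers here are placeholders, not the project's real ones):

using System.Reflection;

[assembly: AssemblyVersion("1.2.0.0")]     // the identity the CLR uses when binding to referenced assemblies
[assembly: AssemblyFileVersion("1.2.0.7")] // informational Win32 file version shown in Explorer; not used for binding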
I have also run the Visual Studio Performance and Diagnostics tool to try to visualize the CPU load graphically and to see the number of times a function was called during the application's lifetime. In all 3 cases (increment, decrement, stay the same), the function that drives the application (a timer that fires every 50 ms) ran more often in the slower version and less often in the faster version.
I have tried rebuilding the DLL our project uses as x64 and building the project as Any CPU, but that did not work either. After that I hit a brick wall and have absolutely no idea where to look for more help or info about the problem I am encountering.
I am really sorry if this is difficult to answer from what I have given, or if anything is unclear. If anybody needs a clearer explanation, I will try to provide one. After 4 PM Eastern, though, I won't be able to reply to questions until tomorrow morning.
Thanks everyone
Edit: Performance measurements were made with the Stopwatch class (a bad idea, I know). The performance difference is noticeable in how fast the GUI refreshes the results on a page (around 3-10 messages can be displayed per second on the GUI).
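Roughly, the measurement was of this form (a minimal sketch; the timed work is a placeholder, and timing wall-clock time around GUI refreshes is exactly why it is only a rough measure):

using System;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        // ... the work being timed, e.g. handling one batch of incoming messages ...
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
    }
}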

Related

Does .Net Core compiling time decrease proportionally with the number of CPU cores?

I'm building a new PC. My main usage will be coding C# .NET Core projects. I wonder whether the added cores from a CPU such as the Intel 12th gen i7-12700F (12 cores/20 threads) would be worth the money.
Ideally, the compile time should improve almost linearly with the number of cores, as in the case of the Chromium compiling benchmarks (specifically, with the Intel 12th gen CPUs):
https://www.youtube.com/watch?v=xBDFCoGhZ4g&t=1021s
https://www.youtube.com/watch?v=2WklVah7ERo&t=561s
Edit 1:
I did search before asking this question. I'm aware of other posts such as:
https://learn.microsoft.com/en-us/visualstudio/msbuild/using-multiple-processors-to-build-projects?view=vs-2022
Does Visual Studio 2012 utilize all available CPU cores?
I know the .NET compiler can take advantage of multiple cores.
My question is specifically about how C#/.NET compile time scales with the number of CPU cores (within one CPU generation, such as Intel 12th gen), as in the C/C++/Chromium compiling benchmarks. I hope this is still a legitimate question.
Edit 2:
This question is about the compile time of one single medium-size project, not about compiling multiple projects, which certainly scales with cores.
This is probably off-topic, but questions like this exist for earlier versions of .NET, for example: Does Visual Studio 2012 utilize all available CPU cores?
First you want to make sure that multi-processor compilation is enabled: Enabling Parallel Builds
Then you want to make sure you are running Visual Studio 2022, so you can take advantage of its 64-bit process.
Then, to determine if it is worth the extra $$$, use the xkcd chart "Is It Worth the Time?":
This is actually a serious post. What improvement measure would make it worth it?
If you are compiling 50 times a day and the improvement is only 1 second per build, then it is worth spending up to one extra billable day's worth of money: the chart says that over 5 years those seconds add up to roughly 1 entire day of additional billable work (1 s × 50 × 365 × 5 ≈ 25 hours).
Let's just lowball that at $200.
Now, if you are like me, you will be compiling more than 50 times a day, and each compilation cycle can take up to 60 seconds. Having additional cores, if only so the rest of the OS, Spotify, and your web browsers can keep churning away while you wait for builds, means we will easily save at least another second on top of the VS compilation itself.
So now we have a saving in the region of $800. Even before we benchmark it, we can easily justify spending $800 extra, as long as you are happy with the 5-year investment.
Have I benchmarked these latest CPUs against VS 2022? No. But if I did, I'd very quickly talk myself into buying a new dev rig ;) If you are performing frequent compilations, then all improvements to CPU, RAM, disk, and general bus speed will help, but it is most common to find that the drive is the performance bottleneck: after all that compilation, the files need to be written somewhere. So if you are looking at the top of the range but are on a budget, I would invest in the best drive and a good CPU, rather than the other way around.

C# Performance: MS versus Mono Problems

I'm working on a fairly straight-forward (school) project. It's a job-shop scheduler. It's single-threaded, it has very limited File I/O (it reads a small problem description, then it goes to work trying to build a solution). The CPU should be the bottleneck. There is no user input/GUI.
On my machine, in release mode, without the debugger - in 3 minutes of CPU time, my PC can generate/evaluate 20,000 different schedules for a particular problem.
On a comparable *nix machine, executed with mono, in 3 minutes of CPU time, the server manages to generate/evaluate 2,000 different schedules. It's 1/10th the speed. I've compared Python performance between my machine and this particular server and the throughput was nearly identical.
The only 'system' call that I could see as being different was a call to
Process.GetCurrentProcess().TotalProcessorTime.Minutes
But removing it hasn't had any impact.
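For what it's worth, a minimal sketch of that kind of CPU-time check (simplified, not the actual code). One thing to note: TimeSpan.Minutes is only the 0-59 minutes component, while TotalMinutes is the full value, which matters if a call like this is what enforces the 3-minute budget:

using System;
using System.Diagnostics;

class CpuBudget
{
    static void Main()
    {
        TimeSpan start = Process.GetCurrentProcess().TotalProcessorTime;
        // ... generate and evaluate schedules here ...
        TimeSpan used = Process.GetCurrentProcess().TotalProcessorTime - start;

        // used.Minutes is the minutes component only (0-59);
        // used.TotalMinutes is the full CPU time expressed in minutes.
        Console.WriteLine("CPU minutes used: {0:F2}", used.TotalMinutes);
    }
}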
I've tried using
--aot -O=all
It didn't have any noticeable impact.
I've also tried to run the mono profiler against it but the results haven't been as helpful as I had hoped.
Hits % Method name
57542 37.45 /usr/bin/mono
11432 7.44 __lll_unlock_wake in /lib64/libpthread.so.0
6898 4.49 System.Linq.Enumerable:Any<jobshop2.JobTask> (System.Collections.Generic.IEnumerable`1<jobshop2.JobTask>,System.Func`2<jobshop2.JobTask, bool>)
6857 4.46 System.Collections.Generic.List`1/Enumerator<jobshop2.JobTask>:MoveNext ()
3582 2.33 pthread_cond_wait@@GLIBC_2.3.2 in /lib64/libpthread.so.0
2719 1.77 __lll_lock_wait in /lib64/libpthread.so.0
Of those top six lines - I only recognize two of them as being 'my code' that I could improve upon. In the full output I can see quite a few calls in /lib64/libpthread.so.0 that seem to deal with locking, unlocking, waiting, mutexes, and pthreads. I'm confused by this because it is not a multi-threaded application.
I'm going through the Performance page on the mono site but nothing is really jumping out at me as being a problem. I have no doubt that my code is ugly and slow, but I really wasn't expecting such a big performance drop. I'm currently trying to get Linux installed on my desktop so that I can run my app in mono on the same hardware to help eliminate that variable - but I thought someone might be able to offer some suggestions/insight.
EDIT:
It is Mono version 2.10.8:
Mono JIT compiler version 2.10.8 (tarball Sat Feb 16 11:51:56 UTC 2013)
Copyright (C) 2002-2011 Novell, Inc, Xamarin, Inc and Contributors. www.mono-project.com
TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: debugger softdebug
LLVM: supported, not enabled.
GC: Included Boehm (with typed GC and Parallel Mark)
This is a bit of an awkward answer, but I felt it was the fairest way to handle it... I can't really explain what the cause was, but I don't want to imply that Mono is horribly slow (it really isn't).
My concern was getting the program to run fast on the server. As other people have pointed out, the version of mono installed on the server was very old. I hope nobody sees my question and thinks that it reflects the current state of mono. Sadly, I am not able to update the version of mono on the server.
So, I rewrote my code to remove unnecessary computation, avoid using iterators, and limit memory allocations. My original code was creating a lot of unnecessary objects, and the objects were much larger than they needed to be. The cleanup doubled the speed on my machine and brought the server's performance to about 70% of my own (a huge improvement!).
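To illustrate the kind of change involved (a sketch with a hypothetical JobTask stand-in, not the actual class): calling Enumerable.Any on a List<T> goes through IEnumerable<T>, which boxes the list's struct enumerator and invokes a delegate per element, while a plain indexed loop does neither.

using System.Collections.Generic;
using System.Linq;

class JobTask { public bool IsScheduled; } // hypothetical stand-in for the real class

static class ScheduleChecks
{
    // Before: allocates a boxed enumerator and calls a delegate per element.
    public static bool AnyUnscheduledLinq(List<JobTask> tasks)
    {
        return tasks.Any(t => !t.IsScheduled);
    }

    // After: plain indexed loop over the List<T>, no per-call allocations.
    public static bool AnyUnscheduled(List<JobTask> tasks)
    {
        for (int i = 0; i < tasks.Count; i++)
        {
            if (!tasks[i].IsScheduled)
                return true;
        }
        return false;
    }
}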
Still, it's not fair to compare different hardware, even if previous Python programs 'seemed' to run at about the same speed. I installed Linux and, with the newest version of Mono, my revised program ran at 96% of the speed of the Windows version.
I didn't keep digging beyond that. Using the current version of mono, on the same hardware, gave me nearly identical performance. Thanks for all the suggestions, it was incredibly helpful and saved me a lot of time.
Could be a memory leak. Mono is fighting an uphill battle; Microsoft made a system and developers had to reverse-engineer most of it. If you really can't figure it out, I would try reporting the bug to the mono developers:
Bugs - Mono (http://www.mono-project.com/Bugs)
Make sure that your mono version is up to date first; 2.10 is ancient. As of now, 3.2.6 is the latest. The packaged version from a package maintainer might not be good enough; try building it from the source tarball, and using that to run your program, before reporting bugs.
If you are using wine-mono or something like that on linux, then make sure that wine and wine-mono are up to date as well.

Make .NET executable to load faster in the first time

I made a simple Windows Forms executable in C# (by simple, I mean it has about 20 methods, but it's just one window), targeting .NET Framework 2.0. When the application loads, it doesn't do anything other than the default InitializeComponent(); in the constructor.
The first time I open it, the application takes about 7 seconds to load on my Windows 8 machine. After that, it takes less than a second each time I open it.
On a Windows XP machine I tried, it takes about 30 seconds to load the first time. A few friends of mine, when testing the application, also complain that it takes a long time to load the first time (about 30 seconds too), and then it starts faster (1 or 2 seconds).
I assume this is because the .NET Framework is not loaded yet, so it takes some time to load it on their machines.
Have you ever experienced the same problem with .NET applications?
Do you have any clue why this happens and how can I fix this?
EDIT - I see some people are suggesting NGEN. However, if this needs to be done on every machine that will use the application, it can't be the solution. I want to release my application to a large public of "common users", and it makes no sense to require them to do extra steps to use my application. It's already bad enough that we require the .NET Framework. My application should be just a standalone EXE without any dependencies (except for the framework).
Thank you.
You can try pre-generating the native image using NGen, which .NET will use when your application loads.
You can find more information here - http://msdn.microsoft.com/en-GB/library/6t9t5wcf(v=vs.80).aspx
Native images are platform dependent and usually not transferable, so you'll need to do this on each machine you deploy to.
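A minimal sketch of what an installer or first-run step could do for a .NET 2.0 build (MyApp.exe, the hard-coded framework path, and the lack of error handling are all placeholder assumptions):

using System;
using System.Diagnostics;
using System.IO;

class NgenStep
{
    static void Main()
    {
        // Hypothetical post-install step: generate a native image for the
        // freshly installed EXE so the first real launch skips most JIT work.
        string ngen = Path.Combine(
            Environment.GetEnvironmentVariable("WINDIR"),
            @"Microsoft.NET\Framework\v2.0.50727\ngen.exe");

        ProcessStartInfo psi = new ProcessStartInfo(ngen, "install \"MyApp.exe\"");
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }
}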
This is most likely caused by just-in-time compilation of the CIL. You can compile your code ahead of time for the environment you are running on using NGen. However, you will most likely lose the platform-agnostic nature of .NET if you go down this route.
This blog entry on MSDN explains the performance benefits of NGen in some detail, I suggest giving it a read.
Update following comments
As Lloyd points out in his answer, if you give your users an installer NGen can be run at this point for the environment that the application is being installed on.
However, if NGen isn't an option, then I'd recommend starting your application with a profiler attached. It can highlight any performance bottlenecks in your code. If you have any singletons or expensive static initializers these will be highlighted as such and will give you the opportunity to refactor them.
There are many great .Net profilers out there (see this SO question for details), personally I'd recommend having a look at dotTrace - they also offer a free trial period for a month which may be all that's required for your application.
[...] targeting .NET Framework 2.0. When the application loads, it doesn't do anything other than the default InitializeComponent(); in the constructor.
Actually, that's not true. An application also loads types, runs static field initializers and static constructors, and so on.
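As a hedged illustration (a hypothetical type, not taken from the application in question), this is the kind of hidden work that can run before the first window ever appears:

using System;
using System.Threading;

class StartupCosts
{
    // The static constructor runs the first time the type is touched,
    // before any code that uses it. Thread.Sleep stands in for expensive
    // one-time work such as reading configuration or reflection-heavy setup.
    static StartupCosts()
    {
        Thread.Sleep(2000);
    }

    public static void Touch() { }
}

class Program
{
    static void Main()
    {
        Console.WriteLine("before");
        StartupCosts.Touch(); // the 2-second static-constructor cost is paid here
        Console.WriteLine("after");
    }
}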
In most cases when I have performance issues, I simply use a profiler... There can be a lot going on and a profiler is the easiest way to get some insights. There are some different options available; personally, I'm a fan of Red-Gate's profilers and they have a trial you can use.
It's worth noting that the way this works has changed across versions of the .NET Framework. If you cannot get the performance you want on 2.0, I'd simply try a newer framework version. Granted, Windows XP might be a limiting factor there...

Not enough storage is available to process this command in Visual Studio 2008

When I try to compile an assembly in VS 2008, I occasionally get (usually after 2-3 hours of work on the project) the following error:
Metadata file '[name].dll' could not be opened --
'Not enough storage is available to process this command.'
Usually, to get rid of it, I need to restart Visual Studio.
The assembly I need to use in my project is big (> 70 MB), and this is probably the reason for the bug; I've never seen anything like this in my previous projects. OK, if this is the reason, my question is why this happens and what I need to do to stop it.
I have plenty of free space on my drives and 2 GB of RAM (only ~1.2 GB is utilized when the exception happens).
I googled for answers to questions like this.
Suggestions usually relate to:
the number of user handles, which is limited in WinXP...
the physical limit of memory available per process
I don't think either could explain my case.
As for user handles and other GUI resources - I don't think these could be the problem. The big 70 MB assembly is actually GUI-less code that works with sockets and implements parsers for proprietary protocols. In my current project I have only 3 GUI forms, with fewer than 100 GUI controls in total.
I suppose my case is closer to the fact that in Windows XP the process address space is limited to 2 GB of memory (and, taking memory segmentation into account, it is possible that I don't have a free segment large enough for the allocation).
However, it is hard to believe that segmentation could get that bad after just 2-3 hours of working with the project in Visual Studio. Task Manager shows that VS consumes about 400-500 MB (OM + VM). During compilation, VS only needs to load metadata.
Well, there are a lot of classes and interfaces in that library, but I would still expect 1-2 MB to be more than enough to hold the metadata the compiler uses to find all the public classes and interfaces (though that is only my guess; I don't know exactly what happens inside the CLR when it loads assembly metadata).
In addition, I would say the assembly is only so big because it is a C++/CLI library that has other unmanaged libraries statically linked into one DLL. I estimated (using Reflector) that the .NET (managed) code is approximately 5-10% of this assembly.
Any ideas how to determine the real reason for this bug? Are there any restrictions or recommendations regarding .NET assembly size? (Yes, I know it is worth thinking about refactoring and splitting the big assembly into several smaller pieces, but it is a 3rd-party component and I can't rebuild it.)
The error is misleading. It really should say "A large enough contiguous space in virtual memory could not be found to perform the operation". Over time, allocations and deallocations of virtual memory space lead to it becoming fragmented. This can lead to situations where a large allocation cannot be satisfied despite there being plenty of total space available.
I think this is what your "segmentation" is referring to. Without knowing all the details of everything else that needs to load, and the other activity that fills the 2-3 hour period, it is difficult to say whether this really is the cause. However, I would not put it in the category of unlikely; in fact, it is the most likely cause.
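As a rough illustration of that failure mode (the numbers are arbitrary), a single large allocation needs one contiguous block of address space, so in a 32-bit process it can fail even when the total amount of free memory looks ample:

using System;

class FragmentationDemo
{
    static void Main()
    {
        try
        {
            // In a 32-bit process (2 GB of address space) this can throw
            // OutOfMemoryException once the address space is fragmented,
            // even though Task Manager reports far more than 900 MB free.
            byte[] big = new byte[900 * 1024 * 1024];
            Console.WriteLine("Allocated {0} MB in one block", big.Length / (1024 * 1024));
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("No contiguous block of that size was available.");
        }
    }
}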
In my case the following fix helped:
http://confluence.jetbrains.net/display/ReSharper/OutOfMemoryException+Fix
As Anthony pointed out, the error message is a bit misleading. The issue is less about how big your assembly is and more about how much contiguous memory is available.
The problem is likely not really the size of your assembly. It's much more likely that something inside Visual Studio is fragmenting memory to the point that a build cannot complete. The usual suspects for this type of problem are:
Too many projects in the solution.
Third-party add-ins.
If you have more than, say, 10 projects in the solution, try breaking up the solution and see if that helps.
If you have any 3rd-party add-ins, try disabling them one at a time and see if the problem goes away.
I am getting this error on one of my machines and, surprisingly, the problem is not seen on other dev machines. Maybe something is wrong with the VS installation.
But I found an easier solution.
If I delete the solution's .suo file and re-open the solution, it starts working smoothly again.
Hope this will be useful for somebody in distress..
If you are just interested in making it work, restart your computer and it will work like a charm. I had the same kind of error in my application, and after reading all of the answers here at Stack Overflow, I decided to restart my computer first, before making any other modifications. It saved me a lot of time.
Another cause of this problem can be using too many typed datasets via the designer, or other types that can be instantiated via a designer, like lots of data-bound controls on lots of forms.
I imagine you're the sort of hardcore programmer, though, who wouldn't drag 'n' drop a DS! :D
In relation to your problem, Bogdan, have you tried to reproduce it without your C++ component loaded? If you can't, then maybe that's the cause. How are you loading the component? Have you tried other techniques like late binding, etc.? Any difference?
Additional:
Yes, you are right, the other culprit is lots of controls on forms. I once saw this same issue with a dev who had ported a very large VB6 app over to .NET; he had literally hundreds of forms and would get periodic crashes of the IDE after a couple of hours. I'm pretty sure it was thread exhaustion. It might be worth setting up a vanilla box with no add-ins loaded just to rule add-ins out, but my guess is you are simply hitting the wall in terms of the combined limitations of VS and your box specs. Try running 64-bit Windows Vista and installing some extra RAM modules.
If memory usage and VM size are small for devenv:
Explicitly kill ALL running instances of devenv.exe.
I had 3 devenv.exe processes running even though only two instances of Visual Studio were open in front of me.
That was the solution in my case.
I know it has been a long time since this was commented on, but I ran into this exact issue today with a Telerik DLL in VS 2010. I had never seen this issue before today, when I was making some settings changes in IE.
There is a setting under Tools / Folder Options / View, in the Files and Folders section, called "Launch folder windows in a separate process".
I am not sure how much memory is used for each window with this setting, but until today I had never had it checked. After checking this option for miscellaneous reasons, I started getting "Not enough storage is available to process this command". The Telerik DLL is an 18 MB DLL that we reference from the library folder in our project.
Unchecking this resolved the problem.
Just passing along as another possible solution
I also faced the same problem.
Make sure that the Windows OS is 64-bit.
I switched from 32-bit Windows to 64-bit Windows and the problem was solved.
I had this same issue, and in my case the exception message was very misleading: the actual problem was that the DLL couldn't be loaded at all due to an invalid path.
I used the DllImport attribute in a C# ASP.NET application with a declaration like the one below, and it was causing the exception:
[DllImport(@"Calculation/lib/supplier/SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
Below is working code snippet:
[DllImport(@"Calculation\lib\supplier\SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
The actual problem was using forward slashes in the path instead of backslashes. This cost me way too much time to figure out; I hope this helps others.

What is the best way to improve ASP.NET/C# compilation speed?

UPDATE: Focus your answers on hardware solutions please.
What hardware/tools/add-ins are you using to improve ASP.NET compilation and first-execution speed? We are looking at solid-state drives to speed things up, but the prices are really high right now.
I have two 7200rpm harddrives in RAID 0 right now and I'm not satisfied with the performance anymore.
So my main question is what is the best cost effective way right now to improve ASP.NET compilation speed and overall development performance when you do a lot of debugging?
Scott Gu has a pretty good blog post about this, anyone has anything else to suggest?
http://weblogs.asp.net/scottgu/archive/2007/11/01/tip-trick-hard-drive-speed-and-visual-studio-performance.aspx
One of the important things to do is keep the projects for assemblies that don't change often unloaded. When a change occurs, load the project, compile, and unload it again. This makes a huge difference in large solutions.
First, make sure that you are using Web Application Projects (WAP). In our experience, compared to Website Projects, a WAP compiles roughly 10x faster.
Then consider migrating all the logic (including complex UI components) into separate library projects. The C# compiler is way faster than the ASP.NET compiler (at least for VS 2005).
If you have lots of 3rd-party "Referenced Assemblies", ensuring that CopyLocal=False on all projects except the web application project makes quite a big difference.
I blogged about the perf increases I managed to get by making this simple change:
Speeding Up your Desktop Build - Part 1
You can precompile the site, which will make the first run experience better
http://msdn.microsoft.com/en-us/library/ms227972.aspx
I would recommend adding as much memory to your PC as you can. If your solutions are very large, you may want to explore 64-bit so that your machine can address more than 3 GB of RAM.
Also, Visual Studio 2008 SP1 seems to be markedly faster than previous versions; make certain you are running the latest release with the service packs.
Good Luck!
If you are looking purely at hardware, you will need to search for benchmarks around the web. There are a few articles written just on the effect of hardware on Visual Studio compilation performance; I am just too lazy to find and link them.
Hardware solutions can be endless, because you can get some really high-end equipment if you have the money. Otherwise it's the usual: more memory, a faster processor, and a faster hard drive. Moving to a 64-bit OS also helps.
If you want to be even more specific, just off the top of my head: 8 GB or more of memory, and if you can't afford solid-state drives, I'd go for 10K to 15K RPM hard drives. There is a debate over whether quad-core makes any difference, but look up the benchmarks and see what works for you.
If you are looking at just hardware: multi-core processors, a large amount of RAM, and ideally 10K or 15K RPM hard drives.
I personally have noticed a huge improvement in performance with the change to 10K RPM drives.
The best way to improve ASP.NET compile time is to throw more hardware at it. An OCZ Vertex Turbo SSD drive and an Intel i7 960 gave me a huge boost. You can see my results here.
I switched from websites to web applications. Compile time went down by a factor of ten at least.
Also, I try not to use the debugger if possible ("Run without debugging"). This cuts down the time it takes to start the web application.
