Make a .NET executable load faster the first time - C#

I made a simple Windows Forms executable in C# (by simple, I mean it has about 20 methods, but it's just one window), targeting .NET Framework 2.0. When the application loads, it doesn't do anything other than the default InitializeComponent(); in the constructor.
The first time I open it, the application takes about 7 seconds to load on my Windows 8 machine. On subsequent runs it takes less than a second.
On a Windows XP machine I tried, it takes about 30 seconds to load the first time. A few friends of mine, when testing the application, also complain that it takes a long time to load the first time (about 30 seconds too). After that it is faster (1 or 2 seconds).
I assume this might be because the .NET Framework is not loaded yet, so it takes some time to load on their machines.
Have you ever experienced the same problem with .NET applications?
Do you have any clue why this happens and how I can fix it?
EDIT - I see some people are suggesting NGEN. However, if this needs to be done on every machine that will use the application, it can't be the solution. I want to release my application to a large audience of "common users", and it makes no sense to require them to do extra work to use my application. It's already bad enough that we require the .NET Framework. My application should be a standalone EXE without any dependencies (except for the framework).
Thank you.

You can try pre-generating the native image using NGEN, which .NET will use when your application loads.
You can find more information here - http://msdn.microsoft.com/en-GB/library/6t9t5wcf(v=vs.80).aspx
These images are platform dependent and usually not transferable, so you'll need to do this on each machine you deploy to.

This is most likely caused by Just-In-Time compilation of the CIL. You can compile your code for the environment that you are running on using NGen. However, you will most likely lose the platform-agnostic nature of .NET if you go down this route.
This blog entry on MSDN explains the performance benefits of NGen in some detail; I suggest giving it a read.
Update following comments
As Lloyd points out in his answer, if you give your users an installer, NGen can be run at that point for the environment that the application is being installed on.
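To illustrate, here is a minimal sketch (the class, method and install path below are placeholders, not from the original answer) of a custom action an installer could run on the target machine to queue native-image generation; it assumes the 32-bit .NET 2.0 runtime location:
using System;
using System.Diagnostics;
using System.IO;

// Hypothetical installer helper: generates a native image for the freshly
// installed executable so its first real launch skips most of the JIT work.
internal static class NgenHelper
{
    public static void InstallNativeImage(string installedExePath)
    {
        // Location of the 32-bit .NET 2.0 ngen.exe; adjust for the framework/bitness you target.
        string ngenPath = Path.Combine(
            Environment.GetEnvironmentVariable("WINDIR"),
            @"Microsoft.NET\Framework\v2.0.50727\ngen.exe");

        ProcessStartInfo psi = new ProcessStartInfo(ngenPath, "install \"" + installedExePath + "\"");
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;

        using (Process ngen = Process.Start(psi))
        {
            ngen.WaitForExit(); // ngen needs the elevated rights an installer normally runs with
        }
    }
}
An uninstaller would run ngen.exe with the corresponding uninstall argument for the same path.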
However, if NGen isn't an option, then I'd recommend starting your application with a profiler attached. It can highlight any performance bottlenecks in your code. If you have any singletons or expensive static initializers these will be highlighted as such and will give you the opportunity to refactor them.
There are many great .NET profilers out there (see this SO question for details); personally I'd recommend having a look at dotTrace - they also offer a free trial for a month, which may be all that's required for your application.

[...] targeting .NET Framework 2.0. When the application loads, it doesn't do anything other than the default InitializeComponent(); in the constructor.
Actually that's not true. An application also loads types, runs static constructors, and so on.
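As a hand-rolled illustration of that point (the type and member names are invented, not from the question): static field initializers and static constructors run the first time the type is used, i.e. before InitializeComponent() ever gets a chance:
using System.Windows.Forms;

public class MainForm : Form
{
    // Runs the first time MainForm is used, before any instance constructor code.
    private static readonly string[] LookupTable = LoadLookupTable();

    static MainForm()
    {
        // Anything expensive here (file I/O, registry reads, ...) is paid at startup.
    }

    private static string[] LoadLookupTable()
    {
        return new string[0]; // placeholder for work that is easy to overlook
    }

    public MainForm()
    {
        InitializeComponent();
    }

    private void InitializeComponent()
    {
        // designer-generated code in the real application
    }
}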
In most cases when I have performance issues, I simply use a profiler... There can be a lot going on and a profiler is the easiest way to get some insights. There are some different options available; personally, I'm a fan of Red-Gate's profilers and they have a trial you can use.
It's noteworthy that the way this works has changed across .NET Framework versions. If you cannot get the performance you want on 2.0, I'd simply try a newer framework version. Granted, Windows XP might be a limiting factor there...


I've found a bug in the JIT/CLR - now how do I debug or reproduce it?

I have a computationally-expensive multi-threaded C# app that seems to crash consistently after 30-90 minutes of running. The error it gives is
The runtime has encountered a fatal error. The address of the error was at 0xec37ebae, on thread 0xbcc. The error code is 0xc0000005. This error may be a bug in the CLR or in the unsafe or non-verifiable portions of user code. Common sources of this bug include user marshaling errors for COM-interop or PInvoke, which may corrupt the stack.
(0xc0000005 is the error-code for Access Violation)
My app does not invoke any native code, or use any unsafe blocks, or even any non-CLS compliant types like uint. In fact, the line of code that the debugger says caused the crash is
overallLength += distanceTravelled;
Where both values are of type double
Given all this, I believe the crash must be due to a bug in the compiler or CLR or JIT. I'd like to figure out what causes it, or at the very least write a smaller reproduction to send into Microsoft, but I have no idea where to even begin. I've never had to view the CIL-binary, or the compiled JIT output, or the native stacktrace (there is no managed stacktrace at the time of the crash), so I'm not sure how. I can't even figure out how to view the state of all the variables at the time of the crash (VS unfortunately won't tell me like it does after managed-exceptions, and outputting them to console/a file would slow down the app 1000-fold, which is obviously not an option).
So, how do I go about debugging this?
[Edit] Compiled under VS 2010 SP1, running latest version of .Net 4.0 Client Profile. Apparently it's ".Net 4.0C/.Net 4.0E, .Net CLR 1.1.4322"
I'd like to figure out what causes it, or at the very least write a smaller reproduction to send into Microsoft, but I have no idea where to even begin.
"Smaller reproduction" definitely sounds like a great idea here... even if "smaller" won't mean "quicker to reproduce".
Before you even start, try to reproduce the error on another machine. If you can't reproduce it on another machine, that suggests a whole different set of tests to do - hardware, installation etc.
Also, check you're on the latest version of everything. It would be annoying to spend days debugging this (which is likely, I'm afraid) and then end up with a response of "Yes, we know about this - it was a bug in .NET 4 which was fixed in .NET 4.5" for example. If you can reproduce it on a variety of framework versions, that would be even better :)
Next, cut out everything you can in the program:
Does it have a user interface at all? If possible, remove that.
Does it use a database? See if you can remove all database access: definitely any output which isn't used later, and ideally input too. If you can hard code the input within the app, that would be ideal - but if not, files are simpler for reproductions than database access.
Is it data-sensitive? Again, without knowing much about the app it's hard to know whether this is useful, but assuming it's processing a lot of data, can you use a binary search to find a relatively small amount of data which causes the problem?
Does it have to be multi-threaded? If you can remove all the threading, obviously that may well then take much longer to reproduce the problem - but does it still happen at all?
Try removing bits of business logic: if your app is componentized appropriately, you can probably fake out whole significant components by first creating a stub implementation, and then simply removing the calls.
All of this will gradually reduce the size of the app until it's more manageable. At each step, you'll need to run the app again until it either crashes or you're convinced it won't crash. If you have a lot of machines available to you, that should help...
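As a very rough starting point, a stripped-down repro harness might look something like this (purely a sketch of the idea: the shared double and the tight multi-threaded loop mimic the reported crashing line, everything else is invented):
using System;
using System.Threading;

internal static class ReproHarness
{
    private static double overallLength; // shared, deliberately unsynchronised as in the report

    private static void Main()
    {
        // One busy thread per core, left running until it crashes or you give up.
        for (int i = 0; i < Environment.ProcessorCount; i++)
        {
            new Thread(Worker).Start();
        }
    }

    private static void Worker()
    {
        double distanceTravelled = new Random().NextDouble();
        while (true)
        {
            overallLength += distanceTravelled; // the line reported in the crash
        }
    }
}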
tl;dr Make sure you're compiling to .Net 4.5
This sounds suspiciously like the same error found here. From the MSDN page:
This bug can be encountered when the Garbage Collector is freeing and compacting memory. The error can happen when the Concurrent Garbage Collection is enabled and a certain combination of foreground Garbage Collection and background Garbage Collection occurs. When this situation happens you will see the same call stack over and over. On the heap you will see one free object and before it ends you will see another free object corrupting the heap.
The fix is to compile to .Net 4.5. If for some reason you can't do this, you can also disable concurrent garbage collection by setting gcConcurrent to false in the app.config file:
<configuration>
  <runtime>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
Or just compile to x86.
WinDbg is your friend:
http://blogs.msdn.com/b/tess/archive/2006/02/09/net-crash-managed-heap-corruption-calling-unmanaged-code.aspx
http://www.codeproject.com/Articles/23589/Get-Started-Debugging-Memory-Related-Issues-in-Net
http://www.codeproject.com/Articles/22245/Quick-start-to-using-WinDbg
Download Debug Diagnostic Tool v1.2
Run program
Add Rule "Crash"
Select "Specific Process"
On the Advanced Configuration page, set your exception if you know which exception it fails on, or just leave the page as is.
Set the userdump location.
Now wait for the process to crash; a log file is created by DebugDiag. Then activate the Advanced Analysis tab, select Crash/Hang Analyzers in the top list and the dump file in the lower list, and hit Start Analysis. This will generate an HTML report for you. Hopefully you'll find useful info in that report. If you have trouble with the analysis, upload the HTML report somewhere and post the URL here so we can focus on it.
My app does not invoke any native code, or use any unsafe blocks, or
even any non-CLS compliant types like uint
You may think this, but threading and synchronization via semaphores, mutexes or any other handles are all native. .NET is a layer over the operating system; .NET itself does not implement multithreading in pure CLR code, because the OS already does it.
Most likely this is a thread synchronization error. Probably multiple threads are trying to access a shared resource (a file, etc.) that is outside the CLR boundary.
You may think you aren't accessing COM etc., but when you call certain APIs, like getting the desktop folder path, the call goes through the shell COM API.
You have the following two options:
Publish your code so that we can review the bottleneck
Redesign your app using the .NET parallel threading framework, which includes a variety of algorithms for CPU-intensive operations.
Most likely such programs fail after a certain period of time as collections grow and operations no longer finish before another thread interferes. For example, in a producer-consumer setup you will not notice any problem until the producer becomes slower or fails to finish its operation before the consumer kicks in.
A bug in the CLR is rare, because the CLR is very stable. But poorly written code can make an error look like a bug in the CLR. The CLR cannot and will never tell you whether the bug is in your code or in the CLR itself.
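For what it's worth, the classic shape of such a bug (the class, collection and member names are purely illustrative) is several threads mutating shared state without a lock; guarding every access with the same lock is the usual fix:
using System.Collections.Generic;

internal static class SharedState
{
    // List<T> is not thread-safe; unsynchronised concurrent writes can corrupt it.
    private static readonly List<double> samples = new List<double>();
    private static readonly object gate = new object();

    public static void Add(double value)
    {
        lock (gate)
        {
            samples.Add(value);
        }
    }

    public static double Sum()
    {
        lock (gate)
        {
            double total = 0;
            foreach (double sample in samples)
            {
                total += sample;
            }
            return total;
        }
    }
}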
Did you run a memory test on your machine? The one time I had comparable symptoms, one of my DIMMs turned out to be faulty. (A very good memory tester is included in Win7: http://www.tomstricks.com/how-to-test-your-ram-or-memory-with-windows-memory-diagnostic-tool-in-windows-7/)
It might also be a heating/throttling issue if your CPU gets too hot after this period of time, although that would usually happen sooner imho.
There should be a dump file that you can analyze. If you have never done this, find someone who has, or send it to Microsoft.
I suggest you open a support case via http://support.microsoft.com immediately, as the support engineers can show you how to collect the necessary information.
Generally speaking, as #paulsm4 and #psulek said, you can use WinDbg or Debug Diag to capture crash dumps of the process; all the necessary information is embedded in them. However, if this is the very first time you have used those tools, you might be puzzled. The Microsoft support team can provide you step-by-step guidance, or they can even set up a Live Meeting session with you to capture the data, since the program crashes so often.
Once you are familiar with the tools, you can perform similar troubleshooting more easily in the future.
http://blogs.msdn.com/b/lexli/archive/2009/08/23/when-the-application-program-crashes-on-windows.aspx
BTW, it is too early to say "I've found a bug". Though you cannot see an obvious dependency on native code in your program, it might still have one. We should not draw a conclusion before debugging further into the issue.

How does running a C# app inside a code profiler differ from running it outside the profiler?

I'm trying to understand how a code profiler (in this case the Drone Profiler) runs a .NET app differently from just running it directly. The reason I need to know is that I have a very strange problem/corruption with my dev computer's .NET install which manifests itself outside of the profiler but, very strangely, not inside it; if I can understand why, I can probably fix my computer's issue.
The issue ONLY seems to affect calls to System.Net.NetworkInformation's methods (and only within .NET 3.5 to 2.0; if I build against 4, all is well). I built a little test app which does only one thing: it calls System.Net.NetworkInformation.IsNetworkAvailable(). Outside of the profiler I get a "Fatal Execution Engine Error" in System.dll, and that's all the info it gives. From what I understand, that error usually results from native method calls, which presumably occur when System.dll lets some native DLL perform the IsNetworkAvailable() logic.
I tried to figure out the inside and outside the profiler difference using Process Monitor, recording events from both situations and comparing them. Both logs were the same until just a moment after iphlpapi.dll and winnsi.dll were called and just before the profiler-run code called dnsapi.dll and the non-profiler code began loading crash reporting related stuff. At that moment when it seemed to go wrong the profiler-run code created 4-6 new threads and the non-profiler (crashing) code only created 1 or 2. I don't know what that means, if anything.
Arguably unnecessary background
My Windows 7 included .NET installation (3.5 to 2.0) was working fine until my hard drive suffered some corruption and chkdsk began finding bad clusters. I imaged the drive to a new one and everything works fine except this one issue with .NET.
I need to resolve this problem without reinstalling Windows or reverting to image backups.
Here are some of the things I've looked into:
I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post- disk trouble and seen no changes where I didn't expect any (no obvious file corruption).
I have diffed the software and system registry hives pre- and post- disk trouble and seen no changes which seemed relevant.
I have created a new user account and cleaned up any environment variables in case environment was related. No change.
I did "sfc /scannow" and it found no integrity problems.
I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged and nothing changed.
I removed my virus scanner to see if it was interfering, no difference.
I tried running the test code in Safe Mode, same crash issue.
I assume I need to repair my .NET installation, but because Windows 7 includes .NET 3.5 - 2.0, you can't just re-run a .NET installer to redo it. I do not have access to the Windows discs to try to re-install Windows over itself (the computer has a recovery partition but it is unusable); also the drive uses a whole-disk encryption solution and re-installing would be difficult.
I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, try and remember dozens of development-related customizations/etc.
Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working as I am a developer and need to build and test against it.
Thanks!
Quinxy
The short answer is that my System.ni.dll file was damaged; I replaced it and all is well.
The long answer might help someone else by way of its approach to the solution...
My problem related to .Net being damaged in such a way that apps wouldn't run except through a profiler. I downloaded the source for the SlimTune open source profiler, built it locally, and set a break point right before the call to Process.Start(). I then compared all the parameters involved in starting the app successfully through the profiler versus manually. The only meaningful difference I found was the addition of the .NET profile parameters added to the environment variables:
cor_enable_profiling=1
cor_profiler={38A7EA35-B221-425a-AD07-D058C581611D}
I then tried setting these in my own user's environment, and voila! Now any app I ran manually would work. (I had actually tried doing the same thing a few hours earlier but I used a GUID that was included in an example and which didn't point to a real profiler and apparently .NET knew I had given it a bogus GUID and didn't run in profiling mode.)
I now went back and began reading about just how a PE file is executed by CLR hoping to figure out why it mattered that my app was run with profiling enabled. I learned a lot but nothing which seemed to apply.
I did however remember that I should recheck the chkdsk log I kept listing the files that were damaged by the drive failure. After the failure I had turned all the listed file ids into file paths/names and I had replaced all the 100+ files I could from backup but sure enough when I went back now and looked I found a note that while I had replaced 4 or 5 .NET related files successfully there was one such file I wasn't able to replace because it was "in use". That file? System.ni.dll!!! I was now able to replace this file from backup and voila my .NET install is back to normal, apps work whether profiled or not.
The frustrating thing is that when this incident first occurred I fully expected the problem to relate to a damaged file, and specifically to a file called System.dll which housed the methods that failed. And so I diffed and rediffed all files named System.dll. But I did not realize at that time that System.ni.dll was a native compiled manifestation of System.dll (or somesuch). And because I had diffed and rediffed the .NET related directories and not noticed this (no idea how I missed it) I'd given up on that approach.
Anyway... long story short, it was a damaged System.ni.dll that caused my problems, one or more clusters within it had their content replaced with 0x0 and it just so happened to manifest as the odd problem I observed.
This sounds like a timing issue, which the profiler "fixed" by making it just a little slower.
Many profilers use instrumentation (more info here), which slightly slows down the application. Apparently it slows one thread down enough that another thread can do a little bit more work, preventing that crash. Errors like these often do not manifest themselves directly on the developer's machine, but surface as soon as they run on a processor with more cores or hyper-threading. Sometimes they only occur in release builds (or vice versa in debug builds). Timing issues can be difficult to track down since the same code may give different results in different conditions (in a profiler or debugger).
From your description, I'll attempt a wild guess at how it may be fixed:
Try to find where in the source the new threads are started. Then, after they are spawned, add a System.Threading.Thread.Sleep(500); line to pause the main thread and give the new threads some time to start.
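A sketch of that suggestion (all names invented); if the Sleep does make the crash disappear, a slightly more deterministic variant is to have the main thread wait for an explicit signal from the worker instead of a fixed delay:
using System.Threading;

internal static class WorkerStartup
{
    public static Thread StartWorker()
    {
        ManualResetEvent ready = new ManualResetEvent(false);

        Thread worker = new Thread(() =>
        {
            // ... initialisation the worker must finish before the main thread continues ...
            ready.Set();
            // ... the worker's normal processing loop ...
        });
        worker.Start();

        ready.WaitOne(); // replaces the arbitrary Thread.Sleep(500)
        return worker;
    }
}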
Without the source code and a few stack traces of the crashes, this is quite a bit of guesswork.

ASP.NET startup performance profiling

I'm trying to determine the cause of a very long (imho) initial start up of an ASP.NET application.
The application uses various third-party libraries and lots of references that I'm sure could be consolidated; however, I'm trying to identify (and apportion blame to) the DLLs and how much they contribute to the extended startup process.
So far, the start-up times vary from 2-5 minutes depending on what else is in use on the box. This is unacceptable in my opinion, given the complexity of the site, and I need to reduce it to something in the region of 30 seconds maximum.
To be clear on the scope of the performance I'm looking for, it's the time from first request to the initial Application_Start method being hit.
So where would I start with getting information on which DLLs are loaded and how long they take to load, so I can put a cost/benefit together on which ones we need to tackle/consolidate?
From an ability perspective, I've been using JetBrains dotTrace for a while, and I'm clear on how to benchmark the application once we're in the application code, but this appears to be outside of the application code, and therefore outside of what I currently know.
What I'm looking for is methodologies on how to get visibility of what is happening before the first entry point into my code.
Note: I know that I can call the default page on recycle/upgrade to do an initial load, but I'd rather solve the actual problem rather than papering over it.
Note2: the hardware is more than sufficiently scaled and separated in terms of functionality, therefore I'm fairly sure that this isn't the issue.
Separate answer on profiling/debugging start up code:
w3wp is just a process that runs .NET code, so you can use all the profiling and debugging tools you would use for a normal .NET application.
One tricky point is that the w3wp process starts automatically on the first request to an application, and if your tools do not support attaching to a process whenever it starts, it becomes problematic to investigate the startup code of your application.
The trick to solve this is to add another application to the same application pool. This way you can trigger w3wp creation by navigating to the other application, then attach/configure your tools against the already running process. When you finally trigger your original application, the tools will see the loading happening in the existing w3wp process.
With a 2-5 minute delay you may not even need a profiler - simply attach the Visual Studio debugger the way suggested above and randomly trigger "Break All" several times during the load of your site. There is a good chance that the slowest portion of the code will be on the stack of one of the many threads. Also watch the debug output - it may give you some clues about what is going on.
You may also use WinDbg to capture the stacks of all threads in a similar way (it could be more lightweight than VS).
Your DLL references are loaded as needed, not all at once.
Do external references slow down my ASP.NET application? (VS: Add Reference dialog)
If startup is taking 2-5 minutes, I would look at what happens in Application_Start, and at what the DLLs do once loaded. Are they trying to connect to a remote service that is very slow? Is the machine far too small for what it's doing (e.g. running a DB with large amounts of data plus the web server on an AWS micro instance or similar)?
Since the load time is probably not the IIS worker process resolving references, I would turn to traditional application profilers (e.g. JetBrains, ANTS, dotTrace) to see where the time is being spent as the DLLs initialize and in your Application_Start method.
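A low-tech complement to a profiler is to bracket each startup step with a Stopwatch and log the timings; a minimal Global.asax.cs sketch (the step methods are placeholders, not from the question):
using System;
using System.Diagnostics;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Stopwatch sw = Stopwatch.StartNew();
        ConfigureContainer();
        Trace.WriteLine("ConfigureContainer: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        WarmUpCaches();
        Trace.WriteLine("WarmUpCaches: " + sw.ElapsedMilliseconds + " ms");
    }

    private void ConfigureContainer() { /* placeholder for real startup work */ }
    private void WarmUpCaches() { /* placeholder for real startup work */ }
}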
A few options to check along with profiling:
profile everything, add time tracing to everything and log the information
if you have many ASPX views that need to be compiled on startup (I think that is the default for the release configuration) then it will take some time
references to web services or other XML serialization related code will need serialization assemblies compiled if none are present yet
access to remote services (including local SQL) may require those services to start up too
aggressive caching in the application/remote services may require pre-population of caches
Production:
What is the goal for start-up time? Figure that out first, otherwise you will not be able to reach it.
What price are you willing to pay to decrease the start-up time? Adding 1-10 more servers may be cheaper than spending months of development/test time and delaying the product.
Consider multiple servers, rolling restarts with warm-up calls, and web gardens.
If caching of DB objects (or caching in general) is an issue, consider existing distributed in-memory caches...
Despite the large number of DLLs, I'm almost sure that for a reasonable application they cannot be the cause of the problem. Most of the time it is static object initialization that causes slow startup.
In C#, static variables are initialized when a type is accessed for the first time. I would recommend using a SQL profiler to see which queries are performed during the start-up of the application, and from there see which objects are expensive to initialize.
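To illustrate that last point (the type, data and query below are invented), an eagerly initialized static field pays its cost as soon as the type is first touched, often during startup, whereas a Lazy<T> (on .NET 4+) or an equivalent lock-guarded property defers it until the data is actually needed:
using System;
using System.Collections.Generic;

public static class ReferenceData
{
    // Eager: the (potentially slow) load runs the first time ReferenceData is touched.
    public static readonly IList<string> CountriesEager = LoadCountriesFromDatabase();

    // Deferred: the load runs on the first access to Countries.Value instead.
    public static readonly Lazy<IList<string>> Countries =
        new Lazy<IList<string>>(LoadCountriesFromDatabase);

    private static IList<string> LoadCountriesFromDatabase()
    {
        return new List<string>(); // placeholder for the query a SQL profiler would reveal
    }
}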

Debugging and improving the efficiency of C# WinForms code

I have written a WinForms application in C#. How can I check the performance of my code? By that I mean: how can I check which form references are active at a given time or event, so that I can remove them if they are not required (make them available for garbage collection)? Is there a way to do it using VS 2005 or any free tool? Any tutorials or guides would be useful.
[Edit] Sorry if my question is confusing. I am not looking for a professional tool, but for ways to know/understand the working of my code better and to code more efficiently.
Thanks
Making code efficient is always a secondary step for me. First I write the code so that it works. Then I profile it if I am unhappy with the performance. The truth is most applications run fast enough after being written the first time. Sometimes, though, better performance is needed. Performance can be gained in many different ways; it all depends on your application. I write LOB apps mainly, so I deal with a lot of I/O to databases, services and storage. These calls are all very expensive and need to be limited, so they are my first area to optimize. I optimize by lazy-loading, eager-loading, batching calls, making less frequent calls and so on. I recently had a WinForms app that created hundreds of controls dynamically, and it took a long time. That's another kind of bottleneck I have to address. I use a profiler to measure the performance of my applications.
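On the dynamically created controls point, one cheap win (sketch only, the panel and labels are invented) is to suspend layout and add the controls in a single batch rather than one Add call at a time:
using System.Windows.Forms;

public static class PanelBuilder
{
    public static void Fill(Panel panel, int count)
    {
        Control[] labels = new Control[count];
        for (int i = 0; i < count; i++)
        {
            Label label = new Label();
            label.Text = "Item " + i;
            labels[i] = label;
        }

        panel.SuspendLayout();
        try
        {
            panel.Controls.AddRange(labels); // one layout pass instead of hundreds
        }
        finally
        {
            panel.ResumeLayout(true);
        }
    }
}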
Use the free EQATEC profiler. It will show you how long calls take and how many times a call is made. The profiler gives a nice report and a visual display that can drill down the call stacks.
Red Gate Performance Profiler
...it's been said here a million times before. If you suspect performance issues, profile your application. It will tell you how long calls are taking and point out the bottlenecks in your code.
Kobra,
What you're looking for is called a memory profiler. There happens to be a (paid) one for .NET aptly named ".NET Memory Profiler". I've not used it extensively, but it should answer the questions you're asking. There are a few other ones which do basically the same thing, like giving you instance counts of loaded types, and helping you identify when instances are not being garbage collected for one reason or another (i.e. event handler references, static properties, etc.).
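A typical case such a memory profiler will surface (the names below are invented for illustration): a form subscribed to an event on a long-lived object stays reachable through that subscription until it unsubscribes, so it is never collected:
using System;
using System.Windows.Forms;

public static class AppEvents
{
    public static event EventHandler SettingsChanged;

    public static void RaiseSettingsChanged()
    {
        EventHandler handler = SettingsChanged;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class OptionsForm : Form
{
    public OptionsForm()
    {
        // The static event now holds a reference to this form...
        AppEvents.SettingsChanged += OnSettingsChanged;

        // ...so unsubscribe when the form closes to make it collectable again.
        FormClosed += delegate { AppEvents.SettingsChanged -= OnSettingsChanged; };
    }

    private void OnSettingsChanged(object sender, EventArgs e)
    {
        // refresh the UI from the new settings
    }
}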
Hope this helps,
Dylan

Not enough storage is available to process this command in Visual Studio 2008

When I try to compile an assembly in VS 2008, I occasionally (usually after 2-3 hours of work on the project) get the following error:
Metadata file '[name].dll' could not be opened --
'Not enough storage is available to process this command.
Usually, to get rid of it, I need to restart Visual Studio.
The assembly I need to use in my project is quite big (> 70 MB) and probably this is the reason for the bug; I've never seen anything like this in my previous projects. OK, if this is the reason, my question is why it happens and what I need to do to stop it.
I have enough free space on my drives and 2 GB of RAM (only ~1.2 GB is utilized when the exception happens).
I googled for answers to questions like this.
Suggestions usually relate to:
the number of user handles, which is limited in WinXP...
to the physical limit of memory available per process
I don't think either could explain my case.
As for user handles and other GUI resources - I don't think this could be the problem. The big 70 MB assembly is actually GUI-less code that operates with sockets and implements parsers for proprietary protocols. In my current project I have only 3 GUI forms, with a total number of GUI controls < 100.
I suppose my case is closer to the fact that in Windows XP the process address space is limited to 2 GB (and, taking memory segmentation into account, it is possible that I don't have a free segment large enough for the allocation).
However, it is hard to believe that segmentation could be so bad after just 2-3 hours of working with the project in Visual Studio. Task Manager shows that VS consumes about 400-500 MB (OM + VM). During compilation, VS needs to load only the metadata.
Well, there are a lot of classes and interfaces in that library, but I would still expect that 1-2 MB is more than enough to allocate the metadata the compiler uses to find all the public classes and interfaces (though that is only my assumption; I don't know what exactly happens inside the CLR when it loads assembly metadata).
In addition, I would say that the entire assembly is so big only because it is a C++/CLI library that has other unmanaged libraries statically linked into one DLL. I estimated (using Reflector) that the .NET (managed) code is approximately 5-10% of this assembly.
Any ideas how to determine the real reason for this bug? Are there any restrictions or recommendations as to .NET assembly size? (Yes, I know it's worth thinking about refactoring and splitting a big assembly into several smaller pieces, but it is a 3rd-party component and I can't rebuild it.)
The error is misleading. It really should say "A large enough contiguous space in virtual memory could not be found to perform the operation". Over time, allocations and deallocations of virtual memory space lead to it becoming fragmented. This can lead to situations where a large allocation cannot be satisfied despite there being plenty of total space available.
I think this is what your "segmentation" is referring to. Without knowing all the details of everything else that needs to load, and the other activity that occupies the 2-3 hour period, it's difficult to say whether this really is the cause. However, I would not put it in the category of unlikely; in fact, it is the most likely cause.
In my case the following fix helped:
http://confluence.jetbrains.net/display/ReSharper/OutOfMemoryException+Fix
As Anthony pointed out, the error message is a bit misleading. The issue is less about how big your assembly is and more about how much contiguous memory is available.
The problem is likely not really the size of your assembly. It's much more likely that something inside Visual Studio is fragmenting memory to the point that a build cannot complete. The usual suspects for this type of problem are:
Too many projects in the solution.
Third party add-ins
If you have more than, say, 10 projects in the solution, try breaking up the solution and see if that helps.
If you have any 3rd-party add-ins, try disabling them one at a time and see if the problem goes away.
I am getting this error on one of my machines and, surprisingly, the problem is not seen on other dev machines. Maybe something is wrong with the VS installation.
But I found an easier solution.
If I delete the solution's .suo file and re-open the solution, it starts working smoothly again.
Hope this will be useful for somebody in distress.
If you are just interested in making it work, then restart your computer and it will work like a charm. I had the same kind of error in my application, and after reading all of the answers here at Stack Overflow, I decided to first restart my computer before making any other modifications. It saved me a lot of time.
Another cause of this problem can be using too many typed datasets via the designer, or other types that can be instantiated via a designer, like lots of data-bound controls on lots of forms.
I imagine you're the sort of hardcore programmer, though, who wouldn't drag n' drop a DS! :D
In relation to your problem, Bogdan, have you tried to reproduce the problem without your C++ component loaded? If you can't, then maybe it's this. How are you loading the component? Have you tried other techniques like late binding, etc.? Any difference?
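If you want to try the late-binding route, the idea is to stop referencing the big assembly at compile time and load it through reflection only when it is actually needed; a rough sketch (the path and type names below are made up):
using System;
using System.Reflection;

internal static class LateBoundLoader
{
    public static object CreateParser()
    {
        // Load the big mixed-mode assembly only at this point, not at compile/design time.
        Assembly assembly = Assembly.LoadFrom(@"C:\Libs\BigProtocolLibrary.dll");
        Type parserType = assembly.GetType("BigProtocolLibrary.ProtocolParser", true);
        return Activator.CreateInstance(parserType);
    }
}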
Additional:
Yes, you are right, the other culprits are lots of controls on the form. I once saw this same issue with a dev who had ported a very big VB6 app over to .NET. He literally had hundreds of forms. He would get periodic crashes of the IDE after a couple of hours. I'm pretty sure it was thread exhaustion. It might be worth setting up a vanilla box with no add-ins loaded just to rule add-ins out, but my guess is you are just hitting the wall in terms of a combined limitation of VS and your box specs. Try running Windows Vista 64-bit and installing some extra RAM modules.
If memory usage and VM size are small for devenv, explicitly kill ALL running instances of devenv.exe.
I had 3 devenv.exe processes running, whereas only two instances of Visual Studio were open in front of me.
That was the solution in my case.
I know it has been a long time since this was commented on, but I ran into this exact issue today with a Telerik DLL in VS 2010. I had never seen this issue before until today, when I was making some setting changes in IE.
There is a setting in Tools/Folder Options/View, in the Files and Folders section, called "Launch folder windows in a separate process".
I am not sure of the amount of memory used for each window when using this setting, but until today I had never had it checked. After checking this option for misc reasons, I started getting the "not enough storage is available to process this command" error. The Telerik DLL is an 18 MB DLL that we are using, located in our library folder as a reference in our project.
Unchecking this resolved the problem.
Just passing this along as another possible solution.
I also faced the same problem.
Make sure that the Windows OS is 64-bit.
I switched from 32-bit Windows to 64-bit Windows and the problem got solved.
I had this same issue and, in my case, the exception name was very misleading. The actual problem was that the DLL couldn't be loaded at all due to an invalid path. The exception I was getting said "
I used the DllImport attribute in a C# ASP.NET application, with a declaration like the one below, and it was causing the exception:
[DllImport(@"Calculation/lib/supplier/SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
Below is the working code snippet:
[DllImport(@"Calculation\lib\supplier\SupplierModule.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi, EntryPoint = "FunctionName")]
The actual problem was using forward slashes in the path instead of backslashes. This cost me way too much time to figure out; I hope it helps others.
