The following code results in our Visual Studio 2012 development web server (or the alternative, IIS Express) effectively killing the AppDomain of our WebApp by executing a Thread.Abort on all of its constituent thread-pool threads. What I do not understand is why. I cannot find anything documented which specifically states under what circumstances the web server would do this, and there is precious little in the "verbose" output of the web server itself.
This took several days' worth of effort to track down. Normal debugging techniques proved worthless; I eventually had to resort to the tried and tested method of commenting out method bodies until I reached that "ah-ha" moment.
One of our developers had inadvertently created a temporary file and then closed it again deep within one of our ASP.NET services.
FileStream fs = File.Create(strFilePath);
// ...
fs.Close();
The strFilePath was accidentally set to the same location as the WebApp's assembly location. Namely:
var basePathUri = new Uri(Assembly.GetExecutingAssembly().CodeBase);
var strFilePath = Path.GetDirectoryName(Uri.UnescapeDataString(basePathUri.AbsolutePath));
It's worth saying that this error has since been corrected by writing to the system temporary file space instead.
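For reference, here is a minimal sketch of the corrected approach; the variable name and usage are illustrative only:
// Hypothetical sketch: create the scratch file under the system temp directory
// rather than next to the WebApp's assemblies.
string strTempFilePath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
using (FileStream fs = File.Create(strTempFilePath))
{
    // ... write the temporary data ...
}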
So, once again, we know this is bad practice, but why does the Visual Studio web server behave as it does?
PS
I know there is a deployment difference between a development web server and a production web server (e.g. apache or nginx) with regard to copying all content to a temporary ASP.NET location. In fact, this problem does not manifest in deployment.
Related
I recently ran into a problem when deploying an application on a customer's machine. I asked him to drop a number of (updated) assemblies into the program folder. Because he had downloaded them via the Dropbox website, the OS marked them as blocked and the assemblies couldn't be loaded via reflection. It took me some time to figure out the issue, with the help of this post:
.net local assembly load failed with CAS policy
I am now wondering whether it is a good idea to load the assemblies with Assembly.UnsafeLoadFrom(...) instead of Assembly.LoadFrom(...), just to avoid these kinds of issues in the future. (I am aware of the fact that sending assemblies over the internet and letting the customer drop them into the program files folder isn't the golden path of software deployment, but in reality you sometimes need to improvise...).
As I read, the method requires the calling application to run in a full-trust environment, which is usually the case with the application I am talking about.
My question is: apart from that requirement of running in full trust, are there any side effects of this method? Are there scenarios where the application will throw an exception because the Windows user account lacks privileges, etc.?
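For illustration, a minimal sketch of the call being considered (the file name is hypothetical; UnsafeLoadFrom requires full trust):
// Hypothetical sketch: load an updated assembly that Windows has marked as blocked,
// bypassing the zone/CAS check that can make Assembly.LoadFrom fail for such files.
string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Updated.Plugin.dll");
Assembly assembly = Assembly.UnsafeLoadFrom(path);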
As far as you know, are there any problems with running a C# application from a shared exe file? This is a request from a customer who wants their 20 clients to run the same exe file from a shared path.
First tests didn't show any problems, but I don't know about the long term. I personally don't like this, and I don't think the framework was developed with this in mind, but they do it to allow a quick upgrade of the exe file when needed.
Any point to discourage this?
Thanks
Sav
The first consideration is deployment concerns. Prior to .NET 3.5 SP1, this was not allowed by default because the shipped security policy treated network locations as less trusted. As of .NET 3.5 SP1, this is no longer the case. You could, of course, use caspol to modify the security policy to allow this if you are working with earlier versions of the framework. Additionally, some more recent versions of Windows may have additional security policies outside of .NET that can prevent execution from remote locations.
The second consideration is making sure the application is designed to be aware of its environment and does not assume everything is local to the machine it runs on. That affects the resolution of external resources and, depending on the situation, could result in resource contention or users overwriting each other's data.
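As an illustration of that point, a minimal sketch (the application and file names are hypothetical) of keeping per-user state on the local machine rather than next to the shared exe:
// Hypothetical sketch: store per-user state under the local application data folder
// so clients sharing one exe do not overwrite each other's files.
string userDataDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "MySharedApp");
Directory.CreateDirectory(userDataDir);
File.WriteAllText(Path.Combine(userDataDir, "user-settings.json"), "{}");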
The third is availability. What if the server hosting that executable becomes unavailable (is powered off by accident, crashes, experiences networking issues, is renamed, etc.)? Is that acceptable? How large is the executable? If it is large, that can increase network traffic and at any rate result in the executable being slow to start as it is invoked over the network.
I suppose for trivial applications, these issues may be negligible. However, there are lots of ways of installing applications on client computers in a way that they are installed and updated quickly and easily, such as ClickOnce deployment.
We currently run software designed in house. This runs off a central SQL database. Each computer is set up with a batch program which runs at Windows startup and downloads the current program files from the central server. The .exe is therefore run off the individual's computer and not off the server. This has been found, in our case at least, to be the most efficient method.
I'm trying to understand how a code profiler (in this case the Drone Profiler) runs a .NET app differently from just running it directly. The reason I need to know this is that I have a very strange problem/corruption with my dev computer's .NET install, which manifests itself outside of the profiler but, very strangely, not inside it. If I can understand why, I can probably fix my computer's issue.
The issue ONLY seems to affect calls to System.Net.NetworkInformation methods (and only within .NET 3.5 down to 2.0; if I build something against 4, all is well). I built a little test app which does only one thing: it calls System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable(). Outside of the profiler I get a "Fatal Execution Engine Error" in System.dll, and that's all the info it gives. From what I understand, that error usually results from native method calls, which presumably occur when System.dll lets some native DLL perform the GetIsNetworkAvailable() logic.
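For reference, a minimal sketch of the kind of test app described (this is a reconstruction, not the original code), built against .NET 2.0/3.5:
// Hypothetical sketch: the one-call repro described above.
using System;
using System.Net.NetworkInformation;
class Program
{
    static void Main()
    {
        Console.WriteLine(NetworkInterface.GetIsNetworkAvailable());
    }
}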
I tried to figure out the difference between running inside and outside the profiler using Process Monitor, recording events from both situations and comparing them. Both logs were the same until just a moment after iphlpapi.dll and winnsi.dll were called, just before the profiler-run code called dnsapi.dll and the non-profiler code began loading crash-reporting-related stuff. At the moment it seemed to go wrong, the profiler-run code created 4-6 new threads and the non-profiler (crashing) code created only 1 or 2. I don't know what that means, if anything.
Arguably unnecessary background
My Windows 7 included .NET installation (3.5 down to 2.0) was working fine until my hard drive suffered some corruption and chkdsk began finding bad clusters. I imaged the drive to a new one and everything works fine except this one issue with .NET.
I need to resolve this problem without reinstalling Windows or reverting to image backups.
Here are some of the things I've looked into:
I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post- disk trouble and seen no changes where I didn't expect any (no obvious file corruption).
I have diffed the software and system registry hives pre- and post- disk trouble and seen no changes which seemed relevant.
I have created a new user account and cleaned up any environment variables in case environment was related. No change.
I did "sfc /scannow" and it found no integrity problems.
I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged and nothing changed.
I removed my virus scanner to see if it was interfering, no difference.
I tried running the test code in Safe Mode, same crash issue.
I assume I need to repair my .NET installation but because Windows 7 included .NET 3.5 - 2.0 you can't just re-run a .NET installer to redo it. I do not have access to the Windows disks to try to re-install Windows over itself (the computer has a recovery partition but it is unusable); also the drive uses a whole-disk encryption solution and re-installing would be difficult.
I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, try and remember dozens of development-related customizations/etc.
Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working as I am a developer and need to build and test against it.
Thanks!
Quinxy
The short answer is that my System.ni.dll file was damaged, I replaced it and all is well.
The long answer might help someone else by way of its approach to the solution...
My problem related to .NET being damaged in such a way that apps wouldn't run except through a profiler. I downloaded the source for the SlimTune open-source profiler, built it locally, and set a breakpoint right before the call to Process.Start(). I then compared all the parameters involved in starting the app successfully through the profiler versus manually. The only meaningful difference I found was the addition of the .NET profiling parameters to the environment variables:
cor_enable_profiling=1
cor_profiler={38A7EA35-B221-425a-AD07-D058C581611D}
I then tried setting these in my own user's environment and, voila, now any app I ran manually would work. (I had actually tried doing the same thing a few hours earlier, but I used a GUID that was included in an example and which didn't point to a real profiler; apparently .NET knew I had given it a bogus GUID and didn't run in profiling mode.)
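For reference, a minimal sketch of setting those variables for the current user from code (equivalent to editing them in the Environment Variables dialog):
// Hypothetical sketch: mirror the variables the profiler injects into its child process,
// but at the user level so they apply to everything launched afterwards.
Environment.SetEnvironmentVariable("cor_enable_profiling", "1", EnvironmentVariableTarget.User);
Environment.SetEnvironmentVariable("cor_profiler", "{38A7EA35-B221-425a-AD07-D058C581611D}", EnvironmentVariableTarget.User);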
I then went back and began reading about just how a PE file is executed by the CLR, hoping to figure out why it mattered that my app was run with profiling enabled. I learned a lot, but nothing which seemed to apply.
I did, however, remember that I should recheck the chkdsk log I had kept listing the files that were damaged by the drive failure. After the failure I had turned all the listed file IDs into file paths/names and had replaced all 100+ files I could from backup. Sure enough, when I went back now and looked, I found a note that while I had replaced 4 or 5 .NET-related files successfully, there was one such file I wasn't able to replace because it was "in use". That file? System.ni.dll!!! I was now able to replace this file from backup, and voila, my .NET install is back to normal; apps work whether profiled or not.
The frustrating thing is that when this incident first occurred I fully expected the problem to relate to a damaged file, and specifically to a file called System.dll, which housed the methods that failed. And so I diffed and re-diffed all files named System.dll. But I did not realize at that time that System.ni.dll was a native-compiled manifestation of System.dll (or somesuch). And because I had diffed and re-diffed the .NET-related directories and not noticed this (no idea how I missed it), I'd given up on that approach.
Anyway... long story short, it was a damaged System.ni.dll that caused my problems, one or more clusters within it had their content replaced with 0x0 and it just so happened to manifest as the odd problem I observed.
This sounds like a timing issue, which the profiler "fixed" by making it just a little slower.
Many profilers use instrumentation (more info here), which slightly slows down the application. Apparently it slows one thread down enough that another thread can do a little bit more work, preventing that crash. Errors like these often do not manifest themselves directly on the developer's machine, but surface as soon as the code runs on a processor with more cores or hyper-threading. Sometimes they occur only in release builds (or, vice versa, only in debug builds). Timing issues can be difficult to track down since the same code may give different results under different conditions (in a profiler or debugger).
From your description I'll attempt a wild guess at how it might be fixed:
Try to find where in the source the new threads are started. Then, after they are spawned, add a System.Threading.Thread.Sleep(500); line there to pause the main thread and give the new threads some time to start, as sketched below.
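A minimal sketch of that experiment (DoBackgroundWork stands in for whatever method the new threads actually run):
// Hypothetical sketch: give a freshly spawned worker thread a head start
// before the main thread continues; 500 ms is an arbitrary value to try.
var worker = new System.Threading.Thread(DoBackgroundWork);
worker.Start();
System.Threading.Thread.Sleep(500);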
Without the source code and a few stack traces of the crashes, this is quite a bit of guesswork.
I'm a relatively new C# developer, and I'm finding myself roughly 1/5th as productive building C# MVC applications in VS 2010 as I was previously building Zend applications in PHP by hand in vim.
I work in an iterative cycle of write tests / write code / run tests / integrate & debug problems.
This last step involves building the code in Visual Studio, using Debug > Attach to Process > w3wp.exe, then visiting http://localhost/App/ (a local copy of IIS 7.5) and triggering some event/etc. that kicks the VS debugger into running.
Upon finding a bug / issue I then stop the debugger, fix it, recompile and repeat.
This is unbelievably slow.
Compiling the application takes maybe 2 seconds, but the first time loading up in the debugger takes about 60-80 seconds. Turning the debugger off improves the speed of this fractionally, but not significantly.
There is nothing special about my copy of IIS; it's just configured to run locally serving the project directory from the visual studio project.
My machine isn't fantastic; it's a 3.0 GHz dual core running Windows 7 with 4 GB of RAM... but this is just beyond a joke. I'm spending more time staring at my screen waiting for IIS to do something than I am writing code.
Subsequent page loads are obviously really quick, practically instant.
The issue, I guess, is that IIS needs to run application startup and load files and do whatever it does with the application pool worker when I recompile, which means the initial startup is slow (IOC, web config files, etc).
...and yet, I can't seem to find many other people on the net complaining about this, so I guess it must be some combination of how the application is configured, how IIS is configured and my workflow which are messed up.
What am I doing wrong?
If there's no way around IIS startup, how should I be doing development to avoid this problem?
I think most people use the built-in Visual Studio Development Server (IIS-like). If you are running SP1 you can also use IIS Express, which is more robust for larger and/or multi/cross-website applications.
Right Click on your Web Application -> Properties -> Web Tab to select a Debug Server.
I agree with you that it might have something to do with your local system. Obviously it's not possible for me to tell you how your system is misconfigured, but I would check that first in more detail.
Some other hints which might be helpful, even if they don't solve the root of the problem:
Use the built-in development server, as Erik proposes.
If you have only changed templates, you don't have to restart from scratch.
If you have good unit tests, you don't need to start the web application at all.
Does IIS 7.5 have app pools? If yes, you could restart just the app pool, which is much faster than restarting the whole of IIS (see the sketch below).
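A minimal sketch of recycling a single app pool programmatically, assuming a reference to Microsoft.Web.Administration.dll, administrative rights, and a hypothetical pool name:
// Hypothetical sketch: recycle just the application pool instead of restarting IIS.
using (var serverManager = new Microsoft.Web.Administration.ServerManager())
{
    serverManager.ApplicationPools["MyAppPool"].Recycle();
}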
PS: I have seen some quite strange behaviour when opening database connections if the network is configured badly. Even if it's on the local machine, you might have strange DNS lookups or something like that. Just an additional hint about what to look for.
I am developing an application in ASP.NET MVC, using SQL Server Express as the backend and Cassini as the development web server (the one that comes with Visual Studio 2008).
The application performance is blazingly fast (nearly instantaneous page switches). However, spinning up the debugger is painfully slow; it takes about 30 seconds from the time I hit F5 to the time the ASP.NET welcome page comes up.
I have noticed a similar delay when loading SQL Server Management Studio Express, and another delay when I open a table in my database for the first time for viewing. After I open my first table, everything goes smoothly.
Given the behavior of SQL Server Management Studio Express, I suspect that the problem lies in making the initial connection to SQL Server Express. Is this really where the problem is, and if so, how can I fix it?
I'd check the auto_close property on the database.
sp_dboption 'MyDatabaseName', 'autoclose'
I think the default on express might be to set autoclose to on. When that is set to TRUE, the server will close the database and free all resources from it when there are no users in the database. Setting autoclose to FALSE will tell the server to hang on to the database so that it's in the ready state regardless of users being in the database or not.
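If that turns out to be the cause, here is a minimal sketch of turning the option off from code (the database name and connection string are placeholders; the same ALTER DATABASE statement can be run from Management Studio):
// Hypothetical sketch: disable AUTO_CLOSE so SQL Server Express keeps the
// database ready between connections (uses System.Data.SqlClient).
using (var connection = new System.Data.SqlClient.SqlConnection(@"Data Source=.\SQLEXPRESS;Integrated Security=True"))
using (var command = new System.Data.SqlClient.SqlCommand("ALTER DATABASE MyDatabaseName SET AUTO_CLOSE OFF", connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}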
See here for more info.
If it's only slow when debugging, then there are several chokepoints to consider:
Applications start off slower when debugging because of the compilation the JITer has to redo whenever the assemblies are rebuilt.
If you're compiling and debugging every time, it may be your compilation that's slow, not so much your application performance. How long does it take for the browser to appear once you hit F5? If you have multiple projects in your solution, building them all takes time. Try setting up a build configuration that excludes the class library projects (make sure to rebuild them manually when necessary).
I haven't had any trouble with Cassini, but you might try IIS just for grins.
Just a few thoughts, HTH.
I finally solved the problem by rebuilding my TCP/IP stack, using Netshell from a Command Prompt window. Apparently I was getting a TCP/IP timeout.
netsh int ip reset c:\resetlog.txt
http://support.microsoft.com/kb/299357
In my case, exceptions were being thrown and caught (visible in the Debug > Output window), which was massively slowing down my app while it was being debugged.
I ended up enabling breaking on exceptions as outlined here, and then just fixed the code so that not so many exceptions were being thrown:
Why is ASP.NET throwing so many exceptions?