I recently ran into a problem when deploying an application on a customer's machine. I asked him to drop a number of updated assemblies into the program folder. Because he downloaded them via the Dropbox website, the OS marked them as blocked and the assemblies couldn't be loaded via reflection. It took me some time to figure out the issue, with the help of this post:
.net local assembly load failed with CAS policy
I am now wondering whether it is a good idea to load the assemblies with Assembly.UnsafeLoadFrom(...) instead of Assembly.LoadFrom(...), just to avoid these kinds of issues in the future. (I am aware that sending assemblies over the internet and letting the customer drop them into the program files folder isn't the golden path of software deployment, but in reality you sometimes need to improvise...)
From what I have read, the method requires the calling application to run in a full-trust environment, which is usually the case for the application I am talking about.
My question is: apart from that requirement of running in full trust, are there any side effects of this method? Are there scenarios where the application will throw an exception because the Windows user account lacks certain privileges, etc.?
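Here is the kind of fallback I have in mind - just a rough sketch, assuming .NET 4 and full trust; the FileLoadException/NotSupportedException pattern is how blocked assemblies have failed for me, but treat that as an assumption rather than a guarantee:

    using System;
    using System.IO;
    using System.Reflection;

    static class PluginLoader
    {
        public static Assembly LoadPlugin(string path)
        {
            try
            {
                // Normal load that honors the "downloaded from the internet" zone marker.
                return Assembly.LoadFrom(path);
            }
            catch (FileLoadException ex)
            {
                // Blocked files typically surface as a FileLoadException wrapping
                // a NotSupportedException. UnsafeLoadFrom ignores the zone
                // information, but requires the host to run with full trust.
                if (!(ex.InnerException is NotSupportedException))
                    throw;
                return Assembly.UnsafeLoadFrom(path);
            }
        }
    }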
Related
As far as you know, are there any problems in running a C# application from a shared exe file? This is a request from a customer who wants their 20 clients to run the same exe file from a shared path.
First tests didn't show any problems, but I don't know about the long term. I personally don't like this and don't think the framework was developed with this in mind, but they do it to allow a quick upgrade of the exe file when needed.
Any point to discourage this?
Thanks
Sav
The first consideration is deployment concerns. Prior to .NET 3.5 SP1, this was not allowed by default because the shipped security policy treated network locations as less trusted. With .NET 3.5 SP1 and later, this is no longer the case. You could, of course, use caspol to modify the security policy to allow this if you are working with versions of the framework prior to that. Additionally, some more recent versions of Windows may have additional security policies outside of .NET that can prevent execution from remote locations.
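For example (this only applies to frameworks before 3.5 SP1, and the share path below is just a placeholder), a machine-level code group granting full trust to a specific share would look something like:

    caspol -m -ag 1.2 -url "file://\\server\apps\*" FullTrust -n "SharedAppFolder"

Here 1.2 is the default LocalIntranet_Zone code group under which the new group is added.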
The second consideration is making sure the application is designed to be aware of its environment, rather than assuming everything is local to the machine it runs on when it isn't (which could affect resolution of external resources and, depending on the situation, could result in resource contention or users overwriting each other's data).
The third is availability. What if the server hosting that executable becomes unavailable (is powered off by accident, crashes, experiences networking issues, is renamed, etc.)? Is that acceptable? How large is the executable? If it is large, that can increase network traffic and at any rate result in the executable being slow to start as it is invoked over the network.
I suppose for trivial applications, these issues may be negligible. However, there are lots of ways of installing applications on client computers in a way that they are installed and updated quickly and easily, such as ClickOnce deployment.
We currently run software designed in house. This runs off a central SQL database. Each computer is set up with a batch program which runs through Windows Start Up and downloads the current program files from the central server. The .exe is therefore run off the individual's computer and not off the server. This has been found, in our case at least, to be the most efficient method.
We are seeing very high CPU and memory usage from one of our .NET MVC apps and can't seem to track down the cause. Our group does not have access to the web server itself but instead gets notified automatically when certain limits are hit (90+% of CPU or memory). Running locally we can't seem to find the problem. Some items we think might be the culprit:
The app has a number of threads running in the background when users take certain actions
We are using memcached (on a different machine than the web server)
We are using web sockets
Other than that the app is pretty standard as far as web applications go. Couple of forms here, login/logout there, some admin capabilities to manage users and data; nothing super fancy.
I'm looking at two different solutions and wondering what would be best.
Create a page inside the app itself (available only to app admins) that shows information about the memory and CPU being used. Are there examples of this, or is it even possible? (A rough sketch of what such a page might look like follows this list.)
Use some type of 3rd party profiling service or application that gets installed on the web servers and allows us to drill down to find what is causing the high CPU and memory usage in the app.
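For option 1, something like the following is possible. This is only a rough sketch (the controller and view are invented, and it assumes an ASP.NET MVC app with an "Admin" role configured); it surfaces the worker process's own figures via System.Diagnostics:

    using System;
    using System.Diagnostics;
    using System.Web.Mvc;

    public class DiagnosticsController : Controller
    {
        [Authorize(Roles = "Admin")] // restrict the page to app admins
        public ActionResult Index()
        {
            using (Process process = Process.GetCurrentProcess())
            {
                ViewBag.WorkingSetMb    = process.WorkingSet64 / (1024 * 1024);
                ViewBag.PrivateMemoryMb = process.PrivateMemorySize64 / (1024 * 1024);
                ViewBag.TotalCpuTime    = process.TotalProcessorTime;
                ViewBag.ThreadCount     = process.Threads.Count;
            }
            return View(); // render the values in a simple admin-only view
        }
    }

Keep in mind this only shows what the app can see about its own process; it won't tell you whether something else on the box is the real culprit.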
I recommend the ASP.NET MVC MiniProfiler: http://miniprofiler.com/
It is simple to implement and to extend, can run in production mode, and can store its results in SQL Server. I have used it many times to find difficult performance issues.
Another possibility is to use http://getglimpse.com/ in combination with the miniprofiler glimpse-plugin https://github.com/mcliment/miniprofiler-glimpse-plugin
Both tools are open source and don't require admin access to the server.
You can hook up PreEmptive's Runtime Intelligence to it: http://www.preemptive.com/
Otherwise a profiler or a load test could help find the problem. Do you have anything monitoring the actual machine health (processor usage, memory usage, disk queue lengths, etc.)?
http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/04/getting-started-with-load-testing-in-visual-studio-2012.aspx
Visual Studio has a built-in profiler (depending on version and edition). You may be able to run WMI queries against the web server that has the issues, or write/provide diagnostic recording/monitoring tools and hand them over to someone who does have access.
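As a rough sketch of the WMI idea (the server name is a placeholder, it assumes the querying account has remote WMI/DCOM rights on that machine, and the project needs a reference to System.Management):

    using System;
    using System.Management;

    class RemoteHealthCheck
    {
        static void Main()
        {
            ManagementScope scope = new ManagementScope(@"\\WEBSRV01\root\cimv2");
            scope.Connect();

            // Current processor load per CPU.
            var cpuSearcher = new ManagementObjectSearcher(
                scope, new ObjectQuery("SELECT LoadPercentage FROM Win32_Processor"));
            foreach (ManagementObject cpu in cpuSearcher.Get())
                Console.WriteLine("CPU load: {0}%", cpu["LoadPercentage"]);

            // Free physical memory in KB.
            var memSearcher = new ManagementObjectSearcher(
                scope, new ObjectQuery("SELECT FreePhysicalMemory FROM Win32_OperatingSystem"));
            foreach (ManagementObject os in memSearcher.Get())
                Console.WriteLine("Free physical memory: {0} KB", os["FreePhysicalMemory"]);
        }
    }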
Do you have any output caching? What version of IIS? Does the 90% processor usage you are being alerted about actually show that your web process is the one responsible? (Perhaps it's not your app at all if the alert is improperly configured.)
I had a similar situation and created a system monitor for my app admins based on this project
I'm trying to understand how a code profiler (in this case the Drone Profiler) runs a .NET app differently from just running it directly. The reason I need to know is that I have a very strange problem/corruption with my dev computer's .NET install which manifests itself outside of the profiler but, very strangely, not inside it; if I can understand why, I can probably fix my computer's issue.
The issue ONLY seems to affect calls to System.Net.NetworkInformation's methods (and only under .NET 2.0-3.5; if I build something against 4, all is well). I built a little test app which does only one thing: it calls System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable(). Outside of the profiler I get a "Fatal Execution Engine Error" in System.dll, and that's all the info it gives. From what I understand that error usually results from native method calls, which presumably occur when System.dll lets some native DLL perform the network-availability check.
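The whole test app is essentially just this (paraphrased sketch):

    using System;
    using System.Net.NetworkInformation;

    class NetworkTest
    {
        static void Main()
        {
            // Built against .NET 2.0/3.5 this call dies with the
            // "Fatal Execution Engine Error" outside the profiler,
            // but runs fine when a profiler is attached.
            Console.WriteLine(NetworkInterface.GetIsNetworkAvailable());
        }
    }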
I tried to figure out the inside and outside the profiler difference using Process Monitor, recording events from both situations and comparing them. Both logs were the same until just a moment after iphlpapi.dll and winnsi.dll were called and just before the profiler-run code called dnsapi.dll and the non-profiler code began loading crash reporting related stuff. At that moment when it seemed to go wrong the profiler-run code created 4-6 new threads and the non-profiler (crashing) code only created 1 or 2. I don't know what that means, if anything.
Arguably unnecessary background
My Windows 7 included .NET installation (3.5 to 2.0) was working fine until my hard drive suffered some corruption and checkdisk began finding bad clusters. I imaged the drive to a new one and everything works fine except this one issue with .NET.
I need to resolve this problem without reinstalling Windows or reverting to image backups.
Here are some of the things I've looked into:
I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post- disk trouble and seen no changes where I didn't expect any (no obvious file corruption).
I have diffed the software and system registry hives pre- and post- disk trouble and seen no changes which seemed relevant.
I have created a new user account and cleaned up any environment variables in case environment was related. No change.
I did "sfc /scannow" and it found no integrity problems.
I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged and nothing changed.
I removed my virus scanner to see if it was interfering, no difference.
I tried running the test code in Safe Mode, same crash issue.
I assume I need to repair my .NET installation, but because .NET 2.0-3.5 ships as part of Windows 7 you can't just re-run a .NET installer to redo it. I do not have access to the Windows discs to try to re-install Windows over itself (the computer has a recovery partition, but it is unusable); the drive also uses a whole-disk encryption solution, which would make re-installing difficult.
I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, try and remember dozens of development-related customizations/etc.
Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working as I am a developer and need to build and test against it.
Thanks!
Quinxy
The short answer is that my System.ni.dll file was damaged, I replaced it and all is well.
The long answer might help someone else by way of its approach to the solution...
My problem related to .NET being damaged in such a way that apps wouldn't run except through a profiler. I downloaded the source for the SlimTune open source profiler, built it locally, and set a breakpoint right before the call to Process.Start(). I then compared all the parameters involved in starting the app successfully through the profiler versus manually. The only meaningful difference I found was the .NET profiling parameters added to the environment variables:
cor_enable_profiling=1
cor_profiler={38A7EA35-B221-425a-AD07-D058C581611D}
I then tried setting these in my own user's environment, and voila! Now any app I ran manually would work. (I had actually tried the same thing a few hours earlier, but I used a GUID that was included in an example and which didn't point to a real profiler; apparently .NET knew I had given it a bogus GUID and didn't run in profiling mode.)
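In other words, the difference boiled down to launching the process with the CLR profiling variables set, roughly like this sketch (the exe path is a placeholder; the GUID is the one SlimTune had put into the environment):

    using System.Diagnostics;

    class LaunchWithProfilingEnabled
    {
        static void Main()
        {
            ProcessStartInfo psi = new ProcessStartInfo(@"C:\Temp\NetworkTest.exe");
            psi.UseShellExecute = false; // required, otherwise EnvironmentVariables is ignored

            // The two variables the profiler adds; with these present the app ran
            // (in hindsight, presumably because the CLR skips the damaged native
            // image when profiling is enabled).
            psi.EnvironmentVariables["cor_enable_profiling"] = "1";
            psi.EnvironmentVariables["cor_profiler"] = "{38A7EA35-B221-425a-AD07-D058C581611D}";

            Process.Start(psi);
        }
    }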
I now went back and began reading about just how a PE file is executed by CLR hoping to figure out why it mattered that my app was run with profiling enabled. I learned a lot but nothing which seemed to apply.
I did, however, remember that I should recheck the chkdsk log I had kept, which listed the files damaged by the drive failure. After the failure I had turned all the listed file IDs into file paths/names and replaced all of the 100+ files I could from backup, but sure enough, when I went back now and looked, I found a note that while I had replaced 4 or 5 .NET-related files successfully, there was one such file I wasn't able to replace because it was "in use". That file? System.ni.dll! I was now able to replace this file from backup, and voila, my .NET install is back to normal; apps work whether profiled or not.
The frustrating thing is that when this incident first occurred I fully expected the problem to relate to a damaged file, and specifically to a file called System.dll which housed the methods that failed. And so I diffed and rediffed all files named System.dll. But I did not realize at that time that System.ni.dll is the pre-compiled native image generated from System.dll. And because I had diffed and rediffed the .NET-related directories without noticing this (no idea how I missed it), I'd given up on that approach.
Anyway... long story short, it was a damaged System.ni.dll that caused my problems, one or more clusters within it had their content replaced with 0x0 and it just so happened to manifest as the odd problem I observed.
This sounds like a timing issue, which the profiler "fixed" by making it just a little slower.
Many profilers use instrumentation (more info here), which slightly slows down the application. Apparently it slows one thread down enough that another thread can do a little more work, preventing the crash. Errors like these often do not manifest themselves directly on the developer's machine, but surface as soon as the code runs on a processor with more cores or hyper-threading. Sometimes they only occur in release builds (or, vice versa, in debug builds). Timing issues can be difficult to track down since the same code may give different results under different conditions (in a profiler or debugger).
From your description I'll attempt to do a wild guess on how it may be fixed:
Try to find where in the source the new threads are started. Then, right after they are spawned, add a System.Threading.Thread.Sleep(500); line to pause the main thread and give the new threads some time to start, as in the sketch below.
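Purely as an illustration (the thread body is a placeholder), the diagnostic change would look like this:

    using System;
    using System.Threading;

    class StartupRaceCheck
    {
        static void Main()
        {
            // Stand-in for wherever the real application spawns its extra threads.
            Thread worker = new Thread(delegate() { Console.WriteLine("worker running"); });
            worker.IsBackground = true;
            worker.Start();

            // Crude diagnostic aid, not a fix: if the crash stops happening with
            // this pause in place, the failure is a race between the main thread
            // and the newly started worker(s).
            Thread.Sleep(500);

            Console.WriteLine("main thread continues");
        }
    }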
Without the source code and a few stack traces of the crashes, this is quite a bit of guesswork.
I am writing a client-server launcher application.
An administrator will, from the server side, select executable files ('.exes') from a list and add these to a short-list of the apps that standard users can run on the client.
To compile this list, my client app will recursively search through all the folders on the system for exes and send this list over to the server via WCF.
To save search time and keep the list short, I would like to avoid searching through folders that are not LIKELY to contain '.exes' that human users are intended to run directly.
Examples (i think) :
%windir%\WinSxS - Windows Side-by-Side - used to store versions of Windows components that are built to reduce configuration problems with Dynamic Link Libraries.
%windir%\installer - used to store installation information for installed programs
C:\MSOCache - MS Office local install source
Most hidden folders
What other folders should I avoid searching through and what are they likely to contain?
I am interested in WinXP/WinVista/Win7.
EDIT:
Search time is not the most important factor.
It is very important not to exclude exes that the user may need to run AND to exclude exes like:
c:\Windows\winsxs\x86_microsoft-windows-x..rtificateenrollment_31bf3856ad364e35_6.1.7600.20520_none_f43289dd08ebec20\CertEnrollCtrl.exe
that were never meant to be directly launched by the user.
Since any heuristic approach dealing in "likely" will need the ability to add further items to catch cases deemed "unlikely" but which were actually important, you can't be "wrong" as such, just not "perfect".
With this in mind, I would take the opposite approach, and concentrate on those that are likely to have executables.
Recurse through the directories contained in the directory %ProgramFiles% points to.
Look in all other directories mentioned by the %Path% system variable (semicolon-delimited), but do not recurse into them.
Look for shortcuts on desktops, start menus and quicklaunch folders.
As a rule, one would expect the first and third to find the executables people launch from the start menu and icons, and the second to find executables used from the command line and by the system. Between them you'll find the large majority of relevant executables on a system with relatively few wasted directory examinations.
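A rough sketch of that approach (minus the shortcut scanning, and with all names invented for illustration) might look like this:

    using System;
    using System.Collections.Generic;
    using System.IO;

    static class ExeFinder
    {
        public static ICollection<string> FindLikelyUserExecutables()
        {
            HashSet<string> results = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

            // 1. Recurse through Program Files.
            string programFiles = Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles);
            CollectExes(programFiles, true, results);

            // 2. Top level only of every directory on the PATH.
            string path = Environment.GetEnvironmentVariable("PATH") ?? string.Empty;
            foreach (string dir in path.Split(new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
                CollectExes(dir.Trim(), false, results);

            // 3. (Not shown) resolve *.lnk shortcuts on desktops, start menus and
            //    quick-launch folders, e.g. via the Windows Script Host COM API.
            return results;
        }

        static void CollectExes(string dir, bool recurse, HashSet<string> results)
        {
            try
            {
                foreach (string exe in Directory.GetFiles(dir, "*.exe"))
                    results.Add(exe);

                if (recurse)
                    foreach (string sub in Directory.GetDirectories(dir))
                        CollectExes(sub, true, results);
            }
            catch (UnauthorizedAccessException) { /* skip folders we cannot read */ }
            catch (DirectoryNotFoundException) { /* stale PATH entries, etc. */ }
        }
    }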
How often are you going to run this program? It doesn't seem like it would take very long at all to just scan the whole drive for all the executables. I have a one-gigabyte drive on my development machine and "dir /b/s *.exe" only takes a minute or two. And it's very likely that my development machine has a whole lot more files on it than the typical user's computer.
Come to think of it, your client program could execute that command, capture the output, and send it to the server. Perhaps after a little pre-processing.
My point is that it shouldn't matter if the process were to take a full five minutes, if it's done only once (or very rarely) on each machine. The benefit is that you won't miss any executables that way.
I need to find a reliable way to update a running Windows Service (Service.exe).
The service is run under the LocalSystem account whereas the logged-in user is in a non-admin User account.
My current solution would be as follows:
- The Service.exe checks for updates (files) regularly
- When an update is found, it starts another service (Launcher.exe) that stops Service.exe, copies over the files, restarts Service.exe, and then stops itself (a rough sketch of this step follows the list)
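The core of the Launcher.exe step would be something like this sketch (the service name and paths are placeholders; it assumes a reference to System.ServiceProcess):

    using System;
    using System.IO;
    using System.ServiceProcess;

    static class Launcher
    {
        static void ApplyUpdate(string stagingDir, string installDir)
        {
            using (ServiceController service = new ServiceController("MyService"))
            {
                service.Stop();
                service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));

                // Copy the downloaded files over the old binaries.
                foreach (string file in Directory.GetFiles(stagingDir))
                {
                    string target = Path.Combine(installDir, Path.GetFileName(file));
                    File.Copy(file, target, true);
                }

                service.Start();
                service.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            }
        }
    }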
After doing some online reading and from some of my previous forum posts I believe this would be the appropriate solution, but before I go ahead I wanted to check with all the gurus and see if I am forgetting something important or if there is a better way.
I did read up on some methods of self-updating (loading & unloading assemblies, etc.) but they seemed very fragile, and I need this to be as robust as possible: if it fails, it means someone needs to intervene manually.
Any help or hints would be much appreciated.
Thanks,
The download / stop / apply changes / restart procedure is a fairly common and robust one. I definitely wouldn't try to get into the business of doing it without restarting. It may well be possible in many cases, but it's going to be a lot harder to get right.
Don't forget to make sure you can update the updater, by the way...
You could write a bootstrapper service as well.
Basically it's a lightweight service that you install; it watches a directory for DLLs that implement a certain interface.
Anything that matches, it loads up and runs as a service inside of itself.
Have it load two DLLs to start: one is your service, the other is your update service.
Since there is virtually no code in the bootstrapper, it shouldn't need to be updated.
But your updater will be able to update both itself and the service DLL periodically.
How your updater works is up to you. We found that having a publish location on our network, watching that folder for updates, additions and deletes, and syncing the local DLL folder was sufficient, but you could have it monitor a config file that points to DLLs anywhere if that is what is needed (we had to do that for one worldwide update system).
It's a bit tricky to get it all setup and working correctly. But once it is, it works great.
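To make the idea concrete, here is a heavily simplified sketch (the interface name, folder path and hosting details are all invented for illustration; a production bootstrapper would load each plugin into its own AppDomain so it can be unloaded and replaced without restarting the host):

    using System;
    using System.IO;
    using System.Reflection;

    public interface IHostedService
    {
        void Start();
        void Stop();
    }

    static class Bootstrapper
    {
        static void Main()
        {
            string pluginDir = @"C:\MyService\Plugins"; // placeholder path

            foreach (string dll in Directory.GetFiles(pluginDir, "*.dll"))
            {
                // Loading from bytes avoids locking the file on disk, so the
                // updater can overwrite it later.
                Assembly assembly = Assembly.Load(File.ReadAllBytes(dll));

                foreach (Type type in assembly.GetTypes())
                {
                    if (typeof(IHostedService).IsAssignableFrom(type) && !type.IsAbstract)
                    {
                        IHostedService service = (IHostedService)Activator.CreateInstance(type);
                        service.Start();
                    }
                }
            }

            // Console host used only to keep this sketch self-contained; the real
            // bootstrapper runs as a Windows service.
            Console.ReadLine();
        }
    }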
That is pretty much the right solution; however, somebody (usually the main service) needs to carry the code to update the launcher.
The launcher cannot update itself, for what should be obvious reasons.
I do a double launcher (launcher spawns second service that updates both) to simplify update checks.
BTW, even in the UNIX world we apply the same basic steps, only we overwrite the running binary while the service is still running. In the Windows world you can achieve the same thing by first renaming, out of the way, all the files you are about to overwrite.
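A tiny illustration of that rename trick (paths are placeholders):

    using System.IO;

    static class SwapBinary
    {
        static void Main()
        {
            string current = @"C:\MyService\Service.exe";
            string parked  = @"C:\MyService\Service.exe.old";
            string staged  = @"C:\MyService\Updates\Service.exe";

            if (File.Exists(parked))
                File.Delete(parked);     // clean up leftovers from a previous update

            File.Move(current, parked);  // renaming works even while Service.exe is running
            File.Copy(staged, current);  // the next service start picks up the new binary
        }
    }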