Replacing a DLL with a new version/fixes when it is loaded using reflection - C#

We created an extensible project in WCF using reflection.
The web service loads different modules at run time, depending on the input request.
We use .NET reflection to load the module libraries dynamically.
The system runs on IIS.
During our tests we noticed that we couldn't replace our existing DLLs once they had been loaded via reflection. We tried to copy a new DLL into the bin directory, but we received an error similar to "the DLL is in use by an application".
We can confirm that only our system uses that DLL.
Replacing the DLL would be possible if we stopped IIS, but we need to replace it without stopping IIS. Is there any way we can handle this at the code level?
Appreciate your quick response.
IOrder orderImpl = null;
try
{
    // Build the path to the module DLL that sits next to the executing assembly.
    string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase) + "\\" + assemInfo.AssemblyQualifiedName + ".dll";
    path = path.Replace("file:\\", "");
    // Load the assembly and instantiate the order implementation through its
    // (LocalOrderRequest) constructor. Note: LoadFile keeps the DLL locked on disk.
    Assembly a = Assembly.LoadFile(path);
    Type commandType = a.GetType(assemInfo.AssemblyQualifiedName + "." + assemInfo.ClassName);
    orderImpl = (IOrder)commandType.GetConstructor(new System.Type[] { typeof(LocalOrderRequest) }).Invoke(new object[] { order });
}
catch (Exception ex)
{
    throw new OrderImplException("-1", ex.Message);
}
Thanks
RSF

I'm going to make two assumptions from your question: 1) uptime is critical to your app, which is why it can't be shut down for 30 seconds to update it; 2) it is not in a fault-tolerant, load-balanced farm.
If that's the case, then solving #2 will also resolve how to update the DLL with no downtime.
For an app that can't be shut down for a few seconds to update a DLL, you should have an infrastructure that supports the stability you need. The risk of an unexpected outage is far greater than the impact of updating the app.
You should have more than one server behind a load-balancer that provides fault-tolerant routing if one of the servers goes down.
By doing this, you minimize the risk of downtime from failure, and you can update the DLLs by shutting off IIS on one node, updating it, then restarting it. The load balancer will recognize that the node is down and route traffic to the good node(s) until the updated one is available again. Repeat with the other node(s) and you've updated your app with no downtime.

You could try creating your own AppDomain and then loading/unloading assemblies from there. Here's an article about that: http://people.oregonstate.edu/~reeset/blog/archives/466
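For illustration, here is a minimal sketch of that approach. It assumes the plug-in type derives from MarshalByRefObject and has a parameterless constructor; the assembly and type names are hypothetical.

// Sketch: load the plug-in into a separate AppDomain with shadow copy enabled,
// so the DLL on disk stays replaceable while a copy of it is executing.
var setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
    ShadowCopyFiles = "true" // the runtime loads a cached copy, not the original file
};
AppDomain pluginDomain = AppDomain.CreateDomain("OrderPluginDomain", null, setup);
try
{
    // Only a transparent proxy crosses the domain boundary.
    IOrder orderImpl = (IOrder)pluginDomain.CreateInstanceAndUnwrap(
        "MyCompany.Orders",            // hypothetical assembly name
        "MyCompany.Orders.OrderImpl"); // hypothetical type name
    // ... use orderImpl ...
}
finally
{
    // Unloading the domain releases the assemblies it loaded.
    AppDomain.Unload(pluginDomain);
}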

Related

SSIS Package Created and Executed via Code Requires Service to be Restarted after Multiple Runs

We have existing C# code that dynamically creates SSIS packages which read from different files into a SQL Server 2016 database. It does what we need it to, but we stumbled into an issue that we remain unable to resolve: we are unable to keep running the Execute command without having to restart our custom Windows service.
Right now the limit is two files. If we run the code that calls Execute a third time, it gets stuck at post-validate (based on what we're logging via the Component and Run event handlers), and we cannot proceed until we restart the Windows service and run again. We cannot simply always restart the service, because other processes could be disrupted by that approach.
Things we've tried so far:
1. Extracted the dtsx package that the code creates and ran it using dtexec or BIDS locally. No errors or warnings were raised, and we can keep re-running the same package over and over without fail.
2. Ran the same code locally. Same result as #1.
3. Ran a SQL trace with the help of our DBA. Confirmed that no queries lock the destination table or the database itself. We did observe some SPIDs being retained after the third run, but all were in a sleeping state.
4. Modified our RunEventHandler and ComponentEventHandler to log which step the process is in. Also tried enabling logging via Event Viewer. No errors; it really just gets stuck at post-validate come the third run, as mentioned earlier.
5. Explicitly called the SSIS dispose methods, and even tried explicitly disposing the connection managers themselves.
6. Played around with the DelayValidation and ValidateExternalMetadata properties.
Any chance others have encountered this before and were able to figure out what's causing it?
To expound on the latest comment: we've found that the issue stems from the fact that we create a separate AppDomain to take care of running the job and consequently executing the package. The job runs in that domain via CreateInstanceAndUnwrap, and the domain itself is created with the following code:
private AppDomain CreateAppDomain(string applicationName)
{
    var uid = Guid.NewGuid();
    var fi = new FileInfo(Assembly.GetExecutingAssembly().Location);
    var info = new AppDomainSetup
    {
        ConfigurationFile = @"D:\Deploy\App\web.config",
        ApplicationBase = @"D:\Deploy\App",
        ApplicationName = applicationName,
        CachePath = fi.Directory.Parent.FullName + @"\Temp",
        ShadowCopyDirectories = @"D:\Deploy\App\bin",
        PrivateBinPath = "bin",
        ShadowCopyFiles = "true"
    };
    var domain = AppDomain.CreateDomain(uid.ToString(), null, info);
    return domain;
}
We were able to test this theory by calling AppDomain.Unload(domain) against the created domain. While we get a DomainUnloadException, this does prevent the job from freezing after being run twice.
We still haven't determined exactly what inside the domain gets locked up and prevents us from running the job more than twice; guidance on how to learn more about that would be helpful. In the meantime, we're using this workaround of unloading the app domain.
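To make the workaround concrete, the per-run unload looks roughly like the sketch below. PackageRunner is a hypothetical MarshalByRefObject wrapper around the SSIS LoadPackage/Execute calls; only CreateAppDomain comes from the code above.

public void RunPackageIsolated(string packagePath)
{
    // Sketch: execute each run in a fresh AppDomain, then unload the domain so
    // whatever SSIS leaves behind is torn down with it.
    AppDomain domain = CreateAppDomain("SsisRun-" + Guid.NewGuid());
    try
    {
        var runner = (PackageRunner)domain.CreateInstanceAndUnwrap(
            typeof(PackageRunner).Assembly.FullName,
            typeof(PackageRunner).FullName);
        runner.Execute(packagePath);
    }
    finally
    {
        try
        {
            AppDomain.Unload(domain); // releases assemblies and state held by the run
        }
        catch (CannotUnloadAppDomainException)
        {
            // A stuck thread can prevent the unload; log and continue.
        }
    }
}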

ASP.NET C# external SDK not thread-safe

I have a Web API in ASP.NET/C#.
It uses an external 32-bit ActiveX SDK to communicate with a third-party application.
From my tests, that SDK has problems when two different users connect at the same time: the second connection overwrites the first one.
If I call my API in two cURL loops, one connecting with userA and the other with userB, in some cases the call for userA will get the results of userB.
I don't have any static variables in my code; certainly none that refer to the SDK.
The only solution I can think of would be to "lock" the API while it is getting the response for a user. Is there any other solution? If not, any pointers on how to do this in C#?
The API has multiple controllers (think customer/invoice/payment/vendor), all of which use the same SDK. Thus, a call to a method of CustomerController must lock calls to the other controllers too.
The lock only needs to be held while I'm using the SDK (which is probably 99% of the request time).
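To illustrate, the kind of process-wide lock I have in mind would be something like this sketch (SdkGate and GetInvoices are placeholder names):

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: one process-wide gate that every controller acquires before
// touching the SDK, so a single request uses the SDK at a time.
public static class SdkGate
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task<T> RunExclusiveAsync<T>(Func<T> sdkWork)
    {
        await Gate.WaitAsync();
        try
        {
            return sdkWork(); // the SDK call runs while the gate is held
        }
        finally
        {
            Gate.Release();
        }
    }
}

// Usage from any controller action:
// var invoices = await SdkGate.RunExclusiveAsync(() => sdk.GetInvoices(config));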
Edit 1:
The SDK is named "Interop.AcoSDK.dll", and it is 32-bit. Visual Studio describes the file as "AcoSDK Library". It is the SDK for Acomba, an accounting program. The program itself has a very old structure, with origins dating back to DOS in the '80s (the program was named Fortune1000 back in those days). Interacting with the SDK is really not modern.
I've added the DLL to my project, and using it involves two parts.
AcoSDKX AcoSDK = new AcoSDKX();
int version = AcoSDK.VaVersionSDK;
if (AcoSDK.Start(version) != 0)
{
    throw new Exception("Failed to start SDK");
}
cie = new AcombaX();
if (cie.CompanyExists(config.ciePath) == 0)
{
    throw new Exception("Company not found");
}
int error = cie.OpenCompany(config.appPath, config.ciePath);
if (error != 0)
{
    throw new Exception("Failed to open company: " + cie.GetErrorMessage(error));
}
AcoSDK.User User = new AcoSDK.User
{
    PKey_UsNumber = config.user
};
try
{
    error = User.FindKey(1, false);
}
catch
{
    throw new Exception("Failed to find user");
}
if (error != 0)
{
    throw new Exception("Failed to find user");
}
error = cie.LogCurrentUser(User.Key_UsCardPos, config.pass);
if (error != 0)
{
    throw new Exception("Failed to login in Acomba: " + cie.GetErrorMessage(error));
}
The cie variable above is a private AcombaX cie field in the class.
That class is called from my other class to handle the connection to the SDK.
My other class declares it as a standard (non-static) object.
The config above refers to an object with attributes for the company/user the API request is for. Calls can be made for multiple companies.
At the moment, my problem is that when calling for different companies, data ends up mixed: values from Company B will show in my query of Company A, for example, when I loop 100 calls to the API in cURL for both companies at the same time. It doesn't happen every time, just sometimes, for some queries; probably when one call opens the SDK for Company B while the call for Company A has already connected to the SDK but hasn't started requesting data yet.
You need to share some more information about the ActiveX SDK (there is no such thing really). There are three types of ActiveX
(great explanation here)
ActiveX EXE: Unlike a stand-alone EXE file, an ActiveX EXE file is designed to work as an OLE server, which is nothing more than a program designed to share information with another program. It has an .EXE file extension.
ActiveX DLL: ActiveX DLL files are not meant to be used by themselves. Instead, these types of files contain subprograms designed to function as building blocks when creating a stand-alone program. It has a .DLL file extension.
ActiveX Control: Unlike an ActiveX DLL or ActiveX EXE file, an ActiveX Control file usually provides both subprograms and a user interface that you can reuse in other programs. It has an .OCX file extension.
Based on the format of the SDK and the way it's being used, there might be solutions to make the calls parallel.
Updating the question with some code, examples, etc. might enable me to shed some more light.
This could mean starting multiple applications instead of one and using them as a pool, creating multiple objects from the same library, and more.
I've posted a comment asking for clarification; I'm hoping that you're at a development stage where you can still redesign this feature.
ASP.NET is, generally, really poor at hosting non-trivial synchronous processes. It's easy to exhaust the thread pool, and concurrency issues like the ones you describe are not uncommon when moving from a "desktop" or RPC architecture.
I would strongly suggest changing your Web API / ASP.NET architecture for these kinds of operations to a queue/task-based approach: the client submits a task and then polls or subscribes for a completion result. This allows you to design the backend to operate in any manner necessary to prevent data-corruption problems due to shared libraries. Perhaps one long-lived backend process per company which processes requests; I don't know enough about your needs to make intelligent suggestions, but you have a lot of options here.
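As a rough sketch of that shape (all names are hypothetical, and this is one option among many): a single long-lived worker per company drains a queue, so the SDK is only ever touched by one thread for that company.

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: controllers enqueue work for a company and await the result; the
// dedicated worker thread is the only thread that ever calls into the SDK.
public class CompanyWorker
{
    private readonly BlockingCollection<(Func<object> Work, TaskCompletionSource<object> Done)> _queue =
        new BlockingCollection<(Func<object> Work, TaskCompletionSource<object> Done)>();

    public CompanyWorker()
    {
        var thread = new Thread(() =>
        {
            foreach (var item in _queue.GetConsumingEnumerable())
            {
                try { item.Done.SetResult(item.Work()); }
                catch (Exception ex) { item.Done.SetException(ex); }
            }
        });
        thread.SetApartmentState(ApartmentState.STA); // ActiveX/COM components usually expect STA
        thread.IsBackground = true;
        thread.Start();
    }

    public Task<object> Run(Func<object> work)
    {
        var done = new TaskCompletionSource<object>();
        _queue.Add((work, done));
        return done.Task;
    }
}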

VS2013 publish Web deployment task failed The file is in use

I am using VS2013 Premium to publish a site to Windows Server 2012.
All files publish ok except these:
SqlServerTypes\x64\msvcr100.dll
SqlServerTypes\x64\SqlServerSpatial110.dll
SqlServerTypes\x86\msvcr100.dll
SqlServerTypes\x86\SqlServerSpatial110.dll
I get this kind of error for each of the above files when I try to publish:
Web deployment task failed. (The file 'msvcr100.dll' is in use. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_FILE_IN_USE.)
Interestingly, these files were published the first time (when they were not yet on the server); after that they are never overwritten. I tried this with two different web servers.
I have followed the guide here:
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
...but it only managed to take the site offline (VS places the app_offline.htm); the publish still fails with the same error.
All other files publish perfectly.
Any ideas?
You can take your app offline during publishing, which should free up the lock on the file and allow you to update it.
I blogged about this a while back. The support outlined there shipped inside the Azure SDK and a Visual Studio update. I don't remember the exact releases, but any update dating around or after that blog post should be fine.
Prerequisites:
VS 2012 + VS update / VS 2013 + VS Update / VS2015
MSDeploy v3
Note: if you are publishing from a CI server the CI server will need the updates above as well
Edit the publish profile
In VS, when you create a Web Publish profile, the settings from the dialog are stored in Properties\PublishProfiles\ as files that end with .pubxml. Note: there is also a .pubxml.user file; that file should not be modified.
To take your app offline, add the following property in the .pubxml file.
<EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
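For context, a .pubxml is just an MSBuild file, so the property sits in a PropertyGroup alongside the settings the publish dialog generated (abbreviated sketch; the other properties vary by profile):

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
  </PropertyGroup>
</Project>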
Notes
ASP.NET Required
The way this has been implemented on the MSDeploy side is that an app_offline.htm file is dropped in the root of the website/app. From there, the ASP.NET runtime detects it and takes your app offline. Because of this, if your website/app doesn't have ASP.NET enabled, this function will not work.
Cases where it may not work
The implementation means the app may not strictly be offline before the publish starts. First the app_offline.htm file is dropped, then MSDeploy starts publishing the files; it doesn't wait for ASP.NET to detect the file and actually take the app offline. Because of this, you may still run into the file lock. By default VS enables retries, so usually the app goes offline during one of the retries and all is good. In some cases it may take longer for ASP.NET to respond; that is a bit more tricky.
In the case that you add <EnableMSDeployAppOffline>true</EnableMSDeployAppOffline> and your app is not being taken offline soon enough, I suggest taking the app offline before the publish begins. There are several ways to do this remotely, depending on your setup. If you only have MSDeploy access, you can try the following sequence:
1. Use msdeploy.exe to take the site offline by dropping app_offline.htm.
2. Wait some amount of time for ASP.NET to detect it.
3. Use msdeploy.exe to publish your app (make sure the sync doesn't delete the app_offline.htm file).
4. Use msdeploy.exe to bring the app online by deleting app_offline.htm.
I have blogged about how to do this at http://sedodream.com/2012/01/08/howtotakeyourwebappofflineduringpublishing.aspx. The only thing missing from that blog post is the delay to wait for the site to actually be taken offline. You can also create a script that calls msdeploy.exe directly instead of integrating it into the project build/publish process.
I have found the reason why the solution at
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
did not work for the original poster, and I have a workaround.
The issue with the EnableMSDeployAppOffline approach is that it only recycles the app domain hosting the application. It does not recycle the app pool worker process (w3wp.exe) in which the app domain lives.
Tearing down and recreating the app domain does not affect the SQL Server spatial DLLs in question. Those DLLs are unmanaged code, loaded manually via interop LoadLibrary calls, so they live outside the purview of the app domain.
In order to release the file locks, which the app pool process puts on them, you need to either recycle the app pool or unload the DLLs from memory manually.
The Microsoft.SqlServer.Types NuGet package ships a class, SqlServerTypes.Utilities, which is used to load the spatial DLLs. You can modify its LoadNativeAssemblies method to unload the unmanaged DLLs when the app domain is unloaded. With this modification, when MSDeploy copies app_offline.htm, the app domain unloads and the unmanaged DLLs are unloaded as well.
[DllImport("kernel32.dll", SetLastError = true)]
internal extern static bool FreeLibrary(IntPtr hModule);

private static IntPtr _msvcrPtr = IntPtr.Zero;
private static IntPtr _spatialPtr = IntPtr.Zero;

public static void LoadNativeAssemblies(string rootApplicationPath)
{
    if (_msvcrPtr != IntPtr.Zero || _spatialPtr != IntPtr.Zero)
        throw new Exception("LoadNativeAssemblies already called.");

    var nativeBinaryPath = IntPtr.Size > 4
        ? Path.Combine(rootApplicationPath, @"SqlServerTypes\x64\")
        : Path.Combine(rootApplicationPath, @"SqlServerTypes\x86\");

    _msvcrPtr = LoadNativeAssembly(nativeBinaryPath, "msvcr100.dll");
    _spatialPtr = LoadNativeAssembly(nativeBinaryPath, "SqlServerSpatial110.dll");

    // Free the native libraries when the app domain unloads (e.g. when
    // app_offline.htm is dropped), releasing the file locks.
    AppDomain.CurrentDomain.DomainUnload += (sender, e) =>
    {
        if (_msvcrPtr != IntPtr.Zero)
        {
            FreeLibrary(_msvcrPtr);
            _msvcrPtr = IntPtr.Zero;
        }
        if (_spatialPtr != IntPtr.Zero)
        {
            FreeLibrary(_spatialPtr);
            _spatialPtr = IntPtr.Zero;
        }
    };
}
There is one caveat with this approach: it assumes your application is the only one in the worker process using the spatial DLLs. Since app pools can host multiple applications, the file locks will not be released if another application has also loaded them, and your deploy will fail with the same file-locked error.
There are known issues with IIS and file locks (why they aren't solved yet, I don't know).
The question I want to ask, however, is whether you even need to re-deploy these files.
I recognize the file names and recall them to be system files which should either already be present on the server or simply not need to be re-deployed.
I am not very experienced when it comes to IIS, but I have run into this problem before, and several of my more experienced co-workers have told me that this is, as I said, a known IIS issue. I believe the answer to your question is:
Avoid deploying unnecessary files.
Try again.
Reset the website, then try again.
Run iisreset.
I think the easiest thing to do is to mark these DLLs with CopyLocal = true. I am assuming these DLLs are pulled from the Program Files folder; try marking them as CopyLocal = true and do a deployment. Also try stopping any local IIS process running on your machine.
Watch out that you don't have one of those new-fangled cloud backup services running that takes file locks, and that you don't have things open in Explorer or a DLL inspection tool.
I think it's kind of ridiculous that MS doesn't make better provisions for this problem. I find that nine times out of ten my deployment works just fine, but as our traffic increases that can become one in ten.
I am going to solve the problem with:
Two applications, MySite.A and MySite.B, where only one is running at a time.
I always deploy to the dormant site.
If there's a problem during the deployment, it will never cause the whole site to go down.
If there's a major problem after deployment, you can revert back very easily.
Not quite sure how I'm implementing it yet, but I think this is what I need to do.

Configure log4net logging in a DLL

I'm setting up a DLL to be used as a third-party DLL in a different application. I want this DLL to have its own logging, so the external application doesn't have to set anything up (I don't believe they use the same logging framework we do). I've read that this may not be the best solution, but it's the task I've been given. We want to use log4net for this. I've looked at a few other questions on here, and they mention that log4net is configurable via code; however, the main issue I'm having is that there is no clear-cut entry point into our code where log4net could be configured. I'm curious whether I should abandon having the DLL configure itself and instead have a method that the host application calls to configure the DLL's logging, or whether there is a better way to go about this. Any input would be much appreciated.
You can configure log4net programmatically. Perhaps add this code to the constructor of your DLL's entry class:
if (!log4net.LogManager.GetRepository().Configured)
{
    // my DLL is referenced by web service applications to log SOAP requests before
    // execution is passed to the web method itself, so I load the log4net.config
    // file that resides in the web application root folder
    var configFileDirectory = (new DirectoryInfo(TraceExtension.AssemblyDirectory)).Parent; // not the bin folder but up one level
    var configFile = new FileInfo(configFileDirectory.FullName + "\\log4net.config");
    if (!configFile.Exists)
    {
        throw new FileLoadException(String.Format("The configuration file {0} does not exist", configFile));
    }
    log4net.Config.XmlConfigurator.Configure(configFile);
}
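Once that check has run, classes inside the DLL can take loggers as usual. A small usage sketch (OrderProcessor is a hypothetical class, and LoggingSetup.EnsureConfigured is a hypothetical helper wrapping the configuration block above):

public class OrderProcessor
{
    // Typical per-class logger.
    private static readonly log4net.ILog Log =
        log4net.LogManager.GetLogger(typeof(OrderProcessor));

    public void Process()
    {
        LoggingSetup.EnsureConfigured(); // runs the one-time configuration check above
        Log.Info("Processing started");
    }
}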

Register ocx files remotely

I have some VB6 .ocx files that I would like to register. These .ocx files are on a remote machine.
What is the best way to register these .ocx files programmatically?
string arg_fileinfo = "/s" + " " + "\"" + "\\<remotemachine>\\<directory>\\<ocx>" + "\"";
Process reg = new Process();
//This file registers .dll files as command components in the registry.
reg.StartInfo.FileName = "regsvr32.exe";
reg.StartInfo.Arguments = arg_fileinfo;
reg.StartInfo.UseShellExecute = false;
reg.StartInfo.CreateNoWindow = true;
reg.StartInfo.RedirectStandardOutput = true;
reg.Start();
reg.WaitForExit();
reg.Close();
I'm not getting any errors, but it isn't registering the .ocx either. Any ideas?
If you want to register a remote file for use on a local machine, there is nothing special required for registering a file on a UNC path, but you do need to make sure that the UNC path or mapped drive is still available to all users, especially the user that is running regsvr32. Presumably this will be the local admin, which (by default on Windows Vista+) will require elevation, and elevation can disconnect network connections.
Also note that your example is missing an extra \ at the beginning of the UNC path. Your code will result in arg_fileinfo containing /s "\<remotemachine>\<directory>\<ocx>".
You can add the extra \, or use the @ prefix (a verbatim string literal), which makes it a lot clearer when entering Windows paths:
string arg_fileinfo = "/s \"" + @"\\<remotemachine>\<directory>\<ocx>" + "\"";
Or just use it for the entire string with the doubled-quote escaping method:
string arg_fileinfo = @"/s ""\\<remotemachine>\<directory>\<ocx>""";
Take this as a warning you're free to ignore (because I know you will anyway):
Doing this isn't a good practice. To begin with, "run from network" PE files (EXE, DLL, OCX) need to be specially linked for it, or you risk high network activity and crashes due to intermittent network interruptions. And registering anything not on the boot drive, or at least a local hard drive, isn't sensible anyway. Doing any of this ranks high on the "poor practices" list, even though it might seem to work most of the time.
Why not just do normal deployment following accepted practices?
My guess would be that you are doing a lot of Mort development, throwing together version after version of some program hoping one of them will eventually "stick." So you want to dump some or all of it onto a network share, thinking "Installation? Installation? We don't need no steenking installation. I can just plop new files out there and have everything magically work with no effort."
I'll assume you don't have the luxury of a managed network you can use to push out updates via Group Policy, and that you aren't creating the necessary MSI installer packages handling the Product and Upgrade Codes in them.
One alternative would be to use reg-free COM, which will solve a lot of small issues for you.
Now, you could do this and still ignore the hazards of PE files run from a network share, or you could bypass them using a small launcher program. That launcher could check a network share for a new version and, if one is found, copy the newer files to the local PC before starting the actual application and terminating. This is basically an auto-updating XCopy-deployment technique.
You can get as fancy as need be. For example, if your application accepts command-line parameters, it might do the new-version check itself and, if an update is found, start the small updater (passing it the command-line parameters), then terminate. The updater app could then restart the new version and pass those parameters along. A bare-bones sketch of such a launcher follows.
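This is just an illustration; the share and application paths are hypothetical.

using System;
using System.Diagnostics;
using System.IO;

class Launcher
{
    static void Main(string[] args)
    {
        const string share = @"\\server\apps\MyApp"; // hypothetical network share
        string local = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "MyApp");
        Directory.CreateDirectory(local);

        foreach (string source in Directory.GetFiles(share))
        {
            string target = Path.Combine(local, Path.GetFileName(source));
            // Copy only when the share holds a newer file than the local copy.
            if (!File.Exists(target) || File.GetLastWriteTimeUtc(source) > File.GetLastWriteTimeUtc(target))
                File.Copy(source, target, true);
        }

        // Start the real application locally, passing the arguments through.
        Process.Start(Path.Combine(local, "MyApp.exe"), string.Join(" ", args));
    }
}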
But yes, life as Mort (or even an official on-the-payroll developer) can be a pain. It can be extremely difficult to get the attention of your friendly neighborhood box jockeys to do things properly even if you are working in a managed corporate LAN environment. That goes double if your application isn't part of some highly engineered sanctioned Major Project.
I had to do this several years ago. As best I can remember, UNC names wouldn't work; a mapped drive letter was required. Whether it was strictly a regsvr32 issue or was caused by something else (e.g. Windows 95) is lost in the fog of time.
If you want to register the file for use on the remote machine, you need to run the code on that remote machine.
You can do this by physically sitting in front of the computer, by using remote-control software, or with a remote admin tool like psexec.exe.
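For the psexec.exe route, the call mirrors the code in the question, except the process launched locally is psexec, which then runs regsvr32 on the remote machine itself (a sketch; the machine name and path placeholders are the same as above, and PsExec must be available on the PATH):

// PsExec runs regsvr32 on the remote machine, so the COM registration is
// written to the remote registry rather than the local one.
Process reg = new Process();
reg.StartInfo.FileName = "psexec.exe";
reg.StartInfo.Arguments = @"\\<remotemachine> regsvr32 /s C:\<directory>\<ocx>";
reg.StartInfo.UseShellExecute = false;
reg.StartInfo.CreateNoWindow = true;
reg.Start();
reg.WaitForExit();
reg.Close();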
