I have a Web API in ASP.NET/C#.
It uses an external 32-bit ActiveX SDK to communicate with a third-party application.
From my tests, that SDK has problems when two different users connect at the same time: the second connection overwrites the first one.
If I call my API in two cURL loops, one connecting with userA and the other with userB, in some cases the call for userA will return the results for userB.
I don't have any static variables in my code, and certainly none that refer to the SDK.
The only solution I can think of would be to "lock" the API while it is getting the response for a user. Is there any other solution? If not, any pointers on how to do this in C#?
The API has multiple controllers (think customer/invoice/payment/vendor), all of which are using the same SDK. Thus, a call to a method of CustomerController must lock calls to the other controllers too.
The lock only needs to be active while I am using the SDK (which is probably 99% of the request time).
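To illustrate what I mean by "lock", something like this sketch is what I'm imagining (SdkGate is just a placeholder name; the real SDK calls would go where sdkWork runs):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A single gate shared by the whole application: every controller funnels
// its SDK work through RunExclusiveAsync, so only one request touches the
// SDK at a time, regardless of which controller it came in on.
public static class SdkGate
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task<T> RunExclusiveAsync<T>(Func<T> sdkWork)
    {
        await Gate.WaitAsync();   // queue behind whoever currently holds the SDK
        try
        {
            return sdkWork();     // the "99% of the request time" SDK section
        }
        finally
        {
            Gate.Release();       // always free the gate, even if the SDK throws
        }
    }
}
```

Each controller action would then wrap its SDK usage, e.g. `var result = await SdkGate.RunExclusiveAsync(() => QueryCustomer(id));`.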
Edit 1:
The SDK is named "Interop.AcoSDK.dll", and it is 32-bit. Visual Studio describes the file as "AcoSDK Library". It is an SDK for Acomba, an accounting program. The program itself has a very old structure, with origins dating back to DOS in the '80s (the program was named Fortune1000 in those days). Interaction with the SDK is really not modern.
I've added the DLL to my project, and using it involves two parts:
AcoSDKX AcoSDK = new AcoSDKX();
int version = AcoSDK.VaVersionSDK;
if (AcoSDK.Start(version) != 0)
{
throw new Exception("Failed to start SDK");
}
cie = new AcombaX();
if (cie.CompanyExists(config.ciePath) == 0)
{
throw new Exception("Company not found");
}
int error = cie.OpenCompany(config.appPath, config.ciePath);
if (error != 0)
{
throw new Exception("Failed to open company: " + cie.GetErrorMessage(error));
}
AcoSDK.User User = new AcoSDK.User
{
PKey_UsNumber = config.user
};
try
{
    error = User.FindKey(1, false);
}
catch (Exception ex)
{
    // Preserve the original SDK exception instead of swallowing it
    throw new Exception("Failed to find user", ex);
}
if (error != 0)
{
    throw new Exception("Failed to find user");
}
error = cie.LogCurrentUser(User.Key_UsCardPos, config.pass);
if (error != 0)
{
throw new Exception("Failed to login in Acomba: " + cie.GetErrorMessage(error));
}
The cie attribute above is a private AcombaX cie in the class.
That class is called from another class of mine that handles the connection to the SDK.
That other class declares it as a standard (non-static) object.
The config above refers to an object holding the company/user the API request is for. Calls for multiple companies can be made.
At the moment, my problem is that when calling for different companies, data ends up mixed: values from Company-B show up in my query for Company-A, for example, when I loop 100 cURL calls to the API for both companies at the same time. It doesn't happen every time, just sometimes, for some queries. Probably when one call opens the SDK for Company-B while the call for Company-A has already connected to the SDK but hasn't started requesting data yet.
You need to share some more information about the ActiveX SDK (there is no such single thing, really). There are three types of ActiveX component:
(great explanation here)
ActiveX EXE: Unlike a stand-alone EXE file, an ActiveX EXE file is designed to work as an OLE server, which is nothing more than a program designed to share information with another program. It has an .EXE file extension.
ActiveX DLL: ActiveX DLL files are not meant to be used by themselves. Instead, these types of files contain subprograms designed to function as building blocks when creating a stand-alone program. It has a .DLL file extension.
ActiveX Control: Unlike an ActiveX DLL or ActiveX EXE file, an ActiveX Control file usually provides both subprograms and a user interface that you can reuse in other programs. It has an .OCX file extension.
Based on the format of the SDK and the way it's being used, there might be solutions to make the calls parallel. This could mean starting multiple applications instead of one and using them as a pool, creating multiple objects from the same library, and more.
Updating the question with some code, examples, etc., might enable me to shed some more light.
I've posted a comment asking for clarification; I'm hoping that you're at a development stage where you can redesign this feature.
ASP.NET is, generally, really poor at non-trivial synchronous processing. It's easy to exhaust the thread pool, and concurrency issues like the ones you describe are not uncommon when moving from a "desktop" or RPC architecture.
I would strongly suggest changing your WebAPI / ASP.NET architecture for these kinds of operations to a queue/task-based approach. The client submits a task and then polls/subscribes for a completion result. This will allow you to design the backend to operate in any manner necessary to prevent data-corruption problems due to shared libraries. Perhaps one long-lived backend process per company which processes requests; I don't know enough about your needs to make intelligent suggestions, but you have a lot of options here.
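To make the "one long-lived worker per company" idea concrete, here is a minimal in-process sketch (the class and method names are mine; in a real deployment the worker could just as well be a separate process consuming a durable queue):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// One serialized work queue per company: requests for the same company run
// in submission order on a single dedicated worker thread, so SDK sessions
// for that company can never interleave. Different companies still run in
// parallel on their own workers.
public class CompanyWorkQueues
{
    private readonly ConcurrentDictionary<string, BlockingCollection<Action>> _queues
        = new ConcurrentDictionary<string, BlockingCollection<Action>>();

    public Task<T> Enqueue<T>(string company, Func<T> work)
    {
        var tcs = new TaskCompletionSource<T>();
        var queue = _queues.GetOrAdd(company, StartWorker);
        queue.Add(() =>
        {
            try { tcs.SetResult(work()); }
            catch (Exception ex) { tcs.SetException(ex); }
        });
        return tcs.Task; // the caller polls/awaits this for completion
    }

    private BlockingCollection<Action> StartWorker(string company)
    {
        var queue = new BlockingCollection<Action>();
        var thread = new Thread(() =>
        {
            // Jobs for this company run strictly one at a time, in order.
            foreach (var job in queue.GetConsumingEnumerable())
                job();
        });
        thread.IsBackground = true;
        thread.Start();
        return queue;
    }
}
```

A controller would then do `var invoice = await queues.Enqueue(companyId, () => LoadInvoice(id));` instead of calling the SDK directly.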
I have an application that I have just added ClickOnce to as an update method. I'm about to pull it and do something else, even after working through all the gotchas of dealing with ClickOnce in a moderately complex application. Well, it's not even a complex application, but it's going onto dedicated hardware, so I have a few odd requirements, like completely transparent and automatic updates, no odd little pop-up windows, etc. The main one is that the application starts and takes over the system at boot.
Where this causes trouble for ClickOnce is that when the system first boots, there is no network - the Wi-Fi is still getting started and connecting. The application handles this, checking for the network to get started and then connecting to our server. ClickOnce is a different matter. If there is no network when the application starts, then all the ApplicationDeployment functions will not work, even after the network is started.
So, for example, I use something like this to get the version:
if (ApplicationDeployment.IsNetworkDeployed)
Version = ApplicationDeployment.CurrentDeployment.CurrentVersion.ToString();
else
Version = "unknown";
If I run the application at boot (that is, before the network is working), this code will return "unknown" for the rest of the application run, even after the network is up. If I shut down the application and restart it, it shows the deployed version. So technically, the IsNetworkDeployed is returning an incorrect value. The application WAS network deployed; it's just not on a network NOW. I'll probably post this as a bug over on MSDN.
BTW, the application does not actually require a network to run, so at startup, I can't take the path of "wait until the network is ready, then restart the application automatically". The hardware can move around, and may be in the middle of nowhere with no available network at all. I still have to deal with that (and I don't actually return "unknown" for that case, I just pull the version from the assembly). And if the problem was just getting a version, I wouldn't care, but this means that there is no way to ever update the application, since it always starts with no network, and it will never get to my code to check for, download, and auto-update the application.
Before I write off all my ClickOnce work, I was wondering if anyone knew of a way to reinitialize ApplicationDeployment, so that it will figure out that there is a network and enable all that ClickOnce goodness.
This is basically what that check is doing:
private static bool _isNetworkDeployed;
private static bool _isNetworkDeployedChecked;
public static bool IsNetworkDeployed
{
get
{
if (!_isNetworkDeployedChecked)
{
_isNetworkDeployed = (
AppDomain.CurrentDomain != null &&
AppDomain.CurrentDomain.ActivationContext != null &&
AppDomain.CurrentDomain.ActivationContext.Identity != null &&
AppDomain.CurrentDomain.ActivationContext.Identity.FullName != null);
//_isNetworkDeployed = ApplicationDeployment.IsNetworkDeployed;
_isNetworkDeployedChecked = true;
}
return _isNetworkDeployed;
}
}
We ran into the same issue with ClickOnce and reverse engineered the check. You could modify this to do your own checking prior to calling the .NET version.
We created an extensible project in WCF using reflection.
The web service loads different modules at run time, depending on the input request.
We use .NET reflection to dynamically load the module libraries.
The system runs on IIS.
During our tests we noticed that we couldn't replace our existing DLLs once they were loaded via reflection. When we tried to copy a new DLL into the bin directory, we received an error similar to "the DLL is in use by an application".
We can assure you that only our system uses that DLL.
Replacing the DLL would be possible by stopping IIS, but we require replacing the DLL without stopping IIS. Is there any way we can handle this at the code level?
Appreciate your quick response.
IOrder orderImpl = null;
try
{
string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase) + "\\" + assemInfo.AssemblyQualifiedName + ".dll";
path = path.Replace("file:\\", "");
Assembly a = Assembly.LoadFile(path);
Type commandType = a.GetType(assemInfo.AssemblyQualifiedName + "." + assemInfo.ClassName);
orderImpl = (IOrder)commandType.GetConstructor(new System.Type[] { typeof(LocalOrderRequest) }).Invoke(new object[] { order });
}
catch (Exception ex)
{
throw new OrderImplException("-1", ex.Message);
}
Thanks
RSF
I'm going to make two assumptions from your question: 1) Uptime is critical to your app, and that's why it can't be shut down for 30 seconds to update it; 2) It is not in a fault-tolerant, load-balanced farm.
If that's the case, then solving #2 will also resolve how to update the DLL with no downtime.
For an app that can't be shutdown for a few seconds to update a DLL, you should have an infrastructure that supports that needed stability. The risk of an unexpected outage is far greater than the impact of updating the app.
You should have more than one server behind a load-balancer that provides fault-tolerant routing if one of the servers goes down.
By doing this, you minimize the risk of downtime from failure, and you can update the DLLs by shutting off IIS on one node, updating it, then restarting it. The load balancer will recognize that the node is down and route traffic to the good node(s) until the updated one is available again. Repeat with the other node(s) and you've updated your app with no downtime.
You could try creating your own AppDomain and then load/unload assemblies from there. Here's an article about that: http://people.oregonstate.edu/~reeset/blog/archives/466
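A rough sketch of that approach on .NET Framework (AppDomain unloading is Framework-only; `PluginLoader` and `PluginHost` are names I've invented for illustration):

```csharp
using System;

// .NET Framework-only sketch: load modules in a secondary AppDomain with
// shadow copying enabled, so the physical DLL in bin stays unlocked and can
// be overwritten; unload and recreate the domain to pick up the new version.

// Hypothetical MarshalByRefObject wrapper: it executes inside the plug-in
// domain, so no plug-in type is ever loaded into (and locked by) the main
// domain.
public class PluginLoader : MarshalByRefObject
{
    public string LoadAndDescribe(string dllPath)
    {
        var asm = System.Reflection.Assembly.LoadFrom(dllPath);
        return asm.FullName; // pass only serializable data back across domains
    }
}

public static class PluginHost
{
    public static AppDomain Create()
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ShadowCopyFiles = "true" // yes, a string rather than a bool, in this API
        };
        return AppDomain.CreateDomain("PluginDomain", null, setup);
    }

    public static void Reload(ref AppDomain domain)
    {
        AppDomain.Unload(domain);  // releases the shadow-copied assemblies
        domain = Create();         // the next load sees the replaced DLL
    }
}
```

The main domain would create a `PluginLoader` via `domain.CreateInstanceAndUnwrap(...)` and talk to it only through that proxy; calling `Reload` after dropping a new DLL into bin swaps implementations without touching IIS.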
I have a web application. For a presentation to my client, they asked me to install it on their local server so they can test it. Here is my question:
Is there any way I can publish uniquely for that server? I did put in some limitations, but many features in my app are open, so they could make a disk image of the server and use it anywhere else.
Is there any method I can use so my web application checks whether it is running on the same server (by hardware ID or anything; I don't have any idea) before it starts to work?
I saw many code samples, but they are WinForms examples for generating a unique HID. How can I do the same in ASP.NET?
EDIT
Could you take a look at this also?
I am using the System.Management classes.
Is this reliable? I mean, are these IDs unique?
private string GetUniqueID()
{
string cpuInfo = string.Empty;
ManagementClass mc = new ManagementClass("win32_processor");
ManagementObjectCollection moc = mc.GetInstances();
foreach (ManagementObject mo in moc)
{
if (cpuInfo == "")
{
//Get only the first CPU's ID
cpuInfo = mo.Properties["processorID"].Value.ToString();
break;
}
}
ManagementObject dsk = new ManagementObject(@"win32_logicaldisk.deviceid=""" + "C" + @":""");
dsk.Get();
string volumeSerial = dsk["VolumeSerialNumber"].ToString();
string HardWareUniqueID = volumeSerial + cpuInfo;
return HardWareUniqueID;
}
Appreciate your answers,
Thanks in advance
If you want to avoid having it "phone home" an alternative is to generate some kind of certificate and place it on the machine. Use a private key that only you know to encrypt the machine name and/or IP. Then have your app use your public key to decrypt it to verify that it is allowed to run on this server. Nobody who doesn't know your private key will be able to create valid certificates.
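A sketch of that scheme with RSA signatures (the class and method names are mine; in practice you would embed only the public key in the app and keep the private key offline):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// The idea: you sign the machine name with your private key at install time,
// ship the signature as a "license file", and the app verifies it on startup
// with the embedded public key. Without the private key, nobody can produce
// a valid license for a different machine.
public static class MachineLicense
{
    // Run by you, never shipped: produce the signature for one machine.
    public static byte[] Sign(string machineName, RSA privateKey)
    {
        byte[] data = Encoding.UTF8.GetBytes(machineName);
        return privateKey.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }

    // Run by the app at startup: only signatures made with your key pass.
    public static bool Verify(string machineName, byte[] signature, RSA publicKey)
    {
        byte[] data = Encoding.UTF8.GetBytes(machineName);
        return publicKey.VerifyData(data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    }
}
```

On startup the app would call `Verify(Environment.MachineName, licenseBytes, embeddedPublicKey)` and refuse to run if it returns false.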
You have a few choices...
Lock your web site to the single IP address you install it on. To make your life easier, check for that IP in a common page base class. (Note, you could also write HTTP handlers, but the base-class approach is easier.)
Put a 'phone home' call in the app that checks with your server every time it's started up. That way you can check if they have moved it or if multiple instances are running.
Use the built-in licensing features of .NET (the same one third-party developers use for controls, etc.)
The easiest... just put in a time-bomb that lets them test it for a few weeks, then automatically blocks access. Be smart though... persist the last-checked time so you can tell if they've rolled back their clock trying to get more usage.
Just make sure to distribute a web application, not a web project, so you can distribute your code as a compiled binary rather than having to ship the code-behind files. That will keep prying eyes out, but it does make deployment more of a pain, since you always have to recompile with every change (as opposed to on-demand compiling).
I would put in a time bomb. It's trivial to implement, and your clients won't think that you don't trust them. A fixed evaluation period in an application is extremely common.
Provide them a VMware image without any user access; just allow them to open the website externally via HTTP in their web browser.
I have a DLL written in Visual C++ and an application written in C#. The application that is written in C# already uses IPC between multiple instances of itself so that it only ever runs one instance (it attempts to start an IPC server, and if it can't assumes there's already one running, in which case it sends the command line arguments over IPC to the existing application).
Now I want to send commands from Visual C++; however, even when I create a type definition in Visual C++ that matches the one in C# (at the implementation level), the connection is rejected because they are fundamentally still two different types (from two different assemblies).
I thought about using Reflection in Visual C++ to fetch the type from the C# assembly, but I can't do that because then I'd have to ship the assembly alongside the DLL (which defeats the purpose of the DLL being an API to the application).
I'm not sure of any other way I could really do it, other than store the class in yet another DLL and make both the application and the API DLL reference the class in that, but this is also not an ideal solution as I'd like a single API DLL to distribute.
Are there any suggestions as to how I can connect over IPC (other forms of communication like TCP are not permitted) to send requests to the application?
The solution was to place the InterProcess class in the API DLL and simply make the C# application use the DLL as a reference to bring in the class.
It is also important to note that in order to initialize the shared object correctly, I had to initialize the server side of the sharing in a separate AppDomain and make the C# application a client like so (this is a new version of the previous paste):
try
{
// Set up an interprocess object which will handle instructions
// from the client and pass them onto the main Manager object.
this.m_ServerDomain = AppDomain.CreateDomain("roketpack_server");
this.m_ServerDomain.DoCallBack(() =>
{
// We must give clients the permission to serialize delegates.
BinaryServerFormatterSinkProvider serverProv = new BinaryServerFormatterSinkProvider();
serverProv.TypeFilterLevel = System.Runtime.Serialization.Formatters.TypeFilterLevel.Full;
IpcServerChannel ipc = new IpcServerChannel("roketpack", "roketpack", serverProv);
ChannelServices.RegisterChannel(ipc, true);
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(API.InterProcess),
"InterProcessManager",
WellKnownObjectMode.Singleton);
});
// Now initialize the object.
IpcClientChannel client = new IpcClientChannel();
ChannelServices.RegisterChannel(client, true);
this.m_InterProcess = (API.InterProcess)Activator.GetObject(
typeof(API.InterProcess),
"ipc://" + name + "/InterProcessManager");
InterProcessHandle.Manager = this;
this.m_InterProcess.SetCalls(InterProcessHandle.CallURL,
InterProcessHandle.IsLatestVersion,
InterProcessHandle.RequestUpdate);
return true;
}
catch (RemotingException)
{
// The server appears to be already running. Connect to
// the channel as a client and send instructions back
// to the server.
IpcClientChannel client = new IpcClientChannel();
ChannelServices.RegisterChannel(client, true);
API.InterProcess i = (API.InterProcess)Activator.GetObject(
typeof(API.InterProcess),
"ipc://" + name + "/InterProcessManager");
if (i == null)
{
Errors.Raise(Errors.ErrorType.ERROR_CAN_NOT_START_OR_CONNECT_TO_IPC);
return false;
}
if (Environment.GetCommandLineArgs().Length > 1)
i.CallURL(Environment.GetCommandLineArgs()[1]);
return false;
}
I hope this solution helps someone else :)
We are currently working on an API for an existing system.
It basically wraps some web-requests as an easy-to-use library that 3rd party companies should be able to use with our product.
As part of the API, there is an event mechanism where the server can call back to the client via a constantly-running socket connection.
To minimize load on the server, we want to only have one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if you had multiple applications using the API.
So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem?
A couple options we thought of:
Write an out of process COM object (don't know if that works in .Net)
Include a second exe file that would be required for events, it would have to single-instance itself, and open a named pipe or something to communicate through multiple processes
Extract this exe file from an embedded resource and execute it
None of those really seem ideal.
Any better ideas?
Do you mean something like Net.TCP port sharing?
You could fix the client-side port when opening your socket, say to 45534. Since a port can be bound by only one process, only one process at a time would be able to open the socket connection to the server.
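A sketch of that bind-to-a-fixed-local-port trick (the port number is from this answer; the helper name is mine):

```csharp
using System.Net;
using System.Net.Sockets;

public static class SingleConnection
{
    public const int LocalPort = 45534;

    // Returns a TcpClient bound to the fixed local port, or null if another
    // process on this machine already owns the port -- and therefore already
    // owns the machine-wide server connection.
    public static TcpClient TryCreate()
    {
        try
        {
            return new TcpClient(new IPEndPoint(IPAddress.Any, LocalPort));
        }
        catch (SocketException)
        {
            return null; // the machine-wide slot is taken
        }
    }
}
```

If `TryCreate()` returns a client, call `client.Connect(host, port)` and own the connection; if it returns null, another local process already has it, so hand your requests to that process instead (e.g. over a named pipe).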
Well, there are many ways to solve this, as expressed in all the answers and comments, but maybe the simplest is to keep a global status in a place accessible to all users of the current machine (there might be several users logged in on the machine) where you store WHO currently has the right to hold the connection open. Something like a "lock", as it used to be called. That store can be a field in a local or intranet database, a simple file, or whatever. That way you don't need to build or distribute extra binaries.
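The "simple file" variant of that lock can lean directly on OS file-sharing semantics (the class name and path are mine, for illustration):

```csharp
using System.IO;

// Whoever holds the lock file open with FileShare.None owns the right to
// open the socket. Any other process (or user session) that tries to open
// the same file gets an IOException until the holder closes the stream --
// including automatically when the holding process dies.
public static class MachineWideLock
{
    public static FileStream TryAcquire(string lockFilePath)
    {
        try
        {
            return new FileStream(lockFilePath, FileMode.OpenOrCreate,
                                  FileAccess.ReadWrite, FileShare.None);
        }
        catch (IOException)
        {
            return null; // another process/user currently holds the lock
        }
    }
}
```

The process that gets a non-null stream opens the shared server connection; everyone else routes through it. Disposing the stream releases the lock.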
When a client connects to your server, you create a new thread to handle it (not a process). You can store its IP address in a static dictionary (shared between all threads).
Something like:
static Dictionary<string, TcpClient> clients = new Dictionary<string, TcpClient>();
// This method is executed in a thread
void ProcessRequest(TcpClient client)
{
    string ip = ((IPEndPoint)client.Client.RemoteEndPoint).Address.ToString();
    lock (clients)
    {
        if (clients.ContainsKey(ip))
        {
            // Deny the connection: this machine already has one open
            client.Close();
            return;
        }
        clients.Add(ip, client);
    }
    //TODO: Answer the client
}
//TODO: Remove the client from the dictionary on disconnection
The best solution we've come up with is to create a windows service that opens up a named pipe to manage multiple client processes through one socket connection to the server.
Then our API will be able to detect whether the service is running/installed, and fall back to creating its own connection for the client otherwise.
3rd parties can decide if they want to bundle the service with their product or not, but core applications from our system will have it installed.
I will mark this as the answer in a few days if no one has a better option. I was hoping there was a way to execute our assembly as a new process, but none of the approaches to do that seem very reliable.