Working around an external .NET library that isn't thread-safe? - c#

Our ticket system vendor provides a .NET API library, and I've been trying to use it from a multi-threaded application. I'm running into all sorts of connection/stateful issues, and I believe this is because it's not "thread-safe" (I'm still a novice with C# and threaded applications, so this is just my guess at the moment, but some of the symptoms line up).
The API doesn't have a class that is instantiated; you just call the static methods TrebuchetApi.Api.Connect() and TrebuchetApi.Api.Login() from its namespace (in fact, all methods appear to be static).
So I might have one thread doing one thing and another doing something else, and the underlying API gets confused (static variables that should be set are null, and so on).
Is there any way of making each of my threads use a brand new 'instance' of the API, or is it simply unavoidable?
Update:
For clarity, after the suggestions for using AppDomains I found this article which provided the exact framework I needed:
http://www.superstarcoders.com/blogs/posts/executing-code-in-a-separate-application-domain-using-c-sharp.aspx

Using AppDomains seems like the way to go.
See How do I prevent static variable sharing in the .NET runtime? or In .Net is the 'Staticness' of a public static variable limited to an AppDomain or the whole process? for similar questions people had.
The MSDN guide can be found here: http://msdn.microsoft.com/en-us/library/ms173138(v=vs.90).aspx
Just keep in mind that an AppDomain is a .NET concept; unmanaged resources can still be an issue.
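For illustration, a minimal sketch of the pattern (the TrebuchetApi calls are the ones from your question; everything else here is assumed): each worker gets its own AppDomain, so the library's static state is isolated per domain.

public class ApiWorker : MarshalByRefObject
{
    public void DoWork()
    {
        // The vendor's statics live only in this AppDomain:
        TrebuchetApi.Api.Connect();
        TrebuchetApi.Api.Login();
        // ... the rest of this thread's work against the API ...
    }
}

// Per worker thread:
var domain = AppDomain.CreateDomain("ApiDomain-" + Guid.NewGuid());
try
{
    var worker = (ApiWorker)domain.CreateInstanceAndUnwrap(
        typeof(ApiWorker).Assembly.FullName,
        typeof(ApiWorker).FullName);
    worker.DoWork();
}
finally
{
    AppDomain.Unload(domain);
}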

Related

How to best deploy Sentry in cross-assembly environment?

So we built this library/framework thing full of code related to business processes and common elements that are shared across multiple applications (C#, .NET 4.7.1, WPF, MVVM). Our logging stuff is all set up through this framework, so naturally it felt like the best place for Sentry. All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library thingy installs itself. So far so good.
When we set up Sentry initially, everything seemed to work great. We did some updates, and errors seemed to be going way down. That's because we are awesome and Sentry helped us be more awesome, right? Nope! Well, I mean, kind of.
The scope is being disposed of, so we are no longer getting unhandled exceptions. We didn't notice at first because we were still getting Sentry logs when handling errors through our Logging.Log() method. This logging method calls SentrySdk.Init(), which I suspect is disposing the client in the executing assembly.
We also started using Sentry for some simple usage tracking by spinning up a separate project in Sentry called Usage-Tracker and passing a simple "DoThingApplication has been launched" message with an ApplicationName.UsageTracker enum value as a parameter to our logging method.
Question: What is a good way to handle this, where my setup can have a Sentry instance that wraps my using(sentryClientStuff){ ComposeObjects(); } and my logging method still looks for an existing client and uses it if it exists?
Caveats:
I believe before any of this happens we still need to make a call to send a Sentry log to our UsageTracker.
I would like to pass in as few options as possible if I'm setting up the Sentry Client/Scope in our shared library. Maybe Release and Environment. Maybe check tags for Fingerprint and set it in the Log method.
I'm open to new approaches to any of this.
Some related thoughts
Maybe there is a better way to handle references that could solve both this and some of the other pains that arise when references become mismatched between the client and the shared framework/library thing.
Maybe the answer can be found through adding some unit tests, but I could use a Sentry-specific example or a nudge there because I don't know much about that.
Maybe there is a way to use my shared library to return a Sentry Client or Scope that I could use in my client assembly that would not be so fragile and the library could somehow also use it.
Maybe there is a better solution I can't conceive because I'm just kind of an OK programmer and it escapes me. I'm open to any advice/correction/ridicule.
Maybe there is a smarter way to handle "Usage-Tracker" type signals in Sentry
Really I want a cross-assembly singleton kind of thing in practice.
There are really many things going on here. Also, without looking at any code it's hard to picture how things are laid out. There's a better chance you can get the answer you are looking for if you share some (even dummy) example of the structure of your project.
I'll try to break it down and address what I can anyway:
With regards to:
Usage-Tracker:
You can create a new client and bind it to a scope. That way any use of the SentrySdk static class (which I assume your Logger.Log routes to) will pick it up.
In other words, call SentrySdk.Init as you currently do, with the options that are shared across any application using your shared library, and after that create a client using the DSN of your Usage-Tracker project in Sentry. Push a scope, bind the client, and you can use SentrySdk with it.
There's an example in the GitHub repo of the SDK:
using (SentrySdk.PushScope())
{
    SentrySdk.AddBreadcrumb(request.Path, "request-path");

    // Change the SentryClient in case the request is to the admin part:
    if (request.Path.StartsWith("/admin"))
    {
        // Within this scope, the _adminClient will be used instead of whatever
        // client was defined before this point:
        SentrySdk.BindClient(_adminClient);
    }

    SentrySdk.CaptureException(new Exception("Error at the admin section"));
    // Else it uses the default client

    _middleware?.Invoke(request);
} // Scope is disposed.
The SDK only has to be initialized once, but you can always create a new client with new SentryClient, push a new scope (SentrySdk.PushScope()) and bind the client to that new scope (SentrySdk.BindClient). Once you pop the scope, the client is no longer accessible via SentrySdk.CaptureException or any other method on the static class SentrySdk.
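Applied to your Usage-Tracker case, that could look something like this (a sketch; the DSN placeholder and the message text are assumptions based on your description):

using (SentrySdk.PushScope())
{
    // Bind a client pointing at the Usage-Tracker project's DSN:
    SentrySdk.BindClient(new SentryClient(new SentryOptions
    {
        Dsn = new Dsn("...") // your Usage-Tracker DSN
    }));

    SentrySdk.CaptureMessage("DoThingApplication has been launched");
} // Scope popped; the previously bound client is in effect again.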
You can also use the client directly, without binding it to the scope at all.
using (var c = new SentryClient(new SentryOptions { Dsn = new Dsn("...") }))
{
    c.CaptureMessage("hello world!");
}
The using block is there to make sure the background thread flushes the event.
Central place to initialize the SDK:
There will be configuration that you want to have fixed in your shared framework/library, but surely each application (composition root) will have its own settings. Release is auto-discovered.
From docs.sentry.io:
The SDK will firstly look at the entry assembly’s AssemblyInformationalVersionAttribute, which accepts a string as value and is often used to set a GIT commit hash.
If that returns null, it’ll look at the default AssemblyVersionAttribute which accepts the numeric version number.
If you patch your assemblies on your build server, the correct Release should be reported automatically. If not, you could define it per application by taking a delegate that receives the SentryOptions as an argument.
Something like:
Framework code:
public class MyLogging
{
    public static void Init(Action<SentryOptions> configuration)
    {
        var o = new SentryOptions();

        // Add things that should run for all users of this library:
        o.AddInAppExclude("SomePrefixTrueForAllApplications");
        o.AddEventProcessor(new GeneralEventProcessor());

        // Give the application a chance to reconfigure anything it needs:
        configuration?.Invoke(o);

        // Initialize the SDK with the resulting options:
        SentrySdk.Init(o);
    }
}
App code:
void Main()
{
    MyLogging.Init(o => o.Environment = "my env");
}
"The scope is being disposed of so we are no longer getting Unhandled exceptions."
Not sure I understand what's going on here. Pushing and popping (disposing) scopes doesn't affect the SDK's ability to capture unhandled exceptions. Could you please share a repro?
"This logging method calls SentrySdk.Init() which I suspect is disposing the client in the executing assembly."
Unless you create a client "by hand" with new SentryClient, there's only one client in the running process. Please note I said running process and not assembly. Instances are not held within an assembly; the assembly only contains the code that can be executed. If you call SentrySdk.CaptureException, it will dispatch the call to the SentryClient bound to the current scope. If you didn't call PushScope, there's always an implicit scope, the root scope. In this case it's all transparent enough that you shouldn't care there's a scope in there. You also can't dispose of that scope, since you never got a handle to do so (you didn't call PushScope, so you didn't get what it returns to call Dispose on).
"All the references in our individual applications are manually pointed to the DLLs in the folder where our shared library thingy installs itself."
One thing to consider, depending on your environment, is distributing packages via NuGet. I'm unsure whether you expect to use these libraries in non-.NET Framework applications (like .NET Core). But considering .NET Core 3.0 is bringing Windows desktop framework support like WPF and WinForms, it's possible that eventually you will. If that's the case, consider targeting .NET Standard instead of .NET Framework for your code libraries.

Wcf sharing variables between calls

I have a WCF web service in which I want to share information between calls (different clients' calls).
For example, sharing a dictionary between client calls. The dictionary can be changed between calls (add/remove items, etc.), but I can't afford for it to be deleted or renewed on each call (it should be like a static database).
Is there any way to implement this?
I've already tried a few suggestions, but nothing really seems to work.
You basically need a singleton defined outside of your service class. You would need to handle the very likely possibility that multiple calls will be accessing/modifying the info at the same time, so you will need to lock the data, maybe with a ReaderWriterLock or ReaderWriterLockSlim.
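A minimal sketch of that idea (the SharedStore name and the string/string shape are just assumptions for illustration):

using System.Collections.Generic;
using System.Threading;

public static class SharedStore
{
    private static readonly Dictionary<string, string> Data =
        new Dictionary<string, string>();
    private static readonly ReaderWriterLockSlim Lock =
        new ReaderWriterLockSlim();

    public static void Set(string key, string value)
    {
        Lock.EnterWriteLock();
        try { Data[key] = value; }
        finally { Lock.ExitWriteLock(); }
    }

    public static bool TryGet(string key, out string value)
    {
        Lock.EnterReadLock();
        try { return Data.TryGetValue(key, out value); }
        finally { Lock.ExitReadLock(); }
    }
}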
A nicer way of doing it, assuming you are using .NET 4.0, would be to use MemoryCache, which has built-in thread safety. If you are pre-.NET 4.0, there is a cache object in the System.Web namespace (link). It was a bit of a pain having to add a reference to System.Web if you wanted to use it when writing Windows apps, so the MemoryCache implementation in .NET 4.0 was a welcome addition.
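For example, a sketch using MemoryCache from System.Runtime.Caching (the key name and expiration are arbitrary choices, not requirements):

using System;
using System.Runtime.Caching;

public static class SharedCache
{
    // MemoryCache.Default is safe to read and write from multiple threads:
    public static void Put(string key, object value)
    {
        MemoryCache.Default.Set(key, value,
            new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(20) });
    }

    public static object Get(string key)
    {
        return MemoryCache.Default.Get(key);
    }
}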

Dynamic Assembly Resolution/Management

Short Version
I have an application which utilizes a plug-in infrastructure. The plug-ins have configurable properties that help them know how to do their job. The plug-ins are grouped into profiles to define how to complete a task, and the profiles are stored in XML files serialized by the DataContractSerializer. The problem is that when reading the configuration files, the deserializing application has to have knowledge of all of the plug-ins defined in the configuration file. I'm looking for a way to handle the resolution of unknown plug-ins. See the proposed solution section below for a couple of the ideas I've looked into implementing, but I am open to just about anything (though I'd rather not have to reinvent the application).
Detail
Background
I've developed a sort of Business Process Automation System for internal use for the company I'm currently working for in C# 4. It makes exhaustive use of 'plug-ins' to define everything (from the tasks that are to be performed to the definition of units of work) and relies heavily on a dynamic configuration model which in turn relies on C# 4/DLR dynamic objects to fulfill jobs. It's a little heavy while executing because of its dynamic nature but it works consistently and performs well enough for our needs.
It includes a WinForms configuration UI that uses reflection extensively to determine the configurable properties/fields of the plug-ins, as well as the properties/fields that define each unit of work to be processed. The UI is also built on top of the BPA engine, so it has a thorough understanding of the (loose) object model put in place that allows the engine to do its job, which, coincidentally, has led to several user experience improvements, such as ad-hoc job execution and configure-time validation of user input. Again there is room for improvement; however, it seems to do its job.
The configuration UI utilizes the DataContractSerializer to serialize/deserialize the settings specified, so any plug-ins referenced by the configuration must be loaded before (or at the time of) configuration load.
Structure
The BPA engine is implemented as a shared assembly (DLL) which is referenced by the BPA service (a Windows Service), the Configuration UI (WinForms app), and a plug-in tester (Console application version of the Windows Service). Each of the three applications that reference the shared assembly only include the minimum amount of code necessary to perform their specific purpose. Additionally, all plug-ins must reference a very thin assembly which basically just defines the interface(s) that the plugin must implement.
Problem
Because of the extensibility model used in the application, there has always been a requirement that the config UI is run from the same directory (on the same PC) as the Service application. That way the UI always knows about all of the assemblies that the Service knows about, so configurations can be deserialized without running into missing assemblies. Now that we are getting close to rolling out the system, a demand to allow running the Configuration UI remotely, from any PC in our network, has come from our network admins for security purposes. Typically this wouldn't be a problem if there were always a known set of assemblies to deploy; however, with the ability to extend the application using user-built assemblies, there has to be a way to resolve the assemblies from which the plug-ins can be instantiated/used.
Proposed (potentially obvious) Solution
Add a WCF service to the Service application to allow the typical CRUD operations against the configurations that instance of the service is aware of, and rework the configuration UI to act more like SSMS with a Connect/Disconnect model. This doesn't really solve the problem by itself, so we would also need to expose some sort of ServiceContract from the Service application to allow querying of the assemblies it knows about/has access to. That's fine and fairly straightforward; however, the question arises, "When should the UI find out about the assemblies that the Service is aware of?" On connect we could send all of the assemblies from the Service to the UI to ensure that it always knows about all of the assemblies the Service does, but that gets messy with AppDomain management (potentially unnecessarily) and assembly version conflicts. So I suggested hooking into the AppDomain.AssemblyResolve/AppDomain.TypeResolve events to download only the assemblies that the client isn't aware of yet, and only as needed. This doesn't necessarily clean up the AppDomain management issues, but it definitely helps address the version conflicts and related issues.
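To illustrate the resolve-on-demand idea, a minimal sketch (FetchAssemblyFromService is hypothetical, standing in for the proposed ServiceContract operation):

using System;
using System.Reflection;

public static class RemoteAssemblyResolver
{
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var name = new AssemblyName(args.Name);

            // Hypothetical call asking the Service for the raw bytes of
            // an assembly the client doesn't have locally:
            byte[] raw = FetchAssemblyFromService(name.Name, name.Version);

            return raw != null ? Assembly.Load(raw) : null;
        };
    }

    // Placeholder for the proposed WCF operation; not a real API.
    private static byte[] FetchAssemblyFromService(string name, Version version)
    {
        throw new NotImplementedException();
    }
}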
Question
If you've stuck with me this long I applaud and thank you, but now I'm finally getting to the actual question here. After months of research and finally coming to a conclusion I am wondering if anyone here has had to deal with a similar issue and how you dealt with the pitfalls and shortcomings? Is there a standard way of handling this that I have missed completely, or do you have any recommendations based on how you have seen this successfully handled in the past? Do you see any problems with the proposed approaches or can you offer an alternative?
I'm aware that not everyone lives in my head so please let me know if you need further clarification/explanation. Thanks!
Update
I've given MEF a fair shake and feel that it is too simplistic for my purposes. It's not that it couldn't be bent to handle the plug-in requirements of my application, the problem is doing so would be too cumbersome and dirty to make it feasible. It is a nice suggestion and it has a lot of potential, but in its current state it just isn't there yet.
Any other ideas or feedback on my proposed solutions?
Update
I don't know if the issue I'm encountering is just too localized, if I failed to properly describe what I am trying to achieve, or if this question is just too unreasonably long to be read in its entirety; but the few answers I've received have been subtly helpful enough to help me think through the problem differently and identify some shortcomings in what I am after.
In short, what I'm trying to do is take three applications which in their current state share information (configuration/assemblies) using a common directory structure, and try to make those applications work across a network with minimal impact on usability and architecture.
File shares seem like the obvious answer to this problem (as @SimonMourier proposed in the comments), but using them translates into a lack of control and debuggability when something goes wrong. I can see them as a viable short-term solution, but long term they just don't seem feasible.
tl;dr, but I'm 90% sure you should take a look into MEF.
When I first saw it I was like "aah, another acronym", but you'll see it's very simple, and it's built into .NET 4. Best of all, it even runs seamlessly on Mono, and it's a matter of less than an hour (including coffee break) between hearing about it and compiling hello worlds to get used to the features. It's really that simple.
Basically, you "export" something in one assembly and "import" it into another (all via simple attribute decorations), and you choose where to search for it (for example, in the application's directory, a plug-ins folder, etc.).
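A minimal sketch of that flow, assuming an IPlugin contract of your own (all names here are illustrative):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin
{
    void Execute();
}

// In a plug-in assembly:
[Export(typeof(IPlugin))]
public class HelloPlugin : IPlugin
{
    public void Execute() { Console.WriteLine("Hello from a plug-in"); }
}

// In the host:
public class PluginHost
{
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }

    public void Compose(string pluginFolder)
    {
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(typeof(PluginHost).Assembly),
            new DirectoryCatalog(pluginFolder));
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // fills Plugins with every matching export
    }
}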
Edit: what if you try to download and load (and possibly cache) plugins on-the-fly on configuration load?
I think you could be overlooking a relatively simple solution that derives somewhat from the Microsoft web.config approach:
Have two sections in the config file:
Section 1 contains enough information about the plug-in (e.g. name, version) to allow you to load it into an app domain.
Section 2 contains the information serialized by the plug-in.
On loading the plug-in, pass it the information in section 2 and let the plug-in deserialize it according to its needs, as sketched below.
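A rough sketch of that two-section idea (ProfileHeader, IPlugin, and LoadSettings are hypothetical names invented for illustration, not from the question's code):

using System;
using System.IO;
using System.Reflection;
using System.Runtime.Serialization;

[DataContract]
public class ProfileHeader // hypothetical "section 1" shape
{
    [DataMember] public string AssemblyName { get; set; }
    [DataMember] public string TypeName { get; set; }
}

public interface IPlugin // hypothetical plug-in contract
{
    void LoadSettings(Stream settings);
}

public static class ProfileLoader
{
    public static void Load(Stream section1, Stream section2)
    {
        // Section 1 uses only well-known types, so it deserializes
        // without any plug-in assemblies loaded:
        var header = (ProfileHeader)new DataContractSerializer(typeof(ProfileHeader))
            .ReadObject(section1);

        // Load the assembly named in section 1 and instantiate the plug-in:
        var assembly = Assembly.Load(new AssemblyName(header.AssemblyName));
        var plugin = (IPlugin)Activator.CreateInstance(assembly.GetType(header.TypeName));

        // Section 2 stays opaque to the host; the plug-in deserializes it itself:
        plugin.LoadSettings(section2);
    }
}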
Maybe you can divide this problem into two:
The administrator allows users to download one of several predefined configurations (sets of libraries), and MEF helps to inject the required dependencies.
Each activity from the user should pass through a security proxy; plug-in modules are not allowed to call the BL directly. The proxy could match a custom security attribute against the allowed activities.
i.e.
[MyRole(Name = new[] { "Security.Action" })]
void BlockAccount(string accountId) { }

[MyRole(Name = new[] { "Manager.Action" })]
void CreateAccount(string userName) { }

[MyRole(Name = new[] { "Security.View", "Manager.View" })]
List<Account> AccountList(Predicate<Account> p) { ... }
and allow for AD groups (some abstract description):

corp\securityOperators = "Security.*"   // allow calls to all security manipulation
corp\HQmanager         = "Manager.View" // allow only view access
corp\Operator          = "Manager.*"
I'm not sure I completely understand the problem but I think this situation calls for "type-preserving serialization" - that is, the serialized file contains enough type information to deserialize back to the original object graph without any hints from the calling application as to what types are involved.
I've used Json.NET to do this, and I can highly recommend the library for type-preserving serialization of object graphs. It looks like the NetDataContractSerializer can also do this, per the MSDN Remarks:
The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.
I chose Json.NET because it can serialize POCOs without any special attributes or interfaces. Both Json.NET and the NetDataContractSerializer allow you to use a custom SerializationBinder; in there you could put any logic regarding loading assemblies that may not yet be loaded.
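For instance, a sketch with Json.NET (PluginProfile is a hypothetical root type; TypeNameHandling.Auto embeds CLR type names only where the declared type isn't enough to rebuild the graph):

using Newtonsoft.Json;

public class PluginProfile { /* stand-in for your serialization root type */ }

public static class ProfileSerializer
{
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        // Embed CLR type names so the original object graph can be rebuilt
        // without the caller knowing the concrete plug-in types:
        TypeNameHandling = TypeNameHandling.Auto
    };

    public static string Save(PluginProfile profile)
    {
        return JsonConvert.SerializeObject(profile, Formatting.Indented, Settings);
    }

    public static PluginProfile Load(string json)
    {
        return JsonConvert.DeserializeObject<PluginProfile>(json, Settings);
    }
}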
Unfortunately, changing serialization schemes might be the "breaking-est" change to suggest because all your existing files will become incompatible. You might be able to write a conversion utility that deserializes a file using the old method and serializes the resulting object graph using the new method.

Best practices for organizing .NET P/Invoke code to Win32 APIs

I am refactoring a large and complicated code base in .NET that makes heavy use of P/Invoke to Win32 APIs. The structure of the project is not the greatest, and I am finding DllImport statements all over the place, very often duplicated for the same function, and also declared in a variety of ways:
The import directives and methods are sometimes declared as public, sometimes private, sometimes as static and sometimes as instance methods. My worry is that refactoring may have unintended consequences, but this might be unavoidable.
Are there documented best practices I can follow that can help me out?
My instinct is to organize a static/shared Win32 P/Invoke API class that lists all of these methods and associated constants in one file... EDIT: There are over 70 imports to the user32 DLL.
(The code base is made up of over 20 projects with a lot of Windows message passing and cross-thread calls. It's also a VB.NET project upgraded from VB6, if that makes a difference.)
You might consider the way it was done in the .NET Framework itself. It invariably declares a static class (a Module in VB.NET) named NativeMethods that contains the P/Invoke declarations. You could be more organized than the Microsoft programmers, though; there are many duplicate declarations, from different teams working on different parts of the framework.
However, if you want to share this among all projects, you have to declare these declarations Public instead of Friend. Which isn't great; it ought to be an implementation detail. I think you can solve that by re-using the source code file in every project that needs it. Normally taboo, but okay in this case, I think.
I personally declare them as needed in the source code file that needs them, making them Private. That also really helps when lying about the argument types, especially for SendMessage.
Organize them into a [Safe|Unsafe]NativeMethods class. Mark the class as internal static. If you need to expose them to your own assemblies, you can use InternalsVisibleTo, though it'd be more appropriate if you could group the related ones into each assembly.
Each method should be static; I honestly wasn't aware you could even mark instance methods with DllImport.
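A minimal sketch of that shape (these particular user32 imports are arbitrary examples, not taken from the question's code base):

using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    internal const int WM_SETREDRAW = 0x000B;

    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    internal static extern IntPtr SendMessage(
        IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool SetForegroundWindow(IntPtr hWnd);
}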
As a first step, I'd probably move everything to a Core assembly (if you have one), or create a Product.Native assembly. Then you can find dupes and overlaps easily and look for managed equivalents. If your P/Invokes are a mess, I don't suspect you have much in the way of layering in the other assemblies that would guide your grouping.
Why not create a single file called Win32.vb and, within that, logically group the P/Invokes into separate namespaces? For instance, a GDI namespace could hold all GDI P/Invokes, a User32 namespace could hold all P/Invokes that live in user32.dll, and so on. It may be painful at first, but at least you will have centralized namespaces, all contained within that file. Have a look here to see what I mean. What do you think?
Are your P/Invoke calls an artifact of the migration from VB6? I have migrated 300,000 lines of code from VB6 to C# (Windows.Forms and System.EnterpriseServices) and eliminated all but a handful of P/Invoke calls; there is nearly always a managed equivalent. If you are refactoring, you may want to consider doing something similar. The resulting code should be far easier to maintain.
The recommended way is to have a NativeMethods class per assembly, with all the DllImported methods in it, with internal visibility. In this manner you always know where your imported functions are and you avoid duplicate declarations.
What I typically try to do in this case is what you are talking about: create various classes, static or not, that provide the functionality, so it can be reused as needed. Depending on the nature of the calls, I'd shy away from a static class implementation, but that will depend on your specific implementation.
Expansion on the above, as requested:
Given the nature of P/Invoke, especially if a number of calls are needed across varying areas of implementation, I find it better to group like items together; this way you are not pulling in a lot of other clutter, or other DLL imports, when they're not needed.
The desire to stay away from static methods is due to calls to unmanaged resources and the potential for memory leaks, etc.

Modify an internal .NET class' method implementation

I would like to modify the way my C#/.NET application works internally. I have dug into the .NET Framework with Reflector and found a pretty good place where I could use a different implementation of a method. It is in an internal class in the System.Windows.Forms namespace. You obviously cannot alter the code of this class by the usual means, so I thought it might be possible to replace a method in there through reflection at runtime. The method I would like to entirely replace for my application is this:
public static WindowsFontQuality WindowsFontQualityFromTextRenderingHint(Graphics g)
in the class:
internal sealed class System.Windows.Forms.Internal.WindowsFont
Is there any way to load that type and replace the method at runtime, without affecting any other applications that are currently running or started afterwards? I have tried to load the type with Type.GetType() and similar things, but have failed so far.
You may be able to do this with the debugger API - but it's a really, really bad idea.
For one thing, running with the debugger hooks installed may well be slower - but more importantly, tampering with the framework code could easily lead to unexpected behaviour. Do you know exactly how this method is used, in all possible scenarios, even within your own app?
It would also quite possibly have undesirable legal consequences, although you should consult a lawyer about that.
I would personally abandon this line of thinking and try to work out a different way to accomplish whatever it is you're trying to do.
Anything you do to make this happen would be an unsupported, unreliable hack that could break with any .NET Framework update.
There's another, more correct, way to do what you are trying to accomplish (and I don't need to know what you're trying to do to know this for certain).
Edit: If editing core Framework code is your interest, feel free to experiment with Mono, but don't expect to redistribute your modifications if they are application-specific. :)
I really think this is not a good idea. But if you really need it, you can use Mono.Cecil and change the assembly content. Then you need to set up a config file for redirecting assembly versions.
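For what it's worth, a rough sketch of the Mono.Cecil part (the value 5 for WindowsFontQuality.ClearType is an assumption for illustration, and the binding-redirect config is still needed on top of this):

using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

public static class FontPatcher
{
    public static void Patch()
    {
        var assembly = AssemblyDefinition.ReadAssembly("System.Windows.Forms.dll");
        var type = assembly.MainModule.GetType("System.Windows.Forms.Internal.WindowsFont");
        var method = type.Methods.First(
            m => m.Name == "WindowsFontQualityFromTextRenderingHint");

        // Replace the body so it always returns the same WindowsFontQuality value:
        var il = method.Body.GetILProcessor();
        method.Body.Instructions.Clear();
        il.Emit(OpCodes.Ldc_I4_5);
        il.Emit(OpCodes.Ret);

        assembly.Write("System.Windows.Forms.Patched.dll");
    }
}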
And last but not least, this approach will probably be illegal.
