I have a web service that has 8 web methods. These methods are called synchronously: the first call authenticates the user, and the rest each perform a unit of work; they are called repeatedly until the work is done.
I need to store the state of the work (e.g. what actions to perform next, what work has been done, and what is currently being performed). I currently have a state object that contains this information.
My question is: what is the best way to persist this object between each web service call? Note that there may be multiple users calling this web service, each with its own unique state.
Here are some scenarios that I am considering:
Idea #1
Store the object in a session.
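A minimal sketch of what this might look like in an ASMX service, assuming the SessionObject type from Idea #2 below (note that Session is only available to web methods that opt in with EnableSession = true):
[WebMethod(EnableSession = true)]
public string[] authenticate(string strUserName, string strPassword)
{
    // ... validate the credentials ...
    // ASP.NET associates the state with the caller via the session cookie.
    Session["WorkState"] = new SessionObject();
    return new string[] { Session.SessionID, string.Empty };
}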
Idea #2
Create an instance variable that is a dictionary keyed by userId, holding each user's data, something like:
[WebService(Namespace = "http://developer.intuit.com/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class QBWCService : QBWebConnectorSvc
{
    // Instance variable to hold current session data...
    private Dictionary<Guid, SessionObject> Sessions;

    public QBWCService()
    {
        Sessions = new Dictionary<Guid, SessionObject>();
    }

    [WebMethod]
    public override string[] authenticate(string strUserName, string strPassword)
    {
        ...
        Sessions.Add(UserId, new SessionObject());
    }

    [WebMethod]
    public string[] doWork(Guid UserId) // one of the subsequent work methods
    {
        SessionObject o = Sessions[UserId];
    }
}
I am thinking that Idea #2 is going to be the cleanest, most "natural" way, however I do not know the implications of implementing this sort of scheme... which way, or what else, would you recommend?
This might be one of those situations where you are already too far to do a large refactor, but ...
This sounds identical to a state workflow in Windows Workflow. If your plan is to eventually expose each of those methods as their own encapsulated services, it would give you all that state management for free, plus you get the added benefit of being able to visually define the workflow between these service calls.
http://msdn.microsoft.com/en-us/magazine/cc163538.aspx
[EDIT]: Shoot, Jedi beat me to it. What he said.
You should take a look at Windows Workflow Foundation (WF). You can design your workflow, then plug in persistence models and such.
That being said: you can't use the session! It won't scale once you move to multiple web farms/servers. Surely the QBW developer API needs to scale and be fault tolerant!
Some more info about using this with ASP.NET is here.
Idea #2 is mimicking session state management. I don't see an intrinsic benefit to performing your own session state management.
Idea #1 has the benefit of ASP.NET managing the sessions for you. I could see the second option becoming problematic if you have users that don't complete the full lifecycle, as you would then have entries in the hash table that reference old sessions. At a minimum, if going with #2, I would build in a cleaning process to ensure that old sessions expire.
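To make that cleanup concrete, here is a hedged sketch of the kind of expiry sweep #2 would need (SessionEntry, LastTouched, and the method name are illustrative, not from the question; LINQ assumed):
public class SessionEntry
{
    public SessionObject Data;    // the per-user state object
    public DateTime LastTouched;  // refreshed on every web method call
}

private static void SweepExpiredSessions(Dictionary<Guid, SessionEntry> sessions, TimeSpan maxAge)
{
    DateTime cutoff = DateTime.UtcNow - maxAge;
    List<Guid> stale = sessions.Where(kv => kv.Value.LastTouched < cutoff)
                               .Select(kv => kv.Key)
                               .ToList();
    foreach (Guid key in stale)
        sessions.Remove(key); // drop sessions abandoned mid-lifecycle
}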
If you just need to hold current-step information, I'd almost vote for session, as there is no point trying to reinvent it.
I've got a requirement to protect my business object properties via a list of separate authorization rules. I want my authorization rules to be suspended during various operations such as converting to DTOs and executing validation rules (validating property values the current user does not have authorization to see).
The approach I'm looking at wraps the calls in a scope object that uses a [ThreadStatic] property to determine whether the authorization rules should be run:
public class SuspendedAuthorizationScope : IDisposable
{
    [ThreadStatic]
    public static bool AuthorizationRulesAreSuspended;

    public SuspendedAuthorizationScope()
    {
        AuthorizationRulesAreSuspended = true;
    }

    public void Dispose()
    {
        AuthorizationRulesAreSuspended = false;
    }
}
Here is the IsAuthorized check (from the base class):
public bool IsAuthorized(string memberName, AuthorizedAction authorizationAction)
{
    if (SuspendedAuthorizationScope.AuthorizationRulesAreSuspended)
        return true;

    var context = new RulesContext();
    _rules.OfType<IAuthorizationRule>()
          .Where(r => r.PropertyName == memberName)
          .Where(r => r.AuthorizedAction == authorizationAction)
          .ToList().ForEach(r => r.Execute(context));
    return context.HasNoErrors();
}
Here is the ValidateProperty method demonstrating usage (from the base class):
private void ValidateProperty(string propertyName, IEnumerable<IValidationRule> rules)
{
    using (new SuspendedAuthorizationScope())
    {
        var context = new RulesContext();
        rules.ToList().ForEach(rule => rule.Execute(context));
        if (HasNoErrors(context))
            RemoveErrorsForProperty(propertyName);
        else
            AddErrorsForProperty(propertyName, context.Results);
    }
    NotifyErrorsChanged(propertyName);
}
I've got some tests around the scoping object showing that the expected/correct value of SuspendedAuthorizationScope.AuthorizationRulesAreSuspended is seen, as long as a lambda resolves within the scope of the using statement.
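A hedged sketch of what such a test might look like (NUnit-style assertions assumed; this is not the actual test code):
[Test]
public void RulesAreSuspendedOnlyInsideTheScope()
{
    Assert.IsFalse(SuspendedAuthorizationScope.AuthorizationRulesAreSuspended);
    using (new SuspendedAuthorizationScope())
    {
        // A lambda resolved inside the using block sees the suspended state.
        Func<bool> insideScope = () => SuspendedAuthorizationScope.AuthorizationRulesAreSuspended;
        Assert.IsTrue(insideScope());
    }
    Assert.IsFalse(SuspendedAuthorizationScope.AuthorizationRulesAreSuspended);
}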
Are there any obvious flaws to this design? Is there anything in ASP.NET that I should be concerned with as far as threading goes?
There are two concerns that I see with your proposed approach:
Failure to use using when creating a SuspendedAuthorizationScope will retain open access beyond the intended scope. In other words, an easy-to-make mistake will cause a security hole (especially thinking in terms of future-proofing your code/design, when a new hire starts digging into unfamiliar code and misses this subtle case).
Attaching this magic flag to a [ThreadStatic] field magnifies the previous bullet: access can be left open to another page, since the thread will be reused to process another request after it is done with the current page, and its authorization flag will not have been reset. So authorization lingering longer than it should goes beyond a missing call to .Dispose(); it can actually leak into another request/page belonging to a completely different user.
That said, the approaches I've seen to solving this problem did essentially involve checking the authorization, setting a magic flag that allowed bypass later on, and then resetting it.
Suggestions:
1. To at least solve the worst variant (#2 above), can you move the magic cookie to be an instance field of your base page class, so it is only valid within the scope of that page instance and not others?
2. To solve all cases, is it possible to use functors or a similar means that you'd pass to the authorization function, which would then, upon successful authorization, launch your functor to run all the logic and then guarantee cleanup? See the pseudo-code example below:
void myBizLogicFunction()
{
    DoActionThatRequiresAuthorization1();
    DoActionThatRequiresAuthorization2();
    DoActionThatRequiresAuthorization3();
}

void AuthorizeAndRun(string memberName, AuthorizedAction authorizationAction, Action privilegedFunction)
{
    if (IsAuthorized(memberName, authorizationAction))
    {
        try
        {
            AuthorizationRulesAreSuspended = true;
            privilegedFunction();
        }
        finally
        {
            // Reset the flag even if privilegedFunction throws.
            AuthorizationRulesAreSuspended = false;
        }
    }
}
With the above, I think it can be thread-static, as finally is guaranteed to run, and thus authorization cannot leak beyond the call to privilegedFunction. I think this would work, though it could use validation by others...
If you have complete control over your code and don't mind the hidden dependencies created by a magic static value, your approach will work. Note that you are putting a big burden on yourself, or whoever supports your code, to make sure there is never asynchronous processing inside the using block and that each usage of the magic value is wrapped in a proper using block.
In general it is a bad idea because:
Threads and requests are not tied one-to-one, so you can run into cases where your thread-local object changes the state of some other request. This is even more likely to happen if you use ASP.NET MVC 4+ with async handlers.
Static values of any kind are a code smell, and you should try to avoid them.
Request-related information should be stored in HttpContext.Items, or maybe Session (though session lasts much longer and requires more careful cleanup of state).
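As a hedged illustration of the HttpContext.Items suggestion (the wrapper class and key name are mine, not part of ASP.NET):
using System.Web;

public static class RequestScopedAuthorization
{
    private const string Key = "AuthorizationRulesAreSuspended";

    public static bool AreRulesSuspended
    {
        // HttpContext.Items is scoped to the current request, so nothing
        // leaks across requests the way a [ThreadStatic] field can.
        get { return HttpContext.Current != null && Equals(HttpContext.Current.Items[Key], true); }
        set { HttpContext.Current.Items[Key] = value; }
    }
}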
My concern would be about the potential delay between the time you leave your using block and the time it takes the garbage collector to get around to disposing of your object. You may be in a false "authorized" state longer than you intend to be.
I'm reading Vaughn Vernon's book on Implementing Domain-Driven Design. I have also been going through the book's code, C# version, from his GitHub here.
The Java version of the book has @Transactional annotations, which I believe are from the Spring framework.
public class ProductBacklogItemService
{
    @Transactional
    public void assignTeamMemberToTask(
        String aTenantId,
        String aBacklogItemId,
        String aTaskId,
        String aTeamMemberId)
    {
        BacklogItem backlogItem =
            backlogItemRepository.backlogItemOfId(
                new TenantId(aTenantId),
                new BacklogItemId(aBacklogItemId));

        Team ofTeam =
            teamRepository.teamOfId(
                backlogItem.tenantId(),
                backlogItem.teamId());

        backlogItem.assignTeamMemberToTask(
            new TeamMemberId(aTeamMemberId),
            ofTeam,
            new TaskId(aTaskId));
    }
}
What would be the equivalent manual implementation in C#? I'm thinking something along the lines of:
public class ProductBacklogItemService
{
    private static object lockForAssignTeamMemberToTask = new object();
    private static object lockForOtherAppService = new object();

    public void AssignTeamMemberToTask(string aTenantId,
        string aBacklogItemId,
        string aTaskId,
        string aTeamMemberId)
    {
        lock (lockForAssignTeamMemberToTask)
        {
            // application code as before
        }
    }

    public void OtherAppsService(string aTenantId)
    {
        lock (lockForOtherAppService)
        {
            // some other code
        }
    }
}
This leaves me with the following questions:
Do we lock by application service, or by repository? i.e. should we not be doing backlogItemRepository.lock()?
When we are reading from multiple repositories as part of our application service, how do we protect dependencies between repositories during transactions (where aggregate roots reference other aggregate roots by identity)? Do we need interconnected locks between repositories?
Are there any DDD infrastructure frameworks that handle any of this locking?
Edit
Two useful answers came in suggesting transactions. As I haven't yet selected my persistence layer, I am using in-memory repositories; these are pretty raw and I wrote them myself (they don't have transaction support, as I don't know how to add it!).
I will design the system so I do not need to commit atomic changes to more than one aggregate root at the same time. I will, however, need to read consistently across a number of repositories (i.e. if a BacklogItemId is referenced from multiple other aggregates, then we need to protect against race conditions should that BacklogItem be deleted).
So, can I get away with just using locks, or do I need to look at adding TransactionScope support to my in-memory repository?
TL;DR version
You need to wrap your code in a System.Transactions.TransactionScope. Be careful about multi-threading btw.
Full version
So the point of aggregates is that they define a consistency boundary. That means any changes should result in the state of the aggregate still honouring its invariants. That's not necessarily the same as a transaction. Real transactions are a cross-cutting implementation detail, so they should probably be implemented as such.
A warning about locking
Don't do locking. Try to forget any notion you have of implementing pessimistic locking. To build scalable systems you have no real choice. The very fact that data takes time to be requested and to travel from disk to your screen means you have eventual consistency, so you should build for that. You can't really protect against race conditions as such; you just need to account for the fact that they could happen and be able to warn the "losing" user that their command failed. Often you can detect these issues later on (seconds, minutes, hours, days, whatever your domain experts tell you the SLA is) and tell users so they can do something about it.
For example, imagine if two payroll clerks paid an employee's expenses at the same time with the bank. They would find out later on when the books were being balanced and take some compensating action to rectify the situation. You wouldn't want to scale down your payroll department to a single person working at a time in order to avoid these (rare) issues.
My implementation
Personally I use the Command Processor style, so all my Application Services are implemented as ICommandHandler<TCommand>. The CommandProcessor itself is the thing looking up the correct handler and asking it to handle the command. This means that the CommandProcessor.Process(command) method can have its entire contents processed in a System.Transactions.TransactionScope.
Example:
public class CommandProcessor : ICommandProcessor
{
    public void Process(Command command)
    {
        using (var transaction = new TransactionScope())
        {
            var handler = LookupHandler(command);
            handler.Handle(command);
            transaction.Complete();
        }
    }
}
You've not gone for this approach, so to make your transactions a cross-cutting concern you're going to need to move them a level higher in the stack. This is highly dependent on the tech you're using (ASP.NET, WCF, etc.), so if you add a bit more detail there might be an obvious place to put this stuff.
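As one hedged example of what "a level higher" could look like in plain ASP.NET, a per-request transaction could live in an IHttpModule registered in web.config (the module and its commit policy are an illustration, not a recommendation for every workload):
using System.Transactions;
using System.Web;

public class TransactionPerRequestModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Open a scope when the request starts...
        app.BeginRequest += (s, e) =>
            app.Context.Items["RequestTransaction"] = new TransactionScope();

        // ...and commit it only if the request finished without an error.
        app.EndRequest += (s, e) =>
        {
            var scope = (TransactionScope)app.Context.Items["RequestTransaction"];
            if (scope == null)
                return;
            if (app.Server.GetLastError() == null)
                scope.Complete();
            scope.Dispose();
        };
    }

    public void Dispose() { }
}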
Locking wouldn't allow any concurrency on those code paths.
I think you're looking for a transaction scope instead.
I don't know what persistence layer you are going to use, but the standard ones like ADO.NET, Entity Framework, etc. support the TransactionScope semantics:
using (var tr = new TransactionScope())
{
    doStuff();
    tr.Complete();
}
The transaction is committed if tr.Complete() is called. In any other case it is rolled back.
Typically, the aggregate is a unit of transactional consistency. If you need the transaction to spread across multiple aggregates, then you should probably reconsider your model.
lock (lockForAssignTeamMemberToTask)
{
    // application code as before
}
This takes care of synchronization. However, you also need to revert the changes in case of any exception. So, the pattern will be something like:
lock (lockForAssignTeamMemberToTask)
{
    try
    {
        // application code as before
    }
    catch (Exception e)
    {
        // rollback/restore previous values
    }
}
Dynamics CRM 2011 on premise. (But this problem exists in many situations away from Dynamics CRM.)
CRM plugins have an entry point:
void IPlugin.Execute (IServiceProvider serviceProvider)
(http://msdn.microsoft.com/en-us/library/microsoft.xrm.sdk.iplugin.execute.aspx)
serviceProvider is a reference to the plugin execution context. Anything useful that a plugin does requires accessing serviceProvider, or a member of it.
Some plugins are large and complex and contain several classes. For example, I'm working on a plugin that has a class which is instantiated multiple times. This class needs to use serviceProvider.
One way to get access to serviceProvider from all the classes that need it would be to add a property to all those classes and then to set that property. Or to add properties for the parts of serviceProvider that each class needs. Either of these approaches would result in lots of duplicate code.
Another approach would be to have a global variable in the scope of the thread. However, according to http://msdn.microsoft.com/en-us/library/cc151102.aspx one "should not use global class variables in plug-ins."
So what is the best way to have access to serviceProvider without passing it around everywhere?
P.S. If an example helps, serviceProvider provides access to a logging object. I want almost every class to log. I don't want to pass a reference to the logging object to every class.
That's not quite what the warning in the documentation is getting at. The IServiceProvider isn't a global variable in this context; it's a method parameter, and so each invocation of Execute gets its own provider.
For improved performance, Microsoft Dynamics CRM caches plug-in instances. The plug-in's Execute method should be written to be stateless because the constructor is not called for every invocation of the plug-in. In addition, multiple threads could be running the plug-in at the same time. All per invocation state information is stored in the context. This means that you should not use global class variables in plug-ins [Emphasis mine].
There's nothing wrong with passing objects from the context to helper classes which need them. The warning advises against storing something in a field ("class variable") on the plugin class itself, which may affect a subsequent call to Execute on the same instance, or cause concurrency problems if Execute is called by multiple threads on the same instance simultaneously.
Of course, this "globalness" has to be considered transitively. If you store anything in either the plugin class or in a helper class in any way that multiple calls to Execute can access (using fields on the plugin class or statics on either plugin or helper classes, for example), you leave yourself open to the same problem.
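To make the warning concrete, here is a hedged sketch of the field-on-the-plugin-class mistake (the class name is illustrative; IOrganizationServiceFactory is the usual way to obtain a service from the provider):
public class MyPlugin : IPlugin
{
    // Instance field: shared by the cached plugin instance across invocations.
    private IOrganizationService service;

    public void Execute(IServiceProvider serviceProvider)
    {
        var factory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        // Racy: a concurrent Execute on another thread can overwrite this field.
        service = factory.CreateOrganizationService(null);
        // ... work that reads this.service ...
    }
}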
As a separate consideration, I would write the helper classes involved to accept types as specific to their function as possible - down to the level of individual entities - rather than the entire IServiceProvider. It's much easier to test a class which needs only an EntityReference than one which needs to have an entire IServiceProvider and IPluginExecutionContext mocked up.
On global variables vs injecting values required by classes
You're right, this is something that comes up everywhere in object-oriented code. Take a look at these two implementations:
public class CustomEntityFrubber
{
    public CustomEntityFrubber(IOrganizationService service, Guid entityIdToFrub)
    {
        this.service = service;
        this.entityId = entityIdToFrub;
    }

    public void FrubTheEntity()
    {
        // Do something with service and entityId.
    }

    private readonly IOrganizationService service;
    private readonly Guid entityId;
}
// Initialised by the plugin's Execute method.
public static class GlobalPluginParameters
{
    public static IOrganizationService Service
    {
        get { return service; }
        set { service = value; }
    }

    public static Guid EntityIdToFrub
    {
        get { return entityId; }
        set { entityId = value; }
    }

    [ThreadStatic]
    private static IOrganizationService service;

    [ThreadStatic]
    private static Guid entityId;
}
public class CustomEntityFrubber
{
    public void FrubTheEntity()
    {
        // Do something with the members on GlobalPluginParameters.
    }
}
So assume you've implemented something like the second approach, and now you have a bunch of classes using GlobalPluginParameters. Everything is going fine until you discover that one of them occasionally fails because it needs an instance of IOrganizationService obtained by calling CreateOrganizationService(null), so that it accesses CRM as the system user rather than as the calling user (who doesn't always have the required privileges).
Fixing the second approach requires you to add another field to your growing list of global variables, remembering to make it ThreadStatic to avoid concurrency problems, then changing the code of CustomEntityFrubber to use the new SystemService property. You have tight coupling between all these classes.
Not only that, all these global variables hang around between plugin invocations. If your code has a bug that somehow bypasses the assignment of GlobalPluginParameters.EntityIdToFrub, suddenly your plugin is inexplicably operating on data that wasn't passed to it by the current call to Execute.
It's also not obvious exactly which of these global variables the CustomEntityFrubber requires, unless you read its code. Multiply that by however many helper classes you have, and maintaining this code starts to become a headache. "Now, does this object need me to have set Guid1 or Guid2 before I call it?" On top of that, the class itself can't be sure that some other code won't go and change the values of global variables it was relying on.
If you used the first approach, you simply pass in a different value to the CustomEntityFrubber constructor, with no further code changes needed. Furthermore, there's no stale data hanging around. The constructor makes it obvious which dependencies the class has, and once it has them, it can be sure that they don't change except in ways they were designed for.
As you say, you shouldn't put a member variable on the plugin since instances are cached and reused between requests by the plugin pipeline.
The approach I take is to create a class that performs the task you need and to pass a modified LocalPluginContext (made a public class), provided by the Developer Toolkit (http://msdn.microsoft.com/en-us/library/hh372957.aspx), into the constructor. Your class can then store the instance for the purposes of executing its work, just as you would with any other piece of code. You are essentially decoupling from the restrictions imposed by the plugin framework. This approach also makes it easier to unit test, since you only need to provide the execution context to your class rather than mocking the entire plugin pipeline.
It's worth noting that there is a bug in the automatically generated Plugin.cs class in the Developer Toolkit where it doesn't set the ServiceProvider property. At the end of the constructor of the LocalPluginContext, add the line:
this.ServiceProvider = serviceProvider;
I have seen some implementations of an IoC approach in Plugins - but IMHO it makes the plugin code way too complex. I'd recommend making your plugins lean and simple to avoid threading/performance issues.
There are multiple things I would worry about in this design request (not that it's bad, just things one should be aware of and anticipate):
1. IOrganizationService is not multi-thread safe. I'm assuming that other aspects of the IServiceProvider are not either.
2. Testing things at an IServiceProvider level is much more complicated due to the additional properties that have to be mocked.
3. You'd need a method for handling logging if you ever decided to call logic that is currently in your plugin from outside the plugin (e.g. a command-line service).
If you don't want to be passing the object around everywhere, the simple solution is to create a static property on some class that you can set upon plugin execution and then access from anywhere.
Of course now you have to handle issue #1 from above, so it'd have to be a singleton manager of some sort, probably using the current thread's id to set and retrieve the value for that thread. That way, if the plugin is fired twice, you could retrieve the correct context based on your currently executing thread. (Edit: rather than some funky thread-id lookup dictionary, @shambulator's ThreadStatic property should work.)
For issue #2, I wouldn't store the IServiceProvider as-is, but would split up its different properties (e.g. IPluginExecutionContext, IOrganizationService, etc.).
For issue #3, it might make sense to store an action or a function in your manager rather than the object values themselves. For example, rather than storing the IPluginExecutionContext, store an action that accepts a string to log and uses the IPluginExecutionContext to log it. This allows other code to set up its own logging, without being dependent on executing from within a plugin.
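A hedged sketch of that idea; the class name is mine, and the assignment shown assumes a tracing service obtained inside Execute:
public static class PluginLogging
{
    // Set once per invocation from Execute, e.g.:
    //     PluginLogging.Log = msg => tracingService.Trace(msg);
    // Code running outside a plugin (tests, console apps) can assign its own delegate.
    [ThreadStatic]
    public static Action<string> Log;
}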
I haven't made any of these plugins myself, but I would treat the IServiceProvider like an I/O device.
Get the data you need from it and convert that data to a format that suits your plugin. Use the transformed data to set up the other classes. Get the output from the other classes and then translate it back into terms the IServiceProvider can understand and use.
Your input and output are dependent on the IServiceProvider, but the processing doesn't have to be.
From Eduardo Avaria at http://social.microsoft.com/Forums/en-US/f433fafa-aff7-493d-8ff7-5868c09a9a9b/how-to-avoid-passing-a-context-reference-among-classes
Well, as someone at SO already told you, the global-variables restriction is there because the plugin won't be instantiated again if it's called within the same context (the object context and probably other environmental conditions), so any custom global variable would be shared between those instances. But since the context will be the same, there's no problem in assigning it to a global variable if you want to share it between a lot of classes.
Anyway, I'd rather pass the context in the constructors and share it that way, to have a little more control over it, but that's just me.
I'm trying to pass "statistics" from one program to another (my first question covers how to pass some "statistics" from a C# program to another program).
To pass statistics I first need to collect it.
I've decided to implement central storage, like StatisticsStorage with one method StatisticsStorage.joinStatistics(string groupName, string indicatorName, callback getValueMethod)
Then, for example, the Thermometer class would look like this (pseudo code):
class Thermometer
{
    public Thermometer(string installationPlace)
    {
        // Register a callback the storage can invoke to read the current value.
        StatisticsStorage.joinStatistics("temperature", installationPlace, GetThermometerValue);
    }

    private double GetThermometerValue()
    {
        return this.thermometerValue;
    }

    private double thermometerValue;
    //.....
}
StatisticsStorage should call the callback method for all indicators periodically.
Once the statistics are collected, I can pass them on one way or another.
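In C#, the callback parameter maps naturally onto a delegate such as Func<double>. Here is a minimal sketch of what StatisticsStorage might look like under that assumption (CollectAll and the key format are mine, not part of the design above):
using System;
using System.Collections.Generic;

static class StatisticsStorage
{
    private static readonly Dictionary<string, Func<double>> indicators =
        new Dictionary<string, Func<double>>();

    public static void joinStatistics(string groupName, string indicatorName, Func<double> getValueMethod)
    {
        indicators[groupName + "/" + indicatorName] = getValueMethod;
    }

    // Called periodically: invokes every registered callback and snapshots the values.
    public static IDictionary<string, double> CollectAll()
    {
        var snapshot = new Dictionary<string, double>();
        foreach (var pair in indicators)
            snapshot[pair.Key] = pair.Value();
        return snapshot;
    }
}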
Questions:
Do you see any problems with my approach?
How can I implement callbacks better in C#? (I'm pretty new to C#.)
There are probably many ways to achieve your desired result.
I would probably publish a WCF service, maybe hosted in a Windows service, and you can connect and post stats to that. This keeps a good separation of system concerns and can be reused from other systems, etc.
I suppose it depends on how in depth you want to go and the requirements in this scenario.
Then again, I could just be over-analyzing what you are trying to do :)
Why use a GlobalClass? What are they for? I have inherited some code (shown below) and as far as I can see there is no reason why strUserName needs this. What is it all for?
private static string m_globalVar; // backing field (assumed; not shown in the inherited code)

public static string strUserName
{
    get { return m_globalVar; }
    set { m_globalVar = value; }
}
Used later as:
GlobalClass.strUserName
Thanks
You get all the bugs of global state and none of the yucky direct variable access.
If you're going to do it, then your coder implemented it pretty well. He/She probably thought (correctly) that they would be free to swap out an implementation later.
Generally it's viewed as a bad idea, since the more globals you have in a system, the more difficult it becomes to test it as a whole.
My 2 cents.
When you want to use a static member of a type, you use it like ClassName.MemberName. If your code snippet is in the same class as the member you're referring to (in this example, you're coding in a GlobalClass member and using strUserName), you can omit the class name. Otherwise, it's required, as the compiler wouldn't otherwise know which class you're referring to.
This is a common approach when dealing with Context in ASP.NET; however, the implementation would never use a single variable. So if this is a web app, I could see this approach being used to indicate who the current user is (although there are better ways to do this).
I use a similar approach where I have a MembershipService.CurrentUser property which pulls a user out of either session state or the LogicalCallContext (depending on whether it's a web or client app).
But in these cases these aren't global, as they are scoped within narrow confines (like the HTTP session state).
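A hedged sketch of that CurrentUser idea (the User type and the storage keys are illustrative):
using System.Runtime.Remoting.Messaging;
using System.Web;

public static class MembershipService
{
    public static User CurrentUser
    {
        get
        {
            // Web app: scope the value to the HTTP session.
            if (HttpContext.Current != null)
                return (User)HttpContext.Current.Session["CurrentUser"];
            // Client app: fall back to the logical call context.
            return (User)CallContext.LogicalGetData("CurrentUser");
        }
    }
}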
One case where I have used a global like this is when I have some data which is static and never changes, and is loaded from the DB (and there's not enough of the data to justify storing it in a cache). You can just store it in a static variable so you don't have to go back to the DB.
On a side note, why was the developer using Hungarian notation to name properties? Even when there was no IntelliSense and all the goodness our IDEs provide, we never used Hungarian notation on properties.
@Jayne, @Josh: it's hard to tell, but the code in the question could also be a static accessor to a static field - somewhat different from @Josh's static helper example (where you use instance or context variables within your helper).
Static helper methods are a good way to conveniently abstract stateless chunks of functionality. However, in the example there is potential for the global variable to be stateful - the Law of Demeter guides us that you should only play with state that you own or are given, e.g. by parameters.
http://www.c2.com/cgi/wiki?LawOfDemeter
Given the rules, there are occasional times when it is necessary to break them. You should trade off the risk of using global state (primarily the risk of creating state/concurrency bugs) against the necessity of using globals.
Well, if you want a piece of data to be available to any other class running in the JVM, then the global class is the way to go.
There are only two slight problems:
One. The implementation shown is not thread-safe. The set method of any global class should be marked as a critical section or wrapped in a mutex (see the sketch at the end of this answer).
Even in the naive example above, consider what happens if two threads run simultaneously:
set("Joe") and set("Frederick") could result in "Joederick" or "Fre" or some other permutation.
Two. It doesn't scale well. "Global" refers to a single JVM. A more complex runtime environment like JBoss could be running several intercommunicating JVMs. So the global userid could be 'Joe' or 'Frederick' depending on which JVM your EJB is scheduled on.
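For the first problem, here is a minimal sketch of a thread-safe version of the property from the question (the lock object is my addition):
private static readonly object sync = new object();
private static string m_globalVar;

public static string strUserName
{
    // Serialize access so concurrent reads and writes can't interleave.
    get { lock (sync) { return m_globalVar; } }
    set { lock (sync) { m_globalVar = value; } }
}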