WF4 - resuming from instance store and InstanceNotReadyException - C#

I am using WF4 and WorkflowApplication to host a workflow. The workflow is very simple (at the testing stage): it boils down to a series of logging activities, a delay activity, and then more logging before finishing. I am using SqlWorkflowInstanceStore for persisting the workflows.
The workflow runs fine until it reaches the delay activity; at that point I can see it being saved to the persistence database and then unloaded. I have looked at code examples and use the code below (at the bottom) for resuming the workflow after the delay has expired. The code runs in a loop to ensure that newly runnable workflows are loaded. It all seems to work: the workflow gets resumed and I can see the expected logging output. However, after the workflow completes, it seems to try to resume it again. The call
WaitForEvents(_handle, TimeSpan.MaxValue)
continues again just as if it were going to resume a completed workflow. Next,
hasRunnableWorkflows
gets set to true. When the code reaches
wfApp.LoadRunnableInstance();
an InstanceNotReadyException ("No runnable workflow instances were found in the InstanceStore for this WorkflowApplication to load.") is thrown.
I don't understand why this is happening or how to prevent the exception from being thrown. If I ignore the exception everything seems to work fine, but I want to know what is going on and whether I'm doing something wrong.
Code for resuming the workflow:
public Task ResumePendingFlows()
{
    var tcs = new TaskCompletionSource<Guid>();
    var store = _workflowInstanceStore?.Store;
    if (store != null)
    {
        bool hasRunnableWorkflows = false;
        //wait until an event has occurred
        foreach (var currentEvent in store.WaitForEvents(_handle, TimeSpan.MaxValue))
        {
            if (currentEvent == HasRunnableWorkflowEvent.Value)
            {
                hasRunnableWorkflows = true;
                break;
            }
        }
        if (hasRunnableWorkflows)
        {
            //create WorkflowApplication with extensions and instance store
            var wfApp = CreateWorkflowApplication();
            wfApp.LoadRunnableInstance();
            Logger?.Debug("Found runnable workflows");
            //register Completed/Unloaded events, passing the task completion source
            RegisterWorkflowEvents(tcs, wfApp);
            wfApp.Run();
        }
        else
        {
            Logger?.Debug("Did not find runnable workflows");
            tcs.SetResult(Guid.Empty);
        }
    }
    return tcs.Task;
}
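For what it's worth, HasRunnableWorkflowEvent only signals that the store believes there may be runnable instances; by the time LoadRunnableInstance executes, the instance may already have completed or been claimed, and the store then raises InstanceNotReadyException. A common pattern (a sketch only, reusing the CreateWorkflowApplication helper above) is to treat the exception as a benign race and simply report nothing to resume:
if (hasRunnableWorkflows)
{
    var wfApp = CreateWorkflowApplication();
    try
    {
        wfApp.LoadRunnableInstance();
    }
    catch (InstanceNotReadyException)
    {
        // The event fired, but no instance was actually ready - e.g. it had
        // already completed or been claimed. Nothing to resume this round.
        Logger?.Debug("Runnable-instance event fired but no instance was ready");
        tcs.SetResult(Guid.Empty);
        return tcs.Task;
    }
    RegisterWorkflowEvents(tcs, wfApp);
    wfApp.Run();
}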

Related

C# WinForms Application exits unexpectedly with no exception, but only when the API piece is not on the same machine

I am developing an application that runs as a WinForms thick client, accessing both an API running in the cloud (Azure) and a local SQL Server database for data.
To allow users to log in, the login screen is triggered as a modal prompt when the application starts up, with the following code in the HomeScreen form, which is the 'main' page of the application:
using (Form loginScreen = new LoginForm())
{
    loginScreen.ShowDialog(this);
}
Once the login screen has been passed, the user can see the home screen; if they cancel it, the application closes. Once they get to the home screen, another API call is made to retrieve data about the user from the API for display on the home screen.
All API calls execute the same code, shown below (this is very early code for a 'working prototype' and I am aware there are probably issues with it that require a refactor; at this point I'm really only interested in understanding what is causing my call to PostAsJsonAsync to fail):
public async Task<ApiResponse> sendApiRequest(RequestDetail reqDet)
{
    //track whether the action was done or we need to retry after a timeout and login
    bool actionDone = false;
    //declare the ApiResponse here so it can be used outside the scope of the actionDone loop
    ApiResponse res = null;
    while (actionDone == false)
    {
        //populate the main SessionKey of the packet from the GlobalData var (for initial dev, to be refactored out)
        reqDet.SessionKey = GlobalData.SessionKey;
        //populate the SessionKey in the array underneath the main object (for future use)
        reqDet.strParameters["SessionKey"] = GlobalData.SessionKey;
        //instantiate a new ApiRequest object to hold the main request body
        ApiRequest req = new ApiRequest("ClientRequest", reqDet);
        //create an HttpClient for communication with the server
        //(a new instance per call works, though sharing one HttpClient is the usual practice)
        HttpClient client = new HttpClient();
        //set URL and headers (URL will be in a config file in future)
        client.BaseAddress = new Uri("https://removed.the.url.for.se/api/");
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        //actually call the service, wait for the response, and read it into the response object
        HttpResponseMessage response = await client.PostAsJsonAsync((string)req.requestBody.ApiLocation, req);
        res = await response.Content.ReadAsAsync<ApiResponse>();
        //check whether the response was successful or we need to show an error
        if (res.responseType == "Success")
        {
            //set actionDone to true so we exit the loop
            actionDone = true;
        }
        else
        {
            //use the MessageService to display the error
            Error err = res.responseError;
            MessagesService ms = new MessagesService();
            await ms.displayErrorPrompt(err);
            //trigger a login screen and restart the service call if the user's session has expired
            if (err.ErrorText.Equals("Session has expired, please log in again"))
            {
                using (Form login = new LoginForm())
                {
                    login.ShowDialog();
                } // Dispose form
            }
            else
            {
                //set actionDone to true if it's not a login error so we don't endlessly call the service
                actionDone = true;
            }
        }
    }
    //return the final result
    return res;
}
When running the entire stack locally, this all works perfectly: I can log in and traverse the rest of my application as normal. When running the client locally in VS and the API in Azure, the first call to the login API succeeds (I can call it multiple times, e.g. with a wrong password, and it behaves normally), but the second call, which gets the user's data to paint on the home screen, fails. If I put a breakpoint on the PostAsJsonAsync line, I can see that the line executes once and continues as normal, but immediately after stepping over the line the second time, for the user details call, the entire application exits without executing the subsequent code.
What is strange about this is that it exits with a 0x0 return code, does not throw an exception, and in no way behaves abnormally other than shutting down after just that line.
I have tried manually calling the APIs on the Azure service in Postman and they all return exactly the same (correct) results I get when running it locally, so I know it is not the deployment to the App Service that is the issue.
Things I have tried to fix it, after Googling, reading other SE posts, and looking at comments on this question:
I have tried enabling first-chance exceptions in Visual Studio for all CLR exceptions. Nothing is caught or thrown that I can see. Here is a screenshot of my settings in case I've done something wrong.
I have tried wrapping just that line in a try-catch block that catches all exceptions. It still immediately stops executing after the PostAsJsonAsync and never reaches the catch block.
Adding the following code to my Program.cs file to catch unhandled exceptions (the handler is never run when I put a breakpoint in it, and nothing is written to the console that I can see):
static void Main()
{
    AppDomain currentDomain = AppDomain.CurrentDomain;
    currentDomain.UnhandledException += new UnhandledExceptionEventHandler(MyHandler);
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new HomeScreen());
}
static void MyHandler(object sender, UnhandledExceptionEventArgs args)
{
    Exception e = (Exception)args.ExceptionObject;
    Console.WriteLine("MyHandler caught : " + e.Message);
}
Setting a DumpFolder that is writable by all users and a DumpType of 2 in a key named after my executable at Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\ - I've tried keys named both MyApplication and MyApplication.exe, and neither results in a file being produced when the app crashes.
Checking the Windows Event Viewer after the 'crash' (nothing from my application).
Reviewing the request/response in Fiddler - the first 'login' request and response is shown correctly, but the second is not shown at all, so it looks like it's crashing before even sending the request.
I'd be extremely grateful for any suggestions you can provide, even if only a workaround or 'patch' to resolve the issue. It's extremely strange to me that it exits the program with no exception and without running the subsequent code, that it only does this when the API piece is running in Azure and not locally, and that it only happens on the request after the login.
Update
I have tried commenting out the line that runs the refreshScreen() function to call the web service again, and the application still exits in the same way after the login, just without hitting my breakpoint a second time - again, only when the application is running against the Azure API and not locally. If I break at the last line of the HomeScreen constructor and keep stepping, it goes back to my Main() method and ends the application. Is there something I'm doing wrong here?
I think PostAsJsonAsync may have been a red herring, so I have taken it out of the title.
public HomeScreen()
{
    InitializeComponent();
    if (GlobalData.SessionKey == null)
    {
        using (Form loginScreen = new LoginForm())
        {
            loginScreen.ShowDialog(this);
        }
        // Dispose form
    }
    // Note: refreshScreen() is async but is not awaited here, so the
    // constructor continues while the API call is still in flight.
    refreshScreen();
}
public async Task refreshScreen()
{
    ApiService srv = new ApiService();
    ApiResponse res = await srv.sendApiRequest(new Sessions_GetUserDetailsRequest());
    if (res.responseType == "Success")
    {
        foreach (dynamic usrItem in JsonConvert.DeserializeObject(res.responseContent))
        {
            lblUserName.Text = usrItem.UserGivenName + " " + usrItem.UserSurname;
            lblSiteName.Text = usrItem.TenantName;
        }
    }
}
So after doing some research to answer the helpful comments on this question, I stumbled across the answer.
I have an event in the application that is designed to close the entire application if the user exits the login page without logging in, since otherwise it would return to the 'home screen' form in an invalid state. It contained the code below, designed to close the application if the user didn't have a token (i.e. had cancelled the page).
Because my login process is asynchronous (code above), when I was stepping through the process in VS I was getting to the PostAsJsonAsync step, and it was closing the application without showing me it was running the 'on close' event. However, unknown to me when testing locally, the code had a race condition: it would jump ahead to the 'close form' logic while still awaiting the web service call, and therefore execute the following code:
private void DoOnFormClosing(object sender, FormClosingEventArgs e)
{
    if (GlobalData.SessionKey == null || GlobalData.SessionExpiry <= DateTime.Now)
    {
        Application.Exit();
    }
}
The solution was to remove this event handler as part of the login process, after the login had been validated, so that this code would never be called once the user had successfully logged in.
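A minimal sketch of that fix, assuming the handler was attached in the form's constructor (the method name OnLoginValidated is illustrative, not from the original code):
private void OnLoginValidated()
{
    // Detach the close-on-no-session guard once the session is valid, so a
    // still-pending API call can no longer shut the application down.
    this.FormClosing -= DoOnFormClosing;
}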

.Net Session (StateServer mode) not synchronizing if manipulated after request end

I'm getting a bit frustrated with this problem:
I have a web site that manages some files to download. Because these files are very big and must be organized into folders and then compacted, I built an Ajax structure that does this job in the background; when the files are ready to be downloaded, the job changes the status of an object in the user session (bool isReady = true, simple as that).
To achieve this, when the user clicks "download", a jQuery POST is sent to an API, and this API starts the "organizer" job and finishes the request (the main, request-scoped thread), leaving a background thread doing the magic (it's so beautiful, haha).
This "organizer" job is a background thread that receives the HttpSessionState (HttpContext.Current.Session) as a parameter. It organizes and zips the files, creates a download link and, at the end, changes an object in the session using the HttpSessionState it received.
This works great when I'm using the session "InProc" mode (I was very happy to deploy this piece of art to production after the tests).
But my nightmares started when I deployed the project to the production environment, because we use "StateServer" mode there.
In that environment, the changes are not applied.
What I have noticed so far is that with StateServer, every change I make in the background thread is not "committed" to the session when the change occurs AFTER the user request (the request that started the thread) has ended.
If I write a thread.Join() to wait for the thread to finish, the changes made inside the thread are applied.
I'm thinking about using the DB to store these values, but I would lose some performance :(
[HttpPost]
[Route("startDownloadNow")]
public void StartDownloadNow(DownloadStatusProxy input)
{
    //some pieces of code...
    ...
    //add the download request to the user session
    Downloads.Add(data);
    //pass the session as a parameter to the thread,
    //because the thread itself doesn't know the current HttpContext session
    HttpSessionState session = HttpContext.Current.Session;
    Thread thread = new Thread(() => ProccessDownload(data, session));
    thread.Start();
    //here, if I put a thread.Join(), the changes inside the thread are applied
    //correctly, but I can't do this, otherwise it ceases to be Ajax
}
private void ProccessDownload(DownloadStatus currentDownload, HttpSessionState session)
{
    List<DownloadStatus> listDownload = ((List<DownloadStatus>)session["Downloads"]);
    try
    {
        //just do the magic...
        string downloadUrl = CartClient.CartDownloadNow(currentDownload.idRegion, currentDownload.idUser, currentDownload.idLanguage, currentDownload.listCartAsset.ToArray(), currentDownload.listCartAssetThumb.ToArray());
        listDownload.Find(d => d.hashId == currentDownload.hashId).downloadUrl = downloadUrl;
        listDownload.Find(d => d.hashId == currentDownload.hashId).isReady = true;
        //at this point, if I inspect the current session, the values are applied, but in
        //the next user request these values are back in their previous state... sad... .NET bad dog...
    }
    catch (Exception e)
    {
        listDownload.Find(d => d.hashId == currentDownload.hashId).msgError = Utils.GetAllErrors(e);
        LogService.Log(e);
    }
    //this was a desperate attempt: I retrieve the object, manipulate it and put it
    //back into the session, but it doesn't work either...
    session["Downloads"] = listDownload;
}
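This matches how out-of-process session state works: with StateServer (or SQLServer) mode, the session is deserialized at the start of a request and serialized back to the state service when that request ends, so changes made to the in-memory copy after the request has finished are never written back. One workaround is to keep the download status out of session state entirely. A minimal sketch, where the static dictionary and its key scheme are illustrative and not from the original code:
//requires using System.Collections.Concurrent;
static readonly ConcurrentDictionary<string, DownloadStatus> _statusStore =
    new ConcurrentDictionary<string, DownloadStatus>();

//in StartDownloadNow: key the status by session id + download hash,
//and hand the background thread the key instead of the session
string key = HttpContext.Current.Session.SessionID + ":" + data.hashId;
_statusStore[key] = data;
Thread thread = new Thread(() => ProccessDownload(data, key));
thread.Start();

//the background thread then updates _statusStore[key] instead of the session,
//so no end-of-request serialization is involved; a polling endpoint can read
//the status back by rebuilding the same key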

Task stops working on different environment

I'm having a hard time with this one.
In my ASP.NET application there is a method like this:
public CopyResponse thirdStage(CopyRequest request)
{
    CopyResponse response = new CopyResponse();
    Task.Run(() =>
    {
        performCopying(request);
    });
    return response;
}
private void performCopying(CopyRequest request)
{
    using (Repository = new myDbContext())
    {
        // do some initial action
        try
        {
            // in general it looks like below
            foreach (var child in father)
            {
                var newChild = child.Copy();
                Repository.Childrens.Add(newChild);
                foreach (var grandchild in child.grandchildrens)
                {
                    var newGrandchild = grandchild.Copy();
                    newGrandchild.Parent = newChild;
                    Repository.Grandchildrens.Add(newGrandchild);
                }
                Repository.SaveChanges();
            }
        }
        catch (Exception ex)
        {
            // log that the action failed
            // (rethrow with "throw;" rather than "throw ex;" to preserve the stack trace)
            throw;
        }
    }
}
This method and all the others (there are some similar ones) work as designed on my local computer without any problems.
Unfortunately, on another environment those methods fail:
Copying smaller amounts of data works fine, but when there are over 3000 objects to operate on, the method fails.
The main application keeps responding correctly nevertheless.
Most of the operation is done correctly (most data is copied and saved in the database).
The application doesn't enter the catch block; the instructions after the failed copying are not executed, and the exception isn't caught by the error handler. (BTW, I know that by default the app can't catch exceptions from an independent task; I wrote my handler so it manages to do so.)
The IIS worker process seems to take over 300 MB and 0% of processor power after the copying stops. More than half of the RAM on the server is still free.
I looked into the Windows event log but haven't found anything.
Do you have any suggestions for how I can handle this issue?
You can't do reliable "fire and forget" tasks from inside IIS; if the site is not being served, the application pool will get its AppDomain shut down after a while.
Two options are:
HostingEnvironment.QueueBackgroundWorkItem, to tell IIS you are doing background work. This lets the server know about the work, and it will delay the shutdown as long as it can (by default up to 90 seconds) before it kills your process.
public CopyResponse thirdStage(CopyRequest request)
{
    CopyResponse response = new CopyResponse();
    // QueueBackgroundWorkItem passes the lambda a CancellationToken that is
    // signalled when IIS starts shutting the AppDomain down.
    HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
    {
        performCopying(request);
    });
    return response;
}
Another option is to use a 3rd-party library that is designed for doing background work in IIS, like Hangfire.io. This runs a service inside IIS that does the work and attempts to keep the instance alive until the work is done. You can also configure Hangfire to run as a separate process, so you don't need to rely on the lifetime of the IIS instance.
public CopyResponse thirdStage(CopyRequest request)
{
    CopyResponse response = new CopyResponse();
    // BackgroundJob.Enqueue takes an expression, so the call is written as a
    // single expression rather than a statement body.
    BackgroundJob.Enqueue(() => performCopying(request));
    return response;
}
Note: using Hangfire with a separate process may require you to slightly redesign performCopying(CopyRequest request) to support being run from a separate process; using it from inside the IIS instance should not require any changes.
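For reference, a minimal sketch of wiring Hangfire into an ASP.NET app via OWIN; the connection string name is illustrative, and the exact packages involved (Hangfire.Core, Hangfire.SqlServer) are an assumption about the setup:
public void Configuration(IAppBuilder app)
{
    // Point Hangfire at its job storage (here, a SQL Server connection string name).
    GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireDb");
    // Host the job server inside this IIS process.
    app.UseHangfireServer();
    // Optional monitoring dashboard at /hangfire.
    app.UseHangfireDashboard();
}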

Monitor.TryEnter and Threading.Timer race condition

I have a Windows service that checks for work every 5 seconds. It uses a System.Threading.Timer for handling the check and processing, and Monitor.TryEnter to make sure only one thread is checking for work.
Just assume it has to be this way: the following code is part of 8 other workers that are created by the service, and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();
public Timer PollingTimer { get; private set; }

void InitializeTimer()
{
    if (PollingTimer == null)
        PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
    else
        PollingTimer.Change(0, 5000);
    Details.TimerIsRunning = true;
}

void PollingTimerCallback(object state)
{
    if (!Details.StillGettingWork)
    {
        if (Monitor.TryEnter(_workCheckLocker, 500))
        {
            try
            {
                CheckForWork();
            }
            catch (Exception ex)
            {
                Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
            }
            finally
            {
                Monitor.Exit(_workCheckLocker);
                Details.StillGettingWork = false;
            }
        }
    }
    else
    {
        Log.Standard("Continuing to get work.");
    }
}

void CheckForWork()
{
    Details.StillGettingWork = true;
    //Hit web server to grab work.
    //Log Processing
    //Process Work
}
Now here's the problem:
The code above is allowing 2 timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice, and I had 2 threads independently trying to process it, which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't dig too deep into the fact that the times match exactly; I just wanted to point out what I saw in the logs to show that I had 2 threads hit a section of code that I believe should never have allowed it. (The log and times are real though, just sanitized messages.)
Eventually what happens is that the 2 threads start downloading a big enough file that one ends up getting access denied on the file, which causes the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I had a lock instead of a Monitor, and assumed it was just that the Timer eventually got offset enough, due to the lock blocking, that timer threads stacked up: one blocked for 5 seconds and went through right as the Timer triggered another callback, and they both somehow made it in. That's why I went with the Monitor.TryEnter option, so I wouldn't just keep stacking timer threads.
Any clue? In all the cases where I have tried to solve this issue before, the System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
I can see in the log you've provided that you got an AppDomain restart there; is that correct? If yes, are you sure that you have one and only one object for your service during the AppDomain restart? I think that during the restart not all the threads are stopped at exactly the same time, and some of them could proceed with polling the work queue, so two different threads in different AppDomains got the same Id for the work.
You could probably fix this by marking your _workCheckLocker with the static keyword, like this:
static object _workCheckLocker;
and introducing a static constructor for your class that initializes this field (with inline initialization you could face some more complicated problems). But I'm not sure this is enough for your case: during an AppDomain restart the static class will be reloaded too. As I understand it, this is not an option for you.
Maybe you could introduce a static dictionary instead of an object for your workers, so you can check the Id of the documents in process.
Another approach is to handle the Stopping event for your service, which could be called during the AppDomain restart, and introduce a CancellationToken that you use to stop all the work in such circumstances.
Also, as @fernando.reyes said, you could introduce a heavy lock structure called a mutex for synchronization, but this would degrade your performance.
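A minimal sketch of the static-lock variant described above (the class name is illustrative):
class WorkPoller
{
    // Shared across every instance of this worker type in the AppDomain.
    static readonly object _workCheckLocker;

    // A static constructor rather than inline initialization, as suggested above.
    static WorkPoller()
    {
        _workCheckLocker = new object();
    }
}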
TL;DR
Production stored procedure has not been updated in years. Workers were getting work they should have never gotten and so multiple workers were processing update requests.
I was finally able to find the time to properly set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce it exactly as I'd experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the work were indeed correct, and that's something that should never have been able to happen, as each worker is unique in the work it does and requests.
It turns out that in our production environment, the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got update requests, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is database side and not a client update.

WF4: Workflow stay locked

I have an application that hosts WF4 workflows in IIS using WorkflowApplication.
The workflow is defined by the user (using a rehosted workflow designer) and the XML is stored in the database. Then, depending on the user's actions in the application, an XML definition is selected from the database and the workflow is created or resumed.
My problem is: when the workflow reaches a bookmark and goes idle, it stays locked for a variable amount of time. Then, if the user tries to perform another action concerning this workflow, I get this exception:
The execution of an InstancePersistenceCommand was interrupted because the instance '52da4562-896e-4959-ae40-5cd016c4ae79' is locked by a different instance owner. This error usually occurs because a different host has the instance loaded. The instance owner ID of the owner or host with a lock on the instance is 'd7339374-2285-45b9-b4ea-97b18c968c19'.
Now it's time for some piece of code
When a workflow goes idle, I specify that it should be unloaded:
private PersistableIdleAction handlePersistableIdle(WorkflowApplicationIdleEventArgs arg)
{
    this.Logger.DebugFormat("Workflow '{1}' is persistableIdle on review '{0}'", arg.GetReviewId(), arg.InstanceId);
    return PersistableIdleAction.Unload;
}
For each WorkflowApplication I need, I create a new SqlWorkflowInstanceStore:
var store = new SqlWorkflowInstanceStore(this._connectionString);
store.RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5);
store.InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry;
Here is how my WorkflowApplication is created:
WorkflowApplication wfApp = new WorkflowApplication(root.RootActivity);
wfApp.Extensions.Add(...);
wfApp.InstanceStore = this.createStore();
wfApp.PersistableIdle = this.handlePersistableIdle;
wfApp.OnUnhandledException = this.handleException;
wfApp.Idle = this.handleIdle;
wfApp.Unloaded = this.handleUnloaded;
wfApp.Aborted = this.handleAborted;
wfApp.SynchronizationContext = new CustomSynchronizationContext();
return wfApp;
Then I call the Run method to start it.
Some explanations:
- root.RootActivity: the activity created from the workflow XML stored in the database
- CustomSynchronizationContext: a synchronisation context that handles authorisations
- in the handleUnloaded method I log when a workflow is unloaded, and I see that the workflow is correctly unloaded before the next user action, but it seems the workflow stays locked after being unloaded (?)
Then, later, when I need to resume the workflow, I create the workflow the same way and then call:
wfApp.Load(workflowInstanceId);
which throws the "locked" exception quoted above.
If I wait a few minutes and try again, it works fine.
I read a post here that says we need to set an owner.
So I've also tried using a static SqlWorkflowInstanceStore with the owner set using this code:
if (_sqlWorkflowInstanceStore != null)
    return _sqlWorkflowInstanceStore;
lock (_mutex)
{
    if (_sqlWorkflowInstanceStore != null)
        return _sqlWorkflowInstanceStore;
    // Configure store
    _sqlWorkflowInstanceStore = new SqlWorkflowInstanceStore(this._connectionString);
    _sqlWorkflowInstanceStore.RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5);
    _sqlWorkflowInstanceStore.InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry;
    // Set owner - store will be read-only beyond this point and will not be configurable anymore
    var handle = _sqlWorkflowInstanceStore.CreateInstanceHandle();
    var view = _sqlWorkflowInstanceStore.Execute(handle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(5));
    handle.Free();
    _sqlWorkflowInstanceStore.DefaultInstanceOwner = view.InstanceOwner;
}
return _sqlWorkflowInstanceStore;
But then I get this kind of exception:
The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9efb4434-8560-469f-9d03-098a2d48821e' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.
Does anyone know how to make sure that the lock on the workflow is released immediately when the workflow is unloaded?
I've seen some posts doing this with a WorkflowServiceHost (using WorkflowIdleBehavior), but here I'm not using WorkflowServiceHost, I'm using WorkflowApplication.
Thank you for any help!
I suspect the problem is with the InstanceOwner of the SqlWorkflowInstanceStore. It isn't deleted, so the workflow needs to wait for the ownership of the previous one to time out.
Creating an instance owner
var instanceStore = new SqlWorkflowInstanceStore(connStr);
var instanceHandle = instanceStore.CreateInstanceHandle();
var createOwnerCmd = new CreateWorkflowOwnerCommand();
var view = instanceStore.Execute(instanceHandle, createOwnerCmd, TimeSpan.FromSeconds(30));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;
Deleting an instance owner
var deleteOwnerCmd = new DeleteWorkflowOwnerCommand();
instanceStore.Execute(instanceHandle, deleteOwnerCmd, TimeSpan.FromSeconds(30));
Another possible issue is that when a workflow aborts, the Unloaded callback isn't called.
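Putting the two commands together: if the owner is deleted when the host is done with the store, the lock is released immediately instead of waiting for the owner's lease to expire. A sketch based on the snippets above (error handling omitted):
var instanceStore = new SqlWorkflowInstanceStore(connStr);
var instanceHandle = instanceStore.CreateInstanceHandle();

// Register this host as the instance owner.
var view = instanceStore.Execute(instanceHandle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;

// ... load, run and unload workflows ...

// On shutdown: delete the owner so its locks are released right away, then free the handle.
instanceStore.Execute(instanceHandle, new DeleteWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
instanceHandle.Free();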
