Releasing lock without executing a method - C#

The system I am working on is composed of a Windows service hosting various WCF services. Multiple clients can talk to the same service, but only one client can talk to a service at a time.
I am therefore using a "lock ()" to prevent multiple clients from conflicting. If Client1 makes a first request to a service and Client2 makes another request while the former is still executing, the "lock" puts that second request on hold until the first one is done.
So far so good. Now the problem is that I have to deal with events (I don't think a simple callback would do the trick here).
In other words, while the 1st request is running, something may happen that Client2 needs to know about.
When that happens, Client2 will receive that event and should eventually stop using that service altogether.
The problem here is that our 2nd request is already in the locker's queue.
So how would I prevent that second request from running? Can it be cancelled?
Maybe it is just a matter of adding a dirty flag, but I am hoping there are better ways to do it.
Here is what it may look like on the service side with the dirty flag "canRun":
lock (_locker)
{
    if (canRun)
        return SomeMethod();
    else
        return null;
}

If you can switch to async/await and something like a SemaphoreSlim, then CancellationToken is what you're looking for; a CancellationTokenSource can be marked as cancelled at any time, and code can respond accordingly. Most framework/library code already handles cancellation correctly, for example during async semaphore acquisition.
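As a minimal sketch of that approach (the field and method names here are illustrative, with SomeMethod standing in for the guarded work from the question):

private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

public async Task<object> SomeMethodGuardedAsync(CancellationToken ct)
{
    // WaitAsync throws OperationCanceledException if ct is cancelled
    // while this request is still waiting its turn.
    await _semaphore.WaitAsync(ct);
    try
    {
        return SomeMethod(); // the original synchronous work
    }
    finally
    {
        _semaphore.Release();
    }
}

Client2's event handler then just calls Cancel() on the CancellationTokenSource whose token the pending request is waiting with.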
If you need to stay in a synchronous world, then your best bet is probably to change to a looped Monitor mode that re-checks on every timeout, for example:
bool lockTaken = false;
try
{
    do
    {
        if (!canRun) throw new OperationCanceledException();
        Monitor.TryEnter(_locker, someTimeout, ref lockTaken);
    }
    while (!lockTaken);

    // now we have the conch
    SomeMethod();
}
finally
{
    if (lockTaken) Monitor.Exit(_locker);
}
Note that if canRun is a field, you would want to ensure that it is either volatile or otherwise accessed suitably, to avoid it being cached in a register or similar.
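For example, a minimal way to declare the flag safely (assuming canRun is a private field on the service):

// volatile guarantees every read observes the most recent write from any
// thread, rather than a stale value cached in a register.
private volatile bool canRun = true;

// Elsewhere, when Client2's event arrives and pending work should not run:
canRun = false;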

Related

How to handle multiple tasks running in parallel at different intervals inside a C# based Windows service?

I already have some experience in working with threads in Windows, but most of that experience comes from using Win32 API functions in C/C++ applications. When it comes to .NET applications, however, I am often not sure how to properly deal with multithreading. There are threads, tasks, the TPL, and all sorts of other things I can use for multithreading, but I never know when to use which of those options.
I am currently working on a C# based Windows service which needs to periodically validate different groups of data from different data sources. Implementing the validation itself is not really an issue for me, but I am unsure about how to handle all of the validations running simultaneously.
I need a solution for this which allows me to do all of the following things:
Run the validations at different (predefined) intervals.
Control all of the different validations from one place so I can pause and/or stop them if necessary, for example when a user stops or restarts the service.
Use the system resources as efficiently as possible to avoid performance issues.
So far I've only had one similar project, where I simply used Thread objects combined with a ManualResetEvent and a Thread.Join call with a timeout to notify the threads when the service is stopped. The logic inside those threads to do something periodically then looked like this:
while (!shutdownEvent.WaitOne(0))
{
    if (DateTime.Now > nextExecutionTime)
    {
        // Do something
        nextExecutionTime = nextExecutionTime.AddMinutes(interval);
    }

    Thread.Sleep(1000);
}
While this did work as expected, I've often heard that using threads directly like this is considered "old school" or even bad practice. I also think that this solution does not use threads very efficiently, as they are just sleeping most of the time. How can I achieve something like this in a more modern and efficient way?
If this question is too vague or opinion-based then please let me know and I will try my best to make it as specific as possible.
The question feels a bit broad, but we can use the provided code and try to improve it.
Indeed, the problem with the existing code is that for the majority of the time it holds a thread blocked while doing nothing useful (sleeping). The thread also wakes up every second only to check the interval and, in most cases, go back to sleep, since it's not validation time yet. Why does it do that? Because if you sleep for a longer period, you might block for a long time when you signal shutdownEvent and then join the thread; Thread.Sleep doesn't provide a way to be interrupted on request.
To solve both problems we can use:
The cooperative cancellation mechanism, in the form of CancellationTokenSource + CancellationToken.
Task.Delay instead of Thread.Sleep.
For example:
async Task ValidationLoop(CancellationToken ct) {
    while (!ct.IsCancellationRequested) {
        try {
            var now = DateTime.Now;
            if (now >= _nextExecutionTime) {
                // do something
                _nextExecutionTime = _nextExecutionTime.AddMinutes(1);
            }
            var waitFor = _nextExecutionTime - now;
            if (waitFor.Ticks > 0) {
                await Task.Delay(waitFor, ct);
            }
        }
        catch (OperationCanceledException) {
            // expected, just exit;
            // otherwise, let it go and handle the cancelled task
            // at the caller of this method (the returned task will be cancelled)
            return;
        }
        catch (Exception) {
            // either have a global exception handler here,
            // or expect the task returned by this method to fail
            // and handle that condition at the caller
        }
    }
}
Now we do not hold a thread any more, because await Task.Delay doesn't do that. Instead, after the specified time interval, it will execute the subsequent code on a free thread pool thread (it's more complicated than this, but we won't go into details here).
We also don't need to wake up every second for no reason, because Task.Delay accepts a cancellation token as a parameter. When that token is signalled, Task.Delay is immediately interrupted with an exception, which we expect, and we break out of the validation loop.
To stop the provided loop you need to use CancellationTokenSource:
private readonly CancellationTokenSource _cts = new CancellationTokenSource();
You pass its _cts.Token into the provided method. Then, when you want to signal cancellation, just do:
_cts.Cancel();
To further improve the resource management: if your validation code performs any IO operations (reading files from disk, network access, database access, etc.), use the async versions of those operations. Then, while performing IO, you will hold no unnecessary threads blocked waiting.
Now you don't need to manage threads yourself anymore; instead, you operate in terms of the tasks you need to perform, letting the framework/OS manage threads for you.
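Putting it together, a minimal sketch of how a service might start several of these loops and stop them all in one place (this assumes a ValidationLoop variant that takes its interval as a parameter, a small variation on the method above):

private readonly CancellationTokenSource _cts = new CancellationTokenSource();
private Task[] _loops;

protected override void OnStart(string[] args) {
    // One validation loop per data source, each with its own interval.
    _loops = new[] {
        ValidationLoop(TimeSpan.FromMinutes(5), _cts.Token),
        ValidationLoop(TimeSpan.FromHours(1), _cts.Token),
    };
}

protected override void OnStop() {
    _cts.Cancel(); // every loop observes the same token
    // The loops catch OperationCanceledException and return,
    // so this wait completes quickly.
    Task.WaitAll(_loops, TimeSpan.FromSeconds(5));
}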
You should use Microsoft's Reactive Framework (aka Rx) - NuGet System.Reactive and add using System.Reactive.Linq; - then you can do this:
Subject<bool> starter = new Subject<bool>();

IObservable<Unit> query =
    starter
        .StartWith(true)
        .Select(x => x
            ? Observable.Interval(TimeSpan.FromSeconds(5.0)).SelectMany(y => Observable.Start(() => Validation()))
            : Observable.Never<Unit>())
        .Switch();

IDisposable subscription = query.Subscribe();
That fires off the Validation() method every 5.0 seconds.
When you need to pause and resume, do this:
starter.OnNext(false);
// Now paused
starter.OnNext(true);
// Now restarted.
When you want to stop it all, call subscription.Dispose().

Receive concurrent asynchronous requests and process them one at a time

Background
We have a service operation that can receive concurrent asynchronous requests and must process those requests one at a time.
In the following example, the UploadAndImport(...) method receives concurrent requests on multiple threads, but its calls to the ImportFile(...) method must happen one at a time.
Layperson Description
Imagine a warehouse with many workers (multiple threads). People (clients) can send the warehouse many packages (requests) at the same time (concurrently). When a package comes in, a worker takes responsibility for it from start to finish, and the person who dropped off the package can leave (fire-and-forget). The workers' job is to put each package down a small chute, and only one worker can put a package down a chute at a time, otherwise chaos ensues. If the person who dropped off the package checks in later (polling endpoint), the warehouse should be able to report on whether the package went down the chute or not.
Question
The question then is how to write a service operation that...
can receive concurrent client requests,
receives and processes those requests on multiple threads,
processes requests on the same thread that received the request,
processes requests one at a time,
is a one way fire-and-forget operation, and
has a separate polling endpoint that reports on request completion.
We've tried the following and are wondering two things:
Are there any race conditions that we have not considered?
Is there a more canonical way to code this scenario in C#/.NET with a service-oriented architecture (we happen to be using WCF)?
Example: What We Have Tried
This is the service code that we have tried. It works, though it feels somewhat like a hack or kludge.
static ImportFileInfo _inProgressRequest = null;

static readonly ConcurrentDictionary<Guid, ImportFileInfo> WaitingRequests =
    new ConcurrentDictionary<Guid, ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request)
{
    // Receive the incoming request
    WaitingRequests.TryAdd(request.OperationId, request);

    while (null != Interlocked.CompareExchange(ref _inProgressRequest, request, null))
    {
        // Wait for any previous processing to complete
        Thread.Sleep(500);
    }

    // Process the incoming request
    ImportFile(request);

    Interlocked.Exchange(ref _inProgressRequest, null);
    WaitingRequests.TryRemove(request.OperationId, out _);
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    !WaitingRequests.ContainsKey(operationId);
This is example client code.
private static async Task UploadFile(FileInfo fileInfo, ImportFileInfo importFileInfo)
{
    using (var proxy = new Proxy())
    using (var stream = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read))
    {
        importFileInfo.FileByteStream = stream;
        proxy.UploadAndImport(importFileInfo);
    }

    await Task.Run(() => Poller.Poll(timeoutSeconds: 90, intervalSeconds: 1, func: () =>
    {
        using (var proxy = new Proxy())
        {
            return proxy.UploadAndImportIsComplete(importFileInfo.OperationId);
        }
    }));
}
It's hard to write a minimum viable example of this in a Fiddle, but here is a start that gives a sense of it and that compiles.
As before, the above seems like a hack/kludge, and we are asking both about potential pitfalls in its approach and about alternative patterns that are more appropriate/canonical.
A simple solution uses the Producer-Consumer pattern to pipe requests, in case of thread count restrictions.
You still have to implement simple progress reporting or an event. I suggest replacing the expensive polling approach with asynchronous communication, as offered by Microsoft's SignalR library. It uses WebSockets to enable async behavior. The client and server can register their callbacks on a hub; using RPC, the client can invoke server-side methods and vice versa. You would post progress to the client by using the hub (client side). In my experience, SignalR is very simple to use and very well documented. It has a library for all the popular server-side languages (e.g. Java).
Polling, in my understanding, is the total opposite of fire-and-forget. You can't forget, because you have to check something based on an interval. Event-based communication, like SignalR, is fire-and-forget, since you fire and will get a reminder (because you forgot). The "event side" will invoke your callback instead of you waiting to do it yourself!
Requirement 5 is ignored, since I didn't see a reason for it; waiting for a thread to complete would eliminate the fire-and-forget character.
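For the progress-reporting piece, a minimal sketch of what a SignalR hub might look like (this assumes ASP.NET Core SignalR; the hub, notifier, and message names are hypothetical):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class ImportProgressHub : Hub { }

// The import pipeline pushes progress to all connected clients via the hub:
public class ImportProgressNotifier
{
    private readonly IHubContext<ImportProgressHub> _hub;

    public ImportProgressNotifier(IHubContext<ImportProgressHub> hub) => _hub = hub;

    public Task ReportAsync(Guid operationId, string status) =>
        _hub.Clients.All.SendAsync("importProgress", operationId, status);
}

The producer-consumer implementation itself: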
private BlockingCollection<ImportFileInfo> requestQueue = new BlockingCollection<ImportFileInfo>();
private bool isServiceEnabled;

// const so it can be used in the semaphore field initializer below
private const int maxNumberOfThreads = 8;
private Semaphore semaphore = new Semaphore(maxNumberOfThreads, maxNumberOfThreads);
private readonly object syncLock = new object();

public void UploadAndImport(ImportFileInfo request)
{
    // Start the request handler background loop
    if (!this.isServiceEnabled)
    {
        this.requestQueue?.Dispose();
        this.requestQueue = new BlockingCollection<ImportFileInfo>();

        // Fire and forget (requirement 4)
        Task.Run(() => HandleRequests());
        this.isServiceEnabled = true;
    }

    // Cache multiple incoming client requests (requirement 1) (and enable throttling)
    this.requestQueue.Add(request);
}

private void HandleRequests()
{
    while (!this.requestQueue.IsCompleted)
    {
        // Wait while the thread limit is exceeded (some throttling)
        this.semaphore.WaitOne();

        // Process the incoming requests on a dedicated thread (requirement 2)
        // until the BlockingCollection is marked completed.
        Task.Run(() => ProcessRequest());
    }

    // Reset the request handler after the BlockingCollection was marked completed
    this.isServiceEnabled = false;
    this.requestQueue.Dispose();
}

private void ProcessRequest()
{
    ImportFileInfo request = this.requestQueue.Take();
    UploadFile(request);

    // The question states that ImportFile() requires synchronization.
    // This is a bottleneck and will significantly drop performance
    // when the method is long-running.
    lock (this.syncLock)
    {
        ImportFile(request);
    }

    this.semaphore.Release();
}
Remarks:
BlockingCollection is an IDisposable.
TODO: You have to "close" the BlockingCollection by marking it completed with BlockingCollection.CompleteAdding(), or it will loop indefinitely waiting for further requests (see the sketch below). Maybe you introduce an additional request method for the client to cancel and/or update the process and to mark adding to the BlockingCollection as completed. Or a timer that waits for an idle period before marking it completed. Or make your request handler thread block or spin.
Replace Take() and Add(...) with TryTake(...) and TryAdd(...) if you want cancellation support.
The code is not tested.
Your ImportFile() method is a bottleneck in this multithreading environment. I suggest making it thread-safe. In the case of I/O that requires synchronization, I would cache the data in a BlockingCollection and then write it to I/O one by one.
The problem is that your total bandwidth is very small (only one job can run at a time) and you want to handle parallel requests. That means queue time could vary wildly. It may not be the best choice to implement your job queue in memory, as that would make your system much more brittle and more difficult to scale out when your business grows.
A traditional, scalable way to architect this would be:
An HTTP service to accept requests, load balanced/redundant, with no session state.
A SQL Server database to persist the requests in a queue, returning a persistent unique job ID.
A Windows service to process the queue, one job at a time, and mark jobs as complete. The worker process for the service would probably be single-threaded.
This solution requires you to choose a web server. A common choice is IIS running ASP.NET. On that platform, each request is guaranteed to be handled in a single-threaded manner (i.e. you don't need to worry about race conditions too much). Due to a feature called thread agility, the request might finish on a different thread, but in the original synchronization context, which means you will probably never notice unless you are debugging and inspecting thread IDs.
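A hedged sketch of what the Windows-service side of that architecture might look like (the table, columns, and MarkComplete helper are made up for illustration; the READPAST hint lets additional workers skip rows another worker has locked):

// requires: using System; using System.Data.SqlClient; using System.Threading;
private static void ProcessQueue(string connectionString, CancellationToken ct)
{
    const string dequeueSql = @"
        UPDATE TOP (1) dbo.ImportJobs WITH (ROWLOCK, READPAST)
        SET Status = 'InProgress'
        OUTPUT inserted.JobId, inserted.Payload
        WHERE Status = 'Queued';";

    while (!ct.IsCancellationRequested)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(dequeueSql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                {
                    Thread.Sleep(1000); // queue is empty; poll again shortly
                    continue;
                }

                var jobId = reader.GetGuid(0);
                var payload = reader.GetString(1);

                ImportFile(payload);                   // the actual single-threaded work
                MarkComplete(connectionString, jobId); // hypothetical helper: sets Status = 'Complete'
            }
        }
    }
}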
Given the constraints and context of our system, this is the implementation we ended up using:
static ImportFileInfo _importInProgressItem = null;

static readonly ConcurrentQueue<ImportFileInfo> ImportQueue =
    new ConcurrentQueue<ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request) {
    UploadFile(request);
    ImportFileSynchronized(request);
}

// Synchronize the file import,
// because the database allows a user to perform only one write at a time.
private void ImportFileSynchronized(ImportFileInfo request) {
    ImportQueue.Enqueue(request);
    do {
        ImportQueue.TryPeek(out var next);
        if (null != Interlocked.CompareExchange(ref _importInProgressItem, next, null)) {
            // Queue processing is already under way in another thread.
            return;
        }

        ImportFile(next);
        ImportQueue.TryDequeue(out _);
        Interlocked.Exchange(ref _importInProgressItem, null);
    }
    while (ImportQueue.Any());
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    ImportQueue.All(waiting => waiting.OperationId != operationId);
This solution works well for the loads we are expecting: a maximum of about 15-20 concurrent PDF file uploads. A batch of up to 15-20 files tends to arrive all at once, and then things go quiet for several hours until the next batch arrives.
Criticism and feedback are most welcome.

How to convert an async call to work as a sync call in C#

I'm a newbie in C#, and I'm developing a small program that uses a third-party network library to send requests.
Suppose there are some requests (just simple strings) stored in a queue, qTasks, and they are handled one by one in the order they were submitted. The queue can be updated during execution, and processing should stop whenever an error is returned.
I could just use a loop to send the requests one by one, but unfortunately SendRequest is an async method with a callback, OnStageChanged, and I need to check the result before sending the next request, once the status is "Done".
I'm now using the following method to handle it:
In the main UI thread:
// Put the request texts in a queue named qTasks, then call goNextTask() to process the requests one by one.
// The queue can be updated by the UI thread at any time; goNextTask will be called periodically to handle pending requests in the queue.
private void goNextTask(bool lastSuccess = true)
{
    if (lastSuccess)
    {
        if (qTasks.Count > 0)
        {
            // continue to the next request
            string requestText = qTasks.Dequeue();
            SendRequest(requestText, OnStageChangeHandler);
        }
        else
        {
            // Report that all requests were sent successfully
        }
    }
    else
    {
        // stop and show the error
    }
}
The callback method OnStageChangeHandler is called by the library whenever the stage changes, and the state will be "Done" when the request has completed.
private void OnStageChangeHandler(object sender, StageChangeEventArgs e)
{
    if (e.newState == SessionStates.Done)
    {
        // check the result here
        bool success = <...>

        // then call goNextTask on the UI thread with the result of the current request
        Application.Current.Dispatcher.BeginInvoke(
            System.Windows.Threading.DispatcherPriority.Normal,
            (Action)(() => goNextTask(success)));
    }
}
Although it works fine now, I think it's a little bit stupid, as it has a somewhat recursive flow (A -> B -> A -> B -> ...).
I learnt that MS has improved web request handling so that it can be written in a synchronous style.
I'd like to know if I can have a wrapper that makes the above async call work like a sync call, so that it can be done in a simple loop like this:
while (qTasks.Count > 0)
{
    if (!sendAndWaitReturn(qTasks.Dequeue()))
    {
        // Report error and quit
    }
}
// all tasks completed
This sendAndWaitReturn method would send the request, wait for the status "Done", and then return the result.
I found some examples that use a control flag to indicate the status of the current request; the callback updates this flag, while the UI thread spins on it:
while (!requestDone);
so that it does not continue to the next request until requestDone is set. But in that case, the UI will be blocked.
Is there any better way to convert the async call to work as a sync call without blocking the UI thread?
The difficulty you're going to run into is that you have conflicting desires. On one hand, you want to avoid blocking the UI thread. On the other hand, you don't want to run things asynchronously, and so you're going to block the UI thread.
You're going to have to pick one, and there's absolutely no reason to keep doing things synchronously (especially in light of blocking the UI thread). If it hurts when you do that, don't do that.
You haven't specified, but I'm guessing that you're starting this processing from a button click event. Make the method invoked by that click event async. For example:
private async void StartProcessing_Click(object sender, EventArgs e)
{
    await Task.Run(() => StartProcessing());
}
There, you've started processing and the UI thread isn't tied up.
The next thing is that you're right: having the event behave in that cyclical manner is silly. The event's job is to notify someone that the state has changed; its goal isn't to manage queue policy. The queue should manage queue policy (or, if you'd rather not abstract that out, the method that processes the queue).
So how would you do that? Well, you've said that SendRequest hands the session object back to the caller. The caller is presumably the one orchestrating queue policy and determining whether or not to call SendRequest again.
Have the caller check the session object for validity and decide whether to keep going based on that.
Additionally, I'm unfamiliar with that particular library, but briefly glancing at the documentation, it looks like there's also a SendRequestAndWait() method with the same signature, and that sounds like it might better meet your needs.
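If neither of those fits, the callback can also be wrapped into an awaitable task with TaskCompletionSource. A minimal sketch, assuming the SendRequest(requestText, handler) and StageChangeEventArgs shapes from the question (a real version would also complete the task on failure states):

private Task<bool> SendRequestAsync(string requestText)
{
    var tcs = new TaskCompletionSource<bool>();

    SendRequest(requestText, (sender, e) =>
    {
        if (e.newState == SessionStates.Done)
        {
            bool success = true; // check the real result here, as in the question
            tcs.TrySetResult(success);
        }
    });

    return tcs.Task;
}

// The loop then reads sequentially without blocking the UI thread:
private async Task ProcessQueueAsync()
{
    while (qTasks.Count > 0)
    {
        if (!await SendRequestAsync(qTasks.Dequeue()))
        {
            // Report error and quit
            return;
        }
    }
    // all tasks completed
}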

A lock designed for asynchronous applications

I have a class with an inner state that can be changed.
These state changes are never simple, and often consist of several asynchronous operations occurring across multiple threads, such as opening a connection and sending some data.
By using a lock and a boolean to indicate whether the state is currently changing, I can ensure that only one operation ever has access to the state at any given time:
lock (thisLock) {
    while (stateChanging)
        Monitor.Wait(thisLock);
    stateChanging = true;
    // now free to go away and do other things
    // while maintaining exclusive access to the inner state
}
This works fine, but it means there is needless blocking in threads waiting to get exclusive access to the state.
So what I envision is a lock based on callbacks, where a state-changing operation does something like this:
sharedLock.ObtainLock(delegate() {
    // we now have exclusive access to the state
    // do some work
    socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.BeginConnect(hostname, port, connectcallback, null);
});

void connectcallback(IAsyncResult result) {
    socket.EndConnect(result);
    isConnected = true;
    sharedLock.ReleaseLock();
}
Is such a concept common? Does it have a name? Am I approaching things incorrectly?
I ended up creating an asynchronous semaphore and it works really really well without feeling hacky in any way.
Usually you would use a Mutex or Semaphore for this purpose. For example, if a semaphore has just one token and one operation has taken that token, then no other operation can execute until the first operation finishes and puts the token back.
In your second code example you just call ObtainLock and ReleaseLock, but the sharedLock does not know which operation called Obtain/Release. This is why ObtainLock usually returns a token that can be either disposed or released when the operation has finished.
IDisposable myLock;

myLock = sharedLock.ObtainLock(delegate() {
    socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.BeginConnect(hostname, port, connectcallback, null);
});

void connectcallback(IAsyncResult result) {
    socket.EndConnect(result);
    isConnected = true;
    myLock.Dispose();
}
The class that implements your sharedLock manages these tokens, and from the state of each token it knows whether it's busy or not. It's really nothing more than a reference counter.
As another alternative, you might use a ManualResetEvent or AutoResetEvent as the token that you return from ObtainLock. When your operation is finished, just call event.Set().
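In modern C#, the same idea is commonly built on SemaphoreSlim.WaitAsync, which waits without blocking a thread. A minimal sketch of an async lock whose token releases on Dispose (the names are illustrative):

using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class AsyncLock
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

    public async Task<IDisposable> ObtainLockAsync()
    {
        await _semaphore.WaitAsync(); // queues the caller without blocking a thread
        return new Releaser(_semaphore);
    }

    private sealed class Releaser : IDisposable
    {
        private SemaphoreSlim _toRelease;

        public Releaser(SemaphoreSlim toRelease) { _toRelease = toRelease; }

        public void Dispose()
        {
            // Release exactly once, even if Dispose is called twice.
            var s = Interlocked.Exchange(ref _toRelease, null);
            s?.Release();
        }
    }
}

A state change then awaits ObtainLockAsync(), carries the returned token across however many asynchronous steps it needs, and disposes it at the end.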

When implementing time-constrained methods, should I abort the worker thread or let it run its course?

I'm currently writing a web services based front-end to an existing application. To do that, I'm using the WCF LOB Adapter SDK, which allows one to create custom WCF bindings that expose external data and operations as web services.
The SDK provides a few interfaces to implement, and some of their methods are time-constrained: the implementation is expected to complete its work within a specified timespan or throw a TimeoutException.
Investigation led me to the question "Implement C# Generic Timeout", which wisely advises using a worker thread. Armed with that knowledge, I can write:
public MetadataRetrievalNode[] Browse(string nodeId, int childStartIndex,
    int maxChildNodes, TimeSpan timeout)
{
    Func<MetadataRetrievalNode[]> work = () => {
        // Return computed metadata...
    };

    IAsyncResult result = work.BeginInvoke(null, null);
    if (result.AsyncWaitHandle.WaitOne(timeout)) {
        return work.EndInvoke(result);
    } else {
        throw new TimeoutException();
    }
}
However, the consensus is not clear about what to do with the worker thread if it times out. One can just forget about it, like the code above does, or one can abort it:
public MetadataRetrievalNode[] Browse(string nodeId, int childStartIndex,
    int maxChildNodes, TimeSpan timeout)
{
    Thread workerThread = null;

    Func<MetadataRetrievalNode[]> work = () => {
        workerThread = Thread.CurrentThread;
        // Return computed metadata...
    };

    IAsyncResult result = work.BeginInvoke(null, null);
    if (result.AsyncWaitHandle.WaitOne(timeout)) {
        return work.EndInvoke(result);
    } else {
        workerThread.Abort();
        throw new TimeoutException();
    }
}
Now, aborting a thread is widely considered wrong. It breaks work in progress, leaks resources, messes with locking, and does not even guarantee the thread will actually stop running. That said, HttpResponse.Redirect() aborts a thread every time it's called, and IIS seems to be perfectly happy with that. Maybe it's prepared to deal with it somehow; my external application probably isn't.
On the other hand, if I let the worker thread run its course, then apart from the increase in resource contention (fewer available threads in the pool), wouldn't memory be leaked anyway, because work.EndInvoke() never gets called? More specifically, wouldn't the MetadataRetrievalNode[] array returned by work remain around forever?
Is this only a matter of choosing the lesser of two evils, or is there a way not to abort the worker thread and still reclaim the memory used by BeginInvoke()?
Well, first off, Thread.Abort is not nearly as bad as it used to be. Several improvements were made to the CLR in 2.0 that fixed some of the major issues with aborting threads. It is still bad, mind you, so avoiding it is the best course of action. If you must resort to aborting threads, then at the very least you should consider tearing down the application domain from which the abort originated. That is going to be incredibly invasive in most scenarios and would not resolve the possible corruption of unmanaged resources.
Aside from that, aborting in this case has other implications. The most important is that you are attempting to abort a ThreadPool thread. I am really not sure what the end result would be, and it could differ depending on which version of the framework is in play.
The best course of action is to have your Func<MetadataRetrievalNode[]> delegate poll a variable at safe points to see if it should terminate execution on its own.
public MetadataRetrievalNode[] Browse(string nodeId, int childStartIndex, int maxChildNodes, TimeSpan timeout)
{
    bool terminate = false;

    Func<MetadataRetrievalNode[]> work =
        () =>
        {
            // Do some work.
            Thread.MemoryBarrier(); // Ensure a fresh read of the terminate variable.
            if (terminate) throw new InvalidOperationException();

            // Do some more work.
            Thread.MemoryBarrier(); // Ensure a fresh read of the terminate variable.
            if (terminate) throw new InvalidOperationException();

            // Return computed metadata...
        };

    IAsyncResult result = work.BeginInvoke(null, null);
    terminate = !result.AsyncWaitHandle.WaitOne(timeout);
    return work.EndInvoke(result); // This blocks until the delegate completes.
}
The tricky part is how to deal with blocking calls inside your delegate. Obviously, you cannot check the terminate flag while the delegate is in the middle of a blocking call. But assuming the blocking call is initiated through one of the canned BCL waiting mechanisms (WaitHandle.WaitOne, Monitor.Wait, etc.), you could use Thread.Interrupt to "poke" it, and that should immediately unblock it.
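A minimal demonstration of that "poke" (a self-contained sketch, not from the original answer): Thread.Interrupt causes a ThreadInterruptedException to be thrown on a thread that is blocked in a wait, sleep, or join.

var gate = new ManualResetEvent(false);

var worker = new Thread(() =>
{
    try
    {
        gate.WaitOne(); // blocks indefinitely
    }
    catch (ThreadInterruptedException)
    {
        // treat the interrupt as a cancellation signal and exit cleanly
    }
});

worker.Start();
Thread.Sleep(100);  // give the worker time to reach the wait
worker.Interrupt(); // "poke" the blocked wait
worker.Join();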
The answer depends on the type of work your worker thread is performing. My guess is that it's working with external resources, like a data connection. Thread.Abort() is indeed evil in any case where threads work with hooks into unmanaged resources, no matter how well wrapped.
Basically, you want your service to give up if it times out. At that point, theoretically, the caller no longer cares how long the thread is going to take; it only cares that it's "too long" and should move on. Barring a bug in the worker thread's running method, it WILL end eventually; the caller just no longer cares when, because it's not waiting any longer.
Now, if the reason the thread timed out is that it's caught in an infinite loop, or has been told to wait forever on some other operation like a service call, then you have a problem that you should fix, but the fix is not to kill the thread. That would be analogous to sending your kid into a grocery store to buy bread while you wait in the car. If your kid keeps spending 15 minutes in the store when you think it should take 5, you eventually get curious, go in, and find out what they're doing. If it's not what you thought they should be doing, like they've spent all that time looking at pots and pans, you "correct" their behavior for future occasions. If you go in and see your kid standing in a long checkout line, then you just wait longer. In neither of these cases should you press the button that detonates the explosive vest they're wearing; that just makes a big mess that will likely interfere with the next kid's ability to run the same errand later.
