I have a controller which returns a large JSON object. If this object does not exist, it is generated and then returned. The generation takes about 5 seconds, and if the client sends the request multiple times, the object gets generated with x times the children. So my question is: is there a way to block the second request until the first one has finished, independent of who sent the request?
Normally I would do it with a singleton, but because I am using scoped services, a singleton does not work here.
Warning: this is very opinionated and maybe not suitable for Stack Overflow, but here it is anyway.
I won't provide a full implementation, but when things take a while to generate, you don't usually spend that time directly in controller code; instead you do something like "start a background task to generate the result, and provide a task id which can be queried in a separate call".
So, my preferred course of action for this would be having two different controller actions:
Generate, which creates the background job, assigns it some id, and returns the id
GetResult, to which you pass the task id; it returns an error code for "job id doesn't exist" or "job isn't finished yet", or a 200 with the result.
This way your clients will need to call both; however, in Generate you can check whether the job is already being created and return the existing job id.
This of course moves the need to "retry and check" to your client; in exchange, you don't leave a connection to the server open during those 5 seconds (which could be multiplied by a number of clients) and you return fast.
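A minimal sketch of that shape could look like this (illustrative only: JobStore, GenerateResult and the routes are names I made up for the example, not an established API):
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;
public class JobsController : ControllerBase
{
    private static readonly ConcurrentDictionary<Guid, Task<object>> JobStore = new();
    private static readonly object JobLock = new();
    private static Guid? _currentJobId;
    [HttpPost("generate")]
    public IActionResult Generate()
    {
        lock (JobLock)
        {
            // If a job is already running, return its id instead of starting a new one.
            if (_currentJobId is Guid runningId
                && JobStore.TryGetValue(runningId, out var running)
                && !running.IsCompleted)
                return Ok(runningId);
            var id = Guid.NewGuid();
            JobStore[id] = Task.Run(() => GenerateResult()); // the ~5 second work
            _currentJobId = id;
            return Ok(id);
        }
    }
    [HttpGet("result/{id}")]
    public IActionResult GetResult(Guid id)
    {
        if (!JobStore.TryGetValue(id, out var job))
            return NotFound();     // "job id doesn't exist"
        if (!job.IsCompleted)
            return Accepted();     // "job isn't finished yet"
        return Ok(job.Result);     // 200 with the result
    }
    private static object GenerateResult() => new object(); // stand-in for the real generation
}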
Otherwise, if you don't care about having your clients wait for a response during those 5 seconds, you could do something as simple as:
if (resultDoesntExist)
{
    resultDoesntExist = false; // you can use locks or Interlocked for the boolean setters instead of plain assignment
    resultIsBeingGenerated = true;
    generateResult(); // <-- this is what takes 5 seconds
    resultIsBeingGenerated = false;
}
while (resultIsBeingGenerated) { await Task.Delay(10); } // <-- other clients will wait here
var result = getResult(); // <-- this should be fast once the result has been created
return result;
Note: those booleans and the actual loop could live in the controller, in the service, or wherever you see fit; just be sure to make them thread-safe in whatever way you find appropriate.
So you basically make the other clients wait until the first one generates the result, with "almost" no CPU load on the server... but with a connection open and a thread-pool thread in use, which is why I just DO NOT recommend this :-)
PS: @Leaky's solution above is also good, but it likewise shifts the responsibility to retry onto the client, and if you are going to do that, I'd go directly with a "background job id" instead of having the first call (the one that generates the result) take 5 seconds. IMO, if it can be avoided, no API action should ever take 5 seconds to return :-)
Do you have an example for Interlocked.CompareExchange?
Sure. I'm definitely not the most knowledgeable person when it comes to multi-threading stuff, but this is quite simple (as you might know, Interlocked has no support for bool, so it's customary to represent it with an integral type):
public class QueryStatus
{
private static int _flag;
// Returns false if the query has already started.
public bool TrySetStarted()
=> Interlocked.CompareExchange(ref _flag, 1, 0) == 0;
public void SetFinished()
=> Interlocked.Exchange(ref _flag, 0);
}
I think it's the safest if you use it like this, with a 'Try' method, which tries to set the value and tells you if it was already set, in an atomic way.
Besides simply adding this (I mean just the field and the methods) to your existing component, you can also use it as a separate component, injected from the IOC container as scoped. Or even injected as a singleton, and then you don't have to use a static field.
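For reference, the registration could look like this (a sketch assuming an ASP.NET Core services collection; either lifetime works while the field is static, and singleton is what lets you drop the static):
// one shared flag per application, even though the service itself is scoped
services.AddScoped<QueryStatus>();
// or: make _flag an instance field and share a single instance instead
services.AddSingleton<QueryStatus>();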
Storing state like this should be good for as long as the application is running, but if the hosted application is recycled due to inactivity, it's obviously lost. Though, that won't happen while a request is still processing, and definitely won't happen in 5 seconds.
(And if you wanted to synchronize between app service instances, you could 'quickly' save a flag to the database, in a transaction with proper isolation level set. Or use e.g. Azure Redis Cache.)
Example solution
As Kit rightly noted, I didn't provide a full solution above.
So, a crude implementation could go like this:
public class SomeQueryService : ISomeQueryService
{
    private static int _hasStartedFlag;
    private static bool TrySetStarted()
        => Interlocked.CompareExchange(ref _hasStartedFlag, 1, 0) == 0;
    private static void SetFinished()
        => Interlocked.Exchange(ref _hasStartedFlag, 0);
    public async Task<(bool couldExecute, object result)> TryExecute()
    {
        if (!TrySetStarted())
            return (couldExecute: false, result: null);
        try
        {
            // ExecuteLongQuery() stands in for the actual slow query.
            var result = await ExecuteLongQuery();
            return (couldExecute: true, result: result);
        }
        finally
        {
            // Reset the flag even if the query throws.
            SetFinished();
        }
    }
}
// In the controller, obviously
[HttpGet()]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
var (couldExecute, result) = await someQueryService.TryExecute();
if (!couldExecute)
{
return new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = "Another request has already started. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
return Ok(result);
}
Of course possibly you'd want to extract the 'blocking' logic from the controller action into somewhere else, for example an action filter. In that case the flag should also go into a separate component that could be shared between the query service and the filter.
General use action filter
I felt bad about my inelegant solution above, and I realized that this problem can be generalized into what is basically a concurrent-connection limiter for an endpoint.
I wrote this small action filter, which can be applied to any endpoint (or several), and which accepts the number of allowed connections:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ConcurrencyLimiterAttribute : ActionFilterAttribute
{
private readonly int _allowedConnections;
private static readonly ConcurrentDictionary<string, int> _connections = new ConcurrentDictionary<string, int>();
public ConcurrencyLimiterAttribute(int allowedConnections = 1)
=> _allowedConnections = allowedConnections;
public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
{
var key = context.HttpContext.Request.Path;
if (_connections.AddOrUpdate(key, 1, (k, v) => ++v) > _allowedConnections)
{
Close(withError: true);
return;
}
try
{
await next();
}
finally
{
Close();
}
void Close(bool withError = false)
{
if (withError)
{
context.Result = new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = $"Maximum {_allowedConnections} simultaneous connections are allowed. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
_connections.AddOrUpdate(key, 0, (k, v) => --v);
}
}
}
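Usage is then just a matter of decorating the endpoint (reusing the earlier action for illustration):
[HttpGet]
[ConcurrencyLimiter(allowedConnections: 1)]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
    // ... same body as before; the limiting concern now lives entirely in the filter
}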
Related
I have a singleton component that manages some information blocks. An information block is a calculated piece of information identified by some characteristics (concretely, an Id and a time period). These calculations may take some seconds. All information blocks are stored in a collection.
Some other consumers use these information blocks. The calculation should start when the first request for this Id and time period comes in. I had the following flow in mind:
The first consumer requests the data identified by Id and time period.
The component checks if the information block already exists
If not: Create the information block, put it into the collection and start the calculation in a background task. If yes: Take it from the collection
After that the flow goes to the information block:
When the calculation is already finished (by a former call), a callback from the consumer is called with the result of the calculation.
When the calculation is still in process, the callback is called when the calculation is finished.
So far, so good.
The critical section comes when a second (or any subsequent) call arrives while the calculation is still running. The idea is that the calculation method holds each consumer's callback and, when the calculation finishes, all of the consumers' callbacks are called.
public class SingletonInformationService
{
private readonly Collection<InformationBlock> blocks = new();
private object syncObject = new();
public void GetInformationBlock(Guid id, TimePeriod timePeriod,
    Action<InformationBlock> callOnFinish)
{
InformationBlock block = null;
lock(syncObject)
{
// check out if the block already exists
block = blocks.SingleOrDefault(b => b.Id ...);
if (block == null)
{
block = new InformationBlock(...);
blocks.Add(block);
}
}
block?.BeginCalculation(callOnFinish);
}
}
public class InformationBlock
{
private Task calculationTask = null;
private CalculationState isCalculating = CalculationState.Unknown;
private List<Action<InformationBlock>> waitingRoom = new();
internal void BeginCalculation(Action<InformationBlock> callOnFinish)
{
if (isCalculating == CalculationState.Finished)
{
callOnFinish(this);
return;
}
else if (isCalculating == CalculationState.IsRunning)
{
waitingRoom.Add(callOnFinish);
return;
}
// add the first call to the waitingRoom
waitingRoom.Add(callOnFinish);
isCalculating = CalculationState.IsRunning;
calculationTask = Task.Run(() => RunCalculation()) // RunCalculation() stands in for the actual calculation and returns its result
.ContinueWith(taskResult =>
{
//.. apply the calculation result to local properties
this.Property1 = taskResult.Result.Property1;
// set the state to mark this instance as complete
isCalculating = CalculationState.Finished;
// inform all calls about the result
waitingRoom.ForEach(c => c(this));
waitingRoom.Clear();
}, TaskScheduler.FromCurrentSynchronizationContext());
}
}
Is this approach a good idea? Do you see any failure modes or possible deadlocks? The method BeginCalculation might be called more than once while the calculation is running. Should I await the calculationTask?
To have deadlocks, you need a cycle: object A depends on object B, which in turn depends on object A again. As I see it, that's not your case, since the InformationBlock class doesn't access the service, but is only called by it.
The lock block is also very small, so it probably won't get you into trouble.
You could look at the thread-safe collections in the C# standard libraries; they could simplify your code.
I suggest a ConcurrentDictionary, because it's faster than iterating over the collection on every request.
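For example, keyed on the Id and time period, the check-then-create step collapses into a single atomic GetOrAdd call (a sketch; the tuple key is my assumption about your identifying characteristics):
private readonly ConcurrentDictionary<(Guid Id, TimePeriod Period), InformationBlock> blocks = new();
public void GetInformationBlock(Guid id, TimePeriod timePeriod,
    Action<InformationBlock> callOnFinish)
{
    // GetOrAdd is atomic per key, so the explicit lock is no longer needed.
    var block = blocks.GetOrAdd((id, timePeriod), _ => new InformationBlock(/* ... */));
    block.BeginCalculation(callOnFinish);
}
Be aware that the value factory passed to GetOrAdd can run more than once under contention (only one result is kept), so keep the constructor cheap and leave the expensive work inside BeginCalculation.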
I need to test whether there's any memory leak in our application and monitor whether memory usage increases too much while processing requests.
I'm trying to develop some code to make multiple simultaneous calls to our API/web service method. This method is not asynchronous and takes some time to complete its operation.
I've done a lot of research about tasks, threads and parallelism, but so far I've had no luck. The problem is that even after trying all the solutions below, the result is always the same: it appears to process only two requests at a time.
Tried:
-> Creating tasks inside a simple for loop and starting them, with and without TaskCreationOptions.LongRunning
-> Creating threads inside a simple for loop and starting them, with and without high priority
-> Creating a list of actions in a simple for loop and starting them using
Parallel.ForEach(list, options, item => item.Invoke())
-> Running directly inside a Parallel.For loop (below)
-> Running TPL methods with and without Options and TaskScheduler
-> Tried with different values for MaxParallelism and maximum threads
-> Checked this post too, but it didn't help either. (Could I be missing something?)
-> Checked some other posts here on Stack Overflow, but they had F# solutions that I don't know how to properly translate to C#. (I've never used F#...)
(Task Scheduler class taken from msdn)
Here's the basic structure that I have:
public class Test
{
Data _data;
String _url;
public Test(Data data, string url)
{
_data = data;
_url = url;
}
public ReturnData Execute()
{
ReturnData returnData;
using(var ws = new WebService())
{
ws.Url = _url;
ws.Timeout = 600000;
var wsReturn = ws.LongRunningMethod(_data);
// Basically convert wsReturn to my method return, with some logic if/else etc
}
return returnData;
}
}
sealed class ThreadTaskScheduler : TaskScheduler, IDisposable
{
// The runtime decides how many tasks to create for the given set of iterations, loop options, and scheduler's max concurrency level.
// Tasks will be queued in this collection
private BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
// Maintain an array of threads. (Feel free to bump up _n.)
private readonly int _n = 100;
private Thread[] _threads;
public ThreadTaskScheduler()
{
_threads = new Thread[_n];
// Create unstarted threads based on the same inline delegate
for (int i = 0; i < _n; i++)
{
_threads[i] = new Thread(() =>
{
// The following loop blocks until items become available in the blocking collection.
// Then one thread is unblocked to consume that item.
foreach (var task in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(task);
}
});
// Start each thread
_threads[i].IsBackground = true;
_threads[i].Start();
}
}
// This method is invoked by the runtime to schedule a task
protected override void QueueTask(Task task)
{
_tasks.Add(task);
}
// The runtime will probe if a task can be executed in the current thread.
// By returning false, we direct all tasks to be queued up.
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
{
return false;
}
public override int MaximumConcurrencyLevel { get { return _n; } }
protected override IEnumerable<Task> GetScheduledTasks()
{
return _tasks.ToArray();
}
// Dispose is not thread-safe with other members.
// It may only be used when no more tasks will be queued
// to the scheduler. This implementation will block
// until all previously queued tasks have completed.
public void Dispose()
{
if (_threads != null)
{
_tasks.CompleteAdding();
for (int i = 0; i < _n; i++)
{
_threads[i].Join();
_threads[i] = null;
}
_threads = null;
_tasks.Dispose();
_tasks = null;
}
}
}
And the test code itself:
private void button2_Click(object sender, EventArgs e)
{
var maximum = 100;
var options = new ParallelOptions
{
MaxDegreeOfParallelism = 100,
TaskScheduler = new ThreadTaskScheduler()
};
// To prevent UI blocking
Task.Factory.StartNew(() =>
{
Parallel.For(0, maximum, options, i =>
{
var data = new Data();
// Fill data
var test = new Test(data, _url); //_url is pre-defined
var ret = test.Execute();
// Check return and display on screen
var now = DateTime.Now.ToString("HH:mm:ss");
var newText = $"{Environment.NewLine}[{now}] - {ret.ReturnId}) {ret.ReturnDescription}";
AppendTextBox(newText, ref resultTextBox);
});
});
}
public void AppendTextBox(string value, ref TextBox textBox)
{
if (InvokeRequired)
{
this.Invoke(new ActionRef<string, TextBox>(AppendTextBox), value, textBox);
return;
}
textBox.Text += value;
}
And the result that I get is basically this:
[10:08:56] - (0) OK
[10:08:56] - (0) OK
[10:09:23] - (0) OK
[10:09:23] - (0) OK
[10:09:49] - (0) OK
[10:09:50] - (0) OK
[10:10:15] - (0) OK
[10:10:16] - (0) OK
etc
As far as I know there's no limitation on the server side. I'm relatively new to the Parallel/Multitasking world. Is there any other way to do this? Am I missing something?
(I simplified all the code for clarity, and I believe the provided code is enough to picture the mentioned scenarios. I also didn't post the application code, but it's a simple WinForms screen just to call and show results. If any code is somehow relevant, please let me know; I can edit and post it too.)
Thanks in advance!
EDIT1: I checked on the server logs that it's receiving the requests two by two, so it's indeed something related to sending them, not receiving.
Could it be a network problem/limitation related to how the framework manages the requests/connections? Or something with the network at all (unrelated to .net)?
EDIT2: Forgot to mention, it's a SOAP webservice.
EDIT3: One of the properties that I send (inside data) needs to change for each request.
EDIT4: I noticed that there's always an interval of ~25 seconds between each pair of requests, if that's relevant.
I would recommend not to reinvent the wheel and just use one of the existing solutions:
Most obvious choice: if your Visual Studio license allows, you can use the MS Load Testing Framework; most likely you won't even have to write a single line of code: How to: Create a Web Service Test
SoapUI is a free and open source web services testing tool; it has some limited load testing capabilities
If for some reason SoapUI is not suitable (i.e. you need to run load tests in clustered mode from several hosts, or you need more advanced reporting), you can use Apache JMeter - a free and open source multiprotocol load testing tool which supports web services load testing as well.
A good way to create load tests without writing your own project is to use a service like https://loader.io/targets
It's free for small tests; you can POST parameters and headers, and you get nice reporting.
Isn't the "two requests at a time" behavior simply the result of the default maxconnection=2 limit in connectionManagement?
<configuration>
<system.net>
<connectionManagement>
<add address = "http://www.contoso.com" maxconnection = "4" />
<add address = "*" maxconnection = "2" />
</connectionManagement>
</system.net>
</configuration>
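The same limit can also be raised in code at startup, without touching the .config file:
// Allow more simultaneous connections per endpoint; the .NET Framework
// default for HTTP is 2, which would explain the pairs you observed.
System.Net.ServicePointManager.DefaultConnectionLimit = 100;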
My favorite load testing library is NBomber. It has an easy and powerful API, realistic user simulations, and provides you with nice HTML reports about latency and requests per second.
I used it to test my API and wrote an article about how I did it.
I'm new to C# and trying to understand how to work with Lazy.
I need to handle concurrent requests by waiting for the result of an already-running operation. Requests for data may come in simultaneously with the same or different credentials.
For each unique set of credentials there can be at most one GetDataInternal call in progress, with the result from that call returned to all queued waiters when it is ready.
private readonly ConcurrentDictionary<Credential, Lazy<Data>> Cache
= new ConcurrentDictionary<Credential, Lazy<Data>>();
public Data GetData(Credential credential)
{
// This instance will be thrown away if a cached
// value with our "credential" key already exists.
Lazy<Data> newLazy = new Lazy<Data>(
() => GetDataInternal(credential),
LazyThreadSafetyMode.ExecutionAndPublication
);
Lazy<Data> lazy = Cache.GetOrAdd(credential, newLazy);
bool added = ReferenceEquals(newLazy, lazy); // If true, we won the race.
Data data;
try
{
// Wait for the GetDataInternal call to complete.
data = lazy.Value;
}
finally
{
// Only the thread which created the cache value
// is allowed to remove it, to prevent races.
if (added) {
Cache.TryRemove(credential, out lazy);
}
}
return data;
}
Is that right way to use Lazy or my code is not safe?
Update:
Would it be a good idea to use MemoryCache instead of ConcurrentDictionary? If so, how do I create the key value, given that MemoryCache.Default.AddOrGetExisting() takes a string key?
This is correct. This is a standard pattern (except for the removal) and it's a really good cache because it prevents cache stampeding.
I'm not sure you want to remove from the cache when the computation is done because the computation will be redone over and over that way. If you don't need the removal you can simplify the code by basically deleting the second half.
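Concretely, dropping the removal leaves roughly this (a sketch of the simplification described above):
public Data GetData(Credential credential)
{
    // GetOrAdd plus Lazy: at most one GetDataInternal call per credential,
    // and every concurrent caller waits on the same pending computation.
    Lazy<Data> lazy = Cache.GetOrAdd(credential,
        c => new Lazy<Data>(() => GetDataInternal(c),
                            LazyThreadSafetyMode.ExecutionAndPublication));
    return lazy.Value;
}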
Note that Lazy has a problem in the case of an exception: the exception is stored and the factory will never be re-executed. The problem persists forever (until a human restarts the app). In my mind this makes Lazy completely unsuitable for production use in most cases.
This means that a transient error such as a network issue can render the app unavailable permanently.
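A small demonstration of the pitfall (the factory body is illustrative):
var lazy = new Lazy<int>(() =>
{
    Console.WriteLine("factory runs"); // printed only once
    throw new InvalidOperationException("transient network failure");
}, LazyThreadSafetyMode.ExecutionAndPublication);
try { _ = lazy.Value; } catch (InvalidOperationException) { } // factory runs, exception cached
try { _ = lazy.Value; } catch (InvalidOperationException) { } // same cached exception; the factory is never retried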
This answer is directed at the updated part of the original question. See @usr's answer regarding thread-safety with Lazy<T> and the potential pitfalls.
I would like to know how to avoid using ConcurrentDictionary<TKey, TValue> and start
using MemoryCache? How to implement
MemoryCache.Default.AddOrGetExisting()?
If you're looking for a cache which has a mechanism for auto expiry, then MemoryCache is a good choice if you don't want to implement the mechanics yourself.
In order to utilize MemoryCache which forces a string representation for a key, you'll need to create a unique string representation of a credential, perhaps a given user id or a unique username?
If you can, create an override of ToString which returns your unique identifier, or simply use said property, and utilize MemoryCache like this:
public class Credential
{
public Credential(int userId)
{
UserId = userId;
}
public int UserId { get; private set; }
}
And now your method will look like this:
private const int EvictionIntervalMinutes = 10;
public Data GetData(Credential credential)
{
Lazy<Data> newLazy = new Lazy<Data>(
() => GetDataInternal(credential), LazyThreadSafetyMode.ExecutionAndPublication);
CacheItemPolicy evictionPolicy = new CacheItemPolicy
{
AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(EvictionIntervalMinutes)
};
var result = MemoryCache.Default.AddOrGetExisting(
new CacheItem(credential.UserId.ToString(), newLazy), evictionPolicy);
return result != null ? ((Lazy<Data>)result.Value).Value : newLazy.Value;
}
MemoryCache provides you with a thread-safe implementation; this means that two threads accessing AddOrGetExisting will only cause a single cache item to be added or retrieved. Further, Lazy<T> with ExecutionAndPublication guarantees only a single unique invocation of the factory method.
I have a slow and expensive method that returns some data:
public Data GetData(){...}
I don't want to wait until this method executes. Rather, I want to return cached data immediately.
I have a class CachedData that contains one property Data cachedData.
So I want to create another method, public CachedData GetCachedData(), that initiates a new task (calling GetData inside it), immediately returns the cached data, and updates the cache once the task finishes.
GetCachedData() needs to be thread safe, because multiple requests will call this method.
I will have a light ping ("has anything changed?") each minute, and if it returns true (cachedData != currentData) I will call GetCachedData().
I'm new in C#. Please, help me to implement this method.
I'm using .net framework 4.5.2
The basic idea is clear:
You have a Data property which is wrapper around an expensive function call.
In order to have some response immediately the property holds a cached value and performs updating in the background.
No need for an event when the updater is done because you poll, for now.
That seems like a straight-forward design. At some point you may want to use events, but that can be added later.
Depending on the circumstances it may be necessary to make access to the property thread-safe. I think that if the Data cache is a simple reference and no other data is updated together with it, a lock is not necessary, but you may want to declare the reference volatile so that the reading thread does not rely on a stale cached (ha!) version. This post seems to have good links which discuss the issues.
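To make that concrete, one possible shape under those assumptions (Data, GetData and all member names here are illustrative):
public class CachedDataProvider
{
    private volatile Data _cachedData; // volatile: readers always see the latest published reference
    private int _refreshing;           // 0 = idle, 1 = refresh in progress
    public Data GetCachedData()
    {
        // Start a background refresh unless one is already running.
        if (Interlocked.CompareExchange(ref _refreshing, 1, 0) == 0)
        {
            Task.Run(() =>
            {
                try { _cachedData = GetData(); } // the slow, expensive call
                finally { Interlocked.Exchange(ref _refreshing, 0); }
            });
        }
        return _cachedData; // may be null until the first refresh completes
    }
    private Data GetData() { /* slow, expensive work */ return new Data(); }
}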
If you never call GetCachedData at the same time, you may not need the lock. If data is null (as it certainly is on the first run), we wait for the long method to finish its work.
public class SlowClass
{
    // Shared and initialized once. (Assigning the static lock in the instance
    // constructor would replace it on every new instance and race with any
    // thread currently holding it.)
    private static readonly object _lock = new object();
    private static Data _cachedData;
    public void GetCachedData()
    {
        var task = new Task(DoStuffLongRun);
        task.Start();
        if (_cachedData == null)
            task.Wait();
    }
    public Data GetData()
    {
        if (_cachedData == null)
            GetCachedData();
        return _cachedData;
    }
    private void DoStuffLongRun()
    {
        lock (_lock)
        {
            Console.WriteLine("Lock entered");
            Thread.Sleep(5000); // do the long-running work
            _cachedData = new Data();
        }
    }
}
I have tested it in a console application.
static void Main(string[] args)
{
var mySlow = new SlowClass();
var mySlow2 = new SlowClass();
mySlow.GetCachedData();
for (int i = 0; i < 5; i++)
{
Console.WriteLine(i);
mySlow.GetData();
mySlow2.GetData();
}
mySlow.GetCachedData();
Console.Read();
}
Maybe you can use the MemoryCache class,
as explained here in MSDN
I am building a class that uses a parallel loop to access messages from a message queue. In order to explain my issue, I created a simplified version of the code:
public class Worker
{
private IMessageQueue mq;
public Worker(IMessageQueue mq)
{
this.mq = mq;
}
public int Concurrency
{
get
{
return 5;
}
}
public void DoWork()
{
int totalFoundMessage = 0;
do
{
// reset for every loop
totalFoundMessage = 0;
Parallel.For<int>(
0,
this.Concurrency,
() => 0,
(i, loopState, localState) =>
{
Message data = this.mq.GetFromMessageQueue("MessageQueueName");
if (data != null)
{
return localState + 1;
}
else
{
return localState + 0;
}
},
localState =>
{
Interlocked.Add(ref totalFoundMessage, localState);
});
}
while (totalFoundMessage >= this.Concurrency);
}
}
The idea is to give the worker class a concurrency value that controls the parallel loop. If, after a loop, the number of messages retrieved from the queue equals the concurrency value, I assume there are potentially more messages in the queue and continue fetching until the count comes back smaller than the concurrency. The TPL code is also inspired by the TPL Data Parallelism Issue post.
I have the interface to message queue and message object.
public interface IMessageQueue
{
Message GetFromMessageQueue(string queueName);
}
public class Message
{
}
I then created my unit test code, using Moq to mock the IMessageQueue interface:
[TestMethod()]
public void DoWorkTest()
{
Mock<IMessageQueue> mqMock = new Mock<IMessageQueue>();
Message data = new Message();
Worker w = new Worker(mqMock.Object);
int callCounter = 0;
int messageNumber = 11;
mqMock.Setup(x => x.GetFromMessageQueue("MessageQueueName")).Returns(() =>
{
callCounter++;
if (callCounter < messageNumber)
{
return data;
}
else
{
// simulate MSMQ's behavior last call to empty queue returns null
return (Message)null;
}
}
);
w.DoWork();
int expectedCallTimes = w.Concurrency * (messageNumber / w.Concurrency);
if (messageNumber % w.Concurrency > 0)
{
expectedCallTimes += w.Concurrency;
}
mqMock.Verify(x => x.GetFromMessageQueue("MessageQueueName"), Times.Exactly(expectedCallTimes));
}
I used the idea from Moq of setting up a return value based on the number of calls to build a call-count-dependent response.
During unit testing I noticed that the result is unstable: if you run the test multiple times, it passes in most cases, but occasionally it fails for various reasons.
I have no clue what causes this situation and am looking for some input from you. Thanks
The problem is that your mocked GetFromMessageQueue() is not thread-safe, yet you're calling it from multiple threads at the same time. ++ is an inherently thread-unsafe operation.
Instead, you should use locking or Interlocked.Increment().
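Applied to the setup in the question, that could look like this (a sketch reusing the test's callCounter, messageNumber and data):
mqMock.Setup(x => x.GetFromMessageQueue("MessageQueueName")).Returns(() =>
{
    // Atomic increment: each concurrent caller observes a distinct count.
    int current = Interlocked.Increment(ref callCounter);
    return current < messageNumber ? data : (Message)null;
});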
Also, in your code you're likely not going to benefit from the parallelism, because starting and stopping the Parallel.For() loop has some overhead. A better way would be to have a while (or do-while) inside the parallel loop, not the other way around.
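A sketch of that restructuring, applied to the question's Worker (same IMessageQueue as in the question):
public void DoWork()
{
    Parallel.For(0, this.Concurrency, _ =>
    {
        // Each worker drains the queue until it is empty, instead of
        // restarting the whole parallel loop once per batch of messages.
        Message data;
        while ((data = this.mq.GetFromMessageQueue("MessageQueueName")) != null)
        {
            // process the message
        }
    });
}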
My approach would be to restructure. When testing things like timing or concurrency, it is usually prudent to abstract your calls (in this case, the parallel loop) into a separate class that accepts a number of delegates. You can then test that the correct calls are being made to the new class. Then, because the new class is a lot simpler (only a single parallel call) and contains no logic, you can leave it untested.
I advocate not testing in this case because, unless you are working on something super-critical (life support systems, airplanes, etc.), it becomes more trouble than it's worth. Trust that the framework will execute the parallel query as expected. You should only test those things which make sense to test and which provide value to your project or client.