Request/response pattern with TPL Dataflow - C#

We need a request/response pattern while using the TPL Dataflow library. Our problem is that we have a .NET Core API that calls a dependent service, and the dependent service limits concurrent requests. Our API does not limit concurrent requests, so we could receive thousands of requests at a time; in that case the dependent service would reject requests after reaching its limit. Therefore we implemented a BufferBlock<T> and a TransformBlock<TIn, TOut>. The performance is solid: we tested our API front end with 1000 users issuing 100 requests/sec with zero problems. The buffer block buffers requests, and the transform block executes our desired number of requests in parallel. The dependent service receives our requests and responds, and we return that response from the transform block delegate. Our problem is that the buffer block and the transform block are disconnected, which means requests and responses are not kept in sync. We are experiencing an issue where a request will receive the response of another requester (please see the code below).
Specific to the code below, our problem lies in the GetContent method. That method is called from a service layer in our API, which ultimately is called from our controller. The code below and the service layer are singletons. The SendAsync to the buffer is disconnected from the transform block's ReceiveAsync, so arbitrary responses are returned, not necessarily the response to the request that was issued.
So our question is: is there a way, using the dataflow blocks, to correlate requests and responses? The ultimate goal is that a request comes into our API, gets issued to the dependent service, and its response is returned to the client. The code for our dataflow implementation is below.
public class HttpClientWrapper : IHttpClientManager
{
    private readonly IConfiguration _configuration;
    private readonly ITokenService _tokenService;
    private HttpClient _client;
    private BufferBlock<string> _bufferBlock;
    private TransformBlock<string, JObject> _actionBlock;

    public HttpClientWrapper(IConfiguration configuration, ITokenService tokenService)
    {
        _configuration = configuration;
        _tokenService = tokenService;
        _bufferBlock = new BufferBlock<string>();
        var executionDataFlowBlockOptions = new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = 10
        };
        var dataFlowLinkOptions = new DataflowLinkOptions
        {
            PropagateCompletion = true
        };
        _actionBlock = new TransformBlock<string, JObject>(t => ProcessRequest(t),
            executionDataFlowBlockOptions);
        _bufferBlock.LinkTo(_actionBlock, dataFlowLinkOptions);
    }

    public void Connect()
    {
        _client = new HttpClient();
        _client.DefaultRequestHeaders.Add("x-ms-client-application-name", "ourappname");
    }

    public async Task<JObject> GetContent(string request)
    {
        await _bufferBlock.SendAsync(request);
        var result = await _actionBlock.ReceiveAsync();
        return result;
    }

    private async Task<JObject> ProcessRequest(string request)
    {
        if (_client == null)
        {
            Connect();
        }
        try
        {
            var accessToken = await _tokenService.GetTokenAsync(_configuration);
            var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post,
                new Uri($"https://{_configuration.Uri}"));
            // add the headers
            httpRequestMessage.Headers.Add("Authorization", $"Bearer {accessToken}");
            // add the request body
            httpRequestMessage.Content = new StringContent(request, Encoding.UTF8,
                "application/json");
            var postRequest = await _client.SendAsync(httpRequestMessage);
            var response = await postRequest.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<JObject>(response);
        }
        catch (Exception ex)
        {
            // log error
            return new JObject();
        }
    }
}

What you have to do is tag each incoming item with an id so that you can correlate the data input to the result output. Here's an example of how to do that:
namespace ConcurrentFlows.DataflowJobs {
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    /// <summary>
    /// A generic interface defining that:
    /// for a specified input type => an awaitable result is produced.
    /// </summary>
    /// <typeparam name="TInput">The type of data to process.</typeparam>
    /// <typeparam name="TOutput">The type of data the consumer expects back.</typeparam>
    public interface IJobManager<TInput, TOutput> {
        Task<TOutput> SubmitRequest(TInput data);
    }

    /// <summary>
    /// A TPL-Dataflow based job manager.
    /// </summary>
    /// <typeparam name="TInput">The type of data to process.</typeparam>
    /// <typeparam name="TOutput">The type of data the consumer expects back.</typeparam>
    public class DataflowJobManager<TInput, TOutput> : IJobManager<TInput, TOutput> {

        /// <summary>
        /// It is anticipated that jobHandler is an injected
        /// singleton instance of a Dataflow based 'calculator', though this implementation
        /// does not depend on it being a singleton.
        /// </summary>
        /// <param name="jobHandler">A singleton Dataflow block through which all jobs are processed.</param>
        public DataflowJobManager(IPropagatorBlock<KeyValuePair<Guid, TInput>, KeyValuePair<Guid, TOutput>> jobHandler) {
            if (jobHandler == null) { throw new ArgumentException("Argument cannot be null.", "jobHandler"); }
            this.JobHandler = jobHandler;
            if (!alreadyLinked) {
                JobHandler.LinkTo(ResultHandler, new DataflowLinkOptions() { PropagateCompletion = true });
                alreadyLinked = true;
            }
        }

        private static bool alreadyLinked = false;

        /// <summary>
        /// Submits the request to the JobHandler and asynchronously awaits the result.
        /// </summary>
        /// <param name="data">The input data to be processed.</param>
        /// <returns></returns>
        public async Task<TOutput> SubmitRequest(TInput data) {
            var taggedData = TagInputData(data);
            var job = CreateJob(taggedData);
            Jobs.TryAdd(job.Key, job.Value);
            await JobHandler.SendAsync(taggedData);
            return await job.Value.Task;
        }

        private static ConcurrentDictionary<Guid, TaskCompletionSource<TOutput>> Jobs {
            get;
        } = new ConcurrentDictionary<Guid, TaskCompletionSource<TOutput>>();

        private static ExecutionDataflowBlockOptions Options {
            get;
        } = GetResultHandlerOptions();

        private static ITargetBlock<KeyValuePair<Guid, TOutput>> ResultHandler {
            get;
        } = CreateReplyHandler(Options);

        private IPropagatorBlock<KeyValuePair<Guid, TInput>, KeyValuePair<Guid, TOutput>> JobHandler {
            get;
        }

        private KeyValuePair<Guid, TInput> TagInputData(TInput data) {
            var id = Guid.NewGuid();
            return new KeyValuePair<Guid, TInput>(id, data);
        }

        private KeyValuePair<Guid, TaskCompletionSource<TOutput>> CreateJob(KeyValuePair<Guid, TInput> taggedData) {
            var id = taggedData.Key;
            var jobCompletionSource = new TaskCompletionSource<TOutput>();
            return new KeyValuePair<Guid, TaskCompletionSource<TOutput>>(id, jobCompletionSource);
        }

        private static ExecutionDataflowBlockOptions GetResultHandlerOptions() {
            return new ExecutionDataflowBlockOptions() {
                MaxDegreeOfParallelism = Environment.ProcessorCount,
                BoundedCapacity = 1000
            };
        }

        private static ITargetBlock<KeyValuePair<Guid, TOutput>> CreateReplyHandler(ExecutionDataflowBlockOptions options) {
            return new ActionBlock<KeyValuePair<Guid, TOutput>>((result) => {
                ReceiveOutput(result);
            }, options);
        }

        private static void ReceiveOutput(KeyValuePair<Guid, TOutput> result) {
            var jobId = result.Key;
            TaskCompletionSource<TOutput> jobCompletionSource;
            if (!Jobs.TryRemove(jobId, out jobCompletionSource)) {
                throw new InvalidOperationException($"The jobId: {jobId} was not found.");
            }
            var resultValue = result.Value;
            jobCompletionSource.SetResult(resultValue);
        }
    }
}
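For illustration only (not part of the original answer), a minimal usage sketch could look like the following. It assumes the injected jobHandler is a TransformBlock that performs the HTTP call from the question's ProcessRequest method and carries the Guid tag through unchanged:
var jobHandler = new TransformBlock<KeyValuePair<Guid, string>, KeyValuePair<Guid, JObject>>(
    async pair =>
    {
        // ProcessRequest is assumed to be the HTTP-calling method from the question
        var result = await ProcessRequest(pair.Value);
        return new KeyValuePair<Guid, JObject>(pair.Key, result);
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 10 });

var jobManager = new DataflowJobManager<string, JObject>(jobHandler);

// GetContent then awaits the result correlated to this specific request
public Task<JObject> GetContent(string request) => jobManager.SubmitRequest(request);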
Also see this answer for reference.

Simple throttling is not a particularly enticing use case for the TPL Dataflow library; using a SemaphoreSlim instead seems simpler and more attractive. But if you want more features, such as enforcing a minimum duration for each request, or having a way to wait for all pending requests to complete, then the TPL Dataflow has something to offer that the SemaphoreSlim doesn't. The basic idea is to avoid passing naked input values to the block and trying later to associate them with the produced results. It is much safer to create tasks immediately upon request, send the tasks to an ActionBlock<Task>, and let the block activate and await these tasks asynchronously at its specified MaxDegreeOfParallelism. This way an input value and its result are unambiguously tied together forever.
public class ThrottledExecution<T>
{
    private readonly ActionBlock<Task<Task<T>>> _actionBlock;
    private readonly CancellationToken _cancellationToken;

    public ThrottledExecution(int concurrencyLevel, int minDurationMilliseconds = 0,
        CancellationToken cancellationToken = default)
    {
        if (minDurationMilliseconds < 0) throw new ArgumentOutOfRangeException();
        _actionBlock = new ActionBlock<Task<Task<T>>>(async task =>
        {
            try
            {
                var delay = Task.Delay(minDurationMilliseconds, cancellationToken);
                task.RunSynchronously();
                await task.Unwrap().ConfigureAwait(false);
                await delay.ConfigureAwait(false);
            }
            catch { } // Ignore exceptions (errors are propagated through the task)
        }, new ExecutionDataflowBlockOptions()
        {
            MaxDegreeOfParallelism = concurrencyLevel,
            CancellationToken = cancellationToken,
        });
        _cancellationToken = cancellationToken;
    }

    public Task<T> Run(Func<Task<T>> function)
    {
        // Create a cold task (the function will be invoked later)
        var task = new Task<Task<T>>(function, _cancellationToken);
        var accepted = _actionBlock.Post(task);
        if (!accepted)
        {
            _cancellationToken.ThrowIfCancellationRequested();
            throw new InvalidOperationException(
                "The component has been marked as complete.");
        }
        return task.Unwrap();
    }

    public void Complete() => _actionBlock.Complete();
    public Task Completion => _actionBlock.Completion;
}
Usage example:
private ThrottledExecution<JObject> throttledExecution
    = new ThrottledExecution<JObject>(concurrencyLevel: 10);

public Task<JObject> GetContent(string request)
{
    return throttledExecution.Run(() => ProcessRequest(request));
}

I appreciate the answer provided by JSteward. His is a perfectly acceptable approach; however, I ended up doing this with a SemaphoreSlim. SemaphoreSlim provides two things that make this a powerful solution. First, it has a constructor overload that takes a count, which is the number of concurrent callers that can get past the semaphore's waiting mechanism. Second, that waiting mechanism is provided by the WaitAsync method. With the approach below, where the Worker class is a singleton, concurrent requests come in, at most 10 at a time execute the HTTP request, and the responses are all returned to the correct requesters. So an implementation might look like the following:
public class Worker : IWorker
{
    private readonly IHttpClientManager _httpClient;
    private readonly ITokenService _tokenService;
    private readonly TimeSeriesConfiguration _timeSeriesConfiguration; // assumed injected; used by GetTokenAsync below
    private readonly SemaphoreSlim _semaphore;

    public Worker(IHttpClientManager httpClient, ITokenService tokenService,
        TimeSeriesConfiguration timeSeriesConfiguration)
    {
        _httpClient = httpClient;
        _tokenService = tokenService;
        _timeSeriesConfiguration = timeSeriesConfiguration;
        // we want to limit the number of concurrent items here
        _semaphore = new SemaphoreSlim(10);
    }

    public async Task<JObject> ProcessRequestAsync(string request, string route)
    {
        var cancellationTokenSource = new CancellationTokenSource();
        cancellationTokenSource.CancelAfter(30000);
        // wait outside the try/finally so the semaphore is only released after a successful wait
        await _semaphore.WaitAsync(cancellationTokenSource.Token);
        try
        {
            var accessToken = await _tokenService.GetTokenAsync(
                _timeSeriesConfiguration.TenantId,
                _timeSeriesConfiguration.ClientId,
                _timeSeriesConfiguration.ClientSecret);
            var httpResponseMessage = await _httpClient.SendAsync(new HttpClientRequest
            {
                Method = HttpMethod.Post,
                Uri = $"https://someuri/someroute",
                Token = accessToken,
                Content = request
            });
            var response = await httpResponseMessage.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<JObject>(response);
        }
        catch (Exception ex)
        {
            // do some logging
            throw;
        }
        finally
        {
            _semaphore.Release();
        }
    }
}

Related

How to have only one thread for fire and forget task in asp.net webapi?

In my ASP.NET Web API (.NET Framework 4.7.2) I need to call Redis (using StackExchange.Redis) to delete a key in a fire-and-forget way, and I am running some stress tests.
I am comparing the various ways to get the maximum speed:
I have already tested executing the command with the FireAndForget flag,
I have also measured a simple awaited command to Redis.
And I am now searching for a way to collect the commands received in a 15 ms window and execute them all in one go by pipelining them.
I first tried to use Task.Run with an action that calls Redis, but the problem I am observing is that under stress the memory of my Web API keeps climbing.
The memory fills up with System.Threading.IThreadPoolWorkItem[] objects, with the following code:
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);
    Task.Run(async () => {
        await db.StringSetAsync($"BASKET_TO_DELETE_{cpt}", cpt.ToString())
            .ConfigureAwait(false);
    });
    return response;
}
So I think that under stress my API keeps enqueuing background tasks in memory and executes them one after the other as fast as it can, but more slowly than the requests come in...
So I am searching for a way to have only one long-lived background thread running with the ASP.NET Web API that could capture the commands to send to Redis and execute them by pipelining them.
I was thinking of running a background task by implementing the IHostedService interface, but it seems that in this case the background task would not share any state with my current HTTP request. So implementing an IHostedService would be handy for a scheduled background task, but not in my case, or I do not know how...
Based on the StackExchange.Redis documentation, you can use the CommandFlags.FireAndForget flag:
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);
    db.StringSet($"BASKET_TO_DELETE_{cpt}", cpt.ToString(), flags: CommandFlags.FireAndForget);
    return response;
}
Edit 1: another solution, based on the comments
You can use a producer/consumer approach, batching the commands on a background worker. Something like this should work:
public class MessageBatcher
{
    private readonly IDatabase target;
    private readonly BlockingCollection<Action<IDatabaseAsync>> tasks = new();
    private Task worker;

    public MessageBatcher(IDatabase target) => this.target = target;

    public void AddMessage(Action<IDatabaseAsync> task) => tasks.Add(task);

    public IDisposable Start(int batchSize)
    {
        var cancellationTokenSource = new CancellationTokenSource();
        worker = Task.Factory.StartNew(state =>
        {
            var count = 0;
            var tokenSource = (CancellationTokenSource)state;
            var box = new StrongBox<IBatch>(target.CreateBatch());
            // flush whatever remains in the current batch once cancellation is requested
            tokenSource.Token.Register(b => ((StrongBox<IBatch>)b).Value.Execute(), box);
            try
            {
                foreach (var task in tasks.GetConsumingEnumerable(tokenSource.Token))
                {
                    var batch = box.Value;
                    task(batch);
                    if (++count == batchSize)
                    {
                        batch.Execute();
                        box.Value = target.CreateBatch();
                        count = 0;
                    }
                }
            }
            catch (OperationCanceledException)
            {
                // expected when the batcher is disposed; lets the worker task complete cleanly
            }
        }, cancellationTokenSource, cancellationTokenSource.Token, TaskCreationOptions.LongRunning, TaskScheduler.Current);
        return new Disposer(worker, cancellationTokenSource);
    }

    private class Disposer : IDisposable
    {
        private readonly Task worker;
        private readonly CancellationTokenSource tokenSource;

        public Disposer(Task worker, CancellationTokenSource tokenSource) => (this.worker, this.tokenSource) = (worker, tokenSource);

        public void Dispose()
        {
            tokenSource.Cancel();
            worker.Wait();
            tokenSource.Dispose();
        }
    }
}
Usage:
private readonly MessageBatcher batcher;

ctor(MessageBatcher batcher) // ensure that the passed batcher is a singleton and has already been started
{
    this.batcher = batcher;
}
[HttpPost]
[Route("api/values/testpostfireforget")]
public ApiResult<int> DeleteFromBasketId([FromBody] int basketId)
{
    var response = new DeleteFromBasketResponse<int>();
    var cpt = Interlocked.Increment(ref counter);
    batcher.AddMessage(db => db.StringSetAsync($"BASKET_TO_DELETE_{cpt}", cpt.ToString(), flags: CommandFlags.FireAndForget));
    return response;
}
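The batcher above flushes on a count threshold only, while the question also asked for a 15 ms collection window. As a minimal sketch (assuming the same fields as MessageBatcher above), the consuming loop could be replaced with a TryTake that uses a timeout, so a partially filled batch is executed when no new command arrives within the window:
// hypothetical replacement for the worker loop inside Start
var count = 0;
var batch = target.CreateBatch();
while (!tokenSource.Token.IsCancellationRequested)
{
    // wait up to 15 ms for the next command
    if (tasks.TryTake(out var task, TimeSpan.FromMilliseconds(15)))
    {
        task(batch);
        if (++count == batchSize)
        {
            batch.Execute();
            batch = target.CreateBatch();
            count = 0;
        }
    }
    else if (count > 0)
    {
        // the window elapsed with no new command: flush what we have
        batch.Execute();
        batch = target.CreateBatch();
        count = 0;
    }
}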

Distributing Requests In Parallel

I have the following scenario:
I have an endpoint that saves requests in a list or queue in memory and immediately returns a success response to the consumer. This requirement is critical: the consumer should not wait for the responses; it will get them from a different endpoint if it needs to. So this endpoint must return as quickly as possible after saving the request message in memory.
Another thread will distribute these requests to other endpoints and save the responses in memory as well.
What I have done so far:
I created a controller API to save these requests in memory. I saved them in a static request list, like below:
public static class RequestList
{
    public static event EventHandler<RequestEventArgs> RequestReceived;

    private static List<DistributionRequest> Requests { get; set; } = new List<DistributionRequest>();

    public static int RequestCount { get => RequestList.Requests.Count; }

    public static DistributionRequest Add(DistributionRequest request)
    {
        request.RequestId = Guid.NewGuid().ToString();
        RequestList.Requests.Add(request);
        OnRequestReceived(new RequestEventArgs { Request = request });
        return request;
    }

    public static bool Remove(DistributionRequest request) => Requests.Remove(request);

    private static void OnRequestReceived(RequestEventArgs e)
    {
        RequestReceived?.Invoke(null, e);
    }
}

public class RequestEventArgs : EventArgs
{
    public DistributionRequest Request { get; set; }
}
Another class subscribes to the event on that static class, and I create a new task to make the background web requests in order to achieve item 2 stated above.
private void RequestList_RequestReceived(object sender, RequestEventArgs e)
{
    _logger.LogInformation($"Request Id: {e.Request.RequestId}, New request received");
    Task.Factory.StartNew(() => Distribute(e.Request));
    _logger.LogInformation($"Request Id: {e.Request.RequestId}, New task created for the new request");
    //await Distribute(e.Request);
}

public async Task<bool> Distribute(DistributionRequest request)
{
    //Some logic running here to send post requests to different endpoints
    //and to save the results in memory
}
And here is my controller method:
[HttpPost]
public IActionResult Post([FromForm] DistributionRequest request)
{
    var response = RequestList.Add(request);
    return Ok(new DistributionResponse { Succeeded = true, RequestId = response.RequestId });
}
I tried that approach, but it did not work as I expected. It should return within milliseconds, since I am not waiting for the responses, but it seems to wait for something, and the waiting time increases with every single request.
What am I doing wrong? Or do you have a better idea? How can I achieve my goal?
Based on your example code, I tried to implement it without "eventing", and I got much better request times. I cannot say whether this is related to your implementation or to the eventing itself; for that you would have to do profiling.
I did it this way
RequestsController
Just like you had it in your example. Take the request and add it to the requests list.
[Route("requests")]
public class RequestsController : ControllerBase
{
private readonly RequestManager _mgr;
public RequestsController(RequestManager mgr)
{
_mgr = mgr;
}
[HttpPost]
public IActionResult AddRequest([FromBody] DistributionRequest request)
{
var item = _mgr.Add(request);
return Accepted(new { Succeeded = true, RequestId = item.RequestId });
}
}
RequestManager
Manages the request list and forwards requests to a distributor.
public class RequestManager
{
    private readonly ILogger _logger;
    private readonly RequestDistributor _distributor;

    public IList<DistributionRequest> Requests { get; } = new List<DistributionRequest>();

    public RequestManager(RequestDistributor distributor, ILogger<RequestManager> logger)
    {
        _distributor = distributor;
        _logger = logger;
    }

    public DistributionRequest Add(DistributionRequest request)
    {
        _logger.LogInformation($"Request Id: {request.RequestId}, New request received");
        /// Just add to the list of requests
        Requests.Add(request);
        /// Create and start a new task to distribute the request
        /// forward it to the distributor.
        /// Be sure to not add "await" here
        Task.Factory.StartNew(() => _distributor.DistributeAsync(request));
        _logger.LogInformation($"Request Id: {request.RequestId}, New task created for the new request");
        return request;
    }
}
RequestDistributor
Distribution logic can be implemented here
public class RequestDistributor
{
    public async Task DistributeAsync(DistributionRequest request)
    {
        /// do your distribution here
        /// currently just a mocked time range
        await Task.Delay(5);
    }
}
Wire up
... add all these things to your dependency injection configuration
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSingleton<RequestDistributor>();
    services.AddSingleton<RequestManager>();
}
Tests
With the code pieces provided here, I received responses to all requests in less than 10 ms.
Note
This is just an example; try to always add interfaces to your services to make them testable ;). A sketch of that follows.
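For illustration only (not part of the original answer), extracting a hypothetical IRequestDistributor interface could look like this:
// hypothetical interface extracted from RequestDistributor for testability
public interface IRequestDistributor
{
    Task DistributeAsync(DistributionRequest request);
}

public class RequestDistributor : IRequestDistributor
{
    public async Task DistributeAsync(DistributionRequest request)
    {
        // mocked distribution work, as in the example above
        await Task.Delay(5);
    }
}
RequestManager would then take an IRequestDistributor in its constructor, and the registration becomes:
services.AddSingleton<IRequestDistributor, RequestDistributor>();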

gRPC keeping response streams open for subscriptions

I've tried to define a gRPC service where clients can subscribe to receive broadcast messages and can also send them.
syntax = "proto3";
package Messenger;
service MessengerService {
rpc SubscribeForMessages(User) returns (stream Message) {}
rpc SendMessage(Message) returns (Close) {}
}
message User {
string displayName = 1;
}
message Message {
User from = 1;
string message = 2;
}
message Close {}
My idea was that when a client requests to subscribe to the messages, the response stream would be added to a collection of response streams, and when a message is sent, the message is sent through all the response streams.
However, when my server attempts to write to the response streams, I get an exception System.InvalidOperationException: 'Response stream has already been completed.'
Is there any way to tell the server to keep the streams open so that new messages can be sent through them? Or is this not something that gRPC was designed for and a different technology should be used?
The end goal is a service that allows multiple types of subscriptions (to new messages, weather updates, etc.) through different clients written in different languages (C#, Java, etc.). The different-languages part is the main reason I chose gRPC to try this, although I intend to write the server in C#.
Implementation example
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;
using Messenger;

namespace SimpleGrpcTestStream
{
    /*
    Dependencies
    Install-Package Google.Protobuf
    Install-Package Grpc
    Install-Package Grpc.Tools
    Install-Package System.Interactive.Async
    Install-Package System.Linq.Async
    */
    internal static class Program
    {
        private static void Main()
        {
            var messengerServer = new MessengerServer();
            messengerServer.Start();
            var channel = Common.GetNewInsecureChannel();
            var client = new MessengerService.MessengerServiceClient(channel);
            var clientUser = Common.GetUser("Client");
            var otherUser = Common.GetUser("Other");
            var cancelClientSubscription = AddCancellableMessageSubscription(client, clientUser);
            var cancelOtherSubscription = AddCancellableMessageSubscription(client, otherUser);
            client.SendMessage(new Message { From = clientUser, Message_ = "Hello" });
            client.SendMessage(new Message { From = otherUser, Message_ = "World" });
            client.SendMessage(new Message { From = clientUser, Message_ = "Whoop" });
            cancelClientSubscription.Cancel();
            cancelOtherSubscription.Cancel();
            channel.ShutdownAsync().Wait();
            messengerServer.ShutDown().Wait();
        }

        private static CancellationTokenSource AddCancellableMessageSubscription(
            MessengerService.MessengerServiceClient client,
            User user)
        {
            var cancelMessageSubscription = new CancellationTokenSource();
            var messages = client.SubscribeForMessages(user);
            var messageSubscription = messages
                .ResponseStream
                .ToAsyncEnumerable()
                .Finally(() => messages.Dispose());
            messageSubscription.ForEachAsync(
                message => Console.WriteLine($"New Message: {message.Message_}"),
                cancelMessageSubscription.Token);
            return cancelMessageSubscription;
        }
    }

    public static class Common
    {
        private const int Port = 50051;
        private const string Host = "localhost";
        private static readonly string ChannelAddress = $"{Host}:{Port}";

        public static User GetUser(string name) => new User { DisplayName = name };
        public static readonly User ServerUser = GetUser("Server");
        public static readonly Close EmptyClose = new Close();
        public static Channel GetNewInsecureChannel() => new Channel(ChannelAddress, ChannelCredentials.Insecure);
        public static ServerPort GetNewInsecureServerPort() => new ServerPort(Host, Port, ServerCredentials.Insecure);
    }

    public sealed class MessengerServer : MessengerService.MessengerServiceBase
    {
        private readonly Server _server;

        public MessengerServer()
        {
            _server = new Server
            {
                Ports = { Common.GetNewInsecureServerPort() },
                Services = { MessengerService.BindService(this) },
            };
        }

        public void Start()
        {
            _server.Start();
        }

        public async Task ShutDown()
        {
            await _server.ShutdownAsync().ConfigureAwait(false);
        }

        private readonly ConcurrentDictionary<User, IServerStreamWriter<Message>> _messageSubscriptions = new ConcurrentDictionary<User, IServerStreamWriter<Message>>();

        public override async Task<Close> SendMessage(Message request, ServerCallContext context)
        {
            await Task.Run(() =>
            {
                foreach (var (_, messageStream) in _messageSubscriptions)
                {
                    messageStream.WriteAsync(request);
                }
            }).ConfigureAwait(false);
            return await Task.FromResult(Common.EmptyClose).ConfigureAwait(false);
        }

        public override async Task SubscribeForMessages(User request, IServerStreamWriter<Message> responseStream, ServerCallContext context)
        {
            await Task.Run(() =>
            {
                responseStream.WriteAsync(new Message
                {
                    From = Common.ServerUser,
                    Message_ = $"{request.DisplayName} is listening for messages!",
                });
                _messageSubscriptions.TryAdd(request, responseStream);
            }).ConfigureAwait(false);
        }
    }

    public static class AsyncStreamReaderExtensions
    {
        public static IAsyncEnumerable<T> ToAsyncEnumerable<T>(this IAsyncStreamReader<T> asyncStreamReader)
        {
            if (asyncStreamReader is null) { throw new ArgumentNullException(nameof(asyncStreamReader)); }
            return new ToAsyncEnumerableEnumerable<T>(asyncStreamReader);
        }

        private sealed class ToAsyncEnumerableEnumerable<T> : IAsyncEnumerable<T>
        {
            public IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken = default)
                => new ToAsyncEnumerator<T>(_asyncStreamReader, cancellationToken);

            private readonly IAsyncStreamReader<T> _asyncStreamReader;

            public ToAsyncEnumerableEnumerable(IAsyncStreamReader<T> asyncStreamReader)
            {
                _asyncStreamReader = asyncStreamReader;
            }

            private sealed class ToAsyncEnumerator<TEnumerator> : IAsyncEnumerator<TEnumerator>
            {
                public TEnumerator Current => _asyncStreamReader.Current;
                public async ValueTask<bool> MoveNextAsync() => await _asyncStreamReader.MoveNext(_cancellationToken);
                public ValueTask DisposeAsync() => default;

                private readonly IAsyncStreamReader<TEnumerator> _asyncStreamReader;
                private readonly CancellationToken _cancellationToken;

                public ToAsyncEnumerator(IAsyncStreamReader<TEnumerator> asyncStreamReader, CancellationToken cancellationToken)
                {
                    _asyncStreamReader = asyncStreamReader;
                    _cancellationToken = cancellationToken;
                }
            }
        }
    }
}
The problem you're experiencing is due to the fact that MessengerServer.SubscribeForMessages returns immediately. Once that method returns, the stream is closed.
You'll need an implementation similar to this to keep the stream alive:
public class MessengerService : MessengerServiceBase
{
    private static readonly ConcurrentDictionary<User, IServerStreamWriter<Message>> MessageSubscriptions =
        new ConcurrentDictionary<User, IServerStreamWriter<Message>>();

    public override async Task SubscribeForMessages(User request, IServerStreamWriter<Message> responseStream, ServerCallContext context)
    {
        if (!MessageSubscriptions.TryAdd(request, responseStream))
        {
            // User is already subscribed
            return;
        }
        // Keep the stream open so we can continue writing new Messages as they are pushed
        while (!context.CancellationToken.IsCancellationRequested)
        {
            // Avoid pegging CPU
            await Task.Delay(100);
        }
        // Cancellation was requested, remove the stream from the stream map
        MessageSubscriptions.TryRemove(request, out _);
    }
}
As far as unsubscribing / cancellation goes, there are two possible approaches:
The client can hold onto a CancellationTokenSource and call Cancel() when it wants to disconnect
The server can hold onto a CancellationTokenSource, which you would then store along with the IServerStreamWriter in the MessageSubscriptions dictionary via a tuple or similar. Then you could introduce an Unsubscribe method on the server which looks up the CancellationTokenSource by User and calls Cancel on it server-side (a sketch of this follows)
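For illustration only, a minimal sketch of the second (server-side) approach; the Unsubscribe method is hypothetical and assumes a matching rpc was added to the .proto:
public class MessengerService : MessengerServiceBase
{
    // each subscription stores the stream writer together with its CancellationTokenSource
    private static readonly ConcurrentDictionary<User, (IServerStreamWriter<Message> Stream, CancellationTokenSource Cts)> MessageSubscriptions =
        new ConcurrentDictionary<User, (IServerStreamWriter<Message> Stream, CancellationTokenSource Cts)>();

    public override async Task SubscribeForMessages(User request, IServerStreamWriter<Message> responseStream, ServerCallContext context)
    {
        // link to the call's token so a client disconnect also ends the subscription
        var cts = CancellationTokenSource.CreateLinkedTokenSource(context.CancellationToken);
        if (!MessageSubscriptions.TryAdd(request, (responseStream, cts)))
        {
            return; // user is already subscribed
        }
        try
        {
            // hold the stream open until the client disconnects or Unsubscribe cancels it
            await Task.Delay(Timeout.Infinite, cts.Token);
        }
        catch (TaskCanceledException)
        {
            MessageSubscriptions.TryRemove(request, out _);
        }
    }

    // hypothetical unsubscribe RPC: looks up the subscription by User and cancels it server-side
    public override Task<Close> Unsubscribe(User request, ServerCallContext context)
    {
        if (MessageSubscriptions.TryGetValue(request, out var subscription))
        {
            subscription.Cts.Cancel();
        }
        return Task.FromResult(new Close());
    }
}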
Similar to Jon Halliday's answer, an indefinitely long Task.Delay(-1) can be used, passing it the context's cancellation token.
A try/catch can then be used to remove the subscription and end the server's response stream when the task is cancelled.
public override async Task SubscribeForMessages(User request, IServerStreamWriter<Message> responseStream, ServerCallContext context)
{
    if (_messageSubscriptions.ContainsKey(request))
    {
        return;
    }
    await responseStream.WriteAsync(new Message
    {
        From = Common.ServerUser,
        Message_ = $"{request.DisplayName} is listening for messages!",
    }).ConfigureAwait(false);
    _messageSubscriptions.TryAdd(request, responseStream);
    try
    {
        await Task.Delay(-1, context.CancellationToken);
    }
    catch (TaskCanceledException)
    {
        _messageSubscriptions.TryRemove(request, out _);
    }
}

TPL BufferBlock Consume Method Not Being Called

I want to implement the producer/consumer pattern using a BufferBlock that runs continuously, similar to the question here and the code here.
I tried to use an ActionBlock like the OP, but if the BufferBlock is full and new messages are in its queue, then the new messages never get added to the ConcurrentDictionary _queue.
In the code below, the ConsumeAsync method never gets called when a new message is added to the BufferBlock with this call: _messageBufferBlock.SendAsync(message)
How can I correct the code below so that the ConsumeAsync method is called every time a new message is added using _messageBufferBlock.SendAsync(message)?
public class PriorityMessageQueue
{
    private volatile ConcurrentDictionary<int, MyMessage> _queue = new ConcurrentDictionary<int, MyMessage>();
    private volatile BufferBlock<MyMessage> _messageBufferBlock;
    private readonly Task<bool> _initializingTask; // not used but allows for calling async method from constructor
    private int _dictionaryKey;

    public PriorityMessageQueue()
    {
        _initializingTask = Init();
    }

    public async Task<bool> EnqueueAsync(MyMessage message)
    {
        return await _messageBufferBlock.SendAsync(message);
    }

    private async Task<bool> ConsumeAsync()
    {
        try
        {
            // This code does not fire when a new message is added to the bufferblock
            while (await _messageBufferBlock.OutputAvailableAsync())
            {
                // A message object is never received from the bufferblock
                var message = await _messageBufferBlock.ReceiveAsync();
            }
            return true;
        }
        catch (Exception ex)
        {
            return false;
        }
    }

    private async Task<bool> Init()
    {
        var executionDataflowBlockOptions = new ExecutionDataflowBlockOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount,
            BoundedCapacity = 50
        };
        var prioritizeMessageBlock = new ActionBlock<MyMessage>(msg =>
        {
            SetMessagePriority(msg);
        }, executionDataflowBlockOptions);
        _messageBufferBlock = new BufferBlock<MyMessage>();
        _messageBufferBlock.LinkTo(prioritizeMessageBlock, new DataflowLinkOptions { PropagateCompletion = true, MaxMessages = 50 });
        return await ConsumeAsync();
    }
}
EDIT
I have removed all the extra code and added comments.
I'm still not completely certain what you're trying to accomplish but I'll try to point you in the right direction. Most of the code in the example isn't strictly necessary.
I need to know when a new message arrives
If this is your only requirement, then I'll assume you just need to run some arbitrary code whenever a new message is passed in. The easiest way to do that in dataflow is to use a TransformBlock and set that block as the initial receiver in your pipeline. Each block has its own buffer, so unless you need another buffer you can leave the BufferBlock out.
public class PriorityMessageQueue {
    private TransformBlock<MyMessage, MyMessage> _messageReceiver;

    public PriorityMessageQueue() {
        var executionDataflowBlockOptions = new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = Environment.ProcessorCount,
            BoundedCapacity = 50
        };
        var prioritizeMessageBlock = new ActionBlock<MyMessage>(msg => {
            SetMessagePriority(msg);
        }, executionDataflowBlockOptions);
        _messageReceiver = new TransformBlock<MyMessage, MyMessage>(msg => NewMessageReceived(msg), executionDataflowBlockOptions);
        _messageReceiver.LinkTo(prioritizeMessageBlock, new DataflowLinkOptions { PropagateCompletion = true });
    }

    public async Task<bool> EnqueueAsync(MyMessage message) {
        return await _messageReceiver.SendAsync(message);
    }

    private MyMessage NewMessageReceived(MyMessage message) {
        //do something when a new message arrives
        //pass the message along in the pipeline
        return message;
    }

    private void SetMessagePriority(MyMessage message) {
        //Handle a message
    }
}
Of course, the other option would be to do whatever you need immediately within EnqueueAsync before returning the task from SendAsync, but the TransformBlock gives you extra flexibility. A minimal sketch of that inline option follows.
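For illustration only (not part of the original answer); HandleNewMessage is a hypothetical hook standing in for whatever needs to run per message:
public async Task<bool> EnqueueAsync(MyMessage message) {
    // run the per-message code inline, before handing off to the pipeline
    HandleNewMessage(message); // hypothetical hook
    return await _messageReceiver.SendAsync(message);
}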

Create delay between two message reads of a Queue?

I am using Azure Queues to perform a bulk import.
I am using WebJobs to perform the process in the background.
The queue dequeues very frequently. How do I create a delay between two message reads?
This is how I am adding a message to the queue:
public async Task<bool> Handle(CreateFileUploadCommand message)
{
    var queueClient = _queueService.GetQueueClient(Constants.Queues.ImportQueue);
    var brokeredMessage = new BrokeredMessage(JsonConvert.SerializeObject(new ProcessFileUploadMessage
    {
        TenantId = message.TenantId,
        FileExtension = message.FileExtension,
        FileName = message.Name,
        DeviceId = message.DeviceId,
        SessionId = message.SessionId,
        UserId = message.UserId,
        OutletId = message.OutletId,
        CorrelationId = message.CorrelationId,
    }))
    {
        ContentType = "application/json",
    };
    await queueClient.SendAsync(brokeredMessage);
    return true;
}
And below is the WebJobs function.
public class Functions
{
    private readonly IValueProvider _valueProvider;

    public Functions(IValueProvider valueProvider)
    {
        _valueProvider = valueProvider;
    }

    public async Task ProcessQueueMessage([ServiceBusTrigger(Constants.Queues.ImportQueue)] BrokeredMessage message,
        TextWriter logger)
    {
        var queueMessage = message.GetBody<string>();
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri(_valueProvider.Get("ServiceBaseUri"));
            var stringContent = new StringContent(queueMessage, Encoding.UTF8, "application/json");
            var result = await client.PostAsync(RestfulUrls.ImportMenu.ProcessUrl, stringContent);
            if (result.IsSuccessStatusCode)
            {
                await message.CompleteAsync();
            }
            else
            {
                await message.AbandonAsync();
            }
        }
    }
}
As far as I know, the Azure WebJobs SDK enables concurrent processing on a single instance (the default is 16).
If you run your WebJob, it will read 16 queue messages (peek-lock, calling Complete on each message if the function finishes successfully, or Abandon otherwise) and execute the trigger function for 16 messages at the same time. That is why the queue seems to dequeue very frequently.
If you want to disable concurrent processing on a single instance, I suggest you set the ServiceBusConfiguration's MessageOptions.MaxConcurrentCalls to 1.
For more details, refer to the code below:
In Program.cs:
JobHostConfiguration config = new JobHostConfiguration();
ServiceBusConfiguration serviceBusConfig = new ServiceBusConfiguration();
serviceBusConfig.MessageOptions.MaxConcurrentCalls = 1;
config.UseServiceBus(serviceBusConfig);
JobHost host = new JobHost(config);
host.RunAndBlock();
If you want to create a delay between two message reads, I suggest you create a custom ServiceBusConfiguration.MessagingProvider.
It contains a CompleteProcessingMessageAsync method, which completes processing of the specified message after the job function has been invoked.
I suggest you add a delay in CompleteProcessingMessageAsync to achieve the delayed read.
For more detail, refer to the code sample below:
CustomMessagingProvider.cs:
Notice: I overrode the CompleteProcessingMessageAsync method.
public class CustomMessagingProvider : MessagingProvider
{
    private readonly ServiceBusConfiguration _config;

    public CustomMessagingProvider(ServiceBusConfiguration config)
        : base(config)
    {
        _config = config;
    }

    public override NamespaceManager CreateNamespaceManager(string connectionStringName = null)
    {
        // you could return your own NamespaceManager here, which would be used
        // globally
        return base.CreateNamespaceManager(connectionStringName);
    }

    public override MessagingFactory CreateMessagingFactory(string entityPath, string connectionStringName = null)
    {
        // you could return a customized (or new) MessagingFactory here per entity
        return base.CreateMessagingFactory(entityPath, connectionStringName);
    }

    public override MessageProcessor CreateMessageProcessor(string entityPath)
    {
        // demonstrates how to plug in a custom MessageProcessor
        // you could use the global MessageOptions, or use different
        // options per entity
        return new CustomMessageProcessor(_config.MessageOptions);
    }

    private class CustomMessageProcessor : MessageProcessor
    {
        public CustomMessageProcessor(OnMessageOptions messageOptions)
            : base(messageOptions)
        {
        }

        public override Task<bool> BeginProcessingMessageAsync(BrokeredMessage message, CancellationToken cancellationToken)
        {
            // intercept messages before the job function is invoked
            return base.BeginProcessingMessageAsync(message, cancellationToken);
        }

        public override async Task CompleteProcessingMessageAsync(BrokeredMessage message, FunctionResult result, CancellationToken cancellationToken)
        {
            if (result.Succeeded)
            {
                if (!MessageOptions.AutoComplete)
                {
                    // AutoComplete is true by default, but if set to false
                    // we need to complete the message
                    cancellationToken.ThrowIfCancellationRequested();
                    await message.CompleteAsync();
                    Console.WriteLine("Begin delay");
                    // delay 5 seconds before the next message is read
                    // (Task.Delay instead of Thread.Sleep, since this method is async)
                    await Task.Delay(5000, cancellationToken);
                    Console.WriteLine("Finished 5 second delay");
                }
            }
            else
            {
                cancellationToken.ThrowIfCancellationRequested();
                await message.AbandonAsync();
            }
        }
    }
}
Program.cs main method:
static void Main()
{
    var config = new JobHostConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }
    var sbConfig = new ServiceBusConfiguration
    {
        MessageOptions = new OnMessageOptions
        {
            AutoComplete = false,
            MaxConcurrentCalls = 1
        }
    };
    sbConfig.MessagingProvider = new CustomMessagingProvider(sbConfig);
    config.UseServiceBus(sbConfig);
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
