Transactional messages without receiver lock - session peeking - C#

I have a distributed service that takes anywhere from 10 seconds to 10 minutes to process a message. The service starts on a user's (i.e. browser's) request, which is received through an API. Due to various limitations (timeouts, dropped client connections, ...), the result of the service has to be returned outside of the initial request. Therefore I return a SessionId to the requesting user, which can be used to retrieve the result until it expires.
Now it can happen that a user makes multiple consecutive requests for the result while the session is still locked. For example the following code gets hit with the same SessionId within 60 seconds:
var session = await responseQueue.AcceptMessageSessionAsync(sessionId);
var response = await session.ReceiveAsync(TimeSpan.FromSeconds(60));
if (response == null)
{
session.Abort();
return null;
}
await response.AbandonAsync();
What I need is a setup without locking, the ability to read a message multiple times until it expires, plus the ability to wait for messages that don't exist yet.
Which ServiceBus solution fits that bill?
UPDATE
Here's a dirty solution, still looking for a better way:
MessageSession session = null;
try
{
session = await responseQueue.AcceptMessageSessionAsync(sessionId);
}
catch
{
// Session is still locked by another request; the client has to retry
// until that request's 60-second lock is released.
return null;
}
var response = await session.ReceiveAsync(TimeSpan.FromSeconds(60));
// ...

Related

The correct way to wait for a specific message in a Service Bus Queue in a multi threaded environment (Azure Functions)

I have created a solution based on Azure Functions and Azure Service Bus, where clients can retrieve information from multiple back-end systems using a single API. The API is implemented in Azure Functions, and based on the payload of the request it is relayed to a Service Bus Queue, picked up by a client application running somewhere on-premise, and the answer sent back by the client to another Service Bus Queue, the "reply-" queue. Meanwhile, the Azure Function is waiting for a message in the reply-queue, and when it finds the message that belongs to it, it sends the payload back to the caller.
The Azure Function Activity Root Id is attached to the Service Bus message as the CorrelationId. This way each running function knows which message contains the response to its caller's request.
My question is about the way I am currently retrieving messages from the reply queue. Since multiple instances can run at the same time, each Azure Function instance needs to get its response from the client without blocking other instances. Besides that, a timeout needs to be observed: the client is expected to respond within 20 seconds. While waiting, the Azure Function should not block other instances.
This is the code I have so far:
internal static async Task<(string, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
var receiver = new MessageReceiver(_connectionString, queueName, ReceiveMode.PeekLock);
try
{
var sw = Stopwatch.StartNew();
while (sw.Elapsed < timeout)
{
var message = await receiver.ReceiveAsync(timeout.Subtract(sw.Elapsed));
if (message != null)
{
if (message.CorrelationId == operationId)
{
log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
var reply = Encoding.UTF8.GetString(message.Body);
var error = message.UserProperties.ContainsKey("ErrorCode");
await receiver.CompleteAsync(message.SystemProperties.LockToken);
return (reply, error);
}
else
{
log.LogInformation("Ignoring message for operation {OperationId}", message.CorrelationId);
}
}
}
return (null, false);
}
finally
{
await receiver.CloseAsync();
}
}
The code is based on a few assumptions. I am having a hard time trying to find any documentation to verify my assumptions are correct:
I expect subsequent calls to ReceiveAsync not to fetch messages I have previously fetched and not explicitly abandoned.
I expect new messages that arrive on the queue to be received by ReceiveAsync, even though they may have arrived after my first call to ReceiveAsync and even though there might still be other messages in the queue that I haven't received yet either. E.g. there are 10 messages in the queue, I start receiving the first few message, meanwhile new messages arrive, and after I have read the 10 pre-existing messages, I get the new messages too.
I expect that when I call ReceiveAsync for a second time, that the lock is released from the message I received with the first call, although I did not explicitly Abandon that first message.
Could anyone tell me if my assumptions are correct?
Note: please don't suggest Durable Functions, which were designed specifically for this, because they simply do not fulfill the requirements. Most notably, Durable Functions are invoked by a process that polls a queue with a sliding interval, so after not having any requests for a few minutes, the first new request can take a minute to start, which is not acceptable for my use case.
I would consider session-enabled topics or queues for this.
The Message sessions documentation explains this in detail but the essential bit is that a session receiver is created by a client accepting a session. When the session is accepted and held by a client, the client holds an exclusive lock on all messages with that session's session ID in the queue or subscription. It will also hold exclusive locks on all messages with the session ID that will arrive later.
This makes it perfect for facilitating the request/reply pattern.
When sending the message to the queue that the on-premises handlers receive messages on, set the ReplyToSessionId property on the message to your operationId.
Then, the on-premises handlers need to set the SessionId property of the messages they send to the reply queue to the value of the ReplyToSessionId property of the message they processed.
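To illustrate both sides, here is a minimal sketch (sender names like `requestSender`/`replySender` and the payload variables are assumptions, not from the original code):

```csharp
// API side: tag the request so handlers know which session to reply to.
var request = new Message(Encoding.UTF8.GetBytes(requestPayload))
{
    ReplyToSessionId = operationId
};
await requestSender.SendAsync(request);

// On-premises handler side: route the reply into the caller's session.
var reply = new Message(Encoding.UTF8.GetBytes(resultPayload))
{
    SessionId = processedMessage.ReplyToSessionId,
    CorrelationId = processedMessage.ReplyToSessionId
};
await replySender.SendAsync(reply);
```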
Then finally you can update your code to use a SessionClient and call AcceptMessageSessionAsync() on it to start listening for messages on that session.
Something like the following should work:
internal static async Task<(string?, bool)> WaitForMessageAsync(string queueName, string operationId, TimeSpan timeout, ILogger log)
{
log.LogInformation("Connecting to service bus queue {QueueName} to wait for reply...", queueName);
var sessionClient = new SessionClient(_connectionString, queueName, ReceiveMode.PeekLock);
try
{
var receiver = await sessionClient.AcceptMessageSessionAsync(operationId);
// message will be null if the timeout is reached
var message = await receiver.ReceiveAsync(timeout);
if (message != null)
{
log.LogInformation("Reply received for operation {OperationId}", message.CorrelationId);
var reply = Encoding.UTF8.GetString(message.Body);
var error = message.UserProperties.ContainsKey("ErrorCode");
await receiver.CompleteAsync(message.SystemProperties.LockToken);
return (reply, error);
}
return (null, false);
}
finally
{
await sessionClient.CloseAsync();
}
}
Note: For all this to work, the reply queue will need Sessions enabled. This will require the Standard or Premium tier of Azure Service Bus.
Both queues and topic subscriptions support enabling sessions. Topic subscriptions let you mix and match session-enabled scenarios as your needs arise: you could have some subscriptions with sessions enabled and some without.
The queue used to send the message to the on-premises handlers does not need Sessions enabled.
Finally, when Sessions are enabled on a queue or a topic subscription, the client applications can no longer send or receive regular messages. All messages must be sent as part of a session (by setting the SessionId) and received by accepting the session.
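For reference, creating the reply queue with sessions enabled could look like this (a sketch using the ManagementClient from Microsoft.Azure.ServiceBus.Management; the queue name is a placeholder):

```csharp
using Microsoft.Azure.ServiceBus.Management;

var management = new ManagementClient(_connectionString);
await management.CreateQueueAsync(new QueueDescription("reply-queue")
{
    // Without RequiresSession = true, AcceptMessageSessionAsync on this
    // queue will fail at runtime.
    RequiresSession = true
});
```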
It seems that this feature cannot be achieved at the moment.
You can give your feedback here; if others have the same demand, they will vote your idea up.

Lock cache with Apache Ignite.NET thin client

Currently we use the Apache Ignite.NET thin client to cache different sets of data. When a data request comes in, we check whether the data is already stored in the cache and, if not, request it from the database and put it into the cache.
I want to prevent several database requests if two data requests arrive at the same time.
Is there any way to manually lock the cache before the first database request starts, so that the second data request can wait until the first request is completed?
I cannot solve the task using .NET concurrency primitives because the cache can be used by multiple client instances (load balancing).
I've already found the ICache.Lock(TK key) method, but it seems that it locks only the specified rows in the cache and is supported only in self-hosted mode, not for the Ignite.NET thin client.
Small piece of code that illustrates the issue:
var key = "cache_key";
using (var ignite = Ignition.StartClient(new Core.Client.IgniteClientConfiguration { Host = "127.0.0.1" }))
{
var cacheNames = ignite.GetCacheNames();
if (cacheNames.Contains(key))
{
return ignite.GetCache<int, Employee>(key).AsCacheQueryable();
}
else
{
var data = RequestDataFromDatabase();
var cache = ignite.CreateCache<int, Employee>(new CacheClientConfiguration(
EmployeeCacheName, new QueryEntity(typeof(int), typeof(Employee))));
cache.PutAll(data);
return cache.AsCacheQueryable();
}
}
The thin client doesn't have the required API.
If you don't need to check for individual records and only need to know whether the cache is available, you might just call CreateCache multiple times. Further invocations should throw an exception saying that a cache with that name has already started.
try {
var cache = ignite.CreateCache<int, Employee>(new CacheClientConfiguration(
EmployeeCacheName, new QueryEntity(typeof(int), typeof(Employee))));
// Cache created by this call => add data here
} catch (IgniteClientException e) when (e.Message.Contains("already started")) {
// Return existing cache, don't add data
}
Alexandr has provided a good and simple solution if you just need to initialize the cache once.
If you need more complex synchronization logic, atomic cache operations (PutIfAbsent, Replace) can often replace locks. For example, we could have a special cache to track the status of other caches:
var statusCache = Client.GetOrCreateCache<string, string>("status");
if (statusCache.PutIfAbsent("cache-name", "created"))
{
// Just created, add data
...
//
statusCache.Put("cache-name", "populated");
}
else
{
// Already exists, wait for data
while (statusCache["cache-name"] != "populated")
Thread.Sleep(1000);
}
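One caveat with the wait loop above: if the client that claimed the "created" status crashes before populating the cache, other clients would spin forever. A bounded wait avoids that (a sketch; the 30-second limit is an arbitrary assumption):

```csharp
// Bounded wait: give up after 30 seconds instead of spinning forever
// if the client that wrote "created" never finishes populating.
var sw = System.Diagnostics.Stopwatch.StartNew();
while (statusCache["cache-name"] != "populated")
{
    if (sw.Elapsed > TimeSpan.FromSeconds(30))
        throw new TimeoutException("Cache was never populated.");
    System.Threading.Thread.Sleep(1000);
}
```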

Fetching session state without sending HTTP request from front end

Setup: ASP.NET Core with Reactjs + Redux + Saga. It needs to notify the user when the ASP.NET Core session has expired. The problem is that sending GET requests to check the session status extends the session, which means the session will never expire unless the browser tab is closed (so that the GET requests stop being sent). This is the session setup in Startup.cs:
services.AddSession(options =>
{
options.IdleTimeout = TimeSpan.FromSeconds(60 * 60);
options.Cookie.HttpOnly = true;
});
Then, every 5 minutes, we send a request from the client to get the identity:
function* checkSessionIsValid() {
  try {
    const response = yield call(axios.get, 'api/Customer/FetchIdentity');
    if (!response.data) {
      return yield put({
        type: types.SESSION_EXPIRED,
      });
    }
    yield delay(300000);
    return yield put({ type: types.CHECK_SESSION_IS_VALID });
  } catch (error) {
    return;
  }
}
And the backend endpoint(_context is IHttpContextAccessor):
[HttpGet("FetchIdentity")]
public LoginInfo GetIdentity()
{
if (SessionExtension.GetString(_context.Session, "_emailLogin") != null)
{
return new LoginInfo()
{
LoggedInWith = "email",
LoggedIn = true,
Identity = ""
};
}
return null;
}
So we get the session info via SessionExtension. But is there some way of getting it without a round trip to the back end?
What you're asking isn't possible, and frankly doesn't make sense when you understand how sessions work. A session only exists in the context of a request. HTTP is a stateless protocol. Sessions essentially fake state by having the server and client pass around a "session id". When a server wants to establish a "session" with a client, it issues a session id to the client, usually via the Set-Cookie response header. The client then, will pass back this id on each subsequent request, usually via the Cookie request header. When the server receives this id, it looks up the corresponding session from the store, and then has access to whatever state was previously in play.
The point is that without a request first, the server doesn't know or care about what's going on with a particular session. The expiration part happens when the server next tries to look up the session. If it's been too long (the session expiration has passed), then it destroys the previous session and creates a new one. It doesn't actively monitor sessions to do anything at a particular time. And, since as you noted, sessions are sliding, each request within the expiration timeframe resets that timeframe. As a result, a session never expires as long as the client is actively using it.
Long and short, the only way to know the state of the session is to make a request with a session id, prompting the server to attempt to restore that session. The best you can do to track session expiration client-side is to set a client-side timer based on the known timeout, and then reset that timer with every further request.
var sessionTimeout = setTimeout(doSomething, 20 * 60 * 1000); // 20 minutes
Then, in your AJAX callbacks:
clearTimeout(sessionTimeout);
sessionTimeout = setTimeout(doSomething, 20 * 60 * 1000);
Perhaps the SessionState attribute can be utilized.
[SessionState(SessionStateBehavior.Disabled)]
https://msdn.microsoft.com/en-US/library/system.web.mvc.sessionstateattribute.aspx?cs-save-lang=1&cs-lang=cpp
EDIT: Realized that you're using .NET Core; I am not sure whether the SessionState attribute or anything similar exists there today.

.NET HttpClient GET request very slow after ~100 seconds idle

The first request or a request after idling roughly 100 seconds is very slow and takes 15-30 seconds. Any request without idling takes less than a second. I am fine with the first request taking time, just not the small idle time causing the slowdown.
The slowdown is not unique to one client: if I keep making requests on one client, requests stay quick on the others. Only when all clients are idle for 100 seconds does the slowdown occur.
Here are some changes that I have tried:
Setting HttpClient to a singleton and not disposing it using a using() block
Setting ServicePointManager.MaxServicePointIdleTime to a higher value, since by default it is 100 seconds. Because that matches my idle period exactly, I thought this was the issue, but it did not solve it.
Setting a higher ServicePointManager.DefaultConnectionLimit
Default proxy settings set via web.config
using await instead of httpClient.SendAsync(request).Result
It is not related to IIS application pool recycling, since the default there is 20 minutes and the rest of the application remains quick.
The requests are to a web service which communicates with AWS S3 to get files. I am at a loss for ideas at this point and all my research has led me to the above points that I already tried. Any ideas would be appreciated!
Here is the method:
//get httpclient singleton or create
var httpClient = HttpClientProvider.FileServiceHttpClient;
var queryString = string.Format("?key={0}", key);
var request = new HttpRequestMessage(HttpMethod.Get, queryString);
var response = httpClient.SendAsync(request).Result;
if (response.IsSuccessStatusCode)
{
var metadata = new Dictionary<string, string>();
foreach (var header in response.Headers)
{
//grab tf headers
if (header.Key.StartsWith(_metadataHeaderPrefix))
{
metadata.Add(header.Key.Substring(_metadataHeaderPrefix.Length), header.Value.First());
}
}
var virtualFile = new VirtualFile
{
QualifiedPath = key,
FileStream = response.Content.ReadAsStreamAsync().Result,
Metadata = metadata
};
return virtualFile;
}
return null;
The default idle timeout is about 1-2 minutes. After that, the client has to perform a new handshake with the server, which is why requests are slow after roughly 100 seconds of idle time.
You could use SocketsHttpHandler to extend the idle timeout:
var socketsHandler = new SocketsHttpHandler
{
    // In practice the connection stays idle for at most ~5 minutes; timeouts
    // longer than the TCP timeout may be ignored if no keep-alive TCP message
    // is set at the transport level.
    PooledConnectionIdleTimeout = TimeSpan.FromHours(27),
    MaxConnectionsPerServer = 10
};
client = new HttpClient(socketsHandler);
As you can see, although I set the idle timeout to 27 hours, the connection actually stays alive for only about 5 minutes.
So, in the end I simply call the target endpoint with the same HttpClient every minute. That way there is always an established connection; you can verify this with netstat. It works fine.
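The periodic call can be automated with a timer (a sketch; the health-check URL is a placeholder, not from the original post):

```csharp
// Fire-and-forget keep-alive: ping the endpoint once a minute so the
// pooled connection never sits idle long enough to be closed.
var keepAlive = new System.Threading.Timer(async _ =>
{
    try { await client.GetAsync("https://example.com/health"); }
    catch { /* ignore failures; this is only a keep-alive */ }
}, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
```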

how to control if web service is unavailable?

I have a web service that gets a list from a client and inserts it into a database. The client has a Windows service that sends a list every 10 seconds. But there is a problem: what if it cannot reach the web service (server)? I must not lose any of the data. I decided to save the data to a txt or binary file if the server is not reachable, and then upload it after the server is running again. However, how can I decide whether the web service is unavailable? If I store the data to a file in a catch block, it will store it on any error, not only a "web service unavailable" error. Any advice?
You can make an HTTP request to the service's endpoint URL and check whether everything is OK:
var url = "http://....";
//OR
var url = service_object.Url;
var request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 2000; // 2-second timeout
HttpWebResponse response = null;
try
{
    response = (HttpWebResponse)request.GetResponse();
    if (response.StatusCode != HttpStatusCode.OK)
    {
        throw new ApplicationException(response.StatusDescription);
    }
}
catch (WebException ex)
{
    // The service is unreachable (DNS failure, connection refused, timeout...).
    // Do what you want here, save the data to a file for example...
}
catch (ApplicationException ex)
{
    // The service responded, but not with 200 OK.
}
I'd introduce a queuing system (such as MSMQ or NServiceBus) so that the windows service only needs to place message(s) into the queue and something else (co-located with the web service) can dequeue messages and apply them (either directly or via the web service methods).
When everything is up and running, it shouldn't introduce much more overhead over your current (direct) web service call, and when the web service is down, everything just builds up in the queuing system.
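If you go the MSMQ route, the Windows service side could be as small as this (a sketch using System.Messaging; the queue path and `listOfItems` are assumptions):

```csharp
using System.Messaging;

// Create the local private queue once, then enqueue each batch instead of
// calling the web service directly. MSMQ stores messages durably while the
// receiver is down.
const string QueuePath = @".\Private$\pending-lists";
var queue = MessageQueue.Exists(QueuePath)
    ? new MessageQueue(QueuePath)
    : MessageQueue.Create(QueuePath);
queue.Send(listOfItems); // listOfItems is the batch your service sends today
```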
