I have a client for an OData system, and the system works in the following way.
First, I have to send a special request to retrieve a token.
Then I make my requests, attaching the token to each one.
At some point a request may fail, saying the token is outdated; then I have to send another special request to get a new token.
This is easy when I have only one thread, but I want multiple threads making requests and all sharing the same token. Also, if more than one concurrent request fails because the token has been invalidated, I want the special request to be sent exactly once and the other callers to start using the updated token.
If it matters, I am using C#.
Is there a common solution for synchronizing such requests?
Without knowing much more about your implementation, one option could be MemoryCache.
Your threads could check the cache for a specific 'tokenkey' and get its value. You can set an expiration in your MemoryCache ahead of the token's known expiration if you want to prevent 401s or other unauthorized results.
Here's an example I use to get/set a new token required for auth header in web api calls:
private static readonly object cacheLock = new object(); // shared lock object (declared here so the snippet compiles)

private string GetNewToken()
{
    lock (cacheLock)
    {
        // no token in cache, so go get a new one
        var newToken = TokenServiceAgent.GetJwt();

        // number of minutes (offset) before the JWT expires that will trigger an update of the cache
        var cacheLifetime = 15;
        CacheItemPolicy cip = new CacheItemPolicy()
        {
            AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(cacheLifetime))
        };
        MemoryCache.Default.Set("tokenkey", newToken, cip);
        return newToken;
    }
}
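To cover the question's "refresh exactly once" requirement, the callers also need a small amount of coordination. Below is a minimal caller-side sketch, not part of the original answer: it reuses the GetNewToken method above, and the callODataService delegate plus the UnauthorizedAccessException catch are assumptions standing in for however your client sends requests and reports an expired token.

private static readonly object refreshLock = new object();

private TResponse SendWithToken<TResponse>(Func<string, TResponse> callODataService)
{
    // fast path: reuse the cached token; fall back to a refresh on a cold cache
    var token = MemoryCache.Default.Get("tokenkey") as string ?? GetNewToken();
    try
    {
        return callODataService(token);
    }
    catch (UnauthorizedAccessException) // stand-in for however your client signals "token outdated"
    {
        lock (refreshLock)
        {
            // only refresh if nobody has replaced the token we just used;
            // concurrent failures therefore trigger exactly one refresh
            var current = MemoryCache.Default.Get("tokenkey") as string;
            if (current == null || current == token)
            {
                MemoryCache.Default.Remove("tokenkey");
                current = GetNewToken();
            }
            token = current;
        }
        return callODataService(token); // single retry with the fresh token
    }
}

The key design point is the re-check inside the lock: a thread that arrives second sees a token different from the one it failed with and skips the refresh.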
Related
We have an ASP.NET Core setup with Reactjs + Redux + Saga. The app needs to notify the user when the ASP.NET Core session has expired. The problem is that sending GET requests to check the session status extends the session, which means the session will never expire unless the browser tab is closed (only then do the GET requests stop). This is the session setup in Startup.cs:
services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromSeconds(60 * 60);
    options.Cookie.HttpOnly = true;
});
Then, every 5 minutes, the client sends a request to fetch the identity:
function* checkSessionIsValid() {
  try {
    const response = yield call(axios.get, 'api/Customer/FetchIdentity');
    if (!response.data) {
      // no identity returned, so the session is gone
      return yield put({ type: types.SESSION_EXPIRED });
    }
    // session still valid: wait 5 minutes and schedule the next check
    yield delay(300000);
    return yield put({ type: types.CHECK_SESSION_IS_VALID });
  } catch (error) {
    return;
  }
}
And the back-end endpoint (_context is an IHttpContextAccessor):
[HttpGet("FetchIdentity")]
public LoginInfo GetIdentity()
{
if (SessionExtension.GetString(_context.Session, "_emailLogin") != null)
{
return new LoginInfo()
{
LoggedInWith = "email",
LoggedIn = true,
Identity = ""
};
}
return null;
}
So we get the session info from SessionExtension. Is there some way of checking it without a call to the back end?
What you're asking isn't possible, and frankly doesn't make sense once you understand how sessions work. A session only exists in the context of a request. HTTP is a stateless protocol; sessions essentially fake state by having the server and client pass around a "session id". When a server wants to establish a "session" with a client, it issues a session id to the client, usually via the Set-Cookie response header. The client then passes this id back on each subsequent request, usually via the Cookie request header. When the server receives this id, it looks up the corresponding session from the store and then has access to whatever state was previously in play.
The point is that without a request first, the server doesn't know or care about what's going on with a particular session. The expiration part happens when the server next tries to look up the session: if it's been too long (the session expiration has passed), it destroys the previous session and creates a new one. It doesn't actively monitor sessions to do anything at a particular time. And since, as you noted, sessions are sliding, each request within the expiration timeframe resets that timeframe. As a result, a session never expires as long as the client is actively using it.
Long and short, the only way to know the state of the session is to make a request with a session id, prompting the server to attempt to restore that session. The best you can do if you want to track session expiration client-side is to set a client-side timer based on the known timeout, and reset that timer with every further request.
var sessionTimeout = setTimeout(doSomething, 20 * 60 * 1000); // 20 minutes
Then, in your AJAX callbacks:
clearTimeout(sessionTimeout);
sessionTimeout = setTimeout(doSomething, 20 * 60 * 1000);
Perhaps the SessionState attribute can be utilized.
[SessionState(SessionStateBehavior.Disabled)]
https://msdn.microsoft.com/en-US/library/system.web.mvc.sessionstateattribute.aspx?cs-save-lang=1&cs-lang=cpp
EDIT: I realized that you're using .NET Core; I'm not sure whether the SessionState attribute or anything similar exists as of today.
I am currently working on a system that makes calls to an external service and caches some of the data in the HttpContext.Current.Items collection for performance. The data can change quite regularly, and it is user-sensitive, which is why we are currently storing it only for the duration of the current HttpRequest.
Example:
if (HttpContext.Current.Items[cacheKey] != null)
{
    LogHelper.Debug<ExampleService>("[- CACHED RESULT -] GetUser({0})", () => email);
    return (ExampleUser)HttpContext.Current.Items[cacheKey];
}

using (var client = new UserServiceClient())
{
    using (new OperationContextScope(client.InnerChannel))
    {
        LogHelper.Debug<ExampleService>("GetUser({0})", () => email);
        exampleUser = client.GetUser(email);
        HttpContext.Current.Items.Add(cacheKey, exampleUser);
    }
}
In my local environment this behaves as expected, and it mostly does in staging too, where the same thread is used for the duration of the request. In production, however, this is not the case, and there are still multiple calls to the external service within the same request. This can be seen from the logs, which show that the value in HttpContext.Current.Items[cacheKey] is not returned in cases where the thread ID does not match the original request.
This, I guess, means that my current understanding of HttpContext.Current.Items is wrong and that it is not a suitable solution for my needs.
My question therefore is: can this be made to work across threads in the same request, and if so, should it? Otherwise, what suitable alternative is there?
One option is to use Session to store your data. Unfortunately it's not applicable to API-style requests (e.g. a mobile device making calls to a server API). Also, out-of-process session state requires all of your data to be serializable; in-process session state doesn't.
If Session does not satisfy your requirements, the next option is a cache protected by something that identifies requests coming from the same user (e.g. an access token).
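As a rough sketch of that second option (not a drop-in implementation), a per-user cache could look something like the following. The key prefix, the five-minute lifetime, and the GetUserFromService helper are assumptions made for the example:

using System;
using System.Runtime.Caching;

public class ExampleUserCache
{
    // Cache the service result per user for a short window instead of per request.
    // MemoryCache is shared across threads, so parallel work within one request sees the same entry.
    public ExampleUser GetUser(string email)
    {
        string cacheKey = "user:" + email;                    // illustrative key scheme
        if (MemoryCache.Default.Get(cacheKey) is ExampleUser cached)
            return cached;

        var user = GetUserFromService(email);                 // your existing UserServiceClient call
        MemoryCache.Default.Set(cacheKey, user, DateTimeOffset.Now.AddMinutes(5));
        return user;
    }

    private ExampleUser GetUserFromService(string email)
    {
        // placeholder for the UserServiceClient / OperationContextScope code from the question
        throw new NotImplementedException();
    }
}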
I am not exactly sure how to explain this so I'll give it my best shot.
Basically I have an application that connects to Dropbox. One of the situations I have run into is with the registration process. At the moment, during registration, the app connects to Dropbox using the default browser, where the user has to log in and allow the app to use the service. In return you get a token which the app can use to connect to the service. The problem I am having is getting the application to wait until the above process is completed. The only way I have figured out to make it wait is to use System.Threading with a fixed delay; however, if the person takes longer than that timeout, registration fails.
I am hoping someone can point me in the right direction so the app can wait without that fixed delay. I was hoping I could use some kind of loop or check, but I have no idea how to do that properly.
Here is the actual OAuth code:
private static OAuthToken GetAccessToken()
{
    string consumerKey = "*****";
    string consumerSecret = "****";

    var oauth = new OAuth();
    var requestToken = oauth.GetRequestToken(new Uri(DropboxRestApi.BaseUri), consumerKey, consumerSecret);
    var authorizeUri = oauth.GetAuthorizeUri(new Uri(DropboxRestApi.AuthorizeBaseUri), requestToken);

    Process.Start(authorizeUri.AbsoluteUri);

    return oauth.GetAccessToken(new Uri(DropboxRestApi.BaseUri), consumerKey, consumerSecret, requestToken);
}
And here is the OAuth call that is made when the registration button is clicked:
var accesstoken = GetAccessToken();
You need to use the async (asynchronous) version of their GetAccessToken call, one that will call some function of yours when it is complete.
You could also loop until the information is ready, e.g.
while (dataIsNotReady())
{
    // Sleep for a bit. This is bad: it blocks the entire thread (maybe even the application) while it sleeps.
    // Make the interval shorter for less impact.
    Thread.Sleep(1000);
    // TODO add a "timeout", i.e. only try this for X amount of time before breaking out
}
// Now the data is ready, let's go
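If blocking the thread is a concern, a minimal async variant of the same polling idea is sketched below; it assumes you already have a dataIsReady-style check, and the 500 ms interval and timeout handling are illustrative choices rather than anything the Dropbox API requires:

// Non-blocking polling sketch with a timeout. Awaiting this from an async button handler
// keeps the UI responsive while the user authorizes the app in the browser.
private static async Task<bool> WaitForAuthorizationAsync(Func<bool> dataIsReady, TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;
    while (!dataIsReady())
    {
        if (DateTime.UtcNow >= deadline)
            return false;              // gave up; the caller can show an error or offer a retry
        await Task.Delay(500);         // short pause that does not block the thread
    }
    return true;                       // authorization completed
}

In practice you would launch the authorize URI first, await this helper, and only then exchange the request token for the access token.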
Update:
Perhaps you are better off using a library that can do it async for you e.g. this Dropbox C# library: https://github.com/dkarzon/DropNet
I'm calling a 3rd party API that uses OAuth for authentication, and I'm wondering how to make this threadsafe:
var token = _tokenService.GetCurrentToken(); // eg token could be "ABCDEF"
var newToken = oauth.RenewAccessToken(token); // eg newToken could be "123456"
_tokenService.UpdateCurrentToken(newToken); // save newToken to the database
This uses the previous token every time RenewAccessToken() is called. But there is a problem if two users initiate this at the same time (two different threads run the code simultaneously) and we end up with the code executed in this order:
[Thread 1] var token = _tokenService.GetCurrentToken(); // returns "ABCDEF"
[Thread 2] var token = _tokenService.GetCurrentToken(); // returns "ABCDEF"
[Thread 1] var newToken = oauth.RenewAccessToken("ABCDEF"); // returns "123456"
[Thread 2] var newToken = oauth.RenewAccessToken("ABCDEF");
// throws an invalid token exception
What has happened is that thread 2 should actually be calling oauth.RenewAccessToken("123456"), because that is the latest token value. But the latest token hasn't even been saved to the database yet, so thread 2 always has the wrong value for the current token.
What can I do to fix this?
Edit: It has been suggested to use a lock like this:
private object tokenLock = new object();
lock (tokenLock)
{
    var token = _tokenService.GetCurrentToken();
    var newToken = oauth.RenewAccessToken(token);
    _tokenService.UpdateCurrentToken(newToken);
}
Edit 2: The lock didn't actually work anyway, this is from my logs:
[43 22:38:26:9963] Renewing now using token JHCBTW1ZI96FF
[36 22:38:26:9963] Renewing now using token JHCBTW1ZI96FF
[36 22:38:29:1790] OAuthException exception
The first number is the thread ID and the second is a timestamp. Both threads executed at exactly the same time, down to the millisecond. I don't know why the lock failed to stop thread 36 until after thread 43 had finished.
Edit 3: And again, this time after changing the object tokenLock to be a class variable instead of a local variable, the lock did not work.
[25 10:53:58:3870] Renewing now using token N95984XVORY
[9 10:53:58:3948] Renewing now using token N95984XVORY
[9 10:54:55:7981] OAuthException exception
EDIT
Given that this is an ASP.NET application, where the code may run in more than one process or app domain, the easy route (a Monitor lock using a lock { } block) is not suitable. You'll need a named Mutex to solve this problem.
Given your example code, something along these lines would work:
using (var m = new Mutex(false, "OAuthToken"))
{
    m.WaitOne();
    try
    {
        var token = _tokenService.GetCurrentToken();
        var newToken = oauth.RenewAccessToken(token);
        _tokenService.UpdateCurrentToken(newToken);
    }
    finally
    {
        m.ReleaseMutex();
    }
}
Note the finally clause; it's very important that you release the mutex. Because it's a system-wide object, its state persists beyond your application, so without the release an exception in your OAuth code would leave the mutex held and keep other requests from re-entering this code.
Also, if you have some sort of durable identifier for sessions that use the same OAuth token (something that won't change as a result of this process), you could use that identifier as part of the mutex name instead of the fixed "OAuthToken" above. This would make the synchronization specific to a given token, so operations would not have to wait on unrelated tokens being renewed. That should offset the increased cost of a mutex over a Monitor lock.
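For illustration only, a per-token variant might look like the sketch below. Neither the accountId identifier nor the re-check of the current token is part of the answer above; they are assumptions showing how a renewal already completed by another process can be skipped:

// staleToken is the token value that just failed; accountId is an assumed durable identifier
// for the user or connection that owns the token.
using (var m = new Mutex(false, "OAuthToken-" + accountId))
{
    m.WaitOne();
    try
    {
        var current = _tokenService.GetCurrentToken();
        if (current == staleToken)               // nobody has renewed it yet, so we do it
        {
            var newToken = oauth.RenewAccessToken(current);
            _tokenService.UpdateCurrentToken(newToken);
        }
    }
    finally
    {
        m.ReleaseMutex();
    }
}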
For the sake of helping others who might find this question, I've left my original answer below:
Original Answer
You just need a simple lock around your operation.
Create an instance (or static, if these functions are static) variable of type object:
private object tokenLock = new object();
In your code, enclose the steps that need to be atomic within a lock(tokenLock) block:
lock (tokenLock)
{
    var token = _tokenService.GetCurrentToken();
    var newToken = oauth.RenewAccessToken(token);
    _tokenService.UpdateCurrentToken(newToken);
}
This will prevent one thread from starting this process while another is executing it.
Having set up a ReferenceDataRequest, I send it along with an EventQueue:
Service refdata = _session.GetService("//blp/refdata");
Request request = refdata.CreateRequest("ReferenceDataRequest");
// append the appropriate symbol and field data to the request
EventQueue eventQueue = new EventQueue();
Guid guid = Guid.NewGuid();
CorrelationID id = new CorrelationID(guid);
_session.SendRequest(request, eventQueue, id);
long _eventWaitTimeout = 60000;
myEvent = eventQueue.NextEvent(_eventWaitTimeout);
Normally I can grab the message from the queue, but I'm now hitting a situation where, if I make a number of requests in the same run of the app (normally around the tenth), I see a TIMEOUT EventType:
if (myEvent.Type == Event.EventType.TIMEOUT)
throw new Exception("Timed Out - need to rethink this strategy");
else
msg = myEvent.GetMessages().First();
These are being made on the same thread, but I'm assuming that there's something somewhere along the line that I'm consuming and not releasing.
Anyone have any clues or advice?
There aren't many references on SO to BLP's API, but hopefully we can start to rectify that situation.
I just wanted to share something, thanks to the code you included in your initial post.
If you make a request for historical intraday data over a long period (which results in many events generated by the Bloomberg API), do not use the pattern specified in the API documentation, as it may make your application very slow to retrieve all the events.
Basically, do not call NextEvent() on a Session object! Use a dedicated EventQueue instead.
Instead of doing this:
var cID = new CorrelationID(1);
session.SendRequest(request, cID);

Event eventObj;
do
{
    eventObj = session.NextEvent();
    ...
} while (eventObj.Type != Event.EventType.RESPONSE);
Do this:
var cID = new CorrelationID(1);
var eventQueue = new EventQueue();
session.SendRequest(request, eventQueue, cID);

Event eventObj;
do
{
    eventObj = eventQueue.NextEvent();
    ...
} while (eventObj.Type != Event.EventType.RESPONSE);
This can result in some performance improvement, though the API is known to not be particularly deterministic...
I didn't really ever get around to solving this question, but we did find a workaround.
Based on a small, apparently throwaway, comment in the Server API documentation, we opted to create a second session. One session is responsible for static requests, the other for real-time. e.g.
_marketDataSession.OpenService("//blp/mktdata");
_staticSession.OpenService("//blp/refdata");
This means one session operates in subscription mode and the other more synchronously; I think it was this duality that was at the root of our problems.
Since making that change, we've not had any problems.
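For completeness, a rough sketch of how the two sessions might be created is below; the SessionOptions host and port are placeholder values, and the exact setup will depend on your environment:

// One session for static (request/response) reference data, one for real-time subscriptions.
// ServerHost/ServerPort are illustrative defaults, not values from the original post.
var options = new SessionOptions { ServerHost = "localhost", ServerPort = 8194 };

var staticSession = new Session(options);
staticSession.Start();
staticSession.OpenService("//blp/refdata");     // synchronous reference-data requests

var marketDataSession = new Session(options);
marketDataSession.Start();
marketDataSession.OpenService("//blp/mktdata"); // real-time subscriptions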
My reading of the docs agrees that you need separate sessions for the "//blp/mktdata" and "//blp/refdata" services.
A client appeared to have a similar problem. I solved it by creating hundreds of sessions rather than passing hundreds of requests through one session. Bloomberg may not be too happy with this BFI (brute force and ignorance) approach, as we are sending the field requests for each session, but it works.
Nice to see another person on Stack Overflow enjoying the pain of the Bloomberg API :-)
I'm ashamed to say I use the following pattern (I suspect copied from the example code). It seems to work reasonably robustly, but probably ignores some important messages. But I don't get your time-out problem. It's Java, but all the languages work basically the same.
cid = session.sendRequest(request, null);
while (true) {
    Event event = session.nextEvent();
    MessageIterator msgIter = event.messageIterator();
    while (msgIter.hasNext()) {
        Message msg = msgIter.next();
        if (msg.correlationID() == cid) {
            processMessage(msg, fieldStrings, result);
        }
    }
    if (event.eventType() == Event.EventType.RESPONSE) {
        break;
    }
}
This may work because it consumes all messages off each event.
It sounds like you are making too many requests at once. BB will only process a certain number of requests per connection at any given time. Note that opening more and more connections will not help, because there are limits per subscription as well. If you make a large number of time-consuming requests simultaneously, some may time out. Also, you should process each request completely (until you receive the RESPONSE message) or cancel it; a partial request that is left outstanding is wasting a slot.
Since splitting into two sessions seems to have helped you, it sounds like you are also making a lot of subscription requests at the same time. Are you using subscriptions as a way to take snapshots, i.e. subscribing to an instrument, getting the initial values, and unsubscribing? If so, you should try to find a different design; this is not how subscriptions are intended to be used. An outstanding subscription request also uses a request slot, which is why it is best to batch as many subscriptions as possible into a single subscription list instead of making many individual requests. Hope this helps with your use of the API.
By the way, I can't tell from your sample code, but while you are blocked on messages from your dedicated event queue, are you also reading from the main event queue (in a separate thread)? You must process all the messages out of the queue, especially if you have outstanding subscriptions. Responses can queue up really fast, and if you are not processing messages the session may hit some queue limits, which may be why you are getting timeouts. Also, if you don't read messages, you may be marked a slow consumer and not receive more data until you start consuming the pending messages. The API is async; event queues are just a way to block on specific requests without having to process all messages from the main queue, in a context where blocking is okay and where it would otherwise be difficult to interrupt the logic flow to process parts asynchronously.
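To make the "process everything until RESPONSE" advice concrete, here is a minimal sketch built only from the calls already shown in this thread (eventQueue.NextEvent, GetMessages, Event.EventType); the timeout value and what you do with each message are placeholders for your own handling:

// Drain the dedicated EventQueue until the final RESPONSE arrives, consuming every message
// along the way so partial responses don't pile up. _eventWaitTimeout is the same timeout
// used in the question; the exception is just a stand-in for your own error handling.
Event evt;
do
{
    evt = eventQueue.NextEvent(_eventWaitTimeout);
    if (evt.Type == Event.EventType.TIMEOUT)
        throw new Exception("Timed out waiting for a response event");

    foreach (Message msg in evt.GetMessages())
    {
        // handle (or at least consume) every message for this correlation ID
    }
} while (evt.Type != Event.EventType.RESPONSE);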