C# HttpClient response time breakdown

I am trying to measure the server response time for specific requests using an HttpClient in a .NET 6 client.
Wrapping the GetAsync call with a Stopwatch normally shows between 120-140 milliseconds.
Checking the same URL in a browser (both Chrome and Firefox, network tab of the developer tools) normally returns after about 30-40 milliseconds.
Looking at the server logs normally shows 20-25 milliseconds (pure server response time only).
I figure the reason for the big gap is that my HttpClient measurement also counts the DNS lookup, TLS handshake and so on.
Using curl I can get a breakdown of the times:
time_namelookup: 0.003154s
time_connect: 0.049069s
time_appconnect: 0.121174s
time_pretransfer: 0.121269s
time_redirect: 0.000000s
time_starttransfer: 0.170894s
time_total: 0.171033s
Is there any way I can perform this kind of time breakdown using the C# HttpClient?
I made a little progress using a DelegatingHandler and putting a Stopwatch around SendAsync (I got about 5 milliseconds less than the total time; my guess is that it starts measuring after the DNS lookup).
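A stripped-down sketch of that handler looks roughly like this (simplified; the class name is just illustrative):
// Illustrative only: times everything that happens inside the inner handler's SendAsync.
internal sealed class TimingHandler : DelegatingHandler
{
public TimingHandler(HttpMessageHandler innerHandler) : base(innerHandler) { }
protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
var response = await base.SendAsync(request, cancellationToken);
stopwatch.Stop();
Console.WriteLine($"{request.RequestUri} took {stopwatch.ElapsedMilliseconds} ms");
return response;
}
}
// Usage: var client = new HttpClient(new TimingHandler(new SocketsHttpHandler()));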
I found very little documentation regarding this, any pointers?
Thanks in advance.

You can use HttpClient telemetry events as described here:
class Program
{
private static readonly HttpClient _client = new HttpClient();
static async Task Main()
{
// Instantiate the listener which subscribes to the events.
using var listener = new HttpEventListener();
// Send an HTTP request.
using var response = await _client.GetAsync("https://github.com/runtime");
}
}
internal sealed class HttpEventListener : EventListener
{
// Constant necessary for attaching ActivityId to the events.
public const EventKeywords TasksFlowActivityIds = (EventKeywords)0x80;
protected override void OnEventSourceCreated(EventSource eventSource)
{
// List of event source names provided by networking in .NET 5.
if (eventSource.Name == "System.Net.Http" ||
eventSource.Name == "System.Net.Sockets" ||
eventSource.Name == "System.Net.Security" ||
eventSource.Name == "System.Net.NameResolution")
{
EnableEvents(eventSource, EventLevel.LogAlways);
}
// Turn on ActivityId.
else if (eventSource.Name == "System.Threading.Tasks.TplEventSource")
{
// Attach ActivityId to the events.
EnableEvents(eventSource, EventLevel.LogAlways, TasksFlowActivityIds);
}
}
protected override void OnEventWritten(EventWrittenEventArgs eventData)
{
var sb = new StringBuilder().Append($"{eventData.TimeStamp:HH:mm:ss.fffffff} {eventData.ActivityId}.{eventData.RelatedActivityId} {eventData.EventSource.Name}.{eventData.EventName}(");
for (int i = 0; i < eventData.Payload?.Count; i++)
{
sb.Append(eventData.PayloadNames?[i]).Append(": ").Append(eventData.Payload[i]);
if (i < eventData.Payload?.Count - 1)
{
sb.Append(", ");
}
}
sb.Append(")");
Console.WriteLine(sb.ToString());
}
}
This produces the following log (I removed long event ids to make it shorter):
08:25:27.6440324 System.Net.Http.RequestStart(scheme: https, host: github.com, port: 443, pathAndQuery: /runtime, versionMajor: 1, versionMinor: 1, versionPolicy: 0)
08:25:27.6782964 System.Net.NameResolution.ResolutionStart(hostNameOrAddress: github.com)
08:25:27.8075834 System.Net.NameResolution.ResolutionStop()
08:25:27.8082958 System.Net.Sockets.ConnectStart(address: InterNetworkV6:28:{1,187,0,0,0,0,0,0,0,0,0,0,0,0,0,0,255,255,140,82,121,3,0,0,0,0})
08:25:27.8805709 System.Net.Sockets.ConnectStop()
08:25:27.8829670 System.Net.Security.HandshakeStart(isServer: False, targetHost: github.com)
08:25:28.1419994 System.Net.Security.HandshakeStop(protocol: 12288)
08:25:28.1431643 System.Net.Http.ConnectionEstablished(versionMajor: 1, versionMinor: 1)
08:25:28.1443727 System.Net.Http.RequestLeftQueue(timeOnQueueMilliseconds: 474,269, versionMajor: 1, versionMinor: 1)
08:25:28.1454417 System.Net.Http.RequestHeadersStart()
08:25:28.1461159 System.Net.Http.RequestHeadersStop()
08:25:28.6777661 System.Net.Http.ResponseHeadersStart()
08:25:28.6783369 System.Net.Http.ResponseHeadersStop()
08:25:28.6826666 System.Net.Http.ResponseContentStart()
08:25:28.8978144 System.Net.Http.ResponseContentStop()
08:25:28.8978472 System.Net.Http.RequestStop()
As you can see, these events contain all information you need: DNS lookup time, connection time, SSL handshake time, and actual request timings.
Update: regarding your concern that with this approach you might receive events unrelated to the specific web request you are interested in: you can do the correlation using an AsyncLocal variable, as the documentation mentions. The idea is simple: set the AsyncLocal value to something (such as a class holding your timing information) before making the request with HttpClient, then perform the request. When a new event comes in, check the value of the AsyncLocal variable; if it is not null, the event is related to the current request, otherwise you can ignore it.
Here is an extended version of the code above with this approach in mind:
class Program
{
private static readonly HttpClient _client = new HttpClient();
static async Task Main()
{
using (var listener = new HttpEventListener())
{
// we start new listener scope here
// only this specific request timings will be measured
// this implementation assumes usage of exactly one HttpEventListener per request
using var response = await _client.GetAsync("https://github.com/runtime");
var timings = listener.GetTimings();
if (timings.Request != null)
Console.WriteLine($"Total time: {timings.Request.Value.TotalMilliseconds:N0}ms");
if (timings.Dns != null)
Console.WriteLine($"DNS: {timings.Dns.Value.TotalMilliseconds:N0}ms");
if (timings.SocketConnect != null)
Console.WriteLine($"Socket connect: {timings.SocketConnect.Value.TotalMilliseconds:N0}ms");
if (timings.SslHandshake != null)
Console.WriteLine($"SSL Handshake: {timings.SslHandshake.Value.TotalMilliseconds:N0}ms");
if (timings.RequestHeaders != null)
Console.WriteLine($"Request headers: {timings.RequestHeaders.Value.TotalMilliseconds:N0}ms");
if (timings.ResponseHeaders != null)
Console.WriteLine($"Response headers: {timings.ResponseHeaders.Value.TotalMilliseconds:N0}ms");
if (timings.ResponseContent != null)
Console.WriteLine($"Response content: {timings.ResponseContent.Value.TotalMilliseconds:N0}ms");
}
}
}
internal sealed class HttpEventListener : EventListener
{
// Constant necessary for attaching ActivityId to the events.
public const EventKeywords TasksFlowActivityIds = (EventKeywords)0x80;
private AsyncLocal<HttpRequestTimingDataRaw> _timings = new AsyncLocal<HttpRequestTimingDataRaw>();
internal HttpEventListener()
{
// set variable here
_timings.Value = new HttpRequestTimingDataRaw();
}
protected override void OnEventSourceCreated(EventSource eventSource)
{
// List of event source names provided by networking in .NET 5.
if (eventSource.Name == "System.Net.Http" ||
eventSource.Name == "System.Net.Sockets" ||
eventSource.Name == "System.Net.Security" ||
eventSource.Name == "System.Net.NameResolution")
{
EnableEvents(eventSource, EventLevel.LogAlways);
}
// Turn on ActivityId.
else if (eventSource.Name == "System.Threading.Tasks.TplEventSource")
{
// Attach ActivityId to the events.
EnableEvents(eventSource, EventLevel.LogAlways, TasksFlowActivityIds);
}
}
protected override void OnEventWritten(EventWrittenEventArgs eventData)
{
var timings = _timings.Value;
if (timings == null)
return; // some event which is not related to this scope, ignore it
var fullName = eventData.EventSource.Name + "." + eventData.EventName;
switch (fullName)
{
case "System.Net.Http.RequestStart":
timings.RequestStart = eventData.TimeStamp;
break;
case "System.Net.Http.RequestStop":
timings.RequestStop = eventData.TimeStamp;
break;
case "System.Net.NameResolution.ResolutionStart":
timings.DnsStart = eventData.TimeStamp;
break;
case "System.Net.NameResolution.ResolutionStop":
timings.DnsStop = eventData.TimeStamp;
break;
case "System.Net.Sockets.ConnectStart":
timings.SocketConnectStart = eventData.TimeStamp;
break;
case "System.Net.Sockets.ConnectStop":
timings.SocketConnectStop = eventData.TimeStamp;
break;
case "System.Net.Security.HandshakeStart":
timings.SslHandshakeStart = eventData.TimeStamp;
break;
case "System.Net.Security.HandshakeStop":
timings.SslHandshakeStop = eventData.TimeStamp;
break;
case "System.Net.Http.RequestHeadersStart":
timings.RequestHeadersStart = eventData.TimeStamp;
break;
case "System.Net.Http.RequestHeadersStop":
timings.RequestHeadersStop = eventData.TimeStamp;
break;
case "System.Net.Http.ResponseHeadersStart":
timings.ResponseHeadersStart = eventData.TimeStamp;
break;
case "System.Net.Http.ResponseHeadersStop":
timings.ResponseHeadersStop = eventData.TimeStamp;
break;
case "System.Net.Http.ResponseContentStart":
timings.ResponseContentStart = eventData.TimeStamp;
break;
case "System.Net.Http.ResponseContentStop":
timings.ResponseContentStop = eventData.TimeStamp;
break;
}
}
public HttpRequestTimings GetTimings()
{
var raw = _timings.Value!;
return new HttpRequestTimings
{
Request = raw.RequestStop - raw.RequestStart,
Dns = raw.DnsStop - raw.DnsStart,
SslHandshake = raw.SslHandshakeStop - raw.SslHandshakeStart,
SocketConnect = raw.SocketConnectStop - raw.SocketConnectStart,
RequestHeaders = raw.RequestHeadersStop - raw.RequestHeadersStart,
ResponseHeaders = raw.ResponseHeadersStop - raw.ResponseHeadersStart,
ResponseContent = raw.ResponseContentStop - raw.ResponseContentStart
};
}
public class HttpRequestTimings
{
public TimeSpan? Request { get; set; }
public TimeSpan? Dns { get; set; }
public TimeSpan? SslHandshake { get; set; }
public TimeSpan? SocketConnect { get; set; }
public TimeSpan? RequestHeaders { get; set; }
public TimeSpan? ResponseHeaders { get; set; }
public TimeSpan? ResponseContent { get; set; }
}
private class HttpRequestTimingDataRaw
{
public DateTime? DnsStart { get; set; }
public DateTime? DnsStop { get; set; }
public DateTime? RequestStart { get; set; }
public DateTime? RequestStop { get; set; }
public DateTime? SocketConnectStart { get; set; }
public DateTime? SocketConnectStop { get; set; }
public DateTime? SslHandshakeStart { get; set; }
public DateTime? SslHandshakeStop { get; set; }
public DateTime? RequestHeadersStart { get; set; }
public DateTime? RequestHeadersStop { get; set; }
public DateTime? ResponseHeadersStart { get; set; }
public DateTime? ResponseHeadersStop { get; set; }
public DateTime? ResponseContentStart { get; set; }
public DateTime? ResponseContentStop { get; set; }
}
}
In this version I also collect only the timings, without the unrelated info from the events. Note that there might be more events for some HTTP requests (for example, ones with a body).

Related

Rx - Reactive extensions - conditional switch from first Observable to second

I have 2 data sources: online and offline (cached). Both of them return an IObservable of an object which contains 2 flags, IsSuccess and IsCached. I would like to get data from the online source, but only when IsSuccess = true. If this fails, I would like to get data from the offline source. Additionally, I want to save new data in the cache for future use. I am not sure how to do this best in Rx.
Here is my implementation, but I think it can be done much better:
public IObservable<Result<SampleModel>> GetSampleModel()
{
IObservable<Result<SampleModel>> onlineObservable = _onlineSource.GetData<SampleModel>();
IObservable<Result<SampleModel>> offlineObservable = _offlineSource.GetData<SampleModel>();
var subject = new Subject<Result<SampleModel>>();
onlineObservable.Do(async (result) =>
{
if (result.IsSuccess)
{
await _offlineSource.CacheData(result.Data).ConfigureAwait(false);
}
}).Subscribe((result) =>
{
if (result.IsSuccess)
{
subject.OnNext(result);
}
subject.OnCompleted();
});
return subject.Concat(offlineObservable).Take(1);
}
Result class - wrapper for data:
public class Result<T>
{
public Result(Exception exception)
{
Exception = exception;
}
public Result(T data, bool isCached = false)
{
IsCached = isCached;
IsSuccess = true;
Data = data;
}
public bool IsSuccess { get; private set; }
public bool IsCached { get; private set; }
public T Data { get; private set; }
public Exception Exception { get; private set; }
}
Your implementation will not work reliably, because there is a race condition in there. Consider this:
var temp = GetSampleModel(); // #1
// Do some long operation here
temp.Subscribe(p => Console.WriteLine(p)); // #2
In this case, fetching data will start in #1, and if the data is received and pushed to subject before #2 executes, nothing will be printed no matter how long you wait.
Usually, you should avoid subscribing inside a function returning IObservable to avoid such issues. Using Do is also a bad smell. You could fix the code using ReplaySubject or AsyncSubject, but in such cases I generally prefer Observable.Create. Here is my rewrite:
public IObservable<SampleModel> GetSampleModel(IScheduler scheduler = null)
{
scheduler = scheduler ?? TaskPoolScheduler.Default;
return Observable.Create<SampleModel>(observer =>
{
return scheduler.ScheduleAsync(async (s, ct) =>
{
var onlineResult = await _onlineSource.GetData<SampleModel>().FirstAsync();
if (onlineResult.IsSuccess)
{
observer.OnNext(onlineResult.Data);
await _offlineSource.CacheData(onlineResult.Data);
observer.OnCompleted();
}
else
{
var offlineResult = await _offlineSource.GetData<SampleModel>().FirstAsync();
if (offlineResult.IsSuccess)
{
observer.OnNext(offlineResult.Data);
observer.OnCompleted();
}
else
{
observer.OnError(new Exception("Could not receive model"));
}
}
return Disposable.Empty;
});
});
}
You can see that it still isn't terribly pretty. I think that is because you chose not to use the natural Rx way of handling errors, but instead to wrap your values in a Result type. If you alter your repository methods to handle errors the Rx way, the resulting code is much more concise. (Note that I changed your Result type to MaybeCached, and I assume that both sources now return IObservable<SampleModel>, a cold observable that either returns a single result or an error):
public class MaybeCached<T>
{
public MaybeCached(T data, bool isCached)
{
Data = data;
IsCached = isCached;
}
public bool IsCached { get; private set; }
public T Data { get; private set; }
}
public IObservable<SampleModel> GetSampleModel()
{
return _onlineSource
.GetData<SampleModel>()
.Select(d => new MaybeCached<SampleModel>(d, false))
.Catch(_offlineSource
.GetData<SampleModel>()
.Select(d => new MaybeCached<SampleModel>(d, true)))
.SelectMany(data => data.IsCached ? Observable.Return(data.Data) : _offlineSource.CacheData(data.Data).Select(_ => data.Data));
}
Catch is used here in order to obtain the conditional switch you asked for.

Writing to DB using EF6 in async process randomly yielding "property 'Id' is part of object's key ..."

I have a base class called ServicePluginBase that implements logging.
public class PluginLog
{
public int Id { get; set; }
public int? ServiceId { get; set; }
public string Event { get; set; }
public string Details { get; set; }
public DateTime DateTime { get; set; }
public string User { get; set; }
}
public class SQLPluginLogger : IPluginLogger
{
//EFLogginContext maps PluginLog like so:
// modelBuilder.Entity<PluginLog>().ToTable("log").HasKey(l => l.Id)
private EFLoggingContext _logger = new EFLoggingContext();
public IQueryable<PluginLog> LogItems
{
get { return _logger.LogItems; }
}
public void LogEvent(PluginLog item)
{
_logger.LogItems.Add(item);
_logger.SaveChanges();
}
}
public abstract class ServicePluginBase : IPlugin
{
private IPluginLogger _logger;
public ServicePluginBase(IPluginLogger logger)
{
_logger = logger;
}
protected void LogEvent(string eventName, string details)
{
PluginLog _event = new PluginLog()
{
ServiceId = this.Id,
Event = eventName,
Details = details,
User = Thread.CurrentPrincipal.Identity.Name,
DateTime = DateTime.Now
};
_logger.LogEvent(_event);
}
}
Now, within my concrete class, I log events as they happen. In one class, I have some asynchronous methods running -- and logging. Sometimes this works great. Other times, I get the error stating that "Property 'Id' is part of the object's key and cannot be updated." Interestingly enough, I have absolutely no code that updates the value of Id and I never do Updates of log entries -- I only Add new ones.
Here is the async code from one of my plugins.
public class CPTManager : ServicePluginBase
{
public override async Task HandlePluginProcessAsync()
{
...
await ProcessUsersAsync(requiredUsersList, currentUsersList);
}
private async Task ProcessUsersAsync(List<ExtendedUser> required, List<ExtendedUser> current)
{
using (var http = new HttpClient())
{
var removals = current.Where(cu => !required.Select(r => r.id).Contains(cu.id)).ToList();
await DisableUsersAsync(removals, http);
await AddRequiredUsersAsync(required.Where(ru => ru.MustAdd).ToList(), http);
}
}
private async Task DisableUsersAsync(List<ExtendedUser> users, HttpClient client)
{
LogEvent("Disable Users","Total to disable: " + users.Count());
await Task.WhenAll(users.Select(async user =>
{
... //Http call to disable user via Web API
string status = "Disable User";
if(response.status == "ERROR")
{
EmailFailDisableNotification(user);
status += " - Fail";
}
LogEvent(status, ...);
if(response.status != "ERROR")
{
//Update FoxPro database via OleDbConnection (not EF)
LogEvent("ClearUDF", ...);
}
}));
}
private async Task AddRequiredUsersAsync(List<ExtendedUser> users, HttpClient client)
{
LogEvent("Add Required Users", "Total users to add: " + users.Count());
await Task.WhenAll(users.Select(async user =>
{
... //Http call to add user via web API
LogEvent("Add User", ...);
if(response.status != "ERROR")
{
//Update FoxPro DB via OleDBConnection (not EF)
LogEvent("UDF UPdate",...);
}
}));
}
}
First, I'm confused why I'm getting the error mentioned above -- "Id can't be updated" ... I'm not populating the Id field nor am I doing updates to the Log file. There are no related tables -- just the single log table.
My guess is that I'm doing something improperly in regards to asynchronous processing, but I'm having trouble seeing it if I am.
Any ideas as to what may be going on?

How can I improve and/or modularize my handling of event based tasks?

So I have a server and I'm making calls to it through a wrapped WebSocket (WebSocket4Net), and one of the requirements of the library I'm building is the ability to await the return of a request. So I have a class MessageEventHandler that contains events that are triggered by the class MessageHandler as messages come in.
MessageEventHandler example:
public class MessageEventHandler : IMessageEventHandler
{
public delegate void NodeNameReceived(string name);
public event Interfaces.NodeNameReceived OnNodeNameReceived;
public void NodeNameReceive(string name)
{
if (this.OnNodeNameReceived != null)
{
this.OnNodeNameReceived(name);
}
}
}
MessageHandler example:
public class MessageHandler : IMessageHandler
{
private IMessageEventHandler eventHandler;
public MessageHandler(IMessageEventHandler eventHandler)
{
this.eventHandler = eventHandler;
}
public void ProcessDataCollectorMessage(string message)
{
var serviceMessage = JsonConvert.DeserializeObject<ServiceMessage>(message);
switch (serviceMessage.MessageType)
{
case MessageType.GetNodeName:
{
var nodeName = serviceMessage.Data as string;
if (nodeName != null)
{
this.eventHandler.NodeNameReceive(nodeName);
}
break;
}
default:
{
throw new NotImplementedException();
}
}
}
}
Now building upon those classes I have the class containing my asynchronous function that handles the call to get the node name.
public class ClientServiceInterface : IClientServiceInterface
{
public delegate void RequestReady(ServiceMessage serviceMessage);
public event Interfaces.RequestReady OnRequestReady;
public int ResponseTimeout { get; private set; }
private IMessageEventHandler messageEventHandler;
public ClientServiceInterface(IMessageEventHandler messageEventHandler, int responseTimeout = 5000)
{
this.messageEventHandler = messageEventHandler;
this.ResponseTimeout = responseTimeout;
}
public Task<string> GetNodeNameAsync()
{
var taskCompletionSource = new TaskCompletionSource<string>();
var setHandler = default(NodeNameReceived);
setHandler = name =>
{
taskCompletionSource.SetResult(name);
this.messageEventHandler.OnNodeNameReceived -= setHandler;
};
this.messageEventHandler.OnNodeNameReceived += setHandler;
var ct = new CancellationTokenSource(this.ResponseTimeout);
var registration = new CancellationTokenRegistration();
registration = ct.Token.Register(
() =>
{
taskCompletionSource.TrySetCanceled();
this.messageEventHandler.OnNodeNameReceived -= setHandler;
registration.Dispose();
},
false);
var serviceMessage = new ServiceMessage() { Type = MessageType.GetNodeName };
this.ReadyMessage(serviceMessage);
return taskCompletionSource.Task;
}
}
As you can see, I wouldn't call it pretty, and I apologize if anyone threw up a little reading it. But this is my first attempt at wrapping a Task around an asynchronous event, so with that on the table I could use some help.
Is there a better way to accomplish what I'm trying to achieve here? Remember that I want a user of the library to either subscribe to the event and listen for all callbacks, or simply await the return, depending on their needs.
var nodeName = await GetNodeNameAsync();
Console.WriteLine(nodeName);
or
messageEventHandler.OnNodeNameReceived += (name) => Console.WriteLine(name);
GetNodeNameAsync();
Alternatively, if my approach is actually 'good', can anyone provide any advice on how I can write a helper function to abstract out setting up each function in this way? Any help would be greatly appreciated.
So I've written a couple of classes to solve the problem I was having. The first is my CallbackHandle class, which holds the task inside a TaskCompletionSource, so each time a request is made in my example a new callback handle is created.
public class CallbackHandle<T>
{
public CallbackHandle(int timeout)
{
this.TaskCompletionSource = new TaskCompletionSource<T>();
var cts = new CancellationTokenSource(timeout);
cts.Token.Register(
() =>
{
if (this.Cancelled != null)
{
this.Cancelled();
}
});
this.CancellationToken = cts;
}
public event Action Cancelled;
public CancellationTokenSource CancellationToken { get; private set; }
public TaskCompletionSource<T> TaskCompletionSource { get; private set; }
}
Then I have a 'handler' that manages the handles and their creation.
public class CallbackHandler<T>
{
private readonly IList<CallbackHandle<T>> callbackHandles;
private readonly object locker = new object();
public CallbackHandler()
{
this.callbackHandles = new List<CallbackHandle<T>>();
}
public CallbackHandle<T> AddCallback(int timeout)
{
var callback = new CallbackHandle<T>(timeout);
callback.Cancelled += () =>
{
this.callbackHandles.Remove(callback);
callback.TaskCompletionSource.TrySetCanceled(); // time out the pending task (a string result can't be set on a generic TaskCompletionSource<T>)
};
lock (this.locker)
{
this.callbackHandles.Add(callback);
}
return callback;
}
public void EventTriggered(T eventArgs)
{
lock (this.locker)
{
if (this.callbackHandles.Count > 0)
{
CallbackHandle<T> callback =
this.callbackHandles.First();
if (callback != null)
{
this.callbackHandles.Remove(callback);
callback.TaskCompletionSource.SetResult(eventArgs);
}
}
}
}
}
This is a simplified version of my actual implementation, but it should get someone started if they need something similar. To use this in my ClientServiceInterface class from my example, I would start by creating a class-level handler and using it like this:
public class ClientServiceInterface : IClientServiceInterface
{
private readonly CallbackHandler<string> getNodeNameHandler;
public ClientServiceInterface(IMessageEventHandler messageEventHandler, int responseTimeout = 5000)
{
this.messageEventHandler = messageEventHandler;
this.ResponseTimeout = responseTimeout;
this.getNodeNameHandler = new
CallbackHandler<string>();
this.messageEventHandler.OnNodeNameReceived += args => this.getNodeNameHandler.EventTriggered(args);
}
public Task<string> GetNodeNameAsync()
{
CallbackHandle<string> callbackHandle = this.getNodeNameHandler.AddCallback(this.ResponseTimeout);
var serviceMessage = new ServiceMessage
{
Type = MessageType.GetNodeName.ToString()
};
this.ReadyMessage(serviceMessage);
return callbackHandle.TaskCompletionSource.Task;
}
// Rest of class declaration removed for brevity
}
Which is much better looking than what I had before (at least in my opinion), and it's easy to extend.
For starters, follow a thread-safe event-raising pattern:
public void NodeNameReceive(string name)
{
var evt = this.OnNodeNameReceived;
if (evt != null)
{
evt (name);
}
}
If you do not take a local reference to the event delegate, it can be set to null between the time you check for null and the time you invoke it.
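In C# 6 and later, the same thread-safe pattern can be written more compactly with the null-conditional operator, which reads the delegate once and invokes it only if it is non-null:
public void NodeNameReceive(string name)
{
// Equivalent thread-safe raise using the null-conditional operator (C# 6+).
this.OnNodeNameReceived?.Invoke(name);
}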

StackExchange.Redis ConnectionMultiplexer - handling disconnects

What is the correct way to handle socket failure in a ConnectionMultiplexer? I know it will try again silently in the background, but is there any accepted canonical way to handle the time between such disconnects? Since I wrap this up in our own client anyway, I was thinking something like the following:
private async Task<IDatabase> GetDb(int dbToGet)
{
int numberOfRetries = 0;
while (!multiplexer.IsConnected && numberOfRetries < MAX_RETRIES)
{
await Task.Delay(20);
numberOfRetries++;
}
if (!multiplexer.IsConnected)
{
// Panic, die, etc.
}
// Continue as though connected here
}
It seems a bit clumsy, though, so I'm wondering if there's a better way to handle this.
(This is all in version 1.0.414 of StackExchange.Redis, the latest version from NuGet)
I just wrapped the multiplexer. By default it reconnects automatically; the real problem is that subscribe/psubscribe registrations are tied to the current socket connection, so I used the ConnectionRestored event to re-register the subscription patterns on the relevant channels/actions.
Example class:
public class RedisInstanceManager
{
public RedisInstanceCredentials m_redisInstanceCredentials { get; set; }
public DateTime? m_lastUpdatedDate { get; set; }
public ConnectionMultiplexer redisClientsFactory { get; set; }
public Timer _ConnectedTimer;
public Action _reconnectAction;
public RedisInstanceManager(ConnectionMultiplexer redisClients, Action _reconnectActionIncoming)
{
string host,port;
string[] splitArray = redisClients.Configuration.Split(':');
host = splitArray[0];
port = splitArray[1];
this.redisClientsFactory = redisClients;
this.m_redisInstanceCredentials = new RedisInstanceCredentials(host, port);
this.m_lastUpdatedDate = null;
_ConnectedTimer = new Timer(connectedTimer, null, 1500, 1500);
_reconnectAction = _reconnectActionIncoming;
this.redisClientsFactory.ConnectionRestored += redisClientsFactory_ConnectionRestored;
}
private void connectedTimer(object o)
{
_ConnectedTimer.Change(Timeout.Infinite, Timeout.Infinite);
if (!this.redisClientsFactory.IsConnected)
{
Console.WriteLine("redis dissconnected");
}
_ConnectedTimer.Change(1500,1500);
}
void redisClientsFactory_ConnectionRestored(object sender, ConnectionFailedEventArgs e)
{
Console.WriteLine("Redis Connected again");
if (_reconnectAction != null)
_reconnectAction();
}
public ConnectionMultiplexer GetClient()
{
return this.redisClientsFactory;
}
}
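A hypothetical usage sketch (the connection string, channel name and handler are placeholders): pass a reconnect action that re-registers your subscriptions, and the wrapper invokes it whenever ConnectionRestored fires:
// Hypothetical usage of the wrapper above; the channel and handler are placeholders.
var multiplexer = ConnectionMultiplexer.Connect("localhost:6379");
Action subscribe = () =>
{
var sub = multiplexer.GetSubscriber();
sub.Subscribe("my-channel", (channel, message) => Console.WriteLine(message));
};
subscribe(); // initial registration
var manager = new RedisInstanceManager(multiplexer, subscribe); // re-registers on ConnectionRestored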

Caching WebAPI 2

EDIT: For each request, a new instance of the controller is created. However, this is not true for attribute classes: once created, an attribute instance is reused for multiple requests. I hope this helps.
I wrote my own Web API caching action filter (using the latest version of Web API and the .NET Framework). I am aware of CacheCow and this. However, I wanted my own anyway.
However, there is some issue with my code, because I don't get the expected output when I use it in my project on the live server. On my local machine everything works fine.
I used the code below in my blog RSS generator, and I cache the data for each category. There are around 5 categories (food, tech, personal, etc.).
Issue: When I navigate to, say, api/GetTech, it returns the RSS feed items from the personal blog category. When I navigate to, say, api/GetPersonal, it returns the food category.
I am not able to find the root cause, but I think this is due to the use of a static method/variable. I have double-checked that my _cachekey has a unique value for each category of my blog.
Can someone point out any issues with this code, especially when we have, say, 300 requests per minute?
public class WebApiOutputCacheAttribute : ActionFilterAttribute
{
// Cache timespan
private readonly int _timespan;
// cache key
private string _cachekey;
// cache repository
private static readonly MemoryCache _webApiCache = MemoryCache.Default;
/// <summary>
/// Initializes a new instance of the <see cref="WebApiOutputCacheAttribute"/> class.
/// </summary>
/// <param name="timespan">The timespan in seconds.</param>
public WebApiOutputCacheAttribute(int timespan)
{
_timespan = timespan;
}
public override void OnActionExecuting(HttpActionContext ac)
{
if (ac != null)
{
_cachekey = ac.Request.RequestUri.PathAndQuery.ToUpperInvariant();
if (!_webApiCache.Contains(_cachekey)) return;
var val = (string)_webApiCache.Get(_cachekey);
if (val == null) return;
ac.Response = ac.Request.CreateResponse();
ac.Response.Content = new StringContent(val);
var contenttype = (MediaTypeHeaderValue)_webApiCache.Get("response-ct") ?? new MediaTypeHeaderValue("application/rss+xml");
ac.Response.Content.Headers.ContentType = contenttype;
}
else
{
throw new ArgumentNullException("ac");
}
}
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
if (_webApiCache.Contains(_cachekey)) return;
var body = actionExecutedContext.Response.Content.ReadAsStringAsync().Result;
if (actionExecutedContext.Response.StatusCode == HttpStatusCode.OK)
{
lock (_webApiCache)
{
_webApiCache.Add(_cachekey, body, DateTime.Now.AddSeconds(_timespan));
_webApiCache.Add("response-ct", actionExecutedContext.Response.Content.Headers.ContentType, DateTimeOffset.UtcNow.AddSeconds(_timespan));
}
}
}
}
The same WebApiOutputCacheAttribute instance can be used to cache multiple simultaneous requests, so you should not store cache keys on the instance of the attribute. Instead, regenerate the cache key during each request / method override. The following attribute works to cache HTTP GET requests.
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;
using Newtonsoft.Json;
// based on strathweb implementation
// http://www.strathweb.com/2012/05/output-caching-in-asp-net-web-api/
public class CacheHttpGetAttribute : ActionFilterAttribute
{
public int Duration { get; set; }
public ILogExceptions ExceptionLogger { get; set; }
public IProvideCache CacheProvider { get; set; }
private bool IsCacheable(HttpRequestMessage request)
{
if (Duration < 1)
throw new InvalidOperationException("Duration must be greater than zero.");
// only cache for GET requests
return request.Method == HttpMethod.Get;
}
private CacheControlHeaderValue SetClientCache()
{
var cachecontrol = new CacheControlHeaderValue
{
MaxAge = TimeSpan.FromSeconds(Duration),
MustRevalidate = true,
};
return cachecontrol;
}
private static string GetServerCacheKey(HttpRequestMessage request)
{
var acceptHeaders = request.Headers.Accept;
var acceptHeader = acceptHeaders.Any() ? acceptHeaders.First().ToString() : "*/*";
return string.Join(":", new[]
{
request.RequestUri.AbsoluteUri,
acceptHeader,
});
}
private static string GetClientCacheKey(string serverCacheKey)
{
return string.Join(":", new[]
{
serverCacheKey,
"response-content-type",
});
}
public override void OnActionExecuting(HttpActionContext actionContext)
{
if (actionContext == null) throw new ArgumentNullException("actionContext");
var request = actionContext.Request;
if (!IsCacheable(request)) return;
try
{
// do NOT store cache keys on this attribute because the same instance
// can be reused for multiple requests
var serverCacheKey = GetServerCacheKey(request);
var clientCacheKey = GetClientCacheKey(serverCacheKey);
if (CacheProvider.Contains(serverCacheKey))
{
var serverValue = CacheProvider.Get(serverCacheKey);
var clientValue = CacheProvider.Get(clientCacheKey);
if (serverValue == null) return;
var contentType = clientValue != null
? JsonConvert.DeserializeObject<MediaTypeHeaderValue>(clientValue.ToString())
: new MediaTypeHeaderValue(serverCacheKey.Substring(serverCacheKey.LastIndexOf(':') + 1));
actionContext.Response = actionContext.Request.CreateResponse();
// do not try to create a string content if the value is binary
actionContext.Response.Content = serverValue is byte[]
? new ByteArrayContent((byte[])serverValue)
: new StringContent(serverValue.ToString());
actionContext.Response.Content.Headers.ContentType = contentType;
actionContext.Response.Headers.CacheControl = SetClientCache();
}
}
catch (Exception ex)
{
ExceptionLogger.Log(ex);
}
}
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
try
{
var request = actionExecutedContext.Request;
// do NOT store cache keys on this attribute because the same instance
// can be reused for multiple requests
var serverCacheKey = GetServerCacheKey(request);
var clientCacheKey = GetClientCacheKey(serverCacheKey);
if (!CacheProvider.Contains(serverCacheKey))
{
var contentType = actionExecutedContext.Response.Content.Headers.ContentType;
object serverValue;
if (contentType.MediaType.StartsWith("image/"))
serverValue = actionExecutedContext.Response.Content.ReadAsByteArrayAsync().Result;
else
serverValue = actionExecutedContext.Response.Content.ReadAsStringAsync().Result;
var clientValue = JsonConvert.SerializeObject(
new
{
contentType.MediaType,
contentType.CharSet,
});
CacheProvider.Add(serverCacheKey, serverValue, new TimeSpan(0, 0, Duration));
CacheProvider.Add(clientCacheKey, clientValue, new TimeSpan(0, 0, Duration));
}
if (IsCacheable(actionExecutedContext.Request))
actionExecutedContext.ActionContext.Response.Headers.CacheControl = SetClientCache();
}
catch (Exception ex)
{
ExceptionLogger.Log(ex);
}
}
}
Just replace the CacheProvider with your MemoryCache.Default. In fact, the code above uses the same by default during development, and uses azure cache when deployed to a live server.
Even though your code resets the _cachekey instance field during each request, these attributes are not like controllers, where a new one is created for each request. Instead, the same attribute instance can be reused to service multiple simultaneous requests. So don't use an instance field to store the key; regenerate it from the request each and every time you need it.
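The IProvideCache interface itself isn't shown above; its shape here is inferred from how it is called (Contains, Get, Add with a duration), so treat this as a sketch of a MemoryCache.Default-backed provider rather than the actual implementation:
using System;
using System.Runtime.Caching;
// Sketch only: the interface shape is inferred from the calls made in the attribute above.
public interface IProvideCache
{
bool Contains(string key);
object Get(string key);
void Add(string key, object value, TimeSpan duration);
}
public class MemoryCacheProvider : IProvideCache
{
private static readonly MemoryCache _cache = MemoryCache.Default;
public bool Contains(string key) { return _cache.Contains(key); }
public object Get(string key) { return _cache.Get(key); }
public void Add(string key, object value, TimeSpan duration)
{
// absolute expiration relative to now
_cache.Add(key, value, DateTimeOffset.UtcNow.Add(duration));
}
}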
