Issue in asynchronous API calls from SQLCLR - c#

In a nutshell, I need to notify a Web API service from SQL Server asynchronously as and when there are changes in a particular table.
To achieve this, I have created a SQLCLR stored procedure that makes the asynchronous API call to notify the service. The SQLCLR stored procedure is called from a trigger whenever there is an insert into a table called Table1. The main challenge is that the API has to read data from the same table (Table1).
If I use HttpWebRequest.GetResponse(), the synchronous version, the entire operation gets blocked because of the locks held by the insert trigger's transaction. To avoid this, I used HttpWebRequest.GetResponseAsync(), which fires the API request without waiting for the response; program control moves on, so the trigger's transaction doesn't keep holding locks on Table1 and the API is able to read data from Table1.
Now I have to implement an error notification mechanism for failures (like being unable to connect to the remote server), where I need to send an email to the admin team. I have written the mail composition logic inside the catch block. But if I switch to HttpWebRequest.GetResponseAsync().Result, the entire operation becomes synchronous again and blocks everything.
If I use the BeginGetResponse() and EndGetResponse() implementation suggested in the Microsoft documentation and run the SQLCLR stored procedure, SQL Server hangs without any information. Why? What am I doing wrong here? Why does the RespCallback() method never get executed?
Sharing the SQLCLR code snippets below.
using System;
using System.Data.SqlTypes;
using System.Net;
using System.Text;
using System.Threading;
using Microsoft.SqlServer.Server;

public class RequestState
{
    // This class stores the state of the request.
    public HttpWebRequest request;
    public HttpWebResponse response;

    public RequestState()
    {
        request = null;
    }
}
public partial class StoredProcedures
{
    private static SqlString _mailServer = null;
    private static SqlString _port = null;
    private static SqlString _fromAddress = null;
    private static SqlString _toAddress = null;
    private static SqlString _mailAcctUserName = null;
    private static SqlString _decryptedPassword = null;
    private static SqlString _subject = null;
    private static string _mailContent = null;
    private static int _portNo = 0;
    public static ManualResetEvent allDone = new ManualResetEvent(false);
    const int DefaultTimeout = 20000; // 20-second timeout

    #region TimeOutCallBack
    /// <summary>
    /// Abort the request if the timer fires.
    /// </summary>
    /// <param name="state">request state</param>
    /// <param name="timedOut">timeout status</param>
    private static void TimeoutCallback(object state, bool timedOut)
    {
        if (timedOut)
        {
            HttpWebRequest request = state as HttpWebRequest;
            if (request != null)
            {
                request.Abort();
                SendNotifyErrorEmail(null, "The request timed out; please check the API");
            }
        }
    }
    #endregion
    #region APINotification
    [SqlProcedure]
    public static void Notify(SqlString weburl, SqlString username, SqlString password, SqlString connectionLimit,
        SqlString mailServer, SqlString port, SqlString fromAddress, SqlString toAddress,
        SqlString mailAcctUserName, SqlString mailAcctPassword, SqlString subject)
    {
        _mailServer = mailServer;
        _port = port;
        _fromAddress = fromAddress;
        _toAddress = toAddress;
        _mailAcctUserName = mailAcctUserName;
        _decryptedPassword = mailAcctPassword;
        _subject = subject;

        if (!(weburl.IsNull && username.IsNull && password.IsNull && connectionLimit.IsNull))
        {
            var url = Convert.ToString(weburl);
            var uname = Convert.ToString(username);
            var pass = Convert.ToString(password);
            var connLimit = Convert.ToString(connectionLimit);
            int conLimit = Convert.ToInt32(connLimit);
            try
            {
                if (!(string.IsNullOrEmpty(url) && string.IsNullOrEmpty(uname) && string.IsNullOrEmpty(pass) && conLimit > 0))
                {
                    SqlContext.Pipe.Send("Entered inside the notify method");
                    HttpWebRequest httpWebRequest = WebRequest.Create(url) as HttpWebRequest;
                    string encoded = Convert.ToBase64String(Encoding.GetEncoding("ISO-8859-1").GetBytes(uname + ":" + pass));
                    httpWebRequest.Headers.Add("Authorization", "Basic " + encoded);
                    httpWebRequest.Method = "POST";
                    httpWebRequest.ContentLength = 0;
                    httpWebRequest.ServicePoint.ConnectionLimit = conLimit;

                    // Create an instance of RequestState and assign the HttpWebRequest
                    // object to its request field.
                    RequestState requestState = new RequestState();
                    requestState.request = httpWebRequest;
                    SqlContext.Pipe.Send("before sending the notification");

                    // Start the asynchronous request.
                    IAsyncResult result =
                        (IAsyncResult)httpWebRequest.BeginGetResponse(new AsyncCallback(RespCallback), requestState);
                    SqlContext.Pipe.Send("after BeginGetResponse");

                    // This implements the timeout: if the timer fires first, the callback
                    // runs and the request is aborted.
                    ThreadPool.RegisterWaitForSingleObject(result.AsyncWaitHandle, new WaitOrTimerCallback(TimeoutCallback), requestState, DefaultTimeout, true);

                    // The response came back in the allowed time. The processing happens
                    // in the callback function.
                    allDone.WaitOne();

                    // Release the HttpWebResponse resource.
                    requestState.response.Close();
                    SqlContext.Pipe.Send("after requestState.response.Close()");
                }
            }
            catch (Exception exception)
            {
                SqlContext.Pipe.Send("Main Exception");
                SqlContext.Pipe.Send(exception.Message.ToString());
                //TODO: log the details in an error table
                SendNotifyErrorEmail(exception, null);
            }
        }
    }
    #endregion
    #region ResponseCallBack
    /// <summary>
    /// Asynchronous HTTP response callback.
    /// </summary>
    /// <param name="asynchronousResult"></param>
    private static void RespCallback(IAsyncResult asynchronousResult)
    {
        try
        {
            SqlContext.Pipe.Send("Entering the respcallback");
            // State of request is asynchronous.
            RequestState httpRequestState = (RequestState)asynchronousResult.AsyncState;
            HttpWebRequest currentHttpWebRequest = httpRequestState.request;
            httpRequestState.response = (HttpWebResponse)currentHttpWebRequest.EndGetResponse(asynchronousResult);
            SqlContext.Pipe.Send("exiting the respcallback");
        }
        catch (Exception ex)
        {
            SqlContext.Pipe.Send("exception in the respcallback");
            SendNotifyErrorEmail(ex, null);
        }
        allDone.Set();
    }
    #endregion
}
One alternative approach is SQL Server Service Broker, whose queuing mechanism makes it possible to implement asynchronous triggers. But is there a solution for the situation above? Am I doing anything wrong from an approach perspective? Please guide me.

There are a couple of things that stand out as possible issues:
Wouldn't the call to allDone.WaitOne(); block until a response is received anyway, negating the need / use of all this async stuff?
Even if this does work, are you testing this in a single session? You have several static member (class-level) variables, such as public static ManualResetEvent allDone, that are shared across all sessions. SQLCLR uses a shared App Domain (App Domains are per database / per assembly owner), hence multiple sessions will overwrite each other's values of these shared static variables. That is very dangerous (which is why non-readonly static variables are only allowed in UNSAFE assemblies). This model only works if you can guarantee a single caller at any given moment.
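To make the hazard concrete, here is a minimal stand-alone sketch of my own (not SQLCLR code), with two threads standing in for two concurrent sessions sharing one static field, like the _toAddress member in the question:
using System;
using System.Threading;

class SharedStaticDemo
{
    // Shared across all "sessions", just like the SQLCLR class-level statics.
    private static string _toAddress;

    static void Notify(string toAddress)
    {
        _toAddress = toAddress;
        Thread.Sleep(50); // stand-in for the web call happening in between
        // May print the OTHER session's address, because the second writer
        // overwrote the field before the first reader got here.
        Console.WriteLine("Mailing: " + _toAddress);
    }

    static void Main()
    {
        var a = new Thread(() => Notify("team-a@example.com"));
        var b = new Thread(() => Notify("team-b@example.com"));
        a.Start(); b.Start();
        a.Join(); b.Join(); // frequently prints "team-b@example.com" twice
    }
}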
Beyond any SQLCLR technicalities, I am not sure that this is a good model even if you do manage to get past this particular issue.
A better, safer model would be to:
create a queue table to log these changes. you typically only need the key columns and a timestamp (DATETIME or DATETIME2, not the TIMESTAMP datatype).
have the trigger log the current time and rows modified into the queue table
create a stored procedure that takes items from the queue, starting with the oldest records, and processes them. Processing can definitely call your SQLCLR stored procedure to make the web server call, but there is no need for it to be async, so remove that stuff and set the assembly back to EXTERNAL_ACCESS since you don't need/want UNSAFE (see the sketch after this list).
Do this in a transaction so that the records are not fully removed from the queue table if the "processing" fails. Sometimes using the OUTPUT clause with DELETE, to reserve the rows you are working on into a local temp table, helps.
Process multiple records at a time, even if calling the SQLCLR stored procedure needs to be done on a per-row basis.
create a SQL Server agent job to execute the stored procedure every minute (or less depending on need)
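As a rough illustration of that third step, a stripped-down synchronous SQLCLR call might look like this (the procedure name, parameter list, and timeout here are illustrative, not a drop-in replacement):
using System;
using System.Data.SqlTypes;
using System.Net;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void NotifySync(SqlString weburl, SqlString username, SqlString password)
    {
        // Synchronous is fine here: the agent job, not the insert trigger, is the
        // caller, so blocking on the response no longer holds locks on Table1.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(weburl.Value);
        request.Method = "POST";
        request.ContentLength = 0;
        request.Timeout = 20000; // fail fast instead of hanging the job step
        request.Credentials = new NetworkCredential(username.Value, password.Value);

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            SqlContext.Pipe.Send("HTTP status: " + (int)response.StatusCode);
        }
        // Let exceptions bubble up: the job step fails, the transaction rolls back,
        // and the queued rows remain for the next run.
    }
}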
Minor issues:
the pattern of copying input parameters into static variables (e.g. _mailServer = mailServer;) is pointless at best, and error-prone regardless due to not being thread-safe. Remember, static variables are shared across all sessions, so any concurrent sessions will overwrite the previous values, ensuring race conditions. Please remove all of the variables whose names start with an underscore.
the pattern of using Convert.To... is also unnecessary and a slight hit to performance. All Sql* types have a Value property that returns the expected .NET type. Hence, you only need: string url = weburl.Value;
there is no need to use incorrect datatypes that require conversion later on. Meaning, rather than using SqlString for connectionLimit, instead use SqlInt32 and then you can simply do int connLimit = connectionLimit.Value;
You probably don't need to do the security manually (i.e. httpWebRequest.Headers.Add("Authorization", "Basic " + encoded);). I am pretty sure you can just create a new NetworkCredential using uname and pass and assign that to the request.
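Something along these lines (untested here; note that HttpWebRequest only sends these credentials after a 401 challenge, so the manual header remains the way to authenticate preemptively):
// Hypothetical replacement for the manual Basic-auth header:
httpWebRequest.Credentials = new NetworkCredential(uname, pass);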

Hi there (and Happy New Year) :)!
A couple of things:
If you can, stay away from triggers! They can cause a lot of unexpected side effects, especially if you use them for business logic. What I mean by that is that they are good for auditing purposes, but apart from that I try to avoid them like the plague.
So, if you do not use triggers, how do you know when something is inserted? Well, I hope that your inserts happen through stored procedures, and are not direct inserts in the table(s). If you use stored procedures you can have a procedure which is doing your logic, and that procedure is called from the procedure which does the insert.
Back to your question. I don't really have an answer to the specifics, but I would stay away from using SQLCLR in this case (sidenote: I am a BIG proponent of SQLCLR, and I have implemented a lot of SQLCLR processes which are doing something similar to what you do, so I don't say this because I do not like SQLCLR).
In your case, I would look at using either Change Notifications or, as you mention in your post, Service Broker. Be aware that with SSB you can run into performance issues (latches, locks, etc.) if your system is highly volatile (2,000+ tx/sec). At least that is what we experienced.
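For the Change Notifications route, the usual .NET client is SqlDependency, which rides on Service Broker under the covers; a minimal sketch, with the connection string, query, and column names as placeholders:
using System.Data.SqlClient;

// Call once at application startup; this sets up the Service Broker plumbing.
SqlDependency.Start(connectionString);

using (var conn = new SqlConnection(connectionString))
// Query notification rules apply: two-part table name, explicit column list.
using (var cmd = new SqlCommand("SELECT Id, Payload FROM dbo.Table1", conn))
{
    var dependency = new SqlDependency(cmd);
    dependency.OnChange += (sender, e) =>
    {
        // Fires once when the result set changes; re-subscribe here and call
        // the Web API from this external process, outside the trigger's transaction.
    };
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // The command must actually execute for the subscription to register.
    }
}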

Related

Block Controller Method while already running

I have a controller which returns a large JSON object. If this object does not exist, it will generate it and return it afterwards. The generation takes about 5 seconds, and if the client sends the request multiple times, the object gets generated with x times the children. So my question is: is there a way to block the second request until the first one has finished, independent of who sent the request?
Normally I would do it with a singleton, but because I am using scoped services, a singleton does not work here.
Warning: this is very opinionated and maybe not suitable for Stack Overflow, but here it is anyway.
Although I'll provide no code... when things take a while to generate, you don't usually spend that time directly in controller code, but do something like "start a background task to generate the result, and provide a task id which can be queried in a separate call".
So, my preferred course of action for this would be having two different controller actions:
Generate, which creates the background job, assigns it some id, and returns the id
GetResult, to which you pass the task id, and which returns either an error code for "job id doesn't exist" or "job id isn't finished", or a 200 with the result.
This way, your clients will need to call both; however, in Generate you can check whether the job is already being created and return the existing job id.
This of course moves the need to "retry and check" to your client: in exchange, you don't leave the connection to the server opened during those 5 seconds (which could potentially be multiplied by a number of clients) and return fast.
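A minimal sketch of those two actions (the in-memory job store and all names here are mine; a real version would deduplicate running jobs and persist them somewhere safer):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/report")]
public class ReportController : ControllerBase
{
    // Hypothetical in-memory job store; lost on application restart.
    private static readonly ConcurrentDictionary<Guid, Task<string>> _jobs =
        new ConcurrentDictionary<Guid, Task<string>>();

    [HttpPost("generate")]
    public IActionResult Generate()
    {
        var jobId = Guid.NewGuid();
        _jobs.TryAdd(jobId, Task.Run(() => BuildLargeJson())); // kick off the ~5-second work
        return Accepted(new { jobId });                        // return the id immediately
    }

    [HttpGet("{jobId}")]
    public IActionResult GetResult(Guid jobId)
    {
        if (!_jobs.TryGetValue(jobId, out var job))
            return NotFound("job id doesn't exist");
        if (!job.IsCompleted)
            return StatusCode(202, "job isn't finished, try again later");
        return Ok(job.Result); // 200 with the generated object
    }

    private static string BuildLargeJson() => "{ }"; // stand-in for the real 5-second generation
}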
Otherwise, if you don't care about having your clients wait for a response during those 5 seconds, you could do a simple:
if (resultDoesntExist)
{
    resultDoesntExist = false; // You can use locks for the boolean setters or Interlocked instead of just setting a member
    resultIsBeingGenerated = true;
    generateResult(); // <-- this is what takes 5 seconds
    resultIsBeingGenerated = false;
}
while (resultIsBeingGenerated) { await Task.Delay(10); } // <-- other clients will wait here
var result = getResult(); // <-- this should be fast once the result is already created
return result;
note: those booleans and the actual loop could be on the controller, or on the service, or wherever you see fit: just be wary of making them thread-safe in whichever way you find appropriate
So you basically make other clients wait till the first one generates the result, with "almost" no CPU load on the server... however with a connection open and a thread from the threadpool used, so I just DO NOT recommend this :-)
PS: @Leaky's solution above is also good, but it also shifts the responsibility to retry to the client, and if you are going to do that, I'd probably go directly with a "background job id" instead of having the first call (the one that generates the result) take 5 seconds. IMO, if it can be avoided, no API action should ever take 5 seconds to return :-)
Do you have an example for Interlocked.CompareExchange?
Sure. I'm definitely not the most knowledgeable person when it comes to multi-threading stuff, but this is quite simple (as you might know, Interlocked has no support for bool, so it's customary to represent it with an integral type):
public class QueryStatus
{
    private static int _flag;

    // Returns false if the query has already started.
    public bool TrySetStarted()
        => Interlocked.CompareExchange(ref _flag, 1, 0) == 0;

    public void SetFinished()
        => Interlocked.Exchange(ref _flag, 0);
}
I think it's the safest if you use it like this, with a 'Try' method, which tries to set the value and tells you if it was already set, in an atomic way.
Besides simply adding this (I mean just the field and the methods) to your existing component, you can also use it as a separate component, injected from the IOC container as scoped. Or even injected as a singleton, and then you don't have to use a static field.
Storing state like this should be good for as long as the application is running, but if the hosted application is recycled due to inactivity, it's obviously lost. Though, that won't happen while a request is still processing, and definitely won't happen in 5 seconds.
(And if you wanted to synchronize between app service instances, you could 'quickly' save a flag to the database, in a transaction with proper isolation level set. Or use e.g. Azure Redis Cache.)
Example solution
As Kit noted, rightly so, I didn't provide a full solution above.
So, a crude implementation could go like this:
public class SomeQueryService : ISomeQueryService
{
    private static int _hasStartedFlag;

    private static bool TrySetStarted()
        => Interlocked.CompareExchange(ref _hasStartedFlag, 1, 0) == 0;

    private static void SetFinished()
        => Interlocked.Exchange(ref _hasStartedFlag, 0);

    public async Task<(bool couldExecute, object result)> TryExecute()
    {
        if (!TrySetStarted())
            return (couldExecute: false, result: null);

        // Safely execute the long query here and capture its result
        // (ExecuteLongQueryAsync is a placeholder for the real work).
        object result = await ExecuteLongQueryAsync();

        SetFinished();
        return (couldExecute: true, result: result);
    }
}
// In the controller, obviously
[HttpGet()]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
    var (couldExecute, result) = await someQueryService.TryExecute();
    if (!couldExecute)
    {
        return new ObjectResult(new ProblemDetails
        {
            Status = StatusCodes.Status503ServiceUnavailable,
            Title = "Another request has already started. Try again later.",
            Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
        })
        { StatusCode = StatusCodes.Status503ServiceUnavailable };
    }
    return Ok(result);
}
Of course possibly you'd want to extract the 'blocking' logic from the controller action into somewhere else, for example an action filter. In that case the flag should also go into a separate component that could be shared between the query service and the filter.
General use action filter
I felt bad about my inelegant solution above, and I realized that this problem can be generalized into basically a connection number limiter on an endpoint.
I wrote this small action filter that can be applied to any endpoint (multiple endpoints), and it accepts the number of allowed connections:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ConcurrencyLimiterAttribute : ActionFilterAttribute
{
    private readonly int _allowedConnections;
    private static readonly ConcurrentDictionary<string, int> _connections = new ConcurrentDictionary<string, int>();

    public ConcurrencyLimiterAttribute(int allowedConnections = 1)
        => _allowedConnections = allowedConnections;

    public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var key = context.HttpContext.Request.Path;
        if (_connections.AddOrUpdate(key, 1, (k, v) => ++v) > _allowedConnections)
        {
            Close(withError: true);
            return;
        }
        try
        {
            await next();
        }
        finally
        {
            Close();
        }

        void Close(bool withError = false)
        {
            if (withError)
            {
                context.Result = new ObjectResult(new ProblemDetails
                {
                    Status = StatusCodes.Status503ServiceUnavailable,
                    Title = $"Maximum {_allowedConnections} simultaneous connections are allowed. Try again later.",
                    Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
                })
                { StatusCode = StatusCodes.Status503ServiceUnavailable };
            }
            _connections.AddOrUpdate(key, 0, (k, v) => --v);
        }
    }
}

How to prevent multiple initialization of static field in static method?

In my web application I need to cache some data, as they are required frequently but change less often. To hold them I have made a separate static class which holds these fields as static values. These fields get initialized on the first call; see a sample below.
public static class gtu
{
    private static string mostsearchpagedata = "";

    public static string getmostsearchpagedata()
    {
        if (mostsearchpagedata == "")
        {
            using (WebClient client = new WebClient())
            {
                mostsearchpagedata = client.DownloadString("https://xxx.yxc");
            }
        }
        return mostsearchpagedata;
    }
}
Here the web request is only made one time. It works OK, but if the method is called in quick succession, when there are a large number of users and the app pool has just restarted, the web request is made multiple times, depending on whether mostsearchpagedata has been initialized yet.
How can I make sure that the web request happens only one time, and all other requests wait for the completion of the first web request?
You could use System.Lazy<T>:
public static class gtu
{
    private static readonly Lazy<string> mostsearchedpagedata =
        new Lazy<string>(
            () => {
                using (WebClient client = new WebClient())
                {
                    return client.DownloadString("https://xxx.yxc");
                }
            },
            // See https://msdn.microsoft.com/library/system.threading.lazythreadsafetymode(v=vs.110).aspx
            // for more info on the relevance of this.
            // Hint: since fetching a web page is potentially
            // expensive you really want to do it only once.
            LazyThreadSafetyMode.ExecutionAndPublication
        );

    // Optional: provide a "wrapper" to hide the fact that Lazy is used.
    public static string MostSearchedPageData => mostsearchedpagedata.Value;
}
In short, the lambda code (your DownloadString essentially) will be called when the first thread accesses .Value on the Lazy instance. Other threads will either do the same or wait for the first thread to finish (see LazyThreadSafetyMode for more info). Subsequent invocations of the Value property will get the value already stored in the Lazy instance.
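A quick hypothetical harness to see the once-only behavior:
using System.Threading.Tasks;

// Even if many requests hit a cold app pool at once, the download lambda runs
// exactly once under ExecutionAndPublication; the other threads block on .Value
// until it finishes, then all of them see the same cached string.
Parallel.For(0, 100, _ =>
{
    string page = gtu.MostSearchedPageData;
});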

Shared object among different requests

I'm working with .NET 3.5 with a simple handler for http requests. Right now, on each http request my handler opens a tcp connection with 3 remote servers in order to receive some information from them. Then closes the sockets and writes the server status back to Context.Response.
However, I would prefer to have a separate object that every 5 minutes connects to the remote servers via tcp, gets the information and keeps it. So the HttpRequest, on each request would be much faster just asking this object for the information.
So my question here is: how do I keep a shared global object in memory all the time, one that can also "wake" and do those TCP connections even when no HTTP requests are coming, and have the object accessible to the HTTP request handler?
A service may be overkill for this.
You can create a global object in your application start and have it create a background thread that does the query every 5 minutes. Take the response (or what you process from the response) and put it into a separate class, creating a new instance of that class with each response, and use System.Threading.Interlocked.Exchange to replace a static instance each time the response is retrieved. When you want to look at the response, simply copy a reference to the static instance into a stack reference and you will have the most recent data.
Keep in mind, however, that ASP.NET will kill your application whenever there are no requests for a certain amount of time (idle time), so your application will stop and restart, causing your global object to get destroyed and recreated.
You may read elsewhere that you can't or shouldn't do background stuff in ASP.NET, but that's not true--you just have to understand the implications. I have similar code to the following example working on an ASP.NET site that handles over 1000 req/sec peak (across multiple servers).
For example, in global.asax.cs:
public class BackgroundResult
{
    public string Response; // for simplicity, just use a public field for this example--for a real implementation, public fields are probably bad
}

class BackgroundQuery
{
    private BackgroundResult _result; // interlocked
    private readonly Thread _thread;

    public BackgroundQuery()
    {
        _thread = new Thread(new ThreadStart(BackgroundThread));
        _thread.IsBackground = true; // allow the application to shut down without errors even while this thread is still running
        _thread.Name = "Background Query Thread";
        _thread.Start();
        // maybe you want to get the first result here immediately?? Otherwise, the first result may not be available for a bit
    }

    /// <summary>
    /// Gets the latest result. Note that the result could change at any time, so do NOT expect to reference this
    /// repeatedly and get the same object back every time--for example, if you write code like:
    /// if (LatestResult.IsFoo) { LatestResult.Bar }, the object returned to check IsFoo could be different from
    /// the one used to get the Bar property.
    /// </summary>
    public BackgroundResult LatestResult { get { return _result; } }

    private void BackgroundThread()
    {
        try
        {
            while (true)
            {
                try
                {
                    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://example.com/samplepath?query=query");
                    request.Method = "GET";
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        using (StreamReader reader = new StreamReader(response.GetResponseStream(), System.Text.Encoding.UTF8))
                        {
                            // get what I need here (just the entire contents as a string for this example)
                            string result = reader.ReadToEnd();
                            // put it into the results
                            BackgroundResult backgroundResult = new BackgroundResult { Response = result };
                            System.Threading.Interlocked.Exchange(ref _result, backgroundResult);
                        }
                    }
                }
                catch (Exception ex)
                {
                    // the request failed--catch here and notify us somehow, but keep looping
                    System.Diagnostics.Trace.WriteLine("Exception doing background web request:" + ex.ToString());
                }
                // wait for five minutes before we query again. Note that this is five minutes between the END of one
                // request and the start of another--if you want 5 minutes between the START of each request, this
                // will need to change a little.
                System.Threading.Thread.Sleep(5 * 60 * 1000);
            }
        }
        catch (Exception ex)
        {
            // we need to get notified of this error here somehow by logging it or something...
            System.Diagnostics.Trace.WriteLine("Error in BackgroundQuery.BackgroundThread:" + ex.ToString());
        }
    }
}

private static BackgroundQuery _BackgroundQuerier; // set only during application startup

protected void Application_Start(object sender, EventArgs e)
{
    // other initialization here...
    _BackgroundQuerier = new BackgroundQuery();
    // get the value here (it may or may not be set quite yet at this point)
    BackgroundResult result = _BackgroundQuerier.LatestResult;
    // other initialization here...
}

StackExchange.Redis with Azure Redis is unusably slow or throws timeout errors

I'm moving all of my existing Azure In-Role cache use to Redis and decided to use the Azure Redis preview along with the StackExchange.Redis library (https://github.com/StackExchange/StackExchange.Redis). I wrote all the code for it without much problem, but when running it is absolutely unusably slow and constantly throws timeout errors (my timeout period is set to 15 seconds).
Here is the relevant code for how I am setting up the Redis connection and using it for simple operations:
private static ConnectionMultiplexer _cacheService;
private static IDatabase _database;
private static object _lock = new object();

private void Initialize()
{
    if (_cacheService == null)
    {
        lock (_lock)
        {
            if (_cacheService == null)
            {
                var options = new ConfigurationOptions();
                options.EndPoints.Add("{my url}", 6380);
                options.Ssl = true;
                options.Password = "my password";
                // needed for FLUSHDB command
                options.AllowAdmin = true;
                // necessary?
                options.KeepAlive = 30;
                options.ConnectTimeout = 15000;
                options.SyncTimeout = 15000;
                int database = 0;
                _cacheService = ConnectionMultiplexer.Connect(options);
                _database = _cacheService.GetDatabase(database);
            }
        }
    }
}

public void Set(string key, object data, TimeSpan? expiry = null)
{
    if (_database != null)
    {
        _database.Set(key, data, expiry: expiry);
    }
}

public object Get(string key)
{
    if (_database != null)
    {
        return _database.Get(key);
    }
    return null;
}
Performing very simple commands like Get and Set often time out or take 5-10 seconds to complete. Seems like it kind of negates the whole purpose of using it as a cache if it's WAY slower than actually fetching the real data from my database :)
Am I doing anything obviously incorrect?
Edit: here are some stats that I pulled from the server (using Redis Desktop Manager) in case that sheds some light on anything.
Server
redis_version:2.8.12
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:2876
tcp_port:6379
uptime_in_seconds:109909
uptime_in_days:1
hz:10
lru_clock:16072421
config_file:C:\Resources\directory\xxxx.Kernel.localStore\1\redis_2092_port6379.conf
Clients
connected_clients:5
client_longest_output_list:0
client_biggest_input_buf:0
client_total_writes_outstanding:0
client_total_sent_bytes_outstanding:0
blocked_clients:0
Memory
used_memory:4256488
used_memory_human:4.06M
used_memory_rss:67108864
used_memory_rss_human:64.00M
used_memory_peak:5469760
used_memory_peak_human:5.22M
used_memory_lua:33792
mem_fragmentation_ratio:15.77
mem_allocator:dlmalloc-2.8
Persistence
loading:0
rdb_changes_since_last_save:72465
rdb_bgsave_in_progress:0
rdb_last_save_time:1408471440
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
Stats
total_connections_received:25266
total_commands_processed:123389
instantaneous_ops_per_sec:10
bytes_received_per_sec:275
bytes_sent_per_sec:65
bytes_received_per_sec_human:
Edit 2: Here are the extension methods I'm using for Get/Set -- they are very simple methods that just turn an object into JSON and call StringSet.
public static object Get(this IDatabase cache, string key)
{
    return DeserializeJson<object>(cache.StringGet(key));
}

public static void Set(this IDatabase cache, string key, object value, TimeSpan? expiry = null)
{
    cache.StringSet(key, SerializeJson(value), expiry: expiry);
}
Edit 3: here are a couple example error messages:
A first chance exception of type 'System.TimeoutException' occurred in StackExchange.Redis.dll
Timeout performing GET MyCachedList, inst: 11, queue: 1, qu=1, qs=0, qc=0, wr=0/1, in=0/0
A first chance exception of type 'System.TimeoutException' occurred in StackExchange.Redis.dll
Timeout performing GET MyCachedList, inst: 1, queue: 97, qu=0, qs=97, qc=0, wr=0/0, in=3568/0
Here is the recommended pattern, from the Azure Redis Cache documentation:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get { return lazyConnection.Value; }
}
A few important points:
It uses Lazy<T> to handle thread-safe initialization
It sets "abortConnect=false", which means if the initial connect attempt fails, the ConnectionMultiplexer will silently retry in the background rather than throw an exception.
It does not check the IsConnected property, since ConnectionMultiplexer will automatically retry in the background if the connection is dropped.
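Usage is then just a property read plus the normal StackExchange.Redis calls; a typical call site (my example, not from the documentation snippet):
IDatabase cache = Connection.GetDatabase();
cache.StringSet("key", "value");
string value = cache.StringGet("key");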
I was having similar issues. Redis cache was unusually slow but was definitely caching. In some cases, it took 20-40 seconds to load a page.
I realized that the cache server was in a different location than the site's. I updated the cache server to live in the same location as the website and now everything works as expected.
That same page now loads in 4-6 seconds.
Good luck to anyone else who's having these issues.
The problem is how the connection object is created and used. We faced the exact same problem initially and fixed it by using a single connection object across all web requests. We also check whether it is null or disconnected in Session_Start so the object can be gracefully re-created. That fixed the issue.
Note: Also check which Azure zone your Redis Cache instance is running in and which zone your web server is in. It is better to keep both in the same zone.
In the Global.asax.cs file:
public static ConnectionMultiplexer RedisConnection;
public static IDatabase RedisCacheDb;

protected void Session_Start(object sender, EventArgs e)
{
    if (ConfigurationManager.ConnectionStrings["RedisCache"] != null)
    {
        if (RedisConnection == null || !RedisConnection.IsConnected)
        {
            RedisConnection = ConnectionMultiplexer.Connect(ConfigurationManager.ConnectionStrings["RedisCache"].ConnectionString);
        }
        RedisCacheDb = RedisConnection.GetDatabase();
    }
}
It worked in my case. Don't forget to increase the SyncTimeout. The default is 1 second.
private static Lazy<ConnectionMultiplexer> ConnectionMultiplexerItem = new Lazy<ConnectionMultiplexer>(() =>
{
    var redisConfig = ConfigurationOptions.Parse("mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
    redisConfig.SyncTimeout = 3000;
    return ConnectionMultiplexer.Connect(redisConfig);
});
Check whether your Azure Redis Cache and the client are in the same region in Azure. For example, you might get timeouts when your cache is in East US but the client is in West US and the request doesn't complete within the synctimeout, or when you are debugging from your local development machine.
It's highly recommended to have the cache and the client in the same Azure region. If you have a scenario that requires cross-region calls, set the synctimeout to a higher value.
Read more:
https://azure.microsoft.com/en-us/blog/investigating-timeout-exceptions-in-stackexchange-redis-for-azure-redis-cache/
In our case the issue was with using an SSL connection. You're showing that your desktop manager is running on the non-SSL port, but your code is using SSL.
A quick benchmark on our Azure Redis without SSL, retrieving around 80k values with an LRANGE command (also with .NET and StackExchange.Redis), is basically instant. When SSL is used, the same query takes 27 seconds.
WebApp: Standard S2
Redis: Standard 1 GB
Edit: Checking the SLOWLOG, Redis itself does hit the slowlog with the roughly 14 ms it takes to grab the rows, but that is nowhere near the actual transfer time with SSL enabled. We ended up with a premium Redis to have some sort of security between Redis and the Web Apps.

Using SQL Server application locks to solve locking requirements

I have a large application based on Dynamics CRM 2011 that in various places has code that must query for a record based upon some criteria, create it if it doesn't exist, or update it otherwise.
An example of the kind of thing I am talking about would be similar to this:
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    record.Id = Guid.NewGuid();
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
In terms of CRM 2011 implementation (although not strictly relevant to this question) the code could be triggered from synchronous or asynchronous plugins. The issue is that the code is not thread safe, between checking if the record exists and creating it if it doesn't, another thread could come in and do the same thing first resulting in duplicate records.
Normal locking methods are not reliable due to the architecture of the system, various services using multiple threads could all be using the same code, and these multiple services are also load balanced across multiple machines.
In trying to find a solution to this problem that doesn't add massive amounts of extra complexity and doesn't compromise the idea of not having a single point of failure or a single point where a bottleneck could occur I came across the idea of using SQL Server application locks.
I came up with the following class:
public class SQLLock : IDisposable
{
    // Lock constants
    private const string _lockMode = "Exclusive";
    private const string _lockOwner = "Transaction";
    private const string _lockDbPrincipal = "public";

    // Variable for storing the connection passed to the constructor
    private SqlConnection _connection;
    // Variable for storing the name of the Application Lock created in SQL
    private string _lockName;
    // Variable for storing the timeout value of the lock
    private int _lockTimeout;
    // Variable for storing the SQL Transaction containing the lock
    private SqlTransaction _transaction;
    // Variable for storing if the lock was created ok
    private bool _lockCreated = false;

    public SQLLock(string lockName, int lockTimeout = 180000)
    {
        _connection = Connection.GetMasterDbConnection();
        _lockName = lockName;
        _lockTimeout = lockTimeout;
        // Create the Application Lock
        CreateLock();
    }

    public void Dispose()
    {
        // Release the Application Lock if it was created
        if (_lockCreated)
        {
            ReleaseLock();
        }
        _connection.Close();
        _connection.Dispose();
    }

    private void CreateLock()
    {
        _transaction = _connection.BeginTransaction();
        using (SqlCommand createCmd = _connection.CreateCommand())
        {
            createCmd.Transaction = _transaction;
            createCmd.CommandType = System.Data.CommandType.Text;
            StringBuilder sbCreateCommand = new StringBuilder();
            sbCreateCommand.AppendLine("DECLARE @res INT");
            sbCreateCommand.AppendLine("EXEC @res = sp_getapplock");
            // Note: _lockName is concatenated directly into the SQL text here;
            // passing it as a SqlParameter instead would avoid injection issues.
            sbCreateCommand.Append("@Resource = '").Append(_lockName).AppendLine("',");
            sbCreateCommand.Append("@LockMode = '").Append(_lockMode).AppendLine("',");
            sbCreateCommand.Append("@LockOwner = '").Append(_lockOwner).AppendLine("',");
            sbCreateCommand.Append("@LockTimeout = ").Append(_lockTimeout).AppendLine(",");
            sbCreateCommand.Append("@DbPrincipal = '").Append(_lockDbPrincipal).AppendLine("'");
            sbCreateCommand.AppendLine("IF @res NOT IN (0, 1)");
            sbCreateCommand.AppendLine("BEGIN");
            sbCreateCommand.AppendLine("RAISERROR ( 'Unable to acquire Lock', 16, 1 )");
            sbCreateCommand.AppendLine("END");
            createCmd.CommandText = sbCreateCommand.ToString();
            try
            {
                createCmd.ExecuteNonQuery();
                _lockCreated = true;
            }
            catch (Exception ex)
            {
                _transaction.Rollback();
                throw new Exception(string.Format("Unable to get SQL Application Lock on '{0}'", _lockName), ex);
            }
        }
    }

    private void ReleaseLock()
    {
        using (SqlCommand releaseCmd = _connection.CreateCommand())
        {
            releaseCmd.Transaction = _transaction;
            releaseCmd.CommandType = System.Data.CommandType.StoredProcedure;
            releaseCmd.CommandText = "sp_releaseapplock";
            releaseCmd.Parameters.AddWithValue("@Resource", _lockName);
            releaseCmd.Parameters.AddWithValue("@LockOwner", _lockOwner);
            releaseCmd.Parameters.AddWithValue("@DbPrincipal", _lockDbPrincipal);
            try
            {
                releaseCmd.ExecuteNonQuery();
            }
            catch { }
        }
        _transaction.Commit();
    }
}
I would use this in my code to create a SQL Server application lock, using the unique key I am querying for as the lock name, like this:
using (var sqlLock = new SQLLock(id))
{
    // Code to check for and create or update the record here
}
Now this approach seems to work, however I am by no means any kind of SQL Server expert and am wary about putting this anywhere near production code.
My question really has 3 parts
1. Is this a really bad idea because of something I haven't considered?
Are SQL Server application locks completely unsuitable for this purpose?
Is there a maximum number of application locks (with different names) you can have at a time?
Are there performance considerations if a potentially large number of locks are created?
What else could be an issue with the general approach?
2. Is the solution actually implemented above any good?
If SQL Server application locks are usable like this, have I actually used them properly?
Is there a better way of using SQL Server to achieve the same result?
In the code above I am getting a connection to the Master database and creating the locks in there. Does that potentially cause other issues? Should I create the locks in a different database?
3. Is there a completely alternative approach that could be used that doesn't use SQL Server application locks?
I can't use stored procedures to create and update the record (unsupported in CRM 2011).
I don't want to add a single point of failure.
You can do this much easier.
// Make sure your plugin runs within a transaction; this is the case for stage 20 and 40.
// You can check this with IExecutionContext.IsInTransaction.
// This does not work with offline plugins, but works within CRM Online (cloud) and is fully supported.
// It also works on transaction rollback.
var lockUpdateEntity = new dummy_lock_entity(); // simple technical entity with as many rows as different lock barriers you need
lockUpdateEntity.Id = Guid.Parse("well known guid"); // well-known guid for this barrier
lockUpdateEntity.dummy_field = Guid.NewGuid(); // just update/change a field to create a lock; its content doesn't matter

//--------------- this is untested by me, i use the next one
context.UpdateObject(lockUpdateEntity);
context.SaveChanges();
//---------------
// OR
//--------------- i use this one, but you need a reference to your OrganizationService
OrganizationService.Update(lockUpdateEntity);
//---------------

// Threads wait here if they don't hold the lock on dummy_lock_entity with the "well known guid".
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    //record.Id = Guid.NewGuid(); // not needed
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
// Let the pipeline flow and the transaction complete...
For more background info refer to http://www.crmsoftwareblog.com/2012/01/implementing-robust-microsoft-dynamics-crm-2011-auto-numbering-using-transactions/
