Can multithreading be used to performance-tune my app? - C#

In my application I open connections to about 70 servers, each having 8 databases on average (the servers are categorized into environments, viz. development, production, UAT, SIT, training, misc, QA).
The application checks for the existence of a user in each database and fetches the user's details if the user exists.
I use a method to call the service: the method passes the user ID as input, and in turn the service checks for the user across the databases and fetches the details.
This whole process takes so much time that the idle time in the UI is around 5-10 minutes.
How can I tune the performance of this application? I thought of implementing multithreading and fetching details on a per-environment basis, but I am not sure whether we can call a method with a return type and input parameters from a thread.
Please suggest a way to improve performance.
public List<AccessDetails> GetAccessListOfMirror(string mirrorId, string server)
{
    List<AccessDetails> accessOfMirror = new List<AccessDetails>();
    string loginUserId = SessionManager.Session.Current.LoggedInUserName;
    string userPassword = SessionManager.Session.Current.Password;
    using (Service1Client client = new Service1Client())
    {
        client.Open();
        accessOfMirror = client.GetMirrorList(mirrorId, server, loginUserId, userPassword);
    }
    return accessOfMirror;
}
Service method
public List<AccessDetails> GetMirrorList(string mirrorId, string server, string userId, string userPassword)
{
    string mirrorUser = mirrorId.ToString();
    List<ConnectionStringContract> connectionStrings = new List<ConnectionStringContract>();
    try
    {
        connectionStrings = GetConnectionString(server);
    }
    catch (FaultException<ServiceData> exe)
    {
        throw exe;
    }
    AseConnection aseConnection = default(AseConnection);
    List<AccessRequest> mirrorUsers = new List<AccessRequest>();
    List<FacetsOnlineAccess> foaAccess = new List<FacetsOnlineAccess>();
    List<AccessDetails> accessDetails = new List<AccessDetails>();
    AccessDetails accDetails = new AccessDetails();
    AccessRequest access;
    if (!String.IsNullOrEmpty(server))
        connectionStrings = connectionStrings.Where(x => x.Server == server).ToList();
    foreach (ConnectionStringContract connection in connectionStrings)
    {
        string connectionString = connection.ConnectionString;
        AseCommand aseCommand = new AseCommand();
        using (aseConnection = new AseConnection(connectionString))
        {
            try
            {
                aseConnection.Open();
                try
                {
                    List<Parameter> parameter = new List<Parameter>();
                    Parameter param;
                    param = new Parameter();
                    param.Name = "@name_in_db";
                    param.Value = mirrorUser.ToLower().Trim();
                    parameter.Add(param);
                    int returnCode = 0;
                    DataSet ds = new DataSet();
                    try
                    {
                        ds = DataAccess.ExecuteStoredProcedure(connectionString, Constant.SP_HELPUSER, parameter, out returnCode);
                    }
                    catch (Exception ex)
                    {
                    }
                    if (ds.Tables.Count > 0 && ds.Tables[0].Rows.Count > 0)
                    {
                        foreach (DataRow row in ds.Tables[0].Rows)
                        {
                            access = new AccessRequest();
                            if (row.ItemArray[0].ToString() == mirrorUser)
                                access.Group = row.ItemArray[2].ToString();
                            else
                                access.Group = row.ItemArray[0].ToString();
                            access.Environment = connection.Environment.Trim();
                            access.Server = connection.Server.Trim();
                            access.Database = connection.Database.Trim();
                            mirrorUsers.Add(access);
                        }
                    }
                }
                catch (Exception ex)
                {
                }
            }
            catch (Exception ConEx)
            {
            }
        }
    }
    accDetails.AccessList = mirrorUsers;
    //accDetails.FOAList = foaAccess;
    accessDetails.Add(accDetails);
    return accessDetails;
}
Thanks in advance

Loops can be slow, especially loops inside loops, and I/O operations are always pretty slow. You have a loop performing I/O, so if you execute those I/O operations on parallel threads, performance should increase.
You could translate
foreach (ConnectionStringContract connection in connectionStrings)
{
    ...
}
into:
Parallel.ForEach(connectionStrings, connection =>
{
    ...
});
Inside the loop body you should guard the commonly used variables, like mirrorUsers, with a lock.
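For illustration, here is a minimal sketch of how the parallelized loop could look; the lockObject field and the QueryDatabase helper are my own placeholders, not code from the question:

private static readonly object lockObject = new object();

// Query each database in parallel; each iteration opens its own connection,
// so only the shared result list needs synchronization.
Parallel.ForEach(connectionStrings, connection =>
{
    List<AccessRequest> found = QueryDatabase(connection, mirrorUser); // hypothetical helper wrapping the loop body
    lock (lockObject)
    {
        mirrorUsers.AddRange(found);
    }
});

Since each iteration is dominated by network and database I/O, you may also want to cap the number of concurrent queries via ParallelOptions.MaxDegreeOfParallelism so you don't overwhelm the servers.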
I think this is a great start. Meanwhile I will look for other performance issues.

You should be able to take advantage of multithreading without too much hassle...
You should probably make use of the thread pool, so you don't spawn too many threads all running at the same time. You can read about the built-in thread pool here:
http://msdn.microsoft.com/en-us/library/3dasc8as%28v=vs.80%29.aspx
What you should do is extract the body of your foreach loop into a static method. Any parameters should be wrapped in a model object that can be passed to the thread; then you can use ThreadPool.QueueUserWorkItem to start the work items (see the sketch after the next link).
Regarding multiple threads writing to the same resource (a list or whatever), you can use any kind of mutex, lock or thread-safe component. Using locks is quite simple:
http://msdn.microsoft.com/en-us/library/c5kehkcz%28v=vs.80%29.aspx
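A minimal sketch of that pattern, assuming a hypothetical MirrorQuery model and QueryDatabase helper; CountdownEvent is used here to wait for all work items to finish:

class MirrorQuery // model passed to each work item
{
    public ConnectionStringContract Connection;
    public string MirrorUser;
}

var results = new List<AccessRequest>();
var gate = new object();
using (var done = new CountdownEvent(connectionStrings.Count))
{
    foreach (ConnectionStringContract conn in connectionStrings)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            var q = (MirrorQuery)state;
            try
            {
                List<AccessRequest> found = QueryDatabase(q.Connection, q.MirrorUser); // hypothetical helper
                lock (gate) { results.AddRange(found); }
            }
            finally { done.Signal(); }
        }, new MirrorQuery { Connection = conn, MirrorUser = mirrorUser });
    }
    done.Wait(); // block until every database has been checked
}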

Related

Hangfire in single application on multiple physical servers

I am running Hangfire in a single web application; my application runs on 2 physical servers, but Hangfire uses 1 database.
At the moment, I am creating a Hangfire server for each queue, because each queue needs to run 1 worker at a time and the jobs must run in order. I set them up like this:
// core
services.AddHangfire(options =>
{
    options.SetDataCompatibilityLevel(CompatibilityLevel.Version_170);
    options.UseSimpleAssemblyNameTypeSerializer();
    options.UseRecommendedSerializerSettings();
    options.UseSqlServerStorage(appSettings.Data.DefaultConnection.ConnectionString, storageOptions);
});
// add multiple servers, this way we get to control how many workers are in each queue
services.AddHangfireServer(options =>
{
    options.ServerName = "workflow-queue";
    options.WorkerCount = 1;
    options.Queues = new string[] { "workflow-queue" };
    options.SchedulePollingInterval = TimeSpan.FromSeconds(10);
});
services.AddHangfireServer(options =>
{
    options.ServerName = "alert-schedule";
    options.WorkerCount = 1;
    options.Queues = new string[] { "alert-schedule" };
    options.SchedulePollingInterval = TimeSpan.FromMinutes(1);
});
services.AddHangfireServer(options =>
{
    options.ServerName = "trigger-schedule";
    options.WorkerCount = 1;
    options.Queues = new string[] { "trigger-schedule" };
    options.SchedulePollingInterval = TimeSpan.FromMinutes(1);
});
services.AddHangfireServer(options =>
{
    options.ServerName = "report-schedule";
    options.WorkerCount = 1;
    options.Queues = new string[] { "report-schedule" };
    options.SchedulePollingInterval = TimeSpan.FromMinutes(1);
});
services.AddHangfireServer(options =>
{
    options.ServerName = "maintenance";
    options.WorkerCount = 5;
    options.Queues = new string[] { "maintenance" };
    options.SchedulePollingInterval = TimeSpan.FromMinutes(10);
});
My problem is that this generates multiple server instances for each queue, with different ports.
In my code I then try to stop jobs from running if they are already queued/retrying, but if the job is running on the other physical server, it is not found and gets queued again.
Here is the code that checks whether a job is already running:
public async Task<bool> IsAlreadyQueuedAsync(PerformContext context)
{
    var disableJob = false;
    var monitoringApi = JobStorage.Current.GetMonitoringApi();
    // get the jobId, method and queue using performContext
    var jobId = context.BackgroundJob.Id;
    var methodInfo = context.BackgroundJob.Job.Method;
    var queueAttribute = (QueueAttribute)Attribute.GetCustomAttribute(context.BackgroundJob.Job.Method, typeof(QueueAttribute));
    // enqueuedJobs
    var enqueuedjobStatesToCheck = new[] { "Processing" };
    var enqueuedJobs = monitoringApi.EnqueuedJobs(queueAttribute.Queue, 0, 1000);
    var enqueuedJobsAlready = enqueuedJobs.Count(e => e.Key != jobId && e.Value != null && e.Value.Job != null && e.Value.Job.Method.Equals(methodInfo) && enqueuedjobStatesToCheck.Contains(e.Value.State));
    if (enqueuedJobsAlready > 0)
        disableJob = true;
    // scheduledJobs
    if (!disableJob)
    {
        // check if there are any scheduledJobs that are processing
        var scheduledJobs = monitoringApi.ScheduledJobs(0, 1000);
        var scheduledJobsAlready = scheduledJobs.Count(e => e.Key != jobId && e.Value != null && e.Value.Job != null && e.Value.Job.Method.Equals(methodInfo));
        if (scheduledJobsAlready > 0)
            disableJob = true;
    }
    // failedJobs
    if (!disableJob)
    {
        var failedJobs = monitoringApi.FailedJobs(0, 1000);
        var failedJobsAlready = failedJobs.Count(e => e.Key != jobId && e.Value != null && e.Value.Job != null && e.Value.Job.Method.Equals(methodInfo));
        if (failedJobsAlready > 0)
            disableJob = true;
    }
    // if disableJob is true, remove the currently running job, else it will write a "successful" message in the logs
    if (disableJob)
    {
        // use hangfire delete, for cleanup
        BackgroundJob.Delete(jobId);
        // create our sqlBuilder to remove the entries altogether including the count
        var sqlBuilder = new SqlBuilder()
            .DELETE_FROM("Hangfire.[Job]")
            .WHERE("[Id] = {0};", jobId);
        sqlBuilder.Append("DELETE TOP(1) FROM Hangfire.[Counter] WHERE [Key] = 'stats:deleted' AND [Value] = 1;");
        using (var cmd = _context.CreateCommand(sqlBuilder))
            await cmd.ExecuteNonQueryAsync();
        return true;
    }
    return false;
}
Each method has something like the following attributes as well
public interface IAlertScheduleService
{
    [Hangfire.Queue("alert-schedule")]
    [Hangfire.DisableConcurrentExecution(60 * 60 * 5)]
    Task RunAllAsync(PerformContext context);
}
Simple implementation of the interface
public class AlertScheduleService : IAlertScheduleService
{
    public async Task RunAllAsync(PerformContext context)
    {
        if (await IsAlreadyQueuedAsync(context))
            return;
        // guess it isn't queued, so run it here....
    }
}
Here is how I am adding my scheduled jobs:
//// our recurring jobs
//// set these to run hourly, so they can play "catch-up" if needed
RecurringJob.AddOrUpdate<IAlertScheduleService>(e => e.RunAllAsync(null), Cron.Hourly(0), queue: "alert-schedule");
Why does this happen? How can I stop it from happening?
Somewhat of a blind shot: preventing a job from being queued if a job is already queued in the same queue.
The try-catch logic is quite ugly, but I have no better idea right now...
Also, I'm really not sure the lock logic always prevents having two jobs in EnqueuedState, but it should help anyway. Maybe mix it with an IApplyStateFilter.
public class DoNotQueueIfAlreadyQueued : IElectStateFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        if (context.CandidateState is EnqueuedState)
        {
            EnqueuedState es = context.CandidateState as EnqueuedState;
            IDisposable distributedLock = null;
            try
            {
                while (distributedLock == null)
                {
                    try
                    {
                        distributedLock = context.Connection.AcquireDistributedLock($"{nameof(DoNotQueueIfAlreadyQueued)}-{es.Queue}", TimeSpan.FromSeconds(1));
                    }
                    catch { }
                }
                var m = context.Storage.GetMonitoringApi();
                if (m.EnqueuedCount(es.Queue) > 0)
                {
                    context.CandidateState = new DeletedState();
                }
            }
            finally
            {
                distributedLock.Dispose();
            }
        }
    }
}
The filter can be declared as in this answer
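For reference, a state filter like this is typically registered globally at startup; a minimal sketch, assuming it goes next to the AddHangfire configuration shown in the question:

// Register the filter so it runs for every background job.
GlobalJobFilters.Filters.Add(new DoNotQueueIfAlreadyQueued());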
There seems to be a bug with your currently used Hangfire storage implementation:
https://github.com/HangfireIO/Hangfire/issues/1025
The current options are:
Switching to Hangfire.LiteDB, as commented here: https://github.com/HangfireIO/Hangfire/issues/1025#issuecomment-686433594
Implementing your own logic to enqueue a job, but this would take more effort.
Making your job execution idempotent to avoid side effects in case it's executed multiple times.
Whichever option you choose, you should still apply DisableConcurrentExecution and make your job execution idempotent as explained below, so I think you can just go with the last option:
Applying DisableConcurrentExecution is necessary, but it's not enough, as there are no reliable automatic failure detectors in distributed systems. That's the nature of distributed systems: we usually have to rely on timeouts to detect failures, and that is not reliable.
Hangfire is designed to run with at-least-once execution semantics. Explained below:
One of your servers may be executing the job, but it's detected as being failed due to various reasons. For example: your current processing server does not send heartbeats in time due to a temporary network issue or due to temporary high load.
When the current processing server is assumed to be failed (but it's not), the job will be scheduled to another server which causes it to be executed more than once.
The solution is still to apply the DisableConcurrentExecution attribute as a best effort to prevent multiple executions of the same job, but the main thing is to make the execution of the job idempotent, so that it does not cause side effects if it is executed multiple times.
Please refer to some quotes from https://docs.hangfire.io/en/latest/background-processing/throttling.html:
Throttlers apply only to different background jobs, and there's no reliable way to prevent multiple executions of the same background job other than by using transactions in background job method itself. DisableConcurrentExecution may help a bit by narrowing the safety violation surface, but it heavily relies on an active connection, which may be broken (and lock is released) without any notification for our background job.

As there are no reliable automatic failure detectors in distributed systems, it is possible that the same job is being processed on different workers in some corner cases. Unlike OS-based mutexes, mutexes in this package don't protect from this behavior so develop accordingly.

DisableConcurrentExecution filter may reduce the probability of violation of this safety property, but the only way to guarantee it is to use transactions or CAS-based operations in our background jobs to make them idempotent.
You can also refer to this, as Hangfire's timeout behavior seems to be storage-dependent as well: https://github.com/HangfireIO/Hangfire/issues/1960#issuecomment-962884011

Why is (Dapper) async IO extremely slow?

I'm using load-tests to analyze the "ball park" performance of Dapper accessing SQL Server. My laptop is simultaneously the load-generator and the test target. My laptop has 2 cores, 16GB RAM, and is running Windows 10 Pro, v1709. The database is SQL Server 2017 running in a Docker container (the container's Hyper-V VM has 3GB RAM). My load-test and test code is using .net 4.6.1.
My load-test results after 15 seconds of a simulated 10 simultaneous clients are as follows:
Synchronous Dapper code: 750+ transactions per second.
Asynchronous Dapper code: 4 to 8 transactions per second. YIKES!
I realize that async can sometimes be slower than synchronous code. I also realize that my test setup is weak. However, I shouldn't be seeing such horrible performance from asynchronous code.
I've narrowed the problem to something associated with Dapper and the System.Data.SqlClient.SqlConnection. I need help to finally solve this. Profiler results are below.
I figured out a cheesy way to force my async code to achieve 650+ transactions per second, which I'll discuss in a bit, but first it is time to show my code, which is just a console app. I have a test class:
public class FitTest
{
    private List<ItemRequest> items;
    public FitTest()
    {
        //Parameters used for the Dapper call to the stored procedure.
        items = new List<ItemRequest> {
            new ItemRequest { SKU = "0010015488000060", ReqQty = 2 },
            new ItemRequest { SKU = "0010015491000060", ReqQty = 1 }
        };
    }
... //the rest not listed.
Synchronous Test Target
Within the FitTest class, under load, the following test-target method achieves 750+ transactions per second:
public Task LoadDB()
{
    var skus = items.Select(x => x.SKU);
    string procedureName = "GetWebInvBySkuList";
    string userDefinedTable = "[dbo].[StringList]";
    string connectionString = "Data Source=localhost;Initial Catalog=Web_Inventory;Integrated Security=False;User ID=sa;Password=1Secure*Password1;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
    var dt = new DataTable();
    dt.Columns.Add("Id", typeof(string));
    foreach (var sku in skus)
    {
        dt.Rows.Add(sku);
    }
    using (var conn = new SqlConnection(connectionString))
    {
        var inv = conn.Query<Inventory>(
            procedureName,
            new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
            commandType: CommandType.StoredProcedure);
        return Task.CompletedTask;
    }
}
I am not explicitly opening or closing the SqlConnection. I understand that Dapper does that for me. Also, the only reason the above code returns a Task is because my load-generation code is designed to work with that signature.
Asynchronous Test Target
The other test-target method in my FitTest class is this:
public async Task<IEnumerable<Inventory>> LoadDBAsync()
{
    var skus = items.Select(x => x.SKU);
    string procedureName = "GetWebInvBySkuList";
    string userDefinedTable = "[dbo].[StringList]";
    string connectionString = "Data Source=localhost;Initial Catalog=Web_Inventory;Integrated Security=False;User ID=sa;Password=1Secure*Password1;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
    var dt = new DataTable();
    dt.Columns.Add("Id", typeof(string));
    foreach (var sku in skus)
    {
        dt.Rows.Add(sku);
    }
    using (var conn = new SqlConnection(connectionString))
    {
        return await conn.QueryAsync<Inventory>(
            procedureName,
            new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
            commandType: CommandType.StoredProcedure).ConfigureAwait(false);
    }
}
Again, I'm not explicitly opening or closing the connection, because Dapper does that for me. I have also tested this code with explicit opening and closing; it does not change the performance. The profiler results for the load-generator acting against the above code (4 TPS) are as follows:
What DOES change the performance is if I change the above as follows:
//using (var conn = new SqlConnection(connectionString))
//{
    var inv = await conn.QueryAsync<Inventory>(
        procedureName,
        new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
        commandType: CommandType.StoredProcedure);
    var foo = inv.ToArray();
    return inv;
//}
In this case I've converted the SqlConnection into a private member of the FitTest class and initialized it in the constructor. That is, one SqlConnection per client per load-test session. It is never disposed during the load-test. I also changed the connection string to include "MultipleActiveResultSets=True", because now I started getting those errors.
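In other words, the change was roughly this; a sketch of my reading of the description above, with the field name being my own:

public class FitTest
{
    // One connection per simulated client, created once and never disposed during the load-test.
    private readonly SqlConnection conn;

    public FitTest()
    {
        // MultipleActiveResultSets=True is needed once async commands can overlap on the shared connection.
        conn = new SqlConnection(connectionString + ";MultipleActiveResultSets=True");
        conn.Open();
        // ... items initialized as shown earlier ...
    }
}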
With these changes, my results become: 640+ transactions per second, with 8 exceptions thrown. The exceptions were all "InvalidOperationException: BeginExecuteReader requires an open and available Connection. The connection's current state is connecting." The profiler results in this case:
That looks to me like a synchronization bug in Dapper with the SqlConnection.
Load-Generator
My load-generator, a class called Generator, is designed to be given a list of delegates when constructed. Each delegate has a unique instantiation of the FitTest class. If I supply an array of 10 delegates, it is interpreted as representing 10 clients to be used for generating load in parallel.
To kick off the load test, I have this:
//This `testFuncs` array (indirectly) points to either instances
//of the synchronous test-target, or the async test-target, depending
//on what I'm measuring.
private Func<Task>[] testFuncs;
private Dictionary<int, Task> map;
private TaskCompletionSource<bool> completionSource;

public void RunWithMultipleClients()
{
    completionSource = new TaskCompletionSource<bool>();
    //Create a dictionary that has indexes and Task completion status info.
    //The indexes correspond to the testFuncs[] array (viz. the test clients).
    map = testFuncs
        .Select((f, j) => new KeyValuePair<int, Task>(j, Task.CompletedTask))
        .ToDictionary(p => p.Key, v => v.Value);
    //scenario.Duration is usually '15'. In other words, this test
    //will terminate after generating load for 15 seconds.
    Task.Delay(scenario.Duration * 1000).ContinueWith(x => {
        running = false;
        completionSource.SetResult(true);
    });
    RunWithMultipleClientsLoop();
    completionSource.Task.Wait();
}
So much for the setup; the actual load is generated as follows:
public void RunWithMultipleClientsLoop()
{
    //while (running)
    //{
    var idleList = map.Where(x => x.Value.IsCompleted).Select(k => k.Key).ToArray();
    foreach (var client in idleList)
    {
        //I've tried both of the following. The `Task.Run` version
        //is about 20% faster for the synchronous test target.
        map[client] = Task.Run(testFuncs[client]);
        //map[client] = testFuncs[client]();
    }
    Task.WhenAny(map.Values.ToArray())
        .ContinueWith(x => { if (running) RunWithMultipleClientsLoop(); });
    // Task.WaitAny(map.Values.ToArray());
    //}
}
The while loop and Task.WaitAny, commented out, represent a different approach that has nearly the same performance; I keep it around for experiments.
One last detail. Each of the "client" delegates I pass in is first wrapped inside a metrics-capture function. The metrics capture function looks like this:
private async Task LoadLogic(Func<Task> testCode)
{
    try
    {
        if (!running)
        {
            slagCount++;
            return;
        }
        //This is where the actual test target method is called.
        await testCode().ConfigureAwait(false);
        if (running)
        {
            successCount++;
        }
        else
        {
            slagCount++;
        }
    }
    catch (Exception ex)
    {
        if (ex.Message.Contains("Assert"))
        {
            errorCount++;
        }
        else
        {
            exceptionCount++;
        }
    }
}
When my code runs, I do not receive any errors or exceptions.
Ok, what am I doing wrong? In the worst case scenario, I would expect the async code to be only slightly slower than synchronous.

How to make two SQL queries really asynchronous

My problem is based on a real project problem, but I have never used the System.Threading.Tasks library or performed any serious programming involving threads, so my question may be a mix of lacking knowledge about this specific library and a more general misunderstanding of what asynchronous really means in terms of programming.
So my real-world case is this: I need to fetch data about a user. In my current scenario it's financial data, so let's say I need all Accounts, all Deposits and all Consignations for a certain user. In my case this means querying millions of records for each property, and each query is relatively slow itself; fetching the Accounts, however, is several times slower than fetching the Deposits. So I have defined three classes for the three bank products I'm going to use, and when I want to fetch the data for all the bank products of a certain user I do something like this:
List<Account> accounts = GetAccountsForClient(clientId);
List<Deposit> deposits = GetDepositsForClient(clientId);
List<Consignation> consignations = GetConsignationsForClient(clientId);
So the problem starts here: I need to get all three lists at the same time, because I'm going to pass them to the view where I display all the user's data. But as it is right now, the execution is sequential (if I'm using the term correctly here), so the total time for collecting the data for all three products is:
Total_Time = Time_To_Get_Accounts + Time_To_Get_Deposits + Time_To_Get_Consignations
This is not good because each query is relatively slow, so the total time is pretty big; but also, the accounts query takes much more time than the other two queries. The idea that got into my head today was: "What if I could execute these queries simultaneously?" Maybe here comes my biggest misunderstanding of the topic, but to me the closest thing to this idea is to make them asynchronous, so that Total_Time is roughly the time of the slowest query rather than the sum of all three.
Since my real code is complicated, I created a simple use case which, I think, reflects what I'm trying to do pretty well. I have two methods:
public static async Task<int> GetAccounts()
{
    int total1 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query1 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Accounts]";
        SqlCommand command = new SqlCommand(query1, connection);
        connection.Open();
        for (int i = 0; i < 19000000; i++)
        {
            string s = i.ToString();
        }
        total1 = (int) await command.ExecuteScalarAsync();
        Console.WriteLine(total1.ToString());
    }
    return total1;
}
and the second method:
public static async Task<int> GetDeposits()
{
    int total2 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query2 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Deposits]";
        SqlCommand command = new SqlCommand(query2, connection);
        connection.Open();
        total2 = (int) await command.ExecuteScalarAsync();
        Console.WriteLine(total2.ToString());
    }
    return total2;
}
which I call like this:
static void Main(string[] args)
{
    Console.WriteLine(GetAccounts().Result.ToString());
    Console.WriteLine(GetDeposits().Result.ToString());
}
As you can see, I call GetAccounts() first, and I slow its execution down on purpose to give the execution a chance to continue to the next method. However, I don't get any result for a certain period of time, and then everything is printed to the console at the same time.
So the problem: how do I avoid waiting for the first method to finish before moving to the next one? In general the code structure is not that important; what I really want to figure out is whether there's any way to make both queries execute at the same time. The sample here is the result of my research, which maybe could be extended to the point where I get the desired result.
P.S. I'm using ExecuteScalarAsync() just because I started with a method that was using it. In reality I'm going to use Scalar and Reader.
When you use the Result property on a task that hasn't completed yet, the calling thread blocks until the operation completes. That means, in your case, that the GetAccounts operation needs to complete before the call to GetDeposits starts.
If you want to make sure these methods run in parallel (including the synchronous CPU-intensive parts), you need to offload that work to another thread. The simplest way to do so would be to use Task.Run:
static async Task Main()
{
    var accountTask = Task.Run(async () => Console.WriteLine(await GetAccounts()));
    var depositsTask = Task.Run(async () => Console.WriteLine(await GetDeposits()));
    await Task.WhenAll(accountTask, depositsTask);
}
Here's a way to perform two tasks asynchronously and in parallel:
Task<int> accountTask = GetAccounts();
Task<int> depositsTask = GetDeposits();
int[] results = await Task.WhenAll(accountTask, depositsTask);
int accounts = results[0];
int deposits = results[1];
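For completeness, a minimal sketch of hosting this snippet (an async Main, available since C# 7.1, is my assumption); the key point is that both calls are started before either is awaited:

static async Task Main(string[] args)
{
    Task<int> accountTask = GetAccounts();   // both queries are now in flight
    Task<int> depositsTask = GetDeposits();
    int[] results = await Task.WhenAll(accountTask, depositsTask);
    Console.WriteLine($"Accounts: {results[0]}, Deposits: {results[1]}");
}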
I generally prefer to use Task.WaitAll. To set up for this code segment, I changed the GetAccounts/GetDeposits signatures to just return int (public static int GetAccounts()).
I placed the Console.WriteLine on the same thread that assigns the return value, to validate that GetDeposits returns before GetAccounts has; but this is unnecessary, and it's probably best to move it after the Task.WaitAll.
private static void Main(string[] args)
{
    int getAccountsTask = 0;
    int getDepositsTask = 0;
    List<Task> tasks = new List<Task>()
    {
        Task.Factory.StartNew(() =>
        {
            getAccountsTask = GetAccounts();
            Console.WriteLine(getAccountsTask);
        }),
        Task.Factory.StartNew(() =>
        {
            getDepositsTask = GetDeposits();
            Console.WriteLine(getDepositsTask);
        })
    };
    Task.WaitAll(tasks.ToArray());
}
If it's ASP.NET, use AJAX to fetch after the page is rendered and put the data in a store; each AJAX fetch is asynchronous. But what if you want to create simultaneous SQL queries on the server?
Usage:
// Add some queries, i.e. ThreadedQuery.NamedQuery([Name], [SQL])
var namedQueries = new ThreadedQuery.NamedQuery[] { ... };
System.Data.DataSet ds = ThreadedQuery.RunThreadedQuery(
    "Server=foo;Database=bar;Trusted_Connection=True;",
    namedQueries).Result;
string msg = string.Empty;
foreach (System.Data.DataTable tt in ds.Tables)
    msg += string.Format("{0}: {1}\r\n", tt.TableName, tt.Rows.Count);
Source:
public class ThreadedQuery
{
    public class NamedQuery
    {
        public NamedQuery(string TableName, string SQL)
        {
            this.TableName = TableName;
            this.SQL = SQL;
        }
        public string TableName { get; set; }
        public string SQL { get; set; }
    }

    public static async System.Threading.Tasks.Task<System.Data.DataSet> RunThreadedQuery(string ConnectionString, params NamedQuery[] queries)
    {
        System.Data.DataSet dss = new System.Data.DataSet();
        List<System.Threading.Tasks.Task<System.Data.DataTable>> asyncQryList = new List<System.Threading.Tasks.Task<System.Data.DataTable>>();
        foreach (var qq in queries)
            asyncQryList.Add(fetchDataTable(qq, ConnectionString));
        foreach (var tsk in asyncQryList)
        {
            System.Data.DataTable tmp = await tsk.ConfigureAwait(false);
            dss.Tables.Add(tmp);
        }
        return dss;
    }

    private static async System.Threading.Tasks.Task<System.Data.DataTable> fetchDataTable(NamedQuery qry, string ConnectionString)
    {
        // Create a connection, open it and create a command on the connection
        try
        {
            System.Data.DataTable dt = new System.Data.DataTable(qry.TableName);
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                await connection.OpenAsync().ConfigureAwait(false);
                System.Diagnostics.Debug.WriteLine("Connection Opened ... " + qry.TableName);
                using (SqlCommand command = new SqlCommand(qry.SQL, connection))
                {
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        System.Diagnostics.Debug.WriteLine("Query Executed ... " + qry.TableName);
                        dt.Load(reader);
                        System.Diagnostics.Debug.WriteLine(string.Format("Record Count '{0}' ... {1}", dt.Rows.Count, qry.TableName));
                        return dt;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine("Exception Raised ... " + qry.TableName);
            System.Diagnostics.Debug.WriteLine(ex.Message);
            return new System.Data.DataTable(qry.TableName);
        }
    }
}
Async is great if the process takes a long time. Another option would be to have one stored procedure that returns all three record sets (VB.NET shown below):
adp = New SqlDataAdapter(cmd)
dst = New DataSet
adp.Fill(dst)
In the code-behind of the page, reference them as dst.Tables(0), dst.Tables(1), and dst.Tables(2). The tables will be in the same order as the SELECT statements in the stored procedure.
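Since the rest of this thread is C#, here is a hedged C# equivalent of that VB.NET fragment; the stored procedure name is hypothetical:

// Fill one DataSet from a stored procedure that SELECTs three result sets.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("GetClientProducts", conn) { CommandType = CommandType.StoredProcedure })
using (var adp = new SqlDataAdapter(cmd))
{
    var dst = new DataSet();
    adp.Fill(dst); // dst.Tables[0], [1] and [2] follow the order of the SELECT statements
}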

Using SQL Server application locks to solve locking requirements

I have a large application based on Dynamics CRM 2011 that in various places has code that must query for a record based upon some criteria, create it if it doesn't exist, or else update it.
An example of the kind of thing I am talking about would be similar to this:
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    record.Id = Guid.NewGuid();
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
In terms of CRM 2011 implementation (although not strictly relevant to this question), the code could be triggered from synchronous or asynchronous plugins. The issue is that the code is not thread safe: between checking whether the record exists and creating it if it doesn't, another thread could come in and do the same thing first, resulting in duplicate records.
Normal locking methods are not reliable due to the architecture of the system: various services using multiple threads could all be running the same code, and these services are also load-balanced across multiple machines.
In trying to find a solution to this problem that doesn't add massive amounts of extra complexity and doesn't compromise the idea of not having a single point of failure or a single point where a bottleneck could occur I came across the idea of using SQL Server application locks.
I came up with the following class:
public class SQLLock : IDisposable
{
    //Lock constants
    private const string _lockMode = "Exclusive";
    private const string _lockOwner = "Transaction";
    private const string _lockDbPrincipal = "public";

    //Variable for storing the connection passed to the constructor
    private SqlConnection _connection;
    //Variable for storing the name of the Application Lock created in SQL
    private string _lockName;
    //Variable for storing the timeout value of the lock
    private int _lockTimeout;
    //Variable for storing the SQL Transaction containing the lock
    private SqlTransaction _transaction;
    //Variable for storing if the lock was created ok
    private bool _lockCreated = false;

    public SQLLock(string lockName, int lockTimeout = 180000)
    {
        _connection = Connection.GetMasterDbConnection();
        _lockName = lockName;
        _lockTimeout = lockTimeout;
        //Create the Application Lock
        CreateLock();
    }

    public void Dispose()
    {
        //Release the Application Lock if it was created
        if (_lockCreated)
        {
            ReleaseLock();
        }
        _connection.Close();
        _connection.Dispose();
    }

    private void CreateLock()
    {
        _transaction = _connection.BeginTransaction();
        using (SqlCommand createCmd = _connection.CreateCommand())
        {
            createCmd.Transaction = _transaction;
            createCmd.CommandType = System.Data.CommandType.Text;
            StringBuilder sbCreateCommand = new StringBuilder();
            sbCreateCommand.AppendLine("DECLARE @res INT");
            sbCreateCommand.AppendLine("EXEC @res = sp_getapplock");
            sbCreateCommand.Append("@Resource = '").Append(_lockName).AppendLine("',");
            sbCreateCommand.Append("@LockMode = '").Append(_lockMode).AppendLine("',");
            sbCreateCommand.Append("@LockOwner = '").Append(_lockOwner).AppendLine("',");
            sbCreateCommand.Append("@LockTimeout = ").Append(_lockTimeout).AppendLine(",");
            sbCreateCommand.Append("@DbPrincipal = '").Append(_lockDbPrincipal).AppendLine("'");
            sbCreateCommand.AppendLine("IF @res NOT IN (0, 1)");
            sbCreateCommand.AppendLine("BEGIN");
            sbCreateCommand.AppendLine("RAISERROR ( 'Unable to acquire Lock', 16, 1 )");
            sbCreateCommand.AppendLine("END");
            createCmd.CommandText = sbCreateCommand.ToString();
            try
            {
                createCmd.ExecuteNonQuery();
                _lockCreated = true;
            }
            catch (Exception ex)
            {
                _transaction.Rollback();
                throw new Exception(string.Format("Unable to get SQL Application Lock on '{0}'", _lockName), ex);
            }
        }
    }

    private void ReleaseLock()
    {
        using (SqlCommand releaseCmd = _connection.CreateCommand())
        {
            releaseCmd.Transaction = _transaction;
            releaseCmd.CommandType = System.Data.CommandType.StoredProcedure;
            releaseCmd.CommandText = "sp_releaseapplock";
            releaseCmd.Parameters.AddWithValue("@Resource", _lockName);
            releaseCmd.Parameters.AddWithValue("@LockOwner", _lockOwner);
            releaseCmd.Parameters.AddWithValue("@DbPrincipal", _lockDbPrincipal);
            try
            {
                releaseCmd.ExecuteNonQuery();
            }
            catch { }
        }
        _transaction.Commit();
    }
}
I would use this in my code to create a SQL Server application lock, using the unique key I am querying for as the lock name, like this:
using (var sqlLock = new SQLLock(id))
{
    //Code to check for and create or update record here
}
Now this approach seems to work; however, I am by no means any kind of SQL Server expert and am wary about putting this anywhere near production code.
My question really has 3 parts:
1. Is this a really bad idea because of something I haven't considered?
Are SQL Server application locks completely unsuitable for this purpose?
Is there a maximum number of application locks (with different names) you can have at a time?
Are there performance considerations if a potentially large number of locks are created?
What else could be an issue with the general approach?
2. Is the solution actually implemented above any good?
If SQL Server application locks are usable like this, have I actually used them properly?
Is there a better way of using SQL Server to achieve the same result?
In the code above I am getting a connection to the Master database and creating the locks in there. Does that potentially cause other issues? Should I create the locks in a different database?
3. Is there a completely alternative approach that could be used that doesn't use SQL Server application locks?
I can't use stored procedures to create and update the record (unsupported in CRM 2011).
I don't want to add a single point of failure.
You can do this much more easily.
//make sure your plugin runs within a transaction; this is the case for stage 20 and 40
//you can check this with IExecutionContext.IsInTransaction
//does not work with offline plugins, but works within CRM Online (cloud) and is fully supported
//also works on transaction rollback
var lockUpdateEntity = new dummy_lock_entity(); //simple technical entity with as many rows as different lock barriers you need
lockUpdateEntity.Id = Guid.Parse("well known guid"); //well known guid for this barrier
lockUpdateEntity.dummy_field = Guid.NewGuid(); //just update/change a field to create a lock, no matter what its content is

//--------------- this is untested by me, i use the next one
context.UpdateObject(lockUpdateEntity);
context.SaveChanges();
//---------------
//OR
//--------------- i use this one, but you need a reference to your OrganizationService
OrganizationService.Update(lockUpdateEntity);
//---------------

//threads wait here if they have no lock for dummy_lock_entity with "well known guid"
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    //record.Id = Guid.NewGuid(); //not needed
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
//let the pipeline flow and the transaction complete ...
For more background info refer to http://www.crmsoftwareblog.com/2012/01/implementing-robust-microsoft-dynamics-crm-2011-auto-numbering-using-transactions/

Task launched from a Threading.Timer

*Edit: Please see my answer below for the solution.
Is there any danger in the following? I'm trying to track down what I think might be a race condition. I figured I'd start with this and go from there.
private BlockingCollection<MyTaskType> _MainQ = new BlockingCollection<MyTaskType>();

private void Start()
{
    _CheckTask = new Timer(new TimerCallback(CheckTasks), null, 10, 5000);
}

private void CheckTasks(object state)
{
    _CheckTask.Change(Timeout.Infinite, Timeout.Infinite);
    GetTask();
    _CheckTask.Change(5000, 5000);
}

private void GetTask()
{
    //get task from database to object
    Task.Factory.StartNew(delegate {
        AddToWorkQueue(); //this adds to _MainQ which is a BlockingCollection
    });
}

private void AddToWorkQueue()
{
    //do some stuff to get stuff to move
    _MainQ.Add(dataobject);
}
Edit: I am also using a static class to handle writing to the database. Each call, though made from many threads, has its own unique data, so no data is shared. Do you think this could be a source of contention?
Code below:
public static void ExecuteNonQuery(string connectionString, string sql, CommandType commandType, List<FastSqlParam> paramCollection = null, int timeout = 60)
{
    //Console.WriteLine("{0} [Thread {1}] called ExecuteNonQuery", DateTime.Now.ToString("HH:mm:ss:ffffff"), System.Threading.Thread.CurrentThread.ManagedThreadId);
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(sql, connection))
    {
        try
        {
            if (paramCollection != null)
            {
                foreach (FastSqlParam fsqlParam in paramCollection)
                {
                    try
                    {
                        SqlParameter param = new SqlParameter();
                        param.Direction = fsqlParam.ParamDirection;
                        param.Value = fsqlParam.ParamValue;
                        param.ParameterName = fsqlParam.ParamName;
                        param.SqlDbType = fsqlParam.ParamType;
                        command.Parameters.Add(param);
                    }
                    catch (ArgumentNullException anx)
                    {
                        throw new Exception("Parameter value was null", anx);
                    }
                    catch (InvalidCastException icx)
                    {
                        throw new Exception("Could not cast parameter value", icx);
                    }
                }
            }
            connection.Open();
            command.CommandType = commandType;
            command.CommandTimeout = timeout;
            command.ExecuteNonQuery();
            if (paramCollection != null)
            {
                foreach (FastSqlParam fsqlParam in paramCollection)
                {
                    if (fsqlParam.ParamDirection == ParameterDirection.InputOutput || fsqlParam.ParamDirection == ParameterDirection.Output)
                        try
                        {
                            fsqlParam.ParamValue = command.Parameters[fsqlParam.ParamName].Value;
                        }
                        catch (ArgumentNullException anx)
                        {
                            throw new Exception("Output parameter value was null", anx);
                        }
                        catch (InvalidCastException icx)
                        {
                            throw new Exception("Could not cast parameter value", icx);
                        }
                }
            }
        }
        catch (SqlException ex)
        {
            throw ex;
        }
        catch (ArgumentException ex)
        {
            throw ex;
        }
    }
}
per request:
FastSql.ExecuteNonQuery(connectionString, "someProc", System.Data.CommandType.StoredProcedure, new List<FastSqlParam>() { new FastSqlParam(SqlDbType.Int, "@SomeParam", variable) });
Also, I wanted to note that this code seems to fail at random when run from VS2010 [Debug or Release]. When I do a release build and run the setup on the dev server that will be hosting it, the application has not crashed and has been running smoothly.
per request:
Current architecture of threads:
Thread A reads 1 record from a database scheduling table.
If a row is returned, Thread A launches a Task to log in to a resource to see whether there are files to transfer. The task references an object containing data from the DataTable that was created using a static call. Basically as below.
If files are found, the Task adds the files to move to _MainQ.
//Called from Thread A
void ProcessTask()
{
    var parameters = new List<FastSqlParam>() { new FastSqlParam(SqlDbType.Int, "@SomeParam", variable) };
    using (DataTable someTable = FastSql.ExecuteDataTable(connectionString, "someProc", CommandType.StoredProcedure, parameters))
    {
        SomeTask task = new SomeTask();
        //assign task some data from dt.Rows[0]
        if (task != null)
        {
            Task.Factory.StartNew(delegate { AddFilesToQueue(task); });
        }
    }
}

void AddFilesToQueue(SomeTask task)
{
    //connect to remote system and build collection of files to WorkItem
    //e.g. WorkItem will have a collection of collections to transfer. We control this throttling mechanism to allow more threads to split up the work
    _MainQ.Add(WorkItem);
}
Do you think there could be a problem returning a value from FastSql.ExecuteDataTable, since FastSql is a static class, and then using it within a using block?
I'd personally be wary of introducing extra threads in quite so many places - "Here be Dragons" is a useful rule when it comes to working with threads! I can't see any problems with what you have, but if it were simpler it'd be easier to be more certain. I'm assuming you want the call to "AddToWorkQueue" to be done in a different thread (to test the race condition) so I've left that in.
Does this do what you need it to? (eye compiled so may be wrong)
while (true)
{
    Task.Factory.StartNew(delegate { AddToWorkQueue(); });
    Thread.Sleep(5000);
}
random aside - prefer "throw;" to "throw ex;" - the former preserves the original call stack, the latter will only give you the line number of the "throw ex;" call. Even better, omit the try/catch in this case as all you do is re-throw the exceptions, so you may as well save yourself the overhead of the try.
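To make the aside concrete, a minimal sketch of the difference (my own illustration, not code from the question):

try
{
    command.ExecuteNonQuery();
}
catch (SqlException)
{
    // "throw;" rethrows the current exception and preserves the original stack trace;
    // "throw ex;" would reset the stack trace to this line.
    throw;
}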
It turns out the problem was a very, very strange one.
I converted the original solution from a .NET 3.5 solution to a .NET 4.0 solution. The locking up problem went away when I re-created the entire solution in a brand new .NET 4.0 solution. No other changes were introduced, so I am very confident the problem was the upgrade to 4.0.
