Why is (Dapper) async IO extremely slow? - c#

I'm using load-tests to analyze the "ball park" performance of Dapper accessing SQL Server. My laptop is simultaneously the load-generator and the test target. My laptop has 2 cores, 16GB RAM, and is running Windows 10 Pro, v1709. The database is SQL Server 2017 running in a Docker container (the container's Hyper-V VM has 3GB RAM). My load-test and test code use .NET 4.6.1.
My load-test results after 15 seconds of a simulated 10 simultaneous clients are as follows:
Synchronous Dapper code: 750+ transactions per second.
Asynchronous Dapper code: 4 to 8 transactions per second. YIKES!
I realize that async can sometimes be slower than synchronous code. I also realize that my test setup is weak. However, I shouldn't be seeing such horrible performance from asynchronous code.
I've narrowed the problem to something associated with Dapper and the System.Data.SqlClient.SqlConnection. I need help to finally solve this. Profiler results are below.
I figured out a cheesy way to force my async code to achieve 650+ transactions per second, which I'll discuss shortly, but first here is my code (a simple console app). I have a test class:
public class FitTest
{
    private List<ItemRequest> items;

    public FitTest()
    {
        //Parameters used for the Dapper call to the stored procedure.
        items = new List<ItemRequest> {
            new ItemRequest { SKU = "0010015488000060", ReqQty = 2 },
            new ItemRequest { SKU = "0010015491000060", ReqQty = 1 }
        };
    }
    ... //the rest not listed.
Synchronous Test Target
Within the FitTest class, under load, the following test-target method achieves 750+ transactions per second:
public Task LoadDB()
{
    var skus = items.Select(x => x.SKU);
    string procedureName = "GetWebInvBySkuList";
    string userDefinedTable = "[dbo].[StringList]";
    string connectionString = "Data Source=localhost;Initial Catalog=Web_Inventory;Integrated Security=False;User ID=sa;Password=1Secure*Password1;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
    var dt = new DataTable();
    dt.Columns.Add("Id", typeof(string));
    foreach (var sku in skus)
    {
        dt.Rows.Add(sku);
    }
    using (var conn = new SqlConnection(connectionString))
    {
        var inv = conn.Query<Inventory>(
            procedureName,
            new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
            commandType: CommandType.StoredProcedure);
        return Task.CompletedTask;
    }
}
I am not explicitly opening or closing the SqlConnection. I understand that Dapper does that for me. Also, the only reason the above code returns a Task is because my load-generation code is designed to work with that signature.
Asynchronous Test Target
The other test-target method in my FitTest class is this:
public async Task<IEnumerable<Inventory>> LoadDBAsync()
{
    var skus = items.Select(x => x.SKU);
    string procedureName = "GetWebInvBySkuList";
    string userDefinedTable = "[dbo].[StringList]";
    string connectionString = "Data Source=localhost;Initial Catalog=Web_Inventory;Integrated Security=False;User ID=sa;Password=1Secure*Password1;Connect Timeout=30;Encrypt=False;TrustServerCertificate=True;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
    var dt = new DataTable();
    dt.Columns.Add("Id", typeof(string));
    foreach (var sku in skus)
    {
        dt.Rows.Add(sku);
    }
    using (var conn = new SqlConnection(connectionString))
    {
        return await conn.QueryAsync<Inventory>(
            procedureName,
            new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
            commandType: CommandType.StoredProcedure).ConfigureAwait(false);
    }
}
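The explicit-open variant I tested looked roughly like this (a sketch; only the using block changes, and the locals are the same as above):

```csharp
using (var conn = new SqlConnection(connectionString))
{
    //Explicitly open asynchronously before handing the connection to Dapper.
    await conn.OpenAsync().ConfigureAwait(false);
    return await conn.QueryAsync<Inventory>(
        procedureName,
        new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
        commandType: CommandType.StoredProcedure).ConfigureAwait(false);
}
```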
Again, I'm not explicitly opening or closing the connection - because Dapper does that for me. I have also tested this code with explicitly opening and closing; it does not change the performance. The profiler results for the load-generator acting against the above code (4 TPS) are as follows:
What DOES change the performance is if I change the above as follows:
//using (var conn = new SqlConnection(connectionString))
//{
    var inv = await conn.QueryAsync<Inventory>(
        procedureName,
        new { skuList = dt.AsTableValuedParameter(userDefinedTable) },
        commandType: CommandType.StoredProcedure);
    var foo = inv.ToArray();
    return inv;
//}
In this case I've converted the SqlConnection into a private member of the FitTest class and initialized it in the constructor. That is, one SqlConnection per client per load-test session, never disposed during the load-test. I also changed the connection string to include "MultipleActiveResultSets=True", because otherwise I started getting errors about multiple active result sets.
With these changes, my results become: 640+ transactions per second, and with 8 exceptions thrown. The exceptions were all "InvalidOperationException: BeginExecuteReader requires an open and available Connection. The connection's current state is connecting." The profiler results in this case:
That looks to me like a synchronization bug in Dapper with the SqlConnection.
Load-Generator
My load-generator, a class called Generator, is designed to be given a list of delegates when constructed. Each delegate has a unique instantiation of the FitTest class. If I supply an array of 10 delegates, it is interpreted as representing 10 clients to be used for generating load in parallel.
To kick off the load test, I have this:
//This `testFuncs` array (indirectly) points to either instances
//of the synchronous test-target, or the async test-target, depending
//on what I'm measuring.
private Func<Task>[] testFuncs;
private Dictionary<int, Task> map;
private TaskCompletionSource<bool> completionSource;

public void RunWithMultipleClients()
{
    completionSource = new TaskCompletionSource<bool>();
    //Create a dictionary that has indexes and Task completion status info.
    //The indexes correspond to the testFuncs[] array (viz. the test clients).
    map = testFuncs
        .Select((f, j) => new KeyValuePair<int, Task>(j, Task.CompletedTask))
        .ToDictionary(p => p.Key, v => v.Value);
    //scenario.Duration is usually '15'. In other words, this test
    //will terminate after generating load for 15 seconds.
    Task.Delay(scenario.Duration * 1000).ContinueWith(x => {
        running = false;
        completionSource.SetResult(true);
    });
    RunWithMultipleClientsLoop();
    completionSource.Task.Wait();
}
So much for the setup; the actual load is generated as follows:
public void RunWithMultipleClientsLoop()
{
    //while (running)
    //{
    var idleList = map.Where(x => x.Value.IsCompleted).Select(k => k.Key).ToArray();
    foreach (var client in idleList)
    {
        //I've tried both of the following. The `Task.Run` version
        //is about 20% faster for the synchronous test target.
        map[client] = Task.Run(testFuncs[client]);
        //map[client] = testFuncs[client]();
    }
    Task.WhenAny(map.Values.ToArray())
        .ContinueWith(x => { if (running) RunWithMultipleClientsLoop(); });
    // Task.WaitAny(map.Values.ToArray());
    //}
}
The while loop and Task.WaitAny, commented out, represent a different approach that has nearly the same performance; I keep it around for experiments.
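For reference, the same idea can also be written as one long-running loop per simulated client, which avoids rebuilding the idle list on every pass (a sketch, not code I actually ran; it relies on the same `running` and `testFuncs` members):

```csharp
private async Task RunClientLoopAsync(Func<Task> testFunc)
{
    //Each simulated client issues requests back-to-back until the
    //load window closes (running flips to false).
    while (running)
    {
        await testFunc().ConfigureAwait(false);
    }
}

public Task RunWithMultipleClients2()
{
    //One independent loop per client delegate, all running concurrently.
    var clientLoops = testFuncs.Select(f => Task.Run(() => RunClientLoopAsync(f)));
    return Task.WhenAll(clientLoops);
}
```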
One last detail. Each of the "client" delegates I pass in is first wrapped inside a metrics-capture function. The metrics capture function looks like this:
private async Task LoadLogic(Func<Task> testCode)
{
    try
    {
        if (!running)
        {
            slagCount++;
            return;
        }
        //This is where the actual test target method
        //is called.
        await testCode().ConfigureAwait(false);
        if (running)
        {
            successCount++;
        }
        else
        {
            slagCount++;
        }
    }
    catch (Exception ex)
    {
        if (ex.Message.Contains("Assert"))
        {
            errorCount++;
        }
        else
        {
            exceptionCount++;
        }
    }
}
When my code runs, I do not receive any errors or exceptions.
Ok, what am I doing wrong? In the worst case scenario, I would expect the async code to be only slightly slower than synchronous.

Related

The network path was not found exception when running multiple LINQ queries in parallel

I'm using Entity Framework with .NET 5. My LINQ queries are inside a repository, and for each query I create and dispose a new DbContext. It looks something like this:
class MyRepository
{
    public List<entity> Query1(int x)
    {
        //connect to SQL Server to get data
        using (var context = new MyDbContext())
        {
            return context.SomeTable1.Where(s => s.Prop == x).ToList();
        }
    }

    public List<entity> Query2(string y)
    {
        //connect to SQL Server to get data
        using (var context = new MyDbContext())
        {
            return context.SomeTable2.Where(s => s.Prop2 == y).ToList();
        }
    }
}
class Calculator
{
    MyRepository _repo;

    public Calculator(MyRepository repo)
    {
        _repo = repo;
    }

    public void Calculate(CalcParam calcParam)
    {
        //one of these LINQ queries can throw an exception from time to time when run in parallel.
        //It works fine when run in sequence.
        var r1 = _repo.Query1(calcParam.x);
        var r2 = _repo.Query2(calcParam.y);
        var results = InMemoryCalculation(r1, r2);
        calcParam.AddResults(results);
    }
}
I have to run those queries hundreds of times to solve my problem.
When the calculations are performed in sequence everything works great. I start getting the issue when I increase the number of concurrent calls.
But the queries are independent, so I can run them in parallel like this:
//template action
Action<CalcParam> myAction = (calcParam) =>
{
    //Calculator will use MyRepository multiple times
    var calculator = new Calculator(new MyRepository());
    calculator.Calculate(calcParam);
};

//run in parallel
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
Parallel.ForEach(allCalcParams, options, (calcParam, state, index) =>
{
    myAction(calcParam);
});
Or, when using Polly inside Parallel.ForEach:
Policy.Handle<Exception>()
    .WaitAndRetry(WaitAndRetryArray, (exception, timeSpan, context) =>
    {
        Log.Exception(exception);
    })
    .Execute(() => myAction(calcParam));
where WaitAndRetryArray is
private readonly TimeSpan[] WaitAndRetryArray = new[]
{
    TimeSpan.FromSeconds(2),
    TimeSpan.FromSeconds(4),
    TimeSpan.FromSeconds(8),
    TimeSpan.FromSeconds(16)
};
The problem is that I'm frequently getting:
System.ComponentModel.Win32Exception (0x80004005): The network path was not found
exception on a random basis.
The App Server and DB Server are on different machines, but they live in the same basement on a dedicated network. Also, during the calculation the App Server and DB Server are using around 20% of their capacity (nothing else is running on them).
The queries are not heavy either; the longest one takes 6 seconds to complete (DB Server CPU utilization never goes beyond 20%).
To mitigate the problem I'm using Polly to retry failing calculations, but I can never go beyond 20% utilization and MaxDegreeOfParallelism = 4.
Could that problem be solely attributed to network issues?
Or should I be running multiple queries using a different approach than just looping in Parallel.ForEach? Could it be that it opens connections and keeps them alive after the DbContext is disposed?
It feels like the current solution is tripping over its own feet when I try to scale it up.
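One alternative I'm considering is to throttle the work with a SemaphoreSlim instead of Parallel.ForEach (a sketch only; the four-way cap mirrors MaxDegreeOfParallelism = 4, and Calculator/MyRepository/allCalcParams are the types and variables from the code above):

```csharp
//Cap concurrency at 4, matching the ParallelOptions setting above.
var throttle = new SemaphoreSlim(4);

var tasks = allCalcParams.Select(calcParam => Task.Run(async () =>
{
    //Wait for a free slot without blocking a thread-pool thread.
    await throttle.WaitAsync();
    try
    {
        //Same per-item work as in the Parallel.ForEach body.
        var calculator = new Calculator(new MyRepository());
        calculator.Calculate(calcParam);
    }
    finally
    {
        throttle.Release();
    }
}));

await Task.WhenAll(tasks);
```

Whether this changes the network-path errors is exactly what I'd want to measure; it mainly changes how the concurrency cap is enforced, not the number of connections opened.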
Thank you for help!

Can Execution Time be set in Entity Framework?

I have an application leveraging Entity Framework 6. For queries that are relatively fast (e.g. taking less than a minute to execute) it is working fine.
But I have a stored procedure that queries a table which doesn't have appropriate indices, so the query has been clocked at anywhere between 55 and 63 seconds. Obviously, indexing the table would bring that time down, but unfortunately I don't have the luxury of controlling the situation and have to deal with the hand I was dealt.
What I am seeing is that when EF6 is used to call the stored procedure, it continues through the code in less than 3 seconds total and returns a result of 0 records, when I know there are 6 records the SPROC will return when executed directly against the database.
There are no errors whatsoever, so the code is executing fine.
As a test, I constructed some code using the SqlClient library, made the same call, and it returned 6 records. I also noted that, unlike the EF6 execution, it actually took a few more seconds, as if it were actually waiting to receive a response.
Setting the CommandTimeout on the context doesn't appear to make any difference either, and I suspect that is because it isn't timing out but rather not waiting for the result before it continues through the code.
I don't recall seeing this behavior in prior versions, but then again maybe the time required to execute my prior queries was within the expected range of EF?
Is there a way to set the actual time that EF will wait for a response before continuing through the code? Or is there a way that I can enforce an asynchronous operation, since it seems to be synchronous by default? Or is there a potential flaw in the code?
Sample of Code exhibiting (synchronous) execution: No errors but no records returned
public static List<Orphan> GetOrphanItems()
{
    try
    {
        using (var ctx = new DBEntities(_defaultConnection))
        {
            var orphanage = from orp in ctx.GetQueueOrphans(null)
                            select orp;
            var orphans = orphanage.Select(o => new Orphan
            {
                ServiceQueueId = o.ServiceQueueID,
                QueueStatus = o.QueueStatus,
                OrphanCode = o.OrphanCode,
                Status = o.Status,
                EmailAddress = o.EmailAddress,
                TemplateId = o.TemplateId
            }).ToList();
            return orphans;
        }
    }
    catch (Exception exc)
    {
        // Handle the error
        return new List<Orphan>(); //return an empty list so all code paths return a value
    }
}
Sample Code using SqlClient Library (asynchronous) takes slightly longer to execute but returns 6 records
public static List<Orphan> GetOrphanItems()
{
    long ServiceQueueId = 0;
    bool QueueStatus;
    var OrphanCode = String.Empty;
    DateTime Status;
    var EmailAddress = String.Empty;
    int TemplateId = 0;
    var orphans = new List<Orphan>();
    SqlConnection conn = new SqlConnection(_defaultConnection);
    try
    {
        var cmdText = "EXEC dbo.GetQueueOrphans";
        SqlCommand cmd = new SqlCommand(cmdText, conn);
        conn.Open();
        SqlDataReader reader;
        reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            long.TryParse(reader["ServiceQueueId"].ToString(), out ServiceQueueId);
            bool.TryParse(reader["QueueStatus"].ToString(), out QueueStatus);
            OrphanCode = reader["OrphanCode"].ToString();
            DateTime.TryParse(reader["Status"].ToString(), out Status);
            EmailAddress = reader["EmailAddress"].ToString();
            int.TryParse(reader["TemplateId"].ToString(), out TemplateId);
            orphans.Add(new Orphan { ServiceQueueId = ServiceQueueId, QueueStatus = QueueStatus, OrphanCode = OrphanCode,
                EmailAddress = EmailAddress, TemplateId = TemplateId });
        }
        conn.Close();
    }
    catch (Exception exc)
    {
        // Handle the error
    }
    finally
    {
        conn.Close();
    }
    return orphans;
}
Check the return type of the executing method.
private async void MyMethod()
{
    db.executeProdecudeAsync();
}
Forgetting to await a task in an async void method can cause the described behavior without any IntelliSense warning.
Fix:
private async Task MyMethod()
{
    await db.executeProdecudeAsync();
}
Or just use db.executeProdecudeAsync().Wait() if you want to run it synchronously.

The timeout period elapsed prior to completion of the operation error with 1000 seconds

I have 24 million records between source (12 million) and target (12 million), and I am trying to compare the records between source and target.
Error :
Timeout expired. The timeout period elapsed prior to completion of the
operation
When I run my Test method once it works fine, but when I run it in parallel (three times) I get the above error.
I could increase the timeout further, but I have read that increasing the timeout is bad design and a bad idea.
I am failing to understand why it works fine when the Test method is called once, and why it fails when it is called three times.
With this I am trying to test how much load the database server can handle if I submit lots of comparison jobs like this:
Code :
static void Main(string[] args)
{
    Task<string>[] taskArray = { Task<string>.Factory.StartNew(() => Test()),
                                 Task<string>.Factory.StartNew(() => Test()),
                                 Task<string>.Factory.StartNew(() => Test())
                               };
}
public static string Test()
{
    var srcConnection = new SqlConnection("SourceConnectionString");
    srcConnection.Open();
    var command1 = new SqlCommand("Source Sql Query", srcConnection);
    var tgtConnection = new SqlConnection("TargetConnectionString");
    tgtConnection.Open();
    var command2 = new SqlCommand("Target Sql Query", tgtConnection);
    var drA = GetReader(srcConnection, command1);
    var drB = GetReader(tgtConnection, command2);
    bool moreA = drA.Read();
    bool moreB = drB.Read();
    while (moreA && moreB)
    {
        //my logic for 1 - 1 record comparison in case of ordered records
        //on the primary key on source and target.
    }
    return "done"; //return a result so the method compiles
}
private static IDataReader GetReader(SqlConnection con, SqlCommand command)
{
    command.CommandTimeout = 1000;
    return command.ExecuteReader();
}
I am failing to understand why it fails even though I have set the timeout to 1000 seconds.
Task<string>[] taskArray = { Task<string>.Factory.StartNew(() => Test()) }; //Works fine

Task<string>[] taskArray = { Task<string>.Factory.StartNew(() => Test()),
                             Task<string>.Factory.StartNew(() => Test()),
                             Task<string>.Factory.StartNew(() => Test())
                           }; //Timeout Error
Source Query : select Id as LinkedColumn,CompareColumn from Source
Target Query : select Id as LinkedColumn,CompareColumn from Target
Source Query Execution time in Sql server management studio for 12377200 records:
00:02:10
Target Query Execution time in Sql server management studio for 12266800 records:
00:03:37
What would be the best way to handle this scenario, considering what I am trying to do?
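One thing I plan to rule out is resource leakage: nothing in Test is disposed, so a failed run can leave connections and readers pinned until garbage collection. A sketch of the same flow with deterministic disposal (the comparison logic itself stays as above):

```csharp
public static string Test()
{
    using (var srcConnection = new SqlConnection("SourceConnectionString"))
    using (var tgtConnection = new SqlConnection("TargetConnectionString"))
    {
        srcConnection.Open();
        tgtConnection.Open();
        using (var command1 = new SqlCommand("Source Sql Query", srcConnection))
        using (var command2 = new SqlCommand("Target Sql Query", tgtConnection))
        {
            command1.CommandTimeout = 1000;
            command2.CommandTimeout = 1000;
            using (var drA = command1.ExecuteReader())
            using (var drB = command2.ExecuteReader())
            {
                bool moreA = drA.Read();
                bool moreB = drB.Read();
                while (moreA && moreB)
                {
                    //1 - 1 record comparison as in the original code,
                    //then advance both readers.
                    moreA = drA.Read();
                    moreB = drB.Read();
                }
            }
        }
    } //both connections are closed here even if an exception is thrown
    return "done";
}
```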

How to make two SQL queries really asynchronous

My problem is based on a real project problem, but I have never used the System.Threading.Tasks library or done any serious programming involving threads, so my question may be a mix of lacking knowledge about the specific library and a more general misunderstanding of what asynchronous really means in terms of programming.
My real-world case is this: I need to fetch data about a user. In my current scenario it's financial data, so let's say I need all Accounts, all Deposits, and all Consignations for a certain user. In my case this means querying millions of records for each property, and each query is relatively slow in itself; fetching the Accounts, however, is several times slower than fetching the Deposits. So I have defined three classes for the three bank products I'm going to use, and when I want to fetch the data for all the bank products of a certain user I do something like this:
List<Account> accounts = GetAccountsForClient(clientId);
List<Deposit> deposits = GetDepositsForClient(clientId);
List<Consignation> consignations = GetConsignationsForClient(clientId);
So the problem starts here: I need to get all three lists at the same time, because I'm going to pass them to the view where I display all the user's data. But as it is right now the execution is synchronous (if I'm using the term correctly here), so the total time for collecting the data for all three products is:
Total_Time = Time_To_Get_Accounts + Time_To_Get_Deposits + Time_To_Get_Consignations
This is not good, because each query is relatively slow, so the total time is pretty big; on top of that, the accounts query takes much more time than the other two. So the idea that got into my head today was: "What if I could execute these queries simultaneously?" Maybe here comes my biggest misunderstanding on the topic, but for me the closest thing to this idea is to make them asynchronous, so that Total_Time may not be just the time of the slowest query, but will at least be much less than the sum of all three.
Since my code is complicated I created a simple use case which, I think, reflects what I'm trying to do pretty well. I have two methods:
public static async Task<int> GetAccounts()
{
    int total1 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query1 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Accounts]";
        SqlCommand command = new SqlCommand(query1, connection);
        connection.Open();
        for (int i = 0; i < 19000000; i++)
        {
            string s = i.ToString();
        }
        total1 = (int) await command.ExecuteScalarAsync();
        Console.WriteLine(total1.ToString());
    }
    return total1;
}
and the second method :
public static async Task<int> GetDeposits()
{
    int total2 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query2 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Deposits]";
        SqlCommand command = new SqlCommand(query2, connection);
        connection.Open();
        total2 = (int) await command.ExecuteScalarAsync();
        Console.WriteLine(total2.ToString());
    }
    return total2;
}
which I call like this:
static void Main(string[] args)
{
    Console.WriteLine(GetAccounts().Result.ToString());
    Console.WriteLine(GetDeposits().Result.ToString());
}
As you can see, I call GetAccounts() first, and I slow its execution down on purpose to give the execution a chance to continue on to the next method. However, I don't get any result for a certain period of time, and then everything is printed on the console at once.
So the problem: how do I avoid waiting for the first method to finish before moving on to the next method? In general the code structure is not that important; what I really want to figure out is whether there is any way to make both queries execute at the same time. The sample here is the result of my research, which maybe can be extended to the point where I get the desired result.
P.S. I'm using ExecuteScalarAsync() just because I started with a method that was using it. In reality I'm going to use both scalar and reader executions.
When you use the Result property on a task that hasn't completed yet, the calling thread blocks until the operation completes. That means, in your case, that the GetAccounts operation needs to complete before the call to GetDeposits starts.
If you want to make sure these methods run in parallel (including the synchronous CPU-intensive parts), you need to offload that work to another thread. The simplest way to do so is to use Task.Run:
static async Task Main()
{
    var accountTask = Task.Run(async () => Console.WriteLine(await GetAccounts()));
    var depositsTask = Task.Run(async () => Console.WriteLine(await GetDeposits()));
    await Task.WhenAll(accountTask, depositsTask);
}
Here's a way to perform two tasks asynchronously and in parallel:
Task<int> accountTask = GetAccounts();
Task<int> depositsTask = GetDeposits();
int[] results = await Task.WhenAll(accountTask, depositsTask);
int accounts = results[0];
int deposits = results[1];
I generally prefer to use Task.WaitAll. To set up for this code segment, I changed the GetAccounts/GetDeposits signatures to just return int (public static int GetAccounts()).
I placed the Console.WriteLine in the same thread as the assignment of the return value, to validate that GetDeposits can return before GetAccounts has. This is unnecessary, though, and it is probably best to move it after the Task.WaitAll.
private static void Main(string[] args) {
    int getAccountsTask = 0;
    int getDepositsTask = 0;
    List<Task> tasks = new List<Task>() {
        Task.Factory.StartNew(() => {
            getAccountsTask = GetAccounts();
            Console.WriteLine(getAccountsTask);
        }),
        Task.Factory.StartNew(() => {
            getDepositsTask = GetDeposits();
            Console.WriteLine(getDepositsTask);
        })
    };
    Task.WaitAll(tasks.ToArray());
}
If it's ASP.NET, use AJAX to fetch the data after the page is rendered and put it in a store; each AJAX fetch is asynchronous. Or do you want to create simultaneous SQL queries on the server?
Usage:
// Add some queries ie. ThreadedQuery.NamedQuery([Name], [SQL])
var namedQueries = new ThreadedQuery.NamedQuery[] { ... };
System.Data.DataSet ds = ThreadedQuery.RunThreadedQuery(
    "Server=foo;Database=bar;Trusted_Connection=True;",
    namedQueries).Result;
string msg = string.Empty;
foreach (System.Data.DataTable tt in ds.Tables)
    msg += string.Format("{0}: {1}\r\n", tt.TableName, tt.Rows.Count);
Source:
public class ThreadedQuery
{
    public class NamedQuery
    {
        public NamedQuery(string TableName, string SQL)
        {
            this.TableName = TableName;
            this.SQL = SQL;
        }

        public string TableName { get; set; }
        public string SQL { get; set; }
    }

    public static async System.Threading.Tasks.Task<System.Data.DataSet> RunThreadedQuery(string ConnectionString, params NamedQuery[] queries)
    {
        System.Data.DataSet dss = new System.Data.DataSet();
        List<System.Threading.Tasks.Task<System.Data.DataTable>> asyncQryList = new List<System.Threading.Tasks.Task<System.Data.DataTable>>();
        foreach (var qq in queries)
            asyncQryList.Add(fetchDataTable(qq, ConnectionString));
        foreach (var tsk in asyncQryList)
        {
            System.Data.DataTable tmp = await tsk.ConfigureAwait(false);
            dss.Tables.Add(tmp);
        }
        return dss;
    }

    private static async System.Threading.Tasks.Task<System.Data.DataTable> fetchDataTable(NamedQuery qry, string ConnectionString)
    {
        // Create a connection, open it and create a command on the connection
        try
        {
            System.Data.DataTable dt = new System.Data.DataTable(qry.TableName);
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                await connection.OpenAsync().ConfigureAwait(false);
                System.Diagnostics.Debug.WriteLine("Connection Opened ... " + qry.TableName);
                using (SqlCommand command = new SqlCommand(qry.SQL, connection))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    System.Diagnostics.Debug.WriteLine("Query Executed ... " + qry.TableName);
                    dt.Load(reader);
                    System.Diagnostics.Debug.WriteLine(string.Format("Record Count '{0}' ... {1}", dt.Rows.Count, qry.TableName));
                    return dt;
                }
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine("Exception Raised ... " + qry.TableName);
            System.Diagnostics.Debug.WriteLine(ex.Message);
            return new System.Data.DataTable(qry.TableName);
        }
    }
}
Async is great if the process takes a long time. Another option would be to have one stored procedure that returns all three record sets.
adp = New SqlDataAdapter(cmd)
dst = New DataSet
adp.Fill(dst)
In the code behind of the page, reference them as dst.Tables(0), dst.Tables(1), and dst.Tables(2). The tables will be in the same order as the select statements in the stored procedure.
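In C#, the same adapter pattern looks roughly like this (a sketch; `cmd` is assumed to be a SqlCommand that executes the stored procedure on an open connection):

```csharp
var adp = new SqlDataAdapter(cmd);
var dst = new DataSet();
//Fill loads every result set the procedure returns, one DataTable each.
adp.Fill(dst);

//The tables arrive in the same order as the SELECTs in the procedure.
var accounts = dst.Tables[0];
var deposits = dst.Tables[1];
var consignations = dst.Tables[2];
```

This trades parallelism for a single round trip: the three result sets are produced by one procedure call instead of three separate queries.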

Can multithreading be used to performance-tune my app

In my application I open connections to about 70 servers, each having 8 databases on average (the servers are categorized into environments: development, production, UAT, SIT, training, misc, QA).
The application checks for the existence of a user in each database and fetches details if the user exists.
I have a method that calls the service; this method passes the user id as input, and in turn the service checks for the user across the databases and fetches the details.
This whole process takes so much time that the idle time in the UI is around 5 - 10 minutes.
How can we tune the performance of this application? I thought of implementing multi-threading and fetching details on a per-environment basis, but I am not sure whether we can call a method with a return type and input parameters from a thread.
Please suggest a way to improve performance.
public List<AccessDetails> GetAccessListOfMirror(string mirrorId, string server)
{
    List<AccessDetails> accessOfMirror = new List<AccessDetails>();
    string loginUserId = SessionManager.Session.Current.LoggedInUserName;
    string userPassword = SessionManager.Session.Current.Password;
    using (Service1Client client = new Service1Client())
    {
        client.Open();
        accessOfMirror = client.GetMirrorList(mirrorId, server, loginUserId, userPassword);
    }
    return accessOfMirror;
}
Service method
public List<AccessDetails> GetMirrorList(string mirrorId, string server, string userId, string userPassword)
{
    string mirrorUser = mirrorId.ToString();
    List<ConnectionStringContract> connectionStrings = new List<ConnectionStringContract>();
    try
    {
        connectionStrings = GetConnectionString(server);
    }
    catch (FaultException<ServiceData> exe)
    {
        throw exe;
    }
    AseConnection aseConnection = default(AseConnection);
    List<AccessRequest> mirrorUsers = new List<AccessRequest>();
    List<FacetsOnlineAccess> foaAccess = new List<FacetsOnlineAccess>();
    List<AccessDetails> accessDetails = new List<AccessDetails>();
    AccessDetails accDetails = new AccessDetails();
    AccessRequest access;
    if (!String.IsNullOrEmpty(server))
        connectionStrings = connectionStrings.Where(x => x.Server == server).ToList();
    foreach (ConnectionStringContract connection in connectionStrings)
    {
        string connectionString = connection.ConnectionString;
        AseCommand aseCommand = new AseCommand();
        using (aseConnection = new AseConnection(connectionString))
        {
            try
            {
                aseConnection.Open();
                try
                {
                    List<Parameter> parameter = new List<Parameter>();
                    Parameter param;
                    param = new Parameter();
                    param.Name = "#name_in_db";
                    param.Value = mirrorUser.ToLower().Trim();
                    parameter.Add(param);
                    int returnCode = 0;
                    DataSet ds = new DataSet();
                    try
                    {
                        ds = DataAccess.ExecuteStoredProcedure(connectionString, Constant.SP_HELPUSER, parameter, out returnCode);
                    }
                    catch (Exception ex)
                    {
                    }
                    if (ds.Tables.Count > 0 && ds.Tables[0].Rows.Count > 0)
                    {
                        foreach (DataRow row in ds.Tables[0].Rows)
                        {
                            access = new AccessRequest();
                            if (row.ItemArray[0].ToString() == mirrorUser)
                                access.Group = row.ItemArray[2].ToString();
                            else
                                access.Group = row.ItemArray[0].ToString();
                            access.Environment = connection.Environment.Trim();
                            access.Server = connection.Server.Trim();
                            access.Database = connection.Database.Trim();
                            mirrorUsers.Add(access);
                        }
                    }
                }
                catch (Exception ex)
                {
                }
            }
            catch (Exception ConEx)
            {
            }
        }
    }
    accDetails.AccessList = mirrorUsers;
    //accDetails.FOAList = foaAccess;
    accessDetails.Add(accDetails);
    return accessDetails;
}
Thanks in advance
Loops can sometimes reduce speed, especially loops inside loops, and I/O operations are always pretty slow. You have a loop with I/O, so if you execute these I/O operations on parallel threads, performance should increase.
You could translate
foreach (ConnectionStringContract connection in connectionStrings)
{
...
}
into:
Parallel.ForEach(connectionStrings, connectionString =>
{
    ...
});
Inside, you should lock the commonly used variables, like mirrorUsers, with a lock.
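A minimal sketch of that locking pattern, using the names from the question (the lock object itself is new here):

```csharp
//A dedicated object to lock on; shared by all parallel iterations.
private static readonly object mirrorUsersLock = new object();

//Inside the Parallel.ForEach body, guard every write to the shared list:
lock (mirrorUsersLock)
{
    mirrorUsers.Add(access);
}
```

Alternatively, a System.Collections.Concurrent.ConcurrentBag&lt;AccessRequest&gt; avoids the explicit lock entirely, since its Add method is thread-safe.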
I think this is a great start. Meanwhile I will look for other performance issues.
You should be able to take advantage of multi threading without too much hassle...
You should probably make use of the threadpool, so you don't spawn too many threads all running at the same time. You can read about the built in threadpool here:
http://msdn.microsoft.com/en-us/library/3dasc8as%28v=vs.80%29.aspx
What you should do is extract the body of your foreach loop into a static method. Any parameters should be made into a model that can be passed to the thread, because then you can use ThreadPool.QueueUserWorkItem(object) to start the threads.
Regarding multiple threads writing to the same resource (a list or whatever), you can use any kind of mutex, lock, or thread-safe component. Using locks is quite simple:
http://msdn.microsoft.com/en-us/library/c5kehkcz%28v=vs.80%29.aspx
