OK to establish multiple data contexts? - c#

Given the following code:
public static void SomeLoop()
{
    using (var db = new ArcadeContext())
    {
        var changeRecs = db.ArcadeGameRanks.Where(c => c.Date == today);
        foreach (var rankRecord in changeRecs)
        {
            var rank = SomeMethod(rankRecord.GameID);
            UpdateGamesCatRank(rankRecord.GameID, rank);
        }
    }
}

public static void UpdateGamesCatRank(int gameID, int catRank)
{
    using (var db = new ArcadeContext())
    {
        db.ExecuteCommand("UPDATE ArcadeGame SET CategoryRank = " + catRank + " WHERE ID = " + gameID);
    }
}
When I run SQL Server Profiler I see a lot of repeating Audit Login and Audit Logout messages, which seem to impact performance.
I'm self-taught in C#, so I know there's a good chance I'm doing something non-typical.
My question is: is the above design pattern considered good? Or should I be reusing data contexts and passing them as parameters to functions so that they do not need to be re-established on each call? (That is assuming my guess is right that the repeated creation of new data contexts is the cause of the logins and logouts.)

Since your context is already instantiated, pass it to your method.
public static void SomeLoop()
{
    using (var db = new ArcadeContext())
    {
        var changeRecs = db.ArcadeGameRanks.Where(c => c.Date == today);
        foreach (var rankRecord in changeRecs)
        {
            var rank = SomeMethod(rankRecord.GameID);
            UpdateGamesCatRank(rankRecord.GameID, rank, db);
        }
    }
}

public static void UpdateGamesCatRank(int gameID, int catRank, ArcadeContext db)
{
    db.ExecuteCommand("UPDATE ArcadeGame SET CategoryRank = " + catRank + " WHERE ID = " + gameID);
}
This will execute your query and dispose of the context once you are done with your foreach loop.

Not a good idea. That might have just been an example to illustrate your point, but I don't see a need for a whole new method just to execute a SQL command; just put that command in your loop. Instantiating a new db context for each call is going to kill your performance.
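Putting the two suggestions together, a minimal sketch (assuming ArcadeContext is a LINQ to SQL DataContext, as the ExecuteCommand call suggests) keeps one context for the whole loop and inlines the command. As a side benefit, DataContext.ExecuteCommand accepts {0}-style placeholders that are sent to the server as real SQL parameters, avoiding the string concatenation in the original:

public static void SomeLoop()
{
    using (var db = new ArcadeContext())
    {
        var changeRecs = db.ArcadeGameRanks.Where(c => c.Date == today);
        foreach (var rankRecord in changeRecs)
        {
            var rank = SomeMethod(rankRecord.GameID);
            // {0}/{1} are translated into SQL parameters, so the values
            // are never concatenated into the statement text.
            db.ExecuteCommand(
                "UPDATE ArcadeGame SET CategoryRank = {0} WHERE ID = {1}",
                rank, rankRecord.GameID);
        }
    }
}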

Problem trying to clone a Project Server database using OData and Entity Framework [duplicate]

I am having trouble updating my entities with Parallel.ForEach. The program works fine when I use a plain foreach to update the entities, but with Parallel.ForEach it gives me an error like: "ArgumentException: An item with the same key has already been added". I have no idea why this happens. Shouldn't it be thread safe? Why am I getting this error, and how do I resolve it?
The program itself gets some data from one database and copies it to another. If a data row with the same GUID exists (see below) and the status is unchanged, the matching data row in the second database must be updated. If there is a match but the status has changed, the modifications must be ignored. Finally, if there is no match in the second database, the data row is inserted into it (synchronizing the two databases). I want to speed up the process somehow, which is why I first thought of parallel processing.
(I am using Autofac as an IoC container and dependency injection if that matters)
Here is the code snippet which tries to update:
/* @param reports: data from the first database */
public string SynchronizeData(List<Reports> reports, int statusid)
{
    // reportdataindatabase - the second database's data; List() actually selects all, see next code snippet
    List<Reports> reportdataindatabase = unitOfWorkTAFeedBack.ReportsRepository.List().ToList();

    int allcount = reports.Count;
    int insertedcount = 0;
    int updatedcount = 0;
    int ignoredcount = 0;

    // DOES NOT WORK, GIVES THE ERROR
    Parallel.ForEach(reports, r =>
    {
        var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID);
        if (guid == null)
        {
            unitOfWorkTAFeedBack.ReportsRepository.Add(r); // an insert on the repository
            insertedcount++;
        }
        else
        {
            if (guid.StatusId == statusid)
            {
                r.ReportsID = guid.ReportsID;
                unitOfWorkTAFeedBack.ReportsRepository.Update(r); // update on the repo
                updatedcount++;
            }
            else
            {
                ignoredcount++;
            }
        }
    });

    /* WORKS PERFECTLY BUT RELATIVELY SLOW - takes 80 seconds to update 1287 records
    foreach (Reports r in reports)
    {
        var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID); // find match between the two databases
        if (guid == null)
        {
            unitOfWorkTAFeedBack.ReportsRepository.Add(r); // no match, insert
            insertedcount++;
        }
        else
        {
            if (guid.StatusId == statusid)
            {
                r.ReportsID = guid.ReportsID;
                unitOfWorkTAFeedBack.ReportsRepository.Update(r);
                updatedcount++;
            }
            else
            {
                ignoredcount++;
            }
        }
    } */

    unitOfWorkTAFeedBack.Commit(); // this only calls SaveChanges() on the DbContext object

    int allprocessed = insertedcount + updatedcount + ignoredcount;
    string result = "Synchronization finished. " + allprocessed + " reports processed out of " + allcount + ", "
        + insertedcount + " has been inserted, " + updatedcount + " has been updated and "
        + ignoredcount + " has been ignored. \n Press a button to dismiss this window.";
    return result;
}
The program breaks in the Update method of this Repository class (with Parallel.ForEach; there is no problem with the standard foreach):
public class EntityFrameworkReportsRepository : IReportsRepository
{
    private readonly TAFeedBackContext tAFeedBackContext;

    public EntityFrameworkReportsRepository(TAFeedBackContext tAFeedBackContext)
    {
        this.tAFeedBackContext = tAFeedBackContext;
    }

    public void Add(Reports r)
    {
        tAFeedBackContext.Reports.Add(r);
    }

    public void Delete(int Id)
    {
        var obj = tAFeedBackContext.Reports.Find(Id);
        tAFeedBackContext.Reports.Remove(obj);
    }

    public Reports Get(int Id)
    {
        var obj = tAFeedBackContext.Reports.Find(Id);
        return obj;
    }

    public IQueryable<Reports> List()
    {
        return tAFeedBackContext.Reports.AsNoTracking();
    }

    public void Update(Reports r)
    {
        var entry = tAFeedBackContext.Entry(r); // The program breaks at this point!
        if (entry.State == EntityState.Detached)
        {
            tAFeedBackContext.Reports.Attach(r);
            tAFeedBackContext.Entry(r).State = EntityState.Modified;
        }
        else
        {
            tAFeedBackContext.Entry(r).CurrentValues.SetValues(r);
        }
    }
}
Please bear in mind it is hard to give a complete answer, as there are things I need clarity on, but the comments should help build a picture.
Parallel.ForEach(reports, r => // Parallel.ForEach is not the answer...
{
    // reportdataindatabase is materialized before the loop, so it's OK here
    // do you really want FirstOrDefault vs SingleOrDefault?
    var guid = reportdataindatabase.FirstOrDefault(x => x.AssignmentGUID == r.AssignmentGUID);
    if (guid == null)
    {
        // this is done on the context, not the DB; unresolved (not yet executed)
        unitOfWorkTAFeedBack.ReportsRepository.Add(r); // an insert on the repository
        //insertedcount++; you would need a lock
    }
    else
    {
        if (guid.StatusId == statusid)
        {
            r.ReportsID = guid.ReportsID;
            // this is done on the context, not the DB; unresolved (not yet executed)
            unitOfWorkTAFeedBack.ReportsRepository.Update(r); // update on the repo
            //updatedcount++; you would need a lock
        }
        else
        {
            //ignoredcount++; you would need a lock
        }
    }
});
The issue here is that reportdataindatabase can contain the same key twice, and the context is only updated after the fact, i.e. when it gets to unitOfWorkTAFeedBack.Commit(). By then, Update may have been called twice for the same entity. And since the commit is where the real work happens, doing the add/update above in parallel won't save you any real time, as that part is quick. That said, 80 seconds to update 1287 records does seem long. Please also add how reports is retrieved; you want something like:
TAFeedBackContext db = new TAFeedBackContext();
var remoteReports = DatafromAnotherPLace; // include how this was retrieved
var localReports = db.Reports.ToList();   // these are tracked (by default)
foreach (var item in remoteReports)
{
    // I assume more than one match is invalid
    var localEntity = localReports.SingleOrDefault(x => x.AssignmentGUID == item.AssignmentGUID);
    if (localEntity == null)
    {
        // add, as it doesn't exist
        db.Reports.Add(new Report() { *set fields* });
    }
    else
    {
        if (localEntity.StatusId == statusid) // only update if status is the passed-in status
        {
            // why are you modifying the remote entity?
            item.ReportsID = localEntity.ReportsID;
            // update the remote entity? I get the impression it's from a different context;
            // if not then cool, but you need to show how reports is retrieved
        }
        else
        {
        }
    }
}
db.SaveChanges();
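If the repeated linear scans over reportdataindatabase turn out to be part of the cost, one further single-threaded variation (a sketch, assuming AssignmentGUID is unique in the local set, which the duplicate-key error suggests may not hold; names follow the question's code) is to index the local rows in a Dictionary once and keep all context work on one thread, since a DbContext is not thread safe:

// Build an O(1) lookup once instead of scanning the list for every report.
var localByGuid = reportdataindatabase.ToDictionary(x => x.AssignmentGUID);

foreach (Reports r in reports)
{
    if (!localByGuid.TryGetValue(r.AssignmentGUID, out var match))
    {
        unitOfWorkTAFeedBack.ReportsRepository.Add(r);    // no match: insert
        insertedcount++;
    }
    else if (match.StatusId == statusid)
    {
        r.ReportsID = match.ReportsID;
        unitOfWorkTAFeedBack.ReportsRepository.Update(r); // match, status unchanged: update
        updatedcount++;
    }
    else
    {
        ignoredcount++;                                   // match, status changed: ignore
    }
}

unitOfWorkTAFeedBack.Commit(); // all changes still flushed in one SaveChanges call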

Hangfire get last execution time

I'm using Hangfire 1.5.3. In my recurring job I want to call a service that uses the time since the last run. Unfortunately, LastExecution is set to the current time, because the job data is updated before the job executes.
Job
public abstract class RecurringJobBase
{
    protected RecurringJobDto GetJob(string jobId)
    {
        using (var connection = JobStorage.Current.GetConnection())
        {
            return connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobId);
        }
    }

    protected DateTime GetLastRun(string jobId)
    {
        var job = GetJob(jobId);
        if (job != null && job.LastExecution.HasValue)
        {
            return job.LastExecution.Value.ToLocalTime();
        }
        return DateTime.Today;
    }
}

public class NotifyQueryFilterSubscribersJob : RecurringJobBase
{
    public const string JobId = "NotifyQueryFilterSubscribersJob";

    private readonly IEntityFilterChangeNotificationService _notificationService;

    public NotifyQueryFilterSubscribersJob(IEntityFilterChangeNotificationService notificationService)
    {
        _notificationService = notificationService;
    }

    public void Run()
    {
        var lastRun = GetLastRun(JobId);
        _notificationService.CheckChangesAndSendNotifications(DateTime.Now - lastRun);
    }
}
Register
RecurringJob.AddOrUpdate<NotifyQueryFilterSubscribersJob>(NotifyQueryFilterSubscribersJob.JobId, job => job.Run(), Cron.Minutely, TimeZoneInfo.Local);
I know that it is configured to run minutely, so I could calculate the time roughly. But I'd like a configuration-independent implementation. So my question is: how can I implement RecurringJobBase.GetLastRun to return the time of the previous run?
To address my comment above: where you might have more than one type of recurring job running but want to check previous states, you can verify that the previous job info actually relates to this type of job as follows (although this feels a bit hacky/convoluted).
If you're passing the PerformContext into the job method, then you can use this:
var jobName = performContext.BackgroundJob.Job.ToString();
var currentJobId = int.Parse(performContext.BackgroundJob.Id);
JobData jobFoundInfo = null;

using (var connection = JobStorage.Current.GetConnection()) {
    var decrementId = currentJobId;
    while (decrementId > currentJobId - 50 && decrementId > 1) { // try up to 50 jobs previously
        decrementId--;
        var jobInfo = connection.GetJobData(decrementId.ToString());
        if (jobInfo.Job.ToString().Equals(jobName)) { // **THIS IS THE CHECK**
            jobFoundInfo = jobInfo;
            break;
        }
    }
    if (jobFoundInfo == null) {
        throw new Exception($"Could not find the previous run for job with name {jobName}");
    }
    return jobFoundInfo;
}
You could take advantage of the fact you already stated - "Unfortunately the LastExecution is set to the current time, because the job data was updated before executing the job".
The job includes the "LastJobId" property, which seems to be an incremented Id. Hence, you should be able to get the "real" previous job by decrementing LastJobId and querying the job data for that Id.
var currentJob = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == CheckForExpiredPasswordsId);
if (currentJob == null)
{
    return null; // Or whatever suits you
}
var previousJob = connection.GetJobData((Convert.ToInt32(currentJob.LastJobId) - 1).ToString());
return previousJob.CreatedAt;
Note that this is the time of creation, not execution, but it might be accurate enough for you. Bear in mind the edge case of your very first run, when there will be no previous job.
After digging around, I came up with the following solution.
var lastSucceeded = JobStorage.Current.GetMonitoringApi()
    .SucceededJobs(0, 1000)
    .OrderByDescending(j => j.Value.SucceededAt)
    .FirstOrDefault(j => j.Value.Job.Method.Name == "MethodName"
        && j.Value.Job.Type.FullName == "NameSpace.To.Class.Containing.The.Method")
    .Value;
var lastExec = lastSucceeded.SucceededAt?.AddMilliseconds(Convert.ToDouble(-lastSucceeded.TotalDuration));
It's not perfect, but I think it's a little cleaner than the other solutions.
Hopefully they will implement an official way soon.
The answer by @Marius Steinbach is often good enough, but if you have thousands of job executions (my case), loading all of them from the DB doesn't seem that great. So I finally decided to write a simple SQL query and use it directly (this one is for PostgreSQL storage, though changing it to SQL Server should be straightforward):
private async Task<DateTime?> GetLastSuccessfulExecutionTime(string jobType)
{
    await using var conn = new NpgsqlConnection(_connectionString);
    if (conn.State == ConnectionState.Closed)
        conn.Open();

    await using var cmd = new NpgsqlCommand(@"
        SELECT s.data FROM hangfire.job j
        LEFT JOIN hangfire.state s ON j.stateid = s.id
        WHERE j.invocationdata LIKE $1 AND j.statename = $2
        ORDER BY s.createdat DESC
        LIMIT 1", conn)
    {
        Parameters =
        {
            new() { Value = $"%{jobType}%" },
            new() { Value = SucceededState.StateName }
        }
    };

    var result = await cmd.ExecuteScalarAsync();
    if (result is not string data)
        return null;

    var stateData = JsonSerializer.Deserialize<Dictionary<string, string>>(data);
    return JobHelper.DeserializeNullableDateTime(stateData?.GetValueOrDefault("SucceededAt"));
}
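For the default SQL Server storage, an equivalent sketch might look like the following. This is untested and assumes the standard HangFire schema (Job and State tables, with the JSON state payload in State.Data); adjust the schema name if you configured a different one:

private async Task<DateTime?> GetLastSuccessfulExecutionTime(string jobType)
{
    await using var conn = new SqlConnection(_connectionString);
    await conn.OpenAsync();

    await using var cmd = new SqlCommand(@"
        SELECT TOP 1 s.Data
        FROM [HangFire].[Job] j
        LEFT JOIN [HangFire].[State] s ON j.StateId = s.Id
        WHERE j.InvocationData LIKE @jobType AND j.StateName = @stateName
        ORDER BY s.CreatedAt DESC", conn);
    cmd.Parameters.AddWithValue("@jobType", $"%{jobType}%");
    cmd.Parameters.AddWithValue("@stateName", SucceededState.StateName);

    var result = await cmd.ExecuteScalarAsync();
    if (result is not string data)
        return null;

    var stateData = JsonSerializer.Deserialize<Dictionary<string, string>>(data);
    return JobHelper.DeserializeNullableDateTime(stateData?.GetValueOrDefault("SucceededAt"));
}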
Use this method to get the last and next execution times of a job:
public static (DateTime?, DateTime?) GetExecutionDateTimes(string jobName)
{
    DateTime? lastExecutionDateTime = null;
    DateTime? nextExecutionDateTime = null;

    using (var connection = JobStorage.Current.GetConnection())
    {
        var job = connection.GetRecurringJobs().FirstOrDefault(p => p.Id == jobName);
        if (job != null && job.LastExecution.HasValue)
            lastExecutionDateTime = job.LastExecution;
        if (job != null && job.NextExecution.HasValue)
            nextExecutionDateTime = job.NextExecution;
    }

    return (lastExecutionDateTime, nextExecutionDateTime);
}
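Usage is then a simple tuple deconstruction (the job id is whatever you registered with RecurringJob.AddOrUpdate):

// Either value may be null if the job has never run
// or has no next occurrence scheduled.
var (lastRun, nextRun) = GetExecutionDateTimes("NotifyQueryFilterSubscribersJob");
Console.WriteLine($"Last: {lastRun}, Next: {nextRun}");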

EF and MVC - approach to work together

I have used the following approach for a long time (approx. 5 years):
Create one big class, initialize XXXEntities in the controller, and create a method for each DB action. Example:
public class DBRepository
{
    private MyEntities _dbContext;

    public DBRepository()
    {
        _dbContext = new MyEntities();
    }

    public NewsItem NewsItem(int ID)
    {
        var q = from i in _dbContext.News
                where i.ID == ID
                select new NewsItem() { ID = i.ID, FullText = i.FullText, Time = i.Time, Topic = i.Topic };
        return q.FirstOrDefault();
    }

    public List<Screenshot> LastPublicScreenshots()
    {
        var q = from i in _dbContext.Screenshots
                where i.isPublic == true && i.ScreenshotStatus.Status == ScreenshotStatusKeys.LIVE
                orderby i.dateTimeServer descending
                select i;
        return q.Take(5).ToList();
    }

    public void SetPublicScreenshot(string filename, bool val)
    {
        var screenshot = Get<Screenshot>(p => p.filename == filename);
        if (screenshot != null)
        {
            screenshot.isPublic = val;
            _dbContext.SaveChanges();
        }
    }

    public void SomeMethod()
    {
        SomeEntity1 s1 = new SomeEntity1() { field1 = "fff", field2 = "aaa" };
        _dbContext.SomeEntity1.Add(s1);
        SomeEntity2 s2 = new SomeEntity2() { SE1 = s1 };
        _dbContext.SomeEntity2.Add(s2);
        _dbContext.SaveChanges();
    }
}
Some external code creates a DBRepository object and calls its methods. It worked fine. But now async operations have come in. So, if I use code like
public async void AddStatSimplePageAsync(string IPAddress, string login, string txt)
{
    DateTime dateAdded2MinsAgo = DateTime.Now.AddMinutes(-2);
    if ((from i in _dbContext.StatSimplePages
         where i.page == txt && i.dateAdded > dateAdded2MinsAgo
         select i).Count() == 0)
    {
        StatSimplePage item = new StatSimplePage() { IPAddress = IPAddress, login = login, page = txt, dateAdded = DateTime.Now };
        _dbContext.StatSimplePages.Add(item);
        await _dbContext.SaveChangesAsync();
    }
}
there can be a situation where the next code executes before SaveChangesAsync completes, and one more entity is added to _dbContext which should not be saved before some other actions. For example, some code:
DBRepository _rep = new DBRepository();
_rep.AddStatSimplePageAsync("A", "b", "c");
_rep.SomeMethod();
I worry that SaveChanges will be called after the line
_dbContext.SomeEntity1.Add(s1);
but before
_dbContext.SomeEntity2.Add(s2);
(i.e. these 2 actions should be an atomic operation)
Am I right? My approach is wrong now? Which approach should be used?
P.S. As I understand it, the flow will be the following:
1. AddStatSimplePageAsync is called.
2. await _dbContext.SaveChangesAsync(); starts executing inside AddStatSimplePageAsync.
3. SomeMethod() starts; the _dbContext.SaveChangesAsync() from AddStatSimplePageAsync is still executing on another (child) thread.
4. _dbContext.SaveChangesAsync() completes on the child thread while the main thread is executing something in SomeMethod().
OK, this time I think I've got your problem.
First, it's weird that you have two separate calls to the SaveChanges method. Usually you should try to have a single call at the end of all your operations and then dispose the context.
Even so, yes, your concerns are right, but some clarifications are needed here.
When you encounter async or await, do not think about threads but about tasks; those are two different concepts.
Have a read of this great article. There is an image there that practically explains everything.
To say it in a few words: if you do not await an async method, you risk that a subsequent operation could "harm" the execution of the first one. To solve it, simply await it.
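As a sketch of that advice applied to the question's code (using the entity and repository names from above; the Any-based existence check is an equivalent rewrite of the original Count() == 0 query), return Task instead of void and await each call so the two operations can no longer interleave:

// async Task (instead of async void) lets callers await completion.
public async Task AddStatSimplePageAsync(string IPAddress, string login, string txt)
{
    DateTime dateAdded2MinsAgo = DateTime.Now.AddMinutes(-2);
    if (!_dbContext.StatSimplePages.Any(i => i.page == txt && i.dateAdded > dateAdded2MinsAgo))
    {
        StatSimplePage item = new StatSimplePage() { IPAddress = IPAddress, login = login, page = txt, dateAdded = DateTime.Now };
        _dbContext.StatSimplePages.Add(item);
        await _dbContext.SaveChangesAsync();
    }
}

// Caller (inside an async method): awaiting guarantees SaveChangesAsync
// has finished before SomeMethod touches the same context.
DBRepository _rep = new DBRepository();
await _rep.AddStatSimplePageAsync("A", "b", "c");
_rep.SomeMethod();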

How to make two SQL queries really asynchronous

My problem is based on a real project problem, but I have never used the System.Threading.Tasks library or done any serious programming involving threads, so my question may be a mix of lacking knowledge about the specific library and a more general misunderstanding of what asynchronous really means in terms of programming.
So my real-world case is this: I need to fetch data about a user. In my current scenario it's financial data, so let's say I need all Accounts, all Deposits and all Consignations for a certain user. In my case this means querying millions of records for each property, and each query is relatively slow in itself; however, fetching the Accounts is several times slower than fetching the Deposits. So I have defined three classes for the three bank products I'm going to use, and when I want to fetch the data for all the bank products of a certain user I do something like this:
List<Account> accounts = GetAccountsForClient(clientId);
List<Deposit> deposits = GetDepositsForClient(clientId);
List<Consignation> consignations = GetConsignationsForClient(clientId);
So the problem starts here: I need to get all three lists at the same time, because I'm going to pass them to the view where I display all the user's data. But as it is right now the execution is synchronous (if I'm using the term correctly here), so the total time for collecting the data for all three products is:
Total_Time = Time_To_Get_Accounts + Time_To_Get_Deposits + Time_To_Get_Consignations
This is not good, because each query is relatively slow, so the total time is pretty big; moreover, the accounts query takes much more time than the other two. So the idea that got into my head today was: "What if I could execute these queries simultaneously?" Maybe here comes my biggest misunderstanding of the topic, but for me the closest thing to this idea is making them asynchronous, so that Total_Time is no longer the sum of all three queries but hopefully much closer to the time of the slowest one.
Since my code is complicated, I created a simple use case which, I think, reflects what I'm trying to do pretty well. I have two methods:
public static async Task<int> GetAccounts()
{
    int total1 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query1 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Accounts]";
        SqlCommand command = new SqlCommand(query1, connection);
        connection.Open();
        // Artificial delay so the second method has a chance to start first.
        for (int i = 0; i < 19000000; i++)
        {
            string s = i.ToString();
        }
        total1 = (int)await command.ExecuteScalarAsync();
        Console.WriteLine(total1.ToString());
    }
    return total1;
}
and the second method :
public static async Task<int> GetDeposits()
{
    int total2 = 0;
    using (SqlConnection connection = new SqlConnection(connString))
    {
        string query2 = "SELECT COUNT(*) FROM [MyDb].[dbo].[Deposits]";
        SqlCommand command = new SqlCommand(query2, connection);
        connection.Open();
        total2 = (int)await command.ExecuteScalarAsync();
        Console.WriteLine(total2.ToString());
    }
    return total2;
}
which I call like this:
static void Main(string[] args)
{
    Console.WriteLine(GetAccounts().Result.ToString());
    Console.WriteLine(GetDeposits().Result.ToString());
}
As you can see, I call GetAccounts() first and slow its execution down on purpose to give the execution a chance to continue to the next method. However, I don't get any results for a certain period of time, and then everything gets printed to the console at the same time.
So the problem: how do I avoid waiting for the first method to finish before moving on to the next one? In general the code structure is not that important; what I really want to figure out is whether there's any way to make both queries execute at the same time. The sample here is the result of my research, which maybe could be extended to the point where I get the desired result.
P.S. I'm using ExecuteScalarAsync() just because I started with a method which was using it. In reality I'm going to use both scalar and reader calls.
When you use the Result property on a task that hasn't completed yet, the calling thread blocks until the operation completes. That means in your case that the GetAccounts operation needs to complete before the call to GetDeposits starts.
If you want to make sure these methods run in parallel (including the synchronous CPU-intensive parts), you need to offload that work to another thread. The simplest way to do so is Task.Run:
static async Task Main()
{
    var accountTask = Task.Run(async () => Console.WriteLine(await GetAccounts()));
    var depositsTask = Task.Run(async () => Console.WriteLine(await GetDeposits()));
    await Task.WhenAll(accountTask, depositsTask);
}
Here's a way to perform two tasks asynchronously and in parallel:
Task<int> accountTask = GetAccounts();
Task<int> depositsTask = GetDeposits();
int[] results = await Task.WhenAll(accountTask, depositsTask);
int accounts = results[0];
int deposits = results[1];
I generally prefer to use Task.WaitAll. To set up for this code segment, I changed the GetAccounts/GetDeposits signatures to simply return int (public static int GetAccounts()).
I placed the Console.WriteLine on the same thread as the assignment of the return value, to validate that GetDeposits can return before GetAccounts has, but this is unnecessary and it is probably best to move it after the Task.WaitAll.
private static void Main(string[] args) {
    int getAccountsTask = 0;
    int getDepositsTask = 0;
    List<Task> tasks = new List<Task>() {
        Task.Factory.StartNew(() => {
            getAccountsTask = GetAccounts();
            Console.WriteLine(getAccountsTask);
        }),
        Task.Factory.StartNew(() => {
            getDepositsTask = GetDeposits();
            Console.WriteLine(getDepositsTask);
        })
    };
    Task.WaitAll(tasks.ToArray());
}
If it's ASP.NET, use AJAX to fetch after the page is rendered and put the data in a store; each AJAX fetch is asynchronous. If you want to create simultaneous SQL queries on the server, you can do something like the following.
Usage:
// Add some queries, i.e. ThreadedQuery.NamedQuery([Name], [SQL])
var namedQueries = new ThreadedQuery.NamedQuery[] { ... };
System.Data.DataSet ds = ThreadedQuery.RunThreadedQuery(
    "Server=foo;Database=bar;Trusted_Connection=True;",
    namedQueries).Result;

string msg = string.Empty;
foreach (System.Data.DataTable tt in ds.Tables)
    msg += string.Format("{0}: {1}\r\n", tt.TableName, tt.Rows.Count);
Source:
public class ThreadedQuery
{
    public class NamedQuery
    {
        public NamedQuery(string TableName, string SQL)
        {
            this.TableName = TableName;
            this.SQL = SQL;
        }
        public string TableName { get; set; }
        public string SQL { get; set; }
    }

    public static async System.Threading.Tasks.Task<System.Data.DataSet> RunThreadedQuery(string ConnectionString, params NamedQuery[] queries)
    {
        System.Data.DataSet dss = new System.Data.DataSet();
        List<System.Threading.Tasks.Task<System.Data.DataTable>> asyncQryList = new List<System.Threading.Tasks.Task<System.Data.DataTable>>();

        // Start all queries; each fetchDataTable call begins running immediately.
        foreach (var qq in queries)
            asyncQryList.Add(fetchDataTable(qq, ConnectionString));

        // Await them in order and collect the resulting tables.
        foreach (var tsk in asyncQryList)
        {
            System.Data.DataTable tmp = await tsk.ConfigureAwait(false);
            dss.Tables.Add(tmp);
        }
        return dss;
    }

    private static async System.Threading.Tasks.Task<System.Data.DataTable> fetchDataTable(NamedQuery qry, string ConnectionString)
    {
        // Create a connection, open it and create a command on the connection
        try
        {
            System.Data.DataTable dt = new System.Data.DataTable(qry.TableName);
            using (SqlConnection connection = new SqlConnection(ConnectionString))
            {
                await connection.OpenAsync().ConfigureAwait(false);
                System.Diagnostics.Debug.WriteLine("Connection Opened ... " + qry.TableName);
                using (SqlCommand command = new SqlCommand(qry.SQL, connection))
                {
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        System.Diagnostics.Debug.WriteLine("Query Executed ... " + qry.TableName);
                        dt.Load(reader);
                        System.Diagnostics.Debug.WriteLine(string.Format("Record Count '{0}' ... {1}", dt.Rows.Count, qry.TableName));
                        return dt;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine("Exception Raised ... " + qry.TableName);
            System.Diagnostics.Debug.WriteLine(ex.Message);
            return new System.Data.DataTable(qry.TableName);
        }
    }
}
Async is great if the process takes a long time. Another option would be to have one stored procedure that returns all three record sets (VB.NET shown):
adp = New SqlDataAdapter(cmd)
dst = New DataSet
adp.Fill(dst)
In the code-behind of the page, reference them as dst.Tables(0), dst.Tables(1), and dst.Tables(2). The tables will be in the same order as the SELECT statements in the stored procedure.
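For reference, a rough C# equivalent of that VB snippet (a sketch; the GetClientData procedure name, the connection object and the clientId parameter are hypothetical stand-ins):

// Fill a DataSet from a stored procedure whose body contains three SELECTs.
using (var cmd = new SqlCommand("GetClientData", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@ClientId", clientId);

    var adp = new SqlDataAdapter(cmd);
    var dst = new DataSet();
    adp.Fill(dst); // dst.Tables[0], [1] and [2] follow the SELECT order
}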

Can mutithreading be used to performance tune my app

In my application I open connections to about 70 servers, each having 8 databases on average (the servers are categorized into environments, viz. development, production, UAT, SIT, training, misc, QA).
The application checks for the existence of a user in each database and fetches the details if the user exists.
I have a method that calls the service; this method passes the user id as input, and in turn the service checks for the user across the databases and fetches the details.
This whole process takes so much time that the idle time in the UI is around 5-10 minutes.
How can we tune the performance of this application? I thought of implementing multi-threading and fetching details on a per-environment basis, but I am not sure whether we can call a method with a return type and input parameters that way.
Please suggest a way to improve performance.
public List<AccessDetails> GetAccessListOfMirror(string mirrorId, string server)
{
    List<AccessDetails> accessOfMirror = new List<AccessDetails>();
    string loginUserId = SessionManager.Session.Current.LoggedInUserName;
    string userPassword = SessionManager.Session.Current.Password;
    using (Service1Client client = new Service1Client())
    {
        client.Open();
        accessOfMirror = client.GetMirrorList(mirrorId, server, loginUserId, userPassword);
    }
    return accessOfMirror;
}
Service method
public List<AccessDetails> GetMirrorList(string mirrorId, string server, string userId, string userPassword)
{
    string mirrorUser = mirrorId.ToString();
    List<ConnectionStringContract> connectionStrings = new List<ConnectionStringContract>();
    try
    {
        connectionStrings = GetConnectionString(server);
    }
    catch (FaultException<ServiceData> exe)
    {
        throw exe;
    }

    AseConnection aseConnection = default(AseConnection);
    List<AccessRequest> mirrorUsers = new List<AccessRequest>();
    List<FacetsOnlineAccess> foaAccess = new List<FacetsOnlineAccess>();
    List<AccessDetails> accessDetails = new List<AccessDetails>();
    AccessDetails accDetails = new AccessDetails();
    AccessRequest access;

    if (!String.IsNullOrEmpty(server))
        connectionStrings = connectionStrings.Where(x => x.Server == server).ToList();

    foreach (ConnectionStringContract connection in connectionStrings)
    {
        string connectionString = connection.ConnectionString;
        AseCommand aseCommand = new AseCommand();
        using (aseConnection = new AseConnection(connectionString))
        {
            try
            {
                aseConnection.Open();
                try
                {
                    List<Parameter> parameter = new List<Parameter>();
                    Parameter param;
                    param = new Parameter();
                    param.Name = "#name_in_db";
                    param.Value = mirrorUser.ToLower().Trim();
                    parameter.Add(param);
                    int returnCode = 0;
                    DataSet ds = new DataSet();
                    try
                    {
                        ds = DataAccess.ExecuteStoredProcedure(connectionString, Constant.SP_HELPUSER, parameter, out returnCode);
                    }
                    catch (Exception ex)
                    {
                    }
                    if (ds.Tables.Count > 0 && ds.Tables[0].Rows.Count > 0)
                    {
                        foreach (DataRow row in ds.Tables[0].Rows)
                        {
                            access = new AccessRequest();
                            if (row.ItemArray[0].ToString() == mirrorUser)
                                access.Group = row.ItemArray[2].ToString();
                            else
                                access.Group = row.ItemArray[0].ToString();
                            access.Environment = connection.Environment.Trim();
                            access.Server = connection.Server.Trim();
                            access.Database = connection.Database.Trim();
                            mirrorUsers.Add(access);
                        }
                    }
                }
                catch (Exception ex)
                {
                }
            }
            catch (Exception ConEx)
            {
            }
        }
    }

    accDetails.AccessList = mirrorUsers;
    //accDetails.FOAList = foaAccess;
    accessDetails.Add(accDetails);
    return accessDetails;
}
Thanks in advance
Loops can sometimes reduce speed, especially loops inside loops, and I/O operations are always pretty slow. You have a loop that performs I/O, so if you execute those I/O operations on parallel threads, performance should increase.
You could translate
foreach (ConnectionStringContract connection in connectionStrings)
{
    ...
}

into:

Parallel.ForEach(connectionStrings, connection =>
{
    ...
});
Inside the loop you should protect the commonly used variables, like mirrorUsers, with a lock.
I think this is a great start. Meanwhile I will look for other performance issues.
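Sketched against the question's loop (names taken from the code above, with the body abbreviated), that might look like:

// Shared state written from multiple threads must be synchronized.
var mirrorUsers = new List<AccessRequest>();
var usersLock = new object();

Parallel.ForEach(connectionStrings, connection =>
{
    // ... open the AseConnection and run SP_HELPUSER as in the original body ...

    var access = new AccessRequest();
    // ... populate access from the result row ...

    lock (usersLock)
    {
        mirrorUsers.Add(access); // only the shared-list write is serialized
    }
});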
You should be able to take advantage of multithreading without too much hassle.
You should probably make use of the thread pool so you don't spawn too many threads all running at the same time. You can read about the built-in thread pool here:
http://msdn.microsoft.com/en-us/library/3dasc8as%28v=vs.80%29.aspx
What you should do is extract the body of your foreach loop into a static method, and wrap its parameters in a model object that can be passed to the thread. Then you can use ThreadPool.QueueUserWorkItem(object) to start the threads.
Regarding multiple threads writing to the same resource (a list or whatever), you can use any kind of mutex, lock or thread-safe component. Using locks is quite simple:
http://msdn.microsoft.com/en-us/library/c5kehkcz%28v=vs.80%29.aspx
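A minimal sketch of that pattern (the WorkItem model, the ProcessConnection method and the CountdownEvent used to wait for completion are all illustrative names, not from the question):

// Hypothetical model object carrying the parameters for one unit of work.
class WorkItem
{
    public ConnectionStringContract Connection;
    public string MirrorUser;
    public List<AccessRequest> Results; // shared output list
    public object ResultsLock;          // guards Results
    public CountdownEvent Done;         // signals completion
}

static void ProcessConnection(object state)
{
    var work = (WorkItem)state;
    try
    {
        // ... open the connection, run SP_HELPUSER for work.MirrorUser,
        // build AccessRequest objects, then:
        // lock (work.ResultsLock) { work.Results.Add(access); }
    }
    finally
    {
        work.Done.Signal(); // always signal, even if the query failed
    }
}

// Queue one work item per connection string and wait for all of them.
var results = new List<AccessRequest>();
var resultsLock = new object();
using (var done = new CountdownEvent(connectionStrings.Count))
{
    foreach (var conn in connectionStrings)
    {
        ThreadPool.QueueUserWorkItem(ProcessConnection, new WorkItem
        {
            Connection = conn,
            MirrorUser = mirrorUser,
            Results = results,
            ResultsLock = resultsLock,
            Done = done
        });
    }
    done.Wait();
}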
