I'm having some performance issues when starting my Windows service: on the first round my lstSps is long (about 130 stored procedures). Is there any way to speed this up (other than speeding up the stored procedures themselves)?
When the foreach is over and it goes into the second round, it runs faster, because not as many return true from TimeToRun(). My concern is the first time through, when there are a lot more stored procedures to run.
I have thought about using an array and a for loop, since I read that it's faster, but I believe the real problem is that the procedures take too long. Could I build this in a better way? Maybe use multiple threads (one for each execute) or something like that?
Would really appreciate some tips :)
EDIT: Just to clarify, it's the HasResult() method that executes the SPs, and that is what makes this look slow.
lock (lstSpsToSend)
{
lock (lstSps)
{
foreach (var sp in lstSps.Where(sp => sp.TimeToRun()).Where(sp => sp.HasResult()))
{
lstSpsToSend.Add(sp);
}
}
}
while (lstSpsToSend.Count > 0)
{
//Take the first watchdog in list and then remove it
Sp sp;
lock (lstSpsToSend)
{
sp = lstSpsToSend[0];
lstSpsToSend.RemoveAt(0);
}
try
{
//Send the results
}
catch (Exception e)
{
Thread.Sleep(30000);
}
}
What I would do is something like this:
int openThread = 0;
ConcurrentQueue<Sp> queue = new ConcurrentQueue<Sp>();
foreach (var sp in lstSps)
{
    Thread worker = new Thread(() =>
    {
        if (sp.TimeToRun() && sp.HasResult())
        {
            queue.Enqueue(sp);
        }
        Interlocked.Decrement(ref openThread);
    }) { Priority = ThreadPriority.AboveNormal, IsBackground = false };
    // Count the thread before starting it, otherwise the wait loop below
    // could observe zero before any worker has had a chance to run.
    Interlocked.Increment(ref openThread);
    worker.Start();
}
// Wait for all threads to be finished
while (Thread.VolatileRead(ref openThread) > 0)
{
    Thread.Sleep(500);
}
// Move the collected sps from the queue over to lstSpsToSend
Sp dequeued;
while (queue.TryDequeue(out dequeued)) lstSpsToSend.Add(dequeued);
while (lstSpsToSend.Count > 0)
{
//Take the first watchdog in list and then remove it
Sp sp;
lock (lstSpsToSend)
{
sp = lstSpsToSend[0];
lstSpsToSend.RemoveAt(0);
}
try
{
//Send the results
}
catch (Exception e)
{
Thread.Sleep(30000);
}
}
The best approach depends heavily on what these stored procedures are actually doing. If they return the same kind of result set, or no result for that matter, it would definitely be beneficial to send them to SQL Server all at once instead of one at a time.
The reason for this is network latency: if your SQL Server sits in a data center somewhere that you access over a WAN, your latency could be anywhere from 200ms up. So if you call 130 stored procedures sequentially, the "cost" is 200ms x 130, which is 26 seconds spent just running back and forth over the network, not actually executing the logic in your procs.
If you can combine all the procedures into a single call, you pay the 200ms cost only once.
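For illustration, a minimal sketch of that idea, assuming parameterless procedures whose names are exposed by a hypothetical sp.Name property and a connectionString you already have; all the EXEC statements travel to the server in one command, so the round trip is paid once:
using System.Data.SqlClient;
using System.Text;

var sql = new StringBuilder();
foreach (var sp in lstSps.Where(x => x.TimeToRun()))
{
    sql.AppendLine("EXEC " + sp.Name + ";"); // sp.Name is an assumption, adapt to your type
}
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql.ToString(), conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        do
        {
            while (reader.Read())
            {
                // consume one procedure's result set here
            }
        } while (reader.NextResult()); // advance to the next procedure's results
    }
}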
Executing them on multiple concurrent threads is also a reasonable approach, but as before, it depends on what your procedures are doing and returning to you.
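If you do try the concurrent route, a minimal PLINQ sketch (assuming each Sp is independent and HasResult() is safe to call in parallel, e.g. each call opens its own connection):
var spsToSend = lstSps
    .Where(sp => sp.TimeToRun())    // cheap check, keep it sequential
    .AsParallel()
    .WithDegreeOfParallelism(8)     // tune: each worker ties up a DB connection
    .Where(sp => sp.HasResult())    // the expensive DB call runs in parallel
    .ToList();
The result can then be added to lstSpsToSend under the existing lock.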
Using an array over a list would not really give you any performance increases.
Hope this helps, good luck!
I have a web application that makes industrial scheduling calculations. I'm trying to log events in an Azure Cosmos DB table after every update that happens to the schedule, without affecting the application's performance (screen loading time).
That means I want to fire the log method in the BACKGROUND, so the end user will not feel it (no freeze or extra loading time) and the UI does not wait for this operation to be done.
I added the following C# lines just after the whole calculation finishes:
private List<JPIEventEntity> batch = new List<JPIEventEntity>();
private List<List<JPIEventEntity>> batchesList = new List<List<JPIEventEntity>>();
Thread newThread = new Thread(() => myJPIEventHandler.checkForJPIEventsToSend(customer, author, model));
newThread.Start();
/*
* check if there are any changes or updates in the calculations and log their events.
*/
internal void checkForJPIEventsToSend(JPIBaseCustomer customer, JPIBaseUser author, SchedulingModel afterModel)
{
myCustomer = customer;
myUser = author;
// Looking for deleted Jobs
foreach (Job beforeJob in myBeforeModel.Jobs)
{
if (!afterModel.Jobs.Any(x => x.Guid == beforeJob.Guid))
{
//Looking for deleted Tasks and Log the deletion
foreach (Operation beforeOp in beforeJob.Operations)
{
//if (!afterJob.Operations.Any(x => x.Guid == beforeOp.Guid))
logTaskEvent(EventType.Delete, beforeOp, "", "");
}
//Log Job Deletion
logJobEvent(EventType.Delete, beforeJob, "", "");
}
}
//Comparison
foreach (Job afterJob in afterModel.Jobs)
{
if (myBeforeModel.Jobs.Any(x => x.Guid == afterJob.Guid))
{
Job beforeJob = myBeforeModel.Jobs.First(x => x.Guid == afterJob.Guid);
if (beforeJob.Name != afterJob.Name)
logJobEvent(EventType.NameChanged, afterJob, beforeJob.Name, afterJob.Name);
if (beforeJob.ReleaseDate != afterJob.ReleaseDate)
logJobEvent(EventType.ReleaseDateChanged, afterJob, beforeJob.ReleaseDate, afterJob.ReleaseDate);
if (beforeJob.DueDate != afterJob.DueDate)
logJobEvent(EventType.DueDateChanged, afterJob, beforeJob.DueDate, afterJob.DueDate);
if (beforeJob.IsDueDateExceeded != afterJob.IsDueDateExceeded)
logJobEvent(EventType.DueDateExceededChanged, afterJob, beforeJob.IsDueDateExceeded.ToString(), afterJob.IsDueDateExceeded.ToString());
if (beforeJob.ProcessingState != afterJob.ProcessingState)
{
logJobEvent(EventType.StatusChanged, afterJob,
convertProcessingStateToStatus(beforeJob.ProcessingState.ToString()), convertProcessingStateToStatus(afterJob.ProcessingState.ToString()));
}
if (beforeJob.SequenceNumber != afterJob.SequenceNumber && afterJob.ProcessingState != JobProcessingState.Finished)
logJobEvent(EventType.SequenceNumberChanged, afterJob, beforeJob.SequenceNumber, afterJob.SequenceNumber);
if (beforeJob.CustomQuantity != afterJob.CustomQuantity)
logJobEvent(EventType.QuantityChanged, afterJob, beforeJob.CustomQuantity, afterJob.CustomQuantity);
DateTime? beforeStart = beforeJob.ProcessingStart != null ? beforeJob.ProcessingStart : beforeJob.PlannedStart;
DateTime? afterStart = afterJob.ProcessingStart != null ? afterJob.ProcessingStart : afterJob.PlannedStart;
if (beforeStart != afterStart)
logJobEvent(EventType.StartDateChanged, afterJob, beforeStart, afterStart);
DateTime? beforeEnd = beforeJob.ProcessingEnd != null ? beforeJob.ProcessingEnd : beforeJob.PlannedEnd;
DateTime? afterEnd = afterJob.ProcessingEnd != null ? afterJob.ProcessingEnd : afterJob.PlannedEnd;
if (beforeEnd != afterEnd)
logJobEvent(EventType.EndDateChanged, afterJob, beforeEnd, afterEnd);
TimeSpan? beforeBuffer = beforeJob.DueDate != null ? (beforeJob.DueDate - beforeEnd) : new TimeSpan(0L);
TimeSpan? afterBuffer = afterJob.DueDate != null ? (afterJob.DueDate - afterEnd) : new TimeSpan(0L);
if (beforeBuffer != afterBuffer)
logJobEvent(EventType.BufferChanged, afterJob, beforeBuffer, afterBuffer);
}
}
//Collect the batches into one list of batches
CollectBatches();
//Log all the batches
LogBatches(batchesList);
}
/*
* Collects the events into one batch
*/
private void logJobEvent(EventType evtType, Job afterJob, string oldValue, string newValue)
{
var eventGuid = Guid.NewGuid();
JPIEventEntity evt = new JPIEventEntity();
evt.Value = newValue;
evt.PrevValue = oldValue;
evt.ObjectGuid = afterJob.Guid.ToString();
evt.PartitionKey = myCustomer.ID; //customer guid
evt.RowKey = eventGuid.ToString();
evt.EventType = evtType;
evt.CustomerName = myCustomer.Name;
evt.User = myUser.Email;
evt.ParentName = null;
evt.ObjectType = JOB;
evt.ObjectName = afterJob.Name;
evt.CreatedAt = DateTime.Now;
batch.Add(evt);
}
/*
* Collects the event lists into an enumerable of batches (max capacity of a single batch insertion is 100).
*/
private void CollectBatches()
{
//batchesList = new List<List<JPIEventEntity>>();
if (batch.Count > 0)
{
int rest = batch.Count;
var nextBatch = new List<JPIEventEntity>();
if (batch.Count > MaxBatchSize) //MaxBatchSize = 100
{
foreach (var item in batch)
{
nextBatch.Add(item);
rest = rest - 1; //rest = rest - (MaxBatchSize * hundreds);
if (rest < MaxBatchSize && nextBatch.Count == (batch.Count % MaxBatchSize))
{
batchesList.Add(nextBatch);
}
else if (nextBatch.Count == MaxBatchSize)
{
batchesList.Add(nextBatch);
nextBatch = new List<JPIEventEntity>();
}
}
}
else
{
batchesList.Add(batch);
}
}
}
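(Aside: the chunking above can be written more compactly; a minimal sketch over the same batch and batchesList fields, using LINQ's Skip/Take and assuming nothing else about the class:)
private void CollectBatches()
{
    // Split the accumulated events into chunks of at most MaxBatchSize (needs System.Linq)
    for (int i = 0; i < batch.Count; i += MaxBatchSize)
    {
        batchesList.Add(batch.Skip(i).Take(MaxBatchSize).ToList());
    }
}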
private void LogBatches(List<List<JPIEventEntity>> batchesList)
{
if (batchesList.Count > 0)
{
JPIEventHandler.LogBatches(batchesList);
}
}
/*
* Insert Events into database
*/
public static void LogBatches(List<List<JPIEventEntity>> batchesList)
{
foreach (var batch in batchesList)
{
var batchOperationObj = new TableBatchOperation();
//Iterate through each batch's entities
foreach (var Event in batch)
{
batchOperationObj.InsertOrReplace(Event);
}
var res = table.ExecuteBatch(batchOperationObj);
}
}
Inside the 'checkForJPIEventsToSend' method, I'm checking whether there are any changes or updates in the calculations, and inserting events (hundreds or even thousands of rows) into the Cosmos DB table as batches.
After putting the method in a separate thread (as shown above) I still get an EXTRA LOADING duration of 2 to 4 seconds after every operation, which is critical and bad for us.
Am I using the multi-threading correctly?
Thank you in advance.
As I understand your situation, you have a front-end application such as a desktop app or a website that creates requests. For each request you:
Perform some calculations
Write some data to storage (Cosmos DB)
It is unclear whether you must display some result to the front end after these steps are complete. Your options depend on this question.
Scenario 1: The front end is waiting for the results of the calculations or database changes
The front end requires some result from the calculations or database changes, so the user is forced to wait for this to complete. However you want to avoid freezing your front end whilst you perform the long running tasks.
The solution most developers reach for here is to perform the work in a background thread. Your main thread waits for this background thread to complete and return a result, at which point the main/UI thread will update the front end with the result. This is because the main thread is often the only thread allowed to update the front end.
How you offload the work to a background thread depends on the type of work load you have. If the work is mostly I/O such as File I/O, Network I/O (writing to Azure CosmosDB) then you want to use the Async methods of the Cosmos API and async/await.
See https://stackoverflow.com/a/18033198/6662878
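For the code in the question, that would mean making the logging path awaitable end to end. A minimal sketch, assuming the same CloudTable-style table field the question already uses (ExecuteBatchAsync is the async counterpart of ExecuteBatch):
public static async Task LogBatchesAsync(List<List<JPIEventEntity>> batchesList)
{
    foreach (var batch in batchesList)
    {
        var batchOperationObj = new TableBatchOperation();
        foreach (var Event in batch)
        {
            batchOperationObj.InsertOrReplace(Event);
        }
        // The calling thread is freed while the network I/O is in flight
        await table.ExecuteBatchAsync(batchOperationObj);
    }
}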
If the work you are doing is mostly CPU based, then threads will only speed up the processing if the problem can be broken into parts and run in parallel across multiple background threads. If the problem cannot be broken down and parallelised then running the work on a single background thread has a small cost associated with thread switching, but in turn this frees up the main/UI thread whilst the CPU based work is in progress in the background.
See https://learn.microsoft.com/en-us/dotnet/standard/asynchronous-programming-patterns/consuming-the-task-based-asynchronous-pattern
You will need to think about how you handle exceptions that occur on background threads, and how the code you run in your background thread will respond to a request to stop processing.
See https://learn.microsoft.com/en-us/dotnet/standard/threading/canceling-threads-cooperatively
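A minimal cooperative-cancellation sketch (the work loop, workItems, and Process are illustrative names, not from the question):
var cts = new CancellationTokenSource();
var worker = Task.Run(() =>
{
    foreach (var item in workItems) // workItems is a hypothetical collection
    {
        cts.Token.ThrowIfCancellationRequested(); // cooperative stop point
        Process(item);                            // hypothetical unit of work
    }
}, cts.Token);

// Elsewhere, when the background work should stop:
cts.Cancel();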
Caveat: if any thread is carrying out very CPU intensive work (such as compressing or encrypting large amounts of data, encoding audio or video etc) this can often cause the front end to freeze, stop responding, drop network requests etc. If you have some processor intensive work to complete you need to think about how the work is spread over CPU cores, or CPUs.
Scenario 2: The front end does not need to display any specific result for the request
In this scenario you have more flexibility about how and when you perform your background work, because you can simply respond to the front-end request with an acknowledgement that the request has been received and will be processed in the (near) future. For example, a Web API may respond with a 202 Accepted HTTP response code to signal this.
You now want to queue the requests and process them somewhere other than your main/UI thread. There are a number of options, background threads being one of them, though not the simplest. You may also consider using a framework like https://www.hangfire.io/.
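As a rough illustration of the queue approach without any framework, a single long-running consumer can drain an in-process queue off the request path (LogRequest and WriteToCosmos are placeholder names):
var queue = new BlockingCollection<LogRequest>(); // LogRequest is a placeholder type

// Started once, e.g. at application startup
var consumer = Task.Factory.StartNew(() =>
{
    foreach (var request in queue.GetConsumingEnumerable())
    {
        WriteToCosmos(request); // placeholder: runs off the request/UI thread
    }
}, TaskCreationOptions.LongRunning);

// Request handlers just enqueue and return immediately:
// queue.Add(request);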
Another popular approach is to create a completely separate service or microservice that is responsible for picking up requests from a queue and performing the work.
See https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/asynchronous-message-based-communication
Multithreading should come with a big warning message. Sometimes it is unavoidable, but it is always difficult and troublesome to get right. The C# APIs have evolved over time and so there's a lot to learn and a lot of ground to cover. It is often seen as a quick option to convert an application to be multithreaded, though you should be wary of this. Although more complex architectures as discussed in the link above seem overly burdensome, they invariably force you to think through a number of issues that come up when you begin to split up your application into threads, or processes, or services.
Hello, I just recently started with Cassandra and am not very familiar with it. Can you please help me find the error here?
I am trying to insert 16000 records using the code below:
public async Task AddSprintsStories(List<SprintStories> sprintStories)
{
var tasks = new List<Task>();
try
{
if (sprintStories.Count > 0)
{
foreach (var item in sprintStories)
{
SprintStories sprintStoryData = new SprintStories();
sprintStoryData.Id = item.Id;
sprintStoryData.ProjectId = item.ProjectId;
sprintStoryData.SprintId = item.SprintId;
tasks.Add(mapper.InsertAsync<SprintStories>(sprintStoryData, new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum)));
}
await Task.WhenAll(tasks);
}
}
catch (Exception e)
{
    // Note: swallowing the exception here hides any write failures
}
}
but I am facing this error: "Server timeout during write query at consistency LOCALQUORUM (0 peer(s) acknowledged the write over 2 required)".
Can anyone please help me out here?
How does the Cassandra cluster look while this runs? Is CPU or disk I/O maxed out? Without knowing that, my guess is that those 16000 writes are happening faster than your cluster can process them, creating write back pressure. Eventually it just can't process any more, and the writes start failing.
For a possible solution, try limiting the number of active threads. Something like this should do it.
int maxActiveThreads = 20;
int activeThreads = 0;
foreach (var item in sprintStories)
{
    ...
    tasks.Add(mapper.InsertAsync<SprintStories>(sprintStoryData, new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum)));
    activeThreads++;
    if (activeThreads >= maxActiveThreads)
    {
        await Task.WhenAll(tasks);
        tasks.Clear(); // drop the completed tasks before starting the next batch
        activeThreads = 0;
    }
}
await Task.WhenAll(tasks);
With this code, only 20 writes will be competing for Cassandra cluster resources at any given time. Do note that I'm just using 20 as an example; adjust that number to something that meets your requirements for performance and stability.
Ryan Svihla wrote a great blog post on this topic- Cassandra: Batch Loading Without the BATCH - The Nuanced Edition
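An alternative sketch of the same throttling with SemaphoreSlim, which keeps a steady 20 writes in flight instead of stopping to drain a full batch each time (inserting item directly for brevity):
var throttle = new SemaphoreSlim(20); // at most 20 concurrent writes
var tasks = sprintStories.Select(async item =>
{
    await throttle.WaitAsync();
    try
    {
        await mapper.InsertAsync<SprintStories>(item,
            new CqlQueryOptions().SetConsistencyLevel(ConsistencyLevel.LocalQuorum));
    }
    finally
    {
        throttle.Release(); // free the slot whether the write succeeded or failed
    }
}).ToList();
await Task.WhenAll(tasks);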
I have a web service I need to query, and it takes a value that supports pagination for its data. Due to the amount of data I need to fetch and how that service is implemented, I intended to do a series of concurrent HTTP requests to accumulate this data.
Say I have a number of threads and a page size: how could each thread pick a starting point that doesn't overlap with the other threads? It's been a long time since I took parallel programming and I'm floundering a bit. I know I could find my start point with something like start = N/numThreads * threadNum, but I don't know N. Right now I just spin up X threads and each loops until it gets no more data. The problem is that they tend to overlap and I end up with duplicate data. I need unique data and not to waste requests.
Right now I have code that looks something like this. This is one of many attempts, and I can see why it is wrong, but it's better to show something. The goal is to collect pages of data from a web service in parallel:
int limit = pageSize;
data = new List<RequestStuff>();
List<Task> tasks = new List<Task>();
for (int i = 0; i < numThreads; i++)
{
tasks.Add(Task.Factory.StartNew(() =>
{
try
{
List<RequestStuff> someData;
do
{
int start;
lock(myLock)
{
start = data.Count;
}
someData = GetDataFromService(start, limit);
lock (myLock)
{
if (someData != null && someData.Count > 0)
{
data.AddRange(someData);
}
}
} while (someData != null && someData.Count > 0);
}
catch (AggregateException ex)
{
//Exception things
}
}));
}
Task.WaitAll(tasks.ToArray());
Any inspiration to solve this without race conditions? I need to stick to .NET 4 if that matters.
I'm not sure there's a way to do this without wasting some requests unless you know the actual limit. The code below might help eliminate the duplicate data, as you will only query each index once:
private int _index = -1; // -1 so the first request starts at 0
private volatile bool _shouldContinue = true; // volatile: set from worker threads
public IEnumerable<RequestStuff> GetAllData()
{
    var tasks = new List<Task<RequestStuff>>();
    while (_shouldContinue)
    {
        // Task.Factory.StartNew actually starts the task (.NET 4 has no Task.Run);
        // in practice you would also want to cap how many tasks are in flight here.
        tasks.Add(Task.Factory.StartNew(() => GetDataFromService(GetNextIndex())));
    }
    Task.WaitAll(tasks.ToArray());
    return tasks.Select(t => t.Result).Where(r => r != null).ToList();
}
private RequestStuff GetDataFromService(int id)
{
// Get the data
// If there's no data returned, set _shouldContinue to false
// and return null; otherwise return the RequestStuff.
}
private int GetNextIndex()
{
return Interlocked.Increment(ref _index);
}
It could also be improved by adding cancellation tokens to cancel any indexes you know to be wasteful, i.e., if index 4 returns nothing you can cancel all queries on indexes above 4 that are still active.
Or, if you could make a reasonable guess at the max index, you might be able to implement an algorithm to pinpoint the exact limit before retrieving any data. This would probably only be more efficient if your guess was fairly accurate, though.
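For example, you could gallop and then binary search for the last non-empty page before fetching anything in bulk; PageHasData(i) below is a hypothetical cheap probe (say, a request with page size 1):
int hi = 1;
while (PageHasData(hi)) hi *= 2; // gallop until an empty page is found
int lo = hi / 2;                 // last page known to have data (0 if hi == 1)
while (lo + 1 < hi)              // binary search the boundary
{
    int mid = lo + (hi - lo) / 2;
    if (PageHasData(mid)) lo = mid; else hi = mid;
}
// Pages 0..lo hold data and can now be split evenly across threads.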
Are you attempting to force parallelism on the part of the remote service by issuing multiple concurrent requests? Paging is generally used to limit the amount of data returned to only what is needed, but if you need all of the data, then paging it first and reconstructing it later seems like a poor design. Your code becomes needlessly complex and difficult to maintain, you'll likely just move the bottleneck from code you control to somewhere else, and you've now introduced data integrity issues (what happens if these threads see different versions of the data you are querying?). By increasing the complexity and the number of calls, you also increase the likelihood of problems occurring (e.g. one of the connections gets dropped).
Can you state the problem you are attempting to solve so perhaps instead we can help architect a better solution?
Will parallelism help with performance for a locked object, should it be run single-threaded, or is there another technique?
I noticed that when accessing a dataset and adding rows from multiple threads, exceptions were thrown. I therefore created a "thread-safe" version that adds rows by locking the table prior to updating the row. This implementation works, but it appears slow with many transactions.
public partial class HaMmeRffl
{
public partial class PlayerStatsDataTable
{
public void AddPlayerStatsRow(int PlayerID, int Year, int StatEnum, int Value, DateTime Timestamp)
{
lock (TeamMemberData.Dataset.PlayerStats)
{
HaMmeRffl.PlayerStatsRow testrow = TeamMemberData.Dataset.PlayerStats.FindByPlayerIDYearStatEnum(PlayerID, Year, StatEnum);
if (testrow == null)
{
HaMmeRffl.PlayerStatsRow newRow = TeamMemberData.Dataset.PlayerStats.NewPlayerStatsRow();
newRow.PlayerID = PlayerID;
newRow.Year = Year;
newRow.StatEnum = StatEnum;
newRow.Value = Value;
newRow.Timestamp = Timestamp;
TeamMemberData.Dataset.PlayerStats.AddPlayerStatsRow(newRow);
}
else
{
testrow.Value = Value;
testrow.Timestamp = Timestamp;
}
}
}
}
}
Now I can call this safely from multiple threads, but does it actually buy me anything? Can I do this differently for better performance? For instance, is there any way to use the System.Collections.Concurrent namespace to optimize performance, or any other methods?
In addition, I update the underlying database after the entire dataset is updated, and that takes a very long time. Would that be considered an I/O operation, and would it be worth parallelizing by updating the database after each row (or some number of rows) is updated in the dataset?
UPDATE
I wrote some code to test concurrent vs. sequential processing, and it shows that concurrent processing takes about 30% longer, so I should use sequential processing here. I assume this is because the lock on the database makes the overhead of the ConcurrentQueue more costly than the gains from parallel processing. Is this conclusion correct, and is there anything I can do to speed up the processing, or am I stuck, since for a DataTable "you must synchronize any write operations"?
Here is my test code, which might not be scientifically rigorous. Here are the timer calls around the two versions:
dbTimer.Restart();
Queue<HaMmeRffl.PlayersRow.PlayerValue> addPlayerRow = InsertToPlayerQ(addUpdatePlayers);
Queue<HaMmeRffl.PlayerStatsRow.PlayerStatValue> addPlayerStatRow = InsertToPlayerStatQ(addUpdatePlayers);
UpdatePlayerStatsInDB(addPlayerRow, addPlayerStatRow);
dbTimer.Stop();
System.Diagnostics.Debug.Print("Writing to the dataset took {0} seconds single threaded", dbTimer.Elapsed.TotalSeconds);
dbTimer.Restart();
ConcurrentQueue<HaMmeRffl.PlayersRow.PlayerValue> addPlayerRows = InsertToPlayerQueue(addUpdatePlayers);
ConcurrentQueue<HaMmeRffl.PlayerStatsRow.PlayerStatValue> addPlayerStatRows = InsertToPlayerStatQueue(addUpdatePlayers);
UpdatePlayerStatsInDB(addPlayerRows, addPlayerStatRows);
dbTimer.Stop();
System.Diagnostics.Debug.Print("Writing to the dataset took {0} seconds concurrently", dbTimer.Elapsed.TotalSeconds);
In both examples I add to the Queue and the ConcurrentQueue in an identical, single-threaded manner. The only difference is the insertion into the datatable. The single-threaded approach inserts as follows:
private static void UpdatePlayerStatsInDB(Queue<HaMmeRffl.PlayersRow.PlayerValue> addPlayerRows, Queue<HaMmeRffl.PlayerStatsRow.PlayerStatValue> addPlayerStatRows)
{
try
{
HaMmeRffl.PlayersRow.PlayerValue row;
while (addPlayerRows.Count > 0)
{
row = addPlayerRows.Dequeue();
TeamMemberData.Dataset.Players.AddPlayersRow(
row.PlayerID, row.Name, row.PosEnum, row.DepthEnum,
row.TeamID, row.RosterTimestamp, row.DepthTimestamp,
row.Active, row.NewsUpdate);
}
}
catch (Exception)
{
TeamMemberData.Dataset.Players.RejectChanges();
}
try
{
HaMmeRffl.PlayerStatsRow.PlayerStatValue row;
while (addPlayerStatRows.Count > 0)
{
row = addPlayerStatRows.Dequeue();
TeamMemberData.Dataset.PlayerStats.AddUpdatePlayerStatsRow(
row.PlayerID, row.Year, row.StatEnum, row.Value, row.Timestamp);
}
}
catch (Exception)
{
TeamMemberData.Dataset.PlayerStats.RejectChanges();
}
TeamMemberData.Dataset.Players.AcceptChanges();
TeamMemberData.Dataset.PlayerStats.AcceptChanges();
}
The concurrent version adds as follows:
private static void UpdatePlayerStatsInDB(ConcurrentQueue<HaMmeRffl.PlayersRow.PlayerValue> addPlayerRows, ConcurrentQueue<HaMmeRffl.PlayerStatsRow.PlayerStatValue> addPlayerStatRows)
{
Action actionPlayer = () =>
{
HaMmeRffl.PlayersRow.PlayerValue row;
while (addPlayerRows.TryDequeue(out row))
{
TeamMemberData.Dataset.Players.AddPlayersRow(
row.PlayerID, row.Name, row.PosEnum, row.DepthEnum,
row.TeamID, row.RosterTimestamp, row.DepthTimestamp,
row.Active, row.NewsUpdate);
}
};
Action actionPlayerStat = () =>
{
HaMmeRffl.PlayerStatsRow.PlayerStatValue row;
while (addPlayerStatRows.TryDequeue(out row))
{
TeamMemberData.Dataset.PlayerStats.AddUpdatePlayerStatsRow(
row.PlayerID, row.Year, row.StatEnum, row.Value, row.Timestamp);
}
};
Action[] actions = new Action[Environment.ProcessorCount * 2];
for (int i = 0; i < Environment.ProcessorCount; i++)
{
actions[i * 2] = actionPlayer;
actions[i * 2 + 1] = actionPlayerStat;
}
try
{
// Start ProcessorCount concurrent consuming actions.
Parallel.Invoke(actions);
}
catch (Exception)
{
TeamMemberData.Dataset.Players.RejectChanges();
TeamMemberData.Dataset.PlayerStats.RejectChanges();
}
TeamMemberData.Dataset.Players.AcceptChanges();
TeamMemberData.Dataset.PlayerStats.AcceptChanges();
}
The difference in time is 4.6 seconds for the single-threaded version and 6.1 seconds for Parallel.Invoke.
Locks and transactions are not good for parallelism and performance.
1) Try to avoid locks: will different threads actually need to update the same row in the dataset?
2) Minimize the time spent holding locks.
For the DB operation you may want to try the batch update feature of ADO.NET: http://msdn.microsoft.com/en-us/library/ms810297.aspx
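A minimal sketch of that batch update, assuming a SqlDataAdapter (here called adapter) already configured with insert/update commands for the PlayerStats table:
adapter.UpdateBatchSize = 100; // send rows to the server 100 at a time
adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.None; // required for batching
adapter.UpdateCommand.UpdatedRowSource = UpdateRowSource.None;
adapter.Update(TeamMemberData.Dataset.PlayerStats); // one round trip per 100 rows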
Multithreading can help up to an extent, because once the data crosses your app boundary you start waiting for I/O, and there you can do asynchronous processing, since your app has no control over various parameters (resource access, network speed, etc.). This gives a better user experience (in a UI app).
Now for your scenario, you may want to use some sort of producer/consumer queue: as soon as a row is available in the queue, a different thread starts processing it. But again, this only works up to an extent.
I wanted to parallelize a piece of code, but the code actually got slower, probably because of the overhead of Barrier and BlockingCollection. There would be 2 threads, where the first would find pieces of work which the second one would operate on. Neither operation is much work, so the overhead of handing items over safely would quickly outweigh the benefit of the two threads.
So I thought I would try to write some code myself to be as lean as possible, without using Barrier etc. It does not behave consistently, however. Sometimes it works, sometimes it does not, and I can't figure out why.
This code is just the mechanism I use to try to synchronize the two threads. It doesn't do anything useful, just the minimum amount of code you need to reproduce the bug.
So here's the code:
// node in linkedlist of work elements
class WorkItem {
public int Value;
public WorkItem Next;
}
static void Test() {
WorkItem fst = null; // first element
Action create = () => {
WorkItem cur=null;
for (int i = 0; i < 1000; i++) {
WorkItem tmp = new WorkItem { Value = i }; // create new comm class
if (fst == null) fst = tmp; // if it's the first add it there
else cur.Next = tmp; // else add to back of list
cur = tmp; // this is the current one
}
cur.Next = new WorkItem { Value = -1 }; // -1 means stop element
#if VERBOSE
Console.WriteLine("Create is done");
#endif
};
Action consume = () => {
//Thread.Sleep(1); // this also seems to cure it
#if VERBOSE
Console.WriteLine("Consume starts"); // especially this one seems to matter
#endif
WorkItem cur = null;
int tot = 0;
while (fst == null) { } // busy wait for first one
cur = fst;
#if VERBOSE
Console.WriteLine("Consume found first");
#endif
while (true) {
if (cur.Value == -1) break; // if stop element break;
tot += cur.Value;
while (cur.Next == null) { } // busy wait for next to be set
cur = cur.Next; // move to next
}
Console.WriteLine(tot);
};
try { Parallel.Invoke(create, consume); }
catch (AggregateException e) {
Console.WriteLine(e.Message);
foreach (var ie in e.InnerExceptions) Console.WriteLine(ie.Message);
}
Console.WriteLine("Consume done..");
Console.ReadKey();
}
The idea is to have a linked list of work items. One thread adds items to the back of that list, and another thread reads them, does something, and polls the Next field to see if it is set. As soon as it is set it moves to the new item and processes it. It polls the Next field in a tight busy loop because it should be set very quickly. Going to sleep, context switching, etc. would kill the benefit of parallelizing the code.
The time it takes to create a work item is quite comparable to the time it takes to execute it, so the cycles wasted should be quite small.
When I run the code in release mode, sometimes it works, sometimes it does nothing. The problem seems to be in the 'Consume' thread; the 'Create' thread always seems to finish. (You can check by fiddling with the Console.WriteLines.)
It has always worked in debug mode. In release it's about 50% hit and miss. Adding a few Console.WriteLines helps the success ratio, but even then it's not 100% (the #define VERBOSE stuff).
When I add the Thread.Sleep(1) in the 'Consume' thread it also seems to fix it. But not being able to reproduce a bug is not the same thing as knowing for sure it's fixed.
Does anyone here have a clue as to what goes wrong? Is it some optimization that creates a local copy, or something that does not get updated?
There's no such thing as a partial update, right? Like a data race, but where one thread is half done writing and the other thread reads the partially written memory? Just checking...
Looking at it, I think it should just work. I guess once every few times the threads arrive in a different order and that makes it fail, but I don't get how. And how could I fix this without slowing it down?
Thanks in advance for any tips,
Gert-Jan
I do my damn best to avoid the utter minefield of closure/stack interaction at all costs.
This is PROBABLY a (language-level) race condition, but without reflecting Parallel.Invoke I can't be sure. Basically, sometimes fst is being changed by create() and sometimes not. Ideally, it should NEVER be changed (if C# had good closure behaviour). It could be due to which thread Parallel.Invoke chooses to run create() and consume() on. If create() runs on the main thread, it might change fst before consume() takes a copy of it. Or create() might be running on a separate thread and taking a copy of fst. Basically, as much as I love C#, it is an utter pain in this regard, so just work around it and treat all variables involved in a closure as immutable.
To get it working:
//Replace
WorkItem fst = null;
//with
WorkItem fst = WorkItem.GetSpecialBlankFirstItem();
//And
if (fst == null) fst = tmp;
//with
if (fst.Next == null) fst.Next = tmp;
A thread is allowed by the spec to cache a value indefinitely.
See "Can a C# thread really cache a value and ignore changes to that value on other threads?" and also http://www.yoda.arachsys.com/csharp/threads/volatility.shtml
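In this code, the usual fix is to mark the fields read across threads as volatile, so the busy-wait reads cannot be hoisted into a register and cached; note that fst must become a field, since a local captured by a closure cannot be declared volatile. A minimal sketch:
class WorkItem {
    public int Value;
    public volatile WorkItem Next; // the consumer polls this from another thread
}
static volatile WorkItem fst;      // was a captured local in the question's code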