Closed 2 years ago.
I am new to threading and multithreading. I have a method that fetches IDs as input, in JSON format, from the database:
{
  "ID": ["1",
         ....,
         .....
         "30000"]
}
Now these IDs are processed again via Web API POST calls. The issue is that, though the code is optimized, it takes hours to process all the data.
How can I process these IDs in batches or with multithreading to make it faster?
Recent versions of .NET have great libraries that take care of the multithreading for you.
Check out the Parallel.ForEach loop: https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.parallel.foreach?view=netcore-3.1
You pass it a list, everything inside the loop body is executed once for each item, and the runtime runs some of the iterations in parallel (multithreaded). That means you could process more than one ID at a time.
Whether or not it improves performance depends on the environment and work being done.
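A minimal sketch of the idea. `ProcessId` is a placeholder for your actual POST call, and the ID list and the `MaxDegreeOfParallelism` value are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Hypothetical list of IDs, standing in for the JSON payload.
        List<string> ids = Enumerable.Range(1, 100).Select(i => i.ToString()).ToList();
        int processed = 0;

        // Cap the degree of parallelism so the Web API is not flooded.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };

        Parallel.ForEach(ids, options, id =>
        {
            // In the real code this would be the POST call for one ID.
            ProcessId(id);
            Interlocked.Increment(ref processed);
        });

        Console.WriteLine(processed); // all 100 IDs handled
    }

    // Placeholder for the actual Web API POST call.
    static void ProcessId(string id) => Thread.Sleep(1);
}
```

Note that for I/O-bound HTTP calls, an async approach (`HttpClient` with `Task.WhenAll` and a `SemaphoreSlim` throttle) often scales better than `Parallel.ForEach`, because it does not tie up a thread per in-flight request.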
Closed 6 years ago.
I'm in the process of converting our data layer for a fairly large and complex WCF application to talk to the database asynchronously.
This has resulted in async and awaits being littered everywhere in the calling/consuming code.
Looking at the stack trace for a typical request, I can already see many frames for System.Runtime.CompilerServices.TaskAwaiter doing its thing with await, and I have only just started this task!
I understand what .NET does when it encounters async/await, so my question is: is the extra overhead associated with async/await worth it when the result is quite a few async methods from the beginning of a request to the end? I understand the benefits of calling the database asynchronously, but is there a limit, particularly when the calling application is fairly large and complex (or, more precisely, has a large and long call stack)?
Thanks.
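For context, the pattern being described can be sketched like this. The method names and the `Task.Delay` stand-in for the database call are illustrative assumptions; the point is that each `await` in the chain adds microseconds of state-machine overhead, while the database round trip it enables typically costs milliseconds:

```csharp
using System;
using System.Threading.Tasks;

class Example
{
    // Each async method compiles to a state machine; each await that
    // actually suspends costs a context capture and a continuation.
    static async Task<int> GetCustomerCountAsync()
    {
        // Stand-in for the real async data-layer call (e.g. ExecuteScalarAsync).
        await Task.Delay(1);
        return 42;
    }

    // Typical layering: the awaits propagate up through the call stack.
    static async Task<int> ServiceLayerAsync() => await GetCustomerCountAsync();
    static async Task<int> FacadeLayerAsync()  => await ServiceLayerAsync();

    static async Task Main()
    {
        // Three awaits deep: per-await overhead is tiny relative to the I/O,
        // but the thread is freed for other requests while the I/O runs.
        int count = await FacadeLayerAsync();
        Console.WriteLine(count);
    }
}
```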
Closed 6 years ago.
We receive Excel invoice data for 3 million customers every month to be processed.
There are only 8 fields in the data. The time consumed converting it to PDF format is too great, and we are not able to meet the TAT (turnaround time).
Can anyone suggest anything to reduce the processing time?
There are several possibilities to reduce processing time.
Please check first what the bottleneck is.
Some ideas:
Use the TPL to parallelize processing: https://en.wikipedia.org/wiki/Parallel_Extensions#Task_Parallel_Library
Maybe use a third-party library to process the Excel files (e.g. Aspose.Cells, Aspose.PDF)
If the hardware is the bottleneck, use an SSD and a faster CPU, or use CUDA for the processing: https://en.wikipedia.org/wiki/CUDA
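A minimal sketch of the first idea (TPL parallelization). `ConvertToPdf` is a placeholder for whatever PDF-generation step is actually used, and the invoice data is faked for illustration:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class InvoiceBatch
{
    static void Main()
    {
        // Hypothetical invoice rows; in the real job these come from Excel.
        int[] invoices = Enumerable.Range(1, 1000).ToArray();
        int converted = 0;

        // PDF generation is CPU-bound, so letting the TPL spread the work
        // across all cores is usually the single biggest win.
        Parallel.ForEach(invoices, invoice =>
        {
            ConvertToPdf(invoice);              // placeholder for the PDF step
            Interlocked.Increment(ref converted);
        });

        Console.WriteLine(converted);
    }

    // Placeholder for the actual conversion (e.g. an Aspose.PDF call).
    static void ConvertToPdf(int invoice) => Thread.SpinWait(100);
}
```

Check first that the PDF library in use is thread-safe for concurrent conversions before parallelizing this way.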
Closed 7 years ago.
Recently I was working on a project that writes a CSV file and an XML file at the same time.
They contain the same metadata. How can I open two StreamWriters in C# at the same time?
The question is too broad, so the answer is generic. In order "to write a csv file and an XML file at the same time" you have to implement some form of multithreading, using either the Task Parallel Library (TPL, recommended) or another technique available in .NET, and run the two write procedures on two different threads (in either blocking or non-blocking mode).
More details on the TPL: https://msdn.microsoft.com/en-us/library/dd537609%28v=vs.110%29.aspx
Hope this helps.
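A minimal TPL sketch of the idea. The file names, row data, and XML shape are illustrative assumptions; the point is that each file gets its own StreamWriter on its own task, joined with `Task.WhenAll`:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class TwoWriters
{
    static async Task Main()
    {
        // The shared metadata both files are built from (assumed shape).
        var rows = new[] { ("id1", "value1"), ("id2", "value2") };

        Task csvTask = Task.Run(() =>
        {
            using var csv = new StreamWriter("out.csv");
            foreach (var (id, value) in rows)
                csv.WriteLine($"{id},{value}");
        });

        Task xmlTask = Task.Run(() =>
        {
            using var xml = new StreamWriter("out.xml");
            xml.WriteLine("<rows>");
            foreach (var (id, value) in rows)
                xml.WriteLine($"  <row id=\"{id}\">{value}</row>");
            xml.WriteLine("</rows>");
        });

        // Both writers run concurrently; await completion of both.
        await Task.WhenAll(csvTask, xmlTask);
        Console.WriteLine("done");
    }
}
```

This works because the two tasks never touch the same file; each StreamWriter is created, used, and disposed entirely within its own task.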
Closed 9 years ago.
I have a requirement to load a file containing up to 1 million lines of string data. My first thought is to use C# 5.0 async to load the data whilst not blocking the UI thread. If the user tries to access something that relies on the data they will get a loading message.
Still I would like the fastest possible method in order to improve the user's experience.
Is the speed of reading data from disk purely a function of disk speed, so that File.ReadAllLines() is as performant as any other C# code? Or is there something 'fancy' I can do to boost performance programmatically? This does not have to be described in detail. If so, what approximate percentage improvement might be achieved?
I am purely interested in read speed and not concerned with the speed of code that may process the data once loaded.
First of all, take a look at the file size; here are detailed performance measurements.
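To measure this yourself, a quick sketch comparing the two standard approaches (the file path and line count are assumptions for the benchmark). `File.ReadAllLines` materializes everything up front, while `File.ReadLines` streams lazily, which keeps memory flat and lets the UI show partial data sooner; total throughput is usually disk-bound either way:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class ReadBench
{
    static void Main()
    {
        string path = "big.txt"; // assumed test file
        File.WriteAllLines(path, Enumerable.Range(1, 100_000).Select(i => "line " + i));

        // Eager: loads the whole file into a string[] before returning.
        var sw = Stopwatch.StartNew();
        string[] all = File.ReadAllLines(path);
        Console.WriteLine($"ReadAllLines: {all.Length} lines, {sw.ElapsedMilliseconds} ms");

        // Lazy: yields one line at a time as it is read from disk.
        sw.Restart();
        int count = File.ReadLines(path).Count();
        Console.WriteLine($"ReadLines:    {count} lines, {sw.ElapsedMilliseconds} ms");
    }
}
```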
Closed 9 years ago.
I am trying to learn RavenDB and am using its .NET client. If I am not wrong, the main reason to use a NoSQL database like RavenDB is speed; it seems to be very fast as it is not relational. However, when playing with the .NET client, I find that all calls are REST-based. Doesn't that slow things down? For each document it adds, the client makes a HiLo call, which tells the .NET client the next unique number to use, and then makes a second call to store the actual document.
You seem to be running RavenDB in a console app and checking what happens over a very short lifetime.
Run RavenDB in a real app over time, and you'll see that it is highly optimized for network usage.
You only see this HiLo call once per X updates, and that X changes based on your actual usage scenarios.
The more you use RavenDB, the faster it becomes.
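A sketch of what that looks like in client code, written against the modern RavenDB .NET client (requires the RavenDB.Client NuGet package and a running server; the URL, database name, and `Order` type are assumptions, and older client versions use slightly different property names):

```csharp
using Raven.Client.Documents;

class Order { public string Id { get; set; } }

class Demo
{
    static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "http://localhost:8080" }, // assumed local server
            Database = "Demo"                          // assumed database name
        };
        store.Initialize();

        using var session = store.OpenSession();
        var order = new Order();

        // Store() assigns the Id client-side from the cached HiLo range;
        // no HTTP request happens here unless the range is exhausted.
        session.Store(order);

        // A single HTTP request persists the document (batched with any
        // other pending changes in this session).
        session.SaveChanges();
    }
}
```

So in steady state, storing N documents costs roughly N `SaveChanges` requests plus only an occasional HiLo request, not two requests per document.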