Closed. This question needs debugging details. It is not currently accepting answers.
Closed 6 years ago.
I want to be able to use more threads in a for loop.
The reason is that each iteration runs a process that it has to wait on to exit.
Currently it doesn't seem to get even close to 100% of my CPU, and looking at the performance, most of the time is spent waiting for the processes.
My code is pretty much like this now:
Parallel.For(0, count, i =>
{
........
});
Everything is synced afterwards, so it doesn't matter how many threads run, per se.
They don't need to share data during their run, so they have free rein in that regard.
Thanks.
EDIT:
Okay, I tried looking into what actually makes it slow.
For some reason it does use the CPU; it just doesn't show it, probably because it's being run under "Console Window Host"?
In that case the limit is indeed the CPU, damn it -_-.
Still, why doesn't it show up as a normal process?
It's basically:
using (Process pRocess = new Process())
{
    pRocess.StartInfo.FileName = "somefile.exe";
    pRocess.Start();
    // work, work, and maybe more work
    pRocess.WaitForExit();
}
Maybe you mean
Parallel.For(0, 10, new ParallelOptions { MaxDegreeOfParallelism = 10 },
    i => { Console.WriteLine(Thread.CurrentThread.ManagedThreadId); });
but note that this is not the number of threads. More info here: https://msdn.microsoft.com/en-us/library/system.threading.tasks.paralleloptions.maxdegreeofparallelism%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
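Putting the two threads of this discussion together, a minimal sketch of the process-launching loop with a bounded degree of parallelism might look like the following (the executable name and iteration count are placeholders taken from the question; this is an illustration, not the asker's actual code):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Runner
{
    static void Main()
    {
        int count = 20; // placeholder
        var options = new ParallelOptions
        {
            // Cap concurrent iterations; this is an upper bound, not a thread count.
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.For(0, count, options, i =>
        {
            using (var process = new Process())
            {
                process.StartInfo.FileName = "somefile.exe"; // placeholder
                process.StartInfo.UseShellExecute = false;
                process.Start();
                process.WaitForExit(); // the blocked thread uses no CPU while waiting
            }
        });
    }
}
```

Note that the child processes show up in Task Manager under their own names (or under "Console Window Host" for console executables), not under the parent, which may explain the CPU-usage confusion in the edit above.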
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
From what I know, it is fairly well known that C# cannot be accurate when timing is critical. I can certainly understand that, but I was hoping there were known game-style hacks to help with my issue.
tech:
I'm using an API for USB that sends data over a control transfer. In the API I get an event when an interrupt transfer occurs (one every 8 ms). I then simply fire off my control transfer at that exact time. What I have noticed, though not often, is that it takes more than 8 ms to fire. Most of the time it does so in a timely manner (< 1 ms after the interrupt event). The issue is that control transfers cannot happen at the same time as an interrupt transfer, so the control transfer must be done within 5 ms of the interrupt transfer so that it is complete before the next interrupt transfer takes place.
So, USB specifics aside, my issue is getting an event to fire < 5 ms after another event. I'm hoping there is a solution for this, as gaming would also suffer from this sort of thing. For example, some games can be put into a high-priority mode; I wonder if that can be done in code? I may also try a profiler to back up my suspicions; it may be something I can turn off.
For those who want to journey down the technical road, the API is https://github.com/signal11/hidapi
If someone has a trick or idea that might work, here are some of the considerations in my case:
1) USB interrupt polls happen every 8 ms and are only a few hundred µs long
2) The control transfer should happen once every 8-32 ms (the faster the better)
3) This control transfer can take up to 5 ms to complete
4) Skipping oscillations is OK for the control transfer
5) This is USB 1.1
This is not even a C# problem: you are on a multitasking, non-realtime OS, so you don't know when your program is going to be active; the OS can give priority to other tasks.
That said, you can raise the priority of the program's thread, but I doubt it will solve anything:
System.Threading.Thread.CurrentThread.Priority = ThreadPriority.Highest;
When such restrictive timings must be met, you must work at kernel level, for example as a driver.
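As an illustration of the "high-priority mode" the question mentions, priority can be raised for both the current thread and the whole process. This is a sketch of the available knobs, not a guarantee of sub-5 ms scheduling on a non-realtime OS:

```csharp
using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        // Raise the scheduling priority of the current thread...
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

        // ...and of the whole process. ProcessPriorityClass.RealTime exists,
        // but it can starve the OS; High is the usual compromise.
        using (var self = Process.GetCurrentProcess())
        {
            self.PriorityClass = ProcessPriorityClass.High;
        }
    }
}
```

Even at high priority, the scheduler can still preempt the process, which is why the driver-level suggestion above stands for hard deadlines.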
This question already has answers here:
Parallels.ForEach Taking same Time as Foreach
(2 answers)
Closed 7 years ago.
I have written an algorithm to solve Sudoku puzzles.
Now I have to read around 100 Sudoku puzzles and solve them.
Basically, I am just reading all the problems into 2D arrays and looping through each of them to solve it.
foreach (var problem in problems)
{
Solve(problem); // Solve is a static method
}
If I replace above piece of code with :
Parallel.ForEach(problems, problem => Solve(problem));
I do not see considerable improvement. I have 2 cores in my machine.
Am I missing anything? Do I have to do anything else to make sure my algorithm runs in parallel and uses all cores?
There is some overhead involved in setting up and managing the parallel tasks. This overhead would negate some of the gains you'd make in solving the problems. It's possible, if the code executes quickly enough, that this overhead might dominate the execution time. Moreover, with 2 cores, you'd only expect to see at best 2x improvement (half the time) since only 2 problems could be worked on at any given time. Absent any information on your performance or code, it's difficult to say whether there is anything causing it to not be able to run in parallel (for example do you have critical sections that force serial execution).
I agree with @tvanfosson. Another possibility is that with only 2 cores, one core is partly occupied by other things like your IDE, so it is not 100% available to overcome the overhead of Parallel.
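One way to check whether Parallel.ForEach helps at all is simply to time both loops. A sketch follows; the Solve body here is a CPU-bound stand-in, not the asker's actual solver, and the puzzle data is dummy:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class Benchmark
{
    // Stand-in for the real Sudoku solver: some CPU-bound busywork.
    static void Solve(int[] problem)
    {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++)
            sum += problem[i % problem.Length];
    }

    static void Main()
    {
        // ~100 dummy 9x9 boards flattened to 81 cells each.
        var problems = Enumerable.Range(0, 100).Select(_ => new int[81]).ToArray();

        var sw = Stopwatch.StartNew();
        foreach (var p in problems) Solve(p);
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        Parallel.ForEach(problems, p => Solve(p));
        Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");
    }
}
```

On a 2-core machine the parallel run should approach, at best, half the sequential time; if the two numbers are close, the per-item work is too small relative to the scheduling overhead, or something in Solve serializes execution.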
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
I am facing a strange issue that throws the following exception.
The CLR has been unable to transition from COM context 0x22f3090 to COM context 0x22f32e0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
So, I am keen to know its possible causes in WPF. I am performing an operation that triggers it, so I timed my code with a stopwatch: my code itself is not slow; the time is being spent by the runtime framework. I know I am probably doing something wrong in my code, so I want to understand the possible causes of this kind of bug. Currently, invoking that operation takes more than 5 minutes, even though the operation is very simple.
After a lot of effort, I found that I was making a silly mistake: I had put a DataGrid inside a ScrollViewer, which disables the default virtualization (of both UI and data), so the grid was trying to load all the objects regardless of whether they were needed.
So, a good note: never ever place a DataGrid inside a ScrollViewer.
<ScrollViewer>
    <DataGrid>
        ....
    </DataGrid>
</ScrollViewer>
Never do this.
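A sketch of the fix: drop the outer ScrollViewer and let the DataGrid scroll itself, so row virtualization stays on (the binding name and height are placeholders):

```xml
<!-- The DataGrid has its own internal ScrollViewer; constraining its height
     instead of wrapping it keeps UI/data virtualization enabled. -->
<DataGrid ItemsSource="{Binding Items}"
          EnableRowVirtualization="True"
          MaxHeight="400" />
```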
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I am writing a DLL in C# and I need (not want) to limit the usage of the DLL to a specific time.
I want it to stop working N hours after the start of its usage, while notifying the user.
And if the DLL is not used for M hours, also stop its usage (as a sort of keep-alive).
"Usage" here means calling one of the functions of the DLL.
Using exceptions always collides with the availability of the DLL (I can't keep it blocking).
I would very much appreciate any help I can get; I am not very experienced with C#.
A solution in any language would be great. Obviously I am running on a Windows computer, but its components (service pack, etc.) can easily be modified if any modification is needed.
Your help is very much appreciated, thank you.
EDIT
I think I wasn't clear.
What I need is to notify the user when the usage of the DLL's functions ends, so he can initialize those functions again if he wants.
The DLL is not unloaded.
And I don't want to interfere with the main program's process, the one using the DLL's functions.
Use a timer:
System.Threading.Timer t = new System.Threading.Timer(End, null, N * 60 * 1000, Timeout.Infinite);
private static void End(object state)
{
// Tell the user and end the program
}
For the not-used case, use another timer:
System.Threading.Timer t2 = new System.Threading.Timer(End, null, M * 60 * 1000, Timeout.Infinite);
Now, I don't know what "used" means for you, but in any function that is considered a "use", do:
t2.Change(M * 60 * 1000, Timeout.Infinite);
To end the program you can use:
Environment.Exit(0);
Keep in mind that if you have more than one thread you might end up with a zombie process, so make sure that all threads other than the main thread are background threads, or that you abort them or signal them to close.
Now I understand that you only want to notify the user, so what I suggest is using:
MessageBox.Show(this, "what message you want to give the user");
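The two timers above can be combined into one small class. This is a sketch under the question's assumptions (N and M in hours, and what counts as a "use" is up to the caller); the class name and callback are my own invention:

```csharp
using System;
using System.Threading;

// Sketch: notifies via a callback after N hours of total lifetime,
// or after M hours with no calls to Touch().
class UsageLimiter : IDisposable
{
    private readonly Timer _lifetime;
    private readonly Timer _idle;
    private readonly int _idleMs;

    public UsageLimiter(int lifetimeHours, int idleHours, Action<string> notify)
    {
        _idleMs = idleHours * 60 * 60 * 1000;
        _lifetime = new Timer(_ => notify("Usage period is over."), null,
                              lifetimeHours * 60 * 60 * 1000, Timeout.Infinite);
        _idle = new Timer(_ => notify("DLL idle for too long."), null,
                          _idleMs, Timeout.Infinite);
    }

    // Call at the top of every exported function that counts as a "use";
    // this resets the idle countdown.
    public void Touch() => _idle.Change(_idleMs, Timeout.Infinite);

    public void Dispose()
    {
        _lifetime.Dispose();
        _idle.Dispose();
    }
}
```

Because the notification comes through a callback rather than Environment.Exit, it does not interfere with the host process, which matches the clarified requirement in the question's edit.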
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I have an ASP.NET web page in which the button click event's process runs for 35 minutes; on the front end I am using AJAX and showing a progress-bar image. If the process (the button click event) completes in less than 30 minutes, the page reloads successfully; otherwise the "in progress" image keeps showing even after the process is completed, until the AsyncPostBackTimeout (which is set to 60 minutes) is reached, and a server timeout error is shown after 60 minutes.
Please let me know if there is something I am doing wrong.
Without seeing your code, I can't tell you what's going wrong. However, I can recommend a couple of options:
Break the task out into multiple steps (instead of one long chained task). It may be a little more work for the user, but at least they're not left hanging on a page for half an hour or more (ouch!).
Use a profiler to see what's actually taking so long and whether you can optimize the code to cut down the processing. For example, if it's a database call, it may make sense to write a stored procedure instead of multiple selects/updates (with data going back and forth); keep the processing on the database machine until the final result is needed.
For long tasks, it may make sense to break the process out into a service or separate entity (and just have the service report back progress). For example, MSMQ is a great way to have a dedicated service running and to pass tasks off to it when needed. Just keep in mind that this creates another layer, which is one more place to maintain.
If a process takes 30 minutes today, tomorrow it could take 60 minutes or more simply because your servers are busy doing other things. The approach is fundamentally wrong.
My advice would be to move such long tasks to another layer, a system service. The service runs, picks tasks from a queue, and executes them one by one. The front layer just polls every few seconds/minutes to see whether the operation is complete. Or, even better, users do not wait; they do other things and are eventually informed that the long-running task is complete.
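The queue-plus-worker idea can be sketched in-process with a BlockingCollection consumed by one long-running task (in production this would live in a Windows service or behind MSMQ, as suggested above; the class and member names here are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class JobQueue
{
    private readonly BlockingCollection<Action> _jobs = new BlockingCollection<Action>();

    public JobQueue()
    {
        // One dedicated worker picks jobs off the queue and runs them one by one.
        Task.Factory.StartNew(() =>
        {
            foreach (var job in _jobs.GetConsumingEnumerable())
                job();
        }, TaskCreationOptions.LongRunning);
    }

    // The page's click handler enqueues and returns immediately;
    // the front end then polls a status flag that the job sets when done.
    public void Enqueue(Action job) => _jobs.Add(job);
}
```

The key property is that the HTTP request never blocks on the work: it only enqueues, and completion is observed by polling (or by a later notification), which sidesteps the AsyncPostBackTimeout entirely.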