Performance issue with WPF [closed] - c#

I am facing a strange issue that throws the following exception:
The CLR has been unable to transition from COM context 0x22f3090 to COM context 0x22f32e0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
So I am keen to know its possible causes in WPF. I am performing an operation that triggers this, so I put a stopwatch around my code and measured it, but my own code is not what takes the time; the time is spent inside the runtime/framework. I know I am probably doing something wrong in my code, so I would like to understand the possible reasons for this kind of bug. Currently, invoking that operation takes more than 5 minutes, even though the operation itself is very simple.
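For illustration only (a hedged sketch, not the asker's code): the classic trigger for this warning is a blocking, non-pumping wait on the STA/UI thread, which an await avoids because the dispatcher keeps pumping messages:

using System.Threading.Tasks;
using System.Windows;

// Hypothetical WPF handler contrasting a non-pumping wait with an await.
private async void Load_Click(object sender, RoutedEventArgs e)
{
    // BAD: LoadDataAsync().Wait();  // blocks the STA thread, no message pumping
    await LoadDataAsync();           // the dispatcher keeps pumping while waiting
}

private Task LoadDataAsync()
{
    // Long-running work kept off the UI thread.
    return Task.Run(() => { /* ... */ });
}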

After a lot of effort, I realized I was making a silly mistake: putting a DataGrid inside a ScrollViewer, which disables the DataGrid's default UI and data virtualization, so it was trying to load every object in the grid regardless of whether it was needed.
So, good note:
Never place a ScrollViewer outside a DataGrid.
<ScrollViewer>
    <DataGrid>
        ....
    </DataGrid>
</ScrollViewer>
Never do this.
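For reference, a minimal sketch of the corrected layout (the ItemsSource binding name is hypothetical): give the DataGrid bounded space in an ordinary layout container so its own internal ScrollViewer, and with it row/column virtualization, stays active.

<Grid>
    <!-- The DataGrid scrolls itself, so virtualization stays enabled. -->
    <DataGrid ItemsSource="{Binding Items}"
              EnableRowVirtualization="True"
              EnableColumnVirtualization="True" />
</Grid>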

Related

Making C# as accurate as possible (maybe with hacks)? [closed]

From what I know, it is fairly well known that C# cannot be accurate when timing is critical. I can certainly understand that, but I was hoping there were known game hacks to help with my issue.
tech:
I'm using an API for USB that sends data over a control transfer. In the API I get an event when an interrupt transfer occurs (one every 8 ms). I then simply fire off my control transfer at that exact time. What I have noticed, though not often, is that it takes more than 8 ms to fire. Most of the time it does so in a timely manner (< 1 ms after the interrupt event). The issue is that control transfers cannot happen at the same time as an interrupt transfer, so the control transfer must be done within 5 ms of the interrupt transfer so that it is complete before the next interrupt transfer takes place.
So, USB specifics aside, my issue is getting an event to fire < 5 ms after another event. I'm hoping there is a solution for this, as gaming would also suffer from this sort of thing. For example, some games can be put in a high-priority mode; I wonder if that can be done in code? I may also try a profiler to back up my suspicions; it may be something I can turn off.
For those that want to journey down the technical road, the api is https://github.com/signal11/hidapi
In case someone has a trick or idea that may work, here are some of the considerations in my case:
1) USB interrupt polls happen every 8 ms and are only a few hundred µs long
2) the control transfer should happen once every 8-32 ms (the faster the better)
3) this control transfer can take up to 5 ms to complete
4) skipping occasional cycles is OK for the control transfer
5) this is USB 1.1
This is not even a C# problem: you are on a multitasking, non-realtime OS, so you don't know when your program is going to be active; the OS can give priority to other tasks.
That said, you can raise the priority of the program's thread, though I doubt it will solve anything:
System.Threading.Thread.CurrentThread.Priority = ThreadPriority.Highest;
When such restrictive timings must be met, you must work at the kernel level, for example as a driver.
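As for the asker's aside about games running in high-priority mode: the whole process's priority class can be raised from code as well (a hedged sketch; it reduces, but does not eliminate, scheduling jitter on a non-realtime OS):

using System.Diagnostics;

// Equivalent to setting "High" priority in Task Manager for this process.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;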

Async rewrite of sync code performs 20 times slower in loops [closed]

I'm trying not to panic here, so please, bear with me! :S
I've spent a considerable amount of time rewriting a big chunk of code from sync (i.e. thread-blocking) to async (i.e. C# 5's async/await). The code in question runs inside an ASP.NET application and spans everything from low-level ADO.NET DB access to a higher-level unit-of-work pattern, and finally a custom async HTTP handler for public API access - the full server-side stack, so to speak. The primary purpose of the rewrite wasn't optimization, but untangling, general clean-up, and bringing the code up to something that resembles a modern and deliberate design. Naturally, an optimization gain was implicitly assumed.
Everything in general is great and I'm very satisfied with the overall quality of the new code, as well as the improved scalability it's shown so far in the last couple of weeks of real-world tests. The CPU and memory loads on the server have fallen drastically!
So what's the problem, you might ask?
Well, I've recently been tasked with optimizing a simple data import that is still utilizing the old sync code. Naturally, it didn't take me long before I tried changing it to the new async code-base to see what would happen.
Given everything, the import code is quite simple. It's basically a loop that reads items from a list that's been previously read into memory, adds each of them individually to a unit of work, and saves it to a SQL database by means of an INSERT statement. After the loop is done, the unit of work is committed, which makes the changes permanent (i.e. the DB transaction is committed as well).
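In outline, the async version of that loop looks something like the sketch below (all names are hypothetical; the real code isn't shown), which means the async-machinery overhead is paid once per item:

using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical sketch of the import loop described above.
public async Task ImportAsync(IEnumerable<Item> items, IUnitOfWork uow)
{
    foreach (var item in items)
    {
        uow.Add(item);
        await uow.SaveAsync();  // one awaited INSERT per item
    }
    await uow.CommitAsync();    // makes the changes permanent (commits the DB transaction)
}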
The problem is that the new code takes about 20 times as long as the old one, when the expectation was quite the opposite! I've checked and double-checked and there is no obvious overhead in the new code that would warrant such sluggishness.
To be specific: the old code is able to import 1100 items/sec steadily, while the new one manages 40 items/sec AT BEST (on average it's even less, because the rate falls slightly over time)! If I run the same test over a VPN, so that the network cost outweighs everything else, the throughput is somewhere around 25 items/sec for sync and 20 for async.
I've read about multiple cases here on SO which report a 10-20% slowdown when switching from sync to async in similar situations and I was prepared to deal with that for tight loops such as mine. But a 20-fold penalty in a non-networked scenario?! That's completely unacceptable!
What is my best course of action here? How do I tackle this unexpected problem?
UPDATE
I've run the import under a profiler, as suggested.
I'm not sure what to make of the results, though. It would seem that the process spends more than 80% of its time just... waiting. See for yourselves:
[profiler screenshot omitted]
The 14% spent inside the custom HTTP handler corresponds to IDataReader.Read, which is a consequence of a tiny remainder of the old sync API. This is still subject to optimization and is likely to be reduced in the near future. Regardless, it's dwarfed by the WaitAny cost, which definitely isn't there in the all-sync version!
What's curious is that the report isn't showing any direct calls from my code to WaitAny, which makes me think this is probably part of the async/await infrastructure. Am I wrong in this conclusion? I kind of hope I am!
What worries me is that I might be reading this all wrong. I know that async costs are much harder to reason about than single-threaded costs. In the end, the WaitAny might be nothing more than the equivalent of the "System Idle Process" on Windows - an artificial entry that reflects the free percentage of the CPU resource.
Can anyone shed some light here for me, please?

Worse multithreaded performance on better system (possibly due to Deedle) [closed]

We are dealing with a multithreaded C# service using Deedle. Tests on a quad-core current system versus an octa-core target system show that the service is about two times slower on the target system, instead of two times faster. Even when restricting the number of threads to two, the target system is still almost 40% slower.
Analysis shows a lot of waiting in Deedle (/F#), effectively making the target system run on two cores. Non-Deedle test programs show normal behaviour and superior memory bandwidth on the target system.
Any ideas on what could cause this or how to best approach this situation?
EDIT: It seems most of the time waiting is done in calls to Invoke.
The problem turned out to be a combination of using Windows 7, .NET 4.5 (or actually the 4.0 runtime) and the heavy use of tail recursion in F#/Deedle.
Using Visual Studio's Concurrency Visualizer, I already found that most time is spent waiting in Invoke calls. On closer inspection, these result in the following call trace:
ntdll.dll:RtlEnterCriticalSection
ntdll.dll:RtlpLookupDynamicFunctionEntry
ntdll.dll:RtlLookupFunctionEntry
clr.dll:JIT_TailCall
<some Deedle/F# thing>.Invoke
Searching for these functions turned up multiple articles and forum threads indicating that using F# can result in a lot of calls to JIT_TailCall, and that .NET 4.6 has a new JIT compiler that seems to deal with some issues related to these calls. Although I didn't find anything mentioning problems related to locking/synchronisation, this gave me the idea that updating to .NET 4.6 might be a solution.
However, on my own Windows 8.1 system that also uses .NET 4.5, the problem doesn't occur. After searching a bit for similar Invoke calls, I found that the call trace on this system looked as follows:
ntdll.dll:RtlAcquireSRWLockShared
ntdll.dll:RtlpLookupDynamicFunctionEntry
ntdll.dll:RtlLookupFunctionEntry
clr.dll:JIT_TailCall
<some Deedle/F# thing>.Invoke
Apparently, in Windows 8(.1) the locking mechanism was changed to something less strict (a shared slim reader/writer lock instead of a critical section), which results in far less waiting on the lock.
So it was only with the target system's combination of Windows 7's strict locking and .NET 4.5's less efficient JIT compiler that F#'s heavy use of tail recursion caused problems. After updating to .NET 4.6, the problem disappeared and our service is running as expected.

How much async/await is OK? [closed]

In our project we are using async/await for three main purposes (in all of their methods):
Data access layer: fetching from / updating databases (using Dapper).
Cache (Redis): reads/writes.
ASP.NET MVC 5 controllers.
The question is how much async/await is OK. Is it OK to use them even when reading or writing a small amount of data? How about the cache and the controllers?
Remarks: the project is a little special, and it may receive about 50,000 requests per second for a few hours a day.
According to an article I've read:
Async/await is great for avoiding blocking while potentially time-consuming work is performed in a .NET application, but there are overheads associated with running an async method. The cost of this is comparatively negligible when the asynchronous work takes a long time, but it's worth keeping in mind.
Based on what you asked ("even when reading or writing a small amount of data?"), it doesn't seem to be a good idea, as there are overheads.
Here is the article: The overhead of async/await in .NET 4.5
In the article, the author used a profiler to examine the overhead of async/await.
QUOTE:
Despite this async method being relatively simple, ANTS Performance Profiler shows that it's caused over 900 framework methods to be run in order to initialize it and the work it does the first time that it's run.
The question here may be whether you're going to accept these minimal overheads, taking into consideration that they do pile up into something potentially problematic.
The question is how much async/await is OK. Is it OK to use them even when reading or writing a small amount of data? How about the cache and the controllers?
You should use async/await for I/O-bound operations; it doesn't matter if it's a small amount of data. More important is to avoid blocking on potentially long-running I/O-bound operations, mainly disk and network calls. ASP.NET has a limited thread pool, and these operations may block its threads. Using asynchronous calls helps your application scale better and allows it to handle more concurrent requests.
For more info: http://msdn.microsoft.com/en-us/magazine/dn802603.aspx
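As a hedged sketch of that advice (the repository type and all names are hypothetical, not from the question), an I/O-bound MVC 5 action looks like this; no thread-pool thread is blocked while the database call is in flight:

using System.Threading.Tasks;
using System.Web.Mvc;

public class OrdersController : Controller
{
    private readonly IOrderRepository _orders; // assumed data-access abstraction

    public OrdersController(IOrderRepository orders)
    {
        _orders = orders;
    }

    public async Task<ActionResult> Details(int id)
    {
        // The I/O-bound call is awaited, freeing the thread back to the pool.
        var order = await _orders.GetByIdAsync(id);
        if (order == null)
            return HttpNotFound();
        return View(order);
    }
}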

asp.net page not responding after 30 minutes [closed]

I have an ASP.NET web page in which the button click event process runs for 35 minutes; on the front end I am using AJAX and showing a progress bar image. If the process (the button click event) completes in less than 30 minutes, the page reloads successfully; otherwise, the "in progress" image keeps showing even after the process has completed, until the AsyncPostBackTimeout (which is set to 60 minutes) is reached and a server timeout error is shown.
Please let me know if there is something I am doing wrong.
Without seeing your code, I can't tell you what's going wrong. However, I can recommend a couple of options:
Break the task out into multiple steps (instead of one long chained task). It may be a little more work for the user, but at least they're not left hanging on a page for half an hour or more (ouch!).
Use a profiler to see what's actually taking so long and see whether you can optimize the code to cut down the processing. For example, if it's a database call, it may make sense to write a stored procedure instead of multiple selects/updates (with data going back and forth): keep the processing on the database machine until the final result is needed.
For long tasks, it may make sense to break the process out into a service or separate entity (and just have the service report back progress). For example, MSMQ is a great way to have a dedicated service running and pass tasks off to it when needed. Just keep in mind that this creates another layer, which is one more thing to maintain.
If a process takes 30 minutes today, tomorrow it could take 60 minutes or more simply because your servers are busy doing other things. The approach is therefore fundamentally wrong.
My advice would be to move such long tasks to another layer, a system service. The service runs, picks tasks from a queue, and executes them one by one. The front layer just polls every few seconds/minutes to see if the operation is complete. Or, even better, users do not wait; they do other things and are eventually informed that the long-running task is complete.
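A minimal sketch of that queue-plus-polling pattern (all type and member names are hypothetical, not from the question):

using System;
using System.Collections.Concurrent;

// A long-running worker that drains a job queue one item at a time and
// records status so the web front end can poll for completion.
public class ImportWorker
{
    private readonly BlockingCollection<ImportJob> _queue = new BlockingCollection<ImportJob>();
    private readonly ConcurrentDictionary<Guid, string> _status = new ConcurrentDictionary<Guid, string>();

    public Guid Enqueue(ImportJob job)
    {
        _status[job.Id] = "Queued";
        _queue.Add(job);
        return job.Id;
    }

    // The page's AJAX call polls this instead of holding a request open.
    public string GetStatus(Guid id)
    {
        string s;
        return _status.TryGetValue(id, out s) ? s : "Unknown";
    }

    // Started once, e.g. inside a Windows service.
    public void Run()
    {
        foreach (var job in _queue.GetConsumingEnumerable())
        {
            _status[job.Id] = "Running";
            job.Execute(); // the 30+ minute work happens here
            _status[job.Id] = "Completed";
        }
    }
}

public class ImportJob
{
    public Guid Id { get; private set; }
    public ImportJob() { Id = Guid.NewGuid(); }
    public void Execute() { /* long-running work */ }
}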
