Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
It is fairly well known that C# cannot be relied upon when timing is critical. I certainly understand that, but I was hoping there were known workarounds to help with my issue.
tech:
I'm using an API for USB that sends data over a control transfer. In the API I get an event when an interrupt transfer occurs (one every 8 ms). I then simply fire off my control transfer at that exact moment. What I have noticed, though not often, is that it takes more than 8 ms to fire. Most of the time it does so in a timely manner (< 1 ms after the interrupt event). The issue is that control transfers cannot happen at the same time as an interrupt transfer, so the control transfer must be done within 5 ms of the interrupt transfer so that it completes before the next interrupt transfer takes place.
So, USB specifics aside, my issue is getting an event to fire < 5 ms after another event. I'm hoping there is a solution for this, as games would also suffer from this sort of thing. For example, some games can be put in a high-priority mode; I wonder if that can be done in code? I may also try a profiler to back up my suspicions, as it may be something I can turn off.
For those who want to journey down the technical road, the API is https://github.com/signal11/hidapi
In case someone has a trick or idea that may work, here are the considerations in my case:
1) USB interrupt polls happen every 8 ms and are only a few hundred µs long
2) the control transfer should happen once every 8-32 ms (the faster the better)
3) this control transfer can take up to 5 ms to complete
4) skipping the occasional cycle is OK for the control transfer
5) this is USB 1.1
This is not even a C# problem: you are on a multitasking, non-realtime OS, so you don't know when your program is going to be active; the OS can give priority to other tasks.
That said, you can raise the priority of the program's thread, but I doubt it will solve anything:
System.Threading.Thread.CurrentThread.Priority = ThreadPriority.Highest;
When such tight timings must be met, you must work at kernel level, for example as a driver.
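Short of writing a driver, a user-mode sketch can at least reduce scheduling jitter: request a finer system timer resolution via winmm.dll and raise both the process and thread priority. This is only a mitigation, not a hard-realtime guarantee, and the P/Invoke declarations below are assumptions you should verify against your target Windows version.

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class TimingTweaks
{
    // winmm.dll's timeBeginPeriod/timeEndPeriod request a finer global
    // timer resolution from Windows (typically down to 1 ms).
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1); // ask for 1 ms timer resolution
        try
        {
            // Raise both the process and the current thread priority.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;

            // ... fire the control transfer from here ...
        }
        finally
        {
            timeEndPeriod(1); // always undo the resolution request
        }
    }
}
```

Even with all of this, the Windows scheduler can still preempt you occasionally, which is why consideration 4 (skipping a cycle is OK) matters: design the protocol to tolerate the occasional missed window.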
Closed 6 years ago.
We are dealing with a multithreaded C# service using Deedle. Tests on our current quad-core system versus an octa-core target system show that the service is about two times slower on the target system, instead of two times faster. Even when restricting the number of threads to two, the target system is still almost 40% slower.
Analysis shows a lot of waiting in Deedle (/F#), making the target system effectively run on two cores. Non-Deedle test programs show normal behaviour and superior memory bandwidth on the target system.
Any ideas on what could cause this or how to best approach this situation?
EDIT: It seems most of the waiting happens in calls to Invoke.
The problem turned out to be a combination of using Windows 7, .NET 4.5 (or actually the 4.0 runtime) and the heavy use of tail recursion in F#/Deedle.
Using Visual Studio's Concurrency Visualizer, I already found that most time is spent waiting in Invoke calls. On closer inspection, these result in the following call trace:
ntdll.dll:RtlEnterCriticalSection
ntdll.dll:RtlpLookupDynamicFunctionEntry
ntdll.dll:RtlLookupFunctionEntry
clr.dll:JIT_TailCall
<some Deedle/F# thing>.Invoke
Searching for these functions turned up multiple articles and forum threads indicating that using F# can result in a lot of calls to JIT_TailCall, and that .NET 4.6 has a new JIT compiler that seems to deal with some issues related to these calls. Although I didn't find anything mentioning problems related to locking/synchronisation, this gave me the idea that updating to .NET 4.6 might be a solution.
However, on my own Windows 8.1 system, which also uses .NET 4.5, the problem doesn't occur. After searching a bit for similar Invoke calls, I found that the call trace on this system looks as follows:
ntdll.dll:RtlAcquireSRWLockShared
ntdll.dll:RtlpLookupDynamicFunctionEntry
ntdll.dll:RtlLookupFunctionEntry
clr.dll:JIT_TailCall
<some Deedle/F# thing>.Invoke
Apparently, in Windows 8(.1) the locking mechanism was changed to something less strict (a slim reader/writer lock instead of a critical section, as the traces show), which results in far less waiting for the lock.
So only with the target system's combination of Windows 7's stricter locking and .NET 4.5's less efficient JIT compiler did F#'s heavy use of tail recursion cause problems. After updating to .NET 4.6, the problem disappeared and our service now runs as expected.
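For reference, opting the application into the newer runtime behaviour can be done by declaring the target framework in app.config; a minimal fragment (assuming .NET Framework 4.6 is installed on the machine) might look like:

```xml
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
  </startup>
</configuration>
```

Recompiling against the 4.6 reference assemblies has the same effect; the key point is that the in-place 4.6 runtime ships the new JIT that avoids the JIT_TailCall hot path seen in the traces above.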
Closed 7 years ago.
I have a timer in my application that fires every 10 ms. I know this is a very small interval for a Windows timer and I am aware of the precision issues. Anyway, it causes CPU usage to increase to 10% on average, and memory usage slowly increases but eventually drops back to a lower value. Without the timer there are no CPU or memory issues. From what I've read, memory increasing and then decreasing is normal: Windows does not release memory unless it has to. However, is this going to cause any performance problems in my application? Is 10% CPU usage going to cause problems as well? When I increase the timer interval to 100 ms it seems a little better, but I still see a similar effect. I need the timer interval to be as small as possible.
In my opinion, 10% CPU usage is not a big deal. It's OK, but definitely not the best; it's acceptable if achieving better performance would require a lot of extra work.
I have written a lot of apps that use 20% CPU and they work fine. However, a timer set to 10 ms is kind of weird. I guess you want to use it to constantly check for something. If so, don't use a 10 ms timer; it's better to use events. If you don't know events, here is a simple guide.
You declare an event like this:
public event EventHandler SomethingHappened;
For the purpose of this example, I will put the event in a class called MyClass. When you want to raise the event, i.e. make the event occur, do this (the null-conditional ?. guards against the case where nobody has subscribed yet):
SomethingHappened?.Invoke (this, EventArgs.Empty);
Now let's see how do you subscribe to the event. Of course you need to create an object:
MyClass obj = new MyClass ();
And then write a method to execute when the event happens. Its return type and parameters must match the EventHandler delegate:
public void DoSomething (object sender, EventArgs e) {
}
Now you do the subscription:
obj.SomethingHappened += DoSomething;
For more information, here's an MSDN tutorial:
https://msdn.microsoft.com/en-us/library/aa645739(v=vs.71).aspx
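Putting the fragments above together, a minimal compilable version (C# 6 or later for the null-conditional invoke; the DoWork method is an invented trigger for illustration) looks like this:

```csharp
using System;

class MyClass
{
    public event EventHandler SomethingHappened;

    public void DoWork()
    {
        // Raise the event; ?.Invoke avoids a NullReferenceException
        // when nothing has subscribed yet.
        SomethingHappened?.Invoke(this, EventArgs.Empty);
    }
}

class Program
{
    static void DoSomething(object sender, EventArgs e)
    {
        Console.WriteLine("Something happened!");
    }

    static void Main()
    {
        MyClass obj = new MyClass();
        obj.SomethingHappened += DoSomething; // subscribe
        obj.DoWork();                         // handler runs here
    }
}
```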
SOLVED: the issue was that I had some code in the timer event handler that was slowing everything down. After replacing a few lines of code, CPU usage went back down to 0% and memory is no longer increasing. Hopefully this helps someone else in the future.
Closed 8 years ago.
So this question is kind of interesting, I think. And before I lose anyone's attention, or they start shoving their fingers in my face with accusations: no, I am not making or attempting to make a virus. I actually had a really good idea for a game. This game will be kind of like a creepypasta (http://en.wikipedia.org/wiki/Creepypasta), in that it will mimic the idea of a "haunted game". This game, when launched, will play for only a few seconds before ultimately "crashing" back to the desktop. At this point I would like to hide all traces of its existence (it is still running in the background) so that I may continue with phase 2 of the game. During this phase, I will randomly take control of a console window, or play creepy sound effects at arbitrary/random intervals. The game will also reopen at random, as if it has a mind of its own, but the game will be different each time this happens.
I would like to hide the game from the Task Manager completely, so that the Task Manager window shows absolutely nothing of the program, no matter which tab the user selects. I want the game to, quite literally, turn into a ghost. The programming language I am planning to use is C#, and the graphics library is OpenTK (which is irrelevant for this question, but I want to make sure I lay down as much information as possible).
Anyone have any ideas? Oh, and I should also mention that I am quite fluent in the .NET Framework/API, and I can build any Windows Forms application by hand (without using the designer).
Update:
I just thought of a fun alternative to hiding it. The answer: make the program smarter. Send the program to the desktop, then listen for the opening of Task Manager. If it opens, my program immediately shuts down Task Manager and the game responds with something super creepy like "But I thought you wanted to play with me? Why are you trying to kill me?" in a console. Sounds awesome. Lol.
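A rough sketch of that idea (purely illustrative; on modern Windows, Taskmgr.exe usually runs elevated, so Kill may fail with access denied unless the game itself is elevated):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class TaskManagerWatcher
{
    static void Main()
    {
        // Poll for Task Manager every half second. This loop runs
        // forever in this sketch; a real game would run it on a
        // background thread with a shutdown flag.
        while (true)
        {
            foreach (Process p in Process.GetProcessesByName("Taskmgr"))
            {
                try { p.Kill(); }
                catch (Exception) { /* access denied, already exited, ... */ }
                Console.WriteLine("But I thought you wanted to play with me?");
            }
            Thread.Sleep(500);
        }
    }
}
```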
I am no expert, but I think most techniques that deal with process hiding use CreateRemoteThread.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682437(v=vs.85).aspx
It is pretty tough to get right, but there are many blogs about it, e.g.:
http://resources.infosecinstitute.com/using-createremotethread-for-dll-injection-on-windows/
This works by picking some victim process that is already running, say svchost.exe, and injecting your thread into it.
Also, while speaking of svchost: you can, entirely legitimately, register a service and be hosted by this Windows process. Your users could still see the running game by calling the listing command:
tasklist /svc /fi "imagename eq svchost.exe"
or:
http://www.howtogeek.com/80082/svchost-viewer-shows-exactly-what-each-svchost-exe-instance-is-doing/
This is a tad more hidden than directly appearing as a task, while remaining gentler to the user than CreateRemoteThread. It is also less crash-prone, and antivirus software usually hooks CreateRemoteThread to block calls to it.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am facing a strange issue that throws the following exception:
The CLR has been unable to transition from COM context 0x22f3090 to COM context 0x22f32e0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
So, I am keen to know its possible causes in WPF. I am performing an operation which causes this, so I used a stopwatch to time my code, but my code is not what takes the time; it is spent in the runtime framework. I know I am probably doing something wrong in my code, so I am keen to know the possible reasons for this kind of bug. Currently, invoking that operation takes more than 5 minutes, even though the operation is very simple.
After a lot of effort, I found that I was making a silly mistake: I had placed a DataGrid inside a ScrollViewer, which disables the DataGrid's default virtualization (of both UI and data), so it was trying to load every object into the grid regardless of whether it was visible.
So, a good note: never place a DataGrid inside a ScrollViewer, like this:
<ScrollViewer>
    <DataGrid>
        ...
    </DataGrid>
</ScrollViewer>
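Instead, give the DataGrid a bounded size and let its built-in scrolling do the work, which keeps virtualization alive. A minimal sketch (MaxHeight value is arbitrary; EnableRowVirtualization is already the default in recent framework versions):

```xml
<!-- Bounded height + built-in scrolling keeps row virtualization alive. -->
<DataGrid MaxHeight="400"
          EnableRowVirtualization="True"
          VirtualizingPanel.IsVirtualizing="True" />
```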
Closed 9 years ago.
I have an ASP.NET web page in which the button click event runs a process for 35 minutes; on the front end I am using AJAX and showing a progress-bar image. If the process (button click event) completes in less than 30 minutes, the page reloads successfully; otherwise the "in progress" image keeps showing even after the process is completed, until the AsyncPostBackTimeout (which is set to 60 minutes) is reached, and the page shows a server timeout error after 60 minutes.
Please let me know if there is something I am doing wrong.
Without seeing your code, I can't tell you what's going wrong. However, I can recommend a couple of options:
Break the task out into multiple steps (instead of one long chained task). It may be a little more work for the user, but at least they're not left hanging on a page for half an hour or more (ouch!).
Use a profiler to see what's actually taking so long and see if you can't optimize the code to cut down on the processing. For example, if it's a database call, it may make sense to write a stored procedure instead of multiple selects/updates (with data going back and forth); keep the processing on the database server until the final result is needed.
For long tasks, it may make sense to break the process out into a service or separate entity (and just have the service report back progress). For example, MSMQ is a great way to have a dedicated service running and pass tasks off to it when needed. Just keep in mind that this creates another layer, which is one more place to maintain.
If a process takes 30 minutes today, it could take 60 minutes or more tomorrow just because your servers are busy doing other things. The approach is therefore fundamentally wrong.
My advice would be to move such long tasks to another layer, a system service. The service runs, picks tasks from a queue, and executes them one by one. The front layer just polls every few seconds/minutes to see whether the operation is complete. Or even better, users do not wait; they do other things and are eventually informed somehow that the long-running task is complete.
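The queue-plus-polling pattern can be sketched as follows. All the names here (JobQueue, Enqueue, GetStatus) are invented for illustration, not part of any framework; a real deployment would host the worker in a Windows service and persist job status somewhere the web tier can read.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// The page calls Enqueue() and returns immediately; a background worker
// runs jobs one by one; the front end polls GetStatus(id) via AJAX.
class JobQueue
{
    readonly BlockingCollection<Guid> _pending = new BlockingCollection<Guid>();
    readonly ConcurrentDictionary<Guid, string> _status =
        new ConcurrentDictionary<Guid, string>();

    public JobQueue()
    {
        // Dedicated long-running worker thread drains the queue.
        Task.Factory.StartNew(Worker, TaskCreationOptions.LongRunning);
    }

    public Guid Enqueue()
    {
        Guid id = Guid.NewGuid();
        _status[id] = "queued";
        _pending.Add(id);
        return id; // the page stores this id and polls GetStatus(id)
    }

    public string GetStatus(Guid id)
    {
        string s;
        return _status.TryGetValue(id, out s) ? s : "unknown";
    }

    void Worker()
    {
        foreach (Guid id in _pending.GetConsumingEnumerable())
        {
            _status[id] = "running";
            Thread.Sleep(1000); // stand-in for the real 35-minute job
            _status[id] = "done";
        }
    }
}
```

The web request never blocks on the job, so AsyncPostBackTimeout stops being a factor entirely.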