Play image sequence in C# [closed]

What is the best way, in terms of low CPU usage and good performance, to play a sequence of images in C#?
I'm writing an application for an industrial touch panel running Windows CE 5. The images are a maximum of 100 kB each. Unfortunately I have to use C#, because this is a form that must be integrated into an existing application.

Integrating into a C# application does not mean you have to use C# - you can write code in C++ or other languages and use p/invoke to call it from C#. But unless you're doing something really inefficient in C# you can get pretty decent performance out of it (I work on what is essentially a flight simulator written in C#, and it's pretty high performance - not as high as we could achieve with C++, but still a perfectly viable/realistic alternative, and it's so much faster to develop).
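For illustration only, a p/invoke declaration is just a couple of lines; the DLL name "FastBlit.dll" and the exported function below are hypothetical stand-ins for whatever native code you might decide to write:

using System;
using System.Runtime.InteropServices;

static class NativeRenderer
{
    // Hypothetical export from a C++ DLL you would write yourself; the library
    // name and signature here are assumptions for illustration only.
    [DllImport("FastBlit.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int BlitFrame(IntPtr targetHdc, byte[] frameData, int length);
}

// Usage: the managed side stays thin and just forwards the hot work, e.g.
// int result = NativeRenderer.BlitFrame(hdc, buffer, buffer.Length);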
However, unless you are writing a realtime app for a very low-spec CPU and it's already struggling to keep up with its tasks, loading and displaying a 100kB image at any likely framerate shouldn't be much of a problem.
As you don't really specify the bounds of what you can do, it's hard to give a precise answer.
In general, it would make sense to prefer DirectX/OpenGL over GDI, and GDI over GDI+, to get decent rendering/blitting performance.
If you have control over it, then there are many file formats that you can use which will help the speed (by compressing the data well to minimise the amount of data to be loaded, and/or by using hardware decompression approaches, better data streaming and caching approaches, pre-processing to optimise the image data to suit the target hardware, etc). If you have this much flexibility you may be able to use a video playback library or even an external video playback application that will do all the work for you, leaving you to write a trivial bit of "control logic" in C#. This will get you a much more efficient system than you are likely to achieve by rolling your own solution.
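At the "roll your own" end of that spectrum, a minimal sketch of timer-driven playback in WinForms could look like the following; the frame rate and the pre-load-everything strategy are assumptions, and on the .NET Compact Framework (Windows CE) some members such as DoubleBuffered are not available, so there you would double-buffer manually by drawing to an off-screen Bitmap.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

public class SequencePlayer : Form
{
    private readonly List<Bitmap> frames = new List<Bitmap>();
    private readonly Timer timer = new Timer();   // WinForms timer; adequate for modest frame rates
    private int current;

    public SequencePlayer(string[] framePaths)
    {
        // Decode every frame up front so the timer tick only has to draw,
        // trading memory for steadier playback (each source file is ~100 kB).
        foreach (string path in framePaths)
            frames.Add(new Bitmap(path));

        DoubleBuffered = true;                    // reduces flicker; not available on the Compact Framework
        timer.Interval = 66;                      // ~15 fps, an assumed target rate
        timer.Tick += delegate { current = (current + 1) % frames.Count; Invalidate(); };
        timer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.DrawImage(frames[current], 0, 0);
    }
}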

Related

Making C# as accurate as possible (maybe with hacks)? [closed]

From what I know, it is fairly well known that C# cannot be accurate when timing is critical. I certainly understand that, but I was hoping there were known game-style hacks to help with my issue.
tech:
I'm using an API for USB that sends data over a control transfer. In the API I get an event when an interrupt transfer occurs (one every 8 ms). I then simply fire off my control transfer at that exact time. What I have noticed, though not often, is that it sometimes takes more than 8 ms to fire. Most of the time it does so in a timely manner (< 1 ms after the interrupt event). The issue is that control transfers cannot happen at the same time as an interrupt transfer, so the control transfer must be done within 5 ms of the interrupt transfer so that it is complete before the next interrupt transfer takes place.
So, USB stuff aside, my issue is getting an event to fire < 5 ms after another event. I'm hoping there is a solution for this, as gaming would also suffer from this sort of thing. For example, some games can be put in a high-priority mode. I wonder if that can be done in code? I may also try a profiler to back up my suspicions; it may be something I can turn off.
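One quick way to back up that suspicion is a Stopwatch probe in the interrupt event handler; a minimal sketch, where the handler name is just a placeholder for whatever event the HID API raises:

using System;
using System.Diagnostics;

class LatencyProbe
{
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();
    private long lastTicks;

    // Call this from the interrupt-transfer event handler (name assumed).
    public void OnInterruptReceived()
    {
        long now = stopwatch.ElapsedTicks;
        double elapsedMs = (now - lastTicks) * 1000.0 / Stopwatch.Frequency;
        lastTicks = now;

        // Anything noticeably above the expected 8 ms points at scheduling jitter
        // in the application rather than at the USB stack itself.
        if (elapsedMs > 9.0)
            Debug.WriteLine("Late interrupt event: " + elapsedMs.ToString("F2") + " ms");
    }
}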
For those who want to journey down the technical road, the API is https://github.com/signal11/hidapi
If someone has a trick or idea that may work, here are some of the considerations in my case:
1) USB interrupt polls happen every 8 ms and are only a few hundred µs long
2) the control transfer should happen once every 8-32 ms (the faster the better)
3) this control transfer can take up to 5 ms to complete
4) skipping oscillations is OK for the control transfer
5) this is USB 1.1
This is not even a C# problem: you are on a multitasking, non-realtime OS, so you don't know when your program is going to be active; the OS can give priority to other tasks.
That said, you can raise the priority of the program's thread, but I doubt it will solve anything:
System.Threading.Thread.CurrentThread.Priority = ThreadPriority.Highest;
When such restrictive timings must be met, you must work at kernel level, for example as a driver.
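For completeness, a hedged sketch of what "raising priority from code" usually looks like in C#; whether the scheduler then honors a 5 ms window is exactly the doubt expressed above.

using System.Diagnostics;
using System.Threading;

static class PriorityBooster
{
    public static void Boost()
    {
        // Raise the whole process, then the worker thread that services the USB events.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

        // Even ProcessPriorityClass.RealTime does not turn Windows into a realtime OS;
        // the garbage collector and other drivers can still preempt you.
    }
}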

How do I best-determine system requirements for a new app? [duplicate]

Possible Duplicate:
How to specify the hardware your software needs?
How do you determine the system requirements of a user's PC in order for them to install and run your software?
I am aware of the obvious, such as Windows, .NET Framework [version number]. But how do you come up with the correct RAM, Processor and all of that?
Is this just something that you observe while you're debugging your app? Do you just check out the Resource Monitor and watch for how much Disk usage your app is using, or how much memory it is taking up?
Are there any tools, or would you recommend I use tools to help determine system requirements for my applications?
I've searched for this but I have not been able to find much information.
More importantly, what about the Windows Experience Index? I've seen a few boxed apps in the shop say you need a Windows Experience Index of N, but are there tools that determine what index is required for my app to run?
Until you start doing stress testing and load testing, using or carefully simulating production volumes and diversity of data, you do not really have a high quality build ready for mass deployment.
And when you do, experience (measurements and, if necessary, projection) from this testing will give you RAM, CPU and similar requirements for your customers.
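If you want numbers beyond eyeballing Resource Monitor, a small sketch like the following (using System.Diagnostics; the sampling interval and duration are arbitrary) can log your own process's memory and CPU time while a load test runs:

using System;
using System.Diagnostics;
using System.Threading;

class ResourceLogger
{
    static void Main()
    {
        Process me = Process.GetCurrentProcess();

        for (int i = 0; i < 60; i++)           // sample once a second for a minute (arbitrary)
        {
            me.Refresh();                      // re-read the counters for this process
            Console.WriteLine("WorkingSet: {0:N0} bytes, CPU: {1}",
                me.WorkingSet64, me.TotalProcessorTime);
            Thread.Sleep(1000);
        }
    }
}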
Sure, the Resource Monitor is a good way to see how much CPU and RAM it consumes. But it all depends on the app you're making, and as the developer you know approximately how much power is needed under the hood.
If you're just developing standard WinForms / VCL apps that use standard native controls, you really shouldn't worry too much - 256 MB RAM and a 1 GHz processor should be enough, this is usually what I tend to put on my sysreq page.
For heavy 3D games you should probably start looking more into it, how you do that I can't tell you.
If you REALLY want exact hertz and bytes, you could use a VM and alter the specs and see how your app behaves.

How should I optimize the following requirement in my application? Dealing mainly with ASP.NET 3.5, C#, file I/O, threading and some other aspects as well [closed]

Scenario -
It is a simple application through which a single user (no authentication required) uploads an Excel 2003/2010 file, one at a time. Two functions, Foo() and Goo(), are then called one after the other; they take some data from that Excel file, do some string manipulation and numeric computation, and return two different text files for the same user to download.
Now my question is: how can I optimize this? Performance has high priority. Also, how can I use threading between those two functions Foo() and Goo()? Will that improve my performance?
What more tips and tweaks are needed to achieve maximum speed in the overall process?
Do the results of Goo depend on anything done in Foo? Do either of the methods actually change the data in the Excel document? Do they need to use the Excel object model other than to extract data to start with?
If not, you could:
Extract the data needed for Foo, then launch Foo in a separate thread
Extract the data needed for Goo and then run Goo (in the current thread, as it's already separate)
However, I would look at the existing performance characteristics of your code first - which bits actually take the most time? For example, if accessing the spreadsheet is taking more time than Foo and Goo, you won't get much benefit from the threading, and you'll certainly end up with more complicated code. (I think the Office COM objects are effectively single-threaded.) If Foo and Goo are the bottleneck, have you run a profiler on your existing code to see if you can make it faster without threading?
Do you have performance targets already? How close are you to meeting those targets? Don't bother trying to make the code as fast as it can possibly be, when you've already got it to run as fast as you need it to.
If Foo and Goo are not related, you could run them in two different threads, making them run in parallel.
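A rough sketch of that extract-first, then-overlap approach, assuming .NET 3.5 (so plain threads rather than the Task Parallel Library), assuming Foo and Goo are independent, and with the method signatures invented for illustration:

using System.Threading;

static class ExcelJob
{
    // Foo and Goo are the poster's own functions; the signatures here are assumptions.
    static string Foo(object data) { /* string work */ return "foo.txt"; }
    static string Goo(object data) { /* numeric work */ return "goo.txt"; }

    public static void RunBoth(object fooInput, object gooInput)
    {
        string fooResult = null;

        // fooInput/gooInput should be plain data already extracted from the
        // spreadsheet, so no Excel/COM object crosses a thread boundary.
        Thread fooThread = new Thread(delegate() { fooResult = Foo(fooInput); });
        fooThread.Start();

        string gooResult = Goo(gooInput);  // the current thread handles the second half

        fooThread.Join();                  // wait for Foo before offering both files for download
    }
}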

Ways to improve efficiency of C# code [closed]

Like most of us, I am a big fan of improving efficiency of code. So much so that I would rather choose fast-executing dirty code over something which might be more elegant or clean, but slower.
Fortunately for all of us, in most cases the faster and more efficient solutions are also the cleaner and most elegant ones. I used to be just a dabbler in programming, but I am in full-time development now and have just started with C# and web development. I have been reading some good books on these subjects but, sadly, books rarely cover the finer aspects - say, which of two pieces of code that do the same thing will run faster. This kind of knowledge comes mostly through experience. I ask all fellow programmers to share any such knowledge here.
Here, I'll start off with these two blog posts I came across. This is exactly the kind of stuff I am looking for in this post:
Stringbuilder vs String performance analysis
The cost of throwing an exception
P.S.: Do let me know if this kind of thing already exists somewhere on this site; I searched but, surprisingly, couldn't find much. Also, please post any book you know of that covers such things.
P.P.S.: If you learned something from a blog post or another online source to which we all have access, it would be better to post the link itself, IMO.
There are some things you should do, like using generics instead of objects to avoid boxing/unboxing and to improve code safety, but the best way to optimize your code is to use a profiler to determine which parts of your code are slow. There are many great profilers available for .NET code, and they can help you find the bottlenecks in your programs.
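To make the boxing point concrete, a small sketch: the non-generic collection below boxes every int, while the generic one stores them directly (the collection sizes are arbitrary).

using System.Collections;
using System.Collections.Generic;

class BoxingExample
{
    static void Main()
    {
        // Non-generic: each int is boxed into an object on the heap,
        // and unboxed (with a cast) on the way out.
        ArrayList boxed = new ArrayList();
        for (int i = 0; i < 100000; i++)
            boxed.Add(i);
        int first = (int)boxed[0];

        // Generic: the ints are stored as ints, no boxing, no casts.
        List<int> unboxed = new List<int>();
        for (int i = 0; i < 100000; i++)
            unboxed.Add(i);
        int firstUnboxed = unboxed[0];
    }
}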
Generally you shouldn't concern yourself with small ways to improve code efficiency, but instead when you are done coding, then profile it to find the bottlenecks.
A good profiler will tell you stats like how many times a function was executed, what the average running time was for a function, what the peak running time was for a function, what the total running time was for a function, etc. Some profilers will even draw graphs for you so you can visually see which parts of the program are the biggest bottleneck and you can drill down into the sub function calls.
Without profiling you will most likely be wrong about which part of your program is slow.
An example of a great and free profiler for .NET is the EQATEC Profiler.
The single most important thing regarding this question is: Don't optimize prematurely!
There is only one good time to optimize and that is when there are performance constraints that your current working implementation cannot fulfill. Then you should get out a profiler and check which parts of your code are slow and how you can fix them.
Thinking about optimization while coding the first version is mostly wasted time and effort.
"I would rather choose fast-executing dirty code over something which might be more elegant or clean, but slower."
If I were writing a pixel renderer for a game, perhaps I'd consider doing this - however, when responding to a user's click on a button, for example, I'd always favour the slower, elegant approach over quick-and-dirty (unless slow > a few seconds, when I might reconsider).
I have to agree with the other posts - profile to determine where your slow points are and then deal with those. Writing optimal code from the outset is more trouble than it's worth; you'll usually find that what you think will be slow will be just fine, and the real slow areas will surprise you.
One good resource for .net related performance info is Rico Mariani's Blog
IMO it's the same for all programming platforms/languages: you have to use a profiler and see which parts of the code are slow, and then optimize those parts.
While the links you provided offer valuable insight, don't do such things in advance; measure first and then optimize.
edit:
http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html
When to use StringBuilder?
At what point does using a StringBuilder become insignificant or an overhead?
There are lots of tricks, but if that's what you're thinking you need, you need to start over. The secret of performance in any language is not in coding techniques, it is in finding what to optimize.
To make an analogy, if you're a police detective, and you want to put robbers in jail, the heart of your business is not about different kinds of jails. It is about finding the robbers.
I rely on a purely manual method of profiling. This is an example of finding a series of points to optimize, resulting in a speedup factor of 43.
If you do this on an existing application, you are likely to discover that the main cause of slow performance is overblown data structure design, resulting in an excess of notification-style consistency maintenance, characterized by an excessively bushy call tree. You need to find the calls in the call tree that cost a lot and that you can prune.
Having done that, you may realize that a way of designing software that uses the bare minimum of data structure and abstractions will run faster to begin with.
If you've profiled your code, and found it to be lacking swiftness, then there are some micro-optimizations you can sometimes use. Here's a short list.
Micro-optimize judiciously - it's like the mirror from Harry Potter: if you're not careful, you'll spend all your time there and get nothing else done, without getting much in return.
The StringBuilder and exception throwing examples are good ones - those are mistakes I used to make which sometimes added seconds to a function execution. When profiling, I find I personally use up a lot of cycles simply finding things. In that case, I cache frequently accessed objects using a hashtable (or a dictionary).
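A minimal sketch of that kind of lookup cache, with the key and value types and the loading function chosen arbitrarily for illustration:

using System.Collections.Generic;

class CustomerCache
{
    private readonly Dictionary<int, Customer> cache = new Dictionary<int, Customer>();

    public Customer Get(int id)
    {
        Customer customer;
        // Pay the expensive lookup only once per key; later requests hit the dictionary.
        if (!cache.TryGetValue(id, out customer))
        {
            customer = LoadFromDatabase(id);   // hypothetical slow call being cached
            cache[id] = customer;
        }
        return customer;
    }

    private Customer LoadFromDatabase(int id)
    {
        return new Customer();                 // stand-in for the real expensive work
    }
}

class Customer { }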
Good program architecture gives you much better optimization than an optimized function.
The biggest optimization is to avoid all if/else branching in runtime code; put it all at initialization time instead.
Overall, optimization is a bad idea, because the most valuable thing is a readable program, not a fast program.
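One way to read that advice is to resolve the decision once at start-up into a delegate, so the hot path carries no branches; a sketch, with the condition and handler names invented for illustration:

using System;

class MessageProcessor
{
    private readonly Action<string> handle;    // chosen once, then called without branching

    public MessageProcessor(bool useCompression)
    {
        // The if/else lives here, at initialization time, not in the per-message path.
        if (useCompression)
            handle = HandleCompressed;
        else
            handle = HandlePlain;
    }

    public void Process(string message)
    {
        handle(message);                       // hot path: a single delegate call
    }

    private void HandleCompressed(string message) { /* ... */ }
    private void HandlePlain(string message) { /* ... */ }
}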
http://www.techgalaxy.net/Docs/Dev/5ways.htm has some very good points... just came across it today.

Which platform should I use : native C++ or C#? [closed]

I want to develop a Windows application. If I use native C++ and MFC for the user interface, the application will be very fast and tiny, but using MFC is very complicated. If I use C#, the application will be slower than native code and it requires the .NET Framework to run, but developing a GUI is very easy using WinForms. Which one do you prefer?
"fast" and "slow" are subjective, especially with today's PC's. I'm not saying deliberately make the thing slow, but there isn't nearly as much overhead in writing a managed application as you might think. The JIT etc work very well to make the code execute very fast. And you can also NGEN for extra start-up speed if you really need.
Actually, if you have time to learn it, you might want to consider WPF rather than winform - this is a different skill-set, but allows you to make very good use of graphics hardware etc.
Also - .NET framework comes with new OS installs, and is still very common on those that pre-date it. So for me it would be a fairly clear choice to develop with C#/.NET. The time to develop a robust and fully tested C++ app (with no leaks, etc) is (for me at least) much greater than the same with C#.
Be careful not to optimize too early; you mention the speed of the code but for most Windows user interface operations, the difference is not noticeable as the main bottlenecks of drawing and disk access are no different for either approach.
My recommendation is that you use C# and WPF or WinForms for your user interface. If you encounter some slow downs, use a profiler to determine where and then consider replacing some business logic with native code, but only if there is a benefit.
There are a great many possible Windows applications, and each has its own requirements.
If your application needs to be fast (and what I work on does), then native C++ is a good way to go.
If your application needs to be small (perhaps for really fast transmission over slow lines), then use whatever gets it small.
If your application is likely to be downloaded a lot, you probably want to be leery of later versions of .NET, which your users might not have yet.
If, like most, it will be fast and small enough anyway on the systems it's likely to be used on, use what will allow you to develop fastest and best.
In almost all cases, the right thing to optimize is developer effort. Use whatever gets a high-quality job done fastest and best.
First, though I'm a die-hard C++ coder, I have to admit C# is in most cases perfectly fine where speed and size are concerned. In some cases the application is even smaller, because the runtime is already on the target system. (Don't spam me on that one: an app plus a DLL is smaller than an app with everything in one, and Windows just happens to ship with the "DLL" already there.)
As for coding, I honestly don't think there is a significant difference. I don't spend a lot of my time typing code; most of it is thinking out a problem. The code part is quite small, so saving a few lines here and there is not an argument for me - if it were, I'd be working in APL. Learning the STL, MFC and what have you is likely just as intensive as learning the C# libraries. In the end they're all the same.
C# does have one thing going for it: a market. It's the latest "hot" skill, and so there's a market for it. Jobs are easy to find. Keep in mind, though, that Java was a "hot" skill a few years back, and now every Tom, Dick and Harry has it on their resume, which makes it harder to carve out a niche.
OK, all that said, I LOVE C++. There's nothing like getting my hands dirty when I really need to. When the MFC libs don't do the job, I take a look at what they're sitting on, and so on. It's a perennial language and I believe it's still at or near the top of the most-used languages in the world. Yay, C++!
Note also that most Windows computer already have .NET installed on them, so that really shouldn't be a concern.
Also, aside from the .NET installation, .NET applications tend to be quite small.
And for most application with a UI, the speed of the User is the really limiting time factor.
C# applications are slower to start than MFC applications, but you might not notice a speed difference between the two once the application is loaded.
Having no information on the application you plan to develop, I vote for WPF.
In my opinion, the requirements should help you decide the platform. What is more important: Having an application that is easily maintainable or one that must be extremely fast and small ?
A large class of applications nowadays can be written using .NET and managed code and this is in general beneficial to the development in the long term. From my experience, .NET applications are usually fast enough for most use cases and they are simpler to create.
Native C++ still has its uses, but being "faster and smaller" does not sound like enough of a justification when "fast enough and small enough" is sufficient.
The speed argument between native and managed code is largely a non-issue at this point. Each release of the .NET Framework makes performance improvements over the previous ones and application performance is always a very high priority for the .NET development teams.
Starting with Windows Vista and Windows Server 2008, the .NET Framework is installed as part of the operating system. It is also part of Windows Update so almost any Windows XP system will also have it installed. If the requirement that the framework be installed on the target machine is really that much of a problem there are also compilers that will essentially embed the required runtime portions into your application to generate a single exe, but they are expensive (and in my opinion, not really worth the cost).
MFC isn't hard to learn, actually it is very easy.
Almost equal to C#.
The choice of a language or tool should be dictated by the functional and performance requirements of your project and by your expertise. If performance is a real consideration for you and you have done some analysis that favours C++ over C#, then you already have your decision. Note, though, that an MFC-based application is not terribly efficient either. On the other hand, the overheads in .NET applications are overstated.
Performance is really a function of how well you write your code and what scalability requirements exist. If you only ever have to work with one client and a maximum of 1K database records, then we should not be talking about performance.
If ease of development and maintainability is more important, certainly C# would be the choice.
So I am not sure this is a question that can be given an answer as choice A or B with the data you have provided. You need to do the analysis of functional and non-functional requirements and decide.
Also if I use C# then the application will be slower than the native code and it requires the .NET Framework to run
An MFC app requires the MFC DLLs to run (and probably the VC runtime as well!), so they might need to be installed; or, if they are statically linked, you add to the size of the exe.
.NET is easier to work with. Unless you'll lose users by using it or will have trouble with code migration, you should probably use .NET. It is highly unlikely that speed will be an issue for this. Size probably doesn't matter that much, either.
Which technology are you more familiar with?
The information you gave does not include anything that would help decide. Yes, MFC apps tend to be smaller (if you include the runtime size, which isn't a suitable measure in the long run), more responsive and more costly to develop. So what?
