I wrote part of a program that does some heavy work with strings in C#. I initially chose C# not only because it was easier to use .NET's data structures, but also because I need to use this program to analyse some 2-3 million text records in a database, and it is much easier to connect to databases using C#.
One part of the program was slowing down the whole code, so I rewrote it in C, using pointers to access every character in the string. The part that took some 119 seconds to analyse 10,000,000 strings in C# takes the C code only 5 seconds! Performance is a priority, so I am considering rewriting the whole program in C, compiling it into a DLL (something I didn't know how to do when I started writing the program), and using DllImport from C# to call its methods on the database strings.
Given that rewriting the whole program will take some time, and since using DllImport to work with C#'s strings requires marshalling and such things, my question is will the performance gains from the C dll's faster string handling outweigh the performance hit of having to repeatedly marshal strings to access the C dll from C#?
First, profile your code. You might find some real headsmacker that speeds the C# code up greatly.
Second, writing the code in C using pointers is not really a fair comparison. If you are going to use pointers, why not write it in assembly language and get real performance? (Not really, just reductio ad absurdum.) A better comparison for native code would be to use std::string. That way you still get a lot of help from the string class and C++ exception safety.
Given that you have to read 2-3 million records from the DB to do this work, I very much doubt that the time spent cracking the strings is going to outweigh the elapsed time taken to load the data from the DB. So, consider instead how to structure your code so that you can begin string processing while the DB load is in progress.
If you use a SqlDataReader (say) to load the rows sequentially, it should be possible to batch up N rows as fast as possible and hand them off to a separate thread for the post-processing that is your current headache and reason for this question. If you are on .NET 4.0, this is simplest to do with the Task Parallel Library, and System.Collections.Concurrent could also be useful for collating results between the threads.
This approach should mean that neither the DB latency nor the string processing is a show-stopping bottleneck, because they happen in parallel. This applies even on a single-processor machine, because your app can process strings while it's waiting for the next batch of data to come back from the DB over the network. If you find string processing is the slowest part, use more threads (i.e. Tasks) for that. If the DB is the bottleneck, then you have to look at external means to improve its performance - DB hardware or schema, network infrastructure. If you need some results in hand before processing more data, the TPL allows dependencies to be created between Tasks and the coordinating thread.
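To make that concrete, here is a minimal sketch of the shape I mean, on .NET 4.0. The table name, column name, batch size, and connection string are placeholders for illustration, not taken from your code:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    class BatchedLoad
    {
        const int BatchSize = 10000;

        static void Main()
        {
            // Bounded so the reader can't race miles ahead of the worker.
            var queue = new BlockingCollection<List<string>>(boundedCapacity: 4);

            // Worker: processes batches while the DB read is still in progress.
            Task worker = Task.Factory.StartNew(() =>
            {
                foreach (var batch in queue.GetConsumingEnumerable())
                    foreach (var s in batch)
                        Process(s); // your current string-crunching code
            });

            // Producer: stream rows off the reader and hand off in batches.
            using (var conn = new SqlConnection("<connection string>"))
            using (var cmd = new SqlCommand("SELECT TextColumn FROM Records", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    var batch = new List<string>(BatchSize);
                    while (reader.Read())
                    {
                        batch.Add(reader.GetString(0));
                        if (batch.Count == BatchSize)
                        {
                            queue.Add(batch);
                            batch = new List<string>(BatchSize);
                        }
                    }
                    if (batch.Count > 0) queue.Add(batch);
                }
            }

            queue.CompleteAdding(); // let the worker drain the queue and finish
            worker.Wait();
        }

        static void Process(string s) { /* ... */ }
    }

If string processing turns out to be the slower side, start more than one worker Task on the same queue.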
My point is that I doubt it's worth the pain of re-engineering the entire app in native C or whatever. There are lots of ways to skin this cat.
One option is to rewrite the C code as unsafe C#, which ought to have roughly the same performance and won't incur any interop penalties.
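For illustration, a character-scanning loop of the kind described in the question might look like this in unsafe C# - a sketch only (CountChar is a made-up example), and it needs the /unsafe compiler switch:

    // Pointer access to string characters, much as the C version would do.
    static unsafe int CountChar(string s, char target)
    {
        int count = 0;
        fixed (char* pStart = s) // pins the string; no copy is made
        {
            char* p = pStart;
            char* pEnd = pStart + s.Length;
            while (p < pEnd)
            {
                if (*p == target) count++;
                p++;
            }
        }
        return count;
    }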
There's no reason to write in C rather than C++, and "C/C++" does not exist - they are two different languages.
The performance implications of marshalling are fairly simple. If you have to marshal every string individually, then your performance is gonna suck. If you can marshal all ten million strings in one call, then marshalling isn't gonna make any difference at all. P/Invoke is not the fastest operation in the world but if you only invoke it a few times, it's not really gonna matter.
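As a sketch of the difference (the DLL name and entry points here are hypothetical, just to show the two shapes):

    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Bad: one P/Invoke call (and one string marshal) per record.
        [DllImport("stringcruncher.dll", CharSet = CharSet.Unicode)]
        public static extern int ProcessOne(string s);

        // Better: marshal the whole batch across in a single call,
        // so the interop cost is amortised over millions of strings.
        [DllImport("stringcruncher.dll", CharSet = CharSet.Unicode)]
        public static extern int ProcessMany(
            [In, MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.LPWStr)]
            string[] strings,
            int count);
    }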
It might be easier to re-write your core application in C++ and then use C++/CLI to merge it with the C# database end.
There are some pretty good answers here already, especially @Steve Townsend's.
However, I felt it worth underlining a key point: there is intrinsically no reason why C code "will be faster" than C# code. That idea is a myth. Under the bonnet, they both produce machine code that runs on the same CPU. As long as you don't ask the C# to do more work than the C, it can perform just as well.
By switching to C, you forced yourself to be more frugal (you avoided high-level features like managed strings, bounds-checking, garbage collection, exception handling, etc., and simply treated your strings as blocks of raw bytes). If you applied the same low-level techniques to your C# code (i.e. treating your data as raw blocks of bytes, as you did in C), you would find much less difference in speed.
For example: Last week I re-wrote (in C#) a class that a junior had written (also in C#). I achieved a 25x speed improvement over the original code by applying the same approach I would use if I were writing it in C (i.e. thinking about performance). I achieved the same speedup you're claiming without having to change to a different language at all.
Finally, just because an isolated case can be made 24x faster, it does not mean you can make your whole program 24x faster across the board by porting it all to C. As Steve said, profile it to work out where it's slow, and expend your effort only where it'll provide significant benefits. If you blindly convert to C you'll probably find you've spent a lot of time making some already-working-code a lot less maintainable.
(P.S. My viewpoint comes from 29 years experience writing assembler, C, C++, and C# code, and understanding that the language is just a tool for generating machine-code - in the case of C# vs C++ vs C, it is primarily the programmer's skill, not the language used, that determines whether the code will run quickly or slowly. C/C++ programmers tend to be better than C# programmers because they have to be - C# allows you to be lazy and get the code written quickly, while C/C++ make you do more work and the code takes longer to write. But a good programmer can get great performance out of C#, and a poor programmer can wrest abysmal performance out of C/C++)
With strings being immutable in .NET, I have no doubt that an optimised C implementation will outperform an optimised C# implementation - no doubt!
P/Invoke does incur an overhead, but if you implement the bulk of the logic in C and expose only a coarse-grained API to C# (a few calls that each do a lot of work), I believe you are in much better shape.
At the end of the day, writing an implementation in C means taking longer - but it will give you better performance if you are prepared to pay the extra development cost.
Make yourself familiar with mixed assemblies - this is better than interop. Interop is a fast-track way to deal with native libs, but mixed assemblies perform better.
Mixed assemblies on MSDN
As usual the main thing is testing and measuring...
For concatenation of long strings, or of multiple strings, always use StringBuilder. What not everybody knows is that StringBuilder can be used not only to make concatenation faster, but also for insertion, removal, and replacement of characters.
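For example, a quick sketch of those operations:

    using System.Text;

    var sb = new StringBuilder("Hello world");
    sb.Append("!");            // concatenation: "Hello world!"
    sb.Insert(5, ",");         // insertion:     "Hello, world!"
    sb.Replace("world", "C#"); // replacement:   "Hello, C#!"
    sb.Remove(5, 1);           // removal:       "Hello C#!"
    string result = sb.ToString();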
If this isn't fast enough for you, you can use char or byte arrays instead of strings and operate on those. When you are done manipulating, you can convert the array back to a string.
There is also the option in C# to use unsafe code to get a pointer to a string and modify the otherwise immutable string, but I wouldn't really recommend this.
As others have already said, you can use managed C++ (C++/CLI) to interoperate nicely between .NET and native code.
Would you mind showing us the code, maybe there are other options for optimizing?
When you start to optimize a program at a late stage (the application was written without optimization in mind) then you have to identify the bottlenecks.
Profiling is the first step to see where all those CPU cycles are going.
Just keep in mind that C# profilers will only profile your .NET application - not the IIS server implemented in the kernel, nor the network stack.
And those can be an invisible bottleneck that dwarfs, by several orders of magnitude, whatever you are focusing on when trying to make progress.
You may think you have no influence over IIS implemented as a kernel driver - and you are right.
But you can do without it - and save a lot of time and money.
Put your talent where it can make a difference - not where you are forced to run with your feet tied together.
The inherent differences are usually given as 2x less CPU, 5x less memory. In practice, few people are good enough at C or C++ to gain those benefits.
There's additional gain for skimping on Unicode support, but only you can know your application well enough to know if that's safe.
Use the profiler first, and make sure you are not I/O bound.
I program in Java, doing a lot of web-related stuff, but I've been toying with the idea of creating a very simple DAW in some language. I considered C#, but it doesn't seem to support DirectX anymore (though there are some libraries that work with differing degrees of success). I was curious whether anyone out there had an opinion on playing a lot of multi-channel sounds through Java. I would also at some point need to hack in some VST support (which would probably not be trivial). I'm really afraid that my only option will be C++, and that would be unpleasant enough to make me not actually work on it (I know some C++, but not really enough to write something this intense).
Anyone have some ideas? Thanks
VST support in Java may be reasonably easy after all; I've heard of positive experiences with http://github.com/mhroth/jvsthost (that is to say, someone I had a conversation with on a forum seemed to be up and running with it pretty quickly, running a number of different synths successfully).
An aside: Personally, I'm developing some software in Java that uses SuperCollider as an audio backend (disclaimer: my actual experience of Java sound is limited). While it would probably be just about possible to build a DAW around SuperCollider, I wouldn't really recommend it as the tool for that job. However, I also don't quite understand why you want to build a DAW in the first place... should you decide you want to explore alternative means of making music with computers, you might give SC a look (also ChucK I found very easy to get started with and quite a lot of fun) :-)
Anyway, back to the question... while I tend to refer to Java specifically, much of this will go for C# as well:
Traditionally, garbage collection has been a source of concern doing anything where time is of the essence in Java; in a DAW, for example, this may manifest itself as inaccurate timing or clicks in the output where the GC interrupts the program long enough that it is not able to process a complete buffer. This will be particularly true if you want to use small buffers for low latency, and/or are not careful about the amount of garbage generated. However, I don't want to spread FUD about Java sound: as I mentioned, I haven't really used it heavily myself, and in any case I believe these issues are improving. It is certainly an issue you will need to be aware of, but probably not a show-stopper.
I imagine that a big bottleneck in any DAW will be file IO, which shouldn't suffer through Java as long as proper care is taken.
If you start doing intense DSP on many channels simultaneously, then it may be that Java computation performance isn't totally optimal (although probably not bad, really); however, if you mostly do basic mixing in your DAW code and any DSP with VSTs, then this should be a non-issue anyway.
In terms of actual audio IO, I see that there are also ASIO implementations for Java, should you be interested. I don't even have indirect experience of those, so I really won't vouch for them. Java 1.7 is supposed to have improved low-latency audio support, FWIW (although from what I've read, the applications they have in mind are not things like DAWs). DirectX support I don't think should be a major factor for a DAW. In that sense, you might not want to dismiss C#, as it is a very nice language.
There are already some DAWs that are using the Java platform (Frinika or javaDAW, for example), so I think it's a reasonable option.
I'm working on something similar, so I would have to say it is possible. My laptop was stolen and I've had to start over, but I've rebuilt most of it. So far the track threads have been lining up pretty well, but I'm considering implementing something like LWJGL's timer for better precision. Tritonus is a very helpful library, and you can find it, along with some very helpful examples, at jsresources.org. I've learned a lot there. If you send me an e-mail, I'd be happy to share my code with you.
I would like to build my own signal-processing library, and possibly another one for graph algorithms. I find C# very useful and robust with regard to the possible bugs associated with memory allocation, pointers, threading, etc.
But I was wondering how much am I going to lose in terms of performance. Is it going to be something acceptable?
Thanks
When I started my DSIP course I was a pure C# developer. After looking around for a while, I ended up using C++ libraries and learning C++, which in the end was to my benefit, since I was doing real-time image processing and there is no way C# can match that performance.
In fact, you can run a quick test: evaluate a mathematical expression consisting of a few multiplications a million times in both C# and C++, and see the difference in floating-point calculation for yourself.
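If you try this, measure a release build and warm the code up first, so you time the maths and not the JIT. A rough sketch of the C# side (the constants and expression are arbitrary, just to give the compiler something to chew on):

    using System;
    using System.Diagnostics;

    class FloatBench
    {
        static void Main()
        {
            const int N = 1000000;
            double a = 1.0001, b = 0.9997, acc = 0.0;

            // Warm-up so JIT compilation doesn't count against the timing.
            for (int i = 0; i < 1000; i++) acc += a * b;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++)
                acc += a * b * 0.5;   // a few floating-point multiplications
            sw.Stop();

            Console.WriteLine("{0} iterations in {1} ms (result {2})",
                              N, sw.ElapsedMilliseconds, acc);
        }
    }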
If you are lucky, you will find C# wrappers, which give you the best of both worlds: fast calculation in C++ and ease of use from C#. There is a C# wrapper for OpenCV and it seems to be quite good (http://www.emgu.com/wiki/index.php/Main_Page).
I highly recommend OpenCV especially for 2D signal processing. It is great in performance and made my project possible. Have a look at the demo here: http://www.youtube.com/watch?v=NeQvcdRPxoI and http://www.youtube.com/watch?v=NqYPJELHkhA
With the right hardware, you shouldn't have an issue.
Something to think about though: I have no trouble whatsoever trying to find a library to do signal processing in C++. However, I can't easily find much information at all dealing with signal processing for C#.
If you can find a way to make it work, you might have just found a way to get into a niche area of development that many other people can benefit from.
Just something to think about. Unless you are doing this on some mission critical system it probably doesn't matter either way. I'd just go with whatever you think will benefit you more in the long term.
It's not so much the performance loss in C# as the unpredictability of when garbage collection is performed. During a GC, all managed threads are frozen until the GC is completed. During that time you cannot do any processing, which for 99% of applications won't matter, but for signal processing is critical. This is, for example, why you don't see a Microsoft-sanctioned managed version of DirectShow or other "real time" signal-processing applications.
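Since .NET 3.5 you can at least ask the runtime to avoid blocking full collections during a critical section - a mitigation, not a cure, since pauses are reduced rather than eliminated. A sketch (ProcessAudioBuffers is a hypothetical stand-in for the time-critical work):

    using System.Runtime;

    static void ProcessWithLowLatencyGC()
    {
        GCLatencyMode old = GCSettings.LatencyMode;
        try
        {
            // Ask the GC to avoid blocking full collections for a while.
            GCSettings.LatencyMode = GCLatencyMode.LowLatency;
            ProcessAudioBuffers(); // hypothetical time-critical section
        }
        finally
        {
            GCSettings.LatencyMode = old;
        }
    }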
I'm writing an image processing program to perform real time processing of video frames. It's in C# using the Emgu.CV library (C#) that wraps the OpenCV library dll (unmanaged C++). Now I have to write my own special algorithm and it needs to be as fast as possible.
Which will be a faster implementation of the algorithm?
Writing an 'unsafe' function in C#
Adding the function to the OpenCV library and calling it through Emgu.CV
I'm guessing C# unsafe is slower because it goes through the JIT compiler, but would the difference be significant?
Edit:
Compiled for .NET 3.5 under VS2008
it needs to be as fast as possible
Then you're asking the wrong question.
Code it in assembler, with different versions for each significant architecture variant you support.
Use as a guide the output from a good C++ compiler with optimisation, because it probably knows some tricks that you don't. But you'll probably be able to think of some improvements, because C++ doesn't necessarily convey to the compiler all information that might be useful for optimisation. For example, C++ doesn't have the C99 keyword restrict. Although in that particular case many C++ compilers (including MSVC) do now support it, so use it where possible.
Of course if you mean, "I want it to be fast, but not to the extent of going outside C# or C++", then the answer's different ;-)
I would expect C# to at least approach the performance of similar-looking C++ in a lot of cases. I assume of course that the program will be running long enough that the time the JIT itself takes is irrelevant, but if you're processing much video then that seems likely. But I'd also expect there to be certain things which if you do them in unsafe C#, will be far slower than the equivalent thing in C++. I don't know what they are, because all my experience of JITs is in Java rather than CLR. There might also be things which are slower in C++, for instance if your algorithm makes any calls back into C# code.
Unfortunately, the only way to be sure how close it is, is to write both and test them, which rather misses the point that writing the C++ version is a bunch of extra effort. However, you might be able to get a rough idea by hacking up some quick code which approximates the processing you want to do, without necessarily doing all of it or getting it right. If your algorithm is going to loop over all the pixels and do a few FP ops per pixel, then hacking together a rough benchmark should take all of half an hour.
Usually I would advise against starting out thinking "this needs to be as fast as possible". Requirements should be achievable, and by definition "as X as possible" is only borderline achievable. Requirements should also be testable, and "as X as possible" isn't testable unless you somehow know a theoretical maximum. A more friendly requirement is "this needs to process video frames of such-and-such resolution in real time on such-and-such a speed CPU", or "this needs to be faster than our main competitor's product". If the C# version does that, with a bit to spare to account for unexpected minor issues in the user's setup, then job done.
It depends on the algorithm, the implementation, the C++ compiler and the JIT compiler. I guess in most cases the C++ implementation will be faster. But this may change.
The JIT compiler can optimize your code for the platform your code is running on instead of an average for all the platforms your code might run on as the C++ compiler does. This is something newer versions of the JIT compiler are increasingly good at and may in some cases give JITted code an advantage. So the answer is not as clear as you might expect. The new Java hotspot compiler does this very well for example.
Another situation where managed code may do better than C++ is where you need to allocate and deallocate lots of small objects. The .NET runtime preallocates large chunks of memory that can be reused, so it doesn't need to call into the OS every time you need to allocate memory.
I'm not sure unsafe C# runs much faster than normal C#. You'll have to try this too.
If you want to know what the best solution is for your situation, you'll have to try both and measure the difference. I don't think there will be more than
Languages don't have a "speed". It depends on the compiler and the code. It's possible to write inefficient code in any language, and a clever compiler will generate near-optimal code no matter the language of the source.
The only really unavoidable factor in performance between C# and C++ is that C# apps have to do more at startup (load the .NET framework and perhaps JIT some code), so all things being equal, they will launch a bit slower. After that, it depends, and there's no fundamental reason why one language must always be faster than another.
I'm also not aware of any reasons why unsafe C# should be faster than safe. In general, safe is good because it allows the compiler to make some much stronger assumptions, and so safe might be faster. But again, it depends on the code you're compiling, the compiler you're using and a dozen other factors.
In short, give up on the idea that you can measure the performance of a language. You can't. A language is never "fast" or "slow". It doesn't have a speed.
C# is typically slower than C++. There are runtime checks in managed code. These are what make it managed, after all. C++ doesn't have to check whether the bounds of an array have been exceeded for example.
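That said, the C# JIT can often remove the bounds check itself in the canonical loop shape, where the index is provably within the array - a sketch:

    static long Sum(int[] data)
    {
        long total = 0;

        // Looping directly against data.Length lets the JIT prove the index
        // is in range and elide the per-access bounds check in this pattern.
        for (int i = 0; i < data.Length; i++)
            total += data[i];

        return total;
    }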
From my experience, using fixed memory helps a lot. There is a new System.IO.UnmanagedMemoryAccessor class in .NET 4.0 which may help in the future.
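A minimal sketch of the fixed-memory approach (the buffer and the operation are just for illustration; compile with /unsafe):

    // Pins the array so the GC can't move it while we do pointer arithmetic.
    static unsafe void Invert(byte[] pixels)
    {
        fixed (byte* pStart = pixels)
        {
            byte* p = pStart;
            byte* pEnd = pStart + pixels.Length;
            while (p < pEnd)
            {
                *p = (byte)(255 - *p);
                p++;
            }
        }
    }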
If you are going to implement your algorithm in a standard way, I think it's irrelevant.
But some languages have bindings to APIs or libraries that can give you a non-standard boost.
Consider whether you can use GPU processing - NVIDIA and ATI provide the CUDA and CTM frameworks, and there is an ongoing standardization effort from the Khronos Group (of OpenGL fame). A hunch tells me also that AMD will add at least one streaming-processor core to their future chips. So I think there is quite a lot of promise in that area.
Try to see if you can exploit SSE instructions; there are libraries around - most in C++ or C - that provide handy APIs. Check Intel's site for optimized libraries; I recall the "Intel Performance Primitives" and a "Math Kernel" library.
But on the politics side, do incorporate your algorithm in OpenCV so others may benefit too.
It's a battle that will rage on forever. C versus C++ versus C# versus whatever.
In C#, the notion of unsafe is to unlock "dangerous" operations - i.e., the use of pointers, being able to cast to void pointers, etc., as you can in C and C++.
Very dangerous, and very powerful! But defeating what C# was based upon.
You'll find that nowadays Microsoft has made strides in the direction of performance, especially since the release of .NET, and the next version of .NET will actually support inline methods, as you can in C++. This will increase performance in very specific situations. I hate that it's not going to be a C# language feature, but a nasty attribute the compiler picks up on - but you can't have it all.
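Presumably this refers to what eventually shipped in .NET 4.5 as MethodImplOptions.AggressiveInlining - and it is indeed an attribute, and only a hint (the method and body here are made up for illustration):

    using System.Runtime.CompilerServices;

    // A hint, not a guarantee: the JIT may still decline to inline.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int Lerp(int a, int b, int t)
    {
        return a + ((b - a) * t >> 8); // small hot helper worth inlining
    }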
Personally, I'm writing a game with C# and managed DirectX (why not XNA?? beyond the scope of this post). I'm using unsafe code in graphical situations, which brings about a nod in the direction of what others have said.
It's only because pixel access is ridiculously slow with GDI+ that I was driven to look for alternatives. But on the whole, the C# compiler is pretty damned good, and for code comparisons (you can find articles) you'll find the performance is very comparable to C++.
That's not to say there isn't a better way to write the code.
At the end of the day, I personally see C, C++, and C# as about the same speed when executing. It's just that in some painful situations where you want to work really closely with the underlying hardware or very close to those pixels, that you'll find noticeable advantage to the C/C++ crowd.
But for business, and most things nowadays, C# is a real contender, and staying within the "safe" environment is definitely a bonus.
When stepping outside, you can get most things done with unsafe code, as I have - and boy, have I gone to some extremes! But was it worth it? Probably not. I personally wonder if I should have thought more along the lines of time-critical code in C++, and all the Object Oriented safe stuff in C#. But I have better performance than I thought I'd get!
So long as you're careful with the amount of interop calls you're making, you can get the best of both worlds. I've personally avoided that, but I don't know to what cost.
So here's an approach I've not tried, but would love to hear adventures in: actually using C++.NET to develop a library. Would that be any faster than C#'s unsafe for these special graphical situations? How would it compare to natively compiled C++? Now there's a question!
Hmm..
If you know your environment and you use a good compiler (for video processing on windows, Intel C++ Compiler is probably the best choice), C++ will beat C# hands-down for several reasons:
The C++ runtime environment has no intrinsic runtime checks (the downside being that you have free rein to blow yourself up). The C# runtime environment is going to have some sanity checking going on, at least initially.
C++ compilers are built for optimizing code. While it's theoretically possible to implement a C# JIT compiler using all of the optimizing voodoo that ICC (or GCC) uses, it's doubtful that Microsoft's JIT will reliably do better. Even if the JIT compiler has runtime statistics, that's still not as good as profile-guided optimization in either ICC or GCC.
A C++ environment allows you to control your memory model much better. If your application gets to the point of thrashing the data cache or fragmenting the heap, you'll really appreciate the extra control over allocation. Heck, if you can avoid dynamic allocations, you're already much better off (hint: the running time of malloc() or any other dynamic allocator is nondeterministic, and almost all non-native languages force heavier heap usage, and thus heavier allocation).
If you use a poor compiler, or if you can't target a good chipset, all bets are off.
To be honest, what language you write it in is not nearly as important as what algorithms you use (IMO, anyway). Maybe by going to native code you might make your application faster, but it might also make it slower--it'd depend on the compiler, how the programs are written, what sort of interop costs you'd incur if you're using a mixed environment, etc. You can't really say without profiling it. (and, for that matter, have you profiled your application? Do you actually know where it's spending time?)
A better algorithm is completely independent of the language you choose.
I'm a little late in responding but I can give you some anecdotal experience. We had some matrix multiplication routines that were originally coded in C# using pointers and unsafe code. This proved to be a bottleneck in our application and we then used pinning+P/Invoke to call into a C++ version of the Matrix multiplication routine and got a factor of 2 improvement. This was a while ago with .NET 1.1, so things might be better now. As others point out, this proves nothing, but it was an interesting exercise.
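For reference, the pinning-plus-P/Invoke shape looks something like this - the DLL name and signature are illustrative, not our actual code:

    using System.Runtime.InteropServices;

    static class NativeMath
    {
        // Hypothetical native routine. double[] is blittable, so the
        // marshaller pins the arrays and passes direct pointers (no copy)
        // for the duration of the call.
        [DllImport("nativemath.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern void MatMul(double[] a, double[] b, double[] result, int n);

        public static double[] Multiply(double[] a, double[] b, int n)
        {
            var result = new double[n * n];
            MatMul(a, b, result, n); // one interop call per multiplication
            return result;
        }
    }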
I also agree with thAAAnos: if your algorithm really has to be "as fast as possible", leverage IPL or, if you must, consider a GPU implementation.
Running on the CPU is always going to be faster than running on a VM on the CPU. I can't believe people are trying to argue otherwise.
For example, we have some fairly heavy image processing work on our web server that's queued up. Initially to get it working, we used PHP's GD functions.
They were slow as hell. We rewrote the functionality we needed in C++.
I want to develop a Windows application. If I use native C++ and MFC for the user interface, the application will be very fast and tiny, but MFC is very complicated. If I use C#, the application will be slower than native code and it requires the .NET Framework to run, but developing the GUI is very easy using WinForms. Which one do you prefer?
"fast" and "slow" are subjective, especially with today's PC's. I'm not saying deliberately make the thing slow, but there isn't nearly as much overhead in writing a managed application as you might think. The JIT etc work very well to make the code execute very fast. And you can also NGEN for extra start-up speed if you really need.
Actually, if you have time to learn it, you might want to consider WPF rather than WinForms - this is a different skill-set, but it allows you to make very good use of the graphics hardware etc.
Also - .NET framework comes with new OS installs, and is still very common on those that pre-date it. So for me it would be a fairly clear choice to develop with C#/.NET. The time to develop a robust and fully tested C++ app (with no leaks, etc) is (for me at least) much greater than the same with C#.
Be careful not to optimize too early; you mention the speed of the code but for most Windows user interface operations, the difference is not noticeable as the main bottlenecks of drawing and disk access are no different for either approach.
My recommendation is that you use C# and WPF or WinForms for your user interface. If you encounter some slow downs, use a profiler to determine where and then consider replacing some business logic with native code, but only if there is a benefit.
There are a great many possible Windows applications, and each has its own requirements.
If your application needs to be fast (and what I work on does), then native C++ is a good way to go.
If your application needs to be small (perhaps for really fast transmission over slow lines), then use whatever gets it small.
If your application is likely to be downloaded a lot, you probably want to be leery of later versions of .NET, which your users might not have yet.
If, like most, it will be fast and small enough anyway on the systems it's likely to be used on, use what will allow you to develop fastest and best.
In almost all cases, the right thing to optimize is developer effort. Use whatever gets a high-quality job done fastest and best.
First... (though I'm a die-hard C++ coder) I have to admit C# is in most cases perfectly fine where speed and size are concerned. In some cases the application is smaller, because the interpreted part is already on the target system. (Don't spam me on that one: an app plus a DLL is smaller than the app all in one; Windows just happens to ship with the "DLL" already there.)
As to coding... I honestly don't think there is a significant difference. I don't spend a lot of my time typing code; most of it is thinking out a problem. The code part is quite small. Saving a few lines here and there... blahh, it's not an argument for me. If it were, I'd be working in APL. Learning the STL, MFC, and what have you is likely just as intensive as learning the C# libraries. In the end they're all the same.
C# does have one thing going for it: a market. It's the latest "hot" skill, and so there's a market for it. Jobs are easy to find. Now keep in mind Java was a "hot" skill a few years back, and now every Tom, Dick, and Harry has it on their resume. That makes it harder to niche yourself.
OK... all that said... I LOVE C++. There's nothing like getting dirty when I really need to. When the MFC libs don't do the job, I take a look at what they're sitting on, and so on and so on. It's a perennial language, and I believe it's still at or near the most-used language in the world. Yah C++, yah!
Note also that most Windows computers already have .NET installed on them, so that really shouldn't be a concern.
Also, aside from the .NET installation, .NET applications tend to be quite small.
And for most applications with a UI, the speed of the user is the real limiting factor.
C# applications are slower to start than MFC applications, but you might not notice a speed difference between the two once the application is loaded.
Having no information on the application you plan to develop, I vote for WPF.
In my opinion, the requirements should help you decide the platform. What is more important: Having an application that is easily maintainable or one that must be extremely fast and small ?
A large class of applications nowadays can be written using .NET and managed code and this is in general beneficial to the development in the long term. From my experience, .NET applications are usually fast enough for most use cases and they are simpler to create.
Native C++ still has its uses, but being "faster and smaller" does not sound like enough of a justification when "fast enough and small enough" is sufficient.
The speed argument between native and managed code is largely a non-issue at this point. Each release of the .NET Framework makes performance improvements over the previous ones and application performance is always a very high priority for the .NET development teams.
Starting with Windows Vista and Windows Server 2008, the .NET Framework is installed as part of the operating system. It is also part of Windows Update so almost any Windows XP system will also have it installed. If the requirement that the framework be installed on the target machine is really that much of a problem there are also compilers that will essentially embed the required runtime portions into your application to generate a single exe, but they are expensive (and in my opinion, not really worth the cost).
MFC isn't hard to learn; actually, it is very easy. Almost as easy as C#.
Choice of a language or tool should be dictated by the functional and performance requirements of your project and by your expertise. If performance is a real consideration for you and you have done some analysis to prefer C++ over C#, then you have your decision already. Note, though, that an MFC-based application is not terribly efficient either. On the other hand, the overheads of .NET applications are over-stated.
Performance is really a function of how well you write your code and what scalability requirements exist. If you will only ever have one client with a maximum of 1K database records, then we should not be talking about performance.
If ease of development and maintainability is more important, certainly C# would be the choice.
So I am not sure this is a question that can be answered as a choice of A or B with the data you have provided. You need to do the analysis of functional and non-functional requirements and decide.
Also If I use C# then the application will be slower than the native code and It reqiures .NET framework to run
An MFC app requires the MFC DLLs to run (and probably the VC runtime as well!), so they might need to be installed; or, if they are statically linked, they add to the size of the exe.
.NET is easier to work with. Unless you'll lose users by using it or will have trouble with code migration, you should probably use .NET. It is highly unlikely that speed will be an issue for this. Size probably doesn't matter that much, either.
Which technology are you more familiar with?
The information you gave does not include anything that would help decide. Yes, MFC apps tend to be smaller (if you include the runtime size, which isn't a suitable measure in the long run), more responsive, and more costly to develop. So what?