I am new to C# and have to maintain a C# Application.
Now I've found a method that has 32 Parameters (not auto-generated code).
From C/C++ I remember the rule of thumb of "4 parameters". It may be an old-fashioned rule rooted in old x86 compilers, where up to 4 parameters could be passed in registers (fast) while the rest went on the stack.
I am not concerned about performance, but I do have a feeling that 32 parameters per function are not easy to maintain even in C#.
Or am I completely not up to date?
What is the rule of thumb for C#?
Thank you for any hint!
There is no general consensus and it depends on who you ask.
In general - the moment readability suffers, there are too many...
Bob Martin says the ideal number of parameters is 0 and that 3 is stretching it.
32 parameters is a massive code smell. It means the class has way too many responsibilities and needs to be refactored. Even applying a parameter object refactoring sounds to me like it would hide a bad design rather than solve the issue.
From Clean Code Tip of the Week #10:
Functions should have a small number of arguments. No argument is best, followed by one, two, and three. More than three is very questionable and should be avoided with prejudice.
Hmmm 32 parameters is way too much.
There are as many rules as there are people, I guess. However, common sense dictates that anything more than 6 becomes unwieldy.
When you have so many parameters it's always better to pass an object as a single parameter and expose the values as properties; at the very least it is easier to read.
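For example, a rough sketch of that refactoring (all names here are made up for illustration):

// Before: void SaveCustomer(string name, string street, string city, ...32 parameters...)

// After: the values travel together as one object
public class CustomerData
{
    public string Name { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    // ... further fields as needed ...
}

public void SaveCustomer(CustomerData customer)
{
    // work with customer.Name, customer.Street, customer.City, ...
}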
C# doesn't limit maximum number of parameters, AFAIK.
But IL does: 0x1FFFFFFF.
Of course, this post isn't a guide to writing methods with a huge number of parameters.
I believe the common feeling in the developer community is about 5 or 6 parameters maximum. The times that I've seen methods like yours, it was someone doing something like "SaveCustomer" and passing every field instead of a customer object.
There is no silver bullet answer for this. Everything depends on you and your dev group.
Parameter counts can reach numbers like 32, and even though that suggests poor design, it is the kind of thing you may encounter during your career.
The general agreement on this is:
use as few as possible
use overloaded functions to split the parameter list across different functions, e.g.:
void A(int a, int b)
{
    A(a, b, 0); // the short overload forwards to the full overload, supplying a default for c
}
you can use the params keyword to pass an arbitrary number of values in an array, like object[] (see the sketch after this list)
you can use key-value stores to hold a lot of information and retrieve it later
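A minimal sketch of the params option (shown with int[] for simplicity; object[] works the same way):

// 'params' lets callers pass any number of arguments, collected into an array
public static int Sum(params int[] values)
{
    int total = 0;
    foreach (int value in values)
        total += value;
    return total;
}

// Usage: Sum(1, 2, 3) or Sum(someExistingArray)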
As a general guideline, a line of code should not be so long that it forces you to scroll horizontally in your editor. That is not strictly related to the question, but it may lead to some ideas on it.
Hope this helps.
You could take another approach and create an object that is passed in as a single parameter.
I have never heard of a rule of thumb for parameters, but common sense and practicality usually prevail.
While I suspect the question will get closed as argumentative, 32 is definitely too many.
One option is to look at the builder pattern, which will at least make the code more readable.
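A minimal sketch of the builder idea, with hypothetical names:

public class Report
{
    public string Title;
    public bool IncludeTotals;
}

public class ReportBuilder
{
    private readonly Report _report = new Report();

    public ReportBuilder WithTitle(string title)
    {
        _report.Title = title;
        return this; // returning 'this' enables chaining
    }

    public ReportBuilder WithTotals()
    {
        _report.IncludeTotals = true;
        return this;
    }

    public Report Build()
    {
        return _report;
    }
}

// Usage reads far better than a long positional parameter list:
// var report = new ReportBuilder().WithTitle("Q1").WithTotals().Build();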
I think nowadays the most important thing would be readability for humans as opposed to performance. I doubt that a similar performance behaviour exists in .NET anyway, but even if it did, code that is correct is infinitely more useful than code that performs slightly quicker but that does the wrong thing. By keeping it easy to understand, you increase the chances of the code being correct.
A handful of parameters - rarely beyond 5 in my experience - is best most of the time. You could consider refactoring code that requires more than this by providing the parameters in the form of properties on a class which the method is subsequently called on.
I think it is pleasant to have zero up to five parameters per method.
But this depends on varying things, like coding style and class design.
Look at the .NET Framework and you will often see one of these:
A class whose methods take few or no parameters, but which uses a few properties to control the behavior of the class (instead of 30 parameters).
A class with a huge set of methods that take few or no parameters, and with almost no properties, e.g. BinaryReader.
Keep your public API as simple as possible. Fewer parameters help other developers use your class without having to learn too much about how it works, and they make the code more legible.
Valentin, your feeling is right: 32 parameters can mean only one thing - something has gone totally wrong.
From my past experience in C++, I saw only one "parameters" leader:
it was the Win32 API's CreateWindow, with 11 parameters.
You should never, ever use such a huge number of parameters.
On the other hand, if you are interested in the theoretical side of the question (it might come up in an interview): how many parameters is a method allowed to have?
As mentioned above, a C# method can have no more than 0x1FFFFFFF parameters (an IL limitation).
You can use a params array to reach such a huge number.
And why exactly this limit?
Because if you take that count (0x1FFFFFFF = 536,870,911) and multiply it by the reference size (4 bytes), you get 2,147,483,644 bytes, just under 2 GB.
There is a 2 GB limit on every object in .NET; you are never allowed to create a single object that exceeds 2 GB.
As far as I know there isn't any hard and fast rule for how many parameters you should have. It depends entirely on what you are doing.
However for most applications, 32 parameters sounds like a bit too much. It might indicate a bad design. There might be a way to simplify things if you look closely.
Is there any special jargon word for a class that has no functions but is used to store data?
One example is the Data Transfer Object (DTO), although it, of course, can still have methods.
Plain old data structure (POD) seems to be an appropriate term. Though rarer than POJO/POCO, from what I've seen, it seems to be the best fit for your criteria.
There is no standard term for C# because this practice is pretty rare. I call such classes (or structs) "records", for no particularly good reason.
I see a lot of flamewars and a highly rated incorrect answer here. So I'll chime in with my not entirely correct but close enough answer.
A JavaBean is a special data encapsulation object in Java. In C# I'm not entirely sure of the name, but they do have structs (rather than classes), which I'm accustomed to using for similar types of tasks.
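For illustration, a minimal C# struct used as a plain data holder (hypothetical example):

// A small data-carrying struct: just values, no behavior
public struct Point
{
    public readonly int X;
    public readonly int Y;

    public Point(int x, int y)
    {
        X = x;
        Y = y;
    }
}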
Another term you may wish to use is Entity. Java has "persistence entities", which are effectively JavaBeans with an annotation. My advice would be to be consistent with whichever term you choose to use.
As mentioned this isn't a perfect answer but it should be close enough.
http://en.wikipedia.org/wiki/JavaBeans
http://docs.oracle.com/javaee/5/tutorial/doc/bnbqa.html
http://msdn.microsoft.com/en-us/library/vstudio/ah19swz4.aspx
http://msdn.microsoft.com/en-us/library/system.data.entity(v=vs.103).aspx
I mostly refer to them as container classes. Maybe that's the term you're looking for, because it doesn't sound very functional. But they often have getters/setters.
Utility class is also a nice term. Utility class which stores xyz data for use with bla.
Do you mean something like this?
public class Foo {
public int a;
public String b;
}
I don't think there's a specific term for a (public) class like that in Java. Except maybe "bad practice".
If your platform has a decent JIT compiler, there's no good reason to write code like that. At least make the fields private and provide getters and/or setters. A decent JIT compiler will optimize simple getters and setters so that there is no performance overhead.
The key point is that you should never let code like that appear in an API that is exposed outside of a single compilation unit. Why? It exposes the implementation details of the class and forces other code to depend on them.
If the class is a private inner class, the above code could be reasonable, though I'd be more comfortable if the fields were final and there were a constructor. Especially if the compilation unit is large.
Your question is tagged C# and Java, but I've found most people understand if you call them structs (from C).
Note that in C++, structs may have functions too, but I don't think this is idiomatic.
Even though .NET allows dynamic invocation (e.g. with reflection or the C# dynamic keyword), when using a language such as C# we sometimes feel it is necessary to use static typing, in order to prove that our program is correct and will not have typing issues at runtime.
Sometimes this results in us introducing interfaces or base classes that feel like they exist just to explain to the compiler: 'Yes, I know all the objects I pass to this context are going to understand an invocation of method X with argument Y - here, I will prove it to you using an interface definition!' (For example, .NET internally uses the IReadChunkBytes interface to allow passing either SteamReadChunkBytes or BufferReadChunkBytes objects to some method or other.)
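A minimal sketch of that pattern (the names here are hypothetical, not the actual internal .NET types):

public interface IChunkReader
{
    byte[] ReadChunk(int size);
}

public class StreamChunkReader : IChunkReader
{
    public byte[] ReadChunk(int size)
    {
        return new byte[size]; // stand-in for reading from a stream
    }
}

public class BufferChunkReader : IChunkReader
{
    public byte[] ReadChunk(int size)
    {
        return new byte[size]; // stand-in for reading from a buffer
    }
}

// A consumer accepts either implementation, verified at compile time:
// void Process(IChunkReader reader) { byte[] chunk = reader.ReadChunk(1024); }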
Other times we create classes or types to serve purposes that do not feel very usefully type-like, such as acting as unique identifiers (a bit like enums) with a small amount of attached behavior, or holding a set of constants, etc.
I'm interested in better understanding the compile-time, runtime, and other costs involved when I face design decisions of the form 'should I define a new type or interface just to solve this problem?' Obviously there will be two sides to the cost and benefit in each such comparison, but in general we should hopefully see the same costs for 'define a new type' in each such comparison/discussion. How do we quantify these costs?
The performance and/or space costs of statically creating a new interface or class are always negligible. Don't think about it too much in this sense. In contrast, reflection and late binding can cause serious performance problems. You should use static typing pretty much at every opportunity.
The costs associated with creating a new class or interface aren't performance costs. They're more human costs. Here is a list of some considerations you should make before adding a new class or interface. At any rate, using late binding or reflection is probably not going to help your program. These are last-resort techniques.
Program complexity. While this is not always the case, a general rule of thumb is that every class adds additional complexity to your application, and thus makes it harder to understand, pass on to new project members, remember, and diagram. Changes become more difficult to implement.
If you really don't feel like a class is necessary, perhaps it isn't. Maybe there are other ways to solve your problem, such as using more dynamic classes. Perhaps you can use inheritance or other techniques to reduce repetitions.
Almost everything has some runtime cost. The only exception would be things like empty space. The reason is that almost everything gets recorded in the IL image, even local variable names, parameter names, and constants. So at the very least there will be a disk cost, a virtual memory space cost, and a working set cost.
In terms of CPU, more metadata will slow down program startup, token resolution, JIT/NGEN.
But sometimes adding types can have positive impact on performance too.
Using dynamic over strong types is more likely to give you performance issues. So if you are fine with using dynamic for most of your objects you may not need to worry about cost of creating static types.
Side note: if you prefer dynamic typing C# may not be the best language to work with. And it would be harder to get good samples as most C# code is targeting strongly typed objects.
I've been working with C# and, more generally, the .NET framework for a couple of years now. I have often heard about the similarity between C# and the Java language and would like to learn more about the latter.
Do you have any specific advice for learning Java when coming from C#?
Any common errors a C# programmer would make when starting Java?
Any documentation showing the habits you can keep and the ones you must change (still from a C#-to-Java perspective, so something a bit more specific than a generic C# vs. Java comparison)?
Well, while C# and Java are superficially alike, there are a number of small differences that might bite you. Generally I think the opposite direction - going from Java to C# - is less problematic. This is mainly because C# is a more complex language, so you will find many simplifications of common Java patterns, while the other way around might be a little painful.
Things to look out for (partial list, not guaranteed to be exhaustive):
Different ...
Naming conventions. In Java only type names start with a capital letter (i. e. PascalCase), everything else uses camelCase. Not very hard to adhere to, though.
Also interfaces generally don't start with I. On the other hand you have to implement them with a different keyword. Doesn't really help in the middle of the code, though.
Class library :-)
While obvious, this has been the thing I spent most time on when learning a language. When dealing with a known paradigm the syntax differences are quickly sorted out, but getting to know the standard library / class library / framework takes some time in some cases :-)
Patterns. Well, not quite, it's still the same stuff. But C# supports some patterns at the language level, while you still have to implement them yourself in Java. No events, but the Observer pattern (very prevalent in Swing—whenever you see a Listener, you know what to do :-))
Exception handling. Java has so-called checked exceptions which means that an exception must either be caught or declared upwards. Usually this means that you have
catch (SomeException ex) {
ex.printStackTrace();
}
pretty often in your code1 :-)
Types. While .NET has normal objects and value types, they both are objects and support methods, properties, &c. Java has a dichotomy of primitive types, such as int, float, char, &c. and classes such as String. Doesn't matter much since they implemented auto-boxing, but sometimes it's still annoying to wrap int in Integer.
Polymorphism: all Java methods are virtual by default, whereas C# methods are not.
Minor syntactic differences.
foreach (a in b) → for (a : b)
Different access keywords. Things like internal and protected internal don't exist. But unqualified members are visible to other classes in the same package (sort of internal, but then again not quite).
String comparison isn't done with == in Java; you have to use .equals(). While in C# == on strings is value equality, in Java == is always reference equality (see the sketch below).
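A quick illustration of the C# side of that difference (the Java behaviour is noted in the comments):

string a = "hello";
string b = new string("hello".ToCharArray()); // a distinct object with equal contents

Console.WriteLine(a == b);                       // True: C# '==' on strings compares values
Console.WriteLine(object.ReferenceEquals(a, b)); // False: different objects
// In Java, a == b would be the reference comparison (false here);
// a.equals(b) is what compares the contents.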
No ...
Properties. In Java this is generally done with the Foo getFoo()/void setFoo(Foo foo) pattern, which C# generates silently behind your back when using properties but which you have to write explicitly in Java (see the sketch after this list). Generally, to keep the language itself simpler, many things in Java are just conventions. Still, most of the time you're better off adhering to them :-)
Operator overloading. Deemed a hazard to the righteous programmer, they weren't implemented for fear of abuse. You don't need them too often anyway, not even in C#, but sometimes they are nice and then you're missing something.
Indexers. You always have to access list items through myList.get(5) instead of the array-like syntax myList[5]. Just a mild inconvenience, though.
LINQ (implementations exist2, but it's not as nicely integrated), lambda functions3 (no delegates anyway, but anonymous classes), extension methods, partial classes (yes, that's a painful one when dealing with Swing, unless you're very disciplined), and a few more things.
Multidimensional arrays. You can use jagged arrays (arrays of arrays), but true multidimensionality isn't there.
Generics are compile-time only, at runtime only Objects remain. Also wildcards in generics can be hard to resolve sometimes when the compiler complains about all of the four ? in your generics having different types. (Though to be fair: That was a case where I would have needed type information at runtime anyway so I reverted back to Objects).
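As promised in the properties item above, a quick sketch of the two styles:

// C#: the compiler generates the accessor methods for you
public class Person
{
    public string Name { get; set; }
}

// The equivalent Java pattern, written out by hand:
// public class Person {
//     private String name;
//     public String getName() { return name; }
//     public void setName(String name) { this.name = name; }
// }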
General advice: Grab a friend with Java experience and let him glance over your code. While he probably can't tell you everything you should take care of when you directly ask him that question, he can spot strange things in code just fine and notify you of that. This has greatly helped me learning Java (although I learned Java first and then C#, so it might be different).
1 Yes, I know many catch blocks look different, but still, this is probably the archetypical one and not even that rare.
2 Quaere, JaQue, JaQu, Querydsl
3 There's lambdaj, though. Thanks for pointing that out, Esko.
I honestly think the biggest hurdle for many C# developers trying to learn Java is learning a new IDE. Visual Studio is great, and when you're coding in C# for a long period of time, you get used to it. When having to move over to Eclipse or NetBeans, you suddenly feel lost. How do I set a breakpoint? Where's the immediate window? How do I create a Windows app? etc. etc... I know this sounds crazy, but I'm telling you, people get very attached to their IDEs and have a tough time getting used to new ones...
The languages themselves are pretty similar, save for a few keywords and Java lacking some features C# programmers are used to (properties, using, reified (non-type-erased) generics).
The main problem here is knowledge of frameworks, of which there are thousands for Java.
The main language is fine. Getting to know the libraries will be one thing which takes time. If you're doing web-applications, there is a LOT to learn... equivalent technologies to WCF and ASP.net.
You don't say what kind of area you work in... desktop, server, or web-server?
Biggest difference between C# and Java: in Java, all methods are virtual. Hence the reason why tools such as NUnit came from the Java world.
To be honest, if you're a competent C# programmer I don't believe there's much you do need to know apart from packaging and deployment of applications.
Here's a good link http://en.wikipedia.org/wiki/Comparison_of_Java_and_C_Sharp
The biggest thing you need to learn is how to Greenspun C#'s functional style features in Java. For example, you can expect to make a lot of interfaces with only one method to get around Java's lack of lambda functions and delegates.
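A sketch of what that looks like from the C# side (in Java, pre-lambdas, you would define a one-method interface and pass an anonymous class instance instead; the names below are made up):

// C#: a delegate (or lambda) can be passed directly
public delegate bool IntPredicate(int value);

public static int CountMatching(int[] values, IntPredicate test)
{
    int count = 0;
    foreach (int v in values)
        if (test(v))
            count++;
    return count;
}

// Usage: CountMatching(numbers, v => v > 10);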
I honestly recommend Java in a Nutshell. Most Java/any_other_lang introduction books are for totally novice readers explaining the concept of a loop for pages and recursion for a chapter... You can start writing Java programs within two days with this book. Of course it will take you a long time to understand what is going on under the hood and how to use all the available framework. But once the language itself is mastered, it is easy to get along even with google only resources.
Although this is the other way around, I found the following link to be quite useful for comparing Java and C#.
C# From a Java Developer's Perspective
I made a transition from Java to C# and back to Java again. I think syntactically they are very similar and most of the trouble I had was learning the .NET APIs and learning how to use them effectively. Many times I was using 'syntactic sugar', writing my code as if it was in Java and then translating it to C#. I spent a lot of time on Microsoft's website reading and learning about the APIs which was a huge help.
Is there any design reason for that (like the reason they gave up multiple inheritance)?
Or was it just not important enough?
And the same question applies to optional parameters in methods... they were already in the first version of VB.NET, so it surely wasn't laziness that caused MS not to allow optional parameters; it was probably an architectural decision. And it seems they have had a change of heart about that, because C# 4 is going to include them...
What was the decision, and why did they give it up?
Edit:
Maybe readers didn't fully understand me. I've lately been working on a calculation program (supporting numbers of any size, to the last digit), in which some methods are used millions of times per second.
Say I have a method called Add(int num), and this method is used quite a lot with 1 as the parameter (Add(1);). I've found that it is faster to implement a special method just for one. And I don't mean overloading - I mean writing a new method called AddOne and literally copying the Add method into it, except that instead of using num I write 1. This might seem horribly weird to you, but it's actually faster.
(as ugly as it is)
That made me wonder why C# doesn't support manual inlining, which could be amazingly helpful here.
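A deliberately simplified sketch of the specialisation described above (the real code does arbitrary-precision arithmetic; this only shows the shape of the trick):

public class Counter
{
    private long _value;

    // General method: adds an arbitrary amount
    public void Add(int num)
    {
        _value += num;
    }

    // Hand-specialised copy of Add() with the constant 1 folded in
    public void AddOne()
    {
        _value += 1;
    }
}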
Edit 2:
I asked myself whether or not to add this. I'm very familiar with the weirdness (and disadvantages) of choosing a platform such as .NET for such a project, but I think .NET's optimizations are more important than you think... especially features such as Any CPU, etc.
To answer part of your question, see Eric Gunnerson's blog post: Why doesn't C# have an 'inline' keyword?
A quote from his post:
For C#, inlining happens at the JIT level, and the JIT generally makes a decent decision.
EDIT: I'm not sure of the reason for the delayed optional-parameter support; however, saying they "gave up" on it sounds as though they were expected to implement it based on our expectations of what other languages offered. I imagine it wasn't high on their priority list and they had deadlines to get certain features out the door for each version. It probably didn't rise in importance till now, especially since method overloading was an available alternative. Meanwhile we got generics (2.0) and the features that make LINQ possible, etc. (3.0). I'm happy with the progression of the language; the aforementioned features are more important to me than getting support for optional parameters early on.
Manual inlining would be almost useless. The JIT compiler inlines methods during native code compilation where appropriate, and I think in almost all cases the JIT compiler is better at guessing when it is appropriate than the programmer.
As for optional parameters, I don't know why they weren't there in previous versions. That said, I don't like having them in C# 4, because I consider them somewhat harmful: the default values get baked into the consuming assembly, and you have to recompile that assembly if you change the default values in a DLL and want the consumer to use the new ones.
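A sketch of why that happens (hypothetical names):

// Defined in a library assembly:
public static class Greeter
{
    public static string Greet(string name = "world")
    {
        return "Hello, " + name;
    }
}

// In a consuming assembly, the call
//     Greeter.Greet();
// is compiled as Greeter.Greet("world"): the literal "world" is copied
// into the consumer's IL. If the library later changes the default,
// the consumer keeps passing "world" until it is recompiled.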
EDIT:
Some additional information about inlining. Although you cannot force the JIT compiler to inline a method call, you can force it not to inline a method call. For this, you use the System.Runtime.CompilerServices.MethodImplAttribute, like so:
using System.Runtime.CompilerServices;

internal static class MyClass
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    private static void MyMethod()
    {
        // Powerful, magical code
    }

    // Other code
}
My educated guess: the reason earlier versions of C# didn't have optional parameters is because of bad experiences with them in C++. On the surface, they look straight-forward enough, but there are a few bothersome corner cases. I think one of Herb Sutter's books describes this in more detail; in general, it has to do with overriding virtual methods. Maximilian has mentioned one of the .NET corner cases in his answer.
You can also pretty much get by without them by manually writing multiple overloads; that may not be very nice for the author of the class, but clients will hardly notice the difference between overloads and optional parameters.
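For illustration, the overload-based workaround looks something like this (hypothetical names):

// What a C# 4.0 optional parameter expresses in one line:
// public void Send(string message, int retries = 3) { ... }

// The pre-4.0 equivalent using overloads:
public void Send(string message)
{
    Send(message, 3); // forward with the default value made explicit
}

public void Send(string message, int retries)
{
    // ... actual sending logic ...
}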
So after all these years w/o them, why did C# 4.0 add them? 1) improved parity with VB.NET, and 2) easier interop with COM.
I'm working lately on a calculation program (support numbers of any size, to the last digit), in which some methods are used literally millions of times per second.
Then you chose the wrong language. I assume you actually profiled your code (right?) and know that there is nothing apart from micro-optimisations that can help you. Also, you're using a high-performance native bigint library and not writing your own, right?
If that's true, don't use .NET. If you think you can gain speed on partial specialisation, go to Haskell, C, Fortran or any other language that either does it automatically, or can expose inlining to you to do it by hand.
If Add(1) really matters to you, heap allocations will matter too.
However, you should really look at what the profiler can tell you...
C# has added them in 4.0: http://msdn.microsoft.com/en-us/library/dd264739(VS.100).aspx
As to why they weren't done from the beginning, it's most likely because they felt method overloads gave more flexibility. With overloading you can specify multiple 'defaults' based on the other parameters that you're taking. It's also not that much more syntax.
Even in languages like C++, inlining something doesn't guarantee that it'll happen; it's a hint to the compiler. The compiler can either take the hint, or do its own thing.
C# is another step removed from the generated assembly code (via IL + the JIT), so it becomes even harder to guarantee that something will inline. Furthermore, you have issues like the x86 + x64 implementations of the JIT differing in behaviour.
Java doesn't include an inline keyword either. The better Java JITs can inline even virtual methods, and the use of keywords like private or final makes no difference (it used to, but that is now ancient history).
As a beginning programmer, I'm trying to settle on a standard naming convention for myself. I realize that it's personal preference, but I was trying to get some ideas from some of you (well a LOT of you) who are much smarter than myself.
I'm not talking about camel notation but rather how you name your variables, etc. IMHO, var_Quantity is much more descriptive than Q or varQ. However, how do you keep a variable name from becoming too long? I've tried to be more descriptive when naming my controls, but I've ended up with some like "rtxtboxAddrLine1" for a RadTextBox that holds address line 1. To me, that is unmanageable, although it's pretty clear what that control is.
I'm just curious if you have some guides that you follow or am I left up to my own devices?
Some basic rules can be found here, and much more extensive rules can be found here. These are the official guidelines from the Microsoft framework designers.
As for your example, the variable should simply be called quantity.
In this case, I think you'd be better off naming it as primaryAddressLine or firstAddressLine. Here's why - rtxt as a prefix uselessly tells you the type. Intellisense will help you with the type and is immune to changes made to the actual object type. Calling it firstAddressLine keeps it away from the (poor) convention of using 1, 2, 3...on the end of variable names to indicate that for some reason you needed more of them instead of a collection.
Name it for what it represents/how it's meant to be interpreted or used, not for its data type; and in naming it, don't abbreviate if you don't need to.
The Guidelines for Names is the best starting point. But as in other areas of life, once you know the rules, you begin to know where it's reasonable to break them.
I never use the old Hungarian notation that calls things strFirstName, intCount, and the like; but I still use it on controls: txtFirstName, btnVerifyData, etc. Reasons include:
I'm not that likely to change the type of a control
If I do change the type of a control, I'll have to change a lot of things, not just the name, so changing the name too is no big deal
They're far easier to find with Intellisense.
In addition, I'm quite likely to do the same thing to many of the TextBoxes or ComboBoxes on a page or form, whereas I'm not likely to do something to all the ints or strings referred to on a page or form. So it helps to be able to quickly find all the TextBoxes with their txt prefix.
There are others, though, that adamantly oppose Hungarian even in this case, and I'm sure they have their reasons. Regardless of your personal style, you may find yourself working on a team that has a very different style. In which case, just do what they do; it's very, very rarely worth making an issue of it. The only time I'd do so is if their style leads to a lot of bugs, but off the top of my head I can't think of a case that would cause that.
There are a few good coding standards documents available online - David Lance wrote one:
http://weblogs.asp.net/lhunt/attachment/591275.ashx
I'd recommend that you use Microsoft's own guidelines as a starting point. Typically, most companies start there (in my experience, anyway).
http://msdn.microsoft.com/en-us/library/czefa0ke(VS.71).aspx
The more descriptive the better; you will find that the length isn't as important as remembering what that control/variable did five years down the road.
For .NET API design (and some general C# guidelines) check Krzysztof Cwalina and Brad Abrams' Framework Design Guidelines
Regards,
tamberg
I generally try to follow the Microsoft guidelines, with a few very old habits thrown in.
So, I still can't get out of the habit of prefixing privates with an underscore _privateMember.
I'm old, and that got burnt into my brain.
As far as prefixing control widgets goes, I have found that if you get too descriptive, it can become painful when the UI changes down the track.
E.g. you have something called ddlProductLine for a dropdown list, and then it has to change to a radio button group; your prefixing convention starts to be more of a PITA than helpful.
When you have a lot of widgets to work with, sometimes a more generic prefix like uiCtl can help with the clutter, and it still makes sense if you have to change the widget type.