I am reviewing a co-worker's C# console app, and I see this snippet:
class Program
{
    static void Main(string[] args)
    {
        Program p = new Program();
        p.RealMain();
    }

    // ... non-static RealMain method ...
}
Presumably, he's doing this because he wants to have instance-level fields, etc.
I haven't seen this before, but this style bugs me. Is it a common and accepted practice?
There is a school of thought that says that the Main() function of object-oriented code should do as little as possible. Main() is an "ugly" throwback to procedural code design, where programs were written in one function, calling subroutines only as necessary. In OOP, all code should be encapsulated in objects that do their jobs when told.
So, by doing this, you reduce the LOC in the Main() entry point to two lines, and the real logic of the program is structured and executed in a more O-O fashion.
It makes sense to me.
In particular, you may want to add just enough logic into Main to parse the command line arguments - possibly using a generalized argument parser - and then pass those options into the constructor in a strongly-typed way suitable for the program in question.
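For example, a minimal sketch of that shape (the Options class and its properties here are made up for illustration, not from the original snippet):
using System;

class Options
{
    public string InputPath { get; set; }
    public bool Verbose { get; set; }
}

class Program
{
    private readonly Options _options;

    public Program(Options options)
    {
        _options = options;
    }

    static void Main(string[] args)
    {
        // Keep Main thin: translate the raw string[] into a typed
        // object, then hand everything off to the instance.
        var options = new Options
        {
            InputPath = args.Length > 0 ? args[0] : ".",
            Verbose = Array.Exists(args, a => a == "--verbose")
        };
        new Program(options).RealMain();
    }

    public void RealMain()
    {
        if (_options.Verbose)
            Console.WriteLine($"Processing {_options.InputPath}...");
        // ... the real program logic ...
    }
}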
Albin asked why this would be necessary. In a word: testability. In some cases it's entirely feasible to at least test some aspects of a top level program with unit tests or possibly integration tests. Using instance fields instead of static fields (etc) improves the testability here, as you don't need to worry about previous test runs messing up the state.
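For instance, reusing the hypothetical sketch above (and assuming RealMain is public), an NUnit-style test can construct a fresh instance per run:
using NUnit.Framework;

[TestFixture]
public class ProgramTests
{
    [Test]
    public void RealMain_WithDefaults_RunsWithoutError()
    {
        // A fresh instance per test: instance fields start from a known
        // state, so earlier test runs cannot leak state into this one.
        var program = new Program(new Options { InputPath = "." });
        program.RealMain();
    }
}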
If you want non-static functions, you have to do something like this:
class Program
{
    static void Main(string[] args)
    {
        // This creates a dependency on the Program class itself,
        // which is arguably not good practice.
        Program p = new Program();

        // If you want to initialize instance state, you have to go
        // through an instance like this - or, better, do it in some
        // other class.
        p.RealMain();
    }

    void RealMain() { }
}
I've never seen it before. If you want to go with this pattern, create a separate class (Program2, say) with RealMain and instantiate that instead.
Why do you need instance-level fields? Are static fields not enough?
There could be a benefit if you internally want to instantiate many Program objects.
I don't see anything particularly wrong with this approach; I just haven't seen it before.
The application entry point is always defined as static void Main(...).
You can decide to write your code inside Main() or use it to run something else located elsewhere; it's up to you to decide.
The accepted practice is to create an instance of a separate class, which can contain anything you need. The snippet above looks weird, to say the least :).
If it were any other class but "Program", the question wouldn't have come up. This design gives you the opportunity to instantiate multiple instances of "Program", maybe threaded in the future, so why not? I'm with KeithS here: as little as possible in static void Main.
I see this a lot, particularly for quick console programs knocked up to try something out, or test something.
Visual Studio practically encourages it - if you ask for a new Console program, it generates a single file, with a class containing only a Main method.
Unless you are doing something complicated that requires more than one class, or something very simple that doesn't require a class at all (i.e. all methods and variables static), why would you not do it this way?
C#9 supports top-level statements, but I am curious whether it is possible to apply an attribute ([STAThread], actually) to the generated Main method, or whether I have to use the classical approach with an explicit Main method.
This feature was designed for newcomers to the language, so that they don't need to write a bunch of boilerplate every time. So this
namespace HelloWorldProg
{
    public static class HelloWorldClass
    {
        public static void Main(string[] args)
        {
            System.Console.WriteLine("Finally I can write Hello World");
        }
    }
}
transforms to this
System.Console.WriteLine("That's much easier!");
It's a question of entry threshold and learning curve. Without top-level statements you need to know about
namespaces
classes
encapsulation
static/instance members
passing arguments
arrays
how to write text to the console
With top-level statements you only need the last item to be able to write a program, and you can dig into the other topics later.
It's like "how to write 'Hello World' in Haskell": you need to know about monads (IO in particular) and do-notation, and in order to understand monads you're told to learn category theory first.
Now, answering your question:
You cannot apply attributes with top-level statements. They were designed for different purposes (see the language proposal and the discussion of priorities in platform design).
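In other words, to apply [STAThread] you fall back to the classical explicit entry point:
using System;

class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        // With an explicit Main, the attribute can be applied directly.
        Console.WriteLine("Running on an STA thread.");
    }
}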
I have a console app that is used for importing stuff from a WS.
I'm really new to console apps, so I have a question.
The Project contains a class called Importer.cs. This class has a Method called Initialize():
class Importer
{
    static void Initialize()
    {
        // here I connect to the server etc.
    }
}
Now I want to be able to call my APP like:
Importer.exe Initialize
So it should call the Initialize method, and then I would like to be able to go on with for example:
Importer.exe StartImport
I already work with the args[] parameter, but I'm quite stuck now.
Use a library like CommandLineParser and then use Reflection to call these methods as MethodInfo objects.
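A minimal sketch of the Reflection half (leaving CommandLineParser out, and assuming the methods are made public and static so they can be looked up by name):
using System;
using System.Reflection;

class Importer
{
    public static void Initialize() => Console.WriteLine("Connecting to the server...");
    public static void StartImport() => Console.WriteLine("Starting import...");
}

class Program
{
    static void Main(string[] args)
    {
        if (args.Length == 0)
        {
            Console.WriteLine("Usage: Importer.exe <MethodName>");
            return;
        }

        // Look up the method named by the first argument and invoke it.
        MethodInfo method = typeof(Importer).GetMethod(
            args[0], BindingFlags.Public | BindingFlags.Static);

        if (method == null)
            Console.WriteLine($"Unknown command: {args[0]}");
        else
            method.Invoke(null, null);
    }
}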
In your console application, look in Solution Explorer (the window in VS where all your files are shown). Find the file called Program.cs. Open it and look for this:
static void Main(string[] args)
{
}
Inside that, put this:
static void Main(string[] args)
{
    // Initialize() and StartImport() are static in the Importer class,
    // so call them on the type - no instance is needed.
    Importer.Initialize();
    Importer.StartImport();

    // Console.ReadLine() as the last line keeps your screen open
    // after program execution ends.
    Console.ReadLine();
}
The first thing your console app runs is Main().
Then simply test the project using F5. After testing, build the project and run it by double-clicking the EXE file.
Depending on what exactly you're doing, this might not even work. Imagine a situation where you Initialize() and afterwards StartImport(), but between these two calls the program has exited and the initialized state is gone. If you don't have methods like Initialize(), but rather atomic subcommands, your approach is possible in principle.
But one question arises: is this going to be generic? That is: say you add another method. Do you want access to it as a subcommand to be established automatically, or wouldn't you mind adding another subcommand case by hand? The first option means you need to get comfortable with Reflection.
Now say you add methods that you cannot declare private for whatever reason, but that you don't want exposed as subcommands. Then you would need to keep track of visibility yourself. Sure, that would mean poor design, but if you're stuck with legacy components, it might just happen.
Now let's say you do need the Initialize() (or similar) command(s). They take care of a connection, state, or whatever, and the program needs to still be running when the next subcommand is invoked so that it can use the initialized information, handles, etc. Wouldn't it then make more sense to design your console application like a shell? You would start Importer.exe and be prompted for subcommands.
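A rough sketch of that shell idea (the Importer stub here is hypothetical; the point is only that the process stays alive between subcommands):
using System;

static class Importer
{
    public static void Initialize() => Console.WriteLine("Initialized.");
    public static void StartImport() => Console.WriteLine("Import started.");
}

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Importer shell - type 'quit' to exit.");
        string line;
        while ((line = Console.ReadLine()) != null)
        {
            // State created by one subcommand (connections, handles, ...)
            // survives until the next one, because the process keeps running.
            switch (line.Trim())
            {
                case "Initialize": Importer.Initialize(); break;
                case "StartImport": Importer.StartImport(); break;
                case "quit": return;
                default: Console.WriteLine("Unknown command."); break;
            }
        }
    }
}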
I have a legacy class that is rather complex to maintain:
class OldClass {
    method1(arg1, arg2) {
        ... 200 lines of code ...
    }

    method2(arg1) {
        ... 200 lines of code ...
    }

    ...

    method20(arg1, arg2, arg3) {
        ... 200 lines of code ...
    }
}
The methods are huge, unstructured, and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers.
What would you suggest? Several ideas come to my mind:
Add several private helper methods to each method and group them in a #region (straightforward refactoring).
Use the Command pattern (one command class per OldClass method, in a separate file).
Create a helper static class per method, with one public method and several private helper methods; the OldClass methods delegate their implementation to the appropriate static class (very similar to commands).
?
Thank you in advance!
SRP - Single Responsibility Principle - and DRY - Don't Repeat Yourself.
I would start by finding the bits that are repetitive and extracting them into helper functions. Once you've narrowed the code base down in this way, you can consider other ways to refactor, and the code will be much easier to wrap your head around.
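For example (hypothetical code, just to show the mechanical step):
using System;

class OldClass
{
    // Previously every method began with the same pasted validation
    // block; now each one delegates to a single private helper.
    public void Method1(int arg1, string arg2)
    {
        ValidateArgs(arg1, arg2);
        // ... method-specific logic ...
    }

    public void Method2(string arg1)
    {
        ValidateArgs(0, arg1);
        // ... method-specific logic ...
    }

    private static void ValidateArgs(int arg1, string arg2)
    {
        if (arg2 == null) throw new ArgumentNullException(nameof(arg2));
        if (arg1 < 0) throw new ArgumentOutOfRangeException(nameof(arg1));
    }
}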
See SD CloneDR for a tool that can tell you what code blocks each of your methods have in common, including possible parameterizations.
DRY - Don't Repeat Yourself.
The first thing I always do is remove (all) repetition. Even a single line counts as repetition.
That will normalise the code, and it also opens up improvements such as making parts of it generic.
Start by mapping the current functionality and making a UML class diagram; that way you can effectively achieve DRY.
Change the design to be effective and DRY, while keeping the interface of your system as close to the original as you can.
Then write unit tests for the new system. It would be better to write them for the old system as well, but since you are probably going to change method names and arguments, the unit tests probably cannot work on both systems.
Ask your manager for feedback on the unit tests: did you understand the functionality properly? Don't implement any new features; that would cause problems with existing systems using the code, and if you get the new design right, adding new features later will be straightforward.
Implement the approved system.
Use default values for arguments to reduce overloading: SelectUser(int userId = 0) can be called as SelectUser().
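A small sketch of what that buys you (the includeInactive parameter is an invented example):
class UserRepository
{
    // One method with defaults stands in for three separate overloads.
    public void SelectUser(int userId = 0, bool includeInactive = false)
    {
        // userId == 0 meaning "current user" is an assumed convention.
    }
}

// Callers supply only what they need:
// repo.SelectUser();
// repo.SelectUser(42);
// repo.SelectUser(42, includeInactive: true);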
I'm working on a fork of the Divan CouchDB library and ran into a need to set some configuration parameters on the HttpWebRequest that's used behind the scenes. At first I started threading the parameters through all the layers of constructors and method calls involved, but then decided: why not pass in a configuration delegate?
So, in a more generic scenario, given:
class Foo {
    private parm1, parm2, ..., parmN;

    public Foo(parm1, parm2, ..., parmN) {
        this.parm1 = parm1;
        this.parm2 = parm2;
        ...
        this.parmN = parmN;
    }

    public Bar DoWork() {
        var r = new externallyKnownResource();
        r.parm1 = parm1;
        r.parm2 = parm2;
        ...
        r.parmN = parmN;
        r.doStuff();
    }
}
do:
class Foo {
    private Action<externallyKnownResource> configurator;

    public Foo(Action<externallyKnownResource> configurator) {
        this.configurator = configurator;
    }

    public Bar DoWork() {
        var r = new externallyKnownResource();
        configurator(r);
        r.doStuff();
    }
}
The latter seems a lot cleaner to me, but it does expose to the outside world that class Foo uses externallyKnownResource.
Thoughts?
This can lead to cleaner looking code, but has a huge disadvantage.
If you use a delegate for your configuration, you lose a lot of control over how the objects get configured. The problem is that the delegate can do anything; you can't control what happens there. You're letting a third party run arbitrary code inside your constructors and trusting them to do the "right thing." This usually means you end up having to write a lot of code to make sure everything was set up properly by the delegate, or you wind up with very brittle, easy-to-break classes.
It becomes much more difficult to verify that the delegate properly sets up each requirement, especially as you go deeper into the tree. Usually, the verification code ends up much messier than the original code would have been, passing parameters through the hierarchy.
I may be missing something here, but it seems like a big disadvantage to create the externallyKnownResource object down in DoWork(). This precludes easy substitution of an alternate implementation.
Why not:
public Bar DoWork( IExternallyKnownResource r ) { ... }
IMO, you're best off accepting a configuration object as a single parameter to your Foo constructor, rather than a dozen (or so) separate parameters.
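Something along these lines (FooOptions is a made-up name; the idea is to validate once, up front, instead of trusting a delegate to have configured things correctly):
using System;

// A single options object instead of a dozen constructor parameters.
class FooOptions
{
    public string Url { get; set; }
    public TimeSpan Timeout { get; set; } = TimeSpan.FromSeconds(30);
    // ... the remaining settings ...
}

class Foo
{
    private readonly FooOptions _options;

    public Foo(FooOptions options)
    {
        // Validation happens here, once, rather than after an arbitrary
        // delegate has run.
        if (options == null) throw new ArgumentNullException(nameof(options));
        if (string.IsNullOrEmpty(options.Url))
            throw new ArgumentException("Url is required.", nameof(options));
        _options = options;
    }
}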
Edit:
"There's no one-size-fits-all solution, no. But the question is fairly simple. I'm writing something that consumes an externally known entity (HttpWebRequest) that's already self-validating and has a ton of potentially necessary parameters. My options, really, are to re-create almost all of the configuration parameters it has and shuttle them in every time, or to put the onus on the consumer to configure it as they see fit." – kolosy
The problem with your request is that in general it is poor class design to make the user of the class configure an external resource, even if it's a well-known or commonly used resource. It is better class design to have your class hide all of that from the user of your class. That means more work in your class, yes, passing configuration information to your external resource, but that's the point of having a separate class. Otherwise why not just have the caller of your class do all the work on your external resource? Why bother with a separate class in the first place?
Now, if this is an internal class doing some simple utility work for another class that you will always control, then you're fine. But don't expose this type of paradigm publicly.
I am running into a design disagreement with a co-worker and would like people's opinion on object constructor design. In brief, which object construction method would you prefer and why?
public class myClass
{
    Application m_App;

    public myClass(Application app)
    {
        m_App = app;
    }

    public void DoSomething()
    {
        m_App.Method1();
        m_App.Object.Method();
    }
}
Or
public class myClass
{
    Object m_someObject;
    Object2 m_someOtherObject;

    public myClass(Object instance, Object2 instance2)
    {
        m_someObject = instance;
        m_someOtherObject = instance2;
    }

    public void DoSomething()
    {
        m_someObject.Method();
        m_someOtherObject.Method();
    }
}
The back story is that I ran into what appears to be a fundamentally different view on constructing objects today. Currently, objects are constructed using an Application class which contains all of the current settings for the application (Event log destination, database strings, etc...) So the constructor for every object looks like:
public Object(Application)
Many classes hold the reference to this Application class individually. Inside each class, the values of the application are referenced as needed. E.g.
Application.ConfigurationStrings.String1 or Application.ConfigSettings.EventLog.Destination
Initially I thought you could use both methods. The problem is that at the bottom of the call stack you call the parameterized constructor; then, higher up the stack, when a new object expects a reference to the Application object to be there, you hit a lot of null reference errors. That's when we saw the design flaw.
My feeling is that using an Application object to configure every class breaks each object's encapsulation and lets the Application class become a god class that holds information for everything, though I have trouble pinning down concrete examples of its downsides.
I wanted to change the objects' constructors to accept only the arguments they need, so that public Object(Application) would change to public Object(classMember1, classMember2, ...). I feel this makes the classes more testable, isolates change, and doesn't obfuscate which parameters are actually needed.
Currently, another programmer does not see the difference, and I am having trouble finding examples or good reasons to change the design; saying it's my instinct and that it goes against the OO principles I know is not a compelling argument. Am I off base in my design thoughts? Does anyone have points to add in favor of one approach or the other?
Hell, why not just make one giant class called "Do" and one method on it called "It" and pass the whole universe into the It method?
Do.It(universe)
Keep things as small as possible. Discrete means easier to debug when things inevitably break.
My view is that you give the class the smallest set of "stuff" it needs to do its job. The "Application" method is easier up front, but as you've already seen, it will lead to maintenance issues.
I think Steve McConnell put it very succinctly. He states:
"The difference between the 'convenience' philosophy and the 'intellectual manageability' philosophy boils down to a difference in emphasis between writing programs and reading them. Maximizing scope may indeed make programs easy to write, but a program in which any routine can use any variable at any time is harder to understand than a program that uses well-factored routines. In such a program you can't understand only one routine; you have to understand all the other routines with which that routine shares global data. Such programs are hard to read, hard to debug, and hard to modify." [McConnell 2004]
I wouldn't go so far as to call the Application object a "god" class; it really seems like a utility class. Is there a reason it isn't a public static class (or, better yet, a set of classes) that the other classes can use at will?
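For example, something like this (names invented for illustration):
// A static settings holder that classes consult as needed, instead of
// every constructor taking the whole Application object.
public static class AppSettings
{
    public static string EventLogDestination { get; set; }
    public static string ConnectionString { get; set; }
}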