What design pattern to use when calling Run method inside Main? [closed] - c#

Many times I have seen, independently of programming language, the following pattern inside the Main method of a program, but I don't know what it is called or why it is used like this. What can I achieve by using something like the following inside Main? Are there any alternatives/variations?
class Program
{
    static void Main(string[] args)
    {
        new Program().Run();
    }

    private void Run()
    {
        var rep = new Repository();
        dynamic data = rep.GetPerson();
        Console.WriteLine(data.Name);
        dynamic data2 = rep.GetPersonWrappedInAnonymousType();
        Console.WriteLine(data2.Person.Name);
    }
}
Thanks in advance.
Edit
Also, if you see something in many places, then yes, it is a pattern; that is the definition of a pattern!

Making the Program class instantiable (instead of making everything static) allows you to have multiple "programs" running at the same time or one after the other. This is useful for testing purposes.
Probably there are instance fields in this class. By using a fresh instance each time, each test run is isolated from the others.
If there is no instance state, however, this is a useless thing to do.
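As a minimal illustration of that isolation (a hypothetical sketch; the _linesPrinted field is invented for the example):

using System;

class Program
{
    // instance state; a static field here would leak between runs
    private int _linesPrinted;

    static void Main(string[] args)
    {
        // two independent "programs", each with isolated state
        new Program().Run();
        new Program().Run();
    }

    private void Run()
    {
        _linesPrinted++;
        // prints 1 both times; with a static field it would print 1, then 2
        Console.WriteLine($"Run printed {_linesPrinted} line(s).");
    }
}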

If this is a pattern at all, it might be something around responsibility separation. You could pack it into one method:
class Program
{
    static void Main(string[] args)
    {
        var rep = new Repository();
        dynamic data = rep.GetPerson();
        Console.WriteLine(data.Name);
        dynamic data2 = rep.GetPersonWrappedInAnonymousType();
        Console.WriteLine(data2.Person.Name);
    }
}
but whoever wrote it wanted to separate the "start" of the program (the standard Main entry point) from the actual task(s) to be performed (GetPerson, GetPersonWrappedInAnonymousType, ...).
Instantiating an object of the Program class is, IMHO, for purists. It's highly uncommon to actually have multiple instances of these objects in a single project. However, Java/C# fans often claim that in OOP "everything is an object" and despise everything that's static and not bound to objects. The static void Main is a great pain for them, so they quickly de-static-ize the Program class in this way. I don't want to judge that, and I agree with HansPassant's comment that it looks cargo-cultish. For me it's just a tiny cosmetic change and, as I said, for the Program class in particular it is mostly irrelevant. I wouldn't call it a "design pattern", though. A "code style" or an "implementation pattern", perhaps, but not "design": there's really no architecture and no algorithm in creating an object and calling a Run method.
If you add more bits to that (a common interface that defines Run(), some way of choosing the right Program implementation at runtime), then maybe you get into a design pattern called "policy" or "strategy", but that's mostly an exaggeration.
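For illustration, a rough sketch of that fuller version (the IProgramMode, ImportMode, and ExportMode names are invented for the example):

using System;

// a common interface that defines Run()
interface IProgramMode
{
    void Run();
}

class ImportMode : IProgramMode
{
    public void Run() => Console.WriteLine("Importing...");
}

class ExportMode : IProgramMode
{
    public void Run() => Console.WriteLine("Exporting...");
}

class Program
{
    static void Main(string[] args)
    {
        // choose the implementation at runtime, e.g. from the first argument
        IProgramMode mode = args.Length > 0 && args[0] == "export"
            ? (IProgramMode)new ExportMode()
            : new ImportMode();
        mode.Run();
    }
}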

Unfortunately no one was willing to answer my question, so I have to accept my own answer. However, I do believe there are people who can give a more experience-based answer on this. So the answer is: yes, this is a pattern! It is called the command pattern.
From the Wikipedia article:
In object-oriented programming, the command pattern is a behavioral design pattern in which an object is used to represent and encapsulate all the information needed to call a method at a later time. This information includes the method name, the object that owns the method and values for the method parameters.
Four terms always associated with the command pattern are command, receiver, invoker and client. A command object has a receiver object and invokes a method of the receiver in a way that is specific to that receiver's class. The receiver then does the work. A command object is separately passed to an invoker object, which invokes the command, and optionally does bookkeeping about the command execution. Any command object can be passed to the same invoker object. Both an invoker object and several command objects are held by a client object. The client contains the decision making about which commands to execute at which points. To execute a command, it passes the command object to the invoker object.
Using command objects makes it easier to construct general components that need to delegate, sequence or execute method calls at a time of their choosing without the need to know the class of the method or the method parameters. Using an invoker object allows bookkeeping about command executions to be conveniently performed, as well as implementing different modes for commands, which are managed by the invoker object, without the need for the client to be aware of the existence of bookkeeping or modes.
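To make the four roles concrete, here is a minimal hypothetical sketch (all names are invented for the example):

using System;
using System.Collections.Generic;

// command: encapsulates everything needed to call a method later
interface ICommand
{
    void Execute();
}

// receiver: the object that actually does the work
class Printer
{
    public void Print(string text) => Console.WriteLine(text);
}

// a concrete command binding a receiver, a method, and its parameter values
class PrintCommand : ICommand
{
    private readonly Printer _receiver;
    private readonly string _text;

    public PrintCommand(Printer receiver, string text)
    {
        _receiver = receiver;
        _text = text;
    }

    public void Execute() => _receiver.Print(_text);
}

// invoker: executes commands and could do bookkeeping about them
class Invoker
{
    private readonly Queue<ICommand> _queue = new Queue<ICommand>();

    public void Add(ICommand command) => _queue.Enqueue(command);

    public void RunAll()
    {
        while (_queue.Count > 0)
            _queue.Dequeue().Execute();
    }
}

// client: decides which commands to execute and hands them to the invoker
class Client
{
    static void Main()
    {
        var printer = new Printer();
        var invoker = new Invoker();
        invoker.Add(new PrintCommand(printer, "first"));
        invoker.Add(new PrintCommand(printer, "second"));
        invoker.RunAll();
    }
}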

Related

void method with arguments AND return value methods without arguments? [closed]

I'm developing software in C# and I remember that I read an article with the following statement:
Design the methods of a class like this:
use a void method with argument(s) in order to change the state of the class instance; the method will make some changes to the data of the class
use a method with a return value and no arguments in order to retrieve data from the class
I can't find the article anymore and I'm wondering about the benefits and drawbacks of such a convention in order to improve the quality and maintainability of the code I'm writing.
My question is: do you know any references/articles for this question? Does it make sense to follow these statements?
C# has properties, which I think makes these suggestions less practical. The big point to take away is that data inside a class should be encapsulated (the outside world cannot access it directly) and abstracted (the details are hidden). This is one of the primary purposes of a class. For this reason I think this advice would make more sense for a language like C++ that doesn't have properties.
One of the problems, though, is that people would write a class with private fields and then have to write a getter method and a setter method for each private field, which results in redundant boilerplate code. Properties help streamline this strategy while still maintaining encapsulation and abstraction. For example:
UserInfo user = new UserInfo();
user.Username = "foo";
Console.WriteLine(user.Password);
In the above example I have set the username to foo and retrieved the password for the user. Exactly HOW I set and retrieved the information is hidden. It may be saved in an .xml file right now, but later I may decide to change to saving it in a database or directly in memory. The outside world that uses this class is none the wiser. This is one of the many beauties of OOP.
The two bullets in your question correspond to a getter and a setter of a property. If I want to retrieve data from my class without any arguments, it makes more sense to have a property. Likewise, if I want to change the state of my object, I can use a setter property. I can perform validation on the input, make it thread-safe, add logging, everything I could do with a method that takes a single argument.
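For example, a setter can validate its input exactly like a one-argument void method would; a minimal sketch (reusing the hypothetical UserInfo shape from above):

using System;

class UserInfo
{
    private string _username;

    public string Username
    {
        // "query" side: returns data, takes no arguments
        get => _username;
        // "command" side: changes state, with validation on the input
        set
        {
            if (string.IsNullOrWhiteSpace(value))
                throw new ArgumentException("Username must not be empty.");
            _username = value;
        }
    }
}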
I suppose the bullet points are referring to Command-Query Separation: commands change state and return nothing, while queries return data and change nothing. In practice you could still need to pass some argument to a returning method, in order to filter the results for instance. Using voids to change the state is sensible, and making your returning methods so that they don't change the state is good too, but really the number of arguments you pass in is not decided by whether you have a getter or setter method. It is really a factor of what a method needs to know to get its job done.
The number of arguments that you pass in should be kept to a minimum, however. If you have more than one argument, you should really consider whether your method is doing more than one thing. I suppose the message is: "Have your methods do one thing and do it well." You can get loads of information about this from Uncle Bob; Robert C. Martin's Clean Coders series is a great source on this subject.
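A minimal sketch of that command/query split, using an invented Account class:

using System.Collections.Generic;
using System.Linq;

class Account
{
    private readonly List<decimal> _deposits = new List<decimal>();

    // command: void, takes an argument, changes state
    public void Deposit(decimal amount) => _deposits.Add(amount);

    // query: returns a value, takes no arguments, changes nothing
    public decimal GetBalance() => _deposits.Sum();
}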
Having a get method that returns a value and doesn't take any parameters makes it very clear what your method is doing.
GetFirstName() is clearer about what it's doing than GetName(bool first) or GetFirstName(User user).
Clarity is key. The method signature may seem like it's fairly clear, but when you read GetName(false) in the code somewhere then it's going to cause some confusion.
These types of getter methods are also not really my standard in C# where I am more likely to use a property for a getter of that nature.
When it comes to void methods with arguments, then this mainly comes into play for setters where you are setting the state of something in the object. Again, easily handled with properties in C#.
Most of the time these guidelines are there to help keep your code testable.
Methods that have fewer parameters are more easily tested -- assuming you are injecting dependencies into the method signature and not creating new objects in the method itself, which can be difficult to mock and test.
Think of how many test cases you may need to cover if you have a method that accepts 5 parameters.
In the end, these are general guidelines that are good for testability and clarity of your code, but there are certainly times when you will find that it doesn't make sense to follow these guidelines.
As with any coding, just be aware of what you are doing and why you are doing it.

Instantiating the class that contains static void Main()

I am reviewing a co-worker's C# console app, and I see this snippet:
class Program
{
    static void Main(string[] args)
    {
        Program p = new Program();
        p.RealMain();
    }

    // ... non-static RealMain function
}
Presumably, he's doing this because he wants to have instance-level fields, etc.
I haven't seen this before, but this style bugs me. Is it a common and accepted practice?
There is a school of thought that says that the main() function of object oriented code should do as little as possible. Main() is an "ugly" throwback to procedural code design, where programs were written in one function, calling subroutines only as necessary. In OOP, all code should be encapsulated in objects that do their jobs when told.
So, by doing this, you reduce the LOC in the main() entry point to two lines, and the real logic of the program is structured and executed in a more O-O fashion.
It makes sense to me.
In particular, you may want to add just enough logic into Main to parse the command line arguments - possibly using a generalized argument parser - and then pass those options into the constructor in a strongly-typed way suitable for the program in question.
Albin asked why this would be necessary. In a word: testability. In some cases it's entirely feasible to at least test some aspects of a top level program with unit tests or possibly integration tests. Using instance fields instead of static fields (etc) improves the testability here, as you don't need to worry about previous test runs messing up the state.
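As a rough sketch of that shape (the options are invented; a real program might use a dedicated argument-parsing library):

using System;

class Program
{
    private readonly string _inputPath;
    private readonly bool _verbose;

    // options arrive strongly typed; no string parsing inside the real logic
    public Program(string inputPath, bool verbose)
    {
        _inputPath = inputPath;
        _verbose = verbose;
    }

    static void Main(string[] args)
    {
        // Main does just enough to turn raw strings into typed options
        string inputPath = args.Length > 0 ? args[0] : "input.txt";
        bool verbose = Array.IndexOf(args, "--verbose") >= 0;

        new Program(inputPath, verbose).Run();
    }

    private void Run()
    {
        if (_verbose)
            Console.WriteLine($"Processing {_inputPath}...");
        // ... real work here ...
    }
}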
If you want non-static functions, you have to do something like this:
class Program
{
    static void Main(string[] args)
    {
        Program p = new Program(); // creates a dependency on the class here, so not a good practice
        p.RealMain();              // if you need initialization, do it like this, or better, in some other class
    }

    void RealMain() { }
}
I've never seen it before. If you want to go with this pattern, create a separate class (say, Program2) with a RealMain method and instantiate that instead.
Why do you need instance-level fields? Are static fields not enough?
There could be a benefit if you internally want to instantiate many Program classes.
I don't see anything particularly wrong with this approach; I just haven't seen it before.
The application entry point is always defined as static void Main(...).
You can decide to write your code inside Main() or use this method to run something else located elsewhere... it's up to you to decide.
The accepted practice is to create an instance of a separate class which can contain anything you need. The above snippet looks weird at least :).
If it were any other class but Program, the question wouldn't have come up. This design gives you the opportunity to instantiate multiple instances of Program, maybe threaded in the future, so why not? I'm with KeithS here: as little as possible in static void Main.
I see this a lot, particularly for quick console programs knocked up to try something out, or test something.
Visual Studio practically encourages it - if you ask for a new Console program, it generates a single file, with a class containing only a Main method.
Unless you are doing something complicated which requires more than one class, or something very simple which doesn't require a class at all (i.e. all methods and variables static), why would you not do it this way?

Should methods that are required to be executed in a specific order be private? [closed]

I have a class that retrieves some data and images, does some stuff to them, and then uploads them to a third-party app using web services.
The object needs to perform some specific steps in order.
My question is: should I explicitly expose each method publicly, like so:
myObject obj = new myObject();
obj.RetrieveImages();
obj.RetrieveAssociatedData();
obj.LogIntoThirdPartyWebService();
obj.UploadStuffToWebService();
or should all of these methods be private and encapsulated in a single public method, like so:
public class myObject
{
    private void RetrieveImages() { }
    private void RetrieveAssociatedData() { }
    private void LogIntoThirdPartyWebService() { }
    private void UploadStuffToWebService() { }

    public void DoStuff()
    {
        this.RetrieveImages();
        this.RetrieveAssociatedData();
        this.LogIntoThirdPartyWebService();
        this.UploadStuffToWebService();
    }
}
which is called like so:
myObject obj = new myObject();
obj.DoStuff();
It depends on who knows that the methods should be called that way.
Consumer knows: For example, if the object is a Stream, usually the consumer of the Stream decides when to Open, Read, and Close the stream. Obviously, these methods need to be public or else the object can't be used properly. (*)
Object knows: If the object knows the order of the methods (e.g. it's a TaxForm and has to make calculations in a specific order), then those methods should be private and exposed through a single higher-level step (e.g. ComputeFederalTax will invoke CalculateDeductions, AdjustGrossIncome, and DeductStateIncome).
If the number of steps is more than a handful, you will want to consider a Strategy instead of having the steps coupled directly into the object. Then you can change things around without mucking too much with the object or its interface.
In your specific case, it does not appear that a consumer of your object cares about anything other than a processing operation taking place. Since it doesn't need to know about the order in which those steps happen, there should be just a single public method called Process (or something to that effect).
(*) However, usually the object knows at least the order in which the methods can be called to prevent an invalid state, even if it doesn't know when to actually do the steps. That is, the object should know enough to prevent itself from getting into a nonsensical state; throwing some sort of exception if you try to call Close before Open is a good example of this.
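A minimal sketch of that kind of self-protection, using an invented Connection class:

using System;

class Connection
{
    private bool _open;

    public void Open() => _open = true;

    public void Read()
    {
        // the object refuses to enter a nonsensical state
        if (!_open)
            throw new InvalidOperationException("Call Open() before Read().");
        Console.WriteLine("Reading...");
    }

    public void Close() => _open = false;
}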
If method B() truly cannot be called unless A() is called first, then proper design dictates that A should return some object that B requires as a parameter.
Whether this is always practical is another matter, but that's how it should be done.
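A small sketch of that idea, with invented Session and Service types:

using System;

class Session { /* opaque token proving LogIn() succeeded */ }

class Service
{
    // A: produces the object that B requires
    public Session LogIn() => new Session();

    // B: cannot be called without the result of A, so the call order
    // is enforced by the type system rather than by a runtime check
    public void Upload(Session session, string file)
    {
        if (session == null) throw new ArgumentNullException(nameof(session));
        Console.WriteLine($"Uploading {file}...");
    }
}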
Yes, private; otherwise you are leaving the door open for users to do things wrong, which will only be a cause of pain for everyone.
Do you ever need to call any of these methods on its own? I.e., does any of them do anything useful that might be needed standalone? If so, then you might want to keep those public; but even if you keep them all public, you should have a method that calls them in the correct order (preferably with a useful name) to make things easier for your users.
It all depends on whether the operation is essentially atomic. In this case it looks like a single operation to us outsiders, but is it really? If LogIntoThirdPartyWebService fails, does the UI need to present a dialog box to ask the user if they want to retry? In the case where you have a single operation, retrying the LogIntoThirdPartyWebService operation also requires redoing potentially expensive operations like RetrieveImages, while making them separate enables more granular logic.
What I would do in this case is something like this:
Images images = RetrieveImages();
ImagesAndData data = RetrieveAssociatedData(images);
WebService webservice = LogIntoThirdPartyWebService();
UploadStuffToWebService(data, webservice);
or maybe more ideally something like this:
UploadStuffToWebService(RetrieveImages().RetrieveAssociatedData(),
LogIntoThirdPartyWebService());
Now you have granularity while enforcing the proper order of operations.
It sounds to me like from the consumer of your object's point of view, the object does one thing: it moves images from one place to another. As the consumer of the object, all of the individual steps you need to take to accomplish that are irrelevant to me; after all that's why I have you to do it for me.
So you should have a single DoStuff() method that takes all the necessary params, and make all the implementation details private.
Private - take the parameters in the constructor and execute the steps in the correct order there.
Do not assume the caller will, or knows how to, call them in order.
So, rather than the example you have listed, I would do it this way:
MyObject myObject = new MyObject(); // write a constructor that takes any parameters required to "set up" the object per your requirements
myObject.UploadToWebService();
It really depends on whether you estimate that anyone would want to invoke only one of these methods, and whether they make sense individually or can be implemented independently. If not, then it is better to avoid exposing anything but the high-level operation.
Expose as little as possible, as much as necessary. If a call to FuncA() is always followed by a call to FuncB(), make one public and have it call the other, or else have public FuncC() call them in sequence.
Yes, it should definitely be private, especially as all the methods seem to be parameterless and you're just concerned with the order.
The only time I would consider calling each method explicitly is if they each took several, non-overlapping parameters, and you wouldn't want to pass such a long string of parameters to one method and would want to modularize. And then you should make sure to document it clearly. But remember that comments are not executable... You'll still have to trust your user a bit more than you really should.
One of the biggest factors of information hiding and OOP... only give the user what is absolutely necessary. Allow as little room for mess-up as possible.
The question of public or private depends entirely on the contract you wish to expose for your object. Do you want users of your object to call the methods individually, or do you want them to call a single "DoStuff" method and be done with it?
It all depends on the intended usage of the class.
In the example you've given, I'd say DoStuff should be public and the rest private.
Which do you think would be easier for the consumers of your class?
Absolutely write one public method that performs the correct steps in the correct order. Otherwise, the caller is not going to do it right; they're going to forget a step or skip something.
Neither. I think you have at least three objects; otherwise you are breaking the Single Responsibility Principle. You need an object that "gets and holds images", one that "manipulates images", and one that "manages external vendor communication".
One reason they would be public is if you intend the user to be able to insert logic between steps. In this case, you should ensure that the functions are called in the correct order internally by keeping a really tiny state machine. If the state machine transitions in the wrong order, you have options besides just doing something wrong, such as throwing an exception.
However, there is an alternative design that allows them all to remain private even when the need to act between steps does exist. Instead of making the methods public, provide a public callback interface that lets users attach handlers that you call at each step of the process. In your now-private DoItAll() method, you can do something as granular as:
// optional hooks supplied by the consumer; invoked only if attached
preRetrieveHandler?.Invoke();
RetrieveImages();
postRetrieveHandler?.Invoke();
// ... and so on for each step
My software engineering rule of thumb is to always give the user/consumer/caller as little chance to screw things up as possible. Therefore, keep the methods private to ensure working order.
Fowler uses the term "Feature Envy" to describe a situation where one object calls a handful of methods (especially repeatedly) on another.
I don't know where he got it from; you don't see it much in the literature, and a lot of people over the years have had no idea what I was talking about. (I don't know why; I thought the name was perfectly obvious once I heard it, which is why I repeat it.)

The purpose of delegates [duplicate]

I wonder what the purpose of delegates is. I haven't used them that much and can't really think of something.
In my courses, it's written that a delegate is a blueprint for all methods that comply with its signature.
Also, you can add multiple methods to one delegate, and then they'll be executed one after the other in the order they were added, which is probably only useful for methods that affect local variables or methods that don't return any values.
I've read that C# implements events as delegates; the EventHandler delegate is documented as:
// Summary:
//     Represents the method that will handle an event that has no event data.
//
// Parameters:
//   sender: The source of the event.
//   e: An System.EventArgs that contains no event data.
[Serializable]
[ComVisible(true)]
public delegate void EventHandler(object sender, EventArgs e);
Still, it's kinda confusing. Can someone give a good, useful example of this concept?
Yeah, you're almost there. A delegate refers to a method or function to be called. .NET uses events to say: when someone presses this button, I want you to execute this piece of code.
For example, in the use of a GPS application:
public delegate void PositionReceivedEventHandler(double latitude, double longitude);
This says that the method must take two doubles as the inputs, and return void. When we come to defining an event:
public event PositionReceivedEventHandler PositionReceived;
This means that the PositionReceived event calls a method with the same definition as the PositionReceivedEventHandler delegate we defined. So when you do
PositionReceived += new PositionReceivedEventHandler(method_Name);
The method_Name must match the delegate, so that we know how to execute the method, what parameters it's expecting. If you use a Visual Studio designer to add some events to a button for example, it will all work on a delegate expecting an object and an EventArgs parameter.
Hope that helps some...
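Pulling the pieces of that GPS example together into one compilable sketch (the GpsReceiver class and SimulateFix method are invented for illustration):

using System;

public delegate void PositionReceivedEventHandler(double latitude, double longitude);

class GpsReceiver
{
    public event PositionReceivedEventHandler PositionReceived;

    public void SimulateFix()
    {
        // raise the event; every handler matching the delegate's signature gets called
        PositionReceived?.Invoke(59.33, 18.07);
    }
}

class Demo
{
    static void Main()
    {
        var gps = new GpsReceiver();
        gps.PositionReceived += PrintPosition; // method matches the delegate
        gps.SimulateFix();
    }

    static void PrintPosition(double latitude, double longitude)
        => Console.WriteLine($"At {latitude}, {longitude}");
}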
As you noted, a delegate is a way to create a signature for a method call. There are many great examples of using delegates, but the one that really opened my mind is this one.
public delegate Duck GetDuckDelegate();

// Duck, RubberDuck, and MallardDuck are assumed to be classes defined elsewhere
public static GetDuckDelegate GiveMeTheDuckFactoryMethod(string type)
{
    switch (type)
    {
        case "Rubber":
            return new GetDuckDelegate(CreateRubberDuck);
        case "Mallard":
            return new GetDuckDelegate(CreateMallardDuck);
        default:
            return new GetDuckDelegate(CreateDefaultDuck);
    }
}

public static Duck CreateRubberDuck()
{
    return new RubberDuck();
}

public static Duck CreateMallardDuck()
{
    return new MallardDuck();
}

public static Duck CreateDefaultDuck()
{
    return new Duck();
}
Then to use it
public static void Main()
{
    var getDuck = GiveMeTheDuckFactoryMethod("Rubber");
    var duck = getDuck();
}
Arguably, the factory pattern would be a better approach for this, but I just thought up this example on the fly and I think it proves the point of how delegates can be treated as objects.
Delegates allow you to pass methods around like values.
For example, .Net has a method called Array.ForEach that takes a delegate and an array, and calls the delegate on each element of the array.
Therefore, you could write,
int[] arr = new int[] { 1, 2, 4, 8, 16, 32, 64 };
Array.ForEach(arr, new Action<int>(Console.WriteLine));
This code will call Console.WriteLine for each number in the array.
There are many things you can do by making functions that take delegates, especially when combined with anonymous methods. For examples, look at LINQ.
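For instance, LINQ's operators are ordinary methods whose parameters are delegates, usually written as lambdas (a small sketch):

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        int[] arr = { 1, 2, 4, 8, 16, 32, 64 };

        // Where and Select each take a delegate, written here as lambdas
        var bigDoubled = arr.Where(n => n > 4).Select(n => n * 2);

        foreach (int n in bigDoubled)
            Console.WriteLine(n); // 16, 32, 64, 128
    }
}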
Many people are initially confused about the real need for delegates and events. I was one of them, and it took me some time to figure it out. :-) Recently I answered a similar query in the ASP.NET forums and thought it would be good to create a blog post on this topic! Here was the query:
"I was reading an example somewhere of a Bank class where, if the minimum balance is reached, you need to inform the rest of the app that the minimum has been reached; but can't we do that by just calling a normal method?
For example: let's say when we deduct some amount from the balance and the minimum is reached, we then call some method to take some action. I am totally missing why we need delegates and custom events here."
The thing is, in the bank case you can definitely call a method, but then it would be simple procedural programming; we need event-based programming when we want our code to respond to events generated by a system.
For example: think of the Windows OS as a system, and we are writing code (in any language) where we want to capture an event like mouse_click(). Now how would our program know that a mouse click has occurred? We could use low-level code for it, but since the OS is already handling the low-level code, it's best to capture an event raised by the OS.
In other words, the moment a mouse_click() happens, the OS fires an event. The OS doesn't care who captures this event and uses it; it just sends out a notification. Any code (like ours) can then capture that event and use it accordingly. This saves us a lot of time writing that code ourselves, and other programs too can use the same event and handle it in their own way.
Similarly, a banking system can be huge, and many outside applications might be accessing it. The banking system does not know how many such applications there are that need it or depend on it, nor how they would handle certain situations like a low balance, so it simply fires an event whenever a low balance occurs, and this event can be used by any other code besides the banking code itself.
Note that each subscriber to that event can handle it independently: e.g. the banking code might stop something from executing if the balance is low, some reporting app might send an email in such a case, or some ATM code could stop a particular transaction and notify the user that the balance is low.
Hope this clears things a bit!
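As a minimal sketch of that bank scenario (the class, threshold, and subscribers are invented for illustration):

using System;

class BankAccount
{
    public decimal Balance { get; private set; } = 100m;

    // fired whenever the balance drops below the minimum;
    // the account neither knows nor cares who is listening
    public event EventHandler LowBalance;

    public void Withdraw(decimal amount)
    {
        Balance -= amount;
        if (Balance < 10m)
            LowBalance?.Invoke(this, EventArgs.Empty);
    }
}

class Demo
{
    static void Main()
    {
        var account = new BankAccount();

        // each subscriber handles the same event independently
        account.LowBalance += (s, e) => Console.WriteLine("Report: send low-balance email");
        account.LowBalance += (s, e) => Console.WriteLine("ATM: block the transaction");

        account.Withdraw(95m);
    }
}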
I can provide you with an example using a web application architecture:
Generally, with a web application you can provide a front controller that receives requests from many clients. We could put all our methods for dealing with the many different types of client requests in the front controller, but this gets a little cumbersome. Instead we can use delegates to encapsulate the functionality for different requests. We could have:
Authentication Delegate
User Management Delegate
and so on. So it's a neat way to split up functionality into logical chunks - delegates. The Struts framework is based on this way of working (the ActionServlet and Action classes).
There are lots of excellent articles explaining delegates - here are some good ones:
Delegates and events
C# Delegates Explained
Delegates in C#
Delegates, to my understanding, provide a way of specializing the behavior of a class without subclassing it.
Some classes have complex generic behavior but are still meant to be specialized. Think of a Window class in a GUI framework: a Window can probably do a lot on its own, but you would most likely still want to specialize it in some way. In some frameworks this is done via inheritance; a different way of doing it is with delegates. Say you want something to happen when the Window resizes: your delegate class can then implement a method called onWindowResize (provided, of course, that the Window class supports this), which gets called whenever the Window resizes and is responsible for any specialized behavior on resize.
I'm not going to argue the merits of delegation over inheritance, but suffice it to say that there are many who feel that delegation is "cleaner" than inheritance.
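In C# terms, that Window idea might look like this invented sketch:

using System;

// the delegate slot the window exposes for specialization
delegate void WindowResizedHandler(int width, int height);

class Window
{
    public WindowResizedHandler OnWindowResize;

    public void Resize(int width, int height)
    {
        // generic behavior stays here; specialized behavior is delegated
        Console.WriteLine($"Window resized to {width}x{height}");
        OnWindowResize?.Invoke(width, height);
    }
}

class Demo
{
    static void Main()
    {
        var window = new Window();
        // specialize the window without subclassing it
        window.OnWindowResize = (w, h) => Console.WriteLine($"Re-layout for {w}x{h}");
        window.Resize(800, 600);
    }
}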

Instantiating objects with a Configuration class or with Parameters

I am running into a design disagreement with a co-worker and would like people's opinion on object constructor design. In brief, which object construction method would you prefer and why?
public class myClass
{
    Application m_App;

    public myClass(Application app)
    {
        m_App = app;
    }

    public void DoSomething()
    {
        m_App.Method1();
        m_App.Object.Method();
    }
}
Or
public class myClass
{
    Object m_someObject;
    Object2 m_someOtherObject;

    public myClass(Object instance, Object2 instance2)
    {
        m_someObject = instance;
        m_someOtherObject = instance2;
    }

    public void DoSomething()
    {
        m_someObject.Method();
        m_someOtherObject.Method();
    }
}
The back story is that I ran into what appears to be a fundamentally different view on constructing objects today. Currently, objects are constructed using an Application class which contains all of the current settings for the application (Event log destination, database strings, etc...) So the constructor for every object looks like:
public Object(Application)
Many classes hold the reference to this Application class individually. Inside each class, the values of the application are referenced as needed. E.g.
Application.ConfigurationStrings.String1 or Application.ConfigSettings.EventLog.Destination
Initially I thought you could use both methods. The problem is that at the bottom of the call stack you call the parameterized constructor; then, higher up the stack, when a new object expects a reference to the Application object to be there, we ran into a lot of null reference errors and saw the design flaw.
My feeling is that using an Application object to configure every class breaks the encapsulation of each object and allows the Application class to become a god class which holds information about everything. I run into problems when trying to articulate the downsides of this method.
I wanted to change the objects' constructors to accept only the arguments they need, so that public Object(Application) would change to public Object(classMember1, classMember2, etc.). I feel that this makes the classes more testable, isolates changes, and doesn't obfuscate the necessary parameters to pass.
Currently, another programmer does not see the difference, and I am having trouble finding examples or good reasons to change the design; saying that it's my instinct and that it goes against the OO principles I know is not a compelling argument. Am I off base in my design thoughts? Does anyone have any points to add in favor of one or the other?
Hell, why not just make one giant class called "Do" and one method on it called "It" and pass the whole universe into the It method?
Do.It(universe)
Keep things as small as possible. Discrete means easier to debug when things inevitably break.
My view is that you give the class the smallest set of "stuff" it needs to do its job. The "Application" approach is easier upfront, but as you've already seen, it will lead to maintenance issues.
I think Steve McConnell put it very succinctly. He states:
"The difference between the
'convenience' philosophy and the
'intellectual manageability'
philosophy boils down to a difference
in emphasis between writing programs
and reading them. Maximizing scope
may indeed make programs easy to
write, but a program in which any
routine can use any variable at any
time is harder to understand than a
program that uses well-factored
routines. In such a program you can't
understand only one routine; you have
to understand all the other routines
with which that routine shares global
data. Such programs are hard to read,
hard to debug, and hard to modify." [McConnell 2004]
I wouldn't go so far as to call the Application object a "god" class; it really seems like a utility class. Is there a reason it isn't a public static class (or, better yet, a set of classes) that the other classes can use at will?
