Object-Oriented Method Design Options - C#

I want to develop a process() method. The method takes some data in the form of a data class and processes it. The data classes are similar, but slightly different.
For example, we have the following data classes: processDataObject_A, processDataObject_B and processDataObject_C.
Is it better to overload the method:
void process(processDataObject_A data)
{
    //Process processDataObject_A here
}
void process(processDataObject_B data)
{
    //Process processDataObject_B here
}
void process(processDataObject_C data)
{
    //Process processDataObject_C here
}
OR have the concrete data classes extend some Abstract Data Class, and pass that to the process method and then have the method check the type and act accordingly:
void process(AbstractProcessDataObject data)
{
    //Check for type here and do something
}
OR is there some better way to address it? Would the approach change if this were to be a Web Method?
Thanks in advance

I would go with:
void process(AbstractProcessDataObject data)
{
    // each concrete data class implements its own doProcessing
    data.doProcessing();
}

The fact that your methods return void leads me to believe that you may have your responsibilities turned around. I think it may be better to think about this as having each of your classes implement an interface, IProcessable, that defines a Process method. Then each class would know how to manipulate its own data. This, I think, is less coupled than having a class which manipulates the data inside each object. Assuming all of these classes derive from the same base class, you could put the pieces of the processing algorithm that are shared in the base class.
This is slightly different from the case where you have multiple algorithms that operate on identical data. If you need this sort of functionality then you may still want to implement the interface, but have the Process method take a strategy-type parameter and use a factory to create an appropriate strategy based on its type. You'd end up with a strategy class for each supported algorithm/data-class pair this way, but you'd be able to keep the code decoupled. I'd probably only do this if the algorithms were complex enough that separating the code makes it more readable. If it's just a few lines that are different, a switch statement on the strategy type would probably suffice.
With regard to web methods, I think I'd have a different signature per class. Getting the data across the wire correctly will be much easier if the methods take concrete classes of the individual types so it knows how to serialize/deserialize it properly. Of course, on the back end the web methods could use the approach described above.
public interface IProcessable
{
    void Process();
}
public abstract class ProcessableBase : IProcessable
{
    public virtual void Process()
    {
        // standard processing code shared by all derived classes
    }
}
public class FooProcessable : ProcessableBase
{
    public override void Process()
    {
        base.Process();
        // specific processing code for Foo
    }
}
...
IProcessable foo = new FooProcessable();
foo.Process();
Implementing the strategy-based mechanism is a little more complex.
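As a rough sketch of that strategy-based variant (all of the names here - StrategyType, FastStrategy, StrategyFactory, ProcessData - are invented for illustration, not part of the answer above):
public enum StrategyType { Fast, Thorough }
public interface IProcessingStrategy
{
    void Execute(ProcessData data);
}
public class FastStrategy : IProcessingStrategy
{
    public void Execute(ProcessData data) { /* quick algorithm */ }
}
public class ThoroughStrategy : IProcessingStrategy
{
    public void Execute(ProcessData data) { /* slower, more careful algorithm */ }
}
public static class StrategyFactory
{
    // the factory picks an appropriate strategy based on its type
    public static IProcessingStrategy Create(StrategyType type)
    {
        switch (type)
        {
            case StrategyType.Fast: return new FastStrategy();
            default: return new ThoroughStrategy();
        }
    }
}
public class ProcessData
{
    // Process takes a strategy-type parameter, as described above
    public void Process(StrategyType type)
    {
        StrategyFactory.Create(type).Execute(this);
    }
}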
Web interface, using data access objects
[WebService]
public class ProcessingWebService
{
    // [WebMethod] marks the method as callable over the web service
    [WebMethod]
    public void ProcessFoo(FooDataObject foo)
    {
        // you'd need a constructor to convert the DAO
        // to a Processable object.
        IProcessable fooProc = new FooProcessable(foo);
        fooProc.Process();
    }
}

I second Marko's design.
Imagine you need to add another type of data structure and processing logic, say processDataObject_D. With your first proposed solution (method overloading), you will have to modify the class by adding another method. With your second proposed solution, you will have to add another condition to the type-checking and execution statement. Both require you to modify the existing code.
Marko's solution avoids modifying the existing code by leveraging polymorphism. You don't have to code if-else type checking. It allows you to add new data structures and processing logic without modifying the existing code, as long as the new class inherits from the same superclass.
Studying the Strategy pattern will give you a full theoretical understanding of the problem you are facing. The book "Head First Design Patterns" from O'Reilly is the best introduction I know of.

How about polymorphism on AbstractProcessDataObject - i.e. a virtual method? If this isn't appropriate (separation of concerns, etc.), then the overload would seem preferable.
Re web methods; very different: neither polymorphism nor overloading is very well supported (at least, not on basic-profile). The detection option ("check for type here and do something") might be the best route. Or have differently named methods for each type.
per request:
abstract class SomeBase { // think: AbstractProcessDataObject
    public abstract void Process();
}
class Foo : SomeBase {
    public override void Process() { /* do A */ }
}
class Bar : SomeBase {
    public override void Process() { /* do B */ }
}
SomeBase obj = new Foo();
obj.Process();

I believe the Strategy pattern would help you.

Related

Why should we implement an interface?

Implementing an interface just provides the skeleton of a method. If we already know the exact signature of that method,
what is the requirement to implement an interface?
This is a case in which the interface has been implemented:
interface IMy
{
    void X();
}
public class My : IMy
{
    public void X()
    {
        Console.WriteLine("Interface is implemented");
    }
}
This is a case in which the interface has not been implemented:
public class My
{
    public void X()
    {
        Console.WriteLine("No Interface is implemented");
    }
}
My obj = new My();
obj.X();
Both approaches produce the same result, so what is the requirement to implement an interface?
The purpose of interfaces is to allow you to use two different classes as if they were the same type. This is invaluable when it comes to separation of concerns.
e.g. I can write a method that reads data from an IDataReader. My method doesn't need to know (or care) whether that's a SqlDataReader, an OdbcDataReader or an OracleDataReader.
private void ReadData(IDataReader reader)
{
    // works with any IDataReader implementation
    while (reader.Read())
    {
        // process the current record, e.g. via reader.GetString(0)
    }
}
Now, let's say I need that method to process data coming from a non-standard data file. I can write my own object that implements IDataReader and knows how to read that file, and again my method neither knows nor cares how that IDataReader is implemented, only that it is passed an object that implements IDataReader.
Hope this helps.
You can write multiple classes that implement an interface, then put any of them in a variable of the interface type.
This allows you to swap implementations at runtime.
It can also be useful to have a List<ISomeInterface> holding different implementations.
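For example, a minimal sketch of that idea (ISpeaker, Dog and Cat are names made up for this example):
interface ISpeaker
{
    void Speak();
}
class Dog : ISpeaker
{
    public void Speak() { Console.WriteLine("Woof"); }
}
class Cat : ISpeaker
{
    public void Speak() { Console.WriteLine("Meow"); }
}
// one list holds different implementations; each speaks its own way
List<ISpeaker> speakers = new List<ISpeaker> { new Dog(), new Cat() };
foreach (ISpeaker s in speakers)
    s.Speak();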
There are two purposes of inheritance in .net:
Allow derived classes to share the base-class implementations of common functionality
Allow derived-class objects to be substituted for base-class objects anywhere the latter would be accepted.
Unlike some languages (C++, for example) which allow multiple inheritance, .net requires every class to have precisely one parent type (Object, if nothing else). On the other hand, sometimes it's useful to have a class be substitutable for a number of unrelated types. That's where interfaces come in.
An object which implements an interface is substitutable for an instance of that declared interface type. Even though objects may only inherit from one base type, they may implement an arbitrary number of interfaces. This thus allows some of the power of multiple inheritance, without the complications and drawbacks of full multiple-inheritance support.
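For instance, a class can combine a single base class with several interfaces (a sketch, all names invented):
interface IDrawable { void Draw(); }
interface IPersistable { void Save(); }
class Control { /* shared base-class functionality */ }
// Button inherits from one base type but is substitutable for both interface types
class Button : Control, IDrawable, IPersistable
{
    public void Draw() { /* render the button */ }
    public void Save() { /* persist the button's state */ }
}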
You've provided a very basic example, which is probably why you're having trouble understanding why. Examine something like this:
public interface IDbColumn
{
    int domainID { get; set; }
}
public static IEnumerable<T> GetDataByDomain<T>(
    IQueryable<T> src) where T : IDbColumn
{
    string url = HttpContext.Current.Request.Url.Host;
    int i = url == "localhost" ? 1 : 2;
    return src.Where(x => x.domainID == i || x.domainID == 3);
}
domainID is a physical column in every table that will be used with this method, but since the table type isn't known yet, there's no way to access that column without an interface.
Here's a simple example which helped me to understand interfaces:
interface IVehicle
{
    void Go();
}
public class Car : IVehicle
{
    public void Go()
    {
        Console.WriteLine("Drive");
    }
}
public class SuperCar : IVehicle
{
    public void Go()
    {
        Console.WriteLine("Drive fast!!");
    }
}
IVehicle car = new Car();
car.Go(); //output Drive
car = new SuperCar();
car.Go(); //output Drive fast!!
Say you have three classes: A, B and C.
A needs to accept an argument; either a B or a C can be passed in.
The best way to do this is to create an interface that B and C share.
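A sketch of that arrangement (IShared and its Work method are invented for illustration):
interface IShared
{
    void Work();
}
class B : IShared
{
    public void Work() { /* B's behaviour */ }
}
class C : IShared
{
    public void Work() { /* C's behaviour */ }
}
class A
{
    // accepts either a B or a C through the shared interface
    public void Accept(IShared item)
    {
        item.Work();
    }
}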
Well, interfaces are not meant to be used with just one class; they are used across many classes to make sure that they contain a set of methods.
A good way to visualize it is to think about driver abstraction: being able to run one query that can be handled by several different database servers.
interface DatabaseDriver
{
    public function connect(ConnectionDetails $details);
    public function disconnect();
    public function query(Query $query);
    public function prepareQuery(SQLQuery $query);
}
and then your actual drivers would implement the interface so that the database object can be assured that the selected driver is able to perform the tasks required.
class MySqlDriver extends Database implements DatabaseDriver{}
class AccessDriver extends Database implements DatabaseDriver{}
class MsSqlDriver extends Database implements DatabaseDriver{}
hope this helps.
Note: Code in PHP

A design for sending objects to be added to the appropriate data structure

I have a class called DataStructures containing a set of public static data structures that store objects. Adding an object to one of the data structures is an involved process requiring a number of checks to be carried out, processes to be remembered and data to be rearranged. In another class called Foo, I need to add objects to the data structures.
I was thinking I could do this by making a method called ObjectFeed which would take an object and the object's label as parameters. The label would tell the method which of the data structures the object should be added to. I would also have a method called addObject which would take the object to append and the appropriate target data structure as parameters:
public class DataStructures
{
    public static List<obj> object1Storage = new List<obj>();
    public static List<obj> object2Storage = new List<obj>();
    ...
}
public class Foo
{
    public void ObjectFeed(/* PARAMETERS */)
    {
        //Code that generates an object called inspectionObject
        //inspectionObject has an associated enum Type
        if (objectType == Type.Type1)
        {
            addObject(inspectionObject, DataStructures.object1Storage);
        }
        if (objectType == Type.Type2)
        {
            addObject(inspectionObject, DataStructures.object2Storage);
        }
        ...
    }
    private void addObject(obj inspectionObject, List<obj> objStorage)
    {
        objStorage.Add(inspectionObject);
        //And a lot more code
    }
}
Passing a public data structure as a parameter to a method that could just as well access that data structure directly doesn't feel right. Is there a cleaner way of doing this?
Edit:
In the example I originally contrived, the ObjectFeed method served no apparent purpose. I rewrote the method to look more like a method from the real world.
Where is the object type coming from? Passing a string value as the type of something is very rarely a good idea. Consider different options:
Create an enum for these values and use that. You can always parse it from a string or print it to a string if you need to (see the sketch below).
Maybe it makes sense to have a couple of specific methods: FeedObjectType1(object obj), etc.? How often will these change?
It's really difficult to give you a definite answer without seeing the rest of the code.
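For the enum option, a minimal sketch (ObjectType is a name invented here):
enum ObjectType { Type1, Type2 }
// parse the enum from a string when needed...
ObjectType parsed = (ObjectType)Enum.Parse(typeof(ObjectType), "Type1");
// ...or print it back out as a string
Console.WriteLine(parsed.ToString()); // "Type1"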
Exposing public static lists from your DataStructures class is in most cases not a good design. To start with, I would consider making them private and providing some methods to access the actual functionality that is needed. I would consider wrapping the lists with the addObject method, so that you don't have to pass the list as an argument. But again, I am not sure if it makes sense in your case.
You seem to use DataStructures like some kind of global storage. I don't know what you store in there so I'm going to assume you have good reasons for this global storage.
If so, I would replace each list with a new kind of object, which deals with additions of data and does the checks relevant for it.
Something like:
interface IObjectStorage
{
    void Add(object obj);
    void Remove(object obj);
}
Each object storage type would derive from this and provide its own logic. Or it could derive from Collection<T> or something similar if collection semantics make sense. As your example stands right now, I can't see the use for ObjectFeed; it serves as a fancy property accessor.
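For example, one such storage type might look like this (Type1Storage is a made-up name and the checks are placeholders):
class Type1Storage : IObjectStorage
{
    private readonly List<object> items = new List<object>();
    public void Add(object obj)
    {
        // the checks and bookkeeping specific to this storage go here
        items.Add(obj);
    }
    public void Remove(object obj)
    {
        items.Remove(obj);
    }
}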
Selecting which property to access through a string sounds iffy to me. It is very prone to typos; I would rather use the Type objects available from any object in C# through the GetType method or the typeof() construct.
However, the whole setup feels a bit wrong to me, DataStructures et al.
First, testing your static class will be hard. I would pass these stores around to the types that need them instead. Replacing them with other stuff will also be hard; using interfaces will at least not tie you to a concrete implementation, but what if you want to use another location to store the objects in other code? Your static class is no longer relevant and you'll need to change a lot of code.
Maybe these things are out of your control, I don't know, the example code is a bit vague in that sense.
As pointed out in other answers:
The public static Lists are bad practice
Since the addObject method is the same for every data structure, it should be implemented as a data structure accessor.
To this end, I moved the instantiation of the data structures into Foo and moved the addObject method from Foo to a new class called StorageLibrary that more accurately represents the data structure architecture.
public class StorageLibrary
{
    private List<obj> storedObjects = new List<obj>();
    public void addObject(obj inspectionObject)
    {
        storedObjects.Add(inspectionObject);
        //And a lot more code
    }
}
public class Foo : StorageLibrary
{
    //Declaration of libraries
    public static StorageLibrary storage1 = new StorageLibrary();
    public static StorageLibrary storage2 = new StorageLibrary();
    ...
    private void ObjectFeed(/* PARAMETERS */)
    {
        //generate objects
        if (objectType == Type.Type1)
        {
            storage1.addObject(inspectionObject);
        }
        if (objectType == Type.Type2)
        {
            storage2.addObject(inspectionObject);
        }
        ...
    }
}

Performing a common action on derived types

Is there a neat way to make several classes (which, say, derive from one interface) each perform the same action? Think of HTTP modules in ASP.NET, which serve each request ("each" being the key word). Is there a way to perform some common action on derived types? Reflection may be one way, though I would be interested in an approach at the base-class level.
Thanks
Not with only an interface; you'd want an abstract class in the middle there:
abstract class Whatever : IFooable {
public virtual void Do () {
PreDo();
}
protected abstract void PreDo();
}
Then you call Do, and PreDo is automatically called first on all implementing types.
(Edit: Just to be clear, I made Do virtual so this means if you re-implement it you should call base.Do() as the first thing, just to ensure that it actually calls the parent method).
If your classes all derive from a common base class, you could put this logic in your common base class.
If I understand what you are asking correctly, then perhaps an event handler is the way to go?
If you need a bunch of objects to respond to some action, then events (also called "message passing") are the way to go.
Something like this?
class Foo
{
    public event EventHandler PerformAction;
    private void OnActionNeeded()
    {
        // A bunch of Bars need to do something important now.
        if (PerformAction != null)
            PerformAction(this, EventArgs.Empty);
    }
}
class Bar
{
    public Bar(Foo fooToWatch)
    {
        fooToWatch.PerformAction += new EventHandler(Foo_PerformAction);
    }
    void Foo_PerformAction(object sender, EventArgs e)
    {
        // Do that voodoo that you do here.
    }
}
May not be a complete answer but I am tempted to think in terms of AOP and attributes.
some references:
http://www.codeproject.com/KB/cs/ps-custom-attributes-1.aspx
http://www.postsharp.org/contributions/documentation/removing-duplicate-code-in-functions
The Template Method design pattern may apply to what you're asking.
http://www.dofactory.com/Patterns/PatternTemplate.aspx
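A bare-bones illustration of Template Method (the type names are invented for this sketch):
abstract class ReportGenerator
{
    // the template method fixes the overall order of the steps
    public void Generate()
    {
        LoadData();
        Render();
    }
    // each derived type supplies its own versions of the steps
    protected abstract void LoadData();
    protected abstract void Render();
}
class PdfReportGenerator : ReportGenerator
{
    protected override void LoadData() { /* fetch report data */ }
    protected override void Render() { /* write a PDF */ }
}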
The overall point of designing an interface is to provide a protocol between two components and hide the implementation part. The interfaces serve as a communication medium.
What you are asking seems to be specific to implementation, which can be cleanly handled using utility classes (a singleton with the method).
I do not suggest having an abstract class in your current scenario.

Design Pattern to Handle Grouping Similar Entities Together

Over the past few years I've been on projects where we've run into a similar problem in our object hierarchy that always seems to cause problems. I was curious if anyone here knew of a classical OOP (Java, C#, PHP5, etc) design pattern that could gracefully handle this situation.
Say we have an existing system. This system has, among other things, two types of entities, each modeled with an individual class. Let's say
Customer
SalesRepresentative
For historical reasons, neither of these classes inherit from the same base class or share a common interface.
The problem I've seen is that, inevitably, a new feature gets specced out that requires us to treat the Customer and the SalesRepresentative as the same type of object. The way I've seen this handled in the past is to create a new class that includes a member variable for each, and then each method operates on the objects differently depending on which is set:
//pseudo PHPish code
class Participator
{
    public $customer;
    public $salesRepresentative;
    public function __construct($object)
    {
        if ($object instanceof Customer)
        {
            $this->customer = $object;
        }
        if ($object instanceof SalesRepresentative)
        {
            $this->salesRepresentative = $object;
        }
    }
    public function doesSomething()
    {
        if ($this->customer)
        {
            //We're a customer, do customer specific stuff
        }
        else if ($this->salesRepresentative)
        {
            //We're a salesRepresentative, do sales
            //representative specific stuff
        }
    }
}
Is there a more graceful way of handling this type of situation?
Maybe a Wrapper can be used here. Create a wrapper interface, say ParticipatorWrapper, that specifies the new functionality, and build concrete wrappers for each class, say CustomerWrapper and SalesRepresentativeWrapper, that both implement the new functionality.
Then simply wrap the object in its appropriate wrapper and write code that targets the ParticipatorWrapper.
Update: Javaish code:
interface ParticipatorWrapper{
    public void doSomething();
}
class CustomerWrapper implements ParticipatorWrapper{
    Customer customer;
    public CustomerWrapper(Customer customer){
        this.customer = customer;
    }
    public void doSomething(){
        //do something with the customer
    }
}
class SalesRepresentativeWrapper implements ParticipatorWrapper{
    SalesRepresentative salesRepresentative;
    public SalesRepresentativeWrapper(SalesRepresentative salesRepresentative){
        this.salesRepresentative = salesRepresentative;
    }
    public void doSomething(){
        //do something with the salesRepresentative
    }
}
class ClientOfWrapper{
    public void mymethod(){
        // instantiate a concrete wrapper, not the interface
        ParticipatorWrapper p = new CustomerWrapper(new Customer());
        p.doSomething();
    }
}
This is an alternative to Vincent's answer, taking an opposite sort of approach. As I note below, there are some downsides, but your specific problem may obviate those, and I think this solution is simpler in those cases (or you may want to use some combination of this solution and Vincent's).
Rather than wrapping the classes, introduce hooks in the classes and then pass them the functions. This is a reasonable alternative if you're looking to do the same thing with the same data from both classes (which I am guessing you are, based on lamenting that the two classes don't have a shared superclass).
This uses Visitor instead of Wrapper. Javaish, this would be something like:
public <Output> Output visit(Visitor<Output> v) {
    return v.process(/* ...all the shared fields in Customer/SalesRep... */);
}
And then you have a Visitor interface, which all your functions implement, that looks like:
interface Visitor<Output> {
    public Output process(/* ...shared fields... */);
}
There are some ways to chop down what gets passed to your Visitor, but that involves introducing new classes to specify which inputs to use, which becomes wrapping anyway, so you might as well use Vincent's answer.
The downside to this solution is that if you do something that alters the structure of the class fields, you can buy yourself lots of refactoring, which is less of a problem in Vincent's answer. This solution is also a little less useful if you're making modifications to the data stored in the Customer/SalesRep instance, as you'd effectively have to wrap those inside the Visitor.
I think you could apply the concept of mixins to your classes to get the functionality you want.
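C# has no built-in mixin construct, but extension methods on an interface can approximate one; a rough sketch with invented names:
interface IParticipator
{
    string Name { get; }
}
class Customer : IParticipator
{
    public string Name { get { return "customer"; } }
}
class SalesRepresentative : IParticipator
{
    public string Name { get { return "sales rep"; } }
}
static class ParticipatorMixin
{
    // shared behaviour "mixed in" to every IParticipator
    public static void Greet(this IParticipator p)
    {
        Console.WriteLine("Hello, " + p.Name);
    }
}
// usage: new Customer().Greet(); and new SalesRepresentative().Greet();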

How to Avoid Calling Virtual Methods from a Base Constructor

I have an abstract class in a library. I'm trying to make it as easy as possible to properly implement a derivation of this class. The trouble is that I need to initialize the object in a three-step process: grab a file, do a few intermediate steps, and then work with the file. The first and last steps are particular to the derived class. Here's a stripped-down example.
abstract class Base
{
    // grabs a resource file specified by the implementing class
    protected abstract void InitializationStep1();
    // performs some simple-but-subtle boilerplate stuff
    private void InitializationStep2() { return; }
    // works with the resource file
    protected abstract void InitializationStep3();
    protected Base()
    {
        InitializationStep1();
        InitializationStep2();
        InitializationStep3();
    }
}
The trouble, of course, is the virtual method call in the constructor. I'm afraid that the consumer of the library will find themselves constrained when using the class if they can't count on the derived class being fully initialized.
I could pull the logic out of the constructor into a protected Initialize() method, but then the implementer might call Step1() and Step3() directly instead of calling Initialize(). The crux of the issue is that there would be no obvious error if Step2() is skipped; just terrible performance in certain situations.
I feel like either way there is a serious and non-obvious "gotcha" that future users of the library will have to work around. Is there some other design I should be using to achieve this kind of initialization?
I can provide more details if necessary; I was just trying to provide the simplest example that expressed the problem.
I would consider creating an abstract factory that is responsible for instantiating and initializing instances of your derived classes using a template method for initialization.
As an example:
public abstract class Widget
{
    protected abstract void InitializeStep1();
    protected abstract void InitializeStep2();
    protected abstract void InitializeStep3();
    protected internal void Initialize()
    {
        InitializeStep1();
        InitializeStep2();
        InitializeStep3();
    }
    protected Widget() { }
}
public static class WidgetFactory
{
    public static T CreateWidget<T>() where T : Widget, new()
    {
        T newWidget = new T();
        newWidget.Initialize();
        return newWidget;
    }
}
// consumer code...
var someWidget = WidgetFactory.CreateWidget<DerivedWidget>();
This factory code could be improved dramatically - especially if you are willing to use an IoC container to handle this responsibility...
If you don't have control over the derived classes, you may not be able to prevent them from offering a public constructor that can be called - but at least you can establish a usage pattern that consumers could adhere to.
It's not always possible to prevent users of your classes from shooting themselves in the foot - but you can provide infrastructure to help consumers use your code correctly once they familiarize themselves with the design.
That's way too much to place in the constructor of any class, much less of a base class. I suggest you factor that out into a separate Initialize method.
In lots of cases, initialization involves assigning some properties. It's possible to make those properties themselves abstract and have derived classes override them and return some value, instead of passing the value to the base constructor to set. Of course, whether this idea is applicable depends on the nature of your specific class. Anyway, having that much code in the constructor is smelly.
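A small sketch of that idea (names invented; note the overridden property is only read after construction, from an ordinary method):
abstract class Widget
{
    // each derived class supplies its own value by overriding the property
    protected abstract string ConfigFile { get; }
    public void Load()
    {
        // safe: called after the object is fully constructed
        Console.WriteLine("Loading " + ConfigFile);
    }
}
class FooWidget : Widget
{
    protected override string ConfigFile
    {
        get { return "foo.cfg"; }
    }
}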
At first sight, I would suggest moving this kind of logic to the methods relying on this initialization. Something like:
public class Base
{
    private bool _initialized;
    private void Initialize()
    {
        // do whatever is necessary to initialize
        _initialized = true;
    }
    public void UseMe()
    {
        if (!_initialized) Initialize();
        // do work
    }
}
Since step 1 "grabs a file", it might be good to have Initialize(IBaseFile) and skip step 1. This way the consumer can get the file however they please - since it is abstract anyway. You can still offer a StepOneGetFile() method as abstract that returns the file, so they could implement it that way if they choose.
DerivedClass foo = new DerivedClass();
foo.Initialize(StepOneGetFile("filepath"));
foo.DoWork();
Edit: I answered this for C++ for some reason. Sorry. For C# I recommend against a Create() method - use the constructor and make sure the object stays in a valid state from the start. C# allows virtual calls from the constructor, and it's OK to use them if you carefully document their expected function and pre- and post-conditions. I inferred C++ the first time through because there virtual calls from the constructor don't reach the derived class.
Make the individual initialization functions private. They can be both private and virtual. Then offer a public, non-virtual Initialize() function that calls them in the correct order.
If you want to make sure everything happens as the object is created, make the constructor protected and use a static Create() function in your classes that calls Initialize() before returning the newly created object.
You could employ the following trick to make sure that initialization is performed in the correct order. Presumably, you have some other methods (DoActualWork) implemented in the base class that rely on the initialization.
abstract class Base
{
    private bool _initialized;
    protected abstract void InitializationStep1();
    private void InitializationStep2() { return; }
    protected abstract void InitializationStep3();
    protected void Initialize()
    {
        // it is safe to call virtual methods here
        InitializationStep1();
        InitializationStep2();
        InitializationStep3();
        // mark the object as initialized correctly
        _initialized = true;
    }
    public void DoActualWork()
    {
        if (!_initialized) Initialize();
        Console.WriteLine("We are certainly initialized now");
    }
}
I wouldn't do this. I generally find that doing any "real" work in a constructor ends up being a bad idea down the road.
At the minimum, have a separate method to load the data from a file. You could make an argument for taking it a step further and having a separate object responsible for building one of your objects from a file, separating the concerns of "loading from disk" from the in-memory operations on the object.
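As a sketch of that separation (WidgetLoader and the Widget constructor shape are invented for illustration):
class Widget
{
    private readonly string data;
    // the constructor only receives ready-made data; no disk I/O here
    public Widget(string data)
    {
        this.data = data;
    }
}
class WidgetLoader
{
    // a separate object owns the "loading from disk" concern
    public Widget LoadFrom(string path)
    {
        string contents = System.IO.File.ReadAllText(path);
        return new Widget(contents);
    }
}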
