I have a little design problem. Let's say I have a project that contains a large number of people. I want to allow the user to export those people to a CSV file with the information he chooses.
For example, he could choose Id, Name, and Phone number, and according to his choice I would create the file.
Of course, there is a simple way of doing it, like if (idCheckBox.Checked) getId(); etc.
I'm looking for something better. I don't want to have to change the UI (e.g. add a new checkbox) for every new option I add.
I thought of reading the possible options from a file, but that would only solve the UI problem. How would I know which values to get without using all those "ifs" again?
You don't need a fancy design pattern for this task. However, I understand you have identified a reason for change (options added in the future), so you want to minimize the number of classes that have to be modified.
Your real problem is how to decouple CSV creation from the objects whose structure is going to change. You don't want your parsing logic to be affected whenever your Person class is changed.
In the following example the CSV object is truly decoupled from the objects it receives and parses. To achieve this, we are coding to an abstraction rather than to an implementation. This way we are not even coupled to the Person object, but will welcome any object that implements the AttributedObject interface. This dependency is injected into our CSV parser.
I implemented this in PHP, but the idea is the same. C# is a statically typed language, so fetching the attributes would need a slightly different approach; you might use reflection or something like an indexer (the C# analogue of PHP's ArrayAccess interface). A C# sketch follows the PHP example below.
interface AttributedObject {
public function getAttribute($attribute);
}
class Person implements AttributedObject {
protected $firstName;
protected $lastName;
protected $age;
protected $IQ;
public function __construct($firstName, $lastName, $age, $IQ)
{
$this->firstName = $firstName;
$this->lastName = $lastName;
$this->age = $age;
$this->IQ = $IQ;
}
public function getAttribute($attribute)
{
if(property_exists($this, $attribute)) {
return $this->$attribute;
}
throw new \Exception("Invalid attribute");
}
}
class CSV {
protected $attributedObject = null;
protected $attributesToDisplay = null;
protected $csvRepresentation = null;
protected $delimiter = null;
public function __construct(AttributedObject $attributedObject, array $attributesToDisplay, $delimiter = '|')
{
$this->attributedObject = $attributedObject;
$this->attributesToDisplay = $attributesToDisplay;
$this->delimiter = $delimiter;
$this->generateCSV();
}
protected function generateCSV()
{
$tempCSV = null;
foreach ($this->attributesToDisplay as $attribute) {
$tempCSV[] = $this->attributedObject->getAttribute($attribute);
}
$this->csvRepresentation = $tempCSV;
}
public function storeCSV()
{
$file = fopen("tmp.csv", "w");
fputcsv($file, $this->csvRepresentation, $this->delimiter);
fclose($file); // close the handle so the data is flushed to disk
}
}
$person1 = new Person('John', 'Doe', 30, 0);
$csv = new CSV($person1, array('firstName', 'age', 'IQ'));
$csv->storeCSV();
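For reference, here is a minimal C# sketch of the same idea. The interface name IAttributedObject, the helper class CsvLine, and the reflection-based lookup are assumptions for illustration, not part of the original code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical C# counterpart of the PHP AttributedObject interface.
public interface IAttributedObject
{
    object GetAttribute(string attribute);
}

public class Person : IAttributedObject
{
    public string FirstName { get; private set; }
    public string LastName { get; private set; }
    public int Age { get; private set; }
    public int IQ { get; private set; }

    public Person(string firstName, string lastName, int age, int iq)
    {
        FirstName = firstName;
        LastName = lastName;
        Age = age;
        IQ = iq;
    }

    // Reflection stands in for PHP's property_exists + dynamic property access.
    public object GetAttribute(string attribute)
    {
        PropertyInfo property = GetType().GetProperty(attribute);
        if (property == null)
            throw new ArgumentException("Invalid attribute: " + attribute);
        return property.GetValue(this, null);
    }
}

public static class CsvLine
{
    // Builds one delimited line from whichever attributes the caller selected.
    public static string From(IAttributedObject obj, IEnumerable<string> attributes, char delimiter = '|')
    {
        return string.Join(delimiter.ToString(),
            attributes.Select(a => Convert.ToString(obj.GetAttribute(a))));
    }
}

Usage would look like: CsvLine.From(new Person("John", "Doe", 30, 0), new[] { "FirstName", "Age", "IQ" }).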
You can build a mapping set of fields based on which fields the user is allowed to select and which fields are required. This data can be read from a file or database. Your import/export can be as flexible as needed.
Here is a conceivable data structure that could hold info for your import/export sets.
public class FieldDefinition
{
public FieldDataTypeEnum DataType { get; set; }
public string FieldName { get; set; }
public int MaxSize { get; set; }
public bool Required { get; set; }
public bool AllowNull { get; set; }
public int FieldIndex { get; set; }
public bool CompositeKey { get; set; }
}
public class BaseImportSet
{
private List<FieldDefinition> FieldDefinitions { get; set; }
protected virtual void PerformImportRecord(Fields selectedfields)
{
throw new ConfigurationException("Import set is not properly configured to import record.");
}
protected virtual string PerformExportRecord(Fields selectedfields)
{
throw new ConfigurationException("Export set is not properly configured to export record.");
}
public void LoadFieldDefinitionsFromFile(string filename)
{
//Implement reading from file
}
}
public class UserImportSet : BaseImportSet
{
protected override void PerformImportRecord(Fields selectedfields)
{
//read in data one record at a time based on a loop in base class
}
protected override string PerformExportRecord(Fields selectedfields)
{
//read out data one record at a time based on a loop in base class
return string.Empty; // placeholder so the stub compiles
}
}
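Tying this back to the original question, here is a hedged sketch of how a list of FieldDefinition entries (read from a file or database) could drive an export without a per-field if chain. The class name FieldDrivenExporter and the reflection lookup are illustrative assumptions:

using System.Collections.Generic;
using System.Linq;

public static class FieldDrivenExporter
{
    // Emits one delimited line per record: the selected FieldDefinitions decide
    // which properties are read, so a new exportable field only needs a new
    // FieldDefinition entry, not a new checkbox or if-branch.
    public static string ExportRecord(object record, IEnumerable<FieldDefinition> selectedFields, string delimiter = ",")
    {
        var values = selectedFields
            .OrderBy(f => f.FieldIndex)
            .Select(f =>
            {
                var property = record.GetType().GetProperty(f.FieldName);
                var value = property == null ? null : property.GetValue(record, null);
                return value == null ? string.Empty : value.ToString();
            });
        return string.Join(delimiter, values);
    }
}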
I am learning DDD and trying to model articles, their variants, and parameters.
An article can exist on its own, without variants
A variant must be a child of an article
Both articles and variants can have parameters (colors, brands, sizes...) and physical quantities (width, length, and some article-specific ones like inner length)
If you set a parameter on an article, it can be "synchronized" to its child variants
You can override this in a variant by setting that parameter as "unlinked"; the variant then has a different parameter value than the article
Some parameters can be set multiple times (color: red, blue), but some only once (brand)
The parameters are created dynamically; a parameter is not a Color or Brand property but a key-value pair selected from preconfigured values
I think my main aggregate roots will be Article and Variant.
My current code looks like this:
internal class Article : AggregateRoot<ArticleId>
{
private readonly ISet<VariantId> _variants = new HashSet<VariantId>();
private readonly ISet<AssignedParameter> _parameters = new HashSet<AssignedParameter>();
private readonly ISet<AssignedPhysicalQuantity> _physicalQuantities = new HashSet<AssignedPhysicalQuantity>();
public string Name { get; private set; }
public string Catalog { get; private set; }
public IReadOnlySet<VariantId> Variants => _variants.AsReadOnly();
public IReadOnlySet<AssignedParameter> Parameters => _parameters.AsReadOnly();
public IReadOnlySet<AssignedPhysicalQuantity> PhysicalQuantities => _physicalQuantities.AsReadOnly();
private Article(ArticleId id, string name, string catalog)
: base(id)
{
Name = name;
Catalog = catalog;
}
public static Article Register(ArticleId id, string name, string catalog)
{
var article = new Article(id, name, catalog);
article.AddEvent(new ArticleRegistered(article.Id, article.Name, article.Catalog));
return article;
}
public void AssignParameter(Parameter parameter, ParameterValue parameterValue, bool syncToVariants)
{
if (!parameter.CanBeAssignedMultipleTimes && _parameters.Any(p => p.ParameterId == parameter.Id))
{
throw new ParameterCanBeAssignedOnlyOnceException($"Parameter {parameter.Id} can be assigned only once.");
}
var assignedParameter = new AssignedParameter(parameter.Id, parameterValue.Id, syncToVariants);
if (!_parameters.Add(assignedParameter))
{
throw new ParameterIsAlreadyAssignedException($"Parameter {parameter.Id} with value {parameterValue.Id} is already assigned.");
}
AddEvent(new ArticleParameterAssigned(Id, assignedParameter.ParameterId, assignedParameter.ParameterValueId));
}
public void UnassignParameter(Parameter parameter, ParameterValue parameterValue)
{
var assignedParameter = _parameters.FirstOrDefault(p => p.ParameterId == parameter.Id && p.ParameterValueId == parameterValue.Id);
if (assignedParameter is null)
{
throw new ParameterIsNotAssignedException($"Parameter {parameter.Id} is not assigned.");
}
_parameters.Remove(assignedParameter);
AddEvent(new ArticleParameterUnassigned(Id, assignedParameter.ParameterId, assignedParameter.ParameterValueId));
}
// physical quantity assign / unassign are similar to parameters
}
internal class Variant : AggregateRoot<VariantId>
{
private readonly ISet<AssignedParameter> _parameters = new HashSet<AssignedParameter>();
private readonly ISet<AssignedPhysicalQuantity> _physicalQuantities = new HashSet<AssignedPhysicalQuantity>();
public string Name { get; private set; }
public string Catalog { get; private set; }
public EanCode Ean { get; private set; }
public decimal Weight { get; private set; }
public IReadOnlySet<AssignedParameter> Parameters => _parameters.AsReadOnly();
public IReadOnlySet<AssignedPhysicalQuantity> PhysicalQuantities => _physicalQuantities.AsReadOnly();
internal Variant(VariantId id, string name, string catalog, EanCode ean, decimal weight)
: base(id)
{
Name = name;
Catalog = catalog;
Ean = ean;
Weight = weight;
}
// parameter and physical quantity assignment methods
}
Parameters:
internal class Parameter : AggregateRoot<ParameterId>
{
private readonly ISet<ParameterValue> _values = new HashSet<ParameterValue>();
public string Code { get; private set; }
public string Name { get; private set; }
public bool CanBeAssignedMultipleTimes { get; private set; }
public IReadOnlySet<ParameterValue> Values => _values.AsReadOnly();
public Parameter(ParameterId id, string code, string name, bool canBeAssignedMultipleTimes)
: base(id)
{
Code = code;
Name = name;
CanBeAssignedMultipleTimes = canBeAssignedMultipleTimes;
}
}
internal class ParameterValue : Entity<ParameterValueId>
{
public string Code { get; private set; }
public string Name { get; private set; }
public Parameter Parameter { get; private init; } = null!;
public ParameterValue(ParameterValueId id, string code, string name)
: base(id)
{
Code = code;
Name = name;
}
}
Value objects:
// for Article, variant doesn't have SyncToVariants property and has some other
internal class AssignedParameter : ValueObject
{
public ParameterId ParameterId { get; private init; }
public ParameterValueId ParameterValueId { get; private init; }
public bool SyncToVariants { get; private init; }
public AssignedParameter(ParameterId parameterId, ParameterValueId parameterValueId, bool syncToVariants)
{
ParameterId = parameterId;
ParameterValueId = parameterValueId;
SyncToVariants = syncToVariants;
}
protected override IEnumerable<object> GetEqualityComponents()
{
yield return ParameterId;
yield return ParameterValueId;
}
}
internal class AssignedPhysicalQuantity : ValueObject { ... }
My questions:
What would be the best way to notify variants of the parameter change? I can think of two ways using events.
First would be using ArticleParameterChanged(ArticleId, parameter.Id, parameterValue.Id). I would handle this event and change all variants at once in the handler. I don't think this is the way, but I wouldn't need to hold the variants collection in the article.
Second would be to loop through the variant IDs and create an ArticleVariantParameterChanged(ArticleId, VariantId, parameterId, parameterValueId) event for each. This seems more correct to me:
if (syncToVariants)
{
foreach (var variantId in _variants)
{
AddEvent(new ArticleVariantParameterChanged(Id, variantId, parameter.Id, parameterValue.Id));
}
}
How do I add a new variant to an article? The easiest way would be to create the new variant and update the article in one transaction.
// Article method
public Variant RegisterVariant(VariantId variantId, ...)
{
var variant = new Variant(variantId, ...);
_variants.Add(variantId);
return variant;
}
// command handler? or domain service?
var article = await _articleRepo.GetAsync(articleId);
var variant = article.RegisterVariant(variantId, ...);
await _variantRepo.AddAsync(variant);
await _articleRepo.UpdateAsync(article);
Or using events?
// Article method
public Variant RegisterVariant(VariantId variantId, ...)
{
var variant = Variant.Register(variantId, this.Id, ...);
return variant;
}
// Variant static method
public static Variant Register(VariantId variantId, ArticleId articleId, ...)
{
var variant = new Variant(variantId, articleId, ...);
variant.AddEvent(new VariantRegistered(variantId, articleId));
return variant;
}
// command handler
var variant = article.RegisterVariant(...);
await _variantRepo.AddAsync(variant);
// VariantRegisteredHandler
article.AddVariant(variantId);
However, this seems kind of confusing to me: article.RegisterVariant and article.AddVariant... Maybe it's just bad naming?
Also, a race condition can occur here between adding a new variant and assigning a new parameter: if someone adds a new parameter before the VariantRegistered event has been handled, that parameter wouldn't be synced.
So I'm wondering, is it even a good idea to store those shared parameters in each variant? Maybe it would be enough to keep only variant-specific parameters there and merge everything in the read model? However, that would make duplications harder to prevent: if the article already has the parameter "color - red", assigning "color - red" to a variant would need to check the article's parameters too, and there can be another race condition.
I read that entities without any domain business logic could be treated as CRUD, which means they wouldn't even inherit AggregateRoot and each of them would have its own repository, right?
Let's say someone really wants to delete some parameter value, for example the blue color. This (hopefully) wouldn't happen in my app, but I'm still curious how it would be handled. The user confirms they really want to delete it, and I need to go through all articles and unassign it from them. How?
My idea would be to have a ParameterValueDeleted event, whose handler, ParameterValueDeletedHandler, would query for all affected articles and variants and unassign the value one by one; this handler would take a really long time to execute.
Alternatively, ParameterValueDeletedHandler would query for all affected IDs and create some event for each of them, and another handler would unassign the value later. In the latter case, though, I don't know what that event should be called to make sense: UnassignArticleParameter sounds more like a command than an event, and ArticleParameterUnassigned sounds like something coming from the article. Also, I read that commands indicate something that can be rejected, so I would say a command doesn't fit here.
I also see a problem when someone deletes that parameter and someone else queries for an article which doesn't have it unassigned yet: the database join would fail because it would join to a non-existent parameter (assuming a single database for the read and write models).
If I wanted to have mandatory parameters, where would be the best place to validate that all of them are set? Move the article registration logic to an ArticleFactory and check it there? And for variants maybe an ArticleService or a VariantFactory? This seems somewhat inconsistent to me, but maybe it's right?
var article = await _articleRepo.GetAsync(articleId);
_articleService.RegisterVariant(article, /* variant creation data */);
_variantFactory.Register(article, /* variant creation data */);
I think this should be all, I hope I explained everything well.
I would appreciate any help with this!
Brief: I'm creating an MVC application in which I need to display a variety of document types, some containing more author information than others.
What I want to do: My approach is to have a generic "view document" view, which dynamically displays the document in a format dictated by the shape/type of the object passed to it.
Example: A simple document would be loaded into a SimpleDocumentViewModel and displayed as such. However, I'd like to load a larger type of document into an ExtendedDocumentViewModel, bringing with it additional information about both the document and the author. The view(s) would then display the appropriate data based on the object received.
Where I'm at now: In this vein I've created the following interfaces and classes, but I'm stuck as to how to return/identify the more specific return types in their derived classes.
abstract class BaseDocumentViewModel : DocumentViewModel, IDocumentViewModel
{
public int DocumentId { get; set; }
public string Body { get; set; }
public IAuthorViewModel Author { get; set; }
}
class SimpleDocumentViewModel : BaseDocumentViewModel
{
}
class ExtendedDocumentViewModel : BaseDocumentViewModel
{
public new IAuthorExtendedViewModel Author { get; set; }
}
interface IAuthorViewModel
{
int PersonId { get; set; }
string Name { get; set; }
}
interface IAuthorExtendedViewModel : IAuthorViewModel
{
int ExtraData { get; set; }
int MoreExtraData { get; set; }
}
Question: So my question is: how best can I get the specific types from the fully implemented classes, or do I need to return the base types and query it all in the view? Or am I off my head and need to go back to the drawing board?
Edits:
I know that C# doesn't support return type covariance, but I hoped that there may be another way of returning/identifying the derived types so that I don't have to query them all in the view.
My current solution would be to always return the base types, and have a separate view for each concrete type that simply casts each object to the correct type, only querying those that could differ. Perhaps this is the best solution after all, but it feels very inelegant.
Usually you can do a simple "is" check. So you can have conditional rendering in your views, for example:
@if (Model is ExtendedDocumentViewModel)
{
// render ExtendedDocumentViewModel html here
}
Type checking is usually considered an anti-pattern; however, I am not sure there is a much better approach to this problem. If you are using .NET Core you can also check the subclass tag here: http://examples.aspnetcore.mvc-controls.com/InputExamples/SubClass .
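If you are on C# 7 or later, a declaration pattern can combine the type check and the cast in one step. A small sketch; RenderExtendedDocument is a hypothetical helper, not an MVC API:

if (Model is ExtendedDocumentViewModel extended)
{
    // 'extended' is already strongly typed, so no separate cast is needed.
    RenderExtendedDocument(extended.Author.ExtraData, extended.Author.MoreExtraData);
}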
A possibly cleaner option is to have a method in the interface, say GetView, that each document has to implement. That way each document type has its own way of implementing the function, and the calling code knows that every document has a GetView function. This works well if every document has a unique way of viewing the document. If some documents share the same way of getting views, then may I suggest extracting each view type into its own class and assigning the view types to each document. I suggest looking into the strategy pattern.
First suggestion:
class SimpleDocumentViewModel : IAuthorViewModel
{
    public View GetView()
    {
        // ... do document-specific stuff
        // ... return the view
    }
}
class ExtendedDocumentViewModel : IAuthorViewModel
{
    public int ExtraData { get; set; }
    public int MoreExtraData { get; set; }
    public View GetView()
    {
        // ... do document-specific stuff
        // ... return the view
    }
}
interface IAuthorViewModel
{
    View GetView();
}
Second suggestion:
class SimpleDocumentViewModel : IAuthorViewModel
{
    public ViewType1 view { get; set; }
    public SimpleDocumentViewModel(ViewType1 viewIn, etc...)
    {
        view = viewIn;
    }
    public View GetView()
    {
        return view.GetView();
    }
}
class ExtendedDocumentViewModel : IAuthorViewModel
{
    public int ExtraData { get; set; }
    public int MoreExtraData { get; set; }
    public ViewType2 view { get; set; }
    public ExtendedDocumentViewModel(ViewType2 viewIn, etc...)
    {
        view = viewIn;
    }
    public View GetView()
    {
        return view.GetView(ExtraData, MoreExtraData);
    }
}
interface IAuthorViewModel
{
    View GetView();
}
I may be way off base here, but as I understand your question... why not just throw the return types in an object and pass that to your view?
You could look at the desired method and use reflection to pull out whatever info you want. Modify this and the object class to hold whatever you want them to.
public class DiscoverInternalClass
{
public List<InternalClassObject> FindClassMethods(Type type)
{
List<InternalClassObject> MethodList = new List<InternalClassObject>();
MethodInfo[] methodInfo = type.GetMethods();
foreach (MethodInfo m in methodInfo)
{
List<string> propTypeList = new List<string>();
List<string> propNameList = new List<string>();
string returntype = m.ReturnType.ToString();
foreach (var x in m.GetParameters())
{
propTypeList.Add(x.ParameterType.Name);
propNameList.Add(x.Name);
}
InternalClassObject ICO = new InternalClassObject(m.Name, propNameList, propTypeList);
MethodList.Add(ICO);
}
return MethodList;
}
}
The object class could be something like this; modify it however you want:
public class InternalClassObject
{
public string Name { get; set; }
public List<string> ParameterNameList { get; set; }
public List<string> ParameterList { get; set; }
public InternalClassObject(string iName,List<string> iParameterNameList, List<string> iParameterList)
{
Name = iName;
ParameterNameList = iParameterNameList;
ParameterList = iParameterList;
}
}
You could call the method like this, with the desired class:
public static List<InternalClassObject> MethodList = new List<InternalClassObject>();
DiscoverInternalClass newDiscover= new DiscoverInternalClass();
MethodList = newDiscover.FindClassMethods(typeof(ExtendedDocumentViewModel));
Now you can have your GetView build its output based on what is in MethodList.
Hope this helps!
I've inherited an MVC project that seems to use Telerik Open Access to handle data instead of something I'm more familiar with, like Entity Framework. I'm trying to understand the whole concept of how to work with this data access approach, but right now I just need to find out how to add a table. I've limited my code examples to one table, but in reality there are dozens of them.
So I see that the class OpenAccessContext.cs has a database connection string, but it also has a IQueryable item made up of the class tblMaterial. The tblMaterial class is defined in tblMaterial.cs. I don't understand how this class is connected to the SQL database version of tblMaterial (so feel free to educate me on that).
I have a table called tblContacts in the SQL database. What do I need to do to connect it to my project? There's no "update from database" option when I right click any object in the solution (because they're all just classes). Will I need to create a new class manually called tblContacts.cs? If so, how do I connect it to the database version of tblContacts? Am I going to need to manually change multiple classes to add the table (OpenAccessContext, MetadataSources, Repository, etc.)?
I tried to keep this as one simple question (how do I add a table) so I don't get dinged, but any light you can shine on the Telerik Open Access would be helpful. (Please don't ding me for asking that!) I checked out the Telerik documentation here: http://docs.telerik.com/data-access/developers-guide/code-only-mapping/getting-started/fluent-mapping-getting-started-fluent-mapping-api , but it's related to setting up a new open access solution. I need to know how to modify one (without ruining the already working code). Thank you in advance for your help!
Here's the solution as seen in Visual Studio:
Open Access
Properties
References
OpenAccessContext.cs
OpenAccessMetadataSources.cs
Repository.cs
tblMaterial.cs
Here's the code:
OpenAccessContext.cs
namespace OpenAccess
{
public partial class OpenAccessContext : Telerik.OpenAccess.OpenAccessContext // base class fully qualified because the app's own namespace is also called OpenAccess
{
static MetadataContainer metadataContainer = new OpenAccessMetadataSource().GetModel();
static BackendConfiguration backendConfiguration = new BackendConfiguration()
{
Backend = "mssql"
};
private static string DbConnection = ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;
private static int entity = ConfigurationManager.AppSettings["Entity"] == "" ? 0 : int.Parse(ConfigurationManager.AppSettings["Entity"]);
public OpenAccessContext() : base(DbConnection, backendConfiguration, metadataContainer)
{
}
public IQueryable<tblMaterial> tblMaterials
{
get
{
return this.GetAll<tblMaterial>(); //.Where(a => a.EntityId == entity);
}
}
}
}
OpenAccessMetadataSources.cs
namespace OpenAccess
{
public class OpenAccessMetadataSource : FluentMetadataSource
{
protected override IList<MappingConfiguration> PrepareMapping()
{
var configurations = new List<MappingConfiguration>();
// tblMaterial
var materialConfiguration = new MappingConfiguration<tblMaterial>();
materialConfiguration.MapType(x => new
{
MaterialId = x.MaterialId,
MaterialName = x.MaterialName,
MaterialDescription = x.MaterialDescription,
MaterialActive = x.MaterialActive,
MaterialUsageType = x.MaterialUsageType,
AddDate = x.AddDate,
AddBy = x.AddBy,
ModDate = x.ModDate,
ModBy = x.ModBy
}).ToTable("tblMaterial");
materialConfiguration.HasProperty(x => x.MaterialId).IsIdentity(KeyGenerator.Autoinc);
configurations.Add(materialConfiguration);
return configurations;
}
}
}
Repository.cs
namespace OpenAccess
{
public class Repository : IRepository
{
#region private variables
private static OpenAccessContext dat = null;
#endregion private variables
#region public constructor
/// <summary>
/// Constructor
/// </summary>
public Repository()
{
if (dat == null)
{
dat = new OpenAccessContext();
}
}
#endregion public constructor
#region Material (tblMaterials)
public int CreateMaterial(tblMaterial itm)
{
try
{
dat.Add(itm);
dat.SaveChanges();
return itm.MaterialId;
}
catch (Exception)
{
return 0;
}
}
}
}
tblMaterial.cs
namespace OpenAccess
{
public class tblMaterial
{
public int MaterialId { get; set; }
public string MaterialName { get; set; }
public string MaterialDescription { get; set; }
public bool MaterialActive { get; set; }
public int MaterialUsageType { get; set; }
public DateTime? AddDate { get; set; }
public string AddBy { get; set; }
public DateTime? ModDate { get; set; }
public string ModBy { get; set; }
}
}
In the case of tblContacts, I would suggest to you the following workflow for extending the model:
Add a new class file that will hold the definition of the tblContact POCO class. In this class add properties that will correspond to the columns of the table. The types of the properties should logically match the datatypes of the table columns.
In the OpenAccessMetadataSource class, add a new MappingConfiguration<tblContact> for the tblContact class and using explicit mapping provide the mapping details that logically connect the tblContact class with the tblContacts table. Make sure to add both the existing and the new mapping configurations to the configurations list.
Expose the newly added class through an IQueryable<tblContact> property in the context. This property will allow you to compose LINQ queries against the tblContacts table. A sketch of all three steps follows below.
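For illustration, here is a sketch of those three steps modeled on the existing tblMaterial code. The column and property names on tblContact are assumptions; use the actual schema of your tblContacts table:

// 1. POCO class whose properties mirror the tblContacts columns (names assumed here).
public class tblContact
{
    public int ContactId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

// 2. In OpenAccessMetadataSource.PrepareMapping(), next to the tblMaterial configuration:
var contactConfiguration = new MappingConfiguration<tblContact>();
contactConfiguration.MapType(x => new
{
    ContactId = x.ContactId,
    FirstName = x.FirstName,
    LastName = x.LastName,
    Email = x.Email
}).ToTable("tblContacts");
contactConfiguration.HasProperty(x => x.ContactId).IsIdentity(KeyGenerator.Autoinc);
configurations.Add(contactConfiguration);

// 3. In OpenAccessContext, expose the new type for LINQ queries:
public IQueryable<tblContact> tblContacts
{
    get { return this.GetAll<tblContact>(); }
}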
Regarding the Repository class, it seems like it is related to the custom logic of the application. It surely is not a file generated by Data Access. Therefore, you need to discuss it in your team.
I also strongly advise you against using OpenAccess in the namespaces of your application. This is known to interfere with the Data Access namespaces at build time, and at some point it causes runtime errors.
I hope this helps.
I have a program that receives files from clients, performs some operations on them, and may or may not save them to disk.
To decouple the jobs, I created an interface named IFileEditor. Every component that does something with a file should implement this interface:
public interface IFileEditor
{
string Name { get; set; }
byte[] Content { get; set; }
string EditedName { get; set; }
byte[] EditedContent { get; set; }
string ComponentName { get; }
XmlDocument Config { get; set; }
XmlDocument Result { get; set; }
void EditFile(byte[] content);
}
The main method in this interface is EditFile, which receives the file content, performs its operations, and may finally save the result to disk.
A sample class that I wrote, which creates a thumbnail from an image and implements this interface:
public class ThumbnailCreator : IFileEditor
{
public string Name { get; set; }
public byte[] Content { get; set; }
public string EditedName { get; set; }
public byte[] EditedContent { get; set; }
public string ComponentName { get { return "ThumbnailCreator"; } } // required by IFileEditor
public XmlDocument Config { get; set; }
public XmlDocument Result { get; set; }
public void EditFile(byte[] content)
{
//change the file content and save the thumbnail content in disk
}
}
I may have lots of components like ThumbnailCreator, for example one that zips the content, or anything else that operates on the content.
In the main program I load every component by reflection. The implementation of the loading is not important; just know that each component's DLL is copied next to the main program's .exe, and if the DLL implements IFileEditor, I add it to a list.
The main question: the main application just receives files and passes them to the components, and the components do the jobs. If I want to pass the result of one component to another, what should I do?
Remember that the components don't know each other, and the main program should not interfere in passing results.
I searched, and I think the chain-of-responsibility design pattern will solve my problem, but I don't know whether that's correct. If it is, how do I implement it?
For example, one component creates the thumbnail and passes the result to another component that compresses it.
I wrote this part this way so that any developer can create a component and the main program stays extendable.
Thanks for reading this large post. ;)
Yes, the chain-of-responsibility pattern is what you could use to solve this design problem. You basically need to make sure that each processor knows the next processor in the chain and calls it, and have some sort of runner that configures the processor chain once, starts the processing operation, and collects the end result. There are many use cases for such a pattern: System.Net.Http.DelegatingHandler (http://msdn.microsoft.com/en-us/library/system.net.http.delegatinghandler(v=vs.110).aspx) works like this, and Java ServletFilters are conceptually the same thing.
You could also just keep your processors in a collection, iterate that collection and apply each processor to your input by calling a specific method, i.e. EditFile in your example.
Update -- here's a naive implementation to illustrate what I mean (taken from LINQPad):
void Main()
{
// Variant 1 - Chaining
var editorChain = new UpperCaseFileEditor(new LowerCaseFileEditor());
var data1 = new char[] { 'a', 'B', 'c' };
editorChain.Edit(data1);
data1.Dump(); // produces ['a','b','c']
// Variant 2 - Iteration
var editors = new List<IFileEditor> { new LowerCaseFileEditor(), new UpperCaseFileEditor() };
var data2 = new char[] { 'a', 'B', 'c' };
foreach (var e in editors) {
e.Edit(data2);
}
data2.Dump(); // produces ['A','B','C']
}
// Define other methods and classes here
public interface IFileEditor {
IFileEditor Next { get; set; }
void Edit(char[] data);
}
public class UpperCaseFileEditor : IFileEditor {
public IFileEditor Next { get; set; }
public UpperCaseFileEditor() : this(null) {}
public UpperCaseFileEditor(IFileEditor next) {
Next = next;
}
public void Edit(char[] data) {
for (int i = 0; i < data.Length; ++i) {
data[i] = Char.ToUpper(data[i]);
}
if (Next != null)
Next.Edit(data);
}
}
public class LowerCaseFileEditor : IFileEditor {
public IFileEditor Next { get; set; }
public LowerCaseFileEditor() : this(null) {}
public LowerCaseFileEditor(IFileEditor next) {
Next = next;
}
public void Edit(char[] data) {
for (int i = 0; i < data.Length; ++i) {
data[i] = Char.ToLower(data[i]);
}
if (Next != null)
Next.Edit(data);
}
}
Please take into consideration that this is just an illustration, and I won't claim that it will scale to a production/real-world use case :-). Depending on what you really do, you might need to work on performance improvements; it might be quite handy to work with streams instead of byte/char arrays, for example.
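To make the stream remark concrete, here is a hedged sketch of the same chain working over streams instead of arrays. The interface shape (IStreamFileEditor, a Stream-returning Edit) is an assumption, not part of the original design:

using System.IO;

public interface IStreamFileEditor
{
    IStreamFileEditor Next { get; set; }
    Stream Edit(Stream input);
}

public class PassThroughEditor : IStreamFileEditor
{
    public IStreamFileEditor Next { get; set; }

    public Stream Edit(Stream input)
    {
        // A real editor would transform the data here instead of copying it verbatim.
        var output = new MemoryStream();
        input.CopyTo(output);
        output.Position = 0;
        // Forward the result down the chain, exactly as in the array-based version.
        return Next != null ? Next.Edit(output) : output;
    }
}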
I'm currently creating objects for an application of mine when this question came to mind. I know that using DBMLs instead of manually created classes (see the class below) can speed up my application development, but I'm really confused about what the other advantages and disadvantages of using DBMLs over manual class creation would be. Thanks to everyone who helps. :)
[Serializable]
public class Building
{
public Building()
{
LastEditDate = DateTime.Now.Date;
LastEditUser = GlobalData.CurrentUser.FirstName + " " + GlobalData.CurrentUser.LastName;
}
public int BuildingID { get; set; }
public string BuildingName { get; set; }
public bool IsActive { get; set; }
public DateTime LastEditDate { get; set; }
public string LastEditUser { get; set; }
public static bool CheckIfBuildingNameExists(string buildingName, int buildingID = 0)
{
return BuildingsDA.CheckIfBuildingNameExists(buildingName, buildingID);
}
public static Building CreateTwin(Building building)
{
return CloningUtility.DeepCloner.CreateDeepClone(building);
}
public static List<Building> GetBuildingList()
{
return BuildingsDA.GetBuildingList();
}
public static List<Building> GetBuildingList(bool flag)
{
return BuildingsDA.GetBuildingList(flag).ToList();
}
public static Building SelectBuildingRecord(int buildingId)
{
return BuildingsDA.SelectBuilding(buildingId);
}
public static void InsertBuildingRecord(Building building)
{
BuildingsDA.InsertBuilding(building);
}
public static void UpdateBuildingRecord(Building building)
{
BuildingsDA.UpdateBuilding(building);
}
public static void DeleteBuildingRecord(int building)
{
BuildingsDA.DeleteBuilding(building);
}
}
and my DAL is like this:
internal static class BuildingsDA
{
internal static Building SelectBuilding(int buildingId)
{
SqlCommand commBuildingSelector = ConnectionManager.MainConnection.CreateCommand();
commBuildingSelector.CommandType = CommandType.StoredProcedure;
commBuildingSelector.CommandText = "Rooms.asp_RMS_Building_Select";
commBuildingSelector.Parameters.AddWithValue("BuildingID", buildingId);
SqlDataReader dreadBuilding = commBuildingSelector.ExecuteReader();
if (dreadBuilding.HasRows)
{
dreadBuilding.Read();
Building building = new Building();
building.BuildingID = int.Parse(dreadBuilding.GetValue(0).ToString());
building.BuildingName = dreadBuilding.GetValue(1).ToString();
building.IsActive = dreadBuilding.GetValue(2).ToString() == "Active";
building.LastEditDate = dreadBuilding.GetValue(3).ToString() != string.Empty ? DateTime.Parse(dreadBuilding.GetValue(3).ToString()) : DateTime.MinValue;
building.LastEditUser = dreadBuilding.GetValue(4).ToString();
dreadBuilding.Close();
return building;
}
dreadBuilding.Close();
return null;
}
....................
}
I would also like to know which of the two approaches would be faster. Thanks :)
DBML
Pros:
You can get your job done fast!
Cons:
You can't shape your entity the way you want. For example, if you need 5 columns from a table that has 10 columns, you will get all of them, or at least their schema, which matters if you care about data volume.
Your client side will have a dependency on the DAL (Data Access Layer); if you change a property name or type in the DAL, you need to change both the BLL (Business Logic Layer) and the client (Presentation Layer).
If you create the classes manually, it might take a little more time to code, but you get more flexibility. Your client code will not depend on your DAL, and changes to the DAL will not cause problems in client code.
Creating your model classes manually, you can put additional attributes on properties (this cannot be done with DBML) and apply your own data validation (as far as I remember, it is possible with DBML using partial methods). An example follows below.
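For instance, with manually created classes you can decorate properties with validation attributes directly. A small sketch using System.ComponentModel.DataAnnotations, applied to the Building class from the question:

using System.ComponentModel.DataAnnotations;

public class Building
{
    [Key]
    public int BuildingID { get; set; }

    [Required]
    [StringLength(100)] // the maximum length is an assumed business rule
    public string BuildingName { get; set; }

    public bool IsActive { get; set; }
}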
With many tables and associations, a DBML can become hard to read.
The disadvantage of creating model classes manually is that you have to do all the DBML stuff yourself (attributes and a lot of code).
If you want to create model classes manually, you can take a look at Entity Framework Code First or Fluent NHibernate. Both allow creating the model easily.
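For comparison, a minimal Entity Framework Code First sketch; the class mirrors the Building example above, and the context name is an assumption:

using System.Data.Entity; // Entity Framework 6 Code First

public class Building
{
    public int BuildingID { get; set; }
    public string BuildingName { get; set; }
    public bool IsActive { get; set; }
}

public class BuildingContext : DbContext
{
    // EF conventions map this to a Buildings table with BuildingID as the key.
    public DbSet<Building> Buildings { get; set; }
}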