C# Processing same object with different "processors" a flyweight pattern? - c#

I've been doing a lot of research on different design patterns and I'm trying to determine the correct way of doing this.
I have an image-uploading MVC app that I'm developing, which needs to process the image in several different ways, such as creating a thumbnail and saving a database record. Would the best way to approach this be via a flyweight pattern? Using this as an example:
var image = new Image();
List<IProcessor> processors = processorFactory.GetProcessors(ImageType.Jpeg);
foreach (IProcessor processor in processors)
{
    processor.Process(image);
}
I have a second part to this question as well. What if a processor has smaller, related "sub-processors"? An example that I have in my head would be a book generator.
I have a book generator
that has page generators
that has paragraph generators
that has sentence generators
Would this be a flyweight pattern as well? How would I handle the traversal of that tree?
EDIT
I asked this question below but I wanted to add it here:
All the examples that I've seen of the composite pattern seem to relate to handling values, while the flyweight pattern seems to deal with processing (or sharing) an object's state. Am I just reading too much into the examples? Would combining the patterns be the solution?

I can at least handle the second part of the question. To expand a tree (or a composite), use simple recursion.
void Recursion(TreeItem parent)
{
    // First call the same function for all the children.
    // This takes us all the way to the bottom of the tree;
    // the foreach loop won't execute when we're at the bottom.
    foreach (TreeItem child in parent.Children)
    {
        Recursion(child);
    }

    // When there are no more children (we're at the bottom),
    // finally perform the task you want. This works its way
    // up the entire tree, from the bottom-most items to the top.
    Console.WriteLine(parent.Name);
}
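Applied to your book example, the composite pattern gives every generator the same interface, and the traversal is exactly that recursion. A sketch, with illustrative names (Book here is just a placeholder for whatever state you pass down):

```csharp
// A composite of generators: the book generator holds page
// generators, which hold paragraph generators, and so on.
public interface IGenerator
{
    void Generate(Book book);
}

public class CompositeGenerator : IGenerator
{
    private readonly List<IGenerator> children = new List<IGenerator>();

    public void Add(IGenerator child) { children.Add(child); }

    // Recurse into every child -- the same shape as the
    // Recursion method shown for the tree.
    public virtual void Generate(Book book)
    {
        foreach (IGenerator child in children)
        {
            child.Generate(book);
        }
    }
}

public class SentenceGenerator : IGenerator
{
    // A leaf: no children, just its own work.
    public void Generate(Book book) { /* emit a sentence */ }
}
```

The caller only ever holds an IGenerator, so it doesn't care how deep the tree goes.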

What you're describing could have some flyweights representing each of those nested classes, but in that case it would be more of an implementation detail. In my experience, flyweights are usually called for at the architectural or implementation level, but rarely as an element of design.
Consider these types:
public interface IMyData {
    IdType MyId { get; }
    byte[] BlobData { get; }
    long SizeOfBlob { get; }
}

public class MyData : IMyData {
    public IdType MyId { get; private set; }
    public byte[] BlobData { get; set; }
    public long SizeOfBlob { get { return BlobData.LongLength; } }
}
In your multi-tiered application, this object needs to travel from the source database, to a manager's iPhone for approval based on the blob size, and then to an accounting system for billing. So instead of transporting the whole thing to the iPhone app, you substitute the flyweight:
public class MyDataFlyweight : IMyData {
    public MyDataFlyweight(IdType myId, long blobSize) {
        MyId = myId;
        SizeOfBlob = blobSize;
    }

    public IdType MyId { get; private set; }

    public byte[] BlobData {
        get { throw new NotImplementedException(); }
    }

    public long SizeOfBlob { get; private set; }
}
By having both implement IMyData, and by building the system against the interface rather than the concrete type (you did this, right?!), you can use MyDataFlyweight objects in the iPhone app and MyData objects in the rest of the system. All you have to do is initialize MyDataFlyweight with the correct blob size.
The architecture that calls for an iPhone app would dictate that a flyweight is used within the iPhone app.
In addition, consider the newer Lazy<T> class:
public class MyData : IMyData {
    public IdType MyId { get; private set; }

    // Created in the constructor, because a field initializer
    // cannot reference the instance member MyId.
    private readonly Lazy<byte[]> _blob;

    public MyData() {
        _blob = new Lazy<byte[]>(() => StaticBlobService.GetBlob(MyId));
    }

    public byte[] BlobData { get { return _blob.Value; } }
    public long SizeOfBlob { get { return BlobData.LongLength; } }
}
This is an example of using the flyweight purely as an implementation detail.

Separation of Concerns/Code Structuring in a class based environment (C# used as example)

I have always wondered what the best practice is for separating code in a class-based language. As an example, I made a project that handles API interaction with my web API. I want to know which option is the right one to go with, or whether there is a better suggestion.
Example 1
Project Files
Api.cs
DataTypes
Anime.cs
Episode.cs
Api.cs
public class Api
{
    public static async Task<List<Anime>> GetAnimesByKeyword(string keyword)
    {
        // Execute API request to the server
        return result;
    }

    public static async Task<List<Episode>> GetEpisodesByAnime(Anime anime)
    {
        // Execute API request to the server
        return result;
    }
}
DataTypes -> Anime.cs
public class Anime
{
    public string Name { get; set; }
    public string Summary { get; set; }
    // Other properties
}
DataTypes -> Episode.cs
public class Episode
{
    public string Name { get; set; }
    public DateTime ReleaseDate { get; set; }
    // Other properties
}
Or example 2
Project Files
Api.cs
DataTypes
Anime.cs
Episode.cs
Api.cs
public class Api
{
    // Nothing for now
}
DataTypes -> Anime.cs
public class Anime
{
    public static async Task<Anime> GetById(int id)
    {
        return result;
    }

    public string Name { get; set; }
    public string Summary { get; set; }
    // Other properties
}
DataTypes -> Episode.cs
public class Episode
{
    public static async Task<List<Episode>> GetEpisodesByAnime(Anime anime)
    {
        return result;
    }

    public string Name { get; set; }
    public DateTime ReleaseDate { get; set; }
    // Other properties
}
Which of these two is the preferred way of structuring the code, or is there a better way to do this? It might seem insignificant, but it does matter to me.
Thanks for helping me out!
In general, follow the Single Responsibility Principle. In practice this means you have simple objects that are data-only and more complex service classes that do work like loading from an external service or database.
Your second example mixes concerns AND binds the two classes together tightly (Episode now depends on Anime). You can also see how hard it is to decide which class the loading method belongs on: should it be anime.GetEpisodes() or Episode.GetEpisodesByAnime()? As the object graph gets more complex, this only escalates.
Later you may well want a different data transfer object for an entity. Having simple data-only objects makes it easy to add these and to use AutoMapper to convert between them.
But (on your first example) don't use static methods, because they make your service class harder to test. One service may depend on another (use dependency injection), and to test each in isolation you don't want static methods.
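For instance, the first example's Api class could drop the static modifiers and hide behind an interface, so tests can inject a fake. A sketch only; IAnimeApi and the injected HttpClient are my assumptions, not part of the original example:

```csharp
public interface IAnimeApi
{
    Task<List<Anime>> GetAnimesByKeyword(string keyword);
    Task<List<Episode>> GetEpisodesByAnime(Anime anime);
}

public class AnimeApi : IAnimeApi
{
    private readonly HttpClient client;

    // Injected, so a test can pass an HttpClient with a fake handler.
    public AnimeApi(HttpClient client)
    {
        this.client = client;
    }

    public async Task<List<Anime>> GetAnimesByKeyword(string keyword)
    {
        // Execute API request to the server
        return result;
    }

    public async Task<List<Episode>> GetEpisodesByAnime(Anime anime)
    {
        // Execute API request to the server
        return result;
    }
}
```

Consumers then depend on IAnimeApi, and the data-only Anime and Episode classes stay untouched.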

Neo4jClient does not add properties to a relationship

We have been trying to write a C# client that seeds a Neo4j instance with some nodes and relationships. We are facing problems when trying to create relationship properties.
Here is the code to create the relationship with the flags property:
var s = clientConnection.CreateRelationship(root, new RelationshipPrincipleToContent("SECURITY", rootFolder) { flags = "+W" });
Here is the relationship class:
public class RelationshipPrincipleToContent : Relationship,
    IRelationshipAllowingSourceNode<Principles>,
    IRelationshipAllowingTargetNode<Content>
{
    public string flags { get; set; }

    string RelationshipName;

    public RelationshipPrincipleToContent(NodeReference targetNode) : base(targetNode) {}

    public RelationshipPrincipleToContent(string RelationshipName, NodeReference targetNode)
        : base(targetNode)
    {
        this.RelationshipName = RelationshipName;
    }

    public override string RelationshipTypeKey
    {
        get { return RelationshipName; }
    }
}
When we look at the data in the data browser tab, there are no properties on the relationships. We have also created a relationship index.
What are we missing / doing wrong?
First, add a class (PayLoad.cs in this instance) that holds the relationship's data as a public string property.
public class PayLoad
{
    public string Comment { get; set; }
}
Update your relationship class to use this PayLoad class:
public class RelationshipPrincipleToContent : Relationship<PayLoad>,
    IRelationshipAllowingSourceNode<Principles>,
    IRelationshipAllowingTargetNode<Content>
{
    string RelationshipName;

    public RelationshipPrincipleToContent(string RelationshipName, NodeReference targetNode, PayLoad pl)
        : base(targetNode, pl)
    {
        this.RelationshipName = RelationshipName;
    }

    public override string RelationshipTypeKey
    {
        get { return RelationshipName; }
    }
}
Now just update your method call on the relationship class:
clientConnection.CreateRelationship(AllPrincipals, new RelationshipPrincipleToContent("SECURITY", rootFolder, new PayLoad() { Comment = "+R" }));
(Context: I lead the Neo4jClient project.)
Shaun's answer is correct, but dated.
The direction of both Neo4j and Neo4jClient is towards Cypher as a unified approach to everything you need to do.
This Cypher query:
START root=node(0), rootFolder=node(123)
CREATE root-[:SECURITY { flags: '+W' }]->rootFolder
looks like this in C#:
client.Cypher
    .Start(new { root = client.RootNode, rootFolder })
    .Create("root-[:SECURITY {security}]->rootFolder")
    .WithParam("security", new { flags = "+W" })
    .ExecuteWithoutResults();
Some notes:
Using Cypher for this type of stuff might look a bit more complex to start with, but it will grow better for you. For example, a simple switch from Create to CreateUnique will ensure you don't create the same relationship twice; that would be much harder with the procedural approach.
The non-Cypher wrappers in Neo4jClient are a bit old and clunky, and will not see any significant investment going forward.
The C# approach uses WithParam to ensure that everything gets encoded properly, and you can still pass in nice objects.
The C# approach uses WithParam to allow query plan caching.
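For example, the Create-to-CreateUnique switch mentioned above is a one-word change to the same query (sketch only, same client setup as before):

```csharp
// CreateUnique only creates the relationship if an identical one
// does not already exist, so re-running the seed is safe.
client.Cypher
    .Start(new { root = client.RootNode, rootFolder })
    .CreateUnique("root-[:SECURITY {security}]->rootFolder")
    .WithParam("security", new { flags = "+W" })
    .ExecuteWithoutResults();
```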

C# Application Architecture - EF5 & understanding the Service Layer

At work I've been thrown into developing a legacy enterprise application that is still in production and has been stalled for the last few months because of bad design and instability.
So we've started using EF5 and applying some design patterns / layers to our application.
What I'm struggling to understand is: what exactly should the service layer do in our case? Would it be over-architecting, or would it provide some benefit without adding unnecessary complexity?
Let me show you what we've got so far:
we've introduced EF (Code First with POCOs) to map our legacy database (works reasonably well)
we've created repositories for most of the stuff we need in our new data layer (specific implementations; I don't see any benefit regarding separation of concerns in using generic repos)
Now the specific case is about calculating prices for an article: either by getting a price from the article directly, or from the group the article is in (if there is no price specified). It gets a lot more complex, because there are also different pricelists involved (depending on the complete value of the order), and prices also depend on the customer, who can have special prices, etc.
So my main question is: who is responsible for getting the correct price?
My thoughts are:
The order has to know of the items it consists of. Those items, in turn, have to know what their price is; but the order must not know how to calculate an item's price, only that it has to sum their costs.
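Sketched roughly, the split I have in mind looks like this (IPriceCalculator and OrderItem are illustrative names, not yet in our code):

```csharp
public interface IPriceCalculator
{
    // Hides pricelists, price groups and customer specials.
    double GetPrice(OrderItem item);
}

public class Order
{
    private readonly List<OrderItem> items = new List<OrderItem>();
    private readonly IPriceCalculator calculator;

    public Order(IPriceCalculator calculator)
    {
        this.calculator = calculator;
    }

    public void Add(OrderItem item) { items.Add(item); }

    // The order only sums; how each price is found is not its business.
    public double Total()
    {
        return items.Sum(item => calculator.GetPrice(item));
    }
}
```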
Excerpt of my code at the moment:
ArticlePrice (POCO; mappings soon to be swapped to the Fluent API)
[Table("artikeldaten_preise")]
public class ArticlePrice : BaseEntity
{
    [Key]
    [Column("id")]
    public int Id { get; set; }

    [Column("einheit")]
    public int UnitId { get; set; }

    [ForeignKey("UnitId")]
    public virtual Unit Unit { get; set; }

    [Column("preisliste")]
    public int PricelistId { get; set; }

    [ForeignKey("PricelistId")]
    public virtual Pricelist Pricelist { get; set; }

    [Column("artikel")]
    public int ArticleId { get; set; }

    [ForeignKey("ArticleId")]
    public virtual Article Article { get; set; }

    public PriceInfo PriceInfo { get; set; }
}
Article Price Repository:
public class ArticlePriceRepository : CarpetFiveRepository
{
    public ArticlePriceRepository(CarpetFiveContext context) : base(context) {}

    public IEnumerable<ArticlePrice> FindByCriteria(ArticlePriceCriteria criteria)
    {
        var prices = from price in DbContext.ArticlePrices
                     where price.PricelistId == criteria.Pricelist.Id
                        && price.ArticleId == criteria.Article.Id
                        && price.UnitId == criteria.Unit.Id
                        && price.Deleted == false
                     select price;
        return prices.ToList();
    }
}

public class ArticlePriceCriteria
{
    public Pricelist Pricelist { get; set; }
    public Article Article { get; set; }
    public Unit Unit { get; set; }

    public ArticlePriceCriteria(Pricelist pricelist, Article article, Unit unit)
    {
        Pricelist = pricelist;
        Article = article;
        Unit = unit;
    }
}
PriceService (has a horrific code smell...)
public class PriceService
{
    private PricelistRepository _pricelistRepository;
    private ArticlePriceRepository _articlePriceRepository;
    private PriceGroupRepository _priceGroupRepository;

    public PriceService(PricelistRepository pricelistRepository,
        ArticlePriceRepository articlePriceRepository,
        PriceGroupRepository priceGroupRepository)
    {
        _pricelistRepository = pricelistRepository;
        _articlePriceRepository = articlePriceRepository;
        _priceGroupRepository = priceGroupRepository;
    }

    public double GetByArticle(Article article, Unit unit, double amount = 1,
        double orderValue = 0, DateTime dateTime = new DateTime())
    {
        var pricelists = _pricelistRepository.FindByDate(dateTime, orderValue);

        var articlePrices = new List<ArticlePrice>();
        foreach (var list in pricelists)
            articlePrices.AddRange(
                _articlePriceRepository.FindByCriteria(new ArticlePriceCriteria(list, article, unit)));

        double price = 0;
        double priceDiff = 0;

        foreach (var articlePrice in articlePrices)
        {
            switch (articlePrice.PriceInfo.Type)
            {
                case PriceTypes.Absolute:
                    price = articlePrice.PriceInfo.Price;
                    break;
                case PriceTypes.Difference:
                    priceDiff = priceDiff + articlePrice.PriceInfo.Price;
                    break;
            }
        }

        return (price + priceDiff) * amount;
    }

    public double GetByPriceGroup(PriceGroup priceGroup, Unit unit)
    {
        throw new NotImplementedException("not implemented yet");
    }

    // etc. -- you'll get the point that this approach might be completely WRONG
}
My final questions are:
How do I correctly model my problem? Is it correct that I am on my way to over-architecting my code?
What would my service layer look like? Would I rather have an ArticlePriceService, an ArticleGroupPriceService, etc.? But who would connect those pieces and calculate the correct price? Would that, for example, be the responsibility of an OrderItemService with a "GetPrice" method? But then again, the OrderItemService would have to know about the other services..
Please try to provide me with possible solutions regarding architecture, and which object/layer does what.
Feel free to ask me additional questions if you need more info!
You presented a simple scenario for which the repository itself might be sufficient.
Do you have more repositories?
Do you expect you application to grow, and have more repositories in use?
Having a service layer that abstracts the data layer is recommended and in use by most of the applications/examples I have seen, and the overhead is not that significant.
One reason for using services pops up when you would like to fetch data from several different repositories and then perform some kind of aggregation or manipulation on the data.
A service layer would then provide the manipulation logic, while the service consumer would not have to deal with several different repositories.
You should also think of situations where you might want to change more than one entity in one transaction (meaning more than one repository), saving the changes to the DB only when all update actions were successful.
That situation implies using the Unit of Work pattern, and will probably conclude in the use of a service layer, to enable proper unit testing.
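A minimal sketch of that shape, reusing the question's context and repository types (the wrapper class itself is my assumption, and I'm assuming PricelistRepository takes the context the same way ArticlePriceRepository does):

```csharp
// One DbContext shared by all repositories; SaveChanges commits
// everything changed through them in a single transaction.
public class UnitOfWork : IDisposable
{
    private readonly CarpetFiveContext context;

    public UnitOfWork(CarpetFiveContext context)
    {
        this.context = context;
        ArticlePrices = new ArticlePriceRepository(context);
        Pricelists = new PricelistRepository(context);
    }

    public ArticlePriceRepository ArticlePrices { get; private set; }
    public PricelistRepository Pricelists { get; private set; }

    // Nothing hits the database until Commit is called.
    public void Commit() { context.SaveChanges(); }

    public void Dispose() { context.Dispose(); }
}
```

A service then takes a UnitOfWork instead of individual repositories, and calls Commit once at the end of its operation.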
When I started with objects and architecture, my main problem was giving good names to classes.
To me, it seems your service should be called "ShopService" (or something equivalent). Then your method GetByArticle should be named GetPriceByArticle.
Naming the service for something bigger than just the price is more meaningful and would also address other issues (like the OrderPriceService you wonder about).
Maybe you can ask yourself: "What is the name of the page or window that interacts with this service?" Is there only one, or more? If more, what do they have in common?
This could help you figure out a good name for your service, and consequently the different methods to acquire what each consumer needs.
Tell me more. I will be pleased to help.

How to model entities to enforce a data models constraints at compile time?

I have the data model below, which constrains ItemTypes to a subset of Events. Each ItemType has a valid set of Events; this is constrained in the ItemEvent table. For example, a Video can be { played, stopped, paused }, an Image can be { resized, saved, or shared }, and so on.
What is the best way to reflect this constraint in the entity model, so that I get compile-time assurance that an Event used is valid for a particular Item? Specifically, I am refactoring the AddItemEventLog method:
public void AddItemEventLog(Item item, string ItemEvent)
{
    //
}
Obviously, this is a contrived example, just to illustrate: it allows a developer to pass in any ItemEvent string they desire. Even if I create an enumeration based on the ItemEvent resultset, there isn't anything in the entity model to prevent a developer from passing ItemEvent.Resize with an Item of type Video.
I have Item as the base class of Video, and I tried to override an enum, but now know that is not possible. I am less interested in checking the validity of the Event at runtime, as I already throw an exception when the DB raises an FK violation. I want to nip it in the bud at coding time if possible :)
I currently have classes modeled like this, but am open to any modifications:
// enums.cs
public enum ItemType : byte
{
    Video = 1,
    Image = 2,
    Document = 3
}

// item.cs
public class Item : BaseModel
{
    public int ItemId { get; set; }
    public ItemTypeLookup.ItemType ItemType { get; set; }
    public string ItemName { get; set; }
}

// video.cs
public class Video : Item
{
    public string Width { get; set; }
    public string Height { get; set; }
    public string Thumb { get; set; }
}
I think Code Contracts may be the only way to enforce something like this at compile time. Outside of compile-time checks, writing unit tests to ensure the correct behaviour is the next best thing!
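One compile-time alternative is to lean on generics instead: give each item subtype its own event enum and constrain the logging method so the pair has to match. A sketch only; the event enums and the generic base are illustrative, not your current model:

```csharp
// Each item subtype declares its own event enum, so the compiler
// rejects mismatched pairs such as (video, ImageEvent.Resized).
public enum VideoEvent { Played, Stopped, Paused }
public enum ImageEvent { Resized, Saved, Shared }

public abstract class Item<TEvent> where TEvent : struct
{
    public int ItemId { get; set; }
    public string ItemName { get; set; }
}

public class Video : Item<VideoEvent> { }
public class Image : Item<ImageEvent> { }

public static class EventLog
{
    // Only an event of the item's own TEvent type compiles here.
    public static void AddItemEventLog<TEvent>(Item<TEvent> item, TEvent itemEvent)
        where TEvent : struct
    {
        // write the log entry...
    }
}

// EventLog.AddItemEventLog(new Video(), VideoEvent.Paused);   // compiles
// EventLog.AddItemEventLog(new Video(), ImageEvent.Resized);  // compile error
```

The trade-off is that Item is no longer a single non-generic base type, which can complicate collections that mix item types.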

Domain modelling - Implement an interface of properties or POCO?

I'm prototyping a tool that will import files via a SOAP API into a web-based application, and have modelled what I'm trying to import via C# interfaces, so I can wrap the web app's model data in something I can deal with.
public interface IBankAccount
{
    string AccountNumber { get; set; }
    ICurrency Currency { get; set; }
    IEntity Entity { get; set; }
    BankAccountType Type { get; set; }
}

internal class BankAccount : IBankAccount
{
    private readonly SomeExternalImplementation bankAccount;

    BankAccount(SomeExternalImplementation bankAccount)
    {
        this.bankAccount = bankAccount;
    }

    // Property implementations
}
I then have a repository that returns collections of IBankAccount or whatever and a factory class to create BankAccounts for me should I need them.
My question is: is this approach going to cause me a lot of pain down the line, and would it be better to create POCOs? I want to put all of this in a separate assembly and have a complete separation of data access and business logic, simply because I'm dealing with a moving target regarding where the data will be stored online.
This is exactly the approach I use, and I've never had any problems with it. In my design, anything that comes out of the data access layer is abstracted as an interface (I refer to them as data transport contracts). In my domain model I then have static methods to create business entities from those data transport objects:
interface IFooData
{
    int FooId { get; set; }
}

public class FooEntity
{
    public static FooEntity FromDataTransport(IFooData data)
    {
        return new FooEntity(data.FooId, ...);
    }
}
It comes in quite handy where your domain model entities gather their data from multiple data contracts:
public class CompositeEntity
{
    public static CompositeEntity FromDataTransport(IFooData fooData, IBarData barData)
    {
        ...
    }
}
In contrast to your design, I don't provide factories to create concrete implementations of the data transport contracts; rather, I provide delegates to write the values and let the repository worry about creating the concrete objects:
public class FooDataRepository
{
    public IFooData Insert(Action<IFooData> insertSequence)
    {
        var record = new ConcreteFoo();
        insertSequence.Invoke(record as IFooData);
        this.DataContext.Foos.InsertOnSubmit(record); // Assuming Linq to Sql in this case..
        return record as IFooData;
    }
}
Usage:
IFooData newFoo = FooRepository.Insert(f =>
{
    f.Name = "New Foo";
});
Although a factory implementation is an equally elegant solution, in my opinion. To answer your question: in my experience with a very similar approach, I've never come up against any major problems, and I think you're on the right track here :)
