C# generic processors

I have a problem which takes a standard combination of inputs, however there are several algorithms ('processors') which can solve it. The output is a Boolean. Each processor is only valid for one particular scenario, and there will never be more than one processor which is valid.
Each processor determines whether it is valid by having some initial inputs supplied to it. If it is valid, it calculates some information based on those initial inputs and stores it, since that information is useful in the final process. Then, still only if it is valid, additional inputs are supplied to the processor and it returns an output.
If no processor is valid then a default answer is given.
So the algorithm is like this in pseudo code:
process(inputs)
  for each processor
    determine validity and get data
    if valid
      use data to output result
    end if
  end for
  output default result
end
Here is a C# example, which isn't syntactically valid, and it's just an example; in real life the inputs are more complex than strings and integers. The computation of the second input (int i in the contrived example) is executed repeatedly inside a loop, whereas the first input is only calculated once - hence the separation of determining whether a processor is valid from computing its result. As an alternative to using an IEnumerable, we could have an array or list of processors.
public class ProcessController
{
public static IEnumerable<Processor<X>> GetProcessors<X>() where X: ProcessorInfo
{
yield return new ProcessorA();
yield return new ProcessorB();
}
public static bool Process<X>(String s, int i) where X : ProcessorInfo
{
foreach (Processor<X> processor in GetProcessors<X>())
{
X x = (X) processor.GetInfoIfCanProcess(s);
if (x != null)
{
return processor.GetResult(x, i);
}
}
return false;
}
}
public abstract class Processor<T> where T: ProcessorInfo
{
public abstract T GetInfoIfCanProcess(String s);
public abstract bool GetResult(T info, int i);
}
public interface ProcessorInfo
{
bool IsValid();
}
public class ProcessorA: Processor<ProcessorA.ProcessorInfoA>
{
public class ProcessorInfoA: ProcessorInfo
{
public bool IsValid()
{
//do something!
}
}
public override ProcessorInfoA GetInfoIfCanProcess(string s)
{
//do something!
}
public override bool GetResult(ProcessorInfoA info, int i)
{
//do something!
}
}
public class ProcessorB : Processor<ProcessorB.ProcessorInfoB>
{
public class ProcessorInfoB : ProcessorInfo
{
public bool IsValid()
{
//do something!
}
}
public override ProcessorInfoB GetInfoIfCanProcess(string s)
{
//do something!
}
public override bool GetResult(ProcessorInfoB info, int i)
{
//do something!
}
}
I am getting syntax errors in the GetProcessors method: Cannot implicitly convert type Play.ProcessorA to Play.Processor<X>. How can I get around this?

You are trying to couple a processor class and its processor info class using generics, which is problematic. You end up with a loose coupling where you just cast the objects to what they should be, instead of making the generics ensure that the types are correct.
I suggest that you avoid that problem by storing the info in the class itself. The code that runs the processor doesn't have to know anything about the information that the processor uses, it only has to know if the processor is valid or not.
Return a boolean instead of a ProcessorInfo object, and keep the relevant data in the Processor object. If the processor is valid, it has the data that it needs. If not, it will go away along with the data that it got from the first step when you go on to try the next processor.
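A minimal sketch of that suggestion (the names TryPrepare/GetResult and the parsing check are my own, not from the question): the controller only ever sees a plain Processor and a bool.
public abstract class Processor
{
    // Returns true if this processor can handle the input; any derived data is stored internally.
    public abstract bool TryPrepare(string s);
    // Only called after TryPrepare returned true; uses the stored data.
    public abstract bool GetResult(int i);
}
public class ProcessorA : Processor
{
    private decimal _prepared; // whatever ProcessorInfoA used to hold
    public override bool TryPrepare(string s)
    {
        // hypothetical validity check that also computes the stored data
        return decimal.TryParse(s, out _prepared);
    }
    public override bool GetResult(int i) => _prepared > i;
}
public class ProcessController
{
    public static bool Process(string s, int i)
    {
        foreach (Processor processor in new Processor[] { new ProcessorA() /*, new ProcessorB() */ })
        {
            if (processor.TryPrepare(s))
                return processor.GetResult(i);
        }
        return false; // default answer
    }
}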

The simplest way to fix this is using OfType:
private static IEnumerable<object> GetProcessors()
{
yield return new ProcessorA();
yield return new ProcessorB();
}
public static IEnumerable<Processor<X>> GetProcessors<X>() where X: ProcessorInfo
{
return GetProcessors().OfType<Processor<X>>();
}
Since Processor<X> is invariant there is no common type you can use, and since X is chosen outside the method you need to use a dynamic type check.
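To illustrate the runtime filtering (a hypothetical call, reusing ProcessorA and ProcessorB from the question), only the processor whose generic argument matches the requested X comes back:
foreach (var processor in ProcessController.GetProcessors<ProcessorA.ProcessorInfoA>())
{
    Console.WriteLine(processor.GetType().Name); // "ProcessorA"; ProcessorB is filtered out because
                                                 // Processor<ProcessorInfoB> is not a Processor<ProcessorInfoA>
}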


Better way to solve my class hierarchy problem than Unsafe.As cast to a child class?

(I'm writing about matrices, but don't worry, this is not a maths question and presumes no maths knowledge.)
I have a Matrix class that has three fields
double[] Data;
int Rows;
int Columns;
and defines hundreds of mathematical operations.
Additionally, I have a SymmetricMatrix : Matrix subclass that has no instance fields of its own, that offers all of the operations above via inheritance, offers some additional operations (like EigenvalueDecomposition), and finally repeats some definitions with a new keyword to change the return type to SymmetricMatrix.
For example, if Matrix has
public Matrix MatrixMultiplication(Matrix otherMatrix)
public Matrix ScaledBy(double scalar)
then SymmetricMatrix would simply inherit the MatrixMultiplication but it would change the ScaledBy to return a SymmetricMatrix.
Rather than reimplementing ScaledBy , I define it via
// SymmetricMatrix.cs
public new SymmetricMatrix ScaledBy(double scalar) => base.ScaledBy(scalar).AsSymmetricMatrix();
// Matrix.cs
public SymmetricMatrix AsSymmetricMatrix() => Unsafe.As<SymmetricMatrix>(this);
(I'm using new instead of virtual+override for reasons that don't matter for the purposes of this question).
I found this approach to work surprisingly well, and it allows me to be super succinct in defining SymmetricMatrix.cs. The obvious downside is that it may exploit unsupported behavior (?) and that it confuses the debugger I'm using a lot (the runtime type of the result of the cast is Matrix which is not a subclass of its compile time type SymmetricMatrix, and yet, all the operations defined on SymmetricMatrix succeed because the data held by both classes is the same)
Questions
Do you foresee any problems with my approach that I haven't thought of?
Might my approach break with the next dotnet version?
Do you think there are cleaner ways of achieving what I want? I had a few ideas but none seem to work out cleanly. For example, I cannot encode which operations preserve Child classes in the parent class via
class Matrix<T> where T : Matrix
...
T ScaledBy(double other)
because whether or not an operation preserves a Child class is knowledge only the Child can have. For example, SymmetricPositiveDefiniteMatrix would NOT be preserved by ScaledBy.
Alternatively, I could use encapsulation over inheritance and define
class BaseMatrix
{
...lots of operations
}
...
class Matrix : BaseMatrix
{
private BaseMatrix _baseMatrix;
...lots of "OperationX => new Matrix(_baseMatrix.OperationX)"
}
class SymmetricMatrix : BaseMatrix
{
private BaseMatrix _baseMatrix;
...lots of "OperationX => preserving ? new SymmetricMatrix(_baseMatrix.OperationX) : new Matrix(_baseMatrix.OperationX);"
}
That's very sad to code up though, because I'd have to manually propagate every change I make to BaseMatrix to, at the moment, four extra classes, and users would have to manually cast SymmetricMatrixs to Matrix in lots of scenarios.
Finally, I could simply not offer subclasses and rather have flags like bool _isSymmetric and encode the operations that preserve symmetry/positivity/... by changing/setting the flags in all my methods. This is sad too, because :
Users would then be able to write Matrix.EigenvalueDecomposition() just to have this break at runtime with a You have to promise me that your matrix is symmetric because I only implemented EigenvalueDecomposition for that case error
It would be too easy to forget resetting a flag when it's not preserved and then accidentally running an operation that assumes symmetry (which BLAS does by simply ignoring half of the matrix)
I like being able to specify in the type system rather than the comments that, e.g., the matrix I return from CholeskyFactor is lower triangular (conventions differ and some may expect an upper triangular matrix)
Users wouldn't see which flags are currently set so they wouldn't know whether I use the "specialized" version of a given algorithm and likely end up using .SetSymmetric() unnecessarily all over the place just to be safe.
Unless I missed a requirement, you could implement what you want using generics
A (not so brief) example :
public abstract class BaseMatrix<TMatrix> where TMatrix : BaseMatrix<TMatrix>
{
// Replace with how you actually are storing your matrix values
protected object _valueStore;
// common initialization (if required), if not keep empty
protected BaseMatrix()
{
}
// necessary to copy the value store.
protected BaseMatrix(object valueStore)
{
this._valueStore = valueStore;
}
public virtual TMatrix ScaledBy(double scalar)
{
// implementation
return default;
}
}
public class Matrix : BaseMatrix<Matrix>
{
}
public class SymmetricMatrix : BaseMatrix<SymmetricMatrix>
{
public SymmetricMatrix() : base(){}
// allows to build from another matrix, could be internal
public SymmetricMatrix(object valueStore) : base(valueStore){}
public override SymmetricMatrix ScaledBy(double scalar)
{
return base.ScaledBy(scalar);
}
}
public class SymmetricPositiveDefiniteMatrix : BaseMatrix<SymmetricPositiveDefiniteMatrix>
{
// If we *know* the type is not the same after this method call, we mask with a new one returning the target type
// A custom cast operator will handle the conversion
public new SymmetricMatrix ScaledBy(double scalar)
{
return base.ScaledBy(scalar);
}
// Added to allow a cast between SymmetricPositiveDefiniteMatrix and SymmetricMatrix
// in this example, we keep the value store to not have to copy all the values
// Depending on project rules could be explicit instead
public static implicit operator SymmetricMatrix(SymmetricPositiveDefiniteMatrix positiveSymmetricMatrix)
{
return new SymmetricMatrix(positiveSymmetricMatrix._valueStore);
}
}
This makes the ScaledBy(double) method available in each type of matrix, returning the type of the matrix it was called on.
And it would comply with your requirement not to have to redefine most methods / properties in each class.
Obviously, you can override methods in children to either do more before or after the base() call or even define an entirely different implementation.
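For example, a caller keeps the static type it started with (a usage sketch; it assumes ScaledBy is actually implemented rather than returning default):
var matrix = new Matrix();
Matrix scaled = matrix.ScaledBy(3.0);                 // Matrix in, Matrix out
var symmetric = new SymmetricMatrix();
SymmetricMatrix scaledSym = symmetric.ScaledBy(2.0);  // SymmetricMatrix preserved
var spd = new SymmetricPositiveDefiniteMatrix();
SymmetricMatrix scaledSpd = spd.ScaledBy(2.0);        // masked method deliberately drops to SymmetricMatrix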
One shortcoming would be if you needed one type to inherit from a class inheriting from BaseMatrix<TMatrix>. You'd have no way of changing the generic type parameter.
Alternatively if this really doesn't match your needs and prefer to go the encapsulation route, you could have a look at source generators since the code seems to be quite repetitive.
As we were talking with colleagues about procedural programming, here's my 2 cents about your problem.
The underlying data structure is of (and keep its) specific form:
struct DataMatrix
{
public double[] Data;
public int Rows;
public int Columns;
}
As you have specific behaviour depending on the type, you can flag it in a specialized type instead of strong-typing it:
struct Matrix
{
public DataMatrix DataMatrix;
public bool isSymmetrical;
public bool isDefinite;
public bool isUpperTriangular;
}
Then, you may call your methods from one place using said flagged type:
// MatrixManager acts solely as a namespace for convenient method grouping
public static class MatrixManager
{
//Consider a void return for each method, as matrix.DataMatrix is(/may be) the only affected field
public static Matrix Multiplication(Matrix matrix, Matrix otherMatrix) => /*..multiply matrix.DataMatrix with otherMatrix.DataMatrix then return matrix.*/;
public static Matrix ScaledBy(Matrix matrix, double scalar) => /*..scale matrix.DataMatrix with scalar then return matrix.*/;
public static void SetSymmetrical(ref Matrix matrix) { if (!matrix.isSymmetrical) matrix.isSymmetrical = true; } // ref: Matrix is a struct, so without ref the flag would only be set on a copy
public static Matrix UnavailableOperationIfSymmetricalExample(Matrix matrix) => matrix.isSymmetrical ? matrix : EigenvalueDecomposition(matrix);
/*..hundreds of Matrix-based methods here..*/
public static Matrix EigenvalueDecomposition(Matrix matrix) { if (matrix.isSymmetrical) return matrix; /*..otherwise do decompose matrix.DataMatrix then return matrix.*/}
}
This way, you can specify the actual behaviour when calling a method with any flagged Matrix type (throw exception? call another method? return?), without having to bother with a behaviour-type tangled structure.
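A hypothetical call site for the sketch above:
var m = new Matrix { DataMatrix = new DataMatrix { Data = new double[9], Rows = 3, Columns = 3 } };
MatrixManager.SetSymmetrical(ref m);                     // flag it once it is known to be symmetric
Matrix scaled = MatrixManager.ScaledBy(m, 2.0);          // behaviour can branch on m.isSymmetrical
Matrix eig = MatrixManager.EigenvalueDecomposition(m);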
Hope that would suit your needs.
I think that the use of Unsafe.As() is indeed unsafe.
A major problem is this:
Matrix mat = new Matrix(/* whatever */);
SymmetricMatrix smat = mat.AsSymmetricMatrix();
Console.WriteLine(smat.GetType().FullName); // "Matrix"
So you have something declared as SymmetricMatrix but its underlying type is in fact Matrix. I'm sure you can imagine the sort of horrible problems that could cause...
This approach also requires that the base class knows about its derived classes - a big no-no.
One way to solve issues like this is to make sure you can clone your objects in a way that works with inheritance. Then you can clone the original objects and return them, with a cast if necessary.
For example:
public class Matrix
{
public Matrix()
{
// Whatever
}
// Copy constructor; used for cloning.
public Matrix(Matrix other)
{
_data = other._data.ToArray();
}
public virtual Matrix ScaledBy(double scalar)
{
var result = Clone();
result.ScaleBy(scalar);
return result;
}
protected void ScaleBy(double scalar)
{
for (int i = 0; i < _data.Length; ++i)
{
_data[i] *= scalar;
}
}
protected virtual Matrix Clone()
{
return new Matrix(this);
}
readonly double[] _data = new double[16]; // For illustration.
}
public class SymmetricMatrix: Matrix
{
public SymmetricMatrix()
{
// Whatever
}
// Copy constructor; used for cloning.
public SymmetricMatrix(SymmetricMatrix other): base(other)
{
// No new fields to copy.
// Any newly added fields should be copied here.
}
public override SymmetricMatrix ScaledBy(double scalar)
{
// This cast will work because the underlying type returned from
// base.ScaledBy() really is SymmetricMatrix
return (SymmetricMatrix)base.ScaledBy(scalar);
}
protected override SymmetricMatrix Clone()
{
return new SymmetricMatrix(this);
}
}
public class SymmetricPositiveDefiniteMatrix: Matrix
{
// Doesn't override ScaledBy() so SymmetricPositiveDefiniteMatrix.ScaledBy() will return a Matrix.
protected override SymmetricPositiveDefiniteMatrix Clone()
{
// If SymmetricMatrix ever adds new fields to be cloned, they should be cloned here.
return Unsafe.As<SymmetricPositiveDefiniteMatrix>(base.Clone());
}
}
Now code like this returns the expected types:
var smatrix = new SymmetricMatrix();
var sresult = smatrix.ScaledBy(1.0);
Console.WriteLine(sresult.GetType().FullName); // "SymmetricMatrix"
var spmatrix = new SymmetricPositiveDefiniteMatrix();
var spresult = spmatrix.ScaledBy(1.0);
Console.WriteLine(spresult.GetType().FullName); // "Matrix"
As an example of code where Unsafe.As() can cause strange results, consider this code:
using System;
using System.Runtime.CompilerServices;
namespace ConsoleApp1;
static class Program
{
public static void Main()
{
var myBase = new MyBaseClass();
var pretendDerived = myBase.AsMyDerivedClass();
// Given the definition of MyDerivedClass, what do you think this will print?
Console.WriteLine(pretendDerived.Name());
}
}
public class MyBaseClass
{
public MyDerivedClass AsMyDerivedClass() => Unsafe.As<MyDerivedClass>(this);
}
public class MyDerivedClass
{
readonly string _name = "TEST";
public string Name() => _name;
}
As the documentation for Unsafe.As() states:
The behavior of Unsafe.As(o) is only well-defined if the typical
"safe" casting operation (T)o would have succeeded. Use of this API to
circumvent casts that would otherwise have failed is unsupported and
could result in runtime instability.
Thanks to all the creative answers, all of which I drew inspiration from.
I ended up combining encapsulation and inheritance, using a data-only TwoDimensionalArray and inheritance from a method-only (plus a single TwoDimensionalArray field) Matrix class:
class TwoDimensionalArray
{
    internal double[] Data;
    internal int Rows;
    internal int Columns;
    internal double Scale;

    internal TwoDimensionalArray(double[] data, int rows, int columns, double scale = 1)
    {
        Data = data;
        Rows = rows;
        Columns = columns;
        Scale = scale;
    }
}
public class Matrix
{
    internal readonly TwoDimensionalArray _array;

    public int Rows => _array.Rows;
    public int Columns => _array.Columns;

    public static Matrix Ones(int rows, int columns)
    {
        var data = new double[rows * columns];
        data.AsSpan().Fill(1);
        return new Matrix(new TwoDimensionalArray(data, rows, columns, 1));
    }

    internal Matrix(TwoDimensionalArray array)
    {
        _array = array;
    }

    public Matrix ScaledBy(double scalar)
    {
        var output = Clone();
        output.ScaleBy(scalar);
        return output;
    }

    public void ScaleBy(double scalar)
    {
        _array.Scale *= scalar;
    }

    public Matrix Clone()
    {
        var newData = new double[_array.Data.Length];
        _array.Data.AsSpan().CopyTo(newData);
        return new Matrix(new TwoDimensionalArray(newData, Rows, Columns, _array.Scale));
    }
}
public class SymmetricMatrix : Matrix
{
    internal SymmetricMatrix(TwoDimensionalArray array) : base(array) { }

    public new SymmetricMatrix ScaledBy(double scalar) => base.ScaledBy(scalar).AsSymmetricMatrix();

    public (Matrix, double[]) EigenvalueDecomposition()
    {
        //implementation
    }

    public new SymmetricMatrix Clone() => base.Clone().AsSymmetricMatrix();
}
public class SymmetricPositiveDefiniteMatrix : SymmetricMatrix
{
    internal SymmetricPositiveDefiniteMatrix(TwoDimensionalArray array) : base(array) { }

    public Matrix CholeskyFactor()
    {
        //implementation
    }

    public new SymmetricPositiveDefiniteMatrix Clone() => base.Clone().AsSymmetricPositiveDefiniteMatrix();
}
public static class MatrixExt
{
    public static SymmetricMatrix AsSymmetricMatrix(this Matrix matrix)
    {
        return new SymmetricMatrix(matrix._array);
    }

    public static SymmetricPositiveDefiniteMatrix AsSymmetricPositiveDefiniteMatrix(this Matrix matrix)
    {
        return new SymmetricPositiveDefiniteMatrix(matrix._array);
    }
}
Note that the result of .AsSymmetricMatrix() is a view, not a clone (similar to, e.g., myArray.AsSpan() being a view of an array, not a copy). I needed TwoDimensionalArray to be a class, not a struct, so that all changes to the view, including rescaling and reshaping, are reflected in the original.
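A usage sketch of the distinction (assuming the constructors above): the As...() conversions share storage, while Clone()/ScaledBy() copy it.
var m = Matrix.Ones(3, 3);
SymmetricMatrix view = m.AsSymmetricMatrix(); // view: wraps the same TwoDimensionalArray instance
view.ScaleBy(2.0);                            // in-place scale, visible through m as well
SymmetricMatrix copy = view.ScaledBy(0.5);    // ScaledBy clones first, so m and view are untouched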

Why honest function example in C# still not being honest?

from this reference : http://functionalprogrammingcsharp.com/honest-functions
I have learned to be more honest when defining methods/functions in C#.
It says to prefer pure functions so that the function always gives exactly the return type stated in its signature.
However when I try to apply it:
int Divide(int x, int y)
{
return x / y;
}
From the website:
The signature states that the function accepts two integers and returns another integer. But this is not the case in all scenarios. What happens if we invoke the function like Divide(1, 0)? The function implementation doesn't abide by its signature, throwing DivideByZero exception. That means this function is also "dishonest". How can we make this function an honest one? We can change the type of the y parameter (NonZeroInteger is a custom type which can contain any integer except zero):
int Divide(int x, NonZeroInteger y)
{
return x / y.Value;
}
I'm not sure what is the implementation of NonZeroInteger, they don't seem to give any implementation of NonZeroInteger in the website, should it check for 0 inside that class? And
I'm pretty sure if I call Divide(1, null) it will still show an error, thus making the function not honest.
Why honest function example in C# still not being honest?
Taking the example you've posted, and having read the link, if you want to make the function "honest" then you don't really need to create a new type, you could just implement the Try pattern:
bool TryDivide(int x, int y, out int result)
{
if(y != 0)
{
result = x / y;
return true;
}
result = 0;
return false;
}
This function basically fulfills the "honest" principle. The name says it will try to do division, and the resulting bool indicates whether it was successful.
You could create a struct NonZeroInteger but you're going to have to write a lot of code around it to make it act like a regular numeric type, and you'll probably come full circle. For example, what if you pass 0 to the NonZeroInteger constructor? Should it fail? Is that honest?
Also, struct types always have a default constructor, so if you're wrapping an int it's going to be awkward to avoid it being set to 0.
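Typical usage then follows the familiar int.TryParse shape:
if (TryDivide(10, 2, out int quotient))
{
    Console.WriteLine(quotient); // 5
}
else
{
    Console.WriteLine("cannot divide by zero");
}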
To make it honest, define a new data structure and check the status.
enum Status { OK, NAN }
class Data
{
public int Value { get; set; }
public Status Status { get; set; }
public static Data operator /(Data l, Data r)
{
if (r.Value == 0)
{
// Value can be set to any number, here I choose 0.
return new Data { Value = 0, Status = Status.NAN };
}
return new Data { Value = l.Value / r.Value, Status = Status.OK };
}
public override string ToString()
{
return $"Value: {Value}, Status: {Enum.GetName(Status.GetType(), Status)}";
}
}
class Test
{
static Data Divide(Data left, Data right)
{
return left / right;
}
static void Main()
{
Data left = new Data { Value = 1 };
Data right = new Data { Value = 0 };
Data output = Divide(left, right);
Console.WriteLine(output);
}
}
The notion of "honest function" still has room for interpretation, and I don't want to debate about it here, would be more opinion than actual useful answer.
To specifically answer your example, you could declare NonZeroInteger as a ValueType, with struct instead of class.
A value type is non-nullable (except if you explicitly specify the nullable version with a ?). No null problem in this case. By the way, int is an example of a value type (it's an alias for System.Int32, to be exact).
As some have pointed out, it could lead to other difficulties (a struct always has a default constructor that initializes all fields to their defaults, and the default for an int is 0...)
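To make that concrete, here is a minimal struct sketch (my own illustration, not from the article); the default instance quietly reintroduces the zero case:
public readonly struct NonZeroInteger
{
    public int Value { get; }
    public NonZeroInteger(int value)
    {
        if (value == 0) throw new ArgumentException("Value must not be zero.", nameof(value));
        Value = value;
    }
}
// new NonZeroInteger(5).Value   == 5
// default(NonZeroInteger).Value == 0  <- the "dishonest" zero sneaks back in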
For a mid-experienced programmer, this kind of example does not need to be explicitly implemented in the article to be understood on principle.
However, if you are unsure about it, it would definitely be a good programming learning exercise, I strongly encourage you to implement it yourself! (And create unit tests to demonstrate that your function has no "bug", by the way)
This NonZeroInteger is just a "symbol" which represents the idea, not a concrete implementation.
Surely, the author could provide an implementation of such a construct, but its name serves just right for the sake of the article.
A possible implementation might be:
public class NonZeroInteger
{
public int Value { get; set; }
public NonZeroInteger(int value)
{
if( value == 0 ) throw new ArgumentException("Argument passed is zero!");
Value = value;
}
}
But it's just pushing the dishonesty somewhere else (in terms of the article), because a constructor should return an object, not throw an exception.
IMO, honesty is not achievable, because it's just moving dishonesty somewhere else, as shown in this example.
After reading it thoroughly a lot of times..
I found that his second option on the website is honest, and the first one is wrong.
int? Divide(int x, int y)
{
if (y == 0)
return null;
return x / y;
}
Edit: I got the idea from another article, basically mimicking the F# approach, something like this:
Option<int> Divide(int x, int y)
{
if (y == 0)
return Option<int>.CreateEmpty();
return Option<int>.Create(x / y);
}
public class Option<T> : IEnumerable<T>
{
private readonly T[] _data;
private Option(T[] data)
{
_data = data;
}
public static Option<T> Create(T element)
{
return new Option<T>(new T[] { element });
}
public static Option<T> CreateEmpty()
{
return new Option<T>(new T[0]);
}
public IEnumerator<T> GetEnumerator()
{
return ((IEnumerable<T>) _data).GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public void Match(Action<T> onSuccess, Action onError) {
if(_data.Length == 0) {
onError();
} else {
onSuccess(_data[0]);
}
}
}
Ref: https://www.matheus.ro/2017/09/26/design-patterns-and-practices-in-net-option-functional-type-in-csharp/
to call:
public void Main() {
Option<int> result = Divide(1,0);
result.Match(
x => Console.WriteLine(x),
() => Console.WriteLine("divided by zero")
);
}
I just want to mention that NonZeroInteger can definitely be implemented honestly using a variation on Peano numbers:
class NonZeroInteger
{
/// <summary>
/// Creates a non-zero integer with the given value.
/// (This is private, so you don't have to worry about
/// anyone passing in 0.)
/// </summary>
private NonZeroInteger(int value)
{
_value = value;
}
/// <summary>
/// Value of this instance as plain integer.
/// </summary>
public int Value
{
get { return _value; }
}
private readonly int _value;
public static NonZeroInteger PositiveOne = new NonZeroInteger(1);
public static NonZeroInteger NegativeOne = new NonZeroInteger(-1);
/// <summary>
/// Answers a new non-zero integer with a magnitude that is
/// one greater than this instance.
/// </summary>
public NonZeroInteger Increment()
{
var newValue = _value > 0
? _value + 1 // positive number gets more positive
: _value - 1; // negative number gets more negative
return new NonZeroInteger(newValue); // can never be 0
}
}
The only tricky part is that I've defined Increment so that it works with both positive and negative integers. You can create any integer value you want except zero, and no exceptions are ever thrown, so this class is totally honest. (I'm ignoring overflow for now, but I don't think it would be a problem.)
Yes, it requires you to increment by one repeatedly to build large integers, which is extremely inefficient, but that's OK for a toy class like this one. There are probably other honest implementations that would be more efficient (e.g. using a uint as an offset from +1 or -1), but I'll leave that as an exercise for the reader.
You can test it like this:
class Test
{
static int Divide(int x, NonZeroInteger y)
{
return x / y.Value;
}
static void Main()
{
var posThree = NonZeroInteger.PositiveOne
.Increment()
.Increment();
Console.WriteLine(Divide(7, posThree));
var negThree = NonZeroInteger.NegativeOne
.Increment()
.Increment();
Console.WriteLine(Divide(7, negThree));
}
}
Output is:
2
-2
Honestly this is, IMO, overkill, but if I were to write an "honest" method, I would do something like this instead of creating an entire new class. This is not a recommendation, and it could easily cause issues in your code later. The best way to handle this, IMO, is to use the function and catch the exception outside of the method. This is an "honest" method in the sense that it always returns an integer, but it could return false values.
int Divide(int x, int y)
{
try
{
return x / y;
}
catch (DivideByZeroException)
{
return 0;
}
}

Multiple functions with different parameters in c#

I would like to ask what's the best way to handle the situation below.
I would like to call a populate function for different instances with different parameters. I implemented the populate function in each inheriting class, but I don't know the best way to use it hundreds of times.
(E.g. index would range over the total number of countries in the world.)
public enum TCountry
{
tAustralia,
tUnitedKingdom,
.. etc..
}
public enum TCity
{
tSydney,
tLondon,
... etc..
}
public int ProcessData ( string population, int index)
{
switch (index)
{
case 0:
TypeAust aus = new TypeAust();
return aus.populate( population, tAustralia, tSydney);
// Different calculation Sydney -Aus
DisplayAus(); // Display for Sydney - Aus
case 1:
TypeUK uk = new TypeUK();
return uk.populate( population, tUnitedKingdom, tLondon);
// Different calculation for London - UK
DisplayUK(); // Display for London - UK
....
... etc..
}
}
Thanks in Advance
I would recommend going with a different design. Rather than working with that case statement, put all your types in a collection and call populate without knowing what specific type you're working with.
List<TypeCountry> countries = new List<TypeCountry>() { new TypeUK(), new TypeAus(), new TypeUS() /* etc. */ };
//use this list all throughout the program
Instead of your switch statement you can just do:
return countries[index].Populate(/* args go here */);
Then you can do other things like:
int worldPop = countries.Aggregate(0, (total, c) => total + c.Populate(/* args */));
In general you need to stop treating each type as if it's different from the next. Move your logic into the base class, get rid of code that has to refer to the type by name or requires specific checks for which inheriting type it is. For almost all cases you should be able to have a collection of the base class's type and pass it around and work on it without ever knowing what type the specific instance you're dealing with is. If that's not the case then you're doing inheritance wrong. If you have that big case statement then there is no point in even using inheritance because as far as I can tell you're not getting any of its benefits.
I would have to see more of your actual needs and architecture but you could look into generics.
public int ProcessData<T>(string population) where T : BaseCountry, new() {
var country = new T();
Display(country);
return country.Populate(population);
}
public void Display<T>(T country) where T : BaseCountry { ... }
You would use ProcessData like:
ProcessData<TypeAust>("99");
Your display method would be generic too. This way, your process data method is always constrained to work with anything that implements BaseCountry. BaseCountry would define an abstract or virtual Populate() method.
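For reference, the constraint assumes a base type roughly like this (a sketch; the question never shows one):
public abstract class BaseCountry
{
    // Each country type supplies its own calculation.
    public abstract int Populate(string population);
}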
You could do something like this so the logic is broken out into different classes:
public enum TCountry
{
tAustralia,
tUnitedKingdom
}
public enum TCity
{
tSydney,
tLondon
}
public abstract class TypeCountry
{
public abstract int Populate(string population);
}
public class TypeAust : TypeCountry
{
public override int Populate(string population)
{
// do your calculation with tAustralia, tSydney...
}
}
public class TypeUK: TypeCountry
{
public override int Populate(string population)
{
// do your calculation with tUnitedKingdom, tLondon...
}
}
public static class TypeCountryFactory
{
public static TypeCountry GetCountry(TCountry country)
{
switch (country)
{
case TCountry.tAustralia:
return new TypeAust();
case TCountry.tUnitedKingdom:
return new TypeUK();
default:
throw new ArgumentOutOfRangeException(nameof(country));
}
}
}
public int ProcessData(string population, TCountry country)
{
TypeCountry typeCountry = TypeCountryFactory.GetCountry(country);
return typeCountry.Populate(population);
}

Does C#/CLR contain a mechanism for marking the return values of properties as read-only / immutable?

I've been looking around, and so far haven't managed to find a good way to do this. It's a common problem, I'm sure.
Suppose I have the following:
class SomeClass : IComparable
{
private int myVal;
public int MyVal
{
get { return myVal; }
set { myVal = value; }
}
public int CompareTo(object other) { /* implementation here */ }
}
class SortedCollection<T>
{
private T[] data;
public T Top { get { return data[0]; } }
/* rest of implementation here */
}
The idea being, I'm going to implement a binary heap, and rather than only support Insert() and DeleteMin() operations, I want to support "peeking" at the highest (or lowest, as the case may be) priority value on the stack. Never did like Heisenberg, and that whole "you can't look at things without changing them" Uncertainty Principle. Rubbish!
The problem, clearly, is that the above provides no means to prevent calling code from modifying MyVal (assuming SortedCollection<SomeClass>) via the Top property, which operation has the distinct possibility of putting my heap in the wrong order. Is there any way to prevent modifications from being applied to the internal elements of the heap via the Top property? Or do I just use the code with a warning: "Only stable if you don't modify any instances between the time they're inserted and dequeue'd. YMMV."
To answer your question: No, there's no way to implement the kind of behavior you want - as long as T is of reference type (and possibly even with some value-types)
You can't really do much about it. As long as you provide a getter, calling code can modify the internal contents of your data depending on the accessibility of said data (i.e. on properties, fields, and methods).
class SomeClass : IComparable
{
private int myVal;
public int MyVal
{
get { return myVal; }
set { myVal = value; }
}
public int CompareTo(object other) { /* implementation here */ }
}
class SortedCollection<T>
{
private T[] data;
public T Top { get { return data[0]; } }
/* rest of implementation here */
}
//..
// calling code
SortedCollection<SomeClass> col;
col.Top.MyVal = 500; // you can't really prevent this
NOTE What I mean is you can't really prevent it in the case of classes that you don't control. In the example, like others have stated you can make MyVal's set private or omit it; but since SortedCollection is a generic class, you can't do anything about other people's structures.
You can have a readonly property (that is, a property with only a getter):
private int myVal;
public int MyVal { get { return myVal; } }
But be careful: this may not always work how you expect. Consider:
private List<int> myVals;
public List<int> MyVals { get { return myVals; } }
In this case, you can't change which List the class uses, but you can still call that List's .Add(), .Remove(), etc methods.
Your properties don't have to have the same accessibility for get/set. This covers you for anything that returns a value type (typically structs that only contain value types) or immutable reference types.
public int MyVal
{
get { return myVal; }
private set { myVal = value; }
}
For mutable reference types, you have other options, such as returning Clone()s or using ReadOnlyCollection<T> to keep the caller from changing them:
private List<int> data;
public IList<int> Data
{
get { return new ReadOnlyCollection<int>(this.data); }
}
Only implement getters for your properties and modify the collection by having add/remove methods
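A minimal sketch of that idea (hypothetical member names); note it protects the collection itself, while mutable elements can still be changed, as other answers point out:
class SortedCollection<T> where T : IComparable
{
    private readonly List<T> data = new List<T>();
    // Read access only; callers cannot replace or resize the underlying list.
    public IReadOnlyList<T> Items => data;
    public T Top => data[0];
    // All mutation goes through methods that maintain the ordering invariant.
    public void Insert(T item) { /* insert at the correct position */ }
    public bool Remove(T item) => data.Remove(item);
}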
I understand your problem now. I think this should work:
class SortedCollection<T> where T: ICloneable
{
private T[] data;
public T Top
{
get
{
T ret = (T)data[0].Clone();
return ret;
}
}
/* rest of implementation here */
}
The ICloneable constraint ensures that the type parameter implements the ICloneable interface. (if this is acceptable)

C#: Avoiding Bugs caused by not Overriding ToString

I find the following bug occurring far too often in my code and wondered if anyone knows some good strategies to avoid it.
Imagine a class like this:
public class Quote
{
public decimal InterestRate { get; set; }
}
At some point I create a string that utilises the interest rate, like this:
public string PrintQuote(Quote quote)
{
return "The interest rate is " + quote.InterestRate;
}
Now imagine at a later date I refactored the InterestRate property from a decimal to its own class:
public class Quote
{
public InterestRate InterestRate { get; set; }
}
... but say that I forgot to override the ToString method in the InterestRate class. Unless I carefully looked for every usage of the InterestRate property I would probably never notice that at some point it is being converted to a string. The compiler would certainly not pick this up. My only chance of saviour is through an integration test.
The next time I call my PrintQuote method, I would get a string like this:
"The interest rate is Business.Finance.InterestRate".
Ouch. How can this be avoided?
By creating an override of ToString in the InterestRate class.
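For example (a sketch, assuming InterestRate simply wraps a decimal):
public class InterestRate
{
    public decimal Value { get; set; }
    public override string ToString() => Value.ToString();
}
With that override in place, the string concatenation in PrintQuote prints the rate again instead of the type name.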
The way to prevent this kind of problem is to have a unit test for absolutely all your class members, which therefore includes your PrintQuote(Quote quote) method:
[TestMethod]
public void PrintQuoteTest()
{
var quote = new Quote();
quote.InterestRate = 0.05M;
Assert.AreEqual(
"The interest rate is 0.05",
PrintQuote(quote));
}
In this case, unless you defined an implicit conversion between your new InterestRate class and System.Decimal, this unit test would actually no longer compile. But that would definitely be a signal! And if you did define an implicit conversion between your InterestRate class and System.Decimal, but forgot to override the ToString method, then this unit test would compile, but would (correctly) fail at the Assert.AreEqual() line.
The need for having a unit test for absolutely every class member cannot be overstated.
Creating an override of ToString is just one of those things you do for most, if not all, classes. Certainly for all "value" classes.
Note that ReSharper will generate a lot of the boilerplate code for you. From:
public class Class1
{
public string Name { get; set; }
public int Id { get; set; }
}
The result of running Generate Equality Members, Generate Formatting Members and Generate Constructor is:
public class Class1 : IEquatable<Class1>
{
public Class1(string name, int id)
{
Name = name;
Id = id;
}
public bool Equals(Class1 other)
{
if (ReferenceEquals(null, other))
{
return false;
}
if (ReferenceEquals(this, other))
{
return true;
}
return Equals(other.Name, Name) && other.Id == Id;
}
public override string ToString()
{
return string.Format("Name: {0}, Id: {1}", Name, Id);
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj))
{
return false;
}
if (ReferenceEquals(this, obj))
{
return true;
}
if (obj.GetType() != typeof (Class1))
{
return false;
}
return Equals((Class1) obj);
}
public override int GetHashCode()
{
unchecked
{
return ((Name != null ? Name.GetHashCode() : 0)*397) ^ Id;
}
}
public static bool operator ==(Class1 left, Class1 right)
{
return Equals(left, right);
}
public static bool operator !=(Class1 left, Class1 right)
{
return !Equals(left, right);
}
public string Name { get; set; }
public int Id { get; set; }
}
Note there is one bug: it should have offered to create a default constructor. Even ReSharper can't be perfect.
Not to be a jerk but write a test case each time you create a class. It is a good habit to get into and avoids oversights for you and others participating in your project.
Well, as others have said, you just have to do it. But here are a couple of ideas to help yourself make sure you do it:
1) use a base object for all of your value classes that overrides ToString and, say, throws an exception (see the sketch after this list). This will help remind you to override it again.
2) create a custom rule for FxCop (the free Microsoft static code analysis tool) to check for ToString methods on certain types of classes. How to determine which types of classes should override ToString is left as an exercise for the student. :)
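A sketch of idea 1 (a hypothetical base class, not an existing tool):
// Any value class that forgets to override ToString() now fails fast
// instead of silently printing its type name.
public abstract class ValueObjectBase
{
    public override string ToString() =>
        throw new NotImplementedException(GetType().Name + " must override ToString()");
}
public class InterestRate : ValueObjectBase
{
    public decimal Value { get; set; }
    public override string ToString() => Value.ToString(); // forgetting this is now caught at first use
}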
In the case where ToString is called on something statically typed as an InterestRate, as in your example, or in certain related cases where an InterestRate is cast to Object and then immediately used as a parameter to something like string.Format, you could conceivably detect the problem with static analysis. You could search for a custom FxCop rule that approximates what you want, or write one of your own.
Note that it will always be possible to devise a sufficiently dynamic call pattern that it breaks your analysis, probably not even a very complicated one ;), but catching the lowest-hanging fruit should be easy enough.
That said, I agree with some of the other commenters that thorough testing is probably the best approach to this specific problem.
For a very different perspective, you could defer all ToString'ing to a separate concern of your application. StatePrinter (https://github.com/kbilsted/StatePrinter) is one such API where you can use the defaults or configure depending on types to print.
var car = new Car(new SteeringWheel(new FoamGrip("Plastic")));
car.Brand = "Toyota";
then print it
StatePrinter printer = new StatePrinter();
Console.WriteLine(printer.PrintObject(car));
and you get the following output
new Car() {
StereoAmplifiers = null
steeringWheel = new SteeringWheel()
{
Size = 3
Grip = new FoamGrip()
{
Material = ""Plastic""
}
Weight = 525
}
Brand = ""Toyota"" }
and with the IValueConverter abstraction you can define how types are printed, and with the FieldHarvester you can define which fields are to be included in the string.
Frankly, the answer to your question is that your initial design was flawed. First, you exposed a property as a primitive type. Some believe this is wrong. After all, your code allows this ...
var interestSquared = quote.InterestRate * quote.InterestRate;
The problem with that is, what is the unit of the result? Interest^2? The second problem with your design is that you rely on an implicit ToString() conversion. Problems with relying on implicit conversion are more well known in C++ (for example), but as you point out, can bite you in C# as well. Perhaps if your code originally had ...
return "The interest rate is " + quote.InterestRate.ToString();
... you would have noticed it in the refactor. The bottom line is that if you have issues in your original design, they might be caught in a refactor and they might not. The best bet is to not introduce them in the first place.
