I am attempting to apply the Strategy pattern to a particular situation, but am having an issue with how to avoid coupling each concrete strategy to the context object providing data for it. The following is a simplified case of a pattern that occurs in a few different ways, but should be handled in a similar way.
We have an object Acquisition that provides data relevant to a specific frame of time - basically a bunch of external data collected using different pieces of hardware. It's already too large because of the amount of data it contains, so I don't want to give it any further responsibility. We now need to take some of this data and, based on some configuration, send a corresponding voltage to a piece of hardware.
So, imagine the following (much simplified) classes:
class Acquisition
{
public Int32 IntegrationTime { get; set; }
public Double Battery { get; set; }
public Double Signal { get; set; }
}
interface IAnalogOutputter
{
double getVoltage(Acquisition acq);
}
class BatteryAnalogOutputter : IAnalogOutputter
{
public double getVoltage(Acquisition acq)
{
return acq.Battery;
}
}
Now, every concrete strategy class has to be coupled to my Acquisition class, which is also one of the most likely classes to be modified since it's core to our application. This is still an improvement over the old design, which was a giant switch statement inside the Acquisition class. Each type of data may have a different conversion method (while Battery is a simple pass-through, others are not nearly that simple), so I feel the Strategy pattern or something similar should be the way to go.
I will also note that in the final implementation, IAnalogOutputter would be an abstract class instead of an interface. These classes will be in a list that is configurable by the user and serialized to an XML file. The list must be editable at runtime and remembered, so Serializable must be part of our final solution. In case it makes a difference.
How can I ensure each implementation class gets the data it needs to work, without tying it to one of my most important classes? Or am I approaching this sort of problem in the completely wrong manner?
Strategy Pattern encapsulates a - usually complex - operation/calculation.
The voltage you want to return is dependent on:
- pieces of configuration
- some of the acquisition data
So I would put these into another class and pass it to strategy implementors.
Also, in terms of serialisation, you do not have to serialise the strategy classes themselves, perhaps only their name or type name.
UPDATE
Well, it seems that your implementations each need only one piece of the acquisition data. That is a bit unusual for a strategy pattern - but I do not believe it fits Visitor better, so Strategy is fine. I would create a class that exposes the acquisition data as a property (or perhaps inherits from it), in addition to whatever configuration the implementors need.
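For illustration only, a rough sketch of that idea (AnalogOutputContext and OutputConfiguration are hypothetical names, not from the question) might look like this:
class OutputConfiguration // hypothetical: whatever per-output settings the user configures
{
    public double Scale { get; set; }
}

class AnalogOutputContext
{
    public Acquisition Acquisition { get; private set; }
    public OutputConfiguration Configuration { get; private set; }

    public AnalogOutputContext(Acquisition acquisition, OutputConfiguration configuration)
    {
        Acquisition = acquisition;
        Configuration = configuration;
    }
}

// Strategies would then accept the context instead of Acquisition directly:
// double getVoltage(AnalogOutputContext context);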
One thing you could do is use factory methods to construct your Strategies. Your individual strategies can take in their constructor only the individual data elements they need, and the factory method is the only thing that needs to know how to fill in that data given an Acquisition object. Something like this:
public class OutputterFactory
{
public static IAnalogOutputter CreateBatteryAnalogOutputter(Acquisition acq)
{
return new BatteryAnalogOutputter(acq.Battery);
}
}
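For that factory approach to work, the concrete strategy would be reshaped to hold its value rather than read it from Acquisition. A sketch of that variant (assuming the interface changes to a parameterless getVoltage in this design) could be:
class BatteryAnalogOutputter : IAnalogOutputter
{
    private readonly double _battery;

    public BatteryAnalogOutputter(double battery)
    {
        _battery = battery;
    }

    // In this variant the interface no longer needs the Acquisition parameter.
    public double getVoltage()
    {
        return _battery;
    }
}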
Ok, I hate to not give someone else the credit here, but I found a hybrid solution that works very well for my purposes. It serializes perfectly, and greatly simplifies the addition of new output types. The key was a single interface, IOutputValueProvider. Also note how easily this pattern handles the retrieval of varying ways of storing the data (such as a Dictionary instead of a parameter).
interface IOutputValueProvider
{
Double GetBattery();
Double GetSignal();
Int32 GetIntegrationTime();
Double GetDictionaryValue(String key);
}
interface IAnalogOutputter
{
double getVoltage(IOutputValueProvider provider);
}
class BatteryAnalogOutputter : IAnalogOutputter
{
public double getVoltage(IOutputValueProvider provider)
{
return provider.GetBattery();
}
}
class DictionaryValueOutputter : IAnalogOutputter
{
public String DictionaryKey { get; set; }
public double getVoltage(IOutputValueProvider provider)
{
return provider.GetDictionaryValue(DictionaryKey);
}
}
So then, I just need to ensure Acquisition implements the interface:
class Acquisition : IOutputValueProvider
{
public Int32 IntegrationTime { get; set; }
public Double Battery { get; set; }
public Double Signal { get; set; }
public Dictionary<String, Double> DictionaryValues;
public double GetBattery() { return Battery;}
public double GetSignal() { return Signal; }
public int GetIntegrationTime() { return IntegrationTime; }
public double GetDictionaryValue(String key)
{
Double d = 0.0;
return DictionaryValues.TryGetValue(key, out d) ? d : 0.0;
}
}
This isn't perfect, since now there's a gigantic interface that must be maintained and some duplicate code in Acquisition, but there's a heck of a lot less risk of something being changed affecting the other parts of my application. It also allows me to start subclassing Acquisition without having to change some of these external pieces. I hope this will help some others in similar situations.
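For completeness, a usage sketch of how the configured list might be driven (SetHardwareVoltage is a placeholder for the real hardware call, not part of the original code):
void SendOutputs(IList<IAnalogOutputter> outputters, Acquisition acq)
{
    foreach (IAnalogOutputter outputter in outputters)
    {
        // Acquisition is passed only through the narrow IOutputValueProvider interface.
        double voltage = outputter.getVoltage(acq);
        SetHardwareVoltage(voltage);
    }
}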
Related
I have a legacy C# library (a set of interrelated algorithms) in which there is a global god object which is passed to all classes. This god object (simply called Manager :D ) has a Parameters member, and an ObjectCollection member (among lots of others).
public class Manager
{
public Parameters Parameters { get; private set; }
public ObjectCollection ObjectCollection { get; private set; }
...
...
}
I am unable to test the algorithms because everything takes the manager as dependency, and initializing that means I have to initialize everything. So I want to refactor this design.
Parameters has more than 100 fields in it, the values control the different algorithms. The ObjectCollection has the entities required for the overall execution of the engine, stored by Id, by Name, etc.
The following are the approaches I've thought of, but am not satisfied with:
Pass Parameters and ObjectCollection (or IParameters and IObjectCollection) instead of the Manager, but I don't think this solves any issue. I wouldn't know which of the parameters the algorithms would depend on.
Splitting the parameters class to smaller ones also is difficult as one parameter may affect many algorithms, so a logical separation is difficult. Plus the dependencies for each algorithm may end up to be many.
A singleton pattern, like what is usually done for a Logger, but that too is not testable.
Some of the parameters control the algorithm logic, some of the parameters are just required for the algorithm. I'm thinking of making each algorithm a separate class implementing an interface, and at application start deciding which algorithm to instantiate based on the parameter. I might end up splitting the current set of algorithm classes into many more, and I'm afraid I'll end up complicating it more and losing the structure of the algorithms.
Is there any standard way to deal with this, or is just splitting big classes to smaller ones and passing dependencies by constructor the only general advice?
In order to allow yourself to make small steps I'd start with a single algorithm and identify the parameters it requires. These can then be exposed in an interface so...
public interface IAmTheParametersForAlgorithm1
{
int OneThing {get;}
int AnotherThing {get;}
}
Then you can alter Manager so that it implements that interface and, as in #marcel's answer, expose those parameters directly on Manager.
Now you can test Algorithm1 with a very small mock or self-shunt because you don't need to initialise a gigantic Manager in order to run your test. And Algorithm1 no longer knows it takes a Manager object.
public class Manager : IAmTheParametersForAlgorithm1 {}
public class Algorithm1
{
public Algorithm1(IAmTheParametersForAlgorithm1 parameters){}
}
Bit by bit you can continue expanding this to each of the sets of parameters and dealing with small, specific interfaces will allow you to identify where different algorithms have common parameters.
public class Manager :
IAmTheParametersForAlgorithm1,
IAmTheParametersForAlgorithm2,
IAmTheParametersForAlgorithm3,
IAmTheParametersForAlgorithm4 {}
It also means that as you identify algorithms whose parameters are no longer accessed outside of their interface you can stop injecting Manager into those algorithms, take the parameters out of Manager, and create a new class which only provides those parameters.
This means you can keep your application running the whole time you're making this change, if you aren't able to dedicate time to make one gigantic breaking change.
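As a sketch of that final step (Algorithm1Parameters is an illustrative name, not from the answer), the extracted parameters could live in a small dedicated class implementing the same interface:
public class Algorithm1Parameters : IAmTheParametersForAlgorithm1
{
    public int OneThing { get; private set; }
    public int AnotherThing { get; private set; }

    public Algorithm1Parameters(int oneThing, int anotherThing)
    {
        OneThing = oneThing;
        AnotherThing = anotherThing;
    }
}

// Algorithm1 is unchanged: it still only sees IAmTheParametersForAlgorithm1.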
For the Parameters, I would go with something like this:
public class Parameters
{
public int MyProperty1 { get; set; }
public int MyProperty2 { get; set; }
public int MyProperty3 { get; set; }
}
public class AlgorithmParameters1
{
private Parameters parameters;
public int MyProperty1 { get { return parameters.MyProperty1; } }
public int MyProperty3 { get { return parameters.MyProperty3; } }
public AlgorithmParameters1(Parameters parameters)
{
this.parameters = parameters;
}
}
public class Algorithm1
{
public void Run(AlgorithmParameters1 parameters)
{
//Access only MyProperty1 and MyProperty3...
}
}
Usage would look like:
var parameters = new Parameters()
{
MyProperty1 = 4,
MyProperty2 = 5,
MyProperty3 = 6,
};
new Algorithm1().Run(new AlgorithmParameters1(parameters));
By the way, I don't see how you could differentiate between parameters that control an algorithm and parameters that are merely required by it. By "control" do you mean they are used to decide which algorithm to take?
I'm using Redis Cache via the Stack Exchange library.
I used the CloudStructures library to use RedisDictionary and RedisList.
The problem is that when I try to retrieve values, and that model has a null value for one list property, it throws the exception below:
Jil.DeserializationException : Error occurred building a deserializer for TestMainClass: Expected a parameterless constructor for System.Collections.Generic.ICollection`1[TestChildClass]
---- Jil.Common.ConstructionException : Expected a parameterless constructor for System.Collections.Generic.ICollection`1[TestChildClass]
public class TestMainClass
{
public TestMainClass() { }
public int Id { get; set; }
public virtual ICollection<TestChildClass> Mydata { get; set; }
public string Title { get; set; }
}
public class TestChildClass
{
public TestChildClass() { }
public int Id { get; set; }
public string Value { get; set; }
}
Redis code to retrieve the value:
RedisDictionary<int, TestMainClass> dictionary =
new RedisDictionary<int, TestMainClass>("localhost", "mylocaldictionary");
var result = await dictionary.Get(121);
What if I am not able to convert ICollection<T> into List<T>?
It might be a nice feature if the serialization library detected interfaces like ICollection<T> and IList<T> and implemented them with the concrete List<T> during deserialization, but ultimately: every feature needs to be thought of, considered (impact), designed, implemented, tested, documented and supported. It may be that the library author feels this is a great idea and should be implemented; it might not be high on the author's list, but they'd be more than happy to take a pull request; or there might be good reasons not to implement it.
In the interim, as a general rule that will solve virtually every serialization problem you will ever encounter with any library:
the moment the library doesn't work perfectly with your domain model: stop serializing your domain model - use a DTO instead
By which, I mean: create a separate class or classes that are designed with the specific choice of serializer in mind. If it wants List<T>: then use List<T>. If it wants public fields: use public fields. If it wants the types to be marked [Serializable]: mark the types [Serializable]. If it wants all type names to start with SuperMagic: then start the type name with SuperMagic. As soon as you divorce the domain model from the serialization model, all the problems go away. In addition: you can support multiple serializers in parallel, without getting into the scenario that A needs X and doesn't work with Y; B needs Y and doesn't work with X.
All you then need to do is write a few lines of code to map between the two similar models (or use libraries that do exactly that, like AutoMapper).
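To make that concrete for this case, here is a minimal sketch (the DTO and mapper names are made up for illustration) that mirrors the domain model with the concrete collection type the serializer is happy with, plus a small mapping method:
using System.Collections.Generic;
using System.Linq;

public class TestMainClassDto
{
    public int Id { get; set; }
    public List<TestChildClassDto> Mydata { get; set; }
    public string Title { get; set; }
}

public class TestChildClassDto
{
    public int Id { get; set; }
    public string Value { get; set; }
}

public static class TestMainClassMapper
{
    public static TestMainClassDto ToDto(TestMainClass source)
    {
        return new TestMainClassDto
        {
            Id = source.Id,
            Title = source.Title,
            // Null-safe: the serializer always sees a concrete, non-null List<T>.
            Mydata = (source.Mydata ?? Enumerable.Empty<TestChildClass>())
                .Select(c => new TestChildClassDto { Id = c.Id, Value = c.Value })
                .ToList()
        };
    }
}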
I have summary objects whose responsibility is actually to combine a lot of things together and create a summary report, which is later going to be serialized into XML.
In these objects I have a lot of structures like this:
public class SummaryVisit : Visit, IMappable
{
public int SummaryId { get; set; }
public int PatientId { get; set; }
public int LocationId { get; set; }
public IMappable Patient
{
get
{
return new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
}
}
public IMappable Location
{
get
{
return new SummaryLocation(LocationBusinessService.FindLocation(this.LocationId));
}
}
public IEnumerable<IMappable> Comments
{
get
{
return new SummaryComments(CommentBusinessService.FindComments(this.SummaryId, Location));
}
}
// ... can be a lot of these structures
// ... using different business services and summary objects
public IEnumerable<IMappable> Tasks
{
get
{
return new SummaryTasks(TaskBusinessService.FindTasks(this));
}
}
}
PatientBusinessService, LocationBusinessService etc. are static classes.
And each of these SummaryPatient, SummaryLocation etc. has the same type of structure inside.
What is the best approach to refactor and unit test this?
I tried to replace the static calls with calls via interfaced proxies (or to refactor the statics into non-static classes and interfaces), but this class then acquires a lot of these interfaces through constructor injection and starts to be super greedy. In addition, each of these interfaces would have only one used method inside (if I create it just for this summary's needs).
And since this is a summary object, each of these static services is commonly used just once for the whole structure, to get the appropriate properties for output.
You could change your tests to be more integration-oriented (test more than one class at a time). You could try to modify your services to be more universal and able to take data from different sources (like a TestDataProvider and your current data provider).
A better solution, I think, is to modify the classes you want to test:
Use strong typing for properties and gain all its benefits. I think you should return more specific types instead of IMappable.
It looks like some of your data is stored inside the class (the ids) and some is not (the IMappable object references). I would refactor this to hold references to the objects inside the class:
private SummaryPatient _patient;
public SummaryPatient Patient
{
get
{
if (_patient == null)
_patient = new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
return _patient;
}
}
Then you can assign your test data in the constructor or create a static method CreateDummy(...) just for unit tests. This method should then use CreateDummy for the child objects as well. You can use it in your unit tests.
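A minimal sketch of the CreateDummy idea (the method is hypothetical; _patient is the backing field shown above):
public class SummaryVisit // base class and other members omitted for brevity
{
    private SummaryPatient _patient;

    // Hypothetical test-only factory: bypasses the static business services entirely.
    public static SummaryVisit CreateDummy(SummaryPatient patient)
    {
        var visit = new SummaryVisit();
        visit._patient = patient; // legal here because we are inside the class
        return visit;
    }
}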
I am trying to find a better way to handle some growing if constructs to handle classes of different types. These classes are, ultimately, wrappers around disparate value types (int, DateTime, etc.) with some additional state information. So the primary difference between these classes is the type of data they contain. While they implement generic interfaces, they also need to be kept in homogeneous collections, so they also implement a non-generic interface. The class instances are handled according to the type of data they represent, and their propagation continues or doesn't continue based on that.
While this is not necessarily a .NET or C# issue, my code is in C#.
Example classes:
interface ITimedValue {
TimeSpan TimeStamp { get; }
}
interface ITimedValue<T> : ITimedValue {
T Value { get; }
}
class NumericValue : ITimedValue<float> {
public TimeSpan TimeStamp { get; private set; }
public float Value { get; private set; }
}
class DateTimeValue : ITimedValue<DateTime> {
public TimeSpan TimeStamp { get; private set; }
public DateTime Value { get; private set; }
}
class NumericEvaluator {
public void Evaluate(IEnumerable<ITimedValue> values) ...
}
I have come up with two options:
Double Dispatch
I recently learned of the Visitor pattern and its use of double dispatch to handle just such a case. This appeals because it would allow undesired data to not propagate (if we only want to handle an int, we can handle that differently than a DateTime). Also, the behaviors of how the different types are handled would be confined to the single class that is handling the dispatch. But there is a fair bit of maintenance if/when a new value type has to be supported.
Union Class
A class that contains a property for each value type supported could be what each of these classes store. Any operation on a value would affect the appropriate component. This is less complex and less maintenance than the double-dispatch strategy, but it would mean that every piece of data would propagate all the way through unnecessarily, as you can no longer discriminate along the lines of "I don't operate upon that data type". However, if/when new types need to be supported, they only need to go into this class (plus whatever additional classes need to be created to support the new data type).
class UnionData {
public int NumericValue;
public DateTime DateTimeValue;
}
Are there better options? Is there something in either of these two options that I did not consider that I should?
Method 1: using dynamic for double dispatch (credit goes to http://blogs.msdn.com/b/curth/archive/2008/11/15/c-dynamic-and-multiple-dispatch.aspx).
Basically you can have your Visitor pattern simplified like this:
class Evaluator {
public void Evaluate(IEnumerable<ITimedValue> values) {
foreach(var v in values)
{
Eval((dynamic)(v));
}
}
private void Eval(DateTimeValue d) {
Console.WriteLine(d.Value.ToString() + " is a datetime");
}
private void Eval(NumericValue f) {
Console.WriteLine(f.Value.ToString() + " is a float");
}
}
sample of usage:
var l = new List<ITimedValue>(){
new NumericValue(){Value= 5.1F},
new DateTimeValue() {Value= DateTime.Now}};
new Evaluator()
.Evaluate(l);
// output:
// 5,1 is a float
// 29/02/2012 19:15:16 is a datetime
Method 2 would use union types in C# as proposed by @Juliet here (alternative implementation here)
I'll tell you how I've solved a similar situation - by storing the Ticks of a DateTime or TimeSpan as a double in the collection and by using IComparable as a where constraint on the type parameter. The conversion to double / from double is performed by a helper class.
Please see this previous question.
Funnily enough this leads to other problems, such as boxing and unboxing. The application I am working on requires extremely high performance so I need to avoid boxing. If you can think of a great way to generically handle different datatypes (including DateTime) then I'm all ears!
Good question. The first thing that came to my mind was a reflective Strategy algorithm. The runtime can tell you, either statically or dynamically, the most derived type of the reference, regardless of the type of the variable you are using to hold the reference. However, unfortunately, it will not automatically choose an overload based on the derived type, only the variable type. So, we need to ask at runtime what the true type is, and based on that, manually select a particular overload. Using reflection, we can dynamically build a collection of methods identified as handling a particular sub-type, then interrogate the reference for its generic type and look up the implementation in the dictionary based on that.
public interface ITimedValueEvaluator
{
void Evaluate(ITimedValue value);
}
public interface ITimedValueEvaluator<T>:ITimedValueEvaluator
{
void Evaluate(ITimedValue<T> value);
}
//each implementation is responsible for implementing both interfaces' methods,
//much like implementing IEnumerable<> requires implementing IEnumerable
class NumericEvaluator: ITimedValueEvaluator<int> ...
class DateTimeEvaluator: ITimedValueEvaluator<DateTime> ...
public class Evaluator
{
private Dictionary<Type, ITimedValueEvaluator> Implementations;
public Evaluator()
{
//find all implementations of ITimedValueEvaluator, instantiate one of each
//and store in a Dictionary
Implementations = (from t in Assembly.GetExecutingAssembly().GetTypes()
                   where !t.IsInterface && !t.IsAbstract
                   let closedInterface = t.GetInterfaces().FirstOrDefault(i =>
                       i.IsGenericType && i.GetGenericTypeDefinition() == typeof(ITimedValueEvaluator<>))
                   where closedInterface != null
                   select new KeyValuePair<Type, ITimedValueEvaluator>(
                       closedInterface.GetGenericArguments()[0],
                       (ITimedValueEvaluator)Activator.CreateInstance(t)))
                   .ToDictionary(kvp => kvp.Key, kvp => kvp.Value);
}
public void Evaluate(ITimedValue value)
{
//find the closed ITimedValue<T> interface on the value's true type, and look up the implementation
var genType = value.GetType().GetInterfaces()
    .First(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(ITimedValue<>))
    .GetGenericArguments()[0];
//Since we're passing a reference to the base ITimedValue interface,
//we will call the Evaluate overload from the base ITimedValueEvaluator interface,
//and each implementation should cast value to the correct generic type.
Implementations[genType].Evaluate(value);
}
public void Evaluate(IEnumerable<ITimedValue> values)
{
foreach(var value in values) Evaluate(value);
}
}
Notice that the main Evaluator is the only one that can handle an IEnumerable; each ITimedValueEvaluator implementation should handle values one at a time. If this isn't feasible (say you need to consider all values of a particular type), then this gets really easy; just loop through every implementation in the Dictionary, passing it the full IEnumerable, and have those implementations filter the list to only objects of the particular closed generic type using the OfType() Linq method. This will require you to run all ITimedValueEvaluator implementations you find on the list, which is wasted effort if there are no items of a particular type in a list.
The beauty of this is its extensibility; to support a new generic closure of ITimedValue, just add a new implementation of ITimedValueEvaluator of the same type. The Evaluator class will find it, instantiate a copy, and use it. Like most reflective algorithms, it's slow, but the actual reflective part is a one-time deal.
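A short usage sketch under the same assumptions (this presumes NumericEvaluator is closed over float to match NumericValue, and DateTimeEvaluator over DateTime):
var values = new List<ITimedValue>
{
    new NumericValue(),  // implements ITimedValue<float>
    new DateTimeValue()  // implements ITimedValue<DateTime>
};

var evaluator = new Evaluator(); // reflection discovers the implementations once, here
evaluator.Evaluate(values);      // each value is routed to the evaluator registered for its type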
Why not just implement the interface that you actually want, and allow the implementing type to define what the value is? For example:
class NumericValue : ITimedValue<float> {
public TimeSpan TimeStamp { get; private set; }
public float Value { get; private set; }
}
class DateTimeValue : ITimedValue<DateTime>, ITimedValue<float> {
public TimeSpan TimeStamp { get; private set; }
public DateTime Value { get; private set; }
float ITimedValue<float>.Value { get { return 0; } }
}
class NumericEvaluator {
public void Evaluate(IEnumerable<ITimedValue<float>> values) ...
}
If you want the behavior of the DateTime implementation to vary based on the particular usage (say, alternate implementations of Evaluate functions), then they by definition need to be aware of ITimedValue<DateTime>. You can get to a good statically-typed solution by providing one or more Converter delegates, for example.
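For instance, here is a minimal sketch of the converter idea (DateTimeAwareEvaluator and its overload are hypothetical, not from the answer above); the caller decides how a DateTime collapses to a float:
class DateTimeAwareEvaluator
{
    // Hypothetical overload: the caller supplies the DateTime-to-float conversion.
    public void Evaluate(IEnumerable<ITimedValue<DateTime>> values, Converter<DateTime, float> toFloat)
    {
        foreach (var value in values)
        {
            float numeric = toFloat(value.Value);
            // ... evaluate 'numeric' exactly as a numeric value would be ...
        }
    }
}

// Usage: evaluator.Evaluate(dateTimeValues, d => (float)d.Ticks);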
Finally, if you really only want to handle the NumericValue instances, just filter out anything that isn't a NumericValue instance:
class NumericEvaluator {
public void Evaluate(IEnumerable<ITimedValue> values) {
foreach (NumericValue value in values.OfType<NumericValue>()) {
....
}
}
}
One of the most important aspects of OOP is data hiding. Can somebody explain using a simple piece of code what data hiding is exactly and why we need it?
Data or Information Hiding is a design principle proposed by David Parnas.
It says that you should hide the design decisions in one part of the program that are likely to be changed from other parts of the program, thereby protecting the other parts from being affected by changes in the first part.
Encapsulation is a programming language feature which enables data hiding.
However, note that you can do data/information hiding even without encapsulation, for example using modules or functions in non-object-oriented programming languages. Thus encapsulation is not data hiding itself, but only a means of achieving it.
While doing encapsulation, if you ignore the underlying principle then you will not have a good design. For example, consider this class -
public class ActionHistory
{
private string[] _actionHistory;
public string[] HistoryItems
{
get{return _actionHistory; }
set{ _actionHistory = value; }
}
}
This class encapsulates an array. But it does not hide the design decision of using a string[] as the internal storage. If we want to change the internal storage later on, it will affect the code using this class as well.
Better design would be -
public class ActionHistory
{
private string[] _actionHistory;
public IEnumerable<string> HistoryItems
{
get{return _actionHistory; }
}
}
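To make the payoff concrete (a sketch, not part of the original answer), the internal storage can now change without touching any code that consumes HistoryItems:
public class ActionHistory
{
    // The storage decision is hidden: switching from string[] to List<string>
    // does not affect callers, because they only ever see IEnumerable<string>.
    private List<string> _actionHistory = new List<string>();

    public IEnumerable<string> HistoryItems
    {
        get { return _actionHistory; }
    }
}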
I'm guessing by data hiding you mean something like encapsulation or having a variable within an object and only exposing it by get and modify methods, usually when you want to enforce some logic to do with setting a value?
public class Customer
{
private decimal _accountBalance;
public decimal GetBalance()
{
return _accountBalance;
}
public void AddCharge(decimal charge)
{
_accountBalance += charge;
if (_accountBalance < 0)
{
throw new ArgumentException(
"The charge cannot put the customer in credit");
}
}
}
I.e. in this example, I'm allowing the consuming class to get the balance of the Customer, but I'm not allowing them to set it directly. However I've exposed a method that allows me to modify the _accountBalance within the class instance by adding to it via a charge in an AddCharge method.
Here's an article you may find useful.
Information hiding (or more accurately encapsulation) is the practice of restricting direct access to your information on a class. We use getters/setters or more advanced constructs in C# called properties.
This lets us govern how the data is accessed, so we can sanitize inputs and format outputs later if it's required.
The idea is that on any public interface we cannot trust the calling body to do the right thing, so if you make sure it can ONLY do the right thing, you'll have fewer problems.
Example:
public class InformationHiding
{
private string _name;
public string Name
{
get { return _name; }
set { _name = value; }
}
/// This example ensures you can't have a negative age
/// as this would probably mess up logic somewhere in
/// this class.
private int _age;
public int Age
{
get { return _age; }
set { if (value < 0) { _age = 0; } else { _age = value; } }
}
}
Imagine that the users of your class are trying to come up with ways to make your class no longer fulfill its contract. For instance, your Banking object may have a contract that ensures that all Transactions are recorded in a log. Suppose mutation of the Bank's TransactionLog were publicly accessible; now a consuming class could initiate suspect transactions and modify the log to remove the records.
This is an extreme example, but the basic principles remain the same. It's up to the class author to maintain the contractual obligations of the class and this means you either need to have weak contractual obligations (reducing the usefulness of your class) or you need to be very careful about how your state can be mutated.
What is data hiding?
Here's an example:
public class Vehicle
{
private bool isEngineStarted;
private void StartEngine()
{
// Code here.
this.isEngineStarted = true;
}
public void GoToLocation(Location location)
{
if (!this.isEngineStarted)
{
this.StartEngine();
}
// Code here: move to a new location.
}
}
As you see, the isEngineStarted field is private, i.e. accessible only from within the class itself. In fact, when calling an object of type Vehicle, we need to move the vehicle to a location, but we don't need to know how this will be done. For example, it doesn't matter to the caller object whether the engine is started or not: if it's not, it's up to the Vehicle object to start it before moving to the location.
Why do we need this?
Mostly to make the code easier to read and to use. Classes may have dozens or hundreds of fields and properties that are used only by them. Exposing all those fields and properties to the outside world will be confusing.
Another reason is that it is easier to control the state of a private field/property. For example, in the sample code above, imagine StartEngine is performing some tasks and then assigning true to this.isEngineStarted. If isEngineStarted were public, another class would be able to set it to true without performing the tasks done by StartEngine. In that case, the value of isEngineStarted would be unreliable.
Data Hiding is defined as hiding a base class method in a derived class by naming the new class method the same name as the base class method.
class Person
{
public string AnswerGreeting()
{
return "Hi, I'm doing well. And you?";
}
}
class Employee : Person
{
new public string AnswerGreeting()
{
"Hi, and welcome to our resort.";
}
}
In this C# code, the new keyword prevents the compiler from giving a warning that the base class implementation of AnswerGreeting is being hidden by the implementation of a method with the same name in the derived class. This is also known as "data hiding by inheritance".
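A short usage sketch (hypothetical calling code) makes the difference from overriding visible: the method that runs depends on the compile-time type of the reference:
Person person = new Employee();
Employee employee = new Employee();

Console.WriteLine(person.AnswerGreeting());   // "Hi, I'm doing well. And you?" (base method, not the hidden one)
Console.WriteLine(employee.AnswerGreeting()); // "Hi, and welcome to our resort."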
By data hiding you are presumably referring to encapsulation. Encapsulation is defined by Wikipedia as follows:
Encapsulation conceals the functional details of a class from objects that send messages to it.
To explain a bit further, when you design a class you can design public and private members. The class exposes its public members to other code in the program, but only the code written in the class can access the private members.
In this way a class exposes a public interface but can hide the implementation of that interface, which can include hiding how the data that the class holds is implemented.
Here is an example of a simple mathematical angle class that exposes values for both degrees and radians, but the actual storage format of the data is hidden and can be changed in the future without breaking the rest of the program.
public class Angle
{
private double _angleInDegrees;
public double Degrees
{
get
{
return _angleInDegrees;
}
set
{
_angleInDegrees = value;
}
}
public double Radians
{
get
{
return _angleInDegrees * Math.PI / 180;
}
set
{
_angleInDegrees = value * 180 / Math.PI;
}
}
}
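A brief usage sketch (not part of the original answer): callers only ever see Degrees and Radians, so the private storage could later switch to radians without breaking them:
var angle = new Angle();
angle.Degrees = 180;
Console.WriteLine(angle.Radians); // ~3.14159, however Angle stores the value internally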