I have a class with some properties that I want to serialize. My problem is that
I can't serialize the "CustomCanvasClass"; I only need its X/Y properties.
So I created a new property and marked the "CustomCanvasClass" backing field as [NonSerialized].
Unfortunately it doesn't work. Does anybody have another idea for copying this data out of the class?
[Serializable]
public class CustomClass
{
    // won't be serialized
    public double X
    {
        get
        {
            return Canvas.GetLeft(CustomCanvasClass);
        }
        set
        {
            Canvas.SetLeft(CustomCanvasClass, value);
        }
    }

    public string Property1 { get; set; }

    // CanvasElement inherits from Canvas. Serializing it would throw an exception.
    public CanvasElement CustomCanvasClass
    {
        get
        {
            return _CustomCanvasClass;
        }
        set
        {
            _CustomCanvasClass = value;
        }
    }

    [NonSerialized]
    private CanvasElement _CustomCanvasClass;
}
Use a DTO for the properties you need and serialize that.
DTO stands for data transfer object. It contains only the data you want to transfer and no logic.
E.g. add a class like this:
class MyCustomClassDto
{
    public double X { get; set; }
    public double Y { get; set; }
}
So instead of trying to serialize your custom class directly, you would initialize an instance of this with your X and Y values and serialize that.
Then in your main class you could add this:
public MyCustomClassDto GetData()
{
    return new MyCustomClassDto { X = X, Y = Y };
}
You could add a serialization method to your DTO also.
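For example, here's a minimal sketch of such a helper, assuming XmlSerializer as the format (the DTO would need to be public for that; any other property-based serializer works the same way, and the helper name is just an example):

using System.IO;
using System.Xml.Serialization;

public static class CustomClassSerializer
{
    public static string ToXml(CustomClass source)
    {
        // Serialize the small DTO instead of the canvas-backed class.
        var dto = source.GetData();
        var serializer = new XmlSerializer(typeof(MyCustomClassDto));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, dto);
            return writer.ToString();
        }
    }
}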
Alternatively, you can use a mapping tool like AutoMapper, which would be suitable if you have many DTOs or corresponding objects in different layers.
Hope that makes the idea clear. I can't see other ways of expanding on this without seeing more details/context.
The proper MVVM ItemsPanel approach in the comment on the question is preferable, but it might require substantial rewriting depending on your existing codebase. You might want to consider the business case for such refactoring, IMO, weighing the effort against how much more is likely to be built on top of it.
Related
I have a summary object whose responsibility is actually to combine a lot of things together and create a summary report, which is later serialized into XML.
In this object I have a lot of structures like this:
public class SummaryVisit : Visit, IMappable
{
    public int SummaryId { get; set; }
    public int PatientId { get; set; }
    public int LocationId { get; set; }

    public IMappable Patient
    {
        get
        {
            return new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
        }
    }

    public IMappable Location
    {
        get
        {
            return new SummaryLocation(LocationBusinessService.FindLocation(this.LocationId));
        }
    }

    public IEnumerable<IMappable> Comments
    {
        get
        {
            return new SummaryComments(CommentBusinessService.FindComments(this.SummaryId, Location));
        }
    }

    // ... can be a lot of these structures
    // ... using different business services and summary objects

    public IEnumerable<IMappable> Tasks
    {
        get
        {
            return new SummaryTasks(TaskBusinessService.FindTasks(this));
        }
    }
}
PatientBusinessService, LocationBusinessService etc. are static classes.
And each of these SummaryPatient, SummaryLocation etc. has the same type of structure inside.
What is the best approach to refactoring and unit testing this?
I tried to replace the static calls with calls through interfaced proxies (or to refactor the statics into non-static classes and interfaces), but this class then gets a lot of these interfaces as constructor-injected dependencies and starts to look super greedy. In addition, each of these interfaces would have only one method that is actually used (if I create them just for this summary's needs).
And since this is a summary object, these static services are commonly used just once for the whole structure, to get the appropriate properties for output.
You could change your tests to be more integration-style (testing more than one class at a time). You could try to modify your services to be more universal and able to take data from different sources (like a TestDataProvider and your current data provider).
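As a rough illustration of that first option (the names below are made up, not from your code), the service could sit behind a small provider interface so tests can swap in canned data:

// Hypothetical abstraction over the data source used by the summary object.
public interface IPatientProvider
{
    Patient FindPatient(int patientId);
}

// Production implementation just forwards to the existing static service.
public class BusinessServicePatientProvider : IPatientProvider
{
    public Patient FindPatient(int patientId)
    {
        return PatientBusinessService.FindPatient(patientId);
    }
}

// Test implementation returns canned data, no database needed.
public class TestDataProvider : IPatientProvider
{
    public Patient FindPatient(int patientId)
    {
        return new Patient(); // fill with whatever test data you need
    }
}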
A better solution, I think, is to modify the classes you want to test:
Use strong typing for the properties and gain all its benefits. I think you should return more specific types instead of IMappable.
It looks like some of your data is stored inside the class (the ids) and some is not (the IMappable object references). I would refactor this to hold references to the objects inside the class:
private SummaryPatient _patient;

public SummaryPatient Patient
{
    get
    {
        if (_patient == null)
            _patient = new SummaryPatient(PatientBusinessService.FindPatient(this.PatientId));
        return _patient;
    }
}
Then you can assign your test data in the constructor, or create a static CreateDummy(...) method just for unit tests. That method should then use CreateDummy for the child objects as well. You can use it in your unit tests.
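A hedged sketch of what such a factory method on SummaryVisit might look like, assuming the lazily loaded _patient field from the snippet above and a similar CreateDummy on SummaryPatient (both assumptions):

// For unit tests only: bypasses the business services entirely.
public static SummaryVisit CreateDummy(int summaryId, SummaryPatient patient = null)
{
    return new SummaryVisit
    {
        SummaryId = summaryId,
        // the private field is accessible here because we are inside SummaryVisit
        _patient = patient ?? SummaryPatient.CreateDummy()
    };
}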
I have a collection of objects which need to maintain several time-stamps for the last time certain properties within the object were updated (one time-stamp per property).
I would just implement the time-stamp update in the setter, except that the deserialization library being used first creates an object and then updates all of its properties (using the object's setters). This means that all my time-stamps would be invalidated every time my program deserializes them.
I'm thinking I need a singleton class or some update method which handles updating the properties and also controls the time-stamp update. Is there a better way to implement this behavior? Does a design pattern exist for this behavior?
If you separate your serialization concerns from your business layer, it should help you find some flexibility to hammer out a solution. Have 99% of your API work with your business object (which updates timestamps when properties are updated), then only convert to/from some data-transfer object (DTO) for serialization purposes.
For example, given some business object like this:
public class MyObject
{
    public DateTime SomeValueUpdated { get; private set; }

    private double _SomeValue;
    public double SomeValue
    {
        get
        {
            return _SomeValue;
        }
        set
        {
            SomeValueUpdated = DateTime.Now;
            _SomeValue = value;
        }
    }

    public MyObject()
    {
    }

    // for deserialization purposes only; the timestamp is assigned last,
    // after the SomeValue setter has stamped DateTime.Now
    public MyObject(double someValue, DateTime someValueUpdated)
    {
        this.SomeValue = someValue;
        this.SomeValueUpdated = someValueUpdated;
    }
}
You could have a matching DTO like this:
public class MyObjectDTO
{
    public DateTime SomeValueUpdated { get; set; }
    public double SomeValue { get; set; }
}
Your DTO can be specially adorned with various XML-schema-altering attributes, or you can manage the timestamps however you see fit; your business layer doesn't know and doesn't care.
When it comes time to serialize or deserialize the objects, run them through a converter utility:
public static class MyObjectDTOConverter
{
    public static MyObjectDTO ToSerializable(MyObject myObj)
    {
        return new MyObjectDTO
        {
            SomeValue = myObj.SomeValue,
            SomeValueUpdated = myObj.SomeValueUpdated
        };
    }

    public static MyObject FromSerializable(MyObjectDTO myObjSerialized)
    {
        return new MyObject(
            myObjSerialized.SomeValue,
            myObjSerialized.SomeValueUpdated
        );
    }
}
If you wish, you can make any of the properties or constructors of MyObject internal so that only your conversion utility can access them. (For example, maybe you don't want the public MyObject(double someValue, DateTime someValueUpdated) constructor to be publicly accessible.)
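For completeness, a rough usage sketch of the round trip, assuming XmlSerializer as the wire format (any serializer that works on the plain DTO would do; the helper class below is illustrative, not part of the API above):

using System.IO;
using System.Xml.Serialization;

public static class MyObjectXmlPersistence
{
    public static string Save(MyObject myObject)
    {
        var dto = MyObjectDTOConverter.ToSerializable(myObject);
        var serializer = new XmlSerializer(typeof(MyObjectDTO));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, dto);
            return writer.ToString();
        }
    }

    public static MyObject Load(string xml)
    {
        var serializer = new XmlSerializer(typeof(MyObjectDTO));
        using (var reader = new StringReader(xml))
        {
            var dto = (MyObjectDTO)serializer.Deserialize(reader);
            // Timestamps come back exactly as stored; the business setter is never hit.
            return MyObjectDTOConverter.FromSerializable(dto);
        }
    }
}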
I have an entity called "Set" which contains Cards. Sometimes I want to see the entire card and its contents (card view), and sometimes I just want to know how many cards are in the Set (table views). In my effort to keep things DRY, I decided to try to re-use my SetDto class with multiple constructors, like this:
public class SetDto
{
    public SetDto()
    {
        Cards = new List<CardDto>();
    }

    // Called via new SetDto(set, "thin")
    public SetDto(Set set, string isThin)
    {
        SetId = set.SetId;
        Title = set.Title;
        Details = set.Details;
        Stage = set.Stage;
        CardCount = set.Cards.Count;
    }

    // Called via new SetDto(set)
    public SetDto(Set set)
    {
        SetId = set.SetId;
        UserId = set.UserId;
        Title = set.Title;
        Details = set.Details;
        FolderId = set.FolderId;
        Stage = set.Stage;
        IsArchived = set.IsArchived;
        Cards = new List<CardDto>();
        foreach (Card card in set.Cards)
        {
            Cards.Add(new CardDto(card));
        }
    }

    // ... property definitions
}
I originally had two different DTOs for sets - ThinSetDto and FullSetDto - but this seemed messy and tougher to test. Does the above solution seem ok, or am I breaking a known best-practice? Thank you for your time!
I would create three methods in a SetManager class (a class handling CRUD operations), not in the DTO.
The DTO should not have such logic inside. Anyway, I agree with you that the duplication is useless (and evil).
public class BaseSetDTO
{
    public BaseSetDTO()
    {
        Set();
    }

    internal virtual void Set()
    {
        // Do your base set here with base properties
    }
}

public class SetDTO : BaseSetDTO
{
    internal override void Set()
    {
        // Do a full set here
    }
}
Create a base class, then let your types handle what they are supposed to set. Create a new one for your ThinSetDTO and override again.
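For instance, the thin variant might just be (a sketch following the same pattern):

public class ThinSetDTO : BaseSetDTO
{
    internal override void Set()
    {
        // Populate only the lightweight members here, e.g. SetId, Title and CardCount.
    }
}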
Instead, I would prefer an extension method: declare all the properties in the Set class and modify them by passing the required parameters. Alternatively, initialize a base DTO, add the required properties for each version, and call an extension method to create the required version of the DTO and return the base DTO.
public static Set SetDto(this Set set, bool isThin)
{
    if (isThin)
    {
        // populate only the "thin" properties here
    }
    return set;
}
A common solution to this is to have the repository (or equivalent) return the 'flavor' of the DTO/entity you want, either by having different access methods, i.e. Get() ... GetSet(), or by enumerating your 'flavors' of the entity in question and passing that to your Get (or equivalent) method, i.e.:
enum ContactCollectionFlavors { Full, CountOnly, CountWithNames .... }
...
foo = ContactRepository.GetByLastName("Jones", ContactCollectionFlavors.CountWithNames);
This can get a little messy; from experience the entity in question should have some way of knowing which 'flavor' it is, which smells bad since it breaks encapsulation and separation of concerns. But in my opinion it's better to hold your nose and keep some out-of-band data, so that later you can have lazy loading of the entity, allowing you to turn 'light flavors' into fully populated entities.
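As a rough sketch of that last point (the names below are illustrative, not an existing API), the entity can carry its flavor as out-of-band data and upgrade itself on demand:

public class ContactCollection
{
    // Out-of-band data: which flavor this instance currently is.
    public ContactCollectionFlavors Flavor { get; private set; }

    public int Count { get; private set; }
    public IList<Contact> Contacts { get; private set; }

    public void EnsureFullyPopulated()
    {
        if (Flavor == ContactCollectionFlavors.Full)
            return;

        // Placeholder for the real data access; lazily turns a light flavor into a full one.
        Contacts = ContactRepository.LoadContactsFor(this);
        Count = Contacts.Count;
        Flavor = ContactCollectionFlavors.Full;
    }
}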
I have a class that looks like this:
public class MyModel
{
    public int TheId { get; set; }
    public int ....
    public string ....
}
I have another class that takes lists of several types, including MyModel, and serializes the lists to JSON. It has several methods, one for each type of list.
public class ToJson
{
    public string MyModelToJson(List<MyModel> TheListOfMyModel)
    {
        string ListOfMyModelInJson = "";
        JavaScriptSerializer TheSerializer = new ....
        TheSerializer.RegisterConverters(....
        ListOfMyModelInJson = TheSerializer.Serialize(TheListOfMyModel);
        return ListOfMyModelInJson;
    }

    public string MyOtherModelToJson(List<MyOtherModel> TheListOfOtherModel) { .... }

    public string YetAnotherModelToJson(List<YetAnotherModelToJson> TheListOfYetAnotherModelToJson) { .... }
}
What I want to do is encapsulate the serializing into MyModel, something like this:
public class MyModel
{
    public int TheId { get; set; }
    public int ....
    public string ....

    public string MyModelToJson()
}
How can I encapsulate a method into an object so that it's available for a list of objects?
I thought of doing a foreach loop, but that gets messy because in the calling method you have to manipulate the JSON strings of each object in the list and concatenate them.
Let me know if OO principles of encapsulation apply in this case.
Thanks for your suggestions.
One way would be to define your ToJson as accepting a generic type:
public class ToJson<T>
{
    public string MyModelToJson(List<T> TheListOfMyModel)
    {
        string ListOfMyModelInJson = "";
        JavaScriptSerializer TheSerializer = new ....
        TheSerializer.RegisterConverters(....
        ListOfMyModelInJson = TheSerializer.Serialize(TheListOfMyModel);
        return ListOfMyModelInJson;
    }
}
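Usage then becomes one generic class instead of one method per model type, for example:

var modelJson = new ToJson<MyModel>().MyModelToJson(TheListOfMyModel);
var otherJson = new ToJson<MyOtherModel>().MyModelToJson(TheListOfOtherModel);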
extension methods!
public static class JsonExtensions
{
    public static string ToJson<T>(this List<T> list)
    {
        // e.g. delegate to whatever serializer you already use, such as JavaScriptSerializer
        return new JavaScriptSerializer().Serialize(list);
    }
}
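With that in place any list can serialize itself, e.g.:

string json = TheListOfMyModel.ToJson();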
I'm not sure that I understand your question, but I think that what you want to do is not return a String but a JsonObject, JsonArray, or JsonPrimitive:
public class MyModel
{
    public JsonObject myModelToJson() ... // this method implements the interface!
}
Where JsonObject is a class that represents a json object.
Make this class implement an interface where the contract is that the return value is a JsonValue.
Then, in the ToJson class, return a JsonArray:
public class ToJson
{
    public JsonArray myModelToJson(List<things that can be json-ized> myList) ...
}
Don't serialize the objects/arrays/primitives to a String until you absolutely need to, and let a library take care of the actual serialization.
That was a confusing answer.
Here's what I think you should do:
Get hold of a decent JSON library. Ideally, it should have JsonObjects, JsonArrays, and JsonPrimitives which are subclasses of JsonElement. I've used Google Gson in Java, but I don't know what the equivalent C# version would be.
Create an interface, JsonAble, with one method -- toJson -- that returns a JsonElement.
Implement this interface for all the classes concerned.
Serializing a list of JsonAble objects is then very easy -- it becomes a JsonArray.
A decent JSON library should have a serialize method -- so you'll never have to worry about throwing strings around yourself.
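A minimal C# sketch of that shape (the JsonElement/JsonArray types below are stand-ins for whatever library you pick, not a real API):

using System.Collections.Generic;

public abstract class JsonElement { }                 // placeholder for the library's base type

public class JsonArray : JsonElement
{
    public List<JsonElement> Items { get; } = new List<JsonElement>();
}

public interface IJsonAble
{
    JsonElement ToJson();
}

public class ToJson
{
    // Any list of IJsonAble objects becomes a JsonArray; conversion to a string
    // happens last, inside the library, not here.
    public JsonArray Serialize(IEnumerable<IJsonAble> items)
    {
        var array = new JsonArray();
        foreach (var item in items)
        {
            array.Items.Add(item.ToJson());
        }
        return array;
    }
}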
For what it's worth, I wouldn't remove the class at all. What you're talking about doing is adding an additional responsibility to your model, apparently going against the SRP (single responsibility principle) heuristic. That is, you have a class whose current responsibility is to model data, and you're going to make it responsible for modeling data and also converting that data to some other form, using various service classes that it now needs to know about. If the model class also encapsulates GUI concepts like raising events for the GUI, then it has divergent reasons to change: if the scheme for notifying the GUI changes and if the scheme for converting to JSON changes.
If it were me, I'd have the models inherit from a base class or define an interface as mentioned by Matt Fenwick, and have your ToJson class take a batch of those as input, process them, and return the result.
I understand the desire to eliminate the extra class, and might advocate it if it were a simple conversion involving only data elements of the class, but as soon as you need a service class of some kind to do the operation, it seems a poor fit for the model object, since you now cannot model data without a JavaScriptSerializer. That's awkward if you want to model data that you don't then serialize.
One final thing I can think of is that you could build on a'b'c'd'e'f'g'h's suggestion and piggyback the method onto some existing service, thus eliminating the class. If you just have a generic method on that service that implements the serialization, you can eliminate the separate class, since you no longer need a separate method for each model object type.
I want to add metadata to my object graph for non-domain type data that will be associated to my objects but is not essential to the problem set of that domain. For example, I need to store sort settings for my objects so that the order in which they appear in the UI is configurable by the user. The sort indices should be serializable so that the objects remember their positions. That's just one among a few other metadata items I need to persist for my objects. My first thought is to solve this by having a MetadataItem and a MetadataItemCollection where the base Entity class will have a "Meta" property of type MetadataItemCollection. E.g.:
public class MetadataItem
{
    public string Name;
    public object Data;
}

public class MetadataItemCollection
{
    /* All normal collection operations here. */

    // Implementation-specific interesting ones ...
    public object Get(string name);
    public MetadataItem GetItem(string name);

    // Strongly-typed getters ...
    public bool GetAsBool(string name);
    public string GetAsString(string name);

    // ... or could be typed via generics ...
    public T Get<T>(string name);
}

public class Entity
{
    public MetadataItemCollection Meta { get; }
}
A few concerns I can think of are:
Serialization - should the database have a single table of EntityID | Name | Value, where Value is a string and all types are serialized to strings?
Future Proofing - what if a metadata item's type (unlikely) or name needs to be changed?
Refactorability - should the keys come from a static list via enum or a class with static string properties, or should free-form strings be allowed:
var i = entity.Meta["SortIndex"];
vs.
public enum Metadatas { SortIndex };
var i = entity.Meta[Metadatas.SortIndex];
vs.
public static class Metadatas
{
    public static string SortIndex = "SortIndex";
}
var i = entity.Meta[Metadatas.SortIndex];
Anything else?
Thoughts, ideas, gotchas???
Thanks for your time.
Solution:
Following #Mark's lead, and after watching the Udi video Mark linked to, I created two new interfaces: IUiPresentation and IUiPresentationDataPersistor. It's important to note that none of the objects in my Entity object model have any awareness of these interfaces; the interfaces are in a separate assembly and never referenced by my Entity object model. The magic is then done via IoC in the presentation models. It would be something like the following:
public class PhoneViewModel
{
    IUiPresentationDataPersistor<Phone> _uiData;
    IUiPresentation<Phone> _presenter;

    // Let IoC resolve the dependency via ctor injection.
    public PhoneViewModel(Phone phone, IUiPresentationDataPersistor<Phone> uiData)
    {
        _uiData = uiData;
        _presenter = uiData.Get(phone); // Does a simple lookup on the phone's ID.
    }

    public int SortIndex
    {
        get { return _presenter.SortIndex; }
        set { _presenter.SortIndex = value; }
    }

    public void Save()
    {
        _uiData.Save();
    }
}
It's a little more complicated in that the ViewModel implements INotifyPropertyChanged to get all the goodness that it provides, but this should convey the general idea.
Metadata literally means data about data, but what you seem to be asking for is a way to control and change behavior of your objects.
I think such a concern is much better addressed with a Role Interface - see e.g. Udi Dahan's talk about Making Roles Explicit. More specifically, the Strategy design pattern is used to define loosely coupled behavior. I'd look for a way to combine those two concepts.
As we already know from .NET, the use of static, weakly typed attributes severely limits our options for recomposing components, so I wouldn't go in that direction.
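To make that a bit more concrete, here is a loose sketch of combining a role interface with a strategy (all names are illustrative; this is not from Udi's talk or a specific framework):

using System.Collections.Generic;
using System.Linq;

// The UI concern becomes an explicit role instead of free-form metadata.
public interface IHaveSortIndex
{
    int SortIndex { get; set; }
}

// A strategy encapsulates how items are ordered, so it can be swapped
// (user-defined order, alphabetical, ...) without touching the domain entities.
public interface ISortStrategy<T>
{
    IEnumerable<T> Sort(IEnumerable<T> items);
}

public class UserDefinedSortStrategy<T> : ISortStrategy<T> where T : IHaveSortIndex
{
    public IEnumerable<T> Sort(IEnumerable<T> items)
    {
        return items.OrderBy(i => i.SortIndex);
    }
}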