Object references used as keys in dictionary on client and WCF server - c#

I'm writing a client and service using WCF, however I suspect that there are multiple issues at the moment.
As you can see from the code below, the process is as follows: The client asks for some data, which the service generates and returns in a DTO-object. The client then attempts to do a look-up in a returned dictionary, but this throws a KeyNotFoundException.
In addition, a test on the server fails before this (if left uncommented) because the input parameter list allBranches no longer contains Branch currentBranch, which it did on the client side of the method call.
Can someone enlighten me as to what happens in this code, and why it blows up first on the server side and later on the client side?
Shared class definitions
------------------------
[DataContract(IsReference = true)]
public class Branch
{
public Branch(int branchId, string name)
{
BranchId = branchId;
Name = name;
}
[DataMember]
public int BranchId { get; set; }
[DataMember]
public string Name { get; set; }
}
[DataContract]
public class Department
{
[DataMember]
public string Name { get; set; }
// a few other properties, both primitives and complex objects
}
[DataContract]
public class MyDto
{
[DataMember]
public IDictionary<Branch, List<Department>> DepartmentsByBranch { get; set; }
[DataMember]
public Branch CurrentBranch { get; set; }
// lots of other properties, both primitives and complex objects
}
Server-side
--------------------------------
public MyDto CreateData(List<Branch> allBranches, Branch currentBranch)
{
    // BOOM: On the server side, currentBranch is no longer contained in allBranches
    // (presumably due to serialization and deserialization)
    if (!allBranches.Contains(currentBranch))
    {
        throw new ArgumentException("allBranches no longer contains currentBranch!");
    }
    // Therefore, I should probably not do the following, expecting to use
    // currentBranch as a key in departmentsByBranch later on
    var departmentsByBranch = allBranches.ToDictionary(branch => branch, branch => new List<Department>());
    return new MyDto
    {
        DepartmentsByBranch = departmentsByBranch,
        CurrentBranch = currentBranch,
    };
}
Client-side (relevant code only)
--------------------------------
var service = new ServiceProxy(); // using a binding defined in app.config
var allBranches = new List<Branch>
{
new Branch(0, "First branch"),
new Branch(1, "Second branch"),
// etc...
};
var currentBranch = allBranches[0];
MyDto dto = service.CreateData(allBranches, currentBranch);
var currentDepartments = dto.DepartmentsByBranch[currentBranch]; // BOOM: Generates KeyNotFoundException
EDIT: I followed Jon's excellent answer below and did the following (which fixed all problems):

- Made Branch immutable by giving every property a private setter. (Every class used as a key in a dictionary should be immutable, or at least have its hash code computed from immutable properties.)
- Implemented IEquatable<Branch> plus overrides of Object.Equals and GetHashCode, the latter as per this SO-answer (link).

Implementing IEquatable<Branch> is done simply by testing for equal property values:
public bool Equals(Branch other)
{
return other != null && ((BranchId == other.BranchId) && (Name == other.Name));
}

The reason your code fails is that you have two separate Branch instances in your client: the one you create locally (currentBranch) and the one that gets received from the server and created implicitly by WCF (inside dto.DepartmentsByBranch). You have not specified that these two instances are "the same thing", so as far as the dictionary is concerned it has never seen that currentBranch you are talking about.
You need to give Branch a proper implementation of IEquatable<Branch> -- and the same goes for all classes you use as dictionary keys.
Note that "proper implementation" means
If you implement IEquatable<T>, you should also override the base
class implementations of Object.Equals(Object) and GetHashCode so that
their behavior is consistent with that of the IEquatable<T>.Equals
method.
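For reference, a minimal sketch of what such an implementation of Branch might look like (assuming BranchId and Name together define identity; the 17/23 hash combination is just one common recipe):

[DataContract(IsReference = true)]
public class Branch : IEquatable<Branch>
{
    public Branch(int branchId, string name)
    {
        BranchId = branchId;
        Name = name;
    }

    [DataMember]
    public int BranchId { get; private set; }

    [DataMember]
    public string Name { get; private set; }

    public bool Equals(Branch other)
    {
        return other != null && BranchId == other.BranchId && Name == other.Name;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as Branch);
    }

    public override int GetHashCode()
    {
        // computed only from properties that never change after construction
        unchecked
        {
            int hash = 17;
            hash = hash * 23 + BranchId.GetHashCode();
            hash = hash * 23 + (Name ?? string.Empty).GetHashCode();
            return hash;
        }
    }
}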

Related

Is there a way to derive a type argument from a string for passing into a generic method?

I typed this out in Notepad++ real quick, so please forgive any typos/mistakes. If it's possible, I'd be able to get rid of some repetitive work (i.e. a long case statement). It's not a huge deal, but I'm curious whether it's possible and, if so, how bad it would be to actually implement.
jsonFromWebpage = [
    {
        "QueryType": "SomeClassName",
        "LocalCount": 5,
        "RemoteCount": 5
    },
    {
        "QueryType": "AnotherClass",
        "LocalCount": 29,
        "RemoteCount": 30
    }
]
// Model
public class StatusUpdate
{
public string QueryType { get; set; }
public int LocalCount { get; set; }
public int RemoteCount { get; set; }
}
// Controller
public IActionResult GetStatusUpdate([FromBody] List<StatusUpdate> status)
{
    _service.GetStatusUpdate(status);
    return Json(status);
}
// Service
public List<StatusUpdate> GetStatusUpdate(List<StatusUpdate> status)
{
    foreach (var s in status)
    {
        var typeArgument = s.QueryType;               // <--- Is there a way for this...
        s.LocalCount = GetTotalCount<typeArgument>(); // <--- to work here?
        s.RemoteCount = thisworksfineforotherreasons(s.QueryType);
    }
    return status;
}
// Repo
public int GetTotalCount<T>() where T: class
{
var result = _db.GetCount<T>();
return result;
}
EDIT
First, thank you to everyone that has responded. Having read everything so far, I wanted to give a little more context. Here's a different take on the example:
// View
<div class="col-12">
<div class="api-types">History</div>
<div class="progress-bar">50 out of 50 copied</div>
</div>
<div class="col-12">
<div class="api-types">Users</div>
<div class="progress-bar">25 out of 32 copied</div>
</div>
// -- View javascript
var types = [];
$(".api-types").each(function (c, i) {
types.push({ ApiAndClassName: $(i).text() });
});
pushToController(JSON.stringify(types));
// Controller
public IActionResult GetSyncStatus(List<SyncStatusVM> status)
{
_service.GetSyncStatus(status);
return Json(status);
}
// Service
public List<SyncStatusVM> GetSyncStatus(List<SyncStatusVM> status)
{
    foreach (var s in status)
    {
        // LocalCount
        var magicTypeFigurator = s.ApiAndClassName;
        s.LocalCount = _repo.GetCount<magicTypeFigurator>(); // <-- "this is a variable but should be a type..."
        // Remote
        var url = $"https://api.domain.com/{s.ApiAndClassName.ToLower()}";
        s.RemoteCount = FetchCountFromApi(url);
    }
    return status;
}
// Repository
public long GetCount<T>()
{
var result = _orm.Count<T>();
return result;
}
// Models
public class SyncStatusVM
{
public string ApiAndClassName { get; set; }
public int LocalCount { get; set; }
public int RemoteCount { get; set; }
}
public class History
{
public long Id {get;set;}
public DateTime CreatedDate {get;set;}
public string Message {get;set;}
}
public class Users
{
public long Id {get;set;}
public string FirstName {get;set;}
public string LastName {get;set;}
}
Using this code, I can just create a section in the view and a class for each type. The class is reused by the ORM and for deserializing from the API. The most cumbersome point is having a case statement in the controller that calls the generic method with the correct type, based on the "ApiAndClassName". I could edit the ORM so it's string-based instead of generic, but I don't like that approach for various reasons. I could turn the case statement into a collection in the controller or just move it to the service layer, but what I have in place already works. I could also refactor so the view builds from a collection, but there are other data points where that wouldn't be the best option. Unless there's something I'm missing, the generic-argument-from-string idea kinda makes sense here. It's a fringe case... and I'm just curious whether it can be done well enough.
Generally, strong typing is your friend. Compile-time type checks are a feature, not an enemy to be fought. Without them, or with too aggressive casting, we get the JavaScript and PHP examples from this comic.
For working with weakly typed languages or web services, .NET has the ExpandoObject. The data can be stored in it and later transferred into an instance of the proper type. It also looks like your case would fall under JSON deserialization, which is well-established code.
Generic is the wrong term. Generics are usually about the type still being known at compile time, so the compile-time type checks still work. You are explicit that the type is not known at compile time, only at runtime. That is very distinct from generics; dynamic types are the proper term, afaik (not to be confused with the dynamic type - yes, the naming here gets really confusing).
Reflection is the droid you are looking for. For most purposes, the name of a class or field does not exist at runtime; it is primarily there for you and the compiler to communicate. Reflection is the exception: it is all about getting things (like instances or properties/fields) based on a string representation of their name. The necessary metadata is baked into .NET assemblies, as is the COM support. But as I favor strong typing, I am not a friend of it.
switch/case statements can usually be replaced with a collection of some sort. Cases are really just a hardcoded way to check a collection of constants. You use the case identifier as the key and whatever else you need as the value. You can totally use functions as the value (thanks to delegates), or the Type type, which you can then use for instance creation.
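To illustrate the collection idea with the classes from the question (a rough sketch only; SyncService, the Repository parameter, and the member names are assumptions, not your actual code):

// Hypothetical sketch: a lookup table replaces the long switch.
// History, Users and GetCount<T>() come from the question.
public class SyncService
{
    private readonly Dictionary<string, Func<long>> _countersByName;

    public SyncService(Repository repo)
    {
        _countersByName = new Dictionary<string, Func<long>>
        {
            ["History"] = () => repo.GetCount<History>(),
            ["Users"]   = () => repo.GetCount<Users>()
        };
    }

    public long GetLocalCount(string apiAndClassName)
    {
        return _countersByName[apiAndClassName]();
    }
}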
But for your case, it sounds like all of this is wrong. Bog-standard inheritance might be the real droid you are looking for. A JSON service would not usually give you different kinds of instances in a single collection unless those instances are related in some way. "SomeClassName" and "AnotherClass" should have a common ancestor. Or, in fact, they should just be one class, with QueryType simply being a string field of said class.
Assuming that you have a way to map strings to Type objects, yes: you can use MethodInfo.MakeGenericMethod():
var totalCount = (int) (
GetType()
.GetMethod("GetTotalCount")
.MakeGenericMethod(MapStringToType(s.QueryType))
.Invoke(this, null)
);
This assumes the presence of a method Type MapStringToType(string) in the local scope.
One way to map types would be to use a Dictionary<string, Type> and fill it with the allowed types and their respective names that will be used in the JSON data to refer to them.
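For example, a possible way to build that map (assumed, not from the original answer), using the History and Users classes from the question so it doubles as a whitelist of the type names the JSON may refer to:

private static readonly Dictionary<string, Type> AllowedTypes =
    new Dictionary<string, Type>
    {
        ["History"] = typeof(History),
        ["Users"]   = typeof(Users)
    };

private static Type MapStringToType(string name)
{
    Type type;
    if (!AllowedTypes.TryGetValue(name, out type))
        throw new ArgumentException($"Unknown query type '{name}'.");
    return type;
}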

Return only a subset of properties of an object from an API

Say I have a database in which I am storing user details of this structure:
public class User
{
public string UserId { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public string PasswordHash { get; set; }
}
I have a data access layer that works with this that contains methods such as GetById() and returns me a User object.
But then say I have an API which needs to return a users details, but not sensitive parts such as the PasswordHash. I can get the User from the database but then I need to strip out certain fields. What is the "correct" way to do this?
I've thought of a few ways to deal with this, most of which involve splitting the User class into a base class with the non-sensitive data and a derived class that contains the properties I would want kept secret, and then converting or mapping the object to the base class before returning it; however, this feels clunky and dirty.
It feels like this should be a relatively common scenario, so am I missing an easy way to handle it? I'm working with ASP.Net core and MongoDB specifically, but I guess this is more of a general question.
It seems for my purposes the neatest solution is something like this:
Split the User class into a base class and derived class, and add a constructor to copy the required fields:
public class User
{
public User() { }
public User(UserDetails user)
{
this.UserId = user.UserId;
this.Name = user.Name;
this.Email = user.Email;
}
public string UserId { get; set; }
public string Name { get; set; }
public string Email { get; set; }
}
public class UserDetails : User
{
public string PasswordHash { get; set; }
}
The data access class would return a UserDetails object which could then be converted before returning:
UserDetails userDetails = _dataAccess.GetUser();
User userToReturn = new User(userDetails);
This could also be done using AutoMapper, as Daniel suggested, instead of the constructor approach. I don't love doing this, hence why I asked the question, but this seems to be the neatest solution and requires the least duplication.
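A rough sketch of the AutoMapper variant (using AutoMapper's MapperConfiguration API; the setup would normally live in application startup):

var config = new MapperConfiguration(cfg => cfg.CreateMap<UserDetails, User>());
IMapper mapper = config.CreateMapper();

// Later, instead of calling the copy constructor:
UserDetails userDetails = _dataAccess.GetUser();
User userToReturn = mapper.Map<User>(userDetails);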
There are two ways to do this:
Use the same class and only populate the properties that you want to send. The problem with this is that value types will have the default value (int properties will be sent as 0, when that may not be accurate).
Use a different class for the data you want to send to the client. This is basically what Daniel is getting at in the comments - you have a different model that is "viewed" by the client.
The second option is most common. If you're using Linq, you can map the values with Select():
users.Select(u => new UserModel { Name = u.Name, Email = u.Email });
A base type will not work the way you hope. If you cast a derived type to its parent type and serialize it, it still serializes the properties of the derived type.
Take this for example:
public class UserBase {
public string Name { get; set; }
public string Email { get; set; }
}
public class User : UserBase {
public string UserId { get; set; }
public string PasswordHash { get; set; }
}
var user = new User() {
UserId = "Secret",
PasswordHash = "Secret",
Name = "Me",
Email = "something"
};
var serialized = JsonConvert.SerializeObject((UserBase) user);
Notice that cast while serializing. Even so, the result is:
{
"UserId": "Secret",
"PasswordHash": "Secret",
"Name": "Me",
"Email": "something"
}
It still serialized the properties from the User type even though it was cast to UserBase.
If you want to always ignore the property, just add the [JsonIgnore] annotation to your model like this; it will skip the property when the model is serialized:
[JsonIgnore]
public string PasswordHash { get; set; }
If you want to ignore it at runtime (that is, dynamically), there is a built-in feature available in Newtonsoft.Json:
public class User
{
public string UserId { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public string PasswordHash { get; set; }
// FYI: the naming pattern is ShouldSerialize_PROPERTY_NAME_HERE()
public bool ShouldSerializePasswordHash()
{
// return true only when the hash should be included in the output
return false;
}
}
It is called "conditional property serialization" and the documentation can be found here. hope this helps
The problem is that you're viewing this wrong. An API, even if it's working directly with a particular database entity, is not dealing with entities. There's a separation of concerns issue at play here. Your API is dealing with a representation of your user entity. The entity class itself is a function of your database. It has stuff on it that only matters to the database, and importantly, stuff on it that does not matter to your API. Trying to have one class that can satisfy multiple different applications is folly, and will only lead to brittle code with nested dependencies.
More to the point, how are you going to interact with this API? Namely, if your API exposes your User entity directly, then any code that consumes this API either must take a dependency on your data layer so it can access User or it must implement its own class representing a User and hope that it matches up with what the API actually wants.
Now imagine the alternative. You create a "common" class library that is shared between your API and any client. In that library, you define something like UserResource. Your API binds to/from UserResource only, and maps that back and forth to User. Now you have completely segregated your data layer: clients only know about UserResource, and the only thing that touches your data layer is your API. And, of course, you can now limit what information on User is exposed to clients of your API simply by how you build UserResource. Better still, if your application's needs change, User can change without spiraling out into an API conflict for each consuming client: you simply fix up your API, and clients go on unaware. If you do need to make a breaking change, you can do something like create a UserResource2 class along with a new version of your API. You cannot create a User2 without causing a whole new table to be created, which would then spiral out into conflicts in Identity.
Long and short, the right way to go with APIs is to always use a separate DTO class, or even multiple DTO classes. An API should never consume an entity class directly, or you're in for nothing but pain down the line.
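As a concrete sketch of that idea (class, route, and field names here are illustrative assumptions, not a prescribed API): the endpoint binds to a resource type that exposes only the safe fields and maps the entity at the boundary.

// Shared contract, visible to both the API and its clients.
public class UserResource
{
    public string UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// API endpoint: maps the entity to the resource and never exposes PasswordHash.
[HttpGet("{id}")]
public IActionResult GetUser(string id)
{
    User user = _dataAccess.GetById(id); // entity from the data layer
    var resource = new UserResource
    {
        UserId = user.UserId,
        Name = user.Name,
        Email = user.Email
    };
    return Ok(resource);
}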

Managing multiple versions of object in JSON

I have a class in C# that has a number of variables. Let's call it "QuestionItem".
I have a list of this object, which the user modifies, and then sends it via JSON serialization (with Newtonsoft JSON library) to the server.
To do so, I deserialize the objects that are already in the server, as a List<QuestionItem>, then add this new modified object to the list, and then serialize it back to the server.
In order to display this list of QuestionItems to the user, I deserialize the JSON as my object, and display it somewhere.
Now, the problem is - that I want to change this QuestionItem and add some variables to it.
But I can't send this NewQuestionItem to the server, because the items in the server are of type OldQuestionItem.
How do I merge these two types, or convert the old type to the new one, while the users with the old version will still be able to use the app?
You are using an object-oriented language, so you might as well use inheritance if possible.
Assuming your old QuestionItem to be:
[JsonObject(MemberSerialization.OptOut)]
public class QuestionItem
{
[JsonConstructor]
public QuestionItem(int Id, int Variant)
{
this.Id = Id;
this.Variant = Variant;
}
public int Id { get; }
public int Variant { get; }
public string Name { get; set; }
}
you can extend it by creating a child class:
[JsonObject(MemberSerialization.OptOut)]
public class NewQuestionItem : QuestionItem
{
private DateTime _firstAccess;
[JsonConstructor]
public NewQuestionItem(int Id, int Variant, DateTime FirstAccess) : base(Id, Variant)
{
this.FirstAccess = FirstAccess;
}
public DateTime FirstAccess { get; }
}
Note that using anything other than the default constructor for a class requires you to put the [JsonConstructor] attribute on that constructor, and every argument of said constructor must be named exactly like the corresponding JSON property. Otherwise you will get an exception, because there is no default constructor available.
Your WebAPI will now send serialized NewQuestionItems, which can be deserialized to QuestionItems. In fact, by default JSON.NET, like most JSON libraries, will deserialize the data into any object as long as they have at least one property in common. Just make sure that any member of the object you want to serialize/deserialize can actually be serialized.
You can test the example above with the following three lines of code:
var newQuestionItem = new NewQuestionItem(1337, 42, DateTime.Now) {Name = "Hello World!"};
var jsonString = JsonConvert.SerializeObject(newQuestionItem);
var oldQuestionItem = JsonConvert.DeserializeObject<QuestionItem>(jsonString);
and simply looking at the property values of the oldQuestionItem in the debugger.
So, this is possible as long as your NewQuestionItem only adds properties to the object and neither removes nor modifies existing ones.
If that is not the case, then your objects are genuinely different and thus require completely different resources with different URIs in your API, at least as long as you still need to maintain the old version at the existing URI.
Which brings us to the general architecture:
The cleanest and most streamlined approach to what you are trying to achieve is to properly version your API.
For the purposes of this answer I am assuming ASP.NET Web API, since you are handling the JSON in C#/.NET. Versioning allows different controller methods to be called for different versions and thus lets you make structural changes to the resources your API provides, depending on when they were implemented. Other API frameworks provide equal or at least similar features, or it can be implemented manually.
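A minimal illustration with attribute routing in ASP.NET Web API 2 (controller names and routes here are assumptions; the point is only that each version gets its own endpoint and can return its own shape):

[RoutePrefix("api/v1/questions")]
public class QuestionsV1Controller : ApiController
{
    [Route(""), HttpGet]
    public IEnumerable<QuestionItem> Get()
    {
        // old clients keep receiving the old shape
        return new List<QuestionItem>();
    }
}

[RoutePrefix("api/v2/questions")]
public class QuestionsV2Controller : ApiController
{
    [Route(""), HttpGet]
    public IEnumerable<NewQuestionItem> Get()
    {
        // new clients receive the extended shape
        return new List<NewQuestionItem>();
    }
}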
Depending on the number and size of the actual objects and the potential complexity of the request and result sets, it might also be worth looking into wrapping requests or responses with additional information. So instead of asking for an object of type T, you ask for an object of type QueryResult<T>, with it being defined along the lines of:
[JsonObject(MemberSerialization.OptOut)]
public class QueryResult<T>
{
[JsonConstructor]
public QueryResult(T Result, ResultState State,
Dictionary<string, string> AdditionalInformation)
{
this.Result = Result;
this.State = State;
this.AdditionalInformation = AdditionalInformation;
}
public T Result { get; }
public ResultState State { get; }
public Dictionary<string, string> AdditionalInformation { get; }
}
public enum ResultState : byte
{
Success = 0,
Obsolete = 1,
AuthenticationError = 2,
DatabaseError = 4,
// ...
}
which will allow you to ship additional information, such as api version number, api version release, links to different API endpoints, error information without changing the object type, etc.
The alternative to using a wrapper with a custom header is to fully implement the HATEOAS constraint, which is also widely used. Both can, together with proper versioning, save you most of the trouble with API changes.
How about wrapping your OldQuestionItem as a property of the NewQuestionItem? For example:
public class NewQuestionItem
{
public OldQuestionItem OldItem { get; set; }
public string Property1 {get; set; }
public string Property2 {get; set; }
...
}
This way you can maintain the previous version of the item, yet define new information to be returned.
Koda
You can use something like
public class OldQuestionItem
{
public DateTime UploadTimeStamp {get; set;} // if less than DateTime.Now then it is an (old) QuestionItem
public string Property1 {get; set; }
public string Property2 {get; set; }
...
public OldQuestionItem(NewQuestionItem newItem)
{
//logic to convert new in old
}
}
public class NewQuestionItem : OldQuestionItem
{
}
and use UploadTimeStamp as a marker to understand which version of the question it is.

ResponseDTO with complex Property in ServiceStack

Having a Response with a complex property, I want to map it to my ResponseDTO properly. For all basic types it works out flawlessly.
The ResponseDTO looks like this:
public class ResponseDto
{
public string Id {
get;
set;
}
public struct Refs
{
public Genre GenreDto {
get;
set;
}
public Location LocationDto {
get;
set;
}
}
public Refs References {
get;
set;
}
}
Genre and Location are both, for now, simple classes with simple properties (int/string):
public class GenreDto {
public string Id {
get;
set;
}
public string Name {
get;
set;
}
}
Question:
Is there any way, without changing/replacing the generic deserializer (and a more specific example) (in this example, JSON), to map such complex properties?
One specific difference to the GithubResponse example is that I can't use a dictionary of one type, since I have different types under references. That's why I use a struct, but this does not seem to work. Maybe only IEnumerables are allowed?
Update
There is a way, using lambda expressions, to parse the JSON manually (github.com/ServiceStack/ServiceStack.Text/blob/master/tests/ServiceStack.Text.Tests/UseCases/CentroidTests.cs#L136), but I would really like to avoid this, since the ResponseDTO becomes kind of useless that way - when writing this kind of manual mapping I would no longer use AutoMapper to map from ResponseDto to the domain model, and I like that abstraction and "separation".
Thanks
I used lambda expressions to solve this issue; a more complex example would be:
static public Func<JsonObject,Cart> fromJson = cart => new Cart(new CartDto {
Id = cart.Get<string>("id"),
SelectedDeliveryId = cart.Get<string>("selectedDeliveryId"),
SelectedPaymentId = cart.Get<string>("selectedPaymentId"),
Amount = cart.Get<float>("selectedPaymentId"),
AddressBilling = cart.Object("references").ArrayObjects("address_billing").FirstOrDefault().ConvertTo(AddressDto.fromJson),
AddressDelivery = cart.Object("references").ArrayObjects("address_delivery").FirstOrDefault().ConvertTo(AddressDto.fromJson),
AvailableShippingTypes = cart.Object("references").ArrayObjects("delivery").ConvertAll(ShippingTypeDto.fromJson),
AvailablePaypmentTypes = cart.Object("references").ArrayObjects("payment").ConvertAll(PaymentOptionDto.fromJson),
Tickets = cart.Object("references").ArrayObjects("ticket").ConvertAll(TicketDto.fromJson)
});
So this lambda expression is used to parse the JsonObject response of the request and map everything inside, even nested resources. This works out very well and is flexible.
Some time ago I stumbled upon a similar problem. Actually, ServiceStack works well with complex properties. The problem in my scenario was that I was fetching data from a database and passing the objects returned by the DB provider directly to ServiceStack. The solution was to either create DTOs out of the models returned by the DB provider or invoke .ToList() on those same models.
I'm just sharing some experience with ServiceStack, but maybe you can specify what's not working for you. Is there an exception thrown, or something else?

C#, problem mixing Xml Serialization with Nhibernate

I am working on a program that uses NHibernate to persist objects and XML serialization to import and export data. I can't use the same properties for collections in both cases because, for example, NHibernate needs them to be ILists (it has its own implementation of that interface), and I can't serialize interfaces. But since I need both properties to stay synchronized, I thought I could use two different properties backed by the same field. Each property is shaped to what the respective framework needs, and both update the field accordingly.
So, I have the following field:
private IList<Modulo> modulos;
And the following properties:
[XmlIgnore]
public virtual IList<Modulo> Modulos
{
get { return modulos; }
set { modulos = value; }
}
[XmlArray]
[XmlArrayItem(typeof(Modulo))]
public virtual ArrayList XmlModulos
{
get
{
if (modulos == null) return new ArrayList();
var aux = new ArrayList();
foreach (Modulo m in modulos)
aux.Add(m);
return aux;
}
set
{
modulos = new List<Modulo>();
foreach (object o in value)
modulos.Add((Modulo)o);
}
}
The first one is working perfectly, being quite standard, but I have some problems with the second one. The getter works great, as I have no problems serializing objects (meaning it correctly takes the info from the field). But when I need to deserialize, it does not get all the info back: the debugger says that after deserialization the field is not updated (null) and the property is empty (Count = 0).
The obvious solution would be using two unrelated properties, one for each framework, and passing the information manually when needed. But the class structure is quite complicated, and I think there should be a simpler way to do this.
Any idea how I can modify my property so it does what I want? Any help will be appreciated.
The short answer is that you can't.
Typically you would create a DTO (Data Transfer Object) separate from your NHibernate objects. For example:
public class PersonDto
{
[XmlAttribute(AttributeName = "person-id")]
public int Id { get; set; }
[XmlAttribute(AttributeName = "person-name")]
public string Name{ get; set; }
}
On your DTO object you only put the properties that you intend to serialize. You then create a DTO from your domain model when you need to serialize one.
There is a great little library called AutoMapper that makes mapping from your domain objects to your DTOs pretty straightforward. See: http://automapper.codeplex.com/
Here is an example of a Person class that supports mapping to the above DTO:
public class Person
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
static Person()
{
Mapper.CreateMap<PersonDto, Person>();
Mapper.CreateMap<Person, PersonDto>();
}
public Person(PersonDto dto)
{
Mapper.Map<PersonDto, Person>(dto, this);
}
public PersonDto ToPersonDto()
{
var dto = new PersonDto();
Mapper.Map<Person, PersonDto>(this, dto);
return dto;
}
}
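Example usage (assumed, not part of the original answer): map the NHibernate entity to the DTO and hand only the DTO to XmlSerializer (requires System.Xml.Serialization and System.IO):

var person = new Person(new PersonDto { Id = 1, Name = "Ada" });

// Serialize the DTO, never the NHibernate entity itself.
PersonDto dto = person.ToPersonDto();
var serializer = new XmlSerializer(typeof(PersonDto));
using (var writer = new StringWriter())
{
    serializer.Serialize(writer, dto);
    Console.WriteLine(writer.ToString());
}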
