Best way to store sets of data (In a table) - c#

I have a client and a server program, and I am currently using hashtables to store each client's name and IP address when they connect. I now need to add another variable that the client will send to the server when it connects, but as far as I understand it, hashtables only have two columns (key and value). Is there another way I could store this data instead of using hashtables?

You can stick objects into hash tables or dictionaries.
So make yourself a user class of some description and then store your user object in the dictionary under the user's name.
This then lends itself to taking on other properties as and when you need them.
public class User
{
    public string Name { get; set; }
    public string IPAddress { get; set; }
    public string AnotherProperty { get; set; }
}

Dictionary<string, User> userTable = new Dictionary<string, User>();
userTable.Add(userName, new User { Name = "fred", IPAddress = "127.0.0.1", AnotherProperty = "blah" });

Something like this.

You could create a new class specifically for storing this information, e.g.
public class Client
{
    public string Name { get; set; }
    public string IP { get; set; }
    // ... etc.
}

Instead of using a Hashtable, simply store an instance of the Client class in the Session object (or whatever you're using to store the Hashtable).
If you need to store more than one client's details, then use a List<Client>, or a Dictionary if you need to use keys to look up items, as in the sketch below. Experiment to see what works for you.
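A minimal sketch of both options, assuming the Client class above (the variable names and sample values are just for illustration):

// Keep a simple list when you only need to enumerate clients
List<Client> clients = new List<Client>();
clients.Add(new Client { Name = "fred", IP = "127.0.0.1" });

// Or key the collection by name when you need fast lookups
Dictionary<string, Client> clientsByName = new Dictionary<string, Client>();
clientsByName["fred"] = new Client { Name = "fred", IP = "127.0.0.1" };

Client found;
if (clientsByName.TryGetValue("fred", out found))
{
    // use found.IP here
}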

I'm not sure it's the best solution, but just for general knowledge: if you really need a table, you can use the DataTable class, which is part of ADO.NET (and so native to .NET).
The reason this is not advisable is that it doesn't give you keyed lookups the way a dictionary does (unless you configure a primary key), and it generally complicates things in your scenario.
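For completeness, a minimal sketch of what that would look like (the column names are just an assumption based on the question):

using System.Data;

DataTable clients = new DataTable("Clients");
clients.Columns.Add("Name", typeof(string));
clients.Columns.Add("IPAddress", typeof(string));
clients.Columns.Add("AnotherProperty", typeof(string));

// Optionally enforce uniqueness on the name column
clients.PrimaryKey = new[] { clients.Columns["Name"] };

clients.Rows.Add("fred", "127.0.0.1", "blah");

// Keyed lookup only works because a primary key was configured above
DataRow row = clients.Rows.Find("fred");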

Related

How to automatically convert encrypted string column to decimal in C#?

I'm working on an application in .Net Core 3.1 where we need to encrypt some database columns. Initially we tried to use SQL Server's own column-level encryption. But during the tests we came across some problems and conflicts with the certificate, because every time we changed the columns, the certificate stopped working.
Therefore, we decided to try another approach, applying encryption in the application itself. After some research, I found two packages:
EntityFrameworkCore.DataEncryption
EntityFrameworkCore.EncryptColumn
I followed some examples I found on the internet and implemented one using the EntityFrameworkCore.DataEncryption package. The problem is that encryption can only be applied to string-type fields, while the data I need to encrypt is decimal (a salary, for example). As the application performs several operations involving these decimal fields, I would like to somehow convert the fields automatically during reading and writing.
Example:
public class Produto
{
    [Key]
    public int IdProduto { get; set; }

    public string NomeProduto { get; set; }

    [Encrypted]
    public string Valor { get; set; }

    [Encrypted]
    public string Desconto { get; set; }

    [Encrypted]
    public string ValorVenda { get; set; }
}
In my Produto class I need to encrypt some fields, and they have to be strings for the encryption to work. I would like to somehow check whether a field has the [Encrypted] annotation, have it converted to decimal automatically on get, and convert it back to string before persisting it to the database.
I've tried the examples I've found, but so far without success. Could someone please tell me if this is possible and, if so, how it could be done?
Thank you
You can use reflection to check whether the type/property has the [Encrypted] attribute. I would adapt the code from this page:
https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/attributes/accessing-attributes-by-using-reflection
The key is to use reflection: you can read a class's properties, methods, etc. to check for attributes.
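A minimal sketch of that idea, listing the properties of an entity that carry the [Encrypted] attribute (the attribute type name "EncryptedAttribute" is an assumption based on the annotation used in the question's model):

using System;
using System.Linq;
using System.Reflection;

static class EncryptedPropertyScanner
{
    public static PropertyInfo[] GetEncryptedProperties(Type entityType)
    {
        return entityType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.GetCustomAttributes(inherit: true)
                         .Any(a => a.GetType().Name == "EncryptedAttribute"))
            .ToArray();
    }
}

// Usage:
// foreach (var prop in EncryptedPropertyScanner.GetEncryptedProperties(typeof(Produto)))
//     Console.WriteLine(prop.Name); // Valor, Desconto, ValorVenda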
You could try adding an additional, unmapped property for each of the fields you want encrypted. For example, for ValorVenda:

// omit the [Encrypted] attribute
public string ValorVenda { get; set; }

[NotMapped]
public string ValorVendaLocal
{
    // Decrypt and Encrypt are placeholders for whatever routines you use
    get { return Decrypt(ValorVenda); }
    set { ValorVenda = Encrypt(value); }
}

Return only a subset of properties of an object from an API

Say I have a database in which I am storing user details of this structure:
public class User
{
public string UserId { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public string PasswordHash { get; set; }
}
I have a data access layer that works with this and contains methods such as GetById(), which returns a User object.
But then say I have an API which needs to return a user's details, but not sensitive parts such as the PasswordHash. I can get the User from the database, but then I need to strip out certain fields. What is the "correct" way to do this?
I've thought of a few ways to deal with this, most of which involve splitting the User class into a base class with the non-sensitive data and a derived class that contains the properties I want kept secret, and then converting or mapping the object to the base class before returning it. However, this feels clunky and dirty.
It feels like this should be a relatively common scenario, so am I missing an easy way to handle it? I'm working with ASP.NET Core and MongoDB specifically, but I guess this is more of a general question.
It seems for my purposes the neatest solution is something like this:
Split the User class into a base class and derived class, and add a constructor to copy the required fields:
public class User
{
    public User() { }

    public User(UserDetails user)
    {
        this.UserId = user.UserId;
        this.Name = user.Name;
        this.Email = user.Email;
    }

    public string UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public class UserDetails : User
{
    public string PasswordHash { get; set; }
}
The data access class would return a UserDetails object which could then be converted before returning:
UserDetails userDetails = _dataAccess.GetUser();
User userToReturn = new User(userDetails);
This could also be done using AutoMapper, as Daniel suggested, instead of the constructor approach. I don't love doing this (hence why I asked the question), but it seems to be the neatest solution and requires the least duplication.
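For reference, a minimal sketch of the AutoMapper variant, assuming the AutoMapper package is installed and the same _dataAccess object as above:

using AutoMapper;

// Configure a one-way map from the full entity to the public shape
var config = new MapperConfiguration(cfg => cfg.CreateMap<UserDetails, User>());
IMapper mapper = config.CreateMapper();

UserDetails userDetails = _dataAccess.GetUser();
User userToReturn = mapper.Map<User>(userDetails);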
There are two ways to do this:
Use the same class and only populate the properties that you want to send. The problem with this is that value types will have the default value (int properties will be sent as 0, when that may not be accurate).
Use a different class for the data you want to send to the client. This is basically what Daniel is getting at in the comments - you have a different model that is "viewed" by the client.
The second option is the most common. If you're using LINQ, you can project the values with Select():
users.Select(u => new UserModel { Name = u.Name, Email = u.Email });
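For completeness, UserModel above is a hypothetical view model; it could be as simple as:

public class UserModel
{
    public string Name { get; set; }
    public string Email { get; set; }
}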
A base type will not work the way you hope. If you cast a derived type to its parent type and serialize it, it still serializes the properties of the derived type.
Take this for example:
public class UserBase {
    public string Name { get; set; }
    public string Email { get; set; }
}

public class User : UserBase {
    public string UserId { get; set; }
    public string PasswordHash { get; set; }
}

var user = new User() {
    UserId = "Secret",
    PasswordHash = "Secret",
    Name = "Me",
    Email = "something"
};
var serialized = JsonConvert.SerializeObject((UserBase) user);
Notice the cast while serializing. Even so, the result is:
{
    "UserId": "Secret",
    "PasswordHash": "Secret",
    "Name": "Me",
    "Email": "something"
}
It still serialized the properties from the User type even though it was cast to UserBase.
If you want to ignore a property, just add the [JsonIgnore] attribute to it in your model like this; the property will then be skipped when the model is serialized.
[JsonIgnore]
public string PasswordHash { get; set; }
If you want to ignore it at runtime (that is, dynamically), there is a built-in feature available in Newtonsoft.Json:
public class User
{
    public string UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string PasswordHash { get; set; }

    // FYI: the convention is ShouldSerialize_PROPERTY_NAME_HERE()
    public bool ShouldSerializePasswordHash()
    {
        // replace with whatever condition decides when the hash should be serialized
        return false;
    }
}
It is called "conditional property serialization" and the documentation can be found here. hope this helps
The problem is that you're viewing this wrong. An API, even if it's working directly with a particular database entity, is not dealing with entities. There's a separation of concerns issue at play here. Your API is dealing with a representation of your user entity. The entity class itself is a function of your database. It has stuff on it that only matters to the database, and importantly, stuff on it that does not matter to your API. Trying to have one class that can satisfy multiple different applications is folly, and will only lead to brittle code with nested dependencies.
More to the point, how are you going to interact with this API? Namely, if your API exposes your User entity directly, then any code that consumes this API either must take a dependency on your data layer so it can access User or it must implement its own class representing a User and hope that it matches up with what the API actually wants.
Now imagine the alternative. You create a "common" class library that will be shared between your API and any client. In that library, you define something like UserResource. Your API binds to/from UserResource only, and maps that back and forth to User. Now you have completely segregated your data layer: clients only know about UserResource, and the only thing that touches your data layer is your API. And, of course, you can now limit what information on User is exposed to clients of your API simply by how you build UserResource. Better still, if your application's needs change, User can change without spiraling out into an API conflict for each consuming client. You simply fix up your API, and clients go on unawares. If you do need to make a breaking change, you can do something like create a UserResource2 class, along with a new version of your API. You cannot create a User2 without causing a whole new table to be created, which would then spiral out into conflicts in Identity.
Long and short, the right way to go with APIs is to always use a separate DTO class, or even multiple DTO classes. An API should never consume an entity class directly, or you're in for nothing but pain down the line.
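As a rough illustration of that separation (UserResource, the controller wiring, and the mapping code here are hypothetical, not part of the question's code):

// Shared "common" library: the only type API clients ever see
public class UserResource
{
    public string UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// In the API: map the entity to the resource before returning it
[HttpGet("{id}")]
public ActionResult<UserResource> GetUser(string id)
{
    User user = _dataAccess.GetById(id);
    if (user == null)
        return NotFound();

    return new UserResource
    {
        UserId = user.UserId,
        Name = user.Name,
        Email = user.Email
        // PasswordHash deliberately never leaves the API
    };
}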

Managing multiple versions of object in JSON

I have a class in C# that has a number of variables. Let's call it "QuestionItem".
I have a list of this object, which the user modifies and then sends via JSON serialization (with the Newtonsoft JSON library) to the server.
To do so, I deserialize the objects that are already on the server as a List<QuestionItem>, then add this new modified object to the list, and then serialize it back to the server.
In order to display this list of QuestionItems to the user, I deserialize the JSON as my object and display it somewhere.
Now, the problem is that I want to change this QuestionItem and add some variables to it.
But I can't send this NewQuestionItem to the server, because the items in the server are of type OldQuestionItem.
How do I merge these two types, or convert the old type to the new one, while the users with the old version will still be able to use the app?
You are using an object-oriented language, so you might as well use inheritance if possible.
Assuming your old QuestionItem to be:
[JsonObject(MemberSerialization.OptOut)]
public class QuestionItem
{
    [JsonConstructor]
    public QuestionItem(int Id, int Variant)
    {
        this.Id = Id;
        this.Variant = Variant;
    }

    public int Id { get; }
    public int Variant { get; }
    public string Name { get; set; }
}
you can extend it by creating a child class:
[JsonObject(MemberSerialization.OptOut)]
public class NewQuestionItem : QuestionItem
{
    [JsonConstructor]
    public NewQuestionItem(int Id, int Variant, DateTime FirstAccess) : base(Id, Variant)
    {
        this.FirstAccess = FirstAccess;
    }

    public DateTime FirstAccess { get; }
}
Note that using anything other than the default constructor for a class requires you to put the [JsonConstructor] attribute on that constructor, and every argument of said constructor must be named exactly like the corresponding JSON properties. Otherwise you will get an exception, because there is no default constructor available.
Your Web API will now send serialized NewQuestionItems, which can be deserialized to QuestionItems. In fact, by default JSON.NET, as with most JSON libraries, will deserialize to any object as long as they have at least one property in common. Just make sure that any member of the object you want to serialize/deserialize can actually be serialized.
You can test the example above with the following three lines of code:
var newQuestionItem = new NewQuestionItem(1337, 42, DateTime.Now) {Name = "Hello World!"};
var jsonString = JsonConvert.SerializeObject(newQuestionItem);
var oldQuestionItem = JsonConvert.DeserializeObject<QuestionItem>(jsonString);
and simply looking at the property values of the oldQuestionItem in the debugger.
So this is possible as long as your NewQuestionItem only adds properties to the object and neither removes nor modifies existing ones.
If that is not the case, then your objects are genuinely different and thus require a completely different resource with its own URI in your API, as long as you still need to maintain the old version on the existing URI.
Which brings us to the general architecture:
The cleanest and most streamlined approach to what you are trying to achieve is to properly version your API.
For the purposes of this answer I am assuming ASP.NET Web API, since you are handling the JSON in C#/.NET. Versioning allows different controller methods to be called for different versions and thus lets you make structural changes to the resources your API provides, depending on when they were introduced. Other API frameworks provide equal or at least similar features, or versioning can be implemented manually.
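A minimal sketch of what URI-based versioning could look like, assuming ASP.NET Core-style attribute routing (the route and controller names are just an assumption):

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

// v1 keeps serving the old shape to existing clients
[ApiController]
[Route("api/v1/questions")]
public class QuestionsV1Controller : ControllerBase
{
    [HttpGet]
    public IEnumerable<QuestionItem> Get() => Enumerable.Empty<QuestionItem>(); // load from storage in practice
}

// v2 serves the extended shape without breaking v1 clients
[ApiController]
[Route("api/v2/questions")]
public class QuestionsV2Controller : ControllerBase
{
    [HttpGet]
    public IEnumerable<NewQuestionItem> Get() => Enumerable.Empty<NewQuestionItem>(); // load from storage in practice
}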
Depending on the number and size of the actual objects and the potential complexity of the request and result sets, it might also be worth looking into wrapping requests or responses with additional information. So instead of asking for an object of type T, you ask for an object of type QueryResult<T>, defined along the lines of:
[JsonObject(MemberSerialization.OptOut)]
public class QueryResult<T>
{
    [JsonConstructor]
    public QueryResult(T Result, ResultState State,
        Dictionary<string, string> AdditionalInformation)
    {
        this.Result = Result;
        this.State = State;
        this.AdditionalInformation = AdditionalInformation;
    }

    public T Result { get; }
    public ResultState State { get; }
    public Dictionary<string, string> AdditionalInformation { get; }
}

public enum ResultState : byte
{
    Success = 0,
    Obsolete = 1,
    AuthenticationError = 2,
    DatabaseError = 4,
    // ...
}
which will allow you to ship additional information, such as the API version number, the API release date, links to different API endpoints, or error information, without changing the object type.
The alternative to using a wrapper with a custom header is to fully implement the HATEOAS constraint, which is also widely used. Both can, together with proper versioning, save you most of the trouble with API changes.
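For example, a response wrapped this way might be constructed like this (the values are purely illustrative):

var response = new QueryResult<QuestionItem>(
    new QuestionItem(1337, 42) { Name = "Hello World!" },
    ResultState.Success,
    new Dictionary<string, string>
    {
        ["api-version"] = "2.0",
        ["deprecation-notice"] = "v1 will be retired soon"
    });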
How about wrapping your OldQuestionItem as a property of the new QuestionItem? For example:
public class NewQuestionItem
{
    public OldQuestionItem OldItem { get; set; }
    public string Property1 { get; set; }
    public string Property2 { get; set; }
    // ... etc.
}
This way you can maintain the previous version of the item, yet define new information to be returned.
Koda
You can use something like:

public class OldQuestionItem
{
    // if less than DateTime.Now, then it is an old QuestionItem
    public DateTime UploadTimeStamp { get; set; }

    public string Property1 { get; set; }
    public string Property2 { get; set; }
    // ...

    public OldQuestionItem() { }

    public OldQuestionItem(NewQuestionItem newItem)
    {
        // logic to convert new into old
    }
}

public class NewQuestionItem : OldQuestionItem
{
}

and use UploadTimeStamp as a marker to tell which version of the QuestionItem you are dealing with.

Do immutable types work for this caching issue?

I sometimes have a problem where I get an object from the cache and need to change some properties of it that didn't exist when I put the object into the cache.
Let's say I have a class:
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Identifier { get; set; }
    public bool HasNotifications { get; set; }
}
FirstName and LastName are stored in the Database. When the object is fetched from the Database it is put into the cache with HasNotifications being false.
The Person object might be used in several parts of the application and HasNotifications will be set to different values depending on the part of the application.
Changing HasNotifications changes the object in the cache and the value is not predictable anymore.
This example seems a bit contrived because there are easy ways to avoid the problem. The application I work on has the issue because sometimes it is not obvious that the object you are working on was retrieved from the cache.
If I used an immutable version of Person, would that avoid the problem? Is this a use case immutability is supposed to handle?
Caching should be used for objects that are unlikely to change frequently. It sounds like the majority of your object stays the same, aside from HasNotifications? If so, I would consider a way to move that flag out of the cached object if possible.
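A minimal sketch of how that could look, assuming C# 9+ records (the wrapper type name is just an illustration): the cached Person becomes immutable, and the per-context flag lives outside it.

// Immutable: what goes into the cache can no longer be mutated in place
public record Person(string FirstName, string LastName, int Identifier);

// Each part of the application carries its own flag alongside the cached object
public class PersonViewState
{
    public Person Person { get; }
    public bool HasNotifications { get; set; }

    public PersonViewState(Person person, bool hasNotifications = false)
    {
        Person = person;
        HasNotifications = hasNotifications;
    }
}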

How to create a single drop down for multiple data types?

I am using ASP.Net MVC 3 and I need to create a single drop down list which contains items that relate to multiple database tables.
Normally, if I need to do a drop down list for a single data type I can easily use the ID as the "value" for each drop down option and would do something like this:
@Html.DropDownListFor(x => x.SelectedID, Model.GetMyList())
But now I want to mix up multiple data types. So let's say for this example I want to create a single list to represent something like "Owner", which can be either a "User" or a "Customer". In this example, User and Customer are separate database tables, and therefore the ID value alone is not enough to identify them correctly.
So what are the best ways to achieve such functionality?
Straight off the top of my head, my first thoughts are to create a "custom" value string which could then be parsed server side to work out the ID and data type, something like...
"USER|1"
"CUSTOMER|1"
I know I can make this work, but am I making this more complicated than it needs to be? Is there a built-in or advised way of doing this?
In your model, can you not do something like this:
public class Model
{
    public string Owner { get; set; }
    public List<MyList> ListCollection { get; set; }

    public class MyList
    {
        public int Id { get; set; }
        public string Value { get; set; }
    }
}
Then, when you are checking which list item is selected, you also have access to the "Owner" field, which will tell you which table it belongs to.
As nobody has come up with anything better, I can confirm that my original idea (as unwanted as it was) did the job.
When setting the value of the select options, a custom string should be created that can easily be parsed server side. This was achieved using a pipe separating the TYPE of entity and the ID, for example:
"USER|1"
"USER|2"
"CUSTOMER|1"
"CUSTOMER|2"
Once the selected value is passed to the server, it can then be parsed something like the following:
string option = "USER|1";
string[] values = option.Split('|');
string entityType = values[0];
int entityId = Int.Parse(values[1]);
which can then be used something like this:
if(entityType == "USER")
UpdateUser(entityId);
else//CUSTOMER
UpdateCustomer(entityId);
