Here is a simple model:
public class UserDocument
{
    [BsonRepresentation(BsonType.ObjectId)]
    public string Id { get; set; }
    public string DisplayName { get; set; }
    public List<string> Friends { get; set; }
}
I am using the latest C# driver, which can replace a document using a C# object, automatically updating all of its fields. The problem is that I want to update all fields except Friends, because that field holds the relations to other documents. Of course I could manually update each of the fields I do want updated, which here are just two.
But this example is deliberately simple. In reality there are many more fields, and updating each one would be tedious: every field would need its own line calling the Set operator, and newly added fields would have to be wired up the same way instead of the update just working automatically.
Is there a way to achieve this: automatically update all fields while only specifying a list of excluded fields?
There is no way, using the provided builders, to have a "blacklist" update that excludes only specific fields.
You can query the old document, copy the old values of these fields to the new instance and then replace it entirely in the database.
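For example, a minimal sketch of that copy-then-replace approach, assuming an IMongoCollection<UserDocument> named collection and an updated instance called updated (both names are illustrative):

var existing = await collection.Find(u => u.Id == updated.Id).FirstOrDefaultAsync();
updated.Friends = existing.Friends; // carry the excluded field over from the old document
await collection.ReplaceOneAsync(u => u.Id == updated.Id, updated);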
You can also generate such an update command yourself by iterating over the fields using reflection; the MongoDB driver simply doesn't offer this built in.
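A rough sketch of the reflection approach, assuming the default element-naming convention (property names map directly to BSON field names) and using MongoDB.Driver plus System.Linq; BuildUpdateExcluding is a hypothetical helper, not a driver API:

public static UpdateDefinition<UserDocument> BuildUpdateExcluding(
    UserDocument doc, params string[] excluded)
{
    var updates = new List<UpdateDefinition<UserDocument>>();
    foreach (var prop in typeof(UserDocument).GetProperties())
    {
        // Never $set the immutable _id, and skip any caller-excluded fields.
        if (prop.Name == "Id" || excluded.Contains(prop.Name))
            continue;
        updates.Add(Builders<UserDocument>.Update.Set(prop.Name, prop.GetValue(doc)));
    }
    return Builders<UserDocument>.Update.Combine(updates);
}

You would then call it like collection.UpdateOneAsync(u => u.Id == doc.Id, BuildUpdateExcluding(doc, "Friends")).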
I figured out a way to do this with MongoDB using JavaScript/Node.js, but maybe the logic can translate to C#?
I wanted to update all fields without having to explicitly state them (all fields except one, as it turned out).
Attempted update of all document fields:
await examCollection.findOneAndUpdate(
    {_id: new ObjectID(this.examId)},
    {$set: this.data}
)
...except, this.data happened to have _id in it as well, which I didn't want to update. (In fact, it gave me an error, because _id is immutable.)
So, for my workaround, I ended up "deleting" all fields on the object that I didn't want to update (i.e. _id).
Successful update of all non-specified document fields:
// (1) Specify fields that I don't want updated (i.e. remove them from the object) (similar option in C#?)
delete this.data._id
//delete this.data.anotherField
//delete this.data.anotherField2
//delete this.data.anotherField3

// (2) Update the MongoDB document
await examCollection.findOneAndUpdate(
    {_id: new ObjectID(this.examId)},
    {$set: this.data}
)
This was much easier than explicitly stating all the fields I did want to update, because there were a lot of them, and they could potentially change in the future (new fields added, fields deleted, etc.).
Hopefully this strategy can help!
Note: In reality, I did my "field specifying" earlier in another file, rather than immediately before updating like it shows in the example, but same effect.
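For what it's worth, the same trick can be sketched in C# (assuming an IMongoCollection<UserDocument> named collection and an updated instance called updated): convert the object to a BsonDocument, remove the fields you want left alone, and use the remainder as the $set payload.

var bson = updated.ToBsonDocument();
bson.Remove("_id");     // same effect as the JavaScript delete above
bson.Remove("Friends"); // any other field you don't want touched
await collection.UpdateOneAsync(
    Builders<UserDocument>.Filter.Eq(u => u.Id, updated.Id),
    new BsonDocument("$set", bson));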
I'm having a problem using the users update method of the Google Admin SDK for C#.
https://developers.google.com/admin-sdk/directory/reference/rest/v1/users/update
This method supports patch semantics, meaning you only need to include the fields you wish to update. Fields that are not present in the request will be preserved, and fields set to null will be cleared.
This differs from Patch: patch won't clear fields that are null, but only updates fields that have a value.
The problem is that I have to pass a full Google.Apis.Admin.Directory.directory_v1.Data.User object to the function, which will contain null even for properties I do not want to clear.
Example:

public User UpdateUser(Google.Apis.Admin.Directory.directory_v1.Data.User gUser)
{
    UsersResource.UpdateRequest userUpdateRequest = _service.Users.Update(gUser, gUser.Id);
    User updatedUser = userUpdateRequest.Execute();
    return updatedUser;
}
Is there any way of modifying the Body in UpdateRequest before executing it?
Edit:
The UpdateRequest has a ModifyRequest property that looks like this:

public Action<HttpRequestMessage> ModifyRequest { get; set; }

I just have no idea how to use it. Any ideas?
As far as updating a field to null goes, that is not something that can be done with PATCH. I recommend setting it to an empty string instead.
You should also not be sending the full user object, if that's what you are currently doing. I am going to assume that you have done a users.list to find the user you want to update, changed something on that user, say the name, and then simply submitted the full user object to your method:
UpdateUser(Google.Apis.Admin.Directory.directory_v1.Data.User gUser)
This won't work, because some of the fields you are sending as part of the update/patch are not actually writable.
What you should do instead is create a new user object and change only whatever you need:
public User MakeUserAdmin(string gUserId)
{
    var updateFields = new Google.Apis.Admin.Directory.directory_v1.Data.User();
    updateFields.IsAdmin = true;
    updateFields.Addresses = ""; // sets it to empty; not null, but the best you can do with this API
    UsersResource.UpdateRequest userUpdateRequest = _service.Users.Update(updateFields, gUserId);
    User updatedUser = userUpdateRequest.Execute();
    return updatedUser;
}
Notice how you just create a new object and set only the fields you need, then send that.
Don't try to update every field; just update the ones that you know have changed. Don't include the id in the object either; it is not writable.
Using C# and MongoDB, I'm saving a class similar to the following.
public class Zone
{
    public string ZoneName { get; set; }
    public List<string> IncludedCountries { get; set; } = new List<string>();
}
This is filled in by the user and saved in my DB. Currently I check that the zone name isn't duplicated when inserting, like so:

if (All().Any(x => x.ZoneName.ToLower() == zone.ZoneName.ToLower())) { throw new System.Exception($"Zone \"{zone.ZoneName}\" is already in database, please edit the zone"); }

But if the user then tries to add the exact same values (so the exact same list of included countries) under a different name, I wouldn't catch it.
I want to catch that too, as I don't want to duplicate identical documents in the DB (my actual class has more properties; this is just an example). I am aware I could check it the same way I'm checking the name, but given that I have a lot of properties, I'd like to know what the best way is.
Ideally you wouldn't perform a search and then use the result to decide whether to add or not. In a collaborative system with potentially multiple users, another user in another transaction could run the same code at the same time and end up adding the record just after your check, but just before your insert.
It's better, assuming your datastore supports it, to use a uniqueness constraint on some value of the data you're inserting. Here's the docs for Mongo: https://docs.mongodb.com/manual/core/index-unique/
This means the insert will be failed by the database if you attempt to add a duplicate. To be fair, there's nothing wrong with doing the "ask-then-tell" as well, in order to avoid ugly exceptions being shown to users; but if you interrogate the exception details you can probably catch the failure and show the user some helpful information rather than letting them see an error page.
To support your requirement for "has the same list of things" in this way, I'd suggest creating a SHA256 hash value (here's a link: https://stackoverflow.com/a/6839784/26414) for the list and storing that as a property in its own right. Just make sure it's recalculated if the list changes.
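A minimal sketch of that idea, assuming a hypothetical ContentHash property added to Zone and an IMongoCollection<Zone> named zones (both are illustrative, not from the original code):

// Hash a canonical form of the list so ordering differences don't matter.
public static string ComputeContentHash(Zone zone)
{
    var canonical = string.Join("|", zone.IncludedCountries.OrderBy(c => c));
    using (var sha = System.Security.Cryptography.SHA256.Create())
    {
        var bytes = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(canonical));
        return BitConverter.ToString(bytes).Replace("-", "");
    }
}

// Create the unique index once at startup (e.g. from an async init method);
// inserting a duplicate hash then fails with a duplicate-key error
// rather than silently adding a copy.
await zones.Indexes.CreateOneAsync(
    new CreateIndexModel<Zone>(
        Builders<Zone>.IndexKeys.Ascending(z => z.ContentHash),
        new CreateIndexOptions { Unique = true }));

On insert you can then catch MongoWriteException and check for WriteError.Category == ServerErrorCategory.DuplicateKey to show the user a friendly message instead of an error page.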
One additional thing - technically "class" defines the schema, or shape of a bit of data. When you create an instance of a class at runtime, which has actual values and takes up memory, that's technically an "object". So an "object" is an "instance" of a "class".
I've written a SaveLoad class, which contains a Savegame class that has a bunch of ints, doubles, bools but also more complex things like an array of self-written class objects.
That savegame object is being created, serialized and AES encrypted on save and vice versa on load - so far, so good.
The problem I'm facing now is that if there are new variables (in a newer version of the game) that have to be stored and loaded, the game crashes on load, because the new variables can't be loaded correctly (because they are not contained in the old save file). E.g. ints and doubles contain the default 0 while an array is not initialized, thus null.
My current "solution": For each variable that is being loaded, check if it doesn't contain a specific value (which I set in the Savegame class).
For example: In Savegame I set
public int myInt = int.MinValue;
and when loading, I check:
if (savegame.myInt != int.MinValue) {
    // load successful
} else {
    // load failed
}
This works so far for int and double, but once I hit the first bool, I realized that for every variable I'd have to find a value that makes "no sense" (is not normally reachable) so that it can signal a failed load. => A shitty method for bools.
I could now go ahead and convert all bools to int, but this is getting ugly...
There must be a cleaner and/or smarter solution to this. Maybe some sort of savegame migrator? If there is a well done, free plugin for this, that would also be fine for me, but I'd prefer a code-solution, which may also be more helpful for other people with a similar problem.
Thanks in advance! :)
Your issue is poor implementation.
If you are going to be having changes like this, you should be following Extend, Deprecate, Delete (EDD).
In this case, you should implement new properties/fields as nullables until you can go through and repair the data in your old save files. That way you can check whether the loaded field is null or has a value: if it has a value, you're good to go; if it's null, the save didn't contain it and you need to handle that in some way.
e.g.
/* We deprecate the old one by marking it obsolete */
[Obsolete("Use NewSaveGameFile instead")]
public class OldSaveGameFile
{
    public int SomeInt { get; set; }
}

/* We extend by creating a new class with the old one's fields */
/* and the new one's fields as nullables */
public class NewSaveGameFile
{
    public int SomeInt { get; set; }
    public bool? SomeNullableBool { get; set; }
}

public class FileLoader
{
    public NewSaveGameFile LoadMyFile()
    {
        NewSaveGameFile newFile = GetFileFromDatabase(); // code to load the file
        if (newFile.SomeNullableBool.HasValue)
        {
            // You're good to go
        }
        else
        {
            // It's missing this property, so set it to a default value and save it
            newFile.SomeNullableBool = false;
        }
        return newFile;
    }
}
Then once everything has been data repaired, you can fully migrate to the NewSaveGameFile and remove the nullables (this would be the delete step)
So one solution would be to store the version of the save file format in the save file itself, in a property called version.
Then, when initially opening the file, you can call the correct method to load the save game. It could be a different method, an interface which gets versioned, different classes, etc., but you would need one of these for each save file version you have.
After loading the file in its own version, you can then write migration objects/methods that populate the default values as the data is brought forward to newer versions in memory. Similar to your checks above, but you'd need to know which properties/values must be set between each version and apply the defaults. This gives you the ability to migrate forward through each version, so a really old save can be updated to the newest version available.
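For instance, a rough sketch of the version dispatch, assuming JSON saves via System.Text.Json (the original post doesn't say which serializer is used; SaveGameV1, SaveGameV2, and MigrateV1ToV2 are illustrative names):

public class SaveHeader
{
    public int Version { get; set; } // bumped whenever the save layout changes
}

public SaveGameV2 Load(string json)
{
    // Peek at the version first, then deserialize with the matching schema.
    int version = JsonSerializer.Deserialize<SaveHeader>(json).Version;
    switch (version)
    {
        case 1:
            var v1 = JsonSerializer.Deserialize<SaveGameV1>(json);
            return MigrateV1ToV2(v1); // fill in defaults for the new fields
        case 2:
            return JsonSerializer.Deserialize<SaveGameV2>(json);
        default:
            throw new InvalidDataException($"Unknown save version {version}");
    }
}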
I'm facing the same problem and trying to build a sustainable solution. Ideally someone should be able to open the game in 10 years and still access their save, even if the game has changed substantially.
I'm having a hard time finding a library that does this for me, so I may build my own (please let me know if you know of one!)
The way that changing schemas is generally handled in the world of web engineering is through migrations: if an old version of a file is found, we run it through sequential schema migrations until it's up to date.
I can think of two ways to do this:
Either you save all saved files to the cloud, say in MongoDB, and change the stored save data for users whenever you ship an update, or
you run old save data through standardized migrations on the client when it attempts to load an old version of the save file.
If I wanted to make the client update stale saved states then, every time I need to change the structure of the save file (on a game that's been released):
Create a new SavablePlayerData0_0_0 where 0_0_0 is using semantic versioning
Make sure every SavablePlayerData includes public string version="0_0_0"
We'll maintain a static Dictionary<string, Type> versionToType = new Dictionary<string, Type> { { "0_0_0", typeof(SavablePlayerData0_0_0) } } and a static string currentSavedDataVersion
We'll also maintain a list of migration methods which we NEVER get rid of, something like:
public SavablePlayerData0_0_1 Migration_0_0_0_to_next(SavablePlayerData0_0_0 oldFile)
{
    return new SavablePlayerData0_0_1(attrA: oldFile.attrA, attrB: someDefault);
}
Then you'd figure out which version they were on from the file's version field, then run their save state through sequential migrations until it matches the latest valid state.
Something like (still pseudocode):

public SavablePlayerData MigrateToCurrent(SavablePlayerData savedData)
{
    // Already current? Nothing to do.
    if (GetVersion(savedData) == MigrationManager.currentVersion)
        return savedData;

    // Apply the single-step migration registered for this version,
    // then recurse until we reach the current version.
    var nextSavedData = MigrationManager.migrationDict[GetVersion(savedData)](savedData);
    return MigrateToCurrent(nextSavedData);
}
Finally, you'd want to make sure you use a type alias and [Obsolete] to quickly shift over your codebase to the new save version
It might all in all be easier to just keep the save file in the cloud so you can control migration. If you do this, then when a user tries to open the game with an older version, you must block them and force them to update the game to match the save version stored in the cloud.
I can't seem to find an answer for this, even after Googling around.
We are experiencing issues causing our app to lock up. This is partly due to outstanding WaitForNonStaleResultsAsOfNow calls for which we are waiting to release fixes (i.e. we have removed them), but it is also being caused by a total rebuild of all indexes. I believe the trigger that causes all indexes to be rebuilt is a change to one (type of) document. For example:
We have a model called "Agency". When our users log in, we use their "AgencyId" in order to provide them with data specific to them. As such, most other documents (such as "Placements", "Invoices" etc) have an "AgencyId" field.
Agency model looks something like:

public class Agency
{
    public string Id { get; set; }
    public string AgencyName { get; set; }
    // ...
}

Example of Placement (and other Agency-specific documents):

public class Placement
{
    public string Id { get; set; }
    public string AgencyId { get; set; } // relates to Agency document
    // ...
}
We have a feature that allows Administrators to upload documents (PDFs) to an Agency's profile. We store the PDF in a DFS and set the "DocumentPath" property on the Agency model to where it's saved.
My question: Would updating the Agency record cause a rebuild of all related documents' indexes? i.e. I know the AgencyIndex would rebuild but would this cause the PlacementIndex (and all other related indexes) to rebuild as well?
More information:
Raven Client Build#: 2.5.2952
Raven Server Build#: 2.5.2952 (RavenHQ)
Also worth noting: We are working on upgrading to RavenDB 3.0 asap but this is a real live problem and I need to understand why it's happening!
Yes, updating a document that many others point to will certainly cause those indexes to rebuild.
Some types of operations need the index not to be stale (or have to force an update on a stale index). You should pass a cutoff to WaitForNonStaleResultsAsOfNow, which can take a TimeSpan parameter, so that you wait for the index to become non-stale only up to a predefined timeout.
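A minimal sketch of that against the 2.5 client (documentStore and agencyId are assumed to exist; the Placement query is illustrative):

using (var session = documentStore.OpenSession())
{
    var placements = session.Query<Placement>()
        // Wait for the index to catch up, but give up after 5 seconds
        // instead of letting a full rebuild block the app indefinitely.
        .Customize(x => x.WaitForNonStaleResultsAsOfNow(TimeSpan.FromSeconds(5)))
        .Where(p => p.AgencyId == agencyId)
        .ToList();
}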
I am using the MongoDB .NET client and its collection-to-object mapping features, and I have run into an issue with schema evolution.
When I rename a field in my class, for example from Comment to Comments, and make this change in my class, I get an exception from Mongo when I perform a fetch.
My expectation was that the Mongo client would ignore fields that exist in the collection but don't exist in my .NET class.
I'd be happy if this is possible without doing a manual transformation between BSON and the .NET class.
If you want to continue to use the old name you could use the BsonElement attribute:
class Demo {
    [BsonElement("Comment")]
    public string Comments { get; set; }
}
That syntax tells the MongoDB C# driver to find the data for the Comments property in the document field named Comment. It means you don't need to worry about moving or copying the data from the old location. The attribute is often used so that you can have longer, friendly names in source code while minimizing the actual BSON document size (since the full element name is stored in every document in the database collection). When shortening, you might for example just use:
[BsonElement("c")]
public string Comments { get; set; }
Some of the MongoDB drivers don't have this functionality (and I wish they did!).
Secondly, you could instead add a special attribute to your class to ignore all unknown elements rather than throwing an exception:
[BsonIgnoreExtraElements]
public class Demo {
    public string Comments { get; set; }
}
Then, if a field named Comment is found, but can't be matched to a property of your C# class, it will be ignored. I'll often use this during development as the schema changes, but then remove it later so that I can catch unexpected fields.
Or you can also use the BsonClassMap to make similar changes:
BsonClassMap.RegisterClassMap<Demo>(cm => {
    cm.AutoMap();
    cm.SetIgnoreExtraElements(true);
});
There are even some more options documented here if you want complete control.