This is more of a design question, I'm looking for a good approach:
I have an Object which consists of a few properties (some Integers and a byte[] array).
I'm using BinaryFormatter to serialize my objects - I'm holding a List<T> of all objects at any given time.
When the application starts up, I de-serialize the file to which the objects were previously serialized.
When the application closes, I serialize the whole List<T> and save everything back to the file.
My problem is: In case of a system failure, the objects I hold in my List<T> will obviously get lost, since I serialize the List<T> only when the application shuts down normally.
I'm not looking to de-serialize and re-serialize everything each time I insert an object into my List, since that would be very expensive.
The solution I thought of is to hold a local database with a BLOB column to which the Objects will be serialized to, but I'm not too sure of this approach.
Any thoughts would be appreciated!!
Only deserialize when the application is started.
You need to serialize every time a new item is added (or use the UnhandledException event) if you want to make sure that no items are lost if the application crashes.
If not, I would use a background thread to serialize the list when new items are added, plus a final serialization from the main thread when the application exits.
The solution I thought of is to hold a local database with a BLOB column to which the Objects will be serialized to, but I'm not too sure of this approach.
I don't see any benefits of using a database. It will actually be slower than serializing everything to a file.
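A minimal sketch of those two options combined, assuming the stored object is a simple serializable type (Item, ItemStore and the file path here are made-up names):

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Item            // stand-in for the poster's object (a few ints plus a byte[])
{
    public int Id;
    public byte[] Payload;
}

public class ItemStore
{
    private readonly List<Item> _items = new List<Item>();
    private readonly string _path;

    public ItemStore(string path)
    {
        _path = path;
        // Best-effort flush if the process is about to die unexpectedly.
        AppDomain.CurrentDomain.UnhandledException += (s, e) => Save();
    }

    public void Add(Item item)
    {
        _items.Add(item);
        Save();              // or queue this on a background thread to keep Add cheap
    }

    public void Save()
    {
        using (var stream = File.Create(_path))
            new BinaryFormatter().Serialize(stream, _items);
    }

    public void Load()
    {
        if (!File.Exists(_path)) return;
        using (var stream = File.OpenRead(_path))
            _items.AddRange((List<Item>)new BinaryFormatter().Deserialize(stream));
    }
}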
I would add a simple ORM over, say, an SQLite database and would avoid standard binary serialization. Personally, I don't like it much, because changes applied to an object that was serialized earlier (adding/removing properties, changing types, changing function parameters or their order) will lead to deserialization failures. In other words, it isn't scalable, IMO.
My approach would be binlogging:
While the application runs, if you delete an object from the list, just write out a notice of that to a file. If you add an object to the list, serialize only that one and write it out as well (all the while using filenames such as change-00000000001.del, change-00000000002.add etc.).
When the application shuts down, after final serialization delete all change-* files
On startup, after deserializing the old state, check for change-* files: if any exist, the previous run crashed and the changes have to be replayed; then do a fresh serialization and delete the change files.
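A rough sketch of that change-file idea (the class name, the id-based delete notice and the sequence handling are assumptions; replaying the surviving files on startup is left out):

using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;

public class ChangeLog
{
    private readonly string _dir;
    private long _sequence;

    public ChangeLog(string dir) { _dir = dir; }

    // An added object is serialized on its own into a numbered .add file.
    public void LogAdd(object item)
    {
        using (var stream = File.Create(Path.Combine(_dir, $"change-{++_sequence:D11}.add")))
            new BinaryFormatter().Serialize(stream, item);
    }

    // A deletion only needs a small notice; here the object's id is enough.
    public void LogDelete(int id)
    {
        File.WriteAllText(Path.Combine(_dir, $"change-{++_sequence:D11}.del"), id.ToString());
    }

    // After a clean shutdown (full list serialized), the change files are obsolete.
    public void Clear()
    {
        foreach (var file in Directory.GetFiles(_dir, "change-*"))
            File.Delete(file);
    }

    // On startup: any surviving change files mean the previous run crashed.
    public bool HasPendingChanges() => Directory.GetFiles(_dir, "change-*").Any();
}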
Currently I generate a singleton object for my site which is created from a flat file. The object is never changed by the application, it is purely used as reference by some of the functions on my site. It effectively describes the schema for another file type we use, similar to XML & XSD
The generated singleton object is fairly large (5000+ child objects, with up to 500 properties each) and it contains circular references, since child objects can reference the parent object through two-way references.
This all works fine currently, however the first time the app loads, it takes over a minute to generate the singleton. Which is the reason I am using a singleton here, so I don't have to regenerate it every request. But it also means every time the app pool restarts, the first request takes well over a minute to load. Every consecutive request is fast once the object is in memory.
Seeing that the flat file rarely changes, I would like to find a good way to generate the object once and store the object in way that I can quickly retrieve it when needed.
I tried serializing the object to json, and storing it in the database, however due to the circular references Json.net fails or I end up losing information if I configure it to ignore the references when serializing.
Are there any better ways of handling such an object, or am I stuck with the singleton for now?
Given the static nature of this object, serialization would be one option.
The circular reference issue you mention with Json.Net can be easily remedied using appropriate JsonSerializerSettings (refer to this answer)
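For illustration, a hedged sketch of settings that let Json.NET round-trip a graph with parent/child back-references (SchemaNode is a made-up stand-in for the real type):

using System.Collections.Generic;
using Newtonsoft.Json;

public class SchemaNode
{
    public string Name { get; set; }
    public SchemaNode Parent { get; set; }                         // back-reference causing the cycle
    public List<SchemaNode> Children { get; set; } = new List<SchemaNode>();
}

public static class SchemaCache
{
    private static readonly JsonSerializerSettings Settings = new JsonSerializerSettings
    {
        // Emit $id/$ref markers so shared and circular references survive the round trip.
        PreserveReferencesHandling = PreserveReferencesHandling.Objects,
        ReferenceLoopHandling = ReferenceLoopHandling.Serialize
    };

    public static string Save(SchemaNode root) => JsonConvert.SerializeObject(root, Settings);

    public static SchemaNode Load(string json) => JsonConvert.DeserializeObject<SchemaNode>(json, Settings);
}

With PreserveReferencesHandling enabled, the parent links are restored on deserialization instead of being dropped or causing the serializer to throw.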
If speed is of the essence, then you may want to investigate other serialization options (netserializer claims to be one of the fastest).
Ultimately though, you should look to put the file / object structure into some sort of cache that sits outside of the app pool (Redis perhaps), or even load the flat file's data into a well designed database schema (i.e. parent - child relationships etc).
Creating a massive object graph is rather inefficient and will potentially create memory issues.
I'm preparing a WinForms application with a local DB. I want to keep track of changes so that when somebody leaves the application, they can decide to save or discard them. I expect to have three tables, each with about 1000 rows. In one window I have three tables which can be modified at any time. So far I have found one solution, using the Local property on the context, e.g.
people.DataSource = dbctx.People.Local.Where(...).ToList();
Recently I tried to use a dynamic LINQ query, but I couldn't use it when accessing dbctx through Local. Is there something better, such as TransactionScope or IEditableObject? How would I use it?
If they close the WinForms application entirely (losing any in-memory storage options), then you will need to use a store of some sort (database, file, etc.). As you are already using a database, it sounds like your best option would be to store it there.
If you are concerned about space etc., you could simply serialize the in-memory objects (perhaps to JSON) and store the result against the user record; then, the next time they log in or come back, you can deserialize it back into memory and they can carry on.
If you choose this option I would recommend using Json.Net:
To serialize your in memory records use:
string json = JsonConvert.SerializeObject(people);
And deserialize back out:
var people = JsonConvert.DeserializeObject<People>(json);
I have a complex object (nested properties, collections, etc.) in my ASP.NET MVC C# application. I don't need to save it into multiple tables in the DB; serializing the whole object and storing it as a whole is fine.
I plan to serialize the whole object (into something human-readable like JSON/XML) and store it in a text field in the DB.
I need to later load this object from the DB and render it using strongly-typed view.
Here comes the question: in the future the class of the object can change (I can add/remove fields etc.), but serialized versions saved to the DB earlier will not reflect the change.
How to deal with this?
You should write some sort of conversion utility every time you significantly change structured, serialized messages, and run it as part of an upgrade process. Adding or removing fields that are nullable isn't likely to be a problem, but larger structural changes will be.
You could do something like implement IXmlSerializable, peek at the message and figure out what version the message is and convert it appropriately, but this will quickly become a mess if you have to do this a lot and your application has a long lifecycle. So, you're better off doing it up front in an upgrade process, and outside of your application.
If you are worried about running the conversion on lots of records as part of an upgrade, you could come up with some ways to make it more efficient (for example, add a column to the table that contains the message schema version, so you can efficiently target messages that are out of date).
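As a hedged illustration of that version-column idea (the type, column and field names here are invented), an upgrade pass over out-of-date rows might look roughly like this:

using Newtonsoft.Json;

// Row shape: the serialized payload plus an explicit schema version column.
public class StoredDocument
{
    public int Id { get; set; }
    public int SchemaVersion { get; set; }
    public string Payload { get; set; }
}

public class WidgetV1 { public string Name { get; set; } }
public class WidgetV2 { public string Name { get; set; } public string Category { get; set; } }

public static class DocumentUpgrader
{
    public const int CurrentVersion = 2;

    // Run from the upgrade tool for every row WHERE SchemaVersion < CurrentVersion.
    public static void Upgrade(StoredDocument doc)
    {
        if (doc.SchemaVersion >= CurrentVersion) return;

        var v1 = JsonConvert.DeserializeObject<WidgetV1>(doc.Payload);
        var v2 = new WidgetV2 { Name = v1.Name, Category = "uncategorized" };   // default for the new field

        doc.Payload = JsonConvert.SerializeObject(v2);
        doc.SchemaVersion = CurrentVersion;
    }
}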
As long as you're using JSON or XML, added fields shouldn't be a problem (as long as no specific version schemas are enforced). The default .NET XML serializer, for instance, doesn't include members that are at their default value (which can be set with the System.ComponentModel.DefaultValue attribute). So the new fields will be treated the same as omitted fields while deserializing and get their default values (default class values, that is; the DefaultValue attribute only applies to serialization/designer behaviour).
How removed fields are handled depends on your deserialization implementation, but it can be set up so that they are ignored. Personally I tend to keep such properties but mark them as obsolete, with a message explaining what they were once for. That way, when coding, you'll know not to use them, but they can still be filled for backwards compatibility (and they shouldn't be serialized once marked obsolete). When possible you could implement logic in the obsolete property that fills the renewed data structure.
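A small sketch of that tolerance using Json.NET, one of the formats mentioned in the question (the class and its members are invented; Json.NET does not skip [Obsolete] members on its own, so the skip is made explicit via the ShouldSerialize* convention):

using System;
using System.ComponentModel;
using Newtonsoft.Json;

public class CustomerProfile
{
    public string Name { get; set; }

    // Added field: documents written before it existed simply come back with the default.
    [DefaultValue("en")]
    public string Language { get; set; } = "en";

    // "Removed" field: kept so old payloads still deserialize, flagged so new code avoids it,
    // and excluded from newly written payloads.
    [Obsolete("Use Language instead")]
    public string Locale { get; set; }

    public bool ShouldSerializeLocale() => false;
}

public static class Demo
{
    public static void Main()
    {
        // IgnoreAndPopulate omits default-valued members when writing and fills them in when reading older documents.
        var settings = new JsonSerializerSettings { DefaultValueHandling = DefaultValueHandling.IgnoreAndPopulate };
        string json = JsonConvert.SerializeObject(new CustomerProfile { Name = "Ada" }, settings);
        var restored = JsonConvert.DeserializeObject<CustomerProfile>(json, settings);
        Console.WriteLine(restored.Language);   // "en" even though the field was omitted from the JSON
    }
}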
I want to save all the metadata connected to a file system, but not the "useful" data. The metadata should be available for viewing even when the original files aren't.
I first thought that I could accomplish this by serializing for example a DirectoryInfo object, but I now understand that the object doesn't actually save the data but rather merely saves the path and accesses the file itself when the methods are called. Thus serialization would be worthless, since the deserialized object would look for the file instead of "remembering" the metadata.
So: is there some kind of built in framework class for doing this or should I just implement it myself?
This object is an object hierarchy, so it could get a bit tricky to serialize. You might try creating a simple object to model the data you want to save. You could then use AutoMapper to copy the data over into the DTO-like object and then serialize that. This way, if you wanted to persist the entire tree of data, you could do so without writing much code.
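For instance, a rough sketch of that DTO approach for file metadata (the FileMetadata shape, the property selection and the AutoMapper configuration are assumptions):

using System;
using System.IO;
using AutoMapper;
using Newtonsoft.Json;

// DTO holding only the metadata; property names mirror FileInfo so AutoMapper maps them by convention.
public class FileMetadata
{
    public string FullName { get; set; }
    public long Length { get; set; }
    public DateTime CreationTimeUtc { get; set; }
    public DateTime LastWriteTimeUtc { get; set; }
    public FileAttributes Attributes { get; set; }
}

public static class MetadataSnapshot
{
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<FileInfo, FileMetadata>()).CreateMapper();

    // Capture the metadata now, so it remains viewable after the original file is gone.
    public static string Capture(string path)
    {
        FileMetadata dto = Mapper.Map<FileMetadata>(new FileInfo(path));
        return JsonConvert.SerializeObject(dto);
    }
}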
I have multiple "command objects" serialized on a single file. I need to get those objects back by deserializing so that I could replay those commands. Please, help me do this.
I never use the stock serializer for persistent store. I find it works for remoting just fine, but I don't want my file formats bound to .NET framework.
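That said, if the commands were written with the stock BinaryFormatter by serializing each one to the same stream, getting them back is a matter of deserializing in a loop until the end of the file; a rough sketch (ICommand and the file name are assumptions):

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public interface ICommand { void Execute(); }   // assumed shape of the command objects

public static class CommandReplayer
{
    // Each Deserialize call reads back exactly one object written by a matching Serialize call,
    // so looping until the end of the stream yields the commands in their original order.
    public static IEnumerable<ICommand> ReadAll(string path)
    {
        var formatter = new BinaryFormatter();
        using (var stream = File.OpenRead(path))
        {
            while (stream.Position < stream.Length)
                yield return (ICommand)formatter.Deserialize(stream);
        }
    }
}

Usage would then be: foreach (var command in CommandReplayer.ReadAll("commands.bin")) command.Execute();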