How do I mock the FileInfo information for a file? (C#)

I have a scenario in which I need much of the information from the FileInfo class -- creation time, last write time, file size, etc. I need to be able to get this information cleanly, but also be able to truly unit test without hitting the file system.
I am using System.IO.Abstractions, so that gets me 99% of the way there for most of my class, except this one. I don't know how to use it to get the information I need from my MockFileSystem object.
public void Initialize(IFileSystem fs, string fullyQualifiedFileName)
{
    string pathOnly = fs.Path.GetDirectoryName(fullyQualifiedFileName);
    string fileName = fs.Path.GetFileName(fullyQualifiedFileName);
    // Here's what I don't know how to separate out and mock
    FileInfo fi = new FileInfo(fullyQualifiedFileName);
    long size = fi.Length; // FileInfo exposes Length (there is no LongLength)
    DateTime lastWrite = fi.LastWriteTime;
    ...
    ...
}
Any help on that line would be greatly appreciated.
UPDATE:
This is not an exact duplicate of existing questions because I'm asking how do I do it with System.IO.Abstractions, not how do I do it in general.
For those who are interested, I did find a way to accomplish it:
FileInfoBase fi = fs.FileInfo.FromFileName(fullFilePath);
If I use this line, I get the same information I need in both TEST and PROD environments.

https://github.com/TestableIO/System.IO.Abstractions/blob/5f7ae53a22dffcff2b5716052e15ff2f155000fc/src/System.IO.Abstractions/IFileInfo.cs
System.IO.Abstractions provides you with an IFileInfo interface to code against, which you could create your own implementation of to return the stats you're interested in testing against.
In this case you'd want another method to return an IFileInfo to your Initialize method, which could be virtual so that in your test it just returns your fake instead of really hitting the system.
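For illustration, here's a minimal sketch of how that looks against a MockFileSystem from System.IO.Abstractions.TestingHelpers (the path and date are made up; newer package versions also expose fs.FileInfo.New):
var fs = new MockFileSystem();
fs.AddFile(@"C:\data\report.txt", new MockFileData("hello")
{
    LastWriteTime = new DateTime(2020, 1, 1)
});

FileInfoBase fi = fs.FileInfo.FromFileName(@"C:\data\report.txt");
long size = fi.Length;                 // 5, straight from the mock data
DateTime lastWrite = fi.LastWriteTime; // 2020-01-01, no disk access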

Related

Is reading a CSV file from a physical path a valid scenario in a unit test? Why?

string _inboundFilePath = AppDomain.CurrentDomain.BaseDirectory + @"\Inbound\CompareDataFile.csv";
var mockReaderStream = new Mock<IReaderStream>();
mockReaderStream.Setup(x => x.CreateStream())
    .Returns(new System.IO.StreamReader(_inboundFilePath));
Here I am dependent on an inbound file to read data from before performing other checks. My question is: how do I avoid this? In this case I am checking the data for a particular ID that comes in from the CSV.
It is not likely to be a good practice, because a unit test must be deterministic: whatever the situation, you must be sure that whenever the test runs, it does exactly the same thing as before.
If you read a CSV file, the test depends on the external world, and unfortunately the external world is not stable. For a start, somebody can change the CSV file.
That is why it is better practice to get the CSV stream from an embedded resource in the assembly instead of from a file on the hard drive.
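For example, a minimal sketch of the embedded-resource approach (the resource name "MyTests.Inbound.CompareDataFile.csv" is illustrative; list the real names with assembly.GetManifestResourceNames()):
var assembly = System.Reflection.Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream("MyTests.Inbound.CompareDataFile.csv"))
using (var reader = new System.IO.StreamReader(stream))
{
    string csv = reader.ReadToEnd(); // deterministic: same bytes on every run
}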
In addition to the answer from Stephane, I suggest you:
Put @"\Inbound\CompareDataFile.csv" in a config file.
Create a public property or separate method in the aforementioned class that retrieves and returns the inbound file's absolute path (call it GetPath).
Create a unit test method that has access to the config file. This test method must read the config file and call AppDomain.CurrentDomain.BaseDirectory, so it can also work out the inbound file's absolute path and then compare it with the result of GetPath().
Create an abstraction (an interface or abstract class) over the reader (by the way, the object in the code you've provided is not really a mock, so it's hard to guess why you call it so).
Create a public method in the aforementioned class that takes an object of that abstraction and calls its ReadFromCsv() method; see the sketch after this list.
Create a test class (a mock) that implements this abstraction and returns desired/undesired values when you call its ReadFromCsv() method.
Finally, test your class.
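A rough sketch of those last three items (the interface name and canned rows are illustrative, not from the original post):
public interface ICsvReader
{
    IEnumerable<string> ReadFromCsv();
}

// the test class (mock) from the previous item
public class FakeCsvReader : ICsvReader
{
    public IEnumerable<string> ReadFromCsv()
    {
        // desired/undesired rows, no file system involved
        yield return "1,Alice,100";
        yield return "2,Bob,200";
    }
}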
PS. This is not a strict algorithm for testing the class, but what you should take away from these seven items is the notion of unit-test-likeness.
Also, this does not free your class from the config file; to do that, use the approach from:
How to mock ConfigurationManager.AppSettings with moq

How to get source of the same file from two different commits?

I am trying to figure out a way to get the source of the same file from two different commits, but I just can't find any documentation about this.
I know there is Repository.Diff.Compare, which is useful, but I can only get a Patch from it, and that doesn't help much since I would like to implement my own side-by-side comparison.
Can someone provide an example? Is this even possible in libgit2sharp?
I am trying to figure out a way to get the source of the same file from two different commits [...] Is this even possible in libgit2sharp?
Each Commit or Tree type exposes a string indexer that allows one to directly access a TreeEntry through its path. A TreeEntry can either be a Blob (i.e. a file), another Tree (i.e. a directory), or a GitLink (i.e. a submodule).
The code below gives a quick example of how to access the content of the same file in two different commits.
[Fact]
public void CanRetrieveTwoVersionsOfTheSameBlob()
{
    using (var repo = new Repository(BareTestRepoPath))
    {
        var c1 = repo.Lookup<Commit>("8496071");
        var b1 = c1["README"].Target as Blob;

        var c2 = repo.Lookup<Commit>("4a202b3");
        var b2 = c2["README"].Target as Blob;

        Assert.NotEqual(b1.ContentAsText(), b2.ContentAsText());
    }
}
I would like to implement my own side-by-side comparison
Depending on the size of the blob you're dealing with, you may not want to retrieve the whole content in memory. In that case, blob.ContentStream may be handy.
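For instance, a minimal sketch of streaming a blob to disk without materializing it in memory (depending on your libgit2sharp version this may be exposed as blob.GetContentStream() instead):
using (var repo = new Repository(BareTestRepoPath))
{
    var commit = repo.Lookup<Commit>("8496071");
    var blob = commit["README"].Target as Blob;

    using (Stream content = blob.ContentStream)
    using (Stream output = File.Create("README.v1.txt"))
    {
        content.CopyTo(output); // copies in chunks, not all at once
    }
}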
Update
I was missing the cast to blob, to figure the rest out
FWIW, you can rely on revparse expressions to directly access the Blob. As a result, the following should also work ;-)
[Fact]
public void CanRetrieveTwoVersionsOfTheSameBlob_ReduxEdition()
{
    using (var repo = new Repository(BareTestRepoPath))
    {
        var b1 = repo.Lookup<Blob>("8496071:README");
        var b2 = repo.Lookup<Blob>("4a202b3:README");

        Assert.NotEqual(b1.ContentAsText(), b2.ContentAsText());
    }
}
This post should answer your question; it's from another question here on this site:
How to diff the same file between two different commits on the same branch?
I hope that helps.

Unit testing while implementing XML file converter

I am writing a simple file converter which takes an XML file and converts it to CSV, or vice versa.
I have implemented two classes, XMLtoCSV and CSVtoXML, and both implement a Convert method which takes the input file path and filter text, filters the XML by the given filter, and performs the conversion (e.g. if the XML contains employee details, we might want to filter it so that only employees from a certain department are retrieved and converted to the CSV file).
I have a unit test which tests this Convert method. In it I specify the input file path and filter string, call the Convert function, and assert on the boolean result, but I also need to test whether the filtering worked and the conversion completed.
My question is: do you really need to touch the file IO and do the filtering and conversion in a unit test? Is this not integration testing? If not, then how can I assert that the filtering has worked without actually converting the file and returning the results? I thought about Moq'ing the Convert method, but that will not necessarily prove that my Convert method is working fine.
Any help/advice is appreciated.
Thanks
I suggest you use streams in your classes: pass a file stream in the application, and a "fake" stream (say, a MemoryStream built from a string) in unit tests. This will make you more flexible in case you decide to get this XML from a web service or any other way - you will just need to pass a stream, not a file path.
My question is that do you really need to access the file IO and do the filtering and conversion via unit test? Is this not integration testing?
Precisely - in this case you are testing 3 things - the File IO system, the actual file contents, and the Convert method itself.
I think you need to look at restructuring your code to make it more amenable to unit testing (that's not a criticism of your code!). Consider your definition of the Convert method:
In it I am specifying the input file path and filter string
So your Convert method is actually doing two things - opening/reading a file, and converting the contents. You need to change things around so that the Convert method does one thing only - specifically, perform the conversion of a string (or indeed a stream) without having any reference to where it came from.
This way, you can correctly test the Convert method by supplying it with a string that you define yourself in your unit test - one test with known good data, and one with known bad data.
e.g.
void Convert_WithGoodInput_ReturnsTrue()
{
    var input = "this is a piece of data I know is good and should pass";
    var sut = new Converter(); //or whatever it's called :)
    bool actual = sut.Convert(input);
    Assert.AreEqual(true, actual, "Convert failed to convert good data...");
}

void Convert_WithBadInput_ReturnsFalse()
{
    var input = "this is a piece of data I know is BAD and should Fail. Bad Data! Bad!";
    var sut = new Converter(); //or whatever it's called :)
    bool actual = sut.Convert(input);
    Assert.AreEqual(false, actual, "Convert failed to complain about bad data...");
}
Of course inside your Convert method you are doing all sorts of arcane and wonderful things, and at this point you might look at that method and see if you can split it into several internal methods, the functionality of which is perhaps provided by separate classes, which you provide as dependencies to the Converter class, and which in turn can all be tested in isolation.
By doing this you will be able to test both the functionality of the converter method, and you will be in a position to start using Mocks so that you can test the functional behaviour of it as well - such as ensuring that the frobber is called exactly once, and always before the gibber, and that the gibber always calls the munger, etc.
Bonus
But wait, there's more!!!!1!! - once your Converter class/method is arranged like this you will suddenly find that you can now implement an XML to Tab-delimited, or XML to JSON, or XML to ???? simply by writing the relevant component and plugging it into the Converter class. Loose coupling FTW!
e.g. (and here I am just imagining how the guts of your convert function might work)
public class Converter
{
    public Converter(ISourceReader reader, IValidator validator, IFilter filter, IOutputFormatter formatter)
    {
        //boring saving of dependencies to local privates here...
    }

    // note: don't name the string parameter "filter" - it would shadow the IFilter dependency
    public bool Convert(string data, string filterText)
    {
        if (!validator.Validate(data)) return false;
        var filtered = filter.Filter(data, filterText);
        var raw = reader.Tokenise(filtered);
        var result = formatter.Format(raw);
        //and so on
        return true; //or whatever...
    }
}
Of course I am not trying to tell you how to write your code, but the above is a very testable class for both unit and functional testing, because you can mix and match mocks, stubs and reals as and where you like.
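For example, a hedged sketch (using Moq against the Converter above; the data strings are made up) of a functional test that verifies interactions rather than output:
void Convert_WithInvalidData_NeverReachesTheFilter()
{
    var reader = new Mock<ISourceReader>();
    var validator = new Mock<IValidator>();
    var filter = new Mock<IFilter>();
    var formatter = new Mock<IOutputFormatter>();
    validator.Setup(v => v.Validate(It.IsAny<string>())).Returns(false);

    var sut = new Converter(reader.Object, validator.Object, filter.Object, formatter.Object);
    bool actual = sut.Convert("<employees/>", "department=Sales");

    Assert.AreEqual(false, actual, "Convert accepted data the validator rejected...");
    // invalid data must short-circuit before the filter is ever touched
    filter.Verify(f => f.Filter(It.IsAny<string>(), It.IsAny<string>()), Times.Never);
}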

fastest way to search and delete files inside a directory

I've got an array of a class in which one member is the full path to a file. I need to delete all the files in the directory which are not included in the array. As usual, I am using the conventional compare-and-delete method. I need to know if there is any faster way to accomplish this.
I heard it can be done using LINQ, but I don't have any knowledge of LINQ.
My class structure is like below.
class ImageDetails
{
    public string Title;
    public bool CanShow;
    public string PathToFile;
}
I have an array of ImageDetails. The PathToFile member contains the full path.
You can use Except() to handle this (note you have to project the array down to its paths first, since it holds ImageDetails objects rather than strings):
var filesToDelete = Directory
    .GetFiles(Path.GetDirectoryName(imageDetails[0].PathToFile))
    .Except(imageDetails.Select(d => d.PathToFile));
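Except() only gives you the candidates; here is a minimal sketch of the full compare-and-delete (assuming images is your ImageDetails[] and directoryPath is the folder being cleaned; paths are compared case-insensitively, as Windows file systems usually are):
var keep = new HashSet<string>(
    images.Select(i => i.PathToFile),
    StringComparer.OrdinalIgnoreCase);

foreach (string file in Directory.GetFiles(directoryPath))
{
    if (!keep.Contains(file))
    {
        File.Delete(file); // not referenced by any ImageDetails, so remove it
    }
}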
Why do you need to compare? If you have the full file name, then
File.Delete(fileName);
is all you need. The file IO is likely to be the slowest part of this, so I don't think LINQ will make much difference to the performance.
If the file may not exist, then check for that first:
if (File.Exists(fileName))
{
    File.Delete(fileName);
}
Edit: I see you mean that you want to delete the file if it is not in the array. I read your question to mean that the directory is not included in the array.
Still, the actual file deletion is likely to be the slowest part of this.

How to store data locally in .NET (C#) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
I'm writing an application that takes user data and stores it locally for use later. The application will be started and stopped fairly often, and I'd like to make it save/load the data on application start/end.
It'd be fairly straightforward if I used flat files, as the data doesn't really need to be secured (it'll only be stored on this PC). The options I believe are thus:
Flat files
XML
SQL DB
Flat files require a bit more effort to maintain (there are no built-in classes as with XML); however, I haven't used XML before, and SQL seems like overkill for this relatively easy task.
Are there any other avenues worth exploring? If not, which of these is the best solution?
Edit: To add a little more data to the problem, basically the only thing I'd like to store is a Dictionary that looks like this
Dictionary<string, List<Account>>
where Account is another custom type.
Would I serialize the dict as the XML root, and then the Account type as attributes?
Update 2:
So it's possible to serialize a dictionary. What makes it complicated is that the value of this dict is itself generic: a list of complex data structures of type Account. Each Account is fairly simple; it's just a bunch of properties.
It is my understanding that the goal here is to try and end up with this:
<Username1>
<Account1>
<Data1>data1</Data1>
<Data2>data2</Data2>
</Account1>
</Username1>
<Username2>
<Account1>
<Data1>data1</Data1>
<Data2>data2</Data2>
</Account1>
<Account2>
<Data1>data1</Data1>
<Data2>data2</Data2>
</Account2>
</Username2>
As you can see the hierarchy is
Username (string key of the dict) >
Account (each account in the List) >
Account data (i.e. the class properties).
Obtaining this layout from a Dictionary<Username, List<Account>> is the tricky bit, and the essence of this question.
There are plenty of 'how to' responses here on serialisation, which is my fault since I didn't make it clearer early on, but now I'm looking for a definite solution.
I'd store the file as JSON. Since you're storing a dictionary, which is just a name/value pair list, this is pretty much what JSON was designed for.
There are quite a few decent, free .NET JSON libraries - here's one, but you can find a full list on the first link.
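For instance, a minimal sketch using Json.NET (one such library; Account is your own type, and path is wherever you keep the file):
var accounts = new Dictionary<string, List<Account>>();
// ... populate while the application runs ...

// save on application end
File.WriteAllText(path, JsonConvert.SerializeObject(accounts, Formatting.Indented));

// load on application start
var loaded = JsonConvert.DeserializeObject<Dictionary<string, List<Account>>>(
    File.ReadAllText(path));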
It really depends on what you're storing. If you're talking about structured data, then either XML or a very lightweight SQL RDBMS like SQLite or SQL Server Compact Edition will work well for you. The SQL solution becomes especially compelling if the data moves beyond a trivial size.
If you're storing large pieces of relatively unstructured data (binary objects like images, for example) then obviously neither a database nor an XML solution is appropriate, but given your question I'm guessing it's more of the former than the latter.
All of the above are good answers, and generally solve the problem.
If you need an easy, free way to scale to millions of pieces of data, try out the ESENT Managed Interface project on GitHub or from NuGet.
ESENT is an embeddable database storage engine (ISAM) which is part of Windows. It provides reliable, transacted, concurrent, high-performance data storage with row-level locking, write-ahead logging and snapshot isolation. This is a managed wrapper for the ESENT Win32 API.
It has a PersistentDictionary object that is quite easy to use. Think of it as a Dictionary() object, but it is automatically loaded from and saved to disk without extra code.
For example:
/// <summary>
/// Ask the user for their first name and see if we remember
/// their last name.
/// </summary>
public static void Main()
{
    var dictionary = new PersistentDictionary<string, string>("Names");

    Console.WriteLine("What is your first name?");
    string firstName = Console.ReadLine();
    if (dictionary.ContainsKey(firstName))
    {
        Console.WriteLine("Welcome back {0} {1}", firstName, dictionary[firstName]);
    }
    else
    {
        Console.WriteLine("I don't know you, {0}. What is your last name?", firstName);
        dictionary[firstName] = Console.ReadLine();
    }
}
To answer George's question:
Supported Key Types
Only these types are supported as dictionary keys: Boolean, Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Float, Double, Guid, DateTime, TimeSpan, String.
Supported Value Types
Dictionary values can be any of the key types, Nullable versions of the key types, Uri, IPAddress, or a serializable structure. A structure is only considered serializable if it meets all these criteria:
• The structure is marked as serializable.
• Every member of the struct is either:
1. A primitive data type (e.g. Int32)
2. A String, Uri or IPAddress
3. A serializable structure.
Or, to put it another way, a serializable structure cannot contain any references to a class object. This is done to preserve API consistency. Adding an object to a PersistentDictionary creates a copy of the object through serialization. Modifying the original object will not modify the copy, which would lead to confusing behavior. To avoid those problems the PersistentDictionary will only accept value types as values.
Can be serialized:
[Serializable]
struct Good
{
    public DateTime? Received;
    public string Name;
    public Decimal Price;
    public Uri Url;
}
Can't be serialized:
[Serializable]
struct Bad
{
    public byte[] Data;     // arrays aren't supported
    public Exception Error; // reference object
}
XML is easy to use, via serialization. Use Isolated storage.
See also How to decide where to store per-user state? Registry? AppData? Isolated Storage?
public class UserDB
{
    // actual data to be preserved for each user
    public int A;
    public string Z;

    // metadata
    public DateTime LastSaved;
    public int eon;

    private string dbpath;

    public static UserDB Load(string path)
    {
        UserDB udb;
        try
        {
            var s = new System.Xml.Serialization.XmlSerializer(typeof(UserDB));
            using (System.IO.StreamReader reader = System.IO.File.OpenText(path))
            {
                udb = (UserDB)s.Deserialize(reader);
            }
        }
        catch
        {
            udb = new UserDB();
        }
        udb.dbpath = path;
        return udb;
    }

    public void Save()
    {
        LastSaved = System.DateTime.Now;
        eon++;

        var s = new System.Xml.Serialization.XmlSerializer(typeof(UserDB));
        var ns = new System.Xml.Serialization.XmlSerializerNamespaces();
        ns.Add("", "");

        using (System.IO.StreamWriter writer = System.IO.File.CreateText(dbpath))
        {
            s.Serialize(writer, this, ns);
        }
    }
}
I recommend an XML reader/writer class for files, because it serializes easily.
Serialization in C#
Serialization (known as pickling in Python) is an easy way to convert an object to a binary representation that can then be, e.g., written to disk or sent over a wire.
It's useful, e.g., for easy saving of settings to a file.
You can serialize your own classes if you mark them with the [Serializable] attribute. This serializes all members of a class, except those marked as [NonSerialized].
The following is code to show you how to do this:
using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;

namespace ConfigTest
{
    [Serializable()]
    public class ConfigManager
    {
        private string windowTitle = "Corp";
        private string printTitle = "Inventory";

        public string WindowTitle
        {
            get { return windowTitle; }
            set { windowTitle = value; }
        }

        public string PrintTitle
        {
            get { return printTitle; }
            set { printTitle = value; }
        }
    }
}
You then, in maybe a ConfigForm, call your ConfigManager class and Serialize it!
// fields assumed by this sample: the config instance, its serializer,
// and the path to the config file
private ConfigManager cm;
private XmlSerializer ser;
private string filepath = "user.config"; // wherever you keep it

public ConfigForm()
{
    InitializeComponent();
    cm = new ConfigManager();
    ser = new XmlSerializer(typeof(ConfigManager));
    LoadConfig();
}
private void LoadConfig()
{
    try
    {
        if (File.Exists(filepath))
        {
            using (FileStream fs = new FileStream(filepath, FileMode.Open))
            {
                cm = (ConfigManager)ser.Deserialize(fs);
            }
        }
        else
        {
            MessageBox.Show("Could not find User Configuration File\n\nCreating new file...", "User Config Not Found");
            using (FileStream fs = new FileStream(filepath, FileMode.CreateNew))
            using (TextWriter tw = new StreamWriter(fs))
            {
                ser.Serialize(tw, cm);
            }
        }
        setupControlsFromConfig();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
After it has been serialized, you can then call the parameters of your config file using cm.WindowTitle, etc.
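A matching save routine might look like this (a sketch; cm, ser and filepath are the fields from the sample above):
private void SaveConfig()
{
    using (TextWriter tw = new StreamWriter(filepath, false))
    {
        ser.Serialize(tw, cm); // overwrite the file with the current settings
    }
}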
If your collection gets too big, I have found that XML serialization gets quite slow. Another option to serialize your dictionary would be to "roll your own" using a BinaryReader and BinaryWriter.
Here's some sample code just to get you started. You can make these generic extension methods to handle any type of Dictionary, and that works quite well, but it is too verbose to post here.
class Account
{
    public string AccountName { get; set; }
    public int AccountNumber { get; set; }

    internal void Serialize(BinaryWriter bw)
    {
        // Add logic to serialize everything you need here
        // Keep in synch with Deserialize
        bw.Write(AccountName);
        bw.Write(AccountNumber);
    }

    internal void Deserialize(BinaryReader br)
    {
        // Add logic to deserialize everything you need here,
        // Keep in synch with Serialize
        AccountName = br.ReadString();
        AccountNumber = br.ReadInt32();
    }
}

class Program
{
    static void Serialize(string OutputFile)
    {
        // Write to disk
        using (Stream stream = File.Open(OutputFile, FileMode.Create))
        {
            BinaryWriter bw = new BinaryWriter(stream);

            // Save number of entries
            bw.Write(accounts.Count);

            foreach (KeyValuePair<string, List<Account>> accountKvp in accounts)
            {
                // Save each key/value pair
                bw.Write(accountKvp.Key);
                bw.Write(accountKvp.Value.Count);

                foreach (Account account in accountKvp.Value)
                {
                    account.Serialize(bw);
                }
            }
        }
    }

    static void Deserialize(string InputFile)
    {
        accounts.Clear();

        // Read from disk
        using (Stream stream = File.Open(InputFile, FileMode.Open))
        {
            BinaryReader br = new BinaryReader(stream);

            int entryCount = br.ReadInt32();
            for (int entries = 0; entries < entryCount; entries++)
            {
                // Read in the key-value pairs
                string key = br.ReadString();
                int accountCount = br.ReadInt32();

                List<Account> accountList = new List<Account>();
                for (int i = 0; i < accountCount; i++)
                {
                    Account account = new Account();
                    account.Deserialize(br);
                    accountList.Add(account);
                }

                accounts.Add(key, accountList);
            }
        }
    }

    static Dictionary<string, List<Account>> accounts = new Dictionary<string, List<Account>>();

    static void Main(string[] args)
    {
        string accountName = "Bob";
        List<Account> newAccounts = new List<Account>();
        newAccounts.Add(AddAccount("A", 1));
        newAccounts.Add(AddAccount("B", 2));
        newAccounts.Add(AddAccount("C", 3));
        accounts.Add(accountName, newAccounts);

        accountName = "Tom";
        newAccounts = new List<Account>();
        newAccounts.Add(AddAccount("A1", 11));
        newAccounts.Add(AddAccount("B1", 22));
        newAccounts.Add(AddAccount("C1", 33));
        accounts.Add(accountName, newAccounts);

        string saveFile = @"C:\accounts.bin";

        Serialize(saveFile);

        // clear it out to prove it works
        accounts.Clear();

        Deserialize(saveFile);
    }

    static Account AddAccount(string AccountName, int AccountNumber)
    {
        Account account = new Account();
        account.AccountName = AccountName;
        account.AccountNumber = AccountNumber;
        return account;
    }
}
A fourth option to those you mention is binary files. Although that sounds arcane and difficult, it's really easy with the serialization API in .NET.
Whether you choose binary or XML files, you can use the same serialization API, although you would use different serializers.
To binary serialize a class, it must be marked with the [Serializable] attribute or implement ISerializable.
You can do something similar with XML, although there the interface is called IXmlSerializable, and the attributes are [XmlRoot] and other attributes in the System.Xml.Serialization namespace.
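A minimal sketch of the binary route (using BinaryFormatter; note the versioning caveat raised later in this thread), assuming Account is marked [Serializable]:
using System.Runtime.Serialization.Formatters.Binary;

var accounts = new Dictionary<string, List<Account>>();
var formatter = new BinaryFormatter();

// save: Dictionary<,> itself is serializable
using (var fs = File.Create("accounts.bin"))
{
    formatter.Serialize(fs, accounts);
}

// load
using (var fs = File.OpenRead("accounts.bin"))
{
    accounts = (Dictionary<string, List<Account>>)formatter.Deserialize(fs);
}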
If you want to use a relational database, SQL Server Compact Edition is free and very lightweight and based on a single file.
Just finished coding data storage for my current project. Here are my 5 cents.
I started with binary serialization. It was slow (about 30 seconds to load 100,000 objects) and it created a pretty big file on disk as well. However, it took me only a few lines of code to implement and covered all my storage needs.
To get better performance I moved on to custom serialization, using the FastSerialization framework by Tim Haynes on Code Project. It is indeed a few times faster (12 s load, 8 s save for 100K records) and takes less disk space. The framework is built on the technique outlined by GalacticJello in a previous post.
Then I moved to SQLite and was able to get 2, sometimes 3, times faster performance: 6 s load and 4 s save for 100K records, including parsing ADO.NET tables into application types. It also gave me a much smaller file on disk. This article explains how to get the best performance out of ADO.NET: http://sqlite.phxsoftware.com/forums/t/134.aspx. Generating INSERT statements is a very bad idea. You can guess how I came to know about that. :) The SQLite implementation did take me quite a bit of time, though, plus careful measurement of the time taken by pretty much every line of code.
The first thing I'd look at is a database. However, serialization is an option. If you go for binary serialization, I would avoid BinaryFormatter - it has a tendency to get angry between versions if you change fields, etc. XML via XmlSerializer would be fine, and can be side-by-side compatible (i.e. with the same class definitions) with protobuf-net if you want to try contract-based binary serialization (giving you a flat-file serializer without any effort).
If your data is complex, high in quantity, or you need to query it locally, then object databases might be a valid option. I'd suggest looking at Db4o or Karvonite.
A lot of the answers in this thread attempt to overengineer the solution. If I'm correct, you just want to store user settings.
Use an .ini file or App.Config file for this.
If I'm wrong, and you are storing data that is more than just settings, use a flat text file in CSV format. These are fast and easy without the overhead of XML. Folks like to poo-poo these since they aren't as elegant, don't scale nicely and don't look as good on a resume, but they might be the best solution for you depending on what you need.
Without knowing what your data looks like (i.e. its complexity, size, etc.), XML is easy to maintain and easily accessible. I would NOT use an Access database, and flat files are more difficult to maintain over the long haul, particularly if you are dealing with more than one data field/element in your file.
I deal with large flat-file data feeds in good quantities daily, and even as an extreme example, flat-file data is much more difficult to maintain than the XML data feeds I process.
A simple example of loading XML data into a dataset using C#:
DataSet reportData = new DataSet();
reportData.ReadXml(fi.FullName);
You can also check out LINQ to XML as an option for querying the XML data...
HTH...
I have done several "stand-alone" apps that have a local data store. I think the best thing to use would be SQL Server Compact Edition (formerly SQL Server Mobile Edition).
It's lightweight and free. Additionally, you can stick to writing a data access layer that is reusable in other projects; plus, if the app ever needs to scale to something bigger, like full-blown SQL Server, you only need to change the connection string.
Depending on the complexity of your Account object, I would recommend either XML or a flat file.
If there are just a couple of values to store for each account, you could store them in a properties file, like this:
account.1.somekey=Some value
account.1.someotherkey=Some other value
account.1.somedate=2009-12-21
account.2.somekey=Some value 2
account.2.someotherkey=Some other value 2
... and so forth. Reading from a properties file should be easy, as it maps directly to a string dictionary.
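A rough sketch of that read (assuming one key=value pair per line):
var props = File.ReadAllLines(path)
    .Where(line => line.Contains("="))
    .Select(line => line.Split(new[] { '=' }, 2))
    .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim());

string someKey = props["account.1.somekey"]; // "Some value"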
As to where to store this file, the best choice would be the AppData folder, inside a subfolder for your program. This is a location where the current user will always have write access, and it's kept safe from other users by the OS itself.
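For example (the program and file names are illustrative):
string folder = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "MyProgram");
Directory.CreateDirectory(folder); // no-op if it already exists
string path = Path.Combine(folder, "accounts.properties");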
My first inclination is an Access database. The .mdb files are stored locally and can be encrypted if that is deemed necessary. Though XML or JSON would also work for many scenarios. Flat files I would only use for read-only, non-search (forward-read-only) information. I tend to prefer CSV format over fixed-width.
It depends on the amount of data you are looking to store. In reality there's little difference between flat files and XML; XML would probably be preferable since it gives the document structure.
The last option, which a lot of applications use now, is the Windows Registry. I don't personally recommend it (registry bloat, corruption, other potential issues), but it is an option.
If you go the binary serialization route, consider the speed at which a particular member of the data needs to be accessed. If it is only a small collection, loading the whole file will make sense, but if it will be large, you might also consider an index file.
Tracking account properties/fields located at specific addresses within the file can help you speed up access time, especially if you optimize that index file based on key usage (possibly even when you write to disk).
Keep it simple - as you said, a flat file is sufficient. Use a flat file.
This assumes that you have analyzed your requirements correctly. I would skip the serializing-as-XML step; it's overkill for a simple dictionary. Same for a database.
In my experience, in most cases JSON in a file is enough (usually you need to store an array or an object, or just a single number or string). I rarely need SQLite (which takes more time to set up and use; most of the time it's overkill).
