Currently, we use a giant configuration object that is serialized to/from XML. This has worked fine for the most part, but we are finding that, in the case of power loss or an application crash, the file can be left in a state that prevents it from deserializing properly, effectively corrupting the configuration information.
I would like to use the built-in app.config, but it doesn't seem to easily support custom classes. For example, with XML serialization I can easily serialize a generic List<ComplexClass> with no extra code; it just works. With app.config, it seems you have to provide a ton of information and custom classes for this to work. Plus, most of the "custom configuration" tutorials are from circa 2007 and may be outdated for all I know. Does anyone have information on the latest way to do this in .NET 4.0?
In addition, when a problem occurs in the application, 9 times out of 10 it is because of an improper configuration. App.config likes to store user-changeable settings in a location that is very inaccessible to users who aren't familiar with hidden directories and such. Is there any way to have a single location for the config file, which the user can easily email us when a problem occurs?
Or, is all this easier than I remember it being in early 2.0 days? Any links or quick examples of how to easily do custom app.config information would be great.
As a further example, this is a pared-down version of one of the types of objects I want to serialize as List<Alarm>, since the number of Alarms can vary, or the list can be empty. Is there an analogous way to store something like this in app.config?
[Serializable]
public class Alarm
{
    [Serializable]
    public class AlarmSetting
    {
        public enum AlarmVariables { Concentration, RSquared }
        public enum AlarmComparisons { LessThan, GreaterThan }

        [Description("Which entity is being alarmed on.")]
        public AlarmVariables Variable { get; set; }

        [Description("Method of comparing the entity to the setpoint.")]
        public AlarmComparisons Comparator { get; set; }

        [Description("Value at which to alarm.")]
        public Double Setpoint { get; set; }
    }

    public String Name { get; set; }
    public Boolean Enabled { get; set; }
    public String Parameter { get; set; }
    public List<AlarmSetting> AlarmSettings { get; set; }
    public System.Drawing.Color RowColor { get; set; }
}
I would suggest moving away from any sort of config file and instead using some type of local database, such as SQLite or SQL Server Express, which is much more resilient to app crashes.
IMHO, config settings shouldn't be a default container for user settings. To me a config file is there to make sure the app runs in the given environment. For example, defining connection strings or polling rates or things of that nature.
User settings, especially ones that change often, need a better storage mechanism such as a local database. Unless, of course, it's a client/server application. In which case those settings should be up at the server itself and only persisted locally if the app has to work in a disconnected state.
The example you gave, one of configuring what appears to be one or more alarms is a perfect example of something that belongs in a database table.
I have been using XML serialization, similar to what you are describing, for many years on a number of different projects. Unless you want to bite off SQL for configuration, this seems to be the best solution.
IMHO, the app.config mechanism is not any better than straight XML serialization. It is actually more difficult to access this configuration from a number of different projects. If you are only saving transient state (user options etc) from a WinForms application, then application settings can be convenient for simple data types.
It seems to me like you have another issue that is causing the corruption. I rarely get file corruption with these XML files. Whenever I do, it is related to an exception thrown during serialization, not to an application crash. If you want to double-check this, you might want to serialize to a memory stream and then dump the memory stream to disk. You can actually serialize and then deserialize the stream to make sure it's valid prior to writing the file to disk.
Unless you are writing this file a lot I would be skeptical that the file corruption is due to power outages.
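The serialize-to-memory-first approach can be sketched like this; SafeConfigWriter, the temp-file naming, and the backup suffix are illustrative assumptions, not an existing API:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public static class SafeConfigWriter
{
    // Serialize to memory first, verify the bytes deserialize cleanly,
    // and only then replace the file on disk. A power loss mid-write
    // leaves either the old file or a temp file, never a half-written config.
    public static void Save<T>(T config, string path)
    {
        var serializer = new XmlSerializer(typeof(T));

        using (var ms = new MemoryStream())
        {
            serializer.Serialize(ms, config);

            // Sanity check: can we read back what we just wrote?
            ms.Position = 0;
            serializer.Deserialize(ms);

            // Write to a temp file, then swap it into place.
            string temp = path + ".tmp";
            File.WriteAllBytes(temp, ms.ToArray());
            if (File.Exists(path))
                File.Replace(temp, path, path + ".bak");
            else
                File.Move(temp, path);
        }
    }
}
```

File.Replace performs the swap as a single operation where the filesystem supports it, so the previous good copy survives a crash during the save.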
Unless you can track down the source of the error, you're only guessing that it has anything to do with XML files. It's entirely possible the built-in XmlSerializer is failing; e.g., you may have a circular reference somewhere, but it's hard to comment without knowing what your error is.
Sometimes using the built-in Xml Serializer isn't the best choice, and when objects get complex, it can be better to perform the serialization and deserialization yourself. You'll have more control and be able to more accurately determine / recover from bad file data.
XDocument doc = new XDocument(
    new XElement("attachments",
        new XElement("directory", attachmentDirectory),
        new XElement("attachment-list",
            from attached in attachedFiles
            select new XElement("file",
                new XAttribute("name", attached.FileName),
                new XAttribute("size", attached.FileSize))
        )
    )
);
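The read side is where hand-rolled serialization pays off, since you decide how to handle bad data. A sketch of the matching loader; the AttachedFile type and the skip-malformed-entries policy are illustrative assumptions:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical counterpart type for the example above.
public class AttachedFile
{
    public string FileName { get; set; }
    public long FileSize { get; set; }
}

public static class AttachmentReader
{
    // Parse the document back, skipping malformed entries instead of
    // failing the whole load -- the main advantage of doing it by hand.
    public static AttachedFile[] Load(XDocument doc)
    {
        return doc.Root
            .Element("attachment-list")
            .Elements("file")
            .Where(e => e.Attribute("name") != null && e.Attribute("size") != null)
            .Select(e => new AttachedFile
            {
                FileName = (string)e.Attribute("name"),
                FileSize = (long)e.Attribute("size")
            })
            .ToArray();
    }
}
```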
Other than that, configuration files are for configuration, not program data. The difference is that configuration data shouldn't change often and normally isn't directly editable by users. In a WinForms app, you don't share data between users in a configuration file; if you do, then you should consider whether your app is really a database application.
Since we made the decision to move away from the Microsoft configuration system in 2007, we have not regretted it for a second.
Take a look at this:
http://blog.aumcode.com/2013/08/aum-configuration-as-facilitated-by-nfx.html
Related
I am trying to update a legacy application and need some advice about how to organize the data level.
Today, all the data is stored in a binary file created with the help of binary serialization. The data that is stored is a several levels deep tree structure.
The object level of the saved data:
ApplicationSettings
CommunicationSettings
ConfigurationSettings
HardwareSettings
and so on, through some additional levels
All these classes have a lot of logic to do different things. They also have status information that should not be saved to the file.
The data is constantly updated during execution of the program and is saved to the binary file by the "business logic" whenever it is updated.
I try to update the program, but doing unit tests for this is a nightmare.
I still want the data saved to a file in some form. But otherwise, I'm open to suggestions on how to improve this.
Edit:
The program is quite small, and I do not want to be dependent on large, complex frameworks.
The reason I need help is to try to clean up the code where virtually the entire application logic is in one huge method.
What I would do:
First, turn the settings into contracts:
public interface IApplicationSettings
{
    ICommunicationSettings CommunicationSettings { get; }
    IConfigurationSettings ConfigurationSettings { get; }
}
Now, break up your logic into separate concerns and pass in your settings at the highest level possible. If MyLogicForSomething only concerns itself with the communication settings, then only pass in the communication settings:
public class MyLogicForSomething
{
    public MyLogicForSomething(ICommunicationSettings commSettings)
    {
    }

    public void PerformSomething() { /* ... */ }
}
ICommunicationSettings is easily mockable here with something like Rhino Mocks. You can now easily test that something in your settings is called or set when you run your logic:
var commSettings = MockRepository.GenerateStub<ICommunicationSettings>();
var logic = new MyLogicForSomething(commSettings);
logic.PerformSomething();
commSettings.AssertWasCalled(x => x.SaveSetting(null), o => o.IgnoreArguments());
I am currently developing a system that needs to expose some of its metadata/documentation at runtime. I know there are methods of using XML Comments and bringing that data back into the app via homegrown Reflection extension methods.
I feel it might be easier to use the Description attribute from the System.ComponentModel namespace (but located in the System assembly). This way I and other developers would be able to use regular reflection to get the Description of fields. I would much rather use this than a custom attribute. What are the downsides to this approach?
Example:
public class Customer
{
    public int Id { get; set; }

    [Description("The common friendly name used for the customer.")]
    public string Name { get; set; }

    [Description("The name used for this customer in the existing Oracle ERP system.")]
    public string ErpName { get; set; }
}
I am doing the exact same thing (with ERP software, no less!) and have encountered no drawbacks. One thing you may consider a drawback, depending on your architecture, is that many documentation tools are directly or indirectly based on XML comments, and they will likely not pick up Description attributes. In our architecture, however, the Description attribute code is not actually the master source of the documentation. We have a database of metadata that defines and describes every property, and we can generate both XML comments and Description attributes from that same source. Actually, in our case we do not generate the XML comments at all; instead we directly generate the XML file that the compiler would normally produce from them, and that file is what our documentation tools consume. If you want to use documentation tools that rely on that XML file and cannot accept the Description attribute directly, you could probably write a simple utility to extract the description attributes into a similar file.
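Pulling the descriptions back out at runtime takes only a few lines of reflection. A minimal sketch (Customer is a trimmed copy of the example class from the question, included so the snippet stands alone):

```csharp
using System;
using System.ComponentModel;
using System.Reflection;

// Trimmed copy of the example class, for illustration.
public class Customer
{
    public int Id { get; set; }

    [Description("The common friendly name used for the customer.")]
    public string Name { get; set; }
}

public static class DescriptionReader
{
    // Returns the [Description] text for a property, or null if none is set.
    public static string GetDescription(Type type, string propertyName)
    {
        PropertyInfo prop = type.GetProperty(propertyName);
        if (prop == null) return null;

        var attr = (DescriptionAttribute)Attribute.GetCustomAttribute(
            prop, typeof(DescriptionAttribute));
        return attr == null ? null : attr.Description;
    }
}
```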
I have a database application that is configurable by the user - some of these options are selecting from different external plugin systems.
I have a base Plugin type, and my database schema has the same Plugin record type with the same fields. I have a PluginManager to load plugins (via an IoC container) at application start and link them to the database (essentially copying the fields from the plugin on disk to the database).
public interface IPlugin
{
    Guid Id { get; }
    Version Version { get; }
    string Name { get; }
    string Description { get; }
}
Plugins can then be retrieved using PluginManager.GetPlugin(Guid pluginId, Guid userId), where the user ID is that of one of the multiple users a plugin action may be called for.
A set of known interfaces has been declared by the application in advance, each specific to a certain function (formatter, external data, data sender, etc.); if the plugin implements a service interface which is not known, it will be ignored:
public interface IAccountsPlugin : IPlugin
{
    IEnumerable<SyncDto> GetData();
    bool Init();
    bool Shutdown();
}
Plugins can also have settings (PluginSettingAttribute) defined per user in the multi-user system; these properties are set when a plugin is retrieved for a specific user. A PluginPropertyAttribute marks properties that are common across all users and read-only, set by the plugin once when it is registered at application startup.
public class ExternalDataConnector : IAccountsPlugin
{
    public IEnumerable<SyncDto> GetData() { return null; }
    public bool Init() { return true; }
    public bool Shutdown() { return true; }

    private string externalSystemUsername;

    // PluginSettingAttribute will create a row in the settings table; settingId
    // will be set to the provided constructor parameter. This field is written
    // when a plugin is retrieved by the plugin manager, with the value for the
    // requesting user as retrieved from the database.
    [PluginSetting("ExternalSystemUsernameSettingName")]
    public string ExternalSystemUsername
    {
        get { return externalSystemUsername; }
        set { externalSystemUsername = value; }
    }

    // PluginPropertyAttribute will create a row in the attributes table common to all users
    [PluginProperty("ShortCodeName")]
    public string ShortCode
    {
        get { return "externaldata"; }
    }

    public Version Version
    {
        get { return new Version(1, 0, 0, 0); }
    }

    public string Name
    {
        get { return "Data connector"; }
    }

    public string Description
    {
        get { return "Connector for collecting data"; }
    }
}
Here are my questions and areas I am seeking guidance on:
With the above abstraction of linking plugins in an IoC container to the database, the user can select the database field Customer.ExternalAccountsPlugin = idOfExternalPlugin. This feels heavy; is there a simpler way in which other systems achieve this (SharePoint, for instance, has lots of plugins that are referenced by the user database)?
My application dictates at compile time the interfaces it supports and ignores all others. I have seen some systems claim to be fully expandable with open plugins, which I presume means lots of loosely typed interfaces and casting. Is there a halfway ground between the two options that would allow future updates to be issued without a recompile but still use concrete interfaces?
My plugins can contain metadata (PluginProperty or PluginSetting), and I am unsure of the best place to store it: in a plugin metadata table (which would make LINQ queries more complex) or directly in the plugin database record row (which allows easy LINQ queries such as PluginManager.GetPluginsOfType<IAccountsPlugin>().Where(x => x.ShortCode == "externaldata").FirstOrDefault()). Which is considered best practice?
Since plugin capabilities and interfaces rely so heavily on the database schema, what is the recommended way to limit a plugin to a specific schema revision? Would I keep this schema revision as a single row in a settings table in the database and update it manually after each release? Should the plugin declare a maximum supported schema version, or should the application keep a list of known plugin versions?
1) I'm sorry, but I don't know for sure. However, I'm fairly confident that software with data created or handled by custom plugins handles it the way you described: if a user loads the data but is missing the specific plugin, the data doesn't become corrupted, and the user isn't allowed to modify it. (An example that comes to mind is 3D software in general.)
2) Offering only a very strict interface naturally restricts plugin creation. (E.g., in Excel I can't create a new cell type.) That's neither bad nor good; it depends on what you want from it, and it's a choice. If you want plugin creators to access the data only through very specific pipes, and to limit the types of data they can create, then that fits your design. Otherwise, if your goal is to open your software to improvement, you should also expose some classes and methods you judge safe enough to be used externally. (E.g., in Maya I can create a new entity type that derives from a base class, not just an interface.)
3) Well, it depends on a lot of things. When serializing your data, you could create a wrapper that contains all the information for a specific plugin: ID, metadata, and whatever else you judge necessary. I would go that way, as it would be easier to retrieve, but is it the best way for what you need? Hard to say without more information.
4) A good example of this is Firefox. A small version increment doesn't change plugin compatibility. A medium version increment checks against a database whether the plugin is still valid given what it implements; if the plugin doesn't implement anything that changed, it is still valid. A major version increment requires recompiling all plugins against the new definitions. From my point of view, it's a nice middle ground that lets developers avoid constant recompiles, but it makes development of the main software slightly trickier, since changes must be planned ahead. The idea is to balance the PitA (Pain in the Ass) factor between the software dev and the plugin dev.
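A minimal sketch of that middle ground, assuming the host tracks a schema version and each plugin declares the range it supports; ISchemaAware, SamplePlugin, and the major/minor rule are all hypothetical names for illustration:

```csharp
using System;

// Hypothetical contract: a plugin declares the schema range it was built for.
public interface ISchemaAware
{
    Version MinSchemaVersion { get; }
    Version MaxSchemaVersion { get; }
}

// Example plugin stub, purely for illustration.
public class SamplePlugin : ISchemaAware
{
    public Version MinSchemaVersion { get { return new Version(2, 0); } }
    public Version MaxSchemaVersion { get { return new Version(3, 1); } }
}

public static class PluginCompatibility
{
    // Firefox-style rule sketched above: the schema's major/minor version must
    // fall inside the plugin's declared range; build/revision are ignored.
    public static bool IsCompatible(ISchemaAware plugin, Version schemaVersion)
    {
        var effective = new Version(schemaVersion.Major, schemaVersion.Minor);
        return effective >= new Version(plugin.MinSchemaVersion.Major, plugin.MinSchemaVersion.Minor)
            && effective <= new Version(plugin.MaxSchemaVersion.Major, plugin.MaxSchemaVersion.Minor);
    }
}
```

The schema version itself could live in the single settings-table row the question proposes; the manager reads it once at startup and skips any plugin whose range excludes it.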
Well... that was my long collection of 2 cents.
I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, and parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back. The server receives the graph and proceeds to make the modifications.
However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries.
My understanding of WCF serialisation is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out, and it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the deserialization constructor. That is fine by me.
I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client.
Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets.
So my questions are:
Has anyone faced a similar problem before, and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down, so that the client requests a part of the object graph as it needs it and sends back only the bits that have changed (thus reducing concurrency-related risks)?
If sending the object-graph down is the right way, is WCF the right tool?
And if WCF is right, what's the best way to get WCF to serialise my object graph?
Object graphs can be used with DataContract serialization.
Note: make sure you're preserving object references, so that you don't end up with multiple copies of the same object in the graph when they should all be the same reference; the default behavior does not preserve identity like this.
This can be done by specifying the preserveObjectReferences parameter when constructing a DataContractSerializer or by specifying true for the IsReference property on DataContractAttribute (this last attribute requires .NET 3.5SP1).
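A sketch of the attribute route; RefNode is a made-up type, and the same round trip would fail on the cycle without IsReference = true (on .NET Framework you could alternatively pass preserveObjectReferences as true to the DataContractSerializer constructor overload that accepts it):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

// IsReference = true makes the serializer emit z:Id/z:Ref pairs, so shared
// and even cyclic references survive the round trip intact.
[DataContract(IsReference = true)]
public class RefNode
{
    [DataMember] public string Name;
    [DataMember] public RefNode Next;
}

public static class RefDemo
{
    // Round-trips a graph through the serializer and returns the copy.
    public static RefNode RoundTrip(RefNode head)
    {
        var serializer = new DataContractSerializer(typeof(RefNode));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, head);
            ms.Position = 0;
            return (RefNode)serializer.ReadObject(ms);
        }
    }
}
```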
However, when sending object graphs over WCF, you have the risk of running afoul of WCF quotas (and there are many) if you don't take care to ensure the graph is kept to a reasonable size.
For the net.tcp transport, for example, important ones to set are maxReceivedMessageSize, maxStringContentLength, and maxArrayLength. Not to mention a hidden quota of 65,536 items allowed in a graph (maxItemsInObjectGraph), which can only be overridden with difficulty.
You can also use classes that only expose read accessors with the DataContractSerializer, and have no parameterless constructors:
using System;
using System.IO;
using System.Runtime.Serialization;

class DataContractTest
{
    static void Main(string[] args)
    {
        var serializer = new DataContractSerializer(typeof(NoParameterLessConstructor));
        var obj1 = new NoParameterLessConstructor("Name", 1);

        var ms = new MemoryStream();
        serializer.WriteObject(ms, obj1);
        ms.Seek(0, SeekOrigin.Begin);
        var obj2 = (NoParameterLessConstructor)serializer.ReadObject(ms);

        Console.WriteLine("obj2.Name: {0}", obj2.Name);
        Console.WriteLine("obj2.Version: {0}", obj2.Version);
    }

    [DataContract]
    class NoParameterLessConstructor
    {
        public NoParameterLessConstructor(string name, int version)
        {
            Name = name;
            Version = version;
        }

        [DataMember]
        public string Name { get; private set; }

        [DataMember]
        public int Version { get; private set; }
    }
}
This works because DataContractSerializer can instantiate types without calling the constructor.
You got yourself mixed up with the serializers:
the XmlSerializer requires a parameter-less constructor, since when deserializing, the .NET runtime will instantiate a new object of that type and then set its properties
the DataContractSerializer has no such requirement
Check out the blog post by Dan Rigsby which explains serializers in all their glory and compares the two.
Now for your design - my main question is: does it make sense to have a single function that returns all the settings, which the client manipulates, and then another function which receives back all the information?
Couldn't you break those things up into smaller chunks, smaller method calls? E.g. have separate service methods to set each individual item of your configuration? That way, you could
reduce the amount of data being sent across the wire - the object graph to be serialized and deserialized would be much simpler
make your configuration service more granular - e.g. if someone needs to set a single little property, that client doesn't need to read the whole big server config, set one single property, and send back the huge chunk; it can just call the appropriate method to set that one property
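Sketched as a service contract, that granular approach might look like the following; the interface and operation names are illustrative, not part of the original design:

```csharp
using System.ServiceModel;

// Hypothetical fine-grained contract: each setting travels on its own,
// so no client ever has to round-trip the full configuration graph.
[ServiceContract]
public interface IServerConfigService
{
    [OperationContract]
    string GetDnsServerAddress();

    [OperationContract]
    void SetDnsServerAddress(string address);

    [OperationContract]
    void SetSiteBinding(string siteName, string binding);
}
```

Each operation now carries a simple payload that the DataContractSerializer handles without any of the object-graph concerns above.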
Is it possible to pass an app setting string in the web.config to a common C# class?
In any class you can use ConfigurationManager.AppSettings["KeyToSetting"] to access any value in the <appSettings> element of web.config (or app.config).
Of course it's possible - but the thing to keep in mind is that a properly designed class (unless it's explicitly designed for ASP.NET) shouldn't know or care where the information comes from. There should be a property (or method, but properties are the more '.NET way' of doing things) that you set with the string value from the application itself, rather than having the class directly grab information from web.config.
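For example, rather than having the class call ConfigurationManager itself, the composition root does the lookup once and hands the value in; ReportGenerator, its property, and the "ReportOutputPath" key are all hypothetical names for illustration:

```csharp
using System;

// The class receives its setting as plain state; it neither knows
// nor cares that the value originated in web.config.
public class ReportGenerator
{
    public string OutputPath { get; set; }

    public string DescribeTarget()
    {
        if (string.IsNullOrEmpty(OutputPath))
            throw new InvalidOperationException("OutputPath not configured.");
        return "Reports will be written to " + OutputPath;
    }
}

// The application (e.g. Global.asax or a factory) wires it up once:
//   var generator = new ReportGenerator
//   {
//       OutputPath = ConfigurationManager.AppSettings["ReportOutputPath"]
//   };
```

This keeps the class trivially testable, since a test can set OutputPath directly without any config file.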
If you have configuration values that are used in many places consider developing a Configuration class that abstracts the actual loading of the configuration items and provides strongly typed values and conversions, and potentially default values.
This technique localizes access to the configuration file, making it easy to switch implementations later (say, storing in the registry instead), and it means the values only have to be read from the file once. I would hope the configuration manager is implemented this way as well, reading all values the first time it is used and serving them from an internal store on subsequent accesses. The real benefit is strong typing and one-time-only conversion, I think.
public static class ApplicationConfiguration
{
    // Nullable so the "not yet loaded" state is representable; a plain
    // DateTime is a value type and can never be null.
    private static DateTime? myEpoch;

    public static DateTime Epoch
    {
        get
        {
            if (myEpoch == null)
            {
                string startEpoch = ConfigurationManager.AppSettings["Epoch"];
                if (string.IsNullOrEmpty(startEpoch))
                {
                    myEpoch = new DateTime(1970, 1, 1);
                }
                else
                {
                    myEpoch = DateTime.Parse(startEpoch);
                }
            }
            return myEpoch.Value;
        }
    }
}