How to access common namespaces for XML parsing - C#

I have stored the common namespaces used in my LINQ to XML parsing in a config file. Where is the best place to access them in my application? Should I put them in my base class, or create a config class that exposes the namespaces via accessors? What would be deemed good practice here? I currently have about 7 namespaces.

What is the requirement? You currently have the namespaces in a config file which allows you to change them without recompiling the application. If you feel this is useful, I would keep them in the file and, as you suggest, create a type to hold the values at runtime which can be passed as a dependency to any code which needs to know about the namespaces.
If, however, you expect these namespaces to be fixed forever, it may be reasonable to hard-code them into your base class or wherever else in the source code makes sense (this could also be done using embedded resources rather than string literals).
This latter option would have the benefit of reducing unnecessary noise in your config file and removing the need for the added dependency type, but I would suggest that, in most cases, it's probably just as well to use the config-file pattern regardless. Yes, it may be a little extra clutter, but in this business, things that you think will never change have a habit of changing.
Also, you say that you currently have 7 namespaces. This suggests to me that you think you may have more or fewer in the future. For this reason, it sounds like you should probably be using the config-file pattern.
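For illustration, a minimal sketch of such a dependency type, assuming the namespaces are stored as appSettings entries (the key names, property names, and class name here are hypothetical):

using System.Configuration;
using System.Xml.Linq;

// Holds the XML namespaces read from the config file so they can be passed
// as a dependency to any code that needs them. XNamespace converts
// implicitly from string.
public class XmlNamespaceConfig
{
    public XNamespace Orders { get; private set; }
    public XNamespace Customers { get; private set; }

    public XmlNamespaceConfig()
    {
        Orders = ConfigurationManager.AppSettings["Xml.Namespace.Orders"];
        Customers = ConfigurationManager.AppSettings["Xml.Namespace.Customers"];
    }
}

// Usage: pass the config object to the parsing code, e.g.
// var ns = new XmlNamespaceConfig();
// var orders = doc.Descendants(ns.Orders + "Order");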

Related

Design principles to consider when wrapping a 3rd party logger like log4net

I'm creating a logger for a company that has several types of .NET projects (Windows Services, ASP.NET, etc), and I was considering using log4net as my logger, but I don't want to be married to log4net, so I was thinking of wrapping it in my own assembly. I realize some developers don't recommend wrapping log4net because that would be an anti-pattern, but assuming I was going that route anyway, I had some questions:
I am planning to use the design principles mentioned in this article to design my wrapper: using a factory method, interfaces, and reflection, I can decide which logger to use (whether log4net, ELMAH, or something else) simply by specifying it in the config file:
https://www.simple-talk.com/dotnet/.net-framework/designing-c-software-with-interfaces/
Question is:
Should I create this logger project in a separate Visual Studio solution and just use the DLL in my client applications? If so, where would the configuration details for log4net go? Would they be supplied by the client application's config file, and is that good design? For instance, if I decided to switch away from log4net to a different logging framework, I would not only have to change the config setting to specify the new concrete logger's assembly/class name, but would also have to remove the log4net config entries (and perhaps add the new logger's entries). Is this considered an acceptable design approach?
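To make the approach concrete, here is a minimal sketch of the interface/factory/reflection idea described above, assuming the concrete logger's type name is stored under a hypothetical "LoggerType" appSettings key (the interface members are also illustrative):

using System;
using System.Configuration;

// The abstraction the client code programs against.
public interface ILogger
{
    void Info(string message);
    void Error(string message, Exception ex);
}

// Factory that picks the concrete logger (log4net wrapper, ELMAH wrapper, ...)
// based on a type name read from the config file.
public static class LoggerFactory
{
    public static ILogger Create()
    {
        // e.g. <add key="LoggerType" value="MyCompany.Logging.Log4NetLogger, MyCompany.Logging.Log4Net" />
        string typeName = ConfigurationManager.AppSettings["LoggerType"];
        Type loggerType = Type.GetType(typeName, throwOnError: true);
        return (ILogger)Activator.CreateInstance(loggerType);
    }
}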
Oh my goodness your timing is awesome. And that article is very relevant to me so thanks! I am doing this very same thing right now. I realized that log4net is a decent logger, but a terrible library for making a logger.
I agree with the article that you should not expose log4net directly. Unless this is a small app, it would be too difficult to switch later. And log4net is showing its age, so that may well happen. I like the interface approach overall.
But wrapping log4net is a pain in the butt. In doing my prototype wrapper, I feel like I rewrote 50% of log4net and discarded 25%. Some issues I found:
log4net will grab the "caller information" for you. Normally that is great, but if you wrap log4net, the caller information will point to your logger, so you have to grab it explicitly yourself. log4net doesn't provide a way to override the caller information, though, so you end up creating your own fields for the caller's file, line number, class, and method. Not only do you not gain the benefit here, it is actually more work than just doing it yourself (see the sketch after this list).
log4net uses the old, pre-C# 5 approach to grabbing the caller information (walking the stack trace), which is slow.
You will be unable to completely wrap log4net without also wrapping the configuration. The caller has to configure the loggers either in code or in their app.config. If they do it in their app.config, they are putting log4net-specific entries in their app, so you have failed to hide it with your wrapper. But if your wrapper code performs the configuration automatically, you have just lost the flexibility of log4net. The third option is to build your own configuration, but then what good did log4net do for you? You have just rewritten another piece of it.
You are stuck with the log levels that come with log4net. In our app, we want "categories" instead of "levels" which I then have to map to the log4net "levels" under the hood. Now, all the predefined log4net filters are of no use to me.
Anyone using your wrapper still has to reference log4net in their project anyway.
If your wrapper needs a way to handle errors, or pass them back to the caller, you will have trouble. log4net has its own internal error handling and you will need to hook into that and provide your own. Otherwise, errors (like a misconfigured appender) will just go out to the console. If it was designed as a library for making loggers, it would just throw the exception back up or provide a simple event.
One thing we wanted to get out of log4net is the ability to write to different outputs without us having to write that code ourselves. For example, I've never written to the event log, and I think log4net can do that. But it might be easier for me to rip out the event-logging code than to try to wrap it. Same thing with filters.
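To illustrate the caller-information point above, here is a rough sketch of what the wrapper ends up doing (the member names and message format are hypothetical): capture the call site explicitly with the C# 5 caller-info attributes and pass it along as ordinary log data, because log4net's own location info would report the wrapper as the caller.

using System;
using System.Runtime.CompilerServices;

public class LogWrapper
{
    private readonly log4net.ILog _log = log4net.LogManager.GetLogger(typeof(LogWrapper));

    // The compiler fills in the optional parameters with the *real* call site.
    public void Info(string message,
                     [CallerMemberName] string member = "",
                     [CallerFilePath] string file = "",
                     [CallerLineNumber] int line = 0)
    {
        // Forward to log4net, carrying the captured caller details as plain
        // data (a real wrapper would likely use custom properties instead).
        _log.Info(string.Format("{0} (at {1}, {2}:{3})", message, member, file, line));
    }
}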
There are some other problems I had with log4net that aren't directly related to trying to wrap it necessarily.
The code is old. The interfaces don't use generics where they should. Lots of object.
They use the old System.Collections collections. Similar to #1.
It has ifdefs for .NET 1 versus 2, and ifdefs for the obsolete Compact Framework. Ugh.
It is designed to log strings, not structured objects. I made my own code to do so, and so did these people: http://stephenjamescode.blogspot.com/2014/01/logging-custom-objects-and-fields-with.html and http://element533.blogspot.com/2010/05/mapping-message-object-properties-to.html but this feels like basic functionality.
It doesn't support CSV and it is cumbersome to add. http://element533.blogspot.com/2010/05/writing-to-csv-using-log4net.html
It doesn't have any kind of logging "service"
It doesn't provide a way to read or parse the log.
I found it was more effort to configure the appenders than to write your own. For example, I mapped a bunch of fields to the AdoNetAppender, but it would have taken me less time to just rewrite AdoNetAppender. Debugging a database field mapping in XML is harder than debugging the equivalent C# + ADO.NET code; the XML is more verbose and less type-safe. This might be specific to me because I had a very structured set of data and a lot of fields.
Sorry for the really long post, I have lots and lots of thoughts on this topic. I don't really dislike log4net, I just think it is out of date and if you are wrapping it, you might be better off writing your own.

If attributes are only constructed when they are reflected into, why are attribute constructors so limited?

As shown here, attribute constructors are not called until you reflect to get the attribute values. However, as you may also know, you can only pass compile-time constant values to attribute constructors. Why is this? I think many people would much prefer to do something like this:
[MyAttribute(new MyClass(foo, bar, baz, jQuery))]
than passing a string (causing stringly typed code, too!) with those values turned into strings, then relying on a regex to recover the values instead of just using the actual values, and depending on exceptions thrown somewhere that has nothing to do with the class (except that a method it called uses some attributes that were typed wrong) instead of getting compile-time warnings/errors.
What limitation caused this?
Attributes are part of metadata. You need to be able to reflect on metadata in an assembly without running code in that assembly.
Imagine for example that you are writing a compiler that needs to read attributes from an assembly in order to compile some source code. Do you really want the code in the referenced assembly to be loaded and executed? Do you want to put a requirement on compiler writers that they write compilers that can run arbitrary code in referenced assemblies during the compilation? Code that might crash, or go into infinite loops, or contact databases that the developer doesn't have permission to talk to? The number of awful scenarios is huge and we eliminate all of them by requiring that attributes be dead simple.
The issue is with the constructor arguments. They need to come from somewhere; they are not supplied by the code that consumes the attribute. They must be supplied by the reflection plumbing when it creates the attribute object by calling its constructor, for which it needs the constructor argument values.
This starts at compile time, with the compiler parsing the attribute and recording the constructor arguments. It stores those argument values in the assembly metadata in a binary format. The issue then is that the runtime needs a highly standardized way to deserialize those values, one that preferably doesn't depend on any of the .NET classes you'd normally use to de/serialize data, because there's no guarantee that such classes are actually available at runtime; they won't be in a very trimmed-down version of .NET like the Micro Framework.
Even something as common as binary serialization with the BinaryFormatter class is troublesome; note how it requires the [Serializable] attribute on the class to allow it to do its job. Versioning would also be an enormous problem: clearly such a serializer class could never change, at the risk of breaking attributes in old assemblies.
This is a rock and a hard place, solved by the CLS designers by heavily restricting the allowed types for an attribute constructor. They didn't leave much: just the simple value types, string, a simple one-dimensional array of them, and Type. They are never a problem to deserialize, since their binary representation is simple and unambiguous. Quite a restriction, but attributes can still be pretty expressive. The ultimate fallback is to use a string and decode that string in the constructor at runtime. Creating an object of MyClass isn't an issue; you can do so in the attribute constructor. You'll have to encode the arguments that this constructor needs as properties of the attribute, however.
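A minimal sketch of that idea, with hypothetical types and members: the attribute accepts only CLS-allowed constants and builds the richer object itself inside its constructor.

using System;

// Hypothetical rich type you actually want associated with the attribute.
public class MyClass
{
    public MyClass(string foo, int bar) { Foo = foo; Bar = bar; }
    public string Foo { get; private set; }
    public int Bar { get; private set; }
}

// The attribute only takes compile-time constants (string, int), which the
// compiler can store in metadata, and constructs MyClass when the attribute
// is instantiated via reflection.
[AttributeUsage(AttributeTargets.Class)]
public sealed class MyAttribute : Attribute
{
    public MyAttribute(string foo, int bar)
    {
        Value = new MyClass(foo, bar);
    }

    public MyClass Value { get; private set; }
}

// Usage: only constants appear at the call site.
// [MyAttribute("foo", 42)]
// public class Annotated { }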
Probably the most correct answer as to why you can only use constants for attributes is that the C#/BCL design team did not judge support for anything else important enough to add (i.e. not worth the effort).
When you build, the C# compiler will instantiate the attributes you have placed in your code and serialize them, so that they can be stored in the generated assembly. It was probably more important to ensure that attributes can be retrieved quickly and reliably than it was to support more complex scenarios.
Also, code that fails because some attribute property value is wrong is much easier to debug than some framework-internal deserialization error. Consider what would happen if the class definition for MyClass was defined in an external assembly - you compile and embed one version, then update the class definition for MyClass and run your application: boom!
On the other hand, it's seriously frustrating that DateTime instances are not constants.
What limitation caused this?
The reason it isn't possible to do what you describe is probably not any technical limitation; it's purely a language design decision. Basically, when designing the language, they said "this should be possible, but not that". If they had really wanted this to be possible, the "limitations" would have been dealt with and it would be possible. I don't know the specific reasoning behind this decision, though.
/.../ passing a string (causing stringly typed code too!) with those values, turned into strings, and then relying on Regex to try and get the value instead of just using the actual value /.../
I have been in similar situations. I sometimes wanted to use attributes with lambda expressions to implement something in a functional way. But after all, C# is not a functional language, and when I wrote the code in a non-functional way I didn't need such attributes.
In short, I think of it like this: if I want to develop this in a functional way, I should use a functional language like F#. Since I'm using C# and doing it in a non-functional way, I don't need such attributes.
Perhaps you should simply reconsider your design and not use the attributes like you currently do.
UPDATE 1:
I claimed C# is not a functional language, but that is a subjective view and there is no rigorous definition of "functional language". I agree with Adam Wright, "/.../ As such, I wouldn't class C# as functional in general discussion - it is, at best, multi-paradigm, with some functional flavour." at Why is C# a functional programming language?
UPDATE 2:
I found this post by Jon Skeet: https://stackoverflow.com/a/294259/1105687 It regards not allowing generic attribute types, but the reasoning could be similar in this case:
Answer from Eric Lippert (paraphrased): no particular reason, except
to avoid complexity in both the language and compiler for a use case
which doesn't add much value.

Why use ConfigurationManager vs reading custom configuration file?

I've always used my own format of configuration file, which is XML, and then I just deserialise the XML into an object in my project or read it into an XML document.
This seems a lot easier for reading and accessing the information I need.
I've had a look at the ConfigurationManager class this morning and it seems a bit overly complicated just to read a config file.
Is there any argument as to why I should use ConfigurationManager?
It is just a built-in mechanism in .NET which is already implemented for you, so you don't need any extra code (probably except for wrapping it in your own IConfig to separate concerns).
There is a GUI for editing .NET configuration files which sometimes comes in handy.
ASP.NET applications, for instance, automatically restart when web.config has changed, while you would need some custom logic to get the same behaviour with your own config files.
ConfigurationManager is used internally, and you're not obligated in any way to use it; I used to do what you do. Nowadays it depends: if it is a file a user is supposed to change, I might still roll my own configuration; otherwise the file is added as an embedded resource and I use the ConfigurationManager to read it, because I don't think there is another way of reading those files. The thing is, use whatever mechanism you like; ConfigurationManager just provides a bit more encapsulation and some out-of-the-box utility classes.
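For comparison, a minimal sketch of both approaches (the key name, element name, and file name are hypothetical):

using System.Configuration;
using System.Xml.Linq;

// Built-in: read a value from app.config / web.config, e.g.
// <appSettings><add key="ReportPath" value="C:\reports" /></appSettings>
string reportPath = ConfigurationManager.AppSettings["ReportPath"];

// Custom: load your own XML file and pull the values out yourself.
XDocument settings = XDocument.Load("settings.xml");
string customPath = (string)settings.Root.Element("ReportPath");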

Automatically marking C# classes for serialization

I want to build a Visual Studio plugin that automatically annotates classes for serialization. For example, for the built-in binary serializer I could just add [Serializable] to the class declaration; for WCF it could add [DataContract] to the class and [DataMember] to the members and properties (I could get [KnownType] information through reflection and annotate where appropriate). If using protocol buffers, it could add [ProtoContract], [ProtoMember] and [ProtoInclude] attributes, and so on.
I am assuming that the classes we are going to use this on are safe to serialize (so no sockets or non-serializable stuff in there). What I want to know is: what is the easiest way to take an existing piece of code (or a binary, if that's easier) and add those attributes while preserving the rest of the code intact? I am fine with the output being source code or binary.
The idea that comes to mind is using a C# parser: parse everything, find the interesting code elements, annotate them, and write the code back out. However, that seems very complex given the relatively small number of modifications I want to make. Is there an easier way to do this?
Visual Studio already has an API for discovering and emitting code which you might take a look at. It's not exactly a joy to use but could work for this purpose.
While such a plugin would certainly be a useful thing, I would consider making an add-in for a tool like ReSharper rather than for VS directly. The advantage is that somebody has already solved the huge pile of problems you haven't even dreamed of yet, so it will be a lot easier to build such specific functionality.
It looks to me like you need an MSBuild task similar to this one: http://kindofmagic.codeplex.com/. Is that about right?
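If you do go down the parser route from the question, here is a minimal sketch using Roslyn (the Microsoft.CodeAnalysis.CSharp package); it is only an illustration under that assumption, and it blanket-annotates every class rather than deciding which classes and attributes apply:

using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Adds [Serializable] to every class declaration while leaving the rest of
// the source untouched. Real selection logic (which classes, which
// serialization attributes) would go where the blanket behaviour is now.
public class SerializableRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitClassDeclaration(ClassDeclarationSyntax node)
    {
        var attributeList = SyntaxFactory.AttributeList(
                SyntaxFactory.SingletonSeparatedList(
                    SyntaxFactory.Attribute(SyntaxFactory.IdentifierName("Serializable"))))
            .WithTrailingTrivia(SyntaxFactory.ElasticCarriageReturnLineFeed);
        // Preserve the node's original leading trivia (comments, whitespace).
        return node.AddAttributeLists(attributeList).WithLeadingTrivia(node.GetLeadingTrivia());
    }
}

// Usage:
// var tree = CSharpSyntaxTree.ParseText(System.IO.File.ReadAllText("MyClass.cs"));
// var newRoot = new SerializableRewriter().Visit(tree.GetRoot());
// System.IO.File.WriteAllText("MyClass.cs", newRoot.ToFullString());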

Classes Generated with XSD.exe Custom Class Names

Is it possible to have any control over the class names that get generated with the .Net XSD.exe tool?
As far as I'm aware, this isn't possible; the class names match almost exactly what's in the schema.
Personally, I would change the class names after XSD has generated the code, but to be honest I usually just stick with what XSD generates. It's then easier for someone else reading the code to understand which classes map to which parts of the XML.
Alternatively, if you have control over the schema, you could update that.
Basically, no. If you were writing the classes manually, you could have:
[XmlType("bar")]
class Foo {}
however, you can't do this with the xsd-generated classes. Unfortunately, one of the things you can't do with a partial class is rename it. Of course, you could use xsd to generate it, change the .cs file, and not generate it again, but that is not ideal for maintenance.
Any schema with somewhat deep nesting then ends up with utterly useless names.
I don't know of a way to work around the problem, but my tip to at least reduce the negative impact is this: Define a list of aliases for the awfully-named types. This way you can write code that isn't completely unreadable without losing the ability to regenerate.
using AgentAddress = Example.Namespace.DataContract.RootElementNestedElementAgentAddress;
...
It's a pity this list itself has to be copy-pasted to all code files needing it, but I think this at least constitutes an improvement.
