I am currently rewriting an SDK for accessing a web service.
Since the model for a database query consists of many classes (actually one class for each of about twenty possible filters), I decided to additionally provide a fluent interface.
So instead of
new Query
{
    Age = new AgeFilter { From = 18, To = 65 },
    Location = new PostalCodeFilter { Zip = 12345, Radius = new RadiusDefinition { ... } } // "Radius" property name illustrative
};
the user can now write:
Query.Create()
.WithAge(18, 65)
.WithLocation(12345, 50, "miles");
Now I have found out that the traditional way has to remain available as well (I cannot hide the actual filter objects as internal).
How can I avoid having to document both the parameters of the fluent interface and the properties of the data classes? The descriptions are the same. I thought about see/seealso, but that wouldn't show up in Visual Studio's IntelliSense.
If you use Sandcastle you can use the <inheritdoc /> tag just like this:
///<param name="from">
///<inheritdoc cref="AgeFilter.From" select="/summary/node()" />
///</param>
or
///<summary>
///<inheritdoc cref="QueryFilters.WithAge" select="/param[#name='from']/node()"/>
///</summary>
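So a fluent method can pull its parameter documentation straight from the filter class it populates. A sketch, assuming WithAge simply fills an AgeFilter as in the question (AgeFilter.To is assumed to be documented the same way as From):
/// <summary>Adds an age filter to the query.</summary>
/// <param name="from">
/// <inheritdoc cref="AgeFilter.From" select="/summary/node()" />
/// </param>
/// <param name="to">
/// <inheritdoc cref="AgeFilter.To" select="/summary/node()" />
/// </param>
public Query WithAge(int from, int to)
{
    // The description lives only on AgeFilter; the fluent method inherits it.
    Age = new AgeFilter { From = from, To = to };
    return this;
}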
I don't think you can. An XML doc comment is applied to one very specific element and isn't easily "shared". But you can "link" between elements using the <see> tag. Have a look at http://msdn.microsoft.com/en-us/library/acd0tfbe.aspx and see if it's of use to you.
Understand that DRY really applies mainly to code: writing the same line of code twice means that if the logic inherent in that code has to change, it has to change in two places. What you're trying to avoid repeating is markup, which can suffer from the same problem of changes having to be made in multiple places, but markup usually has fewer tools available for avoiding restatement. If you look at other libraries that offer multiple ways to accomplish the same goal, you'll find that a lot of the documentation appears copy-pasted.
Let's say we have a custom attribute:
[Precondition(1, "Some precondition")]
This would implement [Test, Order(1), Description("Some precondition")]
Can I access and modify the Order attribute (or create one) for this method?
I can modify the Description and Author, but Order is not a possibility.
I have tried:

1: context.Test.Properties["Order"][0] = order;

2: method.CustomAttributes.GetEnumerator(), walking the stack frames with

Object[] attributes = method.GetCustomAttributes(typeof(PreconditionAttribute), false);
if (attributes.Length >= 1) { ... }

3:

OrderAttribute orderAttribute = (OrderAttribute)Attribute.GetCustomAttribute(i, typeof(OrderAttribute));
orderAttribute.Order = _order;

but Order is read-only. And if I try orderAttribute.Order = new OrderAttribute(myOrd), it doesn't do anything.
I have two answers to choose from. One is in the vein of "Don't do this" and the other is about how to do it. Just for fun, I'm putting both answers up, separately, so they can compete with one another. This one is about why I don't think this is a good idea.
It's easy enough to write either
[Test, Order(1), Description("xxx")] or the equivalent...
[Test(Description="xxx"), Order(1)]
The proposed attribute gives users a second way to specify order, making it possible to assign two different orders to the same test. Which of the two attributes wins depends on (1) how each one is implemented, (2) the order in which the attributes are listed and (3) the platform on which you are running. For all practical purposes, it's non-deterministic.
Keeping the two things separate allows devs to decide which they need independently... which is why NUnit keeps them separate.
Using the standard attributes means that devs can rely on the NUnit documentation to tell them what the attributes do. If you implement your own attribute, you have to document both what it does by itself and what it does in the presence of the standard attributes... and as stated above, that's difficult to predict.
I know this isn't a real answer in SO terms, but it's not pure opinion either. There are real technical issues in providing the kind of solution you want. I'd love to see what people think of it in comparison with "how to" I'm going to post next.
See my prior answer first! If you really want to do this, here's the how-to...
In order to combine the action of two existing attributes, you need equivalent code to those two attributes.
In this case both are extremely simple and both have about the same amount of code. DescriptionAttribute is based on PropertyAttribute so some of its code is hidden. OrderAttribute has a bit more logic because it checks to make sure the order has not already been set. Ultimately, both of them have code that implements the IApplyToTest interface.
Because they are both simple, I would copy the code, in order to avoid relying on implementation details that could change. Start with the slightly more complete OrderAttribute. Change its name. Modify the ApplyToTest method to set the description. You're done!
It will look something like this, depending on the names you use for properties...
public void ApplyToTest(Test test)
{
if (!test.Properties.ContainsKey(PropertyNames.Order))
test.Properties.Set(PropertyNames.Order, Order);
test.Properties.Set(PropertyNames.Description, Description);
}
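Putting it together, the whole attribute might look roughly like this (a sketch assuming NUnit 3; the AttributeUsage settings are illustrative, and the [Test] part stays with the standard TestAttribute):
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class PreconditionAttribute : NUnitAttribute, IApplyToTest
{
    public int Order { get; }
    public string Description { get; }

    public PreconditionAttribute(int order, string description)
    {
        Order = order;
        Description = description;
    }

    public void ApplyToTest(Test test)
    {
        // Same guard OrderAttribute uses: don't overwrite an order that was already set.
        if (!test.Properties.ContainsKey(PropertyNames.Order))
            test.Properties.Set(PropertyNames.Order, Order);

        // Same effect as DescriptionAttribute.
        test.Properties.Set(PropertyNames.Description, Description);
    }
}
With this approach the test method still needs its own [Test] attribute; the sketch only merges Order and Description.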
A comment on what you tried...
There is no reason to think that creating an attribute instance in your code will do anything. NUnit has no way to know about those instances, and your attribute cannot modify the code so that the test magically has other attributes. The only way attributes communicate with NUnit is by having their interfaces (like IApplyToTest) called, and only attributes actually present in the code receive such a call.
I'm working on a semantic highlighting plugin for Visual Studio. Here you can see a web example.
The goal:
Acquiring all variables and creating different Classifications for every one of them.
The problem:
Getting the variables from the code without writing a C# lexer.
My current approach uses an ITagger. I use an ITagAggregator to get the tags of all the spans that get passed to the ITagger. Then I filter those and keep only the spans with the "identifier" classification, which includes variables, method names, class names, usings and properties.
public class Classifier : ITagger<ClassificationTag> {

    public IEnumerable<ITagSpan<ClassificationTag>> GetTags(NormalizedSnapshotSpanCollection spans) {
        ITextSnapshot snapshot = spans[0].Snapshot;

        // Keep only the spans the existing classifier marked as "identifier".
        var tags = _aggregator.GetTags(spans)
            .Where(span => span.Tag.ClassificationType.Classification.Equals("identifier"))
            .ToArray();

        foreach (var classifiedSpan in tags) {
            foreach (SnapshotSpan span in classifiedSpan.Span.GetSpans(snapshot)) {
                // Generate a classification based on the variable name.
                yield return new TagSpan<ClassificationTag>(span, new ClassificationTag(_classification));
            }
        }
    }
}
It would be a lot easier to use the built-in C# lexer to get a list of all variables bundled with some metadata. Is this data available for plugin development? If not, is there an alternative way I could acquire it?
The problem: Getting the variables from the code without writing a C# lexer.
Roslyn can do this: https://roslyn.codeplex.com/
There's even a Syntax Visualizer sample that might interest you. I also found an example using Roslyn to create a Syntax Highlighter.
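To give a flavor of the API, here is a minimal, self-contained sketch (assuming the Microsoft.CodeAnalysis.CSharp NuGet package; inside the tagger you would parse the snapshot's text rather than a hard-coded string):
using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynIdentifierDemo
{
    static void Main()
    {
        var code = "class C { void M() { int count = 0; count++; } }";
        SyntaxTree tree = CSharpSyntaxTree.ParseText(code);

        // A compilation provides a semantic model, so identifiers can be resolved to symbols.
        var compilation = CSharpCompilation.Create("Demo")
            .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location))
            .AddSyntaxTrees(tree);
        SemanticModel model = compilation.GetSemanticModel(tree);

        // Every identifier usage that resolves to a local variable is a candidate span to colorize.
        var locals = tree.GetRoot()
            .DescendantNodes()
            .OfType<IdentifierNameSyntax>()
            .Where(id => model.GetSymbolInfo(id).Symbol is ILocalSymbol);

        foreach (var id in locals)
            Console.WriteLine(id.Identifier.Text + " @ " + id.Span);
    }
}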
Visual Studio exposes that information as a code model.
Here is an example of how you can access a class, find an attribute on the class, and parse the attribute's arguments:
Accessing attribute info from DTE
Here is more information about code models:
http://msdn.microsoft.com/en-us/library/ms228763.aspx
Here's also the automation object model chart, which I've used quite a few times: http://msdn.microsoft.com/en-us/library/za2b25t3.aspx
Also, as said, Roslyn is indeed a possible option. Here is an example for VS2015 using Roslyn: https://github.com/tomasr/roslyn-colorizer/blob/master/RoslynColorizer/RoslynColorizer.cs
For building language tools, it may be better to use a parser generator for C#. The GOLD parsing system is one such toolkit; it can handle LALR grammars, has a .NET component-based engine you can use in your project, and can be integrated with any IDE. You can also find grammars for various programming languages, including C#.
I have a regular C# class called "vehicle" with properties like Name, NumberPlate, MaxSpeed, etc.
All the data for the class is stored in an SQLite database, where I have the tables "Car" and "Boat". The table columns have the same names as the class properties (however, there are more columns than class properties - vehicle is a more generic abstraction). At the moment, I have to assign the result of the query individually, one by one, like this:
while (await statement.StepAsync())
{
    myVehicle.Name = statement.Columns["Name"];
    //[...]
    decimal maxSpeed;
    decimal.TryParse(statement.Columns["MaxSpeed"], out maxSpeed);
    myVehicle.MaxSpeed = maxSpeed;
}
Additionally, I have to check whether some columns exist ("Car" and "Boat" have different sets of columns), which is more code than I'd like it to be.
I read about EntityFramework to map my db table to my class - but that seems overkill. My requirement is to map properties and columns that have the same name and ignore everything else.
Is there an "easy" (in dev time and lines of code) way to map my table columns to my class?
Thanks for reading!
The restrictions in phone 8 mean that a lot of the standard answers to this ("just use {some ORM / micro-ORM}") won't apply, since they don't work on phone 8. You can probably use reflection for a lot of this, but: reflection can be (relatively) slow, so it depends on how much data you will be processing. If it is occasional and light: fine, reflect away.
Runtime meta-programming (the tricks used by libraries like "dapper" in full .NET to make these things really fast) is not available on restricted runtimes, so if you want to avoid lots of boilerplate, that leaves build-time meta-programming. At the simplest, I wonder if you could use something like T4 to automate creating these methods for you as C#. There are also ways to use the reflection-emit API to construct assemblies (at build time) for phone 8, but that is a pretty hard-core route.
My thoughts:
- if the number of types here isn't huge, just write the code
- if you have a lot of types, or you just feel like it, consider a build-time code-generation / meta-programming step; you might even think "hmm, is this something I could make available to the community?"
- of course, the first thing to do is to check that such a thing doesn't already exist
There is a little helper which might fit your case. Basically, it takes a dictionary and tries its best to populate an object's properties using reflection. I didn't try it myself, though.
You'd simply do something like:
while (await statement.StepAsync())
{
myVehicle = DictionaryToObject<Car>(statement.Columns);
}
It might need some further work to get it running, but it could be a good start.
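For reference, the core of such a helper is small. A rough sketch (assuming statement.Columns can be exposed as a string-keyed dictionary; the helper name follows the usage above):
using System;
using System.Collections.Generic;
using System.Reflection;

static class MappingHelper
{
    // Copies matching dictionary entries onto the target's writable properties
    // via reflection; columns without a matching property are simply ignored.
    public static T DictionaryToObject<T>(IDictionary<string, object> columns) where T : new()
    {
        var result = new T();
        foreach (PropertyInfo property in typeof(T).GetRuntimeProperties())
        {
            object value;
            if (!property.CanWrite || !columns.TryGetValue(property.Name, out value) || value == null)
                continue;

            // Convert the raw column value (often a string) to the property type.
            property.SetValue(result, Convert.ChangeType(value, property.PropertyType));
        }
        return result;
    }
}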
I was wondering whether constantly reusing namespace names is valid per C# conventions/best practices.
I develop most of my programs in Java, where I would have a package for implementations, e.g.:
com.ajravindiran.jolt.game.items.sql
com.ajravindiran.jolt.game.users.sql
com.ajravindiran.jolt.events.impl
com.ajravindiran.jolt.tasks.impl
Let's talk about com.ajravindiran.jolt.game.items.sql, which is closest to my situation. I recently wrote a library that wraps MySQL Connector/Net into an OODBMS.
So I have an interface called ISqlDataObject which has the following members:
bool Insert(SqlDatabaseClient client);
bool Delete(SqlDatabaseClient client);
bool Update(SqlDatabaseClient client);
bool Load(SqlDatabaseClient client);
and it is used like this:
public class SqlItem : Item, ISqlDataObject
{
    public bool Load(SqlDatabaseClient client)
    {
        client.AddParameter("id", this.Id);
        DataRow row = client.ReadDataRow("SELECT * FROM character_items WHERE item_uid = @id;");
        this.Examine = (string)row["examine_quote"];
        ...
    }
    ...
}
It is called like this:
SqlItem item = new SqlItem(itemId);
GameEngine.Database.Load(item);
Console.WriteLine(item.Examine);
So I was wondering whether it's OK to put the SQL editions of the items into something like JoltEnvironment.Game.Items.Sql, or should I just keep them in JoltEnvironment.Game.Items?
Thanks in advance, AJ Ravindiran.
For naming conventions and rules, see MSDN's Framework Guidelines on Names of Namespaces.
That being said, that won't cover this specific issue:
So I was wondering whether it's OK to put the SQL editions of the items into something like JoltEnvironment.Game.Items.Sql, or should I just keep them in JoltEnvironment.Game.Items?
It is okay to do either, and the most appropriate one depends a bit on your specific needs.
If the game items will be used pervasively throughout the game, but the data access will only be used by a small portion, I would probably split it out into its own namespace (though probably not called Sql - I'd probably use Data or DataAccess, since you may eventually want to add non-SQL related information there, too).
If, however, you'll always use these classes along with the classes in the Items namespace, I'd probably leave them in a single namespace.
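To make the first option concrete, here is a sketch of how the split might look (namespace and member names are illustrative; ISqlDataObject and SqlDatabaseClient are the types from the question):
namespace JoltEnvironment.Game.Items
{
    // Game-facing model type, used pervasively throughout the game.
    public class Item
    {
        public int Id { get; set; }
        public string Examine { get; set; }
    }
}

namespace JoltEnvironment.Game.Items.Data
{
    // SQL-backed variant kept in its own namespace, so the data access code
    // only surfaces where it is actually needed.
    public class SqlItem : JoltEnvironment.Game.Items.Item, ISqlDataObject
    {
        public bool Insert(SqlDatabaseClient client) { /* ... */ return true; }
        public bool Delete(SqlDatabaseClient client) { /* ... */ return true; }
        public bool Update(SqlDatabaseClient client) { /* ... */ return true; }
        public bool Load(SqlDatabaseClient client) { /* ... */ return true; }
    }
}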
You're asking about naming conventions, and the answer is, it's really up to you.
I allow for extra levels of hierarchy in a namespace when there will be multiple implementations. In your case, the .Sql suffix is appropriate if there is some other storage mechanism that doesn't use SQL for queries - maybe XML/XPath. But if you don't have that, then the .Sql layer of naming isn't necessary.
At that point, though, I'm wondering why you would use {games,users} at the prior level. It feels like the namespace is more naturally
JoltEnvironment.Game.Storage
..And the Fully-qualified type names would be
JoltEnvironment.Game.Storage.SqlItem
JoltEnvironment.Game.Storage.SqlUser
and so on.
If a namespace, like JoltEnvironment.Game.Items, has only one or two classes, it seems like it ought to be collapsed into a higher level namespace.
What are you calling SQL editions? Versions of SQL Server, or versions of database connections? If the latter, I would do something like:
JoltEnvironment.Game.Items.DataAccess.SQLServer
JoltEnvironment.Game.Items.DataAccess.MySQL
JoltEnvironment.Game.Items.DataAccess.Oracle
etc...
If the former, I thought that ADO.NET would take care of that for you anyway, based on the provider, so everything under the same namespace would be ok.
I am working on a custom ArcGIS Desktop tool project and I would like to implement an automated linear referencing feature in it. To make a long story short, I would like to display problematic segments along a route and show their severity using a color code (say green, yellow, red, etc.). I know this is a pretty common scenario and have come to understand that the "right way" of accomplishing this task is to create a linear event table, which will allow me to assign different codes to certain route segments. Some of my colleagues know how to do it manually, but I can't seem to find any way to replicate this programmatically.
The current tool is written in C# and already performs all the needed calculations to determine the problematic areas. The problem mainly is that I don't know where to start since I don't know a lot about ArcObjects. Any code sample or suggestion is welcome (C# is preferred but C++, VB and others will surely help me anyway).
EDIT :
I'm trying to use the MakeRouteEventLayer tool but can't seem to get its preconditions met. The routes are hosted on an SDE server. So far, I am establishing a connection this way:
ESRI.ArcGIS.esriSystem.IPropertySet pConnectionProperties = new ESRI.ArcGIS.esriSystem.PropertySet();
ESRI.ArcGIS.Geodatabase.IWorkspaceFactory pWorkspaceFactory;
ESRI.ArcGIS.Geodatabase.IWorkspace pWorkspace;
ESRI.ArcGIS.Location.ILocatorManager pLocatorManager;
ESRI.ArcGIS.Location.IDatabaseLocatorWorkspace pDatabaseLocatorWorkspace;

// SDE connection properties (OSA = operating system authentication).
pConnectionProperties.SetProperty("server", "xxxx");
pConnectionProperties.SetProperty("instance", "yyyy");
pConnectionProperties.SetProperty("database", "zzzz");
pConnectionProperties.SetProperty("AUTHENTICATION_MODE", "OSA");
pConnectionProperties.SetProperty("version", "dbo.DEFAULT");

// Open the SDE workspace and obtain a locator workspace from it.
pWorkspaceFactory = new ESRI.ArcGIS.DataSourcesGDB.SdeWorkspaceFactory();
pWorkspace = pWorkspaceFactory.Open(pConnectionProperties, 0);
pLocatorManager = new ESRI.ArcGIS.Location.LocatorManager();
pDatabaseLocatorWorkspace = (ESRI.ArcGIS.Location.IDatabaseLocatorWorkspace)pLocatorManager.GetLocatorWorkspace(pWorkspace);
Now I am stuck trying to prepare everything for MakeRouteEventLayer's constructor. I can't seem to find how I'm supposed to get the Feature Layer to pass as the Input Route Features. Also, I don't understand how to create an event table properly. I can't seem to find any example relating to what I am trying to accomplish aside from this one, which I don't understand since it isn't documented/commented and the data types are not mentioned.
I'm not entirely certain what it is you want to do. If you want to get Linear Referencing values or manipulate them directly in a feature class that already has linear referencing defined, that's pretty straight forward.
IFeatureClass fc = ....;
IFeature feature = fc.GetFeature(...);
IMSegmentation3 seg = (IMSegmentation3)feature;
... blah ...
If you need to create a feature class with linear referencing, you should start with the "Geoprocessing" tools in the ArcToolbox. If the out-of-the-box tools can do most of what you need, this will minimize your coding.
I would strongly recommend trying to figure what you need to do with ArcMap if at all possible... then backing out the ArcObjects.
Linear Referencing API
Linear Referencing Toolbox
Understanding Linear Referencing