I've recently started using the DynamicQuery API, and it quickly became apparent that it has numerous limitations. I've found at least one improvement online: support for enum arguments, but it's pretty clear that this API is not actively maintained (if at all).
In case I'm wrong and there is somebody maintaining an improved version - please post a link!
Alternatively, a separate, active project with similar goals would also be of interest.
(Clarification: I'm looking to parse strings at runtime.)
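For reference, the kind of usage I mean is roughly this (assuming the System.Linq.Dynamic sample library that DynamicQuery ships as; the Person class and ApplyFilter method are made up purely for illustration):

using System.Linq;
using System.Linq.Dynamic; // the DynamicQuery / Dynamic LINQ sample library

// Hypothetical data class, for illustration only.
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class FilterExample
{
    // The predicate arrives as an ordinary string and is parsed at runtime,
    // e.g. "Age >= 18 && Name.StartsWith(\"A\")".
    public static IQueryable<Person> ApplyFilter(IQueryable<Person> people, string predicate)
    {
        return people.Where(predicate);
    }
}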
In the end we just implemented some of the features we missed by editing the source code. We added support for passing in a static class as an "external" (DynamicQuery's terminology), for calling methods on that static class, and for type inference when any such methods are generic.
I suspect there isn't much demand for this, so I didn't bother making it available anywhere. Let me know if you think otherwise.
Edit: due to a request, DynamicQuery Enhanced is now available on BitBucket. Expect to be underwhelmed; take a look at this Info and this list of tweaks.
I've seen PredicateBuilder mentioned before (here on Stack Overflow) as an alternative. I haven't used it myself, but it might be useful to you.
Here's my question, how does one "hack" the CustomCommand in Enterprise Architect's API to figure out what it's capabilities are? Here's what I'm currently using it for, which seems to be an accepted (by the community) and usable function:
repository.CustomCommand("Repository", "ImportRefData", xml);
I want to see what else I can do with it, namely some exporting of said reference data.
Also, while Sparx cannot officially support this functionality since it's undocumented, what are the odds that this command will stay functional in future versions of EA? Do they have a history of breaking unofficial code like this?
Thanks,
Alex
The answer is: you can't. The few commands I documented came from postings on the Sparx forum, which ultimately originated from Sparx support itself. I remember reading a post from someone who knew about one of the commands and asked for more information, but Sparx did not reveal more than was already known. I tried to find the strings in the EXE, but to no avail.
Since the function has been there for quite a few years and Sparx is very reluctant to make substantial changes to the API, it will likely not change, so it's safe to use the function in the future. IIRC, Sparx itself recommended its use in certain cases, but only on the forum...
The whole point of GhostDoc is obviously to document your code, and it asks you to name your methods well to do so. With well-named methods, however, shouldn't they theoretically be useful enough to be considered documentation on their own?
I just want to hear some pros/cons from current users, as I don't want to download it and clutter up my code with unnecessary, duplicative documentation.
While self-documenting code helps, if it were all you needed nobody would ever consult MSDN (which is, by the way, an expanded, language-merged HTML form of the XML documentation in the .NET libraries themselves).
XML-doc comments allow you to describe classes, methods, parameters, and other members more verbosely than you'd ever want to do with an identifier. You can recommend best practices, discourage incorrect or "hack-y" uses of your code, detail what could go wrong and why, and so on. This information is available even when your source code isn't (provided you generate the XML documentation file when you compile), and is invaluable when your code isn't quite as self-documenting as you think (many things you might consider obvious are only obvious because you think a certain way, and not everybody will think the same way).
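As a rough illustration (the class here is hypothetical), the kind of information you can attach goes well beyond what a name alone can carry:

using System.Collections.Generic;

/// <summary>
/// Caches lookup results for a single request. Not thread-safe.
/// </summary>
/// <remarks>
/// Create one instance per request; sharing an instance across threads will
/// produce stale results. For application-wide caching, use a different mechanism.
/// </remarks>
public class RequestCache
{
    private readonly Dictionary<string, object> items = new Dictionary<string, object>();

    /// <summary>Gets a cached value, or null if the key has not been stored.</summary>
    /// <param name="key">The cache key; comparison is case-sensitive.</param>
    /// <returns>The cached value, or null.</returns>
    public object Get(string key)
    {
        object value;
        return items.TryGetValue(key, out value) ? value : null;
    }
}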
I tried to Google this but didn't find a decent tutorial with sample code.
Has anyone used typed DataSets/DataTables in C#?
Is this a .NET 3.5-and-above feature?
To answer the second parts of the question (not the "how to..." from the title, but the "does anyone..." and "is it...") - the answer would be a yes, but a yes with a pained expression on my face. For new code, I would strongly recommend looking at a class-based model; pick your poison between the many ORMs, micro-ORMs, and raw ADO.NET. DataTable itself does still have a use, in particular for processing and storing unpredictable data (where you have no idea what the schema is in advance). By the time you are talking about typed data-sets, I would suggest you obviously know enough about the type that this no longer applies, and an object-model is a very valid alternative.
It is still a supported part of the framework, and it is still in use as a technology. It has some nice features like the diff-set. However, most (if not all) of that is also available against an object-based design, with classes and properties (without the added overhead of the DataTable abstraction).
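For example, the diff-set behaviour looks roughly like this with a plain DataTable (the table and columns here are made up for illustration):

using System.Data;

var table = new DataTable("Orders");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Status", typeof(string));

table.Rows.Add(1, "Open");
table.AcceptChanges();                 // current contents become the "original" version

table.Rows[0]["Status"] = "Shipped";   // this edit is tracked against the original

// GetChanges returns only the modified rows - the "diff-set"
DataTable diff = table.GetChanges(DataRowState.Modified);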
MSDN has guidance. It really hasn't changed since typed datasets were first introduced.
http://msdn.microsoft.com/en-us/library/esbykkzb(v=VS.100).aspx
There are tons of videos available here: http://www.learnvisualstudio.net/series/aspdotnet_2_0_data_access_and_databinding/
And I found one more tutorial here: http://www.15seconds.com/issue/031223.htm
Sparingly... unless you need to know them to maintain legacy software, learn an ORM or two instead, particularly in conjunction with LINQ.
Some of my colleagues use them; the software I work on doesn't use them at all, on account of some big-mouth developer getting his way again...
We have a Visual Studio 2010 solution that contains several C# projects in accordance with Jeffrey Palermo's Onion Architecture pattern (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/). We want to add the Visual Studio IntelliSense comments using the triple slashes, but we want to see if anyone knows of best practices on how far to take this. Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Or is it better to use these comments in a more limited fashion, and if so, what are the important objects to apply the IntelliSense comments to?
Add them to any methods exposed in public APIs, that way you can give the caller all the information they need when working with a foreign interface. For example, which exceptions the method may throw and other remarks.
It's still beneficial to add these kinds of comments to private methods; I do it anyway to be consistent. It also helps if you plan on generating documentation from the comments.
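For instance, documenting exceptions on a public method might look like this (the method and types are hypothetical, just to show the shape of it):

using System;
using System.IO;

public static class ConfigLoader
{
    /// <summary>Loads the configuration file for the given environment.</summary>
    /// <param name="path">Full path to the configuration file.</param>
    /// <returns>The raw configuration text.</returns>
    /// <exception cref="ArgumentNullException"><paramref name="path"/> is null.</exception>
    /// <exception cref="FileNotFoundException">The file does not exist.</exception>
    /// <remarks>The caller is expected to handle the missing-file case.</remarks>
    public static string Load(string path)
    {
        if (path == null)
            throw new ArgumentNullException("path");
        return File.ReadAllText(path); // throws FileNotFoundException if missing
    }
}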
While, technically, there is such a thing as too much documentation, 99.99999% of the time this exception doesn't apply.
Document everything as much as you can. Formal, informal, stream of thought... every scrap of comments will help some poor soul who inherits your code or has to interface with it.
(It's like the old rule "The error may be in the Compiler and not your code. Compilers have errors too. This is not one of those times.")
Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Yes.
Or is it better to use these comments in a more limited fashion, and if so what are the important objects to apply the IntelliSense comments to? If you want to. Apply them to any function you write, but not to what VS auto-generates.
I've seen limited "IntelliSense" comments... but extensive in-code comments that follow. So long as the "content" is there, life will be good. I generally include a brief blurb about each function in the IntelliSense comments, but put the majority of the "here's why I did this" in the function itself and in dead-tree documents.
I agree with fletcher. Start with public facing classes and methods and then work your way down into private code. If you were starting from scratch I would highly recommend adding the XML comments to all code for your own convenience, but in this case starting with public methods and then updating other classes whenever you go in to update them is a good solution.
Relating to another question I asked yesterday with regards to logging, I was introduced to TraceListeners, which I'd never come across before and sorely wish I had. I can't count the number of times I've needlessly written loggers to do this, and nobody ever pointed this out or asked me why I didn't use the built-in tools. This leads me to wonder what other features I've overlooked and written into my applications needlessly because of features of .NET that I'm unaware of.
Does anyone else have features of .NET that would've completely changed the way they wrote applications or components of their applications, had they only known that .NET already had a built-in means of supporting it?
It would be handy if other developers posted scenarios where they frequently come across components or blocks of code that, in hindsight, were completely needless and would never have been written had the original developer only known of a built-in .NET component, such as the TraceListeners I noted above.
This doesn't necessarily include newly added features of 3.5 per se, but could if pertinent to the scenario.
Edit - As per previous comments, I'm not really interested in the "Hidden Features" of the language, which I agree have been documented before. I'm looking for often-overlooked framework components that, through my own (or the original developer's) ignorance, led to components/classes/methods being written or rewritten needlessly.
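For anyone who, like me, hadn't come across them: the kind of built-in support I mean is roughly this (file name and message are just for illustration):

using System.Diagnostics;

// Attach a listener once at startup; every Trace.WriteLine after this point
// goes to the file as well as to any other configured listeners.
Trace.Listeners.Add(new TextWriterTraceListener("app.log"));
Trace.AutoFlush = true;

Trace.WriteLine("Application started");

The same listeners can also be wired up in the app.config instead of in code, so the output targets can change without recompiling.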
The yield keyword changed the way I wrote code. It is an AMAZING little keyword that has a ton of implications for writing really great code.
Yield gives you deferred execution over the data, which lets you string together several operations while only ever traversing the list once. In the following example, with yield, you would only ever create one list and traverse the data set once.
FindAllData().Filter("keyword").Transform(MyTransform).ToList()
The yield keyword is what the majority of the LINQ extension methods are built on, and it's what gives LINQ its performance characteristics.
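To illustrate (this Filter is a simplified stand-in, not the actual method from the snippet above), an iterator method like this produces items one at a time as the caller enumerates:

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // A simplified, hypothetical Filter built on yield: items are produced one
    // at a time as the caller enumerates, so a chain of such operations
    // traverses the underlying data only once.
    public static IEnumerable<T> Filter<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (predicate(item))
                yield return item; // deferred: nothing executes until enumeration
        }
    }
}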
Also:
Hidden Features of ASP.NET
Hidden Features of VB.NET?
Hidden Features of F#
The most frequently overlooked feature I came across is the ASP.NET Health Monitoring system. A decent overview is here: https://web.archive.org/web/20210305134220/https://aspnet.4guysfromrolla.com/articles/031407-1.aspx
I know I personally recreated it on several apps before I actually saw anything in a book or on the web about it.
I spoke to someone at a conference one time and asked about it. They told me the developer at MS had bad communication skills so it was largely left undocumented :)
I re-wrote the System.Net.WebClient class a while back. I was doing some web scraping and started my own class to wrap HttpWebRequest/HttpWebResponse, then discovered WebClient part way through. I finished it anyway because I needed functionality that WebClient does not provide (control of cookies and the user agent).
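The sort of thing I needed looks roughly like this (names are made up; the point is that HttpWebRequest exposes the cookie container and user agent directly):

using System.IO;
using System.Net;

public static class Scraper
{
    // A rough sketch of the wrapper I mean, not the actual class I wrote.
    public static string Download(string url, CookieContainer cookies)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.CookieContainer = cookies;    // cookies persist across requests
        request.UserAgent = "MyScraper/1.0";  // custom user agent string

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}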
Something I'm thinking about re-writing is the String.Format() method. I want to reflect the code used to parse the input string and mimic it to build my own "CompiledFormat" class that you can use in a loop without having to re-parse your format string with each iteration. The result will allow efficient code like this:
var PhoneFormat = new CompiledFormat("({0}){1}-{2}x{3}");
foreach (PhoneNumber item in MyPhoneList)
{
    Console.WriteLine(PhoneFormat.Apply(item.AreaCode, item.Prefix, item.Number, item.Extension));
}
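For what it's worth, a rough sketch of the idea (a simplified, hypothetical implementation that ignores alignment and format specifiers, which the real String.Format handles):

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;

public sealed class CompiledFormat
{
    // Each segment is either a literal string or a boxed argument index (int).
    private readonly List<object> segments = new List<object>();

    public CompiledFormat(string format)
    {
        // Parse once: split the format string into literals and {n} placeholders.
        foreach (Match m in Regex.Matches(format, @"\{(\d+)\}|([^{}]+)"))
        {
            if (m.Groups[1].Success)
                segments.Add(int.Parse(m.Groups[1].Value)); // argument index
            else
                segments.Add(m.Groups[2].Value);            // literal text
        }
    }

    // Apply re-uses the pre-parsed segments, so nothing is re-parsed per call.
    public string Apply(params object[] args)
    {
        var sb = new StringBuilder();
        foreach (object segment in segments)
        {
            if (segment is int)
                sb.Append(args[(int)segment]);
            else
                sb.Append((string)segment);
        }
        return sb.ToString();
    }
}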
Update:
This prompted me to finally go do it. See the results here: Performance issue: comparing to String.Format