Batch refactor "using" statement declarations in C# across multiple files

I have a bunch of C# code I inherited that has "using" statement declarations like this:
using Foo;
using NS1 = Bar.x.y.z;
and I've been asked to make our codebase consistent with regard to namespacing. The policy is simply that:
1. some namespaces should always be fully qualified (no aliases) - for example, things inside "Foo" above should always be fully qualified.
2. some namespaces like Bar.x.y.z should always be accessed through a specific using alias ("NS1" in the above example).
To illustrate what is desired, if this is the BEFORE code:
using Foo;
int x = SomeClass1.SomeStaticMethodMethod(1); // where SomeClass1 is in "Foo"
var y = new Bar.x.y.z.SomeClass2();
this is what is desired AFTER:
using NS1 = Bar.x.y.z;
int x = Foo.SomeClass1.SomeStaticMethodMethod(1); // where SomeClass1 is in "Foo"
var y = new NS1.SomeClass2();
Of course I can do all this manually, but I have a lot of files to fix. I'm looking for a tool that can do this over many files (100s of .cs files). I even have the latest version of ReSharper (5.1), which doesn't seem to let me do this. (Actually, ReSharper is in fact causing more problems, because it loves adding using statements I don't want.)
Are there tools or techniques I can use to simplify my task? I am allowed to purchase more dev tools, so commercial tools are an option for me.

I know it probably is not ideal, but you could use DXCore from DevExpress to write a plugin similar to those at
http://code.google.com/p/dxcorecommunityplugins/
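If you end up writing a small tool instead, a possible starting point is the Roslyn CTP API that appears in the answers further down this page. This is a hedged sketch, not a finished tool: the policy sets, the source folder, and the class name are hypothetical, and it only reports offending using directives - rewriting the identifiers that relied on a removed using is the hard part left to the tool.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Roslyn.Compilers.CSharp;

class UsingPolicyChecker
{
    static void Main()
    {
        // Policy 1 (hypothetical set): these namespaces must never be imported.
        var banned = new HashSet<string> { "Foo" };
        // Policy 2 (hypothetical map): these namespaces must be imported under a fixed alias.
        var requiredAlias = new Dictionary<string, string> { { "Bar.x.y.z", "NS1" } };

        foreach (var path in Directory.GetFiles(@"C:\code", "*.cs", SearchOption.AllDirectories))
        {
            var root = SyntaxTree.ParseCompilationUnit(File.ReadAllText(path)).GetRoot();
            foreach (var u in root.DescendantNodes().OfType<UsingDirectiveSyntax>())
            {
                var ns = u.Name.GetFullText().Trim();
                var directive = u.GetFullText().Trim();
                if (banned.Contains(ns))
                    Console.WriteLine("{0}: remove '{1}' and fully qualify its uses", path, directive);
                string alias;
                if (requiredAlias.TryGetValue(ns, out alias) && !directive.StartsWith("using " + alias))
                    Console.WriteLine("{0}: import '{1}' as 'using {2} = {1};'", path, ns, alias);
            }
        }
    }
}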

Related

Is it possible to create a C# compile time Method? [duplicate]

I've been puzzling about this for a while and I've looked around a bit, unable to find any discussion about the subject.
Let's assume I wanted to implement a trivial example, like a new looping construct: do..until,
written very similarly to do..while:
do {
    //Things happen here
} until (i == 15)
This could be transformed into valid C# like so:
do {
    //Things happen here
} while (!(i == 15))
This is obviously a simple example, but is there any way to add something of this nature? Ideally as a Visual Studio extension to enable syntax highlighting etc.
Microsoft proposes the Roslyn API, an implementation of the C# compiler with a public API. It contains individual APIs for each compiler pipeline stage: syntax analysis, symbol creation, binding, and MSIL emission. You can provide your own implementation of the syntax parser, or extend the existing one, in order to get a C# compiler with any features you would like.
Roslyn CTP
Let's extend the C# language using Roslyn! In my example I'm replacing a do-until statement with the corresponding do-while:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Roslyn.Compilers.CSharp;

namespace RoslynTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var code = @"
using System;
class Program {
public void My() {
var i = 5;
do {
Console.WriteLine(""hello world"");
i++;
}
until (i > 10);
}
}
";
            // Parse the input code into a SyntaxTree object.
            var syntaxTree = SyntaxTree.ParseCompilationUnit(code);
            var syntaxRoot = syntaxTree.GetRoot();
            // Here we will keep all nodes to replace
            var replaceDictionary = new Dictionary<DoStatementSyntax, DoStatementSyntax>();
            // Look for do-until statements in all descendant nodes
            foreach (var doStatement in syntaxRoot.DescendantNodes().OfType<DoStatementSyntax>())
            {
                // The "until" token is treated as an identifier by the C# compiler;
                // it doesn't know that in our case it is a keyword.
                var untilNode = doStatement.Condition.ChildNodes().OfType<IdentifierNameSyntax>()
                    .FirstOrDefault(_node => _node.Identifier.ValueText == "until");
                // The condition is treated as an argument list
                var conditionNode = doStatement.Condition.ChildNodes().OfType<ArgumentListSyntax>().FirstOrDefault();
                if (untilNode != null && conditionNode != null)
                {
                    // Replace the identifier with the correct while keyword and condition
                    var whileNode = Syntax.ParseToken("while");
                    var condition = Syntax.ParseExpression("(!" + conditionNode.GetFullText() + ")");
                    var newDoStatement = doStatement.WithWhileKeyword(whileNode).WithCondition(condition);
                    // Accumulate all replacements
                    replaceDictionary.Add(doStatement, newDoStatement);
                }
            }
            syntaxRoot = syntaxRoot.ReplaceNodes(replaceDictionary.Keys, (node1, node2) => replaceDictionary[node1]);
            // Output the preprocessed code
            Console.WriteLine(syntaxRoot.GetFullText());
        }
    }
}
///////////
//OUTPUT://
///////////
// using System;
// class Program {
// public void My() {
// var i = 5;
// do {
// Console.WriteLine("hello world");
// i++;
// }
//while(!(i > 10));
// }
// }
Now we can compile the updated syntax tree using the Roslyn API, or save syntaxRoot.GetFullText() to a text file and pass it to csc.exe.
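A hedged sketch of that compile step follows; these API names are from the Roslyn CTP era and changed between releases, so treat this as an outline only, not a definitive recipe:
// Re-parse the rewritten text and compile it in-process (CTP-era API, may differ by release)
var newTree = SyntaxTree.ParseCompilationUnit(syntaxRoot.GetFullText());
var compilation = Compilation.Create("Preprocessed",
    syntaxTrees: new[] { newTree },
    references: new[] { new AssemblyFileReference(typeof(object).Assembly.Location) });

using (var stream = System.IO.File.Create("Preprocessed.exe"))
{
    var result = compilation.Emit(stream);
    foreach (var diagnostic in result.Diagnostics)
        Console.WriteLine(diagnostic);
}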
The big missing piece is hooking into the pipeline; otherwise you're not much further along than what .Emit provided. Don't misunderstand, Roslyn brings a lot of great things, but for those of us who want to implement preprocessors and metaprogramming, it seems for now that was not on the plate. You can implement "code suggestions" or what they call "issues"/"actions" as an extension, but this is basically a one-off transformation of code that acts as a suggested inline replacement, and is not the way you would implement a new language feature. This is something you could always do with extensions, but Roslyn makes the code analysis/transformation tremendously easier.
From what I've read of comments from Roslyn developers on the CodePlex forums, providing hooks into the pipeline has not been an initial goal. All of the new C# language features they've provided in the C# 6 preview involved modifying Roslyn itself, so you'd essentially need to fork Roslyn. They have documentation on how to build Roslyn and test it with Visual Studio. This would be a heavy-handed way to fork Roslyn and have Visual Studio use it. I say heavy-handed because anyone who wants to use your new language features must replace the default compiler with yours. You can see where this would begin to get messy.
Building Roslyn and replacing Visual Studio 2015 Preview's compiler with your own build
Another approach would be to build a compiler that acts as a proxy to Roslyn. There are standard APIs for building compilers that VS can leverage. It's not a trivial task though. You'd read in the code files, call upon the Roslyn APIs to transform the syntax trees and emit the results.
The other challenge with the proxy approach is going to be getting intellisense to play nicely with any new language features you implement. You'd probably have to have your "new" variant of C#, use a different file extension, and implement all the APIs that Visual Studio requires for intellisense to work.
Lastly, consider the C# ecosystem and what an extensible compiler would mean. Let's say Roslyn did support these hooks, and it was as easy as providing a NuGet package or a VS extension to support a new language feature. All of your C# leveraging the new do-until feature is essentially invalid C#, and will not compile without your custom extension. If you go far enough down this road with enough people implementing new features, very quickly you will find incompatible language features. Maybe someone implements a preprocessor macro syntax, but it can't be used alongside someone else's new syntax, because they happened to use similar syntax to delineate the beginning of the macro. If you leverage a lot of open source projects and find yourself digging into their code, you will encounter a lot of strange syntax that requires you to sidetrack and research the particular language extensions that project is leveraging. It could be madness. I don't mean to sound like a naysayer, as I have a lot of ideas for language features and am very interested in this, but one should consider the implications, and how maintainable it would be. Imagine if you got hired to work somewhere and they had implemented all kinds of new syntax that you had to learn, and without those features having been vetted the same way C#'s features have, you can bet some of them would be poorly designed and implemented.
You can check www.metaprogramming.ninja (I am the developer); it provides an easy way to accomplish language extensions (I provide examples for constructors, properties, even JS-style functions) as well as full-blown grammar-based DSLs.
The project is open source as well. You can find documentation, examples, etc. on GitHub.
Hope it helps.
You can't create your own syntactic abstractions in C#, so the best you can do is to create your own higher-order function. You could create an Action extension method:
public static void DoUntil(this Action act, Func<bool> condition)
{
    do
    {
        act();
    } while (!condition());
}
Which you can use as:
int i = 1;
new Action(() => { Console.WriteLine(i); i++; }).DoUntil(() => i == 15);
although it's questionable whether this is preferable to using a do..while directly.
I found the easiest way to extend the C# language is to use the T4 text processor to preprocess my source. The T4 script reads my C# and calls a Roslyn-based parser, which generates new source with custom generated code.
At build time, all my T4 scripts are executed, thus effectively working as an extended preprocessor.
In your case, the non-compliant C# code could be entered as follows:
#if ExtendedCSharp
do
#endif
{
Console.WriteLine("hello world");
i++;
}
#if ExtendedCSharp
until (i > 10);
#endif
This would allow syntax checking of the rest of your (C#-compliant) code during development of your program.
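A skeleton of such a T4 script might look like this (a hedged sketch: RewriteDoUntil, the assembly path, and the input file name are hypothetical placeholders, not a real library):
<#@ template language="C#" hostspecific="true" #>
<#@ assembly name="$(SolutionDir)Tools\MyPreprocessor.dll" #><#-- hypothetical Roslyn-based rewriter --#>
<#@ output extension=".g.cs" #>
<#
    // Read the file containing the ExtendedCSharp blocks and push it through a
    // Roslyn-based rewriter like the one shown earlier in this page.
    var source = System.IO.File.ReadAllText(this.Host.ResolvePath("Program.extended.cs"));
#>
<#= MyPreprocessor.RewriteDoUntil(source) #>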
No, there is no way to achieve what you're talking about.
What you're asking for is defining a new language construct, which means new lexical analysis, a new language parser, a semantic analyzer, and compilation and optimization of the generated IL.
What you can do in such cases is make use of macros/functions:
public bool Until(int val, int check)
{
    return !(val == check);
}
and use it like:
do {
    //Things happen here
} while (Until(i, 15));

How to access project code meta data?

In my VSPackage I need to replace reference to a property in code with its actual value. For example
public static void Main(string[] args) {
    Console.WriteLine(Resource.HelloWorld);
}
What I want is to replace "Resource.HelloWorld" with its actual value - that is, find class Resource and get value of its static property HelloWorld. Does Visual Studio expose any API to handle code model of the project? It definitely has one, because this is very similar to common task of renaming variables. I don't want to use reflection on output assembly, because it's slow and it locks the file for a while.
There is no straightforward way to do this that I know of. Reliably getting an AST out of Visual Studio (and changes to it) has always been a big problem. Part of the goal of the Roslyn project is to create a unified way of doing this, because many tool windows had their own way of doing this sort of stuff.
There are four ways to do this:
Symbols
FileCodeModel + CodeDOM
Roslyn AST
Unexplored Method
Symbols
I believe most tool windows, such as the Code View and things like Code Element Search, use the symbols created from a compiled build. This is not ideal, as it is a little more heavyweight and hard to keep in sync. You'd have to cache symbols to make this not slow. Using Reflector, you can see how the Code View implements this.
This approach uses private assemblies. The code for getting the symbols looks something like this:
var compilerHost = new IDECompilerHost();
var typeEnumerator = from compiler in compilerHost.Compilers.Cast<IDECompiler>()
                     from type in compiler.GetCompilation().MainAssembly.Types
                     select new Tuple<IDECompiler, CSharpType>(compiler, type);

foreach (var typeTuple in typeEnumerator)
{
    Trace.WriteLine(typeTuple.Item2.Name);
    var csType = typeTuple.Item2;
    foreach (var loc in csType.SourceLocations)
    {
        var file = loc.FileName.Value;
        var line = loc.Position.Line;
        var charPos = loc.Position.Character;
    }
}
FileCodeModel + CodeDOM
You could try using the EnvDTE service to get the FileCodeModel associated with a code document. This will let you get classes and methods, but it does not support getting the method body. You're messing with buggy COM. This is ugly because a COM object reference to a CodeFunction or CodeClass can get invalidated without you knowing it, meaning you'd have to keep your own mirror.
Roslyn AST
This provides the same capabilities as both the FileCodeModel and Symbols approaches. I've been playing with this and it's actually not too bad.
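For example, here is a minimal sketch of locating the Resource.HelloWorld reference with the CTP syntax API (resolving the property's actual value would then go through the symbol layer; the file name is a placeholder):
var root = SyntaxTree.ParseCompilationUnit(System.IO.File.ReadAllText("Program.cs")).GetRoot();
foreach (var access in root.DescendantNodes().OfType<MemberAccessExpressionSyntax>())
{
    // Match the member access textually; a real tool would bind symbols instead
    if (access.GetFullText().Trim() == "Resource.HelloWorld")
        Console.WriteLine("Found reference at {0}", access.Span);
}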
Unexplored Method
You could try getting the underlying LanguageServiceProvider that is associated with the Code Document. But this is really difficult to pull off, and leaves you with many issues.

why would someone fully qualify an Object in code when there is a using statement declared?

I am looking at code which has the correct using statements declared, yet the objects within the code are fully qualified when they don't need to be, given that the using statement is declared.
Is there a reason why the objects are fully qualified while the using statement is still declared?
Perhaps the using statement was added after the objects were fully qualified? Another less likely reason is that there are namespace conflicts with the objects in use.
Why some people (like me) do it intentionally:
When using a relatively rare class, it provides a lot of information about the class. And I like putting information in the code. Consider:
System.Runtime.Serialization.Formatters.Soap.SoapFormatter formatter =
    new SoapFormatter(); // .NET 2
or
var formatter = new // .NET 3
    System.Runtime.Serialization.Formatters.Soap.SoapFormatter();
I am fully aware of the inconsistency, and that the 'when to use' is kind of arbitrary. But for somebody reading this code, a lot of questions are answered before they come up.
IntelliSense could answer the same questions, but it is not always available.
Sometimes namespaces 'conflict' - there are classes of the same name in multiple namespaces and fully-qualifying them distinguishes them.
It may be because there are conflicting names within two imported namespaces.
Say, A.A has a type called Foo (A.A.Foo), and B.B has a type called Foo (B.B.Foo). If you do this:
using A.A;
using B.B;
// class definitions... etc
var x = new Foo(); // which foo?
You could do this if you don't want to fully qualify it:
using A.A;
using B.B;
using AFoo = A.A.Foo;
using BFoo = B.B.Foo;
// class definitions... etc
var x = new AFoo();
Why not simply remove the using B.B; statement? Well, suppose you're also using the types B.B.Bar, A.A.FooBar, B.B.Qux and A.A.Quux. You would want to keep the using statements then.
There are lots of reasons. A few might be:
People tend to lean on Intellisense to find the classes they are looking for.
Code has been pasted from elsewhere.
Bad habits die hard.
A good question is whether there is a tool in Visual Studio to eliminate this redundancy. I am not aware of one (probably something like CodeRush could do it), but perhaps someone will comment here with one.
When you use some of the built-in refactorings, they sometimes fully qualify the object. This is safer, as it avoids compiler confusion.
I can think of a few reasons:
there are multiple using statements, either nested or included on the same line, and there is some ambiguity as to which object the method or property belongs to
the using block is particularly long and the statement itself might be off screen. Qualifying the methods and properties can make it more readable for other programmers.
there might be some very junior programmers on the project who might not quite understand the using statement and the programmer might be trying to make it more readable for them.
One reason this can happen: auto-generated code. (like the Windows Forms designer-generated code)

How will you use the C# 4 dynamic type?

C# 4 will contain a new dynamic keyword that will bring dynamic language features into C#.
How do you plan to use it in your own code, what pattern would you propose ? In which part of your current project will it make your code cleaner or simpler, or enable things you could simply not do (outside of the obvious interop with dynamic languages like IronRuby or IronPython)?
PS: If you don't like this C# 4 addition, please avoid bloating the comments with negativity.
Edit: refocusing the question.
The classic usages of dynamic are well known to most Stack Overflow C# users. What I want to know is whether you can think of specific new C# patterns where dynamic can be usefully leveraged without losing too much of the C# spirit.
Wherever old-fashioned reflection is used now and code readability has been impaired. And, as you say, some Interop scenarios (I occasionally work with COM).
That's pretty much it. If dynamic usage can be avoided, it should be avoided. Compile time checking, performance, etc.
A few weeks ago, I remembered this article. When I first read it, I was frankly appalled. But what I hadn't realised is that I didn't know how to even use an operator on some unknown type. I started wondering what the generated code would be for something like this:
dynamic c = 10;
int b = c * c;
Using regular reflection, you can't use defined operators. It generated quite a bit of code, using some stuff from a Microsoft namespace. Let's just say the above code is a lot easier to read :) It's nice that it works, but it was also very slow: about 10,000 times slower than a regular multiplication (doh), and about 100 times slower than an ICalculator interface with a Multiply method.
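For comparison, the statically typed baseline mentioned above would look something like this (the ICalculator name comes from the answer; its exact shape is assumed):
interface ICalculator { int Multiply(int a, int b); }

class Calculator : ICalculator
{
    public int Multiply(int a, int b) { return a * b; }
}

// Statically bound call - no runtime binder or call-site caching involved:
ICalculator calc = new Calculator();
int product = calc.Multiply(10, 10);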
Edit - generated code, for those interested:
if (<Test>o__SiteContainer0.<>p__Sitea == null)
<Test>o__SiteContainer0.<>p__Sitea =
CallSite<Func<CallSite, object, object, object>>.Create(
new CSharpBinaryOperationBinder(ExpressionType.Multiply,
false, false, new CSharpArgumentInfo[] {
new CSharpArgumentInfo(CSharpArgumentInfoFlags.None, null),
new CSharpArgumentInfo(CSharpArgumentInfoFlags.None, null) }));
b = <Test>o__SiteContainer0.<>p__Site9.Target(
<Test>o__SiteContainer0.<>p__Site9,
<Test>o__SiteContainer0.<>p__Sitea.Target(
<Test>o__SiteContainer0.<>p__Sitea, c, c));
The dynamic keyword is all about simplifying the code required for two scenarios:
C# to COM interop
C# to dynamic language (JavaScript, etc.) interop
While it could be used outside of those scenarios, it probably shouldn't be.
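For instance, late-bound COM automation collapses to something like this with dynamic (a hedged sketch; it assumes Excel is installed, and uses a runtime ProgID lookup rather than interop assemblies):
Type excelType = Type.GetTypeFromProgID("Excel.Application");
dynamic excel = Activator.CreateInstance(excelType);
excel.Visible = true;                // no interop casts, no InvokeMember strings
excel.Workbooks.Add();
excel.Cells[1, 1].Value = "Hello";   // members are resolved at runtime by the DLR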
Recently I have blogged about dynamic types in C# 4.0, and among other things I mentioned some of their potential uses as well as some pitfalls. The article itself is a bit too big to fit in here, but you can read it in full at this address.
As a summary, here are a few useful use cases (besides the obvious one of interoperating with COM libraries and dynamic languages like IronPython):
reading random XML or JSON into a dynamic C# object. The .NET framework contains classes and attributes for easily deserializing XML and JSON documents into C# objects, but only if their structure is static. If the structure is dynamic and you need to discover the fields at runtime, the documents can only be deserialized into dynamic objects. .NET does not offer this functionality by default, but it can be done with 3rd-party tools like JsonFx or DynamicJson
return anonymous types from methods. Anonymous types have their scope constrained to the method where they are defined, but that can be overcome with the help of dynamic. Of course, this is a dangerous thing to do, since you will be exposing objects with a dynamic structure (and no compile-time checking), but it might be useful in some cases. For example, the following method reads only two columns from a DB table using LINQ to SQL and returns the result:
public static List<dynamic> GetEmployees()
{
    List<Employee> source = GenerateEmployeeCollection();
    var queryResult = from employee in source
                      where employee.Age > 20
                      select new { employee.FirstName, employee.Age };
    return queryResult.ToList<dynamic>();
}
create REST WCF services that return dynamic data. That might be useful in the following scenario: consider that you have a web method that returns user-related data. However, your service exposes quite a lot of info about users, and it would not be efficient to return all of it all of the time. It would be better if you could allow consumers to specify the fields they actually need, as with the following URL:
http://api.example.com/users?userId=xxxx&fields=firstName,lastName,age
The problem then comes from the fact that WCF will only return responses to clients made out of serialized objects. If the objects are static, there is no way to return dynamic responses, so dynamic types need to be used. There is, however, one last problem: by default, dynamic types are not serializable. The article contains a code sample that shows how to overcome this (again, I am not posting it here because of its size).
In the end, you might notice that two of the use cases I mentioned require workarounds or 3rd-party tools. This makes me think that while the .NET team has added a very cool feature to the framework, they might have added it only with COM and dynamic-language interop in mind. That would be a shame, because dynamic languages have some strong advantages, and providing them on a platform that combines them with the strengths of strongly typed languages would probably put .NET and C# ahead of the other development platforms.
Miguel de Icaza presented a very cool use case on his blog, here (source included):
dynamic d = new PInvoke ("libc");
d.printf ("I have been clicked %d times", times);
If it is possible to do this in a safe and reliable way, that would be awesome for native code interop.
This will also allow us to avoid having to use the visitor pattern in certain cases, as multi-dispatch will now be possible:
public class MySpecialFunctions
{
    public void Execute(int x) {...}
    public void Execute(string x) {...}
    public void Execute(long x) {...}
}

dynamic x = getx();
var myFunc = new MySpecialFunctions();
myFunc.Execute(x);
...will call the best method match at runtime, instead of being worked out at compile time
I will use it to simplify my code which deals with COM/interop, where before I had to specify the member to invoke, its parameters, etc. (basically where the compiler didn't know about the existence of a function and I needed to describe it at compile time). With dynamic this gets less cumbersome and the code gets leaner.
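To illustrate the difference, a sketch (GetComObject and DoWork are hypothetical placeholders):
// Before: describe the member yourself and invoke through reflection
object legacy = GetComObject();
legacy.GetType().InvokeMember("DoWork",
    System.Reflection.BindingFlags.InvokeMethod,
    null, legacy, new object[] { 42 });

// After: let the runtime binder resolve the member
dynamic modern = GetComObject();
modern.DoWork(42);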

Generating classes automatically from unit tests?

I am looking for a tool that can take a unit test, like
IPerson p = new Person();
p.Name = "Sklivvz";
Assert.AreEqual("Sklivvz", p.Name);
and generate, automatically, the corresponding stub class and interface
interface IPerson // inferred from IPerson p = new Person();
{
    string Name
    {
        get; // inferred from Assert.AreEqual("Sklivvz", p.Name);
        set; // inferred from p.Name = "Sklivvz";
    }
}

class Person : IPerson // inferred from IPerson p = new Person();
{
    private string name; // inferred from p.Name = "Sklivvz";

    public string Name // inferred from p.Name = "Sklivvz";
    {
        get
        {
            return name; // inferred from Assert.AreEqual("Sklivvz", p.Name);
        }
        set
        {
            name = value; // inferred from p.Name = "Sklivvz";
        }
    }

    public Person() // inferred from IPerson p = new Person();
    {
    }
}
I know ReSharper and Visual Studio do some of these, but I need a complete tool -- command line or whatnot -- that automatically infers what needs to be done.
If there is no such tool, how would you write it (e.g. extending ReSharper, from scratch, using which libraries)?
What you appear to need is a parser for your language, and a name and type resolver ("symbol table builder"). I'll talk in terms of Java below; the reason is explained at the end.
After parsing the source text, a compiler usually has a name resolver, that tries to record the definition of names and their corresponding types, and a type checker, that verifies that each expression has a valid type.
Normally the name/type resolver complains when it can't find a definition. What you want it to do is to find the "undefined" thing that is causing the problem, and infer a type for it.
For
IPerson p = new Person();
the name resolver knows that "Person" and "IPerson" aren't defined. If it were
Foo p = new Bar();
there would be no clue that you wanted an interface, just that Foo is some kind of abstract parent of Bar (e.g., a class or an interface). So the decision as to which it is must be known to the tool ("whenever you find such a construct, assume Foo is an interface..."). You could use a heuristic: IFoo and Foo together mean IFoo should be an interface, and somewhere somebody has to define Foo as a class realizing that interface. Once the tool has made this decision, it would need to update its symbol tables so that it can move on to other statements:
For
p.Name = "Sklivvz";
given that p must be an Interface (by the previous inference), then Name must be a field member, and it appears its type is String from the assignment.
With that, the statement:
Assert.AreEqual("Sklivvz", p.Name);
names and types resolve without further issue.
The content of the IFoo and Foo entities is sort of up to you; you didn't have to use get and set but that's personal taste.
This won't work so well when you have multiple entities in the same statement:
x = p.a + p.b;
We know a and b are likely fields, but you can't guess what numeric type, if indeed they are numeric, or whether they are strings (this is legal for strings in Java, dunno about C#).
For C++ you don't even know what "+" means; it might be an operator on the Bar class.
So what you have to do is collect constraints, e.g., "a is some indefinite number or string", etc. and as the tool collects evidence, it narrows the set of possible constraints. (This works like those word problems: "Joe has seven sons. Jeff is taller than Sam. Harry can't hide behind Sam. ... who is Jeff's twin?" where you have to collect the evidence and remove the impossibilities). You also have to worry about the case where you end up with a contradiction.
You could rule out p.a+p.b case, but then you can't write your unit tests with impunity. There are standard constraint solvers out there if you want impunity. (What a concept).
OK, we have the ideas, now, can this be done in a practical way?
The first part of this requires a parser and a bendable name and type resolver. You need a constraint solver or at least a "defined value flows to undefined value" operation (trivial constraint solver).
Our DMS Software Reengineering Toolkit with its Java front end could probably do this. DMS is a tool builder's tool, for people who want to build tools that process computer languages in arbitrary ways. (Think of "computing with program fragments rather than numbers".)
DMS provides general-purpose parsing machinery, and can build a tree for whatever front end it is given (e.g., Java; there's also a C# front end).
The reason I chose Java is that our Java front end has all that name and type resolution machinery, and it is provided in source form so it can be bent. If you stuck to the trivial constraint solver, you could probably bend the Java name resolver to figure out the types. DMS will let you assemble trees that correspond to code fragments, and coalesce them into larger ones; as your tool collected facts for the symbol table, it could build the primitive trees.
Somewhere, you have to decide you are done. How many unit tests does the tool have to see before it knows the entire interface? (I guess it eats all the ones you provide.)
Once complete, it assembles the fragments for the various members and builds an AST for an interface; DMS can use its prettyprinter to convert that AST back into source code like you've shown.
I suggest Java here because our Java front end has name and type resolution. Our C# front end does not. This is a "mere" matter of ambition; somebody has to write one, but that's quite a lot of work (at least it was for Java, and I can't imagine C# is really different).
But the idea works fine in principle using DMS.
You could do this with some other infrastructure that gives you access to a parser and a bendable name and type resolver. That might not be so easy to get for C#; I suspect MS may give you a parser, and access to name and type resolution, but not any way to change it. Maybe Mono is the answer?
You still need a way to generate code fragments and assemble them. You might try to do this by string hacking; my (long) experience with gluing program bits together is that if you do it with strings you eventually make a mess of it. You really want pieces that represent code fragments of known type, that can only be combined in ways the grammar allows; DMS does that, thus no mess.
It's amazing how no one really gave anything towards what you were asking.
I don't know the answer, but I will give my thoughts on it.
If I were to attempt to write something like this myself, I would probably look at a ReSharper plugin. The reason I say that is because, as you stated, ReSharper can do it, but only in individual steps. So I would write something that went line by line and applied the appropriate ReSharper creation methods chained together.
Now by no means do I even know how to do this, as I have never built anything for ReSharper, but that is what I would try to do. It makes logical sense that it could be done.
And if you do write up some code, PLEASE post it, as I could find that useful as well - being able to generate the entire skeleton in one step. Very useful.
If you plan to write your own implementation, I would definitely suggest that you take a look at the NVelocity (C#) or Velocity (Java) template engines.
I have used these in a code generator before and have found that they make the job a whole lot easier.
It's doable - at least in theory. What I would do is use something like csparser to parse the unit test (you cannot compile it, unfortunately) and then take it from there. The only problem I can see is that what you are doing is wrong in terms of methodology - it makes more sense to generate unit tests from entity classes (indeed, Visual Studio does precisely this) than the other way around.
I think a real solution to this problem would be a very specialized parser. Since that's not so easy to do, I have a cheaper idea. Unfortunately, you'd have to change the way you write your tests (namely, just the creation of the object):
dynamic p = someFactory.Create("MyNamespace.Person");
p.Name = "Sklivvz";
Assert.AreEqual("Sklivvz", p.Name);
A factory object would be used. If it can find the named type, it will create it and return it (this is the normal test execution). If it doesn't find it, it will create a recording proxy (a DynamicObject) that will record all calls; at the end (maybe on tear down) it could emit class files (maybe based on some templates) that reflect what it "saw" being called.
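Here is what the recording half might look like (a minimal sketch; the class and member names are illustrative, not from an existing library):
using System.Collections.Generic;
using System.Dynamic;

class RecordingProxy : DynamicObject
{
    public readonly List<string> Recorded = new List<string>();
    private readonly Dictionary<string, object> values = new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        // Record the member name and the runtime type of the assigned value
        Recorded.Add("set " + binder.Name + " : " + (value == null ? "object" : value.GetType().Name));
        values[binder.Name] = value;
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        Recorded.Add("get " + binder.Name);
        values.TryGetValue(binder.Name, out result);
        return true; // never throw - we only want to observe the calls
    }
}
Running the test above against such a proxy would leave "set Name : String" and "get Name" in Recorded, which is enough information to emit the IPerson/Person skeleton from a template.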
Some disadvantages that I see:
Need to run the code in "two" modes, which is annoying.
In order for the proxy to "see" and record calls, they must be executed; so code in a catch block, for example, has to run.
You have to change the way you create your object under test.
You have to use dynamic; you'll lose compile-time safety in subsequent runs and it has a performance hit.
The only advantage that I see is that it's a lot cheaper to create than a specialized parser.
I like CodeRush from DevExpress. They have a huge, customizable templating engine, and - best of all for me - there are no dialog boxes. They also have functionality to create methods, interfaces, and classes that do not yet exist.
Try looking at Pex, a Microsoft Research project on unit testing, which is still under research:
research.microsoft.com/en-us/projects/Pex/
I think what you are looking for is a fuzzing toolkit (https://en.wikipedia.org/wiki/Fuzz_testing).
Although I have never used it, you might give Randoop.NET a chance to generate 'unit tests': http://randoop.codeplex.com/
Visual Studio ships with some features that can be helpful for you here:
Generate Method Stub. When you write a call to a method that doesn't exist, you'll get a little smart tag on the method name, which you can use to generate a method stub based on the parameters you're passing.
If you're a keyboard person (I am), then right after typing the close parenthesis, you can do:
Ctrl-. (to open the smart tag)
ENTER (to generate the stub)
F12 (go to definition, to take you to the new method)
The smart tag only appears if the IDE thinks there isn't a method that matches. If you want to generate when the smart tag isn't up, you can go to Edit->Intellisense->Generate Method Stub.
Snippets. Small code templates that makes it easy to generate bits of common code. Some are simple (try "if[TAB][TAB]"). Some are complex ('switch' will generate cases for an enum). You can also write your own. For your case, try "class" and "prop".
See also "How to change “Generate Method Stub” to throw NotImplementedException in VS?" for information on snippets in the context of GMS.
autoprops. Remember that properties can be much simpler:
public string Name { get; set; }
create class. In Solution Explorer, right-click on the project name or a subfolder and select Add->Class. Type the name of your new class and hit ENTER. You'll get a class declaration in the right namespace, etc.
Implement interface. When you want a class to implement an interface, write the interface name part, activate the smart tag, and select either option to generate stubs for the interface members.
These aren't quite the 100% automated solution you're looking for, but I think it's a good mitigation.
I find that whenever I need a code generation tool like this, I am probably writing code that could be made a little bit more generic so I only need to write it once. In your example, those getters and setters don't seem to be adding any value to the code - in fact, it is really just asserting that the getter/setter mechanism in C# works.
I would refrain from writing (or even using) such a tool before understanding what the motivations for writing these kinds of tests are.
BTW, you might want to have a look at NBehave?
I use Rhino Mocks for this, when I just need a simple stub.
http://www.ayende.com/wiki/Rhino+Mocks+-+Stubs.ashx
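For example, once the IPerson interface exists, a stub with automatic property behavior is a one-liner (a sketch against the Rhino Mocks 3.x API; requires using Rhino.Mocks;):
IPerson p = MockRepository.GenerateStub<IPerson>();
p.Name = "Sklivvz";                 // the stub records the property value...
Assert.AreEqual("Sklivvz", p.Name); // ...and plays it back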
