Fine-grained visibility for 'internal' members - C#

Recently I was reading about partitioning code with .NET assemblies and stumbled upon a nice suggestion from this post: "reduce the number of your .NET assemblies to the strict minimum".
I couldn't agree more! And one of the reasons I see most often is that people just want to isolate some piece of code, so they make the types/methods internal and put them into a separate project.
There are many other reasons (valid and not) for splitting code into several assemblies, but if you want to isolate components/APIs while still having them located in one library, how can you do that?
namespace MyAssembly.SomeApiInternals
{
    // Methods from this class should not
    // be used outside MyAssembly.SomeApiInternals
    internal class Foo
    {
        internal void Boo() { }
    }
}

namespace MyAssembly.AnotherPart
{
    public class Program
    {
        public void Test()
        {
            var foo = new MyAssembly.SomeApiInternals.Foo();
            foo.Boo(); // Ok, not a compiler error but some red flag at least
        }
    }
}
How can one restrict a type/method from being used by other types/methods in the same assembly but outside this very namespace?
(I'm going to give a few answers myself and see how people would vote.)
Thanks!

You could put the code in different assemblies, then merge the assemblies with ILMerge in a post-build step...
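For illustration, the post-build step might look something like this (a sketch; paths and assembly names are made up, and the /internalize switch even turns the merged assemblies' public types internal, which fits the intent here):

ILMerge.exe /target:library /internalize /out:Merged\MyAssembly.dll MyAssembly.dll MyAssembly.SomeApiInternals.dll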

Use NDepend and put in CQL rules that embody what you want, and run them as part of your build. The language isn't interested in this level of restrictions. (I hadn't followed your link yet - are you really trying to do this without NDepend? Your answer should rule it in or out.)
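If you go the NDepend route, such a rule could be sketched in CQL roughly like this (untested, and the exact predicate names vary between NDepend versions):

WARN IF Count > 0 IN SELECT METHODS WHERE
  IsDirectlyUsing "MyAssembly.SomeApiInternals.Foo" AND
  !FullNameLike "MyAssembly.SomeApiInternals"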

Related

Is it possible to apply an attribute to the generated Main method in a top-level application?

C# 9 supports top-level statements, but I am curious whether it is possible to apply an attribute to the generated Main method ([STAThread], actually), or whether I have to use the classical approach with an explicit Main method.
This feature was designed for newcomers to the language, so they won't need to write a bunch of boilerplate each time. So this
namespace HelloWorldProg
{
    public static class HelloWorldClass
    {
        public static void Main(string[] args)
        {
            System.Console.WriteLine("Finally I can write Hello World");
        }
    }
}
transforms to this
System.Console.WriteLine("That's much easier!");
It's a question of entry threshold and learning curve. Without top-level statements you need to know about
namespaces
classes
encapsulation
static/instance members
passing arguments
arrays
how to write text to the console
while with top-level statements you only need to know about the last item to write a program; you can dig into the other topics later.
It's like "how to write 'hello world' in Haskell": you need to know monads, IO in particular, and do-notation. And in order to understand monads you should learn category theory.
Now answering your question:
You cannot apply attributes to the generated Main method when using top-level statements; they were designed for different purposes. See the language proposal and the priorities in platform design.
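So to use [STAThread] you have to fall back to the classical approach with an explicit Main. A minimal sketch:

using System;

internal static class Program
{
    [STAThread] // attributes can only be applied to an explicitly declared Main
    private static void Main(string[] args)
    {
        Console.WriteLine("An explicit Main, so attributes can be applied.");
    }
}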

CS-Script Evaluator LoadCode: How to compile and reference a second script (reusable library)

The question in short is: How do you reference a second script containing reusable script code, under the constraints that you need to be able to unload and reload the scripts when either of them changes without restarting the host application?
I'm trying to compile a script class using the CS-Script "compiler as service" (CSScript.Evaluator), while referencing an assembly that has just been compiled from a second "library" script. The purpose is that the library script should contain code that can be reused for different scripts.
Here is a sample code that illustrates the idea but also causes a CompilerException at runtime.
using CSScriptLibrary;
using NUnit.Framework;

[TestFixture]
public class ScriptReferencingTests
{
    private const string LibraryScriptCode = @"
        public class Helper
        {
            public static int AddOne(int x)
            {
                return x + 1;
            }
        }";

    private const string ScriptCode = @"
        using System;
        public class Script
        {
            public int SumAndAddOne(int a, int b)
            {
                return Helper.AddOne(a+b);
            }
        }";

    [Test]
    public void CSScriptEvaluator_CanReferenceCompiledAssembly()
    {
        var libraryEvaluator = CSScript.Evaluator.CompileCode(LibraryScriptCode);
        var libraryAssembly = libraryEvaluator.GetCompiledAssembly();
        var evaluatorWithReference = CSScript.Evaluator.ReferenceAssembly(libraryAssembly);
        dynamic scriptInstance = evaluatorWithReference.LoadCode(ScriptCode);
        var result = scriptInstance.SumAndAddOne(1, 2);
        Assert.That(result, Is.EqualTo(4));
    }
}
To run the code you need NuGet packages NUnit and cs-script.
This line causes a CompilerException at runtime:
dynamic scriptInstance = evaluatorWithReference.LoadCode(ScriptCode);
{interactive}(7,23): error CS0584: Internal compiler error: The invoked member is not supported in a dynamic assembly.
{interactive}(7,9): error CS0029: Cannot implicitly convert type '<fake$type>' to 'int'
Again, the reason for using CSScript.Evaluator.LoadCode instead of CSScript.LoadCode is so that the scripts can be reloaded at any time, without restarting the host application, when either of them changes. (CSScript.LoadCode already supports including other scripts, according to http://www.csscript.net/help/Importing_scripts.html.)
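(For comparison, with the classic model the library script would be pulled in with an include directive at the top of the consuming script - a sketch, with a hypothetical file name:)

//css_include HelperLibrary.cs;
using System;

public class Script
{
    public int SumAndAddOne(int a, int b)
    {
        return Helper.AddOne(a + b);
    }
}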
Here is the documentation on the CS-Script Evaluator: http://www.csscript.net/help/evaluator.html
The lack of google results for this is discouraging, but I hope I'm missing something simple. Any help would be greatly appreciated.
(This question should be filed under the tag cs-script which does not exist.)
There is some slight confusion here. The Evaluator is not the only way to achieve reloadable script behavior; CSScript.LoadCode allows reloading as well.
I do indeed advise considering CSScript.Evaluator.LoadCode as the first candidate for the hosting model, as it has less overhead and an arguably more convenient reloading model. However, it comes at a cost. You have very little control over reloading and the inclusion of dependencies (assemblies, scripts). Memory leaks are not 100% avoidable. And it also makes script debugging completely impossible (a Mono bug).
In your case I would really advise you to move to the more conventional hosting model: CodeDOM.
Have a look at the "[cs-script]\Samples\Hosting\CodeDOM\Modifying script without restart" sample.
And "[cs-script]\Samples\Hosting\CodeDOM\InterfaceAlignment" will also give you an idea of how to use interfaces with reloading.
CodeDOM was for years the default CS-Script hosting model, and it is in fact very robust, intuitive and manageable. The only real drawback is that all objects you pass to (or get from) the script need to be serializable or inherit from MarshalByRef. This is a side effect of the script being executed in an "automatic" separate domain, so one has to deal with all the "pleasures" of Remoting.
BTW, this is the only reason why I implemented the Mono-based evaluator.
The CodeDOM model will also automatically manage the dependencies and recompile them when needed. But it looks like you are aware of this anyway.
CodeDOM also allows you to define precisely the mechanism of checking dependencies for changes:
//the default algorithm "recompile if script or dependency is changed"
CSScript.IsOutOfDateAlgorithm = CSScript.CachProbing.Advanced;
or
//custom algorithm "never recompile script"
CSScript.IsOutOfDateAlgorithm = (s, a) => false;
The quick solution to the CompilerException appears to be not to use the Evaluator to compile the library, but plain CSScript.CompileCode instead, like so:
var compiledAssemblyName = CSScript.CompileCode(LibraryScriptCode);
var evaluatorWithReference = CSScript.Evaluator.ReferenceAssembly(compiledAssemblyName);
dynamic scriptInstance = evaluatorWithReference.LoadCode(ScriptCode);
However, as stated in the previous answer, this limits the possibilities for dependency control that the CodeDOM model offers (like css_include). Also, changes to the LibraryScriptCode are not picked up, which again limits the usefulness of the Evaluator method.
The solution I chose is the AsmHelper.CreateObject and AsmHelper.AlignToInterface<T> methods. This lets you use the regular css_include in your scripts, while at the same time allowing you to reload the scripts at any time by disposing the AsmHelper and starting over. My solution looks something like this:
AsmHelper asmHelper = new AsmHelper(CSScript.Compile(filePath), null, false);
object obj = asmHelper.CreateObject("*");
IMyInterface instance = asmHelper.TryAlignToInterface<IMyInterface>(obj);
// Any other interfaces you want to instantiate...
...
if (instance != null)
    instance.MyScriptMethod();
Once a change is detected (I use FileSystemWatcher), you just call asmHelper.Dispose and run the above code again.
This method requires the script class to be marked with the Serializable attribute, or simply inherit from MarshalByRefObject.
Note that your script class does not need to inherit any interface. The AlignToInterface works both with and without it. You could use dynamic here, but I prefer having a strongly typed interface to avoid errors down the line.
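For the change detection itself, the wiring can be sketched like this (ReloadScript stands for re-running the AsmHelper/CreateObject code above; the helper name is mine, not part of CS-Script):

var watcher = new FileSystemWatcher(Path.GetDirectoryName(filePath), Path.GetFileName(filePath));
watcher.Changed += (sender, args) =>
{
    asmHelper.Dispose(); // tear down the old script domain
    ReloadScript();      // re-run the AsmHelper/CreateObject code above
};
watcher.EnableRaisingEvents = true;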
I couldn't get the built-in try-methods to work, so I made this extension method for less clutter when it is not known whether or not the interface is implemented:
public static class InterfaceExtensions
{
    public static T TryAlignToInterface<T>(this AsmHelper helper, object obj) where T : class
    {
        try
        {
            return helper.AlignToInterface<T>(obj);
        }
        catch
        {
            return null;
        }
    }
}
Most of this is explained in the hosting guidelines (http://www.csscript.net/help/script_hosting_guideline_.html), and there are helpful samples mentioned in the previous answer.
I feel I might have missed something regarding script change detection, but this method works solidly.

Should I place every class in separate file?

Should I place every class in separate file? Even those short helper classes that are used only in one place? Like this one:
public class IntToVisibilityConverter : GenericValueConverter<int, Visibility>
{
    protected override Visibility Convert(int value)
    {
        return value == 0 ? Visibility.Collapsed : Visibility.Visible;
    }
}
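(GenericValueConverter is not shown in the question; presumably it is a small WPF helper along these lines - a sketch of my assumption, not the asker's actual code:)

using System;
using System.Globalization;
using System.Windows.Data;

public abstract class GenericValueConverter<TSource, TTarget> : IValueConverter
{
    // derived classes implement only the strongly typed conversion
    protected abstract TTarget Convert(TSource value);

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return Convert((TSource)value);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}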
I do this and it is usually best practice to do so, but it is sometimes a matter of opinion.
That depends greatly on personal preference, but I like to do it.
In this case, I would have a folder inside my application called ValueConverters, and put all converters, including short ones, inside their own files.
I find it makes it easier to get an overview of what your project consists of from the Solution Explorer.
I'll rephrase the question for you: should I use StyleCop? (It includes this rule.) The answer is yes. I use it and my code is much more readable (but I have to admit I disable all the rules that require the method documentation to be complete :-))
I do think that when you program in a team, having a fixed and uniform code format is very important - and even when you program "solo". Cluttered code is more difficult to read, and errors can hide better in the clutter :-)
It is usually best practice to put every class in a separate file. As for your short helper classes: you could create a single helper class that contains all your helper methods, to avoid having far too many classes. If that helper class gets too big, you can then split the helper functions up per category.
It is good practice to do so.
You can easily find the class if you name the file after the class.
ReSharper has a built-in error for classes not matching the file name they are in...
Typically, IMO, yes. Think about any new developers who must find where code lives. Yes, you can use Go To Definition, but that is not the be-all and end-all. However, I will say that sometimes, if you have an interface that is small and only used by the class it lives with, you can probably get away with it. Even that can expand, though, and later need to be pulled out (and maybe those contracts should be in another namespace anyway).
So ultimately I would say the majority of the time yes, but there are some caveats. As with anything, it is never black and white.

Is there a way to protect unit test names that follow the MethodName_Condition_ExpectedBehaviour pattern against refactoring?

I follow the naming convention of
MethodName_Condition_ExpectedBehaviour
when it comes to naming my unit-tests that test specific methods.
for example:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis() {...}
But when I need to rename the method under test, tools like ReSharper do not offer to rename those tests.
Is there a way to prevent such cases from appearing after renaming - by changing ReSharper settings, following a better unit-test naming convention, etc.?
A recent pattern is to group tests into inner classes by the method they test.
For example (omitting test attributes):
public class CityGetterTests
{
    public class GetCity
    {
        public void TakesParisId_ReturnsParis()
        {
            //...
        }

        // More GetCity tests
    }
}
See Structuring Unit Tests from Phil Haack's blog for details.
The neat thing about this layout is that, when the method name changes,
you'll only have to change the name of the inner class instead of all
the individual tests.
I also started with this convention, but ended up feeling it is not very good. Now I use BDD-styled names like should_return_Paris_for_ParisID.
That makes my tests more readable and also allows me to refactor method names without worrying about my tests :)
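For example (a sketch; CityRepository is just a stand-in for whatever class is under test):

[TestMethod]
public void should_return_Paris_for_ParisID()
{
    // the test name describes behaviour, not the method under test
    var city = new CityRepository().GetCity("ParisID");
    Assert.AreEqual("Paris", city.Name);
}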
I think the key here is what you should be testing.
You've mentioned TDD in the tags, so I hope that we're trying to adhere to that here. By that paradigm, the tests you're writing have two purposes:
To support your code once it is written, so you can refactor without fearing that you've broken something
To guide us to a better way of designing components - writing the test first really forces you to think about what is necessary for solving the problem at hand.
I know at first it looks like this question is about the first point, but really I think it's about the second. The problem you're having is that you've got concrete components you're testing instead of a contract.
In code terms, that means that I think we should be testing interfaces instead of class methods, because otherwise we expose our test to a variety of problems associated with testing components instead of contracts - inheritance strategies, object construction, and here, renaming.
It's true that interface names will change as well, but they'll be a lot more rigid than method names. What TDD gives us here isn't just a way to support change through a test harness - it provides the insight to realise we might be going about it the wrong way!
Take for example the code block you gave:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis()
{
    // some test logic here
}
And let's say we're testing the method GetCity() on our object, CityObtainer - when did I set this object up? Why have I done so? If I realise GetMatchingCity() is a better name, then you have the problem outlined above!
The solution I'm proposing is that we think about what this method really means earlier in the process, by use of interfaces:
public interface ICityObtainer
{
    City GetMatchingCity();
}
By writing in this "outside-in" style, we're forced to think about what we want from the object a lot earlier in the process, and making it the focus should reduce its volatility. This doesn't eliminate your problem, but it may mitigate it somewhat (and, I think, it's a better approach anyway).
Ideally, we go a step further, and we don't even write any code before starting the test:
[TestMethod]
public void GetCity_TakesParisId_ReturnsParis()
{
    ICityObtainer cityObtainer = new CityObtainer();
    var result = cityObtainer.GetCity("paris");
    Assert.That(result.Name, Is.EqualTo("paris"));
}
This way, I can see what I really want from the component before I even start writing it - if GetCity() isn't really what I want, but rather GetCityByID(), it would become apparent a lot earlier in the process. As I said above, it isn't foolproof, but it might reduce the pain for this particular case a bit.
Once you've gone through that, I feel that if you're changing the name of the method, it's because you're changing the terms of the contract, and that means you should have to go back and reconsider the test (since it's possible you didn't want to change it).
(As a quick addendum: if we're writing a test with TDD in mind, then presumably there is a significant amount of logic going on inside GetCity(). Thinking about the test as being against a contract helps us separate the intention from the implementation - the test stays valid no matter what we change behind the interface!)
I'm late, but maybe this can still be useful. Here's my solution (assuming you are using xUnit, at least).
First create a FactFor attribute that extends the xUnit Fact:
using System.Runtime.CompilerServices;
using Xunit;

public class FactForAttribute : FactAttribute
{
    public FactForAttribute(string methodName = "Constructor", [CallerMemberName] string testMethodName = "")
        => DisplayName = $"{methodName}_{testMethodName}";
}
The trick now is to use the nameof operator to make refactoring possible. For example:
public class A
{
    public int Just2() => 2;
}

public class ATests
{
    [FactFor(nameof(A.Just2))]
    public void Should_Return2()
    {
        var a = new A();
        a.Just2().Should().Be(2);
    }
}
The result is that the test shows up in the test runner with the display name Just2_Should_Return2.

Easiest way to inject code into all methods and properties that don't have a custom attribute

There are a lot of questions and answers around AOP in .NET here on Stack Overflow, often mentioning PostSharp and other third-party products. So there seems to be quite a range of AOP options in the .NET and C# world. But each of those has its restrictions, and after downloading the promising PostSharp I found in their documentation that 'methods have to be virtual' in order to be able to inject code (edit: see ChrisWue's answer and my comment - the virtual constraint must have been on one of the contenders, I suppose). I haven't investigated the accuracy of this statement any further, but its categorical tone made me return to Stack Overflow.
So I'd like to get an answer to this very specific question:
I want to inject simple "if (some-condition) Console.WriteLine" style code into every method and property (static, sealed, internal, virtual, non-virtual, doesn't matter) in my project that does not have a custom attribute, in order to dynamically test my software at run-time. This injected code should not remain in the release build; it is just meant for dynamic testing (thread-related) during development.
What's the easiest way to do this? I stumbled upon Mono.Cecil, which looks ideal, except that you seem to have to write the code that you want to inject in IL. This isn't a huge problem, it's easy to use Mono.Cecil to get an IL version of code written in C#. But nevertheless, if there was something simpler, ideally even built into .NET (I'm still on .NET 3.5), I'd like to know. [Update: If the suggested tool is not part of the .NET Framework, it would be nice if it was open-source, like Mono.Cecil, or freely available]
I was able to solve the problem with Mono.Cecil. I am still amazed how easy to learn, easy to use, and powerful it is. The almost complete lack of documentation did not change that.
These are the 3 sources of documentation I used:
static-method-interception-in-net-with-c-and-monocecil
Migration to 0.9
the source code itself
The first link provides a very gentle introduction, but as it describes an older version of Cecil - and much has changed in the meantime - the second link was very helpful in translating the introduction to Cecil 0.9. After getting started, the (also undocumented) source code was invaluable and answered every question I had - except perhaps those about the .NET platform in general, but there are tons of books and material on that online, I'm sure.
I can now take a DLL or EXE file, modify it, and write it back to disk. The only thing I haven't done yet is figure out how to keep the debugging information - file name, line number, etc. currently get lost after writing the DLL or EXE file. My background isn't .NET, so I'm guessing here, and my guess would be that I need to look at Mono.Cecil.Pdb to fix that. Sometime later - it's not that important for me right now. I create the EXE file, run the application - a complex GUI application, grown over many years, with all the baggage you would expect to find in such a piece of, ahem, software - and it checks things and logs errors for me.
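If it helps anyone, here is my guess at what preserving the symbols would look like, using Cecil's reader/writer parameters (unverified; you would also need the Mono.Cecil.Pdb assembly on board):

var readerParameters = new ReaderParameters
{
    AssemblyResolver = assemblyResolver,
    ReadSymbols = true // pull in the .pdb sitting next to the assembly
};
var assembly = AssemblyDefinition.ReadAssembly(assemblyFilename, readerParameters);
// ... patch methods as shown below ...
assembly.Write(assemblyFilename, new WriterParameters { WriteSymbols = true });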
Here's the gist of my code:
DefaultAssemblyResolver assemblyResolver = new DefaultAssemblyResolver();
// so it won't complain about not finding assemblies sitting in the same
// directory as the dll/exe we are going to patch
assemblyResolver.AddSearchDirectory(assemblyDirectory);
var readerParameters = new ReaderParameters { AssemblyResolver = assemblyResolver };
AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(assemblyFilename, readerParameters);

foreach (var moduleDefinition in assembly.Modules)
{
    foreach (var type in ModuleDefinitionRocks.GetAllTypes(moduleDefinition))
    {
        foreach (var method in type.Methods)
        {
            if (!HasAttribute("MyCustomAttribute", method.CustomAttributes))
            {
                ILProcessor ilProcessor = method.Body.GetILProcessor();
                // threadCheckerMethod is a MethodReference to the injected check (resolved elsewhere)
                ilProcessor.InsertBefore(method.Body.Instructions.First(),
                    ilProcessor.Create(OpCodes.Call, threadCheckerMethod));
// ...

private static bool HasAttribute(string attributeName, IEnumerable<CustomAttribute> customAttributes)
{
    return GetAttributeByName(attributeName, customAttributes) != null;
}

private static CustomAttribute GetAttributeByName(string attributeName, IEnumerable<CustomAttribute> customAttributes)
{
    foreach (var attribute in customAttributes)
        if (attribute.AttributeType.FullName == attributeName)
            return attribute;
    return null;
}
If someone knows an easier way how to get this done, I'm still interested in an answer and I won't mark this as the solution - unless no easier solutions show up.
I'm not sure where you got that methods have to be virtual from. We use Postsharp to time and log calls to WCF service interface implementations utilizing the OnMethodBoundaryAspect to create an attribute we can decorate the classes with. Quick Example:
[Serializable]
public class LogMethodCallAttribute : OnMethodBoundaryAspect
{
    public Type FilterAttributeType { get; set; }

    public LogMethodCallAttribute(Type filterAttributeType)
    {
        FilterAttributeType = filterAttributeType;
    }

    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        if (!Proceed(eventArgs)) return;
        Console.WriteLine(GetMethodName(eventArgs));
    }

    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        if (!Proceed(eventArgs)) return;
        Console.WriteLine(string.Format("Exception at {0}:\n{1}",
            GetMethodName(eventArgs), eventArgs.Exception));
    }

    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        if (!Proceed(eventArgs)) return;
        Console.WriteLine(string.Format("{0} returned {1}",
            GetMethodName(eventArgs), eventArgs.ReturnValue));
    }

    private string GetMethodName(MethodExecutionEventArgs eventArgs)
    {
        return string.Format("{0}.{1}", eventArgs.Method.DeclaringType, eventArgs.Method.Name);
    }

    private bool Proceed(MethodExecutionEventArgs eventArgs)
    {
        return Attribute.GetCustomAttributes(eventArgs.Method, FilterAttributeType).Length == 0;
    }
}
And then use it like this:
[LogMethodCallAttribute(typeof(MyCustomAttribute))]
class MyClass
{
    public void LogMe()
    {
    }

    [MyCustomAttribute]
    public void DoNotLogMe()
    {
    }
}
Works like a charm, without having to make any methods virtual, in PostSharp 1.5.6. Maybe they have changed it for 2.x, but I certainly hope not - it would make it far less useful.
Update: I'm not sure you can easily convince PostSharp to only inject code into certain methods based on which attributes they are decorated with. If you look at this tutorial, it only shows ways of filtering on type and method names. We solved this by passing the type we want to check on into the attribute; then, in OnEntry, you can use reflection to look for the attributes and decide whether to log or not. The result is cached, so you only have to do it on the first call.
I adjusted the code above to demonstrate the idea.
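The caching itself is not shown above; it could be as simple as this (a sketch - the dictionary is my illustration, not the exact code we use):

private static readonly Dictionary<MethodBase, bool> proceedCache =
    new Dictionary<MethodBase, bool>();

private bool Proceed(MethodExecutionEventArgs eventArgs)
{
    lock (proceedCache)
    {
        bool proceed;
        if (!proceedCache.TryGetValue(eventArgs.Method, out proceed))
        {
            // reflect only once per method, then remember the answer
            proceed = Attribute.GetCustomAttributes(eventArgs.Method, FilterAttributeType).Length == 0;
            proceedCache[eventArgs.Method] = proceed;
        }
        return proceed;
    }
}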
