BDD/TDD with Revit API - C#

My question is related to this question, but I've moved a step further and implemented a test framework using NUnit.
When I run the test framework addin inside Revit, the testing framework somehow locks the test assemblies, making it impossible to recompile them. To get around this, I tried making a shadow copy so the NUnit test runner runs on the copied assemblies. But once I run the tests for the first time, subsequent test runs do not work on updated copies. It is as if the test runner caches the DLLs and always runs the tests against the cached copy.
So each time the test assembly is updated, I need to close and reopen Revit to run the tests, which is a real pain. The main reason I implemented the testing framework for Revit was to be able to do BDD/TDD using the Revit API.
This is a code snippet of how I run the tests:
// TestPackage, RemoteTestRunner and NullListener live in the NUnit.Core namespace (NUnit 2.x runner API)
TestPackage theTestPackage = new TestPackage(testDll);
RemoteTestRunner testRunner = new RemoteTestRunner();
testRunner.Load(theTestPackage);
TestResult testResult = testRunner.Run(new NullListener());
Does anyone have any idea how to solve this?

You can try loading the assembly for testing with the Assembly.Load(byte[]) method. I'm not sure if your test runner can handle this, but it will give you an assembly to work with that is loaded from a byte array in memory. The original assembly file can therefore be recompiled at any time, and you can have as many concurrent versions of the assembly loaded as you like. They are all separate, with separate types.
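For illustration, a minimal sketch of that idea, assuming the runner can be pointed at an already-loaded Assembly object (the path, class and method names are made up):

using System;
using System.IO;
using System.Reflection;

static class UnlockedLoader
{
    // Hypothetical helper: load a freshly compiled assembly from its bytes so the file on disk is never locked.
    public static Assembly LoadWithoutLocking(string path)
    {
        byte[] raw = File.ReadAllBytes(path);   // the file handle is released as soon as the bytes are read
        return Assembly.Load(raw);              // loaded from memory; the file can be recompiled at any time
    }
}

// Example usage: reflect over (or hand the runner) the in-memory copy instead of the file on disk.
// Assembly tests = UnlockedLoader.LoadWithoutLocking(@"C:\Tests\bin\MyTests.dll");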
I use a similar strategy with the RevitPythonShell script loadplugin.py for loading plugins at runtime and then exercising them for testing. This seems to work quite well except for WPF controls defined in XAML. I suspect the XAML parser and loader keeps a cache of the types, but haven't been able to look into this yet due to time constraints.

Related

nUnit extension in the same assembly as tests

Background: NUnit 3. I have tests with complex inheritance. A certain object is created within SetUp or OneTimeSetUp. Those methods are virtual. When this object is not closed, a leak occurs.
Problem: The object is destroyed in TearDown or OneTimeTearDown, but those are only called when SetUp or OneTimeSetUp succeed. So when an exception occurs somewhere within SetUp or OneTimeSetUp, the leak occurs. As I mentioned, there are multiple inheritance levels, so the exception and the creation of the critical object may happen in different classes, on different stack frames.
What I want to do: I want an ITestEventListener to react to the failure before initialization has finished and clean up the critical object.
What I tried: in my test assembly I created the class:
using System.Diagnostics;
using NUnit.Engine;                 // ITestEventListener
using NUnit.Engine.Extensibility;   // ExtensionAttribute

namespace My.Whatever.Tests.Web.Util
{
    [Extension(EngineVersion = "3.4")]
    public class NunitEventListener : ITestEventListener
    {
        public void OnTestEvent(string report)
        {
            Debug.WriteLine(report);
        }
    }
}
Then I tried running the tests through:
VS (NUnit 3 test adapter)
the NUnit console
Neither seems to load the extension.
Question: what am I doing wrong?
Sources of info: https://github.com/nunit/docs/wiki/Event-Listeners , https://github.com/nunit/docs/wiki/Writing-Engine-Extensions
The info about how extensions are located is found at https://github.com/nunit/docs/wiki/Engine-Extensibility#locating-addins which is linked from the second of the two references you mention.
Extensions are not searched for in your test assembly. We provided that in V2 for NUnit Addins as an easy way to test extensions, but it's a bit more complicated to do that for engine extensions. IMO it would be a good feature if we could make that work, but it involves making all extensions capable of being loaded and unloaded as new test assemblies are run. That's a major internal change to our extension service.
In the directory containing the engine assembly, you may find one or more files of type .addins. Whether there is one and how many and what they contain will depend on how you installed the runner and engine. That file has entries pointing to the extensions installed for that particular copy of the engine. See the above reference page for details.
In two cases, the location of extensions is more or less automatic, due to the presence of wild-card entries in the .addins file:
If you install the console runner using NuGet, any extensions installed as NuGet packages are found.
If you install the console runner using Chocolatey, any extensions installed by Chocolatey are found.
In all other cases, I'm afraid that you would have to edit the .addins file manually.
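For illustration, a .addins file is just a plain list of paths, relative to the directory that contains the file, pointing at extension assemblies or at directories to scan (wildcards are allowed); the entries below are hypothetical:

# hypothetical entries added by hand to the engine's .addins file
../MyExtensions/My.Whatever.Tests.Web.Util.dll
addins/*.dll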
In the specific case of the adapter, there is no .addins file and therefore no extensions are ever loaded. In theory, you could create such a file by hand and cause your extension to load, at least if the engine is installed in a directory to which you have access. That would be the case if you are using the nuget package. I suggest you first play with having your extension recognized under the console runner before trying this as it would involve additional complications.
BTW, not all third-party runners use the engine. If they are not using the engine at all, of course, it's not possible to provide extensions.
Update: I only just noticed your statement that TearDown and OneTimeTearDown are only run when SetUp or OneTimeSetUp succeed. That's not a true statement. Both kinds of teardown run if the corresponding setup is run, whether it succeeds or not. Of course, your teardowns have to be written to allow for the fact that the corresponding setup may not have run to completion, which can be tricky.
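For what it's worth, a minimal sketch of a teardown written to tolerate a setup that did not run to completion (the resource type and helper methods are hypothetical):

using System;
using NUnit.Framework;

[TestFixture]
public class CriticalResourceTests
{
    private IDisposable _resource;   // hypothetical critical object created during SetUp

    [SetUp]
    public void SetUp()
    {
        _resource = CreateResource();    // the object now exists...
        ConfigureResource(_resource);    // ...and if this throws, TearDown still runs
    }

    [TearDown]
    public void TearDown()
    {
        // SetUp may have failed part-way through, so guard against a missing or partially built object.
        if (_resource != null)
        {
            _resource.Dispose();
            _resource = null;
        }
    }

    // Stand-ins for the real critical object and its configuration.
    private static IDisposable CreateResource() { return new System.IO.MemoryStream(); }
    private static void ConfigureResource(IDisposable r) { }
}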

C# DllImport Inconsistency

I am testing that I am using the correct DLL interface for a ThirdParty.dll by using a mocked unmanaged DLL in some unit tests. ThirdParty.dll is imported using DllImport("ThirdParty.dll") inside the production code. The mock DLL is placed in the same directory as the NUnit test code, the working directory of the command line is set to the same directory as the test DLL and the mock DLL, and then NUnit is called with a full path.
Example:
TestDirectory contains:
Test.dll
nunit.framework.dll
pnunit.framework.dll
ThirdParty.dll
and some other dependencies.
and the following is called:
C:\TestDirectory>ProgFiles\NUnit\nunit-console-x86.exe Test.dll /config:Release /framework:net-4.0
On our development machines the mock version of ThirdParty.dll is picked up fine and our tests pass, but when we put the same code and directory structure on the customer's computer, it picks up the real installed version of the DLL instead. We also have that version installed on our dev machines, but it gets blocked by the mock one during unit tests.
I'm aware of the search order used by Windows, but I think that in both instances the DLL should be found in step 1, "The directory from which the application loaded." I'm also aware that a DLL of the same name can be picked up if it is already in memory, but I believe this applies only within the same process, which should not be the case here.
Does anyone have any ideas on what I could check or what might be causing this?
Thanks
The search order for dynamic-link libraries can be found here. The algorithm is always the same, but it does depend on operating system settings, so two machines with different settings may give different results.
Use the fusion log viewer to get a more detailed view into how your assembly is being found. The viewer will list all the paths your application searches to load an assembly and where it was found. This always gives me an answer when I have unexpected DLL dependency problems.
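As a side note, if you need to take the native search order out of the equation during the tests, one option is to pin the DLL directory explicitly before the first P/Invoke call; a rough sketch, assuming it is acceptable to call SetDllDirectory from your test setup (the class and directory logic are hypothetical):

using System;
using System.IO;
using System.Runtime.InteropServices;

static class NativeTestSetup
{
    // Win32 API: adds the given directory to the native DLL search path of this process.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool SetDllDirectory(string lpPathName);

    // Call this once before anything touches ThirdParty.dll, e.g. from a [SetUpFixture].
    public static void UseMockDirectory()
    {
        string mockDir = Path.GetDirectoryName(typeof(NativeTestSetup).Assembly.Location);
        if (!SetDllDirectory(mockDir))
            throw new InvalidOperationException("SetDllDirectory failed: " + Marshal.GetLastWin32Error());
    }
}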

How to break the build when referenced assembly version changes?

I have some C# which fixes an issue that is already fixed in a newer release of Sitecore CMS.
I want the fix to be removed if and when we upgrade to the newer version, but that may be some time in the future, and the presence of this fix would be easily forgotten.
Is it possible to break the build or otherwise draw attention to this section of code when the referenced assembly version changes (i.e. becomes higher than v6.5.x)? A conditional around an #error directive might work, but I don't know if or how such a condition can refer to a referenced assembly's version.
Note that I'm hoping this can happen at build-time, not runtime, and specifically draw attention to the code in question so that it can be reviewed.
Surely all you would need to do is make sure your projects have the "Specific Version" flag set to true for that reference?
How about using a post-build (or pre-build I suppose) event in Visual Studio? You could run a powershell script or something that will check your assembly version and alert you that the assembly version has changed.
A bit ugly but I think it will work.
Info on build events...
http://msdn.microsoft.com/en-us/library/ke5z92ks.aspx
While the following solution won't run at build-time, you could do this with unit tests in your health check (assuming you are doing CI with unit tests).
You write a unit test to cover that piece of code, and make sure the unit test asserts the version of the referenced Sitecore DLL. Your health-check build will then fail when the unit test runs against the upgraded DLL.
You could even gate the check-in with this to make sure nobody could check in without that unit test passing.
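A sketch of such a version-assertion test, assuming NUnit and that the referenced Sitecore assembly exposes a type like Sitecore.Context (adjust the type and the version boundary to match your reference):

using System;
using NUnit.Framework;

[TestFixture]
public class SitecoreWorkaroundGuard
{
    [Test]
    public void WorkaroundShouldBeRemovedWhenSitecoreIsUpgraded()
    {
        // The duplicate fix is only needed up to the 6.5.x line of the referenced assembly.
        Version referenced = typeof(Sitecore.Context).Assembly.GetName().Version;

        Assert.That(referenced.Major == 6 && referenced.Minor <= 5,
            "Sitecore has been upgraded past 6.5.x - review and remove the duplicated fix.");
    }
}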

How do I work with shared assemblies and projects?

To preface, I've been working with C# for a few months, but I'm completely unfamiliar with concepts like deployment and assemblies, etc. My questions are many and varied, although I'm furiously Googling and reading about them to no avail (I currently have Pro C# 2008 and the .NET 3.5 Platform in front of me).
We have this process and it's composed of three components: an engine, a filter, and logic for the process. We love this process so much we want it reused in other projects. So now I'm starting to explore the space beyond one solution, one project.
Does this sound correct? One huge Solution:
Process A, exe
Process B, exe
Process C, exe
Filter, dll
Engine, dll
The engine is shared code for all of the processes, so I'm assuming that can be a shared assembly? If a shared assembly is in the same solution as a project that consumes it, how does it get consumed if it's supposed to be in the GAC? I've read something about a post-build event. Does that mean the engine.dll has to be redeployed on every build?
Also, the principal reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process, so that the process executable doesn't need to be updated. Regardless of whether that's best practice, let's just roll with it. Is this possible? I've read that assemblies link to specific versions of other assemblies, so if I update the DLL only, it's actually considered tampering. How can I update the DLL without changing the EXE? Is that what a publisher policy is for?
By the way, is any of this stuff Google-able or Amazon-able? What should I look for? I see lots of books about C# and .NET, but none about deployment or building or testing or things not related to the language itself.
I agree with Aequitarum's analysis. Just a couple additional points:
The engine is shared code for all of the processes, so I'm assuming that can be a shared assembly?
That seems reasonable.
If a shared assembly is in the same solution as a project that consumes it, how does it get consumed if it's supposed to be in the GAC?
Magic.
OK, its not magic. Let's suppose that in your solution your process project has a reference to the engine project. When you build the solution, you'll produce a project assembly that has a reference to the engine assembly. Visual Studio then copies the various files to the right directories. When you execute the process assembly, the runtime loader knows to look in the current directory for the engine assembly. If it cannot find it there, it looks in the global assembly cache. (This is a highly simplified view of loading policy; the real policy is considerably more complex than that.)
Stuff in the GAC should be truly global code; code that you reasonably expect large numbers of disparate projects to use.
Does that mean the engine.dll has to be redeployed on every build?
I'm not sure what you mean by "redeployed". Like I said, if you have a project-to-project reference, the build system will automatically copy the files around to the right places.
the principle reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process so that the process executable doesn't need to be updated
I question whether that's actually valuable. Scenario one: no filter assembly, all filter code is in project.exe. You wish to update the filter code; you update project.exe. Scenario two: filter.dll, project.exe. You wish to update the filter code; you update filter.dll. How is scenario two cheaper or easier than scenario one? In both scenarios you're updating a file; why does it matter what the name of the file is?
However, perhaps it really is cheaper and easier for your particular scenario. The key thing to understand about assemblies is assemblies are the smallest unit of independently versionable and redistributable code. If you have two things and it makes sense to version and ship them independently of each other, then they should be in different assemblies; if it does not make sense to do that, then they should be in the same assembly.
I've read that assemblies link to specific versions of other assemblies, so if I update the DLL only, it's actually considered tampering. How can I update the DLL without changing the EXE? Is that what a publisher policy is for?
An assembly may be given a "strong name". When you name your assembly Foo.DLL, and you write Bar.EXE to say "Bar.EXE depends on Foo.DLL", then the runtime will load anything that happens to be named Foo.DLL; file names are not strong. If an evil hacker gets their own version of Foo.DLL onto the client machine, the loader will load it. A strong name lets Bar.EXE say "Bar.exe version 1.2 written by Bar Corporation depends on Foo.DLL version 1.4 written by Foo Corporation", and all the verifications are done against the cryptographically strong keys associated with Foo Corp and Bar Corp.
So yes, an assembly may be configured to bind only against a specific version from a specific company, to prevent tampering. What you can do to update an assembly to use a newer version is create a little XML file that tells the loader "you know how I said I wanted Foo.DLL v1.4? Well, actually if 1.5 is available, its OK to use that too."
What should I look for? I see lots of books about C# and .NET, but none about deployment or building or testing or things not related to the language itself.
Deployment is frequently neglected in books, I agree.
I would start by searching for "ClickOnce" if you're interested in deployment of managed Windows applications.
Projects can reference assemblies or projects.
When you reference another assembly/project, you are allowed to use all the public classes/enums/structs etc in the referenced assembly.
You do not need to have all of them in one solution. You can have three solutions, one for each Process, and all three solutions can load Engine and Filter.
Also, you could have Process B and Process C reference the compiled assemblies (the .dlls) of the Engine and Filter and have a similar effect.
As long as you don't set the property in the reference to an assembly to require a specific version, you can freely update DLLs without much concern, providing the only code changes were to the DLL.
Also, the principal reason we separated the filter from the process (only one process uses it) is so that we can deploy the filter independently from the process so that the process executable doesn't need to be updated. Regardless of whether that's best practice, let's just roll with it. Is this possible?
I actually prefer this method of updating. Less overhead to update only the files that changed rather than everything every time.
As for using the GAC, that's a whole other level of complexity I won't get into.
Tamper-proofing your assemblies can be done by signing them, which is required to use the GAC in the first place, but you should still be fine so long as a specific version is not required.
My recommendation is to read a book about the .NET framework. This will really help you understand the CLR and what you're doing.
Applied Microsoft .NET Framework Programming was a book I really enjoyed reading.
You mention the engine is shared code, which is why you put it in a separate project under your solution. There's nothing wrong with doing it this way, and it's not necessary to add this DLL to the GAC. During your development phase, you can just add a reference to your engine project, and you'll be able to call the code from that assembly. When you want to deploy this application, you can either deploy the engine DLL with it, or you can add the engine DLL to the GAC (which is another ball of wax in and of itself). I tend to lean against GAC deployments unless it's truly necessary. One of the best features of .NET is the ability to deploy everything you need to run your application in one folder without having to copy stuff to system folders (i.e. the GAC).
If you want to achieve something like dynamically loading DLLs and calling member methods from your process without caring about a specific version, you can go a couple of routes. The easiest route is to just set the Specific Version property to False when you add the reference. This will give you the liberty of changing the DLL later, and as long as you don't mess with method signatures, it shouldn't be a problem. The second option is MEF (the Managed Extensibility Framework, which uses reflection and will be part of the framework in .NET 4.0). The idea with MEF is that you can scan a "plugins"-style folder for DLLs that implement specific functionality and then call them dynamically. This gives you some additional flexibility in that you can add new assemblies later without the need to modify your references.
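A minimal sketch of that MEF plugin-folder idea, assuming a reference to System.ComponentModel.Composition (the interface, class, and folder names are made up for illustration):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// Contract shared between the host and the plugin assemblies (hypothetical).
public interface IFilter
{
    string Apply(string input);
}

// Conceptually this lives in a plugin DLL dropped into the "plugins" folder.
[Export(typeof(IFilter))]
public class UpperCaseFilter : IFilter
{
    public string Apply(string input) { return input.ToUpperInvariant(); }
}

// Host side: scan the folder and use whatever filters are found, without a compile-time reference to them.
public static class FilterHost
{
    public static void RunAll(string text)
    {
        using (var catalog = new DirectoryCatalog("plugins"))
        using (var container = new CompositionContainer(catalog))
        {
            foreach (IFilter filter in container.GetExportedValues<IFilter>())
                Console.WriteLine(filter.Apply(text));
        }
    }
}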
Another thing to note is that there are Setup and Deployment project templates built into Visual Studio that you can use to generate MSI packages for deploying your projects. MSDN has lots of documentation related to this subject that you can check out, here:
http://msdn.microsoft.com/en-us/library/ybshs20f%28VS.80%29.aspx
Do not use the GAC on your build machine, it is a deployment detail. Visual Studio automatically copies the DLL into build directory of your application when you reference the DLL. That ensures that you'll run and debug with the expected version of the DLL.
When you deploy, you've got a choice. You can ship the DLL along with the application that uses it, stored in the EXE installation folder. Nothing special is needed, the CLR can always find the DLL and you don't have to worry about strong names or versions. A bug fix update is deployed simply by copying the new DLL into the EXE folder.
When you have several installed apps with a dependency on the DLL, deploying bug-fix updates can start to get awkward, since you have to copy the DLL repeatedly, once for each app. And you can get into trouble when you update some apps but not others, especially when there's a breaking change in the DLL interface that requires the app to be recompiled. That's DLL Hell knocking; the GAC can solve that.
We found some guidance on this issue at MSDN. We started with two separate solutions with no shared code, and then abstracted the commonalities into shared assemblies. We struggled with ways to isolate changes in the shared code so they impacted only the projects that were ready for them. We were terrible at Open/Closed.
We tried
branching the shared code for each project that used it and including it in the solution
copying the shared assembly from the shared solution when we made changes
coding pre-build events to build the shared code solution and copy the assembly
Everything was a real pain. We ended up using one large solution with all the projects in it. We branch each project as we want to stage features closer to production. This branches the shared code as well. It's simplified things a lot and we get a better idea of what tests fail across all projects, as the common code changes.
As far as deployment, our build scripts are setup to build the code and copy only the files that have changed, including the assemblies, to our environments.
By default, you have a hardcoded version number in your project (1.0.0.0). As long as you don't change it, you can use all Filter builds with the Process assembly (it only knows it should use the 1.0.0.0 version). This is not the best solution, however, because how do you distinguish between various builds yourself?
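For reference, that hardcoded number is the AssemblyVersion attribute, typically set in the project's AssemblyInfo.cs:

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]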
Another option is to use different versions of the Filter with the same Process. You should add an app.config file to the Process project and include a bindingRedirect element (see the docs). Whenever the runtime looks for a particular version of the Filter, it is "redirected" to the version indicated in the config. Unfortunately, this means that although you don't have to update the Process assembly, you'll have to update the config file with the new version.
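For illustration, a bindingRedirect in the Process's app.config could look like this; the assembly name, publicKeyToken and version numbers are placeholders for your own values:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Placeholder identity: use the Filter assembly's real name and public key token. -->
        <assemblyIdentity name="Filter" publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- Requests for any 1.x build of Filter are redirected to the newly deployed 1.2.0.0. -->
        <bindingRedirect oldVersion="1.0.0.0-1.2.0.0" newVersion="1.2.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>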
Whenever you encounter versioning problems, you can use Fuslogvw.exe (fusion log viewer) to troubleshoot these.
Have fun!
ulu

Problem using IronPython to code against .NET assemblies, specifically the app.config aspect

I started looking into IronPython to develop scripts against my .NET assemblies for certain tasks and ad-hoc processes. I like the ease of development with IronPython and the small footprint compared to handling this with a .NET console application project. However, I immediately ran into a roadblock with settings from the app.config file. The assemblies I am planning to use require settings from the app.config file, such as database connection strings, application settings, etc. I saw this question on SO: How to use IronPython with App.Config. However, from what I gather and assume, none of the suggested solutions worked or were acceptable. Modifying the ipy.exe.config file has potential. However, I would like to keep this as simple as possible and minimize the dependencies, so that anyone can grab the IronPython script and run it without having to modify the ipy.exe.config file.
So I decided to try the following: I create a new application domain in the script and have AppDomainSetup.ConfigurationFile point to the app.config file. Then I could call AppDomain.DoCallBack and pass a delegate that has my logic. So below is the script that has my attempt. Note that I am just learning IronPython/Python so please keep that in mind.
import clr
import sys
sys.path.append(r"C:\MyApp");
clr.AddReference("System")
clr.AddReference("Foo.dll")
from System import *
from System.IO import Path
from Foo import *
def CallbackAction():
    print "Callback action"
baseDirectory = r"C:\MyApp"
domainInfo = AppDomainSetup()
domainInfo.ApplicationBase = baseDirectory
domainInfo.ConfigurationFile = Path.Combine(baseDirectory,"MyApp.exe.config")
appDomain = AppDomain.CreateDomain("Test AppDomain",AppDomain.CurrentDomain.Evidence,domainInfo)
appDomain.DoCallBack(CallbackAction) #THIS LINE CAUSES SERIALIZATION EXCEPTION
Console.WriteLine("Hit any key to exit...")
Console.ReadKey()
In the above code, the "C:\MyApp" folder contains everything: the exe, the DLLs, and the app.config file. Hopefully the second AppDomain will use MyApp.exe.config. The CallbackAction method is intended to contain the code that will use the API from the .NET assemblies to do some work. CallbackAction will be invoked via appDomain.DoCallBack. Well, this is the part where I am having a problem. When appDomain.DoCallBack is executed, I get a System.Runtime.Serialization.SerializationException:
Cannot serialize delegates over unmanaged function pointers, dynamic methods or methods outside the delegate creator's assembly.
I can't make complete sense of this. I assume that something is being serialized across AppDomains and that the operation is failing. I can create a CrossAppDomainDelegate and execute it just fine.
test = CrossAppDomainDelegate(lambda: CallbackAction())
test()
So does anyone have any ideas or recommendations? Basically, I need to have the assemblies that I want to code against in IronPython to have access to the app.config file.
Thanks for your time and recommendations in advance.
By the way, I have IronPython 2.0.1 installed and am using VS2008 Pro.
This is probably not the answer you are looking for, but... since this is for your own testing purposes, I'd recommend installing IronPython 2.7.1 and seeing if the problem continues. There have been many improvements to IronPython since 2.0.1.
