Given that the THINC API is written to be backwards compatible, and lower versions enable a greater number of potential machines to run a given application, everyone should strive to use the minimum version necessary.
Does anyone know if there is an easy way to determine what the minimum version necessary for a given application would be?
For example, I've got an application that only uses 3 API functions:
GetHourMeterCount, GetActiveProgramName, and GetMachiningReport
How do I know what API version I can use?
I can think of several possibilities:
For your situation, the easiest solution I can think of is just to check the .chm documentation for your earliest THINC API version to see if it supports GetHourMeterCount, GetActiveProgramName, and GetMachiningReport. If not, continue checking later versions until you find one that does.
If you had a more complicated solution that used more THINC API functionality, a quick check would be:
1. Make sure the project builds cleanly.
2. Go into the project references and remove the reference to THINC API. Now you will have a compilation error everywhere that THINC API is referenced.
3. Add a reference to your earliest version of THINC API.
4. Rebuild. If there are still compiler errors, then your code references one or more THINC methods that do not exist in this version. Move on to the next version and rebuild.
5. Once your project builds cleanly again, you have found the version of THINC API to reference.
You could also code a tool that examines your code (via code analysis) or your compiled assembly (via reflection) to find all THINC API functionality, and then looks at multiple versions of THINC API to find the earliest that implements all of the functionality. That shouldn't be difficult, but still seems like overkill.
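For illustration, here is a minimal reflection-based sketch of that idea; the folder layout and file paths are assumptions, not part of the THINC API. Given the method names your application needs, it probes each THINC API assembly and reports the earliest version that exposes all of them.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

class ApiVersionProbe
{
    static void Main()
    {
        // The methods your application actually calls.
        string[] requiredMethods = { "GetHourMeterCount", "GetActiveProgramName", "GetMachiningReport" };

        // Assumed layout: one THINC API DLL per version folder, ordered oldest first.
        foreach (string dll in Directory.GetFiles(@"C:\ThincApiVersions", "*.dll", SearchOption.AllDirectories).OrderBy(f => f))
        {
            Assembly api = Assembly.LoadFile(dll);

            // Collect every public method name the assembly exposes.
            var available = new HashSet<string>(
                api.GetExportedTypes().SelectMany(t => t.GetMethods()).Select(m => m.Name));

            if (requiredMethods.All(available.Contains))
            {
                Console.WriteLine("Earliest compatible version: " + api.GetName().Version);
                return;
            }
        }
        Console.WriteLine("No version supports all required methods.");
    }
}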
For your purposes, it would also be convenient to have a table of all THINC API methods, vs. the versions in which those methods are supported. I don't have such a table, but someone conceivably might.
All of these methods just check whether certain functions exist in a given version of THINC API. They won't warn you about any breaking changes or different behavior between different versions. That requires knowing the API, checking the release notes, and/or testing.
I have a class library that references a specific third-party DLL.
This DLL changes versions very often, but is always backwards compatible within the same major version. My class library uses specific types from the DLL and executes various methods contained in these third-party DLLs.
I have to rethink the design of the app here, as I have one current issue and will have a bigger one once there are multiple major versions of the third-party DLL (there will be a limited set of major versions, three to be specific).
How can I make sure I can use a different version of the referenced assembly than the one originally used at compile time? My runtime now loads a DLL of a higher minor version, but it throws a 'Method not found' exception. I've removed the tag, and I've also tried executing Assembly.Load to simulate the behavior when specifying the newer DLL, but both yield the same result: method not found.
What is the best approach to support three major (!) versions of referenced dlls within a single DLL? Because of the nature of the usage of the class library, it's not possible to either allow the user to choose the correct version or build 3 different DLLs.
If there is a chance of your vendor breaking binary compatibility and you cannot influence what they are doing, there is no simple solution to this problem. Late binding would be one workaround to deal with this in C#, using reflection or dynamic, but it comes at the cost of run-time performance and greatly increased complexity in your code.
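To make the late-binding option concrete, here is a minimal sketch; the vendor assembly, type, and method names are hypothetical:

using System;
using System.Reflection;

class LateBoundCall
{
    static void Main()
    {
        // Load whichever major version of the vendor DLL is deployed alongside the app.
        Assembly vendor = Assembly.LoadFrom("VendorLib.dll");
        Type machineType = vendor.GetType("Vendor.Machine");
        object machine = Activator.CreateInstance(machineType);

        // dynamic defers member lookup to run time, so this compiles
        // without any compile-time reference to the vendor DLL.
        dynamic m = machine;
        Console.WriteLine(m.GetStatus());

        // The equivalent pure-reflection call:
        MethodInfo getStatus = machineType.GetMethod("GetStatus");
        Console.WriteLine(getStatus.Invoke(machine, null));
    }
}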
If you are set on building this integration layer anyway, you will have to code for all three versions to cover the known permutations between them, and Adapter might be a good design pattern to start looking into. You would make sure the differences and the entities from the external library do not spill out of the integration layer into your own business logic, so a fair amount of conversion logic would be required to isolate the fragile library from the rest of the code. Variations such as differences in types, methods, signatures, behavior, and exceptions thrown would have to be encapsulated and confined.
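A rough Adapter-style sketch of that isolation; all vendor types here are stand-in stubs, not the real library:

// Stubs standing in for two incompatible vendor versions (hypothetical).
namespace VendorV1 { public class Machine { public string ProgramName = "O1234"; } }
namespace VendorV2
{
    public class Status { public string ActiveProgram = "O1234"; }
    public class Controller { public Status QueryStatus() { return new Status(); } }
}

// The rest of the application depends only on this interface; one adapter
// per supported major version confines the vendor differences.
public interface IMachineInfo
{
    string GetActiveProgram();
}

public class MachineInfoV1 : IMachineInfo
{
    private readonly VendorV1.Machine _machine = new VendorV1.Machine();

    // v1 exposes the program name directly.
    public string GetActiveProgram() { return _machine.ProgramName; }
}

public class MachineInfoV2 : IMachineInfo
{
    private readonly VendorV2.Controller _controller = new VendorV2.Controller();

    // v2 wraps the same data in a status object.
    public string GetActiveProgram() { return _controller.QueryStatus().ActiveProgram; }
}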
You would also have to re-design your application or presentation layer whichever depends on this hostile library to deal with the differences accordingly and make it rely only on your own wrappers.
Your integration tests would have to be executed against all three versions of the vendor library constantly, ideally every time you check code in to the repository, so you have enough protection and agility to move forward. And since the vendor keeps working on the library's code, you have to allocate enough time for constant maintenance and support of the compatibility layer.
I'm developing a TypeScript code generator that will use custom attributes on C# classes to generate TypeScript definitions and code files.
I'm considering two options for TypeScript code generation / source file analysis:
Reflection on compiled assemblies
Roslyn CTP
The tool would use custom attributes on properties and methods to generate a TypeScript file. Right now I'm not planning to convert the C# method bodies to JavaScript, but this may be done in the future, so I am seriously considering Roslyn. However, to simply generate the outline of my TypeScript classes, I think I could use reflection and custom attributes.
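For instance, the outline generation I have in mind would look roughly like this (the [GenerateTypeScript] attribute is a hypothetical marker, not an existing API):

using System;
using System.Reflection;
using System.Text;

[AttributeUsage(AttributeTargets.Class)]
public class GenerateTypeScriptAttribute : Attribute { }

[GenerateTypeScript]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class TsOutlineGenerator
{
    static void Main()
    {
        StringBuilder sb = new StringBuilder();
        foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
        {
            if (!type.IsDefined(typeof(GenerateTypeScriptAttribute), false)) continue;

            sb.AppendLine("interface " + type.Name + " {");
            foreach (PropertyInfo p in type.GetProperties())
            {
                // Naive C#-to-TypeScript type mapping; extend as needed.
                string tsType = p.PropertyType == typeof(string) ? "string"
                              : p.PropertyType == typeof(int) ? "number"
                              : "any";
                sb.AppendLine("    " + char.ToLowerInvariant(p.Name[0]) + p.Name.Substring(1) + ": " + tsType + ";");
            }
            sb.AppendLine("}");
        }
        Console.WriteLine(sb.ToString());
    }
}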
I am wondering:
a) Does Roslyn provide functionality that is impossible with Reflection? My understanding is that I cannot get method bodies with Reflection.
b) Would the Roslyn CTP license prevent me from distributing the tool under an open source license? This is not clear to me after reading the license.
I just did something along these lines; it works great for creating your data model in TypeScript from your C# classes. I built it to generate a single AMD module with an interface which mimics the basic data of your models. It also supports generics, and creates a class with Knockout properties, including a toJS() method and an update(data: Interface) method to update your class.
The whole thing is just a single T4 template. If anyone finds this and is interested: http://spabuilder.wordpress.com/2014/07/31/generating-typescript-from-c/
Also honors [KeyAttribute] and [Timestamp] attributes for data models if you are using data annotations.
I've been messing around with generating JS, and I'm finding Reflection to be a better tool for this. I'm basically pointing my generator at the bin folder of the project the metadata comes from. There might be some difficulties with loading all the needed assemblies, and caveats around the versions of assemblies in the bin folder versus the versions of the same assemblies that your generator project references. But once you get past all of this, which I did with minimal difficulty, Reflection is a lot easier to use, and more reliable.
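A sketch of the kind of resolver that gets you over the assembly-loading hump; the paths and assembly names are placeholders:

using System;
using System.IO;
using System.Reflection;

class BinFolderLoader
{
    static void Main()
    {
        string binFolder = @"C:\path\to\target\project\bin";

        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is a full assembly name; we only need the simple name.
            string simpleName = new AssemblyName(args.Name).Name;
            string candidate = Path.Combine(binFolder, simpleName + ".dll");
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };

        // With the resolver in place, the target assembly's types can be reflected over.
        Assembly target = Assembly.LoadFrom(Path.Combine(binFolder, "MyProject.dll"));
        foreach (Type t in target.GetExportedTypes())
            Console.WriteLine(t.FullName);
    }
}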
With Roslyn, you are basically just parsing c#. Roslyn does this very well, but I'm hesitant to switch to it from Reflection. With reflection, you get metadata more reliably.
Let's say you want the Prefix property of a RoutePrefixAttribute that decorates a controller class. If you're parsing C#, you may have:
[RoutePrefix("stringliteral")] or [RoutePrefix(constantString)]. So you have to worry about whether it's a literal or a constant expression, then work out how to get the value of a constant expression, and worry about all the different ways parameters can be passed to an attribute (for example, will this break your code: [RoutePrefix(Prefix="literal")]?).
Once you're dealing with the actual runtime objects with reflection, everything is just easier. You have a nice RoutePrefixAttribute object, and you can go routePrefix.Prefix to get, reliably, the value of the prefix.
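For example, assuming the ASP.NET Web API version of RoutePrefixAttribute, the whole job collapses to a couple of lines:

using System;
using System.Web.Http;

[RoutePrefix("api/customers")]
public class CustomersController : ApiController { }

class Program
{
    static void Main()
    {
        // However the prefix was written in source (literal, constant,
        // named argument), the runtime attribute object has the final value.
        RoutePrefixAttribute attr = (RoutePrefixAttribute)typeof(CustomersController)
            .GetCustomAttributes(typeof(RoutePrefixAttribute), false)[0];
        Console.WriteLine(attr.Prefix);   // prints "api/customers"
    }
}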
This is just one example of how doing things with Reflection is easier. It's the difference between gathering metadata from a set of c# objects in a type-safe way, and scraping data from c# code, albeit with a really nice scraping tool.
EDIT: Since writing this answer, I've bit the bullet and switched to Roslyn. It's fairly powerful once you get the hang of it, and I did find one big advantage: you can get a reference to the workspace from a visual studio plugin, and easily do all kinds of stuff within the plugin.
Update, Nov 2018
The accepted answer is valid because it's dated April 2013.
Now Roslyn is distributed under the Apache License, Version 2.0.
excerpt from the license:
Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: ...
Roslyn has a number of NuGet packages.
Doesn't the license only forbid you personally from distributing the binaries? It doesn't forbid you from adding a dependency from your NuGet package to the Roslyn CTP NuGet package. You personally cannot deliver the bits, but you can have NuGet pull in Roslyn automatically.
So just avoid checking Roslyn source or binaries into your version control.
The Roslyn website now clearly states:
The current license is for evaluation and preview purposes only and does not allow redistribution of the Roslyn binaries. Sharing of sample projects built on the Roslyn APIs is permitted, but sample users must have either the Roslyn CTP or the Roslyn NuGet package installed in order to build and run.
I wouldn't use the current Roslyn CTP - simply because there will be new versions in 2014 and those will bring many breaking changes for sure. So you might end up with totally deprecated code.
(There recently was a blog post on this by a MS team member, but I'm afraid I currently don't have the link at hand.)
Edit: There's a good chance that Roslyn will then get a license that also permits commercial use...
Update - July 2015
Roslyn is still in CTP, but their FAQ on GitHub is much more to the point:
For sample code or learning purposes, the recommended way to redistribute the Roslyn DLLs is with the Roslyn NuGet package: Microsoft.CodeAnalysis (http://www.nuget.org/packages/Microsoft.CodeAnalysis).
So it appears that you still cannot redistribute the DLLs in finished products. The project will need to be open sourced, and the solution will need to reference the NuGet package.
Original Answer (November 2012)
I don't believe you can distribute under open source.
6. DISTRIBUTABLE CODE. The software contains code that you are permitted to distribute in programs you develop if you comply with the terms below.
6.c Distribution Restrictions. You may not modify or distribute the source code of any Distributable Code so that any part of it becomes subject to an Excluded License. An Excluded License is one that requires, as a condition of use, modification or distribution, that:
- the code be disclosed or distributed in source code form; or
- others have the right to modify it.
At first it sounds like you could do it if you just include the Roslyn binaries, but the Distributable Code definition specifically says "The software contains code..." and I believe that is what everything after is referring to.
To your other question: Roslyn isn't fully finished and is still beta. I don't know if it is currently in a state that can handle your needs; that's something you may just want to spend a couple of hours tinkering with. I wouldn't expect it to have more functionality than what .NET currently allows. You can see what they recently added in September here, and what is currently not implemented here.
In my experience, using T4 generation based on reflection, as TypeLite does, is somewhat simpler but has some drawbacks: once the project depends on the classes that have been generated, regenerating them with a breaking change (such as a renamed class) will lead to a non-compiling project, so running the template again will output a blank file and the user will have a hard time making everything compile again.
So, having the same need, I started experimenting with Roslyn, and it seems very promising, but I have many doubts about how to use it properly...
You can take a look at what I'm doing and maybe help me here: https://github.com/TrabacchinLuigi/RoslynExporter
I have an assembly that provides an API and is used by some other assemblies. I need to verify that a newer version of API dll is still compatible with the older assemblies that were using the older version of API.
I've found a couple of questions that ask the same, but there are no answers that resolve my problem:
Tool to verify compatibility of a public APIs
Tool for backwards compatibility for the C#/.NET API
The suggested tools can only compare two assemblies and say whether there are possible breaking changes in the API, but not whether the newest API really breaks an older assembly that uses it.
I'd like to find a tool or write a test that will be able to check whether each of the older dlls can work with my new API dll.
As for the changes in the API, most likely I will only extend it, but even so, extensions can still break the code in older assemblies. Some examples of such changes can be found here:
A definite guide to API-breaking changes in .NET
.NET: with respect to AssemblyVersion, what defines binary compatibility?
For now the only solution I see is to compile the source code of the older assemblies against the newest API, but I would like to do it with just the assemblies and run the check as part of my unit tests. Is there any better way to handle that?
edit:
I'm looking for a tool that will be able to automate the process of verifying backward compatibility between .NET assemblies (command line, or with some API, too).
What you want is to do a diff and generate the list of breaking changes. Then you want to check whether any of your assemblies uses any of the broken APIs. You can do both with the ApiChange tool: it does the diff and finds any affected users.
To make it more concrete: if you have removed a method from an interface, then you need to find all implementers and users of this method, that is, every class that implements the interface method or calls it.
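A simplified reflection-based illustration of the "find all implementers" step (ApiChange does this, and more, at the IL level; the interface name and folders are placeholders):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

class ImplementerFinder
{
    static void Main()
    {
        // The interface that lost a method, loaded from the old API assembly.
        Assembly oldApi = Assembly.LoadFrom(@"C:\oldApi\MyApi.dll");
        Type iface = oldApi.GetType("MyApi.IMachineService");

        foreach (string dll in Directory.GetFiles(@"C:\oldAssemblies", "*.dll"))
        {
            Assembly asm = Assembly.LoadFrom(dll);
            var implementers = asm.GetExportedTypes()
                                  .Where(t => iface.IsAssignableFrom(t) && !t.IsInterface);
            foreach (Type t in implementers)
                Console.WriteLine(dll + ": " + t.FullName);
        }
    }
}

Finding callers, as opposed to implementers, is beyond what plain reflection can see; that requires IL-level analysis, which is what ApiChange provides.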
ApiChange can search for implementers and users of specific methods on the command line with the commands -whoimplementsinterface and -whousesmethod. It is not automated at the command line, but you can use ApiChange.Api.dll directly to automate these queries.
Edit1:
I just forgot: the ApiChange tool actually already has the functionality you are interested in. It is the option
-ShowrebuildTargets -new -old [-old2 ] -searchin
We did use it in our department with good results. The only gotcha is the XML IntelliSense files: if another target does not use the removed method but references it inside the XML doc comments, the compiler will emit a warning that a non-existent method was referenced. This is quite hard to catch and would involve parsing the IntelliSense documentation files as well, but it is quite an edge case.
I've spent the day looking around for an answer to this. It seems like the tools referenced in the related (unhelpfully closed) questions are now dead or as good as dead. But I've just taken a look at Telerik's assembly diff tool, JustAssembly, and it looks much better than rolling your own, which, if you look at their library, seems to be a whole heap of work and likely to go wrong.
They have a UI which isn't of much help from the point of view of integrating into your CI build (it is pretty basic), but you can build the library from source, which I've just done, and the library looks like it has everything you need to get up and running pretty quickly.
I've come across a weird discrepancy between the BigInteger types used by .NET 4.0 and Silverlight 4.0: in .NET, BigInteger has Parse and TryParse methods, whereas the Silverlight version has neither.
If you look at the .NET version of System.Numerics in Reflector, you also see that when you disassemble the code, every single method is just empty, and it lacks the BigIntegerBuilder and friends of the Silverlight version:
public static BigInteger Parse(string value)
{
}
What is going on here?
I don't think that System.Numerics.dll is part of the Silverlight 4.0 distribution. But that's not the real point. What you are looking at is a special version of the reference assembly. You'll find one in c:\program files\reference assemblies\microsoft\framework\.netframework\v4.0 for example.
The assemblies there have all their IL stripped from them. Their metadata is otherwise identical to the "real" reference assemblies in c:\windows\microsoft.net\framework\v4.0.30319
I have no clue what the function of these stripped assemblies is supposed to be. I can only imagine they are meant to speed up compilation, since the compiler only needs the metadata, but that's a bit of a long shot. I can also imagine it has something to do with the mysterious new [TargetedPatchingOptOut] attribute, a very long shot as well, since the mechanism behind it isn't explained anywhere that I could find. I had a conversation with JaredPar about this; he was going to ask inside MSFT about it. Haven't heard back.
Well, no real answer, but it does explain what you saw.
Following up on this, I do have one more theory, inspired when I noted that the folder is named "v4.0". Note that the build number is not part of it, as it is in c:\windows\microsoft.net. That ought to have interesting effects when they release new builds, similar to the updates to the base assemblies of .NET 2.0 when the service packs were released.
Infamously, one thing that went badly wrong is that these updates had changes in the core classes without a change to the [AssemblyVersion]. The most visible one was the WaitHandle class: it acquired the WaitOne(int) overload. Very useful, because nobody could ever figure out what to pass for the exitContext argument. Using this new overload was easy to do, and targeting .NET 2.0 doesn't prevent it, but all hell breaks loose if the target machine has the original .NET 2.0 RTM release installed without the service packs.
My guess: these reference assemblies are the core assemblies for any current and future version of .NET 4.0. Their public interface is frozen. And prevents you from accidentally using a public method that gets added in a later build. It follows that IL isn't useful because that's going to change.
Well that clearly isn't the code of BigInteger.Parse - it wouldn't compile, given that it doesn't return anything.
This isn't the only thing "missing" from Silverlight, of course - there are plenty of bits of the .NET API which aren't in Silverlight. It's a cut-down version of the framework. I'm afraid that's just the way of things.
Integers being what they are, it probably wouldn't be too hard to implement it yourself in a naive fashion... if you're okay with it being pretty inefficient, I suspect it wouldn't be very much code at all.
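A naive Parse along those lines might look like this, assuming the Silverlight BigInteger still provides the arithmetic operators and only Parse/TryParse are missing (quadratic overall, but short):

using System.Numerics;

public static class BigIntegerUtil
{
    public static BigInteger NaiveParse(string value)
    {
        if (string.IsNullOrEmpty(value))
            throw new System.FormatException("Empty input.");

        bool negative = value[0] == '-';
        BigInteger result = BigInteger.Zero;

        for (int i = negative ? 1 : 0; i < value.Length; i++)
        {
            char c = value[i];
            if (c < '0' || c > '9')
                throw new System.FormatException("Invalid digit: " + c);
            result = result * 10 + (c - '0');   // multiply-and-add, one digit at a time
        }
        return negative ? -result : result;
    }
}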
My source code needs to support both .NET versions 1.1 and 2.0... how do I test for the different versions, and what is the best way to deal with this situation?
I'm wondering if I should have the two sections of code inline, in separate classes, methods etc. What do you think?
There are a lot of different options here. Where I work we use #if pragmas but it could also be done with separate assemblies for the separate versions.
Ideally you would at least keep the version-dependent code in separate partial class files and make the correct version available at compile time. I would enforce this if I could go back in time; our code base now has a whole lot of #if pragmas and sometimes it can be hard to manage. The worst part of the whole #if pragma thing is that Visual Studio just ignores anything that won't compile with the current defines, so it's very easy to check in breaking changes.
NUnit supports both 1.1 and 2.0 and so is a good choice for a test framework. It's not too hard to use something like NAnt to make separate 1.1 and 2.0 builds and then automatically run the NUnit tests.
If you want to do something like this you will need to use preprocessor commands and conditional compilation symbols.
I would use symbols that clearly indicate the version of .NET you are targeting (say NET11 and NET20) and then wrap the relevant code like this:
#if NET11
// .NET 1.1 code
#elif NET20
// .NET 2.0 code
#endif
The reason for doing it this way rather than a simple #if/#else is that it adds an extra layer of protection in case someone forgets to define the symbol.
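If you want that protection to be explicit rather than implicit, the #error directive fails the build loudly when neither symbol is defined:

#if NET11
// .NET 1.1 code
#elif NET20
// .NET 2.0 code
#else
#error Define NET11 or NET20 as a conditional compilation symbol
#endif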
That being said, you should really drill down to the heart of the reason why you want/need to do this.
I would ask the question of WHY you have to maintain two code bases; I would pick one and go with it if there is any chance of that.
Trying to keep two code bases in sync, given the number and types of changes involved, would be very complex, as would a build process that builds for either version.
We had this problem and we ended up with a "compatibility layer" where we implemented a single set of interfaces and utility code for .NET v1.1 and v2.0.
Then our installer laid down the right code for the right version. We used NSIS (free!), and they have functions you can call to determine the .NET version.
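As a rough sketch of that kind of layer (all names here are hypothetical), the application codes against one interface, and the installer deploys whichever implementation matches the installed runtime:

public interface ICompressionUtil
{
    byte[] Compress(byte[] data);
}

// Deployed on .NET 2.0 machines; free to use 2.0-only APIs.
public class CompressionUtil20 : ICompressionUtil
{
    public byte[] Compress(byte[] data)
    {
        System.IO.MemoryStream output = new System.IO.MemoryStream();
        // GZipStream shipped with .NET 2.0.
        using (System.IO.Compression.GZipStream gzip = new System.IO.Compression.GZipStream(
                   output, System.IO.Compression.CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }
}

// Deployed on .NET 1.1 machines; limited to 1.1 APIs, so it simply
// stores the data uncompressed.
public class CompressionUtil11 : ICompressionUtil
{
    public byte[] Compress(byte[] data)
    {
        return data;
    }
}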