My understanding is that each Establish should only be executed once, but the code below shows it executing multiple times. We're nesting the classes to provide some grouping while keeping all the unit tests for a Subject in one file. This seems like a bug.
We're using the machine.specifications.runner.resharper ReSharper extension and MSpec 0.9.1.
[Subject(typeof(string))]
internal class EstablishRunTwice {
    Establish sharedContext = () => Console.WriteLine("Shared context");

    internal class ScenarioA : EstablishRunTwice {
        Establish scenarioAContext = () => Console.WriteLine("ScenarioA context");

        internal class ScenarioAVariation1 : ScenarioA {
            Because of = () => Console.WriteLine("ScenarioAVariation1 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation1 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation1 It2");
        }

        internal class ScenarioAVariation2 : ScenarioA {
            Because of = () => Console.WriteLine("ScenarioAVariation2 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation2 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation2 It2");
        }
    }

    internal class ScenarioB : EstablishRunTwice {
        Establish context = () => Console.WriteLine("ScenarioB context");
        Because of = () => Console.WriteLine("ScenarioB Because");
        It it1 = () => Console.WriteLine("ScenarioB It1");
        It it2 = () => Console.WriteLine("ScenarioB It2");
    }
}
The result is this for ScenarioAVariation1:
Shared context
Shared context
ScenarioA context
Shared context
Shared context
ScenarioA context
ScenarioAVariation1 Because
ScenarioAVariation1 It1
ScenarioAVariation1 It2
When we were writing our own custom context/specification framework on top of NUnit, we got around issues with the NUnit runner by making sure all base classes were abstract (in this case, EstablishRunTwice and ScenarioA would be abstract), but MSpec throws an error when we attempt to do that.
It is because you are both nesting and inheriting the test classes. Normally you might use nested classes in C# purely for organizational purposes, but it also has an effect on execution in MSpec. This might be unexpected, but does fit with its declarative style. In fact, there isn't normally a need to use inheritance at all for MSpec unless you're reusing functionality across different files.
Just remove the inheritance in your example and keep the nesting and you'll see the output as:
Shared context
ScenarioA context
ScenarioAVariation1 Because
ScenarioAVariation1 It1
ScenarioAVariation1 It2
...
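To make that concrete, here is a sketch of the same classes with the inheritance removed and only the nesting kept (trimmed to one variation):
[Subject(typeof(string))]
internal class EstablishRunTwice {
    Establish sharedContext = () => Console.WriteLine("Shared context");

    internal class ScenarioA {
        Establish scenarioAContext = () => Console.WriteLine("ScenarioA context");

        internal class ScenarioAVariation1 {
            Because of = () => Console.WriteLine("ScenarioAVariation1 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation1 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation1 It2");
        }
    }
}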
This makes it easy to put common setup in the Establish of the outer class and override specific parts in inner classes. On a personal note, before I realized it worked this way, I felt like I was fighting MSpec for test cases that relied on different setups (as opposed to ones where different values are passed directly to the Subject in the Because).
Say you had a weather sensor thingy, you might structure it this way:
[Subject(typeof(WeatherSensor))]
class when_reading_the_sensor : WithSubject<WeatherSensor> {
    Establish context = () => { /* common setup */ };

    class with_sunny_conditions {
        Establish context = () => { /* set up sunny conditions */ };
        Because of = () => Subject.Read();
        It should_say_it_is_sunny = () => ...;
        It should_return_correct_temps = () => ...;
    }

    class with_rainy_conditions {
        ...
    }
}
That also reads well in the test results. Given the second test fails, it might show like this in the test tree:
(X) WeatherSensor, when reading the sensor with sunny conditions
(✔) should say it is sunny
(X) should return correct temps
If, as in that example, all the different conditions come purely from setup of dependencies injected into the Subject, you may even want to move the Because into the outer class. Then you can just have an Establish and some Its in the inner classes, making each test case very concise. The outer Because will still be run for each inner class after all the needed Establishes and before the Its.
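A sketch of that arrangement, reusing the hypothetical WeatherSensor example from above; only the inner Establishes and Its vary per case:
[Subject(typeof(WeatherSensor))]
class when_reading_the_sensor : WithSubject<WeatherSensor> {
    Establish context = () => { /* common setup of injected dependencies */ };
    Because of = () => Subject.Read(); // shared act, runs after all Establishes and before the Its

    class with_sunny_conditions {
        Establish context = () => { /* configure dependencies to report sun */ };
        It should_say_it_is_sunny = () => ...;
    }

    class with_rainy_conditions {
        Establish context = () => { /* configure dependencies to report rain */ };
        It should_say_it_is_rainy = () => ...;
    }
}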
That's a really confusing way to structure things - clever, but perhaps a bit too clever. I find it hard to read and understand the intent. In fact, I can't even begin to imagine what the compiler would do with that inheritance structure and therefore I can't understand the intent. I think maybe you're over-thinking this.
So let me see, ScenarioA is not only nested in EstablishRunTwice, but also inherits from it. Does that mean it inherits nested copies of itself all the way down to infinity? And then, ScenarioB inherits from all of that! My head has just exploded. I'm not surprised you get confusing results. What does that nesting really give you? Does it make the code more readable or easier to maintain? I'm not convinced it does.
Use the KISS principle. The normal way of doing things is to put each context in its own class, no nesting; just use files to group related tests, and you can also use the Concern argument in the [Subject] attribute as another way of grouping. You can inherit from other contexts if that makes sense, but after working with MSpec for a couple of years, I'm slowly coming to the conclusion that too much inheritance can harm readability and make the test code more viscous, so use inheritance wisely.
Update: Having reflected on what I think you are trying to achieve for a bit longer, I suspect you are trying to re-invent behaviours. This is perhaps a poorly documented and understood feature of MSpec, which lets you define a set of common behaviours that can later be applied in multiple test contexts. Does that sound like what you are trying to achieve? Here's an example of behaviours:
[Behaviors]
internal class DenebRightAscension
{
    It should_have_20_hours_ = () => UUT.Degrees.ShouldEqual(20u);
    It should_have_41_minutes = () => UUT.Minutes.ShouldEqual(41u);
    It should_have_59_seconds = () => UUT.Seconds.ShouldEqual(59u);

    protected static Bearing UUT;
}

[Subject(typeof(HourAngle), "sexagesimal")]
internal class when_converting_hour_angle_to_sexagesimal
{
    Because of = () =>
    {
        RaDeneb = 20.6999491773451;
        UUT = new Bearing(RaDeneb);
    };

    Behaves_like<DenebRightAscension> deneb;

    protected static Bearing UUT;
    static double RaDeneb;
}

[Subject(typeof(Bearing), "sexagesimal")]
internal class when_converting_to_sexagesimal
{
    Because of = () =>
    {
        RaDeneb = 20.6999491773451;
        UUT = new Bearing(RaDeneb);
    };

    Behaves_like<DenebRightAscension> deneb;

    protected static Bearing UUT;
    static double RaDeneb;
}
Note that in behaviours fields are matched by name, not by any kind of inheritance. So the behaviour magically knows what I mean by 'UUT' even though the classes are in no way related.
In the context of a .NET 6 incremental source generator, what does the IncrementalValuesProvider.WithTrackingName(string name) method do?
In addition, how and when is it intended to be used?
[Generator]
public class MyGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        var incrementalValueProvider = context.SyntaxProvider
            .CreateSyntaxProvider(predicate: (n, c) => { ... }, transform: (s, c) => { ... });

        incrementalValueProvider = incrementalValueProvider.WithTrackingName("Some tracking name");
    }
}
With most things when it comes to source generators, I have been able to google and use a bit of trial and error to figure out how things work.
However, with this particular method those approaches have not been helpful. In addition, it seems like Microsoft's documentation is pretty much out of date, since
https://github.com/dotnet/roslyn/blob/main/docs/features/incremental-generators.md does not even mention the method.
public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
    var uncachedRecordIDs = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
    if (!uncachedRecordIDs.Any()) return;

    using (var db = new DBContext())
    {
        .....
    }
}
The above method is repeated regularly throughout the project (except with different generics passed in). I'm looking to avoid repeating the if (!uncachedRecordIDs.Any()) return; line.
In short, is it possible to make LocalCacheController.GetUncachedRecordIDs return from the CacheUncachedMessageIDs method?
This would guarantee a new data context is not created unless it needs to be (and stops us accidentally forgetting to add the return line in the parent method).
It is not possible for a nested method call to return from its parent method.
You could throw an exception inside GetUncachedRecordIDs; that would do the trick, but exceptions are not meant to be used for control flow, so it creates confusion. Moreover, it is very slow.
Another mechanic I would not suggest is goto magic. That also generates confusion, because goto allows unexpected jumps in program execution flow.
Your best bet is to return a result object with a simple bool HasUncachedRecordIDs field and then check it. If there is nothing left to cache, return. This also removes the need to call a method (Any() in this case).
var uncachedRecordIDsResult = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
if (!uncachedRecordIDsResult.HasUncachedRecordIDs) return;
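A minimal sketch of what such a result object might look like (the type name and shape are illustrative, not from your code):
public class UncachedRecordIDsResult
{
    public UncachedRecordIDsResult(List<int> uncachedRecordIDs)
    {
        UncachedRecordIDs = uncachedRecordIDs ?? new List<int>();
    }

    public List<int> UncachedRecordIDs { get; private set; }

    // True when at least one ID still needs to be cached.
    public bool HasUncachedRecordIDs
    {
        get { return UncachedRecordIDs.Count > 0; }
    }
}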
My reasoning for the lack of this feature in the language is that calling GetUncachedRecordIDs in basically any function would unexpectedly end that parent function, without warning. It would also intertwine the two functions closely, and best programming practice favors loose coupling of classes and methods.
You could pass an Action to your GetUncachedRecordIDs method which you only invoke if you need to. Rough sketch of the idea:
// LocalCacheController
void GetUncachedRecordIDs<T>(List<int> messageIDs, Action<List<int>> action)
{
    // ...
    if (!cached) {
        action(recordIds);
    }
}

// ...

public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
    LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs, uncachedRecordIDs => {
        using (var db = new DBContext())
        {
            // ...
        }
    });
}
Coming from using Moq, I'm used to being able to Setup mocks as Verifiable. As you know, this is handy when you want to ensure your code under test actually called a method on a dependency.
e.g. in Moq:
// Set up the Moq mock to be verified
mockDependency.Setup(x => x.SomethingImportantToKnow()).Verifiable("Darn, this did not get called.");
target = new ClassUnderTest(mockDependency);
// Act on the object under test, using the mock dependency
target.DoThingsThatShouldUseTheDependency();
// Verify the mock was called.
mockDependency.Verify();
I've been using VS2012's "Fakes Framework" (for lack of knowing a better name for it), which is quite slick and I'm starting to prefer it to Moq, as it seems a bit more expressive and makes Shims easy. However, I can't figure out how to reproduce behavior similar to Moq's Verifiable/Verify implementation. I found the InstanceObserver property on the Stubs, which sounds like it might be what I want, but there's no documentation as of 9/4/12, and I'm not clear how to use it, if it's even the right thing.
Can anyone point me in the right direction on doing something like Moq Verifiable/Verify with VS2012's Fakes?
-- 9/5/12 Edit --
I realized a solution to the problem, but I'd still like to know if there's a built-in way to do it with VS2012 Fakes. I'll leave this open a little while for someone to claim if they can. Here's the basic idea I have (apologies if it doesn't compile).
[TestClass]
public class ClassUnderTestTests
{
    private class Arrangements
    {
        public ClassUnderTest Target;
        public bool SomethingImportantToKnowWasCalled = false; // Create a flag!

        public Arrangements()
        {
            var mockDependency = new Fakes.StubIDependency // Fakes sweetness.
            {
                SomethingImportantToKnow = () => { SomethingImportantToKnowWasCalled = true; } // Set the flag!
            };

            Target = new ClassUnderTest(mockDependency);
        }
    }

    [TestMethod]
    public void DoThingThatShouldUseTheDependency_Condition_Result()
    {
        // arrange
        var arrangements = new Arrangements();

        // act
        arrangements.Target.DoThingThatShouldUseTheDependency();

        // assert
        Assert.IsTrue(arrangements.SomethingImportantToKnowWasCalled); // Voila!
    }
}
-- 9/5/12 End edit --
Since I've heard no better solutions, I'm calling the edits from 9/5/12 the best approach for now.
EDIT
Found the magic article that describes best practices. http://www.peterprovost.org/blog/2012/11/29/visual-studio-2012-fakes-part-3/
Although it might make sense in complex scenarios, you don't have to use a separate (Arrangements) class to store information about methods being called. Here is a simpler way of verifying that a method was called with Fakes, which stores the information in a local variable instead of a field on a separate class. Like your example, it assumes that ClassUnderTest calls a method of the IDependency interface.
[TestMethod]
public void DoThingThatShouldUseTheDependency_Condition_Result()
{
    // arrange
    bool dependencyCalled = false;
    var dependency = new Fakes.StubIDependency()
    {
        DoStuff = () => dependencyCalled = true
    };
    var target = new ClassUnderTest(dependency);

    // act
    target.DoStuff();

    // assert
    Assert.IsTrue(dependencyCalled);
}
Long story short
Say I have the following code:
// a class like this
class FirstObject {
    public Object OneProperty {
        get;
        set;
    }
    // (other properties)

    public Object OneMethod() {
        // logic
    }
}

// and another class with property and method names
// which are similar or exactly the same if needed
class SecondObject {
    public Object OneProperty {
        get;
        set;
    }
    // (other properties)

    public Object OneMethod(String canHaveParameters) {
        // logic
    }
}

// the consuming code would be something like this
public static void Main(string[] args) {
    FirstObject myObject = new FirstObject();

    // Use its properties and methods
    Console.WriteLine("FirstObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("FirstObject.OneMethod returned value: " + myObject.OneMethod());

    // Now, for some reason, continue to use the
    // same object but with another type
    // -----> CHANGE FirstObject to SecondObject HERE <-----

    // Continue to use properties and methods but
    // this time calls are made to SecondObject's properties and methods
    Console.WriteLine("SecondObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("SecondObject.OneMethod returned value: " + myObject.OneMethod(oneParameter));
}
Is it possible to change FirstObject's type to SecondObject and continue to use its properties and methods?
I have total control over FirstObject, but SecondObject is sealed and totally out of my scope!
Could I achieve this through reflection? How? How much work do you think it would take? Obviously both classes can be a LOT more complex than the example above.
Both classes can also be generic, like FirstObject<T> and SecondObject<T>, which intimidates me even more about using reflection for such a task!
Problem in reality
I've tried to state my problem as simply as possible to try to extract some knowledge to solve it but, looking at the answers, it seems obvious that, to help me, you need to understand my real problem, because changing the object type is only the tip of the iceberg.
I'm developing a Workflow Definition API. The main objective is to have an API that is reusable on top of any engine I might want to use (the CLR through WF4, NetBPM, etc.).
Right now I'm writing the middle layer to translate that API to WF4 so workflows run on the CLR.
What I've already accomplished
The API concept, at this stage, is somewhat similar to WF4, with ActivityStates that have In/Out Arguments and Data (Variables) flowing through the ActivityStates via their arguments.
Very simplified API in pseudo-code:
class Argument {
    object Value;
}

class Data {
    String Name;
    Type ValueType;
    object Value;
}

class ActivityState {
    String DescriptiveName;
}

class MyIf : ActivityState {
    InArgument Condition;
    ActivityState Then;
    ActivityState Else;
}

class MySequence : ActivityState {
    Collection<Data> Data;
    Collection<ActivityState> Activities;
}
My initial approach to translating this to WF4 was to walk the ActivityState graph and do a more or less direct assignment of properties, using reflection where needed.
Again simplified pseudo-code, something like:
new Activities.If() {
    DisplayName = myIf.DescriptiveName,
    Condition = TranslateArgumentTo_WF4_Argument(myIf.Condition),
    Then = TranslateActivityStateTo_WF4_Activity(myIf.Then),
    Else = TranslateActivityStateTo_WF4_Activity(myIf.Else)
}

new Activities.Sequence() {
    DisplayName = mySequence.DescriptiveName,
    Variables = TranslateDataTo_WF4_Variables(mySequence.Data),
    Activities = TranslateActivitiesStatesTo_WF4_Activities(mySequence.Activities)
}
At the end of the translation I would have an executable System.Activities.Activity object. I've already accomplished this easily.
The big issue
A big issue with this approach appeared when I began translating Data objects to System.Activities.Variable. The problem is that WF4 separates the workflow definition from the execution context. Because of that, both Arguments and Variables are LocationReferences that must be accessed through the var.Get(context) function so the engine knows where they are at runtime.
Something like this is easily accomplished using WF4:
Variable<string> var1=new Variable<string>("varname1", "string value");
Variable<int> var2=new Variable<int>("varname2", 123);
return new Sequence {
Name="Sequence Activity",
Variables=new Collection<Variable> { var1, var2 },
Activities=new Collection<Activity>(){
new Write() {
Name="WriteActivity1",
Text=new InArgument<string>(
context =>
String.Format("String value: {0}", var1.Get(context)))
},
new Write() {
//Name = "WriteActivity2",
Text=new InArgument<string>(
context =>
String.Format("Int value: {0}", var2.Get(context)))
}
}
};
but if I want to represent the same workflow through my API:
Data<string> var1 = new Data<string>("varname1", "string value");
Data<int> var2 = new Data<int>("varname2", 123);

return new Sequence() {
    DescriptiveName = "Sequence Activity",
    Data = new Collection<Data> { var1, var2 },
    Activities = new Collection<ActivityState>() {
        new Write() {
            DescriptiveName = "WriteActivity1",
            Text = "String value: " + var1 // <-- BIG PROBLEM !!
        },
        new Write() {
            DescriptiveName = "WriteActivity2",
            Text = "Int value: " + Convert.ToInt32(var2) // ANOTHER BIG PROBLEM !!
        }
    }
};
I end up with a BIG PROBLEM when using Data objects as Variables. I really don't know how to let a developer using my API use Data objects wherever they want (just like in WF4) and later translate that Data to System.Activities.Variable.
Solutions come to mind
If you now understand my problem, FirstObject and SecondObject are Data and System.Activities.Variable respectively. Like I said, translating Data to Variable is just the tip of the iceberg, because I might use Data.Get() in my code and I don't know how to translate it to Variable.Get(context) while doing the translation.
Solutions that I've tried or thought of:
Solution 1
Instead of a direct translation of properties, I would develop NativeActivities for each flow-control activity (If, Sequence, Switch, ...) and make use of the CacheMetadata() function to specify Arguments and Variables. The problem remains because they are both accessed through var.Get(context).
Solution 2
Give my Data class its own Get() function. It would only be an abstract method, without any logic inside, that would somehow translate to the Get() function of System.Activities.Variable. Is that even possible in C#? I guess not! Another problem is that Variable.Get() takes a parameter.
Solution 3
The worst solution that I thought of was CIL manipulation: try to replace the code where Data/Argument is used with Variable/Argument code. This smells like a nightmare to me. I know next to nothing about System.Reflection.Emit, and even if I learn it, my guess is that it would take ages ... and it might not even be possible.
Sorry if I ended up introducing a bigger problem, but I'm really stuck here and desperately need a tip or a path to follow.
This is called "duck typing" (if it looks like a duck and quacks like a duck you can call methods on it as though it really were a duck). Declare myObject as dynamic instead of as a specific type and you should then be good to go.
EDIT: to be clear, this requires .NET 4.0
dynamic myObject = new FirstObject();
// do stuff
myObject = new SecondObject();
// do stuff again
Reflection isn't necessarily the right tool for this. If SecondObject is out of your control, your best option is likely to just make an extension method that instantiates a new copy of it and copies across the data, property by property.
You could use reflection for the copying process, and work that way, but that is really a separate issue.
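A rough sketch of that suggestion, assuming the property names match between the two types and that SecondObject has an accessible constructor (all of this is illustrative, not tested against your real classes):
public static class FirstObjectExtensions
{
    public static SecondObject ToSecondObject(this FirstObject source)
    {
        var target = new SecondObject();
        foreach (var sourceProperty in typeof(FirstObject).GetProperties())
        {
            // Copy each readable property onto a writable property of the same name, if one exists.
            var targetProperty = typeof(SecondObject).GetProperty(sourceProperty.Name);
            if (targetProperty != null && sourceProperty.CanRead && targetProperty.CanWrite)
                targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
        }
        return target;
    }
}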
I think I understand the concept of a delegate in C# as a pointer to a method, but I can't find any good examples of where it would be a good idea to use them. What are some examples that are either significantly more elegant/better with delegates or can't be solved using other methods?
The .NET 1.0 delegates:
this.myButton.Click += new EventHandler(this.MyMethod);
The .NET 2.0 delegates:
this.myOtherButton.Click += delegate {
    var res = PerformSomeAction();
    if (res > 5)
        PerformSomeOtherAction();
};
They seem pretty useful. How about:
new Thread(new ThreadStart(delegate {
    // do some worker-thread processing
})).Start();
What exactly do you mean by delegates? Here are two ways in which they can be used:
void Foo(Func<int, string> f) {
    // do stuff
    string s = f(42);
    // do more stuff
}
and
void Bar() {
    Func<int, string> f = delegate(int i) { return i.ToString(); };
    // do stuff
    string s = f(42);
    // do more stuff
}
The point of the second one is that you can declare new functions on the fly, as delegates. This can largely be replaced by lambda expressions, and is useful any time you have a small piece of logic you want to 1) pass to another function, or 2) just execute repeatedly. LINQ is a good example. Every LINQ function takes a lambda expression as its argument, specifying the behavior. For example, if you have a List<int> l, then l.Select(x => x.ToString()) will call ToString() on every element in the list. And the lambda expression I wrote is implemented as a delegate.
The first case shows how Select might be implemented. You take a delegate as your argument, and then you call it when needed. This allows the caller to customize the behavior of the function. Taking Select() as an example again, the function itself guarantees that the delegate you pass to it will be called on every element in the list, and the output of each will be returned. What that delegate actually does is up to you. That makes it an amazingly flexible and general function.
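To make the first case concrete, here is a rough sketch of how a Select-like method could be built on a delegate (a simplified stand-in, not the actual implementation of Enumerable.Select):
static IEnumerable<TResult> MySelect<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector)
{
    foreach (var item in source)
        yield return selector(item); // the caller-supplied delegate decides what each element becomes
}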
Of course, they're also used for subscribing to events. In a nutshell, delegates allow you to reference functions: use them as arguments in function calls, assign them to variables, and do whatever else you like with them.
I primarily use them for easy async programming. Kicking off a method using a delegate's Begin... method is really easy if you want to fire and forget.
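For example, a fire-and-forget sketch using a delegate's Begin... method (this relies on the classic .NET Framework async pattern; DoSomeLongRunningWork is a placeholder):
Action work = () => DoSomeLongRunningWork();
work.BeginInvoke(null, null); // returns immediately; the work runs on a thread-pool thread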
A delegate can also be used like an interface when interfaces are not available, e.g. when calling methods on COM classes, external .NET classes, etc.
Events are the most obvious example. Compare how the observer pattern is implemented in Java (interfaces) and C# (delegates).
Also, a whole lot of the new C# 3 features (for example lambda expressions) are based on delegates and simplify their usage even further.
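For instance, a minimal sketch of the C# side of that comparison (Stock and PriceChanged are made-up names): the "observer list" is just a delegate behind an event.
public class Stock
{
    // The event is backed by a delegate; subscribers are simply methods added to it.
    public event EventHandler PriceChanged;

    protected void OnPriceChanged()
    {
        EventHandler handler = PriceChanged;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// A subscriber attaches a handler instead of implementing an observer interface:
// stock.PriceChanged += (sender, e) => Console.WriteLine("Price changed");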
For example, in multithreaded apps: if you want several threads to use some control, you should use delegates. Sorry, the code is in Visual Basic.
First you declare a delegate
Private Delegate Sub ButtonInvoke(ByVal enabled As Boolean)
Then write a function to enable/disable the button from several threads:
Private Sub enable_button(ByVal enabled As Boolean)
    If Me.ButtonConnect.InvokeRequired Then
        Dim del As New ButtonInvoke(AddressOf enable_button)
        Me.ButtonConnect.Invoke(del, New Object() {enabled})
    Else
        ButtonConnect.Enabled = enabled
    End If
End Sub
I use them all the time with LINQ, especially with lambda expressions, to provide a function that evaluates a condition or returns a selection. I also use them to provide a function that compares two items for sorting. This latter is important for generic collections, where the default sorting may or may not be appropriate.
var query = collection.Where( c => c.Kind == ChosenKind )
                      .Select( c => new { Name = c.Name, Value = c.Value } )
                      .OrderBy( c => c.Name );
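Note that OrderBy takes a key selector rather than a two-argument comparison. For the "compare two items" case mentioned above, here is a small sketch using List<T>.Sort with a Comparison<T> delegate (Person is a hypothetical type with a Name property):
var people = new List<Person>();
// ... populate the list ...
people.Sort((a, b) => a.Name.CompareTo(b.Name)); // the Comparison<T> delegate decides the ordering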
One of the benefits of delegates is in asynchronous execution.
When you call a method asynchronously, you do not know when it will finish executing, so you need to pass that method a delegate pointing to another method, which will be called when the first method has completed execution. In the second method you can write code that informs you that execution has completed.
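A rough sketch of that callback idea using a delegate's compiler-generated BeginInvoke/EndInvoke methods (the classic .NET Framework asynchronous pattern; the slow work here is a placeholder):
Func<int, int> slowCalculation = x =>
{
    System.Threading.Thread.Sleep(1000); // stand-in for slow work
    return x * 2;
};

// The second argument is the delegate that is called back when the work completes.
slowCalculation.BeginInvoke(21,
    asyncResult => Console.WriteLine("Done: " + slowCalculation.EndInvoke(asyncResult)),
    null);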
Technically, a delegate is a reference type used to encapsulate a method with a specific signature and return type.
Some other comments touched on the async world... but I'll comment anyway since my favorite 'flavor' of doing such hasn't been mentioned:
ThreadPool.QueueUserWorkItem(delegate
{
    // This code will run on its own thread!
});
Also, a huge reason for delegates is "callbacks". Let's say I implement a bit of functionality (asynchronously), and you want me to call some method of yours (let's say "AlertWhenDone") when I finish... you could pass a "delegate" into my method as follows:
TimmysSpecialClass.DoSomethingCool(this.AlertWhenDone);
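For that call to work, DoSomethingCool just needs to accept a delegate parameter; a minimal sketch (only the names from the example above are reused, the body is illustrative):
public static class TimmysSpecialClass
{
    public static void DoSomethingCool(Action alertWhenDone)
    {
        // ... do the interesting (possibly asynchronous) work ...
        alertWhenDone(); // call back into the method the caller passed in
    }
}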
Outside of their role in events, which you're probably familiar with if you've used WinForms or ASP.NET, delegates are useful for making classes more flexible (e.g. the way they're used in LINQ).
Flexibility for "Finding" things is pretty common. You have a collection of things, and you want to provide a way to find things. Rather than guessing each way that someone might want to find things, you can now allow the caller to provide the algorithm so that they can search your collection however they see fit.
Here's a trivial code sample:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Delegates
{
    class Program
    {
        static void Main(string[] args)
        {
            Collection coll = new Collection(5);
            coll[0] = "This";
            coll[1] = "is";
            coll[2] = "a";
            coll[3] = "test";

            var result = coll.Find(x => x == "is");
            Console.WriteLine(result);

            result = coll.Find(x => x.StartsWith("te"));
            Console.WriteLine(result);
        }
    }

    public class Collection
    {
        string[] _Items;

        public delegate bool FindDelegate(string FindParam);

        public Collection(int Size)
        {
            _Items = new string[Size];
        }

        public string this[int i]
        {
            get { return _Items[i]; }
            set { _Items[i] = value; }
        }

        public string Find(FindDelegate findDelegate)
        {
            foreach (string s in _Items)
            {
                if (findDelegate(s))
                    return s;
            }
            return null;
        }
    }
}
Output
is
test
There isn't really anything delegates will solve that can't be solved with other methods, but they provide a more elegant solution.
With delegates, any function can be used as long as it has the required parameters.
The alternative is often to use a kind of custom-built event system in the program, creating extra work and more places for bugs to creep in.
Is there an advantage to using a delegate when dealing with external calls to a database?
For example, can code A:
static void Main(string[] args) {
    DatabaseCode("test");
}

public void DatabaseCode(string arg) {
    .... code here ...
}
be improved as in code B:
static void Main(string[] args) {
    DatabaseCodeDelegate slave = DatabaseCode;
    slave("test");
}

public void DatabaseCode(string arg) {
    .... code here ...
}

public delegate void DatabaseCodeDelegate(string arg);
It seems that this is subjective, but is this an area where there are strongly conflicting viewpoints?