In the context of a .NET 6 incremental source generator, what does the IncrementalValuesProvider.WithTrackingName(string name) method do?
In addition, how and when is it intended to be used?
[Generator]
public class MyGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        var incrementalValueProvider = context.SyntaxProvider
            .CreateSyntaxProvider(predicate: (n, c) => { ... }, transform: (s, c) => { ... });
        incrementalValueProvider = incrementalValueProvider.WithTrackingName("Some tracking name");
    }
}
With most things when it comes to source generators, I have been able to figure out how they work by googling and a bit of trial and error.
However, with this particular method those approaches have not been helpful. In addition, Microsoft's documentation seems to be out of date, since
https://github.com/dotnet/roslyn/blob/main/docs/features/incremental-generators.md does not even mention the method.
I am just learning about Web APIs and am curious whether we can reuse the Post method inside the Get method, or whether that just violates coding standards. How can we check whether this kind of violation has already been committed by someone?
// GET api/values/5
public string Get(int id)
{
    var value = vc.Values.Where(v => v.Id == id).Select(v => v.Value1).SingleOrDefault();
    if (value == null) Post("New Value", id);
    return vc.Values.Where(v => v.Id == id).Select(v => v.Value1).SingleOrDefault();
}

// POST api/values
public void Post([FromBody] string value, int id = 0)
{
    vc.Values.Add(new Value { Id = id, Value1 = value });
    vc.SaveChanges();
}
These are 2 questions, not one.
Reusing code like this is a recipe for disaster.
You can keep your endpoints very slim by moving the code into a library, for example. Then you simply call these new methods from the endpoints, which takes care of the code-reuse part.
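As a rough sketch of that idea (ValuesService, ValuesContext, and the method names are made up for illustration, not taken from the question):

// Hypothetical class that holds the shared data-access logic exactly once.
public class ValuesService
{
    private readonly ValuesContext vc;

    public ValuesService(ValuesContext vc)
    {
        this.vc = vc;
    }

    public string GetValue(int id)
    {
        return vc.Values.Where(v => v.Id == id).Select(v => v.Value1).SingleOrDefault();
    }

    public void AddValue(string value, int id)
    {
        vc.Values.Add(new Value { Id = id, Value1 = value });
        vc.SaveChanges();
    }
}

// The endpoints then stay slim and never call each other.
public string Get(int id)
{
    return valuesService.GetValue(id);
}

public void Post([FromBody] string value, int id = 0)
{
    valuesService.AddValue(value, id);
}

Whether Get should silently create a missing value at all is a separate design question; the point here is only that the shared logic lives in one place.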
As for how you detect such issues: I wouldn't expect a tool to do it for you. You need a mature SDLC, code reviews, and analysis of what you already have.
My understanding is that each Establish should only be executed once, but the code below shows it executing multiple times. We're nesting the classes to provide some grouping while keeping the unit tests for a Subject in one file. This seems like a bug.
We're using the machine.specifications.runner.resharper ReSharper extension and MSpec 0.9.1.
[Subject(typeof(string))]
internal class EstablishRunTwice {
    Establish sharedContext = () => Console.WriteLine("Shared context");

    internal class ScenarioA : EstablishRunTwice {
        Establish scenarioAContext = () => Console.WriteLine("ScenarioA context");

        internal class ScenarioAVariation1 : ScenarioA {
            Because of = () => Console.WriteLine("ScenarioAVariation1 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation1 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation1 It2");
        }

        internal class ScenarioAVariation2 : ScenarioA {
            Because of = () => Console.WriteLine("ScenarioAVariation2 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation2 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation2 It2");
        }
    }

    internal class ScenarioB : EstablishRunTwice {
        Establish context = () => Console.WriteLine("ScenarioB context");
        Because of = () => Console.WriteLine("ScenarioB Because");
        It it1 = () => Console.WriteLine("ScenarioB It1");
        It it2 = () => Console.WriteLine("ScenarioB It2");
    }
}
The result is this for ScenarioAVariation1:
Shared context
Shared context
ScenarioA context
Shared context
Shared context
ScenarioA context
ScenarioAVariation1 Because
ScenarioAVariation1 It1
ScenarioAVariation1 It2
When we were building our own custom context-specification framework on NUnit, we got around issues with the NUnit runner by making sure all the base classes were abstract (in this case, EstablishRunTwice and ScenarioA would be abstract), but MSpec throws an error when we attempt to do that.
It is because you are both nesting and inheriting the test classes. Normally you might use nested classes in C# purely for organizational purposes, but nesting also has an effect on execution in MSpec. This might be unexpected, but it does fit with MSpec's declarative style. In fact, there isn't normally a need to use inheritance at all with MSpec unless you're reusing functionality across different files.
Just remove the inheritance in your example and keep the nesting, and you'll see the output as:
Shared context
ScenarioA context
ScenarioAVariation1 Because
ScenarioAVariation1 It1
ScenarioAVariation1 It2
...
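In other words, the question's classes with only the inheritance removed and the nesting kept (one variation shown; the others follow the same pattern):

[Subject(typeof(string))]
internal class EstablishRunTwice {
    Establish sharedContext = () => Console.WriteLine("Shared context");

    internal class ScenarioA {
        Establish scenarioAContext = () => Console.WriteLine("ScenarioA context");

        internal class ScenarioAVariation1 {
            Because of = () => Console.WriteLine("ScenarioAVariation1 Because");
            It it1 = () => Console.WriteLine("ScenarioAVariation1 It1");
            It it2 = () => Console.WriteLine("ScenarioAVariation1 It2");
        }
    }
}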
This makes it easy to use common setup in the Establish of the outer class and override specific parts in inner classes. On a personal note, before I realized it worked this way, I felt like I was fighting with MSpec for test cases that relied on different setups (vs ones where different values are passed directly to the Subject in the Because).
Say you had a weather sensor thingy, you might structure it this way:
[Subject(typeof(WeatherSensor))]
class when_reading_the_sensor : WithSubject<WeatherSensor> {
    Establish context = () => { /* common setup */ };

    class with_sunny_conditions {
        Establish context = () => { /* set up sunny conditions */ };
        Because of = () => Subject.Read();
        It should_say_it_is_sunny = () => ...
        It should_return_correct_temps = () => ...
    }

    class with_rainy_conditions {
        ...
    }
}
That also reads well in the test results. If the second test fails, it might show up like this in the test tree:
(X) WeatherSensor, when reading the sensor with sunny conditions
(✔) should say it is sunny
(X) should return correct temps
If, as in that example, all the different conditions come purely from the setup of dependencies injected into the Subject, you may even want to move the Because into the outer class. Then you can have just an Establish and some Its in the inner classes, making each test case very concise. The outer Because will still be run for each inner class, after all the needed Establishes and before the Its.
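Continuing the hypothetical WeatherSensor example, that shape might look roughly like this:

[Subject(typeof(WeatherSensor))]
class when_reading_the_sensor : WithSubject<WeatherSensor> {
    Establish context = () => { /* common setup */ };
    Because of = () => Subject.Read();   // shared act, runs after each inner Establish

    class with_sunny_conditions {
        Establish context = () => { /* set up sunny conditions */ };
        It should_say_it_is_sunny = () => { /* assert */ };
    }

    class with_rainy_conditions {
        Establish context = () => { /* set up rainy conditions */ };
        It should_say_it_is_rainy = () => { /* assert */ };
    }
}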
That's a really confusing way to structure things - clever, but perhaps a bit too clever. I find it hard to read and to understand the intent. In fact, I can't even begin to imagine what the compiler does with that inheritance structure. I think maybe you're over-thinking this.
So let me see: ScenarioA is not only nested in EstablishRunTwice, it also inherits from it. Does that mean it inherits nested copies of itself all the way down to infinity? And then ScenarioB inherits from all of that! My head has just exploded. I'm not surprised you get confusing results. What does that nesting really give you? Does it make the code more readable or easier to maintain? I'm not convinced it does.
Use the KISS principle. The normal way of doing things is to put each context in its own class, with no nesting; just use files to group related tests, and you can also use the Concern argument of the [Subject] attribute as another way of grouping. You can inherit from other contexts when that makes sense, but after working with MSpec for a couple of years I'm slowly coming to the conclusion that too much inheritance harms readability and makes the test code more viscous, so use inheritance sparingly.
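For comparison, the flat style described above might look something like this (reusing the hypothetical weather sensor purely for illustration):

[Subject(typeof(WeatherSensor), "reading")]
class when_reading_the_sensor_in_sunny_conditions {
    Establish context = () => { /* arrange a sensor with sunny data */ };
    Because of = () => { /* act */ };
    It should_say_it_is_sunny = () => { /* assert */ };
}

[Subject(typeof(WeatherSensor), "reading")]
class when_reading_the_sensor_in_rainy_conditions {
    Establish context = () => { /* arrange a sensor with rainy data */ };
    Because of = () => { /* act */ };
    It should_say_it_is_rainy = () => { /* assert */ };
}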
Update: Having reflected a little longer on what I think you are trying to achieve, I suspect you are trying to re-invent behaviours. This is a perhaps poorly documented and poorly understood feature of MSpec that lets you define a set of common specifications and apply them in multiple test contexts. Does that sound like what you are trying to achieve? Here's an example of behaviours:
[Behaviors]
internal class DenebRightAscension
{
    It should_have_20_hours_ = () => UUT.Degrees.ShouldEqual(20u);
    It should_have_41_minutes = () => UUT.Minutes.ShouldEqual(41u);
    It should_have_59_seconds = () => UUT.Seconds.ShouldEqual(59u);

    protected static Bearing UUT;
}

[Subject(typeof(HourAngle), "sexagesimal")]
internal class when_converting_hour_angle_to_sexagesimal
{
    Because of = () =>
    {
        RaDeneb = 20.6999491773451;
        UUT = new Bearing(RaDeneb);
    };

    Behaves_like<DenebRightAscension> deneb;

    protected static Bearing UUT;
    static double RaDeneb;
}

[Subject(typeof(Bearing), "sexagesimal")]
internal class when_converting_to_sexagesimal
{
    Because of = () =>
    {
        RaDeneb = 20.6999491773451;
        UUT = new Bearing(RaDeneb);
    };

    Behaves_like<DenebRightAscension> deneb;

    protected static Bearing UUT;
    static double RaDeneb;
}
Note that in behaviours fields are matched by name, not by any kind of inheritance. So the behaviour magically knows what I mean by 'UUT' even though the classes are in no way related.
Coming from using Moq, I'm used to being able to Setup mocks as Verifiable. As you know, this is handy when you want to ensure your code under test actually called a method on a dependency.
e.g. in Moq:
// Set up the Moq mock to be verified
mockDependency.Setup(x => x.SomethingImportantToKnow()).Verifiable("Darn, this did not get called.");
target = new ClassUnderTest(mockDependency);
// Act on the object under test, using the mock dependency
target.DoThingsThatShouldUseTheDependency();
// Verify the mock was called.
mockDependency.Verify();
I've been using VS2012's "Fakes Framework" (for lack of knowing a better name for it), which is quite slick and I'm starting to prefer it to Moq, as it seems a bit more expressive and makes Shims easy. However, I can't figure out how to reproduce behavior similar to Moq's Verifiable/Verify implementation. I found the InstanceObserver property on the Stubs, which sounds like it might be what I want, but there's no documentation as of 9/4/12, and I'm not clear how to use it, if it's even the right thing.
Can anyone point me in the right direction on doing something like Moq Verifiable/Verify with VS2012's Fakes?
-- 9/5/12 Edit --
I realized a solution to the problem, but I'd still like to know if there's a built-in way to do it with VS2012 Fakes. I'll leave this open a little while for someone to claim if they can. Here's the basic idea I have (apologies if it doesn't compile).
[TestClass]
public class ClassUnderTestTests
{
    private class Arrangements
    {
        public ClassUnderTest Target;
        public bool SomethingImportantToKnowWasCalled = false; // Create a flag!

        public Arrangements()
        {
            var mockDependency = new Fakes.StubIDependency // Fakes sweetness.
            {
                SomethingImportantToKnow = () => { SomethingImportantToKnowWasCalled = true; } // Set the flag!
            };
            Target = new ClassUnderTest(mockDependency);
        }
    }

    [TestMethod]
    public void DoThingThatShouldUseTheDependency_Condition_Result()
    {
        // arrange
        var arrangements = new Arrangements();

        // act
        arrangements.Target.DoThingThatShouldUseTheDependency();

        // assert
        Assert.IsTrue(arrangements.SomethingImportantToKnowWasCalled); // Voila!
    }
}
-- 9/5/12 End edit --
Since I've heard no better solutions, I'm calling the edits from 9/5/12 the best approach for now.
EDIT
Found the magic article that describes best practices. http://www.peterprovost.org/blog/2012/11/29/visual-studio-2012-fakes-part-3/
Although it might make sense in complex scenarios, you don't have to use a separate (Arrangements) class to store information about which methods were called. Here is a simpler way of verifying that a method was called with Fakes; it stores the information in a local variable instead of in a field of a separate class. Like your example, it assumes that ClassUnderTest calls a method of the IDependency interface.
[TestMethod]
public void DoThingThatShouldUseTheDependency_Condition_Result()
{
    // arrange
    bool dependencyCalled = false;
    var dependency = new Fakes.StubIDependency()
    {
        DoStuff = () => dependencyCalled = true
    };
    var target = new ClassUnderTest(dependency);

    // act
    target.DoStuff();

    // assert
    Assert.IsTrue(dependencyCalled);
}
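If you also want something closer to Moq's Times checks, the same hook can count calls instead of just flagging one. A sketch, reusing the hypothetical StubIDependency and ClassUnderTest from above:

[TestMethod]
public void DoThingThatShouldUseTheDependency_CallsDependencyExactlyOnce()
{
    // arrange
    int callCount = 0;
    var dependency = new Fakes.StubIDependency()
    {
        DoStuff = () => callCount++   // same delegate hook as above, now counting
    };
    var target = new ClassUnderTest(dependency);

    // act
    target.DoStuff();

    // assert
    Assert.AreEqual(1, callCount);
}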
Long story short
Say I have the following code:
// a class like this
class FirstObject {
    public Object OneProperty {
        get;
        set;
    }

    // (other properties)

    public Object OneMethod() {
        // logic
    }
}

// and another class with property and method names
// which are similar or exactly the same if needed
class SecondObject {
    public Object OneProperty {
        get;
        set;
    }

    // (other properties)

    public Object OneMethod(String canHaveParameters) {
        // logic
    }
}
// the consuming code would be something like this
public static void Main(String[] args) {
    FirstObject myObject = new FirstObject();

    // Use its properties and methods
    Console.WriteLine("FirstObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("FirstObject.OneMethod returned value: " + myObject.OneMethod());

    // Now, for some reason, continue to use the
    // same object but with another type
    // -----> CHANGE FirstObject to SecondObject HERE <-----

    // Continue to use properties and methods, but
    // this time the calls are made to SecondObject's properties and methods
    Console.WriteLine("SecondObject.OneProperty value: " + myObject.OneProperty);
    Console.WriteLine("SecondObject.OneMethod returned value: " + myObject.OneMethod(oneParameter));
}
Is it possible to change FirstObject's type to SecondObject and continue to use its properties and methods?
I have total control over FirstObject, but SecondObject is sealed and totally out of my scope!
Can I achieve this through reflection? How? What do you think of the amount of work it might take? Obviously both classes can be a LOT more complex than the example above.
Both classes can also be generic, like FirstObject<T> and SecondObject<T>, which is what intimidates me about using reflection for such a task!
Problem in reality
I tried to state my problem in a simplified way for the sake of clarity, hoping to extract the knowledge needed to solve it, but looking at the answers it is obvious that, to help me, you need to understand my real problem, because changing an object's type is only the tip of the iceberg.
I'm developing a Workflow Definition API. The main objective is to have an API that can be reused on top of any engine I might want to use (the CLR through WF4, NetBPM, etc.).
Right now I'm writing the middle layer that translates that API to WF4 so workflows can run on the CLR.
What I've already accomplished
The API concept, at this stage, is somewhat similar to WF4: ActivityStates with In/Out Arguments, and Data (variables) flowing through the ActivityStates via their arguments.
Very simplified API in pseudo-code:
class Argument {
    object Value;
}

class Data {
    String Name;
    Type ValueType;
    object Value;
}

class ActivityState {
    String DescriptiveName;
}

class MyIf : ActivityState {
    InArgument Condition;
    ActivityState Then;
    ActivityState Else;
}

class MySequence : ActivityState {
    Collection<Data> Data;
    Collection<ActivityState> Activities;
}
My initial approach to translating this to WF4 was to walk the ActivityState graph and do a more or less direct assignment of properties, using reflection where needed.
Again in simplified pseudo-code, something like:
new Activities.If() {
    DisplayName = myIf.DescriptiveName,
    Condition = TranslateArgumentTo_WF4_Argument(myIf.Condition),
    Then = TranslateActivityStateTo_WF4_Activity(myIf.Then),
    Else = TranslateActivityStateTo_WF4_Activity(myIf.Else)
}

new Activities.Sequence() {
    DisplayName = mySequence.DescriptiveName,
    Variables = TranslateDataTo_WF4_Variables(mySequence.Data),
    Activities = TranslateActivitiesStatesTo_WF4_Activities(mySequence.Activities)
}
At the end of the translation I would have an executable System.Activities.Activity object. I've already accomplished this easily.
The big issue
A big issue with this approach appeared when I started translating Data objects to System.Activities.Variable. The problem is that WF4 separates the workflow definition from its execution context. Because of that, both Arguments and Variables are LocationReferences that must be accessed through the var.Get(context) method so the engine knows where they live at runtime.
Something like this is easily accomplished using WF4:
Variable<string> var1 = new Variable<string>("varname1", "string value");
Variable<int> var2 = new Variable<int>("varname2", 123);

return new Sequence {
    Name = "Sequence Activity",
    Variables = new Collection<Variable> { var1, var2 },
    Activities = new Collection<Activity>() {
        new Write() {
            Name = "WriteActivity1",
            Text = new InArgument<string>(
                context =>
                    String.Format("String value: {0}", var1.Get(context)))
        },
        new Write() {
            //Name = "WriteActivity2",
            Text = new InArgument<string>(
                context =>
                    String.Format("Int value: {0}", var2.Get(context)))
        }
    }
};
but if I want to represent the same workflow through my API:
Data<string> var1 = new Data<string>("varname1", "string value");
Data<int> var2 = new Data<int>("varname2", 123);

return new Sequence() {
    DescriptiveName = "Sequence Activity",
    Data = new Collection<Data> { var1, var2 },
    Activities = new Collection<ActivityState>() {
        new Write() {
            DescriptiveName = "WriteActivity1",
            Text = "String value: " + var1 // <-- BIG PROBLEM !!
        },
        new Write() {
            DescriptiveName = "WriteActivity2",
            Text = "Int value: " + Convert.ToInt32(var2) // ANOTHER BIG PROBLEM !!
        }
    }
};
I end up with a BIG PROBLEM when using Data objects as Variables. I really don't know how to let developers using my API place Data objects wherever they want (just like in WF4) and later translate that Data to System.Activities.Variable.
Solutions come to mind
If you now understand my problem: FirstObject and SecondObject are Data and System.Activities.Variable, respectively. Like I said, translating Data to Variable is just the tip of the iceberg, because I might use Data.Get() in my code and I don't know how to translate that into Variable.Get(context) during the translation.
Solutions that I've tried or thought of:
Solution 1
Instead of a direct translation of properties, I would develop NativeActivities for each flow-control activity (If, Sequence, Switch, ...) and use the CacheMetadata() method to declare Arguments and Variables. The problem remains, because both are still accessed through var.Get(context).
Solution 2
Give my Data class its own Get() method. It would be only an abstract method, with no logic inside, that would somehow translate to the Get() method of System.Activities.Variable. Is that even possible in C#? I guess not! Another problem is that Variable.Get() takes a parameter.
Solution 3
The worst solution I thought of was CIL manipulation: try to replace the code where Data/Argument is used with Variable/Argument code. That smells like a nightmare to me. I know next to nothing about System.Reflection.Emit, and even if I learn it, my guess is that it would take ages... and it might not even be possible.
Sorry if I ended up introducing a bigger problem, but I'm really stuck here and desperately need a tip or a path to follow.
This is called "duck typing" (if it looks like a duck and quacks like a duck you can call methods on it as though it really were a duck). Declare myObject as dynamic instead of as a specific type and you should then be good to go.
EDIT: to be clear, this requires .NET 4.0
dynamic myObject = new FirstObject();
// do stuff
myObject = new SecondObject();
// do stuff again
Reflection isn't necessarily the right tool for this. If SecondObject is out of your control, your best option is probably to write an extension method that instantiates a new copy of it and copies the data across, property by property.
You could use reflection for the copying process and work that way, but that is really a separate issue.
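As a rough sketch of that property-by-property copy (ToSecondObject is a made-up name, and this assumes SecondObject has a public constructor and writable properties with matching names):

public static class FirstObjectExtensions
{
    // Copies every readable property of FirstObject onto a new SecondObject
    // when a writable property with the same name and a compatible type exists.
    public static SecondObject ToSecondObject(this FirstObject source)
    {
        var target = new SecondObject();
        foreach (var sourceProp in typeof(FirstObject).GetProperties())
        {
            var targetProp = typeof(SecondObject).GetProperty(sourceProp.Name);
            if (targetProp != null && targetProp.CanWrite &&
                targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
            }
        }
        return target;
    }
}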
I'd like to list all the methods that are called from a specific method. E.g. if I have the following code:
public void test1() {
    test2();
    test3();
}
The list should contain test2() and test3(). It would be great if it could list methods of the same class as well as methods of other classes.
Additionally, I'd like to find a way to detect which fields are used by a method:
public class A {
    private String test1 = "";
    private String test2 = "";

    public void test() {
        Console.WriteLine(test1);
    }
}
Should therefore list test1.
I tried doing this with Mono.Cecil, but unfortunately I couldn't find much documentation for the project. So does anybody know how to do this?
Edit: I'd like to do it with Mono.Cecil because through its API I can use the results directly in my application. If I use the built-in tools in Visual Studio or similar, it's quite difficult to process the results further.
I haven't really worked with Cecil, but the HowTo page shows how to enumerate the types, and your problem only seems to require looping over the instructions for the ones you're after: call and load-field. This sample code seems to handle the cases you mentioned, but there may be more to it; you should probably check the other call instructions (Callvirt, for example) too. If you make it recursive, make sure you keep track of the methods you've already checked.
static void Main(string[] args)
{
    var module = ModuleDefinition.ReadModule("CecilTest.exe");
    var type = module.Types.First(x => x.Name == "A");
    var method = type.Methods.First(x => x.Name == "test");

    PrintMethods(method);
    PrintFields(method);
    Console.ReadLine();
}

public static void PrintMethods(MethodDefinition method)
{
    Console.WriteLine(method.Name);
    foreach (var instruction in method.Body.Instructions)
    {
        if (instruction.OpCode == OpCodes.Call)
        {
            MethodReference methodCall = instruction.Operand as MethodReference;
            if (methodCall != null)
                Console.WriteLine("\t" + methodCall.Name);
        }
    }
}

public static void PrintFields(MethodDefinition method)
{
    Console.WriteLine(method.Name);
    foreach (var instruction in method.Body.Instructions)
    {
        if (instruction.OpCode == OpCodes.Ldfld)
        {
            FieldReference field = instruction.Operand as FieldReference;
            if (field != null)
                Console.WriteLine("\t" + field.Name);
        }
    }
}
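If you do make it recursive, something along these lines might work; note that Resolve() needs the referenced assemblies to be locatable, so treat this as a sketch rather than a drop-in:

public static void PrintCallTree(MethodDefinition method, HashSet<string> visited, int depth = 0)
{
    if (!visited.Add(method.FullName))
        return; // already walked this method, avoid cycles

    Console.WriteLine(new string('\t', depth) + method.Name);

    if (!method.HasBody)
        return;

    foreach (var instruction in method.Body.Instructions)
    {
        if (instruction.OpCode == OpCodes.Call || instruction.OpCode == OpCodes.Callvirt)
        {
            var reference = instruction.Operand as MethodReference;
            if (reference == null)
                continue;

            MethodDefinition callee = null;
            try { callee = reference.Resolve(); }
            catch { /* target assembly could not be resolved; fall back to the name only */ }

            if (callee != null)
                PrintCallTree(callee, visited, depth + 1);
            else
                Console.WriteLine(new string('\t', depth + 1) + reference.Name);
        }
    }
}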
This can't be done simply using the reflection API in C#. Really you would need to parse the original source code, which is probably not the kind of solution you're looking for. That is, for example, how Visual Studio gets this kind of information for refactoring.
You might get somewhere analysing the IL, along the lines of what Reflector does, but that would be a huge piece of work, I think.
You can use the .NET Reflector tool if you are willing to pay. You could also take a look at this .NET Method Dependencies approach; it gets tricky, though, as you're going to be digging into the IL. A third possibility would be to use the macro engine in VS; it does have a facility to analyze code (CodeElement), though I'm not sure whether it can do dependencies.