When using DoFixture I can set a domain object as System Under Test which allows me to call methods on that object instead of the fixture itself.
Unfortunately, if such a method requires more than one parameter, I have to separate those parameters with empty cells, because otherwise FitNesse/fitSharp uses the odd/even cells to build up the method name. I can see how this makes my tests resemble plain English better, but it's not really feasible to start renaming domain object methods just to satisfy test framework requirements.
For example, say I want to call method Entry AddEntry(string name, string description) and store the result as symbol e1. If I try the following table:
|name|e1|add entry|sample name|sample description|
it will try to find a method named AddEntrySampleDescription and pass it a single parameter "sample name".
I can do
|name|e1|add|sample name|entry|sample description|
but it just doesn't look right.
So, what I ended up doing is the following (note the extra empty cell between the parameters):
|name|e1|add entry|sample name||sample description|
which does what I want and isn't as ugly as option #2, but it still seems like a hack. Am I missing something, or is that actually the way to call methods on domain objects?
You can add the empty cell between parameters - this is a widely-used technique. Or you can use SequenceFixture:
http://fitnesse.org/FitNesse.UserGuide.FixtureGallery.FitLibraryFixtures.SequenceFixture
SequenceFixture is very similar to DoFixture and has almost the same features; in fact, the only difference between the two is the naming convention for methods. Instead of using the odd cells to construct a method name, SequenceFixture takes the first cell in each row as the method name and all the other cells as arguments.
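For example, under SequenceFixture the AddEntry call from above could (leaving the symbol assignment aside) be written with the whole method name in the first cell and one parameter per following cell:
|add entry|sample name|sample description|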
I have a (not quite valid) CSV file that contains rows of multiple types. Any record could be one of about 6 different types and each type has a different number of properties. The first part of any row contains the timestamp and the type of record, followed by a standard CSV of the data.
Example
1456057920 PERSON, Ted Danson, 123 Fake Street, 555-123-3214, blah
1476195120 PLACE, Detroit, Michigan, 12345
1440581532 THING, Bucket, Has holes, Not a good bucket
And to make matters more complex, I need to be able to do different things with the records depending on certain criteria. So a PERSON type can be automatically inserted into a DB without user input, but a THING type would be displayed on screen for the user to review and approve before adding to DB and continuing the parse, etc.
Normally, I would use a library like CsvHelper to map the records to a type, but in this case, since the types could be different and the first part uses a space instead of a comma, I don't know how to do that with a standard CSV library. So currently, what I am doing in each loop is:
String split based off comma.
Split the first array item by the space.
Use a switch statement to determine the type and create the object.
Put that object into a List of type object.
Get confused as to where to go now, because I now have a list of various types and will have to use yet another switch or if to determine the next parts.
I don't really know for sure if I will actually need that List but I have a feeling the user will want the ability to manually flip through records in the file.
By this point, this is starting to make for very long, confusing code, and my gut feeling tells me there has to be a cleaner way to do this. I thought maybe using Type.GetType(string) would help simplify the code some, but this seems like it might be terribly inefficient in a loop with 10k+ records and might make things even more confusing. I then thought maybe making some interfaces might help, but I'm not the greatest at using interfaces in this context and I seem to end up in about this same situation.
So what would be a more manageable way to parse this file? Are there any C# parsing libraries out there that would be able to handle something like this?
You can implement an IRecord interface that has a Timestamp property and a Process method (perhaps others as well).
Then, implement concrete types for each type of record.
Use a switch statement to determine the type and create and populate the correct concrete type.
Place each object in a List<IRecord>.
After that you can do whatever you need. Some examples:
Loop through each item and call Process() to handle it.
Use LINQ's .OfType<{concrete type}>() to segment the list. (Warning: with 10k+ records this would be slow, since it would traverse the entire list once for each concrete type.)
Use an overridden ToString method to give a single text representation of the IRecord
If using WPF, you can define a datatype template for each concrete type, bind an ItemsControl derivative to a collection of IRecords and your "detail" display (e.g. ListItem or separate ContentControl) will automagically display the item using the correct DataTemplate
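As a hedged sketch of what that could look like (PersonRecord, ThingRecord, and the exact field layout are just placeholders for illustration):

using System;
using System.Collections.Generic;
using System.IO;

public interface IRecord
{
    DateTime Timestamp { get; }
    void Process();
}

public class PersonRecord : IRecord
{
    public DateTime Timestamp { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }

    // PERSON records can be inserted into the DB without user input.
    public void Process() { /* insert into DB */ }
}

public class ThingRecord : IRecord
{
    public DateTime Timestamp { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }

    // THING records are shown to the user for approval before saving.
    public void Process() { /* display for review, then insert if approved */ }
}

public static class RecordParser
{
    public static List<IRecord> Parse(string path)
    {
        var records = new List<IRecord>();
        foreach (var line in File.ReadLines(path))
        {
            var fields = line.Split(',');
            var head = fields[0].Trim().Split(' ');   // e.g. "1456057920 PERSON"
            var timestamp = DateTimeOffset.FromUnixTimeSeconds(long.Parse(head[0])).UtcDateTime;

            switch (head[1])
            {
                case "PERSON":
                    records.Add(new PersonRecord { Timestamp = timestamp, Name = fields[1].Trim(), Address = fields[2].Trim() });
                    break;
                case "THING":
                    records.Add(new ThingRecord { Timestamp = timestamp, Name = fields[1].Trim(), Description = fields[2].Trim() });
                    break;
                // ... one case per remaining record type
            }
        }
        return records;
    }
}

Then foreach (var r in RecordParser.Parse(path)) r.Process(); handles every record according to its own rules, and .OfType<PersonRecord>() is available when you need just one kind.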
Continuing from my comment: well, that depends. What you described is actually pretty good for starters. You can of course expand it to a series of factories, one for each object type, so that you move from an explicit switch to searching for the first factory that can parse a line. That might prove useful if you are looking to add more object types in the future: you just add another factory for the new kind of object. It's up to you whether these objects should share a common interface; an interface is generally used to define behavior, and that doesn't seem to be the case here. Maybe you should rather just use a Dictionary? You need to ask yourself whether you actually need strongly typed objects here. Maybe what you need is a simple class with an ObjectType property and a Dictionary of properties, plus some helper methods for easy typed property access like GetBool, GetInt, or a generic Get.
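If you go that loosely typed route, a minimal sketch (the class and method names are just suggestions) could look like this:

using System;
using System.Collections.Generic;

public class GenericRecord
{
    public string ObjectType { get; set; }
    public Dictionary<string, string> Properties { get; } = new Dictionary<string, string>();

    // Typed accessors so callers don't repeat the conversion everywhere.
    public string Get(string key) => Properties[key];
    public int GetInt(string key) => int.Parse(Properties[key]);
    public bool GetBool(string key) => Properties[key].Trim() == "1";
    public T Get<T>(string key) => (T)Convert.ChangeType(Properties[key], typeof(T));
}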
I need to create a few tests for the user roles in a web application. To keep the description short, one of the tests involves checking whether a menu entry is displayed or not for a user.
For this test, I use a table called UserRoles, that looks like this:
sUserName bDoesntHaveMenuX
User1 1
User2 0
User3 1
bDoesntHaveMenuX is of type bit.
I have a class derived from ValidationRule that checks if a certain text is present in a page, based on a XPath expression to locate the node where to look for the text.
The public properties of this class are:
string XPathExpression
string Text
bool FailIfFound
The last one dictates if the rule should fail if the text is found or not found.
In the test I added a datasource for the table mentioned in the beginning, called DS.
For the request I'm interested in I added a new instance of my validation rule class, with the following values:
Text=MenuX
XPathExpression=//div[@id='menu']//td
FailIfFound={{DS.UserRoles.bDoesntHaveMenuX}}
Unfortunately, this doesn't work.
The reason seems to be that the data binding process creates a context variable DS.UserRoles.bDoesntHaveMenuX whose value is "False" or "True". The value is a string, so the binding results in a casting error.
My options, as far as I can think of, are:
Change the validation rule to accept strings for FailIfFound. Not a valid option, for two reasons: it's a hack, and the same rule is used in other places.
Make a new validation rule that will use the above-mentioned one and implement FailIfFound as a string. I also don't like this, for the same reason as above: it's a hack.
Make the test a coded test and do the proper cast before passing the data to the validation rule. I don't like this one because I prefer to have a test as coded only if there is no other way.
Which brings me to the question. Is there another way?
Thank you.
So the fundamental issue is that you have no control over how the data-binding treats the 'bit' data type, and it's getting converted to string instead of bool.
The only solution I can think of (which is sadly still a bit of a hack, but not as egregious as changing FailIfFound to a string) is to create a WebTestPlugin and, in the PreRequestDataBinding or PreRequest event, convert the value from string to bool. Don't forget to add the plugin to your test(s) (an easy mistake I have made).
Then when the validation rule is created it should pick up the nice new bool value and work correctly.
e.g.
string val = e.WebTest.Context["DS.UserRoles.bDoesntHaveMenuX"].ToString();
e.WebTest.Context["DS.UserRoles.bDoesntHaveMenuX"] = (val == "True");
I didn't actually try this... hope it works.
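For example, a complete plugin along those lines might look like the sketch below; it's untested, and the context key is an assumption you'd adjust to whatever your data source binding actually produces:

using Microsoft.VisualStudio.TestTools.WebTesting;

public class BitToBoolPlugin : WebTestPlugin
{
    public override void PreRequestDataBinding(object sender, PreRequestDataBindingEventArgs e)
    {
        // Key name is whatever your data source binding actually produces; adjust as needed.
        const string key = "DS.UserRoles.bDoesntHaveMenuX";
        object raw;
        if (e.WebTest.Context.TryGetValue(key, out raw))
        {
            string val = raw.ToString();
            e.WebTest.Context[key] = (val == "True" || val == "1");
        }
    }
}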
EDIT: round two... a better solution
Change the FailIfFound property to string (in a subclass as you mentioned), so it can work properly with data-binding.
Implement a TypeConverter that provides a dropdown list of valid values for the property in the rule's PropertyGrid (True, False), so in the GUI it looks identical to the rule having FailIfFound as a bool. You can still type your own value into the box when necessary (e.g. for data-binding).
Add the path of the .dll containing the TypeConverter code to your test project's References section.
This is what I have started doing and it is much more satisfying than having to type 'True' or 'False' in the property's edit box.
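As a rough, untested sketch of that converter (the class name is mine; the commented attribute at the end shows where it would be applied on the subclassed rule):

using System.ComponentModel;

public class BoolStringConverter : StringConverter
{
    public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
    {
        return true;
    }

    // Not exclusive, so you can still type a data-binding expression into the box.
    public override bool GetStandardValuesExclusive(ITypeDescriptorContext context)
    {
        return false;
    }

    public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
    {
        return new StandardValuesCollection(new[] { "True", "False" });
    }
}

// On the subclassed rule:
// [TypeConverter(typeof(BoolStringConverter))]
// public string FailIfFound { get; set; }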
In my project I have a header class that represents a globally unique key for a piece of information inside the system, such as who it belongs to, what time it exists for, etc. Inside the same header class I also have fields for information that is specific to a given instance of data, such as who created this version of the information, when it was created, whether it's new data that needs to be saved to the database, etc.
Here is an example of stocking some information into a data transport class and querying it back out.
var header = new IntvlDataHeader(
datapoint: Guid.NewGuid(),
element: Guid.NewGuid(),
intervalUtc: DateTime.Now.Date);
package.StockData_Decimal(header, 5m);
decimal cloneData;
package.TryGetData_Decimal(ref header, out cloneData);
// header now refers to a different object, that could have different flags/information
Note how I made the TryGetData_Decimal pass the header variable by reference. IntvlDataHeader is a class, and if the data is found inside TryGetData then the reference is changed to point to a new instance of IntvlDataHeader that has the specific instance information in addition to having the same unique key information.
Is combining a key with instance specific information and using a ref parameter as both in and out a bad design? Would the effort of splitting out another class so that there would be two out parameters and no ref parameters be better or avoid any potential problems?
The signature of the method is public bool TryGetData_Decimal(ref IntvlDataHeader header, out decimal data)
I think the naming of your TryGetData_Decimal is misleading if the ref parameter you're passing in will then point to a new instance when the method exits. TryGetData_Decimal, to me, sounds like a variation of the TryParse methods on a number of value types (which have an out parameter containing the parsed value - similar to the cloneData parameter).
I guess I'm not sure why the header object has to point to a new instance, so I'm not sure I can recommend a design. If that's what you need to do, I think it may be more readable if your TryGetData_XXX methods have a signature something like this:
IntvlDataHeader ExtractValueAndGetNewInstance_Decimal(IntvlDataHeader header, out decimal cloneData)
where header is passed in, but doesn't change when the method exits. The method returns the new instance, and you can use it if you need it. I wouldn't change the cloneData - I think out parameters are OK as long as they aren't overused.
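A call site would then read roughly like this (assuming the method lives on the same package object as in your example):

decimal cloneData;
IntvlDataHeader updatedHeader = package.ExtractValueAndGetNewInstance_Decimal(header, out cloneData);
// 'header' still refers to the original key object; 'updatedHeader' carries the instance-specific information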
I'd try to change the name of the method to something more meaningful, too.
I hope this helps.
So, I've been searching around on the internet for a bit, trying to see if someone has already invented the wheel here. What I want to do is write an integration test that will parse the current project, find all references to a certain method, find its arguments, and then check the database for each argument. For example:
public interface IContentProvider
{
ContentItem GetContentFor(string descriptor);
}
public class ContentProvider : IContentProvider
{
public virtual ContentItem GetContentFor(string descriptor)
{
// Fetches content from the database for the descriptor and returns it
}
}
Any other class will get an IContentProvider injected into its constructor using IoC, such that it could write something like:
contentProvider.GetContentFor("SomeDescriptor");
contentProvider.GetContentFor("SomeOtherDescriptor");
Basically, the unit test finds all these references, builds the set of strings ["SomeDescriptor", "SomeOtherDescriptor"], and then I can check the database to make sure I have rows defined for those descriptors. Furthermore, the descriptors are hard-coded.
I could make an enum value for all descriptors, but the enum would have thousands of possible options, and that seems like kind of a hack.
Now, this link on SO: How I can get all reference with Reflection + C# basically says it's impossible without some very advanced IL parsing. To clarify: I don't need Reflector or anything - this is just to be an automated test I can run, so that if any other developers on my team check in code that calls for this content without creating the DB record, the test will fail.
Is this possible? If so, does anyone have a resource to look at or sample code to modify?
EDIT: Alternatively, perhaps there is a different method of doing this vs. trying to find all references? The end result is that I want a test to fail when the record doesn't exist.
This will be very difficult: your program may compute the value of the descriptor, which means your test won't be able to know which values are possible without executing said code.
I would suggest changing the way you program here, by using an enum type or coding with the type-safe enum pattern. This way, each and every use of GetContentFor will be safe: the argument is part of the enum, and the language's type checker performs the check.
Your test can then easily iterate over the different enum fields and check that they are all declared in your database.
Adding a new content key requires editing the enum, but this is a small inconvenience you can live with, as it helps a lot in ensuring all calls are safe.
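As an illustrative, hedged sketch of the type-safe enum pattern (the descriptor names are the ones from your example; everything else is invented):

using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public sealed class ContentKey
{
    // One field per descriptor; the names here are just placeholders.
    public static readonly ContentKey SomeDescriptor = new ContentKey("SomeDescriptor");
    public static readonly ContentKey SomeOtherDescriptor = new ContentKey("SomeOtherDescriptor");

    public string Descriptor { get; private set; }

    private ContentKey(string descriptor) { Descriptor = descriptor; }

    // Enumerates every declared key so a test can verify each one exists in the database.
    public static IEnumerable<ContentKey> All
    {
        get
        {
            return typeof(ContentKey)
                .GetFields(BindingFlags.Public | BindingFlags.Static)
                .Select(f => (ContentKey)f.GetValue(null));
        }
    }
}

GetContentFor would then accept a ContentKey (or its Descriptor) instead of a raw string, and the integration test just loops over ContentKey.All and verifies that a row exists for each Descriptor.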
The application is very large, so here is a brief background and the problem.
When the user logs in, a button is displayed having as its text the name of the function he is allowed to call.
The function he is allowed to call is mapped in a database table.
It's made sure that the name of the actual function is the same as the one used in the DB.
The problem:
The name is extracted and stored as the text field of the button, and also in a string variable.
Now how am I supposed to call this function using the string variable which has the name stored in it?
Like we type
name-of-function();
but here I don't know the name (the string at run time does), so I can't write something like
string()!?
You will need to use reflection to do this. Here is a rough sketch of what you need to do:
// Get the Type on which your method resides:
Type t = typeof(SomeType);
// Get the method by the name you loaded from the database
MethodInfo m = t.GetMethod("methodNameFromDb");
// Create (or otherwise obtain) an instance to invoke the method on;
// use null instead if the method is static
object instance = Activator.CreateInstance(t);
// Invoke dynamically (the second argument is the parameter array; null = no parameters)
m.Invoke(instance, null);
Depending on your actual needs you will have to modify this a little - look up the methods and types used on MSDN: MethodInfo, Invoke.
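Tied back to the button, the call could look roughly like this (assuming WinForms-style buttons, parameterless methods, and that SomeType stands in for whatever class actually holds these functions):

void FunctionButton_Click(object sender, EventArgs e)
{
    // The button's Text holds the method name that was loaded from the database.
    string methodName = ((Button)sender).Text;
    MethodInfo m = typeof(SomeType).GetMethod(methodName);
    m.Invoke(new SomeType(), null); // assumes a parameterless constructor; pass null for static methods
}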
Well, no matter what you do, there is going to have to be some kind of mapping done between a database "function" and your "real" function. You can probably use Reflection using your Types and MethodInfo.
However, this sounds like a maintenance nightmare. It also sounds like you are reinventing user roles or the like. I would be very cautious about going down this path - I think it will be much more complex and problematic than you think.