Some context:
Testing 200 regression use cases for a web API/WCF service by hand is a real pain, so I want to automate it.
I am creating a tool that automates the comparison of "before my change" and "after my change" results for a given web API or WCF request. It should be able to ignore tags that are unique to each response, such as response time and request execution time.
So far I am able to create two folders, before and after. The before folder contains the responses for all use cases in JSON/XML format prior to my change, and the after folder contains the responses for the same operation or Web API after my change. Files in each folder are named after the use case.
before             after
useCase1.xml       useCase1.xml
useCase2.xml       useCase2.xml
useCase3.xml       useCase3.xml
In code I have two objects in JSON form for a given file (I read the XML and convert it to JSON, since the folders may also contain JSON responses). I compare both objects in a loop, ignoring configured keys such as response time and request execution time. If the objects don't match, I write the use case name to a text file for human evaluation.
So far I have only had to compare tests for the same operation or Web API, so the JSON structure of useCase1 in the before folder and the after folder is the same (it differs only when my change adds new attributes, in which case I run the comparison ignoring those new attributes to check that I haven't broken any historical behaviour).
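A minimal sketch of that comparison (assuming Json.NET for the JSON handling; the library and the names here are placeholders):
// Sketch only: strip the configured keys from both trees, then compare structurally.
using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

static bool ResponsesMatch(string beforeJson, string afterJson, IEnumerable<string> ignoredKeys)
{
    var before = JObject.Parse(beforeJson);
    var after = JObject.Parse(afterJson);

    foreach (var key in ignoredKeys)
    {
        // Remove every occurrence of the ignored key, wherever it appears in the tree.
        foreach (var prop in before.Descendants().OfType<JProperty>().Where(p => p.Name == key).ToList())
            prop.Remove();
        foreach (var prop in after.Descendants().OfType<JProperty>().Where(p => p.Name == key).ToList())
            prop.Remove();
    }

    // Structural comparison of whatever is left.
    return JToken.DeepEquals(before, after);
}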
Question:
I also want to compare the response from one endpoint of a service with another, on a conditional basis, to check that both services are in sync when a feature gets delivered. (This is an "after change" only scenario, and the user will have to provide the two files that need to be compared.)
Example:
AccountDetail.xml
<User>
  <Account1>
    <food>10</food>
  </Account1>
  <Account1>
    <rent>20</rent>
  </Account1>
  <Account2>
    <food>10</food>
  </Account2>
  <Account2>
    <rent>30</rent>
  </Account2>
</User>
AccountSummary.xml
<User>
  <Summary>
    <Account1>30</Account1>
    <Account2>40</Account2>
  </Summary>
</User>
Now I want to take some input from the user, so they can tell the program to add up all the amounts under Account1 in AccountDetail.xml and compare the total with Summary/Account1.
It seems like I would have to accept some SQL-query-like input to achieve this, but I have no idea how to approach it and am looking for help. So far everything I have done is in C#, so I'd be happy with C# suggestions, but I am open to anything I can leverage. This is an internal tool for me and my colleagues, not for production.
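One direction I can imagine (I'm not sure it's the right one) is to take two XPath expressions from the user instead of inventing a SQL-like language. A rough C# sketch, with made-up names and input format:
// Sketch only: the user supplies an XPath for the detail values and an XPath for the
// summary value; everything here (names, input format) is illustrative.
using System;
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath;   // XPathSelectElements extension methods

static void CompareAggregate(string detailFile, string summaryFile,
                             string detailXPath, string summaryXPath)
{
    var detail = XDocument.Load(detailFile);
    var summary = XDocument.Load(summaryFile);

    // e.g. detailXPath = "/User/Account1/*"  -> 10 + 20 = 30
    decimal detailTotal = detail.XPathSelectElements(detailXPath)
                                .Sum(e => decimal.Parse(e.Value));

    // e.g. summaryXPath = "/User/Summary/Account1" -> 30
    decimal summaryValue = decimal.Parse(
        summary.XPathSelectElements(summaryXPath).Single().Value);

    Console.WriteLine(detailTotal == summaryValue
        ? "match"
        : "mismatch: detail total = " + detailTotal + ", summary = " + summaryValue);
}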
Even if you think my entire approach is incorrect, let me know how I can correct or improve it.
I have some doubts here. I assume you are running repetitive test cases as part of a regression test.
In that case, as I understand it, you should use the same test cases as well as the same data entries for validation. That way you shield your repetitive test runs from any changes (whether in the logic being tested or in the data). As a result, you will be in a position to compare the OLD and NEW results of the test run.
Moreover, if you use the same data for repetitive testing, you already know which asserts have to be verified to mark the test as pass or fail.
Related
I have created a library that communicates with a device and provides high-level APIs to the user.
Now I am trying to create functional tests - tests that communicate with the real device.
Question: Is it OK to check results using my own functions? For example, there are the methods GetChannelState() and SetChannelState(). Can I check the 'Get' method with the help of the 'Set' method and vice versa? Please describe an approach you use in similar cases.
Example:
There is an oscilloscope. To turn its second channel ON, the library sends the string "SELECT:CH2 ON" to the oscilloscope. To check whether the channel is on, it sends "SELECT?" and then parses the response, which will look similar to "SELECT:CH1 1;CH2 1;CH3 0;CH4 0".
To set a value there is a SetChannelState(int channelNumber) API, and to get a value there is a GetChannelState(int channelNumber) API.
So the question is whether I can use SetChannelState to test GetChannelState and vice versa.
Sure - as long as you have completed a few tests proving that your set indeed sets whatever it was given (or obeys the rules you wanted it to). If your setting logic is simple there might be no practical value in this, but growing complexity has to be tested before the rest of the code that depends on this bit.
However, there are edge cases. You might want to mock your set logic with a dummy one and just ensure that it was indeed invoked, let's say, exactly once. The same applies to both get and set behaviors: those tests are independent and shouldn't rely on the real-world implementation.
Once that is done, you can fully trust your own code and use it in your functional tests without any doubts.
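For example, a minimal sketch of that kind of interaction check, assuming the library talks to the device through some injectable connection interface and using Moq with NUnit (IDeviceConnection, Oscilloscope and the command string are stand-ins, not your real API):
using Moq;
using NUnit.Framework;

// Stand-in for whatever abstraction your library uses to talk to the device.
public interface IDeviceConnection
{
    void Send(string command);
    string Query(string command);
}

// Minimal stand-in for the library class under test.
public class Oscilloscope
{
    private readonly IDeviceConnection _connection;
    public Oscilloscope(IDeviceConnection connection) { _connection = connection; }
    public void SetChannelState(int channelNumber)
    {
        _connection.Send("SELECT:CH" + channelNumber + " ON");
    }
}

[TestFixture]
public class ChannelStateTests
{
    [Test]
    public void SetChannelState_SendsSelectCommandExactlyOnce()
    {
        var connection = new Mock<IDeviceConnection>();
        var scope = new Oscilloscope(connection.Object);

        scope.SetChannelState(2);

        // Only the interaction is checked here - no real device involved.
        connection.Verify(c => c.Send("SELECT:CH2 ON"), Times.Once());
    }
}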
Right now my Coded UI Tests use their app.config to determine the domain they execute in, which has a 1-1 relationship with environment. To simplify it:
www.test.com
www.UAT.com
www.prod.com
and in App.config I have something like:
<configuration>
  <appSettings>
    <add key="EnvironmentURLMod" value="test"/>
  </appSettings>
</configuration>
and to run the test in a different environment, I manually change the value between runs. For instance, I open the browser like this:
browserWindow.NavigateToUrl(new Uri("http://www."
+ ConfigurationManager.AppSettings.Get("EnvironmentURLMod")
+ ".com"));
Clearly this is inelegant. I suppose I had a vision where we'd drop in a new app.config for each run, but as a spoiler, this test will be run in ~10 environments, not 3, and which environments it runs in may change.
I know I could decouple these environment URL modifications into yet another XML file and make the tests access them sequentially in a data-driven scenario. But even this seems like it's not quite what I need, since if one environment fails then the whole test collapses. I've seen environment variables suggested, but that would require creating a test agent for each environment, modifying their registries, and running the tests on each of them. If that's what it takes then sure, but it seems like an enormous amount of VM bandwidth for what amounts to a collection of strings.
In an ideal world, I would like to tie these URL mods to something like Test Settings, MTM environments, or builds. I want to execute the suite of tests for each domain and report separately.
In short, what's the best way to parameterize these tests? Is there a way that doesn't involve queuing new builds, or dropping config files? Is Data Driven Testing the answer? Have I structured my solution incorrectly? This seems like it should be such a common scenario, yet my googling doesn't quite get me there.
Any and all help appreciated.
The answer here is data-driven testing, and unfortunately there's no total silver bullet, even if there is a "better than most" option.
Using any data source lets you iterate through a test in multiple environments (or over any other variable you can think of) and essentially return three different test results - one for each permutation or data row. However, you'll have to update your assertions to show which environment you're currently executing in, as the test results only show "Data Row 0" or something similar by default. If the test passes, you'll get no clue as to what's actually in the data row for the successful run unless you embed this information in the action log! I'm lucky that my use case does this automatically, since I'm just using a URL mod, but other people may need to do that on their own.
To allow on-the-fly changing of which environments we're testing in, we chose to use a TestCase data source. This has a lot of flexibility - potentially more than using a database or XML, for instance - but it comes with its own downsides. Like all data-driven scenarios, you essentially have to hard-code the test case ID into the attribute above your test method (because it's considered a property). I was hoping we could at least drop an app.config into the build drop location when we wanted to change which test case we used, but it looks like we're going to have to do a find-and-replace across the solution instead.
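For reference, a data-driven Coded UI test against the TestCase data source looks roughly like this; the server URL, project name, test case ID and column name below are placeholders, and the test class is assumed to expose the usual TestContext property:
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.TestCase",
            "http://yourtfsserver:8080/tfs/DefaultCollection;YourProject",
            "12345",                      // the hard-coded test case ID mentioned above
            DataAccessMethod.Sequential)]
[TestMethod]
public void NavigateToEnvironment()
{
    // Each data row supplies one environment; surface it in the assertion message
    // so the result says more than "Data Row 0".
    string urlMod = TestContext.DataRow["EnvironmentURLMod"].ToString();

    BrowserWindow browserWindow = BrowserWindow.Launch(
        new Uri("http://www." + urlMod + ".com"));

    Assert.IsTrue(browserWindow.Exists, "Could not reach environment: " + urlMod);
}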
If anyone knows of a better way to decouple the test case ID, or any other part of the connection string, from the code, I'll accept that as the answer here. For anyone else, you can find more information on MSDN.
I am debugging a .NET (C#) app in VS2013. I am calling a REST service and it returns data as a list of phone calls made between two given dates.
The structure of the list is something like CallDetails.Calls, where the Calls property has several child properties such as call duration, timeofcall, priceperminute, etc.
Since I am debugging the app, I want to avoid hitting the server where the REST service is hosted every time.
So my question is simply this: once I have received the list of data items, is there a way to (kind of) copy and paste the data into a file and later use it as a statically defined list instead of fetching it from the server?
In case someone wonders why I would want to do that: the server caches all incoming requests, and after a while the cache fills up, the server stops returning data, and ultimately a timeout error occurs.
I know that should really be solved on the server somehow, but that is not possible today, which is why I am asking this question.
Thanks
You could create a unit test using MSTest or NUnit. I know unit tests are scary if you haven't used them before, but for simple automated tests they are awesome. You don't have to worry about lots of the stuff people talk about for "good unit testing" in order to get started testing this one item. Once you get the list in the debugger while testing, you could then
save it out to a text file,
manually (one time) build the code to copy it back in from the text file,
use that code as the set-up for your test (see the sketch below).
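Something along these lines, assuming Json.NET for the file round-trip; the Call type and the file path are made up, so adapt them to your actual code:
using System.Collections.Generic;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;

// Stand-in for your actual model type.
public class Call
{
    public int DurationSeconds { get; set; }
    public decimal PricePerMinute { get; set; }
}

[TestClass]
public class CallDetailsTests
{
    // Step 1 (one time): while stopped in the debugger with real data in hand, run
    // something like this from the Immediate window or a throwaway line of code:
    //   File.WriteAllText(@"C:\temp\calls.json", JsonConvert.SerializeObject(callDetails.Calls));

    private List<Call> _calls;

    [TestInitialize]
    public void LoadCannedCalls()
    {
        // Step 2: every later run reads the canned data instead of hitting the server.
        var json = File.ReadAllText(@"C:\temp\calls.json");
        _calls = JsonConvert.DeserializeObject<List<Call>>(json);
    }

    [TestMethod]
    public void CannedCalls_AreAvailableForDebugging()
    {
        // Exercise whatever logic you are debugging against the static list.
        Assert.IsTrue(_calls.Count > 0);
    }
}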
MSTest tutorial: https://msdn.microsoft.com/en-us/library/ms182524%28v=vs.90%29.aspx
NUnit is generally considered superior to MSTest, and in particular it has a feature that is better for running a set of data through the same test. Based on what you've said, I don't think you need the NUnit test case feature (and I've never used it myself), but if you do want it, the keyword to look for after the quickstart guide is TestCase. http://nunitasp.sourceforge.net/quickstart.html
We created a SOAP server using PHP, and a few of the functions produce a varying set of XML elements depending on the arguments passed in.
To explain further: a function takes an argument a and, depending on the data received, it can return either of two different arrays (complexType) with a distinct number of child elements.
e.g.
if a = 9, the outcome is an array/struct:
    a[delta] = 20
    a[sigma] = yellow
if a = 3:
    a[aTotallyDifferentBallgame] = Omaha
    a[t] = 1
    a[theNumberOfElementsCanVary] = yup
To express this possible variance we used choice in the schema, thereby indicating that the outcome can be any single element within the choice, be it a simple or a complexType.
Theoretically this sounds logical, and it works fine with PHP's SOAP client, but when we tried to use the Add Service Reference feature of Visual Studio in a Forms application, it failed to generate code, citing that the use of xs:choice is not allowed, for some unfathomable reason.
What I would really like to know is what changes I need to make to my WSDL or SOAP server to make this work. One workaround we considered was fixing the outcome to a single possible scenario and using a completely different function for the other outcome, thereby avoiding choice altogether, but frankly that seems redundant and weird.
Is there anything I have missed? Please let me know any ideas you have. Thanks!
The Add Service Reference machinery tries to map the schema to C# classes, and there is no structure in a C# class corresponding to a choice in the schema - a class cannot enforce having a value for either one property or another one, but not both.
My suggestion would be to replace the choice with a sequence of optional elements. The corresponding C# class will then have a property for each of the elements, and only one of them will have a value; the others will be null, because the PHP service returns a value for only one of them at a time.
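The generated class would then have roughly this shape (a sketch only - the names are invented to match your example, not what svcutil would actually emit):
public class LookupResponse
{
    // Populated when a == 9; otherwise null.
    public DeltaSigmaResult FirstVariant { get; set; }

    // Populated when a == 3; otherwise null.
    public BallgameResult SecondVariant { get; set; }
}

public class DeltaSigmaResult
{
    public int Delta { get; set; }
    public string Sigma { get; set; }
}

public class BallgameResult
{
    public string ATotallyDifferentBallgame { get; set; }
    public int T { get; set; }
    public string TheNumberOfElementsCanVary { get; set; }
}

// The caller simply checks which variant is non-null:
//   if (response.FirstVariant != null) { ... }
//   else if (response.SecondVariant != null) { ... }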
I'm writing some code that will serialize a C# object to JSON, send it over the wire and deserialize the JSON to a Python object.
The reverse will also be done, i.e. serialize a Python object to JSON, send it over the wire and deserialize the JSON to a C# object.
On the C# side, I use the ServiceStack JSON libraries. In Python, I'm using the built-in json library. The C# library can be changed easily if necessary; the Python one far less so.
// C#
var serializer = new JsonSerializer();
var json = serializer.SerializeToString(request);
# Python
# JsonCodec inherits from JSONEncoder, hooks 'default', and just
# returns obj.__dict__ where obj is the object to be serialized
actualSerializedResponse = JsonCodec().encode(response)
I've written a unit test in C# to verify that the ServiceStack serialized JSON is as expected by the Python side. In the test, an instance of Foo is created, populated with some hardcoded values, then serialized. To ensure validity, I compare the serialized JSON to some JSON saved in a file, where the contents of the file represent what the Python side expects.
Similarly, there's a unit test in Python to verify that the built-in json library's serialized JSON is as expected by the C# side. Again, to ensure validity, I compare the actual serialized JSON to some JSON saved in a file.
In both cases, the fact that I'm comparing the serialized JSON to some JSON saved in a file implies that the order in which properties are serialized to JSON must be consistent every time and everywhere the tests are run.
My questions:
In the C# unit test, it seems that the order of the properties in the JSON matches the order in which the properties were defined in the C# class whose instance is being serialized. Can this be relied on?
In the Python unit test, the order of the properties is consistent but arbitrary. This makes sense, since it relies on __dict__ and Python dictionaries are unordered. But can this be relied on every time and everywhere?
Is there a better way to do all this?
Many thanks in advance.
Regarding the Python side: you can pass the JSON decoder an option to use an "ordered dictionary" (this requires Python 2.7, however):
from http://docs.python.org/library/json.html#json.load:
object_pairs_hook is an optional function that will be called with the result of any object literal decoded with an ordered list of pairs. The return value of object_pairs_hook will be used instead of the dict. This feature can be used to implement custom decoders that rely on the order that the key and value pairs are decoded (for example, collections.OrderedDict() will remember the order of insertion). If object_hook is also defined, the object_pairs_hook takes priority.
Your first two questions are easy to address. You should not rely on undocumented, implementation-specific behavior whenever you can avoid it, and I think you can avoid doing so here. To flesh out this answer a bit more, I'll spend some time on your last question.
Is there a better way to do all this?
The first step is recognizing that what you've currently written are not unit tests.
Specifically:
A test is not a unit test if:
It communicates across the network
It touches the file system
Your tests are doing both of these. That's not to say that your tests aren't worth anything, but it's important to note that they are not unit tests. The level that these tests are working at is integration testing. Integration testing is:
The phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing.
I think your problem is that you're trying to mix the tasks of integration and unit testing.
When writing your unit tests you want each test to rely on as few components as possible and to address as specific a case as possible. In your case, that would mean tests in both C# and Python, neither of which relies on output from the other. In both programs, have your serialization code work on the simplest cases you require it to work for, and validate that you get the JSON loading/dumping that you want. This could mean hand-writing JSON as strings in your unit testing code (you want your tests to be small enough that this isn't a pain).
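A minimal sketch of such a C# test, assuming ServiceStack.Text for serialization (as in your code) and Json.NET purely for an order-insensitive comparison; Foo and its properties are made up:
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json.Linq;

// Made-up type standing in for whatever you actually serialize.
public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[TestClass]
public class FooSerializationTests
{
    [TestMethod]
    public void SerializesFooTheWayPythonExpects()
    {
        var foo = new Foo { Id = 42, Name = "bar" };

        string actualJson = ServiceStack.Text.JsonSerializer.SerializeToString(foo);
        string expectedJson = "{\"Id\":42,\"Name\":\"bar\"}";   // hand-written, lives in the test

        // Compare parsed tokens rather than raw strings, so the test does not depend
        // on the (undocumented) order in which properties are emitted.
        Assert.IsTrue(JToken.DeepEquals(JToken.Parse(expectedJson), JToken.Parse(actualJson)));
    }
}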
For your integration tests, you just want to check that nothing blows up when your different pieces talk to each other. Here, you don't have to worry about whether the serialization and deserialization are correct, since you've already covered that with your unit tests. If they talk to each other well, great!
Then, if you run into bugs, fix them and document them with an appropriate test case. In the end you should have lots of little unit tests and a few slightly bigger integration tests.