Is it possible to keep NDbUnit test data in separate XML files? - c#

I'm looking at using NDbUnit to help with the unit testing of an application. As the question title states, I'm wondering if it is possible to keep NDbUnit test data in separate XML files. I have noticed already that my single test data XML file is quite big and could start to become unmanageable when I add a few more entities to it.
Now, having read this question, it looks as if it's not possible, but I would just like to be sure.
If it helps, this is sample code which illustrates the problem. The idea is that programs are associated with vendors. I have set up test data containing 3 vendors, the second of which has 3 programs. TestData.xml contains all of the test data for all of the vendors and programs. When I use it, the unit test passes as expected. If I try to read the individual XML files in separately using multiple calls to db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity); it seems as if the second call overwrites whatever was done by the first one.
private const string xmlSchema = @"..\..\schema.xsd";
// All of the test data in one file.
private const string xmlData = @"..\..\XML Data\TestData.xml";
// Individual test data files.
private const string vendorData = @"..\..\XML Data\Vendor_TestData.xml";
private const string programData = @"..\..\XML Data\Program_TestData.xml";
public void WorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);
    db.ReadXml(xmlData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}

public void NotWorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);

    db.ReadXml(vendorData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    db.ReadXml(programData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    // This line throws an ArgumentOutOfRangeException because there are no vendors in the collection.
    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}
This does work:

Watch out for the meaning of the DbOperationFlag value you are using; the "Clean" part of "CleanInsertIdentity" means "clean out the existing records before performing the insert-identity part of the process".
See http://code.google.com/p/ndbunit/source/browse/trunk/NDbUnit.Core/DbOperationFlag.cs for more info on the possible enum values.
You might try the same process with either Insert or InsertIdentity to see if you can achieve what you are after, but by design CleanInsertIdentity isn't going to work for this scenario :)
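For example, a minimal sketch of that suggestion, reusing the fields from the question (whether this combination fits depends on your schema's identity and foreign-key constraints):

INDbUnitTest db = new SqlDbUnitTest(connectionString);
db.ReadXmlSchema(xmlSchema);

// Clean the tables once, then insert the first file's data.
db.ReadXml(vendorData);
db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

// Insert the second file's data without cleaning first,
// so the vendors from the previous step survive.
db.ReadXml(programData);
db.PerformDbOperation(DbOperationFlag.InsertIdentity);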

Related

Add new record hangs on WinForm - and I don't understand why

I have inherited a Windows Forms application. We are having some performance issues when trying to save a new record to the SQL table; it hangs.
For the most part, I have ruled out the database or Table Structure as the problem. I don't believe that is the case here.
I am very new to this, and am having trouble stepping through and finding the root of the problem.
The form basics: (Only included what I thought was relevant code, but can add more if needed)
public partial class frmBadgeCreate : daBaseForm
{
    private boBadger_History oBadger_History;
    Form mainFormHandler;
    private string whome;

    public frmBadgeCreate(String customItem)
    {
        InitializeComponent();
        if (daAppDesktop.IsRunning)
        {
            oBadger_History = new boBadger_History();
            oBadger_History.GetAll(); // This line right here seems to have some importance
            whome = customItem;
        }
    }

    public void saveitall() // Save action that hangs
    {
        // Listing of form variables to be saved to table columns
        var vlast = textbox_H_lname.Text;
        var vfirst = textbox_H_fname.Text;
        // ...and on and on...
        var badger_History = new Badger_History() { hlastname = vlast, vfirstname = vfirst /* ...and on and on... */ };
        oBadger_History.Add(badger_History);
        oBadger_History.Save(); // This is where things just hang forever.
    }
}
Because this is a 'Lone Ranger App' that was handed to me, I am struggling to grasp it. What really confuses me is that when I comment out the oBadger_History.GetAll() line, the save works very fast! Instantly. When I add the line back in, it hangs. I only know this because I have spent days commenting out each line, one by one, and testing the results.
The oBadger_History.GetAll(); looks like it is somehow used for the auto complete feature, so it is needed.
What has me totally scratching my head is that I can't see the connection between the 'GetAll' and the save. Why would the getAll impact the save function at all?
Here is the GetAll code, if that sheds any light:
public daBindingList<Badger_History> GetAll()
{
    BadgerContext cn = (BadgerContext)this.context;
    List<Badger_History> ents = cn.Badger_History.ToList();
    this.EntityList = new daBindingList<Badger_History>(ents);
    return this.EntityList;
}
Again, I don't believe that the SQL database/tables are the problem, seeing that I can get the save to work properly by removing the one line of code. I just can't seem to find a way to resolve it.
This line:
List<Badger_History> ents = cn.Badger_History.ToList();
Will load EVERY row in the Badger_History table into memory. How many records are in this table?
If there are lots of rows, then when you call Save() (which I assume is some sort of wrapper around SaveChanges()), EF will look through every tracked row for anything that has changed. In your case, there may be 0 changed rows, since all you are interested in is the row you are adding.
To speed things up, you could change the load into a 'non-tracking' query:
List<Badger_History> ents = cn.Badger_History.AsNoTracking().ToList();
This will still load all the records, but they will no longer be inspected by the change tracker when saving.
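Put together, a minimal sketch of the adjusted GetAll (assuming BadgerContext is a standard Entity Framework context, which the question's code suggests):

public daBindingList<Badger_History> GetAll()
{
    BadgerContext cn = (BadgerContext)this.context;

    // AsNoTracking keeps these rows out of the change tracker, so a later
    // Save/SaveChanges call no longer scans them all for modifications.
    List<Badger_History> ents = cn.Badger_History.AsNoTracking().ToList();

    this.EntityList = new daBindingList<Badger_History>(ents);
    return this.EntityList;
}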

Stopping a test in C# Visual Studio if no data exists with smoke test

I have a pack of smoke tests that run and randomly pull data from a table, then search by that data in another method and assert afterwards. The tests will fail if no data exists. I have a reusable method called RandomVinSelect(). I want to stop the test if there is no data. I have searched for a way to report a warning that the test could not be run, instead of failing it, and at this point I am stumped. This is the code I have; I do not want to run the lines after UI.RandomVINSelect if no data is found. I am thinking there may not be a way to do this using xUnit and it would just be pass or fail...
public static string RandomVinSelect(this Browser ui, string table,
    string selector)
{
    // I want to stop the test here if no data exists, or create a
    // DataExists method that stops a test.
    int rows = ui.GetMultiple(table).Count;
    Random num = new Random();
    string randomnum = Convert.ToString(num.Next(1, rows));
    string newselector = selector.Replace("1", randomnum);
    string vin = ui.Get(newselector).Text;
    return vin;
}
Perhaps just put the smoke tests in a separate test package or collection and include a test that simply checks whether it can get data. Then, when you run this group of tests and that first test fails, you know it is just due to no data being available.
Not ideal, but it might be good enough?
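A minimal sketch of such a guard test (xUnit, since the question mentions it; the ui field of the question's Browser type is assumed to be set up elsewhere in the fixture):

[Fact]
public void SmokeData_IsAvailable()
{
    // If this canary fails, the rest of the smoke pack failed
    // for lack of data, not because of a product regression.
    Assert.NotEqual("0", ui.GetPageCount());
}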
I installed the new NuGet package (xunit.SkippableFact) and added [SkippableFact] to my smoke tests. Then I created a method that can be called to check whether data is available; it runs Skip.If(condition, "my message") and ends the test early if no data is present. The test then shows a warning symbol in the Test Explorer.
public static void IsDataAvailable(this Browser ui)
{
    bool data = true;
    string pagecount = ui.GetPageCount();
    if (pagecount == "0") data = false;
    Skip.If(data == false, "The test could not run because no data is available for validation.");
}
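For example, a hypothetical smoke test wired up this way (the selectors and the SearchByVin helper are made up for illustration; RandomVinSelect, IsDataAvailable, and Browser come from the code above):

[SkippableFact]
public void RandomVin_CanBeSearched()
{
    // Skips (rather than fails) the test when no data is present.
    ui.IsDataAvailable();

    string vin = ui.RandomVinSelect("#vinTable", "#vinTable tr:nth-child(1) td.vin");
    ui.SearchByVin(vin); // Hypothetical follow-on search method.
    Assert.False(string.IsNullOrEmpty(vin));
}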

App Resources breaking in unit tests, due to uncontrollable shadow copying from ReSharper

Background:
On an application I am working on, I am writing tests using a mixture of Visual Studio 2015, SpecFlow, and ReSharper 2016.3 (I'll abbreviate this as R#, because I'm lazy.)
The application I am working on sends HTML-formatted emails based on templates, which are stored in HTML files set to Copy Always in Visual Studio 2015.
Problem:
When I attempt to run my tests, I get the following exception:
System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Users\[Me]\AppData\Local\JetBrains\Installations\ReSharperPlatformVs14_001\Resources\SomeEmailTemplate.html'
The directory was not the output directory of the project I am working on, so I double-checked my R# settings, and confirmed that Shadow Copy was turned off. To be perfectly clear, my R# Shadow Copy checkbox is indeed unchecked.
The offending code is really pretty simple. Normal remedies like TestContext.CurrentContext.TestDirectory are not something I can, should, or even want to use, because this code is needed by the application itself. It would be inappropriate to put test-framework code into the application under test.
public class HtmlTemplateLog : ISectionedLog, IRenderableLog
{
    #region Variables / Properties

    private readonly string _rawHtml;
    private readonly Dictionary<string, StringBuilder> _sectionDictionary = new Dictionary<string, StringBuilder>();
    private StringBuilder _workingSection;

    #endregion Variables / Properties

    #region Constructor

    public HtmlTemplateLog(string templateFile)
    {
        // This is specifically what breaks the tests.
        _rawHtml = File.ReadAllText(templateFile)
            .Replace("\r\n", string.Empty); // Replace all newlines with empty strings.
    }

    #endregion Constructor

    #region Methods

    // Methods work with the section dictionary.
    // The RenderLog method does a string.Replace on all section names in the HTML.
    // These methods aren't important for the question.

    #endregion Methods
}
This is invoked as in the example below:
_someLog = new HtmlTemplateLog("Resources/SomeEmailTemplate.html");
// ...code...
_someLog.WriteLineInSection("{someSection}", "This is a message!");
string finalHtml = _someLog.RenderLog();
Questions:
1. I've turned off Shadow Copy for my R# tests. Why is it still making shadow copies?
2. How can I work around R# not respecting the Shadow Copy checkbox, given that this is not test code, and that remedies normally appropriate for test code aren't appropriate here?
I've discovered an answer for #2, though it's rather clunky. I was inspired by the answer from #mcdon to a less-detailed version of the question.
Pretty much, if I don't want to resort to TestContext.CurrentContext.TestDirectory, then I need to turn my local filenames into absolute paths. Unfortunately, R#'s broken Shadow Copy setting creates more work, since I can't just interrogate the currently-executing assembly's location - it will tell me the wrong thing. I need to get at the assembly's CodeBase instead and interrogate that.
I'm still a bit worried about what this code will do when we try to run it on the build server, however - I'm expecting 'unexpected' results. In that light, I'm wondering if the unexpected results can truly be called unexpected, given that I'm expecting that this won't work...
Anyways, the fix I came up with was this field-property system:
private string _presentWorkingDirectory;
private string PresentWorkingDirectory
{
    get
    {
        if (!string.IsNullOrEmpty(_presentWorkingDirectory))
            return _presentWorkingDirectory;

        // CodeBase points at the assembly's original location rather than
        // the shadow-copied one, so paths resolved against it stay correct.
        var assembly = Assembly.GetExecutingAssembly();
        var codebase = new Uri(assembly.CodeBase);
        var filePath = codebase.LocalPath;
        var path = Directory.GetParent(filePath);
        _presentWorkingDirectory = path.ToString();
        return _presentWorkingDirectory;
    }
}
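With that property in place, the earlier call site can be rooted against the real assembly location instead of a bare relative path (a sketch reusing the names from the question):

string templatePath = Path.Combine(PresentWorkingDirectory, "Resources", "SomeEmailTemplate.html");
_someLog = new HtmlTemplateLog(templatePath);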

Reading in a complex text file to input into database

I am working on a program that will read in a text file and then insert areas of the text file into different columns on a database. The text file is generally set up like this:
"Intro information"
"more Intro information"
srvrmgr> "information about system"
srvrmgr> list parameters for component *ADMBatchProc*
"Headers"
*Name of record* *alias of record* *value of record*
The columns create a table containing all of the setting information for this component. Once all of the settings are listed, the file moves to another component and returns all the information for that component in a new table. I need to read in the component and the information in the tables without the headers or the other information. I will then need to be able to transfer that data into a database. The columns are fixed width in each table within the file.
Any recommendations about how to approach this are welcome. I have never read in a file this complex, so I don't really know how to approach ignoring a lot of information while trying to get other information ready for a database. Also, the component value I am trying to gather always follows the word component on a line that starts with "srvrmgr".
The '*' represents areas that will be put into the database.
Siebel Enterprise Applications Siebel Server Manager, Version 8.1.1.11 [23030] LANG_INDEPENDENT
Copyright (c) 1994-2012, Oracle. All rights reserved.
The Programs (which include both the software and documentation) contain
proprietary information; they are provided under a license agreement containing
restrictions on use and disclosure and are also protected by copyright, patent,
and other intellectual and industrial property laws. Reverse engineering,
disassembly, or decompilation of the Programs, except to the extent required to
obtain interoperability with other independently created software or as specified
by law, is prohibited.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of
Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
If you have received this software in error, please notify Oracle Corporation
immediately at 1.800.ORACLE1.
Type "help" for list of commands, "help <topic>" for detailed help
Connected to 1 server(s) out of a total of 1 server(s) in the enterprise
srvrmgr> configure list parameters show PA_NAME,PA_ALIAS,PA_VALUE
srvrmgr>
srvrmgr> list parameters for component ADMBatchProc
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
ADM Data Type Name ADMDataType
ADM EAI Method Name ADMEAIMethod Upsert
ADM Deployment Filter ADMFilter
213 rows returned.
srvrmgr> list parameters for component ADMObjMgr_enu
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
AccessibleEnhanced AccessibleEnhanced False
This is the beginning of the text file. It is produced by a system called Siebel to show all of the settings for this environment. I need to pull the component name (there are multiple in the actual file, but the ones shown here are 'ADMBatchProc' and 'ADMObjMgr_enu'), and then the data shown in the table below it that was created by Siebel. The rest of the information is irrelevant for the purposes of the task.
I would recommend using Test-Driven Development techniques in this case. I'm guessing that your possible variations of input format are near infinite.
Try this:
1) Create an interface that will represent the data operations or parsing logic you expect the application to perform. For example:
public interface IParserBehaviors {
    void StartNextComponent();
    void SetTableName(string tableName);
    void DefineColumns(IEnumerable<string> columnNames);
    void LoadNewDataRow(IEnumerable<object> rowValues);
    DataTable ProduceTableForCurrentComponent();
    // etc.
}
2) Gather as many small examples of discrete inputs that have well-defined behaviors as possible.
3) Inject a behaviors handler into your parser. For example:
public class Parser {
    private const string COMPONENT_MARKER = "srvrmgr";
    private readonly IParserBehaviors _behaviors;

    public Parser(IParserBehaviors behaviors) {
        _behaviors = behaviors;
    }

    public void ReadFile(string filename) {
        foreach (string line in File.ReadLines(filename)) {
            // maintain some state
            if (line.StartsWith(COMPONENT_MARKER)) {
                DataTable table = _behaviors.ProduceTableForCurrentComponent();
                // save table to the database
                _behaviors.StartNextComponent();
            }
            else if (/* condition */) {
                // parse some text
                _behaviors.LoadNewDataRow(values);
            }
        }
    }
}
4) Create tests around the expected behaviors, using your preferred mocking framework. For example:
public void FileWithTwoComponents_StartsTwoNewComponents() {
    string filename = "twocomponents.log";
    Mock<IParserBehaviors> mockBehaviors = new Mock<IParserBehaviors>();
    Parser parser = new Parser(mockBehaviors.Object);

    parser.ReadFile(filename);

    mockBehaviors.Verify(mock => mock.StartNextComponent(), Times.Exactly(2));
}
This way, you will be able to integrate under controlled tests. When (not if) someone runs into a problem, you can distill which case wasn't covered and add a test around that behavior, after extracting the case from the log being used. Separating concerns this way also lets your parsing logic stay independent of your data-operation logic. Since parsing specific behaviors seems central to your application, it is a perfect fit for some domain-specific interfaces.
You'll want to read the text file using StreamReader:
using (StreamReader reader = new StreamReader(path))
{
    string contents = reader.ReadToEnd();
    // Displays your file - now you can decide how to manipulate it.
    Console.WriteLine(contents);
}
Perhaps then you'll use Regex to capture the data you'd like to insert.
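For instance, a hedged sketch of a pattern that pulls the component name off the 'list parameters for component' lines described above (hypothetical pattern; line stands for the current line of the file, and the regex will need adjusting to the real layout):

using System.Text.RegularExpressions;

// Matches e.g. "srvrmgr> list parameters for component ADMBatchProc"
// and captures the component name in the first group.
Match match = Regex.Match(line, @"^srvrmgr>\s+list parameters for component\s+(\S+)");
if (match.Success)
{
    string componentName = match.Groups[1].Value;
    // ...start collecting the fixed-width table that follows...
}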
You might insert into the db like this:
using (TransactionScope transactionScope = new TransactionScope())
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        SqlCommand command1 = new SqlCommand(
            @"INSERT INTO People ([FirstName], [LastName], [MiddleInitial])
              VALUES('John', 'Doe', null)",
            connection);
        SqlCommand command2 = new SqlCommand(
            @"INSERT INTO People ([FirstName], [LastName], [MiddleInitial])
              VALUES('Jane', 'Doe', null)",
            connection);
        command1.ExecuteNonQuery();
        command2.ExecuteNonQuery();
    }
    transactionScope.Complete();
}
Examples adapted from Wouter de Kort's C# 70-483.

Using TDD with OpenXml-SDK

I have started using a TDD approach to develop a small app that reads data from Excel files. Using a repository-pattern-style approach, I have come to a hurdle that baffles me.
In order to read the Excel files, I am using the OpenXml-SDK. Now, typically, reading from an Excel file using the SDK requires quite a few steps to actually get the values you want to read.
The approach I have taken thus far is reflected in the following test and accompanying function.
[Test]
public void GetRateData_ShouldReturn_SpreadSheetDocument()
{
    //Arrange
    var fpBuilder = new Mock<IDirectoryBuilder>();
    fpBuilder.Setup(fp => fp.FullPath()).Returns(It.IsAny<string>());

    var doc = new Mock<IOpenXmlUtilities>();
    doc.Setup(d => d.OpenReadOnlySpreadSheet(It.IsAny<string>()))
        .Returns(Mock.Of<SpreadsheetDocument>());

    swapData = new SwapRatesRepository(fpBuilder.Object, doc.Object);

    //Act
    var result = swapData.GetRateData();

    //Assert
    doc.Verify();
    fpBuilder.Verify();
}
public class SwapRatesRepository : IRatesRepository<SwapRates>
{
    private const string SWAP_DATA_FILENAME = "DATE_MKT_ZAR_SWAPFRA1.xlsx";

    private IDirectoryBuilder builder;
    private IOpenXmlUtilities openUtils;

    public SwapRatesRepository(IDirectoryBuilder builder)
    {
        // TODO: Complete member initialization
        this.builder = builder;
    }

    public SwapRatesRepository(IDirectoryBuilder builder,
        IOpenXmlUtilities openUtils)
    {
        // TODO: Complete member initialization
        this.builder = builder;
        this.openUtils = openUtils;
    }

    public SwapRates GetRateData()
    {
        // determine the path of the file based on the date
        builder.FileName = SWAP_DATA_FILENAME;
        var path = builder.FullPath();

        // open the excel file
        using (SpreadsheetDocument doc = openUtils.OpenReadOnlySpreadSheet(path))
        {
            //WorkbookPart wkBookPart = doc.WorkbookPart;
            //WorksheetPart wkSheetPart = wkBookPart.WorksheetParts.First();
            //SheetData sheetData = wkSheetPart.Worksheet
            //    .GetFirstChild<SheetData>();
        }

        return new SwapRates(); // ignore this class for now, design later
    }
}
However, the next step after the spreadsheet is open would be to actually start interrogating the Excel object model to retrieve the values. As noted above, I am making use of mocks for anything OpenXml-related. However, in some cases the objects can't be mocked (or I don't know how to mock them, since they are static). That gave rise to IOpenXmlUtilities, which is merely a set of simple wrapper calls into the OpenXml-SDK.
In terms of design, we know that reading data from Excel files is a short-term solution (6-8 months), so these tests only affect the repository/data access for the moment.
Obviously I don't want to leave the TDD approach (as tempting as it is), so I am looking for advice and guidance on how to continue my TDD endeavours with the OpenXml-SDK. The other aspect relates to mocking: I am confused as to when and how to use mocks in this case. I don't want to unknowingly write tests that test the OpenXml-SDK itself.
Side note: I know that the SOLIDity of my design can be improved, but I am leaving that for now. I have a set of separate tests that relate to the builder object. The other side effect that may occur is the design of an OpenXml-SDK wrapper library.
Edit: Unbeknownst to me at the time, by creating the OpenXml-SDK wrappers I had used a design pattern similar (or identical) to the Adapter pattern.
If you can't mock it and can't create a small unit test, it might be better to take it to a higher level and write a scenario test instead. You can use the [TestInitialize] and [TestCleanup] methods to create a setup for the test, as sketched below.
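A bare-bones sketch of that shape (MSTest attributes, as named above; the concrete DirectoryBuilder and OpenXmlUtilities implementations and the test-data path are assumptions for illustration):

[TestClass]
public class SwapRatesScenarioTests
{
    // Hypothetical copy of a real spreadsheet checked in as test data.
    private const string TestFile = @"TestData\DATE_MKT_ZAR_SWAPFRA1.xlsx";

    [TestInitialize]
    public void Setup()
    {
        // Arrange: make sure the known spreadsheet is in place before each scenario.
        Assert.IsTrue(File.Exists(TestFile), "Test spreadsheet is missing.");
    }

    [TestCleanup]
    public void Teardown()
    {
        // Tidy up anything the scenario wrote to disk.
    }

    [TestMethod]
    public void GetRateData_ReadsRatesFromRealSpreadsheet()
    {
        // Exercise the real repository against the real file end to end.
        var repository = new SwapRatesRepository(new DirectoryBuilder(), new OpenXmlUtilities());
        var rates = repository.GetRateData();
        Assert.IsNotNull(rates);
    }
}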
Using Pex and Moles might be another way to get it tested.
