Reading in a complex text file to input into database - c#

I am working on a program that will read in a text file and then insert areas of the text file into different columns on a database. The text file is generally set up like this:
"Intro information"
"more Intro information"
srvrmgr> "information about system"
srvrmgr> list parameters for component *ADMBatchProc*
"Headers"
*Name of record* *alias of record* *value of record*
The columns create a table containing all of the setting information for this component. Once all of the settings are listed, the file moves to another component and returns all the information for that component in a new table. I need to read in the component and the information in the tables, without the headers or the other information, and then transfer that data into a database. The columns are fixed-width in each table within the file.
Any recommendations about how to approach this are welcome. I have never read in a file this complex, so I don't really know how to approach ignoring a lot of information while getting other information ready for a database. Also, the component value I am trying to gather always follows the word "component" on a line that starts with "srvrmgr".
The '*' represents areas that will be put into the database.
Siebel Enterprise Applications Siebel Server Manager, Version 8.1.1.11 [23030] LANG_INDEPENDENT
Copyright (c) 1994-2012, Oracle. All rights reserved.
The Programs (which include both the software and documentation) contain
proprietary information; they are provided under a license agreement containing
restrictions on use and disclosure and are also protected by copyright, patent,
and other intellectual and industrial property laws. Reverse engineering,
disassembly, or decompilation of the Programs, except to the extent required to
obtain interoperability with other independently created software or as specified
by law, is prohibited.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of
Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
If you have received this software in error, please notify Oracle Corporation
immediately at 1.800.ORACLE1.
Type "help" for list of commands, "help <topic>" for detailed help
Connected to 1 server(s) out of a total of 1 server(s) in the enterprise
srvrmgr> configure list parameters show PA_NAME,PA_ALIAS,PA_VALUE
srvrmgr>
srvrmgr> list parameters for component ADMBatchProc
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
ADM Data Type Name ADMDataType
ADM EAI Method Name ADMEAIMethod Upsert
ADM Deployment Filter ADMFilter
213 rows returned.
srvrmgr> list parameters for component ADMObjMgr_enu
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
AccessibleEnhanced AccessibleEnhanced False
This is the beginning of the text file. It is produced by a system called Siebel to show all of the settings for this environment. I need to pull the component name (there are multiple in the actual file, but the ones shown here are 'ADMBatchProc' and 'ADMObjMgr_enu'), and then the data shown in the table below it that was created by Siebel. The rest of the information is irrelevant for this task.

I would recommend using Test-Driven Development techniques in this case. I'm guessing that your possible variations of input format are near infinite.
Try this:
1) Create an interface that will represent the data operations or parsing logic you expect the application to perform. For example:
public interface IParserBehaviors {
    void StartNextComponent();
    void SetTableName(string tableName);
    void DefineColumns(IEnumerable<string> columnNames);
    void LoadNewDataRow(IEnumerable<object> rowValues);
    DataTable ProduceTableForCurrentComponent();
    // etc.
}
2) Gather as many small examples of discrete inputs that have well-defined behaviors as possible.
3) Inject a behaviors handler into your parser. For example:
public class Parser {
    private const string COMPONENT_MARKER = "srvrmgr";
    private readonly IParserBehaviors _behaviors;

    public Parser(IParserBehaviors behaviors) {
        _behaviors = behaviors;
    }

    public void ReadFile(string filename) {
        foreach (string line in File.ReadLines(filename)) {
            // maintain some state
            if (line.StartsWith(COMPONENT_MARKER)) {
                DataTable table = _behaviors.ProduceTableForCurrentComponent();
                // save table to the database
                _behaviors.StartNextComponent();
            }
            else if (/* condition */) {
                // parse some text
                _behaviors.LoadNewDataRow(values);
            }
        }
    }
}
4) Create tests around the expected behaviors, using your preferred mocking framework. For example:
public void FileWithTwoComponents_StartsTwoNewComponents() {
    string filename = "twocomponents.log";
    Mock<IParserBehaviors> mockBehaviors = new Mock<IParserBehaviors>();
    Parser parser = new Parser(mockBehaviors.Object);

    parser.ReadFile(filename);

    mockBehaviors.Verify(mock => mock.StartNextComponent(), Times.Exactly(2));
}
This way, you will be able to integrate under controlled tests. When (not if) someone runs into a problem, you can distill which case wasn't covered and add a test around that behavior, after extracting the case from the log in question. Separating concerns this way also keeps your parsing logic independent of your data-operation logic. Parsing-specific behaviors seem to be central to your application, so this looks like a perfect fit for some domain-specific interfaces.
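As a concrete starting point for one of those behaviors: since each table in the file is preceded by a dashed separator line, the fixed column widths can be derived from the runs of dashes and then used to slice the data rows. A minimal sketch, assuming the separator format shown in the question (the helper names are mine, not part of any library):

using System;
using System.Collections.Generic;

static class FixedWidthHelper
{
    // Each run of '-' in the separator line marks one column's start and width.
    public static List<(int Start, int Length)> GetColumnRanges(string dashLine)
    {
        var ranges = new List<(int Start, int Length)>();
        int start = -1;
        for (int i = 0; i <= dashLine.Length; i++)
        {
            bool isDash = i < dashLine.Length && dashLine[i] == '-';
            if (isDash && start < 0)
            {
                start = i; // a column begins
            }
            else if (!isDash && start >= 0)
            {
                ranges.Add((start, i - start)); // a column ends
                start = -1;
            }
        }
        return ranges;
    }

    // Slice a data row into trimmed cell values using the detected ranges.
    public static IEnumerable<string> SliceRow(string row, List<(int Start, int Length)> ranges)
    {
        foreach (var (start, length) in ranges)
        {
            int available = Math.Min(length, Math.Max(0, row.Length - start));
            yield return available > 0 ? row.Substring(start, available).Trim() : string.Empty;
        }
    }
}

A DefineColumns/LoadNewDataRow implementation could call GetColumnRanges when it sees the dashed line and SliceRow for every row after it, until the "N rows returned." line.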

You'll want to read the text file using StreamReader:
using (StreamReader reader = new StreamReader(path))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line); // your file, line by line - now you can decide how to manipulate it
    }
}
Perhaps then you'll use Regex to capture the data you'd like to insert:
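For instance, since the component name always follows the word "component" on a line that starts with "srvrmgr", something along these lines could capture it (a sketch, not tested against every variant of the log):

using System.Text.RegularExpressions;

// 'line' is the current line read from the file, e.g.
// "srvrmgr> list parameters for component ADMBatchProc"
Match match = Regex.Match(line, @"^srvrmgr>\s+list parameters for component\s+(\S+)");
if (match.Success)
{
    string component = match.Groups[1].Value; // "ADMBatchProc"
}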
You might insert into the db like this:
using (TransactionScope transactionScope = new TransactionScope())
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        SqlCommand command1 = new SqlCommand(
            "INSERT INTO People ([FirstName], [LastName], [MiddleInitial]) " +
            "VALUES ('John', 'Doe', null)",
            connection);
        SqlCommand command2 = new SqlCommand(
            "INSERT INTO People ([FirstName], [LastName], [MiddleInitial]) " +
            "VALUES ('Jane', 'Doe', null)",
            connection);
        command1.ExecuteNonQuery();
        command2.ExecuteNonQuery();
    }
    transactionScope.Complete();
}
Examples adapted from Wouter de Kort's C# 70-483.
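One caveat: since the real values will come from the parsed file rather than being hard-coded, a parameterized command avoids both quoting problems and SQL injection. A hedged variant of the same insert:

// reuses the open 'connection' from the example above
SqlCommand command = new SqlCommand(
    "INSERT INTO People ([FirstName], [LastName], [MiddleInitial]) " +
    "VALUES (@firstName, @lastName, @middleInitial)",
    connection);
command.Parameters.AddWithValue("@firstName", "John");
command.Parameters.AddWithValue("@lastName", "Doe");
command.Parameters.AddWithValue("@middleInitial", DBNull.Value);
command.ExecuteNonQuery();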

Related

Fortify cleanse rule doesn't cleanse everything

My team has recently started using Fortify Static Code Analyzer (version 17.10.0 156) on our .NET code base (C# 6 and VB.NET), and we are experiencing some pain with the number of false positives it reports. For any given issue we can't know whether it is a false positive without looking at it, and we don't want any actual problems to get lost in the clutter.
We have a utilities library with a method ReadEmbeddedSql which extracts SQL from resources embedded in the assembly so it can be executed. Fortify flags any OracleCommand (from Oracle.ManagedDataAccess.Client) that executes the SQL returned from this method with a SQL Injection vulnerability.
This vulnerability is reported at the point the SQL is set on the command, be it via the constructor or via the CommandText property.
It does not do this if the ReadEmbeddedSql method is defined in the local assembly.
A pared-down listing of the source code which produces this result follows below. In the example code, ExecuteSomeSql() and ExecuteSomeSqlDifferently() are flagged with a vulnerability while ExecuteSomeLocalSql() is not.
For the Analysis Evidence it only lists the line where the OracleCommand is created:
TestDao.cs:27 - OracleCommand()
RuleID: 31D4607A-A3FF-447C-908A-CA2BBE4CE4B7
In the details for the issue it provides:
On line 27 of TestDao.cs, the method ExecuteSomeSql() invokes a SQL query
built using input coming from an untrusted source. This call could
allow an attacker to modify the statement's meaning or to execute
arbitrary SQL commands.
A sample diagram presented by Fortify for this issue (not reproduced here).
After much searching, I came across this post describing a similar problem and proposed solution: Can I register a trusted source for SQL statements
After following the instructions there, and verifying them independently against a different version of the user guide (page 90), the result is unchanged. I added an additional 'SQL Injection Validation Rule', which is specifically described as one that "... identifies a function that properly validates data before using them in a SQL query."
Still to no avail.
EDIT:
I have played around with custom rules more, and have been able to determine that the CustomCleanseRules are actually being applied (they do remove other types of taint), but they are not removing some trust-specific flag Fortify applies to our in-house library.
Every value returned from any method of my libraries is distrusted, and none of the rules I've created seems able to remove this distrust.
Is there something I am doing wrong, or does Fortify just not work?
Is there a different type of rule needed to cleanse this general distrust?
Example Source code:
In library:
namespace Our.Utilities.Database
{
    public abstract class BaseDao
    {
        protected string ReadEmbeddedSql(string key)
        {
            //... extract sql from assembly
            return sql;
        }
    }
}
In application:
namespace Our.Application.DataAccess
{
    public class TestDao : Our.Utilities.Database.BaseDao
    {
        public void ExecuteSomeSql()
        {
            //... connection is created

            // Fortify does not trust sqlText returned from the library method.
            var sqlText = ReadEmbeddedSql("sql.for.ExecuteSomeSql");
            using (var someSqlCommand = new OracleCommand(sqlText, connection)) // Fortify flags creation of OracleCommand as a SQL Injection vulnerability.
            {
                someSqlCommand.ExecuteNonQuery();
            }
        }

        public void ExecuteSomeSqlDifferently()
        {
            //... connection is created

            // Fortify does not trust sqlText returned from the library method.
            var sqlText = ReadEmbeddedSql("sql.for.ExecuteSomeSql");
            using (var someSqlCommand = connection.CreateCommand())
            {
                someSqlCommand.CommandText = sqlText; // Fortify flags setting CommandText as a SQL Injection vulnerability.
                someSqlCommand.ExecuteNonQuery();
            }
        }

        public void ExecuteSomeLocalSql()
        {
            //... connection is created

            var sqlText = ReadEmbeddedSqlLocallyDefined("sql.for.ExecuteSomeSql");
            using (var someSqlCommand = new OracleCommand(sqlText, connection))
            {
                someSqlCommand.ExecuteNonQuery();
            }
        }

        protected string ReadEmbeddedSqlLocallyDefined(string key)
        {
            //... extract sql from assembly
            return sql;
        }
    }
}
XML of custom rules:
<?xml version="1.0" encoding="UTF-8"?>
<RulePack xmlns="xmlns://www.fortifysoftware.com/schema/rules">
  <RulePackID>5A78FC44-4EEB-49C7-91DA-6564805C3F23</RulePackID>
  <SKU>SKU-C:\local\path\to\custom\rules\Our-Utilities</SKU>
  <Name><![CDATA[C:\local\path\to\custom\rules\Our-Utilities]]></Name>
  <Version>1.0</Version>
  <Description><![CDATA[]]></Description>
  <Rules version="17.10">
    <RuleDefinitions>
      <DataflowCleanseRule formatVersion="17.10" language="dotnet">
        <RuleID>7C49FEDA-AA67-490D-8820-684F3BDD58B7</RuleID>
        <FunctionIdentifier>
          <NamespaceName>
            <Pattern>Our.Utilities.Database</Pattern>
          </NamespaceName>
          <ClassName>
            <Pattern>BaseDao</Pattern>
          </ClassName>
          <FunctionName>
            <Pattern>ReadSqlTemplate</Pattern>
          </FunctionName>
          <ApplyTo implements="true" overrides="true" extends="true"/>
        </FunctionIdentifier>
        <OutArguments>return</OutArguments>
      </DataflowCleanseRule>
      <DataflowCleanseRule formatVersion="17.10" language="dotnet">
        <RuleID>14C423ED-5A51-4BA1-BAE1-075E566BE58D</RuleID>
        <TaintFlags>+VALIDATED_SQL_INJECTION</TaintFlags>
        <FunctionIdentifier>
          <NamespaceName>
            <Pattern>Our.Utilities.Database</Pattern>
          </NamespaceName>
          <ClassName>
            <Pattern>BaseDao</Pattern>
          </ClassName>
          <FunctionName>
            <Pattern>ReadSqlTemplate</Pattern>
          </FunctionName>
          <ApplyTo implements="true" overrides="true" extends="true"/>
        </FunctionIdentifier>
        <OutArguments>return</OutArguments>
      </DataflowCleanseRule>
    </RuleDefinitions>
  </Rules>
</RulePack>
When I run your sample code (I did have to modify it, since it will not compile as-is) with SCA 17.10 and the 2017Q3 rulepack (I also tried 2017Q2), I did not get the same SQL Injection Rule ID as you.
Looking at your Analysis Evidence, I assume the analyzer that produced this finding was not Dataflow or Control Flow, but maybe Semantic or Structural?
You can see which analyzer produced a finding by looking at the summary tab.
Either way, I don't think a Custom Rule is what I would use here.
An option you can use is a Filter file.
This is a file that can contain:
RuleIDs
InstanceIDs
Categories
When this file is passed into the scan command, any finding that matches any of the fields in the filter file will be filtered out from the results.
You can find examples of using the filter file in <SCA Install Dir>\Samples\Advanced\filter
or you can look in the Fortify SCA Users Guide in Appendix B: Filtering the Analysis
*Note: Your analysis of using filters (in the comment) is spot on.
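For illustration only (the build ID here is made up, and the option name is worth double-checking against your version's user guide): a filter file is plain text, one entry per line, so a file containing just the Rule ID from your Analysis Evidence

31D4607A-A3FF-447C-908A-CA2BBE4CE4B7

passed at scan time along these lines

sourceanalyzer -b MyBuild -scan -filter filter.txt -f results.fpr

would drop every finding matching that rule from the results.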

Adding a cabinet file to an MSI programmatically with DTF (WiX)

Introduction to the task at hand (can be skipped if impatient)
The company I work for is not a software company, but focuses on mechanical and thermodynamic engineering problems.
To help solve their system design challenges, they have developed software for calculating the system impact of replacing individual components.
The software is quite old, written in FORTRAN, and has evolved over a period of 30 years, which means that we cannot quickly re-write or update it.
As you may imagine, the way this software is installed has also evolved, but significantly slower than the rest of the system, meaning that packaging is done by a batch script that gathers files from different places and puts them in a folder, which is then compiled into an ISO, burned to a CD, and shipped by mail.
You young programmers (I am 30) may expect a program to load DLLs, but otherwise be fairly self-contained after linking, even if the code is made up of several classes, from different namespaces etc.
In FORTRAN 70, however... not so much. Which means that the software itself consists of an alarming number of calls to prebuilt modules (read: separate programs).
We need to be able to distribute via the internet, as any other modern company has been able to for a while. To do this we could just make the *.iso downloadable, right?
Well, unfortunately no; the ISO contains several files which are user-specific.
As you may imagine, with thousands of users that would mean thousands of ISOs that are nearly identical.
Also, we want to convert the old FORTRAN-based installation software into a real installation package, and all our other (and more modern) programs are C# programs packaged as MSIs.
But the compile time for a single MSI with this old software on our server is close to 10 seconds, so it is simply not an option for us to build the MSI when requested by the user (if multiple users request at the same time, the server won't be able to complete before the requests time out).
Nor can we prebuild the user-specific MSIs and cache them, as we would run out of storage on the server (at ~15 GB in total per released version).
Task description, tl;dr:
Here is what I thought I would do (inspired by comments from Christopher Painter):
Create a base MSI with dummy files instead of the user-specific files.
Create a cab file for each user, with the user-specific files.
At request time, inject the user-specific cab file into a temporary copy of the base MSI using the "_Streams" table.
Insert a reference into the Media table with a new DiskId, a LastSequence corresponding to the extra files, and the name of the injected cab file.
Update the File table with the name of each user-specific file in the new cab file, a new Sequence number (within the new cab file's sequence range), and the file size.
Question
My code fails to do the task just described. I can read from the MSI just fine, but the cabinet file is never inserted.
Also:
If I open the MSI in DIRECT mode it corrupts the Media table, and if I open it in TRANSACTION mode it fails to change anything at all.
In direct mode the existing line in the Media table is replaced with:
DiskId: 1
LastSequence: -2145157118
Cabinet: "Name of action to invoke, either in the engine or the handler DLL."
What am I doing wrong?
Below I have provided the snippets involved with injecting the new cab file.
snippet 1
public string createCabinetFileForMSI(string workdir, List<string> filesToArchive)
{
    // create temporary cabinet file at this path:
    string GUID = Guid.NewGuid().ToString();
    string cabFile = GUID + ".cab";
    string cabFilePath = Path.Combine(workdir, cabFile);

    // create an instance of Microsoft.Deployment.Compression.Cab.CabInfo,
    // which provides file-based operations on the cabinet file
    CabInfo cab = new CabInfo(cabFilePath);

    // create a list with files and add them to a cab file
    // now an argument, but previously this was used as a test:
    // List<string> filesToArchive = new List<string>() { @"C:\file1", @"C:\file2" };
    cab.PackFiles(workdir, filesToArchive, filesToArchive);

    // we will need the path for this file when adding it to an msi..
    return cabFile;
}
snippet 2
public int insertCabFileAsNewMediaInMSI(string cabFilePath, string pathToMSIFile, int numberOfFilesInCabinet = -1)
{
    // open the MSI package for editing
    pkg = new InstallPackage(pathToMSIFile, DatabaseOpenMode.Direct); // have also tried Transact mode; in Direct mode the database was corrupted when writing.
    return insertCabFileAsNewMediaInMSI(cabFilePath, numberOfFilesInCabinet);
}
snippet 3
public int insertCabFileAsNewMediaInMSI(string cabFilePath, int numberOfFilesInCabinet = -1)
{
    if (pkg == null)
    {
        throw new Exception("Cannot insert cabinet file into non-existing MSI package. Please supply a path to the MSI package");
    }

    int numberOfFilesToAdd = numberOfFilesInCabinet;
    if (numberOfFilesInCabinet < 0)
    {
        CabInfo cab = new CabInfo(cabFilePath);
        numberOfFilesToAdd = cab.GetFiles().Count;
    }

    // create a cab file record as a stream (embeddable into an MSI)
    Record cabRec = new Record(1);
    cabRec.SetStream(1, cabFilePath);

    /* The Media table describes the set of disks that make up the source media for the installation.
       We want to add one after all the others.
       DiskId - Determines the sort order for the table. This number must be equal to or greater than 1;
       for our new cab file, it must be greater than the existing ones...
    */
    // the baby SQL service in the MSI does not support "ORDER BY `` DESC", but does support ORDER BY..
    IList<int> mediaIDs = pkg.ExecuteIntegerQuery("SELECT `DiskId` FROM `Media` ORDER BY `DiskId`");
    int lastIndex = mediaIDs.Count - 1;
    int DiskId = mediaIDs.ElementAt(lastIndex) + 1;

    // the WiX naming convention for embedded cab files is "#cab" + DiskId + ".cab"
    string mediaCabinet = "cab" + DiskId.ToString() + ".cab";

    // The _Streams table lists embedded OLE data streams.
    // This is a temporary table, created only when referenced by a SQL statement.
    string query = "INSERT INTO `_Streams` (`Name`, `Data`) VALUES ('" + mediaCabinet + "', ?)";
    pkg.Execute(query, cabRec);
    Console.WriteLine(query);

    /* LastSequence - File sequence number for the last file for this new media.
       The numbers in the LastSequence column specify which of the files in the File table
       are found on a particular source disk.
       Each source disk contains all files with sequence numbers (as shown in the Sequence column of the File table)
       less than or equal to the value in the LastSequence column, and greater than the LastSequence value of the previous disk
       (or greater than 0, for the first entry in the Media table).
       This number must be non-negative; the maximum limit is 32767 files.
       /MSDN
    */
    IList<int> sequences = pkg.ExecuteIntegerQuery("SELECT `LastSequence` FROM `Media` ORDER BY `LastSequence`");
    lastIndex = sequences.Count - 1;
    int LastSequence = sequences.ElementAt(lastIndex) + numberOfFilesToAdd;

    query = "INSERT INTO `Media` (`DiskId`, `LastSequence`, `Cabinet`) VALUES (" + DiskId.ToString() + "," + LastSequence.ToString() + ",'#" + mediaCabinet + "')";
    Console.WriteLine(query);
    pkg.Execute(query);

    return DiskId;
}
Update: stupid me, I forgot about committing in transaction mode - but now it does the same as in direct mode, so no real change to the question.
I will answer this myself, since I just learned something about DIRECT mode that I didn't know before, and want to keep it here to allow for the eventual re-google.
Apparently we only successfully update the MSI if we close the database handle before the program eventually crashes.
For the purpose of answering the question, this destructor should do it:
~className()
{
    if (pkg != null)
    {
        try
        {
            pkg.Close();
        }
        catch (Exception)
        {
            // rollback not included as we edit directly?
            // do nothing..
            // atm. we just don't want to break anything if the database was already closed, without dereferencing
        }
    }
}
After adding the correct close statement, the MSI grew in size (and a media row was added to the Media table :) ).
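A finalizer only runs at the garbage collector's convenience, though, so an alternative sketch is to close the handle deterministically as soon as the edits are done, e.g. by reworking snippet 2 (same method and field names as above; adjust to taste):

public int insertCabFileAsNewMediaInMSI(string cabFilePath, string pathToMSIFile, int numberOfFilesInCabinet = -1)
{
    try
    {
        pkg = new InstallPackage(pathToMSIFile, DatabaseOpenMode.Direct);
        return insertCabFileAsNewMediaInMSI(cabFilePath, numberOfFilesInCabinet);
    }
    finally
    {
        if (pkg != null)
        {
            pkg.Close(); // closing the handle is what actually persists the edits
            pkg = null;
        }
    }
}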
I will post the entire class for solving this task when it's done and tested, but I'll do it in the related question on SO:
the related question on SO

How can I figure out max_user_connections in an ASP.NET / MySQL application?

Last year I developed an ASP.NET application implementing the MVP model.
The site is not very large (about 9,000 views/day).
It is a common application which just displays articles and supports scheduling (via datetime), votes and views, sections and categories.
Since then I have created more than 15 sites with the same motive (the database mechanism was built with the same logic).
What I did was:
Every time a request arrives I have to fetch articles, sections, categories, views and votes from my database and display them to the user... like all other web apps.
My database objects are something like this:
public class MyObjectDatabaseManager
{
    public static string Table = DBTables.ArticlesTable;
    public static string ConnectionString = ApplicationManager.ConnectionString;

    public bool insertMyObject(MyObject myObject) { /* ... */ }
    public bool updateMyObject(MyObject myObject) { /* ... */ }
    public bool deleteMyObject(MyObject myObject) { /* ... */ }
    public MyObject getMyObject(int MyObjectID) { /* ... */ }
    public List<MyObject> getMyObjects(int limit, int page, bool OrderBy, bool ASC) { /* ... */ }
}
Whenever I want to communicate with the database I do something like this:
MySqlConnection myConnection = new MySqlConnection(ConnectionString);
try
{
    myConnection.Open();
    MySqlCommand cmd = new MySqlCommand(myQuery, myConnection);
    cmd.Parameters.AddWithValue(...);
    cmd.ExecuteReader(); /* OR */ cmd.ExecuteNonQuery();
}
catch (Exception) { }
finally
{
    if (myConnection != null)
    {
        myConnection.Close();
        myConnection.Dispose();
    }
}
Two months later I ran into trouble.
Performance started falling and the database began returning errors: max_user_connections.
Then I thought: "Let's cache the page."
And I started to use output cache for the pages
(not a very sophisticated idea...).
12 months later my friend asked me to create a "live" article:
an article that can be updated without any delay (i.e. not served from the output cache).
Then it came into my mind: "Why use cache? Joomla etc. doesn't."
So... I removed the magic "Output cache" directive...
And then I ran into the same problem again:
max_user_connections! :/
What am I doing wrong?
I know that my code communicates a lot with the database, but...
what about connection pooling?
Sorry for my English.
Please... help :/
I have no idea how to figure it out :/
Thank you.
I'm running on a shared hosting package.
*My db is over 60 MB in size*
I have more than 6,000 rows in some tables, like articles.
*My hosting provider gives me 25 connections to the database (a very large number in my opinion)*
Your code looks fine to me, although from a style perspective I prefer "using" to try / finally / Dispose().
One thing to check is to make sure that the connection strings you're using are identical everywhere in your code. Most DB drivers do connection pooling based on comparing the connection strings.
You may need to increase the max_connections variable in your mysql config.
See:
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
Actually, the maximum number of connections is an OS-level configuration.
For example, under NT/XP it was configurable in the registry, under HKLM, ..., TcpIp, Parameters, TcpNumConnections:
http://smallvoid.com/article/winnt-tcpip-max-limit.html
More importantly, you want to maximize the number of "ephemeral ports" available for opening new connections:
http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
Windows:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
On the Edit menu, click Add Value, and then add the following registry value:
Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534
Linux:
sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"

Retrieve data from browser local storage using c#

Is it possible to retrieve data from chrome/firefox local storage using C#?
Disclaimer: I have tested this on my Windows 7 x64 running Google Chrome 13.0.782.220 at the moment. The information provided here is a result of my own research and is not any official way or API to retrieve this information. Use at your own risk. Also the technique presented here might break with any future release if Chrome changes the way to store this information.
So, Google Chrome uses SQLite to persist local storage data. You could use the System.Data.SQLite managed driver to read it from your .NET application. If you are running on Windows 7 (don't know for others as that's the one I have and can test), you will have the following folder:
c:\Users\SOMEUSERNAME\AppData\Local\Google\Chrome\User Data\Default\Local Storage\
This folder contains multiple files with the .localstorage extension. Each file is for a different site. For example, for StackOverflow I have http_stackoverflow.com_0.localstorage, but of course this naming is totally arbitrary and you cannot rely upon it. Each file represents a SQLite database.
I have noticed that this database contains a table called ItemTable with 2 string columns called key and value.
So to read the values it's a simple matter of sending a SQL query:
using System;
using System.Data.SQLite;

class Program
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=http_stackoverflow.com_0.localstorage;Version=3;"))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandText = "SELECT key, value FROM ItemTable";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(
                        "key: {0}, value: {1}",
                        reader.GetString(reader.GetOrdinal("key")),
                        reader.GetString(reader.GetOrdinal("value"))
                    );
                }
            }
        }
    }
}
When using the Chromium Embedded Framework I found that the above solution has many limitations. It seems that Chromium has moved to using LevelDB instead.
I ended up with a solution where I inject JS code that modifies local storage in FrameLoadStart. (It should be easy to read values as well - JavascriptResponse.Result can be cast to an IDictionary when using the script "window.localStorage;" instead.)
// writing local storage in FrameLoadStart
string script = "";
foreach (var entry in LocalStorage)
{
    script += $"window.localStorage.{entry.Key} = '{entry.Value}';";
}

IFrame frame = chromeBrowser.GetMainFrame();
var result = await frame.EvaluateScriptAsync(script, frame.Url, 0);
if (!result.Success)
{
    throw new Exception(result.Message);
}

Is it possible to keep NDbUnit test data in separate XML files?

I'm looking at using NDbUnit to help with the unit testing of an application. As the question title states, I'm wondering if it is possible to keep NDbUnit test data in separate XML files. I have noticed already that my single test data XML file is quite big and could start to become unmanageable when I add a few more entities to it.
Now, having read this question it looks as if it's not possible but I would just like to be sure.
If it helps, this is sample code which illustrates the problem. The idea is that programs are associated with vendors. I have set up test data containing 3 vendors, the second one of which has 3 programs. TestData.xml contains all of the test data for all of the vendors and programs. When I use it, the unit test passes as expected. If I try to read the individual XML files in separately, using multiple calls to db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);, it seems as if the second call overwrites whatever was done by the first one.
private const string xmlSchema = @"..\..\schema.xsd";
// All of the test data in one file.
private const string xmlData = @"..\..\XML Data\TestData.xml";
// Individual test data files.
private const string vendorData = @"..\..\XML Data\Vendor_TestData.xml";
private const string programData = @"..\..\XML Data\Program_TestData.xml";
public void WorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);
    db.ReadXml(xmlData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}

public void NotWorkingExampleTest()
{
    INDbUnitTest db = new SqlDbUnitTest(connectionString);
    db.ReadXmlSchema(xmlSchema);

    db.ReadXml(vendorData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    db.ReadXml(programData);
    db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

    VendorCollection vendors = VendorController.List();
    Assert.IsNotNull(vendors);

    // This line throws an ArgumentOutOfRangeException because there are no vendors in the collection.
    ProgramCollection collection = VendorController.GetPrograms(vendors[1].VendorID);
    Assert.IsNotNull(collection);
    Assert.IsTrue(collection.Count == 3);
}
This does work:
Watch out for the meaning of the DbOperationFlag value you are using; the "Clean" part of "CleanInsertIdentity" means "clean out the existing records before performing the insert-identity part of the process".
See http://code.google.com/p/ndbunit/source/browse/trunk/NDbUnit.Core/DbOperationFlag.cs for more info on the possible enum values.
You might try the same process with either Insert or InsertIdentity to see if you can achieve what you are after, but by design CleanInsertIdentity isn't going to work for this scenario :)
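In other words, a variant of NotWorkingExampleTest that cleans only on the first operation might behave as intended (a sketch; I haven't verified how ReadXml merges the second file into the in-memory data set):

INDbUnitTest db = new SqlDbUnitTest(connectionString);
db.ReadXmlSchema(xmlSchema);

db.ReadXml(vendorData);
db.PerformDbOperation(DbOperationFlag.CleanInsertIdentity); // cleans, then inserts vendors

db.ReadXml(programData);
db.PerformDbOperation(DbOperationFlag.InsertIdentity); // inserts programs without cleaning first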
