C# Get TestCases by TestCaseSource - Reading a text file - c#

I'm trying to get my NUnit TestCases by reading a text file which contains one TestCase per line. I'm using the TestCaseSourceAttribute for this.
[Test, TestCaseSource("TestSelection")]
//TestMethod
public static object[] TestSelection()
{
var amountOfTestCases = GetAmountTestCases(#"path\TestCases.txt");
var result = new object[amountOfTestCases];
for (var i = 0; i < amountOfTestCases; i++)
{
result[i] = new object[] { GetTestCase(i, #"path\TestCases.txt") };
}
return result;
}
GetAmountTestCases() counts the lines of text in the file, and GetTestCase() reads a specific line of text to get my TestCase.
public static int GetAmountTestCases(String path)
{
    var lineCount = 0;
    using (var reader = File.OpenText(path))
    {
        while (reader.ReadLine() != null)
        {
            lineCount++;
        }
    }
    return lineCount;
}
public static String GetTestCase(int lineNum, String path)
{
    var lineCount = 0;
    String testCase = String.Empty;
    using (var reader = File.OpenText(path))
    {
        while (reader.ReadLine() != null)
        {
            if (lineCount == lineNum)
            {
                testCase = reader.ReadLine();
            }
            else
            {
                lineCount++;
            }
        }
    }
    return testCase;
}
After building, the .dll contains test modules with 'null' as content. I can't even debug without a specific TestCase in my code.

You have quite a complex way of going about it; the following code works for me:
[Test, TestCaseSource("GetMyFileData")]
//Method
public static string[] GetMyFileData()
{
var path = #"C:\temp\MyFile.txt";
return File.ReadAllLine(path)
.ToArray();
}
Please double-check that your path is correct.
If it is a relative path to a file that is deployed on build, check that it is configured correctly: its Build Action should be something like Content, and Copy to Output Directory should be Copy if newer or Copy always.
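For completeness, here is a minimal sketch (not from the answer above; the fixture name, test method name, and file path are illustrative) of how a test method consumes such a source: NUnit calls the source method once and runs the test for every returned value.
using System.IO;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class FileDrivenTests
{
    // Each non-empty line of the file becomes one test case.
    public static string[] TestSelection()
    {
        return File.ReadAllLines(@"path\TestCases.txt")
                   .Where(line => !string.IsNullOrWhiteSpace(line))
                   .ToArray();
    }

    [Test, TestCaseSource(nameof(TestSelection))]
    public void RunTestCase(string testCase)
    {
        Assert.That(testCase, Is.Not.Empty);
    }
}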

Related

Assigning instance variables obtained through reflection in generic method

I have data in tab-separated values (TSV) text files that I want to read and (eventually) store in database tables. With the TSV files, each line contains one record, but in one file the record can have 2 fields, in another file 4 fields, etc. I wrote working code to handle the 2-field records, but I thought this might be a good case for a generic method (or two) rather than writing new methods for each kind of record. However, I have not been able to code this because of 2 problems: I can't create a new object for holding the record data, and I don't know how to use reflection to generically fill the instance variables of my objects.
I looked at several other similar posts, including Datatable to object by using reflection and linq
Below is the code that works (this is in Windows, if that matters) and also the code that doesn't work.
public class TSVFile
{
    public class TSVRec
    {
        public string item1;
        public string item2;
    }

    private string fileName = "";

    public TSVFile(string _fileName)
    {
        fileName = _fileName;
    }

    public TSVRec GetTSVRec(string Line)
    {
        TSVRec rec = new TSVRec();
        try
        {
            string[] fields = Line.Split(new char[1] { '\t' });
            rec.item1 = fields[0];
            rec.item2 = fields[1];
        }
        catch (Exception ex)
        {
            System.Windows.Forms.MessageBox.Show("Bad import data on line: " +
                Line + "\n" + ex.Message, "Error",
                System.Windows.Forms.MessageBoxButtons.OK,
                System.Windows.Forms.MessageBoxIcon.Error);
        }
        return rec;
    }

    public List<TSVRec> ImportTSVRec()
    {
        List<TSVRec> loadedData = new List<TSVRec>();
        using (StreamReader sr = File.OpenText(fileName))
        {
            string Line = null;
            while ((Line = sr.ReadLine()) != null)
            {
                loadedData.Add(GetTSVRec(Line));
            }
        }
        return loadedData;
    }

    // *** Attempted generic methods ***
    public T GetRec<T>(string Line)
    {
        T rec = new T(); // compile error!
        Type t = typeof(T);
        FieldInfo[] instanceVars = t.GetFields();
        string[] fields = Line.Split(new char[1] { '\t' });
        for (int i = 0; i < instanceVars.Length - 1; i++)
        {
            rec. ??? = fields[i]; // how do I finish this line???
        }
        return rec;
    }

    public List<T> Import<T>(Type t)
    {
        List<T> loadedData = new List<T>();
        using (StreamReader sr = File.OpenText(fileName))
        {
            string Line = null;
            while ((Line = sr.ReadLine()) != null)
            {
                loadedData.Add(GetRec<T>(Line));
            }
        }
        return loadedData;
    }
}
I saw the line
T rec = new T();
in the above-mentioned post, but it doesn't work for me...
I would appreciate any suggestions for how to make this work, if possible. I want to learn more about using reflection with generics, so I don't only want to understand how, but also why.
I wish @EdPlunkett had posted his suggestion as an answer, rather than a comment, so I could mark it as the answer...
To summarize: to do what I want to do, there is no need for "Assigning instance variables obtained through reflection in generic method". In fact, I can have a generic solution without using a generic method:
public class GenRec
{
    public List<string> items = new List<string>();
}

public GenRec GetRec(string Line)
{
    GenRec rec = new GenRec();
    try
    {
        string[] fields = Line.Split(new char[1] { '\t' });
        for (int i = 0; i < fields.Length; i++)
            rec.items.Add(fields[i]);
    }
    catch (Exception ex)
    {
        System.Windows.Forms.MessageBox.Show("Bad import data on line: " + Line + "\n" + ex.Message, "Error",
            System.Windows.Forms.MessageBoxButtons.OK,
            System.Windows.Forms.MessageBoxIcon.Error);
    }
    return rec;
}

public List<GenRec> Import()
{
    List<GenRec> loadedData = new List<GenRec>();
    using (StreamReader sr = File.OpenText(fileName))
    {
        string Line = null;
        while ((Line = sr.ReadLine()) != null)
            loadedData.Add(GetRec(Line));
    }
    return loadedData;
}
I just tested this, and it works like a charm!
Of course, this isn't helping me to learn how to write generic methods or use reflection, but I'll take it...
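Since the question was also about learning generics with reflection, here is a minimal sketch of that approach (my addition, not part of the accepted suggestion). It assumes the record type has a parameterless constructor and public string fields, and that GetFields() returns them in declaration order, which the runtime does not guarantee, so a real implementation would map columns explicitly. The new() constraint is what makes new T() compile, and FieldInfo.SetValue fills each field.
using System.Reflection;

public static class GenericTsvParser
{
    // Sketch: create a T and fill its public fields from a tab-separated line.
    public static T GetRec<T>(string line) where T : new()
    {
        T rec = new T();                                  // allowed by the new() constraint
        FieldInfo[] instanceVars = typeof(T).GetFields();
        string[] fields = line.Split('\t');
        for (int i = 0; i < instanceVars.Length && i < fields.Length; i++)
        {
            instanceVars[i].SetValue(rec, fields[i]);     // reflection-based assignment
        }
        return rec;
    }
}

// Usage, e.g. with the TSVRec class from the question:
// TSVFile.TSVRec rec = GenericTsvParser.GetRec<TSVFile.TSVRec>("a\tb");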

Add two lines from csv file to array(s)

I have a csv file with the following data:
500000,0.005,6000
690000,0.003,5200
I need to add each line as a separate array. So 500000, 0.005, 6000 would be array1. How would I do this?
Currently my code adds each column into one element.
For example, data[0] shows 500000 and then 690000.
static void ReadFromFile(string filePath)
{
    try
    {
        // Create an instance of StreamReader to read from a file.
        // The using statement also closes the StreamReader.
        using (StreamReader sr = new StreamReader(filePath))
        {
            string line;
            // Read and display lines from the file until the end of
            // the file is reached.
            while ((line = sr.ReadLine()) != null)
            {
                string[] data = line.Split(',');
                Console.WriteLine(data[0] + " " + data[1]);
            }
        }
    }
    catch (Exception e)
    {
        // Let the user know what went wrong.
        Console.WriteLine("The file could not be read:");
        Console.WriteLine(e.Message);
    }
}
Using the limited data set you've provided...
const string test = @"500000,0.005,6000
690000,0.003,5200";

var result = test.Split('\n')
    .Select(x => x.Split(',')
        .Select(y => Convert.ToDecimal(y))
        .ToArray()
    )
    .ToArray();

foreach (var element in result)
{
    Console.WriteLine($"{element[0]}, {element[1]}, {element[2]}");
}
Can it be done without LINQ? Yes, but it's messy...
const string test = @"500000,0.005,6000
690000,0.003,5200";

List<decimal[]> resultList = new List<decimal[]>();
string[] lines = test.Split('\n');
foreach (var line in lines)
{
    List<decimal> decimalValueList = new List<decimal>();
    string[] splitValuesByComma = line.Split(',');
    foreach (string value in splitValuesByComma)
    {
        decimal convertedValue = Convert.ToDecimal(value);
        decimalValueList.Add(convertedValue);
    }
    decimal[] decimalValueArray = decimalValueList.ToArray();
    resultList.Add(decimalValueArray);
}
decimal[][] resultArray = resultList.ToArray();
That will give the exact same output as the first example.
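If you want to read from the file itself rather than an inline string, the same shape applies. A sketch only (assuming using directives for System.IO, System.Linq and System.Globalization, and a filePath variable like the one in the question):
// Sketch: one decimal[] per line of the CSV file. Trim() guards against stray
// '\r' characters from Windows line endings; InvariantCulture keeps '.' as the
// decimal separator regardless of the machine's locale.
decimal[][] result = File.ReadAllLines(filePath)
    .Where(line => !string.IsNullOrWhiteSpace(line))
    .Select(line => line.Split(',')
        .Select(field => decimal.Parse(field.Trim(), CultureInfo.InvariantCulture))
        .ToArray())
    .ToArray();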
If you can use a List<string[]>, you do not have to worry about the array length.
In the following example, the variable lines will be a list of arrays, like:
["500000", "0.005", "6000"]
["690000", "0.003", "5200"]
static void ReadFromFile(string filePath)
{
    try
    {
        // Create an instance of StreamReader to read from a file.
        // The using statement also closes the StreamReader.
        using (StreamReader sr = new StreamReader(filePath))
        {
            List<string[]> lines = new List<string[]>();
            string line;
            // Read and display lines from the file until the end of
            // the file is reached.
            while ((line = sr.ReadLine()) != null)
            {
                string[] splittedLine = line.Split(',');
                lines.Add(splittedLine);
            }
        }
    }
    catch (Exception e)
    {
        // Let the user know what went wrong.
        Console.WriteLine("The file could not be read:");
        Console.WriteLine(e.Message);
    }
}
While the other answers use a split method, I will take a more structured, strongly-typed approach.
You have CSV values in a file. Give the object stored in the CSV a name, name every column, and give each one a type.
Define default values for those fields, and decide what happens for missing columns, malformed fields, and a header row.
Now that you know what you have, define what you want, again as object name -> property -> type.
Believe it or not, simply defining your input and output solves most of your issue.
Use CsvHelper to simplify your code.
CSV File Definition:
public class CsvItem_WithARealName
{
    public int data1;
    public decimal data2;
    public int goodVariableNames;
}

public class CsvItemMapper : ClassMap<CsvItem_WithARealName>
{
    public CsvItemMapper()
    {
        // Mapping based on index, because the file has no header.
        Map(m => m.data1).Index(0);
        Map(m => m.data2).Index(1);
        Map(m => m.goodVariableNames).Index(2);
    }
}
A CSV reader method: point it at a document and it will yield the CSV items.
Here we have some configuration: no header, and InvariantCulture for the decimal conversion.
private IEnumerable<CsvItem_WithARealName> GetCsvItems(string filePath)
{
    using (var fileReader = File.OpenText(filePath))
    using (var csvReader = new CsvHelper.CsvReader(fileReader))
    {
        csvReader.Configuration.CultureInfo = CultureInfo.InvariantCulture;
        csvReader.Configuration.HasHeaderRecord = false;
        csvReader.Configuration.RegisterClassMap<CsvItemMapper>();
        while (csvReader.Read())
        {
            var record = csvReader.GetRecord<CsvItem_WithARealName>();
            yield return record;
        }
    }
}
Usage:
var filename = "csvExemple.txt";
var items = GetCsvItems(filename);
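A short consumption sketch (the field names are the ones defined above; the items sequence is lazy, so the file is only read while you enumerate it):
foreach (var item in items)
{
    Console.WriteLine($"{item.data1} {item.data2} {item.goodVariableNames}");
}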

C# Finding the mean of every xth value in a file using streamreader

I'm relatively new to C# and I am trying to write a program that finds the mean of every xth value in a file using StreamReader (for example, the mean of every fifth value in the file).
I've written some code that reads the file and splits each line on the commas, and this works fine when I try to read each specific value.
However, I'm struggling to think of a way to pick out every specific value, such as every 4th one, find the mean of these, and output it in the same program.
static void Main(string[] args)
{
    using (var reader = new StreamReader(@"file"))
    {
        List<string> list = new List<string>();
        while (!reader.EndOfStream)
        {
            var line = reader.ReadLine();
            var values = line.Split(',');
            list.Add(values[0]);
        }
    }
}
Any suggestions or help would be greatly appreciated
Try it like this:
static void Main()
{
    using (var reader = new StreamReader(@"file"))
    {
        int lineNumber = 4;
        bool streamEnded = false;
        List<string> list = new List<string>();
        while (!streamEnded)
        {
            var line = ReadSpecificLine(reader, lineNumber, out streamEnded);
            if (string.IsNullOrEmpty(line))
            {
                continue;
            }
            var values = line.Split(',');
            list.Add(values[0]);
        }
    }
}

public static string ReadSpecificLine(StreamReader sr, int lineNumber, out bool streamEnded)
{
    streamEnded = false;
    for (int i = 1; i < lineNumber; i++)
    {
        if (sr.EndOfStream)
        {
            streamEnded = true;
            return "";
        }
        sr.ReadLine();
    }
    if (sr.EndOfStream)
    {
        streamEnded = true;
        return "";
    }
    return sr.ReadLine();
}
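The snippet above collects the values but does not yet compute the mean. A compact sketch of that last step (my addition, not part of the original answer), assuming one numeric value in the first comma-separated field of each line and using directives for System, System.IO, System.Linq and System.Globalization:
int x = 4; // every 4th value
double mean = File.ReadLines(@"file")
    .Select(line => line.Split(',')[0])
    .Where((value, index) => (index + 1) % x == 0) // keep every xth value
    .Select(value => double.Parse(value, CultureInfo.InvariantCulture))
    .Average();
Console.WriteLine($"Mean of every {x}th value: {mean}");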

How to search text based on line number in string

I have a function which searches for text in a string and returns the line which contains a specific substring. Here is the function:
private static string getLine(string text, string text2Search)
{
    string currentLine;
    using (var reader = new StringReader(text))
    {
        while ((currentLine = reader.ReadLine()) != null)
        {
            if (currentLine.Contains(text2Search, StringComparison.OrdinalIgnoreCase))
            {
                break;
            }
        }
    }
    return currentLine;
}
Now, in my case, I have to start searching the lines after a particular line, say line 10. That is, I have to start searching the string for the specific text after the 10th line. So my question is: how can I add this to my current function?
Please help me.
You can use File.ReadLines method with Skip:
var line = File.ReadLines("path").Skip(10)
               .SkipWhile(l => !l.Contains(text2Search, StringComparison.OrdinalIgnoreCase))
               .First();
You can introduce a counter into your current code as so:
private static string getLine(string text, string text2Search)
{
    string currentLine;
    int endPoint = 10;
    using (var reader = new StringReader(text))
    {
        int lineCount = 0;
        while ((currentLine = reader.ReadLine()) != null)
        {
            if (lineCount++ >= endPoint &&
                currentLine.Contains(text2Search, StringComparison.OrdinalIgnoreCase))
            {
                return currentLine;
            }
        }
    }
    return string.Empty;
}
Alternatively, use your current code to add all lines to a list, on which you will then be able to use Selman's answer.
String.Contains doesn't have an overload taking StringComparison.OrdinalIgnoreCase (that overload exists only in .NET Core 2.1+ / .NET Standard 2.1, not in .NET Framework), so use IndexOf instead:
var match = text.Split(new char[] { '\n', '\r' })
    .Skip(10)
    .FirstOrDefault(line => line.IndexOf(text2Search, StringComparison.OrdinalIgnoreCase) >= 0);

Refreshing the C# DataGrid from a background thread

I'm using a DataGrid bound to an ObservableCollection source to display two columns, a file name and a number that I get from analysing the file.
ObservableCollection<SearchFile> fileso;

//...

private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    /* Get all the files to check. */
    int dirCount = searchFoldersListView.Items.Count;
    List<string> allFiles = new List<string>();
    for (int i = 0; i < dirCount; i++)
    {
        try
        {
            allFiles.AddRange(Directory.GetFiles(searchFoldersListView.Items[i].ToString(), "*.txt").ToList());
            allFiles.AddRange(Directory.GetFiles(searchFoldersListView.Items[i].ToString(), "*.pdf").ToList());
        }
        catch
        { /* stuff */ }
    }

    /* Clear the collection and populate it with unchecked files again, refreshing the grid. */
    this.Dispatcher.Invoke(new Action(delegate
    {
        fileso.Clear();
        foreach (var file in allFiles)
        {
            SearchFile sf = new SearchFile() { path = file, occurrences = 0 };
            fileso.Add(sf);
        }
    }));

    /* Check the files. */
    foreach (var file in allFiles)
    {
        this.Dispatcher.Invoke(new Action(delegate
        {
            int occurences;
            bool result = FileSearcher.searchFile(file, searchTermTextBox.Text, out occurences);
            fileso.AddOccurrences(file, occurences); // This is an extension method that alters the collection by finding the relevant item and changing it.
        }));
    }
}

//...

public static void AddOccurrences(this ObservableCollection<SearchFile> collection, string path, int occurrences)
{
    for (int i = 0; i < collection.Count; i++)
    {
        if (collection[i].path == path)
        {
            collection[i].occurrences = occurrences;
            break;
        }
    }
}

//...

public static bool searchTxtFile(string path, string term, out int occurences)
{
    string contents = File.ReadAllText(path);
    occurences = Regex.Matches(contents, term, RegexOptions.IgnoreCase).Count;
    if (occurences > 0)
        return true;
    return false;
}

public static bool searchDocxFile(string path, string term, out int occurences)
{
    occurences = 0;
    string tempPath = Path.GetTempPath();
    string rawName = Path.GetFileNameWithoutExtension(path);
    string destFile = System.IO.Path.Combine(tempPath, rawName + ".zip");
    System.IO.File.Copy(path, destFile, true);
    using (ZipFile zf = new ZipFile(destFile))
    {
        ZipEntry ze = zf.GetEntry("word/document.xml");
        if (ze != null)
        {
            using (Stream zipstream = zf.GetInputStream(ze))
            {
                using (StreamReader sr = new StreamReader(zipstream))
                {
                    string docContents = sr.ReadToEnd();
                    string rawText = Extensions.StripTagsRegexCompiled(docContents);
                    occurences = Regex.Matches(rawText, term, RegexOptions.IgnoreCase).Count;
                    if (occurences > 0)
                        return true;
                    return false;
                }
            }
        }
    }
    return false;
}

public static bool searchFile(string path, string term, out int occurences)
{
    occurences = 0;
    string ext = System.IO.Path.GetExtension(path);
    switch (ext)
    {
        case ".txt":
            return searchTxtFile(path, term, out occurences);
        //case ".doc":
        //    return searchDocFile(path, term, out occurences);
        case ".docx":
            return searchDocxFile(path, term, out occurences);
    }
    return false;
}
But the problem is that sometimes, when I hit the update button (which starts the worker with the do_work method above), I get random zeroes in the number column instead of the correct number. Why is that? I'm assuming there's some problem with updating the number column twice, with the first zeroing sometimes getting applied after the actual update, but I'm not sure about the details.
I think this is a case of access to a modified closure:
/* Check the files. */
foreach (var file in allFiles)
{
    var fileTmp = file; // avoid access to modified closure
    this.Dispatcher.Invoke(new Action(delegate
    {
        int occurences;
        bool result = FileSearcher.searchFile(fileTmp, searchTermTextBox.Text, out occurences);
        fileso.AddOccurrences(fileTmp, occurences); // This is an extension method that alters the collection by finding the relevant item and changing it.
    }));
}
Basically, what's happening is that you are passing the file variable into the delegate, but file can be modified by the foreach loop before the action is actually invoked; using a temp variable to hold file should solve this.
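As an aside, a sketch of an alternative shape (my addition, not part of the answer above; it reuses the fileso collection and FileSearcher.searchFile from the question): do the slow file search on the worker thread and marshal only the collection update, which keeps the UI responsive and sidesteps the closure problem entirely.
/* Sketch: read the search term once on the UI thread, search on the worker
   thread, and dispatch only the ObservableCollection update. */
string searchTerm = null;
this.Dispatcher.Invoke(new Action(() => searchTerm = searchTermTextBox.Text));

foreach (var file in allFiles)
{
    var fileTmp = file;                                             // local copy for the closure below
    int occurences;
    FileSearcher.searchFile(fileTmp, searchTerm, out occurences);   // heavy work stays off the UI thread

    this.Dispatcher.Invoke(new Action(() =>
        fileso.AddOccurrences(fileTmp, occurences)));               // UI-bound update only
}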
