Get property name, need to retrieve only certain columns - C#

Answer Summary:
Solved this problem using Jon Skeet's answer below. Here is the finished code
public static CSVData CreateCSVData(List<RegDataDisplay> rList,
                                    string[] selectors)
{
    CSVData csv = new CSVData(); // Create the CSVData object
    foreach (string selector in selectors)
    {
        // Get the PropertyInfo for the property whose name
        // is the value of selector
        var property = typeof(RegDataDisplay).GetProperty(selector);
        // Use LINQ to get a list of the values for the specified property
        // of each RegDataDisplay object in the supplied list.
        var values = rList.Select(row => property.GetValue(row, null)
                                                 .ToString());
        // Create a new list with the property name to use as a header
        List<string> templs = new List<string>() { selector };
        // Add the returned values after the header
        templs.AddRange(values);
        // Add this list as a column for the CSVData object.
        csv.Columns.Add(templs);
    }
    return csv;
}
Question
I am building my SQL query dynamically from user input, and then exporting the results to a CSV file. I have a class called RegDataDisplay which has a property for each of the possible columns returned by my query. I can tell what columns are being selected but in my CSV creator I need to be able to only output those specific columns.
In the example below, all of the data I have retrieved is in rList, and the names of the properties I need are in selectors. So I want to iterate through the list and then add only the properties I need to my CSV data.
public static CSVData CreateCSVData(List<RegDataDisplay> rList, string[] selectors)
{
    CSVData csv = new CSVData();
    for (int i = 0; i < selectors.Length; i++)
    {
        csv.Columns.Add(new List<string>() { selectors[i] });
    }
    // So now I have the headers for the CSV columns,
    // I need the specific properties only, which is where I'm stuck
    for (int i = 0; i < selectors.Length; i++)
    {
        for (int j = 0; j < rList.Count; j++)
        {
            // If it were JavaScript I would do something like this
            csv.Columns[i].Add(rList[j][selectors[i]]);
        }
    }
    return csv;
}
Thanks
EDIT: On the right track now but I'm coming up against an error "Object does not match target type".
public static CSVData CreateCSVData()
{
    // I've created a test method with test data
    string[] selectors = new string[] { "Firstname", "Lastname" };
    List<RegDataDisplay> rList = new List<RegDataDisplay>();
    RegDataDisplay rd = new RegDataDisplay();
    rd.Firstname = "first";
    rd.Lastname = "last";
    rList.Add(rd);
    CSVData csv = new CSVData();
    foreach (string selector in selectors)
    {
        var property = typeof(RegDataDisplay).GetProperty(selector);
        var values = rList.Select(row => property.GetValue(rList, null).ToString())
                          .ToList(); // Error throws here
        csv.Columns.Add(values);
    }
    return csv;
}

Assuming you're on .NET 3.5 or higher, it sounds like you may want something like:
public static CSVData CreateCSVData(List<RegDataDisplay> rList,
                                    string[] selectors)
{
    CSVData csv = new CSVData();
    foreach (string selector in selectors)
    {
        var prop = typeof(RegDataDisplay).GetProperty(selector);
        var values = rList.Select(row => (string) prop.GetValue(row, null))
                          .ToList();
        csv.Columns.Add(values);
    }
    return csv;
}
Note that the target of GetValue is row, the individual object - passing the list itself (as in your edit) is what causes "Object does not match target type".

Related

Adding dictionary values to csv file

I'm playing about with a dictionary and adding its contents to an existing csv file. This is what I have so far:
List<string> files = new List<string>();
files.Add("test1");
files.Add("test2");
Dictionary<string, List<string>> data = new Dictionary<string, List<string>>();
data.Add("Test Column", files.ToList());
foreach (var columnData in data.Keys)
{
    foreach (var rowData in data[columnData])
    {
        var csv = File.ReadLines(filePath.ToString()).Select((line, index) => index == 0
            ? line + "," + columnData.ToString()
            : line + "," + rowData.ToString()).ToList();
        File.WriteAllLines(filePath.ToString(), csv);
    }
}
This sort of works, but not the way I intend. What I would like the output to be is something along these lines:
but what I'm actually getting is:
As you'll be able to see, I'm getting 2 columns instead of just 1, with a column each for both list values and the values repeating on every single row. How can I fix it so that it's like what I've got in the first image? I know it's something to do with my foreach loop and the way I'm inputting the data into the file, but I'm just not sure how to fix it.
Edit:
So I have the read, write and AddToCsv methods and when I try it like so:
File.WriteAllLines("file.csv", new string[] { "Col0,Col1,Col2", "0,1,2", "1,2,3", "2,3,4", "3,4,5" });
var filePath = "file.csv";
foreach (var line in File.ReadLines(filePath))
    Console.WriteLine(line);
Console.WriteLine("\n\n");
List<string> files = new List<string>() { "test1", "test2" };
List<string> numbers = new List<string>() { "one", "two", "three", "four", "five" };
Dictionary<string, List<string>> newData = new Dictionary<string, List<string>>() {
    { "Test Column", files },
    { "Test2", numbers }
};
var data1 = ReadCsv(filePath);
AddToCsv(data1, newData);
WriteCsv(filePath.ToString(), data1);
It works perfectly but when I have the file path as an already created file like so:
var filePath = exportFile.ToString();
I get the error:
Message :Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
Source :System.Private.CoreLib
Stack : at System.Collections.Generic.List`1.get_Item(Int32 index) at HMHExtract.Runner.ReadCsv(String path) in C:\tfs\Agility\Client\HMH Extract\HMHExtract\Runner.cs:line 194 at HMHExtract.Runner.Extract(Nullable`1 ct) in C:\tfs\Agility\Client\HMH Extract\HMHExtract\Runner.cs:line 68
Target Site :Void ThrowArgumentOutOfRange_IndexException()
The lines in question are:
line 194 - var col = colNames[i]; of the ReadCsv method
line 68 - var data1 = ReadCsv(filePath);
Edit:
So after debugging I've figured out where the issue has come from.
In the csv I am trying to update there are 17 columns, so the colNames count is 17, the csvRecord count starts at 0, and i goes up to 16.
However, when it reaches a row where one of the fields contains 2 values separated by a comma, it counts them as 2 row values instead of 1, so the row becomes string[18] instead of string[17], and that causes the out-of-range error.
To clarify: in the row that causes the error, one of the fields has the value Chris Jones, Malcolm Clark. Instead of counting that as 1 value, the method counts it as 2 separate ones. How can I change it so that they aren't counted as 2 separate values?
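The root cause is that line.Split(',') is not CSV-aware: a field like "Chris Jones, Malcolm Clark" contains a comma that is data, not a delimiter. A proper parser (CsvHelper, or TextFieldParser in Microsoft.VisualBasic.FileIO) handles this; as a minimal sketch, and assuming such fields are double-quoted in the file (if they are not quoted, the line is genuinely ambiguous and no splitter can fix it), a quote-aware splitter might look like:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

public static class CsvSplit
{
    // Minimal quote-aware splitter: commas inside double-quoted fields
    // are kept as data. (Does not handle escaped quotes "" inside fields.)
    public static List<string> SplitLine(string line)
    {
        var fields = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;
        foreach (char c in line)
        {
            if (c == '"')
                inQuotes = !inQuotes;            // toggle quoted state
            else if (c == ',' && !inQuotes)
            {
                fields.Add(current.ToString());  // field boundary
                current.Clear();
            }
            else
                current.Append(c);
        }
        fields.Add(current.ToString());          // last field
        return fields;
    }
}
```

Replacing line.Split(',') with CsvSplit.SplitLine(line) when reading rows would keep that row at 17 fields instead of 18.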
The best way is to read the csv file first into a list of records, and then add columns to each record. A record is a single row of the csv file, read as a Dictionary<string, string>. The keys of this dict are the column names, and the values are the elements of the row in that column.
public static void AddToCsv(string path, Dictionary<string, List<string>> newData)
{
    var fLines = File.ReadLines(path);
    var colNames = fLines.First().Split(',').ToList(); // col names in first line
    List<Dictionary<string, string>> rowData = new List<Dictionary<string, string>>(); // A list of records for all other rows
    foreach (var line in fLines.Skip(1)) // Iterate over second through last lines
    {
        var row = line.Split(',');
        Dictionary<string, string> csvRecord = new Dictionary<string, string>();
        // Add everything from this row to the record dictionary
        for (int i = 0; i < row.Length; i++)
        {
            var col = colNames[i];
            csvRecord[col] = row[i];
        }
        rowData.Add(csvRecord);
    }
    // Now, add new data
    foreach (var newColName in newData.Keys)
    {
        var colData = newData[newColName];
        for (int i = 0; i < colData.Count; i++)
        {
            if (i < rowData.Count) // If the row record already exists, add the new column to it
                rowData[i].Add(newColName, colData[i]);
            else // Add a row record with only this column
                rowData.Add(new Dictionary<string, string>() { { newColName, colData[i] } });
        }
        colNames.Add(newColName);
    }
    // Now, write all the data
    StreamWriter sw = new StreamWriter(path);
    // Write header
    sw.WriteLine(String.Join(",", colNames));
    foreach (var row in rowData)
    {
        var line = new List<string>();
        foreach (var colName in colNames) // Iterate over columns
        {
            if (row.ContainsKey(colName)) // If the row contains this column, add it to the line
                line.Add(row[colName]);
            else // Else add an empty string
                line.Add("");
        }
        // Join all elements in the line with a comma, then write to file
        sw.WriteLine(String.Join(",", line));
    }
    sw.Close();
}
To use this, let's create the following CSV file file.csv:
Col0,Col1,Col2
0,1,2
1,2,3
2,3,4
3,4,5
List<string> files = new List<string>() { "test1", "test2" };
List<string> numbers = new List<string>() { "one", "two", "three", "four", "five" };
Dictionary<string, List<string>> newData = new Dictionary<string, List<string>>() {
    { "Test Column", files },
    { "Test2", numbers }
};
AddToCsv("file.csv", newData);
And this results in file.csv being modified to:
Col0,Col1,Col2,Test Column,Test2
0,1,2,test1,one
1,2,3,test2,two
2,3,4,,three
3,4,5,,four
,,,,five
To make this more organized, I defined a struct CsvData to hold the column names and row records, a function ReadCsv() that reads the file into this struct, and a WriteCsv() that writes the struct to a file. This separates responsibilities: ReadCsv() only reads the file, WriteCsv() only writes the file, and AddToCsv() only adds to the data.
public struct CsvData
{
    public List<string> ColNames;
    public List<Dictionary<string, string>> RowData;
}
public static CsvData ReadCsv(string path)
{
    List<string> colNames = new List<string>();
    List<Dictionary<string, string>> rowData = new List<Dictionary<string, string>>(); // A list of records for all other rows
    if (!File.Exists(path)) return new CsvData() { ColNames = colNames, RowData = rowData };
    var fLines = File.ReadLines(path);
    var firstLine = fLines.FirstOrDefault(); // Read the first line
    if (firstLine != null) // Only try to parse the file if the first line actually exists.
    {
        colNames = firstLine.Split(',').ToList(); // col names in first line
        foreach (var line in fLines.Skip(1)) // Iterate over second through last lines
        {
            var row = line.Split(',');
            Dictionary<string, string> csvRecord = new Dictionary<string, string>();
            // Add everything from this row to the record dictionary
            for (int i = 0; i < row.Length; i++)
            {
                var col = colNames[i];
                csvRecord[col] = row[i];
            }
            rowData.Add(csvRecord);
        }
    }
    return new CsvData() { ColNames = colNames, RowData = rowData };
}
public static void WriteCsv(string path, CsvData data)
{
    StreamWriter sw = new StreamWriter(path);
    // Write header
    sw.WriteLine(String.Join(",", data.ColNames));
    foreach (var row in data.RowData)
    {
        var line = new List<string>();
        foreach (var colName in data.ColNames) // Iterate over columns
        {
            if (row.ContainsKey(colName)) // If the row contains this column, add it to the line
                line.Add(row[colName]);
            else // Else add an empty string
                line.Add("");
        }
        // Join all elements in the line with a comma, then write to file
        sw.WriteLine(String.Join(",", line));
    }
    sw.Close();
}
public static void AddToCsv(CsvData data, Dictionary<string, List<string>> newData)
{
    foreach (var newColName in newData.Keys)
    {
        var colData = newData[newColName];
        for (int i = 0; i < colData.Count; i++)
        {
            if (i < data.RowData.Count) // If the row record already exists, add the new column to it
                data.RowData[i].Add(newColName, colData[i]);
            else // Add a row record with only this column
                data.RowData.Add(new Dictionary<string, string>() { { newColName, colData[i] } });
        }
        data.ColNames.Add(newColName);
    }
}
Then, to use this, you do:
var data = ReadCsv(path);
AddToCsv(data, newData);
WriteCsv(path, data);
I managed to figure out a way that worked for me; it might not be the most efficient, but it does work. It involves using CsvHelper:
public static void AppendFile(FileInfo fi, List<string> newColumns, DataTable newRows)
{
    var settings = new CsvConfiguration(new CultureInfo("en-GB"))
    {
        Delimiter = ";"
    };
    var dt = new DataTable();
    using (var reader = new StreamReader(fi.FullName))
    using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
    {
        using (var dataReader = new CsvDataReader(csv))
        {
            dt.Load(dataReader);
            foreach (var title in newColumns)
            {
                dt.Columns.Add(title);
            }
            dt.Rows.Clear();
            foreach (DataRow row in newRows.Rows)
            {
                dt.Rows.Add(row.ItemArray);
            }
        }
    }
    using var streamWriter = new StreamWriter(fi.FullName);
    using var csvWriter = new CsvWriter(streamWriter, settings);
    // Write columns
    foreach (DataColumn column in dt.Columns)
    {
        csvWriter.WriteField(column.ColumnName);
    }
    csvWriter.NextRecord();
    // Write row values
    foreach (DataRow row in dt.Rows)
    {
        for (var i = 0; i < dt.Columns.Count; i++)
        {
            csvWriter.WriteField(row[i]);
        }
        csvWriter.NextRecord();
    }
}
I start by loading the contents of the csv file into a DataTable and then adding the new columns I need. I then clear all the rows in the DataTable and add new ones (the data that was removed is added back in via the newRows parameter), and finally write the DataTable back to the csv file.

Filling <string> list or array to export element names to excel

The goal is to fill excel cells with elements names. Using EPPlus.
_elementsDB gives just "Autodesk.Revit.DB.Wall".
int col = 1;
for (int row = 2; row < _elementsDB.Count; row++)
{
    ws.Cells[row, col].Value = _elementsDB;
}
Trying to populate either an array or a list. Nothing works.
FilteredElementCollector collector = new FilteredElementCollector(doc);
IList<Element> _elementsDB = collector.OfCategory(BuiltInCategory.OST_Walls).WhereElementIsNotElementType().ToElements();
List<Element> _elementsDB = collector.OfCategory(BuiltInCategory.OST_Walls).WhereElementIsNotElementType().ToElements();
//First Option
string[] _elementsNameArr = new string[] { };
foreach (Element e in _elementsDB)
{
    _elementsNameArr = e.Name;
}
//Second Option
List<string> _elementsNameList = new List<string>;
foreach (Element e in _elementsDB)
{
    _elementsNameList = e.Name;
}
Also tried to create a sorted list; that didn't work either. It throws a System.ArgumentException: "A record with such a key already exists".
SortedList<string, Element> _elementNameSorted = new SortedList<string, Element>();
foreach (Element e in _elementsDB)
{
    _elementNameSorted.Add(e.Name, e);
}
When you use the method .ToElements(), it returns an IList<Element>; you cannot directly assign its result to a List<Element>. You have to convert it with LINQ, using .ToElements().ToList().
If you don't have it yet, be sure to add using System.Linq; to the top of your code.
Anyway, there's no need to convert to List<Element> here; try the code below:
FilteredElementCollector collector = new FilteredElementCollector(doc);
IList<Element> _elementsDB = collector.OfCategory(BuiltInCategory.OST_Walls).WhereElementIsNotElementType().ToElements();
//List<Element> _elementsDB = collector.OfCategory(BuiltInCategory.OST_Walls).WhereElementIsNotElementType().ToElements(); // <---- IF YOU ALREADY DECLARED _elementsDB BEFORE, YOU CAN'T HAVE THIS HERE TOGETHER
List<string> _elementsNameList = new List<string>(); // <-- YOU'VE MISSED () HERE
foreach (Element e in _elementsDB)
{
    _elementsNameList.Add(e.Name); // <-- HERE YOU HAVE TO ADD IT TO THE LIST, NOT ASSIGN TO THE LIST; YOU CANNOT ASSIGN A string TO A List<string>
}
//Sorting your list would be
_elementsNameList = _elementsNameList.OrderBy(q => q).ToList();
//...... Write here if you have any code before writing the Excel
try
{
    WriteXLS("YOUR EXCEL FILE PATH HERE", "YOUR WORKSHEET NAME HERE", _elementsNameList, 2); // <--- METHOD SHOWN IN THE CODE BELOW
}
catch (Exception e)
{
    TaskDialog.Show("Error", e.Message);
}
For writing to an existing Excel file you can use the method below (note the list index is offset by startRow, so the first element lands on the first written row and the loop stays in range):
private void WriteXLS(string filePath, string workSheetName, List<string> elementsNamesList, int startRow = 1, int col = 1)
{
    FileInfo existingFile = new FileInfo(filePath);
    using (ExcelPackage package = new ExcelPackage(existingFile))
    {
        ExcelWorksheet ws = GetWorkSheet(package, workSheetName);
        int maxRows = elementsNamesList.Count;
        for (int row = startRow; row < startRow + maxRows; row++)
        {
            ws.Cells[row, col].Value = elementsNamesList[row - startRow];
        }
    }
}
And, before running it, be sure to have your Excel file closed; it doesn't work if the file is open.

Given string array of column names, how do I read a .csv file to a DataTable?

Assume I have a .csv file with 70 columns, but only 5 of the columns are what I need. I want to be able to pass a method a string array of the columns names that I want, and for it to return a datatable.
private void method(object sender, EventArgs e)
{
    string[] columns =
    {
        @"Column21",
        @"Column48"
    };
    DataTable myDataTable = Get_DT(columns);
}
public DataTable Get_DT(string[] columns)
{
    DataTable ret = new DataTable();
    if (columns.Length > 0)
    {
        foreach (string column in columns)
        {
            ret.Columns.Add(column);
        }
        string[] csvlines = File.ReadAllLines(@"path to csv file");
        csvlines = csvlines.Skip(1).ToArray(); //ignore the columns in the first line of the csv file
        //this is where i need help... i want to use linq to read the fields
        //of each row with only the column names given in the string[]
        //named columns
    }
    return ret;
}
Read the first line of the file, line.Split(',') (or whatever your delimiter is), then get the index of each column name and store that.
Then for each other line, again do a var values = line.Split(','), then get the values from the columns.
Quick and dirty version:
string[] csvlines = File.ReadAllLines(@"path to csv file");
//select the indices of the columns we want
var cols = csvlines[0].Split(',').Select((val, i) => new { val, i }).Where(x => columns.Any(c => c == x.val)).Select(x => x.i).ToList();
//now go through the remaining lines
foreach (var line in csvlines.Skip(1))
{
    var line_values = line.Split(',').ToList();
    //index by position rather than IndexOf, which misbehaves when a row contains duplicate values
    var dt_values = cols.Select(i => line_values[i]).ToList();
    //now do something with the values you got for this row, add them to your datatable
}
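To finish the quick-and-dirty version, the selected values can go straight into the DataTable. A runnable sketch with hypothetical in-memory data standing in for the file:

```csharp
using System;
using System.Data;
using System.Linq;

class CsvToDataTableDemo
{
    // Builds a DataTable containing only the requested columns.
    public static DataTable Build(string[] columns, string[] csvlines)
    {
        var ret = new DataTable();
        foreach (var column in columns)
            ret.Columns.Add(column);
        // Map each requested column name to its index in the header line.
        var header = csvlines[0].Split(',');
        var cols = columns.Select(c => Array.IndexOf(header, c)).ToList();
        foreach (var line in csvlines.Skip(1))
        {
            var values = line.Split(',');
            // Pick values by position, preserving the requested column order.
            ret.Rows.Add(cols.Select(i => values[i]).ToArray());
        }
        return ret;
    }

    static void Main()
    {
        string[] columns = { "Column2", "Column4" };
        string[] csvlines = { "Column1,Column2,Column3,Column4", "a,b,c,d", "e,f,g,h" };
        var table = Build(columns, csvlines);
        Console.WriteLine(table.Rows[0][0] + "," + table.Rows[0][1]); // prints "b,d"
    }
}
```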
You can look at https://joshclose.github.io/CsvHelper/
I think "Reading individual fields" is what you are looking for:
var csv = new CsvReader(textReader);
while (csv.Read())
{
    var intField = csv.GetField<int>(0);
    var stringField = csv.GetField<string>(1);
    var boolField = csv.GetField<bool>("HeaderName");
}
We can easily do this without writing much code. ExcelDataReader is an awesome dll for this; it will load the sheet directly into a DataTable with just one method.
Here are links with examples: http://www.c-sharpcorner.com/blogs/using-iexceldatareader1
http://exceldatareader.codeplex.com/
var data = File.ReadAllLines(@"path to csv file");
// the expenses row (split each line before comparing the first field)
var query = data.Select(d => d.Split(',')).Single(fields => fields[0] == "Expenses");
//third column
int column21 = 3;
return query[column21];
As others have stated, a library like CsvReader can be used for this. As for LINQ, I don't think it's suitable for this kind of job.
I haven't tested this but it should get you through
using (TextReader textReader = new StreamReader(filePath))
{
    using (var csvReader = new CsvReader(textReader))
    {
        var headers = csvReader.FieldHeaders;
        for (int rowIndex = 0; csvReader.Read(); rowIndex++)
        {
            var dataRow = dataTable.NewRow();
            for (int chosenColumnIndex = 0; chosenColumnIndex < columns.Count(); chosenColumnIndex++)
            {
                for (int headerIndex = 0; headerIndex < headers.Length; headerIndex++)
                {
                    if (headers[headerIndex] == columns[chosenColumnIndex])
                    {
                        dataRow[chosenColumnIndex] = csvReader.GetField<string>(headerIndex);
                    }
                }
            }
            dataTable.Rows.InsertAt(dataRow, rowIndex);
        }
    }
}

Convert DataTable to LINQ Anonymous Type

I want a function which takes in a DataTable and returns a List (where object is not DataRow).
E.g.:
I know I can do this (but it requires the column names to be known):
// DataTable dt = filled from a database query & has 3 columns: Code, Description & ShortCode
List<object> rtn = new List<object>();
var x = from vals in dt.Select()
        select new
        {
            Code = vals["Code"],
            Description = vals["Description"],
            ShortCode = vals["ShortCode"],
        };
rtn.AddRange(x);
return rtn;
What I want is a generic version, so that I can pass in any DataTable and it will generate the objects based on the column names in the DataTable.
Since the property names are not known at compile time and you want to use the data for JSON serialization, you can use the following to create a list of dictionaries. If you use Newtonsoft JSON, the serialization takes care of converting the key/value pairs to JSON object format.
IEnumerable<Dictionary<string, object>> result = dt.Select()
    .Select(x => x.ItemArray.Select((a, i) => new { Name = dt.Columns[i].ColumnName, Value = a })
    .ToDictionary(a => a.Name, a => a.Value));
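To illustrate the shape this produces, here is a sketch using the BCL's System.Text.Json rather than Newtonsoft (purely so it is self-contained; Newtonsoft's JsonConvert.SerializeObject gives the same result): each dictionary serializes as one JSON object.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

class DictJsonDemo
{
    static void Main()
    {
        // Stand-in for the rows produced from the DataTable above.
        var result = new List<Dictionary<string, object>>
        {
            new Dictionary<string, object> { ["Code"] = 1, ["Description"] = "Widget" }
        };
        // Key/value pairs become JSON object properties.
        string json = JsonSerializer.Serialize(result);
        Console.WriteLine(json); // prints [{"Code":1,"Description":"Widget"}]
    }
}
```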
In order to dynamically create properties, so as to treat different DataTables with different sets of columns, we can use System.Dynamic.ExpandoObject. It basically implements IDictionary<string, object>, a format which can easily be converted to JSON.
int colCount = dt.Columns.Count;
foreach (DataRow dr in dt.Rows)
{
    dynamic objExpando = new System.Dynamic.ExpandoObject();
    var obj = objExpando as IDictionary<string, object>;
    for (int i = 0; i < colCount; i++)
    {
        string key = dr.Table.Columns[i].ColumnName.ToString();
        string val = dr[key].ToString();
        obj[key] = val;
    }
    rtn.Add(obj);
}
String json = new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(rtn);
You can use the following generic function:
private static List<T> ConvertDataTable<T>(DataTable dt)
{
    List<T> data = new List<T>();
    foreach (DataRow row in dt.Rows)
    {
        T item = GetItem<T>(row);
        data.Add(item);
    }
    return data;
}
private static T GetItem<T>(DataRow dr)
{
    Type temp = typeof(T);
    T obj = Activator.CreateInstance<T>();
    foreach (DataColumn column in dr.Table.Columns)
    {
        foreach (PropertyInfo pro in temp.GetProperties())
        {
            if (pro.Name == column.ColumnName)
                pro.SetValue(obj, dr[column.ColumnName], null);
            else
                continue;
        }
    }
    return obj;
}
Please check my article, which has complete demonstration on how to use this generic method.
Here is the original question:
// DataTable dt = filled from a database query & has 3 columns: Code, Description & ShortCode
List<object> rtn = new List<object>();
var x = from vals in dt.Select()
        select new
        {
            Code = vals["Code"],
            Description = vals["Description"],
            ShortCode = vals["ShortCode"],
        };
rtn.AddRange(x);
return rtn;
Just replace with
List<object> rtn = JsonConvert.DeserializeObject<List<object>>(JsonConvert.SerializeObject(dt));
You will have to provide the anonymous object as a parameter and use JSON/XML serialization:
protected static List<T> ToAnonymousCollection<T>(DataTable dt, T anonymousObject)
{
    List<DataColumn> dataColumns = dt.Columns.OfType<DataColumn>().ToList();
    return dt.Rows.OfType<DataRow>().Select(dr =>
    {
        Dictionary<string, object> dict = new Dictionary<string, object>();
        dataColumns.ForEach(dc => dict.Add(dc.ColumnName, dr[dc]));
        return JsonConvert.DeserializeAnonymousType(JsonConvert.SerializeObject(dict), anonymousObject);
    }).ToList();
}
Usage:
var anonymousCollection = ToAnonymousCollection(dt, new { Code = [ColumnTypeValue, eg. 0], Description = [ColumnTypeValue, eg. string.Empty], ShortCode = [ColumnTypeValue, eg. 0] });

Regular Expression with Lambda Expression

I've got several text files which should be tab delimited, but actually are delimited by an arbitrary number of spaces. I want to parse the rows from the text file into a DataTable (the first row of the text file has headers for property names). This got me thinking about building an extensible, easy way to parse text files. Here's my current working solution:
string filePath = @"C:\path\lowbirthweight.txt";
//regex to remove multiple spaces
Regex regex = new Regex(@"[ ]{2,}", RegexOptions.Compiled);
DataTable table = new DataTable();
var reader = ReadTextFile(filePath);
//headers in first row
var headers = reader.First();
//skip headers for data
var data = reader.Skip(1).ToArray();
//remove arbitrary spacing between column headers and table data
headers = regex.Replace(headers, @" ");
for (int i = 0; i < data.Length; i++)
{
    data[i] = regex.Replace(data[i], @" ");
}
//make ready the DataTable, split resultant space-delimited string into array for column names
foreach (string columnName in headers.Split(' '))
{
    table.Columns.Add(new DataColumn() { ColumnName = columnName });
}
foreach (var record in data)
{
    //split into array for row values
    table.Rows.Add(record.Split(' '));
}
//test prints correctly to the console
Console.WriteLine(table.Rows[0][2]);
}
static IEnumerable<string> ReadTextFile(string fileName)
{
    using (var reader = new StreamReader(fileName))
    {
        while (!reader.EndOfStream)
        {
            yield return reader.ReadLine();
        }
    }
}
In my project I've already received several large (gig+) text files that are not in the format they purport to be, so I can see having to write methods like these with some regularity, albeit each with a different regular expression. Is there a way to do something like
data = data.SmartRegex(x => x.AllowOneSpace), where I can use a regular expression to iterate over the collection of strings?
Is something like the following on the right track?
public static class SmartRegex
{
    public static Expression AllowOneSpace(this List<string> data)
    {
        //no idea how to return an expression from a method
    }
}
I'm not overly concerned with performance; I'd just like to see how something like this works.
You should consult with your data source and find out why your data is bad.
As for the API design that you are trying to implement:
public class RegexCollection
{
    private readonly Regex _allowOneSpace = new Regex(" ");
    public Regex AllowOneSpace { get { return _allowOneSpace; } }
}
public static class RegexExtensions
{
    public static IEnumerable<string[]> SmartRegex(
        this IEnumerable<string> collection,
        Func<RegexCollection, Regex> selector
    )
    {
        var regexCollection = new RegexCollection();
        var regex = selector(regexCollection);
        return collection.Select(l => regex.Split(l));
    }
}
Usage:
var items = new List<string> { "Hello world", "Goodbye world" };
var results = items.SmartRegex(x => x.AllowOneSpace);
