How do I check to see if a column exists in a SqlDataReader object? In my data access layer, I have created a method that builds the same object for multiple stored procedure calls. One of the stored procedures has an additional column that is not used by the other stored procedures. I want to modify the method to accommodate every scenario.
My application is written in C#.
public static class DataRecordExtensions
{
public static bool HasColumn(this IDataRecord dr, string columnName)
{
for (int i=0; i < dr.FieldCount; i++)
{
if (dr.GetName(i).Equals(columnName, StringComparison.InvariantCultureIgnoreCase))
return true;
}
return false;
}
}
Using exceptions for control logic, as in some other answers, is considered bad practice and has performance costs. It also skews the profiler's count of exceptions thrown, and god help anyone setting their debugger to break on thrown exceptions.
GetSchemaTable() is another suggestion in many answers. It would not be my preferred way of checking for a field's existence, as it is not implemented everywhere (it's abstract and throws NotSupportedException in some versions of .NET Core). GetSchemaTable is also overkill performance-wise, as it's a pretty heavy-duty function if you check out the source.
Looping through the fields can have a small performance cost if you use it a lot, so you may want to consider caching the results, as sketched below.
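As a rough sketch of that caching idea (the helper name and structure are my own, not from the answer above): read the field names once per reader into a case-insensitive set and check the set for every row.
public static class ColumnNameCache
{
    // Build the set once, right after ExecuteReader, and reuse it for every row.
    public static HashSet<string> GetColumnNames(IDataRecord dr)
    {
        var names = new HashSet<string>(StringComparer.InvariantCultureIgnoreCase);
        for (int i = 0; i < dr.FieldCount; i++)
        {
            names.Add(dr.GetName(i));
        }
        return names;
    }
}
Usage: call var columns = ColumnNameCache.GetColumnNames(reader); once, then columns.Contains("SomeColumn") inside the read loop.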
The correct code is:
public static bool HasColumn(DbDataReader Reader, string ColumnName) {
foreach (DataRow row in Reader.GetSchemaTable().Rows) {
if (row["ColumnName"].ToString() == ColumnName)
return true;
} //Still here? Column not found.
return false;
}
In one line, use this after your DataReader retrieval:
var fieldNames = Enumerable.Range(0, dr.FieldCount).Select(i => dr.GetName(i)).ToArray();
Then,
if (fieldNames.Contains("myField"))
{
var myFieldValue = dr["myField"];
...
Edit
A much more efficient one-liner that does not require loading the schema:
var exists = Enumerable.Range(0, dr.FieldCount).Any(i => string.Equals(dr.GetName(i), fieldName, StringComparison.OrdinalIgnoreCase));
I think your best bet is to call GetOrdinal("columnName") on your DataReader up front, and catch an IndexOutOfRangeException in case the column isn't present.
In fact, let's make an extension method:
public static bool HasColumn(this IDataRecord r, string columnName)
{
try
{
return r.GetOrdinal(columnName) >= 0;
}
catch (IndexOutOfRangeException)
{
return false;
}
}
Edit
Ok, this post is starting to garner a few down-votes lately, and I can't delete it because it's the accepted answer, so I'm going to update it and (I hope) try to justify the use of exception handling as control flow.
The other way of achieving this, as posted by Chad Grant, is to loop through each field in the DataReader and do a case-insensitive comparison for the field name you're looking for. This will work really well, and truthfully will probably perform better than my method above. Certainly I would never use the method above inside a loop where performance was an issue.
I can think of one situation in which the try/GetOrdinal/catch method will work where the loop doesn't. It is, however, a completely hypothetical situation right now so it's a very flimsy justification. Regardless, bear with me and see what you think.
Imagine a database that allowed you to "alias" columns within a table. Imagine that I could define a table with a column called "EmployeeName" but also give it an alias of "EmpName", and doing a select for either name would return the data in that column. With me so far?
Now imagine that there's an ADO.NET provider for that database, and they've coded up an IDataReader implementation for it which takes column aliases into account.
Now, dr.GetName(i) (as used in Chad's answer) can only return a single string, so it has to return only one of the "aliases" on a column. However, GetOrdinal("EmpName") could use the internal implementation of this provider's fields to check each column's alias for the name you're looking for.
In this hypothetical "aliased columns" situation, the try/GetOrdinal/catch method would be the only way to be sure that you're checking for every variation of a column's name in the resultset.
Flimsy? Sure. But worth a thought. Honestly, I'd much rather have an "official" HasColumn method on IDataRecord.
Here is a working sample for Jasmin's idea:
var cols = r.GetSchemaTable().Rows.Cast<DataRow>().Select
(row => row["ColumnName"] as string).ToList();
if (cols.Contains("the column name"))
{
}
The following is simple and worked for me:
bool hasMyColumn = (reader.GetSchemaTable().Select("ColumnName = 'MyColumnName'").Count() == 1);
This works for me:
bool hasColumnName = reader.GetSchemaTable().AsEnumerable().Any(c => c["ColumnName"].ToString() == "YOUR_COLUMN_NAME");
I wrote this for Visual Basic users:
Protected Function HasColumnAndValue(ByRef reader As IDataReader, ByVal columnName As String) As Boolean
For i As Integer = 0 To reader.FieldCount - 1
If reader.GetName(i).Equals(columnName) Then
Return Not IsDBNull(reader(columnName))
End If
Next
Return False
End Function
I think this is more powerful and the usage is:
If HasColumnAndValue(reader, "ID_USER") Then
Me.UserID = reader.GetDecimal(reader.GetOrdinal("ID_USER")).ToString()
End If
If you read the question, Michael asked about DataReader, not DataRecord folks. Get your objects right.
Using r.GetSchemaTable().Columns.Contains(field) on a DataRecord does run, but it checks the schema table's own columns (ColumnName, ColumnOrdinal, and so on), not the columns in your result set (see the note at the end of this answer).
To see if a data column exists AND contains data in a DataReader, use the following extensions:
public static class DataReaderExtensions
{
/// <summary>
/// Checks if a column's value is DBNull
/// </summary>
/// <param name="dataReader">The data reader</param>
/// <param name="columnName">The column name</param>
/// <returns>A bool indicating if the column's value is DBNull</returns>
public static bool IsDBNull(this IDataReader dataReader, string columnName)
{
return dataReader[columnName] == DBNull.Value;
}
/// <summary>
/// Checks if a column exists in a data reader
/// </summary>
/// <param name="dataReader">The data reader</param>
/// <param name="columnName">The column name</param>
/// <returns>A bool indicating the column exists</returns>
public static bool ContainsColumn(this IDataReader dataReader, string columnName)
{
// See: http://stackoverflow.com/questions/373230/check-for-column-name-in-a-sqldatareader-object/7248381#7248381
try
{
return dataReader.GetOrdinal(columnName) >= 0;
}
catch (IndexOutOfRangeException)
{
return false;
}
}
}
Usage:
public static bool CanCreate(SqlDataReader dataReader)
{
return dataReader.ContainsColumn("RoleTemplateId")
&& !dataReader.IsDBNull("RoleTemplateId");
}
Note: calling r.GetSchemaTable().Columns on a DataReader returns the schema table's own columns, not the columns of your result set.
TLDR:
There are lots of answers here making claims about performance and bad practice, so I'll clarify that.
The exception route is faster for higher numbers of returned columns, the loop route is faster for lower numbers of columns, and the crossover point is around 11 columns. Scroll to the bottom to see a graph and the test code.
Full answer:
The code in some of the top answers works, but there is an underlying debate here about the "better" answer, based on the acceptance of exception handling in logic and its related performance.
To clear that away, I do not believe there is much guidance regarding catching exceptions. Microsoft does have some guidance regarding throwing exceptions. There they do state:
Do not use exceptions for the normal flow of control, if possible.
The first note is the leniency of "if possible". More importantly, the description gives this context:
framework designers should design APIs so users can write code that does not throw exceptions
That means that if you are writing an API that might be consumed by somebody else, you should give them the ability to navigate around an exception without a try/catch. For example, provide a TryParse alongside your exception-throwing Parse method. Nowhere does this say, though, that you shouldn't catch an exception.
Further, as another user points out, catches have always allowed filtering by type and somewhat recently allow further filtering via the when clause. This seems like a waste of language features if we're not supposed to be using them.
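For illustration, here is a hedged sketch of what a filtered catch could look like on the GetOrdinal approach; the guard condition is arbitrary and only demonstrates that the filter runs before the catch body executes.
public static bool HasColumnFiltered(IDataRecord record, string columnName)
{
    try
    {
        return record.GetOrdinal(columnName) >= 0;
    }
    catch (IndexOutOfRangeException) when (!string.IsNullOrEmpty(columnName))
    {
        // Only swallow the miss when a real name was asked for; anything else propagates.
        return false;
    }
}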
It can be said that there is some cost for a thrown exception, and that cost may impact performance in a heavy loop. However, it can also be said that the cost of an exception is going to be negligible in a "connected application". Actual cost was investigated over a decade ago: How expensive are exceptions in C#?
In other words, the cost of a connection and query of a database is likely to dwarf that of a thrown exception.
All that aside, I wanted to determine which method truly is faster. As expected there is no concrete answer.
Any code that loops over the columns becomes slower as the number of columns increases. Any code that relies on exceptions slows down as the rate at which the lookup fails increases.
Taking the answers of both Chad Grant and Matt Hamilton, I ran both methods with up to 20 columns and up to a 50% error rate (the OP indicated he was using this to test between two different stored procedures, so I assumed as few as two columns).
Here are the results, plotted with LINQPad:
The zigzags here are fault rates (column not found) within each column count.
Over narrower result sets, looping is a good choice. However, the GetOrdinal/exception method is not nearly as sensitive to the number of columns and begins to outperform the looping method right around 11 columns.
That said, I don't really have a preference performance-wise, as 11 columns sounds reasonable as an average number of columns returned over an entire application. In either case we're talking about fractions of a millisecond here.
However, from a code simplicity aspect, and for alias support, I'd probably go with the GetOrdinal route.
Here is the test in LINQPad form. Feel free to repost with your own method:
void Main()
{
var loopResults = new List<Results>();
var exceptionResults = new List<Results>();
var totalRuns = 10000;
for (var colCount = 1; colCount < 20; colCount++)
{
using (var conn = new SqlConnection(@"Data Source=(localdb)\MSSQLLocalDb;Initial Catalog=master;Integrated Security=True;"))
{
conn.Open();
//create a dummy table where we can control the total columns
var columns = String.Join(",",
(new int[colCount]).Select((item, i) => $"'{i}' as col{i}")
);
var sql = $"select {columns} into #dummyTable";
var cmd = new SqlCommand(sql,conn);
cmd.ExecuteNonQuery();
var cmd2 = new SqlCommand("select * from #dummyTable", conn);
var reader = cmd2.ExecuteReader();
reader.Read();
Func<Func<IDataRecord, String, Boolean>, List<Results>> test = funcToTest =>
{
var results = new List<Results>();
Random r = new Random();
for (var faultRate = 0.1; faultRate <= 0.5; faultRate += 0.1)
{
Stopwatch stopwatch = new Stopwatch();
stopwatch.Start();
var faultCount=0;
for (var testRun = 0; testRun < totalRuns; testRun++)
{
if (r.NextDouble() <= faultRate)
{
faultCount++;
if(funcToTest(reader, "colDNE"))
throw new ApplicationException("Should have thrown false");
}
else
{
for (var col = 0; col < colCount; col++)
{
if(!funcToTest(reader, $"col{col}"))
throw new ApplicationException("Should have thrown true");
}
}
}
stopwatch.Stop();
results.Add(new UserQuery.Results{
ColumnCount = colCount,
TargetNotFoundRate = faultRate,
NotFoundRate = faultCount * 1.0f / totalRuns,
TotalTime=stopwatch.Elapsed
});
}
return results;
};
loopResults.AddRange(test(HasColumnLoop));
exceptionResults.AddRange(test(HasColumnException));
}
}
"Loop".Dump();
loopResults.Dump();
"Exception".Dump();
exceptionResults.Dump();
var combinedResults = loopResults.Join(exceptionResults,l => l.ResultKey, e=> e.ResultKey, (l, e) => new{ResultKey = l.ResultKey, LoopResult=l.TotalTime, ExceptionResult=e.TotalTime});
combinedResults.Dump();
combinedResults
.Chart(r => r.ResultKey, r => r.LoopResult.Milliseconds * 1.0 / totalRuns, LINQPad.Util.SeriesType.Line)
.AddYSeries(r => r.ExceptionResult.Milliseconds * 1.0 / totalRuns, LINQPad.Util.SeriesType.Line)
.Dump();
}
public static bool HasColumnLoop(IDataRecord dr, string columnName)
{
for (int i = 0; i < dr.FieldCount; i++)
{
if (dr.GetName(i).Equals(columnName, StringComparison.InvariantCultureIgnoreCase))
return true;
}
return false;
}
public static bool HasColumnException(IDataRecord r, string columnName)
{
try
{
return r.GetOrdinal(columnName) >= 0;
}
catch (IndexOutOfRangeException)
{
return false;
}
}
public class Results
{
public double NotFoundRate { get; set; }
public double TargetNotFoundRate { get; set; }
public int ColumnCount { get; set; }
public double ResultKey {get => ColumnCount + TargetNotFoundRate;}
public TimeSpan TotalTime { get; set; }
}
Hashtable ht = new Hashtable();
Hashtable CreateColumnHash(SqlDataReader dr)
{
ht = new Hashtable();
for (int i = 0; i < dr.FieldCount; i++)
{
ht.Add(dr.GetName(i), dr.GetName(i));
}
return ht;
}
bool ValidateColumn(string ColumnName)
{
return ht.Contains(ColumnName);
}
Here is a one-liner LINQ version of the accepted answer:
Enumerable.Range(0, reader.FieldCount).Any(i => reader.GetName(i) == "COLUMN_NAME_GOES_HERE")
Here is the solution from Jasmine in one line... (one more, though simple!):
reader.GetSchemaTable().Select("ColumnName='MyCol'").Length > 0;
To keep your code robust and clean, use a single extension function, like this:
Public Module Extensions
<Extension()>
Public Function HasColumn(r As SqlDataReader, columnName As String) As Boolean
Return If(String.IsNullOrEmpty(columnName) OrElse r.FieldCount = 0, False, Enumerable.Range(0, r.FieldCount).Select(Function(i) r.GetName(i)).Contains(columnName, StringComparer.OrdinalIgnoreCase))
End Function
End Module
This code corrects the issues that Levitikon had with their code:
(adapted from: http://msdn.microsoft.com/en-us/library/system.data.datatablereader.getschematable.aspx)
public List<string> GetColumnNames(SqlDataReader r)
{
List<string> ColumnNames = new List<string>();
DataTable schemaTable = r.GetSchemaTable();
DataRow row = schemaTable.Rows[0];
foreach (DataColumn col in schemaTable.Columns)
{
if (col.ColumnName == "ColumnName")
{
ColumnNames.Add(row[col.Ordinal].ToString());
break;
}
}
return ColumnNames;
}
The reason you get all of those useless column names, and not the names of the columns from your table, is that you are getting the names of the schema table's own columns (i.e. the column names of the schema table itself).
NOTE: this seems to only return the name of the first column...
EDIT: corrected code that returns the names of all columns, but you cannot use a SqlDataReader to do it:
public List<string> ExecuteColumnNamesReader(string command, List<SqlParameter> Params)
{
List<string> ColumnNames = new List<string>();
SqlDataAdapter da = new SqlDataAdapter();
string connectionString = ""; // your SQL connection string
SqlCommand sqlComm = new SqlCommand(command, new SqlConnection(connectionString));
foreach (SqlParameter p in Params) { sqlComm.Parameters.Add(p); }
da.SelectCommand = sqlComm;
DataTable dt = new DataTable();
da.Fill(dt);
DataRow row = dt.Rows[0];
for (int ordinal = 0; ordinal < dt.Columns.Count; ordinal++)
{
string column_name = dt.Columns[ordinal].ColumnName;
ColumnNames.Add(column_name);
}
return ColumnNames; // you can then call .Contains("name") on the returned collection
}
I couldn't get GetSchemaTable to work either, until I found this way.
Basically I do this:
Dim myView As DataView = dr.GetSchemaTable().DefaultView
myView.RowFilter = "ColumnName = 'ColumnToBeChecked'"
If myView.Count > 0 AndAlso dr.GetOrdinal("ColumnToBeChecked") <> -1 Then
obj.ColumnToBeChecked = ColumnFromDb(dr, "ColumnToBeChecked")
End If
public static bool DataViewColumnExists(DataView dv, string columnName)
{
return DataTableColumnExists(dv.Table, columnName);
}
public static bool DataTableColumnExists(DataTable dt, string columnName)
{
string DebugTrace = "Utils::DataTableColumnExists(" + dt.ToString() + ")";
try
{
return dt.Columns.Contains(columnName);
}
catch (Exception ex)
{
throw new MyExceptionHandler(ex, DebugTrace);
}
}
Columns.Contains is case-insensitive btw.
My data access class needs to be backward compatible, so I might be trying to access a column in a release where it doesn't exist in the database yet. We have some rather large data sets being returned so I'm not a big fan of an extension method that has to iterate the DataReader column collection for each property.
I have a utility class that creates a private list of columns and then has a generic method that attempts to resolve a value based on a column name and output parameter type.
private List<string> _lstString;
public void GetValueByParameter<T>(IDataReader dr, string parameterName, out T returnValue)
{
returnValue = default(T);
if (!_lstString.Contains(parameterName))
{
Logger.Instance.LogVerbose(this, "missing parameter: " + parameterName);
return;
}
try
{
if (dr[parameterName] != null && dr[parameterName] != DBNull.Value)
returnValue = (T)dr[parameterName];
}
catch (Exception ex)
{
Logger.Instance.LogException(this, ex);
}
}
/// <summary>
/// Reset the global list of columns to reflect the fields in the IDataReader
/// </summary>
/// <param name="dr">The IDataReader being acted upon</param>
/// <param name="NextResult">Advances IDataReader to next result</param>
public void ResetSchemaTable(IDataReader dr, bool nextResult)
{
if (nextResult)
dr.NextResult();
_lstString = new List<string>();
using (DataTable dataTableSchema = dr.GetSchemaTable())
{
if (dataTableSchema != null)
{
foreach (DataRow row in dataTableSchema.Rows)
{
_lstString.Add(row[dataTableSchema.Columns["ColumnName"]].ToString());
}
}
}
}
Then I can just call my code like so
using (var dr = ExecuteReader(databaseCommand))
{
int? outInt;
string outString;
Utility.ResetSchemaTable(dr, false);
while (dr.Read())
{
Utility.GetValueByParameter(dr, "SomeColumn", out outInt);
if (outInt.HasValue) myIntField = outInt.Value;
}
Utility.ResetSchemaTable(dr, true);
while (dr.Read())
{
Utility.GetValueByParameter(dr, "AnotherColumn", out outString);
if (!string.IsNullOrEmpty(outString)) myStringField = outString;
}
}
The key to the whole problem is here:
if (-1 == index) {
throw ADP.IndexOutOfRange(fieldName);
}
If those three lines are taken out (at the time of writing, lines 72, 73, and 74 of the reference source), then you can easily check for -1 to determine whether the column exists.
The only way around this while ensuring native performance is to use a Reflection based implementation, like the following:
Usings:
using System;
using System.Data;
using System.Reflection;
using System.Data.SqlClient;
using System.Linq;
using System.Web.Compilation; // I'm not sure what the .NET Core equivalent of BuildManager is
The Reflection based extension method:
/// <summary>
/// Gets the column ordinal, given the name of the column.
/// </summary>
/// <param name="reader"></param>
/// <param name="name">The name of the column.</param>
/// <returns> The zero-based column ordinal. -1 if the column does not exist.</returns>
public static int GetOrdinalSoft(this SqlDataReader reader, string name)
{
try
{
// Note that "Statistics" will not be accounted for in this implementation
// If you have SqlConnection.StatisticsEnabled set to true (the default is false), you probably don't want to use this method
// All of the following logic is inspired by the actual implementation of the framework:
// https://referencesource.microsoft.com/#System.Data/fx/src/data/System/Data/SqlClient/SqlDataReader.cs,d66096b6f57cac74
if (name == null)
throw new ArgumentNullException("fieldName");
Type sqlDataReaderType = typeof(SqlDataReader);
object fieldNameLookup = sqlDataReaderType.GetField("_fieldNameLookup", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(reader);
Type fieldNameLookupType;
if (fieldNameLookup == null)
{
MethodInfo checkMetaDataIsReady = sqlDataReaderType.GetRuntimeMethods().First(x => x.Name == "CheckMetaDataIsReady" && x.GetParameters().Length == 0);
checkMetaDataIsReady.Invoke(reader, null);
fieldNameLookupType = BuildManager.GetType("System.Data.ProviderBase.FieldNameLookup", true, false);
ConstructorInfo ctor = fieldNameLookupType.GetConstructor(new[] { typeof(SqlDataReader), typeof(int) });
fieldNameLookup = ctor.Invoke(new object[] { reader, sqlDataReaderType.GetField("_defaultLCID", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(reader) });
}
else
fieldNameLookupType = fieldNameLookup.GetType();
MethodInfo indexOf = fieldNameLookupType.GetMethod("IndexOf", BindingFlags.Public | BindingFlags.Instance, null, new Type[] { typeof(string) }, null);
return (int)indexOf.Invoke(fieldNameLookup, new object[] { name });
}
catch
{
// .NET implementation might have changed; revert back to the classic solution.
if (reader.FieldCount > 11) // Performance observation by b_levitt
{
try
{
return reader.GetOrdinal(name);
}
catch
{
return -1;
}
}
else
{
var exists = Enumerable.Range(0, reader.FieldCount).Any(i => string.Equals(reader.GetName(i), name, StringComparison.OrdinalIgnoreCase));
if (exists)
return reader.GetOrdinal(name);
else
return -1;
}
}
}
In your particular situation (all procedures have the same columns except one, which has one additional column), it will be better and faster to check the reader's FieldCount property to distinguish between them.
const int NormalColCount = .....
if(reader.FieldCount > NormalColCount)
{
// Do something special
}
You can also (for performance reasons) mix this solution with the iterating solution, as sketched below.
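A hedged sketch of that mix (NormalColCount and the column name are placeholders standing in for your own values): use FieldCount as the fast path and only scan names when the count alone is not conclusive.
bool hasExtraColumn;
if (reader.FieldCount > NormalColCount)
{
    // More columns than the common shape: assume the extra column is present.
    hasExtraColumn = true;
}
else
{
    // Same count: confirm by scanning the names once.
    hasExtraColumn = Enumerable.Range(0, reader.FieldCount)
        .Any(i => string.Equals(reader.GetName(i), "ExtraColumn", StringComparison.OrdinalIgnoreCase));
}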
I would recommend using try{} catch{} for this simple issue. However, I would not recommend putting any handling logic in the catch block.
try
{
if (!string.IsNullOrEmpty(reader["Name"].ToString()))
{
name = reader["Name"].ToString();
}
}
catch
{
//Do nothing
}
This is a pretty old thread, but I wanted to provide my two cents.
The challenge with most of the proposed solutions is that they require you to enumerate all fields, every time, for every row, for every column you're checking.
Others use the GetSchemaTable method, which is not universally supported.
Personally, I have no issue with throwing and catching exceptions to check if a field exists. In fact, I think it's probably the most straightforward solution from a programming perspective, and the easiest to debug and create an extension for. I've noticed no negative performance hits from swallowing exceptions, except where there is some other transaction involved or weird rollback logic.
Implementation using a try-catch block
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
public class MyModel {
public int ID { get; set; }
public int UnknownColumn { get; set; }
}
public IEnumerable<MyModel> ReadData(SqlCommand command) {
using (SqlDataReader reader = command.ExecuteReader()) {
try {
while (reader.Read()) {
// init the row
MyModel row = new MyModel();
// bind the fields
row.ID = reader.IfDBNull("ID", row.ID);
row.UnknownColumn = reader.IfDBNull("UnknownColumn", row.UnknownColumn);
// return the row and move forward
yield return row;
}
} finally {
// technically the disposer should handle this for you
if (!reader.IsClosed) reader.Close();
}
}
}
// I use a variant of this class everywhere I go to help simplify data binding
public static class IDataReaderExtensions {
// clearly separate name to ensure I don't accidentally use the wrong method
public static T IfDBNull<T>(this IDataReader reader, string name, T defaultValue) {
T value;
try {
// attempt to read the value
// will throw IndexOutOfRangeException if not available
object objValue = reader[name];
// the value returned from SQL is NULL
if (Convert.IsDBNull(objValue)) {
// use the default value
objValue = defaultValue;
}
else if (typeof(T) == typeof(char)) {
// chars are returned from SQL as strings
string strValue = Convert.ToString(objValue);
if (strValue.Length > 0) objValue = strValue[0];
else objValue = defaultValue;
}
value = (T)objValue;
} catch (IndexOutOfRangeException) {
// field does not exist
value = defaultValue;
} catch (InvalidCastException ex) {
// The type we are attempting to bind to is not the same as the type returned from the database
// Personally, I want to know the field name that has the problem
throw new InvalidCastException(name, ex);
}
return value;
}
// clearly separate name to ensure I don't accidentally use the wrong method
// just overloads the other method so I don't need to pass in a default
public static T IfDBNull<T>(this IDataReader reader, string name) {
return IfDBNull<T>(reader, name, default(T));
}
}
If you want to avoid exception handling, I'd recommend saving your results to a HashSet<string> when you initialize your reader, then checking back to it for the columns you want. Alternatively for a micro-optimization, you can implement your columns as a Dictionary<string, int> to prevent a duplicate resolution from Name to ordinal by the SqlDataReader object.
Implementation using HashSet<string>
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
public class MyModel {
public int ID { get; set; }
public int UnknownColumn { get; set; }
}
public IEnumerable<MyModel> ReadData(SqlCommand command) {
using (SqlDataReader reader = command.ExecuteReader()) {
try {
// first read
if (reader.Read()) {
// use whatever *IgnoreCase comparer that you're comfortable with
HashSet<string> columns = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
// init the columns HashSet<string>
for (int i = 0; i < reader.FieldCount; i++) {
string fieldName = reader.GetName(i);
columns.Add(fieldName);
}
// implemented as a do/while since we already read the first row
do {
// init a new instance of your class
MyModel row = new MyModel();
// check if column exists
if (columns.Contains("ID") &&
// ensure the value is not DBNull
!Convert.IsDBNull(reader["ID"])) {
// bind value
row.ID = (int)reader["ID"];
}
// check if column exists
if (columns.Contains("UnknownColumn") &&
// ensure the value is not DBNull
!Convert.IsDBNull(reader["UnknownColumn"])) {
// bind value
row.UnknownColumn = (int)reader["UnknownColumn"];
}
// return the row and move forward
yield return row;
} while (reader.Read());
}
} finally {
// technically the disposer should handle this for you
if (!reader.IsClosed) reader.Close();
}
}
}
Implementation using Dictionary<string, int>
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
public class MyModel {
public int ID { get; set; }
public int UnknownColumn { get; set; }
}
public IEnumerable<MyModel> ReadData(SqlCommand command) {
using (SqlDataReader reader = command.ExecuteReader()) {
try {
// first read
if (reader.Read()) {
// use whatever *IgnoreCase comparer that you're comfortable with
Dictionary<string, int> columns = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
// init the columns Dictionary<string, int>
for (int i = 0; i < reader.FieldCount; i++) {
string fieldName = reader.GetName(i);
columns[fieldName] = i;
}
// implemented as a do/while since we already read the first row
do {
// init a new instance of your class
MyModel row = new MyModel();
// stores the resolved ordinal from your dictionary
int ordinal;
// check if column exists
if (columns.TryGetValue("ID", out ordinal) &&
// ensure the value is not DBNull
!Convert.IsDBNull(reader[ordinal])) {
// bind value
row.ID = (int)reader[ordinal];
}
// check if column exists
if (columns.TryGetValue("UnknownColumn", out ordinal) &&
// ensure the value is not DBNull
!Convert.IsDBNull(reader[ordinal])) {
// bind value
row.UnknownColumn = (int)reader[ordinal];
}
// return the row and move forward
yield return row;
} while (reader.Read());
}
} finally {
// technically the disposer should handle this for you
if (!reader.IsClosed) reader.Close();
}
}
}
You can also call GetSchemaTable() on your DataReader if you want the list of columns and you don't want to have to get an exception...
Although there is no publicly exposed method, a method does exist in the internal class System.Data.ProviderBase.FieldNameLookup which SqlDataReader relies on.
In order to access it and get native performance, you must use the ILGenerator to create a method at runtime. The following code gives you direct access to int IndexOf(string fieldName) in the System.Data.ProviderBase.FieldNameLookup class, and also performs the bookkeeping that SqlDataReader.GetOrdinal() does, so that there are no side effects. The generated code mirrors the existing SqlDataReader.GetOrdinal(), except that it calls FieldNameLookup.IndexOf() instead of FieldNameLookup.GetOrdinal(). The GetOrdinal() method calls the IndexOf() function and throws an exception if -1 is returned, so we bypass that behavior.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Reflection;
using System.Reflection.Emit;
public static class SqlDataReaderExtensions {
private delegate int IndexOfDelegate(SqlDataReader reader, string name);
private static IndexOfDelegate IndexOf;
public static int GetColumnIndex(this SqlDataReader reader, string name) {
return name == null ? -1 : IndexOf(reader, name);
}
public static bool ContainsColumn(this SqlDataReader reader, string name) {
return name != null && IndexOf(reader, name) >= 0;
}
static SqlDataReaderExtensions() {
Type typeSqlDataReader = typeof(SqlDataReader);
Type typeSqlStatistics = typeSqlDataReader.Assembly.GetType("System.Data.SqlClient.SqlStatistics", true);
Type typeFieldNameLookup = typeSqlDataReader.Assembly.GetType("System.Data.ProviderBase.FieldNameLookup", true);
BindingFlags staticflags = BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.IgnoreCase | BindingFlags.Static;
BindingFlags instflags = BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.IgnoreCase | BindingFlags.Instance;
DynamicMethod dynmethod = new DynamicMethod("SqlDataReader_IndexOf", typeof(int), new Type[2]{ typeSqlDataReader, typeof(string) }, true);
ILGenerator gen = dynmethod.GetILGenerator();
gen.DeclareLocal(typeSqlStatistics);
gen.DeclareLocal(typeof(int));
// SqlStatistics statistics = (SqlStatistics) null;
gen.Emit(OpCodes.Ldnull);
gen.Emit(OpCodes.Stloc_0);
// try {
gen.BeginExceptionBlock();
// statistics = SqlStatistics.StartTimer(this.Statistics);
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Call, typeSqlDataReader.GetProperty("Statistics", instflags | BindingFlags.GetProperty, null, typeSqlStatistics, Type.EmptyTypes, null).GetMethod);
gen.Emit(OpCodes.Call, typeSqlStatistics.GetMethod("StartTimer", staticflags | BindingFlags.InvokeMethod, null, new Type[] { typeSqlStatistics }, null));
gen.Emit(OpCodes.Stloc_0); //statistics
// if(this._fieldNameLookup == null) {
Label branchTarget = gen.DefineLabel();
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Ldfld, typeSqlDataReader.GetField("_fieldNameLookup", instflags | BindingFlags.GetField));
gen.Emit(OpCodes.Brtrue_S, branchTarget);
// this.CheckMetaDataIsReady();
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Call, typeSqlDataReader.GetMethod("CheckMetaDataIsReady", instflags | BindingFlags.InvokeMethod, null, Type.EmptyTypes, null));
// this._fieldNameLookup = new FieldNameLookup((IDataRecord)this, this._defaultLCID);
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Ldfld, typeSqlDataReader.GetField("_defaultLCID", instflags | BindingFlags.GetField));
gen.Emit(OpCodes.Newobj, typeFieldNameLookup.GetConstructor(instflags, null, new Type[] { typeof(IDataReader), typeof(int) }, null));
gen.Emit(OpCodes.Stfld, typeSqlDataReader.GetField("_fieldNameLookup", instflags | BindingFlags.SetField));
// }
gen.MarkLabel(branchTarget);
gen.Emit(OpCodes.Ldarg_0); //this
gen.Emit(OpCodes.Ldfld, typeSqlDataReader.GetField("_fieldNameLookup", instflags | BindingFlags.GetField));
gen.Emit(OpCodes.Ldarg_1); //name
gen.Emit(OpCodes.Call, typeFieldNameLookup.GetMethod("IndexOf", instflags | BindingFlags.InvokeMethod, null, new Type[] { typeof(string) }, null));
gen.Emit(OpCodes.Stloc_1); //int output
Label leaveProtectedRegion = gen.DefineLabel();
gen.Emit(OpCodes.Leave_S, leaveProtectedRegion);
// } finally {
gen.BeginFaultBlock();
// SqlStatistics.StopTimer(statistics);
gen.Emit(OpCodes.Ldloc_0); //statistics
gen.Emit(OpCodes.Call, typeSqlStatistics.GetMethod("StopTimer", staticflags | BindingFlags.InvokeMethod, null, new Type[] { typeSqlStatistics }, null));
// }
gen.EndExceptionBlock();
gen.MarkLabel(leaveProtectedRegion);
gen.Emit(OpCodes.Ldloc_1);
gen.Emit(OpCodes.Ret);
IndexOf = (IndexOfDelegate)dynmethod.CreateDelegate(typeof(IndexOfDelegate));
}
}
Use:
if (dr.GetSchemaTable().Columns.Contains("accounttype"))
do something
else
do something
It probably would not be as efficient in a loop.
This works for me:
public static class DataRecordExtensions
{
public static bool HasColumn(IDataReader dataReader, string columnName)
{
dataReader.GetSchemaTable().DefaultView.RowFilter = $"ColumnName= '{columnName}'";
return (dataReader.GetSchemaTable().DefaultView.Count > 0);
}
}
Use:
if (Enumerable.Range(0, reader.FieldCount).Select(reader.GetName).Contains("columnName"))
{
employee.EmployeeId= Utility.ConvertReaderToLong(reader["EmployeeId"]);
}
You can get more details from Can you get the column names from a SqlDataReader?.
Related
Is there a good way to replace placeholders with dynamic data?
I have tried loading a template and then replacing all {{PLACEHOLDER}} tags with data from the meta object, which works.
But if I need to add more placeholders, I have to do it in code and make a new deployment, so if possible I want to do it through the database, like this:
Table Placeholders
ID, Key (nvarchar(50)), Value (nvarchar(59))
1 {{RECEIVER_NAME}} meta.receiver
2 {{RESOURCE_NAME}} meta.resource
3 ..
4 .. and so on
The meta is the name of the parameter sent into the BuildTemplate method.
So when I loop through all the placeholders (from the db), I want to resolve the value from the db against the meta object.
Instead of getting the string "meta.receiver", I need the value inside the parameter.
GetAllAsync ex.1
public async Task<Dictionary<string, object>> GetAllAsync()
{
return await _context.EmailTemplatePlaceholders.ToDictionaryAsync(x => x.PlaceholderKey, x => x.PlaceholderValue as object);
}
GetAllAsync ex.2
public async Task<IEnumerable<EmailTemplatePlaceholder>> GetAllAsync()
{
var result = await _context.EmailTemplatePlaceholders.ToListAsync();
return result;
}
sample not using the db (working)
private async Task<string> BuildTemplate(string template, dynamic meta)
{
var sb = new StringBuilder(template);
sb.Replace("{{RECEIVER_NAME}}", meta.receiver?.ToString());
sb.Replace("{{RESOURCE_NAME}}", meta.resource?.ToString());
return sb.ToString();
}
how I want it to work
private async Task<string> BuildTemplate(string template, dynamic meta)
{
var sb = new StringBuilder(template);
var placeholders = await _placeholders.GetAllAsync();
foreach (var placeholder in placeholders)
{
// when using reflection I still get a string like "meta.receiver" instead of meta.receiver, like the object.
// in other words, the sb.Replace methods gives the same result.
//sb.Replace(placeholder.Key, placeholder.Value.GetType().GetField(placeholder.Value).GetValue(placeholder.Value));
sb.Replace(placeholder.Key, placeholder.Value);
}
return sb.ToString();
}
I think this might be a better solution for this problem. Please let me know!
We solved a similar issue in our development.
We created an extension to format any object.
Please review our source code:
public static string FormatWith(this string format, object source, bool escape = false)
{
return FormatWith(format, null, source, escape);
}
public static string FormatWith(this string format, IFormatProvider provider, object source, bool escape = false)
{
if (format == null)
throw new ArgumentNullException("format");
List<object> values = new List<object>();
var rewrittenFormat = Regex.Replace(format,
@"(?<start>\{)+(?<property>[\w\.\[\]]+)(?<format>:[^}]+)?(?<end>\})+",
delegate(Match m)
{
var startGroup = m.Groups["start"];
var propertyGroup = m.Groups["property"];
var formatGroup = m.Groups["format"];
var endGroup = m.Groups["end"];
var value = propertyGroup.Value == "0"
? source
: Eval(source, propertyGroup.Value);
if (escape && value != null)
{
value = XmlEscape(JsonEscape(value.ToString()));
}
values.Add(value);
var openings = startGroup.Captures.Count;
var closings = endGroup.Captures.Count;
return openings > closings || openings%2 == 0
? m.Value
: new string('{', openings) + (values.Count - 1) + formatGroup.Value
+ new string('}', closings);
},
RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);
return string.Format(provider, rewrittenFormat, values.ToArray());
}
private static object Eval(object source, string expression)
{
try
{
return DataBinder.Eval(source, expression);
}
catch (HttpException e)
{
throw new FormatException(null, e);
}
}
The usage is very simple:
var body = "[{Name}] {Description} (<a href='{Link}'>See More</a>)";
var model = new { Name="name", Link="localhost", Description="" };
var result = body.FormatWith(model);
You want to do it like this:
sb.Replace(placeholder.Key, meta.GetType().GetField(placeholder.Value).GetValue(meta).ToString())
and instead of meta.receiver, your database would just store receiver
This way, the placeholder as specified in your database is replaced with the corresponding value from the meta object. The downside is that you can only pull values from the meta object with this method. However, from what I can see, it doesn't seem like that would be an issue for you, so it might not matter.
More clarification: The issue with what you tried
//sb.Replace(placeholder.Key, placeholder.Value.GetType().GetField(placeholder.Value).GetValue(placeholder.Value));
is that, first of all, you try to get the type of the whole string "meta.receiver" instead of just the meta portion, and additionally there doesn't seem to be a conversion from a string to a class type (e.g. Type.GetType("meta")). Also, when you call GetValue, there's no conversion from a string to the object you need (I'm not positive what that would look like).
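As a hedged sketch of that distinction (the helper below is my own illustration, not from either answer): GetField only sees public fields, so if meta exposes properties, as an anonymous type would, you need GetProperty; and in either case the member name stored in the database should be just "receiver", not "meta.receiver".
static string ResolveMember(object meta, string memberName)
{
    var type = meta.GetType();
    // Prefer a property (covers anonymous types and typical POCOs)...
    var prop = type.GetProperty(memberName);
    if (prop != null)
        return prop.GetValue(meta, null)?.ToString() ?? string.Empty;
    // ...and fall back to a public field if that is what the object exposes.
    var field = type.GetField(memberName);
    return field?.GetValue(meta)?.ToString() ?? string.Empty;
}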
Since you want to replace all the placeholders in your template dynamically, without replacing them one by one manually, I think Regex is better for these things.
This function takes a template which you want to interpolate and an object which you want to bind to your template. It will automatically replace placeholders like
{{RECEIVER_NAME}} with values from your object.
You will need a class which contains all the properties that you want to bind. In this example the class is MainInvoiceBind.
public static string Format(string obj,MainInvoiceBind invoice)
{
try
{
return Regex.Replace(obj, @"{{(?<exp>[^}]+)}}", match =>
{
try
{
var p = Expression.Parameter(typeof(MainInvoiceBind), "");
var e = System.Linq.Dynamic.DynamicExpression.ParseLambda(new[] { p }, null, match.Groups["exp"].Value);
return (e.Compile().DynamicInvoke(invoice) ?? "").ToString();
}
catch
{
return "Nill";
}
});
}
catch
{
return string.Empty;
}
}
I implemented this technique in a project where I had to generate emails dynamically from their specified templates. It's working well for me. Hopefully it solves your problem.
I updated Habib's solution to the more current System.Linq.Dynamic.Core NuGet package, with small improvements.
This function will automatically replace your placeholders like {{RECEIVER_NAME}} with data from your object. You can even use some operators, since it's using Linq.
public static string Placeholder(string input, object obj)
{
try {
var p = new[] { Expression.Parameter(obj.GetType(), "") };
return Regex.Replace(input, @"{{(?<exp>[^}]+)}}", match => {
try {
return DynamicExpressionParser.ParseLambda(p, null, match.Groups["exp"].Value)
.Compile().DynamicInvoke(obj)?.ToString();
}
catch {
return "(undefined)";
}
});
}
catch {
return "(error)";
}
}
You could also make multiple objects accessible and name them.
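A hedged sketch of that idea (the second object and the types are assumptions, not from the answer above): give each parameter a name, and the template can then address either object explicitly, e.g. {{meta.Receiver}} or {{user.Email}}.
// Requires: System.Linq.Dynamic.Core, System.Linq.Expressions, System.Text.RegularExpressions
public static string Placeholder(string input, object meta, object user)
{
    var parameters = new[]
    {
        Expression.Parameter(meta.GetType(), "meta"),
        Expression.Parameter(user.GetType(), "user")
    };
    return Regex.Replace(input, @"{{(?<exp>[^}]+)}}", match =>
    {
        try
        {
            return DynamicExpressionParser
                .ParseLambda(parameters, null, match.Groups["exp"].Value)
                .Compile()
                .DynamicInvoke(meta, user)?.ToString() ?? "(undefined)";
        }
        catch
        {
            return "(undefined)";
        }
    });
}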
I am reading rows from a table in SQL Server using C# in SSIS. As I loop through each column I want to get the datatype of the field from the table. Here is my code:
string s = "";
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
using (System.IO.StreamWriter file =
new System.IO.StreamWriter(@"C:\Users\cassf\Documents\Tyler Tech\FL\ncc3\CM_Property.csv", true))
{
foreach (PropertyInfo inputColumn in Row.GetType().GetProperties())
{
if (!inputColumn.Name.EndsWith("IsNull"))
{
try
{
s += Convert.ToString(inputColumn.GetValue(Row,null).ToString());
}
catch
{
some code
}
}
}
}
}
The first issue is that when I do Convert.ToString() on a Bit field from the database, it changes the value to either True or False. I want the actual value of 1 or 0.
To try to fix this, I want to check the field type for Boolean; it appears that the script is converting from bit to Boolean. Then I can manually put the 1 or 0 back. I would prefer to have the value directly from the database, though.
Any help would be greatly appreciated.
Thanks,
Kent
I'd implement a helper function to make your own conversion, when needed, like this:
string s = "";
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
using (System.IO.StreamWriter file =
new System.IO.StreamWriter(@"C:\Users\cassf\Documents\Tyler Tech\FL\ncc3\CM_Property.csv", true))
{
foreach (PropertyInfo inputColumn in Row.GetType().GetProperties())
{
if (!inputColumn.Name.EndsWith("IsNull"))
{
try
{
s += ValueToString(inputColumn.GetValue(Row,null));
}
catch
{
some code
}
}
}
}
}
protected string ValueToString(object value)
{
if (value == null)
throw new ArgumentNullException("I don't know how to convert null to string, implement me!");
switch (Type.GetTypeCode(value.GetType()))
{
// Any kind of special treatment, you implement here...
case TypeCode.Boolean: return Convert.ToInt16(value).ToString();
default: return value.ToString(); // ...otherwise, just use the common conversion
}
}
For booleans, you just convert the value to an int, and the int to a string (you'll get 1 or 0 in string format).
Depending on what you're going to do with the s variable, you might want to surround string values with quotes; if so, you could do it inside the ValueToString() method, as sketched below.
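A minimal sketch of that quoting, assuming s is being written out as comma-separated text (the double-the-quotes escaping rule is my assumption, not part of the original code):
protected string QuotedValueToString(object value)
{
    string s = ValueToString(value);
    // Wrap strings in quotes and escape embedded quotes, CSV style.
    if (value is string)
        return "\"" + s.Replace("\"", "\"\"") + "\"";
    return s;
}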
As is known, the default MongoDB driver doesn't support automatic integer Id generation.
I spent two days thinking about how to implement my own generator of unique integer Ids.
So, how do I make one?
It's not good practice to use an auto-increment Id in MongoDB, as it will hurt when scaling your server. But if you want an auto-increment value, it is not advisable to iterate your collection; instead, make a separate collection (a sequence-like concept), read the value from there, and increment it using findAndModify. It will be unique per collection.
> db.counters.insert({_id: "userId", c: 0});
> var o = db.counters.findAndModify(
... {query: {_id: "userId"}, update: {$inc: {c: 1}}});
{ "_id" : "userId", "c" : 0 }
> db.mycollection.insert({_id:o.c, stuff:"abc"});
> o = db.counters.findAndModify(
... {query: {_id: "userId"}, update: {$inc: {c: 1}}});
{ "_id" : "userId", "c" : 1 }
> db.mycollection.insert({_id:o.c, stuff:"another one"});
I would use a GUID as the primary key instead of an integer.
It has two main benefits:
It is thread safe.
You don't need to worry about calculating the next Id.
The code needed to get a new Id is pretty easy:
Guid.NewGuid()
Check this useful article on Coding Horror that explains the pros and cons of using GUIDs over classical integer Ids.
A late answer, but I thought I'd post this:
https://github.com/alexjamesbrown/MongDBIntIdGenerator
I made a start on an incremental ID generator.
Note: this is far from ideal, and not what MongoDB was intended for.
Something like this:
public class UniqueIntGenerator : IIdGenerator
{
private static UniqueIntGenerator _instance;
public static UniqueIntGenerator Instance { get { return _instance; } }
static UniqueIntGenerator()
{
_instance = new UniqueIntGenerator();
}
public object GenerateId(object container, object document)
{
var cont = container as MongoCollection;
if (cont == null)
return 0;
var type = cont.Settings.DefaultDocumentType;
var cursor = cont.FindAllAs(type);
cursor.SetSortOrder(SortBy.Descending("_id"));
cursor.Limit = 1;
foreach (var obj in cursor)
return GetId(obj) + 1;
return 1;
}
private int GetId(object obj)
{
var properties = obj.GetType().GetProperties();
var idProperty = properties.Single(y => y.GetCustomAttributes(typeof(BsonIdAttribute), false).SingleOrDefault() != null);
var idValue = (int)idProperty.GetValue(obj, null);
return idValue;
}
public bool IsEmpty(object id)
{
return default(int) == (int)id;
}
}
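If I remember the legacy C# driver correctly, the generator is registered once at startup; treat the exact call below as an assumption to verify against your driver version rather than a confirmed API.
using MongoDB.Bson.Serialization;
// Register at application startup so int _id fields use the generator above.
BsonSerializer.RegisterIdGenerator(typeof(int), UniqueIntGenerator.Instance);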
I have a method that calls a stored procedure. It uses the employee number as a parameter to retrieve the data of a particular employee and then fills the data table with the result.
protected DataTable CreateDT(string empNo)
{
DataTable dataTable = null;
try
{
SqlCommand cmd = new SqlCommand("FIND_EMPLOYEE_BY_EMPNO", pl.ConnOpen());
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new SqlParameter("@EMPNO", (object)empNo));
SqlDataAdapter da = new SqlDataAdapter(cmd);
dataTable = new DataTable("dt");
da.Fill(dataTable);
}
catch (Exception x)
{
MessageBox.Show(x.GetBaseException().ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
finally
{
pl.MySQLConn.Close();
}
return dataTable;
}
What I'm trying to accomplish is converting this code to LINQ, but I don't know how to get the result and fill my data table with it. See below:
(screenshot: http://a.imageshack.us/img706/9017/testki.jpg)
protected DataTable CreateDT(string empNo)
{
DataTable dataTable = null;
try
{
DataClasses1DataContext dataClass1 = new DataClasses1DataContext();
// I tried to cast it to DataTable, but it doesn't work...
dataTable = (DataTable)dataClass1.findEmployeeByID(empNo);
}
catch (Exception x)
{
MessageBox.Show(x.GetBaseException().ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
finally
{
pl.MySQLConn.Close();
}
return dataTable;
}
Please guide me on how to do this properly... Thanks in advance.
Why exactly do you need to fill a data table? Most bindable controls that use a data table accept any IEnumerable-based object, which is what the collection result of standard LINQ produces.
You're having to refactor the code anyway to use the LINQ objects, so you might as well go ahead and change it all the way. You'll be happier in the long run, as LINQ is much easier to use than ADO.NET.
But to answer the question, you would have to iterate through the list and insert each element into the DataTable. Something like this (code sample found in this article):
public DataTable LINQToDataTable<T>(IEnumerable<T> varlist)
{
DataTable dtReturn = new DataTable();
// column names
PropertyInfo[] oProps = null;
if (varlist == null) return dtReturn;
foreach (T rec in varlist)
{
// Use reflection to get property names to create the table (only the first time; the rest follow)
if (oProps == null)
{
oProps = ((Type)rec.GetType()).GetProperties();
foreach (PropertyInfo pi in oProps)
{
Type colType = pi.PropertyType;
if ((colType.IsGenericType) && (colType.GetGenericTypeDefinition() == typeof(Nullable<>)))
{
colType = colType.GetGenericArguments()[0];
}
dtReturn.Columns.Add(new DataColumn(pi.Name, colType));
}
}
DataRow dr = dtReturn.NewRow();
foreach (PropertyInfo pi in oProps)
{
dr[pi.Name] = pi.GetValue(rec, null) == null ? DBNull.Value : pi.GetValue(rec, null);
}
dtReturn.Rows.Add(dr);
}
return dtReturn;
}
findEmployeeByID will most likely return IEnumerable<Employee>. Considering that you are switching to LINQ, you should take advantage of strongly typed data and use it across your application. So, change the return type of the CreateDT function and adjust the rest of the code accordingly (I assume that the stored procedure returns at most one result):
protected Employee CreateDT(string empNo)
{
try
{
DataClasses1DataContext dataClass1 = new DataClasses1DataContext();
// I tried to cast it to DataTable, but it doesn't work...
return dataClass1.findEmployeeByID(empNo).FirstOrDefault();
}
catch (Exception x)
{
MessageBox.Show(x.GetBaseException().ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
finally
{
//might need to dispose the context here
}
return null;
}
Usage:
var employee = CreateDT("1234");
//You can now access members of employee in a typesafe manner
string name = employee.Name;
EDIT Updated code - this is how you can rewrite the old DataTable code:
protected void RetrieveEmployee(string empNo) {
Employee emp = CreateDT(empNo);// <---- Here
txtEmployeeNo.Text = emp.EmployeeNo;
txtLastName.Text = emp.LastName;
//....
}
Note the absence of array indices and late-bound column specifiers: dt[0]["EmployeeNo"] became emp.EmployeeNo, which is much safer, faster, and easier to read.
Why does it need to return a DataTable? One of the big advantages of LINQ is that you can work with strong-typed collections, rather than string-keyed DataTables.
protected IEnumerable<Employee> GetEmployee(string empNo)
{
try
{
DataClasses1DataContext dataClass1 = new DataClasses1DataContext();
// I tried to cast it to DataTable, but it doesn't work...
return dataClass1.findEmployeeByID(empNo);
}
catch (Exception x)
{
MessageBox.Show(x.GetBaseException().ToString(), "Error",
MessageBoxButtons.OK, MessageBoxIcon.Error);
return null;
}
finally
{
pl.MySQLConn.Close();
}
}
Some other points:
I would remove the exception handling from this method and do it in a higher-level place. If you're using WinForms (which you seem to be), I would just let the exception bubble all the way up to the default WinForms exception handler (add a handler to Application.ThreadException).
I would also make the DataContext a member variable rather than creating and destroying one on each call. The advantage is that if you do multiple updates on the same instance, you can apply them all with a single SubmitChanges() call, which results in one round trip to the server rather than one per update.
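A minimal sketch of that shape, assuming LINQ to SQL (where the batch call is SubmitChanges); the class and member names are illustrative, not from the question:
public class EmployeeRepository : IDisposable
{
    private readonly DataClasses1DataContext _context = new DataClasses1DataContext();

    public Employee GetEmployee(string empNo)
    {
        return _context.findEmployeeByID(empNo).FirstOrDefault();
    }

    public void Save()
    {
        // One round trip applies every pending change tracked by this context.
        _context.SubmitChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}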
I have a number of static classes that contain tables like this:
using System;
using System.Data;
using System.Globalization;
public static class TableFoo
{
private static readonly DataTable ItemTable;
static TableFoo()
{
ItemTable = new DataTable("TableFoo") { Locale = CultureInfo.InvariantCulture };
ItemTable.Columns.Add("Id", typeof(int));
ItemTable.Columns["Id"].Unique = true;
ItemTable.Columns.Add("Description", typeof(string));
ItemTable.Columns.Add("Data1", typeof(int));
ItemTable.Columns.Add("Data2", typeof(double));
ItemTable.Rows.Add(0, "Item 1", 1, 1.0);
ItemTable.Rows.Add(1, "Item 2", 1, 1.0);
ItemTable.Rows.Add(2, "Item 3", 2, 0.75);
ItemTable.Rows.Add(3, "Item 4", 4, 0.25);
ItemTable.Rows.Add(4, "Item 5", 1, 1.0);
}
public static DataTable GetItemTable()
{
return ItemTable;
}
public static int Data1(int id)
{
DataRow[] dr = ItemTable.Select("Id = " + id);
if (dr.Length == 0)
{
throw new ArgumentOutOfRangeException("id", "Out of range.");
}
return (int)dr[0]["Data1"];
}
public static double Data2(int id)
{
DataRow[] dr = ItemTable.Select("Id = " + id);
if (dr.Length == 0)
{
throw new ArgumentOutOfRangeException("id", "Out of range.");
}
return (double)dr[0]["Data2"];
}
}
Is there a better way of writing the Data1 or Data2 methods that return a single value from a single row that matches the given id?
Update #1:
I have created an extension method that seems quite nice:
public static T FirstValue<T>(this DataTable datatable, int id, string fieldName)
{
try
{
return datatable.Rows.OfType<DataRow>().Where(row => (int)row["Id"] == id).Select(row => (T)row[fieldName]).First();
}
catch
{
throw new ArgumentOutOfRangeException("id", "Out of range.");
}
}
My Data1 method then becomes:
public static int Data1(int id)
{
return ItemTable.FirstValue<int>(id, "Data1");
}
and Data2 becomes:
public static double Data2(int id)
{
return ItemTable.FirstValue<double>(id, "Data2");
}
Thanks for all your responses, but especially to Anthony Pegram, who provided the very nice single line of LINQ and lambda code.
Have you considered using LINQ (to DataSets)? With LINQ expressions you wouldn't need those Data1 and Data2 functions at all, since the lookup and filtering could happen in a single line of code.
Example added:
Shooting from the hip here, so please take it with a grain of salt (not near an IDE:)
var itemTbl = TableFoo.GetItemTable().AsEnumerable();
int data1 = (from t in itemTbl where t.Field<int>("Id") == yourId /* your id value */ select t).First().Field<int>("Data1");
That's two lines of code, but you could easily wrap the getting of the enumerable.
I'm a little suspicious of your architecture, but never mind that. If you want a function that returns the first value of the first row of a datatable that it will get somehow, and you want it strongly typed, I think the function below will be an improvement. It would allow you to have just one function, reusable for different types. To use it you would have lines of code like:
int intValue = TableFoo.FirstValueOrDefault<int>(7);
decimal decValue = TableFoo.FirstValueOrDefault<decimal>(7);
and if you feel like it:
string strValue = TableFoo.FirstValueOrDefault<string>(7);
int? nintValue = TableFoo.FirstValueOrDefault<int?>(7);
The function handles any type you generically give it, strings, other value types, nullable types, reference types. If the field is null, the function returns the "default" value for that type ("" for string). If it absolutely can't do the conversion, because you asked for an impossible conversion, it will throw an error. I've made it an extension method on the datarow type (called ValueOrDefault), and this sucker is really handy.
I adapted this data-tool extension method of mine for your situation. I'm in a VB shop, and I just don't have time to re-write the whole thing in C#, but you could do that easily enough.
Public Shared Function FirstValueOrDefault(Of T)(ByVal id As Integer) As T
Dim r As DataRow = ItemTable.Select("Id = " & id.ToString())(0)
If r.IsNull(0) Then
If GetType(T) Is GetType(String) Then
Return CType(CType("", Object), T)
Else
Return Nothing
End If
Else
Try
Return r.Field(Of T)(0)
Catch ex As Exception
Return CType(r.Item(0), T)
End Try
End If
End Function