I am using MariaDB. I have a table that I create for every IoT device at the time of its first insertion, using a stored procedure. In case anyone wonders why I create a new table per device: the devices publish data every 5 seconds, and it is impossible for me to store all of it in a single table.
So, my table structure is like below:
CREATE TABLE IF NOT EXISTS `mqttpacket_',device_serial_number,'`(
`data_type_id` int(11) DEFAULT NULL,
`data_value` int(11) DEFAULT NULL,
`inserted_date` DATE DEFAULT NULL,
`inserted_time` TIME DEFAULT NULL,
FOREIGN KEY(data_type_id) REFERENCES datatypes(id),
INDEX `index_mqttpacket`(`data_type_id`,`inserted_date`)) ENGINE = INNODB;
I have a fairly long SELECT query like the one below to fetch the data for the selected types between the selected dates and times.
SELECT mqttpacket_123.data_value, datatypes.data_name, datatypes.value_mult,
CONCAT(mqttpacket_123.inserted_date, ' ',
mqttpacket_123.inserted_time) AS 'inserted_date_time'
FROM mqttpacket_123
JOIN datatypes ON mqttpacket_123.data_type_id = datatypes.id
WHERE mqttpacket_123.data_type_id IN(1,2,3,4,5,6)
AND CASE WHEN mqttpacket_123.inserted_date = '2021-11-08'
THEN mqttpacket_123.inserted_time > '12:25:00'
WHEN mqttpacket_123.inserted_date = '2021-11-15'
THEN mqttpacket_123.inserted_time < '12:25:00'
ELSE (mqttpacket_123.inserted_date BETWEEN '2021-11-08'
AND '2021-11-15')
END;
and this returns around 500k records like the sample below:
| data_value | data_name | value_mult | inserted_date_time |
--------------------------------------------------------------------------------
| 271 | name_1 | 0.1 | 2021-11-08 12:25:04 |
| 106 | name_2 | 0.1 | 2021-11-08 12:25:04 |
| 66 | name_3 | 0.1 | 2021-11-08 12:25:04 |
| 285 | name_4 | 0.1 | 2021-11-08 12:25:04 |
| 61 | name_5 | 0.1 | 2021-11-08 12:25:04 |
| 454 | name_6 | 0.1 | 2021-11-08 12:25:04 |
| 299 | name_7 | 0.1 | 2021-11-08 12:25:04 |
Affected rows: 0 Found rows: 395,332 Warnings: 0 Duration for 1 query: 0.734 sec. (+ 7.547 sec. network)
I keep only the last 2 weeks' data in my tables and clean up the previous data as I have a backup system.
However, loading the query result into a DataTable also takes ~30 sec., which is about 4 times slower than running the same query in MySQL.
Do you have any suggestions to improve this performance?
PS. I call this query from C# with the following method, via a stored procedure named RunQuery that takes the query string and executes it as-is.
public DataTable CallStoredProcedureRunQuery(string QueryString)
{
    DataTable dt = new DataTable();
    try
    {
        using (var conn = new MySqlConnection(_connectionString))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SP_RunQuery", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@query_string", MySqlDbType.VarChar).Value = QueryString;

                using (MySqlDataAdapter sda = new MySqlDataAdapter(cmd))
                {
                    sda.Fill(dt);
                }
            }
        }
    }
    catch (Exception ex)
    {
        IoTemplariLogger.tLogger.EXC("Call Stored Procedure for RunQuery failed.", ex);
    }
    return dt;
}
EDIT: My sensors push a single MQTT packet that contains ~50 different data points. There are twelve 5-second intervals in a minute, so I receive ~600 rows per minute per device.
Data insertion is done asynchronously in a stored procedure: I push the JSON content along with the device_id, and I iterate over the JSON to parse the values and insert them into the table.
PS. The following code is just for clarification. It works fine.
/*Dynamic SQL -- if the device is registered to the system but has no table yet, create it.*/
SET create_table_query = CONCAT('CREATE TABLE IF NOT EXISTS `mqttpacket_',device_serial_number,'`(`data_type_id` int(11) DEFAULT NULL, `data_value` int(11) DEFAULT NULL,`inserted_date` DATE DEFAULT NULL, `inserted_time` TIME DEFAULT NULL, FOREIGN KEY(data_type_id) REFERENCES datatypes(id), INDEX `index_mqttpacket`(`data_type_id`,`inserted_date`)) ENGINE = InnoDB;');
PREPARE stmt FROM create_table_query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
/*Loop over the incoming value array. It looks like: $.type_1,$.type_2,$.type_3, and is used to iterate over the JSON. We reach each value like $.type_1*/
WHILE (LOCATE(',', value_array) > 0)
DO
    SET arr_data_type_name = SUBSTRING_INDEX(value_array,',',1); /*pick the first item of the value array*/
    SET value_array = SUBSTRING(value_array, LOCATE(',',value_array) + 1); /*remove the picked item from the value_array*/
    SELECT JSON_EXTRACT(incoming_data, arr_data_type_name) INTO value_iteration; /*extract the value of the first item, e.g. $.type_1*/
    SET arr_data_type_name := SUBSTRING_INDEX(arr_data_type_name, ".", -1); /*remove the $ and the . to get the pure data type name*/

    /*Check whether the data type name exists in the table; if not, insert it and assign its id to lcl_data_type_id*/
    IF (SELECT COUNT(id) FROM datatypes WHERE datatypes.data_name = arr_data_type_name) > 0 THEN
        SELECT id INTO lcl_data_type_id FROM datatypes WHERE datatypes.data_name = arr_data_type_name LIMIT 1;
    ELSE
        SELECT devices.device_type_id INTO lcl_device_type FROM devices WHERE devices.id = lcl_device_id LIMIT 1;
        INSERT INTO datatypes (datatypes.data_name, datatypes.description, datatypes.device_type_id, datatypes.value_mult, datatypes.inserted_time) VALUES(arr_data_type_name, arr_data_type_name, lcl_device_type, 0.1, NOW());
        SELECT id INTO lcl_data_type_id FROM datatypes WHERE datatypes.data_name = arr_data_type_name LIMIT 1;
    END IF;

    /*Track which device has which datatypes, so that datatypes are not retrieved unnecessarily for the selected device*/
    IF (SELECT COUNT(device_id) FROM devicedatatypes WHERE devicedatatypes.device_id = lcl_device_id AND devicedatatypes.datatype_id = lcl_data_type_id) < 1 THEN
        INSERT INTO devicedatatypes (devicedatatypes.device_id, devicedatatypes.datatype_id) VALUES(lcl_device_id, lcl_data_type_id);
    END IF;

    SET lcl_insert_mqtt_query = CONCAT('INSERT INTO mqttpacket_',device_serial_number,'(data_type_id,data_value,inserted_date,inserted_time) VALUES(',lcl_data_type_id,',',value_iteration,',''',data_date,''',''',data_time,''');');
    PREPARE stmt FROM lcl_insert_mqtt_query;
    EXECUTE stmt;
    SET affected_data_row_count = affected_data_row_count + 1;
END WHILE;
Here and here is some additional information about the server and the database, added in response to the comments.
The server has an SSD. Nothing else of significance runs on it other than my dotnet application and the database.
It is usually better to have a DATETIME column instead of splitting it into two (DATE and TIME) columns. That might simplify the WHERE clause.
Having one table per device is usually a bad idea. Instead, add a column for the device_id.
Not having a PRIMARY KEY is a bad idea. Do you ever get two readings in the same second for a specific device? Probably not.
Rolling those together plus some other likely changes, start by changing the table to
CREATE TABLE IF NOT EXISTS `mqttpacket`(
`device_serial_number` SMALLINT UNSIGNED NOT NULL,
`data_type_id` TINYINT UNSIGNED NOT NULL,
`data_value` SMALLINT NOT NULL,
`inserted_at` DATETIME NOT NULL,
FOREIGN KEY(data_type_id) REFERENCES datatypes(id),
PRIMARY KEY(device_serial_number, `data_type_id`,`inserted_at`)
) ENGINE = INNODB;
That PK will make the query faster.
This may be what you are looking for after the change to DATETIME:
AND inserted_at >= '2021-11-08 12:25:00'
AND inserted_at < '2021-11-08 12:25:00' + INTERVAL 7 DAY
To keep 2 weeks' worth of data, DROP PARTITION is an efficient way to do the delete. I would use PARTITION BY RANGE(TO_DAYS(inserted_at)) and have 16 partitions, as discussed in http://mysql.rjweb.org/doc.php/partitionmaint
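For illustration, here is a minimal sketch of what that housekeeping could look like when run from the C# side, assuming partitioning has been set up per the linked article; the partition names, the daily rotation schedule, and the single `mqttpacket` table are assumptions for the example, not something taken from the question:
// Hypothetical daily job: drop the oldest partition and carve out tomorrow's.
// Assumes mqttpacket is PARTITION BY RANGE (TO_DAYS(inserted_at)) with one
// partition per day (p20211101, p20211102, ...) plus a trailing `future` partition.
using (var conn = new MySqlConnection(_connectionString))
{
    conn.Open();

    // Dropping a partition removes two-week-old data instantly, unlike a huge DELETE.
    using (var drop = new MySqlCommand(
        "ALTER TABLE mqttpacket DROP PARTITION p20211101;", conn))
    {
        drop.ExecuteNonQuery();
    }

    // Split the catch-all `future` partition so tomorrow's rows get their own slot.
    using (var add = new MySqlCommand(
        "ALTER TABLE mqttpacket REORGANIZE PARTITION future INTO (" +
        " PARTITION p20211117 VALUES LESS THAN (TO_DAYS('2021-11-18'))," +
        " PARTITION future VALUES LESS THAN MAXVALUE);", conn))
    {
        add.ExecuteNonQuery();
    }
}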
If you are inserting a thousand rows every 5 seconds: with table-per-device, you would need a thousand threads, each doing one insert, which would be a nightmare for the architecture. With a single table (as I suggest), and if you can gather the 1000 rows together in one process at the same time, you can do one multi-row INSERT every 5 seconds, as sketched below. I discuss other high-speed ingestion techniques elsewhere.
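As a rough sketch of that multi-row INSERT from C# (not the stored-procedure approach from the question; the batch tuple list, method name, and parameter names are invented for the example):
// Requires: using System; using System.Collections.Generic; using System.Text; using MySql.Data.MySqlClient;
// Collect ~5 seconds of parsed readings from all devices, then send them in ONE statement.
public void InsertBatch(List<(int deviceSerial, int dataTypeId, int value, DateTime at)> batch)
{
    if (batch.Count == 0) return;

    var sql = new StringBuilder(
        "INSERT INTO mqttpacket (device_serial_number, data_type_id, data_value, inserted_at) VALUES ");

    using (var conn = new MySqlConnection(_connectionString))
    using (var cmd = new MySqlCommand())
    {
        for (int i = 0; i < batch.Count; i++)
        {
            if (i > 0) sql.Append(",");
            sql.AppendFormat("(@d{0},@t{0},@v{0},@a{0})", i);
            cmd.Parameters.AddWithValue("@d" + i, batch[i].deviceSerial);
            cmd.Parameters.AddWithValue("@t" + i, batch[i].dataTypeId);
            cmd.Parameters.AddWithValue("@v" + i, batch[i].value);
            cmd.Parameters.AddWithValue("@a" + i, batch[i].at);
        }

        cmd.Connection = conn;
        cmd.CommandText = sql.ToString();
        conn.Open();
        cmd.ExecuteNonQuery();   // one round trip for the whole 5-second batch
    }
}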
Rate Per Second = RPS
Suggestions to consider for your instance [mysqld] section
innodb_io_capacity=500 # from 200 to use more of available SSD IOPS
innodb_log_file_size=256M # from 48M to reduce log rotation frequency
innodb_log_buffer_size=128M # from 16M to reduce log rotation avg 25 minutes
innodb_lru_scan_depth=100 # from 1024 to conserve 90% CPU cycles used for function
innodb_buffer_pool_size=10G # from 128M to reduce innodb_data_reads 85 RPS
innodb_change_buffer_max_size=50 # from 25 percent to expedite pages created 590 RPhr
Observation:
innodb_flush_method=O_DIRECT # from fsync for method typically used on LX systems
You should find these significantly improve task completion performance. View profile for free downloadable Utility Scripts to assist with performance tuning.
There are additional opportunities to tune Global Variables.
I'm trying to read data from SAP ECC using Microsoft .NET. For this, I am using the SAP Connector for Microsoft .NET 3.0. The following is the code to retrieve the data, and I am getting results. However, I found that the exchange rate value contains a * if it exceeds 7 characters.
ECCDestinationConfig cfg = new ECCDestinationConfig();
RfcDestinationManager.RegisterDestinationConfiguration(cfg);
RfcDestination dest = RfcDestinationManager.GetDestination("mySAPdestination");
RfcRepository repo = dest.Repository;
IRfcFunction testfn = repo.CreateFunction("RFC_READ_TABLE");
testfn.SetValue("QUERY_TABLE", "TCURR");
// fields will be separated by semicolon
testfn.SetValue("DELIMITER", ";");
// Parameter table FIELDS contains the columns you want to receive
// here we query 4 fields: FCURR, TCURR, UKURS and GDATU
IRfcTable fieldsTable = testfn.GetTable("FIELDS");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "FCURR");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "TCURR");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "UKURS");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "GDATU");
// the table OPTIONS contains the WHERE condition(s) of your query
// several conditions have to be concatenated in ABAP syntax, for instance with AND or OR
IRfcTable optsTable = testfn.GetTable("OPTIONS");
var dateVal = 99999999 - 20190701;
optsTable.Append();
optsTable.SetValue("TEXT", "gdatu = '" + dateVal + "' and KURST = 'EURX'");
testfn.Invoke(dest);
Values are as follows:
How to get the full value without any truncation?
You just ran into the worst limitation of RFC_READ_TABLE.
Its flaw is that it returns field values based on the internal length and truncates the rest, rather than using the output length. TCURR-UKURS is a packed BCD decimal field of length 9,5 (9 bytes = 17 digits, including 5 digits after the decimal point) with an output length of 12. Unfortunately, RFC_READ_TABLE outputs the result in 9 characters, so a value such as 105.48000-, which takes 10 characters, is too long; the ABAP default logic then sets the * overflow character on the leftmost character (*5.48000-).
Either you create another RFC-enabled function module at SAP/ABAP side, or you access directly the SAP database (classic RDBMS connected to SAP server).
Just an addition to Sandra's perfect explanation of this issue. Yes, the only solution here would be writing a custom module for fetching remote records.
If you don't want to rewrite it from scratch, the simplest solution would be to copy RFC_READ_TABLE into a Z module and change line 137
FIELDS_INT-LENGTH_DST = TABLE_STRUCTURE-LENG.
to
FIELDS_INT-LENGTH_DST = TABLE_STRUCTURE-OUTPUTLEN.
This solves the problem.
UPDATE: try the BAPI_EXCHANGERATE_GETDETAIL BAPI; it is RFC-enabled and reads rates correctly. The interface is quite self-explanatory; the only difference is that the date should be in native format, not inverted:
CALL FUNCTION 'BAPI_EXCHANGERATE_GETDETAIL'
EXPORTING
rate_type = 'EURO'
from_curr = 'USD'
to_currncy = 'EUR'
date = '20190101'
IMPORTING
exch_rate = rates
return = return.
Use BBP_RFC_READ_TABLE. It is still not the best, but it does one thing right that RFC_READ_TABLE did not: it allows one additional byte for the decimal sign.
No need to go through the whole ordeal if you are only looking to patch the decimal issue.
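If you go this route, the only change on the .NET side is the module name; a small sketch reusing the destination and repository objects from the question:
// BBP_RFC_READ_TABLE exposes the same QUERY_TABLE / FIELDS / OPTIONS / DATA interface,
// so the rest of the call stays exactly as before.
IRfcFunction readTable = repo.CreateFunction("BBP_RFC_READ_TABLE");
readTable.SetValue("QUERY_TABLE", "TCURR");
readTable.SetValue("DELIMITER", ";");
// ... fill FIELDS and OPTIONS exactly as in the question ...
readTable.Invoke(dest);
IRfcTable data = readTable.GetTable("DATA");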
This is the sample code used with the SAP Connector for .NET; may it be helpful for someone looking for the same thing. Thanks to all those who helped.
var RateForDate = 20190701;
ECCDestinationConfig cfg = new ECCDestinationConfig();
RfcDestinationManager.RegisterDestinationConfiguration(cfg);
RfcDestination dest = RfcDestinationManager.GetDestination("mySAPdestination");
RfcRepository repo = dest.Repository;
IRfcFunction sapFunction = repo.CreateFunction("RFC_READ_TABLE");
sapFunction.SetValue("QUERY_TABLE", "TCURR");
// fields will be separated by semicolon
sapFunction.SetValue("DELIMITER", ";");
// Parameter table FIELDS contains the columns you want to receive
// here we only query FCURR; the other fields are commented out
IRfcTable fieldsTable = sapFunction.GetTable("FIELDS");
fieldsTable.Append();
fieldsTable.SetValue("FIELDNAME", "FCURR");
//fieldsTable.Append();
//fieldsTable.SetValue("FIELDNAME", "TCURR");
//fieldsTable.Append();
//fieldsTable.SetValue("FIELDNAME", "UKURS");
// the table OPTIONS contains the WHERE condition(s) of your query
// here the conditions are on GDATU and KURST
// several conditions have to be concatenated in ABAP syntax, for instance with AND or OR
IRfcTable optsTable = sapFunction.GetTable("OPTIONS");
var dateVal = 99999999 - RateForDate;
optsTable.Append();
optsTable.SetValue("TEXT", "gdatu = '" + dateVal + "' and KURST = 'EURX'");
sapFunction.Invoke(dest);
var companyCodeList = sapFunction.GetTable("DATA");
DataTable Currencies = companyCodeList.ToDataTable("DATA");
//Add additional column for rates
Currencies.Columns.Add("Rate", typeof(double));
//------------------
sapFunction = repo.CreateFunction("BAPI_EXCHANGERATE_GETDETAIL");
//rate type of your system
sapFunction.SetValue("rate_type", "EURX");
sapFunction.SetValue("date", RateForDate.ToString());
//Main currency of your system
sapFunction.SetValue("to_currncy", "EUR");
foreach (DataRow item in Currencies.Rows)
{
    sapFunction.SetValue("from_curr", item[0].ToString());
    sapFunction.Invoke(dest);

    IRfcStructure impStruct = sapFunction.GetStructure("EXCH_RATE");
    item["Rate"] = impStruct.GetDouble("EXCH_RATE_V");
}
dtCompanies.DataContext = Currencies;
RfcDestinationManager.UnregisterDestinationConfiguration(cfg);
I have a DataTable with a few columns.
I am trying to add the column values using DataColumn.Expression.
The columns used for the addition are of type decimal, and the calculated column is decimal as well. But while processing the expression (like column1 + column2), it just appends (concatenates) the data.
SlNo Name F1 F2 F3
1 A 1 2 3
2 B 3 4 5
I am expecting an output similar to this.
SlNo Name F1 F2 F3 Total
1 A 1 2 3 6
2 B 3 4 5 12
What I tried:
dtTempData.Columns.Add("Total", typeof(Decimal));
dtTempData.Columns["Total"].Expression = "[F1]+[F2]+[F3]";
Now the output I am getting is as follows:
123
345
It's just appending the data. Thanks in advance for any help.
I don't know why this is happening.
dtTempData.Columns.Add("Total", typeof(Decimal));
dtTempData.Columns["Total"].DefaultValue = 0;
dtTempData.Columns["Total"].Expression = expression;
This is the way I created the columns, but when the expression is evaluated, it appends the data instead of summing it.
I am importing the data from another DataTable whose columns are of type string, so I converted the values in the following manner:
string expression = "Convert(F1, 'System.Decimal') + Convert(F2, 'System.Decimal') + Convert(F3, 'System.Decimal')";
Now this works and the Total column holds the value after the addition. Thanks all for your help.
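For completeness, here is a small self-contained sketch of the working version (the sample values are made up to mirror the question):
// Columns imported as strings, so the expression must Convert() them before adding.
var dtTempData = new DataTable();
dtTempData.Columns.Add("SlNo", typeof(string));
dtTempData.Columns.Add("Name", typeof(string));
dtTempData.Columns.Add("F1", typeof(string));
dtTempData.Columns.Add("F2", typeof(string));
dtTempData.Columns.Add("F3", typeof(string));
dtTempData.Rows.Add("1", "A", "1", "2", "3");
dtTempData.Rows.Add("2", "B", "3", "4", "5");

dtTempData.Columns.Add("Total", typeof(decimal));
dtTempData.Columns["Total"].Expression =
    "Convert(F1, 'System.Decimal') + Convert(F2, 'System.Decimal') + Convert(F3, 'System.Decimal')";

foreach (DataRow row in dtTempData.Rows)
    Console.WriteLine(row["Total"]);   // prints 6 and 12, not "123" / "345"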
This is a followup to my first question "Porting “SQL” export to T-SQL".
I am working with a 3rd-party program that I have no control over and cannot change. This program exports its internal database into a set of .sql files, each with a format of:
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField], [BinaryField])
VALUES
(1 , 'Some Text' , 0x123456),
(2 , 'B' , NULL),
--(SNIP, it does this for 1000 records)
(999, 'E' , null);
(1000 , 'F' , null);
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField] , BinaryField)
VALUES
(1001 , 'asdg', null),
(1002 , 'asdf' , 0xdeadbeef),
(1003 , 'dfghdfhg' , null),
(1004 , 'sfdhsdhdshd' , null),
--(SNIP 1000 more lines)
This pattern continues until the .sql file reaches a file size set during the export. The export files are grouped as EXPORT_PATH\%Table_Name%\Export#.sql, where the # is a counter starting at 1.
Currently I have about 1.3 GB of data, and I have it exporting in 1 MB chunks (1407 files across 26 tables; all but 5 tables have only one file, and the largest table has 207 files).
Right now I just have a simple C# program that reads each file into RAM and then calls ExecuteNonQuery. The issue is that I am averaging 60 sec/file, which means it will take about 23 hrs to process the entire export.
I assume that if I could somehow format the files to be loaded with a BULK INSERT instead of an INSERT INTO, it would go much faster. Is there any easy way to do this, or do I have to write some kind of find & replace and keep my fingers crossed that it does not fail on some corner case and blow up my data?
Any other suggestions on how to speed up the insert into would also be appreciated.
UPDATE:
I ended up going with the parse and do a SqlBulkCopy method. It went from 1 file/min. to 1 file/sec.
Well, here is my "solution" for helping convert the data into a DataTable or otherwise (run it in LINQPad):
var i = "(null, 1 , 'Some''\n Text' , 0x123.456)";
var pat = @",?\s*(?:(?<n>null)|(?<w>[\w.]+)|'(?<s>.*)'(?!'))";
Regex.Matches(i, pat,
RegexOptions.IgnoreCase | RegexOptions.Singleline).Dump();
The match should be run once per value group (e.g. (a,b,etc)). Parsing of the results (e.g. conversion) is left to the caller and I have not tested it [much]. I would recommend creating the correctly-typed DataTable first -- although it may be possible to pass everything "as a string" to the database? -- and then use the information in the columns to help with the extraction process (possibly using type converters). For the captures: n is null, w is word (e.g. number), s is string.
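For example, turning the matches into row values might look roughly like this (an untested sketch with the same caveats as above; the actual type conversion is left to the column definitions):
// Iterates the captures of one "(a, b, ...)" value group and builds the row values.
var values = new List<object>();
foreach (Match m in Regex.Matches(i, pat,
         RegexOptions.IgnoreCase | RegexOptions.Singleline))
{
    if (m.Groups["n"].Success)
        values.Add(DBNull.Value);            // NULL literal
    else if (m.Groups["s"].Success)
        values.Add(m.Groups["s"].Value);     // quoted string (embedded '' still needs un-escaping)
    else
        values.Add(m.Groups["w"].Value);     // number / hex literal -- convert per DataColumn.DataType
}
// dataTable.Rows.Add(values.ToArray());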
Happy coding.
Apparently your data is always wrapped in parentheses and starts with a left parenthesis. You might want to use this rule to split each of those lines (with RemoveEmptyEntries) and load them into a DataTable. Then you can use SqlBulkCopy to copy everything at once into the database.
This approach would not necessarily be fail-safe, but it would certainly be faster.
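A rough sketch of the SqlBulkCopy step, assuming the parsed rows already sit in a DataTable whose columns match the target table (the connection string and table name are placeholders):
// Once the parsed rows are in a DataTable, a single bulk copy replaces
// thousands of individual INSERT statements.
using (var con = new SqlConnection("CONNECTION"))
{
    con.Open();
    using (var bulk = new SqlBulkCopy(con))
    {
        bulk.DestinationTableName = "ExampleDB";   // target table from the export files
        bulk.BatchSize = 5000;                     // optional: send the rows in chunks
        bulk.WriteToServer(dataTable);             // dataTable = the parsed rows
    }
}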
Edit: Here's how you could get the schema for every table:
private static DataTable extractSchemaTable(IEnumerable<String> lines)
{
    DataTable schema = null;
    var insertLine = lines.SkipWhile(l => !l.StartsWith("INSERT INTO [")).Take(1).First();
    var startIndex = insertLine.IndexOf("INSERT INTO [") + "INSERT INTO [".Length;
    var endIndex = insertLine.IndexOf("]", startIndex);
    var tableName = insertLine.Substring(startIndex, endIndex - startIndex);

    using (var con = new SqlConnection("CONNECTION"))
    {
        using (var schemaCommand = new SqlCommand("SELECT * FROM " + tableName, con))
        {
            con.Open();
            using (var reader = schemaCommand.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                schema = reader.GetSchemaTable();
            }
        }
    }
    return schema;
}
Then you simply need to iterate over each line in the file, check whether it starts with ( and split that line with Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries). Then you can add the resulting array to the created schema table.
Something like this:
var allLines = System.IO.File.ReadAllLines(path);
DataTable result = extractSchemaTable(allLines);

for (int i = 0; i < allLines.Length; i++)
{
    String line = allLines[i];
    if (line.StartsWith("("))
    {
        String data = line.Substring(1, line.Length - (line.Length - line.LastIndexOf(")")) - 1);
        var fields = data.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        // you might need to parse it to the correct DataColumn.DataType
        result.Rows.Add(fields);
    }
}
The following VB line, where _DSversionInfo is a DataSet, returns no rows:
_DSversionInfo.Tables("VersionInfo").Select("FileID=88")
but inspection shows that the table contains rows with FileIDs of 92, 93, 94, 90, 88, 89, 215, 216. The table columns are all of type string.
Further investigation showed that using the IDs 88, 215 and 216 will only return rows if the number is quoted,
ie _DSversionInfo.Tables("VersionInfo").Select("FileID='88'")
All other rows work regardless of whether the number is quoted or not.
Does anyone have an explanation of why this happens for some numbers but not others? I understand that the numbers should be quoted; I just don't understand why some work and others don't.
I discovered this in some VB.NET code but (despite my initial finger pointing) don't think it is VB.NET specific.
According to the MSDN documentation on building expressions, strings should always be quoted. Failing to do so produces some bizarro unpredictable behavior... You should quote your number strings to get predictable and proper behavior like the documentation says.
I've encountered what you're describing in the past, and kinda tried to figure it out - here, pop open your favorite .NET editor and try the following:
Create a DataTable, and into a string column 'Stuff' of that DataTable, insert rows in the following order: "6", "74", "710". Then Select with the filter expression "Stuff = 710". You will get 1 row back. Now, change the first row into any number greater than 7 - suddenly, you get 0 rows back.
As long as the numbers are ordered in proper descending order using string ordering logic (i.e., 7 comes after 599) the unquoted query appears to work.
My guess is that this is a limitation of how DataSet filter expressions are parsed, and it wasn't meant to work this way...
The Code:
// Unquoted filter string bizzareness.
var table = new DataTable();
table.Columns.Add(new DataColumn("NumbersAsString", typeof(String)));
var row1 = table.NewRow(); row1["NumbersAsString"] = "9"; table.Rows.Add(row1); // Change to '66'
var row2 = table.NewRow(); row2["NumbersAsString"] = "74"; table.Rows.Add(row2);
var row4 = table.NewRow(); row4["NumbersAsString"] = "90"; table.Rows.Add(row4);
var row3 = table.NewRow(); row3["NumbersAsString"] = "710"; table.Rows.Add(row3);
var results = table.Select("NumbersAsString = 710"); // Returns 0 rows.
var results2 = table.Select("NumbersAsString = 74"); // Throws exception "Min (1) must be less than or equal to max (-1) in a Range object." at System.Data.Select.GetBinaryFilteredRecords()
Conclusion: Based on the exception text in that last case, there appears to be some weird casting going on inside filter expressions that is not guaranteed to be safe. Explicitly putting single quotes around the value you're querying for avoids this problem by letting .NET know that it is a literal.
DataTable builds an index on the columns to make Select() queries fast. That index is sorted by value, and then a binary search is used to select the range of records that matches the query expression.
So the records will be sorted like this: 215, 216, 88, 89, 90, 92, 93, 94. A binary search that treats them as integers (as our filter expression does) cannot locate certain records, because binary search is designed to work only on properly sorted collections.
The data is indexed as strings, but the binary search compares as numbers. See the explanation below.
string[] strArr = new string[] { "115", "118", "66", "77", "80", "81", "82" }; // how values end up ordered when sorted as strings
int[] intArr = new int[] { 215, 216, 88, 89, 90, 92, 93, 94 };                 // the FileIDs in string-sorted order, searched as integers below
int i88 = Array.BinarySearch(intArr, 88); //returns -ve index
int i89 = Array.BinarySearch(intArr, 89); //returns +ve index
This should be a bug in the framework.
This error usually occurs due to an invalid DataTable column type for the column you are searching on.
I got this error when I was using colConsultDate instead of Convert(colConsultDate, 'System.DateTime'),
because colConsultDate was a DataTable column of type string, which I had to convert into System.DateTime. Your search query should therefore look like:
string query = "Convert(colConsultDate, 'System.DateTime') >= #" + sdateDevFrom.ToString("MM/dd/yy") + "# AND Convert(colConsultDate, 'System.DateTime') <= #" + sdateDevTo.ToString("MM/dd/yy") + "#";
DataRow[] dr = yourDataTable.Select(query);
if (dr.Length > 0)
{
nextDataTabel = dr.CopyToDataTable();
}
@Val Akkapeddi, just want to add something to your answer.
Doing it as shown below is especially beneficial when you have to use comparison operators: because you put quotes around 74, it is otherwise treated as a string. See for yourself by actually writing the code with comparison operators.
(Decimal is just for reference; you can use your desired data type instead.)
var results2 = table.Select("Convert(NumbersAsString , 'System.Decimal') = 74.0")