Given an expression like so:
DataTable1.Columns.Add("value", typeof(double), "rate * loan_amt");
in a DataTable with 10,000 rows, where rate is the same for all rows and loan_amt varies.
When the rate changes, it changes for all rows.
Currently that means iterating through all the rows, like so:
foreach(DataRow dr in DataTable1.Rows) dr["rate"] = new_rate;
I'm wondering if there's a better way, using a ReferenceTable (with only one row) in the same DataSet and linking it somehow, like so:
DataTable1.Columns.Add("value", typeof(double), "RefTable.Row0.rate * loan_amt");
so changing the rate would be as simple as
RefTable.Rows[0]["rate"] = new_rate;
Or is there any other way?
That is a good idea, but you would have to rewrite any legacy code that accesses that data. It would certainly make updates to the rate more efficient, but you may run into backward-compatibility issues.
If there isn't much code accessing that table, it isn't such a big deal. But if this is a production system with multiple processes reading that data, you might end up with a runaway train of null-value exceptions when code tries to access the "rate" column of the original table, or with inconsistencies in "value" depending on which code read which table to retrieve the rate.
If this is not the case, then it's no big deal. Go for it.
Found the answer; adding it for others who might land here.
The key is to add a DataRelation between the two tables/columns, and the expression becomes:
Parent.rate * loan_amt
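A minimal sketch of the whole setup, for anyone who wants to see the pieces together (table and column names here are illustrative, based on the question):

```csharp
using System;
using System.Data;

class Program
{
    static void Main()
    {
        var ds = new DataSet();

        // One-row reference table holding the shared rate.
        var refTable = ds.Tables.Add("RefTable");
        refTable.Columns.Add("id", typeof(int));
        refTable.Columns.Add("rate", typeof(double));
        refTable.Rows.Add(1, 0.05);

        // Main table; every row points at the single reference row.
        var loans = ds.Tables.Add("Loans");
        loans.Columns.Add("ref_id", typeof(int));
        loans.Columns.Add("loan_amt", typeof(double));
        loans.Rows.Add(1, 1000.0);
        loans.Rows.Add(1, 2000.0);

        // The DataRelation is what makes "Parent" resolvable in the expression.
        ds.Relations.Add("RateRel", refTable.Columns["id"], loans.Columns["ref_id"]);
        loans.Columns.Add("value", typeof(double), "Parent.rate * loan_amt");

        Console.WriteLine(loans.Rows[0]["value"]); // 50

        // One write to the reference row updates the computed column for every row.
        refTable.Rows[0]["rate"] = 0.10;
        Console.WriteLine(loans.Rows[0]["value"]); // 100
    }
}
```

With more than one parent relation on the child table, the expression would need the relation name: `Parent(RateRel).rate`.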
Related
I have a dropdown list in my aspx page. Dropdown list's datasource is a datatable. Backend is MySQL and records get to the datatable by using a stored procedure.
I want to display records in the dropdown menu in ascending order.
I can achieve this in two ways.
1) dt is a DataTable, and I am using a DataView to sort the records:
dt = objTest_BLL.Get_Names();
dataView = dt.DefaultView;
dataView.Sort = "name ASC";
dt = dataView.ToTable();
ddown.DataSource = dt;
ddown.DataTextField = dt.Columns[1].ToString();
ddown.DataValueField = dt.Columns[0].ToString();
ddown.DataBind();
2) Or in the select query I can simply say:
SELECT
`id`,
`name`
FROM `test`.`type_names`
ORDER BY `name` ASC ;
If I use the 2nd method I can simply eliminate the DataView part. Assume this type_names table has 50 records, and my page is viewed by 100,000 users a minute. Which is the better method considering efficiency and memory handling: get the unsorted records into the DataTable and sort in the code-behind, or sort them inside the database?
Note: only real performance tests can tell you real numbers. Theoretical options are below (which is why I use the word "guess" a lot in this answer).
You have at least 3 options (instead of 2):
Sort in the database - If the column being sorted on is indexed, this may make the most sense, because the overhead of sorting on your database server may be negligible. SQL Server's own data caches may make this a super-fast operation. But at 100k queries per minute, measure whether SQL gives noticeably faster results without the sort.
Sort in the code-behind / middle layer - You likely won't have your own equivalent of an index; you'd be sorting a list of 50 records, 100k times per minute. That would be slower than SQL, I would guess.
The big benefit of this option applies only if the data is relatively static or very slow-changing, and the sorted values can be cached in memory for a few seconds, minutes, or hours.
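If the data does qualify for caching, a minimal code-behind sketch might look like this (the 5-minute lifetime is an assumption, and `fetch` stands in for the question's `objTest_BLL.Get_Names()` call):

```csharp
using System;
using System.Data;

public static class NameCache
{
    private static DataTable cachedNames;
    private static DateTime cacheExpiry = DateTime.MinValue;
    private static readonly object sync = new object();

    // Sort once, then serve the same sorted table until the cache expires.
    public static DataTable GetSortedNames(Func<DataTable> fetch)
    {
        lock (sync)
        {
            if (cachedNames == null || DateTime.UtcNow > cacheExpiry)
            {
                DataTable dt = fetch();              // e.g. objTest_BLL.Get_Names()
                DataView dv = dt.DefaultView;
                dv.Sort = "name ASC";
                cachedNames = dv.ToTable();
                cacheExpiry = DateTime.UtcNow.AddMinutes(5);
            }
            return cachedNames;
        }
    }
}
```

This way the 100k requests per minute hit the database (and pay for the sort) only once per cache window.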
The option not in your list - Send the data unsorted all the way to the client, and sort it on the client side using JavaScript. This solution may scale the best; sorting 50 records in the browser should have no noticeable impact on your UX.
The SQL purists will no doubt tell you that it's better to let SQL do the sorting rather than C#. That said, unless you are dealing with massive record sets or doing many queries per second, it's unlikely you'd notice any real difference.
For my own projects, these days I tend to do the sorting in C# unless I'm running some sort of aggregate in the statement. The reason is that it's quick, and if you are running any sort of stored proc or function on the SQL server, it means you don't need to find ways of passing ORDER BY clauses into the stored proc.
I have N rows (never fewer than 1000) in an Excel spreadsheet, and in this sheet our project has 150 columns like this:
Now, our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. I want to improve the performance of this, perhaps using divide and conquer or some other mechanism, but currently I am not really sure how to go about it. Here is what part of my code looks like:
The above code gets called row-wise like this:
Please note my question is looking for an algorithmic solution rather than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not shown, I have been using LINQ in some parts of my code.)
1. IIRC, dRow["Condition"] is much slower than dRow[index], as it has to do a column-name lookup every time. Find out which indexes the columns have before the call:
public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow, int conditionIndex, int valueIndex)
{
    foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
    {
        // Indexed access avoids the column-name lookup on every read.
        string Condition = (string)dRow[conditionIndex];
        string FormatValue = (string)dRow[valueIndex];
        GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow);
        dRow[conditionIndex] = Parse(Condition);
        dRow[valueIndex] = Parse(FormatValue);
    }
}
2. If you are updating an Excel document live, you should also lock sheet updates during the process, so the document isn't redrawn on every cell change.
3. Virtual methods also carry a small performance penalty.
The general answer to a problem like this is that you want to move as much of the heavier processing as possible out of the row loop, so it executes once instead of once per row.
It's difficult to provide more detail without knowing exactly how your validation/formatting system works, but I can offer some "pointers":
Is it possible to build some kind of cached data structure from this condition table of yours? That would eliminate all the heavy DataTable operations from your inner loop.
The most efficient solution to mass validation/formatting I can think of is to construct a C# script based upon your set of conditions, compile it into a delegate, and then just evaluate it for each row. I don't know if this is possible for your problem...
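As an illustration of the delegate idea: build one delegate per condition once, outside the loop, and keep only delegate invocations inside it. The range-check condition below is a made-up example, since the question doesn't show the actual condition format:

```csharp
using System;
using System.Collections.Generic;
using System.Data;

static class ConditionCompiler
{
    // Build one reusable delegate per condition row, once, outside the row loop.
    public static Func<DataRow, bool> RangeCheck(int columnIndex, double min, double max)
    {
        return row =>
        {
            double v = Convert.ToDouble(row[columnIndex]);
            return v >= min && v <= max;
        };
    }

    // Evaluating the pre-built delegates is all that remains inside the loop.
    public static bool Validate(DataRow row, IList<Func<DataRow, bool>> checks)
    {
        foreach (var check in checks)
            if (!check(row))
                return false;   // short-circuit on the first failing condition
        return true;
    }
}
```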
There are two proposed improvements to the algorithm:
a. You can use multithreading, if possible, to speed up the process by a constant factor (needs testing to get the actual value), evaluating the rows in parallel.
b. If it is possible to stop processing a row as soon as one column is invalid, then do so. Further, you can analyse a large sample of input data, arrange the columns in decreasing probability of being invalid, and then check the columns in this calculated order. You can also arrange the predicates within each validation condition in the same way.
Proposed algorithm which might improve performance:
for cond in conditions:
    probability(cond) = 0
for record in largeDataSet:
    for col in record:
        for cond in conditions:
            if invalid(cond, col):
                probability(cond)++
sort conditions by probability(cond), decreasing, into condorder
check conditions in the order given by condorder
This is a learning algorithm that calculates the order in which to evaluate the predicates for efficient short-circuit evaluation; note that it takes the same time as before for valid inputs. You can compute this order offline on a large dataset of sample inputs and just store it in an array for live usage.
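The learning step above might look like this in C# (generic over the predicate type; the names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PredicateOrdering
{
    // Count how often each predicate fails on a sample set, then order the
    // predicates most-frequently-failing first for the earliest rejection.
    public static List<Func<T, bool>> OrderByFailureRate<T>(
        IList<Func<T, bool>> predicates, IEnumerable<T> sample)
    {
        var failures = new int[predicates.Count];
        foreach (T item in sample)
            for (int i = 0; i < predicates.Count; i++)
                if (!predicates[i](item))
                    failures[i]++;

        return predicates
            .Select((p, i) => new { p, fails = failures[i] })
            .OrderByDescending(x => x.fails)
            .Select(x => x.p)
            .ToList();
    }
}
```

At validation time, running the returned list through a short-circuiting loop (or `All`) rejects invalid rows as early as the sample statistics allow.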
Edit: another improvement I missed is the use of a hash table for columns that have a small range of valid values; instead of evaluating the conditions on such a column, we just check whether the value is in the hash table. Similarly, if the range of invalid values is small, we check for those in a hash table. The hash table can be filled from a file before evaluation starts.
Operations like string Condition = dRow["Condition"] are rather heavy, so I would recommend moving the row enumeration (the for-loop) from the ValidateAndFormat method into the ValidateAndFormatOnCopyPaste method, immediately around the call to GetValidatedFormattedData.
Pre steps:
create a task class that accepts 1 dataRow and processes it
create a task queue
create a thread pool with Y workers
Algorithm
when you get your N rows, add all of them as tasks to your task queue
get the workers to start taking tasks from the task queue
as the task responses get returned, update your data table (probably initially a clone)
once all the tasks are done return your new data table
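A sketch of those steps using a BlockingCollection as the task queue and plain threads as the worker pool. One caveat the steps don't mention: DataTable is only documented thread-safe for reads, so this sketch guards writes with a crude lock; real code would want to marshal results back to a single writer thread:

```csharp
using System;
using System.Collections.Concurrent;
using System.Data;
using System.Threading;

static class RowWorkQueue
{
    public static DataTable Process(DataTable source, Action<DataRow> processRow, int workers)
    {
        DataTable result = source.Copy();             // work on a clone, as suggested
        var queue = new BlockingCollection<DataRow>();
        foreach (DataRow row in result.Rows)
            queue.Add(row);                           // N rows become N tasks
        queue.CompleteAdding();

        var threads = new Thread[workers];
        for (int i = 0; i < workers; i++)
        {
            threads[i] = new Thread(() =>
            {
                // Each worker pulls tasks until the queue is drained.
                foreach (DataRow row in queue.GetConsumingEnumerable())
                    lock (result)                     // crude guard; DataTable writes aren't thread-safe
                        processRow(row);
            });
            threads[i].Start();
        }
        foreach (Thread t in threads)
            t.Join();                                 // all tasks done
        return result;                                // the updated clone
    }
}
```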
Possible Improvements
as Vikram said, you could probably short-circuit your conditions: if at condition 10 you already know the row is an error, don't bother checking the remaining 140 conditions. That only works if it fits your requirements; if they require a checked result for all 150, you can't escape that one
change the task classes to take in a list of rows instead of one; this could improve things by reducing context switching between threads, if each task finishes quickly
Other ideas that I haven't really thought through
sort the data first; maybe there is a speed benefit from short-circuiting certain known conditions
checksum the whole row and store it in a db with its result, essentially caching the parameters/results, so that the next time something with the exact same values/fields runs you can pull it from the cache
the other thing a checksum of the whole row buys you is change detection: say you have some sort of key and you are looking for changed data; the checksum of everything but the key tells you whether some value changed, and whether it's even worth looking at the conditions of all the other columns
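The checksum-cache idea could be sketched like this, hashing the row's values with SHA-256 (the "|" separator between fields is an assumption about the data; an in-memory dictionary stands in for the db mentioned above):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Security.Cryptography;
using System.Text;

static class RowResultCache
{
    private static readonly Dictionary<string, bool> cache = new Dictionary<string, bool>();

    // Hash all field values into one key; identical rows hit the cache.
    private static string Checksum(DataRow row)
    {
        string joined = string.Join("|", row.ItemArray);
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(joined)));
    }

    public static bool ValidateWithCache(DataRow row, Func<DataRow, bool> validate)
    {
        string key = Checksum(row);
        bool result;
        if (!cache.TryGetValue(key, out result))
        {
            result = validate(row);   // only pay for validation on a cache miss
            cache[key] = result;
        }
        return result;
    }
}
```

For the change-detection variant, you would simply exclude the key column(s) from `ItemArray` before hashing.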
You can use DataTable.Select. Try this (ToList requires System.Linq):
datatable.Select(string.Format("[Col1] = '{0}'", id)).ToList().ForEach(r => r["Col1"] = Data);
I know this is kind of brute force, but you can combine it with what others have suggested:
Parallel.ForEach(dtValidateAndFormatConditions.Rows.Cast<DataRow>(), dRow =>
{
    string Condition = (string)dRow[conditionIndex];
    string FormatValue = (string)dRow[valueIndex];
    GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow);
    Condition = Parse(Condition);
    FormatValue = Parse(FormatValue);
    lock (dRow)
    {
        dRow[conditionIndex] = Condition;
        dRow[valueIndex] = FormatValue;
    }
});
I am working with a webservice that accepts an ADO.NET DataSet. The webservice will reject a submission if over 1000 rows are changed across all of the ten or so tables in the dataset.
I need to take my dataset and break it apart into chunks of less than 1000 changed rows. I can use DataSet.GetChanges() to produce a reduced dataset, but that still may exceed the changed row limit. Often a single table will have more than 1000 changes.
Right now, I think I need to: create an empty copy of the dataset,
iterate over the DataTableCollection and .Add rows individually to the new tables until I reach the limit, then start a new dataset and repeat until I've gone through everything.
Am I missing a simpler approach to this?
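A sketch of that chunking approach, for the record (ImportRow copies a row along with its RowState, which is presumably what the webservice inspects):

```csharp
using System.Collections.Generic;
using System.Data;

static class ChangeSplitter
{
    // Split the changed rows of a DataSet into DataSets of at most maxRows each.
    public static IEnumerable<DataSet> SplitChanges(DataSet source, int maxRows)
    {
        DataSet changes = source.GetChanges();
        if (changes == null)
            yield break;                              // nothing to submit

        DataSet batch = changes.Clone();              // same schema, no rows
        int count = 0;
        foreach (DataTable table in changes.Tables)
        {
            foreach (DataRow row in table.Rows)
            {
                // ImportRow preserves the row's RowState (Added/Modified/Deleted).
                batch.Tables[table.TableName].ImportRow(row);
                if (++count == maxRows)
                {
                    yield return batch;
                    batch = changes.Clone();
                    count = 0;
                }
            }
        }
        if (count > 0)
            yield return batch;                       // final partial batch
    }
}
```

Note this naive split ignores cross-table dependencies, which is exactly the hazard raised in the answer to this question.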
This is asking for trouble. Often changes to one table depend on changes to another. You don't want to split those up, or bad things (which may be difficult to debug) will occur, unless you are very, very careful about this. Most likely, the "right" thing to do here is to submit changes to the webservice more frequently instead of batching them up so much.
My goal is to maximise performance. The basics of the scenario are:
I read some data from SQL Server 2005 into a DataTable (1000 records x 10 columns)
I do some processing in .NET of the data, all records have at least 1 field changed in the DataTable, but potentially all 10 fields could be changed
I also add some new records into the DataTable
I do a SqlDataAdapter.Update(myDataTable.GetChanges()) to persist the updates (and inserts) back to the db, using an InsertCommand and UpdateCommand I defined at the start
Assume table being updated contains 10s of millions of records
This is fine. However, if a row has changed in the DataTable then ALL columns for that record are updated in the database even if only 1 out of 9 columns has actually changed value. This means unnecessary work, particularly if indexes are involved. I don't believe SQL Server optimises this scenario?
I think, if I was able to only update the columns that had actually changed for any given record, that I should see a noticeable performance improvement (esp. as cumulatively I will be dealing with millions of rows).
I found this article: http://netcode.ru/dotnet/?lang=&katID=30&skatID=253&artID=6635
But I don't like the idea of doing multiple UPDATEs within the sproc.
Short of creating individual UPDATE statements for each changed DataRow and then firing them off somehow in a batch, I'm looking for other people's experiences/suggestions.
(Please assume I can't use triggers)
Thanks in advance
Edit: Is there any way to get SqlDataAdapter to send UPDATE statements specific to each changed DataRow (updating only the columns that actually changed in that row), rather than giving it a general .UpdateCommand that updates all columns?
Isn't it possible to implement your own IDataAdapter, in which you implement this functionality?
Of course, the DataAdapter only fires the appropriate SqlCommand, which is determined by the RowState of each DataRow.
So, this means that you would have to generate the SQL command that has to be executed for each situation ...
But I wonder if it is worth the effort. How much performance will you gain?
I think that - if it is really necessary - I would disable all my indexes and constraints, do the update using the regular SqlDataAdapter, and afterwards enable the indexes and constraints.
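For what it's worth, generating a per-row command that touches only the changed columns could be sketched like this (the table and key-column names are placeholders; the trick is that DataRowVersion keeps both the original and current values):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class SparseUpdateBuilder
{
    // Build an UPDATE listing only the columns whose value actually changed.
    public static SqlCommand Build(DataRow row, string tableName, string keyColumn)
    {
        var sets = new List<string>();
        var cmd = new SqlCommand();
        foreach (DataColumn col in row.Table.Columns)
        {
            if (col.ColumnName == keyColumn)
                continue;
            object original = row[col, DataRowVersion.Original];
            object current = row[col, DataRowVersion.Current];
            if (!Equals(original, current))
            {
                sets.Add("[" + col.ColumnName + "] = @" + col.ColumnName);
                cmd.Parameters.AddWithValue("@" + col.ColumnName, current);
            }
        }
        if (sets.Count == 0)
            return null;   // nothing changed in this row

        cmd.CommandText = "UPDATE [" + tableName + "] SET " + string.Join(", ", sets)
                        + " WHERE [" + keyColumn + "] = @key";
        cmd.Parameters.AddWithValue("@key", row[keyColumn, DataRowVersion.Original]);
        return cmd;
    }
}
```

Keep in mind the plan-cache cost of many distinct statement shapes, discussed later in this thread.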
Something you might try is to create an XML document of your changed dataset, pass it as a parameter to a sproc, and then do a single update by using SQL's nodes() function to translate the XML into tabular form.
You should never try to update a clustered index; if you do, it's time to rethink your db schema.
I would VERY much suggest that you do this with a stored procedure.
Let's say you have 10 million records to update, and each record has 100 bytes (for 10 columns this may be on the small side, but let's be conservative). That amounts to roughly 1 GB of data that must be transferred from the database (network traffic), stored in memory, and then returned to the database in the form of UPDATEs or INSERTs, which are much more verbose to transfer.
I expect that SP would perform much better.
Then again, you could divide the work into smaller SPs (called from a main SP) that update just the necessary fields, and that way gain additional performance.
Disabling indexes/constraints is also an option.
EDIT:
Another thing you must consider is the potential number of different update statements. With 10 fields per row, any field could stay the same or change, so if you construct your UPDATE statements to reflect this you could potentially get 2^10 = 1024 different UPDATE statements, and each of those must be parsed by SQL Server, have its execution plan calculated, and have the parsed statement stored in cache. There is a price for this.
I have a problem with my ASP.NET program. I am doing a DataTable.Compute on a very large table with a LIKE condition in it. The result takes about 4 minutes to show, or the request times out. If I do the same query with = and a fixed text, it takes nearly 1 minute to show, which for my use is acceptable.
Here is the line that is so slow:
float test = (int)Datatbl.Tables["test"].Compute("COUNT(pn)", "pn like '9800%' and mois=" + i + " and annee=" + j);
I have been searching for a solution for 2 days.
Please help me.
Are you retrieving the data in your Datatable from a database? Do you have access to the database?
If so, one option is to move this lookup and aggregation into the database instead of doing it in your C# code. Once it is in the database, you could, if required, add indexes on the mois and annee columns, which may speed up the lookup considerably. If '9800' is a hard-coded value, you could even add a denormalisation: a boolean column indicating whether pn begins with '9800', with an index on it. This may make the lookup very fast indeed.
There are lots of options available.
I found it.
I used a DataView and sent the result to a DataTable. This sped up the process 10 times.
Here is an example:
DataView dv = new DataView(Datatbl.Tables["test"], filter, "pn", DataViewRowState.CurrentRows);
DataTable test = dv.ToTable();
and then you can use the "test" DataTable.