Update Top in Visual FoxPro - c#

I'm accessing a Visual FoxPro table (DBF file) via the VFP OLE DB provider in a C# application.
Is there an equivalent of UPDATE TOP (from MS SQL) in VFP?
This is my current query:
UPDATE HM_LIST
SET
    HM_DATE=DATE(2014,5,22),
    HM_STATION="CM_PC",
    HM_TIME="17:06",
    HM_USER="TEST"
WHERE
    HM_STATION=''
    AND HM_TIME=''
    AND HM_USER=''
The problem is that all rows match my parameters, but I want to update only one of those matching rows.
There is no primary key, and I can't use INSERT.

Use a WHERE clause as follows:
WHERE RECNO()=1
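For example, applied to the query from the question (a sketch, assuming an open OleDbConnection named conn; RECNO() is the physical record number, so this touches at most record 1):
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText =
        "UPDATE HM_LIST SET HM_USER='TEST' " +
        "WHERE RECNO()=1 AND HM_STATION='' AND HM_TIME='' AND HM_USER=''";
    int affected = cmd.ExecuteNonQuery(); // 1 if record 1 matched, otherwise 0
}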

With the hint from Oleg I found a workaround for the missing primary key, but it needs two queries.
First, select the record number (RECNO) of the matching rows:
SELECT RECNO() FROM table_name WHERE foo=''
Now read the first row of the result (this is the "id" of the row), save it in a variable (int row_id), and put only the following after the WHERE keyword of the UPDATE query: "RECNO() =" + row_id
Example:
var MyOleDBCommand = MyOleDBConnection.CreateCommand();
MyOleDBCommand.CommandText = "SELECT RECNO() FROM table_name WHERE foo=''";
int row_id = -1;

/** Search for some matching rows **/
using (var reader = MyOleDBCommand.ExecuteReader())
{
    // Check if something was found
    if (reader.HasRows)
    {
        reader.Read(); // Read only the first row (or use a loop if you need more than one)
        row_id = (int)reader.GetDecimal(0);
    }
}

/** If a matching row was found **/
if (row_id > -1)
{
    MyOleDBCommand.CommandText = "UPDATE table_name SET foo='bar' WHERE RECNO()=" + row_id;
    if (MyOleDBCommand.ExecuteNonQuery() > 0)
    {
        // Successfully updated
    }
}
Remark: RECNO() comes back as a Decimal, so you have to use GetDecimal(0) (see the sample code above).

If you run the above query in FoxPro, every row will be updated, as you state, because of the WHERE condition you are using.
When you specify WHERE column1 = '', every row will be affected. Try specifying a value in the condition, such as WHERE column1 = 'somevalue', or use WHERE EMPTY(column1).

Generally speaking, this is what primary keys are for. Whether it's indexed or not, there should be a single field that uniquely identifies each of your records and allows you to target an update at that particular record rather than the entire set.
UPDATE HM_LIST
SET
    HM_DATE=DATE(2014,5,22),
    HM_STATION="CM_PC",
    HM_TIME="17:06",
    HM_USER="TEST"
WHERE
    HM_ID = 1
If the table doesn't have a primary key, it's strike #2 for "this was horribly designed and should be abandoned", right after "you can't insert new rows". Unless this is a theoretical exercise, there are far better tools to accomplish whatever it is you're after.
That said, for this particular example, this is one of those rare instances where mucking about with SQL actually makes your life harder.
FoxPro, at its heart, is not a set-based language like SQL. Rather, it's a specialized language focusing on data operations, using the DBF format. For operations where you don't want to deal with entire sets, and for some reason are still programming in FoxPro, you can most easily accomplish this by embracing the xBase roots and running with it.
SELECT HM_LIST
LOCATE FOR HM_STATION='' AND HM_TIME='' AND HM_USER=''
IF FOUND()
    REPLACE IN HM_LIST ;
        HM_DATE WITH DATE(2014,5,22) ;
        , HM_STATION WITH "CM_PC" ;
        , HM_TIME WITH "17:06" ;
        , HM_USER WITH "TEST"
ENDIF
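Since the question runs everything through the VFP OLE DB provider from C#, one possible way to execute such an xBase snippet is EXECSCRIPT. This is only a sketch under the assumption that the provider accepts the script as a parameter; it is not taken from the answer above:
var script =
    "SELECT HM_LIST\r\n" +                // VFP expects carriage-return line breaks
    "LOCATE FOR HM_STATION='' AND HM_TIME='' AND HM_USER=''\r\n" +
    "IF FOUND()\r\n" +
    "  REPLACE HM_DATE WITH DATE(2014,5,22), HM_USER WITH 'TEST' IN HM_LIST\r\n" +
    "ENDIF";

using (var cmd = MyOleDBConnection.CreateCommand())
{
    cmd.CommandText = "EXECSCRIPT(?)";            // run the xBase script server-side
    cmd.Parameters.AddWithValue("script", script); // OleDb parameters are positional
    cmd.ExecuteNonQuery();
}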

Related

Npgsql C# multi row/value insert

I've been looking at the Postgres multi row/value insert, which looks something like this in pure SQL:
insert into table (col1, col2, col3) values (1,2,3), (4,5,6)....
The reason I want to use this is that I have a lot of data to insert that is arriving via a queue; I'm batching it into 500/1000-record inserts at a time to improve performance.
However, I have been unable to find an example of doing this from within C#; everything I can find adds only a single record's parameters at a time and then executes, which is too slow.
I currently have this working using Dapper, but I need to expand the SQL to an upsert (insert on conflict update), which everything I have found indicates Dapper can't handle. I have found evidence that Postgres can handle upserts and multi-valued inserts in a single action.
Tom
I may not have understood your question completely, but for bulk inserts in PostgreSQL this is a good answer.
It gives an example of inserting multiple records from a list (RecordList) into a table (user_data.part_list):
using (var writer = conn.BeginBinaryImport(
    "COPY user_data.part_list FROM STDIN (FORMAT BINARY)"))
{
    foreach (var record in RecordList)
    {
        writer.StartRow();
        writer.Write(record.UserId);
        writer.Write(record.Age, NpgsqlTypes.NpgsqlDbType.Integer);
        writer.Write(record.HireDate, NpgsqlTypes.NpgsqlDbType.Date);
    }
    writer.Complete();
}
COPY is the fastest way, but it does not work if you want to do upserts with an ON CONFLICT ... clause.
If it's necessary to use INSERT, ingesting n rows (with possibly varying n per invocation) can be done elegantly using UNNEST, like:
INSERT INTO table (col1, col2, ..., coln) SELECT UNNEST(@p1), UNNEST(@p2), ... UNNEST(@pn);
The parameters p then need to be arrays of the matching type. Here's an example for an array of ints:
new NpgsqlParameter()
{
    ParameterName = "p1",
    Value = new int[] { 1, 2, 3 },
    NpgsqlDbType = NpgsqlDbType.Array | NpgsqlDbType.Integer
}
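Putting the two pieces together for the upsert the question asks about, a sketch might look like this (the table and column names are assumptions, not from the answer):
using (var cmd = new NpgsqlCommand(
    "INSERT INTO mytable (id, name) " +
    "SELECT UNNEST(@ids), UNNEST(@names) " +
    "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name", conn))
{
    // one array parameter per column, all of equal length
    cmd.Parameters.Add(new NpgsqlParameter("ids", NpgsqlDbType.Array | NpgsqlDbType.Integer)
        { Value = new int[] { 1, 2, 3 } });
    cmd.Parameters.Add(new NpgsqlParameter("names", NpgsqlDbType.Array | NpgsqlDbType.Text)
        { Value = new string[] { "a", "b", "c" } });
    cmd.ExecuteNonQuery();
}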
If you want to insert many records efficiently, you probably want to take a look at Npgsql's bulk copy API, which doesn't use SQL and is the most efficient option available.
Otherwise, there's nothing special about inserting two rows rather than one:
insert into table (col1, col2, col3) values (@p1_1, @p1_2, @p1_3), (@p2_1, @p2_2, @p2_3)....
Simply add the parameters with the correct names and execute just as you would any other SQL.
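As a sketch of what that looks like when the statement is built dynamically (the batch variable and its column properties are assumptions):
var cmd = new NpgsqlCommand { Connection = conn };
var rows = new List<string>();
for (int i = 0; i < batch.Count; i++)   // batch: the queued records (assumed)
{
    rows.Add(string.Format("(@p{0}_1, @p{0}_2, @p{0}_3)", i));
    cmd.Parameters.AddWithValue("p" + i + "_1", batch[i].Col1);
    cmd.Parameters.AddWithValue("p" + i + "_2", batch[i].Col2);
    cmd.Parameters.AddWithValue("p" + i + "_3", batch[i].Col3);
}
cmd.CommandText = "INSERT INTO table1 (col1, col2, col3) VALUES " + string.Join(", ", rows);
cmd.ExecuteNonQuery();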

SQL Server CE & C# After deleting records cannot read data in a table

I have created an application that records readings every minute and stores them in a SQL Server CE database file. However, as the database gets very large, it starts to get really slow.
I decided to delete the oldest records once the database reaches a certain size, as they are of no use to me. I managed to do this using the following command:
command.Append("DELETE FROM MyTable WHERE ID IN (SELECT TOP(" + difference.ToString() + ") ID FROM MyTable)");
where difference is the value I calculated for how much I want to delete.
This worked successfully, as I could open the file using CompactView, and I could also type the commands into CompactView to get the same results.
Now my problem started when I tried to read the data and update it on a graph. My code in another form does the following:
private void updateGraphTimer_Tick(object sender, EventArgs e)
{
    using (SqlCeConnection connection = new SqlCeConnection(connectionString))
    {
        connection.Open();
        using (SqlCeDataAdapter adapter = new SqlCeDataAdapter("SELECT * FROM MyTable", connection))
        {
            // some code that is not relevant between these two statements
            using (DataTable table = new DataTable())
            {
                StringBuilder command = new StringBuilder();
                command.Clear();
                command.Append("SELECT TOP(1) ID FROM MyTable ORDER BY ID DESC");
                using (SqlCeCommand com = new SqlCeCommand(command.ToString(), connection))
                {
                    int value = (int)com.ExecuteScalar();
                    graphPage.latestID = value;
                    if (value > graphPage.startID)
                    {
                        DataColumn xDateColumn;
                        xDateColumn = new DataColumn("XDate");
                        xDateColumn.DataType = typeof(double);
                        table.Columns.Add(xDateColumn);
                        adapter.Fill(graphPage.startID, value, table);
The problem I have is that the table is empty, even though (int)com.ExecuteScalar() returns a value! If I do not perform the delete, it all works fine!
I cannot figure out what is happening! The only thing I can think of is that it is something to do with reading and writing the SQL file.
Much appreciated!
Better not to use TOP here. As ID is incremented automatically, get the highest ID and then delete only records before a cutoff ID, since the latest ID is used in the graphing part. For example, delete everything below the current max ID, or below max ID - 10 or so:
DELETE FROM MyTable WHERE ID < (SELECT MAX(ID) FROM MyTable);
If you use TOP as you do in your delete, all values, including the TOP (last) ID, can be deleted, and in the graphing part you are accessing that last ID too.
So you have two problems:
your table is empty after the delete
your query returns a result in spite of the fact that the table is empty
Regarding the first one, the problem is in your delete statement: for some reason your difference value is higher than or equal to the number of rows in your table, so the delete empties the whole table.
I would look at the way you are computing it.
In any case, if you have an insert_date column on your table, I would change the delete statement like this, to delete rows older than 10 days from today:
DELETE FROM MyTable WHERE ID IN (SELECT ID FROM MyTable WHERE insert_date < DATEADD(day, -10, GETDATE()))
The second problem is a concurrency issue related to the fact that two threads are working: it may happen that you read data (and data is available) just before the data is deleted, so you are not really getting a query result from an empty table, even if that is the impression.
I think I figured out the problem! Forgive me if you tried to explain this to me and I didn't get it, but here goes! The problem is with populating the table: while the value returned from ExecuteScalar is correct, that ID is greater than the number of rows in the table, because adapter.Fill counts records from 0 rather than from the deleted offset!
So, for example, if I deleted 1000 records from 5000, the oldest remaining record would be 1000; when I fill the table, 5000 is the latest ID, but I am asking for the 5000th record position in the table rather than the 4000th, which is the one that actually contains ID 5000!
I will try to rewrite the code to take this into account. This was never a problem before, because when I wasn't deleting they matched. D'OH!
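A sketch of such a fix (the MIN(ID) query is my assumption about how to find the offset): Fill takes zero-based record positions, so the IDs have to be translated by the smallest remaining ID before filling.
using (var minCmd = new SqlCeCommand("SELECT MIN(ID) FROM MyTable", connection))
{
    int minId = (int)minCmd.ExecuteScalar();
    // translate IDs into zero-based record positions
    int startRecord = graphPage.startID - minId;
    int recordCount = value - graphPage.startID + 1;
    adapter.Fill(startRecord, recordCount, table);
}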

How to access specific row in datareader?

I got an OutOfMemoryException while downloading bulk reports, so I changed the code to use a DataReader instead of a DataSet.
How can I achieve the code below with a DataReader? I cannot use any DataTables or DataSets in my code.
if (dataSet.Tables[0].Rows[i + 1]["Sr No"].ToString() == dataSet.Tables[0].Rows[i]["Sr No"].ToString())
I don't think you can access a row that has not been read yet. The way I understand it, the DataReader returns rows one by one as it reads them from the database.
You can try the following:
This will loop through each row that the DataReader returns from the result set; here you can check certain values/conditions, as follows:
while (dataReader.Read())
{
    var value = dataReader["Sr No"].ToString();
    // Your custom code here
}
Alternatively, you can also specify the column index instead of the name, if you wish, as follows:
while (dataReader.Read())
{
    var value = dataReader.GetString(0); // 0 stands for "the 0th column", i.e. the first column of the result
    // Your custom code here
}
UPDATE
Why don't you place all the values read from the DataReader into a list? You can then use this list afterwards for comparisons between values if you need it.
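For the original Rows[i + 1] vs Rows[i] comparison specifically, you don't even need the full list: it is enough to remember the previous value while streaming (a minimal sketch, assuming the column is named "Sr No"):
string previous = null;
while (dataReader.Read())
{
    string current = dataReader["Sr No"].ToString();
    if (previous != null && current == previous)
    {
        // same "Sr No" as the preceding row: handle it here
    }
    previous = current;
}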
Don't filter rows at the client level.
It's better to search for the right rows on the server, before they even reach the client, instead of fetching all the rows into the client and then filtering there. Simply incorporate your search criteria into the SQL query itself, and then fetch the rows that the server has already found for you.
Doing that:
Allows the server to use indexes and other database structures to identify the "interesting" rows, potentially much faster than linearly searching through all rows.
Your network connection between client and server will not be saturated by a gazillion rows, most of which would be discarded at the end anyway.
May allow your client to deal with one row at a time in a simple way (e.g. using DbDataReader), as opposed to additional processing and/or storing multiple rows in memory (or even all rows, as you did).
In your particular case, it looks like you'll need a self-join, or perhaps an analytic (aka "window") function. Without knowing more about your database structure or what you are actually trying to accomplish, I can't know exactly what your query will look like, but it will probably be something along these lines:
-- Sample data...
CREATE TABLE T (
    ID int PRIMARY KEY,
    SR_NO int
);

INSERT INTO T VALUES
    (1, 100),
    (2, 101),
    (3, 101),
    (4, 100);

-- The actual query...
SELECT *
FROM (
    SELECT
        *,
        LEAD(SR_NO) OVER (ORDER BY ID) NEXT_SR_NO
    FROM T T1
) Q
WHERE SR_NO = NEXT_SR_NO;
Result:
ID SR_NO NEXT_SR_NO
2 101 101
You can do this by using a do-while loop: the body runs before the condition calls Read(), so you can process the row you have already read and then advance to the next one. Try this code:
do
{
    // your code
}
while (myreader.Read());

C# list items to database SQL

Hello, I have a simple question regarding inserting data into a MS SQL Server 2012 table. The table I have is called COMPLETED and has 3 fields:
student_ID (int, NOT allowed nulls)
completed (bool, NOT allowed nulls)
random_code (string, allowed nulls)
In C# I have a list filled with unique random codes. I want all the codes inserted into the database: if I have 20 records, I want 20 unique codes inserted into the random_code field, so the first record gets the first code, the second record gets the second code, and so on. I think the best way to do this is a foreach that, for each code in the list of codes, inserts that code into the random_code field in my database. The problem is I don't know how to do this. I have the following code, which gives me an error at VALUE:
Incorrect syntax near 'VALUE'.
foreach (string unRaCo in codes)
{
    //insert database
    SqlCommand toDB = new SqlCommand("INSERT INTO COMPLETED (random_code) VALUE ('" + unRaCo + "')", conn);
    SqlDataReader toDBR;
    toDBR = toDB.ExecuteReader();
}
Could anyone give me a direction here? Thanks in advance.
EDIT:
Okay, I totally changed my query, as I figured out it did not yet do what I wanted it to do. I now want to update my records instead of inserting records. I did that with the following code:
foreach (string unRaCo in codes)
{
    //update database
    SqlCommand naarDB = new SqlCommand("UPDATE VOLTOOID SET random_code = '" + unRaCo + "' ", connectie);
    SqlDataReader naarDBR;
    naarDBR = naarDB.ExecuteReader();
    naarDBR.Close();
}
The problem this time is that the update query updates ALL records with the first code: the first record has the code 12345, for example, but all other records also have that code. I want to put 12345 into record 1 and 54321, for example, into record 2. How do I do that?
The correct keyword is VALUES, not VALUE, even if you only provide one column.
About your edit: first of all, beware of SQL injection. You'd better use the SqlParameter class; check Configuring Parameters and Parameter Data Types for further info.
If you want to update a specific id, then use a WHERE clause, like (in plain SQL):
UPDATE VOLTOOID SET random_code = @NewValue WHERE random_code = @OldValue
Now, if you just want to add the random number to a specific row, then you have to use some more advanced SQL functions. Again in plain SQL you would have:
;WITH MyCTE AS
(
    SELECT random_code,
           ROW_NUMBER() OVER (ORDER BY random_code) AS ROWSEQ -- gives a unique row number to each row of your table
    FROM VOLTOOID
)
UPDATE MyCTE
SET random_code = @NewValue
WHERE ROWSEQ = @YourRandomRow
As the above queries are written for SQL script execution, you will need to define the variables used.
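Tying this back to your C# loop, a sketch might look like the following (ordering by student_ID is my assumption; also note that UPDATE statements should be executed with ExecuteNonQuery rather than ExecuteReader):
string sql =
    ";WITH MyCTE AS " +
    "(SELECT random_code, ROW_NUMBER() OVER (ORDER BY student_ID) AS ROWSEQ FROM VOLTOOID) " +
    "UPDATE MyCTE SET random_code = @NewValue WHERE ROWSEQ = @RowSeq";

for (int i = 0; i < codes.Count; i++)
{
    using (SqlCommand cmd = new SqlCommand(sql, connectie))
    {
        cmd.Parameters.AddWithValue("@NewValue", codes[i]);
        cmd.Parameters.AddWithValue("@RowSeq", i + 1); // ROWSEQ is 1-based
        cmd.ExecuteNonQuery();
    }
}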
Your syntax is wrong: you are using VALUE where you should use VALUES. If you have SSMS, you will be able to figure out this kind of error easily.
Usually I create the query in the SQL Server Management Studio query editor, then use it in C#. Most of the time I use SQL Server stored procedures where possible, because I think executing a text query costs some extra resources compared to executing a procedure.

how to improve SQL query performance in my case

I have a table whose schema is very simple: an ID column as unique primary key (uniqueidentifier type) and some other nvarchar columns. My current goal is, for 5000 inputs, to determine which ones are already contained in the table and which are not. The inputs are strings, and I have a C# function which converts a string into a uniqueidentifier (GUID). My logic is: if an ID already exists, then I treat the string as already contained in the table.
My question is: if I need to find out which of the 5000 input strings are already contained in the DB and which are not, what is the most efficient way?
BTW: my current implementation converts the string to a GUID using C# code, then invokes a stored procedure which checks whether the ID exists in the database and returns the result to the C# code.
My working environment: VSTS 2008 + SQL Server 2008 + C# 3.5.
My first instinct would be to pump your 5000 inputs into a single-column temporary table X, possibly index it, and then use:
SELECT X.thecol
FROM X
JOIN ExistingTable E ON E.thecol = X.thecol
to get the ones that are present, and (if both sets are needed)
SELECT X.thecol
FROM X
LEFT JOIN ExistingTable E ON E.thecol = X.thecol
WHERE E.thecol IS NULL
to get the ones that are absent. Worth benchmarking, at least.
Edit: as requested, here are some good docs & tutorials on temp tables in SQL Server. Bill Graziano has a simple intro covering temp tables, table variables, and global temp tables. Randy Dyess and SQL Master discuss performance issues for and against them (but remember that if you're having performance problems, you do want to benchmark alternatives, not just go on theoretical considerations!-).
MSDN has articles on tempdb (where temp tables are kept) and optimizing its performance.
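A sketch of this approach from C# on SQL Server (names like inputGuids and ExistingTable are assumptions): bulk-load the 5000 GUIDs into a temp table on the same connection, then let the server answer the question with one set-based join.
var ids = new DataTable();
ids.Columns.Add("thecol", typeof(Guid));
foreach (Guid g in inputGuids)
    ids.Rows.Add(g);

using (var create = new SqlCommand(
    "CREATE TABLE #X (thecol UNIQUEIDENTIFIER PRIMARY KEY)", connection))
{
    create.ExecuteNonQuery();
}
using (var bulk = new SqlBulkCopy(connection))
{
    bulk.DestinationTableName = "#X";
    bulk.WriteToServer(ids);
}
using (var query = new SqlCommand(
    "SELECT X.thecol FROM #X X " +
    "LEFT JOIN ExistingTable E ON E.thecol = X.thecol " +
    "WHERE E.thecol IS NULL", connection))
using (var reader = query.ExecuteReader())
{
    while (reader.Read())
        Console.WriteLine(reader.GetGuid(0)); // IDs not yet in the table
}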
Step 1. Make sure you have a problem to solve. Five thousand inserts isn't a lot to do one at a time in many contexts.
Are you certain that the simplest possible way isn't sufficient? What performance issues have you measured so far?
What do you need to do with the entries that do or don't exist in your table?
Depending on what you need, maybe the new MERGE statement in SQL Server 2008 could fit your bill - update what's already there, insert new stuff, all wrapped neatly into a single SQL statement. Check it out!
http://blogs.conchango.com/davidportas/archive/2007/11/14/SQL-Server-2008-MERGE.aspx
http://www.sql-server-performance.com/articles/dba/SQL_Server_2008_MERGE_Statement_p1.aspx
http://blogs.msdn.com/brunoterkaly/archive/2008/11/12/sql-server-2008-merge-capability.aspx
Your statement would look something like this:
MERGE INTO
    (your target table) AS t
USING
    (your source table, e.g. a temporary table) AS s
ON t.ID = s.ID
WHEN NOT MATCHED THEN -- row does not exist in the base table
    .... (do whatever you need to do)
WHEN MATCHED THEN -- row exists in the base table
    ... (do whatever else you need to do)
;
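For the question's scenario that might look roughly like the sketch below (the table names and the "insert the missing ones" action are my assumptions, not a prescribed solution):
string merge =
    "MERGE INTO ExistingTable AS t " +
    "USING #IncomingIds AS s ON t.ID = s.ID " +
    "WHEN NOT MATCHED THEN INSERT (ID) VALUES (s.ID);";
using (var cmd = new SqlCommand(merge, connection))
{
    int inserted = cmd.ExecuteNonQuery(); // rows inserted = IDs that were missing
}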
To make this really fast, I would load the "new" records from e.g. a TXT or CSV file into a temporary table in SQL Server using BULK INSERT:
BULK INSERT YourTemporaryTable
FROM 'c:\temp\yourimportfile.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = ' |\n'
)
BULK INSERT combined with MERGE should give you the best performance you can get on this planet :-)
Marc
PS: here's a note from TechNet on MERGE performance and why it's faster than individual statements:
In SQL Server 2008, you can perform multiple data manipulation language (DML) operations in a single statement by using the MERGE statement. For example, you may need to synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table. Typically, this is done by executing a stored procedure or batch that contains individual INSERT, UPDATE, and DELETE statements. However, this means that the data in both the source and target tables are evaluated and processed multiple times; at least once for each statement.
By using the MERGE statement, you can replace the individual DML statements with a single statement. This can improve query performance because the operations are performed within a single statement, therefore, minimizing the number of times the data in the source and target tables are processed. However, performance gains depend on having correct indexes, joins, and other considerations in place. This topic provides best practice recommendations to help you achieve optimal performance when using the MERGE statement.
Try to ensure you end up running only one query - i.e. if your solution consists of running 5000 queries against the database, that'll probably be the biggest consumer of resources for the operation.
If you can insert the 5000 IDs into a temporary table, you could then write a single query to find the ones that don't exist in the database.
If you want simplicity, since 5000 records is not very many, then from C# just use a loop to generate an INSERT statement for each of the strings you want to add to the table. Wrap each INSERT in a TRY...CATCH block and send them all up to the server in one shot, like this:
BEGIN TRY
    INSERT INTO table (theCol, field2, field3)
    SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
    INSERT INTO table (theCol, field2, field3)
    SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
BEGIN TRY
    INSERT INTO table (theCol, field2, field3)
    SELECT theGuid, value2, value3
END TRY BEGIN CATCH END CATCH
If you have a unique index or primary key defined on your string GUID, then the duplicate inserts will fail. Checking ahead of time to see whether the record already exists just duplicates work that SQL is going to do anyway.
If performance is really important, then consider downloading the 5000 GUIDs to your local station and doing all the analysis locally. Reading 5000 GUIDs should take much less than 1 second. This is simpler than bulk importing into a temp table (which is the only way you will get performance from a temp table) and doing an update using a join to the temp table.
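A sketch of that client-side check (table/column names and the inputs collection are assumptions; requires System.Linq):
var existing = new HashSet<Guid>();
using (var cmd = new SqlCommand("SELECT ID FROM MyTable", connection))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
        existing.Add(reader.GetGuid(0));
}
// split the 5000 converted inputs into "already there" and "missing"
var alreadyThere = inputs.Where(g => existing.Contains(g)).ToList();
var missing = inputs.Where(g => !existing.Contains(g)).ToList();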
Since you are using SQL Server 2008, you could use table-valued parameters, which are a way to provide a table as a parameter to a stored procedure.
Using ADO.NET you can easily pre-populate a DataTable and pass it as a SqlParameter.
Steps you need to perform:
Create a custom SQL type:
CREATE TYPE MyType AS TABLE
(
    UniqueId INT NOT NULL,
    Column NVARCHAR(255) NOT NULL
)
Create a stored procedure which accepts the type:
CREATE PROCEDURE spInsertMyType
    @Data MyType READONLY
AS
    xxxx
Call it using C#:
SqlCommand insertCommand = new SqlCommand("spInsertMyType", connection);
insertCommand.CommandType = CommandType.StoredProcedure;
SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@Data", dataReader);
tvpParam.SqlDbType = SqlDbType.Structured;
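As a sketch of the missing piece, the DataTable backing the parameter can be populated like this instead of passing a dataReader (the row values are placeholders):
var data = new DataTable();
data.Columns.Add("UniqueId", typeof(int));
data.Columns.Add("Column", typeof(string));
data.Rows.Add(1, "first");
data.Rows.Add(2, "second");

SqlParameter tvpParam = insertCommand.Parameters.AddWithValue("@Data", data);
tvpParam.SqlDbType = SqlDbType.Structured;
tvpParam.TypeName = "MyType"; // ties the parameter to the custom type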
Links: Table-valued Parameters in Sql 2008
Definitely do not do it one by one.
My preferred solution is to create a stored procedure with one parameter that takes XML in the following format:
<ROOT>
    <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/>
    <MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000001"/>
    ....
</ROOT>
Then, in the procedure with the argument of type NVARCHAR(MAX), you convert it to XML, after which you use it as a table with a single column (let's call it @FilterTable). The stored procedure looks like:
CREATE PROCEDURE dbo.sp_MultipleParams(@FilterXML NVARCHAR(MAX))
AS BEGIN
    SET NOCOUNT ON

    DECLARE @x XML
    SELECT @x = CONVERT(XML, @FilterXML)

    -- temporary table (must have it, because we cannot join on an XML statement)
    DECLARE @FilterTable TABLE (
        "ID" UNIQUEIDENTIFIER
    )

    -- insert into the temporary table
    -- important: XML iS CaSe-SenSiTiVe
    INSERT @FilterTable
    SELECT x.value('@ID', 'UNIQUEIDENTIFIER')
    FROM @x.nodes('/ROOT/MyObject') AS R(x)

    SELECT o.ID,
           SIGN(SUM(CASE WHEN t.ID IS NULL THEN 0 ELSE 1 END)) AS FoundInDB
    FROM @FilterTable o
    LEFT JOIN dbo.MyTable t
        ON o.ID = t.ID
    GROUP BY o.ID
END
GO
You run it as:
EXEC sp_MultipleParams '<ROOT><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000000"/><MyObject ID="60EAD98F-8A6C-4C22-AF75-000000000002"/></ROOT>'
And your results look like:
ID FoundInDB
------------------------------------ -----------
60EAD98F-8A6C-4C22-AF75-000000000000 1
60EAD98F-8A6C-4C22-AF75-000000000002 0
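A sketch of building the XML from the converted GUIDs in C# and reading the result (the guids collection is an assumption):
var sb = new StringBuilder("<ROOT>");
foreach (Guid id in guids)
    sb.AppendFormat("<MyObject ID=\"{0}\"/>", id);
sb.Append("</ROOT>");

using (var cmd = new SqlCommand("dbo.sp_MultipleParams", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@FilterXML", sb.ToString());
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Guid id = reader.GetGuid(0);
            bool foundInDb = reader.GetInt32(1) == 1; // the FoundInDB flag
        }
    }
}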
