I'm looking to copy a few thousand records from SQL Server into Access in C#. The other direction works using SqlBulkCopy. Is there anything in place to do this in reverse?
I'm trying my best to stay away from looping through each field in each record and building a heinous Insert statement that not only would take forever to run, but would likely crash horribly if anything changes.
This will run against the MS Access OleDbConnection:
SELECT fld1, fld2 INTO accessTable FROM [sql connection string].sqltable
For example:
SELECT * INTO newtable
FROM
[ODBC;Description=Test;DRIVER=SQL Server;SERVER=server\SQLEXPRESS;UID=uid;Trusted_Connection=Yes;DATABASE=Test].table_1
Or to append
INSERT INTO newtable
SELECT *
FROM [ODBC;Description=Test;DRIVER=SQL Server;SERVER=server\SQLEXPRESS;UID=uid;Trusted_Connection=Yes;DATABASE=Test].table_1;
Or with FileDSN
INSERT INTO newtable
SELECT *
FROM [ODBC;FileDSN=z:\docs\test.dsn].table_1;
You will need to find the right driver to suit your setup, for example:
ODBC;Driver={SQL Server Native Client 11.0};Server=myServerAddress;Database=myDataBase; Uid=myUsername;Pwd=myPassword;
That one, from http://connectionstrings.com, works for me, but check your client version.
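Since the question asks about C#, here is a minimal sketch of running such a statement over the Access connection. The ACE provider, file path, table names, and SQL Server details are placeholders; substitute your own:

using System.Data.OleDb;

static void CopyFromSqlServerToAccess()
{
    // Hypothetical Access connection string; adjust the provider/path to your file.
    var accessConnStr = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\test.accdb;";
    using (var conn = new OleDbConnection(accessConnStr))
    {
        conn.Open();
        // Access resolves the bracketed ODBC connect string and pulls the rows
        // itself, so there is no per-row looping on the C# side.
        var sql = "SELECT * INTO newtable FROM " +
                  @"[ODBC;Driver={SQL Server Native Client 11.0};Server=server\SQLEXPRESS;" +
                  "Trusted_Connection=Yes;Database=Test].table_1";
        using (var cmd = new OleDbCommand(sql, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}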
I am attempting to build the following tableadapter query in Visual Studio 2019
SELECT * FROM Vendors
ORDER BY VendorID
OFFSET 5 ROWS
FETCH NEXT 5 ROWS ONLY
And it gives the error "Unable to parse query text" at the OFFSET clause.
I know this query works on the database itself, as I can run it successfully on the SQL Server (2019 Express).
Will Visual Studio TableAdapter queries not recognize the OFFSET command? Or is the syntax different in some way?
TableAdapters are pretty old these days, but they remain a serviceable data access strategy. The designer attempts to parse the query you entered and doesn't cope with the extra OFFSET/FETCH clauses.
I recommend you try:
right-click your dataset surface
Add >> TableAdapter
choose "SELECT which returns rows"
put the query in as SELECT * FROM Vendors
finish the wizard
click the Fill,GetData() line of the adapter
in the Properties grid, paste the extra clause onto the end of the CommandText
say "No" to "Do you want to update the other queries?"
You should be left with a usable adapter; see the footnote, though.
You have other options if you want to use a lot of syntax the designer doesn't like; most notably you can choose "Create New Stored Procedures" when going through the wizard, put in a base query of SELECT * FROM Vendors, let VS make the sprocs, and then edit the OFFSET etc. into the sproc in SSMS.
Footnote: I personally still use TAs a lot and have never needed such syntax, but generally I make the first query in a TA of the form SELECT * FROM x WHERE ID = @y, so it only ever pulls one row and is very "plain SQL"; the DataTable schema is driven from it and it "just works".
Then other queries have defined uses, such as SELECT * FROM x WHERE Name LIKE .... If you adopt this approach of making the first query a "normal" one that the designer can cope with, then you can certainly add another query with unsupported syntax. You'll get an "unable to parse query text" error,
but you can ignore it and finish the wizard anyway, and it will work fine.
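After that, usage is the same as for any other generated adapter. A hypothetical sketch, where TestDataSet and VendorsTableAdapter stand in for whatever your generated dataset and adapter are actually called:

using System;

// Assumes a typed dataset named TestDataSet containing a Vendors table.
var adapter = new TestDataSetTableAdapters.VendorsTableAdapter();
TestDataSet.VendorsDataTable page = adapter.GetData(); // runs the edited OFFSET/FETCH query
foreach (TestDataSet.VendorsRow row in page)
    Console.WriteLine(row.VendorID);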
For example, I have a custom report where the user can write any SQL script.
In this case it is SELECT * FROM table_x.
How can I know what the columns of this script are?
If you're worried about this at the procedural level, your ORM will probably tell you when you pull the rows down. If you're using classic ADO.NET, you can look at the columns of the resulting DataTable.
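For example, with classic ADO.NET you can fetch just the schema of whatever script the user wrote (a sketch; the connection string and script are placeholders):

using System;
using System.Data;
using System.Data.SqlClient;

static void PrintResultColumns(string connectionString, string userScript)
{
    using (var conn = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter(userScript, conn))
    {
        var table = new DataTable();
        adapter.FillSchema(table, SchemaType.Source); // fetches column metadata, no rows
        foreach (DataColumn column in table.Columns)
            Console.WriteLine($"{column.ColumnName} ({column.DataType.Name})");
    }
}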
If you're worried about this at the SQL Server level, you'd have to figure out the name of the table (for instance, by finding the next word after FROM, but that's pretty sketchy in my opinion), then you can do:
SELECT *
FROM Information_Schema.Columns
WHERE TABLE_NAME = 'ThatTableName'
That'll tell you a bit about them, including their COLUMN_NAME.
In SQL Server 2012 and later, you can use sp_describe_first_result_set:
EXEC sp_describe_first_result_set @tsql = N'SELECT * FROM table_x;';
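You can call it from C# too; a sketch along these lines (userScript is whatever the user typed):

using System;
using System.Data;
using System.Data.SqlClient;

static void DescribeResultSet(string connectionString, string userScript)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("sp_describe_first_result_set", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@tsql", userScript);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // One row per column of the first result set.
            while (reader.Read())
                Console.WriteLine($"{reader["name"]} ({reader["system_type_name"]})");
        }
    }
}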
I must sync two tables across two databases, one of which is MSSQL and the other MySQL, and I have to do this through a Windows Service. I have created the Windows Service; what I currently have is: get the data from the MySQL database and insert it into a DataAdapter, from which I plan to move the data to the MSSQL database using an insert inside a transaction. Can you tell me the best approach to this problem, and whether what I'm doing now is on the right track? This is the first time I'm doing such a thing.
Well, there are probably a couple of ways to solve this. You didn't mention the count and size of your tables or the desired synchronization frequency, so my answer might not be 100% relevant. Anyway, my proposal is:
Each table should have an additional column, SyncDate (TIMESTAMP), which is null by default.
Sync logic implemented in the Windows Service periodically checks each table for data to sync by executing a query like this (use a plain IDataReader for performance reasons):
SELECT * FROM TEST_TABLE WHERE SyncDate IS NULL
If the above statement returns data, then:
collect a package of rows to insert (for example, 500 rows)
begin a transaction on the target database and execute the insert statements for the collected package (setting the SyncDate column to DateTime.Now)
when the inserts are complete, execute a statement like this in the source db:
UPDATE TEST_TABLE SET SyncDate = @syncDate WHERE ID_PRIMARY_KEY IN (identifiers from the package)
commit the transaction
collect another package and repeat the algorithm until the DataReader provides no more data
The above algorithm should be applied to both databases.
Your database probably has foreign keys, so remember that dependent tables must be synchronized in a valid order.
Wherever possible, you can benefit from multithreading.
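A rough C# sketch of one such sync pass, from MySQL to MSSQL. This assumes the MySQL Connector/NET package and a table named TEST_TABLE with a bigint ID_PRIMARY_KEY and a single Field1 column; it illustrates the algorithm, it is not production code:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using MySql.Data.MySqlClient;

static void SyncPass(string mySqlConnStr, string msSqlConnStr)
{
    var ids = new List<long>();
    var syncDate = DateTime.Now;

    using (var source = new MySqlConnection(mySqlConnStr))
    using (var target = new SqlConnection(msSqlConnStr))
    {
        source.Open();
        target.Open();

        // Collect a package of unsynced rows (500 at a time).
        var select = new MySqlCommand(
            "SELECT ID_PRIMARY_KEY, Field1 FROM TEST_TABLE WHERE SyncDate IS NULL LIMIT 500",
            source);

        using (var tx = target.BeginTransaction())
        {
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    var insert = new SqlCommand(
                        "INSERT INTO TEST_TABLE (ID_PRIMARY_KEY, Field1, SyncDate) " +
                        "VALUES (@id, @f1, @sync)", target, tx);
                    insert.Parameters.AddWithValue("@id", reader.GetInt64(0));
                    insert.Parameters.AddWithValue("@f1", reader.GetString(1));
                    insert.Parameters.AddWithValue("@sync", syncDate);
                    insert.ExecuteNonQuery();
                    ids.Add(reader.GetInt64(0));
                }
            }
            tx.Commit();
        }

        // Mark the copied rows as synced in the source database.
        if (ids.Count > 0)
        {
            var update = new MySqlCommand(
                "UPDATE TEST_TABLE SET SyncDate = @sync WHERE ID_PRIMARY_KEY IN (" +
                string.Join(",", ids) + ")", source);
            update.Parameters.AddWithValue("@sync", syncDate);
            update.ExecuteNonQuery();
        }
        // Repeat until the SELECT returns no more rows.
    }
}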
For this kind of job, SQL Server Integration Services is the way to go.
SSIS is made to Extract, transform, and load data. (ETL process)
You design a workflow with each transformation step of your data, from MySQL to MSSQL.
You then create a job and configure its schedule, and it will be executed by SQL Server.
No need to develop something from scratch that would be hard to maintain.
If you are already at the point where you are ready to move the MySQL data to SQL Server, I would suggest you put the rows in a separate table and issue a MERGE command to merge the data. Make sure that you flag the rows that have been updated, so you can later get them back and push them to MySQL:
MERGE TableFromMySQL AS TARGET USING TableFromSQL AS SOURCE
ON (TARGET.PrimaryKey = SOURCE.PrimaryKey)
WHEN MATCHED AND (TARGET.Field1 <> SOURCE.Field1
OR TARGET.Field2 <> SOURCE.Field2
OR ... more field comparisons that you want to sync ...)
THEN
UPDATE
SET TARGET.Field1 = SOURCE.Field1,
TARGET.Field2 = SOURCE.Field2,
-- flag the target row as updated
TARGET.IsUpdated = 1,
TARGET.LastUpdatedOn = GETDATE()
WHEN NOT MATCHED
THEN
INSERT(PrimaryKey, Field1, Field2, IsUpdated, LastUpdatedOn)
VALUES(SOURCE.PrimaryKey, SOURCE.Field1, SOURCE.Field2, 1, GETDATE());
At this point, you can query TableFromMySQL with IsUpdated = 1 and push all of those values back to the MySQL database. I hope this helps.
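A minimal sketch of that read-back step with classic ADO.NET (the table and column names are the hypothetical ones from the MERGE above):

using System.Data;
using System.Data.SqlClient;

static DataTable GetRowsToPushToMySql(string msSqlConnStr)
{
    using (var conn = new SqlConnection(msSqlConnStr))
    using (var adapter = new SqlDataAdapter(
        "SELECT PrimaryKey, Field1, Field2 FROM TableFromMySQL WHERE IsUpdated = 1", conn))
    {
        var changed = new DataTable();
        adapter.Fill(changed); // these rows still need to be written back to MySQL
        return changed;
    }
}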
If you add the mySql database as a linked server (sp_addlinkedserver) then I think you could do the following in a stored proc on your MS SQL database:
INSERT INTO table1 (col1)
SELECT col1
FROM OPENQUERY(MySqlLinkedDb, 'SELECT col1 FROM mySqlDb.table2')
If you have to do this through a Windows Service, I am curious how you are receiving the data from MySQL. What is initiating the call to the service, and in what form is the data passed? Are you getting changes, or are you getting a complete refresh of the data?
If you receive a DataTable or some other kind of IEnumerable and it's a complete refresh, for example, you could use SqlBulkCopy.WriteToServer(). That would be the fastest and most efficient approach.
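For example, a sketch assuming a complete refresh arrives as a DataTable whose columns match the destination table (dbo.TargetTable is a placeholder):

using System.Data;
using System.Data.SqlClient;

static void RefreshTable(DataTable rows, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.TargetTable" })
        {
            bulk.WriteToServer(rows); // streams the whole set in one bulk operation
        }
    }
}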
I'm using an SqlCommand like so:
command.CommandText = "INSERT INTO ... VALUES ...; SELECT SCOPE_IDENTITY();";
Is this enough, or do I need BEGIN TRAN etc.? (Mentioned here.)
I tried it first, of course, and it works fine. But will it work correctly even if there are two simultaneous inserts? (And I'm not sure how to test that.)
You don't need BEGIN TRAN. Scope_Identity() works fine without it, even if there are "simultaneous inserts". That is the whole point of the function: it returns an answer for the current scope only.
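For example, a typical way to read the value back in C# (a sketch; the table and parameter names are made up):

using System;
using System.Data.SqlClient;

static int InsertAndGetId(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO dbo.MyTable (Name) VALUES (@name); SELECT SCOPE_IDENTITY();", conn))
    {
        command.Parameters.AddWithValue("@name", "example");
        conn.Open();
        // SCOPE_IDENTITY() comes back as numeric, so ExecuteScalar returns a decimal.
        return Convert.ToInt32(command.ExecuteScalar());
    }
}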
Be aware that in versions before SQL Server 2012, parallelism can break Scope_Identity(), so you must use the query hint OPTION (MAXDOP 1) on your INSERT statement if you want it to work properly 100% of the time. You can read about this problem on Microsoft Connect. (It is theoretically fixed in Cumulative Update package 5 for SQL Server 2008 R2 Service Pack 1, but some people seem to think that may not be 100% true.)
There is also the OUTPUT clause in SQL Server 2005 and up, which is another way to return data about your INSERT, either by sending a rowset to the client or by outputting to a table. Be aware that receiving the rowset does not actually prove the INSERT was properly committed... so you should probably use SET XACT_ABORT ON; in your stored procedure. Here's an example of OUTPUT:
CREATE TABLE #AInsert (IDColumn int); -- use your identity column's actual type
INSERT dbo.TableA (OtherColumn) -- not the identity column
OUTPUT Inserted.IDColumn -- , Inserted.OtherColumn, Inserted.ColumnWithDefault
INTO #AInsert
SELECT 'abc';
-- Do something with #AInsert, which contains all the `IDColumn` values
-- that were inserted into the table. You can insert all columns, too,
-- as shown in the comments above
Not exactly the answer to your question, but if you are on SQL Server 2005 and above, consider using the OUTPUT clause. Take a look at this SO answer for a full sample; it's simple enough to implement:
INSERT dbo.MyTable (col1, col2, col3)
OUTPUT INSERTED.idCol
VALUES ('a', 'b', 'c')
Scope_Identity and BEGIN TRAN work independently; BEGIN TRAN is used when you might want to roll back or commit a transaction at a given point within your query.
I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server.
The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements.
The auto-generated table adapter command looks like this...
this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
SELECT Field1, Field2 FROM Table WHERE (Field1 = #Value1)";
What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that?
Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE?
I cannot regenerate these table adapters, as the people who knew how have long since left.
It is just updating the object with the latest values from the database, after an update. Always seemed a little unnecessary to me, but hey...
These are a nuisance from a maintenance point of view - if you have the option, you'll save yourself a lot of hassle by abstracting this all out to a proper data layer.
It allows for the possibility that the field values might be altered by trigger(s) on the table. Sensible enough, I'd have thought, in auto-generated boilerplate.
Though the SELECT statement is a tad wacky in assuming that Field1 is the primary key... but maybe the autogen code makes sure it is before generating this bit of code.
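If regenerating really isn't an option, one workaround is to trim the trailing SELECT off the generated commands at runtime before running them against SQL CE. A sketch, assuming each batch is a single statement followed by one SELECT, and that no literal in the first statement contains a semicolon:

using System.Data;
using System.Data.Common;

static void StripBatchSelect(DbCommand command)
{
    if (command == null) return;
    // The generated text looks like "INSERT ...;\r\nSELECT ...".
    // Keep only the first statement.
    int separator = command.CommandText.IndexOf(';');
    if (separator >= 0)
        command.CommandText = command.CommandText.Substring(0, separator);
    // Tell the adapter not to expect returned rows to refresh the DataRow.
    command.UpdatedRowSource = UpdateRowSource.None;
}

// Apply to each generated command before calling dataAdapter.Update:
// StripBatchSelect(adapter.InsertCommand);
// StripBatchSelect(adapter.UpdateCommand);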