I am trying to perform a query against a linked server (SQL Server 2008 linked to Sybase) and select the results into a temp table. It works perfectly through a query window in SQL Server Management Studio, but when I do it through code (C#) it fails with the error: "The operation could not be performed because OLE DB provider "ASEOLEDB" for linked server "MYLINKEDSERVER" was unable to begin a distributed transaction." I am not using a transaction in code with my DbConnection.
The query looks like this:
SELECT *
INTO #temptable
FROM OPENQUERY([MYLINKEDSERVER], 'SELECT * from table')
Found the issue. It was a result of connection pooling. It appears that connections were being reused, causing the system to think a distributed transaction was in progress.
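For reference, a minimal sketch of the two usual pooling workarounds in ADO.NET (the connection string here is illustrative, not taken from the original code):
using System.Data.SqlClient;

// Option 1: opt out of pooling for this connection string, so every open
// yields a fresh connection with no leftover transaction enlistment.
var conn1 = new SqlConnection(
    "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=SSPI;Pooling=false");

// Option 2: keep pooling, but flush the pool so later opens cannot reuse
// a connection that was previously enlisted in a transaction.
using (var conn2 = new SqlConnection(
    "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=SSPI;"))
{
    SqlConnection.ClearPool(conn2);   // or SqlConnection.ClearAllPools();
    conn2.Open();
    // ... run the SELECT ... INTO #temptable FROM OPENQUERY(...) here ...
}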
I'm trying to shrink a LocalDB database with Visual Studio 2017 Community. I have a Windows 7 client WinForms application with a small database (~10 MB of data) that results in a 150 MB database file due to LocalDB free-space allocation.
I found an answer (Executing Shrink on SQL Server database using command from linq-to-sql) that suggests using the following code:
context.Database.ExecuteSqlCommand(
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried issuing a COMMIT first, but with no success at all. Any idea how to effectively shrink the database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command.".
That's what's causing your problem: you cannot call DBCC SHRINKDATABASE inside a transaction, which isn't really surprising given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
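Alternatively, if you would rather sidestep EF's transaction handling entirely, a plain ADO.NET command issues no implicit transaction (a sketch, assuming the same connection string your context uses):
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("DBCC SHRINKDATABASE(@file)", conn))
{
    cmd.Parameters.AddWithValue("@file", DatabaseTools.Instance.DatabasePathName);
    conn.Open();
    cmd.ExecuteNonQuery(); // no ambient transaction, so DBCC is allowed here
}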
We have a project hosted in Azure, with our SQL databases in an elastic pool. The database is generated by Code First migrations (.NET Framework), and we load the data into it with some SQL import scripts.
The issue comes in deploying the database to Azure. We have tried using SQL Server Management Studio both on a dev machine and on the SQL server itself. We have attempted to push the database to Azure using Deploy Database to Microsoft SQL Azure, and also to connect directly to Azure and use Import Data-tier Application with a BACPAC. Our server version is SQL Server 2014 (12.0.4439.1).
During the deployment process everything seems to go very quickly: the schema is created and data is loaded into the tables, but it hangs on "Enabling Indexes" for most of the process time, and at about an hour the entire process times out and fails. I receive three errors. The first is error 0, something very cryptic about database files targeting SQL Server 2014 having known compatibility issues with SQL Azure v12. Another is a generic message that the process has timed out. The final one is:
Could not import package.
Error SQL72016: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated.
Error SQL72045: Script execution error. The executed script:
Bulk data is loaded
After doing some research I saw a few others complaining that if the Basic tier is used there are not enough DTUs for the process to finish within the time limit. We are using the Standard S1 tier for our database and tried upping it to S2 for the import as a test. While things seemed to happen faster, we had the same issue. The database is 1.6 GB and does have two large tables with ~1 million rows, though I do not see this as being such a large database that the process should fail.
Not a direct solution to the problem, but I have experienced this while testing large database migrations in SSMS. Extending the available resources does seem to help, but that means restarting the deployment process.
Rebuilding all the indexes should be the last operation of the migration, so to avoid redoing everything you can perform this step yourself with something like:
DECLARE C CURSOR FAST_FORWARD FOR
SELECT so.name, si.name
FROM sys.indexes si JOIN sys.objects so ON so.object_id = si.object_id
WHERE si.is_disabled = 1
ORDER BY CASE WHEN si.type_desc = 'CLUSTERED' THEN 0 ELSE 1 END, si.name; -- clustered first, as a disabled clustered index will stop the others rebuilding

DECLARE @tbl NVARCHAR(MAX), @idx NVARCHAR(MAX), @sql NVARCHAR(MAX);
OPEN C;
FETCH NEXT FROM C INTO @tbl, @idx;
WHILE @@FETCH_STATUS = 0 BEGIN
    -- rebuild each disabled index in turn
    SET @sql = 'ALTER INDEX [' + @idx + '] ON [' + @tbl + '] REBUILD;';
    EXEC (@sql);
    FETCH NEXT FROM C INTO @tbl, @idx;
END
CLOSE C;
DEALLOCATE C;
I found the timeout errors occurred with rebuilds that would on their own take more than 5 minutes. I've not looked at whether this per-operation timeout can be configured in the migration process.
Note that the above assumes all tables are in the dbo schema; if that is not the case, add a join to sys.schemas in the cursor definition and use it to prepend the appropriate schema name to the table names.
In the comments Kevin suggests triggers may be disabled at this point, though this was not the case on the occasions I encountered this situation. Just in case, add the following after the current EXEC (@sql); in the above example:
SET @sql = 'ALTER TABLE [' + @tbl + '] ENABLE TRIGGER ALL;';
EXEC (@sql);
If the triggers are already active then this is essentially a no-op, so it will do no harm. Note that this will not help if the triggers are not yet defined, but they should be by the time the indexes are being rebuilt, as that is part of the schema build earlier in the process.
SQL DB is designed so that you can scale up for big operations and scale back down when the workload is quieter. If you're still receiving errors, why not scale up to an S3 for an hour or two (SQL DB bills in hourly increments), get the import done, and scale back down?
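If you would rather script that bounce than click through the portal, the tier change can be issued as T-SQL against the logical server's master database (a sketch; the server, credentials, and database name are placeholders):
using System.Data.SqlClient;

using (var conn = new SqlConnection(
    "Server=myserver.database.windows.net;Initial Catalog=master;User ID=admin;Password=...;"))
using (var cmd = new SqlCommand(
    "ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S3');", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery(); // the scale operation is asynchronous;
                           // poll sys.dm_operation_status in master to see when it finishes
}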
I am trying to copy all table data from server to my local database, like
INSERT INTO [.\SQLEXPRESS].[Mydatabase]..MYTable
SELECT *
FROM [www.MYSite.com].[Mydatabase]..MYTable
www.MYSite.com has SQL login ID XYZ and password 1234,
but I get an error:
Could not find server 'www.MYSite.com' in sys.servers.
Verify that the correct server name was specified. If necessary,
execute the stored procedure sp_addlinkedserver to add the server to sys.servers.
I want to copy all the data from MYTable on www.MYSite.com to MYTable on .\SQLEXPRESS.
How can I resolve this error? Please help.
Update:
I am using Microsoft Sync Framework 2.0 to sync all data from www.MYSite.com to .\SQLEXPRESS and vice versa, but in one scenario I want to copy data from www.MYSite.com to .\SQLEXPRESS without the sync framework.
Please note that I am passing these SQL statements from C#.
When you specify a database on another server, like this:
SELECT *
FROM [www.MYSite.com].[Mydatabase]..MYTable
... the server name needs to be one that the database server was previously configured to recognize. It needs to be in the system table sys.servers.
So, you need to configure your SQLExpress instance to "know about" that server.
You can do this in code, with the stored procedure sp_addlinkedserver. You can learn more about it here.
Or, you can do it through SSMS, under Server Objects > Linked Servers > New Linked Server.
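Since you're sending SQL from C# anyway, the one-time linked-server setup can be scripted as well. A sketch using the names from the question (the SQLNCLI provider choice is an assumption; adjust for your environment):
using System.Data.SqlClient;

string setup = @"
    EXEC sp_addlinkedserver
        @server = N'www.MYSite.com',
        @srvproduct = N'',
        @provider = N'SQLNCLI',       -- assumed provider for a remote SQL Server
        @datasrc = N'www.MYSite.com';
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname = N'www.MYSite.com',
        @useself = 'false',
        @locallogin = NULL,
        @rmtuser = N'XYZ',
        @rmtpassword = '1234';";

using (var conn = new SqlConnection(
    @"Data Source=.\SQLEXPRESS;Initial Catalog=Mydatabase;Integrated Security=SSPI;"))
using (var cmd = new SqlCommand(setup, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}
// After this, the original INSERT INTO ... SELECT ... FROM [www.MYSite.com]... should run.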
I hope the below information will help you:
Using SQL Server Management Studio you can use the Import feature.
Connect to your SQL Server instance.
Select your database.
Right-click > Tasks > Import Data.
Then follow the wizard instructions.
I am calling a stored procedure (SP name: stp1, DB name: DB1, server: localhost) from C# code, passing the right parameters and opening the connection to DB1 with the connection string below. The server is Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64).
"Data Source=.\;" + "Initial Catalog=DB1;" + " Integrated
Security=SSPI;";
Inside SP "stp1" reads data from the table "table1" located in DB2 in the same server.
When I run the SP stp1 with the above connection string then the execution time is: 5 seconds
If I create the same SP in DB2 and if I run stp1 then execution time is: .02 seconds
we have the same environment in multiple machines, we are not seeing this problem in all the machines and we are seeing this especially in one server, so is this something due to server configuration or any idea???
Since the other way is faster, I would agree that we can create SP in DB2, but I would like to understand why this is happening??
With SQL Server I run this query with no problem...
SELECT SUM(Esi) AS Dispo
FROM [mdb].[dbo].[Query1] AS A
INNER JOIN [mdb2].[dbo].[TieCol] as B ON A.Alias=B.IDAlias
WHERE Alias LIKE 'SETUP%'
I join two tables that reside in two different databases (mdb and mdb2).
But how can I do it in my .NET application?
When I need to use this statement
string cmdText = "SELECT SUM(Esi) AS Dispo
FROM [mdb].[dbo].[Query1] AS A
INNER JOIN [mdb2].[dbo].[TieCol] as B ON A.Alias=B.IDAlias
WHERE Alias LIKE 'SETUP%'";
this.OP = new SqlConnection(ConfigurationManager.ConnectionStrings["mdb2"].ConnectionString);
SqlCommand sqlCommand = new SqlCommand(cmdText, this.OP);
I can't execute it, since this.OP is the connection to mdb2... and what about mdb?
How can I connect to both databases simultaneously?
The SQL connection is to the server; the Initial Catalog in a connection string behaves like USE - it sets the default database.
So your three-part SQL query should work as-is. If it doesn't:
Make sure that the SQL login used by your app (or the account of your AppPool if using Web and Integrated Security) has the necessary access to both databases. (use RunAs on SQL Enterprise Manager as this account and try to run the query)
You might try escaping [Alias]
Also, if there is coupling between mdb and mdb2 (e.g. sprocs in mdb use tables in mdb2, etc.), then for ease of maintenance you might consider adding views in mdb for the mdb2 objects. This allows for easy identification of cross-database dependencies: your query can then use views which look like they are in the same database, although the underlying dependency on mdb2 is still there.
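Putting that together, a single connection is enough; a sketch (reusing the mdb2 connection string from the question):
using System.Configuration;
using System.Data.SqlClient;

string cmdText = @"SELECT SUM(Esi) AS Dispo
                   FROM [mdb].[dbo].[Query1] AS A
                   INNER JOIN [mdb2].[dbo].[TieCol] AS B ON A.Alias = B.IDAlias
                   WHERE Alias LIKE 'SETUP%'";

// One connection to the server is enough: the three-part names reach both
// databases, provided the login has access to each of them.
using (var conn = new SqlConnection(
    ConfigurationManager.ConnectionStrings["mdb2"].ConnectionString))
using (var cmd = new SqlCommand(cmdText, conn))
{
    conn.Open();
    object dispo = cmd.ExecuteScalar();
}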
I'm not sure if there is a way to do this within the connection string. But you can probably do it using a four-part reference to the table: [server].[database].[schema].[table].
Your C# application only needs to connect to one database server for this query.
Say your C# application connects to [mdb]. Database [mdb2] should be a linked server in [mdb].
Since you can run that query in SQL Server, there must be one SQL Server instance that can see both databases. Use that server in your C# connection string. That's it!