Deploying SQL DB To Azure Timeout Enabling Indexes - c#

We have a project hosted in Azure, with our SQL databases stored in an elastic pool. The database is generated from Code First migrations in the .NET Framework, and we load the data into it with some SQL import scripts.
The issue comes when deploying the database to Azure. We have tried using SQL Server Management Studio on a dev machine and on the SQL server itself. We have attempted to push the database to Azure using the Deploy Database to Microsoft SQL Azure wizard, and attempted to connect directly to Azure and use Import Data-tier Application with a BACPAC. Our server version is SQL Server 2014 (12.0.4439.1).
During the deployment process everything seems to go very quickly: the schema is created and data is loaded into the tables, but it hangs on "Enabling Indexes" for most of the process time, and at about an hour the entire process times out and fails. I receive three errors. The first was error 0, something very cryptic about database files targeting SQL Server 2014 having known compatibility with SQL Azure v12. Another was about a timeout, with a generic message that the process had timed out. The final error is:
Could not import package.
Error SQL72016: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated.
Error SQL72045: Script execution error. The executed script:
Bulk data is loaded
After doing some research I saw a few others complaining that if the Basic tier is used there are not enough DTUs for the process to run within the time limit. We are using the Standard S1 tier for our database and tried upping it to S2 for the import process as a test. While things seemed to happen faster, we had the same issue. The database is 1.6 GB and does have 2 large tables with ~1 million rows; though I do not see this as being such a large database that this process should fail.

Not a direct solution to the problem, but I have experienced this while testing large database migrations in SSMS. Extending the resources available does seem to help, but that means restarting the deployment process.
Rebuilding all the indexes should be the last operation of the migration, so to avoid redoing everything you can perform this step yourself with something like:
DECLARE @tbl NVARCHAR(MAX), @idx NVARCHAR(MAX), @start DATETIME, @sql NVARCHAR(MAX);

DECLARE C CURSOR FAST_FORWARD FOR
    SELECT so.name, si.name
    FROM sys.indexes si
    JOIN sys.objects so ON so.object_id = si.object_id
    WHERE si.is_disabled = 1
    -- clustered first, as a disabled clustered index will stop the others rebuilding
    ORDER BY CASE WHEN si.type_desc = 'CLUSTERED' THEN 0 ELSE 1 END, si.name;

OPEN C;
FETCH NEXT FROM C INTO @tbl, @idx;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'ALTER INDEX [' + @idx + '] ON [' + @tbl + '] REBUILD;';
    EXEC (@sql);
    FETCH NEXT FROM C INTO @tbl, @idx;
END
CLOSE C;
DEALLOCATE C;
I found the timeout errors occurred with rebuilds that would on their own take more than 5 minutes. I've not looked at whether this per-operation timeout can be configured in the migration process.
Note that the above assumes all tables are in the dbo schema; if this is not the case, add a join to sys.schemas in the cursor definition and use it to prepend the appropriate schema name to the table names, as sketched below.
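A minimal sketch of that schema-aware variant (same rebuild loop as before, just carrying the schema name through and quoting with QUOTENAME):

DECLARE @sch NVARCHAR(MAX), @tbl NVARCHAR(MAX), @idx NVARCHAR(MAX), @sql NVARCHAR(MAX);

DECLARE C CURSOR FAST_FORWARD FOR
    SELECT ss.name, so.name, si.name
    FROM sys.indexes si
    JOIN sys.objects so ON so.object_id = si.object_id
    JOIN sys.schemas ss ON ss.schema_id = so.schema_id
    WHERE si.is_disabled = 1
    ORDER BY CASE WHEN si.type_desc = 'CLUSTERED' THEN 0 ELSE 1 END, si.name;

OPEN C;
FETCH NEXT FROM C INTO @sch, @tbl, @idx;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'ALTER INDEX ' + QUOTENAME(@idx) + ' ON ' + QUOTENAME(@sch) + '.' + QUOTENAME(@tbl) + ' REBUILD;';
    EXEC (@sql);
    FETCH NEXT FROM C INTO @sch, @tbl, @idx;
END
CLOSE C;
DEALLOCATE C;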
In the comments Kevin suggests triggers may be disabled at this point, though this was not the case on the occasions I encountered this situation. Just in case, add the following after the EXEC (@sql); in the example above:
SET @sql = 'ALTER TABLE [' + @tbl + '] ENABLE TRIGGER ALL;';
EXEC (@sql);
If the triggers are already active then this is essentially a no-op, so it will do no harm. Note that this will not help if the triggers are not yet defined, but they should be by the time the indexes are being rebuilt, as that is part of the schema build earlier in the process.

SQL DB is designed so that you can scale up for big operations and scale back down when the workload is quieter. If you're still receiving errors, why not scale up to an S3 for an hour or two (SQL DB bills in hourly increments), get the import done, and scale back down?
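If you'd rather script the scale-up than click through the portal, something along these lines should work; this is only a sketch using the Azure SQL Database ALTER DATABASE ... MODIFY (SERVICE_OBJECTIVE = ...) syntax, with a placeholder database name, typically run while connected to the logical server's master database:

-- scale up before the import ('MyDatabase' is a placeholder)
ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- ...run the import...

-- scale back down afterwards
ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');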

Related

EF 6 | Timeout error on inserting into database, but works fine on another computer

I'm working with a friend on a project. I am now running the code on my system, but I receive a timeout error when I try to create a record in the SQL Server database.
When he runs it on his system, it updates without any issues.
If I add this code to the context constructor it works for me, but it takes time!
public FileContext()
    : base("name=FileContext")
{
    var adapter = (IObjectContextAdapter)this;
    var objectContext = adapter.ObjectContext;
    objectContext.CommandTimeout = 1 * 60; // value in seconds
}
What is set wrong that would cause this?
We shouldn't have to touch the timeout if it works on his computer and has this entire time.
Update:
The GET call to the database on the same table is working fine, but the inserts into the table cause timeouts.
Update 2:
After the below comment, I took the query directly to SQL Server and it takes 40 seconds on the server. It is a simple insert statement. On another database, it works fine.
Fixed:
I had to rebuild the indexes on the table that was seeing the slowness in SQL Server.
Went to SSMS -> Databases -> problematic table -> Indexes -> Rebuild indexes.
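The equivalent, if you prefer to script the rebuild rather than use the SSMS menu, is roughly this (the table name is a placeholder):

-- rebuild every index on the slow table ('dbo.MyTable' is a placeholder)
ALTER INDEX ALL ON dbo.MyTable REBUILD;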

LINQ error: Service Broker message delivery is not enabled in this database

We have a C# LINQ program that writes to and reads from a database. We recently moved the database to a different server. I changed the app.config to point to this new server and since then I am getting this error when I do a write to a table. Specifically, the error occurs on db.SubmitChanges():
"Service Broker message delivery is not enabled in this database. Use the ALTER DATABASE statement to enable Service Broker message delivery."
The same program works fine on the other server with Service Broker message delivery disabled, and this is a simple insert into a table. I tried inserting into another test table and that works fine. I can't seem to find a pattern as to when the error occurs either.
You probably already know this stuff from the comment you exchanged with Gert. But...
There is a database option that switches on or off the service broker mechanism.
From the master database, you can check what the setting is for your database (in the same instance):
USE master;
-- Check if it is enabled
SELECT D.is_broker_enabled
FROM sys.databases D
WHERE D.name = 'YourDatabaseName' ;
-- Enable it
ALTER DATABASE YourDatabaseName
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE ;
GO
I usually add this to the beginning of the Service Broker application setup script:
USE master;
IF NOT EXISTS
(
SELECT D.is_broker_enabled
FROM sys.databases D
WHERE
D.name = 'YourDatabaseName'
AND D.is_broker_enabled = 1
)
ALTER DATABASE YourDatabaseName
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE ;
GO
That way I can make sure it gets enabled. But you should probably talk to your DBA, because Service Broker probably eats up some resources in the database engine just by being enabled.

C# / SQL Server 2008 INSERT int value grows automatically to 1000000

I had a strange problem recently that only occurred one time in SQL Server 2008.
I work on a .NET web application (C#) and use SqlCommand to access my database and execute queries. My process is the following:
I have a view that gets me the maximum number existing in a specific table:
SELECT MAX(number) as MaxNumber
FROM MyTable
I read this MaxNumber into a variable and, with this variable, I execute an insert into MyTable with MaxNumber + 1. That way, I always have the maximum number logged in MyTable; the pattern looks roughly like the sketch below.
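In T-SQL terms the pattern is roughly this (the view and column names here are just placeholders):

DECLARE @MaxNumber INT;

-- read the current maximum through the view shown above
SELECT @MaxNumber = MaxNumber FROM MyMaxNumberView;

-- insert the next number (other columns omitted)
INSERT INTO MyTable (number) VALUES (@MaxNumber + 1);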
It worked well until that one time, a week ago, when suddenly I saw the MaxNumber jump from 134200 to 1000000!
I investigated my code and there is no way it could be the reason for that behavior. I also inspected the web server logs; no bad insert was logged there.
I also looked into the SQL Server logs and found no errors...
What is suspicious is that the number jumped from a "common" number (134200) to a "specific" number (1000000). Why 1000000? Why not 984216 or 1000256?
Has anyone experienced the same problem?
Thanks for your help.
EDIT - 2014-12-23:
I analyzed the problem further and it seems that it occurred when I restored a backup in my PreProd environment.
To explain: I have a PreProd server with a SQL Server instance (PreProd) and a Prod server with a SQL Server instance (Prod), which is backed up every day on that same server.
When I want to test with real data, I restore the Prod backups onto my PreProd databases:
RESTORE DATABASE PreProd
FROM DISK = '\\Prod\Backup\SQL\Prod.bak'
WITH MOVE 'Prod' TO 'E:\Bases\Data\PreProd.mdf',
MOVE 'Prod_log' TO 'E:\Bases\Data\PreProd.ldf',
REPLACE
The problem occurred the same day I restored my backup. The "1000000 row" appeared at the same moment as my restore, on my Prod database. Is there any possibility that they're linked? Was the Prod server overwhelmed by the restore command executed from my PreProd server, so that it eventually broke an insert request that occurred at the same moment?
Thanks for your advice.
The only thing I can think of is that maybe you are getting the max value with the ExecuteScalar method without casting the result to a proper data type:
var max = cmd.ExecuteScalar();
and then
max = max + 1;
Otherwise, I saw that with your version of SQL Server you may receive incorrect values when using SCOPE_IDENTITY() and @@IDENTITY.
Refer here for the bug fix; you should update to SQL Server 2008 R2 Service Pack 1.
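If you cannot apply the service pack straight away, the workaround usually suggested for that bug is to read the generated value with an OUTPUT clause instead of SCOPE_IDENTITY(); a minimal sketch, with a hypothetical table that has an IDENTITY column:

DECLARE @NewIds TABLE (Id INT);

INSERT INTO SomeTable (SomeColumn)
OUTPUT inserted.Id INTO @NewIds (Id)
VALUES ('some value');

SELECT Id FROM @NewIds; -- reliable even under parallel plans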

I can exec my stored procedure in SQL Server Management Studio but webservice can not

I have a web service method that executes a tabular stored procedure. This SP takes 1 minute to execute completely. I call that web service method remotely and get results.
In normal situations everything is OK and I get results successfully.
But when the server is busy, the web service cannot execute that SP (I traced it in SQL Profiler and nothing shows up), but I can execute the SP manually in SQL Server Management Studio.
When I restart SQL Server, the problem is solved and the web service can execute the SP again.
Why, in busy situations, can the web service not execute the SP while I can do it in SQL Server Management Studio?
How can this situation be explained? How can I solve it?
Execute sp_who and see what is happening; my guess is that it is being blocked - perhaps your "isolation level" is different between SSMS and the web service.
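If you want a bit more detail than sp_who gives you, something like this (a sketch using the standard DMVs) shows who is blocking whom and what they are running:

EXEC sp_who2; -- the BlkBy column shows the blocking session, if any

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;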
Equally, it could well be that the connection's SET options are different between SSMS and the web service, which can lead to certain changes in behavior - for example, computed, persisted and indexed columns are very susceptible to SET options: if the caller's options aren't compatible with the options that were set when the column was created, then it can be forced to table-scan them, recalculating them all, instead of using the pre-indexed values. This also applies to hoisted xml values.
A final consideration is parameter sniffing: if the cached plan gets generated for yourproc 'abc', which has very different stats than yourproc 'def', then it can run very bad query plans. The OPTIMIZE FOR / OPTIMIZE FOR UNKNOWN hint can help with this.
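For reference, the hint goes on the statement inside the procedure; a sketch with a hypothetical procedure and table:

CREATE PROCEDURE dbo.yourproc (@name VARCHAR(50))
AS
BEGIN
    SELECT *
    FROM dbo.SomeTable
    WHERE Name = @name
    OPTION (OPTIMIZE FOR UNKNOWN); -- build the plan for "average" statistics rather than the first sniffed value
END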

Distributed Transaction Error Only Through Code

I am trying to perform a query against a linked server (SQL Server 2008 linked to Sybase) and select the results into a temp table. It works perfectly through a query window in SQL Server Management Studio, but when I do it through code (C#) it fails with the error "The operation could not be performed because OLE DB provider "ASEOLEDB" for linked server "MYLINKEDSERVER" was unable to begin a distributed transaction." I am not using a transaction in code with my DbConnection.
The query looks like this:
SELECT *
INTO #temptable
FROM OPENQUERY([MYLINKEDSERVER], 'SELECT * from table')
Found the issue. It was a result of connection pooling. It appears that connections were getting reused, causing the system to think there was a distributed transaction happening.
