C# / SQL Server 2008: INSERT int value jumps automatically to 1000000

I ran into a strange problem recently that occurred only once in SQL Server 2008.
I work on a .NET web application (C#) and use SqlCommand to access my database and execute queries. My process is the following:
I have a view that gets me the maximum number existing in a specific table:
SELECT MAX(number) as MaxNumber
FROM MyTable
I read this MaxNumber into a variable and, with it, execute an INSERT into MyTable with MaxNumber + 1. That way, I always have the maximum number logged in MyTable.
It had worked well until one time, a week ago, when the MaxNumber suddenly jumped from 134200 to 1000000!
I investigated my code and there is no way it could be the cause of that behavior. I also inspected the web server logs: no trace of a failed INSERT.
I also looked into the SQL Server logs and found no errors...
What is suspicious is that the number jumped from a "common" number (134200) to a "specific" number (1000000). Why 1000000? Why not 984216 or 1000256?
Has anyone experienced the same problem?
Thanks for your help.
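For reference, the read-then-insert pattern described above boils down to something like the following (MyTable and its number column are from the question; the other column names are made up):

```sql
-- The view in the question wraps this query:
SELECT MAX(number) AS MaxNumber
FROM MyTable;

-- The application then inserts MaxNumber + 1 back, e.g.:
INSERT INTO MyTable (number, created_at)
VALUES (@MaxNumber + 1, GETDATE());
```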
EDIT - 2014-12-23:
I analyzed the problem further and it seems it occurred when I restored a backup in my PreProd environment.
To explain: I have a PreProd server hosting a SQL Server instance (PreProd) and a Prod server hosting another SQL Server instance (Prod), which is backed up every day on that same server.
When I want to test with real data, I restore the Prod backups onto my PreProd databases:
RESTORE DATABASE PreProd
FROM DISK = '\\Prod\Backup\SQL\Prod.bak'
WITH MOVE 'Prod' TO 'E:\Bases\Data\PreProd.mdf',
MOVE 'Prod_log' TO 'E:\Bases\Data\PreProd.ldf',
REPLACE
The problem occurred the same day I restored my backup. The "1000000 row" appeared at the same moment as my restore, in my Prod database. Is there any chance it's linked? Was the Prod server overwhelmed by the restore command executed from my PreProd server, so that it eventually broke an INSERT request happening at the same moment?
Thanks for your advice.

The only thing I can think of is that maybe you are getting the max value with the ExecuteScalar method without casting the result to a proper data type:
var max = cmd.ExecuteScalar();
and then
max = max + 1;
ExecuteScalar returns object, so cast the result explicitly, e.g. int max = (int)cmd.ExecuteScalar();
Otherwise, I saw that with your version of SQL Server you may receive incorrect values when using SCOPE_IDENTITY() and @@IDENTITY.
Refer here for the bug fix; you should update to SQL Server 2008 R2 Service Pack 1.

Related

EF 6 | Timeout error on inserting into database, but works fine on another computer

I'm working with a friend on a project. When I run the code on my system, I receive this error when I try to create a record in the SQL Server database.
When he runs it on his system, it updates without any issues.
If I add this code to the context startup it works for me, but takes time!
public FileContext()
    : base("name=FileContext")
{
    var adapter = (IObjectContextAdapter)this;
    var objectContext = adapter.ObjectContext;
    objectContext.CommandTimeout = 1 * 60; // value in seconds
}
What is set wrong that would cause this?
We shouldn't have to touch the timeout if it works on his computer, and has for this entire time.
Update:
The GET call to the database on the same table is working fine, but the inserts into the table cause timeouts.
Update 2:
After the comment below, I ran the query directly against SQL Server and it takes 40 seconds on the server. It is a simple INSERT statement. On another database, it works fine.
Fixed:
I had to rebuild the indexes on the table that was seeing the slowness in SQL Server.
Went to SSMS -> Databases -> Tables -> the problematic table -> Indexes -> Rebuild indexes
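For reference, the same fix can be done in T-SQL; a minimal sketch, assuming the slow table is dbo.Files (a hypothetical name, substitute your own):

```sql
-- Rebuild every index on the table that was timing out
ALTER INDEX ALL ON dbo.Files REBUILD;

-- A lighter-weight first attempt is often just refreshing statistics
UPDATE STATISTICS dbo.Files;
```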

Deploying SQL DB To Azure Timeout Enabling Indexes

We have a project hosted in Azure, with our SQL databases in an elastic pool. The database is generated from Code First migrations in the .NET Framework, and we load the data into it with some SQL import scripts.
The issue comes when deploying the database to Azure. We have tried using SQL Server Management Studio both on a dev machine and on the SQL server itself. We have attempted to push the database with "Deploy Database to Microsoft SQL Azure", and also to connect directly to Azure and use "Import Data-tier Application" with a BACPAC. Our server version is SQL Server 2014 (12.0.4439.1).
During the deployment process everything seems to go very quickly: the schema is created and data is loaded into the tables, but it hangs on "Enabling Indexes" for most of the process, and at about an hour the entire process times out and fails. I receive 3 errors: the first is error 0, something very cryptic about database files targeting SQL Server 2014 having known compatibility issues with SQL Azure v12; another is a generic message that the process timed out; the final error is:
Could not import package.
Error SQL72016: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated.
Error SQL72045: Script execution error. The executed script:
Bulk data is loaded
After doing some research I saw a few others complaining that, if the Basic tier is used, there are not enough DTUs for the process to finish within the time limit. We are using the Standard S1 tier for our database and tried upping it to S2 for the import as a test. While things seemed to happen faster, we had the same issue. The database is 1.6 GB and has 2 large tables with ~1 million rows each; I do not see this as such a large database that the process should fail.
Not a direct solution to the problem, but I have experienced this while testing large database migrations in SSMS. Extending the available resources does seem to help, but that means restarting the deployment process.
Rebuilding all the indexes should be the last operation of the migration, so to avoid redoing everything you can perform this step yourself with something like:
DECLARE C CURSOR FAST_FORWARD FOR
SELECT so.name, si.name
FROM sys.indexes si JOIN sys.objects so ON so.object_id = si.object_id
WHERE si.is_disabled = 1
ORDER BY CASE WHEN si.type_desc = 'CLUSTERED' THEN 0 ELSE 1 END, si.name; -- clustered first, as a disabled clustered index will stop the others rebuilding

DECLARE @tbl NVARCHAR(MAX), @idx NVARCHAR(MAX), @sql NVARCHAR(MAX);
OPEN C;
FETCH NEXT FROM C INTO @tbl, @idx;
WHILE @@FETCH_STATUS = 0 BEGIN
    SET @sql = 'ALTER INDEX ['+@idx+'] ON ['+@tbl+'] REBUILD;';
    EXEC (@sql);
    FETCH NEXT FROM C INTO @tbl, @idx;
END
CLOSE C;
DEALLOCATE C;
I found the timeout errors occurred with rebuilds that would on their own take more than 5 minutes. I've not looked at whether this per-operation timeout can be configured in the migration process.
Note that the above assumes all tables are in the dbo schema; if that is not the case, add a join to sys.schemas in the cursor definition and use it to prepend the appropriate schema name to the table names.
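A sketch of that cursor definition with the sys.schemas join added (same logic as before, just schema-qualified; @sch is the extra variable you would fetch into):

```sql
DECLARE C CURSOR FAST_FORWARD FOR
SELECT ss.name, so.name, si.name
FROM sys.indexes si
JOIN sys.objects so ON so.object_id = si.object_id
JOIN sys.schemas ss ON ss.schema_id = so.schema_id
WHERE si.is_disabled = 1
ORDER BY CASE WHEN si.type_desc = 'CLUSTERED' THEN 0 ELSE 1 END, si.name;

-- ...fetch into @sch, @tbl, @idx and build the statement as:
-- SET @sql = 'ALTER INDEX ['+@idx+'] ON ['+@sch+'].['+@tbl+'] REBUILD;';
```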
In the comments Kevin suggests triggers may also be disabled at this point, though this was not the case on the occasions I encountered this situation. Just in case, add the following after the EXEC (@sql); in the example above:
SET @sql = 'ALTER TABLE ['+@tbl+'] ENABLE TRIGGER ALL;';
EXEC (@sql);
If the triggers are already active then this is essentially a no-op, so it will do no harm. Note that this will not help if the triggers are not yet defined, but they should be by the time the indexes are being rebuilt, as that is part of the schema build earlier in the process.
SQL DB is designed so that you can scale up for big operations and scale back down when the workload is quieter. If you're still receiving errors, why not scale up to an S3 for an hour or two (SQL DB bills in hourly increments), get the import done, and scale back down?

I can exec my stored procedure in SQL Server Management Studio but webservice can not

I have a web service method that executes a tabular stored procedure. This sp takes 1 minute to execute completely. I call that web service method remotely and get results.
In normal situations everything is OK and I get results successfully.
But when the server is busy, the web service cannot execute the sp (I watched in SQL Profiler and nothing reaches the profiler), yet I can execute the same sp manually in SQL Server Management Studio.
When I restart SQL Server, the problem is solved and the web service can execute the sp again.
Why, in busy situations, can the web service not execute the sp while I can from SQL Server Management Studio?
How can this situation be explained? How can I solve it?
Execute sp_who and see what is happening; my guess is that it is being blocked. Perhaps your isolation level is different between SSMS and the web service.
Equally, it could well be that the connection's SET options are different between SSMS and the web service, which can lead to certain changes in behavior. For example, computed, stored, indexed values are very susceptible to SET options: if the caller's options aren't compatible with the options in effect when the column was created, the server can be forced to table-scan and recalculate them all instead of using the pre-indexed values. This also applies to hoisted xml values.
A final consideration is parameter sniffing: if the plan cache gets populated for yourproc 'abc', which has very different stats than yourproc 'def', it can run very bad query plans. The OPTIMIZE FOR / OPTIMIZE FOR UNKNOWN hint can help with this.
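A minimal sketch of that hint inside a procedure (yourproc is the hypothetical name from above; the table and parameter are made up; OPTIMIZE FOR UNKNOWN requires SQL Server 2008 or later):

```sql
CREATE PROCEDURE yourproc
    @name VARCHAR(50)
AS
BEGIN
    SELECT *
    FROM dbo.YourTable              -- hypothetical table
    WHERE Name = @name
    OPTION (OPTIMIZE FOR UNKNOWN);  -- build the plan from average statistics,
                                    -- not from the first sniffed parameter value
END
```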

Inserting different data simultaneously from different clients

I created a Windows Forms application in C# with an MS SQL Server 2008 Express database, and I use LINQ to SQL to insert and edit data.
The database is hosted on a server running Windows Server 2008 R2 (Standard edition). Right now I have the application running on five different computers, and users are authenticated through Active Directory.
One complaint reported to me was that sometimes, when different data is entered and submitted, that data does not appear in the application's listing. I use a try/catch block to surface errors, but no errors appear in the application; the data simply disappears.
The id of the table records is an auto-increment integer. As I have to tell users the registration number that was entered, I use the following piece of code:
try
{
    ConectionDataContext db = new ConectionDataContext();
    Table_Registers tr = new Table_Registers();
    tr.Name = textbox1.Text;
    tr.sector = textbox2.Text;
    db.Table_Registers.InsertOnSubmit(tr);
    db.SubmitChanges();
    int numberRegister = tr.NumberRegister; // identity value assigned by SubmitChanges
    MessageBox.Show(numberRegister.ToString());
}
catch (Exception e)
{
    // TODO: log or display e.Message instead of swallowing it
}
If I'm doing something wrong, or if you know of any article on the web about inserting data from different clients into MS SQL Server databases, please let me know.
Thanks.
That's what a database server DOES: "insert data simultaneously from different clients".
One thing you can do is to consider "transactions":
http://www.sqlteam.com/article/introduction-to-transactions
Another thing you can (and should!) do is ensure as much work as possible is done on the server, by using stored procedures:
http://www.sql-server-performance.com/2003/stored-procedures-basics/
You should also check the SQL Server error logs, especially for potential deadlocks. You can see these in the SSMS GUI, or in the "LOG" directory under your SQL Server installation.
But the FIRST thing you need to do is determine exactly what's going on. Since you've only got MSSQL Express (which is not a good choice for production use!), perhaps the easiest approach is to create a "log" table: insert an entry in the log every time you insert a row in the real table, and see if anything is missing (i.e. you have more entries in the log table than in the data table).
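A minimal sketch of such a log table, fed by an AFTER INSERT trigger so the application code doesn't need changing (Table_Registers and NumberRegister come from the question's code; the log table and trigger names are made up):

```sql
CREATE TABLE dbo.InsertLog (
    LogId      INT IDENTITY(1,1) PRIMARY KEY,
    RegisterId INT NOT NULL,
    LoggedAt   DATETIME NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_LogInsert ON dbo.Table_Registers
AFTER INSERT
AS
BEGIN
    -- record every row the application manages to insert
    INSERT INTO dbo.InsertLog (RegisterId)
    SELECT NumberRegister FROM inserted;
END
```

Comparing counts between InsertLog and Table_Registers then tells you whether rows are being lost before or after the INSERT reaches the server.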

Poor man's SQL pipeline service for SQL express 2008 R2

I have a basic/simple need to create a pipeline transfer process from one SQL Server 2008 Express database to another server (also SQL Server 2008 Express).
Basically:
I have one table on SERVER A which has data coming in, plus a field called 'downloaded' which defaults to 'N'.
I have the same table schema on SERVER B.
On a timed basis (say every 10 minutes), I need to get all records from SERVER A where the 'downloaded' field is 'N' and copy each whole record to SERVER B.
As each record from SERVER A is read and successfully copied to SERVER B, I set its 'downloaded' flag to 'Y' (with a timestamp field too).
From old memories, I used DTS (now SSIS, I guess) to do something similar... but of course SQL Express doesn't have that loveliness!
Question:
Is it just a case of using a SqlDataReader to get data from SERVER A and manually running an INSERT statement against SERVER B (or a proc, of course)? Any other slick ways?
Thanks for all comments...
Oh, don't use flags! They are not good for indexing.
Add two columns to both source and target tables:
dt_created
dt_modified
Add an index on each one.
From your target database, select from the source table where dt_created > MAX(dt_created) in the target table. Those are your new records.
Do the same for dt_modified, and those are your modified records. See! Poor man's replication.
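A sketch of that pull, run from the target server with the source reachable as a linked server (SourceServer, SourceDb, dbo.MyTable, and the columns are all hypothetical names):

```sql
-- High-water mark: newest row we already have (fall back to a floor date for an empty table)
DECLARE @last DATETIME =
    ISNULL((SELECT MAX(dt_created) FROM dbo.MyTable), '19000101');

-- Pull only the new records from the source
INSERT INTO dbo.MyTable (col1, col2, dt_created, dt_modified)
SELECT col1, col2, dt_created, dt_modified
FROM SourceServer.SourceDb.dbo.MyTable
WHERE dt_created > @last;

-- Repeat the same pattern against dt_modified to pick up updated records.
```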
Well, how about MySQL with replication? Cheap and slick :-)
But I'm afraid it's too late to change the DB...
