I have a strange issue. I deployed my application to production and the application is working fine. I also uploaded the database with some local data. After one day, the database IDs are incremented by 10000. I don't know why or how. See the picture below: the blue rows are from the local environment and the yellow ones are from production.
Moreover, when I connect to the production server and try to open any table for editing, I get an error:
This backend version is not supported to design database diagrams or tables. (MS Visual Database Tools)
Is this increment somehow connected to this error?
It seems Microsoft considers this a "feature" and not a bug. According to the site I found explaining it, SQL Server will do this when the server is restarted (perhaps to prevent duplicate IDs?) and the amount of the increment depends on the data type of the column, so an INT field will jump by 1,000 and a BIGINT will jump by 10,000.
Read here for more information, but essentially if you care about the value of your identity column then don't use an identity column and instead create a sequence of numbers.
http://www.codeproject.com/Tips/668042/SQL-Server-2012-Auto-Identity-Column-Value-Jump-Is
SQL Server 2012 has a new identity implementation that is shared with SEQUENCE. This unfortunate behavior is intentional.
Enable trace flag 272 on startup.
Use trace flag 272. This will cause a log record to be generated for each generated identity value. The performance of identity generation may be impacted by turning on this trace flag.
Use a sequence generator with the NO CACHE setting (http://msdn.microsoft.com/en-us/library/ff878091.aspx). This will cause a log record to be generated for each generated sequence value. Note that the performance of sequence value generation may be impacted by using NO CACHE.
Don't rely on identity values to be consecutive. Still, these large jumps are annoying in many small ways.
Related
In one of my tables, Fee, the identity increment of the "ReceiptNo" column in a SQL Server 2012 database suddenly started jumping by 100s instead of 1, depending on the following two things.
If it is 1205446 it jumps to 1206306, if it is 1206321 it jumps to 1207306, and if it is 1207314 it jumps to 1208306. What I want you to note is that the last three digits remain constant, i.e. 306, whenever the jumping occurs, as shown in the following picture.
This problem occurs when I restart my computer.
You are encountering this behaviour due to a performance improvement since SQL Server 2012.
It now uses a cache size of 1,000 by default when allocating IDENTITY values for an int column, and restarting the service can "lose" unused values (the cache size is 10,000 for bigint/numeric).
This is mentioned in the documentation:
SQL Server might cache identity values for performance reasons and
some of the assigned values can be lost during a database failure or
server restart. This can result in gaps in the identity value upon
insert. If gaps are not acceptable then the application should use its
own mechanism to generate key values. Using a sequence generator with
the NOCACHE option can limit the gaps to transactions that are never
committed.
From the data you have shown, it looks like this happened after the data entry for 22 December; when SQL Server restarted, it reserved the values 1206306 - 1207305. After the data entry for 24 - 25 December there was another restart, and SQL Server reserved the next range, 1207306 - 1208305, which is visible in the entries for the 28th.
Unless you are restarting the service with unusual frequency, any "lost" values are unlikely to make any significant dent in the range of values allowed by the datatype, so the best policy is not to worry about it.
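If you want to see how far the cached allocation has moved ahead of the rows actually stored, a quick check is to compare the current identity value with the current column value (a small sketch; the table name dbo.Fee is taken from the question):
-- Reports the current identity value and the current maximum column value
-- without changing anything.
DBCC CHECKIDENT ('dbo.Fee', NORESEED);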
If this is for some reason a real issue for you, some possible workarounds are...
You can use a SEQUENCE instead of an identity column, define a smaller cache size, and use NEXT VALUE FOR in a column default (see the sketch after this list).
Or apply trace flag 272 which makes the IDENTITY allocation logged as in versions up to 2008 R2. This applies globally to all databases.
Or, for recent versions, execute ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF to disable the identity caching for a specific database.
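Here is a minimal sketch of the SEQUENCE workaround; the sequence name, table name, and starting value are made up for illustration, and you could specify NO CACHE instead of a small cache if you want every allocation logged:
CREATE SEQUENCE dbo.FeeReceiptNo
    AS int
    START WITH 1208001      -- pick a value above the current maximum ReceiptNo
    INCREMENT BY 1
    CACHE 10;               -- a small cache limits how many values a restart can lose
GO
CREATE TABLE dbo.FeeNew
(
    ReceiptNo int NOT NULL
        CONSTRAINT DF_FeeNew_ReceiptNo DEFAULT (NEXT VALUE FOR dbo.FeeReceiptNo),
    Amount    decimal(10, 2) NOT NULL
);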
You should be aware that none of these workarounds assures no gaps. Gaplessness has never been guaranteed by IDENTITY, as it would only be possible by serializing inserts to the table. If you need a gapless column you will need to use a different solution than either IDENTITY or SEQUENCE.
This problem occurs after restarting SQL Server.
The solution is:
Run SQL Server Configuration Manager.
Select SQL Server Services.
Right-click SQL Server and select Properties.
In the window that opens, under Startup Parameters, type -T272 and click Add, then press the Apply button and restart the service.
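After the restart you can confirm that the flag is active (a quick check, assuming you have permission to run DBCC commands):
-- Shows the status of trace flag 272 globally (-1 = all sessions).
DBCC TRACESTATUS (272, -1);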
From SQL Server 2017 onwards you can use ALTER DATABASE SCOPED CONFIGURATION:
IDENTITY_CACHE = { ON | OFF }
Enables or disables identity cache at the database level. The default
is ON. Identity caching is used to improve INSERT performance on
tables with Identity columns. To avoid gaps in the values of the
Identity column in cases where the server restarts unexpectedly or
fails over to a secondary server, disable the IDENTITY_CACHE option.
This option is similar to the existing SQL Server Trace Flag 272,
except that it can be set at the database level rather than only at
the server level.
(...)
G. Set IDENTITY_CACHE
This example disables the identity cache.
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;
I know my answer might be late to the party, but I solved it in another way, by adding a startup stored procedure in SQL Server 2012.
Create the following stored procedure in the master DB.
USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[ResetTableNameIdentityAfterRestart]
AS
BEGIN
    BEGIN TRAN
        DECLARE @id int = 0
        -- Find the highest value currently stored in the table...
        SELECT @id = MAX(id) FROM [DatabaseName].dbo.[TableName]
        -- PRINT @id
        -- ...and reseed the identity so the next insert continues from it.
        DBCC CHECKIDENT ('[DatabaseName].dbo.[TableName]', RESEED, @id)
    COMMIT
END
Then add it to startup by using the following syntax.
EXEC sp_procoption 'ResetTableNameIdentityAfterRestart', 'startup', 'on';
This is a good idea if you have a few tables, but if you have to do it for many tables, this method still works yet is not a good idea.
This is still a very common issue among many developers and applications regardless of size.
Unfortunately the suggestions above do not fix all scenarios, e.g. shared hosting, where you cannot rely on your host to set the -T272 startup parameter.
Also, if you have existing tables that use these identity columns for primary keys, it is a HUGE effort to drop those columns and recreate new ones to use the BS sequence workaround. The Sequence workaround is only good if you are designing the tables new from scratch in SQL 2012+
Bottom line is, if you are on Sql Server 2008R2, then STAY ON IT. Seriously, stay on it. Until Microsoft admits that they introduced a HUGE bug, which is still there even in Sql Server 2016, then we should not upgrade until they own it and FIX IT.
Microsoft straight up introduced a breaking change, i.e. they broke a working API that no longer works as designed, due to the fact that their system forgets their current identity on a restart. Cache or no cache, this is unacceptable, and the Microsoft developer by the name of Bryan needs to own it, instead of telling the world that it is "by design" and a "feature". Sure, the caching is a feature, but losing track of what the next identity should be, IS NOT A FEATURE. It's a fricken BUG!!!
I will share the workaround that I used, because my DBs are on shared hosting servers; also, I am not dropping and recreating my primary key columns, as that would be a huge PITA.
Instead, this is my shameful hack (but not as shameful as this POS bug that Microsoft has introduced).
Hack/Fix:
Before your insert commands, just reseed your identity column. This fix is only recommended if you don't have admin control over your SQL Server instance; otherwise I suggest reseeding on restart of the server.
declare @newId int -- where int is the datatype of your PKey or Id column
select @newId = max(YourBuggedIdColumn) from YOUR_TABLE_NAME
DBCC CHECKIDENT('YOUR_TABLE_NAME', RESEED, @newId)
Just those 3 lines immediately before your insert, and you should be good to go. It really won't affect performance that much, i.e. it will be unnoticeable.
Good luck.
There are many possible reasons for jumping identity values. They range from rolled back inserts to identity management for replication. What is causing this in your case I can't tell without spending some time in your system.
You should know, however, that in no case can you assume an identity column to be contiguous. There are just too many things that can cause gaps.
You can find a little more information about this here: http://sqlity.net/en/792/the-gap-in-the-identity-value-sequence/
I am working on enhancing an application, using Microsoft .NET Core Identity for IAM [Microsoft.AspNetCore.Identity.EntityFrameworkCore].
The database is Microsoft SQL Server. The database setup is pretty much standard except that "NUMERIC_ROUNDABORT" will be turned ON.
But when we start the application we are getting the below exception.
System.Data.SqlClient.SqlException:
'CREATE INDEX failed because the following SET options have incorrect settings: 'NUMERIC_ROUNDABORT'.
Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type
Working scenarios:
Scenario 1: When "NUMERIC_ROUNDABORT" is turned OFF at the server level, the application works fine and there are no issues.
Scenario 2: When I extract the table creation queries from the database and execute them with "NUMERIC_ROUNDABORT" turned ON, everything still works.
Scenario 3: When "NUMERIC_ROUNDABORT" is turned OFF at the server level, the application is started for the first time (letting EF Core create all the tables) and the option is then turned ON, the app works from then on, as the necessary tables and indexes have already been created.
Non-working scenario:
When "NUMERIC_ROUNDABORT" is turned ON at the server level and we run the application for the first time, it throws the exception.
I understand the reason: the exception is expected when a loss of precision occurs in an expression, and NUMERIC_ROUNDABORT must be OFF when creating or changing indexes on computed columns or indexed views.
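For illustration, a minimal sketch of that requirement (the table and column names below are made up): with NUMERIC_ROUNDABORT ON the CREATE INDEX fails with the same error, and with it OFF the index is created.
CREATE TABLE dbo.DemoPrices
(
    Id    int IDENTITY(1,1) PRIMARY KEY,
    Price decimal(10, 2) NOT NULL,
    Qty   int NOT NULL,
    Total AS (Price * Qty) PERSISTED   -- computed column
);
GO
SET NUMERIC_ROUNDABORT OFF;   -- required (along with the usual ANSI options) to index a computed column
CREATE INDEX IX_DemoPrices_Total ON dbo.DemoPrices (Total);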
Could you please help me: can we fix this programmatically? Is that possible?
P.S.: Changing the setting in the database will not be a possible option [in my case].
Thank you all for your advice and help.
I have a desktop application receiving data from a webservice and storing it inside a local PostgreSQL database (while the webservice retrieves data from a SQL Server database). At the end of the process there will be a minimum of 2.5 million entries in a table in my local database, but this will be received from the webservice in batches of about 300 rows at a time, within a time frame of about 15 days.
What I need is a way to make sure that my local database has the exact same information the server's database has.
I'm thinking of creating some sort of checksum for each batch received and then, after all batches have been received, another checksum of the entire table, but I don't know if this is the best solution and, if it is, I don't know where to start in creating it.
P.S.: TCP already handles integrity checks, so I don't even know if this is needed, but it is critical that the data are the same.
I can see how a checksum could possibly be useful, but the amount of transformation you're doing would probably make it impractical. You'd have to derive the checksum on either the original form of the data or on the transformed form; it wouldn't be valid on both.
You have some strange constraints (been there myself), so it's kind of hard to come up with a clear strategy without knowing all the details. Maybe one of the following suggestions would work.
1. A simple count(*) on the SQL Server side and on the PostgreSQL side after the migration is complete.
2. Dump out a list of keys from the SQL Server side and from the PostgreSQL side after the migration is complete, and then sort and compare those files.
3. If 1 and 2 aren't possible because of limited access to SQL Server, maybe dump out the results of the web service calls to a single file location as you go along, and then extract the same data from PostgreSQL at the end, and compare those files.
There are numerous tools available for comparing files if you choose options 2 or 3.
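As a rough sketch of options 1 and 2 on the SQL Server side (the table and key column names are hypothetical; run the equivalent count and key export on the PostgreSQL side and compare the outputs):
-- Row count to compare against the PostgreSQL total.
SELECT COUNT(*) AS RowTotal FROM dbo.SourceTable;
-- Dump the keys in a deterministic order for a file comparison.
SELECT KeyColumn FROM dbo.SourceTable ORDER BY KeyColumn;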
Do you have control over the web service and the SQL Server DB? If you do, SQL Server Change Tracking should do the trick (see the MSDN Change Tracking documentation). It will track every change (or just the changes you care about) on a per-table basis. Each time you synchronize you just pass it your version number and it will return the changeset required to bring you up to date.
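A sketch of what enabling it looks like, assuming a database called SourceDb and a table called dbo.SourceTable with primary key KeyColumn (all names hypothetical):
ALTER DATABASE SourceDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 15 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.SourceTable ENABLE CHANGE_TRACKING;

-- Later, fetch everything changed since the version number stored at the last sync.
DECLARE @last_sync_version bigint = 0;   -- replace with the stored version
SELECT ct.KeyColumn, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.SourceTable, @last_sync_version) AS ct;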
I have developed a network application that has been in use in my company for the last few years.
At the start it managed information about users, rights, etc.
Over time it grew with other functionality, to the point that I have tables with, let's say, 10-20 columns and even 20,000 - 40,000 records.
I keep hearing that Access in not good for multi-user environments.
Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client.
It happens because there is no database engine on the server side and data filtering is done on the client side.
I would migrate this project to SQL Server, but unfortunately that cannot be done in this case.
I was wondering if there is a more reliable solution for me than using an Access database while still staying with a single-file database system.
We have quite a huge system using dBase IV.
As far as I know it is a fully multiuser database system.
Maybe it would be good to use it instead of Access?
What makes me not sure is the fact that dBase IV is much older than Access 2000.
I am not sure if it would be a good solution.
Maybe there are some other options?
If you're having problems with your Jet/ACE back end with the number of records you mentioned, it sounds like you have schema design problems or an inefficiently-structured application.
As I said in my comment to your original question, Jet does not retrieve full tables. This is a myth propagated by people who don't have a clue what they are talking about. If you have appropriate indexes, only the index pages will be requested from the file server (and then, only those pages needed to satisfy your criteria), and then the only data pages retrieved will be those that have the records that match the criteria in your request.
So, you should look at your indexing if you're seeing full table scans.
You don't mention your user population. If it's over 25 or so, you probably would benefit from upsizing your back end, especially if you're already comfortable with SQL Server.
But the problem you described for such tiny tables indicates a design error somewhere, either in your schema or in your application.
FWIW, I've had Access apps with Jet back ends with 100s of thousands of records in multiple tables, used by a dozen simultaneous users adding and updating records, and response time retrieving individual records and small data sets was nearly instantaneous (except for a few complex operations like checking newly entered records for duplication against existing data -- that's slower because it uses lots of LIKE comparisons and evaluation of expressions for comparison). What you're experiencing, while not an Access front end, is not commensurate with my long experience with Jet databases of all sizes.
You may wish to read this informative thread about Access: Is MS Access (JET) suitable for multiuser access?
For the record this answer is copied/edited from another question I answered.
Aristo,
You CAN use Access as your centralized data store.
It is simply NOT TRUE that access will choke in multi-user scenarios--at least up to 15-20 users.
It IS true that you need a good backup strategy with the Access data file. But last I checked you need a good backup strategy with SQL Server, too. (With the very important caveat that SQL Server can do "hot" backups but not Access.)
So...you CAN use access as your data store. Then if you can get beyond the company politics controlling your network, perhaps then you could begin moving toward upfitting your current application to use SQL Server.
I recently answered another question on how to split your database into two files. Here is the link.
Creating the Front End MDE
Splitting your database file into front end : back end is sort of a key to making it more performant. (Assume, as David Fenton mentioned, that you have a reasonably good design.)
If I may mention one last thing...it is ridiculous that your company won't give you other deployment options. Surely there is someone there with some power who you can get to "imagine life without your application." I am just wondering if you have more power than you might realize.
Seth
The problems you experience with an Access Database shared amongst your users will be the same with any file based database.
A read will pull a lot of data into memory and writes are guarded with some type of file lock. Under your environment it sounds like you are going to have to make the best of what you have.
"Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client. "
Actually no. This is a common misstatement spread by folks who do not understand the nature of how Jet, the database engine inside Access, works. Pulling down all the records, or excessive number of records, happens because you don't have all the fields used in the selection criteria or sorting in the index. We've also found that indexing yes/no aka boolean fields can also make a huge difference in some queries.
What really happens is that Jet brings down the index pages and data pages which are required. While this is a lot more data than a server database engine would return, it is not the entire table.
I also have clients with 600K and 800K records in various tables and performance is just fine.
We have an Access database application that is used pretty heavily. I have had 23 users on all at the same time before without any issues. As long as they don't access the same record then I don't have any problems.
I do have a couple of forms that are used and updated by several different departments. For instance I have a Quoting form that contains 13 different tabs and 10-20 fields on each tab. Users are typically in a single record for minutes editing and looking for information. To avoid any write conflicts I call the below function any time a field is changed. As long as it is not a new record being entered, then it updates.
Function funSaveTheRecord()
    If ([chkNewRecord].Value = False And Me.Dirty) Then
        'To save the record, turn off the form's Dirty property
        Me.Dirty = False
    End If
End Function
The way I have everything set up is as follows:
PDC.mdb <-- Front End, stored on the users' machines. Every user has their own copy. Links to tables found in PDC_be.mdb. Contains all forms, reports, queries, macros, and modules. I created a form that I can use to toggle on/off the shift key bypass. Only I have access to it.
PDC_be.mdb <-- Back End, stored on the server. Contains all data. The only form and VBA it contains is to toggle on/off the shift key bypass. Only I have access to it.
Secured.mdw <-- Security file, stored on the server.
Then I put a shortcut on the users desktop that ties the security file to the front end and also provides their login credentials.
This database has been running without error or corruption for over 6 years.
Access is not a flat file database system! It's a relational database system.
You can't use SQL Server Express?
Otherwise, MySQL is a good database.
But if you can't install ANYTHING (you should get into those politics sooner rather than later -- or it WILL be later), just use your existing database system.
Basically with Access, it cannot handle more than 5 people connected at the same time, or it will corrupt on you.