I'm trying to shrink a LocalDB with Visual Studio 2017 Community. I have a Win7 client Windows Forms application with a small database (~10MB of data) that results in a 150MB database file due to LocalDB free-space allocation.
I found this answer (Executing Shrink on SQL Server database using command from linq-to-sql), which suggests using the following code:
context.Database.ExecuteSqlCommand(
"DBCC SHRINKDATABASE(@file)",
new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried issuing a COMMIT beforehand, but with no success at all. Any idea how to effectively shrink the database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command."
This is what's causing your problem, as you cannot call DBCC SHRINKDATABASE inside a transaction, which isn't really surprising, given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
TransactionalBehavior.DoNotEnsureTransaction,
"DBCC SHRINKDATABASE(@file)",
new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
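One caveat worth verifying: DBCC SHRINKDATABASE expects a database name or id rather than a filesystem path, so the value supplied for @file may also need to change from DatabasePathName to the database name; DBCC SHRINKFILE is the per-file variant if a specific data file should be targeted.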
Related
I recently created a C# tool that uses the SMO classes to automate the refactoring and merging of SQL Server databases for migration into Azure.
The TransferData method successfully adheres to the BulkCopyTimeout during the data-copy phase; I proved this by extending it when it timed out.
When the transfer phase moves on to the CREATE INDEX statements, they appear to hit a timeout after 120 seconds (2 minutes) on a particularly large table.
The ServerConnection object has StatementTimeout and ConnectTimeout both set to 0 (as initial research suggested doing), to no avail.
Running a Profiler trace, I noticed the "Application Name" differs from the one originally set (MergeDB v1.8) while the bulk-copy and index-creation phases are running.
The original connection is still present, but it appears that the Transfer class spawns additional connections which, while appearing to pass on BulkCopyTimeout, fail to pass on the application name and (my hypothesis) the StatementTimeout property.
I'm using SMO v150.18131.0 connecting to SQL 2008 R2.
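For reference, a minimal sketch of how such a transfer might be wired up; the server, database, and connection-string details below are placeholders, not the poster's actual values:
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

// Connection-level settings; note that Transfer's own worker connections
// may not inherit StatementTimeout, which would match the behaviour above.
var conn = new ServerConnection("SourceServer")
{
    ApplicationName = "MergeDB v1.8",
    ConnectTimeout = 0,
    StatementTimeout = 0   // 0 = unlimited
};
var server = new Server(conn);
var transfer = new Transfer(server.Databases["SourceDb"])
{
    CopyData = true,
    BulkCopyTimeout = 0,                 // honoured during the data-copy phase
    DestinationServer = "TargetServer",
    DestinationDatabase = "MergedDb",
    DestinationLoginSecure = true
};
transfer.TransferData();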
I am trying to create a database, but once created, I cannot connect to it.
The server is Microsoft SQL Server 2008 and we're using .NET 4.5. We're creating the database with SMO, but we usually use Dapper to connect to and query the database.
This is the code I have so far, which works:
System.Data.SqlClient.SqlConnection con = new System.Data.SqlClient.SqlConnection(connectionString);
Microsoft.SqlServer.Management.Smo.Server srv = new Microsoft.SqlServer.Management.Smo.Server(new Microsoft.SqlServer.Management.Common.ServerConnection(con));
var database = new Microsoft.SqlServer.Management.Smo.Database(srv, dbName);
database.Create(false);
database.Roles["db_datareader"].AddMember(???);
database.Roles["db_datawriter"].AddMember(???);
database.Roles["db_backupoperator"].AddMember(???);
srv.Refresh();
Notice the ???. I have tried
System.Environment.UserDomainName + "\\" + System.Environment.UserName
and
System.Environment.UserName
but it fails with both values (update: with the error Add member failed for DatabaseRole 'db_datareader'.).
The problem is that when I create the database, I cannot connect to it for some reason (using Dapper) from the same program. (Update) I get the error message: Cannot open database "<database_name>" requested by the login. The login failed. Login failed for user '<domain>\<username>' (where <database_name> is the database name, <domain> my logon domain, and <username> my Windows logon).
Am I missing something? Am I doing the right thing? I've tried searching the web, but it seems no one creates databases this way. The methods are there, it should work, no?
**Update**
If I comment out the database.Roles["..."].AddMember(...) lines and add a breakpoint at srv.Refresh(), resuming the program from there solves everything.
Why does a breakpoint solve everything? I can't just break the program in production, nor break it every time a database is created.
It sounds like the Dapper connection issue is a problem with SQL Server performing some of the SMO operations asynchronously. In all likelihood, the new database is not ready for other users/connections immediately, but requires some small amount of time for SQL Server to prepare it. In "human time" (in SSMS, or at a breakpoint) this isn't noticeable, but in "program time" it's too fast, so you probably need to give it a pause.
This may also be the problem with the role's AddMember, but there are a number of things that could be wrong here, and we do not have enough information to tell (specifically: does AddMember work later on? And are the strings being passed correct or not?).
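A minimal sketch of such a pause, assuming a simple retry loop around the first connection; the retry count and delay are arbitrary:
using System.Data.SqlClient;
using System.Threading;

// Retry opening a connection until the freshly created database is ready.
static void WaitUntilDatabaseIsReady(string connectionString)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            using (var con = new SqlConnection(connectionString))
            {
                con.Open();   // throws while the database is still being prepared
                return;
            }
        }
        catch (SqlException) when (attempt < 10)
        {
            Thread.Sleep(500);   // give SQL Server a moment, then try again
        }
    }
}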
This is happening because you've created the user, but no login for that user. Though I don't know the exact syntax, you're going to have to create a Login. You'll want to set its LoginType to LoginType.WindowsUser. Further, you'll likely need to set the WindowsLoginAccessType to WindowsLoginAccessType.Grant and you'll need to set the Credential by building one, probably a NetworkCredential with the user name you want.
To put a visual on this, the Login is under the Security node for the Server in Management Studio whereas the User is under the Security node for the Database. Both need to exist for access to the SQL Server.
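Building on that, a rough sketch using SMO's Login and User classes (untested, and as the answer says, the exact syntax is partly an assumption; srv and database are the variables from the question's code):
using Microsoft.SqlServer.Management.Smo;

string account = System.Environment.UserDomainName + "\\" + System.Environment.UserName;

// Server-level login (under the server's Security node in SSMS).
// The answer also mentions WindowsLoginAccessType.Grant; granted access
// is typically the default for a newly created Windows login.
var login = new Login(srv, account)
{
    LoginType = LoginType.WindowsUser
};
login.Create();

// Database-level user mapped to that login (under the database's Security node).
var user = new User(database, account) { Login = account };
user.Create();

// Role membership should now succeed, since the user exists in the database.
database.Roles["db_datareader"].AddMember(account);
database.Roles["db_datawriter"].AddMember(account);
database.Roles["db_backupoperator"].AddMember(account);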
I'm updating a program that is working and in use in a live environment. It saves customers and orders, then also exports them to an old database. All of the reporting is still done in the old system while the new system's reporting is in development, which is why these all need to be exported.
This program has a built-in C# TransactionManager that is used to group multiple calls from C# to SQL within one transaction. Whenever I try to duplicate this, I get errors and can't get it working.
Here's the code that is in place, working:
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
//
// Update the customer. If the customer doesn't exist, then create a new one.
//
this.SaveCustomer(Order);
//
// Save the Order.
//
this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);
//
// Save the Order notes and the customer notes.
//
this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);
//
// Export the Order if it's new.
//
this.ExportOrder(Order, lastSavedVersion);
//
// Commit the transaction.
//
trx.Commit();
}
All of these functions just format the data and send parameters to Stored Procedures in the DB that perform the Select / Insert / Update operations on the DB.
The SaveCustomer stored procedure saves the customer to the new database.
The SaveCorporateOrder stored procedure gets information that was written by the SaveCustomer stored procedure and uses it to save the Order to the new database.
The ExportOrder stored procedure gets information that was written by both of the previous ones and exports the Order to the old database.
Each of these stored procedures contains code that starts a new transaction if @@TRANCOUNT = 0, with a COMMIT statement at the end. It appears that none of these are actually used because of the transaction in C#, yet I can see no code that passes transaction or connection information to the stored procedures. This is working and in use on a SQL 2005 server.
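The guard described looks roughly like this; since the question is about the C# side, the T-SQL shape is shown embedded as a C# constant, and the proc bodies themselves are placeholders, not the actual procedures:
// Shape of the @@TRANCOUNT guard described above; the real proc bodies
// aren't shown in the question.
const string TranCountGuardPattern = @"
    DECLARE @startedTran bit = 0;
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;
        SET @startedTran = 1;
    END

    -- ... the procedure's SELECT/INSERT/UPDATE work goes here ...

    IF @startedTran = 1
        COMMIT TRANSACTION;";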
When I try to build this and use it in my development environment, which uses SQL 2008 R2, I get errors like
"Uncommittable transaction is detected at the end of the batch"
and
"The server failed to resume the transaction"
It appears that each one is starting its own transaction and is unable to read the data from the previous, uncommitted transaction, instead of seeing that it is in the same transaction. I don't know whether the different SQL Server version could cause this to behave differently, but the exact same code works in the live install and not in my dev environment.
Any ideas, or even direction on where to look next, would be wonderful!
Thanks!
-Jacob
I think the problem is that the transaction fails and is never rolled back. You don't have a rollback call for the situation where any of the SQL queries fail. Have you checked those queries?
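For example, a minimal sketch assuming ITransactionInfo exposes a Rollback method (the snippet above doesn't show the interface, so adjust to your TransactionManager):
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    try
    {
        this.SaveCustomer(Order);
        this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);
        this.ExportOrder(Order, lastSavedVersion);
        trx.Commit();
    }
    catch
    {
        trx.Rollback();   // assumed API: undo everything on failure
        throw;
    }
}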
I have a webservice method that executes a tabular stored procedure. This sp takes about 1 minute to execute completely. I call that webservice method remotely and get results.
In normal situations everything is OK and I get results successfully.
But when the server is busy, the webservice cannot execute that sp (I watched in SQL Profiler and nothing shows up), yet I can execute the same sp manually in SQL Server Management Studio.
When I restart SQL Server, the problem is solved and the webservice can execute sp.
Why can the webservice not execute the sp in busy situations, while I can still do it from SQL Server Management Studio?
How can this situation be explained, and how can I solve it?
Execute sp_who and see what is happening; my guess is that it is being blocked - perhaps your "isolation level" is different between SSMS and the web-service.
Equally, it could well be that the connection's SET options are different between SSMS and the web service, which can lead to certain changes in behavior; for example, computed, stored, indexed values are very susceptible to SET options: if the caller's options aren't compatible with the options that were in force when the column was created, the engine can be forced to table-scan them, recalculating them all, instead of using the pre-indexed values. This also applies to hoisted xml values.
A final consideration is parameter sniffing: if the cached plan gets generated for yourproc 'abc', which has very different stats than yourproc 'def', it can run with very bad query plans. The OPTIMIZE FOR / OPTIMIZE FOR UNKNOWN hints can help with this.
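On the SET options point, one way to test the theory from the web service side is to mimic the SSMS defaults before calling the proc: SSMS runs with ARITHABORT ON while SqlClient does not, which gives the two callers different cached plans. A sketch, with placeholder proc and connection-string names:
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (var set = conn.CreateCommand())
    {
        set.CommandText = "SET ARITHABORT ON;";   // mimic the SSMS default
        set.ExecuteNonQuery();
    }

    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "dbo.YourLongRunningProc";   // placeholder name
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandTimeout = 120;   // seconds; the sp takes about a minute
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume the tabular result */ }
        }
    }
}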
I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class with methods defined for saving and updating entities; these methods generally call directly into the ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site so I assume it is maintaining sessions on a request basis, but I cannot be sure.
I believe I have figured out why this was occurring.
NHibernate, as used in our environment, holds a transaction open for the entire request and only commits it when the session is finally disposed.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the server-side saving of the entity in a using block:
using (var ts = new TransactionScope(TransactionMode.New))
{
    // Save in a fresh transaction and commit immediately, rather than
    // waiting for the request-long NHibernate transaction to complete.
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.