Application SQL Server timeout because of a metadata lock? - C#

I get the following generic "Timeout expired" error in my .NET application when I run a program that performs an update against the database.
In SQL Server Activity Monitor, when this error appears, it shows there is a lock on the database, shown in the next image.
Any idea what could be going on here? I think it is permissions related, because when I log on with my Windows Administrator account the program runs through fine and doesn't error out. I also don't think it is really a timeout issue, as SQL Server is set to time out after 10 minutes and so are the SqlCommands I create in my code (see the snippet below).
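For reference, this is roughly how the command timeout is set in my code (the connection string and statement here are placeholders):

    using System.Data.SqlClient;

    // Sketch only: hypothetical connection string and UPDATE statement.
    using (var connection = new SqlConnection("Server=myServer;Database=myDb;Integrated Security=true"))
    using (var command = new SqlCommand("UPDATE dbo.SomeTable SET SomeFlag = 1", connection))
    {
        command.CommandTimeout = 600;   // seconds; matches the 10-minute server-side setting
        connection.Open();
        command.ExecuteNonQuery();
    }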
Any ideas what permissions I would need to change to stop this from happening for my users?

The issue was resolved by logging in as an Administrator, so it was permissions related.

Related

ASP.Net Timeout issue

I developed a medium-sized ASP.NET website that accepts several hundred PDF documents a day. I have a very simple insert stored procedure that inserts the documents into an Image column in SQL Server 2008 R2.
A few times a week I am starting to have an issue where my website seems to be timing out on this insert. It's very strange, because my drop-down lists still load and authentication still works. Most of the time I can recycle the application pool or restart IIS and that fixes everything.
That is a very simplified version, of course, but that's the long and short of it. Has anyone else had an issue like this?
Thanks!
Specify the maximum pool size in your connection string.
If this doesn't work, set the command timeout to 0. Please post the exact error message if the two suggestions above don't work.
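A sketch of both suggestions together (the server, database, procedure and parameter names are placeholders for your own):

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    // The key pieces are the Max Pool Size setting and CommandTimeout = 0.
    var connectionString =
        "Server=myServer;Database=myDb;Integrated Security=true;Max Pool Size=200;";

    byte[] pdfBytes = File.ReadAllBytes(@"C:\incoming\document.pdf");   // hypothetical input file

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.InsertDocument", connection))   // placeholder for your insert SP
    {
        command.CommandType = CommandType.StoredProcedure;
        command.CommandTimeout = 0;   // 0 = wait indefinitely, as suggested above
        command.Parameters.Add("@Document", SqlDbType.Image).Value = pdfBytes;
        connection.Open();
        command.ExecuteNonQuery();
    }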

There are no active servers. Background tasks will not be processed

I have a problem that I have been looking at for a few days now and still have no solution.
I found this exception in my C# web app's log:
[2015-12-03 13:56:06] [ERROR] [] [Error occurred during execution of
'Server Bootstrapper' component. Execution will be retried (attempt
120 of 2147483647) in 00:05:00 seconds.]
[System.Data.SqlClient.SqlException (0x80131904): Login failed for
user "A Network account".
It appears to me that it is using the network account to access the SQL database, and because that network account has not been granted access to the database, the login fails and the server cannot start up.
However, when I go to the Hangfire dashboard I can see the recurring jobs, which suggests that Hangfire can access the database with the right account when retrieving them.
Also, on the IIS server we have already set the identity of the application pools to "ApplicationPoolIdentity", so we should be using the virtual account rather than the network account.
Has anyone run into a similar problem and found a solution? I would really appreciate your help!
I ran into the same issue using Hangfire with SQL Server and Entity Framework Code First, where Entity Framework was responsible for creating the database, so this suggestion is based purely on that scenario.
If you are using EF Code First, your database isn't created until the context is created and accessed for the first time.
This means that at App Start your database might not exist. Hangfire won't create the database, it only creates the tables in an already existing database.
So what you might be seeing is:
No database exists.
Your app starts up and Hangfire tries to get itself running; the server process throws an error because EF hasn't created the DB yet.
The web application finishes starting, since the Hangfire service crashing isn't fatal to the application.
Something in your web app calls into Entity Framework.
EF runs and creates the database (likely with no Hangfire tables at this point).
You access the Hangfire dashboard; Hangfire is now able to connect, sees that the tables don't exist, and creates them (now you will see the tables in your DB).
The dashboard can now see the database and show you the stats (likely 0 servers running), so it seems like everything is working.
The way I solved it was to make sure that even if the database is empty (no tables), it at least exists. That way Hangfire can access it to install its tables and start, and EF can access it and create its schema.
Also (and this probably isn't it), GlobalConfiguration.Configuration.UseSqlServerStorage() should run before the rest of the Hangfire startup configuration.
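For example, something along these lines in Global.asax's Application_Start (a rough sketch; the context class and connection string name are made up, and EF 6 with Hangfire 1.x is assumed):

    using Hangfire;

    protected void Application_Start()
    {
        // Make sure the database exists before Hangfire touches it. With EF Code First
        // the database is only created on first use of the context, so force that here.
        // "MyDbContext" and "HangfireConnection" are placeholders for your own names.
        using (var context = new MyDbContext())
        {
            context.Database.CreateIfNotExists();
        }

        // Configure storage first, then the rest of the Hangfire startup configuration.
        GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

        // ... remaining Hangfire configuration (dashboard, server, filters) goes here.
    }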
Hope that helps!
Steve
You should create a background job server instance; see the documentation here.
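A minimal sketch using the OWIN extensions (the connection string name is a placeholder); keeping a plain new BackgroundJobServer() alive for the application's lifetime works as well:

    using Hangfire;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireConnection");

            app.UseHangfireDashboard();
            // Without a server instance, nothing registers as an active server
            // and background tasks are never picked up.
            app.UseHangfireServer();
        }
    }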
In my case, the server name was too long and there was a SQL error saying it would be truncated. I was only able to see this error after starting the application with Just My Code turned off in Visual Studio.
Moral of the story: use a shorter name for the server, and don't append your own unique-per-instance identifiers to it, so it can't run into the truncation error. The column is an nvarchar(100).
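If you want to control the name explicitly, one option is to supply a short one through BackgroundJobServerOptions (sketch only; the name below is just an example):

    using Hangfire;

    var options = new BackgroundJobServerOptions { ServerName = "web01-jobs" };
    var server = new BackgroundJobServer(options);   // keep this alive for the app's lifetime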

How to run C# Console application from SQL Job in SQL Server 2008?

I want to try running my C# console application from a SQL Server Agent job. To test it, I created a console application and, using C# and SMO, wrote a few lines to create a database. I can run it successfully and it creates a database in SQL Server as expected.
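The console app is roughly along these lines (the server and database names below are placeholders):

    using Microsoft.SqlServer.Management.Smo;

    class Program
    {
        static void Main()
        {
            // Connect to the local default instance and create a new database.
            var server = new Server("localhost");
            var database = new Database(server, "MyNewDatabase");
            database.Create();
        }
    }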
Then in the IDE, I clicked Build --> Publish myProject to E:\myFolder\MakeNewDB24, because that's where my SQL Server resides.
That action copied the following files to the specified location, i.e. E:\myFolder\MakeNewDB24:
Application Files
setup.exe
myProject.application
Then I opened my SQL Server and created a job by right-clicking the Jobs folder --> New Job.
I filled in all the information under General.
In Steps, under Command, I have:
\\mySQLServer\myFolder\MakeNewDB24\setup.exe
Type: Operating System(CmdExec)
Run as: SQL Server Agent Service Account
I ran the job. It showed the result as "Success".
When I viewed the history:
Executed as user: mySQLServer\SYSTEM. The step did not generate any output. Process Exit Code 0. The step succeeded.
I was happy. But when I went to check the database that was supposed to be created, it isn't there, meaning the SQL job didn't do its work and create my DB.
I don't know what I am missing here. If anyone has knowledge of this, please share it with me; I really want to see my SQL job do this work. The reason I'm using a SQL job is that all of my automation tasks start from here. Thanks.
Your problem almost certainly has to do with permissions. You need to check the user account used for running the SQL Server Agent job; my guess is that it doesn't have permissions somewhere along the line.
The reason your job returns success is that CmdExec steps report success if they successfully launch the CMD executable. It has nothing to do with whether or not your code succeeded.
In environments where I don't have to worry about security, I have gotten into the unfortunate habit of giving everything admin privileges, so I don't have to worry about what privileges are needed where. Good luck.

SSIS Package won't execute when called

I have a 2005 SSIS package that I'm calling from a service created in VS 2005. The package will not run. The purpose of the package is to parse a file and put data into a "load table".
The package runs perfectly on its own, but will not run at all when executed programmatically, i.e. when I'm stepping through the code. The Event Viewer indicates that the package has started, but then it indicates that it has failed. I don't get any more information than that.
It's not throwing an exception; it's just returning "Failure". I've tried executing against different databases, with the same result. The file it's parsing is valid, because the package runs fine when run on its own.
The only other thing I can think of is that I'm having some problem with user permissions, but I have no idea how to go about looking into that issue. Does anyone have any ideas?
Sounds like a permissions issue. Make sure the process it is running as has the same permissions as the account which you are using to run it interactively.
Without more information it's hard to tell, but this sounds like a permissions issue.
When it's running from code, does the person or user account the code is running under have the appropriate permissions?
For example, if you run it manually, you're most likely using your own credentials. As the developer, I'd assume you have admin rights, so you can perform the task.
However, when run from a program, you need to know what user account the program runs under. Is it ASP.NET? The default user is Network Service. Is it a scheduled task running under the default Local System account? You'd need to change the account it runs under or grant permissions on the DB appropriately.
When you loaded it from Studio to Integration Services, what Package Protection Level did you use? I've had the best luck with the last in the list: Rely on Server Storage and roles for access control.
Does your package have error logging set up? It could help you to see what the problem is.
Also, does the account for the service running the package have the correct rights to the directory where the file to be picked up is stored not just correct rights in SQL Server? We've had that problem before.
Have you attached events to the execution of the package? Are you calling the package from code? Which method are you using?
Please check Loading and Running a Remote Package...
Then, when debugging, add a breakpoint at the Console.WriteLine that prints the error information.
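Roughly, loading and executing the package from code looks like this (a sketch only; here it loads from a .dtsx file for simplicity and the path is a placeholder), with errors surfaced through a DefaultEvents subclass so you see more than a bare "Failure":

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    // Captures package errors so the failure reason is visible.
    class PackageErrorListener : DefaultEvents
    {
        public override bool OnError(DtsObject source, int errorCode, string subComponent,
            string description, string helpFile, int helpContext, string idofInterfaceWithError)
        {
            Console.WriteLine("Error in {0}: {1}", subComponent, description);
            return false;   // we only want to log here, as in the usual samples
        }
    }

    class Runner
    {
        static void Main()
        {
            var app = new Application();
            Package package = app.LoadPackage(@"C:\Packages\LoadTable.dtsx", null);
            DTSExecResult result = package.Execute(null, null, new PackageErrorListener(), null, null);
            Console.WriteLine(result);   // put a breakpoint here to inspect the result
        }
    }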
Hope it helps,
Arturo

MSDTC failing on first transaction

I have an application that retrieves data and stores it in a database once per day. Until recently this application resided on the same machine as the SQL Server, but due to some hardware issues with some of the required peripherals, it has been moved to a separate machine running Windows XP.
The problem we are having is that when the first transaction of the morning is run, we receive a stack trace containing the following:
System.Transactions.TransactionManagerCommunicationException: Communication with the underlying transaction manager has failed. ---> System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.
However, immediately rerunning the transaction is successful. It seems as though MSDTC is taking too long to respond to the first transaction and is thus failing, but is then ready for the second. I have found several references to this occurring on the internet, but no real solution. Has anyone encountered this? If so, is there a way of preventing MSDTC from unloading from memory, or another solution such as extending timeouts?
Thanks guys,
Just to fill you in: we resolved the issue by changing the DCOM configuration to use the remote coordinator located on the SQL Server, and so far we have not experienced any further issues.
I recommend you first look in the event logs of all machines involved and see what else is there. You're making an assumption about what's going on. It could be a good assumption, but I suggest you find out before making changes.
I'm also going to start the process of moving this question over to Server Fault, where you'll probably get a faster answer. If it takes too long (five people have to vote), then you may want to ask the question over there manually. If you do, indicate that the original is probably on its way (and paste the link).
One thing to look at (and it may not be the cause of your problem), is to make sure that the reverse DNS lookup on the client's IP actually resolves to a name that refers to the client machine. We had problems with our DNS/DHCP setup, where an IP was matched to multiple names. When the remote end of MSDTC tried to connect back to the MSDTC on the client, it was attempting to connect to a different machine.
This will manifest as (seemingly random) transaction timeouts.
Oh dear, we have also been facing the same problem. We were migrating data from one database to another (with a different structure) and were using SubSonic to speed up the process. We used transactions and the SharedDbConnectionScope object, and it failed similarly on the machine running XP SP3. I think there are some updates in SP3 that break things, as it works fine on Vista, Server 2003 and Server 2008.
EDIT: Here is an MSDN KB article that discusses the same problem.
You could probably try running a process which simply initiates and commits a transaction on the DTC every 30 minutes or so?
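A rough sketch of such a keep-alive (the 30-minute interval is arbitrary; TransactionInterop is used here only to force promotion of the transaction to MSDTC):

    using System;
    using System.Threading;
    using System.Transactions;

    class DtcKeepAlive
    {
        static void Main()
        {
            while (true)
            {
                try
                {
                    using (var scope = new TransactionScope())
                    {
                        // Requesting a propagation token forces the lightweight transaction
                        // to be promoted to a distributed (MSDTC) transaction.
                        TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
                        scope.Complete();
                    }
                }
                catch (TransactionException ex)
                {
                    Console.WriteLine("DTC keep-alive failed: {0}", ex.Message);
                }

                Thread.Sleep(TimeSpan.FromMinutes(30));
            }
        }
    }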
We had a similar problem in our test environment. The first transaction that occurred after 10 minutes of inactivity failed with error “Communication with the underlying transaction manager has failed”.
After some research we concluded that the MSDTC connection was being canceled and could not be re-established in the required amount of time (it seems the default timeout for this operation is 4 seconds).
To solve the problem we increased the length of time that the client computer waits for the bind packet response from the server computer. This is done by adding a key to the registry of the client computer: http://support2.microsoft.com/?id=922430
