Occasionally our site slows down and RAM usage spikes dramatically. Then the app pool stops and I have to restart it. After that it's fine for a few days before RAM suddenly spikes again and the app pool soon stops. CPU usage stays low the whole time.
Before the app pool stops, I've noticed that one of our pages always hangs. The line it hangs on is a foreach over a ResourceSet:
var englishLocations = Lang.Countries.ResourceManager.GetResourceSet(new CultureInfo("en-GB"), true, true);
foreach (DictionaryEntry entry2 in englishLocations) // THIS LINE HANGS
We have the same code deployed on a different box where this doesn't happen. The main differences between the two boxes are:
Bad box:
Windows Server 2008 R2 Standard SP1
IIS 7.5.7600.16385
.NET 4.5
24 GB RAM

Good box:
Windows Server 2008 SP2
IIS 7.0.6000.16386 SP2
.NET 4.0
24 GB RAM
I've tried adding uploadReadAheadSize="0" to the web.config, as described here:
http://rionscode.wordpress.com/2013/03/11/resolving-controller-blocking-within-net-4-5-and-asp-net-mvc/
It didn't work.
Why would foreach hang? It hangs on the very first item, in fact on the foreach statement itself.
Thanks.
I know it is an old post, but nevertheless... There is the potential for a deadlock when iterating over a ResourceSet while at the same time retrieving some other object from the same Resources.
The problem is that when using a ResourceSet, the iterator takes a lock on the internal resource cache of the ResourceReader (http://referencesource.microsoft.com/#mscorlib/system/resources/resourcereader.cs,1389) and then, in the method AllocateStringForNameIndex, takes a lock on the reader itself (http://referencesource.microsoft.com/#mscorlib/system/resources/resourcereader.cs,447):
lock (_reader._resCache) {
    key = _reader.AllocateStringForNameIndex(_currentName, out _dataPosition); // locks the reader
}
Getting an object takes the same locks in the opposite order:
http://referencesource.microsoft.com/#mscorlib/system/resources/runtimeresourceset.cs,300
and http://referencesource.microsoft.com/#mscorlib/system/resources/runtimeresourceset.cs,335
lock (Reader) {
    ....
    lock (_resCache) {
        _resCache[key] = resLocation;
    }
}
This can lead to a deadlock. We had this exact issue recently.
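To make the failure mode concrete, here is a minimal, self-contained sketch of the same lock-order inversion; the lock names mirror the reference source, but everything else is illustrative:

using System.Threading;

// Two locks taken in opposite orders by two threads; with unlucky timing
// both block forever, which is the ResourceSet/GetObject pattern above.
class LockOrderInversion
{
    static readonly object ResCache = new object();  // stands in for _resCache
    static readonly object Reader = new object();    // stands in for the reader

    static void Enumerate()          // mimics the enumerator: cache, then reader
    {
        lock (ResCache)
        {
            Thread.Sleep(10);        // widen the race window
            lock (Reader) { /* read an entry */ }
        }
    }

    static void GetObject()          // mimics GetObject: reader, then cache
    {
        lock (Reader)
        {
            Thread.Sleep(10);
            lock (ResCache) { /* update the cache */ }
        }
    }

    static void Main()
    {
        Thread a = new Thread(Enumerate);
        Thread b = new Thread(GetObject);
        a.Start();
        b.Start();
        a.Join();                    // deadlocks here when the timing is unlucky
        b.Join();
    }
}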
I experienced a very similar problem.
Every once in a while IIS would hang, and I would see a number of requests just sitting there. They were all in the ExecuteRequestHandler state with the ManagedPipelineHandler module name.
After investigating with Process Explorer, I could see that all of them were sitting at mscorlib.dll!ResourceEnumerator.get_Entry; the rest of the stack suggested some NGen activity and then ntdll.dll!WaitForMultipleObjects.
My working hypothesis is that when multiple threads start enumerating those resources, we can run into a deadlock (possibly on some native code file generation), and all subsequent threads then just keep piling up.
To resolve it, I created a critical section around this code block to ensure it executes sequentially, and I haven't experienced the issue since.
private static readonly object ResourceLock = new object();

public static MvcHtmlString SerializeGlobalResources(this HtmlHelper helper)
{
    lock (ResourceLock)
    {
        // Existing code goes here ....
    }
}
Building on another answer, to give you some idea: how about using a try/catch model?
Perhaps it hangs because that resource isn't available, is locked, has permission problems, etc.
var englishLocations = Lang.Countries.ResourceManager.GetResourceSet(new CultureInfo("en-GB"), true, true);
foreach (DictionaryEntry entry2 in englishLocations) // THIS LINE HANGS
ResourceManager cultureResourceManager = new ResourceManager("My.Language.Assembly", System.Reflection.Assembly.GetExecutingAssembly());
ResourceSet resourceSet = cultureResourceManager.GetResourceSet(new CultureInfo("sv-SE"), true, true);
try { resourceSet.GetString("my_language_resource"); }
catch (Exception ex) { /* log the error (ex) to wherever you like here */ }
I have a WCF service. Internally it has a dictionary of lists; different endpoints let you add to a list or get a subset of one.
The code is something like this:
List<Data> list = null;
try
{
    locker.EnterReadLock();
    list = internalData[Something].Where(x => x.hassomething()).ToList();
}
finally
{
    locker.ExitReadLock();
}

foreach (var y in list)
{
    result[y.proprty1].Add(y.property2); // <-- here it hangs
}
return result;
So internalData is protected by a ReaderWriterLockSlim for all operations: a read lock for reading and a write lock for adding. I make a copy of the items inside the lock and work on that copy afterwards.
The issue is that after a while more and more CPU cores go to 100%, until eventually all cores are saturated. It can run perfectly for days and millions of calls before it stops.
Attaching a debugger and pausing shows one thread hanging on the add to the result dictionary. But as soon as I resume, all threads continue and a lot of memory is released.
Is there something special about attaching a debugger, pausing, and resuming that would release something like this?
I changed my locks to lock (something) { code... } and the problem with the hangs went away. So it looks like I ran into the issue Steffen Winkler pointed out in his comment: http://joeduffyblog.com/2007/02/07/introducing-the-new-readerwriterlockslim-in-orcas/
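For reference, a rough sketch of the change (internalData, Data, and hassomething() are from the question; the lock object and method name are illustrative):

// Plain monitor lock instead of the ReaderWriterLockSlim.
private static readonly object dataLock = new object();

private List<Data> GetSnapshot(string key)
{
    lock (dataLock)
    {
        // Copy inside the lock; iterate the copy outside it.
        return internalData[key].Where(x => x.hassomething()).ToList();
    }
}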
I have a Windows service that checks for work every 5 seconds. It uses a System.Threading.Timer for handling the check and processing, and Monitor.TryEnter to make sure only one thread is checking for work at a time.
Just assume it has to be this way: the following code is part of 8 other workers that are created by the service, and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();

public Timer PollingTimer { get; private set; }

void InitializeTimer()
{
    if (PollingTimer == null)
        PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
    else
        PollingTimer.Change(0, 5000);

    Details.TimerIsRunning = true;
}

void PollingTimerCallback(object state)
{
    if (!Details.StillGettingWork)
    {
        if (Monitor.TryEnter(_workCheckLocker, 500))
        {
            try
            {
                CheckForWork();
            }
            catch (Exception ex)
            {
                Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
            }
            finally
            {
                Monitor.Exit(_workCheckLocker);
                Details.StillGettingWork = false;
            }
        }
    }
    else
    {
        Log.Standard("Continuing to get work.");
    }
}

void CheckForWork()
{
    Details.StillGettingWork = true;
    //Hit web server to grab work.
    //Log Processing
    //Process Work
}
Now here's the problem:
The code above is allowing 2 Timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice, and I had two threads independently trying to process it, which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't read too much into the times matching exactly; I just wanted to show that two threads hit a section of code that I believe should never have allowed it. (The log and times are real, though; only the messages are sanitized.)
Eventually the two threads start downloading a large enough file that one of them gets access denied on the file, which causes the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I had a lock instead of a Monitor. Back then I assumed the Timer had drifted enough from the lock blocking that timer threads were stacking up: one blocked for 5 seconds and went through just as the Timer triggered another callback, and somehow both made it in. That's why I went with Monitor.TryEnter, so I wouldn't keep stacking timer threads.
Any clue? In every case where I've tried to solve this before, the System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
I can see in the log you've provided that you got an AppDomain restart, is that correct? If so, are you sure you have one and only one object for your service across the AppDomain restart? I think that during the restart not all threads are stopped at exactly the same time, and some of them can continue polling the work queue, so two different threads in different AppDomains got the same Id for the work.
You could probably fix this by marking your _workCheckLocker with the static keyword, like this:
static object _workCheckLocker;
and introducing a static constructor for your class that initializes this field (with inline initialization you could face some more complicated problems). But I'm not sure this is enough for your case: during an AppDomain restart the static class will be reloaded too, so as I understand it, this is not an option for you.
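A minimal sketch of that suggestion (the class name is illustrative):

class Worker
{
    static readonly object _workCheckLocker;

    // A static constructor runs once per AppDomain before first use,
    // avoiding the inline-initialization pitfalls mentioned above.
    static Worker()
    {
        _workCheckLocker = new object();
    }
}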
Maybe you could introduce a static dictionary instead of an object for your workers, so you can check the Ids of the documents being processed.
Another approach is to handle the Stopping event for your service, which should be raised during the AppDomain restart; there you could introduce a CancellationToken and use it to stop all work in such circumstances.
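As a sketch, assuming a ServiceBase-derived service (the token source and the early-return check are illustrative, not the asker's code):

static readonly CancellationTokenSource ShutdownSource = new CancellationTokenSource();

protected override void OnStop()
{
    ShutdownSource.Cancel();                                  // signal every worker
    PollingTimer.Change(Timeout.Infinite, Timeout.Infinite);  // suppress further callbacks
}

void CheckForWork()
{
    if (ShutdownSource.Token.IsCancellationRequested)
        return;   // shutting down; do not pick up new work
    Details.StillGettingWork = true;
    // ...existing work retrieval and processing...
}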
Also, as @fernando.reyes said, you could introduce a heavyweight cross-process lock called a mutex for synchronization, but this will degrade your performance.
TL;DR
A production stored procedure had not been updated in years. Workers were getting work they should never have gotten, so multiple workers were processing update requests.
I was finally able to find the time to set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce it exactly as I'd experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the work were indeed correct, and that's something that should never have been able to happen, as each worker is unique in the work it does and requests.
It turns out that in our production environment, the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got update requests, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is database side and not a client update.
First, I've read all the posts here regarding this issue and I managed to progress a bit. However, it seems I do need your help :)
I have a program with several threads. Sometimes (not always) the CPU usage of the program climbs to 100% and never drops until I shut the program down.
As suggested in other similar posts, I ran the app in Visual Studio (2012 Ultimate).
I paused the app and opened the Threads window.
There I paused threads until I found the 4 threads that were stalling the app.
They all refer to the same line of code (a call to a constructor).
I checked the constructor inside and out and couldn't find any loop that could cause this.
To be thorough, I added a breakpoint to almost every line of code and resumed the app. None of them were hit.
This is the line of code:
public static void GenerateDefacementSensors(ICrawlerManager cm)
{
    m_SensorsMap = new Dictionary<DefacementSensorType, DefacementSensor>();

    // Create instance of all sensors
    // For any new defacement sensor, don't forget to add an appropriate line here
    // m_SensorsMap.add(DefacementSensorType.[Type], new [Type]Sensor())
    try
    {
        if (m_SensorsMap.Count <= 0)
        {
            m_SensorsMap.Add(DefacementSensorType.BackgroundSensor, new BackgroundSensor());
            m_SensorsMap.Add(DefacementSensorType.TaglinesSensor, new TaglinesSensor(cm.Database));
            m_SensorsMap.Add(DefacementSensorType.SingleImageSensor, new SingleImageSensor());
        }
    }
    catch (Exception)
    {
        Console.WriteLine("There was a problem initializing defacement sensors");
    }
}
The second "m_SensorsMap.Add" is marked with green arrow, as I understand it, it means it's still waiting to the first line to finish.
By the way, the m_SensorsMap.Count value is 3.
How can I find the problem?
Is it a loop?
Or maybe a deadlock (not make sense because it shouldn't be 100% cpu, right?)
It's pointless to upload a code because this is a huge project.
I need more general help like how to debug?
Is it could something else than a loop?
Because it's a bug that returns every while and than I'm not closing the app until I found the problem :)
Thanks in advance!!
Edit:
The constructors:
public TaglinesSensor(IDatabase db)
{
    m_DB = db;
}
I couldn't find the problem, so I changed the design so that those constructors are no longer called.
Thanks to the guys who tried to help.
Shaul
In my C# application, multiple clients access the same server; the code below was written to process one client at a time. In it I used the Monitor class as well as a Queue. Will this code hurt performance? And if I use the Monitor class, should I remove the Queue from the code?
Sometimes the remote server machine where my application runs as a service goes down completely. Is the code below the reason, given that all the clients end up in a queue? When I check with netstat -an at a command prompt, for 8 clients it shows 50 connections stuck in TIME_WAIT...
Below is my code where the client accesses the server:
if (Id == "")
{
System.Threading.Monitor.Enter(this);
try
{
if (Request.AcceptTypes == null)
{
queue.Enqueue(Request.QueryString["sessionid"].Value);
string que = "";
que = queue.Dequeue();
TypeController.session_id = que;
langStr = SessionDatabase.Language;
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
TypeController.session_id = "";
filter.Execute();
Request.Clear();
return filter.XML;
}
else
{
TypeController.session_id = "";
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
filter.Execute();
}
}
finally
{
System.Threading.Monitor.Exit(this);
}
}
Locking this is pretty wrong; it won't work at all if every thread uses a different instance of whatever class this code lives in. It isn't clear from the snippet whether that's the case, but fix that first. Create a separate object just to store the lock, and make it static or give it the same scope as the shared object you are trying to protect (also not clear).
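For example, a minimal sketch of that fix (the field and method names are illustrative):

// One lock object shared by every request thread; never lock on 'this'.
private static readonly object RequestLock = new object();

void ProcessRequest()
{
    lock (RequestLock)   // equivalent to Monitor.Enter/Exit in a try/finally
    {
        // the queue and filter work from the snippet above go here
    }
}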
You might still have trouble, since this sounds like a deadlock rather than a race. Deadlocks are pretty easy to troubleshoot with the debugger, since the code is stuck and not executing at all. Debug + Break All, then Debug + Windows + Threads. Locate the worker threads in the thread list. Double-click one to select it and use Debug + Call Stack to see where it got stuck. Repeat for the other threads. Look back through the stack trace to see where one of them acquired a lock, and compare with the other threads to see which lock they are blocking on.
That could still be tricky if the deadlock is intricate and involves multiple interleaved locks, in which case logging might help. Really hard-to-diagnose mandelbugs might require a rewrite that cuts back on the amount of threading.
I have created a web service in .NET 2.0 / C#. I need to log some information to a file whenever different methods are called by the web service clients.
The problem comes when one user process is writing to a file and another process tries to write to it. I get the following error:
The process cannot access the file because it is being used by another process.
The solutions that I have tried in C#, without success, are:
Implemented a singleton class that contains the code that writes to the file.
Used a lock statement to wrap the code that writes to the file.
I have also tried the open source logger log4net, but it isn't a perfect solution either.
I know about logging to the system event log, but I do not have that choice.
I want to know whether a complete solution to this problem exists.
The locking is probably failing because your webservice is being run by more than one worker process.
You could protect the access with a named mutex, which is shared across processes, unlike the locks you get with lock(someobject) {...}:
Mutex mutex = new Mutex(false, "mymutex");
mutex.WaitOne();
try { /* access file */ }
finally { mutex.ReleaseMutex(); }
You don't say how your web service is hosted, so I'll assume it's in IIS. I don't think the file should be accessed by multiple processes unless your service runs in multiple application pools. Nevertheless, I guess you could get this error when multiple threads in one process try to write at the same time.
I think I'd go for the solution you suggest yourself, Pradeep: build a single object that does all the writing to the log file. Inside that object I'd have a Queue into which all data to be logged gets written, and a separate thread reading from this queue and writing to the log file. In a thread-pooled hosting environment like IIS it doesn't seem too nice to create another thread, but it's only one... Bear in mind that the in-memory queue will not survive IIS resets; you might lose some entries that are "in flight" when the IIS process goes down.
Other alternatives certainly include using a separate process (such as a service) to write to the file, but that has extra deployment overhead and IPC costs. If that doesn't work for you, go with the singleton.
Maybe write a "queue line" of sorts for writing to the file: when you try to write, keep checking whether the file is locked; if it is, keep waiting; if it isn't, write to it.
You could push the results onto an MSMQ queue and have a Windows service pick the items off the queue and log them. It's a little heavyweight, but it should work.
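A rough sketch of the producer side, assuming the System.Messaging API and a hypothetical queue path:

using System.Messaging;

class LogProducer
{
    // Hypothetical private queue; create it once, e.g. at install time.
    const string QueuePath = @".\private$\WebServiceLog";

    public static void EnqueueLogMessage(string message)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (MessageQueue queue = new MessageQueue(QueuePath))
            queue.Send(message);   // a Windows service receives this and writes the file
    }
}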
Joel and Charles, that was quick! :)
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Charles: I know about the MSMQ and Windows service combination, but like I said, I have no choice other than writing to a file from within the web service :)
thanks
pradeep_tp
The trouble with all the approaches tried so far is that multiple threads can enter the code.
That is, multiple threads try to acquire and use the file handle, hence the errors. You need a single thread, outside of the worker threads, to do the work, holding a single open file handle.
Probably the easiest approach is to create a thread during application start in Global.asax and have it listen on a synchronized in-memory queue (System.Collections.Generic.Queue). That thread opens and owns the lifetime of the file handle; only that thread writes to the file.
Client requests in ASP will lock the queue momentarily, push the new logging message onto it, then unlock.
The logger thread polls the queue periodically for new messages; when messages arrive, it reads and dispatches them to the file.
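A minimal sketch of that design using .NET 2.0-era APIs (all names here are illustrative, not from the thread):

using System.Collections.Generic;
using System.IO;
using System.Threading;

public static class AsyncFileLogger
{
    static readonly Queue<string> logQueue = new Queue<string>();

    // Call once from Application_Start in Global.asax.
    public static void Start(string path)
    {
        Thread loggerThread = new Thread(delegate()
        {
            using (StreamWriter writer = new StreamWriter(path, true))
            {
                while (true)
                {
                    string message;
                    lock (logQueue)
                    {
                        while (logQueue.Count == 0)
                            Monitor.Wait(logQueue);   // sleep until a message arrives
                        message = logQueue.Dequeue();
                    }
                    writer.WriteLine(message);        // only this thread touches the file
                    writer.Flush();
                }
            }
        });
        loggerThread.IsBackground = true;   // don't keep the process alive on shutdown
        loggerThread.Start();
    }

    // Called by request threads; locks the queue only momentarily.
    public static void Log(string message)
    {
        lock (logQueue)
        {
            logQueue.Enqueue(message);
            Monitor.Pulse(logQueue);   // wake the logger thread
        }
    }
}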
To show what I am trying to do in my code, the following is the singleton class I have implemented in C#:
public sealed class FileWriteTest
{
    private static volatile FileWriteTest instance;
    private static object syncRoot = new Object();
    private static Queue logMessages = new Queue();
    private static ErrorLogger oNetLogger = new ErrorLogger();

    private FileWriteTest() { }

    public static FileWriteTest Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new FileWriteTest();
                        Thread MyThread = new Thread(new ThreadStart(StartCollectingLogs));
                        MyThread.Start();
                    }
                }
            }
            return instance;
        }
    }

    private static void StartCollectingLogs()
    {
        //Infinite loop
        while (true)
        {
            cdoLogMessage objMessage = new cdoLogMessage();
            if (logMessages.Count != 0)
            {
                objMessage = (cdoLogMessage)logMessages.Dequeue();
                oNetLogger.WriteLog(objMessage.LogText, objMessage.SeverityLevel);
            }
        }
    }

    public void WriteLog(string logText, SeverityLevel errorSeverity)
    {
        cdoLogMessage objMessage = new cdoLogMessage();
        objMessage.LogText = logText;
        objMessage.SeverityLevel = errorSeverity;
        logMessages.Enqueue(objMessage);
    }
}
When I run this code in debug mode (simulating just one user), I get a "stack overflow" error at the line where the queue is dequeued.
Note: In the above code, ErrorLogger is a class that writes to the file. objMessage is an entity class that carries the log message.
Alternatively, you might want to log errors to the database (if you're using one).
Koth,
I have implemented a Mutex lock, which has removed the "stack overflow" error. I still have to do load testing before I can conclude whether it works in all cases.
I was reading about Mutex objects on one of the websites, which said that a Mutex affects performance. I want to know one thing about locking with a Mutex:
Suppose user process 1 is writing to a file and at the same time user process 2 tries to write to the same file. Since process 1 has a lock on the code block, will process 2 keep trying, or just die after the first attempt?
thanks
pradeep_tp
It will wait until the mutex is released....
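To illustrate: WaitOne() with no arguments blocks indefinitely, while an overload takes a timeout so the caller can give up instead (the two-second value is just an example):

Mutex mutex = new Mutex(false, "mymutex");

// Blocks this process until the other process calls ReleaseMutex().
mutex.WaitOne();

// Or: wait at most two seconds, then give up instead of blocking forever.
if (!mutex.WaitOne(TimeSpan.FromSeconds(2), false))
{
    // Could not acquire the mutex; log and retry later.
}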
Joel: When you say "queue line" do you
mean creating a separate thread that
runs in a loop to keep checking the
queue as well as write to a file when
it is not locked?
Yeah, that's basically what I was thinking: have another thread with a while loop that waits until it can get access to the file, saves, and then ends.
But you would have to do it in a way where the first thread to start looking gets access first, which is why I say queue.