How to debug async/await (multi-threading) issue with Entity Framework? - c#

I have "inherited" a multi-tier ASP.NET Core project that uses AutoMapper and EntityFramework.
It also makes heavy use of async/await.
As soon as there are multiple simultaneous requests, I receive an InvalidOperationException from EntityFramework, informing me that "A second operation started on this context before a previous asynchronous operation completed."
I have already played around with the lifetime settings (everything was set to Scoped) and thought setting everything to Transient had solved the issue as a temporary workaround. But after slightly cleaning up my code, so that I could merge it into the main branch, the problem returned.
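For reference, the registrations I have been toggling look roughly like this (type names are placeholders, not the project's actual ones):
services.AddDbContext<AppDbContext>(
    options => options.UseSqlServer(Configuration.GetConnectionString("Default")),
    ServiceLifetime.Transient); // originally ServiceLifetime.Scoped
services.AddTransient<IDepartmentManager, DepartmentManager>();       // originally AddScoped
services.AddTransient<IDepartmentRepository, DepartmentRepository>(); // originally AddScoped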
Anyway, I figured it would be better to create a sample project to replicate the problem. But that didn't work either. I tried to copy the exact project structure, use the same (outdated) versions of Entity Framework etc., but no luck. Everything works fine, both with "Scoped" and "Transient" lifetime.
How would you try to debug this problem?
I have already added code to log the addresses in memory, like so:
private static string GetAddress(object a)
{
    // Note: this returns the value of a short-lived weak GC handle, which works as a
    // rough identity stamp for logging but is not the object's actual memory address.
    GCHandle handle = GCHandle.Alloc(a, GCHandleType.Weak);
    IntPtr pointer = GCHandle.ToIntPtr(handle);
    handle.Free();
    return "0x" + pointer.ToString("X");
}
public async Task<IActionResult> GetAll()
{
Debug.WriteLine("######## {0:00}: Controller {1}\tManager {2}\tRepository\t\t\tContext\t\t\t(DepartmentController.GetAll)", Thread.CurrentThread.ManagedThreadId, GetAddress(this), GetAddress(_departmentManager));
Thread.Sleep(2000); // To make it easier to reproduce the issue
var departments = await _departmentManager.GetAllItems();
return new ObjectResult(GetJsonObject(departments));
}
But this didn't really help much. With the lifetime set to "Transient", all instances have different addresses on different threads, so it doesn't really look like the threads are accessing each other's DbContexts.
I suspect there's a very subtle problem somewhere, like a missing await keyword or a stray fire-and-forget call, that is letting code run on a different thread.
I just can't seem to figure out what it is...
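For illustration, a pattern like the following (hypothetical, not taken from the actual code base) would produce exactly this exception, because a second query starts while the first is still in flight on the same context:
public async Task<List<Department>> GetAllItems()
{
    // BUG (hypothetical): the first query is not awaited before the second one starts,
    // so two operations run on the same DbContext concurrently.
    var departmentsTask = _context.Departments.ToListAsync();
    var count = await _context.Departments.CountAsync(); // InvalidOperationException here
    return await departmentsTask;
}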
Any help will be highly appreciated.

Related

HttpContext.Current sometimes lost during async Entity Framework 6 query. Why the difference between the call stacks?

I'm debugging some code and noticed that occasionally, when it accesses HttpContext.Current, it is null and the code resorts to a fallback that was put in to handle that case. The code is an async Web API method that calls down through the application layers and ultimately executes an async Entity Framework 6 query (code below). None of the layers in between do anything other than await [method call] - no .ConfigureAwait(false) or anything else.
The project has a System.Data.Entity.Infrastructure.IDbConnectionInterceptor which sets up the SQL session (for SQL RLS) in the Opened method. It uses an injected dependency which, in this case, gets an ID it needs from the HttpContext.Current.Items collection. When I'm debugging it works most of the time, but once in a while I find that HttpContext.Current and SynchronizationContext.Current are both null.
Looking at the Call Stack window, I can see the calls are arriving at the IDbConnectionInterceptor.Opened method in different ways. The successful version leads back down to my calling code in the Web API controller, but the version where the context is null leads back down to native code. I thought that maybe when it's not null it isn't even executing on a different thread, but Opened does execute on a different thread from the original in both cases. My project is targeting .NET Framework 4.8 and referencing the Microsoft.AspNet.WebApi v5.2.3 NuGet package. It has <httpRuntime targetFramework="4.7.2" /> under <system.web> in the config file (which I'm just now noticing does not match the 4.8 of the framework). My understanding is that as of .NET Framework 4.5 the context should flow across async calls, so it seems like something is preventing that, or somehow Opened is getting queued on a thread that isn't using the async/await model. Can someone help me understand the call stack of the failed request, why it might be different from the one that succeeds, and hopefully how that might explain the missing context?
Web API method:
[HttpGet]
[Infrastructure.Filters.AjaxOnly]
[Route("event/month/list/{year}")]
public async Task<IHttpActionResult> GetRoster___EventMonthItems(int year)
{
try
{
HttpContext.Current.SetCallContextFacilityID(); //This extension method sets the mentioned fallback value for when HttpContext.Current is null
List<RosterDayListItem> data = await _roster___Mapper.GetRoster___EventMonthItems(year);
return Ok(data);
}
catch (Exception ex)
{
Logging.DefaultLogger.Error(ex, "An error occurred while loading report categories.");
return BadRequest(ex.Message);
}
}
EF6 Query
public async Task<List<Roster___EventListItem>> GetRoster___EventListItems(int year, int month)
{
using (var dbContextScope = _dbContextFactory.Create())
{
var context = GetContext(dbContextScope);
var result = await context.DropInEvents
.Where(w => w.EventDate.Year == year && w.EventDate.Month == month && w.IsDeleted == false)
.Select(d => new Roster___EventListItem
{
ID = d.ID,
EventDate = d.EventDate,
EventTime = d.StartTime,
Year = d.EventDate.Year
})
.OrderBy(f => f.EventDate).ThenBy(f => f.EventTime)
.ThenByDescending(f => f.EventDate)
.ToListAsync();
return result;
}
}
Successful call stack: (screenshot not reproduced here)
Call stack with null contexts: (screenshot not reproduced here)
Update
Grasping at straws, but after thinking about it for a while, it seemed like maybe something inside EF 6 is queueing the call to IDbConnectionInterceptor.Opened on a thread in a way that loses the SynchronizationContext. So I went looking through the EF source following my successful stack trace, and it looks like the call to Opened is initiated in InternalDispatcher.DispatchAsync<TTarget, TInterceptionContext>, line 257. I'm not sure how it would explain the intermittency of my problem, but might it have something to do with the Task.ContinueWith that is being used there? Interestingly, I found this other question relating Task.ContinueWith in that same method to a SynchronizationContext being lost. Then I found this question where the answer says the continuation will run on a ThreadPool thread, which will not have an associated SynchronizationContext unless one is explicitly specified. So this sounds like what I came looking for, but I'm not sure whether the TaskContinuationOptions.ExecuteSynchronously option used there changes anything, and if this is the culprit, I don't yet understand why my HttpContext is available most of the time.
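To make the suspected mechanism concrete, here is a minimal sketch of the difference (my own illustration, not EF's actual code; SomeEfQueryAsync and DoSomethingWith are placeholder names):
// await captures SynchronizationContext.Current (ASP.NET's AspNetSynchronizationContext
// here) and posts the continuation back to it, which is how HttpContext.Current survives.
var data = await SomeEfQueryAsync();

// A bare ContinueWith does not capture the synchronization context; unless a scheduler
// built from it is passed in (TaskScheduler.FromCurrentSynchronizationContext()), the
// continuation runs on a thread-pool thread where SynchronizationContext.Current is null
// and HttpContext.Current is typically lost.
var continued = SomeEfQueryAsync().ContinueWith(
    t => DoSomethingWith(t.Result),
    TaskContinuationOptions.ExecuteSynchronously);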

How to properly set up a multithreaded Asp.Net MVC + COM server on IIS

Update 2
The queueing problem was probably solved already, as we've been able to run multiple requests concurrently and the lib nicely reported progress for each operation. The other concurrency issues we're still facing were likely the reason for this apparent behaviour, but that's a design matter. To solve those, however, it would help to know more about the inner workings of classes, modules and variables as used in VB6. One question arises: would encapsulating everything (connections, components, etc.) in classes ensure that every created object does not share any data with other instances?
Update 1
We've refactored the application a bit more to cope with resource disposal, especially when dealing with OCXs. Apparently that solved the out-of-memory issue. What still bothers me is that I don't understand what is happening beneath the surface. In this regard, is there a way to see which objects are currently in memory and how many references they have? I know the reference-counting model is different from garbage-collector-based systems; still, I would suppose the RCWs wrapping our COM objects keep things clean for us. In the model given, is that a safe assumption, or is there something we're missing?
So, I've probably read the most varied collection of articles and docs on the topic of COM multithreading, but I still cannot work out how it's supposed to behave exactly, especially when interacting with .NET technologies such as ASP.NET MVC. That could be dismissed as idle curiosity, except that we've got this quite critical project and we're experiencing severe issues trying to tie everything together. We're getting out-of-memory errors (in VB6), and apparently we misunderstood how objects are created and data is shared between them in COM. Continue reading to see how the story goes...
How things came to be
Not much to say here. We have a legacy VB6 desktop application made up of a number of ActiveX DLLs. These are configured to use Apartment as the threading model, and all classes are set as MultiUse. All worked well and nice until the time came when we were asked to port the app to the mighty web :O
The problem we faced and how we (thought we) solved it
Since we haven't got the resources to design and develop a solution from scratch, we used a third-party java(script)-based framework to quickly build a web app. However, much of the real work is done by the legacy library, so we needed a way to interface the two components. The easiest way we could think of was to build a very basic (no auth, no UI) ASP.NET MVC website to use as the middle layer. It receives requests from the web app and translates them for the COM lib to crunch.
To this end, and since the libs were never meant to be used as a server, we refactored the whole thing a bit so that most classes can now be used in a standalone manner: this included separating logic from the UI and eliminating module-level and public variables where possible. Unfortunately, some UI dependencies are still present, in particular some ComponentOne OCXs used to handle reports and prints. All in all, this seemed to work just fine, until we had to deal with the COM threading model :O
Making sense of nonsense
Long story short, after a lot of digging and headaches we devised the current solution, which is outlined below:
we install the legacy app as usual, so that it registers its DLLs in the registry;
in our MVC solution we use System.Threading.Tasks tasks, one per request, to start the requested operation asynchronously. We assign the operation an ID and return this ID to the client. To start the task we call this method:
protected Task<TReturn> StartSTATask<TReturn>(Func<TReturn> function)
{
var task = Task.Factory.StartNew(
function,
System.Threading.CancellationToken.None,
TaskCreationOptions.None,
STATaskScheduler // property to store the scheduler instance
);
return task;
}
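A call site for this helper, as described above, looks roughly like the following sketch (names such as RunChiusuraMese and Operations are made up here):
// Start the COM work on an STA thread, hand the operation id back to the client
// immediately, and let it poll for progress/results later.
var operationId = Guid.NewGuid();
var task = StartSTATask(() => RunChiusuraMese(operationId));
Operations[operationId] = task; // hypothetical store keyed by operation id
return Json(new { id = operationId });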
the task is run using the STATaskScheduler. We modified it so that it spawns a new thread if the number of threads in the pool is set to 0.
/// <summary>Initializes a new instance of the StaTaskScheduler class with the specified concurrency level.</summary>
/// <param name="numberOfThreads">The number of threads that should be created and used by this scheduler.</param>
public StaTaskScheduler(int numberOfThreads)
{
// Validate arguments
//if (numberOfThreads < 1) throw new ArgumentOutOfRangeException("concurrencyLevel");
// Initialize the tasks collection
_tasks = new BlockingCollection<Task>();
if (numberOfThreads > 0)
{
// Create the threads to be used by this scheduler
_threads = Enumerable.Range(0, numberOfThreads).Select(i =>
{
var thread = new Thread(() =>
{
// Continually get the next task and try to execute it.
// This will continue until the scheduler is disposed and no more tasks remain.
foreach (var t in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(t);
}
});
thread.Name = "sta_thread_" + i;
thread.IsBackground = true;
thread.SetApartmentState(ApartmentState.STA);
return thread;
}).ToList();
// Start all of the threads
_threads.ForEach(t => t.Start());
}
}
/// <summary>Queues a Task to be executed by this scheduler.</summary>
/// <param name="task">The task to be executed.</param>
protected override void QueueTask(Task task)
{
if (_threads != null)
// Push it into the blocking collection of tasks
_tasks.Add(task);
else
{
var thread = new Thread(() => TryExecuteTask(task));
thread.Name = "sta_thread_task_" + task.Id;
thread.IsBackground = true;
thread.SetApartmentState(ApartmentState.STA);
thread.Start();
}
}
And in our base controller's OnActionExecuting method we initialize it like so:
STATaskScheduler = HttpContext.Application["STATaskScheduler"] as TaskScheduler;
if (null == STATaskScheduler)
{
STATaskScheduler = new StaTaskScheduler(0);
HttpContext.Application["STATaskScheduler"] = STATaskScheduler;
}
we use a thin wrapper to instantiate and call our COM libs through reflection:
// Libraries is a Dictionary containing the names of the registered dlls
protected object InitCom(Libraries lib)
{
return InitCom(lib, true);
}
protected virtual object InitCom(Libraries lib, bool setOperation)
{
var comObj = GetComInstance(lib);
var success = SetUpConnection(comObj);
if (!success)
throw new LeafOperationException(lib, "Errore durante la connessione: {1}".Printf(connectionString));
if(setOperation)
return InitOperation(comObj);
return comObj;
}
protected object GetComInstance(Libraries lib)
{
var comType = Type.GetTypeFromProgID(MALib[lib]);
var comObj = Activator.CreateInstance(comType);
return comObj;
}
protected virtual bool DisposeCom(object comObj)
{
var success = CloseConnection(comObj);
if(!success)
throw new LeafOperationException("Errore durante la chiusura della connessione: {1}".Printf(connectionString));
//Marshal.FinalReleaseComObject(comObj);
//comObj = null;
return success;
}
protected bool SetUpConnection(object comObj)
{
var serverName = connectionString.ServerName();
var catalogName = connectionString.CatalogName();
return Convert.ToBoolean(comObj.InvokeMethod("Set_ConnectionWeb", serverName, catalogName));
}
protected bool CloseConnection(object comObj)
{
return Convert.ToBoolean(comObj.InvokeMethod("Close_ConnectionWeb"));
}
protected object InitOperation(object comObj)
{
comObj.GetType().InvokeMember("OperationID", BindingFlags.SetProperty, null, comObj, new object[] { OperationId });
comObj.GetType().InvokeMember("OperationHash", BindingFlags.SetProperty, null, comObj, new object[] { OperationHash });
return comObj;
}
The rationale behind this is that we create a new instance of the class with each request and eventually release it when done. Read here to know why we commented out the ReleaseComObject part: basically, we were trading out-of-memory errors for a lot of "COM object that has been separated from its underlying RCW cannot be used" exceptions.
The object is then used like this within methods of various classes:
public bool ChiusuraMese()
{
try
{
PulisciMessaggi();
var comObj = InitCom(Libraries.Chiusura);
var byRefArgs = new int[] { 2 };
var oReturn = comObj.InvokeMethodByRef("ChiusuraMese", byRefArgs, IdDitta, PeriodoGiornaliera, IdDipendenti.PadLeft(), IdGruppoInstallazione, CodGruppoGestione);
DisposeCom(comObj);
return Convert.ToInt32(oReturn) == 0;
}
catch (Exception ex)
{
using (ErrorLog Log = new ErrorLog(System.Reflection.Assembly.GetExecutingAssembly().FullName, ex)) { }
aErrorMessage = ex.Message;
return false;
}
}
where InvokeMethodByRef is an extension method defined this way:
public static object InvokeMethodByRef(this object comObj, string methodName, int[] byRefArgs, params object[] args)
{
var modifiers = new ParameterModifier(args.Length);
byRefArgs.ToList().ForEach(index => { modifiers[index] = true; });
return comObj.GetType().InvokeMember(methodName, BindingFlags.InvokeMethod, null, comObj, args, new ParameterModifier[] { modifiers }, null, null);
}
Left out of the apartment
From what I understand, this whole apartment business is really hard to get right, with its cross-thread marshalling, message loops, yadda yadda whatnot. Add to that that we're using an old, unsupported technology to drive an application that was never architected for the purpose we're forcing it into. All that said, and taking for granted that the .NET side of things is working correctly, a couple of thoughts still wander in our minds. In particular:
is this the correct way to take advantage of multithreading with COM? Sometimes, multiple requests for the same object get stuck as if queued. This makes us wonder whether COM is actually sharing some instances between threads;
are we really creating and disposing of objects with each request, or does COM handle things differently under the hood? Apparently, we're getting public vars overwritten, so there's probably some resource contention and reentrancy somewhere we wouldn't expect;
is the setup correct? Are there alternatives that are easier to maintain and debug? Please keep in mind we have neither the time nor the resources to rewrite anything to any great extent. We could probably try something like creating an ActiveX EXE, but I wouldn't count on that.
what's the "least bad" way to use OCXs in a project of this kind (not using them is not an option at the moment)? Should we dispose of them in some particular way? We already checked that we set them to Nothing when finished, but maybe some other thread is still using them;
should we be aware of any particular COM limit related to our out-of-memory issue? We ran into the problem before when a form had more than 256 unique controls displayed. Maybe the same is happening here somehow? The error seems to be especially related to classes using UI components.
Things I've already read (and probably did not understand)
Before you point to resources online I should read, I add here some topics I've encountered, in random order:
About SingleUse/MultiUse
http://www.vb-helper.com/howto_activex_dll.html
https://msdn.microsoft.com/en-us/library/aa242108(v=vs.60).aspx
Not really much choice here, if we want to stick with ActiveX DLLs with forms.
About (apartment) threading
https://msdn.microsoft.com/en-us/library/aa716297(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx. By the way, this one probably hints that calls to objects are being serialized for access by other threads.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms680112%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
About debugging
https://msdn.microsoft.com/en-us/library/aa241684(v=vs.60).aspx
https://msdn.microsoft.com/en-us/library/aa716193%28v=vs.60%29.aspx?f=255&MSPPError=-2147217396
Could a stack dump be of any help when we face the error? I don't even know how to use WinDbg, so I'd like at least to know if that would be a total waste of time :D
We're kinda stuck here, as we've got no clue as to where or what to look for, so any kind of help would be really appreciated.
Comments
So it has been pointed out to me that I should read more about COM's threading model. I kind of expected that. Anyhow, to elaborate further, let me add some comments.
First, I don't have any control over CoInitialize or whatever; I'm just instantiating some VB6 DLLs. I guess COM is doing such and such under the hood. Fact is, I could not find anywhere what that is (edit - apparently .NET is already taking care of that for me, see the answer to this question: Do I need to call CoInitialize before interacting with COM in .NET?).
To recap:
I'm using STA threads from the client app
I'm using Activator.CreateInstance supposing it is actually creating a new object every time it is called. The call is done within a new STA thread.
Let's set aside for a moment questions about thread-safety in the actual DLLs. What I'm mainly interested in understanding here is if the described solution is a correct way (possibly not the best way, I'm aware of that) to exploit multithreading with COM libraries.
To cite some sources, to the best of my current knowledge I should be in the situation depicted in Figure 8.5 here: https://msdn.microsoft.com/en-us/library/aa716228(v=vs.60).aspx
I can't find any reason why this should not work, since as I said I'm supposing each object resides in its own apartment and has its own variables, plus a copy of global vars (see here: https://msdn.microsoft.com/en-us/library/aa261361(v=vs.60).aspx).
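For what it's worth, one quick way to verify that assumption would be to log the apartment of the thread that actually creates each instance, something like this (the ProgID is a placeholder):
// Run inside the STA task: confirm the COM object really is born on an STA thread,
// i.e. the Figure 8.5 situation, rather than on an MTA thread-pool thread.
Debug.WriteLine("Apartment: {0}, thread: {1}",
    Thread.CurrentThread.GetApartmentState(),
    Thread.CurrentThread.ManagedThreadId);
var comObj = Activator.CreateInstance(Type.GetTypeFromProgID("Vendor.Hypothetical.ProgId"));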

C# + WinRT + Monogame threading for network (Azure Mobile Service) operations

I have a loop in my application that loops through a set of entities in the following fashion
foreach(var entity in mEntities)
{
entity.Update();
}
Some of these entities maintain a networking component that will call an Azure Mobile Service in order to update their state on the server. An example is below:
public class TestEntity {
public int Index;
public int PropertyValue;
public async void Update()
{
Task.Run(() => {
MyAzureMobileServiceClient.Update(Index, PropertyValue);
});
}
}
The UI rendering is done by Monogame in a more traditional game loop fashion. Whilst I do not know the inner workings of it, I am fairly certain that it does not have an actual separate thread doing the work. In practice, this shows as the UI freezing every time this update is called.
I want to be able to run it "smoothly" in the background. In the old Windows model this could have easily been done by starting a new Thread that would handle it, but I don't understand the threading well enough in WinRT to understand what is wrong with my approach.
Any ideas?
[update] I also tried this:
Task.Factory.StartNew(async () =>
{
while(true) {
await Task.Delay(1000);
MyAzureMobileServiceClient.Update(Index, PropertyValue);
}
});
Every second, I get a mini-freeze like before.
[update 2] I tried this with a twist: I replaced the Azure Mobile Service client call with a standard HTTP request and it worked splendidly; no mini-freezes. Granted, it wasn't hitting the backend yet, but at least I have a workaround by doing the whole thing manually. I would prefer not to do that, however.
[update 3] This is getting peculiar. I realize I simplified the code in this question in order to keep it coherent in context; however, that appears to have hidden the true source of the problem. I tried the following things:
I built the HTTP request manually, called it inside Task.Run(), and it worked splendidly with no latency.
I called the Azure Mobile Service client Update DIRECTLY and there was no latency.
So this brings me to where the problem lies. I basically have a wrapper class around the Azure Mobile Service. The real call path looks roughly like this:
CommunicationClient.UpdateAsync(myObject);
public Task UpdateAsync(MyObjectType obj)
{
var table = mMobileServiceClient.GetTable<MyObjectType>();
return table.UpdateAsync(obj);
}
This causes the lag, but if I do this instead, it works with no latency whatsoever:
var client = CommunicationClient.MobileServiceClient;
var table = client.GetTable<MyObjectType>();
table.UpdateAsync(obj);
Soooooo... I should probably refactor the whole question. It's getting tl;dr.
I had a question about how to run things on a background thread, and the advice was to use the ThreadPool. I would suggest you look at my question and the answer to it; maybe you can pick up on some things and get it working on your end.
Create Backgroundthread Monogame
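For reference, a minimal sketch of what that thread-pool approach looks like in WinRT (the client call is the hypothetical one from the question above):
// Windows.System.Threading.ThreadPool (WinRT), as suggested in the linked answer:
Windows.System.Threading.ThreadPool.RunAsync(workItem =>
{
    // Hypothetical client call from the question; catch/log exceptions here so
    // failures on the background work item don't go unnoticed.
    MyAzureMobileServiceClient.Update(Index, PropertyValue);
});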

Completed Event not triggering for web service on some systems

This is a rather weird issue that I am facing with my WCF/Silverlight application. I am using WCF to get data from a database for my Silverlight application, and the completed event is not triggering for one WCF method on some systems. I have checked that the called method executes properly and returns the values. I have also checked via Fiddler, and it clearly shows that the response contains the returned values as well. However, the completed event is not getting triggered. On a few of the systems, everything is fine and I am able to process the returned value in the completed handler.
Any thoughts or suggestions would be greatly appreciated. I have tried searching around the web but without any luck :(
Following is the code. Calling the method:
void RFCDeploy_Loaded(object sender, RoutedEventArgs e)
{
btnSelectFile.IsEnabled = true;
btnUploadFile.IsEnabled = false;
btnSelectFile.Click += new RoutedEventHandler(btnSelectFile_Click);
btnUploadFile.Click += new RoutedEventHandler(btnUploadFile_Click);
RFCChangeDataGrid.KeyDown += new KeyEventHandler(RFCChangeDataGrid_KeyDown);
btnAddRFCManually.Click += new RoutedEventHandler(btnAddRFCManually_Click);
ServiceReference1.DataService1Client ws = new BEVDashBoard.ServiceReference1.DataService1Client();
ws.GetRFCChangeCompleted += new EventHandler<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>(ws_GetRFCChangeCompleted);
ws.GetRFCChangeAsync();
this.BusyIndicator1.IsBusy = true;
}
Completed Event....
void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
{
PagedCollectionView view = new PagedCollectionView(e.Result);
view.GroupDescriptions.Add(new PropertyGroupDescription("RFC"));
RFCChangeDataGrid.ItemsSource = view;
foreach (CollectionViewGroup group in view.Groups)
{
RFCChangeDataGrid.CollapseRowGroup(group, true);
}
this.BusyIndicator1.IsBusy = false;
}
Please note that this WCF service has lots of other methods as well, and all of them are working fine. I have a problem with only this method.
Thanks...
As others have noted, a look at some of your code would help. But some things to check:
(1) Turn off "Enable Just My Code" under Debug/Options/Debugging/General, and set some breakpoints in the Reference.cs file, to see whether any of the low-level callback methods there are getting hit.
(2) Confirm that you're setting the completed event handlers, and on the right instance of the proxy client. If you're setting the event handlers on one instance, and making the call on another, that could result in the behavior you're describing.
(3) Poke around with MS Service Trace Viewer, as described here, and see if there are any obvious errors (usually helpfully highlighted in red).
Likely there are other things you could check, but this will keep you busy for a day or so :-).
(Edits made after code posted)
(4) You might want to try defining your ws variable at the class level rather than the function. In theory, having an event-handler defined on it means that it won't get garbage collected, but it's still a little odd, in that once you're out of the function, you don't have a handle to it anymore, and hence can't do important things like, say, closing it.
(5) If you haven't already, try rebuilding your proxy class through the Add Service Reference dialog box in Visual Studio. I've seen the occasional odd problem pop up when the web service has changed subtly and the client wasn't updated to reflect the changes: some methods will get called successfully, others won't.
(6) If you're likely to have multiple instances of a proxy client open at the same time, consider merging them into one instance (and use the optional "object userState" parameter of the method call to pass the callback, so you don't run into the nasty possibility of multiple event handlers getting assigned). I've run into nasty problems in the past when multiple instances were stepping on each other, and my current best practice is to structure my code in such a way that there's only ever one client instance open at a time. I know that's not necessarily what MS says, but it's been my experience.
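A sketch of the userState idea from point (6), using the proxy from the question (the ApplyRosterView callback and its wiring are hypothetical):
// Pass the per-call callback through the generated "userState" overload so one shared
// client instance can serve many callers without re-assigning event handlers.
ws.GetRFCChangeAsync((Action<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>)ApplyRosterView);

void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
{
    var callback = e.UserState as Action<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>;
    if (callback != null)
        callback(e);
}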
This issue was caused by special characters in one of the fields returned from the DB, which the browser was not able to render. After considerable debugging and searching over the web, I was able to figure this out. I used regular expressions to remove these special characters in the WCF service, and the values returned from the method were then rendered successfully in various browsers on different systems. :)
Make sure you have checked 'Generate asynchronous operations' in your service reference. Right-click on the service reference and check the box. This solved it for me.

solution for RPC_E_ATTEMPTED_MULTITHREAD error caused by SPRequestContext caching SPSites?

I'm developing a solution for SharePoint 2007, and I'm using SPSecurity.RunWithElevatedPrivileges a lot, passing in UserToken of the SystemAccount.
After reading http://hristopavlov.wordpress.com/2009/01/19/understanding-sharepoint-sprequest/ I finally began to understand why I get these System.Runtime.InteropServices.COMException (0x80010102): Attempted to make calls on more than one thread in single threaded mode. (Exception from HRESULT: 0x80010102 (RPC_E_ATTEMPTED_MULTITHREAD)) errors, but there seems to be no solution - "known issue in the product"
The article is more than a year old. I wasn't able to find anything more recent and helpful, but I was hoping maybe someone else has?
My code goes like this
SPSecurity.RunWithElevatedPrivileges(delegate()
{
using (SPSite elevatedSite = new SPSite(web.Site.ID, web.Site.SystemAccount.UserToken))
{
using (SPWeb elevatedWeb = elevatedSite.OpenWeb(web.ID))
{
// some operations on lists and items obtained through elevatedWeb
}
}
});
The errors come up wherever such elevated code is used, and more often when more users are using these features, so I guess the elevated SPSite is perhaps getting cached and reused.
Is there any way to solve this? If my understanding is correct, how do I make SharePoint forget about the cached SPSites and use a fresh one instead?
Thanks
Solved it myself, after finally understanding what I'm actually doing there: by writing, for example, new SPSite(web.Site.ID, ...), I'm making the delegate, which seems to run on a new thread, reach into web, which lives on the original thread.
So the answer is: you have to put all the data you'll need (like various IDs, SystemAccount.UserToken, etc.) into variables before running the delegate, and you must not access any objects with an associated SPRequest (webs, lists, items, users...) from inside the delegate. And, of course, the same holds for data that comes out of the delegate - you can return a web ID, a list ID and an item ID, but you had better not return an SPListItem.
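In code, that looks roughly like this (a sketch based on the snippet above):
// Capture everything the delegate will need while still on the original thread,
// so the elevated code never touches SPRequest-backed objects from outside it.
Guid siteId = web.Site.ID;
Guid webId = web.ID;
SPUserToken systemToken = web.Site.SystemAccount.UserToken;

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite elevatedSite = new SPSite(siteId, systemToken))
    using (SPWeb elevatedWeb = elevatedSite.OpenWeb(webId))
    {
        // operate on lists/items via elevatedWeb; return plain IDs, not SPListItem objects
    }
});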
