Does a single web request to IIS stay on a single thread? - c#

I want to write a logging HTTP module that stores a list of log events for a single request in thread-local storage while the request executes. On EndRequest I want to write all the events back to persistent storage.
The question is: will one request map to one thread? That is, can I assume from anywhere in my code that I can add items to the IEnumerable and they will all be there together at the end of the request?

No. ASP.NET can potentially switch threads while processing a request. This is known as thread-agility.
There are only certain points where it can/will do this. I can't remember what they are off the top of my head - searching for a reference now...
But the short answer is NO: you can't rely on the same thread-local storage being accessible for the entire duration of the request.

You might be better off using Context.Items rather than thread-local storage - that collection is per request, so you don't need to worry about what the server is doing with its threads.
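A minimal sketch of what such a module could look like using Context.Items (the module name, item key, and Log helper are made up for illustration):

using System;
using System.Collections.Generic;
using System.Web;

public class RequestLoggingModule : IHttpModule
{
    private const string ItemsKey = "RequestLogEvents";

    public void Init(HttpApplication app)
    {
        // Start a fresh event list for every request.
        app.BeginRequest += (s, e) =>
            HttpContext.Current.Items[ItemsKey] = new List<string>();

        // Flush the accumulated events once the request ends.
        app.EndRequest += (s, e) =>
        {
            var events = (List<string>)HttpContext.Current.Items[ItemsKey];
            // write 'events' to your persistent store here
        };
    }

    // Call this from anywhere during the request to record an event.
    public static void Log(string message)
    {
        var events = HttpContext.Current.Items[ItemsKey] as List<string>;
        if (events != null) events.Add(message);
    }

    public void Dispose() { }
}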

The session remains constant for the duration of the request. Why not use that?

I would look at using ELMAH and/or log4net to get this done, as they are simple to use and now even simpler to install via the NuGet package manager.
http://code.google.com/p/elmah/
http://logging.apache.org/log4net/
Sometimes it's great to use code that is already tested and configured for the environment you need rather than rolling your own, but hey that's your call.

If you are prepared to potentially lose the log data you have accumulated, then you could wait until the end of the request to write to the log, irrespective of thread safety. If it's important to you that every event be logged, then I would suggest writing events to the log as they occur rather than batching them until the end of the request.

Related

How to reduce the execution time in C# while calling an API?

I am creating a Windows application (Windows Forms) which calls a web service to fetch data. I have to fetch information for 200+ clients, and for each client I have to fetch all of its users' information. A client can have 50 to 100 users. So, after getting the full list of clients, I call the web service in a loop for each client to fetch its users. This is a long process, and I want to reduce the execution time, which is currently 40-50 minutes for a single data fetch. Please suggest which approach would help reduce the execution time - multithreading or anything else best suited to my application.
Thanks in advance.
If you are in control of the web service, add a method that returns all the clients at once instead of one by one, to avoid round trips, as Michael suggested.
If not, make as many requests at the same time (not in sequence) as you can, to hide as much latency as possible. Each request costs at least one round trip (at least your ping's worth of delay), so if you make 150 requests in sequence you'll wait your ping to the server × 150 just on the network. If you split those requests into 4 batches and run each batch in parallel, you'll only wait roughly 150/4 × ping time. So the more requests you make concurrently, the less you wait.
I suggest you avoid calling the service in a loop for every user to get the details; instead, do that loop on the server and return all the data in one shot. Otherwise you will suffer a lot of useless latency caused by the thousands of calls, and not just because of server time or data-transfer time.
This is also a pattern, called Remote Facade, explained by Martin Fowler (building on the Gang of Four's Facade pattern):
any object that's intended to be used as a remote object needs a coarse-grained interface that minimizes the number of calls needed to get something done [...] Rather than ask for an order and its order lines individually, you need to access and update the order and order lines in a single call.
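As a rough illustration of what a coarse-grained contract could look like (assuming a WCF-style service; all type and member names here are invented):

using System.Collections.Generic;
using System.ServiceModel;

// One call returns every client together with its users.
[ServiceContract]
public interface IClientDirectoryService
{
    [OperationContract]
    List<ClientDto> GetAllClientsWithUsers();
}

public class ClientDto
{
    public int ClientId { get; set; }
    public string Name { get; set; }
    public List<UserDto> Users { get; set; }
}

public class UserDto
{
    public int UserId { get; set; }
    public string UserName { get; set; }
}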
In case you're not in control of the web service, you could try using a Parallel.ForEach loop instead of a plain foreach loop to query it.
MSDN has a tutorial on how to use it: http://msdn.microsoft.com/en-us/library/dd460720(v=vs.110).aspx
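A sketch of what that could look like, assuming the generated proxy exposes something like GetUsersForClient (the Client, User, and IClientService types are stand-ins for whatever your proxy actually provides):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

static List<User> FetchAllUsers(IEnumerable<Client> clients, IClientService service)
{
    var allUsers = new ConcurrentBag<User>();

    Parallel.ForEach(clients,
        new ParallelOptions { MaxDegreeOfParallelism = 8 },  // tune for your network/server
        client =>
        {
            // Each iteration may run on a different thread pool thread,
            // so several requests are in flight at the same time.
            foreach (var user in service.GetUsersForClient(client.Id))
                allUsers.Add(user);
        });

    return new List<User>(allUsers);
}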

How to prevent NHibernate long-running process from locking up web site?

I have an NHibernate MVC application that is using ReadCommitted Isolation.
On the site, there is a certain process that the user can initiate which, depending on the input, may take several minutes. Because the session is per request, the NHibernate session stays open for that entire time.
But while that runs, no other user can access the site (they can try, but their request won't go through until the long-running process is finished).
What's more, I also have a need to have a console app that also performs this long running function while connecting to the same database. It is causing the same issue.
I'm not sure what part of my setup is wrong, any feedback would be appreciated.
NHibernate is set up with fluent configuration and StructureMap.
Isolation level is set as ReadCommitted.
The session factory lifecycle is HybridLifeCycle (which on the web should be Session per request, but on the win console app would be ThreadLocal)
It sounds like your requests are waiting on database locks. Your options are really:
Break the long running process into a series of smaller transactions.
Use ReadUncommitted isolation level most of the time (this is appropriate in a lot of use cases).
Judicious use of the Snapshot isolation level, assuming you're using MS SQL Server 2005 or later (see the sketch below).
(N.B. I'm assuming the long-running function does a lot of reads/writes and the requests being blocked are primarily doing reads.)
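If you do go the Snapshot route, a minimal sketch of what that could look like with NHibernate (the session handling here is illustrative, not your actual StructureMap setup):

using System.Data;
using NHibernate;

// Snapshot isolation must first be enabled on the database (SQL Server):
//   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
static void RunLongProcess(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction(IsolationLevel.Snapshot))
    {
        // ... long-running reads here no longer block writers, and vice versa ...
        tx.Commit();
    }
}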
As has been suggested, breaking your process down into multiple smaller transactions will probably be the solution.
I would suggest looking at something like Rhino Service Bus or NServiceBus (my preference is Rhino Service Bus - I find it much simpler to work with personally). What that allows you to do is separate the functionality down into small chunks, but maintain the transactional nature. Essentially with a service bus, you send a message to initiate a piece of work, the piece of work will be enlisted in a distributed transaction along with receiving the message, so if something goes wrong, the message will not just disappear, leaving your system in a potentially inconsistent state.
Depending on what you need to do, you could send an initial message to start the processing, and then after each step, send a new message to initiate the next step. This can really help to break down the transactions into much smaller pieces of work (and simplify the code). The two service buses I mentioned (there is also Mass Transit), also have things like retries built in, and error handling, so that if something goes wrong, the message ends up in an error queue and you can investigate what went wrong, hopefully fix it, and reprocess the message, thus ensuring your system remains consistent.
Of course whether this is necessary depends on the requirements of your system :)
Another, but more complex, solution would be:
You build a background robot application which runs on one of the machines
This background worker robot can receive "worker jobs" (the ones initiated by the user)
The robot then processes the jobs step by step in the background
Pitfalls are:
- you have to program this robot to be very stable
- you need to monitor the robot somehow
Sure, this involves more work - on the flip side, you have the option to integrate more job types, enabling your system to process different things in the background.
I think the design of your application / SQL statements has a problem. Unless you are Facebook, I don't think any process should take this long; it is better to review your design and check where the bottleneck is, instead of trying to keep this long-running process going.
Also, sometimes an ORM is not a good fit for every scenario - did you try using stored procedures?

Prevent calling a web service too many times

I provide a web service for my clients which allows them to add a record to the production database.
I had an incident lately in which one of my client's programmers called the service in a loop, hitting my service thousands of times.
My question is: what would be the best way to prevent such a thing?
I thought of some ways:
1. At the entrance to the service, I can update counters for each client that calls the service, but that looks too clumsy.
2. Check the IP of the client who called the service, raise a flag each time he/she calls it, and then reset the flag every hour.
I'm positive that there are better ways and would appreciate any suggestions.
Thanks, David
First you need to have a look at the legal aspects of your situation: Does the contract with your client allow you to restrict the client's access?
This question is out of the scope of SO, but you must find a way to answer it, because if you are legally bound to process all requests, then there is no way around it. Also, the legal analysis of your situation may already include some limitations on the ways in which you may restrict access; that in turn will have an impact on your solution.
All those issues aside, and just focusing on the technical aspects: do you use some sort of user authentication? (If not, why not?) If you do, you can implement whatever scheme you decide on per user, which I think would be the cleanest solution (you don't need to rely on IP addresses, which is a somewhat ugly workaround).
Once you have a way of identifying a single user, you can implement several restrictions. The first ones that come to my mind are these:
Synchronous processing
Only start processing a request after all previous requests have been processed. This may even be implemented with nothing more than a lock statement in your main processing method. If you go for this kind of approach, bear in mind that callers will simply queue up and wait for earlier requests to finish.
Time delay between processing requests
This requires that after one processing call, a specific amount of time must pass before the next call is allowed. The easiest solution is to store a LastProcessed timestamp in the user's session. If you go for this approach, you need to start thinking about how to respond when a new request comes in before it is allowed to be processed - do you send an error message to the caller? I think you should...
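A minimal sketch of the time-delay idea, assuming session state is available to the service (the session key and the 10-second threshold are arbitrary placeholders):

using System;
using System.Web;

// Returns true if enough time has passed since this user's last call.
static bool IsAllowedToProcess()
{
    var session = HttpContext.Current.Session;
    var last = session["LastProcessed"] as DateTime?;

    if (last.HasValue && DateTime.UtcNow - last.Value < TimeSpan.FromSeconds(10))
        return false;  // too soon - reject the request or report an error to the caller

    session["LastProcessed"] = DateTime.UtcNow;
    return true;
}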
EDIT
The lock statement, briefly explained:
It is intended to be used for thread-safe operations. The syntax is as follows:
lock(lockObject)
{
// do stuff
}
The lockObject needs to be an object, usually a private member of the current class. The effect is that if you have two threads which both want to execute this code, the first one to arrive at the lock statement locks the lockObject. While it does its stuff, the second thread cannot acquire a lock, since the object is already locked. It just sits there and waits until the first thread releases the lock when it exits the block at the }. Only then can the second thread lock the lockObject and do its stuff, blocking the lockObject for any third thread coming along, until it has exited the block as well.
Careful: the whole issue of thread safety is far from trivial. (One could say that the only thing trivial about it is the many trivial errors a programmer can make ;-)
See here for an introduction to threading in C#
One way is to store a counter in the session and use it to prevent too many calls per unit of time.
But if your user tries to get around that and sends a different cookie each time*, then you need a custom table that acts like the session but connects the user to the IP, not to the cookie.
One more point: if you block based on the IP alone, you may block an entire company coming out of a single proxy. So the final, correct, but more complicated way is to have both the IP and the cookie connected with the user, and to know whether the browser accepts cookies or not; if not, then you block by IP. The difficult part here is knowing about the cookie. On every call you can force the client to send a valid cookie that is connected with an existing session; if it doesn't, then the browser does not have cookies.
[ * ] The cookies are connected with the session.
[ * ] By keeping the counters in a new table, disconnected from the session, you can also avoid the session lock.
In the past I have used code intended for DoS attacks, but none of it works well when you have many pools and a complex application, so now I use a custom table as described. These are the two articles whose code I have tested and used:
Dos attacks in your web app
Block Dos attacks easily on asp.net
To find the clicks per second, I save them in a table. Here is the part of my SQL that calculates the clicks per second. One of the tricks is that I keep adding clicks and only calculate the average if 6 or more seconds have passed since the last check. This is a code snippet from the calculation, as an idea:
SET @cDos_TotalCalls = @cDos_TotalCalls + @NewCallsCounter
SET @cMilSecDif = ABS(DATEDIFF(millisecond, @FirstDate, @UtpNow))
-- leave at least a 6 second difference before making the calculation
IF @cMilSecDif > 6000
    SET @cClickPerSeconds = (@cDos_TotalCalls * 1000 / @cMilSecDif)
ELSE
    SET @cClickPerSeconds = 0
IF @cMilSecDif > 30000
    UPDATE ATMP_LiveUserInfo SET cDos_TotalCalls = @NewCallsCounter, cDos_TotalCallsChecksOn = @UtpNow WHERE cLiveUsersID = @cLiveUsersID
ELSE IF @cMilSecDif > 16000
    UPDATE ATMP_LiveUserInfo SET cDos_TotalCalls = (cDos_TotalCalls / 2),
        cDos_TotalCallsChecksOn = DATEADD(millisecond, @cMilSecDif / 2, cDos_TotalCallsChecksOn)
    WHERE cLiveUsersID = @cLiveUsersID
Get the user's IP and insert it into the cache for an hour after they use the web service; this is cached on the server:
string ip = HttpContext.Current.Request.UserHostAddress;
HttpContext.Current.Cache.Insert(ip, true, null, DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);
When you need to check whether the user has called within the last hour:
if (HttpContext.Current.Cache[ip] != null)
{
    // means the user called within the last hour
}

Building a scalable ASP.NET MVC Web Application

I'm currently in the process of building an ASP.NET MVC web application in c#.
I want to make sure that this application is built so that it can scale out in the future without the need for major re-factoring.
I'm quite keen on using some sort of queue to post any writes to my database to, with a process that polls that queue asynchronously and performs the update. Once this data has been written to the database, the client then needs to be updated with the new information. The implication here is that the process of writing the data back to the database could take a short while, based on business rules executing on the server.
My question is: what would be the best way to handle the update from the client/browser perspective?
I'm thinking along the lines of posting the data back to the server, adding it to the queue, immediately sending a response to the client, and then polling at some frequency to get the updated data. Any best practices or patterns on this would be appreciated.
Also, in terms of reading data from the database, would you suggest any particular techniques, or would reading straight from the DB be sufficient given my scenario?
Update
Thought I'd post an update on this as it's been a while. We've actually ended up using Windows Azure but the solution is applicable to other platforms.
What we've ended up doing is posting messages/commands to a Windows Azure Queue. This is a very quick operation that returns immediately. We then have a worker role which processes these messages on another thread. This allows us to minimize DB writes/updates on the web role, in theory allowing us to scale more easily.
We handle informing the user via emails or even silently depending on the type of data we are dealing with.
Not sure if this helps, but why don't you have an auto-refresh on the page every 30 seconds, for example? This is sometimes how news feeds work on sports websites, saying the page will be updated every x minutes.
<meta http-equiv="refresh" content="120;url=index.aspx">
Why not let the user manually poll the status of the request? This is how your typical e-commerce app is implemented. When you purchase something online, the order is submitted to a queue for fulfillment. After it's submitted, the user is presented with a "Thank you for your order" page and a link where they can check the status of the order. The user can visit the link anytime to check the status, no need for an auto-poll mechanism.
Is your scenario so different from this?
Sorry, in my previous answer I might have misunderstood. I was talking about a "queue" as something stored in a SQL DB, but on reading your post again it seems you may be talking about a separate message queueing component like MSMQ or JMS?
I would never put a message queue in the front end, between a user and the backend SQL DB. Queues are good for scaling across time, which is suitable between backend components, where variances in processing times are acceptable (e.g. order fulfillment)... when dealing with users, this variance is usually not acceptable.
While I don't know if I agree with the logic of why, I do know that something like jQuery is going to make your life a LOT easier. I would suggest making a RESTful web API that your client-side code consumes. For example, you want to post a new order to the system and keep the client responsive? Make a POST to www.mystore.com/order/create and have it return the URI used to access the order (i.e. the order number), such as www.mystore.com/order/1234. That response is then stored in the client code, and a jQuery call is set up to poll for a response or stop polling on an error.
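On the server side, such an endpoint could look roughly like this in ASP.NET MVC (the controller, OrderInputModel, and OrderQueue helper are invented for illustration):

using System.Web.Mvc;

public class OrderController : Controller
{
    [HttpPost]
    public ActionResult Create(OrderInputModel input)
    {
        int orderId = OrderQueue.Enqueue(input);  // hypothetical queue helper
        // Return the URL the client should poll for status updates.
        return Json(new { statusUrl = Url.Action("Status", new { id = orderId }) });
    }

    [HttpGet]
    public ActionResult Status(int id)
    {
        // Client-side code polls this action until the order is processed.
        return Json(new { id, state = OrderQueue.GetState(id) },
                    JsonRequestBehavior.AllowGet);
    }
}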
For further reading check out this Wikipedia article on the concept of REST.
Additionally, you might consider the Reactive Extensions for .NET, and within that check out the RxJS sub-project, which has some pretty slick ways of handling the polling problem without making you write the polling code yourself. Fun things to play with!
Maybe you can add a "pending transactions" area to the UI. When you queue a transaction, add it to the user's "pending transactions" list.
When it completes, show that in the user's "pending transactions" list the next time they request a new page.
You can make a completed transaction stay listed until the user clicks on it, or for a predetermined length of time.

How to implement a job that runs every hour but can also be triggered from .aspx pages?

I need a method to run every so often that does some database processing. However, I may need it to be triggerable by an admin on the site. But I don't want this method being run more than once at the same time, as this could cause issues with the way it hits the database.
For example, could I...
Create a singleton class that runs the method on a timer, and instantiate it in the global.asax file. Then, since it's a singleton, I can call it from my normal .aspx pages and run the method whenever I want. I would probably need to use the "lock" feature of C# to check whether the method is already running.
I heard some talk lately that Singletons are "evil", but this seems like the perfect fit for it. What do you think? Thanks in advance.
Timers and locks (that are intended to synchronize access to the database) are a bad idea on the web; you may have zero, one or many app pools on different servers. They may recycle at any time, and won't be spun up until needed. Basically, this won't prevent the DB being hammered from multiple sources.
Personally, I'd be tempted either to write a service to do this work (either DB polling, or via WCF etc.), or to use the DB (an SP or similar) - set a flag in a table row to say "in progress", do the work at the DB, and clear the flag (duplicate attempts exit immediately while it's in progress).
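A rough sketch of the flag idea (the Jobs table, its columns, and the connection string are invented for illustration); the single UPDATE makes the check-and-set atomic:

using System.Data.SqlClient;

// Returns true if we acquired the "in progress" flag and should run the job.
static bool TryStartJob(string connectionString, string jobName)
{
    const string sql =
        "UPDATE Jobs SET InProgress = 1 WHERE JobName = @name AND InProgress = 0";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@name", jobName);
        conn.Open();
        return cmd.ExecuteNonQuery() == 1;  // 0 rows means someone else holds the flag
    }
}

// Remember to clear the flag (InProgress = 0) in a finally block when the work is done.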
I would do it this way
Build a normal ASP.NET page which does the processing
Borrow LFSR Consulting's idea of a flag in the DB, which does the work of checking whether the process is currently running
Use a normal cron job or the Windows Task Scheduler to call the web page on a regular basis.
And Singletons aren't evil they just get abused easily.
Another option, which Joel Spolsky mentioned in one of the SO podcasts (I believe it was #20-something), is to insert an empty Cache object on application start with a certain expiration date, and in the CacheItemRemovedCallback make a call out to a page or do some work, and then re-insert the empty cache object.
I'm probably horribly misquoting him, so I recommend you listen to it or look through the transcripts for yourself.
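A hedged sketch of that trick (the key name, the one-hour interval, and DoWork are placeholders), typically wired up from Application_Start in Global.asax:

using System;
using System.Web;
using System.Web.Caching;

public static class CacheScheduler
{
    private const string Key = "HourlyJobTrigger";

    public static void Register()
    {
        // The dummy item's expiration is what drives the job.
        HttpRuntime.Cache.Insert(
            Key,
            new object(),
            null,
            DateTime.UtcNow.AddHours(1),
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        DoWork();     // hypothetical method containing the database processing
        Register();   // schedule the next run
    }

    private static void DoWork() { /* ... */ }
}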
What about just setting up a flag in the database and checking that to determine if the job is running or not? Seems simpler IMO.
The canonical way to write a singleton ends up not being thread safe. Especially in a webby environment, where threads needn't even be on the same machine!
If you really want to do a "singleton", think of it as a service that you only ever deploy to one machine. Then use the transactional semantics of your database like Marc Gravell suggests to synchronize the locks.
We've done similar things by using a Web Service to do the backend processing, then writing a Desktop App to call it on whatever schedule we need. We can then run that app on a server, or an admin can run it directly from their PC to trigger the job.
Edit: After seeing your revision that you don't want them to run simultaneously - we have usually just controlled that with a database flag, like a few others have said; nothing fancy, but it gets the job done.
Set an application-wide variable to denote that the process is running. That should be a little easier than storing the variable in the database, right?
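A minimal sketch of that (note the caveat from the earlier answers: a static flag only guards a single app domain, not a web farm); the class and method names are illustrative:

using System;
using System.Threading;

// 0 = idle, 1 = running. Only valid within one app domain / worker process.
public static class JobGuard
{
    private static int _running;

    public static void RunIfIdle(Action job)
    {
        // Atomically flip the flag; a second caller sees it already set and returns.
        if (Interlocked.CompareExchange(ref _running, 1, 0) != 0)
            return;

        try { job(); }
        finally { Interlocked.Exchange(ref _running, 0); }
    }
}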
