I have two different classes that I am testing for sending files to the browser.
The first one, at http://pastebin.org/1187259, uses Range-specific headers in order to support resuming.
The second one, at http://pastebin.org/1187454, uses chunked reading to send large files.
Both work fine, with one difference: the first one is much slower than the second in terms of download speed. With the first one I cannot get past 80 KB/s, while with the second one I can download as fast as my connection allows.
I have run a few tests and the result was the same. Is this an illusion, or is there something in the first one that slows the download speed?
I also noticed that the first one seems to block other requests. For example, if I request a file from the server using the first one, the server will not respond to any of my other requests until the download finishes, even if I request a different page. It doesn't do that if I open separate sessions from different browsers.
Thanks.
At last! I managed to fix the issue by adding EnableSessionState="ReadOnly" to the download page.
See http://www.guidanceshare.com/wiki/ASP.NET_2.0_Performance_Guidelines_-_Session_State
"Use the ReadOnly Attribute When You Can
For pages that only need read access to session data, consider setting EnableSessionState to ReadOnly.
Why
Page requests that use session state internally use a ReaderWriterLock object to manage session data. This allows multiple reads to occur at the same time when no lock is held. When the writer acquires the lock to update session state, all read requests are blocked. Normally two calls are made to the database for each request. The first call connects to the database, marks the session as locked, and executes the page. The second call writes any changes and unlocks the session. By setting EnableSessionState to ReadOnly, you avoid blocking, and you send fewer calls to the database thus improving the performance.
"
Related
The essence of the problem is this: there is a controller with an action method that generates an Excel file. On request, the file needs to be generated and returned. Generating the file takes a long time, 1-2 hours, and during that time a "please wait" notification must be shown to the user. After generation finishes, the notification must be removed.
I could not find a solution that does what I want.
Sorry for my bad English.
public ActionResult DownloadFile()
{
    return new FileStreamResult(_exporter.Export(), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
}
You only get one bite at the response apple: if you return a file result, that is all you can return. The only way to handle this while giving the user status updates is to do the file creation out-of-band and then long-poll or use WebSockets to update the user periodically. The request to this action would merely queue up the file creation and then return a regular view result.
It's unwise to have particularly long-running actions take place within the request-response cycle anyway. A web server has a thread pool, often referred to as its "max requests", because each request needs a thread. Usually this defaults to something around 1,000, on the assumption that you will free the threads as quickly as possible. If 1,001 people tried to request this action at the same time, the 1,001st person would be queued until one of the other 1,000 threads freed up, meaning they could be waiting for hours before their request even starts. Even if you never see your site getting this kind of load, it's still an excellent vector for a DDoS attack: just send a few thousand requests to this URL and your server locks up.
Also, I have no idea what you're doing, but 1-2 hours to generate an Excel file is absolutely insane. Either you're dealing with way too much data at once, and sending back multi-gigabyte files that will likely fail to even open properly in Excel, or the process by which you're doing it is severely unoptimized.
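As a rough sketch of that queue-and-poll approach (ExportJobStore, the job-id handling and the action names are assumptions of mine, not an existing API), the controller might look something like this:
// Assumed sketch only: queue the export off the request thread and let the
// client poll for completion. Requires System.Threading.Tasks (.NET 4.5+);
// note that work queued this way is lost if the app pool recycles.
public ActionResult StartExport()
{
    Guid jobId = Guid.NewGuid();
    ExportJobStore.MarkPending(jobId);                 // hypothetical store

    Task.Run(() =>
    {
        var stream = _exporter.Export();               // the 1-2 hour operation
        ExportJobStore.MarkComplete(jobId, stream);
    });

    return Json(new { jobId });                        // client polls with this id
}

public ActionResult ExportStatus(Guid jobId)
{
    return Json(new { done = ExportJobStore.IsComplete(jobId) },
        JsonRequestBehavior.AllowGet);
}

public ActionResult DownloadExport(Guid jobId)
{
    return new FileStreamResult(ExportJobStore.GetResult(jobId),
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
}
The "please wait" notification then simply stays visible until ExportStatus reports that the job is done, at which point the page requests the download.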
I am using LINQ to SQL to read/write a database on a server. Right now only the creator of a database entry can change the corresponding information.
Does that guarantee that no concurrency issues can appear? I mean, is it then impossible for one entry to be changed simultaneously from two different locations? Or could read-concurrency problems also appear?
Thanks in advance
No; that won't help at all.
A single user can still send multiple concurrent requests from different tabs or browsers or machines.
Excluding the obvious but unlikely case of a "malicious" user using multiple windows, I wonder if stranger scenarios could happen:
The user sends a first request, which is executed on a thread by the web server, but for whatever reason this thread is preempted.
Later the same user sends another request that conflicts with the first one, but this request is executed on a thread that is not preempted and directly writes a new value to the DB.
The first thread is then resumed and finishes its work, writing the old value.
So when you have any doubt, put some safeguards in place; you can never be too careful, especially when programming ;)
In an ideal world yes. But there are several things to consider.
Can the same user submit changes from multiple locations?
If the TCP/IP connection is broken and reestablished will overlapping/out of order requests be an issue?
Is there any possibility of needing moderator or admin access to user data?
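As one concrete safeguard, LINQ to SQL's optimistic concurrency support can detect conflicting writes. A minimal sketch, assuming your own DataContext (MyDataContext here), an Entries table with a rowversion/timestamp column (or columns with UpdateCheck enabled), and an example entry id:
// Sketch of optimistic concurrency handling in LINQ to SQL (names are placeholders).
using (var db = new MyDataContext())
{
    int entryId = 42;                                  // example id
    var entry = db.Entries.Single(e => e.Id == entryId);
    entry.Title = "new value";

    try
    {
        db.SubmitChanges(ConflictMode.ContinueOnConflict);
    }
    catch (ChangeConflictException)
    {
        // Another request changed the same row in the meantime; here we
        // discard our change and take the values now in the database.
        db.ChangeConflicts.ResolveAll(RefreshMode.OverwriteCurrentValues);
    }
}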
I provide a web service for my clients that allows them to add a record to the production database.
I had an incident recently in which my client's programmer called the service in a loop, hitting my service thousands of times.
My question is: what would be the best way to prevent such a thing?
I thought of some ways:
1. At the entrance to the service, I could update counters for each client that calls the service, but that seems too clumsy.
2. Check the IP of the client who called the service, raise a flag each time he/she calls it, and then reset the flag every hour.
I'm positive that there are better ways and would appreciate any suggestions.
Thanks, David
First you need to have a look at the legal aspects of your situation: Does the contract with your client allow you to restrict the client's access?
That question is out of the scope of SO, but you must find a way to answer it, because if you are legally bound to process all requests, then there is no way around it. Also, the legal analysis of your situation may already impose some limitations on how you may restrict access, and that in turn will have an impact on your solution.
All those issues aside, and focusing on just the technical aspects: do you use some sort of user authentication? (If not, why not?) If you do, you can implement whatever scheme you decide on per user, which I think would be the cleanest solution (you don't need to rely on IP addresses, which is a somewhat ugly workaround).
Once you have a way of identifying a single user, you can implement several restrictions. The first ones that come to my mind are these:
Synchronous processing
Only start processing a request after all previous requests have been processed. This may even be implemented with nothing more than a lock statement in your main processing method (see the edit below). If you go for this kind of approach, be aware that all callers share one queue, so a single slow request delays everyone behind it.
Time delay between processing requests
This requires that a specific amount of time pass after one processing call before the next call is allowed. The easiest solution is to store a LastProcessed timestamp in the user's session. If you go for this approach, you need to think about how to respond when a new request comes in before it is allowed to be processed - do you send an error message to the caller? I think you should...
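A minimal sketch of that time-delay idea (the 60-second window and the session key are arbitrary example choices, and session state must be available to the service, e.g. an ASMX method with EnableSession = true):
// Sketch: reject calls that arrive too soon after the previous one.
private const string LastProcessedKey = "LastProcessed";

public bool IsCallAllowed(System.Web.SessionState.HttpSessionState session)
{
    object last = session[LastProcessedKey];
    if (last != null && DateTime.UtcNow - (DateTime)last < TimeSpan.FromSeconds(60))
    {
        return false;   // too soon - send an error message back to the caller
    }

    session[LastProcessedKey] = DateTime.UtcNow;
    return true;
}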
EDIT
The lock statement, briefly explained:
It is intended to be used for thread-safe operations. The syntax is as follows:
lock (lockObject)
{
    // do stuff
}
The lockObject needs to be an object, usually a private member of the current class. The effect is that if you have two threads that both want to execute this code, the first to arrive at the lock statement locks the lockObject. While it does its stuff, the second thread cannot acquire the lock, since the object is already locked. It just sits there and waits until the first thread releases the lock when it exits the block at the closing brace. Only then can the second thread lock the lockObject and do its stuff, blocking the lockObject for any third thread coming along until it has exited the block as well.
Careful: the whole issue of thread safety is far from trivial. (One could say that the only thing trivial about it is the many trivial errors a programmer can make ;-)
See here for an introduction to threading in C#.
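Applied to the web service, the "synchronous processing" restriction could be as simple as the following sketch (AddRecordCore and RecordData are placeholder names for whatever the service actually does):
// Sketch: serialize all processing through a single static lock object.
private static readonly object _processingLock = new object();

public void AddRecord(RecordData data)
{
    lock (_processingLock)
    {
        // Only one request at a time gets past this point; keep the
        // protected section as short as possible.
        AddRecordCore(data);
    }
}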
One way is to store a counter in the session and use that counter to prevent too many calls in a given period of time.
But if your user may try to get around that by sending a different cookie each time*, then you need to make a custom table that acts like the session but associates the user with the IP address rather than with the cookie.
One more point: if you block based on the IP alone, you may block an entire company that comes out of a single proxy. So the more correct but more complicated way is to have both the IP and the cookie connected with the user, and to know whether the browser allows cookies or not; if not, you block by IP. The difficult part is knowing about the cookie. On every call you can require the client to send a valid cookie that is connected to an existing session; if it does not, the browser did not keep cookies.
[*] The cookies are connected with the session.
[*] By making a new table to keep the counters, disconnected from the session, you can also avoid the session lock.
In the past I have used code intended for DoS-attack protection, but none of it worked well when you have many application pools and a complex application, so now I use a custom table as described. These are the two pieces of code that I have tested and used:
Dos attacks in your web app
Block Dos attacks easily on asp.net
How to find the clicks per second saved in a table: here is the part of my SQL that calculates the clicks per second. One of the tricks is that I keep adding clicks and only calculate the average once 6 or more seconds have passed since the last check. This is a snippet from the calculation, as an idea:
SET @cDos_TotalCalls = @cDos_TotalCalls + @NewCallsCounter
SET @cMilSecDif = ABS(DATEDIFF(millisecond, @FirstDate, @UtpNow))

-- I leave at least a 6-second difference before making the calculation
IF @cMilSecDif > 6000
    SET @cClickPerSeconds = (@cDos_TotalCalls * 1000 / @cMilSecDif)
ELSE
    SET @cClickPerSeconds = 0

IF @cMilSecDif > 30000
    UPDATE ATMP_LiveUserInfo
    SET cDos_TotalCalls = @NewCallsCounter, cDos_TotalCallsChecksOn = @UtpNow
    WHERE cLiveUsersID = @cLiveUsersID
ELSE IF @cMilSecDif > 16000
    UPDATE ATMP_LiveUserInfo
    SET cDos_TotalCalls = (cDos_TotalCalls / 2),
        cDos_TotalCallsChecksOn = DATEADD(millisecond, @cMilSecDif / 2, cDos_TotalCallsChecksOn)
    WHERE cLiveUsersID = @cLiveUsersID
Get the user's IP and insert it into the cache for an hour after the web service is used; this is cached on the server (note the cache key should include the IP so each caller gets its own entry):
string key = "UserIp_" + HttpContext.Current.Request.UserHostAddress;
HttpContext.Current.Cache.Insert(key, true, null, DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);
When you need to check whether the user has called within the last hour:
if (HttpContext.Current.Cache["UserIp_" + HttpContext.Current.Request.UserHostAddress] != null)
{
    // means the user called within the last hour
}
When a user visits an .aspx page, I need to start some background calculations in a new thread. The results of the calculations need to be stored in the user's Session, so that on a callback, the results can be retrieved. Additionally, on the callback, I need to be able to see what the status of the background calculation is. (E.g. I need to check if the calculation is finished and completed successfully, or if it is still running) How can I accomplish this?
Questions
How would I check on the status of the thread? Multiple users could have background calculations running at the same time, so I'm unsure how knowing which thread belongs to which user would work (though in my scenario, the only thread that matters is the thread originally started by user A, and user A does a callback to retrieve/check the status of that thread).
Am I correct in my assumption that passing the user's HttpSessionState "Session" variable to the new thread will work as I expect (i.e. I can then add things to their Session later)?
Thanks. Also, I have to say I might be confused about something, but it seems like the SO login system is different now, so I don't have access to my old account.
Edit
I'm now thinking about using the approach described in this article which basically uses a class and a Singleton to manage a list of threads. Instead of storing my data in the database (and incurring the performance penalty associated with retrieving the data, as well as the extra table, maintenance, etc in the database), I'll probably store the data in my class as well.
Edit 2
The approach mentioned in my first edit worked well. Additionally I had timers to ensure the threads, and their associated data, were both cleaned up after the corresponding timers called their cleanup methods. The Objects containing my data and the threads were stored in the Singleton class. For some applications it might be appropriate to use the database for storage but it seemed like overkill for mine, since my data is tied to a specific instance of a page, and is useless outside of that page context.
I would not expect session state to continue working in this scenario; the worker may have no idea who the user is, and even if it does (or, more likely, you capture this data and pass it into the worker), there is no way to store anything back into the session (updating session state is a step towards the end of the request pipeline; but if you aren't in the pipeline...?).
I suspect you might need to store this data separately, keyed on some unique property of the user (their id or cn), or otherwise invent a GUID. On a single machine it may suffice to store this in a synchronised dictionary (or similar), but on a farm/cluster you may need to push the data down a layer to your database or state server, and fetch it manually.
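For what it's worth, a minimal sketch of that "synchronised dictionary" idea, keyed by a GUID that the page keeps in Session (CalculationResult and the calculation itself are placeholders; this only works on a single machine):
using System;
using System.Collections.Concurrent;
using System.Threading;

public class CalculationResult
{
    public string Status { get; set; }
    public object Data { get; set; }
}

public static class CalculationTracker
{
    private static readonly ConcurrentDictionary<Guid, CalculationResult> _results =
        new ConcurrentDictionary<Guid, CalculationResult>();

    public static Guid Start(Func<CalculationResult> work)
    {
        Guid id = Guid.NewGuid();
        _results[id] = new CalculationResult { Status = "Running" };

        ThreadPool.QueueUserWorkItem(_ =>
        {
            CalculationResult result = work();      // the long calculation
            result.Status = "Finished";
            _results[id] = result;
        });

        return id;                                  // store this id in Session
    }

    public static CalculationResult GetStatus(Guid id)
    {
        CalculationResult result;
        return _results.TryGetValue(id, out result) ? result : null;
    }
}
The page would call Start, put the returned GUID in Session, and the callback would call GetStatus with that GUID to see whether the work has finished.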
When I enter a username and password on my site, and the username and password are correct, I have a C# method called on Page_Load that works on the database (it deletes records that are no longer required).
Whether there is one record or 100, I still have to wait for the page to load until that process is completed :(
I am using this string to load all the files, which will then be used to compare files:
HttpContext.Current.Request.PhysicalApplicationPath;
However, if I use a static path, e.g. c:/images, then things go bad :(
So what could be a possible solution?
You can start the record removal asynchronously:
Asynchronous Operations (ADO.NET)
Then your page load can finish before the removal operation does.
EDIT: Since you mention that you are using an Access DB, I guess that you are not losing the time deleting the records but in some other operation (I suspect closing the DB, see my comment to Amir's answer). The thing you should do now is benchmark, either by using a tool (see this question) or "manually", using the Stopwatch class. Anyway, before you try to optimize, use one of these methods to find out what is really causing the delay.
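A minimal "manual" benchmark along those lines (DeleteNonRequiredRecords is just a placeholder for the existing cleanup code):
// Sketch: time the cleanup step to see whether it is really the bottleneck.
var sw = System.Diagnostics.Stopwatch.StartNew();

DeleteNonRequiredRecords();   // placeholder for the existing delete logic

sw.Stop();
System.Diagnostics.Debug.WriteLine("Record cleanup took " + sw.ElapsedMilliseconds + " ms");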
Use Ajax and make it asynchronous as a web service.
Edit 1: What I mean is to move the code from Page_Load into a web service method, then call that web service from JavaScript after the page loads, sending it the information it needs to perform the operation. That way the client side appears more responsive. I'm assuming that the actions taken are not required to properly render your client-side code; if they are, you might consider updating the page after the web service returns. This could be done manually, through the built-in Ajax toolkit, or via a library such as jQuery.
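On the server side this can be as simple as a static page method, roughly like the sketch below (CleanUpRecords and DeleteNonRequiredRecords are placeholder names; the client would call it via a ScriptManager with EnablePageMethods="true", the Ajax toolkit, or jQuery):
// Sketch: expose the cleanup as a page method so the page renders immediately
// and the cleanup happens in a separate Ajax call after load.
[System.Web.Services.WebMethod]
public static void CleanUpRecords(string username)
{
    DeleteNonRequiredRecords(username);   // placeholder for the existing logic
}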
This doesn't sound like an async problem to me. Deleting 100 or even 1,000 records in a database shouldn't take more than a few milliseconds. If I were to guess, I would say you have not set up your indexes correctly, so instead of deleting those records using a quick index lookup, the database has to scan every record and check whether it's a match.