I am submitting HTTP POST requests via HttpWebRequest which contain a large amount of content. I would like to gzip the message content. Is this possible?
Does IIS 7 have to be configured to handle the compressed content? It has already been configured to serve compressed responses.
I've tried adding a Content-Encoding: gzip header and writing to the request stream wrapped in a GZipStream, but the server returns a 504 (Gateway Timeout), which seems odd.
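For reference, the client side looks roughly like this (a minimal sketch; the URL and payload are placeholders):

using System;
using System.IO;
using System.IO.Compression;
using System.Net;
using System.Text;

class GzipPostClient
{
    static void Main()
    {
        byte[] payload = Encoding.UTF8.GetBytes("large message content goes here");

        // Hypothetical endpoint URL.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
        request.Method = "POST";
        request.ContentType = "text/plain";
        request.Headers["Content-Encoding"] = "gzip";

        // Compress the body on the way out by wrapping the request stream.
        using (var requestStream = request.GetRequestStream())
        using (var gzip = new GZipStream(requestStream, CompressionMode.Compress))
        {
            gzip.Write(payload, 0, payload.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}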
I don't believe IIS7 supports gzip-compressed requests out of the box. Here's why: on my IIS7 machine, gzip.dll does not export any decompression methods.
c:\Windows\System32\inetsrv>c:\vc9\bin\dumpbin.exe -exports gzip.dll
Microsoft (R) COFF/PE Dumper Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

Dump of file gzip.dll

File Type: DLL

  Section contains the following exports for gzip.dll

    00000000 characteristics
    47919400 time date stamp Sat Jan 19 01:09:04 2008
        0.00 version
           1 ordinal base
           6 number of functions
           6 number of names

    ordinal hint RVA      name

          1    0 0000242D Compress
          2    1 00002E13 CreateCompression
          3    2 000065AE DeInitCompression
          4    3 000012EE DestroyCompression
          5    4 0000658D InitCompression
          6    5 000065B6 ResetCompression

  Summary

        1000 .data
        1000 .reloc
        1000 .rsrc
        6000 .text
I think this represents a change in gzip.dll. I believe prior versions of gzip.dll exported 12 methods, including 6 that performed decompression.
The vast majority of web servers do not support compressed request bodies. mod_deflate can be configured to support it on Apache but seldom actually is (as a zip-bomb is an easy potential DoS attack). I'm not aware of an IIS solution.
If you are talking to your own server, there is of course nothing stopping you from doing the compression at the application level. If you have to pass a standard form type for the backend to read, pick multipart/form-data, as URL-encoding would bloat the binary data of the compressed content parameter.
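As a sketch of what application-level handling could look like on the server (classic ASP.NET here; the handler name is made up), you can unwrap the gzipped body yourself when the client sets Content-Encoding: gzip:

using System.IO;
using System.IO.Compression;
using System.Web;

// Hypothetical generic handler that accepts a gzip-compressed POST body.
public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        Stream body = context.Request.InputStream;

        // Decompress at the application level if the client declared gzip.
        if (context.Request.Headers["Content-Encoding"] == "gzip")
            body = new GZipStream(body, CompressionMode.Decompress);

        using (var reader = new StreamReader(body))
        {
            string content = reader.ReadToEnd();
            // ... process content ...
            context.Response.StatusCode = 200;
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}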
I got the same error.
Solved by adding executionTimeout to web.config:
<httpRuntime maxRequestLength="1048576" executionTimeout="300" />
executionTimeout is in seconds.
Related
I'm writing a webservice that creates a file a user can download. The data source for that file is given by one or more URIs, so I end up using Stream and Reader a lot. All of them are in using blocks. Even in the endpoint that offers the file I'm using return File(byte[], string, string), and the source stream of the byte[] is disposed.
Since I'm trying to type the data I receive via strings, I'm using double.TryParse and DateTime.TryParse a lot.
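For context, the download endpoint is roughly shaped like this (a simplified sketch; OpenSourceStreamAsync is a hypothetical stand-in for reading the URI sources):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ExportController : ControllerBase
{
    [HttpGet("download")]
    public async Task<IActionResult> Download()
    {
        byte[] bytes;
        using (var source = await OpenSourceStreamAsync()) // hypothetical helper for the URI sources
        using (var buffer = new MemoryStream())
        {
            await source.CopyToAsync(buffer);
            bytes = buffer.ToArray(); // the whole file ends up as a byte[] on the managed heap
        }
        return File(bytes, "text/csv", "export.csv");
    }

    // Hypothetical: opens a stream over one of the source URIs.
    private Task<Stream> OpenSourceStreamAsync() => throw new System.NotImplementedException();
}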
I see some Garbage Collector runs, but they free very little (less than 1%).
But I'm observing the heap size growing with every request I send via Swagger.
Some Numbers:
Memory usage before 1st request: 90 MB
Memory usage after 1st, 2nd, 3rd request: 180 MB, 277 MB, 480 MB
File size: 200 KB - the same for every request.
This led me to these questions:
Where are the files stored that a web site hosted in IIS Express serves while debugging with Swagger? In memory or on disk?
Could there be Swagger overhead that explains this memory growth?
What else could be the source of this memory leak?
.NET 5.0
3rd-party library in use: CsvHelper (latest)
Consider the following code:
var container = new BlobContainerClient(...);
// fileStream is a stream delivering 10 MB of data
await container.UploadBlobAsync("name-of-blob", fileStream);
Using Fiddler Proxy to watch the HTTP requests, I can see that this ends up in 4 HTTP PUT requests (the address is 127.0.0.1 as I am locally testing using the Azurite emulator):
The first two requests (603 and 607) are 4 MB in size, the third one (613) is 2 MB in size and the fourth one (614) finally commits all sent blocks.
Instead of making 3 requests (4 MB + 4 MB + 2 MB) for the data, is it somehow possible to stream the 10 MB of data in one request to save some overhead?
As the data is sent in 4 MB chunks, does this mean that the Azure Storage client waits until it has read 4 MB from the fileStream before it starts sending, meaning 4 MB of RAM is used for caching? My intention in using a fileStream was to reduce memory usage by passing the stream straight through to Azure Blob Storage.
I am using Azure.Storage.Blobs version 12.8.0 (the latest stable version as I am writing this).
Ad 1) The maximum size of a single block in a PUT operation depends on the Azure Storage server version (see here). For testing purposes I just created a new storage account in Azure and started an UploadBlobAsync() operation with a 10.5 MB video file; Fiddler shows me this:
A single PUT operation with 10544014 bytes. Note the x-ms-version request header, which gives the client the ability to specify which API version it wants to use (see here). I suppose your local emulator is just using an older API version.
Ad 2) Yes, for larger files UploadBlobAsync() will chunk the request: it reads a set of bytes from the stream, performs a PUT, reads the next set of bytes, performs another PUT, and so on.
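If you want to influence how the upload is split, the client also lets you pass transfer options when uploading through a BlobClient. A sketch (the connection string, names and file are placeholders, and the exact property set may vary slightly between 12.x versions):

using System.IO;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class UploadWithTransferOptions
{
    static async Task Main()
    {
        // Placeholder connection: the local Azurite emulator, as in the question.
        var container = new BlobContainerClient("UseDevelopmentStorage=true", "my-container");
        var blob = container.GetBlobClient("name-of-blob");

        var options = new BlobUploadOptions
        {
            TransferOptions = new StorageTransferOptions
            {
                // Upper bound for attempting a single-request upload before splitting into blocks.
                InitialTransferSize = 100 * 1024 * 1024,
                // Block size used when the upload is split across multiple PUTs.
                MaximumTransferSize = 8 * 1024 * 1024
            }
        };

        using (FileStream fileStream = File.OpenRead("video.mp4")) // placeholder file
        {
            await blob.UploadAsync(fileStream, options);
        }
    }
}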
I'm a newbie to ASP.NET Core.
I'm writing a web API service which stores posted data to a database. In theory there will be about 300-400 requests per second to the server in the future, and the response time must be less than 10 seconds.
But first of all I tried to run some load tests with Locust.
I wrote a simple app with one controller and only one POST method, which simply returns Ok() without any processing.
I tried to create load on this service with 1000 users. My service runs under Ubuntu 16.04 with .NET Core 2.1 (2 Xeon 8175M with 8 GB of RAM). Locust runs from a dedicated computer.
But I see only ~400 RPS and a response time of about 1400 ms. For an empty action that is a very big value.
I turned off all logging and ran in production mode, but no luck - still ~400 RPS.
In a system monitor (I use nmon) I see that both CPUs are only 12-15% loaded (24-30% total). I have about 3 GB of free RAM, no network usage (about 200-300 KB/s) and no disk usage, so the system has the hardware resources to handle the requests.
So I think there is a problem with some configuration, or maybe with system resources like sockets, handles, etc.
I also tried to use libuv instead of managed sockets, but the result is the same.
In the Kestrel configuration I explicitly set Limits.MaxConcurrentConnections and MaxConcurrentUpgradedConnections to null (but that is the default value anyway).
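For reference, the limits I mean are set roughly like this (a sketch for .NET Core 2.1; Startup is the usual project startup class):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // null means "no limit" and is also the default.
                options.Limits.MaxConcurrentConnections = null;
                options.Limits.MaxConcurrentUpgradedConnections = null;
            })
            .UseStartup<Startup>()
            .Build();
}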
So, I have two questions:
- In theory, can Kestrel provide high RPS?
- If so, can you give me some advice on where to start (links, articles and so on)?
I have a problem with an ASP.NET MVC project hosted on IIS. I'm flooding the same request hundreds of times:
function Test(count){
    for(var i = 0; i < count; i++){
        $.ajax({
            url: "http://example.com?someparam=sth&test=" + i,
            context: document.body
        }).done(function() {
            console.log("done");
        });
    }
}
Test(500)
Here are the times taken by the individual requests in milliseconds (just a part of the sent requests):
221
215
225
429
217
228
227
209
236
355
213
224
257
249
223
211
227
1227
168
181
257
3241
201
244
130
198
283
1714
146
136
177
3304
294
868
772
2750
138
1283
221
775
136
235
792
278
641
1707
880
1711
As you can see there are peaks for some of the requests, and the time taken can be more than 10 times the average of the other requests.
I thought it could be a Garbage Collector issue, but I think it's not. I forced a GC on each request and had the same result; the delays were still there in the log.
This happens not only for my MVC project but also for an empty MVC project.
I created a new MVC project and sent lots of requests to Home/About. The result was the same.
I tried with an action that returns EmptyResult... same result.
If anybody knows why this happens and has a solution for the problem, or just has a suggestion, please share the information; I will be really grateful.
Also, I'm using .NET Memory Profiler, but I can't find out how to track each request and catch exactly the requests with delays. Can I do this with .NET Memory Profiler? If not, please suggest another profiler that will work for me.
Thank you!
EDIT: I also tried with an empty WebForms project. There were delays just for the first 5 requests... but that is surely because of IIS warming up. There were no delays for the next 1495 requests.
Your testing methodology has no way to identify where your bottleneck might be occurring, only that something is causing your delay.
Also, there is no mention of whether this is an isolated server. If you are hitting a production website, you'll be affected whenever pages are requested by other visitors to the site.
At the very least, you'll need to add a control to this; I would start by loading a plain text file from the same web server. Another point to note is that most web browsers limit the number of concurrent requests to the same host (historically two, typically around six in modern browsers), so your delay could be a backlog of AJAX requests queued up in the client.
I need to download with maximum available download speed in C#.
FlashGet, IDM and other download managers seem to be able to.
It's nothing special, they're simply opening up multiple download connections to the same file and use segmented downloading so each connection pulls down a different range of bytes from the file.
For more information see for example - http://www.ehow.com/how-does_4615524_download-accelerator-work.html
For the C# side you might want to look at existing .NET projects such as this - http://www.codeproject.com/Articles/21053/MyDownloader-A-Multi-thread-C-Segmented-Download-M
The magic is in multiple connections and the HTTP Range header.
Say a file is 100 MB in size and you plan to open 10 connections, so each connection downloads 10 MB. Open 10 HTTP connections to the same file, with each connection assigned a different 10 MB segment:
Connection 1 sends Range: bytes=0-10485759
Connection 2 sends Range: bytes=10485760-20971519
and so on.
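As a minimal sketch in C# (the URL and file names are placeholders), one such segment can be requested with HttpWebRequest.AddRange:

using System.IO;
using System.Net;

class SegmentDownload
{
    static void Main()
    {
        // Hypothetical file URL; this fetches only the first 10 MB segment.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/file.bin");
        request.AddRange(0, 10485759); // sends "Range: bytes=0-10485759"

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var body = response.GetResponseStream())
        using (var file = File.Create("segment0.part"))
        {
            body.CopyTo(file);
        }

        // Run one request like this per segment (each on its own thread or task)
        // and concatenate the parts in order once every segment has finished.
    }
}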
You have to set the TCP window size, but this feature is not supported in .NET.