My pages are automatically being compressed by IIS7 with GZIP.
That is great... but for one particular page, I need to stream it to the user, calling Response.Flush() when needed. But when the output is being compressed, IIS seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to flush the content out to the user.
Is there a way that I can have this one page opt out of the compression?
One possible option
I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it, e.g. Response.ContentType = "x-text/html". This works okay with IE8, which falls back to displaying the HTML, but Firefox will ask the user what to do with the unknown file type.
This could work, if there was another Mime Type I could use that browsers would accept as HTML, that is not matched in the applicationhost.config. For reference, these are the mime types that will be compressed:
<add mimeType="text/*" enabled="true" />
<add mimeType="message/*" enabled="true" />
<add mimeType="application/x-javascript" enabled="true" />
<add mimeType="application/atom+xml" enabled="true" />
<add mimeType="application/xaml+xml" enabled="true" />
Other options?
Are there other options to opt out of compression?
It may not be possible to disable compression for a certain page, but you can for a directory.
This describes how to disable static compression, but the same approach may work for dynamic compression (from http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/502ef631-3695-4616-b268-cbe7cf1351ce.mspx?mfr=true):
To disable static compression for only a single directory, first enable global static compression (if it is disabled) and then disable static compression at that directory. For example, to disable static compression for a directory at http://www.contoso.com/Home/StyleSheets, perform the following steps:
Enable global static compression by executing the following command at a command prompt:
adsutil set w3svc/filters/compression/parameters/HcDoStaticCompression true
Disable static compression at this directory by executing the following command at a command prompt:
adsutil set w3svc/1/root/Home/StyleSheets/DoStaticCompression false
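On IIS7 itself, another avenue that may be worth trying is scoping the <urlCompression> element to a single URL with a <location> tag in web.config. This is an unverified sketch; "StreamingPage.aspx" is a placeholder for your streaming page:
<configuration>
  <location path="StreamingPage.aspx">
    <system.webServer>
      <!-- disable compression for just this URL -->
      <urlCompression doDynamicCompression="false" doStaticCompression="false" />
    </system.webServer>
  </location>
</configuration>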
Not sure I like this but maybe worth mentioning:
Disable GZIP compression for IE6 clients
You could use a custom made compression module, like this one:
HTTP compression of WebResource.axd and pages in ASP.NET
Using such a module, it should be easy to customize which files to include/exclude.
I know of no way for a page to disable compression on itself programmatically during the request. However, you can work around the compression by sending some extra padding garbage, enough for gzip to emit a new block. Your padding data should be as random as possible so it doesn't compress well, filling the deflate buffer faster.
The actual amount of data to send depends on the compression module's configuration.
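As a rough sketch of that idea (the 8 KB figure is a guess, not a measured value; tune it against your compression module's buffer size):
// Emit hard-to-compress padding inside an HTML comment, then flush.
// PADDING_SIZE is an assumption; adjust it to your server's gzip buffer.
const int PADDING_SIZE = 8 * 1024;
var sb = new System.Text.StringBuilder(PADDING_SIZE);
while (sb.Length < PADDING_SIZE)
    sb.Append(Guid.NewGuid().ToString("N")); // pseudo-random hex noise
Response.Write("<!-- " + sb + " -->");
Response.Flush();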
If you set Response.BufferOutput = false, it will stop the built-in compression from working, albeit not cleanly: you may get event log warnings that headers cannot be added after they have already been sent to the client.
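For example, in the page that needs streaming (a minimal sketch):
protected void Page_Load(object sender, EventArgs e)
{
    // Unbuffered output defeats the compression module's buffering,
    // with the header-warning caveat mentioned above.
    Response.BufferOutput = false;
}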
If you need a solution which depends only on C#, you may adapt this method I have written to cope with a problem in the Android Browser:
/// <summary>
/// Alters the current HTTP request only for Android user agents, in order to disable web page compression so the Android browser will not cut off most of the page content, based on the Content-length HTTP header.
/// </summary>
public static void fixAndroidPageDisplay()
{
    HttpContext c = HttpContext.Current;
    if (c == null) return;

    HttpRequest r = c.Request;
    if (r == null || r.UserAgent == null) return;

    if (r.UserAgent.ToLowerInvariant().Contains("android"))
    {
        HttpResponse rsp = c.Response;
        if (rsp != null)
        {
            // Look for a Content-Encoding header on the response.
            // (Reading response headers requires the IIS7 integrated pipeline.)
            string ce = null;
            foreach (string s in rsp.Headers.Keys)
            {
                if (s != null && s.ToLowerInvariant().Equals("content-encoding"))
                {
                    ce = s;
                }
            }
            if (ce != null)
            {
                // Neutralize compression: overwrite the Content-Encoding value
                // and reset the response filter to the raw output stream.
                rsp.Headers[ce] = "text/html";
                rsp.Filter = rsp.OutputStream;
            }
        }
    }
}
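If you adapt this, one possible place to call it from is Global.asax. This is an untested sketch; the right event depends on when your compression module adds its Content-Encoding header, and "PageUtils" is a placeholder class name:
// Global.asax.cs -- PreSendRequestHeaders fires once response headers exist
// but before they go out; whether resetting the filter still works this
// late depends on your compression module.
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    PageUtils.fixAndroidPageDisplay();
}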
I am using the S3Reader plugin with ImageResizer to read and deliver resized images from Amazon S3.
I am having trouble getting it working in my production environment, mainly because I am unable to see what is going on under the covers.
I have added substantial logging, and I know that the ImageMissing event is being fired when I request the expected image.
If I check the URL manually, the image is there, so the only thing I can think is that something goes wrong somewhere in the processing of ImageResizer via the S3Reader plugin.
So how can I see the URL that ImageResizer is using to request the image from Amazon?
I suspect that because my bucket is in the Asia Pacific region, it is somehow not using the correct URL.
Some things to note: I only process images on the media subdomain, and I rewrite the URL (could this impact it too?).
My code and config follow:
<resizer>
  <diskcache dir="~/app_data" autoClean="true" />
  <clientcache minutes="1440" />
  <plugins>
    <add name="MvcRoutingShim" />
    <add name="S3Reader" buckets="media.domain.com" useSubdomains="true" />
    <add name="DiskCache" />
  </plugins>
</resizer>
private static void ImageResizer_ReWrite(IHttpModule sender, HttpContext context, IUrlEventArgs args)
{
    string subDomain = context.Request.Url.SubDomain();
    if (string.IsNullOrWhiteSpace(subDomain) || subDomain != AppSettings.MediaSubDomain)
        return;

    args.VirtualPath = string.Format("/s3/{0}", AppSettings.AmazonS3BucketName) + args.VirtualPath;
    Logger.Error("New VirtualPath: " + args.VirtualPath);
}

private static void ImageResizer_OnPostAuthorizeRequestStart(IHttpModule sender2, HttpContext context)
{
    var subDomain = context.Request.Url.SubDomain();
    if (string.IsNullOrWhiteSpace(subDomain) || subDomain != AppSettings.MediaSubDomain)
        return;

    Config.Current.Pipeline.SkipFileTypeCheck = true;
    Config.Current.Pipeline.ModifiedQueryString["cache"] = ServerCacheMode.Always.ToString();
    Logger.Error("ImageResizer Process: " + context.Request.RawUrl);
}
There are no warnings or errors in the debug trace, and I receive a 404 when I expect the image to be returned.
Amazon S3 suggests using the subdomain address method to ensure optimal performance for non-US regions - which is what S3Reader does by default (and which you have also enabled).
As such, your bucket name can't have periods or leading dashes (but you can have embedded dashes).
Assuming the value of AmazonS3BucketName is "media.domain.com", this would be the reason for the image lookup failure.
See Notes on bucket naming at the bottom of the S3Reader plugin documentation page.
I have created one website which has two modules:
ADMIN
USER
They are hosted on different domains. When a user opens the USER domain, say abc.com, they can register their company and upload a photo, and the uploaded photo goes into the Company_Logo folder.
Now suppose the ADMIN domain is xyz.com. I want the admin to open xyz.com, see the photo uploaded from abc.com, and be able to replace that uploaded photo on abc.com, in the Company_Logo folder.
In short: a photo is uploaded from the user side on abc.com and should be replaceable from the admin side on xyz.com. How can I do that?
So you have two different sites, hosted on different domains and perhaps even different servers, and you want site A to notify site B when some file has been uploaded. You then want to be able to alter that file on site A from site B.
Seems to me you need to create some sort of API on site A, that lets users (admins) from site B check recently uploaded files and also lets them overwrite it.
Okay, this can be done but you'll need to use an HttpHandler. You can find a good example here, but I'll spell out the important parts. I cannot feasibly write the entire handler for you here.
First, let's build a class in the web project and call it ImageHandler ...
public class ImageHandler : IHttpHandler
{
}
... next let's implement the interface ...
public bool IsReusable
{
get { return false; }
}
public void ProcessRequest(HttpContext context)
{
    // find out what we're trying to do first
    string method = context.Request.HttpMethod;
    switch (method)
    {
        case "GET":
            // read the query string for the document name or ID
            // read the file in from the shared folder
            // write those bytes to the response, ensuring to set the Response.ContentType
            // and also remember to issue Response.Clear()
            break;
        case "PUT":
            // read the Headers from the Request to get the byte[] of the file to CREATE
            // write those bytes to disk
            // construct a 200 response
            break;
        case "POST":
            // read the Headers from the Request to get the byte[] of the file to UPDATE
            // write those bytes to disk
            // construct a 200 response
            break;
        case "DELETE":
            // read the query string or Headers to identify the file to DELETE
            // delete that file from disk
            // construct a 200 response
            break;
    }
}
... finally we need to setup the handler in the web.config ...
<configuration>
  <system.web>
    <httpHandlers>
      <!-- remember that you need to replace the {YourNamespace} with your fully qualified -->
      <!-- namespace and you need to replace {YourAssemblyName} with your assembly name -->
      <!-- EXCLUDING the .dll -->
      <add verb="*" path="*/images/*" type="{YourNamespace}.ImageHandler, {YourAssemblyName}" />
    </httpHandlers>
  </system.web>
</configuration>
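For illustration, here is roughly the kind of code the GET branch could delegate to. The Company_Logo folder, the query-string keys, and the token value are assumptions for this sketch (anticipating the session-key point below), not part of the original design:
// Requires: System.IO, System.Web
// Serves a file from the shared logo folder after a naive shared-secret check.
private static void ServeImage(HttpContext context)
{
    if (context.Request.QueryString["token"] != "expected-secret") // placeholder secret
    {
        context.Response.StatusCode = 403;
        return;
    }
    // Path.GetFileName strips any directory components from the requested name.
    string name = Path.GetFileName(context.Request.QueryString["name"] ?? "");
    string fullPath = context.Server.MapPath("~/Company_Logo/" + name);
    context.Response.Clear();
    context.Response.ContentType = "image/jpeg"; // adjust to the actual file type
    context.Response.WriteFile(fullPath);
}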
Finally, something you're also going to want to do is pass in some kind of session key that can be validated when you get into the handler, because otherwise this is open to everybody. It wouldn't matter if you didn't need the PUT, POST and DELETE verbs, but you do.
Technically you wouldn't need to check the session key on GET if you didn't care that everybody could access the GET, but you have to check it on the others.
You have two options.
If both of your sites are hosted on the same machine or in a shared hosting environment, chances are that one site can access the other's directories. In that case you will easily be able to place the images in the desired folder.
The second case, where one of your sites does not have access to the folder of the other site, is rather complicated. You would have to create a proxy whereby the admin site accepts the image and in turn puts it in the main site's folder. I do not recommend this, though.
You can do this in 2 steps:
1) Upload image to your server using standard File Upload mechanism
2) Use the HttpWebRequest class to upload the image to the other server, server-side, right after the original upload.
Please refer to this article: Upload files with HTTPWebrequest (multipart/form-data)
see this for reference:
http://forums.asp.net/t/1726911.aspx/1
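In case those links go stale, the simplest server-side form of step 2 uses WebClient.UploadFile, which POSTs the file as multipart/form-data. The target URL is a placeholder; the other server must expose a handler that reads Request.Files and saves the image:
// Requires: System.Net
// savedFilePath is the local path where step 1 stored the upload.
string savedFilePath = Server.MapPath("~/Company_Logo/logo.jpg");
using (var client = new WebClient())
{
    client.UploadFile("http://abc.com/upload.ashx", savedFilePath); // hypothetical handler URL
}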
I have a commercial CSS font on my site. I use IIS, and the vendor says that others can use my CSS fonts because they know the URL. Is it possible to configure the server (or something else) so that only my site can use it? It is about Cufon.
Things you can do:
1. Give up. If your users can see it, they can steal it. Similarly, don't expect to protect your site from users viewing its source code.
2. If the font is a vector font, rasterize it for all the font sizes you support, but no others. This may have a negative impact on the browsing experience of your users. It makes a stolen copy less useful, but doesn't actually stop the theft.
3. Replace all use of the font with bitmaps. That is much more work to steal, and only gives the thief a rasterized version of the font (and not necessarily all the letters). You can create a special text UserControl that sticks a bitmap wherever you put it, so this isn't actually that much work to do or maintain. It does increase the bandwidth requirements for your page, though. It also forces you to do some of the layout by hand that is normally handled by the browser, which could add heavy or minimal maintenance costs, depending on how your site's layout works. And as with #2, it can have a negative impact on the browsing experience of your users. It also hurts accessibility, though not absurdly so, since your UserControl will presumably use alt text to duplicate the text.
I strongly recommend #1.
If you are on IIS7 or greater you can perform a Referer check without writing any custom code, simply by using IIS URL Rewrite in the manner discussed here. However, as it is simply a Referer check, it has the shortcomings discussed in the other answers.
(For introduction to IIS URL Rewrite see here.)
Excerpt from the first link:
Let me now explain what we have done on this property page:
1. Specified the name of the rule as "Prevent Leeching". This must be a unique rule name.
2. Every requested URL will be matched, as the pattern is ".*" and is a regular expression.
3. Added two conditions and specified that both conditions must be satisfied (see "Logical Grouping" is "Match All"):
- HTTP_REFERER does not match empty, as it can be a direct reference to the image
- HTTP_REFERER does not match my own site http://www.contoso.com
If the above two conditions are satisfied (apparently meaning the request is coming from some other site), we just redirect it to pick up some other image, which can be anything. And that's it. So without writing even a single line of code we are able to prevent hot-linking.
I would probably tailor your Rewrite configuration so that it is only performed on your font URLs (and other static assets of concern) rather than every single incoming request.
If you don't have remote desktop access or are just editing web.config, your rewrite rule will probably look something like:
<rule name="block font leaching" stopProcessing="true">
  <match url="myFontFile.woff" />
  <conditions logicalGrouping="MatchAny">
    <add input="{HTTP_REFERER}" pattern="^$" /><!-- no referrer -->
    <add input="{HTTP_REFERER}" pattern="yourdomain.com" negate="true" /><!-- or not your site -->
  </conditions>
  <action type="AbortRequest" /><!-- block the request -->
</rule>
In this example I chose to block the request entirely (through AbortRequest); however, you could just as well redirect to a page with a friendly notice.
Not reliably. In order to serve up the embedded fonts, they need to be readable by the public and referable by your CSS.
What you could do is create an ASP.NET page or a handler which takes the font file as a parameter, reads the file from somewhere in your web site (APP_DATA is a good place to put them - you can't browse to APP_DATA) and spits it out. In the script you could check the HTTP_REFERER server-side variable: if it is either blank or comes from your site, you serve the file; if it doesn't, you don't.
MSDN has an example of how to serve up a binary file in C#. You'll need to ensure you get the MIME type right; however, be aware this would probably break any caching provided by the browser or proxies. This also wouldn't stop people downloading the fonts by typing the URL into their browser and saving them locally, but if bandwidth is the concern, that's not really going to be a problem.
If you're on IIS7 you could write an HTTP Module which would do the referrer check for you. Scott Hanselman wrote one for image-leeching prevention quite a while ago; you could edit that to match your purposes.
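If you would rather roll your own module, a minimal sketch might look like the following; the ".woff" extension and the "yourdomain.com" host check are placeholders:
// Requires: System, System.Web
// Register under <httpModules> (classic pipeline) or
// <system.webServer><modules> (integrated pipeline).
public class FontLeechModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            // Only guard font requests (extension is an assumption).
            if (!ctx.Request.Path.EndsWith(".woff", StringComparison.OrdinalIgnoreCase))
                return;
            Uri referrer = ctx.Request.UrlReferrer;
            // Reject requests with no referrer or a foreign referrer.
            if (referrer == null ||
                !referrer.Host.EndsWith("yourdomain.com", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.StatusCode = 403;
                ctx.Response.End();
            }
        };
    }

    public void Dispose() { }
}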
You could make an http handler to serve up css files. In your custom http handler, check that the request.Url.Host equals request.UrlReferrer.Host. If they don't match, set the response to 404 or serve up an empty css file.
This is untested but should be close to what you would need.
You would add a link to css like:
<link rel="Stylesheet" href="CustomCSSHandler.ashx?file=site.css" />
// Requires: System.IO, System.Globalization, System.Web
public class CustomCSSHandler : IHttpHandler
{
    // Required by IHttpHandler; the original snippet omitted this member.
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext ctx)
    {
        HttpRequest req = ctx.Request;

        // Get the file from the query string
        string file = req.QueryString["file"];

        // Find the actual path
        string path = ctx.Server.MapPath(file); // Might need to modify location of css

        // Limit to only css files
        if (Path.GetExtension(path) != ".css")
            ctx.Response.End();

        if (req.UrlReferrer != null && req.UrlReferrer.Host.Length > 0)
        {
            if (CultureInfo.InvariantCulture.CompareInfo.Compare(req.Url.Host, req.UrlReferrer.Host, CompareOptions.IgnoreCase) != 0)
            {
                path = ctx.Server.MapPath("~/thiswontexist.css");
            }
        }

        // Make sure file exists
        if (!File.Exists(path))
        {
            ctx.Response.Status = "File not found";
            ctx.Response.StatusCode = 404;
            ctx.Response.End();
        }

        ctx.Response.StatusCode = 200;
        ctx.Response.ContentType = "text/css";
        ctx.Response.WriteFile(path);
    }
}
I have a script, which by using several querystring variables provides an image. I am also using URL rewriting within IIS 7.5.
So images have an URL like this:
http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
or
http://mydomain/pictures/ajfhajkfhal/44.jpg
This is rewritten to:
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
or
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44
I added caching rules to IIS to cache JPG images when they are requested. This works with my images that are REAL images on the disk. When images are provided through the script, they are somehow always requested through the script, without being cached.
The images do not change that often, so if the cache at least is being kept for 30 minutes (or until file change) that would be best.
I am using .NET/C# 4.0 for my website. I tried setting several cache options in C#, but I can't seem to find how to cache these images (client-side), while my static images are cached properly.
EDIT I use the following options to cache the image on the client side, where 'fileName' is the physical filename of the image (on disk).
context.Response.AddFileDependency(fileName);
context.Response.Cache.SetETagFromFileDependencies();
context.Response.Cache.SetLastModifiedFromFileDependencies();
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.Now.AddTicks(600));
context.Response.Cache.SetMaxAge(new TimeSpan(0, 5, 0));
context.Response.Cache.SetSlidingExpiration(true);
context.Response.Cache.SetValidUntilExpires(true);
context.Response.ContentType = "image/jpg";
EDIT 2 Thanks for pointing that out, that was indeed a very stupid mistake ;). I changed it to 30 minutes from now (DateTime.Now.AddMinutes(30)).
But this doesn't solve the problem. I really think the problem lies with Firefox. I use Firebug to track each request, and somehow I think I am doing something fundamentally wrong. Normal images (which are cached and static) give back a "304 (Not Modified)" response code, while my page always gives back "200 (OK)".
(Screenshot of the Firebug request log: http://images.depl0y.com/capture.jpg)
If what you mean by "script" is the code in your Picture.aspx, I should point out that C# is not a scripting language, so it is technically not a script.
You can use the Caching API provided by ASP.NET.
I assume you already have a method which retrieves the image's bytes. Here is how you can use the Caching API:
string fileName = ... // The name of your file
byte[] bytes = null;

if (HttpContext.Current.Cache[fileName] != null)
{
    bytes = (byte[])HttpContext.Current.Cache[fileName];
}
else
{
    bytes = ... // Retrieve your image's bytes
    HttpContext.Current.Cache[fileName] = bytes; // Set the cache
}

// Send it to the client
Response.BinaryWrite(bytes);
Response.Flush();
Note that the keys you use in the cache must be unique to each cached item, so it might not be enough to just use the name of the file for this purpose.
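For example, you could fold every query-string value that distinguishes the image variant into the key (the names below match the rewritten URL from the question):
// Build a cache key from the values that identify this exact image variant.
string cacheKey = string.Format("img:{0}:{1}:{2}",
    Request.QueryString["group"],
    Request.QueryString["id"],
    Request.QueryString["thumb"] ?? "full");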
EDIT:
If you want to enable caching the content on the client side, use the following:
Response.Cache.SetCacheability(HttpCacheability.Public);
You can experiment with the different HttpCacheability values. With this, you can specify how and where the content should be cached (e.g. on the server, on proxies, and on the client).
This will make ASP.NET send the client the caching rules in the appropriate HTTP headers.
This will not guarantee that the client will actually cache it (it depends on browser settings, for example), but it will tell the browser "You should cache this!"
The best practice would be to use caching on both the client and the server side.
EDIT 2:
The problem with your code is the SetExpires(DateTime.Now.AddTicks(600)). 600 ticks is only a fraction of a second... (1 second = 10000000 ticks)
Basically, the content gets cached but expires the moment it gets to the browser.
Try these:
context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(5));
context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
(The TimeSpan.FromMinutes is also more readable than new TimeSpan(...).)
I am developing a web app which will generate a random link pointing to an image on my server, something like http://dummy.com/Images/Image1.jpg?id=19234.
This link can then be used by anybody on their site. I want to know how many sites are using my links, without anybody having to click on them.
Can it be done using an HttpModule?
Is this as simple as Googling? Search for
link:http://dummy.com/Images/Image1.jpg?id=19234
If you want to do this programmatically, you'll need to use the Google API.
The issue you'd have with an HttpHandler is that it will generally only kick in for requests that are being handled by the ASP.Net engine - the image requests will normally be handled by IIS without going through the handler.
Your web logs should be able to tell you who the referers for any given item on your servers are - assuming that you have them, and you have something to process them - and this will be more accurate than using Google.
Going forward, one of the ways I've done this in the past is to have the image generated by an HttpHandler (implementing IHttpHandler).
This will return the image as a stream (setting the content type to "image/jpeg"), and you can add further processing (such as logging where the request (referer) came from, etc).
The limitation I found with the HttpHandler is that some services (PBBS for example) require an image link to have an image extension. I got around this by processing all 404s with an ASP.NET page that checks for the .jpg extension in the request. If it finds one, instead of returning the usual 404 page, it returns the requested image. You'll need to configure the 404 handler in IIS though, as the web.config error handler only kicks in for ASP.NET requests (web services and .aspx type pages).
Example handler:
// Sample from the ASP.Net Personal Web Site Starter Kit
public class Handler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Set up the response settings
        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.BufferOutput = false;

        // QueryString parameters are available here:
        //   context.Request.QueryString["QueryStringKey"]
        // You can also access the Referrer object, and log the requests here.

        Stream stream = null; // must be initialized before the null check below
        // Read your image into the stream, either from file system or DB
        if (stream == null)
        {
            stream = PhotoManager.GetPhoto();
        }

        // Write image stream to the response stream
        const int buffersize = 1024 * 16;
        var buffer = new byte[buffersize];
        int count = stream.Read(buffer, 0, buffersize);
        while (count > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, count);
            count = stream.Read(buffer, 0, buffersize);
        }
    }
}
You can have similar code (or better yet, refactor the main image streaming code into a shared class) in the 404 page, that checks for the existence of the image extension, and renders the image out that way (again, setting the content type, etc).
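A rough sketch of that 404 page's code-behind is below. With IIS custom errors, the original URL typically arrives in the query string in the form "404;http://host/path" (verify against your IIS setup), and StreamImageFor is a hypothetical helper factored out of the main handler:
protected void Page_Load(object sender, EventArgs e)
{
    // e.g. "/404.aspx?404;http://dummy.com/Images/Image1.jpg?id=19234"
    string raw = Request.RawUrl;
    if (raw.IndexOf(".jpg", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        Response.Clear();
        Response.ContentType = "image/jpeg";
        StreamImageFor(raw); // hypothetical shared image-streaming helper
        Response.End();
    }
    // otherwise fall through and render the normal 404 content
}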
Oddthinking is right. See http://code.google.com/intl/en/apis/ajaxsearch/documentation/#fonje_snippets or Google's API. They give examples for PHP and Java, but there are also AJAX frameworks for ASP.NET (http://www.asp.net/ajax/), and I'm sure C# as well.
You can change your image extension to an .aspx extension (http://dummy.com/Images/Image1.aspx?id=19234). There is no problem with this, because the only thing the page does is write the image to Response.OutputStream. That is to say, it behaves just like a .jpg, but with the advantage that you can run some other code while processing the request.
In this .aspx (before outputting the image), we would look at the HTTP_REFERER and store it in a data table if it is not already recorded.
This is really useful if, for example, you want to restrict access to the images. You could add some logic to forbid access if the user is not logged in.
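A sketch of what that .aspx code-behind could look like; LogReferrer and GetImageBytes are hypothetical helpers you would implement against your own data table:
protected void Page_Load(object sender, EventArgs e)
{
    // Record where the request came from, if it isn't already stored.
    string referrer = Request.UrlReferrer != null ? Request.UrlReferrer.Host : "(direct)";
    LogReferrer(referrer, Request.QueryString["id"]); // hypothetical logging helper

    Response.Clear();
    Response.ContentType = "image/jpeg";
    Response.BinaryWrite(GetImageBytes(Request.QueryString["id"])); // hypothetical lookup
    Response.End();
}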