I have a script which, using several query string variables, serves an image. I am also using URL rewriting within IIS 7.5.
So images have a URL like this:
http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
or
http://mydomain/pictures/ajfhajkfhal/44.jpg
This is rewritten to:
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
or
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44
I added caching rules to IIS to cache JPG images when they are requested. This works for images that are real files on disk. When images are served through the script, however, they are always requested through the script again, without being cached.
The images do not change that often, so caching them for at least 30 minutes (or until the file changes) would be ideal.
I am using .NET/C# 4.0 for my website. I tried setting several cache options in C#, but I can't seem to work out how to get these images cached (client-side), while my static images are cached properly.
EDIT: I use the following options to cache the image on the client side, where fileName is the physical file name of the image on disk.
context.Response.AddFileDependency(fileName);
context.Response.Cache.SetETagFromFileDependencies();
context.Response.Cache.SetLastModifiedFromFileDependencies();
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.Now.AddTicks(600));
context.Response.Cache.SetMaxAge(new TimeSpan(0, 5, 0));
context.Response.Cache.SetSlidingExpiration(true);
context.Response.Cache.SetValidUntilExpires(true);
context.Response.ContentType = "image/jpg";
EDIT 2 Thanks for pointing that out, that was indeed a very stupid mistake ;). I changed it to 30 minutes from now (DateTime.Now.AddMinutes(30)).
But this doesn't solve the problem. I am really starting to think the problem lies with Firefox. I use Firebug to track each request, and somehow I think I am doing something fundamentally wrong. Normal images (which are static and cached) come back with a response code of 304 (Not Modified), while my page always returns 200 (OK).
(Firebug screenshot: http://images.depl0y.com/capture.jpg)
If what you mean by "script" is the code in your Picture.aspx, I should point out that C# is not a scripting language, so it is technically not a script.
You can use the Caching API provided by ASP.NET.
I assume you already have a method that contains something like this. Here is how you can use the Caching API:
string fileName = ... // The name of your file
byte[] bytes = null;

if (HttpContext.Current.Cache[fileName] != null)
{
    bytes = (byte[])HttpContext.Current.Cache[fileName];
}
else
{
    bytes = ... // Retrieve your image's bytes
    HttpContext.Current.Cache[fileName] = bytes; // Set the cache
}

// Send it to the client
Response.BinaryWrite(bytes);
Response.Flush();
Note that the keys you use in the cache must be unique to each cached item, so it might not be enough to just use the name of the file for this purpose.
EDIT:
If you want to enable caching the content on the client side, use the following:
Response.Cache.SetCacheability(HttpCacheability.Public);
You can experiment with the different HttpCacheability values. With this, you can specify how and where the content should be cached. (Eg. on the server, on proxies, and on the client)
This will make ASP.NET to send the client the caching rules with the appropriate HTTP headers.
This will not guarantee that the client will actually cache it (it depends on browser settings, for example), but it will tell the browser "You should cache this!"
The best practice would be to use caching on both the client and the server side.
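For instance, a handler serving the rewritten images could combine the two, roughly like the sketch below. This is only an outline under my own assumptions: GetImageBytes and the 30-minute window are placeholders, not your actual code.

// A minimal sketch, assuming an IHttpHandler sits behind the rewrite rule; GetImageBytes is a placeholder
public void ProcessRequest(HttpContext context)
{
    string cacheKey = "img:" + context.Request.QueryString["group"] + ":" + context.Request.QueryString["id"];

    // Server side: keep the bytes in the ASP.NET cache for 30 minutes
    byte[] bytes = context.Cache[cacheKey] as byte[];
    if (bytes == null)
    {
        bytes = GetImageBytes(context); // placeholder: load from disk or DB
        context.Cache.Insert(cacheKey, bytes, null,
            DateTime.UtcNow.AddMinutes(30), System.Web.Caching.Cache.NoSlidingExpiration);
    }

    // Client side: send the headers that let the browser cache it too
    context.Response.ContentType = "image/jpeg";
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(30));
    context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(30));

    context.Response.BinaryWrite(bytes);
}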
EDIT 2:
The problem with your code is the SetExpires(DateTime.Now.AddTicks(600)). 600 ticks is only a fraction of a second... (1 second = 10000000 ticks)
Basically, the content gets cached but expires the moment it gets to the browser.
Try these:
context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(5));
context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
(The TimeSpan.FromMinutes is also more readable than new TimeSpan(...).)
Related
I have a web site (IIS, C#.Net, MVC4) where users are (forms-)authenticated. They upload media files (mostly .mp4) and authorize a set of users to play them back on demand. I store these files on local storage.
I play these files using jwplayer back to the authorized users on demand.
jwplayer expects I pass the url directly for it to play, but I didn't want to expose a direct url.
I really have to restrict unauthorized access to these files as they are private files.
I tried implementing a controller method to handle https://mysite/Video/Watch?VideoId=xyz and return a FileStream of the actual file. It works in a browser directly. (Though I'm not sure how efficient it is for large files.)
But the problem is, jwplayer looks for URLs of the pattern http(s)://domain/path/file.mp4[?parameter1=value1&parameter2=value2 and so on].
When I give a URL like https://mysite/Video/Watch?VideoId=xyz, it says 'No playable sources found' without even sending a HEAD request.
If I expose the URLs directly, the files are available for anybody to download, which breaks their privacy.
At the very least, I want to avoid hotlinks that live forever.
I have also looked at www.jwplayer.com/blog/securing-your-content/ but did not find the solutions suitable.
My questions are,
Is there a way I can retain the URL pattern http(s)://domain/path/file.mp4 and still control access to the file?
If (1) is not possible, how do I leverage the parameters that can be passed on the URL? With parameters, I can think of signed URLs. What should I do on the server to provide and then handle/validate signed URLs?
So as not to hurt performance, after any validation, can I somehow get IIS to handle the file streaming rather than my code?
I implemented an HttpModule to allow or block access to the file. This addresses questions 1 and 3.
Code snippet below.
void context_PreRequestHandlerExecute(object sender, EventArgs e)
{
    HttpApplication app = sender as HttpApplication;

    //Get the file extension
    string fileExt = Path.GetExtension(app.Request.Url.AbsolutePath);

    //Check if the extension is mp4
    bool requestForMP4 = fileExt.Equals(".mp4", StringComparison.InvariantCultureIgnoreCase);

    //If the request is not for an mp4 file, we have nothing to do here
    if (!requestForMP4)
        return;

    //Initially assume no access to media
    bool allowAccessToMedia = false;

    //....
    // Logic to determine access
    // If allowed set allowAccessToMedia = true
    // otherwise, just return
    //....

    if (!allowAccessToMedia)
    {
        //Terminate the request with HTTP StatusCode 403.2 Forbidden: Read Access Forbidden
        app.Response.StatusCode = (int)HttpStatusCode.Forbidden;
        app.Response.SubStatusCode = 2;
        app.CompleteRequest();
    }
}
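For completeness, here is a rough sketch of how that method might be wired into a module and registered. The class name is my own example, and the registration details depend on your pipeline mode, so treat this as an outline rather than a drop-in.

// A minimal sketch; MediaAuthorizationModule is just an example name
public class MediaAuthorizationModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Hook the event handled by the snippet above
        context.PreRequestHandlerExecute += context_PreRequestHandlerExecute;
    }

    public void Dispose() { }

    void context_PreRequestHandlerExecute(object sender, EventArgs e)
    {
        // ... the access-check logic from the snippet above ...
    }

    // Registered under <system.webServer><modules> in web.config (IIS 7+ integrated pipeline), e.g.
    //   <add name="MediaAuthorizationModule" type="MyApp.MediaAuthorizationModule" />
    // For the module to see static .mp4 requests you may also need runAllManagedModulesForAllRequests="true"
    // or an appropriate preCondition on the module entry.
}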
I'm trying to implement punchout catalogs on our eComm site. Honestly, the documentation for cXML is a mess, and all the code examples are in JavaScript and/or VB.Net (I use C# and would rather not have to try to translate). Does anyone out there have examples or samples of how to receive the PunchOutSetupRequest XML and then send out the PunchOutSetupResponse XML using C#? I've been unable to find anything on the interwebs (I've been looking for two days now)...
I'm hoping I can just do this inside an ActionResult (vs. a 'launch page' as suggested).
I'm a complete noob at punchouts and could really use some help here. The bosses are being pretty pushy, so any assistance would be greatly appreciated. Suggestions as to how to make this work would also be much appreciated.
I apologize to all for the vagueness of the question (request).
This isn't trivial, but this should get you started.
You'll need 3 generic handlers (.ashx): Setup, Start, and Order....
Setup and Order will receive an HTTP POST with a content type of "text/xml". Look at HttpRequest.InputStream if needed to get the XML into a string. From there, look at LINQ-to-XML to dig out the data you want. Your HTTP response to both of these will also be content type "text/xml" and UTF-8 encoded, returning the cXML as documented...use LINQ-to-XML to produce that.
The Setup handler will need to validate credentials and return a URL with a unique QueryString token pointing to the Start handler. Do not expect session persistence between Setup and Start, because they're not from the same caller. This handler will need to create an application object for the token and associated data you extracted from the cXML.
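To make the flow concrete, here is a very rough sketch of a Setup handler. The element paths are abbreviated relative to the full cXML envelope, and the token and URL scheme are just examples of mine, so check everything against the cXML documentation you're implementing.

// Sketch only; requires System, System.IO, System.Linq, System.Text, System.Web, System.Xml.Linq
public class Setup : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Read the posted cXML PunchOutSetupRequest into a string
        string cxml;
        using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
        {
            cxml = reader.ReadToEnd();
        }

        XDocument request = XDocument.Parse(cxml);

        // Dig out what you need with LINQ-to-XML, e.g. credentials and the BrowserFormPost URL
        // (exact element paths per the cXML DTD you're targeting)
        string browserFormPostUrl = (string)request.Descendants("BrowserFormPost")
                                                    .Elements("URL")
                                                    .FirstOrDefault();

        // TODO: validate the Sender credentials; on failure return a Status other than 200

        // Create an application-level entry keyed by a one-time token for the Start handler
        string token = Guid.NewGuid().ToString("N");
        context.Application["punchout:" + token] = browserFormPostUrl;

        // Reply with a PunchOutSetupResponse pointing at the Start handler
        XDocument response = new XDocument(
            new XElement("cXML",
                new XElement("Response",
                    new XElement("Status", new XAttribute("code", "200"), new XAttribute("text", "OK")),
                    new XElement("PunchOutSetupResponse",
                        new XElement("StartPage",
                            new XElement("URL", "https://www.example.com/punchout/Start.ashx?token=" + token))))));

        context.Response.ContentType = "text/xml";
        context.Response.ContentEncoding = Encoding.UTF8;
        context.Response.Write(response.ToString());
    }
}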
The Start handler will be called as a simple GET, and will need to match the token in the QueryString to the appropriate application object, copy that data to the session, and then do a response.redirect to whatever page in your site you want the buyer to land on.
Once they populate their cart with some things, and are ready to check out, you'll take them to a page that has an embedded form (not to be confused with an ASP.Net form that posts back to your server) and a submit button (again, not an ASP.Net button). From your Setup handler, you captured a URL to point this form's Post, and within the form you'll have a hidden input tag with the UTF8 encoded CXML Punchout Order injected as the value produced with LINQ-to-XML. Highly recommend Base64 encoding that value to avoid ASP.Net messing with the tags it contains during rendering, and naming the hidden input "cxml-base64" per the documentation. The result is the form is client-side POSTed to your customer's server instead of yours, and their server will extract the CXML Punchout Order and that ends your visitor's session.
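To make that last step concrete, the checkout page could emit something like the sketch below. BuildPunchOutOrderMessage, the Session key, and the Literal control are all placeholders of mine, not part of any standard.

// A sketch of emitting the client-side post-back form; helper and control names are placeholders
string orderMessageXml = BuildPunchOutOrderMessage(cart);                 // your LINQ-to-XML output
string cxmlBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(orderMessageXml));
string browserFormPostUrl = (string)Session["BrowserFormPostUrl"];        // captured in the Setup handler

checkoutFormLiteral.Text =
    "<form method=\"post\" action=\"" + browserFormPostUrl + "\">" +
    "<input type=\"hidden\" name=\"cxml-base64\" value=\"" + cxmlBase64 + "\" />" +
    "<input type=\"submit\" value=\"Return cart to procurement system\" />" +
    "</form>";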
The Order handler will receive a CXML OrderRequest and just like Setup, you'll dump that to a string and then use LINQ-to-XML to parse it and act upon it. Again you'll get credentials to verify, possibly a credit card to process, and the order items, ship-to, etc. Note that the OrderRequest may not contain all the items that were in the Punchout Order, because the system on your customer's side may remove items or even change item quantities before submitting the final OrderRequest to you. The OrderRequest could come back to you after the Punchout Order is posted to them in a matter of minutes, days, weeks, or never...don't bother storing the cart data in hopes of matching it to the order later.
Last note...the buyer may be experiencing your site in an iframe embedded in their web-based procurement UI, so design accordingly.
If you need more info, reply to this and I'll get back.
Update...Additional considerations:
Discuss with the buyer how they want fault handling to flow, particularly with orders, because you have a choice. 1) exhaustively evaluate everything in the CXML you receive and return response codes other than 200 if anything is wrong, or 2) always return a 200 Success and deal with any issues out of band or by generating a ConfirmationRequest that rejects the order. My experience is that a mix of the two works best. Certainly you should throw a non-200 if the credentials fail, but you may not want (or be able) to run a credit card or validate stock availability inline. Your buyer's system may not be able to cope with dozens of possible faults, and/or may not show your fault messages to the user for them to make corrections. I've seen systems that will flat-out discard any non-200 response code and just blindly retry the submission repeatedly on an interval for hours or days until it gives up on a sanity check, while others will handle response codes within certain ranges differently than others, for example a 4xx invokes a retry, while a 5xx is treated as fatal. Remember that Setup and Order are not coming directly from the user...their procurement system is generating those internally.
Update...answering the comment about how to test things...
You'd use the same method as you will for generating outbound ConfirmationRequest, ShipNoticeRequest, and InvoiceDetailRequest, all of which generally are produced on your side after receiving an OrderRequest from your customer's procurement system.
Start with Linq-To-XML for an example of crafting your outgoing cXML (Creating XML Trees section). Combine that example with this bit of code:
StringBuilder output = new StringBuilder();

XmlWriterSettings objXmlWriterSettings = new XmlWriterSettings();
objXmlWriterSettings.Indent = true;
objXmlWriterSettings.NewLineChars = Environment.NewLine;
objXmlWriterSettings.NewLineHandling = NewLineHandling.Replace;
objXmlWriterSettings.NewLineOnAttributes = false;
objXmlWriterSettings.Encoding = new UTF8Encoding();

using (XmlWriter objXmlWriter = XmlWriter.Create(output, objXmlWriterSettings))
{
    XElement root = new XElement("Root",
        new XElement("Child", "child content")
    );
    root.Save(objXmlWriter);
}
Console.WriteLine(output.ToString());
So at this point the StringBuilder (output) has your whole cXML, and you need to POST it someplace. Your Web Application project, started with F5 and a default.aspx page will be listening on localhost and some port (you'll see that in the URL it opens). Separately, perhaps using VS Express for Desktop, you have the above code in a console app that you can run to do the Post using something like this:
HttpWebRequest objRequest = (HttpWebRequest)WebRequest.Create("http://localhost:12345/handler.ashx");
objRequest.Method = "POST";
objRequest.UserAgent = "Some User Agent";
objRequest.ContentLength = output.Length;
objRequest.ContentType = "text/xml";

StreamWriter objStreamWriter = new StreamWriter(objRequest.GetRequestStream(), System.Text.Encoding.ASCII);
objStreamWriter.Write(output.ToString());
objStreamWriter.Flush();
objStreamWriter.Close();

WebResponse objWebResponse = objRequest.GetResponse();
XmlReaderSettings objXmlReaderSettings = new XmlReaderSettings();
objXmlReaderSettings.DtdProcessing = DtdProcessing.Ignore;
XmlReader objXmlReader = XmlReader.Create(objWebResponse.GetResponseStream(), objXmlReaderSettings);

// Pipes the stream to a higher level stream reader with the required encoding format.
MemoryStream objMemoryStream2 = new MemoryStream();
XmlWriter objXmlWriter2 = XmlWriter.Create(objMemoryStream2, objXmlWriterSettings);
objXmlWriter2.WriteNode(objXmlReader, true);
objXmlWriter2.Flush();
objXmlWriter2.Close();
objWebResponse.Close();

// Reset the current position to the beginning so we can read it all below.
objMemoryStream2.Position = 0;
StreamReader objStreamReader = new StreamReader(objMemoryStream2, Encoding.UTF8);
Console.WriteLine(objStreamReader.ReadToEnd());
objStreamReader.Close();
Since your handler should be producing cXML you'll see that spat out in the console. If it pukes, you'll get a big blob of debug mess in the console, which of course will help you fix whatever is broken.
Pardon the verbosity in the variable names; it was done to try to make things clear.
I have an application which is intended to stream videos from our local DB. I spent a lot of time yesterday attempting to return the data as either a RangeFileContentResult or a RangeFileStreamResult, without success.
In short, when I return the file as either of these two results I cannot seem to get a video to stream correctly (or play at all).
The request from the browser gets sent with the following headers:
Range: bytes=0-
And the response gives back these headers, as an example:
Accept-Ranges: bytes
Content-Range: bytes 0-5103295/5103296
In terms of network traffic, I get a series of 206s for partial results, then a 200 at the end (according to Fiddler), which seems correct.
Chrome's network tab disagrees with this and sees an initial request (always 13 bytes, which I assume is a handshake), then a couple more requests which have a status of either cancelled or pending.
As far as I understand, this is more or less correct: 206 - cancel, 206 - cancel, etc. But the video never plays.
If I switch the result from my controller to a FileResult, the video plays in Chrome, IE10 and Firefox, and appears to begin playing before the download is complete (which feels a little like it's streaming, although I suspect it's not).
But with the range result I get nothing in Chrome or IE, and the entire video downloads in one go in Firefox.
As far as I understood, the RangeFileContentResult should handle responding to the client with a range of bytes to download (which mine doesn't seem to do, it just tells it to get the whole file (illustrated by the response above)). And the client should respond to that, which it doesn't seem to do.
Does anyone have any thoughts in this area? Specifically:
a) Should RangeFileContentResult be sending a range of bytes back to the client?
b) Is there any way I can explicitly control the range of bytes requested from the client side?
c) Is there any reason or anything I'm doing wrong here which would cause browsers not to load the video at all, when requesting a RangeFileContentResult?
EDIT: I added a diagram to help describe what I'm seeing.
EDIT2: Ok, so the plot thickens. Whilst playing around with the RangedFile gubbins we needed to push another system test version out and I left the 'RangeFileContentResult' on my controller action as below:
private ActionResult RetrieveVideo(MediaItem media)
{
    return new RangeFileContentResult(
        media.Content,
        media.MimeType,
        media.Id.ToString(),
        DateTime.Now);
}
Rather oddly, this now seems to work as expected on our Azure system test environment, but still not on my local machine. I wonder if there's something IIS-based which works happily on Azure's IIS 8 but not on my local 7.5 instance?
The reason for the issue described here is the value passed to the modificationDate parameter of the RangeFileContentResult constructor:
return new RangeFileContentResult(media.Content, media.MimeType, media.Id.ToString(), DateTime.Now);
This date is used by the RangeFileResult in order to create two headers:
ETag - This header is an identifier used by browser and server to make sure that they are speaking about the same entity.
Last-Modified - This header informs the browser about the last modification date of the entity.
The fact that DateTime.Now is passed every time the browser makes a partial request means the ETag and Last-Modified header values can change before the client gets the whole entity (usually when the entire process takes longer than one second).
In the case described above, the browser sends an If-Range header with the request. This header tells the server that the entire entity should be resent if the entity tag (or modification date, because If-Range can carry either of those two values) doesn't match. This is what happens in this case.
The fact that the modification date is "dynamic" may also cause further issues if the client decides to use one of the following headers for validation: If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match.
The solution in this situation is to keep a modification date in the database alongside the file, so it stays consistent.
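In code, that just means passing the stored date instead of DateTime.Now; LastModified here stands for whatever column you keep alongside the file content:

return new RangeFileContentResult(
    media.Content,
    media.MimeType,
    media.Id.ToString(),
    media.LastModified); // a date stored with the file, not DateTime.Now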
There is also room for optimization here. Instead of grabbing the whole video from the DB every time a partial request is made, one can either cache it or grab only the relevant part (if the database engine the application is using allows such an operation). Such a mechanism can be used to create a specialized action result by deriving from RangeFileResult and overriding the WriteEntireEntity and WriteEntityRange methods.
OK, so I didn't have enough time to look at RangeFileResult in detail, but I have just downloaded the file (RangeFileContentResult) from
RangeFileContentResult
and modified my code so that it looks like this:
public ActionResult Movie()
{
    byte[] file = System.IO.File.ReadAllBytes(@"C:\HOME\asp\Java\Java EE. Programming Spring 3.0\01.avi");
    return new RangeFileContentResult(file, "video/x-msvideo", "01.avi", DateTime.Now);
}
and again it works. However, I noticed that when I stop the video I get an exception, and it happens in RangeFileResult:
if (context.HttpContext.Response.IsClientConnected)
{
    WriteEntityRange(context.HttpContext.Response, RangesStartIndexes[i], RangesEndIndexes[i]);
    if (MultipartRequest)
        context.HttpContext.Response.Write("\r\n");
    context.HttpContext.Response.Flush();
}
So you'd better modify the code to handle the case where the user has already disconnected but you are still trying to send them a response.
Again, technically there's not a big difference whether you pass a byte[] or a Stream, because even when you pass a Stream, the code working with it
using (FileStream)
{
    FileStream.Seek(rangeStartIndex, SeekOrigin.Begin);
    int bytesRemaining = Convert.ToInt32(rangeEndIndex - rangeStartIndex) + 1;
    byte[] buffer = new byte[_bufferSize];
    while (bytesRemaining > 0)
    {
        int bytesRead = FileStream.Read(buffer, 0, _bufferSize < bytesRemaining ? _bufferSize : bytesRemaining);
        response.OutputStream.Write(buffer, 0, bytesRead);
        bytesRemaining -= bytesRead;
    }
}
again reads the data and puts it into a byte[] array... so it's up to you!
BUT... I suggest that you pay attention to the content type that you provide! The point is that your browser must be able to handle it, so if you provide something unknown you will definitely have problems. To find your content type string, please check
mime-types-by-content-type
Again, I only gave it a quick look; if you have problems I will help you later when I get home.
mofi: Please just copy these two files into your MVC project:
RangeFileResult
RangeFileStreamResult
public ActionResult Movie()
{
    var path = new FileStream(@"C:\temp\01.avi", FileMode.Open);
    return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
}
Now run your project and open it in Chrome (for example: http://youraddress.com:45454/Main/Movie); you should see your file playing in the standard Chrome video player. It's streaming, and you can see that if you put a breakpoint at
return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
Again, the source is easy to modify if you want to change the buffer size used for streaming!
I am making a launcher for Minecraft. 1.6.2 changed a lot, so the way you log in is different. If any of you have any knowledge of logging into Minecraft using C#, I would appreciate it.
wClient.DownloadString("http://login.minecraft.net/?user=" + strUsername + "&password=" + strPassword + "&version=13");
I believe this used to be a valid way of doing it, but I am not quite sure anymore. Help is appreciated, thanks.
In reply to TheUnrealMegashark's comments on Rhys Towey's answer: I have been working really hard to get it to launch, but it's throwing me off a bit. The very next update will include a 1.6 fix. Just got to figure it out.
The proper answer to your question is that the web link that fetches the Session is still currently in use. Nothing new there.
Beware! You must know that your
"http://login.minecraft.net/?user=" + strUsername + "&password=" +
strPassword + "&version=13"
is unsafe. It sends the user's password over the internet in plain text, so it can be subject to "man in the middle" attacks.
One of the proper ways to secure the connection is to use HTTPS with POST. Using POST, I avoid sending the data in the request URL and send it in the request body instead. Using HTTPS, that data is encrypted in transit, which protects against "man in the middle" attacks.
You can use GET with HTTPS and it is still secure in transit (from what I have read), but it is considered an unsafe practice. Although the data is protected between your computer and the connected server, the URL itself might be seen elsewhere and be subject to a "man behind you" attack. What I mean is that when you send this URL, your computer may record it in some sort of history, or display it in an address bar in plain text. Since you're not making a web browser and the URL is not displayed, this may not matter in your case.
But if it were me, I would still play it safe and just use the safer strategy.
To use HTTPS with POST:
Here is a sample of code I use in my AtomLauncher. This code will send the POST data to the URL and return a string. Go to http://www.minecraftwiki.net/wiki/Minecraft.net for more info on the string that is returned.
string mcURLData = "Error";
using (WebClient client = new WebClient()) // Get data from Minecraft with username and password
{
    // This is a text control in my program; ignore this commented line if you wish.
    // this.Invoke(new MethodInvoker(delegate { homeLabelTop.Text = "Connecting to Minecraft.net..."; }));
    try
    {
        System.Collections.Specialized.NameValueCollection urlData = new System.Collections.Specialized.NameValueCollection();
        urlData.Add("user", "UserName");
        urlData.Add("password", "MYPa22w0rd");
        urlData.Add("version", "13");
        byte[] responsebytes = client.UploadValues("https://login.minecraft.net", "POST", urlData);
        mcURLData = Encoding.UTF8.GetString(responsebytes);
    }
    catch
    {
        if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
        {
            mcURLData = "Internet Disconnected.";
        }
        else
        {
            mcURLData = "Can't connect to login.minecraft.net.";
        }
    }
}
To use HTTPS with GET, just change the http in your code to https.
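So the GET call from the question becomes the line below (note that this legacy login.minecraft.net endpoint may no longer accept it at all):

// Same request as before, just sent over TLS
wClient.DownloadString("https://login.minecraft.net/?user=" + strUsername + "&password=" + strPassword + "&version=13");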
In other news: I have fixed my code. Feel free (when it's uploaded) to use it.
For your information, when 1.6.X launches it creates a natives folder which it starts using immediately. What I did to fix this was to run 1.6.2, copy the natives folder it created, and remove the number:
Created "version/1.6.2/1.6.2-natives-###"
Copied it to "version/1.6.2/1.6.2.natives"
Pointed my program to the "natives" folder I created.
What I'll end up doing in the future is automatically checking for the natives folder and if it doesn't exist, I'll have it download natives from the internet.
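Something along these lines, where the paths follow my launcher's layout rather than anything official:

// A sketch of the planned check; versionDir and the download step are placeholders
string nativesDir = Path.Combine(versionDir, "1.6.2", "1.6.2-natives");
if (!Directory.Exists(nativesDir))
{
    Directory.CreateDirectory(nativesDir);
    // TODO: download and extract the natives archive into nativesDir
}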
(I would love to know where Minecraft gets its current natives so I can do essentially the same thing. Unless what it does is download them from the internet every time it launches; if true, that's kind of ugly, seeing as I have bandwidth usage limits.)
I am developing a web app which will generate a random link pointing to an image on my server, something like http://dummy.com/Images/Image1.jpg?id=19234
This link can then be used by anybody on their site. Now I just want to know how many sites are using my links, without anybody having to click on those links.
Can it be done using an HttpModule?
Is this as simple as Googling? Search for
link:http://dummy.com/Images/Image1.jpg?id=19234
If you want to do this programmatically, you'll need to use the Google API.
The issue you'd have with an HttpHandler is that it will generally only kick in for requests that are being handled by the ASP.Net engine - the image requests will normally be handled by IIS without going through the handler.
Your web logs should be able to tell you who the referrers for any given item on your servers are - assuming that you have them, and you have something to process them - and this will be more accurate than using Google.
Going forward, one of the ways I've done this in the past is to have the image generated by an HttpHandler (implementing IHttpHandler).
This will return the image as a stream (setting the content type to "image/jpeg"), and you can add further processing (such as logging where the request (referer) came from, etc).
The limitation I found with the HttpHandler is that some services (PBBS for example) require an image link to have an image extension. I got around this by processing all 404s with an ASP.Net page that checks for the .jpg extension in the request. If it finds one, instead of returning the usual 404 page, it returns the requested image. You'll need to configure the 404 handler in IIS though, as the web.config error handler only kicks in for ASP.Net requests (web services and .aspx type pages).
Example handler:
// Sample from the ASP.Net Personal Web Site Starter Kit
public class Handler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Set up the response settings
        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.BufferOutput = false;

        // QueryString parameters are available here:
        //   context.Request.QueryString["QueryStringKey"]
        // You can also access the Referrer object, and log the requests here.

        Stream stream = null;

        // Read your image into the stream, either from the file system or the DB
        if (stream == null)
        {
            stream = PhotoManager.GetPhoto();
        }

        // Write the image stream to the response stream
        const int buffersize = 1024 * 16;
        var buffer = new byte[buffersize];
        int count = stream.Read(buffer, 0, buffersize);
        while (count > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, count);
            count = stream.Read(buffer, 0, buffersize);
        }
    }
}
You can have similar code (or better yet, refactor the main image streaming code into a shared class) in the 404 page, that checks for the existence of the image extension, and renders the image out that way (again, setting the content type, etc).
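As a rough sketch of that 404 fallback, assuming IIS is configured to execute a custom .aspx for 404s and passes the original URL in the query string (the exact format should be verified for your IIS version, and the shared helper is a placeholder):

// Code-behind of the custom 404 page; names and the query string format are assumptions
protected void Page_Load(object sender, EventArgs e)
{
    // With an IIS custom error of type URL, the original request typically arrives as "404;http://server/path"
    string original = Request.QueryString.ToString();

    if (original.IndexOf(".jpg", StringComparison.OrdinalIgnoreCase) >= 0)
    {
        Response.Clear();
        Response.ContentType = "image/jpeg";
        // Log Request.UrlReferrer here, then reuse the shared image-streaming code from the handler above
        // ImageStreamer.WriteImage(original, Response.OutputStream); // hypothetical shared helper
        Response.End();
    }
    // Otherwise fall through to the normal 404 content
}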
Oddthinking is right. See http://code.google.com/intl/en/apis/ajaxsearch/documentation/#fonje_snippets or Google's API. They give examples for PHP and Java, but there are also AJAX frameworks for ASP.NET (http://www.asp.net/ajax/), and I'm sure C# as well.
You can change your image extension to an .aspx extension (http://dummy.com/Images/Image1.aspx?id=19234). There is no problem with this, because the only thing the page would do is write the image to Response.OutputStream. In other words, it would behave just like a .jpg, but with the advantage that you can run some other processing code as well.
In this .aspx (before outputting the image), we would look at the HTTP referer and store it in a data table if a record for it does not already exist.
This is really useful if, for example, you want to restrict access to the images. You could add some logic to deny access if the user is not logged in.
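A minimal sketch of that idea, where the logging helper and the image lookup are placeholders of mine:

// Code-behind of Image1.aspx; ReferrerLog and the image path scheme are hypothetical
protected void Page_Load(object sender, EventArgs e)
{
    string id = Request.QueryString["id"];
    string referer = Request.UrlReferrer != null ? Request.UrlReferrer.Host : "(direct)";

    // Store the referer against the image id if we haven't seen it before
    // ReferrerLog.RecordIfNew(id, referer);

    // Optionally: deny access here if the user is not logged in

    Response.ContentType = "image/jpeg";
    Response.WriteFile(Server.MapPath("~/Images/Image1.jpg")); // or stream from wherever the image actually lives
}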