I have an application which is intended to stream videos back from our local DB. I spent a lot of time yesterday attempting to return the data as either a RangeFileContentResult or a RangeFileStreamResult, without success.
In short, when I return the file as either of these two results I cannot seem to get a video to stream correctly (or play at all).
The request from the browser gets sent with the following headers:
Range: bytes=0-
And the response comes back with headers like these:
Accept-Ranges: bytes
Content-Range: bytes 0-5103295/5103296
In terms of network traffic, I get a series of 206s for partial results, then a 200 at the end (according to Fiddler), which seems correct.
Chrome's network tab disagrees with this and sees an initial request (always 13 bytes, which I assume is a handshake), then a couple more requests with a status of either cancelled or pending.
As far as I understand, this is more or less correct: 206, cancel, 206, cancel, etc. But the video never plays.
If I switch the result from my controller to a FileResult, the video plays in Chrome, IE10 and Firefox, and appears to begin playing before the download has completed (which feels a little like streaming, although I suspect it's not).
But with the range result I get nothing in Chrome or IE, and the entire video downloads in one go in Firefox.
As far as I understood, the RangeFileContentResult should handle responding to the client with a range of bytes to download, which mine doesn't seem to do; it just tells the client to get the whole file (illustrated by the response above). And the client should respond to that, which it doesn't seem to do either.
Does anyone have any thoughts in this area? Specifically:
a) Should RangeFileContentResult be sending a range of bytes back to the client?
b) Is there any way I can explicitly control the range of bytes requested from the client side?
c) Is there any reason or anything I'm doing wrong here which would cause browsers not to load the video at all, when requesting a RangeFileContentResult?
EDIT: Added a diagram to help describe what I'm seeing:
EDIT2: Ok, so the plot thickens. Whilst playing around with the RangedFile gubbins we needed to push another system test version out and I left the 'RangeFileContentResult' on my controller action as below:
private ActionResult RetrieveVideo(MediaItem media)
{
    return new RangeFileContentResult(
        media.Content,
        media.MimeType,
        media.Id.ToString(),
        DateTime.Now);
}
Rather oddly, this now seems to work as expected on our Azure system test environment, but still not on my local machine. I wonder if there's something IIS-based which works happily on Azure's IIS 8, but not on my local IIS 7.5 instance?
The reason for the issue described here is the value passed to the modificationDate parameter of the RangeFileContentResult constructor:
return new RangeFileContentResult(media.Content, media.MimeType, media.Id.ToString(), DateTime.Now);
This date is used by the RangeFileResult in order to create two headers:
ETag - This header is an identifier used by the browser and the server to make sure that they are talking about the same entity.
Last-Modified - This header informs the browser about the last modification date of the entity.
The fact that DateTime.Now is passed every time the browser makes a partial request means the ETag and Last-Modified header values can change before the client has received the whole entity (typically whenever the entire process takes longer than one second).
In the case described above, the browser sends an If-Range header with the request. This header tells the server that the entire entity should be resent if the entity tag (or modification date, because If-Range can carry either of those two values) doesn't match. This is exactly what happens here.
The fact that the modification date is "dynamic" may also cause further issues if the client decides to use one of the following headers for validation: If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match.
The solution in this situation is to store a modification date in the database alongside the file, so that it stays consistent across requests.
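For example, a minimal sketch of the controller action using a stored date; LastModifiedUtc is an illustrative column name, not part of the original model:

private ActionResult RetrieveVideo(MediaItem media)
{
    return new RangeFileContentResult(
        media.Content,
        media.MimeType,
        media.Id.ToString(),
        media.LastModifiedUtc); // stable value persisted with the file, instead of DateTime.Now
}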
There is also room for optimization here. Instead of grabbing the whole video from the DB every time a partial request is made, one can either cache it or fetch only the relevant part (if the database engine the application is using allows such an operation). Such a mechanism can be wrapped up in a specialized action result by deriving from RangeFileResult and overriding the WriteEntireEntity and WriteEntityRange methods.
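A hedged sketch of such a result; the base constructor and override signatures are assumptions modeled on the RangeFileResult snippets quoted later in this thread, and VideoRepository.GetVideoChunk is a hypothetical data-access helper:

public class RangeDatabaseVideoResult : RangeFileResult
{
    private readonly int _mediaId;
    private readonly long _fileLength;

    public RangeDatabaseVideoResult(int mediaId, string contentType, string fileName,
                                    DateTime modificationDate, long fileLength)
        : base(contentType, fileName, modificationDate, fileLength)
    {
        _mediaId = mediaId;
        _fileLength = fileLength;
    }

    protected override void WriteEntireEntity(HttpResponseBase response)
    {
        // The whole entity is just the range [0, length - 1].
        WriteEntityRange(response, 0, _fileLength - 1);
    }

    protected override void WriteEntityRange(HttpResponseBase response, long rangeStartIndex, long rangeEndIndex)
    {
        // Hypothetical helper that fetches only the requested slice from the DB,
        // e.g. via SUBSTRING on a varbinary(max) column.
        byte[] chunk = VideoRepository.GetVideoChunk(_mediaId, rangeStartIndex, rangeEndIndex);
        response.OutputStream.Write(chunk, 0, chunk.Length);
    }
}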
OK, so I didn't have enough time to look at RangeFileResult in detail, but I have just downloaded the RangeFileContentResult source and modified my code so it looks like this:
public ActionResult Movie()
{
    byte[] file = System.IO.File.ReadAllBytes(@"C:\HOME\asp\Java\Java EE. Programming Spring 3.0\01.avi");
    return new RangeFileContentResult(file, "video/x-msvideo", "01.avi", DateTime.Now);
}
and again it works. However, I noticed that when I stop the video I get an exception, and it happens in RangeFileResult:
if (context.HttpContext.Response.IsClientConnected)
{
    WriteEntityRange(context.HttpContext.Response, RangesStartIndexes[i], RangesEndIndexes[i]);
    if (MultipartRequest)
        context.HttpContext.Response.Write("\r\n");
    context.HttpContext.Response.Flush();
}
So you'd better modify the code to handle the case where the user has already disconnected but you are still trying to send them a response.
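A minimal sketch of such a guard; the assumption here is that ASP.NET surfaces the aborted connection as an HttpException, so adjust the catch to whatever exception you actually observe:

if (context.HttpContext.Response.IsClientConnected)
{
    try
    {
        WriteEntityRange(context.HttpContext.Response, RangesStartIndexes[i], RangesEndIndexes[i]);
        if (MultipartRequest)
            context.HttpContext.Response.Write("\r\n");
        context.HttpContext.Response.Flush();
    }
    catch (HttpException)
    {
        // The client stopped the video and disconnected mid-write;
        // nothing useful can be sent anymore, so stop streaming.
        break;
    }
}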
Again, technically there is not a big difference between passing a byte[] or a Stream, because even when you pass a Stream, the code working with it
// Note: FileStream here is a property of RangeFileStreamResult, not a local variable.
using (FileStream)
{
    FileStream.Seek(rangeStartIndex, SeekOrigin.Begin);
    int bytesRemaining = Convert.ToInt32(rangeEndIndex - rangeStartIndex) + 1;
    byte[] buffer = new byte[_bufferSize];
    while (bytesRemaining > 0)
    {
        int bytesRead = FileStream.Read(buffer, 0, _bufferSize < bytesRemaining ? _bufferSize : bytesRemaining);
        response.OutputStream.Write(buffer, 0, bytesRead);
        bytesRemaining -= bytesRead;
    }
}
again reads the data and puts it into a byte[] array anyway. So it's up to you!
BUT... I suggest you pay attention to the content type you provide!
The point is that your browser must be able to handle it, so if you provide something unknown you will definitely have problems. To find your content type string, please check
mime-types-by-content-type
Again, I only gave it a quick look; if you have problems I will help you later when I come home.
@mofi Please just copy these two files into your MVC project:
RangeFileResult
RangeFileStreamResult
public ActionResult Movie()
{
    var path = new FileStream(@"C:\temp\01.avi", FileMode.Open);
    return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
}
Now run your project and open it in Chrome (for example: http://youraddress.com:45454/Main/Movie). You should see your file playing in the standard Chrome video player. It's streaming, and you can see this if you put a breakpoint at
return new RangeFileStreamResult(path, "video/x-msvideo", "01.avi", DateTime.Now);
Again, the source is easy to modify if you want to change the buffer size used for streaming!
Related
I am new to API development and I want to create a Web API endpoint which will receive a large amount of log data, and then send that data to an Amazon S3 bucket via an Amazon Kinesis delivery stream. Below is a sample application which works FINE, but I have NO CLUE how to ingest a large inbound volume of data, or in what format my API should receive it. What should my API endpoint look like?
[HttpPost]
public async void Post() // HOW to allow it to receive a large chunk of data?
{
    await WriteToStream();
}
private async Task WriteToStream()
{
    const string myStreamName = "test";
    Console.Error.WriteLine("Putting records in stream : " + myStreamName);
    // Write 10 UTF-8 encoded records to the stream.
    for (int j = 0; j < 10000; ++j)
    {
        // I AM HARDCODING DATA HERE FROM THE LOOP COUNTER!!!
        byte[] dataAsBytes = Encoding.UTF8.GetBytes("testdata-" + j);
        using (MemoryStream memoryStream = new MemoryStream(dataAsBytes))
        {
            PutRecordRequest putRecord = new PutRecordRequest();
            putRecord.DeliveryStreamName = myStreamName;
            Record record = new Record();
            record.Data = memoryStream;
            putRecord.Record = record;
            await kinesisClient.PutRecordAsync(putRecord);
        }
    }
}
P.S.: In the real-world app I will not have that for loop. I want my API to ingest large amounts of data; what should the definition of my API be? Do I need to use something called multipart/form-data, a file? Please guide me.
Here is my thought process. As you are exposing an API for logging, your input should contain the attributes below:
Log Level (info, debug, warn, fatal)
Log message (string)
Application ID
Application Instance ID
Application IP
Host (machine in which the error was logged)
User ID (for whom the error occurred)
Timestamp in UTC (time at which the error occurred)
Additional Data (customisable as xml / json)
I would suggest exposing the API as an AWS Lambda behind API Gateway, as this will help with scaling out as load increases.
For a sample of how to build the API and use model binding, you may refer to https://learn.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/model-validation-in-aspnet-web-api
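To make this concrete, here is a hedged sketch of a model-bound endpoint built around the attribute list above; LogEntry, LogsController, and WriteToStreamAsync are illustrative names, not existing types:

public class LogEntry
{
    public string Level { get; set; }              // info, debug, warn, fatal
    public string Message { get; set; }
    public string ApplicationId { get; set; }
    public string ApplicationInstanceId { get; set; }
    public string ApplicationIp { get; set; }
    public string Host { get; set; }
    public string UserId { get; set; }
    public DateTime TimestampUtc { get; set; }
    public string AdditionalData { get; set; }     // XML / JSON payload
}

public class LogsController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Post([FromBody] List<LogEntry> entries)
    {
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        foreach (var entry in entries)
        {
            // Forward each record to the Kinesis delivery stream,
            // along the lines of the PutRecordAsync sample in the question.
            await WriteToStreamAsync(entry);
        }
        return Ok();
    }

    private Task WriteToStreamAsync(LogEntry entry) { /* as in the question's sample */ return Task.FromResult(0); }
}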
I don't have much context, so I will basically try to answer based on how I see it.
First, instead of sending data to the Web API, I would send the data directly to S3. In Azure there is the Shared Access Signature, where you send a request to your API and it gives you a URL to upload the file to (there are many options; you can limit by time, or limit by IP, who can upload). So to upload a file: 1. make a call to get the upload URL, 2. PUT to that URL. It looks like in Amazon this is called a signed policy (pre-signed URLs).
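A hedged sketch of handing out such an upload URL with the AWS SDK for .NET; the bucket name and expiry window are assumptions:

using Amazon.S3;
using Amazon.S3.Model;

public class UploadUrlController : ApiController
{
    private static readonly IAmazonS3 S3Client = new AmazonS3Client();

    [HttpGet]
    public string GetUploadUrl(string fileName)
    {
        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-log-bucket",             // assumed bucket name
            Key = fileName,
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(15)  // limit by time, as described above
        };
        // The caller then PUTs the file directly to this URL, bypassing the Web API.
        return S3Client.GetPreSignedURL(request);
    }
}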
After that, write a Lambda function which will be triggered on S3 upload. This function will send an event (again, I don't know how it works in AWS, but in Azure I would send a blob queue message) containing the URL of the file and a start position.
Write a second Lambda which listens for these events and does the actual processing. In my apps I sometimes know that processing N items takes 10 seconds, so I usually choose N so that processing takes no longer than 10-20 seconds, due to the nature of deployments. After you have processed N rows and are not yet finished, send the same event again, but now with start position = start position at the beginning + N. How to read just a range of the file is sketched after the next paragraph.
Designed this way, you can process large files. Even better, you can be smarter, because you can send multiple events that say start line and end line, so you will be able to process your file on multiple instances.
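For the range reading mentioned above, a hedged sketch using GetObjectRequest.ByteRange from the AWS SDK for .NET; the bucket, key, and chunk size are assumptions, and the snippet belongs inside an async handler:

// Read only a slice of the S3 object so each invocation processes its own range.
var rangeRequest = new GetObjectRequest
{
    BucketName = "my-log-bucket",   // assumed
    Key = fileKey,                  // from the triggering event
    ByteRange = new ByteRange(startPosition, startPosition + chunkSize - 1)
};
using (var response = await s3Client.GetObjectAsync(rangeRequest))
using (var reader = new StreamReader(response.ResponseStream))
{
    string slice = await reader.ReadToEndAsync();
    // Process the rows in this slice, then emit the next event with
    // startPosition advanced by the number of bytes actually consumed.
}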
PS. The reason I would not recommend uploading files to the Web API is that those files will be held in memory. Say you have 1 GB files being sent from multiple sources; in that case you will kill your servers in minutes.
PS2. The format of the file depends; it could be JSON, since that's the easiest way to read those files, but keep in mind that with large files it will be expensive to read the whole file into memory. Another option could be just a flat file, which is easy to read, since then you can read a range and process it.
PS3. In Azure, I would use Azure Batch jobs.
I am currently writing a Discord bot in C#. I have most of the bot done, but for this next update I want to add the capability of checking whether a streamer has gone live. Currently I am polling the Twitch API, pulling the JSON it returns, and checking whether or not the JSON stream object is null. But it takes 3-5 minutes after the streamer goes live before it finally sees that stream is not null, even though I poll the JSON every 5 seconds. Is there any way to do this more efficiently? My code is below:
private const string Url = "https://api.twitch.tv/kraken/streams/streamer";
var request = (HttpWebRequest)WebRequest.Create(Url);
request.Method = "GET";
request.Timeout = 12000;
request.ContentType = "application/vnd.twitchtv.v5+json";
request.Headers.Add("Client-ID", "ID");
using (var s = request.GetResponse().GetResponseStream())
{
    using (var sr = new System.IO.StreamReader(s))
    {
        var jsonObject = JObject.Parse(sr.ReadToEnd());
        var jsonStream = jsonObject["stream"];
        // twitch channel is online if stream is not null.
        LastTwitchStatus = jsonStream.Type != JTokenType.Null;
    }
}
It looks like this is the intended behavior of the Twitch API.
They are definitely more focused on putting their horsepower into streaming, not immediate data provision through the API.
While there might be a limitation like this, you can try scraping the page if timing is crucial and you don't want to wait 3-5 minutes for something that has already happened.
One idea is to poll the page every 5 s or so and then query the HTML document for something characteristic that distinguishes an offline channel from an online one.
Idea for scraping in JavaScript (just replicate it in .NET):
For example, I have tried to query user pages (https://www.twitch.tv/username) in JavaScript with:
$(".recent-past-broadcast").length > 0
and for a user that is not broadcasting it yields true, while for a broadcasting user it yields false. This might be a problem for a user with no recent broadcast history, though.
You can also try checking the videos page (https://www.twitch.tv/username/videos/all) for their live indicator, like:
$(".cn-livestatus__circle").length > 0
It will yield true for a streaming user and false for one that is not streaming (even if he/she is online).
Of course, this is the least efficient way of doing it and requires a lot of downloading compared to just polling, but... it still seems more up to date than asking the API every 5 s and getting the actual state delayed by 3-5 minutes.
Just replicate the querying above in .NET and you're there (see the AngleSharp sketch below).
You could also mix the two approaches: once you see that someone has started streaming, disable page scraping and swap to API-only calls to check that you're still up to date.
Useful tooling for scraping:
For parsing HTML documents in .NET, use a parser like AngleSharp:
https://github.com/AngleSharp/AngleSharp
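A hedged sketch of replicating the jQuery-style check above with AngleSharp; the CSS class name comes from this answer and may break whenever Twitch redesigns its pages:

using System.Net.Http;
using System.Threading.Tasks;
using AngleSharp.Html.Parser;

public static class TwitchScraper
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<bool> IsLiveAsync(string username)
    {
        var html = await Http.GetStringAsync("https://www.twitch.tv/" + username + "/videos/all");
        var document = new HtmlParser().ParseDocument(html);
        // Equivalent of $(".cn-livestatus__circle").length > 0
        return document.QuerySelectorAll(".cn-livestatus__circle").Length > 0;
    }
}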
I'm trying to implement punchout catalogs on our eComm site. Honestly, the documentation for cXML is a mess and all the code examples are in javascript and/or VB.Net (I use C# and would rather not have to try and translate). Does anyone out there have examples or samples of how to receive the PunchOutSetupRequest XML and then send out the PunchOutSetupResponse XML using C#? I've been unable to find anything on the interwebs (I've been looking for two days now)...
I'm hoping I can just do this inside an ActionResult (vs. a 'launch page' as suggested).
I'm a complete noob at punchouts and could really use some help here. The bosses are being pretty pushy, so any assistance would be greatly appreciated. Suggestions as to how to make this work would also be much appreciated.
I apologize to all for the vagueness of the question (request).
This isn't trivial, but this should get you started.
You'll need 3 generic handlers (.ashx): Setup, Start, and Order....
Setup and Order will receive HTTP Post with content-type of "text/xml". Look at HttpRequest.InputStream if needed to get the XML into a string. From there, look at LINQ-to-XML to dig out the data you want. Your HTTP Response to both of these will also be content-type "text/xml" and UTF8 encoded, returning the CXML as documented...use LINQ-to-XML to produce that.
The Setup handler will need to validate credentials and return a URL with a unique QueryString token pointing to the Start handler. Do not expect session persistence between Setup and Start, because they're not from the same caller. This handler will need to create an application object for the token and associated data you extracted from the cXML.
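A hedged sketch of that Setup handler; TokenStore is a hypothetical application-object wrapper, the element names follow the cXML PunchOutSetupResponse documentation, and a real response would also need the XML declaration and cXML DOCTYPE:

public class Setup : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string cxml;
        using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
            cxml = reader.ReadToEnd();

        XDocument request = XDocument.Parse(cxml);
        // Dig out and validate credentials, BuyerCookie, etc. with LINQ-to-XML here...

        string token = Guid.NewGuid().ToString("N");
        TokenStore.Save(token, request); // hypothetical storage keyed by the token

        XDocument response = new XDocument(
            new XElement("cXML",
                new XElement("Response",
                    new XElement("Status", new XAttribute("code", "200"), new XAttribute("text", "OK")),
                    new XElement("PunchOutSetupResponse",
                        new XElement("StartPage",
                            new XElement("URL", "https://shop.example.com/Start.ashx?token=" + token))))));

        context.Response.ContentType = "text/xml";
        context.Response.ContentEncoding = Encoding.UTF8;
        context.Response.Write(response.ToString());
    }
}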
The Start handler will be called as a simple GET, and will need to match the token in the QueryString to the appropriate application object, copy that data to the session, and then do a response.redirect to whatever page in your site you want the buyer to land on.
Once they populate their cart with some things, and are ready to check out, you'll take them to a page that has an embedded form (not to be confused with an ASP.Net form that posts back to your server) and a submit button (again, not an ASP.Net button). From your Setup handler, you captured a URL to point this form's Post, and within the form you'll have a hidden input tag with the UTF8 encoded CXML Punchout Order injected as the value produced with LINQ-to-XML. Highly recommend Base64 encoding that value to avoid ASP.Net messing with the tags it contains during rendering, and naming the hidden input "cxml-base64" per the documentation. The result is the form is client-side POSTed to your customer's server instead of yours, and their server will extract the CXML Punchout Order and that ends your visitor's session.
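For the hidden input itself, a short hedged sketch of producing the value server-side; BuildPunchOutOrderMessage is a hypothetical LINQ-to-XML builder and hidCxmlBase64 an assumed control:

// Base64-encode the PunchOutOrderMessage cXML so ASP.Net doesn't mangle the tags,
// then drop it into the hidden input named "cxml-base64" per the documentation.
string orderCxml = BuildPunchOutOrderMessage();  // hypothetical builder
hidCxmlBase64.Value = Convert.ToBase64String(Encoding.UTF8.GetBytes(orderCxml));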
The Order handler will receive a CXML OrderRequest and just like Setup, you'll dump that to a string and then use LINQ-to-XML to parse it and act upon it. Again you'll get credentials to verify, possibly a credit card to process, and the order items, ship-to, etc. Note that the OrderRequest may not contain all the items that were in the Punchout Order, because the system on your customer's side may remove items or even change item quantities before submitting the final OrderRequest to you. The OrderRequest could come back to you after the Punchout Order is posted to them in a matter of minutes, days, weeks, or never...don't bother storing the cart data in hopes of matching it to the order later.
Last note...the buyer may be experiencing your site in an iframe embedded in their web-based procurement UI, so design accordingly.
If you need more info, reply to this and I'll get back.
Update...Additional considerations:
Discuss with the buyer how they want fault handling to flow, particularly with orders, because you have a choice. 1) exhaustively evaluate everything in the CXML you receive and return response codes other than 200 if anything is wrong, or 2) always return a 200 Success and deal with any issues out of band or by generating a ConfirmationRequest that rejects the order. My experience is that a mix of the two works best. Certainly you should throw a non-200 if the credentials fail, but you may not want (or be able) to run a credit card or validate stock availability inline. Your buyer's system may not be able to cope with dozens of possible faults, and/or may not show your fault messages to the user for them to make corrections. I've seen systems that will flat-out discard any non-200 response code and just blindly retry the submission repeatedly on an interval for hours or days until it gives up on a sanity check, while others will handle response codes within certain ranges differently than others, for example a 4xx invokes a retry, while a 5xx is treated as fatal. Remember that Setup and Order are not coming directly from the user...their procurement system is generating those internally.
Update...answering the comment about how to test things...
You'd use the same method as you will for generating outbound ConfirmationRequest, ShipNoticeRequest, and InvoiceDetailRequest, all of which generally are produced on your side after receiving an OrderRequest from your customer's procurement system.
Start with Linq-To-XML for an example of crafting your outgoing cXML (Creating XML Trees section). Combine that example with this bit of code:
StringBuilder output = new StringBuilder();
XmlWriterSettings objXmlWriterSettings = new XmlWriterSettings();
objXmlWriterSettings.Indent = true;
objXmlWriterSettings.NewLineChars = Environment.NewLine;
objXmlWriterSettings.NewLineHandling = NewLineHandling.Replace;
objXmlWriterSettings.NewLineOnAttributes = false;
objXmlWriterSettings.Encoding = new UTF8Encoding();
using (XmlWriter objXmlWriter = XmlWriter.Create(output, objXmlWriterSettings)) {
XElement root = new XElement("Root",
new XElement("Child", "child content")
);
root.Save(objXmlWriter);
}
Console.WriteLine(output.ToString());
So at this point the StringBuilder (output) has your whole cXML, and you need to POST it someplace. Your Web Application project, started with F5 and a default.aspx page will be listening on localhost and some port (you'll see that in the URL it opens). Separately, perhaps using VS Express for Desktop, you have the above code in a console app that you can run to do the Post using something like this:
HttpWebRequest objRequest = (HttpWebRequest)WebRequest.Create("http://localhost:12345/handler.ashx");
objRequest.Method = "POST";
objRequest.UserAgent = "Some User Agent";
objRequest.ContentLength = output.Length; // ASCII: one byte per character
objRequest.ContentType = "text/xml";
StreamWriter objStreamWriter = new StreamWriter(objRequest.GetRequestStream(), System.Text.Encoding.ASCII);
objStreamWriter.Write(output.ToString());
objStreamWriter.Flush();
objStreamWriter.Close();
WebResponse objWebResponse = objRequest.GetResponse();
XmlReaderSettings objXmlReaderSettings = new XmlReaderSettings();
objXmlReaderSettings.DtdProcessing = DtdProcessing.Ignore;
XmlReader objXmlReader = XmlReader.Create(objWebResponse.GetResponseStream(), objXmlReaderSettings);
// Pipes the stream to a higher level stream reader with the required encoding format.
MemoryStream objMemoryStream2 = new MemoryStream();
XmlWriter objXmlWriter2 = XmlWriter.Create(objMemoryStream2, objXmlWriterSettings);
objXmlWriter2.WriteNode(objXmlReader, true);
objXmlWriter2.Flush();
objXmlWriter2.Close();
objWebResponse.Close();
// Reset current position to the beginning so we can read all below.
objMemoryStream2.Position = 0;
StreamReader objStreamReader = new StreamReader(objMemoryStream2, Encoding.UTF8);
Console.WriteLine(objStreamReader.ReadToEnd());
objStreamReader.Close();
Since your handler should be producing cXML you'll see that spat out in the console. If it pukes, you'll get a big blob of debug mess in the console, which of course will help you fix whatever is broken.
Pardon the verbosity of the variable names; it's done to try to make things clear.
I have a script which, given several querystring variables, serves an image. I am also using URL rewriting within IIS 7.5.
So images have a URL like this:
http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
or
http://mydomain/pictures/ajfhajkfhal/44.jpg
This is rewritten to:
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
or
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44
I added caching rules to IIS to cache JPG images when they are requested. This works with my images that are REAL images on the disk. When images are provided through the script, they are somehow always requested through the script, without being cached.
The images do not change that often, so if the cache at least is being kept for 30 minutes (or until file change) that would be best.
I am using .NET/C# 4.0 for my website. I tried setting several cache options in C#, but I can't seem to find out how to cache these images (client-side), while my static images are cached properly.
EDIT I use the following options to cache the image on the client side, where 'fileName' is the physical filename of the image (on disk).
context.Response.AddFileDependency(fileName);
context.Response.Cache.SetETagFromFileDependencies();
context.Response.Cache.SetLastModifiedFromFileDependencies();
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.Now.AddTicks(600));
context.Response.Cache.SetMaxAge(new TimeSpan(0, 5, 0));
context.Response.Cache.SetSlidingExpiration(true);
context.Response.Cache.SetValidUntilExpires(true);
context.Response.ContentType = "image/jpg";
EDIT 2 Thanks for pointing that out, that was indeed a very stupid mistake ;). I changed it to 30 minutes from now (DateTime.Now.AddMinutes(30)).
But this doesn't solve the problem. I am really starting to think the problem lies with Firefox. I use Firebug to track each request, and I suspect I am doing something fundamentally wrong. Normal images (which are cached and static) give back a response code of "304 (Not Modified)", while my page always gives back a "200 (OK)".
(Screenshot of the Firebug network panel: http://images.depl0y.com/capture.jpg)
If what you mean by "script" is the code in your Picture.aspx, I should point out that C# is not a scripting language, so it is technically not a script.
You can use the Caching API provided by ASP.NET.
I assume you already have a method which contains something like this. Here is how you can use the Caching API:
string fileName = ... // The name of your file
byte[] bytes = null;
if (HttpContext.Current.Cache[fileName] != null)
{
    bytes = (byte[])HttpContext.Current.Cache[fileName];
}
else
{
    bytes = ... // Retrieve your image's bytes
    HttpContext.Current.Cache[fileName] = bytes; // Set the cache
}
// Send it to the client
Response.BinaryWrite(bytes);
Response.Flush();
Note that the keys you use in the cache must be unique to each cached item, so it might not be enough to just use the name of the file for this purpose.
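For example, a small sketch of composing a key from the querystring parameters used by the rewrite rules in the question, so different renditions of the same file don't collide:

// A thumbnail and the full-size image must get distinct cache entries.
string cacheKey = string.Format("img:{0}:{1}:{2}",
    context.Request.QueryString["group"],
    context.Request.QueryString["id"],
    context.Request.QueryString["thumb"]);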
EDIT:
If you want to enable caching the content on the client side, use the following:
Response.Cache.SetCacheability(HttpCacheability.Public);
You can experiment with the different HttpCacheability values. With this, you can specify how and where the content should be cached (e.g. on the server, on proxies, and on the client).
This will make ASP.NET send the client the caching rules in the appropriate HTTP headers.
This will not guarantee that the client will actually cache it (it depends on browser settings, for example), but it will tell the browser "You should cache this!"
The best practice would be to use caching on both the client and the server side.
EDIT 2:
The problem with your code is the SetExpires(DateTime.Now.AddTicks(600)). 600 ticks is only a fraction of a second... (1 second = 10000000 ticks)
Basically, the content gets cached but expires the moment it gets to the browser.
Try these:
context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(5));
context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
(The TimeSpan.FromMinutes is also more readable than new TimeSpan(...).)
I am developing a web app which will generate a random link pointing to an image on my server, something like http://dummy.com/Images/Image1.jpg?id=19234.
This link can then be used by anybody on their site; I want to know how many sites are using my links, without anybody having to click on them.
Can it be done using an HttpModule?
Is this as simple as Googling? Search for
link:http://dummy.com/Images/Image1.jpg?id=19234
If you want to do this programmatically, you'll need to use the Google API.
The issue you'd have with an HttpHandler is that it will generally only kick in for requests that are being handled by the ASP.Net engine - the image requests will normally be handled by IIS without going through the handler.
Your web logs should be able to tell you who the referers for any given item on your servers are - assuming that you have them, and you have something to process them - and this will be more accurate than using Google.
One of the ways I've done this in the past is to have the image generated by an HttpHandler (implementing IHttpHandler).
This will return the image as a stream (setting the content type to "image/jpeg"), and you can add further processing (such as logging where the request (referer) came from, etc).
The limitation I found with the HttpHandler is that some services (PBBS for example) require an image link to have an image extension - I got around this by processing all 404s with an ASP.Net page that checks for the .jpg extension in the request. If it finds one, instead of returning the usual 404 page, it returns the requested image. You'll need to configure the 404 handler in IIS though, as the web.config error handler only kicks in for ASP.Net requests (web services and .aspx type pages).
Example handler:
// Sample from the ASP.Net Personal Web Site Starter Kit
public class Handler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Set up the response settings
        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.BufferOutput = false;

        // QueryString parameters are available here:
        // context.Request.QueryString["QueryStringKey"]
        // You can also access the Referrer object, and log the requests here.

        // Read your image into the stream, either from file system or DB
        Stream stream = null;
        if (stream == null)
        {
            stream = PhotoManager.GetPhoto();
        }

        // Write image stream to the response stream
        const int buffersize = 1024 * 16;
        var buffer = new byte[buffersize];
        int count = stream.Read(buffer, 0, buffersize);
        while (count > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, count);
            count = stream.Read(buffer, 0, buffersize);
        }
    }
}
You can have similar code (or better yet, refactor the main image streaming code into a shared class) in the 404 page, that checks for the existence of the image extension, and renders the image out that way (again, setting the content type, etc).
Oddthinking is right. See http://code.google.com/intl/en/apis/ajaxsearch/documentation/#fonje_snippets or Google's API. They give examples for PHP and Java, but there are also AJAX frameworks for ASP.NET (http://www.asp.net/ajax/), and I'm sure C# as well.
You can change your image extension to an .aspx extension (http://dummy.com/Images/Image1.aspx?id=19234); there is no problem with this, because the only thing this page would do is write the image to Response.OutputStream. That is to say, it would behave just like a jpg, but with the advantage that you can run some other code as part of the request.
In this aspx (before outputting the image), we would check the HTTP referer and store it in a data table if the record does not already exist.
This is really useful if, for example, you want to restrict access to the images. You could add some logic to forbid access if the user is not logged in.
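A hedged sketch of such an Image1.aspx code-behind; LogReferrerIfNew is a hypothetical data-access helper:

protected void Page_Load(object sender, EventArgs e)
{
    // Record where the request came from, once per referring page.
    Uri referrer = Request.UrlReferrer;
    if (referrer != null)
        LogReferrerIfNew(referrer.ToString(), Request.QueryString["id"]); // hypothetical helper

    // Optionally: deny access here if the user is not logged in.

    Response.ContentType = "image/jpeg";
    Response.WriteFile(Server.MapPath("~/Images/Image1.jpg")); // or stream the bytes from a DB
}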