How best to respond to an open HTTP range request - C#

I am working on responding to HTTP range requests so that video can be streamed from our server, and so that Safari will actually play the video.
There is an added complication: the video file is encrypted on disk, which prevents me from seeking in the stream, but that is not really within the scope of this question.
I have noticed that Chrome and at least the new Edge request open-ended ranges (e.g. bytes=0-). What is the correct thing to respond with here? Currently I respond with the full video, which I understand completely defeats the purpose of streaming.
// Splitting "bytes=0-" on '=' and '-' yields ["bytes", "0", ""].
var range = context.Request.Headers["Range"].Split('=', '-');
var startByteIndex = long.Parse(range[1]);
// Quite a few browsers send open-ended requests for the range (e.g. "bytes=0-"),
// so the end index may be an empty string.
long endByteIndex;
if (!long.TryParse(range[2], out endByteIndex))
{
    endByteIndex = streamLength - 1;
}
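(Incidentally, the framework can parse this header for you. A minimal sketch, assuming System.Net.Http.Headers and System.Linq are available and that the request carries a single range, as browsers send for media:)

using System.Linq;
using System.Net.Http.Headers;

if (RangeHeaderValue.TryParse(context.Request.Headers["Range"], out var parsed))
{
    var r = parsed.Ranges.First();       // browsers send a single range for media
    // Note: a suffix range ("bytes=-500") has From == null and means
    // "the last 500 bytes"; handle or reject that case explicitly.
    long start = r.From ?? 0;
    long end = r.To ?? streamLength - 1; // open-ended "0-": through the last byte
}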
Below is my full attempt so far at responding.
if (!string.IsNullOrEmpty(context.Request.Headers["Range"]))
{
    var range = context.Request.Headers["Range"].Split('=', '-');
    var startByteIndex = long.Parse(range[1]);
    // Quite a few browsers send open-ended requests for the range (e.g. "bytes=0-").
    long endByteIndex;
    if (!long.TryParse(range[2], out endByteIndex))
    {
        endByteIndex = streamLength - 1;
    }
    Debug.WriteLine("Range request for " + context.Request.Headers["Range"]);
    // Make sure the request is within the bounds of the video.
    if (endByteIndex >= streamLength)
    {
        context.Response.StatusCode = (int)HttpStatusCode.RequestedRangeNotSatisfiable;
        return false;
    }
    var currentIndex = 0L;
    // SEEKING IS NOT WHOLE AND COMPLETE.
    // Get to the requested start. We are not allowed to seek with CBC + AES.
    // TODO: we could probably work out a more suitable buffer size here to get to the start index.
    while (currentIndex < startByteIndex)
    {
        var dummy = new byte[bufferLength];
        var bytesRead = videoReadStream.Read(dummy, 0, bufferLength);
        currentIndex += bytesRead;
    }
    // Fast but unreliable given AES + CBC.
    //fileStream.Seek(startByteIndex, SeekOrigin.Begin);
    dataToRead = endByteIndex - startByteIndex + 1;
    // Supply the relevant partial content headers.
    context.Response.StatusCode = (int)HttpStatusCode.PartialContent;
    context.Response.AddHeader("Content-Range", $"bytes {startByteIndex}-{endByteIndex}/{streamLength}");
    context.Response.AddHeader("Content-Length", dataToRead.ToString());
}
else
{
    context.Response.AddHeader("Cache-Control", "private, max-age=1200");
    context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(20));
    context.Response.AddHeader("content-disposition", "inline;filename=" + fileID);
    context.Response.AddHeader("Accept-Ranges", "bytes");
}
var buffer = new byte[bufferLength];
while (dataToRead > 0 && context.Response.IsClientConnected)
{
    // Track how many bytes were actually read; Read may return fewer than requested.
    var bytesRead = videoReadStream.Read(buffer, 0, (int)Math.Min(bufferLength, dataToRead));
    if (bytesRead == 0)
    {
        break; // end of stream
    }
    // Write the data to the current output stream.
    context.Response.OutputStream.Write(buffer, 0, bytesRead);
    // Flush the data to the HTML output.
    context.Response.Flush();
    dataToRead -= bytesRead;
}
Most frustratingly, I notice that Edge (I haven't tested others yet) always seems to send an open-ended request:
Range request for bytes=0-
Range request for bytes=1867776-
Range request for bytes=3571712-
Range request for bytes=5341184-
Range request for bytes=7176192-
Range request for bytes=9273344-
Range request for bytes=10977280-
Range request for bytes=12943360-
Range request for bytes=14614528-
Range request for bytes=16384000-
Range request for bytes=18087936-
Range request for bytes=19955712-
Range request for bytes=21823488-
Range request for bytes=23625728-
Range request for bytes=25690112-
Range request for bytes=27525120-
Range request for bytes=39256064-
Range request for bytes=41222144-
Range request for bytes=42270720-
Should I just decide on a chunk size to respond with and stick with that? I notice that if I respond with chunks of just 3 bytes, Edge simply requests many more ranges in increments of 3.

This is a similar, but not identical, question to What byte range 0- means.
What is the correct thing to respond with here?
The correct thing to respond with here is the entire resource. However, since the client included a Range header, you may instead respond with 206 Partial Content and a subset of the file.
Should I just decide on a chunk size to respond with and stick with that?
Pretty much, based on what's efficient for your server. Note that, as in Firefox won't request further data after receiving 206 with specified content range, you may encounter browsers that don't do the right thing.
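For example, a minimal sketch of capping an open-ended "bytes=N-" request at a server-chosen chunk size, reusing the context and streamLength names from the question (the 2 MB figure is arbitrary):

// Answer an open-ended range with a bounded chunk instead of the whole file.
// The client will simply request the next range when it needs more data.
const long chunkSizeBytes = 2 * 1024 * 1024;
long start = startByteIndex;
long end = Math.Min(start + chunkSizeBytes - 1, streamLength - 1);
long contentLength = end - start + 1;
context.Response.StatusCode = (int)HttpStatusCode.PartialContent;
context.Response.AddHeader("Accept-Ranges", "bytes");
context.Response.AddHeader("Content-Range", $"bytes {start}-{end}/{streamLength}");
context.Response.AddHeader("Content-Length", contentLength.ToString());

As long as Content-Range and Content-Length describe the bytes actually sent, well-behaved clients follow up with further range requests, though, as noted above, not every browser handles short 206 responses gracefully.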

Related

Uploading Large Files to WCF from Xamarin Android App Crashes

I'm trying to upload a large video (1 GB+) from my Xamarin app, and it keeps crashing once it reaches about 0.5 GB of the file. The only way I've found to post videos to my WCF service while sending data along with them is the multipart logic, but I'm not sure if I'm running out of memory or what, because even in debug mode it simply crashes without any real error message.
I'm running it on a physical device (not a simulator): a Samsung Galaxy S9 with Android 9.
Here's the upload code that I'm using. (P.S. - as a test, I tried putting the WriteAsync into a for loop, thinking that maybe trying to write the whole gig at once was the problem, but the result was the same. That's why you'll see the MAXFILESIZEPART constant in there, which is just an int equal to 10000000.)
private async Task<byte[]> GetMultipartFormDataAsync(Dictionary<string, object> postParameters, string boundary)
{
    try
    {
        using (Stream formDataStream = new System.IO.MemoryStream())
        {
            bool needsCRLF = false;
            foreach (var param in postParameters)
            {
                // Thanks to feedback from commenters, add a CRLF to allow multiple parameters to be added.
                // Skip it on the first parameter, add it to subsequent parameters.
                if (needsCRLF)
                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes("\r\n"), 0, Encoding.UTF8.GetByteCount("\r\n"));

                needsCRLF = true;

                if (param.Value is FileParameter)
                {
                    FileParameter fileToUpload = (FileParameter)param.Value;

                    // Add just the first part of this param, since we will write the file data directly to the Stream.
                    string header = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"; filename=\"{2}\"\r\nContent-Type: {3}\r\n\r\n",
                        boundary,
                        param.Key,
                        fileToUpload.FileName ?? param.Key,
                        fileToUpload.ContentType ?? "application/octet-stream");

                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(header), 0, Encoding.UTF8.GetByteCount(header));

                    // Write the file data directly to the Stream, rather than serializing it to a string.
                    if (fileToUpload.File.Length > MAXFILESIZEPART)
                    {
                        for (var i = 0; i < fileToUpload.File.Length; i += MAXFILESIZEPART)
                        {
                            var len = i + MAXFILESIZEPART > fileToUpload.File.Length
                                ? fileToUpload.File.Length - i
                                : MAXFILESIZEPART;
                            await formDataStream.WriteAsync(fileToUpload.File, i, len);
                        }
                    }
                    else
                    {
                        await formDataStream.WriteAsync(fileToUpload.File, 0, fileToUpload.File.Length);
                    }
                }
                else
                {
                    string postData = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"\r\n\r\n{2}",
                        boundary,
                        param.Key,
                        param.Value);
                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(postData), 0, Encoding.UTF8.GetByteCount(postData));
                }
            }

            // Add the end of the request. Start with a newline.
            string footer = "\r\n--" + boundary + "--\r\n";
            await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(footer), 0, Encoding.UTF8.GetByteCount(footer));

            // Dump the Stream into a byte[].
            formDataStream.Position = 0;
            byte[] formData = new byte[formDataStream.Length];
            formDataStream.Read(formData, 0, formData.Length);
            return formData;
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
And it eventually fails on the following line:
await formDataStream.WriteAsync(fileToUpload.File, i, len);
but only after a certain point (about 500 MB), so I'm assuming it's a memory issue, although it doesn't say so. Is there a better way to accomplish this task? I'm doing it this way so that I can also record progress as the upload happens. I'm trying to accomplish something similar to uploading large videos via the Facebook app, where the upload continues in the background while you keep working. It works great with smaller files (i.e. < 500 MB), but this is the first time I've tried a file that was almost a gig in size.
NOTE: This happens BEFORE it starts posting anything to the server, so it's not IIS or WCF related. This code crashes just writing the bytes to the memory stream.
Any suggestions?
Thanks!
According to your description, the upload stops at a certain point, and because the file you are transferring is about 1 GB, it is likely a send timeout: the transfer does not complete within the allotted time, which raises an exception. SendTimeout specifies how long a write operation has to complete before timing out. The default value is 1 minute.
I set SendTimeout to 15 seconds in my configuration file; if the data takes more than 15 seconds to send, an exception occurs. You can set it to a higher value to avoid the timeout and the exception.
For information about SendTimeout, please refer to the following link:
https://learn.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.binding.sendtimeout?view=dotnet-plat-ext-3.1
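For example, a sketch of raising the timeout in code; the same values map to the sendTimeout and maxReceivedMessageSize attributes on the binding element in app.config or web.config:

using System.ServiceModel;

// Illustrative values only; tune them to your file sizes and link speed.
var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(10),          // default is 1 minute
    MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024 // headroom for 1 GB+ uploads
};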
UPDATE
I now think it might be a memory problem: buffering a file this large all at once can cause an out-of-memory condition.
You can refer to the following link for a solution:
https://learn.microsoft.com/en-us/archive/blogs/johan/are-you-getting-outofmemoryexceptions-when-uploading-large-files
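Applying that advice to the code above: build the multipart body in a temporary FileStream rather than a MemoryStream, so memory use stays flat regardless of file size. A sketch, where WriteMultipartFormDataAsync is a hypothetical rework of GetMultipartFormDataAsync that writes each part to the supplied stream instead of returning a byte[]:

string tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
using (var formDataStream = new FileStream(tempPath, FileMode.CreateNew,
    FileAccess.ReadWrite, FileShare.None, bufferSize: 81920, useAsync: true))
{
    // Hypothetical helper: same body-building logic as above, streamed to disk.
    await WriteMultipartFormDataAsync(formDataStream, postParameters, boundary);
    formDataStream.Position = 0;
    // Hand the stream to the HTTP layer (e.g. new StreamContent(formDataStream)
    // with HttpClient) so the upload itself is streamed rather than buffered.
}

Note that FileParameter.File is itself a byte[], so the source file still sits in memory; a complete fix would read the file from disk as a stream as well.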

Incomplete video streaming to mobile browsers - HTTP headers

I am trying to stream fragmented MP4 video to a mobile browser client using the Nancy MVC framework. Everything works fine. The code is enclosed.
The catch is that the video is going to be generated at the same time it is being streamed, so stream.Length will increase over time. Does anyone know what to do to support this scenario?
(I have tried committing the length in the Content-Range header, giving an arbitrary maximum size to encompass the whole video, but to no avail...)
/* called when /video is requested */
Get["/video"] = _ =>
{
    if (Request.Headers.Keys.Contains("Range"))
        return Response.FromPartialStream(Request, File.OpenRead("../Page/video-fragmented.mp4"), "video/mp4");
    else
        /* from stream... */
};

public static Response FromPartialStream(this IResponseFormatter f,
    Request req, Stream stream,
    string contentType)
{
    const string BYTES_RANGE_HEADER = "Range";
    if (req.Headers[BYTES_RANGE_HEADER].Count() != 1)
        throw new NotSupportedException();

    var rangeStr = req.Headers[BYTES_RANGE_HEADER].FirstOrDefault();
    var range = rangeStr.Replace("bytes=", String.Empty)
        .Split(new string[] { "-" }, StringSplitOptions.RemoveEmptyEntries)
        .Select(x => Int32.Parse(x))
        .ToArray();

    var start = (range.Length > 0) ? range[0] : 0;
    var end = (range.Length > 1) ? range[1] : (int)(stream.Length - 1);

    var res = new PartialStreamResponse(stream, start, end, contentType)
        .WithHeader("Connection", "keep-alive")
        .WithHeader("Accept-Ranges", "bytes")
        .WithHeader("Content-Range", "bytes " + start + "-" + end + "/" + stream.Length)
        .WithHeader("Content-Length", (end - start + 1).ToString());

    Console.WriteLine("Requested range: {0}", rangeStr);
    return res;
}

public class PartialStreamResponse : Response
{
    Stream sourceStream = null;
    int start, end;

    public PartialStreamResponse(Stream sourceStream, int start, int end, string mimeType)
    {
        this.sourceStream = sourceStream;
        this.start = start;
        this.end = end;
        Contents = populateRequest;
        StatusCode = HttpStatusCode.PartialContent;
        ContentType = mimeType;
    }

    private void populateRequest(Stream stream)
    {
        Console.WriteLine("Begin stream...");
        // CopyTo(Stream, int, int) is presumably a custom extension here that
        // copies only the [start, end] slice of the source stream.
        sourceStream.CopyTo(stream, start, end);
        Console.WriteLine("End stream");
    }
}
EDIT: serving such files should also work for mobile browsers (a single file would be preferred over HLS or DASH, which require segments).
Well, at least Chrome and Firefox happily handle 200 OK responses without Content-Range or Content-Length headers. In my opinion you should only reply with 206 Partial Content if the requested range has a valid end marker; otherwise just reply with 200 OK without a content length and push the stream.
Of course, another question is how to handle the file being generated live. I'd advise keeping the moov part in a separate file and generating a second file with the current moof. That way a new client initially gets the moov (which should be fixed), and once that data has been sent the server just continues reading and serving the moof file, which is refreshed. Also, to escape I/O starvation (the server trying to read the file while whatever generates the content tries to write to it), you could have at least two moof files acting as a double buffer: one holds the last finished fragment while the other is being written.
In my opinion, 206 Partial Content responses are more useful for static video content than for live video, because in the static case the browser can get the moov atom, parse all the size and offset tables, and offer seeking while content is being loaded, which is not possible for a live video.
Use multiple requests (one per fragment) or use chunked transfer.
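A minimal sketch of that rule in the same Nancy style as the question, reusing PartialStreamResponse and the parsed range array from FromPartialStream (for simplicity the live path ignores the start offset):

if (range.Length > 1)
{
    // "bytes=a-b": a bounded range we can satisfy exactly with 206.
    return new PartialStreamResponse(stream, range[0], range[1], contentType)
        .WithHeader("Accept-Ranges", "bytes")
        .WithHeader("Content-Range", "bytes " + range[0] + "-" + range[1] + "/" + stream.Length)
        .WithHeader("Content-Length", (range[1] - range[0] + 1).ToString());
}
// "bytes=a-" on a live stream: plain 200 OK with no Content-Range or
// Content-Length; keep the connection open and push bytes as they arrive.
return new Response
{
    StatusCode = HttpStatusCode.OK,
    ContentType = contentType,
    Contents = output => stream.CopyTo(output)
};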

C# - Downloading from Google Drive in byte chunks

I'm currently developing for an environment that has poor network connectivity. My application helps users automatically download required Google Drive files. It works reasonably well for small files (40 KB to 2 MB) but fails far too often for larger files (9 MB). I know these file sizes might seem small, but in my client's network environment the Google Drive API constantly fails with the 9 MB file.
I've concluded that I need to download files in smaller byte chunks, but I don't see how I can do that with Google Drive API. I've read this over and over again, and I've tried the following code:
// With the Drive file ID and the appropriate export MIME type, I create the export request.
var request = DriveService.Files.Export(fileId, exportMimeType);

// Take the message so I can modify it by hand.
var message = request.CreateRequest();
var client = request.Service.HttpClient;

// I change the Range headers of both the client and the message.
client.DefaultRequestHeaders.Range =
    message.Headers.Range =
        new System.Net.Http.Headers.RangeHeaderValue(100, 200);

var response = await request.Service.HttpClient.SendAsync(message);

// If the status code indicates success, copy to a local file.
if (response.IsSuccessStatusCode)
{
    using (var fileStream = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
    {
        await response.Content.CopyToAsync(fileStream);
    }
}
The resulting local file (from fileStream), however, is still full length (i.e. a 40 KB file for the 40 KB Drive file, and a 500 Internal Server Error for the 9 MB file). As a side note, I've also experimented with ExportRequest.MediaDownloader.ChunkSize, but from what I observe it only changes the frequency at which the ExportRequest.MediaDownloader.ProgressChanged callback is called (i.e. the callback triggers every 256 KB if ChunkSize is set to 256 * 1024).
How can I proceed?
You seem to be heading in the right direction. From your last comment, the request updates progress based on the chunk size, so your observation was accurate.
Looking into the source code for MediaDownloader in the SDK, the following was found (emphasis mine):
The core download logic. We download the media and write it to an
output stream ChunkSize bytes at a time, raising the ProgressChanged
event after each chunk. The chunking behavior is largely a historical
artifact: a previous implementation issued multiple web requests, each
for ChunkSize bytes. Now we do everything in one request, but the API
and client-visible behavior are retained for compatibility.
Your example code will only download one chunk, from byte 100 to 200. Using that approach, you would have to keep track of an index and download each chunk manually, copying each partial download to the file stream:
const int KB = 0x400;
int ChunkSize = 256 * KB; // 256 KB

public async Task ExportFileAsync(string downloadFileName, string fileId, string exportMimeType)
{
    var exportRequest = driveService.Files.Export(fileId, exportMimeType);
    var client = exportRequest.Service.HttpClient;

    // You would need to know the file size up front.
    var size = await GetFileSize(fileId);

    using (var file = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
    {
        file.SetLength(size);
        var chunks = (size / ChunkSize) + 1;
        for (long index = 0; index < chunks; index++)
        {
            var request = exportRequest.CreateRequest();
            var from = index * ChunkSize;
            var to = from + ChunkSize - 1;
            request.Headers.Range = new RangeHeaderValue(from, to);
            var response = await client.SendAsync(request);
            if (response.StatusCode == HttpStatusCode.PartialContent || response.IsSuccessStatusCode)
            {
                using (var stream = await response.Content.ReadAsStreamAsync())
                {
                    file.Seek(from, SeekOrigin.Begin);
                    await stream.CopyToAsync(file);
                }
            }
        }
    }
}

private async Task<long> GetFileSize(string fileId)
{
    // Assuming the v3 client, where the size is the nullable long property Size.
    var file = await driveService.Files.Get(fileId).ExecuteAsync();
    return file.Size ?? 0;
}
This code makes some assumptions about the Drive API/server:
That the server will allow the multiple requests needed to download the file in chunks (I don't know whether requests are throttled).
That the server still accepts the Range header, as stated in the developer documentation.
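For illustration, calling the sketch above might look like this (the file ID and export MIME type are placeholders):

// Hypothetical usage; substitute a real Drive file ID and the export format you need.
await ExportFileAsync(@"C:\downloads\report.pdf", "your-file-id", "application/pdf");

One more caveat: files that need Export (native Google Docs formats) may not have the size field populated, so GetFileSize would need a fallback, such as requesting chunks until a short or empty one comes back.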

Network Streams - Amount to read per Read

I'm currently a bit stuck with my C# project.
I have two applications; they both share a common class definition that I call a NetMessage.
A NetMessage contains a MessageType string property, as well as two List collections.
The idea is that I can pack this class with other classes and data to send across the network as a byte[].
Because network streams do not advertise the amount of data they carry, I modified my Send method to send the size of the NetMessage byte[] ahead of the actual byte[].
private static byte[] ReceivedBytes(NetworkStream MainStream)
{
    try
    {
        //byte[] myReadBuffer = new byte[1024];
        int receivedDataLength = 0;
        byte[] data = { };
        long len = 0;
        int i = 0;
        MainStream.ReadTimeout = 60000;
        //MainStream.CanTimeout = false;
        if (MainStream.CanRead)
        {
            // Read the length of the incoming message.
            byte[] byteLen = new byte[8];
            MainStream.Read(byteLen, 0, 8);
            len = BitConverter.ToInt64(byteLen, 0);
            data = new byte[len];
            // data is now set to the appropriate size for the expected message.
            // While we have not got the full message,
            // read each individual byte and append it to data.
            // This method seems to work, but is ridiculously slow.
            while (receivedDataLength < data.Length)
            {
                receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
            }
            //receivedDataLength += MainStream.Read(data, receivedDataLength, data.Length);
            return data;
        }
    }
    catch (Exception E)
    {
        //System.Windows.Forms.MessageBox.Show("Exception:" + E.ToString());
    }
    return null;
}
I have tried changing the size argument below to something like 1024, or to data.Length, but I get funky results.
receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
Setting it to data.Length seems to cause problems when the class being sent is a few MB in size.
Setting the buffer size to 1024, as I have seen in other examples, causes failures when the incoming message is small; at 843 bytes, for example, it errors out saying that I tried to read out of bounds.
Below is the method being used to send the data in the first place.
public static void SendBytesToStream(NetworkStream TheStream, byte[] TheMessage)
{
    //IAsyncResult r = TheStream.BeginWrite(TheMessage, 0, TheMessage.Length, null, null);
    //r.AsyncWaitHandle.WaitOne(10000);
    //TheStream.EndWrite(r);
    try
    {
        long len = TheMessage.Length;
        byte[] Bytelen = BitConverter.GetBytes(len);
        TheStream.Write(Bytelen, 0, Bytelen.Length);
        TheStream.Flush();
        // <-- I've tried putting thread sleeps in this spot to see if it helps.
        // I've also tried writing each byte of the message individually;
        // it takes longer, but seems more accurate as far as network transmission goes?
        TheStream.Write(TheMessage, 0, TheMessage.Length);
        TheStream.Flush();
    }
    catch (Exception e)
    {
        //System.Windows.Forms.MessageBox.Show(e.ToString());
    }
}
I'd like to get these two methods to the point where they reliably send and receive data.
The application I am using this for monitors a screenshots folder in a game directory. When it detects a screenshot in TGA format, it converts it to PNG, then takes its byte[] and sends it to the receiver.
The receiver then posts it to Facebook (I don't want my FB tokens distributed in my client application), hence the server/proxy idea.
Strangely, when I step through the code, the transfer is invariably successful. But if I run it at full speed, with no breakpoints, it typically tells me that the connection was closed by the remote host.
The client typically finishes sending the data almost instantly, even though it's a 4 MB file, while the receiver spends about two minutes reading from the network stream. That doesn't make sense: if the client has finished sending the data, is it just floating in cyberspace waiting to be pulled down?
Surely it should be synchronous?
I suspect I know where my code was going wrong.
It turns out that the TcpClient doing the sending was declared and instantiated within a method, so once the method returned it went out of scope and the garbage collector disposed of it, even though the receiving server had not finished downloading the stream.
I managed to resolve it by adding a method on the server end that detects when the client has disconnected, and looping/waiting until it actually has.
This way, we wait until the server lets go of us.
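For what it's worth, the slow byte-at-a-time loop in the question can be made both fast and safe by asking Read for only the bytes still missing on each iteration; a minimal sketch:

using System.IO;
using System.Net.Sockets;

// Read exactly len bytes: requesting only what is still missing means a large
// buffer never reads past the message, and a small message never triggers an
// out-of-range error.
private static byte[] ReadExactly(NetworkStream stream, long len)
{
    var data = new byte[len];
    int received = 0;
    while (received < data.Length)
    {
        int read = stream.Read(data, received, data.Length - received);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        received += read;
    }
    return data;
}

This also explains the earlier out-of-bounds error: passing a fixed count of 1024 to Read against an 843-byte array asks for more bytes than the buffer can hold.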

How do I remove the header info using an HttpModule to Upload a file?

I've created an HttpModule in ASP.NET to allow users to upload large files. I found some sample code online that I was able to adapt to my needs. I grab the file if the request is a multipart message, then chunk the bytes and write them to disk.
The problem is that the file is always corrupt. After doing some research, it turns out that multipart headers are included in the first bytes I receive, and I can't figure out how to parse out those bytes so that I am left with only the file.
Extra junk data is prepended to the top of the file, such as this:
-----------------------8cbb435d6837a3f
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: application/octet-stream
This kind of header information of course corrupts the file I am receiving, so I need to get rid of it before I write the bytes.
Here is the code I wrote to handle the upload:
public class FileUploadManager : IHttpModule
{
    public int BUFFER_SIZE = 1024;

    protected void app_BeginRequest(object sender, EventArgs e)
    {
        // Get the context we are working under.
        HttpContext context = ((HttpApplication)sender).Context;

        // Make sure this is multipart data.
        if (context.Request.ContentType.IndexOf("multipart/form-data") == -1)
        {
            return;
        }

        IServiceProvider provider = (IServiceProvider)context;
        HttpWorkerRequest wr =
            (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));

        // Only process this file if it has a body and is not already preloaded.
        if (wr.HasEntityBody() && !wr.IsEntireEntityBodyIsPreloaded())
        {
            // Get the total length of the body.
            int iRequestLength = wr.GetTotalEntityBodyLength();
            // Get the initial bytes loaded.
            int iReceivedBytes = wr.GetPreloadedEntityBodyLength();

            // Open a file stream to write bytes to.
            using (System.IO.FileStream fs =
                new System.IO.FileStream(
                    @"C:\tempfiles\test.txt",
                    System.IO.FileMode.CreateNew))
            {
                // *** NOTE: This is where I think I need to filter the bytes
                // received to get rid of the junk data, but I am unsure how
                // to do this?
                int bytesRead = BUFFER_SIZE;
                // Create an input buffer to store the incoming data.
                byte[] byteBuffer = new byte[BUFFER_SIZE];
                while ((iRequestLength - iReceivedBytes) >= bytesRead)
                {
                    // Read the next chunk of the file.
                    bytesRead = wr.ReadEntityBody(byteBuffer, byteBuffer.Length);
                    // Write only the bytes actually read.
                    fs.Write(byteBuffer, 0, bytesRead);
                    iReceivedBytes += bytesRead;
                    // Flush what we have so far to disk.
                    fs.Flush();
                }
            }
        }
    }
}
How would I detect and parse out this junk header information in order to isolate just the file bits?
Use the InputStreamEntity class as follows (this example uses Apache HttpClient on the client side):
InputStreamEntity reqEntity = new InputStreamEntity(new FileInputStream(filePath), -1);
reqEntity.setContentType("binary/octet-stream");
httppost.setEntity(reqEntity);
HttpResponse response = httpclient.execute(httppost);
If you upload like the above, the body is not wrapped in a multipart boundary token with Content-Disposition and Content-Type parts, so the server will not see:
-----------------------8cbb435d6837a3f
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: application/octet-stream
-----------------------8cbb435d6837a3f
What you're running into is the boundary used to separate the various parts of the HTTP request. There should be a header at the beginning of the request called Content-Type, and within that header there's a boundary statement like so:
Content-Type: multipart/mixed;boundary=gc0p4Jq0M2Yt08jU534c0p
Once you find this boundary, simply split your request on the boundary with two hyphens (--) prepended to it. In other words, split your content on:
"--"+Headers.Get("Content-Type").Split("boundary=")[1]
Sorta pseudo-code there, but it should get the point across. This should divide the multipart form data into the appropriate sections.
For more info, see RFC 1341.
It's worth noting that the final boundary apparently has two hyphens appended to the end as well.
EDIT: Okay, so the problem you're running into is that you're not breaking the form data into the necessary components. The sections of a multipart/form-data request can each individually be treated as separate requests (meaning they can contain headers). What you should probably do is read the bytes into a string:
string formData = Encoding.ASCII.GetString(byteBuffer);
split into multiple strings based on the boundary:
string boundary = "\r\n"+context.Request.ContentType.Split("boundary=")[1];
string[] parts = Regex.Split( formData, boundary );
loop through each string, separating headers from content. Since you actually want the byte values of the content, keep track of the data offset, because converting from ASCII back to bytes might not round-trip properly (I could be wrong, but I'm paranoid):
int dataOffset = 0;
for (int i = 0; i < parts.Length; i++)
{
    string part = parts[i];
    string header = part.Substring(0, part.IndexOf("\r\n\r\n"));
    dataOffset += boundary.Length + header.Length + 4;
    string asciiBody = part.Substring(part.IndexOf("\r\n\r\n") + 4);
    byte[] body = new byte[asciiBody.Length];
    for (int j = dataOffset, k = 0; k < asciiBody.Length; j++)
    {
        body[k++] = byteBuffer[j];
    }
    // body now contains your binary data
}
NOTE: This is untested, so it may require some tweaking.
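One way to avoid the ASCII round-trip entirely (any byte above 0x7F will not survive it) is to locate the header terminator directly in the byte buffer and copy the payload bytes untouched; a sketch reusing byteBuffer from the question:

// Find the first byte of the body: the position just past "\r\n\r\n".
static int IndexOfHeaderEnd(byte[] buffer, int start)
{
    for (int i = start; i <= buffer.Length - 4; i++)
    {
        if (buffer[i] == (byte)'\r' && buffer[i + 1] == (byte)'\n' &&
            buffer[i + 2] == (byte)'\r' && buffer[i + 3] == (byte)'\n')
            return i + 4;
    }
    return -1;
}

// The headers are safe to decode as ASCII; the body must stay raw bytes.
int bodyStart = IndexOfHeaderEnd(byteBuffer, 0);
if (bodyStart >= 0)
{
    string headers = Encoding.ASCII.GetString(byteBuffer, 0, bodyStart);
    byte[] body = new byte[byteBuffer.Length - bodyStart];
    Buffer.BlockCopy(byteBuffer, bodyStart, body, 0, body.Length);
}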
