DeleteMessageAsync() not deleting message in SQS queue .Net Core - c#

I am trying to delete a message in an SQS queue, but it is not being deleted from the queue. I have tried a lot of changes, but it is still not working. I am new to C#, .NET Core, and AWS. Can anyone please help me with this?
Here is my main method:
[HttpGet]
public async Task<ReceiveMessageResponse> Get()
{
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest
{
WaitTimeSeconds = 3 //long-poll the queue for 3 seconds; if I don't do this, sometimes I receive messages and sometimes I don't
};
receiveMessageRequest.QueueUrl = myQueueUrl;
receiveMessageRequest.MaxNumberOfMessages = 10; // can change number of messages as needed
//receiving messages/responses
var receiveMessageResponse = await amazonSQSClient.ReceiveMessageAsync(receiveMessageRequest);
if (receiveMessageResponse.Messages.Count > 0){
var bucketName = getBucketName(receiveMessageResponse);
var objectKey = getObjectKey(receiveMessageResponse);
var versionId = getVersionId(receiveMessageResponse);
string filePath = "C:\\InputPdfFile\\"; // change it later
string path = filePath + objectKey;
//get the file from the S3 bucket and download it
var downloadInputFile = await DownloadAsync(path, versionId, objectKey);
//Get score from the output file
string jsonOutputFileName = "\\file-1.txt"; //change it later from text file to json file
string jsonOutputPath = "C:\\OutputJsonFile"; //change it later
string jasonArchivePath = "C:\\ArchiveJsonFile"; //change it later
int score = GetOutputScore(jsonOutputPath, jsonOutputFileName);
//update metadata from the score received from ML worker (GetOutputScore)
PutObjectResponse putObjectResponse = await UpdateMetadataAsync(score);
//Move file from output to archive after updating metadata
string sourceFile = jsonOutputPath + jsonOutputFileName;
string destFile = jasonArchivePath + jsonOutputFileName;
if (!Directory.Exists(jasonArchivePath))
{
Directory.CreateDirectory(jasonArchivePath);
}
System.IO.File.Move(sourceFile, destFile);
//delete message after moving the file to archive
DeleteMessage(receiveMessageResponse); //not sure why it is not deleting
}
return receiveMessageResponse;
}
Here is my Delete method:
public async void DeleteMessage(ReceiveMessageResponse receiveMessageResponse)
{
if (receiveMessageResponse.Messages.Count > 0)
{
foreach (var message in receiveMessageResponse.Messages)
{
var delRequest = new DeleteMessageRequest
{
QueueUrl = myQueueUrl,
ReceiptHandle = message.ReceiptHandle
};
var deleteMessage = await amazonSQSClient.DeleteMessageAsync(delRequest);
}
}
else // it never reaches this else branch: the messages were found, but they are still not deleted
{
Console.WriteLine("No message found");
}
}
Any help would be greatly appreciated!
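One detail worth noting about the code above: DeleteMessage is declared async void, so Get() cannot await it and any exception thrown by DeleteMessageAsync is lost; the HTTP response can even be returned before the deletes finish. Below is a minimal sketch of an awaitable variant, assuming the same myQueueUrl and amazonSQSClient fields (whether this alone resolves the issue is not confirmed here):
public async Task DeleteMessagesAsync(ReceiveMessageResponse receiveMessageResponse)
{
    // Same logic as above, but returning Task so the caller can await the deletes
    // and observe any failure instead of firing and forgetting.
    foreach (var message in receiveMessageResponse.Messages)
    {
        var delRequest = new DeleteMessageRequest
        {
            QueueUrl = myQueueUrl,
            ReceiptHandle = message.ReceiptHandle
        };
        await amazonSQSClient.DeleteMessageAsync(delRequest);
    }
}
The call in Get() would then become await DeleteMessagesAsync(receiveMessageResponse);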

Related

Access token empty error when uploading large files to a ToDoTask using Graph Api

I am trying to attach large files to a ToDoTask using the Graph API, following the example in the docs for attaching large files to a ToDoTask and the recommended LargeFileUploadTask class for uploading large files.
I have done this successfully before when attaching large files to emails and sending them, so I used that as the base for the following method.
public async Task CreateTaskBigAttachments( string idList, string title, List<string> categories,
BodyType contentType, string content, Importance importance, bool isRemindOn, DateTime? dueTime, cAttachment[] attachments = null)
{
try
{
var _newTask = new TodoTask
{
Title = title,
Categories = categories,
Body = new ItemBody()
{
ContentType = contentType,
Content = content,
},
IsReminderOn = isRemindOn,
Importance = importance
};
if (dueTime.HasValue)
{
var _timeZone = TimeZoneInfo.Local;
_newTask.DueDateTime = DateTimeTimeZone.FromDateTime(dueTime.Value, _timeZone.StandardName);
}
var _task = await _graphServiceClient.Me.Todo.Lists[idList].Tasks.Request().AddAsync(_newTask);
//Add attachments
if (attachments != null)
{
if (attachments.Length > 0)
{
foreach (var _attachment in attachments)
{
var _attachmentContentSize = _attachment.ContentBytes.Length;
var _attachmentInfo = new AttachmentInfo
{
AttachmentType = AttachmentType.File,
Name = _attachment.FileName,
Size = _attachmentContentSize,
ContentType = _attachment.ContentType
};
var _uploadSession = await _graphServiceClient.Me
.Todo.Lists[idList].Tasks[_task.Id]
.Attachments.CreateUploadSession(_attachmentInfo).Request().PostAsync();
using (var _stream = new MemoryStream(_attachment.ContentBytes))
{
_stream.Position = 0;
LargeFileUploadTask<TaskFileAttachment> _largeFileUploadTask = new LargeFileUploadTask<TaskFileAttachment>(_uploadSession, _stream, MaxChunkSize);
try
{
await _largeFileUploadTask.UploadAsync();
}
catch (ServiceException errorGraph)
{
if (errorGraph.StatusCode == HttpStatusCode.InternalServerError || errorGraph.StatusCode == HttpStatusCode.BadGateway
|| errorGraph.StatusCode == HttpStatusCode.ServiceUnavailable || errorGraph.StatusCode == HttpStatusCode.GatewayTimeout)
{
Thread.Sleep(1000); //Wait time until next attempt
//Try again
await _largeFileUploadTask.ResumeAsync();
}
else
throw errorGraph;
}
}
}
}
}
}
catch (ServiceException errorGraph)
{
throw errorGraph;
}
catch (Exception ex)
{
throw ex;
}
}
Up to the point of creating the task everything goes well: it creates the task for the user, and the task is properly shown in the user's task list. It also creates an upload session properly.
The problem comes when I try to upload the large file with the UploadAsync instruction.
The following error happens:
Code: InvalidAuthenticationToken Message: Access token is empty.
But according to the LargeFileUploadTask doc, the client does not need to set Auth headers:
param name="baseClient" To use for making upload requests. The client should not set Auth headers as upload urls do not need them.
Is LargeFileUploadTask not allowed to be used to upload large files to a ToDoTask?
If not, what is the proper way to upload large files to a ToDoTask using the Graph API? Can someone provide an example?
If you want, you can raise an issue with the details here so that they can have a look: https://github.com/microsoftgraph/msgraph-sdk-dotnet-core/issues.
It seems like it's a bug and they are working on it.
Temporarily, I wrote the following code to deal with the issue of the large files.
var _task = await _graphServiceClient.Me.Todo.Lists[idList].Tasks.Request().AddAsync(_newTask);
//Add attachments
if (attachments != null)
{
if (attachments.Length > 0)
{
foreach (var _attachment in attachments)
{
var _attachmentContentSize = _attachment.ContentBytes.Length;
var _attachmentInfo = new AttachmentInfo
{
AttachmentType = AttachmentType.File,
Name = _attachment.FileName,
Size = _attachmentContentSize,
ContentType = _attachment.ContentType
};
var _uploadSession = await _graphServiceClient.Me
.Todo.Lists[idList].Tasks[_task.Id]
.Attachments.CreateUploadSession(_attachmentInfo).Request().PostAsync();
// Get the upload URL and the next expected range from the response
string _uploadUrl = _uploadSession.UploadUrl;
using (var _stream = new MemoryStream(_attachment.ContentBytes))
{
_stream.Position = 0;
// Create a byte array to hold the contents of each chunk
byte[] _chunk = new byte[MaxChunkSize];
//Bytes to read
int _bytesRead = 0;
//Times the stream has been read
var _ind = 0;
while ((_bytesRead = _stream.Read(_chunk, 0, _chunk.Length)) > 0)
{
// Calculate the range of the current chunk
string _currentChunkRange = $"bytes {_ind * MaxChunkSize}-{_ind * MaxChunkSize + _bytesRead - 1}/{_stream.Length}";
//Afterwards we should calculate the next expected range, in case we need it
// Create a ByteArrayContent object from the chunk
ByteArrayContent _byteArrayContent = new ByteArrayContent(_chunk, 0, _bytesRead);
// Set the header for the current chunk
_byteArrayContent.Headers.Add("Content-Range", _currentChunkRange);
_byteArrayContent.Headers.Add("Content-Type", _attachment.ContentType);
_byteArrayContent.Headers.Add("Content-Length", _bytesRead.ToString());
// Upload the chunk using the httpClient Request
var _client = new HttpClient();
var _requestMessage = new HttpRequestMessage()
{
RequestUri = new Uri(_uploadUrl + "/content"),
Method = HttpMethod.Put,
Headers =
{
{ "Authorization", bearerToken },
}
};
_requestMessage.Content = _byteArrayContent;
var _response = await _client.SendAsync(_requestMessage);
if (!_response.IsSuccessStatusCode)
throw new Exception("File attachment failed");
_ind++;
}
}
}
}
}
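As the comment in the loop notes, the next expected range could also be read from the service's response instead of being computed locally. A small sketch of that idea, assuming the chunk upload replies with the usual Graph upload-session JSON body containing a nextExpectedRanges array (the helper below is illustrative and not part of the code above):
// Illustrative helper: reads the next expected offset from a chunk-upload response body,
// assuming the standard upload-session shape: {"nextExpectedRanges":["26214400-"]}.
private static async Task<long?> GetNextExpectedOffsetAsync(HttpResponseMessage response)
{
    var json = await response.Content.ReadAsStringAsync();
    using var doc = System.Text.Json.JsonDocument.Parse(json);
    if (doc.RootElement.TryGetProperty("nextExpectedRanges", out var ranges)
        && ranges.GetArrayLength() > 0)
    {
        // Each entry looks like "start-" or "start-end"; only the start offset is needed.
        var startText = ranges[0].GetString().Split('-')[0];
        if (long.TryParse(startText, out var start))
            return start;
    }
    return null; // final chunk: the response contains the created attachment instead
}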

Bulk files upload to ADLS gen 1

Is there any way to upload or transfer multiple files to ADLS in one attempt (bulk upload)? If that is not possible, is any parallel-processing feature available?
I am looking for a C# example.
Please advise.
If you want to upload multiple files from one directory, we can use the BulkUpload method to implement bulk upload. Regarding how to use the method, please refer to here
For example
Create a service principal and configure permissions
Code
string TENANT = "";
string CLIENTID = "";
Uri ADL_TOKEN_AUDIENCE = new Uri(@"https://datalake.azure.net/");
string secret_key = "";
var adlServiceSettings = ActiveDirectoryServiceSettings.Azure;
adlServiceSettings.TokenAudience = ADL_TOKEN_AUDIENCE;
var adlCred = await ApplicationTokenProvider.LoginSilentAsync(TENANT, CLIENTID, secret_key, adlServiceSettings);
AdlsClient client = AdlsClient.CreateClient("<>.azuredatalakestore.net", adlCred);
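The snippet above only builds the AdlsClient; the actual BulkUpload call is not shown. A minimal sketch of what it could look like, with placeholder paths (local source folder and target ADLS directory):
// Placeholder paths: replace with your local folder and the target ADLS directory.
string localSourceFolder = @"C:\data\to-upload";
string remoteTargetFolder = "/clusters/input";
// Uploads every file found in the local folder to the ADLS directory.
client.BulkUpload(localSourceFolder, remoteTargetFolder);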
But please note that the method has limits: it can only upload files from one directory at a time, and it uploads all of the files in that directory. If you want to upload only some of the files in a directory, or files from different directories, please refer to the following code
static async Task Main(string[] args)
{
string TENANT = "";
string CLIENTID = "";
Uri ADL_TOKEN_AUDIENCE = new Uri(@"https://datalake.azure.net/");
string secret_key = "";
var adlServiceSettings = ActiveDirectoryServiceSettings.Azure;
adlServiceSettings.TokenAudience = ADL_TOKEN_AUDIENCE;
var adlCred = await ApplicationTokenProvider.LoginSilentAsync(TENANT, CLIENTID, secret_key, adlServiceSettings);
AdlsClient client = AdlsClient.CreateClient("<>.azuredatalakestore.net", adlCred);
string[] filePaths = { };
int count = 0;
var tasks = new Queue<Task>();
foreach (string filePath in filePaths)
{
var fileName = "";
// Add the upload task to the queue
tasks.Enqueue(uploadFile(client, filePath, fileName));
count++;
}
// Run all the tasks asynchronously.
await Task.WhenAll(tasks);
}
private static async Task uploadFile(AdlsClient client, string filePath, string fileName)
{
using (var createStream = client.CreateFile(fileName, IfExists.Overwrite))
{
using (var fileStream = File.OpenRead(filePath))
{
await fileStream.CopyToAsync(createStream);
}
}
}

High memory usage and slow upload using AWS S3 TransferUtility

I have created an application using C# and the AWS .Net SDK (v3.3.106.25) to upload some database backup files to a S3 bucket. The application is currently unusable as memory usage goes up to 100% and uploads of files take progressively longer.
I am trying to upload 3 files, 2 of which are about 1.45GB and one is about 4MB. I am using the TransferUtility method as I understand that it utilises multi part uploads. I have set the part size to 16MB. Each file is uploaded consecutively. Here are some facts about the upload:
File 1 - 4MB - upload duration 4 seconds
File 2 - 1.47GB - upload duration 11.5 minutes
File 3 - 1.45GB - upload duration 1 hour 12 minutes before killing the process as the PC became unusable
I am running this on a Windows 10 machine with 16GB RAM and an Intel Core i7 CPU @ 3.40GHz
Here is my upload code:
private async Task UploadFileAsync(string keyName, string filePath, int partSizeMB, S3StorageClass storageClass)
{
try
{
using (IAmazonS3 s3Client = new AmazonS3Client(_region))
{
var fileTransferUtility = new TransferUtility(s3Client);
var fileTransferUtilityRequest = new TransferUtilityUploadRequest
{
BucketName = _bucketName,
FilePath = filePath,
StorageClass = storageClass,
PartSize = partSizeMB * 1024 * 1024, // set to 16MB
Key = keyName,
CannedACL = S3CannedACL.Private
};
await fileTransferUtility.UploadAsync(fileTransferUtilityRequest);
}
}
catch (AmazonS3Exception e)
{
string errMsg = string.Format("Error encountered on server. Message:{0} when writing an object", e.Message);
System.Exception argEx = new System.Exception(errMsg, e.InnerException);
throw argEx;
}
catch (Exception e)
{
string errMsg = string.Format("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
System.Exception argEx = new System.Exception(errMsg, e.InnerException);
throw argEx;
}
}
This code is being called 3 times in a loop with each call awaited.
Can anyone please suggest how I can upload these files in a more efficient manner?
Many thanks.
I have decided to abandon the use of the high level TransferUtility API method as it doesn't seem fit for purpose for large files. It seems that it loads the whole file into memory before splitting it into parts and uploading each part. For large files it just consumes all available memory and your server can grind to a halt.
For anyone interested this is how I have solved the issue:
I now use the low-level API methods InitiateMultipartUploadAsync, UploadPartAsync and CompleteMultipartUploadAsync and manage the multipart upload myself.
The key to making this work is the use of the .NET MemoryMappedFile class and its CreateViewStream method, so that only one part of the file is read into memory at a time.
I use a Queue to control which parts have been uploaded and also to retry any individual parts that might have failed.
Here is my new code:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.IO;
using System.Threading.Tasks;
using System.Threading;
using System.Collections.Generic;
using System.IO.MemoryMappedFiles;
using System.Linq;
using Amazon.Runtime;
public class S3Upload
{
// declarations
private readonly string _bucketName;
private readonly RegionEndpoint _region;
//event handlers
public event EventHandler<ProgressUpdatedEventArgs> OnProgressUpdated;
private bool CheckFilePath(string filePath)
{
// check the filePath exists
if (!Directory.Exists(Path.GetDirectoryName(filePath)))
{
return false;
}
if (!File.Exists(filePath))
{
return false;
}
return true;
}
public async Task UploadFileMultiPartAsync(string keyName, string filePath, string storageClass,
int partSizeMB = 16, int retryWaitInterval = 60000,
int maxRetriesOnFail = 10)
{
if (CheckFilePath(filePath))
{
long fileSize = new FileInfo(filePath).Length;
long partSize = partSizeMB * (long)Math.Pow(1024, 2);
partSize = GetPartSize(fileSize, partSize);
S3StorageClass sClass = new S3StorageClass(storageClass);
try
{
await UploadFileMultiPartAsync(keyName, filePath, fileSize, partSize, sClass, retryWaitInterval, maxRetriesOnFail);
}
catch (Exception ex)
{
throw new Exception(ex.Message, ex.InnerException);
}
}
else
{
string errMsg = string.Format("Cannot find file {0}. Check the file exists and that the application has access permissions.", filePath);
System.IO.DirectoryNotFoundException argEx = new System.IO.DirectoryNotFoundException(errMsg);
throw argEx;
}
}
private async Task UploadFileMultiPartAsync(string keyName, string filePath, long fileSize,
long partSize, S3StorageClass storageClass,
int retryWaitInterval,
int maxRetriesOnFail)
{
int retryCount = 0;
long offset = 0;
// we need to calculate the number of parts based on the fileSize and partSize
int iterations = (int)Math.Ceiling((double)fileSize / (double)partSize);
int currentIterations = iterations;
// create a queue of indexes to be processed. Indexes will be removed from this list as the
// uploads are processed. If the upload is not successful then it will be re-added to the end
// of the queue for later retry. We pause after each full loop is completed before starting the retry
Queue<int> q = new Queue<int>(Enumerable.Range(0, iterations));
// the following 2 variables store values returned from the S3 call and are persisted throughout the loop
string uploadId = "";
List<PartETag> eTags = new List<PartETag>();
// Create the memory-mapped file.
using (var mmf = MemoryMappedFile.CreateFromFile(filePath, FileMode.Open, "uploadFile"))
{
while (q.Count > 0)
{
int iPart = q.Dequeue();
offset = iPart * partSize;
long chunkSize = (offset + partSize > fileSize) ? fileSize - offset : partSize;
using (var stream = mmf.CreateViewStream(offset, chunkSize))
{
using (BinaryReader binReader = new BinaryReader(stream))
{
byte[] bytes = binReader.ReadBytes((int)stream.Length);
//convert to stream
MemoryStream mStream = new MemoryStream(bytes, false);
bool lastPart = (q.Count == 0) ? true : false;
UploadResponse response = await UploadChunk(keyName, uploadId, iPart, lastPart, mStream, eTags, iterations);
uploadId = response.uploadId;
eTags = response.eTags;
if (!response.success)
{
// the upload failed so we add the failed index to the back of the
// queue for retry later
q.Enqueue(iPart);
lastPart = false;
}
// if we have attempted an upload for every part and some have failed then we
// wait a bit then try resending the parts that failed. We try this a few times
// then give up.
if (!lastPart && iPart == currentIterations - 1)
{
if (retryCount < maxRetriesOnFail)
{
currentIterations = q.Count;
Thread.Sleep(retryWaitInterval);
retryCount += 1;
}
else
{
// reached maximum retries so we abort upload and raise error
try
{
await AbortMultiPartUploadAsync(keyName, uploadId);
string errMsg = "Multi part upload aborted. Some parts could not be uploaded. Maximum number of retries reached.";
throw new Exception(errMsg);
}
catch (Exception ex)
{
string errMsg = string.Format("Multi part upload failed. Maximum number of retries reached. Unable to abort upload. Error: {0}", ex.Message);
throw new Exception(errMsg);
}
}
}
}
}
}
}
}
private async Task AbortMultiPartUploadAsync(string keyName, string uploadId)
{
using (var _s3Client = new AmazonS3Client(_region))
{
AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
{
BucketName = _bucketName,
Key = keyName,
UploadId = uploadId
};
await _s3Client.AbortMultipartUploadAsync(abortMPURequest);
}
}
private async Task<UploadResponse> UploadChunk(string keyName, string uploadId, int chunkIndex, bool lastPart, MemoryStream stream, List<PartETag> eTags, int numParts)
{
try
{
using (var _s3Client = new AmazonS3Client(_region))
{
var partNumber = chunkIndex + 1;
// Step 1: build and send a multi upload request
// we check uploadId == "" rather than chunkIndex == 0 as if the initiate call failed on the first run
// then chunkIndex = 0 would have been added to the end of the queue for retries and uploadId
// will still not have been initialized, even though we might be on a later chunkIndex
if (uploadId == "")
{
var initiateRequest = new InitiateMultipartUploadRequest
{
BucketName = _bucketName,
Key = keyName
};
InitiateMultipartUploadResponse initResponse = await _s3Client.InitiateMultipartUploadAsync(initiateRequest);
uploadId = initResponse.UploadId;
}
// Step 2: upload each chunk (this is run for every chunk unlike the other steps which are run once)
var uploadRequest = new UploadPartRequest
{
BucketName = _bucketName,
Key = keyName,
UploadId = uploadId,
PartNumber = partNumber,
InputStream = stream,
IsLastPart = lastPart,
PartSize = stream.Length
};
// Track upload progress.
uploadRequest.StreamTransferProgress +=
(_, e) => OnPartUploadProgressUpdate(numParts, uploadRequest, e);
UploadPartResponse uploadResponse = await _s3Client.UploadPartAsync(uploadRequest);
//Step 3: build and send the multipart complete request
if (lastPart)
{
eTags.Add(new PartETag
{
PartNumber = partNumber,
ETag = uploadResponse.ETag
});
var completeRequest = new CompleteMultipartUploadRequest
{
BucketName = _bucketName,
Key = keyName,
UploadId = uploadId,
PartETags = eTags
};
CompleteMultipartUploadResponse result = await _s3Client.CompleteMultipartUploadAsync(completeRequest);
return new UploadResponse(uploadId, eTags, true);
}
else
{
eTags.Add(new PartETag
{
PartNumber = partNumber,
ETag = uploadResponse.ETag
});
return new UploadResponse(uploadId, eTags, true);
}
}
}
catch
{
return new UploadResponse(uploadId, eTags, false);
}
}
private class UploadResponse
{
public string uploadId { get; set; }
public List<PartETag> eTags { get; set; }
public bool success { get; set; }
public UploadResponse(string Id, List<PartETag> Tags, bool succeeded)
{
uploadId = Id;
eTags = Tags;
success = succeeded;
}
}
private void OnPartUploadProgressUpdate(int numParts, UploadPartRequest request, StreamTransferProgressArgs e)
{
// Process event.
if (OnProgressUpdated != null)
{
int partIndex = request.PartNumber - 1;
int totalIncrements = numParts * 100;
int percentDone = (int)Math.Floor((double)(partIndex * 100 + e.PercentDone) / (double)totalIncrements * 100);
OnProgressUpdated(this, new ProgressUpdatedEventArgs(percentDone));
}
}
private long GetPartSize(long fileSize, long partSize)
{
// S3 multi part limits
//====================================
// min part size = 5MB
// max part size = 5GB
// total number of parts = 10,000
//====================================
if (fileSize < partSize)
{
partSize = fileSize;
}
if (partSize <= 0)
{
return Math.Min(fileSize, 16 * (long)Math.Pow(1024, 2)); // default part size to 16MB
}
if (partSize > 5000 * (long)Math.Pow(1024, 2))
{
return 5000 * (long)Math.Pow(1024, 2);
}
if (fileSize / partSize > 10000)
{
return (int)(fileSize / 10000);
}
return partSize;
}
}
public class ProgressUpdatedEventArgs : EventArgs
{
public ProgressUpdatedEventArgs(int iPercentDone)
{ PercentDone = iPercentDone; }
public int PercentDone { get; set; }
}
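For completeness, a usage sketch. The class as posted never assigns _bucketName or _region, so this assumes a constructor that sets them (not shown in the original); the bucket, key, file path and storage class below are placeholders:
// Hypothetical constructor taking the bucket name and region.
var uploader = new S3Upload("my-backup-bucket", RegionEndpoint.EUWest1);
uploader.OnProgressUpdated += (s, e) => Console.WriteLine($"{e.PercentDone}% uploaded");
await uploader.UploadFileMultiPartAsync(
    keyName: "backups/db-backup.bak",
    filePath: @"D:\Backups\db-backup.bak",
    storageClass: "STANDARD_IA",   // passed straight to new S3StorageClass(...)
    partSizeMB: 16);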

Unable to store filesize data in Azure Media Service Assets

I am currently building a Web API for the existing web-based media service to encode uploaded videos.
The goal of my solution is to create an API call where I'll send the MP4 link and do the processing (encoding and streaming of the given MP4 link). I was able to fetch the MP4, download it to the server, and re-upload it to its own blob storage. However, if I check the AMS explorer, every parameter I passed exists except for the filesize. Here's the Web API call I created (a complete replica of the existing media-service form method: https://tiltestingstreaming.azurewebsites.net/).
[HttpPost]
public JsonResult UploadApi(String video_url)
{
var id = 1;
WebClient client = new WebClient();
var videoStream = new MemoryStream(client.DownloadData(video_url));
var container = CloudStorageAccount.Parse(mediaServiceStorageConnectionString).CreateCloudBlobClient().GetContainerReference(mediaServiceStorageContainerReference);
container.CreateIfNotExists();
var fileName = Path.GetFileName(video_url);
var fileToUpload = new CloudFile()
{
BlockCount = 1,
FileName = fileName,
Size = videoStream.Length,
BlockBlob = container.GetBlockBlobReference(fileName),
StartTime = DateTime.Now,
IsUploadCompleted = false,
UploadStatusMessage = string.Empty
};
Session.Add("CurrentFile", fileToUpload);
byte[] chunk = new byte[videoStream.Length];
//request.InputStream.Read(chunk, 0, Convert.ToInt32(request.Length));
//JsonResult returnData = null;
string fileSession = "CurrentFile";
CloudFile model = (CloudFile)Session[fileSession];
var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(
string.Format(CultureInfo.InvariantCulture, "{0:D4}", id)));
try
{
model.BlockBlob.PutBlock(
blockId,
videoStream, null, null,
new BlobRequestOptions()
{
RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(10), 3)
},
null);
}
catch (StorageException e)
{
model.IsUploadCompleted = true;
model.UploadStatusMessage = "Failed to Upload file. Exception - " + e.Message;
return Json(new { error = true, isLastBlock = false, message = model.UploadStatusMessage });
}
var blockList = Enumerable.Range(1, (int)model.BlockCount).ToList<int>().ConvertAll(rangeElement => Convert.ToBase64String(Encoding.UTF8.GetBytes(string.Format(CultureInfo.InvariantCulture, "{0:D4}", rangeElement))));
model.BlockBlob.PutBlockList(blockList);
var duration = DateTime.Now - model.StartTime;
float fileSizeInKb = model.Size / 1024;
string fileSizeMessage = fileSizeInKb > 1024 ? string.Concat((fileSizeInKb / 1024).ToString(CultureInfo.CurrentCulture), " MB") : string.Concat(fileSizeInKb.ToString(CultureInfo.CurrentCulture), " KB");
model.UploadStatusMessage = string.Format(CultureInfo.CurrentCulture, "File of size {0} took {1} seconds to upload.", fileSizeMessage, duration.TotalSeconds);
IAsset mediaServiceAsset = CreateMediaAsset(model);
model.AssetId = mediaServiceAsset.Id;
//if (id == model.BlockCount){CommitAllChunks(model);}
return Json(new { error = false, isLastBlock = false, message = string.Empty, filename = fileName,filesize = videoStream.Length });
}
Functions used in the form-method solution.
[HttpPost]
public ActionResult SetMetadata(int blocksCount, string fileName, long fileSize)
{
var container = CloudStorageAccount.Parse(mediaServiceStorageConnectionString).CreateCloudBlobClient().GetContainerReference(mediaServiceStorageContainerReference);
container.CreateIfNotExists();
var fileToUpload = new CloudFile()
{
BlockCount = blocksCount,
FileName = fileName,
Size = fileSize,
BlockBlob = container.GetBlockBlobReference(fileName),
StartTime = DateTime.Now,
IsUploadCompleted = false,
UploadStatusMessage = string.Empty
};
Session.Add("CurrentFile", fileToUpload);
return Json(true);
}
[HttpPost]
[ValidateInput(false)]
public ActionResult UploadChunk(int id)
{
HttpPostedFileBase request = Request.Files["Slice"];
byte[] chunk = new byte[request.ContentLength];
request.InputStream.Read(chunk, 0, Convert.ToInt32(request.ContentLength));
JsonResult returnData = null;
string fileSession = "CurrentFile";
if (Session[fileSession] != null)
{
CloudFile model = (CloudFile)Session[fileSession];
returnData = UploadCurrentChunk(model, chunk, id);
if (returnData != null)
{
return returnData;
}
if (id == model.BlockCount)
{
return CommitAllChunks(model);
}
}
else
{
returnData = Json(new
{
error = true,
isLastBlock = false,
message = string.Format(CultureInfo.CurrentCulture, "Failed to Upload file.", "Session Timed out")
});
return returnData;
}
return Json(new { error = false, isLastBlock = false, message = string.Empty });
}
private JsonResult UploadCurrentChunk(CloudFile model, byte[] chunk, int id)
{
using (var chunkStream = new MemoryStream(chunk))
{
var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(
string.Format(CultureInfo.InvariantCulture, "{0:D4}", id)));
try
{
model.BlockBlob.PutBlock(
blockId,
chunkStream, null, null,
new BlobRequestOptions()
{
RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(10), 3)
},
null);
return null;
}
catch (StorageException e)
{
model.IsUploadCompleted = true;
model.UploadStatusMessage = "Failed to Upload file. Exception - " + e.Message;
return Json(new { error = true, isLastBlock = false, message = model.UploadStatusMessage });
}
}
}
private ActionResult CommitAllChunks(CloudFile model)
{
model.IsUploadCompleted = true;
bool errorInOperation = false;
try
{
var blockList = Enumerable.Range(1, (int)model.BlockCount).ToList<int>().ConvertAll(rangeElement => Convert.ToBase64String(Encoding.UTF8.GetBytes(string.Format(CultureInfo.InvariantCulture, "{0:D4}", rangeElement))));
model.BlockBlob.PutBlockList(blockList);
var duration = DateTime.Now - model.StartTime;
float fileSizeInKb = model.Size / 1024;
string fileSizeMessage = fileSizeInKb > 1024 ? string.Concat((fileSizeInKb / 1024).ToString(CultureInfo.CurrentCulture), " MB") : string.Concat(fileSizeInKb.ToString(CultureInfo.CurrentCulture), " KB");
model.UploadStatusMessage = string.Format(CultureInfo.CurrentCulture, "File of size {0} took {1} seconds to upload.", fileSizeMessage, duration.TotalSeconds);
IAsset mediaServiceAsset = CreateMediaAsset(model);
model.AssetId = mediaServiceAsset.Id;
}
catch (StorageException e)
{
model.UploadStatusMessage = "Failed to upload file. Exception - " + e.Message;
errorInOperation = true;
}
return Json(new
{
error = errorInOperation,
isLastBlock = model.IsUploadCompleted,
message = model.UploadStatusMessage,
assetId = model.AssetId
});
}
private IAsset CreateMediaAsset(CloudFile model)
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(mediaServiceStorageConnectionString);
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer mediaBlobContainer = cloudBlobClient.GetContainerReference(mediaServiceStorageContainerReference);
mediaBlobContainer.CreateIfNotExists();
// Create a new asset.
IAsset asset = context.Assets.Create("UploadedVideo-" + Guid.NewGuid().ToString().ToLower(), AssetCreationOptions.None);
IAccessPolicy writePolicy = context.AccessPolicies.Create("writePolicy", TimeSpan.FromMinutes(120), AccessPermissions.Write);
ILocator destinationLocator = context.Locators.CreateLocator(LocatorType.Sas, asset, writePolicy);
// Get the asset container URI and copy blobs from mediaContainer to assetContainer.
Uri uploadUri = new Uri(destinationLocator.Path);
string assetContainerName = uploadUri.Segments[1];
CloudBlobContainer assetContainer = cloudBlobClient.GetContainerReference(assetContainerName);
string fileName = HttpUtility.UrlDecode(Path.GetFileName(model.BlockBlob.Uri.AbsoluteUri));
var sourceCloudBlob = mediaBlobContainer.GetBlockBlobReference(fileName);
sourceCloudBlob.FetchAttributes();
if (sourceCloudBlob.Properties.Length > 0)
{
IAssetFile assetFile = asset.AssetFiles.Create(fileName);
var destinationBlob = assetContainer.GetBlockBlobReference(fileName);
destinationBlob.DeleteIfExists();
destinationBlob.StartCopy(sourceCloudBlob);
destinationBlob.FetchAttributes();
if (sourceCloudBlob.Properties.Length != destinationBlob.Properties.Length)
model.UploadStatusMessage += "Failed to copy as Media Asset!";
}
destinationLocator.Delete();
writePolicy.Delete();
sourceCloudBlob.Delete(); //delete temp blob
// Refresh the asset.
asset = context.Assets.Where(a => a.Id == asset.Id).FirstOrDefault();
var ismAssetFiles = asset.AssetFiles.FirstOrDefault();
ismAssetFiles.IsPrimary = true;
ismAssetFiles.Update();
model.UploadStatusMessage += " Media file uploaded successfully by id: " + asset.Id;
model.AssetId = asset.Id;
return asset;
}
[HttpPost]
public ActionResult EncodeToAdaptiveBitrateMP4s(string assetId)
{
// Note: You need at least 1 reserved streaming unit for dynamic packaging of encoded media. If you don't have that, you can't see the video file playing.
IAsset inputAsset = GetAssetById(assetId);
string token = string.Empty;
string uploadFileOriginalName = string.Empty;
////// Without preset (say default preset), works very well
//IJob job = context.Jobs.CreateWithSingleTask(MediaProcessorNames.AzureMediaEncoder,
// MediaEncoderTaskPresetStrings.H264AdaptiveBitrateMP4Set720p,
// inputAsset,
// "UploadedVideo-" + Guid.NewGuid().ToString().ToLower() + "-Adaptive-Bitrate-MP4",
// AssetCreationOptions.None);
//job.Submit();
//IAsset encodedOutputAsset = job.OutputMediaAssets[0];
//// XML Preset
IJob job = context.Jobs.Create(inputAsset.Name);
IMediaProcessor processor = GetLatestMediaProcessorByName("Media Encoder Standard");
string configuration = System.IO.File.ReadAllText(HttpContext.Server.MapPath("~/MediaServicesCustomPreset.xml"));
ITask task = job.Tasks.AddNew(inputAsset.Name + "- encoding task", processor, configuration, TaskOptions.None);
task.InputAssets.Add(inputAsset);
task.OutputAssets.AddNew(inputAsset.Name + "-Adaptive-Bitrate-MP4", AssetCreationOptions.None);
job.Submit();
IAsset encodedAsset = job.OutputMediaAssets[0];
// process policy & encryption
ProcessPolicyAndEncryption(encodedAsset);
// Get file name
string fileSession = "CurrentFile";
if (Session[fileSession] != null)
{
CloudFile model = (CloudFile)Session[fileSession];
uploadFileOriginalName = model.FileName;
}
// Generate Streaming URL
string smoothStreamingUri = GetStreamingOriginLocator(encodedAsset, uploadFileOriginalName);
// add jobid and output asset id in database
AzureMediaServicesContext db = new AzureMediaServicesContext();
var video = new Video();
video.RowAssetId = assetId;
video.EncodingJobId = job.Id;
video.EncodedAssetId = encodedAsset.Id;
video.LocatorUri = smoothStreamingUri;
video.IsEncrypted = useAESRestriction;
db.Videos.Add(video);
db.SaveChanges();
if (useAESRestriction)
{
token = AzureMediaAsset.GetTestToken(encodedAsset.Id, encodedAsset);
}
// Remove session
Session.Remove("CurrentFile");
// return success response
return Json(new
{
error = false,
message = "Congratulations! Video is uploaded and pipelined for encoding, check console log for after encoding playback details.",
assetId = assetId,
jobId = job.Id,
locator = smoothStreamingUri,
encrypted = useAESRestriction,
token = token
});
}
The actual challenge I encountered was that I'm not sure why the filesize of the downloaded remote MP4 file is not stored in the Media Services asset file, yet I was able to return the value via the JSON response of my API call. Please check the attached screenshot of the API response.
I was able to figure out my own problem. All I needed to do was copy the body of my encoding function, which was bound to an ActionResult return type. I think ActionResult is part of the form-method solution, and I am building a Web API version of the working form method.
From the original call function
[HttpPost] public ActionResult EncodeToAdaptiveBitrateMP4s(string assetId)
I copied the entire function into my Web API call function, like this:
[HttpPost]
public JsonResult UploadApi(String video_url)
{
var id = 1;
WebClient client = new WebClient();
var videoStream = new MemoryStream(client.DownloadData(video_url));
var container = CloudStorageAccount.Parse(mediaServiceStorageConnectionString).CreateCloudBlobClient().GetContainerReference(mediaServiceStorageContainerReference);
container.CreateIfNotExists();
var fileName = Path.GetFileName(video_url);
var fileToUpload = new CloudFile()
{
BlockCount = 1,
FileName = fileName,
Size = videoStream.Length,
BlockBlob = container.GetBlockBlobReference(fileName),
StartTime = DateTime.Now,
IsUploadCompleted = false,
UploadStatusMessage = string.Empty
};
Session.Add("CurrentFile", fileToUpload);
byte[] chunk = new byte[videoStream.Length];
//request.InputStream.Read(chunk, 0, Convert.ToInt32(request.Length));
//JsonResult returnData = null;
string fileSession = "CurrentFile";
CloudFile model = (CloudFile)Session[fileSession];
var blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(
string.Format(CultureInfo.InvariantCulture, "{0:D4}", id)));
try
{
model.BlockBlob.PutBlock(
blockId,
videoStream, null, null,
new BlobRequestOptions()
{
RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(10), 3)
},
null);
}
catch (StorageException e)
{
model.IsUploadCompleted = true;
model.UploadStatusMessage = "Failed to Upload file. Exception - " + e.Message;
return Json(new { error = true, isLastBlock = false, message = model.UploadStatusMessage });
}
var blockList = Enumerable.Range(1, (int)model.BlockCount).ToList<int>().ConvertAll(rangeElement => Convert.ToBase64String(Encoding.UTF8.GetBytes(string.Format(CultureInfo.InvariantCulture, "{0:D4}", rangeElement))));
model.BlockBlob.PutBlockList(blockList);
var duration = DateTime.Now - model.StartTime;
float fileSizeInKb = model.Size / 1024;
string fileSizeMessage = fileSizeInKb > 1024 ? string.Concat((fileSizeInKb / 1024).ToString(CultureInfo.CurrentCulture), " MB") : string.Concat(fileSizeInKb.ToString(CultureInfo.CurrentCulture), " KB");
model.UploadStatusMessage = string.Format(CultureInfo.CurrentCulture, "File of size {0} took {1} seconds to upload.", fileSizeMessage, duration.TotalSeconds);
IAsset mediaServiceAsset = CreateMediaAsset(model);
model.AssetId = mediaServiceAsset.Id;
// Note: You need at least 1 reserved streaming unit for dynamic packaging of encoded media. If you don't have that, you can't see the video file playing.
var assetId = model.AssetId;
IAsset inputAsset = GetAssetById(assetId);
string token = string.Empty;
string uploadFileOriginalName = string.Empty;
////// Without preset (say default preset), works very well
//IJob job = context.Jobs.CreateWithSingleTask(MediaProcessorNames.AzureMediaEncoder,
// MediaEncoderTaskPresetStrings.H264AdaptiveBitrateMP4Set720p,
// inputAsset,
// "UploadedVideo-" + Guid.NewGuid().ToString().ToLower() + "-Adaptive-Bitrate-MP4",
// AssetCreationOptions.None);
//job.Submit();
//IAsset encodedOutputAsset = job.OutputMediaAssets[0];
//// XML Preset
IJob job = context.Jobs.Create(inputAsset.Name);
IMediaProcessor processor = GetLatestMediaProcessorByName("Media Encoder Standard");
string configuration = System.IO.File.ReadAllText(HttpContext.Server.MapPath("~/MediaServicesCustomPreset.xml"));
ITask task = job.Tasks.AddNew(inputAsset.Name + "- encoding task", processor, configuration, TaskOptions.None);
task.InputAssets.Add(inputAsset);
task.OutputAssets.AddNew(inputAsset.Name + "-Adaptive-Bitrate-MP4", AssetCreationOptions.None);
job.Submit();
IAsset encodedAsset = job.OutputMediaAssets[0];
// process policy & encryption
ProcessPolicyAndEncryption(encodedAsset);
// Get file name
uploadFileOriginalName = model.FileName;
// Generate Streaming URL
string smoothStreamingUri = GetStreamingOriginLocator(encodedAsset, uploadFileOriginalName);
// add jobid and output asset id in database
AzureMediaServicesContext db = new AzureMediaServicesContext();
var video = new Video();
video.RowAssetId = assetId;
video.EncodingJobId = job.Id;
video.EncodedAssetId = encodedAsset.Id;
video.LocatorUri = smoothStreamingUri;
video.IsEncrypted = useAESRestriction;
db.Videos.Add(video);
db.SaveChanges();
if (useAESRestriction)
{
token = AzureMediaAsset.GetTestToken(encodedAsset.Id, encodedAsset);
}
// Remove session
Session.Remove("CurrentFile");
// return success response
return Json(new
{
error = false,
message = "Congratulations! Video is uploaded and pipelined for encoding, check console log for after encoding playback details.",
assetId = assetId,
jobId = job.Id,
locator = smoothStreamingUri,
encrypted = useAESRestriction,
token = token
});
//if (id == model.BlockCount){CommitAllChunks(model);}
//return Json(new { error = false, isLastBlock = false, message = string.Empty, filename = fileName,filesize = videoStream.Length });
}
However, this kind of solution is too rigid and not a long-term approach, but the concept is there and it met my goal. I will redo my code and re-create a more flexible solution.
NOTE: I am not a C# developer. Respect for beginners like me.

Parse WebCacheV01.dat in C#

I'm looking to parse the WebCacheV01.dat file using C# to find the last file location for upload in an Internet browser.
%LocalAppData%\Microsoft\Windows\WebCache\WebCacheV01.dat
I am using the Managed Esent NuGet package.
Esent.Isam
Esent.Interop
When I try and run the below code it fails at:
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Or if I use
Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
at
Api.JetAttachDatabase(sesid, filePath, AttachDatabaseGrbit.ReadOnly);
I get the following error:
An unhandled exception of type
'Microsoft.Isam.Esent.Interop.EsentFileAccessDeniedException' occurred
in Esent.Interop.dll
Additional information: Cannot access file, the file is locked or in use
string localAppDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
string filePathExtra = @"\Microsoft\Windows\WebCache\WebCacheV01.dat";
string filePath = string.Format("{0}{1}", localAppDataPath, filePathExtra);
JET_INSTANCE instance;
JET_SESID sesid;
JET_DBID dbid;
JET_TABLEID tableid;
String connect = "";
JET_SNP snp;
JET_SNT snt;
object data;
int numInstance = 0;
JET_INSTANCE_INFO [] instances;
int pageSize;
JET_COLUMNDEF columndef = new JET_COLUMNDEF();
JET_COLUMNID columnid;
Api.JetCreateInstance(out instance, "instance");
Api.JetGetDatabaseFileInfo(filePath, out pageSize, JET_DbInfo.PageSize);
Api.JetSetSystemParameter(JET_INSTANCE.Nil, JET_SESID.Nil, JET_param.DatabasePageSize, pageSize, null);
//Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.CircularLog, 1, null);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sesid, null, null);
//Do stuff in db
Api.JetEndSession(sesid, EndSessionGrbit.None);
Api.JetTerm(instance);
Is it not possible to read this without making modifications?
Viewer
http://www.nirsoft.net/utils/ese_database_view.html
Python
https://jon.glass/attempts-to-parse-webcachev01-dat/
libesedb
impacket
Issue:
The file is probably in use.
Solution:
In order to free the locked file, please stop the scheduled task \Microsoft\Windows\Wininet\CacheTask (see the Task Scheduler Wrapper under Utils below).
The Code
public override IEnumerable<string> GetBrowsingHistoryUrls(FileInfo fileInfo)
{
var fileName = fileInfo.FullName;
var results = new List<string>();
try
{
int pageSize;
Api.JetGetDatabaseFileInfo(fileName, out pageSize, JET_DbInfo.PageSize);
SystemParameters.DatabasePageSize = pageSize;
using (var instance = new Instance("Browsing History"))
{
var param = new InstanceParameters(instance);
param.Recovery = false;
instance.Init();
using (var session = new Session(instance))
{
Api.JetAttachDatabase(session, fileName, AttachDatabaseGrbit.ReadOnly);
JET_DBID dbid;
Api.JetOpenDatabase(session, fileName, null, out dbid, OpenDatabaseGrbit.ReadOnly);
using (var tableContainers = new Table(session, dbid, "Containers", OpenTableGrbit.ReadOnly))
{
IDictionary<string, JET_COLUMNID> containerColumns = Api.GetColumnDictionary(session, tableContainers);
if (Api.TryMoveFirst(session, tableContainers))
{
do
{
var retrieveColumnAsInt32 = Api.RetrieveColumnAsInt32(session, tableContainers, containerColumns["ContainerId"]);
if (retrieveColumnAsInt32 != null)
{
var containerId = (int)retrieveColumnAsInt32;
using (var table = new Table(session, dbid, "Container_" + containerId, OpenTableGrbit.ReadOnly))
{
var tableColumns = Api.GetColumnDictionary(session, table);
if (Api.TryMoveFirst(session, table))
{
do
{
var url = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Url"],
Encoding.Unicode);
var downloadedFileName = Api.RetrieveColumnAsString(
session,
table,
tableColumns["Filename"]);
if(string.IsNullOrEmpty(downloadedFileName)) // check for download history only.
continue;
// Order by access Time to find the last uploaded file.
var accessedTime = Api.RetrieveColumnAsInt64(
session,
table,
tableColumns["AccessedTime"]);
var lastVisitTime = accessedTime.HasValue ? DateTime.FromFileTimeUtc(accessedTime.Value) : DateTime.MinValue;
results.Add(url);
}
while (Api.TryMoveNext(session, table.JetTableid));
}
}
}
} while (Api.TryMoveNext(session, tableContainers));
}
}
}
}
}
catch (Exception ex)
{
// log goes here....
}
return results;
}
Utils
Task Scheduler Wrapper
You can use the Microsoft.Win32.TaskScheduler.TaskService wrapper to stop it using C#; just add this NuGet package: https://taskscheduler.codeplex.com/
Usage
public static FileInfo CopyLockedFileRtl(DirectoryInfo directory, FileInfo fileInfo, string remoteEndPoint)
{
FileInfo copiedFileInfo = null;
using (var ts = new TaskService(string.Format(@"\\{0}", remoteEndPoint)))
{
var task = ts.GetTask(#"\Microsoft\Windows\Wininet\CacheTask");
task.Stop();
task.Enabled = false;
var byteArray = FileHelper.ReadOnlyAllBytes(fileInfo);
var filePath = Path.Combine(directory.FullName, "unlockedfile.dat");
File.WriteAllBytes(filePath, byteArray);
copiedFileInfo = new FileInfo(filePath);
task.Enabled = true;
task.Run();
task.Dispose();
}
return copiedFileInfo;
}
I was not able to get Adam's answer to work. What worked for me was making a copy with AlphaVSS (a .NET class library that has a managed API for the Volume Shadow Copy Service). The file was in "Dirty Shutdown" state, so I additionally wrote this to handle the exception it threw when I opened it:
catch (EsentErrorException ex)
{ // Usually after the database is copied, it's in Dirty Shutdown state
// This can be verified by running "esentutl.exe /Mh WebCacheV01.dat"
logger.Info(ex.Message);
switch (ex.Error)
{
case JET_err.SecondaryIndexCorrupted:
logger.Info("Secondary Index Corrupted detected, exiting...");
Api.JetTerm2(instance, TermGrbit.Complete);
return false;
case JET_err.DatabaseDirtyShutdown:
logger.Info("Dirty shutdown detected, attempting to recover...");
try
{
Api.JetTerm2(instance, TermGrbit.Complete);
Process.Start("esentutl.exe", "/p /o " + newPath);
Thread.Sleep(5000);
Api.JetInit(ref instance);
Api.JetBeginSession(instance, out sessionId, null, null);
Api.JetAttachDatabase(sessionId, newPath, AttachDatabaseGrbit.None);
}
catch (Exception e2)
{
logger.Info("Could not recover database " + newPath + ", will try opening it one last time. If that doesn't work, try using other esentutl commands", e2);
}
break;
}
}
I'm thinking about using the 'Recent Items' folder, since when you select a file to upload an entry is written here:
C:\Users\USER\AppData\Roaming\Microsoft\Windows\Recent
string recent = (Environment.GetFolderPath(Environment.SpecialFolder.Recent));
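A minimal sketch of that fallback: list the most recently written shortcuts in the Recent folder (this only surfaces the .lnk file names and timestamps; resolving a shortcut to its target path would need extra work, for example via the Windows Script Host COM API):
// Newest entries in the Recent Items folder, most recent first.
string recent = Environment.GetFolderPath(Environment.SpecialFolder.Recent);
var latest = new DirectoryInfo(recent)
    .GetFiles("*.lnk")
    .OrderByDescending(f => f.LastWriteTimeUtc)
    .Take(10);
foreach (var shortcut in latest)
    Console.WriteLine($"{shortcut.LastWriteTimeUtc:u}  {shortcut.Name}");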
