UWP OneDrive storage space - c#

I have a C# UWP app that makes use of the OneDrive API to store files in the approot (the special folder for my app only). I know that I can get the total space of OneDrive this way, but it doesn't tell me how much space my app takes.
Is there a fast way to tell how much space my app takes to store these files there (instead of iterating through all items)?

As Brad said, the approot, like any other OneDrive item, has metadata, and that metadata includes a size property which represents the size of the item in bytes. So we can use this property to get the total space your app takes.
As described in App Folder, we can use GET /drive/special/approot to get your app folder's metadata. When using the OneDrive .NET SDK, the code will look like:
var item = await oneDriveClient.Drive.Special.AppRoot.Request().GetAsync();
System.Diagnostics.Debug.WriteLine($"{item.Name}'s size is {item.Size}");
However, as I tested, when we use this code in UWP we will encounter a cache issue. Even if your app folder's size has changed, this API will return the same value as the first time you ran it.
This is because Get metadata for a OneDrive item supports an optional request header, if-none-match: if this header is included and the eTag (or cTag) provided matches the current tag on the file, an HTTP 304 Not Modified response is returned.
In UWP, HttpClient automatically adds this header to the request, so if the eTag has not changed, HttpClient will not fetch the newest info and will return the data it cached. According to Item resource type:
Note: The eTag and cTag properties work differently on containers (folders). The cTag value is modified when content or metadata of any descendant of the folder is changed. The eTag value is only modified when the folder's properties are changed, except for properties that are derived from descendants (like childCount or lastModifiedDateTime).
So in most cases the app folder's eTag won't change, and when we use the OneDrive .NET SDK or the default HttpClient in UWP to get the app folder's metadata, we get the cached data. To see this clearly, we can use Fiddler to trace the network traffic: in the request headers, If-None-Match is added, and the real response from OneDrive is HTTP 304 Not Modified.
To solve this issue, we can use the Windows.Web.Http.HttpClient class with the HttpBaseProtocolFilter and HttpCacheControl classes to disable the cache, like the following:
var oneDriveClient = await OneDriveClientExtensions.GetAuthenticatedUniversalClient(new[] { "wl.signin", "onedrive.readwrite" });

// Disable the local HTTP cache so we always read the latest metadata from OneDrive.
var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
filter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
filter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;
var httpClient = new HttpClient(filter);

// Call GET /drive/special/approot directly, reusing the SDK's access token.
var request = new HttpRequestMessage(HttpMethod.Get, new Uri("https://api.onedrive.com/v1.0/drive/special/approot"));
request.Headers.Authorization = new Windows.Web.Http.Headers.HttpCredentialsHeaderValue("Bearer", oneDriveClient.AuthenticationProvider.CurrentAccountSession.AccessToken);
var response = await httpClient.SendRequestAsync(request);

// Deserialize the response into the SDK's Item model and read its Size property.
var item = oneDriveClient.HttpProvider.Serializer.DeserializeObject<Item>(await response.Content.ReadAsStringAsync());
System.Diagnostics.Debug.WriteLine($"{item.Name}'s size is {item.Size}");
PS: To make this method work, we need to make sure there is no local HTTP cache, so we'd better uninstall the app first and not use await oneDriveClient.Drive.Special.AppRoot.Request().GetAsync() in the app.

When you fetch your app's folder (via approot), the value of the size property returned on the item should reflect the amount of space your application is using, since for a folder the value is the sum of the sizes of all files stored within it, at any level.

Error uploading Autodesk design automation, dotnet Core..... 1 rfa and 1 big Json file to pass as input

The Revit add-in is working perfectly, and I have also converted it correctly for Design Automation. I have debugged it with the local debugger and it worked perfectly.
So I can say the app bundle is doing perfectly well.
Now coming to the web application code, it works perfectly until the last line, "workItemStatus".
I need an .rfa file and a big JSON file as input files to run the code. Both together are about 1 MB in size. But the code gets stuck (endlessly waiting) when uploading the file, and the workitem does not start.
I read in another Stack Overflow post that Forge does not allow more than 16 KB upload to an OSS bucket by.....
Url = string.Format("https://developer.api.autodesk.com/oss/v2/buckets/{0}/objects/{1}", bucketKey, inputFileNameOSS)
That post says I will need to upload bigger files to another cloud service and use the signed URL instead of the Forge OSS bucket.
The code looks correct while debugging, and it gets stuck when it reaches the line
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
I have debugged the code, and it looks like it works perfectly until the "workItemStatus" value in DesignAutomationController.cs "StartWorkItem".
Every key and value looks like it is passed correctly.
Is it because of the file size? As the JSON file is big, I am uploading it like the other input (.rfa/.rvt) files.
string callbackUrl = string.Format("{0}/api/forge/callback/designautomation?id={1}&outputFileName={2}", OAuthController.GetAppSetting("FORGE_WEBHOOK_URL"), browerConnectionId, outputFileNameOSS);
WorkItem workItemSpec = new WorkItem()
{
    ActivityId = activityName,
    Arguments = new Dictionary<string, IArgument>()
    {
        { "inputFile", inputFileArgument },
        { "inputJsonFile", inputFileArgument1 },
        { "outputFile", outputFileArgument },
        { "onComplete", new XrefTreeArgument { Verb = Verb.Post, Url = callbackUrl } }
    }
};
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
return Ok(new { WorkItemId = workItemStatus.Id });
I read in another Stack Overflow post that Forge does not allow more than 16 KB upload to an OSS bucket by..
The 16 KB limit is on the payload of Design Automation endpoints, including the workitem. The limits are defined here. If the workitem payload exceeds 16 KB, you will see an HTTP 413 Payload Too Large error.
To send large JSON inputs to Design Automation, you may first upload the JSON to OSS (or even another storage service such as Amazon S3), then call the workitem with a signed URL to the JSON file (similar to the signed URL for the rfa file).
Edit:
1. Large JSON files can be uploaded to OSS using the Data Management upload endpoint.
2. A signed URL with read access can then be obtained for that object using the signed resource endpoint.
3. The URL obtained can then be passed to the Design Automation workitem payload as an input argument, instead of embedding the JSON contents into the payload (see the sketch below).
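For illustration, here is a minimal sketch of that flow using a plain System.Net.Http.HttpClient against the OSS REST endpoints (rather than the Forge SDK). The access token, bucket key, object name, and file path are placeholders, and the usual usings (System.Net.Http, System.Text, Newtonsoft.Json, Autodesk.Forge.DesignAutomation.Model) are assumed:
// Sketch only: upload a large JSON input to OSS, create a read-only signed URL,
// and reference it from the workitem instead of embedding the JSON in the payload.
// accessToken is assumed to be a two-legged token with data:read and data:write scopes.
string bucketKey = "my-bucket";         // placeholder
string objectName = "params.json";      // placeholder
string ossObjectUrl = $"https://developer.api.autodesk.com/oss/v2/buckets/{bucketKey}/objects/{objectName}";

using (var http = new HttpClient())
{
    http.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

    // 1. Upload the JSON file to OSS (Data Management).
    using (var jsonStream = System.IO.File.OpenRead(@"C:\temp\params.json")) // placeholder path
    {
        var content = new StreamContent(jsonStream);
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/json");
        (await http.PutAsync(ossObjectUrl, content)).EnsureSuccessStatusCode();
    }

    // 2. Create a signed URL with read access for that object.
    var signedResponse = await http.PostAsync($"{ossObjectUrl}/signed?access=read",
        new StringContent("{}", Encoding.UTF8, "application/json"));
    signedResponse.EnsureSuccessStatusCode();
    dynamic signed = JsonConvert.DeserializeObject(await signedResponse.Content.ReadAsStringAsync());
    string signedUrl = signed.signedUrl;

    // 3. Pass the signed URL as the workitem's JSON input argument.
    var inputJsonArgument = new XrefTreeArgument { Url = signedUrl, Verb = Verb.Get };
    // ... add inputJsonArgument to workItemSpec.Arguments as "inputJsonFile".
}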

Onedrive special App Folder - creating a new folder

I'm working with the OneDrive SDK and have successfully managed to get a reference to my special App folder.
_appFolder = await OneDriveClient.Drive.Special.AppRoot.Request().GetAsync();
From here I want to create a sub folder called say "Deep Purple".
Looking at the C# example code I can do this using:
var folderToCreate = new Item { Name = artistName, Folder = new Folder() };
var newFolder = await OneDriveClient
    .Drive
    .Items[itemId]
    .Children
    .Request()
    .AddAsync(folderToCreate);
But I'm thinking I already have a reference down to Items[itemId] (my _appFolder is of type Item), so I can just use:
var myNewFolder = await _appFolder.Children.Request().AddAsync(folderToCreate);
But no: there is no Request option available on _appFolder.Children.
I'm clearly misunderstanding something.
Your issue is that you are using a Model (an object representation of response data) and trying to get a Request, which is only available from a RequestBuilder type. You are close, though! If you want to add an item to the children of your app folder, your request would look like this:
var newFolder = await oneDriveClient
    .Drive
    .Items[_appFolder.Id]
    .Children
    .Request()
    .AddAsync(folderToCreate);
At its core, the SDK relies on the OneDriveClient object to generate requests because it knows how to do things like authentication and URL generation. The Models are just containers for information returned from the service. You can use the information in those containers in concert with the client to generate any request you need.
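To illustrate that pattern a little further, here is a hedged sketch that chains two requests together, using the Id from each returned model to build the next request. The folder name, file name, and path are placeholders, and the ItemWithPath/Content request builders from the OneDrive .NET SDK are assumed:
// Sketch: the client builds the requests; the Item models returned by the
// service only carry data (such as Id) that we feed back into new requests.
var folderToCreate = new Item { Name = "Deep Purple", Folder = new Folder() };

// Create the sub folder under the app folder using the model's Id.
var newFolder = await oneDriveClient
    .Drive
    .Items[_appFolder.Id]
    .Children
    .Request()
    .AddAsync(folderToCreate);

// Use the new folder's Id (another model) for the next request,
// e.g. uploading a file into it. Names below are placeholders.
using (var stream = System.IO.File.OpenRead(@"C:\music\machine-head.txt"))
{
    var uploadedItem = await oneDriveClient
        .Drive
        .Items[newFolder.Id]
        .ItemWithPath("machine-head.txt")
        .Content
        .Request()
        .PutAsync<Item>(stream);
}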

How do I generate an Azure SAS signature constraining Content-Length header to certain value?

I am currently implementing ajax browser uploads straight to an Azure storage container. First I need a signed URL that will allow me to do that without sharing my private key with the browser (I ask my server for this signed URL in a previous ajax call).
My server-side code looks like this:
public string GetSignature(string fileName, int contentLength)
{
    CloudBlobContainer blobContainer = this.BlobClient.GetContainerReference("demo");
    CloudBlockBlob blob = blobContainer.GetBlockBlobReference("photo1.jpg");

    SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Create,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10)
    };

    SharedAccessBlobHeaders headers = new SharedAccessBlobHeaders()
    {
        CacheControl = "",
        ContentDisposition = "",
        ContentEncoding = "",
        ContentLanguage = "",
        ContentType = ""
        // PROBLEM: Where's content length?
    };

    return blob.GetSharedAccessSignature(policy, headers);
}
My problem is that I can't find a way to specify the Content-Length header. I need this to prevent users from uploading huge files, since they have a storage quota that must be respected.
I have googled a lot and actually found a way to do this, which is implementing the signature algorithm myself as per this link, but that is my last resort (it would be more error prone and time consuming).
So my question is, is there a way to pass a content length constraint to a PUT signed URL?
There is no way currently to put blob size limitations on delegated access. (The link you provided is for signing requests with the account key; it does not constrain the size of the blob being uploaded.)
There are two factors that limit the amount of data being uploaded in this scenario:
1. The time limitation of the SAS token. Your code shows a 10-minute window; assuming 60 MB/sec, that allows for approximately 36 GB.
2. The maximum size of a single blob is a hard upper bound.
One solution is to scan the container periodically and clean up any blobs that go beyond the user's quota.
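As a rough sketch of that cleanup approach, using the same classic Microsoft.WindowsAzure.Storage client as your snippet (the quota value and the delete-on-overflow policy here are just assumptions for illustration):
// Sketch: periodically sum the blob sizes in the container and remove
// whatever pushes the total past the user's quota.
long quotaInBytes = 100L * 1024 * 1024; // hypothetical 100 MB quota
CloudBlobContainer container = this.BlobClient.GetContainerReference("demo");

long totalBytes = 0;
var blobsOverQuota = new List<CloudBlockBlob>();

// A flat listing walks all blobs at any depth in the container.
foreach (var item in container.ListBlobs(null, useFlatBlobListing: true))
{
    var blob = item as CloudBlockBlob;
    if (blob == null) continue;

    totalBytes += blob.Properties.Length;
    if (totalBytes > quotaInBytes)
    {
        blobsOverQuota.Add(blob); // candidate for cleanup
    }
}

foreach (var blob in blobsOverQuota)
{
    blob.Delete();
}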
As a side point, the SharedAccessBlobHeaders control what the users see when they download the blob. They don't restrict what the user can upload, so they are not relevant to your scenario. You can simply pass in null for this parameter.

Azure Blob Storage - Changing permissions of a container and accessing with SAS

I have an Azure Blob container which contains a few blobs. The container was created (successfully) with the code:
if (container.CreateIfNotExists())
{
    var permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Off;
    container.SetPermissions(permissions);
}
You'll see the permissions are set to private (i.e., PublicAccess is Off).
In a later portion of my code, I would like to open the permissions using SAS, with an expiration of 1 hour. To attempt this, I am using the code:
if (container.Exists())
{
    //Set the expiry time and permissions for the container.
    //In this case no start time is specified, so the shared access signature becomes valid immediately.
    SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
    sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1);
    sasConstraints.Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List;

    //Generate the shared access signature on the container, setting the constraints directly on the signature.
    string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);

    //Return the URI string for the container, including the SAS token.
    return container.Uri + sasContainerToken;
}
However, no matter how I shape it, when I navigate my browser to the returned url (i.e., container.Uri + sasContainerToken), I get an authentication error:
<Error>
  <Code>AuthenticationFailed</Code>
  <Message>
    Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:d7f89ef3-919b-4b86-9b4f-4a95273c20ff Time:2014-06-26T15:33:11.2754096Z
  </Message>
  <AuthenticationErrorDetail>
    Signature did not match. String to sign used was rl 2014-06-26T16:32:02Z /mycontainer/$root 2014-02-14
  </AuthenticationErrorDetail>
</Error>
Can anyone give me any pointers as to why I am seeing this authentication error?
My final URL looks like it is in the correct format:
https://myservice.blob.core.windows.net/mycontainer?sv=2014-02-14&sr=c&sig=0MSvKIRJnxWr2G%2Bh0mj%2BslbNtZM3VnjSF8KPhBKCPs8%3D&se=2014-06-26T16%3A32%3A02Z&sp=rl
I'm at a loss so any pointers would be greatly appreciated.
I also faced the exact same error :). You can't do container-related operations (with the exception of listing blobs) using a Shared Access Signature. You would need to use the account key for performing operations on a container. From this page: http://msdn.microsoft.com/en-us/library/azure/jj721951.aspx
Supported operations using shared access signatures include:
Reading and writing page or block blob content, block lists, properties, and metadata
Deleting, leasing, and creating a snapshot of a blob
Listing the blobs within a container
UPDATE
For listing blobs, just add &comp=list&restype=container to your URL and that should do the trick. So your URL should be:
https://myservice.blob.core.windows.net/mycontainer?sv=2014-02-14&sr=c&sig=0MSvKIRJnxWr2G%2Bh0mj%2BslbNtZM3VnjSF8KPhBKCPs8%3D&se=2014-06-26T16%3A32%3A02Z&sp=rl&comp=list&restype=container
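In code, that is just a matter of appending those two query parameters to the URL built in the snippet above:
// Sketch: turn the container SAS URL from the question into a "list blobs" URL.
string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
string listBlobsUrl = container.Uri + sasContainerToken + "&comp=list&restype=container";
return listBlobsUrl; // navigating to this URL returns the blob listing XML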

Getting the File type in Azure Storage

Is it possible to get the file type of an image file or blob in Azure Storage? I have researched everywhere but to no avail. Any response will be appreciated! I was wondering, is there anything that I could add in order to get it? Here is my code:
ImageSource wew=new BitmapImage(new Uri("http://sweetapp.blob.core.windows.net/cakepictures/"+newfilename+filetypeofthepicture, UriKind.RelativeOrAbsolute)); //need to get the file type of the image
CakeData.Add(new CakeData { Cakename = item.Cakename, ImagePath = wew });
Each blob has a property called Content-Type which can be set when the blob is uploaded. You can make use of the Storage Client Library to fetch the blob's properties and read its content type. If you are using the .NET Storage Client Library, you could use the code below:
var blob = new CloudBlockBlob(new Uri("blob uri"), new StorageCredentials("sweetapp", "account key"));
blob.FetchAttributes();
var contentType = blob.Properties.ContentType;
This would, however, require you to include the credentials in your client app. If you don't want to do that, the other alternative would be to use a Shared Access Signature token and use that to create an instance of the StorageCredentials object. You could create this SAS token somewhere on the server.
var credentials = new StorageCredentials(sasToken);
var blob = new CloudBlockBlob(new Uri("blob uri"), credentials);
blob.FetchAttributes();
var contentType = blob.Properties.ContentType;
A third alternative would be to access the registry and get the MIME type based on the file extension, but I'm not sure if a Windows 8 app would have access to the registry.
The last alternative would be to hard-code things in your application. There's a predefined set of MIME types which you can hard-code, and based on the file's extension you can retrieve the content type, as in the sketch below.
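For example, a minimal sketch of that hard-coded lookup (the extension list is just an illustrative subset):
// Sketch: map a file extension to a hard-coded MIME type.
private static readonly Dictionary<string, string> MimeTypes =
    new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { ".jpg", "image/jpeg" },
        { ".jpeg", "image/jpeg" },
        { ".png", "image/png" },
        { ".gif", "image/gif" },
        { ".bmp", "image/bmp" }
    };

public static string GetContentType(string fileName)
{
    string extension = System.IO.Path.GetExtension(fileName);
    string contentType;
    return MimeTypes.TryGetValue(extension, out contentType)
        ? contentType
        : "application/octet-stream"; // fallback for unknown extensions
}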
