The goal of what I am trying to do is to take a photo, upload it using Dropzone (which is working fine as I have implemented it), and save it to an NTFS file system. I store the upload path in my SQL Server database so I can pull the image faster later. The problem I am running into is that I have no idea how to load my images into an Azure file system. Also, from what I gather, Blob storage isn't quite what I need, since it is based on a kind of table-storage format that doesn't use NTFS.
I have been trying to research this and have no idea where to actually start. I have read a few articles on MSDN to try to understand it, but everything I keep finding pertains to Blob storage.
foreach (string filename in Request.Files)
{
    HttpPostedFileBase file = Request.Files[filename];
    if (file != null && file.ContentLength > 0)
    {
        fname = file.FileName;
        // Physical folder under the web root where uploads are written
        string pathstring = Server.MapPath("~/uploadeimg");
        // Unique file name so concurrent uploads never collide
        string filename1 = Guid.NewGuid() + Path.GetExtension(file.FileName);
        if (!Directory.Exists(pathstring))
        {
            Directory.CreateDirectory(pathstring);
        }
        uploadpath = Path.Combine(pathstring, filename1);
        file.SaveAs(uploadpath);
    }
}
As for documentation, the following links are what I have read and looked through:
File Uploading to Azure Storage by ASP.NET Webpages
https://learn.microsoft.com/en-us/azure/storage/files/storage-dotnet-how-to-use-files
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs
I appreciate any assistance you may be able to provide. I am looking to get more experience with this type of programming, and I decided to play around and see what I can do with it.
I should also note: I can save the files in the location where I host the project and retrieve them from there as well, but I am not certain that is the proper way to handle this.
On Azure it is common to store your files in Blob storage. I would recommend storing your photos in Blob storage instead of storing them inside the Azure Web App.
Every object saved in Azure Blob storage has its own URL, which you can store in your SQL database to retrieve the object later.
Check https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks?toc=%2fazure%2fstorage%2ffiles%2ftoc.json#comparison-files-and-blobs for a clear comparison between Azure Files and Blobs.
For legacy applications that use the native file system APIs, you could mount an Azure File share in an Azure VM or on-premises as a network drive, then access the share just like the local file system. You can also use AzCopy and Azure Storage Explorer to manage your file storage.
For Azure Web Apps, you cannot mount an Azure File share. By default, Azure Web App content is stored on Azure Storage managed by the platform. You can use the home directory (d:\home) and store your files there; for details, see the File System Restrictions/Considerations section of the Azure Web App sandbox documentation.
In summary, we recommend you store your files in Azure Blob storage. You can use the Azure Storage client library to communicate with Blob storage; details you can follow here.
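For example, here is a minimal sketch of uploading the posted file with the classic WindowsAzure.Storage client library; the connection string and the "uploadedimg" container name are placeholders, not something from your code:

using System;
using System.IO;
using System.Web;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Uploads the posted file to Blob storage and returns the blob URL,
// which can be stored in SQL Server instead of a local NTFS path.
static string UploadToBlob(HttpPostedFileBase file, string connectionString)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = account.CreateCloudBlobClient();

    CloudBlobContainer container = client.GetContainerReference("uploadedimg");
    container.CreateIfNotExists();

    string blobName = Guid.NewGuid() + Path.GetExtension(file.FileName);
    CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
    blob.UploadFromStream(file.InputStream);

    return blob.Uri.ToString();
}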
I have large media files on Azure Storage block blobs, and I would like to encode them with Azure Media Service API V3.
I've found the equivalent in API v2: Copying existing blobs into a Media Services Asset, but v2 is obsolete and will be retired soon.
Where can I find an example of creating an Asset from an existing blob? All examples I can find (including the official SDK v3 tutorials) use small local videos uploaded directly.
Also, it's not clear whether in v3 I still need to copy my blob into an asset as in v2, or whether I can use a blob with an asset as long as the Media Service uses the same Storage Account (because, as the v2-to-v3 migration guide says, AssetFiles don't exist anymore in v3).
I have a pretty extensive sample of copying content from a storage account, encoding it with AMS and delivering it back to the same location in this Node.js/Typescript sample:
https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/VideoEncoding/Encoding_Bulk_Remote_Storage_Account_SAS
Take a look there first and tell me if that is what you are in need of. There are a number of helper functions I use with the storage blob SDK in the common folder here.
https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/main/Common
Keep in mind that the workflow for remote assets can be achieved in a couple of ways in v3.
V3 Jobs support the JobInputHttp object, which can point to a read-only SAS URL that you pass in from your remote storage blob (if the storage account is not attached to the AMS account).
You can create an empty Asset, copy the blob into it from the remote storage account, and then submit the job as a JobInputAsset as usual.
You can create an Asset and pass in the container name. If this is an attached storage account, you can wrap an existing storage container as an Asset and then submit a job with the specified file in that Asset container as the input source. This is what you said in your last sentence above, but it may not be clear that you can do this with JobInputAsset - look at the Files property on JobInputAsset to pass a specific list of files to the encoder (a single file, or multiple if doing overlays).
"input": {
"#odata.type": "#Microsoft.Media.JobInputAsset",
"files": [],
"inputDefinitions": [],
"assetName": "job1-InputAsset"
},
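For comparison, a minimal sketch of the JobInputHttp variant from the first option above, pointing at a read-only SAS URL; the baseUri and file name are placeholders:

"input": {
    "#odata.type": "#Microsoft.Media.JobInputHttp",
    "baseUri": "https://<account>.blob.core.windows.net/<container>?<read-only-SAS>",
    "files": [ "video.mp4" ]
},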
I'm using Azure Blob Storage to allow users to upload files from a web app.
I've got them uploading into a container, but I'm not sure what would be best to save on the web app's database since there are multiple options.
There is a GUID for the file, but also a URL.
The URL can be used to get to the file directly, but is there a risk that it could change?
If I store the file GUID I can use that to get the other details of the file via an API, but of course that's an extra step compared to the URL.
I'm wondering what best practices are. Do you just store the URL and be done with it? Do you store the GUID and always make an extra call whenever a page loads to get the current URL? Do you store both? Is the URL something constant that can act just as good as a GUID?
Any suggestions are appreciated!
If you upload a file to Azure Blob storage, it gives you a URL to access it, which consists of three parts:
{blob base url}/{Container Name}/{File Name}
e.g.
https://storagesamples.blob.core.windows.net/sample-container/logfile.txt
So you can save the blob base URL and the container name in a config file and only the file-name part in the database,
and at run time you can build the whole URL and return it to the user.
That way, if you change the storage account or the container, you only need to change it in the config file.
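A minimal sketch of that reconstruction, assuming appSettings keys named BlobBaseUrl and BlobContainer and a file name read from the database (all of these names are placeholders):

using System.Configuration;

// Rebuild the full blob URL at run time from config values plus the stored file name
string blobBaseUrl = ConfigurationManager.AppSettings["BlobBaseUrl"];   // e.g. https://storagesamples.blob.core.windows.net
string container   = ConfigurationManager.AppSettings["BlobContainer"]; // e.g. sample-container
string fileName    = record.FileName;                                   // value stored in the database, e.g. logfile.txt

string url = string.Format("{0}/{1}/{2}", blobBaseUrl, container, fileName);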
I've deployed a website to Azure and I want to programmatically access this path, "D:\home\site\app", from a C# desktop application, delete all files in it, and upload new ones programmatically.
I have searched and found many approaches, but they are all for Azure Storage, the Kudu console, or FTP, while what I really want is to access the local storage where the website is deployed and edit files programmatically.
Sure thing, the Site Control Manager (Kudu) has an API for that, the VFS API:
https://github.com/projectkudu/kudu/wiki/REST-API#vfs
You can use either of these for authentication:
A Bearer token that you obtain from the STS (reference implementation in ARMClient)
Site-level credentials (the long ugly ones under your Web App → Properties)
Git/FTP credentials (subscription level)
Sample usage (using site-level credentials):
# Line breaks brutally used to improve readability
# /api/vfs/ is d:\home
# Append path as necessary, i.e. /api/vfs/site/app
$ curl -k https://$are-we-eating-too-much-garlic-as-a-people:6sujXXX
XXXXXXq7Zc#are-we-eating-too-much-garlic-as-a-people.scm.azurewebsites.net
/api/vfs/site/wwwroot/ill-grab-this-file-over-vfs-api.txt
There, I did it.
I'm assuming here that you want to do all that from the outside world - since you don't clearly state otherwise.
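Since you want to do this from a C# desktop application, here is a minimal sketch of calling the same VFS API with HttpClient and site-level credentials. The site name, credentials, and file paths are placeholders; the If-Match: * header lets you overwrite or delete an existing file without supplying its ETag.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task ReplaceAppFilesAsync()
{
    // Site-level (publish profile) credentials - placeholders
    string user = "$my-site";
    string pass = "publish-password";

    var client = new HttpClient();
    client.BaseAddress = new Uri("https://my-site.scm.azurewebsites.net/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes(user + ":" + pass)));

    // DELETE an existing file under d:\home\site\app
    var del = new HttpRequestMessage(HttpMethod.Delete, "api/vfs/site/app/old-file.txt");
    del.Headers.IfMatch.Add(EntityTagHeaderValue.Any);
    (await client.SendAsync(del)).EnsureSuccessStatusCode();

    // PUT a new file in its place
    var put = new HttpRequestMessage(HttpMethod.Put, "api/vfs/site/app/new-file.txt")
    {
        Content = new StringContent("new file contents")
    };
    put.Headers.IfMatch.Add(EntityTagHeaderValue.Any);
    (await client.SendAsync(put)).EnsureSuccessStatusCode();
}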
Well, in my Azure code, my task was to save an Excel file and upload its contents to SQL Server.
I used this plain and simple approach to access the home site.
// Build a path under the app's ~\Temp folder for the uploaded file
string fileToSave = string.Format("{0}\\{1}", HostingEnvironment.MapPath(@"~\Temp"), FileUpload.FileName);
// Create the folder on first use
if (!Directory.Exists(HostingEnvironment.MapPath(@"~\Temp")))
    Directory.CreateDirectory(HostingEnvironment.MapPath(@"~\Temp"));
FileUpload.PostedFile.SaveAs(fileToSave);
You could use something like this to delete a file, save a new one, or perform other I/O operations.
I am using log4net in my web application
We are deploying it via Cloud Services (NOT App Services).
My understanding is that I won't be able to access the log files on disk (and further, these files are not persistent anyway).
From my reading, the advice is to use Blob storage, but I don't see any example code out there on how to do this. There is a NuGet package
https://www.nuget.org/packages/log4net.Appender.Azure
but the documentation says it creates a file for each log entry.
What I want is the RollingLogFile.
Do I basically have to create my own? As in, pull down the log4net source code and write my own appender that logs to a cloud storage account instead of to disk? That seems like a lot of work; I would have thought someone had already built this feature.
Thanks.
This project provides some samples that store log entries in Azure Blob storage using AzureBlobAppender or AzureAppendBlobAppender for log4net.
According to the code, AzureBlobAppender creates a separate XML file for each log entry in Azure Blob storage, while AzureAppendBlobAppender stores all of one day's logs in a single file by calling the CloudAppendBlob.AppendBlock method to append a new block of log data to the end of an existing blob.
If you do not want a separate XML file for each log entry, try AzureAppendBlobAppender.
private static string Filename(string directoryName)
{
    return string.Format("{0}/{1}.entry.log.xml",
        directoryName,
        DateTime.Today.ToString("yyyy_MM_dd",
            DateTimeFormatInfo.InvariantInfo));
}
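For illustration, a minimal sketch of the append-blob mechanism the appender relies on, using the classic WindowsAzure.Storage SDK; the container name, blob name, connectionString, and logEntryXml variables are placeholders:

using System;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// One append blob per day; each write appends a block to the same file
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("logs");
container.CreateIfNotExists();

CloudAppendBlob blob = container.GetAppendBlobReference(
    DateTime.Today.ToString("yyyy_MM_dd") + ".entry.log.xml");
if (!blob.Exists())
    blob.CreateOrReplace();

using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(logEntryXml)))
{
    blob.AppendBlock(stream);
}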
I am trying to find a simple way to list log files that I am storing in an Azure Blob container, so developers and admins can easily get to dev log information. I am following the information in this API doc https://msdn.microsoft.com/en-us/library/dd135734.aspx but when I go to
https://-my-storage-url-.blob.core.windows.net/dev?comp=list&include={snapshots,metadata,uncommittedblobs,copy}&maxresults=1000
I see one file listed, which is a block blob, but the log files I have generated, which are of type append blob, are not showing. How can I construct this API call to include append blobs?