I am using log4net in my web application.
We are deploying it via Cloud Services (NOT App Services).
My understanding is that I won't be able to access the log files on disk (and further, these files are not persistent anyway).
From what I've read, the advice is to use Blob storage, but I don't see any code out there showing how to do this. There is a NuGet package:
https://www.nuget.org/packages/log4net.Appender.Azure
but the documentation says it creates a file for each log entry.
What I want is the RollingLogFile.
Do I basically have to create my own? As in, pull down the log4net source code and write my own appender that logs to a cloud storage account instead of disk? That seems like a lot of work; I would have thought someone had already built this feature.
Thanks.
This project provides some samples that store log entries in Azure Blob storage using AzureBlobAppender or AzureAppendBlobAppender for log4net.
According to the code, AzureBlobAppender creates a separate XML file for each log entry in Azure Blob storage, while AzureAppendBlobAppender stores all logs generated in one day in a single log file by calling the CloudAppendBlob.AppendBlock method to append a new block of log data to the end of an existing blob.
If you do not want an XML file created for each log entry, try AzureAppendBlobAppender. Its Filename method shows that it produces one file per day:
private static string Filename(string directoryName)
{
    // One blob per day, e.g. logs/2018_01_31.entry.log.xml
    return string.Format("{0}/{1}.entry.log.xml",
        directoryName,
        DateTime.Today.ToString("yyyy_MM_dd", DateTimeFormatInfo.InvariantInfo));
}
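For completeness, a minimal appender configuration might look like the sketch below. The parameter names (ConnectionString, ContainerName, DirectoryName) and the assembly-qualified type name are taken from the package's sample configuration as I understand it, so verify them against the version you install; the container and directory values are placeholders.

<appender name="AzureAppendBlobAppender" type="log4net.Appender.AzureAppendBlobAppender, log4net.Appender.Azure">
  <param name="ContainerName" value="mylogs" />
  <param name="DirectoryName" value="webapp" />
  <param name="ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=xxxxxx" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>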
I'm using Azure Blob Storage to allow users to upload files from a web app.
I've got them uploading into a container, but I'm not sure what would be best to save on the web app's database since there are multiple options.
There is a GUID for the file, but also a URL.
The URL can be used to get to the file directly, but is there a risk that it could change?
If I store the file GUID I can use that to get the other details of the file using an API, but of course that's an extra step compared to the URL.
I'm wondering what the best practices are. Do you just store the URL and be done with it? Do you store the GUID and make an extra call whenever a page loads to get the current URL? Do you store both? Is the URL constant enough that it can serve just as well as a GUID?
Any suggestions are appreciated!
If you upload a file to Azure Blob storage, it gives you a URL for accessing it, which consists of three parts:
{blob base url}/{Container Name}/{File Name}
e.g.
https://storagesamples.blob.core.windows.net/sample-container/logfile.txt
So you can save the blob base URL and the container name in a config file, and store only the file name part in the database.
At run time you can build the whole URL and return it to the user.
That way, if you ever change the storage account or container, you only need to update the config file.
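For example, a minimal sketch of that pattern in C# (the appSettings keys and the data-access helper are hypothetical names for illustration):

// "BlobBaseUrl" and "ContainerName" are hypothetical config keys
string baseUrl = ConfigurationManager.AppSettings["BlobBaseUrl"];      // e.g. https://storagesamples.blob.core.windows.net
string container = ConfigurationManager.AppSettings["ContainerName"];  // e.g. sample-container
// GetFileNameFromDatabase is a hypothetical data-access helper returning the stored file name
string fileName = GetFileNameFromDatabase(fileId);
string url = string.Format("{0}/{1}/{2}", baseUrl, container, fileName);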
I'm trying to use Realm Cloud in an Azure Function but it keeps giving me an error:
make_dir() failed: Permission denied Path: /realm-object-server/
Is there a way to configure Azure Functions to have permissions to create files? I'm new to Azure Functions, this is my first one so I'm not really sure of all the particulars.
I have found one solution to this. Inside an Azure Function you can only create files and folders inside the temp folder, so if you use the following sync configuration it should work. It worked for me.
var configuration = new FullSyncConfiguration(new Uri("/~/<your-realm-name>", UriKind.Relative), _realmUser, Path.Combine(Path.GetTempPath(), "realm-object-server"));
So basically, you are passing in the folder where the realm is stored locally. If you don't pass the folder name, it will try to store the realm in a default location, where you will get the access error.
I am not familiar with Realm, but Functions does have permission to interact with the file system by default. See these links for information on the App Service file system (this applies to Functions too):
https://learn.microsoft.com/en-us/azure/app-service/operating-system-functionality#file-access
https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system
If the function is deployed using run-from-package, then wwwroot is read-only. But since the path in the error message doesn't point to wwwroot, this is probably not the issue.
My best guess is that the failing code is trying to write to an inappropriate location. The information in the links above should help resolve that. Maybe check whether Realm has a config setting that lets you specify which location it should create "realm-object-server" in.
You can use:
SyncConfigurationBase.Initialize(UserPersistenceMode.NotEncrypted, basePath: Path.GetTempPath());
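Putting the two suggestions together, a minimal sketch (the realm path "/~/myrealm" and the realmUser variable are placeholders; Realm.GetInstance comes from the Realm .NET SDK, so check it against your version):

// Point Realm's local persistence at the only writable area before opening a synced realm
SyncConfigurationBase.Initialize(UserPersistenceMode.NotEncrypted, basePath: Path.GetTempPath());

// Keep the local copy of the realm under the temp folder, as suggested above
var config = new FullSyncConfiguration(
    new Uri("/~/myrealm", UriKind.Relative),
    realmUser,
    Path.Combine(Path.GetTempPath(), "realm-object-server"));
var realm = Realm.GetInstance(config);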
The goal is to take a photo, upload it using Dropzone (which is working fine as I have implemented it), and save it to an NTFS file system. I store the "uploadpath" in my SQL Server database so I can pull the image back quickly later. The problem I am running into is that I have no idea how to load my images into the Azure file system. Also, from what I gather, Blob storage isn't quite what I need, since it is based on a kind of table storage format rather than NTFS.
I have been trying to research this and have no idea where to actually start... I have read a few MSDN articles to try to understand it, but everything I keep finding pertains to Blob storage.
foreach (string filename in Request.Files)
{
    HttpPostedFileBase file = Request.Files[filename];
    if (file != null && file.ContentLength > 0)
    {
        fname = file.FileName;
        // Map the upload folder inside the web app and create it if it doesn't exist
        var path = Path.Combine(Server.MapPath("~/uploadeimg"));
        string pathstring = Path.Combine(path.ToString());
        string filename1 = Guid.NewGuid() + Path.GetExtension(file.FileName);
        bool isexist = Directory.Exists(pathstring);
        if (!isexist)
        {
            Directory.CreateDirectory(pathstring);
        }
        // Save the file to disk and keep the path for the database
        uploadpath = string.Format("{0}\\{1}", pathstring, filename1);
        file.SaveAs(uploadpath);
    }
}
As for documentation, the following links are what I have read and looked through:
File Uploading to Azure Storage by ASP.NET Webpages
https://learn.microsoft.com/en-us/azure/storage/files/storage-dotnet-how-to-use-files
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs
I appreciate any assistance you may be able to provide. I am looking to get more experience with this type of programming, and I have decided to play around and see what I can do with it.
I should also note that I can save the files in the area where I host the project and retrieve them that way as well, but I am not certain that is the proper way to handle this.
On Azure it is common to store files in Blob storage. I would recommend storing your photos in Blob storage instead of inside the Azure Web App.
Every object saved in Azure Blob storage has its own URL, which you can store in your SQL database to retrieve the object later.
Check https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks?toc=%2fazure%2fstorage%2ffiles%2ftoc.json#comparison-files-and-blobs for a clear comparison between Azure Files and Blobs.
For legacy applications that use native file system APIs, you can mount an Azure File share in an Azure VM, or on-premises as a network drive, and then access the share just like a local file system. You can use AzCopy and Azure Storage Explorer to manage your file storage.
For Azure Web Apps, you cannot mount an Azure File share. By default, web app content is stored on Azure Storage managed by the platform, and you can only use the home directory (d:\home) to store your files. For details, see the File System Restrictions/Considerations section of the Azure Web App sandbox documentation.
In summary, we recommend storing your files in Azure Blob storage. You can use the Azure Storage client library to communicate with Blob storage; for details, follow the link here.
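As a starting point, here is a minimal sketch of the upload loop from the question rewritten against Blob storage using the Azure Storage client library (WindowsAzure.Storage); the container name and connection-string key are placeholders:

foreach (string filename in Request.Files)
{
    HttpPostedFileBase file = Request.Files[filename];
    if (file != null && file.ContentLength > 0)
    {
        // Connect to the storage account and make sure the container exists
        CloudStorageAccount account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("uploadimages");
        container.CreateIfNotExists();

        // Upload under a unique name, then store blob.Uri (or just the name) in SQL Server
        string blobName = Guid.NewGuid() + Path.GetExtension(file.FileName);
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        blob.UploadFromStream(file.InputStream);
        uploadpath = blob.Uri.ToString();
    }
}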
Good morning,
I'm trying to implement Azure Blob Storage for the first time using the example code provided. However, my app is throwing a very broad 400 Bad Request error when calling UploadFromStream().
I have done a bunch of searching on this issue. Almost everything I have come across identifies the naming conventions of the container or blob as the issue. This is NOT my issue; I'm using all lowercase, etc.
My code is no different from their example code:
The connection string:
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=xxxxxx;EndpointSuffix=core.windows.net" />
And the code:
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the blob client
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve reference to a previously created container
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
// Retrieve reference to a blob named "myblob"
CloudBlockBlob blob = container.GetBlockBlobReference("myblob");
// Create the container if it doesn't already exist
container.CreateIfNotExists();
// Create or overwrite the "myblob" blob with contents from a local file.
using (var fileStream = System.IO.File.OpenRead(@"D:\Files\logo.png"))
{
blob.UploadFromStream(fileStream);
}
Here are the exception details:
This is all I have to go on. The only other thing I can think of is that I'm running this in my development environment with HTTP, not HTTPS. Not sure if this might be an issue?
EDIT:
Additionally, when attempting to upload a file directly to the container in the Azure portal, I receive a validation error:
"Validation error for TestAzureFileUpload.txt. Details: The page blob size must be aligned to a 512-byte boundary. The current file size is 56."
Could this be related to my issue? Am I missing some setting here?
I know I do not have enough to go on here for anyone to identify the exact issue, but I am hoping someone can at least point me in the right direction.
Any help would be appreciated.
I used a Premium storage account to test the code and got the same "400 Bad Request" as you did.
From the exception details, you can see the "Block blobs are not supported" message.
Here is an image of my exception details.
To solve your problem, I think you should know the difference between block blobs and page blobs.
Block blobs are composed of blocks, each identified by a block ID. You create or modify a block blob by writing a set of blocks and committing them by their block IDs. They are for discrete storage objects like jpg, txt, and log files that you'd typically view as files in your local OS. They are supported by standard storage accounts only.
Page blobs are a collection of 512-byte pages optimized for random read and write operations, such as VHDs. To create a page blob, you initialize it and specify the maximum size to which it can grow. Page blobs are designed for Azure Virtual Machine disks, and they are supported by both standard and Premium storage accounts.
You are using Premium storage, which is currently available only for storing data on disks used by Azure Virtual Machines, so block blob uploads are rejected.
So my suggestion is:
If you want your application to support streaming and random-access scenarios, and to access application data from anywhere, use block blobs with a standard account.
If you want to lift and shift applications that use native file system APIs to read and write data to persistent disks, or to store data that does not need to be accessed from outside the virtual machine to which the disk is attached, use page blobs.
Reference link:
Understanding Block Blobs, Append Blobs, and Page Blobs
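To make the 512-byte constraint concrete, here is a minimal sketch using the same client library as the question (the container and file names are placeholders, and GetPageBlobReference/CloudPageBlob.Create are from that library as I recall, so verify against your SDK version):

CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
container.CreateIfNotExists();

// Page blob: the size passed to Create must be a multiple of 512 bytes,
// which is why the portal rejected the 56-byte TestAzureFileUpload.txt
CloudPageBlob pageBlob = container.GetPageBlobReference("disk.vhd");
pageBlob.Create(512 * 1024); // pre-allocates 512 KB of 512-byte pages

// Block blob: no alignment requirement, but standard accounts only;
// on a Premium account this upload is what returns the 400 Bad Request
CloudBlockBlob blockBlob = container.GetBlockBlobReference("logo.png");
using (var fileStream = System.IO.File.OpenRead(@"D:\Files\logo.png"))
{
    blockBlob.UploadFromStream(fileStream);
}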
I am trying to find a simple way to list log files that I am storing in an Azure Blob container, so developers and admins can easily get to dev log information. I am following the information in this API doc https://msdn.microsoft.com/en-us/library/dd135734.aspx, but when I go to
https://-my-storage-url-.blob.core.windows.net/dev?comp=list&include={snapshots,metadata,uncommittedblobs,copy}&maxresults=1000
I see one file listed, which is a block blob, but the log files I have generated, which are append blobs, are not showing. How can I construct this API call to include append blobs?