Windows service to GCP - C#

I was looking for resources on how to create a simple background service using C# that checks a specific folder for FLAC files and sends them to a GCP bucket; once a file is uploaded successfully, it is erased or moved to another folder. Where can I find something to read about this kind of thing?

To move a file to another location using C#, you can use the File.Move method. File.Move moves an existing file to a new location with the same or a different file name, so it is also the method used to rename files. It takes two parameters, the source path and the destination path, and it removes the original file after the move.
Example:
try
{
    File.Move(sourceFile, destinationFile);
}
catch (IOException iox)
{
    Console.WriteLine(iox.Message);
}
If you need more examples of the File.Move method, please follow this link.
Adding to that, you can use the Directory.GetFiles method to filter by file extension, as in the example below.
This is the original thread where the example was posted
Example:
//Assume user types .txt into textbox
string fileExtension = "*" + textbox1.Text;
string[] txtFiles = Directory.GetFiles("Source Path", fileExtension);
foreach (var item in txtFiles)
{
    File.Move(item, Path.Combine("Destination Directory", Path.GetFileName(item)));
}
If you want to know more about Directory.GetFiles method follow this link
And concerning GCP: using Cloud Storage Transfer Service you can move or back up data to a Cloud Storage bucket, either from other cloud storage providers or from your on-premises storage. Storage Transfer Service provides options that make data transfers and synchronization easier. For example, you can:
Schedule one-time or recurring transfer operations.
Delete existing objects in the destination bucket if they do not have a corresponding object in the source.
Delete data source objects after transferring them.
Schedule periodic synchronization from a data source to a data sink, with advanced filters based on file creation dates, file names, and the times of day you prefer to import data.
If you want to know more about GCP Cloud Storage Transfer Service follow this link
If you want to know more about how to create storage buckets follow this link
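Putting the pieces above together for the FLAC scenario, here is a rough, hedged sketch of the upload-then-move loop, assuming the Google.Cloud.Storage.V1 NuGet package, a bucket named "my-flac-bucket", and the folder paths shown (all placeholders); credentials are taken from the environment (GOOGLE_APPLICATION_CREDENTIALS):
using System.IO;
using Google.Cloud.Storage.V1;

class FlacUploader
{
    const string WatchFolder = @"C:\flac\incoming";   // hypothetical source folder
    const string DoneFolder  = @"C:\flac\uploaded";   // hypothetical archive folder
    const string Bucket      = "my-flac-bucket";      // hypothetical bucket name

    static void Main()
    {
        var client = StorageClient.Create();

        foreach (var path in Directory.GetFiles(WatchFolder, "*.flac"))
        {
            using (var stream = File.OpenRead(path))
            {
                client.UploadObject(Bucket, Path.GetFileName(path), "audio/flac", stream);
            }

            // Only move (or delete) the file once the upload call has returned successfully.
            File.Move(path, Path.Combine(DoneFolder, Path.GetFileName(path)));
        }
    }
}
To turn this into a real background service, you could run the loop on a timer inside a Windows Service or a .NET Worker Service, or react to new files with a FileSystemWatcher instead of polling.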

Related

Google Cloud Storage .Net Client Bulk Upload

I am uploading files to Cloud Storage using the .NET client.
At the moment I am uploading files one by one, like:
StorageClient client = StorageClient.Create();
foreach (var file in files)
{
    client.UploadObject(bucketName, uploadLocation, contentType, file);
}
But I couldn't find any way to bulk upload files. Is there any way to upload files in bulk?
You are effectively bulk uploading; you're just uploading each file serially 😃
If you're looking for a method that you give files to and it handles the entire upload, I'm unsure that this exists.
You can run the uploads in parallel using threads or equivalent.
You'll want to ensure that you can resume failed uploads (uploading multiple files increases the likelihood of failure), see Create Object Uploader
You'll need to self-manage resuming on failures.
It's possible that there are libraries that implement this abstraction.
I understand that you are searching for a way to upload a large number of files at once to Google Cloud Storage. However, there is no direct method that handles the entire upload. If you have a large number of files to upload, you can perform a parallel multi-threaded/multi-processing copy. The steps are listed below (a minimal sketch follows the list):
1. Instantiate a StorageClient object.
2. Specify the parallelization options.
3. Get a list of file names from the upload folder and store the names of the files.
4. Get a list of files in the cloud.
5. Use Parallel.ForEach.
6. Call the UploadObject method from the Google Cloud Storage client library.
You can also refer to this article for more details on the above methods.
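A minimal sketch of steps 1-6 above, assuming the Google.Cloud.Storage.V1 package; the bucket name, folder path, and degree of parallelism are placeholders:
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Storage.V1;

class BulkUploader
{
    static void Main()
    {
        var client = StorageClient.Create();                                // 1. storage client
        var options = new ParallelOptions { MaxDegreeOfParallelism = 8 };   // 2. parallelization options
        var files = Directory.GetFiles(@"C:\upload");                       // 3. local file names
        // 4. optionally list existing objects (client.ListObjects("my-bucket")) to skip files already uploaded

        Parallel.ForEach(files, options, path =>                            // 5. parallel foreach
        {
            using (var stream = File.OpenRead(path))
            {
                // 6. upload each file, using the file name as the object name
                client.UploadObject("my-bucket", Path.GetFileName(path), null, stream);
            }
        });
    }
}
StorageClient is documented as thread-safe, so a single instance can be shared across the parallel loop.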

Azure Media Services (v3) blob storage, assets, and locators backup

I'm trying to figure out how to back up videos produced by Azure Media Services.
Where are the assets and streaming locators stored? How do I back them up or recreate them for existing binary files stored in the Azure Media Services blob storage?
Proposed solution:
I've come up with a solution: once the video is processed by the transformation job, the app will create a copy of the container in a separate backup blob storage account.
Since, from my understanding, the data produced by transformation jobs is immutable, I don't have to manage ongoing synchronization.
if (job.State == JobState.Finished)
{
    StreamingLocator locator = await AzureMediaServicesService.CreateStreamingLocatorAsync(client, azureMediaServicesConfig, outputAssetName, locatorName);
    var videoUrls = await AzureMediaServicesService.GetVideoUrlsAsync(client, azureMediaServicesConfig, locator.Name);
    // back up blobs in the created container here
}
Is the binary data stored in blob storage sufficient for restoring the videos successfully? After a restore, will the already existing streaming and download links work properly?
Since I'm passing the asset name as well when creating locators, I reckon I should back up the asset's data too. Can/should I somehow back up assets and locators? Where are they stored? Is there any better way to back up videos?
I was looking for the answers here:
https://learn.microsoft.com/en-us/azure/media-services/latest/streaming-locators-concept
https://learn.microsoft.com/en-us/azure/media-services/latest/stream-files-tutorial-with-api#get-a-streaming-locator
https://learn.microsoft.com/en-us/azure/media-services/latest/limits-quotas-constraints
Part of what you're asking is "What is an asset in Media Services?". The Storage container that is created as part of the encoding process is definitely a good portion of what you need to back up. Technically that is all you need to recreate an asset from the backup Storage account, provided you don't mind recreating the other aspects of the asset.
An asset is/can be several things:
The Storage container and the contents of that container. These would include the MP4 video files, the manifests (.ism and .ismc), and metadata XML files.
The published locator or URL where clients make GET requests to the streaming endpoint.
Metadata. This includes things like the asset name, creation date, description, etc.
If you keep track of the Storage container in your backup and the metadata associated with it, and you have a way of updating your site with a new streaming locator, then all you really need is the Storage container to recreate the asset.
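If you do implement the "back up blobs in the created container" step from the proposed solution yourself, a hedged sketch using the Azure.Storage.Blobs package might look like the following; the connection strings, and the assumption that the backup container keeps the same name, are placeholders:
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static class AssetBackup
{
    public static async Task BackupAssetContainerAsync(
        string sourceConnectionString, string backupConnectionString, string containerName)
    {
        var source = new BlobContainerClient(sourceConnectionString, containerName);
        var backup = new BlobContainerClient(backupConnectionString, containerName);
        await backup.CreateIfNotExistsAsync();

        await foreach (var item in source.GetBlobsAsync())
        {
            var sourceBlob = source.GetBlobClient(item.Name);
            var backupBlob = backup.GetBlobClient(item.Name);

            // Server-side copy. If the source container is private, append a SAS token
            // to sourceBlob.Uri; for large blobs, poll the copy status before relying on it.
            await backupBlob.StartCopyFromUriAsync(sourceBlob.Uri);
        }
    }
}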

efficiently pass files from webserver to file server

I have multiple web servers and one central file server inside my data center,
and all my web servers store the user-uploaded files on the central internal file server.
I would like to know: what is the best way to pass the files from the web servers to the file server in this case?
As suggested, I am adding more details to the question:
The solution I came up with was:
after receiving files from the user at the web server, I should just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirement. The question is why you would want to use the HTTP protocol when you have a much better way to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because that is the only option your browsers give you. But between your servers (I am assuming you are using Windows, as the question is tagged IIS), you should use the SMB protocol to move data. It will be orders of magnitude faster and much more efficient to transfer the same data over SMB vs HTTP.
And for the SMB protocol, you do not have to write any code or complex scripts to do this. As noted in one of the other answers, you can just issue a simple copy command and it will happen for you.
So, just summarizing the options for you (based on my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script which copies the files from C:\temp\UploadedFiles to \\FileServer\Files\UserID\<FILEGUID>\uploaded.file. The same PowerShell script can delete each file once it has been moved to the other server successfully.
E.g. the script can be this simple, and it is easy to set up as a Windows scheduled task:
$Source = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path "$Source\*.*" -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the files once the copy is done :)
2. In the ASP.NET application, you can save the file directly to the network location, i.e. in the SaveAs call you can give the network path itself. You have to make sure this network share is accessible to the IIS worker process and that it has write permission. Also, in my understanding, ASP.NET saves the file to a temporary location first (you do not have control over this if you are using HttpPostedFileBase or FormCollection). More details here.
You can even run this asynchronously so that your requests will not be blocked:
if (FileUpload1.HasFile)
{
    // Call SaveAs to write the file to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use HTTP POST. This is the worst design possible, as you are first going to read the contents and then transfer them chunked to the other server, where you have to set up another web service which receives the file. Then you have to read the file from the request stream and save it again to your location. I am not sure you need to do this.
Let me know if you need more details on any of the listed methods.
Or you just write it to a folder on the webservers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your webservers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile) {
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer, Creating a file on network location in asp.net, for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
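Along the lines of the OWASP guidance, here is a hedged sketch of minimal server-side checks before calling SaveAs; the extension whitelist is a placeholder and this is not a complete defense on its own:
using System;
using System.Collections.Generic;
using System.IO;
using System.Web;

public static class UploadValidation
{
    // Hypothetical whitelist; adjust to the file types your application actually accepts.
    private static readonly HashSet<string> AllowedExtensions =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".jpg", ".png", ".pdf" };

    public static void SaveFileSafely(HttpPostedFileBase inputFile, string saveDirectory)
    {
        // Strip any directory information the client may have sent with the file name.
        var fileName = Path.GetFileName(inputFile.FileName);
        var extension = Path.GetExtension(fileName);

        if (string.IsNullOrEmpty(fileName) || !AllowedExtensions.Contains(extension))
            throw new InvalidOperationException("File type not allowed.");

        // Use a server-generated name to avoid collisions and path-traversal tricks.
        var savePath = Path.Combine(saveDirectory, Guid.NewGuid() + extension);
        inputFile.SaveAs(savePath);
    }
}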
The solution I came up with was: after receiving files from the user at the web server, I should just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be entirely loaded into memory twice (once at the web server and once at the file server).
I would suggest not posting the whole file at once; it then sits fully in memory, which is not needed.
You could post the file in chunks using AJAX. When a chunk arrives at your server, just append it to the file.
With the File Reader API, you could read the file in chunks in JavaScript.
Something like this:
/** upload file in chunks */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var end = Math.min(start + chunkSize, file.size);
        var chunk = file.slice(start, end);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // check whether all chunks have been uploaded, and send the file name
            // either here or in the first/last request
        };
        xhr.open("POST", "/FileUpload", true);
        // note: these requests are asynchronous, so in practice send the chunks
        // sequentially or include an offset so the server can reassemble them in order
        xhr.send(chunk);
        start = end;
    }
}
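On the server side, the receiving action only needs to append each posted chunk to a file. Here is a hedged sketch of an ASP.NET MVC action matching the "/FileUpload" URL used above; the temp directory and the idea of passing a file name on the query string are assumptions:
using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    [HttpPost]
    public ActionResult Index(string fileName)
    {
        // Never trust a client-supplied name directly; strip any path information.
        var safeName = Path.GetFileName(fileName ?? "upload.tmp");
        var path = Path.Combine(@"C:\temp\uploads", safeName); // hypothetical temp directory

        using (var target = new FileStream(path, FileMode.Append, FileAccess.Write))
        {
            Request.InputStream.CopyTo(target); // append this chunk to the file
        }
        return new HttpStatusCodeResult(200);
    }
}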
It can be implemented in different ways. If you are storing files on the file server as files in the file system, and all of your servers are inside the same virtual network,
then it will be better to create a shared folder on your file server; once you receive a file at the web server, just save it directly into this shared folder on the file server.
Here are the instructions on how to create shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles. The path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent, and make sure you use a user with adequate permissions, and voila.

How to upload files to Azure Blob Storage using Azure WebJob? (Using C#, .NET)

I have a very basic understanding of Azure WebJob, that is, it can perform tasks in background. I want to upload files to Azure Blob Storage, specifically using Azure WebJob. I would like to know how to do this, from scratch. Assume that the file to be uploaded is locally available on the system in a certain folder (say C:/Users/Abc/Alpha/Beta/).
How and where do I define the background task that is supposed to be performed?
How do I make sure that whenever a new file is available in the same folder (C:/Users/Abc/Alpha/Beta/), the function is automatically triggered and this new file is also transferred to Azure Blob Storage?
Can I monitor the progress of the transfer for each file, or for all files?
How do I handle connection failures during transfer, and what other errors should I worry about?
How and where do I define the background task that is supposed to be performed?
According to your description, you could create a WebJob console application in Visual Studio and run this console application locally.
For more details, you could refer to this article on how to create a WebJob in Visual Studio.
Note: since you need to watch a local folder, this WebJob runs on your local machine and is not uploaded to the Azure web app.
How do I make sure that whenever a new file is available in the same folder (C:/Users/Abc/Alpha/Beta/), the function is automatically triggered and this new file is also transferred to Azure Blob Storage?
As far as I know, the WebJobs SDK supports a FileTrigger, which monitors a particular directory for file additions/changes and triggers a job function when they occur.
For more details, you could refer to the code sample below:
Program.cs:
static void Main()
{
    var config = new JobHostConfiguration();
    FilesConfiguration filesConfig = new FilesConfiguration();
    // Set the root path of the folder for the functions to watch
    filesConfig.RootPath = @"D:\";
    config.UseFiles(filesConfig);
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
function.cs:
public static void ImportFile(
    [FileTrigger(@"fileupload\{name}", "*.*", WatcherChangeTypes.Created | WatcherChangeTypes.Changed)] Stream file,
    FileSystemEventArgs fileTrigger,
    [Blob("textblobs/{name}", FileAccess.Write)] Stream blobOutput,
    TextWriter log)
{
    log.WriteLine(string.Format("Processed input file '{0}'!", fileTrigger.Name));
    file.CopyTo(blobOutput);
    log.WriteLine("Upload File Complete");
}
Can I monitor the progress of the transfer for each file, or for all files?
As far as I know, there's a [BlobInput] attribute that lets you specify a container to listen on, and it includes an efficient blob listener that will dispatch to the method when new blobs are detected. For more details, you could refer to this article.
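For reference, in current WebJobs SDK versions that blob listener is exposed through the [BlobTrigger] attribute rather than [BlobInput]. A minimal sketch, reusing the "textblobs" container name from the sample above:
public static void OnBlobUploaded(
    [BlobTrigger("textblobs/{name}")] Stream blob,
    string name,
    TextWriter log)
{
    // Runs whenever a new or updated blob is detected in the container.
    log.WriteLine("New blob detected: {0}", name);
}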
How do I handle connection failures during transfer, and what other errors should I worry about?
You could use try/catch to catch errors. If an error happens, you could send the details to a queue or write a txt file to blob storage; then you can act on the queue message or the blob txt file later. A sketch follows below.
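As a hedged example of that idea, here is a variation of the ImportFile sample above that catches failures and pushes the details to a queue; the "upload-errors" queue name is an assumption, and [Queue] is the WebJobs SDK output binding:
public static void ImportFileWithErrorHandling(
    [FileTrigger(@"fileupload\{name}", "*.*", WatcherChangeTypes.Created | WatcherChangeTypes.Changed)] Stream file,
    FileSystemEventArgs fileTrigger,
    [Blob("textblobs/{name}", FileAccess.Write)] Stream blobOutput,
    [Queue("upload-errors")] out string errorMessage,
    TextWriter log)
{
    errorMessage = null;
    try
    {
        file.CopyTo(blobOutput);
        log.WriteLine("Uploaded '{0}'", fileTrigger.Name);
    }
    catch (Exception ex)
    {
        // Record the failure so it can be inspected or retried later.
        errorMessage = string.Format("{0}: {1}", fileTrigger.Name, ex.Message);
        log.WriteLine(errorMessage);
    }
}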

Minimizing code impact of migrating to Azure Blob storage

I'm attempting to migrate a fairly complex application to Windows Azure. In both the worker role and the web role there are many instances where the application saves files to a local file system.
Here's an example:
string thumbnailFileName = System.IO.Path.GetDirectoryName(fileName) + "\\" + "bthumb_" + System.IO.Path.GetFileName(fileName);
thumbnail.Save(thumbnailFileName);
and another example:
using (System.IO.StreamWriter file = System.IO.File.AppendText(GetCurrentLogFilePath()))
{
    string logEntry = String.Format("\r\n{0} - {1}: {2}", DateTime.Now.ToString("yyyy.MM.dd@HH.mm.ss"), type.ToString(), message);
    file.Write(logEntry);
    file.Close();
}
In these examples we are saving images and log files to file locations specified in the app.config. Here's an example:
<add key="ImageFileDirectory" value="C:\temp\foo\root\auth\inventorypictures"/>
I'd like to make as few code changes as possible to support Azure blob storage in case we ever decide to move back to a more traditional hosting environment and more generally to reduce the potential for creating unintended problems.
Based on this post I've decided that Azure Drive is not the best way to go.
Can someone guide me in the right direction (ideally with an example)? The best solution in my mind would be one that only requires a change to my config file. But I'm guessing that is not realistic.
Indeed, you want to use Azure Blob storage to save your files.
As for your coding question, consider creating an interface, call it IFileStore:
public interface IFileStore
{
    void Save(string filePath, byte[] contents);
    byte[] Read(string filePath);
}
Then you create 2 provider classes, one for the file system, and one for Azure Blob storage.
The file system provider can implement these methods like this:
public void Save(string filePath, byte[] content)
{
    File.WriteAllBytes(filePath, content);
}

public byte[] Read(string filePath)
{
    return File.ReadAllBytes(filePath);
}
As for the Azure Blob provider, you will have to derive the storage path based on the filePath passed in to you.
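For the blob provider, a hedged sketch along these lines might work, using the current Azure.Storage.Blobs package (the SDK of the question's era was Microsoft.WindowsAzure.StorageClient); the container name and the naive path-to-blob-name mapping are assumptions:
using System.IO;
using Azure.Storage.Blobs;

public class BlobFileStore : IFileStore
{
    private readonly BlobContainerClient _container;

    public BlobFileStore(string connectionString, string containerName)
    {
        _container = new BlobContainerClient(connectionString, containerName);
        _container.CreateIfNotExists();
    }

    public void Save(string filePath, byte[] contents)
    {
        using (var stream = new MemoryStream(contents))
        {
            _container.GetBlobClient(ToBlobName(filePath)).Upload(stream, overwrite: true);
        }
    }

    public byte[] Read(string filePath)
    {
        using (var stream = new MemoryStream())
        {
            _container.GetBlobClient(ToBlobName(filePath)).DownloadTo(stream);
            return stream.ToArray();
        }
    }

    // Naive mapping from the configured local path to a blob name; a real implementation
    // would strip the configured root directory rather than just the separators.
    private static string ToBlobName(string filePath)
    {
        return filePath.Replace('\\', '/').TrimStart('/');
    }
}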
Generally
For your storage, I'd recommend using Blob and Table storage - this allows multiple instances to access the storage simultaneously. If you want to help make the code portable, then I'd recommend abstracting your code behind interfaces/APIs (see @Philpp's answer).
E.g. for your log file example, Table storage might be the best thing to use; for your image files, Blob storage might be the best thing to use.
If you really want to use AzureDrive
I'd only recommend using AzureDrive if you are only ever going to deploy a single instance of your role - otherwise you will end up fighting problems with sharing files across multiple instances (and remember that only 1 instance can mount with write access at any one time)
If you are operating with a single instance, and if you are only storing temp and log files, then you could also look at using local storage instead of azure drive - it's much simpler and cheaper to use than blob storage. e.g. One possible specialist alternative for your log file example is that you could consider using local storage alongside Azure Diagnostics-controlled uploading of that local storage to Blob storage.
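If you do go down the local storage route, a minimal sketch might look like the following; it assumes a local storage resource named "LogStorage" (a hypothetical name) has been declared in the role's ServiceDefinition.csdef:
using System;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class LocalLogStore
{
    public static void Append(string message)
    {
        // "LogStorage" is a hypothetical local storage resource declared in ServiceDefinition.csdef.
        LocalResource logStorage = RoleEnvironment.GetLocalResource("LogStorage");
        string logPath = Path.Combine(logStorage.RootPath, "app.log");
        File.AppendAllText(logPath, message + Environment.NewLine);
    }
}
Azure Diagnostics can then be configured to ship that directory to Blob storage on a schedule.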
