Upload file into Google Drive folder using Xamarin.Android - C#

I want to create a file inside a specific folder in Google Drive (not the default location) using Xamarin.Android.
I'm using the code below:
MetadataChangeSet changeSetfile = new MetadataChangeSet.Builder()
    .SetTitle("Test.jpg")
    .SetMimeType("image/jpeg")
    .Build();
DriveClass.DriveApi
    .GetRootFolder(_googleApiClient)
    .CreateFile(_googleApiClient, changeSetfile, contentResults.DriveContents);

Implement GoogleApiClient.IConnectionCallbacks
Obtain a GoogleApiClient with DriveClass.API and DriveClass.ScopeFile
GoogleApiClient Example:
if (_googleApiClient == null) // _googleApiClient is a class-level variable
{
    _googleApiClient = new GoogleApiClient.Builder(this)
        .AddApi(DriveClass.API)
        .AddScope(DriveClass.ScopeFile)
        .AddConnectionCallbacks(this)
        .AddOnConnectionFailedListener(onConnectionFailed)
        .Build();
}
if (!_googleApiClient.IsConnected)
    _googleApiClient.Connect();
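For completeness, here is a minimal sketch of the connection callbacks referenced above, assuming the activity itself implements GoogleApiClient.IConnectionCallbacks (as AddConnectionCallbacks(this) implies); method signatures follow the Xamarin Google Play Services binding and the bodies are placeholders:

// requires Android.OS, Android.Gms.Common, Android.Util
public void OnConnected(Bundle connectionHint)
{
    // Connected; it is now safe to issue Drive API calls.
}

public void OnConnectionSuspended(int cause)
{
    // Connection lost; GoogleApiClient attempts to reconnect automatically.
}

void onConnectionFailed(ConnectionResult result)
{
    // Matches the method group passed to AddOnConnectionFailedListener above.
    Log.Error(TAG, "Connection failed: " + result.ErrorCode);
}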
Once connected, query for the folder, create it if needed, and then write a file to it.
Folder and File Example:
var folderName = "StackOverflow";
using (var driveId = DriveClass.DriveApi.GetRootFolder(_googleApiClient))
using (var query = new QueryClass.Builder()
    .AddFilter(Filters.And(
        Filters.Eq(SearchableField.Title, folderName),
        Filters.Eq(SearchableField.Trashed, false)))
    .Build())
using (var metaBufferResult = await driveId.QueryChildrenAsync(_googleApiClient, query))
{
    if (metaBufferResult.Status.IsSuccess)
    {
        DriveId folderId = null;
        foreach (var metaData in metaBufferResult.MetadataBuffer)
        {
            if (metaData.IsFolder && metaData.Title == folderName)
            {
                folderId = metaData.DriveId;
                break;
            }
        }
        IDriveFolder driveFolder = null;
        switch (folderId)
        {
            case null: // if folder not found, create it and fall through to default
                using (var folderChangeSet = new MetadataChangeSet.Builder().SetTitle(folderName).Build())
                using (var folderResult = await driveId.CreateFolderAsync(_googleApiClient, folderChangeSet))
                {
                    if (!folderResult.Status.IsSuccess)
                    {
                        Log.Error(TAG, folderResult.Status.StatusMessage);
                        break;
                    }
                    driveFolder = folderResult.DriveFolder;
                }
                goto default;
            default:
                driveFolder = driveFolder ?? folderId.AsDriveFolder();
                // create your file in the IDriveFolder obtained
                using (var contentResults = await DriveClass.DriveApi.NewDriveContentsAsync(_googleApiClient))
                {
                    if (contentResults.Status.IsSuccess)
                    {
                        using (var writer = new OutputStreamWriter(contentResults.DriveContents.OutputStream))
                        {
                            writer.Write("StackOverflow Rocks");
                            using (var changeSet = new MetadataChangeSet.Builder()
                                .SetTitle("StackOverflow Rocks")
                                .SetStarred(true)
                                .SetMimeType("text/plain")
                                .Build())
                            using (var driveFileResult = await driveFolder.CreateFileAsync(_googleApiClient, changeSet, contentResults.DriveContents))
                            {
                                if (driveFileResult.Status.IsSuccess)
                                    Log.Debug(TAG, "File created, open https://drive.google.com to review it");
                                else
                                    Log.Error(TAG, driveFileResult.Status.StatusMessage);
                            }
                        }
                    }
                }
                driveFolder.Dispose();
                break;
        }
        folderId?.Dispose();
    }
    else
    {
        Log.Error(TAG, metaBufferResult.Status.StatusMessage);
    }
}
Notes:
Do this on a background thread
Drive allows multiple files/folders with the same name (Title)
Query for existing files if you want to replace one (see the sketch below)
Query for existing folders unless you really want multiple folders with the same Title
Folders and files in the Trash are returned by queries unless explicitly excluded
Make use of using blocks and Dispose to avoid leaks
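For the file-replacement note above, a minimal sketch of such a lookup, reusing the same binding and the driveFolder variable from the answer code ("Test.jpg" is just the example title from the question):

// Look for an existing, non-trashed file with the same title inside driveFolder.
using (var fileQuery = new QueryClass.Builder()
    .AddFilter(Filters.And(
        Filters.Eq(SearchableField.Title, "Test.jpg"),
        Filters.Eq(SearchableField.Trashed, false)))
    .Build())
using (var result = await driveFolder.QueryChildrenAsync(_googleApiClient, fileQuery))
{
    if (result.Status.IsSuccess && result.MetadataBuffer.Count > 0)
    {
        // A file with this title already exists; open and rewrite it
        // instead of creating a duplicate.
    }
}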

Related

Unable to update zip file in Azure File Share/Blob

I am using an Azure File Share. I want to create a zip file only once, but update it multiple times (upload more files after it has been created).
Is it possible to create the .zip file only once and add more files to it later without overwriting the existing files in the zip?
When I try to add more files to the .zip, it overwrites the existing files in the zip with the new file.
private static async Task OpenZipFile()
{
    try
    {
        using (var zipFileStream = await OpenZipFileStream())
        using (var zipFileOutputStream = CreateZipOutputStream(zipFileStream))
        {
            var level = 0;
            zipFileOutputStream.SetLevel(level);
            BlobClient blob = new BlobClient(
                new Uri(String.Format("https://{0}.blob.core.windows.net/{1}", "rtsatestdata", "comm/2/10029.txt")),
                _currentTenantTokenCredential);
            var zipEntry = new ZipEntry("newtestdata")
            {
                Size = 1170
            };
            zipFileOutputStream.PutNextEntry(zipEntry);
            await blob.DownloadToAsync(zipFileOutputStream); // await instead of .Wait() to avoid blocking in an async method
            zipFileOutputStream.CloseEntry();
        }
    }
    catch (TaskCanceledException)
    {
        throw;
    }
}
private static async Task<Stream> OpenZipFileStream()
{
    BlobContainerClient mainContainer = _blobServiceClient.GetBlobContainerClient("comm");
    var blobItems = mainContainer.GetBlobs(BlobTraits.Metadata, BlobStates.None);
    foreach (var item in blobItems)
    {
        if (item.Name == "testdata.zip")
        {
            BlobClient blob = new BlobClient(
                new Uri(String.Format("https://{0}.blob.core.windows.net/{1}", "rtsatestdata", "comm/testdata.zip")),
                _currentTenantTokenCredential);
            return await blob.OpenWriteAsync(true, options: new BlobOpenWriteOptions
            {
                HttpHeaders = new BlobHttpHeaders
                {
                    ContentType = "application/zip"
                }
            });
        }
    }
    // Assumed fallback: every code path must return or throw.
    throw new FileNotFoundException("testdata.zip not found in container 'comm'");
}
private static ZipOutputStream CreateZipOutputStream(Stream zipFileStream)
{
    return new ZipOutputStream(zipFileStream)
    {
        IsStreamOwner = false,
    };
}
This is not possible in Azure Storage; a blob cannot be modified in place. The workaround is to download the zip, unzip it, add the new files, re-zip it, and re-upload it to storage, as sketched below.
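A minimal sketch of that workaround, assuming Azure.Storage.Blobs and System.IO.Compression's ZipArchive in Update mode (note this swaps out SharpZipLib's ZipOutputStream, which can only write a whole new archive):

using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

static async Task AddFileToZipBlobAsync(BlobClient zipBlob, string entryName, Stream newContent)
{
    using (var ms = new MemoryStream())
    {
        // 1. Download the existing archive into memory.
        await zipBlob.DownloadToAsync(ms);

        // 2. Re-open it in Update mode and append the new entry.
        using (var archive = new ZipArchive(ms, ZipArchiveMode.Update, leaveOpen: true))
        {
            var entry = archive.CreateEntry(entryName);
            using (var entryStream = entry.Open())
                await newContent.CopyToAsync(entryStream);
        }

        // 3. Upload the rewritten archive, overwriting the old blob.
        ms.Position = 0;
        await zipBlob.UploadAsync(ms, overwrite: true);
    }
}

This keeps the whole archive in memory, so it is only practical for zips that fit comfortably in RAM.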

How to create zip file in memory?

I have to create a zip file from a set of URLs, and it should have a proper folder structure.
So I tried the following:
public async Task<byte[]> CreateZip(Guid ownerId)
{
    try
    {
        string startPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "zipFolder"); // base folder
        if (Directory.Exists(startPath))
        {
            DeleteAllFiles(startPath);
            Directory.Delete(startPath);
        }
        Directory.CreateDirectory(startPath);
        string zipPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"{ownerId.ToString()}"); // folder based on ownerId
        if (Directory.Exists(zipPath))
        {
            DeleteAllFiles(zipPath);
            Directory.Delete(zipPath);
        }
        Directory.CreateDirectory(zipPath);
        var attachemnts = await ReadByOwnerId(ownerId);
        attachemnts.Data.ForEach(i =>
        {
            var fileLocalPath = $"{startPath}\\{i.Category}";
            if (!Directory.Exists(fileLocalPath))
            {
                Directory.CreateDirectory(fileLocalPath);
            }
            using (var client = new WebClient())
            {
                client.DownloadFile(i.Url, $"{fileLocalPath}//{i.Flags ?? ""}_{i.FileName}");
            }
        });
        var zipFilename = $"{zipPath}//result.zip";
        if (File.Exists(zipFilename))
        {
            File.Delete(zipFilename);
        }
        ZipFile.CreateFromDirectory(startPath, zipFilename, CompressionLevel.Fastest, true);
        var result = System.IO.File.ReadAllBytes(zipFilename);
        return result;
    }
    catch (Exception ex)
    {
        var a = ex;
        return null;
    }
}
Currently I'm writing all the files into my base directory (maybe not a good idea), and I have to manually delete all the folders and files to avoid exceptions/unwanted files. Can everything be written in memory?
What changes are required to write all the files and the folder structure in memory?
No, you can't. Not with the built-in .NET APIs, anyway.
As per my comment, I would recommend storing the files in a custom location based on a Guid or similar. E.g.:
"/xxxx-xxxx-xxxx-xxxx/Folder-To-Zip/....".
This ensures you can handle multiple requests for the same files or for similar file/folder names.
Then you just have to clean up and delete the folder afterwards so you don't run out of space.
Hopefully the code below does the job.
public async Task<byte[]> CreateZip(Guid ownerId)
{
    string startPath = Path.Combine(Path.GetTempPath(), $"{Guid.NewGuid()}_zipFolder"); // folder to add
    try
    {
        Directory.CreateDirectory(startPath);
        var attachemnts = await ReadByOwnerId(ownerId);
        attachemnts.Data = filterDuplicateAttachments(attachemnts.Data);
        // filtering youtube urls
        attachemnts.Data = attachemnts.Data.Where(i => !i.Flags.Equals("YoutubeUrl", StringComparison.OrdinalIgnoreCase)).ToList();
        attachemnts.Data.ForEach(i =>
        {
            var fileLocalPath = $"{startPath}\\{i.Category}";
            if (!Directory.Exists(fileLocalPath))
            {
                Directory.CreateDirectory(fileLocalPath);
            }
            using (var client = new WebClient())
            {
                client.DownloadFile(i.Url, $"{fileLocalPath}//{i.Flags ?? ""}_{i.FileName}");
            }
        });
        using (var ms = new MemoryStream())
        {
            using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, true))
            {
                var di = new DirectoryInfo(startPath);
                var allFiles = di.GetFiles("*", SearchOption.AllDirectories); // "*" matches all files recursively
                foreach (var attachment in allFiles)
                {
                    var type = attachemnts.Data
                        .Where(i => $"{i.Flags ?? ""}_{i.FileName}".Equals(attachment.Name, StringComparison.OrdinalIgnoreCase))
                        .FirstOrDefault();
                    var entry = zipArchive.CreateEntry($"{type.Category}/{attachment.Name}", CompressionLevel.Fastest);
                    using (var file = File.OpenRead(attachment.FullName)) // dispose the source stream
                    using (var entryStream = entry.Open())
                    {
                        file.CopyTo(entryStream);
                    }
                }
            }
            var result = ms.ToArray();
            return result;
        }
    }
    catch (Exception ex)
    {
        var a = ex;
        return null;
    }
    finally
    {
        // Clean up the temporary folder, as recommended above.
        if (Directory.Exists(startPath))
            Directory.Delete(startPath, true);
    }
}
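To answer the "everything in memory" part directly: you can also skip the temp folder entirely by streaming each download straight into its zip entry. A minimal sketch under the same assumptions about the attachment shape (Url, Category, Flags, FileName; the Attachment type name here is hypothetical), using HttpClient instead of the obsolete WebClient:

// requires System.Net.Http, System.IO, System.IO.Compression, System.Collections.Generic
public async Task<byte[]> CreateZipInMemory(IEnumerable<Attachment> attachments)
{
    using (var http = new HttpClient())
    using (var ms = new MemoryStream())
    {
        using (var zipArchive = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var a in attachments)
            {
                // The folder structure lives in the entry name; no directories on disk.
                var entry = zipArchive.CreateEntry($"{a.Category}/{a.Flags ?? ""}_{a.FileName}", CompressionLevel.Fastest);
                using (var download = await http.GetStreamAsync(a.Url))
                using (var entryStream = entry.Open())
                {
                    await download.CopyToAsync(entryStream);
                }
            }
        }
        return ms.ToArray();
    }
}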

Read Text file without copying to hard disk

I'm using ASP.NET Core 3.0, and I find myself in a situation where the client will pass text file(s) to my API. The API will then parse the text files into a data model using a function I have created called ParseDataToModel(), and then store that data model in a database using Entity Framework. Since my code parses the files into a data model, I really don't need to copy them to the hard disk if it isn't necessary. I don't have a ton of knowledge when it comes to streams, and I've googled quite a bit, but I was wondering whether there is a way to retrieve the string data of the uploaded files without actually copying them to the hard drive? It seems like a needless extra step. Below is my code for the file upload and insertion into the database:
[HttpPost("upload"), DisableRequestSizeLimit]
public IActionResult Upload()
{
    var filePaths = new List<string>();
    foreach (var formFile in Request.Form.Files)
    {
        if (formFile.Length > 0)
        {
            var filePath = Path.GetTempFileName();
            filePaths.Add(filePath);
            using (var stream = new FileStream(filePath, FileMode.Create))
            {
                formFile.CopyTo(stream);
            }
        }
    }
    BaiFiles lastFile = null;
    foreach (string s in filePaths)
    {
        string contents = System.IO.File.ReadAllText(s);
        BaiFiles fileToCreate = ParseFileToModel(contents);
        if (fileToCreate == null)
            return BadRequest(ModelState);
        var file = _fileRepository.GetFiles().Where(t => t.FileId == fileToCreate.FileId).FirstOrDefault();
        if (file != null)
        {
            ModelState.AddModelError("", $"File with id {fileToCreate.FileId} already exists");
            return StatusCode(422, ModelState);
        }
        if (!ModelState.IsValid)
            return BadRequest();
        if (!_fileRepository.CreateFile(fileToCreate))
        {
            ModelState.AddModelError("", $"Something went wrong saving file with id {fileToCreate.FileId}");
            return StatusCode(500, ModelState);
        }
        lastFile = fileToCreate;
    }
    return CreatedAtRoute("GetFile", new { fileId = lastFile.FileId }, lastFile);
}
It would be nice to just hold all of the data in memory instead of copying it to the hard drive, just to turn around and open it again to read the text. I apologize if this isn't possible, or if this question has been asked before; I'm sure it has, and I just wasn't googling the correct keywords. Otherwise, I could be wrong and it is already doing exactly what I want, but System.IO.File.ReadAllText() makes me feel it's being copied to a temp directory somewhere.
After using John's answer below, here is the revised code for anyone interested:
[HttpPost("upload"), DisableRequestSizeLimit]
public IActionResult Upload()
{
    BaiFiles lastFile = null;
    foreach (var formFile in Request.Form.Files)
    {
        if (formFile.Length > 0)
        {
            using (var stream = formFile.OpenReadStream())
            using (var sr = new StreamReader(stream))
            {
                string contents = sr.ReadToEnd();
                BaiFiles fileToCreate = ParseFileToModel(contents);
                if (fileToCreate == null)
                    return BadRequest(ModelState);
                var file = _fileRepository.GetFiles().Where(t => t.FileId == fileToCreate.FileId).FirstOrDefault();
                if (file != null)
                {
                    ModelState.AddModelError("", $"File with id {fileToCreate.FileId} already exists");
                    return StatusCode(422, ModelState);
                }
                if (!ModelState.IsValid)
                    return BadRequest();
                if (!_fileRepository.CreateFile(fileToCreate))
                {
                    ModelState.AddModelError("", $"Something went wrong saving file with id {fileToCreate.FileId}");
                    return StatusCode(500, ModelState);
                }
                lastFile = fileToCreate;
            }
        }
    }
    if (lastFile == null)
        return NoContent();
    else
        return CreatedAtRoute("GetFile", new { fileId = lastFile.FileId }, lastFile);
}
System.IO.File.ReadAllText(filePath) is a convenience method. It essentially does this:
string text = null;
using (var stream = File.OpenRead(filePath))
using (var reader = new StreamReader(stream))
{
    text = reader.ReadToEnd();
}
IFormFile provides an OpenReadStream method, so you can simply use it in place of the FileStream in the above:
string text = null;
using (var stream = formFile.OpenReadStream())
using (var reader = new StreamReader(stream))
{
    text = reader.ReadToEnd();
}
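If the action is made async (public async Task<IActionResult> Upload()), the same read can avoid blocking a request thread; a small variation on the above:

string text = null;
using (var stream = formFile.OpenReadStream())
using (var reader = new StreamReader(stream))
{
    text = await reader.ReadToEndAsync();
}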

Threading and SqlFileStream. The process cannot access the file specified because it has been opened in another transaction

I am extracting the content of the files in a SQL Server FileTable. The following code works if I do not use Parallel.
I get the following exception when reading the SQL file streams simultaneously (in Parallel):
The process cannot access the file specified because it has been opened in another transaction.
TL;DR:
When reading a file from FileTable (using GET_FILESTREAM_TRANSACTION_CONTEXT) in a Parallel.ForEach I get the above exception.
Sample Code for you to try out:
https://gist.github.com/NerdPad/6d9b399f2f5f5e5c6519
Longer Version:
Fetch Attachments, and extract content:
var documents = new List<ExtractedContent>();
using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    var attachments = await dao.GetAttachmentsAsync();
    // Extract the content simultaneously
    // documents = attachments.ToDbDocuments().ToList(); // This works
    Parallel.ForEach(attachments, a => documents.Add(a.ToDbDocument())); // this doesn't
    ts.Complete();
}
DAO Read File Table:
public async Task<IEnumerable<SearchAttachment>> GetAttachmentsAsync()
{
    try
    {
        var commandStr = "....";
        IEnumerable<SearchAttachment> attachments = null;
        using (var connection = new SqlConnection(this.DatabaseContext.Database.Connection.ConnectionString))
        using (var command = new SqlCommand(commandStr, connection))
        {
            connection.Open();
            using (var reader = await command.ExecuteReaderAsync())
            {
                attachments = reader.ToSearchAttachments().ToList();
            }
        }
        return attachments;
    }
    catch (System.Exception)
    {
        throw;
    }
}
Create objects for each file:
The object contains a reference to the GET_FILESTREAM_TRANSACTION_CONTEXT
public static IEnumerable<SearchAttachment> ToSearchAttachments(this SqlDataReader reader)
{
    if (!reader.HasRows)
    {
        yield break;
    }
    // Convert each row to SearchAttachment
    while (reader.Read())
    {
        yield return new SearchAttachment
        {
            ...
            ...
            UNCPath = reader.To<string>(Constants.UNCPath),
            ContentStream = reader.To<byte[]>(Constants.Stream) // GET_FILESTREAM_TRANSACTION_CONTEXT()
            ...
            ...
        };
    }
}
Read the file using SqlFileStream:
Exception is thrown here
public static ExtractedContent ToDbDocument(this SearchAttachment attachment)
{
    // Read the file
    // Exception is thrown here
    using (var stream = new SqlFileStream(attachment.UNCPath, attachment.ContentStream, FileAccess.Read, FileOptions.SequentialScan, 4096))
    {
        ...
        // extract content from the file
    }
    ....
}
Update 1:
According to this article, it seems like it could be an isolation-level issue. Has anyone faced a similar issue?
The transaction does not flow into the Parallel.ForEach; you must manually bring the transaction in.
// Switched to a thread-safe collection.
var documents = new ConcurrentQueue<ExtractedContent>();
using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    var attachments = await dao.GetAttachmentsAsync();
    // Grab a reference to the current transaction.
    var transaction = Transaction.Current;
    Parallel.ForEach(attachments, a =>
    {
        // Spawn a dependent clone of the transaction
        using (var depTs = transaction.DependentClone(DependentCloneOption.RollbackIfNotComplete))
        {
            documents.Enqueue(a.ToDbDocument());
            depTs.Complete();
        }
    });
    ts.Complete();
}
I also switched from List<ExtractedContent> to ConcurrentQueue<ExtractedContent>, because you are not allowed to call .Add() on a List<T> from multiple threads at the same time.
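If you would rather keep the List, a lock around the Add works too; a minimal sketch (the dependent-transaction plumbing from the answer still belongs inside the loop body and is omitted here for brevity):

var documents = new List<ExtractedContent>();
var gate = new object();
Parallel.ForEach(attachments, a =>
{
    var doc = a.ToDbDocument(); // do the expensive work outside the lock
    lock (gate)
    {
        documents.Add(doc);
    }
});

ConcurrentQueue avoids the lock contention entirely, so it remains the better choice here.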

How to avoid using the foreach loop twice on the same file list for different reasons

Here is what I am trying to do:
I have remote servers (e.g. svr01, svr02, svr03). Using GetFileList, I read a directory, get all the files, match them against the file name pattern I have, and copy the matches to my local drive.
If any files matched, I also add an entry to an XML file.
I was trying to do it like below:
class Program
{
    static void Main(string[] args)
    {
        var getfiles = new fileshare.Program();
        string realname = "*main*";
        string Location = "SVR01";
        bool anymatch = false;
        foreach (var file in getfiles.GetFileList(realname, Location))
        {
            anymatch = true;
        }
        if (anymatch == true)
        {
            baseMeta();
        }
        foreach (var file in getfiles.GetFileList(realname, Location))
        {
            getfiles.copytolocal(file.FullName);
        }
    }

    private FileInfo[] GetFileList(string pattern, string Location)
    {
        try
        {
            switch (Location)
            {
                case "SVR01":
                {
                    var di = new DirectoryInfo(@"\\SVR01\Dev");
                    return di.GetFiles(pattern);
                }
                case "SVR02":
                {
                    var di = new DirectoryInfo(@"\\SVR02\Dev");
                    return di.GetFiles(pattern);
                }
                case "SVR03":
                {
                    var di = new DirectoryInfo(@"\\SVR03\Prod");
                    return di.GetFiles(pattern);
                }
                default: throw new ArgumentOutOfRangeException();
            }
        }
        catch (Exception ex)
        {
            Console.Write(ex.ToString());
            return null;
        }
    }

    private void copytolocal(string filename)
    {
        string nameonly = Path.GetFileName(filename);
        File.Copy(filename, Path.Combine(@"c:\", nameonly), true);
    }

    private void baseMeta()
    {
        XmlWriter xmlWrite = XmlWriter.Create(@"c:\basexml");
        xmlWrite.WriteStartElement("job");
        xmlWrite.WriteElementString("Name", "test");
        xmlWrite.WriteElementString("time", DateTime.Now.ToString()); // assumed: a concrete timestamp value is intended here
        xmlWrite.Close();
    }
}
But this piece of code worries me, because I am doing the same process twice. Can anyone please guide me on how to avoid this?
foreach (var file in getfiles.GetFileList(realname, Location))
{
    anymatch = true;
}
if (anymatch == true)
{
    baseMeta();
}
foreach (var file in getfiles.GetFileList(realname, Location))
{
    getfiles.copytolocal(file.FullName);
}
I am also trying to work out how, if any file matches, I can quit the first foreach loop, generate the baseMeta(), and then go on to the next foreach loop to do the rest of the process.
Using LINQ you should be able to easily change your posted code into:
var getfiles = new fileshare.Program();
string realname = "*main*";
string Location = "SVR01";
var fileList = getfiles.GetFileList(realname, Location);
var anymatch = fileList.Any();
if (anymatch) // Or possibly `if (fileList.Any())` if anymatch isn't
              // really used anywhere else
    baseMeta();
foreach (var file in fileList) // reuse fileList rather than calling GetFileList a second time
    getfiles.copytolocal(file.FullName);
You'll get the greatest benefit by replacing your GetFileList method with:
private IEnumerable<FileInfo> GetFileList(string pattern, string Location)
{
    string directory = string.Empty;
    switch (Location)
    {
        case "SVR01":
            directory = @"\\SVR01\Dev";
            break;
        case "SVR02":
            directory = @"\\SVR02\Dev";
            break;
        case "SVR03":
            directory = @"\\SVR03\Prod";
            break;
        default:
            throw new ArgumentOutOfRangeException();
    }
    DirectoryInfo di = null;
    try
    {
        di = new DirectoryInfo(directory);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        yield break;
    }
    foreach (var fi in di.EnumerateFiles(pattern))
        yield return fi;
}
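One caveat with the iterator version: it is lazily evaluated, so every foreach over its result enumerates the network share again. If you combine it with the snippet above, materialize the result once:

var fileList = getfiles.GetFileList(realname, Location).ToList(); // single directory enumeration
if (fileList.Count > 0)
    baseMeta();
foreach (var file in fileList)
    getfiles.copytolocal(file.FullName);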
Use this
var files = getfiles.GetFileList(realname, Location);
if (files != null && files.Length > 0) // GetFileList returns null on error, so guard against that too
{
    baseMeta();
    foreach (var file in files)
    {
        getfiles.copytolocal(file.FullName);
    }
}
Try this: create a method to check for the file's existence and do everything in a single loop.
Your statement is not very clear about when you will copy and when you won't; apply whatever condition decides whether to copy the file or create the XML entry.
What is your anymatch for? If you just want to check whether there is any file, use:
var fileList = getfiles.GetFileList(realname, Location);
if (fileList.Count() > 0)
{
    baseMeta();
}
foreach (var file in fileList)
{
    // copy the file if a match does not exist..
    getfiles.copytolocal(file.FullName);
}
But a foreach only loops through the collection if it has any items, so you need not care about the count of the files. And if you want to create an entry on every copy, as per your code, there is no need to check anymatch at all; this will create an entry for every file copied:
foreach (var file in getfiles.GetFileList(realname, Location))
{
    baseMeta();
    // copy the file
    getfiles.copytolocal(file.FullName);
}
