Load large file into SQL Server - C#

I am trying to upload large files (regardless of file type) into a SQL Server database.
But when I upload a large one (13.2 MB or more), I get the following error message:
System.IO.IOException: Supplied file with size 13897053 bytes exceeds the maximum of 512000 bytes.
When the user uploads the files, I call the following method to save them into an IList<IBrowserFile>.
private IList<IBrowserFile> Files = new List<IBrowserFile>();
private int MaxAllowedFiles = int.MaxValue;
private long MaxSizeFiles = long.MaxValue;

private async Task OnInputFileChanged(InputFileChangeEventArgs e)
{
    ClearDragClass();
    foreach (var file in e.GetMultipleFiles(MaxAllowedFiles))
    {
        using var f = file.OpenReadStream(MaxSizeFiles);
        using var fileContent = new StreamContent(f);
        fileContent.Headers.ContentType = new MediaTypeHeaderValue(file.ContentType);
        Files.Add(file);
    }
}
Once the user has uploaded all the files, they click a button that calls the following method to save them to the database.
private async Task Upload()
{
    List<string> notUploadedFiles = new();
    foreach (var file in Files)
    {
        using Stream s = file.OpenReadStream();
        using MemoryStream ms = new MemoryStream();
        await s.CopyToAsync(ms);
        byte[] fileBytes = ms.ToArray();
        string extn = new FileInfo(file.Name).Extension;
        var addArchivoTarea = new AddArchivoTareaRequestDTO(Tarea.Id, file.Name, fileBytes, extn);
        var successResponse = await HttpTareas.AddArchivoToTareaAsync(addArchivoTarea);
        if (!successResponse)
        {
            notUploadedFiles.Add(file.Name);
        }
    }
    if (notUploadedFiles.Count > 0)
    {
        Snackbar.Configuration.SnackbarVariant = Variant.Filled;
        Snackbar.Add("The following files could not be uploaded", Severity.Info);
        Snackbar.Configuration.SnackbarVariant = Variant.Outlined;
        foreach (var name in notUploadedFiles)
        {
            Snackbar.Add(name, Severity.Error);
        }
        MudDialog.Close(DialogResult.Ok(true));
        return; // don't fall through to the success message
    }
    Snackbar.Add("All files have been successfully uploaded", Severity.Success);
    MudDialog.Close(DialogResult.Ok(true));
}
I don't know what I should add or modify to be able to upload large files.
Any suggestions?

According to the documentation:
OpenReadStream enforces a maximum size in bytes of its Stream. Reading one file or multiple files larger than 500 KB results in an exception. This limit prevents developers from accidentally reading large files into memory. The maxAllowedSize parameter of OpenReadStream can be used to specify a larger size if required, up to a maximum supported size of 2 GB.
so you can have:
Stream s = file.OpenReadStream(maxAllowedSize: [the value you prefer]);
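For example, a minimal sketch of the Upload loop's read call with an explicit limit (the 50 MB constant is an illustrative choice, not a required value; the documented ceiling is 2 GB):

private const long MaxFileSize = 50L * 1024 * 1024; // assumption: 50 MB is enough per file

using Stream s = file.OpenReadStream(maxAllowedSize: MaxFileSize); // throws only above 50 MB now
using MemoryStream ms = new MemoryStream();
await s.CopyToAsync(ms);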

Related

Validation for attribute in blazor

I'm learning C# with Blazor, and I want to know how to validate a file input so that it doesn't allow files larger than 3 MB.
This example explains how to upload files (in SVG format) in Blazor with the InputFile component. Pay attention to the variable maxFileSize, which sets the maximum number of bytes that can be supplied by the Stream.
FileUpload.razor.cs
private bool fileUploading;
private string filePath = "C:\\fileFolder\\file.svg";
private const int maxFileSize = 3 * 1024 * 1024; // 3 MB max size

private async Task UploadSvgFileAsync(InputFileChangeEventArgs e)
{
    fileUploading = true;
    try
    {
        string fileExtension = Path.GetExtension(e.File.Name);
        if (!fileExtension.Equals(".svg", StringComparison.OrdinalIgnoreCase))
            throw new Exception("SVG file is required.");

        // Throws if the file is larger than maxFileSize.
        using var stream = e.File.OpenReadStream(maxFileSize);
        await using FileStream fs = new(filePath, FileMode.Create);
        await stream.CopyToAsync(fs);
    }
    catch (Exception ex)
    {
        // some error handling
    }
    finally
    {
        fileUploading = false; // reset the flag even when an exception is thrown
    }
}
And in your FileUpload.razor component:
@if (fileUploading)
{
    <span>- File Uploading...</span>
}
<InputFile OnChange="@UploadSvgFileAsync" />
I recommend reading the following articles to learn how to properly work with files in Blazor apps:
File Uploads
File Downloads
Create a FileInfo for the file:
FileInfo file = new FileInfo(yourFile);
Get the file's size in bytes:
var fileBytes = file.Length;
Convert bytes to megabytes:
var fileMBs = fileBytes / (1024 * 1024);
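In the Blazor case above you don't need FileInfo at all; IBrowserFile already exposes the size, so a minimal sketch of the 3 MB check (the method name and rejection handling are illustrative) is:

private const int maxFileSize = 3 * 1024 * 1024; // 3 MB

private void ValidateFile(InputFileChangeEventArgs e)
{
    // IBrowserFile.Size is the file length in bytes, available before any read.
    if (e.File.Size > maxFileSize)
    {
        // reject the file, e.g. show a validation message
        return;
    }
    // proceed with e.File.OpenReadStream(maxFileSize) as shown above
}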

Zipping a number of potentially large files in chunks to avoid large memory consumption

I am working on an application that takes a list of file keys to files on AWS S3 as input and then creates a zip file back on AWS S3 with all of those files inside. The compression part does not matter - the important part is to have a single zip file containing all of the other files.
To be able to run the application on a server without a lot of memory or file storage space, I was thinking of using the API that allows fetching a byte range from a file on S3 (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) to download the files in chunks, then adding them to the zip file and uploading each chunk using the multipart upload API (https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html).
I have tried to make a small sample app that simulates how it could work (without actually calling the S3 APIs yet), but it gets stuck on this line: "await zipStream.WriteAsync(inBuffer, 0, currentChunk);"
public static async Task Main(string[] args)
{
    const int ChunkSize = 5 * 1024 * 1024;
    using (var fileOutputStream = new FileStream("/Users/SPE/Downloads/BG_K01.zip", FileMode.Create))
    using (var fileInputStream = File.Open("/Users/SPE/Downloads/BG_K01.rvt", FileMode.Open))
    {
        long fileSize = new FileInfo("/Users/SPE/Downloads/BG_K01.rvt").Length;
        int readBytes = 0;
        using (AnonymousPipeServerStream pipeServer = new AnonymousPipeServerStream())
        using (AnonymousPipeClientStream pipeClient = new AnonymousPipeClientStream(pipeServer.GetClientHandleAsString()))
        using (var zipArchive = new ZipArchive(pipeServer, ZipArchiveMode.Create, true))
        {
            var zipEntry = zipArchive.CreateEntry("BG_K01.rvt", CompressionLevel.NoCompression);
            using (var zipStream = zipEntry.Open())
            {
                // Simulate receiving and sending a chunk of bytes
                while (readBytes < fileSize)
                {
                    var currentChunk = (int)Math.Min(ChunkSize, fileSize - readBytes);
                    var inBuffer = new byte[currentChunk];
                    var outBuffer = new byte[currentChunk];

                    await fileInputStream.ReadAsync(inBuffer, 0, currentChunk);
                    await zipStream.WriteAsync(inBuffer, 0, currentChunk);
                    await pipeClient.ReadAsync(outBuffer, 0, currentChunk);
                    await fileOutputStream.WriteAsync(outBuffer, 0, currentChunk);
                    readBytes += currentChunk;
                }
            }
        }
    }
}
I am also not sure if using pipe streams is the best way to do this, but my hope is that they release any memory consumed once the stream has been read, and thereby keep memory consumption very low.
Does anybody know why writing to the zipStream hangs?
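One plausible cause, offered as a hedged guess: anonymous pipes have a fixed-size internal buffer, and in this sample nothing reads from pipeClient until zipStream.WriteAsync completes, so once the buffer fills the write blocks forever. A sketch of one workaround is to drain the read end on a separate task (the variable names are the sample's own; the buffer size is arbitrary):

var drainTask = Task.Run(async () =>
{
    var buffer = new byte[81920];
    int n;
    // Copy whatever the zip writer pushes into the pipe out to the output file.
    while ((n = await pipeClient.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        await fileOutputStream.WriteAsync(buffer, 0, n);
    }
});
// ... write the chunks to zipStream as before, dispose the ZipArchive and
// pipeServer so the reader sees end-of-stream, then: await drainTask;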

Upload file uploaded via HTTP to ASP.NET further to SFTP server in C# and SSH.NET

I'm setting up a file transfer through the Renci SSH.NET library using SFTP and C#, and I'm stuck on this problem.
Whenever I upload a file in my ASP.NET Core program, it sends the file, but it arrives as an empty file with the same name.
public async Task<IActionResult> UploadFiles(List<IFormFile> files)
{
    string host = "----";
    string username = "----";
    string password = "----";
    string Name = "";

    var connectionInfo = new Renci.SshNet.ConnectionInfo(host, username, new PasswordAuthenticationMethod(username, password));
    var sftp = new SftpClient(connectionInfo);
    sftp.Connect();
    sftp.ChangeDirectory("DIRECTORY");
    try
    {
        // Copy each uploaded file into a MemoryStream and send it over SFTP.
        foreach (var formFile in files)
        {
            var memoryStream = new MemoryStream();
            await formFile.CopyToAsync(memoryStream);
            Name = formFile.FileName;
            using (var uplfileStream = memoryStream)
            {
                sftp.UploadFile(uplfileStream, Name, null);
            }
        }
    }
    catch (WebException ex)
    {
        throw new Exception((ex.Response as FtpWebResponse).StatusDescription);
    }
    sftp.Disconnect();
    return View("Inbox");
}
I expect a file to be uploaded to the server, but the actual output is a file with the same name as the one I tried to upload and a size of 0 KB, i.e. it's empty.
Your immediate problem is answered here:
Upload from ByteArray/MemoryStream using SSH.NET - File gets created with size 0KB
Though you do not need the intermediate MemoryStream (and it's inefficient anyway).
Use IFormFile.OpenReadStream:
using (var uplfileStream = formFile.OpenReadStream())
{
    sftp.UploadFile(uplfileStream, Name);
}
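If you do keep the intermediate MemoryStream, the linked answer's fix amounts to rewinding it before the upload, since CopyToAsync leaves the position at the end and SSH.NET then reads zero bytes:

memoryStream.Position = 0; // rewind; otherwise an empty file is uploaded
sftp.UploadFile(memoryStream, Name);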

Download multiple files contained in a list of byte array in ASP.NET MVC C#

I'm developing an ASP.NET MVC 5 application, and I wrote code that lets me download files stored in a SQL Server database as varbinary. I can download a single file with this:
public JsonResult PrepareSingleFile(int[] IdArray)
{
    ImageContext _contexte = new ImageContext();
    var response = _contexte.contents.Find(IdArray.FirstOrDefault());
    //byte[] FileData = Encoding.UTF8.GetBytes(response.image.ToString());
    byte[] FileData = response.image;
    Session["data"] = FileData;
    Session["filename"] = response.FileName;
    return Json(response.FileName);
}

public FileResult DownloadSingleFile()
{
    var fname = Session["filename"];
    var data = (byte[])Session["data"];
    //return File(data, "application/pdf");
    return File(data, System.Net.Mime.MediaTypeNames.Application.Pdf, fname.ToString() + ".pdf");
}
But now I want to download multiple files, so I'm getting the data of each file as a byte array, putting those byte arrays inside a List<byte[]>, and I want to download those files as a single zip file. How can I do that?
I tried this:
File(data, "the MIME type", "file name.extension")
but it doesn't work when data is a List<byte[]>.
You can do that using the ZipArchive class, available since .NET Framework 4.5.
Add a method to your controller that accepts a List<byte[]> parameter and writes each byte[] into an entry of an in-memory zip archive. Note that a raw byte[] carries no file name, so the entries below are named by index:
public FileResult DownloadMultipleFiles(List<byte[]> byteArrayList)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // leaveOpen: true so the MemoryStream survives the archive's Dispose.
        using (var archive = new ZipArchive(ms, ZipArchiveMode.Create, true))
        {
            for (int i = 0; i < byteArrayList.Count; i++)
            {
                byte[] file = byteArrayList[i];
                var entry = archive.CreateEntry($"file{i}.pdf", CompressionLevel.Fastest);
                using (var zipStream = entry.Open())
                {
                    zipStream.Write(file, 0, file.Length);
                }
            }
        }
        // Disposing the archive (end of the using block) finalizes the zip data.
        return File(ms.ToArray(), "application/zip", "Archive.zip");
    }
}
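A hypothetical caller wired to the asker's existing data access (ImageContext, contents, and image are the names from the question; the Select projection assumes using System.Linq):

public FileResult DownloadMultipleFilesById(int[] IdArray)
{
    ImageContext _contexte = new ImageContext();
    var byteArrayList = IdArray.Select(id => _contexte.contents.Find(id).image).ToList();
    return DownloadMultipleFiles(byteArrayList);
}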

Zip files and attach them to MailMessage without saving a file

I'm working on a little C# ASP.NET web app that pulls 3 files from my server, creates a zip of those files, and sends the zip file to an e-mail recipient.
The problem I'm having is finding a way to combine those 3 files without creating a zip file on the hard drive of the server. I think I need to use some sort of MemoryStream or FileStream, but merging them into one zip file is a little beyond my understanding. I've tried SharpZipLib and DotNetZip, but I haven't been able to figure it out.
The reason I don't want the zip saved locally is that there might be a number of users on this app at once, and I don't want to clog up my server machine with those zips. I'm looking for 2 answers: how to zip files without saving the zip as a file, and how to attach that zip to a MailMessage.
Check this example for SharpZipLib:
https://github.com/icsharpcode/SharpZipLib/wiki/Zip-Samples#wiki-anchorMemory
using ICSharpCode.SharpZipLib.Zip;

// Compresses the supplied memory stream, naming it as zipEntryName, into a zip,
// which is returned as a memory stream or a byte array.
public MemoryStream CreateToMemoryStream(MemoryStream memStreamIn, string zipEntryName)
{
    MemoryStream outputMemStream = new MemoryStream();
    ZipOutputStream zipStream = new ZipOutputStream(outputMemStream);

    zipStream.SetLevel(3); // 0-9, 9 being the highest level of compression

    ZipEntry newEntry = new ZipEntry(zipEntryName);
    newEntry.DateTime = DateTime.Now;

    zipStream.PutNextEntry(newEntry);
    StreamUtils.Copy(memStreamIn, zipStream, new byte[4096]);
    zipStream.CloseEntry();

    zipStream.IsStreamOwner = false; // false stops Close from also closing the underlying stream
    zipStream.Close();               // must finish the ZipOutputStream before using outputMemStream

    outputMemStream.Position = 0;
    return outputMemStream;

    // Alternative outputs (as comments, since code after the return is unreachable):
    // ToArray is the cleaner and easiest to use correctly, with the penalty of duplicating allocated memory:
    //   byte[] byteArrayOut = outputMemStream.ToArray();
    // GetBuffer returns a raw buffer, so you need to account for the true length yourself:
    //   byte[] byteArrayOut = outputMemStream.GetBuffer();
    //   long len = outputMemStream.Length;
}
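The stream that CreateToMemoryStream returns can be attached directly; a minimal sketch of the MailMessage half of the question (addresses and file names are placeholders):

MemoryStream zipped = CreateToMemoryStream(inputMemStream, "report.txt");
var mail = new MailMessage("from@example.com", "to@example.com", "Files", "See attached zip.");
mail.Attachments.Add(new Attachment(zipped, "files.zip", MediaTypeNames.Application.Zip));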
Try this:
public static Attachment CreateAttachment(string fileNameAndPath, bool zipIfTooLarge = true, int bytes = 1 << 20)
{
    if (!zipIfTooLarge)
    {
        return new Attachment(fileNameAndPath);
    }

    var fileInfo = new FileInfo(fileNameAndPath);

    // Less than 1 MB: just attach as is.
    if (fileInfo.Length < bytes)
    {
        return new Attachment(fileNameAndPath);
    }

    byte[] fileBytes = File.ReadAllBytes(fileNameAndPath);

    using (var memoryStream = new MemoryStream())
    {
        string fileName = Path.GetFileName(fileNameAndPath);

        using (var zipArchive = new ZipArchive(memoryStream, ZipArchiveMode.Create))
        {
            ZipArchiveEntry zipArchiveEntry = zipArchive.CreateEntry(fileName, CompressionLevel.Optimal);

            // Write the raw bytes directly; round-tripping them through a string
            // via StreamWriter/Encoding would corrupt binary files.
            using (var entryStream = zipArchiveEntry.Open())
            {
                entryStream.Write(fileBytes, 0, fileBytes.Length);
            }
        }

        var attachmentStream = new MemoryStream(memoryStream.ToArray());
        string zipname = $"{Path.GetFileNameWithoutExtension(fileName)}.zip";
        return new Attachment(attachmentStream, zipname, MediaTypeNames.Application.Zip);
    }
}
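Hypothetical usage, sending the result (the file path and SMTP host are placeholders):

var message = new MailMessage("from@example.com", "to@example.com", "Report", "Zipped file attached.");
message.Attachments.Add(CreateAttachment(@"C:\reports\report.pdf"));
using (var client = new SmtpClient("smtp.example.com"))
{
    client.Send(message);
}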
