Can't stop 4 threads running in parallel - C#

I have a Windows Forms application with one button to encrypt any file using the encryption function:
public byte[] EncodeRc6fileFun(byte[] byteText)
(note: the input of the EncodeRc6fileFun method is a 16-byte array)
I wrote the C# code below. When the button is clicked, it should:
- select the file to be encrypted;
- divide the selected file into parts of 16 bytes each;
- speed up the encryption by running four calls to EncodeRc6fileFun in parallel, encrypting the first four parts of the file together, then the next four parts, and so on until the file ends;
- specify the name and location of the encrypted file.
private void button16_Click(object sender, EventArgs e)
{
    string text = textBox4.Text;
    // Convert the text to a byte array
    byte[] MainKeynew1 = System.Text.Encoding.UTF8.GetBytes(text);
    rc68 = new RC6(Convert.ToInt32(comboBox1.Text), MainKeynew1);

    // Select file and divide it into parts
    OpenFileDialog openFileDialog1 = new OpenFileDialog();
    openFileDialog1.Filter = "All Files|*.*";
    openFileDialog1.Title = "Select a File";
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        string fileName = openFileDialog1.FileName;

        // Divide the file into parts of 16 bytes each
        byte[] bytes = File.ReadAllBytes(fileName);
        int partSize = 16; // 16 bytes per part
        List<byte[]> parts = new List<byte[]>();
        for (int i = 0; i < bytes.Length; i += partSize)
        {
            byte[] partBytes = bytes.Skip(i).Take(partSize).ToArray();
            parts.Add(partBytes);
        }

        // Encrypt the file in parallel using 4 calls to EncodeRc6fileFun()
        ParallelOptions options = new ParallelOptions { MaxDegreeOfParallelism = 4 }; // 4 threads to run in parallel
        Parallel.ForEach(parts, options, part =>
        {
            byte[] encryptedPartBytes = rc68.EncodeRc6fileFun(part);
            // Save the encrypted part back into the list
            lock (parts)
            {
                parts[parts.IndexOf(part)] = encryptedPartBytes;
            }
        });

        // Write the encrypted parts to a single file
        SaveFileDialog saveFileDialog1 = new SaveFileDialog();
        if (saveFileDialog1.ShowDialog() == DialogResult.OK)
        {
            string savePath = saveFileDialog1.FileName;
            using (var fs = new FileStream(savePath, FileMode.Create))
            {
                foreach (var part in parts)
                {
                    fs.Write(part, 0, part.Length);
                }
            }
        }
    }
}
The code builds without errors, but it runs for a very long time and then Visual Studio breaks with the exception: Managed Debugging Assistant 'ContextSwitchDeadlock': 'The CLR has been unable to transition from COM context 0x10c6e28 to COM context 0x10c6d70 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.'
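The MDA fires because the encryption loop runs on the UI (STA) thread, which stops pumping messages while Parallel.ForEach blocks it. A minimal sketch of one way out, assuming EncodeRc6fileFun is safe to call from multiple threads: move the work off the UI thread with Task.Run, and use Parallel.For so each block is written back by its index (parts.IndexOf is O(n) per block and picks the wrong slot whenever two 16-byte blocks have identical contents):

// Sketch only: same flow as above, but the UI thread stays free to pump messages.
private async void button16_Click(object sender, EventArgs e)
{
    byte[] key = System.Text.Encoding.UTF8.GetBytes(textBox4.Text);
    rc68 = new RC6(Convert.ToInt32(comboBox1.Text), key);

    var open = new OpenFileDialog { Filter = "All Files|*.*", Title = "Select a File" };
    if (open.ShowDialog() != DialogResult.OK) return;

    byte[] bytes = File.ReadAllBytes(open.FileName);
    const int partSize = 16;
    int partCount = (bytes.Length + partSize - 1) / partSize;
    var parts = new byte[partCount][];
    for (int i = 0; i < partCount; i++)
        parts[i] = bytes.Skip(i * partSize).Take(partSize).ToArray();

    // Encrypt off the UI thread; each index is written exactly once, so no lock is needed.
    await Task.Run(() =>
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
        Parallel.For(0, partCount, options, i => parts[i] = rc68.EncodeRc6fileFun(parts[i]));
    });

    var save = new SaveFileDialog();
    if (save.ShowDialog() != DialogResult.OK) return;
    using (var fs = new FileStream(save.FileName, FileMode.Create))
        foreach (var part in parts)
            fs.Write(part, 0, part.Length);
}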

Related

Load large file in SQL Server

I am trying to upload large files (regardless of file type) into a SQL Server database.
But when I upload a large one (13.2 MB or more), the following error message appears:
System.IO.IOException: Supplied file with size 13897053 bytes exceeds the maximum of 512000 bytes.
When the user uploads the files, I call the following method to save them into an IList<IBrowserFile>.
private IList<IBrowserFile> Files = new List<IBrowserFile>();
private int MaxAllowdFiles = int.MaxValue;
private long MaxSizeFiles = long.MaxValue;

private async Task OnInputFileChanged(InputFileChangeEventArgs e)
{
    ClearDragClass();
    /*var files = e.GetMultipleFiles();
    foreach (var file in files)
    {
        Files.Add(file);
        Console.WriteLine(Path.GetFullPath(file.Name));
    }*/
    //using var content = new MultipartFormDataContent();
    foreach (var file in e.GetMultipleFiles(MaxAllowdFiles))
    {
        using var f = file.OpenReadStream(MaxSizeFiles);
        using var fileContent = new StreamContent(f);
        fileContent.Headers.ContentType = new MediaTypeHeaderValue(file.ContentType);
        Files.Add(file);
    }
}
Once the user has uploaded all the files, they click a button that calls the following method to upload them into the database.
private async void Upload()
{
    List<string>? notUploadFiles = new();
    foreach (var file in Files)
    {
        using Stream s = file.OpenReadStream();
        using MemoryStream ms = new MemoryStream();
        await s.CopyToAsync(ms);
        byte[] fileBytes = ms.ToArray();
        string extn = new FileInfo(file.Name).Extension;
        var addArchivoTarea = new AddArchivoTareaRequestDTO(Tarea.Id, file.Name, fileBytes, extn);
        var successResponse = await HttpTareas.AddArchivoToTareaAsync(addArchivoTarea);
        if (!successResponse)
        {
            notUploadFiles.Add(file.Name);
        }
    }
    if (notUploadFiles.Count > 0)
    {
        Snackbar.Configuration.SnackbarVariant = Variant.Filled;
        Snackbar.Add("The following files could not be uploaded", Severity.Info);
        Snackbar.Configuration.SnackbarVariant = Variant.Outlined;
        foreach (var file in notUploadFiles)
        {
            Snackbar.Add(file, Severity.Error);
        }
        MudDialog.Close(DialogResult.Ok(true));
    }
    Snackbar.Add("All files have been successfully uploaded", Severity.Success);
    MudDialog.Close(DialogResult.Ok(true));
}
I don't know what I should add or modify to be able to upload large files.
Any suggestions?
According to this
OpenReadStream enforces a maximum size in bytes of its Stream. Reading one file or multiple files larger than 500 KB results in an exception. This limit prevents developers from accidentally reading large files into memory. The maxAllowedSize parameter of OpenReadStream can be used to specify a larger size if required, up to a maximum supported size of 2 GB.
so you can have:
Stream s = file.OpenReadStream(maxAllowedSize: [the value you prefer]);
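Applied to the code above, a minimal sketch (the 1 GB limit is a placeholder; the docs cap it at 2 GB). Note that the parameterless OpenReadStream() call in Upload() re-applies the 512000-byte default, so the limit has to be passed there as well:

private const long MaxFileSize = 1024L * 1024 * 1024; // placeholder: 1 GB, must be <= 2 GB

// In OnInputFileChanged:
using var f = file.OpenReadStream(maxAllowedSize: MaxFileSize);

// In Upload(), where the exception is actually thrown:
using Stream s = file.OpenReadStream(maxAllowedSize: MaxFileSize);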

Zipping a number of potentially large files in chunks to avoid large memory consumption

I am working on an application that takes a list of file keys for files on AWS S3 as input and then creates a zip file back on AWS S3 with all of those files inside. The compression part does not matter; the important part is to end up with a single zip file containing all of the other files.
To be able to run the application on a server without much memory or file storage space, I was thinking of using the API that allows fetching a byte range from a file on S3 (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) to download the files in chunks, and then adding them to the zip file and uploading each chunk using the multipart upload API (https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html).
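For reference, a sketch of the two S3 calls that plan relies on, using the AWSSDK.S3 package (an assumption; bucket, key, and sizes are placeholders):

using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

// Download one byte range of an object (the ranged GET described above).
static async Task<Stream> FetchRangeAsync(IAmazonS3 s3, string bucket, string key, long start, long end)
{
    var response = await s3.GetObjectAsync(new GetObjectRequest
    {
        BucketName = bucket,
        Key = key,
        ByteRange = new ByteRange(start, end) // inclusive byte range
    });
    return response.ResponseStream;
}

// Upload one finished chunk of the zip as a multipart-upload part.
static async Task<PartETag> UploadChunkAsync(IAmazonS3 s3, string bucket, string key,
                                             string uploadId, int partNumber, Stream chunk)
{
    var response = await s3.UploadPartAsync(new UploadPartRequest
    {
        BucketName = bucket,
        Key = key,
        UploadId = uploadId,    // from InitiateMultipartUploadAsync
        PartNumber = partNumber,
        InputStream = chunk     // every part except the last must be at least 5 MB
    });
    return new PartETag(partNumber, response.ETag);
}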
I have tried to make a small sample app that simulates how it could work (without actually calling the S3 APIs yet), but it gets stuck on this line: "await zipStream.WriteAsync(inBuffer, 0, currentChunk);"
public static async Task Main(string[] args)
{
    const int ChunkSize = 5 * 1024 * 1024;
    using (var fileOutputStream = new FileStream("/Users/SPE/Downloads/BG_K01.zip", FileMode.Create))
    using (var fileInputStream = File.Open("/Users/SPE/Downloads/BG_K01.rvt", FileMode.Open))
    {
        long fileSize = new FileInfo("/Users/SPE/Downloads/BG_K01.rvt").Length;
        int readBytes = 0;
        using (AnonymousPipeServerStream pipeServer = new AnonymousPipeServerStream())
        using (AnonymousPipeClientStream pipeClient = new AnonymousPipeClientStream(pipeServer.GetClientHandleAsString()))
        using (var zipArchive = new ZipArchive(pipeServer, ZipArchiveMode.Create, true))
        {
            var zipEntry = zipArchive.CreateEntry("BG_K01.rvt", CompressionLevel.NoCompression);
            using (var zipStream = zipEntry.Open())
            {
                // Simulate receiving and sending a chunk of bytes
                while (readBytes < fileSize)
                {
                    var currentChunk = (int)Math.Min(ChunkSize, fileSize - readBytes);
                    var inBuffer = new byte[currentChunk];
                    var outBuffer = new byte[currentChunk];
                    await fileInputStream.ReadAsync(inBuffer, 0, currentChunk);
                    await zipStream.WriteAsync(inBuffer, 0, currentChunk);
                    await pipeClient.ReadAsync(outBuffer, 0, currentChunk);
                    await fileOutputStream.WriteAsync(outBuffer, 0, currentChunk);
                    readBytes += currentChunk;
                }
            }
        }
    }
}
I am also not sure if using the pipe streams is the best way to do this, but my hope is that they will release any memory consumed once the stream has been read, and thereby keep the memory consumption very low.
Does anybody know why writing to the zipStream hangs?
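A plausible cause, offered as an assumption: anonymous pipes have a small bounded buffer, and nothing reads pipeClient until after the write completes, so WriteAsync blocks forever once that buffer fills. A minimal sketch that instead drains the pipe on a concurrent task:

// Sketch: the reader runs concurrently, so writes into the zip never block on a full pipe buffer.
public static async Task ZipThroughPipeAsync(string inputPath, string outputPath)
{
    using var output = new FileStream(outputPath, FileMode.Create);
    using var input = File.OpenRead(inputPath);
    using var pipeServer = new AnonymousPipeServerStream(PipeDirection.Out);
    using var pipeClient = new AnonymousPipeClientStream(PipeDirection.In, pipeServer.GetClientHandleAsString());

    // Drain everything the zip writer produces while it is being produced.
    Task drain = pipeClient.CopyToAsync(output);

    using (var archive = new ZipArchive(pipeServer, ZipArchiveMode.Create, leaveOpen: true))
    using (var zipStream = archive.CreateEntry(Path.GetFileName(inputPath), CompressionLevel.NoCompression).Open())
    {
        await input.CopyToAsync(zipStream, 5 * 1024 * 1024); // 5 MB chunks, as in the sample
    }

    pipeServer.Dispose(); // close the write end so the reader sees end-of-stream
    await drain;
}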

How do I replicate the functionality of tail -f in C# [duplicate]

I want to read a file continuously, like GNU tail with the -f parameter. I need it to live-read a log file.
What is the right way to do it?
A more natural approach, using FileSystemWatcher:
var wh = new AutoResetEvent(false);
var fsw = new FileSystemWatcher(".");
fsw.Filter = "file-to-read";
fsw.EnableRaisingEvents = true;
fsw.Changed += (s, e) => wh.Set();

var fs = new FileStream("file-to-read", FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
using (var sr = new StreamReader(fs))
{
    var s = "";
    while (true)
    {
        s = sr.ReadLine();
        if (s != null)
            Console.WriteLine(s);
        else
            wh.WaitOne(1000);
    }
}
wh.Close();
Here the main reading cycle stops to wait for incoming data, and the FileSystemWatcher is used just to wake the main reading cycle.
You want to open a FileStream in binary mode. Periodically, seek to the end of the file minus 1024 bytes (or whatever), then read to the end and output. That's how tail -f works.
Answers to your questions:
Binary, because it's difficult to randomly access the file if you're reading it as text. You have to do the binary-to-text conversion yourself, but it's not difficult. (See below.)
1024 bytes, because it's a nice convenient number that should handle 10 or 15 lines of text. Usually.
Here's an example of opening the file, reading the last 1024 bytes, and converting it to text:
static void ReadTail(string filename)
{
    using (FileStream fs = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        // Seek 1024 bytes from the end of the file (clamped so files shorter than 1 KB don't throw)
        long tailLength = Math.Min(1024, fs.Length);
        fs.Seek(-tailLength, SeekOrigin.End);
        // Read the tail bytes
        byte[] bytes = new byte[tailLength];
        fs.Read(bytes, 0, (int)tailLength);
        // Convert bytes to string
        string s = Encoding.Default.GetString(bytes);
        // or: string s = Encoding.UTF8.GetString(bytes);
        // and output to console
        Console.WriteLine(s);
    }
}
Note that you must open with FileShare.ReadWrite, since you're trying to read a file that's currently open for writing by another process.
Also note that I used Encoding.Default, which in US/English and most Western European locales will be an 8-bit character encoding. If the file is written in some other encoding (like UTF-8 or another Unicode encoding), it's possible that the bytes won't convert correctly to characters. You'll have to handle that by determining the encoding if you think this will be a problem. Search Stack Overflow for info about determining a file's text encoding.
If you want to do this periodically (every 15 seconds, for example), you can set up a timer that calls the ReadTail method as often as you want. You could optimize things a bit by opening the file only once at the start of the program. That's up to you.
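A minimal sketch of that timer (the file name and interval are placeholders):

// Poll the tail every 15 seconds; "app.log" and the interval are placeholders.
using var timer = new System.Threading.Timer(_ => ReadTail("app.log"),
    null, TimeSpan.Zero, TimeSpan.FromSeconds(15));
Console.ReadLine(); // keep the process alive while the timer fires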
To continuously monitor the tail of the file, you just need to remember the file's length from the previous read.
public static void MonitorTailOfFile(string filePath)
{
    var initialFileSize = new FileInfo(filePath).Length;
    var lastReadLength = initialFileSize - 1024;
    if (lastReadLength < 0) lastReadLength = 0;

    while (true)
    {
        try
        {
            var fileSize = new FileInfo(filePath).Length;
            if (fileSize > lastReadLength)
            {
                using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                {
                    fs.Seek(lastReadLength, SeekOrigin.Begin);
                    var buffer = new byte[1024];
                    while (true)
                    {
                        var bytesRead = fs.Read(buffer, 0, buffer.Length);
                        lastReadLength += bytesRead;
                        if (bytesRead == 0)
                            break;
                        var text = ASCIIEncoding.ASCII.GetString(buffer, 0, bytesRead);
                        Console.Write(text);
                    }
                }
            }
        }
        catch { } // ignore transient IO errors (e.g. the writer briefly locking the file)
        Thread.Sleep(1000);
    }
}
I had to use ASCIIEncoding because this code isn't smart enough to cater for variable-length UTF-8 characters on buffer boundaries.
Note: you can change the Thread.Sleep part to use different timings, and you can also link it with a file watcher and a blocking pattern (Monitor.Enter/Wait/Pulse). For me the timer is enough; at most it only checks the file length every second, if the file hasn't changed.
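For reference, a sketch of the stateful-Decoder alternative to the ASCII shortcut above (fs is assumed to be the same FileStream as in the loop):

// A Decoder keeps incomplete multi-byte sequences between calls, so UTF-8
// characters split across 1024-byte reads still decode correctly.
Decoder decoder = Encoding.UTF8.GetDecoder();
var buffer = new byte[1024];
var chars = new char[Encoding.UTF8.GetMaxCharCount(buffer.Length)];
int bytesRead;
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
{
    int charCount = decoder.GetChars(buffer, 0, bytesRead, chars, 0);
    Console.Write(new string(chars, 0, charCount));
}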
This is my solution
static IEnumerable<string> TailFrom(string file)
{
    using (var reader = File.OpenText(file))
    {
        while (true)
        {
            string line = reader.ReadLine();
            if (reader.BaseStream.Length < reader.BaseStream.Position)
                reader.BaseStream.Seek(0, SeekOrigin.Begin); // file was truncated; start over
            if (line != null) yield return line;
            else Thread.Sleep(500);
        }
    }
}
so, in your code you can do:
foreach (string line in TailFrom(file))
{
    Console.WriteLine($"line read= {line}");
}
You could use the FileSystemWatcher class, which can send notifications for different events happening on the file system, like a file being changed.
private void button1_Click(object sender, EventArgs e)
{
    if (folderBrowserDialog.ShowDialog() == DialogResult.OK)
    {
        path = folderBrowserDialog.SelectedPath;
        fileSystemWatcher.Path = path;
        string[] str = Directory.GetFiles(path);
        string line;
        fs = new FileStream(str[0], FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
        tr = new StreamReader(fs);
        while ((line = tr.ReadLine()) != null)
        {
            listBox.Items.Add(line);
        }
    }
}

private void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
{
    string line;
    line = tr.ReadLine();
    listBox.Items.Add(line);
}
If you are just looking for a tool to do this, then check out the free version of BareTail.

Zip files and attach them to MailMessage without saving a file

I'm working on a little C# ASP.NET web app that pulls 3 files from my server, creates a zip of those files, and sends the zip file to an e-mail recipient.
The problem I'm having is finding a way to combine those 3 files without creating a zip file on the hard drive of the server. I think I need to use some sort of MemoryStream or FileStream, but merging them into one zip file is a little beyond my understanding. I've tried SharpZipLib and DotNetZip, but I haven't been able to figure it out.
The reason I don't want the zip saved locally is that there might be a number of users on this app at once, and I don't want to clog up my server machine with those zips. I'm looking for two answers: how to zip files without saving the zip as a file, and how to attach that zip to a MailMessage.
Check this example for SharpZipLib:
https://github.com/icsharpcode/SharpZipLib/wiki/Zip-Samples#wiki-anchorMemory
using ICSharpCode.SharpZipLib.Zip;

// Compresses the supplied memory stream, naming it as zipEntryName, into a zip,
// which is returned as a memory stream or a byte array.
public MemoryStream CreateToMemoryStream(MemoryStream memStreamIn, string zipEntryName)
{
    MemoryStream outputMemStream = new MemoryStream();
    ZipOutputStream zipStream = new ZipOutputStream(outputMemStream);

    zipStream.SetLevel(3); // 0-9, 9 being the highest level of compression

    ZipEntry newEntry = new ZipEntry(zipEntryName);
    newEntry.DateTime = DateTime.Now;

    zipStream.PutNextEntry(newEntry);
    StreamUtils.Copy(memStreamIn, zipStream, new byte[4096]);
    zipStream.CloseEntry();

    zipStream.IsStreamOwner = false; // False stops the Close also closing the underlying stream.
    zipStream.Close(); // Must finish the ZipOutputStream before using outputMemStream.

    outputMemStream.Position = 0;
    return outputMemStream;

    // Alternative outputs (commented out, since they would be unreachable here):
    // ToArray is the cleaner and easiest to use correctly, with the penalty of duplicating allocated memory.
    // byte[] byteArrayOut = outputMemStream.ToArray();
    // GetBuffer returns a raw buffer, so you need to account for the true length yourself.
    // byte[] byteArrayOut = outputMemStream.GetBuffer();
    // long len = outputMemStream.Length;
}
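A hypothetical usage sketch connecting this helper to the MailMessage half of the question (addresses, host, and file path are placeholders):

using System.IO;
using System.Net.Mail;

var input = new MemoryStream(File.ReadAllBytes(@"C:\data\report.txt")); // placeholder path
MemoryStream zipped = CreateToMemoryStream(input, "report.txt");

var message = new MailMessage("from@example.com", "to@example.com", "Files", "Zip attached.");
message.Attachments.Add(new Attachment(zipped, "report.zip", "application/zip"));
using (var client = new SmtpClient("smtp.example.com"))
    client.Send(message);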
Try this:
public static Attachment CreateAttachment(string fileNameAndPath, bool zipIfTooLarge = true, int bytes = 1 << 20)
{
    if (!zipIfTooLarge)
    {
        return new Attachment(fileNameAndPath);
    }

    var fileInfo = new FileInfo(fileNameAndPath);
    // Less than 1 MB, just attach as is.
    if (fileInfo.Length < bytes)
    {
        var attachment = new Attachment(fileNameAndPath);
        return attachment;
    }

    byte[] fileBytes = File.ReadAllBytes(fileNameAndPath);
    using (var memoryStream = new MemoryStream())
    {
        string fileName = Path.GetFileName(fileNameAndPath);
        using (var zipArchive = new ZipArchive(memoryStream, ZipArchiveMode.Create))
        {
            ZipArchiveEntry zipArchiveEntry = zipArchive.CreateEntry(fileName, CompressionLevel.Optimal);
            // Write the raw bytes; round-tripping them through a string encoding
            // would corrupt binary files.
            using (Stream entryStream = zipArchiveEntry.Open())
            {
                entryStream.Write(fileBytes, 0, fileBytes.Length);
            }
        }
        var attachmentStream = new MemoryStream(memoryStream.ToArray());
        string zipname = $"{Path.GetFileNameWithoutExtension(fileName)}.zip";
        var attachment = new Attachment(attachmentStream, zipname, MediaTypeNames.Application.Zip);
        return attachment;
    }
}

Editing a text file in place through C#

I have a huge text file, size > 4 GB, and I want to replace some text in it programmatically. I know the line number at which I have to replace the text, but the problem is that I do not want to copy all the text (along with my replaced line) to a second file. I have to do this within the source file. Is there a way to do this in C#?
The text which has to be replaced is exactly the same size as the source text (if this helps).
Since the file is so large, you may want to take a look at the .NET 4.0 support for memory-mapped files. Basically you'll need to move the file/stream pointer to the location in the file, overwrite that location, then flush the file to disk. You won't need to load the entire file into memory.
For example, without using memory-mapped files, the following will overwrite part of an ASCII file. The args are the input file, the zero-based start index, and the new text.
static void Main(string[] args)
{
    string inputFilename = args[0];
    int startIndex = int.Parse(args[1]);
    string newText = args[2];

    using (FileStream fs = new FileStream(inputFilename, FileMode.Open, FileAccess.Write))
    {
        fs.Position = startIndex;
        byte[] newTextBytes = Encoding.ASCII.GetBytes(newText);
        fs.Write(newTextBytes, 0, newTextBytes.Length);
    }
}
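And a minimal sketch of the memory-mapped variant mentioned above (System.IO.MemoryMappedFiles, .NET 4.0+); only the touched region is mapped, so the multi-gigabyte file is never read into memory:

using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

static void OverwriteAt(string path, long offset, string newText)
{
    byte[] bytes = Encoding.ASCII.GetBytes(newText);
    using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
    using (var accessor = mmf.CreateViewAccessor(offset, bytes.Length, MemoryMappedFileAccess.Write))
    {
        accessor.WriteArray(0, bytes, 0, bytes.Length);
    } // disposing the accessor flushes the written page back to disk
}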
Unless the new text is exactly the same size as the old text, you will have to re-write the file. There is no way around it. You can at least do this without keeping the entire file in memory.
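For completeness, a sketch of that streaming rewrite (the line number, temp-file suffix, and replacement text are placeholders):

static void ReplaceLine(string path, int lineNumber, string newLine)
{
    string tmp = path + ".tmp"; // written alongside the source, then swapped in
    using (var reader = new StreamReader(path))
    using (var writer = new StreamWriter(tmp))
    {
        string line;
        int current = 1;
        while ((line = reader.ReadLine()) != null)
            writer.WriteLine(current++ == lineNumber ? newLine : line);
    }
    File.Delete(path);
    File.Move(tmp, path);
}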
Hello, I tested the following and it works well. It caters to variable-length lines separated by Environment.NewLine; if you have fixed-length lines you can seek straight to the target line. For converting bytes to strings and vice versa you can use Encoding.
static byte[] ReadNextLine(FileStream fs)
{
    // Assumes a two-character newline (Environment.NewLine on Windows is "\r\n").
    byte[] nl = new byte[] { (byte)Environment.NewLine[0], (byte)Environment.NewLine[1] };
    List<byte> ll = new List<byte>();
    bool lineFound = false;
    while (!lineFound)
    {
        int b = fs.ReadByte(); // keep as int: casting to byte first would hide the -1 end-of-file marker
        if (b == -1) break;
        ll.Add((byte)b);
        if ((byte)b == nl[0])
        {
            b = fs.ReadByte();
            if (b == -1) break;
            ll.Add((byte)b);
            if ((byte)b == nl[1]) lineFound = true;
        }
    }
    return ll.Count == 0 ? null : ll.ToArray();
}
static void Main(string[] args)
{
    using (FileStream fs = new FileStream(@"c:\70-528\junk.txt", FileMode.Open, FileAccess.ReadWrite))
    {
        int replaceLine = 1231;
        byte[] b = null;
        int lineCount = 1;
        while (lineCount < replaceLine && (b = ReadNextLine(fs)) != null) lineCount++; // skip lines
        long seekPos = fs.Position;
        b = ReadNextLine(fs);
        fs.Seek(seekPos, SeekOrigin.Begin);
        string line = new string(b.Select(x => (char)x).ToArray());
        line = line.Replace("Text1", "Text2"); // same length, so the line can be overwritten in place
        b = line.ToCharArray().Select(x => (byte)x).ToArray();
        fs.Write(b, 0, b.Length);
    }
}
I'm guessing you'll want to use the FileStream class, seek to your position, and write your updated data.
