I'm using MonoTorrent to download a ~20 GB file. When MonoTorrent creates the files, memory and CPU usage reach their maximum, which slows the computer down and even overheats it, so I wanted to limit the memory usage by limiting the write rate.
Here's what I have tried.
I checked around and found that you can limit the read/write rate of the engine using this code:
EngineSettings engineSettings = new EngineSettings(downloadsPath, port);
engineSettings.PreferEncryption = true;
engineSettings.AllowedEncryption = EncryptionTypes.All;
engineSettings.MaxWriteRate = /* maximum write rate in bytes */;
engineSettings.MaxReadRate = /* maximum read rate in bytes */;
engineSettings.GlobalMaxDownloadSpeed = /* max download in bytes */;
The download rate limit worked, but it didn't limit the memory usage, so I checked the write rate value at runtime using this code:
MessageBox.Show(engine.DiskManager.WriteRate.ToString());
It returned 0. So instead of setting MaxWriteRate through EngineSettings, I went into EngineSettings.cs and hard-coded a default value for MaxWriteRate by changing this code:
public int MaxWriteRate
{
    get { return 5000; }
    set { maxWriteRate = 5000; }
}
That didn't limit the memory usage either, and WriteRate still returned 0, so I went into DiskManager.cs and hard-coded a default value for WriteRate by changing this code:
public int WriteRate
{
    get { return 5000; }
}
Now WriteRate returned 5000, but it still didn't limit the memory usage. At that point I got stuck and couldn't find anything else to change.
Does anyone know why it's not working? I'm starting to think WriteRate isn't about limiting the write speed at all.
When downloading a torrent, the download speed is limited by three things:
1) The maximum allowed download speed for the TorrentManager
2) The maximum allowed download speed overall
3) No more than 4MB of data is held in memory while waiting to be written to disk.
Specifically on the third point, if there are more than 4MB of pieces held in memory then no further Socket.Receive calls will be made until that data is flushed. https://github.com/mono/monotorrent/blob/caac16cffd95749febe04c3f7cf22567c3e40432/src/MonoTorrent/MonoTorrent.Client/RateLimiters/DiskWriterLimiter.cs#L43-L46
A screenshot (not reproduced here) shows what happens today when you specify a max write rate of 2 * 1024 * 1024 bytes (2,048 kB/sec): the download rate auto-limits because the 4 MB buffer fills up, which means setting the max disk write rate ends up limiting both the download rate and memory consumption.
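For reference, here is a minimal sketch of wiring the limit up through the public settings rather than by editing the library source. This assumes the older EngineSettings-style API shown in the question; newer MonoTorrent versions configure this through a settings builder instead.

EngineSettings settings = new EngineSettings(downloadsPath, port);
// Limit disk writes to 2 MiB/sec; once 4 MB of pieces are queued in
// memory, Socket.Receive stops being called, so this caps both the
// download rate and the memory held for pending writes.
settings.MaxWriteRate = 2 * 1024 * 1024;

ClientEngine engine = new ClientEngine(settings);
// ... register and start the TorrentManager as usual ...

// WriteRate is a measured value, not the configured limit; it stays 0
// until pieces are actually being flushed to disk.
Console.WriteLine(engine.DiskManager.WriteRate);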
I have the following code:
class Methods
{
    public MemoryStream UniqPicture(string imagePath)
    {
        var photoBytes = File.ReadAllBytes(imagePath); // replace imagePath with a valid image path
        var quality = 70;
        var format = ImageFormat.Jpeg; // target output format
        var size = new Size(200, 200); // note: format and size are unused in the chain below
        using (var inStream = new MemoryStream(photoBytes))
        {
            // outStream is returned to the caller, so it must not be wrapped
            // in a using block here; the caller is responsible for disposing it.
            var outStream = new MemoryStream();
            using (var imageFactory = new ImageFactory())
            {
                imageFactory.Load(inStream)
                            .Rotate(new Random().Next(-7, 7))
                            .RoundedCorners(new RoundedCornerLayer(190))
                            .Pixelate(3, null)
                            .Contrast(new Random().Next(-15, 15))
                            .Brightness(new Random().Next(-15, 15))
                            .Quality(quality)
                            .Save(outStream);
            }
            return outStream;
        }
    }

    public void StartUniq()
    {
        var files = Directory.GetFiles("mypath");
        Parallel.ForEach(files, (picture) => { UniqPicture(picture); });
    }
}
When I start the StartUniq() method, my CPU is bound to 12-13% and no more. Can I use a higher CPU percentage for this operation? Why doesn't it increase?
I tried to do it from Python; it also uses only 12-13%. The CPU is a Core i7 8700.
The only way to do the operation faster is to start a second window of the application.
Is it a Windows limit? I'm using Windows Server 2016.
I think this is a system limit, because this simple loop is also bound to 12% CPU!
while (true)
{
    var a = 1 + 2;
}
A bit of research shows that you are using ImageFactory from https://imageprocessor.org/, which wraps System.Drawing. System.Drawing itself is largely a wrapper over GDI/GDI+, which uses process-wide locks, so your attempts at multithreading will be severely bottlenecked. Try a better image library.
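For illustration, here is a minimal sketch of the same idea with SixLabors.ImageSharp (my assumption for "a better image library"; any library that avoids GDI+ would do). It only mirrors the rotate/resize/save part of the pipeline, not every ImageFactory effect.

using SixLabors.ImageSharp;            // dotnet add package SixLabors.ImageSharp
using SixLabors.ImageSharp.Processing;

// ImageSharp takes no process-wide locks, so Parallel.ForEach can
// actually spread this work across cores.
static void UniqPictureSharp(string imagePath, string outPath)
{
    using (var image = Image.Load(imagePath))
    {
        image.Mutate(ctx => ctx
            .Rotate(new Random().Next(-7, 7)) // rough analogue of the ImageFactory chain
            .Resize(200, 200));
        image.Save(outPath); // output format inferred from the file extension
    }
}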
(See Robert McKee's answer; disk IO may also play a part here, but maybe not.)
So, I haven't used Parallel.ForEach before, but it seems like you should be running your UniqPicture method in parallel for all files in a given directory. I think your approach is good here, but ultimately your hard drive is probably killing the speed of your program (and vice versa).
Have you tried running UniqPicture in a sequential loop? My concern is that your hard drive may be thrashing. In general, it's most likely that the input/output (IO) from your hard drive is taking a considerable amount of time, so the CPU waits a considerable amount of time before it can operate on the images in UniqPicture. If you could pre-load the images into memory, I would expect the CPU utilization to be much higher, if not maxing out your CPU.
In no particular order, here are some thoughts
What happens if you run this sequentially? This will max out one CPU core at most, but it may prevent hard drive thrashing. If there are 100 threads being spun up, that's a lot of requests for the hard drive to deal with at once.
You should be able to add this option to make it run sequentially (or just make it a normal loop without Parallel.ForEach); see the sketch after this list:
new ParallelOptions { MaxDegreeOfParallelism = 1 },
Maybe try 2, 3, or 4 threads and see if anything changes.
Check your hard drive utilization in Task Manager. What's the latency on the hard drive where the images are stored? What percentage does Windows report it as busy? You want the hard drive to be busy the entire time (100% usage), but you also want it to be grabbing your images with the highest throughput possible so the CPU can do its job.
A spinning hard drive (HDD) has far lower IOPS (IO operations per second) than an SSD in general. An SSD will usually have 1,000 to 100,000+ IOPS, but an HDD is around 200, I believe, and usually has much lower throughput. An SSD should help your program utilize the CPU much more.
The size of the image files could have an impact here, again relating to IO.
Or maybe see Robert McKee's answer about your threads getting bottlenecked. Maybe 13% CPU utilization is the best you can get: with 1 of your 6 cores maxed you'd see ~16.7%, so you actually aren't that far off from maxing one core already.
Ultimately, time how long it takes. CPU utilization should scale inversely with run time (higher CPU usage = lower run time), but time it just to be sure, since wall-clock time is the real benchmark.
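Here is the sketch referred to above. The question's UniqPicture and "mypath" are reused; the degree of parallelism is just the first value worth trying.

// Try MaxDegreeOfParallelism = 1 (sequential), then 2, 3, 4... and
// time each run; wall-clock time is the real benchmark.
var sw = System.Diagnostics.Stopwatch.StartNew();
Parallel.ForEach(
    Directory.GetFiles("mypath"),
    new ParallelOptions { MaxDegreeOfParallelism = 1 },
    picture => UniqPicture(picture));
sw.Stop();
Console.WriteLine($"Took {sw.Elapsed.TotalSeconds:N1} s");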
I am using this code to copy a big file:
const int CopyBufferSize = 64 * 1024;
string src = @"F:\Test\src\Setup.exe";
string dst = @"F:\Test\dst\Setup.exe";

public void CopyFile()
{
    using (Stream input = File.OpenRead(src)) // dispose the source stream too
    {
        long length = input.Length;
        byte[] buffer = new byte[CopyBufferSize];
        Stopwatch swTotal = Stopwatch.StartNew();
        Invoke((MethodInvoker)delegate
        {
            progressBar1.Maximum = (int)Math.Abs(length / CopyBufferSize) + 1;
        });
        using (Stream output = File.OpenWrite(dst))
        {
            int bytesRead = 1;
            // This will finish silently if we couldn't read "length" bytes.
            // An alternative would be to throw an exception.
            while (length > 0 && bytesRead > 0)
            {
                bytesRead = input.Read(buffer, 0, Math.Min(CopyBufferSize, buffer.Length));
                output.Write(buffer, 0, bytesRead);
                length -= bytesRead;
                Invoke((MethodInvoker)delegate
                {
                    progressBar1.Value++;
                    label1.Text = (100 * progressBar1.Value / progressBar1.Maximum).ToString() + " %";
                    label3.Text = ((int)swTotal.Elapsed.TotalSeconds).ToString() + " Seconds";
                });
            }
            Invoke((MethodInvoker)delegate
            {
                progressBar1.Value = progressBar1.Maximum;
            });
        }
        Invoke((MethodInvoker)delegate
        {
            swTotal.Stop();
            Console.WriteLine("Total time: {0:N4} seconds.", swTotal.Elapsed.TotalSeconds);
            label3.Text += ((int)swTotal.Elapsed.TotalSeconds - int.Parse(label3.Text.Replace(" Seconds", ""))).ToString() + " Seconds";
        });
    }
}
The file size is about 4 GB.
In the first 7 seconds it can copy up to 400 MB, then this hot speed calms down.
What is happening, and how can I keep this hot speed or even increase it?
Another question: when the file has been copied, Windows keeps working on the destination file for about 10 seconds.
Copy time: 116 seconds
Extra time: 10-15 seconds or even more
How can I remove or decrease this extra time?
What happens? Caching, mostly.
The OS pretends you copied 400 MiB in seven seconds, but you didn't. You just handed 400 MiB to the OS (or file system) to write in the future, and that's as much as the buffer can take. If you try to write a 400 MiB file and you pull the plug as soon as it's "done", your file will not be written. The same mechanism explains the "overtime": your application has sent everything to the buffer, but the buffer hasn't yet been written to the drive itself (either to its buffer, or, even slower, to the actual physical platter).
This is especially visible with USB flash drives, which tend to use caching heavily. This makes working with the (usually very slow) drive much more pleasant, with the trade-off that you have to wait for the OS to finish writing everything before pulling the drive out (that's why you get the "Safe remove" icon).
So it should be obvious that you can't really make the total time shorter. All you can do is try and make the user interface reflect reality a bit better, so that the user doesn't see the "first 400 MiB are so fast!" thing... but it doesn't really work well. In any case, your read->write speed is ~30 MiB/s. The OS just hides the peaks to make it easier to deal with the slow hard drive - very useful when you're dealing with lots of small files, worthless when dealing with files bigger than the buffer.
You have a bit of control over this when you use the FileStream constructor directly, instead of using File.OpenWrite - you can use FileOptions.WriteThrough to instruct the OS to avoid any caching and write directly to disk[1], giving you a better idea of the real write speed. Do note that this usually makes the total time larger, though, and it may make concurrent access even worse. You definitely don't want to use it for small files.
[1] - Haha, right. The drive usually has caching of its own, and some ignore the OS' pleas. Tough luck.
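A minimal sketch of that write-through variant, reusing the question's src, dst, and CopyBufferSize (a sketch, not a drop-in replacement for the progress-reporting loop):

using (Stream input = File.OpenRead(src))
using (Stream output = new FileStream(dst, FileMode.Create, FileAccess.Write,
                                      FileShare.None, CopyBufferSize, FileOptions.WriteThrough))
{
    // With WriteThrough the OS cache is bypassed, so throughput reflects
    // the disk itself: no 400 MB burst, but also no 10-second tail.
    input.CopyTo(output, CopyBufferSize);
}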
One thing you could try is to increase the buffer size. This really matters once the write cache can no longer keep up (as discussed in the other answer). Writing many small blocks is often slower than writing a few large blocks. Instead of 64 kB, try 1 MB, 4 MB or even bigger:
const int CopyBufferSize = 1 * 1024 * 1024; // 1 MB
// or
const int CopyBufferSize = 4 * 1024 * 1024; // 4 MB
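With the question's loop this is just a one-constant change; equivalently, as a short sketch, Stream.CopyTo can be handed the larger buffer directly:

using (Stream input = File.OpenRead(src))
using (Stream output = File.OpenWrite(dst))
{
    // 4 MB chunks: fewer, larger writes tend to suit spinning disks better.
    input.CopyTo(output, 4 * 1024 * 1024);
}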
How do you determine a device's total memory? I would like to use a sequential program flow on low-memory devices and a more asynchronous flow on higher-memory devices.
Example: on a device with 1 GB of memory my program works, but on a 512 MB device it hits an OutOfMemoryException because it caches images from multiple sites asynchronously.
The MemoryManager class (in the Windows.System namespace) has some static properties to get the current usage and limit for the application.
// Gets the app's current memory usage.
MemoryManager.AppMemoryUsage
// Gets the app's memory usage level.
MemoryManager.AppMemoryUsageLevel
// Gets the app's memory usage limit.
MemoryManager.AppMemoryUsageLimit
You can react to the limit changing using the MemoryManager.AppMemoryUsageLimitChanging event:
private void OnAppMemoryUsageLimitChanging(
    object sender, AppMemoryUsageLimitChangingEventArgs e)
{
    Debug.WriteLine(String.Format("AppMemoryUsageLimitChanging: old={0} MB, new={1} MB",
        (double)e.OldLimit / 1024 / 1024,
        (double)e.NewLimit / 1024 / 1024));
}
You can use the application's memory limit to decide how best to manage your memory allocation.
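Tying this back to the question, here is a minimal sketch of picking a flow from the current limit. The 256 MB threshold is an arbitrary assumption, and CacheSequentiallyAsync / CacheConcurrentlyAsync are hypothetical stand-ins for your two code paths.

using Windows.System;

// Subscribe once, e.g. during app startup.
MemoryManager.AppMemoryUsageLimitChanging += OnAppMemoryUsageLimitChanging;

// AppMemoryUsageLimit is in bytes; pick a caching strategy from it.
if (MemoryManager.AppMemoryUsageLimit < 256UL * 1024 * 1024)
    await CacheSequentiallyAsync();   // hypothetical: low-memory device, one site at a time
else
    await CacheConcurrentlyAsync();   // hypothetical: enough headroom to fetch in parallel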
I've read quite a few SO posts and general articles on trying to allocate over 1 GB of memory, so before this gets shot down like the others, here is some context.
This app will run as a kiosk with a dedicated machine running no unnecessary processes.
My app acquires images from a high-speed camera with a rolling shutter at 120 frames per second, at a resolution of 1920 x 1080 with a bit depth of 24. The app needs to write every single frame to disk for post-processing. The current problem I am facing is that disk I/O won't keep up with the capture rate, even though it is limited to 120 frames per second. The required disk I/O bandwidth is around 750 MB/s!
The total length of the recording needs to be at least 10 seconds (7.5GB) in raw form. Performing any on-the-fly transcoding or compression brings the frame-rate down to utterly unacceptable levels.
To work around this, I have tried the following:
Compromising on quality by reducing the bit depth at the hardware level to 16, which still needs around 500 MB/s.
Disabled all image encoding and writing raw camera data to disk. This has saved some processing time.
Creating a single 10GB file on disk and doing a sequential write-through as frames come in. This has helped most so far. All dev and production systems have a 100GB dedicated drive for this application.
Using Contig.exe from Sysinternals to defragment the file. This has had astonishing gains on non-SSD drives.
I am out of options to explore here. I am not familiar with memory-mapped files, and when trying to create them, I get an IOException saying "Not enough storage is available to process this command."
using (var file = MemoryMappedFile.CreateFromFile(@"D:\Temp.VideoCache", FileMode.OpenOrCreate, "MyMapName", int.MaxValue, MemoryMappedFileAccess.CopyOnWrite))
{
...
}
The large file I currently use requires either sequential write-through or sequential read access. Any pointers would be appreciated.
I could even force the overall recording size down to 1.8 GB, if only there were a way to allocate that much RAM. Once again, this will run on a dedicated machine with 8 GB of available memory and 100 GB of free space. However, not all production systems will have SSD drives.
A 32-bit process on a 64-bit system can allocate up to 4 GB of RAM (when built with the LARGEADDRESSAWARE flag), so it should be possible to get 1.8 GB of RAM for storing the video, but of course you need to account for loaded DLLs and a buffer until the video is compressed.
Other than that, you could use a RAMDisk, e.g. from DataRam. You just need to find a balance between how much memory the application needs and how much memory you can grant the disk. IMHO a 5 GB / 3 GB setting could work well: 1 GB for the OS, 4 GB for your application and 3 GB for the file.
Don't forget to copy the file from the RAM disk to HDD if you want it persistent.
Commodity hardware is cheap for a reason. You need faster hardware.
Buy a faster disk system. A good RAID controller and four SSDs. Put the drives into a RAID 1+0 configuration and be done with this problem.
How much money is your company planning on spending developing and testing software to push cheap hardware past its limitations? And even if you can get it to work fast enough, how much do they plan on spending to maintain that software?
Memory-mapped files don't speed up writing to a file very much...
If you have a big file, you normally don't try to map it entirely into RAM... you map a "window" of it, then "move" the window (in C#/the Windows API you create a "view" of the file starting at any location and with a certain size).
Example code (here the window is 1 MB; bigger windows are possible... on 32 bits it should be possible to allocate a 64 or 128 MB window without any problem):
const string fileName = "Test.bin";
const long fileSize = 1024L * 1024 * 16;
const long windowSize = 1024 * 1024;

if (!File.Exists(fileName))
{
    using (var file = File.Create(fileName))
    {
        file.SetLength(fileSize);
    }
}

long realFileSize = new FileInfo(fileName).Length;
if (realFileSize < fileSize)
{
    using (var file = File.Create(fileName))
    {
        file.SetLength(fileSize);
    }
}

using (var mm = MemoryMappedFile.CreateFromFile(fileName, FileMode.Open))
{
    long start = 0;
    while (true)
    {
        long size = Math.Min(fileSize - start, windowSize);
        if (size <= 0)
        {
            break;
        }
        using (var acc = mm.CreateViewAccessor(start, size))
        {
            for (int i = 0; i < size; i++)
            {
                // It is probably faster if you write the file with
                // acc.WriteArray()
                acc.Write(i, (byte)i);
            }
        }
        start += windowSize;
    }
}
Note that here I'm writing code that writes a fixed, pre-known number of bytes (fileSize)... Your code will have to be different, because you can't know the exact fileSize in advance. Still, remember: memory-mapped files don't speed up writing to a file very much.
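As a small illustration of the WriteArray() hint in the comment above (frame is a hypothetical buffer standing in for one raw camera frame; mm, start, and windowSize come from the example):

byte[] frame = new byte[windowSize]; // hypothetical: one frame's raw bytes
using (var acc = mm.CreateViewAccessor(start, frame.Length))
{
    // One bulk copy into the view instead of a per-byte loop.
    acc.WriteArray(0, frame, 0, frame.Length);
}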
My question is mainly about code optimization (at the moment).
I have created a network monitor that monitors different connections on the PC. What I have done is sniff packets at the third layer of the stack (the network layer). After capturing a packet, I am supposed to create an object in the UI for each connection. What I am doing at the moment is looking at the overall consumed bandwidth and the total data sent every second the program runs. Here is that part of the code:
temp = packet_rtxt.TextLength;
tempdr = temp / 1024;
dr_txt.Text = tempdr.ToString();
totaldata = totaldata + temp;
totaldatadisp = totaldata;
packet_rtxt.Text = "";

// unit
if (totaldata < 10485760)
{
    if (totaldata < 10240)
        unit.Text = "bytes";
    else
    {
        totaldatadisp = totaldatadisp / 1024;
        unit.Text = "KBs";
    }
}
else
{
    totaldatadisp = totaldata / 1048576; // scale the display copy, not the accumulator
    unit.Text = "MBs";
}
test.Text = totaldatadisp.ToString();
tds.Enabled = true;
}
So what I'm doing so far is writing the captured packets into a rich text box, taking the length of that rich text box and adding it to a counter for the total data, using the length as the data rate, then clearing the rich text box for the next batch of data.
The total-data-received part works fine; however, the bytes-per-second section works fine for low amounts of data, then goes crazy if the data rate is over 10 kB/s (on my PC).
Should I try optimizing the whole code, is there some other method (keep in mind I need to monitor every single connection), or do I need to use different UI controls?
Should I focus on optimization or on new approaches?
Thanks in advance.
The standard controls are not made for such a load. You need to separate the logging of data from the display of data.
I'd only show the last, say, 10 kB of text, once per second. You can still keep all of the log records in some data structure, but you don't have to push all of them to the UI.
Alternatively you can write your own text-display control but that is going to be a lot more work.
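A minimal sketch of the first suggestion, separating logging from display (WinForms assumed, to match the question; logBuffer, logLock, and the one-second timer are illustrative names):

// Producer side: the capture thread appends to an in-memory buffer,
// never directly to the UI.
readonly StringBuilder logBuffer = new StringBuilder();
readonly object logLock = new object();

void OnPacketCaptured(string text)
{
    lock (logLock) { logBuffer.Append(text); }
}

// Consumer side: a 1-second WinForms Timer pushes only the tail to the control.
void displayTimer_Tick(object sender, EventArgs e)
{
    string tail;
    lock (logLock)
    {
        int len = Math.Min(logBuffer.Length, 10 * 1024); // last ~10 kB only
        tail = logBuffer.ToString(logBuffer.Length - len, len);
    }
    packet_rtxt.Text = tail;
}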