Downloaded size is larger than the real size - C#

I have written a program in C# with the Microsoft API that shows the amount of data you have downloaded, but it does not report the correct downloaded size. For example, if I download a 10 MB file, the program shows that 11 MB was downloaded.
I also checked the network status window, and it reports the same value as my program.
Why is that? Does other software, for example what my ISP uses, measure it the same way?
// Read the interface-level counter of received bytes.
objIPInterfaceStatistics2 = objNetworkInterface[numberinterface].GetIPStatistics();
long newBytesreceived = objIPInterfaceStatistics2.BytesReceived;
if (checkdata)
{
    checkdata = false;
    newBytesreceived = 0;
}
// Accumulate the bytes received since the previous poll.
long newUsage = newBytesreceived - oldBytesreceived2;
trafficusage += newUsage;
// Convert bytes to megabytes; the float literal avoids truncating integer division.
float converttrafficusage = trafficusage / 1000000f;
oldBytesreceived2 = objIPInterfaceStatistics2.BytesReceived;
worker.ReportProgress((int)Math.Ceiling(converttrafficusage));
Thread.Sleep(1000);

I can only assume that the two values are calculated differently.
In general, 1 megabyte (MB) refers to 1,000,000 bytes, whereas 1 mebibyte (MiB) refers to 2^20 bytes = 1,048,576 bytes. Normally, megabytes are used because they are easier to calculate with. For example, a file that is 10 MiB (10,485,760 bytes) is about 10.5 MB, and rounding that up gives the 11 you are seeing.
To be sure, some example code showing where the downloaded bytes are being calculated would be good.
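As a concrete illustration (a minimal sketch with a made-up byte count, not taken from the question), the same number of bytes yields noticeably different figures depending on which divisor is used:

// Hypothetical values: one byte count expressed in decimal MB and in binary MiB.
long totalBytes = 10485760;                    // exactly 10 MiB
double megabytes = totalBytes / 1000000.0;     // ~10.49 decimal megabytes
double mebibytes = totalBytes / 1048576.0;     // 10.00 binary mebibytes
Console.WriteLine(megabytes.ToString("F2") + " MB vs " + mebibytes.ToString("F2") + " MiB");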

Related

Saving the Tiff Image file using JPEG compression is time consuming

I am using Aspose.Imaging 19.11.0.0 to manipulate TIFF images with JPEG compression.
If I have a 10 MB+ TIFF file (with 50 pages), rotating all of its pages takes 30 to 40 minutes and the application goes into "not responding" mode.
In my code, if the TIFF file has 50 pages, the client application iterates over the pages with a foreach loop and calls the rotate method on the server side for each page.
I know that one factor in the slowness is sending each page separately instead of all pages at once, but when I debugged the code I found that tiffImage.Save(Stream, tiffOptions) also takes a long time for each page.
Below is the server-side code that rotates a page using JPEG compression.
The RotatePageUsingAspose() method below is called once per page: for example, if I select only the 3rd page out of 50, it is called a single time with pageNumber = 3 and a rotation of 90 degrees.
Even in that case, rotating and saving that single page takes almost 1 minute, which is far too slow.
Server-side code for rotation:
private void RotatePageUsingAspose(int pageNo, RotationDegrees rotationDegree)
{
    float angleOfRotation = (float)rotationDegree;

    // Auto mode is flexible and efficient.
    Cache.CacheType = CacheType.Auto;

    // The default cache max value is 0, which means that there is no upper limit.
    Cache.MaxDiskSpaceForCache = 1073741824; // 1 gigabyte
    Cache.MaxMemoryForCache = 1073741824;    // 1 gigabyte

    // Changing the following property will greatly affect performance.
    Cache.ExactReallocateOnly = false;

    TiffOptions tiffOptions = new TiffOptions(TiffExpectedFormat.TiffJpegRgb);

    // Set RGB color mode.
    tiffOptions.Photometric = TiffPhotometrics.Rgb;
    tiffOptions.BitsPerSample = new ushort[] { 8, 8, 8 };

    try
    {
        using (TiffImage tiffImage = (TiffImage)Image.Load(Stream))
        {
            TiffFrame selectedFrame = tiffImage.Frames[pageNo - 1];
            selectedFrame.Rotate(angleOfRotation);
            tiffImage.Save(Stream, tiffOptions);
        }
    }
    finally
    {
        tiffOptions.Dispose();
    }
}
I have raised the same question with the Aspose.Imaging team, but they have not provided a solution yet.
Kindly suggest improvements to the above code so that the pages can be saved more efficiently.
If possible, please describe the approach to achieve this.
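Since the question already identifies the per-page round trip (and therefore a full load and Save per page) as one source of the slowdown, a batched variant of the same method is sketched below. This is only an illustration assembled from the calls already used above (Image.Load, Frames, Rotate, Save); the RotatePagesUsingAspose name and the rotations dictionary are hypothetical, not part of the existing application.

// Hypothetical sketch: rotate several pages in one load/save cycle instead of one per page.
// "rotations" maps a 1-based page number to its rotation angle in degrees.
private void RotatePagesUsingAspose(IDictionary<int, float> rotations)
{
    TiffOptions tiffOptions = new TiffOptions(TiffExpectedFormat.TiffJpegRgb);
    tiffOptions.Photometric = TiffPhotometrics.Rgb;
    tiffOptions.BitsPerSample = new ushort[] { 8, 8, 8 };
    try
    {
        using (TiffImage tiffImage = (TiffImage)Image.Load(Stream))
        {
            foreach (KeyValuePair<int, float> entry in rotations)
            {
                tiffImage.Frames[entry.Key - 1].Rotate(entry.Value);
            }
            // A single Save call after all requested pages have been rotated.
            tiffImage.Save(Stream, tiffOptions);
        }
    }
    finally
    {
        tiffOptions.Dispose();
    }
}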

How to increase the allowed RAM usage for IIS Express?

First of all, I have seen this question:
IIS Express - increse memory limit
But it is not a duplicate, because all of its answers point to the 64-bit version of IIS Express. I need to support 32-bit, and a 32-bit process can use up to 2 GB of RAM.
I was debugging a strange problem, so I created a console application, which works fine:
// Simplified version of my application logic
var trash = new List<int>();
long mbUsed = 0;
while (mbUsed < 600)
{
    for (var i = 0; i < 100000; i++)
    {
        trash.Add(i);
    }
    GC.Collect();
    mbUsed = Process.GetCurrentProcess().WorkingSet64 / 1024 / 1024;
    Console.WriteLine(mbUsed + " MB used");
}
// Creating an image
var bitmap = new Bitmap(2000, 4000);
Basically, it fills the RAM to 600 MB and then tries to create a large image.
Now if I paste the same code into an MVC action, surprise, I get an OutOfMemoryException.
If I read it correctly, I am using less than 500 MB.
So how can I use more RAM? For normal IIS I can change this on the application pool.
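One thing that can be checked directly from inside the MVC application (a small diagnostic sketch, not a fix; the controller and action names are made up) is whether the IIS Express worker process is really running as 32-bit and how much memory it has committed:

using System;
using System.Diagnostics;
using System.Web.Mvc;

public class DiagnosticsController : Controller
{
    // Hypothetical action: reports the bitness and working set of the hosting process.
    public ContentResult MemoryInfo()
    {
        Process process = Process.GetCurrentProcess();
        string info = string.Format(
            "64-bit process: {0}, working set: {1} MB",
            Environment.Is64BitProcess,
            process.WorkingSet64 / 1024 / 1024);
        return Content(info);
    }
}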

How to efficiently set the number of bytes to download for HttpWebRequest?

I'm currently working on a file downloader project. The application is designed to support resumable downloads. All downloaded data and its metadata (download ranges) are stored on disk immediately after each call to ReadBytes. Let's say that I used the following code snippet:
var reader = new BinaryReader(response.GetResponseStream());
var buffr = reader.ReadBytes(_speedBuffer);
DownloadSpeed += buffr.Length;//used for reporting speed and zeroed every second
Here _speedBuffer is the number of bytes to read per call, which is set to a default value.
I have tested the application in two ways. First, I downloaded a file hosted on a local IIS server; the speed is great. Second, I tried to download a copy of the same file from the internet (the place it was originally downloaded from); my connection is quite slow. What I observed is that if I increase _speedBuffer, the download speed from the local server is good, but speed reporting for the internet copy becomes sluggish. If I decrease _speedBuffer, the reported download speed for the internet copy is good, but not for the local server. So I thought, why not change _speedBuffer at runtime? But every custom algorithm I came up with for adjusting the value was inefficient; the download speed was still slow compared to other downloaders.
Is this approach OK?
Am I doing it the wrong way?
Should I stick with the default value for _speedBuffer (byte count)?
The problem with ReadBytes in this case is that it attempts to read exactly that number of bytes, and it only returns short when there is no more data to read.
So if you receive a packet containing 99 bytes of data, a call to ReadBytes(100) will block until the next packet arrives with the missing byte.
I wouldn't use a BinaryReader at all:
byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        DownloadSpeed += bytes; // used for reporting speed and zeroed every second
        // On each iteration, "bytes" bytes of the buffer have been filled; store these to disk.
    }
    // bytes was 0: end of stream
}
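To make the "store these to disk" step concrete, one possible shape is to keep a FileStream open next to the response stream and append each chunk as it is read. This is only a sketch; targetPath and the use of FileMode.Append are assumptions, not details from the question:

byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
using (FileStream output = new FileStream(targetPath, FileMode.Append, FileAccess.Write))
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytes);  // persist exactly the bytes read in this iteration
        DownloadSpeed += bytes;          // speed counter, zeroed every second elsewhere
    }
}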

Virtual COM port is very slow compared to a terminal

I'm writing an application where I need to send a file (~600 kB) to another unit via a virtual serial port.
When I send it using a terminal application (TeraTerm) it takes less than 10 seconds, but using my program it takes 1-2 minutes.
My code is very simple:
port.WriteTimeout = 30000;
port.ReadTimeout = 5000;
port.WriteBufferSize = 1024 * 1024; // Buffer size larger than file size
...
fs = File.OpenRead(filename);
byte[] filedata = new byte[fs.Length];
fs.Read(filedata, 0, Convert.ToInt32(fs.Length));
...
// Write the file one byte at a time.
for (int iter = 0; iter < filedata.Length; iter++)
{
    port.Write(filedata, iter, 1);
}
Calling port.Write with the entire file length always seems to cause a write timeout for reasons I don't understand, so I'm writing 1 byte at a time.
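For reference, a middle ground between a single huge Write and one byte per call is to write in fixed-size chunks using the same SerialPort.Write overload already used above; this is only a sketch, and the 1024-byte chunk size is an arbitrary assumption:

// Hypothetical chunked write: send the file in 1 KB slices instead of single bytes.
const int chunkSize = 1024;
for (int offset = 0; offset < filedata.Length; offset += chunkSize)
{
    int count = Math.Min(chunkSize, filedata.Length - offset);
    port.Write(filedata, offset, count);
}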
Solved it; here are the details, in case someone else finds this and it gives some hints about what is wrong.
I was reading the file incorrectly: somehow the application was using \r\n as newlines when transferring. The file itself is an Intel .hex file whose checksums were calculated using \r as the newline.
The checksum errors caused the other device to ACK very slowly, which, combined with the PC application now having to handle the checksum errors, made the transfer extremely slow.
If you run into similar errors, I recommend using a software serial port snoop to monitor what is actually being sent.

Compress/decompress string on .NET server that was encoded with lz-string.js on client

I am using the LZString.compressToBase64 function of lz-string.js and need to decompress/compress the data on the server side.
The obvious solution seems to be lz_string_csharp, but I am concerned about this statement:
If you use just the regular Javascript 'compress' function then depending on the data in the string, it will not decompress correctly on the C# side.
However, if you are using the 'compress' function built into this C# version, then you should be ok to use the regular 'decompress' function included.
and about this reported issue: possible bug in c# version of compressToBase64
The full description in the link you give says that you should be able to use 'compressToUTF16' and it will always work, rather than just 'compress', which won't always work.
I've tested it personally and seen that it works.
(Though I changed the Context_Compress_Data.str field from a string to a StringBuilder in the C# file, because it was too slow otherwise. After that, an 8 MB JSON file took only 8 seconds and compressed to 7% of its original size.)
We fixed this by adding enc1 = enc2 = enc3 = enc4 = 0; between the two rows shown below (line 580 in the original file, before the StringBuilder change).
From what I remember, the bug was caused by the values of enc1, enc2, etc. not being reset at the start of each loop iteration, so a new iteration sometimes carried over wrong values from the previous round.
i += 3;
enc1 = enc2 = enc3 = enc4 = 0; // <-- the added reset
enc1 = (int)(Math.Round(chr1)) >> 2;
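For completeness, the intended round trip would look roughly like the lines below. This assumes the C# port mirrors the JavaScript method names, which is an assumption on my part; check the actual class in lz_string_csharp before relying on it.

// Client side (JavaScript): var payload = LZString.compressToUTF16(jsonString);
// Server side (C#), method names assumed to mirror lz-string.js:
string json = LZString.decompressFromUTF16(payload);   // decompress what the client sent
string reply = LZString.compressToUTF16(resultJson);   // compress the response the same way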
