Buffered stream details - c#

I want to explain my understanding with an example. Let the stream be any abstract buffered network stream with a 4-byte buffer, and let there be some byte-by-byte writing process (drawn below like a FIFO).
write->| | | | |----->net
The ----->net operation is very slow, and we want to minimize how often it happens. This is where the buffer helps.
write->| 1 | | | |----->net
write->| 5 | 1 | | |----->net
write->| 12 | 5 | 1 | |----->net
write->| 7 | 12 | 5 | 1 |----->net
Here (or maybe somewhat earlier) the .NET runtime or the operating system decides to complete the write operation and flushes the data:
write->| | | | |----->net 7 12 5 1->
So the write-> operations become very fast, and, at the latest after a pause when the stream is closed, the data gets sent to the remote host.
In code it could look like this:
using(networkStream)
for(var i = 0; i < 4; i++)
networkStream.WriteByte(getNextByte());
Am I right? If the getNextByte operation stalls the thread, can I count on the data being passed to the stream in the background (asynchronously), so that WriteByte doesn't stall all the code? Or will it merely stall a quarter as often? Or do I have to implement some circular buffer myself, pass data into it, and launch an additional thread that reads from that buffer and writes to the network stream (a sketch of that idea is below)?
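In case I do need such a manual setup, here is a rough sketch of the idea (this is not something NetworkStream does on its own): a bounded queue drained by a background task, so that getNextByte and the actual network writes can overlap. networkStream and getNextByte are the names from the snippet above.
var queue = new BlockingCollection<byte>(boundedCapacity: 4096); // System.Collections.Concurrent

// Background consumer: the slow network writes happen only on this task.
var writer = Task.Run(() =>
{
    foreach (var b in queue.GetConsumingEnumerable())
        networkStream.WriteByte(b);
});

// Producer: Add only blocks when the 4096-byte queue is full.
for (var i = 0; i < 4; i++)
    queue.Add(getNextByte());

queue.CompleteAdding(); // signal end of data
writer.Wait();          // let the background writer drain the queue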
I also very much hope that a buffered network stream can speed up receiving data.
read<-| | 1 | 5 | 12 |<-----net <-7
using(networkStream)
while((b = networkStream.ReadByte()) >= 0)
process(b);
If I synchronously get and process bytes from a buffered network stream, can I count on the data being transferred into the stream's buffer in the background (asynchronously), so that ReadByte doesn't stall all the code? Or will it merely stall a quarter as often? (See the sketch below.)
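To make the question concrete, here is what I mean by wrapping the stream (a minimal sketch; as far as I understand, BufferedStream refills its buffer on demand, synchronously inside the read call, rather than prefetching in the background). process is the method from the snippet above.
using (var buffered = new BufferedStream(networkStream, 4096))
{
    int b;
    // Roughly one read from the underlying stream per 4096 bytes;
    // that read still happens synchronously, inside ReadByte.
    while ((b = buffered.ReadByte()) >= 0)
        process(b);
}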
P.S. I know that the standard NetworkStream is buffered.
Let me describe my concrete case. I have to implement striping of a stream: I read data from a stream from a remote client and want to pass it on to several streams to remote servers, alternating between them (I call it forking; image a), like this:
var i = 0;
while((c = inStream.Read(buf, 0, portions[i])) > 0)
{
outStreams[i].Write(buf, 0, c);
i = (i + 1) % outStreams.Length;
}
Image b shows the merging process, coded in the same way.
I don't want to force the remote client to wait while the program performs slow Write operations to the remote servers. So I tried to manually organize background writing from the network into inStream and background reading from outStreams to the network (a sketch of that idea is below). But maybe I don't have to care about this at all while I'm using buffered streams? Maybe buffered streams already eliminate such stalls between the read and write processes?
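Here is a rough sketch of the kind of manual background writing I mean (not a claim that buffered streams do this on their own): one bounded queue and one writer task per output stream, so a slow outStreams[i].Write only stalls reading from inStream when that particular queue is full. inStream, outStreams and portions are the names from the snippet above.
var queues = new BlockingCollection<byte[]>[outStreams.Length]; // System.Collections.Concurrent
var writers = new Task[outStreams.Length];
for (var k = 0; k < outStreams.Length; k++)
{
    var stream = outStreams[k];
    var queue = queues[k] = new BlockingCollection<byte[]>(boundedCapacity: 16);
    // Each writer drains its own queue into its own output stream.
    writers[k] = Task.Run(() =>
    {
        foreach (var chunk in queue.GetConsumingEnumerable())
            stream.Write(chunk, 0, chunk.Length);
    });
}

var i = 0;
int c;
var buf = new byte[portions.Max()]; // System.Linq
while ((c = inStream.Read(buf, 0, portions[i])) > 0)
{
    var chunk = new byte[c];
    Array.Copy(buf, chunk, c);
    queues[i].Add(chunk); // blocks only if this particular queue is full
    i = (i + 1) % outStreams.Length;
}

foreach (var q in queues) q.CompleteAdding();
Task.WaitAll(writers);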

Related

UWP: Garbage collection Gen 2 still happens inside a NoGCRegion

In my UWP app I need to run a critical section for a few seconds, during which I need to be sure that the garbage collector is not invoked.
So, I call
const long TOTAL_GC_ALLOWED = 1024L * 1024 * 240; // 240 MB, it seems max allowed is 244 for Workstations 64 bits
try
{
bool res = GC.TryStartNoGCRegion(TOTAL_GC_ALLOWED);
if (!res)
s_log.ErrorFormat("Cannot allocate noGC Region!");
}
catch (Exception)
{
s_log.WarnFormat("Cannot start NoGCRegion");
}
Unfortunately, even though GC.TryStartNoGCRegion() returns true, I still see the same number of Gen 2 garbage collections as when I don't call this method.
Please also note that I am testing on a machine with 16 GB of RAM, of which only 9 GB were in use by the whole OS while I was running my tests.
What am I doing wrong?
How can I suppress the GC (for a limited amount of time)?
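For reference, my understanding of the documented begin/end pattern is roughly the sketch below (DoCriticalWork is just a placeholder for my critical section; the region also ends early if more than TOTAL_GC_ALLOWED bytes are allocated inside it):
if (GC.TryStartNoGCRegion(TOTAL_GC_ALLOWED))
{
    try
    {
        DoCriticalWork(); // must allocate less than TOTAL_GC_ALLOWED in total
    }
    finally
    {
        // Only end the region if it is still active (it may already have
        // been exited, e.g. because the allocation budget was exceeded).
        if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
            GC.EndNoGCRegion();
    }
}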

c#: How to safely send multicast packets without interference?

Sorry if the title is hard to understand, but I don't really know how to put it briefly. Let me explain.
I am currently developing a LAN-filesharing university project.
Everyone running the application has to notify the others that they are available for file transfer. My idea is to join a multicast group upon launch and send a sort of "keep alive" packet via multicast: by keep alive I mean that this packet tells all the receivers that the sender is still available for transferring files, if other users want to. So, e.g., while I'm running the app it sends this packet every 50 s or so, and other people on my network running my application receive this packet and keep me in their memory.
This is what the client does (just an example, not actual code); at this point the app has already joined the multicast group and set the destination end point:
// Sending first data, e.g.: my ip address...
client.Send(buffer, buffer.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my real name...
client.Send(buffer2, buffer2.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my username...
client.Send(buffer3, buffer3.Length, MulticastAddressEndPoint);
From the official documentation I read that Send:
Sends a UDP datagram to the host at the specified remote endpoint.
So I'm guessing that I am sending 3 datagrams.
My listener thread is something like this:
IPEndPoint from = new IPEndPoint(IPAddress.IPv6Any, port);
while(true)
{
// Receive the ip
byte[] data = client.Receive(ref from);
// Receive the first name
data = client.Receive(ref from);
// Receive the username
data = client.Receive(ref from);
}
Now for the real question:
Let's suppose that two people send these 3 packets at exactly the same time (each with their own values, of course, so different IP addresses etc.), no packet is dropped, and each sender's packets are delivered in the correct sequence (first IP, then name, then username). The question is: I have absolutely NO guarantee that I will receive the packets in this order:
packet1_A | packet2_A | packet3_A | packet1_B | packet2_B | packet3_B
instead of this
packet1_A | packet1_B | packet2_A | packet2_B | packet3_A | packet3_B
am I right?
The only thing I can do is pack all the information into one single byte array and send that, right? (A sketch of that approach is below.) This seems the most reasonable thing to do, but what if my information exceeds the 1500-byte Ethernet MTU? My datagram will be sent in multiple packets, so would I experience the same "interference", or will the packets belonging to the same datagram be joined together again and delivered to the OS as one datagram?
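A minimal sketch of the single-datagram approach, assuming the three pieces of information are already available as byte arrays (ipBytes, nameBytes and userBytes are hypothetical names), with length prefixes so the receiver can split them apart again:
using (var ms = new MemoryStream())
using (var w = new BinaryWriter(ms))
{
    w.Write(ipBytes.Length);   w.Write(ipBytes);
    w.Write(nameBytes.Length); w.Write(nameBytes);
    w.Write(userBytes.Length); w.Write(userBytes);

    var payload = ms.ToArray();
    // One Send call => one UDP datagram, so the three fields cannot be
    // interleaved with another sender's fields.
    client.Send(payload, payload.Length, MulticastAddressEndPoint);
}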

Which PerformanceCounter in the "Process" category returns the memory size for a process instance?

I wrote an app that monitors currently running processes.
In the following snippet I get all the instances in the "Process" category.
var category = new PerformanceCounterCategory("Process");
var instanceNames = category.GetInstanceNames();
A little later, I get all the counters for a single instance.
It looks like this.
var counters = category.GetCounters(instanceName);
I see that every instance in this category contains 28 counters.
Below is the list of counters.
% Processor Time
% User Time
% Privileged Time
Virtual Bytes Peak
Virtual Bytes
Page Faults/sec
Working Set Peak
Working Set
Page File Bytes Peak
Page File Bytes
Private Bytes
Thread Count
Priority Base
Elapsed Time
ID Process
Creating Process ID
Pool Paged Bytes
Pool Nonpaged Bytes
Handle Count
IO Read Operations/sec
IO Write Operations/sec
IO Data Operations/sec
IO Other Operations/sec
IO Read Bytes/sec
IO Write Bytes/sec
IO Data Bytes/sec
IO Other Bytes/sec
Working Set - Private
So, the question: which counter provides information about the memory used by the current instance?
I think it's a simple question, but I cannot find an answer. I would be grateful if someone could tell me.
If we assume that it is "Working Set":
ProcessName: SkypeC2CPNRSvc | ProcessId: 2500 Process:
Group: Process | Process: SkypeC2CPNRSvc | Name: Working Set | Value: 311296
This value is calculated in the following way: prfc.NextValue()/1024
In Task Manager I see 316 K for this process.
"Working Set". "Working Set - Private" and "Private Bytes" are all counters that describe the memory used by the current process.
You can see this link for a good discussion on the differences:
What is private bytes, virtual bytes, working set?
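For example, reading those three counters for a single process instance might look like this (a minimal sketch; the instance name is the one from the example above):
var workingSet = new PerformanceCounter("Process", "Working Set", "SkypeC2CPNRSvc");
var privateWs  = new PerformanceCounter("Process", "Working Set - Private", "SkypeC2CPNRSvc");
var privBytes  = new PerformanceCounter("Process", "Private Bytes", "SkypeC2CPNRSvc");

Console.WriteLine("Working Set:           {0:N0} K", workingSet.NextValue() / 1024);
Console.WriteLine("Working Set - Private: {0:N0} K", privateWs.NextValue() / 1024);
Console.WriteLine("Private Bytes:         {0:N0} K", privBytes.NextValue() / 1024);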
I would use TraceEvent to start a real-time session and activate the provider Microsoft-Windows-Kernel-Memory with keyword 0x40 (KERNEL_MEM_KEYWORD_MEMINFO_EX).
Windows then raises an event every 0.5 s with this data:
Count, ProcessID, WorkingSetPageCount, CommitPageCount, VirtualSizeInPages, PrivateWorkingSetPageCount
Parse it in whatever way you need.

C# and SUMO - Unsuccessful communication with sockets

I am trying to communicate with the traffic simulator SUMO from a C# script. SUMO is launched listening on a port and waits for a client connection.
The connection is successful. Then I try to make a simulation step by sending the corresponding command and then receiving the response.
However, when I try to receive the response, my program gets blocked when trying to execute this line:
int i = paramDataInputStream.ReadInt32() - 4;
where paramDataInputStream is a BinaryReader. I understand that ReadInt32 blocks because there is no data available to read, which leads me to the conclusion that one of the following is happening:
the command is not being sent properly;
the socket is not well defined;
since I took a piece of Java code and tried to translate it, maybe there is some error in the translation.
SUMO's webpage defines the communication protocol. It says the following:
A TCP message acts as container for a list of commands or results.
Therefore, each TCP message consists of a small header that gives the
overall message size and a set of commands that are put behind it. The
length and identifier of each command is placed in front of the
command. A scheme of this container is depicted below:
0 7 8 15
+--------------------------------------+
| Message Length including this header |
+--------------------------------------+
| (Message Length, continued) |
+--------------------------------------+ \
| Length | Identifier | |
+--------------------------------------+ > Command_0
| Command_0 content | |
+--------------------------------------+ /
...
+--------------------------------------+ \
| Length | Identifier | |
+--------------------------------------+ > Command_n-1
| Command_n-1 content | |
+--------------------------------------+ /
In the case of the "Simulation Step command", the identifier is 0x02 and the content is just an integer corresponding to the timestep (Click here for more detail).
Before providing more code about the way I send the messages, I have one doubt about the way I defined the socket; maybe that is the reason. While trying to translate from Java to C# I looked on the Internet and found this:
this.socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
whereas in the Java source code the socket is defined simply as:
this.socket = new Socket();
Since the communication protocol doesn't look exactly like TCP (the header in my case is only the overall length, while TCP's header is considerably more complex), maybe the way I defined the socket is not correct.
If the comments/answers state that this is not the problem, I will update with more code.
EDIT
I spent the whole day experimenting and in the end nothing worked. Finally, I wrote some very simple code which seems logical to me but doesn't work either:
public static void step(NetworkStream bw, int j)
{
byte[] bytes = { 0, 0, 0, 10, 6, 2, 0, 0, 0, 0 };
bw.Write(bytes, 0, bytes.Length);
bw.Flush();
}
public static void Main(String[] argv)
{
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.NoDelay = true;
try
{
socket.Connect(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 60634));
}
catch (Exception localConnectException)
{
Console.WriteLine(localConnectException.StackTrace.ToString());
}
NetworkStream ns = new NetworkStream(socket);
// BinaryWriter bw = new BinaryWriter(ns); (I tried both BinaryWriter and NetworkStream and the result was the same)
for (int i = 0; i < 100; i++)
{
step(ns,i);
}
}
The bytes I am sending correspond to: 4 bytes (1 integer) for the total length (which is 10 bytes), 1 byte for the command length (which is 6 bytes), 1 byte for the command identifier (which is 0x02), and 4 bytes (1 integer) for the content of the command, which is 0 in this case because I want to advance 1 timestep only.
I have sniffed the communication to check whether the bytes were sent correctly, and I even receive an ACK from SUMO, but the timestep doesn't advance and I don't receive an answer from the server.
What you have specified is an application-layer protocol defined on top of TCP. Thus you still use a socket for the communication to send/receive data, and use the SUMO specification to know how to encode/decode the messages you send.
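For example, reading one response container under the framing described above could look like the sketch below. It assumes the length prefix is a big-endian (network byte order) Int32, as in the Java DataInputStream code being ported; BinaryReader.ReadInt32 reads little-endian, hence the byte-order conversion:
static byte[] ReadMessage(NetworkStream ns)
{
    var reader = new BinaryReader(ns);
    // Total message length, including the 4-byte header itself.
    int totalLength = IPAddress.NetworkToHostOrder(reader.ReadInt32());
    // The remaining bytes hold the commands/results of this container.
    return reader.ReadBytes(totalLength - 4);
}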
I found the mistake. The error was not in the code but in the way I launched SUMO. The "steplength" was not initialized; the timesteps were being executed, but the simulation time was not advancing because of that.

Calculate upload transfer speed problem

I have implemented a file transfer rate calculator to display kB/sec for an upload process in my app; however, with the following code I seem to get 'bursts' in my kB/s readings just after a file starts uploading.
This is the relevant portion of my streaming code; it streams a file in 1024-byte chunks to a server using HttpWebRequest:
using (Stream httpWebRequestStream = httpWebRequest.GetRequestStream())
{
if (request.DataStream != null)
{
byte[] buffer = new byte[1024];
int bytesRead = 0;
Debug.WriteLine("File Start");
var duration = new Stopwatch();
duration.Start();
while (true)
{
bytesRead = request.DataStream.Read(buffer, 0, buffer.Length);
if (bytesRead == 0)
break;
httpWebRequestStream.Write(buffer, 0, bytesRead);
totalBytes += bytesRead;
double bytesPerSecond = 0;
if (duration.Elapsed.TotalSeconds > 0)
bytesPerSecond = (totalBytes / duration.Elapsed.TotalSeconds);
Debug.WriteLine(((long)bytesPerSecond).FormatAsFileSize());
}
duration.Stop();
Debug.WriteLine("File End");
request.DataStream.Close();
}
}
Now an output log of the upload process and associated kB/sec readings are as follows:
(You will note a new file starts and ends with 'File Start' and 'File End')
File Start
5.19 MB
7.89 MB
9.35 MB
11.12 MB
12.2 MB
13.13 MB
13.84 MB
14.42 MB
41.97 kB
37.44 kB
41.17 kB
37.68 kB
40.81 kB
40.21 kB
33.8 kB
34.68 kB
33.34 kB
35.3 kB
33.92 kB
35.7 kB
34.36 kB
35.99 kB
34.7 kB
34.85 kB
File End
File Start
11.32 MB
14.7 MB
15.98 MB
17.82 MB
18.02 MB
18.88 MB
18.93 MB
19.44 MB
40.76 kB
36.53 kB
40.17 kB
36.99 kB
40.07 kB
37.27 kB
39.92 kB
37.44 kB
39.77 kB
36.49 kB
34.81 kB
36.63 kB
35.15 kB
36.82 kB
35.51 kB
37.04 kB
35.71 kB
37.13 kB
34.66 kB
33.6 kB
34.8 kB
33.96 kB
35.09 kB
34.1 kB
35.17 kB
34.34 kB
35.35 kB
34.28 kB
File End
As you will notice, the 'burst' I am talking about starts at the beginning of every new file, peaking in MBs and then evening out. Is it normal for an upload to burst like this? My upload speed typically won't go higher than 40 kB/sec here, so that can't be right.
This is a real issue: when I take an average of the last 5-10 seconds for on-screen display, it really throws things off, producing a result of around ~3 MB/sec!
Any ideas on whether I am approaching this problem the best way, and what I should do? :S
Graham
Also: why can't I do 'bytesPerSecond = (bytesRead / duration.Elapsed.TotalSeconds)' and move duration.Start and duration.Stop into the while loop to get accurate results? I would have thought this would be more accurate, but each speed reads as 900 bytes/sec, 800 bytes/sec, etc.
The way I do this is:
Accumulate all the bytes transferred in a long.
Then, every second, I check how much has been transferred, so I only trigger the code that records the speed once per second. Your while loop is going to iterate many, many times in one second on a fast network.
Depending on the speed of your network, you may need to check the bytes transferred in a separate thread or function. I prefer doing this with a Timer so I can easily update the UI.
EDIT:
From looking at your code, I'm guessing what you're doing wrong is that you don't take into account that one iteration of the while(true) loop is not 1 second.
EDIT2:
Another advantage of only doing the speed check once per second is that things will go much quicker. In cases like this, updating the UI can be the slowest thing you are doing, so if you try to update the UI on every iteration, that is most likely your slowest point and will produce an unresponsive UI.
You're also correct that you should average out the values so you don't get the "Microsoft minutes" effect. I normally do this in the Timer function with something like this:
//Global variables
long gTotalDownloadedBytes;
long gCurrentDownloaded; // Where you add up the downloaded/uploaded bytes until the speed check is done.
int gTotalDownloadSpeedChecks;
//Inside function that does speedcheck
gTotalDownloadedBytes += gCurrentDownloaded;
gTotalDownloadSpeedChecks++;
long AvgDwnSpeed = gTotalDownloadedBytes / gTotalDownloadSpeedChecks; // Assumes 1 speed check per second.
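For example, hooked up to a once-per-second timer that could look like the sketch below (only a rough outline; it assumes the transfer loop increments gCurrentDownloaded, e.g. with Interlocked.Add):
var timer = new System.Timers.Timer(1000);
timer.Elapsed += (sender, e) =>
{
    // Take the bytes moved during the last second and reset the counter.
    long thisSecond = Interlocked.Exchange(ref gCurrentDownloaded, 0);
    gTotalDownloadedBytes += thisSecond;
    gTotalDownloadSpeedChecks++;
    long avgBytesPerSecond = gTotalDownloadedBytes / gTotalDownloadSpeedChecks;
    Console.WriteLine("current: {0:F1} kB/s, average: {1:F1} kB/s",
                      thisSecond / 1024.0, avgBytesPerSecond / 1024.0);
};
timer.Start();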
There are many layers of software and hardware between you and the system you're sending to, and several of those layers have a certain amount of buffer space available.
When you first start sending, you can pump out data quite quickly until you fill those buffers - it's not actually getting all the way to the other end that fast, though! After you fill up the send buffers, you're limited to putting more data into them at the same rate it's draining out, so the rate you see will drop to the underlying networking sending rate.
All, I think I have fixed my issue by adjusting the 5-10 second averaging to wait one second to account for the burst. Not the best solution, but it lets the connection sort itself out and allows me to capture a smooth transfer rate.
It appears from my network traffic that it really is bursting, so there is nothing I could do differently in code to stop this.
I will still be interested in more answers before I hesitantly accept my own.
