I am trying to communicate with the traffic simulator SUMO from a C# script. SUMO is launched listening on a port and waits for a client connection.
The connection is successful. Then I try to make a simulation step, sending the corresponding command and then receiving the response.
However, when I try to receive the response, my program gets blocked when trying to execute this line:
int i = paramDataInputStream.ReadInt32() - 4;
Where paramDataInputStream is a BinaryReader. I understand that ReadInt32 is blocking because there is no data available to read, which leads me to the conclusion that one of the following is happening:
the command is not being sent properly;
the socket is not well defined;
since I took a piece of Java code and tried to translate it, there may be an error in the translation.
On SUMO's webpage they define the communication protocol. It says the following:
A TCP message acts as container for a list of commands or results.
Therefore, each TCP message consists of a small header that gives the
overall message size and a set of commands that are put behind it. The
length and identifier of each command is placed in front of the
command. A scheme of this container is depicted below:
0 7 8 15
+--------------------------------------+
| Message Length including this header |
+--------------------------------------+
| (Message Length, continued) |
+--------------------------------------+ \
| Length | Identifier | |
+--------------------------------------+ > Command_0
| Command_0 content | |
+--------------------------------------+ /
...
+--------------------------------------+ \
| Length | Identifier | |
+--------------------------------------+ > Command_n-1
| Command_n-1 content | |
+--------------------------------------+ /
In the case of the "Simulation Step" command, the identifier is 0x02 and the content is just an integer corresponding to the timestep.
Before providing more code regarding the way I send the messages, I have one doubt about the way I defined the socket, which may be the reason. I looked on the Internet while translating from Java to C#, and I found this:
this.socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
While, in the Java source code, the socket is only defined as follows:
this.socket = new Socket();
Since the communication protocol doesn't look exactly like TCP (the header in my case is only the overall length, while TCP's header is considerably more complex), maybe the way I defined the socket is not correct.
If the comments/answers state that this is not the problem, I will update with more code.
EDIT
I spent the whole day making trials and in the end nothing worked. Finally, I wrote a very simple piece of code that seems logical to me but doesn't work either:
public static void step(NetworkStream bw, int j)
{
    // 4-byte total length (10), 1-byte command length (6),
    // 1-byte command identifier (0x02), 4-byte timestep (0 = advance one step)
    byte[] bytes = { 0, 0, 0, 10, 6, 2, 0, 0, 0, 0 };
    bw.Write(bytes, 0, bytes.Length);
    bw.Flush();
}
public static void Main(String[] argv)
{
    Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.NoDelay = true;
    try
    {
        socket.Connect(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 60634));
    }
    catch (Exception localConnectException)
    {
        Console.WriteLine(localConnectException.StackTrace.ToString());
    }
    NetworkStream ns = new NetworkStream(socket);
    // BinaryWriter bw = new BinaryWriter(ns); (I tried with both BinaryWriter and NetworkStream and the result was the same)
    for (int i = 0; i < 100; i++)
    {
        step(ns, i);
    }
}
The bytes I am sending correspond to: 4 bytes (1 integer) for the total length (10 bytes), 1 byte for the command length (6 bytes), 1 byte for the command identifier (0x02), and 4 bytes (1 integer) for the content of the command, which is 0 in this case because I want to advance one timestep only.
I have sniffed the communication to check whether the bytes were sent correctly, and I even receive an ACK from SUMO, but the timestep doesn't advance and I don't receive the answer from the server.
What you have specified is an application-layer protocol. It is defined on top of TCP; thus you still use a socket to send and receive data, and you use the SUMO specification to know how to encode/decode the messages you send.
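For illustration, here is a minimal sketch of that idea for the simulation step command: send the command bytes, then read the 4-byte length header of the reply. One detail worth checking in a Java-to-C# translation: the wire format uses network byte order (big-endian), while BinaryReader.ReadInt32 assumes little-endian, hence the IPAddress.NetworkToHostOrder call. The port number is taken from the question; everything else here is an assumption, not SUMO's official client code.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class TraciSketch
{
    // Reads one TCP message and returns the payload that follows the
    // 4-byte length header. The length on the wire is big-endian, while
    // BinaryReader.ReadInt32 assumes little-endian, hence NetworkToHostOrder.
    static byte[] ReadMessage(BinaryReader reader)
    {
        int totalLength = IPAddress.NetworkToHostOrder(reader.ReadInt32());
        return reader.ReadBytes(totalLength - 4);
    }

    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        socket.Connect(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 60634));
        using var stream = new NetworkStream(socket);
        using var reader = new BinaryReader(stream);

        // Simulation step: total length 10, command length 6,
        // identifier 0x02, 4-byte timestep (0 = advance one step).
        byte[] step = { 0, 0, 0, 10, 6, 2, 0, 0, 0, 0 };
        stream.Write(step, 0, step.Length);

        byte[] response = ReadMessage(reader);
        Console.WriteLine($"Received {response.Length} payload bytes");
    }
}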
I found the mistake. The error was not in the code but in the way I launched SUMO. The "steplength" was not initialized; therefore the timesteps were being executed, but the simulation time was not changing.
I am using .NET 6.0 and recently int numBytes = client.Receive(bytes); has been taking about 3 minutes.
The Socket variable is called client.
This issue was not occurring 3 days ago.
The full code that I am using is:
string data = "";
byte[] bytes = new byte[2048];
client = httpServer.Accept();

// Read inbound connection data
while (true)
{
    int numBytes = client.Receive(bytes); // Taking about 3 minutes here
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1 || data == "")
    {
        break;
    }
}
The timing is also not always consistent. Sometimes (rarely) it is instant, and other times it can take anywhere from 3 minutes to an hour.
I have attempted the following:
Restarting my computer
Changing networks
Turning off the firewall
Attempting on a different computer
Attempting on a different computer with the firewall off
Using a wired and wireless connection
However, none of these worked; each attempt resulted in the same issue.
What I expect to happen and what used to happen is that it would continue through the code normally instead of being hung up on 1 line of code for a long time.
You could use the Socket.Poll() method to check whether data is available to be read from the socket before calling client.Receive().
If client.Poll(timeout, SelectMode.SelectRead) returns false, there is no data available to read within the timeout, and you can handle that situation accordingly.
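A minimal sketch of that idea (the one-second timeout is an arbitrary illustrative choice; note that Poll also reports the socket as readable when the peer has closed the connection, which Receive then signals with a zero-byte read):

// Wait up to 1 second (the timeout argument is in microseconds).
if (client.Poll(1_000_000, SelectMode.SelectRead))
{
    int numBytes = client.Receive(bytes);
    if (numBytes == 0)
    {
        // Zero bytes after a successful Poll means the remote end closed.
    }
}
else
{
    // No data arrived within the timeout; retry, log, or give up.
}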
I have 2 GUI applications, one in C++ and one in C#.
The applications are the same and there is a function that writes and reads from COM port.
When I run my C++ app I receive the right result from the Serial.Read which is a buffer with 24 bytes.
But when I run my C# app I get inconsistent results:
* Just a 1-byte buffer if I don't put a sleep between write and read.
* Different sizes (between 10 and 22 bytes) if I do put a sleep between write and read.
What could be the reason for that?
My C++ code:
serial.write(&c, 1, &written);
serial.read(read_buf, read_len, &received); // received = 24
My C# code:
serial.Write(temp_char, 0, 1);
received = serial.Read(read_buff, 0, read_len); // received = 1
C# with sleep:
serial.Write(temp_char, 0, 1);
Thread.Sleep(100);
received = serial.Read(read_buff, 0, read_len); // received = (10~22)
Serial ports just give you a stream of bytes; they don't know how you've written blocks of data to them. When you call read, whatever bytes have been received so far are returned; if that isn't a complete message, you need to call read repeatedly until you have the whole message.
You need to define a protocol to indicate message boundaries. This could be a special character (e.g. a newline in a text-based protocol), or you can prefix your messages with a length.
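As a minimal sketch of the "call read repeatedly" approach, assuming (as in the question) that a complete message is exactly 24 bytes and that serial is a System.IO.Ports.SerialPort:

// SerialPort.Read returns whatever has arrived so far, so loop
// until the full fixed-size message has been assembled.
byte[] read_buff = new byte[24];
int offset = 0;
while (offset < read_buff.Length)
{
    // Blocks until at least one byte is available (or ReadTimeout expires).
    offset += serial.Read(read_buff, offset, read_buff.Length - offset);
}
// read_buff now holds the complete 24-byte message.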
Sorry if the title is hard to understand, but I don't really know how to put it briefly. Let me explain.
I am currently developing a LAN file-sharing university project.
Everyone running the application has to notify the others that they are available for file transfer. My idea is to join a multicast group upon launch and send a sort of "keep alive" packet in multicast. By keep alive I mean that this packet tells all receivers that the sender is still available for transferring files, if other users want to. So, e.g., my app sends this packet every 50 s or so, and other people in my network running the application receive it and keep me in their memory.
This is what the client does (this is just an example, not actual code); at this point the app has already joined the multicast group and set the destination end point:
// Sending first data, e.g.: my ip address...
client.Send(buffer, buffer.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my real name...
client.Send(buffer2, buffer2.Length, MulticastAddressEndPoint);
// Sending other data, e.g.: my username...
client.Send(buffer3, buffer3.Length, MulticastAddressEndPoint);
From the official documentation I read that Send:
Sends a UDP datagram to the host at the specified remote endpoint.
So I'm guessing that I am sending 3 datagrams.
My listener thread is something like this:
IPEndPoint from = new IPEndPoint(IPAddress.IPv6Any, port);
while (true)
{
    // Receive the ip
    byte[] data = client.Receive(ref from);
    // Receive the first name
    data = client.Receive(ref from);
    // Receive the username
    data = client.Receive(ref from);
}
Now for the real question:
Let's suppose that two people send these 3 packets at exactly the same time (with their own values of course, so different IP addresses etc.), that no packet is dropped, and that each sender's packets are delivered in the correct sequence (first IP, then name, then username). The question is: I have absolutely NO guarantee that I will receive the packets in this order:
packet1_A | packet2_A | packet3_A | packet1_B | packet2_B | packet3_B
instead of this
packet1_A | packet1_B | packet2_A | packet2_B | packet3_A | packet3_B
am I right?
The only thing I can do is pack all the information into one single byte array and then send it, right? This seems the most reasonable thing to do, but what if my information exceeds the 1500-byte Ethernet MTU? My datagram will be split across multiple packets, so would I experience the same "interference", or will the fragments belonging to the same datagram be detected and reassembled before the datagram is delivered to the OS?
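As a sketch of the single-datagram approach: serialize the three fields into one buffer and send it with a single Send call, so fields from different senders can never interleave. The field names and helper here are illustrative, not from the question's code:

using System.IO;
using System.Text;

// BinaryWriter.Write(string) length-prefixes each string, so the
// receiver can split the three fields apart again with BinaryReader.
static byte[] PackAnnouncement(string ipAddress, string realName, string userName)
{
    using var ms = new MemoryStream();
    using var writer = new BinaryWriter(ms, Encoding.UTF8);
    writer.Write(ipAddress);
    writer.Write(realName);
    writer.Write(userName);
    writer.Flush();
    return ms.ToArray();
}

// One datagram per announcement:
// byte[] buffer = PackAnnouncement(myIp, myName, myUserName);
// client.Send(buffer, buffer.Length, MulticastAddressEndPoint);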
I want to explain my understanding with an example. Let stream be an abstract buffered network stream with a 4-byte buffer, and let there be some byte-by-byte writing process (drawn like a FIFO).
write->| | | | |----->net
The ----->net operation is very slow, and we want to minimize the number of such operations. This is where the buffer helps.
write->| 1 | | | |----->net
write->| 5 | 1 | | |----->net
write->| 12 | 5 | 1 | |----->net
write->| 7 | 12 | 5 | 1 |----->net
Here, or maybe some time earlier, the .NET runtime or the operating system decides to complete the write operation and flushes the data:
write->| | | | |----->net 7 12 5 1->
So the write-> operations become very fast, and, at worst after a delay when the stream is closed, the data gets sent to the remote host.
In code it can look like this:
using(networkStream)
for(var i = 0; i < 4; i++)
networkStream.WriteByte(getNextByte());
Am I right? If the getNextByte operation stalls the thread, can I count on the data being passed to the stream in the background (asynchronously), so that WriteByte doesn't block my code, or blocks only once in every four writes? Or do I have to implement a circular buffer myself, pass data into it, and launch an additional thread that reads from that buffer and writes the data to the network stream?
I also very much hope that a buffered network stream can increase the speed of data receiving.
read<-| | 1 | 5 | 12 |<-----net <-7
using(networkStream)
while((b = networkStream.ReadByte()) >= 0)
process(b);
If I synchronously get and process bytes from a buffered network stream, can I count on the data being transmitted into the stream's buffer in the background (asynchronously), so that ReadByte doesn't block my code, or blocks only once in every four reads?
P.S. I know that standard NetworkStream stream is buffered.
Let me describe my concrete case. I have to implement striping of a stream: I read data from a remote client's stream and want to pass it to several streams to remote servers, alternating between them (I call this forking; see image a), like this:
var i = 0;
while((c = inStream.Read(buf, 0, portions[i])) > 0)
{
outStreams[i].Write(buf, 0, c);
i = (i + 1) % outStreams.Length;
}
Image b shows the merging process, coded in the same way.
I don't want to force the remote client to wait while the program performs slow Write operations to the remote servers. So I tried to manually organize background writing from the network into inStream and background reading from outStreams to the network. But maybe I don't have to care, as long as I'm using buffered streams? Maybe buffered streams eliminate such stalls in the read-write process?
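For what it's worth, one way to decouple the fast reads from the slow writes without relying on the streams' internal buffering is an explicit producer-consumer queue. Below is a minimal sketch using BlockingCollection; the chunk size and queue bound are arbitrary illustrative choices:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// Producer: reads from the client as fast as possible, queueing chunks.
// Consumer: drains the queue and performs the slow writes to the servers.
static void Pump(Stream inStream, Stream[] outStreams, int[] portions)
{
    var queue = new BlockingCollection<(int streamIndex, byte[] chunk)>(boundedCapacity: 64);

    var writer = Task.Run(() =>
    {
        foreach (var (streamIndex, chunk) in queue.GetConsumingEnumerable())
            outStreams[streamIndex].Write(chunk, 0, chunk.Length); // slow writes happen here
    });

    var buf = new byte[8192];
    int i = 0, c;
    while ((c = inStream.Read(buf, 0, portions[i])) > 0)
    {
        var chunk = new byte[c];
        Array.Copy(buf, chunk, c);
        queue.Add((i, chunk)); // blocks only if the writer falls 64 chunks behind
        i = (i + 1) % outStreams.Length;
    }

    queue.CompleteAdding();
    writer.Wait();
}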
I have an Arduino microcontroller with a Sparkfun WiFly shield.
I built a simple program in C#/.NET that connects to the Arduino using System.Net.Sockets:
Socket Soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

public void SendMsg(string msg)
{
    try
    {
        byte[] buffer = StrToByteArray(msg);
        if (Soc.Connected)
            Soc.Send(buffer);
        else
        {
            Soc.Connect(this.remoteIP);
            Soc.Send(buffer);
        }
    }
    catch (Exception e) { }
}
On the Arduino I have:
while(SpiSerial.available() > 0) {
byte b = SpiSerial.read();
Serial.println(b);
}
When a socket connection does a handshake, I get: "*OPEN*" and when closed, I get: "*CLOS*".
The problem is that I get the message one byte at a time, and sometimes I don't get the full message within one pass of the while loop.
So if I use the code I showed above on the Arduino, my serial terminal looks like:
*
O
P
E
N
*
T
E
S
T
*
C
L
O
S
*
So how can I figure out the message the PC is trying to send?
I know I somehow need to use a special byte to mark the end of my message (a special byte that I won't use in the message itself, only to mark its end).
But how can I do it? And which byte should I use?
You need to design your own protocol here. You should define a byte (preferably one that won't occur in the data) to indicate "start", and then you have three choices:
follow the start byte with a "length" byte indicating how much data to read
define an "end" byte that marks the end of your data
read data until you have a complete message that matches one of the ones you expect
The third option is the least extensible and flexible, of course: if you already have a message "OPEN", you can't then add a new message "OPENED", for instance.
If you take the second option and define an "end" byte then you need to worry about escaping that byte if it occurs within your data (or use another byte that is guaranteed not to be in your data).
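For illustration, here is one simple escaping scheme for the "end byte" option; the particular values 0x7E and 0x7D are arbitrary choices, not something your data dictates:

using System.Collections.Generic;

const byte End = 0x7E;    // marks the end of a message
const byte Escape = 0x7D; // prefix for literal End/Escape bytes in the data

// Wrap a payload in a frame, escaping any occurrence of the special bytes.
static byte[] Frame(byte[] payload)
{
    var framed = new List<byte>();
    foreach (byte b in payload)
    {
        if (b == End || b == Escape)
            framed.Add(Escape); // the next byte is literal, not a marker
        framed.Add(b);
    }
    framed.Add(End); // terminate the frame
    return framed.ToArray();
}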
Looking at your current example, a good starting point would be to simply prefix each message with its length.
If you want to support long messages you can use a 2 byte length prefix, then you read the first 2 bytes to get the length and then you continue reading from the socket until you have read the number of bytes indicated by the length prefix.
Once you have read a complete message you are then back to expecting to read the length prefix for the next message and so on until the communication is terminated by one of the parties.
Of course, in between all this you need to check for error conditions, like the socket on one end being closed prematurely, and decide how to handle the potential partial messages that can result from the premature closing of the socket.
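A sketch of the receiving side of such a protocol over a socket: a 2-byte big-endian length prefix followed by the payload, with a loop to handle partial reads and a check for premature close (the exact prefix format is an assumption to match the description above):

using System.IO;
using System.Net.Sockets;

// Read exactly count bytes, looping because Receive may return fewer.
static byte[] ReceiveExactly(Socket socket, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int n = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (n == 0)
            throw new IOException("Socket closed in the middle of a message.");
        offset += n;
    }
    return buffer;
}

// Read one message: 2-byte big-endian length prefix, then the payload.
static byte[] ReceiveMessage(Socket socket)
{
    byte[] prefix = ReceiveExactly(socket, 2);
    int length = (prefix[0] << 8) | prefix[1];
    return ReceiveExactly(socket, length);
}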