This is what I am trying to do: I am taking byte data from an incoming socket connection. By default, all 1500 bytes are decoded and appended to my string, since that is the size I defined for the byte array. My question is: how do I pass in only part of the byte array instead of the whole 1500 bytes?
IPAddress localAddr = IPAddress.Parse(args[0]);
System.Console.WriteLine("The local IP is {0}", localAddr);
Int32 port = int.Parse(args[1]);
System.Console.WriteLine("The port is {0}", port);
TcpListener myListener = new TcpListener(localAddr, port);
byte[] bytes = new byte[1500];
string sem = "";
do
{
    int flag = 0;
    int rec = 1;
    Console.Write("Waiting");
    myListener.Start();
    Socket mySocket = myListener.AcceptSocket();

    // receiving the HL7 message
    StringBuilder sbb = new StringBuilder();
    do
    {
        // bytes = null;
        rec = mySocket.Receive(bytes, SocketFlags.None);
        // rec = mySocket.Receive(bytes);
        Console.WriteLine("rec = {0} ", rec);
        for (int i = 0; i < bytes.Length; i++)
        {
            if (bytes[i] == 0x1C)
            {
                flag = 1;
            }
        }
        sbb.Append(Encoding.ASCII.GetString(bytes));
    } while (flag == 0);
Firstly, let's be clear about what the code shown does:
it creates an empty byte array
it decodes from this array using an encoding, creating a new string
it passes this string to append to a StringBuilder
What it doesn't do is "copy bytes to a string" - not least because a string is essentially "char" data (16 bits each), not byte data. If you wanted to treat byte data as char data, it would just about work for UTF-16 (depending on the endianness), but not for ASCII.
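To illustrate (a standalone sketch, not part of the code above):

byte[] raw = { 0x41, 0x00, 0x42, 0x00 };        // the UTF-16LE bytes for "AB"
string utf16 = Encoding.Unicode.GetString(raw); // "AB" - the byte pairs map onto chars
string ascii = Encoding.ASCII.GetString(raw);   // "A\0B\0" - not the same data at all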
Re choosing how much to append:
Encoding.GetString has an overload to specify the offset and count of byte data to consider
StringBuilder.Append has an overload to specify the offset and count of char data to consider
Either or both may be useful here (see the sketch below); however, I don't think the code does what you think it does, and there are easier ways to initialise a StringBuilder.
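For example, a minimal change to the receive loop above (a sketch, relying on rec holding the return value of Socket.Receive) decodes only the bytes actually received:

// Decode only the first 'rec' bytes of the buffer instead of all 1500
sbb.Append(Encoding.ASCII.GetString(bytes, 0, rec));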
For the sake of this question, let's assume nothing is wrong with the server and let's just focus on the client and how it receives the data.
So I have this client / server setup going and this is how it currently flows.
The server sends a packet that's 5000 bytes.
All the packets are structured like this: the first 4 bytes are reserved for the length of the entire packet, the next byte is reserved for the OpCode, and the remaining bytes represent the actual payload.
Length (4 bytes) | OpCode (1 byte) | Payload (x bytes)
On the client, I'm currently doing this:
byte[] RawBuffer = new byte[Constants.BufferSize]; // Constants.BufferSize = 1024
int packLen = 0;
int totalReceived = 0;
private byte[] allData;

private void ReceiveCallback(IAsyncResult ar)
{
    var stream = (NetworkStream)ar.AsyncState;
    var received = stream.EndRead(ar);
    Debug.Print($"Data received: {received} bytes");

    // If we haven't assigned a length yet
    if (packLen <= 0)
    {
        // Use "allData" as the final buffer that we use to process later down the line.
        using (var ms = new MemoryStream(RawBuffer))
        using (var reader = new BinaryReader(ms))
        {
            packLen = reader.ReadInt32();
            allData = new byte[packLen];
        }
    }

    Buffer.BlockCopy(RawBuffer, 0, allData, 0, received);
    totalReceived += received;

    if (totalReceived == allData.Length)
    {
        Debug.Print($"We've successfully appended {allData.Length} bytes out of {packLen}");
    }

    stream.BeginRead(RawBuffer, 0, Constants.BufferSize, ReceiveCallback, _stream);
}
That only works if I send one packet from the server to the client, but if I send 5 packets, for instance, it never actually splits them up.
How do I properly split them up, so that once the full packet has been received (based on the Length header) it prints out "Received packet successfully"?
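One way to approach this (an illustrative sketch, not from the original post; the "_pending" buffer and "OnBytesReceived" method are hypothetical names) is to accumulate everything received into a single buffer and then peel off complete frames using the 4-byte length prefix:

private readonly List<byte> _pending = new List<byte>();

private void OnBytesReceived(byte[] rawBuffer, int received)
{
    // Append only the bytes actually received in this read
    for (int i = 0; i < received; i++)
        _pending.Add(rawBuffer[i]);

    // The 4-byte prefix counts the entire packet, so a frame is complete
    // once at least that many bytes have accumulated
    while (_pending.Count >= 4)
    {
        int packLen = BitConverter.ToInt32(_pending.ToArray(), 0);
        if (_pending.Count < packLen)
            break; // wait for the rest of this packet

        byte[] packet = _pending.GetRange(0, packLen).ToArray();
        _pending.RemoveRange(0, packLen);
        Console.WriteLine("Received packet successfully");
        // packet[4] is the OpCode, the payload starts at packet[5]
    }
}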
I have a device emulator which accepts data as text from a socket. The code below works fine as long as I send values from 0x00 to 0x7F, i.e. within the ASCII range (0 to 127).
The issue arises when I try to send something beyond the ASCII limit, like 0x80 or 0x81. The emulator receives 0x3F ('?') instead, which changes the whole meaning of the command because it cannot understand it.
So what might be a possible solution for sending data beyond the ASCII limit?
Send data code:
//string data = textBox1.Text;
string d1 = ConvertHex("35"); //getting exact byte in socket
byte[] buffer = Encoding.ASCII.GetBytes(d1);
clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();
ConvertHex function:
public static string ConvertHex(String hexString)
{
    try
    {
        string ascii = string.Empty;
        for (int i = 0; i < hexString.Length; i += 2)
        {
            String hs = string.Empty;
            hs = hexString.Substring(i, 2);
            uint decval = System.Convert.ToUInt32(hs, 16);
            char character = System.Convert.ToChar(decval);
            ascii += character;
        }
        return ascii;
    }
    catch (Exception ex) { Console.WriteLine(ex.Message); }
    return string.Empty;
}
But when I send anything above 7F, I get 3F in the emulator.
7F is in fact the upper bound, because that's 127 in decimal, the highest code point supported by the ASCII encoding. Code points higher than that get decoded to a question mark, which has code point 63, or 3F in hexadecimal.
That's because you're using text to transmit binary data. Don't do that. See How can I convert a hex string to a byte array? for a proper implementation of "hex string to byte array".
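For illustration, a minimal sketch along those lines (the helper name HexToBytes is just an example): convert the hex string straight to bytes and write those to the stream, with no ASCII round-trip in between.

static byte[] HexToBytes(string hex)
{
    // Two hex characters per byte; no text encoding is involved
    byte[] result = new byte[hex.Length / 2];
    for (int i = 0; i < result.Length; i++)
        result[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    return result;
}

// Usage: values of 0x80 and above survive intact because nothing is decoded as ASCII
byte[] buffer = HexToBytes("80");
clientStream.Write(buffer, 0, buffer.Length);
clientStream.Flush();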
I have a custom binary protocol response I'm receiving from a TCP server in the following format:
Response Structure
Name      Length                           Description
Header    2 bytes                          A fixed value, hex 0x0978.
Status    1 byte                           A value of 0 is Success. A value other than 0 indicates an error; a full description of each possible error is given below.
Length    4 bytes                          Unsigned integer of the total request length, including all bytes in the request (the server returns a little-endian UInt32).
Data      Variable, 0 to 1,048,576 bytes   Data sent from client to server to be encoded or decoded, depending on the operation being requested.
Checksum  1 byte                           The checksum of the bytes in the request from Header to Data (i.e. excluding the checksum byte).
The problem I have is that the data is of variable size, so I don't know what size to make the byte array that the response is read into from the stream. How can I achieve this?
EDIT: I want the first 7 bytes to also be included with the data in the final byte array.
One possible solution:
class Program
{
    private static byte[] data = new byte[8]
    {
        // header
        0,
        0,
        // status
        1,
        // message size
        8,
        0,
        0,
        0,
        // data
        1
    };

    static byte[] Read(Stream stream)
    {
        const int headerLength = 7;  // header (2) + status (1) + length (4)
        const int sizePosition = 3;  // offset of the 4-byte length field

        // Read the fixed-size part first so we know how big the whole message is
        var buffer = new byte[headerLength];
        stream.Read(buffer, 0, headerLength);

        // for BitConverter to work, the order of bytes in the array must
        // reflect the endianness of the computer system's architecture
        var size = BitConverter.ToUInt32(buffer, sizePosition);

        // The result holds the first 7 bytes followed by the variable-length remainder
        var result = new byte[size];
        Array.Copy(buffer, result, headerLength);

        // Note: on a real NetworkStream, Read may return fewer bytes than requested,
        // so these reads would need to loop until the full count has arrived
        stream.Read(result, headerLength, (int)size - headerLength);
        return result;
    }

    static void Main(string[] args)
    {
        var stream = new MemoryStream(data);
        byte[] bytes = Read(stream);
        foreach (var b in bytes)
        {
            Console.WriteLine(b);
        }
    }
}
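If the caveat above matters (it will with a real NetworkStream), a small helper along these lines (a sketch, not part of the solution above) loops until the requested number of bytes has actually arrived:

static void ReadExactly(Stream stream, byte[] buffer, int offset, int count)
{
    while (count > 0)
    {
        int read = stream.Read(buffer, offset, count);
        if (read <= 0) throw new EndOfStreamException();
        offset += read;
        count -= read;
    }
}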
I'm trying to send a long string from a python server to a C# client. The string is 230400 bytes long. I'm both sending and receiving in chunks of 64 bytes. Server code:
import socket

def initialize():
    global s
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', 1719))
    s.listen()

initialize()
while(1):
    sock, addr = s.accept()
    msgstr = generate_msg_string()  # irrelevant
    msglen = len(msgstr)
    totalsent = 0
    while totalsent < msglen:
        sent = sock.send(msgstr[totalsent:totalsent+64])
        totalsent = totalsent + sent
    sock.close()
Client code:
TcpClient tcpClient = new TcpClient();
tcpClient.Connect(ip, 1719);
NetworkStream stream = tcpClient.GetStream();
byte[] ba = new byte[230400];
byte[] buffer = new byte[64];
tcpClient.ReceiveBufferSize = 64;

int i = 0;
while (i != 230400)
{
    stream.Read(buffer, 0, 64);
    buffer.CopyTo(ba, i);
    i += 64;
}
tcpClient.Close();
I've checked a few connections in a row - the first 1523 bytes are correct and all the rest are gibberish - at least seemingly random.
Any idea what might be the cause?
while (i != 230400)
{
    stream.Read(buffer, 0, 64);
    buffer.CopyTo(ba, i);
    i += 64;
}
The fundamental error here is assuming that Read read 64 bytes. It can read any of:
0 if the socket gets closed for any reason
64 bytes if that happens to be available and it chooses to
1-63 bytes, just for fun
You are not guaranteed anything other than "non-positive if the stream is closed, else at least 1 byte and no more than 64 bytes"
You must must must read the return value of Read and only process that much of the buffer. This remains the case if you switch to Socket.Receive, by the way.
Also - why don't you just fill ba in the first place, incrementing the offset and decrementing the count each time?
int count = 230400, offset = 0, read;
byte[] ba = new byte[count];
while (count > 0 && (read = stream.Read(ba, offset, count)) > 0)
{
    offset += read;
    count -= read;
}
// if count is still non-zero, the stream ended before the full 230400 bytes arrived
if (count != 0) throw new EndOfStreamException();
It seems that I hurried with the question.
Changing TcpClient to Socket fixed the problem. The approach remained the same.
I am doing some data chunking and I'm seeing an interesting issue when sending binary data in my response. I can confirm that the length of the byte array is below my data limit of 4 megabytes, but when I receive the message, its total size is over 4 megabytes.
For the example below, I used the largest chunk size I could so I could illustrate the issue while still receiving a usable chunk.
The size of the binary data is 3,040,870 bytes on both the service side and the client (once the message is deserialized). However, I can also confirm that the data as it appears in the message is actually just under 4 megabytes (I checked this by copying the binary data out of the message and pasting it into a text file).
So, is WCF causing these issues and, if so, is there anything I can do to prevent it? If not, what might be causing this inflation on my side?
Thanks!
The usual way of sending byte[]s in SOAP messages is to base64-encode the data. This encoding takes 33% more space than the raw binary, which accounts for the size difference almost precisely: 3,040,870 bytes × 4/3 ≈ 4,054,000 characters of base64 text.
You could adjust the max size or chunk size slightly so that the end result is within the right range, or use another encoding, e.g. MTOM, to eliminate this 33% overhead.
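As a rough sketch (assuming a BasicHttpBinding; adjust for whatever binding you actually use), switching to MTOM is a one-property change on the binding:

var binding = new BasicHttpBinding
{
    // MTOM sends byte[] parameters as raw binary attachments instead of base64 text
    MessageEncoding = WSMessageEncoding.Mtom,
    MaxReceivedMessageSize = 4 * 1024 * 1024
};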
If you're stuck with SOAP, you can offset the base64 overhead Tim S. talked about by using the System.IO.Compression library in .NET - you'd compress the data first, before building and sending the SOAP message.
You'd compress with this:
public static byte[] Compress(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        // Disposing the DeflateStream flushes the final compressed block into ms
        using (var ds = new DeflateStream(ms, CompressionMode.Compress))
        {
            ds.Write(data, 0, data.Length);
        }
        // ToArray remains valid after the MemoryStream has been closed
        return ms.ToArray();
    }
}
On the receiving end, you'd use this to decompress:
public static byte[] Decompress(byte[] data)
{
    const int BUFFER_SIZE = 256;
    byte[] tempArray = new byte[BUFFER_SIZE];
    List<byte[]> tempList = new List<byte[]>();
    int count = 0;
    int length = 0;
    MemoryStream ms = new MemoryStream(data);
    DeflateStream ds = new DeflateStream(ms, CompressionMode.Decompress);

    // Read decompressed data in chunks until the stream is exhausted
    while ((count = ds.Read(tempArray, 0, BUFFER_SIZE)) > 0)
    {
        if (count == BUFFER_SIZE)
        {
            tempList.Add(tempArray);
            tempArray = new byte[BUFFER_SIZE];
        }
        else
        {
            // Partial chunk: copy only the bytes actually read
            byte[] temp = new byte[count];
            Array.Copy(tempArray, 0, temp, 0, count);
            tempList.Add(temp);
        }
        length += count;
    }

    // Stitch the chunks back together into a single array
    byte[] retVal = new byte[length];
    count = 0;
    foreach (byte[] temp in tempList)
    {
        Array.Copy(temp, 0, retVal, count, temp.Length);
        count += temp.Length;
    }
    return retVal;
}
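A quick round-trip sketch (the input here is just an example) to show how the two halves fit together:

byte[] original = Encoding.UTF8.GetBytes("some payload to send");  // example input
byte[] compressed = Compress(original);    // attach this to the outgoing SOAP message
byte[] restored = Decompress(compressed);  // on the receiving end; equals the original bytes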