All the posts and search results I've reviewed regarding WCF and uploading large files have pretty much the same answer: increase the maximums like maxReceivedMessageSize. That's great, I guess, if you're just trying to get it working, but what if you have an actual maximum that you want to enforce? How do you handle that better?
Currently the client gets a System.ServiceModel.EndpointNotFoundException saying "There was no endpoint listening", with the inner exception "The remote server returned an error: (404) Not Found." How can I catch this in my service and return a better error message, like "File exceeds maximum allowable size"?
You can pass your file as a Stream parameter to your service method. Doing so allows you to check the file size on the fly, without processing the file entirely (you should still set some sane maxReceivedMessageSize here, one that shouldn't be exceeded in most real-world cases).
Here is an example of a RESTful WCF service:
[ServiceContract]
public interface IFileService
{
    [OperationContract, WebInvoke(Method = "POST", UriTemplate = "/ProcessFile")]
    string ProcessFile(Stream file);
}
public class FileService : IFileService
{
    public string ProcessFile(Stream file)
    {
        const int bufferLength = 32;
        const int maxSize = 256;
        var buffer = new byte[bufferLength];
        int bytesRead, totalBytesRead = 0;
        do
        {
            bytesRead = file.Read(buffer, 0, bufferLength);
            totalBytesRead += bytesRead;
            if (totalBytesRead > maxSize)
                return $"File is too large - maximum {maxSize} bytes allowed.";
        }
        while (bytesRead > 0);
        return $"Total {totalBytesRead} bytes read.";
    }
}
Here is sample code for hosting this service:
var host = new ServiceHost(typeof(FileService));
host.AddServiceEndpoint(typeof(IFileService),
        new WebHttpBinding { TransferMode = TransferMode.StreamedRequest },
        "http://localhost:8080")
    .EndpointBehaviors.Add(new WebHttpBehavior());
host.Open();
Console.WriteLine("The host is opened. Press ENTER to exit...");
Console.ReadLine();
host.Close();
Note that you need TransferMode = TransferMode.StreamedRequest to be able to process large files - see this question for details.
I want to create the same message and send it with C# as I do with C++, where everything works. Note that I have a C# client where I have trouble, a C++ client where everything works fine, and a C++ server that should read messages from both the C# and C++ clients.
Here is how I send the message from C++:
void ConnectAuthserverCommand::SendLogin(tcp::socket &s, const flatbuffers::FlatBufferBuilder &builder) const {
    ClientOpcode opc = CLIENT_LOGIN_REQUEST;
    flatbuffers::FlatBufferBuilder builder2;
    auto email = builder2.CreateString("test@abv.bg");
    auto password = builder2.CreateString("test");
    auto loginRequest = Vibranium::CreateLoginRequest(builder2, email, password);
    builder2.FinishSizePrefixed(loginRequest);
    size_t size2 = builder2.GetSize();
    uint8_t *buf2 = builder2.GetBufferPointer();
    uint8_t *actualBuffer2 = new uint8_t[size2 + 2];
    actualBuffer2[0] = (opc & 0xFF); // low byte of the opcode first (little-endian)
    actualBuffer2[1] = (opc >> 8);   // then the high byte
    memcpy(actualBuffer2 + 2, buf2, size2);
    boost::asio::write(s, boost::asio::buffer(actualBuffer2, size2 + 2)); // opcode + payload
    delete[] actualBuffer2; // free the temporary buffer
}
ClientOpcode is as follows:
enum ClientOpcode : uint16_t {
    CLIENT_AUTH_CONNECTION = 0x001,
    CLIENT_LOGIN_REQUEST = 0x002,
    CLIENT_NUM_MSG_TYPES = 0x003,
};
What I do is the following: I have a ClientOpcode that I want to put in front of the FlatBuffers message, so I create an array of uint8_t that is exactly 2 bytes larger (because the size of uint16_t is 2 bytes). Then on the server I read the first 2 bytes in order to get the header; here is how I do that:
void Vibranium::Client::read_header() {
    auto self(shared_from_this());
    _packet.header_buffer.resize(_packet.header_size);
    boost::asio::async_read(socket,
        boost::asio::buffer(_packet.header_buffer.data(), _packet.header_size),
        [this, self](boost::system::error_code ec, std::size_t bytes_transferred)
        {
            if ((boost::asio::error::eof == ec) || (boost::asio::error::connection_reset == ec))
            {
                Disconnect();
            }
            else
            {
                assert(_packet.header_buffer.size() >= sizeof(_packet.headerCode));
                std::memcpy(&_packet.headerCode, &_packet.header_buffer[0], sizeof(_packet.headerCode));
                if (_packet.headerCode)
                    read_size();
                else
                    Logger::Log("UNKNOWN HEADER CODE", Logger::FatalError);
            }
        });
}
So far so good. However, I am not able to send the same correctly formatted message from the C# client, even though I send exactly the same data. Take a look:
Client authClient = GameObject.Find("Client").GetComponent<AuthClient>().client; // This is how I get Client class instance.
ClientOpcode clientOpcode = ClientOpcode.CLIENT_LOGIN_REQUEST;
var builder = new FlatBuffers.FlatBufferBuilder(1);
var email = builder.CreateString("test@abv.bg");
var password = builder.CreateString("test");
var loginRequest = LoginRequest.CreateLoginRequest(builder, email, password);
builder.FinishSizePrefixed(loginRequest.Value);
authClient.Send(builder, clientOpcode);
And here is how I actually prepend the header and send the data in C#:
public static Byte[] PrependClientOpcode(FlatBufferBuilder byteBuffer, ClientOpcode code)
{
    var originalArray = byteBuffer.SizedByteArray();
    byte[] buffer = new byte[originalArray.Length + 2];
    buffer[1] = (byte)((ushort)code / 0x0100); // high byte of the opcode
    buffer[0] = (byte)code;                    // low byte of the opcode
    Array.Copy(originalArray, 0, buffer, 2, originalArray.Length);
    return buffer;
}
public void Send(FlatBufferBuilder builder, ClientOpcode opcode)
{
    var bufferToSend = PrependClientOpcode(builder, opcode);
    if (bufferToSend.Length > MaxMessageSize)
    {
        Logger.LogError("Client.Send: message too big: " + bufferToSend.Length + ". Limit: " + MaxMessageSize);
        return;
    }
    if (Connected)
    {
        // respect max message size to avoid allocation attacks.
        if (bufferToSend.Length <= MaxMessageSize)
        {
            // add to send queue and return immediately.
            // calling Send here would be blocking (sometimes for long times
            // if other side lags or wire was disconnected)
            sendQueue.Enqueue(bufferToSend);
            sendPending.Set(); // interrupt SendThread WaitOne()
        }
    }
    else
    {
        Logger.LogWarning("Client.Send: not connected!");
    }
}
The ClientOpcode enum in C# is as follows:
public enum ClientOpcode : ushort
{
    CLIENT_AUTH_CONNECTION = 0x001,
    CLIENT_LOGIN_REQUEST = 0x002,
    CLIENT_NUM_MSG_TYPES = 0x003,
}
I think I can use ushort as a replacement for uint16_t in C#; that is why ClientOpcode is a ushort.
When I send the message, I get an error on the server saying UNKNOWN HEADER CODE. If you take a look at the C++ server code that reads the header, you'll see that this message is displayed when the server is unable to read the header code. So somehow I am unable to place the ClientOpcode header correctly in front of the TCP message sent from the C# client.
In order to find out what the differences are, I installed Wireshark on the host to capture both messages. Here they are:
This one is from the correctly working C++ client:
And this one is the dump of the C# client:
As you can see in the second image of the TCP dump, the length is bigger. The C++ message has a length of 58, while the C# message has a length of 62. Why?
The C++ client is sending data:
0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
When the C# client is sending:
0000003a0200340000000c00000008000c00040008000800000014000000040000000400000074657374000000000b00000074657374406162762e626700
The C# client is adding the extra bytes 0000003a at the front of its message. If I remove them, the messages should be the same and everything will work. Why is my C# client adding this extra data in front, and how can I fix it?
Details about the application:
Developed under Visual Studio 2019 (Windows 10)
Designed on the UWP platform with C# and the XAML language
The application receives information from a remote server. A connection via Socket is used for communication between the two parties.
The server sends messages frame by frame, and in each message we find several essential elements, each with its own size and definition, as can be seen below:
Content of each message:
- Name: ID Message / Type : UINT16 / Size : 4 bytes
- Name: ID Device/ Type : UINT8 / Size : 4 bytes
- Name: Temperature / Type : UINT16 / Size : 4 bytes
- Name: Activation / Type : BOOLEAN / Size : 4 bytes
- Name: Weather / Type : STRING[32] / Size : 16 bytes
To recover the data transmitted via the socket, the application has a background task that takes care of retrieving all the information.
Here is my code which is therefore in the background task:
StreamReader reader;
int SizeBuffer = 2048;
int SizeReceive = 0;
reader = new StreamReader(socket.InputStream.AsStreamForRead());
string result;
result = "";
while (true)
{
    char[] buffer = new char[SizeBuffer];
    SizeReceive = await reader.ReadAsync(buffer, 0, SizeBuffer);
    int i = 0;
    Debug.WriteLine("Text 1 : ");
    while (i < 2047)
    {
        Debug.WriteLine(buffer[i]);
        i++;
    }
    string data = new string(buffer);
    if (data.IndexOf("\0") >= 0 || reader.EndOfStream)
    {
        result = data.Substring(0, data.IndexOf("\0"));
        break;
    }
    result += data;
}
Debug.WriteLine("Text 2 : " + result);
dataString = result;
I am using the two Debug.WriteLine calls to see my incoming data.
That's where the problem is. For the Text 1 message, I get this kind of character: ������������������������
And for the Text2 message, I get a single character: �
How can I get my message completely and store it in each of the parameters listed above in relation to its type and corresponding size?
A black diamond with a question mark character is a placeholder for unrecognized characters. It looks like a problem with the encoding of data received from the server.
The default StreamReader constructor with one argument uses UTF-8 encoding. Maybe your server sends data in another encoding.
Try to explicitly specify the encoding using the StreamReader(stream, encoding) constructor.
Here is the solution:
try
{
    DataReader reader1 = new DataReader(socket.InputStream);
    reader1.InputStreamOptions = InputStreamOptions.Partial;
    uint numFileBytes = await reader1.LoadAsync(2048);
    byte[] byArray = new byte[numFileBytes];
    reader1.ReadBytes(byArray);
    string test = BitConverter.ToString(byArray);
    Debug.WriteLine("Conversion : " + test);
}
catch (Exception exception)
{
    Debug.WriteLine("ERROR LECTURE : " + exception.Message);
}
I wrote a C# chat program that uses a new (at least for me) system that I call a request system. I don't know if it has been created before, but for now I think of it as my creation :P
Anyhow, this system works like this:
soc receives a signal
checks the signal
if the data it just received is the number 2, the client knows that the server is about to send a chat message; if the number is 3, the client knows that the server is about to send the member list, and so on.
The problem is this: when I step through it in VS2012, it works fine and the chat behaves properly. When I run it in debug mode or just run it on my desktop, data seems to go missing, even though the code works just fine when stepped through...
Example of the code for sending & receiving messages on the client:
public void RecieveSystem()
{
    while (true)
    {
        byte[] req = new byte[1];
        soc.Receive(req);
        int requestID = int.Parse(Encoding.UTF8.GetString(req));
        if (requestID == 3)
        {
            byte[] textSize = new byte[5];
            soc.Receive(textSize);
            byte[] text = new byte[int.Parse(Encoding.UTF8.GetString(textSize))];
            soc.Receive(text);
            Dispatcher.Invoke(() => { ChatBox.Text += Encoding.UTF8.GetString(text) + "\r\n"; });
        }
    }
}
public void OutSystem(string inputText)
{
    byte[] req = Encoding.UTF8.GetBytes("3");
    soc.Send(req);
    byte[] textSize = Encoding.UTF8.GetBytes(Encoding.UTF8.GetByteCount(inputText).ToString());
    soc.Send(textSize);
    byte[] text = Encoding.UTF8.GetBytes(inputText);
    soc.Send(text);
    Thread.CurrentThread.Abort();
}
and on the server:
public void UpdateChat(string text)
{
    byte[] req = Encoding.UTF8.GetBytes("3");
    foreach (User user in onlineUsers)
        user.UserSocket.Send(req);
    byte[] textSize = Encoding.UTF8.GetBytes(Encoding.UTF8.GetByteCount(text).ToString());
    foreach (User user in onlineUsers)
        user.UserSocket.Send(textSize);
    byte[] data = Encoding.UTF8.GetBytes(text);
    foreach (User user in onlineUsers)
        user.UserSocket.Send(data);
}
public void RequestSystem(Socket soc)
{
    ~~~
    }
    else if (request == 3)
    {
        byte[] dataSize = new byte[5];
        soc.Receive(dataSize);
        byte[] data = new byte[int.Parse(Encoding.UTF8.GetString(dataSize))];
        soc.Receive(data);
        UpdateChat(Encoding.UTF8.GetString(data));
    }
    }
    catch
    {
        if (!soc.Connected)
        {
            Dispatcher.Invoke(() => { OnlineMembers.Items.Remove(decodedName + " - " + soc.RemoteEndPoint); Status.Text += soc.RemoteEndPoint + " Has disconnected"; });
            onlineUsers.Remove(user);
            Thread.CurrentThread.Abort();
        }
    }
}
}
What could be the problem?
You're assuming that you'll have one packet for each Send call. That's not stream-oriented - that's packet-oriented. You're sending multiple pieces of data which I suspect are coalesced into a single packet, and then you'll get them all in a single Receive call. (Even if there are multiple packets involved, a single Receive call could still receive all the data.)
If you're using TCP/IP, you should be thinking in a more stream-oriented fashion. I'd also encourage you to change the design of your protocol, which is odd to say the least. It's fine to use a length prefix before each message, but why would you want to encode it as text when you've got a perfectly good binary connection between the two computers?
I suggest you look at BinaryReader and BinaryWriter: use TcpClient and TcpListener rather than Socket (or at least use NetworkStream), and use the reader/writer pair to make it easier to read and write pieces of data (either payloads or primitives such as the length of messages). (BinaryWriter.Write(string) even performs the length-prefixing for you, which makes things a lot easier.)
I've been doing a lot of research on this issue but unfortunately I wasn't able to find a solution.
My problem is that I am experiencing quite a high CPU load even on powerful machines when using WCF (NetTcpBinding, streamed). To be more specific, my i5 860 sits at 20-40 percent load when handling 20 client threads. When it comes to deploying a real service (not a testing project), with around 50 real clients sending small data packages every second (around 20 KB per transfer), the CPU load is already at 80-90 percent. In the end there should be 200+ clients, but I can't imagine how this should work with such CPU loads...
For testing purposes I have set up a small project with just a simple client and server based on WCF streamed transfer using NetTcpBinding. There's already a lot of 'desperation code' in it, because I have tried hard to make it work... For my testing I used a 200 MB file that is sent to the WCF service 20 times.
Here's the contract:
[ServiceContract(Namespace = "WCFStreamTest.WCFService")]
public interface IStreamContract
{
    [OperationContract(Name = "ReceiveStream")]
    StreamMessage ReceiveStream(StreamMessage msg);

    [OperationContract(Name = "SendStream")]
    StreamMessage SendStream(StreamMessage msg);
}
The StreamMessage class used in here is just a MessageContract containing a string header and a Stream object.
The server code looks as follows:
[ServiceBehavior(IncludeExceptionDetailInFaults = false, InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = true, MaxItemsInObjectGraph = int.MaxValue)]
public class StreamService : IStreamContract
{
    public StreamMessage ReceiveStream(StreamMessage msg)
    {
        if (File.Exists(msg.Parameters))
            return new StreamMessage() { Parameters = msg.Parameters, DataStream = new System.IO.FileStream(msg.Parameters, System.IO.FileMode.Open, System.IO.FileAccess.Read) };
        return new StreamMessage();
    }

    public StreamMessage SendStream(StreamMessage msg)
    {
        if (msg.Parameters.Trim().Length > 0)
        {
            int bufferSize = 8096 * 4;
            byte[] buffer = new byte[bufferSize];
            int bytes = 0;
            while ((bytes = msg.DataStream.Read(buffer, 0, bufferSize)) > 0)
            {
                byte b = buffer[0];
                b = (byte)(b + 1);
            }
        }
        return new StreamMessage();
    }
}
The test project just uses the SendStream method for testing - and that method just reads the data stream and does nothing else.
At this point I think I'll just save you the time reading and not post the full code here. Maybe a download link to the demo project will be sufficient? (To make it work, there is one line in the client's Program.cs that needs to be changed: FileInfo fi = new FileInfo(@"<<<>>>");)
WCFStreamTest Project
I'd be really happy about any idea on how to lower CPU usage... thanks in advance for any help and tips...
Can you try making the thread sleep in the while loop and see how it goes?
while ((bytes = msg.DataStream.Read(buffer, 0, bufferSize)) > 0)
{
    byte b = buffer[0];
    b = (byte)(b + 1);
    Thread.Sleep(100);
}
If it is a video streaming service you might have to tweak the sleep interval.
I'm trying to create a transparent proxy with C#. I was able to transfer my network traffic into my proxy client and redirect it to my proxy server. It's working, but I have 2 problems.
1- It's slow; the max speed is 60 kbps. Here is how I transfer traffic between my server and the proxy client:
while (SocketConnected(tcp_link.Client) &&
       SocketConnected(_tcp.Client) &&
       !ioError)
{
    try
    {
        Thread.Sleep(1);
        if (streamLink.DataAvailable)
        {
            byte[] l_buffer = new byte[4096];
            int l_read = streamLink.Read(l_buffer, 0, l_buffer.Length);
            byte[] l_data = new byte[l_read];
            Array.Copy(l_buffer, l_data, l_data.Length);
            byte[] l_send = MBR.reverse(l_data);
            _stream.Write(l_send, 0, l_send.Length);
        }
        if (_stream.DataAvailable)
        {
            byte[] c_buffer = new byte[4596];
            int c_read = _stream.Read(c_buffer, 0, c_buffer.Length);
            byte[] c_data = new byte[c_read];
            Array.Copy(c_buffer, c_data, c_data.Length);
            byte[] c_send = MBR.reverse(c_data);
            streamLink.Write(c_send, 0, c_send.Length);
        }
    }
    catch (Exception ex)
    {
        onErrorLog(this, new ErrorLogEventArgs(ex));
        ioError = true;
    }
}
2- My other question is: when should I close my socket, and which one should be closed first? Is the HTTP server going to close the connection with my proxy server, or should I disconnect?
Sorry for my bad English.
I think it's not a problem with the logic itself but rather with how the parallelism is handled. I have used SocketAsyncEventArgs to implement a high-performance async TCP server, and it shines.
A good article can be found here.