So I have two smart bulbs from Yeelight and I am trying to turn them on/off at the same time. You can achieve that by sending TCP messages to them.
This works completely fine from NodeJS with this little script:
const net = require("net")
send_to_lamp("192.168.178.83", '{"id": 1, "method": "set_power", "params": ["on", "sudden"]}\r\n', answ => console.log(answ))
send_to_lamp("192.168.178.84", '{"id": 1, "method": "set_power", "params": ["on", "sudden"]}\r\n', answ => console.log(answ))
console.log("sent")
function send_to_lamp(ip, body, callback) {
var con = net.createConnection({port: 55443, host: ip}, () => {
con.write(`asd\r\n`) // send two requests to wake up connection
con.write(body)
})
con.on("data", data => callback(data.toString())) // pass the bulb's reply to the callback
}
But now the weird behavior comes into play: when I try to do the exact same thing from C#,
only the first bulb that is contacted reacts to the message. For example, when executing this code:
using System.Net.Sockets;
Thread t1 = new Thread(con1);
t1.Start();
await Task.Delay(1000);
Thread t2 = new Thread(con2);
t2.Start();
await Task.Delay(5000);
async static void con1() {
await TryConnect("192.168.178.83", "{\"id\": 1, \"method\": \"set_power\", \"params\": [\"on\", \"sudden\"]}\r\n");
}
async static void con2() {
await TryConnect("192.168.178.84", "{\"id\": 1, \"method\": \"set_power\", \"params\": [\"on\", \"sudden\"]}\r\n");
}
static async Task TryConnect(String server, String message)
{
try
{
using (var client = new TcpClient()){
await client.ConnectAsync(server, 55443);
using (var netstream = client.GetStream())
using (var writer = new StreamWriter(netstream))
using (var reader = new StreamReader(netstream))
{
writer.AutoFlush = true;
netstream.ReadTimeout = 3000;
await writer.WriteLineAsync(message);
Console.WriteLine("sent {0}", message);
}
}
}
catch (ArgumentNullException e)
{
Console.WriteLine("ArgumentNullException: {0}", e);
}
catch (SocketException e)
{
Console.WriteLine("SocketException: {0}", e);
}
}
Only the lamp with the IP 192.168.178.83 turns itself on.
It gets even weirder. When I simply swap the IPs in the con1/con2 functions, so that the lamp with the IP 192.168.178.84 is contacted first, only it reacts and the lamp with the IP 192.168.178.83 does nothing.
I have tried several different methods of sending TCP messages from C# (async, multi-threading, a dedicated class for each TcpClient, ...).
Anyone got an idea what I can try to get this to work?
Wireshark log during NodeJS script
Wireshark log during C# script
Disclaimer: I do not know much about TCP connections and how they should look in Wireshark, but seen purely from the software side of things this really seems nonsensical.
In the C# log you have RST packets because you are wrapping everything in using blocks, which automatically close the TCP connection as soon as they end. In Node.js you would probably have to trigger the connection to close manually.
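So one thing to try is to keep the connection open until the bulb has actually replied before the using blocks dispose it. A minimal sketch of that (the method name SendToLampAsync is made up, and it assumes the bulb answers every command with a single JSON line):

// Sketch only: SendToLampAsync is a made-up name; assumes the bulb replies with
// one JSON line per command. Needs using System.Net.Sockets (plus System.IO and
// System.Threading.Tasks if implicit usings are off).
static async Task SendToLampAsync(string server, string message)
{
    using (var client = new TcpClient())
    {
        await client.ConnectAsync(server, 55443);
        using (var netstream = client.GetStream())
        using (var writer = new StreamWriter(netstream) { AutoFlush = true })
        using (var reader = new StreamReader(netstream))
        {
            await writer.WriteAsync(message);          // message already ends in \r\n
            var reply = await reader.ReadLineAsync();  // wait for the bulb's answer
            Console.WriteLine("reply from {0}: {1}", server, reply);
        }
    } // only now is the connection closed (and the RST sent)
}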
It's best to test your application using a TCP terminal (https://www.hw-group.com/software/hercules-setup-utility). Use it to create a local TCP server with listening port 55443.
Example settings
Now point both connections in your C# application to the IP 127.0.0.1 and change the transmitted content so you can tell the frames apart. For example, for the second lamp, set "id": 2.
This way you will see whether your application establishes the connections correctly and whether the transferred data is complete.
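If you'd rather not install a separate tool, a throwaway listener along these lines does the same job (this is just a sketch of mine; the class name FakeLamp is made up): it accepts connections on 127.0.0.1:55443 and prints whatever the client sends, so you can see whether both frames arrive.

// Sketch only: a stand-in for the TCP terminal, listening on 127.0.0.1:55443.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class FakeLamp
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 55443);
        listener.Start();
        Console.WriteLine("listening on 127.0.0.1:55443");
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = Task.Run(async () =>
            {
                using (client)
                using (var stream = client.GetStream())
                {
                    var buffer = new byte[1024];
                    int read;
                    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                        Console.WriteLine("received: " + Encoding.ASCII.GetString(buffer, 0, read));
                }
            });
        }
    }
}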
Related
I'm working on a reader program. It is based on WinForms.
I need code where the WinForms app sends some data via TCP (port 3573) on demand (by demand I mean when the program receives the command GET via TCP).
I'm kind of a newbie, so this topic looks pretty hard for me because it combines all of these: threads, TCP read, TCP send, event handlers...
So I need help here with complete code or examples of how to implement it.
I've tried some example code from the internet but none of it works (threading, TCP reader and TCP sender, event handling by the TCP reader).
On the TCP reader we receive GET and then we send some string, let's say "hello world", via the TCP sender.
Sockets are really hard to get right, and the API is just... nasty. Since that is just asking for mistakes, I'm going to recommend using the "pipelines" API, which is far more aligned to modern async code and is easier to get right (and has far better options in terms of frame processing). So; here's a pipelines example;
note that this requires Pipelines.Sockets.Unofficial, which is on nuget via:
<PackageReference Include="Pipelines.Sockets.Unofficial" Version="2.0.22" />
(adding this will automatically add all the other pieces you need)
using Pipelines.Sockets.Unofficial;
using System;
using System.IO.Pipelines;
using System.Net;
using System.Text;
using System.Threading.Tasks;
static class Program
{
static async Task Main()
{
var endpoint = new IPEndPoint(IPAddress.Loopback, 9042);
Console.WriteLine("[server] Starting server...");
using (var server = new MyServer())
{
server.Listen(endpoint);
Console.WriteLine("[server] Starting client...");
Task reader;
using (var client = await SocketConnection.ConnectAsync(endpoint))
{
reader = Task.Run(() => ShowIncomingDataAsync(client.Input));
await WriteAsync(client.Output, "hello");
await WriteAsync(client.Output, "world");
Console.WriteLine("press [return]");
Console.ReadLine();
}
await reader;
server.Stop();
}
}
private static async Task ShowIncomingDataAsync(PipeReader input)
{
try
{
while (true)
{
var read = await input.ReadAsync();
var buffer = read.Buffer;
if (buffer.IsEmpty && read.IsCompleted) break; // EOF
Console.WriteLine($"[client] Received {buffer.Length} bytes; marking consumed");
foreach (var segment in buffer)
{ // usually only one segment, but can be more complex
Console.WriteLine("[client] " + Program.GetAsciiString(segment.Span));
}
input.AdvanceTo(buffer.End); // "we ate it all"
}
}
catch { }
}
private static async Task WriteAsync(PipeWriter output, string payload)
{
var bytes = Encoding.ASCII.GetBytes(payload);
await output.WriteAsync(bytes);
}
internal static unsafe string GetAsciiString(ReadOnlySpan<byte> span)
{
fixed (byte* b = span)
{
return Encoding.ASCII.GetString(b, span.Length);
}
}
}
class MyServer : SocketServer
{
protected override Task OnClientConnectedAsync(in ClientConnection client)
=> RunClient(client);
private async Task RunClient(ClientConnection client)
{
Console.WriteLine($"[server] new client: {client.RemoteEndPoint}");
await ProcessRequests(client.Transport);
Console.WriteLine($"[server] ended client: {client.RemoteEndPoint}");
}
private async Task ProcessRequests(IDuplexPipe transport)
{
try
{
var input = transport.Input;
var output = transport.Output;
while (true)
{
var read = await input.ReadAsync();
var buffer = read.Buffer;
if (buffer.IsEmpty && read.IsCompleted) break; // EOF
Console.WriteLine($"[server] Received {buffer.Length} bytes; returning it, and marking consumed");
foreach (var segment in buffer)
{ // usually only one segment, but can be more complex
Console.WriteLine("[server] " + Program.GetAsciiString(segment.Span));
await output.WriteAsync(segment);
}
input.AdvanceTo(buffer.End); // "we ate it all"
}
}
catch { }
}
}
I could write this with raw sockets, but it would take a lot more code to show best practice and avoid problems - all of that ugliness is already hidden inside "pipelines".
Output:
[server] Starting server...
[server] Starting client...
[server] new client: 127.0.0.1:63076
press [return]
[server] Received 5 bytes; returning it, and marking consumed
[server] hello
[server] Received 5 bytes; returning it, and marking consumed
[client] Received 5 bytes; marking consumed
[client] hello
[server] world
[client] Received 5 bytes; marking consumed
[client] world
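To get from this echo example to the GET -> "hello world" exchange in the question, the server-side loop only needs to look at the decoded text before replying. A rough sketch of that (mine, not part of the pipelines API; the method name HandleGetRequests is made up), assuming each ReadAsync happens to deliver one whole command; real code would add explicit framing such as newline-delimited messages:

// Sketch only: assumes one command per ReadAsync; needs "using System.Buffers;"
// for buffer.ToArray() in addition to the usings from the example above.
private static async Task HandleGetRequests(IDuplexPipe transport)
{
    var input = transport.Input;
    var output = transport.Output;
    while (true)
    {
        var read = await input.ReadAsync();
        var buffer = read.Buffer;
        if (buffer.IsEmpty && read.IsCompleted) break;           // EOF
        var text = Encoding.ASCII.GetString(buffer.ToArray());   // copy + decode
        if (text.Contains("GET"))
            await output.WriteAsync(Encoding.ASCII.GetBytes("hello world"));
        input.AdvanceTo(buffer.End);                             // consume everything we looked at
    }
}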
I have a server that needs to get instructions to run processes for clients on another machine.
The clients send a job message, the server processes the job, and later sends back the results.
I tried using the NetMQ Request-Response pattern (see below)
This works nicely for one client, BUT if a second client sends a request before the previous client's job is finished, I get an error.
I really need to be able to receive ad-hoc messages from clients, and send results when they are completed. Clearly, I am using the wrong pattern, but reading the ZeroMQ docs has not highlighted a more appropriate one.
namespace Utils.ServerMQ
{
class ServerMQ
{
public static void Go()
{
using (var responseSocket = new ResponseSocket("@tcp://*:393"))
{
while (true)
{
Console.WriteLine("Server waiting");
var message = responseSocket.ReceiveFrameString();
Console.WriteLine("Server Received '{0}'", message);
//System.Threading.Thread.Sleep(1000);
var t2 = Task.Factory.StartNew(() =>
{
RunProcMatrix(message, responseSocket);
});
}
}
}
public static void RunProcMatrix(object state, ResponseSocket responseSocket)
{
var process = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = Path.Combine(@"H:\Projects\Matrix\Matrix\bin\Debug\", "Matrix001.exe"),
Arguments = (string)state,
WindowStyle = ProcessWindowStyle.Normal,
CreateNoWindow = false
}
};
process.Start();
process.WaitForExit();
responseSocket.SendFrame((string)state);
}
}
}
You want a ROUTER socket on the server side, so it can receive multiple requests at a time. (Guide) REQ sockets on the client side are still fine unless the server may arbitrarily push data to them, then they need to be DEALER sockets.
Note that for sockets beyond REQ/RESP you need to manually handle the message envelope (the first frame of the message indicating its destination). Guide
The 0MQ docs are incredibly dense... I don't blame you for not intuiting this from them :)
This example from the NetMQ docs is full ROUTER-DEALER: https://netmq.readthedocs.io/en/latest/router-dealer/#router-dealer, you can take just the router side and it should work the same though.
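A rough sketch of what the ROUTER side could look like when adapted to the Go()/RunProcMatrix code above (this is mine, not the docs' example). It assumes the clients stay plain REQ sockets, so every request arrives as [identity][empty][body] and the reply has to be sent back in the same shape:

// Sketch only: ROUTER server handling the message envelope manually.
using System;
using NetMQ;
using NetMQ.Sockets;

class ServerMQ
{
    public static void Go()
    {
        using (var router = new RouterSocket("@tcp://*:393")) // "@" = bind
        {
            while (true)
            {
                var request = router.ReceiveMultipartMessage();
                var identity = request[0];                  // who asked
                var body = request.Last.ConvertToString();  // the job text
                Console.WriteLine("Server Received '{0}'", body);

                // Run the job here (or hand it to a worker). Note that NetMQ
                // sockets are not thread-safe: if the job runs on another thread,
                // marshal the result back to this thread (e.g. via a NetMQQueue
                // attached to a NetMQPoller) before sending the reply.
                var result = body; // placeholder for the real job result

                var reply = new NetMQMessage();
                reply.Append(identity);     // route back to the right client
                reply.AppendEmptyFrame();   // REQ clients expect the delimiter frame
                reply.Append(result);
                router.SendMultipartMessage(reply);
            }
        }
    }
}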
From various examples, I have set up my named pipe loop like so:
NamedPipeServerStream pipeServer;
private void initPipeServer(string pipeName)
{
Task.Factory.StartNew(async () =>
{
pipeServer = new NamedPipeServerStream(pipeName, PipeDirection.InOut);
var threadId = Thread.CurrentThread.ManagedThreadId;
pipeServer.WaitForConnection();
onNotify("Client connected on thread[{0}]", threadId);
try
{
var pipeReader = new StreamReader(pipeServer);
var pipeWriter = new StreamWriter(pipeServer);
// how do I call and get a response from outside this loop?
// var result = await pipeReader.ReadLineAsync();
// await pipeWriter.WriteLineAsync("stuff");
// LOOP HERE SOMEHOW WAITING FOR ASYNC MESSAGES AND RETURNING RESPONSE
while (no body quits)
{
let communication happen, when a hotkey is pressed in Host, it
will use the named pipe to request and receive information from the client
}
pipeServer.Disconnect();
}
catch (IOException ex)
{
onNotify(ex);
}
finally
{
pipeServer.Close();
initPipeServer(pipeName);
}
}, TaskCreationOptions.LongRunning);
}
but how would I send and receive a response from outside the loop instead of just having a predetermined exchange?
Update. I think I'm looking for something like Async two-way communication with Windows Named Pipes (.Net) except I'm not using WCF and I'm not on a server. This is a WPF application.
Update 2. Ideal flow (a rough sketch of this exchange follows after the list):
Host sets up named pipe server
Client connects to pipe server
When hotkey is pressed in Host, request is sent to client for information
Client returns information to Host
Proceed until either Host or Client quits, if Client quits, Host resets pipe server and waits for connection
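One way to get the "request from outside the loop" part is to park the connected reader/writer in fields and expose a single request/response method that the hotkey handler calls. A minimal sketch under those assumptions (class and method names such as PipeHost and RequestFromClientAsync are made up):

// Sketch only: the connection loop just keeps the pipe alive; requests are made
// from outside via RequestFromClientAsync.
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

class PipeHost
{
    private StreamReader _reader;
    private StreamWriter _writer;
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1); // one exchange at a time

    public async Task RunServerAsync(string pipeName)
    {
        while (true) // if the client quits, reset and wait for a new connection
        {
            using (var server = new NamedPipeServerStream(pipeName, PipeDirection.InOut,
                       1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous))
            {
                await server.WaitForConnectionAsync();
                _reader = new StreamReader(server);
                _writer = new StreamWriter(server) { AutoFlush = true };

                // Nothing to do here: requests happen from the hotkey handler.
                while (server.IsConnected)
                    await Task.Delay(500);

                _reader = null;
                _writer = null;
            }
        }
    }

    // Called from e.g. the hotkey handler on the Host side.
    public async Task<string> RequestFromClientAsync(string request)
    {
        await _gate.WaitAsync();
        try
        {
            if (_writer == null) return null;           // no client connected
            await _writer.WriteLineAsync(request);      // send the request line
            return await _reader.ReadLineAsync();       // wait for the reply line
        }
        catch (IOException) { return null; }            // pipe broke mid-exchange
        finally { _gate.Release(); }
    }
}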
I recently asked a related question, "In C# how do I have a socket continue to stay open and accept new data?", and somewhat solved it, but now I'm having trouble getting my server to receive data from the same client more than once.
I have written a simple socket server as a Windows Forms app in VS that has a button that calls a receive function:
public void Receive()
{
try
{
byte[] bytes = new byte[256];
received = s1.Accept().Receive(bytes);
receivedText.Text += System.Text.ASCIIEncoding.ASCII.GetString(bytes);
}
catch (SocketException e)
{
Console.WriteLine("{0} Error code: {1}.", e.Message, e.ErrorCode);
return;
}
}
It works the first time I send from my client, but if I try sending anything else and click on receive, my server just loops and never picks up the new data. However, if I send from somewhere else, or restart the connection from my client, I'm able to send data.
I would like to have my server able to receive any amount of data from the same client(s) at a time. Please ask if you need more code/details. Not sure what's relevant as I'm pretty new to socket programming.
You must call Accept() only once per client, not every time you want to receive new data. Accept() basically waits for a client to connect to your server socket s1 and returns a new socket to send/receive data with this client. So here, each time your Receive() function is called, your socket waits for another client to connect; that's why it works only once.
Here is an example (the code comes from your previous question):
s1.Bind(endP);
s1.Listen(10);
Socket s2 = s1.Accept(); // Waits for a client to connect and return a socket, s2, to communicate with him
while (true) {
Receive(s2);
}
...
Receive() function :
public void Receive(Socket s)
{
try
{
byte[] bytes = new byte[256];
received = s.Receive(bytes);
receivedText.Text += System.Text.Encoding.ASCII.GetString(bytes, 0, received); // decode only the bytes actually received
}
catch (SocketException e)
{
Console.WriteLine("{0} Error code: {1}.", e.Message, e.ErrorCode);
return;
}
}
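If you keep that while (true) loop on the UI thread it will freeze the form, so something like the following sketch (mine, not part of the answer above) moves the receive loop to a background thread and marshals the text back with Invoke. It assumes s2 is the socket returned by Accept() and receivedText is the TextBox from your form:

// Sketch only: background receive loop for a Windows Forms app.
// Needs using System, System.Net.Sockets, System.Text and System.Threading.
private void StartReceiving(Socket s2)
{
    var t = new Thread(() =>
    {
        try
        {
            var bytes = new byte[256];
            int received;
            while ((received = s2.Receive(bytes)) > 0)   // 0 means the client closed the connection
            {
                string text = Encoding.ASCII.GetString(bytes, 0, received);
                receivedText.Invoke((Action)(() => receivedText.Text += text)); // update UI on its own thread
            }
        }
        catch (SocketException) { /* connection reset; nothing more to receive */ }
    });
    t.IsBackground = true;
    t.Start();
}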
I have created a simple TCP server and it works pretty well.
The problems start when we switch to stress tests: since our server should handle many concurrent open sockets, we created a stress test to check this.
Unfortunately, it looks like the server is choking and cannot respond to new connection requests in a timely fashion once the number of concurrent open sockets is around 100.
We have already tried a few types of server, and all produce the same behavior.
The server can be something like the samples in this post (all produce the same behavior):
How to write a scalable Tcp/Ip based server
Here is the code that we are using. When a client connects, the server just hangs in order to keep the socket alive.
public class Server
{
private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);
public Server()
{
listener.Start();
Console.WriteLine("Started.");
while (true)
{
Console.WriteLine("Waiting for connection...");
var client = listener.AcceptTcpClient();
Console.WriteLine("Connected!");
// each connection has its own thread
new Thread(ServeData).Start(client);
}
}
private static void ServeData(object clientSocket)
{
Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId);
var rnd = new Random();
try
{
var client = (TcpClient)clientSocket;
var stream = client.GetStream();
byte[] arr = new byte[1024];
stream.Read(arr, 0, 1024);
Thread.Sleep(int.MaxValue);
}
catch (SocketException e)
{
Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e);
}
}
}
The stress test client is a simple TCP client that loops and opens sockets, one after the other:
class Program
{
static List<Socket> sockets;
static private void go(){
Socket newsock = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.Tcp);
IPEndPoint iep = new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060);
try
{
newsock.Connect(iep);
}
catch (SocketException ex)
{
Console.WriteLine(ex.Message );
}
lock (sockets)
{
sockets.Add(newsock);
}
}
static void Main(string[] args)
{
sockets = new List<Socket>();
//int start = 1;// Int32.Parse(Console.ReadLine());
for (int i = 1; i < 1000; i++)
{
go();
Thread.Sleep(200);
}
Console.WriteLine("press a key");
Console.ReadKey();
}
}
Is there an easy way to explain this behavior? Would a C++ implementation of the TCP server produce better results? Or is it actually a client-side problem?
Any comment will be welcome!
ofer
Specify a huge listener backlog: http://msdn.microsoft.com/en-us/library/5kh8wf6s.aspx
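For the TcpListener in the code above that just means passing the backlog to Start, for example (the value here is arbitrary):

// Sketch: start the same listener with a larger backlog so pending connections
// queue up instead of being refused.
listener.Start(1000); // instead of listener.Start();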
Firstly, a thread-per-connection design is unlikely to be especially scalable; you would do better to base your design on an asynchronous server model which uses I/O Completion Ports under the hood. This, however, is unlikely to be the problem in this case, as you're not really stressing the server that much.
Secondly, the listen backlog is a red herring here. The listen backlog is used to provide a queue for connections that are waiting to be accepted. In this example your client uses a synchronous connect call, which means that the client will never have more than one connect attempt outstanding at any one time. If you were using asynchronous connection attempts in the client then you would be right to look at tuning the listen backlog, perhaps.
Thirdly, given that the client code doesn't show that it sends any data, you can simply issue the read call and remove the sleep that follows it; the read call will block. The sleep just confuses matters.
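To make the first and third points concrete, here is a rough sketch (mine, not a drop-in replacement for your server) of an accept loop that uses async I/O instead of a thread per connection and blocks on the read instead of sleeping:

// Sketch only: async accept loop; each connection is handled by an awaitable
// task, so no dedicated thread is parked per socket.
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncServer
{
    public static async Task RunAsync()
    {
        var listener = new TcpListener(IPAddress.Any, 2060);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync(); // no thread blocked here
            _ = HandleClientAsync(client); // fire-and-forget; add error logging in real code
        }
    }

    private static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (var stream = client.GetStream())
        {
            var buffer = new byte[1024];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // process 'read' bytes here; the read itself keeps the socket alive
            }
        }
    }
}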
Are you running the client and the server on the same machine?
Is this ALL the code in both client and server?
You might try and eliminate the client from the problem space by using my free TCP test client which is available here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html
Likewise, you could test your test client against one of my simple free servers, like this one: http://www.lenholgate.com/blog/2005/11/simple-echo-servers.html
I can't see anything obviously wrong with the code (apart from the overall design).