Testing a C# sockets-based multithreaded server

We're developing a server application that uses sockets to handle client communication, and obviously we want to test the server. Does anyone know of a good (open source) harness that can test the server using multiple threads on multiple client machines?

Quite frankly, I would just roll your own; that's what I always do. A generic open-source tool won't be able to send packets that follow your application-level protocol, and I'd advise testing at least authentication, some basic application-level instructions, and so on.
What I usually do is write a stress-tester application that spawns x clients (I usually test with about 1000 clients per server) and then uses a Timer (e.g. 25 ms resolution). On every timer callback I randomly pick one of the connected clients and randomly either do nothing, send data, or disconnect the client. If I receive a disconnect notification for one of the clients, I spawn a new one to bring the total connected count back to x.
Then I let the whole thing run for a couple of hours...
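To make that concrete, here is a minimal sketch of such a stress tester, assuming a plain TCP server; the endpoint, payload bytes, and client count are placeholders rather than anything from the original setup:

```csharp
// Minimal stress-tester sketch: keep "TargetClients" connections open and,
// on a 25 ms timer, randomly do nothing, send data, or disconnect a client.
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class StressTester
{
    const int TargetClients = 1000;                 // "x" in the description above
    static readonly List<TcpClient> Clients = new List<TcpClient>();
    static readonly Random Rng = new Random();
    static readonly object Sync = new object();

    static void Main()
    {
        for (int i = 0; i < TargetClients; i++) Connect();

        using var timer = new Timer(_ => Tick(), null, 0, 25);   // 25 ms resolution
        Console.ReadLine();                          // let it run for a couple of hours
    }

    static void Connect()
    {
        var client = new TcpClient("server.example.local", 9000); // hypothetical endpoint
        lock (Sync) Clients.Add(client);
    }

    static void Tick()
    {
        lock (Sync)
        {
            if (Clients.Count == 0) return;
            var client = Clients[Rng.Next(Clients.Count)];
            switch (Rng.Next(3))
            {
                case 0:                              // do nothing
                    break;
                case 1:                              // send some (placeholder) data
                    var payload = new byte[] { 0x01, 0x02, 0x03 };
                    try { client.GetStream().Write(payload, 0, payload.Length); }
                    catch { Replace(client); }
                    break;
                case 2:                              // disconnect
                    Replace(client);
                    break;
            }
        }
    }

    static void Replace(TcpClient client)
    {
        Clients.Remove(client);
        client.Close();
        Connect();                                   // keep the connected count at TargetClients
    }
}
```

A real harness would replace the placeholder payload with your application-level protocol (login, a few commands, etc.), since that's the whole point of rolling your own.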

Related

How to block TCP and UDP packets (flood attack)

I have a program that tells you whether your computer is online or not. The way I do it is with the help of a server that basically sends UDP packets to clients. Clients then respond, letting the server know that they are online. If a client does not respond within 5 seconds, I mark it as offline.
Anyway, while testing this service I sent thousands of UDP packets to the server from a different computer. After receiving that many packets, the server stopped working the way it was supposed to.
So I can tell when someone is sending me a lot of packets. The problem is: how do I block those packets so that my server can keep working?
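For context, the heartbeat exchange described above could look roughly like this on the client side; the port and the reply text are hypothetical, not taken from the original program:

```csharp
// Minimal sketch of the client's side of the heartbeat: wait for the
// server's probe and echo a reply so the server keeps us marked online.
using System.Net;
using System.Net.Sockets;
using System.Text;

class HeartbeatClient
{
    static void Main()
    {
        using var udp = new UdpClient(5000);         // hypothetical heartbeat port
        while (true)
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] probe = udp.Receive(ref remote);   // server's "are you online?" packet
            byte[] reply = Encoding.ASCII.GetBytes("ONLINE");
            udp.Send(reply, reply.Length, remote);    // respond within the 5-second window
        }
    }
}
```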
Edit: Possible Solution
I think I will implement the following solution. What do you guys think?
I will require 2 or more servers now. If a client finds that the first server is not responding, it will talk to the second server instead. So the attacker would also have to know that there is a second server. Depending on how secure you want to be, you could have even 5 servers. I guess that if the attacker knows there are 5 servers then I just wasted my time and money, right? lol
The general solution to this is to buy extra hardware that sits in front of the server and inspects the incoming packets.
What that extra hardware does depends on the solution you choose. You could have it distribute the requests across many servers all running the same software (this would make the hardware you added a load balancer). You could also have it detect that an unusually large number of packets is coming from a single address and start dropping packets from that address instead of forwarding them to the server (this would make the hardware you added a stateful firewall).
There are more options beyond those two, but all of them revolve around reducing the load on the server (usually by shifting it to another piece of hardware dedicated to taking the load). You could potentially make your software more resilient to packet floods, but unless your current software is written very poorly, it won't buy you much more capacity.
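If you do want to harden the software anyway, a hedged sketch of per-source rate limiting for a UdpClient-based server might look like the following. Note that this only saves processing time on packets the OS has already delivered; it does not reduce the traffic hitting the machine. The threshold, window, and port are placeholders:

```csharp
// Count packets per source address over a short window and discard traffic
// from addresses that exceed a threshold, instead of processing it.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class FloodAwareUdpServer
{
    const int MaxPacketsPerWindow = 100;             // per source address (placeholder)
    static readonly TimeSpan Window = TimeSpan.FromSeconds(1);
    static readonly Dictionary<IPAddress, (DateTime start, int count)> Counters = new();

    static void Main()
    {
        using var udp = new UdpClient(5000);         // hypothetical port
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = udp.Receive(ref remote);
            if (IsFlooding(remote.Address)) continue; // drop instead of processing
            Handle(data, remote);                     // normal processing path
        }
    }

    static bool IsFlooding(IPAddress source)
    {
        var now = DateTime.UtcNow;
        if (!Counters.TryGetValue(source, out var entry) || now - entry.start > Window)
            entry = (now, 0);                         // start a fresh counting window
        entry.count++;
        Counters[source] = entry;
        return entry.count > MaxPacketsPerWindow;
    }

    static void Handle(byte[] data, IPEndPoint remote)
    {
        // Application logic (e.g. replying to the online check) would go here.
    }
}
```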

How to solve limitations of SignalR in scaleout for backplane

I use ASP.NET MVC and C#. I found SignalR for transferring data in real time, but SignalR has some limits.
According to the documentation on this issue:
Using a backplane, the maximum message throughput is lower than it is when clients talk directly to a single server node. That's because the backplane forwards every message to every node, so the backplane can become a bottleneck. Whether this limitation is a problem depends on the application. For example, here are some typical SignalR scenarios:
Server broadcast (e.g., stock ticker): Backplanes work well for this scenario, because the server controls the rate at which messages are sent.
Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join.
High-frequency realtime (e.g., real-time games): A backplane is not recommended for this scenario.
My project needs high-frequency realtime (e.g., real-time games).
I also need real-time video chat.
My scenario :
I have a Master server and multiple Slave servers. Clients connect to the Slave servers, and the Slave servers connect to the Master server.
Example :
Servers Slave-1 and Slave-2 are connected to the Master server; client-A and client-B are connected to Slave-1, and client-C and client-D are connected to Slave-2.
client-A sends a message or data to, or is in a live chat with, client-D.
How can I implement this scenario?
[Update-1]
If I shouldn't use SignalR for this problem, then what should I use?
[Update-2]
In my scenario, the Master server acts like a router and the Slave servers act like switches. Clients connect to a switch, and the switches connect to the router. If client-A sends a data packet to client-C, the packet should be sent to the router, and the router handles it. There could be over 2000 Slave servers, and the number of users on each server is over 10,000.
Thanks.
A backplane will introduce delays in message delivery, which will not work well for low-latency work. If you absolutely must have multiple servers to handle your clients, and you absolutely must have minimal latency, then a backplane is probably not going to work for you.
However, check out this conversation on the ASP forums. The poster is seeing average latencies of around 25ms for 60,000 messages per second to 3,000 connected clients on one server.
As is often the case, the trade-off here is between latency and complexity. The optimal solution is for messages to be routed only to the server(s) containing the target client(s). To achieve this you need a way to track every client connection, deal with reconnects to different servers, etc. You can probably solve this with a few tens of hours of hard slog programming, but in doing so you're going to break most of what makes SignalR useful.
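To illustrate the kind of tracking that implies, here is a minimal sketch of a connection registry; the ConnectionRegistry type and its names are hypothetical, and a real deployment would need shared storage, expiry, and reconnect handling:

```csharp
// Track which Slave server currently holds each client, so the Master can
// forward a message to exactly one Slave instead of broadcasting via a backplane.
using System.Collections.Concurrent;

public class ConnectionRegistry
{
    // clientId -> the Slave server currently holding that client's connection
    private readonly ConcurrentDictionary<string, string> _clientToSlave = new();

    public void ClientConnected(string clientId, string slaveEndpoint) =>
        _clientToSlave[clientId] = slaveEndpoint;

    public void ClientDisconnected(string clientId) =>
        _clientToSlave.TryRemove(clientId, out _);

    // The Master uses this to route a message only to the Slave that
    // actually hosts the target client.
    public bool TryGetSlaveFor(string clientId, out string slaveEndpoint) =>
        _clientToSlave.TryGetValue(clientId, out slaveEndpoint);
}
```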
For alternatives, the first that comes to mind is ZeroMQ. A bit more work, especially if your clients are browser based, but low latency and high throughput are project goals for ZeroMQ. You'll need to handle scale-out yourself though... and you're back to tracking connection points across multiple servers and reconnects.
If neither of these solves your problems, then you might have to look at changing your architecture. One common method for MMOs is to have related clients connect to the same servers to reduce inter-server communication requirements. Clients who legitimately need to communicate real-time data are put together on a single server which doesn't have to worry about back-plane issues. This server then communicates back to the 'Master' server only what is required to maintain world state and so on.
Plan your architecture to reduce the problems before they start... but don't spend weeks working on something that might not be necessary. Do some tests on SignalR and see what effect the backplane actually has on latency before you dive into the abyss.

Finding server(s) capacity

I am working on an application stack as follows:
There is a Cisco load balancer that can handle 40K concurrent users in front of 2 application servers. Behind them there is a SQL Server with 2 TB of data.
What I want to find out is how much traffic this application can handle at peak times; in other words, what is the capacity of this stack?
I am aware of JMeter, but I don't think running JMeter from a single machine can find out the capacity of the application.
How would one determine the capacity of this stack?
I don't think running JMeter from a single machine can find out the capacity of the application.
JMeter has a distributed mode allowing you to simulate huge loads by using multiple client machines:
In the event that your JMeter client machine is unable, performance-wise, to simulate enough users to stress your server, an option exists to control multiple, remote JMeter engines from a single JMeter GUI client. By running JMeter remotely, you can replicate a test across many low-end computers and thus simulate a larger load on the server. One instance of the JMeter GUI client can control any number of remote JMeter instances, and collect all the data from them.
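For reference, a distributed run is typically driven by starting a jmeter-server process on each load-generator machine and then pointing the controller at them; the host names and test-plan path below are placeholders:

```
# On each load-generator machine: start the remote engine
jmeter-server

# On the controller: run the test plan non-GUI against the remote engines
jmeter -n -t capacity-test.jmx -R loadgen1.example.local,loadgen2.example.local -l results.jtl
```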

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising 2 components:
A single server that processes and stores data. It also periodically sends out updates to the agents
Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it I have 3 options using web services, and have made the following observations:
Client pull
No open port required at the agent, as it acts like a client
Would need to poll the server for updates
Server push
Open port at the agent, as it acts like a server
Server must poll agents for results
Hybrid
Open port at the agent, as it acts like both a client and a server
No polling; server pushes out updates when required, client sends results when they are available
The 'hybrid' option (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed out?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to decide for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most restricted networks.
If you use some kind of messaging server (JMS for Java; I'm not sure of the C# equivalent), then the messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent machine.
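As a sketch of that broker-based hybrid, with a deliberately hypothetical IMessageBroker abstraction standing in for whichever broker you choose (RabbitMQ, ActiveMQ, etc.):

```csharp
// Only the broker needs an open inbound port; both the server and the agents
// connect out to it. The interface below is illustrative, not a real library.
using System;

public interface IMessageBroker
{
    void Publish(string queue, byte[] message);
    void Subscribe(string queue, Action<byte[]> handler);
}

public class Agent
{
    private readonly IMessageBroker _broker;

    public Agent(IMessageBroker broker, string agentId)
    {
        _broker = broker;
        // The server pushes updates to a per-agent queue; the agent only ever
        // makes an outbound connection to the broker to receive them.
        _broker.Subscribe($"updates.{agentId}", ApplyUpdate);
    }

    // Results flow the other way through a shared queue the server consumes.
    public void ReportResult(byte[] result) => _broker.Publish("results", result);

    private void ApplyUpdate(byte[] update) { /* apply the server's update */ }
}
```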
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
First part (data needs to get to the server): this can obviously be done by invoking a web method that sends that data as a parameter.
Second part (server periodically sends out updates to the agents): you can still do that through regular client pulls, with a web service method that "asks" for the updates since the last pull (some sort of timestamp to fetch the updates it missed); a sketch of this is below.
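A minimal sketch of that pull loop on the agent side; the URL, the /updates endpoint, the "since" query parameter, and the polling interval are all hypothetical:

```csharp
// Agent-initiated polling: ask the server for anything published since the
// last successful pull, then sleep and repeat.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PollingAgent
{
    private static readonly HttpClient Http = new HttpClient();
    private static DateTimeOffset _lastPull = DateTimeOffset.MinValue;

    static async Task Main()
    {
        while (true)
        {
            string url = "https://server.example.local/updates?since=" +
                         Uri.EscapeDataString(_lastPull.ToString("o"));
            string updates = await Http.GetStringAsync(url);
            ApplyUpdates(updates);
            _lastPull = DateTimeOffset.UtcNow;

            await Task.Delay(TimeSpan.FromSeconds(30));   // placeholder interval
        }
    }

    static void ApplyUpdates(string updates) { /* parse and apply */ }
}
```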
The hybrid method seems a bit odd to me, given that I think of an agent as a part of the system that might go "offline" quite often. What will the server do if a push fails? That's usually a tough question/decision, especially if you're not sure whether it was an intended "going offline" or a system/network failure.

Voice Conference - how to have more people in conversation?

First of all, I'm just a hobbyist, so I'm sorry if this is a dumb question or if I'm being too naive. (It also means I can't buy expensive libraries.)
This is the situation: I'm building a simple voice chat application in C#.NET (something like Ventrilo or TeamSpeak, but only for about 15 or 20 people, running on a 100 Mbps LAN). I have a working server (spawning a thread for each client) and a client application that uses UDP for the connection and DirectSound for capturing and playing the sound. I can make "1 on 1" calls, but I can't figure out one of the most important things:
How do I have more than two people in the conversation?
You need some centralized place to send the packets back out via a multicast, or else you need a decentralized approach where every client is connected to every other client, and each client is hosting a multicast. What you want to avoid is making the machines forward out their data to every other machine, which would result in O(n) time to send a message to each machine (and I/O is slow!).
In either scenario, you end up with the same problem: how to combine the audio streams. One simple mechanism is to mix the signals by adding the corresponding samples together (clamping to avoid overflow) before you send the result back out (either out the network port, or out to your speakers), but this assumes you have access to uncompressed and reasonably synchronized streams.
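For example, a hedged sketch of mixing two such streams, assuming 16-bit signed PCM samples that are already aligned in time (real code would handle more than two streams, differing lengths, and resampling):

```csharp
// Additive mixing of two 16-bit PCM buffers with clamping to avoid wrap-around.
using System;

static class AudioMixer
{
    public static short[] Mix(short[] a, short[] b)
    {
        int length = Math.Min(a.Length, b.Length);
        var mixed = new short[length];
        for (int i = 0; i < length; i++)
        {
            // Sum the samples, then clamp to the 16-bit range.
            int sum = a[i] + b[i];
            mixed[i] = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, sum));
        }
        return mixed;
    }
}
```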
