Multithreaded server creates multiple threads but fails to read - C#

I have been making a multithreaded server. I can't share much, but I will explain clearly how it works.
I have a server that sits in a while(true) loop and waits for a connection. Everything works: it spins up the threads, and the server works fine when I read one connection at a time. However, when I run 10 threads, it fails to read the message (reported in the cmd window).
Why is this, or why could this be? I am reading in the message like this:
int characterNumber;
characterNumber = streamReader.Read();
while (characterNumber > 0)
{
    message += (char)characterNumber;
    characterNumber = streamReader.Read();
}
The read call is wrapped in a try/catch in both places, so it does read; but for the server it fails. Why is this?
What would you recommend or advise? It works perfectly when it is not multithreading.

Related

Why does MySQL Connector claim there is already an open DataReader when there isn't?

I'm using the .NET Connector to access a MySQL database from my C# program. All my queries are done with MySqlCommand.BeginExecuteReader, with the IAsyncResults held in a list so I can check them periodically and invoke appropriate callbacks whenever they finish, fetching the data via MySqlCommand.EndExecuteReader. I am careful never to hold one of these readers open while attempting to read results from something else.
This mostly works fine. But I find that if I start two queries at the same time, then I get the dreaded MySqlException: There is already an open DataReader associated with this Connection which must be closed first exception in EndExecuteReader. And this is happening the first time I invoke EndExecuteReader. So the error message is full of baloney; there is no other open DataReader at that point, unless the connector has somehow opened one behind the scenes without me calling EndExecuteReader. So what's going on?
Here's my update loop, including copious logging:
for (int i = queries.Count - 1; i >= 0; i--) {
    Debug.Log("Checking query: " + queries[i].command.CommandText);
    if (!queries[i].operation.IsCompleted) continue;
    var q = queries[i];
    queries.RemoveAt(i);
    Debug.Log("Finished, opening Reader for " + q.command.CommandText);
    using (var reader = q.command.EndExecuteReader(q.operation)) {
        try {
            q.callback(reader, null);
        } catch (System.Exception ex) {
            Logging.LogError("Exception while processing: " + q.command.CommandText);
            Logging.LogError(ex.ToString());
            q.callback(null, ex.ToString());
        }
    }
    Debug.Log("And done with callback for: " + q.command.CommandText);
}
And here's the log:
As you can see, I start both queries in rapid succession. (This is the first thing my program does after opening the DB connection, just to pin down what's happening.) Then the first one I check says it's done, so I call EndExecuteReader on it, and boom -- already it claims there's another open one. This happens immediately, before it even gets to my callback method. How can that be?
Is it not valid to have two open queries at once, even if I only call EndExecuteReader on one at a time?
When you run two queries concurrently, you must have two Connection objects. Why? Each Connection can only handle one query at a time. It looks like your code got into some kind of race condition where some of your concurrent queries worked and then a pair of them collided and failed.
At any rate your system will be more resilient in production if you can keep your startup sequences simple. If I were you I'd run one query after another rather than trying to run them all at once. (Obvs if that causes real performance problems you'll have to run them concurrently. But keep it simple until you need it to be complex.)
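For illustration, here is a minimal sketch of the one-connection-per-concurrent-query pattern; the connection string and table names are made up, and the back-to-back EndExecuteReader calls stand in for whatever callback scheme you use:

var connString = "Server=localhost;Database=mydb;Uid=myusr;Pwd=mypwd;";

using (var conn1 = new MySqlConnection(connString))
using (var conn2 = new MySqlConnection(connString))
{
    conn1.Open();
    conn2.Open();

    var cmd1 = new MySqlCommand("SELECT * FROM players", conn1);
    var cmd2 = new MySqlCommand("SELECT * FROM scores", conn2);

    // Each BeginExecuteReader runs on its own connection, so the
    // two readers can never collide with each other.
    var op1 = cmd1.BeginExecuteReader();
    var op2 = cmd2.BeginExecuteReader();

    using (var r1 = cmd1.EndExecuteReader(op1)) { /* process r1 */ }
    using (var r2 = cmd2.EndExecuteReader(op2)) { /* process r2 */ }
}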

SqlBulkCopy.WriteToServer() keeps getting "connection is closed"

This makes no sense to me but maybe someone with keener eyes can spot the problem.
I have a Windows service that uses FileSystemWatcher. It processes some files and uploads data to an MSSQL database. It works totally fine on my machine -- detached from Visual Studio (i.e. not debugging) and running as a service. If I copy this compiled code to our server, and have it point to the same database, and even the same files (!), I get this error every single time:
System.InvalidOperationException: Invalid operation. The connection is closed.
at System.Data.SqlClient.SqlConnection.GetOpenTdsConnection()
at System.Data.SqlClient.SqlBulkCopy.CopyBatchesAsyncContinuedOnError(Boolean cleanupParser)
at System.Data.SqlClient.SqlBulkCopy.<>c__DisplayClass30.<CopyBatchesAsyncContinuedOnSuccess>b__2c()
at System.Data.SqlClient.AsyncHelper.<>c__DisplayClass9.<ContinueTask>b__8(Task tsk)
I have tried pointing my local code to the server's files and it works fine. .NET 4.5.1 is on both machines. Services are both running under the same domain user. It is baffling. Perhaps there is something I don't understand about SqlBulkCopy.WriteToServerAsync()? Does it automatically share connections or something? Does it close in between calls or something? Here's the relevant code:
private static void ProcessFile(FileInfo fileInfo)
{
    using (var bulkCopy = new SqlBulkCopy("Data Source=myserver;Initial Catalog=mydb;Persist Security Info=True;User ID=myusr;Password=mypwd;"))
    using (var objRdr = ObjectReader.Create(ReadLogFile(fileInfo)
            .Where(x => !string.IsNullOrEmpty(x.Level)),
        "Id", "AppId", "AppDomain", "AppMachine",
        "LocalDate", "UtcDate", "Thread", "Level", "Logger", "Usrname",
        "ClassName", "MethodName", "LineNo", "Message", "Exception",
        "StackTrace", "Properties"))
    {
        bulkCopy.DestinationTableName = "EventLog";
        bulkCopy.BulkCopyTimeout = 600;
        bulkCopy.EnableStreaming = true;
        bulkCopy.BatchSize = AppConfig.WriteBatchSize;
        bulkCopy.WriteToServerAsync(objRdr).ContinueWith(t =>
        {
            if (t.Status == TaskStatus.Faulted)
            {
                CopyToFailedDirectory(fileInfo);
                _log.Error(
                    string.Format(
                        "Error copying logs to database for file {0}. File has been copied to failed directory for inspection.",
                        fileInfo.FullName), t.Exception.InnerException ?? t.Exception);
                Debug.WriteLine("new handle error {0}",
                    (t.Exception.InnerException ?? t.Exception).Message);
            }
            if (t.Status == TaskStatus.RanToCompletion)
            {
                _log.InfoFormat("File {0} logs have been copied to database.", fileInfo.FullName);
                Debug.WriteLine("Yay, finished {0}!", fileInfo.Name);
            }
            // if this is the last one, delete the original file
            if (t.Status == TaskStatus.Faulted || t.Status == TaskStatus.RanToCompletion)
            {
                Debug.WriteLine("deleting file {0}", fileInfo.Name);
                PurgeFile(fileInfo);
            }
        });
    }
}
A couple of notes in case you ask:
ObjectReader is a FastMember IDataReader implementation. CRAZY fast. It reads the file into custom objects with the properties you see listed.
It throws the error for every single file.
Again, this works on my machine, both as a service and as a console app. I even had it working once on the server. It threw the error and never worked again.
Any ideas?
Looks like an issue with it being async.
Please correct me if I'm wrong, but what I noticed is that you have your SqlBulkCopy and ObjectReader in using statements, which is great; however, you are doing all the processing asynchronously. Once you call WriteToServerAsync and it starts doing work, your using statements dispose of your objects, which also kills your connection.
The odd thing is that it sounds like it works sometimes, but perhaps it just becomes a race condition at that point.
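A minimal sketch of the fix, reusing the question's own helpers and omitting most of the column list for brevity -- the key change is awaiting the copy so the using scope stays alive until it finishes:

private static async Task ProcessFileAsync(FileInfo fileInfo)
{
    using (var bulkCopy = new SqlBulkCopy("Data Source=myserver;Initial Catalog=mydb;User ID=myusr;Password=mypwd;"))
    using (var objRdr = ObjectReader.Create(
        ReadLogFile(fileInfo).Where(x => !string.IsNullOrEmpty(x.Level)),
        "Id", "AppId" /* ... remaining columns as in the question ... */))
    {
        bulkCopy.DestinationTableName = "EventLog";
        bulkCopy.BulkCopyTimeout = 600;
        bulkCopy.EnableStreaming = true;
        bulkCopy.BatchSize = AppConfig.WriteBatchSize;
        // Awaiting keeps bulkCopy and objRdr (and the underlying
        // connection) alive until the copy actually completes.
        await bulkCopy.WriteToServerAsync(objRdr);
    }
    PurgeFile(fileInfo);
}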
Way late to throw this one on here but I was having what I thought was the same issue with SqlBulkCopy. Tried some of the steps in the other answers but with no luck. Turns out in my case that the actual error was caused by a string in the data going above the max length on one of the varchar columns, but for some reason the only error I was getting was the one about the closed connection.
Strangely, my coworker tried the same thing, and got an actual error message about the varchar being out of bounds. So we fixed the data and everything worked, but if you're here because of this error and nothing else works, you might want to start looking for different issues in your data.
This looks to me to be a bug in the SqlBulkCopy implementation. If you run a (large) number of bulk copies in parallel in separate tasks concurrently, disable your network connection and then trigger a full garbage collection, you will reliably get this exception thrown on the GC's finalizer thread. It is completely unavoidable.
That shouldn't happen because you are continuing the WriteToServerAsync task and handling the fault. But in the implementation, on error they start a new task that they don't continue or await.
This still seems to be a bug in .NET 4.6.2.
The only fix I can see is to subscribe to TaskScheduler.UnobservedTaskException and look for something in the stack trace that identifies the issue. That isn't a fix, by the way; it is a hack.
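For reference, a minimal sketch of that workaround; the string match is an assumption, so key it on whatever reliably identifies the SqlBulkCopy failure in your own traces:

TaskScheduler.UnobservedTaskException += (sender, args) =>
{
    // If this is the spurious SqlBulkCopy failure, mark it observed
    // so it does not escalate on the finalizer thread.
    if (args.Exception.ToString().Contains("SqlBulkCopy"))
    {
        args.SetObserved();
    }
};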

Detecting unexpected socket disconnect

This is not a question about how to do this, but about whether what I'm doing is wrong. I've read that it's not possible to detect if a socket is closed unexpectedly (like killing the server/client process, or pulling the network cable) while waiting for data (BeginReceive), without the use of timers or regularly sent messages, etc. But for quite a while I've been using the following setup to do this, and so far it has always worked perfectly.
public void OnReceive(IAsyncResult result)
{
    try
    {
        var bytesReceived = this.Socket.EndReceive(result);
        if (bytesReceived <= 0)
        {
            // normal disconnect
            return;
        }
        // ...
        this.Socket.BeginReceive...;
    }
    catch // SocketException
    {
        // abnormal disconnect
    }
}
Now, since I've read it's not easily possible, I'm wondering if there's something wrong with my method. Is there? Or is there a difference between killing processes and pulling cables and similar?
It's perfectly possible and OK to do this. The general idea is:
If EndReceive returns anything other than zero, you have incoming data to process.
If EndReceive returns zero, the remote host has closed its end of the connection. That means it can still receive data you send if it's programmed to do so, but cannot send any more of its own under any circumstances. Usually when this happens you will also close your end of the connection, thus completing an orderly shutdown, but that's not mandatory.
If EndReceive throws, there has been an abnormal termination of the connection (process killed, network cable cut, power lost, etc).
A couple of points you have to pay attention to:
EndReceive can never return less than zero (the test in your code is misleading).
If it throws, it can throw other exception types in addition to SocketException.
If it returns zero you must be careful to stop calling BeginReceive; otherwise you will begin an infinite and meaningless ping-pong game between BeginReceive and EndReceive (it will show in your CPU usage). Your code already does this, so no need to change anything.
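Putting those points together, a revised callback might look like this: a sketch adapting the question's code, assuming a buffer field exists and using hypothetical handler names (HandleNormalDisconnect, HandleAbnormalDisconnect):

public void OnReceive(IAsyncResult result)
{
    int bytesReceived;
    try
    {
        bytesReceived = this.Socket.EndReceive(result);
    }
    catch (ObjectDisposedException)
    {
        // the socket was closed locally while a receive was pending
        return;
    }
    catch (Exception) // SocketException and others
    {
        // abnormal termination: process killed, cable cut, power lost
        HandleAbnormalDisconnect();
        return;
    }

    if (bytesReceived == 0)
    {
        // orderly shutdown by the remote host;
        // do NOT post another BeginReceive
        HandleNormalDisconnect();
        return;
    }

    // process the received bytes, then post the next receive
    this.Socket.BeginReceive(buffer, 0, buffer.Length,
        SocketFlags.None, OnReceive, null);
}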

Thread Monitor class in C#

In my C# application, multiple clients access the same server. To process one client at a time, the code below was written. In the code I used the Monitor class and also a Queue. Will this code affect performance? If I use the Monitor class, should I remove the Queue from the code?
Sometimes the remote server machine where my application runs as a service goes totally down. Is the code below the reason? All the clients go into a queue, and when I check with the netstat -an command at the command prompt, for 8 clients it shows 50 connections held in TIME_WAIT...
Below is my code where clients access the server...
if (Id == "")
{
System.Threading.Monitor.Enter(this);
try
{
if (Request.AcceptTypes == null)
{
queue.Enqueue(Request.QueryString["sessionid"].Value);
string que = "";
que = queue.Dequeue();
TypeController.session_id = que;
langStr = SessionDatabase.Language;
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
TypeController.session_id = "";
filter.Execute();
Request.Clear();
return filter.XML;
}
else
{
TypeController.session_id = "";
filter = new AllThingzFilter(SessionDatabase, parameters, langStr);
filter.Execute();
}
}
finally
{
System.Threading.Monitor.Exit(this);
}
}
Locking this is pretty wrong; it won't work at all if every thread uses a different instance of whatever class this code lives in. It isn't clear from the snippet whether that's the case, but fix it first. Create a separate object just to store the lock, and make it static or give it the same scope as the shared object you are trying to protect (also not clear).
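A minimal sketch of that fix (the field name is hypothetical):

// A dedicated lock object; static, so every instance shares it.
private static readonly object _requestLock = new object();

// ...inside the request-handling method:
lock (_requestLock)
{
    // enqueue/dequeue and run the filter here, exactly as in the
    // try/finally block above; the lock statement compiles to the
    // same Monitor.Enter/Exit pattern but is harder to get wrong.
}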
You might still have trouble since this sounds like a deadlock rather than a race. Deadlocks are pretty easy to troubleshoot with the debugger since the code got stuck and is not executing at all. Debug + Break All, then Debug + Windows + Threads. Locate the worker threads in the thread list. Double click one to select it and use Debug + Call Stack to see where it got stuck. Repeat for other threads. Look back through the stack trace to see where one of them acquired a lock and compare to other threads to see what lock they are blocking on.
That could still be tricky if the deadlock is intricate and involves multiple interleaved locks. In which case logging might help. Really hard to diagnose mandelbugs might require a rewrite that cuts back on the amount of threading.

Socket.SendAsync is not sending in-order on Mono/Linux

There is a single-threaded server using .NET Socket with the TCP protocol, and Socket.Poll(), Socket.Select(), Socket.Receive().
To send, I used:
public void SendPacket(int clientid, byte[] packet)
{
    clients[clientid].socket.Send(packet);
}
But it was very slow when sending a lot of data to one client (halting the whole main thread), so I replaced it with this:
public void SendPacket(int clientid, byte[] packet)
{
    using (SocketAsyncEventArgs e = new SocketAsyncEventArgs())
    {
        e.SetBuffer(packet, 0, packet.Length);
        clients[clientid].socket.SendAsync(e);
    }
}
It works fine on Windows with .NET (I don't know if it's perfect), but on Linux with Mono, packets are either dropped or reordered (I don't know which). Reverting to the slow version with Socket.Send() works on Linux. Source for whole server.
How do I write a non-blocking SendPacket() function that works on Linux?
I'm going to take a guess that it has to do with your using statement and your SendAsync call. Perhaps e falls out of scope and is being disposed while SendAsync is still processing the buffer. But then this might throw an exception. I am really just taking a guess. Try removing the using statement and see what happens.
I would say: by not abusing the async method. You will find no documentation stating that SendAsync is actually guaranteed to maintain order. It queues items for a scheduler, which get distributed to threads, and by ignoring that order is not maintained per the documentation, you open yourself up to implementation details.
The best possibility is to:
Have a queue per socket.
When you write data into this queue and no worker thread is running, start a work item (ThreadPool) to process the queue.
This way you have separate, distinct queues that maintain order, and only one thread will ever process one queue/socket. A sketch of this pattern follows below.
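A minimal sketch of that per-socket queue, under assumed names (ClientState, SenderRunning, ProcessSendQueue); the worker uses the blocking Send for simplicity:

using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class ClientState
{
    public Socket Socket;
    public readonly ConcurrentQueue<byte[]> SendQueue = new ConcurrentQueue<byte[]>();
    public int SenderRunning; // 0 = idle, 1 = a worker is draining the queue
}

void SendPacket(ClientState client, byte[] packet)
{
    client.SendQueue.Enqueue(packet);
    // Start a worker only if none is running, so a single thread ever
    // drains one socket's queue and send order is preserved.
    if (Interlocked.CompareExchange(ref client.SenderRunning, 1, 0) == 0)
        ThreadPool.QueueUserWorkItem(_ => ProcessSendQueue(client));
}

void ProcessSendQueue(ClientState client)
{
    byte[] packet;
    while (client.SendQueue.TryDequeue(out packet))
        client.Socket.Send(packet); // blocking, but off the main thread
    Interlocked.Exchange(ref client.SenderRunning, 0);
    // A production version must re-check the queue here: an item could
    // be enqueued between the last TryDequeue and the flag reset.
}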
I got the same problem; Linux and Windows do not react the same way to SendAsync. Sometimes Linux truncates the data, but there is a workaround. First of all you need to use a queue. Each time you use SendAsync you have to check the callback.
If e.Offset + e.BytesTransferred < e.Buffer.Length, you just have to call e.SetBuffer(e.Offset + e.BytesTransferred, e.Buffer.Length - e.BytesTransferred - e.Offset); and call SendAsync again.
I don't know why Mono on Linux believes it's completed before sending all the data; it's strange, but I'm sure it does.
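In code, that resume-on-partial-send check might look like this (a sketch of the workaround; wiring the handler via e.Completed += OnSendCompleted is assumed):

void OnSendCompleted(object sender, SocketAsyncEventArgs e)
{
    var socket = (Socket)sender;
    // Mono on Linux may signal completion before the whole buffer is
    // sent; if so, advance past what was sent and resend the rest.
    if (e.SocketError == SocketError.Success &&
        e.Offset + e.BytesTransferred < e.Buffer.Length)
    {
        e.SetBuffer(e.Offset + e.BytesTransferred,
                    e.Buffer.Length - e.BytesTransferred - e.Offset);
        if (!socket.SendAsync(e))
            OnSendCompleted(socket, e); // completed synchronously
        return;
    }
    e.Dispose(); // whole packet sent (or failed); release the args
}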
Just like #mathieu, 10 years later, I can confirm that on Unity Mono+Linux the Completed callback is called without all bytes being sent in some cases. For me it was large packets only.
