TClientSocket in Delphi persistent connection - C#

I have an application written in Delphi using TClientSocket that is sending data to another application written in C#. For many reasons, the C# application is slow to respond, blocking my Delphi application and not respecting the time-out I have set.
My Delphi application reads responses like this:
Sock.Socket.ReceiveText
This call blocks until a response arrives. But if I poll like this instead, the application still waits but respects the time-out:
receiveData := False;
Cont := 0;
while not receiveData do
begin
  // Poll instead of blocking in ReceiveText
  if Sock.Socket.ReceiveLength > 0 then
  begin
    receiveData := True;
  end;
  Inc(Cont);
  Sleep(100);
  // Give up after roughly one second
  if Cont > 10 then
    raise Exception.Create('Timeout');
end;
My Delphi app sends two requests. The first one times out, but C# is still processing it. My Delphi app then sends the second request, and this time C# sends the response for the first request.
Will the second request receive the data intended for the first one? In other words, when my Delphi side times out, can the requests and responses get crossed?

Once your Delphi code times out, it forgets about the first request, but your C# code does not know that. Since you are not dropping the connection, the second request will indeed receive the response data for the first request. By implementing timeout logic and then ignoring the cause of the timeout, you are getting your two apps out of sync with each other. So, either use a longer timeout (or no timeout at all), or else drop the connection if a timeout occurs.
As for your Delphi app freezing, that should only happen if you are using the TClientSocket component in blocking mode and performing your reading in the context of the main UI thread. You should not be using blocking mode in the main UI thread. Either:
Use TClientSocket in non-blocking mode, do all of your reading in the OnRead event only, and do not read more than ReceiveLength indicates.
Use TClientSocket in blocking mode, and do all of your reading in a worker thread, and then signal the main UI thread only when there is data available for it to process (better would be to process the data in the worker thread, and only sync with the main thread when making UI updates).

Related

Is there a way to return a response from an API call to ASP.NET while keeping the instance running?

I am writing an API using ASP.NET and I have some potentially long running code behind the different end points. The system uses CQRS and Event Sourcing. A Command comes in to an end point and is then published as an event using MediatR. However, the Handlers are potentially long running, and some of the Requests coming in might be sent to multiple Handlers, so the process could take longer than the 12s that AWS allows before returning an error code.
Is there a way to return a response back to the caller to say that the event has been created while still continuing with the process? That is to say, fire off a separate task that performs the long running piece of code and also catches and logs errors, then return a value back to the user saying the Event has been successfully created?
I believe that ASP.NET spins up a new instance each time a call is made. Will the old instance die once a value is returned, killing the task?
I could be wrong on a number of points here; this is my knowledge gleaned from the internet, but I could have misunderstood articles.
Thanks.
Yes, you should pass the long-running task off to a background process and return to the user. When the task is complete, notify the user with whatever mechanism is appropriate for your site.
But do not start a new thread; what you want is a background service running for this, and to use that to manage your requests.
If a new thread is running the long operation, it will remain "open/live" until it finishes. You can also configure the app pool to always be active.
There are a lot of frameworks for working with long running tasks, such as Hangfire.
And to keep the user updated with the status of the task, you can use SignalR to push notifications to the UI.
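For what it's worth, here is a minimal sketch of that "return immediately, process in the background" idea, assuming ASP.NET Core with its built-in hosted service support and a Channel as the queue (Hangfire would replace the hand-rolled queue and add persistence and retries; all type and member names below are invented for illustration):

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class WorkItem
{
    public Guid EventId { get; set; }
}

public class WorkQueue
{
    private readonly Channel<WorkItem> _channel = Channel.CreateUnbounded<WorkItem>();
    public ValueTask EnqueueAsync(WorkItem item) => _channel.Writer.WriteAsync(item);
    public ChannelReader<WorkItem> Reader => _channel.Reader;
}

public class WorkProcessor : BackgroundService
{
    private readonly WorkQueue _queue;
    public WorkProcessor(WorkQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var item in _queue.Reader.ReadAllAsync(stoppingToken))
        {
            try
            {
                // Run the long-running handlers for item.EventId here, then
                // notify the caller (e.g. via SignalR) that the work is done.
            }
            catch (Exception)
            {
                // Log and swallow so one bad item does not kill the service.
            }
        }
    }
}

You would register WorkQueue as a singleton and WorkProcessor via AddHostedService, then have the end point call EnqueueAsync and return 202 Accepted immediately, so the HTTP request never waits on the handlers.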

Asynchronous calls within a WCF Service

We have a situation where we need to execute some long running code in the InitializeService method of a Data Service. Currently the first call to the data service fires off the code, but does not receive a response until the long running code has finished. The client is not required to wait for this action to complete. I have attempted to use a new thread to execute the code; however, the code being run replaces some files on the server, which seems to kill the thread and causes it to bomb out. If I don't run it in a thread it works fine, but the InitializeService method takes a long time to complete.
Are there any other ways to run this code asynchronously (I was thinking maybe there is a way to call another method in the same fashion that a client would)?
Thanks in advance.
All WCF communication is basically asynchronous. Each call spins up its own thread on the host and the processing starts. The problem you're running into, as many of us have, is that the client times out before the host is finished with the work, and there's no easy way around that beyond setting the timeout to some ridiculous amount of time.
It's better to split your processing up into two or more parts, starting the initialization process and finishing it in separate steps, like this:
One option is a duplexed WCF service with a callback to the client. In other words, client "A" calls the host and starts the initialization routine, but the host immediately sends the client back a value of InitializationStarted=True so that the client isn't left waiting for the timeout. Then, when the host has finished compiling the files, it calls the client (which has its own listener) and sends a message that the initialization is ready. Then the client calls the host and downloads the processed files.
This works well PC-to-server or server-to-server.
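The service contract for the duplex option could look something like this (a rough sketch; the interface and member names are invented, not taken from your code):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IInitializationCallback))]
public interface IInitializationService
{
    // Returns immediately so the client is not left waiting for a timeout.
    [OperationContract(IsOneWay = true)]
    void StartInitialization();
}

public interface IInitializationCallback
{
    // Called by the host on the client's own listener once the files are ready.
    [OperationContract(IsOneWay = true)]
    void InitializationComplete();
}

On the client side the callback implementation is supplied through an InstanceContext when the duplex channel is created.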
Another option could work this way: client "A" contacts the host and the host starts the initialization routine, again sending back InitializationStarted=True. The host keeps an internal (DB) value of FilesReady=False for client "A" until all the files are finished; at that point the host sets it to FilesReady=True. Meanwhile, the client is on a timer, polling the host every minute until it finally receives FilesReady=True, then it downloads the waiting files.
If you're talking about iPhone-to-server or Android-to-server, then this is the better route.
You follow?
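And here is a rough sketch of the contract the polling option implies (again, the names are made up for illustration; the client calls StartInitialization once and then polls FilesReady on its timer):

using System.ServiceModel;

[ServiceContract]
public interface IInitializationPollingService
{
    // Kicks off the work and returns InitializationStarted = true right away.
    [OperationContract]
    bool StartInitialization(string clientId);

    // The client polls this every minute until it returns true.
    [OperationContract]
    bool FilesReady(string clientId);

    // Called once FilesReady returns true, to fetch the processed files.
    [OperationContract]
    byte[] DownloadFiles(string clientId);
}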

ObjectDisposedException when using Multiple Asynchronous Clients to Multiple Servers

I've been looking into the Asynchronous Client and Asynchronous Server Socket examples on MSDN and have happily punched up the example, which works flawlessly when one Client connects to one Server. My problem is that I need to synchronise a chunk of work with a number of machines so they execute at about the same time (like a millisecond difference). The action is reasonably simple: talk to the child servers (all running on the same machine but on different ports for initial testing), simulate some processing and send a 'Ready' signal back to the caller. Once all the Servers have returned this flag (or a time-out occurs), a second message is passed from the client to the acknowledged servers telling them to execute.
My approach so far has been to create two client instances, stored within a list, and start the routine by looping through the list. This works well but not particularly fast, as each client's routine is run synchronously. To speed up the process, I created a new thread for each client and executed the routine on that. Now this does work, allowing two or more servers to respond and synchronise appropriately. Unfortunately, it is very error prone, and the code fails with an 'ObjectDisposedException' on the following line of the 'ReceiveCallback' method...
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
With some investigation and debugging I tracked the sockets being passed to the routine (using their handles) and found that, although the failing socket reports it isn't connected, it is always the second socket to return that fails, and not the first, which does successfully read its response. In addition, these socket instances (based upon the handle value) appear to be separate instances, but somehow the second (and subsequent) responses continue to error out on this line.
What is causing these sockets to inappropriately dispose of themselves before being legitimately processed? As they are running in separate threads and there are no shared routines, is the first socket being inappropriately used by the other instances? Tbh, I feel a bit lost at sea, and while I could band-aid over these errors, unreliable code and potentially lost acknowledgements are not a favourable outcome. Any pointers?
Kind regards
Turns out the shared/static ManualResetEvent was being set across the different instances, so thread 1 would set the ManualResetEvent and dispose the socket being used by the second thread. By ensuring that no methods/properties were shared/static, each thread and socket executes under its own scope.
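In other words, the per-connection state that the MSDN sample keeps in static fields has to travel with each connection instead. A rough sketch of that shape (the names are illustrative, not from the original code, and the completion condition is simplified to "remote side closed"):

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public class ConnectionState
{
    public Socket Client;
    public readonly byte[] Buffer = new byte[1024];
    public readonly StringBuilder Response = new StringBuilder();
    public readonly ManualResetEvent ReceiveDone = new ManualResetEvent(false);
}

public static class Receiver
{
    public static void BeginReceive(ConnectionState state)
    {
        state.Client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
            SocketFlags.None, ReceiveCallback, state);
    }

    private static void ReceiveCallback(IAsyncResult ar)
    {
        // The state object travels with the callback, so no static field is
        // shared between the two server connections.
        var state = (ConnectionState)ar.AsyncState;
        int bytesRead = state.Client.EndReceive(ar);
        if (bytesRead > 0)
        {
            state.Response.Append(Encoding.ASCII.GetString(state.Buffer, 0, bytesRead));
            state.Client.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                SocketFlags.None, ReceiveCallback, state);
        }
        else
        {
            state.ReceiveDone.Set();   // this event belongs to this connection only
        }
    }
}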

Thread in ASP.NET/C#

I want to create a thread on the user_login event or Form_load event.
In this thread I want to call a class function to execute some SQL statement, and the user should not have to wait for the result or for the background process to finish; he is just directed to his desired page, let's say My Profile or whatever.
I am using ASP.NET, C#.
For what it is worth, this is a Bad Idea. There are a variety of ways that IIS could terminate that background thread (IIS restart, app pool restart, etc., all of which are normal, expected behavior of IIS), and the result would be that your DB transaction gets silently rolled back.
If these queries need to be reliably executed, you should either execute them in the request or send them to a Windows service or other long-lived process. This doesn't mean that the user can't get feedback on the progress; the IIS request itself could be executed via an AJAX call.
Here are some examples of asynchronous calls in C#.
http://support.microsoft.com/kb/315582
The choice of pattern depends on your needs. Note that in sample 5 you can provide null as the callback if you don't want any code executed when the action is done.
You could do this by calling your method on a thread started with Thread.Start. IsBackground is false by default, which should keep the process from exiting while the thread is still running.
http://msdn.microsoft.com/en-us/library/7a2f3ay4.aspx
But since most of your time will probably be spent in the database call, why not just execute it asynchronously without a callback? On a SqlCommand that would be BeginExecuteNonQuery.
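A minimal sketch of that last suggestion, assuming .NET Framework with System.Data.SqlClient (a callback is used here anyway, purely so the connection can be closed when the command finishes; older runtimes may also require "Asynchronous Processing=true" in the connection string):

using System;
using System.Data.SqlClient;

public static class FireAndForgetSql
{
    public static void Execute(string connectionString, string sql)
    {
        var connection = new SqlConnection(connectionString);
        var command = new SqlCommand(sql, connection);
        connection.Open();

        // The connection must stay open until the command completes,
        // so it is disposed inside the callback rather than here.
        command.BeginExecuteNonQuery(ar =>
        {
            try
            {
                command.EndExecuteNonQuery(ar);
            }
            catch (Exception)
            {
                // Log the failure; there is no caller left to observe it.
            }
            finally
            {
                command.Dispose();
                connection.Dispose();
            }
        }, null);
    }
}

The warning from the first answer still applies: an app pool recycle can cut this short, so anything that must run reliably belongs outside IIS.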

Sequential access to asynchronous sockets

I have a server that has several clients C1...Cn to each of which there is a TCP connection established. There are less than 10,000 clients.
The message protocol is request/response based, where the server sends a request to a client and then the client sends a response.
The server has several threads, T1...Tm, and each of these may send requests to any of the clients. I want to make sure that only one of these threads can send a request to a specific client at any one time, while the other threads wanting to send a request to the same client will have to wait.
I do not want to block threads from sending requests to different clients at the same time.
E.g. If T1 is sending a request to C3, another thread T2 should not be able to send anything to C3 until T1 has received its response.
I was thinking of using a simple lock statement on the socket:
lock (c3Socket)
{
    // Send request to C3
    // Get response from C3
}
I am using asynchronous sockets, so I may have to use Monitor instead:
Monitor.Enter(c3Socket); // Before calling .BeginReceive()
And
Monitor.Exit(c3Socket); // In .EndReceive
I am worried about stuff going wrong and not letting go of the monitor and therefore blocking all access to a client. I'm thinking that my heartbeat thread could use Monitor.TryEnter() with a timeout and throw out sockets that it cannot get the monitor for.
Would it make sense for me to make the Begin and End calls synchronous in order to be able to use the lock() statement? I know that I would be sacrificing concurrency for simplicity in this case, but it may be worth it.
Am I overlooking anything here? Any input appreciated.
My answer here would be a state machine per socket. The states would be free and busy:
If the socket is free, the sender thread marks it busy, sends to the client and waits for the response.
You might want to set up a timeout on that wait just in case a client gets stuck somehow.
If the state is busy, the thread sleeps, waiting for a signal.
When that client-related timeout expires, close the socket; the client is dead.
When a response is successfully received and parsed, mark the socket free again and signal/wake up the waiting threads.
Only lock around socket state inquiry and manipulation, not the actual network IO. That means a lock per socket, plus some sort of wait primitive like a condition variable (sorry, I don't remember what's really available in .NET).
Hope this helps.
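A rough sketch of that per-socket state machine in C#; Monitor.Wait/Pulse plays the role of the condition variable, and the lock only ever guards the free/busy flag, never the network I/O itself (class and member names are made up for illustration):

using System;
using System.Threading;

public class ClientChannel
{
    private readonly object _gate = new object();
    private bool _busy;

    // Blocks the sender thread until this client is free, or throws on timeout.
    public void AcquireForRequest(TimeSpan timeout)
    {
        lock (_gate)
        {
            while (_busy)
            {
                if (!Monitor.Wait(_gate, timeout))
                    throw new TimeoutException("Client did not become free in time.");
            }
            _busy = true;
        }
    }

    // Called once the response has been received and parsed (or on error/close).
    public void Release()
    {
        lock (_gate)
        {
            _busy = false;
            Monitor.PulseAll(_gate);   // wake every thread waiting for this client
        }
    }
}

The heartbeat idea from the question maps onto the timeout here: if AcquireForRequest keeps timing out for a socket, that client can be closed and thrown out.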
You certainly can't use the locking approach that you've described. Since your system is primarily asynchronous, you can't know which thread the operations will be running on. This means that you may call Exit on the wrong thread (and have a SynchronizationLockException thrown), or some other thread may call Enter and succeed even though that client is "in use", just because it happened to get the same thread that Enter was originally called on.
I'd agree with Nikolai that you need to hold some additional state alongside each socket to determine whether it is currently in use or not. You would of course need locking to update this shared state.
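If the send/receive path is written against Task-based completions anyway (assuming .NET 4.5 or later), one simple way to keep that per-socket "in use" state is a SemaphoreSlim per client: unlike Monitor it has no thread affinity, so Release can be called from whichever thread the completion happens to run on. A minimal sketch (the wrapper and delegate are hypothetical):

using System;
using System.Threading;
using System.Threading.Tasks;

public class ClientGate
{
    private readonly SemaphoreSlim _inUse = new SemaphoreSlim(1, 1);

    public async Task<TResponse> SendAsync<TResponse>(Func<Task<TResponse>> sendAndReceive)
    {
        await _inUse.WaitAsync();          // one outstanding request per client
        try
        {
            return await sendAndReceive(); // the actual socket I/O happens here
        }
        finally
        {
            _inUse.Release();              // safe from any thread, unlike Monitor.Exit
        }
    }
}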
