I am polling an OPC DA server for data every second. I use the standard .NET DLLs from the OPC Foundation to achieve this.
My service is located on the same machine as the OPC DA server. However, my read times are often around 900-1000 ms. Is this normal, or is something wrong in my code or server setup? I poll around 20 OPC DA tags. What is a "standard" response time for such an operation, or is it impossible to say?
It doesn't sound normal, but it's impossible to say for certain without knowing what the source of the data is.
Check the documentation of the OPC DA interface you use to fetch data from the server, and the parameters you pass to it.
If you use synchronous reads, the problem is definitely on the server side or its backend (meaning it takes the server too much time to read the actual data).
If you use asynchronous reads (subscriptions), check the parameter named something like 'update rate'. It defines how often new data is sent to the client: if it is 1 second, the client will receive new data no faster than once per second.
Subscriptions are supported by all OPC DA versions. If the server doesn't implement this interface, you will not be able to read asynchronously and will get an error code such as 'not implemented'.
What OPC server are you using? There may be a setting to keep the update rate fixed or respect the client update rate.
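As a hedged sketch of the subscription approach, assuming the classic OPC Foundation .NET API (OpcNetApi/OpcCom assemblies); the server URL and tag name below are placeholders, and exact type names may differ between API versions:

```csharp
using System;
using Opc;
using Opc.Da;

class OpcSubscriptionSketch
{
    static void Main()
    {
        // Placeholder ProgID; substitute your actual OPC DA server.
        URL url = new URL("opcda://localhost/MyVendor.OpcDaServer.1");
        Opc.Da.Server server = new Opc.Da.Server(new OpcCom.Factory(), url);
        server.Connect();

        // UpdateRate caps how often the server delivers changes: with 1000 ms,
        // the client never sees new values faster than once per second.
        SubscriptionState state = new SubscriptionState();
        state.Name = "poll";
        state.Active = true;
        state.UpdateRate = 1000;
        Opc.Da.Subscription subscription =
            (Opc.Da.Subscription)server.CreateSubscription(state);

        Item item = new Item();
        item.ItemName = "Channel1.Device1.Tag1"; // placeholder tag
        subscription.AddItems(new Item[] { item });

        // Data arrives via callback instead of blocking synchronous reads.
        subscription.DataChanged += new DataChangedEventHandler(OnDataChanged);

        Console.ReadLine();
        server.Disconnect();
    }

    static void OnDataChanged(object subscriptionHandle, object requestHandle,
                              ItemValueResult[] values)
    {
        foreach (ItemValueResult v in values)
            Console.WriteLine(v.ItemName + " = " + v.Value + " @ " + v.Timestamp);
    }
}
```

With a subscription the server pushes only changed values at the update rate, so the 900-1000 ms synchronous read latency never sits on your polling path.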
Current situation: an existing SQL Server stored procedure I have no control over returns 10 large strings in separate result sets in about 30 seconds (~3 seconds per result set). The existing ASP.NET Web API controller method that collects these strings only returns a response once all strings are obtained from the stored procedure. When the client receives the response, it takes another 30 seconds to process the strings and display the results, for a total of one minute from request initiation to operation completion.
Contemplated improvement: somehow transmit the strings to the client as soon as each is obtained from the SqlDataReader, so the client can work on interpreting each string while receiving the subsequent ones. The total time from request initiation to completion would thus roughly be halved.
I have considered the WebClient events at my disposal, such as DownloadStringCompleted and DownloadProgressChanged, but none seems viable, and I generally think I am on the wrong track, hence this question. I have all kinds of ideas, such as saving the strings to temporary files on the server and sending each file name to the client through a parallel SignalR channel for the client to request in parallel, but I feel I would be wasting both my time and your opportunity to enlighten me.
I would not resort to inverting the standard client/server relationship with a "server push" approach. All you need is some kind of intermediary dataset. It could be a singleton object (or multiple objects, one per client) on your server, or another table in an actual database (perhaps NoSQL).
The point is that the client will not directly access the slow data flow you're dealing with. Instead the client will only access the intermediary dataset. On the first request, you will start off the process of migrating data from the slow dataset to the intermediary database and the client will have to wait until the first batch is ready.
The client will then make additional requests as he processes each result on his end. If more intermediary results are already available he will get them immediately, otherwise he will have to wait like he did on the first request.
Meanwhile, the server continuously waits on the slow data set and adds more data to the intermediate data set. You will need a way of marking intermediate data as already sent to the client or not. You will probably want to spawn a separate thread for the code that moves data from the slow data source to the intermediate one.
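A minimal sketch of this intermediary idea, assuming an in-memory store per client; `IntermediaryStore` and its members are hypothetical names, and the Web API action that calls `TakeAvailable` is left out:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class IntermediaryStore
{
    private readonly ConcurrentQueue<string> _ready = new ConcurrentQueue<string>();
    private volatile bool _done;

    // Background producer: drains the slow source (the stored procedure's
    // result sets, ~3 s each in the question) into the queue as they arrive.
    public void StartProducing(IEnumerable<string> slowSource)
    {
        Task.Run(() =>
        {
            foreach (string s in slowSource)
                _ready.Enqueue(s);
            _done = true;
        });
    }

    // Called by the Web API action on each client request: returns whatever
    // strings have arrived since the last call, without waiting for the rest.
    public List<string> TakeAvailable()
    {
        var batch = new List<string>();
        string s;
        while (_ready.TryDequeue(out s))
            batch.Add(s);
        return batch;
    }

    // The client stops polling once this is true and the last batch is drained.
    public bool IsComplete
    {
        get { return _done && _ready.IsEmpty; }
    }
}
```

The client's repeated requests replace the single 30-second request, so it can start interpreting the first string while the server is still reading the later ones.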
I have a .Net Remoting service that will return a class to the client application. That class has a string property where the string can range from 1kb to 400kb worth of data.
I tried passing 256 KB worth of string from server to client, and the client was able to get it in less than 5 seconds, which is still acceptable since this call will only be used for troubleshooting purposes by an administrator. However, I read
here that when sending huge data: "the socket will be blocked from receiving all other messages until it receives the remaining .... packets". If my data ever reaches MB sizes, I do not want to block the client from receiving other messages.
How can I achieve my goal of not blocking the client? Do I compress the string using GZipStream like in here? Or are there better ways?
Good article from Tess Ferrandez: https://blogs.msdn.microsoft.com/tess/2008/09/02/outofmemoryexceptions-while-remoting-very-large-datasets/
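For the compression idea the question raises, here is a minimal GZip round-trip for a string. Note that compression shrinks the payload (often dramatically for text) but does not by itself prevent one large message from blocking a single channel:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static class GzipHelper
{
    public static byte[] Compress(string text)
    {
        byte[] raw = Encoding.UTF8.GetBytes(text);
        using (var output = new MemoryStream())
        {
            // GZipStream must be closed before the buffer is complete,
            // hence the inner using block.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
}
```

Transferring `Compress(bigString)` as a `byte[]` property and calling `Decompress` on the client keeps the Remoting message small without changing the call pattern.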
I have never seen anyone comment on this. How should I structure my program when using sockets?
For example, comparing web services and sockets: with web services I can create methods with clear names. How can I do this using sockets? What if I want to have "methods" in different classes? How do I organize them?
I am trying to make a game. What if I need to have 300 methods and I need to use sockets?
What about this?
udpClient.Connect("localhost", 15000);
Byte[] sendBytes = Encoding.ASCII.GetBytes("MyClass MyMethod(firstParameter");
udpClient.Send(sendBytes, sendBytes.Length);
I need to pass a string over UDP. How can I organize this on the server side?
How can I split it into classes and methods? Or do I need to put 300 "if"s on the server side like this?
if(message.Contains("MyClass MyMethod"))
{
MyClass.MyMethod();
}
if(message.Contains("MyClass MyMethod2"))
...
if(message.Contains("MyClass MyMethod3"))
...
Web services like you're talking about tend to have a few layers between the actual data being received and the method that you've provided. For example:
Client sends message
Server receives message
Server translates message into something usable
Server provides usable message to web service method
In your case, you want to emulate the same kind of functionality with a direct socket connection and no web service framework to do the heavy lifting for you, so you need to provide steps 2, 3, and 4 yourself.
It would help if you had a standard way of structuring your data so that you can provide information to the server, such as "which method do you want to call?" — and frankly, you can do it however you want. It's your client/server: you could use SOAP or JSON, or even create your own fixed-length field format, as long as both sides speak in terms of the same data structure.
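One common way to avoid the 300-if chain is a dispatch table: parse the incoming message into a route key plus arguments, then look the handler up in a dictionary. This is only a sketch; the "ClassName.MethodName arg1 arg2" wire format is an assumption, not anything your client currently sends:

```csharp
using System;
using System.Collections.Generic;

class Dispatcher
{
    private readonly Dictionary<string, Action<string[]>> _handlers =
        new Dictionary<string, Action<string[]>>();

    public void Register(string route, Action<string[]> handler)
    {
        _handlers[route] = handler;
    }

    // Splits "MyClass.MyMethod hello world" into route "MyClass.MyMethod"
    // and args ["hello", "world"], then invokes the registered handler.
    public bool Dispatch(string message)
    {
        string[] parts = message.Split(' ');
        string route = parts[0];
        string[] args = new string[parts.Length - 1];
        Array.Copy(parts, 1, args, 0, args.Length);

        Action<string[]> handler;
        if (_handlers.TryGetValue(route, out handler))
        {
            handler(args);
            return true;
        }
        return false; // unknown route
    }
}
```

Handlers are registered once at startup, e.g. `dispatcher.Register("MyClass.MyMethod", args => MyClass.MyMethod(args[0]));` — adding method number 301 is then one dictionary entry instead of another `if`.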
If you have specific implementation questions about a particular path you are going down, however, you are welcome to ask a different question.
I'm trying to write an asynchronous socket application that transfers complex objects between the two sides.
I used the example here...
Everything is fine until I try to send data that spans multiple packets. When the transferred data requires a multi-packet transfer, the server application hangs and goes out of control without any errors...
After many hours I found a solution: if I close the client's sending socket after each EndSend callback, the problem goes away. But I couldn't understand why this is necessary. Is there another solution for this situation?
My two projects are the same as the example above; I only changed the EndSend callback method as follows:
public void EndSendCallback(IAsyncResult result)
{
Status status = (Status)result.AsyncState;
int size = status.Socket.EndSend(result);
status.Socket.Close(); // <--------------- This line solved the situation
Console.Out.WriteLine("Send data: " + size + " bytes.");
Console.ReadLine();
allDone.Set();
}
Thanks..
This is due to the given example code not handling multi-packet messages (and being broken in general).
A few observations:
The server can only handle 1 client at a time.
The server simply checks whether a single read returns less data than was requested and, if so, assumes that was the last part.
The server then ignores the client socket while leaving the connection open. This puts the responsibility for closing the connection on the client side, which can be confusing and wastes resources on the server.
Now, the first observation is an implementation detail and not really relevant in your case. The second observation is relevant for you, since it will likely result in unexplained bugs: probably not in development, but when this code is actually running somewhere in a real scenario. TCP sockets are stream-oriented, not message-oriented. When the client sends 1000 bytes, the server might need one call to read, or ten; a call to read simply returns as soon as 'some' data is available. What you need to do is implement a protocol that communicates either how much data is being sent over, or when all the data has been sent over. I really recommend just sticking with the HTTP protocol, since it is a well-tested and well-supported protocol that suits most scenarios.
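If you do stay with raw sockets, the usual fix is length-prefixed framing: the sender writes a 4-byte length before the payload, and the receiver loops until it has read exactly that many bytes, never assuming one Read returns a whole message. A sketch over a plain Stream (a NetworkStream in practice):

```csharp
using System;
using System.IO;

static class Framing
{
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        // 4-byte little-endian length header, then the payload itself.
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            // Read may return any number of bytes >= 1; keep looping
            // until the full count has arrived.
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("connection closed mid-message");
            offset += read;
        }
        return buffer;
    }
}
```

With framing in place, neither side needs to close the socket to signal "message complete", which removes the need for the `status.Socket.Close()` workaround.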
The third observation might also cause bugs where the server runs out of resources, since it leaves all connections open.
I need your suggestions about data processing.
My server is a data server (using SQL Server 2005), and my client gets data from the server and displays it on screen.
The server and client communicate over the internet (not a LAN), so the time for data to reach the client depends on the data size and the internet speed.
Assume the SQL Server database has a table with two columns (Value and Change). The client gets data from this table (stored in a DataTable) and displays it in a DataGridView with three columns: Value, Change, and ChangePercent.
Note: ChangePercent = Change/Value.
My question: should the data in the ChangePercent field be calculated at the server or at the client?
If I do it at the server, the server will be overloaded if there are a lot of clients. Moreover, the data returned to clients is larger (three fields instead of two).
If I do it on the client, the client only receives two fields (Value and Change), and the ChangePercent column is calculated at the client.
P/S: the connection between client and server goes through .NET Remoting. The client is a C# 2.0 WinForms application.
Thanks.
Go with calculation on the client.
Almost certainly the calculation will be faster than fetching the extra field over the wire, apart from the fact that business logic shouldn't run on a database server anyway.
Assuming all three values are of the same type, you needlessly increase your data transfer by 33% when calculating on the server. Obviously, this matters only for large result sets.
I don't think it matters much where you do it; a division won't be much overhead for either the server or the client. But consider that you have to write code on the client to handle a very simple operation that could be handled on the server.
EDIT: you can make a test table with, say, 1,000,000 records and compare the actual execution time with and without the division.
I would suggest method #2: send two fields and let the client calculate the third.
The relative amount of calculation is very small for the client.
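For the client-side option, the calculation doesn't even need hand-written code: ADO.NET supports computed columns via a DataColumn expression, evaluated locally after the two real columns arrive over the Remoting channel. Column names are taken from the question:

```csharp
using System;
using System.Data;

class ChangePercentDemo
{
    static void Main()
    {
        DataTable table = new DataTable("Quotes");
        table.Columns.Add("Value", typeof(double));
        table.Columns.Add("Change", typeof(double));
        // Expression column: computed on the client, never transferred.
        table.Columns.Add("ChangePercent", typeof(double), "Change / Value");

        table.Rows.Add(200.0, 10.0);
        Console.WriteLine(table.Rows[0]["ChangePercent"]); // 0.05
    }
}
```

Binding this DataTable to the DataGridView shows all three columns while only Value and Change cross the wire.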