I have a small C# project which has several classes.
The first copies files, the second synchronizes a SQL CE database from SQL Server,
and the third packs files into ZIP archives, etc.
I want almost every function in these classes to report what it did, and whether it succeeded or failed.
At the moment, whenever something happens that should be reported, the class raises
public event EventHandler OperationStatusChanged,
and in the StatusEventArgs I pass a string with a description.
The event is handled in the class that creates and runs the instance.
Ultimately, I want all messages, errors, etc. to be stored in the database using NLog.
Is it possible to do this more elegantly than raising an event and handling it?
Thank you for your time.
Tom
There is nothing wrong with event handling, and it is appropriate for asynchronous operations.
However, it looks like your different operations are sequential, so your calling class could simply invoke them one after the other. To avoid blocking your user interface, start the calling class's loop on a separate thread and inform the UI at the end of each operation.
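Since NLog is the intended destination anyway, another option is to let each class log directly through a static logger and configure a database target once in NLog.config, keeping the event only for UI progress updates. A minimal sketch (the class name and messages are made up, and it assumes the NLog 4.x Error(Exception, string) overload):

using System;
using NLog;

public class FileCopier
{
    // The logger is resolved per class; the database target is configured
    // once in NLog.config, so no event plumbing is needed for logging.
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public void CopyFiles(string source, string destination)
    {
        try
        {
            Log.Info("Copying files from {0} to {1}", source, destination);
            // ... the actual copy work ...
            Log.Info("Copy finished");
        }
        catch (Exception ex)
        {
            Log.Error(ex, "Copy failed");
            throw;
        }
    }
}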
Related
In the base form of a WinForms project, the code for connecting to the database was moved from the Load event to the Shown event.
In the Shown event there is a call to Update() before fetching data; this makes the form appear much faster, which is more pleasant for the users.
But now I have found code in some places like this, for example:
FormRitDetail ritDetail = new FormRitDetail();
ritDetail.PrimaryKeyValue = ritID;
ritDetail.Show();
ritDetail.SendSaleEmail(cancelSale);
ritDetail.Close();
This worked perfectly while the data-fetching code was in the Load event, but now it gives an error which I have tracked down: in the SendSaleEmail method the data has not been fetched yet.
The fetching happens in the Shown event, but it seems that C# performs the call to SendSaleEmail first, and only then the work triggered by Show().
How can I force C# to execute the methods in the order I write them?
I know I can call ritDetail.Update() after ritDetail.Show(), but I would like a general solution that does not involve writing additional code everywhere Show() is called.
Is this possible ?
In the base form of a WinForms project, the code for connecting to the database was moved from the Load event to the Shown event.
There is your real problem. You depend on an event being executed in order to get into a valid object state. That's called temporal coupling, and it's what makes you experience the current problem.
A general guideline is to never execute business logic in event handlers. Instead, create separate methods for that logic; those methods can in turn be invoked from the event handlers.
The other problem is that you load and show an entire form just to send an email. At least, I interpret your question as saying the form will just open, execute and close. Move that code to a new class with exactly that responsibility.
So the answer to your question is:
Do not depend on UI events to ensure that business data is loaded. The data can be loaded directly, and simply not populated into the form until the form is ready.
Forms have a UI responsibility. They should not be responsible for business logic. Create separate classes for that.
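As a rough sketch of that separation (all names here are hypothetical, and the data access is elided):

public class SaleEmailSender
{
    public void SendSaleEmail(int ritId, bool cancelSale)
    {
        // load the data directly; no form, no Shown event involved
        var data = LoadRitData(ritId);
        // ... compose and send the e-mail from 'data' ...
    }

    private object LoadRitData(int ritId)
    {
        // ... the same data access the form performs, minus the UI ...
        return null;
    }
}

The calling code then shrinks to new SaleEmailSender().SendSaleEmail(ritID, cancelSale); with no form opened, shown or closed.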
Update
Regarding the actual problem: I just checked the reference source for the Form class. The Show() method just changes internal state (using the SetWindowLongPtr Win32 API function); nothing actually happens until the message pump processes the resulting message.
There is no guarantee that this is done before the next method call (i.e. SendSaleEmail).
I have a system in which the existing service for a specific process used to run in single-instance mode. The service ran a long process and could serve only one client at a time. The architecture is as follows:
Now I am trying to make this WCF service per-session, so that it can run the long operation for two or more clients simultaneously. Since the process usually takes time, I am also sending the percentage of completion back to the client using a callback channel. The architecture now looks like the one shown below:
The major differences between the two architectures are:

1. Previously only one user could run the process, for multiple objects. Now each user can run the long process, but for different objects.
2. We have added a callback facility to the new per-session architecture.
3. We also plan on giving the user the facility to terminate the process, either on request or when the client connection is closed.
But while trying to achieve the above, we are facing the following issues.
The long-running operation happens in the database, through multiple stored procedures called one by one from a static data-manager class.
Each SP is responsible for adding around 500k rows across multiple tables.
Terminating the connection from the client removes the instance of the service, but since the database operations run in the static class, control gets stuck there and everything stops responding.
I know there is a DbCommand.Cancel() method which stops the operation associated with the DbCommand, but since the class is static, cancelling it is not possible either.
Please suggest the architectural changes needed to solve this issue. I am ready to share more details.
From what I understand, you want multiple clients at the same time, and the static behavior that effectively makes your data manager a singleton doesn't fit with that.
I would correct that.
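To make that concrete, here is a hedged sketch of one direction: give each service session its own data-manager instance that owns its current SqlCommand, so the session can cancel its own database work (class and method names are invented):

using System;
using System.Data;
using System.Data.SqlClient;

public class SessionDataManager : IDisposable
{
    private readonly SqlConnection _connection;
    private SqlCommand _currentCommand;

    public SessionDataManager(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    public void RunStoredProcedure(string procedureName)
    {
        _currentCommand = new SqlCommand(procedureName, _connection);
        _currentCommand.CommandType = CommandType.StoredProcedure;
        _currentCommand.CommandTimeout = 0; // long-running operation
        _currentCommand.ExecuteNonQuery();
    }

    // Call this when the client asks to terminate or the session closes.
    public void CancelCurrentOperation()
    {
        SqlCommand command = _currentCommand;
        if (command != null)
            command.Cancel();
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}

SqlCommand.Cancel() is documented as callable from another thread while the command is executing, which is exactly what a per-session terminate request needs.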
Regards
I have this scenario, and I don't really know where to start. Suppose there's a web-service-like app (it might be an API, though) hosted on a server. That app receives a request to process some data (through some method we will call processData(data theData)).
On the other side, there's a robot (which might be installed on the same server) that processes the data. So, the web service inserts the request into a common database (both programs have access to it) and is supposed to wait for that row to change, then send the results back.
The robot periodically checks the database for new rows, processes the data and sets some sort of flag on the row, indicating that the data was processed.
So the main problem here is: what should the method processData(...) do to check for changes to the data row?
I know one way to do it: I can build a loop that checks the row every x seconds. But I don't want to do that. What I want is some sort of event listener that triggers when the row changes. I know it might involve some asynchronous programming.
I might be dreaming, but is that even possible in a web environment?
I've been reading about the SqlDependency class, async and await, etc.
Depending on how much control you have over design of this distributed system, it might be better for its architecture if you take a step back and try to think outside the domain of solutions you have narrowed the problem down to so far. You have identified the "main problem" to be finding a way for the distributed services to communicate with each other through the common database. Maybe that is a thought you should challenge.
There are many potential ways for these components to communicate and if your design goal is to reduce latency and thus avoid polling, it might in fact be the right way for the service that needs to be informed of completion of this work item to be informed of it right away. However, if in the future the throughput of this system has to increase, processing work items in bulk and instead poll for the information might become the only feasible option. This is also why I have chosen to word my answer a bit more generically and discuss the design of this distributed system more abstractly.
If after this consideration your answer remains the same and you do want immediate notification, consider having the component that processes a work item to notify the component(s) that need to be notified. As a general design principle for distributed systems, it is best to have the component that is most authoritative for a given set of data to also be the component to answer requests about that data. In this case, the data you have is the completion status of your work items, so the best component to act on this would be the component completing the work items. It might be better for that component to inform calling clients and components of that completion. Here it's also important to know if you only write this data to the database for the sake of communication between components or if those rows have any value beyond the completion of a given work item, such as for reporting purposes or performance indicators (KPIs).
I think there can be valid reasons, though, why you would not want to have such a call, such as reducing coupling between components or lack of access to communicate with the other component in a direct manner. There are many communication primitives that allow such notification, such as MSMQ under Windows, or Queues in Windows Azure. There are also reasons against it, such as dependency on a third component for communication within your system, which could reduce the availability of your system and lead to outages. The questions you might want to ask yourself here are: "How much work can my component do when everything around it goes down?" and "What are my design priorities for this system in terms of reliability and availability?"
So I think the main problem you might really want to solve first is a bit more abstract: what should the interface through which the components of this distributed system communicate look like?
If after all of this you remain set on having the SQL database be the interface of communication between those components, you could explore using INSERT and UPDATE triggers in SQL. You can easily look up the syntax of those commands and specify stored procedures that then get executed. In those stored procedures you would want to check the completion flag of any new rows, and possibly restrict the number of rows you check by date, or keep an ID for the last processed work item. To then notify the other component, you could go as far as using the built-in stored procedure xp_cmdshell to execute command lines under Windows. The command you execute could be a simple tool that pings your service about completion of the task.
I'm sorry to have initially overlooked your suggestion to use SQL Query Notifications. That is also a feasible way and works through the Service Broker component. You would define a SqlCommand, as if normally querying your database, pass this to an instance of SqlDependency and then subscribe to the event called OnChange. Once you execute the SqlCommand, you should get calls to the event handler you added to OnChange.
I am not sure, however, how to get the exact changes to the database out of the SqlNotificationEventArgs object that will be passed to your event handler, so your query might need to be specific enough for the application to tell that the work item has completed whenever the query changes, or you might have to do another round-trip to the database from your application every time you are notified to be able to tell what exactly has changed.
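For what it's worth, a minimal sketch of that SqlDependency wiring (the table and column names dbo.WorkItems, IsCompleted and Id are placeholders for your schema, and Service Broker must be enabled on the database):

using System;
using System.Data.SqlClient;

public class WorkItemWatcher
{
    private readonly string _connectionString;

    public WorkItemWatcher(string connectionString)
    {
        _connectionString = connectionString;
        // Requires Service Broker to be enabled on the database.
        SqlDependency.Start(_connectionString);
    }

    public void Watch(int workItemId)
    {
        var connection = new SqlConnection(_connectionString);
        connection.Open();

        // Notification queries must use two-part table names and explicit columns.
        var command = new SqlCommand(
            "SELECT IsCompleted FROM dbo.WorkItems WHERE Id = @id", connection);
        command.Parameters.AddWithValue("@id", workItemId);

        var dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
        {
            // A dependency fires only once; re-subscribe if you need further changes.
            // e.Info describes the kind of change; re-query for the actual row state.
            Console.WriteLine("Row changed: {0}", e.Info);
        };

        // The command must actually be executed to register the notification.
        using (var reader = command.ExecuteReader()) { }
        connection.Close();
    }
}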
Are you referring to a Message Queue? The .NET Framework already provides this facility. I would say let the web service manage an application-level queue; the robot then requests work items from the same web service. Assuming the data needed for the jobs is small, you can keep the whole thing in memory. I would rather not involve a database if you don't already have one.
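A tiny sketch of such an in-memory queue, assuming the jobs can be described by simple strings (ConcurrentQueue keeps it thread-safe across service requests):

using System.Collections.Concurrent;

public static class JobQueue
{
    private static readonly ConcurrentQueue<string> Jobs = new ConcurrentQueue<string>();

    // The web service adds work here when a request comes in.
    public static void Enqueue(string job)
    {
        Jobs.Enqueue(job);
    }

    // The robot asks the same web service for the next job.
    public static bool TryDequeue(out string job)
    {
        return Jobs.TryDequeue(out job);
    }
}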
After reading this: http://sourcemaking.com/design_patterns/command
I still don't quite understand why we need this.
The idea is that if commands are encapsulated as objects then those commands can be captured, stored, queued, replayed etc.
It also makes it easier for commands to know how to undo themselves (i.e. perform the reverse operation): an executed command can be stored in a list and later 'undone' in reverse order to restore the state as it was before the commands ran.
Also it decouples the sender of the command from the receiver. This can allow multiple things to generate the same command (a menu item and a button for example) and they will be handled in the same way.
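To illustrate the points above, here is a small sketch of the pattern (the interface and class names are illustrative, not from any particular framework):

using System;
using System.Collections.Generic;

// A command is an object that knows how to do, and undo, one piece of work.
public interface ICommand
{
    void Execute();
    void Undo();
}

public class AddTextCommand : ICommand
{
    private readonly List<string> _document;
    private readonly string _text;

    public AddTextCommand(List<string> document, string text)
    {
        _document = document;
        _text = text;
    }

    public void Execute() { _document.Add(_text); }
    public void Undo() { _document.Remove(_text); }
}

public class CommandHistory
{
    private readonly Stack<ICommand> _done = new Stack<ICommand>();

    public void Run(ICommand command)
    {
        command.Execute();
        _done.Push(command); // stored so it can be replayed or undone
    }

    public void UndoLast()
    {
        if (_done.Count > 0) _done.Pop().Undo();
    }
}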
It's a good way to encapsulate asynchronous operations and keep their parameters and context in one place.
E.g. an HTTP request: you send a request over a socket and wait for the response to arrive. If your application is, say, a web browser, you don't want to block until the request is done but move on. When the response arrives, you have to continue in the context where you stopped, e.g. reading the data and putting it into the right place (e.g. putting the downloaded image data somewhere for later rendering). Matching a response to the context it belongs to can become tricky if you have one big client class firing off multiple asynchronous operations. Responses might arrive in arbitrary order. Which response belongs to what? What should be done with each response? How do you handle possible errors? If those requests are encapsulated in commands, and each command receives only its own response, the commands know best how to continue from there and handle the response. If you have sequences of requests/responses, it's also much easier to keep track of each sequence's state. Commands can be grouped into composite commands (the Composite pattern).
The client passes everything needed to the command, and waits until the command finishes, reporting back either success or error.
Another big advantage is when using multi-threading: if all data needed for the operation is encapsulated in the command object, it's easy to move the command to another thread and have it executed there, without the usual locking headaches you get when sharing objects among threads. Create command, pass everything it needs to it (copy, not by reference), pass to other thread, synchronize only when receiving the result, done.
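A sketch of that multi-threading scenario, assuming a CPU-bound operation and using Task to run the command elsewhere (the command and its work are invented for illustration):

using System;
using System.Threading.Tasks;

public class ResizeImageCommand
{
    private readonly byte[] _imageBytes; // private copy, nothing shared
    private readonly int _targetWidth;

    public ResizeImageCommand(byte[] imageBytes, int targetWidth)
    {
        _imageBytes = (byte[])imageBytes.Clone(); // copy, not a reference
        _targetWidth = targetWidth;
    }

    public byte[] Execute()
    {
        // ... CPU-bound resize work on the private copy ...
        return _imageBytes;
    }
}

// Usage: no locking is needed until the result is collected.
// var command = new ResizeImageCommand(bytes, 800);
// Task<byte[]> resized = Task.Run(() => command.Execute());
// ... later: byte[] result = resized.Result;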
The command pattern separates the code that knows how to do some work from the code that knows when it needs to be done, and with what parameters.
The most obvious case is a button that knows when you click it, but doesn't know what work to do at that moment. The command pattern lets you pass a do-some-work object to the button, which invokes the object when it is clicked.
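For instance, reusing the ICommand interface sketched earlier (note this is the illustrative interface from above, not the WPF System.Windows.Input.ICommand):

using System.Windows.Forms;

public static class CommandButtonFactory
{
    // The button knows when it was clicked; the work is supplied from outside.
    public static Button Create(string text, ICommand command)
    {
        var button = new Button { Text = text };
        button.Click += (sender, e) => command.Execute();
        return button;
    }
}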
Basically, the Command pattern is a way to partially achieve "Function as object" in Java (or C#).
Since you can't just create a standalone function (or method) and do whatever you want with it, like passing it as a parameter to some other function or keeping it in a variable for later execution, this is the workaround:
You wrap some code in a class (this is your execute method).
Instantiate the class. Now, this object you have is "a function as an object".
You can pass the object as a parameter, keep it around or whatever.
Eventually, you'll want to call the execute method.
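Those steps might look like this in C# (a toy example; in practice C# delegates can serve the same purpose, but the class form is what the Command pattern describes):

using System;

public class PrintGreetingCommand
{
    private readonly string _name;

    public PrintGreetingCommand(string name)
    {
        _name = name;
    }

    // Step 1: the wrapped code lives in the execute method.
    public void Execute()
    {
        Console.WriteLine("Hello, " + _name);
    }
}

// Steps 2-4: instantiate it, pass it around, eventually execute it.
// var command = new PrintGreetingCommand("world");
// command.Execute();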
It describes a solution to a problem: we want to issue commands, but don't want to define 30 methods across 8 classes to achieve it. Using the pattern mentioned, we issue a Command object, and the receiver is free to ignore it or act on it in some way. The complexity of the Command object is implementation-defined, but this is a nice way to tell objects "hey, do this".
Additionally, because we have encapsulated this in an object, we can go further and queue commands, dispatch them at whatever intervals we wish, and also revert them (provided, of course, that the object you send the command to can 'undo' a Command as well as 'do' it).
So, imagine a drawing package that allows you to add shapes to a canvas. Each time the user does this, a command can be issued:
m_Canvas.push_back(new Line(1.0f, 2.0f));
m_Canvas.push_back(new Line(3.5f, 3.1f));
m_Canvas.push_back(new Circle(2.0f, 3.0f, 1.5f));
and so on. This assumes Line and Circle derive from a common Command base class.
Our renderer can use this canvas collection to render, and undoing is simply a case of removing the last performed command. By tracking what the user undoes in a separate collection, we can also redo.
I use a third-party DLL to get some data from their servers. There is a void method that I call, and I subscribe to an event that is raised by the call to this method. The raised event returns the data through its parameters.
so,
call to: void getdata(id)
raises: void onReturn(object) --> which returns an object that has the data.
This works every time there is a single call to getdata(id).
The problem is when I loop through a list of ids and call getdata(id) inside that loop: the corresponding events are not all raised properly.
Say, for a list of 10 ids there are 10 calls to getdata(id), but only a few onReturns are raised.
The returned object also contains the id that was passed to getdata(id), so I can match the data I sent with the data I receive.
Is there a way to make sure that all events get handled? If I send 10 ids via getdata(id), I want to make sure that all 10 onReturns are processed.
And I'm using C#, .NET 4.0.
Thanks
If it's a third-party DLL, there's no telling how they've implemented it. When you step through in debug mode, do you get past the call to getData() before the onReturn() listener is called? If so, it might be using threads (or at least asynchronous listeners) internally, and calling multiple getData()s too close together might cause it to stomp on pending responses.
The only way I can think of to get around this is to use multithreading yourself, e.g. with a Mutex that waits after the call to getData() and is released in the onReturn() event. That way you'd only have one outstanding request at a time, which seems to be the condition that works for you.
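A sketch of that idea, using an AutoResetEvent instead of a literal Mutex, and an invented IDataService interface standing in for the vendor's getdata/onReturn API:

using System;
using System.Collections.Generic;
using System.Threading;

// Stand-in for the vendor's API surface described in the question.
public interface IDataService
{
    event Action<object> onReturn;
    void getdata(int id);
}

public class SequentialFetcher
{
    public void FetchAll(IDataService service, IEnumerable<int> ids)
    {
        var done = new AutoResetEvent(false);

        service.onReturn += result =>
        {
            // ... handle/match the result via the id it carries ...
            done.Set(); // release the loop for the next request
        };

        foreach (int id in ids)
        {
            service.getdata(id);
            done.WaitOne(); // block until this request's response arrives
        }
    }
}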
Edit: Have you talked to the third party vendor about this yet? I'm guessing their support isn't the best if you thought of us first, but it might be worth a shot.
Edit the second: When you say it gets data from their servers, does this mean it makes requests over the network? If those requests aren't encrypted, perhaps you could reverse-engineer the protocol and build a new API for yourself instead of relying on a black box that has proven to be buggy.