For a call tracking application I'm developing, I want to maintain a local database.
As it stands, the application searches for new records in Twilio and inserts them into my database every time it loads. This is very time consuming.
In order to avoid that runtime expense, is there a way I can use usage triggers in Twilio to automatically populate my database in real time? Or even just daily?
If not, how can I achieve something like this?
Since Twilio is already calling your servers (unless there's some way to use it without doing that, but I don't think there is), can't you implement logging there? For instance, before you feed back your greeting, pop in a logging routine to note that you've received a call?
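A minimal sketch of that idea, assuming an ASP.NET MVC action wired up as the Twilio voice webhook (the controller shape and the CallLog helper are hypothetical; CallSid/From/To are form fields Twilio posts with each request):

    using System;
    using System.Web.Mvc;

    public class TwilioController : Controller
    {
        // Configured in the Twilio console as the voice request URL.
        [HttpPost]
        public ActionResult IncomingCall(string CallSid, string From, string To)
        {
            // Log the call locally before returning any TwiML.
            CallLog.Insert(CallSid, From, To, DateTime.UtcNow); // hypothetical DAL

            // Then serve the greeting as usual.
            var twiml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
                        "<Response><Say>Thanks for calling!</Say></Response>";
            return Content(twiml, "text/xml");
        }
    }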
I'm not sure if they offer any other sorts of APIs or callbacks, but I really don't see why anything like that would be necessary. It'd just tie up your servers with more requests at no additional gain. I was just going through their documentation and I don't see anything like this. I could be just totally glossing over it, but again it just seems redundant. The entire Twilio system is based effectively on event hooks, so having separate ones wouldn't serve much additional use.
On the other hand, if for some reason you have absolutely no access whatsoever to the code or people behind the code that serves TwiML back, unless someone else is seeing an event hook API, you might want to just set up a scheduled job on your server (or in Azure, or whatever you're using) to query Twilio daily, since I know you mentioned that that would be sufficient. You could also, of course, set it more frequently. But that really seems like a waste of resources and effort when they're already telling you everything about every call through the massive list of query parameters they pass with every request.
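If you do end up polling on a schedule, a rough sketch of the daily sync (the endpoint shape follows Twilio's documented 2010-04-01 REST API; paging, error handling, and the actual upsert are assumptions left to you):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    static async Task SyncCallsAsync(string accountSid, string authToken, DateTime since)
    {
        using (var http = new HttpClient())
        {
            var credentials = Convert.ToBase64String(
                Encoding.ASCII.GetBytes(accountSid + ":" + authToken));
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // "StartTime>=" filters to calls on/after the last sync date.
            var url = "https://api.twilio.com/2010-04-01/Accounts/" + accountSid +
                      "/Calls.json?StartTime%3E=" + since.ToString("yyyy-MM-dd") +
                      "&PageSize=1000";

            var json = await http.GetStringAsync(url);
            // Parse the "calls" array and insert any records your db is missing
            // (parsing and upsert omitted; they depend on your schema).
        }
    }

Run that from a scheduled task (Task Scheduler, cron, or an Azure worker) once a day.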
Related
I have this scenario, and I don't really know where to start. Suppose there's a web-service-like app (it might be an API, though) hosted on a server. That app receives a request to process some data (through some method we'll call processData(data theData)).
On the other side, there's a robot (which might be installed on the same server) that processes the data. So the web service inserts the request into a common database (both programs have access to it), and it's supposed to wait for that row to change and send the results back.
The robot periodically checks the database for new rows, processes the data and sets some sort of flag on that row, indicating that the data was processed.
So the main problem here is: what should the method processData(...) do to check for changes to the data row?
I know one way to do it: I can build an iteration block that checks the row every x seconds. But I don't want to do that. What I want is to build some sort of event listener that triggers when the row changes. I know it might involve some asynchronous programming.
I might be dreaming, but is that even possible in a web environment?
I've been reading about the SqlDependency class, async/await, etc.
Depending on how much control you have over design of this distributed system, it might be better for its architecture if you take a step back and try to think outside the domain of solutions you have narrowed the problem down to so far. You have identified the "main problem" to be finding a way for the distributed services to communicate with each other through the common database. Maybe that is a thought you should challenge.
There are many potential ways for these components to communicate, and if your design goal is to reduce latency and thus avoid polling, it might in fact be right for the service that needs to be informed of completion of a work item to be informed of it right away. However, if the throughput of this system has to increase in the future, processing work items in bulk and polling for the information instead might become the only feasible option. This is also why I have chosen to word my answer a bit more generically and discuss the design of this distributed system more abstractly.
If after this consideration your answer remains the same and you do want immediate notification, consider having the component that processes a work item notify the component(s) that need to know. As a general design principle for distributed systems, it is best to have the component that is most authoritative for a given set of data also be the component that answers requests about that data. In this case, the data you have is the completion status of your work items, so the best component to act on this would be the component completing the work items, and it might be better for that component to inform calling clients and components of that completion. Here it's also important to know whether you write this data to the database solely for communication between components, or whether those rows have value beyond the completion of a given work item, such as for reporting purposes or key performance indicators (KPIs).
I think there can be valid reasons, though, why you would not want to have such a call, such as reducing coupling between components or lack of access to communicate with the other component in a direct manner. There are many communication primitives that allow such notification, such as MSMQ under Windows, or Queues in Windows Azure. There are also reasons against it, such as dependency on a third component for communication within your system, which could reduce the availability of your system and lead to outages. The questions you might want to ask yourself here are: "How much work can my component do when everything around it goes down?" and "What are my design priorities for this system in terms of reliability and availability?"
So I think the main problem you might want to really try to solve first is a bit more abstract: what should the interface through which the components of this distributed system communicate look like?
If after all of this you remain set on having the interface of communication between those components be the SQL database, you could explore using INSERT and UPDATE triggers in SQL. You can easily look up the syntax of those commands and specify stored procedures that then get executed. In those stored procedures you would want to check the completion flag of any new rows, and possibly restrict the set of rows you check by date or by an ID for the last processed work item. To then notify the other component, you could go as far as using the built-in stored procedure xp_cmdshell to execute command lines under Windows. The command you execute could be a simple tool that pings your service about completion of the task.
I'm sorry to have initially overlooked your suggestion to use SQL Query Notifications. That is also a feasible way and works through the Service Broker component. You would define a SqlCommand, as if normally querying your database, pass this to an instance of SqlDependency and then subscribe to the event called OnChange. Once you execute the SqlCommand, you should get calls to the event handler you added to OnChange.
I am not sure, however, how to get the exact changes to the database out of the SqlNotificationEventArgs object that will be passed to your event handler, so your query might need to be specific enough for the application to tell that the work item has completed whenever the query changes, or you might have to do another round-trip to the database from your application every time you are notified to be able to tell what exactly has changed.
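A minimal sketch of that subscription, assuming SQL Server with Service Broker enabled (the table and columns are invented; note that SqlDependency queries have restrictions such as two-part table names and no SELECT *, and that each subscription fires only once, so you must re-arm it):

    using System;
    using System.Data.SqlClient;

    // Once at application start (pair with SqlDependency.Stop on shutdown):
    // SqlDependency.Start(connectionString);

    void Subscribe()
    {
        var conn = new SqlConnection(connectionString);
        var cmd = new SqlCommand(
            "SELECT WorkItemId, IsCompleted FROM dbo.WorkItems WHERE IsCompleted = 1",
            conn);

        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // e.Info describes the kind of change, not which rows changed;
            // re-query here to find the newly completed work items.
            Subscribe(); // re-arm, since a dependency only fires once
        };

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume the initial result set */ }
        }
        conn.Close();
    }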
Are you referring to a Message Queue? The .Net framework already provides this facility. I would say let the web service manage an application level queue. The robot will request the same web service for things to do. Assuming that the data needed for the jobs are small, you can keep the whole thing in memory. I would rather not involve a database, if you don't already have one.
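A rough sketch of such an application-level queue using .NET's concurrent collections (every type and member name here is invented for illustration):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class Job
    {
        public Guid Id;
        public string Data;
    }

    public static class JobBoard
    {
        static readonly ConcurrentQueue<Job> Pending = new ConcurrentQueue<Job>();
        static readonly ConcurrentDictionary<Guid, TaskCompletionSource<string>> Waiters =
            new ConcurrentDictionary<Guid, TaskCompletionSource<string>>();

        // Called from the web service's processData(...): enqueue and wait.
        public static Task<string> ProcessDataAsync(string data)
        {
            var job = new Job { Id = Guid.NewGuid(), Data = data };
            var tcs = new TaskCompletionSource<string>();
            Waiters[job.Id] = tcs;
            Pending.Enqueue(job);
            return tcs.Task; // completes when the robot reports a result
        }

        // The robot asks the web service for work...
        public static Job TryTakeJob()
        {
            Job job;
            return Pending.TryDequeue(out job) ? job : null;
        }

        // ...and posts the result back, which wakes the waiting caller.
        public static void CompleteJob(Guid id, string result)
        {
            TaskCompletionSource<string> tcs;
            if (Waiters.TryRemove(id, out tcs))
                tcs.SetResult(result);
        }
    }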
I'm writing an application in C# that allows people to track the amount of time they spend on tasks. It can be used by a single person to track their own personal time, but it will also be able to work in, for example, a company - like, if they want to track the amount of time spent on some project.
The data being stored by this program is pretty simple - a collection of all the tasks and each "block" of time that was spent on it (including date, start/stop time, and length of time spent).
For the multiuser functionality, my plan was to have a single server that the clients send their tracked-time updates to. I don't think the clients will need a continuous connection, as the updates would typically be pretty far apart.
Additionally, as both the server and the client will store a copy of the data, either of them can ask for a copy from the other if there's a data loss on either. Femaref has informed me that this is a poor idea, so I've removed it.
So, my question is, how should I approach this? I've seen some C# client/server tutorials, but those seem to be geared towards continuous connections.
Your best bet is to track the data separately. First, allow users to track their own time and store that in a local db (you can use something like csharp-sqlite), then when the user connects, sync whatever data you want to keep to the server.
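A minimal sketch of that local store, shown here with Microsoft.Data.Sqlite for illustration (the answer names csharp-sqlite; any embedded SQLite wrapper works the same way, and this schema is invented):

    using Microsoft.Data.Sqlite;

    using (var conn = new SqliteConnection("Data Source=timetracker.db"))
    {
        conn.Open();

        var create = conn.CreateCommand();
        create.CommandText =
            @"CREATE TABLE IF NOT EXISTS TimeBlocks (
                  Id       INTEGER PRIMARY KEY AUTOINCREMENT,
                  TaskId   INTEGER NOT NULL,
                  StartUtc TEXT NOT NULL,
                  EndUtc   TEXT NOT NULL,
                  Synced   INTEGER NOT NULL DEFAULT 0)";
        create.ExecuteNonQuery();

        // Record one block of time spent on a task.
        var insert = conn.CreateCommand();
        insert.CommandText =
            "INSERT INTO TimeBlocks (TaskId, StartUtc, EndUtc) VALUES ($task, $start, $end)";
        insert.Parameters.AddWithValue("$task", 42);
        insert.Parameters.AddWithValue("$start", "2024-01-01T09:00:00Z");
        insert.Parameters.AddWithValue("$end", "2024-01-01T10:30:00Z");
        insert.ExecuteNonQuery();

        // Rows with Synced = 0 get pushed to the server on the next connect.
    }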
For data that you want to track server-side, you're just going to want the app to sign in and say it's starting a task, and then sign out when it's stopping a task (then have the server side hit the db functions). You're going to want to keep the user data and the server data separate, so you know what you can trust and what the implications are of using what data.
Obviously, you're going to want to handle situations where a task goes on longer than expected, for example when someone forgets to say they're done with the task (like when their computer just crashes). You can do this by having your app report that it's still working on a task every so often.
The best way I have found to get around issues caused by trusting people's input is to just tie into something like your local AD or LDAP and allow management control (because in the end they are the ones that sort out any messes that come from people having the wrong hours); that's all handled server side. If you don't have AD or LDAP, you might have to consider implementing some kind of RSA key mechanism for authentication and authority chains.
For talking to the server-side process from the client, I suggest something like SOAP (SOAP using C#). That way you can move your server language to whatever makes you feel all warm and fuzzy.
This is a bit of a broad question, so it's hard to cover everything, but it should give you some leads in the right direction.
I want a certain action request to trigger a set of e-mail notifications. The user does something, and it sends the emails. However, I do not want the user to wait for the page response while the system generates and sends the e-mails. Should I use multithreading for this? Will this even work in ASP.NET MVC? I want the user to get a page response back and the system to just finish sending the e-mails at its own pace. Not even sure if this is possible or what the code would look like. (PS: Please don't offer me an alternative solution for sending e-mails; I don't have time for that kind of reconfiguration.)
SmtpClient.SendAsync is probably a better bet than manual threading, though multi-threading will work fine with the usual caveats.
http://msdn.microsoft.com/en-us/library/x5x13z6h.aspx
As other people have pointed out, success/failure cannot be indicated deterministically when the page returns before the send is actually complete.
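A minimal sketch of that pattern (host and addresses are placeholders); the SendCompleted event at least lets you log failures server-side, even though the user's page has long since returned:

    using System;
    using System.Net.Mail;

    var client = new SmtpClient("smtp.example.com"); // placeholder host
    var message = new MailMessage("noreply@example.com", "user@example.com",
                                  "Notification", "Your request was processed.");

    client.SendCompleted += (sender, e) =>
    {
        // Runs when the send finishes, long after the page returned.
        if (e.Error != null)
            LogError(e.Error); // hypothetical logger
        message.Dispose();
    };

    client.SendAsync(message, null); // returns immediately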
A couple of observations when using asynchronous operations:
1) They will come back to bite you in some way or another. It's a risk versus benefit discussion. I like the SendAsync() method I proposed because it means forms can return instantly even if the email server takes a few seconds to respond. However, because it doesn't throw an exception, you can have a broken form and not even know it.
Of course unit testing should address this initially, but what if the production configuration file gets changed to point to a broken mail server? You won't know it, you won't see it in your logs, you only discover it when someone asks you why you never responded to the form they filled out. I speak from experience on this one. There are ways around this, but in practicality, async is always more work to test, debug, and maintain.
2) Threading in ASP.Net works in some situations if you understand the ThreadPool, app domain refreshes, locking, etc. I find that it is most useful for executing several operations at once to increase performance where the end result is deterministic, i.e. the application waits for all threads to complete. This way, you gain the performance benefits while still having a clear indication of results.
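For instance, a sketch of that fan-out-and-wait shape (the Load and Render helpers are hypothetical):

    using System.Threading.Tasks;

    // Run two independent lookups in parallel, but still block the request
    // until both finish, so the end result stays deterministic.
    var customerTask = Task.Run(() => LoadCustomer(id));
    var ordersTask   = Task.Run(() => LoadOrders(id));
    Task.WaitAll(customerTask, ordersTask);
    Render(customerTask.Result, ordersTask.Result);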
3) Threading/Async operations do not increase performance, only perceived performance. There may be some edge cases where that is not true (such as processor optimizations), but it's a good rule of thumb. Improperly used, threading can hurt performance or introduce instability.
The better scenario is out of process execution. For enterprise applications, I often move things out of the ASP.Net thread pool and into an execution service.
See this SO thread: Designing an asynchronous task library for ASP.NET
I know you are not looking for alternatives, but using a message queue (such as MSMQ) could be a good solution for this problem in the future. Using multithreading in ASP.NET is normally discouraged, but in your current situation I don't see why you shouldn't. It is definitely possible, but beware of the pitfalls related to multithreading (stolen from here):
• There is a runtime overhead associated with creating and destroying threads. When your application creates and destroys threads frequently, this overhead affects the overall application performance.
• Having too many threads running at the same time decreases the performance of your entire system. This is because your system is attempting to give each thread a time slot to operate inside.
• You should design your application well when you are going to use multithreading, or otherwise your application will be difficult to maintain and extend.
• You should be careful when you implement a multithreading application, because threading bugs are difficult to debug and resolve.
At the risk of violating your no-alternative-solution prime directive, I suggest that you write the email requests to a SQL Server table and use SQL Server's Database Mail feature. You could also write a Windows service that monitors the table and sends emails, logging successes and failures in another table that you view through a separate ASP.Net page.
You can probably use ThreadPool.QueueUserWorkItem.
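A minimal sketch (SendNotificationEmails is hypothetical); note that an unhandled exception on a pool thread will take down the ASP.NET worker process, so catch everything:

    using System;
    using System.Threading;

    ThreadPool.QueueUserWorkItem(state =>
    {
        try
        {
            SendNotificationEmails(); // hypothetical: builds and sends the emails
        }
        catch (Exception ex)
        {
            // Log and swallow: an unhandled exception here kills the process.
            System.Diagnostics.Trace.TraceError(ex.ToString());
        }
    });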
Yes this is an appropriate time to use multi-threading.
One thing to look out for, though, is how you will express to the user when the email sending ultimately fails. Not blocking the user is a good step towards improving your UI, but it still needs to not give a false sense of success when the send ultimately failed at a later time.
Don't know if any of the above links mentioned it, but don't forget to keep an eye on request timeout values; the queued items will still need to complete within that time period.
I'm currently in the process of building an ASP.NET MVC web application in c#.
I want to make sure that this application is built so that it can scale out in the future without the need for major re-factoring.
I'm quite keen on using some sort of queue to post any database writes to, with a process that polls that queue asynchronously and performs the update. Once this data has been posted back to the database, the client then needs to be updated with the new information. The implication here is that the process of writing the data back to the database could take a short while, based on business rules executing on the server.
My question is: what would be the best way to handle the update from the client/browser perspective?
I'm thinking along the lines of posting the data back to the server, adding it to the queue, and immediately sending a response to the client, which then polls at some frequency to get the updated data. Any best practices or patterns on this would be appreciated.
Also, in terms of reading data from the database, would you suggest any particular techniques, or would reading straight from the db be sufficient given my scenario?
Update
Thought I'd post an update on this as it's been a while. We've actually ended up using Windows Azure but the solution is applicable to other platforms.
What we've ended up doing is using the Windows Azure Queue to post messages/commands to. This is a very quick operation that returns immediately. We then have a worker role which processes these messages on another thread. This lets us minimize db writes/updates on the web role, in theory allowing us to scale more easily.
We handle informing the user via emails or even silently depending on the type of data we are dealing with.
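A rough sketch of that shape with the classic Windows Azure storage SDK (queue name and payload are made up; older SDK versions spell the create call CreateIfNotExist, and the newer Azure.Storage.Queues package has equivalent calls):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    var account = CloudStorageAccount.Parse(connectionString);
    var queue = account.CreateCloudQueueClient().GetQueueReference("commands");
    queue.CreateIfNotExists();

    // Web role: enqueue the command and return to the browser immediately.
    queue.AddMessage(new CloudQueueMessage("{\"type\":\"UpdateOrder\",\"id\":1234}"));

    // Worker role loop: pull messages and do the actual db writes.
    while (true)
    {
        var msg = queue.GetMessage();
        if (msg == null) { System.Threading.Thread.Sleep(1000); continue; }
        ProcessCommand(msg.AsString); // hypothetical handler doing the db work
        queue.DeleteMessage(msg);     // delete only after successful processing
    }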
Not sure if this helps, but why don't you have an auto refresh on the page, every 30 seconds for example? This is sometimes how news feeds work on sports websites, saying the page will be updated every x minutes.
<meta http-equiv="refresh" content="120;url=index.aspx">
Why not let the user manually poll the status of the request? This is how your typical e-commerce app is implemented. When you purchase something online, the order is submitted to a queue for fulfillment. After it's submitted, the user is presented with a "Thank you for your order" page and a link where they can check the status of the order. The user can visit the link anytime to check the status, so there's no need for an auto-poll mechanism.
Is your scenario so different from this?
Sorry, in my previous answer I might have misunderstood. I was talking about a "queue" as something stored in a SQL DB, but on reading your post again it seems you may be talking about a separate message queueing component like MSMQ or JMS?
I would never put a message queue in the front end, between a user and backend SQL DB. Queues are good for scaling across time, which is suitable between backend components, where variances in processing times are acceptable (e.g. order fulfillment)... when dealing with users, this variance is usually not acceptable.
While I don't know if I agree with the logic of why, I do know that something like jQuery is going to make your life a LOT easier. I would suggest making a RESTful web API that your client-side code consumes. For example, you want to post a new order to the system and keep the client responsive? Make a POST to www.mystore.com/order/create and have that return the URI of the new order (i.e. the order number), such as www.mystore.com/order/1234. That response is then stored in the client code, and a jQuery call is set up to poll that URI for a response, or stop polling on an error.
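A minimal ASP.NET MVC sketch of the server half of that pattern (the controller, OrderRequest type, and queue/store helpers are all invented; the jQuery side just POSTs to /order/create and then polls the returned status URL):

    using System.Web.Mvc;

    public class OrderRequest { public string Item; } // hypothetical payload

    public class OrderController : Controller
    {
        [HttpPost]
        public ActionResult Create(OrderRequest request)
        {
            int id = OrderQueue.Enqueue(request); // hypothetical queue
            return Json(new { statusUrl = Url.Action("Status", new { id }) });
        }

        [HttpGet]
        public ActionResult Status(int id)
        {
            var order = OrderStore.Find(id); // hypothetical lookup
            return Json(new { id, state = order.State }, // "Pending", "Complete", ...
                        JsonRequestBehavior.AllowGet);
        }
    }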
For further reading check out this Wikipedia article on the concept of REST.
Additionally you might consider the Reactive Extensions for .NET and within that check out the RxJS sub-project which has some pretty slick ways of handling with the polling problem without causing you to write the polling code yourself. Fun things to play with!
Maybe you can add a "pending transactions" area to the UI. When you queue a transaction, add it to the user's "pending transactions" list.
When it completes, show that in the user's "pending transactions" list the next time they request a new page.
You can make a completed transaction stay listed until the user clicks on it, or for a predetermined length of time.
I need a method to run every so often that does some database processing. However, I may need it to be triggerable by an admin on the site. But I don't want this method being run more than once at the same time, as this could cause issues with the way it hits the database.
For example, could I...
Create a singleton class that runs the method on a timer, and instantiate it in the global.asax file. Then, since it's a singleton, I can call it from my normal .aspx pages and call the method whenever I want. I would probably need to use that "lock" feature of C# to check to see if the method is already running.
I heard some talk lately that Singletons are "evil", but this seems like the perfect fit for it. What do you think? Thanks in advance.
Timers and locks (that are intended to synchronize access to the database) are a bad idea on the web; you may have zero, one or many app-pools on different servers. They may recycle at any time, and won't be spun up until needed. Basically, this won't prevent you hammering the db from multiple sources.
Personally, I'd be tempted to either write a service to do this work (either db-polling, or via WCF etc), or use the db (a SP or similar) - set a flag in a table-row to say "in progress", do the work at the db, and clear the flag (duplicate attempts exit immediately while in progress).
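A minimal sketch of that flag approach from C# (the table and job name are invented); the point is that the claiming UPDATE is atomic, so only one caller ever sees a row affected:

    using System.Data.SqlClient;

    // Assumes a one-row table: dbo.JobLock(JobName, InProgress)
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        var claim = new SqlCommand(
            @"UPDATE dbo.JobLock SET InProgress = 1
              WHERE JobName = @job AND InProgress = 0", conn);
        claim.Parameters.AddWithValue("@job", "NightlyProcessing");

        if (claim.ExecuteNonQuery() == 0)
            return; // another worker holds the flag; exit immediately

        try
        {
            DoDatabaseProcessing(); // hypothetical: the actual work
        }
        finally
        {
            var release = new SqlCommand(
                "UPDATE dbo.JobLock SET InProgress = 0 WHERE JobName = @job", conn);
            release.Parameters.AddWithValue("@job", "NightlyProcessing");
            release.ExecuteNonQuery();
        }
    }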
I would do it this way
Build a normal ASP.NET page which does the processing
Borrow LFSR Consulting's idea for a flag in the DB, which does the work of checking whether the process is currently running
Use a normal cron job or the Windows Task Scheduler to call the web page on a regular basis.
And Singletons aren't evil they just get abused easily.
Another option, which Joel Spolsky mentioned in one of the SO podcasts (I believe it was #20-something), is to set an empty Cache object on application start with a certain expiration date, and in the CacheItemRemovedCallback make a call out to a page or do some work, and then reset the empty cache object.
I'm probably horribly mis-quoting him, so I recommend you listen or look through the transcripts for yourself.
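From memory, the trick looks roughly like this (the interval and DoDatabaseProcessing are made up):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class RecurringJob
    {
        public static void Schedule()
        {
            // A throwaway cache entry whose only purpose is to expire.
            HttpRuntime.Cache.Insert(
                "RecurringJobTrigger",
                DateTime.UtcNow,               // dummy value
                null,
                DateTime.UtcNow.AddMinutes(5), // fire in five minutes
                Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable,
                OnRemoved);
        }

        static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
        {
            if (reason == CacheItemRemovedReason.Expired)
            {
                DoDatabaseProcessing(); // hypothetical: the periodic work
                Schedule();             // re-arm the trigger
            }
        }
    }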
What about just setting up a flag in the database and checking that to determine if the job is running or not? Seems simpler IMO.
The canonical way to write a singleton ends up not being thread safe. Especially in a webby environment, where threads needn't even be on the same machine!
If you really want to do a "singleton", think of it as a service that you only ever deploy to one machine. Then use the transactional semantics of your database like Marc Gravell suggests to synchronize the locks.
We've done similar things by using a Web Service to do the backend processing, then writing a Desktop App to call it on whatever schedule we need. We can then run that app on a server, or an admin can run it directly from their PC to trigger the job.
Edit: After I saw your revision that you don't want them to run simultaneously: we have usually just controlled that with a database flag like a few others have said. Nothing fancy, but it gets the job done.
Set an application-wide variable to denote that the process is running. That should be a little easier than storing the variable in the database, right?