Sending SMS in automated batches AND through a web interface - C#

This is more about the design/efficiency of the application rather than the syntax - I need to create a process that sends a batch of texts that will be run on a scheduler (automated batches), but I also need to allow an admin to send a batch manually (manual batch) or individual SMS messages (triggered). My initial thought was to build a server-side console application that can be executed with parameters to handle the sending of all texts, but I'm not positive if this would be the best option. I'm a bit worried about conflicts arising with multiple instances of the console app running (which I would obviously need to code for). Any suggestions on the best way to tackle this?
The batches will process one at a time in a loop, which will post the message to the operator (Twilio) and log the message in our database as sent.

It probably depends on your operator. Twilio, for example, has quite a lot of technical samples and docs.
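If Twilio is the operator, the scheduled batch run can be as simple as a loop over the pending messages. Here is a rough sketch using the Twilio C# helper library (5.x-style API); GetPendingMessages and MarkAsSent are hypothetical stand-ins for your own database access:

using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

// Rough sketch of the batch loop: send each queued SMS via Twilio and log it as sent.
// GetPendingMessages and MarkAsSent are hypothetical placeholders for your own data access.
public static void SendBatch(string accountSid, string authToken, string fromNumber)
{
    TwilioClient.Init(accountSid, authToken);

    foreach (var sms in GetPendingMessages())
    {
        var result = MessageResource.Create(
            to: new PhoneNumber(sms.ToNumber),
            from: new PhoneNumber(fromNumber),
            body: sms.Body);

        MarkAsSent(sms.Id, result.Sid);   // store the operator's message SID as the "sent" log entry
    }
}

Whether that loop lives in a scheduled console app or in a Windows service that also accepts manual and triggered sends, a status column in the database (pending/sent) is what keeps two overlapping runs from double-sending the same batch.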

Related

Threading to continue a process after the web page is left

I've read a good bit about threading with C#, but to be upfront I haven't done anything in production using it.
I have an application that has to process a bunch of documents and then send them via email. This may take 60 seconds to accomplish. I don't want the user of my web application to have to wait for this to finish before moving on to other parts of the site.
On a button click the SendEmail action is called. What can I do to this code so that my users can continue browsing the site without cutting short the processing I need to do within the EmailPDFs function?
[Authorize]
public ActionResult SendEmail(decimal? id, decimal? id2)
{
    EmailPDFs(..., ..., ...);
}
Thanks so much!
This is really the kind of thing that message queues are designed to handle. Fire off a message, and a process on a potentially separate server picks it up and processes it. When it's done, it sends a message back to a queue on your server, where a process on your server picks it up and notifies you that it's complete. You then notify your user that the work is finished.
Modern message queue systems can be backed by databases (such as MongoDB, MySQL, or SQL Server) and are extremely robust. The great thing about them is that they allow you to move long-running or CPU-intensive processes off onto other servers so that your web site stays nice and snappy.
You could try to add multi-threading and parallelism to your web application using TaskFactory and the like (this is the route many folks take), but that approach doesn't make it easy to split your application apart later and break those big, resource-hogging pieces off if it becomes necessary.
I urge you to consider a queue-based solution.
Update:
For samples and information on how to implement this type of solution, see the following:
Reliable Messaging with MSMQ and .NET on MSDN
C#: A Message Queuing Service Application on MSDN
Also, consider glancing at this StackOverflow question for a quick crash course on the bare minimum amount of code required.
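To make that concrete, here is a hedged sketch using the System.Messaging classes against a local private MSMQ queue; the queue path and the pipe-delimited message format are illustrative, not prescriptive:

using System.Messaging;   // add a reference to System.Messaging.dll

// Producer side (the MVC action): drop a message on the queue and return immediately.
public static void QueueEmailJob(decimal id, decimal id2)
{
    const string queuePath = @".\Private$\EmailPdfJobs";   // hypothetical queue name

    if (!MessageQueue.Exists(queuePath))
        MessageQueue.Create(queuePath);

    using (var queue = new MessageQueue(queuePath))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        queue.Send(id + "|" + id2, "EmailPDFs job");
    }
}

// Consumer side (a Windows service or console app): block on Receive and do the slow work there.
public static void ProcessNextJob()
{
    using (var queue = new MessageQueue(@".\Private$\EmailPdfJobs"))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        var message = queue.Receive();               // blocks until a message arrives
        var parts = ((string)message.Body).Split('|');
        // ... call the existing EmailPDFs logic with the parsed ids ...
    }
}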
A final note: MSMQ is built into certain flavors of Windows and can be added through the Add/Remove Programs feature of the Control Panel. However, exactly how you install it depends on your specific flavor and version of Windows. A simple Google search will help you find the appropriate instructions.
Good luck!

JMS: updating message version / preventing certain messages from being queued

I am trying to create a message-based application with ActiveMQ, using .NET clients.
Client 1: A Web Service (producer)
Client 2: A Windows Service (consumer)
My question is: Is it possible to prevent messages of a certain type or content from being queued by a Client?
The reason why I want to do this is Version Updating.
I think there will be a time, when I need to extend or change the message type.
My plan is to do that update in the following order:
Prevent messages of the old version from being queued.
Wait until the consumer has processed all messages of the old version.
Update producer and consumer software.
I would like the web service to still be available during the update process so it can report back to the caller, but it should not be able to queue new messages.
Of course if there is a better way of solving this problem altogether, please let me know.
As a general rule it is a good idea to only have one type of payload per queue. An easy way to do this is to use two different queues for the two different message versions. Something like:
mysystem.orders.1_0
mysystem.orders.1_1
The version should be the last part of the queue name, as it makes it easy to work with wildcards, which are used for a lot of the config options in ActiveMQ.
Splitting up different versions into different queues gets you around the problem of having to upgrade the producer and consumer at the same time, and also gives you some visibility as to whether all of the 1_0 messages have been consumed.
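For illustration, here is a rough sketch of a producer targeting the versioned queue with the Apache.NMS.ActiveMQ client; the broker URL and the queue name are assumptions:

using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Minimal sketch: send an order message to the queue for the schema version
// this producer currently speaks. Broker URL and queue name are illustrative.
public static void SendOrder(string orderXml)
{
    IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");

    using (IConnection connection = factory.CreateConnection())
    using (ISession session = connection.CreateSession())
    {
        connection.Start();

        IDestination destination = session.GetQueue("mysystem.orders.1_1");
        using (IMessageProducer producer = session.CreateProducer(destination))
        {
            ITextMessage message = session.CreateTextMessage(orderXml);
            producer.Send(message);
        }
    }
}

When it is time to retire 1_0, the producer simply stops writing to the old queue; the consumer keeps draining it, and watching that queue's depth in the ActiveMQ console tells you when the upgrade can complete.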

How to prevent NHibernate long-running process from locking up web site?

I have an NHibernate MVC application that is using ReadCommitted Isolation.
On the site, there is a certain process that the user can initiate which, depending on the input, may take several minutes. Since the session is per request, it stays open that entire time.
But while that runs, no other user can access the site (they can try, but their requests won't go through until the long-running operation is finished).
What's more, I also need a console app that performs this same long-running function while connecting to the same database, and it causes the same issue.
I'm not sure what part of my setup is wrong, any feedback would be appreciated.
NHibernate is set up with fluent configuration and StructureMap.
Isolation level is set as ReadCommitted.
The session factory lifecycle is HybridLifeCycle (which on the web should be Session per request, but on the win console app would be ThreadLocal)
It sounds like your requests are waiting on database locks. Your options are really:
Break the long running process into a series of smaller transactions.
Use ReadUncommitted isolation level most of the time (this is appropriate in a lot of use cases).
Judicious use of Snapshot isolation level (Assuming you're using MS-SQL 2005 or later).
(N.B. I'm assuming the long-running function does a lot of reads/writes and the requests being blocked are primarily doing reads.)
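As a hedged illustration of the first option, a long job can be split into many short NHibernate transactions so that each one holds its locks only briefly; the chunk size and the per-item work are placeholders:

using System.Collections.Generic;
using System.Data;
using System.Linq;
using NHibernate;

// Sketch: run a big job as a series of short transactions so row locks are
// held only for the duration of each small chunk, not for several minutes.
public static void ProcessInChunks(ISessionFactory sessionFactory, IList<int> itemIds)
{
    const int chunkSize = 50;   // assumption: tune to your workload

    for (int i = 0; i < itemIds.Count; i += chunkSize)
    {
        var chunk = itemIds.Skip(i).Take(chunkSize).ToList();

        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
        {
            foreach (int id in chunk)
            {
                // ... load, modify and save one entity here ...
            }
            tx.Commit();   // locks are released here, letting blocked web requests through
        }
    }
}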
As has been suggested, breaking your process down into multiple smaller transactions will probably be the solution.
I would suggest looking at something like Rhino Service Bus or NServiceBus (my preference is Rhino Service Bus - I personally find it much simpler to work with). What that allows you to do is break the functionality down into small chunks while keeping the transactional nature. Essentially, with a service bus you send a message to initiate a piece of work; the work is enlisted in a distributed transaction along with receiving the message, so if something goes wrong the message does not just disappear, leaving your system in a potentially inconsistent state.
Depending on what you need to do, you could send an initial message to start the processing, and then after each step send a new message to initiate the next step. This can really help to break down the transactions into much smaller pieces of work (and simplify the code). The two service buses I mentioned (there is also MassTransit) have things like retries and error handling built in, so that if something goes wrong the message ends up in an error queue; you can investigate what went wrong, hopefully fix it, and reprocess the message, thus ensuring your system remains consistent.
Of course whether this is necessary depends on the requirements of your system :)
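To give a feel for the shape this takes, here is a rough sketch using the classic NServiceBus IBus-style API (version 5 and earlier); the message and handler names are invented for illustration, and Rhino Service Bus follows a very similar pattern:

using NServiceBus;

// A small, serializable command describing one step of the long-running work.
public class ProcessNextChunkCommand : ICommand
{
    public int JobId { get; set; }
    public int ChunkNumber { get; set; }
}

// The handler runs in the back-end endpoint. Receiving the message and doing the
// work are enlisted in the same transaction, so a failure puts the message back
// (and eventually into the error queue) instead of silently losing it.
public class ProcessNextChunkHandler : IHandleMessages<ProcessNextChunkCommand>
{
    public IBus Bus { get; set; }   // injected by the framework

    public void Handle(ProcessNextChunkCommand message)
    {
        // ... do one small piece of the long-running work here ...

        bool moreWorkRemains = true;   // placeholder for your own completion check
        if (moreWorkRemains)
        {
            // Kick off the next step as a new message rather than looping in-process.
            Bus.Send(new ProcessNextChunkCommand
            {
                JobId = message.JobId,
                ChunkNumber = message.ChunkNumber + 1
            });
        }
    }
}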
Another, but more complex, solution would be:
You build a background "robot" application which runs on one of the machines.
This background worker robot can receive "worker jobs" (the ones initiated by the user).
The robot then processes the jobs step by step in the background.
Pitfalls are:
- you have to program this robot to be very stable
- you need to watch the robot somehow
Sure, this involves more work - on the flip side you will have the option to integrate more job types, enabling your system to process different things in the background.
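A rough sketch of that robot, assuming a simple job store that you build yourself (GetNextPendingJob, RunNextStep and MarkJobFailed are hypothetical):

using System;
using System.Threading;

// Sketch of the background "robot": poll for pending jobs and work through them
// step by step, so no single database transaction runs for several minutes.
public static void RunRobot()
{
    while (true)
    {
        var job = GetNextPendingJob();   // hypothetical: e.g. fetch the oldest pending job from the database
        if (job == null)
        {
            Thread.Sleep(TimeSpan.FromSeconds(10));   // idle poll interval
            continue;
        }

        try
        {
            while (!job.IsComplete)
                RunNextStep(job);        // each step commits its own small unit of work
        }
        catch (Exception ex)
        {
            MarkJobFailed(job, ex);      // "watch the robot": record failures somewhere visible
        }
    }
}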
I think the design of your application / SQL statements has a problem. Unless you are Facebook, I don't think any process should take this long; it is better to review your design and check where the bottleneck is, instead of trying to keep this long-running process going.
Also, sometimes an ORM is not a good fit for every scenario - did you try using stored procedures?

Send out multiple emails from a web page - ASP.NET

I am making a web page with ASP.NET and C#.
I want people to log on and enter quote requests; the quote request is then emailed to all the relevant people to quote (could be 100+ people).
Obviously I cannot have the user sit and wait for the 100+ people to be emailed, as the web page will freeze.
I have thought about implementing a backend program on the server - perhaps one that checks for a text file or something, and when that text file is there, searches the database for any un-emailed quotes, emails the relevant people, and then marks the records as emailed.
But there must be a better way? Is there a queue system or something designed to do things like this?
You can use ThreadPool.QueueUserWorkItem in a loop to queue your email sending. See http://msdn.microsoft.com/en-us/library/system.threading.threadpool.queueuserworkitem.aspx for info.
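A minimal sketch (SendQuoteEmail is a stand-in for your own mailing code):

using System.Collections.Generic;
using System.Threading;

// Queue each email onto the thread pool so the page request returns immediately.
// SendQuoteEmail is a hypothetical placeholder for your own mailing code.
public static void QueueQuoteEmails(IEnumerable<string> recipients, int quoteId)
{
    foreach (string recipient in recipients)
    {
        // The recipient is passed as the state object rather than captured in a closure.
        ThreadPool.QueueUserWorkItem(state => SendQuoteEmail((string)state, quoteId), recipient);
    }
}

Bear in mind that work queued this way runs inside the web process, so an app pool recycle can drop it mid-batch; that is one reason the queue-based suggestions below are more robust.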
I would use something like Microsoft Message Queuing (MSMQ), and then have a task that sweeps the queue periodically.
I'd get the website to push to the queue, and have the queue set up as persistent so that if the box crashes or restarts, nothing is lost.
Simple. Just send it. If it freezes for more than 0.1 seconds or so, you need better hardware and configuration.
Have the SMTP class write to a folder on disc, and have the locally installed SMTP service use that as its pickup directory. Then your page is done the moment the file is written. Standard .NET classes support this setup via a simple configuration setting. Most people never bother to read the documentation, though.
No network involved, just 100 or so file generations. Or even 10 files with 10 BCC recipients each. Finished.
Generating 10 small files should be fast. If your discs overload, get a small SSD ;) They cost next to nothing.
All the rest is a lot of programming work and introduces more things to watch in your system.
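The configuration being referred to is the SpecifiedPickupDirectory delivery method. A minimal sketch in code (the folder path is an assumption - the default IIS SMTP pickup folder - and the same thing can be set in web.config under system.net/mailSettings):

using System.Net.Mail;

// Minimal sketch: write the mail to a pickup folder instead of talking to an
// SMTP server over the network; the local SMTP service then delivers the files.
public static void DropQuoteEmail(string from, string to, string subject, string body)
{
    using (var message = new MailMessage(from, to, subject, body))
    using (var client = new SmtpClient())
    {
        client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
        client.PickupDirectoryLocation = @"C:\inetpub\mailroot\Pickup";   // assumed pickup folder
        client.Send(message);   // returns as soon as the .eml file is written to disk
    }
}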
You could combine the two, although it wouldn't be the approach I'd use.
But using your example, you could create a static listener interface to your backend (server) application, and instead of dropping a file and checking for it periodically, you could ping your backend to start the operation.
Generating 1 email with 100 people in the "To" or "CC" fields should not cause the page to freeze up. Have you actually observed this behavior? If so, check your SMTP configuration, as something sounds amiss.
However, the solution I've seen put into good use, is to have a SQL database that holds all pending messages to be sent out, then have SQL Server run a job every ten minutes to run through that pending table and do the emailing (as opposed to emailing straight from the .NET app).

SQL Service Broker vs Custom Queue

I am creating a mass mailer application, where a web application sets up an email template and then queues a bunch of email addresses for sending. The other side will be a Windows service (or exe) that will poll this queue, picking up the messages for sending.
My question is, what would the advantage be of using SQL Service Broker (or MSMQ) over just creating my own custom queue table?
Everything I'm reading suggests I use Service Broker, but I really don't see the huge advantage over a flat table (which would be a lot simpler for me to work with). For reference, the application will be used to send 50,000-100,000 emails almost daily.
Do you know how to implement a queue over a flat table? This is not a silly question; implementing a queue over a table correctly is much harder than it sounds. Queue-like tables are notoriously deadlock prone, and you need to carefully consider the table design and the enqueue and dequeue operations. Also, do you know how to scale your polling of the table? And how are you going to handle retries and timeouts (i.e. what timers are used for)?
I'm not saying you should use SSB. The learning curve is very steep, and it is primarily a distributed application platform, not a local queueing product, so some features, like dialogs, will actually be obstacles for you rather than advantages. I'm just saying that you must also consider the difficulties of flat-table queues. If you have never implemented a flat-table queue, then be warned: there are many dragons under that bridge.
50k-100k messages per day is nothing; that is only about one message per second. If you want 100k per minute, then we have something to talk about.
If you ever need to port to another vendor's database, you will have fewer problems if you use normal tables.
As you seem to have only one reader and one writer for your queue, I would tend to use a standard table until you hit problems. However, if you start to feel the need to use "locking hints" etc., that is the time to switch to Service Broker queues.
I would not use MSMQ if both the sender and the reader need a database connection to work. MSMQ would be good if the sender did not talk to the database at all, as it lets the sender keep working when the database is down. However, having to set up and maintain both MSMQ and the database is likely to be more work than it is worth for most systems.
For advantages of Service Broker see this link:
http://msdn.microsoft.com/en-us/library/ms166063.aspx
In general we try to use a tool or standard functionality rather than building things ourselves. This lowers the cost and can make upgrading easier.
I know this is an old question, but it is sufficiently abstract to stay relevant for a long time.
After using both paradigms, I would suggest the flat table. It is surprisingly scalable and nifty, provided the correct hints are used.
Once the application goes distributed, or starts using multiple Always On groups with different RW and RO servers, Service Broker (or any other method of distributed communication) becomes a necessity.
Flat table
needs only a few hints (highly dependent on isolation level) to work scalably and reliably in the consumer (READPAST, UPDLOCK, ROWLOCK) - see the dequeue sketch after the Service Broker list below
the order of message processing is not set in stone
the consumer must make sure that the message stays in the queue if the processing fails
needs some polling mechanism (job, CDC (here lies madness :)), external application...)
turn off maintenance jobs and automatic statistics for the table
Service broker
needs extremely overblown "infrastructure" (message types, contracts, services, queues, activation procedures, must be enabled after each server restart, conversations need to be correctly created and dropped...)
is extremely opaque - we have spent ages trying to make it run after it mysteriously stopped working
there is a predefined order of message processing
the tables it uses can cause deadlocks themselves if SB is overused
is the only way (except for linked servers...) to send messages directly from a database on the RW server of one HA group to a database that is RO in that HA group (without any external app)
is the only way to send messages between different servers (linked servers are a big NONO (unless they become a YESYES - you know the drill - it depends)) (without any external app)
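For what it's worth, here is a hedged sketch of the dequeue side of a flat-table queue using those hints from C#. The table and column names (EmailQueue, Id, Recipient, Body) are made up, and the exact pattern depends on your isolation level and failure-handling strategy:

using System.Data.SqlClient;

// Sketch of a destructive dequeue: READPAST skips rows locked by other consumers,
// UPDLOCK/ROWLOCK keep the lock narrow, and DELETE ... OUTPUT hands the row back
// atomically. Processing happens inside the same transaction, so a failure rolls
// the DELETE back and the message stays in the queue.
public static void ProcessOneQueuedEmail(string connectionString)
{
    const string sql = @"
        WITH NextMessage AS (
            SELECT TOP (1) Id, Recipient, Body
            FROM dbo.EmailQueue WITH (READPAST, UPDLOCK, ROWLOCK)
            ORDER BY Id
        )
        DELETE FROM NextMessage
        OUTPUT deleted.Id, deleted.Recipient, deleted.Body;";

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        using (var command = new SqlCommand(sql, connection, transaction))
        {
            using (var reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    string recipient = reader.GetString(1);
                    string body = reader.GetString(2);
                    // ... send the email here; an exception prevents the commit below,
                    //     so the DELETE is rolled back and the row stays queued ...
                }
            }
            transaction.Commit();
        }
    }
}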
