I have an application that uses an MSSQL database.
The application has a module for sending messages between application users.
When one user sends a message to another, I insert the message into the database and set its status to 1 (after the user reads the message, I update the database and set the status to 0).
Right now I am using System.Timers.Timer to check the message status, and if the status is 1 the user gets an alert that he has a message in his inbox.
The problem is that this application can be used by many users, and if the timer runs every 5 minutes it slows down the application and the database.
Is there another way to do this, without running a timer?
Thanks!
I don't think a solution that polls with a timer is that bad, and 50 users is relatively few.
Does each user run a client app that connects directly to the database? Or is this an ASP.NET app? Or a service that connects to the DB and notifies client apps?
If you have client apps connecting directly to the DB, I'd stay with the timer and probably reduce the polling interval (the number of queries seems to be extremely low in your case).
Other options
Use SqlDependency/query notifications (see MSDN); a sketch follows this list.
Only if your message processing logic gets more complex would I take a look at Service Broker, especially if you need queuing behavior. But as it seems, this would be far too complex for your case.
I wouldn't use a trigger.
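For illustration, a minimal SqlDependency sketch; the table and column names here are assumptions, and query notifications require Service Broker to be enabled on the database:

using System;
using System.Data.SqlClient;

class MessageWatcher
{
    private readonly string _connString;

    public MessageWatcher(string connString)
    {
        _connString = connString;
        SqlDependency.Start(_connString); // one-time setup per connection string
    }

    // Registers a notification; OnChange fires when the result set changes.
    public void Watch(int userId)
    {
        using (var conn = new SqlConnection(_connString))
        using (var cmd = new SqlCommand(
            // Notification queries need two-part table names and explicit columns
            "SELECT Id FROM dbo.Messages WHERE ReceiverId = @uid AND Status = 1",
            conn))
        {
            cmd.Parameters.AddWithValue("@uid", userId);
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                Console.WriteLine("New message for user " + userId);
                Watch(userId); // notifications fire only once, so re-register
            };
            conn.Open();
            using (var reader = cmd.ExecuteReader()) { } // executing the command registers the notification
        }
    }
}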
Maybe you should look into having a "monitor" service, which is the only component watching for changes in the database. It then notifies the other applications (via a delegate) that data has been updated, and they fetch their own data only when they get that notification.
If you are always checking against the message table, you can add a column to your user table named HasNewMessage, which is updated by a trigger on the message table.
To illustrate it:
User1 gets a new message
The trigger on the message table sets HasNewMessage to 1 for user1
You then check every 5 minutes whether user1 HasNewMessage (which should be fast thanks to the indexed user table)
When user1 looks into his mailbox you set HasNewMessage back to 0
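The 5-minute check then becomes a very cheap query. A rough sketch, where the connection string and table/column names are assumptions and HasNewMessage is assumed to be a BIT column:

using System.Data.SqlClient;

static class NewMessageCheck
{
    public static bool HasNewMessage(string connString, int userId)
    {
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            // Hits only the (indexed) user table, not the message table
            "SELECT HasNewMessage FROM dbo.Users WHERE UserId = @uid", conn))
        {
            cmd.Parameters.AddWithValue("@uid", userId);
            conn.Open();
            return (bool)cmd.ExecuteScalar(); // cast works for a BIT column
        }
    }
}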
Hope this helps
Related
We're using ActiveMQ locally to transfer data between 5 processes that run simultaneously.
I have some data I need to send to a process, both at runtime (which works perfectly fine) and as a default value on start. The thing is, the default value is published when the sending process starts, but the receiving process never reads it, because it wasn't subscribed to the topic at the time the data was sent.
I have a couple of options: I could delay the first publish for a moment so that the receiving process has time to launch (which doesn't seem very appealing); or is there a way to deliver all previously stored, untreated messages to a process that has just subscribed?
I'm coding in C#.
I don't have any experience with ActiveMQ, but other messaging systems usually have an option that marks a subscription as persistent, meaning that, after the first subscription, the message queue itself checks whether a given message was delivered to that subscriber and retries with a timeout. In this scenario you need to start the receiver at least once.
If this is not an option and you want to plug in a receiver afterwards, you might want to consider a message design that allows you to retrieve the full state, i.e. send total messages instead of differential messages.
After a little googling I came upon this definition of durable subscribers; I hope this helps:
See:
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
and
http://activemq.apache.org/manage-durable-subscribers.html
Since you are using the C# client, I don't know whether this is supported:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
http://activemq.apache.org/retroactive-consumer.html
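If the option is supported, a minimal sketch with the Apache.NMS.ActiveMQ client might look like this (the broker URL and topic name are assumptions):

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class RetroactiveConsumerDemo
{
    static void Main()
    {
        IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            // The consumer.retroactive option asks the broker to replay
            // messages published before this consumer subscribed.
            ITopic topic = session.GetTopic("TEST.Topic?consumer.retroactive=true");
            using (IMessageConsumer consumer = session.CreateConsumer(topic))
            {
                consumer.Listener += message =>
                {
                    var text = message as ITextMessage;
                    if (text != null)
                        Console.WriteLine("Received: " + text.Text);
                };
                connection.Start();
                Console.ReadLine(); // keep the process alive to receive messages
            }
        }
    }
}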
So another solution is to configure this behavior on the broker side, by adding the following to activemq.xml and restarting the broker. The subscription recovery policy allows you to go back in time when you subscribe to a topic:
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<subscriptionRecoveryPolicy>
<timedSubscriptionRecoveryPolicy recoverDuration="10000" />
<fixedCountSubscriptionRecoveryPolicy maximumSize="10000" />
</subscriptionRecoveryPolicy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
http://activemq.apache.org/subscription-recovery-policy.html
I got around the issue by having each process send a message back to the main one when it launches, and only then sending the information I needed to send.
I have a site I'm building, and a DB backup runs on it several times a day.
Every successful backup is emailed to me (and moved by a rule to a folder, since there are many backups), and every failure is also emailed to me (not moved by a rule).
I'm afraid that the task will stop running for some reason and I will not know: no alert would arrive, but how would I notice that in the flood of mail?
Is there software or a process that alerts me when a mail was not received within a specific time?
The reason I'm asking here is that I want to develop this kind of thing (if it does not already exist).
Thanks
You cannot know, unless you create something for yourself, e.g. a rule that auto-replies to the received message (at the receiver's mailbox). Your program (the one that sent the e-mail) should then check its own mailbox for the reply within x seconds of sending.
Normally e-mail is just a send-and-goodbye system, unless the mailbox is full, unreachable, etc.
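If you do build this yourself, here is one way to sketch the checking side, assuming the MailKit library; the server, credentials, subject filter, and the 6-hour window are all assumptions:

using System;
using MailKit;
using MailKit.Net.Imap;
using MailKit.Search;

class BackupMailWatchdog
{
    static void Main()
    {
        using (var client = new ImapClient())
        {
            client.Connect("imap.example.com", 993, true); // assumed server
            client.Authenticate("user", "password");       // assumed credentials

            var inbox = client.Inbox;
            inbox.Open(FolderAccess.ReadOnly);

            // Look for any backup mail delivered in the last 6 hours
            var recent = inbox.Search(
                SearchQuery.DeliveredAfter(DateTime.Now.AddHours(-6))
                    .And(SearchQuery.SubjectContains("backup")));

            if (recent.Count == 0)
                Console.WriteLine("ALERT: no backup mail received in the last 6 hours!");

            client.Disconnect(true);
        }
    }
}

Scheduling this to run periodically (e.g. via Task Scheduler) gives you the "dead man's switch" you described.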
I am creating a mail (not email) messaging system on a website (along the same lines as Facebook), and I am looking at employing a queue for creating the messages. The problem I am facing is one of user experience and UI: when I create a new conversation/message, it gets added to the queue but may sit there for 30+ seconds until the next poll runs. Since the list of messages being returned comes from the non-queue table, there are limited options for showing that the message has been sent.
I can only think of:
- When a message is created, show a "message sending" ajax loader and start a JavaScript poll of the queue every 5 seconds. When the queue item no longer exists, reload the conversation list with the updated items.
- When a message is created, or the page loads, query the message table and join against the queue table for any messages created by the sender's ID, so that to the user it looks like the message has truly been sent. (The only issue is that this technically negates the reason for the queue.) A sketch of this option follows.
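A rough sketch of that second option, with assumed table and column names (a UNION rather than a literal join, so pending queue rows appear alongside sent messages):

using System.Data.SqlClient;

static class ConversationLoader
{
    public static SqlCommand BuildQuery(SqlConnection conn, int convId, int senderId)
    {
        var cmd = new SqlCommand(@"
            SELECT MessageId, Body, CreatedAt, 'sent' AS Status
            FROM   dbo.Messages      WHERE ConversationId = @convId
            UNION ALL
            SELECT QueueId,   Body, CreatedAt, 'sending' AS Status
            FROM   dbo.MessageQueue  WHERE ConversationId = @convId
                                       AND SenderId = @senderId
            ORDER BY CreatedAt", conn);
        cmd.Parameters.AddWithValue("@convId", convId);
        cmd.Parameters.AddWithValue("@senderId", senderId);
        return cmd; // rows with Status = 'sending' can show a spinner in the UI
    }
}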
I am creating a Windows Service app that I would like to pause programmatically when a system error, an ODBC connection error, or a missing-file error occurs. I was wondering if anyone knows how to do this? The service uses an ODBC connection and a data reader to connect to an MS Access database and an Oracle table, so those are the probable errors I would be handling; I just want to allow a pause so the user can handle the errors if/when they occur.
ServiceController service = new ServiceController(serviceName);
TimeSpan timeout = TimeSpan.FromMilliseconds(timeoutValue);

service.Pause(); // or whatever state change you want here
service.WaitForStatus(ServiceControllerStatus.Paused, timeout);
...
Then to resume, do the same thing except with:
service.Continue();
service.WaitForStatus(ServiceControllerStatus.Running, timeout);
You can do this for any state you want. Check out the MSDN documentation by googling ServiceController; it will be the first result returned.
Also, you will need to override the OnPause and OnContinue methods in your service; a sketch follows.
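Something along these lines on the service side (the service name and worker details are assumptions):

using System.ServiceProcess;

public class ImportService : ServiceBase
{
    private volatile bool _paused;

    public ImportService()
    {
        ServiceName = "ImportService"; // assumed name
        CanPauseAndContinue = true;    // without this, Pause() throws
    }

    protected override void OnPause()
    {
        _paused = true; // the worker loop checks this flag and idles
    }

    protected override void OnContinue()
    {
        _paused = false;
    }

    protected override void OnStart(string[] args)
    {
        // start the worker loop on a background thread here
    }

    protected override void OnStop()
    {
        // signal the worker loop to exit here
    }
}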
Have you tried:
System.Threading.Thread.Sleep(1000); // sleep for 1 second
Adjust the 1000 to 1000 times the number of seconds you want it to sleep.
Assuming that your service has a continual loop that checks for data, add a check against an external source for pause/continue commands. This source can be a message queue like MSMQ or a database table.
I implemented something like this by having my service continually check a table for commands and report its status in another table. When it gets a start command it launches a processing loop on another thread; a stop command signals the thread to exit gracefully. The service core never stops running (see the sketch below).
The user interacts via a separate app with a UI that lets them view the service's status and submit commands. Since the app does its control via a database, it doesn't have to run on the same machine the service is running on.
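A minimal sketch of that command-table pattern; the table and column names, command strings, and 5-second poll are all assumptions:

using System.Data.SqlClient;
using System.Threading;

class ServiceCore
{
    private readonly string _connString; // assumed to come from config
    private volatile bool _processing;

    public ServiceCore(string connString) { _connString = connString; }

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            string command = ReadLatestCommand();
            if (command == "START" && !_processing)
            {
                _processing = true;
                new Thread(ProcessLoop).Start();
            }
            else if (command == "STOP")
            {
                _processing = false; // ProcessLoop sees this flag and exits
            }
            Thread.Sleep(5000); // the service core itself never stops running
        }
    }

    private string ReadLatestCommand()
    {
        using (var conn = new SqlConnection(_connString))
        using (var cmd = new SqlCommand(
            "SELECT TOP 1 Command FROM ServiceCommands ORDER BY IssuedAt DESC", conn))
        {
            conn.Open();
            return cmd.ExecuteScalar() as string;
        }
    }

    private void ProcessLoop()
    {
        while (_processing)
        {
            // do the actual work here and report status to a status table
        }
    }
}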
Imagine this scenario: you have a WCF web service that gets hit up to a million times a day. Each hit contains an "Account ID" identifier. The WCF service is hosted in a distributed ASP.NET cluster and you don't have Remote Desktop access to the server.
Your goal is to save "number of hits per hour" for each Account ID into a SQL database. The results should look like this:
[Time], [AccountID], [NumberOfHits]
1 PM, Account ID (Bob), 10 hits
2 PM, Account ID (Bob), 10 hits
1 PM, Account ID (Jane), 5 hits
The question is: How can you do this without connecting to a SQL server database on every hit?
Here's one solution I thought of: store the temporary results in a System.Web.Cache object, listen for its expiration, and on cache expiration write all the accumulated data to the database.
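For reference, a rough sketch of that cache-expiration idea (the key name and 5-minute window are assumptions, and thread-safe updates to the buffer are omitted):

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

static class HitBuffer
{
    const string Key = "hitBuffer"; // assumed cache key

    public static void Start()
    {
        HttpRuntime.Cache.Insert(
            Key,
            new Dictionary<string, int>(), // accountId -> hit count
            null,
            DateTime.UtcNow.AddMinutes(5), // flush window (assumption)
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnRemoved); // fires when the item expires
    }

    static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        var counts = (Dictionary<string, int>)value;
        // write 'counts' to SQL here, then start a fresh window
        Start();
    }
}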
Any thoughts on a better approach?
Deferred update is indeed the key, and you are on the right path with your local cache approach. As long as you don't have a requirement to display the last updated count on each visit, the solution is simple: update a local cache of account_id -> count and periodically sweep through this cache, replacing each count with 0 and adding it to the total in the database. You may lose some visit counts if your ASP.NET process dies, and your displayed hit count will not be accurate (node 1 in the ASP farm returns its last count, node 2 returns its own local one, different from node 1's).
If you must have an accurate display of counts on each return (whether it's a page return or a service return matters little), then it gets hairy quite fast. A centralized cache like memcached can help to create a solution, but it is not trivial.
Here is how I would keep the local cache:
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

class HitCountCache
{
    class Counter
    {
        public int AccountId { get; set; }
        public uint Count { get; set; }
    }

    private readonly Dictionary<int, Counter> _counts = new Dictionary<int, Counter>();
    private readonly object _lock = new object();

    // Invoke this on every call
    public void IncrementAccountId(int accountId)
    {
        lock (_lock)
        {
            Counter counter;
            if (_counts.TryGetValue(accountId, out counter))
            {
                ++counter.Count;
            }
            else
            {
                // First hit for this account in the current interval
                _counts.Add(accountId, new Counter { AccountId = accountId, Count = 1 });
            }
        }
    }

    // Schedule this to be invoked every X minutes
    public void Save(SqlConnection conn)
    {
        Counter[] counts;

        // Snap the counts, under lock
        lock (_lock)
        {
            counts = _counts.Values.ToArray();
            _counts.Clear();
        }

        // Lock is released, can do DB work
        // (table and column names below are placeholders)
        foreach (Counter c in counts)
        {
            SqlCommand cmd = new SqlCommand(
                @"UPDATE [table] SET [count] = [count] + @count WHERE accountId = @accountId",
                conn);
            cmd.Parameters.AddWithValue("@count", (int)c.Count);
            cmd.Parameters.AddWithValue("@accountId", c.AccountId);
            cmd.ExecuteNonQuery();
        }
    }
}
This is a skeleton; it can be improved, and it can also be made to return the current total count if needed, or at least the total count as known by the local node.
One option is to dump the relevant information into your server logs (logging APIs are already optimised to deal with high transaction volumes) and reap them with a separate process.
You asked: "How can you do this without connecting to a SQL server database on every hit?"
Use connection pooling. With connection pooling, several connections to SQL Server are opened once and then reused for subsequent calls. So on each database hit you do not need to connect to SQL Server, because you are already connected and can reuse an existing connection for your database access.
Note that connection pooling is used by default with the SQL Server ADO.NET provider, so you might already be using it without even knowing.
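For illustration (the server and database names are placeholders; pooling is on by default, the keywords below just make it explicit):

using System.Data.SqlClient;

class PoolingDemo
{
    static void Main()
    {
        var connString =
            "Server=myServer;Database=myDb;Integrated Security=true;" +
            "Pooling=true;Min Pool Size=5;Max Pool Size=100";

        // Each Open() with the same connection string borrows from the pool
        using (var conn = new SqlConnection(connString))
        {
            conn.Open(); // fast after the first physical connection is created
            // ... run commands ...
        } // Dispose/Close returns the connection to the pool
    }
}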
An in-memory object as proposed is fastest, but it risks data loss if the app or server crashes. To reduce data loss you can lazy-write the cached data to disk, then periodically read the cache file back and write the aggregated information to your SQL server.
Any reason why they aren't using AppFabric or the like?
Can you get into the service implementation? If so, the way to handle this is to have the service implementation fire a "fire and forget" style logging call to whatever other service you've set up to log this. It shouldn't hold up execution, should survive app crashes and the like, and won't require digging into the SQL angle. A sketch of such a contract follows below.
I honestly wouldn't take the job if I couldn't get into the front end of things; most other approaches are doomed to fail here.
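A sketch of what such a contract could look like (the service and method names are assumptions):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IHitLogger
{
    // IsOneWay = true: the caller does not wait for a reply, so logging
    // never holds up request processing ("fire and forget").
    [OperationContract(IsOneWay = true)]
    void LogHit(string accountId, DateTime timestamp);
}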
If your goal is performance on the website then, like another poster said, just use fire and forget. This could be a web service that you post the data to, or you can create a service running in the background listening on an MSMQ queue (I can give you more examples of this if you're interested). If you need to keep the website or admin tool in sync with the database, you can store the values in a high-performance cache like memcached at the same time you update the database.
If you want to run a batch of 100 inserts against the DB in one query, make a separate service, again with MSMQ, which polls the queue and waits until there are more than 100 messages in it. Once it detects 100 messages, it opens a transaction with MSDTC, reads all the messages into memory, and batches them up to run in one query. MSMQ is durable, meaning that if the server shuts off or the service is down when a message is sent, the message will still be delivered when the service comes back online. Messages are only removed from the queue once the query has completed; if the query errors out or something happens to the service, the messages are still in the queue for processing, so you don't lose anything. MSDTC just helps you keep everything in one transaction, so if one part of the process fails, everything gets rolled back.
If you can't make a Windows service to do this, then just make a web service that you call. You still send the MSMQ message each time a page loads, and, say, once every 10 page loads you fire the web service to process all the messages in the queue. The only problem you might have is getting the MSMQ service installed; however, many hosting providers will install something like this for you if you request it.
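A sketch of the sending side with System.Messaging (the queue path is an assumption):

using System.Messaging;

class HitQueue
{
    private const string QueuePath = @".\private$\hitQueue"; // assumed path

    public static void Send(string accountId)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, true); // true = transactional queue

        using (var queue = new MessageQueue(QueuePath))
        {
            var message = new Message(accountId)
            {
                Recoverable = true // survives a service/machine restart
            };
            queue.Send(message, MessageQueueTransactionType.Single);
        }
    }
}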