I tried creating a poison message scenario in the following manner.
1- Created a message queue on a server (transactional queue).
2- Created a receiver app that handles incoming messages on that server.
3- Created a client app located on a client machine which sends messages to that server with the specific name for the queue.
4- I used the sender client app with the following code (C#, .NET Framework 4.0):
// "mq" is not declared in the original snippet; assuming it points at the remote queue, e.g.:
var mq = new MessageQueue(@"FormatName:Direct=OS:myserver\private$\myqueue");
System.Messaging.Message mm = new System.Messaging.Message("Some msg");
mm.TimeToBeReceived = new TimeSpan(0, 0, 50);
mm.TimeToReachQueue = new TimeSpan(0, 0, 30);
mm.UseDeadLetterQueue = true;
mq.Send(mm);
So this sets the time to reach the queue to 30 seconds.
First test worked fine. Message went through and was received by the server app.
My second test, I disconnected my ethernet cable, then did another send from the client machine.
I can see in the message queue on the client machine that the message is waiting to be sent ("Waiting for connection"). My problem is that even after the 30 seconds (or the 50 seconds) have passed, the message never goes into the Dead-letter queue on the client machine.
Why is that? I was expecting it to go there once it timed out.
Tested on Windows 7 (client) / Windows Server 2008 R2 (server).
Your question is a few days old already. Did you find out anything?
My interpretation of your scenario would be that the unplugged cable is the key.
In the scenario John describes, there is an existing connection and the receiver could not process the message correctly within the set time limit.
In your scenario, however, the receiving endpoint never gets the chance to process the message, so the timeout can never occur. As you said, the state of the message is "Waiting for connection". A message that was never sent cannot logically have a timeout to reach its destination.
Just ask yourself how many resources Windows/MSMQ would unnecessarily sacrifice, and how often, to check message queues for who knows how many conditions when the queues are essentially inactive. There might be a lot of queues with a lot of messages on a system.
The behavior I would expect is that if you plug the network cable back in and the connection is re-established, then, only when it is needed, your poison message will be checked for the timeout and eventually moved to the Dead-letter queue.
You might want to check this scenario out - or did you already check it in the meantime?
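If you want to verify this from code rather than the MMC once the connection is back, something along these lines might work. The special system-queue path is my assumption, not something from the original post:
// Minimal sketch: inspect the local transactional dead-letter queue.
// The path ".\XactDeadletter$" is an assumption for the machine's transactional dead-letter queue.
using (var dlq = new MessageQueue(@".\XactDeadletter$"))
{
    dlq.MessageReadPropertyFilter.SetAll();
    foreach (Message m in dlq.GetAllMessages())
    {
        Console.WriteLine("{0}  {1}", m.Id, m.Label);
    }
}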
I have some code that sends a message to a remote queue.
var queue = new MessageQueue(queueName);
var message = new Message(queueMessage, new BinaryMessageFormatter());
queue.Send(message);
I've tried setting the queue using both IP and hostname; it makes no difference:
FormatName:Direct=TCP:1.2.3.4\Private$\my.queue
FormatName:Direct=OS:servername\Private$\my.queue
The messages appear in the outgoing messages queue (if I pause it)
When unpaused they're sent to the server.
There is a private queue set up on the server. There is nothing running that will take messages off the queue.
However, messages never appear in the queue on the remote machine. I don't know how to debug this problem. The queue is a private non-transactional queue.
Creating a local private queue and sending messages to it works fine.
Are there some logs or something I can look at to see what might be happening?
The status in outgoing messages shows state as 'connected' so there is no connection issue.
Edit:
The only logging I can find is in event viewer > microsoft > windows > msmq which has an entry that simply says "Message came over network" whenever I send a message via MSMQ. It has no other information.
Solved, I added this:
message.UseDeadLetterQueue = true;
This made the server put it into the dead-letter queue, under System Queues > Dead-letter messages.
Once this happened I could see my message, and when I clicked it, it said 'Access Denied' under the 'Class' heading.
A quick Google search revealed that even though I had granted Everyone full-access permissions on the queue, it was also necessary to add Anonymous Logon and give it full access in the queue's Security tab.
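If you would rather script that permission change than use the MMC, a sketch like this should work (the queue path is just an example):
// Grant ANONYMOUS LOGON (and Everyone) full control on the queue; the path is an example.
using (var queue = new MessageQueue(@".\Private$\my.queue"))
{
    queue.SetPermissions("Everyone", MessageQueueAccessRights.FullControl);
    queue.SetPermissions("ANONYMOUS LOGON", MessageQueueAccessRights.FullControl);
}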
This may be a bit of a long post.
I have a server and a client, and I have the ability to see the sent/received messages and count them live on both machines, so that I don't have to go into debug mode in VS2012.
I am running my server on a separate machine in California so that the server and the client are on completely different IPs and communicating over the internet for live tests; they are using DNS names and resolving them fine.
Both physical PCs have no antivirus and no firewall, and Windows Firewall is off on both.
Both machines have Wireshark installed for tracking my packets on UDP port 29999.
How the sequence works is: the client sends a logon, the server verifies the client by some credentials, and the server sends player information (stats).
When starting the server executable the first time, the messages come through to the client without failing, every time.
If you restart the client and try again, the client does not receive the messages:
1) The counter on the server.exe increments properly,
2) Server's Wireshark on the server shows the messages sent
3) Client's Wireshark sees the UDP packets come in on the proper port from the proper IP address
but the client.exe message counter does not increment.
If I run the client in DEBUG mode in VS2012 and set a breakpoint as shown here:
while ((_NetworkIncomingMsg = _NetworkClient.ReadMessage()) != null)
{
ReadInTime = DateTime.Now; // <<-- break point here
// blah blah more code
}
It never hits; no message is ever received.
It's important to note that if I put a breakpoint on the while statement, it does fire, but no message is read, so the result is null and the loop body is skipped.
I believe it has something to do with either the timing or the placement of the ReadServerMessages() method.
I have the ReadServerMessages() method firing from a Timer's Elapsed event, as shown here. The timer is constructed to fire every 1.0 milliseconds. This works pretty much flawlessly in every other portion of the software, including when actually connected to a dedicated server and constantly sending packets.
// Timer constructed to fire every 1.0 milliseconds.
public System.Timers.Timer ClientNetworkTick = new System.Timers.Timer(1.0);

void update_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    // Check if the server sent new messages
    try
    {
        ReadServerMessages();
    }
    catch (Exception ex)
    {
        // Note: exceptions are swallowed here, so a failure inside ReadServerMessages() goes unnoticed.
    }
}
Any thoughts? Anything I left out, let me know. Thanks!
I'm using RabbitMQ to deliver messages to worker processes (using the official C# client). I have been running simple tests during the implementation, and all has been going swimmingly until now.
I ran a test where I queued messages for a worker process that was not listening (no connection). Once I had queued several hundred messages, I started that process. It created its IModel, declared its queue (which already existed), and began consuming messages (with BasicConsume). This went great. This process, as it processed messages, created messages for other queues. There were processes already listening to these queues (with BasicConsume), and so the messages were immediately delivered to those clients (or so the server thought...). The messages are never processed.
The server definitely believes that the messages have been delivered (the messages are all in the "unacked" bucket, not the "ready" bucket), but
IBasicConsumer.HandleBasicDeliver never got called on the client. I have tried several different techniques (using a Subscription, using QueueingBasicConsumer as well as my own custom consumer), and the outcome is exactly the same. I'm at a complete loss. If I close the connection (there is only one connection here), then the messages immediately move from the "unacked" bucket to the "ready" bucket.
Why doesn't the client get notified when messages are delivered?
Looking into the code, ModelBase.Close() calls ConsumerDispatcher.Shutdown() (ModelBase.cs line 301), and from there, it calls workService.StopWork() (ConcurrentConsumerDispatcher.cs line 27). It seems to me (by a cursory view of the code) that this stops ALL work in the connection's ConsumerWorkService. Instead, should ConcurrentConsumerDispatcher.Shutdown() be calling workService.StopWork(this) on line 27?
It's a bug in the RabbitMQ client, and a fix has already been merged in.
It should be available in the next nightly build, on 4/18/2015.
If your BasicConsume call specifies noAck = false, then after you dequeue a message you need to run the following code: channel.BasicAck(result.DeliveryTag, false);
If your BasicConsume call specifies noAck = true, the message is removed from the server automatically after you dequeue it.
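To make that concrete, here is a rough sketch of the manual-ack pattern using the (older) QueueingBasicConsumer API; the host and queue name are placeholders, not values from the question:
var factory = new ConnectionFactory { HostName = "localhost" };   // placeholder host
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    var consumer = new QueueingBasicConsumer(channel);
    // noAck = false: the broker keeps each message in the "unacked" bucket until it is acked
    channel.BasicConsume("my.queue", false, consumer);

    while (true)
    {
        var result = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        // ... process result.Body ...
        channel.BasicAck(result.DeliveryTag, false);   // moves the message out of "unacked"
    }
}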
I'm sending a message to a private queue via C#:
MessageQueue msgQ = new MessageQueue(@".\private$\aaa");
msgQ.Formatter = new XmlMessageFormatter(new[] { typeof (String) });
msgQ.Send(msg);
It does work and I do see the message in the queue.
However, is there any way to get an ACK that the message reached the queue successfully?
PS
BeginPeek / PeekCompleted is an event which is raised when a message becomes available in the queue or when the specified interval of time has expired. It is not helping me, because I need to know whether the message that I sent was received by MSMQ. PeekCompleted will also be raised if someone else puts a message in the queue, and the last thing I want is to have to check via BeginPeek who each message came from.
How can I do that?
PS2
Or maybe I don't have to worry, since msgQ.Send(msg) will raise an exception if the message wasn't inserted...?
I think what you are trying to do should not be handled in code. When you send the message, it is placed in the outgoing queue. There are numerous reasons why it would not reach the destination, such as a network partition or the destination queue being full. But this should not matter to your application - as far as it is concerned, it sent the message, it committed the transaction, and it received no error. It is the responsibility of the underlying infrastructure to do the rest, and that infrastructure should be monitored to make sure there are no technical issues.
Now what should really be important to your application is the delivery guarantees. I assume from the scenario that you are describing that you need durable transactional queues to ensure that the message is not lost. More about the options available can be read here
Also, if you need some identifier to display to the user as a confirmation, a common practice is to generate it in the sending code and place it in the message itself. Then the handling code would use the id to do the required work.
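For example, a minimal sketch of that idea; the payload type and queue path below are made up for illustration:
// Hypothetical payload type that carries the identifier generated by the sender.
public class TaskRequest
{
    public string TrackingId { get; set; }
    public string Payload { get; set; }
}

// Sender: generate the id, put it inside the message, and show it to the user.
var request = new TaskRequest { TrackingId = Guid.NewGuid().ToString(), Payload = "..." };
using (var msgQ = new MessageQueue(@".\private$\aaa"))
{
    msgQ.Formatter = new XmlMessageFormatter(new[] { typeof(TaskRequest) });
    msgQ.Send(request);
}
// The handling code reads TrackingId back from the body and uses it to correlate its work.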
Using transactional queues and having all your machines enroll in DTC transactions would likely provide what you're looking for. However, it's kind of a pain in the butt, and DTC has side effects - like all transactions being enrolled together, including DB transactions.
Perhaps a better solution would be to use a framework like MassTransit or NServiceBus and do a request-response, allowing the receiver to respond with an actual confirmation message saying not only "this has been delivered" but also "I acknowledge this", with timeout options.
As Oleksii has explained, that covers reliable delivery.
However, this can affect performance.
What I can suggest is: why not create an MSMQ server on the machine that is sending messages to the other system? What I am thinking is (a rough sketch follows below):
1) Server 1 sends an MSMQ message to Server 2.
2) Server 2 receives it and adds it to its queue.
3) Server 2 processes the queue / fires your code here to send an MSMQ message back to Server 1.
4) Server 1 receives the message (a success message with the MsgId).
5) Do your further task.
This approach can be an extra mile, but it will keep your servers out of a performance lag.
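A rough sketch of that round trip with plain System.Messaging; the queue paths are placeholders:
// Server 2: receive the message, then send a confirmation carrying the original message Id back to Server 1.
using (var inbox = new MessageQueue(@".\private$\inbox"))
using (var acks = new MessageQueue(@"FormatName:Direct=OS:server1\private$\acks"))
{
    inbox.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
    Message received = inbox.Receive();

    // ... queue the actual work locally ...

    acks.Send("Received " + received.Id);   // Server 1 listens on this queue for confirmations
}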
I'm facing an extremely puzzling problem. I have a Windows service that monitors two MSMQ queues for input and sends messages to another MSMQ queue. Although the send operation seems instant from the service's perspective it actually takes the message exactly three (3) minutes to arrive (as shown in the properties window in the MSMQ MMC). I've been testing this problem with nothing else listening on the other side so that I can see the messages piling up. This is how the service sends messages:
var proxyFactory = new ChannelFactory<IOtherServerInterface>(new NetMsmqBinding(NetMsmqSecurityMode.None)
{
Durable = true,
TimeToLive = new TimeSpan(1, 0, 0),
ReceiveTimeout = TimeSpan.MaxValue
});
IOtherServerInterface server = proxyFactory.CreateChannel(new EndpointAddress("net.msmq://localhost/private/myqueue"));
var task = new MyTask() { ... };
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
server.QueueFile(task);
scope.Complete();
}
The service is running on Windows Server 2008 R2. I also tested it on R1 and noticed the same behavior. Again, everything happens on the same machine. All components are deployed there so I don't think it could be a network issue.
EDIT #1:
I turned on the WCF diagnostics and what I noticed is very strange. The MSMQ datagram does get written normally. However, after the "a message was closed" trace message there is nothing going on. It is as if the service is waiting for something to happen. Exactly 3 minutes later and exactly when the MSMQ message arrives (according to the MSMQ MMC), I see another trace message about a previous activity. I suspect there is some kind of interference.
Let me give you more details about how the services work. There is an IIS app which receives tasks from clients and drops them in an MSMQ queue. From there, the troublesome service (MainService) picks them up and starts processing them. In some cases, another service (AuxService) is required to complete the task so MainService sends a message (that always gets delayed) to AuxService. AuxService has its own inbox queue where it receives MSMQ messages and when it's done, it sends an MSMQ message to MainService. In the meanwhile, the thread that sent the message to AuxService waits until it gets a signal or until it times out. There is a special queue where MainService looks for messages from AuxServices. When a message is received the abovementioned thread is woken up and resumes its activity.
Here's a representation of the whole architecture:
IIS app -> Q1 -> MainService
MainService -> Q2 -> AuxService
AuxService -> Q3 -> MainService
Although all operations are marked with OneWay, I'm wondering whether starting an MSMQ operation from within another MSMQ operation is somehow illegal. It seems to be the case given the empirical evidence. If so, is there a way to change this behavior?
EDIT #2:
Alright, after some more digging it seems WCF is the culprit. I switched both the client code in MainService and the server code in AuxService to use MSMQ SDK directly and it works as expected. The 3 minute timeout I was experiencing was actually the time after which MainService gave up and considered that AuxService failed. Therefore, it seems that for some reason WCF refuses to perform the send until the current WCF activity exits.
Is this by design or is it a bug? Can this behavior be controlled?
You have transactions set up in the queueing code, but do you have the MSMQ object set up for transactions? Three minutes sounds like the timeout period for a Distributed Transaction Coordinator enlistment.
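For comparison, a sketch of a transactional send with the MSMQ SDK directly (System.Messaging), which is roughly what EDIT #2 describes switching to; the queue path is a placeholder and the queue is assumed to be transactional:
using (var queue = new MessageQueue(@".\private$\auxservice"))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    queue.Send(new Message("task payload") { Recoverable = true }, tx);   // durable, transactional send
    tx.Commit();
}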