I currently have to send files larger than 4 MB through several servers using MSMQ. The files are initially sent in chunks, like so:
using (MessageQueueTransaction oTransaction = new MessageQueueTransaction())
{
    // Begin the transaction
    oTransaction.Begin();

    // Start reading the file
    using (FileStream oFile = File.OpenRead(PhysicalPath))
    {
        // Bytes read
        int iBytesRead;

        // Buffer for the file itself
        var bBuffer = new byte[iMaxChunkSize];

        // Read the file, a block at a time
        while ((iBytesRead = oFile.Read(bBuffer, 0, bBuffer.Length)) > 0)
        {
            // Get the right length
            byte[] bBody = new byte[iBytesRead];
            Array.Copy(bBuffer, bBody, iBytesRead);

            // New message
            System.Messaging.Message oMessage = new System.Messaging.Message();

            // Set the label
            oMessage.Label = "TEST";

            // Set the body
            oMessage.BodyStream = new MemoryStream(bBody);

            // Log
            iByteCount = iByteCount + bBody.Length;
            Log("Sending data (" + iByteCount + " bytes sent)", EventLogEntryType.Information);

            // Send as part of the transaction
            oQueue.Send(oMessage, oTransaction);
        }
    }

    // Commit
    oTransaction.Commit();
}
These messages are sent from Machine A to Machine B, and then forwarded to Machine C. However, I've noticed that the PeekCompleted event on Machine B fires before all of the messages have arrived.
For example, a test run just now sent 8 messages, and they were processed on Machine B in groups of 1, 1 and then 6.
I presume this is because the transaction ensures the messages arrive in exactly the right order, but does not guarantee they are all delivered at exactly the same time.
The worry I have is that when Machine B passes the messages to Machine C, these now count as 3 separate transactions, and I'm unsure as to whether the transactions themselves are delivered in the correct order (for example, 1 then 6 then 1).
My question is, is it possible to receive messages using PeekCompleted by transaction (meaning, all 8 messages are collected first), and pass them on so Machine C gets all 8 messages together? Even in a system where multiple transactions are being sent at the same time?
Or are the transactions themselves guaranteed to arrive in the correct order?
I think I missed this when looking at the topic:
https://msdn.microsoft.com/en-us/library/ms811055.aspx
That these messages will either be sent together, in the order they were sent, or not at all. In addition, consecutive transactions initiated from the same machine to the same queue will arrive in the order they were committed relative to each other.
So, no matter how the messages within a transaction get split up on arrival, the order will never be affected.
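For anyone who wants to group arrivals explicitly, System.Messaging also exposes per-message transaction markers. A minimal sketch of buffering until a transaction is complete (ForwardBatch is a hypothetical helper, and this assumes a single sender so transactions do not interleave; with concurrent senders you would group by the message's TransactionId instead):

// Set up once, before the first BeginPeek: ask MSMQ to populate the
// transaction-related properties on reads (not retrieved by default).
oQueue.MessageReadPropertyFilter.TransactionId = true;
oQueue.MessageReadPropertyFilter.IsFirstInTransaction = true;
oQueue.MessageReadPropertyFilter.IsLastInTransaction = true;

private readonly List<System.Messaging.Message> oPending = new List<System.Messaging.Message>();

private void OnPeekCompleted(object sender, PeekCompletedEventArgs e)
{
    MessageQueue oQueue = (MessageQueue)sender;

    // Receive the peeked message inside a transaction
    using (var oTransaction = new MessageQueueTransaction())
    {
        oTransaction.Begin();
        oPending.Add(oQueue.Receive(oTransaction));
        oTransaction.Commit();
    }

    // Once the last message of the sender's transaction has arrived,
    // forward the whole batch to Machine C in a single transaction
    if (oPending[oPending.Count - 1].IsLastInTransaction)
    {
        ForwardBatch(oPending); // hypothetical helper: sends all messages in one transaction
        oPending.Clear();
    }

    // Resume peeking
    oQueue.BeginPeek();
}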
I am using .NET 6.0, and recently the call int numBytes = client.Receive(bytes); has been taking around 3 minutes.
The Socket variable is called client.
This issue was not occurring 3 days ago.
The full code that I am using is:
string data = "";
byte[] bytes = new byte[2048];

client = httpServer.Accept();

// Read inbound connection data
while (true)
{
    int numBytes = client.Receive(bytes); // Taking about 3 minutes here
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1 || data == "")
    {
        break;
    }
}
The timing is also not always consistent. Sometimes (rarely) it is instant, and other times it can take anywhere from 3 minutes to an hour.
I have attempted the following:
Restarting my computer
Changing networks
Turning off the firewall
Attempting on a different computer
Attempting on a different computer with the firewall off
Using a wired and wireless connection
However, none of these worked; they all resulted in the same issue.
What I expect to happen, and what used to happen, is that execution would continue through the code normally instead of hanging on one line for a long time.
You could use the client.Poll() method to check whether data is available to read from the socket before calling client.Receive().
If client.Poll() returns false, there is no data available to read, and you can handle that situation accordingly.
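For example (a minimal sketch slotted into the existing while loop; the one-second timeout is an arbitrary illustration, and note that Poll also returns true when the peer has closed the connection, in which case Receive() returns 0):

// Wait up to 1 second for readable data; the timeout is in microseconds
if (client.Poll(1000000, SelectMode.SelectRead))
{
    int numBytes = client.Receive(bytes);
    if (numBytes == 0)
    {
        break; // Poll also signals readability when the peer has closed
    }
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
}
else
{
    // No data within the timeout: log, keep waiting, or bail out
}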
The normal, expected behaviour of the code below would be that ReceiveAsync watches the Azure queue for up to 1 minute before returning null, or a message if one is received. The intended use is an IoT Hub resource where multiple messages may be added to a queue, each intended for one of several DeviceClient objects. Each DeviceClient continuously polls this queue to receive the messages intended for it; messages for other DeviceClients are left in the queue for those others.
The actual behaviour is that ReceiveAsync immediately returns null each time it is called, with no delay. This happens regardless of the TimeSpan value given - or if no parameter is given at all (and the default timeout is used).
So, rather than seeing 1 log item per minute stating that a null message was received, I'm getting 2 log items per second (!). This behaviour is different from a few months ago, so I started some research - with little result so far.
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Client;

public static TimeSpan receiveMessageWaitTime = new TimeSpan(0, 1, 0);

Microsoft.Azure.Devices.Client.Message receivedMessage = null;

deviceClient = DeviceClient.CreateFromConnectionString(Settings.lastKnownConnectionString, Microsoft.Azure.Devices.Client.TransportType.Amqp);

// This code is within an infinite loop/task/with try/except code
if (deviceClient != null)
{
    receivedMessage = await deviceClient.ReceiveAsync(receiveMessageWaitTime);
    if (receivedMessage != null)
    {
        string Json = Encoding.ASCII.GetString(receivedMessage.GetBytes());
        // Handle the message
    }
    else
    {
        // Log the fact that we got a null message, and try again later
    }
    await Task.Delay(500); // Give the CPU some time, this is an infinite loop after all.
}
I looked at the Azure hub and noticed 8 messages in the queue. I then added 2 more; neither of the new messages was received, and the queue is now at 10 items.
I did notice this question: Azure ServiceBus: Client.Receive() returns null for messages > 64 KB
But I have no way to see whether there is indeed a message that big currently in the queue (since ReceiveAsync returns null...)
As such the questions:
Could you preview the messages in the queue?
Could you get a queue size, e.g. ask the number of messages in the queue before getting them?
Could you delete messages from the queue without getting them?
Could you create a callback-based receive instead of an infinite loop? (I guess internally the code would just do a peek and then the same as we are already doing.)
Any help would be greatly appreciated.
If you are using Azure Service Bus, I recommend using the Service Bus Explorer tool to preview messages and to get the number of messages in the queue. It also lets you delete messages without receiving them.
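On the callback question: newer versions of the Microsoft.Azure.Devices.Client SDK expose a handler-based receive, which avoids the polling loop entirely. A minimal sketch, assuming an SDK version that provides SetReceiveMessageHandlerAsync:

// Register a handler that is invoked whenever a cloud-to-device
// message arrives, instead of polling with ReceiveAsync.
await deviceClient.SetReceiveMessageHandlerAsync(
    async (message, userContext) =>
    {
        string json = Encoding.ASCII.GetString(message.GetBytes());
        // Handle the message...

        // Complete it so it is removed from the device's queue
        await deviceClient.CompleteAsync(message);
    },
    null);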
I am testing a project with a dead letter queue with Microsoft Service Bus. I send 26 messages (representing the alphabet) and I use a program that when receiving the messages, randomly puts some of them in a dead letter queue. The messages are always read in peek mode from the dead letter queue, so once they reach there they stay there. After running a few times, all 26 messages will be in the dead letter queue, and always remain there.
However, when reading them, sometimes only a few (e.g. 6) are read, sometimes all 26.
I use the command:
const int maxToRead = 200; // It seems one wants to set this higher than
                           // the anticipated load, yet still only some come back

IEnumerable<BrokeredMessage> dlIE =
    deadletterSubscriptionClient.ReceiveBatch(maxToRead);
There is an overload of ReceiveBatch which takes a timeout, but this doesn't help, and probably only adds to the complexity.
Why doesn't it obtain all 26 messages every time, since it is used in "peek" mode and the messages stay there?
I can use "Service Bus Explorer" to actually verify that all messages are in the deadletter queue and remain there.
This is mostly a testing example, but one would hope that "ReceiveBatch" would behave deterministically rather than in such a (badly) random manner...
This is only a partial answer or workaround: the following code reliably gets all elements, but doesn't use "ReceiveBatch". Note that, as far as I can discern, Peek(i) operates on a one-based index. Also, depending on which server you run on, if you are charged per message pull, this may (or may not) be more expensive, so use at your own risk:
List<BrokeredMessage> dlIE = new List<BrokeredMessage>();
BrokeredMessage potentialMessage = null;
int loopCount = 1;

while ((potentialMessage = deadletterSubscriptionClient.Peek(loopCount)) != null)
{
    dlIE.Add(potentialMessage);
    loopCount++;
}
We have an intranet ASP.NET Web Forms application which performs the actual work via a service layer, represented by .NET Remoting services. A couple of weeks ago we started to get timeout exceptions from IIS; it turned out that the request from the front end was not being processed within the allowed default execution time (for .NET 2.0+ it is 110 sec). During the investigation we found that the problem had to be with sending messages to the transactional MSMQ queue (which runs on a W2K3 x64 server). We send the messages as follows: from the DB we get all the records we want to push to the queue, and then every single record is pushed in a separate MSMQ transaction in a foreach loop, like this:
using (MessageQueue queue = new MessageQueue(@".\private$\OurQueue"))
{
    using (MessageQueueTransaction tran = new MessageQueueTransaction())
    {
        tran.Begin();

        Message msg = new Message(BODY); // BODY is some class which holds a few fields of type Guid, String and DateTime
        msg.Label = "Some label for the message";
        msg.UseDeadLetterQueue = true;
        msg.TimeToBeReceived = new TimeSpan(7, 0, 0, 0);
        msg.Priority = MessagePriority.Normal;

        queue.Send(msg, tran);
        tran.Commit();
        queue.Close();

        return msg.Id;
    }
}
The number of messages to be sent can reach 30,000-100,000, which is why performance is crucial.
Experiments with the Stopwatch class revealed that if you change the logic and send all messages in ONE transaction, like this:
using (MessageQueue queue = new MessageQueue(@".\private$\OurQueue"))
{
    using (MessageQueueTransaction tran = new MessageQueueTransaction())
    {
        tran.Begin();

        for (int i = 0; i < recordsNumber; i++)
        {
            Message msg = new Message(BODY); // BODY is some class which holds a few fields of type Guid, String and DateTime
            msg.Label = "Some label for the message";
            msg.UseDeadLetterQueue = true;
            msg.TimeToBeReceived = new TimeSpan(7, 0, 0, 0);
            msg.Priority = MessagePriority.Normal;

            queue.Send(msg, tran);
        }

        tran.Commit();
        queue.Close();
    }
}
the performance is much better: with separate transactions the time required to push 30K records to MSMQ is ~115 sec, while with 1 transaction it is only ~16 sec. Now, this is all guesswork to some extent, since the profiling and performance estimation were done on my development machine - I don't have access to the production servers - but it still makes me wonder: is this a fair comparison, and is it a good idea to send several tens of thousands of records to MSMQ in 1 transaction?
However, the main question I'm still looking to answer is how it happened that this worked for the last 7 years or so with separate MSMQ transactions and never timed out, but recently, all of a sudden, became so slow. Apart from moving from .NET 3.5 to 4.0 (not to 4.5, unfortunately, as we're still on old W2K3 servers) we didn't change anything in the code, and the farm admin claims no changes were recently made to the servers themselves (DB x 1, service with Remoting x 2, front end x 2).
Profiling on my dev machine shows that most of the work appears to be done in queue.Send, but that shouldn't cause the code to time out, since the call to Send is asynchronous by design, i.e. it should return to the caller immediately (as pointed out in the documentation). The code which sends messages to MSMQ runs inside the "main" unit of work, which is a DB transaction, but I doubt the MSMQ transaction got promoted to MSDTC - I guess in that case it would always have been slow.
Does anybody know what I could miss here?
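For what it's worth, a middle ground between the two extremes is to commit in batches, so that neither per-message transaction overhead nor one enormous transaction dominates. A sketch only, with batchSize as an illustrative value:

const int batchSize = 1000; // illustrative value; tune for the workload

using (MessageQueue queue = new MessageQueue(@".\private$\OurQueue"))
{
    for (int offset = 0; offset < recordsNumber; offset += batchSize)
    {
        using (MessageQueueTransaction tran = new MessageQueueTransaction())
        {
            tran.Begin();

            int upper = Math.Min(offset + batchSize, recordsNumber);
            for (int i = offset; i < upper; i++)
            {
                Message msg = new Message(BODY);
                // ... set Label, UseDeadLetterQueue, TimeToBeReceived, Priority as above
                queue.Send(msg, tran);
            }

            // One commit per batch amortises the transaction overhead
            // without building a single enormous transaction
            tran.Commit();
        }
    }
}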
I have implemented a web socket server using Alchemy Websockets, and am now trying to stress test it. I have written the following C# method to create a number of clients that connect to the server and send some data:
private void TestWebSocket()
{
    int clients = 10;
    long messages = 10000;
    long messagesSent = 0;
    String host = "127.0.0.1";
    String port = "11005";

    WSclient[] clientArr = new WSclient[clients];
    for (int i = 0; i < clientArr.Length; i++)
    {
        clientArr[i] = new WSclient(host, port);
    }

    Random random = new Random();

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < messages; i++)
    {
        clientArr[i % clients].Send("Message " + i);
        messagesSent++;
    }
    sw.Stop();

    Console.WriteLine("Clients " + clients);
    Console.WriteLine("Messages to Send " + messages);
    Console.WriteLine("Messages Sent " + messagesSent);
    Console.WriteLine("Time " + sw.Elapsed.TotalSeconds);
    Console.WriteLine("Messages/s: " + messages / sw.Elapsed.TotalSeconds);
    Console.ReadLine();

    for (int i = 0; i < clientArr.Length; i++)
    {
        clientArr[i].Disconnect();
    }
    Console.ReadLine();
}
However, the server is receiving fewer messages than were sent (even with a small number, e.g. 100). Sometimes multiple messages are received as a single message, e.g.:
Message1 = abc Message2 = def
Received As = abcdef
I am trying to more or less replicate the example shown here. At the moment both the server and the client are running locally. Any ideas on what the problem is, or how to improve the test method?
There are two open issues on the github project that sound similar:
Server drops inbound messages and receives corrupted input
JSON messages truncated
One of the commenters reported better luck with Fleck.
TCP is a streaming protocol, not a message-oriented protocol. That means the receiver is responsible for finding the beginning and end of each message contained within the stream. It also means the receiver is responsible not only for breaking large reads apart into individual messages, but sometimes also for collecting small reads until a complete message has been received.
The example messages provided show that two were sent and TWO were received, but apparently your server cannot determine where one message ends and the next begins. You probably need to add some sort of internal protocol to your data to mark the beginning and end of each message. If your messages are always exactly the same length, you could just work with the size, but that is less reliable and potentially difficult to port to other communication methods (if that is ever needed later in the program's life - something that almost always happens to me!).
If your messages are all the same length, the receiver can normally limit the read size (I don't know your library, though) to that length, so that picking apart large reads is not necessary. HOWEVER, small reads may still occur because of how the TCP/IP stack collects data from the stream into packets for transmission on the physical network. If you don't want to write collection code, then you need to find a peek function that tells you how much data is available before you actually perform the read, allowing your program to wait until there is at least one whole message ready to read.
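To illustrate the "internal protocol" idea, here is a minimal length-prefix framing sketch over a plain Stream; the class and helper names are made up for this example and are not part of Alchemy:

using System;
using System.IO;
using System.Text;

static class LengthPrefixFraming
{
    // Write a 4-byte length header via BitConverter (both ends must
    // share endianness), then the payload
    public static void WriteMessage(Stream stream, string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Read one complete message, collecting small reads as needed
    public static string ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        byte[] payload = ReadExactly(stream, length);
        return Encoding.UTF8.GetString(payload);
    }

    // Loop until the requested number of bytes has arrived; a single
    // Read() may return fewer bytes than asked for
    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
            {
                throw new EndOfStreamException("Connection closed mid-message.");
            }
            offset += read;
        }
        return buffer;
    }
}

(For what it's worth, the WebSocket protocol itself does frame messages, so if Alchemy delivers two sends as one receive, that points back to the library issues linked above; the sketch applies to raw TCP streams or any custom protocol layered on one.)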