I am working with a C# program on my network and am able to post messages to an Azure Service Bus queue. When receiving them, however, I get an exception on MessageReceiver.Receive(). The code and error are below:
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(QueueConnectionString);

// Receiving a message
MessageReceiver testQueueReceiver = factory.CreateMessageReceiver(QueueName);
using (BrokeredMessage retrievedMessage = testQueueReceiver.Receive(new TimeSpan(0, 0, 20)))
{
    try
    {
        var message = new StreamReader(retrievedMessage.GetBody<Stream>(), Encoding.UTF8).ReadToEnd();
        retrievedMessage.Complete();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
        retrievedMessage.Abandon();
    }
}
The error is thrown on the 'using' line, at testQueueReceiver.Receive(...):
The server rejected the upgrade request. 400 This endpoint is only for web-socket requests
I can't find anything on the web except one post which suggests it is a firewall/ports issue. I have all the Azure Service Bus outbound ports open locally (9350-9354, 80, 443), but there is a chance the 9000s are blocked at the firewall. Should it require these? Any pointers would be greatly appreciated!
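In case it's relevant, I understand the client can also be forced to use HTTP connectivity instead of direct TCP, which sidesteps the 9350-9354 range entirely. Here's an untested sketch of what I could try, assuming the Microsoft.ServiceBus ConnectivityMode API:

// Untested sketch: force the client over HTTP(S) on ports 80/443 so the
// 9350-9354 TCP ports are not needed. Must be set before the factory is created.
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;
MessagingFactory factory = MessagingFactory.CreateFromConnectionString(QueueConnectionString);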
I'm just wondering why you don't use OnMessage instead of polling the queue:
var connectionString = "";
var queueName = "samplequeue";
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

client.OnMessage(message =>
{
    Console.WriteLine(String.Format("Message body: {0}", message.GetBody<String>()));
    Console.WriteLine(String.Format("Message id: {0}", message.MessageId));
    message.Complete();
});
This was fixed; it turned out to be a proxy issue.
The code was running under a service account. I needed to log in as that account, open IE, go to the LAN connection settings, and clear the proxy checkboxes ('Automatically detect settings', etc.). Once this was done, the code bypassed the proxy and worked fine.
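If logging in as the service account isn't practical, the same effect can presumably be achieved in code. A minimal sketch, assuming only the standard System.Net proxy APIs (untested in this exact setup):

using System.Net;

// Either hand the configured proxy the account's default credentials...
WebRequest.DefaultWebProxy = WebRequest.GetSystemWebProxy();
WebRequest.DefaultWebProxy.Credentials = CredentialCache.DefaultCredentials;

// ...or bypass the proxy entirely (assumes Service Bus is reachable directly):
// WebRequest.DefaultWebProxy = new WebProxy();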
Related
I have an MQTT listener written in C#.
The program runs in Azure, and
for some reason, after a period of time, it gets disconnected with an exception:
"The operation has timed out." or
"Exception of type 'MQTTnet.Exceptions.MqttCommunicationTimedOutException' was thrown."
In production the listener must always be online, so I reconnect in the disconnect event handler, but it happens randomly: it can get disconnected 4 times a day, and sometimes it stays online without a disconnect for a few days.
The question is: why is this happening? The device it listens to sends a timestamp request every few minutes, but that should be very fast and shouldn't cause a timeout.
Here is the code:
private static IMqttClient _client;
private static IMqttClientOptions _options;

static async Task Main(string[] args)
{
    try
    {
        // Create the subscriber client.
        var factory = new MqttFactory();
        _client = factory.CreateMqttClient();

        // Configure options.
        _options = new MqttClientOptionsBuilder()
            .WithClientId("ListenerClient")
            .WithTcpServer(Utility.brokerIp, Utility.brokerPort)
            .WithCredentials(Utility.brokerUser, Utility.brokerPassword)
            .WithCleanSession()
            .Build();

        // Handlers
        _client.UseConnectedHandler(e =>
        {
            Console.WriteLine("Connected successfully with MQTT Brokers Topic.");
            WriteToLog("***Connected To MQTT Listener.***");
            // Subscribe to topics******************
        });

        _client.UseDisconnectedHandler(e =>
        {
            WriteToLog("***Disconnected From MQTT Listener.***");
            WriteToLog(e.Exception.Message);
            _client.ConnectAsync(_options).Wait();
        });

        _client.UseApplicationMessageReceivedHandler(e =>
        {
            // Manage messages.
        });

        // Connect, then block the main thread forever.
        await _client.ConnectAsync(_options);
        await Task.Delay(Timeout.Infinite);
        await _client.DisconnectAsync();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
We had a similar issue at one time. I believe the queue you are trying to connect to has very intermittent traffic, and whatever service or server hosts the queue itself is set up to hibernate the queue when no traffic hits it for some predetermined period of time.
When you then try to use the queue, the timeout happens because the queue can't wake from hibernation quickly enough for your message to get through.
If the queue is Azure-hosted, try to get Azure support to confirm this is the case. If you are hosting on-premises, verify whether your own configuration is set this way, reduce the "wait" period to something deliberately small like 30 seconds, and confirm that a hibernated queue causes a timeout.
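If you want to test that from the client side, MQTTnet exposes both knobs on the options builder. A minimal sketch, assuming the MQTTnet v3 MqttClientOptionsBuilder API (the 30-second values are illustrative):

_options = new MqttClientOptionsBuilder()
    .WithClientId("ListenerClient")
    .WithTcpServer(Utility.brokerIp, Utility.brokerPort)
    .WithCredentials(Utility.brokerUser, Utility.brokerPassword)
    .WithKeepAlivePeriod(TimeSpan.FromSeconds(30))      // ping often enough to keep the connection warm
    .WithCommunicationTimeout(TimeSpan.FromSeconds(30)) // fail fast so a hibernated broker shows up as a timeout
    .WithCleanSession()
    .Build();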
I was following the How to use Service Bus Queues article in the Azure documentation.
I created two different web applications to test out using queues for a project I need to work on: one project publishes messages to the queue, and the other application is supposed to listen to the queue and process the messages.
I was able to publish messages to the queue successfully, and the queue length in the Azure portal says 3, so there should be 3 messages waiting for me to process. However, when I run the web application that calls QueueClient.OnMessage, no messages are getting pushed. Is there something else that I am missing?
var client = QueueClient.CreateFromConnectionString(ConnectionString, "TestQueue");

var options = new OnMessageOptions
{
    AutoComplete = false,
    AutoRenewTimeout = TimeSpan.FromMinutes(1)
};

// Callback to handle received messages.
client.OnMessage((message) =>
{
    try
    {
        // Process message from queue.
        var messageBody = message.GetBody<string>();

        // Remove message from queue.
        message.Complete();
    }
    catch (Exception)
    {
        // Indicates a problem, unlock message in queue.
        message.Abandon();
    }
}, options);
The connection strings are the same in both applications, so there are no differences there.
Here is the code that I am using to connect and send a message to the Queue.
var client = QueueClient.CreateFromConnectionString(ConnectionString, "TestQueue");
var message = new BrokeredMessage(value);
message.Properties["TestProperty"] = "This is a test";
message.Properties["UserId"] = "TestUser";
client.Send(message);
If anyone has any insights into why this is happening it would be greatly appreciated.
It turns out I was running into a 407 Proxy Authentication Required error when trying to connect to the Service Bus, which I noticed when trying to debug this through a different application.
All I had to do was add the following to the system.net section of the web.config; once I did that, I was able to receive messages.
<defaultProxy useDefaultCredentials="true">
  <proxy bypassonlocal="True" usesystemdefault="True"/>
</defaultProxy>
We are using RabbitMQ as the messaging service in our project. We push a message onto a queue; it is received by the message consumer, which then tries to make an entry in the database. Once the values are entered in the database we send a positive acknowledgement back to the RabbitMQ server; if not, we send a negative acknowledgement.
I have created the message consumer as a Windows service. A message was successfully entered and well received by the consumer (it made the entry in the table), but with an exception in the log: "Shared queue closed".
Please find the code block.
while (true)
{
    try
    {
        if (!Connection.IsOpen || !Channel.IsOpen)
        {
            CreateConnection(existConnectionConfig, QueueName);
            consumer = new QueueingBasicConsumer(Channel);
            consumerTag = Channel.BasicConsume(QueueName, false, consumer);
        }

        BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        IBasicProperties props = e.BasicProperties;
        byte[] body = e.Body;

        bool ack = onMessageReceived(body);
        if (ack == true)
        {
            Channel.BasicAck(e.DeliveryTag, false);
        }
        else
        {
            Channel.BasicNack(e.DeliveryTag, false, true);
        }
    }
    catch (Exception ex)
    {
        // Logged the exception to a text file, where I can see the
        // message "Shared queue closed".
    }
}
I have searched the net too, but couldn't work out what the problem is. It would be helpful if anyone is able to help me out.
Thanks in advance,
Selva
In answer to your question, I have experienced the same problem when my web client reset the connection due to app pool recycling, or when the connection was dropped for some other underlying reason beyond your scope. You may need to build in a retry mechanism to cope with this; a rough sketch follows.
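As a starting point, the RabbitMQ .NET client (3.3 and later) has built-in automatic recovery, and failing that a crude retry loop works. A rough sketch, assuming the RabbitMQ.Client API (host name and retry counts are illustrative):

using System;
using System.Threading;
using RabbitMQ.Client;

// Option 1: let the client library re-establish dropped connections itself.
var factory = new ConnectionFactory
{
    HostName = "localhost",                            // assumption: your broker host
    AutomaticRecoveryEnabled = true,
    NetworkRecoveryInterval = TimeSpan.FromSeconds(10) // wait between recovery attempts
};

// Option 2: a manual retry loop around connection creation.
IConnection connection = null;
for (var attempt = 1; connection == null; attempt++)
{
    try
    {
        connection = factory.CreateConnection();
    }
    catch (Exception)
    {
        if (attempt >= 5) throw;       // give up after a few tries
        Thread.Sleep(2000 * attempt);  // simple linear back-off
    }
}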
You might also want to look at MassTransit. I have used it with RabbitMQ, and it makes things a lot easier by effectively providing a management layer over RabbitMQ. MassTransit takes away the headache of retry mechanisms (see its connection management documentation), and it also provides a nice multi-threaded, concurrent consumer configuration.
This has the bonus of making your implementation more portable: you could easily switch to MSMQ should the requirement come along. A rough consumer configuration is sketched below.
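For illustration, wiring a consumer up through MassTransit over RabbitMQ looks roughly like this (a sketch against the older MassTransit v3-style API; OrderConsumer and the queue name are placeholders):

// MassTransit manages the channel, acknowledgements and retries for you.
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost"), h =>
    {
        h.Username("guest"); // assumption: default credentials
        h.Password("guest");
    });

    cfg.ReceiveEndpoint(host, "order_queue", e =>
    {
        e.Consumer<OrderConsumer>(); // your IConsumer<T> implementation
    });
});

bus.Start();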
I have successfully created an Azure application that sends DbTransactions to a Service Bus queue and then enqueues a 'notifying message' to a Service Bus topic for other clients to monitor (so they can receive the updates automatically).
Now I want to use SignalR to monitor and receive the SubscriptionClient messages, and I have test code that works just fine on its own.
I have found many examples of sending messages to an Azure queue (that is easy), and I have the code to receive a BrokeredMessage from a SubscriptionClient. However, I cannot get SignalR to continuously invoke my Distribute method.
How do I get SignalR to monitor the topic?
CODE BEHIND: (updated)
public void Dequeue()
{
    SubscriptionClient subscription = GetTopicSubscriptionClient(TOPIC_NAME, SUBSCRIPTION_NAME);

    BrokeredMessage message = subscription.Receive();
    if (message != null)
    {
        try
        {
            var body = message.GetBody<string>();
            var contextXml = message.Properties[PROPERTIES_CONTEXT_XML].ToString();
            var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();

            Console.WriteLine("Body: " + body);
            Console.WriteLine("MessageID: " + message.MessageId);
            Console.WriteLine("Custom Property [Transaction]: " + transaction);

            var context = XmlSerializer.Deserialize<Person>(contextXml);

            message.Complete();
            Clients.All.distribute(context, transaction);
        }
        catch (Exception ex)
        {
            // Manage later
        }
    }
}
CLIENT-SIDE CODE:
// TEST: Hub - GridUpdaterHub
var hubConnection = $.hubConnection();
var gridUpdaterHubProxy = hubConnection.createHubProxy('gridUpdaterHub');

gridUpdaterHubProxy.on('hello', function (message) {
    console.log(message);
});

// I want this automated
gridUpdaterHubProxy.on('distribute', function (context, transaction) {
    console.log('It is working');
});

hubConnection.start().done(function () {
    // This is successful
    gridUpdaterHubProxy.invoke('hello', "Hello");
});
I would not do it like that. Your code consumes and retains an ASP.NET thread-pool thread for each incoming connection, so with many clients you will not scale well at all. I do not know SignalR's internals that deeply, but I'd guess your never-ending method prevents SignalR from invoking your client callbacks, because that requires the server method to end properly. Try changing while(true) to something that exits after, say, 3 messages in the queue; you should be called back 3 times, and probably all of those calls will happen together when your method exits.
If that is right, you can move to something different, like dedicating a specific thread to consuming the queue and having the callbacks invoked from there using GlobalHost.ConnectionManager.GetHubContext (a rough sketch follows). Probably better, you could try a separate process that consumes the queue and does an HTTP POST to your web app, which in turn broadcasts to the clients.
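To illustrate the first option, broadcasting from a dedicated consumer thread via the hub context might look roughly like this (a sketch reusing the helpers from the question; GridUpdaterHub is assumed to be your hub class):

// Sketch: consume the subscription on a background task and broadcast
// through the hub context, instead of looping inside a hub method.
Task.Run(() =>
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<GridUpdaterHub>();
    var subscription = GetTopicSubscriptionClient(TOPIC_NAME, SUBSCRIPTION_NAME);

    while (true)
    {
        BrokeredMessage message = subscription.Receive(TimeSpan.FromSeconds(30));
        if (message == null) continue; // receive timed out, poll again

        var contextXml = message.Properties[PROPERTIES_CONTEXT_XML].ToString();
        var transaction = message.Properties[PROPERTIES_TRANSACTION_TYPE].ToString();
        var context = XmlSerializer.Deserialize<Person>(contextXml);

        message.Complete();
        hubContext.Clients.All.distribute(context, transaction);
    }
});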
I have a piece of code that calls a WCF service hosted on a server.
The code keeps looping around and around, calling this method over and over again. (It's asking for a 'status', so it's not doing any real work.)
That's fine, except that after a short period of time I get an error:
This request operation sent to net.tcp://serverName:9001/service1 did not receive a reply within the configured timeout (00:00:09.9843754)
And suddenly I cannot get to the server at all, EVER. I increased the timeout to 1 minute but still get the same problem. Note that the program hosting the service is doing nothing else, just offering its 'status', so it's not an issue with the WCF service app being busy.
I think it's a problem with the code calling the service, because when I restart the app it can connect to the service just fine... until, after another short time, I get the timeout error again. For the same reason I don't think it's a network error either, since when I restart the app it's OK for a period of time.
Here is the code I use to call the service. Do I need to dispose of the ChannelFactory after each call to clean it up, or what am I doing wrong?
NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
EndpointAddress endPoint = new EndpointAddress(new Uri(clientPath));
ChannelFactory<IClient> channel = new ChannelFactory<IClient>(binding, endPoint);
channel.Faulted += new EventHandler(channel_Faulted);
IClient client = channel.CreateChannel();
((IContextChannel)client).OperationTimeout = new TimeSpan(0, 0, 10);
ClientStatus clientStatus = client.GetStatus();
You do have to close client connections after you finish calling GetStatus. The best way to do this is a using block, but you can also do something like this after you call client.GetStatus():
ClientStatus clientStatus = client.GetStatus();

// Note: with ChannelFactory<IClient>.CreateChannel() the cast to
// ICommunicationObject is needed, since IClient itself does not
// expose State/Close/Abort.
var commObj = (ICommunicationObject)client;
try
{
    if (commObj.State != System.ServiceModel.CommunicationState.Faulted)
    {
        commObj.Close();
    }
}
catch (Exception)
{
    commObj.Abort();
}
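Note that the ChannelFactory in your loop should be cleaned up the same way; otherwise every iteration leaks a factory and its underlying connection, which would fit the "works after restart, then times out" symptom. A sketch of the full pattern (same APIs as above):

// Sketch: close (or abort) both the channel and its factory when done.
var factory = new ChannelFactory<IClient>(binding, endPoint);
IClient client = factory.CreateChannel();
var commObj = (ICommunicationObject)client;
try
{
    ClientStatus clientStatus = client.GetStatus();
    commObj.Close();
    factory.Close();
}
catch (Exception)
{
    commObj.Abort();
    factory.Abort();
}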