I am building a system that needs to send transactional emails. To achieve this, I am using Azure Storage queues to store each message temporarily before it is picked up by a WebJob and sent off to the intended recipient.
My code is as follows:
SendGridMessage message = new SendGridMessage();
//Populate message with details - omitted for brevity
var serializer = new JavaScriptSerializer();
var modelAsString = serializer.Serialize(message);
try
{
    var setting = CloudConfigurationManager.GetSetting("AzureStorageConnectionString");
    var account = CloudStorageAccount.Parse(setting);
    var queueClient = account.CreateCloudQueueClient();
    var queue = queueClient.GetQueueReference("FSPortalEmailQueue");
    queue.CreateIfNotExists();
    queue.AddMessage(new CloudQueueMessage(modelAsString));
}
catch (Exception ex)
{
    //Something went wrong
}
Each time I try to execute the code, an exception is thrown on the line
var modelAsString = serializer.Serialize(message);
"Exception has been thrown by the target of an invocation."
The inner exception thrown was
{"Bad key path!"} from source "SendGrid.SmtpApi"
Please advise what I am doing wrong here.
After a bit more digging, it turns out that the message.Header property was not being initialised. After adding
message.Header = new SendGrid.SmtpApi.Header();
message.Header.SetTo(new List<String> { enquiry.EnquiryCreatedBy.Email });
everything started working, pretty magically.
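For reference, here is how the fix slots into the original snippet; the Header must be initialised before Serialize is called (enquiry comes from the surrounding code that was omitted above):

SendGridMessage message = new SendGridMessage();
// Initialise the SMTP API header before serializing; leaving Header null
// is what produced the "Bad key path!" exception.
message.Header = new SendGrid.SmtpApi.Header();
message.Header.SetTo(new List<String> { enquiry.EnquiryCreatedBy.Email });
//Populate the rest of the message - omitted for brevity

var serializer = new JavaScriptSerializer();
var modelAsString = serializer.Serialize(message);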
I have an extremely simple setup for sending messages to Kafka:
var producerConfig = new ProducerConfig
{
    BootstrapServers = "www.example.com",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.ScramSha512,
    SaslUsername = _options.SaslUsername,
    SaslPassword = _options.SaslPassword,
    MessageTimeoutMs = 1
};

var producerBuilder = new ProducerBuilder<Null, string>(producerConfig);
using var producer = producerBuilder.Build();

producer.Produce("Some Topic", new Message<Null, string>()
{
    Timestamp = Timestamp.Default,
    Value = "hello"
});
Before, this code was working fine. Today it has decided to stop working and I'm trying to figure out why. I'm trying to get the producer to throw an exception when it fails to deliver a message, but it never seems to crash. Even when I fill in a wrong username and password, the producer still doesn't crash; there's not even a log line in my local output window. How can I debug my Kafka connection when the producer never shows any problems?
You can add SetErrorHandler() to the ProducerBuilder. The handler receives the producer and an Error, so it would look like this:

var producerBuilder = new ProducerBuilder<Null, string>(producerConfig)
    .SetErrorHandler((producer, error) => { /* inspect error.Reason here */ });

Set a breakpoint in that lambda and you can break on errors.
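If you need more visibility than the error callback alone, a minimal debugging sketch along these lines can help; the Debug config value turns on librdkafka's internal logging (the "broker,security" selection and the console sink here are just one choice):

var producerConfig = new ProducerConfig
{
    BootstrapServers = "www.example.com",
    // Ask librdkafka for verbose internal logs while debugging.
    Debug = "broker,security"
};

using var producer = new ProducerBuilder<Null, string>(producerConfig)
    .SetErrorHandler((p, error) =>
        Console.WriteLine($"Error: {error.Code} - {error.Reason} (fatal: {error.IsFatal})"))
    .SetLogHandler((p, logMessage) =>
        Console.WriteLine($"Log [{logMessage.Level}] {logMessage.Facility}: {logMessage.Message}"))
    .Build();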
Produce is asynchronous and non-blocking; the function signature is

void Produce(string topic, Message<TKey, TValue> message, Action<DeliveryReport<TKey, TValue>> deliveryHandler = null)

To verify that a message was delivered without error, you can add a delivery report handler function, e.g.:
private void DeliveryReportHandler(DeliveryReport<int, T> deliveryReport)
{
    if (deliveryReport.Status == PersistenceStatus.NotPersisted)
    {
        _logger.LogError($"Failed message delivery: error reason: {deliveryReport.Error?.Reason}");
        _messageWasNotDelivered = true;
    }
}
_messageWasNotDelivered = false;
_producer.Produce(topic,
    new Message<int, T>
    {
        Key = key,
        Value = entity
    },
    DeliveryReportHandler);
_producer.Flush(); // Wait until all outstanding produce requests and delivery report callbacks are completed
if (_messageWasNotDelivered)
{
    // handle non-delivery
}
This code can be trivially adjusted for batch producing, like this:
_messageWasNotDelivered = false;
foreach (var entity in entities)
{
    _producer.Produce(topic,
        new Message<int, T>
        {
            Key = entity.Id,
            Value = entity
        },
        DeliveryReportHandler);
}
_producer.Flush(); // Wait until all outstanding produce requests and delivery report callbacks are completed
if (_messageWasNotDelivered)
{
    // handle non-delivery
}
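If you would rather not block forever, Flush also has an overload that takes a timeout and returns the number of messages still in the internal queue, so it can double as a delivery check. A small sketch (the ten-second timeout is an arbitrary choice):

// Wait up to 10 seconds for outstanding produce requests and delivery reports.
int stillPending = _producer.Flush(TimeSpan.FromSeconds(10));
if (stillPending > 0 || _messageWasNotDelivered)
{
    // handle non-delivery: some messages never got a successful report
}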
I am experimenting with a new NServiceBus project utilizing Azure Storage Queues for message transport and JSON serialization, with the custom message unwrapping logic seen here:
var jsonSerializer = new Newtonsoft.Json.JsonSerializer();

transportExtensions.UnwrapMessagesWith(cloudQueueMessage =>
{
    using (var stream = new MemoryStream(cloudQueueMessage.AsBytes))
    using (var streamReader = new StreamReader(stream))
    using (var textReader = new JsonTextReader(streamReader))
    {
        try
        {
            var jObject = JObject.Load(textReader);
            using (var jsonReader = jObject.CreateReader())
            {
                // Try to deserialize to an NServiceBus envelope first
                var wrapper = jsonSerializer.Deserialize<MessageWrapper>(jsonReader);
                if (wrapper.MessageIntent != default)
                {
                    // This was an envelope message
                    return wrapper;
                }
            }

            // Otherwise this was an EventGrid event
            using (var jsonReader = jObject.CreateReader())
            {
                var @event = jsonSerializer.Deserialize<EventGridEvent>(jsonReader);
                var wrapper = new MessageWrapper
                {
                    Id = @event.Id,
                    Headers = new Dictionary<string, string>
                    {
                        { "NServiceBus.EnclosedMessageTypes", @event.EventType },
                        { "NServiceBus.MessageIntent", "Publish" },
                        { "EventGrid.topic", @event.Topic },
                        { "EventGrid.subject", @event.Subject },
                        { "EventGrid.eventTime", @event.EventTime.ToString("u") },
                        { "EventGrid.dataVersion", @event.DataVersion },
                        { "EventGrid.metadataVersion", @event.MetadataVersion },
                    },
                    Body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(@event.Data)),
                    MessageIntent = MessageIntentEnum.Publish
                };
                return wrapper;
            }
        }
        catch
        {
            logger.Error("Message deserialization failed, sending message to error queue");
            throw;
        }
    }
});
The custom message unwrapping logic works correctly for properly formatted JSON messages. When an improperly formatted JSON message is put into the input queue, the unwrapping logic errors out on the first line inside the usings, where I create the jObject, which is the expected behavior. However, when the custom message unwrapping logic fails, the error is caught by the logic in the MessageRetrieved class, which is part of the NServiceBus.Azure.Transports.WindowsAzureStorageQueues NuGet package (v8.2.0), seen below:
public async Task<MessageWrapper> Unwrap()
{
    try
    {
        Logger.DebugFormat("Unwrapping message with native ID: '{0}'", rawMessage.Id);
        return unwrapper.Unwrap(rawMessage);
    }
    catch (Exception ex)
    {
        await errorQueue.AddMessageAsync(rawMessage).ConfigureAwait(false);
        await inputQueue.DeleteMessageAsync(rawMessage).ConfigureAwait(false);
        throw new SerializationException($"Failed to deserialize message envelope for message with id {rawMessage.Id}. Make sure the configured serializer is used across all endpoints or configure the message wrapper serializer for this endpoint using the `SerializeMessageWrapperWith` extension on the transport configuration. Please refer to the Azure Storage Queue Transport configuration documentation for more details.", ex);
    }
}
The first line of the catch block runs correctly, adding the message to the configured error queue. However, when it does that, it appears to change the message ID and pop receipt of the raw message, as seen here:
(screenshots: initial message values / updated message values)
Then, when the next line runs and attempts to remove the original message from the input queue, it is unable to find it: according to this article https://learn.microsoft.com/en-us/rest/api/storageservices/delete-message2#remarks, the delete requires the original message ID and pop receipt, which have now changed, leading to the following error being thrown:
2020-04-20 14:17:58,603 WARN : Azure Storage Queue transport failed pushing a message through pipeline
Type: Microsoft.WindowsAzure.Storage.StorageException
Message: The remote server returned an error: (404) Not Found.
Source: Microsoft.WindowsAzure.Storage
StackTrace:
at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Executor\Executor.cs:line 50
at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass7.<CreateCallbackVoid>b__5(IAsyncResult ar) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Util\AsyncExtensions.cs:line 121
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.Transport.AzureStorageQueues.MessageRetrieved.<Unwrap>d__3.MoveNext() in C:\BuildAgent\work\3c19e2a032c05076\src\Transport\MessageRetrieved.cs:line 40
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.Transport.AzureStorageQueues.MessagePump.<InnerReceive>d__7.MoveNext() in C:\BuildAgent\work\3c19e2a032c05076\src\Transport\MessagePump.cs:line 153
TargetSite: T EndExecuteAsync[T](System.IAsyncResult)
Is this an issue with the NServiceBus package logic, or is something in my custom message unwrapping logic causing these values to change?
This is a bug. When unwrapping fails, the message has not yet gone through the processing pipeline, so normal recoverability does not apply. The CloudQueueMessage needs to be "cloned", with the clone sent to the error queue while the original message is used to remove it from the input queue. I've raised a bug issue on GitHub and you can track the progress there.
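For illustration, a minimal sketch of what the fixed catch block in Unwrap() could look like, assuming the CloudQueueMessage byte-array constructor from the WindowsAzure.Storage library shown in the stack trace (only the clone is new relative to the package code above):

catch (Exception ex)
{
    // Send a clone to the error queue; AddMessageAsync mutates the ID and
    // pop receipt of the message instance it is given.
    var clone = new CloudQueueMessage(rawMessage.AsBytes);
    await errorQueue.AddMessageAsync(clone).ConfigureAwait(false);

    // The original message still carries its ID and pop receipt, so the delete succeeds.
    await inputQueue.DeleteMessageAsync(rawMessage).ConfigureAwait(false);
    throw new SerializationException($"Failed to deserialize message envelope for message with id {rawMessage.Id}.", ex);
}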
I am unable to get an exception when the program fails to connect to the Kafka cluster.
The code outputs the exception in the console logs, but I need it to throw an exception. I am using this C# library:
https://github.com/confluentinc/confluent-kafka-dotnet
ProducerConfig _configKafka = new ProducerConfig { BootstrapServers = "localhost:9092/" };
ProducerBuilder<string, string> _kafkaProducer = new ProducerBuilder<string, string>(_configKafka);
using (var kafkaProducer = _kafkaProducer.Build())
{
    try
    {
        var dr = kafkaProducer.ProduceAsync("Kafka_Messages", new Message<string, string> { Key = null, Value = $"message {i++}" });
        dr.Wait(TimeSpan.FromSeconds(10));
        if (dr.Exception != null)
        {
            Console.WriteLine($"Delivery failed:");
        }
        var status = dr.Status;
        //Console.WriteLine($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
    }
    catch (ProduceException<Null, string> e)
    {
        Console.WriteLine($"Delivery failed: {e.Error.Reason}");
    }
}
Below is the error printed by confluent-kafka in the console:
%3|1565248275.024|FAIL|rdkafka#producer-1| [thrd:localhst:9092/bootstrap]: localhst:9092/bootstrap: Failed to resolve 'localhst:9092': No such host is known. (after 2269ms in state CONNECT)
%3|1565248275.024|ERROR|rdkafka#producer-1| [thrd:localhst:9092/bootstrap]: localhst:9092/bootstrap: Failed to resolve 'localhst:9092': No such host is known. (after 2269ms in state CONNECT)
%3|1565248275.025|ERROR|rdkafka#producer-1| [thrd:localhst:9092/bootstrap]: 1/1 brokers are down
To get the actual error within your application, you need to add .SetErrorHandler():

ProducerBuilder<string, string> _kafkaProducer = new ProducerBuilder<string, string>(_configKafka);
using (var kafkaProducer = _kafkaProducer.SetErrorHandler((producer, error) =>
{
    // You can handle the error right here
}).Build())

error.Reason contains the error message.
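Note that the error handler runs on a background poll thread, so throwing from inside it will not surface in your producing code. If you genuinely need an exception, one pattern (a sketch, not part of the library) is to capture the error in the handler and rethrow it after flushing:

Error _lastError = null;

using (var kafkaProducer = _kafkaProducer
    .SetErrorHandler((producer, error) => _lastError = error)
    .Build())
{
    kafkaProducer.Produce("Kafka_Messages", new Message<string, string> { Value = "hello" });
    kafkaProducer.Flush(TimeSpan.FromSeconds(10));

    if (_lastError != null)
    {
        // Surface the background error as a real exception in the caller.
        throw new KafkaException(_lastError);
    }
}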
You can use both .SetLogHandler and .SetErrorHandler in your consumer and producer code; otherwise it fails more or less silently without providing much detail. You can forward the messages to your logger there.
I am using the code below to read JSON from an endpoint in my Xamarin cross-platform project, and I am getting an error:
"Cannot access a disposed object" (it fires an ObjectDisposedException)
Is something wrong with the code? Can I write it in a better way?
public async Task<APISchoolDetailModel> GetSchooDetailsAsync()
{
    APISchoolDetailModel api_data = new APISchoolDetailModel();
    try
    {
        var client = new System.Net.Http.HttpClient();
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        var web_client = await client.GetAsync("http://appapitest.net/APIs/Student/Schooldetails");
        var response_string = web_client.Content.ReadAsStringAsync().Result;
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(api_data.GetType());
        MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(response_string));
        api_data = serializer.ReadObject(ms) as APISchoolDetailModel;
    }
    catch (Exception ex) { }
    return api_data;
}
Control reaches the line var web_client = await client.GetAsync(...) and then goes no further; after a few seconds I get the exception.
Is there a better way to write this code for reading and parsing JSON?
@Gserg pointed out something important: you should not do this:

var response_string = web_client.Content.ReadAsStringAsync().Result;

Instead, use:

var response_string = await web_client.Content.ReadAsStringAsync();

within an async Task method. If you use .Result, it can cause deadlocks between threads, or exactly the kind of failure you are experiencing, because one thread may end up using an object that has already been disposed.
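Putting that together, a sketch of the full method with await used throughout; the using block around the MemoryStream and the logging hint in the catch are my additions, the rest follows the original code:

public async Task<APISchoolDetailModel> GetSchooDetailsAsync()
{
    APISchoolDetailModel api_data = new APISchoolDetailModel();
    try
    {
        var client = new System.Net.Http.HttpClient();
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        var response = await client.GetAsync("http://appapitest.net/APIs/Student/Schooldetails");
        // await instead of .Result, so nothing blocks or races disposal
        var response_string = await response.Content.ReadAsStringAsync();
        var serializer = new DataContractJsonSerializer(api_data.GetType());
        using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(response_string)))
        {
            api_data = serializer.ReadObject(ms) as APISchoolDetailModel;
        }
    }
    catch (Exception ex)
    {
        // consider logging ex instead of swallowing it silently
    }
    return api_data;
}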
I created a custom error logger in CRM 2013 that saves error information into a CRM entity. I debugged my code and found that it works well. But the problem is that when CRM rolls back the transaction, the log entity also disappears. I want to know: is it possible to create the entity in the catch block and still throw the error?
public void Execute(IServiceProvider serviceProvider)
{
    try
    {
        ...
    }
    catch (Exception ex)
    {
        IPluginExecutionContext context =
            (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        IOrganizationServiceFactory serviceFactory =
            (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = serviceFactory.CreateOrganizationService(Guid.Empty);
        var log = new Log
        {
            Message = ex.Message
        };
        service.Create(log);
        throw;
    }
}
I found another way to solve this issue. We can create a new service that opens a new connection outside the failing transaction. Here is a snippet if you want to do the same:
try
{
    ...
}
catch (Exception ex)
{
    var HttpCurrentContext = HttpContext.Current;
    var UrlBase = HttpCurrentContext.Request.Url.Host;
    string httpUrl = @"http://";
    if (HttpCurrentContext.Request.IsLocal)
    {
        UrlBase += ":" + HttpCurrentContext.Request.Url.Port;
    }
    if (!UrlBase.Contains(httpUrl))
    {
        UrlBase = httpUrl + UrlBase;
    }
    var UriBase = new UriBuilder(UrlBase.ToLowerInvariant().Trim() + "/xrmservices/2011/organization.svc").Uri;
    IServiceConfiguration<IOrganizationService> orgConfigInfo =
        ServiceConfigurationFactory.CreateConfiguration<IOrganizationService>(UriBase);
    var creds = new ClientCredentials();
    using (_serviceProxy = new OrganizationServiceProxy(orgConfigInfo, creds))
    {
        // This statement is required to enable early-bound type support.
        _serviceProxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
        _service = (IOrganizationService)_serviceProxy;
        var log = new Log
        {
            Message = ex.Message
        };
        _service.Create(log);
    }
    throw;
}
Essentially, no. You cannot prevent an exception from rolling back the transaction. See a similar question on StackOverflow.
A common approach is to create a separate logging service that can store logs outside of the database transaction.
By the way, the Dynamics CRM 2015 spring release introduces the capability to store logs regardless of whether your plugin is participating in a database transaction.