I have an Azure Function with a blob trigger. In the function method arguments I expose the blob itself via BlobClient, along with the name of the uploaded file.
[FunctionName("MyFunc")]
public async Task RunAsync([BlobTrigger("upload/{name}", Connection = "DataLake")]
BlobClient blob, string name)
{
var propertiesResponse = await blob.GetPropertiesAsync();
var properties = propertiesResponse.Value;
var metadata = properties.Metadata;
//do stuff with metadata
if (metadata.TryGetValue("activityId", out var activityId))
{
}
using (var stream = await blob.OpenReadAsync())
using (var sr = new StreamReader(stream))
{
//do some stuff with blob
}
}
I would like to unit test this function and was trying to mock BlobClient, but I'm having issues using the Moq library. I have found BlobsModelFactory, which aims to help with mocking, but I can't see anything for BlobClient. Has anyone managed to mock BlobClient?
In line with the new Azure SDK guidelines, public methods are marked virtual so they can be mocked:
A service client is the main entry point for developers in an Azure SDK library. Because a client type implements most of the “live” logic that communicates with an Azure service, it’s important to be able to create an instance of a client that behaves as expected without making any network calls.
Each of the Azure SDK clients follows mocking guidelines that allow their behavior to be overridden:
Each client offers at least one protected constructor to allow inheritance for testing.
All public client members are virtual to allow overriding.
In the case of BlobClient, mocking can be done like this*:
var mock = new Mock<BlobClient>();
var responseMock = new Mock<Response>();
mock
    .Setup(m => m.GetPropertiesAsync(null, CancellationToken.None))
    .ReturnsAsync(Response.FromValue(BlobsModelFactory.BlobProperties(), responseMock.Object));
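Since the clients also expose a protected parameterless constructor, another option is a small hand-written fake instead of a Moq mock (a sketch only, not part of the original answer):
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Hand-rolled test double: the protected BlobClient() constructor exists specifically for mocking.
public class FakeBlobClient : BlobClient
{
    private readonly BlobProperties _properties;

    public FakeBlobClient(BlobProperties properties) => _properties = properties;

    public override Task<Response<BlobProperties>> GetPropertiesAsync(
        BlobRequestConditions conditions = null,
        CancellationToken cancellationToken = default)
        => Task.FromResult(Response.FromValue(_properties, null));
}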
Many model types, like BlobProperties, can be created using the static class BlobsModelFactory. Some examples:
var blobProps = BlobsModelFactory.BlobProperties(blobType: BlobType.Block);
var result = BlobsModelFactory.BlobDownloadResult(content: null);
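Putting the pieces together, a complete test could look roughly like the sketch below. The MyFunctions class name, the metadata values, and the blob content are assumptions for illustration (they are not part of the question), and only the members the function actually calls are set up.
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Moq;
using Xunit;

public class MyFuncTests
{
    [Fact]
    public async Task RunAsync_ReadsMetadataAndContent()
    {
        // Blob properties carrying the metadata the function looks for.
        var metadata = new Dictionary<string, string> { ["activityId"] = "12345" };
        var properties = BlobsModelFactory.BlobProperties(metadata: metadata);

        var blobMock = new Mock<BlobClient>();
        blobMock
            .Setup(m => m.GetPropertiesAsync(It.IsAny<BlobRequestConditions>(), It.IsAny<CancellationToken>()))
            .ReturnsAsync(Response.FromValue(properties, Mock.Of<Response>()));
        blobMock
            .Setup(m => m.OpenReadAsync(It.IsAny<long>(), It.IsAny<int?>(), It.IsAny<BlobRequestConditions>(), It.IsAny<CancellationToken>()))
            .ReturnsAsync(new MemoryStream(Encoding.UTF8.GetBytes("file contents")));

        // MyFunctions is a placeholder for whatever class hosts the blob-triggered method.
        var function = new MyFunctions();
        await function.RunAsync(blobMock.Object, "test.csv");

        blobMock.Verify(m => m.GetPropertiesAsync(It.IsAny<BlobRequestConditions>(), It.IsAny<CancellationToken>()), Times.Once);
    }
}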
Additional references:
https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/README.md#mocking
https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/core/Azure.Core/samples/Mocking.md
*The code is for demonstration only; the references give clues on how to use BlobsModelFactory.
Microsoft Dynamics CRM 2015.
I am testing an ASP.NET Core controller action. When I create a new Lead record, a plugin generates a new Guid for the lead.new_master_id field (its type is string). Therefore, after creating the record I retrieve it to get the generated new_master_id value. How can I emulate this plugin behaviour through Fake Xrm Easy?
var fakedContext = new XrmFakedContext();
fakedContext.ProxyTypesAssembly = typeof(Lead).Assembly;
var entities = new Entity[]
{
// is empty array
};
fakedContext.Initialize(entities);
var orgService = fakedContext.GetOrganizationService();
var lead = new Lead { FirstName = "James", LastName = "Bond" };
var leadId = orgService.Create(lead);
var masterId = orgService.Retrieve(Lead.EntityLogicalName, leadId,
new Microsoft.Xrm.Sdk.Query.ColumnSet(Lead.Fields.new_master_id))
.ToEntity<Lead>().new_master_id;
In v1.x of FakeXrmEasy you'll need to enable pipeline simulation and manually register the plugin steps you would like to fire on Create.
fakedContext.UsePipelineSimulation = true;
Once enabled, you'll need to register the necessary steps by calling RegisterPluginStep. In your example you'll need at least something along the lines of:
fakedContext.RegisterPluginStep<LeadPlugin>("Create", ProcessingStepStage.Preoperation);
Where LeadPlugin would be the name of your plugin that generates the new_master_id property (a sketch of such a plugin is shown below).
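For reference, a minimal plugin of that shape might look like the following. This is a hypothetical sketch of the plugin described in the question, not code from FakeXrmEasy itself:
using System;
using Microsoft.Xrm.Sdk;

// Hypothetical pre-operation plugin: on Create of a lead, populate new_master_id
// with a newly generated Guid string before the record is persisted.
public class LeadPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (context.MessageName == "Create"
            && context.InputParameters.Contains("Target")
            && context.InputParameters["Target"] is Entity lead)
        {
            lead["new_master_id"] = Guid.NewGuid().ToString();
        }
    }
}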
Keep in mind v1.x is limited in that it supports pipeline simulation for basic CRUD requests only.
Later versions (2.x and 3.x) come with a brand new middleware implementation that allows registering plugin steps for any message. Soon we'll be implementing automatic registration of plugin steps based on an actual environment and/or custom attributes.
Here's an example using the new middleware:
public class FakeXrmEasyTestsBase
{
protected readonly IXrmFakedContext _context;
protected readonly IOrganizationServiceAsync2 _service;
public FakeXrmEasyTestsBase()
{
_context = MiddlewareBuilder
.New()
.AddCrud()
.AddFakeMessageExecutors()
.AddPipelineSimulation()
.UsePipelineSimulation()
.UseCrud()
.UseMessages()
.Build();
_service = _context.GetAsyncOrganizationService2();
}
}
You can find more info in the FakeXrmEasy QuickStart guide.
Disclaimer: I'm the author of FakeXrmEasy :)
I am using AmazonS3Client to read/write data to S3 object storage. In my code I create a new connection every time I perform operations like Read, List Buckets, Upload, Rename, Delete, etc. After deploying my application to production I encountered some performance issues. After going through a few blogs, it was recommended to use a single AmazonS3Client connection. My code is below.
For each of the CRUD operations below, you can see I create a new connection and then dispose of it with a using block. I am planning to have a single connection and use it on every call without a using block. Is maintaining a single connection a good choice? I have ~400 users accessing the application at the same time.
public ObjectFileInfo(string path)
{
StorageClient = ObjectFileManager.GetClient();
objectFileInfo = ObjectFileManager.getFileInfo(StorageClient, path);
}
public class ObjectFileManager
{
public static Amazon.S3.AmazonS3Client GetClient()
{
AmazonS3Config Config = new AmazonS3Config();
AmazonS3Client StorageClient;
Config.RegionEndpoint = null;
Config.ServiceURL = ConfigurationManager.NGDMSobjECSEndPoint;
Config.AllowAutoRedirect = true;
Config.ForcePathStyle = true;
Config.Timeout = TimeSpan.FromMinutes(30);
StorageClient = new AmazonS3Client(ConfigurationManager.NGDMSobjECSUser, ConfigurationManager.NGDMSobjECSKey, Config);
return StorageClient;
}
public static string[] ListBuckets()
{
ListBucketsResponse Response;
//Creating AmazonS3Client and disposing it in using
using (AmazonS3Client StorageClient = GetClient())
{
Response = StorageClient.ListBuckets();
}
var BucketNames = from Bucket in Response.Buckets select Bucket.BucketName;
return BucketNames.ToArray();
}
public static bool DeleteFile(string keyName)
{
var delRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};
//Creating AmazonS3Client and disposing it in using
using (AmazonS3Client StorageClient = GetClient())
{
StorageClient.DeleteObject(delRequest);
}
return true;
}
}
I am planning to use a singleton as below and remove the using blocks:
class S3ObjectStorageClient
{
/// <summary>
/// Singleton implementation of Object Storage Client
/// </summary>
private S3ObjectStorageClient()
{
}
public static AmazonS3Client Client
{
get
{
return S3Client.clientInstance;
}
}
/// <summary>
/// Nested private class to ensure Singleton
/// </summary>
private class S3Client
{
static S3Client()
{
}
internal static readonly AmazonS3Client clientInstance = ObjectFileManager.GetClient();
}
}
public ObjectFileInfo(string path)
{
StorageClient = S3ObjectStorageClient.Client; //Singleton
objectFileInfo = ObjectFileManager.getFileInfo(StorageClient, path);
}
public static string[] ListBuckets()
{
ListBucketsResponse Response;
//Singleton and removed using block
AmazonS3Client StorageClient = S3ObjectStorageClient.Client;
Response = StorageClient.ListBuckets();
var BucketNames = from Bucket in Response.Buckets select Bucket.BucketName;
return BucketNames.ToArray();
}
public static bool DeleteFile(string keyName)
{
var delRequest = new DeleteObjectRequest
{
BucketName = bucketName,
Key = keyName
};
//Singleton and removed using block
AmazonS3Client StorageClient = S3ObjectStorageClient.Client;
StorageClient.DeleteObject(delRequest);
return true;
}
}
As one of the authors of the AWS .NET SDK I can give a little more context. Under the covers, the AmazonS3Client, along with all of the other service clients in the SDK, manages a pool of HttpClients, which are the expensive objects to create. So when you create a new AmazonS3Client, the SDK reuses an HttpClient from a pool the SDK is managing.
If you are using a proxy with proxy credentials, then the SDK does have to create a new HttpClient each time a service client is created.
An area where there could be potential performance issues with creating service clients all the time is determining the AWS credentials to use when an AWSCredentials object is not passed into the constructor. In that case each service client has to resolve the credentials, which, if you are using an assume-role profile, could cause a lot of extra calls to perform the assume role. Getting credentials from instance metadata is optimized so that only one thread refreshes those credentials per process.
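For example (a sketch only, reusing the configuration property names from the question), the credentials object can be created once and passed to every client so it never has to be re-resolved:
using Amazon.Runtime;
using Amazon.S3;

// Create the credentials object once (e.g. at startup) and hand it to every client,
// so each new AmazonS3Client skips credential resolution entirely.
var credentials = new BasicAWSCredentials(
    ConfigurationManager.NGDMSobjECSUser, ConfigurationManager.NGDMSobjECSKey);

var config = new AmazonS3Config
{
    ServiceURL = ConfigurationManager.NGDMSobjECSEndPoint,
    ForcePathStyle = true
};

var storageClient = new AmazonS3Client(credentials, config);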
Actually, you can safely reuse it; according to the docs it is not a bad idea to create and reuse a client, but creating a new client is not very expensive either:
The best-known aspect of the AWS SDK for .NET are the various service clients that you can use to interact with AWS. Client objects are thread safe, disposable, and can be reused. (Client objects are inexpensive, so you are not incurring a large overhead by constructing multiple instances, but it’s not a bad idea to create and reuse a client.)
Thus, according to this, the performance benefits are probably not that huge. But since there is a small cost to creating a new client, I would always reuse the client. That said, according to the docs, your code
using (AmazonS3Client StorageClient = GetClient())
{
Response = StorageClient.ListBuckets();
}
is not really bad, just a bit less efficient than using a singleton. If you think it hurts your performance in a noticeable way, the best bet is to measure it and, if it really is the cause, refactor to a singleton.
Both are valid approaches, but you'll certainly gain efficiency using a singleton.
Moreover, dependency injection is promoted by AWS as the right pattern for consuming the clients. For example, the AWS CodeGuru Profiler service highlights multiple client instances as a source of optimization.
See also : https://aws.amazon.com/fr/blogs/developer/working-with-dependency-injection-in-net-standard-inject-your-aws-clients-part-1/
Using the latest (12.3.0 at the time of writing) NuGet package for the Azure.Storage.Blobs assembly, and uploading asynchronously with the BlobServiceClient class, I want to set retry options in case of transient failures.
But no overload of the UploadAsync() method takes any object with retry options:
UploadAsync(Stream, BlobHttpHeaders, IDictionary<String,String>, BlobRequestConditions, IProgress<Int64>, Nullable<AccessTier>, StorageTransferOptions, CancellationToken)
And although it is possible to set BlobClientOptions when creating a BlobServiceClient, and these inherit a Retry property (of type RetryOptions) from the abstract base class ClientOptions, the property is read-only:
// Summary:
// Gets the client retry options.
public RetryOptions Retry { get; }
How do I set a retry policy on an Azure blob storage operation using the Azure.Storage.Blobs assembly?
You should configure retries when creating the blob client. The Retry property itself is get-only, but the properties of the RetryOptions object it exposes are settable. Here's a sample:
var options = new BlobClientOptions();
options.Diagnostics.IsLoggingEnabled = false;
options.Diagnostics.IsTelemetryEnabled = false;
options.Diagnostics.IsDistributedTracingEnabled = false;
options.Retry.MaxRetries = 0;
var client = new BlobClient(blobUri: new Uri(uriString:""), options: options);
In addition, it is possible to set the BlobClientOptions when creating a BlobServiceClient:
var blobServiceClient = new BlobServiceClient
(connectionString:storageAccountConnectionString, options: options );
You can then use BlobServiceClient.GetBlobContainerClient(blobContainerName:"") and BlobContainerClient.GetBlobClient(blobName:"") to build the blob URI in a consistent manner, with options.
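For completeness, a slightly fuller sketch with illustrative retry values (the connection string, container, blob, and file names are placeholders):
using System;
using System.IO;
using Azure.Core;
using Azure.Storage.Blobs;

var options = new BlobClientOptions();
options.Retry.Mode = RetryMode.Exponential;        // exponential back-off between attempts
options.Retry.MaxRetries = 5;                      // retry transient failures up to 5 times
options.Retry.Delay = TimeSpan.FromSeconds(2);     // initial back-off
options.Retry.MaxDelay = TimeSpan.FromSeconds(30);
options.Retry.NetworkTimeout = TimeSpan.FromMinutes(5);

var serviceClient = new BlobServiceClient("<connection-string>", options);
var containerClient = serviceClient.GetBlobContainerClient("my-container");
var blobClient = containerClient.GetBlobClient("my-blob.txt");

// The retry policy configured above applies to this upload and to every other
// call made through clients created from this BlobServiceClient.
using var stream = File.OpenRead("local-file.txt");
await blobClient.UploadAsync(stream, overwrite: true);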
Apologies, I'm new to Azure. I created a service bus and queue via the Azure portal using this tutorial.
I can write and read from the queue ok. The problem is, to deploy to the next environment, I have to either update the ARM template to add the new queue or create the queue in code. I can't create the queue through the portal in the next environment.
I've chosen the latter: check whether the queue exists and create it as required via code. I already have an implementation of this for a CloudQueueClient (in the Microsoft.WindowsAzure.Storage.Queue namespace). It uses a CloudStorageAccount entity to create the CloudQueueClient and creates the queue if it doesn't exist.
I was hoping it would be this simple, but it appears not. I'm struggling to find a way to create a QueueClient (in the Microsoft.Azure.ServiceBus namespace). All I have is the Service Bus connection string and the queue name, but having scoured the Microsoft docs, there's talk of a NamespaceManager and MessagingFactory (in a different namespace) being involved in the process.
Can anyone point me in the direction of how to achieve this and more importantly, is this the right approach? I'll be using DI to instantiate the queue so the check/creation will only be done once.
The solution is required for a Service Bus queue and not a storage account queue; the differences are outlined here.
Thanks
Sean Feldman's answer pointed me in the right direction. The main NuGet packages/namespaces required (.NET Core) are:
Microsoft.Azure.ServiceBus
Microsoft.Azure.ServiceBus.Management
Here's my solution:
private readonly Lazy<Task<QueueClient>> asyncClient;
private readonly QueueClient client;
public MessageBusService(string connectionString, string queueName)
{
asyncClient = new Lazy<Task<QueueClient>>(async () =>
{
var managementClient = new ManagementClient(connectionString);
var allQueues = await managementClient.GetQueuesAsync();
var foundQueue = allQueues.Where(q => q.Path == queueName.ToLower()).SingleOrDefault();
if (foundQueue == null)
{
await managementClient.CreateQueueAsync(queueName); //add queue description properties
}
return new QueueClient(connectionString, queueName);
});
client = asyncClient.Value.Result;
}
Not the easiest thing to find but hope it helps someone out.
To create entities with the newer Microsoft.Azure.ServiceBus client you will need to use ManagementClient, by creating an instance and invoking CreateQueueAsync().
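For example (a sketch only; the connection string and queue name are placeholders):
using Microsoft.Azure.ServiceBus.Management;

var managementClient = new ManagementClient("<service-bus-connection-string>");

// Create the queue only if it does not already exist.
if (!await managementClient.QueueExistsAsync("my-queue"))
{
    await managementClient.CreateQueueAsync("my-queue");
}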
The Microsoft.Azure.ServiceBus NuGet package in the accepted answer is now deprecated. To use the Azure.Messaging.ServiceBus package instead, the code you want is as follows:
using Azure.Messaging.ServiceBus.Administration;
var client = new ServiceBusAdministrationClient(connectionString);
if (!await client.QueueExistsAsync(queueName))
{
await client.CreateQueueAsync(queueName);
}
You can also create a Service Bus queue using NamespaceManager, like so:
QueueDescription _serviceBusQueue = new QueueDescription("QUEUENAME"); //assign the required properties to _serviceBusQueue
NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString("CONNECTIONSTRING");
var queue = await namespaceManager.CreateQueueAsync(_serviceBusQueue);
In case you need a more up-to-date implementation using the newer Azure.Messaging.ServiceBus library, you'll also need to install the System.Linq.Async package (it provides ToListAsync over the async enumerable returned by GetQueuesAsync).
It's an extension method for creating "missing" queues in your Service Bus namespace:
using Azure.Messaging.ServiceBus.Administration;
using System.Threading.Tasks;
using System.Linq;
using System;
public static class ServiceBusAdministrationClientExtensions
{
public static async Task CreateMissingQueuesAsync(this ServiceBusAdministrationClient serviceBusAdministrationClient, params string[] queueNames)
{
var allQueues = serviceBusAdministrationClient.GetQueuesAsync();
var queueList = await allQueues.ToListAsync();
foreach (var queueName in queueNames) {
var foundQueue = queueList.Where(q => q.Name == queueName.ToLower()).Any();
if (!foundQueue)
{
var queueOptions = new CreateQueueOptions(queueName)
{
DefaultMessageTimeToLive = TimeSpan.FromHours(1),
LockDuration = TimeSpan.FromSeconds(30)
};
await serviceBusAdministrationClient.CreateQueueAsync(queueOptions);
}
}
}
}
Then call the extension method like this:
var _serviceBusAdminClient = new ServiceBusAdministrationClient(ServiceBusConnectionString);
await _serviceBusAdminClient.CreateMissingQueuesAsync("queueName");
I adapted this code from the accepted answer in this thread.