AmazonS3Client: single connection vs. new connection for each call (C#)

I am using AmazonS3Client to read/write data to S3 object storage. In my code I create a new connection every time I perform an operation such as Read, List Buckets, Upload, Rename, or Delete. After deploying my application to production I ran into some performance issues, and several blogs recommended using a single AmazonS3Client connection. My code is below.
For each CRUD operation you can see that I create a new connection and then dispose of it via a using block. I am planning to switch to a single connection and drop the using block on every call. Is maintaining a single connection a good choice? I have ~400 users accessing the application at the same time.
public ObjectFileInfo(string path)
{
    StorageClient = ObjectFileManager.GetClient();
    objectFileInfo = ObjectFileManager.getFileInfo(StorageClient, path);
}

public class ObjectFileManager
{
    public static Amazon.S3.AmazonS3Client GetClient()
    {
        AmazonS3Config Config = new AmazonS3Config();
        AmazonS3Client StorageClient;
        Config.RegionEndpoint = null;
        Config.ServiceURL = ConfigurationManager.NGDMSobjECSEndPoint;
        Config.AllowAutoRedirect = true;
        Config.ForcePathStyle = true;
        Config.Timeout = TimeSpan.FromMinutes(30);
        StorageClient = new AmazonS3Client(ConfigurationManager.NGDMSobjECSUser, ConfigurationManager.NGDMSobjECSKey, Config);
        return StorageClient;
    }

    public static string[] ListBuckets()
    {
        ListBucketsResponse Response;
        // Creating AmazonS3Client and disposing it in using
        using (AmazonS3Client StorageClient = GetClient())
        {
            Response = StorageClient.ListBuckets();
        }
        var BucketNames = from Bucket in Response.Buckets select Bucket.BucketName;
        return BucketNames.ToArray();
    }

    public static bool DeleteFile(string keyName)
    {
        var delRequest = new DeleteObjectRequest
        {
            BucketName = bucketName,
            Key = keyName
        };
        // Creating AmazonS3Client and disposing it in using
        using (AmazonS3Client StorageClient = GetClient())
        {
            StorageClient.DeleteObject(delRequest);
        }
        return true;
    }
}
I am planning to use a singleton as below and remove the using blocks:
class S3ObjectStorageClient
{
    /// <summary>
    /// Singleton implementation of Object Storage Client
    /// </summary>
    private S3ObjectStorageClient()
    {
    }

    public static AmazonS3Client Client
    {
        get
        {
            return S3Client.clientInstance;
        }
    }

    /// <summary>
    /// Nested private class to ensure Singleton
    /// </summary>
    private class S3Client
    {
        static S3Client()
        {
        }

        internal static readonly AmazonS3Client clientInstance = ObjectFileManager.GetClient();
    }
}

public ObjectFileInfo(string path)
{
    StorageClient = S3ObjectStorageClient.Client; // Singleton
    objectFileInfo = ObjectFileManager.getFileInfo(StorageClient, path);
}

public static string[] ListBuckets()
{
    ListBucketsResponse Response;
    // Singleton and removed using block
    AmazonS3Client StorageClient = S3ObjectStorageClient.Client;
    Response = StorageClient.ListBuckets();
    var BucketNames = from Bucket in Response.Buckets select Bucket.BucketName;
    return BucketNames.ToArray();
}

public static bool DeleteFile(string keyName)
{
    var delRequest = new DeleteObjectRequest
    {
        BucketName = bucketName,
        Key = keyName
    };
    // Singleton and removed using block
    AmazonS3Client StorageClient = S3ObjectStorageClient.Client;
    StorageClient.DeleteObject(delRequest);
    return true;
}

As one of the authors of the AWS .NET SDK I can give a little more context. Under the covers, the AmazonS3Client, like all of the other service clients in the SDK, manages a pool of HttpClients, which are the expensive objects to create. So when you create a new AmazonS3Client, the SDK reuses an HttpClient from a pool the SDK is managing.
If you are using a proxy with proxy credentials, then the SDK does have to create a new HttpClient each time a service client is created.
An area where there could be potential performance issues with creating service clients all the time is determining the AWS credentials to use when an AWSCredentials object is not passed into the constructor. Each service client then has to resolve the credentials itself, and if you are using an assume-role profile that can cause a lot of extra calls to perform the assume role. Getting credentials from instance metadata is optimized so that only one thread refreshes those credentials per process.
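If you do keep creating clients, one way to sidestep that repeated resolution is to resolve the credentials once and pass the same AWSCredentials instance to every constructor. A minimal sketch reusing the configuration values from the question (BasicAWSCredentials is just one option; any AWSCredentials implementation works):
using Amazon.Runtime;
using Amazon.S3;

// Resolve the credentials once and hand the same AWSCredentials instance
// to every client, so no per-client credential resolution happens.
AWSCredentials credentials = new BasicAWSCredentials(
    ConfigurationManager.NGDMSobjECSUser,
    ConfigurationManager.NGDMSobjECSKey);

var config = new AmazonS3Config
{
    ServiceURL = ConfigurationManager.NGDMSobjECSEndPoint,
    ForcePathStyle = true
};

var client = new AmazonS3Client(credentials, config);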

Actually, you can safely reuse it. According to the docs it is not a bad idea to create and reuse a client, although creating a new one is not very expensive either:
The best-known aspect of the AWS SDK for .NET are the various service clients that you can use to interact with AWS. Client objects are thread safe, disposable, and can be reused. (Client objects are inexpensive, so you are not incurring a large overhead by constructing multiple instances, but it’s not a bad idea to create and reuse a client.)
So according to this, the performance benefits are probably not huge. But since there is a small cost to creating a new client, I would always reuse it. That said, according to the docs your code
using (AmazonS3Client StorageClient = GetClient())
{
    Response = StorageClient.ListBuckets();
}
is not really bad, just a bit less efficient than using a singleton. If you think it hurts your performance in a noticeable way, your best bet is to measure it, and only if it really is the cause, refactor to a singleton.
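If you do move to a singleton, a Lazy<T> wrapper is a simpler alternative to the nested-class pattern shown in the question; a minimal sketch reusing the question's GetClient helper:
using System;
using Amazon.S3;

public static class S3ClientProvider
{
    // Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication,
    // so the client is created exactly once, on first use, even under
    // concurrent access.
    private static readonly Lazy<AmazonS3Client> _client =
        new Lazy<AmazonS3Client>(() => ObjectFileManager.GetClient());

    public static AmazonS3Client Client => _client.Value;
}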

Both are valid approaches, but you'll certainly gain some efficiency by using a singleton.
Moreover, dependency injection is promoted by AWS as the right pattern for consuming the clients. For example, the AWS CodeGuru Profiler service highlights multiple client instances as an optimization opportunity.
See also : https://aws.amazon.com/fr/blogs/developer/working-with-dependency-injection-in-net-standard-inject-your-aws-clients-part-1/
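For illustration, the registration described in that post boils down to something like this (a sketch assuming an ASP.NET Core app with the AWSSDK.Extensions.NETCore.Setup package, where Configuration is the app's IConfiguration):
using Amazon.S3;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Reads region/profile settings from the "AWS" section of appsettings.json.
    services.AddDefaultAWSOptions(Configuration.GetAWSOptions());

    // Registers IAmazonS3 with a singleton lifetime by default,
    // so the whole application shares one client instance.
    services.AddAWSService<IAmazonS3>();
}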

Creating an Azure ServiceBus Queue via code

Apologies, I'm new to Azure. I created a service bus and queue via the Azure portal using this tutorial.
I can write and read from the queue ok. The problem is, to deploy to the next environment, I have to either update the ARM template to add the new queue or create the queue in code. I can't create the queue through the portal in the next environment.
I've chosen the latter: check to see if the queue exists and create it as required via code. I already have an implementation of this for a CloudQueueClient (in the Microsoft.WindowsAzure.Storage.Queue namespace), which uses a CloudStorageAccount entity to create the queue if it doesn't exist.
I was hoping it would be this simple, but it appears not. I'm struggling to find a way to create a QueueClient (in the Microsoft.Azure.ServiceBus namespace). All I have is the Service Bus connection string and the queue name, but having scoured the Microsoft docs, there's talk of a NamespaceManager and a MessagingFactory (in a different namespace) being involved in the process.
Can anyone point me in the direction of how to achieve this, and more importantly, is this the right approach? I'll be using DI to instantiate the queue, so the check/creation will only be done once.
The solution is required for a Service Bus queue, not a storage account queue. Differences outlined here.
Thanks
Sean Feldman's answer pointed me in the right direction. The main NuGet packages/namespaces required (.NET Core) are:
Microsoft.Azure.ServiceBus
Microsoft.Azure.ServiceBus.Management
Here's my solution:
private readonly Lazy<Task<QueueClient>> asyncClient;
private readonly QueueClient client;

public MessageBusService(string connectionString, string queueName)
{
    asyncClient = new Lazy<Task<QueueClient>>(async () =>
    {
        var managementClient = new ManagementClient(connectionString);
        var allQueues = await managementClient.GetQueuesAsync();
        var foundQueue = allQueues.Where(q => q.Path == queueName.ToLower()).SingleOrDefault();
        if (foundQueue == null)
        {
            await managementClient.CreateQueueAsync(queueName); // add queue description properties here if needed
        }
        return new QueueClient(connectionString, queueName);
    });
    client = asyncClient.Value.Result;
}
Not the easiest thing to find but hope it helps someone out.
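One caveat: blocking on asyncClient.Value.Result in a constructor can deadlock in environments with a synchronization context (UI threads, classic ASP.NET). A possible alternative, sketched here against the same Microsoft.Azure.ServiceBus packages, is to keep the creation fully asynchronous behind a factory method:
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

public static class QueueClientFactory
{
    // Creates the queue if it is missing, then returns a client for it.
    // Callers await this instead of blocking inside a constructor.
    public static async Task<QueueClient> CreateAsync(string connectionString, string queueName)
    {
        var managementClient = new ManagementClient(connectionString);
        if (!await managementClient.QueueExistsAsync(queueName))
        {
            await managementClient.CreateQueueAsync(queueName);
        }
        return new QueueClient(connectionString, queueName);
    }
}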
To create entities with the new Microsoft.Azure.ServiceBus client you will need to use ManagementClient: create an instance and invoke CreateQueueAsync().
The Microsoft.Azure.ServiceBus NuGet package in the accepted answer is now deprecated. To use the Azure.Messaging.ServiceBus package instead, the code you want is as follows:
using Azure.Messaging.ServiceBus.Administration;

var client = new ServiceBusAdministrationClient(connectionString);
if (!await client.QueueExistsAsync(queueName))
{
    await client.CreateQueueAsync(queueName);
}
You can create a Service Bus queue using NamespaceManager like this:
QueueDescription _serviceBusQueue = new QueueDescription("QUEUENAME"); // assign the required properties to _serviceBusQueue
NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString("CONNECTIONSTRING");
var queue = await namespaceManager.CreateQueueAsync(_serviceBusQueue);
In case you need a more up-to-date implementation using the newer Azure.Messaging.ServiceBus library, you'll also need to install the System.Linq.Async package.
The following is an extension method for creating "missing" queues in your Service Bus namespace:
using Azure.Messaging.ServiceBus.Administration;
using System.Threading.Tasks;
using System.Linq;
using System;

public static class ServiceBusAdministrationClientExtensions
{
    public static async Task CreateMissingQueuesAsync(this ServiceBusAdministrationClient serviceBusAdministrationClient, params string[] queueNames)
    {
        var allQueues = serviceBusAdministrationClient.GetQueuesAsync();
        var queueList = await allQueues.ToListAsync();
        foreach (var queueName in queueNames)
        {
            var foundQueue = queueList.Where(q => q.Name == queueName.ToLower()).Any();
            if (!foundQueue)
            {
                var queueOptions = new CreateQueueOptions(queueName)
                {
                    DefaultMessageTimeToLive = TimeSpan.FromHours(1),
                    LockDuration = TimeSpan.FromSeconds(30)
                };
                await serviceBusAdministrationClient.CreateQueueAsync(queueOptions);
            }
        }
    }
}
Then call the extension method like this:
var _serviceBusAdminClient = new ServiceBusAdministrationClient(ServiceBusConnectionString);
await _serviceBusAdminClient.CreateMissingQueuesAsync("queueName");
I adapted this code from the accepted answer in this thread.

How to send data as a SoapMessage and get a reply?

I have some data that needs to be sent in SOAP format to a server. That server will immediately acknowledge that it received the message. A few hours later I receive (possibly from another server) a SOAP message that contains information about the processed data.
I read Stackoverflow: How to send SOAP request and receive response. However, the answers are 8 years old. Although they may still work, there may be newer techniques.
And indeed it seems: Microsoft has System.Web.Services.Protocols, with classes like SoapMessage, SoapClientMessage, SoapServerMessage, etc.
Looking at the classes I find a lot of SOAP-like classes (headers, extensions, client messages, server messages, ...). Normally the provided examples give me an indication of how these classes work together and how to use them, but in the MSDN documents I can only find examples of how to process already existing SOAP messages.
Given some data that needs to be sent, how can I wrap this data somehow in one of these SOAP classes and send this message?
Are these classes meant for this purpose? Or should I stick to the 2011 method, where you create a SOAP web request by formatting the XML data in SOAP format yourself, as the above-mentioned Stackoverflow question suggests?
I'm awfully sorry; normally I would describe the things I have tried. Alas, I don't see the relation between the provided SoapMessage classes, and I haven't got a clue how to use them.
Addition after comments
I'm using Windows Server / Visual Studio (newest version) / .NET (newest version) / C# (newest version).
The communication with the server is mutually authenticated. The certificate that I need to use to communicate with the server is in PEM (CER/CRT) format; the private key is RSA. The certificate is issued by a proper CA, and the server will also use certificates issued by a proper CA, so I don't need to create a new certificate (in fact, it won't be accepted). If needed, I'm willing to convert the certificates using programs like OpenSSL and the like.
I've tried to use Apache Tomcat to communicate, but I have the feeling that it's way too much for the task of sending one SOAP message per day and waiting for one answer per day.
Maybe because Java is a completely new technology for me, it was difficult to see the contents of the received messages. So back to C# and .NET.
I was planning to create a DLL, to be used by a console app. The function would take some data in a stream as input. It would create the SOAP message, send it, wait for the reply confirming the message was received correctly, and then wait (possibly several hours) for a new SOAP message containing the results of the processed data. To make proper reporting and cancellation possible, I guess it is best to do this using async/await.
If sending the order and waiting for the result can't be done in one application, I'm willing to create a Windows service that listens for the input, but I prefer to keep it simple.
The (virtual) computer will only be used for this task, so no one else will need to listen on port 443. There will be one order message sent per day, and one result message received per day.
Here is sample C# console client and server code (they are in the same sample, but this is only for demo purposes, of course) that uses HTTPS.
For the client side, we can reuse the SoapHttpClientProtocol class; for the server side, unfortunately, we cannot reuse anything because the classes are completely tied to ASP.NET's (IIS) HttpContext class.
For the server side, we use HttpListener, so, depending on your configuration, the server side will probably require admin rights to be able to call HttpListener's Prefixes.Add(url).
The code doesn't use a client certificate, but you can add that where I placed the // TODO comments.
The code assumes there is a certificate associated with the URL and port used. If there isn't (use netsh http show sslcert to dump all associated certs), you can use the procedure described here to add one: https://stackoverflow.com/a/11457719/403671
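For reference, the binding command looks something like this (run from an elevated prompt; the thumbprint and appid below are placeholders):
netsh http add sslcert ipport=0.0.0.0:443 certhash=<certificate thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}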
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Xml;

namespace SoapTests
{
    class Program
    {
        static void Main(string[] args)
        {
            // code presumes there is an sslcert associated with the url/port below
            var url = "https://127.0.0.1:443/";
            using (var server = new MyServer(url, MyClient.NamespaceUri))
            {
                server.Start(); // requests will occur on other threads
                using (var client = new MyClient())
                {
                    client.Url = url;
                    Console.WriteLine(client.SendTextAsync("hello world").Result);
                }
            }
        }
    }

    [WebServiceBinding(Namespace = NamespaceUri)]
    public class MyClient : SoapHttpClientProtocol
    {
        public const string NamespaceUri = "http://myclient.org/";

        public async Task<string> SendTextAsync(string text)
        {
            // TODO: add client certificates using this.ClientCertificates property
            var result = await InvokeAsync(nameof(SendText), new object[] { text }).ConfigureAwait(false);
            return result?[0]?.ToString();
        }

        // using this method is not recommended, as async is preferred
        // but we need it with this attribute to make underlying implementation happy
        [SoapDocumentMethod]
        public string SendText(string text) => SendTextAsync(text).Result;

        // this is the new Task-based async model (TAP) wrapping the old Async programming model (APM)
        public Task<object[]> InvokeAsync(string methodName, object[] input, object state = null)
        {
            if (methodName == null)
                throw new ArgumentNullException(nameof(methodName));

            return Task<object[]>.Factory.FromAsync(
                beginMethod: (i, c, o) => BeginInvoke(methodName, i, c, o),
                endMethod: EndInvoke,
                arg1: input,
                state: state);
        }
    }

    // server implementation
    public class MyServer : TinySoapServer
    {
        public MyServer(string url, string namespaceUri)
            : base(url)
        {
            if (namespaceUri == null)
                throw new ArgumentNullException(nameof(namespaceUri));

            NamespaceUri = namespaceUri;
        }

        // must be same as client namespace in attribute
        public override string NamespaceUri { get; }

        protected override bool HandleSoapMethod(XmlDocument outputDocument, XmlElement requestMethodElement, XmlElement responseMethodElement)
        {
            switch (requestMethodElement.LocalName)
            {
                case "SendText":
                    // get the input
                    var text = requestMethodElement["text", NamespaceUri]?.InnerText;
                    text += " from server";
                    AddSoapResult(outputDocument, requestMethodElement, responseMethodElement, text);
                    return true;
            }
            return false;
        }
    }

    // simple generic SOAP server
    public abstract class TinySoapServer : IDisposable
    {
        private readonly HttpListener _listener;

        protected TinySoapServer(string url)
        {
            if (url == null)
                throw new ArgumentNullException(nameof(url));

            _listener = new HttpListener();
            _listener.Prefixes.Add(url); // this requires some rights if not used on localhost
        }

        public abstract string NamespaceUri { get; }
        protected abstract bool HandleSoapMethod(XmlDocument outputDocument, XmlElement requestMethodElement, XmlElement responseMethodElement);

        public async void Start()
        {
            _listener.Start();
            do
            {
                var ctx = await _listener.GetContextAsync().ConfigureAwait(false);
                ProcessRequest(ctx);
            }
            while (true);
        }

        protected virtual void ProcessRequest(HttpListenerContext context)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));

            // TODO: add a call to context.Request.GetClientCertificate() to validate client cert
            using (var stream = context.Response.OutputStream)
            {
                ProcessSoapRequest(context, stream);
            }
        }

        protected virtual void AddSoapResult(XmlDocument outputDocument, XmlElement requestMethodElement, XmlElement responseMethodElement, string innerText)
        {
            if (outputDocument == null)
                throw new ArgumentNullException(nameof(outputDocument));
            if (requestMethodElement == null)
                throw new ArgumentNullException(nameof(requestMethodElement));
            if (responseMethodElement == null)
                throw new ArgumentNullException(nameof(responseMethodElement));

            var result = outputDocument.CreateElement(requestMethodElement.LocalName + "Result", NamespaceUri);
            responseMethodElement.AppendChild(result);
            result.InnerText = innerText ?? string.Empty;
        }

        protected virtual void ProcessSoapRequest(HttpListenerContext context, Stream outputStream)
        {
            // parse input
            var input = new XmlDocument();
            input.Load(context.Request.InputStream);
            var ns = new XmlNamespaceManager(new NameTable());
            const string soapNsUri = "http://schemas.xmlsoap.org/soap/envelope/";
            ns.AddNamespace("soap", soapNsUri);
            ns.AddNamespace("x", NamespaceUri);

            // prepare output
            var output = new XmlDocument();
            output.LoadXml("<Envelope xmlns='" + soapNsUri + "'><Body/></Envelope>");
            var body = output.SelectSingleNode("//soap:Body", ns);

            // get the method name, select the first node in our custom namespace
            bool handled = false;
            if (input.SelectSingleNode("//x:*", ns) is XmlElement requestElement)
            {
                var responseElement = output.CreateElement(requestElement.LocalName + "Response", NamespaceUri);
                body.AppendChild(responseElement);
                if (HandleSoapMethod(output, requestElement, responseElement))
                {
                    context.Response.ContentType = "application/soap+xml; charset=utf-8";
                    context.Response.StatusCode = (int)HttpStatusCode.OK;
                    var writer = new XmlTextWriter(outputStream, Encoding.UTF8);
                    output.WriteTo(writer);
                    writer.Flush();
                    handled = true;
                }
            }

            if (!handled)
            {
                context.Response.StatusCode = (int)HttpStatusCode.BadRequest;
            }
        }

        public void Stop() => _listener.Stop();
        public virtual void Dispose() => _listener.Close();
    }
}
Personally, I use ServiceStack to create both client and server:
https://docs.servicestack.net/soap-support
Or the SoapHttpClient NuGet package:
https://github.com/pmorelli92/SoapHttpClient
Or my example from way back when:
Is it possible that I can convert simple string to SOAP Message and send it?
The answer depends on which framework or libraries you plan to use.
The simplest modern answer is to declare a simple class that defines the structure of your message, then serialize it and send it with HttpClient.
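To make that concrete, here is a minimal sketch of posting a hand-built SOAP 1.1 envelope with HttpClient; the endpoint, SOAPAction value, and body XML are whatever your service's contract prescribes:
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SoapSender
{
    private static readonly HttpClient _http = new HttpClient();

    // Posts a SOAP 1.1 envelope and returns the raw response XML.
    public static async Task<string> SendAsync(string url, string soapAction, string bodyXml)
    {
        var envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" + bodyXml + "</soap:Body>" +
            "</soap:Envelope>";

        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            // SOAP 1.1 uses a text/xml content type plus a SOAPAction header.
            Content = new StringContent(envelope, Encoding.UTF8, "text/xml")
        };
        request.Headers.Add("SOAPAction", soapAction);

        var response = await _http.SendAsync(request).ConfigureAwait(false);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}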
However, SOAP is a standard built for description-based messaging, so the still-relevant recommendation is to generate your client code from the WSDL description using a "service reference", then use the generated client object.
I would, however, recommend, as others have pointed out, that you try to move to REST services instead (assuming this is possible).
The code is less complex, the system is far simpler to use, and it's a global standard.
Here is a comparison and example of both ...
https://smartbear.com/blog/test-and-monitor/understanding-soap-and-rest-basics/

Same method in Azure Function takes longer

I was curious about how much the response times would be improved when moving an intensive method to an Azure Function.
So I created two small applications. The first is an ASP.NET MVC application (App Service plan S1). The second is an ASP.NET MVC application with one Azure Function (in a Function project), on an App Service plan S1 combined with a Consumption plan for the Function App.
Both applications are the same except for one method, MergeDocumentsAndExport, which is moved to an Azure Function.
In my application a given number of Document records are created on application start.
public class Document
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] Content { get; set; }
    public DateTime DateCreated { get; set; }
}
The end user can download all the Documents in the database by clicking a button in a view. When this button is clicked, the method MergeDocumentsAndExport is called.
Without Azure Functions
public byte[] MergeDocumentsAndExport()
{
    var startmoment = DateTime.Now;
    var baseDocument = new Spire.Doc.Document();
    var documents = _documentDataService.GetAllDocuments();
    foreach (var document in documents)
    {
        using (var memoryStream = new MemoryStream(document.Content))
        {
            var documentToLoad = new Spire.Doc.Document();
            documentToLoad.LoadFromStream(memoryStream, FileFormat.Doc);
            foreach (Section section in documentToLoad.Sections)
            {
                baseDocument.Sections.Add(section.Clone());
            }
        }
    }

    byte[] byteArrayToReturn;
    using (var memoryStream = new MemoryStream())
    {
        baseDocument.SaveToStream(memoryStream, FileFormat.Doc);
        byteArrayToReturn = memoryStream.ToArray();
    }

    _responseLogger.Log(startmoment, DateTime.Now, nameof(MergeDocumentsAndExport));
    return byteArrayToReturn;
}
The _responseLogger.Log(..) method logs the start and end of the method together with the method name and determines the execution time (maybe ResponseLogger isn't the best name for this service).
With Azure Functions
The same method is transformed into an HTTP-triggered Azure Function.
[DependencyInjectionConfig(typeof(DependencyConfig))]
public static class MergeDocumentsAndExportFunction
{
    [FunctionName("MergeDocumentsAndExportFunction")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "MergeDocumentsAndExport")]HttpRequestMessage req,
        TraceWriter log,
        [Inject]IDocumentDataService documentDataService,
        [Inject]IResponseLogger responseLogger)
    {
        var startmoment = DateTime.Now;
        log.Info("MergeDocumentsAndExportFunction processed a request.");
        var baseDocument = new Document();
        var documents = documentDataService.GetAllDocuments();
        foreach (var document in documents)
        {
            using (var memoryStream = new MemoryStream(document.Content))
            {
                var documentToLoad = new Document();
                documentToLoad.LoadFromStream(memoryStream, FileFormat.Doc);
                foreach (Section section in documentToLoad.Sections)
                {
                    baseDocument.Sections.Add(section.Clone());
                }
            }
        }

        using (var memoryStream = new MemoryStream())
        {
            baseDocument.SaveToStream(memoryStream, FileFormat.Doc);
            // Create response to send document as byte[] back.
            var response = req.CreateResponse(HttpStatusCode.OK);
            var buffer = memoryStream.ToArray();
            var contentLength = buffer.Length;
            response.Content = new StreamContent(new MemoryStream(buffer));
            response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/ms-word");
            response.Content.Headers.ContentLength = contentLength;
            responseLogger.Log(startmoment, DateTime.Now, "MergeDocumentsAndExport");
            return response;
        }
    }
}
In the MergeDocumentsAndExportFunction a bit of code is added, like creating and returning an HttpResponseMessage. I didn't implement async calls, because this was only a minor test and I wanted to compare the synchronous execution time of the MergeDocumentsAndExport method in both environments.
The results I got were not what I was expecting. In almost every case the execution time was about the same or much longer for the Azure Function. I know there is a startup time for an Azure Function, and I excluded those runs. Maybe the dependency injection with Autofac takes some time? The outcome in seconds when 1000 Documents from the database are merged and exported:
Without Azure Function:
49.34
50.21
51.26
49.00
50.21
50.68
...and so on...
Average: 49.69 seconds.
With Azure Function:
133.64 (startup, excluded)
77.68
85.18
66.46
86.00
65.17
...and so on...
Average: 82.69 seconds.
The execution time of the same method in different environments can differ by 30 seconds when merging 1000 documents into one. How is it possible that the Azure Function takes ~30 seconds longer to execute? Am I using it wrong? Thanks.
Azure Functions on a Consumption plan aren't very suitable for improving the response time of long-running, CPU-intensive workloads.
Based on my observation, the instances have quite moderate performance, so the request duration probably won't go down compared to a fixed-plan App Service.
The strength of Functions is in short-lived executions with small or variable throughput.
If you still want to compare, you could deploy the same Function App on a fixed App Service plan, measure the timings there, and then choose what suits you best.
If an Azure Function on a Consumption plan is idle, it releases its reserved compute back to the host. When a new request comes in, it takes some time for the function to start up and request new compute.
You can set a Function to use an App Service plan with the "always on" setting instead of a Consumption plan, and it should start faster.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale

Azure resource management API not showing virtual machine state?

So I've been poking around at read-only API access into Azure with the Resource Management API. Right now I'm focusing on virtual machines. I've been using this pre-release package with TokenCredentials:
https://www.nuget.org/packages/Microsoft.Azure.Management.Compute/13.0.1-prerelease
I get a bunch of rich info about my VMs, but I'm missing a pretty critical piece of data: whether the VM is on or off. I've found a couple of metadata properties like InstanceView and Plan to be null when I expected them to be populated. It may be because of how I launched my VMs, or it may be an unfinished or buggy new package; I can't tell. I was thinking InstanceView's statuses would tell me what state the VM is in.
https://msdn.microsoft.com/en-us/library/microsoft.azure.management.compute.models.virtualmachineinstanceview.aspx
So I suppose I have to look elsewhere. I did find this older Stackoverflow question that may be what I'm looking for:
azure management libraries virtual machine state
However, I'm not sure which DLL this GetAzureDeployment is part of, or whether it's even TokenCredential compatible. Anyone know what's up?
You can use the following C# code to get the power status of your VM.
using System;
using System.Security;
using Microsoft.Azure.Management.Compute;
using Microsoft.Azure.Management.Compute.Models;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Rest;

namespace GetVmARM
{
    class Program
    {
        private static String tenantID = "<your tenant id>";
        private static String loginEndpoint = "https://login.windows.net/";
        private static Uri redirectURI = new Uri("urn:ietf:wg:oauth:2.0:oob");
        private static String clientID = "1950a258-227b-4e31-a9cf-717495945fc2";
        private static String subscriptionID = "<your subscription id>";
        private static String resource = "https://management.core.windows.net/";

        static void Main(string[] args)
        {
            var token = GetTokenCloudCredentials();
            var credential = new TokenCredentials(token);
            var computeManagementClient = new ComputeManagementClient(credential);
            computeManagementClient.SubscriptionId = subscriptionID;
            InstanceViewTypes expand = new InstanceViewTypes();
            var vm = computeManagementClient.VirtualMachines.Get("<the resource group name>", "<the VM>", expand);
            System.Console.WriteLine(vm.InstanceView.Statuses[1].Code);
            System.Console.WriteLine("Press ENTER to continue");
            System.Console.ReadLine();
        }

        public static String GetTokenCloudCredentials(string username = null, SecureString password = null)
        {
            String authString = loginEndpoint + tenantID;
            AuthenticationContext authenticationContext = new AuthenticationContext(authString, false);
            var promptBehaviour = PromptBehavior.Auto;
            var userIdentifierType = UserIdentifierType.RequiredDisplayableId;
            var userIdentifier = new UserIdentifier("<your azure account>", userIdentifierType);
            var authenticationResult = authenticationContext.AcquireToken(resource, clientID, redirectURI, promptBehaviour, userIdentifier);
            return authenticationResult.AccessToken;
        }
    }
}
As you can see in this piece of code, I am using InstanceViewTypes, which is not yet covered in the documentation; it is new in the 13.0.1 pre-release version. But yes, if you add this to your computeManagementClient.VirtualMachines.Get call, you will be able to get extra information about your VM.
Furthermore, I am using vm.InstanceView.Statuses[1] because vm.InstanceView.Statuses[0] is the ProvisioningState. I am not sure the order is always like this, so you may need to loop through the whole status list.
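If you would rather not rely on the index, a small sketch of scanning the list for the power state entry (power state codes use the "PowerState/" prefix, e.g. "PowerState/running" or "PowerState/deallocated"):
using System.Linq;

// Find the status entry whose code starts with "PowerState/" instead of
// assuming it is always at index 1.
var powerStatus = vm.InstanceView.Statuses
    .FirstOrDefault(s => s.Code != null && s.Code.StartsWith("PowerState/"));
System.Console.WriteLine(powerStatus != null ? powerStatus.Code : "power state not reported");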

Store the cache data locally

I am developing a C# WinForms application. It is a client that connects to a web service to get data; the data returned by the web service is a DataTable, which the client displays in a DataGridView.
My problem is that the client takes a long time to get all the data from the server (the web service is not local to the client), so I use a thread to get the data. This is my model:
Client creates a thread to get data -> thread completes and sends an event to the client -> client displays the data in a DataGridView on a form.
However, when the user closes the form and opens it again later, the client must get all the data again, which makes the client slow.
So, I am thinking about cached data:
Client <---get/add/edit/delete---> Cached Data ---get/add/edit/delete---> Server (web service)
Please give me some suggestions.
For example: should the cache live in a separate application on the same host as the client, or run inside the client itself?
Please suggest some techniques to implement this solution.
If you have any examples, please share them.
Thanks.
UPDATE: Hello everyone, maybe I made my problem sound bigger than it is. I only want to cache data for the client's lifetime. I think the cached data should be stored in memory, and when the client wants data, it should check the cache first.
If you're using C# 2.0 and you're prepared to ship System.Web as a dependency, then you can use the ASP.NET cache:
using System.Web;
using System.Web.Caching;

Cache webCache;
object cachedObject;
object webServiceResult;

webCache = HttpContext.Current.Cache;

// See if there's a cached item already
cachedObject = webCache.Get("MyCacheItem");
if (cachedObject == null)
{
    // If there's nothing in the cache, call the web service to get a new item
    webServiceResult = new Object();

    // Cache the web service result for five minutes
    webCache.Add("MyCacheItem", webServiceResult, null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null);
}
else
{
    // Item already in the cache - cast it to the right type
    webServiceResult = (object)cachedObject;
}
If you're not prepared to ship System.Web, then you might want to take a look at the Enterprise Library Caching block.
If you're on .NET 4.0, however, caching has been pushed into the System.Runtime.Caching namespace. To use this, you'll need to add a reference to System.Runtime.Caching, and then your code will look something like this:
using System.Runtime.Caching;

MemoryCache cache;
object cachedObject;
object webServiceResult;

cache = new MemoryCache("StackOverflow");
cachedObject = cache.Get("MyCacheItem");
if (cachedObject == null)
{
    // Call the web service
    webServiceResult = new Object();
    cache.Add("MyCacheItem", webServiceResult, DateTime.Now.AddMinutes(5));
}
else
{
    webServiceResult = (object)cachedObject;
}
All these caches run in-process with the client. Because your data is coming from a web service, as Adam says, you're going to have difficulty determining the freshness of the data; you'll have to make a judgement call on how often the data changes and how long to cache it.
Do you have the ability to make changes or additions to the web service?
If you can, Sync Services may be an option for you. You can define which tables are synchronized, and all the sync plumbing is managed for you.
Check out
http://msdn.microsoft.com/en-us/sync/default.aspx
and shout if you need more information.
You might try the Enterprise Library's Caching Application Block. It's easy to use and stores data in memory; if you ever need it later, it supports adding a backup location for persisting beyond the life of the application (such as to a database, isolated storage, a file, etc.), and even encryption.
Use EntLib 3.1 if you're stuck with .NET 2.0. There's not much new (for caching, at least) in the newer EntLibs aside from better customization support.
Identify which objects you would like to serialize, and cache them to isolated storage. Specify the level of data isolation you would like (application level, user level, etc.).
Example:
You could create a generic serializer; a very basic sample would look like this:
public class SampleDataSerializer
{
    public static void Deserialize<T>(out T data, Stream stm)
    {
        var xs = new XmlSerializer(typeof(T));
        data = (T)xs.Deserialize(stm);
    }

    public static void Serialize<T>(T data, Stream stm)
    {
        try
        {
            var xs = new XmlSerializer(typeof(T));
            xs.Serialize(stm, data);
        }
        catch (Exception e)
        {
            throw;
        }
    }
}
Note that you should probably add some overloads to the Serialize and Deserialize methods to accommodate readers, or any other types you are actually using in your app (e.g., XmlDocuments, etc.).
The operation of saving to IsolatedStorage can be handled by a utility class (example below):
public class SampleIsolatedStorageManager : IDisposable
{
    private string filename;
    private string directoryname;
    IsolatedStorageFile isf;

    public SampleIsolatedStorageManager()
    {
        filename = string.Empty;
        directoryname = string.Empty;

        // create an ISF scoped to domain user...
        isf = IsolatedStorageFile.GetStore(IsolatedStorageScope.User |
            IsolatedStorageScope.Assembly | IsolatedStorageScope.Domain,
            typeof(System.Security.Policy.Url), typeof(System.Security.Policy.Url));
    }

    public void Save<T>(T parm)
    {
        using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Create))
        {
            SampleDataSerializer.Serialize<T>(parm, stm);
        }
    }

    public T Restore<T>() where T : new()
    {
        try
        {
            if (GetFileNameByType<T>().Length > 0)
            {
                T result = new T();
                using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Open))
                {
                    SampleDataSerializer.Deserialize<T>(out result, stm);
                }
                return result;
            }
            else
            {
                return default(T);
            }
        }
        catch
        {
            try
            {
                Clear<T>();
            }
            catch
            {
            }
            return default(T);
        }
    }

    public void Clear<T>()
    {
        if (isf.GetFileNames(GetFileNameByType<T>()).Length > 0)
        {
            isf.DeleteFile(GetFileNameByType<T>());
        }
    }

    private string GetFileNameByType<T>()
    {
        return typeof(T).Name + ".cache";
    }

    private IsolatedStorageFileStream GetStreamByStoredType<T>(FileMode mode)
    {
        var stm = new IsolatedStorageFileStream(GetFileNameByType<T>(), mode, isf);
        return stm;
    }

    #region IDisposable Members
    public void Dispose()
    {
        isf.Close();
    }
    #endregion
}
Finally, remember to add the following using directives:
using System.IO;
using System.IO.IsolatedStorage;
using System.Xml.Serialization;
The actual code to use the classes above could look like this:
var myClass = new MyClass();
myClass.name = "something";

using (var mgr = new SampleIsolatedStorageManager())
{
    mgr.Save<MyClass>(myClass);
}
This will save the specified instance to isolated storage. To retrieve the instance, simply call:
using (var mgr = new SampleIsolatedStorageManager())
{
    mgr.Restore<MyClass>();
}
Note: the sample I've provided only supports one serialized instance per type. I'm not sure if you need more than that. Make whatever modifications you need to support further functionality.
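If you do need several cached instances of the same type, one possible (untested) tweak is to key the file name; the key parameter below is hypothetical:
// Hypothetical variant: a caller-supplied key distinguishes multiple cached
// instances of the same type.
private string GetFileNameByType<T>(string key)
{
    return typeof(T).Name + "." + key + ".cache";
}

public void Save<T>(T parm, string key)
{
    using (var stm = new IsolatedStorageFileStream(GetFileNameByType<T>(key), FileMode.Create, isf))
    {
        SampleDataSerializer.Serialize<T>(parm, stm);
    }
}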
HTH!
You can serialize the DataTable to a file:
http://forums.asp.net/t/1441971.aspx
Your only concern then is deciding when the cache has gone stale. Perhaps timestamp the file?
In our implementation, every row in the database has a last-updated timestamp. Every time our client application accesses a table, we select the latest last-updated timestamp from the cache and send that value to the server. The server responds with all the rows that have newer timestamps.
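On the server side, the query behind that might look like this sketch (the table and column names are illustrative, not from our actual implementation):
using System;
using System.Data;
using System.Data.SqlClient;

public static class DeltaSync
{
    // Returns only the rows changed since the given watermark; the caller
    // merges them into its local cache and advances the watermark.
    public static DataTable FetchChangedRows(SqlConnection conn, DateTime since)
    {
        using (var cmd = new SqlCommand(
            "SELECT * FROM Documents WHERE LastUpdated > @since", conn))
        {
            cmd.Parameters.AddWithValue("@since", since);
            var table = new DataTable();
            using (var adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(table);
            }
            return table;
        }
    }
}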
