How can I use a Google Speech-to-Text API key in a deployed WPF app? - C#

I have created a C# WPF app that uses the Google Speech-to-Text API. Currently, on my development machine, I have added a Windows environment variable that references the JSON key file Google gave me.
Will I need to create that environment variable on every machine I deploy my app to, or is there a way to store the JSON key on a server and reference it from there?

Possibly the documentation on Passing Credentials Using Code helps you.
This is the code copied from there:
// Some APIs, like Storage, accept a credential in their Create()
// method.
public object AuthExplicit(string projectId, string jsonPath)
{
    // Explicitly use service account credentials by specifying
    // the private key file.
    var credential = GoogleCredential.FromFile(jsonPath);
    var storage = StorageClient.Create(credential);
    // Make an authenticated API request.
    var buckets = storage.ListBuckets(projectId);
    foreach (var bucket in buckets)
    {
        Console.WriteLine(bucket.Name);
    }
    return null;
}
// Other APIs, like Language, accept a channel in their Create()
// method.
public object AuthExplicit(string projectId, string jsonPath)
{
    LanguageServiceClientBuilder builder = new LanguageServiceClientBuilder
    {
        CredentialsPath = jsonPath
    };
    LanguageServiceClient client = builder.Build();
    AnalyzeSentiment(client);
    return 0;
}
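For the Speech-to-Text API the question actually asks about, the same builder pattern should work. A minimal sketch, assuming the Google.Cloud.Speech.V1 package (SpeechClientBuilder exposes the same CredentialsPath property as the builders above):

using Google.Cloud.Speech.V1;

public SpeechClient CreateSpeechClient(string jsonPath)
{
    // No GOOGLE_APPLICATION_CREDENTIALS environment variable is needed:
    // the key file path is supplied in code. If the key is stored on a
    // server instead, its JSON contents can be assigned to the builder's
    // JsonCredentials property rather than CredentialsPath.
    SpeechClientBuilder builder = new SpeechClientBuilder
    {
        CredentialsPath = jsonPath
    };
    return builder.Build();
}

Keep in mind that any key file shipped with (or downloaded by) the app is readable by end users, so keep the service account's roles as narrow as possible.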

Related

Need to Access Google Drive V3 via IIS application - Without Using MVC

My need is very specific. I need to access a directory on Google Drive that is a shared folder. The only thing in it will be empty form documents and spreadsheet templates. There is nothing of value in the folder, and it is used internally only. So, I can be very optimistic WRT security concerns. I just need access.
I am extending an existing ERP system that runs as an IIS application.
My customization is a .NET/C# project that extends the ERP's .NET classes. I cannot implement a login/auth system because one already exists for the ERP.
I did the .NET quickstart, but of course that is a console app and will not work when I move it to IIS. The suggestion to follow the standard MVC model doesn't work for me -- adding a second web site/page is needlessly complicated for my needs.
My question is: how can I authorize access to a Google Drive in a way that
A) runs within IIS, and
B) does not require a separate ASP.NET Web Application to implement MVC for authorization.
=============================
Similar to issues in:
Google API Fails after hosting it to IIS
you could use OAuth authorization with your ASP.NET application:
1. Create a Web Server client_secret.json.
2. Using GetAuthorizationUrl(), create the URL used to obtain a temporary token.
3. Redirect to GoogleCallback() and get the refresh and access tokens using ExchangeAuthorizationCode().
4. Save them to the file "~/Resources/driveApiCredentials/drive-credentials.json/Google.Apis.Auth.OAuth2.Responses.TokenResponse-{account}".
5. Use these saved tokens.
You could refer to the link below for more detail:
https://developers.google.com/api-client-library/dotnet/guide/aaa_oauth#web-applications-aspnet-mvc
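Note that GetAuthorizationUrl(), GoogleCallback(), and ExchangeAuthorizationCode() are wrapper methods from the linked guide, not library calls. A rough sketch of the same exchange using the underlying Google.Apis.Auth types; the client values, user id, code, and redirect URI are placeholders:

using Google.Apis.Auth.OAuth2;
using Google.Apis.Auth.OAuth2.Flows;
using Google.Apis.Drive.v3;
using System.Threading;

var flow = new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer
{
    ClientSecrets = new ClientSecrets { ClientId = "...", ClientSecret = "..." },
    Scopes = new[] { DriveService.Scope.Drive }
});
string redirectUri = "https://example.com/GoogleCallback";
// Step 1: build the consent URL and send the user's browser there.
var authUrl = flow.CreateAuthorizationCodeRequest(redirectUri).Build();
// Step 2: in the callback, exchange the returned code for tokens.
string code = "4/..."; // the value Google appends as ?code= on the callback URL
var token = await flow.ExchangeCodeForTokenAsync("user-id", code, redirectUri, CancellationToken.None);
// token.RefreshToken is the long-lived piece worth persisting.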
Google Drive API upload Fails after hosting it to IIS. showing the error as Failed to launch the Browser with
Google Drive API not uploading file from IIS
Google Data API Authorization Redirect URI Mismatch
Jalpa's answer was not what I was looking for, nor was anything referenced in any of the links.
I'm going to put my answer here, because it is what I needed, and it might be useful to others.
First the overview
Google's QuickStart for .NET only shows the console-based solution. As many have discovered, this does not work when you switch to an IIS-based solution. It is almost as if the API deliberately defeats your attempts to do so. It simply will not allow you to use a token created for a local application using GoogleWebAuthorizationBroker.AuthorizeAsync -- it will error even if a browser isn't needed (i.e., the token hasn't expired, so it won't need the browser to authenticate anything).
Trying to run a refresh authorization gives you a token, but not a service. And even if the token is valid, you still can't use AuthorizeAsync to get your service from an IIS application (see above).
This is how I handle this:
Do the quickstart and run the authorization that pops up the local browser and allows you to log in and authenticate.
It creates a local folder (token.json), where it puts a token file (Google.Apis.Auth.OAuth2.Responses.TokenResponse-user). It's just a JSON file. Open it in Notepad++ and you will find the fields:
"access_token", "token_type", "expires_in", "refresh_token", "scope", "Issued", "IssuedUtc"
You need the refresh_token. I simply combined that with the initial credentials file I downloaded from the Google API Console (i.e. "credentials.json") and named the result "skeleton_key.json".
This file is all you will need to generate valid tokens forever.
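The merged file is not a standard Google format; it is just the downloaded credentials.json with the refresh_token pasted in. Since the parsing code below reads client_id, client_secret, and refresh_token from the "installed" object, a hypothetical skeleton_key.json would look roughly like this (all values invented):

{
  "installed": {
    "client_id": "1234567890-abc123.apps.googleusercontent.com",
    "client_secret": "YOUR-CLIENT-SECRET",
    "redirect_uris": [ "http://localhost" ],
    "refresh_token": "1//04-your-refresh-token"
  }
}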
I have 2 classes I use for this. First the class that creates the Drive Service:
public class GDriveClass
{
    public String LastErrorMessage { get; set; }
    static string[] Scopes = { DriveService.Scope.Drive }; // could pull this from skeleton_key
    static string ApplicationName = "GDrive Access";       // this is functionally irrelevant
    internal UserCredential usrCredentials;
    internal Google.Apis.Drive.v3.DriveService CurrentGDriveService = null;
    internal String basePath = "."; // this comes in from the calling program,
                                    // which uses HttpContext.Current.Server.MapPath("~");

    public GDriveClass(string logFileBasePath)
    {
        basePath = logFileBasePath;
        LastErrorMessage = "";
    }

    #region Google Drive Authenticate Code
    public bool AuthenticateUser(string FullTokenAccessFileName)
    {
        String JsonCredentialsonFile = System.IO.File.ReadAllText(FullTokenAccessFileName);
        // Force a refresh of the token
        RefreshTokenClass RTC = new RefreshTokenClass();
        // Set field values in RefreshTokenClass from the "installed" section of skeleton_key.json:
        var jObject = Newtonsoft.Json.Linq.JObject.Parse(JsonCredentialsonFile);
        var fieldStrings = jObject.GetValue("installed").ToString();
        var fields = Newtonsoft.Json.Linq.JObject.Parse(fieldStrings);
        RTC.client_id = fields.GetValue("client_id").ToString();
        RTC.client_secret = fields.GetValue("client_secret").ToString();
        RTC.refresh_token = fields.GetValue("refresh_token").ToString();
        RTC.ExecuteRefresh(); // this gets us a valid token every time
        try
        {
            GoogleCredential gCredentials = GoogleCredential.FromAccessToken(RTC.access_token);
            CurrentGDriveService = new DriveService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = gCredentials,
                ApplicationName = ApplicationName,
            });
            return true;
        }
        catch (Exception ex)
        {
            LastErrorMessage = "Error: Authenticating - " + ex.Message;
            return false;
        }
    }
    #endregion
}
Usage is pretty straightforward:
var GDRIVER = new GDriveClass(basePath); // an instance of the class above
string TokenFile = basePath + @"\skeleton_key.json";
GDRIVER.AuthenticateUser(TokenFile);
var rslt = GDRIVER.LastErrorMessage;
if (!String.IsNullOrEmpty(rslt))
{
    WriteToLogFile("ERROR in Google AuthenticateUser() ");
    AlertMessage("Unable To Connect to Google Drive - Authorization Failed");
    return;
}
And this is the class that refreshes the token via the REST API as needed:
public class RefreshTokenClass
{
    public string application_name { get; set; }
    public string token_source { get; set; }
    public string client_id { get; set; }
    public string client_secret { get; set; }
    public string scope { get; set; }
    public string access_token { get; set; }
    public string refresh_token { get; set; }

    public RefreshTokenClass()
    {
    }

    public bool ExecuteRefresh()
    {
        try
        {
            RestClient restClient = new RestClient();
            RestRequest request = new RestRequest();
            request.AddQueryParameter("client_id", this.client_id);
            request.AddQueryParameter("client_secret", this.client_secret);
            request.AddQueryParameter("grant_type", "refresh_token");
            request.AddQueryParameter("refresh_token", this.refresh_token);
            restClient.BaseUrl = new System.Uri("https://oauth2.googleapis.com/token");
            var restResponse = restClient.Post(request);
            // Parse the JSON response and extract the new access token.
            // (Don't lowercase the response body: the token value is case-sensitive.)
            var jObject = Newtonsoft.Json.Linq.JObject.Parse(restResponse.Content);
            this.access_token = jObject.GetValue("access_token").ToString();
            return true;
        }
        catch (Exception ex)
        {
            //Console.WriteLine("Error on Token Refresh" + ex.Message);
            return false;
        }
    }
}
Note: this makes use of Newtonsoft.Json and RestSharp.
Thanks to user "OL.", who gave me the way of creating a service from a token (that somehow I missed in the docs!):
How to create Service from Access Token
And to user "purshotam sah" for a clean REST API approach:
Generate Access Token Using Refresh Token

Authentication in Dialogflow API V2 using C#

I have a .NET Web API project for the fulfillment API as our webhook in my Dialogflow agent. In our Post method of the controller, after getting the request from Dialogflow, I implement the explicit authentication as shown in the Google Cloud documentation for C#.
// jsonFileName is the name of the service account key JSON generated from the
// Google Cloud Platform that's encrypted internally
public bool AuthExplicit(string projectId, string jsonFileName)
{
    try
    {
        string JsonCredential = DecryptHelper.Decrypt(jsonFileName);
        var credential = GoogleCredential.FromJson(JsonCredential).CreateScoped(LanguageServiceClient.DefaultScopes);
        var channel = new Grpc.Core.Channel(
            LanguageServiceClient.DefaultEndpoint.ToString(),
            credential.ToChannelCredentials());
        var client = LanguageServiceClient.Create(channel);
        AnalyzeSentiment(client);
        if (client != null)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
    catch (Exception)
    {
        // assumed: the try block needs a catch to compile; treat any failure as unauthenticated
        return false;
    }
}
internal void AnalyzeSentiment(LanguageServiceClient client)
{
    var response = client.AnalyzeSentiment(new Document()
    {
        Content = "Authenticated.",
        Type = Document.Types.Type.PlainText
    });
    var sentiment = response.DocumentSentiment;
    string score = $"Score: {sentiment.Score}";
    string magnitude = $"Magnitude: {sentiment.Magnitude}";
}
The difference with the code is that after getting the client, when we call the AnalyzeSentiment() method, it doesn't do anything, and the projectId parameter is never used to authenticate. The GCP docs are quite confusing, since the AuthExplicit() that does use projectId only uses it as a parameter for listing buckets and printing them to the console.
It works fine until we test the service account key with a different agent. The expected output is that authentication would fail, but somehow it still passes.
Once the Post method goes through the AuthExplicit() method, it only returns a boolean. Is this the right way to authenticate? Or is there something else needed to invoke?
The difference with the code is that after getting the client, when we call the AnalyzeSentiment() method, it doesn't do anything,
Does client.AnalyzeSentiment() return an empty response? Does the call hang forever?
It works fine, until we test the service account key with a different agent.
What is a different agent? A different User-Agent header?
Once the Post method goes through the AuthExplicit() method, it would only return a boolean. Is this the right way to authenticate? Or is there something else needed to invoke?
What does 'the Post method' refer to? What is the 'it' that would only return a boolean?
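A likely reason the check passes with a different agent's key: AnalyzeSentiment() calls the Cloud Natural Language API, which never looks at the Dialogflow agent or at the projectId parameter, so any valid service account key (from any project with that API enabled) will succeed. A project-scoped call fails fast for a key from the wrong project. A sketch, assuming the Google.Cloud.Dialogflow.V2 package; jsonPath and the session id are placeholders:

using Google.Cloud.Dialogflow.V2;
using Grpc.Core;

public bool KeyBelongsToProject(string projectId, string jsonPath)
{
    var client = new SessionsClientBuilder { CredentialsPath = jsonPath }.Build();
    var session = SessionName.FromProjectSession(projectId, "auth-check");
    var query = new QueryInput { Text = new TextInput { Text = "ping", LanguageCode = "en" } };
    try
    {
        client.DetectIntent(session, query); // scoped to the agent's project
        return true;
    }
    catch (RpcException ex) when (ex.StatusCode == StatusCode.PermissionDenied)
    {
        return false; // the key is valid, but belongs to a different project
    }
}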

azure deploy multiple site's instance on demand

I have a website called www.Request.com; when users access this site they will be able to request the creation of a new instance of another website that is already deployed in Azure with the name www.MyTechnicalApp.com.
For example, when I access www.Request.com I can request the creation of MyTechnicalApp for my company called "MyCompany"; the idea is that a script executed by the Request.com website automatically creates the www.MyCompany.MyTechnicalApp.com website.
Would you please let me know how I could do that?
According to your description, there are two ways to create a web app on Azure automatically.
One: use the "Windows Azure Management Libraries"; this SDK is a wrapper around the "Azure Service Management" API.
First, we need to authenticate with the ASM API (see: Windows Azure Management Librairies : Introduction et authentification); then we will be able to create a website with something like this:
using (var AwsManagement = new Microsoft.WindowsAzure.Management.WebSites.WebSiteManagementClient(azureCredentials))
{
    WebSiteCreateParameters parameters = new WebSiteCreateParameters()
    {
        Name = "myAws",
        // this Service Plan must be created before
        ServerFarm = "myServiceplan",
    };
    await AwsManagement.WebSites.CreateAsync("myWebSpace", parameters, CancellationToken.None);
}
Two: we can create a web site by using a POST request that includes the name of the web site and other information in the request body. We can check the code example for azure-sdk-for-net.
Use this link to get the credentials (Authentication in Azure Management Libraries for Java):
https://github.com/Azure/azure-libraries-for-java/blob/master/AUTH.md
The link below helped me find the answer.
static void Main(string[] args)
{
    try
    {
        var resourceGroupName = "your resource group name";
        var subId = "64da6c..-.......................88d";
        var appId = "eafeb071-3a70-40f6-9e7c-fb96a6c4eabc";
        var appSecret = "YNlNU...........................=";
        var tenantId = "c5935337-......................19";
        var environment = AzureEnvironment.AzureGlobalCloud;
        var credentials = SdkContext.AzureCredentialsFactory.FromServicePrincipal(appId, appSecret, tenantId, environment);
        var azure = Microsoft.Azure.Management.Fluent.Azure
            .Configure()
            .Authenticate(credentials)
            .WithSubscription(subId);
        azure.AppServices.WebApps.Inner.CreateOrUpdateHostNameBindingWithHttpMessagesAsync(resourceGroupName, "WebSiteName", "SubDomainName",
            new HostNameBindingInner(
                azureResourceType: AzureResourceType.Website,
                hostNameType: HostNameType.Verified,
                customHostNameDnsRecordType: CustomHostNameDnsRecordType.CName)).Wait();
    }
    catch (Exception ex)
    {
    }
}
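Note that the snippet above only binds a subdomain to an existing web app. To create the new MyTechnicalApp instance itself, the same fluent SDK has a Define chain; a hedged sketch (app name, region, and pricing tier are assumptions):

// Create the App Service app that the hostname binding above can then target.
// "azure" and "resourceGroupName" are the variables from the snippet above.
var webApp = azure.WebApps
    .Define("MyCompany-MyTechnicalApp")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup(resourceGroupName)
    .WithNewWindowsPlan(PricingTier.BasicB1)
    .Create();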

Using tuespechkin with MVC project in Azure

I can't manage to get Pechkin or TuesPechkin to work on my Azure site.
Whenever I try to access the site it just hangs with no error message (even with customErrors off). Is there any further setup I'm missing? Everything works perfectly locally.
For a 64-bit app I'm completing the following steps:
Create a new empty MVC app with Azure, making sure "Host in the cloud" is selected
Change the app to 64-bit
Log onto the Azure portal, upgrade the app to basic hosting, and change it to 64-bit
Install the TuesPechkin.Wkhtmltox.Win64 and TuesPechkin NuGet packages
Add a singleton class to return the IConverter
public class TuesPechkinConverter
{
    private static IConverter converter;

    public static IConverter Converter
    {
        get
        {
            if (converter == null)
            {
                converter =
                    new ThreadSafeConverter(
                        new PdfToolset(
                            new Win64EmbeddedDeployment(
                                new TempFolderDeployment())));
            }
            return converter;
        }
    }
}
Add a Home controller with the following code in the Index Action:
var document = new HtmlToPdfDocument
{
    GlobalSettings =
    {
        ProduceOutline = true,
        DocumentTitle = "Pretty Websites",
        PaperSize = PaperKind.A4, // Implicit conversion to PechkinPaperSize
        Margins =
        {
            All = 1.375,
            Unit = Unit.Centimeters
        }
    },
    Objects =
    {
        new ObjectSettings { HtmlText = "<h1>Pretty Websites</h1><p>This might take a bit to convert!</p>" },
        new ObjectSettings { PageUrl = "www.google.com" }
    }
};
byte[] pdfBuf = TuesPechkinConverter.Converter.Convert(document);
return File(pdfBuf, "application/pdf", "DownloadName.pdf");
As far as I know, you can't make it work in a web app. However, there is a way you can do it: you have to create a cloud service and add a worker role to it. TuesPechkin will be installed in this worker role.
The workflow would be the following: from your cloud web app, you access the worker role (this is possible by configuring the worker role to host ASP.NET Web API 2). The worker role configures a converter using TuesPechkin and generates the PDF. We wrap the PDF in the Web API response and send it back. Now, let's do it...
To add a cloud service (assuming you have the Azure SDK installed), go to Visual Studio -> right-click your solution -> Add new project -> select the Cloud node -> Azure Cloud Service -> after you click OK, select Worker Role and click OK.
Your cloud service and your worker role are created. The next thing to do is configure your Worker Role so it can host ASP.NET Web API 2.
This configuration is pretty straightforward; just follow this tutorial.
After you have configured your Worker Role to host a Web API, you will have to install the TuesPechkin.Wkhtmltox.Win64 and TuesPechkin NuGet packages.
Your configuration should now be ready. Now create a controller in which we will generate the PDF: add a new class to your Worker Role which extends ApiController:
public class PdfController : ApiController
{
}
Add an action to our controller, which will return an HttpResponseMessage object.
[HttpPost]
public HttpResponseMessage GeneratePDF(PdfViewModel viewModel)
{
}
Here we will configure the ObjectSettings and GlobalSettings objects which will be applied to an HtmlToPdfDocument object.
You now have two options:
You can generate the PDF from HTML text (maybe you sent the HTML of your page in the request) or directly by page URL.
var document = new HtmlToPdfDocument
{
    GlobalSettings =
    {
        ProduceOutline = true,
        DocumentTitle = "Pretty Websites",
        PaperSize = PaperKind.A4, // Implicit conversion to PechkinPaperSize
        Margins =
        {
            All = 1.375,
            Unit = Unit.Centimeters
        }
    },
    Objects =
    {
        new ObjectSettings { HtmlText = "<h1>Pretty Websites</h1><p>This might take a bit to convert!</p>" },
        new ObjectSettings { PageUrl = "www.google.com" }
    }
};
A nice thing to note is that when using a page URL, you can use the ObjectSettings object to post parameters:
var obj = new ObjectSettings();
obj.LoadSettings.PostItems.Add
(
    new PostItem()
    {
        Name = "paramName",
        Value = paramValue
    }
);
Also, per the TuesPechkin documentation, the converter should be thread-safe and should be kept somewhere static, or as a singleton instance:
IConverter converter =
    new ThreadSafeConverter(
        new RemotingToolset<PdfToolset>(
            new Win64EmbeddedDeployment(
                new TempFolderDeployment())));
Finally, you wrap the PDF in the response content, set the response content type to application/pdf, add the content-disposition header, and that's it:
byte[] result = converter.Convert(document);
MemoryStream ms = new MemoryStream(result);
// the response object must be created first (implicit in the original answer):
var response = new HttpResponseMessage();
response.StatusCode = HttpStatusCode.OK;
response.Content = new StreamContent(ms);
response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/pdf");
response.Content.Headers.Add("content-disposition", "attachment;filename=myFile.pdf");
return response;
I'm afraid the answer is that it's not possible to get wkhtmltopdf working on Azure.
See this thread.
I am assuming you mean running wkhtmltopdf on Windows Azure Websites.
wkhtmltopdf uses Windows' GDI APIs, which currently don't work on Azure Websites.
Tuespechkin supported usage
It supports .NET 2.0+, 32- and 64-bit processes, and IIS-hosted applications.
Azure Websites does not currently support the use of wkhtmltopdf.
Workaround
I ended up creating an Azure Cloud Service that runs wkhtmltopdf.exe. I send the HTML to the service and get a byte[] in return.
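On the web-app side, that workaround is just an HTTP POST to the service. A minimal sketch; the service URL, route, and JSON payload shape are assumptions about your own service, not a published API:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static async Task<byte[]> RenderPdfAsync(string html)
{
    using (var client = new HttpClient())
    {
        // hypothetical endpoint exposed by the cloud service / worker role
        var payload = new StringContent(
            "{\"html\":" + Newtonsoft.Json.JsonConvert.ToString(html) + "}",
            Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(
            "http://mypdfservice.cloudapp.net/api/pdf/generate", payload);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync(); // the rendered PDF bytes
    }
}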

How to configure CORS setting for Blob storage in windows azure

I have created several containers in an Azure storage account and also uploaded some files into these containers. Now I need to give domain-level access to the containers/blobs. So I tried it from code, like below.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
ServiceProperties blobServiceProperties = new ServiceProperties();
blobServiceProperties.Cors.CorsRules.Add(new CorsRule()
{
    AllowedHeaders = new List<string>() { "*" },
    ExposedHeaders = new List<string>() { "*" },
    AllowedMethods = CorsHttpMethods.Post | CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Delete,
    AllowedOrigins = new List<string>() { "http://localhost:8080/" },
    MaxAgeInSeconds = 3600,
});
blobClient.SetServiceProperties(blobServiceProperties);
But the above code seems to work only if I am creating everything from code (correct me if I am wrong). I also found a setting like the one below Here:
<CorsRule>
    <AllowedOrigins>http://www.contoso.com, http://www.fabrikam.com</AllowedOrigins>
    <AllowedMethods>PUT,GET</AllowedMethods>
    <AllowedHeaders>x-ms-meta-data*,x-ms-meta-target,x-ms-meta-source</AllowedHeaders>
    <ExposedHeaders>x-ms-meta-*</ExposedHeaders>
    <MaxAgeInSeconds>200</MaxAgeInSeconds>
</CorsRule>
But I didn't get where this code has to be put, I mean in which file. Or is there any setting for CORS while creating a container or blob from the Azure portal? Please assist. Any help would be appreciated. Thanks!
The following answers the question that was actually asked in the title. It appears the questioner already knew how to do this largely from his code, but here is my answer to this. Unfortunately the code samples MS has put out have been far from easy or clear, so I hope this helps someone else. For this solution all you need is a CloudStorageAccount instance, from which you can then call this function (as an extension method).
// USAGE:
// -- example usage (in this case adding a wildcard CORS rule to this account) --
CloudStorageAccount acc = getYourStorageAccount();
acc.SetCORSPropertiesOnBlobService(cors =>
{
    var wildcardRule = new CorsRule() { AllowedMethods = CorsHttpMethods.Get, AllowedOrigins = { "*" } };
    cors.CorsRules.Add(wildcardRule);
    return cors;
});
// CODE:
/// <summary>
/// Allows the caller to replace or alter the current CorsProperties on a given CloudStorageAccount.
/// </summary>
/// <param name="storageAccount">Storage account.</param>
/// <param name="alterCorsRules">The returned value will replace the
/// current ServiceProperties.Cors (CorsProperties) value.</param>
public static void SetCORSPropertiesOnBlobService(this CloudStorageAccount storageAccount,
    Func<CorsProperties, CorsProperties> alterCorsRules)
{
    if (storageAccount == null || alterCorsRules == null) throw new ArgumentNullException();
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    ServiceProperties serviceProperties = blobClient.GetServiceProperties();
    serviceProperties.Cors = alterCorsRules(serviceProperties.Cors) ?? new CorsProperties();
    blobClient.SetServiceProperties(serviceProperties);
}
It may be helpful to consider the properties of the CorsRule class:
CorsRule corsRule = new CorsRule()
{
    AllowedMethods = CorsHttpMethods.Get, // HTTP methods permitted to execute for this origin
    AllowedOrigins = { "*" },             // (IList<string>) domain names allowed via CORS
    //AllowedHeaders = { "*" },           // (IList<string>) headers allowed to be part of the CORS request
    //ExposedHeaders = null,              // (IList<string>) response headers that should be exposed to the client via CORS
    //MaxAgeInSeconds = 33333             // length of time in seconds that a preflight response should be cached by the browser
};
Let me try to answer your question. As you know, Azure Storage offers a REST API for managing storage contents. One operation there is Set Blob Service Properties, and one of the things you do there is manage CORS rules for the blob service. The XML you have included in the question is the request payload for this operation. The C# code you mentioned is actually the storage client library, which is essentially a wrapper over this REST API written in .NET. So when you use the code above, it actually invokes the REST API and sends the XML.
Now coming to options for setting up CORS rules, there are a few ways you can achieve that. If you're interested in setting them up programmatically, you can either write some code which consumes the REST API, or you can directly use the .NET storage client library as you have done above. You could simply create a console application, put the code in there, and execute it to set the CORS rule. If you're looking for tools to do that, you can try one of the following:
Azure Management Studio from Cerebrata: http://www.cerebrata.com
Cloud Portam: http://www.cloudportam.com (Disclosure: This product is built by me).
Azure Storage Explorer (version 6.0): https://azurestorageexplorer.codeplex.com/
It's not a good idea to give domain-level access to your containers. You can make the container private, upload the files (create blobs) and then share them by using a Shared Access Policy.
The code below can help you.
static void Main(string[] args)
{
    var account = CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings["AzureStorageAccount"].ConnectionString);
    var bClient = account.CreateCloudBlobClient();
    var container = bClient.GetContainerReference("test-share-container-1");
    container.CreateIfNotExists();
    // clear all existing policies
    ClearPolicy(container);
    string newPolicy = "blobsharepolicy";
    CreateSharedAccessPolicyForBlob(container, newPolicy);
    var bUri = BlobUriWithNewPolicy(container, newPolicy);
    Console.ReadLine();
}

static void ClearPolicy(CloudBlobContainer container)
{
    var perms = container.GetPermissions();
    perms.SharedAccessPolicies.Clear();
    container.SetPermissions(perms);
}

static string BlobUriWithNewPolicy(CloudBlobContainer container, string policyName)
{
    var blob = container.GetBlockBlobReference("testfile1.txt");
    string blobContent = "Hello there !!";
    MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(blobContent));
    ms.Position = 0;
    using (ms)
    {
        blob.UploadFromStream(ms);
    }
    return blob.Uri + blob.GetSharedAccessSignature(null, policyName);
}

static void CreateSharedAccessPolicyForBlob(CloudBlobContainer container, string policyName)
{
    SharedAccessBlobPolicy sharedPolicy = new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
        Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
    };
    var permissions = container.GetPermissions();
    permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
    container.SetPermissions(permissions);
}
<connectionStrings>
    <add name="AzureStorageAccount" connectionString="DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key]" />
</connectionStrings>
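The string returned by BlobUriWithNewPolicy() is a self-authorizing URL, so any HTTP client can fetch the blob while the policy is valid, with no account key and no CORS configuration involved. A small sketch of consuming it (continuing from the Main method above):

// Download the blob through the SAS URL; works from any machine.
using (var http = new System.Net.WebClient())
{
    string sasUrl = BlobUriWithNewPolicy(container, "blobsharepolicy");
    string text = http.DownloadString(sasUrl); // "Hello there !!"
}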
