What is needed in my ARM template to add an Azure Function? - C#

I have an ARM template for my resource group. In there I have set up a Function App which is correctly deployed with my ARM template using Octopus Deploy (OD).
Now I have developed an Azure Function in C# which I want to deploy into my Function App.
This can be done with a separate step (using a template) in OD, and all seemed well when deploying... except that the function does not show up in my Function App in Azure as I expect it to.
Is there any further configuration I need to do in my ARM template? I haven't found anything on the subject of deploying a C# Azure Function this way.
From what I have read, a .csx Azure Function can be configured/deployed directly in the ARM template, but that is not recommended for a compiled C# Azure Function.
EDIT:
This is a part of my ARM template. The FunctionApp is created with the following resource.
{
"type": "Microsoft.Web/sites",
"apiVersion": "2018-11-01",
"name": "[parameters('sites_..._name')]",
"location": "[parameters('location')]",
"kind": "functionapp",
"dependsOn": [
"[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_..._name'))]",
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
],
"identity": {
"type": "SystemAssigned"
},
"properties": {
"enabled": true,
"siteConfig": {
"appSettings": [
{
"name": "FUNCTIONS_WORKER_RUNTIME",
"value": "[parameters('runtimeStack')]"
},
{
"name": "AzureWebJobsStorage",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=',variables('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2017-06-01').keys[0].value)]"
},
{
"name": "FUNCTIONS_EXTENSION_VERSION",
"value": "~2"
},
//...
{
"name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
"value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')),'2017-06-01').keys[0].value)]"
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "[parameters('sites_..._name')]"
}
]
},
"hostNameSslStates": [
{
"name": "[concat(parameters('sites_..._name'), '.azurewebsites.net')]",
"sslState": "Disabled",
"hostType": "Standard"
},
{
"name": "[concat(parameters('sites_..._name'), '.scm.azurewebsites.net')]",
"sslState": "Disabled",
"hostType": "Repository"
}
],
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarms_..._name'))]",
"reserved": false,
"isXenon": false,
"hyperV": false,
"scmSiteAlsoStopped": false,
"clientAffinityEnabled": false,
"clientCertEnabled": false,
"hostNamesDisabled": false,
"containerSize": 1536,
"dailyMemoryTimeQuota": 0,
"httpsOnly": true,
"redundancyMode": "None"
}
}
As you can see, I am registering my Function App, which, after deployment of the ARM template AND a separate deployment of my Azure Function TO that Function App, should include the Azure Function.
However, the Azure Function does not show up under my Function App.
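For reference, one way to make the compiled function code part of the template deployment itself is the WEBSITE_RUN_FROM_PACKAGE app setting, added alongside the other appSettings above. This is only a sketch, under the assumption that your build (or Octopus) uploads the zipped, compiled function app to blob storage and passes the template a SAS URL in a hypothetical functionPackageUrl parameter:
{
    "name": "WEBSITE_RUN_FROM_PACKAGE",
    "value": "[parameters('functionPackageUrl')]"
}
Either way, the ARM template on its own only creates the empty Function App; functions only appear in the portal once the compiled binaries (with their generated function.json files) actually reach this app, whether via run-from-package, zip deploy, or the separate Octopus step targeting the right app.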

Related

What do I need to do to make a Teams bot callable

I'm trying to create a Microsoft Teams bot that can be called. In particular, we plan to use it as a destination for incoming calls on a Call Queue.
The call buttons we usually get with contacts are nowhere to be seen on this bot.
So far, I have managed to create a bot according to the samples.
In https://dev.botframework.com/, the bot appears and I have the Enable Calling flag set (which, interestingly, seems to get disabled almost every time I run the project in Visual Studio).
My permissions.json looks like this:
[
{
"resource": "Microsoft Graph",
"delegated": [
"User.Read"
],
"application": [
"Calls.Initiate.All",
"Calls.InitiateGroupCall.All",
"Calls.JoinGroupCall.All",
"Calls.AccessMedia.All"
]
}
]
My manifest.template.json looks like this:
{
"$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.13/MicrosoftTeams.schema.json",
"manifestVersion": "1.13",
"version": "1.0.0",
"id": "{{state.fx-resource-appstudio.teamsAppId}}",
"packageName": "com.microsoft.teams.extension",
"developer": {
"name": "Teams App, Inc.",
"websiteUrl": "{{state.fx-resource-frontend-hosting.endpoint}}",
"privacyUrl": "{{state.fx-resource-frontend-hosting.endpoint}}{{state.fx-resource-frontend-hosting.indexPath}}/privacy",
"termsOfUseUrl": "{{state.fx-resource-frontend-hosting.endpoint}}{{state.fx-resource-frontend-hosting.indexPath}}/termsofuse"
},
"icons": {
"color": "resources/color.png",
"outline": "resources/outline.png"
},
"name": {
"short": "{{config.manifest.appName.short}}",
"full": "{{config.manifest.appName.full}}"
},
"description": {
"short": "Short description of {{config.manifest.appName.short}}",
"full": "Full description of {{config.manifest.appName.short}}"
},
"accentColor": "#FFFFFF",
"bots": [
{
"botId": "{{state.fx-resource-bot.botId}}",
"scopes": [
"personal",
"team",
"groupchat"
],
"supportsFiles": false,
"supportsCalling": true,
"supportsVideo": true,
"isNotificationOnly": false
}
],
"composeExtensions": [],
"configurableTabs": [],
"staticTabs": [],
"permissions": [
"identity",
"messageTeamMembers"
],
"validDomains": [],
"webApplicationInfo": {
"id": "{{state.fx-resource-aad-app-for-teams.clientId}}",
"resource": "{{state.fx-resource-aad-app-for-teams.applicationIdUris}}"
}
}
I also believe I have configured the correct permissions on the App Registration in Azure.
Any idea where to look next? What do I need to do to make the bot directly callable, as any other user would be?
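One thing worth checking that is not covered by the manifest or permissions.json (offered as a pointer, not a verified fix): a Teams calling bot also needs a calling webhook URL registered under the Calling section of the bot's Microsoft Teams channel, and the Graph application permissions above need admin consent. Incoming call notifications are then POSTed to that webhook; a minimal ASP.NET Core sketch of such an endpoint, with hypothetical names:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical sketch: the endpoint registered as the calling webhook,
// e.g. https://<your-bot-domain>/api/calling on the bot's Teams channel.
[ApiController]
[Route("api/calling")]
public class CallingController : ControllerBase
{
    [HttpPost]
    public async Task<IActionResult> HandleNotificationAsync()
    {
        // Log the raw call notification from the Graph communications
        // platform while wiring things up.
        using var reader = new StreamReader(Request.Body);
        string payload = await reader.ReadToEndAsync();
        Console.WriteLine(payload);
        return Accepted();
    }
}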

Creating an Azure Storage Account in C# using Fluent API with no Allow Blob Public Access

My goal is to create an Azure storage account from C# code using the Fluent API (Microsoft.Azure.Management.Fluent).
My code executes correctly except that my organization has a policy which requires that all storage accounts must be created with "Allow Blob public access" set to Disabled. Is there any way to specify that as an option using the Fluent approach?
TargetStorageAccount = AzureClient.StorageAccounts.Define(storageAccountName)
.WithRegion(Region.AustraliaCentral)
.WithExistingResourceGroup(ResourceGroupName)
.WithSku(StorageAccountSkuType.Standard_LRS)
.WithGeneralPurposeAccountKindV2()
.Create();
You can use an ARM template to deploy a storage account with "Allow Blob public access" set to Disabled.
For a quick demo, this is my template for the storage account:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"storagePrefix": {
"type": "string",
"minLength": 3,
"maxLength": 11
},
"storageSKU": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_GRS",
"Standard_RAGRS",
"Standard_ZRS",
"Premium_LRS",
"Premium_ZRS",
"Standard_GZRS",
"Standard_RAGZRS"
]
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]"
}
},
"variables": {
"uniqueStorageName": "[concat(parameters('storagePrefix'), '4testonly')]"
},
"resources": [{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-04-01",
"name": "[variables('uniqueStorageName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('storageSKU')]"
},
"kind": "StorageV2",
"properties": {
"supportsHttpsTrafficOnly": true,
"allowBlobPublicAccess": false
}
}
],
"outputs": {
"result": {
"type": "string",
"value": "done"
}
}
}
Note that allowBlobPublicAccess is set to false.
This is the code using the Fluent API to create this deployment:
var storageAccountName = "<your new storage account name>";
var templateJson = File.ReadAllText(@"<path of template file>\storageOnly.json");
azure.Deployments.Define("storageDeployTest")
.WithExistingResourceGroup("<your resource group name>")
.WithTemplate(templateJson)
.WithParameters("{\"storagePrefix\":{\"value\":\""+ storageAccountName + "\"}}")
.WithMode(DeploymentMode.Incremental)
.Create();
Result: the deployment succeeds, and the new storage account is created with "Allow Blob public access" disabled.
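As a side note: the Fluent packages (Microsoft.Azure.Management.Fluent) are in maintenance mode, and their replacement, Azure.ResourceManager.Storage, exposes the flag directly. A sketch of what that looks like, assuming switching SDKs is an option (names follow the newer Azure.ResourceManager API):
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;
using Azure.ResourceManager.Storage;
using Azure.ResourceManager.Storage.Models;

var arm = new ArmClient(new DefaultAzureCredential());
SubscriptionResource sub = await arm.GetDefaultSubscriptionAsync();
ResourceGroupResource rg =
    (await sub.GetResourceGroupAsync("<your resource group name>")).Value;

var content = new StorageAccountCreateOrUpdateContent(
    new StorageSku(StorageSkuName.StandardLrs),
    StorageKind.StorageV2,
    AzureLocation.AustraliaCentral)
{
    AllowBlobPublicAccess = false // the policy-mandated setting
};

await rg.GetStorageAccounts()
    .CreateOrUpdateAsync(WaitUntil.Completed, "<your new storage account name>", content);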

Google API Explorer doesn't create Compute Engine VM instance

As in the title, I am trying to use the Google API Explorer to create a VM instance. I am using instances.insert for this, but I can't get it to work. After successfully executing the call, I cannot see any newly created VM instance in https://console.cloud.google.com/compute/instances
The request I am trying to execute is copied from the "Equivalent REST request" on the Google Cloud Console "Create an instance" web page:
{
"name": "some-name",
"machineType": "projects/my-project-id/zones/europe-west3-c/machineTypes/f1-micro",
"displayDevice": {
"enableDisplay": false
},
"metadata": {
"items": [
{
"key": "startup-script",
"value": "#! /bin/bash\necho hello\nEOF"
}
]
},
"tags": {
"items": []
},
"disks": [
{
"type": "PERSISTENT",
"boot": true,
"mode": "READ_WRITE",
"autoDelete": true,
"deviceName": "some-name",
"initializeParams": {
"sourceImage": "projects/debian-cloud/global/images/debian-10-buster-v20200910",
"diskType": "projects/my-project-id/zones/europe-west3-c/diskTypes/pd-standard",
"diskSizeGb": "10",
"labels": {}
},
"diskEncryptionKey": {}
}
],
"canIpForward": false,
"networkInterfaces": [
{
"subnetwork": "projects/my-project-id/regions/europe-west3/subnetworks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT",
"networkTier": "PREMIUM"
}
],
"aliasIpRanges": []
}
],
"description": "",
"labels": {},
"scheduling": {
"preemptible": false,
"onHostMaintenance": "MIGRATE",
"automaticRestart": true,
"nodeAffinities": []
},
"deletionProtection": false,
"reservationAffinity": {
"consumeReservationType": "ANY_RESERVATION"
},
"serviceAccounts": [
{
"email": "some-number-compute#developer.gserviceaccount.com",
"scopes": [
"https://www.googleapis.com/auth/cloud-platform"
]
}
],
"shieldedInstanceConfig": {
"enableSecureBoot": false,
"enableVtpm": true,
"enableIntegrityMonitoring": true
},
"confidentialInstanceConfig": {
"enableConfidentialCompute": false
}
}
Here is the response with status 200
{
"id": "2981010757915612255",
"name": "operation-1602235056020-5b1396b5c5cee-e0e30499-4d06ce75",
"zone": "https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c",
"operationType": "insert",
"targetLink": "https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c/instances/ams2-linux-race-1",
"targetId": "1541614827291382879",
"status": "RUNNING",
"user": "email#gmail.com",
"progress": 0,
"insertTime": "2020-10-09T02:17:36.818-07:00",
"startTime": "2020-10-09T02:17:36.821-07:00",
"selfLink": "https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c/operations/operation-1602235056020-5b1396b5c5cee-e0e30499-4d06ce75",
"kind": "compute#operation"
}
I have the same issue with the C# code example from https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert#examples
I can execute the same request without errors, and in the response I get this:
{
"clientOperationId":null,
"creationTimestamp":null,
"description":null,
"endTime":null,
"error":null,
"httpErrorMessage":null,
"httpErrorStatusCode":null,
"id":3283200477858999168,
"insertTime":"2020-10-09T00:46:55.187-07:00",
"kind":"compute#operation",
"name":"operation-1602229614262-5b1382701b989-381126a6-cc145485",
"operationType":"insert",
"progress":0,
"region":null,
"selfLink":"https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c/operations/operation-1602229614262-5b1382701b989-381126a6-cc145485",
"startTime":"2020-10-09T00:46:55.189-07:00",
"status":"RUNNING",
"statusMessage":null,
"targetId":2365846324436118401,
"targetLink":"https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c/instances/some-name",
"user":"email#gmail.com",
"warnings":null,
"zone":"https://www.googleapis.com/compute/v1/projects/my-project-id/zones/europe-west3-c",
"ETag":null
}
but I can't see any new instance being created...
Does anyone know what the issue is here?
The Compute Engine API is enabled. Result of gcloud services list:
NAME TITLE
...
compute.googleapis.com Compute Engine API
...
Can you please double-check that the Compute Engine API is enabled and post the result in your question?
gcloud services list
I believe your Compute Engine API is not enabled.
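Another angle worth adding: instances.insert is asynchronous, and an HTTP 200 with "status": "RUNNING" only means the operation was accepted. If the instance never appears, poll the returned zone operation until it is DONE and inspect its error field; that usually says why (quota, permissions, a bad source image, or creating in a different project than the one open in the console). A sketch with the .NET client library (project, zone, and operation name are placeholders):
using System;
using System.Threading;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Compute.v1;
using Google.Apis.Services;

var credential = GoogleCredential.GetApplicationDefault()
    .CreateScoped(ComputeService.Scope.CloudPlatform);
var compute = new ComputeService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential
});

// Poll the operation returned by instances.insert until it finishes.
var op = compute.ZoneOperations
    .Get("my-project-id", "europe-west3-c", "<operation name>")
    .Execute();
while (op.Status != "DONE")
{
    Thread.Sleep(1000);
    op = compute.ZoneOperations
        .Get("my-project-id", "europe-west3-c", op.Name)
        .Execute();
}

// A DONE operation with a null Error means the instance was created;
// otherwise the failure reasons are listed here.
if (op.Error != null)
    foreach (var e in op.Error.Errors)
        Console.WriteLine($"{e.Code}: {e.Message}");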

Azure Data Factory project with custom activities

I have an Azure Data Factory project.
I read the docs at https://learn.microsoft.com/en-us/azure/data-factory/data-factory-use-custom-activities to add custom activities to one of my pipelines.
The documentation says that you have to zip the DLLs of the class library that contains the custom activities and store this zip in an Azure blob.
The definition of the pipeline is:
{
"name": "LoadFromOnerxSalesInvoicesRaw",
"properties": {
"description": "Test Deserialize Sales Invoices Raw",
"activities": [
{
"type": "DotNetActivity",
"typeProperties": {
"assemblyName": "BICodeActivities.dll",
"entryPoint": "BICodeActivities.Activities.OneRx.DeserializeSalesInvoiceToLines",
"packageLinkedService": "biCABlobLS",
"packageFile": "bi-activities-container/BICodeActivities.zip",
"extendedProperties": {
"SliceStart": "$$Text.Format('{0:yyyyMMddHH-mm}', Time.AddMinutes(SliceStart, 0))"
}
},
"inputs": [
{
"name": "o-staging-onerx-salesInvoices"
}
],
"outputs": [
{
"name": "o-staging-onerx-salesInvoicesLines"
}
],
"policy": {
"timeout": "00:30:00",
"concurrency": 2,
"retry": 3
},
"scheduler": {
"frequency": "Day",
"interval": 1
},
"name": "DeserializeSalesInvoiceToLines",
"linkedServiceName": "biBatchLS"
},
{
"type": "DotNetActivity",
"typeProperties": {
"assemblyName": "BICodeActivities.dll",
"entryPoint": "BICodeActivities.Activities.OneRx.DeserializeSalesInvoiceToDiscounts",
"packageLinkedService": "biCABlobLS",
"packageFile": "bi-activities-container/BICodeActivities.zip",
"extendedProperties": {
"SliceStart": "$$Text.Format('{0:yyyyMMddHH-mm}', Time.AddMinutes(SliceStart, 0))"
}
},
"inputs": [
{
"name": "o-staging-onerx-salesInvoices"
}
],
"outputs": [
{
"name": "o-staging-onerx-salesInvoicesDiscounts"
}
],
"policy": {
"timeout": "00:30:00",
"concurrency": 2,
"retry": 3
},
"scheduler": {
"frequency": "Day",
"interval": 1
},
"name": "DeserializeSalesInvoiceToDiscounts",
"linkedServiceName": "biBatchLS"
}
],
"start": "2017-04-26T09:20:00Z",
"end": "2018-04-26T22:30:00Z"
}
}
When I set up this pipeline in my Visual Studio project and build it, I get the error "BICodeActivities.zip is not found in the solution".
Do I have to zip the DLLs and add them manually to the solution, or do I need to do something else?
I'm assuming you have the custom activities as a class library in the same solution as your data factory project.
If so, you simply need to reference the class library project from the data factory project: right click > Add > Reference, then select the library project.
Once done, when you build the solution Visual Studio will handle the zipping of the DLLs for you and also add the ZIP as a dependency, which will show in the publishing wizard to be deployed to the blob storage linked service.
For further support check out this blog post.
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
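For completeness, the entryPoint named in the pipeline JSON must be a class in that library implementing IDotNetActivity from the ADF v1 SDK (Microsoft.Azure.Management.DataFactories). A bare-bones sketch, with the actual work left as a comment:
using System.Collections.Generic;
using Microsoft.Azure.Management.DataFactories.Models;
using Microsoft.Azure.Management.DataFactories.Runtime;

namespace BICodeActivities.Activities.OneRx
{
    // Matches the "entryPoint" value in the pipeline definition above.
    public class DeserializeSalesInvoiceToLines : IDotNetActivity
    {
        public IDictionary<string, string> Execute(
            IEnumerable<LinkedService> linkedServices,
            IEnumerable<Dataset> datasets,
            Activity activity,
            IActivityLogger logger)
        {
            var props = ((DotNetActivity)activity.TypeProperties).ExtendedProperties;
            logger.Write("Custom activity started; SliceStart = {0}", props["SliceStart"]);
            // ... read the input dataset, deserialize invoices into lines,
            // and write the output dataset here ...
            return new Dictionary<string, string>();
        }
    }
}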
Hope this helps.

Deploy a .NET Windows Service with Amazon Elastic Beanstalk with no Web Application

I want to create an Elastic Beanstalk configuration that allows me to deploy a .NET Windows Service without also deploying a web application.
I have just read this blog post which explains how to use .ebextensions to deploy a Windows Service alongside your web application, but is there a scenario for which the .ebextensions can be run without deploying a Web Deploy package for a web application?
Is my only option to create an empty web application that contains the .ebextensions directory and then deploy the Web Deploy package?
The Elastic Beanstalk FAQ mentions the ability to deploy non-web applications (here) and I have found a similar (unanswered) question on the AWS developer forums (here).
Update
Due to the lack of activity on this question and my inability to find any other information on the internet, I just assumed that the answer to this question is "No" (at least for now).
I ended up creating an empty web application and used that to deploy my Windows Service via the .ebextensions YAML config.
As a side note, I'd like to highlight this page from Amazon's documentation which I found to be a very helpful guide to creating those special config files.
Another Update
After implementing the approach mentioned above, I discovered that Elastic Beanstalk was not executing my .ebextensions scripts for new Beanstalk instances. As a result, the Windows Service failed to be installed when new instances were created. I had to jump through several more hoops to finally arrive at a scalable solution. Please let me know if you want the details of the final solution.
Ultimately, it just seems like Elastic Beanstalk wasn't meant for deploying scalable Windows Services.
Basic Solution
I'm not comfortable releasing the source code since it was not for a personal project, but here is the basic structure of my current deployment solution:
1. A custom EC2 AMI contains a 'bootstrap' program that runs on startup. The program does the following:
1.1. Downloads a 'zip' archive from a (configurable) 'deployment' S3 bucket.
1.2. Extracts the downloaded zip file to a temporary directory.
1.3. Locates and executes an "install.bat" script (the name of the script is also configurable). This script installs and starts the Windows Service.
2. The Elastic Beanstalk "Instance AMI" is set to the custom AMI with the bootstrap program (see: this article).
3. To deploy new code: upload the installation .zip archive (containing the Windows Service and the install.bat file) to the S3 bucket and terminate all EC2 instances for the Elastic Beanstalk application. As the instances are re-created, the bootstrap program will download and install the newly updated code. A sketch of this flow follows below.
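A rough sketch of what the bootstrap flow above amounts to (bucket, key, and path names are made up for illustration; the real program reads them from configuration):
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Transfer;

class Bootstrap
{
    static async Task Main()
    {
        // 1.1: download the deployment package from the S3 bucket.
        var zipPath = Path.Combine(Path.GetTempPath(), "deployment.zip");
        using var s3 = new AmazonS3Client();
        await new TransferUtility(s3).DownloadAsync(zipPath, "my-deployment-bucket", "service.zip");

        // 1.2: extract it to a temporary directory.
        var workDir = Path.Combine(Path.GetTempPath(), "deployment");
        ZipFile.ExtractToDirectory(zipPath, workDir, overwriteFiles: true);

        // 1.3: locate and run install.bat, which installs and starts the service.
        var installBat = Directory.GetFiles(workDir, "install.bat", SearchOption.AllDirectories).First();
        var proc = Process.Start(new ProcessStartInfo(installBat) { UseShellExecute = true });
        proc!.WaitForExit();
        Console.WriteLine($"install.bat exited with code {proc.ExitCode}");
    }
}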
Of course, if I were starting over, I would just skip using Elastic Beanstalk and use the standard AWS auto-scaling along with a similar deployment scheme. The bottom line is that if you don't have a web application, don't use Elastic Beanstalk; you're better off with the standard AWS auto-scaling.
New AWS Deployment Tools
Amazon recently announced several new code deployment/management services that seem to address deployment issues: http://aws.amazon.com/blogs/aws/code-management-and-deployment/
I have yet to use these new services (I'm not even sure if they've been released yet), but they look promising.
Since this question has been around for a while and still has no answer, but continues to draw interest, let me share my solution to a very similar problem: installing a Windows Service on an EC2 instance. I'm not using Beanstalk though, since that service is designed more for quick deployment of web applications. Instead, I'm using CloudFormation directly, which Beanstalk uses underneath to deploy resources related to the web application.
The stack expects an existing VPC (ours spans several availability zones), an S3 bucket that stores all service build artifacts, and an EC2 key pair. The template creates an EC2 instance using a Windows AMI plus a few other resources, like an IAM user with access keys and a working S3 bucket, just to illustrate how to create additional resources that your service might need. The template also takes as a parameter the name of a zipped package with all service binaries and configuration files that has been uploaded to the build-artifacts S3 bucket (we use a TeamCity build server that does this for us, but you can of course create and upload the package manually). When you build a new version of the service, you simply create a new package (for example service.v2.zip), update the stack with the new name, and the service will be updated automatically. The template contains IDs of AMIs in 4 different regions, but you can always add other regions if you wish. Here's the stack template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Service stack template.",
"Parameters": {
"KeyPair": {
"Type": "String",
"Default": "MyDefaultKeys",
"Description": "Name of EC2 Key Pair."
},
"ServicePackageName": {
"Type": "String",
"Default": "service.zip",
"Description": "Name of the zip package of the service files."
},
"DeploymentBucketName": {
"Type": "String",
"Default": "",
"Description": "Name of the deployment bucket where all the artifacts are."
},
"VPCId": {
"Type": "String",
"Default": "",
"Description": "Identifier of existing VPC."
},
"VPCSubnets": {
"Default": "",
"Description": "Commaseparated list of existing subnets within the existing VPC. Could be just one.",
"Type": "CommaDelimitedList"
},
"VPCSecurityGroup": {
"Default": "",
"Description": "Existing VPC security group. That should be the ID of the VPC's default security group.",
"Type": "String"
}
},
"Mappings": {
"Region2WinAMI": {
"us-east-1": { "64": "ami-40f0d32a" },
"us-west-1": { "64": "ami-20601740" },
"us-west-2": { "64": "ami-ff4baf9f" },
"eu-west-1": { "64": "ami-3367d340" }
}
},
"Resources": {
"ServiceInstance": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"Comment": "Install Service",
"AWS::CloudFormation::Init": {
"configSets": {
"default": [ "ServiceConfig" ]
},
"ServiceConfig": {
"files": {
"c:\\service\\settings.config": {
"source": { "Fn::Join": [ "/", [ "https://s3.amazonaws.com", { "Ref": "DeploymentBucketName" }, "deployments/stacks", { "Ref": "AWS::StackName" }, "templates/settings.config.mustache" ] ] },
"context": {
"region": { "Ref": "AWS::Region" },
"accesskey": { "Ref": "IAMUserAccessKey" },
"secretkey": { "Fn::GetAtt": [ "IAMUserAccessKey", "SecretAccessKey" ] },
"bucket": { "Ref": "BucketName" }
}
},
"c:\\cfn\\cfn-hup.conf": {
"content": {
"Fn::Join": [
"",
[
"[main]\n",
"stack=",
{ "Ref": "AWS::StackId" },
"\n",
"region=",
{ "Ref": "AWS::Region" },
"\n",
"interval=1"
]
]
}
},
"c:\\cfn\\hooks.d\\cfn-auto-reloader.conf": {
"content": {
"Fn::Join": [
"",
[
"[cfn-auto-reloader-hook]\n",
"triggers=post.update\n",
"path=Resources.ServiceInstance.Metadata.AWS::CloudFormation::Init\n",
"action=cfn-init.exe -v -s ",
{ "Ref": "AWS::StackName" },
" -r ServiceInstance --region ",
{ "Ref": "AWS::Region" },
"\n"
]
]
}
}
},
"sources": {
"c:\\tmp\\service": { "Fn::Join": [ "/", [ "https://s3.amazonaws.com", { "Ref": "DeploymentBucketName" }, "deployments/stacks", { "Ref": "AWS::StackName" }, "artifacts/Service", { "Ref": "ServicePackageName" } ] ] }
},
"commands": {
"Install Service": {
"command": "call c:\\tmp\\service\\install.bat",
"ignoreErrors": "false"
}
},
"services": {
"windows": {
"cfn-hup": {
"enabled": "true",
"ensureRunning": "true",
"files": [ "c:\\cfn\\cfn-hup.conf", "c:\\cfn\\hooks.d\\cfn-auto-reloader.conf" ]
}
}
}
}
}
},
"Properties": {
"ImageId": { "Fn::FindInMap": [ "Region2WinAMI", { "Ref": "AWS::Region" }, "64" ] },
"InstanceType": "t2.micro",
"KeyName": { "Ref": "KeyPair" },
"SecurityGroupIds" : [{ "Ref": "VPCSecurityGroup" }],
"SubnetId" : { "Fn::Select": [ "0", { "Ref": "VPCSubnets" } ] },
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"",
[
"<script>\n",
"if not exist \"C:\\logs\" mkdir C:\\logs \n",
"cfn-init.exe -v -s ",
{ "Ref": "AWS::StackName" },
" -r ServiceInstance --region ",
{ "Ref": "AWS::Region" },
" -c default \n",
"</script>\n"
]
]
}
},
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": "true",
"VolumeSize": "40",
"VolumeType": "gp2"
}
}
],
"Tags": [
{ "Key": "Name", "Value": { "Fn::Join": [ ".", [ { "Ref": "AWS::StackName" }, "service" ] ] } }
]
}
},
"BucketName": {
"Type": "AWS::S3::Bucket",
"Properties": {
"AccessControl": "PublicRead"
},
"DeletionPolicy": "Retain"
},
"IAMUser": {
"Type": "AWS::IAM::User",
"Properties": {
"Path": "/",
"Groups": [ "stack-users" ],
"Policies": [
{
"PolicyName": "giveaccesstobuckets",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [ "s3:*" ],
"Resource": [ { "Fn::Join": [ "", [ "arn:aws:s3:::", { "Ref": "BucketName" }, "/*" ] ] } ]
}
]
}
}
]
}
},
"IAMUserAccessKey": {
"Type": "AWS::IAM::AccessKey",
"Properties": {
"UserName": { "Ref": "IAMUser" }
}
}
}
}
As you can see, after copying the artifacts we execute the install.bat batch file (included in the zip file), which moves the files to the correct location and registers the service. Here are the contents of the file:
@echo off
rem Exit code 1060 from "sc query" means the service is not installed yet.
sc query MyService > NUL
IF ERRORLEVEL 1060 GOTO COPYANDCREATE
rem Service already exists: stop it, give it up to 20 seconds to shut down
rem (waitfor times out waiting for a signal that is never sent), then copy.
sc stop MyService
waitfor /T 20 ServiceStop
echo D | xcopy "c:\tmp\service" "c:\service\" /E /Y /i
GOTO END
:COPYANDCREATE
rem First deployment: copy the files and register the service.
echo D | xcopy "c:\tmp\service" "c:\service\" /E /Y /i
sc create MyService binpath= "c:\service\MyService.exe" start= "auto"
:END
sc start MyService
The template also creates a config file (from settings.config.mustache, which also resides on the artifacts bucket) containing information about the other resources that have been created for the service to use. Here it is:
<appSettings>
<add key="AWSAccessKey" value="{{accesskey}}" />
<add key="AWSSecretKey" value="{{secretkey}}" />
<add key="AWSRegion" value="{{region}}" />
<add key="AWSBucket" value="{{bucket}}" />
</appSettings>
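For illustration, the service can then pick these values up like any other appSettings entries (assuming the generated settings.config is wired into the service's app.config as its appSettings section):
using System;
using System.Configuration; // requires a reference to System.Configuration

class SettingsDemo
{
    static void Main()
    {
        // Values rendered into settings.config by cfn-init from the mustache template.
        string bucket = ConfigurationManager.AppSettings["AWSBucket"];
        string region = ConfigurationManager.AppSettings["AWSRegion"];
        Console.WriteLine($"Using bucket '{bucket}' in region '{region}'");
    }
}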
You create and later update the stack either from the AWS web console or from the CLI.
And that's pretty much it. You can visit the AWS CloudFormation website to get more info about the service and how to work with templates.
P.S.: I realised that it would be better if I also shared the template that creates the VPC. I keep it separate, as I have one VPC per region. You could integrate it with the service template if you wish, but that would mean that every time you create a new stack, a new VPC will also be created. Here is the VPC template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "VPC stack template.",
"Mappings": {
"Region2AZ": {
"us-east-1": { "AZ": [ "us-east-1a", "us-east-1b", "us-east-1d" ] },
"us-west-1": { "AZ": [ "us-west-1b", "us-west-1c" ] },
"us-west-2": { "AZ": [ "us-west-2a", "us-west-2b", "us-west-2c" ] },
"eu-west-1": { "AZ": [ "eu-west-1a", "eu-west-1b", "eu-west-1c" ] }
}
},
"Conditions": {
"RegionHas3Zones": { "Fn::Not" : [ { "Fn::Equals" : [ { "Ref": "AWS::Region" }, "us-west-1" ] } ] }
},
"Resources": {
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"EnableDnsSupport" : "true",
"EnableDnsHostnames" : "true"
}
},
"VPCSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Security group for VPC.",
"VpcId": { "Ref": "VPC" }
}
},
"Subnet0": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": { "Ref": "VPC" },
"CidrBlock": "10.0.0.0/24",
"AvailabilityZone": { "Fn::Select": [ "0", { "Fn::FindInMap": [ "Region2AZ", { "Ref": "AWS::Region" }, "AZ" ] } ] }
}
},
"Subnet1": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": { "Ref": "VPC" },
"CidrBlock": "10.0.1.0/24",
"AvailabilityZone": { "Fn::Select": [ "1", { "Fn::FindInMap": [ "Region2AZ", { "Ref": "AWS::Region" }, "AZ" ] } ] }
}
},
"Subnet2": {
"Type": "AWS::EC2::Subnet",
"Condition": "RegionHas3Zones",
"Properties": {
"VpcId": { "Ref": "VPC" },
"CidrBlock": "10.0.2.0/24",
"AvailabilityZone": { "Fn::Select": [ "2", { "Fn::FindInMap": [ "Region2AZ", { "Ref": "AWS::Region" }, "AZ" ] } ] }
}
},
"InternetGateway": {
"Type": "AWS::EC2::InternetGateway",
"Properties": {
}
},
"AttachGateway": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": { "Ref": "VPC" },
"InternetGatewayId": { "Ref": "InternetGateway" }
}
},
"RouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": { "Ref": "VPC" }
}
},
"Route": {
"Type": "AWS::EC2::Route",
"DependsOn": "AttachGateway",
"Properties": {
"RouteTableId": { "Ref": "RouteTable" },
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": { "Ref": "InternetGateway" }
}
},
"SubnetRouteTableAssociation0": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"SubnetId": { "Ref": "Subnet0" },
"RouteTableId": { "Ref": "RouteTable" }
}
},
"SubnetRouteTableAssociation1": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"SubnetId": { "Ref": "Subnet1" },
"RouteTableId": { "Ref": "RouteTable" }
}
},
"SubnetRouteTableAssociation2": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Condition": "RegionHas3Zones",
"Properties": {
"SubnetId": { "Ref": "Subnet2" },
"RouteTableId": { "Ref": "RouteTable" }
}
},
"NetworkAcl": {
"Type": "AWS::EC2::NetworkAcl",
"Properties": {
"VpcId": { "Ref": "VPC" }
}
},
"AllowAllInboundTCPAclEntry": {
"Type": "AWS::EC2::NetworkAclEntry",
"Properties": {
"NetworkAclId": { "Ref": "NetworkAcl" },
"RuleNumber": "100",
"Protocol": "6",
"RuleAction": "allow",
"Egress": "false",
"CidrBlock": "0.0.0.0/0",
"PortRange": { "From": "0", "To": "65535" }
}
},
"AllowAllInboundUDPAclEntry": {
"Type": "AWS::EC2::NetworkAclEntry",
"Properties": {
"NetworkAclId": { "Ref": "NetworkAcl" },
"RuleNumber": "101",
"Protocol": "17",
"RuleAction": "allow",
"Egress": "false",
"CidrBlock": "0.0.0.0/0",
"PortRange": { "From": "0", "To": "65535" }
}
},
"AllowAllOutboundTCPAclEntry": {
"Type": "AWS::EC2::NetworkAclEntry",
"Properties": {
"NetworkAclId": { "Ref": "NetworkAcl" },
"RuleNumber": "100",
"Protocol": "6",
"RuleAction": "allow",
"Egress": "true",
"CidrBlock": "0.0.0.0/0",
"PortRange": { "From": "0", "To": "65535" }
}
},
"AllowAllOutboundUDPAclEntry": {
"Type": "AWS::EC2::NetworkAclEntry",
"Properties": {
"NetworkAclId": { "Ref": "NetworkAcl" },
"RuleNumber": "101",
"Protocol": "17",
"RuleAction": "allow",
"Egress": "true",
"CidrBlock": "0.0.0.0/0",
"PortRange": { "From": "0", "To": "65535" }
}
},
"SubnetNetworkAclAssociation0": {
"Type": "AWS::EC2::SubnetNetworkAclAssociation",
"Properties": {
"SubnetId": { "Ref": "Subnet0" },
"NetworkAclId": { "Ref": "NetworkAcl" }
}
},
"SubnetNetworkAclAssociation1": {
"Type": "AWS::EC2::SubnetNetworkAclAssociation",
"Properties": {
"SubnetId": { "Ref": "Subnet1" },
"NetworkAclId": { "Ref": "NetworkAcl" }
}
},
"SubnetNetworkAclAssociation2": {
"Type": "AWS::EC2::SubnetNetworkAclAssociation",
"Condition": "RegionHas3Zones",
"Properties": {
"SubnetId": { "Ref": "Subnet2" },
"NetworkAclId": { "Ref": "NetworkAcl" }
}
}
},
"Outputs": {
"VPC": {
"Description": "VPC",
"Value": { "Ref": "VPC" }
},
"VPCSecurityGroup": {
"Description": "VPC Security Group Id",
"Value": { "Fn::GetAtt": [ "VPCSecurityGroup", "GroupId" ] }
},
"Subnet0": {
"Description": "Subnet0 Id",
"Value": { "Ref": "Subnet0" }
},
"Subnet1": {
"Description": "Subnet1 Id",
"Value": { "Ref": "Subnet1" }
},
"Subnet2": {
"Description": "Subnet2 Id",
"Condition": "RegionHas3Zones",
"Value": { "Ref": "Subnet2" }
}
}
}
From my own experience, implementing anything using ebextensions significantly adds to the elapsed deployment time, so much so that it can take up to 15 minutes for an instance to spin up when auto-scaling. That almost defeats the purpose.
In any event, be sure to configure the Auto Scaling group's "Health Check Grace Period" property to something significant. For example, we use 900 (i.e. 15 minutes). Anything less and the instance never passes the health check, so the scale-up event fails, which makes for an unending series of attempts to scale up.
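In CloudFormation terms that is the HealthCheckGracePeriod property on the Auto Scaling group; a hypothetical fragment (resource names made up):
"ServiceAutoScalingGroup": {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {
        "MinSize": "1",
        "MaxSize": "2",
        "HealthCheckGracePeriod": 900,
        "LaunchConfigurationName": { "Ref": "ServiceLaunchConfiguration" },
        "VPCZoneIdentifier": { "Ref": "VPCSubnets" }
    }
}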
