Azure IoT module twin update gets reverted back - c#

My layered deployment (single module) properties read as follows:
"properties": {
"desired": {
"PublishInterval": 2000,
"OtherProperty": 1
"layeredProperties": {}
}
},
...
After the deployment is applied, I would like to add a custom property on some of the devices using the Azure portal, so the result might look like this:
"properties": {
"desired": {
"PublishInterval": 2000,
"OtherProperty": 1
"layeredProperties": {
"instance-specific-property": 4000
}
}
},
...
A few minutes later this property gets reverted and we end up with an empty layeredProperties collection.
Following up on similar questions asked here and here, I'm starting to think that this is not possible at all, and that if one needs specific properties on some devices, a layered deployment has to be created for that.
Is there really no way of updating a module's twin desired properties other than using a deployment? That seems like overkill.

So the trick is to specify the desired properties in a layered deployment that targets a specific property object, e.g.
"properties.desired.powerSettings": {
"MaxChargePower": 5000,
"MaxDischargePower": 10000
}
This is the "unchangeable" object and will get reverted back by the layered deployment. But this now allows adding additional settings to the root object (properties.desired) through e.g. Azure portal, and it would look something like this
"properties": {
"desired": {
"powerSettings": {
"MaxChargePower": 5000,
"MaxDischargePower": 10000
},
"instance-specific-property": 4000, // new property that won't get reverted/removed by the layered deployment
....
}
Tested and verified - works as expected.
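For completeness, here is a minimal C# sketch of setting such an instance-specific desired property from code rather than the portal. It assumes the Microsoft.Azure.Devices service SDK; the connection string, device and module names are placeholders:

using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Shared;

// Connect with the IoT Hub service connection string (placeholder).
var registryManager = RegistryManager.CreateFromConnectionString("<iot-hub-connection-string>");

// Read the current module twin to obtain its ETag.
Twin twin = await registryManager.GetTwinAsync("my-device", "my-module");

// Patch only the instance-specific key at the desired-properties root;
// the layered deployment owns properties.desired.powerSettings and
// leaves keys outside that object alone.
var patch = new Twin();
patch.Properties.Desired["instance-specific-property"] = 4000;

await registryManager.UpdateTwinAsync("my-device", "my-module", patch, twin.ETag);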
ref How to configure individual modules for 100+ edge devices

How can I only get specific buckets returned by S3 in ListBuckets? [duplicate]

I have an S3 bucket with several folders/subfolders and I am using CloudBerry Drive to present the storage to users in Windows Explorer. For confidentiality reasons, I need to create an IAM policy that limits the results shown by the s3:ListBucket operation by default, with project-specific policies that reveal specified folders when attached to an IAM account.
I have tried to use the prefix option (see code block) without success but the documentation I have found suggests this should work. Perhaps I have misunderstood the prefix option?
Here is an example of the S3 structure:
arn:aws-cn:s3:::mybucket
projects/
projects/confidential/
projects/my project/
projects/public/
I need a default policy that will only return projects/public when I list the content of the projects/ folder in Explorer. I then need to be able to add a policy to selected IAM accounts that would also list projects/my project when I list the content of the projects/ folder.
{
    "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
    "Effect": "Allow",
    "Resource": ["arn:aws-cn:s3:::mybucket"],
    "Condition": {
        "StringLike": {
            "s3:prefix": ["", "projects/public/*"],
            "s3:delimiter": ["/"]
        }
    }
}
So far, using various combinations of prefix, I can either list nothing at all, or everything - I just can't seem to limit what is returned. Is this even possible?
If not - are there any alternative approaches to achieve the same thing?
Thanks
It is not possible to "limit what is returned", but you can limit "what is requested".
That is, you can Allow a request where the Prefix is projects/public. Requests that do not have this prefix will then be Denied by default.
The difficulty is that users will not be able to "navigate" from the root of the bucket to the allowed prefix. They will need to go directly to the prefix to be able to list objects.
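To illustrate the difference in C# (a hedged sketch using the standard AWSSDK.S3 client; the bucket and prefix names are from the question, and it assumes a policy that only allows the projects/public/ prefix):

using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

// Allowed: the request itself asks for the permitted prefix,
// so it matches the policy's s3:prefix condition.
var allowed = await s3.ListObjectsV2Async(new ListObjectsV2Request
{
    BucketName = "mybucket",
    Prefix = "projects/public/",
    Delimiter = "/"
});

// Denied: listing from the bucket root requests everything, which such a
// policy does not allow, so this throws an AmazonS3Exception (AccessDenied).
var denied = await s3.ListObjectsV2Async(new ListObjectsV2Request
{
    BucketName = "mybucket"
});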

How do I make sure that the webpage only proceeds once the API result is returned?

In ngOnInit() of the app.component, we call an API that returns true/false. Based on the returned 'type', a class (e.g. IsVisible) is added to document.body, and the Sass of app.component then hides the logos and other content on the page.
API call:
const res = await app.services.getIsActiveType().toPromise();
document.body.classList.add(res.type);
In Sass:
body {
  &.Type {
    logos {
      display: none;
    }
  }
}
But when deployed to the real server, the call takes around 6-7 seconds to return a response, so the content is visible until the web API returns a result.
Now, should we improve the speed of the server or change the logic? That particular role should not see the content, but the slow response shows it for a while.
What should I do? Even if the speed is improved, I cannot rely on that.
I searched and the suggested solution is to use resolvers, but I don't know much about them since I'm new to Angular.
Approach 1: you can invert the behavior of the CSS class so that it shows content instead of hiding it, keeping things hidden by default.
Approach 2: you can use APP_INITIALIZER to wait for the API's response. The special thing about APP_INITIALIZER is that it is called even before the root component's initialization, and Angular waits until it resolves. Here's the sample code:
const initializer = (apiService: ApiService) => {
  return () => {
    return apiService.getIsActiveType()
      .then(res => res.json())
      .then(res => {
        document.body.classList.add(res.type);
      });
  };
};
Register this initializer in the app module like this:
@NgModule({
  providers: [
    {
      provide: APP_INITIALIZER,
      useFactory: initializer,
      deps: [ApiService],
      multi: true,
    },
  ],
})
export class AppModule { }
If you go with this approach, also optimize the API response time so that the user is not staring at a blank white screen for 6-7 seconds.
Please try something like:
app.services
  .getIsActiveType()
  .subscribe((result: any) => {
    document.body.classList.add(result.type);
  });
I would recommend the following:
If content is authorized for a specific user role, that content should not be visible until the role is confirmed.
Utilize Angular's template resources like *ngIf if you want to keep content hidden from the view. Hiding it with CSS is not secure.
I would recommend using observables instead of .toPromise(), as that feature will eventually be deprecated in RxJS 7.x (Angular still uses 6.x for now).

Invalid cache key parameter specified when enabling caching for a path parameter in AWS API Gateway

I have a serverless web API (API Gateway + Lambda) that I have built in C# and deployed via Visual Studio. This is achieved via a serverless.yml file that auto-creates a CloudFormation template, then that template is applied to create the API stack.
Once my stack is deployed, I have gone into the AWS Console to enable caching on one of the path parameters, but get this error:
[Screenshot of the "Invalid cache key parameter specified" error: https://ibb.co/B4wmRRj]
I'm aware of this post https://forums.aws.amazon.com/thread.jspa?messageID=711315&#711315 which details a similar but different issue, where the user can't uncheck caching. My issue is that I can't enable it to begin with. I also don't understand the steps provided to resolve the issue in that post; there is mention of using the AWS CLI, but not which commands to use or what to do exactly.
I have also done some reading on how to enable caching through the serverless.yml template itself, or CloudFormation, but the examples I find online don't seem to match the structure of my serverless file or the resulting CF template (I can provide examples if required). I just want to be able to enable caching on path parameters. I have been able to enable caching globally on the API stage, but that won't help me unless the caching is sensitive to different path parameters.
serverless.yml
"GetTableResponse" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Handler": "AWSServerlessInSiteDataGw::AWSServerlessInSiteDataGw.Functions::GetTableResponse",
"Runtime": "dotnetcore2.0",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [ "AWSLambdaBasicExecutionRole","AWSLambdaVPCAccessExecutionRole","AmazonSSMFullAccess"],
"Events": {
"PutResource": {
"Type": "Api",
"Properties": {
"Path": "kata/table/get/{tableid}",
"Method": "GET"
}
}
}
}
}
},
"Outputs" : {
"ApiURL" : {
"Description" : "API endpoint URL for Prod environment",
"Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
}
}
--Update Start--
The reason you are getting the Invalid cache key parameter specified error is that you did not explicitly call out the path parameter section.
This is because, although the UI somehow extrapolated that there is a path parameter, it has not been explicitly called out in the API Gateway configuration.
I tested with the below and was able to replicate the behavior in the console. To resolve this, follow the Point 1 section of my full answer.
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
--Update End--
Here you go. I still don't have your exact serverless.yml so I created a sample of mine similar to yours and tested it.
serverless.yml
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
          request:
            parameters:
              paths:
                tableid: true
resources:
  Resources:
    ApiGatewayMethodKataTableGetTableidVarGet:
      Properties:
        Integration:
          CacheKeyParameters:
            - method.request.path.tableid
The above should ensure the tableid path parameter is cached.
Explanation:
Point 1. You have to make sure that in your events, after your method and path, the section below is created; otherwise the subsequent resources section with CacheKeyParameters will fail. Note: the boolean true means the path parameter is required. Once you explicitly call out the path parameter, you should be able to enable caching via the console as well, without the resources section.
request:
  parameters:
    paths:
      tableid: true
Point 2. The resources section tells API Gateway to enable caching on the tableid path parameter. This is nothing but the Serverless Framework's interpretation of CloudFormation template syntax. How did I work out that I had to use ApiGatewayMethodKataTableGetTableidVarGet to make it work? Just read the guidelines and tip below to get the name.
https://serverless.com/framework/docs/providers/aws/guide/resources/
Tip: If you are unsure how a resource is named that you want to reference from your custom resources, you can issue a serverless package. This will create the CloudFormation template for your service in the .serverless folder (it is named cloudformation-template-update-stack.json). Just open the file and check for the generated resource name.
What does the above mean? First, run serverless package without the resources section, find the .serverless folder in your directory, and open the JSON file mentioned above. Look for AWS::ApiGateway::Method; you will get the exact normalized name (ApiGatewayMethodKataTableGetTableidVarGet) you can use in the resources section.
Here are some references I used.
https://medium.com/@dougmoscrop/i-set-up-api-gateway-caching-here-are-some-things-that-surprised-me-7526d954fbe6
https://serverless.com/framework/docs/providers/aws/events/apigateway#request-parameters
PS - If you still need CLI steps to enable it, let me know.

Use IAmazonDynamoDB or IDynamoDBContext (both?)

I started my Visual Studio project from AWS SDK template. It uses IDynamoDBContext in the function and IAmazonDynamoDB in the tests.
Everything worked for saving and retrieving documents when I retrieved them by an id (hash). But it stopped working when I added a range key to my table. All my tests were against AWS DynamoDB. I got it to work in two ways, though. The first was when I downloaded the local instance of DynamoDB. The second was when I replaced IDynamoDBContext with IAmazonDynamoDB in my function (so it used the same interface in both the function and my test class). I don't know what the correct solution is, but why use two interfaces in the first place? Should I keep digging into why it didn't work with different interfaces, or should I only use one of them?
// IDynamoDBContext (default) - Didn't save my item (did save it in local DynamoDB)
var test = new Test
{
    UserId = "Test",
    Id = 1
};
await DDBContext.SaveAsync<Test>(test);
// IAmazonDynamoDB - Did save my item
var putItemRequest = new PutItemRequest
{
    TableName = "TestTable",
    Item = new Dictionary<string, AttributeValue>()
    {
        { "UserId", new AttributeValue { S = "Test" }},
        { "Id", new AttributeValue { N = "1" }}
    }
};
await DDBContext.PutItemAsync(putItemRequest);
My test:
var item = new GetItemRequest
{
    TableName = "TestTable",
    Key = new Dictionary<string, AttributeValue>
    {
        { "UserId", new AttributeValue { S = "Test" } },
        { "Id", new AttributeValue { N = "1" } },
    },
};
Assert.True((await this.DDBClient.GetItemAsync(item)).Item.Count > 0);
We probably need someone on the AWS .NET SDK team to speak to this, but here is my best insight.
Amazon documentation is always fun.
The documentation does not make it overly clear, but IDynamoDBContext is found in the Amazon.DynamoDBv2.DataModel namespace, which is used for object-persistence data access.
So I think the IAmazonDynamoDB interface is used for general API calls against the DynamoDB service. The two have overlapping functionality in that both can work with given items in a DynamoDB table.
The docs, of course, are really clear in that for IDynamoDBContext it says:
Context interface for using the DataModel mode of DynamoDB. Used to interact with the service, save/load objects, etc.
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/Index.html
For IAmazonDynamoDB it says:
Interface for accessing DynamoDB (Amazon DynamoDB).
IAmazonDynamoDB is from the Amazon.DynamoDBv2 namespace, and IDynamoDBContext is found in Amazon.DynamoDBv2.DataModel.
If you look at the documentation for both, though, you will see from the methods that the actions each can perform are very different.
IAmazonDynamoDB allows you to interact with and work more broadly on DynamoDB via:
Creating tables
Deleting tables
Creating global indexes and backups
etc.
You can still work directly with items, but the number of API calls available via this interface is larger and allows working with the overall DynamoDB service.
IDynamoDBContext, meanwhile, allows you to work directly with items in a given DynamoDB table with methods like:
Save
Query
Scan
Load
Consistency is always key in programming, so always use the same interface for areas that are meant to do the same level of work. Your code and tests should therefore use the same interface, as they are focused on the same scope of work. Hopefully, based on that additional clarification, you know which interface you are after. If all your code is trying to do is work with items in a DynamoDB table, then IDynamoDBContext is probably what you want.
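As a concrete example, here is a hedged sketch (class and property names taken from the question) of what the object-persistence model needs once the table has a range key; without the range-key attribute, IDynamoDBContext cannot address the item correctly:

using Amazon.DynamoDBv2.DataModel;

// The item class must describe the table's full key schema.
[DynamoDBTable("TestTable")]
public class Test
{
    [DynamoDBHashKey]   // partition (hash) key
    public string UserId { get; set; }

    [DynamoDBRangeKey]  // sort (range) key - required once the table has one
    public int Id { get; set; }
}

// Usage with an IDynamoDBContext instance:
// await DDBContext.SaveAsync(new Test { UserId = "Test", Id = 1 });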

How to run startup code in an Azure Function

I have an Azure Function like this:
[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "AzureWebJobsServiceBus")]string myQueueItem, TraceWriter log)
{
log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
I want to dynamically bind myqueue and the AzureWebJobsServiceBus connection string at startup, or in an OnInit of the app, rather than in the method's parameters above. I mean, I want a method that runs first of all, like Program.cs in a WebJob, to bind or set up global variables. Can I do that in an Azure Function, and how?
Many thanks
The attributes here are compiled into a function.json file before deployment, which has the info on what the binding talks to. Often things like the connection string reference app settings. Neither of these can be modified within the code itself (so a Program.cs couldn't modify the function.json binding).
Can you share any more on your scenario? If you have multiple queues you want to listen to could you deploy a function per queue? Given the serverless nature of Functions there isn’t a downside to having extra functions deployed. Let me know - happy to see if we can help with what you need.
Edit
The suggestion below doesn't work for a Trigger, only for a Binding.
We have to wait for the team to support Key Vault endpoints in Azure Functions, see this GitHub issue.
I think what you are looking for is something called Imperative Bindings.
I discovered them myself just yesterday and had a question about them as well. With this type of binding you can dynamically set up the bindings you want, so you can retrieve data from somewhere else (like a global variable, or some initialization code) and use it in the binding.
The thing I have used this for is retrieving some values from Azure Key Vault, but you can of course also retrieve the data from somewhere else. Some sample code:
// Retrieving the secret from Azure Key Vault via a helper class
var connectionString = await secret.Get("CosmosConnectionStringSecret");
// Setting the AppSetting run-time with the secret value, because the Binder needs it
ConfigurationManager.AppSettings["CosmosConnectionString"] = connectionString;
// Creating an output binding
var output = await binder.BindAsync<IAsyncCollector<MinifiedUrl>>(new DocumentDBAttribute("TablesDB", "minified-urls")
{
    CreateIfNotExists = true,
    // Specify the AppSetting key which contains the actual connection string information
    ConnectionStringSetting = "CosmosConnectionString",
});
// Create the MinifiedUrl object
var create = new CreateUrlHandler();
var minifiedUrl = create.Execute(data);
// Adding the newly created object to Cosmos DB
await output.AddAsync(minifiedUrl);
There are also some other attributes you can use with imperative binding, I'm sure you'll see this in the docs (first link).
Instead of using Imperative Bindings, you can also use your application settings.
As a best practice, secrets and connection strings should be managed using app settings, rather than configuration files. This limits access to these secrets and makes it safe to store function.json in a public source control repository.
App settings are also useful whenever you want to change configuration based on the environment. For example, in a test environment, you may want to monitor a different queue or blob storage container.
App settings are resolved whenever a value is enclosed in percent signs, such as %MyAppSetting%. Note that the connection property of triggers and bindings is a special case and automatically resolves values as app settings.
The following example is an Azure Queue Storage trigger that uses an app setting %input-queue-name% to define the queue to trigger on.
{
    "bindings": [
        {
            "name": "order",
            "type": "queueTrigger",
            "direction": "in",
            "queueName": "%input-queue-name%",
            "connection": "MY_STORAGE_ACCT_APP_SETTING"
        }
    ]
}
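The same %...% resolution works in the C# attribute form of the question's trigger. A hedged variant of the original function (the input-queue-name setting is an assumption for illustration):

[FunctionName("Function1")]
public static void Run(
    // The queue name is resolved from the %input-queue-name% app setting at startup;
    // Connection names an app-setting key and resolves automatically.
    [ServiceBusTrigger("%input-queue-name%", AccessRights.Manage, Connection = "AzureWebJobsServiceBus")] string myQueueItem,
    TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}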
