I'm using Autofac JSON files to register two classes for the same interface in my project.
If I do something like this:
JSON Config file 1:
{
"components": [
{
"type": "Services.FirstProvider, Services",
"services": [
{
"type": "Services.IHotelProvider, Services"
}
],
"parameters": {
"username": "<user>",
"password": "<pwd>"
}
}
]
}
JSON Config file 2:
{
"components": [
{
"type": "Services.SecondProvider, Services",
"services": [
{
"type": "Services.IHotelProvider, Services"
}
],
"parameters": {
"key": "<key>",
}
}
]
}
And register:
config.AddJsonFile("First/FirstProviderConfig.json");
config.AddJsonFile("Second/SecondProviderConfig.json");
I can see that only SecondProvider has been registered. And if I switch the registration order:
config.AddJsonFile("Second/SecondProviderConfig.json");
config.AddJsonFile("First/FirstProviderConfig.json");
Only FirstProvider has been registered.
If I try to register them in the same file:
{
"components": [
{
"type": "Services.FirstProvider, Services",
"services": [
{
"type": "Services.IHotelProvider, Services"
}
],
"parameters": {
"username": "<user>",
"password": "<pwd>"
}
},
{
"type": "Services.SecondProvider, Services",
"services": [
{
"type": "Services.IHotelProvider, Services"
}
],
"parameters": {
"key": "<key>"
}
}
]
}
It works.
I need to keep them configured in separate files. What am I missing?
The key point here is that you're now using Microsoft.Extensions.Configuration as the basis for configuration files, which means configuration is governed by the way Microsoft.Extensions.Configuration behaves.
Microsoft.Extensions.Configuration handles multiple sources by overriding settings as you layer one configuration provider on top of another.
In the simple case, say you have two configurations:
{
"my-key": "a"
}
and
{
"my-key": "b"
}
It doesn't create an array of all possible values; it'll layer the second over the first based on the key (my-key) matching and override to have the value b.
When you parse JSON configuration, it flattens everything out into key/value pairs; it does the same with XML. It does this because configuration supports environment variables, INI files, and all sorts of other backing stores.
In the case of the above very simple files, you get
my-key == b
Nice and flat. Looking at something more complex:
{
"top": {
"simple-item": "simple-value",
"array-item": ["first", "second"]
}
}
It flattens out like:
top:simple-item == simple-value
top:array-item:0 == first
top:array-item:1 == second
Notice how the array (an "ordinal collection") gets flattened? Each item gets auto-assigned a fake "key" that has a 0-based index.
Now think about how two config files will layer. If I have the above more complex configuration and then put this...
{
"top": {
"array-item": ["third"]
}
}
That one flattens out to
top:array-item:0 == third
See where I'm going here? You layer that override config over the first one and you get:
top:simple-item == simple-value
top:array-item:0 == third
top:array-item:1 == second
The arrays don't combine, the key/value settings override.
You see them in a JSON representation, but it's all just key/value pairs.
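To see the layering concretely, here is a small self-contained sketch (assuming Microsoft.Extensions.Configuration.Json 3.0 or later, which provides AddJsonStream) that loads the two documents above and reads the flattened keys:
using System;
using System.IO;
using System.Text;
using Microsoft.Extensions.Configuration;

class FlattenDemo
{
    static Stream Json(string s) => new MemoryStream(Encoding.UTF8.GetBytes(s));

    static void Main()
    {
        var config = new ConfigurationBuilder()
            .AddJsonStream(Json(@"{ ""top"": { ""simple-item"": ""simple-value"", ""array-item"": [ ""first"", ""second"" ] } }"))
            .AddJsonStream(Json(@"{ ""top"": { ""array-item"": [ ""third"" ] } }"))
            .Build();

        Console.WriteLine(config["top:simple-item"]);  // simple-value
        Console.WriteLine(config["top:array-item:0"]); // third (overridden by the second document)
        Console.WriteLine(config["top:array-item:1"]); // second (survives from the first document)
    }
}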
You have two choices to try and fix this.
Option 1: Fudge the Array (Not Recommended)
Since your first configuration is (simplified):
{
"components": [
{
"type": "Services.FirstProvider, Services",
"services": [ ...]
}
]
}
You can potentially "fudge it" a little by putting a dummy empty element in the second "override" config:
{
"components": [
{
},
{
"type": "Services.SecondProvider, Services",
"services": [ ...]
}
]
}
Last I checked, the override thing was additive-only, so empty values don't erase previously set values. By shifting the array in the second configuration by 1, it'll change the flattened version of the key/value representation and the two arrays should "merge" the way you want.
But that's pretty ugly and I wouldn't do that. I just wanted to show you one way to make it work so you'd understand why what you're doing isn't working.
Option 2: Two Separate Configuration Modules (Recommended)
Instead of trying to combine the two JSON files, just create two separate IConfiguration objects by individually loading the JSON files. Register them separately in two different ConfigurationModule registrations. It shouldn't blow up if either of the configurations is empty.
// Build each configuration separately so the two files never layer over each other.
var first = new ConfigurationBuilder();
first.AddJsonFile("autofac.json", optional: true);
var firstModule = new ConfigurationModule(first.Build());

var second = new ConfigurationBuilder();
second.AddJsonFile("autofac-overrides.json", optional: true);
var secondModule = new ConfigurationModule(second.Build());

// Register each configuration as its own module; Autofac registrations are additive.
var builder = new ContainerBuilder();
builder.RegisterModule(firstModule);
builder.RegisterModule(secondModule);
If the config is empty or missing it just won't register anything. If it's there, it will. In the case where you want to override things or add to your set of handlers for nice IEnumerable<T> resolution, this should work.
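As a quick way to verify (assuming the IHotelProvider registrations from the question's JSON files), you can resolve the whole set at once:
var container = builder.Build();

// Autofac's implicit IEnumerable<T> support returns every registration
// made for IHotelProvider, no matter which module registered it.
var providers = container.Resolve<IEnumerable<IHotelProvider>>();
// providers now contains both FirstProvider and SecondProvider.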
Related
I have a requirement where an event will emit JSON following a defined schema (the JSON-EventA schema). I have a config (config.json) that follows another schema (the JSON-EventB schema). Using data from the EventA JSON and the config from config.json, I have to generate a JSON event that follows the JSON-EventB schema. Is it possible to write a function GenerateEventData that takes the EventA data and the config and generates the EventB data, without coding POCOs and writing custom logic to convert one JSON to another? I want to be able to update the JSON-EventA schema and config.json without touching GenerateEventData; it should generate data in the new JSON-EventB schema as directed by config.json.
To illustrate with an example of what I want:
// I can get event data and ensure it follows the JSON-EventA schema
string eventAData = GetEvent(A);
/*
eventAData = {"Title": "MyTitle", "Address": {"PING": "123"}}
Follows the below schema
EventASchema = {
"type": "object",
"properties": {
"Title": {
"type": "string"
},
"Address": {
"type": "object",
"properties": {
"PIN": {
"type": "string"
}
}
}
}
};*/
string config = GetConfig();
// config = {"NewTitle": "Generated ${Title}", "PIN": "000${Address.PIN}", "Country": "US"}
//This is what I want to achieve.
string eventB = GenerateEventData(eventAData, config);
/*
eventB = {"NewTitle": "Generated MyTitle", "PIN": "000123", "Country": "US"}
*/
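One hedged sketch of such a function, using Newtonsoft.Json and treating each config value as a template whose ${path} tokens are resolved against the EventA document via SelectToken (the names below simply mirror the question; this only handles flat string templates, not nested output):
using System.Linq;
using System.Text.RegularExpressions;
using Newtonsoft.Json.Linq;

static class EventTransformer
{
    // Sketch: copy the config template and substitute every ${path} token
    // with the value found at that path in the EventA document.
    public static string GenerateEventData(string eventAJson, string configJson)
    {
        JObject source = JObject.Parse(eventAJson);
        JObject template = JObject.Parse(configJson);

        foreach (JProperty prop in template.Properties().ToList())
        {
            if (prop.Value.Type == JTokenType.String)
            {
                // e.g. "000${Address.PIN}" becomes "000123"
                prop.Value = Regex.Replace((string)prop.Value, @"\$\{([^}]+)\}",
                    m => (string)source.SelectToken(m.Groups[1].Value) ?? "");
            }
        }
        return template.ToString();
    }
}
With the question's inputs, GenerateEventData(eventAData, config) would produce {"NewTitle": "Generated MyTitle", "PIN": "000123", "Country": "US"}.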
I'm using nlog for logging, and c# activity for spans with contexts such as traceparent / traceid.
I am adding an optional custom property to the activity, like this: activity.SetCustomProperty(customProperty.Key, customProperty.Value)
I would like to add the value to a log record.
However, I'm unsure on the correct configuration for NLog.
When configuring NLog layouts, you can set attribute layouts.
One of the layout renderers is the ActivityTraceLayoutRenderer.
Here is how I'm trying to configure the ActivityTraceLayoutRenderer, assuming the CustomProperty configured on the trace is "ReservationId":
"layout": {
"type": "JsonLayout",
"attributes": [
{
"name": "timestamp",
"layout": "${date:format=yyyy-MM-ddTHH\\:mm\\:ss.ffffff}"
},
{
"name": "traceId",
"layout": "${activity:property=TraceId}"
},
{
"name": "reservationId",
"layout": "${activity:{property=CustomProperty,item=ReservationId}}"
}
]
}
However, the resulting logs look... weird:
{ "timestamp": "2022-08-08T14:39:38.000840", "reservationId": "}" }
I'm doing a proof of concept with connecting to dynamodb, but I appear to be having issues with permissions.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": [
"arn:aws:dynamodb:us-west-1:172777662485:table/*"
],
"Condition": {
"ForAllValues:StringEquals": {
"dynamodb:LeadingKeys": [
"${www.amazon.com:user_id}"
]
}
}
}
]
}
The code:
DynamoDBContext context = new DynamoDBContext(
    new AmazonDynamoDBClient("...", "...",
        new AmazonDynamoDBConfig() { RegionEndpoint = RegionEndpoint.USWest1, MaxErrorRetry = 6 }));
var writeContext = context.CreateBatchWrite<Music>();
writeContext.AddPutItem(new Music() { Artist = "Test Artist", SongTitle = "SongTitle" });
writeContext.Execute();
The error I get is:
arn:aws:iam::...:user/DynamoDBTestUser is not authorized to perform: dynamodb:BatchWriteItem on resource: arn:aws:dynamodb:us-west-1:172777662485:table/Music
Does anyone see anything wrong here?
Thanks.
Your policy appears to be based on some of the samples from this document:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/specifying-conditions.html
However, your code does not appear to be adding a record with a partition key matching the username.
So there are two possible solutions, depending on your desired outcome.
Solution 1:
Assuming you simply want to insert records into the table, you can use a policy such as:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:*"
],
"Resource": "*"
}]
}
This policy grants the user full DynamoDB access to any DynamoDB resource.
Solution 2:
If you want to restrict access to the table so that the user can only read/put records that belong to them, then you need to ensure the primary key attribute in your Music structure is populated with your IAM user name.
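For reference, the fine-grained access control pattern from that AWS document looks roughly like this (a sketch; note that ${aws:username} assumes plain IAM users, whereas your policy uses the web-identity variable ${www.amazon.com:user_id}):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-1:172777662485:table/Music",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [ "${aws:username}" ]
        }
      }
    }
  ]
}
Under a policy like this, the partition key value your code writes (Artist) would have to equal the caller's user name, which "Test Artist" does not.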
I am using Andy Crum's EmberDataModelMaker.
Having punched in the following two classes
// app/models/server-item.js
export default DS.Model.extend({
hostName: DS.attr('string'),
syncServers: DS.hasMany('string'),
subscribers: DS.hasMany('string'),
mailHost: DS.attr('string'),
mailHostLogin: DS.hasMany('credentials')
});
// app/models/credentials.js
export default DS.Model.extend({
user: DS.attr('string'),
password: DS.attr('string'),
server: DS.belongsTo('serverItem')
});
It's showing the following three different expected JSON formats (a very nice feature btw.):
DS.RESTAdapter
"serverItems": [
{
"id": 1,
"hostName": "foo",
"syncServers": [
<stringids>
],
"subscribers": [
<stringids>
],
"mailHost": "foo",
"mailHostLogin": [
<Credentialsids>
]
}
],
"credentials": [
{
"id": 1,
"user": "foo",
"password": "foo",
"server": <ServerItemid>
}
]
DS.ActiveModelAdapter
"serverItems": [
{
"id": 1,
"host_name": "foo",
"sync_server_ids": [
<stringids>
],
"subscriber_ids": [
<stringids>
],
"mail_host": "foo",
"mail_host_login_ids": [
<Credentialsids>
]
}
],
"credentials": [
{
"id": 1,
"user": "foo",
"password": "foo",
"server_id": <ServerItemid>
}
]
DS.JSONAPIAdapter
{
"data": {
"type": "server-items",
"id": "1",
"attributes": {
"HostName": "foo",
"MailHost": "foo",
},
"relationships": {
"SyncServers": {
"data": {
"type": "SyncServers",
"id": <SyncServersid>
}
},
"Subscribers": {
"data": {
"type": "Subscribers",
"id": <Subscribersid>
}
},
"MailHostLogin": {
"data": {
"type": "MailHostLogin",
"id": <MailHostLoginid>
}
}
},
"included": [
{
<sideloadedrelationships>
]
}
}
}
{
"data": {
"type": "credentials",
"id": "1",
"attributes": {
"User": "foo",
"Password": "foo",
},
"relationships": {
"Server": {
"data": {
"type": "Server",
"id": <Serverid>
}
}
},
"included": [
{
<sideloadedrelationships>
]
}
}
}
I am going to implement (or rather change) some WebServices on the Server side (using C#, ASP.NET Web API). Currently, the WebService already creates a result that is pretty similar to the format expected with DS.RESTAdapter - obviously, it would be ideal if I could use it without compromising the Data Integrity - can I?
If yes, would it empower Ember Data to send all the requests necessary to maintain the data consistency on the server? Meaning, would the client send a DELETE request to the server not only for the ServerItem but also for the Credentials item that is referenced via the mailHostLogin property when the user wants to delete a ServerItem?
If not: are both of the other two adapters fulfilling the above mentioned consistency requirement? Which of the other two should I implement - any experiences/recommendations out there?
You should choose whichever adapter most closely fits your API data structure as a basis (sounds like DS.RESTAdapter in this case). You can extend the adapters and serializers that are the closest fit to make any necessary adjustments (this can be done both application-wide and on a per-model basis).
However, I don't think the Ember Data model relationships (i.e. belongsTo and hasMany) are binding in a way that will automatically produce the "data consistency" you are looking for. If your application requires that all associated Credentials records be deleted when a ServerItem is deleted, I would recommend doing that server side when handling the DELETE ServerItem API request. That results in better performance (1 HTTP call instead of 2, or N, depending on whether credentials can be deleted in bulk) and is much less error prone, since calls that delete Credentials after a ServerItem is deleted could fail partway through due to network or other failures.
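Since you mention implementing the services in ASP.NET Web API, a purely hypothetical sketch of that server-side cascade (the controller, AppDbContext, and property names here are made up for illustration) could look like:
using System.Linq;
using System.Net;
using System.Web.Http;

public class ServerItemsController : ApiController
{
    private readonly AppDbContext _db = new AppDbContext(); // hypothetical EF context

    [HttpDelete]
    public IHttpActionResult Delete(int id)
    {
        var serverItem = _db.ServerItems.Find(id);
        if (serverItem == null)
            return NotFound();

        // Delete the dependent Credentials in the same request, then the
        // ServerItem itself, so the client only makes one HTTP call.
        var credentials = _db.Credentials.Where(c => c.ServerId == id);
        _db.Credentials.RemoveRange(credentials);
        _db.ServerItems.Remove(serverItem);
        _db.SaveChanges();

        return StatusCode(HttpStatusCode.NoContent);
    }
}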
After a successful ServerItem delete, you could then loop through its credentials and unload the records from the client-side store to keep it in sync with the new state on the server. Something like:
// Pass `this` as the thisArg so `this.store` resolves inside the callback;
// 'credentials' matches the model name defined in app/models/credentials.js.
serverItemCredentials.forEach(function(id) {
  if (this.store.recordIsLoaded('credentials', id)) {
    this.store.unloadRecord(this.store.peekRecord('credentials', id));
  }
}, this);
While getting our WCF Data Service ready for production we encountered an issue with the behaviour of the expand operator when paging is enabled.
With paging disabled, expand works as expected. But when I enable paging on any of the expanded entity sets, no matter what the page sizes, the expanded entities appear to page with a size of 1.
[UPDATE]
In the absence of any further input from here or the MSDN forums I've created a bug on Connect. Maybe someone over the wall will get to the bottom of it!
For example, suppose I have the following simple model:
It's running on a generated SQL database with some sample data:
INSERT INTO [dbo].[Towns] (Name) VALUES ('Berlin');
INSERT INTO [dbo].[Towns] (Name) VALUES ('Rome');
INSERT INTO [dbo].[Towns] (Name) VALUES ('Paris');
INSERT INTO [dbo].[Gentlemen] (Id, Name) VALUES (1, 'Johnny');
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Frieda', 'Berlin', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Adelita', 'Berlin', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Milla', 'Berlin', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Georgine', 'Paris', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Nannette', 'Paris', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Verona', 'Rome', 1);
INSERT INTO [dbo].[Ladies] (Name, Town_Name, Gentleman_Id) VALUES ('Gavriella', 'Rome', 1);
The Data Service is straightforward (note that here paging is disabled):
namespace TestWCFDataService
{
public class TestWCFDataService : DataService<TestModel.TestModelContainer>
{
// This method is called only once to initialize service-wide policies.
public static void InitializeService(DataServiceConfiguration config)
{
config.SetEntitySetAccessRule("Ladies", EntitySetRights.AllRead);
config.SetEntitySetAccessRule("Gentlemen", EntitySetRights.AllRead);
config.SetEntitySetAccessRule("Towns", EntitySetRights.AllRead);
//config.SetEntitySetPageSize("Ladies", 10);
//config.SetEntitySetPageSize("Gentlemen", 10);
//config.SetEntitySetPageSize("Towns", 10);
config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}
}
}
Now, my user wants to find every Lady whose Town is "Berlin" and also who their Gentleman is.
The query in question is:
http://localhost:62946/TestWCFDataService.svc/Towns('Berlin')?$expand=Ladies/Gentleman
When I run this query (JSON because the Atom version is gigantic), I get the expected output: a town with three ladies, all of whom have Johnny as their gentleman.
var result = {
"d": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Towns('Berlin')", "type": "TestModel.Town"
}, "Name": "Berlin", "Ladies": [
{
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(1)", "type": "TestModel.Lady"
}, "Id": 1, "Name": "Frieda", "Gentleman": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)", "type": "TestModel.Gentleman"
}, "Id": 1, "Name": "Johnny", "Ladies": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)/Ladies"
}
}
}, "Town": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(1)/Town"
}
}
}, {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(2)", "type": "TestModel.Lady"
}, "Id": 2, "Name": "Adelita", "Gentleman": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)", "type": "TestModel.Gentleman"
}, "Id": 1, "Name": "Johnny", "Ladies": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)/Ladies"
}
}
}, "Town": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(2)/Town"
}
}
}, {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(3)", "type": "TestModel.Lady"
}, "Id": 3, "Name": "Milla", "Gentleman": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)", "type": "TestModel.Gentleman"
}, "Id": 1, "Name": "Johnny", "Ladies": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)/Ladies"
}
}
}, "Town": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(3)/Town"
}
}
}
]
}
}
There are going to be many Towns eventually so I enable paging for Town.
...
config.SetEntitySetPageSize("Towns", 10);
...
The query continues to function as expected. But there are also going to be a lot of Ladies and Gentlemen so I want to be able to limit the number of results that are returned:
...
config.SetEntitySetPageSize("Ladies", 10);
config.SetEntitySetPageSize("Gentlemen", 10);
...
But when I set a page size on either the Ladies entity set or the Gentlemen entity set (or both) the results of my query change unexpectedly:
var result = {
"d": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Towns('Berlin')", "type": "TestModel.Town"
}, "Name": "Berlin", "Ladies": {
"results": [
{
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(1)", "type": "TestModel.Lady"
}, "Id": 1, "Name": "Frieda", "Gentleman": {
"__metadata": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)", "type": "TestModel.Gentleman"
}, "Id": 1, "Name": "Johnny", "Ladies": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Gentlemen(1)/Ladies"
}
}
}, "Town": {
"__deferred": {
"uri": "http://localhost:62946/TestWCFDataService.svc/Ladies(1)/Town"
}
}
}
]
}
}
}
The expand now includes only one of the Lady objects (although at least her Gentleman is included). No matter how large the page size is, the query still returns only one object in the expanded collection.
It also does not matter whether the page size is set on one or both of the expanded entity sets; as long as one of them has a page size set, only one of the Lady objects will be eagerly loaded.
This behaviour smells buggy to me, as according to the OData Specification:
"A URI with a $expand System Query Option indicates that Entries associated with the Entry or Collection of Entries identified by the Resource Path section of the URI must be represented inline (i.e. eagerly loaded)."
Am I misreading the spec? Should I have expected this behaviour? I just want to be able to limit the page size of the entity sets when accessed directly but also have them eagerly loadable.
Is it a bug in WCF Data Services? (or my code? or my brain?)
[EDIT]
More info: the documentation for WCF Data Services states that:
"Also, when paging is enabled in the data service, you must explicitly load subsequent data pages from the service."
But I can't find an explanation of why the page size for the related entity sets seems to default to 1 no matter what page size is specified.
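For what it's worth, explicitly loading those subsequent pages from the client looks something like this with the WCF Data Services client library (TestModelContainer, Ladies, and Lady are assumed from the generated service reference):
var ctx = new TestModelContainer(new Uri("http://localhost:62946/TestWCFDataService.svc"));
var response = (QueryOperationResponse<Lady>)ctx.Ladies.Execute();
DataServiceQueryContinuation<Lady> token = null;
do
{
    if (token != null)
    {
        // Ask the service for the next page using the continuation token.
        response = ctx.Execute(token);
    }
    foreach (var lady in response)
        Console.WriteLine(lady.Name);
} while ((token = response.GetContinuation()) != null);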
[EDIT]
Yet more info: the version in question is on .NET 4 version 4.0.30319 with System.Data.Services version 4.0.0.0. It's the version that comes in the box with Visual Studio 2010 (with SP1 installed).
[EDIT]
A sample solution showing the behaviour is now up in a github repository. It's got paging turned on in the InitializeService method and a DB creation script that also adds some sample data so that we're on the same page.
It took a few months, but this is apparently going to be fixed in the next version:
Posted by Microsoft on 12/15/2011 at 8:08 AM
Thank you for reporting this issue. We have confirmed that a bug in the Entity Framework is causing this issue. The fix required changes to the core Entity Framework components that ship in the .NET Framework. We have fixed the bug, the fix will be included in the next release of the .NET Framework.
What version of WCF Data Services are you using?
There is a bug that I found relating to using Expand with server-driven paging in .NET Framework 4, but I thought that it only affected entities with compound keys and when using the OrderBy option, neither of which seem to apply here.
Still it definitely sounds like a bug.
Have you tried using Atom instead of JSON, and if so are the entities in the expand still missing?
Your query:
localhost:62946/TestWCFDataService.svc/Towns('Berlin')?$expand=Ladies/Gentleman
doesn't expand Ladies, only the Gentleman.
The query should look like:
localhost:62946/TestWCFDataService.svc/Towns('Berlin')?$expand=Ladies,Ladies/Gentleman
Hope this helps!
Monica Frintu