Add claims to a user through Azure AD B2C using the Graph API - C#

I have a web application and am planning to move all authentication to Azure AD B2C.
I need to create users through the Graph API, which I can now do, but I also need to add claims to them.
After further googling I found out that, when adding or updating a user, you need to add an extension property. I tried adding extensions, but it does not seem to be working for me. Any help will be greatly appreciated.
I am using the sample provided by MSFT: https://github.com/AzureADQuickStarts/B2C-GraphAPI-DotNet . I can now create a user with an extension using the JSON below:
{
    "accountEnabled": true,
    "signInNames": [
        {
            "type": "emailAddress",
            "value": "kart.kala1@test.com"
        }
    ],
    "creationType": "LocalAccount",
    "displayName": "Joe Consumer",
    "mailNickname": "joec",
    "passwordProfile": {
        "password": "P@$$word!",
        "forceChangePasswordNextLogin": false
    },
    "passwordPolicies": "DisablePasswordExpiration",
    "city": "San Diego",
    "country": null,
    "facsimileTelephoneNumber": null,
    "givenName": "Joe",
    "mail": null,
    "mobile": null,
    "otherMails": [],
    "postalCode": "92130",
    "preferredLanguage": null,
    "state": "California",
    "streetAddress": null,
    "surname": "Consumer",
    "telephoneNumber": null,
    "extension_a550f811ccfe41f19e895f7931f7a28a_admin": "admin1"
}
The extension_a550f811ccfe41f19e895f7931f7a28a_admin property above worked for me to add the extension, but I only got that name by creating a user through the sign-up/sign-in policy in the Azure portal, doing a GET-USER on another user, and re-using the property name from its details. What is the alphanumeric value? Is it created at runtime, or can it be resolved from some value in the user data? I will have two accounts, stage and production, and I cannot be resolving the value or changing it at runtime.

I don't have the rep to comment, so I need to put this in answer form.
I am not entirely sure what the question is, but I have successfully used the Graph API to set/get the extension data (for example, using the JSON attribute extension_a550f811ccfe41f19e895f7931f7a28a_admin).
I am not sure how you are getting the user info, but it seems as though it might be by name, which may not be unique.
For getting/setting B2C user info you need to use the objectId of the user, as it is guaranteed to be unique. This means you need to store that objectId in your database/storage for that user.
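As a concrete illustration (not part of the original answer), here is a minimal sketch of reading a user by objectId through the Azure AD Graph API, which is what the B2C-GraphAPI-DotNet sample wraps; the tenant name, objectId and access token are placeholders, and token acquisition (e.g. a client-credentials flow) is omitted.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class B2CUserReader
{
    static async Task Main()
    {
        var tenant = "yourtenant.onmicrosoft.com";          // placeholder tenant
        var objectId = "<user-objectId>";                   // the id you stored for the user
        var accessToken = "<token for graph.windows.net>";  // placeholder token

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Azure AD Graph (the API used by the B2C sample), not Microsoft Graph.
            var url = $"https://graph.windows.net/{tenant}/users/{objectId}?api-version=1.6";
            var json = await client.GetStringAsync(url);

            // The response includes extension_<appId>_<name> properties alongside the built-in ones.
            Console.WriteLine(json);
        }
    }
}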

SSAS : How to automate cube processing and user role attribution?

I have to give a role to new users. The problem is that I have to add them to 2 different cubes in 6 different environments, which means adding the user and processing the rights table 12 times, which amounts to around an hour on my company's rather weak laptop, for EVERY new user.
Is there any way to just write some code with a list of users you want to add to a list of cubes, and have it process the table after each addition? It would be a real life saver right now.
In SSIS, you can use the Analysis Services Execute DDL Task. This can take a TMSL script as input, which would look like the one below.
1) sequence - this command allows you to perform multiple operations
2) createOrReplace - this refreshes the role with the new list of members. Note that every existing member needs to be included in the role, or they will be wiped out
3) refresh - processes the table
In SSIS, you might create a connection to each environment and loop through a set of script files, so that you would not need to modify the package to add new members.
However, I would also suggest switching to an AD group instead of adding explicit users to the role. Then you would only need to refresh the table.
{
    "sequence": {
        "operations": [
            {
                "createOrReplace": {
                    "object": {
                        "database": "<Your Database>",
                        "role": "<Your Role Name>"
                    },
                    "role": {
                        "name": "Reader",
                        "modelPermission": "read",
                        "members": [
                            { "memberName": "<Your Domain>\\<User 1>" },
                            { "memberName": "<Your Domain>\\<User 2>" }
                            <All the users in the role...>
                        ]
                    }
                }
            },
            {
                "refresh": {
                    "type": "full",
                    "objects": [
                        {
                            "database": "<Your Database>",
                            "table": "<Your Table>"
                        }
                    ]
                }
            }
        ]
    }
}
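If you would rather drive this from code than from SSIS script files, below is a rough sketch using the Tabular Object Model (the Microsoft.AnalysisServices.Tabular package); the connection strings, database, role and table names are placeholders. Unlike createOrReplace, this adds to the existing membership rather than replacing it, so existing members are not wiped out.

using System;
using System.Linq;
using Microsoft.AnalysisServices.Tabular;

class RoleUpdater
{
    static void Main()
    {
        // Placeholder environments and users - adjust to your own servers and domain.
        var environments = new[] { "Data Source=ssas-env1", "Data Source=ssas-env2" };
        var newMembers = new[] { @"MYDOMAIN\user1", @"MYDOMAIN\user2" };

        foreach (var connectionString in environments)
        {
            using (var server = new Server())
            {
                server.Connect(connectionString);

                Database database = server.Databases.FindByName("<Your Database>");
                ModelRole role = database.Model.Roles.Find("<Your Role Name>");

                foreach (var memberName in newMembers)
                {
                    // Only add members that are not already in the role.
                    bool exists = role.Members
                        .OfType<WindowsModelRoleMember>()
                        .Any(m => string.Equals(m.MemberName, memberName, StringComparison.OrdinalIgnoreCase));

                    if (!exists)
                        role.Members.Add(new WindowsModelRoleMember { MemberName = memberName, Name = memberName });
                }

                // Reprocess the rights table so the change takes effect, then commit.
                database.Model.Tables.Find("<Your Table>").RequestRefresh(RefreshType.Full);
                database.Model.SaveChanges();
            }
        }
    }
}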

I can Authenticate with my JWT but my Name claim is not recognised in my ASP.NET Core application

I can authenticate to my ASP.NET Core 2.2 Web API using a JWT, but the Name property of the Identity is null.
The claim is there, though.
Here's the decoded JWT:
{
    "id": "1-A",
    "name": "Pure Krome",
    "email": "<snip>",
    "picture": "https://<snip>",
    "locale": "en-au",
    "permissions": [
        <snip>
    ],
    "iss": "<snip>",
    "sub": "google-oauth2|<snip>",
    "aud": "<snip>",
    "exp": 1597609078,
    "iat": 1496325742
}
and here's what the server is seeing:
Also, it seems to "recognise" my email claim, though? (Note: I've just obfuscated the real email value.)
So I thought name isn't a recognised claim, and tried to see if there are standard rules for this. I found that IANA has a list of reserved and custom claims, and name is the first one in the custom claims list.
Is there some trick I need to do to get ASP.NET Core security to recognise my name claim as the NameClaimType?
Why does the email claim get recognised?
It's expecting a different claim type than the one you have.
You can set it like this:
.AddJwtBearer(o => o.TokenValidationParameters = new TokenValidationParameters
{
    NameClaimType = "name"
})
That'll change which claim it uses to populate the name.
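For context, here is a minimal sketch of how that might sit in ConfigureServices; the authority and audience values are placeholders, not taken from the question.

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(o =>
        {
            o.Authority = "https://<your-issuer>/";   // placeholder issuer
            o.Audience = "<your-audience>";           // placeholder audience
            o.TokenValidationParameters = new TokenValidationParameters
            {
                // Populate User.Identity.Name from the JWT's "name" claim.
                NameClaimType = "name"
            };
        });

    services.AddMvc();
}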

C# OData customize native metadata

I have a simple OData V4 service that returns persons.
This service is not accessible directly:
it is accessible through Apigee, i.e. at http://myApigeeServer/Person
Apigee then points to a load balancer, i.e. at http://myLoadBalancer/Person
finally, there are actually two servers behind my load balancer's virtual IP, i.e. at http://myFirstServer/Person and http://mySecondServer/Person
My problem is that the real final server URL is visible in the "@odata.context" metadata of the service response, so a call to http://myApigeeServer/Person('foo') can lead to two responses:
{
    "@odata.context": "http://myFirstServer/$metadata#Person/$entity",
    "FirstName": "John",
    "LastName": "Doe",
    "Phone@odata.type": "#Collection(String)",
    "Phone": [
        "+123456789"
    ]
}
Or
{
    "@odata.context": "http://mySecondServer/$metadata#Person/$entity",
    "FirstName": "John",
    "LastName": "Doe",
    "Phone@odata.type": "#Collection(String)",
    "Phone": [
        "+123456789"
    ]
}
I really must hide the final server names. So my question is: is it possible to totally remove the "@odata.context" metadata or to customize it? In my case, customization would mean forcing the "@odata.context" metadata value to the Apigee URL.
[---EDIT----]
Based on the link proposed by Dylan Nicholson (thanks!), there is indeed a hook on the ODataMediaTypeFormatter that enables changing the service's base URI.
But in my tests, I didn't manage to make it work for the @odata.context URI. So, from link to link and test to test, I arrived here, and lencharest's solution works perfectly well: using a custom URL helper that rewrites the base address of each link. I'll see in the future if it's too brutal...
@Max
To suppress the @odata.context annotation, you can request "application/json;odata.metadata=none" in the Accept header.
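On the client side that just means setting the Accept header; a quick sketch (the Apigee host is the one from the question, the rest is a placeholder):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class PersonClient
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "http://myApigeeServer/Person('foo')");

            // odata.metadata=none asks the service to omit control information
            // such as @odata.context from the payload.
            request.Headers.Accept.ParseAdd("application/json;odata.metadata=none");

            var response = await client.SendAsync(request);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}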

Access TFS to get capacity information

I am trying to get project information from a TFS server programmatically. I want to know how to access the capacity information. I've searched for it online, and it says that the capacity info is stored in [dbo].[tbl_TeamConfigurationCapacity].
But I don't understand how to query that table using WIQL. Does anyone have any idea about it?
This table is only available in the Project Collection database, and querying it is not supported through SQL or WIQL. While technically possible through SQL, any direct access to the Project Collection database is unsupported, and the underlying structure may change between major versions, updates and even hotfixes.
Instead of directly accessing the capacity in the database, the supported method is to use the REST API to query the capacity.
Example:
GET https://{instance}/DefaultCollection/{project}/{team}/_apis/work/TeamSettings/Iterations/{iterationid}/Capacities?api-version={version}
GET https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/Fabrikam-Fiber/_apis/work/teamsettings/iterations/2ec76bfe-ba74-4060-970d-4567a3e997ee/capacities?api-version=2.0-preview.1
{
    "values": [
        {
            "teamMember": {
                "id": "8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d",
                "displayName": "Chuck Reinhart",
                "uniqueName": "fabrikamfiber3@hotmail.com",
                "url": "https://fabrikam-fiber-inc.vssps.visualstudio.com/_apis/Identities/8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d",
                "imageUrl": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/_api/_common/identityImage?id=8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d"
            },
            "activities": [
                {
                    "capacityPerDay": 0,
                    "name": null
                }
            ],
            "daysOff": [],
            "url": "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/6d823a47-2d51-4f31-acff-74927f88ee1e/748b18b6-4b3c-425a-bcae-ff9b3e703012/_apis/work/teamsettings/iterations/2ec76bfe-ba74-4060-970d-4567a3e997ee/capacities/8c8c7d32-6b1b-47f4-b2e9-30b477b5ab3d"
        }
    ]
}
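For reference, a rough sketch of making the same call from C# with a personal access token; the account, project, team and iteration id are the sample values from above, and the PAT is a placeholder.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CapacityClient
{
    static async Task Main()
    {
        var pat = "<personal-access-token>";  // placeholder PAT
        var url = "https://fabrikam-fiber-inc.visualstudio.com/DefaultCollection/Fabrikam-Fiber" +
                  "/_apis/work/teamsettings/iterations/2ec76bfe-ba74-4060-970d-4567a3e997ee/capacities" +
                  "?api-version=2.0-preview.1";

        using (var client = new HttpClient())
        {
            // TFS/VSTS accepts a PAT via basic auth with an empty user name.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            var json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}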

Box Content API : Is modified_at field of parent folder updated when deleting an item from folder?

We are building an application, using the Box .NET SDK, to display the contents of a customer's Box account. Our synchronisation tool uses the Box Content API to retrieve folders and files and build a cache from this information. To detect whether changes have happened since the last synchronisation, we compare a folder's modified_at field.
When inserting or updating a file, the parent folder's modified_at field is updated to the correct timestamp.
When deleting a file, the parent folder's timestamp stays the same. Is this a bug or the correct behavior?
Official forum question: https://community.box.com/t5/Developer-Forum/Box-Content-API-Is-modified-at-field-of-parent-folder-updated/td-p/15335
This is a known issue, but we currently do not have a timeline on a fix. Here is a workaround to discover which files have been recently deleted.
(1) Call the Events API with these parameters: "stream_type=admin_logs&event_type=delete". This will return a list of items that have been deleted, along with each item's parent folder id.
Example Request
curl "https://api.box.com/2.0/events?stream_type=admin_logs&event_type=delete" -H "Authorization: Bearer AUTH_TOKEN"
Example Response
{
    "chunk_size": 1,
    "next_stream_position": "0000000000000000000",
    "entries": [
        {
            "source": {
                "item_type": "file",
                "item_id": "00000000000",
                "item_name": "example-file.txt",
                "parent": {
                    "type": "folder",
                    "name": "Example Folder Name",
                    "id": "0000000000"
                }
            },
            "created_by": {
                "type": "user",
                "id": "000000000",
                "name": "Example Name",
                "login": "example@example.com"
            },
            "created_at": "2016-04-15T00:00:00-07:00",
            "event_id": "00000000-0000-0000-0000-000000000000",
            "event_type": "DELETE",
            "ip_address": "Unknown IP",
            "type": "event",
            "session_id": null,
            "additional_details": {
                "version_id": "00000000000"
            }
        }
    ]
}
(2) Use the next_stream_position returned in step 1 on subsequent calls to get the deleted items after that point.
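A rough sketch of both steps from C# (the access token is a placeholder, and the stream position would normally be persisted between runs):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class BoxDeletedItemsPoller
{
    static async Task Main()
    {
        var accessToken = "<AUTH_TOKEN>";   // placeholder admin token
        var streamPosition = "0";           // persist this value between runs

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Step 1: fetch delete events from the admin event stream.
            var url = "https://api.box.com/2.0/events" +
                      "?stream_type=admin_logs&event_type=delete" +
                      "&stream_position=" + streamPosition;

            var json = JObject.Parse(await client.GetStringAsync(url));

            foreach (var entry in json["entries"])
            {
                Console.WriteLine($"{entry["source"]?["item_name"]} deleted from folder " +
                                  $"{entry["source"]?["parent"]?["id"]}");
            }

            // Step 2: carry next_stream_position forward for the next call.
            streamPosition = (string)json["next_stream_position"];
        }
    }
}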
