I'm implementing a method to get groups from Azure AD with paging and filtering on the displayName property of the groups. Let's focus on the paging. I can already page results with $top, but my requirement says I need to return the data set for a specific page. I can do that too, but the solution is not very elegant: I repeatedly request the NextPageRequest object of the results until I reach the specified page. Here's the code:
public async Task<List<Group>> GetGroups(string filterText, int page, int pageSize = 1)
{
    var graphClient = _graphSdkHelper.GetAuthenticatedClient();
    List<Group> groups = new List<Group>();
    // Microsoft Graph API allows a minimum page size of 1 and a maximum of 999
    if (pageSize < 1 || pageSize > 999)
    {
        return groups;
    }
    try
    {
        IGraphServiceGroupsCollectionPage graphGroups;
        if (!string.IsNullOrEmpty(filterText))
        {
            graphGroups = await graphClient.Groups.Request().Filter($"startswith(displayName, '{filterText}')").Top(pageSize).GetAsync();
        }
        else
        {
            // if the filter text is empty, return all groups
            graphGroups = await graphClient.Groups.Request().OrderBy("displayName").Top(pageSize).GetAsync();
        }
        // Navigate to the requested page. This is extremely inefficient, as we make requests until we find the right page.
        // The $skip query parameter doesn't work for the groups service.
        var currentPage = 1;
        while (currentPage < page && graphGroups.NextPageRequest != null && (graphGroups = await graphGroups.NextPageRequest.GetAsync()).Count > 0)
        {
            currentPage = currentPage + 1;
        }
        foreach (var graphGroup in graphGroups)
        {
            Group group = _graphSdkHelper.TranslateGroup(graphGroup);
            groups.Add(group);
        }
    }
    catch (Exception exception)
    {
        _logger.LogError(exception, "Error while searching for groups");
    }
    return groups;
}
Question 1: How can I improve this and keep track of which page I am on? Is this even possible? Right now I can only request the next page, so reaching page 20, for example, means 20 requests to Azure.
According to this post about the Azure AD Graph API, it is not possible:
https://social.msdn.microsoft.com/Forums/en-US/199bbf92-642a-4bcc-add4-f8023a7684e2/paging-in-azure-ad-graph-client?forum=WindowsAzureAD
I just refuse to believe that real pagination is not possible in Azure AD.
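One way to cut the walk-to-page-N cost is to cache the paging tokens as you encounter them. With the Graph SDK, the NextPageRequest of each result page carries the opaque $skiptoken that fetches the following page, so a token cached for page N lets you reach that page again with a single request, and reaching a further page only costs the difference. Below is a minimal sketch of the bookkeeping, with the Graph call replaced by a delegate; SkipTokenPager and fetchPage are illustrative names, not SDK types.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: cache the opaque "skip token" that leads to each
// page, so revisiting page N costs one request instead of N sequential ones.
// fetchPage stands in for the Graph call: given a token (null for page 1)
// it returns that page's items plus the token for the following page.
public static class SkipTokenPager
{
    // page number -> token that fetches that page directly (page 1 needs none)
    private static readonly Dictionary<int, string> TokenCache = new Dictionary<int, string>();

    // Returns the number of requests issued; the page's items go to `items`.
    public static int GetPage(int page,
        Func<string, (List<string> Items, string NextToken)> fetchPage,
        out List<string> items)
    {
        var requests = 0;
        var startPage = 1;
        string token = null;
        // resume from the nearest cached page at or before the target
        for (var p = page; p >= 2; p--)
        {
            if (TokenCache.TryGetValue(p, out token)) { startPage = p; break; }
        }
        items = new List<string>();
        for (var p = startPage; p <= page; p++)
        {
            var result = fetchPage(token);
            requests++;
            items = result.Items;
            token = result.NextToken;
            if (token != null) TokenCache[p + 1] = token; // remember how to reach page p+1
            else if (p < page) { items = new List<string>(); break; } // ran past the last page
        }
        return requests;
    }
}
```

Tokens are only discovered by walking forward once, so the first visit to page 20 still costs 20 requests; every later visit to that page, or to any page before it, costs one.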
Question 2: How can I get the total number of groups with just one request? It seems that the $count query parameter is not implemented for groups or users:
https://github.com/microsoftgraph/microsoft-graph-docs/blob/master/concepts/query_parameters.md#count-parameter
Thanks
I need to read all users from the AD. Here is the code I am using:
using Novell.Directory.Ldap;
using Novell.Directory.Ldap.Controls;
using System.Linq;

namespace LdapTestApp
{
    class Program
    {
        static void Main()
        {
            LdapConnection ldapConn = new LdapConnection();
            ldapConn.SecureSocketLayer = true;
            ldapConn.Connect(HOST, PORT);
            try
            {
                var cntRead = 0;
                int? cntTotal = null;
                var curPage = 0;
                ldapConn.Bind(USERNAME, PASSWORD);
                do
                {
                    var constraints = new LdapSearchConstraints();
                    constraints.SetControls(new LdapControl[]
                    {
                        new LdapSortControl(new LdapSortKey("sn"), true),
                        new LdapVirtualListControl("sn=*", 0, 10)
                    });
                    ILdapSearchResults searchResults = ldapConn.Search(
                        "OU=All Users,DC=homecredit,DC=ru",
                        LdapConnection.ScopeSub,
                        "(&(objectCategory=person)(objectClass=user))",
                        null,
                        false,
                        constraints
                    );
                    while (searchResults.HasMore() && ((cntTotal == null) || (cntRead < cntTotal)))
                    {
                        ++cntRead;
                        try
                        {
                            LdapEntry entry = searchResults.Next();
                        }
                        catch (LdapReferralException)
                        {
                            continue;
                        }
                    }
                    ++curPage;
                    cntTotal = GetTotalCount(searchResults as LdapSearchResults);
                } while ((cntTotal != null) && (cntRead < cntTotal));
            }
            finally
            {
                ldapConn.Disconnect();
            }
        }

        private static int? GetTotalCount(LdapSearchResults results)
        {
            if (results.ResponseControls != null)
            {
                var r = (from c in results.ResponseControls
                         let d = c as LdapVirtualListResponse
                         where (d != null)
                         select (LdapVirtualListResponse)c).SingleOrDefault();
                if (r != null)
                {
                    return r.ContentCount;
                }
            }
            return null;
        }
    }
}
I used this question, Page LDAP query against AD in .NET Core using Novell LDAP, as a basis.
Unfortunately, I get this exception when I try to retrieve the very first entry:
"Unavailable Critical Extension"
000020EF: SvcErr: DSID-03140594, problem 5010 (UNAVAIL_EXTENSION), data 0
What am I doing wrong?
VLVs are browsing indexes and are not directly related to whether or not you can browse large numbers of entries (see the generic documentation). So even if this control were activated on your AD, you wouldn't be able to retrieve more than 1000 elements this way:
how VLVs work on AD
MaxPageSize is 1000 by default on AD (see documentation)
So what you can do:
use a specific paged-results control, but it seems that the Novell C# LDAP library does not have one
ask yourself the question: "is it pertinent to look for all the users in a single request?" (your request looks like a batch request: remember that an LDAP server is not designed for the same purposes as a classic database, which can easily return millions of entries; that's why most LDAP directories have default size limits of around 1000).
If the answer is no: review your design; be more specific in your LDAP search filter, your search base, etc.
If the answer is yes:
you have a single AD server: ask your administrator to change the MaxPageSize value, but this setting is global and can lead to side effects (i.e. what happens if everybody starts requesting all the users all the time?)
you have several AD servers: you can configure one for specific "batch-like" queries like the one you're trying to run (large MaxPageSize, large timeouts, etc.)
I had to use the approach described here:
https://github.com/dsbenghe/Novell.Directory.Ldap.NETStandard/issues/71#issuecomment-420917269
The solution is far from perfect, but at least I am able to move on.
Starting with version 3.5, the library supports the Simple Paged Results Control (https://ldapwiki.com/wiki/Simple%20Paged%20Results%20Control). The usage is as simple as ldapConnection.SearchUsingSimplePaging(searchOptions, pageSize) or ldapConnection.SearchUsingSimplePaging(ldapEntryConverter, searchOptions, pageSize). See the GitHub repo for more details (https://github.com/dsbenghe/Novell.Directory.Ldap.NETStandard), and in particular use the tests as usage samples.
I have a table for messages and another table for message views. I'm looking to show unread messages to an individual user. On my message model I've set up a new field that checks whether any message views exist for the current message: if they do, my Viewed bool should return true, otherwise false (they haven't viewed the message). Everything works fine, except that I'm unable to find the currently logged-in user with User.Identity.GetUserId() as I normally would. I've added the correct usings as well. Is there some limitation within a model that restricts this type of call? If so, how can I find the current user within a model?
public bool Viewed
{
    get
    {
        ApplicationDbContext db = new ApplicationDbContext();
        //var _userId = User.Identity.GetUserId();
        var _userId = Thread.CurrentPrincipal.Identity.GetUserId();
        List<MessageView> m_List = db.MessageView.Where(u => u.UserId == _userId && u.MessageId == O_MessageId).ToList();
        var count = m_List.Count();
        if (count >= 1)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}
Resolved: Outside of a controller you can find the current user with this -
var _userId = System.Web.HttpContext.Current.User.Identity.GetUserId();
I have this code, which makes a call to the Facebook API in order to get the mutual friends on the application for two given users. The parameter nextPage corresponds to the Facebook API's next page.
The problem is that although I have a limited number of common friends each time, I get errors from the Facebook API for having too many calls per second (about 5 million/day overall). I tried mocking the call to Facebook to always return null, but that resulted in CPU overload from w3wp, with 99% usage. What am I missing?
public async Task<MutualFriendsModel> GetMutualFriends(Guid currentUserGuid,
    Guid visitedUserGuid, string nextPage)
{
    var currentUser = Get(currentUserGuid);
    var visitedUser = _serviceUserLogin.GetByUserId(visitedUserGuid);
    var mutualFriends = new MutualFriendsModel();
    var hasNextPage = true;
    while (hasNextPage && mutualFriends.Users.Count < 25)
    {
        var facebookResult = await
            _facebookApi.GetMutualFriendsFacebookRequest(currentUser.Token,
                visitedUser.ProviderKey, nextPage);
        if (facebookResult == null) break;
        mutualFriends.Update(facebookResult, this, _serviceUserLogin);
        nextPage = mutualFriends.NextPageUrl;
        hasNextPage = !string.IsNullOrEmpty(mutualFriends.NextPageUrl);
    }
    return mutualFriends;
}
Also, there is another variation of the above mentioned snippet, which only counts the mutual friends.
private async Task<IList<SavedCommentModel>> MutualFriendsCount(User currentUser,
    IList<SavedCommentModel> comments)
{
    var usersFriends = new Dictionary<Guid, long>();
    foreach (var comment in comments)
    {
        if (usersFriends.ContainsKey(comment.UserId))
        {
            comment.TotalMutualFriends = usersFriends[comment.UserId];
        }
        else
        {
            var visitedUser = _serviceUserLogin.GetByUserId(comment.UserId);
            if (visitedUser.LoginProvider != LoginProvider.Facebook.ToString()) continue;
            var facebookResult = await
                _facebookApi.GetMutualFriendsFacebookRequest(currentUser.Token, visitedUser.ProviderKey);
            if (facebookResult == null) continue;
            comment.TotalMutualFriends = facebookResult.Context.Mutual_friends.Summary.Total_Count;
            usersFriends.Add(comment.UserId, comment.TotalMutualFriends);
        }
    }
    return comments;
}
I wrote a simple .NET Windows service that pushes documents to Apache Solr v4.1. For access to Solr, I used SolrNet. My code is:
var solr = _container.Resolve<ISolrOperations<Document>>();
solr.Delete(SolrQuery.All);
var docs = from o in documents
           orderby o.Id ascending
           select o;
for (var i = 0; i < docs.Count(); i++)
{
    var texts = new List<string>();
    if (docs.ToList()[i].DocumentAttachments.Count > 0)
    {
        foreach (var attach in docs.ToList()[i].DocumentAttachments)
        {
            using (var fileStream = System.IO.File.OpenRead(...))
            {
                var extractResult = solr.Extract(
                    new ExtractParameters(fileStream, attach.Id.ToString(CultureInfo.InvariantCulture))
                    {
                        ExtractFormat = ExtractFormat.Text,
                        ExtractOnly = true
                    }
                );
                texts.Add(extractResult.Content);
            }
        }
    }
    docs.ToList()[i].GetFilesText = texts;
    solr.Add(docs.ToList()[i]);
    if (i % _commitStep == 0)
    {
        solr.Commit();
        solr.Optimize();
    }
}
solr.Commit();
solr.Optimize();
solr.BuildSpellCheckDictionary();
"Document.GetFilesText" - this is a field, storing text, extracted from pdf files.
This example is cleaned from logging methods(writes to Windows Event Log). While indexing, I'm watched to:
a) Event Log - shows documents indexing progress
b) "Core Admin" page in "Solr Admin" webapp - shows count of documents in index
When I'm just indexing documents, without searching, all works right - event log shows "7500 docs added" entry, "Core Admin" shows num docs = 7500.
But, if I try to search documents during indexing, I have these errors:
- search results contains not all passed documents
- "Core Admin" resets num docs value. For example, EventLog shows 7500 docs indexed, but "Core Admin" shows num docs=23. And num docs resets every time, when I'm querying Solr.
My querying code:
searchPhrase = textBox1.Text;
var documents = Solr.Query(new SolrQuery(searchPhrase), new QueryOptions
{
    Highlight = new HighlightingParameters
    {
        UsePhraseHighlighter = true,
        Fields = new Collection<string> { "Field1", "Field2", "Field3" },
        BeforeTerm = "<b>",
        AfterTerm = "</b>"
    },
    Rows = 100
});
UPD: to make things clear
I have these lines in my webapp's "search" page:
public class MyController : Controller
{
    public ISolrOperations<Document> Solr { get; set; }

    public MyController()
    {
        //_solr = solr;
    }

    //
    // GET: /Search/My/
    public ActionResult Index()
    {
        Solr.Delete(SolrQuery.All);
        return View();
    }
    ...
And opening this page in a browser causes a total loss of documents from the Solr index. :-)
You are seeing this behavior because the first thing you do is clear the index.
solr.Delete(SolrQuery.All)
This removes all documents from the index. So once reindexing starts the index will be empty.
Now in your subsequent code you are adding the items back into the index in batches. However, any new documents you add to the index will not be visible to users querying it until a commit is issued. Since you are adding documents and issuing commits in batches, your document counts increase while you are rebuilding, and not all documents are visible along the way. The count of documents in the index will not reach 7500 until the last commit is issued.
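The effect described above can be reproduced with a toy model of commit visibility. Nothing below is SolrNet API; ToyIndex is purely illustrative: Add puts documents in a pending buffer, and only Commit makes them searchable.

```csharp
using System;
using System.Collections.Generic;

// Toy model of Solr's commit semantics: added documents sit in a pending
// buffer and only become searchable after a commit. This is why searches
// issued mid-rebuild see partial counts.
public sealed class ToyIndex
{
    private readonly List<string> _pending = new List<string>();
    private readonly List<string> _visible = new List<string>();

    public void DeleteAll() { _pending.Clear(); _visible.Clear(); }
    public void Add(string doc) { _pending.Add(doc); }
    public void Commit() { _visible.AddRange(_pending); _pending.Clear(); }
    public int NumDocs => _visible.Count; // what "Core Admin" reports
}
```

This mirrors the reindexing loop in the question: num docs climbs in steps of the commit batch size and only reaches the full count after the final commit.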
There might be a couple of options to help alleviate this for you.
Issue soft commits to Solr using commitWithin, or enable auto soft commits in Solr. CommitWithin is supported as an optional AddParameters argument to the Add method in SolrNet. You could issue solr.Add(docs.ToList()[i], new AddParameters { CommitWithin = 3000 });, which would tell Solr to commit this batch of items within 3 seconds.
Use Solr cores to have an "active" core that users search against while you reload your data into a "standby" core. Once the load process to the standby core has completed, you can issue a command to SWAP the cores, and this will be totally transparent to any users. CoreAdmin commands are supported in SolrNet as well; see the tests in SolrCoreAdminFixture.cs for examples.
Hope this helps.
I'm trying to rewrite a search from System.DirectoryServices to System.DirectoryServices.Protocols.
In S.DS I get all the requested attributes back, but in S.DS.P I don't get the GUID or the HomePhone.
The rest of it works for one user.
Any ideas?
public static List<AllAdStudentsCV> GetUsersDistinguishedName(string domain, string distinguishedName)
{
    try
    {
        NetworkCredential credentials = new NetworkCredential(ConfigurationManager.AppSettings["AD_User"], ConfigurationManager.AppSettings["AD_Pass"]);
        LdapDirectoryIdentifier directoryIdentifier = new LdapDirectoryIdentifier(domain + ":389");
        using (LdapConnection connection = new LdapConnection(directoryIdentifier, credentials))
        {
            SearchRequest searchRequest = new SearchRequest();
            searchRequest.DistinguishedName = distinguishedName;
            searchRequest.Filter = "(&(objectCategory=person)(objectClass=user)(sn=Afcan))"; //"(&(objectClass=user))";
            searchRequest.Scope = SearchScope.Subtree;
            searchRequest.Attributes.Add("name");
            searchRequest.Attributes.Add("sAMAccountName");
            searchRequest.Attributes.Add("uid");
            searchRequest.Attributes.Add("telexNumber"); // studId
            searchRequest.Attributes.Add("HomePhone"); // ctrId
            searchRequest.SizeLimit = Int32.MaxValue;
            searchRequest.TimeLimit = new TimeSpan(0, 0, 45, 0); // 45 min - EWB
            SearchResponse searchResponse = connection.SendRequest(searchRequest) as SearchResponse;
            if (searchResponse == null) return null;
            List<AllAdStudentsCV> users = new List<AllAdStudentsCV>();
            foreach (SearchResultEntry entry in searchResponse.Entries)
            {
                AllAdStudentsCV user = new AllAdStudentsCV();
                user.Active = "Y";
                user.CenterName = "";
                user.StudId = GetstringAttributeValue(entry.Attributes, "telexNumber");
                user.CtrId = GetstringAttributeValue(entry.Attributes, "HomePhone");
                user.Guid = GetstringAttributeValue(entry.Attributes, "uid");
                user.Username = GetstringAttributeValue(entry.Attributes, "sAMAccountName");
                users.Add(user);
            }
            return users;
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
Also, if I want to fetch EVERY user in AD so I can sync data with my SQL DB, how do I do that? I kept getting "max size exceeded" errors, even though I set the size limit to Int32.MaxValue... is there an "ignore size" option?
Thanks,
Eric-
I think the standard way is to use System.DirectoryServices, not System.DirectoryServices.Protocols. Why do you want to use the latter?
Concerning your second question about the "max size exceeded" error message: it may be because you are trying to fetch too many entries at once.
Active Directory limits the number of objects returned by a query in order not to overload the directory (the limit is something like 1000 objects). The standard way to fetch all the users is a paged search.
The algorithm is like this:
1. Construct the query that will fetch all the users.
2. Attach a specific control (the Paged Result Control) to this query, indicating that this is a paged search with 500 users per page.
3. Launch the query, fetch the first page, and parse the first 500 entries in that page.
4. Ask AD for the next page and parse the next 500 entries.
5. Repeat until there are no pages left.
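The steps above boil down to a cookie-driven loop: send the search with a paged-results control, read the cookie from the response control, and resend until the cookie comes back empty. In System.DirectoryServices.Protocols the cookie travels in the PageResultRequestControl / PageResultResponseControl pair; the sketch below keeps the loop runnable by simulating the server side with a delegate (FetchAll and searchPage are illustrative names, not S.DS.P API).

```csharp
using System;
using System.Collections.Generic;

// Sketch of the paged-search loop: the server returns an opaque cookie
// with each page; you resend it until it comes back empty. searchPage
// stands in for the LDAP round trip; with S.DS.P the cookie would be
// carried by PageResultRequestControl / PageResultResponseControl.
public static class PagedSearch
{
    public static List<string> FetchAll(Func<byte[], (List<string> Entries, byte[] Cookie)> searchPage)
    {
        var all = new List<string>();
        byte[] cookie = null; // null/empty cookie = start from the beginning
        do
        {
            var (entries, nextCookie) = searchPage(cookie);
            all.AddRange(entries);
            cookie = nextCookie;
        } while (cookie != null && cookie.Length > 0); // empty cookie = no pages left
        return all;
    }
}
```

With the real API, the same loop would set PageResultRequestControl.Cookie to the value from PageResultResponseControl.Cookie on each iteration and stop when that cookie is empty.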