I am uploading an Excel file containing all the required users into my website, using ASP.NET Identity with OwinContext and EF 6.
My code looks like this:
foreach (var bulkUserDetail in bulkUser.BulkUserDetails)
{
var userManager = owinContext.GetUserManager<ApplicationUserManager>();
var userProfile = new UserProfile();
userProfile.Username = bulkUserDetail.Username;
AspNetUser newUser = new AspNetUser
{
UserName = userProfile.Username,
Email = bulkUserDetail.Email,
LastPasswordChangedDate = null,
};
var creationResult = userManager.Create(newUser);
if (creationResult.Succeeded)
{
string token = userManager.GeneratePasswordResetToken(newUser.Id);
}
}
The issue is that the performance of the following two lines is pretty disappointing:
userManager.Create(newUser) -- (900 milliseconds)
userManager.GeneratePasswordResetToken(newUser.Id) -- (1800 milliseconds)
With large quantities, e.g. 2,000 users, the performance becomes a serious issue.
Is there a better practice to speed up this process? I am open to suggestions, but I have to keep the OwinContext library.
Thanks in advance
You could try doing the user creation inside a parallel loop, which might reduce the overall time; however, there is an issue with this:
The calls to Create and GeneratePasswordResetToken are slow because they hit the database.
Doing the work in parallel increases the number of concurrent calls to the database, potentially slowing it down even more; how much depends on the hardware hosting your database.
var userManager = owinContext.GetUserManager<ApplicationUserManager>();
Parallel.ForEach (bulkUser.BulkUserDetails, bulkUserDetail =>
{
//Do you really need to create this userProfile, as it's not used?
var userProfile = new UserProfile();
userProfile.Username = bulkUserDetail.Username;
AspNetUser newUser = new AspNetUser
{
UserName = userProfile.Username,
Email = bulkUserDetail.Email,
LastPasswordChangedDate = null,
};
var creationResult = userManager.Create(newUser);
if (creationResult.Succeeded)
{
string token = userManager.GeneratePasswordResetToken(newUser.Id);
}
});
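If the extra concurrent calls overload the database, you could also cap the degree of parallelism instead of letting the runtime decide. A minimal sketch, reusing the same loop body as above (the value 4 is only an example to tune against your hardware):
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = 4 }; // example limit; tune for your database
Parallel.ForEach(bulkUser.BulkUserDetails, parallelOptions, bulkUserDetail =>
{
    // ... same user creation and token generation as in the loop above ...
});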
You can create a CSV load job to load data from a CSV file in Google Cloud Storage by using the BigQueryClient in Google.Cloud.BigQuery.V2, which has a CreateLoadJob method.
How can you guarantee idempotency with this API, so that if, say, the network dropped before you got a response and you kicked off a retry, you would not end up with the same data being loaded into BigQuery multiple times?
Example API usage
private void LoadCsv(string sourceUri, string tableId, string timePartitionField)
{
var tableReference = new TableReference()
{
DatasetId = _dataSetId,
ProjectId = _projectId,
TableId = tableId
};
var options = new CreateLoadJobOptions
{
WriteDisposition = WriteDisposition.WriteAppend,
CreateDisposition = CreateDisposition.CreateNever,
SkipLeadingRows = 1,
SourceFormat = FileFormat.Csv,
TimePartitioning = new TimePartitioning
{
Type = _partitionByDayType,
Field = timePartitionField
}
};
BigQueryJob loadJob = _bigQueryClient.CreateLoadJob(sourceUri: sourceUri,
destination: tableReference,
schema: null,
options: options);
loadJob.PollUntilCompletedAsync().Wait();
if (loadJob.Status.Errors == null || !loadJob.Status.Errors.Any())
{
//Log success
return;
}
//Log error
}
You can achieve idempotency by generating your own job ID based on, for example, the file location you loaded and the target table:
job_id = 'my_load_job_{}'.format(hashlib.md5(sourceUri+_projectId+_datasetId+tableId).hexdigest())
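The line above is Python-flavoured pseudocode; in C# the same idea might look like the sketch below, which uses MD5 purely to derive a stable, repeatable job ID from the same inputs used elsewhere in this example.
string job_id;
using (var md5 = System.Security.Cryptography.MD5.Create())
{
    // Hash the source file location plus the destination so every retry produces the same id
    byte[] hash = md5.ComputeHash(
        System.Text.Encoding.UTF8.GetBytes(sourceUri + _projectId + _dataSetId + tableId));
    job_id = "my_load_job_" + BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
}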
var options = new CreateLoadJobOptions
{
WriteDisposition = WriteDisposition.WriteAppend,
CreateDisposition = CreateDisposition.CreateNever,
SkipLeadingRows = 1,
JobId = job_id, // add this
SourceFormat = FileFormat.Csv,
TimePartitioning = new TimePartitioning
{
Type = _partitionByDayType,
Field = timePartitionField
}
};
In this case, if you try to re-insert the same job_id, you will get an error.
You can also easily regenerate the same job_id later to check on the job in case polling failed.
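A sketch of handling the duplicate on retry with the C# client, assuming the rejected job surfaces as an HTTP 409 Conflict wrapped in a GoogleApiException:
try
{
    BigQueryJob loadJob = _bigQueryClient.CreateLoadJob(
        sourceUri: sourceUri, destination: tableReference, schema: null, options: options);
    loadJob.PollUntilCompleted();
}
catch (Google.GoogleApiException ex) when (ex.HttpStatusCode == System.Net.HttpStatusCode.Conflict)
{
    // The job id already exists, so this file was already submitted;
    // attach to the existing job instead of loading the data again.
    _bigQueryClient.GetJob(job_id).PollUntilCompleted();
}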
There are two places you could end up losing the response:
When creating the job to start with
When polling for completion
The first one is relatively tricky to recover from without a job ID; you could list all the jobs in the project and try to find one that looks like the one you'd otherwise create.
However, the C# client library generates a job ID so that it can retry, or you can specify your own job ID via CreateLoadJobOptions.
The second failure point is much simpler: keep the returned BigQueryJob so you can retry the polling if that fails. (You could store the job name so that you can recover even if your process dies while waiting for it to complete, for example.)
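A minimal sketch of that second case, assuming you persisted the job ID before polling (LoadPersistedJobId is a hypothetical placeholder for whatever storage you use):
string jobId = LoadPersistedJobId();              // hypothetical: read back the id stored earlier
BigQueryJob job = _bigQueryClient.GetJob(jobId);  // re-attach to the job by its id
job = job.PollUntilCompleted();                   // resume waiting for completion
if (job.Status.Errors == null || !job.Status.Errors.Any())
{
    // Log success
}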
I'm developing a "Task Control System" that will allow its users to enter task description information including when to execute the task and what environment (OS, browser, etc.) the task requires.
The 'controller' saves the description information and schedules the task. When the scheduled time arrives, the scheduler retrieves the task information and 'queues' the task for a remote machine that matches the required environment.
My first cut at this used a relational database to persist the task descriptions and enough history information to track problems (about two weeks' worth). But this is not a 'big data' problem, the relationships are simple, and I need better performance, so I'm looking for something faster.
I'm trying to use Redis for this, but I'm having some problems. I'm using ServiceStack.Redis version 3.9.71.0 for the client, with Redis 2.8.4 as the server.
This sample code is taken from Dan Swain's tutorial. It's updated to work with ServiceStack.Redis client v 3.9.71.0. Much of it works, but 'currentShippers.Remove(lameShipper);' does NOT work.
Can anyone see why that might be?
Thanks
public void ShippersUseCase()
{
using (var redisClient = new RedisClient("localhost"))
{
//Create a 'strongly-typed' API that makes all Redis Value operations to apply against Shippers
var redis = redisClient.As<Shipper>();
//Redis lists implement IList<T> while Redis sets implement ICollection<T>
var currentShippers = redis.Lists["urn:shippers:current"];
var prospectiveShippers = redis.Lists["urn:shippers:prospective"];
currentShippers.Add(
new Shipper
{
Id = redis.GetNextSequence(),
CompanyName = "Trains R Us",
DateCreated = DateTime.UtcNow,
ShipperType = ShipperType.Trains,
UniqueRef = Guid.NewGuid()
});
currentShippers.Add(
new Shipper
{
Id = redis.GetNextSequence(),
CompanyName = "Planes R Us",
DateCreated = DateTime.UtcNow,
ShipperType = ShipperType.Planes,
UniqueRef = Guid.NewGuid()
});
var lameShipper = new Shipper
{
Id = redis.GetNextSequence(),
CompanyName = "We do everything!",
DateCreated = DateTime.UtcNow,
ShipperType = ShipperType.All,
UniqueRef = Guid.NewGuid()
};
currentShippers.Add(lameShipper);
Dump("ADDED 3 SHIPPERS:", currentShippers);
currentShippers.Remove(lameShipper);
.
.
.
}
}
Fixed the problem by adding these overrides to the 'Shipper' class. Remove has to match your local instance against the items stored in Redis, so Shipper needs value-based equality (by Id) rather than the default reference equality:
public override bool Equals(object obj)
{
if (obj == null)
{
return false;
}
var input = obj as Shipper;
return input != null && Equals(input);
}
public bool Equals(Shipper other)
{
return other != null && (Id.Equals(other.Id));
}
public override int GetHashCode()
{
return (int)Id;
}
This working example shows how to get List<>.Contains, List<>.Find, and List<>.Remove working. Once the overrides were applied to the 'Shipper' class, the problem was solved!
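With the overrides in place, the list operations behave as you'd expect; for example, reusing the objects from the code above:
bool removed = currentShippers.Remove(lameShipper);      // true now that Shipper has value equality
bool stillThere = currentShippers.Contains(lameShipper); // false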
The following code does not work, and I can't explain why. My user manager is causing significant distress: it creates users and roles just fine, but when I run this code userManager.IsInRole always returns false, so the second time I run my seed I hit errors because it tries to create a record that already exists.
Please note that this occurs when I am running update-database against my migrations project. Is the fact that this is a non-ASP.NET project causing issues? If so, why? Shouldn't an error be thrown?
This is the first project in which I have used Identity, and although it seems good when it works, there is very little up-to-date, good-quality documentation available, so I would be grateful for any sources.
public void Run(BlogContext blogContext)
{
var userStore = new UserStore<User>((BlogContext) blogContext);
var userManager = new UserManager<User>(userStore);
var userRoles = new List<UserRole>()
{
new UserRole() {Username = "SysAdmin#test.com", Role = "SysAdmin"},
new UserRole() {Username = "testAdmin#test.com", Role = "Admin"},
new UserRole() {Username = "testAuthor#test.com", Role = "Author"}
};
foreach (var userRole in userRoles)
{
var userId = userManager.FindByName(userRole.Username).Id;
if (!userManager.IsInRole(userId, userRole.Role))
userManager.AddToRole(userId, userRole.Role);
}
blogContext.SaveChanges();
}
So I will answer this myself to save anyone else the hours of pain I suffered because of this.
The reason this occurred was that I had lazy loading disabled; I have now enabled it in my Migrations project like so:
protected override void Seed(BlogContext blogContext)
{
AutomaticMigrationsEnabled = true;
blogContext.Configuration.LazyLoadingEnabled = true;
//Add seed classes here!
}
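Alternatively, the setting can live in the context itself so that every consumer of BlogContext gets it, not just the seed. A sketch, assuming BlogContext derives from IdentityDbContext<User> and that "DefaultConnection" is the connection string name:
using Microsoft.AspNet.Identity.EntityFramework;

public class BlogContext : IdentityDbContext<User>
{
    public BlogContext() : base("DefaultConnection")
    {
        // Keep lazy loading enabled for every instance of the context,
        // not only the one used while seeding.
        Configuration.LazyLoadingEnabled = true;
    }
}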
We are having an issue with searching a custom record through SuiteTalk. Below is a sample of what we are calling. The problem is in setting up the search using the internalId of the record: in our initial development account the internal id of this custom record is 482, but when we deployed it through our bundle the record was assigned the internal id 314. It stands to reason that this internal id is not static from installation to installation, so we wondered which property we should use to reference the custom record. When we made the record we assigned its scriptId as 'customrecord_myCustomRecord', but through SuiteTalk we do not have a scriptId. What is the best way to make this code work in all environments rather than a specific one? If there is one, could you give an example of how it might be used?
Code (C#) that we are attempting to make the call from. We are using the 2013.2 endpoints at this time.
private SearchResult NetSuite_getPackageContentsCustomRecord(string sParentRef)
{
List<object> PackageSearchResults = new List<object>();
CustomRecord custRec = new CustomRecord();
CustomRecordSearch customRecordSearch = new CustomRecordSearch();
SearchMultiSelectCustomField searchFilter1 = new SearchMultiSelectCustomField();
searchFilter1.internalId = "customrecord_myCustomRecord_sublist";
searchFilter1.@operator = SearchMultiSelectFieldOperator.anyOf;
searchFilter1.operatorSpecified = true;
ListOrRecordRef lRecordRef = new ListOrRecordRef();
lRecordRef.internalId = sParentRef;
searchFilter1.searchValue = new ListOrRecordRef[] { lRecordRef };
CustomRecordSearchBasic customRecordBasic = new CustomRecordSearchBasic();
customRecordBasic.recType = new RecordRef();
customRecordBasic.recType.internalId = "314"; // "482"; //THIS LINE IS GIVING US THE TROUBLE
//customRecordBasic.recType.name = "customrecord_myCustomRecord";
customRecordBasic.customFieldList = new SearchCustomField[] { searchFilter1 };
customRecordSearch.basic = customRecordBasic;
// Search for the customer entity
SearchResult results = _service.search(customRecordSearch);
return results;
}
I searched all over for a solution to avoid hardcoding internalIds. Even NetSuite support failed to give me one. Finally I stumbled upon a solution in NetSuite's knowledge base: getCustomizationId.
This returns the internalId, scriptId and name for all custom records (or customRecordTypes in NetSuite terms, which is what made it hard to find).
public string GetCustomizationId(string scriptId)
{
// Perform getCustomizationId on custom record type
CustomizationType ct = new CustomizationType();
ct.getCustomizationTypeSpecified = true;
ct.getCustomizationType = GetCustomizationType.customRecordType;
// Retrieve active custom record type IDs. The includeInactives param is set to false.
GetCustomizationIdResult getCustIdResult = _service.getCustomizationId(ct, false);
foreach (var customizationRef in getCustIdResult.customizationRefList)
{
if (customizationRef.scriptId == scriptId) return customizationRef.internalId;
}
return null;
}
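With that helper, the hardcoded internal id in the original search can be replaced by a lookup on the stable script id, along these lines:
// Resolve the environment-specific internal id from the record type's script id
customRecordBasic.recType = new RecordRef();
customRecordBasic.recType.internalId = GetCustomizationId("customrecord_myCustomRecord");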
You can make the internalId an external property so that you can change it per environment.
The internalId changes only when you install into an environment for the first time; once deployed there, it will not change with future deployments unless you choose the Add/Rename option during deployment.
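For example, the id could be kept in the application's configuration file and read at runtime (the appSettings key name below is only an example):
// web.config / app.config:
//   <appSettings>
//     <add key="MyCustomRecordInternalId" value="314" />
//   </appSettings>
customRecordBasic.recType.internalId =
    System.Configuration.ConfigurationManager.AppSettings["MyCustomRecordInternalId"];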
I added a custom field to the UserProfile table named ClassOfYear and I'm able to get the data into the profile during registration like this:
var confirmationToken = WebSecurity.CreateUserAndAccount(model.UserName,
model.Password,
propertyValues: new { ClassOfYear = model.ClassOfYear },
requireConfirmationToken: true);
However, now I want to be able to update the profile when I manage it, but I can't seem to find a method to do so. Do I need to simply update the UserProfile table myself? If not, what is the appropriate way of doing this?
FYI, I'm using Dapper as my data access layer, in case it matters. But, as stated, I can just update the UserProfile table via Dapper if that's what I'm supposed to do; I just figured that the WebSecurity class, or something similar, already had a way, since the custom user profile fields are integrated with the CreateUserAndAccount method.
Thanks all!
There is nothing in the SimpleMembershipProvider code that does anything with the additional fields except at creation time.
Simply read and update the values yourself through your ORM.
You can use WebSecurity.GetUserId(User.Identity.Name) to get the user's id and then Dapper to work against the UserProfile table.
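A minimal sketch of that with Dapper, assuming the default UserProfile table and columns and an open IDbConnection named connection:
int userId = WebSecurity.GetUserId(User.Identity.Name);
// Dapper's Execute extension method (using Dapper;) runs the parameterised update
connection.Execute(
    "UPDATE UserProfile SET ClassOfYear = @ClassOfYear WHERE UserId = @UserId",
    new { model.ClassOfYear, UserId = userId });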
Just in case anyone is facing the same problem: after fighting a lot with SimpleMembership, I got a solution that populates both the webpages_Membership table and my custom Users table. For clarification, here is my code:
public ActionResult Register(RegisterModel model)
{
if (ModelState.IsValid)
{
TUsuario userDTO = new TUsuario()
{
Name = model.Name,
Login = model.Login,
Pass = model.Pass.ToString(CultureInfo.InvariantCulture),
Active = true,
IdCompany = model.IdCompany,
IdUserGroup = model.IdUserGroup,
};
try
{
WebSecurity.CreateUserAndAccount(model.Login, model.Pass, new { IdUser = new UserDAL().Seq.NextVal(), Name = userDTO.Name, Login = userDTO.Login, Active = userDTO.Active, Pass = userDTO.Pass, IdCompany = userDTO.IdCompany, IdUserGroup = userDTO.IdUserGroup });
WebSecurity.Login(model.Login, model.Pass);
After cursing the framework a lot, that gave me a breath of fresh air :)
P.S.: The Users table is specified in the Global.asax file using the WebSecurity.InitializeDatabaseConnection function.
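For reference, that wiring usually looks something like the call below; the table and column names here mirror the custom Users table from the code above and are assumptions about that schema:
WebSecurity.InitializeDatabaseConnection(
    "DefaultConnection",        // connection string name (example)
    "Users",                    // custom user table
    "IdUser",                   // user id column
    "Login",                    // user name column
    autoCreateTables: true);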