I have seen this question already, but it does not explain things in a way that helps me understand what is actually going on. I've been developing for years and have never come across this before (though my usage of Linq and Parallel is fairly recent).
My code is:
Parallel.ForEach(databaseMetadata.Rows.Cast<DataRow>(), row => {
var fieldName = row.Item("Name", "");
var field = this.Fields.Where(f => f.Name.ToLower() == fieldName.ToLower()).SingleOrDefault();
if(field != null) { field.Validate(this, connection, row); }
});
Within the field.Validate function it sets a property named 'HasBeenValidated' on the field object to true. However, as soon as I come out of this Parallel.ForEach loop, that property is set back to false. Can someone please explain why this is happening, and what I can do to ensure that changes made within the loop are persisted outside of it?
Edited:
Below is a copy of the code in field.Validate:
internal void Validate(EntityAttribute entity, SqlConnection connection, [AllowNull] DataRow metadata) {
this.HasBeenValidated = true;
var isRequired = this.IsRequired;
var maxLength = this.MaxLength;
var isAutoGenerated = this.IsAutoGenerated;
var dataType = this.member.PropertyType;
var dataTypeAsString = "";
if(metadata != null) {
isRequired = metadata.Item("IsRequired", false);
maxLength = metadata.Item("MaxLength", 0);
isAutoGenerated = metadata.Item("IsAutoGenerated", false);
dataTypeAsString = metadata.Item("DataType", "");
if(dataTypeAsString == this.member.PropertyType.ToSqlServerDataType()) { dataTypeAsString = ""; }
} else {
dataTypeAsString = this.member.PropertyType.ToSqlServerDataType();
}
if(metadata == null || isRequired != this.IsRequired || maxLength != this.MaxLength || isAutoGenerated != this.IsAutoGenerated || dataTypeAsString != "") {
var sql = string.Format((metadata == null ? "ALTER TABLE [{0}].[{1}] ADD" : "ALTER TABLE {0} ALTER COLUMN"), entity.Schema, entity.Name) + " " + this.Sql + ";";
if(!connection.ExecuteCommand(sql, 1)) {
throw new InvalidOperationException("Unable to create or alter column '" + this.Name + "' on table '" + entity.Name + "'.");
}
}
}
The HasBeenValidated property is defined on the field object as:
internal bool HasBeenValidated { get; set; }
Thanks in advance.
Apologies if I've wasted anyone's time, but I have figured out the cause of this issue. The this.Fields list is an IEnumerable<> of the field type, i.e. a lazily evaluated query. I thought that this would be better than having a greedy list (since this class has a good number of lists on it). The code for generating the Fields list is:
this.Fields = allProperties
.Select(property => new { Property = property, Field = property.GetCustomAttributes(typeof(FieldAttribute), true).SingleOrDefault() as FieldAttribute })
.Where(info => info.Field != null && (info.Field as ListAttribute) == null)
.Select(info => { info.Field.Member = info.Property; return info.Field; });
What I completely failed to realise was that GetCustomAttributes, rather unexpectedly (from my point of view, anyway), creates a new instance of the attribute class each time it is called.
Had this been a simpler class I might have suspected this sooner, but I was also changing properties on the field class when setting the Member property (i.e. extracting metadata from info.Property and setting properties within the field class based on it). So when I looked at the field class in the debugger, I could see that a lot of the properties had been changed, which misled me into thinking it was the same instance of the field class and not a copy.
I really apologise if I've wasted anyone's time and effort on this, but hopefully by posting my mistake I can help other people who might stumble in future when using GetCustomAttributes in a similar way inside a non-greedy Linq expression.
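A minimal sketch of the behaviour, using a hypothetical FieldAttribute stand-in rather than the real class from the project:

```csharp
using System;
using System.Linq;

// Hypothetical stand-in for the project's real FieldAttribute.
[AttributeUsage(AttributeTargets.Property)]
class FieldAttribute : Attribute { }

class Entity
{
    [Field]
    public string Name { get; set; }
}

static class Demo
{
    static void Main()
    {
        var property = typeof(Entity).GetProperty("Name");

        // Each call to GetCustomAttributes constructs a brand-new attribute instance.
        var first = property.GetCustomAttributes(typeof(FieldAttribute), true).Single();
        var second = property.GetCustomAttributes(typeof(FieldAttribute), true).Single();

        Console.WriteLine(ReferenceEquals(first, second)); // False
    }
}
```

Because of this, a lazy IEnumerable<> built over GetCustomAttributes hands out fresh attribute copies on every enumeration; appending .ToList() to the Fields query materialises it once, so mutations (like setting HasBeenValidated) stick.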
It looks like you have a synchronization problem with your DataRow objects - are you sure field.Validate doesn't update the property in the database? And that your databaseMetadata isn't just still sitting on the old data?
The arguments connection, row to field.Validate suggest you might be doing something like that...
Try to use a normal for loop and see what happens. Do you have the same problem?
Can you call databaseMetadata.Rows twice and check whether it returns the same instance or a different one?
bool isEqual = ReferenceEquals(databaseMetadata.Rows, databaseMetadata.Rows);
If it doesn't guarantee returning the same instance, I'd expect it to return a copy of the rows each time you access the Rows property.
Yes, I'm well aware not to do this, but I have no choice. I'd agree that it's an XY problem, but since I can't update the service I have to use, it's out of my hands. I need some help to save some time, and maybe learn something handy in the process.
I'm looking to map a list of models (items in this example) to what are essentially numbered variables on a service I'm posting to; in the example, those are the fields of 'newUser'.
Additionally, there may not always be X items in the list (on the right in the example), and yet I have a finite number (say 10) of numbered variables on 'newUser' to map to (on the left in the example). So I'll also have to perform a bunch of checks to avoid indexing a null value.
Current example:
if (items.Count >= 1 && !string.IsNullOrWhiteSpace(items[0].id))
{
newUser.itemId1 = items[0].id;
newUser.itemName1 = items[0].name;
newUser.itemDate1 = items[0].date;
newUser.itemBlah1 = items[0].blah;
}
else
{
// This isn't necessary, but this effectively what will happen
newUser.itemId1 = string.Empty;
newUser.itemName1 = string.Empty;
newUser.itemDate1 = string.Empty;
newUser.itemBlah1 = string.Empty;
}
if (items.Count >= 2 && !string.IsNullOrWhiteSpace(items[1].id))
{
newUser.itemId2 = items[1].id;
newUser.itemName2 = items[1].name;
newUser.itemDate2 = items[1].date;
newUser.itemBlah2 = items[1].blah;
}
// Removed the else to clean it up, but you get the idea.
// And so on, repeated many more times..
I looked into an example using a Dictionary, but I'm unsure how to map that to the model without just manually mapping all the variables.
PS: To all who come across this question: if you're implementing numbered variables in your API, please don't. It's wildly unnecessary and time-consuming.
As an alternative to fiddling with the JSON, you could get down and dirty and use Reflection.
Given the following test data:
const int maxItemsToSend = 3;
class ItemToSend {
public string
itemId1, itemName1,
itemId2, itemName2,
itemId3, itemName3;
}
ItemToSend newUser = new();
record Item(string id, string name);
Item[] items = { new("1", "A"), new("2", "B") };
Using the rules you set forth in the question, we can loop through the projected fields as so:
// If `itemid1`,`itemId2`, etc are fields:
var fields = typeof(ItemToSend).GetFields();
// If they're properties, replace GetFields() with
// .GetProperties(BindingFlags.Instance | BindingFlags.Public);
for(var i = 1; i <= maxItemsToSend; i++){
// bounds check
var item = (items.Count() >= i && !string.IsNullOrWhiteSpace(items[i-1].id))
? items[i-1] : null;
// Use Reflection to find and set the fields
fields.FirstOrDefault(f => f.Name.Equals($"itemId{i}"))
?.SetValue(newUser, item?.id ?? string.Empty);
fields.FirstOrDefault(f => f.Name.Equals($"itemName{i}"))
?.SetValue(newUser, item?.name ?? string.Empty);
}
It's not pretty, but it works. Here's a fiddle.
Here I have an example of getting inventory from a JSON string.
inventory = JsonUtility.FromJson<InventoryModel>(GFile.GetPlayer(FileComponent.PlayerInventory));
Since I am loading that string from a file, it is possible it is just blank, so I want to check that first, which I would do like this:
if(GFile.GetPlayer(FileComponent.PlayerInventory) != " ")
{
inventory = JsonUtility.FromJson<InventoryModel>(GFile.GetPlayer(FileComponent.PlayerInventory));
}
So my question is: is there a more elegant way of doing this than typing an if statement like that?
Why not like this? :
var player = GFile.GetPlayer(FileComponent.PlayerInventory);
if(!string.IsNullOrWhiteSpace(player)) {
inventory = JsonUtility.FromJson<InventoryModel>(player);
}
I'd suggest
string data = GFile.GetPlayer(FileComponent.PlayerInventory);
if(!string.IsNullOrWhiteSpace(data))
{
inventory = JsonUtility.FromJson<InventoryModel>(data);
}
This way you only call GetPlayer once, and it doesn't matter if the resulting data is an empty string or is full of whitespace - it still won't enter that block and set inventory.
Edit
For older versions of .Net, this will also work
string data = GFile.GetPlayer(FileComponent.PlayerInventory);
if(data != null && data.Trim().Length > 0)
{
inventory = JsonUtility.FromJson<InventoryModel>(data);
}
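On frameworks that predate string.IsNullOrWhiteSpace (it arrived in .NET 4.0), a small extension method (a sketch, not a framework API on those versions) gives you the same call shape:

```csharp
using System;

static class StringExtensions
{
    // Mirrors string.IsNullOrWhiteSpace for frameworks older than .NET 4.0.
    public static bool IsNullOrWhiteSpaceCompat(this string value)
    {
        if (value == null) return true;
        foreach (char c in value)
            if (!char.IsWhiteSpace(c)) return false;
        return true;
    }
}

// Usage (GFile/FileComponent are the types from the question):
// string data = GFile.GetPlayer(FileComponent.PlayerInventory);
// if (!data.IsNullOrWhiteSpaceCompat())
//     inventory = JsonUtility.FromJson<InventoryModel>(data);
```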
I am custom-importing some rows from a text file into our database, and so I have a bunch of code like this for many fields.
address.State = row["Location State"].ToString();
I just noticed a requirement that says
don't overwrite those fields if the value that we are reading from the
text file is empty or blank.
So I assume I can wrap them all around a check like this?
if(!String.IsNullOrEmpty(row["Location State"].ToString()))
address.State = row["Location State"].ToString();
But before I go ahead and apply this kind of logic to all those fields, I wanted to check and see if you have better solutions.
Maybe an extension method could help here:
public static string ColumnValueOrDefault(this DataRow row, string column, string defaultValue)
{
if (row == null)
{
throw new ArgumentNullException("row");
}
if (column == null)
{
throw new ArgumentNullException("column");
}
if (defaultValue == null)
{
throw new ArgumentNullException("defaultValue");
}
var rowString = row[column].ToString();
return string.IsNullOrEmpty(rowString) ? defaultValue : rowString;
}
address.State = row.ColumnValueOrDefault("Location State", address.State);
I would (and do) use this pattern personally:
string sTester = string.Empty;
address.State = !string.IsNullOrEmpty(sTester = row["Location State"].ToString()) ? sTester : address.State;
This allows each column value to be fetched once and then reused via the same string variable, and it is relatively readable.
As you can see below, I'm going through the properties of an entity to get the field names and values for an audit trail system. However, it's bringing back the references to the objects on the entity that is passed in, e.g.
careerReference=System.Data.Objects.DataClasses.EntityReference`1
absanal=HallmarkSolutions.PAMS.Interop.EntityFramework.AbsenceSicknessReason
How do I modify the code below to be more efficient and also ignore the navigation properties?
public string ObjectToNotes(object obj)
{
if (obj == null)
{
throw new ArgumentNullException("obj", "Value can not be null or Nothing!");
}
StringBuilder sb = new StringBuilder();
Type t = obj.GetType();
PropertyInfo[] pi = t.GetProperties();
List<StandardLookup> absenceReasons = pamsContext.GetAbsenceReasons();
for (int index = 0; index < pi.Length; index++)
{
if (pi[index].PropertyType != typeof(EntityKey) && pi[index].PropertyType != typeof(EntityObject) && pi[index].PropertyType != typeof(EntityReference))
{
string message;
message = pi[index].Name.ToString() + " " + pi[index].PropertyType.ToString();
if (pi[index].Name=="reason")
sb.Append("reason=" + pamsContext.GetAbsenceReasonsDescription(Convert.ToInt32(pi[index].GetValue(obj, null))));
else
sb.Append(pi[index].Name + "=" + pi[index].GetValue(obj, null) + Environment.NewLine);
if (index < pi.Length - 1)
{
sb.Append(Environment.NewLine);
}
}
}
return sb.ToString();
}
First, you may ignore properties whose type derives from EntityObject:
if (typeof(EntityObject).IsAssignableFrom(pi[index].PropertyType))
continue;
This won't skip, for example, the AbsenceSicknessReasonId property (it's just an Int32, and it may not be obvious how to discriminate it from another normal property). You can do a little better with a very naive assumption: an Id property will (almost always) have a related navigation property.
if (pi[index].Name.EndsWith("Id"))
{
string navigationProperty =
pi[index].Name.Substring(0, pi[index].Name.Length - 2);
if (pi.FirstOrDefault(x => x.Name == navigationProperty) != null)
continue;
}
As you can see this is a pretty weak and naive assumption but it should work in many (most?) cases.
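Putting the two checks together, the filter might be sketched like this. The EF types here are hypothetical stand-ins (so the snippet runs without an EF reference), and the Absence entity is invented purely for illustration:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-ins for the System.Data.Objects.DataClasses types.
class EntityObject { }
class EntityKey { }
class EntityReference<T> { }

class AbsenceSicknessReason : EntityObject { }

// Invented entity for illustration only.
class Absence : EntityObject
{
    public int AbsenceId { get; set; }          // key column, no matching nav property: kept
    public string Notes { get; set; }           // plain column: kept
    public int ReasonId { get; set; }           // FK whose nav property exists: skipped
    public AbsenceSicknessReason Reason { get; set; }                     // nav property: skipped
    public EntityReference<AbsenceSicknessReason> ReasonRef { get; set; } // wrapper: skipped
}

static class AuditFilter
{
    public static PropertyInfo[] AuditableProperties(object obj)
    {
        var all = obj.GetType().GetProperties();
        return all
            // skip reference navigation properties (anything derived from EntityObject)
            .Where(p => !typeof(EntityObject).IsAssignableFrom(p.PropertyType))
            .Where(p => p.PropertyType != typeof(EntityKey))
            // skip EntityReference<T> wrappers
            .Where(p => !(p.PropertyType.IsGenericType &&
                          p.PropertyType.GetGenericTypeDefinition() == typeof(EntityReference<>)))
            // skip FooId when a Foo navigation property exists (the naive assumption above)
            .Where(p => !(p.Name.EndsWith("Id") &&
                          all.Any(x => x.Name == p.Name.Substring(0, p.Name.Length - 2))))
            .ToArray();
    }
}
```

With this sample entity, AuditFilter.AuditableProperties(new Absence()) keeps only AbsenceId and Notes.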
Do you have the Entity Context? If so, you can get the property names of your entity by calling
List<string> propNames = context.Entry(yourEntry).OriginalValues.PropertyNames.ToList()
Then go through the list of property names to get your values, e.g.
context.Entry(yourEntry).Property(propNames[0]).CurrentValue
Navigation properties are not included in the list.
The following code results in deletions instead of updates.
My question is: is this a bug in the way I'm coding against Entity Framework or should I suspect something else?
Update: I got this working, but I'm leaving the question now with both the original and the working versions in hopes that I can learn something I didn't understand about EF.
In this, the original non-working code, all the additions of SearchDailySummary objects succeed when the database is fresh. But on the second run, when my code was supposedly going to perform the updates, the net result is a once-again empty table in the database; i.e. this logic manages to be the equivalent of removing each entity.
//Logger.Info("Upserting SearchDailySummaries..");
using (var db = new ClientPortalContext())
{
foreach (var item in items)
{
var campaignName = item["campaign"];
var pk1 = db.SearchCampaigns.Single(c => c.SearchCampaignName == campaignName).SearchCampaignId;
var pk2 = DateTime.Parse(item["day"].Replace('-', '/'));
var source = new SearchDailySummary
{
SearchCampaignId = pk1,
Date = pk2,
Revenue = decimal.Parse(item["totalConvValue"]),
Cost = decimal.Parse(item["cost"]),
Orders = int.Parse(item["conv1PerClick"]),
Clicks = int.Parse(item["clicks"]),
Impressions = int.Parse(item["impressions"]),
CurrencyId = item["currency"] == "USD" ? 1 : -1 // NOTE: non USD (if exists) -1 for now
};
var target = db.Set<SearchDailySummary>().Find(pk1, pk2) ?? new SearchDailySummary();
if (db.Entry(target).State == EntityState.Detached)
{
db.SearchDailySummaries.Add(target);
addedCount++;
}
else
{
// TODO?: compare source and target and change the entity state to unchanged if no diff
updatedCount++;
}
AutoMapper.Mapper.Map(source, target);
itemCount++;
}
Logger.Info("Saving {0} SearchDailySummaries ({1} updates, {2} additions)", itemCount, updatedCount, addedCount);
db.SaveChanges();
}
Here is the working version. I'm not 100% sure it's optimized, but it's working reliably and performing fine as long as I batch it in groups of 500 or fewer items at a time (after that it slows down exponentially, but I think that may be a different question/subject)...
//Logger.Info("Upserting SearchDailySummaries..");
using (var db = new ClientPortalContext())
{
foreach (var item in items)
{
var campaignName = item["campaign"];
var pk1 = db.SearchCampaigns.Single(c => c.SearchCampaignName == campaignName).SearchCampaignId;
var pk2 = DateTime.Parse(item["day"].Replace('-', '/'));
var source = new SearchDailySummary
{
SearchCampaignId = pk1,
Date = pk2,
Revenue = decimal.Parse(item["totalConvValue"]),
Cost = decimal.Parse(item["cost"]),
Orders = int.Parse(item["conv1PerClick"]),
Clicks = int.Parse(item["clicks"]),
Impressions = int.Parse(item["impressions"]),
CurrencyId = item["currency"] == "USD" ? 1 : -1 // NOTE: non USD (if exists) -1 for now
};
var target = db.Set<SearchDailySummary>().Find(pk1, pk2);
if (target == null)
{
db.SearchDailySummaries.Add(source);
addedCount++;
}
else
{
AutoMapper.Mapper.Map(source, target);
db.Entry(target).State = EntityState.Modified;
updatedCount++;
}
itemCount++;
}
Logger.Info("Saving {0} SearchDailySummaries ({1} updates, {2} additions)", itemCount, updatedCount, addedCount);
db.SaveChanges();
}
The thing that keeps popping up in my mind is that maybe the Entry(entity) or Find(pk) method has some side effects? I should probably be consulting the documentation but any advice is appreciated..
It's a slight assumption on my part (without looking into your models/entities), but have a look at what's going on within this block (see if the objects being attached here are related to the deletions):
if (db.Entry(target).State == EntityState.Detached)
{
db.SearchDailySummaries.Add(target);
addedCount++;
}
Your detached object won't be able to use its navigation properties to locate its related objects; you'll be re-attaching an object in a potentially conflicting state (without the correct relationships).
You haven't mentioned what is being deleted above, so I may be way off. I'm just heading out, so this is a little rushed; hope there's something useful in there.