I am having problems with duplicate data being inserted into the database. Am I passing a wrong parameter in the IEnumerable<Location>?
It doesn't throw any errors when I debug the app.
IEnumerable<Location> locations = context.Locations.Where(l => l.FacebookID == facebookID);
if (locations.Count() == 0)
{
Location newLocation = new Location();
newLocation.FacebookID = locationID;
newLocation.Description = locationValue;
IGeoCoder geoCoder = new GoogleGeoCoder(GoogleAPIKey);
Address[] addresses = geoCoder.GeoCode(locationValue);
if (addresses.Length > 0)
{
// Let's assume the first one is good enough
Address address = addresses[0];
newLocation.Latitude = address.Coordinates.Latitude.ToString();
newLocation.Longitude = address.Coordinates.Longitude.ToString();
// Use location.Latitude and location.Longitude
}
context.Locations.AddObject(newLocation);
context.SaveChanges();
}
I am guessing you did not mean to do this:
newLocation.FacebookID = locationID;
But rather this:
newLocation.FacebookID = facebookID;
Basically, you are creating multiple records for the same location: because you save locationID instead of facebookID, the lookup on FacebookID never finds the existing row, so a new record gets inserted every time.
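For completeness, here is a minimal sketch of the corrected block (assuming facebookID is the value you actually want to store; using Any() instead of Count() == 0 also avoids enumerating the results just to count them):

// Look up the existing record by FacebookID before inserting.
if (!context.Locations.Any(l => l.FacebookID == facebookID))
{
    Location newLocation = new Location();
    newLocation.FacebookID = facebookID;   // facebookID, not locationID
    newLocation.Description = locationValue;

    IGeoCoder geoCoder = new GoogleGeoCoder(GoogleAPIKey);
    Address[] addresses = geoCoder.GeoCode(locationValue);
    if (addresses.Length > 0)
    {
        // Let's assume the first one is good enough
        newLocation.Latitude = addresses[0].Coordinates.Latitude.ToString();
        newLocation.Longitude = addresses[0].Coordinates.Longitude.ToString();
    }

    context.Locations.AddObject(newLocation);
    context.SaveChanges();
}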
I am building a website that has a chat component. The code below receives a list of messages from a stored procedure, with a lot of different parameters. One of them indicates whether a message is a reply to another message; if it is, the message being replied to should be duplicated above the reply. If the message being replied to was itself a reply to an earlier message, the same thing should happen again, and so on. My issue is that I have not been able to figure out how to automate this part of the code without nesting if statements inside one another up to a depth that I hope users won't exceed.
To rephrase it: I walk the list in reverse order and check whether ReplyingTo is not null.
I then copy the row whose Id matches that ReplyingTo one row above the current row.
I then check whether this new row also has a ReplyingTo.
If it does, I copy that object two rows above the current one,
and I would continue this way until reaching a depth that users would not realistically hit.
If anyone has an idea on how to proceed I would be very grateful. I have put an example of the type of data that would be given to this function below.
for (int i = publicChatCountList.Count-1 ; i > -1; i--)
{
if (publicChatCountList[i].ReplyingTo.HasValue)
{
Chat_Dto chatItem = new Chat_Dto();
long? ReplyingToId = publicChatCountList[i].ReplyingTo;
chatItem = publicChatCountList.Find(x => x.Id == ReplyingToId);
publicChatCountList.Insert(i+1, new Chat_Dto() {Text = chatItem.Text, IsPublic = chatItem.IsPublic, IsApproved = chatItem.IsApproved, ReplyingTo = chatItem.ReplyingTo });
publicChatCountList[i+1].Duplicate = true;
if (chatItem.ReplyingTo.HasValue)
{
Chat_Dto chatItem2 = new Chat_Dto();
long? ReplyingToId2 = chatItem.ReplyingTo;
chatItem2 = publicChatCountList.Find(x => x.Id == ReplyingToId2);
publicChatCountList.Insert(i + 2, new Chat_Dto() { Text = chatItem2.Text, IsPublic = chatItem2.IsPublic, IsApproved = chatItem2.IsApproved, ReplyingTo = chatItem2.ReplyingTo });
publicChatCountList[i + 2].Duplicate = true;
}
}
}
If I understood you correctly, maybe running something like this to recursively get all the replies would work:
private void Replies(Chat_Dto_List publicChatCountList,int i)
{
if (publicChatCountList[i].ReplyingTo.HasValue)
{
Chat_Dto chatItem = new Chat_Dto();
long? ReplyingToId = publicChatCountList[i].ReplyingTo;
chatItem = publicChatCountList.Find(x => x.Id == ReplyingToId);
publicChatCountList.Insert(i + 1, new Chat_Dto() { Text = chatItem.Text, IsPublic = chatItem.IsPublic, IsApproved = chatItem.IsApproved, ReplyingTo = chatItem.ReplyingTo });
publicChatCountList[i + 1].Duplicate = true;
if (chatItem.ReplyingTo.HasValue)
{
Replies(publicChatCountList, publicChatCountList.FindIndex(x => x.Id == chatItem.ReplyingTo));
}
}
}
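To tie it back to your original loop, here is a rough sketch of how it could be called (this assumes Duplicate is a bool that defaults to false on the rows coming from the stored procedure, so the copies that get inserted are not expanded a second time):

for (int i = publicChatCountList.Count - 1; i > -1; i--)
{
    // Only expand rows that came from the stored procedure, not inserted duplicates.
    if (!publicChatCountList[i].Duplicate)
    {
        Replies(publicChatCountList, i);
    }
}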
I have to update one field in a row of the table after fetching two values from the same row. As the easiest approach, I fetched the two values individually, computed a new value, and then updated that particular property through Entity Framework. I think there is a better way to do the same thing with less code. If anybody can suggest one, please do.
if (objModel.amountpaid==0)
{
using (estatebranchEntities db=new estatebranchEntities())
{
int rentVar = Convert.ToInt32(db.PropertyDetails.Where(m => m.propertyid == objVM.propertyid).Select(m => m.rent).SingleOrDefault());
int balanceVar = Convert.ToInt32(db.PropertyDetails.Where(m => m.propertyid == objVM.propertyid).Select(m => m.balance).SingleOrDefault());
int balanceUpdateVar = (rentVar + balanceVar);
var propInfo = new PropertyDetail() { balance = balanceUpdateVar };
//var result = (from a in db.PropertyDetails
// where a.propertyid == objVM.propertyid
// select new PropertyDetail
// {
// rent = a.rent,
// balance = a.balance
// }).ToList();
db.PropertyDetails.Attach(propInfo);
db.Entry(propInfo).Property(z => z.balance).IsModified = true;
db.SaveChanges();
}
}
Here is what I think you can do.
Fetch the data once and update once.
using (estatebranchEntities db=new estatebranchEntities())
{
var propDetails = db.PropertyDetails.FirstOrDefault(m => m.propertyid == objVM.propertyid);
if (propDetails != null)
{
int rentVar = Convert.ToInt32(propDetails.rent);
int balanceVar = Convert.ToInt32(propDetails.balance);
int balanceUpdateVar = rentVar + balanceVar;
//now do the update
propDetails.balance = balanceUpdateVar;
db.Entry(propDetails).State = EntityState.Modified;
db.SaveChanges();
}
}
If you need to use rentVar, balanceVar, or balanceUpdateVar outside of the using statement, then declare them outside it.
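A small sketch of that variation, using the same names as above (since propDetails is loaded from the context it is already change-tracked, so the explicit IsModified/EntityState line is optional):

int rentVar = 0, balanceVar = 0, balanceUpdateVar = 0;

using (estatebranchEntities db = new estatebranchEntities())
{
    var propDetails = db.PropertyDetails.FirstOrDefault(m => m.propertyid == objVM.propertyid);
    if (propDetails != null)
    {
        rentVar = Convert.ToInt32(propDetails.rent);
        balanceVar = Convert.ToInt32(propDetails.balance);
        balanceUpdateVar = rentVar + balanceVar;

        propDetails.balance = balanceUpdateVar;
        db.SaveChanges();
    }
}

// rentVar, balanceVar and balanceUpdateVar are still usable here.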
I process emails just fine. Now I have come across some emails containing PDFs that must be inline parts, since they don't show up via .Attachments. Here is my code; I can't figure it out. Please help. Thanks!
var message = mainFolder.GetMessage(i - 1);
eCount++;
// Get specifics of email
var attachments = message.Attachments.ToList();
int attCnt = attachments.Count;
string preChk = message.From.ToString();
var msgMsg = new MimePart();
var att2 = new List<MimePart>();
var mp2 = new List<Multipart>();
var iter = new MimeIterator(message);
int mpCnt = 0;
if (attCnt == 0)
{
while (iter.MoveNext())
{
mpCnt += 1;
var mp = iter.Parent as Multipart;
var prt = iter.Current as MimePart;
if (mp != null && prt != null && prt.IsAttachment)
{ //Check if an attachment slipped through
mp2.Add(mp);
att2.Add(prt);
}
}
}
// If I expand iter.MoveNext() in the debugger, I can drill down to the images
iter.MoveNext();
I did figure it out, and I pretty much eliminated all the code above and condensed it to only a couple of lines. If my attachments count is 0, then I know the PDF is inline. I did this:
var bd = message.BodyParts.ToList<MimeKit.MimeEntity>();
Usually we work with the MimeKit.MimePart attachment in an attachments loop. I found out a couple of things. Do var mp = bd.ElementAt(inAttCnt - 1); and then var ma2 = mp.ContentType.Name; and check that ma2 is not null. Don't call bd.Remove or you will throw an exception; the foreach will take care of it. Hope that helps.
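For anyone hitting the same thing, here is a minimal sketch of walking BodyParts with MimeKit to pick out inline PDF parts. The ContentType.IsMimeType check and Content.DecodeTo call are from recent MimeKit versions (older releases exposed ContentObject instead of Content), and the file-saving part is just an assumption about what you do with the data, not the poster's code. It needs using MimeKit;, using System;, and using System.IO;.

foreach (var entity in message.BodyParts)
{
    var part = entity as MimePart;
    if (part == null)
        continue;

    // Treat a part as an inline PDF if the MIME type or the content-type name says so.
    bool looksLikePdf = part.ContentType.IsMimeType("application", "pdf")
        || (part.ContentType.Name != null
            && part.ContentType.Name.EndsWith(".pdf", StringComparison.OrdinalIgnoreCase));

    if (looksLikePdf)
    {
        // FileName can be null for inline parts, so fall back to a default name.
        var fileName = part.FileName ?? "inline.pdf";
        using (var stream = File.Create(fileName))
            part.Content.DecodeTo(stream);
    }
}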
Most of the time I retrieve multiple records, so I would end up doing this:
var rpmuser = new List<rpm_scrty_rpm_usr>();
I have my List collection of properties from the POCO.
So I typically use select new in my LINQ statement.
Then I use a foreach to loop over the records, calling model.Add with a new instance on each iteration.
However, do I really need to be doing all this looping to populate the list?
The bigger question: when I have a single record, should I need to do a loop at all?
public bool UpdateAllUsers(string user, string hash, string salt)
{
bool status = false;
var rpmuser = new rpm_scrty_rpm_usr();
var query = (from t in db.rpm_usr
.Where(z => z.usr_id == "MillXZ")
select new
{
t.usr_id,
t.usr_lnm,
t.usr_pwd,
t.usr_fnm,
t.salt,
t.inact_ind,
t.lst_accs_dtm,
t.lst_pwd_chg_dtm,
t.tel,
t.wwid,
t.email_id,
t.dflt_ste_id,
t.apprvr_wwid,
t.chg_dtm,
t.chg_usr_id,
t.cre_dtm,
t.cre_usr_id,
});
foreach(var s in query)
{
rpmuser.wwid = s.wwid;
rpmuser.usr_pwd = s.usr_pwd;
rpmuser.usr_lnm = s.usr_lnm;
rpmuser.usr_id = s.usr_id;
rpmuser.usr_fnm = s.usr_fnm;
rpmuser.tel = s.tel;
rpmuser.salt = s.salt;
rpmuser.lst_pwd_chg_dtm = s.lst_pwd_chg_dtm;
rpmuser.lst_accs_dtm = s.lst_accs_dtm;
rpmuser.inact_ind = s.inact_ind;
rpmuser.email_id = s.email_id;
rpmuser.apprvr_wwid = s.apprvr_wwid;
rpmuser.chg_dtm = s.chg_dtm;
rpmuser.chg_usr_id = s.chg_usr_id;
rpmuser.cre_usr_id = s.cre_usr_id;
rpmuser.dflt_ste_id = s.dflt_ste_id;
rpmuser.cre_dtm = s.cre_dtm;
}
DateTime dateTime = DateTime.Now;
try
{
rpmuser = db.rpm_usr.Find(rpmuser.usr_id);
rpmuser.usr_pwd = hash;
rpmuser.salt = salt;
db.SaveChanges();
status = true;
}
catch (Exception ex)
{
status = false;
}
return status;
}
I am not exactly sure what you want. Your method is called UpdateAllUsers, but it only seems to be attempting to update one record. So why don't you just do this?
try
{
var rpmuser = db.rpm_usr.Single(z => z.usr_id == "MillXZ");
rpmuser.usr_pwd = hash;
rpmuser.salt = salt;
db.SaveChanges();
status = true;
}
catch (Exception ex)
{
status = false;
}
You have a lot of redundant declarations unless I am missing something. In the case of the list you will do something like this:
var query = db.rpm_usr.Where(z => z.usr_id == "...some string...");
foreach(var item in query)
{
item.usr_pwd = ...some value...;
item.salt = ...some value...;
}
db.SaveChanges();
I can't stress this enough, Murdock's answer is absolutely the right way to fix the code you've shown. You are writing way too much code for what you're trying to accomplish.
However, to answer your question about whether you need to loop in other situations, you can get away from having to loop by doing the projection into a new type as part of your LINQ-to-Entities query. The looping still happens, you just don't see it.
var query = db.rpm_usr
.Where(z => z.usr_id == "MillXZ")
.AsEnumerable()
.Select(z => new rpm_scrty_rpm_usr()
{
usr_id = z.usr_id,
usr_lnm = z.usr_lnm,
// etc...
});
You would then finish the query off with a .Single(), .SingleOrDefault(), or .ToList() depending on whether you expected exactly one, one or zero, or a list. For example, in this case if you might find one or zero users with the name "MillXZ" you would write the following.
var query = db.rpm_usr
.Where(z => z.usr_id == "MillXZ")
.AsEnumerable()
.Select(z => new rpm_scrty_rpm_usr()
{
usr_id = z.usr_id,
usr_lnm = z.usr_lnm,
// etc...
})
.SingleOrDefault();
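And for your usual multiple-record case, the same projection finished with .ToList() gives you the populated list without a manual foreach (same caveat: the .AsEnumerable() call means the projection into the entity type runs in memory, after the Where has been translated to SQL):

var users = db.rpm_usr
    .Where(z => z.usr_id == "MillXZ")
    .AsEnumerable()
    .Select(z => new rpm_scrty_rpm_usr()
    {
        usr_id = z.usr_id,
        usr_lnm = z.usr_lnm,
        // etc...
    })
    .ToList();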
In my experience with Entity Framework, I find that the more a query needs to be optimized, the more business logic bleeds into the data layer, or even lower into the database via a stored procedure. This makes unit testing more difficult, and I was wondering how people deal with this?
E.g.
I may have a function in my business layer/repository with a bunch of business rules and logic that I have unit tests for. However, I find that I could combine this into a stored procedure and return multiple result sets etc., but then my unit tests become useless.
Here is an example of placing a buy order in a stock market scenario. There are many hits to the database that bring back data, and then it needs to go back to the database to check whether certain criteria are met.
I could put this into a stored procedure, but then all that logic is pushed into the database and becomes harder to unit test.
public IEnumerable<ValidationResult> PlaceBuyOrder(int sellOrderId, int requestedShares, int buyerUserId)
{
if (sellOrderId <= 0) throw new ArgumentNullException("sellOrderId");
if (requestedShares <= 0) throw new ArgumentNullException("requestedShares");
if (buyerUserId <= 0) throw new ArgumentNullException("buyerUserId");
// Get the sell order to check to see if the sell order still exists.
var sellerOrderActivity = GetLatestUnprocessedOrderActivityForOrder(sellOrderId);
if (sellerOrderActivity == null)
yield return new ValidationResult(GlobalResources.OrderDoestExist);
else
{
// Make sure that when the order type is "sell all", the buyer cannot request fewer shares than are being sold.
if (sellerOrderActivity.Order.Type == OrderTypes.SellAll && requestedShares < sellerOrderActivity.QtyRemaining)
yield return new ValidationResult("requestedShares", GlobalResources.RequestedSharesCannotBeLessthanWhatIsBeingSold);
else
{
if (requestedShares > sellerOrderActivity.QtyRemaining)
requestedShares = sellerOrderActivity.QtyRemaining;
var requestedSharesAmount = requestedShares * sellerOrderActivity.Price;
if (!_financeService.CanUserAffordSharesPurchase(buyerUserId, requestedSharesAmount))
yield return new ValidationResult(GlobalResources.InsufficentFundsToPurchaseRequestedShares);
else if (!_financeService.CanUserAffordSharesPurchase(sellerOrderActivity.Order.UserId, 0))
yield return new ValidationResult(GlobalResources.SellerHasInsufficentFundsAfterCommission);
else
{
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required,
new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted, Timeout = TransactionManager.MaximumTimeout }))
{
// Insert both the order and order activity.
var newBuyOrderActivity = new OrderActivity
{
Price = sellerOrderActivity.Price,
QtyRemaining = requestedShares,
Status = OrderActivityStatuses.Open,
Order = new Order
{
Type = OrderTypes.Buy,
UserId = buyerUserId,
VideoId = sellerOrderActivity.Order.VideoId
}
};
_orderRepository.AddOrderActivity(newBuyOrderActivity);
_orderRepository.SaveChanges();
var sellerProcessedOrderActivity = new ProcessedOrderActivity();
var buyerProcessedOrderActivity = new ProcessedOrderActivity();
ProcessBuyAndSellOrderActivities(newBuyOrderActivity.Id, sellerOrderActivity, requestedShares, sellerProcessedOrderActivity, buyerProcessedOrderActivity);
if (sellerOrderActivity.Order.Type == OrderTypes.Sell && requestedShares != sellerOrderActivity.QtyRemaining)
{
var newSellerPartialOrderActivity = new OrderActivity
{
OrderId = sellerOrderActivity.OrderId,
Price = sellerOrderActivity.Price,
QtyRemaining = sellerOrderActivity.QtyRemaining - requestedShares,
Status = OrderActivityStatuses.Partial
};
_orderRepository.AddOrderActivity(newSellerPartialOrderActivity);
_orderRepository.SaveChanges();
}
var sellerAccountId = _accountRepository.GetAccountIdForUser(sellerOrderActivity.Order.UserId);
var sellerAccountTransaction = new Data.Model.Transaction();
var buyerAccountTransaction = new Data.Model.Transaction();
_financeService.TransferFundsFromBuyerToSeller(buyerUserId, 0, sellerOrderActivity.Order.UserId, sellerAccountId, requestedSharesAmount,
sellerAccountTransaction, buyerAccountTransaction);
// Add the appropriate transactions to the portfolio activity table and make
// sure the seller comes before the buyer to negate the portfolio first.
var sellerPortfolio = new UserPortfolioActivity
{
Price = sellerOrderActivity.Price,
ProcessedOrderActivityId = sellerProcessedOrderActivity.Id,
Qty = -requestedShares,
TransactionId = sellerAccountTransaction.Id,
UserId = sellerOrderActivity.Order.UserId,
VideoId = sellerOrderActivity.Order.VideoId
};
var buyerPortfolio = new UserPortfolioActivity
{
Price = sellerOrderActivity.Price,
ProcessedOrderActivityId = buyerProcessedOrderActivity.Id,
Qty = requestedShares,
TransactionId = buyerAccountTransaction.Id,
UserId = buyerUserId,
VideoId = sellerOrderActivity.Order.VideoId
};
_userRepository.AddUserPortfolio(sellerPortfolio);
_userRepository.AddUserPortfolio(buyerPortfolio);
_userRepository.SaveChanges();
transactionScope.Complete();
}
}
}
}
}