EntityCommandExecutionException happening only on Azure SQL - C#

I have the following class and controller:
public class FieldHelper
{
    public FieldHelper(RoomMeta rm, string typeClass)
    {
        this.typeClass = typeClass;
        this.Name = rm.Name;
        if (rm.Required)
            this.required = "required";
        else
            this.required = "optional";
    }
    public FieldHelper(EventTypeMeta etm, string typeClass)
    {
        this.typeClass = typeClass;
        this.Name = etm.Name;
        if (etm.Required)
            this.required = "required";
        else
            this.required = "optional";
    }
    public string typeClass { get; set; }
    public string Name { get; set; }
    public string required { get; set; }
}
[HttpGet]
public JsonResult GetDefaultFields(int eventTypeID, int roomID)
{
    using (var db = new MyDbContext())
    {
        List<FieldHelper> fields = new List<FieldHelper>();
        foreach (RoomMeta rm in db.RoomMetaSet.Where(rm => rm.RoomId == roomID))
        {
            fields.Add(new FieldHelper(rm, rm.FieldTypes.Name)); // Here the exception gets thrown
        }
        foreach (EventTypeMeta etm in db.EventTypeMetaSet.Where(etm => etm.EventTypeId == eventTypeID))
        {
            fields.Add(new FieldHelper(etm, etm.FieldTypes.Name));
        }
        return Json(fields, JsonRequestBehavior.AllowGet);
    }
}
My database layout looks as follows:
Now, when I run this on my local machine, where I use a SQL Server Express 2014 installation, everything works just the way I expect. However, once I deploy the application to a Windows Azure Website with an Azure SQL database, I get an EntityCommandExecutionException at the line marked. The inner exception tells me "There is already an open DataReader associated with this Command which must be closed first.", which seems somewhat more useful, but I still couldn't figure out why this works locally but not online.
Any ideas would be appreciated.

Looks like I've found the answer myself. Since I didn't Include the related entities in my query, Entity Framework tried to run a second query while the first DataReader was still open, which my Azure SQL connection did not allow (most likely because MultipleActiveResultSets was not enabled in that connection string, while it was locally).
I changed
foreach(RoomMeta rm in db.RoomMetaSet.Where(rm => rm.RoomId == roomID))
to
foreach(RoomMeta rm in db.RoomMetaSet.Include("FieldTypes").Where(rm => rm.RoomId == roomID))
and everything works fine now.
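For completeness, here is a rough sketch of how the whole action might look with eager loading applied to both queries; the names are taken from the question. Adding MultipleActiveResultSets=True to the Azure connection string would also avoid the error while keeping lazy loading, but eager loading avoids the extra per-row queries.
[HttpGet]
public JsonResult GetDefaultFields(int eventTypeID, int roomID)
{
    using (var db = new MyDbContext())
    {
        var fields = new List<FieldHelper>();

        // Eager-load FieldTypes so EF does not issue a second query
        // while the outer DataReader is still open.
        var roomMetas = db.RoomMetaSet
            .Include("FieldTypes")
            .Where(x => x.RoomId == roomID)
            .ToList();
        foreach (RoomMeta rm in roomMetas)
        {
            fields.Add(new FieldHelper(rm, rm.FieldTypes.Name));
        }

        var eventTypeMetas = db.EventTypeMetaSet
            .Include("FieldTypes")
            .Where(x => x.EventTypeId == eventTypeID)
            .ToList();
        foreach (EventTypeMeta etm in eventTypeMetas)
        {
            fields.Add(new FieldHelper(etm, etm.FieldTypes.Name));
        }

        return Json(fields, JsonRequestBehavior.AllowGet);
    }
}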

Related

Handling Stripe's payment_intent.succeeded webhook if it competes with a post back from the client to create an entity in the DB

I need some advice on the workflow for my application when charging a credit card using Stripe.
Scenario 1 - I don't use any webhook for payment_intent.succeeded. When I call stripe.confirmCardPayment on the client side in JavaScript and receive the paymentIntent back, I post to my server and create an entry in a "Payment" table with a method called "SavePayment()", where all the details (card id, exp month, amount, etc.) are stored. Once I save to the DB, I can return the details to the client (points earned, payment-successful message, etc.). Then we're done!
Scenario 2 - The client (user) closes the browser after Stripe is called to charge the card, but before the client can post back to my server to add the "Payment" entity. So now I use a webhook for payment_intent.succeeded, as others have recommended doing for redundancy.
Problem -
Because the webhook is triggered immediately after the card is charged by Stripe, my server could potentially receive the request to create a "Payment" entity in my DB from two different entry points (the client posting back to the server to save a payment, and Stripe's webhook event).
Now this isn't a huge problem, because both entry points can query for the "Payment" entity based on its unique identifier (PaymentIntentId) to see whether it already exists in the DB.
But let's say both entry points query and get back null; now both go ahead and create a new "Payment" entity and attempt to save it to the DB. One will succeed and the other will fail, frequently resulting in a unique constraint exception being thrown by SQL Server.
Solution? - This doesn't seem like the ideal workflow, where exceptions could frequently be thrown just to create an entity in my DB. Is there a better workflow for this, or am I stuck implementing it this way?
Here is some of my code/pseudocode to look at.
public class Payment : BaseEntity
{
    public string PaymentIntentId { get; set; }
    public int Amount { get; set; }
    public string Currency { get; set; }
    public string CardBrand { get; set; }
    public string CardExpMonth { get; set; }
    public string CardExpYear { get; set; }
    public int CardFingerPrint { get; set; }
    public string CardLastFour { get; set; }
    public PaymentStatus Status { get; set; }
    public int StripeFee { get; set; }
    public int PointsAwarded { get; set; }
    public int PointsBefore { get; set; }
    public int PointsAfter { get; set; }
    public string StripeCustomer { get; set; }
    public int UserId { get; set; }
    public User User { get; set; }
}
Here is some code from the client to call stripe and then post to my server
// submit button is pressed
// do some work here, then call Stripe
from(this.stripe.confirmCardPayment(this.paymentIntent.clientSecret, data)).subscribe((result: any) => {
  if (result.paymentIntent) {
    let payment = {
      paymentIntentId: result.paymentIntent.id,
      amount: result.paymentIntent.amount,
      currency: result.paymentIntent.currency,
      // fill in other fields
    };
    this.accountService.savePayment(payment).subscribe(response => {
      if (response.status === 'Success') {
        // do some stuff here
        this.alertService.success("Your purchase was successful");
        this.router.navigateByUrl('/somepage');
      }
      if (response.status === 'Failed') {
        this.alertService.danger("Failed to process card");
      }
    }, error => {
      console.log(error);
      this.alertService.danger("Oh no! Something happened, please contact the help desk.");
    }).add(() => {
      this.loadingPayment = false;
    });
  } else {
    this.loadingPayment = false;
    this.alertService.danger(result.error.message);
  }
});
Here is the server controller to save a "Payment" entity
[HttpPost("savepayment")]
public async Task<ActionResult> SavePayment(StripePaymentDto paymentDto)
{
    var userFromRepo = await _userManager.FindByEmailFromClaimsPrinciple(HttpContext.User);
    if (userFromRepo == null)
        return Unauthorized(new ApiResponse(401));

    // this calls the Stripe API to get the PaymentIntent (just in case the client changed it)
    var paymentIntent = await _paymentService.RetrievePaymentIntent(paymentDto.PaymentIntentId);
    if (paymentIntent == null) return BadRequest(new ApiResponse(400, "Problem Retrieving Payment Intent"));

    var payment = _mapper.Map<StripePaymentDto, StripePayment>(paymentDto);
    payment.UserId = userFromRepo.Id;

    if (paymentIntent.Status == "succeeded") {
        // fill in all the necessary fields
        // left out for brevity
    } else if (paymentIntent.Status == "requires_payment_method") {
        payment.Status = PaymentStatus.Failed;
        _logger.LogInformation("Payment Intent is not successful. Status: " + paymentIntent.Status + " PaymentIntentId: " + paymentIntent.PaymentIntentId);
        // send payment failure email
    } else {
        // don't know if this will be needed
        payment.Status = PaymentStatus.Pending;
    }

    _unitOfWork.Repository<StripePayment>().Add(payment);
    var success = await _unitOfWork.Complete();
    if (success > 0) {
        if (payment.Status == PaymentStatus.Success) {
            // send email
        }
        return Ok(_mapper.Map<StripePayment, StripePaymentDto>(payment));
    }
    return BadRequest(new ApiResponse(400, "Failed to save payment"));
}
Here is the Stripe webhook
[HttpPost("webhook")]
public async Task<ActionResult> StripeWebhook()
{
    var json = await new StreamReader(HttpContext.Request.Body).ReadToEndAsync();

    // if this doesn't match we get an exception (sig with whSec)
    var stripeEvent = EventUtility.ConstructEvent(json, Request.Headers["Stripe-Signature"], _whSecret);

    PaymentIntent intent;
    switch (stripeEvent.Type)
    {
        case "payment_intent.succeeded":
            intent = (PaymentIntent)stripeEvent.Data.Object;
            _logger.LogInformation("Payment Succeeded: {PaymentIntentId}", intent.Id);
            await this.ProcessSuccess(intent);
            // order = await _paymentService.UpdateOrderPaymentSucceeded(intent.Id);
            // _logger.LogInformation("Order updated to payment received: ", order.Id);
            break;
        case "payment_intent.payment_failed":
            intent = (PaymentIntent)stripeEvent.Data.Object;
            _logger.LogInformation("Payment Failed: {PaymentIntentId}", intent.Id);
            // _logger.LogInformation("Payment Failed: ", order.Id);
            break;
    }
    return new EmptyResult();
}
private async Task ProcessSuccess(PaymentIntent paymentIntent) {
    var spec = new PaymentsWithTypeSpecification(paymentIntent.Id);
    var paymentFromRepo = await _unitOfWork.Repository<StripePayment>().GetEntityWithSpec(spec);
    if (paymentFromRepo == null) {
        // create one and add it
        var payment = _mapper.Map<PaymentIntent, StripePayment>(paymentIntent);
        payment.UserId = Convert.ToInt32(paymentIntent.Metadata["userid"]);
    }
    // finish work here and then save to DB
}
Great point below. I appreciate your goal. After some thought, my final analysis is this: in order to prevent duplicate records in the database coming from multiple sources, a unique index should be used (which you are already doing).
By using a unique index, the database will throw an exception that the code has to handle gracefully. So the answer is that you are doing it the way I and others have done it for years. Unfortunately, I'm not aware of any other means of avoiding an exception once you hit the database tier.
Great question, even if the answer is not the one you were hoping for.
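To make "handle gracefully" concrete, here is a rough sketch of an idempotent save that both entry points (the client postback and the webhook) could call, assuming EF Core and a unique index on PaymentIntentId. AppDbContext, the Payments DbSet and SaveOrGetExistingAsync are hypothetical names, and a production version would also inspect the DbUpdateException to confirm it really is a unique-key violation before swallowing it.
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class PaymentSaver
{
    // Both entry points call this instead of inserting directly, so a duplicate
    // insert is treated as "already saved" rather than as an error.
    public static async Task<StripePayment> SaveOrGetExistingAsync(AppDbContext db, StripePayment payment)
    {
        var existing = await db.Payments
            .FirstOrDefaultAsync(p => p.PaymentIntentId == payment.PaymentIntentId);
        if (existing != null) return existing;

        db.Payments.Add(payment);
        try
        {
            await db.SaveChangesAsync();
            return payment;
        }
        catch (DbUpdateException)
        {
            // The other entry point won the race and inserted the row first:
            // drop our pending insert and return the row that is already there.
            db.Entry(payment).State = EntityState.Detached;
            return await db.Payments
                .FirstAsync(p => p.PaymentIntentId == payment.PaymentIntentId);
        }
    }
}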

PHP to C# ASP.NET MVC

I have translated some PHP/JS code to JS/C#, but I am stuck on updating the new value.
The PHP code is:
if (isset($_POST['update'])) {
    foreach ($_POST['positions'] as $position) {
        $index = $position[0];
        $newPosition = $position[1];
        $conn->query("UPDATE country SET position = '$newPosition' WHERE id='$index'");
    }
    exit('success');
}
My "empty" C# code:
[HttpPost]
public ActionResult Index(userTable index)
{
    picturesEntities MyDb = new picturesEntities();
    homeViewModel HVM = new homeViewModel();
    HVM.userTables = MyDb.userTables.ToList();
    if (Request["update"] != null)
    {
        foreach (Request["positions"])
        {
            MyDb.SaveChanges();
        }
        return View(HVM);
    }
}
If someone could help me with this, that would be great. I have been stuck on it for days and haven't found a working solution yet.
Thanks to everyone who read my message.
ASP.NET MVC will bind the request to a custom class that is compatible with your request payload.
public class UserPositionsRequest
{
    public bool Update { get; set; }

    // Ideally this would be a list of a custom class rather than raw arrays
    public List<int[]> Positions { get; set; }
}
This is by no means a complete and working solution; the following code has never been tested and should be considered pseudo-like code.
Also, .Id and .Position should match the casing used in the database model.
// Binding our UserPositionsRequest class
public void Index(UserPositionsRequest request) {
    // Checking whether we should update; if the client sends "update" as 1/0 instead of
    // true/false, change the Update property to int and compare it with 1 here.
    if (request.Update) {
        // Creating the database context using (I assume) Entity Framework
        using (var db = new picturesEntities()) {
            // Building a dictionary for fast lookup: key = id (arg 0), value = position (arg 1)
            var usersDataToUpdate = request.Positions.ToDictionary(p => p[0], p => p[1]);
            // Finding the entries that need to be updated
            var usersEntitiesToUpdate = db.userTables.Where(cntry => usersDataToUpdate.ContainsKey(cntry.Id));
            // Iterating over the entities
            foreach (var userEntity in usersEntitiesToUpdate) {
                // Updating their position
                userEntity.Position = usersDataToUpdate[userEntity.Id];
            }
            db.SaveChanges();
        }
    }
    // Probably you wanted to return something here, but it's probably an AJAX call and you can skip that.
}
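Building on that, here is a rough sketch of a complete action. The JSON body in the comment is an assumption about how the client posts the data (it must be sent with a JSON content type for the default model binder to pick it up), and the Id/Position property names follow the same assumption as above.
// Expected request body (assumption), posted as JSON by the client, e.g.:
// { "update": true, "positions": [ [12, 3], [15, 1] ] }
[HttpPost]
public ActionResult Index(UserPositionsRequest request)
{
    if (request == null || !request.Update || request.Positions == null)
        return new HttpStatusCodeResult(400);

    // id -> new position
    var positionsById = request.Positions.ToDictionary(p => p[0], p => p[1]);
    var ids = positionsById.Keys.ToList();

    using (var db = new picturesEntities())
    {
        // Translates to a single SELECT ... WHERE Id IN (...)
        foreach (var row in db.userTables.Where(u => ids.Contains(u.Id)).ToList())
        {
            row.Position = positionsById[row.Id];
        }
        db.SaveChanges();
    }

    // Mirrors the PHP exit('success') so the AJAX caller gets a confirmation
    return Content("success");
}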

C# - how to decide whether to add to a sub-list or start a new one

I have an SQL query that provides my data, where some rows should be clustered together (the data is ordered accordingly with an ORDER BY). The data is grouped by the field CAPName. Going through those rows line by line, I need to decide whether a new list should be started (the content of CAPName differs from the previous iteration) or whether the row should be added to the list started in a previous iteration.
My pain lies with the location of the declaration of the relatedCapabilitySystem list.
I wanted to declare it within the if statement (because, as stated, I need to decide whether to add to the list from the previous iteration or start a new one), but I can't: the compiler then reports an error at RLCapSys.Add(rCs); because the variable does not exist in that context (which is only theoretically true). I understand why the compiler complains. But if I declare the list at a "higher" level, then I always get a new list, which I don't want when the item should be added to the list created in an earlier iteration (one or more back).
So what I want to achieve is: build the list RLCapSys and add to it when the previous iteration has the same CAPName (for clustering), otherwise create a new list.
SqlCommand cmdDetail = new SqlCommand(SQL_SubSytemsToCapability, DBConDetail);
SqlDataReader rdrDetail = cmdDetail.ExecuteReader();
List<relatedCapility> RLCaps = new List<relatedCapility>();
string lastCapShown = null;
while (rdrDetail.Read())
{
    List<relatedCapabilitySystem> RLCapSys = new List<relatedCapabilitySystem>();
    if (lastCapShown != rdrDetail["CAPName"].ToString())
    {
        //List<relatedCapabilitySystem> RLCapSys2 = new List<relatedCapabilitySystem>();
        relatedCapility rC = new relatedCapility
        {
            Capability = rdrDetail["CAPName"].ToString(),
            systemsRelated = RLCapSys,
        };
        RLCaps.Add(rC);
    }
    relatedCapabilitySystem rCs = new relatedCapabilitySystem
    {
        system = rdrDetail["name"].ToString(),
        start = rdrDetail["SysStart"].ToString(),
        end = rdrDetail["SysEnd"].ToString(),
    };
    RLCapSys.Add(rCs);
    // compare the last related capability shown: create a new related capability entry or add to the existing related capability's system list
    lastCapShown = rdrDetail["CAPName"].ToString();
}
DBConDetail.Close();
DBConDetail.Close();
and for the sake of completeness (though I think it is not needed here):
internal class CapabilitiesC
{
    public List<Capability> Capabilities { get; set; }
}
public class Capability
{
    public string name { get; internal set; }
    public string tower { get; internal set; }
    public string color { get; internal set; }
    public List<relatedCapility> related { get; set; }
}
public class relatedCapility
{
    public string Capability { get; set; }
    public List<relatedCapabilitySystem> systemsRelated { get; set; }
}
public class relatedCapabilitySystem
{
    public string system { get; set; }
    public string start { get; set; }
    public string end { get; set; }
}
The purpose of your code is to take the input data and group it by capability. However, that is not immediately obvious. You can change your code to use LINQ so it becomes easier to understand, and in the process solve your problem.
First you need a type to represent a record in your database. For lack of a better name I will use Record:
class Record
{
    public string System { get; set; }
    public string Start { get; set; }
    public string End { get; set; }
    public string Capability { get; set; }
}
You can then create an iterator block to return all the records from the database (using an OR mapper like Entity Framework avoids most of this code and you can even shift some of the work from your computer to the database server):
IEnumerable<Record> GetRecords()
{
    // Code to create connection and command (preferably in a using statement)
    SqlDataReader rdrDetail = cmdDetail.ExecuteReader();
    while (rdrDetail.Read())
    {
        yield return new Record {
            System = rdrDetail["name"].ToString(),
            Start = rdrDetail["SysStart"].ToString(),
            End = rdrDetail["SysEnd"].ToString(),
            Capability = rdrDetail["CAPName"].ToString()
        };
    }
    // Close connection (proper using statement will do this)
}
Finally, you can use LINQ to perform the grouping:
var RLCaps = GetRecords()
    .GroupBy(
        record => record.Capability,
        (capability, records) => new relatedCapility
        {
            Capability = capability,
            systemsRelated = records
                .Select(record => new relatedCapabilitySystem
                {
                    system = record.System,
                    start = record.Start,
                    end = record.End
                })
                .ToList()
        })
    .ToList();
Why not just assign it as null? The pattern would be:
List<relatedCapabilitySystem> myList = null;
if (condition)
{
    myList = new List<relatedCapabilitySystem>();
}
else
{
    myList = previousList;
}
myList.Add(rCs);
previousList = myList;
I've got it working now. Thanks everyone for your help. @martin, thanks for your solution; you put quite some effort into this, but it would have required me to completely rewrite my code. I am sure your approach would work, and it will be my next approach should I run into a similar problem again.
It was a combination of the other answers that helped me figure it out. Let me show you what I ended up with:
SqlCommand cmdDetail = new SqlCommand(SQL_SubSytemsToCapability, DBConDetail);
SqlDataReader rdrDetail = cmdDetail.ExecuteReader();
List<relatedCapility> RLCaps = new List<relatedCapility>();
List<relatedCapabilitySystem> RLCapSys = new List<relatedCapabilitySystem>();
string lastCapShown = null;
while (rdrDetail.Read())
{
    if (lastCapShown != rdrDetail["CAPName"].ToString())
    {
        RLCapSys = relatedCapabilitySystemList();
        relatedCapility rC = new relatedCapility
        {
            Capability = rdrDetail["CAPName"].ToString(),
            systemsRelated = RLCapSys,
        };
        RLCaps.Add(rC);
    }
    relatedCapabilitySystem rCs = new relatedCapabilitySystem
    {
        system = rdrDetail["name"].ToString(),
        start = rdrDetail["SysStart"].ToString(),
        end = rdrDetail["SysEnd"].ToString(),
    };
    RLCapSys.Add(rCs);
    // compare the last related capability shown: create a new related capability entry or add to the existing related capability's system list
    lastCapShown = rdrDetail["CAPName"].ToString();
}
DBConDetail.Close();
So that's the section already shown before, now including my changes. In addition I added this:
private List<relatedCapabilitySystem> relatedCapabilitySystemList()
{
    List<relatedCapabilitySystem> RLCapSys = new List<relatedCapabilitySystem>();
    return RLCapSys;
}
Now I get a new list reference every time the CAPName changes, and that list is then added to the "higher" list. Before, I had the issue that the very same list was assigned repeatedly rather than a fresh one being started. So thanks again for your effort.
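For what it's worth, the helper method simply wraps the list constructor, so the same clustering behaviour can also be had by assigning a new list inline whenever CAPName changes; a minimal sketch of that variant, using the same names as above:
List<relatedCapility> RLCaps = new List<relatedCapility>();
List<relatedCapabilitySystem> RLCapSys = new List<relatedCapabilitySystem>();
string lastCapShown = null;
while (rdrDetail.Read())
{
    string capName = rdrDetail["CAPName"].ToString();
    if (lastCapShown != capName)
    {
        // CAPName changed: start a fresh sub-list and register it with a new capability entry
        RLCapSys = new List<relatedCapabilitySystem>();
        RLCaps.Add(new relatedCapility
        {
            Capability = capName,
            systemsRelated = RLCapSys,
        });
    }
    // Same CAPName as before: this adds to the sub-list created in an earlier iteration
    RLCapSys.Add(new relatedCapabilitySystem
    {
        system = rdrDetail["name"].ToString(),
        start = rdrDetail["SysStart"].ToString(),
        end = rdrDetail["SysEnd"].ToString(),
    });
    lastCapShown = capName;
}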

Azure storage account throws HTTP 400 error when writing

I've developed a simple Azure web app with C# and use a table in an Azure storage account to save some records for the website.
It works well on my own computer for both read and write operations, with the real connection string. I can find the records with the "Azure Storage Explorer".
However, after I deploy the app, the cloud version keeps throwing an HTTP 400 error when I try to write to the table, while the read operation still works fine. (The local version is always OK.)
The problem is weird. Please help me with it.
Thanks.
== UPDATE ==
The save & query code is something like this.
public class CodeSnippet : TableEntity
{
    public string Title { get; set; }
    public string Author { get; set; }
    public string FileType { get; set; }
    public string Code { get; set; }

    public Guid Save()
    {
        var guid = Guid.NewGuid();
        var datetime = DateTimeOffset.UtcNow;
        this.RowKey = guid.ToString();
        this.PartitionKey = datetime.ToString();
        var json = JsonConvert.SerializeObject(this);
        var client = AzureStorage.DefaultClient.Instance.GetTableClient();
        var table = client.GetTableReference("privateshare");
        var insertOp = TableOperation.Insert(this);
        table.Execute(insertOp);
        return guid;
    }

    public static CodeSnippet Query(Guid? guid)
    {
        if (guid == null)
        {
            return null;
        }
        var client = AzureStorage.DefaultClient.Instance.GetTableClient();
        var table = client.GetTableReference("privateshare");
        var query = new TableQuery<CodeSnippet>()
            .Where(TableQuery.GenerateFilterCondition(
                "RowKey", QueryComparisons.Equal, guid.ToString()))
            .Take(1);
        var res = table.ExecuteQuery(query);
        return res?.First();
    }
}
Problem solved.
The error occurs when I use DateTimeOffset.UtcNow.ToString() as the partition key.
After I changed it to DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString(), it works fine.
I don't know exactly how and why the problem happens, but I suspect the special characters in the "PartitionKey" are the reason: a PartitionKey must not contain characters such as '/', '\', '#' or '?', and the default date format produced by ToString() typically contains slashes.
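For reference, here is a sketch of the Save method with that fix applied, plus a culture-independent alternative in a comment; everything else is unchanged from the class in the question.
public Guid Save()
{
    var guid = Guid.NewGuid();
    var datetime = DateTimeOffset.UtcNow;

    this.RowKey = guid.ToString();

    // Unix seconds contain only digits, so the key avoids the forbidden characters.
    this.PartitionKey = datetime.ToUnixTimeSeconds().ToString();

    // Alternative: a culture-independent timestamp without '/', '\', '#' or '?':
    // this.PartitionKey = datetime.ToString("yyyyMMddHHmmss");

    var client = AzureStorage.DefaultClient.Instance.GetTableClient();
    var table = client.GetTableReference("privateshare");
    table.Execute(TableOperation.Insert(this));
    return guid;
}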

RavenDB - stream index query results in exception

We're currently trying to use Task<IAsyncEnumerator<StreamResult<T>>> StreamAsync<T>(IQueryable<T> query, CancellationToken token = null), but are running into some issues.
Our document looks something like this:
public class Entity
{
    public string Id { get; set; }
    public DateTime Created { get; set; }
    public Geolocation Geolocation { get; set; }
    public string Description { get; set; }
    public IList<string> SubEntities { get; set; }

    public Entity()
    {
        this.Id = Guid.NewGuid().ToString();
        this.Created = DateTime.UtcNow;
    }
}
In combination we have a view model, which is also the model we're indexing:
public class EntityViewModel
{
    public string Id { get; set; }
    public DateTime Created { get; set; }
    public Geolocation Geolocation { get; set; }
    public string Description { get; set; }
    public IList<SubEntity> SubEntities { get; set; }
}
And of course the index, with the result type inheriting from the view model so that SubEntities are mapped and output correctly, while enabling search features such as full-text search:
public class EntityWithSubentitiesIndex : AbstractIndexCreationTask<Entity, EntityWithSubentitiesIndex.Result>
{
    public class Result : EntityViewModel
    {
        public string Fulltext { get; set; }
    }

    public EntityWithSubentitiesIndex()
    {
        Map = entities => from entity in entities
                          select new
                          {
                              Id = entity.Id,
                              Created = entity.Created,
                              Geolocation = entity.Geolocation,
                              SubEntities = entity.SubEntities.Select(x => LoadDocument<SubEntity>(x)),
                              Fulltext = new[]
                              {
                                  entity.Description
                              }.Concat(entity.SubEntities.Select(x => LoadDocument<SubEntity>(x).Name)),
                              __ = SpatialGenerate("__geolokation", entity.Geolocation.Lat, entity.Geolocation.Lon)
                          };

        Index(x => x.Created.Date, FieldIndexing.Analyzed);
        Index(x => x.Fulltext, FieldIndexing.Analyzed);
        Spatial("__geolokation", x => x.Cartesian.BoundingBoxIndex());
    }
}
Finally we're querying like this:
var query = _ravenSession.Query<EntityWithSubentitiesIndex.Result, EntityWithSubentitiesIndex>()
    .Customize(c =>
    {
        if (filter.Boundary == null) return;
        var wkt = filter.Boundary.GenerateWkt().Result;
        if (!string.IsNullOrWhiteSpace(wkt))
        {
            c.RelatesToShape("__geolokation", wkt, SpatialRelation.Within);
        }
    })
    .AsQueryable();

// (...) and several other filters here, removed for clarity

var enumerator = await _ravenSession.Advanced.StreamAsync(query);
var list = new List<EntityViewModel>();
while (await enumerator.MoveNextAsync())
{
    list.Add(enumerator.Current.Document);
}
When doing so we're getting the following exception:
System.InvalidOperationException: The query results type is 'Entity' but you expected to get results of type 'Result'. If you want to return a projection, you should use .ProjectFromIndexFieldsInto() (for Query) or .SelectFields() (for DocumentQuery) before calling to .ToList().
According to the documentation, the Streaming API should support streaming via an index and querying via an IQueryable at the same time.
How can this be fixed, while still using an index and the streaming API, so that we:
avoid having to page through a normal query to work around the default page size, and
avoid having to load the sub-entities one at a time when querying?
Thanks in advance!
Try to use .As<Entity>() (or .OfType<Entity>()) in your query. That should work in the regular stream.
This is a simple streaming query using a "TestIndex" that is an index over an entity Test, and I'm using a TestIndex.Result to look like your query. Note that this is actually not what the query will return; it's only there so you can write typed queries (i.e. .Where(x => x.SomethingMapped == something)).
var queryable = session.Query<TestIndex.Result, TestIndex>()
    .Customize(c =>
    {
        //do stuff
    })
    .As<Test>();

var enumerator = session.Advanced.Stream(queryable);
while (enumerator.MoveNext())
{
    var entity = enumerator.Current.Document;
}
If you instead want to retrieve the values from the index and not the actual entity being indexed you have to store those as fields and then project them into a "view model" that matches your mapped properties. This can be done by using .ProjectFromIndexFieldsInto<T>() in your query. All the stored fields from the index will be mapped to the model you specify.
Hope this helps (and makes sense)!
Edit: Updated with an example (working for me) of the Streaming API used with ProjectFromIndexFieldsInto<T>() that returns more than 128 records.
using (var session = store.OpenAsyncSession())
{
    var queryable = session.Query<Customers_ByName.QueryModel, Customers_ByName>()
        .Customize(c =>
        {
            //just to do some customization to look more like OP's query
            c.RandomOrdering();
        })
        .ProjectFromIndexFieldsInto<CustomerViewModel>();

    var enumerator = await session.Advanced.StreamAsync(queryable);
    var customerViewModels = new List<CustomerViewModel>();
    while (await enumerator.MoveNextAsync())
    {
        customerViewModels.Add(enumerator.Current.Document);
    }

    Console.WriteLine(customerViewModels.Count); //in my case 504
}
The above code works great for me. The index has one property mapped (name) and that property is stored. This is running the latest stable build (3.0.3800).
As @nicolai-heilbuth stated in the comments to @jens-pettersson's answer, it seems to be a bug in the RavenDB client libraries from version 3 onwards.
Bug report filed here: http://issues.hibernatingrhinos.com/issue/RavenDB-3916
