Using Visual Studio 2012, C#/.NET 4.5, SQL Server 2008, Feefo, nopCommerce
Hey guys, I have recently implemented a new review service into a current site we have.
When the change went live, everything worked fine on the first day.
Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong.
In OrderProcessingService.cs in nopCommerce's services, I call an HttpWebRequest when an order has been confirmed as completed. Here is the code:
var email = HttpUtility.UrlEncode(order.Customer.Email.ToString());
var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString());
var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null ? productVariant.ProductVariant.Product.MetaDescription.ToString() : "product");
var orderRef = HttpUtility.UrlEncode(order.Id.ToString());
var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}", productVariant.ProductVariant.ProductId, productVariant.ProductVariant.Name.Replace(" ", "-")));

string itemRef = "";
try
{
    itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString());
}
catch
{
    itemRef = "0";
}

var url = string.Format("feefo Url",
    login, password, email, name, description, orderRef, productLink, itemRef);
var request = (HttpWebRequest)WebRequest.Create(url);
request.KeepAlive = false;
request.Timeout = 5000;
request.Proxy = null;

using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusDescription == "OK")
    {
        var stream = response.GetResponseStream();
        if (stream != null)
        {
            using (var reader = new StreamReader(stream))
            {
                var content = reader.ReadToEnd();
            }
        }
    }
}
So as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo.
Now:
- this hasn't been happening all week since the 15th (the day of the implementation)
- the site has been grinding to a halt recently
The stream and reader that produce var content are there for debugging.
I'm wondering: does the code red-flag anything to you that could relate to the performance of the website?
Also note I have run some SQL statements to check for deadlocks or large lock escalations; so far everything seems fine. The logs have also been fine, just the usual logging of bots.
Any help would be much appreciated!
EDIT: Also note that this code is in a method that is called and wrapped in a try/catch.
UPDATE: Well, forget about the "not sending"; that's because I was just told my code was rolled back last week.
A call to another web site while processing the order can degrade performance, as you are calling a site that you do not control. You don't know how much time it is going to take. Furthermore, the GetResponse method can throw an exception; if you don't log anything in your outer try/catch block, then you won't be able to know what's happening.
The best way to perform such a task is to implement something like the "Send emails" scheduled task, and send data when you can afford to wait for the remote service. It is easy if you try. It is also more resilient and easier to maintain when you upgrade the nopCommerce code base.
This is how I do similar things:
1. Avoid modifying the OrderProcessingService: create a custom service or plugin that consumes the OrderPlacedEvent or the OrderPaidEvent (just implement the IConsumer<OrderPaidEvent> or IConsumer<OrderPlacedEvent> interface).
2. Do not call a third-party service directly while processing the request if you don't need the response at that moment; it will only delay your process. In the service created in step 1, store the data and send it to Feefo later. You can store the data in the database, or use a static collection if you don't mind losing pending data when the site restarts (that could be OK for statistical data, for instance).
3. The best way to implement point #2 is to add a new scheduled task implementing ITask (remember to add a record to the ScheduleTask table). Just recover the stored data and do your processing. A sketch of steps 1 and 3 follows this list.
4. Add some logging. It is easy: just get an ILogger instance and call Insert.
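Here is a minimal sketch of that shape, assuming a nopCommerce version that exposes OrderPaidEvent, IConsumer<T> and ITask (namespaces omitted; adjust to your version). FeefoOrderPaidConsumer, FeefoSyncTask and the in-memory queue are illustrative names, and the actual Feefo call is left out:

using System.Collections.Concurrent;

// Step 1: consume the event and only record what needs sending.
public class FeefoOrderPaidConsumer : IConsumer<OrderPaidEvent>
{
    // Illustrative in-memory store; use a database table instead if
    // pending data must survive an app-pool restart.
    public static readonly ConcurrentQueue<int> PendingOrderIds =
        new ConcurrentQueue<int>();

    public void HandleEvent(OrderPaidEvent eventMessage)
    {
        PendingOrderIds.Enqueue(eventMessage.Order.Id);
    }
}

// Step 3: a scheduled task (registered in the ScheduleTask table)
// that drains the queue and talks to Feefo outside the order request.
public class FeefoSyncTask : ITask
{
    public void Execute()
    {
        int orderId;
        while (FeefoOrderPaidConsumer.PendingOrderIds.TryDequeue(out orderId))
        {
            // Load the order, build the Feefo URL and send it here,
            // wrapped in try/catch with ILogger-based logging.
        }
    }
}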
As far as I can see, you are making a blocking synchronous call to another website, which will definitely slow down your site during the request-response cycle. What Marco has suggested is valid: try to do it in an ITask. Or you can use an asynchronous web request to remove the blocking, if you need things done immediately instead of on a schedule. :)
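For the asynchronous variant, a rough sketch on .NET 4.5 could look like this (NotifyFeefoAsync is a made-up name; note that HttpWebRequest.Timeout is not honored by asynchronous calls, so timeouts would need separate handling):

using System.IO;
using System.Net;
using System.Threading.Tasks;

private async Task NotifyFeefoAsync(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.KeepAlive = false;
    request.Proxy = null;

    // The thread is released back to the pool while Feefo responds.
    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        var content = await reader.ReadToEndAsync();
        // Inspect or log the content as needed.
    }
}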
Related
Is it possible to read API data as it comes? I wrote the below C# code in my controller, but sometimes the data takes more than 2 minutes to arrive, and I was wondering if it was possible to load the data in my website as it comes in instead of waiting for all of it. Below is my current code:
private static async Task<List<Model>> GetFlightData()
{
    using (var client = new HttpClient())
    {
        client.Timeout = TimeSpan.FromMilliseconds(Timeout.Infinite);
        var content = await client.GetStringAsync(URL);
        var result = JsonConvert.DeserializeObject<List<Model>>(content);
        return result;
    }
}
The fastest way is to save the data statically and initialize it on start-up.
The problem with this solution is that IIS may restart your website when there's no traffic, and the data will be lost (causing the next visitor to wait a whole 2 minutes).
The best suggestion I have is to save it to Redis or another cache of your choosing and then just pull it from there.
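A sketch of the cache-backed approach, using the in-process System.Runtime.Caching.MemoryCache for brevity (a Redis client would follow the same get-or-fill pattern; the cache key and the 30-minute lifetime are arbitrary choices here):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using System.Threading.Tasks;

private static readonly MemoryCache Cache = MemoryCache.Default;

private static async Task<List<Model>> GetFlightDataCached()
{
    // Serve from the cache when possible.
    var cached = Cache.Get("flight-data") as List<Model>;
    if (cached != null)
        return cached;

    // Cache miss: pay the slow call once, then keep the result around.
    var fresh = await GetFlightData();
    Cache.Set("flight-data", fresh, DateTimeOffset.Now.AddMinutes(30));
    return fresh;
}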
About:
I have this Windows Forms application which, every 60 seconds, captures information from two common web pages, does some simple string processing on the result, and does something (or not) based on that result.
One of those sites doesn't have any protection, so I can easily get its HTML code using HttpWebRequest and HttpWebResponse.GetResponseStream().
The other one has some code protection and I can't use the same approach. The solution was to use the WebBrowser class to select all the text of the site and copy it to the clipboard, as Jake Drew posted here (method 1).
Extra information:
When the timer reaches 1 min, each method is asynchronously executed using a Task. At the end of each Task, the main thread searches for some information in those texts and makes (or doesn't make) some decisions based on the result. After this process, not even the captured text is relevant anymore. Basically, everything can be wiped from memory, since I'll fetch everything fresh and process it again in about 1 minute.
Problem:
Everything is working fine, but the problem is that memory usage gradually increases (about 20 MB per tick), which is unnecessary since, as I said before, I don't need to keep any more data in memory than I had at the beginning of the app's execution:
and after comparing two snapshots I've found these 3 objects. Apparently they're responsible for that excess memory usage:
So, even after I put the main execution in Tasks and did everything I could to help the garbage collector, I still have this issue.
What else could I do to avoid this issue or dump the trash from memory?
Edit:
Here's the code that is capturing the HTML of the page using HttpWebRequest:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) {
    if (response.StatusCode == HttpStatusCode.OK) {
        Stream receiveStream = response.GetResponseStream();
        StreamReader readStream = null;
        if (response.CharacterSet == null) {
            readStream = new StreamReader(receiveStream);
        } else {
            readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
        }
        PB_value = readStream.ReadToEnd();
        readStream.Close(); //Ensure
    }
    response.Close(); //Ensure
}
Solved:
After some research I've found a solution. I actually feel kind of ashamed, because it is quite a simple solution that I hadn't tried before; still, it's important to share.
The first thing I did was create an event to identify when my two Tasks had finished, then I assigned two functions to this event. The first function forced the garbage collector to run (GC.Collect()). The second function disposed of the two Tasks, since all the main processing was done inside them (T.Dispose()). Then I got the result I wanted:
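A minimal sketch of that pattern (the timer handler and capture methods are illustrative stand-ins; forcing GC.Collect() is normally a last resort, but it is what worked here):

using System;
using System.Threading.Tasks;

private event Action CaptureFinished;

private async void Timer_Tick(object sender, EventArgs e)
{
    // CaptureSiteOne/CaptureSiteTwo stand in for the two capture methods.
    Task t1 = Task.Run(() => CaptureSiteOne());
    Task t2 = Task.Run(() => CaptureSiteTwo());
    await Task.WhenAll(t1, t2);

    CaptureFinished += () => GC.Collect();                    // first function: force a collection
    CaptureFinished += () => { t1.Dispose(); t2.Dispose(); }; // second function: dispose the Tasks
    CaptureFinished();
    CaptureFinished = null; // unhook so handlers don't accumulate between ticks
}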
We are experiencing performance issues within our website related to high CPU usage. Using a profiler, we have identified a particular method that is taking ~35 seconds to return.
This is a callback method used by a payment gateway called SagePay.
I have copied the two methods below:
public void SagePayNotificationReturn()
{
    string vendorTxCode = Request.Form["vendortxcode"];
    var sagePayTransaction = this.sagePayTransactionManager.GetTransactionByVendorTxCode(vendorTxCode);
    if (sagePayTransaction == null)
    {
        // Cannot find the order, so log an error and return error response
        int errorId = this.exceptionManager.LogException(System.Web.HttpContext.Current.Request, new Exception(string.Format("Could not find SagePay transaction for order {0}.", vendorTxCode)));
        ReturnResponse(System.Web.HttpContext.Current, StatusEnum.ERROR, string.Format("{0}home/error/{1}", GlobalSettings.SiteURL, errorId), string.Format("Received notification for {0} but the transaction was not found.", vendorTxCode));
    }
    else
    {
        // Store the response and respond immediately to SagePay
        sagePayTransaction.NotificationValues = sagePayTransactionManager.FormValuesToQueryString(Request.Form);
        this.sagePayTransactionManager.Save(sagePayTransaction);
        ReturnResponse(System.Web.HttpContext.Current, StatusEnum.OK, string.Format("{0}payment/processtransaction/{1}", GlobalSettings.SiteURL, vendorTxCode), string.Empty);
    }
}

private void ReturnResponse(HttpContext context, StatusEnum status, string redirectUrl, string statusDetail)
{
    context.Response.Clear();
    context.Response.ContentEncoding = Encoding.UTF8;
    using (StreamWriter streamWriter = new StreamWriter(context.Response.OutputStream))
    {
        streamWriter.WriteLine(string.Concat("Status=", status.ToString()));
        streamWriter.WriteLine(string.Concat("RedirectURL=", redirectUrl));
        streamWriter.WriteLine(string.Concat("StatusDetail=", HttpUtility.HtmlEncode(statusDetail)));
        streamWriter.Flush();
        streamWriter.Close();
    }
    context.ApplicationInstance.CompleteRequest();
}
The GetTransactionByVendorTxCode method is a simple Entity Framework call, so I've ruled that out.
Has anybody any experience of this, or can they see anything glaringly wrong with the code that could cause such an issue?
EDIT: Looking at the breakdown table provided by the profiler, it says that 99.6% of the time is spent in System.Web.Mvc.MvcHandler.BeginProcessRequest().
EDIT: Using the profiling tool New Relic, it says that 22% of all processing time is spent in the this.sagePayTransactionManager.GetTransactionByVendorTxCode(vendorTxCode) method. This simply contains an EF6 call to a repository. The call does take a predicate parameter, though, rather than a pre-defined condition. Could it be that the query is not being pre-compiled?
Here's my first step of getting to the solution:
Put in a timer start before this statement, then stop it when it completes, and tell us the timespan (see the Stopwatch sketch at the end of this answer).
var sagePayTransaction = this.sagePayTransactionManager.GetTransactionByVendorTxCode(vendorTxCode);
Put in another timer for this code block, and tell us the relative time compared to the method above:
using (StreamWriter streamWriter = new StreamWriter(context.Response.OutputStream))
{
    streamWriter.WriteLine(string.Concat("Status=", status.ToString()));
    streamWriter.WriteLine(string.Concat("RedirectURL=", redirectUrl));
    streamWriter.WriteLine(string.Concat("StatusDetail=", HttpUtility.HtmlEncode(statusDetail)));
    streamWriter.Flush();
    streamWriter.Close();
}
Finally, put another timer here:
context.ApplicationInstance.CompleteRequest();
Post the information back to us and I'll guide you to the next step. What we are doing above is gathering metrics that span both local and remote access, to find the major problem. We'll pick that off first, then progress further if needed. Just tell us what the measurements are.
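A sketch of the instrumentation meant above, using System.Diagnostics.Stopwatch (logger stands in for whatever logging you already have):

var sw = System.Diagnostics.Stopwatch.StartNew();
var sagePayTransaction = this.sagePayTransactionManager.GetTransactionByVendorTxCode(vendorTxCode);
sw.Stop();
logger.Info("GetTransactionByVendorTxCode: " + sw.ElapsedMilliseconds + " ms");

sw.Restart();
ReturnResponse(System.Web.HttpContext.Current, StatusEnum.OK, redirectUrl, string.Empty);
sw.Stop();
logger.Info("ReturnResponse: " + sw.ElapsedMilliseconds + " ms");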
There are a number of things you may need to consider here.
If GetTransactionByVendorTxCode is responsible for 22% of total processing time, then you will need to streamline everything that method touches, but then still go on to fish out the other bottlenecks in the overall processing pipeline.
You say that the method abstracts a call to EF6 and that it passes in a predicate expression that gets used in the Where clause to build up the final query.
If the query is a complex one, have you considered delegating it to a stored procedure? Since you are returning an entity, you can hang it off of the DbSet. (In the case of a DTO, it will hang off the Database property of the DbContext.)
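For instance, a rough EF6 sketch (the stored procedure, the SagePayTransactions DbSet name and the DTO type are all hypothetical; you would create them to match your schema):

using System.Data.SqlClient;
using System.Linq;

// Entity, hung off the DbSet:
var transaction = context.SagePayTransactions
    .SqlQuery("EXEC GetTransactionByVendorTxCode @VendorTxCode",
              new SqlParameter("@VendorTxCode", vendorTxCode))
    .FirstOrDefault();

// DTO, hung off the Database property of the DbContext:
var dto = context.Database
    .SqlQuery<SagePayTransactionDto>("EXEC GetTransactionByVendorTxCode @VendorTxCode",
                                     new SqlParameter("@VendorTxCode", vendorTxCode))
    .FirstOrDefault();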
Also, you will need to look at the indices on the columns used in your predicate. What is the current record count? Are your queries resulting in seeks or scans? You will need to look at the resulting query plans, and if you are using SQL Server, run your queries through the Database Engine Tuning Advisor.
Perhaps a little more detail about your current setup will assist in providing better guidance.
I have been reading a lot about thread pools, Tasks, and threads. After a while I got pretty confused by the whole thing: lots of people say negative/positive things about each... Maybe someone can help me find a solution for my problem. I created a simple diagram here to get my point across better.
Basically, on the left is a list of 5 strings (URLs) that need to be processed. In the center is just my idea of a handler that has 2 events to track progress. Inside that handler, it takes all 5 URLs and creates separate tasks for them, shown in blue. Once each one completes, I want each one to return the webpage results to the handler. When they have all returned a value, I want OnComplete to be called and all this information passed back to the main thread.
Hopefully you can understand what I am trying to do. Thanks in advance to anyone who would like to help!
Update
I have taken your suggestions and put them to use, but I still have a few questions. Here is the code I have built; mind you, it is not build-proof, just a concept to see if I'm going in the right direction. Please read the comments, as I have included my questions on how to proceed in there. Thank you to all who have taken an interest in my question so far.
public List<String> ProcessList(string[] URLs)
{
    List<string> data = new List<string>();
    for (int i = 0; i < URLs.Length; i++) // note: "URLs.Length - 1" would skip the last URL
    {
        //not sure how to do this now??
        //I want only 10 HttpWebRequest running at once.
        //Also I want this method to block until all the URL data has been returned.
    }
    return data;
}

private async Task<string> GetURLData(string URL)
{
    //First set up our web request
    HttpWebRequest Request = GetWebRequest(URL);
    //
    //Check if the request holds a value. (There were no errors)
    if (Request != null)
    {
        //GetURLData will return to the calling function and resume
        //here when GetResponseAsync is complete.
        WebResponse Response = await Request.GetResponseAsync();
        //
        //Set up our Stream to read the reply
        Stream ResponseStream = Response.GetResponseStream();
        //return the reply string here...
    }
}
As #fendorio and #ps2goat pointed out async await is perfect for your scenario. Here is another msdn article
http://msdn.microsoft.com/en-us/library/hh300224.aspx
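To address the two questions in the code comments (only 10 requests at once, and block until everything is back), a sketch with SemaphoreSlim and Task.WhenAll could look like this, building on the GetURLData method above:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public async Task<List<string>> ProcessListAsync(string[] URLs)
{
    var throttle = new SemaphoreSlim(10); // at most 10 requests in flight
    var tasks = URLs.Select(async url =>
    {
        await throttle.WaitAsync();
        try
        {
            return await GetURLData(url);
        }
        finally
        {
            throttle.Release();
        }
    });

    // Completes only when every URL has been fetched.
    return (await Task.WhenAll(tasks)).ToList();
}

If the method truly has to block, a synchronous caller could use ProcessListAsync(URLs).Result, though mixing .Result with async code risks deadlocks in UI or ASP.NET contexts.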
It seems to me that you are trying to replicate a web server within a web server.
Each web request starts its own thread in a web server. As these requests can originate from anywhere that has access to the server, nothing but the server itself has access to, or the ability to manage, them (in a clean way).
If you would like to handle requests and keep track of them like I believe you are asking, AJAX requests would be the best way to do this. That way you can leave the server to manage the threads and requests as it does best, while you manage their progress and monitor them via JSON return results.
Look into jQuery.ajax for some ideas on how to do this.
To achieve the above-mentioned functionality in a simple way, I would prefer calling a BackgroundWorker for each of the tasks. You can keep track of the progress, plus you get a notification upon task completion.
Another reason to choose this is that the mentioned tasks look like back-end jobs and are not tightly coupled with the UI.
Here's an MSDN link, and this is the link for a cool tutorial.
We are building a highly concurrent web application, and recently we have started using asynchronous programming extensively (using TPL and async/await).
We have a distributed environment, in which apps communicate with each other through REST APIs (built on top of ASP.NET Web API). In one specific app, we have a DelegatingHandler that after calling base.SendAsync (i.e., after calculating the response) logs the response to a file. We include the response's basic information in the log (status code, headers and content):
public static string SerializeResponse(HttpResponseMessage response)
{
    var builder = new StringBuilder();
    var content = ReadContentAsString(response.Content);
    builder.AppendFormat("HTTP/{0} {1:d} {1}", response.Version.ToString(2), response.StatusCode);
    builder.AppendLine();
    builder.Append(response.Headers);
    if (!string.IsNullOrWhiteSpace(content))
    {
        builder.Append(response.Content.Headers);
        builder.AppendLine();
        builder.AppendLine(Beautified(content));
    }
    return builder.ToString();
}

private static string ReadContentAsString(HttpContent content)
{
    return content == null ? null : content.ReadAsStringAsync().Result;
}
The problem is this: when the code reaches content.ReadAsStringAsync().Result under heavy server load, the request sometimes hangs in IIS. When it does, it sometimes returns a response but hangs in IIS as if it hadn't, and at other times it never returns.
I have also tried reading the content using ReadAsByteArrayAsync and then converting it to a string, with no luck.
When I convert the code to use async throughout, I get even weirder results:
public static async Task<string> SerializeResponseAsync(HttpResponseMessage response)
{
    var builder = new StringBuilder();
    var content = await ReadContentAsStringAsync(response.Content);
    builder.AppendFormat("HTTP/{0} {1:d} {1}", response.Version.ToString(2), response.StatusCode);
    builder.AppendLine();
    builder.Append(response.Headers);
    if (!string.IsNullOrWhiteSpace(content))
    {
        builder.Append(response.Content.Headers);
        builder.AppendLine();
        builder.AppendLine(Beautified(content));
    }
    return builder.ToString();
}

private static Task<string> ReadContentAsStringAsync(HttpContent content)
{
    return content == null ? Task.FromResult<string>(null) : content.ReadAsStringAsync();
}
Now HttpContext.Current is null after the call to content.ReadAsStringAsync(), and it remains null for all subsequent requests! I know this sounds unbelievable; it took me some time, and the presence of three coworkers, to accept that this was really happening.
Is this some kind of expected behavior? Am I doing something wrong here?
I had this problem. Although I haven't fully tested it yet, using CopyToAsync instead of ReadAsStringAsync seems to fix the problem:
var ms = new MemoryStream();
await response.Content.CopyToAsync(ms);
ms.Seek(0, SeekOrigin.Begin);
var sr = new StreamReader(ms);
responseContent = sr.ReadToEnd();
With regards to your second issue: async/await is syntactic sugar for the compiler building a state machine, where the call to a function preceded by "await" returns immediately on the current thread, one that contains HttpContext.Current in its thread-local storage. The completion of that async call can occur on a different thread, one that does NOT have HttpContext.Current in its thread-local storage.
If you want the completion to execute on the same thread (thus having the same objects in thread-local storage, like HttpContext.Current), then you need to be aware of this behavior. This is especially important for calls from the main UI thread (if you're building a Windows application) or, in ASP.NET, calls from an ASP.NET request thread where you are dependent on HttpContext.Current.
See the reference docs on ConfigureAwait(false). Also, watch some Channel 9 tutorials on the TPL. Once the "easy" stuff is grokked, the presenter will invariably talk about this issue, as it causes subtle problems that are not easily understood unless you know what the TPL is doing under the covers.
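As a small illustration (a sketch only): in library-level code like the serializer above, where nothing after the await touches HttpContext, the await can opt out of resuming on the captured context:

private static async Task<string> ReadContentAsStringAsync(HttpContent content)
{
    if (content == null)
        return null;

    // ConfigureAwait(false): do not capture/resume on the ASP.NET context.
    // The continuation may run on a thread-pool thread, so nothing after
    // this line should rely on HttpContext.Current.
    return await content.ReadAsStringAsync().ConfigureAwait(false);
}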
Good luck.
With regards to your first problem: if the caller gets a result, I'm not convinced that IIS has not completed the request. How are you determining that the ASP.NET request thread initiated by this caller is hung in IIS?