I have created my own implementation (pretty straightforward) in order to talk to a REST service. The code for GET requests can be found below. However, I would like to hear if there are any obvious pitfalls in my code that make the requests perform worse than they could. They work decently at the moment, but I have a feeling I could have done a better job.
Any feedback would be greatly appreciated!
public static void Get<T>(string url, Action<Result<T>> callback, NetworkCredential credentials = null, JsonConverter converter = null)
{
    // Checks for no internet
    if (!NetworkInterface.GetIsNetworkAvailable())
    {
        callback(new Result<T>(new NoInternetException()));
        return;
    }

    // Sets up the web request for the given URL (REST call)
    var webRequest = WebRequest.Create(url) as HttpWebRequest;
    // Makes sure we'll accept gzip-encoded responses
    webRequest.Headers[HttpRequestHeader.AcceptEncoding] = "gzip";
    // If any credentials were sent, attach them to the request
    webRequest.Credentials = credentials;

    // Queues things up in a thread pool
    ThreadPool.QueueUserWorkItem((object ignore) =>
    {
        // Starts receiving the response
        webRequest.BeginGetCompressedResponse(responseResult =>
        {
            try
            {
                // Fetches the response
                var response = (HttpWebResponse)webRequest.EndGetResponse(responseResult);

                // If there _is_ a response, convert the JSON
                if (response != null)
                {
                    // Gives us a standard variable to put stuff into
                    var result = default(T);
                    // Creates the settings object to insert all custom converters into
                    var settings = new JsonSerializerSettings();
                    // Inserts the relevant converters
                    if (converter != null)
                    {
                        if (converter is JsonMovieConverter)
                        {
                            settings.Converters.Add(new JsonMovieListConverter());
                        }
                        settings.Converters.Add(converter);
                    }

                    // Depending on whether or not the response is gzip-encoded, deserialize from JSON in the correct way
                    if (response.Headers[HttpRequestHeader.ContentEncoding] == "gzip")
                    {
                        var gzipStream = response.GetCompressedResponseStream();
                        result = JsonConvert.DeserializeObject<T>(new StreamReader(gzipStream).ReadToEnd(), settings);
                    }
                    else
                    {
                        result = JsonConvert.DeserializeObject<T>(new StreamReader(response.GetResponseStream()).ReadToEnd(), settings);
                    }

                    // Close the response
                    response.Close();
                    // Launch callback
                    callback(new Result<T>(result));
                }
            }
            catch (Exception ex) // Deals with errors
            {
                var webException = ex as WebException;
                if (webException != null && webException.Response != null && ((HttpWebResponse)webException.Response).StatusCode == HttpStatusCode.Unauthorized)
                {
                    callback(new Result<T>(new UnauthorizedException()));
                }
                else
                {
                    callback(new Result<T>(ex));
                }
            }
        }, webRequest);
    });
}
In general this code should be quite self-explanatory, but here are a few more facts:
I am using Delay's optimized gzip decoder, which provides me with the method GetCompressedResponse() (basically the same as the original method).
I have created some JSON.NET custom JsonConverter classes in order to deserialize my JSON correctly. These are fairly simple and don't affect performance.
The Result class is simply a wrapper class for my results (it contains a Value and an Error field).
I don't know JSON.net, but is there a form that takes a stream or a StreamReader rather than forcing you to read the entire string into memory first? That's rather wasteful if the streams could be large, though it'll make no difference if they're all small.
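(For what it's worth, JSON.NET does appear to support this via JsonSerializer plus a JsonTextReader over the stream; a rough, untested sketch, where `stream` and `settings` stand in for the response stream and serializer settings from the question's code:)

```csharp
// Sketch: deserialize straight from the response stream instead of ReadToEnd().
// `stream`, `settings`, and `result` are placeholders for the corresponding
// objects in the question's Get<T> method.
using (var streamReader = new StreamReader(stream))
using (var jsonReader = new JsonTextReader(streamReader))
{
    var serializer = JsonSerializer.Create(settings);
    result = serializer.Deserialize<T>(jsonReader);
}
```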
The HttpWebRequest.AutomaticDecompression property has been around since .NET 2.0 and can simplify your code (in fairness, I'm always forgetting about that myself).
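Roughly, that would replace both the manual Accept-Encoding header and the gzip special-casing on the way back:

```csharp
var webRequest = (HttpWebRequest)WebRequest.Create(url);
// Ask the framework to advertise gzip/deflate and transparently decompress the response;
// GetResponseStream() then yields already-decompressed data either way.
webRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
```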
You can use the CachePolicy property to have the request use the IE cache, which can be a big saving if you'll hit the same URIs and the server handles it appropriately (appropriate max-age, correct handling of conditional GET). It also allows some flexibility - e.g. if your use case has a high requirement for freshness you can use the Revalidate level so you'll always contact the server, even if the max-age suggests the server shouldn't be contacted, but you can still act on a 304 appropriately (presented to your code as if it were a 200, so you don't need to rewrite everything).
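For example, the Revalidate level mentioned above looks like this (the rest of the request setup stays as in the question):

```csharp
using System.Net.Cache;

var webRequest = (HttpWebRequest)WebRequest.Create(url);
// Always revalidate with the server; on a 304 the body is served from the
// cache but presented to your code as if it were a 200.
webRequest.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Revalidate);
```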
You could even build an object cache on top of this, where you use the IsFromCache property to know whether it's safe to use the cached object, or whether you need to rebuild it because the data it was built from has changed. (This is really sweet, actually: there's the famous line about cache invalidation being a hard problem, and this lets us pass the buck for that hard bit down to the HTTP layer, while the actual cached items live in the .NET layer and don't need to be deserialized again. It's a bit of work, so don't do it if you won't have frequent cache hits due to the nature of your data, but where it does work, it rocks.)
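A rough sketch of that idea, where `_cachedResult` and `DeserializeFromResponse` are hypothetical names for the last deserialized object and your existing deserialization logic:

```csharp
var response = (HttpWebResponse)webRequest.GetResponse();
if (response.IsFromCache && _cachedResult != null)
{
    // The HTTP layer says the bytes haven't changed, so the previously
    // deserialized object is still valid - no need to parse again.
    return _cachedResult;
}
// Otherwise deserialize from the (fresh) response and remember it.
_cachedResult = DeserializeFromResponse(response); // hypothetical helper
return _cachedResult;
```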
I'm working with a frustrating API that has an annoying habit of varying its throttling rate. Sometimes I can send one request per second, and sometimes I can only send a request every three to four seconds.
With this in mind, I need to create a way to manage this. Whenever a request fails, it returns a 503 response (Service Unavailable). My current plan is to use the HttpStatusCode of my WebResponse to determine whether or not I should swallow the current WebException. I can repeat this x number of times until either the request is successful or the process is cancelled altogether.
Note that I cannot stop and restart the process, because it is both time consuming for the user and damaging to the structure of the data.
Currently, I have wrapped up the API call and XML load into a method of its own:
int webexceptionnumber = 200;

public bool webResponseSuccessful(string uri, XmlDocument doc)
{
    try
    {
        WebRequest request = WebRequest.Create(uri);
        WebResponse response = request.GetResponse();
        doc.Load(response.GetResponseStream());
        return true;
    }
    catch (WebException l)
    {
        if (((HttpWebResponse)l.Response).StatusCode == HttpStatusCode.ServiceUnavailable)
        {
            webexceptionnumber = 503; // I need to do this in a much neater
            return false;             // fashion, but this is quick and dirty for now
        }
        else
        {
            return false;
        }
    }
}
I can then call this, checking to see if it returns a false value, like so:
if (!webResponseSuccessful(signedUri, xDoc))
{
//Here is where I'm struggling - see below
}
I'm stumped as to how to do this cleanly. I'd like to avoid getting messier by using a goto statement, so how do I repeat the action when a 503 response is returned? Is a while loop the answer, or do I really need to do away with that extra integer and do this in a cleaner fashion?
Change the bool to a return type; in that type, have a bool that says IsSuccessful and another that says ShouldTryAgain. Then have the caller decide whether to run the operation again or continue.
public class ReturnType
{
    public bool IsSuccessful { get; set; }
    public bool ShouldTryAgain { get; set; }
}
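The caller can then retry in a plain loop, no goto needed. A sketch, assuming `webResponseSuccessful` is changed to return this type, and with a made-up attempt cap and back-off delay:

```csharp
const int maxAttempts = 5; // assumed cap; tune to taste
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    ReturnType result = webResponseSuccessful(signedUri, xDoc); // now returns ReturnType
    if (result.IsSuccessful)
        break;
    if (!result.ShouldTryAgain)
        break; // hard failure - don't hammer the service
    Thread.Sleep(TimeSpan.FromSeconds(4)); // back off past the observed throttle window
}
```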
I've searched through the questions here and all I see is simple theoretical BS. So here is my scenario: we have a new application and one spoiled consumer with their own older system. So in our system, when an evaluation reaches a specific state, we are to transmit all the data to their service.
The first part is simple: get the data from the record, put it into their data contracts, and send the data to them. But the second part is where it gets slippery, because it requires sending all supporting documents. So in a real-world case I have a Referral Document, an Assessment Document, and a Summary of Findings. So in my main code I'm just saying this:
if (client.ReferralDocument != null)
response = TransmitDocumentAsync(client.ReferralDocument);
if (client.Assessment != null)
response = TransmitDocumentAsync(client.Assessment);
if (client.Summary != null)
response = TransmitDocumentAsync(client.Summary);
Now the method called is asynchronous and it is simply
public static async Task<Response> TransmitDocumentAsync(byte[] document)
{
    InterimResponse x = await proxy.InsertAttachmentAsync(document, identifier);
    return new Task<Response>(new Response(x));
}
So I am able to basically throw those documents 'over the wall' to be uploaded without waiting. But what I'm stuck on is how to handle the returned objects, and how do I know which document each one is tied to?
What I'm asking is: what do I need to add after the three calls to handle any errors returned, as well as any other issues or exceptions that arise? Do I just do an await on return? Do I have three return objects (referralResponse, assessmentResponse, summaryResponse) and issue an await on each one? Am I overthinking this and should I just let things end without concern for the results? :)
If you want to await them one at a time:
if (client.ReferralDocument != null)
await TransmitDocumentAsync(client.ReferralDocument);
if (client.Assessment != null)
await TransmitDocumentAsync(client.Assessment);
if (client.Summary != null)
await TransmitDocumentAsync(client.Summary);
If you want to issue them all, and then await all the responses at the same time:
var responses = new List<Task<Response>>();
if (client.ReferralDocument != null)
responses.Add(TransmitDocumentAsync(client.ReferralDocument));
if (client.Assessment != null)
responses.Add(TransmitDocumentAsync(client.Assessment));
if (client.Summary != null)
responses.Add(TransmitDocumentAsync(client.Summary));
Response[] r = await Task.WhenAll(responses);
On a side note, your TransmitDocumentAsync is incorrect. It should not be constructing a new Task, only the new Response.
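That is, something like this: the async machinery wraps the return value in a Task<Response> for you, so you return the Response directly.

```csharp
public static async Task<Response> TransmitDocumentAsync(byte[] document)
{
    InterimResponse x = await proxy.InsertAttachmentAsync(document, identifier);
    // No manual Task construction - awaiting callers get a Task<Response> automatically.
    return new Response(x);
}
```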
I'm working with .NET 3.5 with a simple handler for HTTP requests. Right now, on each HTTP request my handler opens a TCP connection to 3 remote servers in order to receive some information from them. Then it closes the sockets and writes the server status back to Context.Response.
However, I would prefer to have a separate object that connects to the remote servers via TCP every 5 minutes, gets the information, and keeps it. Each HTTP request would then be much faster, just asking this object for the information.
So my question here is: how do I keep a shared global object in memory all the time, one that can "wake up" and do those TCP connections even when no HTTP requests are coming in, and have that object accessible to the HTTP request handler?
A service may be overkill for this.
You can create a global object in your application start and have it create a background thread that does the query every 5 minutes. Take the response (or what you process from the response) and put it into a separate class, creating a new instance of that class with each response, and use System.Threading.Interlocked.Exchange to replace a static instance each time the response is retrieved. When you want to look at the response, simply copy a reference to the static instance into a stack reference and you will have the most recent data.
Keep in mind, however, that ASP.NET will kill your application whenever there are no requests for a certain amount of time (idle time), so your application will stop and restart, causing your global object to get destroyed and recreated.
You may read elsewhere that you can't or shouldn't do background stuff in ASP.NET, but that's not true--you just have to understand the implications. I have similar code to the following example working on an ASP.NET site that handles over 1000 req/sec peak (across multiple servers).
For example, in global.asax.cs:
public class BackgroundResult
{
    public string Response; // for simplicity, just use a public field for this example--for a real implementation, public fields are probably bad
}

class BackgroundQuery
{
    private BackgroundResult _result; // interlocked
    private readonly Thread _thread;

    public BackgroundQuery()
    {
        _thread = new Thread(new ThreadStart(BackgroundThread));
        _thread.IsBackground = true; // allow the application to shut down without errors even while this thread is still running
        _thread.Name = "Background Query Thread";
        _thread.Start();
        // maybe you want to get the first result here immediately?? Otherwise, the first result may not be available for a bit
    }

    /// <summary>
    /// Gets the latest result. Note that the result could change at any time, so do not expect to reference this repeatedly and get the same object back every time--for example, if you write code like: if (LatestResult.IsFoo) { LatestResult.Bar }, the object returned to check IsFoo could be different from the one used to get the Bar property.
    /// </summary>
    public BackgroundResult LatestResult { get { return _result; } }

    private void BackgroundThread()
    {
        try
        {
            while (true)
            {
                try
                {
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/samplepath?query=query");
                    request.Method = "GET";
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        using (StreamReader reader = new StreamReader(response.GetResponseStream(), System.Text.Encoding.UTF8))
                        {
                            // get what I need here (just the entire contents as a string for this example)
                            string result = reader.ReadToEnd();
                            // put it into the results
                            BackgroundResult backgroundResult = new BackgroundResult { Response = result };
                            System.Threading.Interlocked.Exchange(ref _result, backgroundResult);
                        }
                    }
                }
                catch (Exception ex)
                {
                    // the request failed--catch here and notify us somehow, but keep looping
                    System.Diagnostics.Trace.WriteLine("Exception doing background web request:" + ex.ToString());
                }
                // wait for five minutes before we query again. Note that this is five minutes between the END of one request and the start of another--if you want 5 minutes between the START of each request, this will need to change a little.
                System.Threading.Thread.Sleep(5 * 60 * 1000);
            }
        }
        catch (Exception ex)
        {
            // we need to get notified of this error here somehow by logging it or something...
            System.Diagnostics.Trace.WriteLine("Error in BackgroundQuery.BackgroundThread:" + ex.ToString());
        }
    }
}

private static BackgroundQuery _BackgroundQuerier; // set only during application startup

protected void Application_Start(object sender, EventArgs e)
{
    // other initialization here...
    _BackgroundQuerier = new BackgroundQuery();
    // get the value here (it may or may not be set quite yet at this point)
    BackgroundResult result = _BackgroundQuerier.LatestResult;
    // other initialization here...
}
I am looking for a way to close a WebTest response stream (a JSON object) without having to use the timeout property, as that makes the test fail and doesn't always work. The reason for this is that the stream ticks infinitely unless it is closed by the client; right now my tests just time out because I haven't found a way to close the streams from code.
The JSON object doesn't need to be valid, but an example of such an object and what it looks like when streamed can be found here: http://tradestation.github.io/webapi-docs/en/stream/
My load test parses an IIS log and then sends the Web API requests it finds as WebTestRequests. Some of those requests are answered with JSON objects that stream endlessly, and I need to close those streams based on the time it took the request to complete in the IIS log.
public class WebTest1Coded : WebTest
{
    public WebTest1Coded()
    {
        this.PreAuthenticate = true;
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Substitute the highlighted path with the path of the IIS log file
        IISLogReader reader = new IISLogReader(@"C:\IisLogsToWebPerfTest\TestData\log.log");
        foreach (WebTestRequest request in reader.GetRequests())
        {
            if (this.LastResponse != null)
            {
                this.LastResponse.HtmlDocument.ToString();
            }
            yield return request;
        }
    }
}
Thanks!
I have the following bits of code, scattered throughout my application. I'd really like to boilerplate it, and place it in either a static class, or some utility set of classes so I don't have all this duplication.
However, the small bits of the function are unique in such a way that I don't know how to refactor it.
private void callResponseCallback(IAsyncResult asynchronousResult)
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)asynchronousResult.AsyncState;
        HttpWebResponse response;
        // End the get response operation
        response = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
        Stream streamResponse = response.GetResponseStream();
        StreamReader streamReader = new StreamReader(streamResponse);
        string responseData = streamReader.ReadToEnd();
        streamResponse.Close();
        streamReader.Close();
        response.Close();

        ExpectedResponseType regResponse = Newtonsoft.Json.JsonConvert.DeserializeObject<ExpectedResponseType>(responseData);
        if (regResponse.ok == "0")
        {
            // error - handle the msg,
            // whether the user is not logged in or does not exist
            Deployment.Current.Dispatcher.BeginInvoke(() =>
            {
                MessageBox.Show(CustomErrorMessage);
            });
        }
        else
        {
            // check the variables unique to the ExpectedResponseType and do stuff here
        }
    }
    catch (WebException e)
    {
        // Error treatment
        // ...
        Debug.WriteLine("error " + e);
    }
}
I am most curious how to pass in "ExpectedResponseType" such that it might be any class (i.e., is there a way to pass in T?), or possibly how to fire events that can then be executed by the UI thread and handled appropriately.
Thanks.
edit: "ExpectedResponseType" or "T" is a large collection of classes, one for each type of server call. For example, I have LoginResponse, RegisterResponse, GetFilesResponse, UpdateResponse, DownloadResponse, etc.
EDIT: I have removed earlier example as it would not work with the delegate signature.
In order to handle the checking of the parameters specific to the type T, you will need to add a little abstraction. The cleanest way is probably to wrap your code in a generic class that allows the registration of a delegate for handling the checking. I'm sure this is a specific pattern, but I cannot recall which one:
public class ResponseHandler<T>
{
    private readonly Action<T> CheckVariables;

    public ResponseHandler(Action<T> typeSpecificCheckFunction)
    {
        this.CheckVariables = typeSpecificCheckFunction;
    }

    public void callResponseCallback(IAsyncResult asynchronousResult)
    {
        // stuff
        T regResponse = Newtonsoft.Json.JsonConvert.DeserializeObject<T>(responseData);
        CheckVariables(regResponse);
        // stuff
    }
}
In response to your question about handling a large variety of T: perhaps the cleaned-up code above clears it up. If not, this is what generics are for, provided you know what you are expecting in each case. For each type you were expecting, you would call it with something along the lines of:
var handler = new ResponseHandler<ExpectedResponseType>(response =>
{
    // code to check your response properties here
});
xxx.RegisterResponseCallback(handler.callResponseCallback);