HttpClient - Could not create SSL/TLS secure channel - C#

I've been writing some code that pulls data from an external source. Essentially, it first gets a list of the events that relate to today, processes it, then gets a list of events that relate to tomorrow. Depending upon how many events there are in a given day, the processing can take a number of hours. Here is the problem:
If I run ScrapeToday() and ScrapeTomorrow() immediately after one another, without the processing, everything is gravy. However, in the normal program flow, where there is a large gap between the operations, I catch the following error:
The request was aborted: Could not create SSL/TLS secure channel.
My first instinct is that this must be due to a bug in HttpClient, likely something expiring during the long gap between requests. However, as a new client is created for each request, I wouldn't have thought that possible. I've done some digging around on SO and some other sites but have not been able to get to the root of the problem - any help would be appreciated!
Code below.
public static async Task<Dictionary<string, Calendar.Event>> ScrapeToday()
{
    try
    {
        var client = new HttpClient();
        WriteLine("Requesting today's calendar...");
        var json = await client.GetStringAsync(calendarTodayUrl);
        var results = JsonConvert.DeserializeObject<Dictionary<string, Calendar.Event>>(json);
        return results;
    }
    catch (Exception e)
    {
        Scraper.Instance.WriteLine("Error retrieving today's calendar: " + e);
        return new Dictionary<string, Calendar.Event>();
    }
}

public static async Task<Dictionary<string, Calendar.Event>> ScrapeTomorrow()
{
    try
    {
        var client = new HttpClient();
        WriteLine("Requesting tomorrow's calendar...");
        var json = await client.GetStringAsync(calendarTomorrowUrl);
        var results = JsonConvert.DeserializeObject<Dictionary<string, Calendar.Event>>(json);
        return results;
    }
    catch (Exception e)
    {
        Scraper.Instance.WriteLine("Error retrieving tomorrow's calendar: " + e);
        return new Dictionary<string, Calendar.Event>();
    }
}
Edit: Since posting I have tried making a global HttpClient and using that for both requests. That also did not work.
Edit 2: The bug is reproducible with an elapsed time of 30 minutes between the calls.
Edit 3: If the call is retried after the failure, it always works.
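For what it's worth, a common cause of this exact error message on .NET Framework is the runtime defaulting to legacy SSL/TLS versions while the server requires TLS 1.2. Whether that applies here depends on the target framework and the remote server; a minimal diagnostic sketch, assuming .NET Framework 4.5+:

using System.Net;

static class TlsBootstrap
{
    // Run once at startup, before the first request. Opts in to TLS 1.1/1.2
    // in addition to whatever the runtime already negotiates by default.
    public static void EnableModernTls()
    {
        ServicePointManager.SecurityProtocol |=
            SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
    }
}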

Related

Can't send messages to a Kafka topic using Confluent.Kafka in C#

We have 3 different environments: test, cert, and prod. These environments have topics configured using Offset Explorer.
The problem is that I can send messages to cert and test, but I can't send to prod until the topic in prod is marked for deletion; as soon as I do that, messages immediately begin to be sent. I tried creating new topics in test and cert, and the problem persists there too: until I mark those topics for deletion, I don't succeed in sending a message.
The problem happens when I call the ProduceAsync method. It runs for 5 minutes and then fails with the error:
Local: Message timed out.
If I use the Produce method instead, the program moves on to the next step, but the message never appears in the topic.
private readonly KafkaDependentProducer<Null, string> _producer;
private string topic;
private ILogger<SmsService> _logger;

// Note: the original post uses `config` without declaring it; an
// IConfiguration parameter is assumed here so the snippet compiles.
public SmsService(KafkaDependentProducer<Null, string> producer, ILogger<SmsService> logger, IConfiguration config)
{
    _producer = producer;
    topic = config.GetSection("Kafka:Topic").Value;
    _logger = logger;
}

public async Task<Guid?> SendMessage(InputMessageModel sms)
{
    var message = new SmsModel(sms.text, sms.type);
    var kafkaMessage = new Message<Null, string>();
    kafkaMessage.Value = JsonConvert.SerializeObject(message);
    try
    {
        await _producer.ProduceAsync(topic, kafkaMessage);
    }
    catch (Exception e)
    {
        Console.WriteLine($"Oops, something went wrong: {e}");
        return null;
    }
    return message.messageId;
}
The KafkaDependentProducer class is taken from the official repo example: https://github.com/confluentinc/confluent-kafka-dotnet/tree/master/examples/Web
I found the solution. In my case I needed to add the Acks parameter to the ProducerConfig (Acks = Acks.Leader, which equals 1).
Unfortunately the latest version of Confluent.Kafka didn't surface the exception; I had to drop to a lower version. ProduceAsync then gave me the exception "Broker: Not enough in-sync replicas", at which point I quickly found the answer on the internet.
The min.insync.replicas parameter on the problem topic equals 2.
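For illustration, a minimal sketch of a producer configured with Acks = Acks.Leader as described above (the broker address and topic name are placeholders, not from the original post):

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class AcksExample
{
    public static async Task Main()
    {
        var config = new ProducerConfig
        {
            BootstrapServers = "localhost:9092", // placeholder
            Acks = Acks.Leader                   // equivalent to acks=1: wait for the leader only
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            var result = await producer.ProduceAsync(
                "my-topic", new Message<Null, string> { Value = "hello" });
            Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
        }
    }
}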

How to optimize this code and call an API every 40 ms

I want to poll a sensor that returns a JSON REST API response. I make an API call every 40 milliseconds, but after a while it gives me this error:
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at System.Threading.Tasks.Task`1.get_Result()
I have a timer with Interval = 40, and this is how I call the API:
private void Timer(object sender, EventArgs e)
{
    tmrPollingSick.Stop();
    string strJson = "";
    HttpClient client = new HttpClient();
    string baseUrl = "http://9999.99999.99999.8";
    client.BaseAddress = new Uri(baseUrl);
    var contentType = new MediaTypeWithQualityHeaderValue("application/json");
    client.DefaultRequestHeaders.Accept.Add(contentType);
    string strAltezza = string.Empty;
    try
    {
        strJson = "Here I set HEADERS... DATA etc. " + Convert.ToChar(34) +
                  "header" + Convert.ToChar(34) + ": {............";
        var contentData = new StringContent(strJson, System.Text.Encoding.UTF8, "application/json");
        using (var responseMessage = client.PostAsync("/bla/bla/bla", contentData).Result)
        {
            if (responseMessage.IsSuccessStatusCode)
            {
                string strContext = responseMessage.Content.ReadAsStringAsync().Result;
                Object dec = JsonConvert.DeserializeObject(strContext); // deserialize the JSON string
                JObject obj = JObject.Parse(strContext);
                // Process Data In
                JObject obj1 = JObject.Parse(obj["bla"].ToString());
                JObject obj2 = JObject.Parse(obj1["processDataIn"].ToString());
                strAltezza = obj2["1"].ToString();
                textBox1.Text = strAltezza;
            }
        }
    }
    catch (WebException ex1)
    {
        MessageBox.Show("web: " + ex1.StackTrace.ToString() + " - " + ex1.Message);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.StackTrace.ToString() + " - " + ex.Message);
    }
    tmrPollingSick.Start();
}
Everything works fine, but after a while it gives me that error.
I already read this (How to implement real time data for a web page) and this, but I haven't tried them yet.
Any suggestions on how to fix this?
Is there another way to get the result in real time without crashing?
May I baptize this as stubborn polling?
You don't want to use a timer. What you want is x time between request-response cycles (this will solve the socket exhaustion).
Split your code into phases (client init, request fetch, response processing); see Oliver's answer.
Make a function that executes everything, and run some sort of infinite loop where you sleep for x time after calling the fetch function (and the processing, though you could defer that to another thread or do it async). A combined sketch follows after the next answer.
When you call this method every 40 ms you'll run out of sockets, because you create a new HttpClient every time. Even putting it into a using statement (since HttpClient implements IDisposable) wouldn't solve the problem, because the underlying socket will be blocked by the OS for 3 minutes (take a look at this answer for further explanation).
You should split this stuff into an initialization phase, where you set up the client and build up the request as far as possible, and within the timer method just call PostAsync() and check the response.
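A minimal sketch combining both suggestions (the address, path, and request body are placeholders carried over from the redacted originals): the HttpClient and headers are set up once, and a single loop waits a fixed delay between the end of one request-response cycle and the start of the next.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SensorPoller
{
    // One HttpClient for the lifetime of the app avoids socket exhaustion.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://192.0.2.8") // placeholder address
    };

    public static async Task PollForeverAsync()
    {
        // Initialization phase: done once, not per request.
        Client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        string strJson = "{ \"header\": { } }"; // placeholder body

        while (true)
        {
            try
            {
                var contentData = new StringContent(strJson, Encoding.UTF8, "application/json");
                using (var responseMessage = await Client.PostAsync("/bla/bla/bla", contentData))
                {
                    if (responseMessage.IsSuccessStatusCode)
                    {
                        string body = await responseMessage.Content.ReadAsStringAsync();
                        // parse and process the JSON here (e.g. JObject.Parse(body))
                    }
                }
            }
            catch (HttpRequestException ex)
            {
                Console.WriteLine(ex.Message); // log and keep polling
            }

            // Fixed gap between the END of one cycle and the start of the next,
            // instead of a fixed-rate timer that can overlap itself.
            await Task.Delay(40);
        }
    }
}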

Shared object among different requests

I'm working with .NET 3.5 with a simple handler for HTTP requests. Right now, on each HTTP request my handler opens a TCP connection to 3 remote servers in order to receive some information from them, then closes the sockets and writes the server status back to Context.Response.
However, I would prefer to have a separate object that connects to the remote servers via TCP every 5 minutes, gets the information, and keeps it. Each HTTP request would then be much faster, just asking this object for the information.
So my question is: how do I keep a shared global object in memory all the time, one that can also "wake up" and make those TCP connections even when no HTTP requests are coming in, and have it accessible to the HTTP request handler?
A service may be overkill for this.
You can create a global object in your application start and have it create a background thread that does the query every 5 minutes. Take the response (or what you process from the response) and put it into a separate class, creating a new instance of that class with each response, and use System.Threading.Interlocked.Exchange to replace a static instance each time a response is retrieved. When you want to look at the response, simply copy a reference to the static instance to a stack reference and you will have the most recent data.
Keep in mind, however, that ASP.NET will kill your application whenever there are no requests for a certain amount of time (idle time), so your application will stop and restart, causing your global object to get destroyed and recreated.
You may read elsewhere that you can't or shouldn't do background stuff in ASP.NET, but that's not true--you just have to understand the implications. I have similar code to the following example working on an ASP.NET site that handles over 1000 req/sec peak (across multiple servers).
For example, in global.asax.cs:
public class BackgroundResult
{
    public string Response; // for simplicity, just use a public field for this example--for a real implementation, public fields are probably bad
}

class BackgroundQuery
{
    private BackgroundResult _result; // interlocked
    private readonly Thread _thread;

    public BackgroundQuery()
    {
        _thread = new Thread(new ThreadStart(BackgroundThread));
        _thread.IsBackground = true; // allow the application to shut down without errors even while this thread is still running
        _thread.Name = "Background Query Thread";
        _thread.Start();
        // maybe you want to get the first result here immediately?? Otherwise, the first result may not be available for a bit
    }

    /// <summary>
    /// Gets the latest result. Note that the result could change at any time, so do NOT expect to reference this repeatedly and get the same object back every time--for example, if you write code like: if (LatestResult.IsFoo) { LatestResult.Bar }, the object returned to check IsFoo could be different from the one used to get the Bar property.
    /// </summary>
    public BackgroundResult LatestResult { get { return _result; } }

    private void BackgroundThread()
    {
        try
        {
            while (true)
            {
                try
                {
                    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://example.com/samplepath?query=query");
                    request.Method = "GET";
                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        using (StreamReader reader = new StreamReader(response.GetResponseStream(), System.Text.Encoding.UTF8))
                        {
                            // get what I need here (just the entire contents as a string for this example)
                            string result = reader.ReadToEnd();
                            // put it into the results
                            BackgroundResult backgroundResult = new BackgroundResult { Response = result };
                            System.Threading.Interlocked.Exchange(ref _result, backgroundResult);
                        }
                    }
                }
                catch (Exception ex)
                {
                    // the request failed--catch here and notify us somehow, but keep looping
                    System.Diagnostics.Trace.WriteLine("Exception doing background web request:" + ex.ToString());
                }
                // wait for five minutes before we query again. Note that this is five minutes between the END of one request and the start of another--if you want 5 minutes between the START of each request, this will need to change a little.
                System.Threading.Thread.Sleep(5 * 60 * 1000);
            }
        }
        catch (Exception ex)
        {
            // we need to get notified of this error here somehow by logging it or something...
            System.Diagnostics.Trace.WriteLine("Error in BackgroundQuery.BackgroundThread:" + ex.ToString());
        }
    }
}

private static BackgroundQuery _BackgroundQuerier; // set only during application startup

protected void Application_Start(object sender, EventArgs e)
{
    // other initialization here...
    _BackgroundQuerier = new BackgroundQuery();
    // get the value here (it may or may not be set quite yet at this point)
    BackgroundResult result = _BackgroundQuerier.LatestResult;
    // other initialization here...
}

My timer loop intermittently slows down, not sure why

I have an API request queue that I loop using System.Timers.Timer. I set it up like this:
private static void SetupTimerLoop()
{
    queueLoopTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);
    queueLoopTimer.Interval = 100;
    queueLoopTimer.Start();
}
Periodically, OnTimedEvent gets called only once every second, almost exactly, for large spans of time. It then speeds back up to once every 100 ms. I cannot reliably reproduce this: sometimes it happens, sometimes it does not. I have watched my CPU usage and it doesn't spike during these slowdowns; if anything it goes down.
If I set breakpoints, they show that the timer's interval is still only 100 ms.
What could be going on here? Is there anything I can do to troubleshoot this further?
Potentially related: when this happens, all the HTTP requests initiated from the loop (these are put onto Tasks so they return asynchronously) stop returning.
The request code:
private T Get<T>(string endpoint, IRequest request) where T : class
{
    var client = InitHttpClient();
    string debug = BaseUrl + endpoint + ToQueryString(request);
    HttpResponseMessage response = client.GetAsync(BaseUrl + endpoint + ToQueryString(request)).Result;
    string body = response.Content.ReadAsStringAsync().Result;
    if (response.IsSuccessStatusCode)
    {
        T result = JsonConvert.DeserializeObject<T>(body, _serializerSettings);
        return result;
    }
    var error = JsonConvert.DeserializeObject<HelpScoutError>(body);
    throw new HelpScoutApiException(error, body);
}
As long as the loop seems to be running slow, I never get to: string body = response.Content.ReadAsStringAsync().Result;
Not sure if that could have something to do with it; hopefully you guys have some insight.
Edit: I don't flood the API with requests. I throttle the number of requests that go out per rolling 60-second period; beyond that I just use an if statement to skip the API call for that loop iteration.
First of all, I'm not too clear on what you mean by a loop: your OnTimedEvent is an event handler that gets triggered at a preset interval (100 ms).
As a hint, you might want to add logic at the beginning of your OnTimedEvent handler to prevent re-entrance, in case the handler sometimes takes longer than 100 ms. For example:
static int working = 0;

private static void OnTimedEvent(Object source, System.Timers.ElapsedEventArgs e)
{
    // Interlocked.CompareExchange closes the race where two timer callbacks
    // both pass a plain "if (working > 0)" check, since Elapsed fires on
    // thread-pool threads and can overlap itself.
    if (System.Threading.Interlocked.CompareExchange(ref working, 1, 0) != 0)
        return;
    try
    {
        // do your normal work here
    }
    finally
    {
        System.Threading.Interlocked.Exchange(ref working, 0);
    }
}

Closing WCF Service from Async method?

I have a service layer project on an ASP.NET MVC 5 application I am creating on .NET 4.5.2, which calls out to an external 3rd-party WCF service to get information asynchronously. An original method to call the external service is below (there are 3 of these in total, all similar, which I call in order from my GetInfoFromExternalService method; note it isn't actually called that - just naming it for illustration):
private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    try
    {
        if (_externalpServiceClient == null)
        {
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        }
        string tokenId = await _externalpServiceClient.GetInfoForCarsAsync(cars).ConfigureAwait(false);
        return tokenId;
    }
    catch (Exception ex)
    {
        //TODO plug in log 4 net
        throw new Exception("Failed" + ex.Message);
    }
    finally
    {
        CloseExternalServiceClient(_externalpServiceClient);
        _externalpServiceClient = null;
    }
}
So when each async call completed, the finally block ran: the WCF client was closed and set to null, then newed up when another request was made. This was working fine until a change was needed whereby, if the number of cars passed in by the user exceeds 1000, I split them into batches and call my GetInfoFromExternalService method in a WhenAll with each batch of 1000, as below:
if (cars.Count > 1000)
{
    const int packageSize = 1000;
    var packages = SplitCars(cars, packageSize);
    // kick off the split packages in parallel and await until they all complete
    await Task.WhenAll(packages.Select(GetInfoFromExternalService));
}
However, this now falls over: if I have 3000 cars, the call to GetTokenIdForCarsAsync news up the WCF client, but the finally block closes it, so the second batch of 1000 throws an exception. If I remove the finally block the code works OK - but obviously it is not good practice to leave the WCF client unclosed.
I tried putting the close after the if/else block where cars.Count is evaluated - but if a user uploads, e.g., 2000 cars and that completes in say 1 minute, in the meantime (as the user still has control of the webpage) they could upload another 2000, or another user could upload, and again it falls over with an exception.
Can anyone see a good way to correctly close the external service client?
Based on your related question, your "split" logic doesn't seem to give you what you're trying to achieve: WhenAll still executes requests in parallel, so you may end up running more than 1000 requests at any given moment. Use SemaphoreSlim to throttle the number of simultaneously active requests and limit that number to 1000. This way, you don't need to do any splitting.
Another issue might be in how you handle the creation/disposal of the ExternalServiceClient; I suspect there might be a race condition there.
Lastly, when you re-throw from the catch block, you should at least include a reference to the original exception.
Here's how to address these issues (untested, but it should give you the idea):
const int MAX_PARALLEL = 1000;

SemaphoreSlim _semaphoreSlim = new SemaphoreSlim(MAX_PARALLEL);
volatile int _activeClients = 0;
readonly object _lock = new Object();
ExternalServiceClient _externalpServiceClient = null;

ExternalServiceClient GetClient()
{
    lock (_lock)
    {
        if (_activeClients == 0)
            _externalpServiceClient = new ExternalServiceClient("WSHttpBinding_IExternalService");
        _activeClients++;
        return _externalpServiceClient;
    }
}

void ReleaseClient()
{
    lock (_lock)
    {
        _activeClients--;
        if (_activeClients == 0)
        {
            _externalpServiceClient.Close();
            _externalpServiceClient = null;
        }
    }
}
private async Task<string> GetTokenIdForCarsAsync(Car[] cars)
{
    var client = GetClient();
    try
    {
        await _semaphoreSlim.WaitAsync().ConfigureAwait(false);
        try
        {
            string tokenId = await client.GetInfoForCarsAsync(cars).ConfigureAwait(false);
            return tokenId;
        }
        catch (Exception ex)
        {
            //TODO plug in log 4 net
            throw new Exception("Failed" + ex.Message, ex);
        }
        finally
        {
            _semaphoreSlim.Release();
        }
    }
    finally
    {
        ReleaseClient();
    }
}
Updated based on the comment:
"The external web service company can accept me passing up to 5000 car objects in one call - though they recommend splitting into batches of 1000 and running up to 5 in parallel at a time. So when I mention 7000, I don't mean GetTokenIdForCarsAsync would be called 7000 times - with my code currently it should be called 7 times, i.e. giving me back 7 token IDs. I am wondering, can I use your SemaphoreSlim to run the first 5 in parallel and then 2?"
The changes are minimal (but untested). First:
const int MAX_PARALLEL = 5;
Then, using Marc Gravell's ChunkExtension.Chunkify, we introduce GetAllTokenIdForCarsAsync, which in turn will be calling GetTokenIdForCarsAsync from above:
private async Task<string[]> GetAllTokenIdForCarsAsync(Car[] cars)
{
    var chunks = cars.Chunkify(1000);
    var tasks = chunks.Select(chunk => GetTokenIdForCarsAsync(chunk)).ToArray();
    await Task.WhenAll(tasks);
    return tasks.Select(task => task.Result).ToArray();
}
Now you can pass all 7000 cars into GetAllTokenIdForCarsAsync. This is a skeleton, it can be improved with some retry logic if any of the batch requests has failed (I'm leaving that up to you).
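For completeness, here is a minimal Chunkify extension in the spirit of the one referenced above (a sketch, not Marc Gravell's exact code), so the snippet is self-contained:

using System;
using System.Collections.Generic;

public static class ChunkExtension
{
    // Splits a sequence into arrays of at most chunkSize elements each;
    // the last chunk may be smaller.
    public static IEnumerable<T[]> Chunkify<T>(this IEnumerable<T> source, int chunkSize)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (chunkSize < 1) throw new ArgumentOutOfRangeException("chunkSize");

        var chunk = new List<T>(chunkSize);
        foreach (var item in source)
        {
            chunk.Add(item);
            if (chunk.Count == chunkSize)
            {
                yield return chunk.ToArray();
                chunk.Clear();
            }
        }
        if (chunk.Count > 0)
            yield return chunk.ToArray();
    }
}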
