Is there any way to pause debugging without breakpoints in VS? (C# console application)
I noticed that after a while my program simply stops working, but no exception is thrown; it just stops doing what it should do.
That's why I wanted to know if there is any way I could pause the code as if there were a breakpoint there, so that I could understand what happened.
In case it matters, the program scrapes data from a website and inserts it into a MS SQL Server database.
EDIT:
This is the function that does the magic in my code and scrapes the data from the website. Everything else is just data manipulation.
public static string PostRequest(string url, string request)
{
    var httpWebRequest = (HttpWebRequest)WebRequest.Create(url);
    httpWebRequest.ContentType = "application/json";
    httpWebRequest.Method = "POST";

    // write the JSON body to the request stream
    using (var streamWriter = new StreamWriter(httpWebRequest.GetRequestStream()))
    {
        string json = request;
        streamWriter.Write(json);
        streamWriter.Flush();
        streamWriter.Close();
    }

    // read the whole response body back as a string
    var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        var result = streamReader.ReadToEnd();
        return result;
    }
}
And again, no exception is being thrown.
I asked for no breakpoints because the program runs just fine for the first 500 times or so; past that point, it freezes.
This line will break as if it were a breakpoint:
System.Diagnostics.Debugger.Break();
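Since you mention it runs fine for roughly the first 500 iterations, you could guard the break with a counter; a minimal sketch (iteration is a name invented here, not from your code):

int iteration = 0; // hypothetical counter declared before your scraping loop
// ... inside the loop:
if (++iteration > 500)
    System.Diagnostics.Debugger.Break(); // pauses in the debugger as if a breakpoint were hit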
Have you considered
Console.ReadLine();
or
Console.ReadKey();
?
You can put a System.Diagnostics.Debugger.Break(); somewhere, with some "activation" condition that can trigger it.
For example, you can have a thread just checking for some condition.
As a start, something like this (yes, it can be improved; this is just an idea):
void CheckAndDebugBreak()
{
    while (true)
    {
        string fileGuard = ...; // whatever file you can easily create/delete
        if (File.Exists(fileGuard))
        {
            File.Delete(fileGuard); // avoid hitting the breakpoint next time; recreate the file when needed
            System.Diagnostics.Debugger.Break();
        }

        // some way to stop the infinite loop, e.g.
        if (somevariableSetOutside)
            break;

        // wait a while before the next check
        System.Threading.Thread.Sleep(1000);
    }
}
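One way to wire that up could be to run the watcher on a background thread when the program starts (a sketch; marking it as a background thread keeps it from holding the process open by itself):

var watcher = new System.Threading.Thread(CheckAndDebugBreak) { IsBackground = true };
watcher.Start(); // the watcher now polls for the guard file alongside the main work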
If you are only avoiding breakpoints because you don't want to have to step through manually, you can try this:
Set a breakpoint at the beginning of your function.
Hover over the breakpoint and select the gear icon.
Check the "Conditions" box.
Select "Hit Count" from the first drop-down and set a value.
This will stop execution on the breakpoint after that many cycles, so you could run for 500 cycles and then pause at the breakpoint.
Additionally, you can create a count variable that counts the cycles and output the number to the console each cycle, to determine exactly how many cycles the program gets through and whether that number is consistent; see the sketch below.
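A minimal sketch of that idea (the loop and variable names here are illustrative, not from the question):

int cycleCount = 0;
foreach (var page in pagesToScrape) // your existing scraping loop
{
    Console.WriteLine("Starting cycle {0}", ++cycleCount); // the last number printed shows where it froze
    // ... existing scraping work ...
}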
Select Debug > Break All from the menu to break into the next available line of code in an executing app. See "Navigate Code with the Visual Studio Debugger" for more information.
About:
I have a Windows Forms application which, every 60 seconds, captures information from two common web pages, does some simple string processing on the results, and does something (or not) based on the outcome.
One of those sites doesn't have any protection, so I can easily get its HTML code using HttpWebRequest and its HttpWebResponse.GetResponseStream().
The other one has some code protection and I can't use the same approach. The solution was to use the WebBrowser class to select all the text of the site and copy it to the clipboard, as Jake Drew posted here (method 1).
Extra information:
When the timer reaches 1 minute, each method is executed asynchronously using a Task. At the end of each Task, the main thread searches for some information in those texts and makes some decisions (or not) based on the result. After this process, not even the captured text is relevant anymore. Basically, everything can be wiped from memory, since I'll fetch everything fresh and process it again in about 1 minute.
Problem:
Everything is working fine, but the problem is that memory usage gradually increases (about 20 MB per tick), which is unnecessary since, as I said, I don't need to keep any more data in memory than I had at the beginning of the app's execution:
After comparing two snapshots, I found these 3 objects, which apparently are responsible for that excess memory usage:
So, even after I moved the main work into Tasks and did everything I could to help the garbage collector, I still have this issue.
What else could I do to avoid this issue or dump the trash from memory?
Edit:
Here's the code that is capturing the HTML of the page using HttpWebRequest:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse()) {
    if (response.StatusCode == HttpStatusCode.OK) {
        Stream receiveStream = response.GetResponseStream();
        StreamReader readStream = null;
        if (response.CharacterSet == null) {
            readStream = new StreamReader(receiveStream);
        } else {
            readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
        }
        PB_value = readStream.ReadToEnd();
        readStream.Close(); //Ensure
    }
    response.Close(); //Ensure
}
Solved:
After some research I found a solution. I actually feel kind of ashamed, because it is quite a simple solution that I hadn't tried before; still, it's important to share.
The first thing I did was create an event to signal when my two Tasks had finished, and then I assigned two functions to this event. The first function forced the garbage collector (GC.Collect()). The second function disposed the two Tasks, since all the main processing was done inside them (T.Dispose()). Then I got the result I wanted.
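For readers landing here, a minimal sketch of that approach (all names are illustrative, not the poster's actual code; Task.Dispose is legal here only because both tasks have already completed when the continuation runs):

Task t1 = Task.Run(() => CaptureFirstSite());  // hypothetical capture methods
Task t2 = Task.Run(() => CaptureSecondSite());

Task.WhenAll(t1, t2).ContinueWith(_ =>
{
    ProcessResults(); // the main-thread decision logic described above
    t1.Dispose();     // dispose the finished tasks...
    t2.Dispose();
    GC.Collect();     // ...and force a collection, as described
});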
So I am writing academic software where I need to get data from a network of 8 devices via online links. Note that these devices are configured in such a fashion that they sometimes return no data or null data, and I need to collect this data over a long period. Here is the code:
public static void ParseJsonStatic(string link, ...)
{
    // access the URLs at a suitable interval and process the data
    var client = new WebClient();
    var stream = client.OpenRead(link);
    Debug.Assert(stream != null, "stream != null");
    var reader = new StreamReader(stream);
    var rootObject = JsonConvert.DeserializeObject<RootObject>(reader.ReadToEnd());
    ....
}
So whenever there is a null stream, Visual Studio pauses and shows me the exception bubble, and I have to click the Continue button.
Is there a way to handle this and make sure my code continues to run from the start if such a situation occurs? What I want is this:
while (stream == null) { ... retry to read stream and don't trigger nullPointerException... }
Pausing in the middle defeats my purpose of collecting data at a specific interval, and I also cannot leave the program unattended for long stretches like that.
Thanks.
Try this:
Stream stream = null;
while (stream == null) {
    stream = client.OpenRead(link);
}
You may also want to wait for some time between read attempts.
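Note that in practice WebClient.OpenRead usually throws (e.g. a WebException) rather than returning null when a device is unreachable, so a more defensive sketch combines the retry with a try/catch and a delay (the 1000 ms value is just an example):

Stream stream = null;
while (stream == null)
{
    try
    {
        stream = client.OpenRead(link);
    }
    catch (WebException)
    {
        // device returned nothing or was unreachable; leave stream null and retry
    }
    if (stream == null)
        System.Threading.Thread.Sleep(1000); // wait before the next attempt
}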
In Visual Studio: Ctrl + Alt + E or Debug > Exceptions.
This will open up a window where you can select which exceptions will cause the debugger to break.
The ones you are interested in will probably be under "Common Language Runtime Exceptions".
Note that any unhandled exception which would cause your app to crash will still break debugging. You need to make sure to handle the exception with a try/catch.
Go to the Debug menu and then select Exceptions.
To access this window go to the Debug menu and select Windows -> Exception Settings.
Select Add and type in your exception. This will add a checkbox item for your exception.
Any time that exception is thrown, your debugger will pause if that checkbox is checked. In your situation, this checkbox must be unchecked.
See this link on msdn for more info!
Using Visual Studio 2012, C# .NET 4.5, SQL Server 2008, Feefo, nopCommerce.
Hey guys, I recently implemented a new review service into a current site we have.
When the change went live, everything worked fine on the first day.
Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong either.
In OrderProcessingService.cs in nopCommerce's services, I make an HttpWebRequest when an order has been confirmed as completed. Here is the code.
var email = HttpUtility.UrlEncode(order.Customer.Email.ToString());
var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString());
var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null ? productVariant.ProductVariant.Product.MetaDescription.ToString() : "product");
var orderRef = HttpUtility.UrlEncode(order.Id.ToString());
var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}", productVariant.ProductVariant.ProductId, productVariant.ProductVariant.Name.Replace(" ", "-")));

string itemRef = "";
try
{
    itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString());
}
catch
{
    itemRef = "0";
}

var url = string.Format("feefo Url",
    login, password, email, name, description, orderRef, productLink, itemRef);
var request = (HttpWebRequest)WebRequest.Create(url);
request.KeepAlive = false;
request.Timeout = 5000;
request.Proxy = null;

using (var response = (HttpWebResponse)request.GetResponse())
{
    if (response.StatusDescription == "OK")
    {
        var stream = response.GetResponseStream();
        if (stream != null)
        {
            using (var reader = new StreamReader(stream))
            {
                var content = reader.ReadToEnd();
            }
        }
    }
}
So as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo.
Now:
the sending hasn't been happening all week, since the 15th (the day of the implementation)
the site has been grinding to a halt recently.
The stream and reader around the var content line are there for debugging.
I'm wondering: does the code red-flag anything to you that could relate to the performance of the website?
Also note that I have run some SQL statements to check for deadlocks or large lock escalations; so far that seems fine, and the logs are also fine, just the usual logging of bots.
Any help would be much appreciated!
EDIT: also note that this code is in a method that is called and wrapped in a try/catch.
UPDATE: well, forget about the "not sending"; that's because I was just told my code was rolled back last week.
A call to another web site while processing the order can degrade performance, as you are calling a site that you do not control and you don't know how much time it is going to take. Furthermore, the GetResponse method can throw an exception, and if you don't log anything in your outer try/catch block, you won't be able to know what's happening.
The best way to perform such a task is to implement something like the "Send Emails" scheduled task, and send the data when you can afford to wait for the remote service. It is easy if you try, and it is more resilient and easier to maintain if you upgrade the nopCommerce code base.
This is how I do similar things:
Avoid modifying the OrderProcessingService: create a custom service or plugin that consumes the OrderPlacedEvent or the OrderPaidEvent (just implement the IConsumer<OrderPaidEvent> or IConsumer<OrderPlacedEvent> interface); see the sketch after this list.
Do not call a third-party service directly while processing the request if you don't need the response at that moment; it will only delay your process. In the service created in step 1, store the data and send it to Feefo later. You can store the data in the database, or use a static collection if you don't mind losing pending data when the site restarts (which could be acceptable for statistical data, for instance).
The best way to implement point #2 is to add a new scheduled task implementing ITask (remember to add a record to the ScheduleTask table). Just recover the stored data and do your processing.
Add some logging. It is easy: just get an ILogger instance and call Insert.
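To make step 1 concrete, a consumer might look roughly like this (a sketch only; the class and collection names are invented, and you should check the exact event and interface signatures against your nopCommerce version):

public class FeefoOrderConsumer : IConsumer<OrderPaidEvent>
{
    // illustrative in-memory store; use a database table instead if you
    // can't afford to lose pending entries when the site restarts
    private static readonly ConcurrentQueue<int> _pendingOrderIds =
        new ConcurrentQueue<int>();

    public void HandleEvent(OrderPaidEvent eventMessage)
    {
        // just record the order here; a scheduled ITask picks these up
        // later and posts them to Feefo outside the order pipeline
        _pendingOrderIds.Enqueue(eventMessage.Order.Id);
    }
}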
As far as I can see, you are making a blocking synchronous call to another website, which will definitely slow down your site for the duration of the request-response round trip. What Marco has suggested is valid; try to do it in an ITask. Or you can use an asynchronous web request to potentially remove the blocking, if you need things done immediately instead of on a schedule. :)
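For reference, on .NET 4.5 the asynchronous variant can be as simple as awaiting the response instead of blocking on it (a sketch, assuming the calling method can be made async):

var request = (HttpWebRequest)WebRequest.Create(url);
using (var response = (HttpWebResponse)await request.GetResponseAsync())
{
    // waiting for Feefo no longer blocks the order-processing thread
}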
I'm having a strange problem that I've never encountered nor heard of happening. It seems that occasionally, the ReadLine() function of the StreamReader class will return null, as if it's at the end of the file, BUT it's not.
My log file indicates that everything happens just as if it had actually reached the end of the file, and yet it only processes part of it. There doesn't appear to be any consistency, because if I restart the process from scratch, the whole file is processed without incident.
Clearly there is nothing funky in the file itself, or it would fail on the same line each time; plus it has happened with a few different files, and each time they are re-run, it works fine.
Anyone run across anything similar, or have any suggestions on what might be causing such a thing?
Thanks,
Andrew
Sample:
line = _readerStream.ReadLine();
if (null != line)
{
    eventRetVal = loadFileLineEvent(line);
}
else
{
    // do some housecleaning and log that the file completed
}
_readerStream is the stream which has been opened elsewhere.
loadFileLineEvent is a delegate that gets passed in and processes the line. It has its own error handling (with logging), so there's no issue in there.
The routine above (not shown in its entirety) also has error handling around it (with logging), which is not being triggered either.
It's getting to the else branch and logging that it reached the end of the file, but it's obvious from the number of records I got that it didn't.
Have you tried a more traditional approach to reading the stream? That way you're checking for the end of the stream before reading the next, potentially empty/null, line. It seems like your code should work, but a null reference exception might be thrown for trying to read a line that doesn't exist (not sure whether StreamReader throws in that case, though).
using (StreamReader SR = new StreamReader(OFD.FileName))
{
    while (!SR.EndOfStream)
    {
        string CurrentLine = SR.ReadLine();
        var eventRetVal = loadFileLineEvent(CurrentLine);
    }
}
I've been asked to write a small program which checks that all the pages we have online are not erroring.
To do this, I use the following code (where pathsToCheck is a List<string>; each string is a URL like http://www.domain.com/webpage):
foreach (string path in pathsToCheck)
{
    HttpWebResponse response = null;
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(path);
        webRequest.AllowAutoRedirect = true;
        response = (HttpWebResponse)webRequest.GetResponse();
        System.Diagnostics.Debug.Assert(response.StatusDescription == "OK", "Look into this, it doesn't like the response code");
        System.Threading.Thread.Sleep(1000);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Failed : " + path);
    }
    finally
    {
        Write(--totalPathsToCheck);
    }
}
The problem I am having is that it always fails (times out) from the third item in the list onwards (everything from the third item fails). Naturally, I guessed there must be a fault with the third item, but there isn't.
Since the first item doesn't time out, I created a new list with 5 items, all the same URL (one I know doesn't time out). The same issue occurs: on the third iteration it times out, and it continues to time out for the remainder of the list.
I then decided to test a different URL (on a different domain) and the same issue persists.
I added the sleep to the code and increased it, in case there were too many requests within a given period, but that made no difference.
What should I be doing?
You need to close your connections. Add
response.Close();
From http://msdn.microsoft.com/en-us/library/system.net.httpwebresponse.close.aspx:
The Close method closes the response stream and releases the connection to the resource for reuse by other requests.
You must call either the Stream.Close or the HttpWebResponse.Close method to close the stream and release the connection for reuse. It is not necessary to call both Stream.Close and HttpWebResponse.Close, but doing so does not cause an error. Failure to close the stream can cause your application to run out of connections.
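Applied to the loop in the question, that might look like the sketch below. HttpWebRequest allows only two concurrent connections per host by default, which is why exactly the third request hangs; wrapping the response in a using block disposes it, and thereby releases the connection, even if the assert or an exception fires:

foreach (string path in pathsToCheck)
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(path);
        webRequest.AllowAutoRedirect = true;
        using (HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse())
        {
            System.Diagnostics.Debug.Assert(response.StatusDescription == "OK", "Look into this, it doesn't like the response code");
        } // connection released here, so later requests don't starve
        System.Threading.Thread.Sleep(1000);
    }
    catch (Exception)
    {
        Console.WriteLine("Failed : " + path);
    }
    finally
    {
        Write(--totalPathsToCheck);
    }
}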