I'm working on a console application, scheduled via the Windows Task Scheduler to run every 15 minutes, which downloads a file from a public website using WebClient.
string Url1 = "http://www2.epa.gov/sites/production/files/" + DateTime.Now.Year + "-" + DateTime.Now.Month.ToString("d2") + "/rindata.csv";
WebClient webClient = new WebClient();
webClient.DownloadFile(Url1, filename);
The above code works fine, but the URL may or may not change from month to month, seemingly at random, which causes my application to throw a 404 exception.
Example
Consider the URL http://www2.epa.gov/sites/production/files/2015-09/rindata.csv. The variable part of the URL is 2015-09, which holds the data for September. It might change to 2015-10 for October if there is a data change for that month, but there is no pattern to when, or whether, it changes each month.
May I know a better way to handle this?
To make it download every 15 minutes you can use a timer, set its interval to 15 minutes (in milliseconds), and put that code in the tick handler. Regarding the change of URL, I don't know of a better way to do it.
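For example, a minimal sketch of the timer approach using System.Timers.Timer (the target file name "rindata.csv" and the DownloadLatest method name are illustrative):

using System;
using System.Net;
using System.Timers;

class Program
{
    static void Main()
    {
        // 15 minutes expressed in milliseconds
        var timer = new Timer(15 * 60 * 1000);
        timer.Elapsed += (sender, e) => DownloadLatest();
        timer.AutoReset = true;
        timer.Start();
        DownloadLatest(); // also run once immediately on startup
        Console.ReadLine(); // keep the console application alive
    }

    static void DownloadLatest()
    {
        string url = "http://www2.epa.gov/sites/production/files/" + DateTime.Now.Year + "-" + DateTime.Now.Month.ToString("d2") + "/rindata.csv";
        using (var webClient = new WebClient())
        {
            webClient.DownloadFile(url, "rindata.csv");
        }
    }
}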
It sounds like the URL won't necessarily update every month, so if that's the case, don't re-evaluate the string every 15 minutes. On first run, set
string Url1 = "http://www2.epa.gov/sites/production/files/" + DateTime.Now.Year + "-" + DateTime.Now.Month.ToString("d2") + "/rindata.csv";
You'll have to save this working value somewhere, such as the config file; then you can keep reusing it until it fails. So every 15 minutes, only run
WebClient webClient = new WebClient();
webClient.DownloadFile(Url1, filename);
instead of evaluating the URL again.
When it fails, re-evaluate it for the current month:
if (failed)
{
    string Url1 = "http://www2.epa.gov/sites/production/files/" + DateTime.Now.Year + "-" + DateTime.Now.Month.ToString("d2") + "/rindata.csv";
}
then overwrite the saved value.
More info on saving to a settings file:
https://msdn.microsoft.com/en-us/library/aa730869(VS.80).aspx
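Putting it together, a rough sketch of this pattern (the setting name LastGoodUrl and the helper BuildCurrentMonthUrl are illustrative, and assume a string setting of that name has been defined in the project's Settings file; the saved value could equally live in a plain text file):

static string BuildCurrentMonthUrl()
{
    return "http://www2.epa.gov/sites/production/files/" + DateTime.Now.Year + "-" + DateTime.Now.Month.ToString("d2") + "/rindata.csv";
}

static void Download(string filename)
{
    // Reuse the last known-good URL; fall back to the current month on first run
    string url = Properties.Settings.Default.LastGoodUrl;
    if (string.IsNullOrEmpty(url))
        url = BuildCurrentMonthUrl();

    try
    {
        using (var webClient = new WebClient())
            webClient.DownloadFile(url, filename);
    }
    catch (WebException) // most likely a 404: the month folder has rolled over
    {
        url = BuildCurrentMonthUrl();
        using (var webClient = new WebClient())
            webClient.DownloadFile(url, filename);
    }

    // Persist whichever URL worked, for the next run
    Properties.Settings.Default.LastGoodUrl = url;
    Properties.Settings.Default.Save();
}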
In regards to the date change, I would probably do a backwards search from 12 to 1. Since the newest data is all you're interested in, the highest month that doesn't return a 404 will always be the freshest data. A simple loop that checks for a 404 is fine if you have no other way to know what the URL is. This is a very basic example, but the concept should be sound.
for (int i = 12; i >= 1; i--)
{
    string folder = string.Format("{0}-{1}", DateTime.Now.Year, i.ToString().PadLeft(2, '0'));
    string Url1 = "http://www2.epa.gov/sites/production/files/" + folder + "/rindata.csv";
    try
    {
        using (WebClient client = new WebClient())
        {
            client.DownloadFile(Url1, "rindata.csv");
        }
        break; // the first month that downloads successfully is the freshest data
    }
    catch (WebException)
    {
        Console.WriteLine(string.Format("404 Error: {0}", Url1));
    }
}
I am writing program events to a txt file as a log, but the timestamps are not updating at each point. I have declared the following string:
string timeStamp = DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss.ff");
string taskComplete = (timeStamp) + " Task Complete";
which I am calling at different points throughout the program:
using (StreamWriter w_Log = new StreamWriter(file_Log, true))
{
    w_Log.WriteLine(taskComplete);
    w_Log.Close();
}
There are several more strings declared using timeStamp throughout the program as well. Here is an example of the log file:
2014/02/22 10:07:26.71 Process started
2014/02/22 10:07:26.71 Task Complete
2014/02/22 10:07:26.71 Task Complete
2014/02/22 10:07:26.71 Process complete, time elapsed: 0.496 seconds
As you can see, the time appears to be static even though the run took 0.496 seconds to complete. When the program is run again, the time changes to the current time, but the issue is the same: the time written is identical throughout.
Do I need to use a different method or am I using this one incorrectly?
So, at step 1 you're defining a string as DateTime.Now, formatted a particular way.
At each point, you're just writing out that same string. The string is fixed; it's not going to invoke DateTime.Now each time you use it.
So if you want it to change, you're going to need to call DateTime.Now each time:
w_Log.WriteLine(DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss.ff") + " Task Complete ");
You are defining taskComplete as a string once and using it over and over again. It doesn't update, regardless of how you define it. You could set it once now, leave the method running for 10 years, and it would still contain the same value.
You need to update the timestamp value each time you want it refreshed. If you were trying to limit the code in this method, what you could do is change taskComplete to a method that returns a string with an updated timestamp:
void SomeMethod()
{
    //doing other stuff
    using (StreamWriter w_Log = new StreamWriter(file_Log, true))
    {
        w_Log.WriteLine(GetTaskCompleteMessage());
        w_Log.Close();
    }
    //doing other stuff
}

String GetTaskCompleteMessage()
{
    string timeStamp = DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss.ff");
    return timeStamp + " Task Complete";
}
You should rebuild your string each time you want to write to your log. As the code stands, the variable timeStamp is set once and stays the same for the lifetime of your class's instance:
string timeStamp = DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss.ff");
string taskComplete = (timeStamp) + " Task Complete";

//here when you call the log method
using (StreamWriter w_Log = new StreamWriter(file_Log, true))
{
    timeStamp = DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss.ff");
    taskComplete = (timeStamp) + " Task Complete";
    w_Log.WriteLine(taskComplete);
    w_Log.Close();
}
I have around 300k image files in a remote location. I have to download them and write the details of these files to a text file (with some additional info). Due to the nature of the info I'm getting, I have to process each file as it arrives (I also write each file's info as a line in the file) to gather statistics; for example, I keep a list of objects with size and count attributes to see how many images of each size I have.
I have also thought about reading and writing everything to a file without keeping any statistics, and then opening the file again afterwards to compute them. But I can't think of a way to process a 250k-line multi-attribute file for statistics.
I know the lists (yes, I have 2 of them) and the constant loop over each item are bogging the application down, but is there another way? Right now it's been 2 hours and the application is still at 26k. For each image item, I do something like this to keep count: I check if an image arrives with a size that has come before, and if so I add it to that list item.
public void AddSizeTokens(Token token)
{
    int index = tokenList.FindIndex(item => item.size == token.size);
    if (index >= 0)
        tokenList[index].count += 1;
    else
        tokenList.Add(token);
}
What a single line from the file I write to looks like:

Hits    Size    Downloads    Local Loc            Virtual ID
204     88.3    4212         .../someImage.jpg    f-dd-edb2-4a64-b42
I'm downloading the files like this:
try
{
    using (WebClient client = new WebClient())
    {
        if (File.Exists(filePath + "/" + fileName + "." + ext))
        {
            return "File Exists: " + filePath + "/" + fileName + "." + ext;
        }
        client.DownloadFile(virtualPath, filePath + "/" + fileName + "." + ext);
        return "Downloaded: " + filePath + "/" + fileName + "." + ext;
    }
}
catch (Exception e)
{
    return "Problem Downloading " + fileName + ": " + e.Message;
}
You should change your tokenList from a List<Token> to a Dictionary<long, Token>.
The key is the size.
Your code would look like this:
Dictionary<long, Token> tokens = new Dictionary<long, Token>();

public void AddSizeTokens(Token token)
{
    Token existingToken;
    if (!tokens.TryGetValue(token.size, out existingToken))
        tokens.Add(token.size, token);
    else
        existingToken.count += 1;
}
That will change it from an O(n) operation to an O(1) operation.
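Note that this works because Token is assumed to be a class (a reference type), so existingToken.count += 1 mutates the object stored in the dictionary; if Token were a struct, you would be incrementing a copy. A minimal shape, inferred from the question's usage:

class Token
{
    public long size;  // image size, used as the dictionary key
    public int count;  // how many images of this size have been seen
}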
Another point to consider is Destrictor's comment: your internet connection speed may very well be the bottleneck here.
Well, I thought perhaps the code was the issue, and some of it indeed was. As per Daniel Hilgarth's instructions, changing to a dictionary helped a lot, but only for the first 30 minutes; then it got worse by the minute.
The problem was apparently the innocent-looking UI elements I was feeding information to. They ate so much CPU that they eventually killed the application. Minimizing the UI info feed helped (1.5k per minute, 1.3k at the slowest). Unbelievable! Hope this helps others with similar problems.
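For anyone hitting the same wall, the fix amounts to updating the UI every N items instead of on every single file; a rough WinForms-flavored sketch (processedCount, statusLabel, and the threshold of 500 are all made up):

// Inside your per-file processing loop:
processedCount++;
if (processedCount % 500 == 0) // touch the UI every 500 files, not every file
{
    // Invoke marshals the update back to the UI thread when the
    // processing loop runs on a worker thread
    statusLabel.Invoke((Action)(() =>
        statusLabel.Text = processedCount + " files processed"));
}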
I have a function which has a long execution time.
public void updateCampaign()
{
    context.Session[processId] = "0|Fetching Lead360 Campaign";
    Lead360 objLead360 = new Lead360();
    string campaignXML = objLead360.getCampaigns();
    string todayDate = DateTime.Now.ToString("dd-MMMM-yyyy");
    context.Session[processId] = "1|Creating File for Lead360 Campaign on " + todayDate;
    string fileName = HttpContext.Current.Server.MapPath("campaigns") + todayDate + ".xml";
    objLead360.createFile(fileName, campaignXML);
    context.Session[processId] = "2|Reading The latest Lead360 Campaign";
    string file = File.ReadAllText(fileName);
    context.Session[processId] = "3|Updating Lead360 Campaign";
    string updateStatus = objLead360.updateCampaign(fileName);
    string[] statusArr = updateStatus.Split('|');
    context.Session[processId] = "99|" + statusArr[0] + " New Inserted , " + statusArr[1] + " Updated , With " + statusArr[2] + " Error , ";
}
So, to track the progress of the function, I wrote another function:
public void getProgress()
{
    if (context.Session[processId] == null)
    {
        string json = "{\"error\":true}";
        Response.Write(json);
        Response.End();
    }
    else
    {
        string[] status = context.Session[processId].ToString().Split('|');
        if (status[0] == "99") context.Session.Remove(processId);
        string json = "{\"error\":false,\"statuscode\":" + status[0] + ",\"statusmsz\":\"" + status[1] + "\" }";
        Response.Write(json);
        Response.End();
    }
}
This is called with a jQuery post request:
reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=updatecampaign";
$.post(reqUrl);
setTimeout(getProgress, 500);
and getProgress is:
function getProgress() {
    reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=getProgress";
    $.post(reqUrl, function (response) {
        var progress = jQuery.parseJSON(response);
        console.log(progress);
        if (progress.error) {
            $("#fetchedCampaign .waitingMsz").html("Some error occured. Please try again later.");
            $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_error.jpg) no-repeat center 6px" });
            return;
        }
        if (progress.statuscode == 99) {
            $("#fetchedCampaign .waitingMsz").html("Update Status :" + progress.statusmsz);
            $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_loded.jpg) no-repeat center 6px" });
            return;
        }
        $("#fetchedCampaign .waitingMsz").html("Please Wait... " + progress.statusmsz);
        setTimeout(getProgress, 500);
    });
}
But the problem is that I can't see the intermediate messages. Only the last message is displayed, after a long stretch of the ajax loading message.
Also, in the browser console I just see that, after a long time, the first request completes, and only after that the second request completes. But there should be more requests for getProgress, shouldn't there?
I have checked the jQuery docs and they say that $.post is an asynchronous request.
Can anyone please explain what is wrong with the code or logic?
You are in a situation discussed here:
ASP.net session request queuing
While a request for a given user's session is processed, other requests for the same session are waiting. You need to run your long function in a background thread and let the request that initiates it finish. However, the background thread will not have access to session, and you will need a different mechanism to communicate its progress.
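A minimal sketch of that idea: kick off the work on a background thread and publish progress through a static, thread-safe store instead of Session (ProgressStore and StartUpdateCampaign are illustrative names; updateCampaign would need to be refactored to write its status strings here rather than to context.Session):

using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ProgressStore
{
    // processId -> "statuscode|message", mirroring the Session values above
    public static readonly ConcurrentDictionary<string, string> Progress =
        new ConcurrentDictionary<string, string>();
}

// The request that starts the work returns immediately...
public void StartUpdateCampaign(string processId)
{
    ProgressStore.Progress[processId] = "0|Fetching Lead360 Campaign";
    Task.Run(() => updateCampaign(processId));
}

// ...and getProgress reads ProgressStore.Progress[processId] instead of
// context.Session[processId], so the polling requests are never queued
// behind the long-running one.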
From the information you've provided, I would suspect that it's not your javascript code that's being synchronous, but rather the server-side code. You can test this by using Firebug or Chrome's dev tools to look at the start and end times of the two AJAX requests. If I'm right, you'll see that the second request begins after half a second, but doesn't complete until after the first one.
If that's the case, possible causes are:
Running in a dev environment in Visual Studio, especially in debug mode, seems to reduce the amount of asynchronicity. The dev environment seems to like to process one request at a time.
See Igor's answer about session request queueing.
You may have code that explicitly locks resources and causes the second request to block until the long-running request is done.
One other possible culprit is the fact that most browsers only allow a limited number of concurrent requests to a particular domain. If you have a few requests pending at any given moment, the browser may just be queuing up the remaining requests until they return.
I have a web project where clicking a button navigates to another page. The new page can be one of three possible pages, depending on data on the server. (The URL may be the same for two of those pages.)
I have three classes representing expected elements on each page using the PageObject model.
What is the best way to find out which page actually got loaded? Is there an OR type of wait, where I can wait on three unique elements and get the one that actually loaded?
Yes, it is possible to check for the presence of a unique element (which identifies the page) and then return the respective page in the framework.
However, a test should know which page it is expecting next and should assume that the correct page has loaded before performing further actions/assertions. You can even put an assertion here to verify that the correct page has loaded; if a different page has loaded, the test eventually fails because the assertions fail.
This way the test becomes more readable and describes the flow of the application.
Also, setting up test data upfront for the tests is always advisable. That way you know what data is available on the server, and the test knows which page will render.
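As a rough illustration of the "which page loaded?" check (assuming each page object exposes a unique By locator; pageA/pageB/pageC and UniqueLocator are invented names, and the usual OpenQA.Selenium and OpenQA.Selenium.Support.UI using directives are in place):

// Poll all three candidate locators until one shows up, then report which.
// FindElements returns an empty list instead of throwing, and Until retries
// while the lambda returns null, so this loops until one page is present.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30));
string loadedPage = wait.Until(d =>
{
    if (d.FindElements(pageA.UniqueLocator).Count > 0) return "pageA";
    if (d.FindElements(pageB.UniqueLocator).Count > 0) return "pageB";
    if (d.FindElements(pageC.UniqueLocator).Count > 0) return "pageC";
    return null; // nothing yet; keep waiting
});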
I had a similar issue where I needed to detect if a login was for a new user (the login page then goes to a terms & conditions page rather than directly to the home page).
Initially I just waited and then tested for the second page, but this was a pain, so I came up with this.
To test the result:
var whichScreen = waitForEitherElementText(By.CssSelector(HeaderCssUsing), "HOME SCREEN", "home", terms.getHeaderLocator(), terms.headerText, "terms", driver, MAX_STALE_RETRIES);
if (whichScreen.Item1 && whichScreen.Item2 == "terms")
{
    terms.aggreeToTerms();
}
The method that this calls is:
protected Tuple<bool, string> waitForEitherElementText(By locator1, string expectedText1, string return1Ident,
    By locator2, string expectedText2, string return2Ident, IWebDriver driver, int retries)
{
    var retryCount = 0;
    string returnText = "";
    WebDriverWait explicitWait = new WebDriverWait(driver, TimeSpan.FromSeconds(globalWaitTime));
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(0.5));
    while (retryCount < retries)
    {
        try
        {
            explicitWait.Until<bool>((d) =>
            {
                try
                {
                    if (Equals(d.FindElement(locator1).Text, expectedText1)) { returnText = return1Ident; }
                }
                catch (NoSuchElementException)
                {
                    if (Equals(d.FindElement(locator2).Text, expectedText2)) { returnText = return2Ident; }
                }
                return (returnText != "");
            });
            return Tuple.Create(true, returnText);
        }
        catch (StaleElementReferenceException e)
        {
            Console.Out.WriteLine(DateTime.UtcNow.ToLocalTime().ToString() +
                ":>>> -" + locator1.ToString() + " OR " + locator2.ToString() + "- <<< - " +
                this.GetType().FullName + "." + System.Reflection.MethodBase.GetCurrentMethod().Name +
                " : " + e.Message);
            retryCount++;
        }
    }
    return Tuple.Create(false, "");
}
The explicit wait Until uses a boolean, so it will loop around for the full wait time (I have a very slow test server, so I set this to 60 seconds). The implicit wait is set to half a second, so the element checks run every half second and loop around until either true is returned or the wait times out.
I use a Tuple so that I can detect which screen I am on; in this case I agree to the terms & conditions, which then sets me back on my normal page path.
I am writing an event handler that, on ItemAdded, checks to see if a site exists, then creates a site with the given URL or with an alternate URL. I had already written something similar, but I was attempting to clean up my code by pulling the site-exists check into the method below.
private string CheckSiteExists(SPWeb web, string siteURL, string webURL)
{
    //Counter for our alternate URL
    int i = 0;
    //Open original URL
    SPWeb tempweb = web.Site.OpenWeb(webURL + "/" + siteURL);
    //Check if site exists
    if (tempweb.Exists == false)
    {
        do
        {
            i++;
            tempweb = web.Site.OpenWeb(webURL + "/" + siteURL + "_" + i);
        }
        while (tempweb.Exists == false);
        //Dispose of our web
        tempweb.Dispose();
    }
    else
    {
        tempweb.Dispose();
        //If site does not exist, return original URL
        return siteURL;
    }
    //If site does exist, return original url plus counter
    return siteURL + "_" + i;
}
I decided to test what I have and found that w3wp went from 0% CPU usage to 50-80% and stayed there until I killed it manually. I'm guessing that my do-while loop isn't acting the way I think it should, and it's just looping forever.
This code seems to be finding the first URL that matches a web that does exist, not the first one matching a web that doesn't exist:
You're checking tempweb.Exists == false rather than == true.
You're only disposing, and returning the URL, after tempweb.Exists is true.
If no web exists, this will get stuck in a very long loop.
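A sketch of the corrected approach, walking forward until it finds a web that does not exist and disposing each SPWeb along the way (GetAvailableSiteUrl is an illustrative name; this relies on SPSite.OpenWeb returning an SPWeb whose Exists is false for a nonexistent URL, as the original code does):

private string GetAvailableSiteUrl(SPWeb web, string siteURL, string webURL)
{
    int i = 0;
    string candidate = siteURL;
    while (true)
    {
        using (SPWeb tempweb = web.Site.OpenWeb(webURL + "/" + candidate))
        {
            // The first URL with no existing web is free to use
            if (!tempweb.Exists)
                return candidate;
        }
        i++;
        candidate = siteURL + "_" + i;
    }
}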