I am writing an event handler that, on ItemAdded, checks whether a site exists, then creates a site with the given URL or with an alternate URL. I already wrote something similar, but I was attempting to clean up my site-existence check into the method below.
private string CheckSiteExists(SPWeb web, string siteURL, string webURL)
{
    // Counter for our alternate URL
    int i = 0;
    // Open original URL
    SPWeb tempweb = web.Site.OpenWeb(webURL + "/" + siteURL);
    // Check if site exists
    if (tempweb.Exists == false)
    {
        do
        {
            i++;
            tempweb = web.Site.OpenWeb(webURL + "/" + siteURL + "_" + i);
        }
        while (tempweb.Exists == false);
        // Dispose of our web
        tempweb.Dispose();
    }
    else
    {
        tempweb.Dispose();
        // If site does not exist, return original URL
        return siteURL;
    }
    // If site does exist, return original url plus counter
    return siteURL + "_" + i;
}
I decided to test what I have and found that w3wp went from 0% CPU usage to 50-80% and stayed there until I killed it manually. I'm guessing that my do-while loop isn't behaving as I think it should and is just looping forever.
This code finds the first URL that matches a web that does exist, not the first matching a web that doesn't exist:
You're checking tempweb.Exists == false rather than == true.
You're only disposing, and returning the URL, after tempweb.Exists is true.
If no web ever exists at any of the candidate URLs, this will get stuck in a very long loop.
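A corrected sketch (untested, keeping your names) that inverts the check and disposes every SPWeb it opens:
private string CheckSiteExists(SPWeb web, string siteURL, string webURL)
{
    // OpenWeb succeeds even when no web exists at the URL, so test
    // .Exists explicitly and dispose each SPWeb we open.
    using (SPWeb tempweb = web.Site.OpenWeb(webURL + "/" + siteURL))
    {
        if (!tempweb.Exists)
            return siteURL; // nothing there yet, the original URL is free
    }
    // A web already exists; probe siteURL_1, siteURL_2, ... until one is free.
    int i = 0;
    while (true)
    {
        i++;
        using (SPWeb candidate = web.Site.OpenWeb(webURL + "/" + siteURL + "_" + i))
        {
            if (!candidate.Exists)
                return siteURL + "_" + i;
        }
    }
}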
We have some documentation files in our company that go to customers, and I noticed that some links are saved as absolute references, for example to \\server001\Files\..., which of course won't work for customers. So I wrote code that takes all files in a folder and changes the links from absolute references to relative references.
toc = WordprocessingDocument.Open(pathFile, true);
var baseUri = new Uri(pathFile, UriKind.Absolute);
// Snapshot the relationships first: the loop deletes and re-adds
// relationships, and enumerating the live collection while modifying it is unsafe.
List<HyperlinkRelationship> all_hr = toc.MainDocumentPart.HyperlinkRelationships.ToList();
foreach (HyperlinkRelationship hr in all_hr)
{
    string link = hr.Uri.OriginalString.Replace("%20", " ");
    if (!(link.EndsWith(".doc") || link.EndsWith(".docx")))
        continue;
    if (hr.Uri.IsAbsoluteUri)
    {
        var newHr = new Uri(link, UriKind.Absolute);
        link = baseUri.MakeRelativeUri(newHr).OriginalString;
    }
    log.Info("Changed: " + hr.Uri.OriginalString + " to: " + link + " in file: " + sourceFile);
    // Re-create the relationship under the same id with the relative URI.
    var hyperlinkRelationshipId = hr.Id;
    toc.MainDocumentPart.DeleteReferenceRelationship(hr);
    try
    {
        toc.MainDocumentPart.AddHyperlinkRelationship(new Uri(link, UriKind.Relative), false, hyperlinkRelationshipId);
    }
    catch (Exception)
    {
        // This would be reached if link is still absolute. Never called.
    }
}
toc.Save();
toc.Close();
When I debug this, it finds the absolute links just fine, and they are all stored back as relative links.
But if I then take a look at the files that were "fixed", references that I did not even touch in the code have suddenly changed and are absolute again (to file:///\\Server001\Files\...).
Is my understanding of references in Word wrong, or what might be happening here?
Sample log output:
16:49:43,738 [DOC2PDF] INFO - Changed: file:///\\Server001\Files\Main\RSL.doc to: ..\Main\RSL.doc in file: file:///\\Server001\Files\Main\INDEX.doc
I have around 300k image files in a remote location. I have to download them and write the details of each file to a text file (with some additional info). Due to the nature of the info I'm getting, I have to process each file as it arrives (I also write each file's info to a line of the output file) to gather statistics; for example, I have a list of objects with size and count attributes to track how many images of each size I have.
I have also thought about reading and writing everything to a file without keeping any statistics, then opening the file again afterwards to compute them. But I can't think of a way to process a 250k-line multi-attribute file for statistics.
I know the lists (yes, I have two of them) and the constant loop over every item are bogging the application down, but is there another way? Right now it's been 2 hours and the application is still only at 26k. For each image item, I do something like the following to keep count: I check whether an image arrives with a size that has come before, and if so I increment that list item.
public void AddSizeTokens(Token token)
{
    int index = tokenList.FindIndex(item => item.size == token.size);
    if (index >= 0)
        tokenList[index].count += 1;
    else
        tokenList.Add(token);
}
Here is what a single line of the file I write looks like:
Hits Size Downloads Local Loc Virtual ID
204 88.3 4212 .../someImage.jpg f-dd-edb2-4a64-b42
I'm downloading the files like below:
try
{
    using (WebClient client = new WebClient())
    {
        if (File.Exists(filePath + "/" + fileName + "." + ext))
        {
            return "File Exists: " + filePath + "/" + fileName + "." + ext;
        }
        client.DownloadFile(virtualPath, filePath + "/" + fileName + "." + ext);
        return "Downloaded: " + filePath + "/" + fileName + "." + ext;
    }
}
catch (Exception e)
{
    return "Problem Downloading " + fileName + ": " + e.Message;
}
You should change your tokenList from a List<Token> to a Dictionary<long, Token>, with the size as the key.
Your code would look like this:
Dictionary<long, Token> tokens = new Dictionary<long, Token>();

public void AddSizeTokens(Token token)
{
    Token existingToken;
    if (!tokens.TryGetValue(token.size, out existingToken))
        tokens.Add(token.size, token);
    else
        existingToken.count += 1;
}
That changes it from an O(n) operation to an O(1) operation.
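Note that for existingToken.count += 1 to update the stored entry, Token must be a reference type; with a struct, TryGetValue would hand back a copy and the increment would be lost. A minimal sketch of the assumed shape:
// Assumed shape of the Token type from the question; it must be a class.
public class Token
{
    public long size;
    public int count;
}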
Another point to consider is Destrictor's comment: your internet connection speed may very well be the bottleneck here.
Well, I thought perhaps the coding was the issue, and some of the problem was indeed there. As per Daniel Hilgarth's instructions, changing to a dictionary helped a lot, but only for the first 30 minutes. Then it got worse by the minute.
The problem was apparently the innocent-looking UI elements that I was feeding information. They ate away so much CPU that it eventually killed the application. Minimizing the UI info feed helped (1.5k per minute, at slowest 1.3k). Unbelievable! Hope it helps others who have similar problems.
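For anyone curious, the kind of change that helped was throttling the UI updates. A rough sketch (WinForms assumed; statusLabel and OnFileProcessed are made-up names):
// Pushing text to a control for every one of 300k files floods the UI
// thread; report progress only every 500 files instead.
int processed = 0;

void OnFileProcessed()
{
    processed++;
    if (processed % 500 == 0)
    {
        statusLabel.BeginInvoke((Action)(() =>
            statusLabel.Text = processed + " files processed"));
    }
}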
I have a function which has a long execution time.
public void updateCampaign()
{
    context.Session[processId] = "0|Fetching Lead360 Campaign";
    Lead360 objLead360 = new Lead360();
    string campaignXML = objLead360.getCampaigns();
    string todayDate = DateTime.Now.ToString("dd-MMMM-yyyy");
    context.Session[processId] = "1|Creating File for Lead360 Campaign on " + todayDate;
    string fileName = HttpContext.Current.Server.MapPath("campaigns") + todayDate + ".xml";
    objLead360.createFile(fileName, campaignXML);
    context.Session[processId] = "2|Reading The latest Lead360 Campaign";
    string file = File.ReadAllText(fileName);
    context.Session[processId] = "3|Updating Lead360 Campaign";
    string updateStatus = objLead360.updateCampaign(fileName);
    string[] statusArr = updateStatus.Split('|');
    context.Session[processId] = "99|" + statusArr[0] + " New Inserted , " + statusArr[1] + " Updated , With " + statusArr[2] + " Error , ";
}
So, to track the progress of the function, I wrote another function:
public void getProgress()
{
    if (context.Session[processId] == null)
    {
        string json = "{\"error\":true}";
        Response.Write(json);
        Response.End();
    }
    else
    {
        string[] status = context.Session[processId].ToString().Split('|');
        if (status[0] == "99") context.Session.Remove(processId);
        string json = "{\"error\":false,\"statuscode\":" + status[0] + ",\"statusmsz\":\"" + status[1] + "\" }";
        Response.Write(json);
        Response.End();
    }
}
This is called with a jQuery post request:
reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=updatecampaign";
$.post(reqUrl);
setTimeout(getProgress, 500);
and getProgress is:
function getProgress() {
    reqUrl = "AjaxPages/lead360Campaign.aspx?processid=" + progressID + "&action=getProgress";
    $.post(reqUrl, function (response) {
        var progress = jQuery.parseJSON(response);
        console.log(progress);
        if (progress.error) {
            $("#fetchedCampaign .waitingMsz").html("Some error occurred. Please try again later.");
            $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_error.jpg) no-repeat center 6px" });
            return;
        }
        if (progress.statuscode == 99) {
            $("#fetchedCampaign .waitingMsz").html("Update Status: " + progress.statusmsz);
            $("#fetchedCampaign .waitingMsz").css({ "background": "url(common/images/ajax_loded.jpg) no-repeat center 6px" });
            return;
        }
        $("#fetchedCampaign .waitingMsz").html("Please Wait... " + progress.statusmsz);
        setTimeout(getProgress, 500);
    });
}
But the problem is that I can't see the intermediate messages. Only the last message is displayed, after a long time of the ajax loading message.
Also, in the browser console I just see that the first request completes after a long time, and only after that does the second request complete. But shouldn't there be one request for each getProgress call?
I have checked the jQuery docs, and they say that $.post is an asynchronous request.
Can anyone please explain what is wrong with the code or logic?
You are in a situation discussed here:
ASP.net session request queuing
While a request for a given user's session is processed, other requests for the same session are waiting. You need to run your long function in a background thread and let the request that initiates it finish. However, the background thread will not have access to session, and you will need a different mechanism to communicate its progress.
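A minimal sketch of the idea (the ConcurrentDictionary and the StartUpdateCampaign method are my assumptions, not your existing code):
using System.Collections.Concurrent;
using System.Threading;

// Progress lives in a static concurrent map instead of Session. The request
// that starts the work returns immediately, releasing the session lock, so
// getProgress requests are no longer queued behind the long one.
private static readonly ConcurrentDictionary<string, string> Progress =
    new ConcurrentDictionary<string, string>();

public void StartUpdateCampaign(string processId)
{
    Progress[processId] = "0|Fetching Lead360 Campaign";
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // ... run the long updateCampaign work here, writing each
        // "step|message" status to Progress[processId] instead of Session ...
        Progress[processId] = "99|Done";
    });
}
getProgress would then read Progress[processId] instead of context.Session[processId].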
From the information you've provided, I would suspect that it's not your JavaScript code that's synchronous, but rather the server-side code. You can test this by using Firebug or Chrome's dev tools to look at the start and end times of the two AJAX requests. If I'm right, you'll see that the second request begins after half a second but doesn't complete until after the first one does.
If that's the case, possible causes are:
Running in a dev environment in Visual Studio, especially in debug mode, seems to reduce the amount of asynchronicity. The dev environment seems to like to process one request at a time.
See Igor's answer about session request queueing.
You may have code that explicitly locks resources and causes the second request to block until the long-running request is done.
One other possible culprit is the fact that most browsers only allow a limited number of concurrent requests to a particular domain. If you have a few requests pending at any given moment, the browser may just be queuing up the remaining requests until they return.
I have a web project where clicking a button navigates to another page. The new page can be one of three possible pages, depending on data in the server. (The URL may be the same for two of those pages.)
I have three classes representing the expected elements on each page, using the PageObject model.
What is the best way to find out which page actually loaded? Is there an OR type of wait where I can wait on three unique elements and get whichever one actually loaded?
Yes, it is possible to check for the presence of a unique element (which identifies the page) and then return the respective page object from the framework.
However, a test should know which page it expects next, assume that the correct page has loaded, and perform further actions/assertions against it. You can even put an assertion here to verify the correct page has loaded; if a different page loads, the test eventually fails because the assertions fail.
This way the test becomes more readable and describes the flow of the application.
Also, setting up test data upfront for the tests is always advisable. That way you know what data is available on the server, and the test knows which page will render.
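If you do want an explicit check, a rough sketch is below (PageA/PageB/PageC and the marker locators are placeholders for your three page classes):
// Probe each page's unique element. FindElements returns an empty list
// rather than throwing when nothing matches, so presence checks are cheap.
public object DetectLoadedPage(IWebDriver driver)
{
    if (driver.FindElements(By.Id("pageA-marker")).Count > 0) return new PageA(driver);
    if (driver.FindElements(By.Id("pageB-marker")).Count > 0) return new PageB(driver);
    if (driver.FindElements(By.Id("pageC-marker")).Count > 0) return new PageC(driver);
    throw new NoSuchElementException("None of the expected pages loaded");
}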
I had a similar issue where I needed to detect whether a login was for a new user (the login page then goes to a terms & conditions page rather than directly to the home page).
Initially I just waited and then tested for the second page, but this was a pain, so I came up with this.
To test the result:
var whichScreen = waitForEitherElementText(By.CssSelector(HeaderCssUsing), "HOME SCREEN", "home",
    terms.getHeaderLocator(), terms.headerText, "terms", driver, MAX_STALE_RETRIES);
if (whichScreen.Item1 && whichScreen.Item2 == "terms")
{
    terms.aggreeToTerms();
}
The method that this calls is:
protected Tuple<bool, string> waitForEitherElementText(By locator1, string expectedText1, string return1Ident,
    By locator2, string expectedText2, string return2Ident, IWebDriver driver, int retries)
{
    var retryCount = 0;
    string returnText = "";
    WebDriverWait explicitWait = new WebDriverWait(driver, TimeSpan.FromSeconds(globalWaitTime));
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(0.5));
    while (retryCount < retries)
    {
        try
        {
            explicitWait.Until<bool>((d) =>
            {
                try
                {
                    if (Equals(d.FindElement(locator1).Text, expectedText1)) { returnText = return1Ident; }
                }
                catch (NoSuchElementException)
                {
                    if (Equals(d.FindElement(locator2).Text, expectedText2)) { returnText = return2Ident; }
                }
                return (returnText != "");
            });
            return Tuple.Create(true, returnText);
        }
        catch (StaleElementReferenceException e)
        {
            Console.Out.WriteLine(DateTime.UtcNow.ToLocalTime().ToString() +
                ":>>> -" + locator1.ToString() + " OR " + locator2.ToString() + "- <<< - " +
                this.GetType().FullName + "." + System.Reflection.MethodBase.GetCurrentMethod().Name +
                " : " + e.Message);
            retryCount++;
        }
    }
    return Tuple.Create(false, "");
}
The explicit wait's Until uses a boolean, so it will loop for the full wait time (I have a very slow test server, so I set this to 60 seconds). The implicit wait is set to half a second, so the element lookups retry every half second, looping until either true is returned or the wait times out.
I use a Tuple so that I can detect which screen I am on, and in this case agree to the terms & conditions, which then puts me back on my normal page path.
I am still having problems communicating with TFS through my C# application. I am trying to work with PendingChanges to check in files created by my application, but after hours of research and reading I have yet to find a way to check in ONLY specific files. Whenever I do a check-in, TFS simply checks in ALL items that are currently checked out, plus the ones I tell it to check in. Is there a way to remove certain items from the PendingChanges object, or to create a completely new PendingChanges array with JUST the files I need checked in? This all-or-nothing behavior seems quite ridiculous. Please help.
Workspace myWorkspace = createWorkspace();
// Show our pending changes.
PendingChange[] pendingChanges = myWorkspace.GetPendingChanges();
rt.Text += "Your current pending changes: \n";
List<string> toCheckIn = new List<string>();
foreach (string f in checkinItems)
{
    foreach (PendingChange pendingChange in pendingChanges)
    {
        if (Path.Combine(localPath, f) != pendingChange.LocalItem)
        {
            toCheckIn.Add(Path.Combine(localPath, f));
            rt.Text += "Found one! " + Path.Combine(localPath, f) + "\n";
            break;
        }
        else
        {
            rt.Text += pendingChange.LocalItem + " Not ours. \n";
        }
    }
}
myWorkspace.PendAdd(toCheckIn.ToArray(), true);
// Check in the items we added.
int changesetNumber = myWorkspace.CheckIn(pendingChanges, currentUserName + ": " + toCheckIn + " from CokomImport");
rt.Text += "Checked in changeset " + changesetNumber;
This is the code I have so far. It filters out the stuff I don't need, but in the end it makes no difference, because I still end up checking in the original PendingChanges with the stuff I need added to it.
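For reference, the direction I am exploring (untested; I don't know if this is the intended API usage): since Workspace.CheckIn takes an array of pending changes, build a filtered array and pass only that instead of the full pendingChanges.
// Sketch: collect only the pending changes whose LocalItem matches one of
// our files, then check in just those.
List<PendingChange> selected = new List<PendingChange>();
foreach (PendingChange pc in myWorkspace.GetPendingChanges())
{
    foreach (string f in checkinItems)
    {
        if (string.Equals(pc.LocalItem, Path.Combine(localPath, f), StringComparison.OrdinalIgnoreCase))
        {
            selected.Add(pc);
            break;
        }
    }
}
int changesetNumber = myWorkspace.CheckIn(selected.ToArray(), currentUserName + ": CokomImport check-in");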