EWS C# ExchangeService.MoveItems Issue

I'm having an EWS MoveItems issue that I hope someone can help me with. With 1300 emails in the sent folder, I call the MoveItems method to move them ALL to a back-up folder and only a subset of the items get moved! Looking for a pattern, I recorded the following test numbers:
Test #1: Init Count: 1300; Actual # Moved: 722
Test #2: Init Count: 1300; Actual # Moved: 661
Test #3: Init Count: 1300; Actual # Moved: 738
With each test case my logging output shows that 1300 were found and passed to the MoveItems method; however, checking the Sent Items folder shows that not all 1300 were moved (as indicated in the test numbers above).
Here's a snip of my code:
...
do
{
    ItemView view = new ItemView(pageSize, offset);
    findResults = service.FindItems(folder, emailFilter, view);
    Logger.Write("Email count on this page to be archived: " + findResults.Items.Count);
    foreach (Item email in findResults)
    {
        itemIds.Add(email.Id);
    }
    offset += pageSize;
}
while (findResults.MoreAvailable);

Logger.Write("Total email Ids to be archived: " + itemIds.Count());

if (itemIds.Count() > 0)
{
    Logger.Write("Archiving emails...");
    service.MoveItems(itemIds, folder5.Folders[0].Id);
    Logger.Write("Archive call complete.");
}
else
{
    Logger.Write("No emails found to archive.");
}
...
All of this is wrapped in a try/catch block. No errors are caught.
The only other item worth noting is that the time between the "Archiving emails..." log entry and the "Archive call complete." entry is always within a second or two of one minute, possibly indicating a time-out on the call. Here's a snip of my log:
8/15/2014 4:29:43 PM - Information - Archiving emails...
8/15/2014 4:29:44 PM - Information - Creating search filters...
8/15/2014 4:29:48 PM - Information - Email count on this page to be archived: 1000
8/15/2014 4:29:49 PM - Information - Email count on this page to be archived: 300
8/15/2014 4:29:49 PM - Information - Total email Ids to be archived: 1300
8/15/2014 4:29:49 PM - Information - Archiving emails...
8/15/2014 4:30:51 PM - Information - Archive call complete.
8/15/2014 4:30:51 PM - Information - Email archival completed without errors
I'm pretty much at the end of my rope, so I appreciate any help you may be able to provide.

I had this same issue while working with EWS. I'm not sure what the "correct" solution is, but my workaround seemed to work. I profiled the move and it seemed to do fine moving a few hundred items at a time. Try moving ~250 in each call to MoveItems.
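A minimal sketch of that workaround, assuming itemIds is the list of ItemIds built by the paging loop in the question, and using destinationFolderId as a placeholder for the backup folder's id (folder5.Folders[0].Id in the original code); requires using System.Linq:

// Move the ids in batches of ~250 instead of all 1300 in one call.
const int batchSize = 250;
for (int i = 0; i < itemIds.Count(); i += batchSize)
{
    List<ItemId> batch = itemIds.Skip(i).Take(batchSize).ToList();
    service.MoveItems(batch, destinationFolderId);
    Logger.Write("Moved batch starting at item " + i + " (" + batch.Count + " ids).");
}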

You should try processing the ServiceResponses that come back when you run the MoveItems method, e.g.:
if (itemIds.Count() > 0)
{
    ServiceResponseCollection<MoveCopyItemResponse> Responses = service.MoveItems(itemIds, folder5.Id);
    Int32 Success = 0;
    Int32 Error = 0;
    foreach (MoveCopyItemResponse respItem in Responses)
    {
        switch (respItem.Result)
        {
            case ServiceResult.Success:
                Success++;
                break;
            case ServiceResult.Error:
                Error++;
                Console.WriteLine("Error with Item " + respItem.ErrorMessage);
                break;
        }
    }
    Console.WriteLine("Results Processed " + itemIds.Count + " Success " + Success + " Failed " + Error);
}
That will tell you what's going on and why some of your moves failed. I would suspect it's throttling, so, as Chris suggested, drop your batch size down. In the past, when I've written code to do large moves between a mailbox and an archive, I used a batch size of 100 and never had a problem. When I set the batch size too large, I saw time-outs and throttling errors.
Cheers
Glen

Related

SSIS: AddRow stops after adding one row out of many

I'm trying to process an XML file I'm getting from a vendor. I managed to get some C# code to read in all 26 items in the XML. I placed this code into a Script Component in SSIS and fed it into a Union All task. I then placed a data viewer so I could verify what I received. I use this code to add the rows to the output buffer:
Roles roles = GetWebServiceResult(wUrl);
MessageBox.Show("We have read in " + roles.Items.Length + " items");

// Add each role entry to the output buffer.
for (int i = 0; i < roles.Items.Length; i++)
{
    MessageBox.Show("Adding item " + (i + 1) + " to the output");
    Transfer role = getRole(roles.Items[i]);
    Output0Buffer.AddRow();
    Output0Buffer.roleKey = role.roleKey;
    Output0Buffer.text = role.text;
    Output0Buffer.Item = role.Item;
    Output0Buffer.Users = role.Users;
}
When I run this I get a popup saying there are 26 items to process, but I only get one more popup after that, telling me that item #1 has been added. The job then stops with no errors, and I only have one row of output in the data viewer. I don't understand why this is happening when I know there are 25 more items to add.
Additional: On a whim, I took out the Output0Buffer code and it went through all 26 items.
I figured it out. I ran it using Ctrl-F5 and studied the output in the console. It turns out one of the output columns wasn't big enough. I made that column larger and everything works. I would have thought that error would have stopped the processing.

DirectorySearcher PageSize confusion

I have been looking over Microsoft's documentation and the posts here on getting search results from DirectorySearcher. I am writing code and am not sure of the best-performing way to get a lot of results from AD (right now I am testing with about 4,000 results, but it should scale beyond that).
Question 1: What is the best method?
Here are my efforts so far.
Run 1 description
I did not set PageSize, which returned 2000 results (this seems to be the default on the AD server, not the 1000 I read about in posts/documentation). I do not know how to get the remainder of the results. I tried calling Dispose() and then FindAll() multiple times; that did not work (it gave me the same results over and over).
Question 2: How do I get all the results this way?
Run 1:
//ds.PageSize - not setting this property
log.Debug("PageSize=" + ds.PageSize);
log.Debug("SizeLimit=" + ds.SizeLimit);
results = ds.FindAll();
log.Debug("AD count: " + results.Count);
Run 1 Log
PageSize=0
SizeLimit=0
AD Count: 2000
Run 2 description
I set the PageSize higher than my result count (though I really did not want to, for performance reasons). I got all the results, as expected.
Run 2:
ds.PageSize = 5000;
log.Debug("PageSize=" + ds.PageSize);
log.Debug("SizeLimit=" + ds.SizeLimit);
results = ds.FindAll();
log.Debug("AD count: " + results.Count);
Run 2 Log
PageSize=5000
SizeLimit=0
AD Count: 4066
Run 3 description
I set the PageSize lower than my result count so as not to impact performance, thinking this might allow 'pagination' of the results by calling Dispose() and FindAll(). I got totally unexpected results!
Run 3:
ds.PageSize = 2000;
log.Debug("PageSize=" + ds.PageSize);
log.Debug("SizeLimit=" + ds.SizeLimit);
results = ds.FindAll();
log.Debug("AD count: " + results.Count);
Run 3 Log:
PageSize=2000
SizeLimit=0
AD Count: 4066
Question 3: This makes no sense to me. Please point me in the right direction. I thought subsequent calls to Dispose() and FindAll() would work here, but I got all the results on the first go.
Thanks a million!
The value may have been changed in your environment; it is 1000 by default. You can set PageSize to 1000 and the DirectorySearcher class will handle paging for you. If you set it smaller, that's fine too. You should wrap the code in a using block to make sure resources get disposed.
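A minimal sketch of that advice; the LDAP path and filter here are placeholders for your own:

using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com")) // placeholder path
using (DirectorySearcher ds = new DirectorySearcher(root))
{
    ds.Filter = "(objectCategory=person)"; // placeholder filter
    ds.PageSize = 1000; // DirectorySearcher now transparently pages through all results
    using (SearchResultCollection results = ds.FindAll())
    {
        Console.WriteLine("AD count: " + results.Count);
    }
}

Note that the SearchResultCollection returned by FindAll() is itself disposable and holds unmanaged resources, so it gets its own using block here.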

Get time of speech in a video file (in code)

I'm looking for a way (in java, c#..) to get the time when people speak in a video file (even in a movie).
I don't need to know the accurate words, just the time.
Output example:
00:03 - 01:03 (someone spoke for a minute),
03:00 - 06:12 (someone spoke again),
.
.
.
I have found Sphinx (written in java): http://cmusphinx.sourceforge.net/
but couldn't get it to recognize properly.
Any ideas?
Thanks.
EDIT:
This is what I've tried in Sphinx (very basic):
StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
recognizer.startRecognition(somefile);

SpeechResult result;
while ((result = recognizer.getResult()) != null) {
    System.out.println(result);
}
recognizer.stopRecognition();
There were only 3 results (there should be a lot more).
EDIT2:
Well, I tried this on a song on my computer:
https://www.assembla.com/code/sonido/subversion/nodes/12/sphinx4/src/sphinx4/edu/cmu/sphinx/tools/endpoint/Segmenter.java
This is the output:
DataStartSignal: creation time: 1399716763914
SpeechStartSignal
DoubleData: 44100Hz, first sample #: 8820, collect time: 200
DoubleData: 44100Hz, first sample #: 9261, collect time: 210
.....
DoubleData: 44100Hz, first sample #: 1745037, collect time: 39570
SpeechEndSignal
SpeechStartSignal
DoubleData: 44100Hz, first sample #: 1894536, collect time: 42960
......
Two Problems:
1. My goal is to be able to do this on movies; so far it only works on audio files (.wav).
2. I'm not sure it works well. As you can see, the output says the speech started after 200 milliseconds, when it actually started after at least 3 seconds (the song is 'Bee Gees - How Deep Is Your Love').
I have found Sphinx (written in java): http://cmusphinx.sourceforge.net/ but couldn't get it to recognize properly.
As you said, you do not need recognition. To get only voice activity detection with times in Java, see the segmenter class edu.cmu.sphinx.tools.endpoint.Segmenter.

Request timing out on large operation?

I'm trying to complete a web process; however, I am receiving a 'request timed out' error. I'm not sure what I can do to get around this.
I modified the method to create a new connection for every number being passed in the for loop, but it seems to yield the same result.
I am more of a desktop developer, not overly versed in ASP.NET, so any light that could be shed on my issue would be great.
I've looked into ASP.NET background workers, which don't seem to be a great way to go, and I've increased the server settings to allow a higher timeout, but it still times out if a huge number of parts is provided.
I'm also trying to avoid a separate process scheduled on the server to execute a submitted list of numbers. If more info is needed to make sense of this, just let me know.
Also, when I run the application locally (debug) there are never issues; this only happens on the live site.
Here is the exact error received:
Server Error in '/' Application.
Request timed out.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Web.HttpException: Request timed out.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Stack Trace:
[HttpException (0x80004005): Request timed out.]
And here is the code:
protected void btnSearch_Click(object sender, EventArgs e)
{
    // Clear our reporting panel.
    panelHolder.Controls.Clear();

    // Store each line in a new array item.
    string[] searchlines = txtboxSearch.Text.Replace("\n", "|").Split('|');

    // Create a table row containing our main table headers.
    panelHolder.Controls.Add(new LiteralControl("<table style=\"width:100%;\">" +
                                                " <tr> " +
                                                "  <td></td> " +
                                                "  <td>Number</td> " +
                                                "  <td>Comparison</td> " +
                                                " </tr>"));

    // Variable to hold the row counts.
    int j = 0;

    // Store our current web member's name for use in our tracking data.
    string MemberName = Member.GetCurrentMember().Text;

    // This table will be used solely for storing our Excel-exported data.
    System.Data.DataTable dt = new System.Data.DataTable();

    // Locate our part comparison results for every line of data supplied by our users.
    for (int i = 0; i < searchlines.Count(); i++)
    {
        using (SqlConnection con = new SqlConnection(dbConnection))
        {
            // If this array item is not blank we will need to collect information about it.
            if (searchlines[i].Trim() != string.Empty)
            {
                // Determine if data collection (reporting) is turned on.
                Boolean isReporting = DataCollection();

                using (SqlDataReader dr = Connect.ExecuteReader("SelectNumbers", con,
                    new SqlParameter("@Number", searchlines[i].Trim()),
                    new SqlParameter("@CurrentMember", MemberName),
                    new SqlParameter("@DataCollection", isReporting)))
                {
                    if (dr.HasRows)
                    {
                        while (dr.Read())
                        {
                            // Add our table rows containing our returned data set.
                            panelCompetitorHolder.Controls.Add(new LiteralControl("<tr><td>" + Convert.ToString(i + 1) + "</td>"));
                            AddTableData(dr, "Part Number");
                            AddTableData(dr, "Comparison");

                            // Go to our next line item.
                            j += 1;
                        }
                    }
                }
            }
        }
    }

    // Add our table to the panel control.
    panelHolder.Controls.Add(new LiteralControl("</table>"));
}
Your issue may lie in the fact that ASP.NET assumes a maximum period of time for a given request to be processed. By default, the value is 110 seconds (90 seconds on .NET 1.x). Here are a few ways you can set a different amount of time:
Via Web.config - Check/add this entry in your web.config file:
<system.web>
<httpRuntime executionTimeout="N" />
</system.web>
Programmatically - You may add this to your server-side code:
Server.ScriptTimeout = N;
Where N is, in both options, the desired amount of seconds for the request timeout.
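For example, to allow five minutes (an illustrative value, not a recommendation):
<system.web>
    <httpRuntime executionTimeout="300" />
</system.web>
One caveat worth knowing: ASP.NET only enforces executionTimeout when the application is compiled with debug="false"; with debugging enabled the timeout is effectively ignored, which would also explain why you never see this error when running locally.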
Additionally, your values may be superseded/ignored if there's an entry present in the server's applicationhost.config or machine.config files, as described here:
http://www.iis.net/configreference/system.applicationhost/sites/sitedefaults/limits
If that's the case, you may have to alter the corresponding entries, or, failing that, alter your code-behind accordingly.

HtmlAgilityPack Save Process Not Letting Go Of File

I am saving some of the rendered HTML of a web site by overriding the Render method and using HtmlAgilityPack. Here is the code:
protected override void Render(HtmlTextWriter writer)
{
    using (HtmlTextWriter htmlwriter = new HtmlTextWriter(new StringWriter()))
    {
        base.Render(htmlwriter);
        string output = htmlwriter.InnerWriter.ToString();
        var doc = new HtmlDocument();
        doc.LoadHtml(output);
        doc.Save(currDir + "\\" + reportDir + "\\dashboardTable.html");
    }
}
However, some process does not let go of the saved file and I am unable to delete it from the server. Does anyone know of an HtmlAgilityPack issue that would cause this?
Any advice is appreciated.
Regards.
EDIT:
I have tried both of the suggested methods. I can't tell whether they are the solution yet because my app is frozen on the server due to the files I can't delete. However, when I use these solutions on my own machine, the rendered HTML no longer saves as an HTML table, but rather like this:
INCIDENT MANAGEMENT
Jul '12 F'12
Trend F'12 2011
(avg)
Severe Incidents (Sev1/2): 3 2.1 4.16
Severe Avoidable Incidents (Sev1/2): 1 1.3 1.91
Incidents (Sev3): 669 482 460.92
Incidents (Sev4) - No business Impact: 1012 808 793
Proactive Tickets Opened: 15 19.3 14
Proactive Tickets Resolved/Closed: 14 17.3 11
CHANGE MANAGEMENT
Total Planned Changes: 531 560 583.58
Change Success Rate (%): 99.5 99.4 99
Non-Remedial Urgent Changes: 6 11 47.08
PROBLEM MANAGEMENT
New PIRs: 2 1.4 2
Closed PIRs: 0 2 3
Overdue Action items: 2 3.2 0
COMPLIANCE MEASUREMENTS
Jul Trend Jun
Total Number of Perimeter Devices: 250 258
Perimeter Devices - Non Compliant: 36 31
Total Number of Internal Devices: 6676 6632
Internal Devices - Non Compliant: 173 160
Unauthorized Perimeter Changes: 0 0
Unauthorized Internal Changes 0 0
LEGEND
ISP LINKS
July June Trend
SOC CPO DRP SOC CPO DRP
40% 34% 74% 39% 35% 74%
BELL MPLS HEAD ENDS
July June Trend
SOC CPO SOC CPO
8% 5% 7% 10% 8% 5.5% 7% 10%
ENTERPRISE NETWORK (# of issues called out)
July June Trend
CORE FW/DMZ CORE FW/DMZ
1 0 1 0
US & INTL (# of issues called out)
July June Trend
US Intl US Intl
2 2 2 3
LINE OF BUSINESS BELL WAN MPLS
<> 50%-65% >65% <> 50%-65% >65% Trend
Retail: 2272 0 1 2269 4 0
Business Banking: 59 1 0 60 0 0
Wealth: 122 2 0 121 2 1
Corporate: 51 0 0 49 2 0
Remote ATM: 280 0 0 280 0 0
TOOLS
Version Currency Vulnerability Status Health Status
Key Messages:
where only the text data has been saved and all of the HTML and CSS is missing. If I just use doc.Save() I get an exact representation of the table as it displays on the website.
Try this instead. Maybe the Save method isn't closing the underlying stream.
using (FileStream stream = File.OpenWrite(currDir + "\\" + reportDir + "\\dashboardTable.html"))
{
    doc.Save(stream);
    stream.Close();
}
Edit
Per @L.B's comments, it appears that HtmlAgilityPack does use a using block as in my example, so it will ensure the stream gets closed.
Thus, as I suggested at the end of my original answer, this must be a server environment problem.
Original Answer
This may be some sort of bug with HtmlAgilityPack - you may want to report it to the developers.
However, to eliminate that possibility, you may want to explicitly control the creation of the StreamWriter for the file so that you are closing it yourself. Replace this line:
doc.Save(currDir + "\\" + reportDir + "\\dashboardTable.html");
With the following:
using (StreamWriter fileWriter = new StreamWriter(currDir + "\\" + reportDir + "\\dashboardTable.html"))
{
    doc.Save(fileWriter);
    fileWriter.Close();
}
If the issue still persists even with this change, that would suggest a problem with your server environment rather than with HtmlAgilityPack. By the way, to test whether this change makes a difference, you should start from a clean server environment rather than one where you are already having trouble deleting the file in question.
