Foreach not completing iterations - C#

C# foreach not completing all iterations: if I have 10 XML records, it only creates about 6 XML files.
public void PRODUCT()
{
    try
    {
        var toProdSignlCRMList = _db.kv_sp_Product().ToList();
        if (toProdSignlCRMList.Count > 0)
        {
            foreach (kv_sp_Product_Result myProdSignalLoop in toProdSignlCRMList)
            {
                var erp_prod_signal = new erp_crm_class.PRODUCT
                {
                    CODE = myProdSignalLoop.Code.ToString().Trim(),
                    SHORTDESC = myProdSignalLoop.Description_1.ToString().Trim(),
                    INTERNATIONAL = false,
                };
                XmlSerializer xsSubmit = new XmlSerializer(typeof(erp_crm_class.PRODUCT));
                var ProdSignalxml = "";
                using (var sww = new StringWriter())
                {
                    using (XmlWriter writer = XmlWriter.Create(sww))
                    {
                        xsSubmit.Serialize(writer, erp_prod_signal);
                        ProdSignalxml = sww.ToString();
                        using (StreamWriter outputFile = new StreamWriter(Convert.ToString("C:\\Upload\\" + "PRODUCT" + DateTime.Now.ToString("yyyyMMddhhmmssfff") + ".xml")))
                        {
                            outputFile.Write(ProdSignalxml);
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
    }
}

You are using a try/catch block.
A try/catch block leaves the try block as soon as the code throws and immediately runs the catch block. Because your try wraps the whole foreach, the first record that throws silently ends the loop, which is why only some of the XML files get created.
Normally you'd have some sort of error handling in the catch block, but for debugging you could also add the following to help you find out what is going on:
catch(Exception ex)
{
MessageBox.Show(ex.Message, "Error");
}
If the actual message doesn't contain any useful information, another way to go about it is to step through each loop iteration and see which part fails and why.
A third option, and most likely the best one, is to remove your try/catch block and make sure your debugger is configured to break on exceptions - that way you get the actual error in your debugger.
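If it helps, here is a rough sketch of that debugging approach (the class, property, and variable names are taken from the question; the logging call is only a placeholder): moving the try/catch inside the loop and recording which record failed means one bad record no longer stops the remaining iterations.
foreach (kv_sp_Product_Result myProdSignalLoop in toProdSignlCRMList)
{
    try
    {
        // ... build and serialize erp_prod_signal as before ...
    }
    catch (Exception ex)
    {
        // Surface the failing record instead of swallowing the error.
        // Debug.WriteLine is just a placeholder; use whatever logging you have.
        System.Diagnostics.Debug.WriteLine(
            "Failed on record " + myProdSignalLoop.Code + ": " + ex);
    }
}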

Related

Is the using var always executed?

In some code that I maintain, I came across this:
int Flag;
using (StreamReader reader = new StreamReader(FileName, Encoding.GetEncoding("iso-8859-1"), true))
{
Flag = 1;
// Some computing code
}
if(Flag == 1)
{
// Some other code
}
Which, from what I understand, is a way to run some other instruction only if the using part was executed. But is there any possibility for the using body not to be executed (except if an exception is raised)? Or is this completely useless code?
That code is useless...
If you add a try...catch it could make sense - you'd want to know if/where an exception happens, like:
int flag = 0;
try
{
using (StreamReader reader = new StreamReader(FileName, Encoding.GetEncoding("iso-8859-1"), true))
{
flag = 1;
reader.ReadToEnd();
flag = 2;
}
flag = int.MaxValue;
}
catch (Exception ex)
{
}
if (flag == 0)
{
// Exception on opening
}
else if (flag == 1)
{
// Exception on reading
}
else if (flag == 2)
{
// Exception on closing
}
else if (flag == int.MaxValue)
{
// Everything OK
}
Based on the using statement documentation, you can translate your code to:
int flag;
{
StreamReader reader = new StreamReader(FileName, Encoding.GetEncoding("iso-8859-1"), true);
try
{
flag = 1;
// Some computing code
}
finally
{
if (reader != null) ((IDisposable)reader).Dispose();
}
}
if (flag == 1)
{
// Some other code
}
If you reach the flag == 1 test, that means your code didn't throw and, therefore, flag was set to 1. So, yes, the flag logic is completely useless code in your case.
The code within the using statement is always executed unless the creation of the instance throws an exception.
Take this into account:
int Flag;
using (StreamReader reader = new StreamReader(FileName, Encoding.GetEncoding("iso-8859-1"), true))
{
// This scope is executed if the StreamReader instance was created
// If ex. it can't open the file etc. then the scope is not executed
Flag = 1;
}
// Does not run any code past this comment
// if the using statement was not successfully executed
// or there was an exception thrown within the using scope
if(Flag == 1)
{
// Some other code
}
However, there is a way to make sure the next part of the code is executed: wrapping the using statement in a try/catch guarantees that execution continues past it, so the flag check is still reached.
This may not be what you want to do, but based on your code it would make sure the flag is set one way or another. Perhaps you need some other kind of logic.
int Flag;
try
{
using (StreamReader reader = new StreamReader(FileName, Encoding.GetEncoding("iso-8859-1"), true))
{
// This scope is executed if the StreamReader instance was created
// If ex. it can't open the file etc. then the scope is not executed
Flag = 1;
}
}
catch (Exception e)
{
// Do stuff with the exception
Flag = -1; // Error Flag perhaps ??
}
// Any code after this is still executed
if(Flag == 1)
{
// Some other code
}
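As a quick, illustrative sketch (not from the answers above, and the file name is made up): if the resource constructor throws, the using body never runs, and nothing after the using runs either unless you catch the exception.
int flag = 0;
try
{
    using (var reader = new StreamReader("does-not-exist.txt"))
    {
        flag = 1; // never reached if the constructor throws
    }
}
catch (FileNotFoundException)
{
    // the constructor threw, so flag is still 0 here
}
Console.WriteLine(flag); // prints 0 when the file is missing, 1 otherwise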

SqlException: do not abort a transaction

I have code that adds data to two Entity Framework 6 DataContexts, like this:
using (var scope = new TransactionScope())
{
    using (var requestsCtx = new RequestsContext())
    {
        using (var logsCtx = new LogsContext())
        {
            var req = new Request { Id = 1, Value = 2 };
            requestsCtx.Requests.Add(req);
            var log = new LogEntry { RequestId = 1, State = "OK" };
            logsCtx.Logs.Add(log);
            try
            {
                requestsCtx.SaveChanges();
            }
            catch (Exception ex)
            {
                log.State = "Error: " + ex.Message;
            }
            logsCtx.SaveChanges();
        }
    }
}
There is an insert trigger on the Requests table that rejects some values using RAISERROR. This situation is normal and should be handled by the try/catch block where the SaveChanges method is invoked. If the second SaveChanges call fails, however, the changes to both DataContexts must be reverted entirely - hence the transaction scope.
Here is the error: when requestsCtx.SaveChanges() throws an exception, the whole Transaction.Current has its state set to Aborted, and the later logsCtx.SaveChanges() fails with the following:
TransactionException:
The operation is not valid for the state of the transaction.
Why is this happening, and how do I tell EF that the first exception is not critical?
Really not sure if this will work, but it might be worth trying.
private void SaveChanges()
{
    using (var scope = new TransactionScope())
    {
        var log = CreateRequest();
        bool saveLogSuccess = CreateLogEntry(log);
        if (saveLogSuccess)
        {
            scope.Complete();
        }
    }
}

private LogEntry CreateRequest()
{
    var req = new Request { Id = 1, Value = 2 };
    var log = new LogEntry { RequestId = 1, State = "OK" };
    using (var requestsCtx = new RequestsContext())
    {
        requestsCtx.Requests.Add(req);
        try
        {
            requestsCtx.SaveChanges();
        }
        catch (Exception ex)
        {
            log.State = "Error: " + ex.Message;
        }
        // Returning from a finally block is not allowed in C#,
        // so the log entry is returned after the try/catch instead.
        return log;
    }
}

private bool CreateLogEntry(LogEntry log)
{
    using (var logsCtx = new LogsContext())
    {
        try
        {
            logsCtx.Logs.Add(log);
            logsCtx.SaveChanges();
        }
        catch (Exception)
        {
            return false;
        }
        return true;
    }
}
From the documentation on TransactionScope: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope%28v=vs.110%29.aspx
If no exception occurs within the transaction scope (that is, between
the initialization of the TransactionScope object and the calling of
its Dispose method), then the transaction in which the scope
participates is allowed to proceed. If an exception does occur within
the transaction scope, the transaction in which it participates will
be rolled back.
Basically, as soon as an exception occurs within the scope, the transaction is rolled back (as it seems you're aware). I think this might work, but I'm really not sure and can't test to confirm. It seems like this goes against the intended use of TransactionScope, and I'm not familiar enough with exception handling/bubbling, but maybe it will help! :)
I think I finally figured it out. The trick was to use an isolated transaction for the first SaveChanges:
using (var requestsCtx = new RequestsContext())
using (var logsCtx = new LogsContext())
{
    var req = new Request { Id = 1, Value = 2 };
    requestsCtx.Requests.Add(req);
    var log = new LogEntry { RequestId = 1, State = "OK" };
    logsCtx.Logs.Add(log);
    using (var outerScope = new TransactionScope())
    {
        using (var innerScope = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            try
            {
                requestsCtx.SaveChanges();
                innerScope.Complete();
            }
            catch (Exception ex)
            {
                log.State = "Error: " + ex.Message;
            }
        }
        logsCtx.SaveChanges();
        outerScope.Complete();
    }
}
Warning: most of the articles about the RequiresNew mode discourage using it for performance reasons. It works perfectly for my scenario; however, if there are any side effects that I'm unaware of, please let me know.

IndexOutOfRangeException when trying to create an Object from a text file in C#

I'm currently in the middle of trying to take a '|' delimited text file and create objects from the data contained within. Example:
Name|Address|City|State|Zip|Birthday|ID|Etc.
Name2|Address2|City2|State2|Zip2|Birthday2|ID2|Etc.
The newly created object is then added to a list of said objects, and the program moves to the next line of the file by way of a while loop using .Peek() (to make sure I don't go past the end of the file).
However, when it gets to creating the second object (more specifically, the second field of the second object), it throws an IndexOutOfRangeException, and I can't for the life of me figure out why. Thank you to whoever might read this!
StreamReader textIn = new StreamReader(new FileStream(path, FileMode.OpenOrCreate, FileAccess.Read));
List<Student> students = new List<Student>();
while (textIn.Peek() != -1)
{
    string row = textIn.ReadLine();
    MessageBox.Show(row);
    string[] fields = row.Split('|');
    Student temp = new Student();
    try
    {
        temp.name = fields[0];
        temp.address = fields[1];
        temp.city = fields[2];
        temp.state = fields[3];
        temp.zipCode = Convert.ToInt32(fields[4]);
        temp.birthdate = fields[5];
        temp.studentID = Convert.ToInt32(fields[6]);
        temp.sGPA = Convert.ToDouble(fields[7]);
    }
    catch
    {
        MessageBox.Show("IndexOutOfRangeException caught");
    }
    students.Add(temp);
}
textIn.Close();
First, you can't be sure that it's an IndexOutOfRangeException with your current catch block.
catch
{
MessageBox.Show("IndexOutOfRangeException caught");
}
It could be anything - maybe an exception during parsing to double. You may modify your catch block to:
catch(IndexOutOfRangeException ex)
{
MessageBox.Show(ex.Message);
}
Also, if you are going to access fields[7], it's better to check the length of the array to make sure that you have at least 8 elements:
if (fields.Length >= 8)
{
    temp.name = fields[0];
    ....
To catch the FormatException which can occur during numeric parsing, you may add an extra catch block:
catch (FormatException ex)
{
MessageBox.Show(ex.Message);
}
1. Check that you have all 8 fields in a line.
2. Show a message if there aren't.
3. Get the actual exception and show its message to see the real problem description.
4. Use the Double.TryParse and Int32.TryParse methods to be sure all numeric values are valid.
Also, use while (!textIn.EndOfStream) instead. The //#n comments in the code below refer to these points:
try
{
    int tempInt;
    double tempDouble;
    if (fields.Length == 8) //#1
    {
        temp.name = fields[0];
        temp.address = fields[1];
        temp.city = fields[2];
        temp.state = fields[3];
        if (int.TryParse(fields[4], out tempInt)) //#4
            temp.zipCode = tempInt;
        else
        {
            //..invalid value in field
        }
        temp.birthdate = fields[5];
        if (int.TryParse(fields[6], out tempInt)) //#4
            temp.studentID = tempInt;
        else
        {
            //..invalid value in field
        }
        if (double.TryParse(fields[7], out tempDouble)) //#4
            temp.sGPA = tempDouble;
        else
        {
            //..invalid value in field
        }
    }
    else //#2
    {
        MessageBox.Show("Invalid number of fields");
    }
}
catch (Exception ex) //#3
{
    MessageBox.Show(ex.Message);
}
Maybe ReadAllLines will work a bit better if the data is one record per line:
List<Student> students = new List<Student>();
// File.ReadAllLines opens and closes the file itself, so no separate stream is needed.
foreach (string line in File.ReadAllLines(path))
{
    MessageBox.Show(line);
    string[] fields = line.Split('|');
    Student temp = new Student();
    try
    {
        temp.name = fields[0];
        temp.address = fields[1];
        temp.city = fields[2];
        temp.state = fields[3];
        temp.zipCode = Convert.ToInt32(fields[4]);
        temp.birthdate = fields[5];
        temp.studentID = Convert.ToInt32(fields[6]);
        temp.sGPA = Convert.ToDouble(fields[7]);
    }
    catch
    {
        MessageBox.Show(string.Format("IndexOutOfRangeException caught, Split Result: {0}", string.Join(", ", fields)));
    }
    students.Add(temp);
}
In the given data, if you have at least eight columns in every row, you won't get an index-out-of-range exception, but parsing the items at indexes 4, 6, and 7 would fail as they are not numbers, and converting the non-numeric values to int and double raises the exception.
temp.zipCode = Convert.ToInt32(fields[4]);
temp.studentID = Convert.ToInt32(fields[6]);
temp.sGPA = Convert.ToDouble(fields[7]);
You need to change the catch block to see the reason for the exception:
}
catch(Exception ex)
{
MessageBox.Show(ex.Message);
}

status code on async webresponse task

I want to hit a plethora (100k+) of JSON files as rapidly as possible, deserialize them, and store the HTTP response status code of the request (whether it succeeded or failed). (I am using System.Runtime.Serialization.Json and a DataContract.) I intend to do further work with the status code and deserialized object, but as a test bed I have this snippet of code:
List<int> ids = new List<int>();
for (int i = MIN; i < MAX; i++)
    ids.Add(i);
var tasks = ids.Select(id =>
{
    var request = WebRequest.Create(GetURL(id));
    return Task
        .Factory
        .FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, id)
        .ContinueWith(t =>
        {
            HttpStatusCode code = HttpStatusCode.OK;
            Item item = null;
            try
            {
                using (var stream = t.Result.GetResponseStream())
                {
                    DataContractJsonSerializer jsonSerializer = new DataContractJsonSerializer(typeof(Item));
                    item = ((Item)jsonSerializer.ReadObject(stream));
                }
            }
            catch (AggregateException ex)
            {
                if (ex.InnerException is WebException)
                    code = ((HttpWebResponse)((WebException)ex.InnerException).Response).StatusCode;
            }
        });
}).ToArray();
Task.WaitAll(tasks);
Using this approach I was able to process files much more quickly than the synchronous approach I was doing before.
I know that GetResponseStream() throws a WebException when the status code is 4xx or 5xx, so to capture those status codes I need to catch this exception. However, in the context of the TPL it is nested as an InnerException on an AggregateException, which makes this line really confusing:
code = ((HttpWebResponse)((WebException)ex.InnerException).Response).StatusCode;
This works, though... I was wondering if there is a better/clearer way to capture such an exception in this context?
Take a look at the MSDN article Exception Handling (Task Parallel Library).
For example, you might want to rewrite your code as follows:
try
{
    using (var stream = t.Result.GetResponseStream())
    {
        DataContractJsonSerializer jsonSerializer = new DataContractJsonSerializer(typeof(Item));
        item = ((Item)jsonSerializer.ReadObject(stream));
    }
}
catch (AggregateException ex)
{
    foreach (var e in ex.InnerExceptions)
    {
        bool isHandled = false;
        if (e is WebException)
        {
            WebException webException = (WebException)e;
            HttpWebResponse response = webException.Response as HttpWebResponse;
            if (response != null)
            {
                code = response.StatusCode;
                isHandled = true;
            }
        }
        if (!isHandled)
            throw;
    }
}
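As a small variation on the loop above (a sketch, not part of the original answer), AggregateException.Handle can express the same "handle WebExceptions, rethrow everything else" logic a bit more compactly:
catch (AggregateException ex)
{
    // Handle returns true for exceptions we consider handled; any exceptions
    // for which it returns false are re-wrapped and rethrown in a new AggregateException.
    ex.Handle(e =>
    {
        var webException = e as WebException;
        var response = webException == null ? null : webException.Response as HttpWebResponse;
        if (response != null)
        {
            code = response.StatusCode;
            return true;
        }
        return false;
    });
}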
Try this on for size. GetBaseException returns the exception that caused the problem.
try
{
}
catch (System.AggregateException aex)
{
    var baseEx = aex.GetBaseException() as WebException;
    if (baseEx != null)
    {
        var httpWebResp = baseEx.Response as HttpWebResponse;
        if (httpWebResp != null)
        {
            var code = httpWebResp.StatusCode;
            // Handle it...
        }
    }
    throw;
}
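For what it's worth, if you can target C# 5 / .NET 4.5, rewriting the continuation with async/await sidesteps the AggregateException wrapping entirely, because awaiting a faulted task rethrows the original WebException. This is only a sketch under that assumption (the helper method name is invented), not part of the answers above:
private async Task<HttpStatusCode> GetStatusAsync(int id) // hypothetical helper
{
    var request = WebRequest.Create(GetURL(id));
    try
    {
        // await unwraps the task, so a failed request surfaces as a plain WebException.
        using (var response = await Task<WebResponse>.Factory.FromAsync(request.BeginGetResponse, request.EndGetResponse, id))
        using (var stream = response.GetResponseStream())
        {
            var jsonSerializer = new DataContractJsonSerializer(typeof(Item));
            var item = (Item)jsonSerializer.ReadObject(stream);
            return HttpStatusCode.OK; // the item could be returned alongside the code if needed
        }
    }
    catch (WebException ex)
    {
        var errorResponse = ex.Response as HttpWebResponse;
        if (errorResponse == null)
            throw;
        return errorResponse.StatusCode;
    }
}
The original fan-out then becomes var tasks = ids.Select(GetStatusAsync).ToArray(); followed by Task.WaitAll(tasks);.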

Optimization of file read in C#

I have a requirement to copy a file, parse its contents removing the line breaks, split the content on the pipes, and then store the resulting string[] in the database. My files can be as large as 65000 valid records each, so performance is essential.
Below is what I currently have. The problem is that it is extremely slow (3 hours to process 65000 lines). I would appreciate any help optimizing this piece so my runs can be significantly faster.
public void ReadFileLinesIntoRows()
{
    try
    {
        using (var reader = new TextFieldParser(FileName))
        {
            reader.HasFieldsEnclosedInQuotes = false;
            reader.TextFieldType = FieldType.Delimited;
            reader.SetDelimiters("|");
            String[] currentRow;
            while (!reader.EndOfData)
            {
                try
                {
                    currentRow = reader.ReadFields();
                    int rowcount = currentRow.Count();
                    //if it is less than what you need, pad it.
                    if (rowcount < 190)
                    {
                        Array.Resize<string>(ref currentRow, 190);
                        rows.Add(currentRow);
                    }
                    else
                    {
                        rows.Add(currentRow);
                    }
                }
                catch (MalformedLineException mex)
                {
                    unreadlines.Add(reader.ErrorLine); //continue afterwards
                }
            }
            this.TotalRowCount = rows.Count();
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}

public void cleanfilecontent(String tempfilename, Boolean? HeaderIncluded)
{
    try
    {
        //remove the empty lines in the file
        using (var sr = new StreamReader(tempfilename))
        {
            // Write new file
            using (var sw = new StreamWriter(CleanedCopy))
            {
                using (var smove = new StreamWriter(duptempfileremove))
                {
                    string line;
                    Boolean skippedheader = false;
                    while ((line = sr.ReadLine()) != null)
                    {
                        // Look for text to remove
                        if (line.Contains("----------------------------------"))
                        {
                            smove.Write(line);
                        }
                        else if (HeaderIncluded.HasValue && HeaderIncluded.Value == true && !skippedheader)
                        {
                            smove.Write(line);
                            skippedheader = true;
                        }
                        else if (skippedheader)
                        {
                            // Keep lines that does not match
                            sw.WriteLine(line);
                        }
                    }
                    smove.Flush();
                }
                sw.Flush();
            }
            sr.Close();
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
65000 records isn't all that big. If you have sufficient memory, I would suggest reading the whole file into memory, doing your parsing, constructing your rows, and then using a batch insert to commit the data to the DB. That will be the fastest approach! I suspect most of your 3 hours is spent inserting database records one at a time.
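As a rough sketch of that suggestion (SQL Server is assumed, and the table and column names here are invented - adjust them to your schema), the parsed rows can be loaded into a DataTable and written in a single bulk operation with SqlBulkCopy instead of row-by-row inserts:
// Read and split everything up front; 65,000 lines fits comfortably in memory.
string[] lines = File.ReadAllLines(FileName);

var table = new DataTable();
for (int i = 0; i < 190; i++)
    table.Columns.Add("Col" + i, typeof(string)); // hypothetical column names

foreach (string line in lines)
{
    string[] fields = line.Split('|');
    Array.Resize(ref fields, 190); // pad short rows, as in the original code
    table.Rows.Add(fields);
}

// One bulk insert instead of 65,000 individual INSERTs.
using (var connection = new SqlConnection(connectionString)) // connectionString is assumed
using (var bulkCopy = new SqlBulkCopy(connection))
{
    connection.Open();
    bulkCopy.DestinationTableName = "dbo.MyStagingTable"; // hypothetical table
    bulkCopy.WriteToServer(table);
}
SqlBulkCopy streams all the rows to the server in bulk, which is typically orders of magnitude faster than issuing tens of thousands of individual INSERT statements.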
