I am trying to figure out how to read the results of the Quickbooks Online report service. Specifically I am trying to show the results from reportBS on a label and here is my code:
OAuth2RequestValidator oauthValidator = new OAuth2RequestValidator(dictionary["accessToken"]);
ServiceContext serviceContext = new ServiceContext(dictionary["realmId"], IntuitServicesType.QBO, oauthValidator);
serviceContext.IppConfiguration.BaseUrl.Qbo = "https://sandbox-quickbooks.api.intuit.com/";
serviceContext.IppConfiguration.MinorVersion.Qbo = "55";
ReportService reportService = new ReportService(serviceContext);
reportService.accounting_method = "Accrual";
reportService.start_date = "2020-01-01";
reportService.end_date = "2020-10-31";
reportService.summarize_column_by = "Month";
serviceContext.IppConfiguration.Message.Response.SerializationFormat = Intuit.Ipp.Core.Configuration.SerializationFormat.Json;
ReportService defaultReportService1 = new ReportService(serviceContext);
string defaultReportName = "BalanceSheet";
Report reportBS = defaultReportService1.ExecuteReport(defaultReportName);
The result I get when I run my code and place reportBS on a label is Intuit.Ipp.Data.Report with nothing else.
Do you need to save the QBO report as a JSON or XML string? As far as I know, converting a QBO report object to JSON or XML has to be done manually.
Here is a question that shows how it is done with a profit and loss report (this is actually very similar to the text-extraction method I outlined in the first place, except that the result gets reassembled as XML/JSON in your case).
If you just want to pass the report to another system or store it, serialization could fit the bill.
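If it helps, here is a minimal sketch of that quick-and-dirty route, assuming the Newtonsoft.Json NuGet package is referenced (the label name is just a placeholder, not part of your code):
using Intuit.Ipp.Data;
using Newtonsoft.Json;

// Execute the report exactly as in the question.
Report reportBS = defaultReportService1.ExecuteReport(defaultReportName);

// Serialize the whole object graph (Header, Columns, Rows) to a JSON string.
// Assigning the Report object itself to a label only calls ToString(),
// which is why you see "Intuit.Ipp.Data.Report" and nothing else.
string reportJson = JsonConvert.SerializeObject(reportBS, Formatting.Indented);

// "labelReport" is a placeholder for whatever label you are using.
labelReport.Text = reportJson;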
Following this basic report example:
private static void PrintRows(StringBuilder reportText, Row[] rows, int[] maxColumnSize, int level)
{
for (int rowIndex = 0; rowIndex < rows.Length; rowIndex++)
{
Row row = rows[rowIndex];
//Get Row Header
Header rowHeader = GetRowProperty(row, ItemsChoiceType1.Header);
//Append Row Header
if (rowHeader != null && rowHeader.ColData != null) { PrintColData(reportText, rowHeader.ColData, maxColumnSize, level); }
//Get Row ColData
ColData[] colData = GetRowProperty(row, ItemsChoiceType1.ColData);
//Append ColData
if (colData != null) { PrintColData(reportText, colData, maxColumnSize, level); }
//Get Child Rows
Rows childRows = GetRowProperty(row, ItemsChoiceType1.Rows);
//Append Child Rows
if (childRows != null) { PrintRows(reportText, childRows.Row, maxColumnSize, level + 1); }
//Get Row Summary
Summary rowSummary = GetRowProperty(row, ItemsChoiceType1.Summary);
//Append Row Summary
if (rowSummary != null && rowSummary.ColData != null) { PrintColData(reportText, rowSummary.ColData, maxColumnSize, level); }
}
}
StringBuilder reportText = new StringBuilder();
//Determine Maximum Text Lengths to format Report
//int[] maximumColumnTextSize = GetMaximumColumnTextSize(report);
//Append Column Headers
//PrintColumnData(reportText, report.Columns, maximumColumnTextSize, 0);
PrintRows(reportText, report.Rows, maximumColumnTextSize, 1);
I'm writing a simple import application and need to read a CSV file, show the result in a DataGrid, and show corrupted lines of the CSV file (for example, lines that are shorter than 5 values) in another grid. I'm trying to do that like this:
StreamReader sr = new StreamReader(FilePath);
importingData = new Account();
string line;
string[] row = new string[5];
while ((line = sr.ReadLine()) != null)
{
row = line.Split(',');
importingData.Add(new Transaction
{
Date = DateTime.Parse(row[0]),
Reference = row[1],
Description = row[2],
Amount = decimal.Parse(row[3]),
Category = (Category)Enum.Parse(typeof(Category), row[4])
});
}
but it's very difficult to operate on arrays in this case. Is there a better way to split the values?
Don't reinvent the wheel. Take advantage of what's already in the .NET BCL:
Add a reference to Microsoft.VisualBasic (yes, it says VisualBasic, but it works in C# just as well; remember that in the end it is all just IL).
Use the Microsoft.VisualBasic.FileIO.TextFieldParser class to parse the CSV file.
Here is the sample code:
using (TextFieldParser parser = new TextFieldParser(@"c:\temp\test.csv"))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
while (!parser.EndOfData)
{
//Processing row
string[] fields = parser.ReadFields();
foreach (string field in fields)
{
//TODO: Process field
}
}
}
It works great for me in my C# projects.
Here are some more links/information:
MSDN: Read From Comma-Delimited Text Files in Visual Basic
MSDN: TextFieldParser Class
I recommend CsvHelper from NuGet.
PS: Regarding other more upvoted answers, I'm sorry but adding a reference to Microsoft.VisualBasic is:
Ugly
Not cross-platform, because it's not available in .NET Core/.NET 5 (and Mono never had very good support for Visual Basic, so it may be buggy).
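For reference, a minimal sketch of what reading the asker's file could look like with CsvHelper (the class and file path are illustrative; recent versions of the library take a CultureInfo in the reader constructor, and the file is assumed to have a header row matching the property names):
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using CsvHelper;

public class Transaction
{
public DateTime Date { get; set; }
public string Reference { get; set; }
public string Description { get; set; }
public decimal Amount { get; set; }
public string Category { get; set; }
}

using (var reader = new StreamReader(@"c:\temp\transactions.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
// GetRecords is lazy, so materialize the list before the reader is disposed.
List<Transaction> records = csv.GetRecords<Transaction>().ToList();
}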
My experience is that there are many different CSV formats, especially in how they handle escaping of quotes and delimiters within a field.
These are the variants I have run into:
the field is quoted and embedded quotes are doubled (Excel), i.e. 15" -> field1,"15""",field3
quotes are not changed unless the field is quoted for some other reason, i.e. 15" -> field1,15",field3
quotes are escaped with \, i.e. 15" -> field1,"15\"",field3
quotes are not changed at all (this is not always possible to parse correctly)
the delimiter is quoted (Excel), i.e. a,b -> field1,"a,b",field3
the delimiter is escaped with \, i.e. a,b -> field1,a\,b,field3
I have tried many of the existing CSV parsers, but there is not a single one that can handle the variants I have run into. It is also difficult to find out from the documentation which escaping variants a parser supports.
In my projects I now use either the VB TextFieldParser or a custom splitter.
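For what it's worth, here is a minimal sketch of the TextFieldParser route for the Excel-style variants above (doubled quotes and quoted delimiters); the file path is just an example, and the backslash-escaped variants would still need a custom splitter:
using Microsoft.VisualBasic.FileIO;

using (var parser = new TextFieldParser(@"c:\temp\quoted.csv"))
{
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
// Handles field1,"15""",field3 and field1,"a,b",field3
parser.HasFieldsEnclosedInQuotes = true;
while (!parser.EndOfData)
{
string[] fields = parser.ReadFields();
// TODO: process the fields
}
}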
Sometimes using a library is cool when you do not want to reinvent the wheel, but in this case the same job can be done with fewer lines of code that are easier to read than when using a library.
Here is a different approach which I find very easy to use.
In this example, I use:
a StreamReader to read the file,
a Regex to split each line on the delimiter (commas outside of quotes),
an array to collect the columns from index 0 to n.
using (StreamReader reader = new StreamReader(fileName))
{
string line;
while ((line = reader.ReadLine()) != null)
{
//Define pattern
Regex CSVParser = new Regex(",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))");
//Separating columns to array
string[] X = CSVParser.Split(line);
/* Do something with X */
}
}
CSV can get complicated real fast.
Use something robust and well-tested:
FileHelpers:
www.filehelpers.net
FileHelpers is a free and easy-to-use .NET library to import/export data from fixed-length or delimited records in files, strings, or streams.
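As a rough sketch of what that looks like (the record class and file name are made up for illustration; fields map to CSV columns by position):
using FileHelpers;

[DelimitedRecord(",")]
public class TransactionRecord
{
public string Date;
public string Reference;
public string Description;
public string Amount;
public string Category;
}

var engine = new FileHelperEngine<TransactionRecord>();
TransactionRecord[] records = engine.ReadFile(@"c:\temp\transactions.csv");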
Another one for this list: Cinchoo ETL, an open-source library to read and write CSV files.
For the sample CSV file below:
Id, Name
1, Tom
2, Mark
You can quickly load it using the library as below:
using (var reader = new ChoCSVReader("test.csv").WithFirstLineHeader())
{
foreach (dynamic item in reader)
{
Console.WriteLine(item.Id);
Console.WriteLine(item.Name);
}
}
If you have a POCO class matching the CSV file:
public class Employee
{
public int Id { get; set; }
public string Name { get; set; }
}
You can use it to load the CSV file as below
using (var reader = new ChoCSVReader<Employee>("test.csv").WithFirstLineHeader())
{
foreach (var item in reader)
{
Console.WriteLine(item.Id);
Console.WriteLine(item.Name);
}
}
Please check out the articles on CodeProject on how to use it.
Disclaimer: I'm the author of this library
I use this:
http://www.codeproject.com/KB/database/GenericParser.aspx
The last time I was looking for something like this, I found it as an answer to this question.
private static DataTable ConvertCSVtoDataTable(string strFilePath)
{
DataTable dt = new DataTable();
using (StreamReader sr = new StreamReader(strFilePath))
{
string[] headers = sr.ReadLine().Split(',');
foreach (string header in headers)
{
dt.Columns.Add(header);
}
while (!sr.EndOfStream)
{
string[] rows = sr.ReadLine().Split(',');
DataRow dr = dt.NewRow();
for (int i = 0; i < headers.Length; i++)
{
dr[i] = rows[i];
}
dt.Rows.Add(dr);
}
}
return dt;
}
private static void WriteToDb(DataTable dt)
{
string connectionString =
"Data Source=localhost;" +
"Initial Catalog=Northwind;" +
"Integrated Security=SSPI;";
using (SqlConnection con = new SqlConnection(connectionString))
{
using (SqlCommand cmd = new SqlCommand("spInsertTest", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#policyID", SqlDbType.Int).Value = 12;
cmd.Parameters.Add("#statecode", SqlDbType.VarChar).Value = "blagh2";
cmd.Parameters.Add("#county", SqlDbType.VarChar).Value = "blagh3";
con.Open();
cmd.ExecuteNonQuery();
}
}
}
Here's a solution I coded up today for a situation where I needed to parse a CSV without relying on external libraries. I haven't tested performance for large files since it wasn't relevant to my particular use case but I'd expect it to perform reasonably well for most situations.
static List<List<string>> ParseCsv(string csv) {
var parsedCsv = new List<List<string>>();
var row = new List<string>();
string field = "";
bool inQuotedField = false;
for (int i = 0; i < csv.Length; i++) {
char current = csv[i];
char next = i == csv.Length - 1 ? ' ' : csv[i + 1];
// if the current character is not a quote, comma, carriage return, or newline
// (or it is not a quote and we are currently in a quoted field), just add the character to the current field text
if ((current != '"' && current != ',' && current != '\r' && current != '\n') || (current != '"' && inQuotedField)) {
field += current;
} else if (current == ' ' || current == '\t') {
continue; // ignore whitespace outside a quoted field
} else if (current == '"') {
if (inQuotedField && next == '"') { // quote is escaping a quote within a quoted field
i++; // skip escaping quote
field += current;
} else if (inQuotedField) { // quote signifies the end of a quoted field
row.Add(field);
if (next == ',') {
i++; // skip the comma separator since we've already found the end of the field
}
field = "";
inQuotedField = false;
} else { // quote signifies the beginning of a quoted field
inQuotedField = true;
}
} else if (current == ',') { // unquoted comma marks the end of a field
row.Add(field);
field = "";
} else if (current == '\n') {
row.Add(field);
parsedCsv.Add(new List<string>(row));
field = "";
row.Clear();
}
}
return parsedCsv;
}
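A hypothetical usage (the file name is made up); note that a row is only flushed when a newline character is seen, so make sure the text ends with a newline:
string csvText = File.ReadAllText("data.csv");
List<List<string>> rows = ParseCsv(csvText);
foreach (var parsedRow in rows)
{
Console.WriteLine(string.Join(" | ", parsedRow));
}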
First of all, you need to understand what CSV is and how to write it.
Every next line (\r\n) is the next "table" row.
"Table" cells are separated by some delimiter symbol. The most commonly used symbols are \t or ,
Every cell can possibly contain the delimiter symbol (in this case the cell must start and end with a quote symbol)
Every cell can possibly contain \r\n symbols (in this case the cell must start and end with a quote symbol), as in the example below
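For example, this illustrative file has a quoted delimiter, doubled quotes, and a multi-line cell:
Id,Note
1,"a, b"
2,"she said ""hi"""
3,"line one
line two"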
The easiest way for C#/Visual Basic to work with CSV files is to use the standard Microsoft.VisualBasic library. You just need to add the needed reference and the following using directive to your class:
using Microsoft.VisualBasic.FileIO;
Yes, you can use it in C#, don't worry. This library can read relatively big files and supports all of the needed rules, so you will be able to work with all CSV files.
Some time ago I wrote a simple class for CSV read/write based on this library. Using this simple class, you can work with CSV like with a two-dimensional array.
You can find my class at the following link:
https://github.com/ukushu/DataExporter
A simple example of using it:
Csv csv = new Csv("\t");//delimiter symbol
csv.FileOpen("c:\\file1.csv");
var row1Cell6Value = csv.Rows[0][5];
csv.AddRow("asdf", "asdffffff", "5");
csv.FileSave("c:\\file2.csv");
To complete the previous answers, one may need a collection of objects from their CSV file, either parsed by the TextFieldParser or the string.Split method, with each line then converted to an object via reflection. You obviously first need to define a class that matches the lines of the CSV file.
I used the simple CSV serializer from Michael Kropat found here: Generic class to CSV (all properties)
and reused his methods to get the fields and properties of the desired class.
I deserialize my CSV file with the following method:
public static IEnumerable<T> ReadCsvFileTextFieldParser<T>(string fileFullPath, string delimiter = ";") where T : new()
{
if (!File.Exists(fileFullPath))
{
return null;
}
var list = new List<T>();
var csvFields = GetAllFieldOfClass<T>();
var fieldDict = new Dictionary<int, MemberInfo>();
using (TextFieldParser parser = new TextFieldParser(fileFullPath))
{
parser.SetDelimiters(delimiter);
bool headerParsed = false;
while (!parser.EndOfData)
{
//Processing row
string[] rowFields = parser.ReadFields();
if (!headerParsed)
{
for (int i = 0; i < rowFields.Length; i++)
{
// First row shall be the header!
var csvField = csvFields.Where(f => f.Name == rowFields[i]).FirstOrDefault();
if (csvField != null)
{
fieldDict.Add(i, csvField);
}
}
headerParsed = true;
}
else
{
T newObj = new T();
for (int i = 0; i < rowFields.Length; i++)
{
var csvField = fieldDict[i];
var record = rowFields[i];
if (csvField is FieldInfo)
{
((FieldInfo)csvField).SetValue(newObj, record);
}
else if (csvField is PropertyInfo)
{
var pi = (PropertyInfo)csvField;
pi.SetValue(newObj, Convert.ChangeType(record, pi.PropertyType), null);
}
else
{
throw new Exception("Unhandled case.");
}
}
if (newObj != null)
{
list.Add(newObj);
}
}
}
}
return list;
}
public static IEnumerable<MemberInfo> GetAllFieldOfClass<T>()
{
return
from mi in typeof(T).GetMembers(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static)
where new[] { MemberTypes.Field, MemberTypes.Property }.Contains(mi.MemberType)
let orderAttr = (ColumnOrderAttribute)Attribute.GetCustomAttribute(mi, typeof(ColumnOrderAttribute))
orderby orderAttr == null ? int.MaxValue : orderAttr.Order, mi.Name
select mi;
}
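A hypothetical usage, assuming a class whose member names match the header row (the class and file name are made up; the ColumnOrderAttribute from Kropat's serializer is only needed if you want to control ordering):
public class Person
{
public string Name { get; set; }
public string City { get; set; }
}

// The first line of the file must be a header containing "Name" and "City".
IEnumerable<Person> people = ReadCsvFileTextFieldParser<Person>(@"c:\temp\people.csv", ";");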
I'd highly suggest using CsvHelper.
Here's a quick example:
public class csvExampleClass
{
public string Id { get; set; }
public string Firstname { get; set; }
public string Lastname { get; set; }
}
var items = DeserializeCsvFile<csvExampleClass>( csvText );
public static List<T> DeserializeCsvFile<T>(string text)
{
CsvReader csv = new CsvReader( new StringReader( text ) );
csv.Configuration.Delimiter = ",";
csv.Configuration.HeaderValidated = null;
csv.Configuration.MissingFieldFound = null;
return csv.GetRecords<T>().ToList();
}
Full documentation can be found at: https://joshclose.github.io/CsvHelper
I need to eliminate all rows that contain either string notUsed or string notUsed2, where a particular identity is 2.
I'm using a foreach loop to accomplish this, prior to appending any of my data to a StringBuilder.
I would have chosen a method like so:
foreach (DataRow dr in ds.Tables[0].Rows)
{
int auth = (int)dr[0];
if (auth == 2) continue;
string notUsed = "NO LONGER USED";
string notUsed2 = "NO LONGER IN USE";
if (dr.Cells[3].ToString().Contains(string)notUsed)
{
dr.Delete();
}
else
{
if (dr.Cells[3].ToString().Contains(string)notUsed2)
{
dr.Delete();
}
}
}
However, the above is... utterly wrong. It seems logical to me, but I don't quite understand how to form that method in a way that C# understands.
You should:
Remove Cells, as it is not a property of DataRow; instead use dr[3].
Remove string from Contains, so your check should be: if (dr[3].ToString().Contains(notUsed))
You are modifying the collection inside a foreach loop by deleting rows. You should use a for loop iterating backward.
Like:
for (int i = ds.Tables[0].Rows.Count - 1; i >= 0; i--)
{
DataRow dr = ds.Tables[0].Rows[i];
int auth = (int)dr[0];
if (auth == 2) continue;
string notUsed = "NO LONGER USED";
string notUsed2 = "NO LONGER IN USE";
if (dr[3].ToString().Contains(notUsed))
{
dr.Delete();
}
else
{
if (dr[3].ToString().Contains(notUsed2))
{
dr.Delete();
}
}
}
You can also use LINQ to DataSet and get a new DataTable like:
DataTable newDataTable = ds.Tables[0].AsEnumerable()
.Where(r => !r.Field<string>(3).Contains(notUsed) &&
!r.Field<string>(3).Contains(notUsed2))
.CopyToDataTable();
The title says it all, but I'll explain in more detail. Here is the thing.
I'm currently developing extra code in a Word VSTO add-in which uses mergefields in a template. The mergefields in the template are filled using data from an external file. What I need is to read the value from a Merge Field but I have totally no clue how to accomplish this. I've been searching for a few days now but none of the articles I read worked for me...
So the question:
How can I get the value from a specific merge field in Word using VSTO?
Mail merge is quite simple in VSTO. Here are the two magic lines that will do it:
//Pass in the path of external file
document.MailMerge.OpenDataSource(Name: vm.FilePath.FullName);
document.MailMerge.Destination = WdMailMergeDestination.wdSendToNewDocument;
I found another full example here
This codeblock retrieves all fields in the document
public static List<string> GetFieldsUsedInDocument(Document document)
{
var fields = new List<string>();
foreach (MailMergeField fld in document.MailMerge.Fields)
{
if (fld.Code != null)
{
fields.Add(fld.Code.Text.ToUpper());
}
}
return fields;
}
To get the MERGEFIELD names from the list of fields returned by GetFieldsUsedInDocument above:
public static List<string> GetMergeFields(List<string> allFields)
{
var merges = new List<string>();
foreach (var field in allFields)
{
var isNestedField = false;
foreach (var fieldChar in field)
{
int charCode = fieldChar;
if (charCode == 19 || charCode == 21)
{
isNestedField = true;
break;
}
}
if (!isNestedField)
{
var fieldCode = field;
if (fieldCode.Contains("MERGEFIELD"))
{
var fieldName = fieldCode.Replace("MERGEFIELD", string.Empty).Replace('"', ' ').Trim();
var charsToGet = fieldName.IndexOf(" ");
if (charsToGet < 0)
charsToGet = fieldName.IndexOf(@"\");
charsToGet = charsToGet > 0 ? charsToGet : fieldName.Length;
fieldName = fieldName.Substring(0, charsToGet);
if (!merges.Contains(fieldName))
{
merges.Add(fieldName);
}
}
}
}
return merges;
}
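A hedged usage sketch, assuming this runs inside the add-in where Globals.ThisAddIn is available (the Debug output is just for illustration):
Document doc = Globals.ThisAddIn.Application.ActiveDocument;
List<string> allFields = GetFieldsUsedInDocument(doc);
List<string> mergeFieldNames = GetMergeFields(allFields);
foreach (string name in mergeFieldNames)
{
System.Diagnostics.Debug.WriteLine(name);
}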
I'm using this code to parse the values and store them in Lists. The first row has names, which are getting stored fine. But when storing values, only the second row is being saved. I'm not sure what edit I need to make so that it parses all the other rows as well.
Please see image and code below.
List<string> names = new List<string>(); // List to store Key names
List<string> values = new List<string>(); // List to store key values
using (StreamReader stream = new StreamReader(filePath))
{
names = stream.ReadLine().Split(',').ToList(); // Separate key names and store them in a List
values = stream.ReadLine().Split(',').ToList(); // Separate key values and store them in a List
}
See if something like this works better:
// List to store Key names
List<string> names = new List<string>();
// List to store key values
List<List<string>> values = new List<List<string>>();
using (StreamReader stream = new StreamReader(filePath))
{
if(!stream.EndOfStream)
{
// Separate key names and store them in a List
names = stream.ReadLine().Split(',').ToList();
}
while(!stream.EndOfStream)
{
// Separate key values and store them in a list
values.Add(stream.ReadLine().Split(',').ToList());
}
}
This changes your values list to be a list of lists of strings, so that each row will be a list of strings.
While this probably isn't the best way to parse a .csv, if your data is consistent and the file format is strongly consistent you can probably get away with doing it like this. As soon as you try this with odd values, quoted strings, strings with commas, etc., you'll need a different approach.
I have written the code for a GridView; change it to a list as needed. I think it will help.
protected void Button1_Click(object sender, EventArgs e)
{
if (FileUpload1.HasFile)
{
string s = FileUpload1.FileName.Trim();
if (s.EndsWith(".csv"))
{
FileUpload1.PostedFile.SaveAs(Server.MapPath("~/data/" + s));
string[] readText = File.ReadAllLines(Server.MapPath("~/data/" + s));
DataSet ds = new DataSet();
DataTable dt = new DataTable();
// Array.Sort(readText);
for (int i = 0; i < readText.Length; i++)
{
if (i == 0)
{
string str = readText[0];
string[] header = str.Split(',');
dt.TableName = "sal";
foreach (string k in header)
{
dt.Columns.Add(k);
}
}
else
{
DataRow dr = dt.NewRow();
string str1 = readText[i];
if (readText[i] == ",,,,")
{
break;
}
string[] rows = str1.Split(',');
if (dt.Columns.Count == rows.Length)
{
for (int z = 0; z < rows.Length; z++)
{
if (rows[z] == "")
{
rows[z] = null;
}
dr[z] = rows[z];
}
dt.Rows.Add(dr);
}
else
{
Label1.Text = "please select valid format";
}
}
}
//Iterate through the columns of the datatable to set the data bound field dynamically.
ds.Merge(dt);
Session["tasktable"] = dt;
foreach (DataColumn col in dt.Columns)
{
BoundField bf = new BoundField();
bf.DataField = col.ToString();
bf.HeaderText = col.ColumnName;
if (col.ToString() == "Task")
{
bf.SortExpression = col.ToString();
}
GridView1.Columns.Add(bf);
}
GridView1.DataSource = ds;
GridView1.DataBind();
}
else
{
Label1.Text = "please select a only csv format";
}
}
else
{
Label1.Text = "please select a file";
}
}