In my case, I wanted to display items from a local SQLite database, which I created as shown below:
public string CreateDB() // create database
{
    // note: this only builds the path; the .db3 file itself is created
    // the first time a SQLiteConnection is opened on it
    string dbPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "IsoModule.db3");
    var output = "Database Created";
    return output;
}
public string CreateTable() // create table
{
    try
    {
        string dbPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "IsoModule.db3");
        var db = new SQLiteConnection(dbPath);
        db.CreateTable<UserInfo>();
        db.CreateTable<TableInfo>();
        string result = "Table(s) created";
        return result;
    }
    catch (Exception ex)
    {
        return "Error: " + ex.Message;
    }
}
And this is my code where I wish to retrieve the data:
string path = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "IsoModule.db3");
var tablelistout = new SQLiteConnection(path);
var alltables = tablelistout.Table<TableInfo>();
foreach (var listing in alltables)
{
    var from = new string[]
    {
        listing.tname + " - " + listing.status
    };
    ListView listtable = (ListView)FindViewById(Resource.Id.listtable);
    listtable.Adapter = new ArrayAdapter(this, Android.Resource.Layout.SimpleListItem1, from);
}
The code runs with no error, but it only displays the last item in the table. This confuses me, so I would like to ask: how can I retrieve all the data from a specific table?
Or, if someone has asked the same question, please share the link. Much appreciated.
var alltables = tablelistout.Table<TableInfo>();
var data = new List<string>();
foreach (var listing in alltables)
{
    data.Add(listing.tname + " - " + listing.status);
}

ListView listtable = (ListView)FindViewById(Resource.Id.listtable);
listtable.Adapter = new ArrayAdapter(this, Android.Resource.Layout.SimpleListItem1, data.ToArray());
All I did was move two things out of the loop. First, I moved out the initialization of the collection (an accumulating list instead of a one-element array). Second, I moved out the ListView lookup and the adapter assignment.
Your issue is that inside the loop you were overwriting everything you had done in the previous iteration, which is why you were left with only the last item, as you said.
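Stripped of the Android pieces, the same overwrite-versus-accumulate pattern can be reproduced in a plain console sketch (the row strings here are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

class LoopDemo
{
    static void Main()
    {
        var rows = new[] { "a - ok", "b - ok", "c - ok" };

        // the original pattern: the "source" array is rebuilt on every iteration,
        // so whatever consumes it afterwards only ever sees the last row
        string[] from = null;
        foreach (var row in rows)
        {
            from = new[] { row };
        }
        Console.WriteLine(from.Length);   // 1

        // the fix: accumulate into a list inside the loop, bind once after it
        var data = new List<string>();
        foreach (var row in rows)
        {
            data.Add(row);
        }
        Console.WriteLine(data.Count);    // 3
    }
}
```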
Also, you should take note that it will be important to create a custom adapter if you plan on having a decent amount of data. ArrayAdapter is a native Android class that is wrapped by a C# Xamarin object, meaning you will have both a C# and a Java object per row. That adds overhead, since both garbage collectors will have work to do, and it can cause performance issues. Xamarin developers generally avoid it except for quick prototyping.
On another note, I would use FindViewById<T>(Int32) instead of FindViewById(Int32) followed by a cast: FindViewById<ListView>(Resource.Id.listtable) in your case. It just takes advantage of the power of generics.
I'm working on report rendering with FastReports, coming from a system that rendered with Crystal Reports. With Crystal, I found that preloading a report and then binding parameters on request sped things up dramatically, since for a small layout like an invoice most of the time goes into setup. I'm now trying to achieve the same with FastReports.
It's unclear how much time setup actually takes, however, so I'd also be interested in whether this is a worthwhile endeavour at all.
My issue is that I am using a JSON API call, with a ConnectionStringExpression that takes a single parameter. In a nutshell, changing the parameter does not reload the data when I call Prepare.
Here's my code; with the second report load commented out, it renders the same report twice.
var report = new Report();
report.Load("C:\\dev\\ia\\products\\StratusCloud\\AppFiles\\Reports\\SalesQuoteItems.frx");

var urlTemplate = "http://localhost:9502/data/sales-quote/{CardCode#}/{DocEntry#}";
var reportParms = new Dictionary<string, dynamic>();
reportParms.Add("CardCode#", "C20000");
reportParms.Add("DocEntry#", 77);

var connectionstring = "Json=" + System.Text.RegularExpressions.Regex.Replace(urlTemplate, "{([^}]+)}", (m) =>
{
    if (reportParms.ContainsKey(m.Groups[1].Value))
    {
        return string.Format("{0}", reportParms[m.Groups[1].Value]);
    }
    return m.Value;
});

var dataapiparm = report.Dictionary.Parameters.FindByName("DataAPIUrl#");
if (dataapiparm != null)
{
    dataapiparm.Value = connectionstring;
}

foreach (FastReport.Data.Parameter P in report.Dictionary.Parameters)
{
    if (reportParms.ContainsKey(P.Name))
    {
        P.Value = reportParms[P.Name];
    }
}

report.Prepare();
var pdfExport = new PDFSimpleExport();
pdfExport.Export(report, "test1.pdf");

//report = new Report();
//report.Load("C:\\dev\\ia\\products\\StratusCloud\\AppFiles\\Reports\\SalesQuoteItems.frx");

reportParms["DocEntry#"] = 117;
connectionstring = "Json=" + System.Text.RegularExpressions.Regex.Replace(urlTemplate, "{([^}]+)}", (m) =>
{
    if (reportParms.ContainsKey(m.Groups[1].Value))
    {
        return string.Format("{0}", reportParms[m.Groups[1].Value]);
    }
    return m.Value;
});

dataapiparm = report.Dictionary.Parameters.FindByName("DataAPIUrl#");
if (dataapiparm != null)
{
    dataapiparm.Value = connectionstring;
}

foreach (FastReport.Data.Parameter P in report.Dictionary.Parameters)
{
    if (reportParms.ContainsKey(P.Name))
    {
        P.Value = reportParms[P.Name];
    }
}

report.Prepare();
pdfExport.Export(report, "test2.pdf");
Cheers,
Mark
FastReport definitely doesn't recalculate the ConnectionStringExpression on report.Prepare, so I went back to another method I had been looking at. It turns out that if the ConnectionString itself is rewritten, then report.Prepare does refetch the data.
A simple connection string without a schema takes a long time to process, so I strip off everything from the semicolon onward and keep it, replace the URL portion of the connection string, and then stick the same schema information back on the end.
Copying the schema information into each report generation's connection string seems to shave around 10 seconds off report.Prepare!
At the moment it's the best that I can do, and I wonder if there is a more efficient way of rerunning the same report against new data (with the same schema).
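As a sketch, that URL swap could look like this, assuming the connection string has the shape Json=<url>;<schema part>. The helper and the JsonSchema key shown are illustrative, not FastReport API:

```csharp
using System;

class ConnectionStringRewrite
{
    // Keep everything from the first ';' onward (the slow-to-rebuild schema part)
    // and splice a new URL into the "Json=<url>" prefix.
    static string ReplaceUrl(string connectionString, string newUrl)
    {
        int semi = connectionString.IndexOf(';');
        string schemaPart = semi >= 0 ? connectionString.Substring(semi) : "";
        return "Json=" + newUrl + schemaPart;
    }

    static void Main()
    {
        var original = "Json=http://localhost:9502/data/sales-quote/C20000/77;JsonSchema={\"type\":\"object\"}";
        var updated = ReplaceUrl(original, "http://localhost:9502/data/sales-quote/C20000/117");
        Console.WriteLine(updated);
        // Json=http://localhost:9502/data/sales-quote/C20000/117;JsonSchema={"type":"object"}
    }
}
```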
I'm having a bit of a problem with an update feature designed to compare two images and, if they differ, delete the existing image and data from Mongo and replace them with a new copy. The problem is that each component works individually. The loading feature will successfully upload an image and a BsonDocument. The delete method will (seemingly) successfully remove them: the document, the fs.files entry, and the fs.chunks entry.
However, when the entry is deleted and the new image is then uploaded, only the fs.files entry and the BsonDocument are pushed to the server. The actual image data is left off.
I'm running MongoDB 3.2.6 for Windows.
The replace block, followed by the upload block:
    if (newMD5.Equals(oldMD5) == false)
    {
        Debug.WriteLine("Updating image " + fileWithExt);
        BsonValue targetId = docCollection.FindOne(Query.EQ("id", fileNoExt))["_id"];
        deleteImageEntry(Query.EQ("_id", new ObjectId(targetId.ToString())));
        // continues to upload replacement
    }
    else
    {
        continue;
    }
}

// create new entry
uploadInfo = mongoFileSystem.Upload(memStream, fileFs);
BsonDocument entry = new BsonDocument();
entry.Add("fileId", uploadInfo.Id);
entry.Add("id", fileNoExt);
entry.Add("filename", fileFs);
entry.Add("user", "");

// appends to image collection
var newItemInfo = docCollection.Save(entry);
And the delete method
public static bool deleteImageEntry(IMongoQuery query)
{
    MongoInterface mongo = new MongoInterface();
    try
    {
        var docCollection = mongo.Database.GetCollection("employees");
        var imageCollection = mongo.Database.GetCollection<EmployeeImage>("employees");
        var toDelete = docCollection.FindOne(query);
        BsonValue fileId = toDelete.GetValue("fileId");
        mongo.Gridfs.DeleteById(fileId);
        WriteConcernResult wresult = imageCollection.Remove(query);
    }
    catch (Exception e)
    {
        Debug.WriteLine("Image could not be deleted \n\r" + e.Message);
        return false;
    }
    return true;
}
Sorry the code is messy; I've been doing a lot of guerrilla testing to try to find a reason for this. Similar code has worked in other parts of the program.
Well, this is embarrassing. After a few more hours of debugging, it turned out that MD5.ComputeHash() was leaving the memStream position at the end of the stream, so there was no data left to upload. Setting memStream.Position = 0; before the upload solved the problem.
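For anyone hitting the same thing: ComputeHash consumes the stream, so the position ends up at the end. A small self-contained repro:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class Md5StreamDemo
{
    static void Main()
    {
        var memStream = new MemoryStream(Encoding.UTF8.GetBytes("pretend image bytes"));

        using (var md5 = MD5.Create())
            md5.ComputeHash(memStream); // reads the stream to its end

        Console.WriteLine(memStream.Position == memStream.Length); // True: nothing left to read

        memStream.Position = 0; // rewind before handing the stream to Upload(...)
        Console.WriteLine(memStream.Position); // 0
    }
}
```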
I have created code that gathers a list of the existing "Line Styles" in Revit.
List<Category> All_Categories = doc.Settings.Categories.Cast<Category>().ToList();
Category Line_Category = All_Categories[1];
foreach (Category one_cat in All_Categories)
{
    if (one_cat.Name == "Lines")
    {
        Line_Category = one_cat;
    }
}

if (Line_Category.CanAddSubcategory)
{
    CategoryNameMap All_Styles = Line_Category.SubCategories;
    List<string> Line_Styles = new List<string>();
    foreach (Category one_category in All_Styles)
    {
        if (one_category.Name.Contains("CO_NAME"))
        {
            Line_Styles.Add(one_category.Name);
        }
    }
    TaskDialog.Show(Line_Styles.Count.ToString() + " Current Line Styles", List_To_Dialog(Line_Styles));
}
This gets the list of line styles, but when I try:
Category New_Line_Style = Line_Category.NewSubCategory....
Visual Studio tells me there is no definition for NewSubCategory
Can anyone tell me how to make a new SubCategory of "Lines", or what I'm doing wrong in the above code?
NOTE: I discovered the main issue. I was attempting to add the subcategory to my variable Line_Category (which is itself a category, and which should be the parent). I had also attempted adding the subcategory to All_Categories (which had been cast to a list rather than a CategoryNameMap).
When I used a variable that had not been cast, NewSubCategory became available. However, now I am unable to see how to set the line pattern associated with my new style: the only example I've found online suggests using New_Line_Style.LinePatternId, but that is not in the list of available options on my new subcategory. Is there some way to set the default pattern to be used when creating a new subcategory?
Jeremy Tammik wrote a post about retrieving all the line styles here: http://thebuildingcoder.typepad.com/blog/2013/08/retrieving-all-available-line-styles.html. That might help explain some of the line style category handling in more detail.
Here's another good link asking the same question, and how it was solved using VB: http://thebuildingcoder.typepad.com/blog/2013/08/retrieving-all-available-line-styles.html. Here's a C# version of the VB code that worked for creating a new line style:
UIApplication app = commandData.Application;
UIDocument uidoc = app.ActiveUIDocument;
Document ptr2Doc = uidoc.Document;
Category lineCat = ptr2Doc.Settings.Categories.get_Item(BuiltInCategory.OST_Lines);
Category lineSubCat;
string newSubCatName = "NewLineStyle";
Color newSubCatColor = new Color(255, 0, 0); //Red
try
{
using (Transaction docTransaction = new Transaction(ptr2Doc, "hatch22 - Create SubCategory"))
{
docTransaction.Start();
lineSubCat = ptr2Doc.Settings.Categories.NewSubcategory(lineCat, newSubCatName);
lineSubCat.LineColor = newSubCatColor;
docTransaction.Commit();
}
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
}
I have a CSV file in this format:
"Call Type","Charge Type","Map to"
"51","","Mobile SMS"
"52","","Mobile SMS"
"DD","Local Calls","Local Calls"
"DD","National Calls","National Calls"
The first two columns are the "source information" that my C# will supply, and the last column is what it should return.
Currently what I am doing is a switch statement hardcoded in C#:
var File001 = from line in File.ReadLines(bill_file)
              let l = line.Split(',')
              where l[5] == "51" && l[3] == ""
              select new { CallType = ICD_map(l[5], l[3]) };

private static string ICD_map(string call_type_description, string call_category)
{
    switch (call_type_description)
    {
        case "51":
        case "52":
            return "Mobile SMS";
        default:
            return "Unknown";
    }
}
I want this to be an expandable list, so my new method is to load the mapping table from a CSV file. Can you suggest any improvements to this method to make my definition library expandable? (I'm hoping a CSV file is okay for this purpose; it is only 100 lines long so far, so I'm not concerned about memory management.)
What I have tried so far is:
class ICD_Map2
{
    private string call_type;
    private string charge_type;
    private string map_to;

    // Default constructor
    public ICD_Map2()
    {
        call_type = "Unknown";
        charge_type = "Unknown";
        map_to = "Unknown";
    }

    // Constructor
    public ICD_Map2(string call_type, string charge_type, string map_to)
    {
        this.call_type = call_type;
        this.charge_type = charge_type;
        this.map_to = map_to;
    }
}
List<ICD_Map2> maps = new List<ICD_Map2>();

private void button2_Click(object sender, EventArgs e)
{
    // Start new thread to create BillSummary.csv
    button1.Enabled = false;
    maps.Clear();

    // load mapping file
    var reader = new StreamReader(File.OpenRead(@"Itemised_Call_Details_Map.csv"));
    while (!reader.EndOfStream)
    {
        var line = reader.ReadLine();
        var values = line.Split(',');

        maps.Add(new ICD_Map2(values[0].Replace("\"", ""), values[1].Replace("\"", ""), values[2].Replace("\"", "")));
        textBox2.AppendText(Environment.NewLine + " Mapping: " + values[0].Replace("\"", "") + " to " + values[1].Replace("\"", ""));
    }
}
I have loaded the CSV file into my program, but I am unable to do the lookup with LINQ. Can you tell me the next step?
I'm open to any other method.
Thanks for your time.
I would suggest you go with
http://www.codeproject.com/Articles/9258/A-Fast-CSV-Reader
It will give you lots of flexibility to play around with your code.
We have been using it in our projects, and it's really helpful to have full control in place of writing generic CSV code, which is prone to errors and bugs.
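As for the lookup itself: to query the loaded rows with LINQ, they need to be readable from outside the class, e.g. via public properties. A minimal self-contained sketch (the class here is a simplified stand-in for ICD_Map2, with the fields exposed as properties and the sample rows taken from the CSV in the question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class IcdMap
{
    public string CallType { get; set; }
    public string ChargeType { get; set; }
    public string MapTo { get; set; }
}

class LookupDemo
{
    static void Main()
    {
        var maps = new List<IcdMap>
        {
            new IcdMap { CallType = "51", ChargeType = "",            MapTo = "Mobile SMS" },
            new IcdMap { CallType = "DD", ChargeType = "Local Calls", MapTo = "Local Calls" },
        };

        // look up both source columns; fall back to "Unknown" when no row matches
        string Lookup(string callType, string chargeType) =>
            maps.FirstOrDefault(m => m.CallType == callType && m.ChargeType == chargeType)
                ?.MapTo ?? "Unknown";

        Console.WriteLine(Lookup("51", ""));          // Mobile SMS
        Console.WriteLine(Lookup("XX", "whatever"));  // Unknown
    }
}
```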
I'm trying to do a simple insert with a foreign key, but it seems that I need to call db.SaveChanges() for every record inserted. How can I manage with only one db.SaveChanges() at the end of this program?
public static void Test()
{
    using (var entities = new DBEntities())
    {
        var sale = new SalesFeed
        {
            SaleName = "Stuff...",
        };
        entities.AddToSalesFeedSet(sale);

        var phone = new CustomerPhone
        {
            CreationDate = DateTime.UtcNow,
            sales_feeds = sale
        };
        entities.AddToCustomerPhoneSet(phone);

        entities.SaveChanges();
    }
}
After running the above code I get this exception:
System.Data.UpdateException: An error occurred while updating the entries. See the InnerException for details. The specified value is not an instance of a valid constant type
Parameter name: value.
EDIT: Changed example code and added returned exception.
Apparently using UNSIGNED BIGINT causes this problem. When I switched to SIGNED BIGINT, everything worked as it was supposed to.
I tried to do this "the right way":
And then I wrote this little test app to scan a directory, store the directory and all its files in two tables:
static void Main(string[] args)
{
    string directoryName = args[0];
    if (!Directory.Exists(directoryName))
    {
        Console.WriteLine("ERROR: Directory '{0}' does not exist!", directoryName);
        return;
    }

    using (testEntities entities = new testEntities())
    {
        StoredDir dir = new StoredDir { DirName = directoryName };
        entities.AddToStoredDirSet(dir);

        foreach (string filename in Directory.GetFiles(directoryName))
        {
            StoredFile stFile = new StoredFile { FileName = Path.GetFileName(filename), Directory = dir };
            entities.AddToStoredFileSet(stFile);
        }

        try
        {
            entities.SaveChanges();
        }
        catch (Exception exc)
        {
            string message = exc.GetType().FullName + ": " + exc.Message;
        }
    }
}
As you can see, I only have a single call to .SaveChanges() at the very end - this works like a charm, everything's as expected.
Something about your approach must be screwing up the EF system.....
It might be related to the implementation of AddToSalesFeedSet etc. Is there a chance you are committing inside it?
Anyway, my point is that I encountered a very similar problem: I was trying to add a relation from a new entity to an existing entity that had been queried earlier, and that had an unsigned key, and I got the same exception.
The solution was to call Db.collection.Attach(previouslyQueriedEntityInstance);