After looking through various Lua interpreters for C#, it seems that only one is truly pure C#: MoonSharp. LuaInterface (defunct since 2009), which later became NLua, depends on one of two other C# libraries (KeraLua or another lib) and requires a customized lua52.dll (you cannot use the one from lua.org). They have a closed bug report that says to look at the readme for the download location of their customized lua52.dll, but it is absent. You are forced to download these libs from various sources and pray they work together. On top of that, you get a multi-file distribution that may cause compatibility issues with other programs, since several lua52.dll variants can end up on the end user's computer (assuming they use more than just your program).
The one shining beacon of light for NLua is its apparent popularity; however, the project has not received any significant update in several years. MoonSharp, on the other hand, appears to be completely self-contained, yet is lacking documentation for common tasks such as loading a table that was built with Lua and working with it.
I have come up with the following code based on the singular example they provide on GitHub, which is duplicated on their site at moonsharp.org (whichever came first, I am unsure, but having one example is not sufficient):
using System;
using System.IO;
using System.Linq;
using MoonSharp.Interpreter;

class Foo {
    void Bar( string accountName, string warcraftPath ) {
        string datastore = Path.Combine( warcraftPath, "WTF", "Account", accountName, "SavedVariables", "DataStore_Containers.lua" );
        DynValue table = Script.RunString( File.ReadAllText( datastore ) );
        Console.WriteLine( table.Table.Keys.Count().ToString() );
    }
}
This results in the following (the code in the picture is slightly different, as I adjusted the pasted code here for cleanliness and to make it easier for you to reproduce the problem using the table data in the pastebin link below).
The table I am trying to read looks like the following (simplified; I had to paste it on pastebin because the size exceeds 30,000 characters):
World of Warcraft - Datastore_Containers Lua table sample data
I sort of have something working; it's a bit hackish, but there doesn't seem to be a way to loop through the values, or to explicitly get the sub-tables/values or the key of a value.
Script s = new Script(CoreModules.Preset_Complete);
// hacked by appending ' return DataStore_ContainersDB ' to return the table,
// as DoString seems to only work to run a function expecting a result to be returned.
DynValue dv = s.DoString(luaTable + "\nreturn DataStore_ContainersDB;");
Table t = dv.Table;
foreach (var v in t.Keys)
{
    Console.WriteLine( v.ToPrintString() );
}
The problem is that there doesn't seem to be any way for me to enter the sub-table result sets, or to explicitly access them like t["global"] or t.global.
Managed to hack and slash my way through this and come up with a working solution, although it is fairly rudimentary (it's possible someone could take this concept and make accessing the sub-data more reasonable):
Script s = new Script(CoreModules.Preset_Complete);
DynValue dv = s.DoString(luaTable + "\nreturn DataStore_ContainersDB;");
Table t = dv.Table;
Table global = t.Get("global").ToObject<Table>().Get("Characters").ToObject<Table>();
foreach (var key in global.Keys)
{
    Console.WriteLine( key.ToString() );
}
The MoonSharp library appears to depend heavily on the Script class, which is the premise by which all other methods operate. The DoString method requires a return result, or the DynValue will always be void/null. DynValue appears to be the base global handler for the entire Lua process, and it can handle functions as well (i.e., that Lua string could contain several functions, which DynValue would expose and allow to be called from C#, returning the response as other DynValues).
So if you wish to load a Lua file that ONLY contains data in Lua's table format, you MUST append a return with the table name as the last line. This is why you see:
"\nreturn DataStore_ContainersDB;"
... as the table name is called "DataStore_ContainersDB"
Next, the result must be loaded into a fresh Table object, as DynValue is not an actual table but a class construct to hold all the various formats available (methods, tables, etc).
Once it is in Table form, you can work with it by looking up a key/value pair by key name, number, or DynValue. In my case, since I know the outer key names, I call straight through to the sub-table whose key names I do not know and want to work with:
Table.Get( Key )
Since this returns a DynValue, we must then convert/load the object as a table again, which is made convenient by the .ToObject<>() method.
The foreach loop I supplied then loops through the keys available in the sub-table located at: global > Characters > *
... and I then write each key name out to the console using key.ToString().
If there are other sub-tables, as there are in this example, you can traverse unknown ones using the same concept by expanding the foreach loop like this:
foreach (var key in global.Keys)
{
    if (IsTable(global.Get(key.String)))
    {
        Console.WriteLine("-------" + key.ToPrintString() + "-------");
        Table characterData = global.Get(key.String).ToObject<Table>();
        foreach (var characterDataField in characterData.Keys)
        {
            if (!IsTable(characterData.Get(characterDataField.String)))
            {
                Console.WriteLine(string.Format("{0} = {1}", characterDataField.ToPrintString(), characterData.Get(characterDataField.String).ToPrintString()));
            }
            else
            {
                Console.WriteLine(string.Format("{0} = {1}", characterDataField.ToPrintString(), "Table[]"));
            }
        }
        Console.WriteLine("");
    }
}
... and here is the method I wrote to quickly check whether the data is a table or not. This is the IsTable() method used in the foreach example above.
private static bool IsTable(DynValue table)
{
    switch (table.Type)
    {
        case DataType.Table:
            return true;
        default:
            // Boolean, ClrFunction, Function, Nil, Number, String,
            // TailCallRequest, Thread, Tuple, UserData, Void and
            // YieldRequest are all non-table types.
            return false;
    }
}
I have done what I could to make this workable; however, as stated before, I do see room for improving the recursion here. Checking the data type of every sub-object and then loading it feels very redundant, and it seems like this could be simplified.
I am open to other solutions to this question, ideally in the form of some enhancement that would make this less clunky to use.
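One way to reduce the repetition is a single recursive method that checks the DataType once per value and recurses into sub-tables. This is only a sketch (the DumpTable name and indentation scheme are my own, and I have not run it against the actual DataStore_Containers data), but it replaces the hand-written nested foreach loops above:

```csharp
using System;
using MoonSharp.Interpreter;

static class TableDumper
{
    // Recursively prints every key/value pair in a MoonSharp table.
    // Sub-tables are indented one level deeper instead of requiring
    // another hand-written foreach per nesting level.
    public static void DumpTable(Table table, int depth = 0)
    {
        string indent = new string(' ', depth * 2);
        foreach (TablePair pair in table.Pairs)
        {
            if (pair.Value.Type == DataType.Table)
            {
                Console.WriteLine("{0}{1}:", indent, pair.Key.ToPrintString());
                DumpTable(pair.Value.Table, depth + 1);
            }
            else
            {
                Console.WriteLine("{0}{1} = {2}", indent,
                    pair.Key.ToPrintString(), pair.Value.ToPrintString());
            }
        }
    }
}
```

Usage would be a single call, e.g. `TableDumper.DumpTable(dv.Table);`, and the IsTable() helper becomes unnecessary because the type check happens in one place.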
For dealing with tables within tables, which is my preferred way of doing things, I came up with this:
Script s = new Script();
s.DoString(luaCode);
Table tableData = s.Globals[rootTableIndex] as Table;
for (int i = 1; i < tableData.Length + 1; i++) {
    Table subTable = tableData.Get(i).Table;
    // Do cool stuff here with the data
}
Granted, this requires you to know the index of the global root table.
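If you don't know the root table's name in advance, one option (a sketch; note that with the default core modules, built-in tables such as math and string also live in Globals, so you may need to filter those out) is to enumerate the script's Globals table after DoString:

```csharp
Script s = new Script();
s.DoString(luaCode);

// Globals is itself a Table, so its keys can be enumerated to
// discover whatever root tables the script defined. Beware that
// standard-library tables (math, string, table, ...) appear here too.
foreach (var key in s.Globals.Keys)
{
    DynValue value = s.Globals.Get(key);
    if (value.Type == DataType.Table)
        Console.WriteLine("root table: " + key.ToPrintString());
}
```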
For my use of this I do the following (I'm still testing things out):
string luaCode = File.ReadAllText(Path.Combine(weaponDataPath, "rifles.Lua"));
Script script = new Script();
script.DoString(luaCode);
Table rifleData = script.Globals["rifles"] as Table;
for (int i = 1; i < rifleData.Length + 1; i++) {
    Table rifleTable = rifleData.Get(i).Table;
    // Create a new Gun per iteration; reusing a single instance would
    // make every dictionary entry point at the same, last-assigned object.
    Gun rifle = new Gun();
    rifle.Name = rifleTable.Get("Name").String;
    rifle.BaseDamage = (int)rifleTable.Get("BaseDamage").Number;
    rifle.RoundsPerMinute = (int)rifleTable.Get("RoundsPerMinute").Number;
    rifle.MaxAmmoCapacity = (int)rifleTable.Get("MaxAmmoCapacity").Number;
    rifle.Caliber = rifleTable.Get("Caliber").String;
    rifle.WeaponType = "RIFLE";
    RiflePrototypes.Add(rifle.Name, rifle);
}
This requires some assumptions about the tables and how the values are named, but if you are using this for object member assignment, I don't see why you would care about elements in the table that are not part of the object, which you define with assignments of the form type.Member = table.Get(memberKey).MemberType.
I currently have a class whose function is used as such:
var txbl = test.search_bustype("SUP", "Name");
or
foreach(string toWorkWith in test.search_bustype("SUP", "Name")){ // each one }
However, for every column I want to search, I have to create a separate function.
I.e., to search the columns bustype and companyID, I would have to write separate functions.
My current code is:
public Array search_bustype(string match, string forthat)
{
    db = new rkdb_07022016Entities2();
    var tbl = (from c in db.tblbus_business select c).ToArray();
    List<string> List = new List<string>();
    int i = 0;
    foreach (var toCheck in tbl)
    {
        if (toCheck.BusType.ToString() == match)
        {
            if (forthat == "Name")
            {
                List.Add(toCheck.Name);
            }
        }
        i++;
    }
    return List.ToArray();
}
Is there any way, like in PHP, to actually send the query to the function and run it there? I haven't been able to find many sources about how to build a secure infrastructure with Entity Framework, so I am wondering if anyone knows a way of creating a skeleton method with this framework.
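One common way to avoid writing one function per column is to pass the column access in as a delegate. This is only a sketch (the entity and context names are taken from the question; the generic helper and its name are my own), but it shows the idea:

```csharp
// Generic search: the caller supplies how to read the filter column
// and how to project the result, so one method covers every column.
public string[] Search(Func<tblbus_business, string> column, string match,
                       Func<tblbus_business, string> select)
{
    using (var db = new rkdb_07022016Entities2())
    {
        return db.tblbus_business
                 .AsEnumerable()              // filter in memory, as the original code does
                 .Where(c => column(c) == match)
                 .Select(select)
                 .ToArray();
    }
}

// Usage, mirroring search_bustype("SUP", "Name"):
// var names = Search(c => c.BusType.ToString(), "SUP", c => c.Name);
```

Note that because the delegates are plain Func<,> rather than Expression<Func<,>>, the filtering happens in memory after fetching the table, exactly as the original foreach does; translating the predicate into SQL would require expression trees instead.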
Thanks in advance!
Okay, so I stumbled on the Framework's sources and now understand that the Framework itself implements the skeleton method.
You simply refer to each query inside the (from c in ...) expression.
I'll have to look further into how this infrastructure works before I can understand how to implement further functions.
Thank you for your time, however! I will close this.
I'm trying to accomplish two things with the snippet of code below (from ApplicationDataService.lsml.cs in the server project of my LightSwitch 2013 solution).
partial void Query1_PreprocessQuery(ref IQueryable<CandidateBasic> query)
{
    query = from item in query
            where item.CreatedBy == this.Application.User.Name
            select item;
}

partial void CandidateBasics_Validate(CandidateBasic entity, EntitySetValidationResultsBuilder results)
{
    var newcandidateCount = this.DataWorkspace.ApplicationData.Details.GetChanges().AddedEntities.OfType<CandidateBasic>().Count();
    var databasecandidateCount = this.CandidateBasics.GetQuery().Execute().Count();
    const int maxcandidateCount = 1;
    if (newcandidateCount + databasecandidateCount > maxcandidateCount)
    {
        results.AddEntityError("Error: you are only allowed to have one candidate record");
    }
}
Firstly, I want to make sure each user can only see things that he has made. This, together with a preprocess query on the table in question, works perfectly.
The next bit is designed to make sure each user can only create one record in a certain table. Unfortunately, it seems to be counting the whole table, not the query I made that shows only the user's own records.
How can I get that second bit of code to limit only the user's own records, and not the global table?
You're not actually calling that query, though, are you? Your query is called Query1 based on the code provided, yet you don't seem to be calling it. I'd do something like:
int count = DataWorkspace.ApplicationData.Query1().Count();
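Alternatively (a sketch; the property names come from the question, and it assumes the CreatedBy field is already populated before validation runs), you could keep the count restricted to the current user without going through Query1, by filtering inside the validation method itself:

```csharp
partial void CandidateBasics_Validate(CandidateBasic entity, EntitySetValidationResultsBuilder results)
{
    string user = this.Application.User.Name;

    // Count only records created by the current user: both the ones
    // pending in this change set and the ones already saved.
    var newCount = this.DataWorkspace.ApplicationData.Details.GetChanges()
                       .AddedEntities.OfType<CandidateBasic>()
                       .Count(c => c.CreatedBy == user);
    var savedCount = this.CandidateBasics.GetQuery().Execute()
                         .Count(c => c.CreatedBy == user);

    if (newCount + savedCount > 1)
    {
        results.AddEntityError("Error: you are only allowed to have one candidate record");
    }
}
```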
Currently, I am struggling with an issue regarding Entity Framework (LINQ to Entities). Most of the time when I execute entity.SaveChanges() everything works fine, but at some points entity.SaveChanges() takes too long and times out. I searched a lot but was unable to find the answer.
(According to company policy, I cannot copy code elsewhere, so I do not have the exact code, but I will try to lay out the basic structure. I hope it helps you figure out the problem; if it doesn't, let me know.)
Task:
My task is to scan the whole network for some specific files, match the content of each file against the content of the database, and based on the match either insert into or update the database with the content of the file. I have around 3,000 files on the network.
Problem:
public void PerformAction()
{
    DbTransaction tran = null;
    entity.Connection.Open(); // entity is a global variable, declared like: myDatabaseEntity entity = new myDatabaseEntity();
    tran = entity.Connection.BeginTransaction();
    foreach (string path in listOfPaths)
    {
        // Returns 1 - multiple matches in database, OR
        //         2 - one matching file in database, OR
        //         3 - no match found.
        int returnValue = SearchDatabase();
        if (returnValue == 1)
            DoSomething(); // All inserts/updates work perfectly. SaveChanges also works correctly.
        else if (returnValue == 2)
            DoSomething(); // Again, everything OK. SaveChanges works perfectly here.
        else
        {
            // This function uses an XML file to generate all the queries dynamically,
            // for example: INSERT INTO TABLEA(1,2,3);
            GenerateInsertQueriesFromXML();
            ExecuteQueries();
            SaveChanges(); // <---- Problem here. Sometimes takes too much time.
        }
        // Transaction commit/rollback code here
    }
}
public bool ExecuteQueries()
{
    int result = 0;
    foreach (string query in listOfInsertQueries)
    {
        result = entity.ExecuteStoreCommand(query); // Execute the insert queries
        if (result <= 0)
            return false;
    }
    TestEntityA a = new TestEntityA();
    a.PropertyA = 123;
    a.PropertyB = 345;
    // I have around 25 properties here
    entity.AddToTestEntityA(a);
    return true;
}
Found the issue.
The main table I was inserting all the data into had a trigger on INSERT and DELETE.
So, whenever I inserted new data into the main table, the trigger was firing in the backend, and that was taking all the time.
Entity Framework is FAST and INNOCENT :D
Hi,
I have a CSV file like this:
user_name1,c:\photo\user_photo.jpg,0,0,0,0
in which every line represents a distinct object with its own fields separated by commas.
How do I find a particular object knowing the user name? (I use distinct user names.) And after that, how do I make that object the current object that I work with?
What I have done until now:
StreamReader sc = new StreamReader(@"C:\Player.csv");
String linie = sc.ReadLine();
while (!String.IsNullOrEmpty(linie))
{
    string[] stringu = linie.Split(',');
    Player player = new Player(stringu[0], stringu[1], int.Parse(stringu[2]), int.Parse(stringu[3]), int.Parse(stringu[4]));
    players.Add(player);
    linie = sc.ReadLine();
}
sc.Close();
var query = players.Where(a => a.Name == label6.Text);
Sincerely,
You could try using a library that makes it easy to query CSV files with LINQ. Please look here.
Firstly, I'd suggest that for parsing a CSV file you don't roll your own field splitter. I'd suggest using something like the TextFieldParser (which can be used from C# by referencing Microsoft.VisualBasic).
Once you've created the parser, you can use it to get the array of strings for each record, where each record is represented by a line in your file:
List<Player> players = new List<Player>();
using (var parser = new TextFieldParser(@"C:\Player.csv"))
{
    parser.SetDelimiters(",");
    while (!parser.EndOfData)
    {
        string[] playerFields = parser.ReadFields();
        // Create a player from the fields (FromFields being a factory method you would write)
        var player = Player.FromFields(playerFields);
        players.Add(player);
    }
}
Now it really depends on whether you want to query the players in the file repeatedly: if so, getting an in-memory copy and using LINQ makes sense, whereas if you only want to query once, I'd simply do the check line by line.
Assuming that you do want to query multiple times, parsing the file and holding the values in a List or similar makes sense (assuming the file isn't ridiculously big).
Finally, if you have distinct user names, you could use the FirstOrDefault method in LINQ to give you back the single player that matches:
var player = players.FirstOrDefault(p => p.Name.Equals(textBox.Text));
Or if you know that you have unique player names you could just store the whole lot in a dictionary....?
if ( players.ContainsKey(textBox.Text) )
{
    var player = players[textBox.Text];
}
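Building that dictionary from the parsed list is a one-liner with LINQ (assuming, as above, that names are unique; ToDictionary throws on duplicate keys):

```csharp
// Keyed by player name; requires a using directive for System.Linq.
Dictionary<string, Player> playersByName = players.ToDictionary(p => p.Name);
```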
Anyway, just some thoughts.
The only problem you have at the moment, as far as I can see, is that you say you are looking to fetch "a particular object", but in your last line you are using Where, which is going to return an IEnumerable containing one, none, or many such objects. If you want to fetch one uniquely named object, you should use Single or SingleOrDefault, e.g.
var myPlayer = players.Single(x => x.Name == label6.Text);
The answer you have accepted has merely converted your query from method syntax to the equivalent query syntax and introduced a compilation error by attempting to convert the resulting IEnumerable<Player> to a Player.
In anticipation of a problem you may be about to have, you might also want to look into Microsoft.VisualBasic.FileIO.TextFieldParser (you'll need to add a reference to Microsoft.VisualBasic.dll), which will let you parse a CSV more robustly, i.e. handle fields containing commas etc., using something like the following:
using (var parser = new TextFieldParser(filename))
{
    parser.SetDelimiters(",");
    while (!parser.EndOfData)
    {
        var p = parser.ReadFields();
        players.Add(new Player(p[0], p[1], int.Parse(p[2]), int.Parse(p[3]), int.Parse(p[4])));
    }
}
Have you tried something like
Player selected_player = from pl in players
where pl.Name == label6.Text
select pl;
?
Currently I am using EF 5.0.0 with EF Power Tools 3. I am trying to reverse engineer an existing database with Reverse Engineer Code First (Power Tools 3). A sample of doing this can be found at this MSDN link.
The only problem with doing this is the naming of my database objects. My database objects use lowercase words with underscores as separators, e.g. item_cart. However, I don't want that naming standard carried over into my C# application.
So I tried to tweak the template to convert each table/field name to follow the application's naming standards. This is the conversion method I have written so far:
public static string SqlNameToEntityName(string name)
{
    StringBuilder entityName = new StringBuilder();
    if (string.IsNullOrEmpty(name)) return name;
    else if (!name.Contains('_'))
    {
        return name[0].ToString().ToUpper() + name.Substring(1);
    }
    string rippedName = name;
    while (rippedName[0] == '_')
    {
        rippedName = rippedName.Substring(1);
    }
    // Check before taking a substring; a do/while here would throw on a
    // name whose only underscores were the leading ones (e.g. "_foo").
    while (rippedName.Contains('_'))
    {
        entityName.Append( CustomConvention.UpperFirstChar( rippedName.Substring(0, rippedName.IndexOf('_')) ) );
        rippedName = rippedName.Substring(rippedName.IndexOf('_') + 1);
    }
    entityName.Append( CustomConvention.UpperFirstChar(rippedName) );
    return entityName.ToString();
}

public static string UpperFirstChar(string name)
{
    if (string.IsNullOrEmpty(name)) return "";
    return name[0].ToString().ToUpper() + name.Substring(1, name.Length - 1).ToLower();
}
Then, in Entity.tt, I modified the template to use this static method: columnName = CustomConvention.SqlNameToEntityName(columnName);. Using this approach, I can convert table names from item_cart to ItemCart well enough (not thoroughly tested yet, though).
Lastly, here are my questions:
Currently I cannot change the .cs file name to follow the naming convention, so it still stays item_cart.cs.
Will my approach cause problems in the future?
Is there a better (standard/cleaner) way of doing this?
Aside from this reverse engineering, what is the best (fastest and cleanest) way to map tables (and maybe views and procedures) to entities?
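For what it's worth, the whole conversion above can be written more compactly with string.Split. This is a sketch of the same idea, not a drop-in replacement: it lowercases the tail of every segment, including single-word names, which the original only did for multi-word names.

```csharp
using System;
using System.Linq;

public static class NameConversion
{
    // snake_case -> PascalCase: split on underscores, drop empty segments
    // (from leading, trailing, or doubled underscores), capitalize each piece.
    public static string SqlNameToEntityName(string name)
    {
        if (string.IsNullOrEmpty(name)) return name;

        return string.Concat(
            name.Split(new[] { '_' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(s => char.ToUpper(s[0]) + s.Substring(1).ToLower()));
    }
}

// e.g. SqlNameToEntityName("item_cart") yields "ItemCart"
```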
Possible related question: Resolving naming convention conflict between entities in EF4 and our database standards?