I have a problem with C# and Unity3D. I would like to read in a series of JSON files (anywhere from a few to many) from a single directory, and deserialise the data in each file into one specific class. I'm wondering if there's a fast way of looping that process so I don't need a large if/switch block or anything. Deserialisation is fine, but I'm having trouble actually assigning the data from each file to a list containing objects of the correct Type.
Note: The class names are derived from the filename. For example, if the filename is Cars.json, I want to find a Component called CarManager and use it to store the deserialised data in a List<Car> at CarManager.cars.
I'm inexperienced, and I don't really know how to work with Type references yet. If someone could explain how I can write the ProcessFile() method to successfully differentiate between object types, so I can store the data for each file in Unity, I'd really appreciate it.
Cheers.
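For concreteness, here is a minimal sketch of that convention (Car and CarManager are just examples; the real data classes live in the Flight namespace used below):
namespace Flight {
    public class Car {
        public string name; // ...plus whatever other fields Cars.json contains
    }
    public class CarManager : MonoBehaviour {
        public List<Car> cars; // ProcessFile() should end up filling this
    }
}
My importer as it stands: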
namespace Flight {
public class DatasetManager : MonoBehaviour {
private JsonSerializer serialiser;
private StreamReader streamReader;
private string path;
private string fileName;
private string extension;
public void Start() {
// Define Path
path = Application.dataPath + "/Data/";
extension = "json";
// Construct Serialiser
serialiser = new JsonSerializer();
serialiser.Converters.Add(new StringEnumConverter());
serialiser.NullValueHandling = NullValueHandling.Ignore;
// Import Data
Import();
}
private void Import() {
string[] files = Directory.GetFiles(path, "*." + extension);
if(files.Length == 0) return;
for(int i = 0; i < files.Length; i++) ProcessFile(files[i]);
}
private void ProcessFile(string xFile) {
streamReader = File.OpenText(xFile);
// Read Filename
string plural = Path.GetFileNameWithoutExtension(xFile);
string entity = plural.EndsWith("ies") ? plural.Substring(0,plural.Length-3) + "y" : plural.Substring(0,plural.Length-1);
string manager = entity + "Manager";
// Determine Entity & Manager Types
System.Type entityType = System.Type.GetType("Flight." + entity);
System.Type managerType = System.Type.GetType("Flight." + manager);
if(entityType == null || managerType == null) return;
// Determine List Type
System.Type listType = null;
listType = typeof(List<>).MakeGenericType(entityType);
if(listType == null) return;
// Acquire Data
List<dynamic> data = System.Activator.CreateInstance(listType) as List<dynamic>;
data = serialiser.Deserialize(streamReader, typeof(List<dynamic>)) as List<dynamic>;
if(data == null) return;
// Store Data in Game
GameObject theGame = GameObject.FindGameObjectWithTag("Game");
Component theComponent = theGame.GetComponent(manager);
FieldInfo field = managerType.GetField(plural.ToLower());
/*** How can I proceed from here? ***/
List<dynamic> theList = field.GetValue(theComponent) as List<dynamic>;
field.SetValue(theComponent, data);
}
}
}
The above code produces an ArgumentException:
System.Collections.Generic.List`1[System.Object] cannot be converted to target type: System.Collections.Generic.List`1[Flight.Car]
No worries, fixed it. The key was to deserialise directly into the closed generic listType: the serialiser then returns an object whose runtime type is the concrete List<Flight.Car>, which field.SetValue accepts, whereas List<dynamic> is List<object> at runtime and cannot be converted to List<Car>. The following modified code within ProcessFile() appears to work properly:
private void ProcessFile(string xFile) {
// ...
// ...
// Determine List Type
System.Type listType = typeof(List<>).MakeGenericType(entityType);
if(listType == null) return;
// Acquire Data
streamReader = File.OpenText(xFile);
object data = serialiser.Deserialize(streamReader, listType);
if(data == null) return;
// Store Data in Game
GameObject theGame = GameObject.FindGameObjectWithTag("Game");
Component theComponent = theGame.GetComponent(manager);
FieldInfo field = managerType.GetField(plural.ToLower());
field.SetValue(theComponent, data);
}
I have a process whereby we have written a class to import a large(ish) CSV into our app using CsvHelper (https://joshclose.github.io/CsvHelper).
I would like to compare the header to the Map to ensure the header's integrity. We get the CSV file from a 3rd party and I want to ensure it doesn't change over time and thought the best way to do this would be to compare it against the map.
We have a class set up as so (trimmed):
public class VisitExport
{
public int? Count { get; set; }
public string CustomerName { get; set; }
public string CustomerAddress { get; set; }
}
And its corresponding map (also trimmed):
public class VisitMap : ClassMap<VisitExport>
{
public VisitMap()
{
Map(m => m.Count).Name("Count");
Map(m => m.CustomerName).Name("Customer Name");
Map(m => m.CustomerAddress).Name("Customer Address");
}
}
This is the code I have for reading the CSV file, and it works great. I have a try/catch in place for errors but ideally, if it fails specifically because of a header mismatch, I'd like to handle that case specifically.
private void fileLoadedLink_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
try
{
var filePath = string.Empty;
data = new List<VisitExport>();
using (OpenFileDialog openFileDialog = new OpenFileDialog())
{
openFileDialog.InitialDirectory = new KnownFolder(KnownFolderType.Downloads).Path;
openFileDialog.Filter = "csv files (*.csv)|*.csv";
openFileDialog.FilterIndex = 2;
openFileDialog.RestoreDirectory = true;
if (openFileDialog.ShowDialog() == DialogResult.OK)
{
filePath = openFileDialog.FileName;
var fileStream = openFileDialog.OpenFile();
var culture = CultureInfo.GetCultureInfo("en-GB");
using (StreamReader reader = new StreamReader(fileStream))
using (var readCsv = new CsvReader(reader, culture))
{
var map = new VisitMap();
readCsv.Context.RegisterClassMap(map);
var fileContent = readCsv.GetRecords<VisitExport>();
data = fileContent.ToList();
fileLoadedLink.Text = filePath;
viewModel.IsFileLoaded = true;
}
}
}
}
catch (CsvHelperException ex)
{
Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);
fileLoadedLink.Text = "Error loading file.";
viewModel.IsFileLoaded = false;
}
}
Is there a way of comparing the Csv header vs my map?
There are two basic cases for CSV files with headers: missing CSV columns, and extra CSV columns. The first is already detected by CsvHelper while the detection of the second is not implemented out of the box and requires subclassing of CsvReader.
(As CsvHelper maps CSV columns to model properties by name, permuting the order of the columns in the CSV file would not be considered a breaking change.)
Note that this only applies to CSV files that actually contain headers. Since you are not setting CsvConfiguration.HasHeaderRecord = false I assume that this applies to your use case.
Details about each of the two cases follow.
Missing CSV columns.
Currently CsvHelper already throws an exception by default in such situations. When unmapped data model properties are found, CsvConfiguration.HeaderValidated is invoked. By default this is set to ConfigurationFunctions.HeaderValidated whose current behavior is to throw a HeaderValidationException if there are any unmapped model properties. You can replace or extend HeaderValidated with logic of your own if you prefer:
var culture = CultureInfo.GetCultureInfo("en-GB");
var config = new CsvConfiguration(culture)
{
HeaderValidated = (args) =>
{
// Add additional logic as required here
ConfigurationFunctions.HeaderValidated(args);
},
};
using (var readCsv = new CsvReader(reader, config))
{
// Remainder unchanged
Demo fiddle #1 here.
Extra CSV columns.
Currently CsvHelper does not inform the application when this happens. See Throw if csv contains unexpected columns #1032 which confirms that this is not implemented out of the box.
In a GitHub comment, user leopignataro suggests a workaround, which is to subclass CsvReader and add the necessary validation logic oneself. However, the version shown in the comment doesn't seem to handle duplicated column names or embedded references. The following subclass of CsvReader should do this correctly. It is based on the logic in CsvReader.ValidateHeader(ClassMap map, List<InvalidHeader> invalidHeaders): it recursively walks the incoming ClassMap, attempts to find a CSV header corresponding to each member or constructor parameter, and flags the index of each one that is mapped. Afterwards, if there are any unmapped headers, the supplied Action<CsvContext, List<string>> OnUnmappedCsvHeaders is invoked to notify the application of the problem and throw some exception if desired:
public class ValidatingCsvReader : CsvReader
{
public ValidatingCsvReader(TextReader reader, CultureInfo culture, bool leaveOpen = false) : this(new CsvParser(reader, culture, leaveOpen)) { }
public ValidatingCsvReader(TextReader reader, CsvConfiguration configuration) : this(new CsvParser(reader, configuration)) { }
public ValidatingCsvReader(IParser parser) : base(parser) { }
public Action<CsvContext, List<string>> OnUnmappedCsvHeaders { get; set; }
public override void ValidateHeader(Type type)
{
base.ValidateHeader(type);
var headerRecord = HeaderRecord;
var mapped = new BitArray(headerRecord.Length);
var map = Context.Maps[type];
FlagMappedHeaders(map, mapped);
var unmappedHeaders = Enumerable.Range(0, headerRecord.Length).Where(i => !mapped[i]).Select(i => headerRecord[i]).ToList();
if (unmappedHeaders.Count > 0)
{
OnUnmappedCsvHeaders?.Invoke(Context, unmappedHeaders);
}
}
protected virtual void FlagMappedHeaders(ClassMap map, BitArray mapped)
{
// Logic adapted from https://github.com/JoshClose/CsvHelper/blob/0d753ff09294b425e4bc5ab346145702eeeb1b6f/src/CsvHelper/CsvReader.cs#L157
// By https://github.com/JoshClose
foreach (var parameter in map.ParameterMaps)
{
if (parameter.Data.Ignore)
continue;
if (parameter.Data.IsConstantSet)
// If ConvertUsing and Constant don't require a header.
continue;
if (parameter.Data.IsIndexSet && !parameter.Data.IsNameSet)
// If there is only an index set, we don't want to validate the header name.
continue;
if (parameter.ConstructorTypeMap != null)
{
FlagMappedHeaders(parameter.ConstructorTypeMap, mapped);
}
else if (parameter.ReferenceMap != null)
{
FlagMappedHeaders(parameter.ReferenceMap.Data.Mapping, mapped);
}
else
{
var index = GetFieldIndex(parameter.Data.Names.ToArray(), parameter.Data.NameIndex, true);
if (index >= 0)
mapped.Set(index, true);
}
}
foreach (var memberMap in map.MemberMaps)
{
if (memberMap.Data.Ignore || !CanRead(memberMap))
continue;
if (memberMap.Data.ReadingConvertExpression != null || memberMap.Data.IsConstantSet)
// If ConvertUsing and Constant don't require a header.
continue;
if (memberMap.Data.IsIndexSet && !memberMap.Data.IsNameSet)
// If there is only an index set, we don't want to validate the header name.
continue;
var index = GetFieldIndex(memberMap.Data.Names.ToArray(), memberMap.Data.NameIndex, true);
if (index >= 0)
mapped.Set(index, true);
}
foreach (var referenceMap in map.ReferenceMaps)
{
if (!CanRead(referenceMap))
continue;
FlagMappedHeaders(referenceMap.Data.Mapping, mapped);
}
}
}
And then in your code, handle the OnUnmappedCsvHeaders callback however you would like, such as by throwing a CsvHelperException or some other custom exception:
using (var readCsv = new ValidatingCsvReader(reader, culture)
{
OnUnmappedCsvHeaders = (context, headers) => throw new CsvHelperException(context, string.Format("Unmapped CSV headers: \"{0}\"", string.Join(",", headers))),
})
Demo fiddles:
#2 (your model).
#3 (with external references).
#4 (duplicate names).
#5 (using the auto-generated map).
This could use additional testing, e.g. for data models with parameterized constructors and additional, mutable properties.
How about catching HeaderValidationException before catching CsvHelperException?
catch (HeaderValidationException ex)
{
var message = ex.Message.Split('\n')[0];
var currentHeader = ex.Context.Reader.HeaderRecord;
message += $"{Environment.NewLine}Header: \"{string.Join(",", currentHeader)}\"";
Console.WriteLine(message);
fileLoadedLink.Text = "Error loading file.";
viewModel.IsFileLoaded = false;
}
catch (CsvHelperException ex)
{
Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);
fileLoadedLink.Text = "Error loading file.";
viewModel.IsFileLoaded = false;
}
I have a class like this:
public class Params
{
public string FirstName;
public string SecondName;
public string Path;
public long Count;
public double TotalSize;
public long Time;
public bool HasError;
public Params()
{
}
public Params(string firstName, string secondName, string path, long count, double totalSize, long time, bool hasError)
{
FirstName = firstName;
SecondName = secondName;
Path = path;
Count = count;
TotalSize = totalSize;
Time = time;
HasError = hasError;
}
}
I have the JSON helper class like this:
public static class FileWriterJson
{
public static void WriteToJsonFile<T>(string filePath, T objectToWrite, bool append = true) where T : new()
{
TextWriter writer = null;
try
{
var contentsToWriteToFile = JsonConvert.SerializeObject(objectToWrite);
writer = new StreamWriter(filePath, append);
writer.Write(contentsToWriteToFile);
}
finally
{
if (writer != null)
writer.Close();
}
}
public static T ReadFromJsonFile<T>(string filePath) where T : new()
{
TextReader reader = null;
try
{
reader = new StreamReader(filePath);
var fileContents = reader.ReadToEnd();
return JsonConvert.DeserializeObject<T>(fileContents);
}
finally
{
if (reader != null)
reader.Close();
}
}
}
The main program is like this:
var Params1 = new Params("Test", "TestSecondName", "Mypath",7, 65.0, 0, false);
FileWriterJson.WriteToJsonFile<Params>("C:\\Users\\myuser\\bin\\Debug\\test1.json", Params1);
FileWriterJson.WriteToJsonFile<Params>("C:\\Users\\myuser\\bin\\Debug\\test1.json", Params1);
This is my test1.json:
{"FirstName":"Test","SecondName":"TestSecondName","Path":"Mypath","Count":7,"TotalSize":65.0,"Time":0,"HasError":false}{"FirstName":"Test","SecondName":"TestSecondName","Path":"Mypath","Count":7,"TotalSize":65.0,"Time":0,"HasError":false}
As you can see, I have two JSON objects written in the file.
What I need to do is:
void ReadAllObjects() {
    // read the json objects from the file
    // count the json objects - suppose there are two objects
    for (int i = 0; i < 2; i++) {
        // do some processing with the first object
        // if processing is successful, delete that object (I don't know how to delete a particular json object from the file)
    }
}
But when I read it like this:
var abc =FileWriterJson.ReadFromJsonFile<Params>(
"C:\\Users\\myuser\\bin\\Debug\\test1.json");
I get the following error:
"Additional text encountered after finished reading JSON content: {.
Path '', line 1, position 155."
Then I used the following code to read the JSON file:
public static IEnumerable<T> FromDelimitedJson<T>(TextReader reader, JsonSerializerSettings settings = null)
{
using (var jsonReader = new JsonTextReader(reader) { CloseInput = false, SupportMultipleContent = true })
{
var serializer = JsonSerializer.CreateDefault(settings);
while (jsonReader.Read())
{
if (jsonReader.TokenType == JsonToken.Comment)
continue;
yield return serializer.Deserialize<T>(jsonReader);
}
}
}
Which worked fine for me.
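For reference, this is roughly how I call it (same test file as above):
using (var reader = File.OpenText("C:\\Users\\myuser\\bin\\Debug\\test1.json"))
{
    foreach (var item in FromDelimitedJson<Params>(reader))
        Console.WriteLine(item.FirstName); // process each object in turn
}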
Now I need suggestions on the following:
1. When I put my test1.json data into https://jsonlint.com/ it reports an error:
Parse error on line 9:
..."HasError": false} { "FirstName": "Tes
----------------------^
Expecting 'EOF', '}', ',', ']', got '{'
Should I write to the file in some other way?
2. Is there a better way of doing this?
You are writing each object out individually to the file, so what you are creating is not a valid JSON file, just a text file with individual JSON objects concatenated together.
To make it valid JSON, you need to put the objects into an array or list and then save that to the file.
var Params1 = new Params("Test", "TestFirstName", "Mypath",7, 65.0, 0, false);
var Params2 = new Params("Test 2", "TestSecondName", "Mypath",17, 165.0, 10, false);
List<Params> paramsList = new List<Params>();
paramsList .Add(Params1);
paramsList .Add(Params2);
FileWriterJson.WriteToJsonFile<List<Params>>("C:\\Users\\myuser\\bin\\Debug\\test1.json", paramsList);
FileWriterJson.WriteToJsonFile("C:\Users\myuser\bin\Debug\test1.json", Params1);
Then you should be able to read it back in OK. Don't forget to read it in as a List<Params>.
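For example, a minimal read-back using the same helper:
List<Params> loaded = FileWriterJson.ReadFromJsonFile<List<Params>>("C:\\Users\\myuser\\bin\\Debug\\test1.json");
foreach (var p in loaded)
    Console.WriteLine(p.FirstName + " " + p.SecondName);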
I have data in tab-separated values (TSV) text files that I want to read and (eventually) store in database tables. With the TSV files, each line contains one record, but in one file the record can have 2 fields, in another file 4 fields, etc. I wrote working code to handle the 2-field records, but I thought this might be a good case for a generic method (or two) rather than writing new methods for each kind of record. However, I have not been able to code this because of 2 problems: I can't create a new object for holding the record data, and I don't know how to use reflection to generically fill the instance variables of my objects.
I looked at several other similar posts, including Datatable to object by using reflection and linq
Below is the code that works (this is in Windows, if that matters) and also the code that doesn't work.
public class TSVFile
{
public class TSVRec
{
public string item1;
public string item2;
}
private string fileName = "";
public TSVFile(string _fileName)
{
fileName = _fileName;
}
public TSVRec GetTSVRec(string Line)
{
TSVRec rec = new TSVRec();
try
{
string[] fields = Line.Split(new char[1] { '\t' });
rec.item1 = fields[0];
rec.item2 = fields[1];
}
catch (Exception ex)
{
System.Windows.Forms.MessageBox.Show("Bad import data on line: " +
Line + "\n" + ex.Message, "Error",
System.Windows.Forms.MessageBoxButtons.OK,
System.Windows.Forms.MessageBoxIcon.Error);
}
return rec;
}
public List<TSVRec> ImportTSVRec()
{
List<TSVRec> loadedData = new List<TSVRec>();
using (StreamReader sr = File.OpenText(fileName))
{
string Line = null;
while ((Line = sr.ReadLine()) != null)
{
loadedData.Add(GetTSVRec(Line));
}
}
return loadedData;
}
// *** Attempted generic methods ***
public T GetRec<T>(string Line)
{
T rec = new T(); // compile error!
Type t = typeof(T);
FieldInfo[] instanceVars = t.GetFields();
string[] fields = Line.Split(new char[1] { '\t' });
for (int i = 0; i < instanceVars.Length - 1; i++)
{
rec. ??? = fields[i]; // how do I finish this line???
}
return rec;
}
public List<T> Import<T>(Type t)
{
List<T> loadedData = new List<T>();
using (StreamReader sr = File.OpenText(fileName))
{
string Line = null;
while ((Line = sr.ReadLine()) != null)
{
loadedData.Add(GetRec<T>(Line));
}
}
return loadedData;
}
}
I saw the line
T rec = new T();
in the above-mentioned post, but it doesn't work for me...
I would appreciate any suggestions for how to make this work, if possible. I want to learn more about using reflection with generics, so I don't only want to understand how, but also why.
I wish @EdPlunkett had posted his suggestion as an answer, rather than a comment, so I could mark it as the answer...
To summarize: to do what I want to do, there is no need for "Assigning instance variables obtained through reflection in generic method". In fact, I can have a generic solution without using a generic method:
public class GenRec
{
public List<string> items = new List<string>();
}
public GenRec GetRec(string Line)
{
GenRec rec = new GenRec();
try
{
string[] fields = Line.Split(new char[1] { '\t' });
for (int i = 0; i < fields.Length; i++)
rec.items.Add(fields[i]);
}
catch (Exception ex)
{
System.Windows.Forms.MessageBox.Show("Bad import data on line: " + Line + "\n" + ex.Message, "Error",
System.Windows.Forms.MessageBoxButtons.OK,
System.Windows.Forms.MessageBoxIcon.Error);
}
return rec;
}
public List<GenRec> Import()
{
List<GenRec> loadedData = new List<GenRec>();
using (StreamReader sr = File.OpenText(fileName))
{
string Line = null;
while ((Line = sr.ReadLine()) != null)
loadedData.Add(GetRec(Line));
}
return loadedData;
}
I just tested this, and it works like a charm!
Of course, this isn't helping me to learn how to write generic methods or use reflection, but I'll take it...
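For anyone who does want the generic/reflection version I originally asked about, here is a sketch that should compile: the new() constraint fixes the 'T rec = new T();' compile error, and FieldInfo.SetValue replaces the 'rec.??? = fields[i]' line. Two assumptions: the public fields are strings (as in TSVRec), and GetFields() is not guaranteed to return fields in declaration order, so mapping by index may need a more explicit ordering in practice.
public T GetRec<T>(string Line) where T : new()
{
    T rec = new T(); // the new() constraint makes this legal
    FieldInfo[] instanceVars = typeof(T).GetFields();
    string[] fields = Line.Split(new char[1] { '\t' });
    int count = Math.Min(instanceVars.Length, fields.Length);
    for (int i = 0; i < count; i++)
    {
        // Assign each split field to the corresponding public field via reflection
        instanceVars[i].SetValue(rec, fields[i]);
    }
    return rec;
}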
I have some auto-generated data being exported into my Unity project. To help me out I want to assign a custom icon to these assets to clearly identify them. This is of course simply possible via the editor itself, but ideally I'd like this to happen automatically on import.
To this effect I have written an AssetPostprocessor which should take care of this for me. In the example below (which targets MonoScripts, but the approach could apply to any kind of asset), all newly imported scripts will have the MyFancyIcon icon assigned to them. This update is visible both on the script assets themselves and on the MonoBehaviours in the inspector.
using UnityEngine;
using UnityEditor;
using System.Reflection;
public class IconAssignmentPostProcessor : AssetPostprocessor
{
static void OnPostprocessAllAssets(string[] importedAssets, string[] deletedAssets, string[] movedAssets, string[] movedFromAssetPaths)
{
Texture2D icon = AssetDatabase.LoadAssetAtPath<Texture2D>("Assets/Iconfolder/MyFancyIcon.png");
foreach (string asset in importedAssets)
{
MonoScript script = AssetDatabase.LoadAssetAtPath<MonoScript>(asset);
if(script != null)
{
PropertyInfo inspectorModeInfo = typeof(SerializedObject).GetProperty("inspectorMode", BindingFlags.NonPublic | BindingFlags.Instance);
SerializedObject serializedObject = new SerializedObject(script);
inspectorModeInfo.SetValue(serializedObject, InspectorMode.Debug, null);
SerializedProperty iconProperty = serializedObject.FindProperty("m_Icon");
iconProperty.objectReferenceValue = icon;
serializedObject.ApplyModifiedProperties();
serializedObject.Update();
EditorUtility.SetDirty(script);
}
}
AssetDatabase.SaveAssets();
AssetDatabase.Refresh();
}
}
And it works just fine, except for one problem. The updates aren't saved when closing the project and reopening it. To the best of my knowledge, either the EditorUtility.SetDirty(script); call should take care of this, or at the very least the AssetDatabase.SaveAssets(); call.
However, looking at the difference between manually assigning an icon (which works) and doing it programmatically, there is an icon field in the meta files associated with the assets which does get set when manually assigning an icon, but not in my scripted case. (In the scripted case the meta files aren't even updated)
So what gives? Do I have to do anything in particular when it's (apparently) only meta data I'm changing? Is there anything simple I'm overlooking?
Experimented with this code and concluded that this is a bug. Made contact with Unity and this is their reply:
Currently, it is a submitted bug and our Developer Team is
investigating it. It seems that this bug happens because
AssetDatabase.SaveAssets() does not save the changes.
The workaround is to do this manually.
Processing and saving data when OnPostprocessAllAssets is called:
1. Create a JSON settings file that will hold the settings, if it does not exist yet.
2. When OnPostprocessAllAssets is called, load the old JSON settings file.
3. Apply the fancy icon to the asset.
4. Loop over the loaded JSON settings and check if they contain the file from the importedAssets parameter. If they contain the loaded file, modify that setting and save it. If they do not, add it to the List and then save it.
5. Check with File.Exists whether an asset in importedAssets no longer exists on the hard drive. If it does not exist, remove it from the List of loaded JSON settings and then save it.
Auto re-apply the fancy icon when Unity loads:
1. Add a static constructor to the IconAssignmentPostProcessor class. This static constructor is automatically called when the Editor loads and also when OnPostprocessAllAssets is invoked.
2. When the constructor is called, create a JSON settings file that will hold the settings if it does not exist.
3. Load the old JSON settings file.
4. Re-apply the fancy icons by looping through the loaded JSON file.
5. Check whether the loaded JSON file still holds assets that are no longer on the drive. If so, remove each such asset from the List and then save it.
Below is what the new IconAssignmentPostProcessor script should look like:
using UnityEngine;
using UnityEditor;
using System.Reflection;
using System.IO;
using System.Collections.Generic;
using System.Text;
using System;
public class IconAssignmentPostProcessor : AssetPostprocessor
{
// Called when Editor Starts
static IconAssignmentPostProcessor()
{
prepareSettingsDir();
reloadAllFancyIcons();
}
private static string settingsPath = Application.dataPath + "/FancyIconSettings.text";
private static string fancyIconPath = "Assets/Iconfolder/MyFancyIcon.png";
private static bool firstRun = true;
static void OnPostprocessAllAssets(string[] importedAssets, string[] deletedAssets, string[] movedAssets, string[] movedFromAssetPaths)
{
prepareSettingsDir();
//Load old settings
FancyIconSaver savedFancyIconSaver = LoadSettings();
Texture2D icon = AssetDatabase.LoadAssetAtPath<Texture2D>(fancyIconPath);
for (int j = 0; j < importedAssets.Length; j++)
{
string asset = importedAssets[j];
MonoScript script = AssetDatabase.LoadAssetAtPath<MonoScript>(asset);
if (script != null)
{
//Apply fancy Icon
ApplyIcon(script, icon);
//Process each asset
processFancyIcon(savedFancyIconSaver, fancyIconPath, asset, pathToGUID(asset));
}
}
AssetDatabase.SaveAssets();
AssetDatabase.Refresh();
}
public static string pathToGUID(string path)
{
return AssetDatabase.AssetPathToGUID(path);
}
public static string guidToPath(string guid)
{
return AssetDatabase.GUIDToAssetPath(guid);
}
public static void processFancyIcon(FancyIconSaver oldSettings, string fancyIconPath, string scriptPath, string scriptGUID)
{
int matchIndex = -1;
if (oldSettings == null)
{
oldSettings = new FancyIconSaver();
}
if (oldSettings.fancyIconData == null)
{
oldSettings.fancyIconData = new List<FancyIconData>();
}
FancyIconData fancyIconData = new FancyIconData();
fancyIconData.fancyIconPath = fancyIconPath;
fancyIconData.scriptPath = scriptPath;
fancyIconData.scriptGUID = scriptGUID;
//Check if this guid exist in the List already. If so, override it with the match index
if (containsGUID(oldSettings, scriptGUID, out matchIndex))
{
oldSettings.fancyIconData[matchIndex] = fancyIconData;
}
else
{
//Does not exist, add it to the existing one
oldSettings.fancyIconData.Add(fancyIconData);
}
//Save the data
SaveSettings(oldSettings);
//If an asset no longer exists on disk, delete its entry from the json settings
//(iterate backwards so RemoveAt does not skip the following entry)
for (int i = oldSettings.fancyIconData.Count - 1; i >= 0; i--)
{
    string savedPath = oldSettings.fancyIconData[i].scriptPath;
    if (!assetExist(savedPath))
    {
        //Remove it from the List then save the modified List
        oldSettings.fancyIconData.RemoveAt(i);
        SaveSettings(oldSettings);
        Debug.Log("Asset " + savedPath + " no longer exists. Deleted it from JSON Settings");
    }
}
}
//Re-loads all the fancy icons
public static void reloadAllFancyIcons()
{
if (!firstRun)
{
    return; //Exit if this is not the first run
}
firstRun = false;
//Load old settings
FancyIconSaver savedFancyIconSaver = LoadSettings();
if (savedFancyIconSaver == null || savedFancyIconSaver.fancyIconData == null)
{
Debug.Log("No Previous Fancy Icon Settings Found!");
return;//Exit
}
//Apply Icon Changes
for (int i = 0; i < savedFancyIconSaver.fancyIconData.Count; i++)
{
string asset = savedFancyIconSaver.fancyIconData[i].scriptPath;
//If the asset does not exist, delete it from the json settings
if (!assetExist(asset))
{
    //Remove it from the List then save the modified List
    savedFancyIconSaver.fancyIconData.RemoveAt(i);
    SaveSettings(savedFancyIconSaver);
    Debug.Log("Asset " + asset + " no longer exists. Deleted it from JSON Settings");
    i--; //Compensate for the removal so the next entry is not skipped
    continue; //Continue to the next Settings in the List
}
string tempFancyIconPath = savedFancyIconSaver.fancyIconData[i].fancyIconPath;
Texture2D icon = AssetDatabase.LoadAssetAtPath<Texture2D>(tempFancyIconPath);
MonoScript script = AssetDatabase.LoadAssetAtPath<MonoScript>(asset);
if (script == null)
{
continue;
}
Debug.Log(asset);
ApplyIcon(script, icon);
}
AssetDatabase.SaveAssets();
AssetDatabase.Refresh();
}
private static void ApplyIcon(MonoScript script, Texture2D icon)
{
PropertyInfo inspectorModeInfo = typeof(SerializedObject).GetProperty("inspectorMode", BindingFlags.NonPublic | BindingFlags.Instance);
SerializedObject serializedObject = new SerializedObject(script);
inspectorModeInfo.SetValue(serializedObject, InspectorMode.Debug, null);
SerializedProperty iconProperty = serializedObject.FindProperty("m_Icon");
iconProperty.objectReferenceValue = icon;
serializedObject.ApplyModifiedProperties();
serializedObject.Update();
EditorUtility.SetDirty(script);
Debug.Log("Applied Fancy Icon to: " + script.name);
}
//Creates the Settings File if it does not exist yet
private static void prepareSettingsDir()
{
    if (!File.Exists(settingsPath))
    {
        //Dispose the FileStream immediately so later reads/writes are not blocked by an open handle
        File.Create(settingsPath).Dispose();
    }
}
}
public static void SaveSettings(FancyIconSaver fancyIconSaver)
{
try
{
string jsonData = JsonUtility.ToJson(fancyIconSaver, true);
Debug.Log("Data: " + jsonData);
byte[] jsonByte = Encoding.ASCII.GetBytes(jsonData);
File.WriteAllBytes(settingsPath, jsonByte);
}
catch (Exception e)
{
Debug.Log("Settings not Saved: " + e.Message);
}
}
public static FancyIconSaver LoadSettings()
{
FancyIconSaver loadedData = null;
try
{
byte[] jsonByte = File.ReadAllBytes(settingsPath);
string jsonData = Encoding.ASCII.GetString(jsonByte);
loadedData = JsonUtility.FromJson<FancyIconSaver>(jsonData);
return loadedData;
}
catch (Exception e)
{
Debug.Log("No Settings Loaded: " + e.Message);
}
return loadedData;
}
public static bool containsGUID(FancyIconSaver fancyIconSaver, string guid, out int matchIndex)
{
matchIndex = -1;
if (fancyIconSaver == null || fancyIconSaver.fancyIconData == null)
{
Debug.Log("List is null");
return false;
}
for (int i = 0; i < fancyIconSaver.fancyIconData.Count; i++)
{
if (fancyIconSaver.fancyIconData[i].scriptGUID == guid)
{
matchIndex = i;
return true;
}
}
return false;
}
public static bool assetExist(string path)
{
return File.Exists(path);
}
[Serializable]
public class FancyIconSaver
{
public List<FancyIconData> fancyIconData;
}
[Serializable]
public class FancyIconData
{
public string fancyIconPath;
public string scriptPath;
public string scriptGUID;
}
}
This should hold the fancy icons when Unity is restarted.
Unfortunately, I think you may find your answer here: http://answers.unity3d.com/questions/344153/save-game-using-scriptable-object-derived-custom-a.html
I have the following code which takes a CSV and writes to a console:
using (CsvReader csv = new CsvReader(
new StreamReader("data.csv"), true))
{
// missing fields will not throw an exception,
// but will instead be treated as if there was a null value
csv.MissingFieldAction = MissingFieldAction.ReplaceByNull;
// to replace by "" instead, then use the following action:
//csv.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;
int fieldCount = csv.FieldCount;
string[] headers = csv.GetFieldHeaders();
while (csv.ReadNextRecord())
{
for (int i = 0; i < fieldCount; i++)
Console.Write(string.Format("{0} = {1};",
headers[i],
csv[i] == null ? "MISSING" : csv[i]));
Console.WriteLine();
}
}
The CSV file has 7 headers for which I have 7 columns in my SQL table.
What is the best way to take each csv[i] and write it to a row for each column, and then move to the next row?
I tried to add the csv[i] values to a string array but that didn't work.
I also tried the following:
SqlCommand sql = new SqlCommand("INSERT INTO table1 [" + csv[i] + "]", mysqlconnectionstring);
sql.ExecuteNonQuery();
My table (table1) is like this:
name address city zipcode phone fax device
Your problem is simple, but I will take it one step further and show you a better way to approach the issue.
When you have a problem to solve, always break it down into parts and put each part in its own method. For example, in your case:
1 - read from the file
2 - create a sql query
3 - run the query
You can even add validation for the file (imagine your file does not even have 7 fields in one or more lines...). Also, take the example below only if your file never grows past around 500 lines; if it regularly does, you should consider a SQL statement that takes your file directly into the database. It's called bulk insert, sketched just below.
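For completeness, a rough sketch of that bulk-insert route, under the assumptions that SQL Server can reach the file path, the CSV columns match table1 exactly, and FIRSTROW = 2 skips a header row (connectionString is a placeholder):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var bulk = new SqlCommand(
        @"BULK INSERT table1
          FROM 'C:\data\data.csv'
          WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);", connection);
    bulk.ExecuteNonQuery(); // one server-side statement loads the whole file
}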
1 - read from file:
I would use a List<string[]> to hold the split line entries, and I always use StreamReader to read from text files.
using (StreamReader sr = File.OpenText(this.CsvPath))
{
while ((line = sr.ReadLine()) != null)
{
splittedLine = line.Split(new string[] { this.Separator }, StringSplitOptions.None);
if (iLine == 0 && this.HasHeader)
// header line
this.Header = splittedLine;
else
this.Lines.Add(splittedLine);
iLine++;
}
}
2 - generate the sql
foreach (var line in this.Lines)
{
string entries = string.Concat("'", string.Join("','", line), "'"); // wrap every field in single quotes
this.Query.Add(string.Format(this.LineTemplate, entries));
}
3 - run the query
SqlCommand sql = new SqlCommand(string.Join("", query), mysqlconnectionstring);
sql.ExecuteNonQuery();
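One caveat with the concatenation approach: building SQL from raw field values breaks on values containing quotes and is open to SQL injection. A safer per-row sketch, assuming mysqlconnectionstring is an open SqlConnection as in the question and using the 7 columns from your table:
const string insertTemplate =
    "INSERT INTO table1 (name, address, city, zipcode, phone, fax, device) " +
    "VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)";
foreach (var line in this.Lines)
{
    using (var command = new SqlCommand(insertTemplate, mysqlconnectionstring))
    {
        for (int i = 0; i < line.Length; i++)
            command.Parameters.AddWithValue("@p" + i, line[i]); // one parameter per CSV field
        command.ExecuteNonQuery();
    }
}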
Having some fun, I ended up writing the whole solution; the code can be found here (written in C#, VS 2013). It needs more tweaks, but I will leave that for others.
The ExtractCsvIntoSql class is as follows:
public class ExtractCsvIntoSql
{
private string CsvPath, Separator;
private bool HasHeader;
private List<string[]> Lines;
private List<string> Query;
/// <summary>
/// Header content of the CSV File
/// </summary>
public string[] Header { get; private set; }
/// <summary>
/// Template to be used in each INSERT Query statement
/// </summary>
public string LineTemplate { get; set; }
public ExtractCsvIntoSql(string csvPath, string separator, bool hasHeader = false)
{
this.CsvPath = csvPath;
this.Separator = separator;
this.HasHeader = hasHeader;
this.Lines = new List<string[]>();
this.Query = new List<string>(); // must be initialised, otherwise GenerateQuery() throws
// you can also set this
this.LineTemplate = "INSERT INTO [table1] VALUES ({0});";
}
/// <summary>
/// Generates the SQL Query
/// </summary>
/// <returns></returns>
public List<string> Generate()
{
if(this.CsvPath == null)
throw new ArgumentException("CSV Path can't be empty");
// extract csv into object
Extract();
// generate sql query
GenerateQuery();
return this.Query;
}
private void Extract()
{
string line;
string[] splittedLine;
int iLine = 0;
try
{
using (StreamReader sr = File.OpenText(this.CsvPath))
{
while ((line = sr.ReadLine()) != null)
{
splittedLine = line.Split(new string[] { this.Separator }, StringSplitOptions.None);
if (iLine == 0 && this.HasHeader)
// header line
this.Header = splittedLine;
else
this.Lines.Add(splittedLine);
iLine++;
}
}
}
catch (Exception ex)
{
if(ex.InnerException != null)
while (ex.InnerException != null)
ex = ex.InnerException;
throw ex;
}
// Lines will have all rows and each row, the column entry
}
private void GenerateQuery()
{
foreach (var line in this.Lines)
{
string entries = string.Concat("'", string.Join("','", line), "'"); // wrap every field in single quotes
this.Query.Add(string.Format(this.LineTemplate, entries));
}
}
}
and you can run it as:
class Program
{
static void Main(string[] args)
{
string file = Ask("What is the CSV file path? (full path)");
string separator = Ask("What is the current separator? (; or ,)");
var extract = new ExtractCsvIntoSql(file, separator);
var sql = extract.Generate();
Output(sql);
}
private static void Output(IEnumerable<string> sql)
{
foreach(var query in sql)
Console.WriteLine(query);
Console.WriteLine("*******************************************");
Console.Write("END ");
Console.ReadLine();
}
private static string Ask(string question)
{
Console.WriteLine("*******************************************");
Console.WriteLine(question);
Console.Write("= ");
return Console.ReadLine();
}
}
Usually I like to be a bit more generic, so I'll try to explain a very basic flow I use from time to time:
I don't like the hard-coded approach, so even though your code will work, it will be dedicated to one specific type. I prefer simple reflection: first to understand which DTO it is, and then to decide which repository should be used to manipulate it:
For example:
public class ImportProvider
{
private readonly string _path;
private readonly ObjectResolver _objectResolver;
public ImportProvider(string path)
{
_path = path;
_objectResolver = new ObjectResolver();
}
public void Import()
{
var filePaths = Directory.GetFiles(_path, "*.csv");
foreach (var filePath in filePaths)
{
var fileName = Path.GetFileName(filePath);
var className = fileName.Remove(fileName.Length-4);
using (var reader = new CsvFileReader(filePath))
{
var row = new CsvRow();
var repository = (DaoBase)_objectResolver.Resolve("DAL.Repository", className + "Dao");
while (reader.ReadRow(row))
{
var dtoInstance = (DtoBase)_objectResolver.Resolve("DAL.DTO", className + "Dto");
dtoInstance.FillInstance(row.ToArray());
repository.Save(dtoInstance);
}
}
}
}
}
Above is a very basic class responsible for importing the data. Regardless of how this piece of code parses CSV files (CsvFileReader), the important part is that a CsvRow is essentially a simple List<string>.
Below is the implementation of the ObjectResolver:
public class ObjectResolver
{
private readonly Assembly _myDal;
public ObjectResolver()
{
_myDal = Assembly.Load("DAL");
}
public object Resolve(string nameSpace, string name)
{
var myLoadClass = _myDal.GetType(nameSpace + "." + name);
return Activator.CreateInstance(myLoadClass);
}
}
The idea is to simply follow a naming convention; in my case that means a "Dto" suffix for reflecting the instances, and a "Dao" suffix for reflecting the responsible DAO. The full name of the Dto or the Dao can be taken from the CSV name or from the header (as you wish).
The next step is filling the Dto. Each Dto implements the following simple abstract class:
public abstract class DtoBase
{
public abstract void FillInstance(params string[] parameters);
}
Since each Dto "knows" his structure (just like you knew to create an appropriate table in the database), it can easily implement the FillInstanceMethod, here is a simple Dto example:
public class ProductDto : DtoBase
{
public int ProductId { get; set; }
public double Weight { get; set; }
public int FamilyId { get; set; }
public override void FillInstance(params string[] parameters)
{
ProductId = int.Parse(parameters[0]);
Weight = double.Parse(parameters[1]);
FamilyId = int.Parse(parameters[2]);
}
}
After you have your Dto filled with data, you should find the appropriate Dao to handle it, which is basically what happens via reflection in this line of the Import() method:
var repository = (DaoBase)_objectResolver.Resolve("DAL.Repository", className + "Dao");
In my case the Dao implements an abstract base class, but that's not really relevant to your problem: your DaoBase can be a simple abstract class with a single Save() method, as sketched below.
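A minimal sketch of that base class, with the caveat that the GetDbCommand/CreateParameter/ExecuteNonQuery helpers used by ProductDao below are assumptions inferred from the usage (in my code they wrap the ADO.NET provider):
public abstract class DaoBase
{
    // Each concrete Dao knows how to persist its own Dto type
    public abstract void Save(DtoBase dto);
    // Assumed provider-wrapping helpers, signatures inferred from the usage below
    protected abstract IDbCommand GetDbCommand(string query);
    protected abstract IDataParameter CreateParameter(string name, object value);
    protected abstract void ExecuteNonQuery(ref IDbCommand command);
}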
This way you have a dedicated Dao to CRUD your Dto's - each Dao simply knows how to save for its relevant Dto. Below is the corresponding ProductDao to the ProductDto:
public class ProductDao : DaoBase
{
private const string InsertProductQuery = @"SET foreign_key_checks = 0;
Insert into product (productID, weight, familyID)
VALUES (@productId, @weight, @familyId);
SET foreign_key_checks = 1;";
public override void Save(DtoBase dto)
{
var productToSave = dto as ProductDto;
var saveproductCommand = GetDbCommand(InsertProductQuery);
if (productToSave != null)
{
saveproductCommand.Parameters.Add(CreateParameter("#productId", productToSave.ProductId));
saveproductCommand.Parameters.Add(CreateParameter("#weight", productToSave.Weight));
saveproductCommand.Parameters.Add(CreateParameter("#familyId", productToSave.FamilyId));
ExecuteNonQuery(ref saveproductCommand);
}
}
}
Please ignore the CreateParameter() method, since it's an abstraction from the base class; you can just use CreateSqlParameter or CreateDataParameter etc.
Just notice it's a really naive implementation; you can easily remodel it to fit your needs.
From the first impression of your question, I guess you will have a huge number of records (hundreds of thousands or more). If so, I would consider SQL bulk copy an option. If there will be fewer records, go ahead with single-record inserts. The reason your INSERT is not working is that you are not providing all the columns of the table, and there is also a syntax error.
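A minimal sketch of the SqlBulkCopy route; the DataTable layout and connectionString are assumptions, and the column order must line up with table1:
var table = new DataTable();
foreach (var column in new[] { "name", "address", "city", "zipcode", "phone", "fax", "device" })
    table.Columns.Add(column); // string columns matching table1
// ...fill one DataRow per CSV record here, e.g. table.Rows.Add(csvFields)...
using (var bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "table1";
    bulkCopy.WriteToServer(table); // streams all rows to the server efficiently
}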