I'm very new to .NET task parallelism. The objective is to walk a tree structure in which each branch is composed of one parent node, one child node, and one operation node (like a weight), and, for each node, to create an extension object and save it to the database. I followed a possible duplicate conversation, but the tree is not walked through completely: the process exits early, unexpectedly. Following is my code:
public void InitializeScheduleVariables_Parallel(IResource ANode, double aNumRequired, double aBatchRequired, double aAcceptProbability, AppContext aAppContext, bool ARecursively = true)
{
var LTasks = new List<Task>();
var LUser = aAppContext.LocalContext.User;
LTasks.Add(Task.Factory.StartNew(() =>
{
var LNewContext = new AppContext(new DbContext(new Context(LUser)));
var LNewRep = new ResourceRepository(LNewContext);
ANode = LNewRep.Get(ANode.Id);
ANode.ResourceInstance_Create(); // Create the ResourceInstance on the Resource if it does not already exist.
ANode.ResourceInstance.Required = aNumRequired;
ANode.ResourceInstance.ScheduleSource = ResourceInstance.ScheduleSourceEnum.Undefined;
ANode.ResourceInstance.ScheduleState = ResourceInstance.ScheduleStateEnum.Unscheduled;
ANode.ResourceInstance.ScheduleMode = ResourceInstance.ScheduleModeEnum.Undefined;
ANode.ResourceInstance.BatchRequired = aBatchRequired;
ANode.ResourceInstance.ProbabilityOfCompletion = aAcceptProbability;
ANode.ResourceInstance.Save();
}));
if (ARecursively)
{
foreach (AssemblyLink LAssembly in ANode.GetOutEdges())
{
LTasks.Add(Task.Factory.StartNew(() =>
{
// SET The Variables for the Production Operations AS WELL
IOperationResource LOperation = LAssembly.Operation;
if (LOperation != null)
{
var LNewContext = new AppContext(new DbContext(new Context(LUser)));
var LNewRep = new OperationResourceRepository(LNewContext);
LOperation = LNewRep.Get(LOperation.Id);
LOperation.ResourceInstance_Create(); // Create the ResourceInstance on the Resource if it does not already exist.
LOperation.ResourceInstance.Required = aNumRequired / LAssembly.OutputQuantity;
LOperation.ResourceInstance.BatchRequired = aBatchRequired / LAssembly.OutputQuantity;
LOperation.ResourceInstance.ScheduleSource = ResourceInstance.ScheduleSourceEnum.Undefined;
LOperation.ResourceInstance.ScheduleState = ResourceInstance.ScheduleStateEnum.Unscheduled;
LOperation.ResourceInstance.ScheduleMode = ResourceInstance.ScheduleModeEnum.Undefined;
LOperation.ResourceInstance.ProbabilityOfCompletion = aAcceptProbability;
LOperation.ResourceInstance.Save();
}
}));
LTasks.Add(Task.Factory.StartNew(() =>
{
// Recursively SET Child NODES
IResource LChildNode = LAssembly.Child;
double LNumRequired_Child = aNumRequired * LAssembly.InputQuantity / LAssembly.OutputQuantity;
double LNumBatchRequired_Child = LChildNode.Quantity * LAssembly.InputQuantity / LAssembly.OutputQuantity;
InitializeScheduleVariables_Parallel(LChildNode, LNumRequired_Child, LNumBatchRequired_Child, aAcceptProbability, aAppContext, ARecursively);
}));
}
}
Task.WaitAll(LTasks.ToArray());
}
Could anyone share some thoughts? Thank you.
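One classic cause of an incompletely walked tree with this pattern, if the code targets C# 4 (where the foreach loop variable was a single variable shared by all iterations), is that every task captures the same LAssembly, so some edges are processed several times and others never. Note also that the first task reassigns ANode while the main thread is still enumerating ANode.GetOutEdges(), which is a race. A minimal sketch of the defensive fix, copying the loop variable into a per-iteration local before capturing it (LCurrent is a name introduced here for illustration):
foreach (AssemblyLink LAssembly in ANode.GetOutEdges())
{
    var LCurrent = LAssembly; // per-iteration copy; safe to capture in the lambdas below
    LTasks.Add(Task.Factory.StartNew(() =>
    {
        IOperationResource LOperation = LCurrent.Operation;
        // ... initialize LOperation.ResourceInstance exactly as in the original ...
    }));
    LTasks.Add(Task.Factory.StartNew(() =>
    {
        IResource LChildNode = LCurrent.Child;
        double LNumRequired_Child = aNumRequired * LCurrent.InputQuantity / LCurrent.OutputQuantity;
        double LNumBatchRequired_Child = LChildNode.Quantity * LCurrent.InputQuantity / LCurrent.OutputQuantity;
        InitializeScheduleVariables_Parallel(LChildNode, LNumRequired_Child, LNumBatchRequired_Child, aAcceptProbability, aAppContext, ARecursively);
    }));
}
It is also worth wrapping the wait in a try/catch, since a single faulted task surfaces as an AggregateException and can make the walk appear to stop early:
try
{
    Task.WaitAll(LTasks.ToArray());
}
catch (AggregateException ex)
{
    foreach (var inner in ex.Flatten().InnerExceptions)
        Console.Error.WriteLine(inner); // or your logger of choice
}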
I have difficulty understanding this example of how to use facets:
https://lucenenet.apache.org/docs/4.8.0-beta00008/api/Lucene.Net.Demo/Lucene.Net.Demo.Facet.SimpleFacetsExample.html
My goal is to create an index in which each document field has a facet, so that at search time I can choose which facets to use to navigate the data.
What I am confused about is the setup of facets at index-creation time. To summarize my question: is an index with facets compatible with ReferenceManager?
Does the DirectoryTaxonomyWriter need to be actually written and persisted on disk, or is it embedded into the index itself and just temporary? Given the line indexWriter.AddDocument(config.Build(taxoWriter, doc)); from the example, I would expect it to be temporary and embedded into the index (but then the example also shows you need the taxonomy to drill down into facets). So can the taxonomy be tied to the index in some way so that the two are handled altogether by ReferenceManager?
If it is not, may I just use the same folder I use for storing the index?
Here is a more detailed list of points that confuse me:
In my scenario I am indexing documents asynchronously (in a background process) and then fetching the index as soon as possible through ReferenceManager in an ASP.NET application. I hope this way of fetching the index is compatible with the DirectoryTaxonomyWriter needed by facets.
I then modified my code, introducing the taxonomy writer as indicated in the example, but I am a bit confused: it seems I can't store the DirectoryTaxonomyWriter in the same folder as the index, because the folder is locked. Do I need to persist it, or will it be embedded into the index (so that a RAMDirectory is enough)? If I need to persist it in a different directory, can I safely persist it into a subdirectory?
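From the example, my current understanding of the layout is the following sketch (which I put together myself and have not verified): the taxonomy gets its own directory next to the index, and every document is passed through FacetsConfig.Build before AddDocument:
using (var indexDir = FSDirectory.Open(targetDirectory))
using (var taxoDir = FSDirectory.Open(Path.Combine(targetDirectory, "taxonomy")))
using (var writer = new IndexWriter(indexDir, new IndexWriterConfig(LuceneVersion.LUCENE_48, new StandardAnalyzer(LuceneVersion.LUCENE_48))))
using (var taxoWriter = new DirectoryTaxonomyWriter(taxoDir))
{
    var facetsConfig = new FacetsConfig();
    var doc = new Document();
    doc.Add(new StringField("category", "books", Field.Store.YES));
    doc.Add(new FacetField("category", "books"));
    // Build() rewrites the FacetFields into ordinary index fields plus taxonomy entries;
    // adding the raw doc directly would silently drop the facets
    writer.AddDocument(facetsConfig.Build(taxoWriter, doc));
    writer.Commit();
    taxoWriter.Commit();
}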
Here is the code I am actually using:
private static void BuildIndex (IndexEntry entry)
{
string targetFolder = ConfigurationManager.AppSettings["IndexFolder"] ?? string.Empty;
//** LOG
if (System.IO.Directory.Exists(targetFolder) == false)
{
string message = #"Index folder not found";
_fileLogger.Error(message);
_consoleLogger.Error(message);
return;
}
var metadata = JsonConvert.DeserializeObject<IndexMetadata>(File.ReadAllText(entry.MetdataPath) ?? "{}");
string[] header = new string[0];
List<dynamic> csvRecords = new List<dynamic>();
using (var reader = new StreamReader(entry.DataPath))
{
CsvConfiguration csvConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture);
csvConfiguration.AllowComments = false;
csvConfiguration.CountBytes = false;
csvConfiguration.Delimiter = ",";
csvConfiguration.DetectColumnCountChanges = false;
csvConfiguration.Encoding = Encoding.UTF8;
csvConfiguration.HasHeaderRecord = true;
csvConfiguration.IgnoreBlankLines = true;
csvConfiguration.HeaderValidated = null;
csvConfiguration.MissingFieldFound = null;
csvConfiguration.TrimOptions = CsvHelper.Configuration.TrimOptions.None;
csvConfiguration.BadDataFound = null;
using (var csvReader = new CsvReader(reader, csvConfiguration))
{
csvReader.Read();
csvReader.ReadHeader();
csvReader.Read();
header = csvReader.HeaderRecord;
csvRecords = csvReader.GetRecords<dynamic>().ToList();
}
}
string targetDirectory = Path.Combine(targetFolder, "Index__" + metadata.Boundle + "__" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + "__" + Path.GetRandomFileName().Substring(0, 6));
System.IO.Directory.CreateDirectory(targetDirectory);
//** LOG
{
string message = #"..creating index : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
using (var dir = FSDirectory.Open(targetDirectory))
{
using (DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(dir))
{
Analyzer analyzer = metadata.GetAnalyzer();
var indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
using (IndexWriter writer = new IndexWriter(dir, indexConfig))
{
long entryNumber = csvRecords.Count();
long index = 0;
long lastPercentage = 0;
foreach (dynamic csvEntry in csvRecords)
{
Document doc = new Document();
IDictionary<string, object> dynamicCsvEntry = (IDictionary<string, object>)csvEntry;
var indexedMetadataFiled = metadata.IdexedFields;
foreach (string headField in header)
{
if (indexedMetadataFiled.ContainsKey(headField) == false || (indexedMetadataFiled[headField].NeedToBeIndexed == false && indexedMetadataFiled[headField].NeedToBeStored == false))
continue;
var field = new Field(headField,
((string)dynamicCsvEntry[headField] ?? string.Empty).ToLower(),
indexedMetadataFiled[headField].NeedToBeStored ? Field.Store.YES : Field.Store.NO,
indexedMetadataFiled[headField].NeedToBeIndexed ? Field.Index.ANALYZED : Field.Index.NO
);
doc.Add(field);
var facetField = new FacetField(headField, (string)dynamicCsvEntry[headField]);
doc.Add(facetField);
}
long percentage = (long)(((decimal)index / (decimal)entryNumber) * 100m);
if (percentage > lastPercentage && percentage % 10 == 0)
{
_consoleLogger.Information($"..indexing {percentage}%..");
lastPercentage = percentage;
}
writer.AddDocument(doc);
index++;
}
writer.Commit();
}
}
}
//** LOG
{
string message = #"Index Created : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
}
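On the search side, what I plan to try is SearcherTaxonomyManager, which, if I read the docs correctly, is the ReferenceManager implementation that reopens the IndexSearcher and the TaxonomyReader together. A sketch of my understanding, not verified (indexDir, taxoDir, facetsConfig and query stand in for the real objects):
var mgr = new SearcherTaxonomyManager(indexDir, taxoDir, new SearcherFactory());
// after the background process commits, mgr.MaybeRefresh() should pick up the changes
var sat = mgr.Acquire();
try
{
    var fc = new FacetsCollector();
    FacetsCollector.Search(sat.Searcher, query, 10, fc);
    var facets = new FastTaxonomyFacetCounts(sat.TaxonomyReader, facetsConfig, fc);
    var topCategories = facets.GetTopChildren(10, "category");
}
finally
{
    mgr.Release(sat);
}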
Any help will be appreciated :) Thank you in advance.
I tried to loop over other objects inside the function and it works, but on this one it can't loop. Help, this is a rush and I'm not that familiar with creating iOS apps.
public override void ViewDidLoad()
{
base.ViewDidLoad();
using (var web = new WebClient())
{
var url = "http://www.creativeinterlace.com/smitten/maintenance/api/feeds/get-miss-location/101";
json = web.DownloadString(url);
}
json = json.Replace("{\"location\":", "").Replace("}]}", "}]");
var ls = JArray.Parse(json);
if (ls.Count != 0)
{
foreach (var x in ls)
{
var name = x.SelectToken("location");
name1 = Convert.ToString(name);
var loc = x.SelectToken("address");
loc1 = Convert.ToString(loc);
var time = x.SelectToken("time_ago");
time1 = Convert.ToString(time);
locations = new List<Locations>
{
new Locations
{
shopname = name1,
address= loc1,
time = time1
},
};
}
nmtable.Source = new LocationSource(locations);
nmtable.RowHeight = 60;
nmtable.ReloadData();
}
}
You initialize locations every time in the loop, so the list ends up holding only the newest object. You should initialize the list outside of the loop and add an object on each iteration:
locations = new List<Locations>();
if (ls.Count != 0)
{
foreach (var x in ls)
{
var name = x.SelectToken("location");
name1 = Convert.ToString(name);
var loc = x.SelectToken("address");
loc1 = Convert.ToString(loc);
var time = x.SelectToken("time_ago");
time1 = Convert.ToString(time);
locations.Add(new Locations { shopname = name1, address = loc1, time = time1 });
}
}
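As a side note, once the list is created up front, the loop can also be collapsed with LINQ (a sketch using the same Json.NET JArray from the question; requires using System.Linq):
locations = ls.Select(x => new Locations
{
    shopname = (string)x.SelectToken("location"),
    address = (string)x.SelectToken("address"),
    time = (string)x.SelectToken("time_ago")
}).ToList();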
I have the following code to read a shapefile set (.dbf, .prj, .shp, .shx) with the NetTopologySuite.IO.ShapefileDataReader:
public FeatureCollection ReadShapeFile(string localShapeFile)
{
var collection = new FeatureCollection();
var factory = new GeometryFactory();
using (var reader = new ShapefileDataReader(localShapeFile, factory))
{
var header = reader.DbaseHeader;
while (reader.Read())
{
var f = new Feature {Geometry = reader.Geometry};
var attrs = new AttributesTable();
for (var i = 0; i < header.NumFields; i++)
{
attrs.AddAttribute(header.Fields[i].Name, reader.GetValue(i));
}
f.Attributes = attrs;
collection.Add(f);
}
}
return collection;
}
This works, but the geometry objects don't have a property to tell which reference system the coordinates are in.
How can I find out which coordinate system / reference system the shapefile, or the individual shapes, use?
The projection is not available in the .shp file, but in the .prj file, and can be loaded separately:
var projectionFile = Path.Combine(Path.GetDirectoryName(localShapeFile), Path.GetFileNameWithoutExtension(localShapeFile) + ".prj");
var projectionInfo = ProjectionInfo.Open(projectionFile);
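If you would rather avoid a dependency just for this (ProjectionInfo above comes from DotSpatial.Projections), note that the .prj file is plain text holding the WKT of the coordinate system, so it can also be read directly; a minimal sketch:
var prjPath = Path.ChangeExtension(localShapeFile, ".prj");
// e.g. a WGS84 shapefile's .prj typically starts with GEOGCS["GCS_WGS_1984", ...
string wkt = File.Exists(prjPath) ? File.ReadAllText(prjPath) : null;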
I have content items stored in Ektron that are assigned to taxonomies. I'm trying to create a method that will allow me to programmatically change the taxonomies. So far I find the content item by ID, and I'm able to retrieve its taxonomies, but I'm not sure how to change them.
var ektronItem = contentManager.GetItem((long) item.tctmd_id);
if (ektronItem != null) // item exists in Ektron
{
var newTaxonomies = item.taxonomy_ids;
var taxonomyAPI = new Taxonomy();
var taxData = taxonomyAPI.ReadAllAssignedCategory(ektronItem.Id);
foreach (var tax in taxData)
{
taxonomyAPI.RemoveTaxonomyItem(???);
// here I'm trying to remove the content item from the taxonomy
}
}
taxonomyAPI.RemoveTaxonomyItem() takes an Ektron.Cms.TaxonomyRequest object, but I'm not sure how to create this. I'm also not sure if this is even the method I should be using.
In case anyone else wants to know how to do this, here's the solution I came up with:
var contentManager = new Ektron.Cms.Framework.Content.ContentManager();
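// note: the ContentCriteria built below is never actually used by the GetItem call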
var criteria = new Ektron.Cms.Content.ContentCriteria(ContentProperty.Id, EkEnumeration.OrderByDirection.Ascending);
criteria.AddFilter(ContentProperty.FolderId, CriteriaFilterOperator.EqualTo, toUpdate.folder_id);
criteria.OrderByDirection = Ektron.Cms.Common.EkEnumeration.OrderByDirection.Descending;
criteria.OrderByField = Ektron.Cms.Common.ContentProperty.GoLiveDate;
criteria.FolderRecursive = true;
criteria.PagingInfo = new Ektron.Cms.PagingInfo(50, 1);
var ektronItem = contentManager.GetItem((long) item.tctmd_id);
if (ektronItem != null) // item exists in Ektron
{
// update taxonomy in Ektron
var taxIds = item.taxonomy_ids;
var taxonomyAPI = new Taxonomy();
var taxData = taxonomyAPI.ReadAllAssignedCategory(ektronItem.Id);
var taxManager = new Ektron.Cms.Framework.Organization.TaxonomyItemManager();
var taxCriteria = new TaxonomyItemCriteria();
// create a taxonomy criteria of the item ID
taxCriteria.AddFilter(TaxonomyItemProperty.ItemId, CriteriaFilterOperator.EqualTo, item.tctmd_id);
// get all taxonomy items with item ID
var taxItems = taxManager.GetList(taxCriteria);
// determine the taxonomyItemType (note: FirstOrDefault() returns null when no taxonomy items exist, so this assumes at least one existing assignment)
var type = taxItems.FirstOrDefault().ItemType;
foreach (var tax in taxData)
{
// delete from old taxonomies
taxManager.Delete(tax.Id, (long)item.tctmd_id, type);
}
foreach (var tax in taxIds)
{
// add to new taxonomies
var taxonomyItemData = new TaxonomyItemData()
{
TaxonomyId = tax,
ItemType = type,
ItemId = (long)item.tctmd_id
};
try
{
taxManager.Add(taxonomyItemData);
}
catch (Exception ex)
{
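// note: swallowing the exception here hides failed adds; consider logging ex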
}
}
}
I'm working with a tree structure of Installation Places: each one may contain child InstallationPlaces, and these can also contain children, and so on.
public JsonResult GetInstPlacesTree()
{
InstallationPlaceModel ipm = new InstallationPlaceModel();
var dataContext = ipm.getRootInstallationPlaces();
var instPlaces = from ip in dataContext.installationPlaces
select new
{
id = ip.installationPlace.id,
Name = ip.installationPlace.mediumDescription,
};
return Json(instPlaces, JsonRequestBehavior.AllowGet);
}
This function returns only the root level of the tree.
I have two working methods:
one returns the root Installation Places;
the other returns the children of a given Installation Place.
They both return IEnumerable results:
getRootInstallationPlaces();
getChildInstallationPlaces(id);
How can I retrieve all the Installation Places and their respective children?
I have tried this alternative to the GetInstPlacesTree() function:
private IEnumerable<TreeViewItemModel> GetDefaultInlineData()
{
InstallationPlaceModel ipm = new InstallationPlaceModel();
List<TreeViewItemModel> fullTree = new List<TreeViewItemModel>();
var gipo = ipm.getChildInstallationPlaces(currentInstallationPlace.InstallationPlaceId);
List<TreeViewItemModel> childTree = new List<TreeViewItemModel>();
if (gipo.installationPlaces.Count() > 0)
{
foreach (wsInstallationPlace.installationPlaceOutput child in gipo.installationPlaces)
{
TreeViewItemModel childTreeItem = new TreeViewItemModel
{
Text = child.installationPlace.mediumDescription,
Id = child.installationPlace.id
};
childTree.Add(childTreeItem);
}
}
TreeViewItemModel fatherTreeItem = new TreeViewItemModel
{
Text = currentInstallationPlace.InstallationPlaceMediumDescription,
Id = currentInstallationPlace.InstallationPlaceId,
Items = childTree
};
fullTree.Add(fatherTreeItem);
return fullTree;
}
Any help?
I think something like the following should do what you are after. Essentially it keeps your initial method almost as-is, but it populates the child Items of each top-level item with a recursive call.
The recursive call grabs the children and adds each child to a List<TreeViewItemModel> to be returned, but their children are in turn populated by a call to the recursive function. The recursion ends when there are no children left:
public JsonResult GetInstPlacesTree()
{
InstallationPlaceModel ipm = new InstallationPlaceModel();
var dataContext = ipm.getRootInstallationPlaces();
var instPlaces = from ip in dataContext.installationPlaces
select new TreeViewItemModel
{
Id = ip.installationPlace.id,
Text = ip.installationPlace.mediumDescription,
Items = getChildInstallationPlacesRecursive(ip.installationPlace.id, ipm)
};
return Json(instPlaces, JsonRequestBehavior.AllowGet);
}
public List<TreeViewItemModel> getChildInstallationPlacesRecursive(int id, InstallationPlaceModel ipm)
{
List<TreeViewItemModel> children = new List<TreeViewItemModel>();
var gipo = ipm.getChildInstallationPlaces(id);
foreach (wsInstallationPlace.installationPlaceOutput child in gipo.installationPlaces)
{
children.Add(new TreeViewItemModel
{
Text = child.installationPlace.mediumDescription,
Id = child.installationPlace.id,
Items = getChildInstallationPlacesRecursive(child.installationPlace.id, ipm)
});
}
return children;
}
To make it recursive, you have to realize that child places are, at the same time, roots of their own children; then you can call the same function for them.
private IEnumerable<TreeViewItemModel> RecursivePlaces(InstallationPlace root){
var output = new List<TreeViewItemModel>();
output.Add(new TreeViewItemModel
{
Text = root.installationPlace.mediumDescription,
Id = root.installationPlace.id
});
foreach(var child in root.installationPlaces)
output.AddRange(RecursivePlaces(child));
return output;
}
// Initial call, once per root place:
var tree = new List<TreeViewItemModel>();
foreach (var root in ipm.getRootInstallationPlaces().installationPlaces)
    tree.AddRange(RecursivePlaces(root));
You can solve this without recursion with the following approach. The code below is a sketch so you get the idea of what I suggest; I didn't use your exact function names, structures, and classes (GetData() and GetChildren() are placeholders):
var queue = new Queue<InstallationPlace>();
queue.Enqueue(initialInstallation);
var retVal = new List<TreeViewItemModel>();
while (queue.Count > 0)
{
    var current = queue.Dequeue();
    retVal.Add(current.GetData());    // map the current place to its view model
    foreach (var child in current.GetChildren())
        queue.Enqueue(child);         // children are visited in later iterations
}
return retVal;
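Note that this walk produces a flat list. If you need the nested Items hierarchy as in the recursive answers, the queue can carry the parent view model along with each place; a sketch with the same placeholder names (GetRootPlaces(), GetData() and GetChildren() are placeholders, and TreeViewItemModel.Items is assumed to be an initialized list):
var roots = new List<TreeViewItemModel>();
var queue = new Queue<Tuple<InstallationPlace, TreeViewItemModel>>();
foreach (var rootPlace in GetRootPlaces())
{
    var node = rootPlace.GetData();
    roots.Add(node);
    queue.Enqueue(Tuple.Create(rootPlace, node));
}
while (queue.Count > 0)
{
    var pair = queue.Dequeue();
    foreach (var child in pair.Item1.GetChildren())
    {
        var childNode = child.GetData();
        pair.Item2.Items.Add(childNode); // attach under the parent, preserving the hierarchy
        queue.Enqueue(Tuple.Create(child, childNode));
    }
}
return roots;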