I have the following code to read a shapefile set (.dbf, .prj, .shp, .shx) with the NetTopologySuite.IO.ShapefileDataReader:
public FeatureCollection ReadShapeFile(string localShapeFile)
{
    var collection = new FeatureCollection();
    var factory = new GeometryFactory();
    using (var reader = new ShapefileDataReader(localShapeFile, factory))
    {
        var header = reader.DbaseHeader;
        while (reader.Read())
        {
            var f = new Feature { Geometry = reader.Geometry };
            var attrs = new AttributesTable();
            for (var i = 0; i < header.NumFields; i++)
            {
                attrs.AddAttribute(header.Fields[i].Name, reader.GetValue(i));
            }
            f.Attributes = attrs;
            collection.Add(f);
        }
    }
    return collection;
}
This works, but the geometry objects don't have a property to tell which reference system the coordinates are in.
How can I find out which coordinate system / reference system the shape file or individual shapes are in?
The projection is not stored in the .shp file itself but in the accompanying .prj file, and it can be loaded separately:
var projectionFile = Path.Combine(Path.GetDirectoryName(localShapeFile), Path.GetFileNameWithoutExtension(localShapeFile) + ".prj");
var projectionInfo = ProjectionInfo.Open(projectionFile);
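ProjectionInfo comes from the DotSpatial.Projections package. Once loaded, the projection can also drive a coordinate transformation. A minimal sketch, assuming DotSpatial.Projections is referenced (the WGS84 target and the someGeometry variable are purely illustrative):
// Minimal sketch: read the ESRI WKT from the .prj and reproject one point to WGS84.
var source = ProjectionInfo.FromEsriString(File.ReadAllText(projectionFile));
var target = KnownCoordinateSystems.Geographic.World.WGS1984;

// ReprojectPoints works in place on a flat { x1, y1, x2, y2, ... } array;
// someGeometry stands in for one of the geometries read above.
double[] xy = { someGeometry.Coordinate.X, someGeometry.Coordinate.Y };
Reproject.ReprojectPoints(xy, null, source, target, 0, 1);
// xy now holds the coordinates as longitude/latitude in WGS84.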
I am having difficulty understanding this example of how to use facets:
https://lucenenet.apache.org/docs/4.8.0-beta00008/api/Lucene.Net.Demo/Lucene.Net.Demo.Facet.SimpleFacetsExample.html
My goal is to create an index in which each document field has a facet, so that at search time I can choose which facets to use to navigate the data.
What I am confused about is the setup of facets at index-creation time. To summarize my question: is an index with facets compatible with ReferenceManager?
Does the DirectoryTaxonomyWriter actually need to be written and persisted on disk, or is it embedded into the index itself and just temporary? Given the line indexWriter.AddDocument(config.Build(taxoWriter, doc)); from the example, I would expect it to be temporary and embedded into the index (but then the example also shows that you need the taxonomy to drill down into facets). So can the taxonomy be tied to the index in some way so that the two are handled together by ReferenceManager?
If not, may I just use the same folder I use for storing the index?
Here is a more detailed list of points that confuse me:
In my scenario I index documents asynchronously (in a background process) and then fetch the index as soon as possible through ReferenceManager in an ASP.NET application. I hope this way of fetching the index is compatible with the DirectoryTaxonomyWriter needed by facets.
I then modified my code to introduce the taxonomy writer as indicated in the example, but I am a bit confused: it seems I can't store the DirectoryTaxonomyWriter in the same folder as the index because the folder is locked. Do I need to persist it, or is it embedded into the index (so that a RAMDirectory is enough)? If I do need to persist it in a different directory, can I safely use a subdirectory? (A sketch of the usual pattern follows below.)
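For reference, here is a minimal sketch of the usual pattern, assuming Lucene.Net 4.8 with the Lucene.Net.Facet package (paths and field names are illustrative). The taxonomy lives in its own Directory, and a subdirectory of the index folder is fine since each Lucene Directory carries its own lock; every document is routed through FacetsConfig.Build:
// Minimal sketch: main index and taxonomy in two sibling directories.
using (var indexDir = FSDirectory.Open(Path.Combine(targetDirectory, "index")))
using (var taxoDir = FSDirectory.Open(Path.Combine(targetDirectory, "taxonomy")))
using (var writer = new IndexWriter(indexDir, new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)))
using (var taxoWriter = new DirectoryTaxonomyWriter(taxoDir))
{
    var config = new FacetsConfig();
    var doc = new Document();
    doc.Add(new TextField("city", "rome", Field.Store.YES));
    doc.Add(new FacetField("city", "rome"));
    // Build() rewrites the facet fields and records their paths in the
    // taxonomy; the taxonomy is not embedded in the main index, so both
    // writers must be committed and both directories persisted.
    writer.AddDocument(config.Build(taxoWriter, doc));
    writer.Commit();
    taxoWriter.Commit();
}
// At search time, SearcherTaxonomyManager is the facet-aware counterpart of
// SearcherManager and refreshes the index and taxonomy together:
// var mgr = new SearcherTaxonomyManager(indexDir, taxoDir, null);
As far as I can tell, SearcherTaxonomyManager (itself a ReferenceManager) is the piece that answers the compatibility question, since it keeps the IndexSearcher and TaxonomyReader in sync.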
Here is the code I am actually using:
private static void BuildIndex(IndexEntry entry)
{
    string targetFolder = ConfigurationManager.AppSettings["IndexFolder"] ?? string.Empty;

    //** LOG
    if (System.IO.Directory.Exists(targetFolder) == false)
    {
        string message = @"Index folder not found";
        _fileLogger.Error(message);
        _consoleLogger.Error(message);
        return;
    }

    var metadata = JsonConvert.DeserializeObject<IndexMetadata>(File.ReadAllText(entry.MetdataPath) ?? "{}");

    string[] header = new string[0];
    List<dynamic> csvRecords = new List<dynamic>();
    using (var reader = new StreamReader(entry.DataPath))
    {
        CsvConfiguration csvConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture);
        csvConfiguration.AllowComments = false;
        csvConfiguration.CountBytes = false;
        csvConfiguration.Delimiter = ",";
        csvConfiguration.DetectColumnCountChanges = false;
        csvConfiguration.Encoding = Encoding.UTF8;
        csvConfiguration.HasHeaderRecord = true;
        csvConfiguration.IgnoreBlankLines = true;
        csvConfiguration.HeaderValidated = null;
        csvConfiguration.MissingFieldFound = null;
        csvConfiguration.TrimOptions = CsvHelper.Configuration.TrimOptions.None;
        csvConfiguration.BadDataFound = null;
        using (var csvReader = new CsvReader(reader, csvConfiguration))
        {
            csvReader.Read();
            csvReader.ReadHeader();
            csvReader.Read();
            header = csvReader.HeaderRecord;
            csvRecords = csvReader.GetRecords<dynamic>().ToList();
        }
    }

    string targetDirectory = Path.Combine(targetFolder, "Index__" + metadata.Boundle + "__" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + "__" + Path.GetRandomFileName().Substring(0, 6));
    System.IO.Directory.CreateDirectory(targetDirectory);

    //** LOG
    {
        string message = @"..creating index : {0}";
        _fileLogger.Information(message, targetDirectory);
        _consoleLogger.Information(message, targetDirectory);
    }

    using (var dir = FSDirectory.Open(targetDirectory))
    {
        // NOTE: the taxonomy writer and the index writer share the same
        // Directory here, which is what produces the lock conflict described above.
        using (DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(dir))
        {
            Analyzer analyzer = metadata.GetAnalyzer();
            var indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
            using (IndexWriter writer = new IndexWriter(dir, indexConfig))
            {
                long entryNumber = csvRecords.Count();
                long index = 0;
                long lastPercentage = 0;
                foreach (dynamic csvEntry in csvRecords)
                {
                    Document doc = new Document();
                    IDictionary<string, object> dynamicCsvEntry = (IDictionary<string, object>)csvEntry;
                    var indexedMetadataFiled = metadata.IdexedFields;
                    foreach (string headField in header)
                    {
                        if (indexedMetadataFiled.ContainsKey(headField) == false || (indexedMetadataFiled[headField].NeedToBeIndexed == false && indexedMetadataFiled[headField].NeedToBeStored == false))
                            continue;
                        var field = new Field(headField,
                            ((string)dynamicCsvEntry[headField] ?? string.Empty).ToLower(),
                            indexedMetadataFiled[headField].NeedToBeStored ? Field.Store.YES : Field.Store.NO,
                            indexedMetadataFiled[headField].NeedToBeIndexed ? Field.Index.ANALYZED : Field.Index.NO
                        );
                        doc.Add(field);
                        var facetField = new FacetField(headField, (string)dynamicCsvEntry[headField]);
                        doc.Add(facetField);
                    }
                    long percentage = (long)(((decimal)index / (decimal)entryNumber) * 100m);
                    if (percentage > lastPercentage && percentage % 10 == 0)
                    {
                        _consoleLogger.Information($"..indexing {percentage}%..");
                        lastPercentage = percentage;
                    }
                    // NOTE: in the cited example, documents carrying FacetField instances
                    // are added via writer.AddDocument(config.Build(taxoWriter, doc)).
                    writer.AddDocument(doc);
                    index++;
                }
                writer.Commit();
            }
        }
    }

    //** LOG
    {
        string message = @"Index Created : {0}";
        _fileLogger.Information(message, targetDirectory);
        _consoleLogger.Information(message, targetDirectory);
    }
}
Any help will be appreciated :) Thank you in advance.
I tried to loop over other objects inside the function and it works, but on this one it can't loop. Help. This is rushed, and I'm not that familiar with creating iOS apps.
public override void ViewDidLoad()
{
    base.ViewDidLoad();
    using (var web = new WebClient())
    {
        var url = "http://www.creativeinterlace.com/smitten/maintenance/api/feeds/get-miss-location/101";
        json = web.DownloadString(url);
    }
    json = json.Replace("{\"location\":", "").Replace("}]}", "}]");
    var ls = JArray.Parse(json);
    if (ls.Count != 0)
    {
        foreach (var x in ls)
        {
            var name = x.SelectToken("location");
            name1 = Convert.ToString(name);
            var loc = x.SelectToken("address");
            loc1 = Convert.ToString(loc);
            var time = x.SelectToken("time_ago");
            time1 = Convert.ToString(time);
            locations = new List<Locations>
            {
                new Locations
                {
                    shopname = name1,
                    address = loc1,
                    time = time1
                },
            };
        }
        nmtable.Source = new LocationSource(locations);
        nmtable.RowHeight = 60;
        nmtable.ReloadData();
    }
}
You initialize the locations list every time in the loop, so the list ends up holding only the newest object. You should initialize the list outside of the loop and add an object on each iteration:
locations = new List<Locations>();
if (ls.Count != 0)
{
    foreach (var x in ls)
    {
        var name = x.SelectToken("location");
        name1 = Convert.ToString(name);
        var loc = x.SelectToken("address");
        loc1 = Convert.ToString(loc);
        var time = x.SelectToken("time_ago");
        time1 = Convert.ToString(time);
        locations.Add(new Locations { shopname = name1, address = loc1, time = time1 });
    }
}
Very new to .NET task parallelism. The objective is to walk a tree structure in which each branch is composed of one parent node, one child node, and one operation node (like a weight), and, for each node, to create an extension object and save it to the database. I followed a possible duplicate conversation, but I observe that the tree is not walked through completely: the process exits early, unexpectedly. Following is my code:
public void InitializeScheduleVariables_Parallel(IResource ANode, double aNumRequired, double aBatchRequired, double aAcceptProbability, AppContext aAppContext, bool ARecursively = true)
{
    var LTasks = new List<Task>();
    var LUser = aAppContext.LocalContext.User;

    LTasks.Add(Task.Factory.StartNew(() =>
    {
        var LNewContext = new AppContext(new DbContext(new Context(LUser)));
        var LNewRep = new ResourceRepository(LNewContext);
        ANode = LNewRep.Get(ANode.Id);
        ANode.ResourceInstance_Create(); // Create the ResourceInstance on the Resource if it does not already exist.
        ANode.ResourceInstance.Required = aNumRequired;
        ANode.ResourceInstance.ScheduleSource = ResourceInstance.ScheduleSourceEnum.Undefined;
        ANode.ResourceInstance.ScheduleState = ResourceInstance.ScheduleStateEnum.Unscheduled;
        ANode.ResourceInstance.ScheduleMode = ResourceInstance.ScheduleModeEnum.Undefined;
        ANode.ResourceInstance.BatchRequired = aBatchRequired;
        ANode.ResourceInstance.ProbabilityOfCompletion = aAcceptProbability;
        ANode.ResourceInstance.Save();
    }));

    if (ARecursively)
    {
        foreach (AssemblyLink LAssembly in ANode.GetOutEdges())
        {
            LTasks.Add(Task.Factory.StartNew(() =>
            {
                // SET the variables for the production operations as well
                IOperationResource LOperation = LAssembly.Operation;
                if (LOperation != null)
                {
                    var LNewContext = new AppContext(new DbContext(new Context(LUser)));
                    var LNewRep = new OperationResourceRepository(LNewContext);
                    LOperation = LNewRep.Get(LOperation.Id);
                    LOperation.ResourceInstance_Create(); // Create the ResourceInstance on the Resource if it does not already exist.
                    LOperation.ResourceInstance.Required = aNumRequired / LAssembly.OutputQuantity;
                    LOperation.ResourceInstance.BatchRequired = aBatchRequired / LAssembly.OutputQuantity;
                    LOperation.ResourceInstance.ScheduleSource = ResourceInstance.ScheduleSourceEnum.Undefined;
                    LOperation.ResourceInstance.ScheduleState = ResourceInstance.ScheduleStateEnum.Unscheduled;
                    LOperation.ResourceInstance.ScheduleMode = ResourceInstance.ScheduleModeEnum.Undefined;
                    LOperation.ResourceInstance.ProbabilityOfCompletion = aAcceptProbability;
                    LOperation.ResourceInstance.Save();
                }
            }));

            LTasks.Add(Task.Factory.StartNew(() =>
            {
                // Recursively SET child nodes
                IResource LChildNode = LAssembly.Child;
                double LNumRequired_Child = aNumRequired * LAssembly.InputQuantity / LAssembly.OutputQuantity;
                double LNumBatchRequired_Child = LChildNode.Quantity * LAssembly.InputQuantity / LAssembly.OutputQuantity;
                InitializeScheduleVariables_Parallel(LChildNode, LNumRequired_Child, LNumBatchRequired_Child, aAcceptProbability, aAppContext, ARecursively);
            }));
        }
    }

    Task.WaitAll(LTasks.ToArray());
}
Could anyone share some thoughts? Thank you.
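One general diagnostic worth adding here (a sketch under the assumption that task failures are currently being swallowed somewhere up the stack): Task.WaitAll gathers the exceptions of all faulted tasks into an AggregateException, and if that exception is caught and ignored, the walk simply looks like it stopped early. Logging the flattened exceptions makes a failing branch visible:
try
{
    Task.WaitAll(LTasks.ToArray());
}
catch (AggregateException ex)
{
    // Task.WaitAll wraps every faulted task's exception; flatten the
    // hierarchy and log each one so a failing branch can be identified.
    foreach (var inner in ex.Flatten().InnerExceptions)
    {
        Console.Error.WriteLine(inner);
    }
    throw; // rethrow so the caller still observes the failure
}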
I have content items stored in Ektron that are assigned to taxonomies. I'm trying to create a method that will allow me to programmatically change the taxonomies. So far I find the content item by ID, and I'm able to retrieve its taxonomies, but I'm not sure how to change them.
var ektronItem = contentManager.GetItem((long)item.tctmd_id);
if (ektronItem != null) // item exists in Ektron
{
    var newTaxonomies = item.taxonomy_ids;
    var taxonomyAPI = new Taxonomy();
    var taxData = taxonomyAPI.ReadAllAssignedCategory(ektronItem.Id);
    foreach (var tax in taxData)
    {
        taxonomyAPI.RemoveTaxonomyItem(???);
        // here I'm trying to remove the content item from the taxonomy
    }
}
taxonomyAPI.RemoveTaxonomyItem() takes an Ektron.Cms.TaxonomyRequest object, but I'm not sure how to create this. I'm also not sure whether this is even the method I should be using.
In case anyone else wants to know how to do this, here's the solution I came up with:
var contentManager = new Ektron.Cms.Framework.Content.ContentManager();
var criteria = new Ektron.Cms.Content.ContentCriteria(ContentProperty.Id, EkEnumeration.OrderByDirection.Ascending);
criteria.AddFilter(ContentProperty.FolderId, CriteriaFilterOperator.EqualTo, toUpdate.folder_id);
criteria.OrderByDirection = Ektron.Cms.Common.EkEnumeration.OrderByDirection.Descending;
criteria.OrderByField = Ektron.Cms.Common.ContentProperty.GoLiveDate;
criteria.FolderRecursive = true;
criteria.PagingInfo = new Ektron.Cms.PagingInfo(50, 1);

var ektronItem = contentManager.GetItem((long)item.tctmd_id);
if (ektronItem != null) // item exists in Ektron
{
    // update taxonomy in Ektron
    var taxIds = item.taxonomy_ids;
    var taxonomyAPI = new Taxonomy();
    var taxData = taxonomyAPI.ReadAllAssignedCategory(ektronItem.Id);
    var taxManager = new Ektron.Cms.Framework.Organization.TaxonomyItemManager();

    // create a taxonomy criteria of the item ID
    var taxCriteria = new TaxonomyItemCriteria();
    taxCriteria.AddFilter(TaxonomyItemProperty.ItemId, CriteriaFilterOperator.EqualTo, item.tctmd_id);

    // get all taxonomy items with the item ID
    var taxItems = taxManager.GetList(taxCriteria);

    // determine the taxonomy item type
    var type = taxItems.FirstOrDefault().ItemType;

    foreach (var tax in taxData)
    {
        // delete from old taxonomies
        taxManager.Delete(tax.Id, (long)item.tctmd_id, type);
    }

    foreach (var tax in taxIds)
    {
        // add to new taxonomies
        var taxonomyItemData = new TaxonomyItemData()
        {
            TaxonomyId = tax,
            ItemType = type,
            ItemId = (long)item.tctmd_id
        };
        try
        {
            taxManager.Add(taxonomyItemData);
        }
        catch (Exception ex)
        {
            // swallowing the exception keeps the loop going when an
            // assignment already exists, but consider logging ex here
        }
    }
}
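For anyone comparing the two snippets: the working solution switches from the legacy Taxonomy business API to the Framework API's TaxonomyItemManager, which sidesteps building a TaxonomyRequest by hand, since Delete and Add take the taxonomy ID, item ID, and item type directly.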
I am new to MDX queries and I am very curious about MDX query generation using C#, so I searched for a demo or open-source project and found Ranet.OLAP (https://ranetuilibraryolap.codeplex.com/), which provides what I need.
After taking the DLLs I tried to incorporate them into my code. I am pasting my full console code, which should generate an MDX query but is not doing so. Am I doing something wrong?
using System;
using System.Collections.Generic;
using Microsoft.AnalysisServices.AdomdClient;
using Ranet.Olap.Core.Managers;
using Ranet.Olap.Core.Metadata;
using Ranet.Olap.Core.Types;

namespace MDX
{
    class Program
    {
        static void Main(string[] args)
        {
            startWork();
        }

        public static void startWork()
        {
            string connString = "Provider=MSOLAP.3; Data Source=localhost;Initial Catalog=AdventureWorkDW2008R2;Integrated Security=SSPI;";
            CubeDef cubes;
            AdomdConnection conn = new AdomdConnection(connString);
            conn.Open();
            cubes = conn.Cubes.Find("AdventureWorkCube");

            Ranet.Olap.Core.Managers.MdxQueryBuilder mdx = new Ranet.Olap.Core.Managers.MdxQueryBuilder();
            mdx.Cube = cubes.Caption;

            List<Ranet.Olap.Core.Wrappers.AreaItemWrapper> listColumn = new List<Ranet.Olap.Core.Wrappers.AreaItemWrapper>();
            List<Ranet.Olap.Core.Wrappers.AreaItemWrapper> listRow = new List<Ranet.Olap.Core.Wrappers.AreaItemWrapper>();
            List<Ranet.Olap.Core.Wrappers.AreaItemWrapper> listData = new List<Ranet.Olap.Core.Wrappers.AreaItemWrapper>();

            // Column area
            Dimension dmColumn = cubes.Dimensions.Find("Dim Product");
            Microsoft.AnalysisServices.AdomdClient.Hierarchy hColumn = dmColumn.Hierarchies["English Product Name"];

            // hierarchy properties
            List<PropertyInfo> lPropInfo = new List<PropertyInfo>();
            foreach (var prop in hColumn.Properties)
            {
                PropertyInfo p = new PropertyInfo();
                p.Name = prop.Name;
                p.Value = prop.Value;
                lPropInfo.Add(p);
            }
            Ranet.Olap.Core.Wrappers.AreaItemWrapper areaIColumn = new Ranet.Olap.Core.Wrappers.AreaItemWrapper();
            areaIColumn.AreaItemType = AreaItemWrapperType.Hierarchy_AreaItemWrapper;
            areaIColumn.Caption = hColumn.Caption;
            areaIColumn.CustomProperties = lPropInfo;
            listColumn.Add(areaIColumn);

            // Rows area
            Dimension dmRow = cubes.Dimensions.Find("Due Date");
            Microsoft.AnalysisServices.AdomdClient.Hierarchy hRow = dmRow.Hierarchies["English Month Name"];
            List<PropertyInfo> lRowPropInfo = new List<PropertyInfo>();
            foreach (var prop in hRow.Properties)
            {
                PropertyInfo p = new PropertyInfo(prop.Name, prop.Value);
                lRowPropInfo.Add(p);
            }
            Ranet.Olap.Core.Wrappers.AreaItemWrapper areaIRow = new Ranet.Olap.Core.Wrappers.AreaItemWrapper();
            areaIRow.AreaItemType = AreaItemWrapperType.Hierarchy_AreaItemWrapper;
            areaIRow.Caption = hRow.Caption;
            areaIRow.CustomProperties = lRowPropInfo;
            listRow.Add(areaIRow);

            // Measure (data) area
            Measure ms = cubes.Measures.Find("Order Quantity");
            Ranet.Olap.Core.Wrappers.AreaItemWrapper areaIData = new Ranet.Olap.Core.Wrappers.AreaItemWrapper();
            areaIData.AreaItemType = AreaItemWrapperType.Measure_AreaItemWrapper;
            areaIData.Caption = ms.Caption;
            List<PropertyInfo> lmpropInfo = new List<PropertyInfo>();
            foreach (var prop in ms.Properties)
            {
                PropertyInfo p = new PropertyInfo(prop.Name, prop.Value);
                lmpropInfo.Add(p);
            }
            areaIData.CustomProperties = lmpropInfo;
            listData.Add(areaIData);

            mdx.AreaWrappersColumns = listColumn;
            mdx.AreaWrappersRows = listRow;
            mdx.AreaWrappersData = listData;

            string mdxQuery = mdx.GenerateMdxQuery();
            conn.Close();
        }
    }
}
A simple example of MDX query generation (Ranet OLAP version 3.7 only):
using System.Collections.Generic;
using Ranet.Olap.Core.Data;
using Ranet.Olap.Core.Managers;
using Ranet.Olap.Core.Types;
using Ranet.Olap.Core.Wrappers;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            startWork();
        }

        public static void startWork()
        {
            var mdx = new QueryBuilderParameters
            {
                CubeName = "[Adventure Works]",
                SubCube = "",
                MdxDesignerSetting = new MDXDesignerSettingWrapper(),
                CalculatedMembers = new List<CalcMemberInfo>(),
                CalculatedNamedSets = new List<CalculatedNamedSetInfo>(),
                AreaWrappersFilter = new List<AreaItemWrapper>(),
                AreaWrappersColumns = new List<AreaItemWrapper>(),
                AreaWrappersRows = new List<AreaItemWrapper>(),
                AreaWrappersData = new List<AreaItemWrapper>()
            };

            // define parameters
            mdx.MdxDesignerSetting.HideEmptyColumns = false;
            mdx.MdxDesignerSetting.HideEmptyRows = false;
            mdx.MdxDesignerSetting.UseVisualTotals = false;
            mdx.MdxDesignerSetting.SubsetCount = 0;

            var itemCol1 = new Hierarchy_AreaItemWrapper
            {
                AreaItemType = AreaItemWrapperType.Hierarchy_AreaItemWrapper,
                UniqueName = "[Customer].[Customer Geography]"
            };
            mdx.AreaWrappersColumns.Add(itemCol1);

            var itemRow1 = new Hierarchy_AreaItemWrapper
            {
                AreaItemType = AreaItemWrapperType.Hierarchy_AreaItemWrapper,
                UniqueName = "[Date].[Calendar]"
            };
            mdx.AreaWrappersRows.Add(itemRow1);

            var itemData1 = new Measure_AreaItemWrapper();
            itemData1.AreaItemType = AreaItemWrapperType.Measure_AreaItemWrapper;
            itemData1.UniqueName = "[Measures].[Internet Order Count]";
            mdx.AreaWrappersData.Add(itemData1);

            string query = MdxQueryBuilder.Default.BuildQuery(mdx, null);
        }
    }
}
MDX Query result:
SELECT
HIERARCHIZE(HIERARCHIZE([Customer].[Customer Geography].Levels(0).Members)) DIMENSION PROPERTIES PARENT_UNIQUE_NAME, HIERARCHY_UNIQUE_NAME, CUSTOM_ROLLUP, UNARY_OPERATOR, KEY0 ON 0,
HIERARCHIZE(HIERARCHIZE([Date].[Calendar].Levels(0).Members)) DIMENSION PROPERTIES PARENT_UNIQUE_NAME, HIERARCHY_UNIQUE_NAME, CUSTOM_ROLLUP, UNARY_OPERATOR, KEY0 ON 1
FROM
[Adventure Works]
WHERE ([Measures].[Internet Order Count])
CELL PROPERTIES BACK_COLOR, CELL_ORDINAL, FORE_COLOR, FONT_NAME, FONT_SIZE, FONT_FLAGS, FORMAT_STRING, VALUE, FORMATTED_VALUE, UPDATEABLE, ACTION_TYPE
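For completeness, a query string produced this way can be executed with the plain ADOMD.NET client. A minimal sketch, reusing the open AdomdConnection from the question's code (the axis handling assumes the ON 0 / ON 1 layout shown above):
// Minimal sketch: execute the generated MDX and read the result as a CellSet.
using (var cmd = new AdomdCommand(query, conn))
{
    CellSet result = cmd.ExecuteCellSet();
    // Axis 0 carries the columns and axis 1 the rows, matching the
    // ON 0 / ON 1 clauses of the generated statement.
    foreach (Position row in result.Axes[1].Positions)
    {
        Console.WriteLine(row.Members[0].Caption);
    }
}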
I am still in the process of revising the code for this engine, but here are some suggestions for you:
It looks like you just grab cube metadata (dimensions, measures, etc.) and pass it to the generator. That is not enough to generate MDX. An MDX statement should look like:
select
{
    // measures, calculated members
} on 0,
{
    // dimension data - sets
} on 1 // possibly more axes
from <Cube>
All other parameters are optional.