Send data to Google Sheets with C#

Reading data from the Google Sheet works fine, but I can't manage to send data to that sheet.
HttpClient client = new HttpClient();
string url = "https://docs.google.com/.....";
var response = await client.GetAsync(url);
string result = await response.Content.ReadAsStringAsync();
string cadena = stringBetween(result, "\n\n", "\"");
cadena = Regex.Replace(cadena, @"\n", ",");
string[] words = cadena.Split(',');
List<User> listOfUsers = new List<User>();
for (int x = 4; x < 28; x = x + 4)
{
    listOfUsers.Add(new User() { Nombre = words[x], Correo = words[x + 1], Telefono = words[x + 2], Comentario = words[x + 3] });
}

Your code is sending data to listOfUsers, which is a collection not tied to the spreadsheet in any way. Here is a good article from C-Sharp Corner showcasing how to create/update a Google Sheets document:
https://www.c-sharpcorner.com/article/create-and-update-google-spreadsheet-via-google-api-net-library/
I would encourage you to read the full article so you understand which NuGet packages you need to communicate with Google Sheets. That said, the most relevant part is towards the bottom, where the author writes a method to update an existing sheet:
private static void UpdatGoogleSheetinBatch(IList<IList<Object>> values, string spreadsheetId, string newRange, SheetsService service)
{
    SpreadsheetsResource.ValuesResource.AppendRequest request =
        service.Spreadsheets.Values.Append(new ValueRange() { Values = values }, spreadsheetId, newRange);
    request.InsertDataOption =
        SpreadsheetsResource.ValuesResource.AppendRequest.InsertDataOptionEnum.INSERTROWS;
    request.ValueInputOption =
        SpreadsheetsResource.ValuesResource.AppendRequest.ValueInputOptionEnum.RAW;
    var response = request.Execute();
}
Notice how the method takes a list of lists of values as an argument. The values are appended to the spreadsheet in question, the insert option is configured to add the new data as rows, and ValueInputOption is set to RAW, meaning all values will be inserted without being parsed; the sheet is finally updated when Execute() is called on the last line.
You will want to take note of how the author generates their values: they build a list containing lists of objects, whereas you have a list of users.
private static IList<IList<Object>> GenerateData()
{
    List<IList<Object>> objNewRecords = new List<IList<Object>>();
    int maxrows = 5;
    for (var i = 1; i <= maxrows; i++)
    {
        IList<Object> obj = new List<Object>();
        obj.Add("Data row value - " + i + "A");
        obj.Add("Data row value - " + i + "B");
        obj.Add("Data row value - " + i + "C");
        objNewRecords.Add(obj);
    }
    return objNewRecords;
}
For what you are trying to do, I would modify it to something like this:
private static IList<IList<Object>> GenerateData(string[] words)
{
    List<IList<Object>> objNewRecords = new List<IList<Object>>();
    for (int x = 4; x < 28; x = x + 4)
    {
        IList<Object> obj = new List<Object>();
        // Nombre
        obj.Add(words[x]);
        // Correo
        obj.Add(words[x + 1]);
        // Telefono
        obj.Add(words[x + 2]);
        // Comentario
        obj.Add(words[x + 3]);
        objNewRecords.Add(obj);
    }
    return objNewRecords;
}
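Wiring the two pieces together, a minimal sketch (assuming service, spreadsheetId, and newRange are set up as in the article, and words is the array you already parse) could look like this:
// Build the rows from the parsed words and append them to the sheet.
// 'service', 'spreadsheetId' and 'newRange' are assumed to exist as in the article.
IList<IList<Object>> values = GenerateData(words);
UpdatGoogleSheetinBatch(values, spreadsheetId, newRange, service);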

Related

Lucene 4.8 facets usage

I have difficulties understanding this example on how to use facets:
https://lucenenet.apache.org/docs/4.8.0-beta00008/api/Lucene.Net.Demo/Lucene.Net.Demo.Facet.SimpleFacetsExample.html
My goal is to create an index in which each document field has a facet, so that at search time I can choose which facets to use to navigate the data.
What I am confused about is the setup of facets during index creation. To summarize my question: is an index with facets compatible with ReferenceManager?
Does the DirectoryTaxonomyWriter actually need to be written and persisted on disk, or is it embedded into the index itself and just temporary? Given the code indexWriter.AddDocument(config.Build(taxoWriter, doc)); from the example, I expect it is temporary and will be embedded into the index (but then the example also shows you need the taxonomy to drill down facets). So can the taxonomy be tangled in some way with the index so that they are handled altogether with ReferenceManager?
If not, may I just use the same folder I use for storing the index?
Here is a more detailed list of points that confuse me:
In my scenario I am indexing the documents asynchronously (in a background process) and then fetching the index ASAP through ReferenceManager in an ASP.NET application. I hope this way of fetching the index is compatible with the DirectoryTaxonomyWriter needed by facets.
I then modified my code, introducing the taxonomy writer as indicated in the example, but I am a bit confused: it seems like I can't store the DirectoryTaxonomyWriter in the same folder as the index because the folder is locked. Do I need to persist it, or will it be embedded into the index (so a RAMDirectory is enough)? If I need to persist it in a different directory, can I safely persist it into a subdirectory?
Here is the code I am actually using:
private static void BuildIndex (IndexEntry entry)
{
string targetFolder = ConfigurationManager.AppSettings["IndexFolder"] ?? string.Empty;
//** LOG
if (System.IO.Directory.Exists(targetFolder) == false)
{
string message = #"Index folder not found";
_fileLogger.Error(message);
_consoleLogger.Error(message);
return;
}
var metadata = JsonConvert.DeserializeObject<IndexMetadata>(File.ReadAllText(entry.MetdataPath) ?? "{}");
string[] header = new string[0];
List<dynamic> csvRecords = new List<dynamic>();
using (var reader = new StreamReader(entry.DataPath))
{
CsvConfiguration csvConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture);
csvConfiguration.AllowComments = false;
csvConfiguration.CountBytes = false;
csvConfiguration.Delimiter = ",";
csvConfiguration.DetectColumnCountChanges = false;
csvConfiguration.Encoding = Encoding.UTF8;
csvConfiguration.HasHeaderRecord = true;
csvConfiguration.IgnoreBlankLines = true;
csvConfiguration.HeaderValidated = null;
csvConfiguration.MissingFieldFound = null;
csvConfiguration.TrimOptions = CsvHelper.Configuration.TrimOptions.None;
csvConfiguration.BadDataFound = null;
using (var csvReader = new CsvReader(reader, csvConfiguration))
{
csvReader.Read();
csvReader.ReadHeader();
csvReader.Read();
header = csvReader.HeaderRecord;
csvRecords = csvReader.GetRecords<dynamic>().ToList();
}
}
string targetDirectory = Path.Combine(targetFolder, "Index__" + metadata.Boundle + "__" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + "__" + Path.GetRandomFileName().Substring(0, 6));
System.IO.Directory.CreateDirectory(targetDirectory);
//** LOG
{
string message = #"..creating index : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
using (var dir = FSDirectory.Open(targetDirectory))
{
using (DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(dir))
{
Analyzer analyzer = metadata.GetAnalyzer();
var indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
using (IndexWriter writer = new IndexWriter(dir, indexConfig))
{
long entryNumber = csvRecords.Count();
long index = 0;
long lastPercentage = 0;
foreach (dynamic csvEntry in csvRecords)
{
Document doc = new Document();
IDictionary<string, object> dynamicCsvEntry = (IDictionary<string, object>)csvEntry;
var indexedMetadataFiled = metadata.IdexedFields;
foreach (string headField in header)
{
if (indexedMetadataFiled.ContainsKey(headField) == false || (indexedMetadataFiled[headField].NeedToBeIndexed == false && indexedMetadataFiled[headField].NeedToBeStored == false))
continue;
var field = new Field(headField,
((string)dynamicCsvEntry[headField] ?? string.Empty).ToLower(),
indexedMetadataFiled[headField].NeedToBeStored ? Field.Store.YES : Field.Store.NO,
indexedMetadataFiled[headField].NeedToBeIndexed ? Field.Index.ANALYZED : Field.Index.NO
);
doc.Add(field);
var facetField = new FacetField(headField, (string)dynamicCsvEntry[headField]);
doc.Add(facetField);
}
long percentage = (long)(((decimal)index / (decimal)entryNumber) * 100m);
if (percentage > lastPercentage && percentage % 10 == 0)
{
_consoleLogger.Information($"..indexing {percentage}%..");
lastPercentage = percentage;
}
writer.AddDocument(doc);
index++;
}
writer.Commit();
}
}
}
//** LOG
{
string message = #"Index Created : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
}
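For the taxonomy question above, a minimal sketch of keeping the taxonomy in its own subdirectory, so the two writers do not fight over the same lock, might look like the following. The "taxonomy" subfolder name and the sample field values are assumptions; analyzer and targetDirectory are the variables from the code above, and a SearcherTaxonomyManager can then refresh the index and taxonomy readers together, much like ReferenceManager.
// Hedged sketch: persist the taxonomy in a subfolder next to the main index.
// The "taxonomy" folder name is hypothetical; any separate directory works.
using (var indexDir = FSDirectory.Open(targetDirectory))
using (var taxoDir = FSDirectory.Open(Path.Combine(targetDirectory, "taxonomy")))
using (var taxoWriter = new DirectoryTaxonomyWriter(taxoDir))
using (var writer = new IndexWriter(indexDir, new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)))
{
    var facetsConfig = new FacetsConfig();
    var doc = new Document();
    doc.Add(new TextField("color", "brown", Field.Store.YES));
    doc.Add(new FacetField("color", "brown"));
    // FacetsConfig.Build maps the FacetFields through the taxonomy writer before indexing.
    writer.AddDocument(facetsConfig.Build(taxoWriter, doc));
    writer.Commit();
    taxoWriter.Commit();
    // A SearcherTaxonomyManager(indexDir, taxoDir, null) can then refresh both readers together.
}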

How do I take a user's input and update a Google Spreadsheet in C#

I have a Google Spreadsheet that acts as a stock list for a stationery room. This is proving to be a challenging build in C#, as I need to take the user's input on how many pens they would like and update the qty field in the spreadsheet. I can do this by hardcoding, but how do I change it to take the user's input?
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Sheets.v4;
using Google.Apis.Sheets.v4.Data;
using System;
using System.Collections.Generic;
using System.IO;
//Database https://docs.google.com/spreadsheets/d/HIDDEN/edit?ts=5e53e5e8#gid=0
/*
STOCK LIST
Col A = Item
Col B = Item Code
Col C = Qty
Col D = Unit Price
Col E = Date Added
Col F = Stock updated(After delivery)
COL G = Total Cost
Col H = Total Expenditure
USERS
Last Name First Name UserName PassWord
Knight Fiona FK1 Cat
Wilson Euan EW1 StarWars
Mansfield Graham GM1 Snarler
Account Test TA1 Test
*/
namespace StockList
{
class Program
{
//Read The Sheet
static readonly string[] Scopes = { SheetsService.Scope.Spreadsheets };
static readonly string ApplicationName = "Stock List";
static readonly string SpreadsheetId = "HIDDEN";
static readonly string sheet = "Stock";
static readonly string sheet1 = "Employees";
static SheetsService service;
//Update the database
static void Main(string[] args)
{
GoogleCredential credential;
using (var stream = new FileStream("Stock.json", FileMode.Open, FileAccess.Read))
{
credential = GoogleCredential.FromStream(stream)
.CreateScoped(Scopes);
}
// Create Google Sheets API service.
service = new SheetsService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = ApplicationName,
});
Console.WriteLine("Welcome to the Stationery Management Service!");
//Console.WriteLine("Please enter your Username");
//var userName = Console.ReadLine();
Console.WriteLine("\nCurrent Stock Level: \n");
ReadEntriesStock();
UpdateEntry();
Console.WriteLine("\nNew Stock:\n");
ReadEntriesStock();
Console.WriteLine("\nEmployees List:\n");
ReadEntriesEmployees();
Console.WriteLine("\n");
UpdateEntryEmployee();
}
static void ReadEntriesStock()
{
var range = $"{sheet}!A:F";
SpreadsheetsResource.ValuesResource.GetRequest request =
service.Spreadsheets.Values.Get(SpreadsheetId, range);
var response = request.Execute();
IList<IList<object>> values = response.Values;
if (values != null && values.Count > 0)
{
Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine(" {0,8} {1} {2,5} {3,2} {4,2} {5} {6,6} {1,2} {8,7}", "Item", "|", "Stock Code", "|", "Quantity", "|", "Price", "|", "Date");
Console.WriteLine(" --------------------------------------------------------");
Console.ResetColor();
foreach (var row in values)
{
// Print columns A to E, which correspond to indices 0 to 4.
Console.WriteLine("{0,10} |{1,8} | {2,5} |{3,7} | {4,5}", row[0], row[1], row[2], row[3], row[4]);
}
}
else
{
Console.WriteLine("No data found.");
}
}
static void ReadEntriesEmployees()
{
var range = $"{sheet1}!A:B";
SpreadsheetsResource.ValuesResource.GetRequest request =
service.Spreadsheets.Values.Get(SpreadsheetId, range);
var response = request.Execute();
IList<IList<object>> values = response.Values;
if (values != null && values.Count > 0)
{
foreach (var row in values)
{
// Print columns A and B, which correspond to indices 0 and 1.
Console.WriteLine("{0,5} | {1,5}", row[0], row[1]);
}
}
else
{
Console.WriteLine("No data found.");
}
}
static void UpdateEntry()
{
var range = $"{sheet}!C1";
var valueRange = new ValueRange();
var oblist = new List<object>() { "87" };
valueRange.Values = new List<IList<object>> { oblist };
var updateRequest = service.Spreadsheets.Values.Update(valueRange, SpreadsheetId, range);
updateRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.USERENTERED;
var appendReponse = updateRequest.Execute();
}
static void UpdateEntryEmployee()
{
var range = $"{sheet1}!C6";
var valueRange = new ValueRange();
var oblist = new List<object>() { "" };
valueRange.Values = new List<IList<object>> { oblist };
var updateRequest = service.Spreadsheets.Values.Update(valueRange, SpreadsheetId, range);
updateRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.USERENTERED;
var appendReponse = updateRequest.Execute();
}
}
}
I created a reusable function that takes a pipe-delimited string and writes it row-wise into the designated sheet. I'm doing this in a DLL so I typically avoid reading/writing to/from the console.
private string GoogleSheetsUpdate(string fileId, string inputOption, string inputRange, string inputText) {
//PURPOSE: SETS VALUES IN A RANGE OF A SPREADSHEET.
//REFERENCE: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/update
string result = "Success";
try {
//HOW THE INPUT DATA SHOULD BE INTERPRETED:
//THE VALUES WILL BE PARSED AS IF THE USER TYPED THEM INTO THE UI. NUMBERS WILL STAY AS NUMBERS, BUT STRINGS MAY BE CONVERTED
//TO NUMBERS, DATES, ETC.FOLLOWING THE SAME RULES THAT ARE APPLIED WHEN ENTERING TEXT INTO A CELL VIA THE GOOGLE SHEETS UI.
SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum valueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.USERENTERED;
inputOption = inputOption.ToLower();
if (inputOption == "raw") {
//THE VALUES THE USER HAS ENTERED WILL NOT BE PARSED AND WILL BE STORED AS-IS.
valueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.RAW;
}
//ASSIGN VALUES TO DESIRED PROPERTIES OF REQUESTBODY; ALL EXISTING PROPERTIES WILL BE REPLACED
string[] oRow = inputText.Split('|');
ValueRange oBody = new ValueRange {
Values = new List<IList<object>> { oRow },
};
SpreadsheetsResource.ValuesResource.UpdateRequest oRequest = sheetsService.Spreadsheets.Values.Update(oBody, fileId, inputRange);
oRequest.ValueInputOption = valueInputOption;
UpdateValuesResponse oResponse = oRequest.Execute();
} catch (Exception e) {
result = "Google Sheets API: " + e.Message;
}
return result;
//RESPONSE: IF SUCCESSFUL, THE RESPONSE BODY CONTAINS AN INSTANCE OF UPDATEVALUESRESPONSE.
}
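If you stay in the question's console program rather than a DLL, a minimal sketch along the same lines could be the following; the prompt text and the C2 target cell are assumptions, everything else reuses the question's own fields.
static void UpdateEntry()
{
    Console.WriteLine("How many pens would you like?"); // hypothetical prompt
    string qty = Console.ReadLine();                    // take the user's input instead of hardcoding "87"
    var range = $"{sheet}!C2";                          // Qty column of the Stock sheet (assumed target cell)
    var valueRange = new ValueRange();
    var oblist = new List<object>() { qty };
    valueRange.Values = new List<IList<object>> { oblist };
    var updateRequest = service.Spreadsheets.Values.Update(valueRange, SpreadsheetId, range);
    updateRequest.ValueInputOption = SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.USERENTERED;
    var updateResponse = updateRequest.Execute();
}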

C# CSV to JSON per row

Major Edit: I am doing a bad job of explaining :(
I have two classes:
public class UserDefinitions // a list of 'Items'; each group of items belongs to a user. I handle User logic elsewhere, and it works flawlessly.
{
public List<Item> items { get; set; }
}
public class Item //the User definitions. A user could have 1 or 15 of these. They would all be a single 'line' from the CSV file.
{
public string definitionKey { get; set; }
public string defName { get; set; }
public string defValue { get; set; }
}
I want to build these from a CSV file. I build the CSV file myself, so it uses the same parameters every time.
I run SQL on my company's DB to generate results like so: http://i.imgur.com/gS1UJot.png
Then I read the file like so:
class Program
{
static void Main(string[] args)
{
var userData = new UserDefinitions();
var csvList = new List<Item>();
string json = "";
string fPath = @"C:\test\csvTest.csv";
var lines = File.ReadAllLines(fPath);
Console.WriteLine(lines);
List<string> udata = new List<string>(lines);
foreach (var line in udata)
{
string[] userDataComplete = line.Split(new string[] { "," }, StringSplitOptions.RemoveEmptyEntries);// this cleans any empty cells from the CSV
csvList.Add(new Item { definitionKey = userDataComplete[1], defName = userDataComplete[2], defValue = userDataComplete[3] });
}
json = JsonConvert.SerializeObject(csvList); //everything below is for debugging/tracking progress
Console.WriteLine(json);
Console.ReadKey();
StreamWriter sw = new StreamWriter("C:\\test\\testjson.txt");
sw.WriteLine(json);
sw.Close();
}
}
This ALMOST does what I want. The output json is from the first 'column' of the csv data
[{"definitionKey":"uuid1","defName":"HairColor","defValue":"Brown"},{"definitionKey":"uuid1","defName":"HairColor","defValue":"Blonde"},{"definitionKey":"uuid1","defName":"HairColor","defValue":"Blue"}]
When using the screen shot as an example, the wanted output should be
[{"attributeDefinitionKey":"uuid1","name":"HairColor","value":"Brown"},{"definitionKey":"uuid2","defName":"FreckleAmount","defValue":"50"}]
[{"attributeDefinitionKey":"uuid1","name":"HairColor","value":"Blonde"},{"definitionKey":"uuid2","defName":"FreckleAmount","defValue":"null"}]
[{"attributeDefinitionKey":"uuid1","name":"HairColor","value":"Blue"},{"definitionKey":"uuid3","defName":"Tattoos","defValue":"5"}]
I can't pick out certain aspects at will, or apply them to Items. For example, there may be 10 users or 5000 users, but the definitionKey will always be at [1], and adding 3 will get every subsequent definitionKey. Just like the defName will always be in the [2] spot and adding 3 will get every subsequent defName, if there are any; this is all per line.
I know I have to add some +3 logic, but I'm not quite sure how to incorporate that. Maybe a for loop? A nested for loop after a foreach loop? I feel I am missing something obvious!
Thanks again for any help
This reads the CSV line by line and converts each row to JSON, adapting to changes in the number of columns.
This only works if the CSV follows your rules:
one userId, and
x "Things" with 3 columns per "Thing".
private static void Main(string[] args)
{
var file = new StreamReader(@"C:\test\csvTest.csv");
string line;
var itemsJson = new List<string>();
file.ReadLine();
while ((line = file.ReadLine()) != null)
{
var sb = new StringBuilder();
var fields = line.Split(',');
sb.Append(GetKeyValueJson("UserId", fields[0]));
for (var i = 1; i < fields.Length; i += 3)
{
var x = (i + 3) / 3;
sb.Append(GetKeyValueJson($"Thing {i + x} ID", fields[i]));
sb.Append(GetKeyValueJson($"Thing {i + x} ID", fields[i + 1]));
sb.Append(i + 3 == fields.Length
? GetKeyValueJson($"Thing {i + x} ID", fields[i + 2], true)
: GetKeyValueJson($"Thing {i + x} ID", fields[i + 2]));
}
itemsJson.Add(WrapJson(sb.ToString()));
}
var json = WrapItems(itemsJson);
Console.ReadLine();
}
private static string GetKeyValueJson(string id, string value, bool lastPair = false)
{
var sb = new StringBuilder();
sb.Append('"');
sb.Append(id);
sb.Append('"');
sb.Append(':');
sb.Append('"');
sb.Append(value);
sb.Append('"');
if (!lastPair)
sb.Append(',');
return sb.ToString();
}
private static string WrapJson(string s)
{
var sb = new StringBuilder();
sb.Append('{');
sb.Append(s);
sb.Append('}');
return sb.ToString();
}
private static string WrapItems(List<string> jsonList)
{
var sb = new StringBuilder();
sb.Append('"');
sb.Append("Items");
sb.Append('"');
sb.Append(':');
sb.Append('[');
sb.Append(jsonList.Aggregate((current, next) => current + "," + next));
sb.Append(']');
return WrapJson(sb.ToString());
}
It's not pretty and sorting would be tough, but it should adapt to the column amount as long as it is in 3's.
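If you would rather keep your Item class and Json.NET, a sketch of the same +3 stride could look like this; it assumes the first column of each line is the user id, as in the screenshot, and needs System.Linq for Skip.
// Hedged sketch: one JSON array per CSV row, built from the asker's Item class.
foreach (var line in File.ReadLines(@"C:\test\csvTest.csv").Skip(1)) // skip the header row
{
    var fields = line.Split(new[] { "," }, StringSplitOptions.RemoveEmptyEntries);
    var rowItems = new List<Item>();
    for (int i = 1; i + 2 < fields.Length; i += 3) // fields[0] is the user id
    {
        rowItems.Add(new Item
        {
            definitionKey = fields[i],
            defName = fields[i + 1],
            defValue = fields[i + 2]
        });
    }
    Console.WriteLine(JsonConvert.SerializeObject(rowItems));
}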

Google BigQuery REST API C# copy/export table with nested schema as CSV

Is there a way to export a whole table with a nested schema from Google BigQuery as a CSV using the REST API?
There is an example for doing this (https://cloud.google.com/bigquery/docs/exporting-data) with a non-nested schema. This works fine on the non-nested columns in my table. Here is the code for that part:
PagedEnumerable<TableDataList, BigQueryRow> result2 = client.ListRows(datasetId, result.Reference.TableId);
StringBuilder sb = new StringBuilder();
foreach (var row in result2)
{
sb.Append($"{row["visitorId"]}, {row["visitNumber"]}, {row["totals.hits"]}{Environment.NewLine}");
}
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString())))
{
var obj = gcsClient.UploadObject(bucketName, fileName, contentType, stream);
}
In BQ there are columns like totals.hits, totals.visits... If I try to address them I get an error message that there is no such column. If I address "totals" I get the object name "System.Collections.Generic.Dictionary`2[System.String,System.Object]" in the rows of my CSV.
Is there any possibility to do something like that? In the end I want my table from GA in BQ as a CSV somewhere else.
It is possible. Select every column you need as in the following schema, and flatten everything that needs to be flattened.
string query = $@"
#legacySQL
SELECT
visitorId,
visitNumber,
visitId,
visitStartTime,
date,
hits.hitNumber as hitNumber,
hits.product.productSKU as product.productSKU
FROM
FLATTEN(FLATTEN({tableName},hits),hits.product)";
//Creating a job for the query and activating legacy sql
BigQueryJob job = client.CreateQueryJob(query,
new CreateQueryJobOptions { UseLegacySql = true });
BigQueryResults queryResult = client.GetQueryResults(job.Reference.JobId,
new GetQueryResultsOptions());
StringBuilder sb = new StringBuilder();
//Getting the headers from the GA table and write them into the first row of the new table
int count = 0;
for (int i = 0; i <= queryResult.Schema.Fields.Count() - 1; i++)
{
string columenname = "";
var header = queryResult.Schema.Fields[0].Name;
if (i + 1 >= queryResult.Schema.Fields.Count)
columenname = queryResult.Schema.Fields[i].Name;
else
columenname = queryResult.Schema.Fields[i].Name + ",";
sb.Append(columenname);
}
//Getting the data from the GA table and write them row by row into the new table
sb.Append(Environment.NewLine);
foreach (var row in queryResult.GetRows())
{
count++;
if (count % 1000 == 0)
Console.WriteLine($"item {count} finished");
int blub = queryResult.Schema.Fields.Count;
for (Int64 j = 0; j < Convert.ToInt64(blub); j++)
{
try
{
if (row.RawRow.F[Convert.ToInt32(j)] != null)
sb.Append(row.RawRow.F[Convert.ToInt32(j)].V + ",");
}
catch (Exception)
{
}
}
sb.Append(Environment.NewLine);
}
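To finish the export, the assembled CSV can then be pushed to Cloud Storage with the same UploadObject call the question already uses; bucketName, fileName, contentType and gcsClient are assumed to be set up as in the first snippet.
// Upload the CSV built above to Cloud Storage, mirroring the question's snippet.
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString())))
{
    var obj = gcsClient.UploadObject(bucketName, fileName, contentType, stream);
}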

Not getting the value of the hit field?

I am doing a project on search using Lucene.Net. We have created an index which contains 100,000 documents with 5 fields. But while searching I'm unable to retrieve the correct record. Can anybody help me understand why?
My code looks like this
List<int> ids = new List<int>();
List<Hits> hitList = new List<Hits>();
List<Document> results = new List<Document>();
int startPage = (pageIndex.Value - 1) * pageSize.Value;
string indexFileLocation = @"c:\ResourceIndex\"; //Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "ResourceIndex");
var fsDirectory = FSDirectory.Open(new DirectoryInfo(indexFileLocation));
Analyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
IndexReader indexReader = IndexReader.Open(fsDirectory, true);
Searcher indexSearch = new IndexSearcher(indexReader);
//ids.AddRange(this.SearchPredicates(indexSearch, startPage, pageSize, query));
/*Searching From the ResourceIndex*/
Query resourceQuery = MultiFieldQueryParser.Parse(Lucene.Net.Util.Version.LUCENE_29,
new string[] { productId.ToString(), languagelds, query },
new string[] { "productId", "resourceLanguageIds", "externalIdentifier" },
analyzer);
TermQuery descriptionQuery = new TermQuery(new Term("description", '"'+query+'"'));
//TermQuery identifierQuery = new TermQuery(new Term("externalIdentifier", query));
BooleanQuery filterQuery = new BooleanQuery();
filterQuery.Add(descriptionQuery, BooleanClause.Occur.MUST);
//filterQuery.Add(identifierQuery,BooleanClause.Occur.MUST_NOT);
Filter filter = new CachingWrapperFilter(new QueryWrapperFilter(filterQuery));
TopScoreDocCollector collector = TopScoreDocCollector.create(100, true);
//Hits resourceHit = indexSearch.Search(resourceQuery, filter);
indexSearch.Search(resourceQuery, filter, collector);
ScoreDoc[] hits = collector.TopDocs().scoreDocs;
//for (int i = startPage; i <= pageSize && i < resourceHit.Length(); i++)
//{
// ids.Add(Convert.ToInt32(resourceHit.Doc(i).GetField("id")));
//}
for (int i = 0; i < hits.Length; i++)
{
int docId = hits[i].doc;
float score = hits[i].score;
Lucene.Net.Documents.Document doc = indexSearch.Doc(docId);
string result = "Score: " + score.ToString() +
" Field: " + doc.Get("id");
}
You're calling Document.Get("id"), which returns the value of a stored field. It won't work without Field.Store.YES when indexing.
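If re-indexing is an option, storing the field at index time fixes that. A minimal sketch using the same Lucene.Net 2.9-era API as the question (field names are the question's; resourceId, description and writer are placeholder variables):
// Store the id so Document.Get("id") returns a value at search time.
var doc = new Document();
doc.Add(new Field("id", resourceId.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("description", description, Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);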
You could use the FieldCache if you've got the field indexed without analyzing (Field.Index.NOT_ANALYZED) or using the KeywordAnalyzer. (Meaning one term per field and document.)
You'll need to use the innermost reader for the FieldCache to work optimally. Here's a code paste from FieldCache with frequently updated index which uses the FieldCache in a proper way, reading an integer value from the id field.
// Demo values, use existing code somewhere here.
var directory = FSDirectory.Open(new DirectoryInfo("index"));
var reader = IndexReader.Open(directory, readOnly: true);
var documentId = 1337;
// Grab all subreaders.
var subReaders = new List<IndexReader>();
ReaderUtil.GatherSubReaders(subReaders, reader);
// Loop through all subreaders. While subReaderId is higher than the
// maximum document id in the subreader, go to next.
var subReaderId = documentId;
var subReader = subReaders.First(sub => {
if (sub.MaxDoc() < subReaderId) {
subReaderId -= sub.MaxDoc();
return false;
}
return true;
});
var values = FieldCache_Fields.DEFAULT.GetInts(subReader, "id");
var value = values[subReaderId];
