Paging using Lucene.Net - C#

I'm working on a .NET application that uses ASP.NET 3.5 and Lucene.Net. I am showing the search results returned by Lucene.Net in an ASP.NET DataGrid, and I need to implement paging (10 records per page) for this .aspx page.
How do I get this done using Lucene.Net?

Here is a way to build a simple list matching a specific page with Lucene.Net. It is not ASP.NET-specific.
int first = 0, last = 9; // TODO: Set first and last to correct values according to page number and size
Searcher searcher = new IndexSearcher(YourIndexFolder);
Query query = BuildQuery(); // TODO: Implement BuildQuery
Hits hits = searcher.Search(query);
List<Document> results = new List<Document>();
for (int i = first; i <= last && i < hits.Length(); i++)
results.Add(hits.Doc(i));
// results now contains a page of documents matching the query
Basically the Hits collection is very lightweight. The cost of getting this list is minimal. You just instantiate the needed Documents by calling hits.Doc(i) to build your page.
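The Hits API shown above was removed in later Lucene.Net releases. If you are on 3.0 or newer, the same page can be built from TopDocs instead; here is a minimal sketch, assuming the 3.x API and the same BuildQuery and YourIndexFolder as above:
int pageNumber = 0, pageSize = 10; // TODO: set pageNumber from the grid's current page
Searcher searcher = new IndexSearcher(IndexReader.Open(FSDirectory.Open(new DirectoryInfo(YourIndexFolder)), true));
Query query = BuildQuery(); // TODO: Implement BuildQuery
// Ask for enough hits to cover the requested page, then slice that page out
TopDocs topDocs = searcher.Search(query, (pageNumber + 1) * pageSize);
List<Document> page = new List<Document>();
for (int i = pageNumber * pageSize; i < topDocs.ScoreDocs.Length; i++)
    page.Add(searcher.Doc(topDocs.ScoreDocs[i].Doc));
// page now holds at most pageSize documents for the requested page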

What I do is iterate through the hits and insert them into a temporary table in the db. Then I can run a regular SQL query - joining that temp table with other tables too - and give the grid the DataSet/DataView that it wants.
Note that I do the inserts and the query in ONE TRIP to the db, because I'm using just one SQL batch.
void Page_Load(Object sender, EventArgs e)
{
dbutil = new DbUtil();
security = new Security();
security.check_security(dbutil, HttpContext.Current, Security.ANY_USER_OK);
Lucene.Net.Search.Query query = null;
try
{
if (string.IsNullOrEmpty(Request["query"]))
{
throw new Exception("You forgot to enter something to search for...");
}
query = MyLucene.parser.Parse(Request["query"]);
}
catch (Exception e3)
{
display_exception(e3);
}
Lucene.Net.Highlight.QueryScorer scorer = new Lucene.Net.Highlight.QueryScorer(query);
Lucene.Net.Highlight.Highlighter highlighter = new Lucene.Net.Highlight.Highlighter(MyLucene.formatter, scorer);
highlighter.SetTextFragmenter(MyLucene.fragmenter); // new Lucene.Net.Highlight.SimpleFragmenter(400));
StringBuilder sb = new StringBuilder();
string guid = Guid.NewGuid().ToString().Replace("-", "");
Dictionary<string, int> dict_already_seen_ids = new Dictionary<string, int>();
sb.Append(#"
create table #$GUID
(
temp_bg_id int,
temp_bp_id int,
temp_score float,
temp_text nvarchar(3000)
)
");
lock (MyLucene.my_lock)
{
Lucene.Net.Search.Hits hits = null;
try
{
hits = MyLucene.search(query);
}
catch (Exception e2)
{
display_exception(e2);
}
// insert the search results into a temp table which we will join with what's in the database
for (int i = 0; i < hits.Length(); i++)
{
if (dict_already_seen_ids.Count < 100)
{
Lucene.Net.Documents.Document doc = hits.Doc(i);
string bg_id = doc.Get("bg_id");
if (!dict_already_seen_ids.ContainsKey(bg_id))
{
dict_already_seen_ids[bg_id] = 1;
sb.Append("insert into #");
sb.Append(guid);
sb.Append(" values(");
sb.Append(bg_id);
sb.Append(",");
sb.Append(doc.Get("bp_id"));
sb.Append(",");
//sb.Append(Convert.ToString((hits.Score(i))));
sb.Append(Convert.ToString((hits.Score(i))).Replace(",", ".")); // Somebody said this fixes a bug. Localization issue?
sb.Append(",N'");
string raw_text = Server.HtmlEncode(doc.Get("raw_text"));
Lucene.Net.Analysis.TokenStream stream = MyLucene.anal.TokenStream("", new System.IO.StringReader(raw_text));
string highlighted_text = highlighter.GetBestFragments(stream, raw_text, 1, "...").Replace("'", "''");
if (highlighted_text == "") // someties the highlighter fails to emit text...
{
highlighted_text = raw_text.Replace("'","''");
}
if (highlighted_text.Length > 3000)
{
highlighted_text = highlighted_text.Substring(0,3000);
}
sb.Append(highlighted_text);
sb.Append("'");
sb.Append(")\n");
}
}
else
{
break;
}
}
//searcher.Close();
}
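The snippet above stops before the actual round trip. Here is a rough sketch of the second half, assuming plain ADO.NET (the original uses its own DbUtil helper), a hypothetical bugs table keyed by bg_id, and that the $GUID placeholder in the create-table text is meant to be replaced with the generated guid:
// requires System.Data and System.Data.SqlClient
sb.Replace("$GUID", guid); // assumption: fill in the placeholder used in the create-table text above
sb.Append("select bg.*, t.temp_score, t.temp_text from bugs bg "); // bugs/bg_id are illustrative names
sb.Append("join #").Append(guid).Append(" t on t.temp_bg_id = bg.bg_id ");
sb.Append("order by t.temp_score desc\n");
sb.Append("drop table #").Append(guid);
DataSet ds = new DataSet();
using (SqlConnection conn = new SqlConnection(connectionString)) // connectionString: your db connection
using (SqlDataAdapter adapter = new SqlDataAdapter(sb.ToString(), conn))
{
    adapter.Fill(ds); // one batch: create temp table, inserts, join, drop
}
MyGrid.DataSource = ds.Tables[0].DefaultView; // MyGrid: the DataGrid on the page
MyGrid.DataBind();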

Related

Lucene 4.8 facets usage

I am having difficulty understanding this example of how to use facets:
https://lucenenet.apache.org/docs/4.8.0-beta00008/api/Lucene.Net.Demo/Lucene.Net.Demo.Facet.SimpleFacetsExample.html
My goal is to create an index in which each document field has a facet, so that at search time I can choose which facets to use to navigate the data.
What I am confused about is the setup of facets at index-creation time. To summarize my question: is an index with facets compatible with ReferenceManager?
Does the DirectoryTaxonomyWriter actually need to be written and persisted on disk, or is it embedded into the index itself and just temporary? Given the line indexWriter.AddDocument(config.Build(taxoWriter, doc)); from the example, I would expect it to be temporary and embedded into the index (but the example also shows you need the taxonomy to drill down into facets). So can the taxonomy be tied to the index in some way, so that both are handled together by ReferenceManager?
If not, may I just use the same folder I use for storing the index?
Here is a more detailed list of the points that confuse me:
In my scenario I index the documents asynchronously (in a background process) and then fetch the index as soon as possible through a ReferenceManager in an ASP.NET application. I hope this way of fetching the index is compatible with the DirectoryTaxonomyWriter needed for facets.
I then modified my code to introduce the taxonomy writer as shown in the example, but I am a bit confused: it seems I can't store the DirectoryTaxonomyWriter in the same folder as the index because the folder is locked. Do I need to persist it, or will it be embedded into the index (so a RAMDirectory is enough)? If I do need to persist it in a different directory, can I safely persist it in a subdirectory?
Here is the code I am actually using:
private static void BuildIndex (IndexEntry entry)
{
string targetFolder = ConfigurationManager.AppSettings["IndexFolder"] ?? string.Empty;
//** LOG
if (System.IO.Directory.Exists(targetFolder) == false)
{
string message = #"Index folder not found";
_fileLogger.Error(message);
_consoleLogger.Error(message);
return;
}
var metadata = JsonConvert.DeserializeObject<IndexMetadata>(File.ReadAllText(entry.MetdataPath) ?? "{}");
string[] header = new string[0];
List<dynamic> csvRecords = new List<dynamic>();
using (var reader = new StreamReader(entry.DataPath))
{
CsvConfiguration csvConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture);
csvConfiguration.AllowComments = false;
csvConfiguration.CountBytes = false;
csvConfiguration.Delimiter = ",";
csvConfiguration.DetectColumnCountChanges = false;
csvConfiguration.Encoding = Encoding.UTF8;
csvConfiguration.HasHeaderRecord = true;
csvConfiguration.IgnoreBlankLines = true;
csvConfiguration.HeaderValidated = null;
csvConfiguration.MissingFieldFound = null;
csvConfiguration.TrimOptions = CsvHelper.Configuration.TrimOptions.None;
csvConfiguration.BadDataFound = null;
using (var csvReader = new CsvReader(reader, csvConfiguration))
{
csvReader.Read();
csvReader.ReadHeader();
csvReader.Read();
header = csvReader.HeaderRecord;
csvRecords = csvReader.GetRecords<dynamic>().ToList();
}
}
string targetDirectory = Path.Combine(targetFolder, "Index__" + metadata.Boundle + "__" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + "__" + Path.GetRandomFileName().Substring(0, 6));
System.IO.Directory.CreateDirectory(targetDirectory);
//** LOG
{
string message = #"..creating index : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
using (var dir = FSDirectory.Open(targetDirectory))
{
using (DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(dir))
{
Analyzer analyzer = metadata.GetAnalyzer();
var indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
using (IndexWriter writer = new IndexWriter(dir, indexConfig))
{
long entryNumber = csvRecords.Count();
long index = 0;
long lastPercentage = 0;
foreach (dynamic csvEntry in csvRecords)
{
Document doc = new Document();
IDictionary<string, object> dynamicCsvEntry = (IDictionary<string, object>)csvEntry;
var indexedMetadataFiled = metadata.IdexedFields;
foreach (string headField in header)
{
if (indexedMetadataFiled.ContainsKey(headField) == false || (indexedMetadataFiled[headField].NeedToBeIndexed == false && indexedMetadataFiled[headField].NeedToBeStored == false))
continue;
var field = new Field(headField,
((string)dynamicCsvEntry[headField] ?? string.Empty).ToLower(),
indexedMetadataFiled[headField].NeedToBeStored ? Field.Store.YES : Field.Store.NO,
indexedMetadataFiled[headField].NeedToBeIndexed ? Field.Index.ANALYZED : Field.Index.NO
);
doc.Add(field);
var facetField = new FacetField(headField, (string)dynamicCsvEntry[headField]);
doc.Add(facetField);
}
long percentage = (long)(((decimal)index / (decimal)entryNumber) * 100m);
if (percentage > lastPercentage && percentage % 10 == 0)
{
_consoleLogger.Information($"..indexing {percentage}%..");
lastPercentage = percentage;
}
writer.AddDocument(doc);
index++;
}
writer.Commit();
}
}
}
//** LOG
{
string message = #"Index Created : {0}";
_fileLogger.Information(message, targetDirectory);
_consoleLogger.Information(message, targetDirectory);
}
}
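For reference, the taxonomy is a separate index with its own write lock, so it cannot share the exact folder the IndexWriter has locked, but a subdirectory (or sibling directory) works. Here is a minimal sketch of how the example's pattern could fit into the code above, assuming a "taxonomy" subdirectory and a FacetsConfig; the field values are illustrative:
// Sketch only: keep the taxonomy in a subdirectory next to the main index files
string taxoDirectory = Path.Combine(targetDirectory, "taxonomy");
var facetsConfig = new FacetsConfig();
using (var indexDir = FSDirectory.Open(targetDirectory))
using (var taxoDir = FSDirectory.Open(taxoDirectory))
using (var taxoWriter = new DirectoryTaxonomyWriter(taxoDir))
using (var writer = new IndexWriter(indexDir, new IndexWriterConfig(LuceneVersion.LUCENE_48, metadata.GetAnalyzer())))
{
    var doc = new Document();
    doc.Add(new TextField("country", "italy", Field.Store.YES));
    doc.Add(new FacetField("country", "italy")); // one facet dimension per field, as in the question
    // Build() rewrites the FacetFields against the taxonomy; adding them to the writer directly would fail
    writer.AddDocument(facetsConfig.Build(taxoWriter, doc));
    writer.Commit();
    taxoWriter.Commit(); // commit the taxonomy alongside the index
}
// At search time the same subdirectory is opened with a DirectoryTaxonomyReader to drill down.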

Problem implementing Lucene.Net 4.8 partial and fuzzy search in multilanguage environment

As stated in the title, I am trying to implement search over documents that are indexed with many analyzers.
Documents are always lower case (so there is no issue with casing), but I have problems with multiple analyzers and partial search.
Examples of the issues:
If I search for 'grap' it matches 'group' (which is fine, I use fuzzy search) but it does not match 'graphical'.
Another issue is that if the "hint" supplied with the document to index tells the indexer to use a specific language (and therefore not the StandardAnalyzer during the indexing phase), the query is then unable to fetch data.
I have read that this is because the query should also be built with the right analyzer, but I haven't understood how to achieve that.
Here is my code:
public List<JObject> Search(string boundle, string query, LuceneHint luceneHint, int pageIndex, int itemsPerPage, string key)
{
//TODO : security check
if (string.IsNullOrEmpty(query))
throw new ArgumentNullException("query");
if (query.Length < 3)
throw new ArgumentException("query parameter too short, should be at least 3 characters long.");
if ((luceneHint?.FieldsToSearch?.Any() ?? false) == false)
throw new ArgumentNullException("luceneHint");
if (pageIndex < 0)
pageIndex = 0;
if (itemsPerPage < 1)
itemsPerPage = int.MaxValue;
if ((luceneHint?.Top ?? 0) < itemsPerPage)
itemsPerPage = luceneHint.Top;
var tokens = Regex.Split(query.Trim(), @"\W+");
//Here are some tests I made with the query
//Query composedQuery = new MatchAllDocsQuery();
/*BooleanQuery composedQuery = new BooleanQuery();
foreach (var field in luceneHint.FieldsToSearch)
{
PhraseQuery phraseQuery = new PhraseQuery();
foreach (string word in tokens)
{
phraseQuery.Add(new Term(field.FieldName, word));
}
phraseQuery.Boost = (float)field.Weight;
composedQuery.Add(phraseQuery, Occur.SHOULD);
}*/
BooleanQuery composedQuery = new BooleanQuery();
foreach (var field in luceneHint.FieldsToSearch)
{
foreach (string word in tokens)
{
if (string.IsNullOrWhiteSpace(word))
continue;
var termQuery = new FuzzyQuery(new Term(field.FieldName, word.ToLower()));
termQuery.Boost = (float)field.Weight;
composedQuery.Add(termQuery, Occur.SHOULD);
}
}
var indexManager = IndexManager.Instance;
ReferenceManager<IndexSearcher> index = indexManager.Read(boundle); // IndexManager is a utility that binds a bundle to a folder, and thus to an index
int resultLimit = luceneHint?.Top ?? RESULT_LIMIT;
var results = new List<JObject>();
var searcher = index.Acquire();
try
{
Dictionary<string, FieldDescriptor> filedToRead = (luceneHint?.FieldsToRead?.Any() ?? false) ?
luceneHint.FieldsToRead.ToDictionary(item => item.FieldName, item => item) :
new Dictionary<string, FieldDescriptor>();
bool fetchEveryField = filedToRead.Count == 0;
TopScoreDocCollector collector = TopScoreDocCollector.Create(resultLimit, true);
int startPageIndex = pageIndex * itemsPerPage;
searcher.Search(composedQuery, collector);
//TopDocs topDocs = searcher.Search(composedQuery, luceneHint?.Top ?? 100);
TopDocs topDocs = collector.GetTopDocs(startPageIndex, itemsPerPage);
foreach (var scoreDoc in topDocs.ScoreDocs)
{
Document doc = searcher.Doc(scoreDoc.Doc);
dynamic result = new JObject();
foreach (var field in doc.Fields)
if (fetchEveryField || filedToRead.ContainsKey(field.Name))
result[field.Name] = field.GetStringValue();
results.Add(result);
}
}
finally
{
if ( searcher != null )
index.Release(searcher);
}
return results;
}
The "read bundle" part basically caches and returns the result of this code:
foreach ( var boundle in boundles)
result.Add(new BoundleEntry() { Bounde = boundle.Key, Index = new SearcherManager(FSDirectory.Open(boundle.Value.OrderByDescending(folder => folder).FirstOrDefault()), new SearcherFactory()) });
So basically I pass the method a string referring to a given bundle, then a cache is queried to see whether I can get the right SearcherManager (built as above). If the cache is empty or the bundle is missing, the method above scans for the missing bundle and loads the corresponding SearcherManager into the cache.
In the code above the variable "boundle" is not a string but a data structure (the bundle key is the string used to query the cache) that keeps track of the folder-version/bundle-key association.
Important: I noticed that I misspelled a word in my source code; wherever you see the word "boundle", I mean "bundle".
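Not a full answer, but a sketch of the two pieces asked about, assuming the Lucene.Net 4.8 API. FuzzyQuery only covers about two edits, so 'grap' will never reach 'graphical'; adding a PrefixQuery per term covers the partial-match case. For language-specific fields, the query terms have to go through the same analyzer that was used at index time, for example via a QueryParser built with that analyzer (analyzerUsedAtIndexTime below is a placeholder for whatever analyzer the indexer picked for that bundle):
// Inside the existing loop over fields and tokens (sketch)
var term = new Term(field.FieldName, word.ToLower());
composedQuery.Add(new FuzzyQuery(term) { Boost = (float)field.Weight }, Occur.SHOULD);
composedQuery.Add(new PrefixQuery(term) { Boost = (float)field.Weight }, Occur.SHOULD); // lets 'grap' match 'graphical'
// Run the raw word through the same analyzer that indexed this field
var parser = new QueryParser(LuceneVersion.LUCENE_48, field.FieldName, analyzerUsedAtIndexTime);
composedQuery.Add(parser.Parse(QueryParserBase.Escape(word)), Occur.SHOULD);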

Google BigQuery REST API C# copy/export table with nested schema as CSV

Is there a way to export a whole table with a nested schema from Google BigQuery as a CSV using the REST API?
There is an example of doing this (https://cloud.google.com/bigquery/docs/exporting-data) with a non-nested schema. That works fine for the non-nested columns in my table. Here is the code for that part:
PagedEnumerable<TableDataList, BigQueryRow> result2 = client.ListRows(datasetId, result.Reference.TableId);
StringBuilder sb = new StringBuilder();
foreach (var row in result2)
{
sb.Append($"{row["visitorId"]}, {row["visitNumber"]}, {row["totals.hits"]}{Environment.NewLine}");
}
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString())))
{
var obj = gcsClient.UploadObject(bucketName, fileName, contentType, stream);
}
In BQ there are columns like totals.hits, totals.visits... If I try to address them I get an error message saying there is no such column. If I address "totals" instead, I get the object name "System.Collections.Generic.Dictionary`2[System.String,System.Object]" in the rows of my CSV.
Is there any way to do something like that? In the end I want my GA table from BQ as a CSV somewhere else.
It is possible. Select every column you need, as in the following schema, and flatten everything that needs to be flattened.
string query = $#"
#legacySQL
SELECT
visitorId,
visitNumber,
visitId,
visitStartTime,
date,
hits.hitNumber as hitNumber,
hits.product.productSKU as product.productSKU
FROM
FLATTEN(FLATTEN({tableName},hits),hits.product)";
//Creating a job for the query and activating legacy sql
BigQueryJob job = client.CreateQueryJob(query,
new CreateQueryJobOptions { UseLegacySql = true });
BigQueryResults queryResult = client.GetQueryResults(job.Reference.JobId,
new GetQueryResultsOptions());
StringBuilder sb = new StringBuilder();
//Getting the headers from the GA table and writing them into the first row of the new table
int count = 0;
for (int i = 0; i < queryResult.Schema.Fields.Count; i++)
{
string columnName = queryResult.Schema.Fields[i].Name;
if (i + 1 < queryResult.Schema.Fields.Count)
columnName += ",";
sb.Append(columnName);
}
//Getting the data from the GA table and write them row by row into the new table
sb.Append(Environment.NewLine);
foreach (var row in queryResult.GetRows())
{
count++;
if (count % 1000 == 0)
Console.WriteLine($"item {count} finished");
int fieldCount = queryResult.Schema.Fields.Count;
for (int j = 0; j < fieldCount; j++)
{
try
{
if (row.RawRow.F[j] != null)
sb.Append(row.RawRow.F[j].V + ",");
}
}
catch (Exception)
{
// ignore cells that cannot be read and move on to the next field
}
}
sb.Append(Environment.NewLine);
}
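If only a few nested scalar values are needed, an alternative is to read the nested record straight from the row, without the FLATTEN query. A sketch, assuming the client hands RECORD columns back as Dictionary<string, object> (which is exactly what the error message in the question shows):
foreach (var row in result2)
{
    // "totals" is a RECORD column; the client returns it as a dictionary of its sub-fields
    var totals = row["totals"] as Dictionary<string, object>;
    object hits = (totals != null && totals.ContainsKey("hits")) ? totals["hits"] : null;
    sb.Append($"{row["visitorId"]}, {row["visitNumber"]}, {hits}{Environment.NewLine}");
}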

File not being released by IIS when processing

In an ASP.NET MVC4 application, I'm using the following code to process a GoToWebinar Attendees report (CSV format).
For some reason, the file that is being loaded is not being released by IIS and it is causing issues when attempting to process another file.
Do you see anything out of the ordinary here?
The CSVHelper (CsvReader) is from https://joshclose.github.io/CsvHelper/
public AttendeesData GetRecords(string filename, string webinarKey)
{
StreamReader sr = new StreamReader(Server.MapPath(filename));
CsvReader csvread = new CsvReader(sr);
csvread.Configuration.HasHeaderRecord = false;
List<AttendeeRecord> record = csvread.GetRecords<AttendeeRecord>().ToList();
record.RemoveRange(0, 7);
AttendeesData attdata = new AttendeesData();
attdata.Attendees = new List<Attendee>();
foreach (var rec in record)
{
Attendee aa = new Attendee();
aa.Webinarkey = webinarKey;
aa.FullName = String.Concat(rec.First_Name, " ", rec.Last_Name);
aa.AttendedWebinar = 0;
aa.Email = rec.Email_Address;
aa.JoinTime = rec.Join_Time.Replace(" CST", "");
aa.LeaveTime = rec.Leave_Time.Replace(" CST", "");
aa.TimeInSession = rec.Time_in_Session.Replace("hour", "hr").Replace("minute", "min");
aa.Makeup = 0;
aa.RegistrantKey = Registrants.Where(x => x.email == rec.Email_Address).FirstOrDefault().registrantKey;
List<string> firstPolls = new List<string>()
{
rec.Poll_1.Trim(), rec.Poll_2.Trim(),rec.Poll_3.Trim(),rec.Poll_4.Trim()
};
int pass1 = firstPolls.Count(x => x != "");
List<string> secondPolls = new List<string>()
{
rec.Poll_5.Trim(), rec.Poll_6.Trim(),rec.Poll_7.Trim(),rec.Poll_8.Trim()
};
int pass2 = secondPolls.Count(x => x != "");
aa.FirstPollCount = pass1;
aa.SecondPollCount = pass2;
if (aa.TimeInSession != "")
{
aa.AttendedWebinar = 1;
}
if (aa.FirstPollCount == 0 || aa.SecondPollCount == 0)
{
aa.AttendedWebinar = 0;
}
attdata.Attendees.Add(aa);
attendeeToDB(aa); // adds to Oracle DB using EF6.
}
// Should I call csvread.Dispose() here?
sr.Close();
return attdata;
}
Yes. You have to dispose those objects too:
sr.Close();
csvread.Dispose();
sr.Dispose();
A better strategy is to use the using keyword.
You should use using blocks for your stream readers and writers.
You should follow naming conventions (lists always contain multiple entries, so rename record to records).
You should use clear names (not aa).
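A sketch of how the top of GetRecords could look with using blocks, so both the CsvReader and the underlying StreamReader are released even if an exception is thrown (the rest of the method stays as it is):
public AttendeesData GetRecords(string filename, string webinarKey)
{
    List<AttendeeRecord> records;
    using (var sr = new StreamReader(Server.MapPath(filename)))
    using (var csvread = new CsvReader(sr))
    {
        csvread.Configuration.HasHeaderRecord = false;
        records = csvread.GetRecords<AttendeeRecord>().ToList();
    } // both readers are disposed here, so IIS no longer holds the file
    records.RemoveRange(0, 7);
    // ... build and return AttendeesData from records exactly as before ...
}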

Filtered query for ObjectContext using LINQ to SQL

I have tried to find examples of my approach, but none of the questions I found were close enough to what I am trying to achieve.
For the TL;DR crowd, the question is: how do I make it work as in a plain SQL query?
I am using C# - WinForms with SQL Compact 4 and LINQ to SQL.
My scenario involves a form with all the relevant DB table columns available as filters for the query; on the TextChanged event of each filter TextBox, the data source of the grid view updates accordingly. Because I allow filtered search via many of those columns, I was trying to avoid some extra lines of code.
So let's say we concentrate on only 4 columns:
custID, name, email, cellPhone
Each has its corresponding TextBox.
I am trying to build a query as follows:
First I systematically collect all the TextBoxes into a list:
var AllFormsSearchFiltersTBXLst = new List<TextBox>();
Code that collects all the TextBoxes on the current form:
var AllFormsSearchFiltersTBXLst = [currentFormHere].Controls.OfType<TextBox>();
So now I have all of the filter TextBoxes, regardless of whether they hold any value. Then I check which ones have a value: for each TextBox in this filter collection, if its text length is greater than zero the filter is active. A second list, AllFormsACTIVESearchfiltersTBXLst, will then contain only the active filters.
What I was trying to achieve was to avoid referring to each TextBox object by its id; I just loop through all of them as a collection.
Now I want to filter a dbContext using only those active filters, so I do not have to check whether the current TextBox name is email and write something like
query = db.Where(db=>db.email.Contains(TbxEmail.Text));
again and again for each of 10 to 15 columns.
What I have got so far does not implement what I was heading for.
using (SqlCeConnection ClientsConn = new SqlCeConnection(ConfigurationManager.ConnectionStrings["Conn_DB_RCL_CRM2014"].ConnectionString))
{
System.Data.Linq.Table<ContactsClients> db = null;
// get all column names from context
var x =(System.Reflection.MemberInfo[]) typeof(ContactsClients).GetProperties();
using (DB_RCL_CRM2014Context Context = new DB_RCL_CRM2014Context(ClientsConn))
{
if (!Filtered)
db = Context.ContactsClients;//.Where(client => client.Name.Contains("fler"));
else
{
db = Context.ContactsClients;
// the filters Dictionary contains the name of the TextBox and its value
// I've named the TextBoxes after the column names so I can match them to the columns when automating
foreach (KeyValuePair<string,string> CurFltrKVP in FiltersDict)
{
foreach (var memberInfo in x)
{
// couldn't find out how to build the query
}
}
}
BindingSource BS_Clients = new BindingSource();
BS_Clients.DataSource = db;
GV_ClientInfo_Search.DataSource = BS_Clients;
What I normally do when working with plain SQL is: for each TextBox, take its value and append it to a string as a filter:
var q = "where ";
foreach (TextBox CurTBX in AllFormsACTIVESearchfiltersTBXLst)
{
q += CurTBX.Name + " LIKE '%" + CurTBX.Text + "%'";
// plus some checking of the last element in the list, of course
}
Then I pass this string as a filter to the main select query... that simple.
How do I make it work as in a plain SQL query?
I think you're trying to access the property of db dynamically, like db.email, according to the name of your TextBox (here 'email'). However, I recommend doing it another way: switch on each property name, like email, name, etc. Something like this:
// Create a list for the results
var results = new List<YourDBResultTypeHere>();
foreach (TextBox CurTBX in ALLFILTERTBX)
{
switch (CurTBX.Name)
{
case "email":
results.AddRange(db.Where(c => c.email.Contains(CurTBX.Text)).ToList());
break;
case "name":
results.AddRange(db.Where(c => c.name.Contains(CurTBX.Text)).ToList());
break;
}
}
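If the goal is to avoid writing a case per column entirely, the Contains filter can also be composed dynamically from the column name with an expression tree; LINQ to SQL translates it just like a hand-written lambda. A sketch, assuming the FiltersDict of column name/value pairs from the other snippets (the helper name BuildContains is illustrative):
using System.Linq.Expressions;
// Builds: entity => entity.<propertyName>.Contains(text)
static Expression<Func<T, bool>> BuildContains<T>(string propertyName, string text)
{
    var entity = Expression.Parameter(typeof(T), "entity");
    var property = Expression.Property(entity, propertyName); // e.g. entity.email
    var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
    var call = Expression.Call(property, contains, Expression.Constant(text));
    return Expression.Lambda<Func<T, bool>>(call, entity);
}
// Usage: chain one Where per active filter; the conditions are ANDed together
IQueryable<ContactsClients> filtered = Context.ContactsClients;
foreach (KeyValuePair<string, string> filter in FiltersDict)
    filtered = filtered.Where(BuildContains<ContactsClients>(filter.Key, filter.Value));
BS_Clients.DataSource = filtered.ToList();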
try this
void UpdateGridViewData(bool Filtered=false, Dictionary<string,string> FiltersDict = null)
{
using (SqlCeConnection ClientsConn = new SqlCeConnection(ConfigurationManager.ConnectionStrings["Conn_DB_RCL_CRM2014"].ConnectionString))
{
System.Data.Linq.Table<ContactsClients> db = null;
IEnumerable<ContactsClients> IDB = null;
BindingSource BS_Clients = new BindingSource();
System.Reflection.MemberInfo[] AllDbTblClientsColumns = (System.Reflection.MemberInfo[])typeof(ContactsClients).GetProperties();
using (DB_RCL_CRM2014Context Context = new DB_RCL_CRM2014Context(ClientsConn))
{
if (!Filtered)
{
db = Context.ContactsClients;
BS_Clients.DataSource = db;
}
else
{
string fltr = "";
var and = "";
if (FiltersDict.Count > 1) and = "AND";
for (int i = 0; i < FiltersDict.Count; i++)
{
KeyValuePair<string, string> CurFltrKVP = FiltersDict.ElementAt(i);
if (i >= FiltersDict.Count-1) and = "";
for (int j = 0; j < AllDbTblClientsColumns.Length; j++)
{
if (AllDbTblClientsColumns[j].Name.Equals(CurFltrKVP.Key))
{
fltr += string.Format("{0} Like '%{1}%' {2} ", AllDbTblClientsColumns[j].Name, CurFltrKVP.Value, and);
}
}
}
try
{
IDB = Context.ExecuteQuery<ContactsClients>(
"SELECT * " +
"FROM ContactsCosmeticsClients " +
"WHERE " + fltr
);
BS_Clients.DataSource = IDB;
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
GV_ClientInfo_Search.DataSource = BS_Clients;
}
}
}
