Per the title: I am using the official MongoDB C# driver and I am looking to get all POIs within a given bounding box.
So far I have:
MongoCollection<BsonDocument> collection = _MongoDatabase.GetCollection("pois");
BsonArray lowerLeftDoc = new BsonArray(new[] { lowerLeft.Lon, lowerLeft.Lat });
BsonArray upperRightDoc = new BsonArray(new[] { upperRight.Lon, upperRight.Lat });
BsonDocument locDoc = new BsonDocument
{
    { "$within", new BsonArray(new[] { lowerLeftDoc, upperRightDoc }) }
};
BsonDocument queryDoc = new BsonDocument { { "loc", locDoc } };
IList<TrafficUpdate> updates = new List<TrafficUpdate>();
var results = collection.Find(new QueryDocument(queryDoc)).SetLimit(limit);
foreach (BsonDocument t in results)
{
}
Unfortunately this doesn't work. I get:
QueryFailure flag was unknown $within type: 0 (response was { "$err" :
"unknown $within type: 0", "code" : 13058 }).
The problem in your code is that you didn't specify which geo operation you want to use: you specified $within but omitted the shape. You must pair $within with $box (bounding box), $polygon, $center, or $centerSphere / $nearSphere.
This is the correct mongo shell syntax to run $box queries:
> box = [[40.73083, -73.99756], [40.741404, -73.988135]]
> db.places.find({"loc" : {"$within" : {"$box" : box}}})
I am not sure about the exact C# driver syntax, but if you include '$box' it will work.
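For reference, a minimal sketch of the corrected query built with BsonDocument, reusing the variables from the question (treat it as a starting point rather than tested code):
// Sketch: nest the box corners under $box inside $within.
BsonArray lowerLeftDoc = new BsonArray { lowerLeft.Lon, lowerLeft.Lat };
BsonArray upperRightDoc = new BsonArray { upperRight.Lon, upperRight.Lat };
BsonDocument boxDoc = new BsonDocument
{
    { "$box", new BsonArray { lowerLeftDoc, upperRightDoc } }
};
BsonDocument queryDoc = new BsonDocument
{
    { "loc", new BsonDocument { { "$within", boxDoc } } }
};
var results = collection.Find(new QueryDocument(queryDoc)).SetLimit(limit);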
You could also use the Query builder for this query:
var query = Query.WithinRectangle("loc", lowerLeft.Lon, lowerLeft.Lat,
                                  upperRight.Lon, upperRight.Lat);
and let the query builder worry about the nitty-gritty details of creating a properly formed query.
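You can then execute it exactly like the query in the question; a quick usage sketch with the same legacy API:
// Sketch: run the built query with the same cursor options as before.
var results = collection.Find(query).SetLimit(limit);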
Related
I would like to use the MongoDB C# driver to perform a full-text search.
But I see that when I create the index, I can't select 'none' as a language.
I would like terms to be matched as they are, without stop words being removed either.
Given a type
public class Entity
{
public string Text;
}
You can do this:
var collection = new MongoClient().GetDatabase("test").GetCollection<Entity>("collection");
var indexKeysDefinition = new IndexKeysDefinitionBuilder<Entity>().Text(x => x.Text);
var createIndexOptions = new CreateIndexOptions { DefaultLanguage = "none" };
collection.Indexes.CreateOne(new CreateIndexModel<Entity>(indexKeysDefinition, createIndexOptions));
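With the index in place, a hedged usage sketch (the search term is illustrative):
// Sketch: run a $text search against the index created above.
var filter = Builders<Entity>.Filter.Text("some exact term");
var matches = collection.Find(filter).ToList();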
MongoDB's aggregation pipeline has an "AddFields" stage that allows you to project new fields into the pipeline's output document without knowing what fields already exist.
It seems this has not been included in the C# driver for MongoDB (using version 2.7).
Does anyone know if there are any alternatives to this? Maybe a flag on the "Project" stage?
I'm not sure all the BsonDocument usage is required. Certainly not in this example, where I append the textScore of a text search to the search result.
private IAggregateFluent<ProductTypeSearchResult> CreateSearchQuery(string query)
{
FilterDefinition<ProductType> filter = Builders<ProductType>.Filter.Text(query);
return _collection
.Aggregate()
.Match(filter)
.AppendStage<ProductType>("{$addFields: {score: {$meta:'textScore'}}}")
.Sort(Sort)
.Project(pt => new ProductTypeSearchResult
{
Description = pt.ExternalProductTypeDescription,
Id = pt.Id,
Name = pt.Name,
ProductFamilyId = pt.ProductFamilyId,
Url = !string.IsNullOrEmpty(pt.ShopUrl) ? pt.ShopUrl : pt.TypeUrl,
Score = pt.Score
});
}
Note that ProductType does have a Score property, defined as:
[BsonIgnoreIfNull]
public double Score { get; set; }
It's unfortunate that $addFields is not directly supported and we have to resort to "magic strings".
As discussed in "Using $addFields in MongoDB Driver for C#", you can build the aggregation stage yourself with a BsonDocument.
To use the example from https://docs.mongodb.com/manual/reference/operator/aggregation/addFields/
{
$addFields: {
totalHomework: { $sum: "$homework" } ,
totalQuiz: { $sum: "$quiz" }
}
}
would look something like this:
BsonDocument expression = new BsonDocument(new List<BsonElement> {
    new BsonElement("totalHomework", new BsonDocument(new BsonElement("$sum", "$homework"))),
    new BsonElement("totalQuiz", new BsonDocument(new BsonElement("$sum", "$quiz")))
});
BsonDocument addFieldsStage = new BsonDocument(new BsonElement("$addFields", expression));
IAggregateFluent<BsonDocument> aggregate = col.Aggregate().AppendStage<BsonDocument>(addFieldsStage);
expression being the BsonDocument representing
{
totalHomework: { $sum: "$homework" } ,
totalQuiz: { $sum: "$quiz" }
}
You can append additional stages onto the IAggregateFluent object as normal:
IAggregateFluent<BsonDocument> aggregate = col.Aggregate()
    .Match(filterDefinition)
    .AppendStage<BsonDocument>(addFieldsStage)
    .Project(projectionDefinition);
I am completely new to MongoDB and the C# driver.
Development is being done using MonoDevelop on Ubuntu 14.04, and MongoDB's version is 3.2.10.
Currently my code has a POCO as below:
public class User
{
public String Name { get; set;}
public DateTime LastModifiedAt { get; set;}
public DateTime LastSyncedAt { get; set;}
public User ()
{
}
}
I have been able to create a collection and also to add users.
How do I find users whose LastModifiedAt timestamp is greater than their LastSyncedAt timestamp? I did some searching, but haven't been able to find the answer.
Any suggestions would be of immense help
Thanks
Actually, it is not that simple. This should be possible with a query such as:
var users = collection.Find(user => user.LastModifiedAt > user.LastSyncedAt).ToList();
But unfortunately the MongoDB driver cannot translate this expression.
You could either query all Users and filter on the client side:
var users = collection.Find(Builders<User>.Filter.Empty)
.ToEnumerable()
.Where(user => user.LastModifiedAt > user.LastSyncedAt)
.ToList();
Or send a JSON query, because MongoDB itself is able to do it:
var jsonFilter = "{\"$where\" : \"this.LastModifiedAt>this.LastSyncedAt\"}";
var users = collection.Find(new JsonFilterDefinition<User>(jsonFilter))
    .ToList();
And, yes, you need an Id property on your model class; I didn't mention it at first because I assumed you have one and just didn't post it in the question.
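A hedged alternative, if the server can be upgraded to MongoDB 3.6 or newer (the question's 3.2.10 predates it): the $expr operator compares two fields server-side without the JavaScript evaluation that $where requires.
// Sketch: $expr with the aggregation $gt operator, comparing the two date fields.
var exprFilter = new BsonDocument("$expr",
    new BsonDocument("$gt", new BsonArray { "$LastModifiedAt", "$LastSyncedAt" }));
var users = collection.Find(new BsonDocumentFilterDefinition<User>(exprFilter))
    .ToList();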
There is another way to do it. First let's declare the collection:
var collection = Database.GetCollection<BsonDocument>("CollectionName");
Now let's add our projection:
var pro = new BsonDocument {
    { "gt1", new BsonDocument {
        { "$gt", new BsonArray { "$LastModifiedAt", "$LastSyncedAt" } }
    } },
    { "Name", true },
    { "LastModifiedAt", true },
    { "LastSyncedAt", true }
};
Now let's add our filter:
var filter = Builders<BsonDocument>.Filter.Eq("gt1", true);
We'll assemble the aggregate query:
var aggregate = collection.Aggregate(new AggregateOptions { AllowDiskUse = true })
    .Project(pro)
    .Match(filter);
Now our query is ready. We can inspect the rendered query as follows:
var queryText = aggregate.ToString();
And run it as follows:
var result = aggregate.ToList();
This will return the required data as a list of BSON documents.
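If you want typed results instead of raw documents, a hedged follow-up sketch (this assumes you add [BsonIgnoreExtraElements] to User, since the projection introduces the extra gt1 field and the documents carry an _id):
// Hypothetical: map each projected BsonDocument back onto the User POCO.
// Requires: using MongoDB.Bson.Serialization; and using System.Linq;
var typedUsers = result.Select(d => BsonSerializer.Deserialize<User>(d)).ToList();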
This solution works with the MongoDB C# driver 3.6 or above. Please comment in case of any confusion; hopefully I'll be able to explain.
Actually, I want to delete all documents of a kind in a test procedure, but I couldn't do that using the DocumentStore.DatabaseCommands.DeleteByIndex command, so I tried to see if I could query those entities. Here's my code:
var store = new DocumentStore { Url = "http://localhost:8080" };
store.Initialize();
var result= store.DatabaseCommands.Query("Raven/DocumentsByEntityName",
new IndexQuery { Query = "Tag : RunningTables" },null);
result.Results.ForEach(x=>x.WriteTo(new JsonTextWriter(Console.Out)));
It returns no documents, but when I use RavenDB Studio and execute the same query on the Raven/DocumentsByEntityName index, I do get results.
Then I took a look at my command URL and realized it contained start=30, so I changed the code as follows:
var result= store.DatabaseCommands.Query("Raven/DocumentsByEntityName",
new IndexQuery { Query = "Tag : RunningTables",Start=0 },null);
But nothing changed, except that now the URL doesn't contain start anymore.
What's wrong with my code?
OK, I've found what was wrong with my code: I didn't select the default database, and there was more than one database in my RavenDB. So I did it like this:
var store = new DocumentStore { Url = "http://localhost:8080" , DefaultDatabase = "MyDbName" };
store.Initialize();
var result= store.DatabaseCommands.Query("Raven/DocumentsByEntityName",
new IndexQuery { Query = "Tag : RunningTables" },null);
result.Results.ForEach(x=>x.WriteTo(new JsonTextWriter(Console.Out)));
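With the default database set, the original bulk delete should work as well; a hedged sketch against the same index (the exact DeleteByIndex overload varies between RavenDB client versions, so treat the allowStale argument as an assumption):
// Sketch: delete every RunningTables document via the built-in index.
store.DatabaseCommands.DeleteByIndex("Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag : RunningTables" }, allowStale: false);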
I'm trying to make an accent-insensitive query in Elasticsearch with the NEST C# client; my data contains Portuguese (Latin) words with accents. See the code below:
var result = client.Search<Book>(s => s
.From(0)
.Size(20)
.Fields(f => f.Title)
.FacetTerm(f => f.OnField(of => of.Genre))
.Query(q => q.QueryString(qs => qs.Query("sao")))
);
This search did not find anything, although my data in this index contains many titles like "São Cristóvan" and "São Gonçalo". Here is how I currently create the index:
var settings = new IndexSettings();
settings.NumberOfReplicas = 1;
settings.NumberOfShards = 5;
settings.Analysis.Analyzers.Add("snowball", new Nest.SnowballAnalyzer { Language = "Portuguese" });
var idx5 = client.CreateIndex("idx5", settings);
How can I query "sao" and find "são" using Elasticsearch?
I think I have to create the index with the right properties, and I have already tried many settings like the ones above,
or in raw mode:
{
"idx" : {
"settings" : {
"index.analysis.filter.jus_stemmer.name" : "brazilian",
"index.analysis.filter.jus_stop._lang_" : "brazilian"
}
}
}
How can I make the search ignore accents?
Thanks, friends! Here is the solution:
Connect to the Elasticsearch server (e.g. with PuTTY) and execute:
curl -XPOST 'localhost:9200/idx30/_close'
curl -XPUT 'localhost:9200/idx30/_settings' -d '{
"index.analysis.analyzer.default.filter.0": "standard",
"index.analysis.analyzer.default.tokenizer": "standard",
"index.analysis.analyzer.default.filter.1": "lowercase",
"index.analysis.analyzer.default.filter.2": "stop",
"index.analysis.analyzer.default.filter.3": "asciifolding",
"index.number_of_replicas": "1"
}'
curl -XPOST 'localhost:9200/idx30/_open'
Replace "idx30" with name of your index
Done!
I stumbled upon this thread since I got the same problem.
Here's the NEST code to create an index with an AsciiFolding Analyzer:
// Create the Client
string indexName = "testindex";
var uri = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(uri).SetDefaultIndex(indexName);
var client = new ElasticClient(settings);
// Create new Index Settings
IndexSettings set = new IndexSettings();
// Create a Custom Analyzer ...
var an = new CustomAnalyzer();
// ... based on the standard Tokenizer
an.Tokenizer = "standard";
// ... with Filters from the StandardAnalyzer
an.Filter = new List<string>();
an.Filter.Add("standard");
an.Filter.Add("lowercase");
an.Filter.Add("stop");
// ... just adding the additional AsciiFoldingFilter at the end
an.Filter.Add("asciifolding");
// Add the Analyzer with a name
set.Analysis.Analyzers.Add("nospecialchars", an);
// Create the Index
client.CreateIndex(indexName, set);
Now you can map your entity to this index (it's important to do this after you have created the index):
client.MapFromAttributes<TestEntity>();
And here's what such an entity could look like:
[ElasticType(Name = "TestEntity", DisableAllField = true)]
public class TestEntity
{
public TestEntity(int id, string desc)
{
ID = id;
Description = desc;
}
public int ID { get; set; }
[ElasticProperty(Analyzer = "nospecialchars")]
public string Description { get; set; }
}
There you go: the Description field is now indexed without accents.
You can verify this by checking the mapping of your index:
http://localhost:9200/testindex/_mapping
Which then should look something like:
{
  testindex: {
    TestEntity: {
      _all: {
        enabled: false
      },
      properties: {
        description: {
          type: "string",
          analyzer: "nospecialchars"
        },
        iD: {
          type: "integer"
        }
      }
    }
  }
}
Hope this will help someone.
You'll want to incorporate an ASCII folding filter into your analyzer to accomplish this. That will mean constructing the snowball analyzer from tokenizers and filters (unless NEST allows you to add filters to non-custom analyzers; Elasticsearch doesn't, as far as I know).
A SnowballAnalyzer incorporates:
StandardTokenizer
StandardFilter
(Add the ASCIIFolding Filter here)
LowercaseFilter
StopFilter (with the appropriate stopword set)
SnowballFilter (with the appropriate language)
(Or maybe here)
I would probably try to add the ASCIIFoldingFilter just before the LowercaseFilter, although it might be better to add it as the very last step (after the SnowballFilter). Try it both ways and see which works better. I don't know enough about the Portuguese stemmer to say which would be best for sure. A sketch of the first placement follows below.
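For reference, a hedged NEST sketch of that custom chain with asciifolding placed before the lowercase filter. The names "pt_snowball" and "folded_snowball" are illustrative, and the SnowballTokenFilter / TokenFilters API is assumed to match the NEST version used in the earlier answer:
// Sketch: rebuild the snowball chain as a custom analyzer so the
// asciifolding filter can be slotted in at the desired position.
var set = new IndexSettings();
// Assumed API: a snowball token filter configured for Portuguese.
set.Analysis.TokenFilters.Add("pt_snowball",
    new SnowballTokenFilter { Language = "Portuguese" });
var an = new CustomAnalyzer();
an.Tokenizer = "standard";
an.Filter = new List<string>
{
    "standard",
    "asciifolding",  // fold accents before lowercasing and stemming
    "lowercase",
    "stop",
    "pt_snowball"    // or move "asciifolding" after this to try the other placement
};
set.Analysis.Analyzers.Add("folded_snowball", an);
client.CreateIndex("idx_folded", set);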