I'm trying to create multiple unique indexes using the C# MongoDB driver connected to an Azure DocumentDB instance, but I receive the following exception when creating the second unique index:
MongoDB.Driver.MongoCommandException: 'Command createIndexes failed: Message: {"Errors":["The number of unique keys cannot be greater than 1."]}
I can't seem to find any documentation regarding the number of unique keys allowed for an Azure DocumentDB collection. Note that this exception does not occur when using an actual MongoDB instance.
var keys = Builders<ProductEntity>.IndexKeys.Ascending(p => p.UPC);
var options = new CreateIndexOptions<ProductEntity>() { Name = "UX_UPC", Unique = true, Sparse = true };
// The first unique index is created without any issue
var result = await _collection.Indexes.CreateOneAsync(keys, options);

keys = Builders<ProductEntity>.IndexKeys.Ascending(p => p.Manufacturer).Ascending(p => p.MPN);
options = new CreateIndexOptions<ProductEntity>() { Name = "UX_Manufacturer_MPN", Unique = true, Sparse = true };
// Creating the second unique index throws the MongoCommandException above
result = await _collection.Indexes.CreateOneAsync(keys, options);
public class ProductEntity
{
public Guid Id { get; set; }
public string UPC { get; set; }
public string MPN { get; set; }
public string Manufacturer { get; set; }
}
From this blog post, we can see that unique indexes are not currently supported:
General availability is just the beginning for all the features and improvements we have in store for DocumentDB: API for MongoDB. In the near future, we will be releasing support for Unique indexes and a couple major performance improvements.
This thread, Cannot create index in Azure DocumentDb with Mongodb protocol, discusses the same issue; you could refer to it.
I have a DynamoDB table with a GSI, and I need to use the AWS .NET SDK to retrieve two specific attributes, preferably using the persistence model. I've created a separate class for this projection, like this:
[DynamoDBTable("MyTable")]
public class MyItem
{
public long Number { get; set; }
public string Name { get; set; }
}
but even when I specify SelectValues.SpecificAttributes to get only these attributes, the DynamoDB client still tries to map the primary key attributes (which I would like to avoid) and throws the exception Unable to locate property for key attribute <PK hash key attribute name>. Here is the code I use to query:
var keyExpression = new Expression
{
ExpressionStatement = "GSI_PK = :pkValue",
ExpressionAttributeValues =
{
[":pkValue"] = 1
}
};
db.FromQueryAsync<MyItem>(new QueryOperationConfig
{
IndexName = "MyIndex",
KeyExpression = keyExpression,
Select = SelectValues.SpecificAttributes,
AttributesToGet = new List<string> { "Number", "Name" }
});
Is there a way to stop the DynamoDB client from mapping attributes that were not requested?
I'm migrating an application from Azure SQL DB to Cosmos DB and am having trouble storing documents in CosmosDB.
My application is written in C# using the Microsoft.Azure.DocumentDB library.
When I use the method CreateDocumentAsync(Uri, object) from the Microsoft.Azure.DocumentDB library to insert a record, it goes through without any error.
When I query the database using the Azure Cosmos DB Data Explorer I do not see any records, although in the mongo shell count() states the records are there and find() gives an error:
"Unknown server error occurred when processing this request."
I used the Microsoft.Azure.DocumentDB library to build a CRUD layer of sorts. I tested it with the emulator, which worked. However, when I tried to connect to an online Cosmos DB instance, I ran into the problem that the moment I insert a document via the method CreateDocumentAsync(Uri, object) from the library, I am no longer able to see the inserted documents in the Cosmos DB Data Explorer.
I tried to insert it without an id and with an ObjectId _id, but I kept getting the same problem.
When I look in the collection via the mongo shell I do see that a document has been added, but when I use db.collection.find() I get the error: "Unknown server error occurred when processing this request."
Code further down the chain is able to retrieve the documents. Am I missing something? Do I need to change a setting in Azure, or is this a known issue with the library?
class SalesOrder
{
public string Id { get; set; }
public string PurchaseOrderNumber { get; set; }
public DateTime OrderDate { get; set; }
public string AccountNumber { get; set; }
public decimal SubTotal { get; set; }
public decimal TaxAmt { get; set; }
public decimal Freight { get; set; }
public decimal TotalDue { get; set; }
}
class Program
{
private static readonly string endpointUrl = "endpoint";
private static readonly string authorizationKey = "authorizationKey";
private static readonly string databaseId = "databaseId";
private static readonly string collectionId = "collectionId";
private static DocumentClient client;
static void Main(string[] args)
{
using (client = new DocumentClient(new Uri(endpointUrl), authorizationKey))
{
var collectionLink = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);
Insert(collectionLink).Wait();
}
}
private static async Task Insert(Uri collectionLink)
{
var orders = new List<object>();
orders.Add(new SalesOrder
{
Id = "POCO1",
PurchaseOrderNumber = "PO18009186470",
OrderDate = new DateTime(2005, 7, 1),
AccountNumber = "10-4020-000510",
SubTotal = 419.4589m,
TaxAmt = 12.5838m,
Freight = 472.3108m,
TotalDue = 985.018m
});
orders.Add(new SalesOrder
{
Id = "POCO2",
PurchaseOrderNumber = "PO15428132599",
OrderDate = new DateTime(2005, 7, 1),
AccountNumber = "10-4020-000646",
SubTotal = 6107.0820m,
TaxAmt = 586.1203m,
Freight = 183.1626m,
TotalDue = 4893.3929m,
});
foreach (var order in orders)
{
Document created = await client.CreateDocumentAsync(collectionLink, order);
Console.WriteLine(created);
}
}
}
The problem is that you have a Cosmos DB account which is using the MongoDB API, but you are trying to create data using the SQL API SDK.
If you want to use the MongoDB API then you need to use a MongoDB SDK, not the Cosmos DB one.
However, if you are migrating from Azure SQL and you don't have a specific reason to use the MongoDB API, then you should just use the Cosmos DB SQL API and the SDK you are already using. If you do that, your problem will go away.
The reason this happens is that the SQL API SDK creates SQL API-shaped documents, so the Cosmos DB MongoDB interface can't render them in the UI.
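For comparison, here is a minimal sketch of inserting the same document through the MongoDB API using the official MongoDB.Driver package. The connection string, database name, and collection name are placeholders you would take from the Azure portal, and the SalesOrder class is the one from the question:
using System;
using MongoDB.Driver;

class MongoApiInsertSketch
{
    static void Main()
    {
        // Placeholder connection string; copy the real one from the Azure portal
        var client = new MongoClient("mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true");
        var collection = client.GetDatabase("databaseId").GetCollection<SalesOrder>("collectionId");
        // Inserting through the MongoDB wire protocol produces documents that
        // both the mongo shell and the Data Explorer can read
        collection.InsertOne(new SalesOrder
        {
            Id = "POCO1",
            PurchaseOrderNumber = "PO18009186470",
            OrderDate = new DateTime(2005, 7, 1),
            AccountNumber = "10-4020-000510",
            SubTotal = 419.4589m,
            TaxAmt = 12.5838m,
            Freight = 472.3108m,
            TotalDue = 985.018m
        });
    }
}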
I'm using collection.FindOneAndDeleteAsync, but this uses a ton of CPU when used to delete many documents. What's the best way to find multiple documents (anywhere from 100 to 50k) and delete them, using the C# Mongo driver?
Thanks
You need to Find the docs you want to delete, and then delete them using DeleteMany with a filter of _id: {$in: ids}, where ids is an enumerable of the _id values of those documents.
C# example:
public class Entity
{
public ObjectId id { get; set; }
public string name { get; set; }
}
// Find the documents to delete
var test = db.GetCollection<Entity>("test");
var filter = new BsonDocument();
var docs = test.Find(filter).ToList();
// Get the _id values of the found documents
var ids = docs.Select(d => d.id);
// Create an $in filter for those ids
var idsFilter = Builders<Entity>.Filter.In(d => d.id, ids);
// Delete the documents using the $in filter
var result = test.DeleteMany(idsFilter);
I have a .NET Core application that is pretty straightforward: it uses REST to add and download objects to and from MongoDB. Adding items works really well, and so does getting a list that contains all items, but when I try to access one by its id I get null every time. What should I change to make this piece of code work? It should get a Tool object from the database by its unique ID when there's a matching one in the database.
Here's an object in the database:
Here's my repository class:
private IMongoCollection<Tool> Tools => _database.GetCollection<Tool>("Tools");
public async Task<Tool> GetAsync(Guid id) =>
await Tools.AsQueryable().FirstOrDefaultAsync(tool => tool.Id == id);
The argument looks like this when I check it in the debugger: "{ee1aa9fa-5d17-464c-a8ba-f685203b911f}"
Edit
Tool Class Properties
public Guid Id { get; protected set; }
public string Model { get; protected set; }
public string Brand { get; protected set; }
public string Type { get; protected set; }
public uint Box { get; protected set; }
Fixed, check the comments.
Project on GitHub
The easiest way to do this with the C# MongoDB driver is to set the global GuidRepresentation setting, which can be found on the BsonDefaults object. This is a global setting and will affect all serialization/deserialization of GUIDs into BSON binary objects.
BsonDefaults.GuidRepresentation = GuidRepresentation.PythonLegacy;
var collection = new MongoClient().GetDatabase("test").GetCollection<ClassA>("test");
var item = collection.Find(x => x.MyId == new Guid("ee1aa9fa-5d17-464c-a8ba-f685203b911f"))
.FirstOrDefault();
The second option is to convert the GUID manually from a LUUID to a CSUUID. For this there is a helper class within the MongoDB driver, GuidConverter, which converts the GUID into the byte[] that is normally used for storage, but which we can use for our query.
BsonDefaults.GuidRepresentation = GuidRepresentation.CSharpLegacy;
var collection = new MongoClient().GetDatabase("test").GetCollection<ClassA>("test");
var luuid = new Guid("0798f048-d8bb-7048-bb92-7518ea4272cb");
var bytes = GuidConverter.ToBytes(luuid, GuidRepresentation.PythonLegacy);
var csuuid = new Guid(bytes);
var item = collection.Find(x => x.MyId == csuuid)
.FirstOrDefault();
I also noticed that you're using Robo 3T (formerly Robomongo); within this application you can set how GUIDs are displayed by going to Options -> Legacy UUID Encodings.
We are looking to switch from a relational database to Elasticsearch, and I am trying to get some basic code up and running with NEST. We have existing objects which use GUIDs for ids that I would like to save into an Elasticsearch index.
I don't want to add any NEST-specific attributes, as the class is used in different applications and I don't want to add an unnecessary dependency on NEST.
Right now my code looks like this:
var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node)
    .DefaultIndex("test");
var client = new ElasticClient(settings);
var testItem = new TestType { Id = Guid.NewGuid(), Name = "Test", Value = "10" };
var response = client.Index(testItem);
With TestType as:
public class TestType
{
public Guid Id { get; set; }
public string Name { get; set; }
public decimal Value { get; set; }
}
However, I get an error like:
ServerError: 400 Type: mapper_parsing_exception Reason: "failed to parse [id]" CausedBy: "Type: number_format_exception Reason: "For input string: "c9c0ed42-86cd-4a94-bc86-a6112f4c9188""
I think I need to specify a mapping that tells the server the Id is a string, but I can't find any examples or documentation on how to do this without using the attributes.
Assuming you're using Elasticsearch 2.x and NEST 2.x (e.g. the latest of both at the time of writing is Elasticsearch 2.3.5 and NEST 2.4.3), NEST will automatically infer the id of a POCO by default from the Id property. In the case of a GUID id, this will be saved as a string in Elasticsearch.
Here's an example to get you going
void Main()
{
var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node)
// default index to use if one is not specified on the request
// or is not set up to be inferred from the POCO type
.DefaultIndex("tests");
var client = new ElasticClient(settings);
// create the index, and explicitly provide a mapping for TestType
client.CreateIndex("tests", c => c
.Mappings(m => m
.Map<TestType>(t => t
.AutoMap()
.Properties(p => p
// don't analyze ids when indexing,
// so they are indexed verbatim
.String(s => s
.Name(n => n.Id)
.NotAnalyzed()
)
)
)
)
);
var testItem = new TestType { Id = Guid.NewGuid(), Name = "Test", Value = "10" };
// now index our TestType instance
var response = client.Index(testItem);
}
public class TestType
{
public Guid Id { get; set; }
public string Name { get; set; }
public decimal Value { get; set; }
}
Take a look at the Automapping documentation for more examples of how to explicitly map a POCO for controlling norms, analyzers, multi_fields, etc.
What I normally do is have a separate class that is specific only to Elasticsearch, and use AutoMapper to map a DTO, ViewModel, or Model into that Elasticsearch document.
That way, you won't have to expose an object that has a dependency on NEST or attributes that might be specific only to Elasticsearch.
Another good reason is that documents in ES are normally flat, so you would flatten your objects before you index them into ES.
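A rough sketch of that approach, assuming AutoMapper as the mapping library (the TestTypeDocument class and the EsMapping helper are hypothetical names):
using AutoMapper;

// Hypothetical Elasticsearch-only document; the domain TestType stays free of NEST dependencies
public class TestTypeDocument
{
    public string Id { get; set; }   // GUID flattened to a string for indexing
    public string Name { get; set; }
    public decimal Value { get; set; }
}

public static class EsMapping
{
    // One-time AutoMapper configuration from the domain type to the ES document
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        cfg.CreateMap<TestType, TestTypeDocument>()
           .ForMember(d => d.Id, o => o.MapFrom(s => s.Id.ToString()))
    ).CreateMapper();
}
You would then index the mapped document rather than the domain object, e.g. client.Index(EsMapping.Mapper.Map<TestTypeDocument>(testItem)), so only the Elasticsearch-specific class ever touches NEST.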