The following is my code to remove documents:
var builder = Builders<BsonDocument>.Filter;
var filterAddInfo = builder.Lte("Claim_Date", branchEntity.Report_Date);

mongoDB.BranchPerformance.FindOneAndUpdate(
    filterMain,
    Builders<BsonDocument>.Update.PullFilter("Add_Info", filterAddInfo));
It works with MongoDB, but it does not work when I connect to the Azure Cosmos DB API for MongoDB. It prompts:
Command findAndModify failed: Operator 'OPERATOR_PULL with condition' is not supported.
It seems that a pull with a condition (e.g. Lte) is not supported by the Azure Cosmos DB API for MongoDB. Is there an alternative way to change my code to cater for this condition?
We do not yet support the pull operator with a condition specified. Please reach out to askcosmosmongoapi [at] microsoft [dot] com with a sample document, and we'll be happy to work with you on a workaround.
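Until that support arrives, one possible workaround is to do the pull client-side: read the matched document, filter the Add_Info array in memory, and write the pruned array back with a plain $set, which the Cosmos DB Mongo API does accept. A rough sketch reusing the filters from the question (note that this read-then-write is not atomic, and it assumes using directives for System.Linq and MongoDB.Bson):
// Client-side alternative to PullFilter: fetch the document, drop the matching
// array elements in memory, then overwrite the array with $set.
// filterMain and branchEntity are the same objects as in the question.
var doc = mongoDB.BranchPerformance.Find(filterMain).FirstOrDefault();

if (doc != null)
{
    var cutoff = BsonValue.Create(branchEntity.Report_Date);

    // Keep only the entries that would NOT match the Lte("Claim_Date", ...) condition.
    var kept = doc["Add_Info"].AsBsonArray
        .Where(e => e["Claim_Date"].CompareTo(cutoff) > 0);

    mongoDB.BranchPerformance.UpdateOne(
        filterMain,
        Builders<BsonDocument>.Update.Set("Add_Info", new BsonArray(kept)));
}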
We are using the MongoDB C# driver. Locally my backend is a real MongoDB server, and in production on Azure it is Cosmos DB with the MongoDB API.
My Mongo document has a version. I read the document, modify it, increase the version, and write it back, and I want to be sure that nobody has changed the document between the read and the write. So I use the version in the update filter, like this:
var builder = Builders<SettingsStorage>.Filter;
var filter = builder.Eq(c => c.Id, myId) & builder.Eq(c => c.Version, versionAsReadBeforeUpdate);
await this.configurations.FindOneAndUpdateAsync(filter, updateDef);
Or this, just to be sure:
var filter1 = Builders<SettingsStorage>.Filter.Eq(c => c.Id, myId);
var filter2 = Builders<SettingsStorage>.Filter.Eq(c => c.Version, versionAsReadBeforeUpdate);
var filter = Builders<SettingsStorage>.Filter.And(filter1, filter2);
await this.configurations.FindOneAndUpdateAsync(filter, updateDef);
So if somebody changed the document in between, the version will also have changed and the filter will fail. I'll get a "Command findAndModify failed: E11000 duplicate key error collection: configurations Failed _id or unique key constraint" exception and will be able to run retry policies etc.
Now the thing is, it works perfectly with the Mongo backend, but it almost always throws this exception when running against Cosmos DB, both when deployed and from the same local environment. It's the same call, and there is definitely only one simultaneous caller. So how come? Does the C# driver act differently for Cosmos DB? What could I try, or how can this be explained?
Note: with a normal filter, i.e. just builder.Eq(c => c.Id, myId), both environments behave the same way and work properly.
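For context, the retry policy I have in mind is roughly the following (a sketch only; Payload/newPayload are placeholders for whatever is actually being modified, and IsUpsert is left at its default of false, so a non-matching filter returns null instead of inserting):
// Optimistic-concurrency retry around the versioned update.
// SettingsStorage, configurations and myId are the same as above.
const int maxAttempts = 3;

for (var attempt = 0; attempt < maxAttempts; attempt++)
{
    var current = await this.configurations.Find(c => c.Id == myId).SingleAsync();

    var filter = Builders<SettingsStorage>.Filter.Eq(c => c.Id, myId)
               & Builders<SettingsStorage>.Filter.Eq(c => c.Version, current.Version);

    var update = Builders<SettingsStorage>.Update
        .Set(c => c.Payload, newPayload)
        .Set(c => c.Version, current.Version + 1);

    // A lost race means the filter matches nothing and null comes back,
    // which simply triggers another attempt.
    var replaced = await this.configurations.FindOneAndUpdateAsync(filter, update);
    if (replaced != null)
        break;
}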
Cosmos DB is not real MongoDB; it emulates MongoDB. As a result, the semantics will vary enormously between real MongoDB and Cosmos DB. If you want identical semantics, run MongoDB Atlas, which also runs in the Azure cloud. The mongod process running locally will be identical to the mongod in the Atlas cloud if you are running the same version of MongoDB. You can choose which version of MongoDB to run when you build your first cluster.
There is a free tier for beginners that is free forever and doesn't require a credit card, so give it a spin and see if you get the same semantics.
I'm using the Neo4j Desktop client for a proof of concept. I'm having trouble figuring out how to obtain credentials to call out to the Neo4j server to query it from managed code. I'm using the driver, and I'm unsure how to actually obtain/manage credentials with Neo4j. All the places I've looked say that I should be able to run the following command in the terminal of the Neo4j Browser... but it doesn't work.
CALL dbms.security.createUser('username', 'password', false)
I get the following response when I try to run that line.
I'm currently using Neo4j version 3.3.1, and it's running as Enterprise Edition. Can anyone explain what is wrong? Am I missing some step to configure/unlock this API call to add a user?
It's a limitation of the Desktop version:
Anybody & everybody can get a free-for-development use (single-user, local desktop/single machine) license via Neo4j Desktop.
It's a single-user database, so obviously you are not allowed to create additional users.
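For the original goal of querying from managed code, the single default neo4j account is enough; below is a minimal sketch with the official .NET driver, where the bolt URI and password are assumptions for a local Desktop instance (the password is whatever was set on first login):
using System;
using System.Linq;
using Neo4j.Driver.V1;

// Connect with basic auth and run a trivial Cypher query.
using (var driver = GraphDatabase.Driver("bolt://localhost:7687",
                                         AuthTokens.Basic("neo4j", "password")))
using (var session = driver.Session())
{
    var result = session.Run("MATCH (n) RETURN count(n) AS nodeCount");
    Console.WriteLine(result.Single()["nodeCount"].As<long>());
}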
As per the title, I would like to submit a calculation to a Spark cluster (local/HDInsight in Azure) and get the results back in a C# application.
I am aware of Livy, which I understand is a REST API application sitting on top of Spark to query it, but I have not found a standard C# API package for it. Is this the right tool for the job? Is it just missing a well-known C# API?
The Spark cluster needs to access Azure Cosmos DB; therefore I need to be able to submit a job including the connector jar library (or its path on the cluster driver) in order for Spark to read data from Cosmos.
As a .NET Spark connector to query data did not seem to exist, I wrote one:
https://github.com/UnoSD/SparkSharp
It is just a quick implementation, but it also has a way of querying Cosmos DB using Spark SQL.
It's just a C# client for Livy, but it should be more than enough.
using (var client = new HdInsightClient("clusterName", "admin", "password"))
using (var session = await client.CreateSessionAsync(config))
{
    var sum = await session.ExecuteStatementAsync<int>("val res = 1 + 1\nprintln(res)");

    const string sql = "SELECT id, SUM(json.total) AS total FROM cosmos GROUP BY id";

    var cosmos = await session.ExecuteCosmosDbSparkSqlQueryAsync<IEnumerable<Result>>
    (
        "cosmosName",
        "cosmosKey",
        "cosmosDatabase",
        "cosmosCollection",
        "cosmosPreferredRegions",
        sql
    );
}
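The snippet assumes a Livy session config and a Result type for the projection; the config type comes from the SparkSharp library itself, but the DTO could be as simple as the following (property names guessed from the SQL above):
// Hypothetical projection type matching "SELECT id, SUM(json.total) AS total".
public class Result
{
    public string Id { get; set; }
    public double Total { get; set; }
}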
If you're just looking for a way to query your Spark cluster using Spark SQL, then this is a way to do it from C#:
https://github.com/Azure-Samples/hdinsight-dotnet-odbc-spark-sql/blob/master/Program.cs
The console app requires an ODBC driver to be installed. You can find that here:
https://www.microsoft.com/en-us/download/details.aspx?id=49883
Also, the console app has a bug: you need to add a line to the code after the point where the connection string is generated.
Immediately after this line:
connectionString = GetDefaultConnectionString();
add this line:
connectionString = connectionString + "DSN=Sample Microsoft Spark DSN";
If you change the name of the DSN when you install the Spark ODBC driver, you will need to change the name in the line above accordingly.
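Once the DSN is in place, the query itself is plain ADO.NET over ODBC; here is a minimal sketch assuming the default DSN name from the driver installer and the standard HDInsight sample table (adjust DSN, credentials and table to your cluster):
using System;
using System.Data.Odbc;

// Run a Spark SQL query through the Microsoft Spark ODBC driver.
// DSN name, credentials and table name are placeholders.
var connectionString = "DSN=Sample Microsoft Spark DSN;UID=admin;PWD=yourClusterPassword;";

using (var connection = new OdbcConnection(connectionString))
using (var command = new OdbcCommand("SELECT * FROM hivesampletable LIMIT 10", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader[0]);
    }
}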
Since you need to access data from Cosmos DB, you could open a Jupyter Notebook on your cluster, ingest the data into Spark (create a permanent table of your data there), and then use this console app/your C# app to query that data.
If you have a Spark job written in Scala/Python and need to submit it from a C# app, then I guess Livy is the best way to go. I am unsure whether Mobius supports that.
Microsoft has just released DataFrame-based .NET support for Apache Spark via the .NET Foundation OSS. See http://dot.net/spark and http://github.com/dotnet/spark for more details. It is now available in HDInsight by default if you select the correct HDP/Spark version (currently 3.6 and 2.3, soon others as well).
UPDATE:
Long ago I said a clear no to this question.
However, times have changed and Microsoft has made an effort.
Please check out https://dotnet.microsoft.com/apps/data/spark
https://github.com/dotnet/spark
// Create a Spark session
var spark = SparkSession
    .Builder()
    .AppName("word_count_sample")
    .GetOrCreate();
Writing Spark applications in C# is now that easy!
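For example, the classic word count only needs a few more lines on top of that session (the input path is a placeholder, following the dotnet/spark getting-started sample):
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

// Continuing from the session created above: count words in a text file.
DataFrame lines = spark.Read().Text("input.txt");

DataFrame words = lines
    .Select(Explode(Split(Col("value"), " ")).Alias("word"))
    .GroupBy("word")
    .Count();

words.Show();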
OUTDATED:
No, C# is not the tool you should choose if you would like to work with Spark! However, if you really want to do the job with it, try Mobius, as mentioned above:
https://github.com/Microsoft/Mobius
Spark has four main languages and APIs for them: Scala, Java, Python, and R.
If you are looking for a language for production use, I would not suggest the R API. The other three work well.
For the Cosmos DB connection I would suggest: https://github.com/Azure/azure-cosmosdb-spark
There is a function named "InStrRev" which works fine in Access, but when I use that same function to get records from a C# Windows Forms app, an error message pops up saying
Unidentified function 'InStrRev' in expression.
Is there some way that I can use this function, or is there some other function I can use in my Access query that gets the last index of any character from a field?
The older "Jet" driver for Access did not allow us to use VBA functions like InStrRev() in queries from external applications. Those functions would only be available to queries that were run from within Microsoft Access itself.
However, the OLEDB and ODBC drivers for the newer version of the Access Database Engine (a.k.a "ACE") do allow external applications to make use of many of those built-in VBA functions. So, if your application uses
Provider=Microsoft.Jet.OLEDB.4.0; (OLEDB), or
Driver={Microsoft Access Driver (*.mdb)}; (ODBC)
then the InStrRev() function will not work. However, if you use the newer "ACE" driver:
Provider=Microsoft.ACE.OLEDB.12.0; (OLEDB), or
Driver={Microsoft Access Driver (*.mdb, *.accdb)}; (ODBC)
then those same InStrRev() queries will run without error.
The newer version of the Access Database Engine (and drivers) is available as a free download here:
Microsoft Access Database Engine 2010 Redistributable
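As a quick check, here is what a query using InStrRev() through the ACE OLEDB provider could look like from C#; the file path, table and column names are made up for the example:
using System;
using System.Data.OleDb;

// Run an Access query that uses the VBA InStrRev() function via the ACE provider.
// Path, table and column names are placeholders.
var connectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\MyDatabase.accdb;";

using (var connection = new OleDbConnection(connectionString))
using (var command = new OleDbCommand(
    "SELECT FileName, InStrRev(FileName, '.') AS LastDotPos FROM Documents",
    connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine($"{reader["FileName"]}: {reader["LastDotPos"]}");
    }
}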
The equivalent method in .NET is String.LastIndexOf. You can use it as follows:
var str = "foo bar foo";
var lastIndexOfFoo = str.LastIndexOf("foo");
How to get the number of (open) MongoDB connections with the C# driver (1.9.0 NuGet package)?
The MongoDB documentation describes that db.serverStatus() should give information about the count of open and available connections, but I can't find any function in the C# driver that exposes this information.
(Documentation of db.serverStatus(): http://docs.mongodb.org/manual/reference/command/serverStatus/ )
I searched my fork of the driver for serverStatus and there were no results, so currently I don't think this command is supported.
There's no associated Jira issue (for serverStatus, at least) that I can find.
Adding this functionality would be fairly trivial, I would think.
Edit
I wrote to the MongoDB driver Google group and got this reply from Craig at MongoDB Inc.:
You can simply run:
MongoDatabase adminDb = ...;
adminDb.RunCommand("serverStatus");
I believe this needs to be run on the admin database.
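The command result is a plain BSON document, so the connection counts described in the serverStatus documentation can be read straight off the response; a minimal sketch with the legacy 1.9 API (the connection string is a placeholder):
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Read open/available connection counts from serverStatus (legacy 1.x API).
var client = new MongoClient("mongodb://localhost:27017");
var adminDb = client.GetServer().GetDatabase("admin");

var status = adminDb.RunCommand("serverStatus");
var connections = status.Response["connections"].AsBsonDocument;

Console.WriteLine("current:   " + connections["current"]);
Console.WriteLine("available: " + connections["available"]);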