"Write" Transactions don't work when using C#/Node.js with Amazon Neptune - c#

I am able to connect to Neptune and add some data to it with no issues. However, when I try the code at https://tinkerpop.apache.org/docs/current/reference/#gremlin-dotnet-transactions, it doesn't seem to work. I receive the following error:
"Received data deserialized into null object message. Cannot operate on it."
I even jumped to a JS sample (https://tinkerpop.apache.org/docs/current/reference/#gremlin-javascript-transactions) and tried again. It doesn't work either.
What am I missing?

At the time of this writing, Amazon Neptune only supports TinkerPop version 3.4.11. The "traversal transaction" semantics using tx() that you are referencing are new as of 3.5.2, which Apache TinkerPop released in mid-January 2022.
Transactions are typically only required when you need to submit multiple queries that are all bound within a single commit, with a rollback if any one of them fails. If you don't need this, then each Gremlin query sent to Neptune behaves as a single transaction.
If you do need transaction-like behavior in 3.4.11, here's a link to the documentation on how to do that in Neptune using Gremlin sessions: https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-sessions.html
If you don't need transactions, then here are examples of interacting with Neptune by submitting individual queries:
(.NET) https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-dotnet.html
(JS) https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-node-js.html
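If it helps, here is a minimal sketch of submitting individual queries with Gremlin.Net 3.4.x; the endpoint name is a placeholder, and each traversal runs as its own transaction on Neptune:
using Gremlin.Net.Driver;
using Gremlin.Net.Driver.Remote;
using static Gremlin.Net.Process.Traversal.AnonymousTraversalSource;

// "your-neptune-endpoint" is a placeholder; use your cluster endpoint and port 8182.
var server = new GremlinServer("your-neptune-endpoint", 8182, enableSsl: true);
using (var client = new GremlinClient(server))
{
    var g = Traversal().WithRemote(new DriverRemoteConnection(client));

    // Each traversal is committed (or rolled back on failure) by Neptune as a single transaction.
    var id = g.AddV("person").Property("name", "alice").Id().Next();
    var names = g.V().HasLabel("person").Values<string>("name").ToList();
}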

Related

How to catch LDAP Sync Info Message using Content Synchronization Operation

I am trying to retrieve the deleted UUIDs from an OpenLDAP server using a .NET Core console application.
I was able to see, by using a Perl script and dumping the whole response, that a Sync Info Message was indeed sent by my OpenLDAP server and that it contained the UUIDs of the present entries.
I set up an OpenLDAP server with the syncprov overlay (see my previous question Can't get deleted items from OpenLDAP Server using Content Synchronization Operation (syncrepl)).
After re-reading RFC 4533 multiple times, consulting the OpenLDAP syncrepl documentation, and analysing the response, I found that with my current configuration (no accesslog) it is impossible to retrieve deleted entries, only a list of present entries, which are contained in the Sync Info Message. I wish to retrieve that information anyway so I can compute a delta between what is sent and what is on my client.
Do you know how to catch the message in C#?
I tried using the DirectoryServices.Protocols and the Novell.Directory.Ldap libraries (separately). I must have missed something, but I don't know what exactly...
I used the Novell code sample (the SearchPersist one, adding the corresponding control) available at https://www.microfocus.com/documentation/edirectory-developer-documentation/ldap-libraries-for-c-sharp/.
I can retrieve added/modified entries but not the Sync Info Message containing the present entries.
By digging a bit into the Novell Library, I found some useful classes for decoding ASN1 Objects.
By using the following code I am able to determine the type of the Intermediate Sync Info Message.
var decoder = new LBERDecoder();
var syncInfoValue = (Asn1Tagged)decoder.decode(intermediateResponse.getValue());
Then, depending on the tag, I am able to decode the message (using the .decode(valueToDecode) method).
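As a rough sketch, that dispatch could look like the following; the member names follow the Java-style Novell API used above and should be treated as assumptions to verify against your library version. The RFC 4533 syncInfoValue CHOICE tags are 0 = newcookie, 1 = refreshDelete, 2 = refreshPresent, 3 = syncIdSet.
using Novell.Directory.Ldap.Asn1;

var decoder = new LBERDecoder();
var syncInfoValue = (Asn1Tagged)decoder.decode(intermediateResponse.getValue());
int tag = syncInfoValue.getIdentifier().Tag;   // assumed accessor for the context-specific tag
if (tag == 3)
{
    // syncIdSet: carries the set of entryUUIDs; its refreshDeletes flag indicates whether they
    // are present entries (false) or deleted entries (true). Decode the inner value with
    // decoder.decode(...) as described above.
}
else if (tag == 2)
{
    // refreshPresent: marks the end of the present phase.
}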

"The wait operation timed out" after 120secs of CREATE INDEX called from SMO TransferData method

I recently created a C# tool using the SMO classes to automate the refactoring and merging of SQL Server databases for migration into Azure.
The TransferData method successfully adheres to the BulkCopyTimeout for the data copy phase; I proved this by extending it when it timed out.
When the transfer moves on to the CREATE INDEX statements, they appear to hit a timeout after 120 seconds (2 minutes) on a particularly large table.
The ServerConnection object has StatementTimeout and ConnectionTimeout both set to 0 (as initial research suggested doing) to no avail.
Running a profiler trace, I noticed that the "Application Name" differs from the one originally set (MergeDB v1.8) while the bulk copy and index creation phases are running.
The original connection is still present, but it appears that the Transfer class spawns additional connections which, while passing on BulkCopyTimeout, fail to pass on the application name and (my hypothesis) the StatementTimeout property.
I'm using SMO v150.18131.0 connecting to SQL 2008 R2.
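For context, here is a minimal sketch of the setup described above; server, database, and login names are placeholders:
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

var sourceConnection = new ServerConnection("source-server")   // placeholder server name
{
    StatementTimeout = 0,            // no statement timeout on this connection
    ConnectTimeout = 0,
    ApplicationName = "MergeDB v1.8"
};
var server = new Server(sourceConnection);
var database = server.Databases["SourceDb"];                   // placeholder database name

var transfer = new Transfer(database)
{
    CopySchema = true,
    CopyData = true,
    DestinationServer = "target-server",                       // placeholder
    DestinationDatabase = "TargetDb",                          // placeholder
    DestinationLoginSecure = false,
    DestinationLogin = "user",                                 // placeholder
    DestinationPassword = "password",                          // placeholder
    BulkCopyTimeout = 0                                        // honoured during the data copy phase
};
transfer.TransferData();   // the CREATE INDEX phase appears to run on separate connections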

Can't store users in the default MVC application

I had some problems with using authorization before, so I got a brand new everything: new computer, new OS, fresh installation of VS, and a new app and DB in a new resource group on Azure. The whole shebang.
I can confirm that I can log in to the Azure DB as the screenshots below show.
I can see the databases, tables, users etc.
The problem is that, although it works locally (using the default connection string provided automagically for me), it doesn't work in Azure (even though I'm using the publish file from there). It said something about a file not being found, and according to this answer, I needed to change the connection string.
After I altered it, I get the following error. Please note that the firewall is open and that I can access the DB when I run my application's code. I feel that something goes wrong when the authentication part is automatically configured, but I'm out of ideas on how to troubleshoot it.
[SqlException (0x80131904): Login failed for user 'Chamster'.
This session has been assigned a tracing ID of '09121235-87f3-4a92-a371-50bc475306ca'. Provide this tracing ID to customer support when you need assistance.]
The connection string I'm using is this.
Server=tcp:f8goq0bvq7.database.windows.net,1433;
Database=Squicker;
User ID=Chamster#f8goq0bvq7;
Password=Abc123();
Encrypt=True;
TrustServerCertificate=False;
Connection Timeout=10;
This issue has bothered me for a while and I'll be putting a bounty on it in two days. Any suggestion is warmly appreciated.
I believe I've managed to resolve this weird issue. It appears that the user I'm using, despite being an admin with all the bells and whistles, isn't recognized as admin when used in the connection string to create the tables (which is the case at the first registration).
My solution was to create two logins: one with the db_owner role and one with db_datareader and db_datawriter. First, I used the elevated user in my connection string and registered a single user. That created the tables in the database as shown below.
Then, although I could have continued as admin, I decided to try the demoted user and, tada!, it worked perfectly. Once the tables were there, the whole shebang behaved as expected.
To be perfectly sure, I dropped the tables from the database and there it was: the same issue as before. When I changed back to the elevated user, the tables were recreated, allowing me to return to the demoted one.
I also tried dropping the tables, confirming that the issue reappears, and then creating the tables manually. That works too! So basically, the only gotcha that caused it all was the original admin not being treated as admin.
It might have to do with the fact that my Azure account is getting a bit old, the Live ID used there is ancient, and it didn't have an updated version of the database in Azure (the upgrade to v12 was carried out on the 18th of December, so it's possible that was also a requirement to get it working). I'm too tired and lazy to check that out, and I realize I have no idea how to get an "old" type of account. Besides, the issue will diminish and gradually vanish as the old accounts eventually get upgraded.

Sharing connections Simple.Data.UnresolvableObjectException

I have just plugged Application Insights into my Azure web role, which is using Simple.Data.
In the exceptions view in Application Insights I am seeing exceptions occurring for every database call, for both UseSharedConnection and StopUsingSharedConnection.
One thing to note is that the query completes successfully.
I can see that the help at http://simplefx.org/simpledata/docs/pages/Start/SharingAConnection.html says sharing a connection is only supported by the ADO adapter.
My repository code looks like this
My connection string is in the following format:
Server=tcp:.database.windows.net,1433;Database=;User ID=#;Password=;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;

Limiting records synchronized to mobile device

Similar questions have been asked before but after a day of going through the answers I'm still very confused.
I'm using Microsoft's Sync Framework with SQL2008 on the server and SQL CE on Windows Mobile devices. I would have thought this was a VERY common requirement. I don't want to replicate large tables onto the mobile device. I only want the records that are needed. For example, each user will need their "jobs" out of the jobs table. They don't need any other user's jobs. So I need something like "where jobId = 3" for one device and "where jobId=4" for another etc.
This looked promising: http://jtabadero.spaces.live.com/blog/cns!BF49A449953D0591!1203.entry
but unfortunately it doesn't work with my code. This code from the sample seems to be trying to get hold of the object that contains the SQL:
var remoteProvider = (LocalDataCache1ServerSyncProvider)syncAgent.RemoteProvider;
var selectIncrementalInsertsCommand = remoteProvider.SalesLT_CustomerSyncAdapter.SelectIncrementalInsertsCommand;
BUT the code containing the SQL (generated by VS) lives on the server side, and only a proxy is available in the client-side code. This is how the proxy is added:
// The WCF Service
var webSvcProxy = new MicronetCacheSyncService();
// The Remote Server Provider Proxy
var serverProvider = new ServerSyncProviderProxy(webSvcProxy);
// The Sync Agent
var syncAgent = new MicronetCacheSyncAgent();
syncAgent.RemoteProvider = serverProvider;
So how can I get to the server-side code that contains the SQL from the client side? Sorry, I'm not explaining this very well, but I guess it's unlikely anyone will have an answer. The short version: does anyone know a SIMPLE way to limit the records that are synced to a mobile device in this type of app? I think the example was meant for desktop apps.
It looks to me like this sync framework is another one of Microsoft's half-baked releases that is really just a beta. It's starting to remind me of some previous horrible experiences with Entity Framework 1.0 :(
The tutorial at http://msdn.microsoft.com/en-us/library/dd918848%28SQL.105%29.aspx contains everything you need to provision filtering for a scope.
FYI, that tutorial is for Sync Framework 2.0, whereas from your code above it appears you're using Sync Framework 1.0 -- a legacy product.
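If you do stay on the 1.0 offline provider, a rough sketch of the filtering approach from the blog post linked in the question is to pass the filter value from the client as a sync parameter and add the matching WHERE clause to the generated SelectIncremental* commands in the server-side (WCF service) project; the @jobId parameter name here is an assumption:
// Client side: pass the filter value to the server provider as a sync parameter.
var webSvcProxy = new MicronetCacheSyncService();
var serverProvider = new ServerSyncProviderProxy(webSvcProxy);
var syncAgent = new MicronetCacheSyncAgent();
syncAgent.RemoteProvider = serverProvider;
syncAgent.Configuration.SyncParameters.Add(
    new Microsoft.Synchronization.Data.SyncParameter("@jobId", 3));   // hypothetical filter parameter
syncAgent.Synchronize();

// Server side (where the generated LocalDataCache1ServerSyncProvider lives): append
// "WHERE jobId = @jobId" to each SelectIncremental* command and add @jobId to its
// Parameters collection so the filter is applied when changes are enumerated.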
