I want to read a specific message from an Event Hub.
I'm using EventHubConsumerClient and ReadEventsFromPartitionAsync with a partition ID and an offset I already have:
client.ReadEventsFromPartitionAsync(partitionId, EventPosition.FromOffset(offset), cancellationSource.Token);
The issue is that, despite the offset and partition ID being correct, I'm not getting the messages I expect back.
Context
I'm working on something to validate that messages are being properly processed in a distributed system:
Source Event Hub -> Processor Function -> Destination Event Hubs.
I read both ends (hubs) and validate that messages arrive where they should; if they don't, I look up the message from the source Event Hub (by partition ID and offset, matching against a message ID).
It's the messages I'm looking up that don't appear to have the message IDs or offsets I expect.
UPDATE:
I was mistaken in my recall of the defaults for EventPosition and referenced docs for the wrong SDK package below. By default, EventPosition.FromOffset is inclusive. (src)
The creation pattern in the question would include the event at the provided offset. If you're not seeing the event returned, then the offset would seem to be incorrect.
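For reference, a minimal sketch of reading back the event at a known offset with Azure.Messaging.EventHubs; the connection string, hub name, partitionId, and offset below are placeholders:

using Azure.Messaging.EventHubs.Consumer;

await using var client = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<< CONNECTION STRING >>",
    "<< EVENT HUB NAME >>");

using var cancellationSource = new CancellationTokenSource(TimeSpan.FromSeconds(30));

// FromOffset is inclusive by default, so the first event read should be
// the one at the requested offset.
await foreach (PartitionEvent partitionEvent in client.ReadEventsFromPartitionAsync(
    partitionId,
    EventPosition.FromOffset(offset),
    cancellationSource.Token))
{
    Console.WriteLine($"Offset: {partitionEvent.Data.Offset}, MessageId: {partitionEvent.Data.MessageId}");
    break;  // only the event at the offset is needed
}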
Original Answer (incorrect):
The EventPosition that you're building is non-inclusive and will not include the event at that offset, but rather start at the next available event. Using the following overload should target the event that you're looking for:
EventPosition.FromOffset(offset, true)
It looks as if the summary in the docs doesn't do a great job of calling attention to the default; I'll take a follow-up to make that more clear.
The issue was due to incorrect offsets being provided by the Azure Function binding metadata.
Using a batch size of one: the metadata is correct.
Using a batch size of more than one: completely wrong offsets, sequence numbers, etc.
Updating the Event Hubs package (preview) fixed it :/
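For context, this is roughly where those per-event values surface; a sketch of a batch-triggered function logging them (hub name and connection setting name are placeholders, and this assumes the Azure.Messaging.EventHubs-based trigger extension):

using Azure.Messaging.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OffsetLogger
{
    [FunctionName("ValidateBatch")]
    public static void Run(
        [EventHubTrigger("source-hub", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (EventData e in events)
        {
            // Per-event values the trigger metadata should agree with.
            log.LogInformation($"Offset: {e.Offset}, Sequence: {e.SequenceNumber}, MessageId: {e.MessageId}");
        }
    }
}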
The AWS documentation consistently states that ARNs should not be constructed programmatically from names or URLs, because the way those strings are constructed is not guaranteed to remain constant over time.
My issue is that, on SQS, the RedrivePolicy attribute returned by GetQueueAttributes references the dead-letter queue by ARN only.
I am currently writing a service to create queues and set them up, or verify that their setup is correct if they already exist. But I don't see a way to verify that the dead-letter queue ARN matches an existing queue unless I parse it to get the name. Is there a way around that?
(Actually, to be fair, there is one way that respects the "don't parse ARNs programmatically" rule, which consists of calling ListQueues and then looping through the resulting URLs, calling GetQueueAttributes on each; but that sounds like a silly amount of work and could potentially fail if there are more than 1,000 queues on the account, so I'm ruling it out.)
Currently looking for a solution in C# but the issue is not language-dependent.
In C#, there is a parser class in the AWSSDK.Core package called Amazon.Arn.
It was seemingly added around version 3.3.104 of the package in Dec '19 (Source here). So even though ARNs aren't meant to be constructed programmatically, this seems to be setting the format in stone.
Now to get the name, knowing that the ARN is that of a queue, one can do Arn.Parse(queueArn).Resource.
Conversely, one can create a new Arn object then call ToString() on it to get the full ARN.
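A quick sketch of both directions; the queue ARN value here is made up for illustration:

using Amazon;

// Hypothetical queue ARN, for illustration only.
var queueArn = "arn:aws:sqs:eu-west-1:123456789012:my-queue";

// Parse an existing ARN and pull out its parts.
var parsed = Arn.Parse(queueArn);
Console.WriteLine(parsed.Resource);   // my-queue
Console.WriteLine(parsed.Service);    // sqs
Console.WriteLine(parsed.AccountId);  // 123456789012

// Build an Arn from its parts and render it back to a string.
var built = new Arn
{
    Partition = "aws",
    Service = "sqs",
    Region = "eu-west-1",
    AccountId = "123456789012",
    Resource = "my-queue"
};
Console.WriteLine(built.ToString());  // arn:aws:sqs:eu-west-1:123456789012:my-queue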
This of course can be improved for a random ARN (the Arn object contains more information than the resource, such as the service), and will not work as-is for every resource type (e.g. Resource would return topic-name:some-guid for an SNS subscription, or rule/rule-name for an EventBridge rule).
More information can be found about ARNs here.
This gets the queue URL from the ARN:
import boto3

# The ARN has the form arn:aws:sqs:region:account-id:queue-name,
# so the last two segments give the name and the owning account.
queue_name = queue_arn.split(':')[-1]
account_id = queue_arn.split(':')[-2]

sqs_client = boto3.client('sqs')
response = sqs_client.get_queue_url(
    QueueName=queue_name,
    QueueOwnerAWSAccountId=account_id
)
url = response['QueueUrl']
And to check whether the queue exists:
from botocore.exceptions import ClientError

try:
    # The client API has no get_queue_by_name; get_queue_url raises if the queue is missing.
    sqs_client.get_queue_url(QueueName=queue_name)
except ClientError:
    print("Queue doesn't exist!")
I integrated Firebase for Unity and it worked well, but I have problems understanding custom parameters.
I am using Firebase's level_up event (it is not strictly related, but works for my purpose) and I added custom parameters to it like:
Parameter(string levelfailedornot, int currentlevel)
So a parameter for level_up event looks like this:
Parameter("fail", 151)
I thought I could see which level failed how many times, which levels are easier than others, etc. The problem is that I can see these custom parameters with values in the "last 30 minutes" activity panel ("fail 151 - reported 32 times", "success 3 - reported 3 times", etc.) in the Firebase Analytics console, but I can't see them anywhere other than that panel. How can I achieve this? I added custom parameter reporting to the level_up event, but it only shows how many times "fail" or "success" was reported.
Looks like you are logging events only in the debug/test mode.
For events to appear and stay in the Firebase dashboard, ensure all the steps mentioned in the Get Started guide for Unity are followed, in the same order.
Since currentLevel is an int parameter it is reported as an int, and levelfailedornot is a String parameter that reports "fail" or "pass". The screen for the event would display all the parameters in view cards along with the values received from the various devices using your app.
The "last 30 minutes" activity is updated right away, but it takes 6-7 hours before events are added to the long-term analysis.
Also, numeric parameters are only visible as Average and Sum, which are not helpful in your case. If you want to count occurrences of a custom parameter, it has to be of Text type.
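As a sketch, logging the outcome as Text (string) parameters so they can be counted might look like this in Unity; the parameter names here are just examples, not required names:

using Firebase.Analytics;

public static void LogLevelResult(int currentLevel, bool failed)
{
    FirebaseAnalytics.LogEvent(
        FirebaseAnalytics.EventLevelUp,
        new Parameter("level_result", failed ? "fail" : "success"),     // Text: occurrences can be counted
        new Parameter("level_number", currentLevel.ToString()),         // Text instead of int, so counts are visible
        new Parameter(FirebaseAnalytics.ParameterLevel, currentLevel)); // numeric: shown as Average/Sum only
}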
I am using QuickFIX/N to send multileg orders to IB. My message is rejected with the error 58=Value is incorrect (out of range) for this tag (tag = 167). The broker informed me that the proper value for that tag was "MLEG", which is what I set it to. The message flow is as follows:
<outgoing> 8=FIX.4.2_9=229_35=AB_34=2_49=direc513_52=20150904-13:46:32.201_56=IB_11=1234.76_15=USD_21=2_38=10000_40=1_54=1_55=ACC-PLD_60=20150904-21:46:32.161_167=MLEG_207=SMART_555=2_600=ACC_608=ES_623=1255_624=1_564=O_600=PLD_608=ES_623=1066_624=2_564=O_10=220_
<incoming> 8=FIX.4.2_9=000238_35=8_34=000002_43=N_52=20150904-13:46:33_49=IB_56=direc513_11=1234.76_17=17556.1441374393.0_150=8_20=0_103=0_39=8_55=USD_38=10000_44=0.00_32=0_31=0.00_14=0_151=0_6=0_54=1_37=0_167=MLEG_58=Unsupported type_60=20150904-13:46:33_40=1_15=USD_10=136_
A first chance exception of type 'QuickFix.IncorrectTagValue' occurred in QuickFix.dll
<event> Message 2 Rejected: Value is incorrect (out of range) for this tag (Field=167)
<outgoing> 8=FIX.4.2_9=128_35=3_34=3_49=direc513_52=20150904-13:46:32.998_56=IB_45=2_58=Value is incorrect (out of range) for this tag_371=167_372=8_373=5_10=204_
The broker informed me that he would check on the rejection but that the second outgoing message indicated that on my side we were rejecting the 167=MLEG and needed to relax that.
I am not sure what is to be done here, but I am using 4.2 and noticed that MLEG was only defined in 4.3. As the broker prefers 4.2, I added the MLEG definition to my 4.2 Data Dictionary. At this point, I no longer got the same error, but am now getting an "Unsupported Type" error.
<outgoing> 8=FIX.4.2_9=229_35=AB_34=2_49=direc513_52=20150907-08:17:41.066_56=IB_11=1234.67_15=USD_21=2_38=10000_40=1_54=1_55=ACC-PLD_60=20150907-16:17:41.022_167=MLEG_207=SMART_555=2_600=ACC_608=ES_623=1255_624=1_564=O_600=PLD_608=ES_623=1066_624=2_564=O_10=235_
<incoming> 8=FIX.4.2_9=000238_35=8_34=000002_43=N_52=20150907-08:17:46_49=IB_56=direc513_11=1234.67_17=17556.1441613866.0_150=8_20=0_103=0_39=8_55=USD_38=10000_44=0.00_32=0_31=0.00_14=0_151=0_6=0_54=1_37=0_167=MLEG_58=Unsupported type_60=20150907-08:17:46_40=1_15=USD_10=155_
So the immediate questions which come to mind are:
Why do I get a rejection when the broker said MLEG was acceptable, or is this issue simply due to the fact that I didn't have that definition in my 4.2 DD?
Is there something else I should be doing to relax the restriction on my side?
Did I do the right thing to include the definition in the 4.2 DD?
If so, what is meant by unsupported type and why did the message not include a tag reference for the error?
Am I asking the wrong questions and does someone know the right question?
Is there something else obviously wrong with the outgoing message?
I didn't include code because I think I know what code to use to create the message once I know what the problem is. If, however, someone thinks it would be useful, I could add it.
Any help greatly appreciated.
MsgType AB (NewOrderMultileg) was only added in FIX 4.3, so I suspect this is the cause of the “Unsupported type” message. You’ll notice you were getting this error in your first example too, where your tag 167 value was being rejected.
Adding the new value to your dictionary is the correct way to add it as a valid value on your end.
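For illustration, the additions to a FIX42.xml data dictionary might look roughly like this; the exact layout for 35=AB has to come from your broker's spec, so treat it as a sketch only:

<!-- On the existing MsgType (35) field definition, allow AB: -->
<value enum="AB" description="NEW_ORDER_MULTILEG"/>

<!-- On the existing SecurityType (167) field definition, allow MLEG: -->
<value enum="MLEG" description="MULTILEG_INSTRUMENT"/>

<!-- Define the message itself (FIX 4.2 doesn't have it); the 4.3 leg fields
     used in your message (555, 600, 608, 623, 624, 564) also need <field>
     entries added to the fields section. -->
<message name="NewOrderMultileg" msgtype="AB" msgcat="app">
  <field name="ClOrdID" required="Y"/>
  <field name="Side" required="Y"/>
  <field name="OrdType" required="Y"/>
  <group name="NoLegs" required="Y">
    <field name="LegSymbol" required="N"/>
  </group>
</message>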
I suspect your broker actually expects you to send a NewOrderSingle message (35=D) with the MLEG value and any other custom fields, to replicate the functionality of the NewOrderMultileg. Failing that, you'd need to use a more up-to-date version of the FIX protocol (probably 4.4 or 5.0).
Cheers,
Campbell
I am working with FIX 4.3 and have two issues; if I can get one resolved, it should eliminate the second.
I am using the QuickFIX example files as a way of starting off my project. I am able to connect to the target machine and get market data out; however, it returns many results.
The first of these is what I am after, and after that I would like it to stop polling for information.
The second issue is that I am getting the notification Message X Rejected: Tag appears more than once (field=6215).
Looking in the code, this is the tenor value; if I make any change to this, the application fails and doesn't get any FIX information.
I would be grateful if anyone can point me in the right direction to help me resolve this.
This is my cfg file, with the target and sender CompIDs removed.
I am using STunnel to make my connection hence the socket looking at localhost.
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=2
FileStorePath=store
FileLogPath=log
StartTime=00:00:00
EndTime=00:00:00
UseDataDictionary=Y
DataDictionary=../../../../spec/fix/FIX43.xml
SocketConnectHost=127.0.0.1
SocketConnectPort=1337
LogoutTimeout=5
ResetOnLogon=Y
ResetOnDisconnect=Y
[SESSION]
# inherit ConnectionType, ReconnectInterval and SenderCompID from default
BeginString=FIX.4.3
SenderCompID=XXXX
TargetCompID=XXXX
HeartBtInt=3000
thanks
Simon
I'm sure you have not updated your data dictionary XML file to match any customizations that your counterparty has made.
6215 is a custom tag of some sort, and I bet it's inside a repeating group. However, I suspect that, in your DD, you haven't added it inside the group. Therefore, when the engine comes to it, it says "6215 doesn't belong to this group, so the group must just have ended", and it thinks 6215 is outside the group. When this happens the second time, you get your error.
Fix your DD so it matches your counterparty's specifications and this should go away.
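For example, if 6215 is a per-entry field inside the market data entries group, the dictionary changes might look roughly like this (the field name and the owning group here are assumptions; take both from your counterparty's spec):

<!-- In the fields section: declare the custom tag. -->
<field number="6215" name="Tenor" type="STRING"/>

<!-- In the message definition: reference it inside the repeating group it belongs to. -->
<message name="MarketDataSnapshotFullRefresh" msgtype="W" msgcat="app">
  <group name="NoMDEntries" required="Y">
    <field name="MDEntryType" required="Y"/>
    <field name="Tenor" required="N"/>
  </group>
</message>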
I am working on creating an iCal feed for our application. Things are going well; I have everything working except exceptions. For example, when you schedule a recurring event and need to cancel a day, I am using the EXDATE tag in the feed, and that is working fine for removing a scheduled occurrence.
The issue is if you have a recurring event that starts today at 2pm and recurs 5 times. In our application the user can change any one of those weeks to start at 3pm if needed. How do I specify that in the iCal feed?
I have been looking at the documentation, but must be missing something...
Thanks a bunch!!
Drowsy is on the right track.
The UIDs MUST match so that the adjustment is recognised as belonging to the original event.
The RECURRENCE-ID matches it to the instance of the recurring sequence that is being modified. This is because, of course, one might be changing the date and time as well as other things, and one doesn't want the original instance generated by the recurring spec to show up as well as the modification.
And finally, the SEQUENCE must be there so that one knows the sequence, or layer, of modifications in case there are several.
For example, here's a dump of what Google Calendar generates if you modify a recurring event.
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20140325T084000
DTEND;TZID=Australia/Sydney:20140325T101000
DTSTAMP:20140327T060506Z
UID:vu2d4gjdj4mpfuvas53qi32s7k@google.com
RECURRENCE-ID;TZID=Australia/Sydney:20140325T083000
CREATED:20131216T033331Z
DESCRIPTION:
LAST-MODIFIED:20140327T060215Z
LOCATION:
SEQUENCE:1
STATUS:CONFIRMED
SUMMARY:test Event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20140128T083000
DTEND;TZID=Australia/Sydney:20140128T100000
RRULE:FREQ=WEEKLY;UNTIL=20141208T213000Z;BYDAY=TU
DTSTAMP:20140327T060506Z
UID:vu2d4gjdj4mpfuvas53qi32s7k@google.com
CREATED:20131216T033331Z
DESCRIPTION:
LAST-MODIFIED:20140222T101012Z
LOCATION:
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:Test event
TRANSP:OPAQUE
END:VEVENT
I believe that as long as you generate a record with a RECURRENCE-ID based on the original time, and use the original UID, you should be able to set the DTSTART and DTEND values for a single instance. You would need to increment the SEQUENCE too; that should trigger updates on client software.
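Applied to the question (a weekly 2pm series with one occurrence moved to 3pm), a minimal sketch of the override VEVENT might be as follows; the UID, dates, and time zone are placeholders:

BEGIN:VEVENT
UID:weekly-meeting-123@example.com
DTSTAMP:20240105T120000Z
RECURRENCE-ID;TZID=America/New_York:20240112T140000
DTSTART;TZID=America/New_York:20240112T150000
DTEND;TZID=America/New_York:20240112T160000
SEQUENCE:1
SUMMARY:Weekly meeting (moved to 3pm this week)
END:VEVENT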