I am facing a subscription issue with EventStore.ClientAPI. I have a projection manager that is configured to subscribe to $all. The service starts, and I am getting subscriptions for events such as $metadata, $UserCreated, $statsCollected, etc., but no events from the stream I created. I am pretty new to this; please guide me as to where I am going wrong.
void InitiateProjection(IProjection projection)
{
    var checkpoint = GetCurrentPosition(projection.GetType());
    _eventStoreConnection.SubscribeToAllFrom(
        checkpoint,
        CatchUpSubscriptionSettings.Default,
        EventAppeared(projection),
        LiveProcessingStarted(projection));
}
(Screenshot: Event Store UI showing the product created event)
What happens if you connect specifically to ProductCreatedDomainEvent+d78435fc43fd-a7bf-56c01d7efa25?
As an observation, the stream name might start to give you problems (unless you have already changed the system projections to split on + instead of -).
If you have changed to +, could you try connecting to $ce-ProductCreatedDomainEvent and see what you get?
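In case it helps, here is a minimal sketch of subscribing to that category stream with EventStore.ClientAPI (this assumes the $by_category system projection is enabled; the handler bodies are placeholders, and the eventAppeared signature varies slightly between client versions):

var subscription = _eventStoreConnection.SubscribeToStreamFrom(
    "$ce-ProductCreatedDomainEvent",       // category stream built by $by_category
    null,                                  // null = start from the beginning of the stream
    CatchUpSubscriptionSettings.Default,
    (sub, resolvedEvent) => Console.WriteLine(resolvedEvent.Event.EventType),
    sub => Console.WriteLine("Live processing started"));

If events show up here but not on $all, the problem is most likely the checkpoint position you pass to SubscribeToAllFrom.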
Related
I have some code that uses the Service Bus Event Data, and I suspect that I need to use the offset property as, currently, my program is (or seems to be) re-running the same Event Hub data over and over again.
My code is as follows:
public class EventHubListener : IEventProcessor
{
    private static EventHubClient _eventHubClient;

    private const string EhConnectionStringNoPath = "Endpoint=...";
    private const string EhConnectionString = EhConnectionStringNoPath + ";...";
    private const string EhEntityPath = "...";

    public void Start()
    {
        _eventHubClient = EventHubClient.CreateFromConnectionString(EhConnectionString);
        EventHubConsumerGroup defaultConsumerGroup = _eventHubClient.GetDefaultConsumerGroup();
        EventHubDescription eventHub = NamespaceManager.CreateFromConnectionString(EhConnectionStringNoPath).GetEventHub(EhEntityPath);

        foreach (string partitionId in eventHub.PartitionIds)
        {
            defaultConsumerGroup.RegisterProcessor<EventHubListener>(new Lease
            {
                PartitionId = partitionId
            }, new EventProcessorCheckpointManager());

            Console.WriteLine("Processing : " + partitionId);
        }
    }

    // Stubs required by IEventProcessor
    public Task OpenAsync(PartitionContext context)
    {
        return Task.FromResult<object>(null);
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        return Task.FromResult<object>(null);
    }

    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (EventData eventData in messages)
        {
            string bytes = Encoding.UTF8.GetString(eventData.GetBytes());
            MyData data = JsonConvert.DeserializeObject<MyData>(bytes);
            // ... handle the deserialized data ...
        }
        // (checkpointing here via context.CheckpointAsync() is discussed below)
        return Task.FromResult<object>(null);
    }
}
As I get the same messages over and over again, I suspect that I need to do something like this:
string bytes = Encoding.UTF8.GetString(eventData.GetBytes(), eventData.Offset, eventData.SerializedSizeInBytes - eventData.Offset);
However, Offset is a string, even though it seems to be a numeric value ("12345" for example). The documentation on context.CheckPointAsync() made it seem like that might be the answer; however, issuing that at the end of the loop seems to make no difference.
So, I have a two part question:
What is offset? Is it what I think it is (i.e. a numeric marker to a point in the stream) and, if so, why is it a string?
Why would I be getting the same messages over again? As I understand Event Hubs, although they guarantee at-least-once delivery, once a checkpoint has been issued, I shouldn't be getting the same messages back.
EDIT:
After a while of messing about, I've come up with something that avoids this problem; however, I certainly wouldn't claim it's a solution:
var filteredMessages =
    messages.Where(a => a.EnqueuedTimeUtc >= _startDate)
            .OrderBy(a => a.EnqueuedTimeUtc);
Using the EventProcessorHost seemed to actually make the problem worse; that is, not only were historical events being replayed, but they seemed to be replayed in a random order.
EDIT:
I came across this excellent article by @Mikhail, which does seem to address my exact issue and is presumably the root of my problem (or one of them; assuming this is correct, I'm unsure why using the EventProcessorHost doesn't just work out of the box, as @Mikhail said himself in the comments). However, the ServiceBus version of ICheckpointManager only has a single interface method:
namespace Microsoft.ServiceBus.Messaging
{
    public interface ICheckpointManager
    {
        Task CheckpointAsync(Lease lease, string offset, long sequenceNumber);
    }
}
Your title should be Event Hub rather than Service Bus. For your question:
Although Event Hubs has a similar design to Kafka, one big difference is that you must manage offsets yourself. The Event Hub broker knows nothing about your consumer group's offsets.
The Event Hubs SDK provides helper classes to store offsets in a storage account, but you still need to checkpoint manually after processing the messages.
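For illustration, a rough sketch of that wiring with EventProcessorHost from Microsoft.ServiceBus.Messaging (the host name and storage connection string below are placeholders):

var host = new EventProcessorHost(
    "myHostInstance",                          // unique name for this process instance
    EhEntityPath,
    EventHubConsumerGroup.DefaultGroupName,
    EhConnectionString,
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");  // lease/checkpoint store
host.RegisterEventProcessorAsync<EventHubListener>().Wait();

With this in place you still call context.CheckpointAsync() from ProcessEventsAsync once a batch has been handled; the host only provides the storage and lease management around it.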
What is offset? Is it what I think it is (i.e. a numeric marker to a point in the stream) and, if so, why is it a string?
The offset is a pointer within the stream. The offset of an event changes as events get removed from your Event Hub when the message retention policy has elapsed, so a message that was once at offset 10 may be at offset 0 several days later, because older messages were dropped from the stream. This article has a good diagram: Event Hubs: Stream Offsets.
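To make the low-level usage concrete, a hedged sketch of resuming a receiver from a saved offset string (the partition id "0" and offset "12345" are placeholder values):

EventHubReceiver receiver = defaultConsumerGroup.CreateReceiver("0", "12345");
foreach (EventData eventData in receiver.Receive(100))
{
    // Offset is an opaque string; SequenceNumber is the stable numeric position
    Console.WriteLine(eventData.Offset + " / " + eventData.SequenceNumber);
}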
Why would I be getting the same messages over again? As I understand Event Hubs, although they guarantee at-least-once delivery, once a checkpoint has been issued, I shouldn't be getting the same messages back.
You may be getting the same messages again if you are using the low-level EventReceiver offset, since messages expire from the Event Hub when the message retention policy elapses (the default is 1 day). The sequence number is a better field to leverage because it does not change.
When checkpointing succeeds, it records the last event that was successfully processed, so you shouldn't get the same event back: when the client starts, it creates a stream to a position in the event stream just after that event. If you are still seeing duplicates, you can file an issue on GitHub.
EventProcessorHost is helpful as it tries to balance the processing of partitions between the number of instances running. (e.g. consider a 6-partition Event Hub: if you have 2 EventProcessorHosts connected to the same Event Hub reading with the same consumer group, they'll end up balancing the processing of those partitions, 3 each.) It also reconnects when there are transient failures like network loss.
It supports checkpointing to durable storage such as Azure Blob Storage. Here is a sample: Process Events using an EventProcessorClient.
We're introducing Application Insights into our desktop app. Since the user might be offline when using the app, we're using a PersistenceChannel to make sure events can be sent in a later session, and we call Flush when the app is shutting down (in the Dispose() of our tracker):
public ApplicationInsightsTracker()
{
    this.client = new TelemetryClient();
    this.client.InstrumentationKey = InstrumentationKey;
    TelemetryConfiguration.Active.TelemetryChannel = new PersistenceChannel();
    TelemetryConfiguration.Active.TelemetryChannel.DeveloperMode = true;
}

~ApplicationInsightsTracker()
{
    this.Dispose();
}

public override void Dispose()
{
    this.client.Flush();
    GC.SuppressFinalize(this);
}

public override void TrackEvent(ITrackerEvent trackerEvent)
{
    try
    {
        this.client.TrackEvent(trackerEvent.Name, trackerEvent.Properties);
    }
    catch (Exception e)
    {
        Debug.WriteLine(string.Format("Failed to track event {0}. Exception message {1}", trackerEvent.Name, e.Message));
    }
}
We're also using Continuous Export to send the event data from Application Insights to Azure Blob storage. We connect Power BI to the blob container, and the other day the refresh functionality stopped working. We investigated, and it turns out we were loading 2 events with the same unique ID. Looking into the blobs, we found 2 consecutive blobs containing the same event:
blob1.blob - Holds 1 event
{"event":...,"internal":{"data":{"id":"8709bb70-e6b1-11e5-9080-f77f0d66d988"..."data":{..."eventTime":"2016-03-10T11:15:53.9378827Z"}..."user":{..."anonId": "346033da-012d-4cc4-9841-836e5d8f8e32"..."session":{"id":"cb668d2f-9755-4afd-97c2-66cc3504349a"...
blob2.blob - Holds 3 events
{"event":...,"internal":{"data":{"id":"8709bb70-e6b1-11e5-9080-f77f0d66d988"..."data":{..."eventTime":"2016-03-10T11:15:53.9378827Z"}..."user":{..."anonId": "346033da-012d-4cc4-9841-836e5d8f8e32"..."session":{"id":"cb668d2f-9755-4afd-97c2-66cc3504349a"...
{"event":...
{"event":...
As you can see, the first event in both blobs is the same. We were running tests on the PersistenceChannel with the machine connected to / disconnected from the network, and somewhere along the line AI did this.
We're not entirely sure if this is a problem with how we're using it or a flaw in the library. As you can imagine, getting duplicate events through can be quite a pain (especially if you're building a model externally).
Are we doing something odd with AI, or is this a known issue?
I checked with the team that does export, and they said
the current export pipeline has opportunities for duplicate exports
And it is something they are looking into.
So it doesn't look like you are doing anything wrong, this is just a case you'll need to be aware of, and work around for now.
Exported data from AppInsights may contain dupes.
If you are exporting all your data to Power BI, then you can use Power Query's built-in duplicate removal feature.
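If you are post-processing the exported blobs yourself instead, here is a hedged sketch of de-duplicating by the internal.data.id field visible in the blobs above (Deduplicate is a hypothetical helper; Json.NET assumed):

using System.Collections.Generic;
using Newtonsoft.Json.Linq;

static IEnumerable<JObject> Deduplicate(IEnumerable<string> jsonLines)
{
    var seen = new HashSet<string>();
    foreach (var line in jsonLines)
    {
        var obj = JObject.Parse(line);
        var id = (string)obj.SelectToken("internal.data.id");
        if (id == null || seen.Add(id))   // keep the first occurrence of each id
            yield return obj;
    }
}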
My application uses the EWS API with a streaming subscription, and everything is working fine. That's actually a problem for me, as I haven't been able to exercise my recovery code for the OnSubscriptionError event.
Here is the code I use to subscribe for streaming notifications:
private void SetStreamingNotifications(List<FolderId> folder_ids)
{
    streaming_subscriptions_connection = new StreamingSubscriptionConnection(exchange_service, 30);
    streaming_subscriptions_connection.OnDisconnect += OnDisconnect;
    streaming_subscriptions_connection.OnSubscriptionError += OnSubscriptionError;
    streaming_subscriptions_connection.OnNotificationEvent += OnNotificationEvent;

    foreach (var folder_id in folder_ids)
    {
        StreamingSubscription sub = exchange_service.SubscribeToStreamingNotifications(
            new[] { folder_id },
            EventType.Created,
            EventType.Modified,
            EventType.Deleted,
            EventType.Moved,
            EventType.Copied
        );

        streaming_subscriptions_connection.AddSubscription(sub);
    }

    streaming_subscriptions_connection.Open();
}
private void OnSubscriptionError(object sender, SubscriptionErrorEventArgs args)
{
    /* What exceptions can I expect to find in "args.Exception"? */
    /* Can the streaming subscription be recovered, or do I need to create a new one? */
}
So my question is: how can I trigger a subscription error, so I can ensure my code recovers where possible and logs / alerts when not possible?
EDIT
Following a comment from @kat0r, I feel I should add:
I'm currently testing against Exchange 2013 and also intend to test against Exchange 2010 SP1.
I logged a call with Microsoft to find out if it was possible. The short answer is no: you can't trigger the OnSubscriptionError event.
Here are the email responses from MS:
In answer to your question, I don’t believe that there is a way you can trigger the OnSubscriptionError event. The correct action to take if you do encounter this event is to attempt to recreate the subscription that encountered the error. I will see if I can find out any further information about this, but the event is generated rarely and only when an unexpected error is encountered on the Exchange server (which is why it probably isn’t possible to trigger it).
It occurred to me that the EWS Managed API has been open-sourced, and is now available on Github: https://github.com/officedev/ews-managed-api
Based on this, we can see exactly what causes the OnSubscriptionError event to be raised – and as far as I can see, this only occurs in the IssueSubscriptionFailures and IssueGeneralFailure methods, both of which can be found in StreamingSubscriptionConnection.cs. Any error that is not ServiceError.ErrorMissedNotificationEvents and is tied to a subscription will result in this event being raised (and the subscription being removed). The error is read from the Response Xml. Of course, this doesn’t really answer your question of how to trigger the event, as that involves causing Exchange to generate such an error (and I’m afraid there is no information on this). It may be possible to inject some Xml (indicating an error) into a response in a test environment – in theory, you may be able to use Fiddler to do this (as it allows you to manipulate requests/responses).
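To illustrate the "recreate the subscription" advice from the first reply, here is a rough sketch of a recovery handler. This is only a sketch under the assumption that the original folder ids are kept in a _folder_ids field; note that subscriptions cannot be added to an open connection, so the connection is simply rebuilt:

private void OnSubscriptionError(object sender, SubscriptionErrorEventArgs args)
{
    Debug.WriteLine("Subscription error: " + args.Exception);

    var connection = (StreamingSubscriptionConnection)sender;
    if (connection.IsOpen)
        connection.Close();

    // Rebuild the connection and all subscriptions from scratch.
    SetStreamingNotifications(_folder_ids);
}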
A few things you could do:
Throttling restricts the maximum number of subscriptions you can create, so if you just keep creating new subscriptions you should get a throttling response from the server once you exceed 20.
The other thing is, if you take the SubscriptionId and use a different process to unsubscribe, your other code should get a "subscription not found" error.
You also want to test underlying network issues, e.g. break the connection or DNS; if you have a dev environment, see what happens when you bounce the Exchange server, etc.
Cheers
Glen
Having set up a ReferenceDataRequest, I send it along to an EventQueue:
Service refdata = _session.GetService("//blp/refdata");
Request request = refdata.CreateRequest("ReferenceDataRequest");
// append the appropriate symbol and field data to the request

EventQueue eventQueue = new EventQueue();
Guid guid = Guid.NewGuid();
CorrelationID id = new CorrelationID(guid);
_session.SendRequest(request, eventQueue, id);

long _eventWaitTimeout = 60000;
Event myEvent = eventQueue.NextEvent(_eventWaitTimeout);
Normally I can grab the message from the queue, but I'm now hitting a situation where, if I make a number of requests in the same run of the app (normally around the tenth), I see a TIMEOUT EventType:
if (myEvent.Type == Event.EventType.TIMEOUT)
    throw new Exception("Timed Out - need to rethink this strategy");
else
    msg = myEvent.GetMessages().First();
These are being made on the same thread, but I'm assuming that there's something somewhere along the line that I'm consuming and not releasing.
Anyone have any clues or advice?
There aren't many references on SO to BLP's API, but hopefully we can start to rectify that situation.
I just wanted to share something, thanks to the code you included in your initial post.
If you make a request for historical intraday data for a long duration (which results in many events generated by the Bloomberg API), do not use the pattern specified in the API documentation, as it may end up making your application very slow to retrieve all events.
Basically, do not call NextEvent() on a Session object! Use a dedicated EventQueue instead.
Instead of doing this:
var cID = new CorrelationID(1);
session.SendRequest(request, cID);

Event eventObj;
do
{
    eventObj = session.NextEvent();
    // ... process the event's messages ...
} while (eventObj.Type != Event.EventType.RESPONSE);
Do this:
var cID = new CorrelationID(1);
var eventQueue = new EventQueue();
session.SendRequest(request, eventQueue, cID);

Event eventObj;
do
{
    eventObj = eventQueue.NextEvent();
    // ... process the event's messages ...
} while (eventObj.Type != Event.EventType.RESPONSE);
This can result in some performance improvement, though the API is known to not be particularly deterministic...
I didn't really ever get around to solving this question, but we did find a workaround.
Based on a small, apparently throwaway comment in the Server API documentation, we opted to create a second session. One session is responsible for static requests, the other for real-time, e.g.:
_marketDataSession.OpenService("//blp/mktdata");
_staticSession.OpenService("//blp/refdata");
This means one session operates in subscription mode, the other more synchronously; I think it was this duality that was at the root of our problems.
Since making that change, we've not had any problems.
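For illustration, a rough sketch of the two-session split (Bloomberglp.Blpapi; the host and port are the usual Desktop API defaults, so adjust as needed):

var options = new SessionOptions { ServerHost = "localhost", ServerPort = 8194 };

var marketDataSession = new Session(options);   // subscription-style traffic
marketDataSession.Start();
marketDataSession.OpenService("//blp/mktdata");

var staticSession = new Session(options);       // synchronous request/response traffic
staticSession.Start();
staticSession.OpenService("//blp/refdata");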
My reading of the docs agrees that you need separate sessions for the "//blp/mktdata" and "//blp/refdata" services.
A client appeared to have a similar problem. I solved it by making hundreds of sessions rather than passing hundreds of requests through one session. Bloomberg may not be too happy with this BFI (brute force and ignorance) approach, as we are sending the field requests for each session, but it works.
Nice to see another person on Stack Overflow enjoying the pain of the Bloomberg API :-)
I'm ashamed to say I use the following pattern (I suspect copied from the example code). It seems to work reasonably robustly, but probably ignores some important messages. But I don't get your time-out problem. It's Java, but all the languages work basically the same.
cid = session.sendRequest(request, null);

while (true) {
    Event event = session.nextEvent();
    MessageIterator msgIter = event.messageIterator();
    while (msgIter.hasNext()) {
        Message msg = msgIter.next();
        if (msg.correlationID() == cid) {
            processMessage(msg, fieldStrings, result);
        }
    }
    if (event.eventType() == Event.EventType.RESPONSE) {
        break;
    }
}
This may work because it consumes all messages off each event.
It sounds like you are making too many requests at once. BB will only process a certain number of requests per connection at any given time; note that opening more and more connections will not help, because there are limits per subscription as well. If you make a large number of time-consuming requests simultaneously, some may time out. You should also process each request completely (until you receive the RESPONSE message) or cancel it; a partial request that is outstanding is wasting a slot.

Since splitting into two sessions seems to have helped you, it sounds like you are also making a lot of subscription requests at the same time. Are you using subscriptions as a way to take snapshots, i.e. subscribe to an instrument, get initial values, and unsubscribe? If so, you should try to find a different design; this is not how subscriptions are intended to be used. An outstanding subscription request also uses a request slot, which is why it is best to batch as many subscriptions as possible into a single subscription list instead of making many individual requests, as sketched below. Hope this helps with your use of the API.
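As a sketch of that batching suggestion (hypothetical tickers; Bloomberglp.Blpapi):

var subscriptions = new List<Subscription>
{
    new Subscription("IBM US Equity",  "LAST_PRICE", "", new CorrelationID(1)),
    new Subscription("MSFT US Equity", "LAST_PRICE", "", new CorrelationID(2)),
};
session.Subscribe(subscriptions);   // one subscription list, not many single requests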
By the way, I can't tell from your sample code, but while you are blocked on messages from a dedicated event queue, are you also reading from the main event queue? You must process all the messages out of the queue, especially if you have outstanding subscriptions; responses can queue up really fast. If you are not processing messages, the session may hit some queue limits, which may be why you are getting timeouts. Also, if you don't read messages, you may be marked a slow consumer and not receive more data until you start consuming the pending messages. The API is async. Event queues are just a way to block on specific requests without having to process all messages from the main queue, in a context where blocking is OK and it would otherwise be difficult to interrupt the logic flow to process parts asynchronously.
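One way to keep the main queue drained, as a hedged sketch: create the session with an event handler so default-queue events are consumed on the API's dispatcher thread, while your dedicated EventQueues still serve the blocking requests:

var session = new Session(options, (eventObj, s) =>
{
    foreach (Message msg in eventObj)   // drain every message on the main queue
        Console.WriteLine(msg.MessageType);
});
session.Start();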
I am attempting to build an application that can monitor multiple remote machines through WMI. As a C# developer, I have chosen to utilize the System.Management namespace.
For performance and scalability reasons, I would much prefer to use an event-driven method of gathering information than a poll-based one. As such, I have been investigating the ManagementEventWatcher class.
For simple monitoring tasks, this class seems to be exactly what I want. I create the object, give it ManagementScope, EventQuery, and EventWatcherOptions parameters, subscribe to the EventArrived event, and call the Start method (simplified example below).
using SM = System.Management;
...
SM.ManagementEventWatcher _watcher;
SM.ConnectionOptions conxOptions;
SM.ManagementScope scope;
SM.WqlEventQuery eventQuery;
SM.EventWatcherOptions eventOptions;
SM.EventArrivedEventHandler handler;

string path = @"\\machine\root\cimv2";

conxOptions = new SM.ConnectionOptions();
conxOptions.Username = user;
conxOptions.Password = password;

scope = new SM.ManagementScope(path, conxOptions);
scope.Connect();

eventQuery = new SM.WqlEventQuery("SELECT * FROM __InstanceCreationEvent WITHIN 10 WHERE TargetInstance ISA 'Win32_Process'");
eventOptions = new SM.EventWatcherOptions();
eventOptions.Context.Add("QueryName", "Process Query");

_watcher = new SM.ManagementEventWatcher(scope, eventQuery, eventOptions);

handler = new SM.EventArrivedEventHandler(HandleWMIEvent);
_watcher.EventArrived += handler;

_watcher.Start();

Console.WriteLine("Press Any Key To Continue");
Console.ReadKey();

_watcher.Stop();
_watcher.EventArrived -= handler;
The problem I am running into is that it is difficult to detect when the connection to the remote machine has been broken through various means (machine restart, downed router, unplugged network cable, etc.).
The ManagementEventWatcher class does not appear to provide any means of determining that the connection has been lost, as the Stopped event will not fire when this occurs. The ManagementScope object attached to the ManagementEventWatcher still shows IsConnected as true, despite the broken link.
Does anyone have any ideas on how to check the connection status?
The only thing I can think to do at this point is to use the ManagementScope object to periodically run a WMI query against the machine and make sure it still works, though that only checks the local-to-remote connection and not the corresponding remote-to-local connection. I suppose I could look up another WMI query to verify the connection (assuming the query works), but that seems like more work than I should have to do.
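For what it's worth, a minimal sketch of that periodic probe (ProbeConnection is a hypothetical helper; it reuses the SM alias from the code above):

// Returns false when the remote scope can no longer be queried.
static bool ProbeConnection(SM.ManagementScope scope)
{
    try
    {
        var query = new SM.ObjectQuery("SELECT Name FROM Win32_ComputerSystem");
        using (var searcher = new SM.ManagementObjectSearcher(scope, query))
        using (var results = searcher.Get())
        {
            return results.Count > 0;   // enumeration throws if the link is broken
        }
    }
    catch (SM.ManagementException)
    {
        return false;
    }
    catch (System.Runtime.InteropServices.COMException)
    {
        return false;
    }
}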
There are two kinds of event consumers in WMI - temporary and permanent. What you might be looking for is a permanent event subscription. Here is a brief blurb about that on MSDN
A permanent consumer is a COM object that can receive a WMI event at all times. A permanent event consumer uses a set of persistent objects and filters to capture a WMI event. Like a temporary event consumer, you set up a series of WMI objects and filters that capture a WMI event. When an event occurs that matches a filter, WMI loads the permanent event consumer and notifies it about the event. Because a permanent consumer is implemented in the WMI repository and is an executable file that is registered in WMI, the permanent event consumer operates and receives events after it is created and even after a reboot of the operating system as long as WMI is running. For more information, see Receiving Events at All Times.
This MSDN article should be enough to get you going: http://msdn.microsoft.com/en-us/library/aa393014(VS.85).aspx.
However, in my situation in dealing with this problem, we chose to poll for the data as opposed to creating a permanent consumer. Another option is to monitor for certain events (such as a reboot) and then re-register your temporary event consumer.
Check out this post here. It covers how to detect when a removable disk is inserted using C#. It should be in line with the WMI code that you supplied.
Subscribe to the NetworkAvailabilityChange event; this should let you know about the status of your current connection through the NetworkAvailabilityEventArgs.IsAvailable property. With a little extra work, the NetworkAddressChange event will let you know about machines that move about, change addresses, etc. on your network. The System.Net.NetworkInformation namespace has good information. I'm assuming you don't mind using something other than WMI to monitor this.
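A quick sketch of what that looks like (System.Net.NetworkInformation; the handler bodies are placeholders):

using System.Net.NetworkInformation;

// Fires when the local machine gains or loses network connectivity.
NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
    Console.WriteLine("Network available: " + e.IsAvailable);

// Fires when a network address changes; a cue to re-check the remote machines.
NetworkChange.NetworkAddressChanged += (sender, e) =>
    Console.WriteLine("A network address changed");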
As far as I can tell, when something like that happens you should receive an exception of type ManagementException containing the wbemErrCallCancelled WMI error code (0x80041032).