I have a mostly working tracing environment: OpenTelemetry exporting to Jaeger Tracing.
I have read that Batch is the preferred export processor type rather than Simple. However, on .NET Framework 4.8, Batch does not seem to produce any logged results.
I captured packet data with Wireshark; nothing is sent when running with Batch.
Is there something missing in this configuration to make it work with ExportProcessorType.Batch instead of ExportProcessorType.Simple?
public TracerProvider GetTracerProvider(string host, int port)
{
    BackendServiceResource = ResourceBuilder.CreateDefault()
        .AddService(Process.GetCurrentProcess().ProcessName)
        .AddAttributes(new[]
        {
            new KeyValuePair<string, object>("MachineName", Environment.MachineName),
            new KeyValuePair<string, object>("UserName", Environment.UserName),
        });
    return Sdk.CreateTracerProviderBuilder()
        .SetResourceBuilder(BackendServiceResource)
        .SetSampler(new AlwaysOnSampler())
        .SetErrorStatusOnException(true)
        .AddSource(ActivitySource.Name)
        .AddConsoleExporter()
        .AddJaegerExporter(jaeger =>
        {
            jaeger.AgentHost = host;
            jaeger.AgentPort = port;
            jaeger.MaxPayloadSizeInBytes = 4096;
            jaeger.ExportProcessorType = ExportProcessorType.Simple;
            jaeger.BatchExportProcessorOptions = new BatchExportProcessorOptions<Activity>()
            {
                MaxQueueSize = 2048,
                ScheduledDelayMilliseconds = 5000,
                ExporterTimeoutMilliseconds = 30000,
                MaxExportBatchSize = 512,
            };
        })
        .Build();
}
Thought I'd post a solution for this sort of issue.
The reason for these problems is that in some cases the application can shut down before the export process has fully completed. The solution is to make sure everything is flushed and completed before the application is allowed to close.
You can read more about it here: https://github.com/open-telemetry/opentelemetry-dotnet/issues/2758
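Here is a minimal sketch of that shutdown handling, assuming the GetTracerProvider method above; the host and port values are just examples:

using OpenTelemetry.Trace;

TracerProvider tracerProvider = GetTracerProvider("localhost", 6831);
try
{
    // ... application work that creates activities ...
}
finally
{
    // Drain the batch queue, then dispose the provider so the exporter shuts
    // down cleanly. Without this, a short-lived process can exit before the
    // Batch processor's scheduled export ever runs.
    tracerProvider.ForceFlush(5000);
    tracerProvider.Dispose();
}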
I'm trying to create an application where I have to use WHOIS to get some information I need.
To get the WHOIS information, I use this function, which I found on here and adjusted a little:
string Whois(string domain, string whoisServer = "whois.iana.org")
{
    string toReturn = "";
    // WHOIS is a plain text exchange over TCP port 43.
    TcpClient tcpClientWhois = new TcpClient(whoisServer, 43);
    MemoryStream memoryStreamWhois = new MemoryStream();
    // Start copying the response in the background while the query is sent.
    Task copying = tcpClientWhois.GetStream().CopyToAsync(memoryStreamWhois);
    StreamWriter streamWriter = new StreamWriter(tcpClientWhois.GetStream());
    streamWriter.WriteLine(domain);
    streamWriter.Flush();
    // Give the server up to 3 seconds to respond.
    copying.Wait(3000);
    toReturn = Encoding.ASCII.GetString(memoryStreamWhois.ToArray());
    // IANA responds with a "refer:" line pointing to the TLD's own WHOIS server.
    if (toReturn.Contains("refer:"))
    {
        toReturn = Whois(domain, toReturn.Split('\n')
            .Where(w => w.StartsWith("refer:"))
            .Select(r => r.Replace("refer:", "").Trim())
            .First());
    }
    return toReturn;
}
When I run it, it works for most TLDs, like .com or .org, but not for .co.uk or .network, and probably others too. I have no idea why it wouldn't work, because the right WHOIS server gets selected for the TLD. I am also not getting any errors.
I'm using .NET 7.0 and my Android 11 phone for testing.
I've tested this exact same function with the exact same domains on the same network, but in a console application, with no problems at all! Everything works fine except when I try this function in a Xamarin application.
For a project, I have to communicate with a Raspberry Pi Zero from a UWP app via TCP. Because both the Raspberry Pi and the computer with the interface have private IPs, I have to use a server to forward messages from one client to the other. This part already works, but now my problem is that I have to implement video streaming from the Raspberry Pi to the UWP app.
Because my partner is in charge of creating and designing the UWP app, I have made myself a little test interface with Windows Forms. I have tried several techniques, like piping the video output through netcat over the server to the client, or direct TCP streaming with raspivid, but the best solution so far is the one I found in this project here. But instead of using the Eneter.Messaging library, I use my own class for communication with TcpClients.
I use Mono to run my C# code on the Raspberry Pi, and the code to stream the video looks like this:
while (true)
{
    // Wait with streaming until the interface is connected.
    while (!RemoteDeviceConnected || VideoStreamPaused)
    {
        Thread.Sleep(500);
    }
    // Check whether the raspivid process is already running.
    if (!Array.Exists(Process.GetProcesses(), p => p.ProcessName.Contains("raspivid")))
        raspivid.Start();
    Thread.Sleep(2000);
    VideoData = new byte[VideoDataLength];
    try
    {
        // Note: ReadAsync returns 0 (not -1) at end of stream, so test for > 0
        // and forward only the bytes that were actually read.
        int bytesRead;
        while ((bytesRead = await raspivid.StandardOutput.BaseStream.ReadAsync(VideoData, 0, VideoDataLength)) > 0
            && !VideoChannelToken.IsCancellationRequested && RemoteDeviceConnected && !VideoStreamPaused)
        {
            // Send the captured chunk to the connected clients.
            VideoConnection.SendByteArray(VideoData, bytesRead);
        }
        raspivid.Kill();
        Console.WriteLine("Raspivid killed");
    }
    catch (ObjectDisposedException)
    {
    }
}
Basically, this method just reads the H.264 data from the standard output stream of the raspivid process in chunks and sends it to the server.
The next method runs on the server and just forwards the byte array to the connected interface client.
while (RCVVideo[id].Connected)
{
    // ReadAsync returns the number of bytes actually read (0 means the
    // connection closed); forward only what was received.
    int bytesRead = await RCVVideo[id].stream.ReadAsync(VideoData, 0, VideoDataLength);
    if (bytesRead > 0 && IFVideo[id] != null && IFVideo[id].Connected == true)
    {
        IFVideo[id].SendByteArray(VideoData, bytesRead);
    }
}
SendByteArray() uses the NetworkStream.Write() method.
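For completeness, a sketch of what such a method might look like; this shape is assumed from the description above, not taken from the actual class:

// "stream" is assumed to be the connection's underlying NetworkStream.
public void SendByteArray(byte[] data, int length)
{
    // Write exactly "length" bytes from the buffer to the stream.
    stream.Write(data, 0, length);
}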
On the interface, I write the received byte[] to a named pipe, to which the VLC control connects:
while (VideoConnection.Connected)
{
    int bytesRead = await VideoConnection.stream.ReadAsync(VideoData, 0, VideoDataLength);
    if (VideoPipe.IsConnected && bytesRead > 0)
    {
        // Write only the bytes that were actually received to the pipe.
        VideoPipe.Write(VideoData, 0, bytesRead);
    }
}
The following code initializes the pipe server:
// Open pipe that will be read by VLC.
VideoPipe = new NamedPipeServerStream(@"\raspipipe",
    PipeDirection.Out, 1,
    PipeTransmissionMode.Byte,
    PipeOptions.WriteThrough, 0, 10000);
And for VLC:
LibVLC libVLC = new LibVLC();
videoView1.MediaPlayer = new MediaPlayer(libVLC);
videoView1.MediaPlayer.Play(new Media(libVLC, @"stream/h264://\\\.\pipe\raspipipe", FromType.FromLocation));
videoView1.MediaPlayer.EnableHardwareDecoding = true;
videoView1.MediaPlayer.FileCaching = 0;
videoView1.MediaPlayer.NetworkCaching = 300;
This works fine in the Windows Forms app, and I can get the delay down to 2 or 3 seconds (it should be better in the end, but it is acceptable). But in the UWP app I can't get it to work, even after adding /LOCAL/ to the pipe name. It shows that the VLC control connects to the pipe, and I can see that data is written to the pipe, but it doesn't display video.
So my question is:
How can I get this to work with the VLC control (LibVLCSharp) in UWP? Am I missing something fundamental?
Or is there even a better way to stream the video in this case?
I have researched the UWP MediaPlayerElement a bit too, but I can't find a way to get my byte[] into it.
First of all, thank you for your quick responses and interesting ideas!
I took a look at Desktop Bridge, but it is not really what I wanted, because my colleague has already put a lot of effort into designing the UWP app, and my Windows Forms app is just a rough test bed to try things out.
But the thing that really worked for me was StreamMediaInput. I have no idea how I missed this before. This way, I just pass my NetworkStream directly to the MediaPlayer, without using a named pipe.
LibVLC libVLC = new LibVLC();
videoView1.MediaPlayer = new MediaPlayer(libVLC);
// ":demux=h264" tells VLC that the stream is raw H.264 elementary video.
Media streamMedia = new Media(libVLC, new StreamMediaInput(Client.Channels.VideoConnection.stream), ":demux=h264");
videoView1.MediaPlayer.EnableHardwareDecoding = true;
videoView1.MediaPlayer.FileCaching = 0;
videoView1.MediaPlayer.NetworkCaching = 500;
videoView1.MediaPlayer.Play(streamMedia);
This solution is now working for me both in UWP and in Windows Forms.
I have a basic producer app and a consumer app. If I run both and have both start consuming on their respective topics, I have a great working system. My thought was that if I started the producer and sent a message, I would then be able to start the consumer and have it pick up that message. I was wrong.
Unless both are up and running, I lose messages (or they do not get consumed).
My consumer app looks like this for consuming...
Uri uri = new Uri("http://localhost:9092");
KafkaOptions options = new KafkaOptions(uri);
BrokerRouter brokerRouter = new BrokerRouter(options);
Consumer consumer = new Consumer(new ConsumerOptions(receiveTopic, brokerRouter));

// Fetch the current offsets and skip to the end of each partition.
List<OffsetResponse> offset = consumer.GetTopicOffsetAsync(receiveTopic, 100000).Result;
IEnumerable<OffsetPosition> t = from x in offset select new OffsetPosition(x.PartitionId, x.Offsets.Max());
consumer.SetOffsetPosition(t.ToArray());

IEnumerable<KafkaNet.Protocol.Message> msgs = consumer.Consume();
foreach (KafkaNet.Protocol.Message msg in msgs)
{
    // do some stuff here based on the message received
}
Unless I include the offset code in the middle (GetTopicOffsetAsync / SetOffsetPosition), it starts at the beginning every time I start the application.
What is the proper way to manage topic offsets so messages are consumed after a disconnect happens?
If I run
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic chat-message-reply-XXX --consumer-property fetch-size=40000000 --from-beginning
I can see the messages, but when I connect my application to that topic, consumer.Consume() does not pick up the messages it has not already seen. I have tried this with and without running the above bat file, to see if that makes any difference. When I look at the consumer.SetOffsetPosition(t.ToArray()) call (t, specifically), it shows that the offset is the count of all messages for the topic.
Please help.
Set the auto.offset.reset configuration in your ConsumerOptions to earliest. When a consumer group first starts consuming messages, it consumes from the latest offset, because the default value of auto.offset.reset is latest.
But I looked at the kafka-net API now: it does not have an AutoOffsetReset property, and its consumer configuration seems pretty limited. It also lacks documentation with method summaries.
I would suggest you use the Confluent .NET Kafka NuGet package, because it is owned by Confluent itself.
Also, why are you calling GetTopicOffsets and setting that offset back again on the consumer? I think when you configure your consumer, you should just start reading messages with Consume().
Try this:
static void Main(string[] args)
{
    var uri = new Uri("http://localhost:9092");
    var kafkaOptions = new KafkaOptions(uri);
    var brokerRouter = new BrokerRouter(kafkaOptions);
    var consumerOptions = new ConsumerOptions(receivedTopic, brokerRouter);
    var consumer = new Consumer(consumerOptions);
    foreach (var msg in consumer.Consume())
    {
        var value = Encoding.UTF8.GetString(msg.Value);
        // Process value here
    }
}
In addition, enable logs in your KafkaOptions and ConsumerOptions; they will help you a lot:
var kafkaOptions = new KafkaOptions(uri)
{
    Log = new ConsoleLog()
};
var consumerOptions = new ConsumerOptions(topic, brokerRouter)
{
    Log = new ConsoleLog()
};
I switched over to use Confluent's C# .NET package and it now works.
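For reference, a minimal sketch of what that looks like with the Confluent.Kafka package; the topic name and group id are placeholders:

using System;
using System.Threading;
using Confluent.Kafka;

class Program
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "my-consumer-group", // any stable group id
            // Only used when the group has no committed offset yet; afterwards
            // the consumer resumes from its committed position.
            AutoOffsetReset = AutoOffsetReset.Earliest,
            EnableAutoCommit = true
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("receive-topic");
            try
            {
                while (true)
                {
                    var result = consumer.Consume(CancellationToken.None);
                    Console.WriteLine($"Received: {result.Message.Value}");
                }
            }
            finally
            {
                consumer.Close(); // leave the group cleanly
            }
        }
    }
}

Because offsets are committed per group id, a consumer that starts after the producer has already sent messages will still pick them up.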
I have a C# client which I want to monitor with Azure Application Insights.
I've added the following Nugets:
Microsoft.ApplicationInsights v2.9.1
Microsoft.ApplicationInsights.Agent.Intercept v2.4.0
Microsoft.ApplicationInsights.DependencyCollector v2.9.1
Microsoft.ApplicationInsights.PerfCounterCollector v2.9.1
Microsoft.ApplicationInsights.Web v2.9.1
Microsoft.ApplicationInsights.WindowsServer v2.9.1
Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel v2.9.1
Microsoft.AspNet.TelemetryCorrelation v1.0.5
System.Diagnostics.DiagnosticSource v4.5.1
The problem is that the Azure Portal is recognizing my events and exceptions in the usage category, but not in the Live Metrics Stream. My client is connected and the Live Metrics Stream is available, but no telemetry data like tracked events or exceptions is shown. Even in Visual Studio, there is no Application Insights data available while debugging.
I've tried several times to uninstall and reinstall all the NuGets, and updated them to the newest versions, but without any effect.
This is the related code for my Azure client. The Payload is just a class with a few properties which I want to track. It's mapped into a dictionary for the payload to send.
public override void Initialize()
{
    try
    {
        base.Initialize();
        var configuration = new TelemetryConfiguration();
        configuration.InstrumentationKey = Configuration.AnalyticsCodeId;
        var dependencies = new DependencyTrackingTelemetryModule();
        dependencies.Initialize(configuration);
        configuration.TelemetryInitializers.Add(new Microsoft.ApplicationInsights.Extensibility.OperationCorrelationTelemetryInitializer());
        configuration.TelemetryInitializers.Add(new ClientIpHeaderTelemetryInitializer());
        configuration.TelemetryInitializers.Add(new AccountIdTelemetryInitializer());
        customTelemetry = new AzureCustomTelemetryInitializer(Payload);
        configuration.TelemetryInitializers.Add(customTelemetry);
        client = new TelemetryClient(configuration);
        if (CheckTrackingIsAllowed())
            InitLiveMetric(configuration);
    }
    catch (Exception e)
    {
        Log.Write(e);
    }
}
private void InitLiveMetric(TelemetryConfiguration configuration)
{
    // Insert the QuickPulse (Live Metrics) processor into the telemetry
    // processor chain, then register it with the QuickPulse module so the
    // collected metrics are streamed to the Live Metrics endpoint.
    QuickPulseTelemetryProcessor processor = null;
    configuration.TelemetryProcessorChainBuilder
        .Use((next) =>
        {
            processor = new QuickPulseTelemetryProcessor(next);
            return processor;
        })
        .Build();
    var quickPulse = new QuickPulseTelemetryModule();
    quickPulse.Initialize(configuration);
    quickPulse.RegisterTelemetryProcessor(processor);
}
public override void SendEventAsync(string eventName, string modulName)
{
    if (!CheckTrackingIsAllowed())
        return;
    Task.Run(() =>
    {
        var p = MapAzurePayload(Payload);
        client.TrackEvent(eventName, p);
    });
}
This code seems to work properly, as I can see the tracked events and exceptions in the usage category in the Azure Portal. But as I said, not in Live Metrics, which would be very nice and should normally work with this code, I think.
Any ideas why the Live Metrics Stream is not working as intended?
Edit: Found the reason... The problem is that my first tracked event is sent while the client does not seem to be ready yet. If I delay the sending, it works as intended.
My solution is to delay the first tracked event. Not nice, but I have no other idea...
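A minimal sketch of that workaround; the firstEventSent flag and the 5-second delay are assumptions, not part of the original code:

private bool firstEventSent; // hypothetical flag: has anything been tracked yet?

public override void SendEventAsync(string eventName, string modulName)
{
    if (!CheckTrackingIsAllowed())
        return;
    Task.Run(async () =>
    {
        if (!firstEventSent)
        {
            // Assumed delay: give the QuickPulse/Live Metrics connection
            // time to establish before the first event is tracked.
            await Task.Delay(TimeSpan.FromSeconds(5));
            firstEventSent = true;
        }
        var p = MapAzurePayload(Payload);
        client.TrackEvent(eventName, p);
    });
}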
I have a Windows Service wrapping a WCF service, which contains a WorkflowApplication, which runs Activities. I have also configured SQL Server 2008 Express (I know it's approaching EOL, but the documentation explicitly states that only SQL Server 2005 or SQL Server 2008 are supported) to host the database, and the connection works. To be even clearer: the entire process of the Activity completes and the caller receives the return value (I'm calling it via the WCF client, wrapped in PowerShell).
The issue I'm having is that I've configured SqlWorkflowInstanceStoreBehavior on the ServiceHost and SqlWorkflowInstanceStore on the WorkflowApplication. Neither of these throws a SQL exception, but I think the ServiceHost is taking precedence, as all I can see is a single entry in the LockOwnersTable.
Code from Windows Service:
this.obj = new ServiceHost(typeof(WorkflowService));
SqlWorkflowInstanceStoreBehavior instanceStoreBehavior = new SqlWorkflowInstanceStoreBehavior("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
    HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
    InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.AggressiveRetry,
    InstanceEncodingOption = InstanceEncodingOption.GZip,
    RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(2)
};
this.obj.Description.Behaviors.Add(instanceStoreBehavior);
this.obj.Open();
Code from WCF Service/WorkflowApplication:
SqlWorkflowInstanceStore newSqlWorkflowInstanceStore = new SqlWorkflowInstanceStore("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
    EnqueueRunCommands = true,
    HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
    InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5)
};
InstanceHandle workflowInstanceStoreHandle = newSqlWorkflowInstanceStore.CreateInstanceHandle();
CreateWorkflowOwnerCommand createWorkflowOwnerCommand = new CreateWorkflowOwnerCommand();
InstanceView newInstanceView = newSqlWorkflowInstanceStore.Execute(workflowInstanceStoreHandle, createWorkflowOwnerCommand, TimeSpan.FromSeconds(30));
newSqlWorkflowInstanceStore.DefaultInstanceOwner = newInstanceView.InstanceOwner;

// Now stage the WorkflowApplication, using the SQL instance store.
AutoResetEvent syncEvent = new AutoResetEvent(false);
WorkflowApplication newWorkflowApplication = new WorkflowApplication(unwrappedActivity)
{
    InstanceStore = newSqlWorkflowInstanceStore
};
Questions:
1. Does the ServiceHost's SqlWorkflowInstanceStoreBehavior override the SqlWorkflowInstanceStore on the WorkflowApplication? If so, the obvious answer would be to remove the SqlWorkflowInstanceStoreBehavior from the ServiceHost; however, as inferred before, I fear that will prove fruitless, as the WorkflowApplication currently isn't logging anything (or even attempting to, from what I can tell).
2. ASAppInstanceService seems specific to Windows Server. Is it possible to host this (for dev/pre-production) on Windows 10, if the ServiceHost (via the Windows Service option) is always going to block the WorkflowApplication from making the SQL calls?
Figured out the answer:
newWorkflowApplication.Persist();
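For context, a sketch of where that call fits, based on the setup from the question (the Run() call at the end is assumed):

WorkflowApplication newWorkflowApplication = new WorkflowApplication(unwrappedActivity)
{
    InstanceStore = newSqlWorkflowInstanceStore
};
// Nothing is written to the instance store until the workflow is explicitly
// persisted (or persistence is triggered via PersistableIdle/Unload).
newWorkflowApplication.Persist();
newWorkflowApplication.Run();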