So far my application logs all trace events to a file on my hard drive. I now want to enhance it with a separate Windows trace application that listens to the trace output from the main application (as it is generated) and reports it in a gridview-like interface. Now the question is:
What kind of TraceListener should I use to get the maximum benefit in terms of the speed at which the log information is read?
Restrictions
Owing to certain restrictions, I cannot use a database for logging and reading.
Will listening to the application event logs help in any way?
Thanks for the suggestions and time.
The default trace listener writes through the Win32 API OutputDebugString. You can capture messages passed to that API with existing tools; have a look at DebugView, for instance:
http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx
Maybe that will save you the time it takes to write your own.
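For example (a minimal sketch, assuming a build with the TRACE symbol defined and the DefaultTraceListener still registered), a plain Trace.WriteLine call already ends up in OutputDebugString and shows up in a tool like DebugView with no extra code:
// With the DefaultTraceListener in place, this message is forwarded to
// OutputDebugString and can be captured by DebugView.
System.Diagnostics.Trace.WriteLine("Order 42 processed");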
If you do want to write your own and your main concern is getting trace messages to the viewer application as fast as possible, you could have your TraceListener accept network connections from the viewer. Whenever a trace message is handled by the listener, you would write it to the network. If you're not concerned about viewing trace messages from a remote machine, then listening to what is written to OutputDebugString is also an option, of course.
This would affect the performance of the application doing the tracing, so writing to the network is best done asynchronously without blocking the trace write call: incoming trace messages go into a queue, and a background thread drains the queue and writes them to the network.
Here is a simple example which probably works:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Net.Sockets;
using System.Threading;

public class NetworkPublishingTraceListener : TraceListener {
    private List<string> messages = new List<string>();
    private readonly AutoResetEvent messagesAvailable = new AutoResetEvent(false);
    private readonly List<TcpClient> traceViewerApps = new List<TcpClient>();
    private readonly object messageQueueLock = new object();

    public NetworkPublishingTraceListener(int port) {
        // Setup code for accepting network connections goes here: listen on 'port'
        // and add each accepted TcpClient to traceViewerApps.
        (new Thread(BackgroundThread) { IsBackground = true }).Start();
    }

    public override void Write(string message) {
        if (traceViewerApps.Count == 0) {
            return; // no viewer connected, nothing to queue
        }
        lock (messageQueueLock) {
            messages.Add(message);
        }
        messagesAvailable.Set();
    }

    public override void WriteLine(string message) {
        Write(message + Environment.NewLine);
    }

    private void BackgroundThread() {
        while (true) {
            messagesAvailable.WaitOne();
            // Swap the queue under the lock so tracing threads only pay for a
            // list swap, never for a network write.
            List<string> messagesToWrite;
            lock (messageQueueLock) {
                messagesToWrite = messages;
                messages = new List<string>();
            }
            traceViewerApps.ForEach(viewerApp => {
                StreamWriter writer = new StreamWriter(viewerApp.GetStream());
                messagesToWrite.ForEach(message => writer.Write(message));
                writer.Flush(); // push the batch out without closing the underlying stream
            });
        }
    }
}
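To wire it up, the application doing the tracing would register the listener at startup; the port number below is just a placeholder:
// Registration in the traced application (port 9999 is an arbitrary example).
Trace.Listeners.Add(new NetworkPublishingTraceListener(9999));
Trace.WriteLine("Application started");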
Why don't you use log4net or NLog?
So basically I want my server to raise an event (or a callback) when a connected client sends data. I can't come up with a solution to this problem and can't find anything online after days of searching.
What I've thought of is making an asynchronous foreach loop that goes through all the connected users and checks whether there is any data to be read on each one (using TcpClient.Available, though a network stream could also check this), but an infinite loop like this without any pause would be bad practice and use an insane amount of resources (from what I understand at least; I am new to threading and networking).
There is logic I need to execute whenever the server gets data from a client (in this case a message, because it's a chat application), basically broadcasting it to every other user, but I just can't figure out how to detect that a user has sent data so that an event is raised to broadcast the message, log the message, etc.
Please be "soft" with the explanations as I am new to threading/networking, and thank you in advance.
As requested, here is my code. Note that it is prototype-y and a bit unfinished, but I'm sure it gets the point across:
//Properties
public List<User> ConnectedUsers { get; private set; } = new List<User>();
public TcpListener listener { get; set; }
public bool IsListeningForConnections { get; set; }
public int DisconnectionCheckInterval { get; set; } //in seconds
//Events
public event EventHandler<ServerEventArgs> UserConnected;
public event EventHandler<ServerEventArgs> MessageReceived;
public NetworkManager()
{
listener = new TcpListener(IPAddress.Parse("192.168.1.86"), 6000); //binds // TODO: Change to: user input / prop file
DisconnectionCheckInterval = 10;
IsListeningForConnections = false;
}
public async void StartListeningForConnections()
{
IsListeningForConnections = true;
listener.Start();
while (IsListeningForConnections)
{
User newUser = new User();
newUser.TcpClient = await listener.AcceptTcpClientAsync();
OnUserConnected(newUser); // raises/triggers the event
}
}
public void StartListeningForDisconnections()
{
System.Timers.Timer disconnectionIntervalTimer = new System.Timers.Timer(DisconnectionCheckInterval * 1000);
//TODO: setup event
//disconnectionIntervalTimer.Elasped += ;
disconnectionIntervalTimer.AutoReset = true;
disconnectionIntervalTimer.Enabled = true;
//disconnectionIntervalTimer.Stop();
//disconnectionIntervalTimer.Dispose();
}
public async void StartListeningForData()
{
//??????????
}
public async void SendData(string data, TcpClient recipient)
{
try
{
byte[] buffer = Encoding.ASCII.GetBytes(data);
NetworkStream stream = recipient.GetStream();
await stream.WriteAsync(buffer, 0, buffer.Length); //await
Array.Clear(buffer, 0, buffer.Length);
}
catch { } //TODO: handle exception when message couldn't be sent (user disconnected)
}
public string ReceiveData(TcpClient sender)
{
try
{
NetworkStream stream = sender.GetStream();
byte[] buffer = new byte[1024];
int bytesRead = stream.Read(buffer, 0, buffer.Length); // capture how many bytes were actually read
return Encoding.ASCII.GetString(buffer, 0, bytesRead);
}
catch
{
return null; //TODO: handle exception when message couldn't be read (user disconnected)
}
}
protected virtual void OnUserConnected(User user)
{
ConnectedUsers.Add(user);
UserConnected?.Invoke(this, new ServerEventArgs() { User = user });
}
protected virtual void OnMessageReceived(User user, Message message) //needs trigger
{
MessageReceived?.Invoke(this, new ServerEventArgs() { User = user, Message = message });
}
Basically, a different class will call the three methods that start with "StartListeningFor...", and one of the three corresponding events is raised when one of the checks goes through (disconnection/connection/new message) so that the data can be processed. I just can't work out how to raise an event when a new message arrives for each user.
What I've thought of is making an asynchronous foreach loop that goes through all the connected users and checks whether there is any data to be read on each one (using TcpClient.Available, though a network stream could also check this), but an infinite loop like this without any pause would be bad practice and use an insane amount of resources
The standard practice is to have an "infinite" loop for each connected client, so that there is always a read pending on every socket. I put "infinite" in quotes because it will actually stop eventually: either by reading 0 bytes (indicating end of stream) or by receiving an exception (indicating a broken connection).
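As a rough sketch adapted to the code in the question (User, Message, OnMessageReceived, ConnectedUsers and the ASCII encoding come from the posted prototype; the Message initializer assumes a Text property the question doesn't show, and real code would need locking around ConnectedUsers; it also needs System.IO, System.Net.Sockets, System.Text and System.Threading.Tasks):
// Call this once per accepted client, e.g. from OnUserConnected, and keep the
// returned Task if you want to observe failures.
public async Task ListenForDataAsync(User user)
{
    var buffer = new byte[1024];
    NetworkStream stream = user.TcpClient.GetStream();
    try
    {
        while (true)
        {
            int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length);
            if (bytesRead == 0)
            {
                break; // 0 bytes read means the remote side closed the connection
            }
            string text = Encoding.ASCII.GetString(buffer, 0, bytesRead);
            OnMessageReceived(user, new Message { Text = text }); // assumes Message has a Text property
        }
    }
    catch (IOException)
    {
        // Broken connection; fall through to the cleanup below.
    }
    ConnectedUsers.Remove(user); // treat end-of-stream and errors as a disconnect
}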
I am new to threading/networking
It's funny how often I see developers trying to learn networking and threading at the same time. Let me be clear: threading and TCP/IP sockets are both extremely complicated and take quite a bit of time to learn all the sharp corners. Trying to learn both of these topics at once is insane. I strongly recommend choosing one of them to learn about (I'd recommend threading first), and only after that one is mastered, proceed to the other.
RabbitMQ
If you have access to the client-side code, I'd consider using something like RabbitMQ or a similar queue service. This lets you link the different apps together through a message broker or queue and get messages/events in real time.
There are handlers you can attach that are called whenever a message is received.
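For illustration only, here is a minimal sketch using the RabbitMQ.Client NuGet package (assuming roughly the 6.x API where ea.Body is a ReadOnlyMemory<byte>; host name and queue name are placeholders). Receiving is event-driven rather than polled:
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class ChatConsumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder host
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "chat", durable: false, exclusive: false,
                                 autoDelete: false, arguments: null);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                // Called by the client library whenever a message arrives.
                string message = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("Received: " + message);
            };
            channel.BasicConsume(queue: "chat", autoAck: true, consumer: consumer);

            Console.ReadLine(); // keep the process alive while consuming
        }
    }
}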
I have a job that imports files into a system. Every time a file is imported, we create a blob in Azure and send a message with instructions to a queue so that the data is persisted to SQL accordingly. We do this using Azure WebJobs and the WebJobs SDK.
We ran into an issue in which, after messages had failed more than 7 times, they didn't move to the poison queue as expected. The code is the following:
Program.cs
public class Program
{
static void Main()
{
//Set up DI
var module = new CustomModule();
var kernel = new StandardKernel(module);
//Configure JobHost
var storageConnectionString = AppSettingsHelper.Get("StorageConnectionString");
var config = new JobHostConfiguration(storageConnectionString) { JobActivator = new JobActivator(kernel), NameResolver = new QueueNameResolver() };
config.Queues.MaxDequeueCount = 7;
config.UseTimers();
//Pass configuration to JobJost
var host = new JobHost(config);
host.RunAndBlock();
}
}
Functions.cs
public class Functions
{
private readonly IMessageProcessor _fileImportQueueProcessor;
public Functions(IMessageProcessor fileImportQueueProcessor)
{
_fileImportQueueProcessor = fileImportQueueProcessor;
}
public async void FileImportQueue([QueueTrigger("%fileImportQueueKey%")] string item)
{
await _fileImportQueueProcessor.ProcessAsync(item);
}
}
_fileImportQueueProcessor.ProcessAsync(item) threw an exception, and the message's dequeue count was properly increased before the message was re-processed. However, it was never moved to the poison queue. I attached a screenshot of the queues with the dequeue counts at over 50.
After multiple failures the WebJob was stuck in a Pending Restart state and I was unable to either stop or start it, so I ended up deleting it completely. After running the WebJob locally, I saw messages being processed (I assumed that the ones with a dequeue count over 7 should have been moved to the poison queue).
Any ideas on why this is happening and what can be done to get the desired behavior?
Thanks,
Update
Vivien's solution below worked.
Matthew was kind enough to do a PR that will address this. You can check out the PR here.
Fred,
The FileImportQueue method being an async void is the source of your problem.
Update it to return a Task:
public class Functions
{
private readonly IMessageProcessor _fileImportQueueProcessor;
public Functions(IMessageProcessor fileImportQueueProcessor)
{
_fileImportQueueProcessor = fileImportQueueProcessor;
}
public async Task FileImportQueue([QueueTrigger("%fileImportQueueKey%")] string item)
{
await _fileImportQueueProcessor.ProcessAsync(item);
}
}
The reason the dequeue count goes over 50 is that when _fileImportQueueProcessor.ProcessAsync(item) throws an exception, it crashes the whole process, which means the WebJobs SDK can't execute the follow-up step that would move the message to the poison queue.
When the message becomes visible in the queue again, the SDK processes it again, and so on.
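A minimal, self-contained illustration (not WebJobs-specific) of why the exception escapes: an async void method gives the caller no Task to observe, so the exception is re-thrown on the ThreadPool and tears the process down, while an async Task method hands the exception back on the returned Task:
using System;
using System.Threading.Tasks;

class AsyncVoidDemo
{
    static async void FireAndForget()        // exception cannot be observed by the caller
    {
        await Task.Delay(10);
        throw new InvalidOperationException("boom");
    }

    static async Task Awaitable()            // exception is captured on the returned Task
    {
        await Task.Delay(10);
        throw new InvalidOperationException("boom");
    }

    static async Task Main()
    {
        try { FireAndForget(); } catch { }   // never catches: the throw happens later, elsewhere
        try { await Awaitable(); } catch (InvalidOperationException) { /* handled here */ }

        await Task.Delay(1000);              // gives FireAndForget time to fault -> process crashes
    }
}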
I have a .NET Remoting service which works fine most of the time. If an exception or error happens, it logs the error to a file but still continues to run.
However, about once every two weeks the service stops responding to clients, which causes the client application to crash with a SocketException with the following message:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
No exception or stack trace is written to our log file, so I can't figure out where the service is crashing, which leads me to believe that whatever is failing is somewhere outside of my code. What additional steps can I take to figure out the root cause of this crash? I would imagine that it writes something to an event log somewhere, but I am not very familiar with Windows' event logging system, so I'm not exactly sure where to look.
Thanks in advance for any assistance with this.
EDIT: Forgot to mention, stopping or restarting the service does nothing; the service never responds. I need to manually kill the process before I can start the service again.
EDIT 2:
public class ClientInfoServerSinkProvider :
IServerChannelSinkProvider
{
private IServerChannelSinkProvider _nextProvider = null;
public ClientInfoServerSinkProvider()
{
}
public ClientInfoServerSinkProvider(
IDictionary properties,
ICollection providerData)
{
}
public IServerChannelSinkProvider Next
{
get { return _nextProvider; }
set { _nextProvider = value; }
}
public IServerChannelSink CreateSink(IChannelReceiver channel)
{
IServerChannelSink nextSink = null;
if (_nextProvider != null)
{
nextSink = _nextProvider.CreateSink(channel);
}
return new ClientIPServerSink(nextSink);
}
public void GetChannelData(IChannelDataStore channelData)
{
}
}
public class ClientIPServerSink :
BaseChannelObjectWithProperties,
IServerChannelSink,
IChannelSinkBase
{
private IServerChannelSink _nextSink;
public ClientIPServerSink(IServerChannelSink next)
{
_nextSink = next;
}
public IServerChannelSink NextChannelSink
{
get { return _nextSink; }
set { _nextSink = value; }
}
public void AsyncProcessResponse(
IServerResponseChannelSinkStack sinkStack,
Object state,
IMessage message,
ITransportHeaders headers,
Stream stream)
{
IPAddress ip = headers[CommonTransportKeys.IPAddress] as IPAddress;
CallContext.SetData("ClientIPAddress", ip);
sinkStack.AsyncProcessResponse(message, headers, stream);
}
public Stream GetResponseStream(
IServerResponseChannelSinkStack sinkStack,
Object state,
IMessage message,
ITransportHeaders headers)
{
return null;
}
public ServerProcessing ProcessMessage(
IServerChannelSinkStack sinkStack,
IMessage requestMsg,
ITransportHeaders requestHeaders,
Stream requestStream,
out IMessage responseMsg,
out ITransportHeaders responseHeaders,
out Stream responseStream)
{
if (_nextSink != null)
{
IPAddress ip =
requestHeaders[CommonTransportKeys.IPAddress] as IPAddress;
CallContext.SetData("ClientIPAddress", ip);
ServerProcessing spres = _nextSink.ProcessMessage(
sinkStack,
requestMsg,
requestHeaders,
requestStream,
out responseMsg,
out responseHeaders,
out responseStream);
return spres;
}
else
{
responseMsg = null;
responseHeaders = null;
responseStream = null;
return new ServerProcessing();
}
}
}
This is like trying to find out why nobody picks up the phone when you call a friend. And the problem is that his house burned down to the ground. An imperfect view of what is going on is the core issue, especially bad with a service because there is so little to look at.
This can't get better until you use that telephone to talk to the service programmer and get him involved with the problem. Somebody is going to have to debug this. And yes, it will be difficult; failing once every two weeks might not be considered critical enough, or it may be too long to sit around waiting for it to happen. The only practical thing you can do to help is to create a minidump of the process and pass that to the service programmer so he's got something to poke at. If the service runs on another machine, then get the LAN admin involved as well.
The issue was due to a deadlock in my code. If memory serves, I had two lock objects and acquired one from inside the other, essentially making two threads wait on each other. I was able to determine this by attaching a debugger to the remote service.
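For anyone else hitting this: here is a tiny, hypothetical illustration (not the actual service code) of the lock-ordering problem described above, where two locks taken in opposite order by two threads eventually deadlock:
using System.Threading;

class DeadlockDemo
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Worker1()
    {
        lock (LockA)                // holds A...
        {
            Thread.Sleep(50);
            lock (LockB) { }        // ...and waits for B
        }
    }

    static void Worker2()
    {
        lock (LockB)                // holds B...
        {
            Thread.Sleep(50);
            lock (LockA) { }        // ...and waits for A -> both threads block forever
        }
    }

    static void Main()
    {
        new Thread(Worker1).Start();
        new Thread(Worker2).Start();
        // Always acquiring the locks in the same order (or using a single lock)
        // removes the cycle and the deadlock.
    }
}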
I am relatively new to both MSMQ and threading in .NET. I have to create a service that monitors several network devices via TCP and SNMP, with all of that work running in dedicated threads, but it is also required to listen on an MSMQ queue for messages from other applications.
I have been analyzing other, similar projects, and they use the following logic:
private void MSMQRetrievalProc()
{
try
{
Message mes;
WaitHandle[] handles = new WaitHandle[1] { exitEvent };
while (!exitEvent.WaitOne(0, false))
{
try
{
mes = MyQueue.Receive(new TimeSpan(0, 0, 1));
HandleMessage(mes);
}
catch (MessageQueueException)
{
}
}
}
catch (Exception Ex)
{
//Handle Ex
}
}
MSMQRetrievalThread = new Thread(MSMQRetrievalProc);
MSMQRetrievalThread.Start();
But in another service (a message dispatcher) I used asynchronous message reading, based on the MSDN example:
public RootClass() //constructor of Main Class
{
MyQ = CreateQ(@".\Private$\MyQ"); // Get or create the MSMQ queue
// Add an event handler for the ReceiveCompleted event.
MyQ.ReceiveCompleted += new
ReceiveCompletedEventHandler(MsgReceiveCompleted);
// Begin the asynchronous receive operation.
MyQ.BeginReceive();
}
private void MsgReceiveCompleted(Object source, ReceiveCompletedEventArgs asyncResult)
{
try
{
// Connect to the queue.
MessageQueue mq = (MessageQueue)source;
// End the asynchronous Receive operation.
Message m = mq.EndReceive(asyncResult.AsyncResult);
// Process received message
// Restart the asynchronous Receive operation.
mq.BeginReceive();
}
catch (MessageQueueException Ex)
{
// Handle sources of MessageQueueException.
}
return;
}
Does asynchronous handling mean that every message will be handled on a thread other than the main thread?
Could this (second) approach be put on a separate thread, and does it need to be?
Please advise on a better approach or some simple alternatives.
Message arrival in the queue doesn't follow any predictable pattern. It may be that no message arrives for a long time, or that many (up to 10 or even more) messages arrive in one second. Based on the actions defined in some messages, the service will need to delete or change objects that have running threads.
I highly recommend using WCF for MSMQ.
http://msdn.microsoft.com/en-us/library/ms789048.aspx
This allows you to handle the incoming calls asynchronously using the WCF threading model, which allows for throttling, capping, retries, etc.
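As a rough sketch of what that can look like (the contract, service name, queue path and binding settings below are made up, and it assumes a non-transactional private queue named MyQ already exists): the queue is fronted by a one-way WCF contract, and WCF dispatches each received message on its own worker thread, subject to the usual throttling settings:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IDeviceCommands
{
    [OperationContract(IsOneWay = true)]   // MSMQ endpoints require one-way operations
    void Handle(string command);
}

public class DeviceCommandService : IDeviceCommands
{
    public void Handle(string command)
    {
        // Invoked by WCF for each message read from the queue.
        Console.WriteLine("Received: " + command);
    }
}

class Host
{
    static void Main()
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            ExactlyOnce = false,           // this sketch assumes a non-transactional queue
            Durable = false
        };
        using (var host = new ServiceHost(typeof(DeviceCommandService)))
        {
            host.AddServiceEndpoint(typeof(IDeviceCommands), binding,
                "net.msmq://localhost/private/MyQ");
            host.Open();
            Console.WriteLine("Listening on MyQ; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }
}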
We're evaluating db4o (an OO-DBMS from http://www.db4o.com). We've put together a performance test for client/server mode, where we spin up a server, then hammer it with several clients at once. It seems like the server can only process one client's query at a time.
Have we missed a configuration switch somewhere that allows for this scenario? The server implementation is below. The client connects, queries (read-only), and disconnects per operation, and operations run one immediately after the other from several worker threads in the client process. We see the same behaviour if we spin up several client processes with one worker each against the same server.
Any suggestions?
Edit: We've now discovered, and tried out, the Lazy and Snapshot query modes, and although this partially alleviates the blocking-server problem, we still see significant concurrency problems when our clients (we run 40 concurrent test clients that wait 1-300 ms before issuing a random operation request) hammer on the server. There appear to be exceptions emanating from the LINQ provider and from the IO internals :-(
public class Db4oServer : ServerConfiguration, IMessageRecipient
{
private bool stop;
#region IMessageRecipient Members
public void ProcessMessage(IMessageContext con, object message)
{
if (message is StopDb4oServer)
{
Close();
}
}
#endregion
public static void Main(string[] args)
{
//Ingestion.Do();
new Db4oServer().Run(true, true);
}
public void Run(bool shouldIndex, bool shouldOptimizeNativeQueries)
{
lock (this)
{
var cfg = Db4oFactory.NewConfiguration();
if (shouldIndex)
{
cfg.ObjectClass(typeof (Sequence))
.ObjectField("<ChannelID>k__BackingField")
.Indexed(true);
cfg.ObjectClass(typeof (Vlip))
.ObjectField("<ChannelID>k__BackingField")
.Indexed(true);
}
if (shouldOptimizeNativeQueries)
{
cfg.OptimizeNativeQueries(true);
}
var server = Db4oFactory.OpenServer(cfg, FILE, PORT);
server.GrantAccess("0", "kieran");
server.GrantAccess("1", "kieran");
server.GrantAccess("2", "kieran");
server.GrantAccess("3", "kieran");
//server.Ext().Configure().ClientServer().SingleThreadedClient(false);
server.Ext().Configure().MessageLevel(3);
server.Ext().Configure().Diagnostic().AddListener(new DiagnosticToConsole());
server.Ext().Configure().ClientServer().SetMessageRecipient(this);
try
{
if (!stop)
{
Monitor.Wait(this);
}
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
server.Close();
}
}
public void Close()
{
lock (this)
{
stop = true;
Monitor.PulseAll(this);
}
}
}
Well, there is something in the db4o server that doesn't allow too many clients at a time, since it is too much for it to handle. You also locked it, which did nothing to help in this case.