I'm currently trying to make a log viewer for analyzing logs. I'm reading a log file that has a timestamp, log type, and text on each line. From this I create an object that has a DateTime, Type, and Text.
The Type is divided in 6 types:
public enum LogType
{
    DEBUG = 0,
    EVENT,
    ERROR,
    TEST_STEP,
    WARNING,
    SUCCESS
}
The idea is that each log level has a checkbox next to it so that I can quickly add or remove a specified log level. This works fine for small log files with a good spread of different log levels. But when I have huge log files, it takes some time to load, and since I iterate the array of objects each time a checkbox changes and print everything again line by line, this is quite inefficient.
Is there some better way to connect these objects to each line, or have another component where this is better suited so that you can easily hide or show the log levels?
Best Regards
Andreas
All I can recommend is using WPF and UI virtualization so that you don't burden the UI with a ton of elements that each represent a line in the log, and for the filtering use parallelization to speed things up and keep the UI responsive.
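As a rough sketch of the filtering side (the LogEntry, allEntries, and GetCheckedLogTypes names and the PLINQ usage are my own, not from the question), something along these lines keeps the work off the UI thread and hands the result to a virtualizing control in one go:

// Assumes a window with a virtualizing list control (e.g. a ListView, which
// uses a VirtualizingStackPanel by default) bound via LogView.ItemsSource.
private async void OnFilterChanged(object sender, RoutedEventArgs e)
{
    HashSet<LogType> selectedTypes = GetCheckedLogTypes(); // read the checkboxes

    // Filter on a worker thread so the UI stays responsive; PLINQ spreads the
    // work across cores for very large files.
    List<LogEntry> visible = await Task.Run(() =>
        allEntries.AsParallel()
                  .Where(entry => selectedTypes.Contains(entry.Type))
                  .OrderBy(entry => entry.Timestamp)
                  .ToList());

    // Replace the bound collection instead of printing line by line.
    LogView.ItemsSource = visible;
}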
Since you don't need to update in real time, you could read the entire file and store the different types in different classes, which inherit from the same base class. For example, you have a base class Log and a derived class DebugLog : Log, etc., for each type. You can define and implement most fields in the base class only, since it's simply storing some data, without the need to modify it.
These derived classes store the data separately per type of log level. That way, you can refresh your UI by clearing the list and loading only the data from the classes that you need, without the need to re-iterate through all of the data again (depending on the filters of course).
You would have to build some kind of sort handler to display the data correctly though.
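A bare-bones sketch of that layout (the class names other than Log and DebugLog are mine):

// Base class holds the shared data; derived classes only identify the type.
public abstract class Log
{
    public DateTime Timestamp { get; set; }
    public string Text { get; set; }
    public abstract LogType Type { get; }
}

public class DebugLog : Log
{
    public override LogType Type { get { return LogType.DEBUG; } }
}

public class ErrorLog : Log
{
    public override LogType Type { get { return LogType.ERROR; } }
}

// ...one derived class per LogType, plus a store that keeps the entries
// separated so a filter change only has to read the lists that are checked.
public class LogStore
{
    private readonly Dictionary<LogType, List<Log>> byType = new Dictionary<LogType, List<Log>>();

    public void Add(Log entry)
    {
        List<Log> list;
        if (!byType.TryGetValue(entry.Type, out list))
            byType[entry.Type] = list = new List<Log>();
        list.Add(entry);
    }

    public IEnumerable<Log> Get(IEnumerable<LogType> selectedTypes)
    {
        return selectedTypes.Where(byType.ContainsKey).SelectMany(t => byType[t]);
    }
}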
Related
I'm implementing the CQRS pattern with event sourcing, using NServiceBus, NEventStore and NES (which connects NSB and NEventStore).
My application will check a web service regularly for any file to be downloaded and processed. When a file is found, a command (DownloadFile) is sent to the bus and received by FileCommandHandler, which creates a new aggregate root (File) and handles the message.
Now inside the File aggregate root I have to check that the content of the file doesn't match any other file's content (since the web service only guarantees that the file name is unique; the content may be duplicated under a different name), by hashing it and comparing it with the list of hashed contents.
The question is: where should I save the list of hash codes? Is it allowed to query the read model?
public class File : AggregateBase
{
    public File(DownloadFile cmd, IFileService fileDownloadService, IClaimSerializerService serializerService, IBus bus)
        : this()
    {
        // code to download the file content, deserialize it, and publish an event.
    }
}
public class FileCommandHandler : IHandleMessages<DownloadFile>, IHandleMessages<ExtractFile>
{
    public void Handle(DownloadFile command)
    {
        // For example, is it possible to do this? (Honestly, I feel it is not, since the read model should always be considered stale!)
        var file = readModelContext.GetFileByHashCode(Hash(command.FileContent));
        if (file != null)
            throw new Exception("File content matched another already downloaded file");

        // Since there is no way to query the event source for file content like:
        // eventSourceRepository.Find<File>(c => c.HashCode == Hash(command.FileContent));
    }
}
Seems like you're looking for deduplication.
Your command side is where you want things to be consistent. Queries will always leave you open to race conditions. So, instead of running a query, I'd reverse the logic and actually write the hash into a database table (any db with ACID guarantees). If this write is successful, process the file. If the write of the hash fails, skip processing.
There's no point putting this logic into a handler, because retrying the message in case of failure (i.e. storing the hash multiple times) will not make it succeed. You'd also end up with messages for duplicate files in the error queue.
A good place for the deduplication logic is likely inside your web service client. Some pseudo logic
1. Get file
2. Open transaction
3. Insert hash into database & catch failure (not any failure, only failure to insert)
4. Bus.Send message to process file if the number of records inserted in step 3 is not zero
5. Commit transaction
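A rough sketch of those steps, assuming SQL Server with a unique index on the hash column (the table name, the ProcessFile message, and the fileContent/fileName/bus/connectionString variables are illustrative, not from the original answer):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        bool inserted;
        using (var command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandText = "INSERT INTO FileHashes (Hash) VALUES (@hash)";
            command.Parameters.AddWithValue("@hash", Hash(fileContent));
            try
            {
                inserted = command.ExecuteNonQuery() == 1;
            }
            catch (SqlException ex)
            {
                // 2627/2601 = unique constraint violated: we already saw this content.
                if (ex.Number != 2627 && ex.Number != 2601) throw;
                inserted = false;
            }
        }

        if (inserted)
            bus.Send(new ProcessFile { FileName = fileName }); // only new content gets processed

        transaction.Commit();
    }
}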
There is some example deduplication code in the NServiceBus gateway here.
Edit:
Looking at their code, I actually think the session.Get<DeduplicationMessage> is unnecessary. session.Save(gatewayMessage); should be enough and is the consistency boundary.
Doing a query would make sense only if the rate of failure is high, meaning you have a lot of duplicate content files. If 99%+ of inserts succeed, the duplicates can indeed be treated as exceptions.
This depends on a lot of things ... throughput being one of them. But since you're approaching this problem in a "pull based" fashion anyway (you're querying a web service to poll for work (downloading and analysing a file)), you could make this whole process serial without having to worry about collisions. Now that might not give the desired rate at which you want to be handling "the work", but more importantly ... have you measured? Let's sidestep that for a minute and assume that serial isn't going to work. How many files are we talking about? A few hundred, a few thousand, ... millions? Depending on that, the hashes might fit into memory and could be rebuilt if/when the process goes down. There might also be an opportunity to partition your problem along the axis of time or context. Every file since the dawn of time, or just today's, or maybe this month's worth of files? Really, I think you should dig deeper into your problem space. Apart from that, this feels like an awkward problem to solve using event sourcing, but YMMV.
When you have a true uniqueness-constraint in your domain, you can make the uniqueness-tester a domain service, whose implementation is part of the infrastructure -- similar to a repository, whose interface is part of the domain and whose implementation is part of the infrastructure. For the implementation, you can then use an in-memory hash or a database that is updated/queried as needed.
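For example, the domain could depend only on an interface like this (the names are mine), with the hash set or database table hidden behind the implementation:

// Domain-side contract; the aggregate or command handler only sees this.
public interface IFileContentUniquenessChecker
{
    bool TryRegister(byte[] content); // true if this content has not been seen before
}

// Infrastructure-side implementation; could equally be backed by a database table.
public class InMemoryFileContentUniquenessChecker : IFileContentUniquenessChecker
{
    private readonly HashSet<string> seenHashes = new HashSet<string>();

    public bool TryRegister(byte[] content)
    {
        using (var sha = SHA256.Create())
        {
            string hash = Convert.ToBase64String(sha.ComputeHash(content));
            return seenHashes.Add(hash); // Add returns false if the hash is already present
        }
    }
}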
I am attempting to build (for learning purposes) my own event logger; I am not interested in hearing about using non-.NET frameworks instead of building my own, as I am doing this to better understand .NET.
The idea is to have an event system that I can write out to a log file and/or pull from while inside the program. To do this I am creating a LogEvent class whose instances will be stored inside a Queue<LogEvent>.
I am planning on using the following fields in my LogEvent class:
private EventLogEntryType _eventType; // enum: error, info, warning...
private string _eventMessage;
private Exception _exception;
private DateTime _eventTime;
What I am not sure about is the best way to capture the object that caused the event to be raised. I thought about just having a private Object _eventObject; but I am thinking that is not thread safe or secure.
Any advice on how to best store the object that called the event would be appreciated. I am also open to any other suggestions you may have.
Thanks, Tony
First off, there is nothing wrong with writing your own. There are some good frameworks out there, but sometimes you reach the point where some bizarre requirement gets you rolling your own; I've been there anyway...
I don't think you should be using text messages. After doing this type of logging in several projects, I have come to the conclusion that the best approach is to have a set of event types (integer IDs) with some kind of extra-information field.
You should have an enum of LogEventTypes that looks something like this:
public enum LogEventTypes
{
    // 1xxx WS errors
    ThisOrThatWebServiceError = 1001,
    // 2xxx DB access errors
    // etc...
}
This, from my experience, will make your life much easier when trying to make use of the information you logged. You can also add an ExtraInformation field in order to provide event-instance-specific information.
As for the object that caused the event, I would just use something like typeof(YourClass).ToString();. If this is a custom class you created, you can also implement a ToString override that will make sense in your logging context.
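So the event could store just the source's type name rather than a reference to the object itself; a sketch (the field names follow the question, the constructor shape is mine):

public class LogEvent
{
    public EventLogEntryType EventType { get; private set; } // error, info, warning...
    public string EventMessage { get; private set; }
    public Exception Exception { get; private set; }
    public DateTime EventTime { get; private set; }
    public string Source { get; private set; }               // e.g. "MyApp.OrderProcessor"

    public LogEvent(EventLogEntryType eventType, string message, object source, Exception exception)
    {
        EventType = eventType;
        EventMessage = message;
        Exception = exception;
        EventTime = DateTime.UtcNow;
        // Keep only the type name (or an overridden ToString), never the caller itself,
        // so the log entry stays immutable and holds no reference back into the application.
        Source = source != null ? source.GetType().ToString() : "unknown";
    }
}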
Edit: I am adding several details I wrote about in the comments, since I think they are important. Passing objects that are not immutable by reference to service methods is generally not a good idea. You might reassign the same variable in a loop (for example) and create a bug that is near-impossible to find. Also, I would recommend doing some extra work now to decouple the logging infrastructure from the implementation details of the application, since doing this later will cause a lot of pain. I am saying this from my own very painful experience.
I need to transfer .NET objects (with hierarchy) over the network (multiplayer game). To save bandwidth, I'd like to transfer only the fields (and/or properties) that change, so fields that won't change won't be transferred.
I also need some mechanism to match the proper objects on the other client's side (a global object identifier... something like an object ID?)
I need some suggestions on how to do it.
Would you use reflection? (performance is critical)
I also need a mechanism to transfer IList deltas (added objects, removed objects).
How is MMO networking done? Do they transfer whole objects?
(maybe my idea of per-field transfer is stupid)
EDIT:
To make it clear: I've already got a mechanism to track changes (let's say every field has a property whose setter adds the field to some sort of list or dictionary containing the changes; the structure is not final yet).
I don't know how to serialize this list and then deserialize it on the other client. And mostly, how to do it efficiently and how to update the proper objects.
There are about one hundred objects, so I'm trying to avoid a situation where I would have to write a special function for each object. Decorating fields or properties with attributes would be OK (for example, to specify a serializer, field ID or something similar).
More about the objects: each object has 5 fields on average. Some objects inherit from others.
Thank you for all answers.
Another approach; don't try to serialize complex data changes: instead, send just the actual commands to apply (in a terse form), for example:
move 12432 134, 146
remove 25727
(which would move 1 object and remove another).
You would then apply the commands at the receiver, allowing for a full resync if they get out of sync.
I don't propose you would actually use text for this - that is just to make the example clearer.
One nice thing about this: it also provides "replay" functionality for free.
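A tiny sketch of what those commands could look like once they are no longer text (all class names here are mine, purely for illustration):

public class GameObject
{
    public float X, Y;
}

public abstract class GameCommand
{
    public int ObjectId;                                   // the global identifier both sides agree on
    public abstract void Apply(Dictionary<int, GameObject> world);
}

public class MoveCommand : GameCommand
{
    public float X, Y;

    public override void Apply(Dictionary<int, GameObject> world)
    {
        GameObject obj;
        if (world.TryGetValue(ObjectId, out obj)) { obj.X = X; obj.Y = Y; }
        // if the object is unknown, the receiver can request a full resync
    }
}

public class RemoveCommand : GameCommand
{
    public override void Apply(Dictionary<int, GameObject> world)
    {
        world.Remove(ObjectId);
    }
}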
The cheapest way to track dirty fields is to have it as a key feature of your object model, i.e. with a "fooDirty" field for every data field "foo", which you set to true in the setter (if the value differs). This could also be twinned with conditional serialization, perhaps the "ShouldSerializeFoo()" pattern observed by a few serializers. I'm not aware of any libraries that match exactly what you describe (unless we include DataTable, but ... think of the kittens!)
Perhaps another issue is the need to track all the objects for merge during deserialization; that by itself doesn't come for free.
All things considered, though, I think you could do something along the above lines (fooDirty/ShouldSerializeFoo) and use protobuf-net as the serializer, because (importantly) that supports both conditional serialization and merge. I would also suggest an interface like:
interface ISomeName
{
    int Key { get; }
    bool IsDirty { get; }
}
The IsDirty would allow you to quickly check all your objects for those with changes, then add the key to a stream, then the (conditional) serialization. The caller would read the key, obtain the object needed (or allocate a new one with that key), and then use the merge-enabled deserialize (passing in the existing/new object).
Not a full walk-through, but if it was me, that is the approach I would be looking at. Note: the addition/removal/ordering of objects in child-collections is a tricky area, that might need thought.
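A minimal sketch of the fooDirty/ShouldSerializeFoo idea, assuming protobuf-net's ProtoContract/ProtoMember attributes (the Player and Score names are placeholders):

// using ProtoBuf;
[ProtoContract]
public class Player : ISomeName
{
    private int score;
    private bool scoreDirty;

    [ProtoMember(1)]
    public int Key { get; set; }

    [ProtoMember(2)]
    public int Score
    {
        get { return score; }
        set
        {
            if (score != value) { score = value; scoreDirty = true; } // mark only real changes
        }
    }

    // Conditional serialization: Score is only written when this returns true.
    public bool ShouldSerializeScore() { return scoreDirty; }

    public bool IsDirty { get { return scoreDirty; } }

    public void MarkClean() { scoreDirty = false; } // call after a successful send
}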
I'll just say up front that Marc Gravell's suggestion is really the correct approach. He glosses over some minor details, like conflict resolution (you might want to read up on Leslie Lamport's work. He's basically spent his whole career describing different approaches to dealing with conflict resolution in distributed systems), but the idea is sound.
If you do want to transmit state snapshots, instead of procedural descriptions of state changes, then I suggest you look into building snapshot diffs as prefix trees. The basic idea is that you construct a hierarchy of objects and fields. When you change a group of fields, any common prefix they have is only included once. This might look like:
world -> player 1 -> lives: 1
... -> points: 1337
... -> location -> X: 100
... -> Y: 32
... -> player 2 -> lives: 3
(everything in a "..." is only transmitted once).
It is not practical to transfer only the changed fields, because you would spend time detecting which fields changed and which didn't, and working out how to reconstruct them on the receiver's side, which will add a lot of latency to your game and make it unplayable online.
My proposed solution is to decompose your objects into minimal pieces and send those small objects, which is fast. Also, you can use compression to reduce bandwidth usage.
For the object ID, you can use a static counter that is incremented each time you construct a new object.
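For example (a trivial sketch):

public class NetworkObject
{
    private static int nextId;

    public int Id { get; private set; }

    public NetworkObject()
    {
        // Interlocked keeps the counter correct even if objects are created from several threads.
        Id = System.Threading.Interlocked.Increment(ref nextId);
    }
}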
Hope this answer helps.
You will need to do this by hand. Automatically keeping track of property and instance changes in a hierarchy of objects is going to be very slow compared to anything crafted by hand.
If you decide to try it out anyway, I would try to map your objects to a DataSet and use its built-in modification tracking mechanisms.
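For reference, the built-in tracking looks roughly like this (a sketch using the standard DataTable API; the table and columns are made up):

var table = new DataTable("Players");
table.Columns.Add("Id", typeof(int));
table.Columns.Add("Score", typeof(int));

table.Rows.Add(1, 100);
table.AcceptChanges();                 // establish the baseline: no pending changes

table.Rows[0]["Score"] = 150;          // an ordinary edit marks the row as Modified

// Only the changed rows come back; this subset is the delta you would transmit.
DataTable delta = table.GetChanges(DataRowState.Modified);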
I still think you should do this by hand, though.
Say you have a dialog with 3 fields.
Case 1. Entering field 1 will default field 2 and field 3
(eventHandler1)
Case 2. Entering field 2 will default field 3
(eventHandler2)
Case 3. Entering field 3 doesn't default anything.
There are two problems with this:
Redundancy: without additional effort, eventHandler1 implicitly triggers eventHandler2. In this toy example it's not a problem, but expand the scenario to a larger number of fields and, without care, the overhead can become huge.
Order-dependency: I think field 3 will default via eventHandler2. But in either case, sometimes eventHandler1's default may be correct; other times, eventHandler2's default may be correct.
Is there a clean, structured way of handling this in C# without having to deal with a massive number of states?
OK, what you will need to do is separate your layers in the WinForms app. Design a class that does the work of determining which fields impact which other fields. Each of the properties in this class will use custom code in the property setter to modify dependent properties, which will in turn send a signal to the window that their contents have changed.
The "View" (view-like) class will be a container for that first class and handle input events, calling methods on it to handle the results. Lastly, it will update the other fields when commanded to do so by that class.
Old, incorrect answer below (didn't catch the WinForms tag, silly me):
The simplest answer is good View-Model and View separation, but without a code sample, it's hard to determine if MVVM is appropriate.
So your View XAML will have definitions for three (four, ten, whatever) fields, each databound to a property in your ViewModel.
The setter for each property will handle the logic of setting dependent values in other properties. As each property is set, they should notify when changed, and the UI will update without any further work from you.
Much less coupled; much more structured.
As a beginner, I have formulated some ideas, but I wanted to ask the community about the best way to implement the following program:
It decodes 8 different types of data file. They are all different, but most are similar (contain a lot of similar fields). In addition, there are 3 generations of system which can generate these files. Each is slightly different, but generates the same types of files.
I need to make a visual app which can read in any one of these, show the data in a table (using a DataGridView via a DataTable at the moment) before plotting it on a graph.
There is a bit more to it, but my question is regarding the basic structure.
I would love to learn more about making the best use of object-oriented techniques if that would suit well.
I am using C# (unless there are better recommendations), largely due to my lack of experience and the quick development time.
I am currently using one class called 'Log' that knows what generation/log type the open file is. It controls reading and exporting to a DataTable. A form can then give it a path, wait for it to process the file, and request the DataTable to display.
Any obvious improvements?
As you have realised, there is a great deal of potential in creating a very elegant OOP application here.
Your basic needs - as much as I can see from the information you have shared - are:
1) A module that recognises the type of file
2) A module that can read the file and load the data into a common structure (is it going to be a common structure?); this consists of the handlers
3) A module that can visualise the data
For the first one, I would recommend two patterns:
a) Factory pattern: the file is passed to a common factory and is parsed just enough to decide which handler to use
b) Chain of responsibility: the file is passed to each handler, which knows whether it can support the file or not. If it cannot, it passes it to the next one. At the end, either one handler picks it up or an error occurs if the last handler cannot process it.
For the second one, I recommend designing a common interface that each handler implements for common tasks such as load, parse, and so on. If visualisation is different and specific to each handler, then you would have that set of methods there as well.
Without knowing more about the data structure I cannot comment on the visualisation part.
Hope this helps.
UPDATE
This is the factory one - a very rough pseudocode:
Factory f = new Factory();
ILogParser parser = f.GetParser(fileName); // pass the file name so that factory inspects the content and returns appropriate handler
CommonDataStructure data = parser.Parse(fileName); // parse the file and return CommonDataStructure.
Visualiser v = new Visualiser(form1); // perhaps you want to pass the reference of your form
v.Visualise(data); // draw pretty stuff now!
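The chain-of-responsibility variant could look roughly like this (again pseudocode-level, reusing the ILogParser and CommonDataStructure names from above):

public interface ILogParser
{
    bool CanParse(string fileName);            // e.g. peek at a header or magic bytes
    CommonDataStructure Parse(string fileName);
}

public class ParserChain
{
    private readonly List<ILogParser> parsers = new List<ILogParser>();

    public void Register(ILogParser parser) { parsers.Add(parser); }

    public CommonDataStructure Parse(string fileName)
    {
        foreach (var parser in parsers)
        {
            if (parser.CanParse(fileName))
                return parser.Parse(fileName); // the first handler that accepts the file wins
        }
        throw new NotSupportedException("No parser can handle " + fileName);
    }
}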
Ok, first thing - make one class for every file structure type, as a parser. Use inheritance as needed to combine common functionality.
Every file parser should have a method to identify whether it can parse a file, so you can take a file name, and just ask the parsers which thinks it can handle the data.
.NET 4.0 and the Managed Extensibility Framework (MEF) allow dynamic integration of the parsers without a predetermined selection graph.
The rest depends mostly on how similar the data is etc.
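A small sketch of how that wiring could look with MEF (the IFileParser interface and the TypeAParser class are illustrative, not part of the question):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IFileParser
{
    bool CanParse(string fileName);
    void Parse(string fileName);
}

[Export(typeof(IFileParser))]
public class TypeAParser : IFileParser
{
    public bool CanParse(string fileName) { return fileName.EndsWith(".typea"); }
    public void Parse(string fileName) { /* decode a type A file */ }
}

public class ParserHost
{
    [ImportMany]
    public IEnumerable<IFileParser> Parsers { get; set; }

    public void Compose()
    {
        // Picks up every exported IFileParser in this assembly at runtime,
        // so new parser classes plug in without touching any selection code.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}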
Okay, so the basic concept of OOP is thinking of classes etc. as objects. From the outset, object-oriented programming can be a tricky subject to pick up, but the more practice you get, the easier you will find it to implement programs using OOP.
Take a look here: http://msdn.microsoft.com/en-us/beginner/bb308750.aspx
So you can have a Decoder class and interface, something like this.
interface IDecoder
{
    void DecodeTypeA(string fileName /*, other parameters as needed */);
    void DecodeTypeB(string fileName /*, other parameters as needed */);
}

class FileDecoder : IDecoder
{
    public void DecodeTypeA(string fileName /*, other parameters as needed */)
    {
        // Some Code Here
    }

    public void DecodeTypeB(string fileName /*, other parameters as needed */)
    {
        // Some Code Here
    }
}