Custom SNMP Trap Implementation in .NET - c#

I need to create a monitoring mechanism using SNMP (in .NET). I think we'll be using an nsoftware component to handle most of the work.
It appears we must use 'traps' to communicate from the agent to the server. We'll have a number of different traps and various information detailing each trap. What is the best way to implement custom traps? That is, what is the best way to not only send a trap, but also send the information describing the trap to our 'snmp manager'? I think this is done through "variable bindings". To use "variable bindings" do we need to create our own "enterprise number" and use an "enterpriseSpecific" trap? Should we implement our own, custom MIBs or can we just send the data we need with the trap (via variable bindings)?

Unless you want to notify about one of the six predefined generic traps (e.g. coldStart, warmStart): yes, you will have to define an enterpriseSpecific trap, and you will need to allocate object identifiers (and plenty of them).
Parameters are indeed transmitted in variable bindings; these are structures defined as
VarBind ::=
    SEQUENCE {
        name  ObjectName,
        value ObjectSyntax
    }

VarBindList ::= SEQUENCE OF VarBind

ObjectName ::= OBJECT IDENTIFIER

ObjectSyntax ::= CHOICE {
    simple           SimpleSyntax,
    application-wide ApplicationSyntax
}

SimpleSyntax ::= CHOICE {
    number INTEGER,
    string OCTET STRING,
    object OBJECT IDENTIFIER,
    empty  NULL
}

ApplicationSyntax ::= CHOICE {
    address   NetworkAddress,
    counter   Counter,
    gauge     Gauge,
    ticks     TimeTicks,
    arbitrary Opaque
}
You somehow need to tell your library what the name and the value are; the library should provide an API for the various data types available as values. Notice that the variable "names" are again object identifiers.

I suggest you first determine in how many cases your agent will send data back to the server/monitor.
Then you need to decide how to distinguish those cases (using different trap IDs or packaging different variable bindings).
Now write down several sample packets on a piece of paper and start to author the trap definitions in a MIB document.
What comes next depends on which library you use to implement the conversation. Well, the nsoftware one is a nice choice.
BTW, I would rather send out TRAP v2 or INFORM packets instead of TRAP v1.
Regards,
Lex Li
http://sharpsnmplib.codeplex.com
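For illustration, here is a minimal sketch of sending a v2 trap with two variable bindings using the #SNMP library linked above. The exact Messenger.SendTrapV2 signature may differ between library versions, and the enterprise OID 1.3.6.1.4.1.99999, the varbind OIDs and the manager address are all placeholders:

using System.Collections.Generic;
using System.Net;
using Lextm.SharpSnmpLib;
using Lextm.SharpSnmpLib.Messaging;

class TrapSender
{
    static void Main()
    {
        // Placeholder enterprise OID; substitute your own allocation.
        var enterprise = new ObjectIdentifier(new uint[] { 1, 3, 6, 1, 4, 1, 99999, 1 });

        // Variable bindings: each one pairs an OID ("name") with a typed value.
        var variables = new List<Variable>
        {
            new Variable(new ObjectIdentifier(new uint[] { 1, 3, 6, 1, 4, 1, 99999, 1, 1 }),
                         new OctetString("disk almost full")),
            new Variable(new ObjectIdentifier(new uint[] { 1, 3, 6, 1, 4, 1, 99999, 1, 2 }),
                         new Integer32(95))
        };

        // Send a v2 TRAP to the manager listening on the standard trap port 162.
        Messenger.SendTrapV2(0,
                             VersionCode.V2,
                             new IPEndPoint(IPAddress.Parse("192.0.2.10"), 162),
                             new OctetString("public"),
                             enterprise,
                             0,
                             variables);
    }
}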

What is the purpose of the StringSegment class?

In the Microsoft.Extensions.Primitives package there is a class StringSegment whose comments indicate that it is:
An optimized representation of a substring.
I was unaware of this particular class, until I discovered aspnet announcement #244, stating: Microsoft.Net.Http.Headers converted to use StringSegments.
Still, looking at the implementation of the StringSegment class, I fail to see what purpose it actually serves. I see a buffer, which I guess would indicate better manipulation of partial strings (the 'segment' part, perhaps?). I also see several helper functions which are closely related (if not identical) in behaviour to those already available on regular strings, such as StartsWith/EndsWith, Substring, etc. The aspnet-core docs list these in full, but again this lacks context on "why" it should be used.
So what exactly is the purpose of the StringSegment class and in which scenarios is it applicable to use it?
Is it useful to call the class in my application code, when I manipulate strings?
Can we have an example, where it will be beneficial?
It lets you perform a variety of string operations on a substring of another string, without actually calling Substring() and creating a new string object. It's roughly analogous to the way in C you can have a pointer into the middle of a string.
When parsing text, many new string objects may be created or copied. This class, in theory, helps reduce the memory used when handling large substrings. Other languages have similar concepts (see std::string_view in C++17).
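A small sketch of the idea (the header string and the offsets are made up for illustration):

using System;
using Microsoft.Extensions.Primitives;

class Program
{
    static void Main()
    {
        var header = "Accept: text/html, application/json";

        // Points into the original string; no new string is allocated here.
        var value = new StringSegment(header, 8, header.Length - 8);

        bool isHtml = value.StartsWith("text/html", StringComparison.Ordinal); // true
        StringSegment first = value.Subsegment(0, 9);                          // still no allocation

        string materialized = first.Value;  // allocates only when an actual string is needed
        Console.WriteLine(materialized);    // prints "text/html"
    }
}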

What should I identify with the id argument in TraceSource.TraceEvent method?

I use the TraceSource class for logging in my .NET projects.
However, a point that has never been clear to me is the intent of the id parameter in the TraceEvent method. Currently, I always set it to 0.
But what is the expected or typically useful usage of it?
I can think of a few possibilities:
It is an ID for the occurrence of the event (i.e. the same line of code produces a different ID on each execution);
It is an ID for the method call (i.e. you can infer the line of code from the ID);
It is an ID for a family of similar events (e.g. all error messages that say that the database is absent share the same ID);
It is an ID for a set of events that are related to a logical operation, in combination with the TraceEventType.(Start|Stop|Suspend|Resume|Transfer) enumeration values;
I've asked myself the same question and I didn't find anything to clarify this in any Microsoft documentation.
What I've managed to find is an article written by a Microsoft MVP, Richard Grimes:
"The id parameter is whatever you choose it to be, there is no compulsion that a particular ID is associated with a particular format message."
He uses 0, for the id argument, in all examples.
In MSDN articles, I've seen it used randomly, not providing any additional info.
I believe that you can use it in any way that helps you best when reading the logs, as long as you maintain the same code convention. It may prove useful afterwards in trace filtering, if you want to use the SourceFilter.ShouldTrace method, which accepts an id argument too.
I use it to describe the error type, if I have an error, or use 0 for anything else.
As far as I've seen in the documentation, it's not specifically intended for one purpose. I think it's there for you to tie in with your own logic for tracing events. The ShouldTrace() method on SourceFilter takes a matching id parameter, so you can also use it to determine which events or event types go where.
Personally, when I use TraceSource (which admittedly isn't much, having only discovered it recently) I use it to track event types or categories. In one application I already had an enum for event types that I was using with another logging method, with values Debug, Info, Warn, Error, Fatal, so I cast that to int and used that as the id, which helped with filtering later so I could filter out anything below the level I was interested in to de-clutter the trace.
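A minimal sketch of that approach, assuming a hypothetical severity enum whose numeric values double as the event id:

using System.Diagnostics;

// Hypothetical severity levels reused as TraceEvent ids.
enum TraceId { Debug = 0, Info = 1, Warn = 2, Error = 3, Fatal = 4 }

class Logger
{
    static readonly TraceSource Source = new TraceSource("MyApp");

    static void Main()
    {
        // The enum value doubles as the event id, so listeners and filters can key off it.
        Source.TraceEvent(TraceEventType.Warning, (int)TraceId.Warn,
                          "Disk space below 10% on {0}", "C:");
        Source.TraceEvent(TraceEventType.Error, (int)TraceId.Error,
                          "Database connection could not be established");
        Source.Flush();
    }
}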
Another possibility is that you could use different values to relate to different parts of the application, so Data Access = 1, User Accounts = 2, Product Logic = 3, Notifications = 4, UI = 5 etc. Again, you could then use this to filter the trace down to only the type of thing you're looking at.
Alternatively, you could (as you suggested) use different id values to mean different event types, so you could use them like error codes so that (for example) any time you saw an id of 26 you'd know that the database connection could not be established, or whatever.
It doesn't particularly matter what you use the id parameter for, as long as:
It is useful to you in building and debugging the program
It is clear and understandable to programmers reading through your code
It is used consistently throughout the program
One possibility is that you could have a centralised class that manages the event ids and provides the values based on some sort of input to make sure that the whole application uses the same id for the same thing.

Identifying a property name with a low footprint

I wish to send packets to sync the properties of constantly changing game objects in a game. On the server side, I send notifications of when a property changes to an EntitySync object that is in charge of sending out updates for the client to consume.
Right now, I'm prefixing each update with the property's string name. That is a lot of overhead when you're sending a lot of updates (position, HP, angle). I'd like a semi-unique way to identify these packets.
I thought about attributes (reflection... slow?), or using a suffix and sending that as an ID (Position_A, HP_A), but I'm at a loss for a clean way to identify these properties quickly with a low footprint. It should consume as few bytes as possible.
Ideas?
Expanding on Charlie's explanation:
The protobuf-net library made by Marc Gravell is exactly what you are looking for in terms of serialization. To clarify, this is Marc Gravell's library, not Google's; it uses Google's protocol buffer encoding. It has one of the smallest footprints of any serializer out there; in fact, it will likely generate smaller packets than serializing by hand would (the way default Unity3D handles networking, yuck).
As for speed, Marc uses some very clever trickery (namely HyperDescriptors, http://www.codeproject.com/Articles/18450/HyperDescriptor-Accelerated-dynamic-property-acces) to all but remove the overhead of runtime reflection.
Food for thought on the network abstraction: take a look at Rx (http://msdn.microsoft.com/en-us/data/gg577609.aspx). Event streams are the most elegant way I have dealt with networking and multithreaded intra-subsystem communication to date:
// Sending an object:
m_eventStream.Push(objectInstance);

// 'Handling' an object when it arrives:
m_eventStream.Of(typeof(MyClass))
             .Subscribe(obj =>
             {
                 MyClass thisInstance = (MyClass)obj;
                 // Code here will be run when a packet arrives and is deserialized
             });
It sounds like you're trying to serialize your objects for sending over a network. I agree it's not efficient to send the full property name over the wire; this consumes way more bytes than you need.
Why not use a really fantastic library that Google invented just for this purpose?
This is the .NET port: http://code.google.com/p/protobuf-net/
In a nutshell, you define the messages you want to send such that each property has a unique id to make sending the properties more efficient:
SomeProperty = 12345
Then it just sends the id of the property and its value. It also optimizes the way it sends the values, so it might use only 1, 2, 3 bytes etc depending on how large the value is. Very clever, really.
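A minimal sketch of what that looks like with protobuf-net; the EntityState type and its tag numbers are made up for illustration:

using System.IO;
using ProtoBuf; // protobuf-net

// Each property is identified on the wire by its small numeric tag, not by its name.
[ProtoContract]
public class EntityState
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public float X { get; set; }
    [ProtoMember(3)] public float Y { get; set; }
    [ProtoMember(4)] public int Hp { get; set; }
}

static class Wire
{
    public static byte[] Pack(EntityState state)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, state); // varint-encoded tags and values
            return ms.ToArray();             // typically just a handful of bytes
        }
    }

    public static EntityState Unpack(byte[] payload)
    {
        using (var ms = new MemoryStream(payload))
        {
            return Serializer.Deserialize<EntityState>(ms);
        }
    }
}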

Transfer objects on per field basis over network

I need to transfer .NET objects (with hierarchy) over the network (multiplayer game). To save bandwidth, I'd like to transfer only the fields (and/or properties) that change, so fields that don't change are not transferred.
I also need some mechanism to match proper objects on the other client side (global object identifier...something like object ID?)
I need some suggestions how to do it.
Would you use reflection? (performance is critical)
I also need a mechanism to transfer IList deltas (added objects, removed objects).
How is MMO networking done, do they transfer whole objects?
(maybe my idea of per field transfer is stupid)
EDIT:
To make it clear: I've already got a mechanism to track changes (let's say every field has a property, and the setter adds the field to some sort of list or dictionary that contains the changes; the structure is not final yet).
I don't know how to serialize this list and then deserialize it on the other client, and above all how to do it efficiently and how to update the proper objects.
There are about one hundred objects, so I'm trying to avoid a situation where I would have to write a special function for each object. Decorating fields or properties with attributes would be OK (for example, to specify the serializer, field ID or something similar).
More about the objects: each object has 5 fields on average. Some objects inherit from others.
Thank you for all answers.
Another approach; don't try to serialize complex data changes: instead, send just the actual commands to apply (in a terse form), for example:
move 12432 134, 146
remove 25727
(which would move 1 object and remove another).
You would then apply the commands at the receiver, allowing for a full resync if they get out of sync.
I don't propose you would actually use text for this - that is just to make the example clearer.
One nice thing about this: it also provides "replay" functionality for free.
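A rough sketch of what a terse, binary form of such commands could look like (the opcode values and operand layout are entirely illustrative):

using System.IO;

// Hypothetical wire format: a 1-byte opcode followed by fixed-size operands.
enum OpCode : byte { Move = 1, Remove = 2 }

static class Commands
{
    public static byte[] EncodeMove(int objectId, short x, short y)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((byte)OpCode.Move);
            w.Write(objectId);
            w.Write(x);
            w.Write(y);
            return ms.ToArray(); // 9 bytes instead of a serialized object graph
        }
    }

    public static byte[] EncodeRemove(int objectId)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((byte)OpCode.Remove);
            w.Write(objectId);
            return ms.ToArray(); // 5 bytes
        }
    }
}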
The cheapest way to track dirty fields is to have it as a key feature of your object model, i.e. with a "fooDirty" field for every data field "foo" that you set to true in the setter (if the value differs). This could also be twinned with conditional serialization, perhaps the "ShouldSerializeFoo()" pattern observed by a few serializers. I'm not aware of any libraries that match exactly what you describe (unless we include DataTable, but ... think of the kittens!)
Perhaps another issue is the need to track all the objects for merge during deserialization; that by itself doesn't come for free.
All things considered, though, I think you could do something along the above lines (fooDirty/ShouldSerializeFoo) and use protobuf-net as the serializer, because (importantly) it supports both conditional serialization and merge. I would also suggest an interface like:
interface ISomeName
{
    int Key { get; }
    bool IsDirty { get; }
}
The IsDirty would allow you to quickly check all your objects for those with changes, then add the key to a stream, then the (conditional) serialization. The caller would read the key, obtain the object needed (or allocate a new one with that key), and then use the merge-enabled deserialize (passing in the existing/new object).
Not a full walk-through, but if it was me, that is the approach I would be looking at. Note: the addition/removal/ordering of objects in child-collections is a tricky area, that might need thought.
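A minimal sketch of the fooDirty/ShouldSerializeFoo idea with protobuf-net; the Player type and its member names are illustrative:

using ProtoBuf; // protobuf-net

public interface ISomeName
{
    int Key { get; }
    bool IsDirty { get; }
}

[ProtoContract]
public class Player : ISomeName
{
    private int hp;
    private bool hpDirty;

    [ProtoMember(1)]
    public int Key { get; set; }

    [ProtoMember(2)]
    public int Hp
    {
        get { return hp; }
        set { if (hp != value) { hp = value; hpDirty = true; } }
    }

    // protobuf-net honours the ShouldSerialize* pattern, so Hp is only
    // written when it has actually changed since the last send.
    public bool ShouldSerializeHp() { return hpDirty; }

    public bool IsDirty { get { return hpDirty; } }

    public void MarkClean() { hpDirty = false; }
}

On the receiving side, Serializer.Merge(stream, existingInstance) can then apply the changed fields onto the object you looked up (or created) by its key.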
I'll just say up front that Marc Gravell's suggestion is really the correct approach. He glosses over some minor details, like conflict resolution (you might want to read up on Leslie Lamport's work. He's basically spent his whole career describing different approaches to dealing with conflict resolution in distributed systems), but the idea is sound.
If you do want to transmit state snapshots, instead of procedural descriptions of state changes, then I suggest you look into building snapshot diffs as prefix trees. The basic idea is that you construct a hierarchy of objects and fields. When you change a group of fields, any common prefix they have is only included once. This might look like:
world -> player 1 -> lives: 1
              ... -> points: 1337
              ... -> location -> X: 100
                            ... -> Y: 32
      ... -> player 2 -> lives: 3
(everything in a "..." is only transmitted once).
It is not logical to transfer only changed fields, because you would waste time detecting which fields changed and which didn't, and on reconstructing them on the receiver's side, which adds a lot of latency to your game and can make it unplayable online.
My proposed solution is for you to decompose your objects to the minimum and send these small objects, which is fast. Also, you can use compression to reduce bandwidth usage.
For the object ID, you can use a static counter that increments when you construct a new object.
Hope this answer helps.
You will need to do this by hand. Automatically keeping track of property and instance changes in a hierarchy of objects is going to be very slow compared to anything crafted by hand.
If you decide to try it out anyway, I would try to map your objects to a DataSet and use its built-in modification tracking mechanisms.
I still think you should do this by hand, though.

WCF primitive type vs complex type

I'm designing a WCF service that will return a list of objects describing the people in the system.
The record count is really big, and the records include properties like the person's sex.
Is it better to create a new enum (it's a complex type and consumes more bandwidth) named Sex with two values (Male and Female), or to use a primitive type for this, like bool IsMale?
Very little point switching to bool; which is bigger:
<gender>male</gender> (or an attribute gender="male")
or
<isMale>true</isMale> (or an attribute isMale="true")
Not much in it ;-p
The record count is really big...
If bandwidth becomes an issue, and you control both ends of the service, then rather than change your entities you could look at some other options:
pre-compress the data as (for example) gzip, and pass (instead) a byte[] or Stream, noting to enable MTOM on the service (see the sketch after this list)
(or) switch serializer; protobuf-net has WCF hooks, and can achieve significant bandwidth improvements over the default DataContractSerializer (again: enable MTOM). In a test based on Northwind data (here) it reduced 736,574 bytes to 133,010, and reduced the CPU required to process it (win:win). For info, it reduces enums to integers, typically requiring only 1 byte for the enum value and 1 byte to identify the field; contrast to <gender>Male</gender>, which under UTF8 is 21 bytes (more for most other encodings), or gender="male" at 14 bytes.
However, either change will break your service if you have external callers who are expecting regular SOAP...
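A minimal sketch of the pre-compression option; the helper names are made up, and it plugs in after whatever serialization you already use:

using System.IO;
using System.IO.Compression;

static class PayloadCompressor
{
    public static byte[] Compress(byte[] raw)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            } // disposing the GZipStream flushes the footer before we read the buffer
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using (var input = new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            input.CopyTo(output);
            return output.ToArray();
        }
    }
}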
The reason to not use an enum is that XML Schema does not have a concept equivalent to an enum. That is, it does not have a concept of named values. The result is that enums don't always translate between platforms.
Use a bool, or a single-character field instead.
I'd suggest you model it in whatever way seems most natural until you run into a specific issue or encounter a requirement for a change.
WCF is designed to abstract away the underlying details, and if bandwidth is a concern then I think a bool, int or enum will all probably be 4 bytes. You could optimize by using a bitmask or a single byte.
Again, the ease of use of the API and maintainability is probably more important, which do you prefer?
if (user[i].Sex == Sexes.Male)
if (user[i].IsMale) // Could also expose .IsFemale
if (user[i].Sex == 'M')
etc. Of course you could expose multiple.
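For what it's worth, a sketch of how the enum option might be modelled as a data contract; the type and member names are illustrative:

using System.Runtime.Serialization;

[DataContract]
public enum Sex
{
    [EnumMember] Male,
    [EnumMember] Female
}

[DataContract]
public class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public Sex Sex { get; set; } // serialized by name, e.g. <Sex>Male</Sex>
}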
