BadHttpRequestException due to MinRequestBodyDataRate and poor connection - c#

I have an ASP.NET Core web application that accepts file uploads from authenticated users, along with some error handling to notify me when an exception occurs (Exceptional). My problem is I'm periodically getting alerts of BadHttpRequestExceptions due to users accessing the application via mobile devices in areas with unreliable coverage. I used to get this with 2.0 too, but until the exception description got updated in 2.1, I never knew it was specifically related to MinRequestBodyDataRate, or that it was configurable.
I've raised the defaults (240 bytes per second with a 5-second grace period) with the following:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Limits.MinRequestBodyDataRate = new MinDataRate(240.0, TimeSpan.FromSeconds(10.0));
        })
        .UseStartup<Startup>();
This doubles the grace period it will wait for the client/browser, but I'm still getting error alerts. I can't control the poor reception of the users, but I'm still hoping to find a way to mitigate the errors. I could extend the minimum even more, but I don't want to do it for the entire application if possible; ideally it would apply only to the specific route/action handling the upload. The documentation I was able to find indicated this minimum can be configured using the IHttpMinRequestBodyDataRateFeature. I thought about setting it inside the action, but I don't think the action is even called until model binding has received the entire file in order to bind it to my parameter.
public async Task<IActionResult> Upload(IFormFile file)
{
    var minRequestBodyDataRateFeature = HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
    minRequestBodyDataRateFeature.MinDataRate = new MinDataRate(240, TimeSpan.FromSeconds(30));
    myDocumentService.SaveFile(file);
}
This is somewhat mitigated by the fact that I had previously implemented chunked uploading, and I only save the file to the service/database when the last chunk comes in (building up the file gradually in a temporary location in between). But I'm still getting these BadHttpRequestExceptions even with the extended grace period set in CreateWebHostBuilder (chunking not shown, as it's not relevant).
I'm thinking the best bet might be to wire this up in the Configure method that sets up the middleware, but I'm not sure about the best way to apply it to only one action. By URL, maybe? I'm also wondering what the implications would be if I disabled the minimum rate entirely by setting it to null.
I'm basically just looking to make the connection more tolerant of poor-quality connections while not upsetting too much of the day-to-day behaviour for other parts of the application.
I need a way to apply the change (potentially in the middleware pipeline in Startup.Configure()) such that it only applies to the affected route/URL/action, along the lines of the sketch below.
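For reference, the middleware version I have in mind would look roughly like this (the /upload path is just an example, and setting MinDataRate to null should disable the check entirely for that request):
// In Startup.Configure(), before MVC/routing.
app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/upload"))
    {
        var feature = context.Features.Get<IHttpMinRequestBodyDataRateFeature>();
        if (feature != null)
        {
            // Lower rate and longer grace period for the upload route only; null disables the check.
            feature.MinDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(30));
        }
    }
    await next();
});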
Are there implications to consider if I disable the minimum entirely (versus enlarging it) for that route/URL/action? For the entire application?
Failing any of that, is it safe to simply ignore these errors and never log them, or do I create a blind spot for malicious interactions with the application/server?

I have noticed the same issue. I have larger files that users can download from my site, and some people, depending on the region, have slow download rates, so the download stops after a while. I also see the exceptions in the log.
I do not recommend disabling the feature, because stuck or very slow connections may otherwise pile up. Just increase the grace period and lower the rate to values where you can say your web page is still usable.
I would not do that for specific URLs. It does not hurt if all pages share the same settings, unless you notice performance issues.
Also keep in mind that there is a MinResponseDataRate option as well.
The best way is to set some reasonable values at application startup for all routes. There will always be exceptions in the log from people losing their internet connection, and Kestrel has to close the connection after a while to free up resources.
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Limits.MinRequestBodyDataRate =
                new MinDataRate(bytesPerSecond: 80, gracePeriod: TimeSpan.FromSeconds(20));
            options.Limits.MinResponseDataRate =
                new MinDataRate(bytesPerSecond: 80, gracePeriod: TimeSpan.FromSeconds(20));
        })
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        .UseNLog()
        .UseStartup<Startup>()
        .Build();
Please also have a look at your async Upload method: make sure you await something or return a Task. I don't think it compiles as written.
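For example, something along these lines should compile (the Ok() result is just illustrative, and I'm assuming your document service call stays synchronous):
public async Task<IActionResult> Upload(IFormFile file)
{
    var minRequestBodyDataRateFeature = HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
    if (minRequestBodyDataRateFeature != null)
    {
        minRequestBodyDataRateFeature.MinDataRate = new MinDataRate(240, TimeSpan.FromSeconds(30));
    }

    // Wrap the synchronous save so the method actually awaits something,
    // or better, expose an async save method on the service.
    await Task.Run(() => myDocumentService.SaveFile(file));
    return Ok();
}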

Here is an option using a resource filter, which runs before model binding.
using System;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.AspNetCore.Server.Kestrel.Core.Features;
using Microsoft.Extensions.DependencyInjection;

namespace YourNameSpace
{
    public class RateFilter : Attribute, IResourceFilter
    {
        private const string EndPoint = "YourEndPoint";

        public void OnResourceExecuting(ResourceExecutingContext context)
        {
            try
            {
                if (!context.HttpContext.Request.Path.Value.Contains(EndPoint))
                {
                    throw new Exception($"This filter is intended to be used only on a specific end point '{EndPoint}' while it's being called from '{context.HttpContext.Request.Path.Value}'");
                }

                var minRequestRateFeature = context.HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
                var minResponseRateFeature = context.HttpContext.Features.Get<IHttpMinResponseDataRateFeature>();

                // Defaults: 240 bytes/s with a 5 s grace period
                if (minRequestRateFeature != null)
                {
                    minRequestRateFeature.MinDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                }
                if (minResponseRateFeature != null)
                {
                    minResponseRateFeature.MinDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                }
            }
            catch (Exception ex)
            {
                // Log or rethrow as appropriate
            }
        }

        public void OnResourceExecuted(ResourceExecutedContext context)
        {
        }
    }
}
Then you can use the attribute on a specific endpoint like this:
[RateFilter]
[HttpPost]
public IActionResult YourEndPoint(YourModel request)
{
    return Ok();
}
You can further customize the filter to take the endpoint and rates as constructor parameters, remove the check for the specific endpoint, or return instead of throwing; see the sketch below.
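A rough sketch of the constructor-parameter variant (the default values and names here are just examples):
public class RateFilterAttribute : Attribute, IResourceFilter
{
    private readonly double _bytesPerSecond;
    private readonly int _gracePeriodSeconds;

    public RateFilterAttribute(double bytesPerSecond = 100, int gracePeriodSeconds = 10)
    {
        _bytesPerSecond = bytesPerSecond;
        _gracePeriodSeconds = gracePeriodSeconds;
    }

    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        var rate = new MinDataRate(_bytesPerSecond, TimeSpan.FromSeconds(_gracePeriodSeconds));

        var requestFeature = context.HttpContext.Features.Get<IHttpMinRequestBodyDataRateFeature>();
        if (requestFeature != null)
        {
            requestFeature.MinDataRate = rate;
        }

        var responseFeature = context.HttpContext.Features.Get<IHttpMinResponseDataRateFeature>();
        if (responseFeature != null)
        {
            responseFeature.MinDataRate = rate;
        }
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
    }
}

// Usage: [RateFilter(bytesPerSecond: 80, gracePeriodSeconds: 30)]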

Related

Checking the throttle before using a GraphServiceClient

I have several GraphServiceClients and I'm using them to retrieve information from the Microsoft Graph API. There's a throttle on GraphServiceClient calls. As far as I understood from this documentation, you can't call the APIs more than 10,000 times in a 10-minute time frame, and you can only have 4 concurrent requests at the same time. What's a thread-safe and efficient way to check whether I have reached the maximum limit?
My implementation
I came up with this, but I'm not sure if it's actually how Microsoft Graph checks the limits.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Graph;

public class ThrottledClient
{
    private readonly TimeSpan _throttlePeriod;
    private readonly int _throttleLimit;
    private readonly ConcurrentQueue<DateTime> _requestTimes = new();

    public ThrottledClient(int throttleLimit, TimeSpan throttlePeriod)
    {
        _throttleLimit = throttleLimit;
        _throttlePeriod = throttlePeriod;
    }

    public required GraphServiceClient GraphClient { get; set; }

    public async Task CheckThrottleAsync(CancellationToken cancellationToken)
    {
        _requestTimes.Enqueue(DateTime.UtcNow);

        if (_requestTimes.Count > _throttleLimit)
        {
            Console.WriteLine($"Count limit, {DateTime.Now:HH:mm:ss}");
            _requestTimes.TryDequeue(out var oldestRequestTime);

            var timeRemaining = oldestRequestTime + _throttlePeriod - DateTime.UtcNow;
            if (timeRemaining > TimeSpan.Zero)
            {
                Console.WriteLine($"Sleeping for {timeRemaining}");
                await Task.Delay(timeRemaining, cancellationToken).ConfigureAwait(false);
                Console.WriteLine($"Woke up, {DateTime.Now:HH:mm:ss}");
            }
        }
    }
}
public class Engine
{
    public async Task RunAsync()
    {
        var client = GetClient();

        await client.CheckThrottleAsync(_cts.Token).ConfigureAwait(false);
        await DoSomethingAsync(client.GraphClient).ConfigureAwait(false);
    }
}
I can think of other primitives to use, like lock or SemaphoreSlim, but again, I'm not sure if I'm thinking about this correctly.
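For example, the SemaphoreSlim idea I have in mind is roughly this (just a sketch; the wrapper shape is arbitrary):
public class ConcurrencyLimitedClient
{
    // 4 matches the concurrency limit mentioned above; the value is illustrative.
    private readonly SemaphoreSlim _concurrencyLimiter = new SemaphoreSlim(4, 4);

    public required GraphServiceClient GraphClient { get; set; }

    public async Task<T> ExecuteAsync<T>(Func<GraphServiceClient, Task<T>> graphCall, CancellationToken cancellationToken)
    {
        await _concurrencyLimiter.WaitAsync(cancellationToken).ConfigureAwait(false);
        try
        {
            return await graphCall(GraphClient).ConfigureAwait(false);
        }
        finally
        {
            _concurrencyLimiter.Release();
        }
    }
}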
I believe you can use a Graph Developer proxy to test these.
Microsoft Graph Developer Proxy aims to provide a better way to test applications that use Microsoft Graph. Using the proxy to simulate errors, mock responses, and demonstrate behaviors like throttling, developers can identify and fix issues in their code early in the development cycle before they reach production.
More details can be found here https://github.com/microsoftgraph/msgraph-developer-proxy
We use Polly to automatically retry failed HTTP calls. It also has support for exponential backoff.
So why not handle the error in a way that works for your application, instead of trying to figure out what the limit is beforehand (and doing extra bookkeeping to count calls against it)? You can test those scenarios with the Graph Developer Proxy from the other answer.
We also use a circuit breaker to fail fast without extra calls to Graph and retry later.
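A rough sketch of what that combination can look like (the status code check, retry counts and durations are examples, not Graph's documented limits):
using System;
using System.Net.Http;
using Polly;

public static class GraphResiliencePolicies
{
    public static IAsyncPolicy<HttpResponseMessage> Build()
    {
        // Retry with exponential backoff on throttling (429) or transport errors.
        var retry = Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
            .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // Circuit breaker: after 3 consecutive handled failures, stop calling Graph for a minute.
        var breaker = Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => (int)r.StatusCode == 429)
            .CircuitBreakerAsync(3, TimeSpan.FromMinutes(1));

        // Retry on the outside so every attempt goes through the breaker.
        return Policy.WrapAsync(retry, breaker);
    }
}

// Usage, if you issue the HTTP request yourself or via a delegating handler:
// var response = await GraphResiliencePolicies.Build().ExecuteAsync(() => httpClient.SendAsync(request));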
The 4 concurrent requests you're mentioning are for Outlook resources (I've written many UserVoice posts, GitHub issues and escalations on it). Since Q4 2022, you can put 20 requests in a batch for all resources (in the same tenant). Batching reduces the HTTP overhead and might help you overcome throttling limits by combining requests in a smart way; a raw batch call is sketched below.
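For illustration, here is a minimal JSON batch call against the $batch endpoint using plain HttpClient (the token and the two request URLs are placeholders; the Graph SDK also ships batch helpers):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class GraphBatchExample
{
    public static async Task<string> SendBatchAsync(string accessToken)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // Up to 20 individual requests can be combined into one batch call.
        var payload = new
        {
            requests = new[]
            {
                new { id = "1", method = "GET", url = "/me" },
                new { id = "2", method = "GET", url = "/me/messages?$top=5" }
            }
        };

        var content = new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json");
        var response = await http.PostAsync("https://graph.microsoft.com/v1.0/$batch", content);

        // The body contains one response per request id; check each for 429s.
        return await response.Content.ReadAsStringAsync();
    }
}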

Masstransit with multiple consumers of same message fire multiple times in .NET 6

I have a simple scenario: I publish a message using IPublishEndpoint, and I want every microservice that registers a consumer for it to consume it independently of the other microservices, but JUST ONCE, not 10x. When I configure it as the documentation says, it does not behave as described. It appears to multiply the message roughly by the number of consumers, as each consumer fires not once but 3x in my case. Why?
Exact scenario: I have 3 independent microservices running in Docker as MVC projects held in one solution, sharing a core library where the contracts reside. Each project has its own implementation of IConsumer of the SAME contract class from the core library, and every project registers that consumer at startup against the same RabbitMQ instance and virtual host. For demonstration I have simplified the code to a minimum.
From the vague and confusing MassTransit documentation I could not work out why it behaves like this, what I am doing wrong, or how to configure it properly (https://masstransit-project.com/). The MassTransit documentation is very fragmented and does not really explain what its main configuration methods actually do in RabbitMQ.
public interface ISystemVariableChanged
{
    /// <summary>Variable key that was modified.</summary>
    public string Key { get; set; }

    /// <summary>Full reload requested.</summary>
    public bool FullReload { get; set; }
}
3 consumers like this:
public class SystemVariableChangedConsumer : IConsumer<ISystemVariableChanged>
{
    private readonly ILogger<SystemVariableChangedConsumer> logger;

    public SystemVariableChangedConsumer(ILogger<SystemVariableChangedConsumer> logger)
    {
        this.logger = logger;
    }

    public async Task Consume(ConsumeContext<ISystemVariableChanged> context)
    {
        logger.LogInformation("Variable changed in /*ProjectName*/"); // differs per project
        await Task.CompletedTask;
    }
}
And 3 startup registrations like this:
services.AddMassTransit(bus =>
{
    bus.AddConsumer<SystemVariableChangedConsumer>();
    // bus.AddConsumer<SystemVariableChangedConsumer>().Endpoint(p => p.InstanceId = "/*3 different values*/"); // not working either

    bus.SetKebabCaseEndpointNameFormatter();

    bus.UsingRabbitMq((context, rabbit) =>
    {
        rabbit.Host(options.HostName, options.VirtualHost, h =>
        {
            h.Username(options.UserName);
            h.Password(options.Password);
        });

        rabbit.UseInMemoryOutbox();
        rabbit.UseJsonSerializer();
        rabbit.UseRetry(cfg => cfg.Incremental(options.RetryLimit, TimeSpan.FromSeconds(options.RetryTimeout), TimeSpan.FromSeconds(options.RetryTimeout)));

        // rabbit.ConfigureEndpoints(bus); // not working either

        // not working either
        rabbit.ReceiveEndpoint("system-variable-changed", endpoint =>
        {
            endpoint.ConfigureConsumer<SystemVariableChangedConsumer>(context);
        });
    });
});
I tried many setups and they tend to behave in the same wrong way (e.g. setting the endpoint instance ID, etc.).
Regardless of whether I use the ReceiveEndpoint method to configure each endpoint manually or ConfigureEndpoints to configure them all, it makes little difference.
I have read various materials about this, but they did not help with the MassTransit setup. This should be an absolutely basic use case that is easily achievable.
In the RabbitMQ console it created one exchange for the interface, routing to 3 sub-exchanges (one per consumer), each of those bound to a final queue.
I am looking for a clean solution, not hardcoded queue names.
Can anyone help me with the correct startup setup?
Thank you.
This is all that is required:
services.AddMassTransit(bus =>
{
    // Assuming the same consumer class is used, in the same namespace.
    // If the consumers have different names/namespaces, InstanceId is not required.
    bus.AddConsumer<SystemVariableChangedConsumer>()
        .Endpoint(p => p.InstanceId = "/*3 different values*/");

    bus.SetKebabCaseEndpointNameFormatter();

    bus.UsingRabbitMq((context, rabbit) =>
    {
        rabbit.Host(options.HostName, options.VirtualHost, h =>
        {
            h.Username(options.UserName);
            h.Password(options.Password);
        });

        rabbit.UseMessageRetry(cfg => cfg.Incremental(options.RetryLimit, TimeSpan.FromSeconds(options.RetryTimeout), TimeSpan.FromSeconds(options.RetryTimeout)));
        rabbit.UseInMemoryOutbox();

        rabbit.ConfigureEndpoints(context);
    });
});
I'd suggest clearing the exchange/queue bindings on your broker before running it, since previous bindings might be causing redelivery issues. But RabbitMQ is usually good about preventing duplicate deliveries of the same message to the same exchange.

Exception when creating on demand locator for Azure Media Services asset

I'm using Azure Functions for accessing Azure Media Services. I have a HTTP trigger that will get me a streaming link for a video file.
When I try to create an on-demand streaming locator, the server sometimes returns the following error:
One or more errors occurred. Microsoft.WindowsAzure.MediaServices.Client: An error occurred while processing this request.
Locator's ExpirationDateTime cannot be in the past.
Here is part of the code:
using System.Net;
using Microsoft.WindowsAzure.MediaServices.Client;
...
public static readonly TimeSpan ExpirationTimeThreshold = TimeSpan.FromHours(5);
public static readonly DateTime TimeSkew = DateTime.UtcNow.AddMinutes(-5);
...
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    string assetId = req.GetQueryNameValuePairs().FirstOrDefault(q => string.Compare(q.Key, "assetId", true) == 0).Value;

    IAsset wantedAsset = _context.Assets.Where(a => a.Id.Equals(assetId)).FirstOrDefault();
    if (wantedAsset == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest,
            "No asset matches the Id");
    }

    originLocator = wantedAsset.Locators.Where(l => l.Type == LocatorType.OnDemandOrigin).FirstOrDefault();
    if (originLocator == null)
    {
        IAccessPolicy policy = _context.AccessPolicies.Create("Streaming policy",
            ExpirationTimeThreshold,
            AccessPermissions.Read);

        originLocator = _context.Locators.CreateLocator(LocatorType.OnDemandOrigin, wantedAsset,
            policy,
            TimeSkew); // This is where the exception is returned
    }
    ...
}
This seems to be very inconsistent behaviour, as it works as expected most of the time.
Once, when this happened, I tried to put log statements into the Azure Function, and when I saved the changes the function worked as expected.
One thing I realized as I was writing this is that the TimeSkew variable will not be updated until I save changes to the Azure Function.
This means that the first time this runs, TimeSkew would for example be 5:00 PM, and after 5 hours TimeSkew would still be 5:00 PM, meaning the ExpirationTimeThreshold and the TimeSkew will negate each other?
Making TimeSkew a local variable should then fix my problem?
I think you already answered your own question. Every time you recompile (or, more specifically, every time a new instance starts running your function), the TimeSkew static variable is initialized. For all subsequent requests to the same instance, the TimeSkew value stays the same, which eventually leads to the mentioned error.
Solution: don't initialize any static variable to something derived from DateTime.UtcNow, unless you really want to track the start time of the first run on each instance (which you don't).
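A minimal sketch of that change, reusing the code from the question (only the skew moves from a static field into the method body):
// Inside Run(), compute the skew per request instead of reading a static field:
DateTime timeSkew = DateTime.UtcNow.AddMinutes(-5);

originLocator = _context.Locators.CreateLocator(LocatorType.OnDemandOrigin, wantedAsset,
    policy,
    timeSkew);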

RetryPolicy in Enterprise Library 5 does not work

I am working on a small app to translate and import a large amount of data from one database to another. To do this, I'm using Entity Framework and some custom extensions to commit a page of items at a time, in batches of 1000 or so. Since this can take a while, I was also interested in making sure the whole thing wouldn't grind to a halt if there is a hiccup in the connection while it's running.
I chose the Transient Fault Handling Application block, part of Enterprise Library 5.0, following this article (see Case 2: Retry Policy With Transaction Scope). Here is an example of my implementation in the form of an ObjectContext extension, which simply adds objects to the context and tries to save them, using a Retry Policy focused on Sql Azure stuff:
public static void AddObjectsAndSave<T>(this ObjectContext context, IEnumerable<T> objects)
    where T : EntityObject
{
    if (!objects.Any())
        return;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>
        (10, TimeSpan.FromSeconds(10));

    var tso = new TransactionOptions();
    tso.IsolationLevel = IsolationLevel.ReadCommitted;

    var name = context.GetTableName<T>();
    foreach (var item in objects)
        context.AddObject(name, item);

    policy.ExecuteAction(() =>
    {
        using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required, tso))
        {
            context.SaveChanges();
            ts.Complete();
        }
    });
}
The code works great, until I actually test the retry policy by pausing my local instance of SQL Server while it's running. It fails almost immediately, which is odd. You can see that I've got the policy configured to retry at ten-second intervals; either it is ignoring the interval or it is failing to catch the error. I suspect the latter, but I'm new to this, so I don't really know.
I suspect that the SqlAzureTransientErrorDetectionStrategy does not include the error you are simulating. This strategy handles specific errors thrown by SQL Azure. Look at this code to find out which errors it covers: http://code.msdn.microsoft.com/Reliable-Retry-Aware-BCP-a5ae8e40/sourcecode?fileId=22213&pathId=1098196556
To handle the error you are trying to catch, you could implement your own strategy by implementing the ITransientErrorDetectionStrategy interface.
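A minimal sketch of such a strategy (the extra exception types are assumptions about what a paused local SQL Server surfaces, so adjust them to the errors you actually see; the using directives for the Transient Fault Handling block are whatever you already reference for RetryPolicy):
using System;
using System.Data.SqlClient;

public class LocalSqlTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
    private readonly SqlAzureTransientErrorDetectionStrategy _azureStrategy =
        new SqlAzureTransientErrorDetectionStrategy();

    public bool IsTransient(Exception ex)
    {
        // Keep the built-in SQL Azure checks, and additionally treat
        // general SQL exceptions and timeouts as transient so a paused
        // local instance also triggers retries.
        return _azureStrategy.IsTransient(ex)
            || ex is SqlException
            || ex is TimeoutException;
    }
}

// Usage:
// var policy = new RetryPolicy<LocalSqlTransientErrorDetectionStrategy>(10, TimeSpan.FromSeconds(10));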

Registering change notification with Active Directory using C#

This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through stackoverflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.
Benefits: This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.
Disadvantages: There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change). Dealing with deleted objects is a pain. This is a polling technique, so it is only as real-time as how often you query; this can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.
Benefits: This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option. Filtering can reduce what you need to bother with; for example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter. The Windows 2003+ option removes the administrative limitation for using this option (object security), and it also gives you the ability to return only the incremental values that have changed in large multi-valued attributes, which is a really nice feature. Deals well with deleted objects.
Disadvantages: This is a .NET 2.0+ option only; users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method. You can only scope the search to a partition; if you want to track only a particular OU or object, you must sort out those results yourself later. Using this with non-Windows 2003 mode domains comes with the restriction that you must have replication-get-changes permissions (by default, admin only). This is a polling technique, and it does not track intermediate values either: if an object you want to track changes multiple times between searches, you will only get the last change. This can be an advantage, depending on the application.

3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object that matches the filter changes. You can register up to 5 notifications per async connection.
Benefits: Instant notification; the other techniques require polling. Because this is a notification, you will get all changes, even the intermediate ones that would have been lost with the other two techniques.
Disadvantages: Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your controller. This only tells you that the object has changed, but not what the change was; you need to figure out whether the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than with uSNChanged polling, at least). You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do. My first thought was to take the sample code found on MSDN (and referenced from option #3) and simply convert it to System.DirectoryServices.Protocols. This turned out to be a dead end; the way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
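The article assumes a CreateConnection helper that isn't shown; a minimal version might look like this (assuming the current Windows credentials are sufficient for the directory):
static LdapConnection CreateConnection(string server)
{
    var connection = new LdapConnection(server)
    {
        AuthType = AuthType.Negotiate
    };
    connection.SessionOptions.ProtocolVersion = 3;
    return connection;
}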
