**Note:** Cross-posted at the LabVIEW forums: http://forums.ni.com/t5/LabVIEW/C-VISA-wait-on-RQS/td-p/3122939
I'm trying to write a simple C# (.NET 4.0) program to control a Keithley 2400 SMU over VISA GPIB, and I'm having trouble getting the program to wait for the Service Request that the Keithley sends at the end of the sweep.
The sweep is a simple linear voltage sweep, controlled internally by the Keithley unit. I've got the unit set up to send a ServiceRequest signal at the end of the sweep or when compliance is reached.
I'm able to send the commands to the SMU and read the data buffer, but only if I manually insert a delay between the sweep start command and the read data command.
One issue I'm having is that I'm quite new to C# - I'm using this project (porting parts of my LV code) to learn it.
Here is what I have so far for my C# code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using NationalInstruments.VisaNS;
private void OnServiceRequest(object sender, MessageBasedSessionEventArgs e)
{
Console.WriteLine("Service Request Received!");
}
// The following code is in a class method.
public double[,] RunSweep()
{
// Create the session and message-based session
MessageBasedSession mbSession = null;
Session mySession = null;
string responseString = null;
// open the address
Console.WriteLine("Sending Commands to Instrument");
instrAddr = "GPIB0::25::INSTR";
mySession = ResourceManager.GetLocalManager().Open(instrAddr);
// Cast to message-based session
mbSession = (MessageBasedSession)mySession;
// Here's where things get iffy for me... Enabling the event and whatnot
mbSession.ServiceRequest += new MessageBasedSessionEventHandler(OnServiceRequest);
MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest;
mbSession.EnableEvent(srq, EventMechanism.Handler);
// Start the sweep (SMU was set up earlier)
Console.WriteLine("Starting Sweep");
mbSession.Write(":OUTP ON;:INIT");
int timeout = 10000; // milliseconds
// Thread.Sleep(10000); // using this line works fine, but it means the test always takes 10s even if compliance is hit early
// This line raises an error saying that the event is not enabled.
mbSession.WaitOnEvent(srq, timeout);
// Turn off the SMU.
Console.WriteLine("I hope the sweep is done, cause I'm tired of waiting");
mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV");
// Get the data
string data = mbSession.Query(":TRAC:DATA?");
// Close session
mbSession.Dispose();
// For now, return a dummy 3x3 array; the real data would be parsed from the buffer.
double[,] dummyArray = new double[3, 3] {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
return dummyArray;
}
All the above is supposed to mimic my LabVIEW code (block-diagram screenshot not included here).
So, any ideas on where I'm going wrong?
Thanks,
Edit:
After a little more fiddling, I've found that the Service Request function OnServiceRequest is actually fired at the right time ("Service Request Received!" is printed to the console).
It turns out that I need to enable the event as a Queue rather than a handler. This line:
mbSession.EnableEvent(srq, EventMechanism.Handler);
Should actually be:
mbSession.EnableEvent(srq, EventMechanism.Queue);
Source: The documentation under "Remarks". It was a pain to find the docs on it... NI needs to make it easier :-(.
With this change, I also don't need to create the MessageBasedSessionEventHandler.
The final, working code looks like:
Session rm = ResourceManager.GetLocalManager().Open("GPIB0::25::INSTR");
MessageBasedSession mbSession = (MessageBasedSession)rm;
MessageBasedSessionEventType srq = MessageBasedSessionEventType.ServiceRequest;
mbSession.EnableEvent(srq, EventMechanism.Queue); // Note QUEUE, not HANDLER
int timeout = 10000;
// Start the sweep
mbSession.Write(":OUTP ON;:INIT");
// This waits for the Service Request
mbSession.WaitOnEvent(srq, timeout);
// After the Service Request, turn off the SMUs and get the data
mbSession.Write(":OUTP OFF;:TRAC:FEED:CONT NEV");
string data = mbSession.Query(":TRAC:DATA?");
mbSession.Dispose();
What you're doing looks correct to me, so it's possible that there's an issue with the NI library.
The only thing I can think of to try is waiting for "all enabled events" instead of just "ServiceRequest", like this:
mbSession.WaitOnEvent(MessageBasedSessionEventType.AllEnabledEvents, timeout);
Note that it doesn't look like you can "enable" all events (so don't change that part).
I also looked for some examples of how other people run Keithley sweeps and I found this and this (Matlab example). As I suspected, neither uses events to determine when the sweep is finished; instead they use a while loop that keeps polling the Keithley (the first link actually uses threads, but it's the same idea). This makes me think that polling may be your best bet. So you could just do this:
int timeout = 10000;
int cycleWait = 1000;
for (int i = 0; i < timeout / cycleWait; i++)
{
try
{
    // If the sweep is still running, the query fails (times out) and throws.
    string data = mbSession.Query(":TRAC:DATA?");
    break;
}
catch
{
    // Not ready yet; wait one cycle and retry.
    Thread.Sleep(cycleWait);
}
}
(You may also have to check if data is null, but there has to be some way of knowing when the sweep is finished).
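If you'd rather avoid the query-and-catch pattern, another polling variant is to serial-poll the status byte until the instrument asserts service request. This assumes I'm remembering the VisaNS API correctly (ReadStatusByte and the RequestingService flag), so verify it against the docs:
int timeout = 10000; // milliseconds
int cycleWait = 250;
for (int elapsed = 0; elapsed < timeout; elapsed += cycleWait)
{
    // ReadStatusByte() performs an IEEE 488.1 serial poll; bit 6
    // (RequestingService) is set while the instrument asserts SRQ.
    StatusByteFlags stb = mbSession.ReadStatusByte();
    if ((stb & StatusByteFlags.RequestingService) != 0)
    {
        break; // sweep finished (or compliance reached)
    }
    Thread.Sleep(cycleWait);
}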
I am pretty new to C# and could use some help on an audio project.
My audio input buffers call a method when they are filled up. In that method I marshal the buffers to a local float[] and pass it to a function where some audio processing is done. After processing, the function returns the manipulated float[], which I pass via Marshal.Copy to the audio output buffer. It works, but it is pretty hard to get the audio processing done fast enough to pass the result back without ending up with ugly glitches. If I enlarge the audio buffers it gets better, but I get intolerably high latency in the signal chain.
One problem is the GC. My DSP routine does FFTs, and the methods frequently need to allocate local variables. I think this is slowing down my process a lot.
So I need a way to allocate a few pieces of unmanaged memory once (and re-access them), keep that memory for the entire runtime, and just reference it from the methods.
I found e.g:
IntPtr hglobal = Marshal.AllocHGlobal(8192); // allocate 8 KB of unmanaged memory
Marshal.FreeHGlobal(hglobal);                // free it again when done
So what I tried is to define a global static class "Globals" with a static member and assign that IntPtr to it:
Globals.mem1 = hglobal;
From within any other method I can now access it, e.g.:
int[] f = new int[2];
f[0] = 111;
f[1] = 222;
Marshal.Copy(f, 0, Globals.mem1, 2);
Now comes my problem: if I want to access this int[] from the example above in another method, how could I do that?
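For what it's worth, Marshal.Copy has a mirror overload (IntPtr source, int[] destination) that copies back out of unmanaged memory; a minimal sketch, assuming Globals.mem1 still holds the two values written above:
int[] roundTrip = new int[2];
Marshal.Copy(Globals.mem1, roundTrip, 0, 2); // copy 2 ints back out of the unmanaged block
// roundTrip[0] == 111, roundTrip[1] == 222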
thank you for your fast help.
It seems I was a little imprecise, sorry.
My audio device driver raises a buffer-filled event which I catch. In pseudocode (I don't have access to my home desktop right now), it looks like:
void buffer(...)
{
    byte[] buf = new byte[asioBufferSize];
    Marshal.Copy(asioInBuffer, buf, 0, asioBufferLength);  // unmanaged in-buffer -> managed
    buf = manipulate(buf);
    Marshal.Copy(buf, 0, asioOutBuffer, asioBufferLength); // managed -> unmanaged out-buffer
}
The manipulate function converts the bytes to floats, does some math (FFT, filtering, inverse FFT), and transforms back to bytes. It looks like:
private byte[] manipulate(byte[] buf, Complex[] filter)
{
    float[] bu = convertToFloat(buf);        // conversion from byte to audio float
    Complex[] inbuf = new Complex[bu.Length];
    Complex[] mid2buf = new Complex[bu.Length];
    for (int n = 0; n < bu.Length; n++)
    {
        inbuf[n] = bu[n];                    // copy to Complex
    }
    Complex[] midbuf = FFT(inbuf);           // forward FFT
    for (int n = 0; n < midbuf.Length; n++)
    {
        mid2buf[n] = midbuf[n] * filter[n];  // multiply with filter
    }
    Complex[] outbuf = iFFT(mid2buf);        // inverse FFT
    byte[] outBytes = convertToByte(outbuf); // conversion from float back to audio bytes
    return outBytes;
}
This is where I expect my speed issue to be. So I thought the problem could be solved if the manipulate function could just "get" a fixed piece of unmanaged memory where all those variables (the Complex arrays etc.) sit pre-allocated once, so that I don't have to create new ones each time the function is called. I first suspected wrong FFT math as the reason for my glitches, but they happen at fairly "sharp" intervals of a few seconds, so they are not connected to audio signal issues like clipping. I think the issue happens when the GC does some serious work and eats exactly the few milliseconds I am missing to get the output buffer filled in time.
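A minimal sketch of that pre-allocation idea, assuming the buffer length is fixed and known up front. Note it only pays off if the FFT can write into a caller-supplied array (an in-place or out-parameter variant); an FFT that returns a freshly allocated array re-introduces the per-call garbage:
// Scratch buffers allocated once and reused by every manipulate() call.
private readonly int _len = 4096; // assumed fixed ASIO buffer length
private Complex[] _inbuf, _midbuf, _mid2buf, _outbuf;
private void InitScratchBuffers()
{
    _inbuf   = new Complex[_len];
    _midbuf  = new Complex[_len];
    _mid2buf = new Complex[_len];
    _outbuf  = new Complex[_len];
}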
I really doubt the issues you are experiencing are caused by your managed buffer creation/copying. Instead, I think your problem is that your data-capture logic is coupled with your DSP logic. Usually, captured data resides in a circular buffer where the data is rewritten after some period, so you should fetch this data as soon as possible.
The problem is that you don't fetch the next available data block until your DSP is done, and you already know FFT operations are really CPU intensive! If you have a processing peak, you may not be able to retrieve the data before it is rewritten by the capture driver.
One possibility to address your issue is to increase, if possible, the size and/or number of capture buffers. This buys you more time before the captured data is rewritten. The other possibility, and the one I favor, is decoupling your processing stage from your capture stage; that way, if new data arrives while you are busy with your DSP computations, you can still grab it and buffer it almost as soon as it becomes available. You become much more resilient to garbage-collection pauses or computing peaks inside your manipulate method.
This involves creating two threads: a capture thread and a processing thread. You also need an event that signals the processing thread when new data is available, and a queue that serves as a dynamic, expandable buffer.
The capture thread would look something like this:
// the following are class members
AutoResetEvent _bufQueueEvent = new AutoResetEvent(false);
Queue<byte[]> _bufQueue = new Queue<byte[]>();
volatile bool _keepCaptureThreadAlive; // volatile: written by the control thread, read by the capture thread
Thread _captureThread;
void CaptureLoop() {
while( _keepCaptureThreadAlive ) {
byte[] asioInBuffer = WaitForBuffer(); // blocks until the driver delivers a buffer
byte[] buf = new byte[asioInBuffer.Length];
Array.Copy(asioInBuffer, buf, buf.Length); // private copy: the driver may rewrite its buffer
lock( _bufQueue ) {
_bufQueue.Enqueue(buf);
}
_bufQueueEvent.Set(); // notify processing thread new data is available
}
}
void StartCaptureThread() {
_keepCaptureThreadAlive = true;
_captureThread = new Thread(new ThreadStart(CaptureLoop));
_captureThread.Name = "CaptureThread";
_captureThread.IsBackground = true;
_captureThread.Start();
}
void StopCaptureThread() {
_keepCaptureThreadAlive = false;
_captureThread.Join(); // wait until the thread exits.
}
The processing thread would look something like this
// the following are class members
volatile bool _keepProcessingThreadAlive; // volatile, same reasoning as above
Thread _processingThread;
void ProcessingLoop() {
while( _keepProcessingThreadAlive ) {
_bufQueueEvent.WaitOne(); // thread will sleep until fresh data is available
if( !_keepProcessingThreadAlive ) {
break; // check whether the thread was woken only to terminate.
}
int queueCount;
lock( _bufQueue ) {
queueCount = _bufQueue.Count;
}
for( int i = 0; i < queueCount; i++ ) {
byte[] buffIn;
lock( _bufQueue ) {
// Only lock during the dequeue operation; this way the capture thread will
// be able to enqueue fresh data even if we are still doing DSP processing.
buffIn = _bufQueue.Dequeue();
}
byte[] buffOut = manipulate(buffIn); // you are safe if this stage takes more time than normal, you will still get the incoming data
// additional logic using manipulate() return value
...
}
}
}
void StartProcessingThread() {
_keepProcessingThreadAlive = true;
_processingThread = new Thread(new ThreadStart(ProcessingLoop));
_processingThread.Name = "ProcessingThread";
_processingThread.IsBackground = true;
_processingThread.Start();
}
void StopProcessingThread() {
_keepProcessingThreadAlive = false;
_bufQueueEvent.Set(); // wake up thread in case it is waiting for data
_processingThread.Join();
}
At my job we also perform a lot of DSP and this pattern has really helped us with the kind of issues you are experiencing.
What I'm trying to accomplish is a C# application that will read logs from the Windows Event Logs and store them somewhere else. This has to be fast, since some of the devices where it will be installed generate a high amount of logs/s.
I have tried three approaches so far:
Local WMI: it didn't work well; there were too many errors and exceptions caused by the size of the collections that need to be loaded.
EventLogReader: I thought this was the perfect solution, since it lets you query the event log however you like using XPath expressions. The problem is that getting the content of the message for each log (by calling FormatDescription()) takes way too much time for long collections.
E.g.: I can read 12k logs in 0.11 s if I just iterate over them.
If I add a line to store the message for each log, it takes nearly 6 minutes to complete exactly the same operation, which is totally crazy for such a low number of logs.
I don't know if there's any optimization that can be applied to EventLogReader to get the message faster; I couldn't find anything in the MS documentation or on the Internet.
I also found that you can read log entries using a class called EventLog. However, it does not let you apply any kind of filter, so you basically have to load the entire list of logs into memory and then filter it according to your needs.
Here's an example:
EventLog eventLog = EventLog.GetEventLogs().FirstOrDefault(el => el.Log.Equals("Security", StringComparison.OrdinalIgnoreCase));
var newEntries = (from entry in eventLog.Entries.OfType<EventLogEntry>()
                  orderby entry.TimeWritten ascending
                  where entry.TimeWritten > takefrom // takefrom: cutoff timestamp defined elsewhere
                  select entry);
Despite being faster in terms of getting the message, memory use might be high, and I don't want to cause any issues on the devices where this solution will be deployed.
Can anybody help me with this? I cannot find any workarounds or approaches to achieve something like this.
Thank you!
You can give the EventLogReader class a try. See https://learn.microsoft.com/en-us/previous-versions/bb671200(v=vs.90).
It is better than the EventLog class because accessing the EventLog.Entries collection has the nasty property that its count can change while you are reading from it. What is even worse is that the reading happens on an I/O thread-pool thread, which can crash your application with an unhandled exception. At least that was the case some years ago.
The EventLogReader also gives you the ability to supply a query string to filter for the events you are interested in. That is the way to go if you write a new application.
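For example, a minimal filtered query might look like the following; the filter string is the standard event log XPath syntax, and Level=2 (Error) is just an illustration:
var query = new EventLogQuery("Application", PathType.LogName,
    "*[System[(Level=2)]]"); // only Error-level events
using (var reader = new EventLogReader(query))
{
    EventRecord ev;
    while ((ev = reader.ReadEvent()) != null)
    {
        Console.WriteLine("{0}: {1}", ev.Id, ev.FormatDescription());
    }
}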
Here is an application which shows how you can parallelize reading:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Diagnostics.Eventing.Reader;
using System.Linq;
using System.Threading.Tasks;
namespace EventLogReading
{
class Program
{
static volatile bool myHasStoppedReading = false;
static void ParseEventsParallel()
{
var sw = Stopwatch.StartNew();
var query = new EventLogQuery("Application", PathType.LogName, "*");
const int BatchSize = 100;
ConcurrentQueue<EventRecord> events = new ConcurrentQueue<EventRecord>();
var readerTask = Task.Factory.StartNew(() =>
{
using (EventLogReader reader = new EventLogReader(query))
{
EventRecord ev;
int count = 0;
while ((ev = reader.ReadEvent()) != null)
{
if ( count % BatchSize == 0)
{
events.Enqueue(ev);
}
count++;
}
}
myHasStoppedReading = true;
});
ConcurrentQueue<KeyValuePair<string, EventRecord>> eventsWithStrings = new ConcurrentQueue<KeyValuePair<string, EventRecord>>();
Action conversion = () =>
{
EventRecord ev = null;
using (var reader = new EventLogReader(query))
{
while (!myHasStoppedReading || events.TryDequeue(out ev))
{
if (ev != null)
{
reader.Seek(ev.Bookmark);
for (int i = 0; i < BatchSize; i++)
{
ev = reader.ReadEvent();
if (ev == null)
{
break;
}
eventsWithStrings.Enqueue(new KeyValuePair<string, EventRecord>(ev.FormatDescription(), ev));
}
}
}
}
};
Parallel.Invoke(Enumerable.Repeat(conversion, 8).ToArray());
sw.Stop();
Console.WriteLine($"Got {eventsWithStrings.Count} events with strings in {sw.Elapsed.TotalMilliseconds:N3}ms");
}
static void ParseEvents()
{
var sw = Stopwatch.StartNew();
List<KeyValuePair<string, EventRecord>> parsedEvents = new List<KeyValuePair<string, EventRecord>>();
using (EventLogReader reader = new EventLogReader(new EventLogQuery("Application", PathType.LogName, "*")))
{
EventRecord ev;
while ((ev = reader.ReadEvent()) != null)
{
parsedEvents.Add(new KeyValuePair<string, EventRecord>(ev.FormatDescription(), ev));
}
}
sw.Stop();
Console.WriteLine($"Got {parsedEvents.Count} events with strings in {sw.Elapsed.TotalMilliseconds:N3}ms");
}
static void Main(string[] args)
{
ParseEvents();
ParseEventsParallel();
}
}
}
Got 20322 events with strings in 19,320.047ms
Got 20323 events with strings in 5,327.064ms
This gives a decent speedup of roughly a factor of 4. I needed to use some tricks to get faster, because for some strange reason the class ProviderMetadataCachedInformation is not thread-safe and internally uses a lock(this) around the Format method, which defeats parallel reading.
The key trick is to open the event log again in the conversion threads and read a batch of events of the query there via the event bookmark API. That way you can format the strings independently.
Update 1:
I have landed a change in .NET 5 which increases performance by a factor of three, up to 20 in some scenarios. See https://github.com/dotnet/runtime/issues/34568.
You can also copy the EventLogReader class from .NET Core and use this one instead which will give you the same speedup.
The full saga is described by my Blog Post: https://aloiskraus.wordpress.com/2020/07/20/ms-performance-hud-analyze-eventlog-reading-performance-in-realtime/
We discussed reading the existing logs a bit in the comments; you can access the Security log like this:
var eventLog = new EventLog("Security");
for (int i = 0; i < eventLog.Entries.Count; i++)
{
Console.WriteLine($"{eventLog.Entries[i].Message}");
}
This might not be the cleanest way of doing it performance-wise, but I doubt any other will be faster, as you have already found out yourself by trying different techniques.
A small addendum to Alois's post: EventLogReader is not faster out of the box than EventLog, especially when using the for-loop mechanism shown in the code block above; I think EventLog is faster there. It only accesses the entries inside the loop using their index, and the Entries collection is just a reference, whereas EventLogReader performs a query first and loops through that result, which should be slower. As commented on Alois's post: if you don't need the query option, just use the EventLog variant. If you do need querying, use EventLogReader, as it can query at a lower level than you could with EventLog (only LINQ queries, which is of course slower than filtering while executing the look-up).
To prevent you from having this hassle again in the future, and because you said you are running a service, I'd use the EntryWritten event of the EventLog class:
var eventLog = new EventLog("Security")
{
EnableRaisingEvents = true
};
eventLog.EntryWritten += EventLog_EntryWritten;
// .. read existing logs or do other work ..
private static void EventLog_EntryWritten(object sender, EntryWrittenEventArgs e)
{
Console.WriteLine($"received new entry: {e.Entry.Message}");
}
Note that you must set EnableRaisingEvents to true for the event to fire whenever a new entry is logged. It is also good practice, performance-wise too, to offload the work to a Task, for example, so the event source won't be blocked while your handler runs.
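For instance, a variant of the handler above that offloads the work (ProcessEntry is a placeholder for your own storage logic):
private static void EventLog_EntryWritten(object sender, EntryWrittenEventArgs e)
{
    // Copy what you need out of the entry first, then hand the real
    // work to the thread pool so the event source is not blocked.
    var message = e.Entry.Message;
    Task.Run(() => ProcessEntry(message)); // ProcessEntry: your own method
}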
This approach works fine if you want to retrieve all newly created events. If you want to retrieve newly created events but apply a query (filter) to them, check out the EventLogWatcher class. In your case, with no constraints, I'd just use the EntryWritten event for plain old simplicity.
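And if you do end up needing a filter on live events, a small EventLogWatcher sketch (same XPath query syntax as EventLogReader; "*" matches everything):
var watcher = new EventLogWatcher(new EventLogQuery("Security", PathType.LogName, "*"));
watcher.EventRecordWritten += (s, e) =>
{
    if (e.EventRecord != null) // null indicates an error during delivery
        Console.WriteLine(e.EventRecord.FormatDescription());
};
watcher.Enabled = true; // start delivering events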
I need to test if there's any memory leak in our application and monitor to see if memory usage increases too much while processing the requests.
I'm trying to develop some code to make multiple simultaneous calls to our api/webservice method. This api method is not asynchronous and takes some time to complete its operation.
I've done a lot of research about Tasks, Threads, and Parallelism, but so far I've had no luck. The problem is that even after trying all the solutions below, the result is always the same: it appears to process only two requests at a time.
Tried:
-> Creating tasks inside a simple for loop and starting them, with and without TaskCreationOptions.LongRunning
-> Creating threads inside a simple for loop and starting them, with and without high priority
-> Creating a list of actions in a simple for loop and starting them using
Parallel.ForEach(list, options, item => item.Invoke());
-> Running directly inside a Parallel.For loop (below)
-> Running TPL methods with and without Options and TaskScheduler
-> Tried with different values for MaxParallelism and maximum threads
-> Checked this post too, but it didn't help either. (Could I be missing something?)
-> Checked some other posts here on Stack Overflow, but with F# solutions that I don't know how to properly translate to C#. (I've never used F#...)
(TaskScheduler class taken from MSDN)
Here's the basic structure that I have:
public class Test
{
Data _data;
String _url;
public Test(Data data, string url)
{
_data = data;
_url = url;
}
public ReturnData Execute()
{
    ReturnData returnData = null;
    using (var ws = new WebService())
    {
        ws.Url = _url;
        ws.Timeout = 600000;
        var wsReturn = ws.LongRunningMethod(_data);
        // Basically convert wsReturn to my method's return type,
        // with some if/else logic etc.:
        // returnData = Convert(wsReturn);
    }
    return returnData;
}
}
sealed class ThreadTaskScheduler : TaskScheduler, IDisposable
{
// The runtime decides how many tasks to create for the given set of iterations, loop options, and scheduler's max concurrency level.
// Tasks will be queued in this collection
private BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
// Maintain an array of threads. (Feel free to bump up _n.)
private readonly int _n = 100;
private Thread[] _threads;
public ThreadTaskScheduler()
{
_threads = new Thread[_n];
// Create unstarted threads based on the same inline delegate
for (int i = 0; i < _n; i++)
{
_threads[i] = new Thread(() =>
{
// The following loop blocks until items become available in the blocking collection.
// Then one thread is unblocked to consume that item.
foreach (var task in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(task);
}
});
// Start each thread
_threads[i].IsBackground = true;
_threads[i].Start();
}
}
// This method is invoked by the runtime to schedule a task
protected override void QueueTask(Task task)
{
_tasks.Add(task);
}
// The runtime will probe if a task can be executed in the current thread.
// By returning false, we direct all tasks to be queued up.
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
{
return false;
}
public override int MaximumConcurrencyLevel { get { return _n; } }
protected override IEnumerable<Task> GetScheduledTasks()
{
return _tasks.ToArray();
}
// Dispose is not thread-safe with other members.
// It may only be used when no more tasks will be queued
// to the scheduler. This implementation will block
// until all previously queued tasks have completed.
public void Dispose()
{
if (_threads != null)
{
_tasks.CompleteAdding();
for (int i = 0; i < _n; i++)
{
_threads[i].Join();
_threads[i] = null;
}
_threads = null;
_tasks.Dispose();
_tasks = null;
}
}
}
And the test code itself:
private void button2_Click(object sender, EventArgs e)
{
var maximum = 100;
var options = new ParallelOptions
{
MaxDegreeOfParallelism = 100,
TaskScheduler = new ThreadTaskScheduler()
};
// To prevent UI blocking
Task.Factory.StartNew(() =>
{
Parallel.For(0, maximum, options, i =>
{
var data = new Data();
// Fill data
var test = new Test(data, _url); //_url is pre-defined
var ret = test.Execute();
// Check return and display on screen
var now = DateTime.Now.ToString("HH:mm:ss");
var newText = $"{Environment.NewLine}[{now}] - {ret.ReturnId}) {ret.ReturnDescription}";
AppendTextBox(newText, ref resultTextBox);
});
});
}
public void AppendTextBox(string value, ref TextBox textBox)
{
if (InvokeRequired)
{
this.Invoke(new ActionRef<string, TextBox>(AppendTextBox), value, textBox); // ActionRef: custom delegate type (declared elsewhere) that allows the ref parameter
return;
}
textBox.Text += value;
}
And the result that I get is basically this:
[10:08:56] - (0) OK
[10:08:56] - (0) OK
[10:09:23] - (0) OK
[10:09:23] - (0) OK
[10:09:49] - (0) OK
[10:09:50] - (0) OK
[10:10:15] - (0) OK
[10:10:16] - (0) OK
etc
As far as I know there's no limitation on the server side. I'm relatively new to the Parallel/Multitasking world. Is there any other way to do this? Am I missing something?
(I simplified all the code for clarity and I believe the provided code is enough to picture the mentioned scenarios. I also didn't post the application code, but it's a simple WinForms screen just to call and show results. If any code is somehow relevant, please let me know and I can edit and post it too.)
Thanks in advance!
EDIT1: I checked the server logs and it's receiving the requests two by two, so it's indeed something related to sending them, not receiving.
Could it be a network problem/limitation related to how the framework manages the requests/connections? Or something with the network itself (unrelated to .NET)?
EDIT2: Forgot to mention, it's a SOAP webservice.
EDIT3: One of the properties that I send (inside data) needs to change for each request.
EDIT4: I noticed that there's always an interval of ~25 seconds between each pair of requests, if that's relevant.
I would recommend not reinventing the wheel and instead using one of the existing solutions:
Most obvious choice: if your Visual Studio license allows, you can use the MS Load Testing Framework; most likely you won't even have to write a single line of code: How to: Create a Web Service Test
SoapUI is a free and open-source web services testing tool; it has some limited load testing capabilities
If for some reason SoapUI is not suitable (i.e. you need to run load tests in clustered mode from several hosts, or you need more enhanced reporting), you can use Apache JMeter, a free and open-source multiprotocol load testing tool which supports web services load testing as well.
A good way to create load tests without writing your own project is to use this service: https://loader.io/targets
It is free for small tests; you can POST parameters, headers, etc., and you get nice reporting.
Isnt the "two requests at a time" the result of the default maxconnection=2 limit on connectionManagement?
<configuration>
<system.net>
<connectionManagement>
<add address = "http://www.contoso.com" maxconnection = "4" />
<add address = "*" maxconnection = "2" />
</connectionManagement>
</system.net>
</configuration>
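If you'd rather not touch the .config file, the same limit can be raised from code via System.Net.ServicePointManager; set it once at startup, before the first request goes out:
// Default is 2 connections per HTTP endpoint; raise it to match the test size.
ServicePointManager.DefaultConnectionLimit = 100;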
My favorite load testing library is NBomber. It has an easy and powerful API, realistic user simulations, and provides you with nice HTML reports about latency and requests per second.
I used it to test my API and wrote an article about how I did it.
I have an API request queue that I loop over using System.Timers.Timer. I set it up like this:
private static void SetupTimerLoop()
{
queueLoopTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);
queueLoopTimer.Interval = 100;
queueLoopTimer.Start();
}
Periodically, OnTimedEvent gets called only once every second, almost exactly, for long spans of time, and then speeds back up to once every 100 ms. I cannot reliably reproduce this; sometimes it happens, sometimes it doesn't. I have watched my CPU usage and it doesn't spike during these slowdowns; if anything it goes down.
If I set breakpoints, they show that the timer's Interval is still 100 ms.
What could be going on here? Is there anything I can do to further troubleshoot this?
Potentially related: when this happens, all the HTTP requests initiated from the loop (these are run on Tasks, so they return asynchronously) stop returning.
Request related:
private T Get<T>(string endpoint, IRequest request) where T : class
{
var client = InitHttpClient();
string debug = BaseUrl + endpoint + ToQueryString(request);
// Note: .Result blocks the calling thread until the async call completes.
HttpResponseMessage response = client.GetAsync(BaseUrl + endpoint + ToQueryString(request)).Result;
string body = response.Content.ReadAsStringAsync().Result;
if (response.IsSuccessStatusCode)
{
T result = JsonConvert.DeserializeObject<T>(body, _serializerSettings);
return result;
}
var error = JsonConvert.DeserializeObject<HelpScoutError>(body);
throw new HelpScoutApiException(error, body);
}
I never get to string body = response.Content.ReadAsStringAsync().Result; as long as the loop is running slow.
Not sure if that could have something to do with it, hopefully you guys have some insight.
Edit: I don't flood the API with requests; I throttle the number of requests that go out per rolling 60-second period. During that time I just use an if statement to skip the API call for that loop iteration.
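In case it's useful, that rolling-window check can be expressed like this (a sketch; the names and the 50-per-minute cap are illustrative):
private static readonly Queue<DateTime> _sent = new Queue<DateTime>();
private const int MaxPerMinute = 50; // illustrative cap

private static bool TryReserveSlot()
{
    lock (_sent)
    {
        DateTime cutoff = DateTime.UtcNow.AddSeconds(-60);
        while (_sent.Count > 0 && _sent.Peek() < cutoff)
            _sent.Dequeue();            // drop timestamps outside the window
        if (_sent.Count >= MaxPerMinute)
            return false;               // over the cap: skip this tick
        _sent.Enqueue(DateTime.UtcNow);
        return true;
    }
}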
First of all, I'm not too clear on what you mean by "a loop": your OnTimedEvent is an event handler that gets triggered at a preset interval (100 ms).
As a hint, you might want to add logic at the beginning of your OnTimedEvent handler to prevent re-entrance, in case the handler ever takes longer than 100 ms:
static int working = 0;
private static void OnTimedEvent(Object source, System.Timers.ElapsedEventArgs e)
{
    // Atomically claim the flag; bail out if a previous callback is
    // still running (System.Timers.Timer can fire callbacks concurrently).
    if (Interlocked.CompareExchange(ref working, 1, 0) != 0)
        return;
    try
    {
        // do your normal work here
    }
    finally
    {
        Interlocked.Exchange(ref working, 0); // release the flag
    }
}
I'm trying to create a web app which does many things, but the one I'm currently focused on is the inbox count. I want to use an EWS StreamingSubscription so that I can get a notification for each event and return the total count of items in the inbox. How can I use this in MVC terms? I did find some code from a Microsoft tutorial that I was going to test, but I just couldn't figure out how I'd use it in the MVC world, i.e. what the model is going to be; if the model is the count, how does it get notified every time an event occurs on the Exchange Server, etc.
Here's the code I downloaded from Microsoft, but I just couldn't understand how I can convert the count to JSON and push it to the client as soon as a change event occurs. NOTE: this code is unchanged, so it doesn't return the count yet.
using System;
using System.Linq;
using System.Net;
using System.Threading;
using Microsoft.Exchange.WebServices.Data;
namespace StreamingNotificationsSample
{
internal class Program
{
private static AutoResetEvent _Signal;
private static ExchangeService _ExchangeService;
private static string _SynchronizationState;
private static Thread _BackgroundSyncThread;
private static StreamingSubscriptionConnection CreateStreamingSubscription(ExchangeService service,
StreamingSubscription subscription)
{
var connection = new StreamingSubscriptionConnection(service, 30);
connection.AddSubscription(subscription);
connection.OnNotificationEvent += OnNotificationEvent;
connection.OnSubscriptionError += OnSubscriptionError;
connection.OnDisconnect += OnDisconnect;
connection.Open();
return connection;
}
private static void SynchronizeChangesPeriodically()
{
while (true)
{
try
{
// Get all changes from the server and process them according to the business
// rules.
SynchronizeChanges(new FolderId(WellKnownFolderName.Inbox));
}
catch (Exception ex)
{
Console.WriteLine("Failed to synchronize items. Error: {0}", ex);
}
// Since the SyncFolderItems operation is a
// rather expensive operation, only do this every 10 minutes
Thread.Sleep(TimeSpan.FromMinutes(10));
}
}
public static void SynchronizeChanges(FolderId folderId)
{
bool moreChangesAvailable;
do
{
Console.WriteLine("Synchronizing changes...");
// Get all changes since the last call. The synchronization cookie is stored in the _SynchronizationState field.
// Only the ids are requested. Additional properties should be fetched via GetItem calls.
var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
SyncFolderItemsScope.NormalItems, _SynchronizationState);
// Update the synchronization cookie
_SynchronizationState = changes.SyncState;
// Process all changes
foreach (var itemChange in changes)
{
// This example just prints the ChangeType and ItemId to the console
// LOB application would apply business rules to each item.
Console.Out.WriteLine("ChangeType = {0}", itemChange.ChangeType);
Console.Out.WriteLine("ChangeType = {0}", itemChange.ItemId);
}
// If more changes are available, issue additional SyncFolderItems requests.
moreChangesAvailable = changes.MoreChangesAvailable;
} while (moreChangesAvailable);
}
public static void Main(string[] args)
{
// Create new exchange service binding
// Important point: Specify Exchange 2010 with SP1 as the requested version.
_ExchangeService = new ExchangeService(ExchangeVersion.Exchange2010_SP1)
{
Credentials = new NetworkCredential("user", "password"),
Url = new Uri("URL to the Exchange Web Services")
};
// Process all items in the folder on a background-thread.
// A real-world LOB application would retrieve the last synchronization state first
// and write it to the _SynchronizationState field.
_BackgroundSyncThread = new Thread(SynchronizeChangesPeriodically);
_BackgroundSyncThread.Start();
// Create a new subscription
var subscription = _ExchangeService.SubscribeToStreamingNotifications(new FolderId[] {WellKnownFolderName.Inbox},
EventType.NewMail);
// Create the new streaming notification connection
var connection = CreateStreamingSubscription(_ExchangeService, subscription);
Console.Out.WriteLine("Subscription created.");
_Signal = new AutoResetEvent(false);
// Wait for the application to exit
_Signal.WaitOne();
// Finally, unsubscribe from the Exchange server
subscription.Unsubscribe();
// Close the connection
connection.Close();
}
private static void OnDisconnect(object sender, SubscriptionErrorEventArgs args)
{
// Cast the sender as a StreamingSubscriptionConnection object.
var connection = (StreamingSubscriptionConnection) sender;
// Ask the user if they want to reconnect or close the subscription.
Console.WriteLine("The connection has been aborted; probably because it timed out.");
Console.WriteLine("Do you want to reconnect to the subscription? Y/N");
while (true)
{
var keyInfo = Console.ReadKey(true);
{
switch (keyInfo.Key)
{
case ConsoleKey.Y:
// Reconnect the connection
connection.Open();
Console.WriteLine("Connection has been reopened.");
return; // leave the key loop once reconnected
case ConsoleKey.N:
// Signal the main thread to exit.
Console.WriteLine("Terminating.");
_Signal.Set();
return; // leave the key loop; the main thread will exit
}
}
}
}
private static void OnNotificationEvent(object sender, NotificationEventArgs args)
{
// Extract the item ids for all NewMail Events in the list.
var newMails = from e in args.Events.OfType<ItemEvent>()
where e.EventType == EventType.NewMail
select e.ItemId;
// Note: for the sake of simplicity, error handling is omitted here.
// Just assume everything went fine.
var response = _ExchangeService.BindToItems(newMails,
new PropertySet(BasePropertySet.IdOnly, ItemSchema.DateTimeReceived,
ItemSchema.Subject));
var items = response.Select(itemResponse => itemResponse.Item);
foreach (var item in items)
{
Console.Out.WriteLine("A new mail has been created. Received on {0}", item.DateTimeReceived);
Console.Out.WriteLine("Subject: {0}", item.Subject);
}
}
private static void OnSubscriptionError(object sender, SubscriptionErrorEventArgs args)
{
// Handle error conditions.
var e = args.Exception;
Console.Out.WriteLine("The following error occured:");
Console.Out.WriteLine(e.ToString());
Console.Out.WriteLine();
}
}
}
I just want to understand the basic concept: what can be the model, and where can I use the other functions?
Your problem is that you are confusing a service (EWS) with your application's model. They are two different things. Your model is entirely in your control, and you can do whatever you want with it. EWS is outside of your control and is merely a service you call to get data.
In your controller, you call the EWS service and get the count. Then you populate your model with that count, then in your view, you render that model property. It's really that simple.
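A minimal sketch of that shape in classic ASP.NET MVC; GetInboxCount is a placeholder for whatever EWS call you end up using:
public class InboxController : Controller
{
    public ActionResult Count()
    {
        int count = GetInboxCount(); // your EWS call goes here
        return Json(new { count }, JsonRequestBehavior.AllowGet);
    }

    private int GetInboxCount()
    {
        // e.g. bind to the inbox folder via EWS and read its item count
        throw new NotImplementedException();
    }
}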
A web page has no state. It doesn't get notified when things change. You just reload the page and get whatever the current state is (i.e., whatever the current count is).
In more advanced applications, like Single Page Apps with Ajax, you might periodically query the service in the background. Or you might have a special notification service that uses something like SignalR to notify your SPA of a change, but these concepts are far more advanced than where you currently are. You should probably develop your app as a simple stateless app first, then improve it to add Ajax functionality or the like once you have a better grasp of things.
That's a very broad question without a clear-cut answer. Your model could certainly have a Count property that you would update. The sample code you found would likely be used by your controller.
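For the count itself, the EWS Managed API exposes it directly on the folder, so the controller's call can be as small as this (assuming the same _ExchangeService binding as in the sample):
Folder inbox = Folder.Bind(_ExchangeService, WellKnownFolderName.Inbox);
int total = inbox.TotalCount; // or inbox.UnreadCount for unread items only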