Support with ADC current measurement and DC-offset removal.
I am kindly asking for support and details in understanding the concept of DC-offset removal from an ADC current measurement in a motor drive application. In particular, I have my calibration parameters (i.e., gain and offset) to convert the raw ADC data to a real current in amperes. The function I wrote is reported below.
#define GAIN   (0.0042f)
#define OFFSET (36.007f)

/* Convert a raw ADC sample to a current in amperes. */
float ADC_Conversion(unsigned short adc_raw)
{
    return (float)adc_raw * GAIN - OFFSET;
}
I understand that a further function is needed to remove the offset dynamically, one that takes into account the sampling frequency of the controller and other information such as the LSB size. If the above is correct, can anyone explain the concept in detail and/or point me to a relevant paper or book on the topic? I would really appreciate any help.
Kind regards
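One common approach is to treat the offset as a slowly varying quantity and track it with a one-pole low-pass filter, subtracting the estimate from each converted sample. Below is a minimal sketch of that idea, written in C# purely for illustration (the question's code is C); Alpha is an assumed coefficient that sets the tracking bandwidth relative to the sampling frequency, and the whole scheme assumes the true current averages to zero over the filter's time constant, as a balanced AC phase current does.

class OffsetTracker
{
    const float Gain = 0.0042f;
    const float Alpha = 0.001f;   // assumed; smaller = slower offset tracking
    float offset = 36.007f;       // start from the calibrated static offset

    // Convert a raw ADC sample and update the running offset estimate.
    public float Convert(ushort adcRaw)
    {
        float current = adcRaw * Gain - offset;
        // Over many samples the average phase current should be zero,
        // so any residual mean is attributed to offset drift.
        offset += Alpha * current;
        return current;
    }
}

The choice of Alpha trades tracking speed against distortion of the real signal: it must be small enough that the filter's cutoff sits well below the fundamental frequency of the measured current.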
I am trying to gather performance information in C#. For RAM, CPU, and other PC components I am using PerformanceCounters, but for the GPU there is not really one that gives me information like
% current GPU utilization
and
% current VRAM utilization
For NVIDIA there is (in some cases) a performance counter category "NVIDIA GPU" that has all the important counters, but in some cases (e.g. the MX250) or for AMD GPUs there is no category of that kind.
I have tried using the "GPU Engine" performance counter category, but I don't know how to interpret the data gathered with NextSample() (the NextValue() of the "Utilization Percentage" counter is just always 0, even if my GPU is at 15% or so).
The information given here helped me understand why I need NextSample(), but it didn't give information about the correct way to calculate with RawValue.
I have tried using this:
var sample = gpuCounter.NextSample();
Task.Delay(2000).Wait();
var sample2 = gpuCounter.NextSample();

var ticks = ((sample2.TimeStamp - sample.TimeStamp) / (double)sample.CounterFrequency) * 100000;
double calc = Math.Abs(sample2.RawValue - sample.RawValue) / ticks;
Is there a way to gather live performance information for AMD and Intel GPUs in C#?
Or does anybody know how to calculate the correct utilization value from the RawValue of NextSample() on the "Utilization Percentage" counter of the "GPU Engine" category?
Any help welcome.
Thanks in advance.
Update:
This calculation returns the same value as NextValue() for the "Utilization Percentage" counter, but the value of NextValue() is way too low (when it is not 0).
My idea is that each instance of the "Utilization Percentage" counter covers only a single process, which would also explain the many 0s.
Can anybody confirm this?
Update2:
If I add up the NextValue() results of all counter instances (for example, all of the 3D engine instances), I get a value pretty close to correct, as in the sketch below. But you need to call NextValue() twice, with an interval of more than 1 s, and take the second value, or the results are useless (in my experience). The problem: this leads to delayed results, so it is hard to say whether it really works.
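A minimal sketch of that summing approach, assuming the "GPU Engine" category and "Utilization Percentage" counter names as they appear on recent Windows 10 builds; the 1.5 s wait is an arbitrary choice consistent with the ">1 s" observation above:

using System.Diagnostics;
using System.Linq;
using System.Threading;

static double TotalGpuUtilization()
{
    var category = new PerformanceCounterCategory("GPU Engine");
    var counters = category.GetInstanceNames()
        .SelectMany(name => category.GetCounters(name))
        .Where(c => c.CounterName == "Utilization Percentage")
        .ToList();

    // Rate counters need two samples: prime them, wait, then read.
    foreach (var c in counters) c.NextValue();
    Thread.Sleep(1500);

    return counters.Sum(c => c.NextValue());
}

Note that instances come and go as processes start and stop, so a long-running monitor would need to re-enumerate the instance names periodically.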
I'm looking for a way to detect faulty sensors in an IoT environment.
In this case it is a tank level sensor. The readings always fluctuate somewhat, and the "hop" at the beginning is a tank refill, which is "normal". On Sep 16 the sensor started to malfunction and has just given apparently random values since.
As a programmer ideally I'd like a simple way of detecting the problem (and as soon after it starts as possible).
I could mess about with rules like "if the direction of the vector between two hourly averages changes more than once per day, the sensor is unstable", but I guess there are more sound and stable algorithms out there.
Two simple options:
domain knowledge based: if you know the maximum possible output of the tank (say 5 liters/h), any output above that would signal an error. I.e., in the example, flag an error if
t1 - t2 > 5
assuming t1 and t2 are the tank levels at hourly intervals. You might want to add a safety margin related to the sensor accuracy.
past data based: assuming that all tanks are similar regarding output capacity and sensor quality, calculate the following over all your data from non-faulty sensors:
max(t1-t2)
The result is the error threshold to be used, similar to the value 5 above.
Note: tank refill operations might require additional consideration.
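A minimal sketch of this threshold check, assuming hourly level readings in an array; the default threshold is the illustrative 5 liters/h from above:

static int? FirstFaultIndex(double[] hourlyLevels, double maxDropPerHour = 5.0)
{
    for (int i = 1; i < hourlyLevels.Length; i++)
    {
        // A drop faster than the tank can physically empty means the
        // sensor is suspect; refills (rises) need separate handling.
        if (hourlyLevels[i - 1] - hourlyLevels[i] > maxDropPerHour)
            return i;
    }
    return null; // no fault detected
}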
Additional methods are described, e.g., in the paper below; you can find others for sure.
http://bourbon.usc.edu/leana/pubs-full/sensorfaults.pdf
Standard deviation.
You're looking at how much variation there is between the measurements. Standard deviation is a simple, well-known formula; look for an unusually high value, and you know there's a problem.
You can also use the coefficient of variation, which is the ratio of the standard deviation to the mean.
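A minimal sketch of that check over a sliding window of recent readings; the window contents and the threshold are illustrative assumptions to be tuned against known-good data:

using System;
using System.Linq;

static bool LooksFaulty(double[] recentReadings, double stdDevThreshold)
{
    double mean = recentReadings.Average();
    double variance = recentReadings
        .Select(r => (r - mean) * (r - mean))
        .Sum() / recentReadings.Length;
    return Math.Sqrt(variance) > stdDevThreshold; // high spread => suspect sensor
}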
So... no clue where I should be asking this question, but I'm hoping someone here can at least point me in the right direction. I have a time series that I would like to do spectral analysis on, but I can't find any FFT tools that accommodate a varying time difference between data points (they all assume dt is constant). Does anyone know of a tool that would work for this? I'm specifically looking for a periodogram or some other way to determine periodicity.
My only thought is to do linear interpolation between data points at a fixed time interval to give the data a constant dt, but I'm worried that will skew the spectral analysis.
Here is a small chunk of the data (time, data, dt):
time data dt
39.630 49662.1 0.170
39.810 49582.5 0.180
40.150 49430.0 0.340
40.320 49413.8 0.170
40.490 49324.0 0.170
40.670 49092.5 0.180
40.830 49025.6 0.160
41.010 49101.5 0.180
Any suggestions?
Most mathematical theory behind FFT requires a fixed sampling period.
I can suggest the following:
Create a sequence of equally spaced time instants.
Calculate the value at each instant from the neighboring points (linear or quadratic interpolation should do it; see the sketch after the links below). You can use as many points as you like to obtain the best approximation.
Depending on the degree of detail you want in your results, use a parametric or non-parametric method to estimate the spectrum: the Burg method and LPC/AR models can be useful.
Check the links at Mathworks:
http://www.mathworks.com/help/signal/nonparametric-spectral-estimation.html
http://www.mathworks.com/help/signal/parametric-spectral-estimation.html
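A minimal sketch of the resampling step (linear interpolation onto a uniform grid, ready for a standard FFT); the array names and the choice of dt are illustrative assumptions:

static double[] ResampleUniform(double[] t, double[] x, double dt)
{
    int n = (int)((t[t.Length - 1] - t[0]) / dt) + 1;
    var y = new double[n];
    int j = 0;
    for (int i = 0; i < n; i++)
    {
        double ti = t[0] + i * dt;                        // uniform time instant
        while (j < t.Length - 2 && t[j + 1] < ti) j++;    // find bracketing interval
        double frac = (ti - t[j]) / (t[j + 1] - t[j]);    // position within it
        y[i] = x[j] + frac * (x[j + 1] - x[j]);           // linear interpolation
    }
    return y;
}

Picking dt close to the median spacing of the original samples (about 0.17 s in the excerpt above) keeps the interpolation error small.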
I suppose it should be done with IMediaSeeking::SetPositions, but I don't know how to define the parameters.
There is no dedicated method to step backward in DirectShow (as there is for stepping forward). Yes, you can use IMediaSeeking::SetPositions; note, however, that it is not DirectShow itself that implements it but the actual underlying filters, so support for repositioning depends on the filters and their implementation, and may be limited to, for example, stepping through key frames (splice points) only. DirectShow.NET is only a wrapper over DirectShow, and it does not add anything on top of what DirectShow offers for stepping.
IBasicVideo *pBasicVideo = NULL; // IBasicVideo interface
HRESULT hr;
REFTIME pavgfrt = 0;             // average time per frame, in seconds
REFERENCE_TIME pnowrt = 0;       // current position, in 100 ns units

// pBasicVideo and pSeek must have been queried from the filter graph
// beforehand; dereferencing them while NULL would crash.
hr = pBasicVideo->get_AvgTimePerFrame(&pavgfrt); // get the average time per frame in seconds
hr = pSeek->GetCurrentPosition(&pnowrt);         // get the current time in units of 100 ns

REFERENCE_TIME oneFrame = (REFERENCE_TIME)(pavgfrt * 10000000); // convert seconds to 100 ns units
pnowrt += oneFrame; // add to frame-step forward, subtract to frame-step backward

hr = pSeek->SetPositions(&pnowrt, AM_SEEKING_AbsolutePositioning,
                         NULL, AM_SEEKING_NoPositioning); // seek to the new time
This works for me in C++. Wrapping this code in C# should not be difficult for you. Hope this helps.
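For reference, a hedged C# sketch of the same idea, assuming the DirectShowLib (DirectShow.NET) wrapper types DsLong and AMSeekingSeekingFlags; treat the exact signatures as assumptions to verify against the wrapper version you use:

using DirectShowLib;

static void StepOneFrame(IMediaSeeking seek, double avgSecondsPerFrame, bool backward)
{
    long now;
    seek.GetCurrentPosition(out now);                      // 100 ns units
    long oneFrame = (long)(avgSecondsPerFrame * 10000000); // seconds -> 100 ns units
    long target = backward ? now - oneFrame : now + oneFrame;

    seek.SetPositions(DsLong.FromInt64(target), AMSeekingSeekingFlags.AbsolutePositioning,
                      null, AMSeekingSeekingFlags.NoPositioning);
}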
So far, I have implemented the algorithm found on this blog post with limited success.
The concept of my program is to initialize the sine wave, then change its frequency according to the position of the mouse on screen: move the mouse up and the sine wave gets higher, and vice versa (essentially a theremin-type instrument controlled by the mouse).
The problem with what I've implemented so far is that when the frequency of the sine wave is updated, there is an audible click, which instead of providing the smooth frequency sweep, makes it sound like there are discrete frequency levels.
I have been searching high and low on the NAudio forums and on here, but it doesn't seem like anyone else is trying to do this kind of thing with NAudio or, for that matter, any other sound module. All of the similar programs (using equipment like the Kinect) rely on virtual MIDI cabling and an existing software module, but I would like to implement the same concept without relying on external software packages.
I have posted the section of my code pertaining to this issue on NAudio's forum here and as you can see, I'm following through MarkHeath's recommendation on here to attempt to find a solution to my problem.
You need to avoid discontinuities in the output waveform (these are the clicks you are hearing). The easiest way to do this is with a LUT-based waveform generator - this works for any periodic waveform (i.e. not just pure sine waves). Typically you use a fixed point phase accumulator, which is incremented for each new sample by a delta value which corresponds to the current output frequency. You can safely modify delta however you like and the waveform will still be continuous.
Pseudo code (for one output sample):
const int LUT_SIZE;
const int LUT[LUT_SIZE]; // waveform lookup table (typically 2^N, N >= 8)
Fixed index; // current index into LUT (integer + fraction)
Fixed delta; // delta controls output frequency
output_value = LUT[(int)index];
// NB: for higher quality use the fractional part of index to interpolate
// between LUT[(int)index] and LUT[(int)index + 1], rather than just
// truncating the index and using LUT[(int)index] only. Depends on both
// LUT_SIZE and required output quality.
index = (index + delta) % LUT_SIZE;
Note: to calculate delta for a given output frequency f and a sample rate Fs:
delta = FloatToFixed(LUT_SIZE * f / Fs);
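A runnable C# version of that pseudocode, shaped so it could back an NAudio sample provider; a double phase accumulator stands in for the fixed-point index/delta, and the names are illustrative:

using System;

class LutOscillator
{
    const int LutSize = 1024;                // 2^10 entries
    readonly float[] lut = new float[LutSize];
    readonly double sampleRate;
    double index;                            // current position in the table
    double delta;                            // table increment per sample

    public LutOscillator(double sampleRate)
    {
        this.sampleRate = sampleRate;
        for (int i = 0; i < LutSize; i++)
            lut[i] = (float)Math.Sin(2 * Math.PI * i / LutSize);
    }

    // Safe to call between samples; the waveform stays continuous,
    // so frequency sweeps produce no clicks.
    public void SetFrequency(double f) => delta = LutSize * f / sampleRate;

    public float NextSample()
    {
        float value = lut[(int)index];       // truncation; interpolate for higher quality
        index += delta;
        if (index >= LutSize) index -= LutSize;
        return value;
    }
}

Calling SetFrequency from the mouse handler while NextSample feeds the audio buffer gives the smooth theremin-style sweep described in the question.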