I'm using .NET MF with a microcontroller and a digital reflectance sensor. I need to implement the following pseudocode in C# to read from the sensor. I am using GHIElectronics.NETMF.FEZ for input/output ports. This is the product I'm using: http://www.sparkfun.com/products/9454
int QRE1113_Pin = 2; //connected to digital 2
void setup(){
Serial.begin(9600);
}
void loop(){
int QRE_Value = readQD();
Serial.println(QRE_Value);
}
int readQD(){
//Returns value from the QRE1113
//Lower numbers mean more reflective
//More than 3000 means nothing was reflected.
pinMode( QRE1113_Pin, OUTPUT );
digitalWrite( QRE1113_Pin, HIGH );
delayMicroseconds(10);
pinMode( QRE1113_Pin, INPUT );
long time = micros();
//time how long the input is HIGH, but quit after 3ms as nothing happens after that
while (digitalRead(QRE1113_Pin) == HIGH && micros() - time < 3000);
int diff = micros() - time;
return diff;
}
EDIT: I'm using
new InputPort((Cpu.Pin)FEZ_Pin.Digital.IO44, false, Port.ResistorMode.Disabled)
to create input ports and
new OutputPort((Cpu.Pin)FEZ_Pin.Digital.IO44, false)
to create output ports, but I don't know how to switch a pin's mode from output to input, or how to set a pin HIGH.
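For what it's worth, .NET MF's Microsoft.SPOT.Hardware includes a TristatePort whose Active property switches a single pin between output (true) and input (false) at runtime, which maps closely onto the pinMode() calls above. A minimal sketch, assuming the sensor sits on FEZ_Pin.Digital.IO44 and that busy-waiting on Utility.GetMachineTime() ticks (100 ns each) is precise enough on your board:
using System;
using Microsoft.SPOT.Hardware;
using GHIElectronics.NETMF.FEZ;

public static class Qre1113
{
    // TristatePort can change direction at runtime: Active == true means output.
    static readonly TristatePort Pin = new TristatePort(
        (Cpu.Pin)FEZ_Pin.Digital.IO44, false, false, Port.ResistorMode.Disabled);

    public static long ReadQD()
    {
        if (!Pin.Active) Pin.Active = true;              // pinMode(pin, OUTPUT)
        Pin.Write(true);                                 // digitalWrite(pin, HIGH)

        long start = Utility.GetMachineTime().Ticks;     // 1 tick = 100 ns
        while (Utility.GetMachineTime().Ticks - start < 100) { }  // ~10 us charge time

        Pin.Active = false;                              // pinMode(pin, INPUT)
        start = Utility.GetMachineTime().Ticks;
        // Time how long the pin stays HIGH, but give up after 3 ms (30,000 ticks).
        while (Pin.Read() && Utility.GetMachineTime().Ticks - start < 30000) { }
        return (Utility.GetMachineTime().Ticks - start) / 10;     // microseconds
    }
}
If the machine-time clock is coarser than 100 ns on your hardware the absolute values will be quantized, but the relative ordering (more reflective = smaller number) should survive.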
My goal is pretty simple: I want to read information from my RadioLink R12DS receiver over the S-BUS protocol, using a desktop console application written in C#.
I use an AT9S transmitter bound to the receiver in 12-channel mode. I tested the setup on a Pixhawk flight controller; everything was fine there, no problems retrieving data.
I designed a console application based on the articles I investigated. Here are a couple of the most valuable:
http://forum.fpv.kz/topic/303-frsky-x8r-sbus-v-cppm-konverter-na-arduino/ https://github.com/bolderflight/SBUS
My application receives the byte stream from a COM port one byte at a time and tries to catch the message header 0x0F, but it never appears.
The SBUS protocol uses inverted serial logic with a baud rate of 100000, 8 data bits, even parity bit, and 2 stop bits. The SBUS packet is 25 bytes long consisting of:
Byte[0]: SBUS Header, 0x0F
Byte[1-22]: 16 servo channels, 11 bits per servo channel
Byte[23]:
Bit 7: digital channel 17 (0x80)
Bit 6: digital channel 18 (0x40)
Bit 5: frame lost (0x20)
Bit 4: failsafe activated (0x10)
Bit 0 - 3: n/a
Byte[24]: SBUS End Byte, 0x00
A table mapping bytes[1-22] to servo channels is included.
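Given that layout, the 16 channel values can be unpacked from bytes [1-22] with plain bit shifting. A minimal sketch, assuming the LSB-first packing used by the bolderflight/SBUS reference linked above:
public static ushort[] UnpackChannels(byte[] frame)
{
    // frame[0] is the header; frame[1..22] carry 16 x 11-bit channel values.
    var channels = new ushort[16];
    int bitIndex = 0;                          // bit position within bytes [1..22]
    for (int ch = 0; ch < 16; ch++)
    {
        ushort value = 0;
        for (int b = 0; b < 11; b++, bitIndex++)
        {
            int byteIndex = 1 + bitIndex / 8;  // +1 skips the header byte
            int bitInByte = bitIndex % 8;
            if (((frame[byteIndex] >> bitInByte) & 1) != 0)
                value |= (ushort)(1 << b);
        }
        channels[ch] = value;
    }
    return channels;
}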
Here is a listing of my code:
static void Main(String[] args)
{
var availablePorts = SerialPort.GetPortNames();
using(var port = new SerialPort(availablePorts[0], 100000, Parity.Even, 8, StopBits.Two))
{
port.DataReceived += PortOnDataReceived;
while(true)
{
if(!port.IsOpen)
TryReconnect(port);
Thread.Sleep(1000);
}
}
}
// HANDLERS ///////////////////////////////////////////////////////////////////////////////
private static void PortOnDataReceived(Object sender, SerialDataReceivedEventArgs serialDataReceivedEventArgs)
{
var serialPort = (SerialPort)sender;
if(SbusConverter.TryReadMessage(serialPort, out var messageBuffer))
{
var message = SbusConverter.Convert(messageBuffer);
Console.WriteLine(message.ServoChannels[0]);
}
}
public static Boolean TryReadMessage(SerialPort serialPort, out Byte[] messageBuffer)
{
const Int32 messageLength = 25;
const Int32 endOfStream = -1;
const Byte sBusMessageHeader = 0x0f;
const Byte sBusMessageEndByte = 0x00;
messageBuffer = new Byte[messageLength];
if(serialPort.BytesToRead < messageLength)
return false;
do
{
var value = serialPort.ReadByte();
if(value == endOfStream)
return false;
if(value == sBusMessageHeader)
{
messageBuffer[0] = (Byte)value;
for(var i = 1; i < messageLength; i++)
{
messageBuffer[i] = (Byte)serialPort.ReadByte();
}
if(messageBuffer[0] == sBusMessageHeader &&
messageBuffer[24] == sBusMessageEndByte)
return true;
}
} while(serialPort.BytesToRead > 0);
return false;
}
I have some thoughts in my head, and I want to ask one question here.
It's possible that RadioLink uses a different, modified, or entirely their own S-BUS implementation compared to Futaba's; I have found no proper documentation yet.
Anyone experienced in this field, any suggestions please? It seems I am stuck.
Thank you!
I did some investigation of the received data stream and found that RadioLink devices use 0x1F as the frame start byte instead of 0x0F. All other connection and message properties are the same.
var availablePorts = SerialPort.GetPortNames();
using(var port = new SerialPort(availablePorts[0], 100000, Parity.None, 8, StopBits.One)
{
Handshake = Handshake.None
})
{
port.DataReceived += PortOnDataReceived;
while(true)
{
if(!port.IsOpen)
OpenPort(port);
Thread.Sleep(1000);
}
}
Two years too late, but I just noticed that you have 1 stop bit set, instead of 2, in the code of your answer. That could probably explain 0x1F instead of 0x0F, and the otherwise shifted data.
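For later readers: with that fix, the port settings match the framing described above (100000 baud, 8 data bits, even parity, 2 stop bits). A sketch of the corrected setup, the port name being a placeholder:
using (var port = new SerialPort("COM3", 100000, Parity.Even, 8, StopBits.Two)
{
    Handshake = Handshake.None
})
{
    port.DataReceived += PortOnDataReceived;
    port.Open();
    // ... consume frames as in TryReadMessage above
}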
I am trying to do the same kind of operation, intercepting the S-BUS signal from a Herelink radio controller. So far I have discovered a shift in the data, and my start byte is recognised with the value 0x1E. Moreover, the data looks super noisy on the oscilloscope; some 1s might be getting missed because of the poor signal quality and the slow ramp from 0 to 1.
I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there) the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;
public cSoundEngine()
{
engine = new XAudio2();
master = new MasteringVoice(engine);
sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
/**
* Set up and start SourceVoice
*/
NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
SoundStream soundStream = new SoundStream(fileStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source);
sourceVoices.Add(voice);
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it just holds 'SourceVoice source', which is the one parameter passed to its constructor!
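For completeness, this is roughly all VoiceChannel.cs amounts to at this point (my reconstruction from the description above, not the actual file):
using SharpDX.XAudio2;

class VoiceChannel
{
    public SourceVoice source;   // the one constructor parameter

    public VoiceChannel(SourceVoice source)
    {
        this.source = source;
    }
}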
Everything is fine and runs well with up to 5 sounds (total = 5). All you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a native C++ error the debugger can't handle). Sadly there is no error message for us to look at.
From testing:
- Will not crash as long as you do not have more than 5 running sourcevoices.
- Changing sample rate does not seem to help.
- Setting inputChannels for master object to a different number makes no difference.
- MasteringVoice seems to say the max number of inputvoices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for sourcevoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck has suggested, I have created a databank which holds the .wav data, and I just reference the single data store with each buffer. This has improved the sound limit up to 20 - however this has not fixed the problem as a whole, likely because I have not implemented this properly.
Implementation:
class SoundDataBank
{
/**
* Holds a single byte array for each sound
*/
Dictionary<eSFX, Byte[]> bank;
string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
public SoundDataBank()
{
bank = new Dictionary<eSFX, byte[]>();
bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
}
public Byte[] GetSoundData(eSFX sfx)
{
byte[] output = bank[sfx];
return output;
}
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;
public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
/**
* sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
*/
byte[] buffer = soundBank.GetSoundData(sfx);
MemoryStream memoryStream = new MemoryStream(buffer);
SoundStream soundStream = new SoundStream(memoryStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
sourceVoices.Add(voice);
return voice;
}
Following this implementation now lets me play up to 20 sound effects - but NOT because we are playing from the soundbank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
This improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor for the SoundBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit from before, so this has been incredibly useful - but requires further work.
I am working with Xamarin Forms in combination with I2C devices and a Raspberry Pi. I program in C#, and the Raspberry Pi runs Windows IoT. I have encountered a problem with parameter access.
I have a MicroTimer in the UWP project, and I want to read data from the analog input every 100 ms. In OnTimedEvent there is a calculation which requires some parameters that are set in the PCL project, in I2CADDA.MainPage.xaml.cs. I tried declaring these parameters as public static.
public static double gainFactor = 1;
public static double gainVD = 1;
In the UWP project I use the dependency service, because I have to use the MicroTimer there, so the realization of the interface is done in I2CADDA.UWP.MainPage.xaml.cs. In the function OnTimedEvent, I tried to get the parameters from the PCL project file:
public void OnTimedEvent(object sender, MicroLibrary.MicroTimerEventArgs timerEventArgs)
{
byte[] readBuf = new byte[2];
I2CDevice.ReadI2C(chan, readBuf); //read voltage data from analog to digital converter
sbyte high = (sbyte)readBuf[0];
int mvolt = high * 16 + readBuf[1] / 16;
val = mvolt / 204.7 + inputOffset;
val = val / gainFactor / gainVD; //gainFactor and gainVD do not exist in the current context
}
It seems that the UWP project cannot access the PCL project in the normal way. May I ask how I can solve this problem? Thank you very much!
In C#, to access a static field, you qualify it with the class name.
In your code, the static fields are declared in I2CADDA.MainPage.xaml.cs, i.e. in the I2CADDA.MainPage class, so you can read them as:
double Factor = I2CADDA.MainPage.gainFactor;
double VD = I2CADDA.MainPage.gainVD;
So your above code should be like this:
public void OnTimedEvent(object sender, MicroLibrary.MicroTimerEventArgs timerEventArgs)
{
byte[] readBuf = new byte[2];
I2CDevice.ReadI2C(chan, readBuf); //read voltage data from analog to digital converter
sbyte high = (sbyte)readBuf[0];
int mvolt = high * 16 + readBuf[1] / 16;
val = mvolt / 204.7 + inputOffset;
val = val / I2CADDA.MainPage.gainFactor / I2CADDA.MainPage.gainVD;
}
Please also make sure you have referenced the PCL project in your UWP project.
To provide a little bit of context: I am trying to output live audio from a camera in my C# application. After doing some research, it seemed pretty obvious to do it in a managed C++ DLL. I chose the XAudio2 API because it should be pretty easy to implement and use with dynamic audio content.
So the idea is to create the XAudio2 device in C++ with an empty buffer and push in the audio from the C# side. The audio chunks are pushed every 50 ms because I want to keep the latency as small as possible.
// SampleRate = 44100; Channels = 2; BitPerSample = 16;
var blockAlign = (Channels * BitsPerSample) / 8;
var avgBytesPerSecond = SampleRate * blockAlign;
var avgBytesPerMillisecond = avgBytesPerSecond / 1000;
var bufferSize = avgBytesPerMillisecond * Time;
_sampleBuffer = new byte[bufferSize];
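With the values above, that works out to blockAlign = 4 bytes per frame, 176,400 bytes per second, and (after the integer division) 176 bytes per millisecond, so a 50 ms chunk is an 8,800-byte buffer.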
Every time the timer fires, it gets the pointer to the audio buffer, reads the data from the audio source, copies the data to the pointer, and calls the PushAudio method.
I am also using a Stopwatch to check how long the processing took, and I recalculate the timer interval to account for the processing time.
private void PushAudioChunk(object sender, ElapsedEventArgs e)
{
unsafe
{
_pushAudioStopWatch.Reset();
_pushAudioStopWatch.Start();
var audioBufferPtr = Output.AudioCapturerBuffer();
FillBuffer(_sampleBuffer);
Marshal.Copy(_sampleBuffer, 0, (IntPtr)audioBufferPtr, _sampleBuffer.Length);
Output.PushAudio();
_pushTimer.Interval = Time - _pushAudioStopWatch.ElapsedMilliseconds;
_pushAudioStopWatch.Stop();
DIX.Log.WriteLine("Push audio took: {0}ms", _pushAudioStopWatch.ElapsedMilliseconds);
}
}
This is the implementation of the C++ part.
Following the documentation on MSDN, I created an XAudio2 device and added the mastering voice and source voice. The buffer is empty at first because the C# part is responsible for pushing in the audio data.
namespace Audio
{
using namespace System;
template <class T> void SafeRelease(T **ppT)
{
if (*ppT)
{
(*ppT)->Release();
*ppT = NULL;
}
}
WAVEFORMATEXTENSIBLE wFormat;
XAUDIO2_BUFFER buffer = { 0 };
IXAudio2* pXAudio2 = NULL;
IXAudio2MasteringVoice* pMasterVoice = NULL;
IXAudio2SourceVoice* pSourceVoice = NULL;
WaveOut::WaveOut(int bufferSize)
{
audioBuffer = new Byte[bufferSize];
wFormat.Format.wFormatTag = WAVE_FORMAT_PCM;
wFormat.Format.nChannels = 2;
wFormat.Format.nSamplesPerSec = 44100;
wFormat.Format.wBitsPerSample = 16;
wFormat.Format.nBlockAlign = (wFormat.Format.nChannels * wFormat.Format.wBitsPerSample) / 8;
wFormat.Format.nAvgBytesPerSec = wFormat.Format.nSamplesPerSec * wFormat.Format.nBlockAlign;
wFormat.Format.cbSize = 0;
wFormat.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
HRESULT hr = XAudio2Create(&pXAudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);
if (SUCCEEDED(hr))
{
hr = pXAudio2->CreateMasteringVoice(&pMasterVoice);
}
if (SUCCEEDED(hr))
{
hr = pXAudio2->CreateSourceVoice(&pSourceVoice, (WAVEFORMATEX*)&wFormat,
0, XAUDIO2_DEFAULT_FREQ_RATIO, NULL, NULL, NULL);
}
buffer.pAudioData = (BYTE*)audioBuffer;
buffer.AudioBytes = bufferSize;
buffer.Flags = 0;
if (SUCCEEDED(hr))
{
hr = pSourceVoice->Start(0);
}
}
WaveOut::~WaveOut()
{
}
WaveOut^ WaveOut::CreateWaveOut(int bufferSize)
{
return gcnew WaveOut(bufferSize);
}
uint8_t* WaveOut::AudioCapturerBuffer()
{
if (!audioBuffer)
{
throw gcnew Exception("Audio buffer is not initialized. Did you forget to set up the audio container?");
}
return (BYTE*)audioBuffer;
}
int WaveOut::PushAudio()
{
HRESULT hr = pSourceVoice->SubmitSourceBuffer(&buffer);
if (FAILED(hr))
{
return -1;
}
return 0;
}
}
The problem I am facing is that I always have some crackling in the output. I tried increasing the interval of the timer and increasing the buffer size a bit; every time, the same result.
What am I doing wrong?
Update:
I created 3 buffers the XAudio2 engine can cycle through. The crackling went away. The missing part now is to fill the buffers at the right time from the C# side, to avoid buffers containing the same data.
void Render(void* param)
{
std::vector<byte> audioBuffers[BUFFER_COUNT];
size_t currentBuffer = 0;
XAUDIO2_VOICE_STATE state = { 0 };  // current state of the source voice
while (BackgroundThreadRunning && pSourceVoice)
{
if (pSourceVoice)
{
pSourceVoice->GetState(&state);
}
while (state.BuffersQueued < BUFFER_COUNT)
{
std::vector<byte> resultData;
resultData.resize(DATA_SIZE);
CopyMemory(&resultData[0], pAudioBuffer, DATA_SIZE);
// Keep the chunk alive in the pool while XAudio2 reads from it
audioBuffers[currentBuffer] = resultData;
// Submit the new buffer
XAUDIO2_BUFFER buf = { 0 };
buf.AudioBytes = static_cast<UINT32>(audioBuffers[currentBuffer].size());
buf.pAudioData = &audioBuffers[currentBuffer][0];
pSourceVoice->SubmitSourceBuffer(&buf);
// Advance the buffer index
currentBuffer = (currentBuffer + 1) % BUFFER_COUNT;
// Get the updated state
pSourceVoice->GetState(&state);
}
Sleep(30);
}
}
XAudio2 does not copy the source data buffer at the time you submit it via SubmitSourceBuffer. You must keep that data (which is in your application memory) valid, and the buffer allocated, for the entire time that XAudio2 needs to read from it to process the data. This is done for efficiency, to avoid the need for an extra copy, but it puts the multi-threaded burden of keeping the memory available until the buffer is done playing on you. It also means you can't modify a playing buffer.
Your current code is just reusing the same buffer, which causes the popping as you change the data while it's playing. You can solve this by having two or three buffers you rotate between. An XAudio2 source voice has status information you can use to determine when it's done playing a buffer, or you can register for explicit callbacks which tell you when a buffer is no longer in use.
See DirectX Tool Kit for Audio and classic XAudio2 samples for examples of using XAudio2.
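To sketch the "status information" route in the SharpDX terms used earlier on this page (the method, names, and pool size are mine, not a drop-in fix): keep N buffers and only refill one once the voice reports fewer than N buffers queued, so a buffer is never written while XAudio2 is still reading it.
using System;
using System.Threading;
using SharpDX;
using SharpDX.XAudio2;

static class BufferRotation
{
    // Streams audio into 'voice', rotating three buffers and only touching a
    // buffer once XAudio2 reports it dequeued (via State.BuffersQueued).
    public static void Stream(SourceVoice voice, int bufferSize,
                              Action<DataStream> fillNextChunk,  // producer callback (assumed)
                              Func<bool> keepRunning)
    {
        const int BufferCount = 3;
        var streams = new DataStream[BufferCount];
        var buffers = new AudioBuffer[BufferCount];
        for (int i = 0; i < BufferCount; i++)
        {
            streams[i] = new DataStream(bufferSize, true, true);
            buffers[i] = new AudioBuffer { Stream = streams[i], AudioBytes = bufferSize };
        }

        int current = 0;
        while (keepRunning())
        {
            // Only refill buffers that XAudio2 has finished reading.
            while (voice.State.BuffersQueued < BufferCount)
            {
                streams[current].Position = 0;
                fillNextChunk(streams[current]);   // write the next chunk of samples
                streams[current].Position = 0;
                voice.SubmitSourceBuffer(buffers[current], null);
                current = (current + 1) % BufferCount;
            }
            Thread.Sleep(10);
        }
    }
}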
I need to be able to play certain tones in a C# application. I don't care if it generates them on the fly or plays them from a file; I just need SOME way to generate tones that have not only variable volume and frequency, but variable timbre. It would be especially helpful if whatever I used to generate these tones had many timbre presets, and it would be even more awesome if these timbres didn't all sound midi-ish (meaning some of them sounded like they might have been recordings of actual instruments).
Any suggestions?
You might like to take a look at my question Creating sine or square wave in C#
Using NAudio in particular was a great choice
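To give a flavour of the NAudio route: a minimal sketch using NAudio's SignalGenerator sample provider (available in newer NAudio versions; the frequency, gain, and duration here are arbitrary):
using System.Threading;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class Program
{
    static void Main()
    {
        // 44.1 kHz mono sine at 440 Hz, 20% amplitude.
        var sine = new SignalGenerator(44100, 1)
        {
            Gain = 0.2,
            Frequency = 440,
            Type = SignalGeneratorType.Sin
        };

        using (var output = new WaveOutEvent())
        {
            output.Init(sine);
            output.Play();
            Thread.Sleep(1000);   // let the tone run for ~1 second
            output.Stop();
        }
    }
}
SignalGenerator also offers square, triangle, and sawtooth types, which gives some basic timbre variety, though nothing that sounds like a real instrument.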
This article helped me with something similar:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/18fe83f0-5658-4bcf-bafc-2e02e187eb80/beep-beep
The part in particular is the Beep Class:
public class Beep
{
public static void BeepBeep(int Amplitude, int Frequency, int Duration)
{
double A = ((Amplitude * (System.Math.Pow(2, 15))) / 1000) - 1;
double DeltaFT = 2 * Math.PI * Frequency / 44100.0;
int Samples = 441 * Duration / 10;
int Bytes = Samples * 4;
int[] Hdr = {0X46464952, 36 + Bytes, 0X45564157, 0X20746D66, 16, 0X20001, 44100, 176400, 0X100004, 0X61746164, Bytes};
using (MemoryStream MS = new MemoryStream(44 + Bytes))
{
using (BinaryWriter BW = new BinaryWriter(MS))
{
for (int I = 0; I < Hdr.Length; I++)
{
BW.Write(Hdr[I]);
}
for (int T = 0; T < Samples; T++)
{
short Sample = System.Convert.ToInt16(A * Math.Sin(DeltaFT * T));
BW.Write(Sample);
BW.Write(Sample);
}
BW.Flush();
MS.Seek(0, SeekOrigin.Begin);
using (SoundPlayer SP = new SoundPlayer(MS))
{
SP.PlaySync();
}
}
}
}
}
It can be used as follows
Beep.BeepBeep(100, 1000, 1000); /* 10% volume */
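For reference, the Amplitude argument is on a 0-1000 scale: the code computes the peak as (Amplitude x 2^15)/1000 - 1, so 100 yields roughly 3276 of the 16-bit full scale of 32767, i.e. the 10% volume noted in the comment. The other two arguments are frequency in Hz and duration in ms.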
There's a popular article on CodeProject along these lines:
http://www.codeproject.com/KB/audio-video/CS_ToneGenerator.aspx
You might also check out this thread:
http://episteme.arstechnica.com/eve/forums/a/tpc/f/6330927813/m/197000149731
In order for your generated tones not to sound 'midi-ish', you'll have to use real-life samples and play them back. Try to find a good real-instrument sample bank, like http://www.sampleswap.org/filebrowser-new.php?d=INSTRUMENTS+single+samples%2F
Then, when you want to compose a melody from them, just alter the playback frequency relative to the original sample's frequency, as sketched below.
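The arithmetic behind that trick (the helper name is mine): playing a sample at r times its recorded rate raises the pitch by a factor of r, so a shift of n semitones needs r = 2^(n/12).
using System;

static class Pitch
{
    // Sample rate needed to play a recording transposed by 'semitones'.
    public static int PlaybackRate(int originalRate, int semitones)
        => (int)Math.Round(originalRate * Math.Pow(2.0, semitones / 12.0));
}

// e.g. a sample recorded at 44100 Hz, played 3 semitones higher:
// Pitch.PlaybackRate(44100, 3) == 52444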
Please drop me a line if you find this answer useful.