High CPU load using WCF streaming - C#

I've been doing a lot of research on this issue but unfortunately I wasn't able to find a solution.
My problem is that I am seeing quite a high CPU load even on powerful machines when using WCF (NetTcpBinding, Streamed). To be more specific, my i5 860 runs at 20-40 percent load when handling 20 client threads. When it comes to deploying a real service (not a testing project) with around 50 real clients sending small data packages every second (around 20 KB per transfer), the CPU load is already at 80-90 percent. In the end there should be 200+ clients, but I can't imagine how that will work with such CPU loads...
For testing purposes I have set up a small project with just a simple client and server based on WCF streamed transfer using NetTcpBinding. There's already a lot of 'desperation code' in it from my attempts to make it work... For my testing I used a 200 MB file that is sent to the WCF service 20 times.
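(For reference, the binding in such a project is typically configured for streaming roughly like this; this is only an illustration of the transfer mode, not code taken from the demo project:)
    var binding = new NetTcpBinding(SecurityMode.None)
    {
        TransferMode = TransferMode.Streamed,
        MaxReceivedMessageSize = long.MaxValue  // or whatever cap the service should enforce
    };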
Here's the contract:
[ServiceContract(Namespace = "WCFStreamTest.WCFService")]
public interface IStreamContract
{
[OperationContract(Name = "ReceiveStream")]
StreamMessage ReceiveStream(StreamMessage msg);
[OperationContract(Name = "SendStream")]
StreamMessage SendStream(StreamMessage msg);
}
The StreamMessage class used in here is just a MessageContract containing a string header and a Stream object.
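For reference, a minimal sketch of what that message contract might look like (the member names Parameters and DataStream match the code below; the exact attribute settings are an assumption, not taken from the demo project):
    [MessageContract]
    public class StreamMessage
    {
        // String header carried alongside the stream
        [MessageHeader]
        public string Parameters { get; set; }

        // For streamed transfer the Stream must be the only body member
        [MessageBodyMember]
        public Stream DataStream { get; set; }
    }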
The server code looks as follows:
[ServiceBehavior(IncludeExceptionDetailInFaults = false, InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = true, MaxItemsInObjectGraph = int.MaxValue)]
public class StreamService : IStreamContract
{
public StreamMessage ReceiveStream(StreamMessage msg)
{
if (File.Exists(msg.Parameters))
return new StreamMessage() { Parameters = msg.Parameters, DataStream = new System.IO.FileStream(msg.Parameters, System.IO.FileMode.Open, System.IO.FileAccess.Read) };
return new StreamMessage();
}
public StreamMessage SendStream(StreamMessage msg)
{
if (msg.Parameters.Trim().Length > 0)
{
int bufferSize = 8096 * 4;
byte[] buffer = new byte[bufferSize];
int bytes = 0;
while ((bytes = msg.DataStream.Read(buffer, 0, bufferSize)) > 0)
{
byte b = buffer[0];
b = (byte)(b + 1);
}
}
return new StreamMessage();
}
}
The test project just uses the SendStream method for testing - and that method just reads the data stream and does nothing else.
At this point I think I'll save you some reading and not post the full code here. Maybe a download link to the demo project will be sufficient? (To make it work, there is one line in the client's Program.cs that needs to be changed: FileInfo fi = new FileInfo(@"<<<>>>");)
WCFStreamTest Project
I'd be really happy about any idea on how to lower CPU usage... thanks in advance for any help and tips...

Can you try making the thread sleep in the while loop and see how it goes?
while ((bytes = msg.DataStream.Read(buffer, 0, bufferSize)) > 0)
{
byte b = buffer[0];
b = (byte)(b + 1);
Thread.Sleep(100);
}
If it is a video streaming service you might have to tweak the sleep interval.

Related

SharpDX XAudio2: 6 SourceVoice limit

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there) the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;
public cSoundEngine()
{
engine = new XAudio2();
master = new MasteringVoice(engine);
sourceVoices = new List<VoiceChannel>();
}
public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
/**
* Set up and start SourceVoice
*/
NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
SoundStream soundStream = new SoundStream(fileStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source);
sourceVoices.Add(voice);
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source' which is the one parameter sent in the constructor!
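Roughly, the wrapper amounts to something like this (a sketch based on the description above, not the actual file):
    class VoiceChannel
    {
        // The single SourceVoice handed in through the constructor
        public SourceVoice Source { get; }

        public VoiceChannel(SourceVoice source)
        {
            Source = source;
        }
    }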
Everything is fine and runs well with up to 5 sounds (total = 5). All you hear is the blissful drone of Planet.wav. Any higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error which the debugger can't handle). Sadly there is no error message for us to look at or anything.
From testing:
- Will not crash as long as you do not have more than 5 running SourceVoices.
- Changing the sample rate does not seem to help.
- Setting inputChannels for the master object to a different number makes no difference.
- MasteringVoice seems to say the max number of input voices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for the SourceVoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not very experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate, but please try to explain things in layman's terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck has suggested, I have created a data bank which holds the .wav data, and I just reference the single data store with each buffer. This has improved the sound limit up to 20 - however it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
/**
* Holds a single byte array for each sound
*/
Dictionary<eSFX, Byte[]> bank;
string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
public SoundDataBank()
{
bank = new Dictionary<eSFX, byte[]>();
bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
}
public Byte[] GetSoundData(eSFX sfx)
{
byte[] output = bank[sfx];
return output;
}
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;
public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
/**
* sourcevoice will be automatically added to MasteringVoice and engine in the constructor.
*/
byte[] buffer = soundBank.GetSoundData(sfx);
MemoryStream memoryStream = new MemoryStream(buffer);
SoundStream soundStream = new SoundStream(memoryStream);
SourceVoice source = new SourceVoice(engine, soundStream.Format);
AudioBuffer audioBuffer = new AudioBuffer()
{
Stream = soundStream.ToDataStream(),
AudioBytes = (int)soundStream.Length,
Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
};
//Make voice wrapper
VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());
//Volume
source.SetVolume((float)vol);
//Play sound
source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
source.Start();
sourceVoices.Add(voice);
return voice;
}
This implementation now lets me play up to 20 sound effects - but NOT because we are playing from the soundbank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
This has improved up to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the constructor for the SoundBank.
I suspect NativeFile is holding a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit from before, so this has been incredibly useful - but requires further work.

How to write data to USB HID device in Android with only a single input endpoint

I have a USB HID device that I would like to communicate with. I am successfully doing so on Windows using the HidSharp library (link: https://github.com/treehopper-electronics/HIDSharp). My Windows application is developed using the .NET Framework 4.5, C#, and Visual Studio.
I now want to communicate with this same USB HID device from an Android tablet instead of from the Windows desktop. I am encountering some problems doing so. When I have the device plugged in to my tablet, it reports a single interface with a single "read" endpoint. Here is what is reported to me:
Interface #0
Class: Human Interaction Device (0x3)
Endpoint: #0
Address : 0x81 (10000001)
Number : 1
Direction : Inbound (0x80)
Type : Interrupt (0x3)
Poll Interval : 1
Max Packet Size: 64
Attributes : 000000011
As you can see, it only reports a single endpoint, which is an inbound endpoint. I need to be able to send simple commands out to this device, which I was able to do successfully on Windows using HidSharp.
HidSharp abstracted everything into a single "stream" object that you could read from and write to. Using the Android APIs, there isn't a single "stream" object; instead there seem to be three different ways of reading/writing: bulk transfer, control transfer, and UsbRequest. I've tried sending out data using all three, but with seemingly no success.
Any suggestions on what to do? Is there a reason why I could send out data to this device on Windows, but seemingly cannot do so from Android? Is there a way to use a single endpoint as both a read and a write endpoint? Is there something that I am just obviously missing and not understanding?
I am using Xamarin as my development environment (C#, Visual Studio 2017). Since code is always helpful, here is how I am connecting to the device:
int VendorID = 0x04d8;
int ProductID = 0x2742;
UsbManager USB_Manager = null;
UsbDevice USB_Device = null;
UsbDeviceConnection DeviceConnection = null;
UsbInterface DeviceInterface = null;
UsbEndpoint OutputEndpoint = null;
UsbEndpoint InputEndpoint = null;
//Grab the Android USB manager and get a list of connected devices
USB_Manager = MyMainActivity.ApplicationContext.GetSystemService(Android.Content.Context.UsbService) as Android.Hardware.Usb.UsbManager;
var attached_devices = USB_Manager.DeviceList;
//Find the device in the list of connected devices
foreach (var d in attached_devices.Keys)
{
if (attached_devices[d].VendorId == VendorID && attached_devices[d].ProductId == ProductID)
{
USB_Device = attached_devices[d];
break;
}
}
//Assuming we found the correct device, let's set everything up
if (USB_Device != null)
{
for (int j = 0; j < USB_Device.InterfaceCount; j++)
{
DeviceInterface = USB_Device.GetInterface(j);
for (int i = 0; i < DeviceInterface.EndpointCount; i++)
{
var temp_ep = DeviceInterface.GetEndpoint(i);
if (temp_ep.Type == Android.Hardware.Usb.UsbAddressing.XferInterrupt)
{
if (temp_ep.Direction == Android.Hardware.Usb.UsbAddressing.In)
{
InputEndpoint = temp_ep;
}
if (temp_ep.Direction == Android.Hardware.Usb.UsbAddressing.Out)
{
OutputEndpoint = temp_ep;
}
}
}
}
//Request permission to communicate with this USB device
UsbReceiver receiver = new UsbReceiver();
PendingIntent pending_intent = PendingIntent.GetBroadcast(Game.Activity, 0, new Android.Content.Intent(UsbReceiver.ACTION_USB_PERMISSION), 0);
IntentFilter intent_filter = new IntentFilter(UsbReceiver.ACTION_USB_PERMISSION);
Game.Activity.RegisterReceiver(receiver, intent_filter);
USB_Manager.RequestPermission(USB_Device, pending_intent);
bool has_permission = USB_Manager.HasPermission(USB_Device);
var device_connection = USB_Manager.OpenDevice(USB_Device);
device_connection.ClaimInterface(DeviceInterface, true);
DeviceConnection = device_connection;
}
Next, here is how I attempt to read from the device:
//3 methods of attempting to read from the device
//Method 1:
byte[] inpt = new byte[64];
var request = new UsbRequest();
request.Initialize(DeviceConnection, InputEndpoint);
var byte_buffer = ByteBuffer.Allocate(64);
request.Queue(byte_buffer, 64);
DeviceConnection.RequestWait();
byte_buffer.Rewind();
for(int i = 0; i < 64; i++)
{
inpt[i] = (byte) byte_buffer.Get();
}
//Method 2:
byte[] inpt = new byte[64];
DeviceConnection.BulkTransfer(InputEndpoint, inpt, inpt.Length, 1000);
//Method 3:
byte[] inpt = new byte[64];
DeviceConnection.ControlTransfer(UsbAddressing.In, 0, 0, 0, inpt, 64, 1000);
And finally, here is how I attempt to write data to this device:
//Method 1:
byte[] output_msg; //This variable is assigned elsewhere in the code
DeviceConnection.BulkTransfer(OutputEndpoint, output_msg, output_msg.Length, 30);
//Method 2:
byte[] output_msg; //This variable is assigned elsewhere in the code
DeviceConnection.ControlTransfer(UsbAddressing.Out, 0, 0, 0, output_msg, output_msg.Length, 1000);
//Method 3:
byte[] output_msg; //This variable is assigned elsewhere in the code
var write_request = new UsbRequest();
write_request.Initialize(DeviceConnection, OutputEndpoint);
var byte_buffer_write = ByteBuffer.Wrap(output_msg);
write_request.Queue(byte_buffer_write, output_msg.Length);
DeviceConnection.RequestWait();
"OutputEndpoint" is typically null because there is no output endpoint, so I often replace "OutputEndpoint" with "InputEndpoint", but with no success.
Any help would be greatly appreciated! Thanks!!!
You are dealing with a HID device, which means you should use interrupt transfers.
In Android, you should use UsbRequest to perform interrupt transfers (it does asynchronous, non-blocking IO).
Endpoints are unidirectional; UsbRequest can be used with both inbound and outbound endpoints (but not both at the same time).
If the endpoint is inbound, submit the URB using UsbRequest and queue it as you tried before, but with an empty buffer of the expected length.
RequestWait will return the completed UsbRequest object back to you.
If usbRequest.getEndpoint().getDirection() is inbound, your buffer variable will be updated with the data read from the device.
If usbRequest.getEndpoint().getDirection() is outbound, you should pass in the buffer holding the data to write to the device.
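For illustration, a queued interrupt read along the lines already used in the question might look like this (a sketch; it assumes DeviceConnection and InputEndpoint are set up as shown above and a 64-byte report size):
    // Queue an asynchronous interrupt IN transfer with an empty 64-byte buffer
    var readRequest = new UsbRequest();
    readRequest.Initialize(DeviceConnection, InputEndpoint);
    var readBuffer = ByteBuffer.Allocate(64);
    readRequest.Queue(readBuffer, 64);

    // RequestWait blocks until a queued request completes and returns it
    UsbRequest completed = DeviceConnection.RequestWait();
    if (completed != null && completed.Endpoint.Direction == UsbAddressing.In)
    {
        readBuffer.Rewind();
        var data = new byte[64];
        for (int i = 0; i < data.Length; i++)
            data[i] = (byte)readBuffer.Get();
        // 'data' now holds the report read from the device
    }

    // For an outbound endpoint the same pattern applies, except the buffer wraps
    // the data to send, e.g. ByteBuffer.Wrap(output_msg), before queuing.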

XAudio2 - Cracking output when using a dynamic buffer

To provide a little bit of context: I am trying to output live audio from a camera in my C# application. After doing some research it seemed pretty obvious to do it in a managed C++ DLL. I chose the XAudio2 API because it should be fairly easy to implement and to use with dynamic audio content.
So the idea is to create the XAudio2 device in C++ with an empty buffer and push in the audio from the C# side. The audio chunks are pushed every 50 ms because I want to keep the latency as small as possible.
// SampleRate = 44100; Channels = 2; BitPerSample = 16;
var blockAlign = (Channels * BitsPerSample) / 8;
var avgBytesPerSecond = SampleRate * blockAlign;
var avgBytesPerMillisecond = avgBytesPerSecond / 1000;
var bufferSize = avgBytesPerMillisecond * Time;
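// e.g. with Time = 50 (ms, as described above): blockAlign = 4, avgBytesPerSecond = 176400,
// avgBytesPerMillisecond = 176, bufferSize = 8800 bytes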
_sampleBuffer = new byte[bufferSize];
Every time the timer fires, it gets the pointer to the audio buffer, reads the data from the audio source, copies the data to the pointer and calls the PushAudio method.
I am also using a stopwatch to check how long the processing took, and I recalculate the timer interval to account for the processing time.
private void PushAudioChunk(object sender, ElapsedEventArgs e)
{
unsafe
{
_pushAudioStopWatch.Reset();
_pushAudioStopWatch.Start();
var audioBufferPtr = Output.AudioCapturerBuffer();
FillBuffer(_sampleBuffer);
Marshal.Copy(_sampleBuffer, 0, (IntPtr)audioBufferPtr, _sampleBuffer.Length);
Output.PushAudio();
_pushTimer.Interval = Time - _pushAudioStopWatch.ElapsedMilliseconds;
_pushAudioStopWatch.Stop();
DIX.Log.WriteLine("Push audio took: {0}ms", _pushAudioStopWatch.ElapsedMilliseconds);
}
}
This is the implementation of the C++ part.
Following the documentation on MSDN, I created an XAudio2 device and added the MasteringVoice and SourceVoice. The buffer is empty at first because the C# part is responsible for pushing in the audio data.
namespace Audio
{
using namespace System;
template <class T> void SafeRelease(T **ppT)
{
if (*ppT)
{
(*ppT)->Release();
*ppT = NULL;
}
}
WAVEFORMATEXTENSIBLE wFormat;
XAUDIO2_BUFFER buffer = { 0 };
IXAudio2* pXAudio2 = NULL;
IXAudio2MasteringVoice* pMasterVoice = NULL;
IXAudio2SourceVoice* pSourceVoice = NULL;
WaveOut::WaveOut(int bufferSize)
{
audioBuffer = new Byte[bufferSize];
wFormat.Format.wFormatTag = WAVE_FORMAT_PCM;
wFormat.Format.nChannels = 2;
wFormat.Format.nSamplesPerSec = 44100;
wFormat.Format.wBitsPerSample = 16;
wFormat.Format.nBlockAlign = (wFormat.Format.nChannels * wFormat.Format.wBitsPerSample) / 8;
wFormat.Format.nAvgBytesPerSec = wFormat.Format.nSamplesPerSec * wFormat.Format.nBlockAlign;
wFormat.Format.cbSize = 0;
wFormat.SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
HRESULT hr = XAudio2Create(&pXAudio2, 0, XAUDIO2_DEFAULT_PROCESSOR);
if (SUCCEEDED(hr))
{
hr = pXAudio2->CreateMasteringVoice(&pMasterVoice);
}
if (SUCCEEDED(hr))
{
hr = pXAudio2->CreateSourceVoice(&pSourceVoice, (WAVEFORMATEX*)&wFormat,
0, XAUDIO2_DEFAULT_FREQ_RATIO, NULL, NULL, NULL);
}
buffer.pAudioData = (BYTE*)audioBuffer;
buffer.AudioBytes = bufferSize;
buffer.Flags = 0;
if (SUCCEEDED(hr))
{
hr = pSourceVoice->Start(0);
}
}
WaveOut::~WaveOut()
{
}
WaveOut^ WaveOut::CreateWaveOut(int bufferSize)
{
return gcnew WaveOut(bufferSize);
}
uint8_t* WaveOut::AudioCapturerBuffer()
{
if (!audioBuffer)
{
throw gcnew Exception("Audio buffer is not initialized. Did you forget to set up the audio container?");
}
return (BYTE*)audioBuffer;
}
int WaveOut::PushAudio()
{
HRESULT hr = pSourceVoice->SubmitSourceBuffer(&buffer);
if (FAILED(hr))
{
return -1;
}
return 0;
}
}
The problem I am facing is that I always get some cracking in the output. I tried increasing the interval of the timer and increasing the buffer size a bit. Every time, the same result.
What am I doing wrong?
Update:
I created 3 buffers that the XAudio2 engine can cycle through. The cracking went away. The missing part now is to fill the buffers at the right time from the C# side so that buffers do not end up holding the same data.
void Render(void* param)
{
std::vector<byte> audioBuffers[BUFFER_COUNT];
size_t currentBuffer = 0;
XAUDIO2_VOICE_STATE state = { 0 };
// Get the current state of the source voice
while (BackgroundThreadRunning && pSourceVoice)
{
if (pSourceVoice)
{
pSourceVoice->GetState(&state);
}
while (state.BuffersQueued < BUFFER_COUNT)
{
std::vector<byte> resultData;
resultData.resize(DATA_SIZE);
CopyMemory(&resultData[0], pAudioBuffer, DATA_SIZE);
// Retrieve the next buffer to stream from MF Music Streamer
audioBuffers[currentBuffer] = resultData;
// Submit the new buffer
XAUDIO2_BUFFER buf = { 0 };
buf.AudioBytes = static_cast<UINT32>(audioBuffers[currentBuffer].size());
buf.pAudioData = &audioBuffers[currentBuffer][0];
pSourceVoice->SubmitSourceBuffer(&buf);
// Advance the buffer index
currentBuffer = ++currentBuffer % BUFFER_COUNT;
// Get the updated state
pSourceVoice->GetState(&state);
}
Sleep(30);
}
}
XAudio2 does not copy the source data buffer at the time you submit it via SubmitSourceBuffer. You must keep that data (which is in your application's memory) valid, and the buffer allocated, for the entire time XAudio2 needs to read from it to process the data. This is done for efficiency, to avoid an extra copy, but it puts the multi-threaded burden of keeping the memory available until playback is done on you. That also means you cannot modify a buffer while it is playing.
Your current code just reuses the same buffer, which causes the popping because you change the data while it is being played. You can solve this by having two or three buffers that you rotate between. An XAudio2 source voice exposes state information you can use to determine when it is done playing a buffer, or you can register for explicit callbacks which tell you when a buffer is no longer in use.
See DirectX Tool Kit for Audio and classic XAudio2 samples for examples of using XAudio2.

WCF Handling of Large Files

All the posts and search results that I've reviewed regarding WCF and uploading of large files pretty much give the same answer: increase the maximums like maxReceivedMessageSize. That's fine, I guess, if you're just trying to get it working, but what if you have an actual maximum that you want to enforce? How do you handle that better?
Currently the client gets a System.ServiceModel.EndpointNotFoundException saying "There was no endpoint listening", with the inner exception "The remote server returned an error: (404) Not Found." How could I catch this in my service and return a better error message, like "File exceeds maximum allowable size"?
You can pass your file as a Stream parameter to your service method. Doing so lets you check the file size dynamically without processing the file in its entirety (you should still set a sane maxReceivedMessageSize that won't be exceeded in most real-world cases).
Here is an example of RESTful WCF service:
[ServiceContract]
public interface IFileService
{
[OperationContract, WebInvoke(Method = "POST", UriTemplate = "/ProcessFile")]
string ProcessFile(Stream file);
}
public class FileService : IFileService
{
public string ProcessFile(Stream file)
{
const int bufferLength = 32;
const int maxSize = 256;
var buffer = new byte[bufferLength];
int bytesRead, totalBytesRead = 0;
do
{
bytesRead = file.Read(buffer, 0, bufferLength);
totalBytesRead += bytesRead;
if (totalBytesRead > maxSize)
return $"File is too large - maximum {maxSize} bytes allowed.";
}
while (bytesRead > 0);
return $"Total {totalBytesRead} bytes read.";
}
}
Here is a sample code for hosting this service:
var host = new ServiceHost(typeof(FileService));
host.AddServiceEndpoint(typeof(IFileService),
new WebHttpBinding { TransferMode = TransferMode.StreamedRequest },
"http://localhost:8080")
.EndpointBehaviors.Add(new WebHttpBehavior());
host.Open();
Console.WriteLine("The host is opened. Press ENTER to exit...");
Console.ReadLine();
host.Close();
Note that one needs TransferMode = TransferMode.StreamedRequest to be able to process large files - see this question for details.
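For completeness, the client side could stream a file to this endpoint with a plain HttpWebRequest, something along these lines (a sketch; the URL matches the host code above, the file path is hypothetical):
    var request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/ProcessFile");
    request.Method = "POST";
    request.ContentType = "application/octet-stream";
    request.AllowWriteStreamBuffering = false;  // stream the body instead of buffering it in memory
    request.SendChunked = true;

    using (var requestStream = request.GetRequestStream())
    using (var file = File.OpenRead(@"C:\temp\bigfile.bin"))  // hypothetical path
    {
        file.CopyTo(requestStream);
    }

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());  // prints the string returned by ProcessFile
    }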

LibUsbDotNet UsbDevice.ControlTransfer hangs

I have a C# .NET WinForms application which uses LibUsbDotNet to program firmware into a USB device (Atmel AVR32) using "DFU_DNLOAD" transfers, which are a special kind of control transfer. This all works, BUT: a specific kind of transfer, which causes the device to erase its internal flash, fails to send an ACK within the correct timing.
When this happens, my LibUsbDotNet connection becomes irreparably broken, which causes everything to fail.
My code does the following:
int TransferToDevice(byte request, short value, byte[] data)
{
var setup = new UsbSetupPacket(
(byte)(UsbCtrlFlags.Direction_Out | UsbCtrlFlags.RequestType_Class | UsbCtrlFlags.Recipient_Interface),
request,
value,
0,
(short)data.Length);
int n;
IntPtr unmanagedPointer = System.Runtime.InteropServices.Marshal.AllocHGlobal(data.Length);
System.Runtime.InteropServices.Marshal.Copy(data, 0, unmanagedPointer, data.Length);
// UsbDevice obtained else-where
if (!UsbDevice.ControlTransfer(ref setup, unmanagedPointer, data.Length, out n))
{
n = 0;
}
System.Runtime.InteropServices.Marshal.FreeHGlobal(unmanagedPointer);
return n;
}
// In order to do a "DFU_DNLOAD", the method above is used as follows:
TransferToDevice(DFU_DNLOAD, Transactions++, data); // "data" is the payload
// where DFU_DNLOAD is:
private const byte DFU_DNLOAD = 1;
// Transactions is
short Transactions = 0;
The above code works (the device correctly receives the "DFU_DNLOAD" message), but the missing ACK is the problem. Once the error occurs, every attempt to communicate with the device (even if I try to re-initialize everything) fails until the device is disconnected and re-inserted...
I would like to be able to reset or re-initialize the USB-connection somehow, when this error occurs. Currently I am only able to re-establish communications with the device by exiting my application and re-starting it manually.
This was never solved to my satisfaction; I ended up implementing my own "DFU" protocol on top of LibUSB in plain C, P/Invoking into that, and avoiding LibUsbDotNet entirely... This solution seems to work.
Just guessing, but in case data is an array of short rather than byte, the size of the buffer should be adjusted:
int numberOfValues = data.Length;
int size = Marshal.SizeOf(typeof(short));
IntPtr unmanagedPointer = Marshal.AllocHGlobal(numberOfValues*size);
if (unmanagedPointer == IntPtr.Zero)
throw new OutOfMemoryException("Unable to allocate memory");
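Continuing that guess, the copy and the transfer length would then use the element count and the byte size respectively (a sketch; variable names follow the question and the snippet above, and it assumes data is short[]):
    // short[] overload of Marshal.Copy: copies numberOfValues elements (numberOfValues * size bytes)
    Marshal.Copy(data, 0, unmanagedPointer, numberOfValues);
    if (!UsbDevice.ControlTransfer(ref setup, unmanagedPointer, numberOfValues * size, out n))
        n = 0;
    Marshal.FreeHGlobal(unmanagedPointer);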
