My application uses UdpClient to receive images from another machine.
Each image is 951000 bytes and the MTU limit is 1500 bytes.
So the sender application must fragment each image, and every packet it sends carries a header containing 2 ints:
total_number
current_number
The code below receives the bytes, and the bit rate is very demanding because the video has a new frame to send to my application every 30 milliseconds.
I find myself losing packets and I don't know how to do this differently so that I stop losing them.
Does anyone have an idea how to solve this?
Is there a better way?
This is the code:
public class PackagePartial
{
    public int total_count;
    public int current_count; // first packet is 1
    public byte[] buffer;

    public byte[] Serializable()
    {
        // 8-byte header (two ints) followed by the payload
        byte[] result = new byte[8 + buffer.Length];
        BitConverter.GetBytes(total_count).CopyTo(result, 0);
        BitConverter.GetBytes(current_count).CopyTo(result, 4);
        buffer.CopyTo(result, 8);
        return result;
    }

    public static PackagePartial DeSerializable(byte[] v)
    {
        var p = new PackagePartial();
        p.total_count = BitConverter.ToInt32(v, 0);
        p.current_count = BitConverter.ToInt32(v, 4);
        p.buffer = new byte[v.Length - 8];
        Array.Copy(v, 8, p.buffer, 0, p.buffer.Length);
        return p;
    }
}
// the network layer
int lastPackIndex = 0;
List<byte> collection = new List<byte>();
while (true)
{
    byte[] package = _clientListener.Receive(ref ep);
    PackagePartial p = PackagePartial.DeSerializable(package);

    // indication that I lost a packet
    if (p.current_count - lastPackIndex != 1)
    {
        collection.Clear();
        lastPackIndex = 0;
        continue;
    }

    if (p.current_count == p.total_count)
    {
        // last fragment: add its payload, convert the image and send it to the GUI layer as a bitmap
        collection.AddRange(p.buffer);
        Image img = ConvertBytesToImage(collection);
        SendToGui(img);
        collection.Clear();
        lastPackIndex = 0;
    }
    else
    {
        lastPackIndex = p.current_count;
        collection.AddRange(p.buffer);
    }
}
Don't deserialize each package into an intermediate class after you receive it.
Create a List of byte arrays and stuff them all in there as they come in.
Once the other side finishes sending, look in the first one to find the total count and see if the List.Count matches the total count.
If it does, you have all of the packages, now you can reassemble the image, just disregard the headers, you don't need them anymore.
Since you don't need anything at this point but the data from each packet, assembling the image should be faster (no serialization to an intermediate class involved anymore).
This should minimize the processing required for each image.
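A rough sketch of that idea, reusing _clientListener, ep, ConvertBytesToImage and SendToGui from the question and assuming the 8-byte header layout described there (total count at offset 0, current count at offset 4); out-of-order delivery and a lost final packet would still need handling:
List<byte[]> parts = new List<byte[]>();
while (true)
{
    byte[] packet = _clientListener.Receive(ref ep);
    parts.Add(packet);

    int totalCount = BitConverter.ToInt32(packet, 0);
    int currentCount = BitConverter.ToInt32(packet, 4);

    // The sender marks the last fragment with current == total.
    if (currentCount == totalCount)
    {
        // Only rebuild the image if no fragment was lost.
        if (parts.Count == totalCount)
        {
            List<byte> image = new List<byte>();
            foreach (byte[] part in parts)
            {
                // skip the 8-byte header, keep only the payload
                image.AddRange(new ArraySegment<byte>(part, 8, part.Length - 8));
            }
            SendToGui(ConvertBytesToImage(image));
        }
        parts.Clear();
    }
}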
I have a middleware telemetry handler with a method that awaits the execution of a request and then tries to store some key data values from the response body into custom dimensions fields in Application Insights, so I can use Grafana and potentially other 3rd-party products to analyse my responses.
public class ResponseBodyHandler : IResponseBodyHandler
{
private readonly ITelemetryPropertyHandler _telemetryPropertyHandler = new TelemetryPropertyHandler();
public void TransformResponseBodyDataToTelemetryData(RequestTelemetry requestTelemetry, string responseBody)
{
SuccessResponse<List<Booking>> result = null;
try
{
result = JsonConvert.DeserializeObject<SuccessResponse<List<Booking>>>(responseBody);
}
catch (Exception e)
{
Log.Error("Telemetry response handler, failure to deserialize response body: " + e.Message);
return;
}
_telemetryPropertyHandler.CreateTelemetryProperties(requestTelemetry, result);
}
}
public class TelemetryPropertyHandler : ITelemetryPropertyHandler
{
private readonly ILabelHandler _labelHandler = new LabelHandler();
public void CreateTelemetryProperties(RequestTelemetry requestTelemetry, SuccessResponse<List<Booking>> result)
{
Header bookingHeader = result?.SuccessObject?.FirstOrDefault()?.BookingHeader;
requestTelemetry?.Properties.Add("ResponseClientId", "" + bookingHeader?.ConsigneeNumber);
Line line = bookingHeader?.Lines.FirstOrDefault();
requestTelemetry?.Properties.Add("ResponseProductId", "" + line?.PurchaseProductID);
requestTelemetry?.Properties.Add("ResponseCarrierId", "" + line?.SubCarrierID);
_labelHandler.HandleLabel(requestTelemetry, bookingHeader);
requestTelemetry?.Properties.Add("ResponseBody", JsonConvert.SerializeObject(result));
}
}
Now, inside: _labelHandler.HandleLabel(requestTelemetry, bookingHeader);
It extracts an image that is base64 encoded, chunks the string into pieces of 8192 characters, and adds them to the Properties as: Image index 0 .. N (N being the total number of chunks).
I can debug and verify that the code works.
However, in Application Insights the entire "request" entry is missing, not just the custom dimensions.
I am assuming that this is due to a maximum size constraint, and that I am likely trying to add more data than is "allowed"; however, I can't for the life of me find the documentation that enforces this restriction.
Can someone tell me what rule I am breaking, so I can either truncate the image out (if it isn't possible to store that much data) or fix whatever else I am doing wrong?
I have validated that my code works fine as long as I truncate the data into a single property, which of course only partially stores the image (making said "feature" useless).
public class LabelHandler : ILabelHandler
{
private readonly IBase64Splitter _base64Splitter = new Base64Splitter();
public void HandleLabel(RequestTelemetry requestTelemetry, Header bookingHeader)
{
Label label = bookingHeader?.Labels.FirstOrDefault();
IEnumerable<List<char>> splitBase64String = _base64Splitter.SplitList(label?.Base64.ToList());
if (splitBase64String != null)
{
bool imageHandlingWorked = true;
try
{
int index = 0;
foreach (List<char> chunkOfImageString in splitBase64String)
{
string dictionaryKey = $"Image index {index}";
string chunkData = new string(chunkOfImageString.ToArray());
requestTelemetry?.Properties.Add(dictionaryKey, chunkData);
index++;
}
}
catch (Exception e)
{
imageHandlingWorked = false;
Log.Error("Error trying to store label in chunks: " + e.Message);
}
if (imageHandlingWorked && label != null)
{
label.Base64 = "";
}
}
}
}
The above code is responsible for adding the chunks to a requestTelemetry Property field
public class Base64Splitter : IBase64Splitter
{
private const int ChunkSize = 8192;
public IEnumerable<List<T>> SplitList<T>(List<T> originalList)
{
for (var i = 0; i < originalList.Count; i += ChunkSize)
yield return originalList.GetRange(i, Math.Min(ChunkSize, originalList.Count - i));
}
}
This is the method that splits the characters into chunks whose size corresponds to the Application Insights maximum size per custom dimension field.
Here is an image of the truncated field being added, if I just limit myself to a single property and truncate the base64-encoded value.
[I'm from Application Insights team]
You can find field limits documented here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/data-model-request-telemetry
On the ingestion side there is a limit of 64 * 1024 bytes for the overall JSON payload (we need to add that to the documentation).
You're facing something different though - your custom dimensions are removed completely. Maybe the SDK detects that 64 KB is exceeded and "mitigates" it this way. Can you try limiting it to a little less than 64 KB?
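If it is the overall item size, a simple mitigation consistent with that suggestion is to stop adding image chunks once the telemetry item approaches the limit. A minimal sketch (requires System.Linq); the helper name AddImageChunks, the 60,000-character budget, and the way the remaining space is estimated are assumptions, not an official SDK rule:
private const int ChunkSize = 8192;
private const int PayloadBudget = 60000; // stay a bit under 64 * 1024

public void AddImageChunks(RequestTelemetry requestTelemetry, string base64Image)
{
    // Rough estimate of what the existing properties already occupy.
    int used = requestTelemetry.Properties.Sum(kv => kv.Key.Length + kv.Value.Length);
    int index = 0;
    for (int offset = 0; offset < base64Image.Length; offset += ChunkSize)
    {
        string chunk = base64Image.Substring(offset, Math.Min(ChunkSize, base64Image.Length - offset));
        if (used + chunk.Length > PayloadBudget)
        {
            break; // drop the rest of the image rather than losing the whole request entry
        }
        requestTelemetry.Properties.Add($"Image index {index++}", chunk);
        used += chunk.Length;
    }
}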
I understand that the internal buffer used by IMemoryOwner<byte>.Memory can be larger than what I asked for. But is IMemoryOwner<byte>.Memory.Length defined by what I asked for, or by the size of the internal buffer? The documentation doesn't seem precise enough.
Let's take a look.
MemoryPool<T>.Rent is abstract, so we'll go looking for an implementation. ArrayMemoryPool<T>.Rent looks like a good, representative candidate. That implementation looks like this:
public sealed override IMemoryOwner<T> Rent(int minimumBufferSize = -1)
{
if (minimumBufferSize == -1)
minimumBufferSize = 1 + (4095 / Unsafe.SizeOf<T>());
else if (((uint)minimumBufferSize) > MaximumBufferSize)
ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument.minimumBufferSize);
return new ArrayMemoryPoolBuffer(minimumBufferSize);
}
Let's chase that into ArrayMemoryPoolBuffer:
private sealed class ArrayMemoryPoolBuffer : IMemoryOwner<T>
{
private T[]? _array;
public ArrayMemoryPoolBuffer(int size)
{
_array = ArrayPool<T>.Shared.Rent(size);
}
public Memory<T> Memory
{
get
{
T[]? array = _array;
if (array == null)
{
ThrowHelper.ThrowObjectDisposedException_ArrayMemoryPoolBuffer();
}
return new Memory<T>(array);
}
}
public void Dispose()
{
T[]? array = _array;
if (array != null)
{
_array = null;
ArrayPool<T>.Shared.Return(array);
}
}
}
The new Memory<T>(array) means that Memory<T>.Length will just be the size of the underlying array: we don't have any logic to have a smaller Memory<T> wrapping a larger array. So let's see if ArrayPool<T>.Shared.Rent gives us back an array of the correct size...
Again this is abstract, but we can find an implementation at ConfigurableArrayPool<T>. The meat of that method is:
int index = Utilities.SelectBucketIndex(minimumLength);
if (index < _buckets.Length)
{
// Search for an array starting at the 'index' bucket. If the bucket is empty, bump up to the
// next higher bucket and try that one, but only try at most a few buckets.
const int MaxBucketsToTry = 2;
int i = index;
do
{
// Attempt to rent from the bucket. If we get a buffer from it, return it.
buffer = _buckets[i].Rent();
if (buffer != null)
{
if (log.IsEnabled())
{
log.BufferRented(buffer.GetHashCode(), buffer.Length, Id, _buckets[i].Id);
}
return buffer;
}
}
while (++i < _buckets.Length && i != index + MaxBucketsToTry);
// The pool was exhausted for this buffer size. Allocate a new buffer with a size corresponding
// to the appropriate bucket.
buffer = new T[_buckets[index]._bufferLength];
}
else
{
// The request was for a size too large for the pool. Allocate an array of exactly the requested length.
// When it's returned to the pool, we'll simply throw it away.
buffer = new T[minimumLength];
}
We can see that the pool has multiple buckets, with each bucket containing arrays of a specified size. This method is finding the bucket of arrays which are just larger than the requested size; if that bucket is empty it goes looking at larger buckets. If it fails to find anything, it creates a new array with the bucket size; only if the requested size is larger than the pool can manage does it create an array of the exact requested size.
So it's very unlikely that the array underlying the Memory<T> has the requested size, and the construction of the Memory<T> does nothing to pretend that the Memory<T> is smaller than its underlying array.
The conclusion is that IMemoryOwner<byte>.Memory.Length can indeed be larger than requested.
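A quick way to see this in practice (the exact length depends on the pool implementation and its bucket sizes, so 1024 here is just the typical outcome for the shared pool):
using System;
using System.Buffers;

class RentLengthDemo
{
    static void Main()
    {
        // Ask for 1000 bytes; the shared pool typically returns a 1024-byte bucket array.
        using (IMemoryOwner<byte> owner = MemoryPool<byte>.Shared.Rent(1000))
        {
            Console.WriteLine(owner.Memory.Length);          // likely 1024, not 1000

            // If the exact size matters, slice the Memory<T> yourself.
            Memory<byte> exact = owner.Memory.Slice(0, 1000);
            Console.WriteLine(exact.Length);                 // 1000
        }
    }
}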
byte[] bytes = Call("getBytes"); — where the getBytes function returns a byte[].
The function above is called to fetch image RGB data in C#. The returned byte[] is deep copied into the bytes array.
Since the returned byte array is large, the deep copying adds more time.
How can I make the bytes array in C# hold only a reference to the Java byte[]?
public class TestUtil : MonoBehaviour
{
public static string TAG = "--------TestUtil------------> ";
private static AndroidJavaObject pluginClass;
public static List<byte[]> rgbList = new List<byte[]>();
void Start()
{
Debug.Log(TAG + "start called");
//mainDataArray = new byte[1280*720*4];
Debug.Log(TAG + "creating java object");
initializePlayer();
}
public void initializePlayer()
{
// StreamHandler is the Java class; here I am creating a StreamHandler object
pluginClass = new AndroidJavaObject("com.test.android.decoder.StreamHandler");
// using this code to get the context
AndroidJavaClass unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
AndroidJavaObject activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
// setting the context on the StreamHandler object
pluginClass.Call("setActivity", activity);
// setting the interface object, where java class will call the respective function
pluginClass.Call("setOnDecodeListener", new AndroidPluginCallback());
// initializing the player
pluginClass.Call("init", ipAddress, port, outWidth, outHeight);
Debug.Log(TAG + " Initialization done");
}
public void quitApplication(string sid)
{
Application.Quit();
}
private void Update()
{
if (Input.GetKey(KeyCode.Escape)) {
Debug.Log(TAG + "Escape");
quitApplication(sId);
}
}
int count;
private void LateUpdate()
{
if (0 != rgbList.Count) {
// here I am updating the RGB texture on the Quad game object
}
}
class AndroidPluginCallback : AndroidJavaProxy
{
public AndroidPluginCallback() : base("com.test.android.OnDecodeListener") { }
public void success(byte[] videoPath)
{
}
public void onFrameAvailable()
{
// Called when the Java code successfully generates RGBA data
long startTime = DateTimeOffset.Now.ToUnixTimeMilliseconds();
// Need to call "getBytes()" to get the RGBA frame.
// Note: generally, if you call the same function from another Java class it does a shallow copy of the byte[] object,
// but in this case it is doing a deep copy, or I am not sure what's going on.
byte[] rawBytes = pluginClass.Call<byte[]>("getBytes"); // width and height of frame is 1088x1088
rgbList.Add(rawBytes);
long diff = DateTimeOffset.Now.ToUnixTimeMilliseconds() - startTime;
Debug.Log(TAG + "Entered into onFrameAvailable callback. time taken to copy to rawbytes: " + diff); // diff is 14ms on average. Not sure why.
}
public void fail(string errorMessage)
{
Debug.Log(TAG + "ENTER callback onError: " + errorMessage);
}
}
}
AndroidJavaObject uses the JNI (Java Native Interface) to marshal data to and from Java land. Depending on how Java stores the arrays in memory, the JNI may need to do a deep copy to form an array that C# would understand, such as if the JVM originally stored the array in non-contiguous blocks.
Here's IBM's description:
JNI provides a clean interface between Java code and native code. To maintain this separation, arrays are passed as opaque handles, and native code must call back to the JVM in order to manipulate array elements using set and get calls. The Java specification leaves it up to the JVM implementation whether these calls provide direct access to the arrays or return a copy of the array. For example, the JVM might return a copy when it has optimized arrays in a way that does not store them contiguously.
These calls, then, might cause copying of the elements being manipulated. For example, if you call GetLongArrayElements() on an array with 1,000 elements, you might cause the allocation and copy of at least 8,000 bytes (1,000 elements * 8 bytes for each long). When you then update the array's contents with ReleaseLongArrayElements(), another copy of 8,000 bytes might be required to update the array. Even when you use the newer GetPrimitiveArrayCritical(), the specification still permits the JVM to make copies of the full array.
So basically, try to avoid marshalling arrays across the JNI (such as with AndroidJavaObject) as much as possible, because it's not up to the C# side whether the JNI makes a deep copy or not.
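If the copy can't be eliminated, one mitigation in that spirit is to cross the JNI boundary less often: flag new frames in the callback and call getBytes at most once per rendered frame, so intermediate frames are skipped instead of copied. A rough sketch of members for the question's TestUtil class, using only the calls already shown there (it assumes getBytes returns the most recent frame; the frameReady field is mine, and the other OnDecodeListener methods are omitted):
private static volatile bool frameReady;

class AndroidPluginCallback : AndroidJavaProxy
{
    public AndroidPluginCallback() : base("com.test.android.OnDecodeListener") { }

    // Only record that a new frame exists; no JNI array marshalling here.
    public void onFrameAvailable()
    {
        frameReady = true;
    }
}

private void LateUpdate()
{
    if (!frameReady) return;
    frameReady = false;

    // Still a deep copy across the JNI, but only for frames that are actually displayed.
    byte[] rawBytes = pluginClass.Call<byte[]>("getBytes");
    // ... upload rawBytes into the Quad's texture here ...
}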
I am having a problem with the camera callback buffer. The camera adds 4 new callback buffers and also associates each byte array with a ByteBuffer in a dictionary, mBytesToByteBuffer, of the form (byte[], ByteBuffer). I believe the problem is that the dictionary never updates its keys to the most recent frame, so comparisons with the current frame data fail and the code never progresses past that point.
I have looked through the Google example I have been working from, and it seems that they do not update their mBytesToByteBuffer at all, yet it seems to work.
Having looked at the source code for both AddCallbackBuffer and SetPreviewCallbackWithBuffer, it seems that they do not do anything with the byte arrays passed to them.
Declaration of mBytesToByteBuffer
public ConcurrentDictionary<byte[], ByteBuffer> mBytesToByteBuffer = new ConcurrentDictionary<byte[], ByteBuffer>();
Adding the CallbackBuffers:
camera.AddCallbackBuffer(createPreviewBuffer(mPreviewSize));
camera.AddCallbackBuffer(createPreviewBuffer(mPreviewSize));
camera.AddCallbackBuffer(createPreviewBuffer(mPreviewSize));
camera.AddCallbackBuffer(createPreviewBuffer(mPreviewSize));
camera.SetPreviewCallbackWithBuffer(new CameraPreviewCallback(this));
The dictionary mBytesToByteBuffer contains byte arrays of the correct size; however, they never get populated.
The Byte Buffer Dictionary is created here.
private byte[] createPreviewBuffer(Size previewSize)
{
int bitsPerPixel = ImageFormat.GetBitsPerPixel(ImageFormatType.Nv21);
long sizeInBits = previewSize.Height * previewSize.Width * bitsPerPixel;
int bufferSize = (int)System.Math.Ceiling(sizeInBits / 8.0d) + 1;
byte[] byteArray = new byte[bufferSize];
ByteBuffer buffer = ByteBuffer.Wrap(byteArray);
if (!buffer.HasArray)
{
throw new IllegalStateException("Failed to create valid buffer for camera source.");
}
mBytesToByteBuffer[byteArray] = buffer; //(byteArray, buffer);
return byteArray;
}
The callback buffer is looked up here through mBytesToByteBuffer:
public void setNextFrame(byte[] data, Android.Hardware.Camera camera)
{
lock (mLock)
{
if (mPendingFrameData != null)
{
camera.AddCallbackBuffer(mPendingFrameData.ToArray<System.Byte>());
mPendingFrameData = null;
}
if (!cameraSource.mBytesToByteBuffer.ContainsKey(data))
{
Log.Debug(TAG, "Skipping Frame, Could not Find ByteBuffer Associated with image");
return;
}
mPendingTimeMillis = SystemClock.ElapsedRealtime() - mStartTimeMillis;
mPendingFrameId++;
mPendingFrameData = cameraSource.mBytesToByteBuffer[data];
Monitor.PulseAll(mLock);
}
}
But the code never gets past
if (!cameraSource.mBytesToByteBuffer.ContainsKey(data))
even though data is fully populated.
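Note that ContainsKey on a Dictionary or ConcurrentDictionary keyed by byte[] compares keys by reference identity, not by contents, unless a custom IEqualityComparer is supplied, so the lookup only succeeds if data is the very same array instance that createPreviewBuffer registered. A small illustration:
using System;
using System.Collections.Generic;
using System.Linq;

class ByteArrayKeyDemo
{
    static void Main()
    {
        var map = new Dictionary<byte[], string>();
        byte[] original = new byte[] { 1, 2, 3 };
        map[original] = "buffer A";

        // An array with identical contents but a different instance is not found,
        // because the default comparer for byte[] is reference equality.
        byte[] copy = original.ToArray();
        Console.WriteLine(map.ContainsKey(original)); // True
        Console.WriteLine(map.ContainsKey(copy));     // False
    }
}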
CameraPreviewCallback:
private class CameraPreviewCallback : Java.Lang.Object, Android.Hardware.Camera.IPreviewCallback
{
CameraSource cameraSource;
public CameraPreviewCallback(CameraSource cs)
{
cameraSource = cs;
}
public void OnPreviewFrame(byte[] data, Android.Hardware.Camera camera)
{
cameraSource.mFrameProcessor.setNextFrame(data, camera);
}
}
I have been working through the Google Vision Example Camera Source
My Full Camera Source
MFC CArray was Serialized and saved to a database. I need to read this data into a C# project. I am able to retrieve the data as byte[] from the database. I then write the byte[] to a MemoryStream. Now I need to read the data from the MemoryStream.
Someone has apparently solved this before, but did not write their solution.
http://social.msdn.microsoft.com/Forums/eu/csharpgeneral/thread/17393adc-1f1e-4e12-8975-527f42e5393e
I followed these projects in my attempt to solve the problem.
http://www.codeproject.com/Articles/32741/Implementing-MFC-Style-Serialization-in-NET-Part-1
http://www.codeproject.com/Articles/32742/Implementing-MFC-Style-Serialization-in-NET-Part-2
The first thing in the byte[] is the size of the array, and I can retrieve that with binaryReader.ReadInt32(). However, I cannot seem to get back the float values. If I try binaryReader.ReadSingle() or
public void Read(out float d) {
byte[] bytes = new byte[4];
reader.Read(bytes, m_Index, 4);
d = BitConverter.ToSingle(bytes, 0);
}
I do not get back the correct data. What am I missing?
EDIT Here is the C++ code that serializes the data
typedef CArray<float, float> FloatArray;
FloatArray floatArray;
// fill floatArray
CSharedFile memoryFile(GMEM_MOVEABLE | GMEM_ZEROINIT);
CArchive ar(&memoryFile, CArchive::store);
floatArray.Serialize(ar);
ar.Close();
EDIT 2
By reading backward, I was able to get all of the floats, and was also able to determine that the size for CArray is stored in 2 bytes, i.e. an Int16. Does anyone know if this is always the case?
Using the CodeProject articles above, here is a C# implementation of CArray which will allow you to deserialize a serialized MFC CArray.
// Deriving from the IMfcArchiveSerialization interface is not mandatory
public class CArray : IMfcArchiveSerialization {
public Int16 size;
public List<float> floatValues;
public CArray() {
floatValues = new List<float>();
}
virtual public void Serialize(MfcArchive ar) {
if(ar.IsStoring()) {
throw new NotImplementedException("MfcArchive can't store");
}
else {
// be sure to read in the order in which they were stored
ar.Read(out size);
for(int i = 0; i < size; i++) {
float floatValue;
ar.Read(out floatValue);
floatValues.Add(floatValue);
}
}
}
}
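For completeness, a minimal sketch that skips the MfcArchive wrapper from the CodeProject articles and reads the archive directly with a BinaryReader, assuming the layout observed above: a 16-bit count followed by raw little-endian IEEE-754 floats (MFC's CArchive switches to a longer encoding once the count reaches 0xFFFF, which this does not handle):
using System;
using System.Collections.Generic;
using System.IO;

static class MfcFloatArrayReader
{
    public static List<float> Read(byte[] serializedCArray)
    {
        var floats = new List<float>();
        using (var reader = new BinaryReader(new MemoryStream(serializedCArray)))
        {
            ushort count = reader.ReadUInt16();   // CArchive stores small counts as a WORD
            for (int i = 0; i < count; i++)
            {
                floats.Add(reader.ReadSingle());  // each float is stored as a raw 4-byte value
            }
        }
        return floats;
    }
}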