I created an application that stores byte arrays in my SQLiteDatabase.
This same application also selects the byte arrays from the database every 'x' seconds.
The dataflow of my application is as follows:
Application -> SQLiteDatabase -> Application
My question is:
How do I fill one byte array with all the incoming byte arrays from the SQLiteDatabase?
For example:
Byte[] Data;
Needs to be filled with the following byte array:
Byte[] IncomingData;
IncomingData is constantly being filled by the SQLiteDatabase.
Data needs to be filled with IncomingData constantly.
Can someone help me out?
Just use LINQ's Concat:
Data = Data.Concat(IncomingData).ToArray();
Note that Concat does not modify the array in place; it returns a new sequence, so you have to call ToArray() and assign the result back. You'll also need a using directive for the System.Linq namespace.
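For a bit more context, here is a minimal sketch of how that could look in the polling loop; OnNewChunk is just a hypothetical stand-in for whatever gets called when a new chunk has been read from the SQLite database:

using System;
using System.Linq;

byte[] Data = Array.Empty<byte>();

// Called every 'x' seconds with the bytes just read from the database.
void OnNewChunk(byte[] IncomingData)
{
    Data = Data.Concat(IncomingData).ToArray(); // allocates a new array on every call
}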
There are a few approaches you can take.
Use a List<byte> and List.AddRange
Use LINQ's Enumerable.Concat
Use Array.Copy and do it all manually
Of the three, go with the List if possible, as it will likely reduce the amount of array copying required. This is what Lists are made for: they use an array behind the scenes with a certain capacity, which starts at 4 and doubles each time it is reached. The capacity can even be set to some large number up front via the list.Capacity property or the constructor that takes an int, much like you would size an array. You can always get an array back out using List.ToArray.
Enumerable.Concat will likely only allocate an array of the minimum size, meaning a new array has to be created every time you receive some more bytes.
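For example, here is a minimal sketch of the List<byte> approach; the ByteAccumulator class and the initial capacity of 4096 are just illustrative:

using System.Collections.Generic;

class ByteAccumulator
{
    // Pre-size the capacity if you can estimate the total, to reduce re-allocations.
    private readonly List<byte> buffer = new List<byte>(4096);

    // Call this each time a new chunk arrives from the database.
    public void Append(byte[] incoming) => buffer.AddRange(incoming);

    // Copy out a plain byte[] only when you actually need one.
    public byte[] ToArray() => buffer.ToArray();
}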
Related
I'm making a program in which one of the functions, in order to correctly build the message to be sent, repeatedly calls a helper function I wrote to append each of the parts to the array. The problem is that in C# you can't do this, because byte arrays (and, if I'm not mistaken, arrays of any kind) have a fixed Length which cannot be changed.
Because of this, I thought of creating two byte-array variables. The first one would hold the first two values. The second one would be created once you know how many new bytes you have to add; after that, you would delete the first variable and create it again with the Length of the previous variable plus the Length of the new values, doing the same as you did with the second variable. The code I've written is:
byte[] message_mod_0 = adr_and_func;
byte[] byte_memory_adr = AddAndTypes.ToByteArray(memory_adr);
byte[] message_mod_1 = new byte[2 + byte_memory_adr.Length];
message_mod_1 = AddAndTypes.AddByteArrayToByteArray(message_mod_0, byte_memory_adr);
AddAndTypes.AddByteArrayToByteArray(message_mod_0, AddAndTypes.IntToByte(value));
byte[] CRC = Aux.CRC(message_mod_0);
AddAndTypes.AddByteArrayToByteArray(message_mod_0, CRC);
In this code, the two variables I mean are message_mod_0 and message_mod_1. I'm also thinking of deleting and re-declaring the byte_memory_adr variable, which is needed in order to know the Length of the byte array you want to add to the output message.
The parameters adr_and_func, memory_adr and value are given as input parameters of the function I'm making.
The question can be summed up as: is there any way to delete variables in the same scope they were created in? And, if so, would there be any problem with creating a new variable with the same name after the first one has been deleted? I can't see why that would be an issue, but I'm pretty new to this programming language.
Also, I don't know if there is any less messy way of doing this.
This sounds like you are writing your own custom serializer.
I would recommend just using an existing library, like protobuf.net, to define your messages if at all possible.
If this is not possible, you can use a BinaryWriter to write your values to a Stream. If you want to keep everything in memory, use a MemoryStream and call .ToArray() when you're done to get an array of all the bytes.
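For instance, a rough sketch of the MemoryStream/BinaryWriter approach; the method and parameter names simply mirror the ones in the question, and the exact message layout (CRC etc.) is omitted:

using System.IO;

static byte[] BuildMessage(byte[] adrAndFunc, short memoryAdr, int value)
{
    using (var ms = new MemoryStream())
    using (var writer = new BinaryWriter(ms))
    {
        writer.Write(adrAndFunc); // raw bytes are written as-is
        writer.Write(memoryAdr);  // value types are written in their binary representation
        writer.Write(value);
        writer.Flush();
        return ms.ToArray();      // one array containing everything written so far
    }
}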
As for memory, do not worry about it. Unless you have gigabyte-sized messages, the memory requirements should not be an issue; the garbage collector will automatically recycle memory when it is no longer needed, and it can do this after the last use, regardless of scope. If you do have huge memory streams, you might want to look at something like RecyclableMemoryStream, since it can avoid some allocation performance and fragmentation issues.
Why do I need to use Add() to add elements to a List? Why can't I just use indexing to do it? When I traverse the elements of the List, I do it using indexes.
int head = -1;
List<char> arr = new List<char>();
public void push(char s)
{
    ++head;
    arr[head] = s; // throws a runtime error
    arr.Add(s);
}
It doesn't throw any error at compile time, but at runtime it throws an error stating IndexOutOfRangeException.
++head;
arr[head] = s;
This attempts to set the element at index 0 of the list to s, but there is no element at index 0 yet because you haven't added anything; the list is still empty.
When you create an array, you define a length, so each item has a memory address that can be assigned to.
Lists are useful when you don't know how many items you're going to have, or what their index is going to be.
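A small sketch of that difference (just illustrative):

using System;
using System.Collections.Generic;

char[] array = new char[4];
array[0] = 'a';            // fine: the array already has 4 slots

var list = new List<char>();
// list[0] = 'a';          // would throw: the list is still empty
list.Add('a');             // now the list has one element...
list[0] = 'b';             // ...and index 0 can be assigned to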
Arrays have a fixed size. Once you allocate one, you cannot add or remove "slots" from it. So if you need it to be bigger, you need to:
Detect that you need a bigger array.
Allocate a new, bigger array.
Copy all existing values to the new, bigger array.
Start using the bigger array from now on, everywhere.
All that Lists do is automate that precise process. A List will automatically detect that it needs to grow during Add() and then do steps 2-4 for you. It is even responsible for picking the initial size and deciding by how much to grow (to avoid having to grow too often).
In theory it could just react to list[11000] by growing the size to 11000. But chances are very high that such a value is a huge mistake, and preventing the programmer from making huge mistakes is what half the classes and compiler rules (like strong typing) are there for. So you are forced to use Add(), and such a mistake cannot happen.
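To make the manual process concrete, here is a rough sketch of roughly what a List<T> does for you internally; the class name and the doubling growth factor are just the usual convention, not a guarantee:

using System;

class GrowableCharBuffer
{
    private char[] buffer = new char[4]; // small initial capacity
    private int count;

    public void Add(char item)
    {
        if (count == buffer.Length)                      // 1. detect that a bigger array is needed
        {
            char[] bigger = new char[buffer.Length * 2]; // 2. allocate a new, bigger array
            Array.Copy(buffer, bigger, count);           // 3. copy all existing values over
            buffer = bigger;                             // 4. use the bigger array from now on
        }
        buffer[count++] = item;
    }
}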
Actually, calling myArray[2] = ... does not add an element; it just assigns the object to the specified index within the array. If the array's size is smaller, you get an IndexOutOfRangeException, just as you get an out-of-range exception with a List<T>. So in the case of an array, too, using the indexer assumes you actually have that many elements:
var array = new int[3];
array[5] = 4; // bang
This is because arrays have a fixed size which you can't change. If you assign an object to an index beyond the array's bounds, you get an out-of-range exception just as with a List<T>; there's no real difference here.
The only real difference is that with new int[3] you have an array of size 3 with indices up to 2, so you can access array[2]. However, that just returns the default value, which for int is zero. With new List<int>(3), in contrast, you don't actually have three elements: the list has no items at all, and calling list[2] throws the exception. The parameter passed to the list is just the capacity, a hint to the runtime about when the underlying array of the list should be resized - an ability your array does not even have.
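A short sketch of that difference:

using System;
using System.Collections.Generic;

var array = new int[3];
Console.WriteLine(array[2]);    // 0: the slot exists and holds the default value

var list = new List<int>(3);    // 3 is only the capacity
Console.WriteLine(list.Count);  // 0: no elements have been added yet
// Console.WriteLine(list[2]);  // would throw ArgumentOutOfRangeException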
A list is an array wrapper, where the internal array size is managed by its methods. The constructor that takes a capacity simply creates an array of that size internally, but the Count property (which reflects the number of elements that have been added) will be zero. So, in essence, zero slots of the array have been assigned a value.
The size of an array is managed by you, the programmer. That is why you have to call static methods like System.Array.Resize (notice that the array argument is passed by ref) if you want to change an array's size yourself. That method allocates a new chunk of memory of the new size.
So to sum up, the list essentially manages an array for you, and the tradeoff is that you can only access as many array-like slots as elements have been added to it.
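For example, a small sketch of resizing an array yourself with System.Array.Resize (note the ref argument):

using System;

int[] numbers = { 1, 2, 3 };
Array.Resize(ref numbers, 5);      // allocates a new array of length 5 and copies the old values
Console.WriteLine(numbers.Length); // 5
Console.WriteLine(numbers[4]);     // 0: new slots hold the default value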
In C# I'm storing values in an array.
So to create this array I'm using this code: int[] values = new int[10];
But what if I need more than 10 values, or if I never know how many values I will have? It could be 1, 10 or 100.
I understand the idea that I need to let the compiler know how big the array should be so it can allocate memory space for it.
Is there a way to work around that?
You could just use a List and let it do all the heavy lifting for you:
List<int> values = new List<int>();
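A quick sketch of how that might look in use:

using System;
using System.Collections.Generic;

List<int> values = new List<int>();
values.Add(42);                    // the list grows as needed, no size declared up front
values.Add(7);
Console.WriteLine(values.Count);   // 2
int[] asArray = values.ToArray();  // get a plain array back if an API requires one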
Arrays must have a defined length. If you want a dynamic size, consider using the List<T> class.
Please take a look at and research the concept of "Immutable objects"
An array has a fixed size. If you need an array with a dynamic size, it is best to either create extension methods or a handler that does the work for you.
The work to be done is: take the existing array, create a new array with the new size (depending on whether you want to add or remove something), and populate the new array with the data from the previous array. This creates a new object instead of modifying the previous one, and makes sure you don't push items into a full array or end up with an array larger than the number of items it holds.
Of course the List class would work as well and would probably solve your problem.
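As an illustration only, here is a rough sketch of what such an extension method could look like; Append is a hypothetical name, not a framework method:

using System;

static class ArrayExtensions
{
    // Returns a new, larger array containing the old items plus the new one;
    // the original array is left untouched.
    public static T[] Append<T>(this T[] source, T item)
    {
        T[] result = new T[source.Length + 1];
        Array.Copy(source, result, source.Length);
        result[source.Length] = item;
        return result;
    }
}

// usage: values = values.Append(5);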
I have the task of reading data from a source in chunks and storing the entire result in a byte array. Specifically, I need to make repeated calls to "Socket.Receive". I would like to allocate the byte array with the final size in advance, and each time pass the position within the array to copy data into, in order to avoid an extra copy.
In C++, you would simply pass a pointer offset into the array. I could not figure out how to give the Receive method a location in the middle of the byte array...
Can this be done in C#?
There are overloads of Receive that accept an offset and a count to read. You can use them: https://msdn.microsoft.com/en-us/library/system.net.sockets.socket.receive(v=vs.110).aspx - for a specific example: https://msdn.microsoft.com/en-us/library/w3xtz6a5(v=vs.110).aspx
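A small sketch of the idea, wrapped in a helper so it is self-contained; ReceiveExactly is just an illustrative name:

using System.Net.Sockets;

static byte[] ReceiveExactly(Socket socket, int totalSize)
{
    byte[] buffer = new byte[totalSize];   // allocate the final size up front
    int received = 0;
    while (received < totalSize)
    {
        // Receive writes directly into 'buffer' starting at offset 'received',
        // so no intermediate copy is needed.
        int read = socket.Receive(buffer, received, totalSize - received, SocketFlags.None);
        if (read == 0)
            break;                         // remote side closed the connection early
        received += read;
    }
    return buffer;
}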
I've encountered a problem, which is best illustrated with this code segment:
public static void Foo(long RemoveLocation)
{
// Code body here...
// MyList is a List type collection object.
MyList.RemoveAt(RemoveLocation);
}
Problem: RemoveLocation is a long. The RemoveAt method takes only int types. How do I get around this problem?
Solutions I'd prefer to avoid (because it's crunch time on the project):
Splitting MyList into two or more lists; that would require rewriting a lot of code.
Using int instead of long.
If there were a way you could group similar items together, could you bring the total down below the limit? E.g. if your data contains lots of repeated X,Y coordinates, you might be able to reduce the number of elements and still keep one list by adding a frequency-count field, e.g. (x, y, count).
In theory, the maximum number of elements in a list is int.MaxValue, which is about 2 billion.
However, it is very inefficient to use the list type to store an extremely large number of elements. It simply wasn't designed for that, and you're better off with a tree-like data structure.
For instance, if you look at Mono's implementation of the list types, you'll see that they use a single array to hold the elements, and I assume .NET's version does the same. Since the maximum size of a single object in .NET is 2 GB, the actual maximum number of elements is 2 GB divided by the element size. So, for instance, a list of strings on a 64-bit machine could hold at most about 268 million elements.
When using the mutable (non-read-only) list types, this array needs to be re-allocated to a larger size (usually twice the old size) when adding items, requiring the entire contents to be copied. This is very inefficient.
In addition, having very large objects can also have a negative impact on the garbage collector.
Update
If you really need a very large list, you could write your own data type, for instance using an array of large arrays as internal storage.
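As a very rough sketch of that idea; the class name, the chunk size, and the minimal API are all just illustrative:

using System.Collections.Generic;

class BigList<T>
{
    private const int ChunkSize = 1 << 20;            // about a million elements per inner array
    private readonly List<T[]> chunks = new List<T[]>();

    public long Count { get; private set; }

    public void Add(T item)
    {
        if (Count % ChunkSize == 0)
            chunks.Add(new T[ChunkSize]);              // grow by adding another fixed-size chunk
        chunks[(int)(Count / ChunkSize)][Count % ChunkSize] = item;
        Count++;
    }

    public T this[long index]
    {
        get => chunks[(int)(index / ChunkSize)][index % ChunkSize];
        set => chunks[(int)(index / ChunkSize)][index % ChunkSize] = value;
    }
}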
There are also some useful comments about this here:
http://blogs.msdn.com/b/joshwil/archive/2005/08/10/450202.aspx