Why is my IEnumerable variable also getting updated? - c#

I am a little confused about why the logic here isn't working, and I feel like I have been staring at this bit of code for so long that I am missing something here. I have this method that gets called by another method:
private async Task<bool> Method1(int start, int end, int increment, IEnumerable<ModelExample> examples)
{
    for (int i = start; i <= end; i++)
    {
        ModelExample example = examples.Where(x => x.id == i).Select(x => x).First();
        example.id = example.id + increment; // Line X
        // do stuff
    }
    return true;
}
I debugged the code above, and it seems that when "Line X" executes, not only is example.id changed, but the corresponding entry in the list "examples" is updated with the new id value as well.
I am not sure why. I want the list "examples" to remain the same for the entirety of the for loop, so I am confused why updating example.id also updates it in the list.
(i.e. if the list before "Line X" had an entry with id = 0, after "Line X" that same entry has its id updated to 1. How can I keep the variable "examples" constant here?)
Any help appreciated, thanks.

This is what your list looks like:
+---------------+
| List examples |         +-----------+
+---------------+         |  Example  |
|               |         +-----------+
|  [0] ------------------>|  Id: 1    |
|               |         |  ...      |
+---------------+         +-----------+
|               |
|  [1] ------------------>+-----------+
|               |         |  Example  |
+---------------+         +-----------+
|               |         |  Id: 2    |
|  ...          |         |  ...      |
|               |         +-----------+
+---------------+
In other words, your list just contains references to your examples, not copies. Thus, your variable example refers to one of the entities on the right-hand side and modifies it in-place.
If you need a copy, you need to create one yourself.
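For example, a minimal sketch of working on a copy instead (the exact members of ModelExample are assumptions; copy whatever properties your model actually has):
ModelExample original = examples.First(x => x.id == i);
// Create a detached copy; only this copy is modified.
ModelExample example = new ModelExample
{
    id = original.id + increment
    // Name = original.Name, ... (copy the remaining properties here)
};
// 'example' can now be changed freely; the instance inside 'examples' is untouched.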

Related

EF retrieves rows with id zero (sometimes) causing new entities to be added to database during update operation

I have an API endpoint that updates a sequence for rows associated with an account.
The logic is similar to this
public async Task<ServiceResult<IOrderedEnumerable<SomeItem>>> ChangeSequence(
    ChangeSequenceRequest request,
    CancellationToken cancellationToken)
{
    IOrderedEnumerable<SomeItem> originalPriorityList =
        await someItemRepository.Get(request.AccountId, cancellationToken);
    // logic to check whether the lock token is valid
    // new sequence order is updated
    await someItemRepository.SetSequence(newItemSequence,
        originalPriorityList,
        request.AccountId,
        cancellationToken);
    // release the token
    // create service result
}
The request contains the following structure:
public class ChangeSequenceRequest
{
    public string token { get; set; }
    public IEnumerable<SomeItem> newSequence { get; set; } = Enumerable.Empty<SomeItem>();
}
Then SomeItem has the following structure:
public class SomeItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Sequence { get; set; }
}
Here's the logic behind someItemRepository.SetSequence call
public async Task SetSequence(
    IOrderedEnumerable<SomeItem> newSequence,
    IOrderedEnumerable<SomeItem> originalSequence,
    int accountId,
    CancellationToken cancellationToken)
{
    await _dbContextFactory.DbContext(cancellationToken).Exec(async (dbx, ct) =>
    {
        Dictionary<string, SomeItem> originalSequenceByName = originalSequence.ToDictionary(x => x.Name);
        for (int i = 0; i < newSequence.Count(); i++)
        {
            SomeItem newSomeItem = newSequence.ElementAt(i);
            // logic to skip unchanged items
            newSomeItem.Id = originalSequenceByName[newSomeItem.Name].Id;
            dbx.Update(newSomeItem);
        }
        await dbx.SaveChangesAsync();
    });
}
Strange behavior observed
I have set up a console application that calls the API endpoint hundreds of times concurrently with the same lock key, but with a randomized sequence (still a proper arithmetic sequence). Note that the lock key is destroyed after each call to the API endpoint. What I have noticed is that sometimes the sequence gets out of whack, i.e. I observe results like this in the database:
What I expect in SomeItemTable:
| Id  | Name | Sequence | AccountId |
|-----|------|----------|-----------|
| 99  | A    | 1        | 3         |
| 100 | B    | 2        | 3         |
| 101 | C    | 3        | 3         |
| 102 | D    | 4        | 3         |
But I observe:
| Id  | Name | Sequence | AccountId |
|-----|------|----------|-----------|
| 99  | A    | 2        | 3         |
| 100 | B    | 4        | 3         |
| 101 | C    | 3        | 3         |
| 102 | D    | 1        | 3         |
| 103 | D    | 1        | 3         |
Or any other variation that ruins the sequence.
What I currently understand about the problem
My current understanding is that when updating the database context, the entity state gets set to "Added" instead of "Modified" when you don't give the entity its primary key, i.e. leave it as 0. You can see that the key is being mapped when calling SetSequence. However, sometimes when retrieving originalSequence via someItemRepository.Get(request.AccountId, cancellationToken) I get some entries in the collection that have Id 0. This is the main crux of the problem, and I have no understanding as to why it happens.
What I expected to happen
I expect the database to simply end up with the sequence from the last call that finished executing against the table, as per "last in wins".
I've figured it out. The repository used to retrieve the sequence was decorated with a cache interface (which, under this configuration, uses Redis), and the records coming from the cache had not been assigned Ids. That meant that every time I retrieved records, some of them ended up in the Added state instead of Modified because of the unassigned Id.
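For reference, a rough sketch of why an unassigned key leads to an insert, assuming a conventional EF Core setup with a store-generated key (SomeItem and dbx come from the question; the values are illustrative):
var fromCache = new SomeItem { Id = 0, Name = "A", Sequence = 1 };
dbx.Update(fromCache);
// Id equals the CLR default (0), so EF Core tracks the entity as Added
// and SaveChangesAsync inserts a duplicate row.

var fromDb = new SomeItem { Id = 99, Name = "A", Sequence = 1 };
dbx.Update(fromDb);
// Non-default key: the entity is tracked as Modified and an UPDATE is issued.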

Algorithm to always get oldest array index based on position of newest

This is almost an impossible question to ask, but any advice on the algorithm would be greatly appreciated (I will explain the best I can);
I have an array of size ~4000 bytes which contains data in byte format.
For this demonstration, I am going to simplify things a bit; say it's size 7 (to represent 'blocks' of data, not single values!);
| 0 | 1 | 2 | 3 | 4 | 5 | 6 |
I am adding a value at position 0, with the rest of the array being '0'
key: N = newest, O = oldest, X = Filled
| N | | | | | | |
I now need to add another value. This will be entered at the next available position.
| O | N | | | | | |
So position [0] is now the 'oldest' part of the array, and position [1] is the newest.
This has been (currently) worked out by looking all the way right, seeing no values, and then starting from position [0] until it sees a value.
Let's add another:
| O | X | N | | | | |
Note, the oldest value hasn't changed position, as it is still the oldest part of the array.
I am now going to 'clear' the oldest part of the array (in this example it is currently pos [0]). This makes 'O' move over to the next position.
| | O | N | | | | |
Let's add another value. Since it will go to the first 'empty' space, it will go to position [0]; this means the newest value is now at position [0].
| N | O | X | | | | |
I'm going to clear another one now; so again, by looking to the right of the newest value, I see a value at position [1]. So I'm going to clear it.
| | | O/N | | | | |
This means position [2] is now both the newest and oldest value available.
Adding another makes;
| N | | O | | | | |
Adding another;
| X | N | O | | | | |
and adding another;
| X | X | O | N | | | |
I am looking to delete the oldest value now. So by looking right from position of the 'newest' variable, I see pos[0] has a value, so that must be it. UH-OH that's not the oldest value!
As you can (hopefully) tell, I am unable to find the oldest value by looking to the right of the newest one - this problem only occurs every so often, and it has been hard to find a solution.
I only know the index of the most recent value added, which makes this very hard to solve (lots of scribbling and diagrams have been attempted, lots of scrumpled-up paper).
So if anyone has any ideas as to how I could ALWAYS find the oldest value's index, I would be greatly appreciative! (I also know this is quite a complex question, so if anyone wants/needs clarification, I'll be happy to edit/explain further!) I have tagged c#, but realistically I only need a BASIC algorithm to make progress, to be honest!
====================================================================================
EDIT
Answers have suggested to allocate to the right of the 'newest' position;
like:
| | | O | N | | | |
| | | O | X | N | | |
| | | O | X | X | N | |
| | | O | X | X | X | N |
| N | | O | X | X | X | X |
| X | N | O | X | X | X | X |
Which I think COULD work, but does anyone know if this would fail (say, if I removed a value at a certain time, etc.)?
I guess you are forced to use an array; if not, you should consider switching to a more suitable data structure such as a Queue.
If you are indeed forced to use an array, and can only keep a pointer to the latest block, then I would recommend always adding new blocks to the right of the latest block, with the index wrapping back to zero at the array size.
This lets you determine the oldest block by looking at the blocks to the right of the latest block until you find a non-empty block: that is your oldest block. Null it to remove it from the array and carry on :)
Let's illustrate:
| N | | | | | | | // newBlockIndex at 0, adding, newBlockIndex becomes 1
| X | N | | | | | | // newBlockIndex at 1, adding, newBlockIndex becomes 2
| X | X | N | | | | | // newBlockIndex at 2, adding, newBlockIndex becomes 3
| | X | N | | | | | // newBlockIndex at 3, removing: looking right (and wrapping) the first non-empty block is at index 0, so we delete it
| | X | X | N | | | | // newBlockIndex at 3, adding, newBlockIndex becomes 4
...
EDIT TO ADD
Regarding your edit, I think the mechanism is quite robust. Even if you were to remove an item (any item, even the latest one) by mistake, the next operation can succeed, because oldest and newest are defined by their position relative to the current index. The newest item is the first to the left of the index, the oldest the first to its right.
Even if you don't check your array size and fill it completely (which I don't recommend, though), the algorithm will overwrite the oldest item with the newest: that may not be desirable, but it is coherent with the notion of a queue. Of course, if the array fills up you can always allocate a larger one and copy the current contents into it.
What you are looking for is a queue data structure.
Queues can be conveniently implemented with a circular buffer where you have a head index and a tail index.
Head and Tail are both initially set to zero.
Add a new element by writing it to where Tail points, then increment Tail. Wrap as needed if incrementing makes it go off the end of the array.
Delete an old element by incrementing Head. Again, wrap as needed if incrementing makes it go off the end of the array.
Head always points at the oldest element.
Tail always points to the right of the newest element.
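A minimal sketch of that bookkeeping, assuming a fixed capacity and byte[] blocks (the names and the capacity are placeholders, not taken from the question):
// A fixed-capacity ring of byte[] blocks.
class BlockRing
{
    const int Capacity = 7;
    byte[][] buffer = new byte[Capacity][];
    int head = 0;   // index of the oldest block
    int tail = 0;   // index where the next block is written
    int count = 0;

    public void Add(byte[] block)
    {
        if (count == Capacity) throw new InvalidOperationException("Buffer is full");
        buffer[tail] = block;
        tail = (tail + 1) % Capacity;   // wrap past the end of the array
        count++;
    }

    public byte[] RemoveOldest()
    {
        if (count == 0) throw new InvalidOperationException("Buffer is empty");
        byte[] oldest = buffer[head];   // head always points at the oldest block
        buffer[head] = null;
        head = (head + 1) % Capacity;
        count--;
        return oldest;
    }
}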
Use a System.Collections.Generic.Queue<T>, where T is a byte block.
Queue<byte[]> queue = new Queue<byte[]>();
byte[] block;
queue.Enqueue(new byte[] { 10, 11, 12, 13 });
queue.Enqueue(new byte[] { 20, 21, 22, 23 });
queue.Enqueue(new byte[] { 30, 31, 32, 33 });
block = queue.Dequeue();
queue.Enqueue(new byte[] { 40, 41, 42, 43 });
block = queue.Dequeue();
block = queue.Dequeue();
queue.Enqueue(new byte[] { 50, 51, 52, 53 });
queue.Enqueue(new byte[] { 60, 61, 62, 63 });
queue.Enqueue(new byte[] { 70, 71, 72, 73 });
block = queue.Dequeue();
// ...
Dequeue always removes the oldest element!
Since you have clarified in comments that it must be an array, here is a solution that encapsulates an array-queue in a class. It treats consecutive elements as data blocks of a defined size. It also allows you to access array elements by index, as well as the array itself. This is not typical for queues, but since you need the array...
public class ArrayBlocksQueue<T>
{
    private T[] _array;
    private int _in, _out, _count, _length, _blockSize;

    public ArrayBlocksQueue(int maxBlocks, int blockSize)
    {
        _length = maxBlocks * blockSize;
        _blockSize = blockSize;
        _array = new T[_length];
    }

    public void Enqueue(params T[] block)
    {
        if (block == null) {
            throw new ArgumentNullException();
        }
        if (block.Length != _blockSize) {
            throw new ArgumentException("Data does not have required block size.");
        }
        if (_count + _blockSize > _length) {
            throw new ApplicationException("Queue is full");
        }
        block.CopyTo(_array, _in);
        _in = (_in + _blockSize) % _length;
        _count += _blockSize;
    }

    public T[] Dequeue()
    {
        if (_count == 0) {
            throw new ApplicationException("Queue is empty");
        }
        T[] temp = new T[_blockSize];
        System.Array.Copy(_array, _out, temp, 0, _blockSize);
        _out = (_out + _blockSize) % _length;
        _count -= _blockSize;
        return temp;
    }

    public int Count { get { return _count; } }
    public int BlockCount { get { return _count / _blockSize; } }
    public T[] Array { get { return _array; } }

    public T this[int index]
    {
        get
        {
            if (!IsIndexValid(index)) {
                throw new IndexOutOfRangeException();
            }
            return _array[index];
        }
        set
        {
            if (!IsIndexValid(index)) {
                throw new IndexOutOfRangeException();
            }
            _array[index] = value;
        }
    }

    public bool IsIndexValid(int index)
    {
        if (index < 0 || index >= _length) {
            return false;
        }
        if (_count == _length) {
            return true;
        }
        return _out > _in
            ? index < _in || index >= _out
            : index >= _out && index < _in;
    }
}
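For instance, a hypothetical usage mirroring the byte-block scenario above:
var queue = new ArrayBlocksQueue<byte>(maxBlocks: 7, blockSize: 4);
queue.Enqueue(new byte[] { 10, 11, 12, 13 });
queue.Enqueue(new byte[] { 20, 21, 22, 23 });
byte[] oldest = queue.Dequeue();   // returns { 10, 11, 12, 13 }, the oldest block
int blocksLeft = queue.BlockCount; // 1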

Should I use ToList() Deep Clone IList?

Assume I have the code below to deep clone a to b:
IList<int> a = new List<int>();
a.Add(5);
IList<int> b = a.ToList();
Bad or good?
It seems to work, as ToList returns a new List. But when I google it, others always use things like
listToClone.Select(item => (T)item.Clone()).ToList();
What's the difference?
It can be explained if you understand how data is stored. There are two kinds of storage: value types and reference types. Below is an example declaring a primitive type and an object:
int i = 0;
MyInt myInt = new MyInt(0);
The MyInt class is then
public class MyInt {
    private int myint;
    public MyInt(int i) {
        myint = i;
    }
    public void SetMyInt(int i) {
        myint = i;
    }
    public int GetMyInt() {
        return myint;
    }
}
How would that be stored in memory? Below is an example. Please note that all memory examples below are simplified!
 _______ __________
| i     | 0        |
|       |          |
| myInt | 0xadfdf0 |
|_______|__________|
Every object that you create in your code is accessed through a reference to it; the objects themselves are stored on the heap. For the difference between stack and heap memory, please refer to this explanation.
Now, back to your question: cloning a list. Below is an example that creates a list of integers and a list of MyInt objects:
List<int> ints = new List<int>();
List<MyInt> myInts = new List<MyInt>();
// assign 1 to 5 in both collections
for (int i = 1; i <= 5; i++) {
    ints.Add(i);
    myInts.Add(new MyInt(i));
}
Then we look at the memory:
 _______ __________
| ints  | 0x856980 |
|       |          |
| myInts| 0xa02490 |
|_______|__________|
Since a list is a collection, each list variable holds a reference address, which leads to the following:
___________ _________
| 0x856980 | 1 |
| 0x856981 | 2 |
| 0x856982 | 3 |
| 0x856983 | 4 |
| 0x856984 | 5 |
| | |
| | |
| | |
| 0xa02490 | 0x12340 |
| 0xa02491 | 0x15631 |
| 0xa02492 | 0x59531 |
| 0xa02493 | 0x59421 |
| 0xa02494 | 0x59921 |
|___________|_________|
Now you can see that the myInts list again contains references, while ints contains the values directly. When we clone the lists using ToList(),
List<int> cloned_ints = ints.ToList();
List<MyInt> cloned_myInts = myInts.ToList();
we get a result like here below.
original cloned
___________ _________ ___________ _________
| 0x856980 | 1 | | 0x652310 | 1 |
| 0x856981 | 2 | | 0x652311 | 2 |
| 0x856982 | 3 | | 0x652312 | 3 |
| 0x856983 | 4 | | 0x652313 | 4 |
| 0x856984 | 5 | | 0x652314 | 5 |
| | | | | |
| | | | | |
| | | | | |
| 0xa02490 | 0x12340 | | 0xa48920 | 0x12340 |
| 0xa02491 | 0x15631 | | 0xa48921 | 0x15631 |
| 0xa02492 | 0x59531 | | 0xa48922 | 0x59531 |
| 0xa02493 | 0x59421 | | 0xa48923 | 0x59421 |
| 0xa02494 | 0x59921 | | 0xa48924 | 0x59921 |
| | | | | |
| | | | | |
| 0x12340 | 0 | | | |
|___________|_________| |___________|_________|
0x12340 is the reference to the first MyInt object, which holds the value 0. It's shown here in simplified form to explain it well.
You can see that the list appears to be cloned. But now let's change the first element of each cloned list to 7:
cloned_ints[0] = 7;
cloned_myInts[0].SetMyInt(7);
Then we get the following result:
original cloned
___________ _________ ___________ _________
| 0x856980 | 1 | | 0x652310 | 7 |
| 0x856981 | 2 | | 0x652311 | 2 |
| 0x856982 | 3 | | 0x652312 | 3 |
| 0x856983 | 4 | | 0x652313 | 4 |
| 0x856984 | 5 | | 0x652314 | 5 |
| | | | | |
| | | | | |
| | | | | |
| 0xa02490 | 0x12340 | | 0xa48920 | 0x12340 |
| 0xa02491 | 0x15631 | | 0xa48921 | 0x15631 |
| 0xa02492 | 0x59531 | | 0xa48922 | 0x59531 |
| 0xa02493 | 0x59421 | | 0xa48923 | 0x59421 |
| 0xa02494 | 0x59921 | | 0xa48924 | 0x59921 |
| | | | | |
| | | | | |
| 0x12340 | 7 | | | |
|___________|_________| |___________|_________|
Did you see the changes? The value at 0x652310 got changed to 7. But for the MyInt objects, the reference addresses didn't change; instead, the new value was written to the object at address 0x12340.
When we display the results, we get the following:
ints[0] ---------------------> 1
cloned_ints[0] --------------> 7
myInts[0].GetMyInt() --------> 7
cloned_myInts[0].GetMyInt() -> 7
As you can see, the original ints list has kept its values, while the original myInts shows a different value: it got changed. That's because both references point to the same object; if you modify that object through one reference, the change is visible through the other.
That's why there are two kinds of cloning: deep and shallow. The example below is a deep clone:
listToClone.Select(item => (T)item.Clone()).ToList();
This selects each item in the original list and clones each object it finds. The Clone() call here comes from the ICloneable interface and creates a new object with the same values.
However, note that this is still not safe if your class itself contains reference-type fields; in that case you have to implement the cloning mechanism yourself, or you will run into the same issue described above, where the original and the cloned object merely share a reference. You can do that by implementing the ICloneable interface.
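As a rough sketch, a deep-clone implementation for MyInt could look like this (the ICloneable wiring here is an assumption for illustration, not code from the question):
public class MyInt : ICloneable
{
    private int myint;
    public MyInt(int i) { myint = i; }
    public int GetMyInt() { return myint; }
    public void SetMyInt(int i) { myint = i; }

    // Deep clone: returns a brand-new MyInt holding the same value,
    // so modifying the clone never affects the original.
    public object Clone()
    {
        return new MyInt(myint);
    }
}

// Usage: every element is cloned, so the two lists no longer share any objects.
// List<MyInt> cloned_myInts = myInts.Select(item => (MyInt)item.Clone()).ToList();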
I hope that it's now clear for you.
It depends. If you have a collection of value types, it will copy them. If you have a list of reference types, it will only copy the references to the real objects, not the objects themselves. Here's a small example:
void Main()
{
    var a = new List<int> { 1, 2 };
    var b = a.ToList();
    b[0] = 2;
    a.Dump();
    b.Dump();

    var c = new List<Person> { new Person { Age = 5 } };
    var d = c.ToList();
    d[0].Age = 10;
    c.Dump();
    d.Dump();
}

class Person
{
    public int Age { get; set; }
}
The previous code results in
a - 1, 2
b - 2, 2
c - Age = 10
d - Age = 10
As you can see, the first number in the new collection changed without affecting the other collection. But that was not the case with the age of the Person I created.
If the contents of an IList<T> either encapsulate values directly, identify immutable objects for the purpose of encapsulating the values therein, or encapsulate the identities of shared mutable objects, then calling ToList will create a new list, detached from the original, which encapsulates the same data.
If the contents of an IList<T> encapsulate values or states held in mutable objects, however, such an approach will not work. References to mutable objects can only encapsulate values if the objects in question are unshared. If references to mutable objects are shared, the whereabouts of all such references become part of the state encapsulated thereby. To avoid this, copying a list of such objects requires producing a new list containing references to other (probably newly created) objects that encapsulate the same values or states as those in the original. If the objects in question include a usable Clone method, that might serve the purpose, but if the objects in question are collections themselves, the correct and efficient behavior of their Clone method would rely upon their knowing whether they contain objects which must not be exposed to the recipient of the copied list--something which the .NET Framework has no way of telling them.
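As a hedged illustration of that last point, assuming System.Linq and System.Collections.Generic are available (the nested-list example is hypothetical, not from the question):
// A shallow ToList copy of nested lists still shares the inner lists:
List<List<int>> original = new List<List<int>> { new List<int> { 1, 2 } };
List<List<int>> copy = original.ToList();
copy[0].Add(3);
// original[0] now also contains 3, because both outer lists reference the same inner List<int>.

// A deep copy must clone each inner list as well:
List<List<int>> deepCopy = original.Select(inner => inner.ToList()).ToList();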

Best data structure for caching objects with a composite unique id

I have a slow function that makes an expensive trip to the server to retrieve RecordHdr objects. These objects are sorted by rid first and then by aid. They are then returned in batches of 5.
| rid | aid |
-------------->
| 1 | 1 | >
| 1 | 3 | >
| 1 | 5 | > BATCH of 5 returned
| 1 | 6 | >
| 2 | 2 | >
-------------->
| 2 | 3 |
| 2 | 4 |
| 3 | 1 |
| 3 | 2 |
| 3 | 5 |
| 3 | 6 |
| 4 | 1 |
| 4 | 2 |
| 4 | 5 |
| 4 | 6 |
After I retrieve the objects, I have to wrap them in another class called WrappedRecordHdr. I'm wondering what the best data structure is for maintaining a cache of WrappedRecordHdr objects, such that if I'm asked for an object by rid and aid I return that particular object, and if I'm asked for just a rid I return all objects that have that rid.
So far I have created two structures, one for each scenario (this may not be the best way, but it's what I'm using for now):
// key: (rid, aid)
private CacheMap<int, int, WrappedRecordHdr> m_ridAidCache =
new CacheMap<int, int, WrappedRecordHdr>();
// key: (rid)
private CacheMap<int, WrappedRecordHdr[]> m_ridCache =
new CacheMap<int, WrappedRecordHdr[]>();
Also, I'm wondering if there is a way to rewrite this to be more efficient. Right now I have to get a number of records that I need to wrap in another object. Then I need to group them in a dictionary by rid, so that if I am asked for a certain rid I can return all objects with that rid. The records are already sorted, so I'm hoping GroupBy doesn't attempt to sort them again.
RecordHdr[] records = server.GetRecordHdrs(sessId, BATCH_SIZE); // expensive call to server.

// After all RecordHdr objects are retrieved, we loop through the received objects.
// For each RecordHdr object a WrappedRecordHdr object has to be created.
WrappedRecordHdr[] wrappedRecords = new WrappedRecordHdr[records.Length];
for (int i = 0; i < wrappedRecords.Length; i++)
{
    if (records[i] == null || records[i].aid == 0 || records[i].rid == 0) continue; // skip invalid results.
    wrappedRecords[i] = new WrappedRecordHdr(AccessorManager, records[i], projectId);
}

// Group all records into a dictionary of rid => array of WrappedRecordHdrs,
// so all objects associated with a particular rid can be returned together.
Dictionary<int, WrappedRecordHdr[]> dict = wrappedRecords.GroupBy(obj => obj.rid).ToDictionary(gdc => gdc.Key, gdc => gdc.ToArray());
m_ridCache = dict;
As to the data structure, I think there are really two different questions here:
What structure to use;
Should there be one or two caches;
It seems to me that you want one cache, typed as a MemoryCache. The key would be the RID, and the value would be a Dictionary, where the key is an AID and the value is the header.
This has the following advantages:
The WrappedRecordHdrs are stored only once;
The MemoryCache already has all of the caching logic implemented, so you don't need to rewrite that;
When provided with only an RID, you know the AID of each WrappedRecordHdr (which you don't get with the array in the initial post);
These things are always compromises, so this has disadvantages too of course:
Cache access (get or set) requires constructing a string each time;
RID + AID lookups require indexing twice (as opposed to writing some fast hashing function that takes an RID and AID and returns a single key into the cache, however that would require that you either have two caches (one RID only, one RID + AID) or that you store the same WrappedRecordHdr twice per AID (once for RID + AID and once for null + AID));
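A rough sketch of that layout using System.Runtime.Caching.MemoryCache (WrappedRecordHdr comes from the question; the class, method names, and aid property access are assumptions for illustration):
using System.Collections.Generic;
using System.Runtime.Caching;

public class RecordHdrCache
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    // One cache entry per rid; the value maps aid -> wrapped header.
    public void Store(int rid, Dictionary<int, WrappedRecordHdr> byAid)
    {
        _cache.Set(rid.ToString(), byAid, new CacheItemPolicy());
    }

    // rid-only lookup: every header for that rid (or null if not cached).
    public Dictionary<int, WrappedRecordHdr> GetByRid(int rid)
    {
        return _cache.Get(rid.ToString()) as Dictionary<int, WrappedRecordHdr>;
    }

    // rid + aid lookup: two indexing steps, as described above.
    public WrappedRecordHdr GetByRidAndAid(int rid, int aid)
    {
        var byAid = GetByRid(rid);
        return byAid != null && byAid.TryGetValue(aid, out var hdr) ? hdr : null;
    }
}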

Packing items vertically (with fixed length and horizontal position)

Let's say I have some items that have a defined length and horizontal position (both are constant):
1 : A
2 : B
3 : CC
4 : DDD (item 4 start at position 1, length = 3)
5 : EE
6 : F
I'd like to pack them vertically, resulting in a rectangle having smallest height as possible.
Until now, I have a very simple algorithm that loops over the items and checks row by row whether placing them in that row is possible (that is, without colliding with something else). Sometimes it works perfectly (by chance), but sometimes it results in a non-optimal solution.
Here is what it would give for the above example (step by step) :
A     | A   B | ACC B | ACC B | ACC B | ACC B |
      |       |       |  DDD  |  DDD  | FDDD  |
      |       |       |       |    EE |    EE |
While optimal solution would be :
ADDDB
FCCEE
Note: I have found that sorting the items by their length (descending) before applying the algorithm gives better results (but it is still not perfect).
Is there any algorithm that would give me the optimal solution in reasonable time? (Trying all possibilities is not feasible.)
EDIT: here is an example that would not work using the sorting trick, and that would not work using what TylerOhlsen suggested (unless I don't understand his answer):
1 : AA
2 : BBB
3 : CCC
4 : DD
Would give :
AA BBB
CCC
DD
Optimal solution :
DDBBB
AACCC
Just spitballing (off the top of my head, and just pseudocode). This algorithm loops through the positions of the current row, attempts to find the best item to place at each position, and then moves on to the next row when the current row is complete. The algorithm finishes when all items have been used.
The key to the performance of this algorithm is an efficient method for finding the longest item at a specific position. This can be done by building a dictionary (or hash table) keyed by position, whose value is the list of items at that position sorted by length descending. Finding the longest item at a position is then as simple as looking up the list for that position and popping the top item off it (see the sketch after the pseudocode below).
int cursorRow = 0;
int cursorPosition = 0;
int maxRowLength = 5;
List<Item> items = // fill with item list
Item[][] result = new Item[][];

while (items.Count() > 0)
{
    Item item = FindLongestItemAtPosition(cursorPosition);
    if (item != null)
    {
        result[cursorRow][cursorPosition] = item;
        items.Remove(item);
        cursorPosition += item.Length;
    }
    else // No items remain with this position
    {
        cursorPosition++;
    }
    if (cursorPosition == maxRowLength)
    {
        cursorPosition = 0;
        cursorRow++;
    }
}
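A small sketch of the position-keyed lookup described above, assuming System.Linq is available (Item, its Position and Length properties, and the method name are assumptions used for illustration):
// Build the lookup once: position -> items starting there, longest first.
Dictionary<int, List<Item>> itemsByPosition = items
    .GroupBy(it => it.Position)
    .ToDictionary(g => g.Key, g => g.OrderByDescending(it => it.Length).ToList());

Item FindLongestItemAtPosition(int position)
{
    if (!itemsByPosition.TryGetValue(position, out var candidates) || candidates.Count == 0)
        return null;
    Item longest = candidates[0]; // list is sorted by length descending
    candidates.RemoveAt(0);       // "pop" it so it is not placed twice
    return longest;
}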
This should result in the following steps for Example 1 (at the beginning of each loop)...
Row=0 | Row=0 | Row=0 | Row=1 | Row=1 | Row=1 | Row=2 |
Pos=0 | Pos=1 | Pos=4 | Pos=0 | Pos=1 | Pos=3 | Pos=0 |
      | A     | ADDD  | ADDDB | ADDDB | ADDDB | ADDDB |
      |       |       |       | F     | FCC   | FCCEE |
This should result in the following steps for Example 2 (at the beginning of each loop)...
Row=0 | Row=0 | Row=0 | Row=1 | Row=1 | Row=1 | Row=2 |
Pos=0 | Pos=2 | Pos=4 | Pos=0 | Pos=1 | Pos=3 | Pos=0 |
      | AA    | AACCC | AACCC | AACCC | AACCC | AACCC |
      |       |       |       | DD    | DDBBB |       |
This is a classic Knapsack Problem. As @amit said, it is NP-complete. The most efficient solution makes use of Dynamic Programming.
The Wikipedia page is a very good start. I've never implemented an algorithm to solve this problem, but I've studied its relation to the minesweeper game, which is also NP-complete.
Wikipedia: Knapsack Problem
