How to deal with IEnumerables/Arrays/Collections in Fluxor State? - c#

I'm currently trying to implement Fluxor for my Blazor WASM app and all the instructions/tutorials I found recommended something like this example for the Store:
public record AppStore(
    int ClickCounter,
    bool IsLoading,
    WeatherForecast[]? Forecasts);
and then they only talk about the initial state; updates only ever happen to the bool and the int, while the array is only ever replaced outright. I.e. the examples always fetch the complete data from the server, e.g. 100 entries.
Now, here's my question: how do I properly deal with the array in my reducer when I already have 100 entries in there and only want to add/update/delete one? Is that even a good idea in the first place?

The best thing to do is to use ImmutableList<T> or ImmutableArray<T> instead, as these types (ImmutableList<T> in particular) are optimised for returning a new instance that includes the old data without having to copy all of the elements.
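For example, a reducer that adds or updates a single forecast could look roughly like this. This is a minimal sketch, not Fluxor's official sample: it assumes a variant of the AppStore record above that uses ImmutableList<WeatherForecast>, and the ForecastAdded/ForecastUpdated actions are hypothetical names.

using System.Collections.Immutable;
using Fluxor;

// Hypothetical actions - rename to fit your own feature.
public record ForecastAdded(WeatherForecast Forecast);
public record ForecastUpdated(WeatherForecast OldForecast, WeatherForecast NewForecast);

public record AppStore
{
    public int ClickCounter { get; init; }
    public bool IsLoading { get; init; }
    public ImmutableList<WeatherForecast> Forecasts { get; init; } =
        ImmutableList<WeatherForecast>.Empty;
}

public static class ForecastReducers
{
    // Add returns a new list that shares its nodes with the old one,
    // and the 'with' expression creates the new state record around it.
    [ReducerMethod]
    public static AppStore OnForecastAdded(AppStore state, ForecastAdded action) =>
        state with { Forecasts = state.Forecasts.Add(action.Forecast) };

    // Replace swaps a single element without rebuilding the other entries.
    [ReducerMethod]
    public static AppStore OnForecastUpdated(AppStore state, ForecastUpdated action) =>
        state with { Forecasts = state.Forecasts.Replace(action.OldForecast, action.NewForecast) };
}

Deleting works the same way with Remove or RemoveAll; in every case the reducer returns a new state record and a new list instance, but the unchanged forecasts are not copied.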
I've recently released a new library called Reducible that helps to create complex state reducers. It results in fewer updates (e.g. a new parent object isn't created if an item in the list is not replaced).
https://github.com/mrpmorris/Reducible/blob/master/README.md

Related

C# Compiler-Building Scoping Issue

I am building a small custom interpreted scripting language and everything is working just fine except the scoping.
For the actual execution I am using a visitor pattern:
I modified the pattern to pass through the Variable Table:
public void visit(ProgrammTree proTree) {
    VariableTable vt = new VariableTable();
    foreach (var t in proTree.getChildren()) {
        t.accept(this, vt);
    }
}
And here is where the problem starts:
public void visit(WhileTree whiletree, VariableTable vt) {
    var cond = (ConditionTree)whiletree.getChild(0);
    while (cond.accept(this, vt).toBoolean()) {
        var clonedSubTable = new VariableTable(vt);
        foreach (Tree t in whiletree.getChildren()) {
            t.accept(this, clonedSubTable);
        }
    }
}
The problem is that changes made within the loop are not reflected in the outer scope.
Do you have a smart way to implement this?
You left a couple of things a bit vague, so I'm going to make the following assumptions (please point out any that are wrong):
Your VariableTable maps variable names directly to their associated value
Whenever you assign a value, you directly set that value as the entry in the table (without going through any layer of indirection)
Your cloned variable tables do not keep a reference to the original and don't propagate any changes to the original
So, under these assumptions, the problem is that any assignments done using the cloned table won't be visible in the original table, even in cases where the assigned-to variable was already present in the original table.
To fix this, there are multiple approaches:
You can make your table map variables to memory locations rather than values. You'd then have another table (or just a plain array) that maps memory locations to values. This way the table-entry for a variable would never change: Once the variable is defined, it gets a memory address and that address isn't going to change until the variable dies. Only the value at the address may change.
A quick-and-dirty alternative to that approach that keeps you from having to manage your own memory would be to add a mutable wrapper around your values, i.e. a ValueWrapper class with a setValue method. Assuming that your cloned table is a shallow copy of the original, that means that you can call setValue on an entry of the cloned table and, if that entry was already present in the original table, the change will also be reflected in the original table.
The solution that stays closest to your current code would be to turn your table into a linked structure. Then new VariableTable(vt) would not actually copy anything, but simply create a new, empty table with a parent link pointing to the original table. Any new entries would be inserted into the new table, but accesses to old entries would simply be propagated to the parent table (there is a sketch of this after the discussion below).
Even if you choose to go with options 1 or 2 to solve your current problem, using a parent link instead of copying the table constantly would be a good idea anyway for performance reasons.
The downside of only going with solution 3 would be that you'll run into similar problems again when you implement closures. So it really only fixes your exact current problem.
The upside of solution 1 is that it gives you full control over your memory, so you have a free hand in implementing memory-related features in any way you want. The downside is the same.
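A minimal sketch of option 3, assuming the table maps names directly to values and that assignment should update an existing variable in the nearest enclosing scope before defining a new one (all names are illustrative):

using System.Collections.Generic;

public class VariableTable
{
    private readonly Dictionary<string, object> entries = new Dictionary<string, object>();
    private readonly VariableTable parent;                 // null for the outermost scope

    public VariableTable(VariableTable parent = null)
    {
        this.parent = parent;                              // no copying - just link to the outer scope
    }

    // Walk outwards through the parent chain until the name is found.
    public object Get(string name)
    {
        if (entries.TryGetValue(name, out var value)) return value;
        if (parent != null) return parent.Get(name);
        throw new KeyNotFoundException($"Undefined variable '{name}'");
    }

    // Assign to an existing variable in the nearest enclosing scope,
    // otherwise define it in the current scope.
    public void Set(string name, object value)
    {
        for (var scope = this; scope != null; scope = scope.parent)
        {
            if (scope.entries.ContainsKey(name))
            {
                scope.entries[name] = value;
                return;
            }
        }
        entries[name] = value;
    }
}

With this, new VariableTable(vt) in your while visitor stays exactly as it is, but assignments to variables that were declared outside the loop now reach the outer table.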

Manage Cache collection by multiple properties

I have a Sensor class which contains a few properties: id, a, b.
Another class called SensorCache is responsible for managing an in-memory cache of my sensor collection.
SensorCache implements the "cache-aside pattern" (see here).
SensorCache works in the traditional way - each Sensor request (requests are made by the id property) first goes to SensorCache:
if the Sensor already exists in memory, SensorCache returns it
if not, it fetches the required Sensor from my DB, saves it into the in-memory cache object (represented by a Dictionary) and returns it.
Currently my dictionary key is based on the Sensor.id field.
I got a new requirement to return a Sensor by 2 fields (a and b) and keep my cache logic.
My cache object is currently built to search by a single property (Sensor.id), so I need to think about a new structure that is able to search in memory by two different options: Sensor.id, or a Sensor.a and Sensor.b pair.
What is the best approach to handle this?
I thought about holding two different objects, one for each kind of search, but this approach will consume much more memory (x2), so I want to hear other ideas before doing it.
You can write a separate class that implements IEquatable and overrides GetHashCode (and sometimes you have to, in order to achieve the required performance), but in this simple case it sounds like you could use Tuple, that is, Dictionary<Tuple<(type of a), (type of b)>, Sensor>.
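For instance, the cache could keep one dictionary keyed by id and a second dictionary keyed by a value tuple (a, b) that acts as a lookup index. Value tuples give you structural equality and hashing out of the box, in the same spirit as the Tuple suggestion above, and because both dictionaries hold references to the same Sensor instances, the sensors themselves are not duplicated. This is only a sketch with invented member names; LoadFromDbById/LoadFromDbByAb stand in for your real data access:

using System.Collections.Generic;

public class Sensor
{
    public int Id { get; set; }
    public string A { get; set; }
    public string B { get; set; }
}

public class SensorCache
{
    private readonly Dictionary<int, Sensor> byId = new Dictionary<int, Sensor>();
    private readonly Dictionary<(string A, string B), Sensor> byAb =
        new Dictionary<(string A, string B), Sensor>();

    public Sensor GetById(int id)
    {
        if (byId.TryGetValue(id, out var sensor)) return sensor;
        sensor = LoadFromDbById(id);                  // cache-aside: fetch from the DB on a miss
        Add(sensor);
        return sensor;
    }

    public Sensor GetByAb(string a, string b)
    {
        if (byAb.TryGetValue((a, b), out var sensor)) return sensor;
        sensor = LoadFromDbByAb(a, b);
        Add(sensor);
        return sensor;
    }

    private void Add(Sensor sensor)
    {
        byId[sensor.Id] = sensor;                     // both dictionaries share the same instance
        byAb[(sensor.A, sensor.B)] = sensor;
    }

    private Sensor LoadFromDbById(int id) { /* DB access goes here */ return null; }
    private Sensor LoadFromDbByAb(string a, string b) { /* DB access goes here */ return null; }
}

The extra dictionary only costs one entry (a key plus a reference) per sensor, not a second copy of each Sensor object.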

Session instead of viewstate

I have found many questions here about storing values in viewstate, but haven't found a good answer.
I have a situation where I retrieve a large amount of data from the database. Then I filter and manipulate the data according to my needs (so it is a pretty heavy process). Then I put the result into a list of a custom class. For example, let's say this class is Person:
List<Person> persons = new List<Person>();

private void FillPersons()
{
    // Call to web service
    persons = ws.GetPersonsList();
    // Do all kinds of custom filtering
    // Manipulate the data
}
Now, the whole FillPersons() method is a heavy process that returns a pretty small amount of data. Unfortunately it can't be moved to SQL; the heaviness is in the processing, but that is not the point.
The point is that I need to reuse this data on the page between postbacks.
Right now, in order to spare the additional call to FillPersons(), I mark the Person class as serializable and store the list in the viewstate. That works fine except that the page becomes 1 MB in size because of the viewstate. According to what I have read, this is not a very acceptable approach, i.e. it is not secure and it bloats the page source, making the page heavy, etc. (the second point is what concerns me most).
That leaves me with session. However, session state is persisted not only between postbacks but long after, even when the user leaves the page. Or worse, the session may end before the user decides to post back. So finding the best time span for the session lifetime is mission impossible.
My question is what is the best practice to reuse "datasets" between postbacks?
What do you guys do in such cases?
Thanks.
PS: hidden fields etc. are not an option.
You can store this kind of data in the Cache. It is application-wide, so depending on what you add, choose the key accordingly.
var key = UserID + "_personList";
Cache.Add(key, personList, null,
          DateTime.Now.AddSeconds(60),
          Cache.NoSlidingExpiration,
          CacheItemPriority.High,
          null);
Note that you can never assume that the data is in the cache (it might have been flushed), so always check whether it returns null and then refill it.
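In other words, the read side could look roughly like this (a sketch; it assumes a FillPersons variant that returns the list instead of setting a field):

var key = UserID + "_personList";
var persons = Cache[key] as List<Person>;
if (persons == null)
{
    // Cache miss (first hit, or the entry was evicted) - rebuild and re-insert.
    persons = FillPersons();
    Cache.Add(key, persons, null,
              DateTime.Now.AddSeconds(60),
              Cache.NoSlidingExpiration,
              CacheItemPriority.High,
              null);
}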
Viewstate is not a good way of storing large objects. As you mentioned, your page size will get bigger and every postback will take a lot of time.
I would suggest using a cache. By using a cache, your list won't be kept until the end of the session; you can set how much time it should be stored. For caching you may use HttpCache or some distributed caching system like AppFabric or MemCached. This NuGet package will help in using these cache systems.
This link explains how to configure AppFabric.
I should add some code to make it more helpful:
https://bitbucket.org/glav/cacheadapter/wiki/Home
var cacheProvider = AppServices.Cache; // will pick the cache adapter using web.config (can be Http, Memory, AppFabric or MemCached)
var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddSeconds(3), () =>
{
    // This is the anonymous function which gets called if the data is not in the cache.
    // This method is executed and whatever is returned is added to the cache with the
    // passed-in expiry time.
    Console.WriteLine("... => Adding data to the cache... 1st call");
    var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
    return someData;
});
Other than a cache (good idea by Magnus), the only other way I can think of is to keep the results of your heavy operation stored in the database server.
You mention that it takes a lot of time to retrieve the data. Once done, store it in a purposely established table with some type of access key. Give that key to the browser and use it for pulling what pieces you need back out.
Of course, without knowing the full architecture it's really hard to give a solution. So, in order of preference:
Store it back in the database with a unique key for this user.
Store it in a remote cache
Store it in a local cache
Under no circumstance would I store it in the page (viewstate), cookie (sounds too big anyway), or in session.
Have you considered using ASP.NET caching?
You should choose a key that will suit your exact needs and you will have your data stored in the server's memory. But keep in mind the cache is application-specific and is valid for all users.
If the data you process is not changed often, the processing algorithm doesn't depend on user-specific settings, and it is not critical to always have the latest data, then this may be the best option I can think of.
Store your filtered collection on disk in a file. Give the file the same name as a key you can store in viewstate. Use that key to retrieve the file on postbacks. In order to keep the file system from filling up, have two folders. Alternate the days for which folder you save the files to. That way you can wipe out the contents of the folder that is not being used that day. This method has extremely good performance, and can scale with a web farm if your folder locations are identified by a network path.
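A rough sketch of that idea, assuming a Web Forms code-behind, a [Serializable] Person class, BinaryFormatter for the serialization, and illustrative ~/App_Data/CacheA and ~/App_Data/CacheB folders:

// Needs: using System.IO; using System.Runtime.Serialization.Formatters.Binary;

// Two folders, alternated by day, so yesterday's folder can simply be wiped.
private string CacheFolder =>
    Server.MapPath(DateTime.Today.Day % 2 == 0 ? "~/App_Data/CacheA" : "~/App_Data/CacheB");

private void SavePersons(List<Person> persons)
{
    var key = Guid.NewGuid().ToString("N");
    ViewState["PersonsKey"] = key;                    // only the small key travels with the page
    Directory.CreateDirectory(CacheFolder);
    using (var stream = File.Create(Path.Combine(CacheFolder, key + ".bin")))
        new BinaryFormatter().Serialize(stream, persons);
}

private List<Person> LoadPersons()
{
    var key = ViewState["PersonsKey"] as string;
    if (key == null) return null;                     // nothing saved yet - call FillPersons() instead
    var path = Path.Combine(CacheFolder, key + ".bin");
    if (!File.Exists(path)) return null;              // folder was wiped - rebuild
    using (var stream = File.OpenRead(path))
        return (List<Person>)new BinaryFormatter().Deserialize(stream);
}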
I think personList is a shared object. Does everyone use the same list? If so, you can store it in Application state.
Application["PersonList"] =persons;
persons = (List<"Person">)Application["PersonList"]
Or you can store it in a static class.
public static class PersonList { public static List<Person> Get { get; set; } }
You should put this code in Application_Start in the Global.asax file:
PersonList.Get = ws.GetPersonsList();
And you can get the list by using this code:
persons = PersonList.Get;

Db4O activation depth, FAQ, best practice for web application

Our database includes 4,000,000 records (SQL Server) and its physical size is 550 MB.
Entities in the database are related to each other in a graph style. When I load an entity from the DB with a depth of 5 levels there is a problem (all related records are loaded).
Is there any mechanism like Entity Framework's Include("MyProperty.ItsProperty")?
What are the best types to use with db4o databases?
Are there any issues with Guid or generic collections?
Is there any best practice for a web application with db4o? Session containers + embedded db4o DB, or client/server db4o?
Thanks for the help.
Thanks for the good explanation, but I want to give my exact problem as a sample:
I have three entities (N-N relationship; B is an intersection entity; concept: graph):
class A
{
    public B[] BList;
    public int Number;
    public R R;
}

class B
{
    public A A;
    public C C;
    public D D;
    public int Number;
}

class C
{
    public B[] BList;
    public E E;
    public F F;
    public int Number;
}
I want to query dbContext.A.Include("BList.C.BList.A").Include("BList.C.E.G").Where(....)
I want to get: A.BList.C.BList.A.R
But I don't want to get: A.R
I want to get: A.BList.C.E.G
But I don't want to get: A.BList.C.F
I want to get: A.BList.C.E.G
But I don't want to get: A.BList.D
Note: these requirements can change from one query to another.
An extra question: is there any possibility to load
A.BList[#Number<120].C.BList.A[#Number>100] ? Super syntax :)
Activation: As you said, db4o uses its activation mechanism to control which objects are loaded. To prevent too many objects from being loaded, there are different strategies:
Lower the global default activation depth: configuration.Common.ActivationDepth = 2. Then use the strategies below to activate objects as needed.
Use class-specific activation configuration, like cascading activation, minimum and maximum activation depth, etc.
Activate objects explicitly on demand: container.Activate(theObject, 5) (see the sketch below).
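For the first and third strategies, the setup could look roughly like this. This is only a sketch: it assumes the db4o 8.x embedded API (Db4oEmbedded.NewConfiguration / Db4oEmbedded.OpenFile), and LoadRoot is a placeholder for however you actually query your root object:

using Db4objects.Db4o;
using Db4objects.Db4o.Config;

IEmbeddedConfiguration configuration = Db4oEmbedded.NewConfiguration();
configuration.Common.ActivationDepth = 2;              // strategy 1: keep the default shallow

using (IObjectContainer container = Db4oEmbedded.OpenFile(configuration, "data.db4o"))
{
    A theObject = LoadRoot(container);                 // placeholder query; comes back 2 levels deep
    container.Activate(theObject, 5);                  // strategy 3: pull in 5 more levels on demand
}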
However, all of this is rather painful on complex object graphs. The only strategy that gets you away from that pain is transparent activation. Create an attribute like TransparentlyActivated, use it to mark your stored classes, and then use the Db4oTool to enhance your classes. Add the Db4oTool command to the post-build events in Visual Studio, like: PathTo\Db4oTool.exe -ta -debug -by-attribute:YourNamespace.TransparentlyActivated $(TargetPath)
Guid, generic collections:
No issues (in version 7.12 or 8.0). However, if you store your own structs: those are handled very poorly by db4o.
Web application: I recommend an embedded container, and then a session container for each request.
Update for extended question part
To your case: for such a complex activation schema I would use transparent activation.
I assume you are using properties and not public fields in your real scenario; otherwise transparent persistence doesn't work.
Transparent activation basically loads an object the moment a method or property is called on it for the first time. So when you access the property A.R, A itself is loaded, but not the objects it references. I'll just go through a few of your access patterns to show what I mean:
Getting 'A.BList.C.BList.A.R':
A is loaded when you access A.BList. The BList array is filled with unactivated objects.
You keep navigating further to BList.C. At this moment that B object is loaded.
Then you access C.BList. db4o loads the C object.
And so on and so forth.
So when you get 'A.BList.C.BList.A.R', 'A.R' isn't loaded.
An unloaded object is represented by an 'empty' shell object, which has all values set to null or the default value. Arrays are always fully loaded, but are initially filled with unactivated objects.
Note that there's no real query syntax for elaborate load requests. You load your start object and then pull things in as you need them.
I also need to mention that this kind of access will perform terribly over the network with db4o.
Yet another hint: if you want to do elaborate work on a graph structure, you should also take a look at graph databases, like Neo4j or Sones GraphDB.

Copying from EntityCollection to EntityCollection impossible?

How would you do this (pseudo code): product1.Orders.AddRange(product2.Orders);
However, the function "AddRange" does not exist, so how would you copy all items in the EntityCollection "Orders" from product2 to product1?
Should be simple, but it is not...
The problem is deeper than you think.
Your foreach attempt fails, because when you call product1.Orders.Add, the entity gets removed from product2.Orders, thus rendering the existing enumerator invalid, which causes the exception you see.
So why does the entity get removed from product2? Well, it seems quite simple: because an Order can only belong to one product at a time. The Entity Framework takes care of data integrity by enforcing rules like this.
If I understand correctly, your aim here is to actually copy the orders from one product to another, am I correct?
If so, then you have to explicitly create a copy of each order inside your foreach loop, and then add that copy to product1.
For some reason that is rather obscure to me, there is no automated way to create a copy of an entity. Therefore, you pretty much have to copy all of the Order's properties manually, one by one. You can make the code look somewhat neater by incorporating this logic into the Order class itself: create a method named Clone() that copies all the properties. Be sure, though, not to copy the "owner product reference" property, because your whole point is to give it another owner product, isn't it?
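For example, something along these lines (a sketch; Quantity and UnitPrice are invented property names, so copy whatever scalar properties your Order actually has, and ToList() needs System.Linq):

public partial class Order
{
    // Copies the scalar properties only; the entity key and the Product
    // navigation property are deliberately left alone so that the copy
    // can be attached to a different product.
    public Order Clone()
    {
        return new Order
        {
            Quantity = this.Quantity,
            UnitPrice = this.UnitPrice
        };
    }
}

// Usage: snapshot the source collection first so it isn't modified while being enumerated.
foreach (var order in product2.Orders.ToList())
{
    product1.Orders.Add(order.Clone());
}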
Anyway, do not hesitate to ask more questions if something is unclear. And good luck.
Fyodor
Based on the previous two answers, I came up with the following working solution:
public static void AddRange<T>(this EntityCollection<T> destinationEntityCollection,
                               EntityCollection<T> sourceEntityCollection) where T : class
{
    // Copy to an array first so the source collection isn't modified while we enumerate it.
    var array = new T[sourceEntityCollection.Count];
    sourceEntityCollection.CopyTo(array, 0);

    foreach (var entity in array)
    {
        destinationEntityCollection.Add(entity);
    }
}
Yes, the usual collection-related functions are not there.
But:
1. Did you check the CopyTo method?
2. Do you find any problem with using the iterator? You know, GetEnumerator: go through the collection and copy the entities.
The above two can solve your problems. But I'm sure in .NET 3.0+ there are more compact solutions.
My answers relate to .NET 2.0.
