I am calling an API to get a list of contacts (there might be hundreds or thousands), and the API only lists 100 at a time. It offers pagination via an object at the end of the list called 'nextpage', which contains the URL of the next 100, and so on.
So in my C# code I get the first 100, loop through them (to do something), look for the 'nextpage' object, get the URL, and re-call the API. This nextpage chain goes on for however many contacts we have.
Can you please let me know if there is a way for me to loop through the same code, using the new URL from the 'nextpage' object each time, and run the logic for every 100 I get?
Pseudo-code, as we have no concrete examples to work with, but...
Most APIs with pagination will expose a total count of items. You can set a maximum number of items per request and track your offset against that total, or check for a null next-page object, depending on how the API handles it.
List<ApiObject> GetObjects() {
    const int ITERATION_COUNT = 100;
    int objectsCount = GetAPICount();
    var apiObjects = new List<ApiObject>();
    for (int i = 0; i < objectsCount; i += ITERATION_COUNT) {
        // get the next batch: pass the current offset, request the max per call
        var batch = callToAPI(i, ITERATION_COUNT);
        apiObjects.AddRange(batch);
    } // this loop stops once you've reached objectsCount, so you should have everything
    return apiObjects;
}
// alternatively:
List<ApiObject> GetObjects() {
    var apiObjects = new List<ApiObject>();
    // get the first batch
    var callResponse = callToAPI(null);
    apiObjects.AddRange(callResponse.objects);
    var nextObject = callResponse.nextObject;
    // and continue to loop until there's none left
    while (nextObject != null) {
        callResponse = callToAPI(nextObject); // this time pass the next-page reference
        apiObjects.AddRange(callResponse.objects);
        nextObject = callResponse.nextObject;
    }
    return apiObjects;
}
That's the basic idea anyway, per the two usual web service approaches (with lots of detail left out, as this is not working code but only meant to demonstrate the general approach).
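For the 'nextpage' style specifically, here is a slightly more concrete sketch in which the URL itself is the loop variable. ContactsPage, Contact, and GetPage are hypothetical stand-ins for your response type and your existing call-plus-deserialize code; adjust them to your actual API.

class ContactsPage
{
    public List<Contact> Contacts { get; set; }
    public string NextPageUrl { get; set; } // null when there is no 'nextpage' object
}

void ProcessAllContacts(string firstPageUrl)
{
    var url = firstPageUrl;
    while (url != null)
    {
        ContactsPage page = GetPage(url);   // your existing API call + deserialization
        foreach (var contact in page.Contacts)
        {
            // run your per-contact logic here
        }
        url = page.NextPageUrl;             // follow the 'nextpage' chain
    }
}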
I'm having an ongoing problem with a project I'm working on; I'm fairly new to C# and ASP.NET.
I'm currently trying to take an entry from a text field and compare it to the last entry in my database. Per my business rule, the Reading must not be higher than the previous year's reading. I will have multiple Readings from different machines.
meterReading is an instance of my class MeterReading.
This is currently what I have:
var checkMeterReading = (from p in db.MeterReading
                         where p.Reading < meterReading.Reading
                         select p);
if (checkMeterReading.Count() > 0)
{
    if (!String.IsNullOrEmpty())
    {
        // saves into DB
    }
}
else
{
    TempData["Error"] = "Meter Reading must be higher than last actual";
}
I don't know if I'm doing something stupid here. Thanks in advance.
You're currently checking whether any reading in the database is less than the current reading; that's clearly not right, as you could have stored readings of 200, 5000, 12005 and be testing against 9000. There are 2 readings less than 9000, so your code would allow you to insert the 9000 at the end. What you want to check is that all the readings are less, or equivalently: that no reading is higher:
var higherExists = db.MeterReading.Any(p => p.Reading > newReading);
if(higherExists) {
// danger!
} else {
// do the insert... as long as you're dealing with race conditions :)
}
Note that a better approach IMO would be to compare using time, since errors and meter replacements mean that the readings are not necessarily monotonic. Then you'd do something like:
var lastRow = db.MeterReading.OrderByDescending(p => p.ReadingDate).FirstOrDefault();
if(lastRow == null || lastRow.Reading < newReading) {
// fine
} else {
// danger
}
Note that your current code only supports one customer and meter. You probably also need to filter the table by those.
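For example, assuming the entity has something like a MeterId property (a hypothetical name; use whatever your schema actually calls it), the check becomes:

var higherExists = db.MeterReading
    .Where(p => p.MeterId == meterReading.MeterId)   // this meter's history only
    .Any(p => p.Reading > meterReading.Reading);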
I have to implement a scheduling algorithm similar to the Outlook meeting organizer: several persons participate in a meeting, and the organizer finds the time slot when all persons from the invite list are available. So let's say I have a 3rd-party service that implements the following interface:
interface IAvailabilityProvider
{
IEnumerable<DateTimeInterval> GetPersonAvailableTimeSlots(
string personName, DateTime startFrom);
}
Where DateTimeInterval is:
class DateTimeInterval {
    public DateTime Start { get; set; }
    public TimeSpan Length { get; set; }
}
GetPersonAvailableTimeSlots returns an infinite iterator: it enumerates all time slots of the person's working hours, excluding weekends, holidays and the like, infinitely into the future.
My task is to implement a function that takes a set of those iterators and returns another iterator of the intersections:
IEnumerable<DateTimeInterval> GetIntersections(
string[] persons, DateTime startFrom);
It gets iterators of available time slots for all persons and returns the intersected time slots in which all those persons are available. Internally I have to implement the following function:
IEnumerable<DateTimeInterval> GetIntersections(
IEnumerable<DateTimeInterval>[] personsAvailableSlots);
The solution seems pretty straightforward to me.
static IEnumerable<DateTimeInterval> GetIntersections(IEnumerable<DateTimeInterval>[] personsAvailableSlots)
{
var enumerators = personsAvailableSlots.Select(timeline => timeline.GetEnumerator()).ToArray();
// Intersection is empty when at least one of the iterators is empty.
for (int i = 0; i < personsAvailableSlots.Length; i++) if (!enumerators[i].MoveNext()) yield break;
while (true)
{
// first we ensure that intersection exists at the current state
// if not so, we have to move some iterators forward
var start = enumerators.Select(tl => tl.Current).Max(interval => interval.Start);
foreach (var iter in enumerators)
while (iter.Current.Start + iter.Current.Length <= start)
if (!iter.MoveNext()) yield break;
// now we check if the interval exists
var int_start = enumerators.Select(tl => tl.Current).Max(interval => interval.Start);
var int_end = enumerators.Select(tl => tl.Current).Min(interval => interval.Start + interval.Length);
if (int_end > int_start)
{
//if so, we return it
yield return new DateTimeInterval()
{
Start = int_start,
Length = int_end - int_start
};
// and, finally, we ensure next interval to start after the current one ends
//
// CAUTION: We may only advance iterators whose current interval has already passed;
// otherwise we would miss long spans that cover several intervals in other iterators.
//
// In fact we only need to advance one iterator - the one which currently limits this result.
foreach (var iter in enumerators)
while (iter.Current.Start + iter.Current.Length == int_end)
if (!iter.MoveNext()) yield break;
}
}
}
I have tested this for several simple scenarios; I hope I'm not missing something important.
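For example, here is one of the simple scenarios (finite arrays stand in for the infinite iterators; the dates are arbitrary):

var alice = new[]
{
    new DateTimeInterval { Start = new DateTime(2013, 1, 7, 9, 0, 0), Length = TimeSpan.FromHours(4) },  // 09:00-13:00
    new DateTimeInterval { Start = new DateTime(2013, 1, 7, 14, 0, 0), Length = TimeSpan.FromHours(3) }  // 14:00-17:00
};
var bob = new[]
{
    new DateTimeInterval { Start = new DateTime(2013, 1, 7, 11, 0, 0), Length = TimeSpan.FromHours(5) }  // 11:00-16:00
};

// Prints 11:00 for 02:00:00 and 14:00 for 02:00:00, i.e. 11:00-13:00 and 14:00-16:00
foreach (var slot in GetIntersections(new IEnumerable<DateTimeInterval>[] { alice, bob }))
    Console.WriteLine("{0:t} for {1}", slot.Start, slot.Length);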
I've tested List<string> vs IEnumerable<string> iterations with for and foreach loops. Is it possible that the List is much faster?
These are two of the few links I could find that publicly state that iterating an IEnumerable performs better than iterating a List:
Link1
Link2
My test was loading 10K lines from a text file that holds a list of URLs.
I first loaded the file into a List, then assigned the List to an IEnumerable:
List<string> StrByLst = ...method to load records from the file.
IEnumerable StrsByIE = StrByLst;
So each has 10K items of type string.
Looping over each collection 100 times (1M iterations in total) resulted in:
List<string> is faster than the IEnumerable<string> by an amazing 50x.
Is that predictable?
Update
This is the code that runs the tests:
string WorkDirtPath = HostingEnvironment.ApplicationPhysicalPath;
string fileName = "tst.txt";
string fileToLoad = Path.Combine(WorkDirtPath, fileName);
List<string> ListfromStream = new List<string>();
ListfromStream = PopulateListStrwithAnyFile(fileToLoad) ;
IEnumerable<string> IEnumFromStream = ListfromStream ;
string trslt = "";
Stopwatch SwFr = new Stopwatch();
Stopwatch SwFe = new Stopwatch();
string resultFrLst = "",resultFrIEnumrable, resultFe = "", Container = "";
SwFr.Start();
for (int itr = 0; itr < 100; itr++)
{
for (int i = 0; i < ListfromStream.Count(); i++)
{
Container = ListfromStream.ElementAt(i);
}
//the stop() was here , i was doing changes , so my mistake.
}
SwFr.Stop();
resultFrLst = SwFr.Elapsed.ToString();
//forgot to do this reset though still it is faster (x56??)
SwFr.Reset();
SwFr.Start();
for(int itr = 0; itr<100; itr++)
{
for (int i = 0; i < IEnumFromStream.Count(); i++)
{
Container = IEnumFromStream.ElementAt(i);
}
}
SwFr.Stop();
resultFrIEnumrable = SwFr.Elapsed.ToString();
Update ... final
I took the counting out of the for loops:
int counter = ..count for both IEnumerable & List
and then passed counter (an int) as the total item count, as suggested by @ScottChamberlain.
I re-checked that everything is in place; now the results are about 5% faster in favor of IEnumerable.
So that concludes it: choose by scenario / use case... there is no real performance difference at all.
You are doing something wrong.
The times that you get should be very close to each other, because you are running essentially the same code.
IEnumerable is just an interface, which List implements, so when you call some method on the IEnumerable reference it ends up calling the corresponding method of List.
There is no code implemented in the IEnumerable - this is what interfaces are - they only specify what functionality a class should have, but say nothing about how it's implemented.
You have a few problems with your test. One is the IEnumFromStream.Count() inside the for loop: every time it wants that value it must enumerate over the entire list, and the value is not cached between loops. Move that call outside of the for loop, save the result in an int, and use that value in the for loop; you will see a shorter time for your IEnumerable.
Also, IEnumFromStream.ElementAt(i) behaves similarly to Count(): it must iterate over the list up to i every time (e.g. the first time it visits 0, the second time 0,1, the third 0,1,2, and so on), whereas List can jump directly to the index it needs. You should be working with the IEnumerator returned from GetEnumerator() instead.
IEnumerables and for loops don't mix well. Use the correct tool for the job: either call GetEnumerator() and work with that, or use a foreach loop.
Now, I know a lot of you may be saying "But it is an interface, it will just map the calls and it should make no difference", but there is a key thing: IEnumerable<T> does not have a Count() or ElementAt() method! Those methods are extension methods added by LINQ, and the LINQ code does not know the underlying collection is a List, so it does what it knows the underlying object can do, and that is iterate over the sequence every time the method is called.
IEnumerable using IEnumerator
using(var enu = IEnumFromStream.GetEnumerator())
{
//You have to call "MoveNext()" once before getting "Current" the first time,
// this is done so you can have a nice clean while loop like this.
while(enu.MoveNext())
{
Container = enu.Current;
}
}
The above code is basically the same thing as:
foreach(var enu in IEnumFromStream)
{
Container = enu;
}
The important thing to remember is that an IEnumerable does not have a length; in fact, it can be infinitely long. There is a whole field of computer science around detecting infinitely long sequences.
Based on the code you posted, I think the problem is your use of the Stopwatch class.
You declare two of these, SwFr and SwFe, but only use the former. Because of this, the final SwFr.Elapsed gives the total time across both sets of for loops.
If you want to reuse the object that way, call SwFr.Reset() right after resultFrLst = SwFr.Elapsed.ToString();.
Alternatively, you could use SwFe when running the second test.
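Putting those fixes together with the Count()/ElementAt() advice above, the timing section could be rewritten roughly like this (a sketch reusing the variable names from the question):

int count = ListfromStream.Count;        // cache once; List<T>.Count is O(1)

SwFr.Start();
for (int itr = 0; itr < 100; itr++)
    for (int i = 0; i < count; i++)
        Container = ListfromStream[i];   // direct indexer instead of LINQ's ElementAt
SwFr.Stop();
resultFrLst = SwFr.Elapsed.ToString();

SwFe.Start();                            // the second stopwatch, so the timings stay separate
for (int itr = 0; itr < 100; itr++)
    foreach (var s in IEnumFromStream)   // the natural way to walk an IEnumerable
        Container = s;
SwFe.Stop();
resultFe = SwFe.Elapsed.ToString();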
Facebook fixed /likes in the Graph API. /likes now returns the complete list of users that liked a particular object in the graph (photos, albums, etc.). Before, it returned only 3-5 users.
My question is: how do you count the total number of "likes" without parsing the entire JSON and counting the elements? I'm only interested in the "likes" count, not in the users who gave the likes.
It seems a little expensive to fetch the entire JSON dataset just to count it.
E.g.: https://graph.facebook.com/161820597180936/likes
This photo has 1,000+ likes.
Seeing as the string is JSON, why not convert it into a standard .NET object and use .Count on the list that it creates? Then cache that information for 15 or more minutes (depending on how stale you want your info).
The string-searching method (shown in another answer below) is quite heavy-handed: you are essentially going to search a string an unknown number of times, return an index, compare it to an int, add up another index, and so on. And that C# doesn't work as written (assuming it is C# being demoed).
Use something like this instead:
public static T FromJson<T>(this string s)
{
var ser = new System.Web.Script.Serialization.JavaScriptSerializer();
return ser.Deserialize<T>(s);
}
where this method is an extension method that takes a properly formatted JSON string and converts it to an object of type T, e.g.:
var result = // call facebook here and get your response string
List<FacebookLikes> likes = result.FromJson<List<FacebookLikes>>();
Response.Write(likes.Count.ToString());
// now cache the likes somewhere, and get them from cache next time
I'm not sure about the performance of this and haven't done any testing, but it looks a lot tidier and more readable to me. And seeing as you are caching the data, I'd go with readable over the previous method.
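For reference, FacebookLikes here is just a small DTO whose properties match the JSON fields, along these lines (hypothetical names; note the Graph API wraps results in a "data" array, so you may need a wrapper object or to deserialize just that array):

// Hypothetical DTO -- match the property names to the actual JSON fields.
public class FacebookLikes
{
    public string id { get; set; }    // the liker's user id
    public string name { get; set; }  // the liker's display name
}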
Why is it expensive to parse the entire dataset? This should take milliseconds:
public static int CountLikes(string dataSet)
{
int i = 0;
int j = 0;
while ((i = dataSet.IndexOf("\"id\":", i)) != -1)
{
i += 5;
j++;
}
return j;
}
You can also append the parameter limit=# such as:
https://graph.facebook.com/161820597180936/likes?limit=1000
I'm trying to figure out the best way to represent some data. It basically follows the form Manufacturer.Product.Attribute = Value. Something like:
Acme.*.MinimumPrice = 100
Acme.ProductA.MinimumPrice = 50
Acme.ProductB.MinimumPrice = 60
Acme.ProductC.DefaultColor = Blue
So the minimum price across all Acme products is 100, except in the case of products A and B. I want to store this data in C# and have some function where GetValue("Acme.ProductC.MinimumPrice") returns 100 but GetValue("Acme.ProductA.MinimumPrice") returns 50.
I'm not sure how to best represent the data. Is there a clean way to code this in C#?
Edit: I may not have been clear. This is configuration data that needs to be stored in a text file then parsed and stored in memory in some way so that it can be retrieved like the examples I gave.
Write the text file exactly like this:
Acme.*.MinimumPrice = 100
Acme.ProductA.MinimumPrice = 50
Acme.ProductB.MinimumPrice = 60
Acme.ProductC.DefaultColor = Blue
Parse it into a path/value pair sequence:
foreach (var pair in File.ReadAllLines(configFileName)
                         .Select(l => l.Split('='))
                         .Select(a => new { Path = a[0].Trim(), Value = a[1].Trim() }))
{
// do something with each pair.Path and pair.Value
}
Now, there are two possible interpretations of what you want to do. The string Acme.*.MinimumPrice could mean that for any lookup where there is no specific override, such as Acme.Toadstool.MinimumPrice, we return 100 - even though there is nothing referring to Toadstool anywhere in the file. Or it could mean that it should only return 100 if there are other specific mentions of Toadstool in the file.
If it's the former, you could store the whole lot in a flat dictionary, and at look up time keep trying different variants of the key until you find something that matches.
If it's the latter, you need to build a data structure of all the names that actually occur in the path structure, to avoid returning values for ones that don't actually exist. This seems more reliable to me.
So going with the latter option, Acme.*.MinimumPrice is really saying "add this MinimumPrice value to any product that doesn't have its own specifically defined value". This means that you can basically process the pairs at parse time to eliminate all the asterisks, expanding it out into the equivalent of a completed version of the config file:
Acme.ProductA.MinimumPrice = 50
Acme.ProductB.MinimumPrice = 60
Acme.ProductC.DefaultColor = Blue
Acme.ProductC.MinimumPrice = 100
The nice thing about this is that you only need a flat dictionary as the final representation and you can just use TryGetValue or [] to look things up. The result may be a lot bigger, but it all depends how big your config file is.
You could store the information more minimally, but I'd go with something simple that works to start with, and give it a very simple API so that you can re-implement it later if it really turns out to be necessary. You may find (depending on the application) that making the look-up process more complicated is worse over all.
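A sketch of that expansion pass, assuming (as in the example file) that the wildcard only ever appears in the product position:

var values = new Dictionary<string, string>();
var wildcards = new List<KeyValuePair<string, string>>();

foreach (var line in File.ReadAllLines(configFileName))
{
    var parts = line.Split('=');
    string path = parts[0].Trim(), value = parts[1].Trim();
    if (path.Split('.')[1] == "*")
        wildcards.Add(new KeyValuePair<string, string>(path, value));
    else
        values[path] = value;
}

// Expand each wildcard onto every product that actually occurs in the file
// but doesn't define its own value for that attribute.
var products = values.Keys.Select(k => k.Split('.'))
                          .Select(p => p[0] + "." + p[1])
                          .Distinct().ToList();
foreach (var wc in wildcards)
{
    var wcParts = wc.Key.Split('.');                         // manufacturer.*.attribute
    foreach (var product in products)
    {
        if (!product.StartsWith(wcParts[0] + ".")) continue; // same manufacturer only
        string key = product + "." + wcParts[2];
        if (!values.ContainsKey(key))
            values[key] = wc.Value;                          // fill in the default
    }
}
// values["Acme.ProductC.MinimumPrice"] is now "100"; lookups are a plain TryGetValue.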
I'm not entirely sure what you're asking, but it sounds like you're saying either:
I need a function that will return a fixed value, 100, for every product ID except for two cases: ProductA and ProductB
In that case you don't even need a data structure. A simple comparison function will do
int GetValue(string key) {
if ( key == "Acme.ProductA.MinimumPrice" ) { return 50; }
else if (key == "Acme.ProductB.MinimumPrice") { return 60; }
else { return 100; }
}
Or you could have been asking
I need a function that will return a value if already defined or 100 if it's not
In that case I would use a Dictionary<string,int>. For example
class DataBucket {
private Dictionary<string,int> _priceMap = new Dictionary<string,int>();
public DataBucket() {
_priceMap["Acme.ProductA.MinimumPrice"] = 50;
_priceMap["Acme.ProductB.MinimumPrice"] = 60;
}
public int GetValue(string key) {
int price = 0;
if ( !_priceMap.TryGetValue(key, out price)) {
price = 100;
}
return price;
}
}
One way: you can create a nested dictionary, Dictionary<string, Dictionary<string, Dictionary<string, object>>>. In your code you would split "Acme.ProductA.MinimumPrice" by dots and get or set the value in the dictionary corresponding to the split segments.
Another way is using Linq2Xml: you can create an XDocument with Acme as the root node, products as children of the root, and the attributes stored either as attributes on the products or as child nodes. I prefer the second solution, but it would be slower if you have thousands of products.
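A minimal sketch of the first variant's lookup (no error handling for missing keys):

var data = new Dictionary<string, Dictionary<string, Dictionary<string, object>>>();

object GetValue(string key)
{
    var parts = key.Split('.');                // manufacturer, product, attribute
    return data[parts[0]][parts[1]][parts[2]];
}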
I would take an OOP approach to this. The way you explain it, all your products are represented by objects, which is good. This seems like a good use of polymorphism.
I would have all products derive from a ProductBase that has a virtual property providing the default:
public virtual int MinimumPrice { get { return 100; } }
And then your specific products, such as ProductA, override that:
public override int MinimumPrice { get { return 50; } }
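Put together, a minimal sketch of that hierarchy (using int for the price, which is an assumption; pick whatever numeric type fits):

class ProductBase
{
    public virtual int MinimumPrice { get { return 100; } }    // the Acme.* default
    public virtual string DefaultColor { get { return null; } }
}

class ProductA : ProductBase
{
    public override int MinimumPrice { get { return 50; } }
}

class ProductC : ProductBase
{
    public override string DefaultColor { get { return "Blue"; } }
}

// new ProductA().MinimumPrice -> 50; new ProductC().MinimumPrice -> 100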