I have to implement a scheduling algorithm similar to the Outlook meeting organizer: several persons participate in a meeting, and the organizer finds a time slot when every person on the invite list is available. So let's say I have a 3rd-party service that implements the following interface:
interface IAvailabilityProvider
{
    IEnumerable<DateTimeInterval> GetPersonAvailableTimeSlots(
        string personName, DateTime startFrom);
}
Where DateTimeInterval is:
class DateTimeInterval
{
    public DateTime Start { get; set; }
    public TimeSpan Length { get; set; }
}
GetPersonAvailableTimeSlots returns an infinite iterator: it enumerates all the time slots of a person's working hours, excluding weekends, holidays and the like, infinitely into the future.
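For testing, such an infinite iterator is easy to fake with yield return. A minimal sketch (my own stand-in, not the real service): 9:00-17:00 slots on weekdays, holidays ignored:

static IEnumerable<DateTimeInterval> FakeTimeSlots(DateTime startFrom)
{
    // One 9:00-17:00 slot per weekday, forever.
    for (var day = startFrom.Date; ; day = day.AddDays(1))
    {
        if (day.DayOfWeek == DayOfWeek.Saturday || day.DayOfWeek == DayOfWeek.Sunday)
            continue;

        var start = day.AddHours(9);
        var end = day.AddHours(17);
        if (end <= startFrom)
            continue;            // this slot is already fully in the past
        if (start < startFrom)
            start = startFrom;   // clip the very first slot

        yield return new DateTimeInterval { Start = start, Length = end - start };
    }
}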
My task is to implement a function that takes a set of those iterators and returns another iterator of the intersections:
IEnumerable<DateTimeInterval> GetIntersections(
    string[] persons, DateTime startFrom);
It gets the iterators of available time slots for all persons and returns the intersected time slots, i.e. the intervals when all those persons are available. Internally I have to implement the following function:
IEnumerable<DateTimeInterval> GetIntersections(
    IEnumerable<DateTimeInterval>[] personsAvailableSlots);
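(The public overload is then just a projection over the provider. A sketch, assuming the provider is injected as a _provider field:)

IEnumerable<DateTimeInterval> GetIntersections(string[] persons, DateTime startFrom)
{
    // Map each person to their slot iterator and defer to the core overload.
    return GetIntersections(persons
        .Select(p => _provider.GetPersonAvailableTimeSlots(p, startFrom))
        .ToArray());
}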
The solution seems pretty straightforward to me:
static IEnumerable<DateTimeInterval> GetIntersections(IEnumerable<DateTimeInterval>[] personsAvailableSlots)
{
    var enumerators = personsAvailableSlots.Select(timeline => timeline.GetEnumerator()).ToArray();

    // Intersection is empty when at least one of the iterators is empty.
    for (int i = 0; i < personsAvailableSlots.Length; i++)
        if (!enumerators[i].MoveNext()) yield break;

    while (true)
    {
        // First we ensure that an intersection exists at the current state;
        // if not, we have to move some iterators forward.
        var start = enumerators.Select(tl => tl.Current).Max(interval => interval.Start);
        foreach (var iter in enumerators)
            while (iter.Current.Start + iter.Current.Length <= start)
                if (!iter.MoveNext()) yield break;

        // Now we check whether the intersection interval exists.
        var int_start = enumerators.Select(tl => tl.Current).Max(interval => interval.Start);
        var int_end = enumerators.Select(tl => tl.Current).Min(interval => interval.Start + interval.Length);
        if (int_end > int_start)
        {
            // If so, we return it.
            yield return new DateTimeInterval()
            {
                Start = int_start,
                Length = int_end - int_start
            };

            // And, finally, we ensure the next interval starts after the current one ends.
            //
            // CAUTION: We may only move iterators whose current interval has passed.
            // Otherwise we would miss huge spans which cover several intervals in other iterators.
            //
            // In fact we should move only one iterator - the one which currently limits the last result.
            foreach (var iter in enumerators)
                while (iter.Current.Start + iter.Current.Length == int_end)
                    if (!iter.MoveNext()) yield break;
        }
    }
}
I have tested this for several simple scenarios; I hope I'm not missing something important.
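For reference, one of the trivial checks I ran looks like this (the Slot helper is just my shorthand for building intervals):

// Two single-slot timelines: 9:00-12:00 and 10:00-11:00 on the same day;
// the intersection should be exactly 10:00-11:00.
static DateTimeInterval Slot(int hour, int lengthHours)
{
    return new DateTimeInterval
    {
        Start = new DateTime(2024, 1, 8, hour, 0, 0),
        Length = TimeSpan.FromHours(lengthHours)
    };
}

static void Main()
{
    var a = new[] { Slot(9, 3) };
    var b = new[] { Slot(10, 1) };
    foreach (var slot in GetIntersections(new IEnumerable<DateTimeInterval>[] { a, b }))
        Console.WriteLine("{0} for {1}", slot.Start, slot.Length); // prints the single slot: 10:00 for 01:00:00
}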
I am calling an API to get a list of contacts (there might be hundreds or thousands of them). The list only returns 100 at a time, and it gives me a pagination option: an object at the end of the list called 'nextpage' with a URL to the next 100, and so on.
So in my C# code I get the first 100, loop through them (to do something), look for the 'nextpage' object, take its URL and re-call the API, etc. It looks like this next-page chain goes on for as long as there are contacts.
Can you please let me know if there is a way for me to loop through the same code and still be able to use the new URL from the 'nextpage' object, running the logic for every 100 I get?
Pseudo-code, as we have no concrete examples to work with, but...
Most APIs with pagination will have a total count of items. You can set a maximum number of items per iteration and track the offset that way, or check for a null nextObject, depending on how the API handles it.
List<ApiObject> GetObjects() {
    const int ITERATION_COUNT = 100;
    int objectsCount = GetAPICount();
    var ApiObjects = new List<ApiObject>();
    for (int i = 0; i < objectsCount; i += ITERATION_COUNT) {
        // get the next 100
        var apiObjects = callToAPI(i, ITERATION_COUNT); // pass the current offset, request the max per call
        ApiObjects.AddRange(apiObjects);
    } // this loop will stop after you've reached objectsCount, so you should have all
    return ApiObjects;
}
// alternatively:
List<ApiObject> GetObjects() {
    var ApiObjects = new List<ApiObject>();
    // get the first batch (null = no paging token yet)
    var callResponse = callToAPI(null);
    ApiObjects.AddRange(callResponse.apiObjects);
    var nextObject = callResponse.nextObject;
    // and continue to loop until there's none left
    while (nextObject != null) {
        callResponse = callToAPI(nextObject); // pass the 'nextpage' token/URL
        ApiObjects.AddRange(callResponse.apiObjects);
        nextObject = callResponse.nextObject;
    }
    return ApiObjects;
}
That's the basic idea anyway, per the two usual web service approaches (with lots of detail left out, as this is not working code but only meant to demonstrate the general approach).
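If you would rather process each batch of 100 as it arrives instead of collecting everything first, an iterator method is a natural fit. A sketch, where GetPage, PageResponse and Contact are hypothetical stand-ins for whatever the concrete API client exposes:

IEnumerable<Contact> GetAllContacts(string firstPageUrl)
{
    // Streams contacts lazily, one API call per page; the loop ends
    // when the response carries no 'nextpage' URL.
    string url = firstPageUrl;
    while (url != null)
    {
        PageResponse page = GetPage(url);
        foreach (Contact contact in page.Contacts)
            yield return contact;     // the caller handles each item as it streams
        url = page.NextPageUrl;       // null when there's no 'nextpage' object
    }
}

Then foreach (var contact in GetAllContacts(startUrl)) { ... } runs your per-contact logic across every page without ever materializing the full list.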
What is the right Rx extension method (in .NET) to keep generating events for N seconds?
By 'keep generating events for N seconds' I mean that it will keep generating events in a loop from DateTime.Now until DateTime.Now + TimeSpan.FromSeconds(N).
I'm working on a genetic algorithm that generates a number of hypotheses and propagates the most successful ones to the next generation. I need to constrain this guy in some elegant way.
Added later:
I've actually realized that I need to do pull instead of push, and came up with something like this:
public static class IEnumerableExtensions
{
    public static IEnumerable<T> Pull<T>(this IEnumerable<T> enumerable, int? times = null)
    {
        if (times == null)
            return enumerable.ToArray();
        else
            return enumerable.Take(times.Value).ToArray();
    }

    public static IEnumerable<T> Pull<T>(this IEnumerable<T> enumerable, TimeSpan timeout, int? times = null)
    {
        var start = DateTime.Now;
        if (times != null) enumerable = enumerable.Take(times.Value);
        using (var iterator = enumerable.GetEnumerator())
        {
            while (DateTime.Now < start + timeout && iterator.MoveNext())
                yield return iterator.Current;
        }
    }
}
And usage would be:
var results = lazySource.SelectMany(item =>
{
    // processing goes here
}).Pull(timeout: TimeSpan.FromSeconds(5), times: numberOfIterations);
There may well be a cleaner way of doing this, but you could use:
// This will generate events repeatedly
var interval = Observable.Interval(...);
// This will generate one event in N seconds
var timer = Observable.Timer(TimeSpan.FromSeconds(N));
// This will combine the two, so that the interval stops when the timer
// fires
var joined = interval.TakeUntil(timer);
It's been a long time since I've done any Rx, so I apologise if this is incorrect - but it's worth a try...
Jon's post is pretty much spot on; however, I noticed your edit where you suggested you would create your own extension methods to do this. I think it would be better* if you just used the built-in operators.
//LinqPad sample
void Main()
{
    var interval = Observable.Interval(TimeSpan.FromMilliseconds(250));
    var maxTime = Observable.Timer(TimeSpan.FromSeconds(10));

    IEnumerable<int> lazySource = Enumerable.Range(0, 100);

    lazySource.ToObservable()
        .Zip(interval, (val, tick) => val)
        .TakeUntil(maxTime)
        .Dump();
}
*i.e. easy for other devs to maintain and understand
There are a number of different ways to accomplish the same simple loop through the items of a collection in C#.
This has made me wonder if there is any reason, be it performance or ease of use, to use one over the others, or if it is just down to personal preference.
Take a simple object
var myList = new List<MyObject>();
Lets assume the object is filled and we want to iterate over the items.
Method 1.
foreach (var item in myList)
{
    //Do stuff
}
Method 2
myList.ForEach(ml =>
{
    //Do stuff
});
Method 3
while (myList.MoveNext())
{
    //Do stuff
}
Method 4
for (int i = 0; i < myList.Count; i++)
{
    //Do stuff
}
What I was wondering is: does each of these compile down to the same thing? Is there a clear performance advantage for using one over the others?
Or is this just down to personal preference when coding?
Have I missed any?
The answer, the majority of the time, is that it does not matter. The number of items in the loop (even what one might consider a "large" number of items, say in the thousands) isn't going to have an impact on the code.
Of course, if you identify this as a bottleneck in your situation, by all means, address it, but you have to identify the bottleneck first.
That said, there are a number of things to take into consideration with each approach, which I'll outline here.
Let's define a few things first:
All of the tests were run on .NET 4.0 on a 32-bit processor.
TimeSpan.TicksPerSecond on my machine = 10,000,000
All tests were performed in separate unit test sessions, not in the same one (so as not to possibly interfere with garbage collections, etc.)
Here's some helpers that are needed for each test:
The MyObject class:
public class MyObject
{
    public int IntValue { get; set; }
    public double DoubleValue { get; set; }
}
A method to create a List<T> of any length of MyClass instances:
public static List<MyObject> CreateList(int items)
{
    // Validate parameters.
    if (items < 0)
        throw new ArgumentOutOfRangeException("items", items,
            "The items parameter must be a non-negative value.");

    // Return the items in a list.
    return Enumerable.Range(0, items).
        Select(i => new MyObject { IntValue = i, DoubleValue = i }).
        ToList();
}
An action to perform for each item in the list (needed because Method 2 uses a delegate, and a call needs to be made to something to measure impact):
public static void MyObjectAction(MyObject obj, TextWriter writer)
{
    // Validate parameters.
    Debug.Assert(obj != null);
    Debug.Assert(writer != null);

    // Write.
    writer.WriteLine("MyObject.IntValue: {0}, MyObject.DoubleValue: {1}",
        obj.IntValue, obj.DoubleValue);
}
A method to create a TextWriter which writes to a null Stream (basically a data sink):
public static TextWriter CreateNullTextWriter()
{
    // Create a stream writer off a null stream.
    return new StreamWriter(Stream.Null);
}
And let's fix the number of items at one million (1,000,000, which should be sufficiently high to enforce that generally, these all have about the same performance impact):
// The number of items to test.
public const int ItemsToTest = 1000000;
Let's get into the methods:
Method 1: foreach
The following code:
foreach (var item in myList)
{
    //Do stuff
}
Compiles down into the following:
using (var enumerator = myList.GetEnumerator())
    while (enumerator.MoveNext())
    {
        var item = enumerator.Current;
        // Do stuff.
    }
There's quite a bit going on there. You have the method calls (and it may or may not be against the IEnumerator<T> or IEnumerator interfaces, as the compiler respects duck-typing in this case) and your // Do stuff is hoisted into that while structure.
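To illustrate the duck-typing point (this example is mine, not part of the test suite): foreach compiles against the pattern, not the interfaces, so the following works in a foreach even though it implements neither IEnumerable nor IEnumerator:

// foreach only needs a GetEnumerator() method returning something with
// a bool MoveNext() and a Current property; no interfaces required.
public class CountDown
{
    public CountDownEnumerator GetEnumerator()
    {
        return new CountDownEnumerator(3);
    }
}

public class CountDownEnumerator
{
    private int current;
    public CountDownEnumerator(int start) { current = start + 1; }
    public int Current { get { return current; } }
    public bool MoveNext() { return --current >= 0; }
}

// Usage: foreach (int i in new CountDown()) Console.WriteLine(i);
// prints 3, 2, 1, 0.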
Here's the test to measure the performance:
[TestMethod]
public void TestForEachKeyword()
{
    // Create the list.
    List<MyObject> list = CreateList(ItemsToTest);

    // Create the writer.
    using (TextWriter writer = CreateNullTextWriter())
    {
        // Create the stopwatch.
        Stopwatch s = Stopwatch.StartNew();

        // Cycle through the items.
        foreach (var item in list)
        {
            // Write the values.
            MyObjectAction(item, writer);
        }

        // Write out the number of ticks.
        Debug.WriteLine("Foreach loop ticks: {0}", s.ElapsedTicks);
    }
}
The output:
Foreach loop ticks: 3210872841
Method 2: .ForEach method on List<T>
The code for the .ForEach method on List<T> looks something like this:
public void ForEach(Action<T> action)
{
    // Error handling omitted

    // Cycle through the items, perform action.
    for (int index = 0; index < Count; ++index)
    {
        // Perform action.
        action(this[index]);
    }
}
Note that this is functionally equivalent to Method 4, with one exception, the code that is hoisted into the for loop is passed as a delegate. This requires a dereference to get to the code that needs to be executed. While the performance of delegates has improved from .NET 3.0 on, that overhead is there.
However, it's negligible. The test to measure the performance:
[TestMethod]
public void TestForEachMethod()
{
    // Create the list.
    List<MyObject> list = CreateList(ItemsToTest);

    // Create the writer.
    using (TextWriter writer = CreateNullTextWriter())
    {
        // Create the stopwatch.
        Stopwatch s = Stopwatch.StartNew();

        // Cycle through the items.
        list.ForEach(i => MyObjectAction(i, writer));

        // Write out the number of ticks.
        Debug.WriteLine("ForEach method ticks: {0}", s.ElapsedTicks);
    }
}
The output:
ForEach method ticks: 3135132204
That's actually ~7.5 seconds faster than using the foreach loop. Not completely surprising, given that it uses direct array access instead of using IEnumerable<T>.
Remember though, this translates to 0.0000075740637 seconds per item being saved. That's not worth it for small lists of items.
Method 3: while (myList.MoveNext())
As shown in Method 1, this is exactly what the compiler does (with the addition of the using statement, which is good practice). You're not gaining anything here by unwinding the code yourself that the compiler would otherwise generate.
For kicks, let's do it anyways:
[TestMethod]
public void TestEnumerator()
{
    // Create the list.
    List<MyObject> list = CreateList(ItemsToTest);

    // Create the writer.
    using (TextWriter writer = CreateNullTextWriter())
    // Get the enumerator.
    using (IEnumerator<MyObject> enumerator = list.GetEnumerator())
    {
        // Create the stopwatch.
        Stopwatch s = Stopwatch.StartNew();

        // Cycle through the items.
        while (enumerator.MoveNext())
        {
            // Write.
            MyObjectAction(enumerator.Current, writer);
        }

        // Write out the number of ticks.
        Debug.WriteLine("Enumerator loop ticks: {0}", s.ElapsedTicks);
    }
}
The output:
Enumerator loop ticks: 3241289895
Method 4: for
In this particular case, you're going to gain some speed, as the list indexer goes directly to the underlying array to perform the lookup (that's an implementation detail, BTW; there's nothing to say a tree structure couldn't back the List<T> instead).
[TestMethod]
public void TestListIndexer()
{
    // Create the list.
    List<MyObject> list = CreateList(ItemsToTest);

    // Create the writer.
    using (TextWriter writer = CreateNullTextWriter())
    {
        // Create the stopwatch.
        Stopwatch s = Stopwatch.StartNew();

        // Cycle by index.
        for (int i = 0; i < list.Count; ++i)
        {
            // Get the item.
            MyObject item = list[i];

            // Perform the action.
            MyObjectAction(item, writer);
        }

        // Write out the number of ticks.
        Debug.WriteLine("List indexer loop ticks: {0}", s.ElapsedTicks);
    }
}
The output:
List indexer loop ticks: 3039649305
However, the place where this can make a difference is arrays. The compiler can unroll array loops to process multiple items at a time.
Instead of doing ten iterations of one item in a ten-item loop, the compiler can unroll this into five iterations of two items.
However, I'm not positive that this is actually happening here (I'd have to look at the IL and the output of the compiled IL).
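Written out by hand (purely to illustrate the shape; this is my sketch, not compiler output, and the JIT would do the equivalent at the machine-code level), a two-at-a-time unroll of the array loop looks like this:

// Manual 2x unroll: process pairs, then the leftover element when
// Length is odd.
int i = 0;
for (; i + 1 < array.Length; i += 2)
{
    MyObjectAction(array[i], writer);
    MyObjectAction(array[i + 1], writer);
}
if (i < array.Length)
    MyObjectAction(array[i], writer);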
Here's the test:
[TestMethod]
public void TestArray()
{
    // Create the list.
    MyObject[] array = CreateList(ItemsToTest).ToArray();

    // Create the writer.
    using (TextWriter writer = CreateNullTextWriter())
    {
        // Create the stopwatch.
        Stopwatch s = Stopwatch.StartNew();

        // Cycle by index.
        for (int i = 0; i < array.Length; ++i)
        {
            // Get the item.
            MyObject item = array[i];

            // Perform the action.
            MyObjectAction(item, writer);
        }

        // Write out the number of ticks.
        Debug.WriteLine("Array loop ticks: {0}", s.ElapsedTicks);
    }
}
The output:
Array loop ticks: 3102911316
It should be noted that, out of the box, ReSharper offers a suggestion with a refactoring to change the above for statements to foreach statements. That's not to say this is right, but the basis is to reduce the amount of technical debt in code.
TL;DR
You really shouldn't be concerned with the performance of these things, unless testing in your situation shows that you have a real bottleneck (and you'll have to have massive numbers of items to have an impact).
Generally, you should go for what's most maintainable, in which case, Method 1 (foreach) is the way to go.
In regards to the final bit of the question, "Have I missed any?": yes, and I feel I would be remiss not to mention it even though the question is quite old. While those four ways of doing it will execute in relatively the same amount of time, there is a way not shown above that runs faster than all of them, quite significantly so as the number of items in the iterated list increases. It is exactly the same as the last method, but instead of reading .Count in the condition check of the loop, you assign that value to a variable before setting up the loop and use that instead, which leaves you with something like this:
var countVar = list.Count;
for (int i = 0; i < countVar; i++)
{
    //loop logic
}
By doing it this way, you're only looking up a variable value at each iteration, rather than resolving the Count or Length properties, which is considerably less efficient.
I would suggest an even better and not well-known approach for faster loop iteration over a list. I recommend that you first read about Span<T>. Note that it is available if you are using .NET Core.
List<MyObject> list = new();

foreach (MyObject item in CollectionsMarshal.AsSpan(list))
{
    // Do something
}
Be aware of the caveats:
The CollectionsMarshal.AsSpan method is unsafe and should be used only if you know what you are doing.
CollectionsMarshal.AsSpan returns a Span<T> over the private array of the List<T>. Iterating over a Span<T> is fast, as the JIT uses the same tricks as for optimizing arrays.
Using this method, the list is not checked for modification during the enumeration.
This is a more detailed explanation of what it does behind the scenes and more, super interesting!
Which is the best way to loop through a list? Is a for loop better than the List class's find methods? Also, if I use its FindLast method as shown below, with an anonymous delegate (an instance of a Predicate delegate), is that better than using a lambda expression? Which one will execute faster?
var result = Books.FindLast(
    delegate(Book bk)
    {
        DateTime year2001 = new DateTime(2001,01,01);
        return bk.Publish_date < year2001;
    });
It's a complex question because it involves a lot of different topics.
In general, delegates are many times slower than a simple function call, and enumerating a list (via foreach) is terribly slow too.
If you really care about performance (but do not assume you do a priori; profile!) you should avoid delegates and enumerations. A first big step (whenever possible) could be to use a hash table instead of a simple list.
Examples
Now some examples. I'll write the same function in different ways, from the more readable (but slower) to the less readable (but faster). I omit all error checking, but a real-world function shouldn't (at least some asserts are required).
This function uses LINQ; it's the easiest to understand but the slowest.
Note that books can be a generic enumeration (it's not required to be a List<T>):
public static Book FindLastBookPublishedBefore(IEnumerable<Book> books,
                                               DateTime date)
{
    // LastOrDefault works on any IEnumerable<Book>; FindLast is List<T>-only.
    return books.LastOrDefault(x => x.Publish_date < date);
}
Same as before but without LINQ. Note that this function
handles a special case: when the list doesn't contain any eligible book, it returns null.
public static Book FindLastBookPublishedBefore(IEnumerable<Book> books,
                                               DateTime date)
{
    Book candidate = null;
    foreach (Book book in books)
    {
        // Keep the last book that satisfies the predicate.
        if (book.Publish_date < date)
            candidate = book;
    }

    return candidate;
}
Same as before but without enumeration; again, the function
handles the special case of a list without any eligible book.
public static Book FindLastBookPublishedBefore(List<Book> books,
                                               DateTime date)
{
    Book candidate = null;
    for (int i = 0; i < books.Count; ++i)
    {
        // Keep the last book that satisfies the predicate.
        if (books[i].Publish_date < date)
            candidate = books[i];
    }

    return candidate;
}
Same as before but with a SortedList<T> as suggested by @MaratKhasanov. Please note that with this container you get good search performance, but inserting a new element can be slower than with a normal unsorted list (because the list itself must be kept sorted). If the number of elements in the list is very high, you might think about writing your own sorted container backed by a hash table (using, for example, the year as the key for the first level; a sketch of that idea follows the next example).
public static Book FindLastBookPublishedBefore(SortedList<Book> books,
                                               DateTime date)
{
    Book candidate = null;
    for (int i = 0; i < books.Count; ++i)
    {
        // The list is sorted by publish date, so the first book at or
        // past the limit ends the search.
        if (books[i].Publish_date >= date)
            return candidate;

        candidate = books[i];
    }

    return candidate;
}
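To make the hash-table idea above concrete, here is a sketch (all names are mine, purely illustrative): group the books by year, keep each year's sublist sorted, and scan only one small sublist per step.
public class BooksByYearIndex
{
    private readonly Dictionary<int, List<Book>> byYear;
    private readonly int minYear;

    public BooksByYearIndex(IEnumerable<Book> books)
    {
        // Two-level index keyed by year (assumes at least one book).
        byYear = books
            .GroupBy(b => b.Publish_date.Year)
            .ToDictionary(g => g.Key,
                          g => g.OrderBy(b => b.Publish_date).ToList());
        minYear = byYear.Keys.Min();
    }

    public Book FindLastPublishedBefore(DateTime date)
    {
        // Walk years downward; only one year's sorted sublist is scanned per step.
        for (int year = date.Year; year >= minYear; --year)
        {
            List<Book> sublist;
            if (!byYear.TryGetValue(year, out sublist))
                continue;
            Book match = sublist.LastOrDefault(b => b.Publish_date < date);
            if (match != null)
                return match;
        }
        return null;
    }
}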
Now an example that's a little more complex but has the best search performance. The algorithm is derived from an ordinary binary search (note that if you are looking for an exact key, you can use the List.BinarySearch method directly).
Note that the code is untested and can be optimized further; please consider it just an example.
public static Book FindLastBookPublishedBefore(List<Book> books,
                                               DateTime date)
{
    int min = 0, max = books.Count; // max is exclusive
    Book candidate = null;
    while (min < max)
    {
        int mid = (min + max) / 2;
        Book book = books[mid];
        if (book.Publish_date >= date)
            max = mid;         // the answer is strictly to the left of mid
        else
        {
            candidate = book;  // mid matches; a later match may exist to the right
            min = mid + 1;
        }
    }

    return candidate;
}
Before moving to a more complex container, you might think about keeping your list unsorted until the first search. The first search will be really slow (because it has to sort the list first), but inserts will be as fast as with a normal list (you'll have to try with real-world data). Anyway, the last algorithm can be optimized a lot.
Maybe, if you have so many items in your collection that you can't manage them with a normal collection, you should think about moving everything to a database... lol
Use whatever makes your code more readable. You can simplify the code above by using lambda expressions; they are just a simplified syntax for anonymous delegates. Wherever you can use delegates, you can use lambda expressions or normal methods. You would pass a normal method as an argument without parentheses.
The C# compiler really just makes a hidden method for anonymous delegates and lambda expressions. You will probably not experience any difference in speed.
var result = Books.FindLast(bk => bk.Publish_date < new DateTime(2001,01,01));
I have a List class, and I would like to override GetEnumerator() to return my own Enumerator class. This Enumerator class would have two additional properties that would be updated as the Enumerator is used.
For simplicity (this isn't the exact business case), let's say those properties were CurrentIndex and RunningTotal.
I could manage these properties within the foreach loop manually, but I would rather encapsulate this functionality for reuse, and the Enumerator seems to be the right spot.
The problem: foreach hides all the Enumerator business, so is there a way, within a foreach statement, to access the current Enumerator so I can retrieve my properties? Or would I have to forgo foreach, use a nasty old while loop, and manipulate the Enumerator myself?
Strictly speaking, I would say that if you want to do exactly what you're saying, then yes, you would need to call GetEnumerator and control the enumerator yourself with a while loop.
Without knowing too much about your business requirement, you might be able to take advantage of an iterator function, such as something like this:
public static IEnumerable<decimal> IgnoreSmallValues(List<decimal> list)
{
    decimal runningTotal = 0M;
    foreach (decimal value in list)
    {
        // if the value is less than 1% of the running total, then ignore it
        if (runningTotal == 0M || value >= 0.01M * runningTotal)
        {
            runningTotal += value;
            yield return value;
        }
    }
}
Then you can do this:
List<decimal> payments = new List<decimal>() {
    123.45M,
    234.56M,
    .01M,
    345.67M,
    1.23M,
    456.78M
};

foreach (decimal largePayment in IgnoreSmallValues(payments))
{
    // handle the large payments so that I can divert all the small payments to my own bank account. Mwahaha!
}
Updated:
Ok, so here's a follow-up with what I've termed my "fishing hook" solution. Now, let me add a disclaimer that I can't really think of a good reason to do something this way, but your situation may differ.
The idea is that you simply create a "fishing hook" object (reference type) that you pass to your iterator function. The iterator function manipulates your fishing hook object, and since you still have a reference to it in your code outside, you have visibility into what's going on:
public class FishingHook
{
    public int Index { get; set; }
    public decimal RunningTotal { get; set; }
    public Func<decimal, bool> Criteria { get; set; }
}

public static IEnumerable<decimal> FishingHookIteration(IEnumerable<decimal> list, FishingHook hook)
{
    hook.Index = 0;
    hook.RunningTotal = 0;
    foreach (decimal value in list)
    {
        // the hook object may define a Criteria delegate that
        // determines whether to skip the current value
        if (hook.Criteria == null || hook.Criteria(value))
        {
            hook.RunningTotal += value;
            yield return value;
            hook.Index++;
        }
    }
}
You would utilize it like this:
List<decimal> payments = new List<decimal>() {
    123.45M,
    .01M,
    345.67M,
    234.56M,
    1.23M,
    456.78M
};

FishingHook hook = new FishingHook();
decimal min = 0;
hook.Criteria = x => x > min; // exclude any values that are less than/equal to the defined minimum
foreach (decimal value in FishingHookIteration(payments, hook))
{
    // update the minimum
    if (value > min) min = value;
    Console.WriteLine("Index: {0}, Value: {1}, Running Total: {2}", hook.Index, value, hook.RunningTotal);
}
// Resulting output is:
//Index: 0, Value: 123.45, Running Total: 123.45
//Index: 1, Value: 345.67, Running Total: 469.12
//Index: 2, Value: 456.78, Running Total: 925.90
// we've skipped the values .01, 234.56, and 1.23
Essentially, the FishingHook object gives you some control over how the iterator executes. The impression I got from the question was that you needed some way to access the inner workings of the iterator so that you could manipulate how it iterates while you are in the middle of iterating, but if this is not the case, then this solution might be overkill for what you need.
With foreach you indeed can't get the enumerator - you could, however, have the enumerator return (yield) a tuple that includes that data; in fact, you could probably use LINQ to do it for you...
(I couldn't cleanly get the index using LINQ - can get the total and current value via Aggregate, though; so here's the tuple approach)
using System;
using System.Collections.Generic;

class MyTuple
{
    public int Value { get; private set; }
    public int Index { get; private set; }
    public int RunningTotal { get; private set; }

    public MyTuple(int value, int index, int runningTotal)
    {
        Value = value; Index = index; RunningTotal = runningTotal;
    }

    static IEnumerable<MyTuple> SomeMethod(IEnumerable<int> data)
    {
        int index = 0, total = 0;
        foreach (int value in data)
        {
            yield return new MyTuple(value, index++,
                total = total + value);
        }
    }

    static void Main()
    {
        int[] data = { 1, 2, 3 };
        foreach (var tuple in SomeMethod(data))
        {
            Console.WriteLine("{0}: {1} ; {2}", tuple.Index,
                tuple.Value, tuple.RunningTotal);
        }
    }
}
You can also do something like this in a more functional way, depending on your requirements. What you are asking for can be thought of as "zipping" together multiple sequences, and then iterating through them all at once. The three sequences for the example you gave would be:
The "value" sequence
The "index" sequence
The "Running Total" Sequence
The next step would be to specify each of these sequences separately:
List<decimal> ValueList;   // the values themselves (assumed filled elsewhere)
var Indexes = Enumerable.Range(0, ValueList.Count);
The last one is more fun... the two methods I can think of are to either use a temporary variable to sum up the sequence, or to recalculate the sum for each item. The second is obviously much less performant; I would rather use the temporary:
decimal Sum = 0;
var RunningTotals = ValueList.Select(v => Sum = Sum + v);
The last step would be to zip these all together. .NET 4 will have the Zip operator built in, in which case it will look like this:
var ZippedSequence = ValueList
    .Zip(Indexes, (value, index) => new { value, index })
    .Zip(RunningTotals, (temp, total) => new { temp.value, temp.index, total });
This obviously gets noisier the more things you try to zip together.
In the last link, there is source for implementing the Zip function yourself. It really is a simple little bit of code.
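For reference, a sketch of the usual pre-.NET 4 implementation (my version, with the standard pairwise semantics: stop as soon as either sequence runs out):

public static class ZipExtension
{
    public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
        this IEnumerable<TFirst> first,
        IEnumerable<TSecond> second,
        Func<TFirst, TSecond, TResult> resultSelector)
    {
        // Pair items one by one until either sequence is exhausted.
        using (var e1 = first.GetEnumerator())
        using (var e2 = second.GetEnumerator())
        {
            while (e1.MoveNext() && e2.MoveNext())
                yield return resultSelector(e1.Current, e2.Current);
        }
    }
}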