How can I print the numbers from 1 to 1000 using OOP concepts, i.e. without using loops, arrays, or recursion?
You mentioned OOP concepts, and one of them is encapsulation: you don't care about the implementation details as long as the method gets the job done.
Almost all LINQ extension methods use loops (actually deferred iterators) in their implementation. It is easy not to realize that because those loops are details encapsulated inside the implementation.
To answer your question, the only way to do that without a loop is to write the WriteLine call 1000 times.
In OOP, you create a class to encapsulate the logic and then use it:
class Program
{
    static void Main(string[] args)
    {
        new RangePrinter().PrintRange(1, 1001);
    }
}
No loops, right? Actually, the loop is encapsulated in the implementation:
class RangePrinter
{
    /* Injecting the write target is skipped for simplicity */
    public void PrintRange(int lowerBound, int upperBound)
    {
        for (int i = lowerBound; i < upperBound; i++)
        {
            Console.WriteLine(i);
        }
    }
}
The same applies when you use Enumerable (a.k.a. the LINQ extension methods):
Enumerable.Range(1, 1000).ToList().ForEach(x => Console.WriteLine(x));
Here is how the .NET Framework team implements the internal Range iterator:
static IEnumerable<int> RangeIterator(int start, int count) {
    for (int i = 0; i < count; i++) yield return start + i;
}
Conclusion: when you need to do repetitive work, you need a loop. Any C# API that iterates over a collection (LINQ's Where, Select, etc.) uses a loop. Loops are not bad unless they are not needed, or they are nested when there are alternative approaches.
Just for fun, if this is meant to be a puzzle (forget the OOP requirement part), then you can do this:
string oneToThousand = "1\r\n2\r\n3\r\n4\r\n5\r\n6\r\n7\r\n8\r\n9\r\n10\r\n11\r\n12\r\n13\r\n14\r\n15\r\n16\r\n17\r\n18\r\n19\r\n20\r\n" +
"21\r\n22\r\n23\r\n24\r\n25\r\n26\r\n27\r\n28\r\n29\r\n30\r\n31\r\n32\r\n33\r\n34\r\n35\r\n36\r\n37\r\n38\r\n39\r\n40\r\n" +
"41\r\n42\r\n43\r\n44\r\n45\r\n46\r\n47\r\n48\r\n49\r\n50\r\n51\r\n52\r\n53\r\n54\r\n55\r\n56\r\n57\r\n58\r\n59\r\n60\r\n" +
"61\r\n62\r\n63\r\n64\r\n65\r\n66\r\n67\r\n68\r\n69\r\n70\r\n71\r\n72\r\n73\r\n74\r\n75\r\n76\r\n77\r\n78\r\n79\r\n80\r\n" +
"81\r\n82\r\n83\r\n84\r\n85\r\n86\r\n87\r\n88\r\n89\r\n90\r\n91\r\n92\r\n93\r\n94\r\n95\r\n96\r\n97\r\n98\r\n99\r\n100\r\n";
/* Continue to 1000 */
Console.WriteLine(oneToThousand);
A recursive function, by definition, is a function which either calls itself or is in a potential cycle of function calls. As the definition suggests, there are two kinds of recursive functions. Consider a function which calls itself: we call this kind of recursion immediate (or direct) recursion.
Example:
public static void PrintTo(int number)
{
    if (number == 0)
        return;

    PrintTo(number - 1);          // recurse first so the output comes out ascending: 1, 2, ..., number
    Console.WriteLine(number);
}

static void Main(string[] args)
{
    PrintTo(1000);
}
I have this code in C++ to do this, but I want to do the same in C# and it's not working, and I can't figure out why.
class Numero
{
public:
    static int num;
    Numero()
    {
        cout << num++ << " ";
    }
};

int Numero::num = 1;

int main()
{
    int n;
    cout << "Type n: ";
    cin >> n;
    Numero obj[n];
    return 0;
}
This prints "1 2 3 4 5 ... n".
But in C#:
class numero
{
    public static int num { get; set; }

    public numero()
    {
        Console.WriteLine(num);
        num++;
    }
}

class Program
{
    static void Main(string[] args)
    {
        numero.num = 1;
        Console.WriteLine("Type 'n'");
        int n = int.Parse(Console.ReadLine());
        Console.WriteLine("Printing to: {0}", n);
        numero[] num_1 = new numero[n];
        Console.WriteLine("End");
        Console.ReadLine();
    }
}
I tried it in different ways, but the only output I get is:
Type 'n'
10
Printing to: 10
End
Any idea how to make it work? And why, when the array of numero is created, is the numero constructor not being called?
The C++ version works because you're allocating n elements of Numero on the stack, which causes the compiler to invoke Numero's constructor n times. Ultimately your program is still using a loop, but it's hidden within the generated machine code rather than explicitly declared in your code.
The pseudocode of the machine code generated by the compiler resembles this:
numbers = Allocate( n * sizeof( Numero ) );
for (int i = 0; i < n; i++) numbers[i].ctor();
This is not possible in pure C# because classes exist on the heap and require explicit constructor calls (which you'd have to do in a loop), and while structs exist on the stack (sort-of) they do not have their default constructor called when they're allocated (see this QA for an explanation: Why can't I define a default constructor for a struct in .NET? ).
Your question sounds like a bad brain-teaser that tests familiarity with a language but has zero practical use, because the only way to implement repetition is with a jump instruction somewhere (be it a loop, a recursive call, or an explicit goto; there are no other ways to do it, and the only way to hide it is by calling another method or compiler feature that performs the prohibited instruction for you).
Note that you could pull this off with Array.Initialize, which calls the default constructor of a value-type array's elements, but C# doesn't allow you to define such a default constructor (the CLI does allow them to exist). This might be there to support some Managed C++ / C++/CLI interop feature, however.
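If the goal is simply to get the same output in C#, one option (just a sketch, reusing the question's numero class and requiring using System.Linq) is to construct the instances explicitly; the loop is still there, merely hidden inside LINQ:
// Replaces the line "numero[] num_1 = new numero[n];" from the question.
// Select(...) invokes the constructor once per element; ToArray() drives the hidden loop.
numero[] num_1 = Enumerable.Range(0, n)
                           .Select(_ => new numero())
                           .ToArray();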
The title is basically the question.
I understand that the AsParallel method returns a ParallelQuery<TSource> wrapper that supports the same LINQ operators, but from System.Linq.ParallelEnumerable instead of System.Linq.Enumerable.
That much is clear, but when I look into the decompiled sources, I don't understand how it works.
Let's begin with one of the simplest extensions, the Sum() method:
[__DynamicallyInvokable]
public static int Sum(this ParallelQuery<int> source)
{
    if (source == null)
        throw new ArgumentNullException("source");
    else
        return new IntSumAggregationOperator((IEnumerable<int>) source).Aggregate();
}
That's clear, so let's go to the Aggregate() method. It's a wrapper around the InternalAggregate method that traps some exceptions. Now let's take a look at it:
protected override int InternalAggregate(ref Exception singularExceptionToThrow)
{
    using (IEnumerator<int> enumerator = this.GetEnumerator(new ParallelMergeOptions?(ParallelMergeOptions.FullyBuffered), true))
    {
        int num = 0;
        while (enumerator.MoveNext())
            checked { num += enumerator.Current; }
        return num;
    }
}
And here is the question: how does it work? I see no concurrency safety for a variable modified by many threads; we only see an iterator and summing. Is it a magic enumerator? GetEnumerator() returns a QueryOpeningEnumerator<TOutput>, but its code is too complicated for me.
Finally, on my second assault on the PLINQ sources, I found the answer, and it's pretty clear.
The problem is that the enumerator is not a simple one; it's a special multithreaded one. So how does it work? The answer is that the enumerator doesn't return the next value of the source; it returns the whole sum of the next partition. So this loop body only executes 2, 4, 6, 8... times (based on Environment.ProcessorCount), while the actual summation work is performed inside enumerator.MoveNext, in the enumerator's OpenQuery method.
So the TPL obviously partitions the source enumerable, then sums each partition independently, and then performs this final summation; see IntSumAggregationOperatorEnumerator<TKey>. No magic here, you just have to dig deeper.
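The partition-then-merge idea can be sketched without PLINQ at all. The following is only an illustration of the concept, not the actual IntSumAggregationOperator code: each worker sums its own slice with no shared state, and the cheap final merge happens on a single thread, which is the part the decompiled InternalAggregate loop corresponds to.
using System;
using System.Linq;
using System.Threading.Tasks;

class PartitionedSumSketch
{
    static void Main()
    {
        int[] source = Enumerable.Range(1, 1000000).ToArray();
        int workers = Environment.ProcessorCount;
        long[] partials = new long[workers];

        // Each worker sums a strided slice of the source.
        // No locking is needed because each worker writes only its own slot.
        Parallel.For(0, workers, w =>
        {
            long local = 0;
            for (int i = w; i < source.Length; i += workers)
                local += source[i];
            partials[w] = local;
        });

        // The final merge is tiny and runs on one thread, like InternalAggregate above.
        Console.WriteLine(partials.Sum());
    }
}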
The Sum operator aggregates all values in a single thread. There is no multi-threading here. The trick is that multi-threading is happening somewhere else.
The PLINQ Sum method can handle PLINQ enumerables. Those enumerables could be built up using other constructs (such as Where) that allow a collection to be processed over multiple threads.
The Sum operator is always the last operator in a chain. Although it is possible to process this sum over multiple threads, the TPL team probably found out that this had a negative impact on performance, which is reasonable, since the only thing this method has to do is a simple integer addition.
So this method takes all the results that become available from other threads, processes them on a single thread, and returns that value. The real trick is in the other PLINQ extension methods.
protected override int InternalAggregate(ref Exception singularExceptionToThrow)
{
    using (IEnumerator<int> enumerator = this.GetEnumerator(new ParallelMergeOptions?(ParallelMergeOptions.FullyBuffered), true))
    {
        int num = 0;
        while (enumerator.MoveNext())
            checked { num += enumerator.Current; }
        return num;
    }
}
This code won't be executed in parallel; the while loop executes its inner scope sequentially.
Try this instead:
List<int> list = new List<int>();
int num = 0;
Parallel.ForEach(list, (item) =>
{
    checked { num += item; }
});
The inner action will be spread over the ThreadPool, and the ForEach call completes when all items have been handled.
However, here you need thread safety:
List<int> list = new List<int>();
int num = 0;
Parallel.ForEach(list, (item) =>
{
    Interlocked.Add(ref num, item);
});
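If the contention of calling Interlocked.Add once per element matters, a sketch using the thread-local overload of Parallel.ForEach (same hypothetical list) accumulates a subtotal per thread and merges it only once per thread:
List<int> list = new List<int>();
int num = 0;

Parallel.ForEach(
    list,
    () => 0,                                    // per-thread initial subtotal
    (item, state, local) => local + item,       // accumulate locally, no locking
    local => Interlocked.Add(ref num, local));  // merge each thread's subtotal once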
If I have two lists and I want to know whether there is at least one common element, I have these two options:
lst1.Intersect(lst2).Any();
lst1.Any(x => lst2.Contains(x));
Both options give me the result I expect, but I don't know which is the better option. Which is more efficient, and why?
Thanks.
EDIT: when I created this post, apart from the solution, I was looking for the reason. I know that I can run tests, but then I still wouldn't know the reason for the result. Is one faster than the other? Is one solution always better than the other?
For this reason I have accepted Matthew's answer: not only for the test code, but also because he explains when one is better than the other and why. I appreciate the contributions of Nicholas and Oren a lot too.
Thanks.
Oren's answer has an error in the way the stopwatch is being used. It isn't being reset at the end of the loop after the time taken by Any() has been measured.
Note how it goes back to the start of the loop with the stopwatch never being Reset() so that the time that is added to intersect includes the time taken by Any().
Following is a corrected version.
A release build run outside any debugger gives this result on my PC:
Intersect: 1ms
Any: 6743ms
Note how I'm making two non-overlapping string lists for this test. Also note that this is a worst-case test.
Where there are many intersections (or intersections that happen to occur near the start of the data) then Oren is quite correct to say that Any() should be faster.
If the real data usually contains intersections then it's likely that it is better to use Any(). Otherwise, use Intersect(). It's very data dependent.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace Demo
{
    class Program
    {
        void run()
        {
            double intersect = 0;
            double any = 0;
            Stopwatch stopWatch = new Stopwatch();

            List<string> L1 = Enumerable.Range(0, 10000).Select(x => x.ToString()).ToList();
            List<string> L2 = Enumerable.Range(10000, 10000).Select(x => x.ToString()).ToList();

            for (int i = 0; i < 10; i++)
            {
                stopWatch.Restart();
                Intersect(L1, L2);
                stopWatch.Stop();
                intersect += stopWatch.ElapsedMilliseconds;

                stopWatch.Restart();
                Any(L1, L2);
                stopWatch.Stop();
                any += stopWatch.ElapsedMilliseconds;
            }

            Console.WriteLine("Intersect: " + intersect + "ms");
            Console.WriteLine("Any: " + any + "ms");
        }

        private static bool Any(List<string> lst1, List<string> lst2)
        {
            return lst1.Any(lst2.Contains);
        }

        private static bool Intersect(List<string> lst1, List<string> lst2)
        {
            return lst1.Intersect(lst2).Any();
        }

        static void Main()
        {
            new Program().run();
        }
    }
}
For comparative purposes, I wrote my own test comparing int sequences:
intersect took 00:00:00.0065928
any took 00:00:08.6706195
The code:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

namespace Demo
{
    class Program
    {
        void run()
        {
            var lst1 = Enumerable.Range(0, 10000);
            var lst2 = Enumerable.Range(10000, 10000);
            int count = 10;

            DemoUtil.Time(() => lst1.Intersect(lst2).Any(), "intersect", count);
            DemoUtil.Time(() => lst1.Any(lst2.Contains), "any", count);
        }

        static void Main()
        {
            new Program().run();
        }
    }

    static class DemoUtil
    {
        public static void Print(this object self)
        {
            Console.WriteLine(self);
        }

        public static void Print(this string self)
        {
            Console.WriteLine(self);
        }

        public static void Print<T>(this IEnumerable<T> self)
        {
            foreach (var item in self)
                Console.WriteLine(item);
        }

        public static void Time(Action action, string title, int count)
        {
            var sw = Stopwatch.StartNew();

            for (int i = 0; i < count; ++i)
                action();

            (title + " took " + sw.Elapsed).Print();
        }
    }
}
If I also time this for overlapping ranges by changing the lists to this and upping the count to 10000:
var lst1 = Enumerable.Range(10000, 10000);
var lst2 = Enumerable.Range(10000, 10000);
I get these results:
intersect took 00:00:03.2607476
any took 00:00:00.0019170
In this case Any() is clearly much faster.
Conclusion
The worst-case performance is very bad for Any() but acceptable for Intersect().
The best-case performance is extremely good for Any() and bad for Intersect().
(and best-case for Any() is probably worst-case for Intersect()!)
The Any() approach is O(N^2) in the worst case and O(1) in the best case.
The Intersect() approach is always O(N) (since it uses hashing, not sorting; otherwise it would be O(N log N)).
You must also consider the memory usage: the Intersect() method needs to take a copy of one of the inputs, whereas Any() doesn't.
Therefore to make the best decision you really need to know the characteristics of the real data, and actually perform tests.
If you really don't want the Any() to turn into an O(N^2) in the worst case, then you should use Intersect(). However, the chances are that you will be best off using Any().
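A middle ground, if the extra memory is acceptable, is to build the hash set yourself: you get the O(1) lookups of Intersect() and keep the early exit of Any(). A sketch reusing the lst1/lst2 lists from the question:
// One O(N) pass to build the set, then O(1) lookups that stop at the first match.
var lookup = new HashSet<string>(lst2);
bool anyCommon = lst1.Any(lookup.Contains);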
And of course, most of the time none of this matters!
Unless you've discovered this part of the code to be a bottleneck, this is of merely academic interest. You shouldn't waste your time with this kind of analysis if there's no problem. :)
It depends on the implementation of your IEnumerables.
Your first try (Intersect/Any) finds all the matches and then determines whether the set is empty or not. From the documentation, this looks to be something like an O(n) operation:
When the object returned by this method is enumerated, Intersect enumerates first, collecting all distinct elements of that sequence. It then enumerates [the] second, marking those elements that occur in both sequences. Finally, the marked elements are yielded in the order in which they were collected.
Your second try (Any/Contains) enumerates over the first collection, an O(n) operation, and for each item in the first collection, enumerates over the second, another O(n) operation, to see if a matching element is found. This makes it something like an O(n^2) operation, does it not? Which do you think might be faster?
One thing to consider, though, is that the Contains() lookup for certain collection or set types (e.g., dictionaries, binary trees or ordered collections that allow a binary search or hashtable lookup) might be a cheap operation if the Contains() implementation is smart enough to take advantage of the semantics of the collection upon which it is operating.
But you'll need to experiment with your collection types to find out which works better.
See Matthew's answer for a complete and accurate breakdown.
Relatively easy to mock up and try yourself:
bool found;
double intersect = 0;
double any = 0;

for (int i = 0; i < 100; i++)
{
    List<string> L1 = GenerateNumberStrings(200000);
    List<string> L2 = GenerateNumberStrings(60000);

    Stopwatch stopWatch = new Stopwatch();
    stopWatch.Start();
    found = Intersect(L1, L2);
    stopWatch.Stop();
    intersect += stopWatch.ElapsedMilliseconds;

    stopWatch.Reset();
    stopWatch.Start();
    found = Any(L1, L2);
    stopWatch.Stop();
    any += stopWatch.ElapsedMilliseconds;
}

Console.WriteLine("Intersect: " + intersect + "ms");
Console.WriteLine("Any: " + any + "ms");
}

private static bool Any(List<string> lst1, List<string> lst2)
{
    return lst1.Any(x => lst2.Contains(x));
}

private static bool Intersect(List<string> lst1, List<string> lst2)
{
    return lst1.Intersect(lst2).Any();
}
You'll find that the Any method is significantly faster in the long run, likely because it does not require the memory allocations and setup that intersect requires (Any stops and returns true as soon as it finds a match whereas Intersect actually needs to store the matches in a new List<T>).
A trivial example of an "infinite" IEnumerable would be
IEnumerable<int> Numbers() {
    int i = 0;
    while (true) {
        yield return unchecked(i++);
    }
}
I know, that
foreach (int i in Numbers().Take(10)) {
    Console.WriteLine(i);
}
and
var q = Numbers();
foreach (int i in q.Take(10)) {
    Console.WriteLine(i);
}
both work fine (and print out the number 0-9).
But are there any pitfalls when copying or handling expressions like q? Can I rely on the fact, that they are always evaluated "lazy"? Is there any danger to produce an infinite loop?
As long as you only call lazy, un-buffered methods you should be fine. So Skip, Take, Select, etc. are fine. However, Min, Count, OrderBy, etc. would go crazy.
It can work, but you need to be cautious. Or inject a Take(somethingFinite) as a safety measure (or some other custom extension method that throws an exception after too much data).
For example:
public static IEnumerable<T> SanityCheck<T>(this IEnumerable<T> data, int max) {
int i = 0;
foreach(T item in data) {
if(++i >= max) throw new InvalidOperationException();
yield return item;
}
}
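For example, a guarded query over the infinite sequence might then look like this (a sketch assuming the Numbers() iterator from the question):
// Throws InvalidOperationException if the query ever tries to pull a million items.
var results = Numbers()
    .SanityCheck(1000000)
    .Where(n => n % 7 == 0)
    .Take(10)
    .ToList();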
Yes, you are guaranteed that the code above will be executed lazily. While it looks (in your code) like you'd loop forever, your code actually produces something like this:
IEnumerable<int> Numbers()
{
    return new PrivateNumbersEnumerable();
}

private class PrivateNumbersEnumerable : IEnumerable<int>
{
    public IEnumerator<int> GetEnumerator()
    {
        return new PrivateNumbersEnumerator();
    }
}

private class PrivateNumbersEnumerator : IEnumerator<int>
{
    private int i;

    public bool MoveNext() { i++; return true; }

    public int Current
    {
        get { return i; }
    }
}
(This obviously isn't exactly what will be generated, since this is pretty specific to your code, but it's nonetheless similar and should show you why it's going to be lazily evaluated).
You would have to avoid any greedy functions that attempt to read to the end. This would include Enumerable extensions like Count, ToArray/ToList, and aggregates such as Average/Min/Max, etc.
There's nothing wrong with infinite lazy lists, but you must make conscious decisions about how to handle them.
Use Take to limit the impact of an endless loop by setting an upper bound even if you don't need them all.
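For example (a sketch using the Numbers() iterator from the question), bounding the sequence first makes the otherwise-greedy calls safe:
var firstThousand = Numbers().Take(1000).ToList();  // finite, so ToList() is safe
int biggest = Numbers().Take(1000).Max();           // likewise Max() on the bounded slice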
Yes, your code will always work without looping infinitely. Someone might come along later, though, and mess things up. Suppose they want to do:
var q = Numbers().ToList();
Then, you're hosed! Many "aggregate" functions will kill you, like Max().
If it weren't for lazy evaluation, your first example wouldn't work as expected in the first place.
Possible Duplicate:
Why is there not a ForEach extension method on the IEnumerable interface?
I've noticed when writing LINQ-y code that .ForEach() is a nice idiom to use. For example, here is a piece of code that takes the following inputs, and produces these outputs:
{ "One" } => "One"
{ "One", "Two" } => "One, Two"
{ "One", "Two", "Three", "Four" } => "One, Two, Three and Four";
And the code:
private string InsertCommasAttempt(IEnumerable<string> words)
{
    List<string> wordList = words.ToList();
    StringBuilder sb = new StringBuilder();

    var wordsAndSeparators = wordList.Select((string word, int pos) =>
    {
        if (pos == 0) return new { Word = word, Leading = string.Empty };
        if (pos == wordList.Count - 1) return new { Word = word, Leading = " and " };
        return new { Word = word, Leading = ", " };
    });

    wordsAndSeparators.ToList().ForEach(v => sb.Append(v.Leading).Append(v.Word));

    return sb.ToString();
}
Note the interjected .ToList() before the .ForEach() on the second to last line.
Why is it that .ForEach() isn't available as an extension method on IEnumerable<T>? With an example like this, it just seems weird.
Because ForEach(Action) existed before IEnumerable<T> existed.
Since it was not added with the other extension methods, one can assume that the C# designers felt it was a bad design and prefer the foreach construct.
Edit:
If you want, you can create your own extension method. It won't override the one for a List<T>, but it will work for any other class which implements IEnumerable<T>.
public static class IEnumerableExtensions
{
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (T item in source)
            action(item);
    }
}
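With that extension in place, the interjected .ToList() from the question's snippet is no longer needed; for example:
// Uses the IEnumerable<T>.ForEach extension above instead of ToList().ForEach(...)
wordsAndSeparators.ForEach(v => sb.Append(v.Leading).Append(v.Word));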
According to Eric Lippert, this is mostly for philosophical reasons. You should read the whole post, but here's the gist as far as I'm concerned:
I am philosophically opposed to providing such a method, for two reasons.
The first reason is that doing so violates the functional programming principles that all the other sequence operators are based upon. Clearly the sole purpose of a call to this method is to cause side effects.
The purpose of an expression is to compute a value, not to cause a side effect. The purpose of a statement is to cause a side effect. The call site of this thing would look an awful lot like an expression (though, admittedly, since the method is void-returning, the expression could only be used in a “statement expression” context.)
It does not sit well with me to make the one and only sequence operator that is only useful for its side effects.
The second reason is that doing so adds zero new representational power to the language.
Because ForEach() on an IEnumerable<T> would just be a normal foreach loop, like this:
foreach (T item in MyEnumerable)
{
    // Action<T> goes here
}
ForEach isn't on IList<T>; it's on List<T>. You were using the concrete List<T> in your example.
I am just guessing here, but putting ForEach on IEnumerable would make operations on it have side effects. None of the "available" extension methods cause side effects, so putting an imperative method like ForEach on there would muddy the API, I guess. Also, ForEach would force evaluation of the lazy collection.
Personally, I've been fending off the temptation to just add my own, to keep side-effect-free functions separate from ones with side effects.
ForEach is implemented in the concrete class List<T>
Just a guess, but List can iterate over its items without creating an enumerator:
public void ForEach(Action<T> action)
{
    if (action == null)
    {
        ThrowHelper.ThrowArgumentNullException(ExceptionArgument.match);
    }
    for (int i = 0; i < this._size; i++)
    {
        action(this._items[i]);
    }
}
This can lead to better performance. With IEnumerable, you don't have the option to use an ordinary for-loop.
LINQ follows the pull-model and all its (extension) methods should return IEnumerable<T>, except for ToList(). The ToList() is there to end the pull-chain.
ForEach() is from the push-model world.
You can still write your own extension method to do this, as pointed out by Samuel.
I honestly don't know for sure why the .ForEach(Action) isn't included on IEnumerable but, right, wrong or indifferent, that's the way it is...
I did, however, want to highlight the performance issue mentioned in other comments. There is a performance hit based on how you loop over a collection. It is relatively minor, but it certainly exists. Here is an incredibly quick and sloppy code snippet to show the relative timings... it only takes a minute or so to run through.
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Start Loop timing test: loading collection...");
        List<int> l = new List<int>();
        for (long i = 0; i < 60000000; i++)
        {
            l.Add(Convert.ToInt32(i));
        }
        Console.WriteLine("Collection loaded with {0} elements: start timings", l.Count());
        Console.WriteLine("\n<===============================================>\n");

        Console.WriteLine("foreach loop test starting...");
        DateTime start = DateTime.Now;
        //l.ForEach(x => l[x].ToString());
        foreach (int x in l)
            l[x].ToString();
        Console.WriteLine("foreach Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
        Console.WriteLine("\n<===============================================>\n");

        Console.WriteLine("List.ForEach(x => x.action) loop test starting...");
        start = DateTime.Now;
        l.ForEach(x => l[x].ToString());
        Console.WriteLine("List.ForEach(x => x.action) Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
        Console.WriteLine("\n<===============================================>\n");

        Console.WriteLine("for loop test starting...");
        start = DateTime.Now;
        int count = l.Count();
        for (int i = 0; i < count; i++)
        {
            l[i].ToString();
        }
        Console.WriteLine("for Loop Time for {0} elements = {1}", l.Count(), DateTime.Now - start);
        Console.WriteLine("\n<===============================================>\n");

        Console.WriteLine("\n\nPress Enter to continue...");
        Console.ReadLine();
    }
}
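As an aside, Stopwatch is usually a better fit than subtracting DateTime.Now for timings like these; a minimal variant of one of the measurements might look like this:
// Hypothetical replacement for one of the DateTime.Now timings above.
var sw = System.Diagnostics.Stopwatch.StartNew();
foreach (int x in l)
    l[x].ToString();
sw.Stop();
Console.WriteLine("foreach Loop Time for {0} elements = {1}", l.Count(), sw.Elapsed);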
Don't get hung up on this too much though. Performance is the currency of application design but unless your application is experiencing an actual performance hit that is causing usability problems, focus on coding for maintainability and reuse since time is the currency of real life business projects...
It's called "Select" on IEnumerable<T>
I am enlightened, thank you.