I am working on an exercise here which involves gathering data from a file and then displaying it on a graph.
I am using a LinkedIn video to help me here, but I'm stumped on how they are able to store the result of this method call in two separate variables at the same time.
var (intersect, slope) = MathNet.Numerics.Fit.Line(arrx.ToArray(), arry.ToArray());
If I keep only one variable on the left side the red lines go away, but if I keep both "intersect" and "slope", it gives me an error.
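For reference, tuple deconstruction like this requires C# 7.0 or later (and, for the System.Tuple that older MathNet.Numerics versions return, a framework that ships the Deconstruct extension). A version-agnostic workaround, assuming the error comes from language or framework support:

// Works on any C# version: read the tuple's items directly instead of
// deconstructing. Fit.Line returns the intercept first, then the slope
// (y = intercept + slope * x).
var fit = MathNet.Numerics.Fit.Line(arrx.ToArray(), arry.ToArray());
double intercept = fit.Item1;
double slope = fit.Item2;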
This is my first post on here. I'm attempting to create a 'simple' charting program (Windows Forms based, in C#). I say simple because I'm currently only playing around with 2 series maximum and a few transformations (percent changes, actual changes, moving averages and moving sums). It will get more complicated than this, but having the limited functionality first might help me get a better handle on how all of this works in C#.
I've been searching on and off for a couple of days now but have had no luck so far with my specific situation. I'll try to be as detailed as possible. The user retrieves the time series data from a SQL Server. This part of the program is behaving as expected. I'm creating 2 queries (one for each series) to retrieve the data separately, and I do the transformations in the SQL query. Each comes via a SQL adapter and is then placed into a data table. The series may be of different frequencies and the dates may not overlap (e.g. stock prices may be daily while exports are monthly, or GDP quarterly). The exports number may come in as the first of every month, while stock prices may be missing a value on this date if it was a weekend. I suspect this part is important for my issue.
Nonetheless, I double-checked this step and everything works as expected (the values are all coming in correctly at the right dates).
To add those series to a chart, I merge the two data tables like so (I believe this is where my issues are coming from):
myTable = myTables1.Copy();
myTables1.PrimaryKey = new DataColumn[] { myTables1.Columns["dates"] };
myTables2.PrimaryKey = new DataColumn[] { myTables2.Columns["dates"] };
myTable.Merge(myTables2, true);
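(One thing that may be worth double-checking in this snippet: myTable is copied before the primary keys are assigned, so the merge target may not have a key when Merge runs, in which case rows can be appended rather than matched on date. A hedged reordering sketch:)

// Assign the keys first; Copy() then carries the key constraint over,
// so Merge matches rows on "dates" instead of appending them.
myTables1.PrimaryKey = new DataColumn[] { myTables1.Columns["dates"] };
myTables2.PrimaryKey = new DataColumn[] { myTables2.Columns["dates"] };

myTable = myTables1.Copy();
myTable.Merge(myTables2, true);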
A DataGridView of the merged data table looks correct to me at this point.
Then I create the chart. I've tried two different methods here. The first is to create each series in a 2-step loop (one pass per series), looping through each row in the table and adding an x and y value for each series. I've also tried setting the chart's data source to the table, creating 2 series, and setting the X and Y value members to the names of the columns in the table.
Like so:
mychart.DataSource = myTable;
Then in a loop for each series:
mychart.Series.Add(seriesname);
mychart.Series[seriesname].XValueMember = "dates";
mychart.Series[seriesname].YValueMembers = seriesname;
Regardless, my second series is always a bit off. Either there are straight lines going across, or it misses some values (and by adding a tooltip I can tell that the problems are occurring at the dates where one series has a value while the other does not).
I'm not looking for help with syntax (just ideas). So my question is: is there a standard or preferred way of getting and plotting series with different frequencies (or which may have different x values)? Alternatively, is there a good source/documentation where I can read up on this? I would add that I have a similar program in Visual Basic which uses one SQL query regardless of how many series there are. That works in terms of the chart looking as I'd expect it to, but it makes transformations much more complicated given the randomness of the null or empty values in the final table.
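For what it's worth, one thing that often causes those straight connecting lines with the WinForms Chart control is series with mismatched dates and no explicit empty points. A minimal, hedged sketch (assuming System.Windows.Forms.DataVisualization.Charting; the 1-day interval is illustrative): switch to a real date axis and insert empty points where a series has no value, so the line breaks instead of bridging the gap.

mychart.DataBind();
foreach (Series s in mychart.Series)
{
    s.XValueType = ChartValueType.DateTime;   // plot on a real time axis
    // Insert an empty point wherever this series lacks a value for a date.
    mychart.DataManipulator.InsertEmptyPoints(1, IntervalType.Days, s.Name);
    s.EmptyPointStyle.Color = System.Drawing.Color.Transparent;
}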
I'm rather new to Parse and Cloud Code, and I'm having trouble writing a certain query script.
I have a table of Salespeople, who have two integers : dailySold and dailyQuota.
The dailySold is reset to 0 each day, and the dailyQuota is defined by upper management.
Now, I'd like to make queries that call out bulks of users. Say, all users whose dailySold is below their dailyQuota. In MySQL it would just look like this:
select * from salespeople where dailySold < dailyQuota
But in Parse / Cloud Code I have been unable to find something like this. Currently, I'm loading all the entries and going through them one by one, populating a large array client-side. This feels like absolutely the wrong way of doing it.
And the query.WhereNotEqualTo() function (and its siblings) seems to only be able to compare a key with a static value.
Does anyone know how to put together a query to optimize this? I need it to go through thousands of records, and it's often only 10-20 results I'm interested in. If nothing else, I'll have to make a Cloud Code function that iterates for me server-side, but I still feel like there is some function I should be able to use to make a leaner query.
You can't compare two columns in a query. You can only compare a key with a provided object. If the dailyQuota is set by upper management, I'm assuming this is the same for all salespeople, or for groups of people. I'd suggest first making a query for the daily quota, and then either using
whereKey:matchesKey:inQuery (WhereMatchesKeyInQuery in the .NET SDK)
or just fetching the dailyQuota first and then using that value in the second query.
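A rough sketch of the fetch-first approach with the Parse .NET SDK (inside an async method; the "Settings" class holding the managed quota is hypothetical, while dailySold and dailyQuota mirror the question):

// Fetch the managed quota once...
var settings = await ParseObject.GetQuery("Settings").FirstAsync();
int dailyQuota = settings.Get<int>("dailyQuota");

// ...then compare against that fetched constant server-side.
var belowQuota = await ParseObject.GetQuery("Salespeople")
    .WhereLessThan("dailySold", dailyQuota)
    .FindAsync();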
I have been using C# Deedle Frame and Series objects for managing financial data; it works great, and it really changes the way time series are handled in C#.
But due to the immutable implementation, each time you modify the collection a copy is made. I am appending daily points to a series, and I wondered what would be the best way to keep adding rows to my Frame<TRowKey, TColumnKey> without hurting performance. (I am currently using the Append method.)
I expect the amount of copying to grow roughly like nbAddedRows^2 * nbColumns 😩
Thanks,
I would suggest trying to create a new frame with the data you want to append. Then, instead of using the Append method try using one of the various Join methods available.
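A rough sketch of that idea: batch the new points and fold them in periodically, so the big frame is copied once per batch rather than once per row. Frame.FromRows and Merge (the row-wise union in recent Deedle; older versions call it Append) are assumptions about the C# API surface, and the batch size is illustrative:

using System;
using System.Collections.Generic;
using Deedle;

var pending = new List<KeyValuePair<DateTime, Series<string, double>>>();

void AddDailyPoint(DateTime date, Series<string, double> row,
                   ref Frame<DateTime, string> frame)
{
    pending.Add(new KeyValuePair<DateTime, Series<string, double>>(date, row));
    if (pending.Count >= 100)                 // flush threshold (illustrative)
    {
        var batch = Frame.FromRows(pending);  // build one small frame
        frame = frame.Merge(batch);           // one copy per batch
        pending.Clear();
    }
}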
What is the best way to retrieve "X" random records using Entity Framework (EF5, if it's relevant)? The value of "X" will be set based on where this will be used.
Is there a method for doing this built into EF, or is it best to pull down a result set and then use a C# random number function to pick the records? Or is there a method that I'm not thinking of?
On the off chance that it's relevant: I have a table that stores images that I use for different purposes (there is a FK to an image type table). The images that I use in my carousel on the homepage are what I'm wanting to add some variety to; consequently, how "random" it is doesn't matter to me much. I'm just trying to get away from the same six or so pictures always being displayed. (Also, I'm not really interested in debating/discussing storing images in a table vs local storage.)
The solution needs to be one using EF via a LINQ statement. If this isn't directly possible, I may end up doing something SIMILAR to what #cmd has recommended in the comments. This would most likely be a matter of retrieving a record count, testing the PK to make sure the resulting object wasn't null, and building a LIST of the X number of objects' PKs to pass to the front end. The carousel lazy-loads the images, so I don't actually need the image itself when I'm building the list that will be used by the carousel.
Can you just add an ORDER BY RAND() clause to your query?
See this related question: MySQL: Alternatives to ORDER BY RAND()
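For a LINQ-to-Entities route, a widely used idiom is ordering by a fresh GUID, which the SQL Server provider turns into ORDER BY NEWID(). A hedged sketch (Images, ImageTypeId, carouselTypeId and x are illustrative stand-ins built from the question; Guid.NewGuid() translation is solid in EF6 but may not work on EF5, where a raw SQL query with ORDER BY NEWID() is the fallback):

var randomIds = context.Images
    .Where(i => i.ImageTypeId == carouselTypeId) // hypothetical FK filter
    .OrderBy(i => Guid.NewGuid())                // becomes ORDER BY NEWID()
    .Take(x)
    .Select(i => i.Id)   // the carousel lazy-loads, so PKs are enough
    .ToList();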
Can anyone please explain the concept of map-reduce, particularly in Mongo?
I also use C# so any specifics in that area would also be useful.
One way to understand Map-Reduce coming from C# and LINQ is to think of it as a SelectMany() followed by a GroupBy() followed by an Aggregate() operation.
In a SelectMany() you are projecting a sequence, but each element can become multiple elements. This is equivalent to using multiple emit statements in your map operation. The map operation can also choose not to call emit, which is like having a Where() clause inside your SelectMany() operation.
In a GroupBy() you are collecting elements with the same key, which is what Map-Reduce does with the key value that you emit from the map operation.
In the Aggregate() or reduce step, you are taking the collections associated with each group key and combining them in some way to produce one result for each key. Often this combination is simply summing the '1' values emitted with each key from the map step, but sometimes it's more complicated.
One important caveat with MongoDB's map-reduce is that the reduce operation must accept and output the same data type because it may be applied repeatedly to partial sets of the grouped data. If you are passed an array of values, don't simply take the length of it because it might be a partial result from an earlier reduce operation.
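To make the analogy concrete, here's an illustrative word count in plain LINQ (using System; using System.Linq;), where SelectMany plays the map (one element emitted per word), GroupBy the shuffle by key, and the final projection the reduce:

var docs = new[] { "the quick fox", "the lazy dog" };

var counts = docs
    .SelectMany(d => d.Split(' '))    // map: emit one entry per word
    .GroupBy(word => word)            // group the emitted keys
    .Select(g => new { Word = g.Key, Count = g.Count() }); // reduce

foreach (var c in counts)
    Console.WriteLine($"{c.Word}: {c.Count}");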
Here's a spot to get started with Map Reduce in Mongo. The cookbook has a few examples; I would focus on these two.
I like to think of map-reduces in the context of "data warehousing jobs" or "rollups". You're basically taking detailed data and "rolling up" a smaller version of that data.
In SQL you would normally do this with sum() and avg() and group by. In MongoDB you would do this with a Map Reduce. The basic premise of a Map Reduce is that you have two functions.
The first function (map) is basically a giant for loop that runs over your data and "emits" certain keys and values. The second function (reduce) is a giant loop over all of the emitted data. The map says "hey, this is the data you want to summarize" and the reduce says "hey, this array of values reduces to this single value".
The output from a map-reduce can come in many forms (typically flat files). In MongoDB, the output is actually a new collection.
C# Specifics
In MongoDB, all of the Map Reduces are performed inside of the JavaScript engine, so both the map and reduce functions are written in JavaScript. The various drivers will allow you to build the JavaScript and issue the command; however, this is not how I normally do it.
The preferred method for running Map Reduce jobs is to put the JS into a file and then run it with the shell: mongo map_reduce.js. Generally you'll do this on the server somewhere as a cron job or a scheduled task.
Why?
Well, map reduce is not "real-time", especially with a big data set. It's really designed to be used in a batch fashion. Don't get me wrong, you can call it from your code, but generally you don't want users to initiate map reduce jobs. Instead you want those jobs to be scheduled, and you want users to be querying the results :)
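If you do want to issue one from code, the rough shape with the legacy 1.x C# driver is below; the exact MapReduce overloads vary across driver versions, so treat the call as an assumption. The map and reduce bodies stay JavaScript either way:

// Map/reduce bodies are JavaScript strings even when issued from C#.
var map = new BsonJavaScript("function() { emit(this.category, 1); }");
var reduce = new BsonJavaScript("function(key, values) { return Array.sum(values); }");

var result = collection.MapReduce(map, reduce); // legacy-driver call shape
foreach (var doc in result.GetResults())
    Console.WriteLine(doc.ToJson());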
Map Reduce is a way to process data where you have a map stage/function that identifies all the data to be processed and processes it, row by row.
Then you have a reduce step/function that can be run multiple times, for example once per server in a cluster and then once more on the client to return a final result.
Here is a Wikipedia article describing it in more detail:
http://en.wikipedia.org/wiki/MapReduce
And here is the MongoDB documentation for MapReduce:
http://www.mongodb.org/display/DOCS/MapReduce
Simple example: find the longest string in a list.
The map step will loop over the list calculating the length of each string; the reduce step will loop over the result from the map and, for each entry, keep the longest one.
This can of course be much more complex, but that's the essence of it.
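For illustration only, that same shape in C# (not MongoDB-specific):

using System;
using System.Linq;

var strings = new[] { "a", "bbb", "cc" };

// map: process the list row by row into (value, length) pairs
var mapped = strings.Select(s => new { Value = s, Length = s.Length });

// reduce: fold the mapped results down to a single winner; a reduce
// like this can safely re-run over partial results from other servers
var longest = mapped.Aggregate((acc, x) => x.Length > acc.Length ? x : acc);

Console.WriteLine(longest.Value); // prints "bbb"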