I have tried the code snippet below in a .NET console app and am able to see the forecasted predictions.
var context = new MLContext();
Entities db = new Entities();
// _energy_hourly_for_ml : new table
string query = "select cast([Date] as datetime) [Timestamp], cast(Energy as real) [Data] from _energy_hourly_for_ml";
var mldata = db.Database.SqlQuery<TimeSeriesInput>(query).AsEnumerable();
var data = context.Data.LoadFromEnumerable(mldata);
var pipeline = context.Forecasting.ForecastBySsa(
nameof(TimeSeriesOutput.Forecast),
nameof(TimeSeriesInput.Data),
windowSize: 5,
seriesLength: 10,
trainSize: 100,
horizon: 4); // number of future predictions to produce
var model = pipeline.Fit(data);
var forecastingEngine = model.CreateTimeSeriesEngine<TimeSeriesInput, TimeSeriesOutput>(context);
var forecasts = forecastingEngine.Predict();
Now I want to save the entire code above in a database. What I want to do is:
fetch the code from the database
execute it dynamically
fetch the forecasted predictions as the output of that execution
display them on the view
Let me know of any reference pointers on this, please.
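One hedged sketch of the "execute it dynamically" step, assuming the stored text is a C# script and that the Microsoft.CodeAnalysis.CSharp.Scripting NuGet package is referenced (the class and method names here are hypothetical, not from the original post):

```csharp
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

public static class StoredScriptRunner
{
    // Runs a C# script fetched from the database. The script text is expected
    // to end in an expression whose value is the forecast output (e.g. a float[]).
    public static async Task<object> RunAsync(string scriptText)
    {
        var options = ScriptOptions.Default
            // Make ML.NET visible inside the script.
            .AddReferences(typeof(Microsoft.ML.MLContext).Assembly)
            .AddImports("Microsoft.ML", "Microsoft.ML.Transforms.TimeSeries");

        return await CSharpScript.EvaluateAsync(scriptText, options);
    }
}
```

The controller could then pass the returned forecast to the view. Be aware that executing arbitrary code stored in a database is a security risk; an alternative worth considering is storing only the trained model (via context.Model.Save) and loading it at prediction time instead of re-running source code.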
I have a spike-detection setup in ML.NET configured with 99% confidence, but it reports a wrong anomaly.
I use the code below to detect it:
var mlcontext = new MLContext();
var dataView = mlcontext.Data.LoadFromEnumerable(list);
string outputColumnName = nameof(IidSpikePrediction.Prediction);
string inputColumnName = nameof(TimeSeriesData.VALUE);
var transformedData = mlcontext.Transforms.DetectIidSpike(outputColumnName, inputColumnName, 99, list.Count / 4).Fit(dataView).Transform(dataView);
var predictionColumn = mlcontext.Data.CreateEnumerable<IidSpikePrediction>(transformedData, reuseRowObject: false);
Here is the result: the count goes from 5 to 6, and that point is flagged as an alert, but it is not one.
How can this issue be explained?
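Each IidSpikePrediction row carries a three-element vector (alert flag, raw value, p-value), so a quick way to see why a point was flagged is to print the p-values. A fragment continuing the snippet above, assuming the usual double[] Prediction field on the prediction class:

```csharp
// Prediction[0] is the alert flag (0 or 1), Prediction[1] the raw value,
// Prediction[2] the p-value computed for that point.
foreach (var p in predictionColumn)
{
    Console.WriteLine($"alert={p.Prediction[0]} value={p.Prediction[1]} p-value={p.Prediction[2]:F4}");
}
```

With a short history window (list.Count / 4 here), a small jump from 5 to 6 can still produce a p-value below the threshold; lengthening pvalueHistoryLength or raising the confidence makes the detector less sensitive to such jumps.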
I'm trying to upgrade my model from ML.NET 0.5 to 0.6 and I have a question.
I copy-pasted an example from the ML.NET Cookbook that says:
// Create a new environment for ML.NET operations. It can be used for
// exception tracking and logging, as well as the source of randomness.
var env = new LocalEnvironment();
// Create the reader: define the data columns and where to find them in the text file.
var reader = TextLoader.CreateReader(env, ctx => (
// We read the first 11 values as a single float vector.
FeatureVector: ctx.LoadFloat(0, 10),
// Separately, read the target variable.
Target: ctx.LoadFloat(11)
),
// Default separator is tab, but we need a comma.
separator: ',');
// Now read the file (remember though, readers are lazy, so the actual
// reading will happen when the data is accessed).
var data = reader.Read(new MultiFileSource(dataPath));
So I started implementing it in my model:
using System;
using Microsoft.ML.Legacy;
using Microsoft.ML.Legacy.Data;
using Microsoft.ML.Legacy.Transforms;
using Microsoft.ML.Legacy.Trainers;
using Microsoft.ML.Legacy.Models;
using Microsoft.ML.Runtime.Data;
public static PredictionModel<CancerData, CancerPrediction> Train()
{
var pipeline = new LearningPipeline();
//0.6 way to upload data into model
var env = new LocalEnvironment();
var reader = Microsoft.ML.Runtime.Data.TextLoader.CreateReader(env, ctx => (
FeatureVector: ctx.LoadFloat(0, 30),
Target: ctx.LoadText(31)
),
separator: ';');
var data = reader.Read(new MultiFileSource("Cancer-Train.csv"));
//pipeline.Add(new TextLoader("Cancer-Train.csv").CreateFrom<CancerData>(useHeader: true, separator: ';'));
pipeline.Add(new Dictionarizer(("Diagnosis", "Label")));
pipeline.Add(data); // doesn't work; I just wrote this to show what I want to do
//below the 0.5 way to load data into pipeline!
//pipeline.Add(new ColumnConcatenator(outputColumn: "Features",
// "RadiusMean",
// "TextureMean",
// .. and so on...
// "SymmetryWorst",
// "FractalDimensionWorst"));
pipeline.Add(new StochasticDualCoordinateAscentBinaryClassifier());
pipeline.Add(new PredictedLabelColumnOriginalValueConverter() { PredictedLabelColumn = "PredictedLabel" });
PredictionModel<CancerData, CancerPrediction> model = pipeline.Train<CancerData, CancerPrediction>();
model.WriteAsync(modelPath);
return model;
}
The question is: how do I add var data to my existing pipeline? What do I need to do so that var data from version 0.6 works with the 0.5 pipeline?
I don't think the LearningPipeline APIs are compatible with the new static typing APIs (e.g. TextLoader.CreateReader). The cookbook helps to show the new APIs for training and also other scenarios like using the model for predictions. This test might also be helpful for binary classification.
For your code specifically, I believe the training code would look something like:
var env = new LocalEnvironment();
var reader = Microsoft.ML.Runtime.Data.TextLoader.CreateReader(env, ctx => (
FeatureVector: ctx.LoadFloat(0, 30),
Target: ctx.LoadBool(31)
),
separator: ';');
var data = reader.Read(new MultiFileSource("Cancer-Train.csv"));
BinaryClassificationContext bcc = new BinaryClassificationContext(env);
var estimator = reader.MakeNewEstimator()
.Append(row => (
label: row.Target,
features: row.FeatureVector.Normalize()))
.Append(row => (
row.label,
score: bcc.Trainers.Sdca(row.label, row.features)))
.Append(row => (
row.label,
row.score,
predictedLabel: row.score.predictedLabel));
var model = estimator.Fit(data);
So for sqlite I would just do:
var d = new ConnectionHandler();
var writeData =
    "UPDATE `Books` SET Book_Author = @Book_Author WHERE ID = 1";
d.OpenCnx();
using (var cmd = new SQLiteCommand(writeData, d.cnx))
{
cmd.Parameters.AddWithValue("@Book_Author", "New Book owner");
cmd.ExecuteNonQuery();
}
but for MongoDB in C#, what would be the equivalent, or how would I do this?
You can get some inspiration for updating a document from the quick-tour documentation on the MongoDB site: http://mongodb.github.io/mongo-csharp-driver/2.5/getting_started/quick_tour/
Just look at the Updating Documents section.
But to save you a click, you can use the following code as inspiration:
// Setup the connection to the database
var client = new MongoClient("mongodb://localhost");
var database = client.GetDatabase("Library");
var collection = database.GetCollection<BsonDocument>("books");
// Create a filter to find the book with ID 1
var filter = Builders<BsonDocument>.Filter.Eq("ID", 1);
var update = Builders<BsonDocument>.Update.Set("Book_Author", "New Book owner");
collection.UpdateOne(filter, update);
I hope this helps.
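If you also want to verify that the update took effect, UpdateOne returns an UpdateResult you can inspect; a small follow-on sketch using the same hypothetical Library/books names as above:

```csharp
// Run the update and check how many documents were matched and modified.
var result = collection.UpdateOne(filter, update);
Console.WriteLine($"matched={result.MatchedCount}, modified={result.ModifiedCount}");
```

MatchedCount of 0 would mean no document with ID 1 exists, which is a different failure mode than the update silently writing the same value (matched 1, modified 0).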
I'm using Intuit's .NET SDK for QBO, and am attempting to get a list of purchases from my qbo account. Here's my C# code...
var qboCashPurchaseQuery = new Intuit.Ipp.Data.Qbo.CashPurchaseQuery();
qboCashPurchaseQuery.PageNumber = 1;
qboCashPurchaseQuery.ResultsPerPage = 100;
var results = qboCashPurchaseQuery.ExecuteQuery<Intuit.Ipp.Data.Qbo.CashPurchase>(context).ToList();
grdQuickBooksCustomers.DataSource = results;
I get a table with the number of records matching the number of purchases I expect, but the actual data returned is not what I need.
These are the columns and the first row of data I get back (sorry for the formatting):
SyncToken    Synchronized    IdsType
0                            Intuit.Ipp.Data.Qbo
This bit of code returns the same kind of weird data for some other functions such as InvoiceQuery. What am I doing wrong here to get this data? How might I actually return the detailed purchase data I need?
string q = "Select * from Purchase Where PaymentType='Cash' STARTPOSITION 1 MAXRESULTS 100";
dynamic purchaseList = purchaseQueryService.ExecuteIdsQuery(q);
foreach (CashPurchase oCash in purchaseList) {
    // do something with each CashPurchase
}
var purchase = new Purchase();
var purchases = ds.FindAll(purchase, 0, 500);
That's how we query our datasets. You have to repeat the query if you've got more than 500 items.
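A rough paging sketch of "repeat the query", assuming (as the call above suggests) that FindAll takes the entity, a start position, and a page size; adjust to the SDK's actual signature if it differs:

```csharp
// Keep fetching 500-item pages until a short page signals the end.
var purchase = new Purchase();
var all = new List<Purchase>();
const int pageSize = 500;
int start = 0;
while (true)
{
    var page = ds.FindAll(purchase, start, pageSize).ToList();
    all.AddRange(page);
    if (page.Count < pageSize) break; // short page: no more results
    start += pageSize;
}
```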
I'm trying to do a select against a Mongo database.
I'm using these DLLs:
MongoDB.Bson, MongoDB.Driver, MongoDB.Driver.Linq
My table has more than 55k rows.
After some time, this error occurs:
Cursor Not Found
Here is my code:
var client = new MongoClient(connectionString);
var server = client.GetServer();
var database = server.GetDatabase("Database");
var collection = database.GetCollection<DesktopSessions>("desktop_sessions");
var query = (from e in collection.AsQueryable<DesktopSessions>()
where e.created_at > new DateTime(2012, 7, 1)
select e);
foreach (var item in query)
{
string id = item._id.ToString();
}
How can I solve this problem?
I changed my code to this
var collection = database.GetCollection<DesktopSessions>("desktop_sessions");
var queryM = Query.GTE("created_at", new BsonDateTime(new DateTime(2012,7,1)));
var cursor = collection.Find(queryM);
cursor.SetFlags(QueryFlags.NoCursorTimeout);
It works!
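For reference, with the newer 2.x driver API (IMongoCollection rather than GetServer), the same flag can be set through FindOptions; a hedged sketch, assuming a filter built as in the snippet above:

```csharp
// Newer (2.x) driver: disable the server-side cursor timeout for this query.
var options = new FindOptions { NoCursorTimeout = true };
var results = collection.Find(filter, options).ToList();
```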
Another option is to set the timeout for the whole database, which is what I am doing.
You can do it in configuration, on the command line, in the mongo shell, or even from C#.
See here:
https://jira.mongodb.org/browse/SERVER-8188
This is the solution I am currently using in my init class:
var db = this.MongoClient.GetDatabase("admin");
var cmd = new BsonDocumentCommand<BsonDocument>(new BsonDocument {
{ "setParameter", 1 },
{ "cursorTimeoutMillis", 3600000 }
});
db.RunCommand(cmd);
More information can be found here:
https://docs.mongodb.com/v3.0/reference/parameters/#param.cursorTimeoutMillis