Convert TimeSpan string to seconds in Elasticsearch / Kibana - C#

We use Elasticsearch as one of the logging sinks for our C# Web API project.
We have log messages with TimeSpan information passed as a parameter to Elasticsearch.
Something like this:
Stopwatch stopwatch = Stopwatch.StartNew();
... // do something we want to measure
stopwatch.Stop();
_logger.LogDebug("Duration for doing: {doingDuration}", stopwatch.Elapsed);
So in Kibana we can see the log message template:
"Duration for doing: {doingDuration}"
and the variable:
labels.doingDuration (type: string)
Now we're trying to visualize this as a line chart of doing durations (y-axis) against the timestamp of the entry (x-axis).
Is it possible to parse doingDuration to a number in Elasticsearch / Kibana (data view, index template, or similar), or do I need to send those TimeSpan values as TotalSeconds from C# to Elastic?
I've seen the grok filter for Logstash, but is it applicable to log messages sent directly to indices?
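For clarity, the TotalSeconds alternative I mean would look roughly like this (the field name doingDurationSeconds is just illustrative):

Stopwatch stopwatch = Stopwatch.StartNew();
// ... do something we want to measure
stopwatch.Stop();

// Logging the double instead of the TimeSpan would let Elasticsearch
// map the field as a number, which Kibana can chart directly.
_logger.LogDebug("Duration for doing: {doingDurationSeconds}",
    stopwatch.Elapsed.TotalSeconds);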

Related

How to ingest a large amount of logs in ASP.NET Web API

I am new to API development and I want to create a Web API endpoint which will receive a large amount of log data, which I then want to send to an Amazon S3 bucket via an Amazon Kinesis delivery stream. Below is a sample application which works fine, but I have no clue how to ingest a large inbound volume of data, in what format my API should receive it, or what my API endpoint should look like.
[HttpPost]
public async Task Post() // HOW to allow it to receive a large chunk of data?
{
    await WriteToStream();
}

private async Task WriteToStream()
{
    const string myStreamName = "test";
    Console.Error.WriteLine("Putting records in stream : " + myStreamName);
    // Write 10,000 UTF-8 encoded records to the stream.
    for (int j = 0; j < 10000; ++j)
    {
        // I AM HARDCODING DATA HERE FROM THE LOOP COUNTER!!!
        byte[] dataAsBytes = Encoding.UTF8.GetBytes("testdata-" + j);
        using (MemoryStream memoryStream = new MemoryStream(dataAsBytes))
        {
            PutRecordRequest putRecord = new PutRecordRequest();
            putRecord.DeliveryStreamName = myStreamName;
            Record record = new Record();
            record.Data = memoryStream;
            putRecord.Record = record;
            await kinesisClient.PutRecordAsync(putRecord);
        }
    }
}
P.S.: In the real-world app I will not have that for loop. I want my API to ingest large data; what should the definition of my API be? Do I need to use something like multipart/form-data or a file upload? Please guide me.
Here is my thought process. As you are exposing an API for logging, your input should contain the attributes below:
Log Level (info, debug, warn, fatal)
Log Message (string)
Application ID
Application Instance ID
Application IP
Host (machine on which the error was logged)
User ID (for whom the error occurred)
Timestamp in UTC (time at which the error occurred)
Additional Data (customizable as XML / JSON)
I would suggest exposing the API as an AWS Lambda via API Gateway, as it will help with scaling out as the load increases.
For a sample of how to build the API and use model binding, you may refer to https://learn.microsoft.com/en-us/aspnet/web-api/overview/formats-and-model-binding/model-validation-in-aspnet-web-api
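As a rough illustration, a request model covering those attributes might look something like this (class and property names are my own suggestion, not a fixed schema):

// Hypothetical request model matching the attributes listed above.
public class LogEntryRequest
{
    public string LogLevel { get; set; }            // info, debug, warn, fatal
    public string LogMessage { get; set; }
    public string ApplicationId { get; set; }
    public string ApplicationInstanceId { get; set; }
    public string ApplicationIp { get; set; }
    public string Host { get; set; }                // machine on which the error was logged
    public string UserId { get; set; }              // for whom the error occurred
    public DateTime TimestampUtc { get; set; }
    public string AdditionalData { get; set; }      // XML / JSON payload
}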
I don't have much context, so I will try to answer based on how I see it.
First, instead of sending data to the Web API, I would send the data directly to S3. In Azure there is the shared access signature: you send a request to your API asking for a URL to upload the file to (there are many options, but you can limit by time or by the IP allowed to upload). So to upload a file: 1. make a call to get the upload URL, 2. PUT to that URL. It looks like in Amazon this is called a signed policy (or a pre-signed URL).
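A rough sketch of the "get an upload URL" step with the AWS SDK for .NET's pre-signed URL support (the bucket name and expiry are illustrative assumptions):

using Amazon.S3;
using Amazon.S3.Model;

// The API hands out a short-lived URL; the client then PUTs the
// file straight to S3, so it never passes through the Web API server.
string GetUploadUrl(IAmazonS3 s3, string fileName)
{
    var request = new GetPreSignedUrlRequest
    {
        BucketName = "my-log-bucket",            // illustrative name
        Key = "uploads/" + fileName,
        Verb = HttpVerb.PUT,
        Expires = DateTime.UtcNow.AddMinutes(15) // illustrative expiry
    };
    return s3.GetPreSignedURL(request);
}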
After that, write a Lambda function which is triggered on the S3 upload. This function sends an event (again, I don't know how it works in AWS, but in Azure I would send a Blob queue message) containing the URL of the file and a start position.
Write a second Lambda which listens for these events and does the actual processing. In my apps I sometimes know that processing N items takes 10 seconds, so I usually choose N so that a batch takes no longer than 10-20 seconds, due to the nature of deployments. After you have processed N rows and are not yet finished, send the same event again, but now with start position = the original start position plus N. (More info on how to read a byte range is sketched below.)
Designing it this way, you can process large files; even better, you can be smarter, because you can send multiple events specifying a start line and an end line, so you can process your file on multiple instances.
PS: The reason I would not recommend uploading files to the Web API is that those files will sit in memory. Say you have 1 GB files arriving from multiple sources; in that case you will kill your servers in minutes.
PS2: The format of the file depends. It could be JSON, since that is the easiest way to read those files, but keep in mind that with large files it is expensive to read the whole file into memory. Here is an example of how to read them properly. The other option is a flat file, which is then easy to read, since you can read a range and process it.
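A hedged sketch of such a ranged read from S3 with the AWS SDK for .NET (bucket and key names are illustrative assumptions; the ByteRange on GetObjectRequest is the relevant part):

using Amazon.S3;
using Amazon.S3.Model;

// Read only bytes [start, end] of the object, so a large file never
// has to be loaded into memory in one piece.
async Task<string> ReadChunkAsync(IAmazonS3 s3, long start, long end)
{
    var request = new GetObjectRequest
    {
        BucketName = "my-log-bucket",        // illustrative
        Key = "uploads/large-log-file.txt",  // illustrative
        ByteRange = new ByteRange(start, end)
    };

    using (GetObjectResponse response = await s3.GetObjectAsync(request))
    using (var reader = new StreamReader(response.ResponseStream))
    {
        return await reader.ReadToEndAsync();
    }
}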
PS3: In Azure I would use Azure Batch jobs.

Does QueryRequest's Execute() method return the entire result (and not paginate it)?

When I execute my QueryRequest object, I get a totalRows of around 110,000, while the response contains only around 38,000 rows. So I am not receiving the entire result and must do paging.
But I see that my QueryRequest object has no startIndex property.
How can I receive the entire result set?
I am using a Premium version of Google Analytics. Does Google still return 10MB of data with each request?
UPDATE: I don't think my question is a duplicate. What I meant by the question was how can I get a specific page of results when my QueryRequest has no startIndex property.
JobsResource j = null;
QueryRequest qr = null;
...
qr.Query = "SELECT examplecolumns FROM myTable";
QueryResponse response = j.Query(qr, projectId).Execute();
Call the getQueryResults() method to fetch the rest of the results.
https://cloud.google.com/bigquery/docs/reference/v2/jobs/getQueryResults
You cannot return the entire result set in one response because Google has general quota limits for all of their APIs:
General Quota Limits (All APIs)
The following quota limits are shared between the Management API, Core Reporting API, MCF Reporting API, Metadata API, and Real Time Reporting API.
50,000 requests per project per day – can be increased
10 queries per second (QPS) per IP.
In the Developers Console this quota is referred to as the per-user limit. By default, it is set to 1 query per second (QPS) and can be adjusted to a maximum value of 10. If the per-user limit is set to a value larger than 10 QPS, the Google Analytics quota policy will still take effect and limit per-user requests to 10 QPS.
If your application makes all API requests from a single IP address (i.e. on behalf of your users) you should consider using the userIP or quotaUser parameters with each request to get full QPS quota for each user. See the query parameters summary for details.
For more information have a look at this link: Configuration and Reporting API Limits and Quotas
You can also find more information on the subject here: Querying Data
The following additional limits apply for querying data.
Maximum tables per query: 1,000
Maximum query length: 256 KB
The query / getQueryResults methods are used to push some of the waiting for job completion into the server. Clients may see faster notification of job complete when using this mechanism, and will receive the first page of the query results in that response, avoiding the need for one additional round trip to fetch the data.
The general mechanism for using these apis, in pseudo code, is:
response = query(...)
while (!response.jobComplete) {
    response = getQueryResults(response.jobReference);
}
moreData = false
do {
    // consume response.rows
    moreData = (response.pageToken != null)
    if (moreData) {
        response = getQueryResults(response.jobReference, response.pageToken)
    }
} while (moreData)
Note that this will be more difficult to code in some type-safe languages, as the first response in that do loop may be a QueryResponse or a GetQueryResultsResponse type, depending on whether the query job finished within the initial timeout on the query() call or within the while (!response.jobComplete) polling loop.
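A hedged C# sketch of that pattern against the Google.Apis.Bigquery.v2 client (service, sql, and projectId are assumed to exist; check the exact property names against your client library version):

// Run the query, poll until the job completes, then follow
// PageToken to collect every page of results.
var queryResponse = service.Jobs
    .Query(new QueryRequest { Query = sql }, projectId)
    .Execute();
string jobId = queryResponse.JobReference.JobId;

GetQueryResultsResponse page = service.Jobs.GetQueryResults(projectId, jobId).Execute();
while (page.JobComplete != true)
{
    page = service.Jobs.GetQueryResults(projectId, jobId).Execute();
}

var allRows = new List<TableRow>();
while (true)
{
    if (page.Rows != null) allRows.AddRange(page.Rows);
    if (string.IsNullOrEmpty(page.PageToken)) break;

    var next = service.Jobs.GetQueryResults(projectId, jobId);
    next.PageToken = page.PageToken; // fetch the next page
    page = next.Execute();
}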

How to use the Gmail API query filter for datetime

I am using the Gmail API over its REST interface to read mails from the Gmail server. My problem is that when I use a date filter such as 'after:2014/8/20 before:2014/8/22', only the mails from 2014/8/20 12:30 PM onwards are downloaded (ideally it should consider mails from 12:00 AM). Mails from 12:00 AM until 12:30 PM are skipped. I think the server is using the PST time zone.
Can I specify a time in the filter? Or is there a way to specify the time zone so that I get all the mails?
code used:
UsersResource.MessagesResource.ListRequest request = gmailServiceObj.Users.Messages.List(userEmail);
string query = "after:" + FromDate.Date.ToString("yyyy/M/dd") +
               " before:" + ToDate.Date.ToString("yyyy/M/dd") +
               " label:" + LabelID;
request.Q = query;
ListMessagesResponse response = request.Execute();
Thanks,
Haseena
The behavior of the API in this regard should be the same as the web UI; can you verify whether that's the case? The search query params are listed here:
https://support.google.com/mail/answer/7190?hl=en
It seems odd that it wouldn't deliver emails between 12:00 AM and 12:30 PM. What timezone is your client in? What is the timezone preference set to in the Gmail web interface for that user? You could try changing that preference and see if it helps. If not, one workaround I can think of is to make the filter start from the day before and do the rest of the filtering client-side, as ugly as that is... :-/
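A hedged sketch of that workaround, reusing the question's variables: widen the query by a day on each side, then filter on each message's InternalDate, which the Gmail API reports in milliseconds since the Unix epoch, UTC (assumes .NET 4.6+ for DateTimeOffset.FromUnixTimeMilliseconds):

// Widen the window by a day on each side, then filter client-side.
var request = gmailServiceObj.Users.Messages.List(userEmail);
request.Q = "after:" + FromDate.Date.AddDays(-1).ToString("yyyy/M/dd") +
            " before:" + ToDate.Date.AddDays(1).ToString("yyyy/M/dd");
ListMessagesResponse response = request.Execute();

// The window we actually want, expressed in UTC.
var fromUtc = new DateTimeOffset(FromDate.Date, TimeSpan.Zero);
var toUtc = new DateTimeOffset(ToDate.Date, TimeSpan.Zero);

if (response.Messages != null)
{
    foreach (var stub in response.Messages)
    {
        // List only returns IDs; Get each message for its InternalDate.
        var message = gmailServiceObj.Users.Messages.Get(userEmail, stub.Id).Execute();
        var receivedUtc = DateTimeOffset.FromUnixTimeMilliseconds(message.InternalDate ?? 0);
        if (receivedUtc >= fromUtc && receivedUtc < toUtc)
        {
            // message falls inside the desired UTC window
        }
    }
}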
It does appear that the timezone being used when processing these queries is always PST. There is currently no way to specify the timezone in the request, or have it use the timezone of the account. I'll follow up with the engineering team to come up with a resolution.

How to access the server date in Dynamics CRM 2011 through JavaScript or a plugin?

I want to get the server date in Dynamics CRM 2011.
Through new Date() I am getting only the local system time.
Your question really isn't very clear, Hashim. You are trying new Date();, which is JScript. As this runs client-side, it will not ask the server for the date, and hence you get the local system time rather than the server time.
You will need code that executes on the server side to get the server time. One solution is a plugin, in which you would simply use var myDate = DateTime.Now; to get the current server time.
You haven't elaborated on how you wish to use this so I can't tell you what to do with this value now you have it - but perhaps you are using it to populate an entity attribute, e.g. in a pre-update plugin:
if (pluginContext.InputParameters.Contains("Target") &&
    pluginContext.InputParameters["Target"] is Entity)
{
    // Obtain the target entity from the input parameters.
    var entity = (Entity)pluginContext.InputParameters["Target"];

    // Get the current server date/time and stamp it on the record.
    var now = DateTime.Now;
    entity.Attributes.Add("new_mydatetimefield", now);
}

Replacing Timer text with result of DB calculation when Timer hits 00:00:00

I have a jQuery countdown timer, and I need to be able to access my database, perform some calculations, and then return the result:
$('#expireMessage').countdown({until: shortly,
    expiryText: '<div class="over">It\'s all over</div>'});
$('#expireMessageStart').click(function() {
    shortly = new Date();
    shortly.setSeconds(shortly.getSeconds() + 5.5);
    $('#expireMessage').countdown('change', {until: shortly});
});
Now, the above code just displays a countdown timer and counts down. When it hits
00:00:00
it displays the message "It's all over".
But what I need it to do is display a different message depending on the result of the DB calculations.
The DB work I can do, but I'm not sure how to go about retrieving that info from the database when using jQuery. I'd really appreciate your help.
Thank you
You need to set up something on the server side to talk to the database for you and return the result in JSON format. What that something is depends on what your server-side code is written in. Are you using PHP? Java? ASP.NET?
I work primarily in ASP.NET, so one way I might tackle this is adding a WebMethod to my page that executes a database query, builds the message, serializes it to JSON, and returns it to the client; a sketch follows below.
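As a rough sketch only (the page class, method name, and query are illustrative, not a definitive implementation), such a page method might look like:

using System;
using System.Web.Services;
using System.Data.SqlClient;

public partial class CountdownPage : System.Web.UI.Page
{
    // Page methods are static; ASP.NET returns the value as JSON
    // when the request asks for application/json.
    [WebMethod]
    public static string GetMyMessage()
    {
        using (var connection = new SqlConnection("...your connection string..."))
        using (var command = new SqlCommand("SELECT ...", connection)) // your DB calculation here
        {
            connection.Open();
            object result = command.ExecuteScalar();
            return result == null ? "It's all over" : result.ToString();
        }
    }
}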
In your JavaScript, you'll then want to make either an XMLHttpRequest (if you're using plain JavaScript) or a jQuery AJAX request.
Here's a very simple example of what a jQuery AJAX call might look like:
$.ajax({
    url: 'http://mysite.com/getmymessage',
    success: function( data ) {
        // Here's where you'd update your countdown display, but I'm just writing to the console
        console.log( 'The server says: ' + data.myDbResult );
    }
});
