I am trying to get the latest value/rate for a given pair using the Kraken API, but I cannot really figure it out.
Does anyone know how to do it?
I am using the C# code provided on GitHub (https://github.com/trenki2/KrakenApi) and thought the following call was the way to go:
client.GetRecentTrades("XETHZEUR",ID)
However, I don't want to pass an ID, which seems to be optional according to the Kraken website.
I just want to know what is the current value, nothing more.
I have also used GetTicker to get the last trade, but it does not come with a timestamp and will not give the actual currency pair value.
Cheers
Both GetTicker and GetRecentTrades will give you the value of the last trade; you can use either, depending on what other data you need. There could be some small difference between them because Kraken most likely caches the results.
Of the two methods, only GetRecentTrades provides a timestamp.
Alternatively, instead of using the last trade, you can call GetOrderBook and calculate the average of the lowest ask price and the highest bid price.
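A minimal sketch of that mid-price idea in C#. GetOrderBook itself is named above, but the call signature and the shape of its result (Asks/Bids lists with a Price member) are assumptions about the library, so adjust them to the real types:

    // Hedged sketch: only GetOrderBook is taken from the answer above;
    // the depth argument and the Asks/Bids/Price member names are assumptions.
    var book = client.GetOrderBook("XETHZEUR", 1);   // depth 1: top of book only
    decimal bestAsk = book.Asks[0].Price;            // lowest ask (assumed shape)
    decimal bestBid = book.Bids[0].Price;            // highest bid (assumed shape)
    decimal mid = (bestAsk + bestBid) / 2m;          // current "value" of the pair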
I'm currently working in C# to put images in my API database (school project), and I want to store the ISO, aperture, and shutter speed as well. Those are stored in the metadata as APEX or EXIF values rather than the 'normal' values. I've already done some research and found a way to calculate the aperture: var value = Math.Round(Math.Pow(2,apexValue/2),1); but it produces rounding errors for some values (5.7 instead of 5.6, 22.6 instead of 22, ...), so I was wondering if there is an easy way to convert these to the values people know (aperture to f-stops, shutter speed to seconds, ISO to ISO values)?
At this point, I'm looking into the property item descriptions, but I'm a bit confused by them. Take aperture, for example: the property is called PropertyTagExifAperture, so you would think it is an EXIF value, but the description states: "Lens aperture. The unit is the APEX value." So which is it then, EXIF or APEX?
Thanks for your time!
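One way around the rounding problem, assuming the stored value really is APEX: do the exact power-of-two conversion and then snap to the nearest conventional f-number, since cameras display rounded nominal values (f/5.6, not f/5.66). A sketch:

    using System;
    using System.Linq;

    static double ApexToFNumber(double apexValue)
    {
        // Exact APEX conversion: N = 2^(Av/2), e.g. Av = 5 gives ~5.66
        double exact = Math.Pow(2.0, apexValue / 2.0);
        // Snap to the conventional third-stop scale people expect
        // (5.6 instead of 5.7, 22 instead of 22.6, ...)
        double[] standard = { 1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8,
                              3.2, 3.5, 4.0, 4.5, 5.0, 5.6, 6.3, 7.1, 8.0, 9.0,
                              10, 11, 13, 14, 16, 18, 20, 22, 25, 29, 32 };
        return standard.OrderBy(f => Math.Abs(f - exact)).First();
    }

Shutter speed works the same way in the time domain: t = 2^(-Tv) seconds, snapped to the usual 1/60, 1/125, ... scale.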
The Workfront API isn't returning the same results as our web report:
On our Workfront web front end, one of the reports has a date range from $$TODAYbw to $$TODAYe+6m, and it returns about 500 rows.
I tried the same query on the API like so (formatted for easier reading):
/v7.0/RSALLO/search
?fields=DE:project:Probability,allocationDate,scheduledHours,project:name,project:status,roleID,project:status,role:name
&allocationDate_Mod=between
&allocationDate=$$TODAYbw
&allocationDate_Range=$$TODAYe+6m
&AND:0:project:status_Mod=notin
&AND:0:project:status=CPL
&AND:0:project:status=DED
&AND:0:project:status=REJ
&AND:0:project:status=UZF
&AND:0:project:status=IDA
&AND:0:roleID_Mod=in
&AND:0:roleID=55cb58b8001cc9bc1bd9767e080f6c10
&AND:0:roleID=55cb58b8001cc9bd9fc0f8b03a581493
&AND:0:roleID=55cb58b8001cc9bfaa01243cd6024b6d
&AND:0:roleID=55cb58b8001cc9c0afa399dece405efd
&$$LIMIT=1000
which returned barely any results. Notice the &allocationDate_Range=$$TODAYe+6m line. If I change it to read =$$TODAY+6m, without the end-of-day modifier, the API returns ~500 rows.
I went through every filter criterion, and it's only the allocationDate range that goes wrong. I found this resource for the date modifiers, and there is no e+6m example in it, yet that combination works in our web front-end report.
Is the API flawed or is the web report doing something extra in the background?
I don't have an exact solution for your problem, but I can confirm that the API does have some difficulty parsing wildcards like the ones you're trying to use, and they don't always come out the way we expect. Furthermore, the API doesn't parse things the same way as text-mode reporting, so a query that looks great in the latter might return something different in the former.
If I may propose a different solution: since you're already coding this up outside of Workfront, I suggest you simply perform the date calculations yourself and pass explicit datetime values to Workfront instead of letting it apply its own logic. I know this doesn't answer the question of "what query will return exactly what I want", but it should give you the correct end result.
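As a rough C# illustration of that approach; treating $$TODAYbw as the beginning of the current week and assuming the API accepts ISO-style dates (both worth verifying against your instance):

    using System;

    // Compute the range locally instead of relying on $$TODAY wildcards.
    DateTime today = DateTime.Today;
    DateTime beginWeek = today.AddDays(-(int)today.DayOfWeek);      // ~$$TODAYbw (week assumed to start Sunday)
    DateTime endOfDay = today.AddMonths(6).AddDays(1).AddTicks(-1); // ~$$TODAYe+6m (end of that day)

    string query = "/v7.0/RSALLO/search"
                 + "?allocationDate_Mod=between"
                 + "&allocationDate=" + beginWeek.ToString("yyyy-MM-dd")
                 + "&allocationDate_Range=" + endOfDay.ToString("yyyy-MM-ddTHH:mm:ss");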
For what it's worth, I spent about 15 minutes trying to get an example working on my end and I gave up after it kept returning values which should have been outside of my own date range.
I am translating some C# code to Java and have chosen JodaTime's DateTime class to replace C#'s System.DateTime.
In C#, the DateTime type has fields called MaxValue and MinValue, which hold the largest and smallest possible values a DateTime can represent.
I am trying to achieve the same with the JodaTime API. I have read some suggestions in other posts.
This one, Java equivalent of .NET DateTime.MinValue, DateTime.Today, answers how to get today's date in JodaTime, but when it comes to the second half of the question, about minimum values, the answers turn to Calendar and Date.
Likewise, I have seen suggestions about passing a maximized long value as a constructor parameter, but that was criticized for depending on classes that might change in the future and might therefore not be compatible or accurate after API updates.
So, is there a single definitively correct way to do this? If not, is there at least a good way to achieve it?
Java 8's LocalDate has two such constants: LocalDate.MAX and LocalDate.MIN.
LocalDate.MAX - The maximum supported LocalDate, '+999999999-12-31'. This could be used by an application as a "far future" date.
LocalDate.MIN - The minimum supported LocalDate, '-999999999-01-01'. This could be used by an application as a "far past" date.
Note: these do not translate to Long.MIN_VALUE or Long.MAX_VALUE.
I suggest using Java 8 if you are migrating from C# and how date/time works is important to you, as it has closures AND a new date/time API based on JodaTime. This new API is the one you should be using if you are worried about the future of an API.
I think you can assume that Long.MIN_VALUE and Long.MAX_VALUE will never change, as they follow from the definition of signed 64-bit values. (How 64-bit values work was standardised before you were born, most likely.) You can also assume that Date will not change, as it hasn't changed much since it was released, and since it has been replaced there is even less reason to change it. In theory it might be deprecated, but in reality there is still too much code that uses it.
IMHO, I use a long to represent a time in milliseconds, à la System.currentTimeMillis(), and I use Long.MIN_VALUE and Long.MAX_VALUE.
If you are concerned about using a good API and future-proofing your code, I suggest you avoid Calendar. Not that it is all bad, but there are good reasons to want to replace it.
I'm trying to get the resource usage of a specific resource on a specific day.
I tried to use Resource.TimeScaleData, but it always returns 0 items.
While in debug mode, the Resource.TimephasedData ArrayList has 6118 items.
Perhaps there is some sample code for getting resource usage?
Can someone direct me on how to approach this?
Try this:
Dim tsv As TimeScaleValues
' Pull timescaled work for a single day (start and end are the same date)
Set tsv = ActiveProject.Resources("your resource").TimeScaleData(#6/6/2013#, #6/6/2013#)
amt = tsv(1).Value ' first (and only) period; the value is in minutes
This will give you the amount of forecast work for the resource for the given day, expressed in minutes. If you want baseline work, you'll need to add the third argument (e.g. pjResourceTimescaledBaselineWork).
Let's say I have a database filled with people with the following data elements:
PersonID (meaningless surrogate autonumber)
FirstName
MiddleInitial
LastName
NameSuffix
DateOfBirth
AlternateID (like an SSN, Military ID, etc.)
I get lots of data feeds in from all kinds of formats with every reasonable variation on these pieces of information you could think of. Some examples are:
FullName, DOB
FullName, Last 4 SSN
First, Last, DOB
When this data comes in, I need to write something to match it up. I don't need, or expect, to get more than an 80% match rate. After the automated match, I'll present the uncertain matches on a web page for someone to manually match.
Some of the complexities are:
Some data matches are better than others, and I would like to assign weight to those. For example, if the SSN matches exactly but the name is off because someone goes by their middle name, I would like to assign a much higher confidence value to that match than if the names match exactly but the SSNs are off.
The name matching has some difficulties. John Doe Jr is the same as John Doe II, but not the same as John Doe Sr., and if I get John Doe and no other information, I need to be sure the system doesn't pick one because there's no way to determine who to pick.
First name matching is really hard. You have Bob/Robert, John/Jon/Jonathon, Tom/Thomas, etc.
Just because I have a feed with FullName+DOB doesn't mean the DOB field is filled for every record. I don't want to miss a linkage just because the unmatched DOB kills the matching score. If a field is missing, I want to exclude it from the elements available for matching.
If someone manually matches, I want their match to affect all future matches. So, if we ever get the same exact data again, there's no reason not to automatically match it up next time.
I've seen that SSIS has fuzzy matching, but we don't use SSIS currently, and I find it pretty kludgy and nearly impossible to version control so it's not my first choice of a tool. But if it's the best there is, tell me. Otherwise, are there any (preferably free, preferably .NET or T-SQL based) tools/libraries/utilities/techniques out there that you've used for this type of problem?
There are a number of ways you can go about this, but having done this type of thing before, I will point out up front that you run a lot of risk of making "incorrect" matches between people.
Your input data is very sparse, and given what you have, it isn't very distinctive when not all values are present.
For example, in your First Name, Last Name, DOB situation, if you have all three parts for ALL records, the matching gets a LOT easier to work with. If not, though, you expose yourself to a lot of potential issues.
One approach you might take, on the more "crude" side of things, is to create a process using a series of queries that identifies and classifies matching entries.
For example, first check for an exact match on name and SSN; if it is there, flag it, note it as 100%, and move on to the next set. Then you can explicitly define where your matching is fuzzy, so you know the potential ramifications of each match.
In the end you would have a list with flags indicating the match type, if any, for each record.
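A bare-bones C# sketch of that weighted idea, which also handles the missing-field concern from the question by leaving absent fields out of the denominator. The Person shape and the weights are invented for illustration:

    using System;

    // Illustrative record; field names are assumptions, not the poster's schema.
    class Person
    {
        public string FirstName, LastName, Ssn;
        public DateTime? DateOfBirth;
    }

    static class Matcher
    {
        // Returns a confidence in [0, 1]. A field missing on either side is
        // excluded from 'possible', so it cannot drag the score down.
        public static double MatchScore(Person a, Person b)
        {
            double earned = 0, possible = 0;

            if (!string.IsNullOrEmpty(a.Ssn) && !string.IsNullOrEmpty(b.Ssn))
            {
                possible += 0.5;                       // SSN weighted heaviest
                if (a.Ssn == b.Ssn) earned += 0.5;
            }
            if (a.DateOfBirth.HasValue && b.DateOfBirth.HasValue)
            {
                possible += 0.3;
                if (a.DateOfBirth == b.DateOfBirth) earned += 0.3;
            }
            if (!string.IsNullOrEmpty(a.LastName) && !string.IsNullOrEmpty(b.LastName))
            {
                possible += 0.2;                       // first names would need a
                                                       // nickname-aware fuzzy compare
                if (string.Equals(a.LastName, b.LastName, StringComparison.OrdinalIgnoreCase))
                    earned += 0.2;
            }

            return possible == 0 ? 0 : earned / possible;
        }
    }

Anything scoring above a chosen threshold is auto-matched; the rest goes to the manual-review page.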
This is a problem called record linkage.
While it's for a python library, the documentation for dedupe gives a good overview of how to approach the problem comprehensively.
Take a look at the Levenshtein algorithm, which gives you the distance between two strings; that distance can then be divided by the length of the longer string to get a percentage match.
http://en.wikipedia.org/wiki/Levenshtein_distance
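A minimal C# sketch of that idea: the standard dynamic-programming distance, normalised by the longer string's length to get a percentage:

    using System;

    static int Levenshtein(string s, string t)
    {
        // d[i, j] = edits needed to turn the first i chars of s into the first j of t
        var d = new int[s.Length + 1, t.Length + 1];
        for (int i = 0; i <= s.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= t.Length; j++) d[0, j] = j;

        for (int i = 1; i <= s.Length; i++)
            for (int j = 1; j <= t.Length; j++)
            {
                int cost = s[i - 1] == t[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,   // deletion
                                            d[i, j - 1] + 1),  // insertion
                                   d[i - 1, j - 1] + cost);    // substitution
            }
        return d[s.Length, t.Length];
    }

    static double Similarity(string s, string t)
    {
        int max = Math.Max(s.Length, t.Length);
        return max == 0 ? 1.0 : 1.0 - (double)Levenshtein(s, t) / max; // 1.0 = identical
    }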
I have previously implemented this to great success. It was a provider portal for a healthcare company, and providers registered themselves on the site. The matching was to take their portal registration and find the corresponding record in the main healthcare system. The processors who attended to this were presented with the most likely matches, ordered by percentage descending, and could easily choose the right account.
If false positives don't bug you and your names are primarily English, you can try algorithms like Soundex. SQL Server has it as a built-in function. Soundex isn't the best, but it does do fuzzy matching and is popular. Another alternative is Metaphone.
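For completeness, a small C# version of the algorithm; this is a simplified sketch (it assumes the name starts with a letter), and SQL Server's built-in SOUNDEX() may differ on edge cases:

    using System;
    using System.Text;

    static string Soundex(string name)
    {
        if (string.IsNullOrEmpty(name)) return "";
        const string codes = "01230120022455012623010202"; // a..z -> digit, '0' = ignored
        var sb = new StringBuilder();
        sb.Append(char.ToUpperInvariant(name[0]));          // keep the first letter
        char prev = codes[char.ToUpperInvariant(name[0]) - 'A'];

        for (int i = 1; i < name.Length && sb.Length < 4; i++)
        {
            char c = char.ToUpperInvariant(name[i]);
            if (c < 'A' || c > 'Z') continue;               // skip non-letters
            char code = codes[c - 'A'];
            if (code != '0' && code != prev) sb.Append(code);
            if (c != 'H' && c != 'W') prev = code;          // H and W do not break runs
        }
        return sb.ToString().PadRight(4, '0');              // pad to the usual 4 chars
    }

Usage: Soundex("Robert") and Soundex("Rupert") both give "R163", which is exactly the kind of loose first-name match the question is after.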