I have a C# application that uses the Google Classroom API and batch requests.
The batch path I use is:
var batch = new BatchRequest (cservice, "https://classroom.googleapis.com/batch");
This was working fine until last week, when I started to get errors due to Google's changes to batch requests: https://developers.googleblog.com/2018/03/discontinuing-support-for-json-rpc-and.html
On that blog the instructions for .NET state:
For global batch, confirm that your code does not explicitly specify the global batch path (i.e. "/batch" at the end of the path). If it does, change it to refer to the api/version specific path (example "/batch/library/v1").
And indeed, this is also mirrored in the Google Classroom documentation.
My problem is that when I change the batch URL to try to find the correct one, I get a 404 error for an invalid URL.
I have tried the following:
var batch = new BatchRequest (cservice, "https://www.googleapis.com/batch/classroom/v1");
var batch2 = new BatchRequest (cservice, "https://classroom.googleapis.com/batch/classroom/v1");
var batch3 = new BatchRequest (cservice, "https://classroom.googleapis.com/batch/v1");
All of them get a 404 error.
Any ideas as to what the new Google Classroom batch url should be?
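Not a confirmed fix, but one thing worth trying: the .NET client library has a BatchRequest(IClientService) constructor overload that omits the URL entirely and falls back to the service's own batch URI, which up-to-date Google.Apis packages resolve to the per-API path for you. A minimal sketch, assuming cservice is an authenticated ClassroomService and "123456" is a placeholder course id:

```csharp
using Google.Apis.Classroom.v1;
using Google.Apis.Classroom.v1.Data;
using Google.Apis.Requests;

// Let the library pick the batch endpoint instead of hard-coding it.
var batch = new BatchRequest(cservice);

batch.Queue<Course>(
    cservice.Courses.Get("123456"), // placeholder course id
    (course, error, index, message) =>
    {
        if (error != null)
            Console.WriteLine($"Request {index} failed: {error.Message}");
        else
            Console.WriteLine($"Got course: {course.Name}");
    });

await batch.ExecuteAsync();
```

If the URL-less constructor still hits the old global endpoint, updating the Google.Apis and Google.Apis.Classroom.v1 NuGet packages should pull in the per-API batch path.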
The Revit add-in is working perfectly, and I have also converted it correctly for Design Automation. I have debugged it with the local debugger and it worked fine, so I can say the app bundle is doing well.
Now, coming to the web application code: it works fine until the last line, the one that assigns "workItemStatus".
I need an RFA file and a big JSON file as input files to run the code; together they are about 1 MB in size. But the code gets stuck (waiting endlessly) when uploading the files, and the workitem does not start.
I read in another Stack Overflow post that Forge does not allow more than 16 KB to be uploaded to an OSS bucket by.....
Url = string.Format("https://developer.api.autodesk.com/oss/v2/buckets/{0}/objects/{1}", bucketKey, inputFileNameOSS)
That post says I would need to upload bigger files to another cloud service and use a signed URL instead of the Forge OSS bucket.
The code looks correct while debugging, and it gets stuck when it reaches the line
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
I have debugged the code, and it looks like it works perfectly up to the "workItemStatus" assignment in "StartWorkItem" in DesignAutomationController.cs.
Every key and value looks like it is passed correctly.
Is it because of the file size? As the JSON file is big, I am uploading it like the other input (.rfa/.rvt) files.
string callbackUrl = string.Format("{0}/api/forge/callback/designautomation?id={1}&outputFileName={2}", OAuthController.GetAppSetting("FORGE_WEBHOOK_URL"), browerConnectionId, outputFileNameOSS);
WorkItem workItemSpec = new WorkItem()
{
ActivityId = activityName,
Arguments = new Dictionary<string, IArgument>()
{
{ "inputFile", inputFileArgument },
{ "inputJsonFile", inputFileArgument1 },
{ "outputFile", outputFileArgument },
{ "onComplete", new XrefTreeArgument { Verb = Verb.Post, Url = callbackUrl } }
}
};
WorkItemStatus workItemStatus = await _designAutomation.CreateWorkItemAsync(workItemSpec);
return Ok(new { WorkItemId = workItemStatus.Id });
I read in another Stack Overflow post that Forge does not allow more than 16 KB to be uploaded to an OSS bucket by..
The 16 KB limit is on the payload of Design Automation endpoints, including the workitem. The limits are defined here. If the workitem payload exceeds 16 KB, you will see an HTTP 413 Payload Too Large error.
To send large JSON inputs to Design Automation, you may first upload the JSON to OSS (or even another storage service such as Amazon S3), then call the workitem with a signed URL to the JSON file (similar to the signed URL for the RFA file).
Edit:
1. Large JSON files can be uploaded to OSS using the Data Management upload endpoint.
2. A signed URL with read access can then be obtained for that object using the signed-URL endpoint.
3. The URL obtained can then be passed to the Design Automation workitem payload as an input argument, instead of embedding the JSON contents into the payload.
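A rough sketch of those three steps with a plain HttpClient, using the OSS v2 endpoints; accessToken, bucketKey and the file name are placeholders, and error handling is omitted for brevity:

```csharp
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken); // placeholder token

// 1. Upload the large JSON file to the OSS bucket.
var objectUrl = $"https://developer.api.autodesk.com/oss/v2/buckets/{bucketKey}/objects/inputParams.json";
using (var content = new StreamContent(File.OpenRead("inputParams.json")))
{
    await client.PutAsync(objectUrl, content);
}

// 2. Request a signed URL with read access for that object.
var signedResponse = await client.PostAsync(
    objectUrl + "/signed?access=read",
    new StringContent("{}", Encoding.UTF8, "application/json"));
var signed = JObject.Parse(await signedResponse.Content.ReadAsStringAsync());
var signedUrl = (string)signed["signedUrl"];

// 3. Hand the signed URL to the workitem instead of embedding the JSON.
var inputJsonArgument = new XrefTreeArgument { Url = signedUrl };
```

Since the workitem now only carries a short URL rather than the JSON contents, the payload stays well under the 16 KB limit.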
I am using the Nuget package Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction
I have created a Custom Vision application in the Custom Vision portal and obtained API keys and a project ID.
Whenever I try to make a request to the API, I always get the following exception thrown:
HttpOperationException: Operation returned an invalid status code
'NotFound'
Here is my code:
HttpClient httpClient = new HttpClient();
CustomVisionPredictionClient customVisionPredictionClient = new CustomVisionPredictionClient(httpClient, false)
{
ApiKey = PredictionKey,
Endpoint = PredictionEndpoint,
};
var result = await customVisionPredictionClient.PredictImageAsync(CUSTOM_VISION_PROJECT_GUID, imageData);
I have tried several different endpoints:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction
https://southcentralus.api.cognitive.microsoft.com/customvision/Prediction/v1.0
https://southcentralus.api.cognitive.microsoft.com/customvision/v1.1/Prediction
though the portal lists the first one. I have also successfully exported my app on Azure, which gives me the second endpoint in the list, but with no more success.
I have also set a default iteration, as suggested in a similar issue that I found (CustomVision: Operation returned an invalid status code: 'NotFound').
I have tried this sample https://github.com/Microsoft/Cognitive-CustomVision-Windows/tree/master/Samples/CustomVision.Sample (which uses a deprecated Windows client) to at least ensure my project information is correct, and I was able to access the API.
Any insight would be appreciated.
For the .NET client SDK, you need to specify the base endpoint URL without the version or the rest of the path; the version is added automatically by the client SDK. In other words, you'll want (assuming South Central US is your region):
PredictionEndpoint = "https://southcentralus.api.cognitive.microsoft.com";
CustomVisionPredictionClient customVisionPredictionClient = new CustomVisionPredictionClient()
{
ApiKey = PredictionKey,
Endpoint = PredictionEndpoint,
};
var result = await customVisionPredictionClient.PredictImageAsync(CUSTOM_VISION_PROJECT_GUID, imageData);
As an aside, note that unless you want to fine-tune its behavior, you don't need to pass an HttpClient object to the CustomVisionPredictionClient constructor.
If you need more sample code, please take a look at the QuickStart.
How to use the Prediction API
If you have an image URL, your endpoint would be something like this:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Project-GUID}/url?iterationId={Iteration-Id}
Set the Prediction-Key header to: your prediction key
Set the Content-Type header to: application/json
Set the body to: {"Url": "https://example.com/image.png"}
Or, if you have an image file, the endpoint would be like:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Project-GUID}/image?iterationId={Iteration-Id}
Set the Prediction-Key header to: your prediction key
Set the Content-Type header to: application/octet-stream
Set the body to: <image file>
Remember, you can mark an iteration as Default so you can send data to it without specifying an iteration id. You can then change which iteration your app is pointing to without having to update your app.
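For reference, the image-file variant above can be sketched with a plain HttpClient like this (the key, project GUID, iteration id and file name are placeholders):

```csharp
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Prediction-Key", "<your-prediction-key>");

var url = "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/"
        + "Prediction/<project-guid>/image?iterationId=<iteration-id>";

using (var content = new ByteArrayContent(File.ReadAllBytes("test.png")))
{
    // The raw image bytes go in the body as an octet stream.
    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    var response = await client.PostAsync(url, content);
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```

If you mark an iteration as default, you can drop the iterationId query parameter entirely.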
Check my other answer on a similar issue using Python: Python custom vision predictor fails.
Hope it helps.
I am using Elizabeth's wrapper from https://github.com/mozts2005/ZendeskApi_v2
I want to pull a list of agents, but I don't see any built-in function that will allow that.
I have tried using the endpoint /api/v2/users.json?role=agent with the GetAllUsers() function, but it still returns all users.
Right now I am going to add a custom field to be able to search for them, but that should not be necessary, especially since Zendesk's API does have an option for returning users based on their role: /api/v2/users.json?role[]=admin&role[]=end-user
Can anyone help me out?
You can give the Zendesk Search API a try:
from urllib.parse import urlencode

import requests

results = []  # empty list to collect paginated results
credentials = 'your_zendesk_email', 'your_zendesk_password'

session = requests.Session()
session.auth = credentials

params = {
    'query': 'type:user role:agent'
}
url = 'https://your_subdomain.zendesk.com/api/v2/search.json?' + urlencode(params)

while url:
    response = session.get(url)
    data = response.json()
    results += data['results']
    url = data['next_page']  # null (None) on the last page, which ends the loop
Useful resources:
Zendesk API Python tutorial
Zendesk API pagination
The Search endpoint also seems to be supported in the C# library you are using.
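If the wrapper's search support turns out to be awkward, the same query can be issued from C# with a plain HttpClient; the subdomain, email and API token below are placeholders:

```csharp
var client = new HttpClient();
// Zendesk API-token basic auth uses "email/token:api_token" as the credential.
var token = Convert.ToBase64String(
    Encoding.ASCII.GetBytes("agent@example.com/token:your_api_token"));
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", token);

var url = "https://your_subdomain.zendesk.com/api/v2/search.json?query="
        + Uri.EscapeDataString("type:user role:agent");
var json = await client.GetStringAsync(url);
// Parse "results" and keep following "next_page" (null on the last page),
// as in the Python example above.
Console.WriteLine(json);
```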
I'm currently in the process of syncing our product data to MailChimp using their batch function. I've succeeded in adding products (the initial upload), but I'm struggling to update existing products.
What I have tried is the following:
1) Created a new batch, added an operation with method "PUT", and POSTed it to their API. This gives no error on uploading the batch, but the batch result returns a "Method not allowed".
2) Created a new batch, added an operation with method "PUT", and PUT it to their API. This returns a "Method not allowed" on batch upload.
3) Created a new batch, added an operation with method "POST", and POSTed it to their API. This gives an error because the product already exists.
I'm using a custom-built integration for MailChimp that uses a normal HttpClient.
This is the code that builds the operation:
var body = BuildProductBody(products.First());
var method = GetMethod(products.First());
batch.operations.Add(new SingleOperation
{
method = method,
operation_id = "asd" + products.First().Id,
path = string.Format("ecommerce/stores/{0}/products", GetStoreId(store)),
body = body
});
Did anyone succeed in updating products through their API?
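Not a confirmed fix, but one thing that stands out in the attempts above: MailChimp's update operation for a product is a PATCH against the id-qualified path (ecommerce/stores/{store_id}/products/{product_id}), while the id-less path only accepts POST for creation. A sketch of the operation under that assumption:

```csharp
var product = products.First();
batch.operations.Add(new SingleOperation
{
    method = "PATCH", // update instead of create
    operation_id = "asd" + product.Id,
    // For updates, the product id belongs in the operation path.
    path = string.Format("ecommerce/stores/{0}/products/{1}",
        GetStoreId(store), product.Id),
    body = BuildProductBody(product)
});
```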
I have a C# UWP app that uses the OneDrive API to store files in the approot (a special folder for my app only). I know that I can get the total space of OneDrive this way, but it doesn't tell me how much space my app takes.
Is there a fast way to tell how much space my app uses to store these files there (instead of iterating through all the items)?
As Brad said, the approot, like any other OneDrive item, has metadata, and an item's metadata includes a size property which represents the size of the item in bytes. So we can use this property to get the total space your app takes.
As described under App Folder, we can use GET /drive/special/approot to get your app folder's metadata, and when using the OneDrive .NET SDK the code will look like:
var item = await oneDriveClient.Drive.Special.AppRoot.Request().GetAsync();
System.Diagnostics.Debug.WriteLine($"{item.Name}'s size is {item.Size}");
However, as I tested, when we use this code in UWP we run into a caching issue: even if your app folder's size has changed, this API returns the same value as the first time you ran it.
This is because Get metadata for a OneDrive item has an optional request header, if-none-match: if this header is included and the eTag (or cTag) provided matches the current tag on the file, an HTTP 304 Not Modified response is returned.
And in UWP, HttpClient automatically adds this header to the request; if the eTag has not changed, HttpClient will not fetch the newest info but return the data it cached. According to the Item resource type:
Note: The eTag and cTag properties work differently on containers (folders). The cTag value is modified when content or metadata of any descendant of the folder is changed. The eTag value is only modified when the folder's properties are changed, except for properties that are derived from descendants (like childCount or lastModifiedDateTime).
So in most cases the app folder's eTag won't change, and when we use the OneDrive .NET SDK or the default HttpClient in UWP to get the app folder's metadata, we get the cached data. To see this clearly, we can use Fiddler to trace the network traffic: we will find If-None-Match added in the request headers, and the real response from OneDrive is HTTP 304 Not Modified.
To solve this issue, we can use the Windows.Web.Http.HttpClient class with the HttpBaseProtocolFilter and HttpCacheControl classes to disable the cache, like the following:
var oneDriveClient = await OneDriveClientExtensions.GetAuthenticatedUniversalClient(new[] { "wl.signin", "onedrive.readwrite" });
var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
filter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
filter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;
var httpClient = new HttpClient(filter);
var request = new HttpRequestMessage(HttpMethod.Get, new Uri("https://api.onedrive.com/v1.0/drive/special/approot"));
request.Headers.Authorization = new Windows.Web.Http.Headers.HttpCredentialsHeaderValue("Bearer", oneDriveClient.AuthenticationProvider.CurrentAccountSession.AccessToken);
var response = await httpClient.SendRequestAsync(request);
var item = oneDriveClient.HttpProvider.Serializer.DeserializeObject<Item>(await response.Content.ReadAsStringAsync());
System.Diagnostics.Debug.WriteLine($"{item.Name}'s size is {item.Size}");
PS: For this method to work, we need to make sure there is no local HTTP cache, so we'd better uninstall the app first and not use await oneDriveClient.Drive.Special.AppRoot.Request().GetAsync() in the app.
When you fetch your app's folder (via approot) the value of the size property returned on the item should be a reflection of the amount of space your application is using (since for a folder the value is the sum of the size of all files stored within it, at any level).