Compiled Azure Function Monitoring: "No data available" - C#

I'm using a couple of compiled C# functions on Azure. They are working as expected, but when I click 'Monitor' on either of the functions, it just shows "No data available".
I can see the function running on the 'Develop' tab's log, but would like an overview of the function's usage.
Is there something I am missing?

I'm seeing this too. I compared a function app that is working with one that isn't. The one that isn't working returns a 404 when I click "refresh" in the Monitor tab. It's hitting a URL like this:
https://[YOUR_APP].scm.azurewebsites.net/azurejobs/api/functions/definitions//invocations?limit=20
Note the double slash before "invocations".
In a working app, it's more like:
https://[YOUR_APP].scm.azurewebsites.net/azurejobs/api/functions/definitions/[YOUR_APP]-[YOUR_FUNCTION]/invocations?limit=20
So something has happened to blow up the [YOUR_APP]-[YOUR_FUNCTION] part of the URL that the portal generates. Any ideas?
--
UPDATE: I think I fixed it.
I connected to the storage account associated with the function app using Microsoft Azure Storage Explorer. When I went to Tables > AzureWebJobsHostLogscommon, I noticed two things:
there was an entry for a function I had deleted
there was a function that I had created that had no entry (well, it existed under the "default-[YOUR_FUNCTION]" RowKey, but the "[YOUR_APP]-[YOUR_FUNCTION]" RowKey was missing)
I added a new row for the missing "[YOUR_APP]-[YOUR_FUNCTION]" RowKey and set the OriginalName to the real function name (a scripted version of this edit is sketched below). I went back to the portal, and poof! it started working.
I didn't bother deleting the extraneous entry from the deleted function. It didn't seem to hurt anything. But if any real function is missing, it seems to break the whole Monitor tab.
How it got that way, I'm not sure. Maybe something went wrong when I published an update.
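I made the edit by hand in Storage Explorer, but for reference the same row could be written with the classic storage SDK. A minimal sketch, assuming the WindowsAzure.Storage NuGet package; the connection string and partition key are placeholders (copy the PartitionKey from the existing rows in the table):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

var account = CloudStorageAccount.Parse("<storage connection string>");
var table = account.CreateCloudTableClient().GetTableReference("AzureWebJobsHostLogscommon");

// RowKey is the missing "[YOUR_APP]-[YOUR_FUNCTION]" entry;
// OriginalName must be set to the real function name.
var entity = new DynamicTableEntity("<existing partition key>", "[YOUR_APP]-[YOUR_FUNCTION]");
entity.Properties["OriginalName"] = EntityProperty.GeneratePropertyForString("[YOUR_FUNCTION]");
table.Execute(TableOperation.InsertOrReplace(entity));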
--
UPDATE 2: Well, that got the Monitor tab working, but the data is "stale", as if some background process isn't refreshing it. I can see the log data appearing in Table Storage...
--
UPDATE 3: The stale data seems to be a separate issue in the East US region, tracked at https://github.com/Azure/Azure-Functions/issues/259 ... the "No data available" issue, I think, was fixed by correcting the AzureWebJobsHostLogscommon table as mentioned above.

We believe we have found an issue that occurs when an explicit host id is set in the host.json file, which is likely what is causing the problem.
We're tracking this issue here and will update it as we make progress.
As a workaround, please remove the id from your host.json file.
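For example, if your host.json contains an explicit id like the one below (the GUID value here is just an illustration), delete the "id" line:

{
  "id": "5fa0844e0f7f48eb8c925b7bbc8ff5f8"
}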

I also had the "No data available" problem. I solved it by adding an application setting that was missing: "FUNCTIONS_EXTENSION_VERSION": "~1"
Reference: David Ebbo's comment on GitHub:
https://github.com/Azure/Azure-Functions/issues/259#issuecomment-300379674
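If you prefer the command line over the portal, the same setting can be added with the Azure CLI; a sketch, where the app and resource group names are placeholders:

az functionapp config appsettings set --name MyFunctionApp --resource-group MyResourceGroup --settings FUNCTIONS_EXTENSION_VERSION=~1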

Had the same issue; resolved it by emptying host.json and setting
"FUNCTIONS_EXTENSION_VERSION": "~3"

Constantly get The local data store is currently in use by another operation when working on small projects

I use Visual Studio Team Services to store the source code of my projects as I work on them. I love the service, especially that it is free, but I have been running into a major pain point lately.
Randomly, when I go to save, modify, or check in/check out, I get this error for every single file I am modifying. So if I am trying to save changes to 8 files, I get this message 8 times, and it takes 45-60 seconds of trying to check out each file, meaning it takes 6-8 minutes for the errors to stop (even if I hit cancel).
The local data store is currently in use by another operation
I looked it up online and found many people with the same issue but the response from MS has nothing to do with my situation.
http://blogs.msdn.com/b/phkelley/archive/2013/05/31/tf400030-the-local-data-store-is-currently-in-use-by-another-operation.aspx
It basically says this can happen when you have too many files in your workspace or have several large solutions open at once.
This does not apply to me, as I usually only have one solution open at a time and my projects are very small (400-500 files).
Ran into this issue as well on VS 2013 and TFS - every time I opened Team Explorer it would take 10+ seconds to show all projects, then when I would expand the project in source control, another 10+ seconds would roll by.
Earlier today I began to experience the "local storage is being used" error when trying to save changes in class files. I did some research, and the following link saved the day for sure. Now TFS is blazing!
Local Data Store Solved
What you do is edit the workspace (including all projects associated with it) and change the "Location" dropdown from "Local" to "Server". It took about 4-5 minutes for the changes to finish, but it was well worth it.
Hopefully this will help someone down the road.
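If you'd rather do it from the command line, a server workspace can also be created with tf.exe; a sketch, where the workspace name and collection URL are placeholders:

tf workspace /new MyServerWorkspace /location:server /collection:https://myserver:8080/tfs/DefaultCollection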
Lately I started to get the same error message, and Visual Studio started to work very slowly with TFS and NuGet. I tried repairing and uninstalling, but that did not solve the problem. In the end it was so painfully slow that I could not continue working. (Expanding one item in Source Control Explorer took 10 seconds.)
Here is my story and how my problem was solved:
I had mapped TFS folders separately rather than getting the whole tree, because there are lots of irrelevant documents. After trying lots of suggested fixes, I came to think this separate mapping might be the problem, because it was the first time I had done it in all my time using TFS; I generally map and get all items at once, and I had never hit this issue before.
I removed all the mappings and it was like magic: the error is gone, the slow TFS source control is gone, and it is rocket fast now. Just to be on the safe side, I also deleted my workspaces, created a new one, and got all TFS items at once.
I found the error would be triggered when I had more than one instance of VS 2012+ running and utilizing the Source Control Explorer, Solution Explorer and/or Team Explorer windows. I've not had this problem when running a single instance of VS 2012+ (on Update 2+) utilizing those windows in tandem.
I found this article and gave its suggestion a shot: prevent multiple threads from accessing the data store simultaneously.
http://blogs.msdn.com/b/phkelley/archive/2013/05/31/tf400030-the-local-data-store-is-currently-in-use-by-another-operation.aspx
This proved to be a remedy for this issue.
I would add, for other users with large file repositories who use source control and hit this issue: it may be greatly beneficial to create a separate workspace for each of your branches/repositories. I found that doing this sped up my queries to TFS immensely and also helped with this error. I found the suggestion here: http://blogs.msdn.com/b/phkelley/archive/2013/05/30/using-multiple-workspaces-with-visual-studio.aspx. I share this because users mention TFS running slowly.
I also started getting the same error this week. Maybe there's something wrong with VS Update 3?
Simply could not work on any of the projects of the "broken" local workspace anymore.
VS would show all files as being checked out, but none really were.
Other local workspaces were working fine.
I tried removing a project from the workspace, but when trying to confirm it, I would receive the same TF400030 error again.
Suggestion
If nothing else works, you might want to try this: simply delete the whole workspace and create it again, this time separating projects into different workspaces. This worked for me.
You'll probably want to back up your files first.
I did as mentioned below and TFS started working fine
Close all the VS instances
Go to: C:\Users\[UserName]\AppData\Local\Microsoft\VisualStudio\15.0_46af8b8e
Delete the privateregistry.bin file
Reopen the project solution
Above worked for me.
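In command-prompt form that's something like the line below; note the 15.0_xxxxxxxx instance suffix differs per machine, so check the actual folder name first:

del "%LOCALAPPDATA%\Microsoft\VisualStudio\15.0_46af8b8e\privateregistry.bin"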
Had the same problem; it can be fixed in 3 quick steps:
Remove the current workspace: Source Control Explorer -> Workspace list box -> Workspaces... and remove the workspace.
- Make sure that all pending changes are checked in first.
Delete the workspace's local folder.
- It's better to delete the folder entirely. If you end up keeping some folders, make sure to delete all $tf folders (hidden folders inside the workspace folder).
Remap the projects you need (the fewer the better).
Hope that helps.
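To double-check for leftover $tf folders before remapping, something like this from the old workspace root should list them (they are hidden directories):

dir /s /a:hd $tf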
In my case the cause was a compressed folder containing my local data store, shown in blue in Windows Explorer. Removing the compression did the trick.
I ran into this error when renaming my workspace. After changing the name back to the original, everything worked fine again.
Restarting Visual Studio resolved the issue for me.

EntityModel with Ideablade giving problems on server

Problem
When I test my project in my local environment, it works perfectly. All connections are properly made and both edmx models work properly.
However, when I publish it to our webserver, one of the edmx files returns a "The remote server returned error: Not Found"
(see picture)
Information
We are using a Silverlight 5 project with IdeaBlade v6.1.7.0, Caliburn.Micro v1.3.1.0 and MefContrib.Silverlight v1.1.0.0.
We have put our edmx files in a separate class library.
We have 2 edmx files: one for normal data and one for localization.
Currently I have found that the edmx for StaticContent does not work! It will always return the error shown in the image above.
But when I test it on my local machine things work perfectly.
I am looking for anyone that can help me in any way; if more information is needed, feel free to ask.
List of things I've already tried
I've tried to re-add the StaticContent edmx.
I've tried to combine them, but this resulted in a lot more errors and difficulties.
I've tried to set the StaticContent edmx's datasource key to the same as the other edmx.
All custom DLLs are set to "Copy Local = true".
You'll get the NotFound error on a client when it can't communicate with the EntityServer. This generally doesn't have anything to do with the EDMX itself, unless you're using data source extensions with each model.
Unfortunately, there are many reasons why a client won't be able to communicate with the server. You can first check if the service is running by navigating to its service page, as described here. If there's a problem with the service you'll see errors shown on this page.
Also see this page for information on deployment steps, to see if you've got everything needed.

When editing Resources.resx file, Resources.Designer.cs fails to update because TFS doesn't check it out

I'm using TFS source control.
When I add a new resource key to my resource file - Resources.resx - and hit save, TFS checks out Resources.resx but doesn't check out Resources.Designer.cs. This causes the update to Resources.Designer.cs to fail with error:
The command you are attempting cannot be completed because the file 'Resources.Designer.cs' that must be modified cannot be changed. If the file is under source control, you may want to check it out; if the file is read-only on disk, you may want to change its attributes.
The error is correct in that the file IS read only and the file IS NOT checked out. I don't want to have to manually check out the designer every time I add/edit a resource key. Does anybody know of a solution or work around to this issue?
Note that I have TFS set up to "check out on save" as opposed to "check out on edit". This is deliberate to reduce the amount of unedited checkouts.
EDIT:
This happens with other file types too. For example, I am using RazorGenerator to create compiled MVC views. The same problem occurs if I try to edit the .cshtml without checking out the .generated.cs first.
UPDATE:
This issue occurs on all (as far as I've seen) files that have an autogenerated code-behind: .resx, .edmx, .aspx, .cshtml (when using RazorGenerator for compiled views), etc. I've decided that it's not worth the pain just for having "on edit: do nothing" set. I've decided to reset this to "on edit: checkout automatically". Thanks to everybody for your input. No thanks to TFS team for this FAIL.
Well, I did not think this counted as an answer, so I wrote it in a comment.
Checkout on save only triggers when you save a file; it does not trigger when a file is autogenerated (autogeneration does not count as a save that triggers checkout, as the file is edited by the custom tool assigned to the .resx).
I'm afraid you will not get a proper answer (one which will solve your problem) beyond the fact that it is by design, but it may be worth opening a case on Connect and asking them to change this behavior.
Why do you want to reduce the amount of unedited checkouts? If a file is checked in without changes, TFS notices, and it will not show in the check-in history of the file.
You can test this yourself by checking out a single file and immediately checking it in. TFS will tell you there were no changes and the checkout is undone.
So maybe consider setting it back to checkout on edit? As mentioned in the other answer, this will solve your problems...
I think this is the problem
Note that I have TFS set up to "check out on save" as opposed to
"check out on edit". This is deliberate to reduce the amount of
unedited checkouts.
To avoid the above problem, revert back to the default settings. Then download the TFS Power Tools.
Then use this command to revert changes which are checked out but contain no edits
tfpt uu /noget
Update: After changing the above setting, the issue no longer occurs. For details, refer to the discussion in the comments below.
I have to work with TFS at work. I've seen too many miracles, and we've spent a lot of time figuring out where the problem is. TFS is the choice of my company, but it's not my favorite.
TFS (especially when the server is slow and you have regular network problems) is a disaster for me as a developer. VS looks for modifications only in files in the solution, and as you can see, not even all of those. When you use third-party tools (FitNesse for integration tests, or custom build steps) which need to modify files outside VS, you'll probably get the same error you're seeing.
But we found a solution. On my machine I use git. We've installed git-tfs.
And all you need to remember are three magic commands:
git tfs fetch
git merge remotes/tfs/default
git tfs ct
That's it. You will never break company rules, and at the same time you will be free of that kind of weird problem. We've forgotten about that nightmare.
EDIT: Local workspaces in the upcoming TFS 2012 will solve several issues, and TFS 2012 will become closer to SVN, but it will not be a DVCS. MS is investing in integration with external DVCSs - please welcome Git-TF.

Windows service won't install: The specified service already exists

I'm writing a small service in C#. I've installed it and uninstalled it a couple of times, and all of a sudden it won't install again. When I try to uninstall it, it says there is nothing to uninstall, but when I install it again I get the following message:
Error 1001: The specified service already exists
Now, I've tried the following solutions:
Closed the service manager (as an open service manager may hold a handle to it)
Tried to find it with SC QUERY and to delete it using SC DELETE (according to Service already exists (when it clearly doesn't))
Tried to remove it in regedit (it doesn't exist there)
Correctly added the project output to Custom Actions (install, commit, rollback, uninstall)
Restarted the computer (!)
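For reference, the SC commands from the second step look like this ("MyService" is a placeholder for the actual service name):

sc query MyService
sc delete MyService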
I'm running out of ideas. There is absolutely no proof that the service is installed on my computer, and even though thousands of developers seem to have had this problem (and I've even had it myself previously), I've never heard of a situation where none of the standard solutions actually works.
What could I have missed?
EDIT
I've been into regedit and I tried again to find my service, but this time I exported the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ section and searched it. I can find my service in the dump under:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MyService
But when I go there in the regedit view, it's not there. Any suggestions? How did I screw that up?
RE-EDIT
Disregard the edit; the service only shows in regedit while the installer is showing the error message. But that's even weirder: the service is installed, then breaks and rolls back...
As a temporary solution, you can change the name of the service slightly (e.g. add or remove one or two chars from the service_name) but keep the display_name the same.
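In installer terms that looks roughly like the sketch below, assuming a standard System.ServiceProcess installer class; the names are hypothetical:

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        var processInstaller = new ServiceProcessInstaller { Account = ServiceAccount.LocalSystem };
        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyService2",  // changed slightly to dodge the phantom registration
            DisplayName = "My Service"   // display name stays the same
        };
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}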
I would suggest looking at Sysinternals Process Monitor activity and going backwards, trying to find what happened before the error was reported. You might be able to see, for example, that a certain reg key was accessed.
I had a similar issue to this (the service was in a stopped state and had been deleted by an overzealous disk-space tidier), and to solve it I copied my new service to the same location as marked in the "Path to Executable" box, and then started the service.
No issues so far.

"Invalid postback or callback argument" after new deployment

Recently deployed a project into production and have run into the "Invalid postback or callback argument" error. We haven't encountered this in testing at all, and after some research we've found that the problem occurs in the following situation:
Old version is published and accessed.
New version is published and accessed without clearing the Temporary Files.
Drop down is changed twice. (The first time everything works fine.)
The fix for the clients that have called in has been to clear their temporary internet files, but this isn't the ideal fix. Can anyone think of a reason why this would be happening, and why it stops happening after the temp files have been cleared?
BTW: The app is ASP.NET 3.5, written in C#. We're using a JavaScript callback in this particular control, which is what's causing the issue.
We didn't want to use the "enableEventValidation=false" trick as this isn't a consistent issue. From pretty early on, we were able to fix the issue case by case by clearing the temp files.
After some more looking today, it was suggested that we rename our js file, and behold, the issue was resolved locally. It seems each user had our old js file cached.
As to why it was throwing "Invalid postback" rather than a JavaScript error, we're not sure. There are other ways of specifying the version number in the script tags, but for now we opted for the rename.
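For anyone who wants the version-number approach instead of a rename, it's the usual cache-busting query string; a sketch with a hypothetical file name and version value:

<script src="/Scripts/callback.js?v=2" type="text/javascript"></script>

Bumping the v value on each deployment forces browsers to re-fetch the file instead of using the cached copy.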
Since you mentioned that you're using a JavaScript callback, this may be related to event validation. If so, there is a workaround: add the following line to web.config:
<pages enableEventValidation="false"/>
That way, event validation is disabled for your pages. Turning it off is not recommended for security reasons, because event validation verifies that postback arguments originated from the server control.
You can get detailed information here about why this error occurs.
