I'm having a weird issue with my Azure Function App and I can't find anything on this.
I republished my function without changing its code, but suddenly the function stopped working and I'm getting this message as soon as I navigate to the function's page on Azure:
Error:
Error retrieving master key.
If I navigate to the function's settings, I can see that no keys have been generated and that the host.json file is empty. Browsing my function's files using Kudu, however, shows that the file contents are correct.
Two more things make this weirder:
The function works correctly locally
If I take the code from another function and deploy it to this one, my function works correctly, meaning the issue is not related to my function's configuration but rather to its code
Do you guys have any pointers on this?
EDIT:
Let me add more details on this.
Let's say I have 2 solutions, A.sln and B.sln.
I also have 2 Function Apps on Azure, let's say F_1 and F_2.
A.sln and B.sln have the very same structure; the only difference is in the business logic.
The same applies to F_1 and F_2; their only difference is the associated storage accounts, as each function has its own.
Currently A.sln is deployed on F_1 and B.sln on F_2, and the only one working is F_1.
If I deploy A.sln on F_2, F_2 starts working, so my idea is that there's something wrong in B.sln's code because A.sln works with the very same configuration.
The Function App references a storage account in the application settings AzureWebJobsDashboard, AzureWebJobsStorage and, if you are running on a Consumption plan, WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Either clearing out this storage account or simply recreating it fixed the problem.
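For reference, these settings hold ordinary storage connection strings; a hypothetical example (account name is made up, key elided):
AzureWebJobsStorage=DefaultEndpointsProtocol=https;AccountName=myfuncstorage;AccountKey=<key>;EndpointSuffix=core.windows.net
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=DefaultEndpointsProtocol=https;AccountName=myfuncstorage;AccountKey=<key>;EndpointSuffix=core.windows.net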
I would also recommend creating separate storage accounts for every Function App, at least as long as these hard-to-find bugs are present. It is a lot easier to fix these kinds of issues when they only affect a single Function App.
I don't know if this is the case here, but I found out that in my case (a new deployment of a Function App v3) host.json ends up empty on Azure if there is a comment line in it. Removing the comments solved my problem, and the host.json file is now deployed properly.
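To illustrate, a minimal host.json with no comment lines (the content below is just one example of a valid host file):
{
  "version": "2.0"
}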
One possible reason: the key for the storage account might have been rotated, so the connection strings referenced in the AzureWebJobsDashboard and AzureWebJobsStorage settings of the Azure Function no longer match.
Solution: go to the storage account referenced in AzureWebJobsDashboard and AzureWebJobsStorage -> Access keys -> copy the connection string under key1 and use it for both AzureWebJobsDashboard and AzureWebJobsStorage.
We have a WCF service that uses DevExpress XtraReports to generate a PDF file.
Normally, we keep a physical directory in the web.config (for example C:\PdfDocs\) and pass it as the path when executing the DevExpress ExportToPdf function. This works fine on a normal virtual machine.
We are now busy moving to the Microsoft Azure environment and I am having trouble getting this to work.
My setup: the WCF service is created as an App Service. Unfortunately I am not at liberty to give names, so let's assume the following:
App Service Name - testdocservice,
Url Azure gives - https://testdocservices.azurewebsites.net
What I have tried:
In Application settings, I have created a virtual directory. In the project itself I have created a folder that the virtual directory will point to.
The virtual path is https://testdocservices.azurewebsites.net/ItinDocs and the physical path is site\wwwroot\ItinDocuments
This is set up correctly; I have tested it by FTPing a test PDF in and then hitting the following URL: https://testdocservices.azurewebsites.net/ItinDocs/test.pdf
So in the WCF service I took a chance and set the render location to "site\wwwroot\ItinDocuments". This did not work.
The exception was as follows: Access to the path 'D:\Windows\system32\site\wwwroot\ItinDocuments\TestQuote21.pdf' is denied.
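For context, a relative path like site\wwwroot\ItinDocuments resolves against the worker process's current directory (here D:\Windows\system32), not the site root. On App Service the site root is exposed through the HOME environment variable, so a minimal sketch along these lines should yield a writable absolute path (fileName is assumed from the surrounding code):
// HOME points at the site root on App Service, e.g. D:\home.
string home = Environment.GetEnvironmentVariable("HOME");
// Build an absolute path under the folder backing the virtual directory.
string target = Path.Combine(home, @"site\wwwroot\ItinDocuments", fileName);
oQuote.ExportToPdf(target);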
I then tried using Server.MapPath, for example:
QuoteV3 oQuote = new QuoteV3();
oQuote.DataSource = dSource;
// Map the configured location to a physical path and export the report there.
oQuote.ExportToPdf(System.Web.HttpContext.Current.Server.MapPath($"~{ConfigurationManager.AppSettings["DocLocation"]}{fileName}"));
The DocLocation setting looks like the following: site\wwwroot\ItinDocuments\
This also did not work. The following error is given:
'~https:/testdocservices.azurewebsites.net/ItinDocs/TestQuote21.pdf' is not a valid virtual path.
I thought the first character "~" could be a problem so I removed it and got the same error as above - 'https:/testdocservices.azurewebsites.net/ItinDocs/TravelQuote21.pdf' is not a valid virtual path.
I then noticed that the above errors only have one forward slash after "https". At this point I am not sure whether that could be causing the problem, or how to correct it, as Server.MapPath is generating that part.
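For what it's worth, Server.MapPath expects a virtual path such as "~/ItinDocuments/", not an absolute URL, which is why the value is rejected outright. A hedged sketch, assuming the folder created under the project is named ItinDocuments:
// Map a virtual path under the site root to its physical location.
string physical = System.Web.HttpContext.Current.Server.MapPath("~/ItinDocuments/" + fileName);
oQuote.ExportToPdf(physical);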
In conclusion, I am not sure if I am even working in the right direction with the above approach. My knowledge of Azure is still minimal.
Any help/assistance/solution would be greatly appreciated.
Many thanks.
This can be closed: I have instead set up Azure Storage and my PDFs are now saved in a blob container.
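For anyone landing here later, a minimal sketch of that approach using the Azure.Storage.Blobs package (the container name, connection string and report type are assumptions):
using System.IO;
using Azure.Storage.Blobs;

static void SavePdfToBlob(QuoteV3 oQuote, string connectionString, string fileName)
{
    // Export the report to memory instead of the App Service file system.
    using (var ms = new MemoryStream())
    {
        oQuote.ExportToPdf(ms);
        ms.Position = 0;

        // Upload to a blob container; no writable local directory is needed.
        var container = new BlobContainerClient(connectionString, "pdfdocs");
        container.CreateIfNotExists();
        container.GetBlobClient(fileName).Upload(ms, overwrite: true);
    }
}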
Thanks.
I am taking a console app I have that loads new data into a database and turning it into a WebJob that runs at 2am, so the stored data is updated daily. The console app works fine locally and uses an Azure SQL database. When running the WebJob, it fails with this message:
[09/22/2016 20:25:39 > 44575f: SYS ERR ] Job failed due to exit code -532462766
Through some research it looks like the WebJob doesn't have my app.config file and is thus missing the correct connection string, but I'm not sure. Does anyone know how to get around this? Do I add a connection string to my .pubxml file, do it in the Azure portal, or could this be something else? Thanks!
web.config is a strange choice, but I guess the WebJob falls back to it. You could also fix the problem by copying the programname.exe.config file along with the .exe itself.
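To spell that out: a WebJob is a console app, so locally it reads programname.exe.config deployed next to the binary, while connection strings entered in the Azure portal are surfaced to the process as environment variables. A small sketch (the "MyDb" name is hypothetical):
using System;
using System.Configuration;

// Locally this resolves from programname.exe.config next to the .exe.
string cs = ConfigurationManager.ConnectionStrings["MyDb"]?.ConnectionString
    // In Azure, a portal connection string of type "SQL Database" shows up
    // as an environment variable with the SQLAZURECONNSTR_ prefix.
    ?? Environment.GetEnvironmentVariable("SQLAZURECONNSTR_MyDb");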
I had some problems with authorization before, so I got a brand new everything: new computer, new OS, fresh installation of VS, new app and DB in a new resource group on Azure. The whole shebang.
I can confirm that I can log in to the Azure DB.
I can see the databases, tables, users etc.
The problem is that, although it works locally (using the default connection string provided automagically for me), it doesn't work in Azure (although I'm using the publish file from there). It said something about the file not being found, and according to this answer, I needed to change the connection string.
After I altered it, I get the following error. Please note that the firewall is open and that I can access the DB when I run my applications' code. I feel that something goes wrong when the authentication part is automatically configured. I'm out of ideas on how to troubleshoot it, though.
[SqlException (0x80131904): Login failed for user 'Chamster'.
This session has been assigned a tracing ID of '09121235-87f3-4a92-a371-50bc475306ca'. Provide this tracing ID to customer support when you need assistance.]
The connection string I'm using is this:
Server=tcp:f8goq0bvq7.database.windows.net,1433;
Database=Squicker;
User ID=Chamster#f8goq0bvq7;
Password=Abc123();
Encrypt=True;
TrustServerCertificate=False;
Connection Timeout=10;
This issue has bothered me for a while and I'll be putting a bounty on it in two days. Any suggestion is warmly appreciated.
I believe I've managed to resolve this weird issue. It appears that the user I'm using, despite being an admin with all the bells and whistles, isn't recognized as an admin when used in the connection string to create the tables (which is what happens at the first registration).
My solution was to create two logins: one with the db_owner role and one with db_datareader and db_datawriter. First, I used the elevated user in my connection string and registered a single user. That created the tables in the database.
Then, while I could have continued as admin, I decided to try the demoted user and, tada!, it worked perfectly. Once the tables were there, the whole thing behaved as expected.
To be perfectly sure, I dropped the tables from the database, and there it was: the same issue as before. When I changed to the elevated user, the tables were restored, allowing me to get back to the demoted one.
I also tried dropping the tables, confirming that the issue reappeared, and then creating the tables manually. That works too! So basically, the only gotcha that caused it all was that the original admin is not treated as an admin.
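For completeness, the demoted login can be set up along these lines (all names are made up; CREATE LOGIN runs in master, the rest in the application database):
-- In master:
CREATE LOGIN AppUser WITH PASSWORD = 'use-a-strong-password';
-- In the application database:
CREATE USER AppUser FOR LOGIN AppUser;
ALTER ROLE db_datareader ADD MEMBER AppUser;
ALTER ROLE db_datawriter ADD MEMBER AppUser;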
It might have to do with the fact that my Azure account is getting a bit old: the LiveID used there is ancient and didn't have an updated version of the DB in Azure (the pull-up to v12 was carried out on the 18th of December, so it's possible that this was also a requirement to get it working). I'm too tired and lazy to check that out, and I realize that I've no idea how to get an "old" type of account. Besides, the issue will decrease and gradually vanish as the old accounts get upgraded eventually.
I've been updating an SP2010 solution which integrates an external content source into search via BCS. This solution deploys a feature (FeatureA) to the farm scope. I split it into two features, one (FeatureA) deploying to the farm scope, and one (FeatureB) to the site scope.
My update script does this (a sketch of the script follows the list):
Deactivate FeatureA on the farm
Update-SPSolution with the new wsp file (same name)
Activate FeatureA on the farm
Activate FeatureB on the two sites (on two different web apps)
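For reference, the script boils down to something like this (solution and feature names are made up):
# Farm-scoped feature: no -Url parameter.
Disable-SPFeature -Identity "FeatureA" -Confirm:$false
# Same-named package with the same set of files and features.
Update-SPSolution -Identity "mysolution.wsp" -LiteralPath "C:\drop\mysolution.wsp" -GACDeployment
Enable-SPFeature -Identity "FeatureA"
# Site-scoped feature: activated once per site collection.
Enable-SPFeature -Identity "FeatureB" -Url "http://webapp1/sites/a"
Enable-SPFeature -Identity "FeatureB" -Url "http://webapp2/sites/b"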
The script bombs on the last two steps, saying
Enable-SPFeature : The Feature is either not found or not a Farm Level Feature. Use Url parameter to specify the scope of the Feature.
for the first one (farm), and
Enable-SPFeature : The Feature is not a Farm Level Feature and is not found in a Site level defined by the Url http://url-site
for the second one (sites).
This is a test run on the CI server, which means it will also crash on the production server.
However, deploying the package on my machine, and activating the features, works fine.
I've checked: the features are actually present in the SharePoint features folder, so the deployment seems to have gone OK. I can't work out why SharePoint can't see them, though. If I run Get-SPFeature, they are not in the list.
I've tried iisreset, to no avail.
EDIT:
I've managed to get SharePoint to notice the two features by using Install-SPFeature.
However, it still won't enable FeatureB; it errors out with:
Enable-SPFeature : Attempted to perform an unauthorized operation.
I'm at a bit of a loss once again.
You cannot use Update-SPSolution when new files have been added to the solution package.
From Update-SPSolution:
The Update-SPSolution cmdlet upgrades a deployed SharePoint solution in the farm. Use this cmdlet only if a new solution contains the same set of files and features as the deployed solution. If files and features are different, the solution must be retracted and redeployed by using the Uninstall-SPSolution and Install-SPSolution cmdlets, respectively.
For more information, see Adding Features during Solution Update
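In script form, that retract/redeploy cycle looks roughly like this (names are hypothetical):
# Retract and remove the old package.
Uninstall-SPSolution -Identity "mysolution.wsp" -Confirm:$false
Remove-SPSolution -Identity "mysolution.wsp" -Confirm:$false
# Add and deploy the updated package, then reactivate features.
Add-SPSolution -LiteralPath "C:\drop\mysolution.wsp"
Install-SPSolution -Identity "mysolution.wsp" -GACDeployment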
OK, I've got an ASP.NET web service using WSE2. It runs on an XP machine.
And I've got the front-end ASP.NET application on my Win7 machine.
Both are on Framework 3.5.
In the production environment everything is fine.
The problem is, when I run the "development" version of the front end, web service calls take forever. And by forever, I mean an eternity. Here we count eternity in minutes.
By "developpement" version, I mean that I run the instance that is bound to the visual studion (2008) solution. I use the local IIS web server.
My first thought was a network/firewall problem between my two machines. But if, from Visual Studio, I "publish" the site to another virtual folder, then everything works fine.
So I have http://localhost/MyDevApp and http://localhost/MyPublishedApp.
Both use the default app pool. Both have identical web.config files. As far as I know, both virtual directories have the exact same parameters.
But http://localhost/MyDevApp is terribly slow when calling web services, and http://localhost/MyPublishedApp runs at light speed.
It has been like this for 3 days now.
Doing some debugging, I can say that:
MyWebServiceRequest request = new MyWebServiceRequest();
request.Url = "http://mywebserviceurl";
request.RequestSoapContext.Tokens.Add(MyUsernameToken);
// All the previous lines execute correctly and quickly.
// THIS is the slow one.
request.CallWebServiceMethod();
Does anyone have the slightest idea what the problem can be?
Edit
I also tried changing the virtual directory in my web site's properties to something different (say http://localhost/MyDevApp2), with the same result.
Edit 2
Maybe this is a factor: the site bound to the solution resides in c:\Projets\MySolution\MyDevApp while the "published" one is under c:\inetpub\wwwroot\MyPublishedApp. The c:\Projets folder is excluded from antivirus scans, though, so it should normally be faster rather than slower.
Edit 3
I created another workspace (the solution is under Team Server source control) in c:\inetpub\wwwroot\Other, changed the URL of the web project, compiled and ran: no problem. So it really seems that the physical path where the files reside is causing this, while being bound to the solution is not.
Edit 4 (August 19)
Well, it seems there is not much to do. It's been about 10 days since my last update, and now the site under the new workspace is beginning to slow down too. So I moved the workspace on disk, now to c:\inetpub\wwwroot\Other2, and you know what? It's running fine again. Perhaps I will have to move it again in about 10 days.
Edit 5
I flagged my question to be moved to Server Fault; in the end it has nothing to do with programming, contrary to what I first thought.
Hard to say without seeing your machine's config, but oftentimes it means you have some kind of problem with your DNS server or hosts file that makes the process slow to resolve the service.
Also, if you are using a proxy server, make sure you are bypassing it for any URLs that call the service.
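If the proxy is picked up from the machine configuration, the bypass can also be declared in web.config; a sketch (the address pattern is a hypothetical regex for the service host):
<system.net>
  <defaultProxy>
    <bypasslist>
      <!-- Matched as a regex against the request host. -->
      <add address="mywebserviceurl" />
    </bypasslist>
  </defaultProxy>
</system.net>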
Finally, it turned out that WSE2 traces were on:
<microsoft.web.services2>
  <diagnostics>
    <trace enabled="true" input="InputTrace.log" output="OutputTrace.log" />
  </diagnostics>
  <policy>
    <cache name="policyCache.config" />
  </policy>
</microsoft.web.services2>
As the log files grew larger and larger, everything slowed down more and more... just disabling the trace solved the problem.
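In other words, the diagnostics section simply becomes:
<microsoft.web.services2>
  <diagnostics>
    <!-- Tracing off: the log files stop growing on every call. -->
    <trace enabled="false" input="InputTrace.log" output="OutputTrace.log" />
  </diagnostics>
  <policy>
    <cache name="policyCache.config" />
  </policy>
</microsoft.web.services2>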