I recently deployed our first Azure Web App service and it was a pretty painless experience.
It was just a simple requestbin-like API app: one endpoint stores IDs fired by a webhook into an Azure table, and another endpoint queries whether a given ID is present. This is used to test the webhook in our deployment tests.
It works great; however, I'm expecting at most maybe 60 table requests a day to hit the storage account, in write and read pairs.
In the last 24 hours I've received 10.23k requests (pretty consistently through the night), as well as queue and blob requests, even though I don't use queues or blobs through the API. [Screenshot of Azure storage account requests]
Looking through the storage account's audit logs, I see almost exclusively ListKeys operations with the 'Caller' column blank.
[Screenshot of the audit log]
Does this mean these come from an internal Azure process? Some of the entries are me, but I'd assume those were from me checking through the dashboard.
The deployment tests themselves aren't live, and the table only contains the two initial test entities I inserted during testing, so I can't really account for these requests. A rookie mistake I'm sure, but any ideas?
Bonus: I use the block below to initialise my table. It resides in the apiClient class's constructor on a free-tier instance. Does table.CreateIfNotExists() count as a data transaction, and does having it in the constructor hammer the call as Azure moves the app across processes on the free tier?
_storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
_tableClient = _storageAccount.CreateCloudTableClient();
table = _tableClient.GetTableReference("webhookreceipts");
// Create the table if it doesn't exist.
table.CreateIfNotExists();
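For what it's worth, here is a sketch of keeping the existence check to once per process rather than once per apiClient construction, using Lazy<T> and the same names as the block above:

// Sketch: a static member of the apiClient class, so the table is resolved once per
// app domain instead of in every constructor call.
private static readonly Lazy<CloudTable> LazyTable = new Lazy<CloudTable>(() =>
{
    var account = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var client = account.CreateCloudTableClient();
    var table = client.GetTableReference("webhookreceipts");
    table.CreateIfNotExists(); // runs at most once per process
    return table;
});

// In the constructor: table = LazyTable.Value;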
thanks
Update:
I left it running overnight again and it appears to have followed the same pattern as before, cycling around 500 requests per hour through the night.
Firstly, I would suggest you click the "ListKeys" entries in the audit log to see the detailed info. Focus on the 'EVENT TIMESTAMP' and 'CALLER' properties to see when each operation was triggered and who performed it. Secondly, comment out all the code related to the Azure table and see whether the requests stop. Thirdly, try creating a new Azure account and see whether it has the same issue. Everything works on my side. If the issue still exists, I would suggest you contact Azure support for better help.
Related
I am currently working on a task where I need to synchronize data. For example:
A client makes a request to the API to get data (the API has access to the database), receives the data, and saves it to a list. That list is used for some time. Meanwhile, another client accesses the same API and database and makes some changes to the same table. After a while the first client wants to update its current data; since the table is quite big, say 10 thousand records, grabbing the entire table again is inefficient. I would like to grab only the records that have been modified, deleted, or newly created, and then update the list that client 1 holds. If the client has no records (at start-up), it classifies all of them as newly created and just grabs them all. I would like to do as much of the checking as possible on the client's side.
How would I go about this? I do have fields such as Modified, LastSync, and IsDeleted, so I can find the records I need, but the main issue is how to do it efficiently with minimal repetition.
At the moment I get all the rows up front; then, when I want to update (synchronize), I fetch only the minimal required info (LastSync, Modified, IsDeleted, Key) from the API, compare it with what I have on the client, and send only the keys of the rows that don't match back to the server to get the full values for those keys. But I am not sure about the efficiency of this either, and I am not sure how to update the current list with those values efficiently. The only way I can think of is a nested loop to compare keys and update the list, but I know that's not a good approach.
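For the merge step specifically, indexing the local rows by key avoids the nested loop. A minimal sketch, assuming a hypothetical Record class with Key, Modified and IsDeleted properties standing in for your own model:

using System;
using System.Collections.Generic;
using System.Linq;

public class Record
{
    public int Key { get; set; }
    public DateTime Modified { get; set; }
    public bool IsDeleted { get; set; }
    // ...the rest of the columns
}

public static class SyncHelper
{
    // Merge a batch of changed rows into the client's local list in O(n + m)
    // instead of comparing every local row against every change.
    public static List<Record> Merge(List<Record> local, IEnumerable<Record> changes)
    {
        var byKey = local.ToDictionary(r => r.Key);

        foreach (var change in changes)
        {
            if (change.IsDeleted)
                byKey.Remove(change.Key);   // row was deleted on the server
            else
                byKey[change.Key] = change; // row was updated or newly created
        }

        return byKey.Values.ToList();
    }
}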
This will never work as long as you do not do the checks on the server side. There is always a chance that someone posts between your GET call to the server and your POST call to the server. Whatever check you can do on the client side, you can also do on the server side.
Depending on your DB and other setup, you could accomplish this by adding triggers on the tables/fields that you want to track and setting up a cache (Redis, Memcached, Aerospike, etc.) plus a cache-refresh service. If something in the DB is added/updated/deleted, your trigger writes to a separate archive table in the DB. You then set up a job from, e.g., Jenkins (or a Kafka connector -- there are many ways to accomplish this) to poll the archive tables and the original table for changes based on an ID, a date, or whatever criteria you need. Anything that has changed is refreshed and written back to the cache. Your API then wouldn't be accessing the DB at all; it would just call the cache for the most recent data whenever a client requests it. The separate service would be responsible for synchronization of the data, database access, and keeping the cache up to date.
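As a rough illustration of that refresh job (the ChangeLog table, column names, and connection strings are made up, and StackExchange.Redis stands in for whichever cache you choose):

using System;
using System.Data.SqlClient;
using StackExchange.Redis;

// Sketch of the polling/refresh job: read rows changed since the last run from a
// hypothetical ChangeLog archive table and push them into the cache.
class CacheRefreshJob
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var cache = redis.GetDatabase();
        var since = DateTime.UtcNow.AddMinutes(-5); // in practice, persist the last-run timestamp

        using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT Id, Payload FROM ChangeLog WHERE ChangedAt > @since", conn);
            cmd.Parameters.AddWithValue("@since", since);

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The API then reads these keys from the cache instead of hitting the DB.
                    cache.StringSet("record:" + reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}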
I'm using AuthBot to log users in on my bot, but I'm handling a large amount of data, and since then it has started giving me an error that I'm overloading it (a stack overflow).
So I did as suggested and created Azure Table Storage, but since then my bot does not recognize the authentication. It seems AuthBot cannot get/set the data from table storage. Do you know anything about this?
The current AuthBot uses the default state client. I've submitted a PR to fix this: https://github.com/MicrosoftDX/AuthBot/pull/37/files
In the interim, you can download the AuthBot source and include the changes to OAuthCallbackController in your project.
Edit:
This repo will eventually replace AuthBot (it is already using the correct state client interfaces): https://github.com/richdizz/BotAuth
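For reference, the usual Bot Framework v3 registration for pointing bot state at Azure Table Storage looks roughly like the sketch below (the "BotStateTable" setting name is made up); AuthBot only benefits from it once it resolves the state client from the container, which is what the PR and BotAuth address.

using Autofac;
using Microsoft.Bot.Builder.Azure;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Connector;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // "BotStateTable" is a made-up connection string name.
        var store = new TableBotDataStore(
            System.Configuration.ConfigurationManager.ConnectionStrings["BotStateTable"].ConnectionString);

        Conversation.UpdateContainer(builder =>
        {
            // Register the Azure Table-backed store as the bot data store...
            builder.Register(c => store)
                .Keyed<IBotDataStore<BotData>>(AzureModule.Key_DataStore)
                .AsSelf()
                .SingleInstance();

            // ...and wrap it in the caching store the SDK resolves at runtime.
            builder.Register(c => new CachingBotDataStore(store, CachingBotDataStoreConsumeMode.ConsumeAndCache))
                .As<IBotDataStore<BotData>>()
                .AsSelf()
                .InstancePerLifetimeScope();
        });
    }
}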
I'm using the TelemetryClient class to send different types of telemetry (from my WPF app), but I have a problem with MetricTelemetry: all other types of telemetry work fine, but my MetricTelemetry custom data doesn't appear in the Metric Browser under Custom.
I call, for example, telemetryClient.TrackMetric("MyMetric", 1), then go to the Azure portal, but the custom metrics still contain only the "Azure Diag issues" field.
I had the same issue. For me, doing two things helped:
Call Flush() on the telemetryClient after sending the message.
Log in to Azure using incognito mode, to avoid cached items.
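A minimal sketch of the first point (configuration of the instrumentation key is assumed to happen elsewhere, e.g. in ApplicationInsights.config):

using Microsoft.ApplicationInsights;

public static class MetricsExample
{
    public static void SendMetric()
    {
        var telemetryClient = new TelemetryClient();

        telemetryClient.TrackMetric("MyMetric", 1);

        // In a desktop app telemetry is buffered in memory; flushing ensures the
        // buffer is sent instead of waiting for the next send interval.
        telemetryClient.Flush();
    }
}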
Dominik, does the problem still exist? Sometimes you might see a delay of around 5-10 minutes between running the code that tracks the metric and the metric showing up in the portal.
I am a complete newbie to Azure. My understanding of it: it is an online database.
I have inherited the following code:
// Read the storage connection string from the Azure/app configuration.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create a client for the Table service of that storage account.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
// Get a reference to the table named by the "AzureTable" app setting...
table = tableClient.GetTableReference(
    Convert.ToString(System.Configuration.ConfigurationManager.AppSettings["AzureTable"]));
// ...and create it in the storage account if it does not exist yet.
table.CreateIfNotExists();
Now what I am confused about is:
1. Where do I log in to see whether the table has been created (or whether a table with the name given by the AzureTable setting exists at all)?
2. I have an MSDN subscription; is this all I need to connect to and use Azure?
Thanks,
Andrew
table.CreateIfNotExists returns a boolean indicating whether the table was successfully created. MSDN Link
1b. You can use a convenience tool such as Azure Storage Explorer CodePlex Link to view all the tables as well as all the rows.
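For instance, a quick way to confirm from code, reusing the table reference from the snippet in the question:

// true  -> the table did not exist and has just been created
// false -> the table already existed
bool created = table.CreateIfNotExists();
Console.WriteLine(created ? "Table was just created." : "Table already existed.");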
An MSDN subscription is all you need. The level (Premium, Ultimate, etc.) determines how much credit you have to spend on the Azure services; you can see the total and remaining amounts by clicking the green "Credit Status" button in the portal once you have logged in.
MSDN accounts have a spending limit by default, so don't worry: it won't charge your card; your services will just be suspended until the next billing period. A recent change means that if you shut down some of your services they don't incur any running costs, saving you some MSDN credit.
I like AzureXplorer for viewing the contents of storage (blobs, tables, queues).
Maybe it can be useful for you.
I have a page that executes a long process, parsing over 6 million rows from several CSV files into my database.
The question is: when the user clicks "GO" to start processing and parsing the 6 million rows, I would like to set a session variable that is immediately available to the rest of my web site, so that any user of the site knows a user with a unique ID number has started parsing files, without having to wait until all 6 million rows have been processed.
Also, with jQuery and JSON, I'd like to get feedback on a web page as to which CSV file is being processed and how many rows have been processed.
There could be other people parsing files at the same time; how could I track all of this and avoid any mix-ups between users, even though there is no login or user authentication on the site?
I'm developing in C# with .NET 4.0, Entity Framework 4.0, jQuery, and MS SQL 2008 R2.
I was thinking of using session variables; however, in the static [WebMethod] used for my jQuery JSON calls I can only pull back my session through HttpContext.Current.Session, and I am not sure whether this solution would work.
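For what it's worth, a sketch of what that access pattern looks like; the EnableSession flag is what lets the handler load the session for a static page method, and the "ImportStatus" key is made up:

// In the page's code-behind.
[System.Web.Services.WebMethod(EnableSession = true)]
public static string GetImportStatus()
{
    // There is no Page.Session in a static method, so go through HttpContext.
    var session = System.Web.HttpContext.Current.Session;
    return (session["ImportStatus"] as string) ?? "not started"; // "ImportStatus" is a made-up key
}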
Any guidance or ideas would be mostly appreciated.
Thanks
First of all: session variables are not supposed to be visible to every user everywhere.
When a client connects to the server, a session is created for them by the server, and the next time the same user makes a request (within the session's expiration time), that session (and its variables) is available again.
You can use a static class for this if you want state that is shared across all users.
For example:
public static class MyApplicationStateBag
{
    // Shared across every request in the application; initialise it once and
    // synchronise access yourself (or use a ConcurrentDictionary).
    private static readonly Dictionary<string, object> _objects = new Dictionary<string, object>();

    public static Dictionary<string, object> Objects { get { return _objects; } }
}
And for your progress report, you can use an asp:Timer to check the progress percentage every second or two.
Here is sample code I have written for an asp:Timer inside an UpdatePanel:
Writing a code trigger for the UpdatePanel.
I suggest you use a Guid identifying the current job as the key into your state bag.
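A rough usage sketch of that idea (ProgressInfo is a made-up holder class; the state bag is the one defined above):

using System;
using System.Collections.Generic;

// Made-up progress holder for illustration.
public class ProgressInfo
{
    public string CurrentFile { get; set; }
    public int RowsProcessed { get; set; }
}

public static class ImportJobs
{
    // Called when the user clicks GO: create a job id visible to the whole application.
    public static Guid StartImport()
    {
        var jobId = Guid.NewGuid();
        lock (MyApplicationStateBag.Objects)
        {
            MyApplicationStateBag.Objects[jobId.ToString()] = new ProgressInfo();
        }
        return jobId; // handed back to the page so jQuery can poll progress by this id
    }
}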
The correct way of doing this is via services, for example WCF services. You don't want to put an immense load on the web server, which is not supposed to do that kind of work.
The usual scenario:
User clicks on GO button
Web server creates a job and starts this job on a separate WCF service
Each job has ID and metadata (status, start time, etc.) that is persisted to the storage
Web server returns response with job ID to the user
The user, via AJAX (jQuery), queries the job in the storage; once it has completed, you can retrieve the results
You can also save the job ID to the session
P.S. It's not a direct answer to your question, but I hope it helps.
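A very rough sketch of that flow on the web-server side (JobInfo, JobStore, and ParsingServiceClient are all made-up names; substitute whatever your WCF service and storage actually expose):

using System;
using System.Web.Services;

// Made-up job record persisted to storage so any request (or user) can check on it.
public class JobInfo
{
    public Guid Id { get; set; }
    public string Status { get; set; }      // e.g. "Queued", "Running", "Done"
    public DateTime StartTime { get; set; }
    public int RowsProcessed { get; set; }
}

public partial class ImportPage : System.Web.UI.Page
{
    [WebMethod]
    public static string StartParsing()
    {
        var job = new JobInfo { Id = Guid.NewGuid(), Status = "Queued", StartTime = DateTime.UtcNow };
        JobStore.Save(job);                    // hypothetical persistence helper
        ParsingServiceClient.BeginJob(job.Id); // hypothetical call into the separate WCF service
        return job.Id.ToString();              // jQuery keeps this id and polls GetStatus with it
    }

    [WebMethod]
    public static JobInfo GetStatus(string jobId)
    {
        return JobStore.Load(Guid.Parse(jobId)); // polled via AJAX every few seconds
    }
}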