We're currently investigating an issue where, according to our firewall provider, we have at times around 1,500 parallel sessions open. I strongly suspect that our TFS replication, a service which fetches work items via the TFS object model from an external TFS and saves some of their data into a local SQL database, is causing the issue.
The access to the object model looks like this:
internal static async Task<IReadOnlyCollection<WorkItem>> QueryWorkItemsAsync(string wiqlQuery)
{
    var tfsConfig = TFSConfiguration.Instances[Constants.TfsPrefix.External];
    var uri = new Uri(tfsConfig.WebbaseUri);
    var teamProject = new TfsTeamProjectCollection(uri, new VssBasicCredential(string.Empty, ConfigurationManager.AppSettings[Infrastructure.Constants.APP_SETTINGS_TFS_PAT]));
    var workItemStore = teamProject.GetService<WorkItemStore>();
    var query = new Query(workItemStore, wiqlQuery, null, false);

    var result = await Task.Run(
        () =>
        {
            var castedWorkItems = query.RunQuery().Cast<WorkItem>();
            return castedWorkItems.ToList();
        });

    return result;
}
Nothing too fancy: a WIQL query can be passed into the method. Currently I'm fetching work items in blocks, so the WIQL looks like this:
var wiql = "SELECT * FROM WorkItems";
wiql += $" WHERE [System.Id] > {minWorkItemId}";
wiql += $" AND [System.Id] <= {maxWorkItemId}";
wiql += " ORDER BY [System.Id] DESC";
I'm doing pretty much nothing with these work items except mapping some of their fields; I'm not writing or saving anything. I haven't found any hint about open sessions on the objects I'm using, and the WorkItem objects themselves are only very short-lived in memory.
Am I missing something here that could explain the open sessions within that service?
The client object model does a number of things:
It keeps a connection pool for each user/projectcollection combination to speed up data transfers.
Each work item revision you hit needs to be fetched.
Each work item materialized from a query contains only the fields selected in the query; additional fields are fetched on demand when accessed.
The TfsTeamProjectCollection class implements IDisposable and must be cleaned up once in a while to ensure connections are closed. An internal cache is maintained, but disposing the collection ensures that its connections are closed.
It's probably a good idea to wrap this code in a using (or try/finally) block, or to provide the Team Project Collection through dependency injection and manage the connection at a higher level (otherwise your additional fields will fail to be populated once the collection is disposed).
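A minimal sketch of that disposal pattern, keeping the configuration lookup from the question and assuming all field access happens before the using block ends:

internal static async Task<IReadOnlyCollection<WorkItem>> QueryWorkItemsAsync(string wiqlQuery)
{
    var tfsConfig = TFSConfiguration.Instances[Constants.TfsPrefix.External];
    var uri = new Uri(tfsConfig.WebbaseUri);
    var credential = new VssBasicCredential(string.Empty, ConfigurationManager.AppSettings[Infrastructure.Constants.APP_SETTINGS_TFS_PAT]);

    // Disposing the collection releases its pooled connections again.
    using (var teamProject = new TfsTeamProjectCollection(uri, credential))
    {
        var workItemStore = teamProject.GetService<WorkItemStore>();
        var query = new Query(workItemStore, wiqlQuery, null, false);

        // Materialize the work items before the collection is disposed,
        // otherwise lazily loaded fields can no longer be fetched.
        return await Task.Run(() => query.RunQuery().Cast<WorkItem>().ToList());
    }
}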
I don't know the exact details behind the WorkItem class, but I've observed that when you specify only a few fields in the SELECT of the WIQL, you can still access the others, and doing so is comparably slow. If I select all the fields I later access through the indexer, it is much faster.
From that observation I would say: yes, a connection is kept open.
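Building on that observation, the block query from the question could select the mapped fields explicitly so they are fetched up front (the field list below is only an example; use whatever your mapping actually reads):

var wiql = "SELECT [System.Id], [System.Title], [System.State], [System.ChangedDate] FROM WorkItems";
wiql += $" WHERE [System.Id] > {minWorkItemId}";
wiql += $" AND [System.Id] <= {maxWorkItemId}";
wiql += " ORDER BY [System.Id] DESC";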
Are failed commands (inserts, updates, deletes etc.) logged anywhere by MongoDB?
I'm using the C# driver and some commands fail (e.g. inserts) due to a duplicate unique key (enforced by an index), so I want to see in retrospect which documents were being inserted.
I would like to see the raw documents that failed, after the driver serialized them.
By the way, as I understand it, the Mongo oplog only contains successful commands.
Are failed commands (inserts, updates, deletes etc.) logged anywhere by MongoDB?
I don't think they are, but maybe I haven't tried hard enough to find them yet.
However, you can log them in the application by setting the ClusterConfigurator on the MongoClientSettings, like this:
// Build the initial settings from the Mongo connection string
MongoClientSettings settings = MongoClientSettings.FromConnectionString("MongoConnectionString");

// Subscribe to the following events
settings.ClusterConfigurator += cb =>
{
    cb.Subscribe(delegate (CommandStartedEvent startedEvent) { Console.WriteLine($"Started: {startedEvent.Command} with OpId: {startedEvent.OperationId}"); });
    cb.Subscribe(delegate (CommandSucceededEvent succeededEvent) { Console.WriteLine($"Succeeded OpId: {succeededEvent.OperationId}"); });
    cb.Subscribe(delegate (CommandFailedEvent failedEvent) { Console.WriteLine($"Failed OpId: {failedEvent.OperationId}"); });
};

// Build a MongoClient with the new settings
var client = new MongoClient(settings);
This example will only write out which commands are being executed and which OperationId failed or succeeded.
But from here on you can extend it by keeping track of which command was started and which OperationId it runs under.
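As a rough sketch of that idea (assuming the settings object from above, a C# 7 compiler, and ToJson() from MongoDB.Bson), you could remember each started command keyed by its RequestId and print it again when the matching failure arrives:

using System.Collections.Concurrent;

var startedCommands = new ConcurrentDictionary<int, string>();

settings.ClusterConfigurator += cb =>
{
    // Remember the serialized command when it starts
    cb.Subscribe<CommandStartedEvent>(e => startedCommands[e.RequestId] = e.Command.ToJson());

    // Forget it again when it succeeds
    cb.Subscribe<CommandSucceededEvent>(e => startedCommands.TryRemove(e.RequestId, out _));

    // On failure, log the exception together with the command that caused it
    cb.Subscribe<CommandFailedEvent>(e =>
    {
        if (startedCommands.TryRemove(e.RequestId, out var command))
        {
            Console.WriteLine($"Failed: {e.CommandName} ({e.Failure.Message}) Command: {command}");
        }
    });
};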
For a complete example, see this Gist (it seems like too much code to post here), which can be called like this:
var settings = MongoClientSettings.FromConnectionString("MongoConnectionString");
new MongoTracking().ConfigureTracking(settings);
var client = new MongoClient(settings);
For the record, this does the logging in the application and not the database.
I am using the MS Dynamics CRM SDK with C#. I have a WCF service method which creates an entity record.
I am using CreateRequest in the method. The client calls this method with two identical requests, one immediately after the other.
There is a fetch before creating a record, and if the record is available we update it. However, the two inserts happen at exactly the same time, so two records with identical data get created in CRM.
Can someone help me prevent this concurrency issue?
You should force the duplicate detection rule and decide what to do. Read more
Account a = new Account();
a.Name = "My account";

CreateRequest req = new CreateRequest();
req.Parameters.Add("SuppressDuplicateDetection", false); // do not suppress, i.e. force duplicate detection to run
req.Target = a;

try
{
    service.Execute(req);
}
catch (FaultException<OrganizationServiceFault> ex)
{
    if (ex.Detail.ErrorCode == -2147220685)
    {
        // Account with name "My account" already exists
    }
    else
    {
        throw;
    }
}
As Filburt commented on your question, the preferred approach would be to use an Alternate Key and Upsert requests, but unfortunately that's not an option for you if you're working with CRM 2013.
In your scenario, I'd implement a very lightweight cache in the WCF service, probably using the MemoryCache class from the System.Runtime.Caching.dll library (small example). Before executing the query against CRM, check whether the record exists in the cache. If it doesn't, continue with your current processing, remembering to add the record to the cache with a small expiration time to cover potential concurrent executions. If the record already exists in the cache, handle that scenario instead; here you can go from quite complex checks to detect and prevent potential data loss or unnecessary updates, down to a simple and stupid Thread.Sleep(1000).
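A minimal sketch of that guard, assuming a hypothetical CreateOrUpdate method, the account name as the cache key, and a 30-second expiration (all of which you'd adapt to your service):

using System.Runtime.Caching;

private static readonly MemoryCache RecentCreates = MemoryCache.Default;

public void CreateOrUpdate(Account account)
{
    var cacheKey = "account:" + account.Name;

    // AddOrGetExisting returns null only when the key was not present yet,
    // so the first concurrent caller "wins" and the second one sees the entry.
    var alreadyInFlight = RecentCreates.AddOrGetExisting(
        cacheKey, true, DateTimeOffset.UtcNow.AddSeconds(30)) != null;

    if (alreadyInFlight)
    {
        // A create for the same record is already being processed;
        // fall back to the fetch-and-update path (or simply wait and retry).
        return;
    }

    // ... existing fetch / CreateRequest logic goes here ...
}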
I am new to ASP.NET Web API and need to achieve the following. I have two Web API endpoints:
http://someurl.azurewebsites.net/api/recipe/recipes returns the recipes that are available: [ID, Name, Type]
http://someurl.azurewebsites.net/api/recipe/{ID} returns a single recipe: [ID, Ingredients, Cost, ...]
I need to build an application that finds the cheapest recipe in a timely manner.
I am able to get the desired result using the following code, but at times it crashes and throws System.Threading.Tasks.TaskCanceledException: A task was canceled.
How can I achieve this efficiently, either in the controller or in JavaScript?
public ActionResult Index()
{
    List<Receipe> receipelist = new List<Receipe>();
    var baseAddress = "http://someurl.azurewebsites.net/api/recipe/recipes";

    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("x-access-token", "sjd1HfkjU83");

        using (var response = client.GetAsync(baseAddress).Result)
        {
            if (response.IsSuccessStatusCode)
            {
                var jsonString = response.Content.ReadAsStringAsync().GetAwaiter().GetResult();
                var Receipes = JsonConvert.DeserializeObject<List<Receipe>>(jsonString.Substring(jsonString.IndexOf("Receipes") + 8, (jsonString.Length - jsonString.IndexOf("Receipes") - 9)));

                if (Receipes != null)
                {
                    foreach (Receipe Receipe in Receipes)
                    {
                        var baseAddress1 = "http://someurl.azurewebsites.net/api/recipe/" + Receipe.ID;

                        using (var client1 = new HttpClient())
                        {
                            client1.DefaultRequestHeaders.Add("x-access-token", "sjd1HfkjU83");

                            using (var response1 = client1.GetAsync(baseAddress1).Result)
                            {
                                var jsonString1 = response1.Content.ReadAsStringAsync().GetAwaiter().GetResult();
                                receipelist.Add(JsonConvert.DeserializeObject<Receipe>(jsonString1));
                            }
                        }
                    }
                }
            }
            else
            {
                Console.WriteLine("{0} ({1})", (int)response.StatusCode, response.ReasonPhrase);
            }
        }
    }

    return View(receipelist);
}
There are several ways to go about this. Without experimenting, it is not really possible to say which would be the most efficient, and maybe we don't really need to be that efficient. Consider the following steps:
Try to cache the result of the first response, i.e. the list of available recipes (endpoint #1). You could cache it for a timespan that matches your business requirement, which will most likely save you the first trip. My guess is that the list of available recipes will not change very often, so you could probably cache it for a while (hours? days?). You could use a dictionary as the cached data structure (see the sketch after these steps).
If you have control over endpoint #2, I'd recommend providing an API that takes a list of IDs instead of just one ID. In essence you'd be asking endpoint #2 to return a list of recipes with prices in one API call rather than looping over one recipe at a time. You could probably cache this data as well, depending on how often the prices change; my guess is they won't change very often either. When you get the list of prices with IDs, efficiently plug the price info into the existing dictionary.
When you hit an exception while making an API call, you can always return the stale data to the users instead of showing no data or an error. Be sure to let the users know by labeling it in the UI with something like "Recipes as of 2 hours ago".
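Here's a rough sketch of the caching from step 1, assuming MemoryCache, a single shared HttpClient, a one-hour expiry, and that the endpoint returns a plain JSON array (the "Receipes" substring handling from the question is skipped for brevity):

private static readonly HttpClient Client = new HttpClient();
private static readonly MemoryCache Cache = MemoryCache.Default;

private static async Task<List<Receipe>> GetReceipesAsync()
{
    // Serve the recipe list from the cache when we already have it.
    if (Cache.Get("receipes") is List<Receipe> cached)
    {
        return cached;
    }

    var request = new HttpRequestMessage(HttpMethod.Get, "http://someurl.azurewebsites.net/api/recipe/recipes");
    request.Headers.Add("x-access-token", "sjd1HfkjU83");

    var response = await Client.SendAsync(request);
    response.EnsureSuccessStatusCode();

    var json = await response.Content.ReadAsStringAsync();
    var receipes = JsonConvert.DeserializeObject<List<Receipe>>(json);

    // Cache for an hour; tune this to how often the recipe list really changes.
    Cache.Set("receipes", receipes, DateTimeOffset.UtcNow.AddHours(1));
    return receipes;
}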
With the above changes, your application should be able to perform well and in the worst case, display stale data.
Keep in mind that, depending on the problem domain, we don't always have to be consistent with the data up to the second or minute. In reality, data on the internet is always a little stale; for example, by the time your end user sees the price and decides to buy a recipe, the price might already have changed!
Hope this helps.
I am writing an API to allow a client to consume time-ordered data. There is a lot of it (10k+ records per client), so I don't want to dump all of it back to the client and order it there; hence my need to both order it and page it server-side. I have the paging working, but I cannot see how to add ordering.
I've seen recommendations to sort on the client, but given the potential amount of data that is not going to work in this case. Is there a workaround?
Here is what I have so far:
var options = new FeedOptions
{
    MaxItemCount = 25,
    RequestContinuation = continuationToken
};

var query = String.Format("SELECT * FROM TimelineEvent t WHERE t.acc_id = '{0}' AND t.removed != true", accountId);
// ORDER BY in the query text doesn't appear to work
var events = client.CreateDocumentQuery<TimelineEvent>(colSelfLink, query, options).AsDocumentQuery();

var response = await events.ExecuteNextAsync<TimelineEvent>();
It's not supported out of the box, but you can implement a stored procedure that does this.
The msft product group has supplied some code samples here: https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/sourcecode?fileId=132409&pathId=34503205
Look under the server-side scripts JS folder; you'll see an "orderby" script that does this. Adjust it to your needs and try it.
ORDER BY is now officially supported by DocumentDB
https://azure.microsoft.com/en-us/documentation/articles/documentdb-orderby/
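With that in place, the ordering can go straight into the query from the question. A sketch (t.timestamp is a stand-in for whatever property holds your event time, and the ordered property may need a range index in the collection's indexing policy):

var options = new FeedOptions
{
    MaxItemCount = 25,
    RequestContinuation = continuationToken
};

var query = client.CreateDocumentQuery<TimelineEvent>(
        colSelfLink,
        String.Format("SELECT * FROM TimelineEvent t WHERE t.acc_id = '{0}' AND t.removed != true ORDER BY t.timestamp DESC", accountId),
        options)
    .AsDocumentQuery();

var response = await query.ExecuteNextAsync<TimelineEvent>();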
I'm trying to get an item from the MongoDB server. Sometimes it works, but after 4-5 attempts it stops responding at the last row (I can't get the object out of the query).
Has anyone seen this before? What is the right way to retrieve the object?
var client = new MongoClient(connectionString);
var server = client.GetServer();
var database = server.GetDatabase("myPlaces");
var collection = database.GetCollection<MongoPlace>("Places");

int startDay = int.Parse(Request.QueryString["day"]);

MongoPlace mp = collection.AsQueryable<MongoPlace>()
    .Where(x => x.guid == Request.QueryString["id"])
    .FirstOrDefault();
It's likely you're hitting the default connection pool limit.
As it looks like this is a web application, you shouldn't be opening the client more than once per instance of your web application.
The MongoClient, MongoServer, MongoDatabase and MongoCollection are all thread-safe and generally there should only be one instance of each. (See here for more information).
You'd probably want to do this as the application starts and then maintain the connections statically until the application exits.
In my ASP.NET MVC applications, I usually add a "DatabaseConfig" class that's called in the same way as other app configurations. As an example here's some code I've got in the project I'm currently building using MongoDB (there isn't any error handling yet):
var client = new MongoClient(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
var server = client.GetServer();
DataLayer.Client = client;
DataLayer.Server = server;
var settings = new MongoDatabaseSettings(server, "default");
settings.WriteConcern = WriteConcern.Acknowledged;
DataLayer.Database = DataLayer.GetDatabase(settings);
Then, in Application_Start, I call an Initialize method that contains the code above.
DatabaseConfig.Initialize();
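For completeness, a hypothetical skeleton of the DatabaseConfig and DataLayer classes referenced above (the member names are only assumptions based on the snippet):

public static class DataLayer
{
    public static MongoClient Client { get; set; }
    public static MongoServer Server { get; set; }
    public static MongoDatabase Database { get; set; }

    public static MongoDatabase GetDatabase(MongoDatabaseSettings settings)
    {
        // Reuse the single MongoServer instance created at startup.
        return Server.GetDatabase(settings);
    }
}

public static class DatabaseConfig
{
    public static void Initialize()
    {
        // ... the client/server/database setup shown above goes here ...
    }
}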