I'm developing a "Task Control System" that will allow its users to enter task description information including when to execute the task and what environment (OS, browser, etc.) the task requires.
The 'controller' saves the description information and schedules the task. When the scheduled time arrives, the scheduler retrieves the task information and 'queues' the task for a remote machine that matches the required environment.
My first cut at this used a relational database to persist the task descriptions and enough history to track problems (about two weeks' worth). But this isn't a 'big data' problem, the relationships are simple, and I need better performance, so I'm looking for something faster.
I'm trying to use Redis for this, but I'm having some problems. I'm using ServiceStack.Redis version 3.9.71.0 as the client, and Redis 2.8.4 as the server.
This sample code is taken from Dan Swain's tutorial. It's updated to work with ServiceStack.Redis client v 3.9.71.0. Much of it works, but 'currentShippers.Remove(lameShipper);' does NOT work.
Can anyone see why that might be?
Thanks
public void ShippersUseCase()
{
    using (var redisClient = new RedisClient("localhost"))
    {
        // Create a 'strongly-typed' API that makes all Redis Value operations apply against Shippers
        var redis = redisClient.As<Shipper>();

        // Redis lists implement IList<T> while Redis sets implement ICollection<T>
        var currentShippers = redis.Lists["urn:shippers:current"];
        var prospectiveShippers = redis.Lists["urn:shippers:prospective"];

        currentShippers.Add(
            new Shipper
            {
                Id = redis.GetNextSequence(),
                CompanyName = "Trains R Us",
                DateCreated = DateTime.UtcNow,
                ShipperType = ShipperType.Trains,
                UniqueRef = Guid.NewGuid()
            });

        currentShippers.Add(
            new Shipper
            {
                Id = redis.GetNextSequence(),
                CompanyName = "Planes R Us",
                DateCreated = DateTime.UtcNow,
                ShipperType = ShipperType.Planes,
                UniqueRef = Guid.NewGuid()
            });

        var lameShipper = new Shipper
        {
            Id = redis.GetNextSequence(),
            CompanyName = "We do everything!",
            DateCreated = DateTime.UtcNow,
            ShipperType = ShipperType.All,
            UniqueRef = Guid.NewGuid()
        };

        currentShippers.Add(lameShipper);

        Dump("ADDED 3 SHIPPERS:", currentShippers);

        currentShippers.Remove(lameShipper);
        .
        .
        .
    }
}
Fixed the problem by adding these overrides to the 'Shipper' class, so that shippers are compared by Id (value equality) rather than by object reference:
public override bool Equals(object obj)
{
    if (obj == null)
    {
        return false;
    }

    var input = obj as Shipper;
    return input != null && Equals(input);
}

public bool Equals(Shipper other)
{
    return other != null && Id.Equals(other.Id);
}

public override int GetHashCode()
{
    return (int)Id;
}
This working example shows how to get List<T>.Contains, List<T>.Find, and List<T>.Remove working. Once the overrides were applied to the 'Shipper' class, the problem was solved!
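For reference, a minimal follow-up check (a sketch reusing the tutorial's Dump helper shown above, not part of the original answer) that the removal now succeeds once equality is based on Id:

currentShippers.Remove(lameShipper);         // now matches the deserialized copy by Id via Equals
Dump("REMOVED 1 SHIPPER:", currentShippers); // should list only the two remaining shippers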
I'm building a dApp using Unity & Nethereum.
I deployed a contract to the Ropsten test net using Remix. I had its ABI and bytecode, so I generated the Definition & Service C# code using the Solidity extension for VS Code.
I wanted to mint a new NFT; below is the code I tried.
string url = "my infura - ropsten url";
string privateKey = "private Key of my MetaMask account";
string userAddress = "public address of my MetaMask account";
string contractAddress = "address of deployed contract";
var account = new Account(privateKey);
var web3 = new Web3(account, url);
var service = new MyNFTService(web3, contractAddress);
var mintReceipt = await service.MintRequestAndWaitForReceiptAsync(userAddress, "address of metadata");
But I can't get a receipt even after a long time... Why is this happening? I can't find any answers, and all I can do is wait.
I have tried everything I can think of, like SendTransactionAndWaitForReceiptAsync(), SignAndSendTransaction(), and so on.
The version of Nethereum is 4.1.1, and the version of Unity is 2019.4.21f1.
Below is the relevant part of the definition code (mint).
public partial class MintFunction : MintFunctionBase { }

[Function("mint", "uint256")]
public class MintFunctionBase : FunctionMessage
{
    [Parameter("address", "user", 1)]
    public virtual string User { get; set; }

    [Parameter("string", "tokenURI", 2)]
    public virtual string TokenURI { get; set; }
}
And below is the relevant part of the service code (mint).
public Task<string> MintRequestAsync(MintFunction mintFunction)
{
    return ContractHandler.SendRequestAsync(mintFunction);
}

public Task<TransactionReceipt> MintRequestAndWaitForReceiptAsync(MintFunction mintFunction, CancellationTokenSource cancellationToken = null)
{
    return ContractHandler.SendRequestAndWaitForReceiptAsync(mintFunction, cancellationToken);
}

public Task<string> MintRequestAsync(string user, string tokenURI)
{
    var mintFunction = new MintFunction();
    mintFunction.User = user;
    mintFunction.TokenURI = tokenURI;

    return ContractHandler.SendRequestAsync(mintFunction);
}

public Task<TransactionReceipt> MintRequestAndWaitForReceiptAsync(string user, string tokenURI, CancellationTokenSource cancellationToken = null)
{
    var mintFunction = new MintFunction();
    mintFunction.User = user;
    mintFunction.TokenURI = tokenURI;

    return ContractHandler.SendRequestAndWaitForReceiptAsync(mintFunction, cancellationToken);
}
I have been struggling with this problem for five days... Please help me.
I solved it today! (But I didn't use my service code.)
In my opinion, the reason the transaction didn't go through is that miners weren't mining it. (More precisely, they could mine it, but they didn't, because mining other transactions earned them more.)
The Nethereum documentation says it can set the gas price to the average automatically, but that didn't seem to work for me. After I added code to estimate the gas and set the gas price explicitly, SendRequestAndWaitForReceiptAsync() worked very well (and I received the transaction hash).
Below is the code that I used to solve this problem.
var mintHandler = web3.Eth.GetContractTransactionHandler<MintFunction>();

var mint = new MintFunction()
{
    User = userAddress,
    TokenURI = "Token URI"
};

mint.GasPrice = Web3.Convert.ToWei(25, UnitConversion.EthUnit.Gwei);

var estimate = await mintHandler.EstimateGasAsync(contractAddress, mint);
mint.Gas = estimate.Value;

var mintReceipt = await mintHandler.SendRequestAndWaitForReceiptAsync(contractAddress, mint);
Debug.Log(mintReceipt.TransactionHash);
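For completeness, the same gas settings should also flow through the generated service code from the question, since MintFunction inherits Gas and GasPrice from FunctionMessage. A sketch (untested; it assumes the MyNFTService instance and variables shown earlier):

// Sketch only: set gas fields on the function message and reuse the generated service.
var mint = new MintFunction
{
    User = userAddress,
    TokenURI = "address of metadata",
    GasPrice = Web3.Convert.ToWei(25, UnitConversion.EthUnit.Gwei)
};

// Estimate gas with the same handler API as above, then hand the message to the service.
var handler = web3.Eth.GetContractTransactionHandler<MintFunction>();
mint.Gas = (await handler.EstimateGasAsync(contractAddress, mint)).Value;

var receipt = await service.MintRequestAndWaitForReceiptAsync(mint);
Debug.Log(receipt.TransactionHash);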
On a single-instance MongoDB server, even with the client's write concern set to journaled, one in every couple of thousand documents isn't replaceable immediately after it's inserted.
I was under the impression that once journaled, documents are immediately available for querying.
The code below inserts a document, then changes its DateModified property and tries to replace the document, filtering on the document's Id and the old value of that property.
public class MyDocument
{
    public BsonObjectId Id { get; set; }
    public DateTime DateModified { get; set; }
}

static void Main(string[] args)
{
    var r = Task.Run(MainAsync);

    Console.WriteLine("Inserting documents... Press any key to exit.");
    Console.ReadKey(intercept: true);
}
private static async Task MainAsync()
{
    var client = new MongoClient("mongodb://localhost:27017");
    var database = client.GetDatabase("updateInsertedDocuments");
    var concern = new WriteConcern(journal: true);
    var collection = database.GetCollection<MyDocument>("docs").WithWriteConcern(concern);

    int errorCount = 0;
    int totalCount = 0;

    do
    {
        totalCount++;

        // Create and insert the document
        var document = new MyDocument
        {
            DateModified = DateTime.Now,
        };
        await collection.InsertOneAsync(document);

        // Save and update the modified date
        var oldDateModified = document.DateModified;
        document.DateModified = DateTime.Now;

        // Try to update the document by Id and the earlier DateModified
        var result = await collection.ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, document);

        if (result.ModifiedCount == 0)
        {
            Console.WriteLine($"Error {++errorCount}/{totalCount}: doc {document.Id} did not have DateModified {oldDateModified.ToString("yyyy-MM-dd HH:mm:ss.ffffff")}");
            await DoesItExist(collection, document, oldDateModified);
        }
    }
    while (true);
}
The code inserts at a rate of around 250 documents per second. Around one in 1,000-15,000 calls to ReplaceOneAsync(d => d.Id == document.Id && d.DateModified == oldDateModified, ...) fails, returning a ModifiedCount of 0. The failure rate depends on whether we run a Debug or Release build and whether a debugger is attached: more speed means more errors.
The code shown represents something that I can't really easily change. Of course I'd rather perform a series of Update.Set() calls, but that's not really an option right now. The InsertOneAsync() followed by a ReplaceOneAsync() is abstracted by some kind of repository pattern that updates entities by reference. The non-async counterparts of the methods display the same behavior.
A simple Thread.Sleep(100) between inserting and replacing mitigates the problem.
When the query fails, and we wait a while and then attempt to query the document again in the code below, it'll be found every time.
private static async Task DoesItExist(IMongoCollection<MyDocument> collection, MyDocument document, DateTime oldDateModified)
{
    Thread.Sleep(500);

    var fromDatabaseCursor = await collection.FindAsync(d => d.Id == document.Id && d.DateModified == oldDateModified);
    var fromDatabaseDoc = await fromDatabaseCursor.FirstOrDefaultAsync();

    if (fromDatabaseDoc != null)
    {
        Console.WriteLine("But it was found!");
    }
    else
    {
        Console.WriteLine("And wasn't found!");
    }
}
Versions on which this occurs:
MongoDB Community Server 3.4.0, 3.4.1, 3.4.3, 3.4.4 and 3.4.10, all on WiredTiger storage engine
Server runs on Windows, other OSes as well
C# Mongo Driver 2.3.0 and 2.4.4
Is this an issue in MongoDB, or are we doing (or assuming) something wrong?
Or, the actual end goal, how can I ensure an insert is immediately retrievable by an update?
ReplaceOneAsync returns a ModifiedCount of 0 if the new document is identical to the old one (because nothing changed).
It looks to me like, if your test executes fast enough, the successive calls to DateTime.Now could return the same value, so it is possible that you are passing the exact same document to InsertOneAsync and ReplaceOneAsync.
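One way to test this hypothesis (a sketch, not the original code) is to inspect MatchedCount separately from ModifiedCount: if the document was matched but nothing was modified, the old and new documents were identical. Forcing DateTime.Now to actually move avoids that case entirely:

// Sketch: distinguish "no document matched" from "matched but nothing changed".
var oldDateModified = document.DateModified;
do
{
    document.DateModified = DateTime.Now;   // loop until the timestamp really differs
} while (document.DateModified == oldDateModified);

var result = await collection.ReplaceOneAsync(
    d => d.Id == document.Id && d.DateModified == oldDateModified, document);

if (result.MatchedCount == 1 && result.ModifiedCount == 0)
    Console.WriteLine("Matched, but the old and new documents were identical.");
else if (result.MatchedCount == 0)
    Console.WriteLine("No document matched the Id + old DateModified filter.");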
I'll begin this post by noting that I'm entirely new to the .NET world. ASP.NET, Entity Framework, LINQ, etc. are all mostly unknown magic at this point.
Having said that, I've built myself a neat Web API chat-like application with SignalR support for real-time events. It works quite well, but I'm having some performance problems with the Add function.
In my chat application, there are "Pads" (chat rooms) which contain a number of "Mates" and "Messages". Here's my Pad model for reference:
public class Pad
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid PadId { get; set; }

    public string StreetAddress { get; set; }
    public int ZipCode { get; set; }

    public virtual ICollection<Mate> Mates { get; set; }
    public virtual ICollection<Message> Messages { get; set; }
}
My problem lies in my SignalR hub that processes a new Message sent to a particular pad. These two lines take about half a second to process.
pad.Messages.Add(msg); // pad is the Pad entity already fetched from the db context
db.Messages.Add(msg);
But they only take that long when Pad.Messages already contains a large number of messages (thousands). If I am sending to a pad with few or no messages, it executes almost instantly.
My initial 'trick' to improve the perceived performance was to move the Add calls to after I send the notification back to the clients, but I realize this could become a real problem later, when a pad holds tens or hundreds of thousands of messages.
Any advice here would be greatly appreciated!
Here is the entire message send method for reference:
public void SendMessage(string pad_id, string body)
{
    var user_id = IdentityExtensions.GetUserId(Context.User.Identity);

    body = body.Trim();
    if (body.Length <= 0)
    {
        return;
    }

    // Check that the user belongs in this pad...
    var user = (from u in db.Users
                where u.Id == user_id
                select u).First();
    var pad = (from p in user.Pads where p.PadId == new Guid(pad_id) select p).FirstOrDefault();

    if (pad != null)
    {
        // Save the message to the database
        var msg = new Message()
        {
            MessageId = new Guid(),
            Author = user,
            Body = body,
            SendTime = DateTimeOffset.UtcNow,
            Pad = pad
        };

        pad.Messages.Add(msg); // These two lines
        db.Messages.Add(msg);  // Are the culprit.

        db.SaveChangesAsync();

        Clients.Group(pad_id).messageReceived(user.Id, pad_id, body, DateTimeOffset.UtcNow); // Send message to clients
    }
}
EDIT: I'm on EF 6.0.0
From your code, all you want to do is add one row (a Message) to the table.
While constructing the Message entity, just set the PadId instead of the Pad navigation object.
That way you don't need to touch the Pad object (or its Messages collection) at all. From a DB perspective, you just need to add a Message row with a PadId foreign key.
var msg = new Message()
{
    MessageId = new Guid(),
    Author = user,
    Body = body,
    SendTime = DateTimeOffset.UtcNow,
    PadId = new Guid(pad_id)
};

// pad.Messages.Add(msg); // don't need this.
db.Messages.Add(msg);
The above should always insert one row and not depend on the number of messages in the pad.
If it still performs poorly, then inserting even a single row is triggering a lot of index updates on your table.
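For the PadId assignment above to compile, the Message entity needs a scalar foreign-key property mapped to its Pad navigation. A minimal sketch (the question doesn't show the Message class, so the extra property names and the Author type here are assumptions):

public class Message
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid MessageId { get; set; }

    public string Body { get; set; }
    public DateTimeOffset SendTime { get; set; }
    public virtual ApplicationUser Author { get; set; }   // assumed Identity user type

    // Scalar FK: lets EF insert the row without touching pad.Messages at all.
    public Guid PadId { get; set; }

    [ForeignKey("PadId")]
    public virtual Pad Pad { get; set; }
}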
We have an email queue table in the database. It holds the subject, HTML body, to address, from address, etc.
In Global.asax, the Process() function is called at a fixed interval and dispatches a set number of emails. Here's the code:
namespace v2.Email.Queue
{
    public class Settings
    {
        // How often Process() should be called, in seconds
        public const int PROCESS_BATCH_EVERY_SECONDS = 1;

        // How many emails should be sent in each batch. Consult SES send rates.
        public const int EMAILS_PER_BATCH = 20;
    }

    public class Functions
    {
        private static Object QueueLock = new Object();

        /// <summary>
        /// Process the queue
        /// </summary>
        public static void Process()
        {
            lock (QueueLock)
            {
                using (var db = new MainContext())
                {
                    var emails = db.v2EmailQueues.OrderBy(c => c.ID).Take(Settings.EMAILS_PER_BATCH);

                    foreach (var email in emails)
                    {
                        var sent = Amazon.Emailer.SendEmail(email.FromAddress, email.ToAddress, email.Subject,
                            email.HTML);

                        if (sent)
                            db.ExecuteCommand("DELETE FROM v2EmailQueue WHERE ID = " + email.ID);
                        else
                            db.ExecuteCommand("UPDATE v2EmailQueue SET FailCount = FailCount + 1 WHERE ID = " + email.ID);
                    }
                }
            }
        }
    }
}
The problem is that every now and then it sends one email twice.
Is there any reason in the code above that could explain this double sending?
Here's a small test as per Matthew's suggestion:
const int testRecordID = 8296;

using (var db = new MainContext())
{
    context.Response.Write(db.tblLogs.SingleOrDefault(c => c.ID == testRecordID) == null ? "Not Found\n\n" : "Found\n\n");
    db.ExecuteCommand("DELETE FROM tblLogs WHERE ID = " + testRecordID);
    context.Response.Write(db.tblLogs.SingleOrDefault(c => c.ID == testRecordID) == null ? "Not Found\n\n" : "Found\n\n");
}

using (var db = new MainContext())
{
    context.Response.Write(db.tblLogs.SingleOrDefault(c => c.ID == testRecordID) == null ? "Not Found\n\n" : "Found\n\n");
}
Returns when there is a record:
Found
Found
Not Found
If I use this method to clear the context cache after the DELETE SQL query, it returns:
Found
Not Found
Not Found
However, I'm still not sure whether that's the root cause of the problem. I would have thought the locking would definitely stop double sends.
The issue you're having is due to the way Entity Framework does its internal caching.
In order to increase performance, Entity Framework will cache entities to avoid doing a database hit.
Entity Framework will update its cache when you are doing certain operations on DbSet.
Entity Framework does not understand that your "DELETE FROM ... WHERE ..." statement should invalidate the cache because EF is not an SQL engine (and does not know the meaning of the statement you wrote). Thus, to allow EF to do its job, you should use the DbSet methods that EF understands.
foreach (var email in db.v2EmailQueues.OrderBy(c => c.ID).Take(Settings.EMAILS_PER_BATCH))
{
    // whatever your amazon code was...
    var sent = Amazon.Emailer.SendEmail(email.FromAddress, email.ToAddress, email.Subject, email.HTML);

    if (sent)
    {
        db.v2EmailQueues.Remove(email);
    }
    else
    {
        email.FailCount++;
    }
}

// this will update the database, and its internal cache.
db.SaveChanges();
On a side note, you should leverage the ORM as much as possible: not only will it save time debugging, it also makes your code easier to understand.
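Rewriting the small test with the DbSet API illustrates the difference; a sketch only (it assumes MainContext exposes tblLogs as an EF DbSet, as the answer's code above assumes for v2EmailQueues):

const int testRecordID = 8296;

using (var db = new MainContext())
{
    var log = db.tblLogs.SingleOrDefault(c => c.ID == testRecordID);
    context.Response.Write(log == null ? "Not Found\n\n" : "Found\n\n");

    if (log != null)
    {
        db.tblLogs.Remove(log);  // tracked delete: EF's cache and the database stay in sync
        db.SaveChanges();
    }

    context.Response.Write(db.tblLogs.SingleOrDefault(c => c.ID == testRecordID) == null ? "Not Found\n\n" : "Found\n\n");
}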
We are having an issue searching for a custom record through SuiteTalk. Below is a sample of what we are calling. The problem lies in setting up the search using the internalId of the record: in our initial development account the internal id of this custom record is 482, but when we deployed it through our bundle the record was assigned the internal id of 314. It stands to reason that this internal id is not static from install to install, so we wondered which property to use to reference the custom record. When we created the record we assigned its scriptId to be 'customrecord_myCustomRecord', but through SuiteTalk we do not have a scriptId. What is the best way to make this code work in all environments rather than just one? If possible, could you give an example of how it might be used?
Here is the C# code we are attempting to make the call from. We are using the 2013.2 endpoints at this time.
private SearchResult NetSuite_getPackageContentsCustomRecord(string sParentRef)
{
    List<object> PackageSearchResults = new List<object>();
    CustomRecord custRec = new CustomRecord();
    CustomRecordSearch customRecordSearch = new CustomRecordSearch();

    SearchMultiSelectCustomField searchFilter1 = new SearchMultiSelectCustomField();
    searchFilter1.internalId = "customrecord_myCustomRecord_sublist";
    searchFilter1.@operator = SearchMultiSelectFieldOperator.anyOf;
    searchFilter1.operatorSpecified = true;

    ListOrRecordRef lRecordRef = new ListOrRecordRef();
    lRecordRef.internalId = sParentRef;

    searchFilter1.searchValue = new ListOrRecordRef[] { lRecordRef };

    CustomRecordSearchBasic customRecordBasic = new CustomRecordSearchBasic();
    customRecordBasic.recType = new RecordRef();
    customRecordBasic.recType.internalId = "314"; // "482"; // THIS LINE IS GIVING US THE TROUBLE
    //customRecordBasic.recType.name = "customrecord_myCustomRecord";
    customRecordBasic.customFieldList = new SearchCustomField[] { searchFilter1 };

    customRecordSearch.basic = customRecordBasic;

    // Search for the customer entity
    SearchResult results = _service.search(customRecordSearch);

    return results;
}
I searched all over for a solution to avoid hardcoding internalIds. Even NetSuite support failed to give me one. Finally I stumbled upon a solution in NetSuite's knowledge base: getCustomizationId.
This returns the internalId, scriptId, and name for all custom records (or customRecordTypes in NetSuite terms, which is what made it hard to find).
public string GetCustomizationId(string scriptId)
{
    // Perform getCustomizationId on the custom record type
    CustomizationType ct = new CustomizationType();
    ct.getCustomizationTypeSpecified = true;
    ct.getCustomizationType = GetCustomizationType.customRecordType;

    // Retrieve active custom record type IDs. The includeInactives param is set to false.
    GetCustomizationIdResult getCustIdResult = _service.getCustomizationId(ct, false);

    foreach (var customizationRef in getCustIdResult.customizationRefList)
    {
        if (customizationRef.scriptId == scriptId) return customizationRef.internalId;
    }

    return null;
}
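With that helper in place, the hardcoded internalId in the original search can be resolved from the stable scriptId instead; a sketch of how it might be wired into the method from the question:

// Sketch: look up the environment-specific internalId from the scriptId assigned at design time.
customRecordBasic.recType = new RecordRef();
customRecordBasic.recType.internalId = GetCustomizationId("customrecord_myCustomRecord");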
You can make the internalId an external (configuration) property so that you can change it per environment.
The internalId is assigned only the first time you install the bundle into an environment; subsequent deployments to that environment will not change it unless you choose the Add/Rename option during deployment.