Elmah max log entries - c#

I'm saving exceptions generated by Elmah as XML files.
Is there any way to configure it so that it automatically removes files older than X days? Or perhaps a maximum number of files in the directory? Or do I need to create a custom batch job that does this?

From the Elmah project site, under ErrorLogImplementations (emphasis added to the relevant sentence):
XmlFileErrorLog
The XmlFileErrorLog stores errors into loose XML files in a configurable directory. Each error gets its own file containing all of its details. The files can easily be copied around, deleted, compressed or mailed to someone for further diagnostics. It does not require any database engine or setup, like with SQL Server and Oracle, so there is very little management overhead and you do not need to worry about additional costs when it comes to hosting plans. Although simple, it relies on the file system performance for shredding through the directory, reading files and sorting through them. A smart way of keeping logs based on XmlFileErrorLog running smoothly is to limit the number of files by scheduling a task to periodically archive the old logs and clean up the folder.
You will need to create a custom batch job that does this.
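There is no built-in retention setting for the XML file log, so the cleanup usually ends up as a scheduled task. A minimal sketch in C#, where the log directory, file-name pattern and 30-day retention window are all assumptions to adjust to your configuration:

// Minimal cleanup sketch: delete old Elmah XML error files.
// logDirectory should point at the logPath configured for XmlFileErrorLog
// in web.config; the file pattern and retention period are assumptions.
using System;
using System.IO;

class ElmahLogCleanup
{
    static void Main()
    {
        string logDirectory = @"C:\Logs\Elmah";
        DateTime cutoff = DateTime.UtcNow.AddDays(-30);

        foreach (string file in Directory.GetFiles(logDirectory, "error-*.xml"))
        {
            if (File.GetLastWriteTimeUtc(file) < cutoff)
                File.Delete(file);
        }
    }
}

You could compile this as a small console app and run it daily from Windows Task Scheduler, or archive the old files instead of deleting them, as the project site suggests.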

Related

How to use file instead of db?

My application produces a lot of logs. Currently I write part of these logs to an Azure blob and part of them to a database.
I need to use only a file, either in blob storage or on the machine, and I need to be able to query and filter the rows of this file to find what I need. I don't need to keep the existing logs, and I can define any structure up front.
The reason I don't want to use a database is its cost, given how quickly the database grows.
So, what is the best way to implement this?
I will be glad for any suggestions.
Depending on what and how you are logging, application logs are typically written using the System.Diagnostics.Trace class. The log level and storage (file or blob) can be configured through the portal. Read more about that here.
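For illustration, a minimal sketch of logging through System.Diagnostics.Trace; with Azure Diagnostics configured, a trace listener picks these entries up and transfers them to blob or table storage according to the configured level. The event names below are made up:

// Minimal sketch: write application logs through System.Diagnostics.Trace.
// A configured trace listener (e.g. Azure Diagnostics) decides where the
// entries end up; the example events here are hypothetical.
using System;
using System.Diagnostics;

public static class Log
{
    public static void OrderProcessed(int orderId)
    {
        Trace.TraceInformation("Processed order {0}", orderId);
    }

    public static void Failure(string operation, Exception ex)
    {
        Trace.TraceError("{0} failed: {1}", operation, ex);
    }
}

Querying and filtering then happens over the stored output rather than inside the application itself.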

TFS max check-in files limitation

I'm using TFS API to manage versions of my application's data.
On first use I'm trying to convert all of the database data into the TFS workspace, and then the check-in gets stuck for a long time (it can take more than an hour, if it doesn't hang forever). I'm dealing with 100,000-200,000 files to check in.
Is there any limit in TFS on the number of files in a check-in? If not, what could be the bottleneck of this operation?
Would splitting the check-in into smaller batches of files help? If so, is there a recommended batch size?
The number of changes in a changeset is stored as the CLR's int type.
So there's definitely an upper limit of int.MaxValue or 2,147,483,647.
For more details you can refer to Edward's answer to this question: Is there a limit on the number of files in a changeset in TFS?
In other words, you are far from the check-in limit. An aborted check-in process is more likely related to the network connection and the current system load.
Moreover, as mentioned in the comments above, it is not recommended to check database data files into TFS for version control. I suggest you create scripts instead. Here is also a discussion about it: Do you use source control for your database items?
The databases themselves? No.
The scripts that create them, including static data inserts, stored procedures and the like; of course. They're text files, they are included in the project and are checked in and out like everything else.
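If you do try splitting the check-in into smaller batches as suggested above, a rough sketch with the TFS client object model might look like this; the collection URL, workspace path and batch size are all assumptions, not values from the question:

// Sketch: check in pending changes in smaller batches instead of one huge changeset.
// Server URL, local path and batch size are assumptions.
using System;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class BatchCheckIn
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var vcs = collection.GetService<VersionControlServer>();
        Workspace workspace = vcs.GetWorkspace(@"C:\Work\MyWorkspace");

        const int batchSize = 5000;
        PendingChange[] pending = workspace.GetPendingChanges();

        for (int i = 0; i < pending.Length; i += batchSize)
        {
            PendingChange[] batch = pending.Skip(i).Take(batchSize).ToArray();
            workspace.CheckIn(batch, "Bulk import, batch starting at item " + i);
        }
    }
}

Smaller changesets also make it easier to see where the time is going, since each CheckIn call round-trips to the server on its own.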

Can 2 redis servers share the same snapshot dump file?

Is it possible to have a Redis server running on each of two machines, with both servers specifying the same snapshot dump file name and directory in their config files, the directory and file obviously being shared between the two machines?
RavenDB seems to work fine with that: I can set up the whole server file directory in a Dropbox folder on my machine and do the same on the other machine, with the two Dropbox folders syncing while the RavenDB servers read and write data from/to the database stored inside the Dropbox folder.
I understand both DBs' concepts are very different, I just use the RavenDB experience as example to explain what I try to accomplish. Please note this is just for developing purposes not to run in production.
I am running Redis version 2.4.5 as a Windows service and use BookSleeve as the client, with C# on .NET 4.5.
Thanks
Most certainly not. This would be a sure way to ensure a corrupt file.
You might want to watch progress on Redis Cluster (http://redis.io/topics/cluster-spec), currently at the specification stage.
The only time the dump file would be used is at boot time, and if persistence is disabled the server doesn't read from the dump file at all.
Even without server-specific data in the dump file, the possibility of corruption arises at any and every point where both servers write to the file. You could set the persistence settings to only save if there have been, say, 59 million changes in 60 seconds, which would effectively let you read the file on load but never save to it. Alternatively, you would run
CONFIG SET save ""
to disable automatic saving on both instances, and then issue a BGSAVE command whenever you actually want a snapshot.
I also have to advise against doing this over a shared file system, which is what you'll need to do this with multiple machines accessing the same file. In your case you are talking about Dropbox as your shared file system, but this is likely to kill performance if you are persisting to disk.
But ultimately, I'd have to ask why you think you need this?
If you are using one instance for reads only, then use a slave or two and do the reads on the slaves. That way you don't have to worry about multiple instances corrupting a persistence file. It also avoids the need for shared storage, since you have two nodes each running with a copy of the data. This gives you redundancy, and you can set up a master/slave failover arrangement relatively easily.
Ultimately, if you are just using it to develop something against, I don't see the need for such a setup. Just store configuration where you can download it (Dropbox, github, etc) and develop away. It isn't difficult, and certainly less complicated, to simply copy your dump file to Dropbox or anywhere else you need it than to do what you describe.
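For completeness, if you do go the replication route suggested above, the slave side is a couple of lines in redis.conf; the master address here is a placeholder, and the empty save directive is the same snapshot-disabling trick mentioned earlier:

# redis.conf on the second machine: replicate from the master instead of
# sharing its dump file (address and port are assumptions)
slaveof 192.168.1.10 6379

# optionally remove all save points so this instance never writes its own RDB file
save ""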

Inserting large csv files into a database

We have a web application that must allow users to upload files with zip codes; these files are CSVs. Any user will be able to upload the file from their computer. The issue is that the file may contain thousands of records. Right now I am receiving the file and making sure it has the right headers, but I am pushing the records one by one into the database.
I am using C# and ASP.NET. Is there a better, more efficient way to do this in code? We can't use any external importers or data import tools like SQL Server business intelligence. How can I do this? I was reading something about loading it into memory and then pushing it to the database. Any URLs, examples or suggestions would be much appreciated.
Regards
Firstly, I'm pretty sure that what you are asking is actually "How do you process a large file and insert the processed data into the database?".
Now assuming I am correct I would say the question is akin to 'how long is a piece of string?'. The reality is that an implementation for processing large files into a database is highly specific to your requirements.
However, at the simplest end of the spectrum you could simply upload the file straight into a table (or folder) and create a windows service that runs every x minutes, traverses through the table, picks each file and processes your data using bulk inserts and the prepare method (which may give you some performance benefits).
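As a rough illustration of the bulk-insert part (the connection string, table name and column layout are assumptions), SqlBulkCopy lets you push all the parsed rows in batches instead of issuing one INSERT per record:

// Sketch: load parsed CSV rows into a DataTable and push them with SqlBulkCopy.
// Connection string, destination table and column layout are assumptions.
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class ZipCodeImporter
{
    public static void Import(string csvPath, string connectionString)
    {
        var table = new DataTable();
        table.Columns.Add("ZipCode", typeof(string));
        table.Columns.Add("City", typeof(string));

        // Skip(1) drops the header row the question mentions validating.
        foreach (string line in File.ReadLines(csvPath).Skip(1))
        {
            string[] fields = line.Split(',');
            if (fields.Length >= 2)
                table.Rows.Add(fields[0].Trim(), fields[1].Trim());
        }

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.ZipCodes";   // assumed table
            bulkCopy.BatchSize = 5000;
            bulkCopy.WriteToServer(table);
        }
    }
}

Loading the whole file into a DataTable is the "putting it in memory" idea from the question; for very large files you could instead feed SqlBulkCopy from an IDataReader so you never hold everything at once.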
Alternatively you could look at something like MSMQ (Microsoft Message Queuing) and save any uploaded files direct to a queue which is then completely independent of your application and can be processed at any point in time along with easily scaled out.
At the end of the day though, honestly I don't think anyone here can give you a 'correct' answer to your question because there really isn't one, and you'll only find improvements to your implementation through experimentation.
If the file contains up to a million records, the best approach is to create a service to manage inserting the records into the database, to avoid timeouts and reduce the stress on IIS.
If you make it a Windows service, you can notify the service to process the zip files in the directory where they were uploaded.
I would also suggest using bulk insert for faster database transactions.
If there is validation to do, you can stage the data in a separate database, validate it there, and then push it to the final database.
Since these records go into the same table and are not related to each other, Parallel.ForEach may be a valid answer here. Assuming you have a static method (it may not necessarily need to be static) that inserts an individual record into the db, you can run a Parallel.ForEach loop over an array where each index represents a line of the CSV.
This assumes that uploading the large file to the server isn't the initial issue. If that is also part of the problem, I would recommend zipping the file and then using something like SharpZipLib to unzip it once it is uploaded. Since text compresses very well, this may be the biggest boon to performance from the user's perspective.
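A rough sketch of that approach, where InsertRecord is a hypothetical method standing in for your existing single-row insert and the CSV path is an assumption:

// Sketch of the Parallel.ForEach approach: one insert per CSV line, run in parallel.
// InsertRecord is a hypothetical per-row insert left as a stub.
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelCsvImport
{
    public static void Import(string csvPath)
    {
        string[] lines = File.ReadAllLines(csvPath).Skip(1).ToArray(); // skip header

        Parallel.ForEach(lines, line =>
        {
            string[] fields = line.Split(',');
            InsertRecord(fields);   // each call should open (or pool) its own connection
        });
    }

    static void InsertRecord(string[] fields)
    {
        // Hypothetical single-row insert; omitted to keep the sketch short.
    }
}

Note that a bulk insert, as in the other answers, will often still be faster than many parallel single-row inserts.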

How should multiple processes on different Windows PCs use concurrently a file, stored in a shared directory?

Problem:
I have multiple instances of the same C# application running on different PCs (OS: Windows XP, Windows 7) in the same LAN. I have to share some configuration data among them. Each process must have read-write access to the data. My employer insists on storing this shared data in a file, which sits in a shared directory on one of these PCs.
Possible solutions:
Exclusive file opening: The data is stored in a TXT file (serialization to and from a binary file is also an option). Each process uses File.Open with FileShare.None when trying to open the file. Getting an IOException means that the file is already in use, so the process has to wait and try again later (see the sketch after this question).
SQL Server CE embedded DB: The data is stored in an SDF file. The engine can handle at most 256 simultaneous connections (v3.5 SP2), which is more than enough.
SQLite embedded DB: The data is stored in an SQLite DB file. The documentation says SQLite works, but may be unreliable when used on a network share.
Other?
What is the preferred way to do this?
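For reference, option 1 (exclusive opening with retry) might be sketched roughly like this; the path, retry count and delay are placeholders, not values from the question:

// Rough sketch: open the shared file exclusively and retry on contention.
// Path, retry count and delay are assumptions.
using System;
using System.IO;
using System.Threading;

public static class SharedConfigFile
{
    public static string ReadWithRetry(string path, int maxAttempts = 10)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                // FileShare.None blocks every other reader/writer while we hold the handle.
                using (var stream = new FileStream(path, FileMode.Open,
                                                   FileAccess.ReadWrite, FileShare.None))
                using (var reader = new StreamReader(stream))
                {
                    return reader.ReadToEnd();
                }
            }
            catch (IOException)
            {
                // Another process holds the file; back off and try again.
                Thread.Sleep(200);
            }
        }
        throw new IOException("Could not acquire exclusive access to " + path);
    }
}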
I don't know if it is the best way, but I did this in C ages ago and it worked well for me.
Each process will read and create a personal copy of the file and then work on that.
At a fixed moment (upon process termination, triggered via some UI, or whenever you feel like it), each process sends its copy of the file to a master process in charge of rebuilding the original file in the shared directory and signaling the other processes that they need to reload.
Each process then reloads the file (which now contains information coming from all the other processes).
Of course this solution requires that the file-writing process knows how to rebuild the file and how to resolve conflicts (but that depends on the data format).
You don't really describe the type of data you're working with so I'd say the answer varies.
Using a proper DBMS for this would be best if the data you are working with can generally be considered record/field oriented (and under rare circumstances even if it can't). In this case I would recommend SQL Server CE, since its runtime will mitigate multi-user issues for you.
SQLite was generally considered a single-user/single-application database (at least back when I used it from C), though things could have changed in the last 5 years. If you're using .NET 4, there are few free adapters available from what I've found, unless you're comfortable with a mixed-framework application.
I would only manage the file locking manually if you're in a situation where the data is pretty flat by design (like a log file), and if it were log-like data I would look into how some of the open-source logging libraries handle it. You basically said you have control over the data structure, so I'd suggest redesigning the data to be more normalized/rigid and avoiding this solution.
Create a web service and make your programs pull the configuration from there. You can control file locking from inside the web service and not have to deal with that at the program level. This also affords you the abstraction that if you decide to change how the settings are stored (e.g. move them from a file to a database) you can do this without having to make any changes to your program.
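A minimal sketch of the client side of that idea, assuming a hypothetical settings endpoint that returns JSON; the URL and the shape of the settings class are made up:

// Sketch: each instance pulls shared settings from a central web service instead
// of opening the shared file directly. URL and settings shape are assumptions.
using System.Net;
using System.Web.Script.Serialization;   // in the System.Web.Extensions assembly

public class SharedSettings
{
    public string ConnectionString { get; set; }
    public int PollIntervalSeconds { get; set; }
}

public static class SettingsClient
{
    public static SharedSettings Load()
    {
        using (var client = new WebClient())
        {
            string json = client.DownloadString("http://configserver/settings");
            return new JavaScriptSerializer().Deserialize<SharedSettings>(json);
        }
    }
}

The service itself can then serialize all reads and writes to the underlying file (or database) in one place.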
