I need to know when a new record has been added to a particular table in my Windows application. The table might be manipulated by different applications.
For now I'm using a Timer control and querying to see if there is a new record (I copy the record's content somewhere in my application and then delete the record so it isn't picked up again), but of course this is not a clean way to do it.
Is there something like an event or something better than my approach?
I'm using Entity Framework 6.
Update: I've read about SqlDependency but I don't know if it can be used with Entity Framework.
You can give Linq2Cache a try to leverage SqlDependency; last I heard, EF6 had cleaned up its act and now formulates queries that are compatible with Query Notifications.
Here's an existing Stack Overflow question about doing that with the SqlDependency class.
Alternatively, you can cache the last record ID in memory/local to your program (in a file) or write the last processed record ID to a database table and search for anything more recent than that.
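If you stay with polling, a minimal sketch of that "remember the last processed ID" approach with EF6 could look like this (the context, entity, and property names are placeholders, not from your model):

// Polls for rows newer than the last one we processed, assuming an ever-increasing Id.
// MyDbContext / MyRecord are illustrative names.
private int _lastProcessedId;   // could also be persisted to a file or a settings table

private void CheckForNewRecords()
{
    using (var db = new MyDbContext())
    {
        var newRecords = db.MyRecords
                           .Where(r => r.Id > _lastProcessedId)
                           .OrderBy(r => r.Id)
                           .ToList();

        foreach (var record in newRecords)
        {
            ProcessRecord(record);          // your non-database work
            _lastProcessedId = record.Id;   // no need to delete rows to avoid duplicates
        }
    }
}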
I've hit a wall when it comes to adding a new entity object (a regular SQL table) to the Data Context using LINQ-to-SQL. This isn't about the drag-and-drop method that is cited regularly across many other threads; that method has worked repeatedly without issue.
The end goal is relatively simple. I need to find a way to add a table that gets created at runtime via a stored procedure to the current Data Context of the LINQ-to-SQL dbml file. I'll then need to be able to use the regular LINQ query methods/extension methods (InsertOnSubmit(), DeleteOnSubmit(), Where(), Contains(), FirstOrDefault(), etc...) on this new table object through the existing Data Context. Essentially, I need to find a way to procedurally create the code that would otherwise be generated automatically when you use the drag-and-drop method during development (when the application isn't running), but generate that same code while the application is running, via a command and/or event trigger.
More Detail
There's one table that gets used a lot and, over the course of an entire year, collects many thousands of rows. Each row contains a timestamp and this table needs to be divided into multiple tables based on the year that the row was added.
Current Solution (using one table)
Single table with tens of thousands of rows which are constantly queried against.
Table is added to Data Context during development using drag-and-drop, so there are no additional coding issues
Significant performance decrease over time
Goals (using multiple tables)
(Complete) While the application is running, use C# code to check if a table for the current year already exists. If it does, no action is taken. If not, a new table gets created using a stored procedure with the current year as a prefix on the table name (2017_TableName, 2018_TableName, 2019_TableName, and so on...).
(Incomplete) While the application is still running, add the newly created table to the active LINQ-to-SQL Data Context (the same code that would otherwise be added using drag-and-drop during development).
(Incomplete) Run regular LINQ queries against the newly added table.
Final Thoughts
Other than the above, my only other concern is how to write the C# code that references a table that may or may not already exist. Is it possible to use a variable in place of the standard 'DB_DataContext.2019_TableName' methodology in order to actually get the table's data into a UI control? Is there a way to simply create an Enumerable of all the tables where the name is prefixed with a year and then select the most current table?
From what I've read so far, the most likely solution seems to involve the use of a SQL add-on like SQLMetal or Huagati which (based solely on what I've read) will generate the code I need during runtime and update the corresponding dbml file. I have no experience using these types of add-ons, so any additional insight into these would be appreciated.
Lastly, I've seen some references to LINQ-to-Entities and/or LINQ-to-Objects. Would these be the components I'm looking for?
Thanks for reading through a rather lengthy first post. Any comments/criticisms are welcome.
The simplest way to achieve what you want is to redirect in SQL Server and leave your client code alone. At design time, create your L2S DataContext or EF DbContext referencing a database with only a single table. Then at run time, substitute a view or synonym for that table that points to the "current year" table.
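If you go the synonym route, something along these lines at application start (or at year rollover) would repoint the name your model is mapped to; the context, table, and synonym names below are illustrative only:

// Repoints dbo.TableName (the single table the model was built against)
// at the current year's physical table. All names here are placeholders.
string yearTable = string.Format("[dbo].[{0}_TableName]", DateTime.Now.Year);

using (var db = new MyDbContext())
{
    db.Database.ExecuteSqlCommand(
        "IF OBJECT_ID('dbo.TableName', 'SN') IS NOT NULL DROP SYNONYM [dbo].[TableName]; " +
        "CREATE SYNONYM [dbo].[TableName] FOR " + yearTable);
}

With LINQ to SQL, the equivalent call is dataContext.ExecuteCommand(sql).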
HOWEVER, this should not be necessary in the first place. SQL Server supports partitioning, so you can store the data in physically separate structures but still have a single logical table. And SQL Server supports columnstore indexes, which can compress and store many millions of rows with excellent performance.
I am currently developing a multi-user WPF application.
First of all, I want to give you a short overview of the system.
Each client loads the full data from a SQL Server database. Each client can create and modify data which belongs to him; data from other clients cannot be modified.
What I want to do now is sync my (long-living: the context lives as long as the MainWindow is running) DbContext. That means when other clients write or update data in the database, this data should be synced to each DbContext, so each client can see the data of all other clients but is not allowed to modify or delete it (each record contains a user field).
Now the big question is: how can I achieve the sync of the long-living DbContext?
Is there any kind of sync framework? (I have found the Microsoft Sync Framework, but I think that solution does not fit my problem.) I am looking for a method to sync the full DbContext (all entities). Do I need to write my own sync, like polling the database?
OK, I found a way to refresh all entities that were initially loaded into the DbContext using these lines of code:
var refreshableObjects = Remoting.Context.ChangeTracker.Entries().Select(c => c.Entity).ToList();
Remoting.Context.ObjectContext.Refresh(System.Data.Entity.Core.Objects.RefreshMode.StoreWins, refreshableObjects);
Now, this only refreshes already existing entities. The other thing is:
how can I get all newly added records from the database and add them to my context?
Is there a way to re-execute my LINQ query and merge the results?
I am using the Entity Framework code-first approach.
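Something along these lines is what I have in mind, if Load() really does merge into the same context (untested; "Orders" and "Id" are placeholder names, not my real model):

// Untested sketch: re-running the query through Load() should attach rows the
// context does not track yet and leave already-tracked entities untouched.
// Needs: using System.Data.Entity;   (for the Load() extension method)
int maxKnownId = Remoting.Context.Orders.Local.Count > 0
    ? Remoting.Context.Orders.Local.Max(o => o.Id)
    : 0;

Remoting.Context.Orders
        .Where(o => o.Id > maxKnownId)
        .Load();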
Thanks,
Mani
I have a database which was created in a separate project; a .edmx model file was generated by Entity Framework, which also created the model classes from the existing database.
Entries are added to the database from several places (other parts of the backend, the front-end site, the API, etc.). Currently the method I have is a loop that checks for new entries in the database every 5 seconds (basically just a query against the table for entries newer than the most recent entry I know of), and then I use each new entry to perform actions that are not database related.
I was wondering if what I'm doing is fine, or if there's a better way to get new entries than constantly querying the database for something new, preferably one that can be built upon/with EF.
Thanks for any help!
If you want your app to be notified as soon as any database records are inserted, updated, or deleted, and to do some extra processing on them, then you have two choices.
You can go with SqlDependency or SqlTableDependency. Both are used to notify the application when something in the database changes. There is just one constraint: you must be able to enable the broker for SQL Server using ALTER DATABASE MyDatabase SET ENABLE_BROKER (this is important, as some databases don't support broker services, e.g. SQL Azure).
Here are some good links to explore both the approaches.
https://github.com/christiandelbianco/monitor-table-change-with-sqltabledependency
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/detecting-changes-with-sqldependency
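A bare-bones SqlDependency sketch (not production code; the table and column names are placeholders). Note that the query must use two-part table names and an explicit column list (no SELECT *), otherwise the subscription is invalidated immediately:

// At application start: SqlDependency.Start(connectionString); and Stop() on shutdown.
void SubscribeToChanges()
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Id, CreatedOn FROM dbo.Entries", conn))
    {
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // e.Info says what happened (Insert/Update/Delete). Each subscription is
            // one-shot, so re-query here (e.g. through your EF context) and resubscribe.
            SubscribeToChanges();
        };

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* optional: read the current snapshot */ }
        }
    }
}

SqlTableDependency (the first link) wraps a similar pattern for you and raises change events carrying typed entities.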
I am trying to use LINQ to create a new record in a predefined (in production) table where the ID isn't an IDENTITY column.
I've seen the previous answers about using SQL and transaction to solve this:
Best way to get the next id number without "identity"
My question is if there is a LINQ / C# version of this too as I would like to have a function in C# that I can place in code with the rest of the functions.
For the record, we are using old SQL-server 2000 so no native .NET support inside the server.
EDIT
I was hoping someone would show the actual C# / LINQ code for it. Something like SELECT TOP 1 [TABLE_ID] ORDER BY [TABLE_ID] DESC and then adding 1 to the value in TABLE_ID... but perhaps it's a question that is too hard?
The reason for using a SQL-side identity is that SQL can control any parallelism issues. If two users call your C# identity code at the same time, you might end up with two identical IDs.
To work around this, either go with SQL identity, or generate GUIDs instead of sequential IDs.
These are very standard and common things to do, and there's not a lot of point doing anything else, as you'll have to deal with concurrency pain that's already been solved.
Having said that, you can do the obvious:
INSERT INTO my_table (id, ...)
VALUES ((SELECT MAX(id)+1 FROM my_table), ...)
Or something similar IF AND ONLY IF you set the table/transaction locking VERY carefully.
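In C#/LINQ terms that boils down to something like the sketch below (the DataContext, table, and column names are invented; a serializable transaction narrows the race window, but blocking and deadlocks are still possible, which is exactly the pain IDENTITY or GUIDs avoid):

// Rough sketch only, for LINQ to SQL against a table whose int key is not IDENTITY.
using (var scope = new TransactionScope(TransactionScopeOption.Required,
           new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
using (var db = new MyDataContext())
{
    int nextId = (db.MyTables.Max(t => (int?)t.TableId) ?? 0) + 1;   // MAX(id) + 1

    db.MyTables.InsertOnSubmit(new MyTable { TableId = nextId /*, other columns */ });
    db.SubmitChanges();

    scope.Complete();
}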
I think the best idea is: when the page opens, create a new record/row in the table in Page_Load; you can then show the created ID as the item ID, and when the user submits the form, update that record with all the info.
You can schedule a job to clear out the empty data if, for example, the user opens the page but decides not to continue and never submits the form, based on whatever condition you want (you might use a bit column as a flag for submitted or not).
We have built an application which needs a local copy of a table from another database. I would like to write an ado.net routine which will keep the local table in sync with the master. Using .net 2.0, C# and ADO.NET.
Please note I really have no control over the master table which is in a third party, mission critical app I don't wish to mess with.
For example, here is the master data table:
ProjectCodeId Varchar(20) [PK]
ProjectCode Varchar(20)
ProjectDescrip Varchar(50)
OtherUneededField int
OtherUneededField2 int
The local table we need to keep in sync...
ProjectCodeId Varchar(20) [PK]
ProjectCode Varchar(20)
ProjectDescrip Varchar(50)
Perhaps a better way to frame this question is: what have you done in the past for this type of problem? What has worked best for you, or what should be avoided at all costs?
My goal with this question is to determine a good way to handle this. So often I am combining data from two or more disjointed data sources. I haven't specified database platforms for this reason; it really shouldn't matter. In the current situation both databases are MSSQL, but I'd prefer the solution not use linked databases or DTS, etc.
Sure, truncating the local table and refilling it each time from the master is an option, but with thousands of rows I don't think this is very efficient. Do you?
EDIT: First, recognize that what you are doing is hand-rolled replication and replication is never simple.
You need to track and apply all of the CRUD state changes. That said, ADO.NET can do this.
To track changes to the source you can use Query Notification with your source database. This requires special permission against the database so the owner of the source database will need to take action to enable this solution. I haven't used this technique myself, but here is a description of it.
See "Query Notifications in SQL Server (ADO.NET)"
Query notifications were introduced in Microsoft SQL Server 2005 and the System.Data.SqlClient namespace in ADO.NET 2.0. Built upon the Service Broker infrastructure, query notifications allow applications to be notified when data has changed. This feature is particularly useful for applications that provide a cache of information from a database, such as a Web application, and need to be notified when the source data is changed.
To apply changes from the source db table you need to retrieve the data from the target db table, apply the changes to the target rows and post the changes back to the target db.
To apply the changes you can either
1) Delete and reinsert all of the rows (simple), or
2) Merge row-by-row changes (hard).
Delete and reinsert is self explanatory, so I won't go into detail on that.
For row-by-row change tracking here is an approach. (I am assuming here that Query Notification doesn't give you row-by-row change information, so you have to calculate it.)
You need to determine which rows were modified and identify inserted and deleted rows. Create a DataView with a sort for each table to get a Find method you can use to look up matching rows by ID.
Identify modified rows by using a datetime/timestamp column, or by comparing all field values. Copy modified values to the target row.
Identify added and deleted rows by looping over the respective table DataViews and using the Find method of the other DataView to identify rows that do not appear in the first table. Insert or delete rows from the target table as required. (The Delete method doesn't remove the row but marks it for deletion by the TableAdapter Update.)
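A rough ADO.NET sketch of that row-by-row merge, using the ProjectCode columns from your post (the DataTable variables and everything else are illustrative and untested):

// sourceTable / targetTable are DataTables filled from the master and local databases.
DataView sourceView = new DataView(sourceTable);
sourceView.Sort = "ProjectCodeId";
DataView targetView = new DataView(targetTable);
targetView.Sort = "ProjectCodeId";

// Copy values across and insert rows missing from the local copy.
// (A timestamp comparison could be used to skip unchanged rows.)
foreach (DataRowView src in sourceView)
{
    int i = targetView.Find(src["ProjectCodeId"]);
    if (i < 0)
    {
        targetTable.Rows.Add(src["ProjectCodeId"], src["ProjectCode"], src["ProjectDescrip"]);
    }
    else
    {
        DataRow tgt = targetView[i].Row;
        tgt["ProjectCode"] = src["ProjectCode"];
        tgt["ProjectDescrip"] = src["ProjectDescrip"];
    }
}

// Mark local rows for deletion when they no longer exist in the master.
List<DataRow> toDelete = new List<DataRow>();
foreach (DataRowView tgt in targetView)
{
    if (sourceView.Find(tgt["ProjectCodeId"]) < 0)
        toDelete.Add(tgt.Row);
}
foreach (DataRow row in toDelete)
    row.Delete();

// A TableAdapter/SqlDataAdapter Update(targetTable) call then pushes the changes back.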
Good luck!
+tom
I would push in the direction where the application that inserts the data inserts into one db/table and then into the other in the same function. Make the application do the work; the data will already be pushed to both.
Some questions: what DB platform? How are you using the data?
I'm going to assume you're just using this data as a lookup... and as you have no timestamp and no ability to modify the existing table, I'd just blow away the local copy periodically and pull it down from the master table again.
Unless you've got a hell of a lot of data the overhead for this should be pretty small.
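For what it's worth, that refresh can be as little as a truncate plus SqlBulkCopy (the connection strings and table name below are placeholders):

// Drops and re-pulls the local copy of the lookup table.
using (SqlConnection src = new SqlConnection(masterConnectionString))
using (SqlConnection dst = new SqlConnection(localConnectionString))
{
    src.Open();
    dst.Open();

    using (SqlCommand truncate = new SqlCommand("TRUNCATE TABLE dbo.ProjectCodes", dst))
        truncate.ExecuteNonQuery();

    // Pull only the columns kept locally and bulk copy them across.
    using (SqlCommand select = new SqlCommand(
               "SELECT ProjectCodeId, ProjectCode, ProjectDescrip FROM dbo.ProjectCodes", src))
    using (SqlDataReader reader = select.ExecuteReader())
    using (SqlBulkCopy bulk = new SqlBulkCopy(dst))
    {
        bulk.DestinationTableName = "dbo.ProjectCodes";
        bulk.WriteToServer(reader);
    }
}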
If you need to synch back to the master table, you'll need to do something a bit more exotic.
Can you use SQL replication? That would be preferable to writing code to do it, no?