Build query in Faircom c-Tree Plus - c#

I have a FairCom c-tree database file (.dat & .idx).
I have connected to it using the FairCom ODBC driver.
Now I want to build queries against that database. I already have the query written in SQL, but it does not work against the c-tree database.
Most of the functions are not supported (e.g. ISNULL, ISNUMERIC, DATEADD).
Please help me find a way around this.

There is a developer's guide for C# here, since one of your tags is .NET:
.NET Guide
There is also a specific read example here:
C# Read Example
Citing the example from the above link, your read code would look something like this:
static void Initialize()
{
    Console.WriteLine("INIT");
    try
    {
        // This section is only needed for the AppServer DLL model.
        bool AppServerModel = true;
        if (AppServerModel)
        {
            // Set c-tree database engine configuration file name.
            CTSession.SetConfigurationFile("ctsrvr.cfg");
            // Start c-tree database engine.
            CTSession.StartDatabaseEngine();
        }

        // allocate objects
        MySession = new CTSession(SESSION_TYPE.CTREE_SESSION);
        MyTable = new CTTable(MySession);
        MyRecord = new CTRecord(MyTable);
    }
    catch (CTException E)
    {
        Handle_Exception(E);
    }

    try
    {
        // connect to server
        Console.WriteLine("\tLogon to server...");
        MySession.Logon("FAIRCOMS", "", "");
    }
    catch (CTException E)
    {
        Handle_Exception(E);
    }
}
static void Display_Records()
{
    bool found;
    string custnumb;
    string custname;

    Console.Write("\tDisplay records...");
    try
    {
        // read first record
        found = MyRecord.First();
        while (found)
        {
            custnumb = MyRecord.GetFieldAsString(0);
            custname = MyRecord.GetFieldAsString(4);
            Console.WriteLine("\n\t\t{0,-8}{1,-20}", custnumb, custname);
            // read next record
            found = MyRecord.Next();
        }
    }
    catch (CTException E)
    {
        Handle_Exception(E);
    }
}
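Since you are already connected through the FairCom ODBC driver, you can also query from C# with the standard System.Data.Odbc classes. Below is a minimal, hedged sketch: the DSN, table, and column names are placeholders, and the T-SQL helpers you listed (ISNULL, ISNUMERIC, DATEADD) are replaced with an ANSI CASE expression, which is more likely to be accepted; check the c-treeSQL reference for what your server version actually supports.
using System;
using System.Data.Odbc;

class OdbcReadExample
{
    static void Main()
    {
        // Placeholder DSN / credentials - use the ODBC data source you already configured.
        string connStr = "DSN=CTREE_DSN;UID=admin;PWD=ADMIN;";

        // ANSI-style rewrite of ISNULL(custname, 'n/a'); table and columns are placeholders.
        string sql = "SELECT custnumb, " +
                     "       CASE WHEN custname IS NULL THEN 'n/a' ELSE custname END " +
                     "FROM custmast";

        using (var conn = new OdbcConnection(connStr))
        using (var cmd = new OdbcCommand(sql, conn))
        {
            conn.Open();
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0,-8}{1,-20}", reader.GetString(0), reader.GetString(1));
            }
        }
    }
}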

Related

How To Add Migrations Function By C# Code and run it on created database in EF but Not By PackageManagerConsole

I have written a program that is already running, and I cannot delete the database. Now I want to add a series of new properties to the database tables. I know that I should use migrations, but the point is that I have written the connection string manually with the ConfigurationManager class and its functions, not in app.config. Now I have run into a problem trying to add a new migration and update the database.
My question is how to do the new migration and database update as code functions in C#, instead of using the Package Manager Console and its commands, i.e. Add-Migration and Update-Database.
After many searches, I found solutions like this link https://stackoverflow.com/questions/20216147/entity-framework-change-connection-at-runtime?answertab=modifieddesc#tab-top using IDbConnectionInterceptor, but in the test environment creating the database fails with the error Cannot attach the file '***' as database '+++'.
my connection function is :
my connection function is:
public static bool Connect(bool force = false)
{
    bool result = false;
    var selectedDb = new EventsDataContext();
    selectedDb.ChangeDatabase
    (
        initialCatalog: Maincsb.InitialCatalog,
        userId: "",
        password: "",
        dataSource: Maincsb.DataSource
    );

    EventsDataContext odb = null;
    try
    {
        ConnectionInterceptor connectionInterceptor = new ConnectionInterceptor
        {
            connectionString = ConnectionManager.Maincsb.ConnectionString
        };
        DbInterception.Add(connectionInterceptor);
        DbInterception.Add(new ModifyYeKeIntercpetor());

        odb = new EventsDataContext();
        odb.Database.Initialize(force);
        Seeder(Maincsb.ConnectionString);
        result = true;
    }
    catch (Exception e)
    {
        ErrorMessage.Show(e);
        result = false;
    }
    finally
    {
        // guard against the constructor having thrown before odb was assigned
        odb?.Dispose();
        odb = null;
    }
    return result;
}
And if the database has already been created, I get an error that the database has changed, and the new properties are not added to the database.
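For reference, EF6 exposes a programmatic migrations API in System.Data.Entity.Migrations that replaces the Package Manager Console commands. The sketch below is hedged and is not a fix for the 'Cannot attach the file' error itself: it assumes an EF6 code-first project that already has a Configuration class generated by Enable-Migrations, and that the connection string is built at runtime.
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
// using YourProject.Migrations;  // namespace of the Configuration class generated by Enable-Migrations

public static class MigrationRunner
{
    public static void MigrateToLatest(string runtimeConnectionString)
    {
        // Configuration is the migrations configuration class EF generated for your context.
        var config = new Configuration
        {
            AutomaticMigrationsEnabled = true,            // let EF scaffold pending model changes
            AutomaticMigrationDataLossAllowed = false,
            TargetDatabase = new DbConnectionInfo(runtimeConnectionString, "System.Data.SqlClient")
        };

        // Equivalent of Update-Database, but callable from code.
        var migrator = new DbMigrator(config);
        migrator.Update();
    }
}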

How to update a SharePoint Online list if a new record is inserted in the database?

I have a MySQL database that holds personal information. Whenever a new employee gets hired he/she fills out some personal information and that data gets stored in a table.
After some research (and since I don't have access to the other systems, only the database), the plan is to build a C# console app that retrieves the data and checks it against the SharePoint list. I want to update the list (create a new item) if there is a new record in the database that does not already exist in the SharePoint list.
Note that the SharePoint list contains more columns than the table; the extra columns hold additional, manually entered information.
I have posted the connection code for the database and how I retrieve the data below.
How can I check if the item exists in the SharePoint list? Would anybody be able to provide an answer that includes code for creating and inserting the new item? I have two columns (in both the database and SP list) that could work as a primary key.
There is a REST API that supports CRUD so I guess this should be a no-brainer.
SharePoint list:
using System;
using System.Windows;

public class DbConnection
{
    private String databaseName;
    private String serverAddress;
    private String pwd;
    private String userName;
    private Boolean connected;
    private MySql.Data.MySqlClient.MySqlConnection conn;

    public DbConnection(String databaseName, String serverAddress, String pwd, String userName)
    {
        this.databaseName = databaseName;
        this.serverAddress = serverAddress;
        this.pwd = pwd;
        this.userName = userName;
        connected = false;
    }

    public void Connect()
    {
        if (connected == true)
        {
            Console.Write("There is already a connection");
        }
        else
        {
            String connectionString = "server=" + serverAddress + ";" + "database=" + databaseName + ";" + "uid=" + userName + ";" + "pwd=" + pwd + ";";
            Console.WriteLine(connectionString);
            try
            {
                conn = new MySql.Data.MySqlClient.MySqlConnection(connectionString);
                conn.Open();
                // mark the connection as open so IsConnected() reflects reality
                connected = true;
                Console.Write("Connection was successful");
            }
            catch (MySql.Data.MySqlClient.MySqlException ex)
            {
                MessageBox.Show(ex.Message);
            }
        }
    }

    public Boolean IsConnected()
    {
        return connected;
    }

    public MySql.Data.MySqlClient.MySqlConnection getConnection()
    {
        return conn;
    }

    public void Close()
    {
        conn.Close();
        connected = false;
    }
}
Then I retrieve the data like so:
using MySql.Data.MySqlClient;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace daily_CC_SP_update
{
    class Program
    {
        static void Main()
        {
            DbConnection mySQLConn = new DbConnection(dbName, serverAddress, pwd, userName);
            mySQLConn.Connect();

            string sqlQuery = "SELECT * FROM tbl_CC_SP";
            MySqlCommand sqlCom = new MySqlCommand(sqlQuery, mySQLConn.getConnection());
            MySqlDataReader reader = sqlCom.ExecuteReader();

            Console.WriteLine("Following output from DB");
            if (reader.Read())
            {
                Console.WriteLine(reader.GetString(0));
            }

            // Keep the console alive until enter is pressed, for debugging
            Console.Read();
            mySQLConn.Close();
        }
    }
}
I will create a view in the database to retrieve the correct data.
First, just to clarify :).. You are using SharePoint on-prem, right? So we can use farm solutions.
If yes, then I would resolve this case with the following solution.
I would develop an SPJob (SharePoint timer job). It can only be included in a farm solution. Basically it looks like this:
create a Farm project in the solution
add a class that inherits from SPJobDefinition and put Your logic in the Execute method, which You need to override (in this method create a standard SQL connection, query the table from the MySQL db, then compare it with Your SPList and do the work :) ) (also, a good approach would be to store the credentials for this connection string in some SPList on a config site or somewhere, rather than hardcoding them ;))
For example:
public class CustomJob : SPJobDefinition
{
    public CustomJob() : base() { }

    public CustomJob(string jobName, SPService service) : base(jobName, service, null, SPJobLockType.None)
    {
        this.Title = jobName;
    }

    public CustomJob(string jobName, SPWebApplication webapp) : base(jobName, webapp, null, SPJobLockType.ContentDatabase)
    {
        this.Title = jobName;
    }

    public override void Execute(Guid targetInstanceId)
    {
        SPWebApplication webApp = this.Parent as SPWebApplication;
        try
        {
            // Your logic here
        }
        catch (Exception ex)
        {
            SPDiagnosticsService.Local.WriteTrace(0, new SPDiagnosticsCategory("CustomJob - Execute", TraceSeverity.Unexpected, EventSeverity.Error), TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
        }
    }
}
add a new feature to the solution with scope WebApplication, and add an event receiver to this feature
on feature activation register Your timer job (remember to remove it on deactivation :))
public class Feature2EventReceiver : SPFeatureReceiver
{
const string JobName = "CustomJob";
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
try
{
SPSecurity.RunWithElevatedPrivileges(delegate ()
{
// add job
SPWebApplication parentWebApp = (SPWebApplication)properties.Feature.Parent;
DeleteExistingJob(JobName, parentWebApp);
CreateJob(parentWebApp);
});
}
catch (Exception ex)
{
SPDiagnosticsService.Local.WriteTrace(0, new SPDiagnosticsCategory("CustomJob-FeatureActivated", TraceSeverity.Unexpected, EventSeverity.Error), TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
}
}
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
lock (this)
{
try
{
SPSecurity.RunWithElevatedPrivileges(delegate ()
{
// delete job
SPWebApplication parentWebApp = (SPWebApplication)properties.Feature.Parent;
DeleteExistingJob(JobName, parentWebApp);
});
}
catch (Exception ex)
{
SPDiagnosticsService.Local.WriteTrace(0, new SPDiagnosticsCategory("CustomJob-FeatureDeactivating", TraceSeverity.Unexpected, EventSeverity.Error), TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
}
}
}
private bool CreateJob(SPWebApplication site)
{
bool jobCreated = false;
try
{
// schedule job for once a day
CustomJob job = new CustomJob(JobName, site);
SPDailySchedule schedule = new SPDailySchedule();
schedule.BeginHour = 0;
schedule.EndHour = 1;
job.Schedule = schedule;
job.Update();
jobCreated = true;
}
catch (Exception)
{
return jobCreated;
}
return jobCreated;
}
public bool DeleteExistingJob(string jobName, SPWebApplication site)
{
bool jobDeleted = false;
try
{
foreach (SPJobDefinition job in site.JobDefinitions)
{
if (job.Name == jobName)
{
job.Delete();
jobDeleted = true;
}
}
}
catch (Exception)
{
return jobDeleted;
}
return jobDeleted;
}
}
deploy and activate Your feature on the web app (I think the best would be to configure the job to run every day or every hour)
a nice article with an example of how to do all that may be found here (I know the article is for SP 2010, but it will work the same for 2013, 2016 and probably also 2019; with this on-prem version I don't have much experience) :)
another article with the same solution is here (this one is for SP 2013)
** Update **
For SharePoint Online the above solution will not work, as it is a farm solution. With SharePoint Online the solution is, as always, something 'external' :).
For sure You already have some kind of server where You store solutions for SP Online (like SP apps that are provider-hosted, etc.).
My approach would be to develop a simple C# console app. First, in this app make a SQL connection to MySQL and query the table to get the data, then use CSOM to query the SharePoint list and do the compare.
something like this
using (var clientContext = new ClientContext("url"))
{
    CamlQuery camlQuery = new CamlQuery();
    string query = "add some query here";
    camlQuery.ViewXml = query;

    collListItem = list.GetItems(camlQuery);
    clientContext.Load(collListItem, items => items.Include(item => item["Title"], item => .... /* add other columns You need here */));
    clientContext.ExecuteQuery();

    if (collListItem.Count > 0)
    {
        // Your code here :)
    }
}
Also be aware that You can run CSOM with the credentials of a different user (like some kind of admin) by giving the network credentials like this:
NetworkCredential _myCredentials = new NetworkCredential("user", "password", "companydomain");
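To attach those credentials to the context, here is a minimal sketch, assuming the Microsoft.SharePoint.Client CSOM assemblies. Note that for SharePoint Online a plain NetworkCredential is usually not enough; SharePointOnlineCredentials (or a modern auth flow) is needed there. The URL, user names, and passwords below are placeholders.
using System.Net;
using System.Security;
using Microsoft.SharePoint.Client;

class CredentialExample
{
    static ClientContext Connect(string siteUrl, bool online)
    {
        var ctx = new ClientContext(siteUrl);
        if (online)
        {
            // SharePoint Online: SecureString-based credentials.
            var pwd = new SecureString();
            foreach (char c in "password") pwd.AppendChar(c);   // placeholder password
            ctx.Credentials = new SharePointOnlineCredentials("user@tenant.onmicrosoft.com", pwd);
        }
        else
        {
            // On-premises: plain network credentials.
            ctx.Credentials = new NetworkCredential("user", "password", "companydomain");
        }
        return ctx;
    }
}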
Also please be aware of the list view threshold... in CSOM You can always use a paginated query where You fetch items in batches of up to 5000 until the collection is empty :) (see the paging sketch below).
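A hedged sketch of that paging pattern with CSOM; the list name and the 1000-row page size are placeholders:
using Microsoft.SharePoint.Client;

class PagedQueryExample
{
    static void ReadAllItems(ClientContext ctx, string listName)
    {
        List list = ctx.Web.Lists.GetByTitle(listName);
        var query = new CamlQuery { ViewXml = "<View><RowLimit>1000</RowLimit></View>" };

        do
        {
            ListItemCollection page = list.GetItems(query);
            ctx.Load(page);
            ctx.ExecuteQuery();

            foreach (ListItem item in page)
            {
                // compare / process each item here
            }

            // Position is null once the last page has been read.
            query.ListItemCollectionPosition = page.ListItemCollectionPosition;
        } while (query.ListItemCollectionPosition != null);
    }
}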
After You run this console app manually a couple of times to be sure it is working properly, simply add it to Task Scheduler on this server as a new task in the task library. There You can also set the trigger time, like run every hour or every day. Here is a nice Stack Overflow post on how to add this kind of task.
.. I hope the answer now is better for Your environment :)
So I have made great progress with my C# program. I have a fully functional connection between the MySQL database and SharePoint Online using MySql.Data and CSOM. I can manipulate and control all lists and data inside.
However I have one problem, but I'm not sure if it can be solved. I could find little to no information around this topic.
I create a new ListItem and set all the fields to a value. But there is one column of type "Person". Every employee has his own site that this links to, like a lookup. When adding a value to this field the server gives me the following error:
Microsoft.SharePoint.Client.ServerException: Invalid data has been used to update the list item. The field you are trying to update may be read only.
at Microsoft.SharePoint.Client.ClientRequest.ProcessResponseStream(Stream responseStream)
at Microsoft.SharePoint.Client.ClientRequest.ProcessResponse()
at Microsoft.SharePoint.Client.ClientRequest.ExecuteQueryToServer(ChunkStringBuilder sb)
at Microsoft.SharePoint.Client.ClientContext.ExecuteQuery()
at SPList.CreateNewItem(String userName, Int32 employeeNumber, String fullName, String firstName, String lastName, DateTime emplymentStart, DateTime employmentEnd, String department, String mobile, String address, String postcode, String postTown, String email) in C:\Users\fjs\source\repos\daily_CC_SP_update\SPList.cs:line 153
SharePoint field spec
Here is the code where I am creating the new Item.
using System;
using Microsoft.SharePoint.Client;
using System.Linq;
using System.Net;
public class SPList
{
private readonly ClientContext context;
private readonly List list;
private readonly ListItemCollection items;
private readonly Web web;
//Credentials may be needed, its commented out!
public SPList(String siteUrl, String listName, NetworkCredential netCred)
{
try
{
//NetworkCredential _myCredentials = netCred;
context = new ClientContext(siteUrl);
list = context.Web.Lists.GetByTitle(listName);
items = list.GetItems(CamlQuery.CreateAllItemsQuery());
web = context.Web;
context.Load(items);
context.Load(list);
context.Load(context.Web.Lists, lists => lists.Include(list => list.Title));
context.ExecuteQuery();
Console.WriteLine("Connected to SharePoint successfully!");
}
catch(Exception e)
{
Console.WriteLine(e);
}
}
public void CreateNewItem(String userName, int employeeNumber, String fullName, String firstName, String lastName, DateTime emplymentStart, DateTime employmentEnd, String department, String mobile, String address, String postcode, String postTown, String email)
{
try
{
ListItemCreationInformation newItemSepc = new ListItemCreationInformation();
ListItem newItem = list.AddItem(newItemSepc);
newItem["Title"] = userName;
newItem["Employee_x0020_Number"] = employeeNumber;
newItem["Full_x0020_Name"] = fullName;
newItem["First_x0020_Name"] = firstName;
newItem["Last_x0020_Name"] = lastName;
newItem["_x000a_Employment_x0020_start_x0"] = emplymentStart.Date;
newItem["Employment_x0020_end_x0020_date"] = employmentEnd.Date;
newItem["Department"] = department;
newItem["Mobile"] = mobile;
newItem["Adress"] = address;
newItem["Postcode"] = postcode;
newItem["Post_x0020_town"] = postTown;
newItem["Email"] = email;
newItem["Person"] = fullName;
newItem.Update();
context.ExecuteQuery();
}
catch(Exception e)
{
Console.WriteLine(e);
}
}
}
If I comment newItem["Person"] = fullName; out, the code works fine. Can this somehow be fixed? Otherwise I have to edit the item in SharePoint and add the value manually :/
The weird field names are because SharePoint stores them this way for some reason.
The solution is to set the lookup/person column not to a string but to a lookup field value, as in the sketch below.
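A minimal sketch of what that looks like with CSOM, assuming the column's internal name is "Person" as in the code above and that the user can be resolved by login name or e-mail; for an ordinary lookup column you would use FieldLookupValue with the id of the source list item instead.
using Microsoft.SharePoint.Client;

class PersonFieldExample
{
    static void SetPersonField(ClientContext ctx, ListItem newItem, string loginOrEmail)
    {
        // Resolve the login/e-mail to a user on the site.
        User user = ctx.Web.EnsureUser(loginOrEmail);
        ctx.Load(user);
        ctx.ExecuteQuery();

        // A person field expects a FieldUserValue, not a plain string.
        newItem["Person"] = new FieldUserValue { LookupId = user.Id };
        newItem.Update();
        ctx.ExecuteQuery();

        // For a plain lookup column the value would be, e.g.:
        // newItem["SomeLookup"] = new FieldLookupValue { LookupId = 42 };
    }
}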

datagridview update row to csv file

I am working with a CSV file and a DataGridView in a C# project for an inventory app, and I am trying to update a row in the CSV file.
I need to replace the current word with a new word when the user edits a row, but my problem is that I need to save both the current word and the new word and get a total of the changes. In pseudocode, for example:
foreach (DataGridViewRow row in dataGridView1.Rows)
{
    if (row in column is modified)
        update specific row with comma to current file and load it...
}
The CSV file looks like this.
Current:
1;2;;4;5
Update:
1;2,A;;4;5 changed device A total: 1 time...
Next row modified :
1;A;;4,B,C;5 changed device B and C total change : 2 time...
With a database it's easy to update data, but I don't have SQL Server installed, so that option is not available to me, I think.
My goal is to track devices going out/in, so if you have a solution please share it.
Short of using a SQL server, maybe something like this could help: LiteDB. You'd have your LiteDB host your data, and export it to CSV whenever you need. Working with CSV files usually means you'll re-write the whole file every time there is an update to make... which is slow and cumbersome. I recommend you use CSV to transport data from point A to point B, but not to maintain data.
Also, if you really want to stick to CSV, have a look at the Microsoft ACE OLEDB driver, previously known as the JET driver. I use it to query CSV files, but I have never used it to update... so your mileage may vary.
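For what it's worth, a hedged read-only sketch of that approach; the folder and file name are placeholders, the provider string assumes the Access Database Engine redistributable is installed, and a schema.ini (with Format=Delimited(;)) would be needed in the folder to read the semicolon-separated layout shown in the question:
using System;
using System.Data.OleDb;

class CsvOleDbExample
{
    static void Main()
    {
        // The data source is the folder; the file name becomes the "table".
        string folder = @"C:\data";
        string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;" +
                         "Data Source=" + folder + ";" +
                         "Extended Properties='text;HDR=No;FMT=Delimited'";

        using (var conn = new OleDbConnection(connStr))
        using (var cmd = new OleDbCommand("SELECT * FROM [devices.csv]", conn))
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]);
            }
        }
    }
}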
Short of using an actual database or a database driver, you'll have to use a StreamReader along with a StreamWriter: read the file with the StreamReader and write the new file with the StreamWriter. This implies you'll have code in your StreamReader loop that finds the correct line(s) to update.
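A minimal sketch of that read/rewrite approach, assuming the semicolon-separated layout from the question; the matching rule (first field equals a key) and the file path are placeholders to adapt:
using System.IO;

class CsvRewriteExample
{
    static void UpdateRow(string path, string key, int column, string newValue)
    {
        string tempPath = path + ".tmp";

        using (var reader = new StreamReader(path))
        using (var writer = new StreamWriter(tempPath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                string[] fields = line.Split(';');
                if (fields.Length > column && fields[0] == key)
                    fields[column] = newValue;              // update the matched row
                writer.WriteLine(string.Join(";", fields));
            }
        }

        // Swap in the rewritten file only after it has been fully written.
        File.Delete(path);
        File.Move(tempPath, path);
    }
}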
Here's the class I created and am using to interact with LiteDB. It's not all that robust, but it did exactly what I needed it to do at the time. I had to make changes to a slew of products hosted on my platform, and I used this to keep track of the progress.
using System;
using LiteDB;
namespace FixProductsProperty
{
public enum ListAction
{
Add = 0,
Remove,
Update,
Disable,
Enable
}
class DbInteractions
{
public static readonly string dbFilename = "MyDatabaseName.db";
public static readonly string dbItemsTableName = "MyTableName";
public void ToDataBase(ListAction incomingAction, TrackingDbEntry dbEntry = null)
{
if (dbEntry == null)
{
Exception ex = new Exception("dbEntry can not be null");
throw ex;
}
// Open database (or create if it does not exist)
using (var db = new LiteDatabase(dbFilename))
{
var backupListInDB = db.GetCollection<TrackingDbEntry>(dbItemsTableName);
// override the action if needed
if (incomingAction == ListAction.Add)
{
var tempone = backupListInDB.FindOne(p => p.ProductID == dbEntry.ProductID);
if (backupListInDB.FindOne(p => p.ProductID == dbEntry.ProductID) != null)
{
//the record already exists
incomingAction = ListAction.Update;
//IOException ex = new IOException("Err: Duplicate. " + dbEntry.ProductID + " is already in the database.");
//throw ex;
}
else
{
//the record does not already exist
incomingAction = ListAction.Add;
}
}
switch (incomingAction)
{
case ListAction.Add:
backupListInDB.Insert(dbEntry);
break;
case ListAction.Remove:
//backupListInDB.Delete(p => p.FileOrFolderPath == backupItem.FileOrFolderPath);
if (dbEntry.ProductID != 0)
{
backupListInDB.Delete(dbEntry.ProductID);
}
break;
case ListAction.Update:
if (dbEntry.ProductID != 0)
{
backupListInDB.Update(dbEntry.ProductID, dbEntry);
}
break;
case ListAction.Disable:
break;
case ListAction.Enable:
break;
default:
break;
}
backupListInDB.EnsureIndex(p => p.ProductID);
// Use Linq to query documents
//var results = backupListInDB.Find(x => x.Name.StartsWith("Jo"));
}
}
}
}
I use it like this:
DbInteractions yeah = new DbInteractions();
yeah.ToDataBase(ListAction.Add, new TrackingDbEntry { ProductID = dataBoundItem.ProductID, StoreID = dataBoundItem.StoreID, ChangeStatus = true });
Sorry... my variable naming convention sometimes blows...
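The TrackingDbEntry type is not shown in the answer; a hedged guess at a minimal POCO consistent with the calls above (LiteDB treats the [BsonId] property as the document id, which is what Delete(dbEntry.ProductID) and Update(dbEntry.ProductID, dbEntry) assume) might look like this:
using LiteDB;

public class TrackingDbEntry
{
    [BsonId]
    public int ProductID { get; set; }   // used as the LiteDB document id

    public int StoreID { get; set; }

    public bool ChangeStatus { get; set; }
}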

How to delete table on Cassandra using C# Datastax driver?

var keyspace = "mydb";
var datacentersReplicationFactors = new Dictionary<string, int>(0);
var replication = ReplicationStrategies.CreateNetworkTopologyStrategyReplicationProperty(datacentersReplicationFactors);

using (var cluster = Cluster.Builder().AddContactPoints("my_ip").Build())
using (var session = cluster.Connect())
{
    session.CreateKeyspaceIfNotExists(keyspace, replication, true);
    session.ChangeKeyspace(keyspace);

    var entityTable = new Table<Models.Entity>(session);
    var attributeTable = new Table<Models.Attribute>(session);

    entityTable.CreateIfNotExists();    // Worked
    attributeTable.CreateIfNotExists(); // Worked

    entityTable.Delete();    // Does nothing
    attributeTable.Delete(); // Does nothing
}
EDIT: Using a raw query, session.Execute("DROP TABLE entities;"); works fine. Is there a way to do it without raw queries?
The Delete() method is not intended for dropping tables. It returns a representation of a DELETE CQL statement; if you call it, you just get {DELETE FROM entities}.
If you need to drop a table, the easiest way is just to execute a DROP statement:
session.Execute("DROP TABLE entities;");
Unless there is already a method for dropping tables that I am not aware of, you can use these extensions.
public static class DataStaxTableExtensions
{
    public static void Drop<T>(this Table<T> table)
    {
        table.GetSession().Execute($"DROP TABLE {table.Name};");
    }

    public static void DropIfExists<T>(this Table<T> table)
    {
        table.GetSession().Execute($"DROP TABLE IF EXISTS {table.Name};");
    }
}
Then you can use it like this
entityTable.Drop();
attributeTable.DropIfExists();

Avoiding the new upsert in Azure Table Storage

Steve Marx writes about new extension methods to perform upserts in Azure Table Storage as part of the new storage protocol version here:
http://blog.smarx.com/posts/extension-methods-for-the-august-storage-features
However, what if I want to do the original operation of unconditional merge-or-throw rather than an upsert? I want to merge an object, updating a single field, but throw if the entity doesn't exist, rather than create a new entity that contains only the properties I'm merging.
Is this possible? Note that I want to use upsert elsewhere, so I've taken to having IoC provide me with contexts created from GetDataServiceContext2011 instead of GetDataServiceContext. I suppose I could alternate between the two, but that won't help when the Azure team updates the official libraries.
According to MSDN:
The Insert Or Merge Entity operation uses the MERGE verb and must be called using the 2011-08-18 version or newer. In addition, it does not use the If-Match header. These attributes distinguish this operation from the Update Entity operation, though the request body is the same for both operations.
So, how do I get the storage library to emit a wildcard If-Match on save rather than emit no If-Match at all?
Just use AttachTo with an asterisk for an etag. That will result in an If-Match: *. Here's a complete working example:
class Entity : TableServiceEntity
{
    public string Text { get; set; }
    public Entity() { }
    public Entity(string rowkey) : base(string.Empty, rowkey) { }
}

class Program
{
    static void Update(CloudStorageAccount account)
    {
        var ctx = account.CreateCloudTableClient().GetDataServiceContext();
        var entity = new Entity("foo") { Text = "bar" };
        ctx.AttachTo("testtable", entity, "*");
        ctx.UpdateObject(entity);
        ctx.SaveChangesWithRetries();
    }

    static void Main(string[] args)
    {
        var account = CloudStorageAccount.Parse(args[0]);
        var tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("testtable");

        var ctx = tables.GetDataServiceContext();
        try { Update(account); } catch (Exception e) { Console.WriteLine("Exception (as expected): " + e.Message); }

        ctx.AddObject("testtable", new Entity("foo") { Text = "foo" });
        ctx.SaveChangesWithRetries();

        try { Update(account); } catch (Exception e) { Console.WriteLine("Unexpected exception: " + e.Message); }

        Console.WriteLine("Now text is: " + tables.GetDataServiceContext().CreateQuery<Entity>("testtable").Where(e => e.PartitionKey == string.Empty && e.RowKey == "foo").Single().Text);

        tables.DeleteTableIfExist("testtable");
    }
}
