I have a number of services, and under each service there are typically 2 routes (sometimes more). Under each route there are a number of stops. I am getting the values from the DB as a flat table with the following column order:
ServiceId, Service Name, ServiceLine Color, RouteId, Route Name, Stop Id, Stop Name, Latitude, Longitude.
I want to convert it to a list of objects in the format below:
public class Service
{
public string ServiceId { get; set; }
public string ServiceName { get; set; }
public string ServiceLineColor { get; set; }
public List<RouteData> RouteList { get; set; }
}
public class RouteData
{
public string RouteId { get; set; }
public string RouteCode { get; set; }
public string RouteName { get; set; }
public List<StopData> stopList { get; set; }
}
public class StopData
{
public string StopCode { get; set; }
public string StopName { get; set; }
public string Latitude { get; set; }
public string Longitude { get; set; }
public string StopType { get; set; }
}
Is there an easy way in LINQ to convert the data into the format above? I want to avoid looping, since I am getting nearly 1k records from the DB. Please help me solve this issue.
Or is it better to format the data with DB calls? I would prefer not to, because if there are 50 services I would need to make 50 DB calls and still have to write the data formatting logic.
To avoid looping over the data structure each time, you could build up additional dictionaries that provide fast access to the objects by id:
var myServiceIndex = new Dictionary<string, Service>();
var myRouteDataIndex = new Dictionary<string, RouteData>();
Service service;
RouteData route;
foreach (var record in databaseRecords)
{
    if (myRouteDataIndex.TryGetValue(record.RouteId, out route))
    {
        // route (and its service) already exist: add stop data only
    }
    else if (myServiceIndex.TryGetValue(record.ServiceId, out service))
    {
        // service exists but the route is new: add route data, then stop data
    }
    else
    {
        // first occurrence of this service: add service, route data and stop data
    }
}
You have a number of stops, and each stop entry in the database has to be mapped to a C# object. In that case, looping is inevitable as far as I can see; LINQ, and e.g. Entity Framework, loop internally.
One option is to use Entity Framework or LINQ to SQL. Either will give you strongly typed classes representing each DB table, but you would have to change your DB schema and use foreign keys to link services, routes, and stops.
The C# code would look much like yours, auto-generated by Entity Framework and kept in sync with the DB schema.
The second option is to convert the data manually. Note that your current schema does not comply with third normal form. If you don't want to change the schema, you can read the flat records and build the nested objects with a GroupBy clause.
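For illustration, here is a minimal GroupBy sketch, assuming the flat rows have been materialized into objects whose property names match the columns listed above (ServiceId, ServiceName, ServiceLineColor, RouteId, RouteName, StopId, StopName, Latitude, Longitude). RouteCode and StopType are not present in the flat result, so they are left unset:
// Group the flat rows first by service, then by route, building the nested lists.
List<Service> services = databaseRecords
    .GroupBy(r => new { r.ServiceId, r.ServiceName, r.ServiceLineColor })
    .Select(sg => new Service
    {
        ServiceId = sg.Key.ServiceId,
        ServiceName = sg.Key.ServiceName,
        ServiceLineColor = sg.Key.ServiceLineColor,
        RouteList = sg
            .GroupBy(r => new { r.RouteId, r.RouteName })
            .Select(rg => new RouteData
            {
                RouteId = rg.Key.RouteId,
                RouteName = rg.Key.RouteName,
                stopList = rg.Select(r => new StopData
                {
                    StopCode = r.StopId,   // assumption: StopId maps onto StopCode
                    StopName = r.StopName,
                    Latitude = r.Latitude,
                    Longitude = r.Longitude
                }).ToList()
            }).ToList()
    }).ToList();
Keep in mind that this still iterates over all the rows internally; LINQ only hides the loop.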
I have objects with many to many relationship.
public class Executor
{
public long Id { get; set; }
public string Name { get; set; }
public List<Competency> Competency { get; set; }
}
public class Competency
{
public long Id { get; set; }
public string CompetencyName { get; set; }
public List<Executor> Executor { get; set; }
}
I am using EF Core 5 and a PostgreSQL DB. I can't just add a new Executor to the DB; first I need to find all the competencies in the DB, because of this problem.
So, my code now is like this:
public async Task<ServiceResponse<ExecutorDto>> AddExecutor(ExecutorDto newExecutor, long userId)
{
    var serviceResponse = new ServiceResponse<ExecutorDto>();
    try
    {
        var executor = _mapper.Map<Executor>(newExecutor);
        executor.Competency.Clear();
        executor.Competency = _context.Competencies.Where(i => newExecutor.Competency.Contains(i)).ToList();
        _context.Executors.Add(executor);
        await _context.SaveChangesAsync();
        ...
But at the Save moment I get this error:
The value of 'CompetencyExecutor (Dictionary<string, object>).CompetencyId' is unknown when attempting to save changes. This is because the property is also part of a foreign key for which the principal entity in the relationship is not known.
I have tried to resolve this in many ways, but I can't find the solution.
Well, it was a silly mistake: one of the Competency entries in the list had Id = 0. An Id of 0 is treated as an unset (new) key, which is why the principal entity in the relationship was unknown. Changing the Id to 1, or whatever the real positive Id is, fixed it.
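As a related sketch (assuming the Competency DTOs in newExecutor.Competency expose an Id property), matching by Id instead of comparing DTO instances to entities also avoids attaching entries that were never persisted:
// Hypothetical variant of the attach step: look up existing competencies by Id
// and skip any entry whose Id is still 0 (i.e. not saved yet).
var competencyIds = newExecutor.Competency
    .Select(c => c.Id)
    .Where(id => id > 0)
    .ToList();

executor.Competency = _context.Competencies
    .Where(c => competencyIds.Contains(c.Id))
    .ToList();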
I've built a Visual Studio C# project that saves API data into a database through Entity Framework. Every time I run the project, the table in the database is wiped and the data is re-added, which prevents duplication. However, I'm wondering if there is an alternative where I don't have to wipe the data and can instead add only the data that isn't already there.
Here is the code in my project, starting from the method in my main class that uses RestSharp to obtain the API data, deserializes the JSON response, then saves it to my DB.
public static void getAllRequestData()
{
    var client = new RestClient("[My API URL]");
    var request = new RestRequest();
    var response = client.Execute(request);
    if (response.StatusCode == System.Net.HttpStatusCode.OK)
    {
        string rawResponse = response.Content;
        AllRequests.Rootobject result = JsonConvert.DeserializeObject<AllRequests.Rootobject>(rawResponse);
        using (var db = new TransitionContext())
        {
            db.RequestDetails.RemoveRange(db.RequestDetails); // Wipes data
            db.RequestDetails.AddRange(result.Operation.Details); // Adds data
            db.SaveChanges();
        } // Utilising EF to save data to the DB
    }
} // Method that calls and stores API data
Here is the Entity Framework context class below; as you can see, it just supports one table (dataset).
public class TransitionContext : DbContext
{
    private const string connectionString = @"[My Server]";

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseSqlServer(connectionString);
    }

    public DbSet<AllRequests.Detail> RequestDetails { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<AllRequests.Detail>().HasKey(r => r.Id);
    }
}
And here is the DTO class; it holds the data temporarily and is structured into classes that fit the API data.
public class AllRequests
{
public class Rootobject
{
public Operation Operation { get; set; }
}
public class Operation
{
public Result Result { get; set; }
public Detail[] Details { get; set; }
}
public class Result
{
public string Message { get; set; }
public string Status { get; set; }
}
public class Detail
{
[Key]
public int Id { get; set; }
public string Requester { get; set; }
public string WorkOrderId { get; set; }
public string AccountName { get; set; }
public string CreatedBy { get; set; }
public string Subject { get; set; }
public string Technician { get; set; }
public string IsOverDue { get; set; }
public long DueByTime { get; set; }
public string Priority { get; set; }
public long CreatedTime { get; set; }
public string IgnoreRequest { get; set; }
public string Status { get; set; }
}
}
Here is the table that is produced (irrelevant data blocked out).
In order to get Entity Framework working, I had to create an Id. This Id does not hold any API data; it simply runs from 1 to however many rows there are. However, WorkOrderId is a unique ID for each row. How can I make this project scan for WorkOrderIds in the table and only add data whose WorkOrderId isn't already there?
Ideally I want to run this project every 5-10 minutes to keep the table constantly updated, and at the moment I feel that wiping the table isn't the ideal way to go; it is a long process. I would prefer to only insert what is new. Any help or pointers would be greatly appreciated.
Short answer - use unique constraints and configure EF.
Long answer - you should get more familiar with databases and Entity Framework.
The database should ensure that there are no duplicates.
The primary key (your Id here) is much more important than you think.
A table can have only one primary key, which may consist of single or multiple fields. When multiple fields are used as a primary key, they are called a composite key.
If a table has a primary key defined on any field(s), then you cannot have two records having the same value of that field(s).
(https://www.tutorialspoint.com/sql/sql-primary-key.htm)
What's more, in some databases (SQL Server, for example) you can build a primary key on several columns.
The other thing is to configure Entity Framework properly. Basically you do this in OnModelCreating.
You can state there that a column should be unique, for example:
protected override void OnModelCreating(ModelBuilder builder)
{
    builder.Entity<User>()
        .HasIndex(u => u.Email)
        .IsUnique();
}
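Adapted to the model in the question, that configuration would presumably look something like this (a sketch, assuming WorkOrderId is meant to be unique per row):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<AllRequests.Detail>().HasKey(r => r.Id);

    // Assumption: WorkOrderId uniquely identifies a request, so enforce it at the DB level.
    modelBuilder.Entity<AllRequests.Detail>()
        .HasIndex(r => r.WorkOrderId)
        .IsUnique();
}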
You can read more about indices here: https://en.wikipedia.org/wiki/Database_index and here: How does database indexing work?
It seems that you should make the WorkOrderId field your primary key and delete the Id field. I think you should also read more about foreign keys: https://www.w3schools.com/sql/sql_foreignkey.asp
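Either way, the insert itself only needs to skip rows whose WorkOrderId is already stored. A minimal sketch of that, reusing the names from the question:
using (var db = new TransitionContext())
{
    // Collect the WorkOrderIds already stored, then add only the details
    // whose WorkOrderId is not present yet.
    var existingIds = new HashSet<string>(db.RequestDetails.Select(d => d.WorkOrderId));

    var newDetails = result.Operation.Details
        .Where(d => !existingIds.Contains(d.WorkOrderId))
        .ToList();

    db.RequestDetails.AddRange(newDetails);
    db.SaveChanges();
}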
I have a UI (AngularJS) where the user should be able to add, update, or delete multiple schedules for a given SoftwareImage (see the API model below), and in the backend (ASP.NET Web API) I want to handle these cases. The following is the algorithm we have between the UI and the backend, but it seems a very hacked-together way of doing it; I want to get opinions from experts on how you handle such scenarios.
namespace Dashboard.Model.ApiModels
{
public class PostcommitSchedule
{
public string SoftwareImage { get; set; }
public List<PostcommitdayTime> PostcommitdayTime { get; set; }
}
public class PostcommitdayTime
{
public string day { get; set; }
public string Time { get; set; }
}
}
SELECT query:
SELECT si.software_image,pbs.day_of_week_id,pbs.time_of_day FROM postcommitbuild_schedule pbs
join software_images si on si.software_image_id=pbs.software_image_id
where si.software_image='LNX.LA.3.6'
ALGORITHM
If the posted object is empty
    Delete all records
Else
    Run a select query (see above) to see if there are any existing records for the given SoftwareImage (SI)
    If records exist
        Delete the existing records
        Insert the new records
    Else
        Insert the new records
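For reference, a rough sketch of what that delete-then-insert logic might look like as a single Web API action; the DbContext, entity, and property names here are hypothetical, and this simply mirrors the algorithm above rather than being a recommended design:
[HttpPut]
public IHttpActionResult UpdateSchedules(PostcommitSchedule schedule)
{
    // Hypothetical flat schedule entity with SoftwareImage, Day and Time columns.
    var existing = _db.PostcommitSchedules
        .Where(s => s.SoftwareImage == schedule.SoftwareImage);

    // Delete whatever is currently stored for this SoftwareImage ...
    _db.PostcommitSchedules.RemoveRange(existing);

    // ... then insert the rows that were sent (an empty list simply deletes everything).
    if (schedule.PostcommitdayTime != null)
    {
        foreach (var dt in schedule.PostcommitdayTime)
        {
            _db.PostcommitSchedules.Add(new PostcommitScheduleRow
            {
                SoftwareImage = schedule.SoftwareImage,
                Day = dt.day,
                Time = dt.Time
            });
        }
    }

    _db.SaveChanges();
    return Ok();
}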
I've started learning NoSQL, using RavenDB as an example. I've started with the simplest model; let's say we have topics that were created by users:
public class Topic
{
public string Id { get; protected set; }
public string Title { get; set; }
public string Text { get; set; }
public DenormalizedUser User { get; set; }
}
public class DenormalizedUser
{
public string Id { get; set; }
public string Name { get; set; }
}
public class User
{
public string Id { get; protected set; }
public string Name { get; set; }
public DateTime Birthdate { get; set; }
//some other fields
}
We don't need the whole User for displaying a Topic, so I've denormalized it to DenormalizedUser, containing an Id and a Name.
So, here are the questions:
1) Is this approach correct for NoSQL?
2) How to handle cases when User changes the Name? Do I need to manually update all the Name fields in denormalized classes?
Shaddix, you can use the RavenDB Include function to load the User using the UserId from your topic.
var topic = _session.Load<Topic>(topicId)
.Customize(x => x.Include<Topic>(y => y.UserId));
var user = _session.Load<User>(topic.UserId);
The Load for Topic will 'preload' the User, and both Loads will only result in one GET request. (I couldn't reply directly to your response to Ayende due to my reputation.)
You can also use the alternative (and probably clearer) .Include() function without Customize().
http://docs.ravendb.net/consumer/querying/handling-document-relationships.html
shaddix, you don't need to denormalize; you can hold a reference to the id and then Include it when you load from the server.
1) Yes, this approach works fine, and the result is that you only need to load the topic document when you want to display it along with the name of its user. However, as Ayende states, the performance will be nearly the same as if you didn't denormalize the user and just included it when needed. If you don't worry about multiple-server deployment, I recommend that approach.
2) If you really want to denormalize the user, then you can update all topics referencing this user with a set-based operation. Look at this: http://ravendb.net/faq/denormalized-updates
I currently have an Entity Framework model that pulls data from a legacy database, and I am using an int for my Id properties.
I am building a search box with autocomplete and want the autocomplete function to return a subset of records based on whether the sample id either contains or starts with the typed text (final design decision not made yet). I am running into problems converting the integer id to a string: I would normally use recs.Id.ToString().StartsWith(recordId), but this is apparently not supported by Entity Framework.
Is there a way around this limitation?
My code looks like the following
Model:
public class Sample
{
public Sample()
{
Tests = new List<Test>();
}
public int Id { get; set; }
public DateTime SampleDate { get; set; }
public string Container { get; set; }
public string Product { get; set; }
public string Name { get; set; }
public string Status { get; set; }
public virtual SamplePoint SamplingPoint { get; set; }
public virtual SampleTemplate SampleTemplate { get; set; }
public Customer ForCustomer { get; set; }
public virtual ICollection<Test> Tests { get; set; }
}
and here is the query I am currently trying to apply to this model:
[HttpGet]
public JsonResult AutoComplete(string partialId)
{
    var filteredSamples =
        repo.AllSamples.Where(s =>
            String.Compare(s.Status, "A", false) == 0
            && s.Id.ToString().StartsWith(partialId)
        ).ToList();
    return Json(filteredSamples, JsonRequestBehavior.AllowGet);
}
Any ideas would be awesome; I am out of ideas at this point.
No matter what you do, this is going to result in some awful performance on large datasets, because you will not be able to use any indices. My recommendation would be to use a trigger or scheduled task to store the leading digit in a separate field and filter on that.
I ended up adding a view for the autocomplete data and converting the id to a string in the select statement, and this solved my issue.
Wild thought: how about you create a computed, persisted column on your database table that converts your ID (INT) into a string?
Then you could:
put an index on that column
use a simple string comparison on that string column
Basically, you need this:
ALTER TABLE dbo.YourTable
ADD IDAsText AS CAST(ID AS VARCHAR(10)) PERSISTED
Now update your EF model, and you should have a new string field IDAsText in your object class. Try running your autocomplete comparisons against that string field.
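For illustration, once that column is mapped on the Sample entity (property name assumed to be IDAsText), the autocomplete action might look roughly like this, with StartsWith now translating to an index-friendly LIKE 'x%':
[HttpGet]
public JsonResult AutoComplete(string partialId)
{
    // IDAsText is the computed, persisted column mapped as a string property (name assumed).
    var filteredSamples = repo.AllSamples
        .Where(s => String.Compare(s.Status, "A", false) == 0
                 && s.IDAsText.StartsWith(partialId))
        .ToList();

    return Json(filteredSamples, JsonRequestBehavior.AllowGet);
}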