Enterprise Architect 9: Add Note to Connector - C#

I want to add notes to connectors in an Enterprise Architect diagram programmatically.
So far I have only managed to add notes to elements, using the following code:
foreach (EA.Element element in Package.Elements)
{
    foreach (EA.Connector conn in element.Connectors)
    {
        EA.Element newNote = Package.Elements.AddNew("MyNote", "Note");
        newNote.Notes = "Some string";
        newNote.Update();

        // position calculation is left out here
        EA.DiagramObject k = diagram.DiagramObjects.AddNew(position, "");
        k.ElementID = newNote.ElementID;
        k.Sequence = 9;
        k.Update();

        EA.Connector newConn = newNote.Connectors.AddNew("NewLink", "NoteLink");
        newConn.SupplierID = conn.SupplierID;
        newConn.Update();

        EA.DiagramLink newLink = diagram.DiagramLinks.AddNew("newLink", "NoteLink");
        newLink.ConnectorID = newConn.ConnectorID;
        newLink.Update();
    }
}
The image may make it clearer what I actually want:
http://www.directupload.net/file/d/3536/6bkijpg2_png.htm
My question is: How do I get the note attached to the connector? I assume I have to change this line "newConn.SupplierID = conn.SupplierID;", but "newConn.SupplierID = conn.ConnectorID" causes an exception.
I would be very happy if someone could help me!
Best regards

EA handles note links to connectors very differently from note links to elements.
Connectors always run between two elements. In your example, there are four elements (two of type Activity named O1 and O2, and two of type Note; these are usually nameless) and three connectors (O1 - O2, "This is what I have" - O2, and one from O1 running off the edge of the image).
The thing that looks like a connector from "This is what I want" to the O1 - O2 connector is not, in fact, a connector at all -- it just looks like one. In the GUI, the link to a connector is unresponsive and you can't bring up a properties dialog for it. This is why.
The fact that the note is linked to the connector is stored in the note element itself, in the MiscData collection. What you need to do is add the string idref=<connector_id>; to MiscData(3). You may also need to set the Note's Subtype field to 1.
However, MiscData is read-only, so you'll have to go into the database and update t_object (where elements are stored) directly. MiscData in the API corresponds to the PDATA1..PDATA5 columns in the table. Note that the indices are offset by one, so MiscData(0) corresponds to PDATA1 and MiscData(3) to PDATA4.
You will also need to use the undocumented Repository.Execute() since Repository.SQLQuery() only allows select statements.
So the following should work:
foreach (EA.Connector conn in element.Connectors)
{
    EA.Element newNote = Package.Elements.AddNew("MyNote", "Note");
    newNote.Subtype = 1;
    newNote.Notes = "Some string";
    newNote.Update();

    repository.Execute("update t_object set PDATA4='idref=" + conn.ConnectorID + ";' " +
        "where Object_ID=" + newNote.ElementID);

    // position calculation is left out here
    EA.DiagramObject k = diagram.DiagramObjects.AddNew(position, "");
    k.ElementID = newNote.ElementID;
    k.Sequence = 9;
    k.Update();
}
You may need to set the element subtype after the database update, I'm not sure.
Element.Subtype values are undocumented in the API, as are the contents of Element.MiscData, so this solution isn't future-proof (but it's very unlikely EA will ever change the way it handles these things).
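As a hedged sanity check (not part of the original workflow), you can read the note element back through the API and inspect MiscData(3), which should now contain the idref:

// Verification sketch: MiscData(3) maps to PDATA4 in t_object, so after the
// Execute() call it should carry the idref pointing at the connector.
// Assumes 'repository', 'newNote' and 'conn' from the code above.
EA.Element check = repository.GetElementByID(newNote.ElementID);
string miscData3 = check.get_MiscData(3);
if (miscData3 == null || !miscData3.Contains("idref=" + conn.ConnectorID))
{
    Console.WriteLine("Note link to the connector was not written as expected.");
}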


AMO get partitions where data are processed but not indexes

I am writing a script that returns all unprocessed partitions within a measure group using the following command:
objMeasureGroup.Partitions.Cast<Partition>().Where(x => x.State != AnalysisState.Processed)
After doing some experiments, it looks like this property indicates whether the data is processed but says nothing about the indexes.
After searching for hours, I didn't find any method to list the partitions where the data is processed but the indexes are not.
Any suggestions?
Environment:
SQL Server 2014
SSAS multidimensional cube
The script is written within an SSIS package / Script Task
First, ProcessIndexes is an incremental operation. So if you run it twice, the second time will be pretty quick because there is nothing to do. So I would recommend just running it on the cube and not worrying about whether it was previously run. However, if you do need to analyze the current state, then read on.
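For that simple case, a minimal sketch (untested; server, database and cube names are placeholders) of just running ProcessIndexes on the cube via AMO:

using Microsoft.AnalysisServices;

Server server = new Server();
server.Connect("Data Source=localhost");
Database database = server.Databases.GetByName("YourDatabaseName");
Cube cube = database.Cubes.GetByName("YourCubeName");

// ProcessIndexes is incremental: partitions whose indexes and aggregations
// are already built are skipped, so re-running this is cheap.
cube.Process(ProcessType.ProcessIndexes);
server.Disconnect();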
The best way (only way I know of) to distinguish whether ProcessIndexes has been run on a partition is to study the DISCOVER_PARTITION_STAT and DISCOVER_PARTITION_DIMENSION_STAT DMVs as seen below.
The DISCOVER_PARTITION_STAT DMV returns one row per aggregation with the rowcount. The first row of that DMV has a blank aggregation name and represents the rowcount of the lowest level data processed in that partition.
The DISCOVER_PARTITION_DIMENSION_STAT DMV can tell you whether indexes are processed and which range of values for each dimension attribute is in this partition (by internal IDs, so not super easy to interpret). This assumes at least one dimension attribute is set to be optimized so that it will be indexed.
You will also need to add a reference to Microsoft.AnalysisServices.AdomdClient to simplify running these DMVs:
string sDatabaseName = "YourDatabaseName";
string sCubeName = "YourCubeName";
string sMeasureGroupName = "YourMeasureGroupName";

Microsoft.AnalysisServices.Server s = new Microsoft.AnalysisServices.Server();
s.Connect("Data Source=localhost");
Microsoft.AnalysisServices.Database db = s.Databases.GetByName(sDatabaseName);
Microsoft.AnalysisServices.Cube c = db.Cubes.GetByName(sCubeName);
Microsoft.AnalysisServices.MeasureGroup mg = c.MeasureGroups.GetByName(sMeasureGroupName);

Microsoft.AnalysisServices.AdomdClient.AdomdConnection conn = new Microsoft.AnalysisServices.AdomdClient.AdomdConnection(s.ConnectionString);
conn.Open();

foreach (Microsoft.AnalysisServices.Partition p in mg.Partitions)
{
    Console.Write(p.Name + " - " + p.State + " - ");

    var restrictions = new Microsoft.AnalysisServices.AdomdClient.AdomdRestrictionCollection();
    restrictions.Add("DATABASE_NAME", db.Name);
    restrictions.Add("CUBE_NAME", c.Name);
    restrictions.Add("MEASURE_GROUP_NAME", mg.Name);
    restrictions.Add("PARTITION_NAME", p.Name);

    var dsAggs = conn.GetSchemaDataSet("DISCOVER_PARTITION_STAT", restrictions);
    var dsIndexes = conn.GetSchemaDataSet("DISCOVER_PARTITION_DIMENSION_STAT", restrictions);

    if (dsAggs.Tables[0].Rows.Count == 0)
        Console.WriteLine("ProcessData not run yet");
    else if (dsAggs.Tables[0].Rows.Count > 1)
        Console.WriteLine("aggs processed");
    else if (p.AggregationDesign == null || p.AggregationDesign.Aggregations.Count == 0)
    {
        bool bIndexesBuilt = false;
        foreach (System.Data.DataRow row in dsIndexes.Tables[0].Rows)
        {
            if (Convert.ToBoolean(row["ATTRIBUTE_INDEXED"]))
            {
                bIndexesBuilt = true;
                break;
            }
        }
        if (bIndexesBuilt)
            Console.WriteLine("indexes have been processed. no aggs defined");
        else
            Console.WriteLine("no aggs defined. need to run ProcessIndexes on this partition to build indexes");
    }
    else
        Console.WriteLine("need to run ProcessIndexes on this partition to process aggs and indexes");
}
I am posting this answer as additional information to @GregGalloway's excellent answer.
After searching for a while, the only way I found to know whether a partition's indexes are processed is using DISCOVER_PARTITION_STAT and DISCOVER_PARTITION_DIMENSION_STAT.
I found an article posted by Darren Gosbell describing the whole process:
SSAS: Are my Aggregations processed?
In the article above, the author provides two methods:
Using XMLA
One way is an XMLA Discover call to the DISCOVER_PARTITION_STAT rowset, but that returns the results in a big lump of XML which is not as easy to read as a tabular result set.
Example:
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
  <RequestType>DISCOVER_PARTITION_STAT</RequestType>
  <Restrictions>
    <RestrictionList>
      <DATABASE_NAME>Adventure Works DW</DATABASE_NAME>
      <CUBE_NAME>Adventure Works</CUBE_NAME>
      <MEASURE_GROUP_NAME>Internet Sales</MEASURE_GROUP_NAME>
      <PARTITION_NAME>Internet_Sales_2003</PARTITION_NAME>
    </RestrictionList>
  </Restrictions>
  <Properties>
    <PropertyList>
    </PropertyList>
  </Properties>
</Discover>
Using DMV queries
If you have SSAS 2008 or later, you can use the DMV feature to query this same rowset and get back a tabular result.
Example:
SELECT *
FROM SystemRestrictSchema($system.discover_partition_stat
,DATABASE_NAME = 'Adventure Works DW 2008'
,CUBE_NAME = 'Adventure Works'
,MEASURE_GROUP_NAME = 'Internet Sales'
,PARTITION_NAME = 'Internet_Sales_2003')
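Since the script runs in an SSIS Script Task, here is a minimal sketch (untested; the connection string and restriction values are placeholders) of executing that same DMV from C# with AdomdClient:

using Microsoft.AnalysisServices.AdomdClient;

using (var conn = new AdomdConnection("Data Source=localhost;Catalog=Adventure Works DW 2008"))
{
    conn.Open();
    var cmd = new AdomdCommand(
        "SELECT * FROM SystemRestrictSchema($system.discover_partition_stat" +
        ", DATABASE_NAME = 'Adventure Works DW 2008'" +
        ", CUBE_NAME = 'Adventure Works'" +
        ", MEASURE_GROUP_NAME = 'Internet Sales'" +
        ", PARTITION_NAME = 'Internet_Sales_2003')", conn);

    using (AdomdDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // Dump every column of the tabular result (one row per aggregation,
            // plus a blank-named row for the base data).
            for (int i = 0; i < reader.FieldCount; i++)
                Console.Write(reader.GetName(i) + "=" + reader[i] + "  ");
            Console.WriteLine();
        }
    }
}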
Similar posts:
How to find out using AMO if aggregation exists on partition?
Detect aggregation processing state with AMO?

Receive all information from classes

I have some code which I'm having problems with, and hopefully somebody can assist me. Basically I have a 'Player' class such as:
Player JanMccoy = new Player { playerFirstname = "Jan", playerSurname = "Mccoy", playerAge = 23,
playerCode = "MCC0001"};
I have about 10 players, each of which has a unique code. This code is stored in a list box along with the name and surname. How the data gets there isn't important; basically there are 10 values in the listbox which look like "Jan Mccoy (MCC0001)".
Now I want to be able to get the age of the person from the class. I have a button click event in which I take the selected item from the listbox and store just the playerCode into a string; using this code I need to be able to get the player's age.
I know this is SQL but I need something basically like:
SELECT * FROM MyClass WHERE playerCode = strPlayerCode
However, I am not using SQL; I need something which can do that in C#.
If I need to add any more detail just ask; I tried to explain as well as I can.
If you could point me in the right direction, that would be great too!
In C# there is LINQ, which works similarly to SQL.
For example:
SELECT * FROM MyClass WHERE playerCode = strPlayerCode
would be
var players = myListOfPlayers.Where(p => p.playerCode == strPlayerCode);
This will return a collection of all the players with that playerCode.
However, since you said the code is unique and you are only returning a single record, FirstOrDefault will work fine without the need for the Where clause, similar to SELECT TOP 1 FROM ....
var player = myListOfPlayers.FirstOrDefault(p => p.playerCode == strPlayerCode);
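As a hedged end-to-end sketch (the listbox and list names here are assumptions, not from the question), extracting the code from a selected item such as "Jan Mccoy (MCC0001)" and looking up the age could look like this:

// Assumes a List<Player> called myListOfPlayers and a ListBox called
// playersListBox whose items are strings such as "Jan Mccoy (MCC0001)".
string selected = playersListBox.SelectedItem.ToString();

// Pull the player code out from between the parentheses.
int open = selected.LastIndexOf('(');
int close = selected.LastIndexOf(')');
string strPlayerCode = selected.Substring(open + 1, close - open - 1);

Player player = myListOfPlayers.FirstOrDefault(p => p.playerCode == strPlayerCode);
if (player != null)
{
    MessageBox.Show(player.playerFirstname + " is " + player.playerAge + " years old");
}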
I would try LINQ:
var player = players.Where(p => p.playerCode == "MCC001").FirstOrDefault();

Can't remove contacts from an address book in Outlook 2010

I can add a contact to an address book, but for some reason I can't remove it. The code I'm executing is as follows.
String abName = "Name of the targeted address book";
Outlook.Folder addressBook;

if (targetFolder.Folders.OfType<Outlook.Folder>().Any(element
    => element.Name == abName))
    addressBook = targetFolder.Folders[abName] as Outlook.Folder;
else
    addressBook = targetFolder.Folders.Add(
        abName, Outlook.OlDefaultFolders.olFolderContacts) as Outlook.Folder;
addressBook.ShowAsOutlookAB = true;

for (int i = addressBook.Items.Count - 1; i >= 0; i--)
    if (!stringList.Any(element
        => element == addressBook.Items.OfType<Outlook.ContactItem>()
            .ToList()[i].Email1Address))
        addressBook.Items.OfType<Outlook.ContactItem>().ToList().RemoveAt(i);
The fetching of the address book works and the matching for strings too. I get into the RemoveAt line for the exactly correct contacts. There's no error or other message when I execute the removal. Still, the contact list remain unaffected.
Why?
What can I do to actually remove the contacts?
I suspect that I may be working on a copy of the actual list containing the contacts. The problem is that if I don't create a List, I'm not sure how to alter the list of contacts.
So, the most helpful answer would shed some light on how to alter addressBook (or perhaps addressBook.Items) given a certain condition. E.g., say we'd like to remove all the contacts whose name starts with the letter "Q".
At this moment I can only think of a super ugly work-around and it's so rectum-ugly that I don't even mention it here. Really ugly...
You are not removing an Outlook contact. You are removing an Outlook object from your own List object.
You need to call ContactItem.Delete.
As a side note, do not use multiple dot notation when working with COM objects, especially in a loop - you will receive a brand new COM object for each dot.
Here is a solution:
private void ClearContact(Outlook.Application outlookApplication)
{
    Outlook.MAPIFolder contactFolder = outlookApplication.Session.GetDefaultFolder(OlDefaultFolders.olFolderContacts);
    int total = contactFolder.Items.Count;
    while (total > 0)
    {
        // first index number is 1, not 0
        var contact = (Outlook.ContactItem)contactFolder.Items[1];
        contact.Delete();
        total = contactFolder.Items.Count;
    }
}
I use the NetOffice Outlook API:
http://netoffice.codeplex.com/wikipage?title=Outlook_Example05
and a while loop to delete all contacts.
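Applied to the original question, a minimal sketch (untested; addressBook and stringList are the variables from the question) that deletes the contacts themselves instead of removing them from a temporary List might look like this:

// Keep a single reference to the Items collection instead of re-reading it
// through multiple dots on every iteration.
Outlook.Items items = addressBook.Items;

// Outlook collections are 1-based; iterate backwards because we delete as we go.
for (int i = items.Count; i >= 1; i--)
{
    var contact = items[i] as Outlook.ContactItem;
    if (contact != null && !stringList.Contains(contact.Email1Address))
    {
        contact.Delete();   // removes the contact from the folder itself
    }
}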

C# Reading and Summarizing Text File with LINQ

I've read MANY different solutions for the separate functions of LINQ that, when put together, would solve my issue. My problem is that I'm still trying to wrap my head around how to put LINQ statements together correctly. I can't seem to get the syntax right, or it comes out as a mish-mash of info and not quite what I want.
I apologize ahead of time if half of this seems like a duplicate. My question is more specific than just reading the file. I'd like it all to be in the same query.
To the point though..
I am reading in a text file with semi-colon separated columns of data.
An example would be:
US;Fort Worth;TX;Tarrant;76101
US;Fort Worth;TX;Tarrant;76103
US;Fort Worth;TX;Tarrant;76105
US;Burleson;TX;Tarrant;76097
US;Newark;TX;Tarrant;76071
US;Fort Worth;TX;Tarrant;76103
US;Fort Worth;TX;Tarrant;76105
Here is what I have so far:
var items = (from c in
                 (from line in File.ReadAllLines(myFile)
                  let columns = line.Split(';')
                  where columns[0] == "US"
                  select new
                  {
                      City = columns[1].Trim(),
                      State = columns[2].Trim(),
                      County = columns[3].Trim(),
                      ZipCode = columns[4].Trim()
                  })
             select c);
That works fine for reading the file. But my issue after that is I don't want the raw data. I want a summary.
Specifically I need the count of the number of occurrences of the City,State combination, and the count of how many times the ZIP code appears.
I'm eventually going to make a tree view out of it.
My goal is to have it laid out somewhat like this:
- Fort Worth,TX (5)
    - 76101 (1)
    - 76103 (2)
    - 76105 (2)
- Burleson,TX (1)
    - 76097 (1)
- Newark,TX (1)
    - 76071 (1)
I can do the tree thing later because there is other processing to do.
So my question is: How do I combine the counting of the specific values in the query itself? I know of the GroupBy functions and I've seen Aggregates, but I can't get them to work correctly. How do I go about wrapping all of these functions into one query?
EDIT: I think I asked my question the wrong way. I don't mean that I HAVE to do it all in one query... I'm asking IS THERE a clear, concise, and efficient way to do this with LINQ in one query? If not I'll just go back to looping through.
If I can be pointed in the right direction it would be a huge help.
If someone has an easier idea in mind to do all this, please let me know.
I just wanted to avoid iterating through a huge array of values and using Regex.Split on every line.
Let me know if I need to clarify.
Thanks!
EDIT 6/15:
I figured it out. Thanks to those who answered; it helped, but was not quite what I needed. As a side note, I ended up changing it all anyway. LINQ was actually slower than doing it other ways, which I won't go into as it's not relevant. As to those who commented that "it's silly to have it in one query": that's the decision of the designer. "Best practices" don't work in all places; they are guidelines. Believe me, I do want to keep my code clear and understandable, but I also had a very specific reason for doing it the way I did.
I do appreciate the help and direction.
Below is the prototype that I used but later abandoned.
/* The inner LINQ query reads the text file and gets all the locations.
 * The outer query summarizes this by counting the ZIPs
 * and orders by City/State then ZIP. */
var items = from Location in
            // Inner query start
            (from line in File.ReadAllLines(FilePath)
             let columns = line.Split(';')
             where columns[0] == "US" && !string.IsNullOrEmpty(columns[4])
             select new
             {
                 City = (FM.DecodeSLIC(columns[1].Trim()) + " " + columns[2].Trim()),
                 County = columns[3].Trim(),
                 ZipCode = columns[4].Trim()
             })
            // Inner query end
            orderby Location.City, Location.ZipCode
            group Location by new { Location.City, Location.ZipCode, Location.County } into grp
            select new
            {
                City = grp.Key.City,
                County = grp.Key.County,
                ZipCode = grp.Key.ZipCode,
                ZipCount = grp.Count()
            };
The downside of using File.ReadAllLines is that you have to pull the entire file into memory before operating over it. Also, using columns[] is a bit clunky. You might want to consider my article describing using DynamicObject and streaming the file as an alternative implementation. The grouping/counting operation is secondary to that discussion.
var items = (from c in
                 (from line in File.ReadAllLines(myFile)
                  let columns = line.Split(';')
                  where columns[0] == "US"
                  select new
                  {
                      City = columns[1].Trim(),
                      State = columns[2].Trim(),
                      County = columns[3].Trim(),
                      ZipCode = columns[4].Trim()
                  })
             select c);
foreach (var i in items.GroupBy(an => an.City + "," + an.State))
{
    Console.WriteLine("{0} ({1})", i.Key, i.Count());
    foreach (var j in i.GroupBy(an => an.ZipCode))
    {
        Console.WriteLine(" - {0} ({1})", j.Key, j.Count());
    }
}
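As a hedged variation on the memory point above (not from the original answer), File.ReadLines streams the file lazily instead of loading it all at once, and the rest of the pipeline stays the same:

// Sketch: the same projection, but streaming line by line with File.ReadLines
// instead of File.ReadAllLines. 'myFile' is the path used above.
var streamedItems =
    from line in File.ReadLines(myFile)
    let columns = line.Split(';')
    where columns[0] == "US"
    select new
    {
        City = columns[1].Trim(),
        State = columns[2].Trim(),
        County = columns[3].Trim(),
        ZipCode = columns[4].Trim()
    };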
There is no point in getting everything into one query. It's better to split the queries so that each one stays meaningful. Try this against your results:
var grouped = items.GroupBy(a => new { a.City, a.State, a.ZipCode })
                   .Select(a => new
                   {
                       City = a.Key.City,
                       State = a.Key.State,
                       ZipCode = a.Key.ZipCode,
                       ZipCount = a.Count()
                   })
                   .ToList();
EDIT
Here is the one big long query which gives the same output
var itemsGrouped = File.ReadAllLines(myFile)
    .Select(a => a.Split(';'))
    .Where(a => a[0] == "US")
    .Select(a => new { City = a[1].Trim(), State = a[2].Trim(), County = a[3].Trim(), ZipCode = a[4].Trim() })
    .GroupBy(a => new { a.City, a.State, a.ZipCode })
    .Select(a => new { City = a.Key.City, State = a.Key.State, ZipCode = a.Key.ZipCode, ZipCount = a.Count() })
    .ToList();

Error from use of C# Linq SQL CONCAT

I have the following three tables, and need to bring in information from two dissimilar tables.
Table baTable has fields OrderNumber and Position.
Table accessTable has fields OrderNumber and ProcessSequence (among others).
Table historyTable has fields OrderNumber and Time (among others).
var progress = from ba in baTable
               from ac in accessTable
               where ac.OrderNumber == ba.OrderNumber
               select new
               {
                   Position = ba.Position.ToString(),
                   Time = "",
                   Seq = ac.ProcessSequence.ToString()
               };

progress = progress.Concat(from ba in baTable
                           from hs in historyTable
                           where hs.OrderNumber == ba.OrderNumber
                           select new
                           {
                               Position = ba.Position.ToString(),
                               Time = String.Format("{0:hh:mm:ss}", hs.Time),
                               Seq = ""
                           });

int searchRecs = progress.Count();
The query compiles successfully, but when the SQL executes during the call to Count(), I get an error:
All queries combined using a UNION, INTERSECT or EXCEPT operator must have an equal number of expressions in their target lists.
Clearly the two lists each have three items, one of which is a constant. Other help boards suggested that the Visual Studio 2010 C# compiler was optimizing out the constants, and I have experimented with alternatives to the constants.
The most surprising thing is that, if the Time= entry within the select new {...} is commented out in both of the sub-queries, no error occurs when the SQL executes.
I actually think the problem is that SQL won't recognize your String.Format(...) call, since it can't be translated into T-SQL.
Change your second query to:
progress = progress.Concat(from ba in baTable
                           from hs in historyTable
                           where hs.OrderNumber == ba.OrderNumber
                           select new
                           {
                               Position = ba.Position.ToString(),
                               Time = hs.Time.ToString(),
                               Seq = ""
                           });
After that you could always loop through progress and format the Time to your needs.
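A minimal sketch of that client-side formatting (not from the original answer; it assumes hs.Time.ToString() produces a value DateTime.Parse can read back):

// Materialize the combined query first so the formatting runs in memory,
// where String.Format is available.
var rows = progress.ToList()
    .Select(p => new
    {
        p.Position,
        Time = string.IsNullOrEmpty(p.Time)
            ? ""
            : String.Format("{0:hh:mm:ss}", DateTime.Parse(p.Time)),
        p.Seq
    });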
