Aligning multiple line strings in a C# console app

I'm making a blackjack game and would like to show the cards next to each other as the player draws them. I have the cards saved as strings within a card object, and the strings look roughly like this (I couldn't figure out how to paste them in here exactly):
public void CreateDeck()
{
    myDeck[0] = new Card(1, "hearts", @"
 ______________
| A            |
|              |
|              |
|              |
|    HEARTS    |
|              |
|              |
|            A |
|______________|", @"
 ______________
|//////////////|
|//////////////|
|//////////////|
|//////////////|
|//////////////|
|//////////////|
|//////////////|
|//////////////|
|//////////////|");
The ace of hearts is basically what I would like to output. I can output the cards, but every time I output a second card it goes onto a new line instead of next to the first card.
I would like the output to look like this, to display a player's and dealer's hands:
 ______________     ______________
| J            |   | 10           |
|              |   |              |
|              |   |              |
|              |   |              |
|    CLUBS     |   |    HEARTS    |
|              |   |              |
|              |   |              |
|            J |   |           10 |
|______________|   |______________|

You could try redrawing the whole set of cards each time one is drawn. Keep the cards in a collection (let's say cards) with a method to output the art (overriding ToString here). Then split on the line breaks and draw line by line, as below. As others have suggested, you will find it much easier (less hacky) to do this in a Windows app.
// Split each card's ASCII art into its individual lines, stripping the line-break characters.
var cardLines = cards.Select(x =>
    x.ToString()
     .Split('\r')
     .Select(y => y.Replace("\r", string.Empty).Replace("\n", string.Empty))
     .ToList())
    .ToList();

// The tallest card decides how many console rows to draw.
var maximumCardHeight = cardLines.Max(x => x.Count);

// Draw row by row: write the i-th line of every card before moving to the next row.
for (var i = 0; i < maximumCardHeight - 1; i++)
{
    cardLines.ForEach(x =>
    {
        if (i < x.Count)
            Console.Write(x[i]);
    });
    Console.WriteLine();
}
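For completeness, a minimal sketch of the Card side of this, assuming a constructor shaped like the one in the question and using ToString to expose the face-up art (everything beyond the constructor arguments is illustrative, not from the original code):
public class Card
{
    private readonly string faceArt;

    public Card(int value, string suit, string faceArt, string backArt)
    {
        Value = value;
        Suit = suit;
        this.faceArt = faceArt;
        BackArt = backArt;
    }

    public int Value { get; }
    public string Suit { get; }
    public string BackArt { get; }

    // The drawing loop above relies on ToString returning the multi-line art.
    public override string ToString() => faceArt;
}
With that in place, cards in the snippet above is just the player's (or dealer's) current hand, e.g. a List<Card>.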


Calculating Unique Combinations of Locations

My question is more about algorithm design than about programming. I have 6 buildings in my dataset and a table with distances from each building to each other building:
| From_Building_ID | To_Building_ID | Distance_Mile |
+------------------+----------------+---------------+
| 1368             | 10692          | 167.201       |
| 1368             | 10767          | 216.307       |
| 1368             | 6377           | 359.002       |
| 1368             | 10847          | 362.615       |
| 1368             | 10080          | 67.715        |
| 6377             | 10692          | 488.3         |
| 6377             | 1368           | 359.002       |
| 6377             | 10080          | 327.024       |
| 6377             | 10767          | 150.615       |
| 6377             | 10847          | 41.421        |
| 10080            | 10847          | 330.619       |
| 10080            | 6377           | 327.024       |
| 10080            | 10767          | 184.329       |
| 10080            | 10692          | 166.549       |
| 10080            | 1368           | 67.715        |
| 10692            | 1368           | 167.201       |
| 10692            | 10767          | 345.606       |
| 10692            | 6377           | 488.3         |
| 10692            | 10847          | 491.898       |
| 10692            | 10080          | 166.549       |
| 10767            | 1368           | 216.307       |
| 10767            | 10692          | 345.606       |
| 10767            | 10080          | 184.329       |
| 10767            | 10847          | 154.22        |
| 10767            | 6377           | 150.615       |
| 10847            | 6377           | 41.4211       |
| 10847            | 10692          | 491.898       |
| 10847            | 1368           | 362.615       |
| 10847            | 10080          | 330.619       |
| 10847            | 10767          | 154.22        |
+------------------+----------------+---------------+
My goal is to get a short table that includes each unique combination of buildings. If a combination of two buildings has already appeared, it should not appear again, so I should end up with half the number of rows of the original set. I will then sum up the distances (for compensation purposes). The end result should look similar to this:
+------------------+----------------+---------------+
| From_Building_ID | To_Building_ID | Distance_Mile |
+------------------+----------------+---------------+
| 1368             | 10692          | 167.201       |
| 1368             | 10767          | 216.307       |
| 1368             | 6377           | 359.002       |
| 1368             | 10847          | 362.615       |
| 1368             | 10080          | 67.715        |
| 6377             | 10692          | 488.3         |
| 6377             | 10080          | 327.024       |
| 6377             | 10767          | 150.615       |
| 6377             | 10847          | 41.421        |
| 10080            | 10847          | 330.619       |
| 10080            | 10767          | 184.329       |
| 10080            | 10692          | 166.549       |
| 10692            | 10767          | 345.606       |
| 10692            | 10847          | 491.898       |
| 10767            | 10847          | 154.22        |
+------------------+----------------+---------------+
I created a class in C# with the appropriate properties:
class Distances
{
    public int FromBuildingID { get; set; }
    public int ToBuildingID { get; set; }
    public double Distance_Mile { get; set; }

    public Distances(int f, int t, double mile)
    {
        FromBuildingID = f;
        ToBuildingID = t;
        Distance_Mile = mile;
    }
}
and created a List<Distances> dist that contains all the distances as described.
I tried to select distinct distances, but the data is not reliable, so that's not a viable option
(for example, the distances between 6377→10847 and 10847→6377 are not the same).
I am now trying to design my algorithm, without much success so far:
for (int i = 0; i < dist.Count; i++)
{
    if (true) // what may the condition be?
    {
    }
}
Any help would be appreciated. Thanks!
One way:
var uniques = dist.Where(d=>d.FromBuildingID < d.ToBuildingID).ToList();
A more robust way, which takes both A:B and B:A, keeps the one with the smallest Distance_Mile, and throws out the other:
var uniques = dist
    .GroupBy(d => new {
        a = Math.Min(d.FromBuildingID, d.ToBuildingID),
        b = Math.Max(d.FromBuildingID, d.ToBuildingID)
    })
    .Select(d => d.OrderBy(z => z.Distance_Mile).First())
    .ToList();
In either case, if you just want the sum, replace the final .ToList() with .Sum(d => d.Distance_Mile).
One way to think about this problem is that we want to use the System.Linq extension method Distinct() to filter out duplicate items, but that method uses the class's default equality comparer to determine whether two instances are equal, and the default comparer uses a reference comparison, which doesn't work for our scenario.
Since we want to consider two instances equal if either their FromBuildingId and ToBuildingId properties are equal, or if one's FromBuildingId equals the other's ToBuildingId and its ToBuildingId equals the other's FromBuildingId, we need to override the class's default Equals (and GetHashCode) methods with that logic:
public class Distance
{
    public int FromBuildingId { get; set; }
    public int ToBuildingId { get; set; }
    public double TotalMiles { get; set; }

    public Distance(int fromBuildingId, int toBuildingId, double totalMiles)
    {
        FromBuildingId = fromBuildingId;
        ToBuildingId = toBuildingId;
        TotalMiles = totalMiles;
    }

    public override bool Equals(object obj)
    {
        var other = obj as Distance;
        return other != null &&
               ((other.FromBuildingId == FromBuildingId && other.ToBuildingId == ToBuildingId) ||
                (other.FromBuildingId == ToBuildingId && other.ToBuildingId == FromBuildingId));
    }

    public override int GetHashCode()
    {
        unchecked
        {
            // Addition is commutative, so A:B and B:A produce the same hash code.
            return 17 * (FromBuildingId.GetHashCode() + ToBuildingId.GetHashCode());
        }
    }
}
With this done, we can now use the Distinct method on our list:
var distances = new List<Distance>
{
    new Distance(1, 2, 3.4),
    new Distance(2, 1, 3.3), // Should be considered equal to #1
    new Distance(5, 6, 7.8),
    new Distance(5, 6, 7.2)  // Should be considered equal to #3
};

// remove duplicates
var uniqueDistances = distances.Distinct().ToList();
// uniqueDistances will only have 2 items: the first and the third from distances.
And then it's just one more extension method to get the Sum of the distinct distances:
var sum = distances.Distinct().Sum(d => d.TotalMiles);
The other answers using LINQ are valid, but be aware that using LINQ is generally a trade-off between readability and performance. If you want your algorithm to scale to much larger datasets, you can use a dictionary with value tuples as keys to get fast duplicate checking for each combination while looping through the list.
Dictionary<ValueTuple<int, int>, bool> uniqueCombinations = new Dictionary<ValueTuple<int, int>, bool>();
Be aware that value tuples are only available from C# 7.0 onwards. Otherwise you can use standard Tuples as the key, which is slower, but the dictionary structure should still make it faster than using LINQ. Tuples are the cleanest way of using unique pairs as dictionary keys, since arrays compare by reference (hash code) rather than by the values they contain.
Insertion should be done in (toBuildingId, fromBuildingId) order, while checking for duplicates in the dictionary should use the reverse order, (fromBuildingId, toBuildingId). The bool value is largely unnecessary, but some value is needed to make use of the Dictionary's fast key lookup for duplicate checking.
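A rough sketch of that approach, reusing the asker's Distances class and dist list from above (the variable names here are illustrative):
// requires: using System.Collections.Generic; using System.Linq;
var uniqueCombinations = new Dictionary<ValueTuple<int, int>, bool>();
var uniques = new List<Distances>();

foreach (var d in dist)
{
    // If the reversed pair was inserted earlier, this row is the mirror image of one we already kept.
    if (uniqueCombinations.ContainsKey((d.FromBuildingID, d.ToBuildingID)))
        continue;

    // Insert in (to, from) order so the reversed row is recognised when it comes up.
    uniqueCombinations[(d.ToBuildingID, d.FromBuildingID)] = true;
    uniques.Add(d);
}

double total = uniques.Sum(u => u.Distance_Mile);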

Determine node level in tree command output

I'm attempting to populate a C# TreeView from the output of the DOS tree command (tree /F /A > treeList.txt). I need to determine the level of each node in the text file, line by line, and store it as an integer. Is there a way this can be determined with regular expressions? Below is an example of the output from the tree command:
Folder PATH listing
Volume serial number is ****-****
C:.
|   info.txt
|   treeList.txt
|
+---Folder1
|   +---Sub1
|   |   |   info.txt
|   |   |   info2.txt
|   |   |   info3.txt
|   |   |
|   |   \---Sub
|   |       |   info.txt
|   |       |
|   |       \---Sub
|   |               info.txt
|   |               info2.txt
|   |
|   +---Sub2
|   \---Sub3
+---Folder2
|   |   info.txt
|   |   info2.txt
|   |
|   +---Sub1
|   |       info.txt
|   |
|   +---Sub2
|   +---Sub3
|   |       info.txt
|   |
|   \---Sub4
+---Folder3
|   \---Sub1
+---Folder4
|   +---Sub1
|   \---Sub2
|           info.txt
|           info2.txt
|
\---Folder5
        info.txt
This is an example of the output I'm trying to achieve:
info.txt 0
treeList.txt 0
Folder1 0
Sub1 1
info.txt 2
info2.txt 2
info3.txt 2
Sub 2
info.txt 3
Sub 3
info.txt 4
info2.txt 4
Folder2 0
And so on...
Any assistance or guidance is greatly appreciated.
I have an idea. You could remove every special tree-drawing character from each line of the tree's output by replacing:
[|\\-\+]
and after that count the spaces between the beginning of the line and the name of the file or folder.
The number of spaces tells you how deep you are.
Then divide the number of spaces by 3 and you get approximately the level number (see the sketch below).
What do you think?
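A minimal sketch of that idea (it assumes the output was saved to treeList.txt as in the question; the divide-by-3 level is the approximation described above, so it won't exactly match the expected output in every case):
// requires: using System; using System.IO; using System.Text.RegularExpressions;
foreach (var line in File.ReadLines("treeList.txt"))
{
    // Strip the tree-drawing characters: | \ - +
    var cleaned = Regex.Replace(line, @"[|\\\-+]", string.Empty);

    var name = cleaned.Trim();
    if (name.Length == 0)
        continue; // purely structural lines carry no node

    // Count the spaces before the file or folder name.
    var leadingSpaces = cleaned.Length - cleaned.TrimStart(' ').Length;
    var approximateLevel = leadingSpaces / 3;

    // Note: the first header lines ("Folder PATH listing", etc.) would need to be skipped in a real implementation.
    Console.WriteLine($"{name} {approximateLevel}");
}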
Expression used to split text at beginning of node:
((?:[a-zA-Z0-9][+'-_ ()a-zA-Z0-9.]*))
Code used to determine Node level:
List<TreeItem> items = new List<TreeItem>();
int lineNum = 0;
string line;

// Read the file
StreamReader file = new StreamReader("<Path>");
while ((line = file.ReadLine()) != null)
{
    string[] parts = Regex.Split(line, "((?:[a-zA-Z0-9][+'-_ ()a-zA-Z0-9.]*))");
    if (parts.Length > 1)
    {
        // Node level is determined by the number of characters preceding the node text
        items.Add(new TreeItem(parts[1], (parts[0].Length / 4) - 1));
    }
    lineNum++;
}
file.Close();
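The TreeItem type isn't shown above; it can be a small holder class along these lines (a hypothetical sketch, since the original definition isn't included):
public class TreeItem
{
    public string Text { get; }
    public int Level { get; }

    public TreeItem(string text, int level)
    {
        Text = text;
        Level = level;
    }
}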

FIX: reading repeating groups

I have a FIX log file. I'm iterating over the lines, putting each string into
Message m = new Message(str, false)
because, for some reason, validation fails on the file (even on the first line). Now, I see that it's a 35=X message type, and 268=4 (i.e. NoMDEntries=4), so I should have 4 groups in the message.
BUT, in the debug display I am not seeing any groups. m.base._groups has a count of 0.
The string in question is:
1128=9 | 9=363 | 35=X | 49=CME | 34=3151 | 52=20121216223556363 | 75=20121217 | 268=4 | 279=0 | 22=8 | 48=43585 | 83=902 | 107=6EH3 | 269=4 | 270=13186 | 273=223556000 | 286=5 | 279=0 | 22=8 | 48=43585 | 83=903 | 107=6EH3 | 269=E | 270=13186 | 271=9 | 273=223556000 | 279=0 | 22=8 | 48=43585 | 83=904 | 107=6EH3 | 269=F | 270=13185 | 273=223556000 | 279=1 | 22=8 | 48=43585 | 83=905 | 107=6EH3 | 269=0 | 270=13186 | 271=122 | 273=223556000 | 336=0 | 346=10 | 1023=1 | 10=179 |
Another thing: how do I read the groups? Instinctively, I want to do something like
for (int i = 1; i <= noMDEntries; i++) {
    Group g = m.GetGroup(i);
    int action = Int32.Parse(g.GetField(279));
    ....
}
But that's not how it works and I haven't found documentation with better explanations.
Thanks for the help,
Yonatan.
From your code snippets, I think you're using QuickFIX/n, the native C# implementation, so I will answer accordingly.
1) Your message construction is failing because you didn't provide a DataDictionary.
Use Message::FromString instead:
Message m = new Message();
m.FromString(msg_str, false, data_dic, data_dic, someMsgFactory);
Even better, use MarketDataIncrementalRefresh::FromString to get the right return type.
You can see some uses of this function here:
https://github.com/connamara/quickfixn/blob/master/UnitTests/MessageTests.cs
2) To read groups... well, QF/n has a doc page on that, which I think explains it pretty well.
http://quickfixn.org/tutorial/repeating-groups
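For the group-reading part, the usual QuickFIX/n pattern looks roughly like the sketch below. It reuses msg_str, data_dic and someMsgFactory from the snippet above and assumes the FIX50 generated message classes; pick the namespace that matches your data dictionary (1128=9 suggests FIX50SP2):
// requires: using QuickFix; using QuickFix.Fields;
var md = new QuickFix.FIX50.MarketDataIncrementalRefresh();
md.FromString(msg_str, false, data_dic, data_dic, someMsgFactory);

int noMDEntries = md.GetInt(Tags.NoMDEntries);   // tag 268
for (int i = 1; i <= noMDEntries; i++)           // group indices are 1-based
{
    var grp = new QuickFix.FIX50.MarketDataIncrementalRefresh.NoMDEntriesGroup();
    md.GetGroup(i, grp);

    char action = grp.GetChar(Tags.MDUpdateAction);  // tag 279
    // ... read the other per-entry fields from grp the same way
}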

Possible to add two columns together and group by that column using LINQ on IQueryable?

My data comes from an IQueryable<DailyCollectionActivity> that looks like the following table when I return all of the DailyCollectionActivities:
CatType | CdDescr | CollTypeDescr | RollCasteDescr | TIFDescr | TADescr | Amount
--------------------------------------------------------------------------------
Cat 1   | Cd 1    | CollType 233  | Roll Caste 234 | TIF 2344 | TA 2343 | 344.35
Cat 1   | Cd 1    | CollType 222  | Roll Caste 235 | TIF 2345 | TA 2344 | 355.35
Cat 2   | Cd 2    | CollType 223  | Roll Caste 236 | TIF 2346 | TA 2345 | 664.44
Cat 3   | Cd 3    | CollType 255  | Roll Caste 236 | TIF 2347 | TA 2346 | 455.34
Cat 4   | Cd 4    | CollType 266  | Roll Caste 236 | TIF 2348 | TA 2347 | 455.44
I'm trying to find out if it's possible, using LINQ on the IQueryable<DailyCollectionActivity> data, to add two columns together and then group on that newly created column. For example, I need to figure out how to add CatType to CdDescr (CatType + '-' + CdDescr), then group by that newly created column and sum the Amounts. Finally, I need to take the results of that query and bind them to a RadGrid.
To make things more interesting, the user is allowed to choose which columns get added together. I could wind up with a group-by clause like (CatType + '-' + CdDescr), (TIFDescr + '-' + TADescr).
Is this something that I can reasonably accomplish using Linq?
Say it's from categories:
var result = from s in (
                 from p in categories
                 select new {
                     TypeDescr = p.CatType + "-" + p.CdDescr,
                     CollTypeDescr = p.CollTypeDescr,
                     RollCasteDescr = p.RollCasteDescr,
                     TIFDescr = p.TIFDescr,
                     TADescr = p.TADescr,
                     Amount = p.Amount
                 })
             group s by s.TypeDescr into r
             select r;
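If you also need the summed Amount per concatenated key (as the question asks), a method-syntax sketch along the same lines might look like this; dailyCollectionActivities stands in for your IQueryable<DailyCollectionActivity>, and whether the string concatenation translates to SQL depends on your LINQ provider:
// requires: using System.Linq;
var grouped = dailyCollectionActivities
    .GroupBy(p => p.CatType + "-" + p.CdDescr)
    .Select(g => new
    {
        GroupKey = g.Key,
        TotalAmount = g.Sum(x => x.Amount)
    })
    .ToList();

// e.g. bind 'grouped' to the RadGrid as its data source.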

ANTLR - decision matching with multiple alternatives

I'm trying to get a match to work for the following rule which will match operator identifiers, but it's not having it, specifically on the lines that match == and ..:
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u0080' .. '\u007F'
| '==' '='*
| '..' '.'*
)+
;
The rules with U_ prefixes refer to Unicode group lexer fragments. I have removed the = character from the U_Sm fragment, so that shouldn't be an issue.
Valid identifiers could be the following:
==
!==
<<~
..
!..
<...
±϶⇎
Invalid identifiers would be:
. (Member access is done separately)
= (Assignment is done separately)
!. (Single dots or single equals signs in an identifier are disallowed)
As you can see, the rule can include two or more equals signs or full stops in an identifier (e.g. !.., <...>, !==) or as the whole identifier (e.g. .., ==, ===), but not one.
The errors and warnings given by the compiler are the following:
warning(200): Hydra.g3:6:18: Decision can match input such as "'='<EOT>" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
warning(200): Hydra.g3:7:18: Decision can match input such as "'.'<EOT>" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
error(201): Hydra.g3:8:9: The following alternatives can never be matched: 4
A test grammar to reproduce the errors:
grammar Test;
program :
Symbol
;
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u0080' .. '\u007F'
| '==' '='*
| '..' '.'*
)+
;
fragment U_Sm
: '\u002B'
| '\u003C' | '\u003E'
| '\u007C' | '\u007E'
| '\u00AC' | '\u00B1'
| '\u00D7' | '\u00F7'
| '\u03F6'
| '\u0606' .. '\u0608'
| '\u2044' | '\u2052'
| '\u207A' .. '\u207C'
| '\u208A' .. '\u208C'
| '\u2140' .. '\u2144'
| '\u214B'
| '\u2190' .. '\u2194'
| '\u219A' | '\u219B'
| '\u21A0' | '\u21A3'
| '\u21A6' | '\u21AE'
| '\u21CE' | '\u21CF'
| '\u21D2' | '\u21D4'
| '\u21F4' .. '\u22FF'
| '\u2308' .. '\u230B'
| '\u2320' | '\u2321'
| '\u237C'
| '\u239B' .. '\u23B3'
| '\u23DC' .. '\u23E1'
| '\u25B7' | '\u25C1'
| '\u25F8' .. '\u25FF'
| '\u266F'
| '\u27C0' .. '\u27C4'
| '\u27C7' .. '\u27CA'
| '\u27CC'
| '\u27D0' .. '\u27E5'
| '\u27F0' .. '\u27FF'
| '\u2900' .. '\u2982'
| '\u2999' .. '\u29D7'
| '\u29DC' .. '\u29FB'
| '\u29FE' .. '\u2AFF'
| '\u2B30' .. '\u2B44'
| '\u2B47' .. '\u2B4C'
| '\uFB29' | '\uFE62'
| '\uFE64' .. '\uFE66'
| '\uFF0B'
| '\uFF1C' .. '\uFF1E'
| '\uFF5C' | '\uFF5E'
| '\uFFE2'
| '\uFFE9' .. '\uFFEC'
;
fragment U_So
: '\u00A6' | '\u00A7'
| '\u00A9' | '\u00AE'
| '\u00B0' | '\u00B6'
| '\u0482' | '\u060E'
| '\u060F' | '\u06E9'
| '\u06FD' | '\u06FE'
| '\u07F6' | '\u09FA'
| '\u0B70'
| '\u0BF3' .. '\u0BF8'
| '\u0BFA' | '\u0C7F'
| '\u0CF1' | '\u0CF2'
| '\u0D79'
| '\u0F01' .. '\u0F03'
| '\u0F13' .. '\u0F17'
| '\u0F1A' .. '\u0F1F'
| '\u0F34' | '\u0F36'
| '\u0F38'
| '\u0FBE' .. '\u0FC5'
| '\u0FC7' .. '\u0FCC'
| '\u0FCE' | '\u0FCF'
| '\u0FD5' .. '\u0FD8'
| '\u109E' | '\u109F'
| '\u1360'
| '\u1390' .. '\u1399'
| '\u1940'
| '\u19E0' .. '\u19FF'
| '\u1B61' .. '\u1B6A'
| '\u1B74' .. '\u1B7C'
| '\u2100' | '\u2101'
| '\u2103' .. '\u2106'
| '\u2108' | '\u2109'
| '\u2114'
| '\u2116' .. '\u2118'
| '\u211E' .. '\u2123'
| '\u2125' | '\u2127'
| '\u2129' | '\u212E'
| '\u213A' | '\u213B'
| '\u214A' | '\u214C'
| '\u214D' | '\u214F'
| '\u2195' .. '\u2199'
| '\u219C' .. '\u219F'
| '\u21A1' | '\u21A2'
| '\u21A4' | '\u21A5'
| '\u21A7' .. '\u21AD'
| '\u21AF' .. '\u21CD'
| '\u21D0' | '\u21D1'
| '\u21D3'
| '\u21D5' .. '\u21F3'
| '\u2300' .. '\u2307'
| '\u230C' .. '\u231F'
| '\u2322' .. '\u2328'
| '\u232B' .. '\u237B'
| '\u237D' .. '\u239A'
| '\u23B4' .. '\u23DB'
| '\u23E2' .. '\u23E8'
| '\u2400' .. '\u2426'
| '\u2440' .. '\u244A'
| '\u249C' .. '\u24E9'
| '\u2500' .. '\u25B6'
| '\u25B8' .. '\u25C0'
| '\u25C2' .. '\u25F7'
| '\u2600' .. '\u266E'
| '\u2670' .. '\u26CD'
| '\u26CF' .. '\u26E1'
| '\u26E3'
| '\u26E8' .. '\u26FF'
| '\u2701' .. '\u2704'
| '\u2706' .. '\u2709'
| '\u270C' .. '\u2727'
| '\u2729' .. '\u274B'
| '\u274D'
| '\u274F' .. '\u2752'
| '\u2756' .. '\u275E'
| '\u2761' .. '\u2767'
| '\u2794'
| '\u2798' .. '\u27AF'
| '\u27B1' .. '\u27BE'
| '\u2800' .. '\u28FF'
| '\u2B00' .. '\u2B2F'
| '\u2B45' | '\u2B46'
| '\u2B50' .. '\u2B59'
| '\u2CE5' .. '\u2CEA'
| '\u2E80' .. '\u2E99'
| '\u2E9B' .. '\u2EF3'
| '\u2F00' .. '\u2FD5'
| '\u2FF0' .. '\u2FFB'
| '\u3004' | '\u3012'
| '\u3013' | '\u3020'
| '\u3036' | '\u3037'
| '\u303E' | '\u303F'
| '\u3190' | '\u3191'
| '\u3196' .. '\u319F'
| '\u31C0' .. '\u31E3'
| '\u3200' .. '\u321E'
| '\u322A' .. '\u3250'
| '\u3260' .. '\u327F'
| '\u328A' .. '\u32B0'
| '\u32C0' .. '\u32FE'
| '\u3300' .. '\u33FF'
| '\u4DC0' .. '\u4DFF'
| '\uA490' .. '\uA4C6'
| '\uA828' .. '\uA82B'
| '\uA836' | '\uA837'
| '\uA839'
| '\uAA77' .. '\uAA79'
| '\uFDFD' | '\uFFE4'
| '\uFFE8' | '\uFFED'
| '\uFFEE' | '\uFFFC'
| '\uFFFD'
;
fragment U_Sc
: '\u0024'
| '\u00A2' .. '\u00A5'
| '\u060B' | '\u09F2'
| '\u09F3' | '\u09FB'
| '\u0AF1' | '\u0BF9'
| '\u0E3F' | '\u17DB'
| '\u20A0' .. '\u20B8'
| '\uA838' | '\uFDFC'
| '\uFE69' | '\uFF04'
| '\uFFE0' | '\uFFE1'
| '\uFFE5' | '\uFFE6'
;
The range '\u0080' .. '\u007F' is invalid since 0x80 is larger than 0x7F.
It seems ANTLR has a problem with your nested repetition: ( ... ( ... )+ ... )+. Even though ANTLR's + and * are greedy by default (except for .* and .+), it appears that in such nested repetitions you need to explicitly tell ANTLR to either match ungreedy or greedy (greedy in your case).
The following rule does not produce any errors:
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u007F' .. '\u0080'
| '==' (options{greedy=true;}: '=')*
| '..' (options{greedy=true;}: '.')*
)+
;
