I'm trying to implement a ZeroMQ pattern on the TCP layer between a C# application and distributed Python servers. I've gotten a version working with the request-reply REQ/REP pattern and it seems relatively stable when testing on localhost. However, in testing, I've debugged a few situations where I accidentally sent multiple requests before receiving a reply, which apparently is not acceptable.
In practice the network will likely have lots of dropped packets, and I suspect that I'll be dropping lots of replies and/or unable to send requests.
1) Is there a way to reset the connection between REQ/REP request-reply sockets? Would a ROUTER/DEALER pattern make more sense instead? As this is my first application with ZeroMQ, I was hoping to keep it simple.
2) Is there a good ZeroMQ mechanism for handling connectivity events? I've been reading "the guide" and there are a few mentions of monitoring connections, but no examples. I found ZMonitor, but can't get the events to trigger in C#.
Ad 1) No, there is no socket link-management interface exposed to the user for testing/resetting the state of the FSA-to-FSA link in the ZeroMQ framework.
Yes, XREQ/XREP ( a.k.a. DEALER/ROUTER ) may help you overcome the deadlocks that may, and do, happen in the REQ/REP Scalable Formal Communication Pattern:
Ref.: REQ/REP Deadlocks >>> https://stackoverflow.com/a/38163015/3666197
Fig.1: Why it is wrong to use a naive REQ/REP: in all cases where [1] in_WaitToRecvSTATE_W2R and [2] in_WaitToRecvSTATE_W2R meet, the REQ-FSA/REP-FSA Finite-State-Automata are in a principally unsalvageable mutual deadlock and will never reach their "next" in_WaitToSendSTATE_W2S internal state.
XTRN_RISK_OF_FSA_DEADLOCKED ~ { NETWORK_LoS
: || NETWORK_LoM
: || SIG_KILL( App2 )
: || ...
: }
:
[App1] ![ZeroMQ] : [ZeroMQ] ![App2]
code-control! code-control : [code-control ! code-control
+===========!=======================+ : +=====================!===========+
| ! ZMQ | : | ZMQ ! |
| ! REQ-FSA | : | REP-FSA! |
| !+------+BUF> .connect()| v |.bind() +BUF>------+! |
| !|W2S |___|>tcp:>---------[*]-----(tcp:)--|___|W2R |! |
| .send()>-o--->|___| | | |___|-o---->.recv() |
| ___/ !| ^ | |___| | | |___| ^ | |! \___ |
| REQ !| | v |___| | | |___| | v |! REP |
| \___.recv()<----o-|___| | | |___|<---o-<.send()___/ |
| !| W2R|___| | | |___| W2S|! |
| !+------<BUF+ | | <BUF+------+! |
| ! | | ! |
| ! ZMQ | | ZMQ ! |
| ! REQ-FSA | | REP-FSA ! |
~~~~~~~~~~~~~ DEADLOCKED in W2R ~~~~~~~~ * ~~~~~~ DEADLOCKED in W2R ~~~~~~~~~~~~~
| ! /\/\/\/\/\/\/\/\/\/\/\| |/\/\/\/\/\/\/\/\/\/\/! |
| ! \/\/\/\/\/\/\/\/\/\/\/| |\/\/\/\/\/\/\/\/\/\/\! |
+===========!=======================+ +=====================!===========+
Fig.2: One may implement a free-stepping transmission layer using several pure ZeroMQ builtins and add some SIG-layer tools for getting full control of all possible distributed-system states.
App1.PULL.recv( ZMQ.NOBLOCK ) and App1.PULL.poll( 0 ) are the obvious non-blocking tools here.
[App1] ![ZeroMQ]
code-control! code-control
+===========!=======================+
| ! |
| !+----------+ |
| .poll()| W2R ___|.bind() |
| ____.recv()<----o-|___|-(tcp:)--------O
| PULL !| |___| | :
| !| |___| | :
| !| |___| | :
| !+------<BUF+ | :
| ! | : ![App2]
| ! | : [ZeroMQ] ! code-control
| ! | : [code-control ! once gets started ...
| ! | : +=====================!===========+
| ! | : | ! |
| ! | : | +----------+! |
| ! | : | |___ |! |
| ! | : | |___| <--o-<.send()____ |
| ! | :<<-------<tcp:<|___| W2S|! PUSH |
| ! | : .connect() <BUF+------+! |
| ! | : | ! |
| ! | : | ! |
+===========!=======================+ : +=====================!===========+
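The App1 PULL side of Fig.2 can be sketched in C#. This assumes the NetMQ binding ( the question does not name a specific C# binding ), and the endpoint string is a placeholder:

```csharp
// Sketch only, assuming the NetMQ binding: a non-blocking PULL receiver.
// Endpoint and message format are placeholders, not taken from the question.
using System;
using NetMQ;
using NetMQ.Sockets;

class PullSide
{
    static void Main()
    {
        using (var pull = new PullSocket())
        {
            pull.Bind("tcp://*:5555");   // App1 side, as in Fig.2

            // Zero-timeout try-receive ~ App1.PULL.poll( 0 ): it never blocks,
            // so a lost or silent peer can never deadlock this loop.
            for (int i = 0; i < 100; i++)
            {
                if (pull.TryReceiveFrameString(TimeSpan.Zero, out string msg))
                    Console.WriteLine("got: " + msg);
                else
                    System.Threading.Thread.Sleep(10);  // idle; do housekeeping here
            }
        }
    }
}
```

Because neither side ever enters a blocking receive, App1 stays free to time out, signal, or tear down the socket regardless of what App2 does.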
Ad 2) No, but one may create one's own "ZeroMQ-consumables" to test the distributed system's ability to set up a new transport/signalling socket, being ready to dispose of it if the RTO-test fails to prove that both ( or all ) sides are ready to set up and communicate over the ZeroMQ infrastructure. Notice that the problems are not only in the ZeroMQ layer: the App-side also need not be ready / in a state to handle the expected communication interactions, and may cause soft-locks / deadlocks.
The best next step?
What I can do for your further questions right now is to direct you to see a bigger picture on this subject >>> with more arguments, a simple signalling-plane / messaging-plane illustration and a direct link to a must-read book from Pieter HINTJENS.
Related
I have this EF query that gives me the following results:
IQueryable<A>
| Id | count |
| 1 | 5 |
| 2 | 6 |
IQueryable<B>
| Id | count |
| 1 | 1 |
| 2 | 2 |
| 3 | 9 |
When I do
IQueryable<Something> C = A.Union(B);
Result that I got is this
| Id | count |
| 1 | 5 |
| 2 | 6 |
| 1 | 1 |
| 2 | 2 |
| 3 | 9 |
Which is logical.
What I want is a UnionBy(Id):
IQueryable<Something> C = A.UnionBy(B, c => c.Id);
and this would work perfectly in my case:
| Id | count |
| 1 | 5 | -- FROM A
| 2 | 6 | -- FROM A
| 3 | 9 | -- FROM B
If query A or B has already been executed ( by that I mean a ToList() was called ), it works perfectly and I have no problem in any way.
But in my case, both queries are not yet executed, and thus using this function results in:
System.InvalidOperationException: the query could not be translated.
The alternative is to use a GroupBy, however I have no idea how to replicate the UnionBy behavior with GroupBy.
FYI: the query works perfectly using IQueryable.Union,
and it's mandatory in my case that the request stays an IQueryable and is not executed until later.
UPDATE
⚠️ The solution that I'm looking for must stay an IQueryable, without a ToList() execution
"query could not be translated" usually means that EF doesn't support a certain LINQ or language construct as it can't translate it into SQL. One way to make this work is to force the evaluation of the expression client-side by adding e.g. ToList() or likes on the query parts before executing the UnionBy:
IQueryable<Something> C = A.ToList().UnionBy(B.ToList(),c=>c.Id);
The solution is simple: you filter B against A, using the following:
IQueryable<Something> C = A.Union(B.Where(b => A.All(a => a.Id != b.Id)));
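A quick in-memory sanity check ( plain LINQ to Objects, not an EF translation ) that this Union + Where/All combination reproduces the UnionBy-by-Id semantics; the Row record and sample data are made up to mirror the tables above:

```csharp
using System;
using System.Linq;
using System.Collections.Generic;

// Hypothetical row type mirroring the | Id | count | tables above.
record Row(int Id, int Count);

class Check
{
    static void Main()
    {
        var A = new List<Row> { new(1, 5), new(2, 6) };
        var B = new List<Row> { new(1, 1), new(2, 2), new(3, 9) };

        // Keep all of A, plus only the B rows whose Id does not appear in A.
        var C = A.Union(B.Where(b => A.All(a => a.Id != b.Id))).ToList();

        foreach (var r in C)
            Console.WriteLine($"{r.Id}: {r.Count}");
        // 1: 5, 2: 6, 3: 9 -- the same rows A.UnionBy(B, c => c.Id) yields
    }
}
```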
My question is more of an algorithm-design question than a programming one. I have 6 buildings in my dataset and a table with distances from each building to each other building:
| From_Building_ID | To_Building_ID | Distance_Mile |
+------------------+----------------+---------------+
| 1368 | 10692 | 167.201 |
| 1368 | 10767 | 216.307 |
| 1368 | 6377 | 359.002 |
| 1368 | 10847 | 362.615 |
| 1368 | 10080 | 67.715 |
| 6377 | 10692 | 488.3 |
| 6377 | 1368 | 359.002 |
| 6377 | 10080 | 327.024 |
| 6377 | 10767 | 150.615 |
| 6377 | 10847 | 41.421 |
| 10080 | 10847 | 330.619 |
| 10080 | 6377 | 327.024 |
| 10080 | 10767 | 184.329 |
| 10080 | 10692 | 166.549 |
| 10080 | 1368 | 67.715 |
| 10692 | 1368 | 167.201 |
| 10692 | 10767 | 345.606 |
| 10692 | 6377 | 488.3 |
| 10692 | 10847 | 491.898 |
| 10692 | 10080 | 166.549 |
| 10767 | 1368 | 216.307 |
| 10767 | 10692 | 345.606 |
| 10767 | 10080 | 184.329 |
| 10767 | 10847 | 154.22 |
| 10767 | 6377 | 150.615 |
| 10847 | 6377 | 41.4211 |
| 10847 | 10692 | 491.898 |
| 10847 | 1368 | 362.615 |
| 10847 | 10080 | 330.619 |
| 10847 | 10767 | 154.22 |
+------------------+----------------+---------------+
My goal is to get a short table that includes each unique combination of buildings. If a combination of any two buildings has already appeared, it should not appear again, so eventually I should end up with half the number of rows of the original set. I will then sum up the distances (for compensation purposes). The end result should look similar to this:
+------------------+----------------+---------------+
| From_Building_ID | To_Building_ID | Distance_Mile |
+------------------+----------------+---------------+
| 1368 | 10692 | 167.201 |
| 1368 | 10767 | 216.307 |
| 1368 | 6377 | 359.002 |
| 1368 | 10847 | 362.615 |
| 1368 | 10080 | 67.715 |
| 6377 | 10692 | 488.3 |
| 6377 | 10080 | 327.024 |
| 6377 | 10767 | 150.615 |
| 6377 | 10847 | 41.421 |
| 10080 | 10847 | 330.619 |
| 10080 | 10767 | 184.329 |
| 10080 | 10692 | 166.549 |
| 10692 | 10767 | 345.606 |
| 10692 | 10847 | 491.898 |
| 10767 | 10847 | 154.22 |
+------------------+----------------+---------------+
I created a class in C# with the appropriate properties:
class Distances
{
    public int FromBuildingID { get; set; }
    public int ToBuildingID { get; set; }
    public double Distance_Mile { get; set; }

    public Distances(int f, int t, double mile)
    {
        FromBuildingID = f;
        ToBuildingID = t;
        Distance_Mile = mile;
    }
}
and created a List<Distances> dist that contains all the distances as described.
I tried to select distinct distances, but the data is not reliable, so it's not a viable option
(for example, the distances between 6377 and 10847 and between 10847 and 6377 are not the same).
I am now trying to design my algorithm, without much success so far:
for (int i = 0; i < dist.Count; i++)
{
if (true)// what may the condition be?
{
}
}
Any help would be appreciated. Thanks!
One way:
var uniques = dist.Where(d=>d.FromBuildingID < d.ToBuildingID).ToList();
A more robust way, which will take both A:B and B:A, use the one with the smallest Distance_Mile, and throw out the other:
var uniques = dist
.GroupBy(d=>new {
a=Math.Min(d.FromBuildingID, d.ToBuildingID),
b=Math.Max(d.FromBuildingID, d.ToBuildingID)
}).Select(d=>d.OrderBy(z=>z.Distance_Mile).First())
.ToList();
In either case, if you just want the sum, instead of the final .ToList(), just put .Sum(d=>d.Distance_Mile)
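To illustrate, here is the grouping applied to a few rows taken from the question's table ( the small sample and the field-style class are just for the demonstration ), showing that reversed pairs collapse to one row:

```csharp
using System;
using System.Linq;
using System.Collections.Generic;

class Distances
{
    public int FromBuildingID, ToBuildingID;
    public double Distance_Mile;
    public Distances(int f, int t, double mile)
    { FromBuildingID = f; ToBuildingID = t; Distance_Mile = mile; }
}

class Demo
{
    static void Main()
    {
        var dist = new List<Distances>
        {
            new Distances(6377, 10847, 41.421),
            new Distances(10847, 6377, 41.4211), // reversed pair, slightly different data
            new Distances(1368, 10080, 67.715),
            new Distances(10080, 1368, 67.715),
        };

        // Group by the order-independent pair, keep the smallest distance per pair.
        var uniques = dist
            .GroupBy(d => new {
                a = Math.Min(d.FromBuildingID, d.ToBuildingID),
                b = Math.Max(d.FromBuildingID, d.ToBuildingID)
            })
            .Select(g => g.OrderBy(z => z.Distance_Mile).First())
            .ToList();

        Console.WriteLine(uniques.Count);                      // 2
        Console.WriteLine(uniques.Sum(d => d.Distance_Mile));  // 41.421 + 67.715
    }
}
```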
One way to think about this problem is that we want to use the System.Linq extension method Distinct() to filter out duplicate items, but that method uses the class's default equality comparer to determine whether two instances are equal, and the default comparer uses a reference comparison, which doesn't work for our scenario.
Since we want to consider two instances equal if either their FromBuildingId and ToBuildingId properties are equal, or if one's FromBuildingId equals the other's ToBuildingId and its ToBuildingId equals the other's FromBuildingId, we need to override the class's default Equals (and GetHashCode) methods with that logic:
public class Distance
{
public int FromBuildingId { get; set; }
public int ToBuildingId { get; set; }
public double TotalMiles { get; set; }
public Distance(int fromBuildingId, int toBuildingId, double totalMiles)
{
FromBuildingId = fromBuildingId;
ToBuildingId = toBuildingId;
TotalMiles = totalMiles;
}
public override bool Equals(object obj)
{
    var other = obj as Distance;
    return other != null &&
        ((other.FromBuildingId == FromBuildingId && other.ToBuildingId == ToBuildingId) ||
         (other.FromBuildingId == ToBuildingId && other.ToBuildingId == FromBuildingId));
}
public override int GetHashCode()
{
unchecked
{
return 17 * (FromBuildingId.GetHashCode() + ToBuildingId.GetHashCode());
}
}
}
With this done, we can now use the Distinct method on our list:
var distances = new List<Distance>
{
new Distance(1, 2, 3.4),
new Distance(2, 1, 3.3), // Should be considered equal to #1
new Distance(5, 6, 7.8),
new Distance(5, 6, 7.2) // Should be considered equal to #3
};
// remove duplicates
var uniqueDistances = distances.Distinct().ToList();
// uniqueDistances will only have 2 items: the first and the third from distances.
And then it's just one more extension method to get the Sum of the distinct distances:
var sum = distances.Distinct().Sum(d => d.TotalMiles);
The other answers using LINQ are valid, but be aware that using LINQ is generally a trade-off between readability and performance. If you wish your algorithm to scale performance-wise to much larger datasets, you can use a dictionary with value tuples as keys to achieve fast duplicate checking for each combination while looping through the list:
Dictionary<(int, int), bool> uniqueCombinations = new Dictionary<(int, int), bool>();
Be aware that value tuples are only available from C# 7.0 onwards. Otherwise you can use standard Tuple<int, int> keys, which are somewhat slower, but the dictionary structure should still make the check faster than a LINQ scan. Tuples are the cleanest way of representing unique pairs as dictionary keys, since arrays compare by reference rather than by the values in them.
Insertion should be done in (fromBuildingId, toBuildingId) order, while the duplicate check should also probe the reversed (toBuildingId, fromBuildingId) key. The bool value is largely unnecessary; a value is only needed so that the Dictionary data structure's fast key lookup can be used for duplicate checking.
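A minimal sketch of that loop, with made-up rows (a HashSet<(int, int)> would serve equally well; the bool value is a dummy):

```csharp
using System;
using System.Collections.Generic;

class PairDedup
{
    static void Main()
    {
        var rows = new (int From, int To, double Miles)[]
        {
            (1368, 10692, 167.201),
            (10692, 1368, 167.201),  // reversed duplicate, should be skipped
            (6377, 10847, 41.421),
        };

        var seen = new Dictionary<(int, int), bool>();
        double sum = 0;

        foreach (var r in rows)
        {
            // Skip if this pair was already seen in either orientation.
            if (seen.ContainsKey((r.From, r.To)) || seen.ContainsKey((r.To, r.From)))
                continue;
            seen[(r.From, r.To)] = true;
            sum += r.Miles;
        }

        Console.WriteLine(seen.Count);  // 2
        Console.WriteLine(sum);         // 167.201 + 41.421
    }
}
```

Each row costs two O(1) hash lookups instead of a LINQ-style scan, so the whole pass stays linear in the number of rows.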
I'm attempting to populate a C# TreeView from the output of the DOS tree command (tree /F /A > treeList.txt). I need to determine the level of each node in the text file, line by line, and store it as an integer. Is there a way this can be determined with Regex expressions? Below is an example of the output from the Tree command:
Folder PATH listing
Volume serial number is ****-****
C:.
| info.txt
| treeList.txt
|
+---Folder1
| +---Sub1
| | | info.txt
| | | info2.txt
| | | info3.txt
| | |
| | \---Sub
| | | info.txt
| | |
| | \---Sub
| | info.txt
| | info2.txt
| |
| +---Sub2
| \---Sub3
+---Folder2
| | info.txt
| | info2.txt
| |
| +---Sub1
| | info.txt
| |
| +---Sub2
| +---Sub3
| | info.txt
| |
| \---Sub4
+---Folder3
| \---Sub1
+---Folder4
| +---Sub1
| \---Sub2
| info.txt
| info2.txt
|
\---Folder5
info.txt
This is an example of the output I'm trying to achieve:
info.txt 0
treeList.txt 0
Folder1 0
Sub1 1
info.txt 2
info2.txt 2
info3.txt 2
Sub 2
info.txt 3
Sub 3
info.txt 4
info2.txt 4
Folder2 0
And so on...
Any assistance or guidance is greatly appreciated.
I have an idea. You may want to remove the tree-drawing special characters from every line of the tree's output by replacing:
[|\\-\+]
and after that count the spaces between the beginning of the line and the name of the file or folder.
The number of spaces will tell you how deep in the levels you are.
You may then divide the number of spaces by 3 and you will get approximately the level number.
What do you think?
Expression used to split text at beginning of node:
((?:[a-zA-Z0-9][+'-_ ()a-zA-Z0-9.]*))
Code used to determine Node level:
List<TreeItem> items = new List<TreeItem>();
int lineNum = 0;
string line;

// Read the file
StreamReader file = new StreamReader("<Path>");
while ((line = file.ReadLine()) != null)
{
    string[] parts = Regex.Split(line, "((?:[a-zA-Z0-9][+'-_ ()a-zA-Z0-9.]*))");
    if (parts.Length > 1)
    {
        // Node level is determined by the number of characters preceding the node text
        items.Add(new TreeItem(parts[1], (parts[0].Length / 4) - 1));
    }
    lineNum++;
}
file.Close();
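To sanity-check the arithmetic, here is the same regex and level formula applied to a few lines of the sample output, printing the levels directly (the TreeItem class is left out, since it is not shown in the question):

```csharp
using System;
using System.Text.RegularExpressions;

class LevelDemo
{
    static void Main()
    {
        // Lines copied from the sample tree output; tree /A indents 4 columns per level.
        string[] lines =
        {
            @"|   info.txt",            // prefix 4 chars  -> level 0
            @"+---Folder1",             // prefix 4 chars  -> level 0
            @"|   +---Sub1",            // prefix 8 chars  -> level 1
            @"|   |   |   info.txt",    // prefix 12 chars -> level 2
        };

        foreach (var line in lines)
        {
            var parts = Regex.Split(line, "((?:[a-zA-Z0-9][+'-_ ()a-zA-Z0-9.]*))");
            if (parts.Length > 1)
                Console.WriteLine($"{parts[1]} -> level {(parts[0].Length / 4) - 1}");
        }
    }
}
```

The printed levels match the desired output in the question (info.txt 0, Folder1 0, Sub1 1, info.txt 2).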
I have a FIX log file. I'm iterating on the lines, putting each string into
Message m = new Message(str, false)
because validation fails on the file for some reason (even on the first line). Now, I see that it's a 35=X message type with 268=4 (i.e. NoMDEntries=4), so I should have 4 groups in the message.
BUT, in the debug display I am not seeing any groups: m.base._groups has a count of 0.
The string in question is:
1128=9 | 9=363 | 35=X | 49=CME | 34=3151 | 52=20121216223556363 | 75=20121217 | 268=4 | 279=0 | 22=8 | 48=43585 | 83=902 | 107=6EH3 | 269=4 | 270=13186 | 273=223556000 | 286=5 | 279=0 | 22=8 | 48=43585 | 83=903 | 107=6EH3 | 269=E | 270=13186 | 271=9 | 273=223556000 | 279=0 | 22=8 | 48=43585 | 83=904 | 107=6EH3 | 269=F | 270=13185 | 273=223556000 | 279=1 | 22=8 | 48=43585 | 83=905 | 107=6EH3 | 269=0 | 270=13186 | 271=122 | 273=223556000 | 336=0 | 346=10 | 1023=1 | 10=179 |
Another thing is how do I read the groups? Instinctively, I want to do something like
for (int i = 1; i <= noMDEntries; i++) {
Group g = m.GetGroup(i);
int action = Int32.Parse(g.GetField(279));
....
}
But that's not how it works and I haven't found documentation with better explanations.
Thanks for the help,
Yonatan.
From your code snippets, I think you're using QuickFIX/n, the native C# implementation, so I will answer accordingly.
1) Your message construction is failing because you didn't provide a DataDictionary.
Use Message::FromString instead:
Message m = new Message();
m.FromString(msg_str, false, data_dic, data_dic, someMsgFactory);
Even better, use MarketDataIncrementalRefresh::FromString to get the right return type.
You can see some uses of this function here:
https://github.com/connamara/quickfixn/blob/master/UnitTests/MessageTests.cs
2) To read groups... well, QF/n has a doc page on that, which I think explains it pretty well.
http://quickfixn.org/tutorial/repeating-groups
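For completeness, a hedged sketch of the group-reading loop along the lines of that tutorial; assuming FIX 5.0 message classes (your sample has 1128=9, i.e. FIX50SP1, so adjust the namespace to your dictionary), with `msg` being the already-parsed message:

```csharp
// Sketch only: reading the NoMDEntries repeating group with QuickFIX/n.
// The FIX50 namespace is an assumption based on 1128=9 in the sample.
using QuickFix;
using QuickFix.Fields;

class GroupReader
{
    static void ReadEntries(QuickFix.FIX50.MarketDataIncrementalRefresh msg)
    {
        int noMDEntries = msg.GetInt(Tags.NoMDEntries);   // tag 268
        var group = new QuickFix.FIX50.MarketDataIncrementalRefresh.NoMDEntriesGroup();

        for (int i = 1; i <= noMDEntries; i++)            // group indices are 1-based
        {
            msg.GetGroup(i, group);                       // fills 'group' with entry i
            char action = group.GetChar(Tags.MDUpdateAction);  // tag 279
            // ... handle the entry
        }
    }
}
```

Note that GetGroup takes the pre-constructed group object and copies entry i into it, rather than returning a new Group as in your pseudocode.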
I'm trying to get a match to work for the following rule, which will match operator identifiers, but ANTLR is not having it, specifically on the lines that match == and ..:
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u0080' .. '\u007F'
| '==' '='*
| '..' '.'*
)+
;
The rules with U_ prefixes refer to unicode group lexer fragments. I have removed the = character from the U_Sm fragment, so that shouldn't be an issue.
Valid identifiers could be the following:
==
!==
<<~
..
!..
<...
±϶⇎
Invalid identifiers would be:
. (Member access is done separately)
= (Assignment is done separately)
!. (Single dots or single equals signs in an identifier are disallowed)
As you can see, the rule can include two or more equals signs or full stops in an identifier (e.g. !.., <...>, !==) or as the whole identifier (e.g. .., ==, ===), but not one.
The errors and warnings given by the compiler are the following:
warning(200): Hydra.g3:6:18: Decision can match input such as "'='<EOT>" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
warning(200): Hydra.g3:7:18: Decision can match input such as "'.'<EOT>" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
error(201): Hydra.g3:8:9: The following alternatives can never be matched: 4
A test grammar to reproduce the errors:
grammar Test;
program :
Symbol
;
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u0080' .. '\u007F'
| '==' '='*
| '..' '.'*
)+
;
fragment U_Sm
: '\u002B'
| '\u003C' | '\u003E'
| '\u007C' | '\u007E'
| '\u00AC' | '\u00B1'
| '\u00D7' | '\u00F7'
| '\u03F6'
| '\u0606' .. '\u0608'
| '\u2044' | '\u2052'
| '\u207A' .. '\u207C'
| '\u208A' .. '\u208C'
| '\u2140' .. '\u2144'
| '\u214B'
| '\u2190' .. '\u2194'
| '\u219A' | '\u219B'
| '\u21A0' | '\u21A3'
| '\u21A6' | '\u21AE'
| '\u21CE' | '\u21CF'
| '\u21D2' | '\u21D4'
| '\u21F4' .. '\u22FF'
| '\u2308' .. '\u230B'
| '\u2320' | '\u2321'
| '\u237C'
| '\u239B' .. '\u23B3'
| '\u23DC' .. '\u23E1'
| '\u25B7' | '\u25C1'
| '\u25F8' .. '\u25FF'
| '\u266F'
| '\u27C0' .. '\u27C4'
| '\u27C7' .. '\u27CA'
| '\u27CC'
| '\u27D0' .. '\u27E5'
| '\u27F0' .. '\u27FF'
| '\u2900' .. '\u2982'
| '\u2999' .. '\u29D7'
| '\u29DC' .. '\u29FB'
| '\u29FE' .. '\u2AFF'
| '\u2B30' .. '\u2B44'
| '\u2B47' .. '\u2B4C'
| '\uFB29' | '\uFE62'
| '\uFE64' .. '\uFE66'
| '\uFF0B'
| '\uFF1C' .. '\uFF1E'
| '\uFF5C' | '\uFF5E'
| '\uFFE2'
| '\uFFE9' .. '\uFFEC'
;
fragment U_So
: '\u00A6' | '\u00A7'
| '\u00A9' | '\u00AE'
| '\u00B0' | '\u00B6'
| '\u0482' | '\u060E'
| '\u060F' | '\u06E9'
| '\u06FD' | '\u06FE'
| '\u07F6' | '\u09FA'
| '\u0B70'
| '\u0BF3' .. '\u0BF8'
| '\u0BFA' | '\u0C7F'
| '\u0CF1' | '\u0CF2'
| '\u0D79'
| '\u0F01' .. '\u0F03'
| '\u0F13' .. '\u0F17'
| '\u0F1A' .. '\u0F1F'
| '\u0F34' | '\u0F36'
| '\u0F38'
| '\u0FBE' .. '\u0FC5'
| '\u0FC7' .. '\u0FCC'
| '\u0FCE' | '\u0FCF'
| '\u0FD5' .. '\u0FD8'
| '\u109E' | '\u109F'
| '\u1360'
| '\u1390' .. '\u1399'
| '\u1940'
| '\u19E0' .. '\u19FF'
| '\u1B61' .. '\u1B6A'
| '\u1B74' .. '\u1B7C'
| '\u2100' | '\u2101'
| '\u2103' .. '\u2106'
| '\u2108' | '\u2109'
| '\u2114'
| '\u2116' .. '\u2118'
| '\u211E' .. '\u2123'
| '\u2125' | '\u2127'
| '\u2129' | '\u212E'
| '\u213A' | '\u213B'
| '\u214A' | '\u214C'
| '\u214D' | '\u214F'
| '\u2195' .. '\u2199'
| '\u219C' .. '\u219F'
| '\u21A1' | '\u21A2'
| '\u21A4' | '\u21A5'
| '\u21A7' .. '\u21AD'
| '\u21AF' .. '\u21CD'
| '\u21D0' | '\u21D1'
| '\u21D3'
| '\u21D5' .. '\u21F3'
| '\u2300' .. '\u2307'
| '\u230C' .. '\u231F'
| '\u2322' .. '\u2328'
| '\u232B' .. '\u237B'
| '\u237D' .. '\u239A'
| '\u23B4' .. '\u23DB'
| '\u23E2' .. '\u23E8'
| '\u2400' .. '\u2426'
| '\u2440' .. '\u244A'
| '\u249C' .. '\u24E9'
| '\u2500' .. '\u25B6'
| '\u25B8' .. '\u25C0'
| '\u25C2' .. '\u25F7'
| '\u2600' .. '\u266E'
| '\u2670' .. '\u26CD'
| '\u26CF' .. '\u26E1'
| '\u26E3'
| '\u26E8' .. '\u26FF'
| '\u2701' .. '\u2704'
| '\u2706' .. '\u2709'
| '\u270C' .. '\u2727'
| '\u2729' .. '\u274B'
| '\u274D'
| '\u274F' .. '\u2752'
| '\u2756' .. '\u275E'
| '\u2761' .. '\u2767'
| '\u2794'
| '\u2798' .. '\u27AF'
| '\u27B1' .. '\u27BE'
| '\u2800' .. '\u28FF'
| '\u2B00' .. '\u2B2F'
| '\u2B45' | '\u2B46'
| '\u2B50' .. '\u2B59'
| '\u2CE5' .. '\u2CEA'
| '\u2E80' .. '\u2E99'
| '\u2E9B' .. '\u2EF3'
| '\u2F00' .. '\u2FD5'
| '\u2FF0' .. '\u2FFB'
| '\u3004' | '\u3012'
| '\u3013' | '\u3020'
| '\u3036' | '\u3037'
| '\u303E' | '\u303F'
| '\u3190' | '\u3191'
| '\u3196' .. '\u319F'
| '\u31C0' .. '\u31E3'
| '\u3200' .. '\u321E'
| '\u322A' .. '\u3250'
| '\u3260' .. '\u327F'
| '\u328A' .. '\u32B0'
| '\u32C0' .. '\u32FE'
| '\u3300' .. '\u33FF'
| '\u4DC0' .. '\u4DFF'
| '\uA490' .. '\uA4C6'
| '\uA828' .. '\uA82B'
| '\uA836' | '\uA837'
| '\uA839'
| '\uAA77' .. '\uAA79'
| '\uFDFD' | '\uFFE4'
| '\uFFE8' | '\uFFED'
| '\uFFEE' | '\uFFFC'
| '\uFFFD'
;
fragment U_Sc
: '\u0024'
| '\u00A2' .. '\u00A5'
| '\u060B' | '\u09F2'
| '\u09F3' | '\u09FB'
| '\u0AF1' | '\u0BF9'
| '\u0E3F' | '\u17DB'
| '\u20A0' .. '\u20B8'
| '\uA838' | '\uFDFC'
| '\uFE69' | '\uFF04'
| '\uFFE0' | '\uFFE1'
| '\uFFE5' | '\uFFE6'
;
The range '\u0080' .. '\u007F' is invalid since 0x80 is larger than 0x7F.
It seems ANTLR has a problem with your nested repetition: ( ... ( ... )+ ... )+. Even though ANTLR's + and * are greedy by default (except for .* and .+), it appears that in such nested repetitions you need to explicitly tell ANTLR to either match ungreedy or greedy (greedy in your case).
The following rule does not produce any errors:
Symbol
: ( U_Sm
| U_So
| U_Sc
| '\u007F' .. '\u0080'
| '==' (options{greedy=true;}: '=')*
| '..' (options{greedy=true;}: '.')*
)+
;