Byte to Character Conversion for a Telnet Stream - C#

I am getting a stream of byte data from a telnet session via TcpClient.GetStream().ReadByte(). I am then converting this byte data to ASCII via char casting. The data comes through fine, but with a lot of extra junk like 1[01;001H[0k[01.
Anyone have any idea what this extra junk might be?
UPDATE
More detailed response stream below
1[01;001H[0K[01;017H[0;1;4mTitle of Page Here[0;1m[0;1m[02;001H[02;051H[0KWed Mar 28, 2012 03:03 pm[02;051HDate Time Here[0J[03;001H[0J[23;001H[0J[0;1;7mPrompt Here[P]-- [0;1m[23;044H
When it should read
Title of Page Here
Date Time Here
Prompt Here

Parts of the 'junk' you're seeing are part of the Telnet protocol. The remote is trying to negotiate some options with you, and may also send you some other commands (although that's relatively rare in practice). See the TELNET COMMAND STRUCTURE section of the applicable RFC (RFC 854) for the exact format and meaning of all possible commands.
In most cases, you'll be able to simply ignore any Telnet commands (including option negotiation) received, but you do have to filter them: as you discovered, simply treating a Telnet session as a clean TCP stream won't work.
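To give an idea, here's a rough C# sketch of such a filter, assuming the command layout from the RFC (IAC is byte 255; WILL/WONT/DO/DONT negotiation carries one extra option byte; subnegotiation runs from IAC SB to IAC SE). The class and method names are mine, not a library API:

using System.Collections.Generic;
using System.IO;

static class TelnetFilter
{
    const byte IAC = 255, SB = 250, SE = 240, WILL = 251, DONT = 254;

    // Yields only the data bytes, silently consuming Telnet commands.
    public static IEnumerable<byte> FilterCommands(Stream stream)
    {
        int b;
        while ((b = stream.ReadByte()) != -1)
        {
            if (b != IAC) { yield return (byte)b; continue; }

            int cmd = stream.ReadByte();
            if (cmd == IAC)
                yield return IAC;       // IAC IAC is an escaped literal 255
            else if (cmd >= WILL && cmd <= DONT)
                stream.ReadByte();      // WILL/WONT/DO/DONT: skip the option byte
            else if (cmd == SB)         // subnegotiation: skip until IAC SE
            {
                int prev = 0, cur;
                while ((cur = stream.ReadByte()) != -1 && !(prev == IAC && cur == SE))
                    prev = cur;
            }
            // any other two-byte command (NOP, GA, ...) is simply dropped
        }
    }
}

A real client would answer the negotiation (typically refusing options with WONT/DONT) rather than staying silent, but for just reading the data this is usually enough.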
In addition to protocol-level options, the remote may also assume you're a terminal, and send escape sequences to ensure the data is properly displayed. Interpreting or filtering those codes will depend on the type of terminal the remote is configured to use -- it's not unlikely you'll encounter a VT100, for example.
There's no real need to delve too deeply into the specs, by the way: it's entirely feasible to use something pre-built like this minimalistic Telnet library to deal with the most important details for you.
EDIT, 29 March 2012: The additional examples of the 'junk' you're seeing confirm that the remote is treating you as a VT100. For example: [0;1;4mTitle of Page Here corresponds to Set Attribute Mode: <ESC>[{attr1};...;{attrn}m and tries to make the page title appear bright (1) and underlined (4).
Simplest option here: as soon as you see an ESCape character (ASCII 27), ignore everything after that up to and including the first character that isn't in the list [;0123456789. That will strip the most common VT100 codes: there are a few that may require special handling, but those are rare, and anyway, you have the specs now.
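In C#, that rule might look like this rough sketch (the method name is mine):

using System.Text;

static string StripVt100(string input)
{
    var sb = new StringBuilder();
    for (int i = 0; i < input.Length; i++)
    {
        if (input[i] == '\x1b')   // ESC, ASCII 27
        {
            i++;                  // skip the ESC itself
            while (i < input.Length &&
                   (input[i] == '[' || input[i] == ';' || char.IsDigit(input[i])))
                i++;
            // i now rests on the terminating character (e.g. the 'm' of [0;1;4m);
            // the for-loop increment skips it too, per the rule above
        }
        else
        {
            sb.Append(input[i]);
        }
    }
    return sb.ToString();
}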
But even if you strip the control codes, you may still end up with an unparseable data stream, especially if the host tries to maintain a fancy screen layout. For example, it may randomly update a status field (e.g. a clock) in the middle of a stream of values that you're interested in. If that's the case, you'll need a (virtual) VT100 emulator combined with a screen scraper. Those kinds of solutions mostly seem to involve expensive commercial software, although libvt100 - A purely .net/C# library for parsing a VT100/ANSI stream may work for you.


French characters causing invalid hash [duplicate]

I'm setting up a new server and want to support UTF-8 fully in my web application. I have tried this in the past on existing servers and always seem to end up having to fall back to ISO-8859-1.
Where exactly do I need to set the encoding/charsets? I'm aware that I need to configure Apache, MySQL, and PHP to do this — is there some standard checklist I can follow, or perhaps troubleshoot where the mismatches occur?
This is for a new Linux server, running MySQL 5, PHP 5, and Apache 2.
Data Storage:
Specify the utf8mb4 character set on all tables and text columns in your database. This makes MySQL physically store and retrieve values encoded natively in UTF-8. Note that MySQL will implicitly use utf8mb4 encoding if a utf8mb4_* collation is specified (without any explicit character set).
In older versions of MySQL (< 5.5.3), you'll unfortunately be forced to use simply utf8, which only supports a subset of Unicode characters. I wish I were kidding.
Data Access:
In your application code (e.g. PHP), in whatever DB access method you use, you'll need to set the connection charset to utf8mb4. This way, MySQL does no conversion from its native UTF-8 when it hands data off to your application and vice versa.
Some drivers provide their own mechanism for configuring the connection character set, which both updates its own internal state and informs MySQL of the encoding to be used on the connection—this is usually the preferred approach. In PHP:
If you're using the PDO abstraction layer with PHP ≥ 5.3.6, you can specify charset in the DSN:
$dbh = new PDO('mysql:charset=utf8mb4');
If you're using mysqli, you can call set_charset():
$mysqli->set_charset('utf8mb4'); // object oriented style
mysqli_set_charset($link, 'utf8mb4'); // procedural style
If you're stuck with plain mysql but happen to be running PHP ≥ 5.2.3, you can call mysql_set_charset.
If the driver does not provide its own mechanism for setting the connection character set, you may have to issue a query to tell MySQL how your application expects data on the connection to be encoded: SET NAMES 'utf8mb4'.
The same consideration regarding utf8mb4/utf8 applies as above.
Output:
UTF-8 should be set in the HTTP header, such as Content-Type: text/html; charset=utf-8. You can achieve that either by setting default_charset in php.ini (preferred), or manually using the header() function.
If your application transmits text to other systems, they will also need to be informed of the character encoding. With web applications, the browser must be informed of the encoding in which data is sent (through HTTP response headers or HTML metadata).
When encoding the output using json_encode(), add JSON_UNESCAPED_UNICODE as a second parameter.
Input:
Browsers will submit data in the character set specified for the document, hence nothing particular has to be done on the input.
In case you have doubts about request encoding (in case it could be tampered with), you may verify every received string as being valid UTF-8 before you try to store it or use it anywhere. PHP's mb_check_encoding() does the trick, but you have to use it religiously. There's really no way around this, as malicious clients can submit data in whatever encoding they want, and I haven't found a trick to get PHP to do this for you reliably.
Other Code Considerations:
Obviously enough, all files you'll be serving (PHP, HTML, JavaScript, etc.) should be encoded in valid UTF-8.
You need to make sure that every time you process a UTF-8 string, you do so safely. This is, unfortunately, the hard part. You'll probably want to make extensive use of PHP's mbstring extension.
PHP's built-in string operations are not by default UTF-8 safe. There are some things you can safely do with normal PHP string operations (like concatenation), but for most things you should use the equivalent mbstring function.
To know what you're doing (read: not mess it up), you really need to know UTF-8 and how it works on the lowest possible level. Check out any of the links from utf8.com for some good resources to learn everything you need to know.
I'd like to add one thing to chazomaticus' excellent answer:
Don't forget the META tag either (like this, or the HTML4 or XHTML version of it):
<meta charset="utf-8">
That seems trivial, but IE7 has given me problems with that before.
I was doing everything right; the database, database connection and Content-Type HTTP header were all set to UTF-8, and it worked fine in all other browsers, but Internet Explorer still insisted on using the "Western European" encoding.
It turned out the page was missing the META tag. Adding that solved the problem.
Edit:
The W3C actually has a rather large section dedicated to I18N. They have a number of articles related to this issue – describing the HTTP, (X)HTML and CSS side of things:
FAQ: Changing (X)HTML page encoding to UTF-8
Declaring character encodings in HTML
Tutorial: Character sets & encodings in XHTML, HTML and CSS
Setting the HTTP charset parameter
They recommend using both the HTTP header and HTML meta tag (or XML declaration in case of XHTML served as XML).
In addition to setting default_charset in php.ini, you can send the correct charset using header() from within your code, before any output:
header('Content-Type: text/html; charset=utf-8');
Working with Unicode in PHP is easy as long as you realize that most of the string functions don't work with Unicode, and some might mangle strings completely. PHP considers "characters" to be 1 byte long. Sometimes this is okay (for example, explode() only looks for a byte sequence and uses it as a separator -- so it doesn't matter what actual characters you look for). But other times, when the function is actually designed to work on characters, PHP has no idea that your text contains the multi-byte characters that come with Unicode.
A good library to check into is phputf8. This rewrites all of the "bad" functions so you can safely work on UTF-8 strings. There are extensions like the mbstring extension that try to do this for you, too, but I prefer using the library because it's more portable (but I write mass-market products, so that's important for me). But phputf8 can use mbstring behind the scenes, anyway, to increase performance.
Warning: This answer applies to PHP 5.3.5 and lower. Do not use it for PHP version 5.3.6 (released in March 2011) or later.
Compare with Palec's answer to PDO + MySQL and broken UTF-8 encoding.
I found an issue with someone using PDO and the answer was to use this for the PDO connection string:
$pdo = new PDO(
    'mysql:host=mysql.example.com;dbname=example_db',
    "username",
    "password",
    array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8")
);
In my case, I was using mb_split, which uses regular expressions. Therefore I also had to manually make sure the regular expression encoding was UTF-8 by doing mb_regex_encoding('UTF-8');
As a side note, I also discovered by running mb_internal_encoding() that the internal encoding wasn't UTF-8, and I changed that by running mb_internal_encoding("UTF-8");.
First of all, if you are on PHP before 5.3, then no. You've got a ton of problems to tackle.
I am surprised that no one has mentioned the intl library, the one that has good support for Unicode, graphemes, string operations, localisation, and much more; see below.
I will quote some information about Unicode support in PHP from Elizabeth Smith's slides at PHPBenelux'14.
INTL
Good:
Wrapper around ICU library
Standardised locales, set locale per script
Number formatting
Currency formatting
Message formatting (replaces gettext)
Calendars, dates, time zone and time
Transliterator
Spoofchecker
Resource bundles
Convertors
IDN support
Graphemes
Collation
Iterators
Bad:
Does not support zend_multibyte
Does not support HTTP input output conversion
Does not support function overloading
mbstring
Enables zend_multibyte support
Supports transparent HTTP in/out encoding
Provides some wrappers for functionality such as strtoupper
ICONV
Primarily for charset conversion
Output buffer handler
MIME encoding functionality
conversion
some string helpers (len, substr, strpos, strrpos)
Stream Filter stream_filter_append($fp, 'convert.iconv.ISO-2022-JP/EUC-JP')
DATABASES
MySQL: specify the charset and collation on tables and the charset (not the collation) on the connection. Also, don't use the mysql extension; use mysqli or PDO
PostgreSQL: pg_set_client_encoding
SQLite(3): Make sure it was compiled with Unicode and intl support
Some other gotchas
You cannot use Unicode filenames with PHP on Windows unless you use a third-party extension.
Send everything in ASCII if you are using exec, proc_open, or other command-line calls.
Plain text is not plain text, files have encodings
You can convert files on the fly with the iconv filter
The only thing I would add to these amazing answers is to emphasize saving your files in UTF-8 encoding; I have noticed that browsers accept this property over setting UTF-8 as your code's encoding. Any decent text editor will show you this. For example, Notepad++ has a menu option for file encoding, and it shows you the current encoding and enables you to change it. For all my PHP files I use UTF-8 without a BOM.
Some time ago, someone asked me to add UTF-8 support to a PHP and MySQL application designed by someone else. I noticed that all the files were encoded in ANSI, so I had to use iconv to convert all the files, change the database tables to use the UTF-8 character set and the utf8_general_ci collation, add 'SET NAMES utf8' to the database abstraction layer after the connection (if using a PHP version below 5.3.6; otherwise, you can use charset=utf8 in the connection string), and change the string functions to use their PHP multibyte equivalents.
I recently discovered that using strtolower() can cause issues where the data is truncated after a special character.
The solution was to use
mb_strtolower($string, 'UTF-8');
mb_ stands for multibyte. It supports more characters but is, in general, a little slower.
In PHP, you'll need to either use the multibyte functions, or turn on mbstring.func_overload. That way things like strlen will work if you have characters that take more than one byte.
You'll also need to identify the character set of your responses. You can either use Apache's AddDefaultCharset directive, or write PHP code that returns the header. (Or you can add a META tag to your HTML documents.)
I have just gone through the same issue and found a good solution in the PHP manual.
I changed all my files' encoding to UTF-8 and then set the default encoding on my connection. This solved all the problems.
if (!$mysqli->set_charset("utf8")) {
printf("Error loading character set utf8: %s\n", $mysqli->error);
} else {
printf("Current character set: %s\n", $mysqli->character_set_name());
}
Unicode support in PHP is still a huge mess. While it's capable of converting an ISO 8859 string (which it uses internally) to UTF-8, it lacks the capability to work with Unicode strings natively, which means all the string processing functions will mangle and corrupt your strings.
So you have to either use a separate library for proper UTF-8 support, or rewrite all the string handling functions yourself.
The easy part is just specifying the charset in HTTP headers and in the database and such, but none of that matters if your PHP code doesn't output valid UTF-8. That's the hard part, and PHP gives you virtually no help there. (I think PHP 6 is supposed to fix the worst of this, but that's still a while away.)
If you want a MySQL server to decide the character set, and not PHP as a client (old behaviour; preferred, in my opinion), try adding skip-character-set-client-handshake to your my.cnf, under [mysqld], and restart mysql.
This may cause trouble in case you're using anything other than UTF-8.
The top answer is excellent. Here is what I had to do on a regular Debian, PHP, and MySQL setup:
// Storage
// Debian. Apparently already UTF-8
// Retrieval
// The MySQL database was stored in UTF-8,
// but apparently PHP was requesting ISO 8859-1. This worked:
// ***notice "utf8", without dash, this is a MySQL encoding***
mysql_set_charset('utf8');
// Delivery
// File *php.ini* did not have a default charset,
// (it was commented out, shared host) and
// no HTTP encoding was specified in the Apache headers.
// This made Apache send out a UTF-8 header
// (and perhaps made PHP actually send out UTF-8)
// ***notice "utf-8", with dash, this is a php encoding***
ini_set('default_charset','utf-8');
// Submission
// This worked in all major browsers once Apache
// was sending out the UTF-8 header. I didn’t add
// the accept-charset attribute.
// Processing
// Changed a few commands in PHP, like substr(),
// to mb_substr()
That was all!

Server-client project (messaging protocols)

I want to do a project that will include server and client sides, using TCP socket communication (I use TcpListener for the server and TcpClient for the client side) and threading. Threading is not giving me any problems so far.
My problem is something else: because the project will include not only chat, but also creating a new game, joining a game, making moves, and leaving a game, I need to define the message format somehow.
I have read about messaging protocols and about using the first few bytes of each message to tell the server what the client is trying to do. The problem is that I do NOT know how to do it. So can someone show me an example of creating a formatted message?
Maybe it's good to mention that I use the StreamReader and StreamWriter classes to pass data between server and clients. Is this a good way?
To add:
My problem now is how to separate this data, so that the server will know what to do with it. I have read about reserving the first few bytes of each message for its type. But the problem is that I don't know how to solve this issue. So far I have only been using the StreamReader and StreamWriter classes to pass strings. If I keep coding like that, it will all become too messy (not recognizable), if you know what I mean.
So I need to do something like it:
To send bytes:
the first few bytes give the type of the data (but I don't know which class to use - maybe BinaryWriter, with BinaryReader on the other side?)
the rest of the message
on the server I have to have some code that recognizes these first few bytes, so the code will know what to do with the rest of the message
and based on these first few bytes, the code has to send data back to the clients
Do you have any ideas on what this might look like, I mean as a skeleton (something basic, so I can work on with it).
Every bit of help would be very much appreciated.
I have found one example here on Stack Overflow. It seems to be a step in the right direction. What do you think, guys?
One option that might be much easier than defining your own protocol would be to use some existing library, like WCF or JSON-RPC.
Libraries like this have their disadvantages (they often produce output that is relatively big), but if you build your applications in a modular way, you can easily switch the communication backend later, when you find your first solution wasn't good enough.
You have a few options:
Make the message format textual - including its header. So, the header contains the message size, written out as a text string, and ends with some terminator (\r\n for example). HTTP takes this approach - the headers are text. When receiving the message, you process it with ReadLine, and parse header lines with something like int.TryParse(str). A fuller sketch follows the fragments below.
Send message:
output.WriteLine(message.Length);
output.Write(message);
Receive message:
int len = int.Parse(input.ReadLine());
//message is the next 'len' characters
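Putting the two halves together, here's a rough sketch of the same textual framing (SendMessage/ReceiveMessage are names I made up, not a framework API):

using System.IO;

static class TextFraming
{
    public static void SendMessage(StreamWriter output, string message)
    {
        output.WriteLine(message.Length);  // header: the length as a text line
        output.Write(message);             // body
        output.Flush();
    }

    public static string ReceiveMessage(StreamReader input)
    {
        int len = int.Parse(input.ReadLine());  // parse the textual header
        var buffer = new char[len];
        int read = 0;
        while (read < len)                      // Read may return fewer chars than asked for
        {
            int n = input.Read(buffer, read, len - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
        return new string(buffer);
    }
}

To tell chat apart from game commands, the same header line could also carry a short type token before the length (e.g. MOVE 42), which the server then switches on.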
A second option: the entire message is in a binary format - the message header is binary, and specifies amongst other things the message length. The message content is written as a stream of bytes, encoded using, for example, Encoding.UTF8.GetBytes(str).
Or, perhaps use the HTTP protocol? I believe there is basic support for HTTP server in the form of HttpListener. Here is a simple intro to HttpListener
Whatever you do, avoid using WCF :)

Transferring binary data using sockets?

I am thinking of making a server/client application that does this: the client(s) will connect to the server and the server will send back a list of folders (probably music or videos) that the client will be able to stream or download. I am going to make a GUI for this so that it won't be a text-only interface. I want it so that when the client connects to the server and gets the information about the media files that can be streamed/downloaded, the client will show the folders/files in an explorer-like interface (folders will show up with a folder icon, videos will show up with a video icon, etc.)
This seems really big to me but I really want to learn more about socket programming with C# and feel that getting my hands really dirty is the best way. As you can imagine, I have a lot of confusion about how to start this and what it is that I need to do to make this work.
The first part of my question is: If I want to let a client receive a list of files/folders and also have the ability to download/stream them, do I need to do something with transferring binary data using sockets or am I completely wrong on that? If I'm wrong, do I need to use something else like file readers and somehow use them over sockets?
The second part to my question is: when I transfer this information over to a client, how can I make the client show the appropriate icons for videos, folders, mp3s, etc.? I'm thinking that if I have to transfer binary data I will have to use that data to somehow populate the client GUI with the correct icons/data. I am stuck on exactly how to handle that. Are there certain methods/classes that I can use to parse that data and do what I want to do with it? Again, I don't know if I'm wrong about needing to transfer binary data but either way I am confused on what it is that's necessary to handle this.
I apologize if any of my terminology is wrong. I am obviously not an expert in C# yet ;)
Any time you are sending data between machines (or between processes), you are already talking binary data (you could argue that everything is binary data, but when working with it as an OO model it doesn't feel like it). The point here is that all you need to do is serialize your messages. There are a myriad of serialization frameworks - some aimed at convenience, some aimed at range of features, and some aimed at performance (minimum bandwidth, etc.).
Even the inbuilt ones would work - for example, you can use XmlSerializer etc to write to a Stream (such as the NetworkStream you typically use with a socket). But please - not BinaryFormatter (it will hurt you, honest). Personally I'd use something a bit less verbose such as protobuf-net, but I'm somewhat biased since I wrote it ;p
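For instance, a minimal sketch of round-tripping a message object with XmlSerializer (the ChatMessage type is made up for illustration, and a MemoryStream stands in for the NetworkStream):

using System.IO;
using System.Xml.Serialization;

public class ChatMessage            // illustrative message type
{
    public string From { get; set; }
    public string Text { get; set; }
}

class Demo
{
    static void Main()
    {
        var serializer = new XmlSerializer(typeof(ChatMessage));
        using (var ms = new MemoryStream())   // stands in for a NetworkStream
        {
            serializer.Serialize(ms, new ChatMessage { From = "alice", Text = "hi" });

            ms.Position = 0;                  // rewind and read it back
            var copy = (ChatMessage)serializer.Deserialize(ms);
        }
    }
}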
However, note that when working with sockets, you are typically sending multiple (separate) messages as part of the conversation. In order for each end to correctly and conveniently read the right data per-message from the stream, it is common to prefix each message with the length (number of bytes) - for example, as a little-endian int taking a fixed 4 bytes. So the reading process is (both halves are sketched in code after the lists):
read 4 bytes
interpret that as an int, let's say n
read n bytes
run that data through your deserializer, perhaps via MemoryStream
process the message
and the sending process might be:
serialize the message, perhaps to a MemoryStream
query the length and form the 4 byte prefix
write the prefix
write the data
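A rough sketch of both halves, assuming the message has already been serialized to a byte[] (the Framing class and its method names are mine):

using System;
using System.IO;

static class Framing
{
    // Sending: a 4-byte length prefix, then the payload itself.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(payload.Length); // little-endian on typical .NET platforms
        stream.Write(prefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiving: read exactly 4 bytes, then exactly n bytes.
    public static byte[] ReadMessage(Stream stream)
    {
        int n = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, n);   // hand this to your deserializer
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)           // Read may return fewer bytes than requested
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}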
You don't have to go to as low level as Socket, unless you just want to learn of course.
I suggest using WCF. It will completely meet your requirements, will be flexible and you will not have to deal with sockets directly.
WCF is a big framework, but you can quickly learn how to use it.
Introduction to WCF
Here you can find how to retrieve and show file icon in Windows.
If your client is hosted on Windows, you don't need to transfer icons; you can just retrieve the image on the client by filename/extension.

What is a good format for command line output when it is being used for further processing?

I have written a console application in Delphi that queries information from several locations. This application will be launched by another process, and the output to STDOUT will be captured by the launching process.
The information I am retrieving is to be interpreted by the calling application for reporting purposes. What is the best way to output this data to STDOUT so that it can be easily parsed? JSON? XML? CSV? The data, specifically, is remote workstation information, so it will pull things back like running processes, and details about each process.
Does anyone have any experience with this or suggestions?
If you want something that can be easily parsed, especially if it has to be done quickly, go with the simplest format that can effectively communicate the information you need. CSV if you can, otherwise try JSON. Definitely not XML unless you really, really need all the extra complexity for some reason.
I'd go for a tab-delimited format, if your data (as it seems) doesn't contain that character, because it allows the fastest and simplest processing.
All the other formats are slower and more complicated (even if they give you more power).
The closest match is CSV, but CSV needs to quote an item if the item contains some of the special characters defined by the CSV format (space, comma, quotes, etc.).
Because of the above thing, the Tab delimited format is the most compact one, hence it has the greatest speed over-the-wire. (Since you're talking about remote workstations I assume that you're on some kind of network).
Also, another thing worth mentioning is that the Tab delimited format is very readable thus making the debugging much easier, if needed.
As an aside, if the Tab character is present in your data stream, you can choose another character which you are sure cannot occur (for example, #1, etc.). Of course, this is only if your usage scenario permits it.
HTH
It would depend entirely on what the launching process has available. If it's a small Delphi app, CSV is easy to parse with just TStringList. XML may be more heavyweight than JSON, but Delphi ships with an XML parser and, AFAIK, not a JSON parser.
The XML output format has the advantage that you can pipe it to an XSL formatter, so that the XML data can be converted to a user-friendly HTML document. (You can almost have your cake and eat it too.) ...

Is it possible to reliably auto-decode user files to Unicode? [C#]

I have a web application that allows users to upload their content for processing. The processing engine expects UTF8 (and I'm composing XML from multiple users' files), so I need to ensure that I can properly decode the uploaded files.
Since I'd be surprised if any of my users even knew that their files were encoded, I have very little hope they'd be able to correctly specify the encoding (decoder) to use. And so, my application is left with the task of detecting the encoding before decoding.
This seems like such a universal problem, I'm surprised not to find either a framework capability or general recipe for the solution. Can it be I'm not searching with meaningful search terms?
I've implemented BOM-aware detection (http://en.wikipedia.org/wiki/Byte_order_mark), but I'm not sure how often files will be uploaded without a BOM to indicate their encoding, and this isn't useful for most non-UTF files.
My questions boil down to:
Is BOM-aware detection sufficient for the vast majority of files?
In the case where BOM-detection fails, is it possible to try different decoders and determine if they are "valid"? (My attempts indicate the answer is "no.")
Under what circumstances will a "valid" file fail with the C# encoder/decoder framework?
Is there a repository anywhere that has a multitude of files with various encodings to use for testing?
While I'm specifically asking about C#/.NET, I'd like to know the answer for Java, Python and other languages for the next time I have to do this.
So far I've found:
A "valid" UTF-16 file with Ctrl-S characters has caused encoding to UTF-8 to throw an exception (Illegal character?) (That was an XML encoding exception.)
Decoding a valid UTF-16 file with UTF-8 succeeds but gives text with null characters. Huh?
Currently, I only expect UTF-8, UTF-16 and probably ISO-8859-1 files, but I want the solution to be extensible if possible.
My existing set of input files isn't nearly broad enough to uncover all the problems that will occur with live files.
Although the files I'm trying to decode are "text" I think they are often created w/methods that leave garbage characters in the files. Hence "valid" files may not be "pure". Oh joy.
Thanks.
There won't be an absolutely reliable way, but you may be able to get a "pretty good" result with some heuristics (a rough sketch in code follows the list).
If the data starts with a BOM, use it.
If the data contains 0-bytes, it is likely UTF-16 or UTF-32. You can distinguish between these, and between their big-endian and little-endian variants, by looking at the positions of the 0-bytes.
If the data can be decoded as utf-8 (without errors), then it is very likely utf-8 (or US-ASCII, but this is a subset of utf-8)
Next, if you want to go international, map the browser's language setting to the most likely encoding for that language.
Finally, assume ISO-8859-1
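Here's a rough C# sketch of the first three steps (trust the BOM, look at the 0-bytes, probe strict UTF-8). It's a heuristic, not a guarantee; UTF-32 BOMs and the browser-language step are left out for brevity, and the names are mine:

using System;
using System.Text;

static class EncodingGuesser
{
    public static Encoding Guess(byte[] data)
    {
        // 1. Trust a BOM when present.
        if (data.Length >= 3 && data[0] == 0xEF && data[1] == 0xBB && data[2] == 0xBF)
            return Encoding.UTF8;
        if (data.Length >= 2 && data[0] == 0xFF && data[1] == 0xFE)
            return Encoding.Unicode;              // UTF-16 LE
        if (data.Length >= 2 && data[0] == 0xFE && data[1] == 0xFF)
            return Encoding.BigEndianUnicode;     // UTF-16 BE

        // 2. 0-bytes suggest a wide encoding; for mostly-ASCII text,
        //    UTF-16 BE has zeros at even offsets, UTF-16 LE at odd offsets.
        if (Array.IndexOf(data, (byte)0) >= 0)
        {
            int even = 0, odd = 0;
            for (int i = 0; i < data.Length; i++)
                if (data[i] == 0) { if (i % 2 == 0) even++; else odd++; }
            return even > odd ? Encoding.BigEndianUnicode : Encoding.Unicode;
        }

        // 3. Probe strict UTF-8: an error-throwing decoder rejects invalid sequences.
        try
        {
            new UTF8Encoding(false, true).GetString(data); // second arg: throwOnInvalidBytes
            return Encoding.UTF8;                          // also covers plain US-ASCII
        }
        catch (DecoderFallbackException) { }

        // 4. Fall back to Latin-1.
        return Encoding.GetEncoding("ISO-8859-1");
    }
}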
Whether "pretty good" is "good enough" depends on your application, of course. If you need to be sure, you might want to display the results as a preview, and let the user confirm that the data looks right. If it doesn't, try the next likely encoding, until the user is satisfied.
Note: this algorithm will not work if the data contains garbage characters. For example, a single garbage byte in otherwise valid UTF-8 will cause UTF-8 decoding to fail, sending the algorithm down the wrong path. You may need to take additional measures to handle this. For example, if you can identify possible garbage beforehand, strip it before you try to determine the encoding. (It doesn't matter if you strip too aggressively; once you have determined the encoding, you can decode the original unstripped data, just configure the decoders to replace invalid characters instead of throwing an exception.) Or count decoding errors and weight them appropriately. But this probably depends much on the nature of your garbage, i.e. what assumptions you can make.
Have you tried reading a representative cross-section of your files from users, running them through your program, testing, correcting any errors, and moving on?
I've found File.ReadAllLines() pretty effective across a very wide range of applications without worrying about all of the encodings. It seems to handle it pretty well.
Xmlreader() has done fairly well once I figured out how to use it properly.
Maybe you could post some specific examples of data and get some better responses.
This is a well-known problem. You can try to do what Internet Explorer does. This is a nice article on The Code Project that describes Microsoft's solution to the problem. However, no solution is 100% accurate, as everything is based on heuristics. And it is also not safe to assume that a BOM will be present.
You may like to look at a Python-based solution called chardet. It's a Python port of Mozilla code. Although you may not be able to use it directly, its documentation is well worth reading, as is the original Mozilla article it references.
I ran into a similar issue. I needed a PowerShell script that figured out whether a file was text-encoded (in any common encoding) or not.
It's definitely not exhaustive, but here's my solution...
PowerShell search script that ignores binary files
