Getting binary column from SQL Server database - C#

I need to import a column from a SQL Server database using C#.
When I view the table in SQL Server Enterprise Manager, the column is shown as <binary>, and when I run a query in SQL Server it returns the right binary values.
However, when I try coding with C# like so:
SqlConnection conn = new SqlConnection("Server=portable;Database=data;Integrated Security=true;");
conn.Open();
SqlCommand cmd = new SqlCommand("SELECT bnry FROM RawData", conn);
SqlDataReader reader = cmd.ExecuteReader();
while(reader.Read())
{
Console.WriteLine(reader. //I do not know what to put here
}
reader.Close();
conn.Close();
When I use reader.GetSqlBinary(0), I only get many SqlBinary<4096> lines as output.
When I try the same command in SQL Server Query Analyzer, it returns values of the form 0x0000....
What should I put after reader. or is there another method of getting this data from the database?

You're getting back an array of bytes, so use:
byte[] bytes = (byte[])reader[0];
What you do with them from there depends on what the bytes represent.
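For instance, if the column happened to hold a 32-bit integer, the bytes could be reinterpreted like this (a hypothetical example; the right conversion depends entirely on what was actually stored):

```csharp
using System;

class BytesToInt
{
    static void Main()
    {
        // Pretend these four bytes came back from reader[0]
        byte[] bytes = { 0x2A, 0x00, 0x00, 0x00 };

        // Reinterpret them as a little-endian Int32
        int value = BitConverter.ToInt32(bytes, 0);

        Console.WriteLine(value);   // 42 on a little-endian machine
    }
}
```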

The better way to do this is with a stream, so that you properly dispose of the stream (the data is probably coming in via a socket connection) when finished. So I would use SqlDataReader.GetStream():
using (Stream stream = reader.GetStream(0))
{
//do your work on the stream here.
}

Given this table
create table dbo.bin_test
(
c1 binary(8) not null ,
c2 binary(8) null ,
c3 varbinary(8) not null ,
c4 varbinary(8) null
)
insert dbo.bin_test values ( 0x1234 , null , 0x1234 , null )
insert dbo.bin_test values ( 0x012345678 , 0x12345678 , 0x12345678 , 0x12345678 )
This code (IF you're going to use SQLBinary)
string connectString = "Server=localhost;Database=sandbox;Trusted_Connection=True;" ;
using ( SqlConnection connection = new SqlConnection(connectString) )
using ( SqlCommand cmd = connection.CreateCommand() )
{
cmd.CommandText = "select * from dbo.bin_test" ;
cmd.CommandType = CommandType.Text ;
connection.Open() ;
using ( SqlDataReader reader = cmd.ExecuteReader() )
{
int row = 0 ;
while ( reader.Read() )
{
for ( int col = 0 ; col < reader.FieldCount ; ++col )
{
Console.Write( "row{0,2}, col{1,2}: " , row , col ) ;
SqlBinary octets = reader.GetSqlBinary(col) ;
if ( octets.IsNull )
{
Console.WriteLine( "{null}");
}
else
{
Console.WriteLine( "length={0:##0}, {{ {1} }}" , octets.Length , string.Join( " , " , octets.Value.Select(x => string.Format("0x{0:X2}",x)))) ;
}
}
Console.WriteLine() ;
++row ;
}
}
connection.Close() ;
}
should produce:
row 0, col 0: length=8, { 0x12 , 0x34 , 0x00 , 0x00 , 0x00 , 0x00 , 0x00 , 0x00 }
row 0, col 1: {null}
row 0, col 2: length=2, { 0x12 , 0x34 }
row 0, col 3: {null}
row 1, col 0: length=8, { 0x00 , 0x12 , 0x34 , 0x56 , 0x78 , 0x00 , 0x00 , 0x00 }
row 1, col 1: length=8, { 0x12 , 0x34 , 0x56 , 0x78 , 0x00 , 0x00 , 0x00 , 0x00 }
row 1, col 2: length=4, { 0x12 , 0x34 , 0x56 , 0x78 }
row 1, col 3: length=4, { 0x12 , 0x34 , 0x56 , 0x78 }
But as noted, it's probably cleaner to simply do this:
byte[] octets = reader[0] as byte[] ;
if ( octets == null )
{
Console.WriteLine( "{null}");
}
else
{
Console.WriteLine( "length={0:##0}, {{ {1} }}" , octets.Length , string.Join( " , " , octets.Select(x => string.Format("0x{0:X2}",x)))) ;
}
And get the same result.

You should access the data through the SqlDataReader indexer (MSDN).
So it would look like this:
//..
Console.WriteLine((byte[])reader["bnry"]);
//..
UPDATE:
I think I finally understand where your problem is. We are not on the same page here, so I will try to be as simple as possible.
For a start, you need to understand that all information in a computer is stored in memory as a bunch of bytes. It is quite cumbersome to work directly with bytes in memory, so different data types (int, string, Image, etc.) were introduced to make programmers' lives easier. Most objects in .NET can still be converted to their internal representation as a byte array in one way or another. During this conversion you lose the information about what the byte array contains: it could just as easily be an Image, a string, or even an int array. To get back from the binary representation, you need to know what the byte array contains.
In your example you are trying to write out the byte array directly. Since output always needs to be text, the byte array somehow has to be converted to a string. This is done by calling the .ToString() method on the byte array. Unfortunately for you, the default implementation of .ToString() for complex objects just returns the type name. That's where all those System.Byte[] and SqlBinary<4096> lines are coming from.
To get around this issue, you need to convert the byte array to the necessary type before displaying the result. It seems that in your case the byte array contains some textual information, so I guess you need to convert it to a string. To do that, you need to know the encoding (the way the string is stored in memory) of the text.
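To see why the encoding matters, here is a small standalone sketch: the same two bytes decode to different text depending on which encoding you assume (the byte values are made up for illustration):

```csharp
using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        // 0xC3 0xA9 is the single character "é" when interpreted as UTF-8...
        byte[] raw = { 0xC3, 0xA9 };
        Console.WriteLine(Encoding.UTF8.GetString(raw));              // é

        // ...but two separate characters under Windows-1252
        // (on .NET Core, register CodePagesEncodingProvider first)
        Console.WriteLine(Encoding.GetEncoding(1252).GetString(raw)); // Ã©
    }
}
```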
Basically, your code should look like this:
SqlConnection conn = new SqlConnection("Server=portable;Database=data;Integrated Security=true;");
conn.Open();
SqlCommand cmd = new SqlCommand("SELECT bnry FROM RawData", conn);
SqlDataReader reader = cmd.ExecuteReader();
while(reader.Read())
{
var valueAsArray = (byte[])reader["bnry"];
//as there are different encodings possible, you need to find encoding what works for you
var valueAsStringDefault = System.Text.Encoding.Default.GetString(valueAsArray);
Console.WriteLine(valueAsStringDefault);
//...or...
var valueAsStringUTF8 = System.Text.Encoding.UTF8.GetString(valueAsArray);
Console.WriteLine(valueAsStringUTF8);
//...or...
var valueAsStringUTF7 = System.Text.Encoding.UTF7.GetString(valueAsArray);
Console.WriteLine(valueAsStringUTF7);
//...or any other encoding. Most of them you can find in System.Text.Encoding namespace...
}
reader.Close();
conn.Close();


How to calculate hashbyte SHA1 using C#?

In a table I have a column URL which I am using to save URLs. I am calculating the hash in another column using the formula (CONVERT([varbinary](20),hashbytes('SHA1',[URL]))). It's working fine.
Now I need a similar function in C# to compute the hash, so that I can check that a similar row doesn't already exist before I insert a new one. I tried a few links but had no luck.
Here are the links:
http://weblogs.sqlteam.com/mladenp/archive/2009/04/28/Comparing-SQL-Server-HASHBYTES-function-and-.Net-hashing.aspx
How do I calculate the equivalent to SQL Server (hashbytes('SHA1',[ColumnName])) in C#?
I found this link working. All I need to do is change the formula in the db, but is it possible to make it one line?
http://forums.asp.net/t/1782626.aspx
DECLARE @HashThis nvarchar(4000);
DECLARE @BinHash varbinary(4000);
SELECT @HashThis = CONVERT(nvarchar(4000),'Password#Test');
SELECT @BinHash = HashBytes('SHA1', @HashThis);
SELECT cast(N'' as xml).value('xs:base64Binary(xs:hexBinary(sql:variable("@BinHash")))', 'nvarchar(4000)');
in c#
string pwd = "Password#Test";
var sha1Provider = HashAlgorithm.Create("SHA1");
var binHash = sha1Provider.ComputeHash(Encoding.Unicode.GetBytes(pwd));
Console.WriteLine(Convert.ToBase64String(binHash));
I am using SQL Server 2012. The collation for the database is SQL_Latin1_General_CP1_CI_AS.
Thanks
Paraminder
It's an encoding issue:
C#/.Net/CLR strings are, internally, UTF-16 encoded strings. That means each character is at least two bytes.
Sql Server is different:
char and varchar represent each character as a single byte using the code page tied to the collation used by that column
nchar and nvarchar represent each character as 2 bytes using the [old and obsolete] UCS-2 encoding for Unicode — something which was deprecated in 1996 with the release of Unicode 2.0 and UTF-16.
The big difference between UTF-16 and UCS-2 is that UCS-2 can only represent characters within the Unicode BMP (Basic Multilingual Plane); UTF-16 can represent any Unicode character. Within the BMP, as I understand it, UCS-2 and UTF-16 representations are identical.
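You can see the size difference directly (a quick sketch; 'A' sits in the BMP, so its UTF-16 and UCS-2 forms coincide):

```csharp
using System;
using System.Text;

class WidthDemo
{
    static void Main()
    {
        // .NET's Encoding.Unicode is little-endian UTF-16: 2 bytes per BMP character
        byte[] utf16 = Encoding.Unicode.GetBytes("A");
        Console.WriteLine(utf16.Length);          // 2  ({ 0x41, 0x00 })

        // A single-byte code page stores the same character in 1 byte,
        // which is why the hashed byte streams differ
        byte[] cp1252 = Encoding.GetEncoding(1252).GetBytes("A");
        Console.WriteLine(cp1252.Length);         // 1  ({ 0x41 })
    }
}
```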
That means that to compute a hash that is identical to the one that SQL Server computes, you're going to have to get a byte representation that is identical to the one that SQL Server has. Since it sounds like you're using char or varchar with the collation SQL_Latin1_General_CP1_CI_AS, per the documentation, the CP1 part means code page 1252 and the rest means case-insensitive, accent-sensitive. So...
You can get the encoding for code page 1252 by:
Encoding enc = Encoding.GetEncoding(1252);
Using that information, and given this table:
create table dbo.hash_test
(
id int not null identity(1,1) primary key clustered ,
source_text varchar(2000) collate SQL_Latin1_General_CP1_CI_AS not null ,
hash as ( hashbytes( 'SHA1' , source_text ) )
)
go
insert dbo.hash_test ( source_text ) values ( 'the quick brown fox jumped over the lazy dog.' )
insert dbo.hash_test ( source_text ) values ( 'She looked like something that might have occured to Ibsen in one of his less frivolous moments.' )
go
You'll get this output
1: the quick brown fox jumped over the lazy dog.
sql: 6039D100 3323D483 47DDFDB5 CE2842DF 758FAB5F
c#: 6039D100 3323D483 47DDFDB5 CE2842DF 758FAB5F
2: She looked like something that might have occured to Ibsen in one of his less frivolous moments.
sql: D92501ED C462E331 B0E129BF 5B4A854E 8DBC490C
c#: D92501ED C462E331 B0E129BF 5B4A854E 8DBC490C
from this program
class Program
{
static byte[] Sha1Hash( string s )
{
SHA1 sha1 = SHA1.Create() ;
Encoding windows1252 = Encoding.GetEncoding(1252) ;
byte[] octets = windows1252.GetBytes(s) ;
byte[] hash = sha1.ComputeHash( octets ) ;
return hash ;
}
static string HashToString( byte[] bytes )
{
StringBuilder sb = new StringBuilder() ;
for ( int i = 0 ; i < bytes.Length ; ++i )
{
byte b = bytes[i] ;
if ( i > 0 && 0 == i % 4 ) sb.Append( ' ' ) ;
sb.Append( b.ToString("X2") ) ;
}
string s = sb.ToString() ;
return s ;
}
private static DataTable ReadDataFromSqlServer()
{
DataTable dt = new DataTable();
using ( SqlConnection conn = new SqlConnection( "Server=localhost;Database=sandbox;Trusted_Connection=True;"))
using ( SqlCommand cmd = conn.CreateCommand() )
using ( SqlDataAdapter sda = new SqlDataAdapter(cmd) )
{
cmd.CommandText = "select * from dbo.hash_test" ;
cmd.CommandType = CommandType.Text;
conn.Open();
sda.Fill( dt ) ;
conn.Close() ;
}
return dt ;
}
static void Main()
{
DataTable dt = ReadDataFromSqlServer() ;
foreach ( DataRow row in dt.Rows )
{
int id = (int) row[ "id" ] ;
string sourceText = (string) row[ "source_text" ] ;
byte[] sqlServerHash = (byte[]) row[ "hash" ] ;
byte[] myHash = Sha1Hash( sourceText ) ;
Console.WriteLine();
Console.WriteLine( "{0:##0}: {1}" , id , sourceText ) ;
Console.WriteLine( " sql: {0}" , HashToString( sqlServerHash ) ) ;
Console.WriteLine( " c#: {0}" , HashToString( myHash ) ) ;
Debug.Assert( sqlServerHash.SequenceEqual(myHash) ) ;
}
return ;
}
}
Easy!
I would suggest that any time a hash is created, it be done in a single place, either in code or in the database. It will make your life easier in the long run. That would mean either changing your C# code to create the hash before inserting the record, or doing the duplication check within a stored procedure instead.
Regardless though, the duplication check and insert should be synchronized such that no other inserts could occur between the time you check for any duplicates and when the record is actually inserted. Easiest way to do that would be to perform them both within the same transaction.
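A hypothetical sketch of that idea in T-SQL (table and column names are made up; the UPDLOCK/HOLDLOCK hints keep a concurrent transaction from inserting the same hash between the check and the insert):

CREATE PROCEDURE dbo.insert_url_if_new @url varchar(2000)
AS
BEGIN
    BEGIN TRANSACTION ;

    IF NOT EXISTS ( SELECT *
                    FROM dbo.urls WITH ( UPDLOCK , HOLDLOCK )
                    WHERE hash = HASHBYTES( 'SHA1' , @url ) )
    BEGIN
        INSERT dbo.urls ( url ) VALUES ( @url ) ;
    END

    COMMIT TRANSACTION ;
END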
If you insist on leaving the logic as it stands I would then suggest that you create the hash in the database but expose it via a stored procedure or user defined function that could be called from your C# code.

How to fix the length for the string builder string in Winform C#?

In my project, I'm using a StringBuilder to create a table in a RichTextBox. But the data values in my columns differ in length from each other, so the result looks like shuffled data. My expected output is for the data in the table to line up in fixed-width columns, so that I can see the table properly completed in my output.
Can anyone say how to fix the length of the strings built with a StringBuilder in C#?
My code:
Connection();
try
{
string Today = "Student Test Mark Details";
string Line = "----------------------------------";
SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd = new SqlCommand("select ExaminationName, ExaminationCenter, ExaminationDate, Subjects, MarkObtained, Total, Percentage, Grade from StudentMarksHistory where StudentCode='" + ICBEStudentCode.Text + "' and ExaminationName='" + TBExaminationName.Text + "' and ClassName='" + ICBEClassSection.Text + "' and Remark='Record Saved'", cs);
StringBuilder paragraph = new StringBuilder();
SqlDataReader dr = cmd.ExecuteReader();
paragraph.Append(Today).Append("\t\n");
paragraph.Append(Line).Append("\t\n\n\n\n\n");
paragraph.Append("ExamName").Append("\t");
paragraph.Append("ExamCenter").Append("\t");
paragraph.Append("ExamDate").Append("\t");
paragraph.Append("Subject").Append("\t\t\t");
paragraph.Append("Mark").Append("\t");
paragraph.Append("Total").Append("\t");
paragraph.Append("Percentage").Append("\t");
paragraph.Append("Grade").Append("\n\n");
while (dr.Read())
{
DateTime DateName = Convert.ToDateTime(dr["ExaminationDate"]);
string subject = dr["Subjects"].ToString().Trim();
//int Len = subject.Length;
//subject = subject.ToString().PadRight(50 - Len, ' ');
paragraph.Append(dr["ExaminationName"].ToString()).Append("\t");
paragraph.Append(dr["ExaminationCenter"].ToString()).Append("\t");
paragraph.Append(DateName.ToString("dd/MM/yyyy")).Append("\t");
paragraph.Append(subject);
paragraph.Append(' ', 15 - subject.Length);
//paragraph.Append(subject.PadRight(100));
paragraph.Append(dr["MarkObtained"].ToString()).Append("\t");
paragraph.Append(dr["Total"].ToString()).Append("\t");
paragraph.Append(dr["Percentage"].ToString()).Append("\t");
paragraph.Append(dr["Grade"].ToString()).Append("\n");
}
string Notes = "************[ Minimum Pass Mark is 35 ]*************";
paragraph.Append(" ").Append("\n\n\n\n");
paragraph.Append(Notes).Append("\r\n");
dr.Close();
cs.Close();
RTBMessage.Text = paragraph.ToString();
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
Results expected:
Student Test Mark Details
----------------------------------
ExamName ExamCenter ExamDate Subject Mark Total
Class Test 1 Room No1 24/05/2013 STOREDPROCEDURE 97 404
Class Test 1 Room No1 25/05/2013 DOTNET 86 404
Class Test 1 Room No1 26/05/2013 TAMIL 80 404
Class Test 1 Room No1 23/05/2013 SOCIAL 80 404
Class Test 1 Room No1 27/05/2013 COMPUTER 61 404
************[ Minimum Pass Mark is 35 ]*************
Do not use TABs to align the columns; use composite formatting through the AppendFormat method of the StringBuilder.
Just as an example for your first column.
I suppose that ExamName should be inserted left-aligned in a column 20 characters wide.
You could write
paragraph.AppendFormat("{0,-20}", "ExamName");
while for the Total column, right-aligned in a 6-character space:
paragraph.AppendFormat("{0,6}\r\n", "Total");
Of course this kind of alignment is dependent on the kind of font used in the RichTextBox. If you use a proportional font, there is no way to align this text the way you like.
In the end, I really suggest you use a DataGridView instead.
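A minimal standalone illustration of the alignment component (note the comma for alignment, not a colon, which is the format component; the widths here are just examples):

```csharp
using System;
using System.Text;

class AlignmentDemo
{
    static void Main()
    {
        StringBuilder paragraph = new StringBuilder();

        // {index,width}: negative width = left-aligned, positive = right-aligned
        paragraph.AppendFormat("{0,-20}{1,6}", "ExamName", "Total");
        paragraph.AppendLine();
        paragraph.AppendFormat("{0,-20}{1,6}", "DOTNET", 86);

        // Both rows are exactly 26 characters wide, so the columns line up
        // in any fixed-width font
        Console.WriteLine(paragraph.ToString());
    }
}
```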
As the length of your data varies a lot, do not add tabs ("\t"); this will not work if some data is longer than the tab size.
You can use the
String.PadRight(Int32, Char)
method if you know your maximum expected data length.
Remember the usage: the first parameter describes the total length, not the number of characters to fill.
http://msdn.microsoft.com/en-us/library/66f6d830(v=vs.71).aspx
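A small sketch of PadRight in action (the width of 16 is arbitrary):

```csharp
using System;

class PadDemo
{
    static void Main()
    {
        // PadRight's parameter is the TOTAL width, not the number of padding chars
        string subject = "DOTNET".PadRight(16);
        Console.WriteLine(subject + "|");   // "DOTNET          |"

        // Strings already at or over the width come back unchanged
        Console.WriteLine("STOREDPROCEDURE".PadRight(16).Length);   // 16
    }
}
```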

Odd Oracle Error

I'm having a small issue with an Oracle command, given below:
command.CommandText = "SELECT ID, NAME, RATING, LENGTH, STARTTIME FROM SCHEDULE WHERE ID=301 AND ROWNUM=1 AND SCHEDULE.STARTTIME <= SYSDATE ORDER BY STARTTIME DESC;";
It runs perfectly well in Oracle SQL Developer, returning exactly what I need, but in C# I get the following error:
ORA-06550: line 1, column 186:
PLS-00103: Encountered the symbol "," when expecting one of the following:
. ( * # % & = - + < / > at in is mod remainder not rem
<an exponent (**)> <> or != or ~= >= <= <> and or like like2
like4 likec as between || indicator multiset member
submultiset
Can anyone see any issues with it, or anything that is illegal within C#?
EDIT: Execution code:
command.Connection = conSQL;
using (IDataReader reader = command.ExecuteReader())
{
do
{
int count = reader.FieldCount;
while (reader.Read())
{
for (int i = 0; i < count; i++)
{
string setting = reader.GetName(i).ToString();
object value = reader.GetValue(i);
// Data assigned to variables here, hidden due to length of code
// Follows pattern: object.property(reader.name) = reader.value
}
}
} while (reader.NextResult());
}
Do not put ; at the end of the command; that's a command-line tool convention, not part of SQL proper (SQL*Plus also uses / as a terminator, for instance).
Name and Id are both special keywords in Oracle SQL. Try:
SELECT "ID", "NAME"...
Remove the trailing semi-colon on the SQL statement.
Share and enjoy.
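Putting those suggestions together, the command text would become something like this sketch (untested against this schema):

```csharp
using System;

class CommandTextFix
{
    static void Main()
    {
        // The same statement, minus the trailing semicolon that SQL Developer
        // tolerates but that triggers ORA-06550/PLS-00103 through ADO.NET
        string sql =
            "SELECT ID, NAME, RATING, LENGTH, STARTTIME " +
            "FROM SCHEDULE " +
            "WHERE ID = 301 AND ROWNUM = 1 AND STARTTIME <= SYSDATE " +
            "ORDER BY STARTTIME DESC";

        Console.WriteLine(sql.EndsWith(";"));   // False
        // command.CommandText = sql;  // 'command' as in the question
    }
}
```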

Convert a file full of "INSERT INTO xxx VALUES" in to something Bulk Insert can parse

This is a followup to my first question "Porting “SQL” export to T-SQL".
I am working with a 3rd-party program that I have no control over and cannot change. This program exports its internal database into a set of .sql files, each with a format of:
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField], [BinaryField])
VALUES
(1 , 'Some Text' , 0x123456),
(2 , 'B' , NULL),
--(SNIP, it does this for 1000 records)
(999, 'E' , null);
(1000 , 'F' , null);
INSERT INTO [ExampleDB] ( [IntField] , [VarcharField] , BinaryField)
VALUES
(1001 , 'asdg', null),
(1002 , 'asdf' , 0xdeadbeef),
(1003 , 'dfghdfhg' , null),
(1004 , 'sfdhsdhdshd' , null),
--(SNIP 1000 more lines)
This pattern continues until the .sql file reaches a file size set during the export. The export files are grouped as EXPORT_PATH\%Table_Name%\Export#.sql, where the # is a counter starting at 1.
Currently I have about 1.3 GB of data, exported in 1 MB chunks (1407 files across 26 tables; all but 5 tables have only one file, and the largest table has 207 files).
Right now I just have a simple C# program that reads each file into RAM and then calls ExecuteNonQuery. The issue is that I am averaging 60 sec/file, which means it will take about 23 hrs to load the entire export.
I assume that if I could somehow format the files to be loaded with a BULK INSERT instead of an INSERT INTO, it could go much faster. Is there any easy way to do this, or do I have to write some kind of Find & Replace and keep my fingers crossed that it does not fail on some corner case and blow up my data?
Any other suggestions on how to speed up the insert into would also be appreciated.
UPDATE:
I ended up going with the parse and do a SqlBulkCopy method. It went from 1 file/min. to 1 file/sec.
Well, here is my "solution" for helping convert the data into a DataTable or otherwise (run it in LINQPad):
var i = "(null, 1 , 'Some''\n Text' , 0x123.456)";
var pat = @",?\s*(?:(?<n>null)|(?<w>[\w.]+)|'(?<s>.*)'(?!'))";
Regex.Matches(i, pat,
RegexOptions.IgnoreCase | RegexOptions.Singleline).Dump();
The match should be run once per value group (e.g. (a,b,etc)). Parsing of the results (e.g. conversion) is left to the caller and I have not tested it [much]. I would recommend creating the correctly-typed DataTable first -- although it may be possible to pass everything "as a string" to the database? -- and then use the information in the columns to help with the extraction process (possibly using type converters). For the captures: n is null, w is word (e.g. number), s is string.
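A hypothetical end-to-end run of that pattern over one value group (a sketch; not hardened against every corner case):

```csharp
using System;
using System.Text.RegularExpressions;

class ValueGroupDemo
{
    static void Main()
    {
        string group = "(1 , 'Some Text' , 0x123456)";
        string pat = @",?\s*(?:(?<n>null)|(?<w>[\w.]+)|'(?<s>.*)'(?!'))";

        foreach (Match m in Regex.Matches(group, pat,
                 RegexOptions.IgnoreCase | RegexOptions.Singleline))
        {
            if (m.Groups["n"].Success)      Console.WriteLine("null literal");
            else if (m.Groups["w"].Success) Console.WriteLine("word:   " + m.Groups["w"].Value);
            else if (m.Groups["s"].Success) Console.WriteLine("string: " + m.Groups["s"].Value);
        }
        // word:   1
        // string: Some Text
        // word:   0x123456
    }
}
```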
Happy coding.
Apparently your data is always wrapped in parentheses and starts with a left parenthesis. You might want to use this rule to split each of those lines (with RemoveEmptyEntries) and load them into a DataTable. Then you can use SqlBulkCopy to copy everything at once into the database.
This approach would not necessarily be fail-safe, but it would be certainly faster.
Edit: Here's how you could get the schema for every table:
private static DataTable extractSchemaTable(IEnumerable<String> lines)
{
DataTable schema = null;
var insertLine = lines.SkipWhile(l => !l.StartsWith("INSERT INTO [")).Take(1).First();
var startIndex = insertLine.IndexOf("INSERT INTO [") + "INSERT INTO [".Length;
var endIndex = insertLine.IndexOf("]", startIndex);
var tableName = insertLine.Substring(startIndex, endIndex - startIndex);
using (var con = new SqlConnection("CONNECTION"))
{
using (var schemaCommand = new SqlCommand("SELECT * FROM " + tableName, con))
{
con.Open();
using (var reader = schemaCommand.ExecuteReader(CommandBehavior.SchemaOnly))
{
schema = reader.GetSchemaTable();
}
}
}
return schema;
}
Then you simply need to iterate each line in the file, check if it starts with ( and split that line by Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries). Then you could add the resulting array into the created schema-table.
Something like this:
var allLines = System.IO.File.ReadAllLines(path);
DataTable result = extractSchemaTable(allLines);
for (int i = 0; i < allLines.Length; i++)
{
String line = allLines[i];
if (line.StartsWith("("))
{
String data = line.Substring(1, line.LastIndexOf(")") - 1);
var fields = data.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
// you might need to parse it to correct DataColumn.DataType
result.Rows.Add(fields);
}
}
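Once the DataTable is populated, pushing it to the server is a small step. A sketch (the "CONNECTION" placeholder and destination table name are assumptions, and the column order must match the target table):

```csharp
// Assumes 'result' is the populated DataTable from the code above and that
// its columns line up with the destination table.
using (var con = new SqlConnection("CONNECTION"))
using (var bulk = new SqlBulkCopy(con))
{
    con.Open();
    bulk.DestinationTableName = "dbo.ExampleDB";
    bulk.BatchSize = 1000;      // commit in chunks rather than one huge batch
    bulk.WriteToServer(result); // streams all rows in a single bulk operation
}
```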

Postgres bytea column is returning string (char array) instead of byte array

I have been using C# to write a concrete provider implementation of our product for different databases. Without getting into details, one of the columns is of byte-array type (bytea in Postgres; due to preferences, bytea was chosen over blob). The only problem is that it does not return the same value that was inserted. When I insert an Int32 ("0"), I get back 9 bytes [92 and 8x 48] instead of [0,0,0,0]. I need a performance-wise solution that will return the pure bytes I inserted, instead of an ASCII representation of the value "0" over 8 bytes.
I am using Npgsql to retrieve data. If someone knows a solution for C#, I will be happy to learn it as well.
Edit:
Postgres 9.0, .Net 3.5
Simplification
Command query (inside it only does an insert statement):
select InsertOrUpdateEntry(:nodeId, :timeStamp, :data)
Data parameter:
byte [] value = BitConverter.GetBytes((int)someValue);
Parameter is assigned as below
command.Parameters.Add(new NpgsqlParameter("data", NpgsqlDbType.Bytea)
{ Value = value });
Select statements:
select * from Entries
I want to get back the same byte array I entered. I would really appreciate your help.
Input: 0 0 0 0
Current Output: 92 48 48 48 48 48 48 48 48
Expected Output: 0 0 0 0
In Npgsql there is the NpgsqlDataReader class to retrieve inserted rows, e.g.:
NpgsqlConnection conn = new NpgsqlConnection(connStr);
conn.Open();
NpgsqlCommand insertCmd =
new NpgsqlCommand("INSERT INTO binaryData (data) VALUES(:dataParam)", conn);
NpgsqlParameter param = new NpgsqlParameter("dataParam", NpgsqlDbType.Bytea);
byte[] inputBytes = BitConverter.GetBytes((int)0);
Console.Write("Input:");
foreach (byte b in inputBytes)
Console.Write(" {0}", b);
Console.WriteLine();
param.Value = inputBytes;
insertCmd.Parameters.Add(param);
insertCmd.ExecuteNonQuery();
NpgsqlCommand selectCmd = new NpgsqlCommand("SELECT data FROM binaryData", conn);
NpgsqlDataReader dr = selectCmd.ExecuteReader();
if(dr.Read())
{
Console.Write("Output:");
byte[] result = (byte[])dr[0];
foreach(byte b in result)
Console.Write(" {0}", b);
Console.WriteLine();
}
conn.Close();
Result from C# app:
Input: 0 0 0 0
Output: 0 0 0 0
Result from pgAdmin:
"\000\000\000\000"
EDIT:
I found an explanation for why you are getting:
92 48 48 48 48 48 48 48 48
I checked my code with the previous version (Npgsql2.0.10-bin-ms.net3.5sp1.zip) and got the above result (while pgAdmin, of course, still returns \000\000\000\000), so I think the best thing you can do is use another version without this bug.
ANSWER: Use a higher version of Npgsql than 2.0.10
Ran into the same problem, but managed to solve it without having to resort to changing drivers.
PHP documentation has a good description of what's happening: Postgres is returning escaped data. Check your output against an ASCII table; when you see 92 48 ..., it's the text lead-in to an octal escape sequence, \0xx, just like PHP describes.
Postgres's binary data type documentation explains the escaped octets in the output. Fret not, there are code examples.
The solution is to tell Postgres how the bytea output is escaped, which can be either escape or hex. In this case, issue the following to Postgres via psql to match your data:
ALTER DATABASE yourdb SET BYTEA_OUTPUT TO 'escape';
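If changing the server setting isn't an option, the escape format can also be decoded client-side. A minimal sketch (assumes well-formed input; handles only doubled backslashes and octal \nnn sequences):

```csharp
using System;
using System.Collections.Generic;

class ByteaEscapeDecoder
{
    static byte[] Decode(string escaped)
    {
        var bytes = new List<byte>();
        int i = 0;
        while (i < escaped.Length)
        {
            if (escaped[i] != '\\')
            {
                bytes.Add((byte)escaped[i]);     // printable octets pass through as-is
                i += 1;
            }
            else if (i + 1 < escaped.Length && escaped[i + 1] == '\\')
            {
                bytes.Add((byte)'\\');           // "\\" encodes a literal backslash (0x5C)
                i += 2;
            }
            else
            {
                // "\nnn": three octal digits, e.g. "\000" -> 0x00
                bytes.Add(Convert.ToByte(escaped.Substring(i + 1, 3), 8));
                i += 4;
            }
        }
        return bytes.ToArray();
    }

    static void Main()
    {
        // pgAdmin-style output for four zero bytes
        byte[] result = Decode(@"\000\000\000\000");
        Console.WriteLine(result.Length);   // 4
        Console.WriteLine(result[0]);       // 0
    }
}
```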
