I defined a gRPC Python service running in a Docker container.
The service in my .proto file:
rpc ClientCommand(ClientRequest) returns (ClientResponse){}
Definition of "ClientResponse":
message ClientResponse {
    int32 request_id = 1;
    repeated int32 prediction_status = 2;
    repeated string prediction_info = 3;
    repeated string prediction_error = 4;
    repeated string prediction_result_name = 5;
    repeated bytes prediction_result = 6;
    repeated bytes prediction_config = 7;
    repeated bytes prediction_log = 8;
}
On the client side, I want to capture the repeated bytes fields and convert them into a file (I know it would work better with a stream, but for the moment I would like to do it like this).
The repeated string and integer fields I can convert to a List without problems --> OK
The repeated bytes fields I would like to convert to byte[]. Their type: Google.Protobuf.Collections.RepeatedField<Google.Protobuf.ByteString>.
At first it seemed impossible to convert this type to a byte[]. Could somebody help me with this, please? My temporary solution:
byte[] test = new byte[100];
Google.Protobuf.ByteString[] test2 = new Google.Protobuf.ByteString[100];
response.PredictionResult.CopyTo(test2, 0);
test2.CopyTo(test, 0);
WriteFile(@"C:\programs\file.txt", test);
Meanwhile, I figured it out:
public void WriteFileResult(string fileName, Google.Protobuf.Collections.RepeatedField<ByteString> data)
{
    byte[] bytes = new byte[data.Count];
    ByteString[] dataByteString = new ByteString[data.Count];
    string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    if (!path.EndsWith(@"\")) path += @"\";
    if (File.Exists(Path.Combine(path, fileName)))
        File.Delete(Path.Combine(path, fileName));
    using (FileStream fs = new FileStream(Path.Combine(path, fileName), FileMode.CreateNew, FileAccess.Write))
    {
        data.CopyTo(dataByteString, 0);
        // note: only the first ByteString of the repeated field is written here
        bytes = dataByteString[0].ToByteArray();
        fs.Write(bytes, 0, bytes.Length);
        _serviceResponseModel.Json = false;
        //fs.Close();
    }
}
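For reference, here is a more compact way to flatten the repeated bytes field. This is a minimal sketch, assuming the generated response property is PredictionResult as above and that all chunks belong in a single file:

using System.IO;
using System.Linq;
using Google.Protobuf;
using Google.Protobuf.Collections;

public static class ByteStringHelper
{
    // Concatenate every ByteString in the repeated field into a single byte[].
    public static byte[] ToSingleByteArray(RepeatedField<ByteString> data)
    {
        return data.SelectMany(bs => bs.ToByteArray()).ToArray();
    }

    // Write all chunks to one file without building a single large array first.
    public static void WriteToFile(string fileName, RepeatedField<ByteString> data)
    {
        using (FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write))
        {
            foreach (ByteString chunk in data)
            {
                byte[] bytes = chunk.ToByteArray();
                fs.Write(bytes, 0, bytes.Length);
            }
        }
    }
}

A call would then look like WriteToFile(@"C:\programs\file.bin", response.PredictionResult).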
I am using the ZXing.Net library to encode and decode my video file using its Reed-Solomon encoder. It works well, adding and removing parity after encoding and decoding respectively. But when writing the decoded file back, it adds "?" characters at various locations in the file that were not part of the original file. I don't understand why this happens when the file is written back.
Here is my code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using ZXing.Common.ReedSolomon;

namespace zxingtest
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            string inputFileName = @"D:\JM\bin\baseline_30.264";
            string outputFileName = @"D:\JM\bin\baseline_encoded.264";

            string Content = File.ReadAllText(inputFileName, ASCIIEncoding.Default);
            //File.WriteAllText(outputFileName, Content, ASCIIEncoding.Default);

            ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12);
            ReedSolomonDecoder dec = new ReedSolomonDecoder(GenericGF.AZTEC_DATA_12);

            //string s = "1,2,4,6,1,7,4,0,0";
            //int[] array = s.Split(',').Select(str => int.Parse(str)).ToArray();

            int parity = 10;
            List<byte> toBytes = ASCIIEncoding.Default.GetBytes(Content.Substring(0, 500)).ToList();
            for (int index = 0; index < parity; index++)
            {
                toBytes.Add(0);
            }
            int[] bytesAsInts = Array.ConvertAll(toBytes.ToArray(), c => (int)c);
            enc.encode(bytesAsInts, parity);
            bytesAsInts[1] = 3;
            dec.decode(bytesAsInts, parity);

            string st = new string(Array.ConvertAll(bytesAsInts.ToArray(), z => (char)z));
            File.WriteAllText(outputFileName, st, ASCIIEncoding.Default);
        }
    }
}
And here is the hex view of the H.264 bit stream:
The problem is that you're handling a binary format as if it were a text file with an encoding. But based on what you are doing, you only seem to be interested in reading some bytes, processing them (encode, decode) and then writing the bytes back to a file.
If that is what you need, then use the proper reader and writer for your files, in this case the BinaryReader and BinaryWriter. Using your code as a starting point, this is my version using those readers/writers. My input file and output file contain identical bytes for what is read and written.
string inputFileName = @"input.264";
string outputFileName = @"output.264";

ReedSolomonEncoder enc = new ReedSolomonEncoder(GenericGF.AZTEC_DATA_12);
ReedSolomonDecoder dec = new ReedSolomonDecoder(GenericGF.AZTEC_DATA_12);
const int parity = 10;

// open a file as stream for reading
using (var input = File.OpenRead(inputFileName))
{
    const int max_ints = 256;
    int[] bytesAsInts = new int[max_ints];

    // use a binary reader
    using (var binary = new BinaryReader(input))
    {
        for (int i = 0; i < max_ints - parity; i++)
        {
            // read a single byte, store it in the array of ints
            bytesAsInts[i] = binary.ReadByte();
        }
        // parity
        for (int i = max_ints - parity; i < max_ints; i++)
        {
            bytesAsInts[i] = 0;
        }

        enc.encode(bytesAsInts, parity);
        bytesAsInts[1] = 3;
        dec.decode(bytesAsInts, parity);

        // create a stream for writing
        using (var output = File.Create(outputFileName))
        {
            // write bytes back
            using (var writer = new BinaryWriter(output))
            {
                foreach (var value in bytesAsInts)
                {
                    // we need to write back a byte, not an int, so cast it
                    writer.Write((byte)value);
                }
            }
        }
    }
}
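If the block you process fits in memory anyway (the snippet above only touches the first 256 bytes), a shorter variant along the same lines is possible. This is a sketch, not part of the original answer; it skips BinaryReader and BinaryWriter entirely and still avoids any text encoding:

byte[] fileBytes = File.ReadAllBytes(inputFileName);

const int parity = 10;
const int dataLength = 246;                        // 256 total minus 10 parity symbols
int[] bytesAsInts = new int[dataLength + parity];  // parity slots stay zero

for (int i = 0; i < dataLength; i++)
{
    bytesAsInts[i] = fileBytes[i];
}

enc.encode(bytesAsInts, parity);
bytesAsInts[1] = 3;            // simulate a corrupted symbol
dec.decode(bytesAsInts, parity);

// write the result back as raw bytes, not text
File.WriteAllBytes(outputFileName, Array.ConvertAll(bytesAsInts, v => (byte)v));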
Yesterday my teacher gave me a task to make something like a database in a .txt file, which has to contain hexes, and a C# application that takes all the hexes from this database along with their offsets. Then I have to use each offset to take the hex from the file at that offset and compare the two hexes to check whether they are the same.
I am using a FileSystemWatcher to watch a chosen directory for new files, and with one, two, three or a few more files it works perfectly, but if I try to copy a very big folder the application stops responding.
I tried to find where the problem comes from by adding and removing functions, and found the black sheep: the function that takes the file's hex at the given offset.
public string filesHex(string path, int bytesToRead, string offsetLong)
{
    byte[] byVal;
    try
    {
        using (Stream fileStream = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            BinaryReader brFile = new BinaryReader(fileStream);
            offsetLong = offsetLong.Replace("x", string.Empty);
            long result = 0;
            long.TryParse(offsetLong, System.Globalization.NumberStyles.HexNumber, null, out result);
            fileStream.Position = result;
            byte[] offsetByte = brFile.ReadBytes(0);
            string offsetString = HexStr(offsetByte);
            //long offset = System.Convert.ToInt64(offsetString, 16);
            byVal = brFile.ReadBytes(bytesToRead);
        }
        string hex = HexStr(byVal).Substring(2);
        return hex;
    }
You could create a new Thread and run the filesHex method in it.
You can set a string field inside the thread code and read its value afterwards, like this:
public string hex="";
public void filesHex(string path,int bytesToRead,string offsetLong)
{
byte[] byVal;
using (Stream fileStream = new FileStream(path, FileMode.Open, FileAccess.Read))
{
BinaryReader brFile = new BinaryReader(fileStream);
offsetLong = offsetLong.Replace("x", string.Empty);
long result = 0;
long.TryParse(offsetLong, System.Globalization.NumberStyles.HexNumber, null, out result);
fileStream.Position = result;
byte[] offsetByte = brFile.ReadBytes(0);
string offsetString = HexStr(offsetByte);
//long offset = System.Convert.ToInt64(offsetString, 16);
byVal = brFile.ReadBytes(bytesToRead);
}
hex = HexStr(byVal).Substring(2);
}
This would be your call:

Thread thread = new Thread(() => filesHex("a", 5, "A")); // example parameters
thread.Start();
string hexfinal = hex; // here you can access the desired string

Now it will not freeze the main UI thread, because you run your method on a separate thread.
Good luck.
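One caveat with the call above: hex is read immediately after thread.Start(), before the worker thread has necessarily finished, so hexfinal can still be empty. A minimal sketch of one way to wait for the result without blocking the UI, using Task.Run together with the same filesHex method and hex field from the answer (the "a", 5, "A" arguments are just the example values from above):

using System.Threading.Tasks;

// Inside the same class that defines filesHex and the hex field.
private async Task<string> GetFileHexAsync(string path, int bytesToRead, string offsetLong)
{
    // Run the blocking file I/O on a thread-pool thread and wait for it
    // to finish without freezing the UI.
    await Task.Run(() => filesHex(path, bytesToRead, offsetLong));

    // filesHex has completed at this point, so hex is populated.
    return hex;
}

// Example call from an async event handler:
// string hexfinal = await GetFileHexAsync("a", 5, "A");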
I'm working on a project to transfer files between two COM ports.
First, I'm sending the file name, extension and size before converting the file to a byte array and sending it to the second COM port.
The problem is that I get strange characters at the beginning of the first ReadLine result, where I receive the file name, like this:

"\0R\0\0\0\0\0\b\0\0\0S\0BAlpha" // file name
".docx" // file extension
"11360" // file size

Here is the code I'm using to send the files:
Send sfile = new Send();
string path = System.IO.Path.GetFullPath(op.FileName);
sfile.Bytes = File.ReadAllBytes(path);
int size = sfile.Bytes.Length;
sfile.FileName = System.IO.Path.GetFileNameWithoutExtension(path);
sfile.Extension = System.IO.Path.GetExtension(path);

FileStream fs = new FileStream(path, FileMode.Open);
BinaryReader br = new BinaryReader(fs);

serialPort1.WriteLine(sfile.FileName);  // sending file name
serialPort1.WriteLine(sfile.Extension); // sending extension
serialPort1.WriteLine(size.ToString()); // sending size

byte[] b1 = br.ReadBytes((int)fs.Length);
for (int i = 0; i <= b1.Length; i++)
{
    serialPort1.Write(b1, 0, b1.Length);
}
br.Close();
fs.Dispose();
fs.Close();
serialPort1.Close();
And the code below is used to receive the data being sent:
string path1 = serialPort1.ReadLine();
string path2 = serialPort1.ReadLine();
string path3 = serialPort1.ReadLine();
int size = Convert.ToInt32(path3);
string path0 = @"C:\";
string fullPath = path0 + path1 + path2;
// File.Create(fullPath);
FileStream fs = new FileStream(fullPath, FileMode.Create);
byte[] b1 = new byte[size];
for (int i = 0; i < b1.Length; i++)
{
    serialPort1.Read(b1, 0, b1.Length);
}
fs.Write(b1, 0, b1.Length);
fs.Close();
serialPort1.Close();
You are not writing the bytes correctly. It should be:
byte[] b1 = br.ReadBytes((int)fs.Length);
serialPort1.Write(b1, 0, b1.Length);
The way you read them is completely wrong as well. It should be:
byte[] b1 = new byte[size];
for (int i = 0; i < b1.Length; )
{
    i += serialPort1.Read(b1, i, b1.Length - i);
}
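For completeness, here is a sketch (not part of the original answer) of how the receiving side could look with that read loop, assuming the same serialPort1 instance and that the sender still transmits the name, extension and size as lines first; the C:\ destination is just the example path from the question:

// Read the three header lines sent with WriteLine on the other end.
string fileName = serialPort1.ReadLine();
string extension = serialPort1.ReadLine();
int size = Convert.ToInt32(serialPort1.ReadLine());

string fullPath = Path.Combine(@"C:\", fileName + extension);

// SerialPort.Read may return fewer bytes than requested,
// so keep reading until exactly 'size' bytes have arrived.
byte[] buffer = new byte[size];
int read = 0;
while (read < size)
{
    read += serialPort1.Read(buffer, read, size - read);
}

File.WriteAllBytes(fullPath, buffer);
serialPort1.Close();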
I have a requirement where I need to decrypt large files in memory, because these files contain sensitive data (SSNs, DOBs, etc.). In other words, the decrypted data cannot be at rest (on disk). I was able to use the BouncyCastle API for C# and managed to make it work for files up to 780 MB. Basically, here's the code that works:
string PRIVATE_KEY_FILE_PATH = @"c:\pgp\privatekey.gpg";
string PASSPHRASE = "apassphrase";
string pgpData;
string[] pgpLines;
string pgpFilePath = @"C:\test\test.gpg";

Stream inputStream = File.Open(pgpFilePath, FileMode.Open);
Stream privateKeyStream = File.Open(PRIVATE_KEY_FILE_PATH, FileMode.Open);

pgpData = CryptoHelper.DecryptPgpData(inputStream, privateKeyStream, PASSPHRASE);
pgpLines = pgpData.Split(new string[] { "\r\n", "\n" }, StringSplitOptions.None);

Console.WriteLine(pgpLines.Length);
Console.ReadLine();

foreach (var x in pgpLines)
{
    Console.WriteLine(x);
    Console.ReadLine();
}
In the above code the entire decrypted result is stored in the pgpData string, and that's fine for files up to 780 MB, as stated previously.
However, I will be getting much larger files, and the solution above fails with an OutOfMemoryException.
I've been trying the code below, and I keep getting errors when the DecryptPgpData method is invoked. With 512 as the chunk size I get a "Premature end of stream in PartialInputStream" exception; with 1024 as the chunk size I get an "Exception starting decryption" exception. My question is: is this the correct way to decrypt chunks of data using BouncyCastle? The PGP file I'm trying to decrypt was encrypted using the gpg.exe utility. Any help is much appreciated. It's been almost two days that I've been trying to make this work, with no success.
string PRIVATE_KEY_FILE_PATH = @"c:\pgp\privatekey.gpg";
string PASSPHRASE = "apassphrase";
string pgpData;
string[] pgpLines;
string pgpFilePath = @"C:\test\test.gpg";
string decryptedData = string.Empty;

FileInfo inFile = new FileInfo(pgpFilePath);
FileStream fs = null;
fs = inFile.OpenRead();

int chunkSize = 1024;
byte[] buffer = new byte[chunkSize];
int totalRead = 0;
while (totalRead < fs.Length)
{
    int readBytes = fs.Read(buffer, 0, chunkSize);
    totalRead += readBytes;
    Stream stream = new MemoryStream(buffer);
    // privateKeyStream is opened the same way as in the working snippet above
    decryptedData = CryptoHelper.DecryptPgpData(stream, privateKeyStream, PASSPHRASE);
    Console.WriteLine(decryptedData);
}
fs.Close();
Console.WriteLine(totalRead);
Console.ReadLine();
I'm working on a quick wrapper for the SkyDrive API in C#, but I'm running into issues with downloading a file. The first part of the file comes through fine, but then differences from the original start to appear, and shortly thereafter everything becomes null. I'm fairly sure it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";

public SkyDriveFile DownloadFile(SkyDriveFile file)
{
    string uri = BaseUrl + file.ID + "/content";
    byte[] contents = GetResponse(uri);
    file.Contents = contents;
    return file;
}

public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    Stream responseStream = response.GetResponseStream();
    byte[] contents = new byte[response.ContentLength];
    responseStream.Read(contents, 0, (int)response.ContentLength);
    return contents;
}
This is the image file I'm trying to download
And this is the image I am getting
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the Content-Length matches the size of the image I'm expecting, but I'm not sure how to make my code wait for the entire response to come through, or even whether that's the approach I need to take.
Here's my test code in case it's helpful:
[TestMethod]
public void CanUploadAndDownloadFile()
{
    var api = GetApi();
    SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
    SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
    file = api.DownloadFile(file);
    api.DeleteFolder(folder);

    byte[] contents = new byte[new FileInfo(TestImageFile).Length];
    using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
    {
        fstream.Read(contents, 0, contents.Length);
    }
    using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
    {
        fstream.Write(file.Contents, 0, file.Contents.Length);
    }

    Assert.AreEqual(contents.Length, file.Contents.Length);
    bool sameData = true;
    for (int i = 0; i < contents.Length && sameData; i++)
    {
        sameData = contents[i] == file.Contents[i];
    }
    Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength);. Read doesn't guarantee that it will read response.ContentLength bytes; instead it returns the number of bytes actually read. You can use a loop there, or Stream.CopyTo.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue calling Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not ensure that you read the number of bytes expected. The following method solves this and raises an IOException if it does not read the length specified:
public static void Copy(Stream input, Stream output, long length)
{
    byte[] bytes = new byte[65536];
    long bytesRead = 0;
    int len = 0;
    while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
    {
        output.Write(bytes, 0, len);
        bytesRead = bytesRead + len;
    }
    output.Flush();
    if (bytesRead != length)
        throw new IOException();
}
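As a usage sketch (not from the original answers), the question's GetResponse method could call that helper like this, keeping the same request setup:

public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;

    using (WebResponse response = request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    using (MemoryStream buffer = new MemoryStream())
    {
        // Copy exactly ContentLength bytes; Copy throws an IOException
        // if the response ends early (see the helper above).
        Copy(responseStream, buffer, response.ContentLength);
        return buffer.ToArray();
    }
}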