Sending an image over a socket and saving it on the server - C#

I'm currently facing a problem with my project and I hope you can spot what I'm missing, because I can't see it myself.
I'm trying to send a picture from a C# client (Windows) to my C server running on a Linux system. I transmit the image's binary data over a TCP socket, and that part works just fine. The problem is that when I write the received buffer to a file on the Linux system with fwrite, some of the data that is present in the buffer apparently isn't written, or is written with corrupted values.
For example, this is the picture I'm trying to send:
And this is the one I get on the server:
The client:
public static void sendPicture(Image image)
{
    Byte[] imageBytes;
    using (MemoryStream s = new MemoryStream())
    {
        image.Save(s, ImageFormat.Jpeg);
        imageBytes = s.ToArray();
        s.Close();
    }
    if (imageBytes.Length <= 5242880)
    {
        try
        {
            NetworkStream stream = client.GetStream();
            File.WriteAllBytes("before.jpg", imageBytes);

            // Send image size
            Byte[] imgSize = BitConverter.GetBytes((UInt32)imageBytes.Length);
            stream.Write(imgSize, 0, imgSize.Length);

            // Get answer from server whether the file size is ok
            Byte[] data = new Byte[1];
            // recv only checks for partial reads; otherwise it
            // works like stream.Read(data, 0, data.Length)
            Int32 count = recv(stream, data, data.Length);
            if (count != 1 || data[0] != 0x4)
                return;

            stream.Write(imageBytes, 0, imageBytes.Length);
            ...
}
The server:
...
// INFO_BUFFER_SIZE (1)
buffer = malloc(INFO_BUFFER_SIZE);
// does allow partial reads
if (read_from_client(connfd, buffer, INFO_BUFFER_SIZE) == NULL) {
    error_out(buffer,
              "Error during receive of client data # read_from_client");
    close(connfd);
    continue;
}

// reconstruct the image_size
uint32_t image_size =
    buffer[3] << (0x8 * 3) | buffer[2] << (0x8 * 2) | buffer[1] << 0x8 |
    buffer[0];
fprintf(stderr, "img size: %u\n", image_size);

// Check if file size is ok
if (check_control(image_size, 1, response) > 0) {
    inform_and_close(connfd, response);
    continue;
}

// Inform that the size is ok and we're ready to receive the image now
(void) send(connfd, response, CONT_BUFFER_SIZE, 0);

free(buffer);
buffer = malloc(image_size);
if (read_from_client(connfd, buffer, image_size) == NULL) {
    error_out(buffer,
              "Error during receive of client data # read_from_client");
    close(connfd);
    continue;
}

FILE *f;
// Generate a GUID for the name
uuid_t guid;
uuid_generate_random(guid);
char filename[37];
uuid_unparse(guid, filename);

if ((f = fopen(filename, "wb")) == NULL) {
    inform_and_close(connfd, 0x0);
    error_out(buffer,
              "Error while trying to open a file to save the data");
    continue;
}

if (fwrite(buffer, sizeof(buffer[0]), image_size, f) != image_size) {
    inform_and_close(connfd, 0x0);
    error_out(buffer, "Error while writing to file");
    continue;
}

char output[100];
(void) snprintf(output, sizeof(output), "mv %s uploads/%s.jpg",
                filename, filename);
system(output);

free(buffer);
close(connfd);
...
If I receive the image and send it straight back to the client, and the client then writes the received buffer to a file, everything is fine: there is no difference between the file that was sent and the one that was received. Because of that I'm quite sure the transmission itself works as expected.
If you need anything more, let me know!

fopen creates a buffered I/O stream. But you aren't flushing the buffer and closing the file. Cutting out the error checking, you're doing:
f = fopen(filename, "wb");
fwrite(buffer, sizeof (buffer[0]), image_size, f);
You should add these at the end:
fflush(f);
fclose(f);
fclose will actually flush for you but it's best to do the flush separately so that you can check for errors prior to closing.
What's happening here (somewhat oversimplified) is:
fopen creates the file on disk (using the open(2) [or creat(2)] system call) and also allocates an internal buffer.
fwrite repeatedly fills the internal buffer. Each time the buffer fills to some boundary (determined by BUFSIZ -- see the setbuf(3) man page), fwrite flushes it to disk (i.e. it does a write(2) system call).
However, when you're finished, the end of the file is still sitting in the internal buffer -- because the library can't know that you're done writing to the file, and you didn't happen to land on a BUFSIZ boundary. Either fflush or fclose will tell the library to flush out that last partial buffer, writing it to disk. Then, with fclose, the underlying OS file descriptor will also be closed (which you should always do anyway, or your server will have a "file descriptor leak").
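To make the point concrete, here is a small self-contained sketch of the write-flush-close sequence with the error checks done separately; the file name and payload are invented for the example, not taken from the server code:
#include <stdio.h>
#include <string.h>

/* Write a buffer to a file, flushing and closing explicitly so that
 * write errors are detected before the descriptor is released. */
static int save_buffer(const char *filename, const unsigned char *buf, size_t len)
{
    FILE *f = fopen(filename, "wb");
    if (f == NULL) {
        perror("fopen");
        return -1;
    }
    if (fwrite(buf, 1, len, f) != len) {
        fprintf(stderr, "fwrite: short write\n");
        fclose(f);
        return -1;
    }
    /* Push the last partial stdio buffer out to the kernel and check it. */
    if (fflush(f) != 0) {
        perror("fflush");
        fclose(f);
        return -1;
    }
    /* Always close, or every upload leaks a file descriptor. */
    if (fclose(f) != 0) {
        perror("fclose");
        return -1;
    }
    return 0;
}

int main(void)
{
    const char payload[] = "example payload";
    return save_buffer("example.bin", (const unsigned char *)payload,
                       strlen(payload)) == 0 ? 0 : 1;
}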

Related

Sending large image through TcpClient C#

I have the following code to send a picture to a receiving application
public static void sendFile(string file, string ip)
{
    using (TcpClient client = new TcpClient())
    {
        client.Connect(IPAddress.Parse(ip), 44451);
        //Console.WriteLine(ip);
        NetworkStream nwStream = client.GetStream();

        MemoryStream ms = new MemoryStream();
        Image x = Image.FromFile(file);
        x.Save(ms, x.RawFormat);
        byte[] bytesToSend = ms.ToArray();

        nwStream.Write(bytesToSend, 0, bytesToSend.Length);
        nwStream.Flush();
        client.Close();
    }
}
and I'm receiving the file on the other end with this
NetworkStream nwStream = clientCopy.GetStream();
byte[] buffer = new byte[clientCopy.ReceiveBufferSize];

//---read incoming stream---
int bytesRead = nwStream.Read(buffer, 0, clientCopy.ReceiveBufferSize);
MemoryStream ms = new MemoryStream(buffer);
Image returnImage = Image.FromStream(ms);
//ms.Flush();
//ms.Close();

String path;
if (!Directory.Exists(path = @"C:\Users\acer\AppData\Roaming\test"))
{
    Directory.CreateDirectory(@"C:\Users\acer\AppData\Roaming\test");
}

string format;
if (ImageFormat.Jpeg.Equals(returnImage.RawFormat))
{
    format = ".jpg";
}
else if (ImageFormat.Png.Equals(returnImage.RawFormat))
{
    format = ".png";
}
else
{
    format = ".jpg";
}

returnImage.Save(@"C:\Users\acer\AppData\Roaming\test\default_pic" + format, returnImage.RawFormat);
If I'm sending a picture that is small (around <20 KB), the file is received 100% on the other end, but if I send a file around >=100 KB, the picture is received but only half of the image is loaded. I'm aware of the approach of reading the stream until all data has been read, but I don't know how to implement it correctly.
Thank you
You're only calling Read once, which certainly isn't guaranteed to read all the bytes. You could either loop, calling Read and copying the relevant number of bytes on each iteration, or you could use Stream.CopyTo:
var imageStream = new MemoryStream();
nwStream.CopyTo(imageStream);
// Rewind so that anything reading the data will read from the start
imageStream.Position = 0;
... or you could just read the image straight from the network stream:
// No need for another stream...
Image returnImage = Image.FromStream(nwStream);
(It's possible that would fail due to the stream being non-seekable... in which case using CopyTo as above would be the simplest option.)
The TCP protocol (like any other stream protocol) can't be used to transfer data as-is. Most of the time it is impossible to know whether all the data has arrived, or whether an unrelated chunk of data was received together with the expected one. Therefore you almost always need to define an application-level protocol on top of it, for example by sending a message header (like HTTP does) or by defining a message separator (like the line break in Telnet; separators are impractical for large messages, though). In the simplest case it is enough to define a header that contains only the length of the message.
Thus, in your case you can send the 4-byte image length and then the image. On the receiving side you read the 4-byte size first and then call Read in a loop until the complete message has been received.
Please note that you can also receive more bytes than expected; in that case the last chunk contains the beginning of the next message.
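A rough sketch of that scheme (class and method names here are illustrative, not taken from the question's code): the sender writes a 4-byte length prefix followed by the payload, and the receiver reads the prefix and then loops until exactly that many payload bytes have arrived.
using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    // Sender: 4-byte length prefix (BitConverter's native byte order), then the payload.
    public static void SendMessage(NetworkStream stream, byte[] payload)
    {
        byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
        stream.Write(lengthPrefix, 0, lengthPrefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiver: read exactly 'count' bytes, looping over partial reads.
    public static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed before the full message arrived.");
            offset += read;
        }
        return buffer;
    }

    public static byte[] ReceiveMessage(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }
}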

C++/zlib/gzip compression and C# GZipStream decompression fails

I know there's a ton of questions about zlib/gzip etc but none of them quite match what I'm trying to do (or at least I haven't found it). As a quick overview, I have a C# server that decompresses incoming strings using a GZipStream. My task is to write a C++ client that will compress a string compatible with GZipStream decompression.
When I use the code below I get an error that says "The magic number in GZip header is not correct. Make sure you are passing in a GZip stream." I understand what the magic number is and everything, I just don't know how to magically set it properly.
Finally, I'm using the C++ zlib NuGet package, but I have also used the source files directly from zlib with the same bad luck.
Here's a more in-depth view:
The server's function for decompression
public static string ReadMessage(NetworkStream stream)
{
    byte[] buffer = new byte[512];
    StringBuilder messageData = new StringBuilder();
    GZipStream gzStream = new GZipStream(stream, CompressionMode.Decompress, true);
    int bytes = 0;

    while (true)
    {
        try
        {
            bytes = gzStream.Read(buffer, 0, buffer.Length);
        }
        catch (InvalidDataException ex)
        {
            Console.WriteLine($"Busted: {ex.Message}");
            return "";
        }

        // Use Decoder class to convert from bytes to Default
        // in case a character spans two buffers.
        Decoder decoder = Encoding.Default.GetDecoder();
        char[] chars = new char[decoder.GetCharCount(buffer, 0, bytes)];
        decoder.GetChars(buffer, 0, bytes, chars, 0);
        messageData.Append(chars);
        Console.WriteLine(messageData);

        // Check for EOF or an empty message.
        if (messageData.ToString().IndexOf("<EOF>", StringComparison.Ordinal) != -1)
            break;
    }

    int eof = messageData.ToString().IndexOf("<EOF>", StringComparison.Ordinal);
    string message = messageData.ToString().Substring(0, eof).Trim();
    // Returns message without ending EOF
    return message;
}
To sum it up: it accepts a NetworkStream, reads the compressed data, decompresses it, appends it to a string, and loops until it finds <EOF>, which is removed before the final decompressed string is returned. This is almost an exact match of the example on MSDN.
Here's the C++ client side code:
char* CompressString(char* message)
{
    int messageSize = sizeof(message);

    //Compress string
    z_stream zs;
    memset(&zs, 0, sizeof(zs));
    zs.zalloc = Z_NULL;
    zs.zfree = Z_NULL;
    zs.opaque = Z_NULL;
    zs.next_in = reinterpret_cast<Bytef*>(message);
    zs.avail_in = messageSize;

    int iResult = deflateInit2(&zs, Z_BEST_COMPRESSION, Z_DEFLATED, (MAX_WBITS + 16), 8, Z_DEFAULT_STRATEGY);
    if (iResult != Z_OK) zerr(iResult);

    int ret;
    char* outbuffer = new char[messageSize];
    std::string outstring;

    // retrieve the compressed bytes blockwise
    do {
        zs.next_out = reinterpret_cast<Bytef*>(outbuffer);
        zs.avail_out = sizeof(outbuffer);
        ret = deflate(&zs, Z_FINISH);
        if (outstring.size() < zs.total_out) {
            // append the block to the output string
            outstring.append(outbuffer,
                             zs.total_out - outstring.size());
        }
    } while (ret == Z_OK);

    deflateEnd(&zs);
    if (ret != Z_STREAM_END) { // an error occurred that was not EOF
        std::ostringstream oss;
        oss << "Exception during zlib compression: (" << ret << ") " << zs.msg;
        throw std::runtime_error(oss.str());
    }
    return &outstring[0u];
}
Long story short here, it accepts a string and goes through a pretty standard zlib compression with the WBITS being set to wrap it in a gzip header/footer. It then returns a char* of the compressed input. This is what is sent to the server above to be decompressed.
Thanks for any help you can give me! Also, let me know if you need any more information.
In your CompressString function you return a char* obtained from a locally declared std::string. The string will be destroyed when the function returns, which releases the memory behind the pointer you've returned.
It's likely that something else is being allocated in that memory region and overwriting your compressed data before it gets sent.
You need to ensure the memory containing the compressed data remains allocated until it has been sent. Perhaps by passing a std::string& into the function and storing it in there.
An unrelated bug: you do char* outbuffer = new char[messageSize]; but there is no call to delete[] for that buffer. This will result in a memory leak. As you're throwing exceptions from this function too I would recommend using std::unique_ptr<char[]> instead of trying to manually sort this out with your own delete[] calls. In fact I would always recommend std::unique_ptr instead of explicit calls to delete if possible.
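A minimal sketch of that shape, assuming the function is changed to take and return a std::string as suggested (the chunk size and other details here are illustrative, not from the original code):
#include <cstring>
#include <memory>
#include <sstream>
#include <stdexcept>
#include <string>
#include <zlib.h>

// Compress 'message' into a gzip-wrapped string that the caller owns.
std::string CompressString(const std::string& message)
{
    z_stream zs;
    std::memset(&zs, 0, sizeof(zs));
    if (deflateInit2(&zs, Z_BEST_COMPRESSION, Z_DEFLATED,
                     MAX_WBITS + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        throw std::runtime_error("deflateInit2 failed");

    zs.next_in = reinterpret_cast<Bytef*>(const_cast<char*>(message.data()));
    zs.avail_in = static_cast<uInt>(message.size());

    // Scratch buffer owned by a unique_ptr, so it is freed even if we throw.
    const uInt kChunk = 16384;
    std::unique_ptr<char[]> outbuffer(new char[kChunk]);

    std::string outstring;
    int ret;
    do {
        zs.next_out = reinterpret_cast<Bytef*>(outbuffer.get());
        zs.avail_out = kChunk;
        ret = deflate(&zs, Z_FINISH);
        // Bytes produced in this iteration = kChunk - remaining output space.
        outstring.append(outbuffer.get(), kChunk - zs.avail_out);
    } while (ret == Z_OK);

    deflateEnd(&zs);
    if (ret != Z_STREAM_END) {
        std::ostringstream oss;
        oss << "Exception during zlib compression: (" << ret << ")";
        throw std::runtime_error(oss.str());
    }
    // Returned by value: the compressed bytes stay valid at the call site.
    return outstring;
}
Returning by value (or filling a caller-supplied std::string&) keeps the compressed data alive at the call site, and the unique_ptr releases the scratch buffer even when an exception is thrown.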

System.OutOfMemoryException on server side for client files

I am getting data from the client and saving it to the local drive on the local host. I have checked it with a file of 221 MB, but a test with a file of 1 GB gives the following exception:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
Following is the server-side code where the exception originates.
UPDATED
Server:
public void Thread()
{
    TcpListener tcpListener = new TcpListener(ipaddr, port);
    tcpListener.Start();
    MessageBox.Show("Listening on port" + port);

    TcpClient client = new TcpClient();
    int bufferSize = 1024;
    NetworkStream netStream;
    int bytesRead = 0;
    int allBytesRead = 0;

    // Start listening
    tcpListener.Start();

    // Accept client
    client = tcpListener.AcceptTcpClient();
    netStream = client.GetStream();

    // Read length of incoming data to reserve a buffer for it
    byte[] length = new byte[4];
    bytesRead = netStream.Read(length, 0, 4);
    int dataLength = BitConverter.ToInt32(length, 0);

    // Read the data
    int bytesLeft = dataLength;
    byte[] data = new byte[dataLength];
    while (bytesLeft > 0)
    {
        int nextPacketSize = (bytesLeft > bufferSize) ? bufferSize : bytesLeft;
        bytesRead = netStream.Read(data, allBytesRead, nextPacketSize);
        allBytesRead += bytesRead;
        bytesLeft -= bytesRead;
    }

    // Save to disk
    File.WriteAllBytes(@"D:\LALA\Miscellaneous\" + shortFileName, data);

    // Clean up
    netStream.Close();
    client.Close();
}
I am getting the file size first from the client side, followed by the data.
1) Should I increase the buffer size, or is there some other technique?
2) File.WriteAllBytes() and File.ReadAllBytes() seem to block and freeze the PC. Is there an async method for this, ideally one that also reports the progress of the file received at the server side?
You don't need to read the whole thing to memory before writing it to disc. Just copy straight from the network stream to a FileStream:
byte[] length = new byte[4];
// TODO: Validate that bytesRead is 4 after this... it's unlikely but *possible*
// that you might not read the whole length in one go.
bytesRead = netStream.Read(length, 0, 4);
int bytesLeft = BitConverter.ToInt32(length, 0);

using (var output = File.Create(@"D:\Javed\Miscellaneous\" + shortFileName))
{
    netStream.CopyTo(output, bytesLeft);
}
Note that instead of calling netStream.Close() explicitly, you should use a using statement:
using (Stream netStream = ...)
{
// Read from it
}
That way the stream will be closed even if an exception is thrown.
The CLR has a per-object limit a bit short of 2 GB. That's the theory, though; in practice how much memory you can allocate depends on how much the framework will actually give you, and I wouldn't expect it to let you allocate a contiguous 1 GB array. You should allocate a smaller buffer and write the data to the disk file in chunks.
The "out of memory" exception happens because you are trying to place the entire file into memory before dumping it on disk. This is suboptimal, because you don't need the entire file in memory in order to write into the file: you can read it block-by-block in reasonably-sized increments, and write it out as you go.
Starting with .NET 4.0 you can use Stream.CopyTo method to accomplish this in a few lines of code:
// Read and ignore the initial four bytes of length from the stream
byte[] ignore = new byte[4];
int bytesRead = 0;
do {
    // This should complete in a single call, but the API requires you
    // to do it in a loop.
    bytesRead += netStream.Read(ignore, bytesRead, 4 - bytesRead);
} while (bytesRead != 4);

// Copy the rest of the stream to a file
using (var fs = new FileStream(@"D:\Javed\Miscellaneous\" + shortFileName, FileMode.Create)) {
    netStream.CopyTo(fs);
}
netStream.Close();
Starting with .NET 4.5 you can use CopyToAsync, too, which would give you a way to do reading and writing asynchronously.
Note the code that drops the initial four bytes from the stream. This is done to avoid writing the length of the stream along with the "payload" bytes. If you have control over the network protocol, you could change the sending side to stop prefixing the stream with its length, and remove the code that reads and ignores it on the receiving side.
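As a rough sketch of the async variant mentioned above (the class, method, and parameter names here are illustrative, not from the question):
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

static class FileReceiver
{
    public static async Task ReceiveFileAsync(NetworkStream netStream, string destinationPath)
    {
        // Read and discard the 4-byte length prefix, looping over partial reads.
        byte[] ignore = new byte[4];
        int bytesRead = 0;
        while (bytesRead < 4)
        {
            int n = await netStream.ReadAsync(ignore, bytesRead, 4 - bytesRead);
            if (n == 0) throw new EndOfStreamException("Connection closed early.");
            bytesRead += n;
        }

        // Stream the payload straight to disk without buffering the whole file.
        using (var fs = new FileStream(destinationPath, FileMode.Create, FileAccess.Write,
                                       FileShare.None, 81920, useAsync: true))
        {
            await netStream.CopyToAsync(fs);
        }
    }
}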

Sending data over TCP

I have a client-server situation where the client sends the data (a movie, for example) to the server, and the server saves that data to the HDD.
It sends the data in a fixed-size array of bytes. After the bytes are sent, the server asks if there is more; if yes, more is sent, and so on. Everything seems to go well and all the data gets across.
But when I try to play the movie, it can't be played, and if I look at the file length of each movie (client and server), the server's copy is bigger than the client's. Also, when I look at the console output at the end of the sending/receiving, more than 100% of the bytes have come across.
The only thing I can think of that could be wrong is that my server reads from the stream until the fixed buffer array is full and therefore ends up with more bytes than the client sent. However, if that is the problem, how can I solve it?
I've just added the two methods that do the sending, because the TCP connection itself works. Any help is welcome.
Client
public void SendData(NetworkStream nws, StreamReader sr, StreamWriter sw)
{
    using (FileStream reader = new FileStream(this.path, FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[1024];
        int currentBlockSize = 0;

        while ((currentBlockSize = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            sw.WriteLine(true.ToString());
            sw.Flush();

            string wait = sr.ReadLine();

            nws.Write(buffer, 0, buffer.Length);
            nws.Flush();

            label1.Text = sr.ReadLine();
        }

        sw.WriteLine(false.ToString());
        sw.Flush();
    }
}
Server
private void GetMovieData(NetworkStream nws, StreamReader sr, StreamWriter sw, Film filmInfo)
{
    Console.WriteLine("Adding Movie: {0}", filmInfo.Titel);
    double persentage = 0;

    string thePath = this.Path + @"\films\" + filmInfo.Titel + @"\";
    Directory.CreateDirectory(thePath);
    thePath += filmInfo.Titel + filmInfo.Extentie;

    try
    {
        byte[] buffer = new byte[1024]; //1Kb buffer
        long fileLength = filmInfo.TotalBytes;
        long totalBytes = 0;

        using (FileStream writer = new FileStream(thePath, FileMode.CreateNew, FileAccess.Write))
        {
            int currentBlockSize = 0;
            bool more;

            sw.WriteLine("DATA");
            sw.Flush();
            more = Convert.ToBoolean(sr.ReadLine());

            while (more)
            {
                sw.WriteLine("SEND");
                sw.Flush();

                currentBlockSize = nws.Read(buffer, 0, buffer.Length);
                totalBytes += currentBlockSize;
                writer.Write(buffer, 0, currentBlockSize);

                persentage = (double)totalBytes * 100.0 / fileLength;
                Console.WriteLine(persentage.ToString());

                sw.WriteLine("MORE");
                sw.Flush();
                string test = sr.ReadLine();
                more = Convert.ToBoolean(test);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}
There is a reason why Read() returns the number of bytes read: it's possible it will return less than the size of the buffer. Because of this, you should do something like nws.Write(buffer, 0, currentBlockSize); in SendData(). But this will break your protocol, because the blocks won't all have the same size anymore.
But I find it hard to believe your code actually behaves the way you describe. That's because Read() in GetMovieData() also may not fill the whole buffer. Also, StreamReader is allowed to keep some data in an internal buffer, which would mean you could read some completely bogus data.
I think code like this, where you're combining Streams and StreamReaders/StreamWriters is a really bad idea. It would be hard to make it actually correct. What you should do instead is to make your protocol completely byte-based (not character-based), even if those bytes are ASCII-encoded "SEND".
Let me give it a try, but don't shoot me if it doesn't work.
I see that you have a buffer size of 1024, regardless of how many bytes are left in the file that you send. Say you have a file of 2900 bytes, which takes three sends; on the last one there are only 852 bytes left to send. Yet you still create a buffer of 1024 and send 1024 bytes. This means that your server receives 852 bytes of real data plus 172 bytes left over from the previous block, and all 172 of those bytes are saved to the movie file on the server.
I guess there's an easy fix: When you write the data to the server, use the currentBlockSize as argument for the length. So in method SendData on the client, inside the while loop, change:
nws.Write(buffer, 0, buffer.Length);
to this:
nws.Write(buffer, 0, currentBlockSize);

C# TCP file transfer - Images semi-transferred

I am developing a TCP file transfer client-server program. At the moment I am able to send text files and other file formats perfectly fine, such as .zip with all contents intact on the server end. However, when I transfer a .gif, the end result is a .gif with the same size as the original but with only part of the image showing, as if most of the bytes were lost or not written correctly on the server end.
The client sends a 1KB header packet with the name and size of the file to the server. The server then responds with OK if ready and then creates a fileBuffer as large as the file to be sent is.
Here is some code to demonstrate my problem:
// Serverside method snippet dealing with data being sent
while (true)
{
    // Spin the data in
    if (streams[0].DataAvailable)
    {
        streams[0].Read(fileBuffer, 0, fileBuffer.Length);
        break;
    }
}

// Finished receiving file, write from buffer to created file
FileStream fs = File.Open(LOCAL_FOLDER + fileName, FileMode.CreateNew, FileAccess.Write);
fs.Write(fileBuffer, 0, fileBuffer.Length);
fs.Close();
Print("File successfully received.");

// Clientside method snippet dealing with a file send
while (true)
{
    con.Read(ackBuffer, 0, ackBuffer.Length);
    // Wait for OK response to start sending
    if (Encoding.ASCII.GetString(ackBuffer) == "OK")
    {
        // Convert file to bytes
        FileStream fs = new FileStream(inPath, FileMode.Open, FileAccess.Read);
        fileBuffer = new byte[fs.Length];
        fs.Read(fileBuffer, 0, (int)fs.Length);
        fs.Close();

        con.Write(fileBuffer, 0, fileBuffer.Length);
        con.Flush();
        break;
    }
}
I've tried a BinaryWriter instead of just using the FileStream, with the same result.
Am I incorrect in believing successful file transfer to be as simple as conversion to bytes, transportation and then conversion back to filename/type?
All help/advice much appreciated.
It's not about your image, it's about your code.
If your image bytes were lost or not written correctly, that means your file transfer code is wrong; even a .zip file or any other file you receive would end up corrupted.
It's a huge mistake to set the byte buffer length to the file size. Imagine that you're going to send a large file of about 1 GB; then it's going to take 1 GB of RAM. For an ideal transfer you should loop over the file as you send it.
Here's a way to send/receive files nicely with no size limitation.
Send File
using (FileStream fs = new FileStream(srcPath, FileMode.Open, FileAccess.Read))
{
    long fileSize = fs.Length;
    long sum = 0;          // total number of bytes sent so far
    int count = 0;
    data = new byte[1024]; // 1 KB buffer .. you might use a different size as well

    while (sum < fileSize)
    {
        count = fs.Read(data, 0, data.Length);
        network.Write(data, 0, count);
        sum += count;
    }
    network.Flush();
}
Receive File
long fileSize = // the file size that you are going to receive.
using (FileStream fs = new FileStream(destPath, FileMode.Create, FileAccess.Write))
{
    int count = 0;
    long sum = 0;              // total number of bytes received so far
    data = new byte[1024 * 8]; // 8 KB buffer .. you might use a smaller size also

    while (sum < fileSize)
    {
        if (network.DataAvailable)
        {
            count = network.Read(data, 0, data.Length);
            fs.Write(data, 0, count);
            sum += count;
        }
    }
}
happy coding :)
When you write over TCP, the data can arrive in a number of packets. I think your early tests happened to fit into one packet, but this gif file is arriving in 2 or more. So when you call Read, you'll only get what's arrived so far - you'll need to check repeatedly until you've got as many bytes as the header told you to expect.
I found Beej's guide to network programming a big help when doing some work with TCP.
As others have pointed out, the data doesn't necessarily all arrive at once, and your code is overwriting the beginning of the buffer each time through the loop. The more robust way to write your reading loop is to read as many bytes as are available and increment a counter to keep track of how many bytes have been read so far so that you know where to put them in the buffer. Something like this works well:
int totalBytesRead = 0;
int bytesRead;
do
{
    bytesRead = streams[0].Read(fileBuffer, totalBytesRead, fileBuffer.Length - totalBytesRead);
    totalBytesRead += bytesRead;
} while (bytesRead != 0);
Stream.Read will return 0 when there's no data left to read.
Doing things this way will perform better than reading a byte at a time. It also gives you a way to ensure that you read the proper number of bytes. If totalBytesRead is not equal to the number of bytes you expected when the loop is finished, then something bad happened.
Thanks for your input, tvanfosson. I tinkered around with my code and managed to get it working. The synchronization between my client and server was off. I took your advice, though, and replaced the single Read with reading a byte at a time.
