I have been working from the answers to "How to write NAudio WaveStream to a Memory Stream?", but I am still stuck on my problem.
I want to take two WAV files, fetched from a database as byte arrays, concatenate them, play the result, and then dispose of it.
This is my code:
public static void Play()
{
List<byte[]> audio = dal.SelectSound("خدمات", "احیاء");
byte[] sound = new byte[audio[0].Length + audio[1].Length];
Stream outputSound = Concatenate(sound, audio);
try
{
WaveFileReader wavFileReader = new WaveFileReader(outputSound);
var waveOut = new WaveOut(); // or WaveOutEvent()
waveOut.Init(wavFileReader);
waveOut.Play();
}
catch (Exception ex)
{
Logs.ErrorLogEntry(ex);
}
}
public static Stream Concatenate(byte[] outputFile, List<byte[]> sourceFiles)
{
byte[] buffer = new byte[1024];
Stream streamWriter = new MemoryStream(outputFile);
try
{
foreach (byte[] sourceFile in sourceFiles)
{
Stream streamReader = new MemoryStream(sourceFile);
using (WaveFileReader reader = new WaveFileReader(streamReader))
{
int read;
while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
{
streamWriter.Write(buffer, 0, read);
}
}
}
}
catch (Exception ex)
{
Logs.ErrorLogEntry(ex);
}
return streamWriter;
}
But I get this error:
Not a WAVE file - no RIFF header
after executing this line:
WaveFileReader wavFileReader = new WaveFileReader(outputSound);
Thanks in advance.
WAV files are not simply an array of bytes: each WAV file has a 44-byte header (the RIFF header) that tells any software how to play it back. Among other things, this header contains the length of the file, so by concatenating two WAV files the way you're doing it, you end up with two big problems. First, the first WAV file still has its old header at the start, which tells your software that the file is shorter than it actually is; second, the header of the second WAV file ends up stuck in the middle of your new file, which will probably sound very strange if you play it back!
So when you're concatenating your files, you'll need to do the following (a sketch follows the list):
1. Remove the first 44 bytes of each file.
2. Concatenate the two byte arrays.
3. Create a new header according to the RIFF header specification.
4. Put this header at the front of your concatenated byte arrays.
5. Call WaveFileReader wavFileReader = new WaveFileReader(outputSound); as before.
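Alternatively, you can let NAudio build the header for you by piping each source through a WaveFileReader into a single WaveFileWriter. This is only a sketch, and it assumes both inputs share the same WaveFormat; IgnoreDisposeStream (from NAudio.Utils) keeps the MemoryStream open after the writer is disposed, and disposing the writer is what patches the RIFF length fields:

using System.Collections.Generic;
using System.IO;
using NAudio.Utils;
using NAudio.Wave;

public static Stream Concatenate(List<byte[]> sourceFiles)
{
    var output = new MemoryStream();
    WaveFileWriter writer = null;
    byte[] buffer = new byte[1024];
    try
    {
        foreach (byte[] sourceFile in sourceFiles)
        {
            using (var reader = new WaveFileReader(new MemoryStream(sourceFile)))
            {
                // the first file decides the format; all inputs must match it
                if (writer == null)
                {
                    writer = new WaveFileWriter(new IgnoreDisposeStream(output), reader.WaveFormat);
                }
                int read;
                while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                {
                    writer.Write(buffer, 0, read);
                }
            }
        }
    }
    finally
    {
        // disposing the writer finalizes the RIFF header;
        // IgnoreDisposeStream stops it from closing the MemoryStream
        if (writer != null) writer.Dispose();
    }
    output.Position = 0; // rewind so WaveFileReader starts at the RIFF header
    return output;
}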
I have a method that takes a web-based image, converts it into a byte array and then passes that to a CDN as an image.
I have now used this successfully hundreds of times; however, while migrating a particular set of images, I've noticed that these JPEG files are being identified as .png files, and when they arrive at the CDN they are blank.
Using some code that I copied from the web, I am able to identify the file extension from the image byte array after it is built.
So it's the conversion from the original image to a byte array that is mysteriously changing the file type.
This is my method:
public byte[] Get(string fullPath)
{
byte[] imageBytes = { };
var imageRequest = (HttpWebRequest)WebRequest.Create(fullPath);
var imageResponse = imageRequest.GetResponse();
var responseStream = imageResponse.GetResponseStream();
if (responseStream != null)
{
using (var br = new BinaryReader(responseStream))
{
imageBytes = br.ReadBytes(500000);
br.Close();
}
responseStream.Close();
}
imageResponse.Close();
return imageBytes;
}
I have also tried converting this to use MemoryStream instead.
I'm not sure what else I can do to ensure that this identifies the correct file type.
Edit
I have now updated the number of allowed bytes which has resulted in viable images.
However the issue with the JPEG files being altered to PNG is still ongoing.
It's only this selection of images that are affected.
These images were saved in an old CMS system so I do wonder if the way that they were saved is the cause?
Up to now, the code only reads 500,000 bytes for each file. If the file is larger than that, the end is truncated and the content is not valid anymore. In order to read all bytes, you can use the following code:
public byte[] Get(string fullPath)
{
List<byte> imageBytes = new List<byte>(500000);
var imageRequest = (HttpWebRequest)WebRequest.Create(fullPath);
using (var imageResponse = imageRequest.GetResponse())
{
using (var responseStream = imageResponse.GetResponseStream())
{
using (var br = new BinaryReader(responseStream))
{
var buffer = new byte[500000];
int bytesRead;
while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
{
// append only the bytes actually read in this chunk (Take requires System.Linq)
imageBytes.AddRange(buffer.Take(bytesRead));
}
}
}
}
return imageBytes.ToArray();
}
The sample above reads the data in chunks of 500,000 bytes; for most of your files one chunk should be enough. If a file is larger, the code keeps reading chunks until there are no more bytes, and all the chunks are assembled in a list.
This ensures that all the bytes are read, even if the content is larger than 500,000 bytes.
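If you are on .NET 4 or later, a shorter equivalent (just a sketch, not the code above) is to let Stream.CopyTo drain the response into a MemoryStream, which removes the fixed byte limit entirely:

public byte[] Get(string fullPath)
{
    var imageRequest = (HttpWebRequest)WebRequest.Create(fullPath);
    using (var imageResponse = imageRequest.GetResponse())
    using (var responseStream = imageResponse.GetResponseStream())
    using (var memory = new MemoryStream())
    {
        // CopyTo keeps reading until the response stream is exhausted,
        // so large images are no longer truncated
        responseStream.CopyTo(memory);
        return memory.ToArray();
    }
}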
I have implemented a POC to read an entire file's content into a byte[] array. I can successfully read files below 100 MB, but when I load a file larger than 100 MB it throws
Convert.ToBase64String(mybytearray) Cannot obtain value of the
local variable or argument because there is not enough memory
available.
Below is the code I have tried for reading file content into a byte array:
var sFile = fileName;
var mybytearray = File.ReadAllBytes(sFile);
var binaryModel = new BinaryModel
{
fileName = binaryFile.FileName,
binaryData = Convert.ToBase64String(mybytearray),
filePath = string.Empty
};
My model class is as below
public class BinaryModel
{
public string fileName { get; set; }
public string binaryData { get; set; }
public string filePath { get; set; }
}
I am getting "Convert.ToBase64String(mybytearray) Cannot obtain value of the local variable or argument because there is not enough memory available." this error at Convert.ToBase64String(mybytearray).
Is there anything which I need to take care to prevent this error?
Note: I do not want to add line breaks to my file content
To save memory you can convert the stream of bytes in packs of three. Every three input bytes produce four Base64 characters, so you never need the whole file in memory at once.
Here is the pseudocode:
Repeat until the input is exhausted:
1. Read up to 3 bytes from the input stream.
2. Convert them to Base64 and write the result to the output stream.
And simple implementation:
using (var inStream = File.OpenRead("E:\\Temp\\File.xml"))
using (var outStream = File.CreateText("E:\\Temp\\File.base64"))
{
var buffer = new byte[3];
int read;
while ((read = inStream.Read(buffer, 0, 3)) > 0)
{
var base64 = Convert.ToBase64String(buffer, 0, read);
outStream.Write(base64);
}
}
Hint: any multiple of 3 works as the buffer size. A larger buffer uses more memory but performs better; a smaller one uses less memory but performs worse.
Additional info:
The file stream is just an example. For the output, use [HttpContext].Response.OutputStream and write directly to it. Processing hundreds of megabytes in one chunk will kill you and your server.
Think about the total memory requirements: 100 MB of file data becomes roughly 133 MB as a Base64 string, and since you put it in a model I expect another copy of those 133 MB in the response. And that is just a single request; a few such requests could drain your memory.
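As a side note (my addition, not part of the answer above), the framework can also do this streaming conversion for you: System.Security.Cryptography.ToBase64Transform wrapped in a CryptoStream emits Base64 without line breaks. A minimal sketch, with placeholder paths:

using (var inStream = File.OpenRead("E:\\Temp\\File.xml"))
using (var outStream = File.Create("E:\\Temp\\File.base64"))
using (var base64Stream = new CryptoStream(outStream, new ToBase64Transform(), CryptoStreamMode.Write))
{
    // bytes are converted in small blocks as they flow through the CryptoStream,
    // so the whole file is never held in memory
    inStream.CopyTo(base64Stream);
}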
I would use two filestreams - one to read the large file, one to write the result back out.
So in chunks you would convert to base 64 ... then convert the resulting string to bytes ... and write.
private static void ConvertLargeFileToBase64()
{
    // buffer length is a multiple of 3 so that only the very last chunk
    // can introduce Base64 padding ('=') characters
    var buffer = new byte[3 * 16 * 1024];
    using (var fsIn = new FileStream("D:\\in.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    {
        using (var fsOut = new FileStream("D:\\out.txt", FileMode.CreateNew, FileAccess.Write))
        {
            int read;
            while ((read = fsIn.Read(buffer, 0, buffer.Length)) > 0)
            {
                // convert only the bytes actually read, then write the ASCII form of the Base64 text
                var b64 = Encoding.ASCII.GetBytes(Convert.ToBase64String(buffer, 0, read));
                fsOut.Write(b64, 0, b64.Length);
            }
        }
    }
}
I need to use the text data in another program when someone prints a file.
I have a basic understanding of TCP/IP client and listener programming.
I can already send and receive .txt files between two machines.
But how do I receive the file contents if the files are in docx, xlsx, pdf or any other format?
My requirement is:
I want to use the contents (text) of a file in another program when someone prints a file.
Please suggest alternative ways to do this if there are any.
Thanks in advance.
Since you haven't posted any code, I'll write the code part "my way", but you should have a good idea of the approach after reading this.
First, on both ends (client and server) you should agree on a unified protocol that describes what data you're sending. An example could be:
[3 bytes - ASCII extension][4 bytes - lengthOfTheFile][X bytes - fileContents]
Then in your receiver you read data according to the protocol: first read 3 bytes to decide which format the file has, then read 4 bytes that tell you how large the incoming file is, and finally read the content and write it directly to a file. An example receiver could look like this:
byte[] extensionBuffer = new byte[3];
if (3 != networkStream.Read(extensionBuffer, 0, 3))
    return;
string extension = Encoding.ASCII.GetString(extensionBuffer);
byte[] lengthBuffer = new byte[sizeof(int)];
// the length field is 4 bytes, so read sizeof(int) bytes here
if (sizeof(int) != networkStream.Read(lengthBuffer, 0, sizeof(int)))
    return;
int length = BitConverter.ToInt32(lengthBuffer, 0);
int recv = 0;
using (FileStream stream = File.Create(nameOfTheFile + "." + extension))
{
    var buffer = new byte[4096];
    // keep reading until exactly 'length' bytes have arrived; binary files
    // can legitimately contain 0x00, so a sentinel byte cannot be used here
    while (recv < length)
    {
        int read = networkStream.Read(buffer, 0, Math.Min(buffer.Length, length - recv));
        if (read == 0)
            break; // connection closed early
        stream.Write(buffer, 0, read);
        recv += read;
    }
    stream.Flush();
}
On the sender side, you read the file extension, open a file stream, get the length of the stream, send the extension and the length, and then "redirect" each byte from the FileStream into the NetworkStream. It can look something like this:
FileInfo meFile = //.. get the file
// TrimStart removes the leading '.' so the extension fits the protocol's 3-byte field
byte[] extBytes = Encoding.ASCII.GetBytes(meFile.Extension.TrimStart('.'));
using (FileStream stream = meFile.OpenRead())
{
    networkStream.Write(extBytes, 0, extBytes.Length);
    byte[] lengthBytes = BitConverter.GetBytes((int)stream.Length);
    networkStream.Write(lengthBytes, 0, lengthBytes.Length);
    while (stream.Position < stream.Length)
    {
        networkStream.WriteByte((byte)stream.ReadByte());
    }
}
This approach is fairly easy to implement and doesn't require big changes if you want to send different file types. It lacks some validation, but I don't think you require that functionality.
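The snippets above assume an already connected networkStream. Purely as an illustrative sketch (the port number and address are placeholders, not part of the answer), one way to obtain that stream on each side with TcpListener/TcpClient is:

// receiver side: accept one connection and use its stream with the reading code above
var listener = new TcpListener(IPAddress.Any, 9000);
listener.Start();
using (TcpClient client = listener.AcceptTcpClient())
using (NetworkStream networkStream = client.GetStream())
{
    // ... read extension, length and file contents here ...
}

// sender side: connect to the receiver and use its stream with the writing code above
using (var client = new TcpClient("127.0.0.1", 9000))
using (NetworkStream networkStream = client.GetStream())
{
    // ... write extension, length and file contents here ...
}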
I am trying to convert MP3 audio files to WAV with a standard format (48 kHz, 16-bit, 2 channels) by opening them with "MediaFoundationReaderRT" and specifying the standard settings in it.
After the file is converted to PCM WAV, when I try to play the WAV file, it gives corrupt output:
Option 1 -
WaveStream activeStream = new MediaFoundationReaderRT([Open "MyFile.mp3"]);
WaveChannel32 waveformInputStream = new WaveChannel32(activeStream);
waveformInputStream.Sample += inputStream_Sample;
I noticed that if I read the audio data into a memory stream (wherein it appends the WAV header via "WaveFileWriter"), then things work fine:
Option 2 -
WaveStream activeStream = new MediaFoundationReaderRT([Open "MyFile.mp3"]);
MemoryStream memStr = new MemoryStream();
byte[] audioData = new byte[activeStream.Length];
int bytesRead = activeStream.Read(audioData, 0, audioData.Length);
memStr.Write(audioData, 0, bytesRead);
WaveFileWriter.CreateWaveFile(memStr, audioData);
RawSourceWaveStream rawSrcWavStr = new RawSourceWaveStream(activeStream,
new WaveFormat(48000, 16, 2));
WaveChannel32 waveformInputStream = new WaveChannel32(rawSrcWavStr);
waveformInputStream.Sample += inputStream_Sample;
However, reading the whole audio into memory is time-consuming. Hence I am looking at "Option 1" as noted above.
I am trying to figure out what exactly the issue is. Is it the missing WAV header that is causing the problem?
Is there a way in "Option 1" where I can append the WAV header to the "current playing" sample data, instead of converting the whole audio data into memory stream and then appending the header?
I'm not quite sure why you need either of those options. Converting an MP3 file to WAV is quite simple with NAudio:
using(var reader = new MediaFoundationReader("input.mp3"))
{
WaveFileWriter.CreateWaveFile("output.wav", reader);
}
And if you don't need to create a WAV file, then your job is already done: MediaFoundationReader already returns PCM from its Read method, so you can play it directly.
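For completeness, here is a minimal playback sketch (my addition, not the answer's code), assuming NAudio's MediaFoundationReader and WaveOutEvent:

using (var reader = new MediaFoundationReader("input.mp3"))
using (var output = new WaveOutEvent())
{
    output.Init(reader);
    output.Play();
    // block until playback finishes; a real app would use the PlaybackStopped event instead
    while (output.PlaybackState == PlaybackState.Playing)
    {
        Thread.Sleep(200);
    }
}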
I am receiving a stream of data from a web service and trying to save the contents of the stream to a file. The stream contains standard lines of text alongside large chunks of XML data (each on a single line). The size of the file is about 800 MB.
Problem: I receive an out-of-memory exception when I process the XML section of each line.
==start file
line 1
line 2
<?xml version=.....huge line etc</xml>
line 3
line4
<?xml version=.....huge line etc</xml>
==end file
Here is the current code; as you can see, when it reads in the huge XML line the memory usage spikes.
string readLine;
using (StreamReader reader = new StreamReader(downloadStream))
{
while ((readLine = reader.ReadLine()) != null)
{
streamWriter.WriteLine(readLine); //writes to file
}
}
I was trying to think of a solution where I used both a TextReader/StreamReader and XmlTextReader in combination to process each section. As I get to the xml section I could switch to the XmlTextReader and use the Read() method to read each node thus stopping the memory spike.
Any suggestions on how I could do this? Alternatively, I could create a custom XmlTextReader that was able to read in these lines? Any pointers for this?
Updated
A further problem is that I need to read this file back in and split out the two XML sections into separate XML files! I converted the solution to write the file using a binary writer and then started to read the file back in using a binary reader. I have text processing to detect the start of each XML section, and specifically which XML section it is, so I can map it to the correct file! However, this causes problems when reading the binary file back in and doing the detection...
using (BinaryReader reader = new BinaryReader(savedFileStream))
{
while ((streamLine = reader.ReadString()) != null)
{
if (streamLine.StartsWith("<?xml version=\"1.0\" ?><tag1"))
//xml file 1
else if (streamLine.StartsWith("<?xml version=\"1.0\" ?><tag2"))
//xml file 2
}
}
XML may contain all of its content on one single line, so you'd probably be better off using a binary reader/writer, where you can decide on the read/write size yourself.
An example is below; here we read BUFFER_SIZE bytes on each iteration:
Stream s = new MemoryStream();
Stream outputStream = new MemoryStream();
int BUFFER_SIZE = 1024;
using (BinaryReader reader = new BinaryReader(s))
{
BinaryWriter writer = new BinaryWriter(outputStream);
byte[] buffer = new byte[BUFFER_SIZE];
int read = buffer.Length;
while(read != 0)
{
read = reader.Read(buffer, 0, BUFFER_SIZE);
writer.Write(buffer, 0, read);
}
writer.Flush();
writer.Close();
}
I don't know if this causes you problems with encodings etc, but I think you will have to read the file as binary.
If all you want to do is copy one stream to another without modifying the data, you don't need the Stream text or binary helpers (StreamReader, StreamWriter, BinaryReader, BinaryWriter, etc.); simply copy the stream.
internal static class StreamExtensions
{
public static void CopyTo(this Stream readStream, Stream writeStream)
{
byte[] buffer = new byte[4096];
int read;
while ((read = readStream.Read(buffer, 0, buffer.Length)) > 0)
writeStream.Write(buffer, 0, read);
}
}
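A hypothetical usage example (the file name is a placeholder): copy the incoming web-service stream straight to a file without ever materialising it in memory:

using (var fileStream = File.Create("download.dat"))
{
    // downloadStream is the stream received from the web service in the question
    downloadStream.CopyTo(fileStream);
}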
I think there is a memory leak.
Are you getting the out-of-memory exception after processing a few lines, or on the first line itself?
Also, there is no streamWriter.Flush() inside the while loop.
Don't you think there should be one?
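To make that suggestion concrete, here is only a sketch of the question's loop with the flush added (streamWriter is assumed to wrap the output file, as in the question); note that it addresses buffered output, not the cost of ReadLine on the huge XML lines:

string readLine;
using (var reader = new StreamReader(downloadStream))
{
    while ((readLine = reader.ReadLine()) != null)
    {
        streamWriter.WriteLine(readLine); // write the line to the output file
        streamWriter.Flush();             // release the writer's buffer after each line
    }
}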