I have two memory streams representing WAV files in my Windows Phone 8.1 app. I want to play them back-to-back while avoiding gaps between them. Is there any way to do that without using Sleep methods or something like that?
I've already tried Thread.Sleep(), but it introduces a lot of gaps, since my two files are only 20 ms long each.
Assuming the audio is in the same format and the wave headers have been stripped, you can simply concatenate the memory streams:
streamTwo.CopyTo(streamOne); // appends streamTwo's remaining bytes at streamOne's current position
If the wave headers are still embedded, you'd need to skip over the one in the second stream - generally 44 bytes for a canonical PCM file. If the formats are different, you'll need to find another technique.
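As a minimal sketch (assuming canonical 44-byte headers, identical formats, and that both streams contain complete files):

using System.IO;

// Build one continuous stream from two same-format WAV memory streams.
// Assumes canonical 44-byte headers; real files can carry extra chunks.
static MemoryStream ConcatenateWavStreams(MemoryStream first, MemoryStream second)
{
    var combined = new MemoryStream();
    first.Position = 0;
    first.CopyTo(combined);   // keep the first clip's header and data
    second.Position = 44;     // skip the second clip's header
    second.CopyTo(combined);  // append only the raw sample data
    combined.Position = 0;    // rewind so playback starts from the top
    return combined;
}

Note that the RIFF and data chunk sizes in the first header will now understate the combined length; many players ignore this, but strictly you should patch those two 32-bit length fields to cover the appended data.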
I'm building a WhatsApp-like app and I need to send user videos (from the camera or the gallery).
I need to send video from iOS to Android and from Android to iOS (and Windows Phone in the future).
The first thing I thought of was using camera parameters to record the video in low resolution, but that won't help with videos already stored on the phone.
My second thought was to zip the video file, but I guess that isn't enough for very large files.
Third: actually compress the video file, generating a new file, and then zip it before sending it over the network.
So this is what I need to do before actually sending the video:
1. Compress the video file, generating a new file that will play nicely on both platforms (iOS and Android).
2. Make the compression process async (as I don't want to block the UI thread for a really long time).
3. Zip it (this is the easy part, just for the record).
Any ideas or help are appreciated.
1. You would be best off using each platform's native framework so you can leverage existing hardware support for encoding (mainly H.264 hardware encoding). A PCL solution would eat too much battery, since it would have to run on the CPU alone, giving you poor performance and even worse battery life.
2. This ties in with point 1: just use your platform's native mechanism to call the framework's encoding methods asynchronously.
3. Skip this part. It will increase overhead and prevent streaming the video; there are virtually zero benefits to running a zip algorithm on top of an already-compressed video stream.
Just make sure that you end up with a cross-platform compatible video format like H.264.
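The concrete API differs per platform (e.g. AVAssetExportSession on iOS, MediaCodec/MediaMuxer on Android). As a rough sketch for the Windows Phone/WinRT side mentioned in the question, Windows.Media.Transcoding.MediaTranscoder re-encodes to an H.264 MP4 asynchronously, using hardware acceleration where available:

using System.Threading.Tasks;
using Windows.Media.MediaProperties;
using Windows.Media.Transcoding;
using Windows.Storage;

// Re-encode 'source' into a 720p H.264 MP4 without blocking the UI thread.
// 'source' and 'destination' are StorageFiles obtained elsewhere in the app.
async Task TranscodeToMp4Async(StorageFile source, StorageFile destination)
{
    var profile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);
    var transcoder = new MediaTranscoder();

    PrepareTranscodeResult prepared =
        await transcoder.PrepareFileTranscodeAsync(source, destination, profile);

    if (prepared.CanTranscode)
        await prepared.TranscodeAsync();  // runs off the UI thread
}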
I've written a program to save all the depth frames from the Kinect depth stream in OpenNI, based on the SimpleViewer sample. The problem is that not all the frames are saved! I ran my program for 10 seconds and only around 20 images were saved, although the application is set to 30 fps!
Could anyone please advise?
My colleague uses a two-phase extraction. First, write the images in binary, to avoid losing time on encoding or conversions (you can use System.IO.FileStream and BinaryWriter for that). Then, in another program, read the binary files back to get raw depth or color images. You can use Matlab, OpenCV, or another utility for this second part.
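A minimal sketch of the first phase, assuming the depth pixels arrive as a ushort[] from your OpenNI update callback (the file-name pattern is just an example):

using System;
using System.IO;

// Phase 1: dump the raw depth pixels with no encoding or conversion.
static void SaveRawDepthFrame(ushort[] depthPixels, int frameIndex)
{
    byte[] buffer = new byte[depthPixels.Length * sizeof(ushort)];
    Buffer.BlockCopy(depthPixels, 0, buffer, 0, buffer.Length);  // fast block copy, no per-pixel work

    using (var fs = new FileStream("depth_" + frameIndex.ToString("D5") + ".raw",
                                   FileMode.Create, FileAccess.Write))
    using (var writer = new BinaryWriter(fs))
    {
        writer.Write(buffer);  // 2 bytes per pixel, written in one call
    }
}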
But keep in mind that even this approach may cause some skipped/dropped frames. Personally, I've never managed to sustain a constant 30 fps for a long period.
My requirement is to create an application that records desktop activity, along with audio, as a movie. After searching, I found that Microsoft Expression Encoder can be used to record desktop activity as a movie, but the output file is very large: 10 seconds of video takes around 30 to 40 MB. It also uses the .xesc format.
Is there any other free API available to do this job?
Before you give up on Expression Encoder, try adjusting:
ScreenCaptureJob.ScreenCaptureVideoProfile.Quality
Reducing the quality can greatly reduce the file size. Try it and see if the results are acceptable for you.
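Something along these lines (a sketch; I'm assuming Quality takes a 0-100 value, and the output path is just an example):

using Microsoft.Expression.Encoder.ScreenCapture;

var job = new ScreenCaptureJob();
job.OutputScreenCaptureFileName = @"C:\captures\demo.xesc";  // example path
job.ScreenCaptureVideoProfile.Quality = 50;  // lower value -> smaller files
job.Start();
// ... capture for a while ...
job.Stop();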
Reducing the framerate is actually unhelpful; I guess it forces a fixed framerate, whereas the default is to use a variable framerate based on activity.
If you don't like .xesc files you can transcode the video after you've captured it.
But 30 to 40 MB for ten seconds is still far too much. I recorded ten seconds of (admittedly not very large, 1366x768) full-screen video at the default quality. With not much going on it took 300 KB; with lots of activity (constantly switching between full-screen apps) it took at most 1.5 MB.
Reducing the quality cut those file sizes by about 50%.
Unless you're playing a full-screen video and trying to record that, you shouldn't see anything like 30 to 40 MB. Perhaps you should also look at your audio settings.
ScreenRecorderLib from NuGet is good.
SharpAVI was taking up too much of my disk space.
Be careful with ScreenRecorderLib: it needs some time at the end to finish saving the mp4 file.
Make sure your program doesn't exit before that happens.
I use FileInfo.Length to check whether the file size has stopped growing; that tells me whether the saving has finished.
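A sketch of that check (the poll interval is an arbitrary choice):

using System.IO;
using System.Threading;

// Block until the recording file stops growing, i.e. the library has
// presumably finished flushing the mp4 to disk.
static void WaitUntilSaved(string outputPath)
{
    long previousLength = -1;
    while (true)
    {
        long currentLength = new FileInfo(outputPath).Length;
        if (currentLength == previousLength)
            break;                 // no growth since the last check
        previousLength = currentLength;
        Thread.Sleep(500);         // give the writer time to append more data
    }
}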
I need to play back 30 second audio clips, 1 per second, in WinForms .NET.
I'm currently loading and playing the WAV files from the filesystem, which works fine on a notebook but is causing problems on a netbook. Can I pre-load all the sound files into memory, and if so, how?
If you use SoundPlayer to play your files, you can preload each file with SoundPlayer.Load:
SoundPlayer sp = new SoundPlayer("filename");
sp.Load(); // preload
sp.Play();
Edit:
As noted in the documentation, you can also use SoundPlayer.LoadAsync to load the sound in the background.
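For example (a sketch; "filename" is a placeholder as above):

var sp = new SoundPlayer("filename");
sp.LoadCompleted += (s, e) => sp.Play();  // fires once the background load finishes
sp.LoadAsync();                           // returns immediately; loads on a worker thread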
I'm inclined to say you could load the file into a System.IO.MemoryStream of some sort. Hopefully the library that plays your file will take a MemoryStream, or the MemoryStream can be converted into the data structure that the library takes.
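In the SoundPlayer case this works directly, since it has a constructor that accepts a Stream ("clip.wav" is a placeholder name):

using System.IO;
using System.Media;

byte[] wavBytes = File.ReadAllBytes("clip.wav");      // hit the filesystem once, up front
var sp = new SoundPlayer(new MemoryStream(wavBytes)); // play from memory from then on
sp.Load();
sp.Play();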
Here's a recent example that creates a .wav file (a sine wave) in memory entirely from scratch and plays it. What you're trying to do should be much simpler, and you should be able to derive it from the posted sample:
Real low level sound generation in C#?
I'm trying to build a "remote desktop viewer".
For this I need to send the user's desktop - and that's a lot of information for sockets... (especially if the resolution is high; a raw frame can approach 5.3 MB at 1680x1050).
So I started compressing with a GZip stream, and the 5.3 MB became 500 KB. Then I added my own compression algorithm (I think it's called RLE): quantize neighbouring pixels to 256 >> 3 = 32 levels per channel (red, green, and blue) and write how many pixels in a row share the same color, then GZip the result.
That brought the average compressed size down to 60-65 KB, up to 200 KB, and it can even be under 5,000 if the screen is totally white.
Now I'm thinking (I haven't implemented it yet) about sending only the difference between consecutive frames: for each line, write where the difference between the pixels starts and how long the differing run is.
Well, that could help - maybe I'd get around 30 KB per frame on average, but for sockets that's still a lot.
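A rough sketch of that row-diff idea (the pixel representation and output framing are assumptions):

using System.IO;

// For each row, emit (row, startColumn, runLength, changedPixels) for every
// run where the current frame differs from the previous one.
static void WriteRowDiffs(int[,] previous, int[,] current, BinaryWriter output)
{
    int rows = current.GetLength(0), cols = current.GetLength(1);
    for (int y = 0; y < rows; y++)
    {
        int x = 0;
        while (x < cols)
        {
            if (previous[y, x] == current[y, x]) { x++; continue; }
            int start = x;
            while (x < cols && previous[y, x] != current[y, x]) x++;  // extend the changed run
            output.Write(y);
            output.Write(start);
            output.Write(x - start);
            for (int i = start; i < x; i++)
                output.Write(current[y, i]);  // only the changed pixels
        }
    }
}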
Has anyone ever managed to solve this problem? (And how, of course...)
There are standard algorithms for compressing images, e.g. JPEG.
A further optimization is to use knowledge about the image: for example, on a desktop, items like the Windows Start button, application icons, and title-bar widgets are standard, so instead of sending their pixel values you can send their logical identifiers.
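For instance, with System.Drawing you can JPEG-encode a captured frame into memory before it goes over the socket (the quality value of 40 is an arbitrary example):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

// Compress one captured frame as JPEG and return the bytes to send.
static byte[] EncodeFrameAsJpeg(Bitmap frame, long quality)
{
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.MimeType == "image/jpeg");

    using (var parameters = new EncoderParameters(1))
    using (var ms = new MemoryStream())
    {
        parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
        frame.Save(ms, jpegCodec, parameters);
        return ms.ToArray();
    }
}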
Yes, people have succeeded with this problem: the people who write remote desktop software, including the open-source VNC.
You may wish to review the source code of a VNC implementation.
Most VNC servers implement several different forms of compression.