I want my Android application to record live audio from the microphone and add my own binary data as image frames. I will be recording a board game, and a whole board image can be encoded in around 50 bytes. It should be possible to stream the content online, so I want to put the data into some standard media container.
I have checked various classes but can't put the pieces together.
In the MediaRecorder and AudioRecord classes I can't find any functions for adding special data or extra tracks.
In MediaMuxer, on the other hand, I can't find a way to record live; it seems to me that it only combines existing tracks.
Am I missing something in the classes mentioned above? Or can this be done with MediaCodec and MediaExtractor? I don't really understand how these should be used.
A link to existing code would be welcome; I can work through it on my own, I just don't know where to search for the answer.
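For what it's worth, MediaMuxer can in fact be fed live: you call WriteSampleData as each encoded buffer arrives, rather than combining finished tracks. A rough sketch below (C#, using the Xamarin.Android bindings; the Java API is identical apart from casing). Note the assumptions: the "application/..." metadata track requires API 26+, and names such as encoderOutputFormat stand in for values your MediaCodec drain loop would supply.

    using Android.Media;
    using Java.Nio;

    class GameRecorder
    {
        MediaMuxer muxer;
        int audioTrack, metaTrack;

        public void Begin(MediaFormat encoderOutputFormat)
        {
            muxer = new MediaMuxer("/sdcard/game.mp4", MuxerOutputType.Mpeg4);

            // Audio track: use the codec's actual output format, taken from
            // the encoder once it reports INFO_OUTPUT_FORMAT_CHANGED.
            audioTrack = muxer.AddTrack(encoderOutputFormat);

            // Custom data track: per-frame metadata, ~50 bytes per board state.
            var metaFormat = new MediaFormat();
            metaFormat.SetString(MediaFormat.KeyMime, "application/board-state");
            metaTrack = muxer.AddTrack(metaFormat);

            muxer.Start();
        }

        // Called from the encoder drain loop for each encoded audio buffer.
        public void WriteAudio(ByteBuffer buf, MediaCodec.BufferInfo info) =>
            muxer.WriteSampleData(audioTrack, buf, info);

        // Called whenever the board changes; timestamps line up with the audio.
        public void WriteBoardState(byte[] state, long ptsMicros)
        {
            var info = new MediaCodec.BufferInfo();
            info.Set(0, state.Length, ptsMicros, MediaCodecBufferFlags.KeyFrame);
            muxer.WriteSampleData(metaTrack, ByteBuffer.Wrap(state), info);
        }

        public void Finish() { muxer.Stop(); muxer.Release(); }
    }

This covers the local-file case; for live streaming you would point the same encoder output at a streaming container (fragmented MP4 or MPEG-TS via a library) instead of a file.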
I'm currently working with a Microsoft HoloLens. I want to pull some data from a database, convert it into a form I can use, visualize it in Unity, and display it on the HoloLens.
I have pretty limited coding experience and am looking to improve, but any resources and help you can give would be greatly appreciated!
So my basic understanding of how to go about this task is to:
Work out the form of the data and put it into a .csv file
Read the data from the .csv file and use it to create a graph in Unity. I don't have a graphing asset to hand, so if you can recommend one (free if possible!) let me know.
Use Unity's ability to display to the HoloLens to show this as a UI element
If this all works, I also want to incorporate a system where image processing can be used to look at something and generate the matching graph - I was thinking of QR codes or something similar.
Do you guys have any advice, pitfalls and/or resources that could help me?
Thank you!
As said in the comments, the question is very broad.
But here are some points to get you started:
You can get the data (or the file) from a Web API or any REST service.
You can process that dataset as you need in Unity (a minimal fetch-and-parse sketch follows below).
You can find a graph asset in the Unity Asset Store, send the data to it, and display it on the HoloLens.
You can use the ZXing library to read a generated QR code and display the graph as well.
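As an illustration of the first two points, a minimal sketch, assuming a hypothetical REST endpoint that returns "label,value" CSV rows (Unity 2020.2+ for UnityWebRequest.Result):

    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    public class CsvLoader : MonoBehaviour
    {
        // Hypothetical endpoint; replace with your own REST service.
        const string Url = "https://example.com/api/data.csv";

        IEnumerator Start()
        {
            using (var req = UnityWebRequest.Get(Url))
            {
                yield return req.SendWebRequest();
                if (req.result != UnityWebRequest.Result.Success)
                {
                    Debug.LogError(req.error);
                    yield break;
                }

                // Parse "label,value" rows; hand the values to whichever
                // graphing asset you end up choosing.
                foreach (var line in req.downloadHandler.text.Split('\n'))
                {
                    var cols = line.Split(',');
                    if (cols.Length < 2) continue;
                    Debug.Log($"{cols[0]} = {float.Parse(cols[1])}");
                }
            }
        }
    }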
Hope this helps.
I asked a slightly similar question previously but never got a good handle on this specific piece. Here's the setup:
Webserver: I have coded my own webserver. It works great; I have no issues here. It can handle any known mime-type (that I define) and send files accordingly. It can also do so dynamically, including information within the responses such as date/time, database results, etc. So I need no help here.
Webcam code: I have coded a windows control that allows me to select whatever webcam I have hooked up and to view a preview inside the control. So I need no help here.
Here's where the issue arises. I want to connect these two components so I can grab frames and stream them out through the web server to viewers who request video.
From my reading, I gather that I probably want some routine that takes regular "snapshots" at 5/10/20/30 frames per second and then, on the fly, pushes them through some sort of MPEG routine to convert them to MP4 and sends the resulting bytes out to the viewer.
Can someone help enlighten me on how to do this? I don't know what the request should be from the client's end (presumably HTML5) or how to handle the stream elegantly. Thanks in advance!!!
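For reference, the simplest client/server pairing here is not MP4 but MJPEG over multipart HTTP: the client is just <img src="http://host/stream">, and the server keeps the response open and writes one JPEG per frame. A minimal sketch, assuming your webserver hands you the open response stream after you have sent a "Content-Type: multipart/x-mixed-replace; boundary=frame" header:

    using System.IO;
    using System.Text;

    static class MjpegWriter
    {
        // Writes one JPEG-encoded frame in multipart/x-mixed-replace form.
        // Call this at your chosen frame rate until the client disconnects.
        public static void WriteFrame(Stream output, byte[] jpeg)
        {
            var header = Encoding.ASCII.GetBytes(
                "--frame\r\n" +
                "Content-Type: image/jpeg\r\n" +
                $"Content-Length: {jpeg.Length}\r\n\r\n");
            output.Write(header, 0, header.Length);
            output.Write(jpeg, 0, jpeg.Length);
            output.Write(Encoding.ASCII.GetBytes("\r\n"), 0, 2);
            output.Flush();
        }
    }

True MP4 streaming needs fragmented MP4 produced by an encoder library (FFmpeg, Media Foundation, or similar), which is considerably more work than the above.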
I need to develop an application that records video and plays back the recorded video while recording continues (without stopping the graph). I know how to develop with DirectShow, but I need the architecture of my graph. Which filters should I use?
I have done it with raw video using DirectShow and SampleGrabber, but I need to compress the video, and I have no idea how to do this with a compressed video file (which is not closed, because recording continues).
I think MPEG-2 recording is best for my application, but please guide me on which filters I should use.
Thanks
Have you tried the StreamBufferEngine?
Recording a file and watching it at the same time is a little problematic, because the demuxer needs to reparse the written file to learn about the new parts. I have never seen a demuxer do this.
The other big problem is file locking! The DirectShow File Writer and most similar filters lock the file for writing, so no other process can open the file for reading (illustrated below).
What you are looking for is a timeshift system. There are some third-party SDKs for this, and it can also be implemented with your own DirectShow filters, but you will need a lot of time and knowledge for that. I know it's possible because I have done it in my company's video player (utilius fairplay 5).
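The locking point is easy to reproduce outside DirectShow; in Win32 terms it is just the file's sharing mode. A small C# illustration (the file name is arbitrary):

    using System.IO;

    // A writer that opens its file with no sharing, as the stock file
    // writer effectively does, blocks every reader until the file closes.
    using (var exclusive = new FileStream("capture.mpg", FileMode.Create,
                                          FileAccess.Write, FileShare.None))
    {
        // While "exclusive" is open, this throws an IOException:
        // new FileStream("capture.mpg", FileMode.Open,
        //                FileAccess.Read, FileShare.ReadWrite);
    }

    // A custom writer filter can instead admit concurrent readers by
    // opening with a share mode that permits them:
    using (var cooperative = new FileStream("capture.mpg", FileMode.Create,
                                            FileAccess.Write, FileShare.Read))
    {
        // A reader opened with FileShare.ReadWrite now succeeds while
        // the writer is still appending.
    }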
I worked on this issue for weeks. SBE does not work well with H.264 muxed into a transport stream.
I came up with a solution; let me share it.
First, the encoder needs to have a small GOP, like 15, not 150; otherwise the sync will take time and it will be seen as a big hang.
The idea is to create a DirectShow filter. Start from the live source filter
(https://www.codeproject.com/Articles/158053/DirectShow-Filters-Development-Part-2-Live-Source)
and modify the filter so frames are not pushed in from an external program.
Add winsock2, for UDP binding or a multicast group (the receive loop is sketched below).
Start receiving the data in the live source and send it out as samples on the output pin.
I suggest always using the Elecard SDK for the setup.
On the sender you can use the NW renderer, but there you cannot change the encoder GOP,
so open GraphEdit and build the sender graph:
desktop capture -> encoder -> mux -> NW renderer.
Your new filter should know how to receive the data and display it on screen.
Once you have that working, you can then continue to add the timeshift capability to your filter.
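The receive loop inside the filter is plain winsock2; here is the same logic in C# for readability. The port, the group address, and DeliverToOutputPin are assumptions for illustration only:

    using System.Net;
    using System.Net.Sockets;

    // Bind a UDP port and join the multicast group the sender muxes to.
    var client = new UdpClient(1234);
    client.JoinMulticastGroup(IPAddress.Parse("239.0.0.1"));

    var remote = new IPEndPoint(IPAddress.Any, 0);
    while (true)
    {
        // Each datagram carries TS packets from the sender graph.
        byte[] packet = client.Receive(ref remote);
        DeliverToOutputPin(packet);
    }

    static void DeliverToOutputPin(byte[] data)
    {
        // Stand-in for the sample delivery to the output pin in the
        // real C++ filter (and, later, the copy into the ring buffer).
    }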
For the timeshift part: allocate a very big buffer, 200 MB to 1 GB, up to you (in RAM, of course).
Copy the same data that you send to the output pin into that buffer, and make it circular with read (rd) and write (wr) indexes.
You need to add an interface to your filter, with functions like:
1. GoLive
2. SetSeekPoint
The way I did it is the following:
I created a callback from the filter into the host (C#) that sends the time and the wr pointer every second or so (depending on the accuracy I need).
In the host I created a list of those two values.
So in C# I have the list of wr pointers and their times.
It is then easy to search the list and set the rd pointer back in the filter.
The filter then has two modes:
1. In live mode it sends the currently received data.
2. In seek mode it sends the data from the big buffer, following the rd pointer.
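The host-side bookkeeping is small; a sketch, assuming the filter exposes the GoLive/SetSeekPoint interface above and raises the 1 Hz progress callback (entries older than the ring buffer's span should be pruned in practice):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class TimeshiftIndex
    {
        // One entry per progress callback: wall-clock time and the wr
        // pointer (byte offset into the circular buffer) at that moment.
        readonly List<(DateTime Time, long WritePos)> entries = new();

        // Wire this to the filter's periodic callback.
        public void OnProgress(DateTime time, long writePos) =>
            entries.Add((time, writePos));

        // Find the buffer position recorded closest to the target time;
        // hand the result to the filter via SetSeekPoint.
        public long FindSeekPoint(DateTime target) =>
            entries.OrderBy(e => Math.Abs((e.Time - target).Ticks))
                   .First().WritePos;
    }

    // Usage (filter is the hypothetical interface on the DirectShow filter):
    //   filter.SetSeekPoint(index.FindSeekPoint(DateTime.Now.AddSeconds(-10)));
    //   filter.GoLive();  // jump back to the live edge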
I hope this is understandable.
I would like to emulate video input from a webcam for testing purposes.
So I need to be able to emulate a software video capture device in Windows and be able to dynamically generate its output.
How can I achieve this?
I would prefer a solution in C# or C++.
You can use a virtual webcam (old link, but there are others); it will take a video/image file and present it as a webcam device. Your system will think it's a normal device.
Then you will need to create something that generates the video/images; if you need a static image, it's pretty easy to generate a BMP (a minimal sketch follows below).
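For instance, a minimal frame generator, assuming System.Drawing on Windows; drawing the current timestamp makes each generated frame visibly different:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class TestFrames
    {
        // Draws a timestamp onto a solid background and saves it as BMP,
        // ready to be fed to the virtual webcam.
        public static void Write(string path, int width = 640, int height = 480)
        {
            using (var bmp = new Bitmap(width, height))
            using (var g = Graphics.FromImage(bmp))
            using (var font = new Font("Consolas", 24))
            {
                g.Clear(Color.DarkBlue);
                g.DrawString(DateTime.Now.ToString("HH:mm:ss.fff"),
                             font, Brushes.White, 10, 10);
                bmp.Save(path, ImageFormat.Bmp);
            }
        }
    }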
Old (no selected answer) question... actually probably one of the oldest I've ever seen... but I came across this looking for an answer myself, and I remember the days when "Virtual Webcam" still existed (it's now just a Chinese ad site).
Fear not! There are new sources to solve your decade-long quest:
First of all, check out OBS; it's open source and does a LOT with video streams:
https://obsproject.com/
Second, check out this virtual webcam plugin for it. It does exactly what you're talking about, and it uses qbeuek's suggestion of DirectShow:
https://obsproject.com/forum/resources/obs-virtualcam.949/
It is written in C++, so grabbing the bits you need and rewriting them in C# is left as an exercise for the reader, but the capability is there.
As far as I know, there is a set of COM interfaces that governs the recording and playback of audio and video in Windows. It used to be called DirectShow, though the name may have changed in the meantime. Those interfaces are used to construct a graph of audio and video filters to encode/decode the data stream.
The way to go:
- read about the Microsoft DirectShow API,
- implement a COM object that implements the video source interface.
I'm working on an ASP.NET app that allows users to upload video files. After the user uploads, I need to determine some of the attributes of the media - namely its duration/length, resolution, and codec (if possible).
What's the simplest way to approach this? Should I use the WMP SDK? That seems to involve actually instantiating the media player on the server. Is there anything in the framework to do this, or do I need to rely on an external library?
I'm not concerned about displaying or streaming the video back to the user.
There is nothing in the framework; you will need some sort of library. The best I've seen (but it has been a year or so since I looked) is taglib-sharp:
http://developer.novell.com/wiki/index.php/TagLib_Sharp
The site seems to be down right now, but I see that it was ported to Fink (for OS X) only a couple of months ago, so I assume the outage is temporary.
Oops - I just saw that you're not the first to ask a question along these lines, and I'm not the first to suggest taglib-sharp:
View/edit ID3 data for MP3 files
(note: it supports audio and video files).
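Pulling the attributes the question asks about looks like this; "video.mp4" stands in for the uploaded file's temporary path on the server:

    using System;

    class Program
    {
        static void Main()
        {
            // TagLib.File sniffs the container and exposes media properties.
            var file = TagLib.File.Create("video.mp4");

            TimeSpan duration = file.Properties.Duration;
            int width  = file.Properties.VideoWidth;
            int height = file.Properties.VideoHeight;

            Console.WriteLine($"{duration} at {width}x{height}");
            foreach (var codec in file.Properties.Codecs)
                Console.WriteLine(codec.Description);
        }
    }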
hth