I have a C# app.
At the moment I upload JPEGs to my server, which are rendered in the client's browser to give the appearance of a video.
To save on bandwidth I am considering whether I can convert these JPEGs to the H264 video format and provide a constant stream to my server.
I have seen examples (notably on this forum) that do this using ffmpeg.
However, the examples show that the video is created and THEN can be uploaded to my server.
The only way I can see this working is to do continuous 'cut-offs' of, say, 15 seconds' worth of footage and upload each one to my server.
Is there a better way?
ADDITIONAL NOTES ABOUT MY APPLICATION
The main point of the application is to allow customers to view motion caught on their CCTV cameras - using IP cameras - via a web browser.
These images HAVE to be in MJPEG format.
The other side of the application is that the customer also wants to see live streaming on the browser as well.
Normally, to do this with a high FPS and low bandwidth usage, the H264 encoder is used. But H264 uses predictive encoding, which cannot be used for motion 'stills'.
The live streaming I have at the moment is not bad, but it will never rival H264 in either FPS or bandwidth.
So, I wanted to see if I can have 2 streams going to my server. One would save motion 'stills' to the hard drive and the other will display the live feed.
To achieve this, I presume I would have to do this 'cut-off' at an interval chosen by me - like 15 seconds.
The better option would be to never have a cut-off at all but to pipe the stream to my server.
FFMPEGServer seems to offer this but is not available for Windows OS.
I am unsure how to use FFMPEG to send a 'stream' to my server and then receive it in my server-side C# code.
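To avoid the fixed cut-off entirely, one option is to pipe each JPEG into an ffmpeg child process over stdin and let ffmpeg push a continuous H264 stream to the server. A minimal sketch, assuming the server exposes an RTMP ingest point (the rtmp:// URL, the 15 fps rate, and ffmpeg being on the PATH are my assumptions, not details from the post):

```csharp
using System.Diagnostics;

class FfmpegJpegStreamer
{
    // Starts ffmpeg reading JPEG frames from stdin, encoding them to H264 and
    // pushing a continuous stream, so no 15-second files are ever written.
    public static Process StartFfmpeg()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f image2pipe -framerate 15 -c:v mjpeg -i - " +   // JPEGs arrive on stdin
                        "-c:v libx264 -preset veryfast -tune zerolatency " +
                        "-f flv rtmp://example.com/live/camera1",          // placeholder ingest URL
            UseShellExecute = false,
            RedirectStandardInput = true
        };
        return Process.Start(psi);
    }

    // Call this for every JPEG your capture loop produces.
    public static void WriteFrame(Process ffmpeg, byte[] jpegBytes)
    {
        var stdin = ffmpeg.StandardInput.BaseStream;
        stdin.Write(jpegBytes, 0, jpegBytes.Length);
        stdin.Flush();
    }
}
```

On the server, anything that can accept RTMP/FLV (for example an nginx-rtmp endpoint or another ffmpeg process) can receive the stream; your server-side C# code then reads from that endpoint instead of from uploaded files. Closing ffmpeg's stdin ends the stream cleanly.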
Related
Is there a way to live-upload a video to a folder while I'm filming it?
What I mean is that while I'm recording a video, I want it uploaded to a folder at the same time: if I cut the recording, intentionally or unintentionally, there will already be a copy in the folder because it was uploading live.
You essentially want to ensure that if there is an issue, the video is still saved. This solution is not C#-based, but based more on web streaming protocols.
Many live streaming services offer the capability to save a video after a live stream is completed. api.video allows you to live stream (to no one, if you desire), and upon completion, a copy of your video is available (nearly instantly) for playback on the server.
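If you would rather not depend on a hosted service, ffmpeg's tee muxer can do something similar locally: it writes a file to a folder and pushes the live stream in a single run, so an interrupted recording still leaves a playable copy on disk. A rough sketch launched from C# (the capture device name, output path, and RTMP URL are placeholders; .mkv is used because an MP4 cut off mid-write is usually unplayable):

```csharp
using System.Diagnostics;

class RecordAndStream
{
    // ffmpeg's tee muxer duplicates one encode to two outputs:
    // a local .mkv backup and a live RTMP push.
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = "-f dshow -i video=\"Integrated Camera\" " +       // placeholder device
                        "-c:v libx264 -preset veryfast -map 0 -f tee " +
                        "\"C:/recordings/backup.mkv|[f=flv]rtmp://example.com/live/stream1\"",
            UseShellExecute = false
        };
        Process.Start(psi).WaitForExit();
    }
}
```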
When people share images via Messenger, those images are compressed, both by the client app on the device and by Messenger's servers, before being stored in their CDN. If Messenger is being used as a channel to a Microsoft BotFramework bot, the bot only gets a link to that stored attachment in its incoming message.
This is understandable, given the volume of images shared every day by Messenger users, and those users are essentially getting free image storage.
However, we would like to use these images for further processing - text extraction, for example. By the time we get access to these images, the compression has rendered them unusable for this purpose (20-40kb) - at least from iPhones. Android appears to be more relaxed, but at around 150kb an image this is still a serious downsizing from a 2MB original photo.
This seems like a Facebook-controlled setting - is it possible to set a more forgiving compression ratio that is applied to incoming media?
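For reference, the point where the bot first sees the image is the ContentUrl on the incoming activity's attachment, which already points at Messenger's compressed CDN copy. A minimal Bot Framework v4 sketch of that step (the text-extraction hand-off is a placeholder):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class ImageBot : ActivityHandler
{
    private static readonly HttpClient Http = new HttpClient();

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        var attachments = turnContext.Activity.Attachments;
        if (attachments == null) return;

        foreach (Attachment attachment in attachments)
        {
            if (attachment.ContentType != null && attachment.ContentType.StartsWith("image/"))
            {
                // Downloads whatever Messenger's CDN serves - typically 20-150 KB by this point.
                byte[] imageBytes = await Http.GetByteArrayAsync(attachment.ContentUrl);
                // ... hand imageBytes to the text-extraction step ...
            }
        }
    }
}
```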
I am creating an application which takes video from a camera hosted on the web, runs it through a computer vision algorithm to detect humans (written in C# using EmguCV's OpenCV wrapper) and streams the processed video to an ASP.NET client.
The process I believed would work was to have Azure Media Services create a live stream channel for the video, and somewhere in the process inject my code to process the video. The algorithm uses a SQL database for much of its decision making, and so I thought to put it in a WebJob and have it process video as it is put in storage. I would much rather process it somewhere in the Azure Media Services process, instead of using a WebJob.
My question is: is there a way to process the video as it is coming in, so that what is seen in storage is the processed video with boxes around the people (boxes placed by my algorithm, which takes a frame as input and outputs a frame)? If so, where can I put my logic to do this - in the encoder setup?
Also, if you have another way of doing it please let me know! I am open to ideas! I plan on scaling this app to use more than one camera as input, and the client should be able to switch between feeds. This is off topic from my question but is a consideration. I know it is possible to have a WebJob take the video out of storage, process it, and put it back, but the app loses the "Live" aspect then.
Technology Stack:
Azure SQL DB created
Azure Website created
Azure Media Services and Storage created
Possible Azure WebJob to handle algorithm?
Thank you so much in advance for any help!
As of now, Azure Media Services does not allow you to plug user-defined code into the processing pipeline. You can select an existing processor or use third-party encoders, which are currently offered through the Azure Marketplace.
For now (based on the requirements you have), I think you need a proxy VM that does the face recognition on the incoming stream and redirects the processed stream to an Azure Media Services live channel. An NGINX web server + ffmpeg + OpenCV could be a good combination to look into.
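To make the proxy-VM idea concrete, here is a rough sketch of the relay loop: EmguCV pulls frames from the camera, your detection code annotates them, and an ffmpeg child process (fed raw frames over stdin) re-encodes and pushes them to the live channel's ingest URL. The camera URL, ingest URL, frame rate and the detection placeholder are all assumptions on my part:

```csharp
using System.Diagnostics;
using Emgu.CV;
using Emgu.CV.Structure;

class ProxyRelay
{
    static void Main()
    {
        // Placeholder camera source; could also be a local capture device index.
        using (var camera = new VideoCapture("rtsp://camera.example.com/stream"))
        {
            Mat frame = camera.QueryFrame();
            int width = frame.Width, height = frame.Height;

            var psi = new ProcessStartInfo
            {
                FileName = "ffmpeg",
                // Raw BGR frames in, H264/FLV out to the (placeholder) ingest URL.
                Arguments = $"-f rawvideo -pix_fmt bgr24 -s {width}x{height} -r 15 -i - " +
                            "-c:v libx264 -preset veryfast -f flv " +
                            "rtmp://ingest.example.com/live/channel1",
                UseShellExecute = false,
                RedirectStandardInput = true
            };
            using (var ffmpeg = Process.Start(psi))
            {
                var pipe = ffmpeg.StandardInput.BaseStream;
                while (frame != null && !frame.IsEmpty)
                {
                    // TODO: run your existing EmguCV person detector here and
                    // draw its bounding boxes directly onto 'frame'.

                    // Assumes no row padding (true for common widths such as 640/1280/1920).
                    byte[] bgr = frame.ToImage<Bgr, byte>().Bytes;
                    pipe.Write(bgr, 0, bgr.Length);
                    frame = camera.QueryFrame();
                }
            }
        }
    }
}
```

The client then plays the live channel's normal streaming endpoint, so the "live" aspect is preserved and the archived copy already has the boxes drawn in.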
I'd like to create a client-server application in C# (WPF).
On the server side I want to capture video from a few cameras and send a preview to the client (a monitoring system).
Apart from that, the client can select one of the previews and take the audio stream from one of the cameras.
What library can I use to get streams from a few webcams simultaneously?
What about audio streaming from one of the cameras?
I don't know how to synchronize video and audio streaming.
Do you have any idea how to achieve this functionality?
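One combination to look at (my suggestion, not something from the question) is EmguCV or AForge.NET for video capture and NAudio for audio: each webcam gets its own capture object and grab loop, and the selected camera's microphone is opened separately. A sketch with EmguCV + NAudio (device indices are placeholders):

```csharp
using System;
using Emgu.CV;
using NAudio.Wave;

class MultiCameraPreview
{
    // Opens several webcams by device index; each VideoCapture runs its own
    // grab loop and raises ImageGrabbed per frame.
    public static VideoCapture[] StartCameras(int count, Action<int, Mat> onFrame)
    {
        var cameras = new VideoCapture[count];
        for (int i = 0; i < count; i++)
        {
            int index = i;                         // copy the loop variable for the lambda
            var cam = new VideoCapture(index);     // device index 0, 1, 2, ...
            cam.ImageGrabbed += (s, e) =>
            {
                var frame = new Mat();
                cam.Retrieve(frame);               // latest grabbed frame
                onFrame(index, frame);             // hand off to the WPF preview / encoder
            };
            cam.Start();
            cameras[i] = cam;
        }
        return cameras;
    }

    // Audio from the selected camera's microphone via NAudio
    // (deviceNumber is whatever Windows assigns to that camera's mic).
    public static WaveInEvent StartAudio(int deviceNumber, Action<byte[], int> onAudio)
    {
        var waveIn = new WaveInEvent { DeviceNumber = deviceNumber };
        waveIn.DataAvailable += (s, e) => onAudio(e.Buffer, e.BytesRecorded);
        waveIn.StartRecording();
        return waveIn;
    }
}
```

For rough audio/video synchronization, timestamp every frame and audio buffer as it arrives and let the client align playback on those timestamps.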
I'm working on an application that is joining two projects in two different courses in my Software Engineering degree:
SWE 490: Component Based Software Engineering
SWE 344: IP and Client Server Programming
Here's what's my application about :
Client Side (Desktop Based): Main function is to capture the webcam video and stream it to the server.
Server Side (Web Based): Main function is to receive the streamed video from the client and display it on the website in real time.
Brief Description of the application :
The users will be able to monitor their Webcams remotely by streaming their webcams output to a remote server that is accessible via the web. The system will also serve as a motion detection system (if activated by the user) to notify the users via email if any motion has been detected on their webcams. In addition the system also allows users to schedule recordings and watch them online through live streaming.
I'm preparing a proposal for the project and I've made some initial plans for the system structure that is represented below :
Client Side Components (Desktop):
Server Side Components (Web Server):
My Question:
My main issues are with the real time video streaming (sending and receiving components) as this is a new topic for me.
I know I can program a socket and send the captured video as a stream of bytes to the main server, but what I'm concerned about is how I am going to display the received stream in the web browser on the server side.
My situation is similar to this question except that it's for video streaming and not image streaming.
I've been reading some articles and it seems like it can be done using Silverlight, and I'm hoping someone can point me in the right direction.
Your opinions on the project in general are more than welcomed.
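One way to handle the display-in-the-browser concern without a plugin is to have the web server re-expose the frames it receives as an MJPEG stream (multipart/x-mixed-replace), which browsers render in a plain <img> tag. A minimal single-viewer sketch with HttpListener; GetNextJpegFrame() is a hypothetical stand-in for whatever component receives frames from the desktop client:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class MjpegServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/stream/");   // may need a URL reservation on Windows
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();          // one viewer at a time
            ctx.Response.ContentType = "multipart/x-mixed-replace; boundary=frame";
            Stream output = ctx.Response.OutputStream;
            try
            {
                while (true)
                {
                    byte[] jpeg = GetNextJpegFrame();                  // hypothetical frame source
                    string header = "--frame\r\nContent-Type: image/jpeg\r\n" +
                                    $"Content-Length: {jpeg.Length}\r\n\r\n";
                    byte[] headerBytes = Encoding.ASCII.GetBytes(header);
                    output.Write(headerBytes, 0, headerBytes.Length);
                    output.Write(jpeg, 0, jpeg.Length);
                    output.Write(Encoding.ASCII.GetBytes("\r\n"), 0, 2);
                    output.Flush();
                }
            }
            catch (Exception) { /* viewer disconnected */ }
        }
    }

    static byte[] GetNextJpegFrame()
    {
        // Placeholder: return the most recent JPEG received from the desktop client.
        throw new NotImplementedException();
    }
}
```

The page then embeds <img src="http://yourserver:8080/stream/">. This toy loop serves a single viewer; a real implementation would push frames to each connected client on its own thread.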
I know it's in VB.Net, but this article may be a useful reference guide.
P.s. you misspelled Quartz in your diagram ;)
I agree that Silverlight should probably be your first stop.
You can start here:
http://www.silverlight.net/community/samples/silverlight-samples/video-chat-35809/
[EDIT: 28/02/2014]
Okay, so this is obviously no longer valid; you can stop downvoting it already ...