How do live video or audio streams work? - C#

I am coding a server that generates HTML pages, so users can view them in their browsers.
It has an onGetRequest event, and this is the handler for it:
var req = e.Request;
var res = e.Response;
var path = req.RawUrl.Replace("%20", " ");
if (path == "/")
    path += "index.html";
if (path.Contains("/../"))
{
    res.StatusCode = (int)HttpStatusCode.Forbidden;
    return;
}
var content = this.ServerToRun.GetFile(path); // getting the file to read
if (content == null)
{
    res.StatusCode = (int)HttpStatusCode.NotFound;
    return;
}
// guard against paths without an extension (LastIndexOf returns -1)
int dot = path.LastIndexOf('.');
string extension = dot >= 0 ? path.Substring(dot) : "";
string auto_mime = PageControls.MimeTypeDeterminer.GetMimeTypeFor(extension);
if (string.IsNullOrEmpty(auto_mime))
{
    if (extension.Length > 1)
        res.ContentType = "application/" + extension.Substring(1);
    else
        res.ContentType = "application/unknown";
}
else
{
    res.ContentType = auto_mime;
}
if (path.EndsWith(".html") || path.EndsWith(".htm"))
    res.ContentEncoding = Encoding.UTF8;
res.WriteContent(content); // sending the content to the client
I don't understand what needs to be done to support live streams.
For example, I can record audio from the microphone, so the file grows every second.
I can then reference it in the HTML:
<audio>
    <source src="live.wav" type="audio/wav" />
</audio>
The server will receive the request for this file, read it to the end, and send it to the client; but right after that, live.wav will gain more chunks of sound that the server will never send to that client.
So I am stuck: how do live streams actually work, and what do I need to do?
I have a WebSocket open to every client, so I can call some scripts.

You should use the Transfer-Encoding: chunked HTTP header. It allows you to send data in chunks without specifying a Content-Length, so the client will not close the socket until the server indicates that the last chunk has been sent. See https://en.wikipedia.org/wiki/Chunked_transfer_encoding.
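A minimal sketch of a chunked response with HttpListener (assuming HttpListener is what backs your onGetRequest event; GetNextAudioChunk is a hypothetical source that blocks until newly recorded audio is available and returns null when recording stops):

using System;
using System.Net;

class ChunkedAudioServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        // Handle one request for the live stream.
        var context = listener.GetContext();
        var res = context.Response;
        res.ContentType = "audio/wav";
        res.SendChunked = true; // emits "Transfer-Encoding: chunked" and omits Content-Length

        try
        {
            byte[] chunk;
            // Each Write/Flush pair goes out to the client as one HTTP chunk.
            while ((chunk = GetNextAudioChunk()) != null)
            {
                res.OutputStream.Write(chunk, 0, chunk.Length);
                res.OutputStream.Flush();
            }
        }
        finally
        {
            res.Close(); // sends the terminating zero-length chunk
        }
    }

    // Hypothetical: blocks until the microphone has produced more data.
    static byte[] GetNextAudioChunk()
    {
        throw new NotImplementedException();
    }
}

One caveat independent of the HTTP layer: a WAV header carries fixed size fields, so endless live audio is usually served with those fields set to a placeholder value or in a more stream-friendly format.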

Related

How to convert HTTP Web Response to IFormFile in C#

I am receiving a URL from the frontend.
I need to hit this URL (an HTTP request) and then validate the content of the file: I need to check whether the URL points to an image file or not.
Upon making the HTTP request, I got the HttpWebResponse, and I am stuck here. I tried converting this web response to a byte array and then the byte array to a stream, so that I could pass the stream to the FormFile class constructor to create a file object and then validate the file content.
But upon converting to a stream, the stream object is created with some exceptions.
Passing this stream object to the FormFile constructor does give me a file, but even that file gets created with a few exceptions.
I don't know how to proceed further.
This is how I decided to validate the file:
const int imageMinimumBytes = 512;
if (file.ContentType.ToLower() != "image/jpg" &&
    file.ContentType.ToLower() != "image/jpeg" &&
    file.ContentType.ToLower() != "image/pjpeg" &&
    file.ContentType.ToLower() != "image/png")
{
    throw new IamException("Invalid image. Supported image types are JPG/JPEG/PNG");
}
if (Path.GetExtension(file.FileName).ToLower() != ".jpg"
    && Path.GetExtension(file.FileName).ToLower() != ".png"
    && Path.GetExtension(file.FileName).ToLower() != ".jpeg")
{
    throw new IamException("Invalid image. Supported image types are JPG/JPEG/PNG");
}
if (!file.OpenReadStream().CanRead)
{
    throw new IamException("Invalid image. Supported image types are JPG/JPEG/PNG");
}
byte[] buffer = new byte[imageMinimumBytes];
file.OpenReadStream().Read(buffer, 0, imageMinimumBytes);
string content = System.Text.Encoding.UTF8.GetString(buffer);
if (Regex.IsMatch(content,
    @"<script|<html|<head|<title|<body|<pre|<table|<a\s+href|<img|<plaintext|<cross\-domain\-policy",
    RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.Multiline))
{
    throw new IamException("Invalid image. Supported image types are JPG/JPEG/PNG");
}
You don't need to load the whole file into memory if you only need to validate it; you can get the content type directly from the response headers by using the HEAD request method:
var url = "https://en.wikipedia.org/wiki/Image#/media/File:Image_created_with_a_mobile_phone.png";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "HEAD"; // use the HEAD method to retrieve only headers
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
    string contentType = response.ContentType;
}
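If you're on a newer runtime where HttpWebRequest is considered legacy, the same check can be sketched with HttpClient (same URL as above):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HeadContentTypeCheck
{
    static async Task Main()
    {
        var url = "https://en.wikipedia.org/wiki/Image#/media/File:Image_created_with_a_mobile_phone.png";
        using (var client = new HttpClient())
        using (var request = new HttpRequestMessage(HttpMethod.Head, url))
        using (var response = await client.SendAsync(request))
        {
            if (response.IsSuccessStatusCode)
            {
                // Content-Type is exposed on the content headers, e.g. "image/png"
                string contentType = response.Content.Headers.ContentType?.MediaType;
                Console.WriteLine(contentType);
            }
        }
    }
}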

Response URL is returning previous Imgur image URL instead of current image URL

I've built a new Unity project that makes use of C# and System.Net to access Imgur's API. I'm able to pull images into my application from anywhere, and I can upload screenshots to Imgur through my new application's client ID. I get a decent amount of response data on a successful upload, but the URL in the response is always outdated by one: if I just uploaded Screenshot D, the URL I get back will link to Screenshot C, and so on.
This is the result I've gotten across all attempts, including using w.UploadValuesAsync() and trying both the Anonymous and OAuth2 (no callback) types of Imgur applications.
Here is the bulk of my code, which is sourced from here.
public void UploadThatScreenshot()
{
    StartCoroutine(AppScreenshotUpload());
}

IEnumerator AppScreenshotUpload()
{
    yield return new WaitForEndOfFrame();
    ScreenCapture.CaptureScreenshot(Application.persistentDataPath + filename);
    // Make sure that the file saved properly
    float startTime = Time.time;
    while (false == File.Exists(Application.persistentDataPath + filename))
    {
        if (Time.time - startTime > 5.0f)
        {
            yield break;
        }
        yield return null;
    }
    // Read the saved file back into bytes
    byte[] rawImage = File.ReadAllBytes(Application.persistentDataPath + filename);
    // Before we try uploading it to Imgur we need a server certificate validation callback
    ServicePointManager.ServerCertificateValidationCallback = MyRemoteCertificateValidationCallback;
    // Attempt to upload the image
    using (var w = new WebClient())
    {
        string clientID = "a95bb1ad4f8bcb8";
        w.Headers.Add("Authorization", "Client-ID " + clientID);
        var values = new NameValueCollection
        {
            { "image", Convert.ToBase64String(rawImage) },
            { "type", "base64" },
        };
        byte[] response = w.UploadValues("https://api.imgur.com/3/image.xml", values);
        string returnResponse = XDocument.Load(new MemoryStream(response)).ToString();
        Debug.Log(returnResponse);
    }
}
Why is the URL I get back from w.UploadValues(...) always behind by one?
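One likely cause, offered as an assumption rather than a confirmed answer: ScreenCapture.CaptureScreenshot returns before the file is actually written, so the File.Exists wait is satisfied immediately by the screenshot left over from the previous upload, and File.ReadAllBytes picks up that stale file. A minimal sketch of a guard:

// Sketch: delete any stale screenshot first, so the File.Exists wait below
// can only be satisfied by the file the current capture writes.
string fullPath = Application.persistentDataPath + filename;
if (File.Exists(fullPath))
{
    File.Delete(fullPath);
}
ScreenCapture.CaptureScreenshot(fullPath);
// ...the existing while (!File.Exists(fullPath)) loop now genuinely waits
// for the new screenshot before it is read and uploaded.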

Can't obtain Source Code of embedded ISSUU flash

First of all, what I want to do is legal (since they let you download the PDF).
I just wanted to make a faster, automatic method of downloading the PDF.
For example: http://www.lasirena.es/article/&path=10_17&ID=782
It has an embedded Flash PDF viewer, and when I download that page's source code, the link to the PDF:
http://issuu.com/lasirena/docs/af_fulleto_setembre_andorra_sense_c?e=3360093/9079351
doesn't show up; the only thing in the source code is this: 3360093/9079351
I tried to find a way to build the PDF link from it, but I can't find the name "af_fulleto_setembre_andorra_sense_c" anywhere...
I've made plenty of automatic downloads like this, but it's the first time I can't build or get the PDF link. Is it even possible?
I also tried to find JPG links, but without success. Either way (JPG or PDF) is fine...
PS: the document ID doesn't show up in the downloaded source code either.
Thank you.
I thought of a workaround for this; some might not consider it a solution, but in my case it works fine because it depends on the ISSUU publisher account.
The solution itself is making a request to the ISSUU API connected to the publisher account I'm looking for:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://api.issuu.com/query?action=issuu.documents.list" +
    "&apiKey=Insert your API key" +
    "&format=json" +
    "&documentUsername=User of the account you want to query" +
    "&pageSize=100&resultOrder=asc" +
    "&responseParams=name,documentId,pageCount" +
    "&username=Insert your ISSUU username" +
    "&token=Insert your token here");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.Accept = "application/json";
try
{
    using (WebResponse response = request.GetResponse())
    {
        var responseValue = string.Empty;
        // grab the response
        using (var responseStream = response.GetResponseStream())
        {
            using (var reader = new StreamReader(responseStream))
            {
                responseValue = reader.ReadToEnd();
            }
        }
        if (responseValue != "")
        {
            List<string> lista_linkss = new List<string>();
            JObject ApiRequest = JObject.Parse(responseValue);
            // get the JSON result objects into a list
            IList<JToken> results = ApiRequest["rsp"]["_content"]["result"]["_content"].Children()["document"].ToList();
            for (int i = 0; i < results.Count(); i++)
            {
                Folheto folheto = new Folheto();
                folheto.name = results[i]["name"].ToString();
                folheto.documentId = results[i]["documentId"].ToString();
                folheto.pageCount = Int32.Parse(results[i]["pageCount"].ToString());
                string _date = Newtonsoft.Json.JsonConvert.SerializeObject(results[i]["uploadTimestamp"], Formatting.None, new IsoDateTimeConverter() { DateTimeFormat = "yyyy-MM-dd hh:mm:ss" }).Replace(@"""", string.Empty);
                folheto.uploadTimestamp = Convert.ToDateTime(_date);
                if (!lista_nomes_Sirena.Contains(folheto.name))
                {
                    list.Add(folheto);
                }
            }
        }
    }
}
catch (WebException ex)
{
    // Handle error
}
You have to pay attention to the pageSize parameter: the maximum permitted by the API is 100, which means the maximum number of results per request is 100. Since the account I'm following has around 240 PDFs, I used this request once with resultOrder=asc and a second time with resultOrder=desc.
This allowed me to get the first 100 PDFs and the latest 100 PDFs inserted.
Since I didn't need a full history, just the PDFs they will be sending out from now on, that was good enough.
Finally, I store all the document IDs in a SQL database I made; when I start the program, I check whether each ID was already downloaded, and only download the PDF if it wasn't (see the sketch below).
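A minimal sketch of that already-downloaded check; the table and column names (DownloadedDocs, DocumentId) and the DownloadPdf helper are assumptions for illustration:

using System.Data.SqlClient;

// Returns true when the document still needs downloading, and records its id.
static bool TryMarkAsNew(string connectionString, string documentId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // Assumed schema: DownloadedDocs(DocumentId)
        using (var check = new SqlCommand(
            "SELECT COUNT(*) FROM DownloadedDocs WHERE DocumentId = @id", conn))
        {
            check.Parameters.AddWithValue("@id", documentId);
            if ((int)check.ExecuteScalar() > 0)
                return false; // already downloaded on a previous run
        }
        using (var insert = new SqlCommand(
            "INSERT INTO DownloadedDocs (DocumentId) VALUES (@id)", conn))
        {
            insert.Parameters.AddWithValue("@id", documentId);
            insert.ExecuteNonQuery();
        }
        return true;
    }
}

Inside the result loop above, something like if (TryMarkAsNew(connectionString, folheto.documentId)) DownloadPdf(folheto); then skips anything already fetched.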
I hope someone finds this workaround useful.

Get Html source of current page in C# Windows Forms App

I am working on creating an Internet Explorer add-on using BandObjects and a C# Windows Forms application, and am testing out parsing HTML source code. So far I have been parsing information based on the URL of the site.
I would like to get the HTML source of the current page of an example site I have that uses a login. If I use the URL of the page I am on, it will always grab the source of the login page rather than the actual page, because my app doesn't recognize that I am logged in. Would I need to store my login credentials for the site using some kind of API, or is there a way to grab the HTML of the current page regardless? I would prefer the latter, as it seemingly would be less trouble. Thanks!
I use this method in one of my apps:
private static string RetrieveData(string url)
{
    // used to build the entire response
    var sb = new StringBuilder();
    // used on each read operation
    var buf = new byte[8192];
    try
    {
        // prepare the web page we will be asking for
        var request = (HttpWebRequest)WebRequest.Create(url);
        /* Using the proxy class to access the site
         * Uri proxyURI = new Uri("http://proxy.com:80");
         * request.Proxy = new WebProxy(proxyURI);
         * request.Proxy.Credentials = new NetworkCredential("proxyuser", "proxypassword"); */
        // execute the request
        var response = (HttpWebResponse)request.GetResponse();
        // we will read data via the response stream
        Stream resStream = response.GetResponseStream();
        string tempString = null;
        int count = 0;
        do
        {
            // fill the buffer with data
            count = resStream.Read(buf, 0, buf.Length);
            // make sure we read some data
            if (count != 0)
            {
                // translate from bytes to ASCII text
                tempString = Encoding.ASCII.GetString(buf, 0, count);
                // continue building the string
                sb.Append(tempString);
            }
        } while (count > 0); // any more data to read?
    }
    catch (Exception exception)
    {
        MessageBox.Show(@"Failed to retrieve data from the network. Please check your internet connection: " +
            exception);
    }
    return sb.ToString();
}
You just have to pass the URL of the web page whose source you need to retrieve.
For example:
string htmlSourceGoogle = RetrieveData("http://www.google.com");
Note: you can un-comment the proxy configuration if you use a proxy to access the internet. Replace the proxy address, username, and password with the ones you use.
For logging in via code, check this: Login to website, via C#
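The general idea from that link, sketched here with an assumed login URL and assumed form-field names (username, password): POST the credentials once, keep the resulting cookies in a shared CookieContainer, and attach that container to every later request so RetrieveData-style calls see the logged-in pages:

using System;
using System.IO;
using System.Net;
using System.Text;

class LoggedInScraper
{
    // One cookie container shared by the login and all later page requests.
    static readonly CookieContainer Cookies = new CookieContainer();

    static void Main()
    {
        Login("http://example.com/login", "user", "pass"); // assumed URL and credentials
        Console.WriteLine(GetPage("http://example.com/members-only"));
    }

    static void Login(string loginUrl, string user, string pass)
    {
        var request = (HttpWebRequest)WebRequest.Create(loginUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = Cookies; // the session cookie lands here

        // Field names are assumptions; they must match the site's login form.
        byte[] body = Encoding.UTF8.GetBytes(
            "username=" + Uri.EscapeDataString(user) +
            "&password=" + Uri.EscapeDataString(pass));
        request.ContentLength = body.Length;
        using (var s = request.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (request.GetResponse()) { } // discard the response body
    }

    static string GetPage(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.CookieContainer = Cookies; // replay the session cookie
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}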

how to receive server push data in c#?

I am writing a program that receives data from a server through the HTTP protocol. The data is pushed by the server to my program.
I tried to use WebRequest, but only received one session of data.
How can I keep the connection alive, so I can receive data from the server continuously?
Any help is appreciated.
The following is from the SDK document:
Under the authorization of GUEST or ADMIN, it is possible to get a series of live images (server push). To get the images, send the request to "/liveimg.cgi?serverpush=1" as shown in Figure 2-1-1.
When the camera receives the above request from the client, it sends the response shown in Figure 2-2.
Each JPEG is separated by "--myboundary", and "image/jpeg" is returned as the "Content-Type" header after "--myboundary". The "Content-Length" header gives the number of bytes in the --myboundary data (excluding "--myboundary", each header, and the \r\n delimiters). After the "Content-Length" header and a "\r\n" delimiter, the actual data is sent.
This data transmission continues until the client stops the connection (disconnects) or some network error occurs.
int len;
string uri = @"http://192.168.0.2/liveimg.cgi?serverpush=1";
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(uri);
req.Credentials = new NetworkCredential("admin", "admin");
req.KeepAlive = true;
string line = "";
HttpWebResponse reply = (HttpWebResponse)req.GetResponse();
Stream stream = reply.GetResponseStream();
System.Diagnostics.Debug.WriteLine(reply.ContentType);
StreamReader reader = new StreamReader(stream);
do
{
    line = reader.ReadLine();
    System.Diagnostics.Debug.WriteLine(line);
    System.Threading.Thread.Sleep(300);
} while (line.Length > 0);
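For reference, a sketch of consuming the multipart format the SDK excerpt describes. The part headers are ASCII lines, but the JPEG payload is binary, so it has to be read as raw bytes using the Content-Length count; reading everything through StreamReader.ReadLine, as above, corrupts the image data and likely exits at the first blank header line. OnJpegFrame is a hypothetical frame handler:

using System;
using System.IO;
using System.Net;
using System.Text;

class MjpegClient
{
    static void Main()
    {
        var req = (HttpWebRequest)WebRequest.Create("http://192.168.0.2/liveimg.cgi?serverpush=1");
        req.Credentials = new NetworkCredential("admin", "admin");

        using (var reply = (HttpWebResponse)req.GetResponse())
        using (var stream = reply.GetResponseStream())
        {
            while (true)
            {
                // Skip blank delimiter lines, then expect the boundary marker.
                string line;
                do { line = ReadAsciiLine(stream); }
                while (line != null && line.Length == 0);
                if (line == null || !line.StartsWith("--myboundary")) break;

                // Read the part headers up to the blank line; note Content-Length.
                int contentLength = -1;
                while ((line = ReadAsciiLine(stream)) != null && line.Length > 0)
                {
                    if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                        contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
                }
                if (contentLength < 0) break;

                // Read exactly contentLength bytes of binary JPEG data.
                var jpeg = new byte[contentLength];
                int offset = 0;
                while (offset < contentLength)
                {
                    int n = stream.Read(jpeg, offset, contentLength - offset);
                    if (n == 0) return; // connection closed by the camera
                    offset += n;
                }
                OnJpegFrame(jpeg); // hypothetical: display or save the frame
            }
        }
    }

    // Reads one CRLF-terminated ASCII line from a binary stream;
    // returns null at end of stream.
    static string ReadAsciiLine(Stream s)
    {
        var sb = new StringBuilder();
        int b;
        while ((b = s.ReadByte()) != -1)
        {
            if (b == '\n') return sb.ToString();
            if (b != '\r') sb.Append((char)b);
        }
        return sb.Length > 0 ? sb.ToString() : null;
    }

    static void OnJpegFrame(byte[] jpeg)
    {
        Console.WriteLine("Received frame: " + jpeg.Length + " bytes");
    }
}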
You can keep an HTTP connection open for an extended period of time, if the server supports doing so. (As already mentioned, this will significantly limit the number of simultaneous users you can support.)
The server will need to set Response.Buffer = false and use an extended ScriptTimeout (I'm assuming you're using ASP.NET on the server side). Once you do that, your page can keep sending data with Response.Write as needed until whatever it is doing is done.
Your client will need to process the incoming response as it arrives, rather than blocking until the response is complete.
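A rough sketch of such a page on the ASP.NET Web Forms side; HasMoreUpdates and GetNextUpdate are hypothetical:

// Code-behind sketch: stream data to the client as it is produced.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Buffer = false;     // push output immediately instead of buffering it
    Server.ScriptTimeout = 3600; // let the request stay open for up to an hour

    while (HasMoreUpdates())             // hypothetical loop condition
    {
        Response.Write(GetNextUpdate()); // hypothetical data source
        Response.Flush();                // force the bytes onto the wire now
    }
}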
You may want to take a look at StreamHub Push Server - it's a popular Comet server and has a .NET client SDK which allows you to receive real-time push updates in C# (or VB / C++).
If I'm understanding you correctly, your server is going to respond to some event by sending data to your client outside of the usual request/response cycle. Is that correct? If so, I wouldn't recommend trying to keep the connection open unless you have a very small number of clients; the number of available connections is limited, so keeping them open may rapidly result in an exception.
Probably the easiest solution would be to have the clients poll periodically for new data, as in the sketch below. This would allow you to use a simple server, and you'd only have to code a thread on the client that requests any changes or new work every minute, or every thirty seconds, or whatever your optimal period is.
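A minimal polling sketch; the endpoint URL and the 30-second period are assumptions:

using System;
using System.Net;
using System.Threading;

class PollingClient
{
    static void Main()
    {
        const string url = "http://example.com/updates"; // assumed endpoint
        while (true)
        {
            try
            {
                using (var client = new WebClient())
                {
                    string data = client.DownloadString(url);
                    if (!string.IsNullOrEmpty(data))
                        Console.WriteLine(data); // hand new work to the app here
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Poll failed: " + ex.Message);
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // the poll period mentioned above
        }
    }
}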
If you truly want the server to notify the clients proactively, without polling, then you'll need something other than a simple web server, and you'll also have to code and configure the clients to accept incoming requests. That may be difficult if your clients are running behind firewalls and the like. If you go this route, WCF is probably your best choice, as it will allow you to configure server and client appropriately.
You need to get a cookie from the IP cam and include that cookie in the header of your next HttpWebRequest; otherwise it will always try to redirect you to "index.html".
Here is how you can do it...
BitmapObject is a class that serves as a container for the JPEG image, the current date, and an eventual error text. Once a connection is established, it polls an image every 200 ms. The same approach should be applicable to a continuous image stream obtained through "serverpush".
public void Connect()
{
    try
    {
        request = (HttpWebRequest)WebRequest.Create("http://192.168.0.2/index.html");
        request.Credentials = new NetworkCredential(UserName, Password);
        request.Method = "GET";
        response = (HttpWebResponse)request.GetResponse();
        WebHeaderCollection headers = response.Headers;
        Cookie = headers["Set-Cookie"]; // get the cookie
        GetImage(null);
    }
    catch (Exception ex)
    {
        BitmapObject bitmap = new BitmapObject(Properties.Resources.Off, DateTime.Now);
        bitmap.Error = ex.Message;
        onImageReady(bitmap);
    }
}

private Stream GetStream()
{
    Stream s = null;
    try
    {
        request = (HttpWebRequest)WebRequest.Create("http://192.168.0.2/liveimg.cgi");
        if (!Anonimous)
            request.Credentials = new NetworkCredential(UserName, Password);
        request.Method = "GET";
        request.KeepAlive = KeepAlive;
        request.Headers.Add(HttpRequestHeader.Cookie, Cookie);
        response = (HttpWebResponse)request.GetResponse();
        s = response.GetResponseStream();
    }
    catch (Exception ex)
    {
        BitmapObject bitmap = new BitmapObject(Properties.Resources.Off, DateTime.Now);
        bitmap.Error = ex.Message;
        onImageReady(bitmap);
    }
    return s;
}

public void GetImage(Object o)
{
    BitmapObject bitmap = null;
    stream = GetStream();
    DateTime CurrTime = DateTime.Now;
    try
    {
        bitmap = new BitmapObject(new Bitmap(stream), CurrTime);
        if (timer == null) // System.Threading.Timer
            timer = new Timer(new TimerCallback(GetImage), null, 200, 200);
    }
    catch (Exception ex)
    {
        bitmap = new BitmapObject(Properties.Resources.Off, CurrTime);
        bitmap.Error = ex.Message;
    }
    finally
    {
        stream.Flush();
        stream.Close();
    }
    onImageReady(bitmap);
}
If you are using a standard web server, it will never push anything to you; your client will have to periodically pull from it instead.
To really get server-pushed data, you have to build such a server yourself.
