MJPEG Video Stream from a Security Camera using PHP and cURL - php

I have multiple self-hosted network security cameras (Axis 2100, they are pretty old) that I want to broadcast to a website. The cameras support live streaming in MJPG format, but in order to stream the video from these cameras, I must make them public. For security reasons, I want to restrict viewing of these cameras to my website, where I can authenticate the users. Since the webcams are on a separate host, I'm using cURL and PHP to log in to the cameras, get the MJPG image stream, and echo the live stream back to be displayed on the webpage.
// Tell the browser this is a multipart MJPEG stream
header('Content-Type: multipart/x-mixed-replace; boundary=--myboundary');
// Flush and disable all output buffering so frames go out as they arrive
while (@ob_end_clean());

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://XX.XXX.XXX.XX/axis-cgi/mjpg/video.cgi?resolution=320x240');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY | CURLAUTH_ANYSAFE);
curl_setopt($ch, CURLOPT_USERPWD, 'USERNAME:PASSWORD');
// CURLOPT_RETURNTRANSFER is not set, so curl_exec() writes the camera's
// stream straight to the output; its return value is only TRUE/FALSE
curl_exec($ch);
curl_close($ch);
The problem is that when I open multiple sessions of the same PHP page (with the code above) in the same browser, only one session gets loaded while the rest remain blank and eventually display a 500 Internal Server Error. It works when I open it in several different browsers, although performance degrades.
Ultimately, I would like to make it a webcam server where I can make one input stream connection from the camera and broadcast it out to multiple clients.
My website is hosted by GoDaddy on a Linux server, and it is not a dedicated server, so I don't think I can install any fancy open-source video streaming server.
Thank You and sorry for such a long post.
-T.Ho

I had a mortal combat all night with the same problem here, and your detailed problem description helped me figure out what the problems were in my case.
nginx + PHP FastCGI on Windows must be configured with multiple php-cgi.exe processes (some process-respawn problem)... but this is not the main thing (your case is Linux anyway...).
The main thing is:
running multiple simple wget requests against the target image URL from the command line does not reproduce the problem - everything is OK!!!
... so the conclusion is that the server side must be fine, and the cause must be browser-caching related!
If your PHP code is named videoproxy.php, then
- calling http://serverip/videoproxy.php directly in the browser
- or calling an HTML file like
<html><body><img src='videoproxy.php' /></body></html>
... will have the problem.
but this code WILL NOT:
<html><body><script>
document.write("<img src='videoproxy.php?dummy="
+(new Date().valueOf()) +"' />");
</script></body></html>
(the dummy unique number prevents the browser from caching the img)
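The same caching problem can also be attacked from the PHP side: videoproxy.php can tell the browser not to cache the proxied response at all. This is only a rough sketch, assuming it is placed before the existing header()/cURL code from the question; the header values are standard HTTP cache-control headers, not anything specific to the camera:
<?php
// videoproxy.php - ask the browser never to cache or reuse this response
header('Cache-Control: no-cache, no-store, must-revalidate');
header('Pragma: no-cache');
header('Expires: 0');
// ... then the multipart/x-mixed-replace header and the cURL streaming code from the question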

Related

PHP curl request results in white page when the url contains dots or colons

This only happens on my web server, not on the local system.
I have a cURL request like this:
ini_set('display_errors', 1);
error_reporting(E_ALL);
$url = 'http://***.***.***.***:8080/api_v1/oauth/token';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
$response = curl_exec($ch);
This makes the page load for a while and then just returns a white screen. It is really impossible to show errors, output or anything else.
Whenever I change the URL to another URL (existing or not) I get proper errors, or output if the URL makes sense, as long as the URL does not contain any dots or colons...
Is there some restriction on the usage, or a CURLOPT I am missing?
I have no control over the target URL; I need to consume the API in its ip:port form.
UPDATE
The problem is not related to the target URL or to the data coming in: the same problem occurs when I enter a URL that makes no sense at all, as long as it doesn't contain . or :
I guess it is a setting on the webserver since all my tests work fine on localhost (MAMP)
Unfortunately I have no access to any logs or files except the ones I upload myself (one.com webhosting)
UPDATE 2
Turns out my host is blocking all outgoing traffic to explicit IP addresses and to ports other than 80 and 443.
Cancelled my subscription and am switching to a decent provider now.
Thanks for the help
As @Quasimodo suggests, I'd take a look at the log file, if I were you. If you're on an Ubuntu server using Apache, then look at /var/log/apache2/error.log. A neat trick is to open a terminal and run:
tail -f /var/log/apache2/error.log
This keeps a running stream of the log in your terminal. Then you can make your cURL request crash (in your browser), go back to the terminal, and see what new and juicy errors you have received.
It's most likely some configuration file on your server. So it would be helpful if you posted a couple of specs for that server, such as:
- which web server you're using (Apache, Nginx, other)
- the PHP version
... You can find all of this information easily using phpinfo().
My best guess is that you need to enable the PHP cURL extension in your server configuration - but that is a buck-wild cowboy shot from the hip.
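A quick way to test that guess from a file you upload yourself (the asker only has FTP access): extension_loaded(), function_exists() and phpinfo() are all standard PHP, so this sketch makes no assumptions about the host.
<?php
// Check whether the cURL extension is actually available on this host
var_dump(extension_loaded('curl'));      // true if the extension is loaded
var_dump(function_exists('curl_init'));  // true if curl_init() can be called
phpinfo(INFO_MODULES);                   // full module list, including the curl section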
Addition 1
I can see that you've just edited the question (to say that it thinks for a while and then gives a blank screen). I'd say that your cURL request might be trying to load a large amount of data, and that your PHP configuration has a cap at 128 MB (or something like that).
I'd check phpinfo() for these two values:
max_input_vars
memory_limit
to see if either of them is suspiciously low.
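If you'd rather read those values from a script than hunt through the phpinfo() output, here is a tiny sketch using the standard ini_get() function:
<?php
// Print the two limits mentioned above
echo 'memory_limit: ', ini_get('memory_limit'), PHP_EOL;
echo 'max_input_vars: ', ini_get('max_input_vars'), PHP_EOL;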

PHP curl get request slow speed

I use cURL in a custom Zend Framework library to make a GET request to a Drupal website. On the Drupal end I use REST export pages that receive the GET request and return some data.
This is my cURL request structure in ZF2:
// "chanel" is the property name used in the original class; "SOME URL LINK" is the Drupal REST export URL
$this->chanel = curl_init();
curl_setopt($this->chanel, CURLOPT_URL, "SOME URL LINK");
curl_setopt($this->chanel, CURLOPT_TIMEOUT, 30);
curl_setopt($this->chanel, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($this->chanel, CURLOPT_USERAGENT, 'Mozilla/5.001 (windows; U; NT4.0; en-US; rv:1.0) Gecko/25250101');
$result = curl_exec($this->chanel);
curl_close($this->chanel);
Both Drupal and Zend Framework websites are located on my localhost.
Execution normally takes around 15 seconds. This is too long.
I tried the same link with the Restlet Client (a Chrome extension) and it takes around 1 second or less to execute and retrieve the data.
Do you have any suggestions why it is so slow and how I can improve the speed?
Try putting some loggers in your code: put timestamps in various code blocks and inside functions, and check whether it is cURL that is taking the time or something else. Put timestamps and loggers after each step to debug the performance issue.
Also try it from the command line as follows:
curl --get "URL HERE"
and check whether it is fast. If the command line is fast, then the slowness is in the code you suspected; you could even try executing the direct command from your code.
Try using the IP address instead of the hostname.
If your Drupal site is on the same machine as your ZF2 app, you can use 127.0.0.1.
I think the delay can be caused by a DNS lookup.
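To confirm where the 15 seconds actually go, here is a minimal sketch using cURL's own timing counters; curl_getinfo() and the CURLINFO_* constants are standard PHP cURL, and the URL is only a placeholder:
<?php
$ch = curl_init('http://127.0.0.1/some-drupal-rest-export'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_exec($ch);
// Break the total time into phases to see whether DNS, connecting,
// or the server response is the slow part
printf("namelookup:    %.3fs\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME));
printf("connect:       %.3fs\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME));
printf("starttransfer: %.3fs\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
printf("total:         %.3fs\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));
curl_close($ch);
If namelookup dominates, the 127.0.0.1 suggestion above is the fix; if starttransfer dominates, the time is being spent on the Drupal side.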

PHP file_get_contents behavior across versions and operating systems

I am using file_get_contents in PHP to get information from a client's collections on contentDM. CDM has an API, so you can get that info by making PHP queries like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time things work a little differently.
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollection, myid, and myname with relevant values. myid and mycollection have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser. It just automatically downloads a file with myname as the name of the file, and puts it in your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains xml data.
file_get_contents works to get the data in that file if I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning:
file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd):
failed to open stream: No such file or directory in
D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php
on line 18
My coworkers have been unable to help me so I am curious if anyone here can confirm or deny whether this is an operating system issue, or a PHP version issue, and whether there's a solid alternative method that is likely to work in PHP5 and on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's very good for getting data with simple GET requests where the headers, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context or cURL with setopt are powerdrills with every bit and option you can think of.
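For completeness, a rough sketch of the "stream context" option mentioned above; stream_context_create() and these http context options are standard PHP, and the URL is just the placeholder pattern from the question:
<?php
// Placeholder URL - substitute the real contentDM utils query
$url = 'http://servername/utils/collection/mycollectionname/id/myid/filename/myname';
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 10,                      // seconds
        'header'  => "Accept: text/xml\r\n",  // the endpoint returns XML data
    ),
));
// Read the response body straight into a string instead of downloading a file
$xml = file_get_contents($url, false, $context);
if ($xml === false) {
    // $http_response_header is filled in by PHP after an http:// request
    var_dump(isset($http_response_header) ? $http_response_header : null);
}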
In addition to this, due to some recent website hacks, we had to secure our sites more. In doing so, we discovered that file_get_contents failed to work, where cURL would still work.
Not 100% sure, but I believe that this php.ini setting may have been blocking the file_get_contents request:
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
Either way, our code now works with curl.
References:
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
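A quick way to check whether that setting is the culprit on a given host, using the standard ini_get() function (nothing here is specific to the site in question):
<?php
// An empty string or "0" means http:// URLs cannot be opened with file_get_contents()/fopen()
var_dump(ini_get('allow_url_fopen'));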
So, you can solve this problem by using the PHP cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$url = 'your_api_url';
$data = curl($url);
And finally you can check your data with print_r($data). Hope it works for you and helps you understand.
Reference: http://php.net/manual/en/book.curl.php

Downloading remote files with PHP/cURL: a bit more robustly

I have a script that pulls URLs from the database and downloads them (pdf or jpg) to a local file.
Code is:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, "w");

// Write the response body into the local file
curl_setopt($cp, CURLOPT_FILE, $fp);
// Note: with CURLOPT_HEADER enabled, the HTTP response headers are written into the file as well
curl_setopt($cp, CURLOPT_HEADER, TRUE);

curl_exec($cp);
curl_close($cp);
fclose($fp);
If the remote file is there, it works fine. If the remote file is not there, it just bombs and the browser hangs forever.
What's the best approach to handling this? Should I somehow ping for the file first, or can I set options above that will handle it? I tried setting timeouts, but that had no effect.
This is my first experience using cURL.
I used to use wget much as you're using cURL, and got frustrated by the lack of ability to know what is going on, because it's essentially calling out to an external program.
I use Perl's WWW::Mechanize, and the link below is a PHP version which might be a bit more robust at dealing with such cases.
http://www.compasswebpublisher.com/php/www-mechanize-for-php
Hope this helps.
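Staying with plain cURL, here is a rough sketch of how the missing-file case could be handled with standard options (CURLOPT_FAILONERROR plus connect/transfer timeouts) and explicit error checks; $remote_url and $dest_temp are the variables from the question, and the timeout values are arbitrary:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, 'w');
curl_setopt($cp, CURLOPT_FILE, $fp);
curl_setopt($cp, CURLOPT_FAILONERROR, true);   // fail on HTTP status >= 400 instead of saving the error page
curl_setopt($cp, CURLOPT_CONNECTTIMEOUT, 10);  // give up connecting after 10 seconds
curl_setopt($cp, CURLOPT_TIMEOUT, 60);         // abort the whole transfer after 60 seconds
$ok     = curl_exec($cp);
$status = curl_getinfo($cp, CURLINFO_HTTP_CODE);
$error  = curl_error($cp);
curl_close($cp);
fclose($fp);
if (!$ok || $status != 200) {
    unlink($dest_temp);                        // remove the empty/partial file
    error_log("Download failed ($status): $error");  // log instead of letting the browser hang
}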

How do I transfer data to next server

I am trying to pass some information from the server on which the script is running to another URL or server.
I tried using cURL, but it has the following two disadvantages:
if it cannot locate the file, it reports that the file was not found
it waits until the remote file has finished executing.
How can I overcome both of these, either with cURL or with other commands?
Edit:
Now I would like to suppress the "file not found" error message displayed by cURL, even if the file really doesn't exist.
I do not want any output from the destination page, so I don't want to wait until the destination page has finished executing. I just want to trigger the code and continue with the remaining scripts.
Example:
I am trying to build a logging system that lives entirely on another web server. The client website that implements the logging system will send some of the data required by the system by calling a file on my web server.
Code I am using:
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://example.com/log.php?data=data");
curl_setopt($ch, CURLOPT_HEADER, 0);
// grab URL and pass it to the browser
curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
Why don't you want to use standard PHP features?
<?php
$logpage = file_get_contents('http://example.com/log.php?data=data');
echo $logpage;
?>
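Neither snippet above addresses the two requirements in the edit (no error output, no waiting for log.php to finish). A rough fire-and-forget sketch using a raw socket; fsockopen() and fwrite() are standard PHP, and the host, path and query string are the placeholders from the example:
<?php
// Open a connection to the logging server; '@' plus the false-check
// suppresses "could not connect" errors instead of printing them
$fp = @fsockopen('example.com', 80, $errno, $errstr, 1); // 1-second connect timeout
if ($fp) {
    // Send a bare GET request and close immediately,
    // without waiting for log.php to finish executing
    fwrite($fp, "GET /log.php?data=data HTTP/1.1\r\n");
    fwrite($fp, "Host: example.com\r\n");
    fwrite($fp, "Connection: Close\r\n\r\n");
    fclose($fp);
}
?>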
