I want to transfer messages from a PHP-based web-frontend to a backend service on a linux server.
I am sending the messages with file_put_contents. The interface works well when the backend service listens and reads the pipe created with mkfifo mypipe.
However, I would like to be prepared for a situation in which the backend service fails. In this case the user of the frontend should be notified and given alternative options.
Currently, when the backend is not running, the frontend becomes unresponsive, because file_put_contents blocks.
I have tried various things to solve the problem, such as opening the pipe with fopen first, or setting a timeout with ini_set('default_socket_timeout', 10);
or
$context = stream_context_create(array('http' => array(
    'timeout' => 10
)));
if (file_put_contents("mypipe", $data, FILE_APPEND | LOCK_EX, $context) == FALSE) {
    error_log("Could not write to pipe.");
} else {
    echo "Sent message";
}
I also tried the PHP function is_writable("mypipe"), but, as expected, it returns true regardless of whether the receiver is listening.
How can I check whether writing to the pipe would block, and avoid the frontend becoming unresponsive?
Use fopen, stream_set_blocking, fwrite, and fclose instead of file_put_contents. That way you can detect when a write would block, because fwrite will simply report that it wrote nothing instead of blocking.
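For illustration, here is a minimal sketch of that approach against the FIFO from the question. One caveat worth testing: opening a FIFO for writing can itself block until a reader has the other end open, so the fopen call may need its own guard.

$fp = fopen("mypipe", "w");                // may block until a reader opens the FIFO
if ($fp === false) {
    error_log("Could not open pipe.");
} else {
    stream_set_blocking($fp, false);       // switch the stream to non-blocking mode
    $written = fwrite($fp, $data);         // returns 0/false instead of blocking
    if ($written === false || $written === 0) {
        error_log("Could not write to pipe.");
    } else {
        echo "Sent message";
    }
    fclose($fp);
}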
Related
I have thousands of HTML pages which are handled as PHP.
Inside each page is a line:
<? file_get_contents("http://www.something.com/get_html.php?id=something"); ?>
For some reason, this line has suddenly been slowing down the server. When the page loads, it waits around 15 seconds at this line before proceeding.
The answer here works, namely,
$context = stream_context_create(array('http' => array('header'=>'Connection: close\r\n')));
file_get_contents("http://www.something.com/somepage.html",false,$context);
which "tells the remote web server to close the connection when the download is complete".
However, this would require rewriting all the thousands of files. Is there a way to do the same thing from the get_html.php script?
This would be a lot easier than rewriting all the pages. I tried sending
header("Connection: close"); in that script, but no cigar.
To summarize, I am looking for the same effect as the answer above, but achieved on the remote server side.
You could easily do a find/replace across files in a certain directory with most editors. However, I would suggest you start caching results instead of hitting your own or remote servers on every request.
Is the remote server outside of your local network? If not, you could query the database (or whatever the data source is) directly from your scripts without an HTTP call. Otherwise, you could cache your results in Memcache or in files for some amount of time. How much memory the cache requires depends on the size and variety of your data.
These are only two examples of how to get faster response times; there are many approaches.
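For illustration, here is a minimal sketch of the caching idea, regardless of where it is applied (inside get_html.php on the remote end, or wherever the fetch happens). The temp-directory cache path and the 10-minute lifetime are assumptions; adjust them to your data.

// Simple file-based cache around a remote fetch; path and TTL are placeholders.
function cached_fetch($url, $ttl = 600) {
    $cacheFile = sys_get_temp_dir() . '/fetch_' . md5($url) . '.cache';
    if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile); // serve the cached copy
    }
    $data = file_get_contents($url);          // fall back to the real request
    if ($data !== false) {
        file_put_contents($cacheFile, $data, LOCK_EX);
    }
    return $data;
}

echo cached_fetch("http://www.something.com/get_html.php?id=something");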
You may try the following:
http://www.php.net/manual/en/function.override-function.php
I don't know whether you are able to change your server configuration, though.
Here are a couple of things for you to try. Try using cURL to make the request and see if it is still hanging up. Also, try fetching a different page on your site to see if it is also slow. These tests will help determine if it's that particular page or the connection that's hanging up. If another page is slow also, then modifying the 'get_html.php' page probably won't be much help.
To elaborate on Elias' answer, if the problem can easily be fixed by doing a find/replace, you can use something like this from the command line in *nix:
perl -pi -w -e 's/search/replace/g;' *.php
-e means execute the following line of code.
-i means edit in place.
-w enables warnings.
-p loops over the input, printing each line after the code runs.
You'd have to test this out on a few files before doing all of them, but more specifically, you can use this to very quickly do a find/replace for all of your files:
perl -pi -w -e 's/(file_get_contents\("http:\/\/www.something.com\/somepage.html",false,\$context\)\;)/\$context = stream_context_create(array("http" => array("header" => "Connection: close\\r\\n")));\n$1/g;' *.php
I'm trying to develop an online management system for a very large FLAC music library for a radio station. It's got a beefy server and not many users, so I want to be able to offer a file download service where PHP transcodes the FLAC files into MP3/WAV depending on what the endpoint wants.
This works fine:
if ($filetype == "wav") {
    header("Content-Length: " . $bitrate * $audio->get_length());
    $command = "flac -c -d " . $audio->get_filename() . ".flac";
}
ob_end_flush();
$handle = popen($command, "r");
while ($read = fread($handle, 8192)) echo $read;
pclose($handle);
and allows the server to start sending the file to the user before the transcoding (well, decoding in this case) completes, for maximum speed.
However, the problem I'm getting is that while this script is executing, I can't get Apache to handle any other requests on the entire domain. It'll still work fine on other VirtualHosts on the same machine, but nobody can load any pages on this website while one person happens to be downloading a file.
I've also tried implementing the same thing using proc_open with no difference, and have played with the Apache settings for number of workers and the like.
Is the only way to stop this behaviour to use something like exec and waiting for the encoding process to finish before I start sending the user the file? Because that seems sub-optimal! :(
UPDATE: it seems that other people can still access the website, but not me - i.e. it's somehow related to sessions. This confuses me even more!
Use session_write_close() at some point before you start streaming... You may also want to stream_set_blocking(false) on the read pipe.
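Applied to the question's streaming loop, a minimal sketch could look like this ($command is the string built in the question). Releasing the session lock first means other requests from the same browser session are no longer held up while the download runs.

session_write_close();                    // give up the session file lock
ob_end_flush();

$handle = popen($command, "r");
// stream_set_blocking($handle, false);   // optional, per the suggestion above
while ($read = fread($handle, 8192)) {
    echo $read;
    flush();                              // push each chunk to the client
}
pclose($handle);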
In our application, authentication is handled via set of Controller Plugins that validate the user etc.
I want to serve a large (video) file only to authenticated users- the obvious way to do this is via readfile() in the controller, but I'm finding it hits the PHP memory limit - presumably the output from the controller is buffered somewhere.
How can I turn off buffering just for this one controller?
EDIT: Thanks for all the useful tips about flushing any existing output buffering - I guess I was specifically looking for a way of doing this within the framework though?
Interesting problem... You could try:
// ...
public function largeFileAction()
{
    // this clears all active output buffers
    while (ob_get_level()) {
        ob_end_clean();
    }
    readfile('path/to/large/file');
    exit(); // to prevent further request handling
}
// ...
OK, I might be totally wrong here, but I think I have read somewhere that output buffering has to be enabled for Zend_Layout and the placeholder helpers to work, so you'd have to disable them for the downloadAction (you probably aren't going to need them for serving the file anyway).
Would something like this achieve what you want to do?
class DownloadController
{
    public function downloadAction()
    {
        $this->_helper->layout()->disableLayout();
        $this->_helper->viewRenderer->setNoRender(true);
        // authenticate user if not done elsewhere already
        header( /* ... the usual stuff ... */ );
        readfile( /* some path outside webroot */ );
        exit;
    }
}
As Tyson wrote, your best choice (if you have full control over the server) is to validate the user's credentials and redirect them (302 temporary redirect) to the URL where they can download the file.
To prevent reuse of these URLs, we use Lighttpd and its mod_secdownload module, which lets you generate a hash that is valid for a specified amount of time.
nginx has X-Accel-Redirect and Apache has mod_xsendfile.
If you decide to implement a separate lightweight web server there are other benefits as well (mainly lower memory consumption while serving static files and faster response times).
If you decide to go this route, you will have to add another IP address to the server and bind Apache to one IP address and the other server (Lighttpd or nginx) to the other, because they are both web servers that listen on port 80. Changing the port for one of the servers is not a good idea, because a lot of people do not have access to higher ports.
If adding another IP address is not an option you can install nginx on port 80 and use it as a reverse proxy to pass the dynamic requests to Apache which can listen on another port and serve all of the static files.
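If you stay with Apache alone, the same hand-off idea can be sketched from PHP with mod_xsendfile (or X-Accel-Redirect when nginx serves the request). This is only a sketch; the module has to be installed and configured, and the paths are placeholders.

// After authenticating the user, let the web server stream the file.
header('Content-Type: video/mp4');

// Apache with mod_xsendfile installed and XSendFilePath configured:
header('X-Sendfile: /data/protected/video.mp4');

// nginx equivalent (requires an "internal" location mapping /protected/):
// header('X-Accel-Redirect: /protected/video.mp4');

exit; // the web server takes over; PHP memory usage stays flat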
Consider using an external script to output the file, streaming it to the browser using PHP's passthru function.
If on a Linux-based system, you could try something like passthru("cat video_file.flv");
However, a better practice is to avoid this streaming (from within PHP) altogether and issue the client a 301 HTTP redirection to the URL of the actual static resource so that the webserver can handle streaming it directly.
I don't think you can, actually. As far as I know, PHP buffers all output before sending it to the requester.
You could increase the memory limit using ini_set().
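For example (raising the limit for this request only; the 512M value is an arbitrary placeholder, size it to the largest file you serve):

ini_set('memory_limit', '512M');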
$handle = fopen('/path/to/file', 'r');
$chunk_size = 8192;
while ($chunk = fread($handle, $chunk_size)) {
    echo $chunk;
    ob_flush();
    flush(); // push each chunk out to the client
}
fclose($handle);
This will probably need some tweaking, such as adding correct headers and reading in binary mode if necessary, but the basic idea is sound. I have used this method successfully to send 50+ MB files, with a 16 MB PHP memory limit.
I'm on a Linux system where I am not allowed to use the 'ping' application (ping: icmp open socket: Operation not permitted). However, the script that I am writing (PHP, but I can use an exec() call to any script/program if needed) needs to determine if a host is 'alive'. How can I go about this without using 'ping'?
If ping can't do it, you can't do it in a different language. Here is an analogy that may help you understand why. Let's say there is a file on the file system and you want to see its contents. You run cat filename and it says cat: filename: Permission denied. Do you think Perl (or any other language) will fare better than C did here? Let's try:
#!/usr/bin/perl

use strict;
use warnings;

die "usage: $0 filename" unless @ARGV == 1;

my $filename = shift;

open my $fh, "<", $filename
    or die "could not open $filename: $!\n";

print while <$fh>;
When run against the file it says could not open filename: Permission denied. No matter what language you try to use, you are going to get Operation not permitted.
That said, there are other methods of determining if a machine is alive. If there is a server that is known to always be running on the machine, you could try to connect to it. Note that you don't need to finish the connection (e.g. log in), just the fact that you can successfully initiate the connection is enough to know that box is up.
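For illustration, a minimal sketch of that check in PHP using fsockopen. The host and port are assumptions; pick a service that is known to run on the target box.

$host = 'example-host.local';                          // hypothetical host to check
$port = 22;                                            // hypothetical port, e.g. SSH
$conn = @fsockopen($host, $port, $errno, $errstr, 5);  // 5-second timeout
if ($conn === false) {
    echo "No answer on $host:$port ($errno: $errstr)\n";
} else {
    fclose($conn);  // successfully connecting is enough; no need to log in
    echo "$host is alive\n";
}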
To do a ping (ICMP) you need root access.
The only option you have is to do a TCP or UDP ping.
If you want an example, check the code of Cacti, or you can use hping to do it for you.
Or you can set SUID bit on "ping" program on unix ;)
http://us2.php.net/manual-lookup.php?pattern=socket
But if you can't open a socket with ping, it's unlikely that you can use any of these. Talk to your hosting provider.
The PHP Manual gives user supplied code for an implementation of a ping in PHP. Unfortunately, it requires root access so it's not likely you'll be able to use that either. One alternative is to use curl and look at the values returned by curl_getinfo():
$c = curl_init('http://www.site.com/');
curl_setopt($c, CURLOPT_RETURNTRANSFER, true); // keep the page out of the output; we only need the metadata
curl_exec($c);
$info = curl_getinfo($c);
It is nowhere near being equivalent to ping, but still maybe suitable for your needs.
I'm trying to make a PHP script, I have the script finished but it takes like 10 minutes to finish the process it is designed to do. This is not a problem, however I presume I have to keep the page loaded all this time which is annoying. Can I have it so that I start the process and then come back 10mins later and just view the log file it has generated?
Well, you can use "ignore_user_abort(true)"
So the script will continue to work (keep an eye on script duration, perhaps add "set_time_limit(0)")
But a warning here: you will not be able to stop a script once it is running with these two lines:
ignore_user_abort(true);
set_time_limit(0);
Except by directly accessing the server and killing the process there! (Been there: wrote an endless loop that called itself over and over, brought the server to a screeching halt, got shouted at...)
Sounds like you should have a queue and an external script for processing the queue.
For example, your PHP script should put an entry into a database table and return right away. Then, a cron running every minute checks the queue and forks a process for each job.
The advantage here is that you don't lock an Apache thread up for 10 minutes.
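A minimal sketch of that pattern, with hypothetical table, column, and DSN values. The two halves are separate scripts: the first runs in the web request, the second is run by cron.

// Web request: enqueue the job and return immediately.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // placeholder credentials
$stmt = $pdo->prepare("INSERT INTO job_queue (payload, status) VALUES (?, 'pending')");
$stmt->execute(array(json_encode($_POST)));
echo "Job queued; check the log in a few minutes.";

// Worker script run from cron (e.g. "* * * * * php /path/to/worker.php"):
$jobs = $pdo->query("SELECT id, payload FROM job_queue WHERE status = 'pending'");
foreach ($jobs as $job) {
    $pdo->prepare("UPDATE job_queue SET status = 'running' WHERE id = ?")->execute(array($job['id']));
    // ... the long-running processing goes here, writing progress to a log file ...
    $pdo->prepare("UPDATE job_queue SET status = 'done' WHERE id = ?")->execute(array($job['id']));
}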
I had lots of issues with this sort of process under Windows; my situation was a little different in that I didn't care about the response of the "script": I wanted the script to start and allow other page requests to go through while it was busy working away.
For some reason, I had issues with it either hanging other requests or timing out after about 60 seconds (both Apache and PHP were set to time out after about 20 minutes). It also turns out that Firefox times out after 5 minutes (by default) anyway, so after that point you can't know what's going on through the browser without changing settings in Firefox.
I ended up using popen and pclose to launch PHP in CLI mode, like so:
pclose(popen("start php myscript.php", "r"));
This would (using start) open the PHP process and then kill the start process, leaving PHP running for however long it needed; again, you'd need to kill the process manually to shut it down. It didn't need you to set any timeouts, and you could let the current page that called it continue and output some more details.
The only issue with this is that if you need to send the script any data, you'd either do it via another source or pass it along the "command line" as parameters, which isn't so secure.
Worked nicely for what we needed though and ensures the script always starts and is allowed to run without any interruptions.
I think the shell_exec command is what you are looking for.
However, it is disabled in safe mode.
The PHP manual article about it is here: http://php.net/shell_exec
There is an article about it here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
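For completeness, a minimal sketch of starting such a script in the background with shell_exec. The script path is a placeholder; redirecting output and appending & lets the shell return immediately instead of waiting.

shell_exec('php /path/to/long_task.php > /dev/null 2>&1 &');
echo "Processing started; check the log file later.";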
There is another option you can use: run the script from the CLI. It will run in the background, and you can even run it as a cronjob if you want.
e.g.
#!/usr/bin/php -q
<?php
// process logs
?>
This can be set up as a cronjob and will execute with no time limitation... This example is for Unix-based operating systems, though.
FYI
I have a PHP script with an infinite loop which does some processing, and it has been running non-stop for the past 3 months.
You could use ignore_user_abort() - that way the script will continue to run even if you close your browser or go to a different page.
Think about Gearman
Gearman is a generic application framework for farming out work to multiple machines or processes. It allows applications to complete tasks in parallel, to load balance processing, and to call functions between languages. The framework can be used in a variety of applications, from high-availability web sites to the transport of database replication events. This extension provides classes for writing Gearman clients and workers.
(Source: PHP manual)
Official website of Gearman
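A minimal sketch of how that could look, assuming the PECL gearman extension and a gearmand server on localhost. The function name process_logs is hypothetical, and the client and worker parts are separate scripts.

// Client side (the web request): hand the job off and return immediately.
$client = new GearmanClient();
$client->addServer();  // defaults to 127.0.0.1:4730
$client->doBackground('process_logs', json_encode(array('id' => 42)));
echo "Job submitted.";

// Worker side (a long-running CLI process started separately):
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('process_logs', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... the 10-minute processing goes here, writing to a log file ...
});
while ($worker->work());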
In addition to bastiandoeen's answer, you can combine ignore_user_abort(true); with a cURL request.
Fake a request abortion by setting a low CURLOPT_TIMEOUT_MS and keep processing after the connection is closed:
function async_curl($background_process = '')
{
    // ------------- get curl contents ----------------
    $ch = curl_init($background_process);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOSIGNAL       => 1,  // to time out immediately if the value is < 1000 ms
        CURLOPT_TIMEOUT_MS     => 50, // the maximum number of milliseconds to allow cURL functions to execute
        CURLOPT_VERBOSE        => 1,
        CURLOPT_HEADER         => 1
    ));
    $out = curl_exec($ch);

    // ------------- parse curl contents ----------------
    //$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    //$header = substr($out, 0, $header_size);
    //$body = substr($out, $header_size);

    curl_close($ch);
    return true;
}

async_curl('http://example.com/background_process_1.php');
async_curl('http://example.com/background_process_1.php');
NB
If you want cURL to time out in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to time out immediately if the value is < 1000 ms, with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL.
pros
No need to switch methods (compatible with Windows & Linux)
No need to implement connection handling via headers and buffering (independent of browser and PHP version)
cons
Needs the curl extension
Resources
curl timeout less than 1000ms always fails?
http://www.php.net/manual/en/function.curl-setopt.php#104597
http://php.net/manual/en/features.connection-handling.php
I'm pretty sure this will work:
<?php
pclose(popen('php /path/to/file/server.php &'));
echo "Server started. [OK]";
?>
The '&' is important. It tells the shell not to wait for the process to exit.
Also, you can use this code in your PHP (as bastiandoeen said):
ignore_user_abort(true);
set_time_limit(0);
In your server stop command:
<?php
$output = array();
exec('ps aux | grep -ie /path/to/file/server.php | awk \'{print $2}\' | xargs kill -9 ', $output);
echo "Server stopped. [OK]";
?>
Just call StartBuffer() before any output, and EndBuffer() when you want the client connection to close. The code after the EndBuffer() call will be executed on the server without a client connection.
private function StartBuffer() {
    @ini_set('zlib.output_compression', 0); // @ suppresses warnings if a setting cannot be changed
    @ini_set('implicit_flush', 1);
    @ob_end_clean();
    @set_time_limit(0);
    @ob_implicit_flush(1);
    @ob_start();
}
private function EndBuffer() {
    $size = ob_get_length();
    header("Content-Length: $size");
    header('Connection: close');
    ob_flush();
    ob_implicit_flush(1);
}