My goal is to flush the header part of my website to the browser early, while my PHP script is still stitching the rest of the page together, and then send the remainder once it is done. It is important that the chunks are sent to the browser compressed. (I am using Apache/2.2 and PHP/5.3.4.)
Right now I am trying to achieve this by calling ini_set("zlib.output_compression", "On") in my PHP script. But if I use flush() anywhere in my script, even at the very end, the compression no longer works.
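Roughly what I am doing at the moment (a simplified sketch; the variable names are just placeholders for my real page-building code):

<?php
ini_set("zlib.output_compression", "On");

echo $header_html;   // the <head> part I want the browser to receive early
flush();             // as soon as this call is present, the response arrives uncompressed

// ... long-running work that builds the rest of the page ...
echo $body_html;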
Questions are:
a) With this method, will zlib flush its output buffer and send the compressed chunk to the browser once the buffer size is reached?
b) If so, is there any way to control more precisely when zlib sends my chunk, other than just setting zlib's internal buffer size (the default is 4 KB)?
c) Are there any good alternatives for achieving an early compressed flush, ideally with finer control over when the flush happens? Maybe I am on the wrong path entirely :)
It's been a LONG time since I had to use zlib compression on OB (more on why later). However, let me try to convince you to turn OFF zlib compression on OB in PHP. First of all, a little background to ensure we are on the same page.
HOW DOES OB WORK
Every time PHP prints something without OB, it is sent straight to Apache and from Apache to the browser. With OB, the output instead stays in the buffer and waits until the data is flushed (towards the browser) or until the script ends, at which point it is flushed automatically. This saves quite a lot of time and resources when generating a page by buffering the PHP-to-Apache-to-browser stage of the process.
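To make this concrete, a minimal sketch of the buffering and flushing sequence (just an illustration, not code from the question):

<?php
ob_start();               // output now collects in PHP's buffer

echo "<head>...</head>";  // held in the buffer for now

ob_flush();               // push the buffered output on to Apache
flush();                  // ask Apache to pass it on to the browser

// ... keep generating; whatever is left gets sent when the script ends ...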
WHY NOT TO USE OB COMPRESSION IN PHP
Why would you make PHP compress it? That should be the server's job (as should compressing JS files, for example). What you "should" do to drastically free up Apache to process PHP is to install NGINX as a front to the public. It's VERY easy to set up as a reverse proxy, and you can even install it on the SAME server as PHP and Apache.
So put NGINX on port 80 and Apache on, say, 8080 (and only allow nginx to connect to it; don't worry if you leave it public for a little while, since it was already public before, and being able to bypass nginx is great for debugging, so no new security issues should arise, but I recommend you don't leave it public for too long). Then make nginx reverse proxy to Apache and cache all static files, which offloads them from Apache (nginx serves them instead), meaning Apache can handle more PHP requests. Also have nginx perform the OUTPUT COMPRESSION ;) freeing up Apache and PHP to handle even more requests. As an added benefit, nginx serves static files much faster than Apache, uses much less RAM, and can handle many more connections.
Even an nginx newbie could get nginx set up after reading a few tutorials online and complete everything I just said within a day. A day well spent, too.
Remember to KEEP output buffering ON for the PHP-to-Apache stage, however, but turn zlib compression OFF in PHP and enable compression on nginx instead.
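A minimal sketch of the kind of nginx server block I mean (the port, backend address, and MIME types are assumptions you would adapt to your setup):

# nginx answers on port 80 and does the output compression itself
server {
    listen 80;

    gzip on;
    gzip_types text/css application/javascript;   # text/html is compressed by default

    location / {
        proxy_pass http://127.0.0.1:8080;          # Apache + PHP moved to port 8080
        proxy_set_header Host $host;
    }
}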
I'm having a problem when we run an upgrade for our web application.
After the upgrade script completes and we access the web app via the browser, we get file-not-found errors on require_once() because we shifted some files around and PHP still has the old directory structure cached.
If we wait the default 120 seconds for realpath_cache_ttl to expire, then everything resolves itself, but this is not acceptable for obvious reasons.
So I tried using clearstatcache, with limited success. I created a separate file (clearstatcache.php) that only calls this function (it is a one-line file), and placed a call to it in our install script via curl:
<?php
// clearstatcache.php: clear the realpath/stat cache of the PHP process serving this request
clearstatcache(true);
This does not seem to work; however, if I call this file via the browser, it immediately begins to work.
I'm running PHP version 5.3
I started looking at the request header differences between my browser and curl, and the only thing I can see that might matter is the PHPSESSID cookie.
So my question is: does the current PHPSESSID matter? (I don't think it should.) Or am I doing something wrong with my curl call? I am using
curl -L http://localhost/clearstatcache.php
EDIT: Upon further research, I've decided this probably has something to do with multiple Apache processes running. clearstatcache will only clear the cache of the current Apache process; when the browser makes a request, a different Apache process serves it, and that process still has the old cache.
Given that the cache is part of the Apache child process thanks to mod_php, your solution here is probably going to be restarting the Apache server.
If you were using FastCGI (under Apache or another web server), the solution would probably be restarting whatever process manager you were using.
This step should probably become part of your standard rollout plan. Keep in mind that there may be other caches that you might also need to clear.
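For example, as part of the rollout script (the exact command depends on your distribution and how Apache is managed, so treat this as a sketch):

# gracefully restart Apache so every child process starts with an empty realpath cache
apachectl graceful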
I am programming a service that has to force the download of a file.
I know about the possibility of setting the HTTP headers using PHP and then sending the file using the readfile() function. But I think this is not a good way to send larger files, because it would need a lot of server performance and the maximum execution time of the PHP scripts would be exceeded.
So is it possible to send the HTTP headers using PHP (I have to modify them depending on entries in a MySQL database) and then let Apache send the file body?
I should add that I could also use Perl scripts, but I do not see a way to do this in a CGI script either.
Thanks.
You can do this strictly with Apache if the location and/or filetype of the download is known ahead of time:
<Location /downloads>
SetEnvIf Request_URI "\.attachment-extension$" FILENAME=$0
Header set Content-Disposition "attachment; filename=%{FILENAME}e"
</Location>
because it would need a lot of server performance
I don't believe this is the case. As long as sending the file to the client is the last thing the script does before it terminates, the difference in CPU/RAM performance between sending the file from PHP and letting Apache handle it directly should be minimal, if there is one at all.
Unless your server has a very large amount of bandwidth (many Gbps) and an incredibly fast HDD setup, you would run into a bandwidth problem long before you ran into a server system resources problem.
Admittedly this discussion is based largely on conjecture (since I know nothing about your hosting setup), so YMMV.
the maximum execution time of the PHP scripts would be exceeded
So just call set_time_limit(0);. That's what it's for.
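If you do keep it in PHP, a rough sketch of that approach (the content type, file name, and path are placeholders; in a real script they would come from your MySQL lookup):

<?php
set_time_limit(0);   // don't let max_execution_time kill large downloads
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="example.zip"');
readfile('/path/to/example.zip');   // send the file as the script's last action
exit;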
I found this link: http://jasny.net/articles/how-i-php-x-sendfile/. X-Sendfile is a module that seems to allow exactly what I want to do. It hands the body of the HTTP response over to Apache, which then handles everything itself. I just installed and tried it, and it works fine.
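For reference, the PHP side then looks roughly like this (the path and file name are placeholders, and mod_xsendfile has to be installed and enabled in Apache):

<?php
// Set whatever headers you need based on the database entry,
// then hand the actual file transfer over to Apache.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="backup.zip"');
header('X-Sendfile: /var/www/files/backup.zip');   // server-side path, never sent to the client
exit;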
I'm experimenting with some video content delivery using VLC and an Apache reverse proxy. Since VLC supports HTTP streaming, I'm sure it will work behind an Apache reverse proxy (I haven't tried this yet, but I don't see why it wouldn't work).
Before letting Apache proxy the http video stream, I would like to run a script first. Is there an option in Apache to do this?
If not, can someone think of a way for PHP to do some magic first and then somehow redirect to the HTTP video stream, without making a VLC or Windows Media Player client cry? Done that way, the Apache reverse proxy would only have to point at the PHP script.
Either way, the idea of the script is to start the VLC streaming server.
Thanks
If you really want to do it in Apache, you can always write your own module :)
Alternatively, you can use mod_rewrite with the prg option (a rewrite map), where a rewrite rule is processed by an external program. You can do whatever you want there (logging, starting the stream, etc.).
Don't forget to set a RewriteLock file, or you will experience strange behaviour.
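A rough sketch of what that could look like in the server config (Apache 2.2 syntax; the map name, script path, and URL pattern are only examples):

# httpd.conf: the prg script reads one key per line on stdin and prints the rewritten target
RewriteEngine On
RewriteLock /var/lock/rewritemap.lock
RewriteMap streamprep prg:/usr/local/bin/prepare-stream.php
RewriteRule ^/video/(.+)$ ${streamprep:$1} [P,L]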
You could also do "everything" in PHP and then use the Apache module mod_xsendfile, where you just pass a header from PHP containing the location of the file on the filesystem.
That path is not disclosed to the client; the header is caught by the Apache module, the file is served by Apache, and your PHP process terminates normally.
These are the best out-of-the-box options I can think of.
If none of this works because you need to catch some stuff during or at the end of the transfer, you could just echo the file's contents with PHP; with correct output buffering you can achieve acceptable performance that way.
Or you could do some log file post-processing, if that solves your problem.
I have set up an internal proxy kind of thing using cURL and PHP. The setup is like this:
The proxy server is a rather cheap VPS (which has slow disk I/O at times). All requests to this server are handled by a single index.php script. index.php fetches data from another, fast server and displays it to the user.
The data transfer between the two servers is very fast, and the bottleneck is only the disk I/O on the proxy server. Since there is only one index.php, I want to know:
1) How do I ensure that index.php is permanently "cached" in Apache on the proxy server? (Googling for PHP cache, I found many custom solutions that will cache the "data" output by PHP; I want to know if there are any pre-built modules in Apache that will cache the PHP script itself.)
2) Is the data fetched from the backend server always in RAM/cache on the proxy server (assuming there is enough memory)?
3) Does Apache read any config files or other files from disk when handling requests?
4) Does Apache wait for logs to be written to disk before serving the content? If so, I will disable logging on the proxy server (or is there a way to ensure content is served first, without waiting for logs to be written)?
Basically, I want to eliminate disk i/o all together on the 'proxy' server.
Thanks,
JP
1) Install APC (http://pecl.php.net/apc). This will compile your PHP script once and keep it in shared memory for the lifetime of the webserver process (or a given TTL).
2) If your script fetches data and does not cache/store it on the filesystem, it will be in RAM, yes. But only for the duration of the request. PHP uses a 'share-nothing' strategy, which means -all- memory is released after a request. If you do cache data on the filesystem, consider using memcached (http://memcached.org/) instead to bypass file I/O; see the sketch after these answers.
3) If you have .htaccess support activated, Apache will search for those files in each path leading to your PHP file. See "Why can't I disable .htaccess in Apache?" for more info.
4) Not 100% sure, but it probably does wait.
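Regarding point 2), a rough sketch of caching the fetched data in memcached instead of on disk (the server address, key, TTL, and the fetch_from_backend() helper are hypothetical):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$data = $mc->get('backend_response');
if ($data === false) {
    $data = fetch_from_backend();              // placeholder for the cURL call to the fast server
    $mc->set('backend_response', $data, 60);   // keep it in RAM for 60 seconds
}
echo $data;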
Why not use something like Varnish, which is explicitly built for this type of task and does not carry the overhead of Apache?
I would recommend "tinyproxy" for this purpose.
It does everything you want very efficiently.
I want to set up an automated backup via PHP, so that using HTTP(S) I can "POST" a zip file request to another server and send over a large .zip file. Basically, I want to back up an entire site (and its database) and have a cron job periodically transmit the file over HTTP(S), something like
wget "http://www.thissite.com/cron_backup.php?dest=www.othersite.com&file=backup.zip"
The appropriate authentication/security can be added afterwards...
I prefer HTTP(S) because this other site has limited use of FTP and is on a Windows box, so I figure the surest way to communicate with it is via HTTP(S). The other end would have a corresponding PHP script that would store the file.
This process needs to be completely programmatic (i.e. Flash uploaders will not work, as those need a browser; this script will run from a shell session).
Are there any generalized PHP libraries or functions that help with this sort of thing? I'm aware of the PHP script timeout issues, but I can typically alter php.ini to minimize this.
I'd personally not use wget, and just run from the shell directly for this.
Your PHP script would be called from cron like this:
/usr/local/bin/php /your/script/location.php args here if you want
This way you don't have to worry about yet another program (wget) to handle things. If your settings are the same at each run, just put them in a config file or directly into the PHP script.
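For example, the crontab entry could look something like this (the schedule and path are just placeholders):

# run the backup script every night at 02:30
30 2 * * * /usr/local/bin/php /your/script/location.php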
Timeout can be handled by this; it makes the PHP script run for an unlimited amount of time:
set_time_limit(0);
Not sure what libraries you're using, but look into cURL to do the POST; it should work fine.
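A rough sketch of the cURL POST in PHP 5.3 (the URL, field name, and file path are placeholders; note that the '@' upload syntax shown here was later deprecated in favour of CURLFile):

<?php
$ch = curl_init('https://www.othersite.com/receive_backup.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('backup' => '@/path/to/backup.zip'));   // '@' marks a file upload in PHP 5.3
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);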
I think the biggest issues that would come up would be more server-related and less PHP/script-related, i.e. make sure you have the bandwidth for it and that your PHP script CAN connect to an outside server.
If it's at all possible, I'd stay away from doing large transfers over HTTP. FTP is far from ideal too, but for very different reasons.
Yes, it is possible to do this via FTP, HTTP and HTTPS using cURL, but it does not really solve any of the problems. HTTP is optimized around sending relatively small files in relatively short periods of time; when you stray away from that, you end up undermining a lot of the optimization that's applied to webservers (e.g. if you've got a setting for MaxRequestsPerChild you could be artificially extending the life of processes that should have stopped, and there's the interaction between the LimitRequest* settings and max_file_size, not to mention various timeouts and the other limit settings in Apache).
A far more sensible solution is to use rsync over SSH for content/code backups and the appropriate database replication method for the DBMS you are using, e.g. MySQL replication.
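For the content part, that could look something like this (the host, user, and paths are placeholders):

# push the site's files to the other server over ssh, compressed in transit
rsync -az -e ssh /var/www/thissite/ backupuser@www.othersite.com:/backups/thissite/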