I am unable to understand and run a simple PHP script in FCGI mode. I am learning both Perl and PHP and I got the Perl version of FastCGI example below to work as expected.
Perl FastCGI counter:
#!/usr/bin/perl
use FCGI;
$count = 0;
while (FCGI::accept() >= 0) {
print("Content-type: text/html\r\n\r\n",
"<title>FastCGI Hello! (Perl)</title>\n",
"<h1>FastCGI Hello! (Perl)</h1>\n",
"Request number ", $++count,
" running on host <i>$ENV('SERVER_NAME')</i>");
}
Searching for something similar in PHP turned up talk about "fastcgi_finish_request", but I have no clue how
to accomplish the counter example in PHP. Here is what I tried:
<?php
header("content-type: text/html");
$counter++;
echo "Counter: $counter ";
//http://www.php.net/manual/en/intro.fpm.php
fastcgi_finish_request(); //If you remove this line, then you will see that the browser has to wait 5 seconds
sleep(5);
?>
Perl is not PHP. That does not mean you cannot often interchange things and port code between the two, but when it comes to runtime environments there are bigger differences that you cannot just swap around.
FCGI operates at the request/protocol level, which is fully abstracted away by the PHP runtime, so you do not have as much control in PHP as you do in Perl with use FCGI;.
Therefore you cannot just port that code.
Apart from that, fastcgi_finish_request is completely unrelated to the Perl code. You must have confused it with something else or thrown it in on the off chance it would help; either way it is not really useful in this counter example.
PHP and HTTP are stateless.
All data is only relevant for the current, ongoing request.
If you need to save state, consider storing the data in a cookie, session, cache or database.
So the implementation of this "counter" example will be different for Perl and PHP.
Your usage of fastcgi_finish_request won't bring the functionality you expect from Perl.
Think about a long-running calculation where you output data in the middle.
You can do that with fastcgi_finish_request: the data is pushed to the browser while the long-running task keeps running.
The FastCGI connection and the PHP process start together. Normally the connection stays open until PHP finishes, and then the FastCGI connection is closed,
unless you hit PHP's execution timeout or the FastCGI connection timeout first.
fastcgi_finish_request handles the case where the FastCGI connection to the browser is closed BEFORE PHP finishes execution.
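A minimal sketch of that pattern, assuming PHP-FPM (where fastcgi_finish_request() is available); the report-writing task is made up for illustration:
<?php
// Send the response and close the FastCGI connection to the browser right away...
echo "Request accepted, processing continues in the background.";
fastcgi_finish_request();

// ...then keep working after the browser has already received the page.
sleep(10);                                                     // stands in for the slow work
file_put_contents('/tmp/report.log', date('c') . " done\n", FILE_APPEND);
?>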
Simple Hit Counter Example for PHP
<?php
$hit_count = (int) @file_get_contents('count.txt'); // read count from file ("@" hides the warning on the first run, when the file does not exist yet)
$hit_count++; // increment hit count by 1
echo $hit_count; // display
file_put_contents('count.txt', $hit_count); // store the new hit count
?>
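If you prefer the session route mentioned above, a per-visitor counter is just as short; a sketch, independent of FastCGI:
<?php
session_start();                                  // must run before any output
if (!isset($_SESSION['count'])) {
    $_SESSION['count'] = 0;
}
$_SESSION['count']++;
echo "Request number {$_SESSION['count']} running on host <i>{$_SERVER['SERVER_NAME']}</i>";
?>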
Honestly, that's not even how you should do it using Perl either.
Instead, I'd recommend using CGI::Session to track session information:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser);
use CGI::Session;
my $q = CGI->new;
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
print $session->header();
# Page View Count
my $count = 1 + ($session->param('count') // 0);
$session->param('count' => $count);
# HTML
print qq{<html>
<head><title>Hello! (Perl)</title></head>
<body>
<h1>Hello! (Perl)</h1>
<p>Request number $count running on host <i>$ENV{SERVER_NAME}</i></p>
</body>
</html>};
Alternatively, if you really want to go barebones, you could keep a local file as demonstrated in: I still don't get locking. I just want to increment the number in the file. How can I do this?
I am trying to implement a realtime chat application using PHP. Is it possible to do it without using persistent data storage like a database or file? Basically what I need is a mediator written in PHP that:
accepts messages from client browsers
broadcasts the message to the other clients
forgets the message
You should check out HTML5 WebSockets. They use a two-way connection, so you will not need any database or file: any chat message that arrives at the server is sent directly to the other users' browsers without any Ajax call. You do, however, also need to set up a WebSocket server.
WebSockets are used in many realtime applications as well. I am planning to write a full tutorial on that shortly; I will notify you.
Just tried something I had never done before in response to this question. It seemed to work, but I only tested it once. Instead of using a socket, I had the idea of using a shared session variable. Basically I forced the session_id to be the same value regardless of the user, so they are all sharing the same data. From a quick test it seems to work. Here is what I did:
session_id('12345');
session_start();
$session_id = session_id();
$_SESSION['test'] = isset($_SESSION['test']) ? $_SESSION['test'] + 1 : 1;
echo "session: {$session_id} test: {$_SESSION['test']} <br />";
So my thought process was that you could simply store the chat info in a Session variable and force everyone regardless of who they are to use a shared session. Then you can simply use ajax to continually reload the current Session variable, and use ajax to edit the session variable when adding a message. Also you would probably want to set the Session to never expire or have a really long maxlifetime.
As I said I just played around with this for a few minutes to see if it would work.
You will want to use Sockets. This article will cover exactly what you want to do: http://devzone.zend.com/209/writing-socket-servers-in-php/
When I tried to solve the same problem, I went with Nginx's Push Module. I chose to go this way since I had to support older browsers (that usually won't support WebSockets) and had no confidence in setting up an appropriate solution like Socket.io behind a TCP proxy.
The workflow went like this:
The clients connect through long-polling to my /subscriber location, which is open to all.
The /publisher location only accepts connections from my own server
When a client subscribes and talks, it basically just asks a PHP script to handle whatever data is sent.
This script can do validation, authorization, and such, and then forwards (via curl) the message in JSON format to the /publisher (a sketch of that step follows this list).
Nginx's Push Module handles sending the message back to the subscribers and the client establishes a new long-polling connection.
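To make that forwarding step concrete, here is a rough sketch; the /publisher URL, the channel id and the $user/$text values are all made up for illustration:
<?php
// Validate / authorize the incoming message first (omitted here), then
// forward it as JSON to the push module's publisher location.
$user    = 'alice';                     // hypothetical, would come from the session
$text    = $_POST['message'];           // the chat message the subscriber sent
$payload = json_encode(array('user' => $user, 'text' => $text));

$ch = curl_init('http://127.0.0.1/publisher?id=chat');   // hypothetical internal URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);
?>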
If I had to do this all over again, then I would definitely go the Socket.io route, as it has proper fallbacks to Comet-style long-polling and has great docs for both Client and Server scripts.
Hope this helps.
If you have a business need for PHP, then adding another language to the mix just means you then have two problems.
It is perfectly possible to run a permanent, constantly-running daemonised PHP IRCd server: I know, because I've done it, to make an online game which ran for years.
The IRC server part I used is a modified version of WaveIRCd:
http://sourceforge.net/projects/waveircd/
I daemonised it using code I made available here:
http://www.thudgame.com/node/254
That code might be overkill: I wrote it to be as rugged as I could, so it tries to daemonise using PHP's pcntl_fork(), then falls back to calling itself recursively in the background, then falls back to perl, and so on: it also handles the security restrictions of PHP's safe mode in case someone turns that on, and the security restrictions imposed by being called through cron.
You could probably strip it down to just a few lines: the bits with the comments "Daemon Rule..." - follow those rules, and you'll daemonize your process just fine.
In order to handle any unexpected daemon deaths, etc, I then ran that daemoniser every minute through cron, where it checked to see if the daemon was already running, and if so either quietly died, or if the daemon was nonresponsive, killed it and took its place.
Because of the whole distributed nature of IRC, it was nicely rugged, and gave me a multiplayer browser game with no downtime for a good few years until bit-rot ate the site a few months back. I should try to rewrite the front end in Flash and get it back up again someday, when I have time...
(I then ran another daemonizer for a PHP bot to manage the game itself, then had my game connect to it as a java applet, and talk to the bot to play the game, but that's irrelevant here).
Since WaveIRCd is no longer maintained, it's probably worth having a hunt around to find if anyone else has forked the project and is supporting it.
[2012 edit: that said, if you want your front end to be HTML5/Javascript, or if you want to connect through the same port that HTTP connects through, then your options are more limited than when using Flash or Java. In that case, take the advice of others, and use "WebSockets" (poor support in most current browsers) or the "Socket.io" project (which uses WebSockets, but falls back to Flash, or various other methods, depending what the browser has available).
The above is for situations where your host allows you to run a service on another port. In particular, many have explicit rules in their ToS against running an IRCd.]
[2019 edit: WebSockets are now widely supported, you should be fine using them. As a relevant case study, Slack is written in PHP (per https://slack.engineering/taking-php-seriously-cf7a60065329), and for some time supported the IRC protocol, though I believe that that has since been retired. As its main protocol, it uses an API based on JSON over WebSockets (https://api.slack.com/rtm). This all shows that a PHP IRCd can deliver enterprise-level performance and quality, even where the IRC protocol is translated to/from another one, which you'd expect to give poorer performance.]
You need to use some kind of storage as a buffer. It IS plausible not to use a file or a DB (which itself also uses files). You can try using PHP's shared memory functions, but I don't know of any ready-made working solution, so you'll have to do it from scratch.
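For what it's worth, a bare-bones sketch of that shared-memory idea using the shmop extension; there is no locking and the block size is fixed, so treat it purely as a starting point:
<?php
// Tiny shared-memory message buffer (shmop extension, no locking).
$key  = ftok(__FILE__, 'c');                 // derive a System V IPC key from this file
$size = 8192;                                // fixed-size block for all messages
$shm  = shmop_open($key, 'c', 0644, $size);  // create (or attach to) the segment

// Append the incoming message to whatever is already stored.
$incoming = isset($_POST['msg']) ? trim($_POST['msg']) : '';
$messages = rtrim(shmop_read($shm, 0, $size), "\0");
if ($incoming !== '') {
    $messages .= $incoming . "\n";
    // keep only the newest bytes if the buffer overflows
    shmop_write($shm, str_pad(substr($messages, -$size), $size, "\0"), 0);
}

// Every request (from any user) reads the same block back.
echo nl2br(htmlspecialchars($messages));
?>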
Is it possible to do it without using a persistent data storage like
database or file?
It is possible, but you shouldn't do it that way. A database- or file-based approach doesn't slow a chat down, and it gives your chat application an additional safety net. You can still make a web-based chat using Ajax and sockets without persistent data, though.
You should see following posts:
Is database based chat room bad idea?
Will polling from a SQL DB instead of a file for chat application increase performance?
Using memcached as a database buffer for chat messages
persistent data in php question
https://stackoverflow.com/questions/6569754/how-can-i-develop-social-network-chat-without-using-a-database-for-storing-the-c
File vs database for storage efficiency in chat app
PHP is not a good fit for your requirements (in a normal setup like apache-php, fastcgi etc.), because the PHP script gets executed from top to bottom for every request and cannot maintain any state between the requests without the use of external services or databases/files (Except e.g. http://php.net/manual/de/book.apc.php, but it is not intended for implementing a chat and will not scale to multiple servers.)
You should definitely look at Node.js and especially the Node.js module Socket.IO (a WebSocket library). It's incredibly easy to use and rocks. Socket.IO can also scale to multiple chat servers with an optional Redis backend, which makes it easier to scale.
Trying to use $_SESSION with a static session id as communication channel is not a solution by the way, because PHP saves the session data into files.
One solution to achieving this is by writing a PHP socket server.
<?php
// Set time limit to indefinite execution
set_time_limit (0);
// Set the ip and port we will listen on
$address = '192.168.0.100';
$port = 9000;
$max_clients = 10;
// Array that will hold client information
$client = array(); // note: used as $client[$i] below
// Create a TCP Stream socket
$sock = socket_create(AF_INET, SOCK_STREAM, 0);
// Bind the socket to an address/port
socket_bind($sock, $address, $port) or die('Could not bind to address');
// Start listening for connections
socket_listen($sock);
// Loop continuously
while (true) {
// Setup clients listen socket for reading
$read = array($sock); // reset the read array each pass; index 0 is the listening socket
for ($i = 0; $i < $max_clients; $i++)
{
if ($client[$i]['sock'] != null)
$read[$i + 1] = $client[$i]['sock'] ;
}
// Set up a blocking call to socket_select()
$write = null; $except = null; // socket_select() takes these by reference, so they must be variables
$ready = socket_select($read, $write, $except, null); // null timeout = block until there is activity
/* if a new connection is being made add it to the client array */
if (in_array($sock, $read)) {
for ($i = 0; $i < $max_clients; $i++)
{
if ($client[$i]['sock'] == null) {
$client[$i]['sock'] = socket_accept($sock);
break;
}
elseif ($i == $max_clients - 1)
print ("too many clients")
}
if (--$ready <= 0)
continue;
} // end if in_array
// If a client is trying to write - handle it now
for ($i = 0; $i < $max_clients; $i++) // for each client
{
if (in_array($client[$i]['sock'] , $read))
{
$input = socket_read($client[$i]['sock'], 1024);
if ($input == null) {
// Zero-length string means the client disconnected
unset($client[$i]);
continue;
}
$n = trim($input);
if ($n == 'exit') {
// requested disconnect
socket_close($client[$i]['sock']);
unset($client[$i]);
} elseif ($n) {
// strip white space and write back to user
// (ereg_replace was removed in PHP 7; preg_replace does the same job)
$output = preg_replace("/[ \t\n\r]/", "", $input) . chr(0);
socket_write($client[$i]['sock'], $output);
}
} // end if in_array -- clients with nothing to send this pass are simply left alone
}
} // end while
// Close the master sockets
socket_close($sock);
?>
You would execute this by running it from the command line, and it would always have to be running for your PHP clients to connect to it. You could then write a PHP client that would connect to the socket.
<?php
$fp = fsockopen("www.example.com", 80, $errno, $errstr, 30);
if (!$fp) {
echo "$errstr ($errno)<br />\n";
} else {
$out = "GET / HTTP/1.1\r\n";
$out .= "Host: www.example.com\r\n";
$out .= "Connection: Close\r\n\r\n";
fwrite($fp, $out);
while (!feof($fp)) {
echo fgets($fp, 128);
}
fclose($fp);
}
?>
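The fsockopen() snippet above is the generic example from the PHP manual; pointed at the chat server from the first listing, a minimal client might look more like this (same made-up address and port as above):
<?php
// Connect to the socket server from the first listing (192.168.0.100:9000).
$fp = fsockopen("192.168.0.100", 9000, $errno, $errstr, 30);
if (!$fp) {
    echo "$errstr ($errno)\n";
} else {
    $message = isset($_POST['message']) ? $_POST['message'] : 'hello';
    fwrite($fp, $message . "\n");   // send the chat line
    echo fgets($fp, 1024);          // read back the server's echo
    fwrite($fp, "exit\n");          // ask the server to drop this connection
    fclose($fp);
}
?>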
You would have to use some type of ajax to call with jQuery posting the message to this PHP client.
http://devzone.zend.com/209/writing-socket-servers-in-php/
http://php.net/manual/en/function.fsockopen.php
Better to use a node.js server for this. WebSockets aren't cross-browser at the moment (except socket.io for node.js, which works perfectly).
In short: you can't.
The current HTTP/HTML implementation doesn't support server push, so the algorithm of your chat app has to look like this:
A: sends a message
B, C, D: keep polling until a new message has been sent, then fetch it.
So the receivers always have to make a new request and check whether a new message has been sent (an AJAX call or something similar).
That means there is always a delay between the send event and the receive event,
which in turn means the data must be saved somewhere global, like a DB or the file system.
Take a look at:
http://today.java.net/article/2010/03/31/html5-server-push-technologies-part-1
You didn't say it all had to be written in PHP :)
Install RabbitMQ, and then use this chat implementation built on top of websockets and RabbitMQ.
Your PHP is pretty much just 'chat room chrome'. It's possible most of your site would fit within the 5 meg limit of offline HTML5 content, and you have a very flexible (and likely more robust than if you did it yourself) chat system.
It even has 20 messages of chat history if you leave the room.
https://github.com/videlalvaro/rabbitmq-chat
If you need to use just PHP, then you can store the chat messages in session variables; a session can act like an object and hold a lot of information.
If you can use jQuery, then you could just append a paragraph to a div after a message has been sent, but if the site is refreshed the messages will be gone.
Or combine the two: store the messages in the session and update them with jQuery and Ajax.
Try looking into socket libraries like ZeroMQ. They allow near-instant message transport with very little overhead on top of raw TCP and are designed for realtime use. Their infrastructure lets data travel straight between points A and B without being stored anywhere first (although you can still choose to store it).
Here's a tutorial for a chat client in ZeroMQ
I have noticed a few websites, such as hypem.com, show a "You didn't get served" error message when the site is busy, rather than just letting people wait, time out or refresh, which would aggravate what is probably a server load issue.
We are too loaded to process your request. Please click "back" in your
browser and try what you were doing again.
How is this achieved before the server becomes overloaded? It sounds like a really neat way to manage user expectations if a site happens to get overloaded, while also giving the site time to recover.
Another option is this:
$load = sys_getloadavg();
if ($load[0] > 80) {
header('HTTP/1.1 503 Too busy, try again later');
die('Server too busy. Please try again later.');
}
I got it from PHP's site, http://php.net/sys_getloadavg. For reference, the three values sys_getloadavg() returns are the system load averages over the last 1, 5 and 15 minutes, so $load[0] above is the 1-minute average.
You could simply create a 500.html file and have your webserver use that whenever a 50x error is thrown.
I.e. in your apache config:
ErrorDocument 500 /errors/500.html
Or use a php shutdown function to check if the request timeout (which defaults to 30s) has been reached and if so - redirect/render something static (so that rendering the error itself cannot cause problems).
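A rough sketch of that shutdown-function idea (the 25-second threshold and the errors/503.html path are arbitrary assumptions):
<?php
// Register this as early as possible, e.g. at the top of your front controller.
$request_start = microtime(true);

register_shutdown_function(function () use ($request_start) {
    // If we are shutting down close to the execution time limit, assume the
    // request timed out and serve a small static apology page instead.
    if (microtime(true) - $request_start > 25 && !headers_sent()) {
        header('HTTP/1.1 503 Service Unavailable');
        readfile(__DIR__ . '/errors/503.html');   // hypothetical static file
    }
});
?>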
Note that most sites where you'll see a "This site is taking too long to respond" message are effectively generating that message with javascript.
This may be to do with the database connection timing out, but that assumes that your server has a bigger DB load than CPU load when times get tough. If this is the case, you can make your DB connector show the message if no connection happens for 1 second.
You could also use a quick query to the logs table to find out how many hits/second there are and automatically not respond to any more after a certain point in order to preserve QOS for the others. In this case, you would have to set that level manually, based on server logs. An alternative method can be seen here in the Drupal throttle module.
Another alternative would be to use the Apache status page to get information on how many child processes are free and to throttle if there are none left, as per @giltotherescue's answer to this question.
You can restrict the maximum number of connections in the Apache configuration too...
Refer to:
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
http://www.howtoforge.com/configuring_apache_for_maximum_performance
This is not a strictly PHP solution, but you could do what Twitter does, i.e.:
serve a mostly static HTML and Javascript app from a CDN or another server of yours
make the calls to the actual heavy-lifting server-side functions/APIs (PHP in your case) via AJAX from one of your static JS files
so you can set a timeout on your AJAX calls and return a "Seems like loading tweets may take longer than expected"-style notice.
You can use PHP's tick functions to detect when the server hasn't responded within a specified amount of time, then display an error message. Basic usage:
<?php
$connection = false;
function checkConnection( $connectionWaitingTime = 3 )
{
// check connection & time
global $time,$connection;
if( ($t = (time() - $time)) >= $connectionWaitingTime && !$connection){
echo ("<p> Server not responding for <strong>$t</strong> seconds !! </p>");
die("Connection aborted");
}
}
register_tick_function("checkConnection");
$time = time();
declare (ticks=1)
{
while (true) { usleep(1000); } // simulates a loaded server that never responds
// require 'yourapp.php'; // your main app logic would go here instead
$connection = true;
}
The while(true) is just there to simulate a loaded server.
To implement the script on your site, you need to remove the while statement and add your page logic, e.g. dispatch an event or a front controller action.
The $connectionWaitingTime in the checkConnection function is set to time out after 3 seconds, but you can change that to whatever you want.
We would like to implement a method that checks the MySQL load or the total number of sessions on the server, and
if this number is bigger than a set value, then the next visitor of the website is redirected to a static webpage with a message like "Too many users, please try again later".
One way I implemented it in my website is to handle the error message MySQL outputs when it denies a connection.
Sample PHP code:
function updateDefaultMessage($userid, $default_message, $dttimestamp, $db) {
$queryClients = "UPDATE users SET user_default_message = '$default_message', user_dtmodified = '$dttimestamp' WHERE user_id = $userid";
$resultClients = mysql_query($queryClients, $db);
if (!$resultClients) {
log_error("[MySQL] code:" . mysql_errno($db) . " | msg:" . mysql_error($db) . " | query:" . $queryClients , E_USER_WARNING);
$result = false;
} else {
$result = true;
}
return $result;
}
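The connection-denial case mentioned at the top could be handled in the same style; a sketch using the same legacy mysql_* API, where error 1040 is MySQL's "Too many connections":
<?php
// Try to connect; if MySQL refuses because it is overloaded, send the visitor
// to a static "too many users" page instead of showing a fatal error.
$db = @mysql_connect('localhost', 'user', 'pass');
if (!$db) {
    if (mysql_errno() == 1040) {               // 1040 = "Too many connections"
        header('Location: /too-busy.html');    // hypothetical static page
        exit;
    }
    die('Database error: ' . mysql_error());
}
?>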
In the JS:
function UpdateExistingMsg(JSONstring)
{
var MessageParam = "msgjsonupd=" + JSON.encode(JSONstring);
var myRequest = new Request({url:urlUpdateCodes, onSuccess: function(result) {if (!result) window.open(foo);} , onFailure: function(result) {bar}}).post(MessageParam);
}
I hope the above code makes sense. Good luck!
Here are some alternatives to user-lock-out that I have used in the past to decrease load:
APC Cache
PHP APC cache (speeds up access to your scripts via in memory caching of the scripts): http://www.google.com/search?gcx=c&ix=c2&sourceid=chrome&ie=UTF-8&q=php+apc+cache
I don't think that'll solve "too many mysql connections" for you, but it should really help your website's speed in general, and that'll help mysql threads open and close more quickly, freeing resources. It's a pretty easy install on a Debian system, and hopefully on anything with package management (perhaps harder if you're using a shared server).
Cache the results of common MySQL queries, even if only within the same script execution. If you know that you're calling for certain data in multiple places (e.g. client_info() is one that I do a lot), cache it via a static variable keyed on the info parameter, e.g.:
function client_info($incoming_client_id) {
    static $client_info;
    static $client_id;
    if ($incoming_client_id == $client_id) {
        return $client_info;
    } else {
        // do stuff to get new client info, then remember it for next time:
        // $client_id   = $incoming_client_id;
        // $client_info = ...; // the query result
        return $client_info;
    }
}
You also talk about having too many sessions. It's hard to tell whether you're referring to $_SESSION sessions or just browsing users. Too many $_SESSION sessions may be an indication that you need to move away from using $_SESSION as a storage device, while too many browsing users implies that you may want to selectively send caching headers for high-use pages. For example, almost all of my PHP scripts return the default caching, which is no cache, except my homepage, which sends headers that allow browsers to cache it for a short one-hour period, to reduce overall load.
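Those caching headers can be as simple as this (a sketch for the one-hour case mentioned above):
<?php
// Allow browsers and intermediate caches to reuse this page for one hour.
header('Cache-Control: public, max-age=3600');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
?>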
Overall, I would definitely look into your caching procedures in general in addition to setting a hard limit on usage that should ideally never be hit.
This should not be done in PHP. You should do it naturally by means of existing hard limits.
For example, if you configure Apache to a known maximal number of clients (MaxClients), once it reaches the limit it would reply with error code 503, which, in turn, you can catch on your nginx frontend and show a static webpage:
proxy_intercept_errors on;
error_page 503 /503.html;
location = /503.html {
root /var/www;
}
This isn't as hard to do as it may sound.
PHP isn't the right tool for the job here because once you really hit the hard limit, you will be doomed.
The seemingly simplest answer would be to count the number of session files in ini_get("session.save_path"), but giving the web app access to that directory is a security problem.
The second method is to have a database that atomically counts the number of open sessions. For small numbers of sessions where performance really isn't an issue, but you want to be especially accurate to the # of open sessions, this will be a good choice.
The third option that I recommend would be to set up a cron job that counts the number of files in the ini_get('session.save_path') directory, then prints that number to a file in some public area on the filesystem (only if it has changed) that is visible to the web app. This job can be configured to run as frequently as you'd like -- say once per second if you want better resolution. Your bootstrap loader will open this file for reading, check the number, and serve the static page if it is above X.
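A sketch of both halves of that third option; the paths, the sess_ file prefix and the limit of 500 are all assumptions:
<?php
// count-sessions.php -- run from cron as often as you need the resolution.
$count = count(glob(rtrim(ini_get('session.save_path'), '/') . '/sess_*'));
$file  = '/var/www/shared/session_count.txt';          // hypothetical location readable by the web app
if (!file_exists($file) || (int) file_get_contents($file) !== $count) {
    file_put_contents($file, $count);                  // only rewrite when the number changed
}

// bootstrap.php -- the very first thing the web app does.
$open = (int) @file_get_contents('/var/www/shared/session_count.txt');
if ($open > 500) {                                     // arbitrary threshold "X"
    header('HTTP/1.1 503 Service Unavailable');
    readfile('/var/www/static/too-busy.html');         // hypothetical static page
    exit;
}
?>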
Of course, this third method won't create a hard limit. But if you're just looking for a general threshold, this seems like a good option.
I have a PHP script that acts as a JSON API to my backend database.
Meaning, you send it an HTTP request like: http://example.com/json/?a=1&b=2&c=3... it will return a json object with the result set from my database.
PHP works great for this because it's literally about 10 lines of code.
But I also know that PHP is slow and this is an API that's being called about 40x per second at times and PHP is struggling to keep up.
Is there a way that I can compile my PHP script to a faster executing format? I'm already using PHP-APC which is a bytecode optimization for PHP as well as FastCGI.
Or, does anyone recommend a language I rewrite the script in so that Apache can still process the example.com/json/ requests?
Thanks
UPDATE: I just ran some benchmarks:
The PHP script takes 0.6 seconds to complete.
If I take the SQL generated by that PHP script and run the query from the same web server, but directly from the MySQL command line (so network latency is still in play), the fetched result set takes only 0.09 seconds to complete.
As you notice, PHP is literally 1 order of magnitude slower in generating the results. Network does not appear to be the major bottleneck in this case, though I agree it typically is the root cause.
Before you go optimizing something, first figure out if it's a problem. Considering it's only 10 lines of code (according to you) I very much suspect you don't have a problem. Time how long the script takes to execute. Bear in mind that network latency will typically dwarf trivial script execution times.
In other words: don't solve a problem until you have a problem.
You're already using an opcode cache (APC). It doesn't get much faster than that. More to the point, it rarely needs to get any faster than that.
If anything you'll have problems with your database. Too many connections (unlikely at 20x per second), too slow to connect or the big one: query is too slow. If you find yourself in this situation 9 times out of 10 effective indexing and database tuning is sufficient.
In the cases where it isn't is where you go for some kind of caching: memcached, beanstalkd and the like.
But honestly 20x per second means that these solutions are almost certainly overengineering for something that isn't a problem.
I've had a lot of luck using PHP, memcached and nginx's memcache module together for very fast results. The easiest way is to just use the full URL as the cache key.
I'll assume this URL:
/widgets.json?a=1&b=2&c=3
Example PHP code:
<?php
$widgets_cache_key = $_SERVER['REQUEST_URI'];
// connect to memcache (requires memcache pecl module)
$m = new Memcache;
$m->connect('127.0.0.1', 11211);
// try to get data from cache
$data = $m->get($widgets_cache_key);
if(empty($data)){
// data is not in cache. grab it.
$r = mysql_query("SELECT * FROM widgets WHERE ...;");
while($row = mysql_fetch_assoc($r)){
$data[] = $row;
}
// now store data for next time.
$m->set($widgets_cache_key, $data);
}
echo json_encode($data);
?>
That in itself provides a huge performance boost. If you were to then use nginx as a front-end for Apache (put Apache on 8080 and nginx on 80), you could do this in your nginx config:
worker_processes 2;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
access_log off;
sendfile on;
keepalive_timeout 5;
tcp_nodelay on;
gzip on;
upstream apache {
server 127.0.0.1:8080;
}
server {
listen 80;
server_name _;
location / {
if ($request_method = POST) {
proxy_pass http://apache;
break;
}
set $memcached_key $request_uri;
memcached_pass 127.0.0.1:11211;
default_type text/html;
proxy_intercept_errors on;
error_page 404 502 = /fallback;
}
location /fallback {
internal;
proxy_pass http://apache;
break;
}
}
}
Notice the set $memcached_key $request_uri; line. This sets the memcached cache key to REQUEST_URI, just like the PHP script. So if nginx discovers a cache entry with that key it will serve it directly from memory, and you never have to touch PHP or Apache. Very fast.
There is an unofficial Apache memcache module as well. I haven't tried it, but if you don't want to mess with nginx it may help you too.
The first rule of optimization is to make sure you actually have a performance problem. The second rule is to figure out where the performance problem is by measuring your code. Don't guess. Get hard measurements.
PHP is not going to be your bottleneck. I can pretty much guarantee that. Network bandwidth and latency will dwarf the small overhead of using PHP vs. a compiled C program. And if not network speed, then it will be disk I/O, or database access, or a really bad algorithm, or a host of other more likely culprits than the language itself.
If your database is very read-heavy (I'm guessing it is) then a basic caching implementation would help, and memcached would make it very fast.
Let me change your URL structure for this example:
/widgets.json?a=1&b=2&c=3
For each call to your web service, you'd be able to parse the GET arguments and use those to create a key to use in your cache. Let's assume you're querying for widgets. Example code:
<?php
// a function to provide a consistent cache key for your resource
function cache_key($type, $params = array()){
if(empty($type)){
return false;
}
// order your parameters alphabetically by key.
ksort($params);
return sha1($type . serialize($params));
}
// you get the same cache key no matter the order of parameters
var_dump(cache_key('widgets', array('a' => 3, 'b' => 7, 'c' => 5)));
var_dump(cache_key('widgets', array('b' => 7, 'a' => 3, 'c' => 5)));
// now let's use some GET parameters.
// you'd probably want to sanitize your $_GET array, however you want.
$_GET = sanitize($_GET);
// assuming URL of /widgets.json?a=1&b=2&c=3 results in the following func call:
$widgets_cache_key = cache_key('widgets', $_GET);
// connect to memcache (requires memcache pecl module)
$m = new Memcache;
$m->connect('127.0.0.1', 11211);
// try to get data from cache
$data = $m->get($widgets_cache_key);
if(empty($data)){
// data is not in cache. grab it.
$r = mysql_query("SELECT * FROM widgets WHERE ...;");
while($row = mysql_fetch_assoc($r)){
$data[] = $row;
}
// now store data for next time.
$m->set($widgets_cache_key, $data);
}
echo json_encode($data);
?>
You're already using APC opcode caching which is good. If you find you're still not getting the performance you need, here are some other things you could try:
1) Put a Squid caching proxy in front of your web server. If your requests are highly cacheable, this might make good sense.
2) Use memcached to cache expensive database lookups.
Consider that if you're handling database updates, your MySQL performance is what, IMO, needs attention. I would expand the test harness like so:
run mytop on the dbserver
run ab (apache bench) from a client, like your desktop
run top or vmstat on the webserver
And watch for these things:
updates to the table forcing reads to wait (MyISAM engine)
high load on the webserver (could indicate low memory conditions on webserver)
high disk activity on webserver, possibly from logging or other web requests causing random seeking of uncached files
memory growth of your apache processes. If your result sets are getting transformed into large associative arrays, or getting serialized/deserialized, these can become expensive memory allocation operations. Your code might need to avoid building the whole result set in memory at once and process one row at a time instead.
I often wrap my db queries with a little profiler adapter that I can toggle to log unusually long query times, like so:
function query( $sql, $dbcon, $thresh ) {
$delta['begin'] = microtime( true );
$result = $dbcon->query( $sql );
$delta['finish'] = microtime( true );
$delta['t'] = $delta['finish'] - $delta['begin'];
if( $delta['t'] > $thresh )
error_log( "query took {$delta['t']} seconds; query: $sql" );
return $result;
}
Personally, I prefer using xcache to APC, because I like the diagnostics page it comes with.
Chart your performance over time. Track the number of concurrent connections and see if that correlates to performance issues. You can grep the number of http connections from netstat from a cronjob and log that for analysis later.
Consider enabling your mysql query cache, too.
Please see this question. You have several options. Yes, PHP can be compiled to native ELF (and possibly even FatELF) format. The problem is all of the Zend creature comforts.
Since you already have APC installed, it can be used (similar to the memcached recommendations) to store objects. If you can cache your database results, do it!
http://us2.php.net/manual/en/function.apc-store.php
http://us2.php.net/manual/en/function.apc-fetch.php
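In the same spirit as the memcached examples above, the APC version is only a couple of calls (a sketch; the key scheme and the 60-second TTL are arbitrary):
<?php
// Build a cache key from the request URI, as in the memcached examples above.
$key  = 'widgets_' . sha1($_SERVER['REQUEST_URI']);
$data = apc_fetch($key, $hit);

if (!$hit) {
    // Not cached yet: run the (expensive) query and keep the result for 60 seconds.
    $r = mysql_query("SELECT * FROM widgets WHERE ...;");
    $data = array();
    while ($row = mysql_fetch_assoc($r)) {
        $data[] = $row;
    }
    apc_store($key, $data, 60);
}

echo json_encode($data);
?>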
From your benchmark it looks like the php code is indeed the problem. Can you post the code?
What happens when you remove the MySQL code and just put in a hard-coded string representing what you'll get back from the db?
Since it takes .60 seconds from php and only .09 seconds from a MySQL CLI I will guess that the connection creation is taking too much time. PHP creates a new connection per request by default and that can be slow sometimes.
Think about it, depending on your env and your code you will:
Resolve the hostname of the MySQL server to an IP
Open a connection to the server
Authenticate to the server
Finally run your query
Have you considered using persistent MySQL connections or connection pooling?
It effectively allows you to jump right to query step from above.
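A sketch of the persistent-connection variant using PDO (mysql_pconnect() would be the legacy equivalent); the DSN, credentials and query are made up:
<?php
// A persistent connection is kept open and reused by the same PHP process
// across requests, so the resolve/connect/authenticate steps above are skipped.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
));

$stmt = $pdo->query('SELECT * FROM widgets LIMIT 10');   // hypothetical query
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
?>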
Caching is great for performance as well. I think others have covered this pretty well already.