How can I write a PHP script that takes a long time to compile? I want to do this to test whether the OPcache extension works.
Later edit:
When a PHP script is loaded, the code is compiled into bytecode, and this bytecode is then interpreted by the Zend engine. Compilation usually takes a few milliseconds, but I need to make it take much longer so I can test the OPcache extension from PHP 5.5. This extension caches a script's bytecode so that the script doesn't need to be compiled again.
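For context, one simple way to get a measurably long compile time (this generator is my own illustration, not part of the original post; the file name big.php is arbitrary) is to emit a script with hundreds of thousands of trivial statements:

```php
<?php
// Build the source text for a large, trivially-compilable PHP script.
function build_big_script($lines)
{
    $code = "<?php\n";
    for ($i = 0; $i < $lines; $i++) {
        // One trivial assignment per line, e.g. "$v42 = 42;"
        $code .= '$v' . $i . ' = ' . $i . ";\n";
    }
    return $code;
}

// ~600,000 lines: with OPcache enabled, only the first request pays
// the compile cost; later requests reuse the cached bytecode.
file_put_contents('big.php', build_big_script(600000));
```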
As @PaulCrovella said in the comments, what I needed was ApacheBench.
Running ab http://localhost/index.php against a script of about 600,000 lines of code gave the following results:
On the first benchmark test:
Server Software: Apache/2.4.9
Server Hostname: localhost
Server Port: 80
Document Path: /index.php
Document Length: 4927 bytes
Concurrency Level: 1
Time taken for tests: 0.944 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 5116 bytes
HTML transferred: 4927 bytes
Requests per second: 1.06 [#/sec] (mean)
Time per request: 944.054 [ms] (mean)
Time per request: 944.054 [ms] (mean, across all concurrent requests)
Transfer rate: 5.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 944 944 0.0 944 944
Waiting: 939 939 0.0 939 939
Total: 944 944 0.0 944 944
On the second benchmark test:
Server Software: Apache/2.4.9
Server Hostname: localhost
Server Port: 80
Document Path: /index.php
Document Length: 4927 bytes
Concurrency Level: 1
Time taken for tests: 0.047 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 5116 bytes
HTML transferred: 4927 bytes
Requests per second: 21.28 [#/sec] (mean)
Time per request: 47.003 [ms] (mean)
Time per request: 47.003 [ms] (mean, across all concurrent requests)
Transfer rate: 106.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 47 47 0.0 47 47
Waiting: 43 43 0.0 43 43
Total: 47 47 0.0 47 47
Related
We have a large WooCommerce website hosted on a Google Cloud VM (E2: 4 vCPUs, 16 GB RAM, 100 GB SSD storage). We recently built a mobile application using Flutter and the WordPress/WooCommerce API, but we are facing three issues:
If two users open the app at the same time, data loads sequentially rather than in parallel; the server seems to handle only a single request at a time.
Fetching data is very slow; loading a product can take more than 5 seconds even on a high-speed Internet connection.
While a user is in the app, the website takes longer to load, again as if the server is handling only a single request at a time.
Environment Details
Server: NGINX + MySQL + PHP-FPM
PHP 7.4
OS: CentOS 8
All tables use the InnoDB engine
We have done the following:
Increased the php.ini memory limit to 15 GB
Increased the php.ini timeout limit to 3000 seconds
Put the website behind Cloudflare and installed the W3 Total Cache plugin
Increased the NGINX max clients to 150
I am looking for suggestions on how to make the server/MySQL handle a lot of concurrent requests.
PHP-FPM Config
pm.max_children = 200
pm.start_servers = 50
pm.min_spare_servers = 50
pm.max_spare_servers = 150
;pm.max_requests = 500
;php_admin_value[memory_limit] = 128M
;request_terminate_timeout = 0
;rlimit_core = 0
;rlimit_files = 1024
;pm.process_idle_timeout = 10s;
;pm.max_requests = 500
pm = dynamic
; process.priority = -19
PHP ini
;user_ini.cache_ttl = 300
implicit_flush = Off
;unserialize_max_depth = 4096
;realpath_cache_size = 4096k
;realpath_cache_ttl = 120
zend.exception_ignore_args = On
max_input_time = 6000
max_execution_time = 3000
;max_input_nesting_level = 64
memory_limit = 15000M
post_max_size = 800M
;mysqlnd.mempool_default_size = 16000
;mysqlnd.net_read_timeout = 31536000
;mysqlnd.net_cmd_buffer_size = 2048
soap.wsdl_cache_enabled=1
soap.wsdl_cache_ttl=86400
PHP-FPM www-status while the app is loading:
pool: www
process manager: ondemand
start time: 03/Feb/2023:11:59:36 +0000
start since: 25801
accepted conn: 14613
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 0
active processes: 20
total processes: 20
max active processes: 20
max children reached: 53
slow requests: 0
************************
pid: 14270
state: Running
start time: 03/Feb/2023:19:08:39 +0000
start since: 58
requests: 17
request duration: 563199
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14272
state: Running
start time: 03/Feb/2023:19:09:03 +0000
start since: 34
requests: 12
request duration: 267198
request method: GET
request URI: /index.php
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14273
state: Finishing
start time: 03/Feb/2023:19:09:03 +0000
start since: 34
requests: 7
request duration: 5577206
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14274
state: Running
start time: 03/Feb/2023:19:09:03 +0000
start since: 34
requests: 7
request duration: 2475191
request method: GET
request URI: /index.php?consumer_key=xx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14275
state: Running
start time: 03/Feb/2023:19:09:03 +0000
start since: 34
requests: 8
request duration: 2731158
request method: GET
request URI: /index.php?status=publish&category=189&orderby=popularity&per_page=5&consumer_key=xxx&consumer_secret=yy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14276
state: Running
start time: 03/Feb/2023:19:09:04 +0000
start since: 33
requests: 8
request duration: 934026
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14277
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 3243188
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14278
state: Finishing
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 5822217
request method: GET
request URI: /index.php?consumer_key=xx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14279
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 7
request duration: 220
request method: GET
request URI: /www-status?full
content length: 0
user: -
script: -
last request cpu: 0.00
last request memory: 0
************************
pid: 14280
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 5309224
request method: GET
request URI: /index.php?status=publish&per_page=20&page=1&skip_cache=1&stock_status=instock&consumer_key=xxx&consumer_secret=yy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14281
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 7
request duration: 2048190
request method: GET
request URI: /index.php?consumer_key=xx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14282
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 5482457
request method: GET
request URI: /index.php?consumer_key=xx&consumer_secret=yy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14283
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 5262082
request method: GET
request URI: /index.php?consumer_key=xx&consumer_secret=yy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14288
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 7
request duration: 330128
request method: GET
request URI: /index.php?per_page=100&page=1&per_page=10&consumer_key=xxx&consumer_secret=yy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14291
state: Running
start time: 03/Feb/2023:19:09:07 +0000
start since: 30
requests: 6
request duration: 4372096
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14293
state: Running
start time: 03/Feb/2023:19:09:10 +0000
start since: 27
requests: 6
request duration: 710233
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14296
state: Running
start time: 03/Feb/2023:19:09:12 +0000
start since: 25
requests: 5
request duration: 4356106
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14297
state: Running
start time: 03/Feb/2023:19:09:13 +0000
start since: 24
requests: 5
request duration: 3993473
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14299
state: Running
start time: 03/Feb/2023:19:09:15 +0000
start since: 22
requests: 4
request duration: 5846079
request method: GET
request URI: /index.php?status=publish&category=190&orderby=popularity&per_page=5&consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
************************
pid: 14269
state: Running
start time: 03/Feb/2023:19:08:37 +0000
start since: 60
requests: 15
request duration: 355854
request method: GET
request URI: /index.php?consumer_key=xxx&consumer_secret=yyy
content length: 0
user: -
script: /var/www/mydomain.com/html/index.php
last request cpu: 0.00
last request memory: 0
First of all, make sure your FPM config is actually being read.
You have set pm.max_children to 200 together with a memory_limit of 15 GB. To support that configuration you would need more than 3000 GB of RAM (200 x 15 GB), which you don't have.
My suggestion:
Reduce the maximum memory usage to 512 MB:
php.ini
memory_limit = 512M
FPM config:
Set pm to ondemand:
pm=ondemand
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.process_idle_timeout = 10
Debugging further
add this to fpm-config
pm.status_path = /www-status
nginx conf - (you might need to adjust this to fit yours)
location ~ ^/(www-status)$ {
fastcgi_pass 127.0.0.1:9000; # replace this or use the unix socket
fastcgi_param SCRIPT_FILENAME
$document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
}
Head over to site.com/www-status and you'll see the FPM usage info; with this you can find out how much capacity you actually need.
Also try NGINX caching; it will be a huge help for your server:
https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/
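As a concrete sketch, FastCGI micro-caching in NGINX could look roughly like this (the cache path, zone name, and timings are illustrative assumptions, not taken from the original setup; both directives belong inside the http context):

```nginx
# Define a cache zone on disk (path and sizes are examples).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m inactive=60m;

server {
    location ~ \.php$ {
        fastcgi_cache WPCACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 60s;                       # cache successful responses briefly
        fastcgi_cache_use_stale error timeout updating;    # serve stale content while refreshing
        add_header X-Cache-Status $upstream_cache_status;  # expose hit/miss for debugging
        fastcgi_pass 127.0.0.1:9000;                       # or your PHP-FPM unix socket
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Note that caching WooCommerce API responses is only safe for endpoints that are not user-specific; cart, checkout, and logged-in requests should bypass the cache.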
I have a long-running worker that iterates over 5M records using batch processing; I use Laravel's standard chunkById function for this.
As far as I can see, I have not exceeded 200M of memory usage, which I can see in the output of docker stats:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
fd05760e5d96 case-place-partners_case-place-partners_app_1 21.71% 140.5MiB / 7.666GiB 1.79% 919MB / 103MB 113MB / 21.4MB 19
Additionally, I have memory_get_usage() and memory_get_usage(true) calls everywhere, and I don't see numbers higher than 52428800 (50 MB).
Output of journalctl -k | grep -i -e memory -e oom:
Aug 19 09:28:41 mirokko-i3 kernel: Memory: 8023772K/8259584K available (12291K kernel code, 1319K rwdata, 3900K rodata, 1612K init, 3616K bss, 235812K reserved, 0K cma-reserved)
Aug 19 09:28:41 mirokko-i3 kernel: Freeing SMP alternatives memory: 32K
Aug 19 09:28:41 mirokko-i3 kernel: x86/mm: Memory block size: 128MB
Aug 19 09:28:41 mirokko-i3 kernel: Freeing initrd memory: 9024K
Aug 19 09:28:41 mirokko-i3 kernel: check: Scanning for low memory corruption every 60 seconds
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused decrypted memory: 2040K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 1612K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 2012K
Aug 19 09:28:41 mirokko-i3 kernel: Freeing unused kernel image memory: 196K
Aug 19 09:28:57 mirokko-i3 kernel: [TTM] Zone kernel: Available graphics memory: 4019344 KiB
Aug 19 09:28:57 mirokko-i3 kernel: [TTM] Zone dma32: Available graphics memory: 2097152 KiB
Output of docker inspect container_id located here
It seems Laravel enforces a job timeout. Run the worker with --timeout=0 (e.g. php artisan queue:work --timeout=0) to disable this feature, or set your own value.
I'm using PHP cURL with nginx as a proxy. Here is my code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch, CURLOPT_PROXY, $proxy);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$curl_scraped_page = curl_exec($ch);
curl_close($ch);
echo $curl_scraped_page;
After this has been running for some time, nginx becomes extremely slow and sometimes returns a 500 error. The log says:
failed (24: Too many open files)
some more details:
root@proxy-s2:~# ulimit -Hn
4096
root@proxy-s2:~# ulimit -Sn
1024
There is nothing else running on the server, and no other script is using this proxy.
Is it an nginx bug? How can it be resolved?
Or, what else could it be, and how can that be resolved?
I didn't change the default nginx configuration.
Restarting nginx solved the problem (temporarily, I guess).
Here is my nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen 8080;
location / {
resolver 8.8.8.8;
proxy_pass http://$http_host$uri$is_args$args;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
top
top - 09:23:55 up 21:51, 1 user, load average: 0.09, 0.13, 0.08
KiB Mem: 496164 total, 444328 used, 51836 free, 12300 buffers
KiB Swap: 0 total, 0 used, 0 free. 336228 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8 root 20 0 0 0 0 S 0.0 0.0 4:57.56 rcuos/0
4904 nobody 20 0 97796 14128 1012 R 0.0 2.8 4:19.82 nginx
7 root 20 0 0 0 0 S 0.0 0.0 2:11.35 rcu_sched
3 root 20 0 0 0 0 S 0.0 0.0 0:18.50 ksoftirqd/0
832 root 20 0 139208 6808 172 S 0.0 1.4 0:13.11 nova-agent
45 root 20 0 0 0 0 S 0.0 0.0 0:06.21 xenbus
74 root 20 0 0 0 0 S 0.0 0.0 0:03.03 kworker/u30:1
155 root 20 0 0 0 0 S 0.0 0.0 0:02.73 jbd2/xvda1-8
46 root 20 0 0 0 0 R 0.0 0.0 0:02.39 kworker/0:1
57 root 20 0 0 0 0 S 0.0 0.0 0:01.91 kswapd0
1 root 20 0 33448 2404 1136 S 0.0 0.5 0:01.47 init
391 root 20 0 18048 1336 996 S 0.0 0.3 0:00.97 xe-daemon
1034 syslog 20 0 255840 2632 784 S 0.0 0.5 0:00.90 rsyslogd
1107 root 20 0 61364 3048 2364 S 0.0 0.6 0:00.73 sshd
40 root rt 0 0 0 0 S 0.0 0.0 0:00.29 watchdog/0
316 root 20 0 19472 456 252 S 0.0 0.1 0:00.12 upstart-udev-br
6 root 20 0 0 0 0 S 0.0 0.0 0:00.11 kworker/u30:0
1098 root 20 0 23652 1036 784 S 0.0 0.2 0:00.08 cron
7935 root 20 0 105632 4272 3284 S 0.0 0.9 0:00.07 sshd
330 root 20 0 51328 1348 696 S 0.0 0.3 0:00.06 systemd-udevd
7953 root 20 0 22548 3428 1680 S 0.0 0.7 0:00.05 bash
678 root 20 0 15256 524 268 S 0.0 0.1 0:00.04 upstart-socket-
8647 root 20 0 25064 1532 1076 R 0.0 0.3 0:00.03 top
mpstat
root@proxy-s2:~# mpstat
Linux 3.13.0-55-generic (proxy-s2) 07/09/2015 _x86_64_ (1 CPU)
09:22:17 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
09:22:17 AM all 0.94 0.00 1.63 0.16 0.00 2.16 0.92 0.00 0.00 94.20
iostat
root@proxy-s2:~# iostat
Linux 3.13.0-55-generic (proxy-s2) 07/09/2015 _x86_64_ (1 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.94 0.00 3.80 0.16 0.92 94.19
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
xvdc 0.01 0.02 0.00 1710 0
xvda 3.16 4.19 88.56 322833 6815612
Please try the following changes in your limits.conf (the leading * is the limits.conf domain field and applies the limit to all users):
vi /etc/security/limits.conf
For open files:
*    soft    nofile    64000
*    hard    nofile    64000
For max user processes:
*    soft    nproc    47758
*    hard    nproc    47758
For max memory size:
*    soft    rss    unlimited
*    hard    rss    unlimited
For virtual memory:
*    soft    as    unlimited
*    hard    as    unlimited
Just put this at the top of your nginx configuration file:
worker_rlimit_nofile 40000;
events {
worker_connections 4096;
}
I think I found the problem. Here is the nginx error.log:
2015/07/09 14:17:27 [error] 15390#0: *7549 connect() failed (111: Connection refused) while connecting to upstream, client: 23.239.194.233, server: , request: "GET http://www.lgqfz.com/ HTTP/1.1", upstream: "http://127.0.0.3:80/", host: "www.lgqfz.com", referrer: "http://www.baidu.com"
2015/07/09 14:17:29 [error] 15390#0: *8121 connect() failed (111: Connection refused) while connecting to upstream, client: 204.44.65.119, server: , request: "GET http://www.lgqfz.com/ HTTP/1.1", upstream: "http://127.0.0.3:80/", host: "www.lgqfz.com", referrer: "http://www.baidu.com"
2015/07/09 14:17:32 [error] 15390#0: *8650 connect() failed (101: Network is unreachable) while connecting to upstream, client: 78.47.53.98, server: , request: "GET http://188.8.253.161/ HTTP/1.1", upstream: "http://188.8.253.161:80/", host: "188.8.253.161", referrer: "http://188.8.253.161/"
It was a DDoS attack on my proxy, which I stopped by allowing only my own IP to access it.
I've found this to be common lately: when you crawl a site and the site identifies you as a crawler, it will sometimes DDoS your proxy until it goes down. One example of such a site is amazon.com.
My understanding is that Memcached::increment is atomic.
I have this code:
include('../clibootstrap.php');
$key = 'ad_1';
$mc = $app['memcache'];
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL,true);
usleep(10);
$mc->increment($key, 1, 0);
die('OK');
$mc is an instance of \Memcached
Now I try to benchmark it using Apache Bench:
# ab -n2000 -c100 http://somehost.com/foo.php
Benchmarking somehost.com (be patient)
Completed 200 requests
Completed 400 requests
Completed 600 requests
Completed 800 requests
Completed 1000 requests
Completed 1200 requests
Completed 1400 requests
Completed 1600 requests
Completed 1800 requests
Completed 2000 requests
Finished 2000 requests
Server Software: Apache/2.2.22
Server Hostname: somehost.com
Server Port: 80
Document Path: /foo.php
Document Length: 2 bytes
Concurrency Level: 100
Time taken for tests: 4.821 seconds
Complete requests: 2000
Failed requests: 0
Write errors: 0
Total transferred: 352000 bytes
HTML transferred: 4000 bytes
Requests per second: 414.82 [#/sec] (mean)
Time per request: 241.067 [ms] (mean)
Time per request: 2.411 [ms] (mean, across all concurrent requests)
Transfer rate: 71.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.0 0 5
Processing: 33 237 34.5 237 323
Waiting: 33 237 34.5 237 323
Total: 38 237 34.0 237 323
Percentage of the requests served within a certain time (ms)
50% 237
66% 255
75% 263
80% 266
90% 274
95% 279
98% 287
99% 294
100% 323 (longest request)
Now I expect the value of the 'ad_1' key to be exactly 2000, so let's check with telnet:
# telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
get ad_1
VALUE ad_1 0 4
1997
END
version
VERSION 1.4.13
stats
STAT pid 5527
STAT uptime 248
STAT time 1414164851
STAT version 1.4.13
STAT libevent 2.0.16-stable
STAT pointer_size 64
STAT rusage_user 0.092005
STAT rusage_system 0.268016
STAT curr_connections 5
STAT total_connections 2006
STAT connection_structures 23
STAT reserved_fds 20
STAT cmd_get 1
STAT cmd_set 0
STAT cmd_flush 0
STAT cmd_touch 0
STAT get_hits 1
STAT get_misses 0
STAT delete_misses 0
STAT delete_hits 0
STAT incr_misses 0
STAT incr_hits 1997
STAT decr_misses 0
STAT decr_hits 0
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT touch_hits 0
STAT touch_misses 0
STAT auth_cmds 0
STAT auth_errors 0
STAT bytes_read 144026
STAT bytes_written 112049
STAT limit_maxbytes 67108864
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT threads 4
STAT conn_yields 0
STAT hash_power_level 16
STAT hash_bytes 524288
STAT hash_is_expanding 0
STAT expired_unfetched 0
STAT evicted_unfetched 0
STAT bytes 73
STAT curr_items 1
STAT total_items 1998
STAT evictions 0
STAT reclaimed 0
END
I am using php-memcached version 2.2.
Any ideas as to why the ad_1 value is not 2000?
And how can I make sure that Memcached::increment() is atomic?
Also, in the line VALUE ad_1 0 4, what does the 4 mean?
I used a semaphore, changing my code to:
include('../clibootstrap.php');
$key = 'ad_1';
$mc = $app['memcache'];
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
$sem = sem_get(1234, 1);
if (sem_acquire($sem)) {
    $mc->increment($key, 1, 1);
    sem_release($sem);
}
usleep(10);
die('OK');
and now it works.
i.e. to replace Apache with a PHP application that sends back HTML files when HTTP requests for .php files arrive?
How practical is this?
It's already been done, but if you want to know how practical it is, I suggest you install it and test with ApacheBench to see the results:
http://nanoweb.si.kz/
Edit: a benchmark from the site:
Server Software: aEGiS_nanoweb/2.0.1-dev
Server Hostname: si.kz
Server Port: 80
Document Path: /six.gif
Document Length: 28352 bytes
Concurrency Level: 20
Time taken for tests: 3.123 seconds
Complete requests: 500
Failed requests: 0
Broken pipe errors: 0
Keep-Alive requests: 497
Total transferred: 14496686 bytes
HTML transferred: 14337322 bytes
Requests per second: 160.10 [#/sec] (mean)
Time per request: 124.92 [ms] (mean)
Time per request: 6.25 [ms] (mean, across all concurrent requests)
Transfer rate: 4641.91 [Kbytes/sec] received
Connnection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.9 0 13
Processing: 18 100 276.4 40 2739
Waiting: 1 97 276.9 39 2739
Total: 18 100 277.8 40 2750
Percentage of the requests served within a certain time (ms)
50% 40
66% 49
75% 59
80% 69
90% 146
95% 245
98% 449
99% 1915
100% 2750 (last request)
Apart from Nanoweb, there is also a standard PEAR component for building standalone applications with a built-in web server:
http://pear.php.net/package/HTTP_Server
Likewise, the upcoming PHP 5.4 release is expected to include a built-in mini web server that facilitates simple file serving: https://wiki.php.net/rfc/builtinwebserver
php -S localhost:8000
Why reinvent the wheel? Apache and other web servers have had a lot of work put into them by a lot of skilled people to make them stable and do everything you want them to do.
Just FYI: PHP 5.4 has been released with a built-in web server. Now you can run a local server with very simple commands:
$ cd ~/public_html
$ php -S localhost:8000
You'll then see the requests and responses like this:
PHP 5.4.0 Development Server started at Thu Jul 21 10:43:28 2011
Listening on localhost:8000
Document root is /home/me/public_html
Press Ctrl-C to quit.
[Thu Jul 21 10:48:48 2011] ::1:39144 GET /favicon.ico - Request read
[Thu Jul 21 10:48:50 2011] ::1:39146 GET / - Request read
[Thu Jul 21 10:48:50 2011] ::1:39147 GET /favicon.ico - Request read
[Thu Jul 21 10:48:52 2011] ::1:39148 GET /myscript.html - Request read
[Thu Jul 21 10:48:52 2011] ::1:39149 GET /favicon.ico - Request read
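The built-in server can also take an optional router script as a second argument (php -S localhost:8000 router.php). Here is a minimal sketch; the front controller path and the static-extension list are my own illustrative assumptions:

```php
<?php
// router.php (illustrative): start with `php -S localhost:8000 router.php`.

// Decide whether a request URI looks like a static asset.
function is_static_request($uri)
{
    $static = array('css', 'js', 'png', 'jpg', 'gif', 'ico');
    $path = parse_url($uri, PHP_URL_PATH);           // strip the query string
    $ext = strtolower((string) pathinfo($path, PATHINFO_EXTENSION));
    return in_array($ext, $static, true);
}

if (php_sapi_name() === 'cli-server') {
    if (is_static_request($_SERVER['REQUEST_URI'])) {
        // Returning false tells the built-in server to serve the file itself.
        return false;
    }
    require __DIR__ . '/index.php'; // assumed front controller
}
```

Everything that isn't a static asset is routed to the front controller, which mimics the rewrite rules you'd normally put in Apache or nginx.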