i.e. to replace Apache with a PHP application that sends back HTML when HTTP requests for .php files come in?
How practical is this?
It's already been done, but if you want to know how practical it is, I suggest you install it and test with ApacheBench to see the results:
http://nanoweb.si.kz/
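To give a flavour of what a web server written in PHP involves, here is a deliberately minimal, hypothetical sketch using plain sockets (an illustration only, not Nanoweb's actual code):

<?php
// Minimal hypothetical HTTP server in PHP: binds a TCP socket and
// answers every request with a fixed HTML page.
$server = stream_socket_server('tcp://127.0.0.1:8080', $errno, $errstr);
if ($server === false) {
    die("Could not bind: $errstr ($errno)\n");
}
while ($conn = stream_socket_accept($server)) {
    fread($conn, 8192); // read (and ignore) the raw HTTP request
    $body = '<html><body>Hello from PHP</body></html>';
    fwrite($conn, "HTTP/1.0 200 OK\r\n"
        . "Content-Type: text/html\r\n"
        . "Content-Length: " . strlen($body) . "\r\n\r\n"
        . $body);
    fclose($conn);
}

A real server like Nanoweb then has to add request parsing, routing, error handling, keep-alive and so on, which is where the practicality question really lies.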
Edit: a benchmark from the site:
Server Software: aEGiS_nanoweb/2.0.1-dev
Server Hostname: si.kz
Server Port: 80
Document Path: /six.gif
Document Length: 28352 bytes
Concurrency Level: 20
Time taken for tests: 3.123 seconds
Complete requests: 500
Failed requests: 0
Broken pipe errors: 0
Keep-Alive requests: 497
Total transferred: 14496686 bytes
HTML transferred: 14337322 bytes
Requests per second: 160.10 [#/sec] (mean)
Time per request: 124.92 [ms] (mean)
Time per request: 6.25 [ms] (mean, across all concurrent requests)
Transfer rate: 4641.91 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.9 0 13
Processing: 18 100 276.4 40 2739
Waiting: 1 97 276.9 39 2739
Total: 18 100 277.8 40 2750
Percentage of the requests served within a certain time (ms)
50% 40
66% 49
75% 59
80% 69
90% 146
95% 245
98% 449
99% 1915
100% 2750 (last request)
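For reference, output like the above comes from an invocation along these lines (the flags are inferred from the reported request count, concurrency level and keep-alive figures, so treat them as an assumption):
ab -n 500 -c 20 -k http://si.kz/six.gif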
Apart from Nanoweb, there is also a standard PEAR component to build standalone applications with a built-in webserver:
http://pear.php.net/package/HTTP_Server
Likewise, the upcoming PHP 5.4 release is likely to include a built-in mini webserver for simple file serving: https://wiki.php.net/rfc/builtinwebserver
php -S localhost:8000
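The built-in server also accepts an optional router script as a second argument, which is run for every request; returning false from it tells the server to serve the requested file itself. A tiny hypothetical router (the index.php front controller is an assumption):

<?php
// router.php -- hypothetical router for: php -S localhost:8000 router.php
if (preg_match('/\.(?:png|jpe?g|gif|css|js)$/', $_SERVER['REQUEST_URI'])) {
    return false; // let the built-in server deliver static assets as-is
}
require __DIR__ . '/index.php'; // everything else goes to the front controller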
Why reinvent the wheel? Apache, like any other established web server, has had a lot of work put into it by a lot of skilled people to make it stable and do everything you want it to do.
Just FYI, PHP 5.4 has just been released with a built-in webserver. Now you can run a local server with a couple of very simple commands:
$ cd ~/public_html
$ php -S localhost:8000
And you'll see the requests logged like this:
PHP 5.4.0 Development Server started at Thu Jul 21 10:43:28 2011
Listening on localhost:8000
Document root is /home/me/public_html
Press Ctrl-C to quit.
[Thu Jul 21 10:48:48 2011] ::1:39144 GET /favicon.ico - Request read
[Thu Jul 21 10:48:50 2011] ::1:39146 GET / - Request read
[Thu Jul 21 10:48:50 2011] ::1:39147 GET /favicon.ico - Request read
[Thu Jul 21 10:48:52 2011] ::1:39148 GET /myscript.html - Request read
[Thu Jul 21 10:48:52 2011] ::1:39149 GET /favicon.ico - Request read
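As an aside, instead of cd-ing into the directory first as above, the -t flag sets the document root directly:
php -S localhost:8000 -t ~/public_html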
Related
I was getting the following error occasionally. I've searched through Stack Overflow, but the existing questions all stem from other issues (either CGI or permissions). Since the page loads successfully about 644,000 times a day and fails about 8,000 times, I'm sure it's not a permissions problem.
I need to reload the following page reliably every 15 seconds.
[Sat Aug 01 15:21:03.393569 2020] [core:error] [pid 9328:tid 15760] [client 127.0.0.1:52411] End of script output before headers: index.php, referer: http://127.0.0.1/Admin/index.php?type=14
The following is my Apache configuration for a 16-core, 32 GB cloud instance, running on Windows Server 2008 + Apache 2.4 + PHP 7.0 only. The page is reloaded via Chrome with the Super Auto Reload Plus plugin.
<IfModule mpm_winnt_module>
MaxConnectionsPerChild 100000
ThreadsPerChild 1920
</IfModule>
FcgidMaxRequestLen 51200000
KeepAliveTimeout 50
MaxKeepAliveRequests 100
Timeout 600
The script does its calculation in PHP and is refreshed via a JavaScript timeout: the JS refreshes it every 15 seconds, and the plugin reloads the entire page every hour.
Any advice?
I have a simple "what's my IP" service running. It gets called by AJAX every 5 seconds.
The PHP is simply <?php echo $_SERVER['REMOTE_ADDR'];?>
But looking in the Apache server log at requests from the same client and the same browser, a frequency table of the logged response sizes shows that the output size is not consistent.
I do not use cookies on that site.
Count   Size (bytes)
467     417
164     570
140     385
40      538
15      4205
4       4173
1       600
1       5834
1       559
1       530
1       2002
How can the size vary? Same question, same answer, I would assume.
...
Edit:
When I do curl -i, I get the following (size 330). But the transferred size shown in the log is consistently 3956 bytes?!
The server is on HTTPS, so some certificate exchange is going on, but the size should still be the same on every request, shouldn't it?
HTTP/1.1 200 OK
Date: Thu, 06 Dec 2018 08:46:21 GMT
Server: Apache/2.4.25 (Debian) SVN/1.9.5 PHP/5.6.30-0+deb8u1 mod_python/3.3.1 Python/2.7.13 OpenSSL/1.0.2l mod_perl/2.0.10 Perl/v5.24.1
X-Powered-By: PHP/5.6.30-0+deb8u1
Access-Control-Allow-Origin: *
Content-Length: 10
Content-Type: text/html; charset=UTF-8
I was wondering how I can write a PHP script that needs a long compile time.
I want to do this to test whether the OPCache extension works.
Later edit:
When a PHP script is loaded, the code is compiled into bytecode, and this bytecode is then interpreted by the Zend engine. The compilation step usually takes a few milliseconds, but I need to make it take much longer to test the OPCache extension from PHP 5.5. This extension should cache the script's bytecode so that the script doesn't need to be compiled again.
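One hypothetical way to obtain such a slow-to-compile script is simply to generate an enormous source file; the filename and line count below are made up for illustration:

<?php
// generate.php -- writes a huge index.php so that the compile step
// becomes slow enough to measure (hypothetical helper).
$fh = fopen('index.php', 'w');
fwrite($fh, "<?php\n");
for ($i = 0; $i < 600000; $i++) {
    fwrite($fh, "\$v$i = $i;\n"); // one trivial statement per line
}
fwrite($fh, "echo 'done';\n");
fclose($fh);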
As @PaulCrovella said in the comments, what I needed was ApacheBench.
By running ab http://localhost/index.php against a script of about 600,000 lines of code, I got the following results:
On the first benchmark test:
Server Software: Apache/2.4.9
Server Hostname: localhost
Server Port: 80
Document Path: /index.php
Document Length: 4927 bytes
Concurrency Level: 1
Time taken for tests: 0.944 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 5116 bytes
HTML transferred: 4927 bytes
Requests per second: 1.06 [#/sec] (mean)
Time per request: 944.054 [ms] (mean)
Time per request: 944.054 [ms] (mean, across all concurrent requests)
Transfer rate: 5.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 944 944 0.0 944 944
Waiting: 939 939 0.0 939 939
Total: 944 944 0.0 944 944
On the second benchmark test, with the script's bytecode now served from the OPCache:
Server Software: Apache/2.4.9
Server Hostname: localhost
Server Port: 80
Document Path: /index.php
Document Length: 4927 bytes
Concurrency Level: 1
Time taken for tests: 0.047 seconds
Complete requests: 1
Failed requests: 0
Total transferred: 5116 bytes
HTML transferred: 4927 bytes
Requests per second: 21.28 [#/sec] (mean)
Time per request: 47.003 [ms] (mean)
Time per request: 47.003 [ms] (mean, across all concurrent requests)
Transfer rate: 106.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 47 47 0.0 47 47
Waiting: 43 43 0.0 43 43
Total: 47 47 0.0 47 47
My website exits with an error 500 if a script takes more than 60 seconds to execute, but I don't understand why.
In my phpinfo():
max_execution_time = 600
max_input_time = 600
In my httpd.conf file:
timeout = 600
So I don't see how to increase this limit.
I found these settings in my phpinfo() output (but I have no idea whether they're related):
default_socket_timeout = 60
mysql.connect_timeout = 60
I think mysql.connect_timeout is not related at all (I get the error on a page containing only a sleep(65);...).
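The test page was essentially just this (reconstructed from the description above):

<?php
// Minimal reproduction: nothing but a sleep past the 60-second mark.
sleep(65);
echo 'done';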
I finally found the answer!
I will share it since I think it may help someone else!
I found the following in the Apache error_log:
[Tue Jul 09 15:17:47 2013] [warn] [client 212.198.111.252] mod_fcgid: read data timeout in 45 seconds
[Tue Jul 09 15:17:47 2013] [error] [client 212.198.111.252] Premature end of script headers: test_max_execution.php
I then modified the file named fcgid.conf located in /etc/httpd/conf.d/.
I increased three parameters (FcgidIOTimeout, FcgidIdleTimeout and FcgidConnectTimeout) and everything seems to work properly now!
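For reference, the relevant part of fcgid.conf ended up looking something like this; the exact values are illustrative, chosen here to match the 600-second Timeout above:

FcgidIOTimeout      600
FcgidIdleTimeout    600
FcgidConnectTimeout 600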
Have a nice day and thank you for paying attention to my question !
Frederic
Have a look at…
PHP set_time_limit()
PHP Runtime Configuration
…and:
MySQL server has gone away - in exactly 60 seconds
Happy reading :-)
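As a quick illustration of the set_time_limit() link above, a script can raise its own PHP execution limit, though note this does not touch web-server-level timeouts such as mod_fcgid's (which turned out to be the culprit here):

<?php
set_time_limit(600); // overrides max_execution_time for this script only
sleep(65);
echo 'still alive';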
I'm running into a strange issue... This all worked fine last night when I coded the damn thing, but I reinstalled WAMP on my local dev server and now I'm running into problems.
I'm attempting to retrieve results from Sphinx through the PHP API. I'm just executing the most basic of queries as a test...
$searchtest = Sphinx::factory();
$results = $searchtest->Query('');
And $results contains the Sphinx results as expected:
...
[total] => 1000
[total_found] => 30312
[time] => 0.004
However, when I profile this small piece of code, it's telling me that PHP is taking an extra second to process the query!
test (1) - 1.066703 s
The problem gets worse in my production code, which runs several Sphinx searches. Yesterday everything was running fine and each search took 0.004 sec (or a similarly small amount of time), but today the page takes several seconds to run all the search queries! (This is on an isolated dev server, so there are no traffic issues.)
results (1) - 1.046128 s
sidebar_data (1) - 10.388812 s
featured (1) - 1.034211 s
Each separate query to the Sphinx daemon takes an extra second to come back! (sidebar_data hits the search server 10 times)
What is going on here? I've wasted a bunch of time trying to figure it out and I'm stumped. I even reinstalled Sphinx from scratch. Since Sphinx itself reports fast query times ([time] => 0.004), is the problem something to do with PHP?
What should I do to diagnose the problem?
Edit: I looked at the output from searchd --console. Sure enough, it confirms that the search queries themselves run quickly, but if you look at the timestamps, they are being executed at roughly one per second... PHP is introducing a delay somehow (??)
[Sun May 8 09:57:29.923 2011] 0.012 sec [all/1/ext 15039 (0,25)] [main]
[Sun May 8 09:57:30.996 2011] 0.020 sec [all/1/rel 30 (0,20) #city] [main]
[Sun May 8 09:57:32.034 2011] 0.016 sec [all/1/rel 50 (0,20) #make] [main]
[Sun May 8 09:57:33.061 2011] 0.015 sec [all/1/rel 15 (0,20) #style] [main]
[Sun May 8 09:57:34.099 2011] 0.017 sec [all/1/rel 25 (0,20) #colour] [main]
[Sun May 8 09:57:35.122 2011] 0.009 sec [all/1/rel 1 (0,20) #field] [main]
[Sun May 8 09:57:36.145 2011] 0.011 sec [all/2/rel 1 (0,20) #field] [main]
[Sun May 8 09:57:37.174 2011] 0.010 sec [all/2/rel 1 (0,20) #field] [main]
[Sun May 8 09:57:38.187 2011] 0.003 sec [all/2/rel 431 (0,20)] [main]
[Sun May 8 09:57:39.240 2011] 0.005 sec [all/2/rel 12627 (0,20)] [main]
[Sun May 8 09:57:40.292 2011] 0.005 sec [all/2/rel 13021 (0,20)] [main]
[Sun May 8 09:57:41.343 2011] 0.001 sec [all/3/rel 200 (0,20)] [main]
At first glance I'd have guessed some kind of DNS resolution problem was involved, but it seems you're running searchd on the same host as PHP.
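If you want to rule DNS out entirely anyway, one cheap check is to point the client at the IP literal and compare timings. A sketch against the standard sphinxapi.php client (your Sphinx::factory() wrapper presumably configures the same thing somewhere; 9312 is only the usual searchd port):

<?php
require 'sphinxapi.php';
$cl = new SphinxClient();
$cl->SetServer('127.0.0.1', 9312); // IP literal instead of "localhost"
$results = $cl->Query('');         // compare wall-clock time against before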
Rather than poke around trying to guess what is causing this, I would recommend profiling the PHP code running on the machine. I would install Xdebug, enable profiling, and then analyse the output in webcachegrind. It should be able to point you to which functions are slow to run and give you a better clue as to what's wrong.
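A minimal php.ini sketch for that setup (Xdebug 2-era settings, matching the vintage of this question; the DLL path is a placeholder for wherever your WAMP install keeps its extensions):

zend_extension = "c:/wamp/bin/php/ext/php_xdebug.dll"
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = "c:/wamp/tmp"

Each request will then leave a cachegrind.out.* file in the output directory for the viewer to open.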