Testing Response Time of PHP

For my research, I have modified the PHP interpreter and I want to test the response time of the new, modified interpreter, so I am looking for a script that would help me achieve that.
I could simply request a page that records the start time and the end time, but I want to measure the average time taken over 100 requests. Would Ajax or something similar be of help?

A 'page' seems to imply testing PHP through the web server (rather than on the command line).
Apache's ab works pretty well for that:
$ ab -c 5 -n 200 http://example.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking example.com (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests
Server Software: Apache
Server Hostname: example.com
Server Port: 80
Document Path: /
Document Length: 596 bytes
Concurrency Level: 5
Time taken for tests: 15.661 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 173600 bytes
HTML transferred: 119200 bytes
Requests per second: 12.77 [#/sec] (mean)
Time per request: 391.532 [ms] (mean)
Time per request: 78.306 [ms] (mean, across all concurrent requests)
Transfer rate: 10.82 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 184 193 8.9 190 235
Processing: 184 196 13.2 192 280
Waiting: 184 196 13.2 192 280
Total: 368 390 15.0 387 469
Percentage of the requests served within a certain time (ms)
50% 387
66% 393
75% 398
80% 400
90% 410
95% 418
98% 423
99% 446
100% 469 (longest request)
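If you would rather do the measuring from PHP itself, a minimal sketch along these lines averages the wall-clock time of 100 requests; the URL and request count are placeholders:
<?php
// bench.php - rough client-side average over N requests (a sketch, requires the curl extension)
$url  = 'http://example.com/';   // page served by the modified interpreter
$runs = 100;
$total = 0.0;
for ($i = 0; $i < $runs; $i++) {
    $start = microtime(true);
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
    $total += microtime(true) - $start;
}
printf("Average over %d requests: %.3f ms\n", $runs, ($total / $runs) * 1000);
Like ab, this measures the full round trip from the client's point of view, so run it from the same machine you would run ab from.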

Related

php curl localhost is slow when making concurrent requests

I have an interesting issue and I am not sure what the root cause is. I have a server with two virtual hosts, A and B, running on ports 80 and 81 respectively. I have written a simple PHP script on A which looks like this:
<?php
echo "from A server\n";
And another simple PHP code on B:
<?php
echo "B server:\n";
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php");
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
// close curl resource to free up system resources
curl_close($ch);
echo $output;
When making concurrent requests using ab, I get the following results:
ab -n 10 -c 5 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient).....done
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 5
Time taken for tests: 2.680 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 1720 bytes
HTML transferred: 260 bytes
Requests per second: 3.73 [#/sec] (mean)
Time per request: 1340.197 [ms] (mean)
Time per request: 268.039 [ms] (mean, across all concurrent requests)
Transfer rate: 0.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 2 1339 1408.8 2676 2676
Waiting: 2 1339 1408.6 2676 2676
Total: 3 1340 1408.8 2676 2677
Percentage of the requests served within a certain time (ms)
50% 2676
66% 2676
75% 2676
80% 2676
90% 2677
95% 2677
98% 2677
99% 2677
100% 2677 (longest request)
But making 1000 requests with concurrency level 1 is extremely fast:
$ ab -n 1000 -c 1 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 26 bytes
Concurrency Level: 1
Time taken for tests: 1.659 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 172000 bytes
HTML transferred: 26000 bytes
Requests per second: 602.86 [#/sec] (mean)
Time per request: 1.659 [ms] (mean)
Time per request: 1.659 [ms] (mean, across all concurrent requests)
Transfer rate: 101.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 1 1 10.3 1 201
Waiting: 1 1 10.3 1 201
Total: 1 2 10.3 1 201
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 2
100% 201 (longest request)
Can anyone explain why this happens? I really want to know the root cause. Is it an issue with curl? It doesn't feel like a network bottleneck or an open-file limit issue, since the concurrency is only 5. By the way, I also tried the same thing with guzzlehttp, and the result is the same. I run ab on my laptop, and the server is on the same local network. Plus, it certainly has nothing to do with network bandwidth, because the requests between hosts A and B are made on localhost.
I have modified the code so that testing is more flexible:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
case 1:
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, $url);
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
curl_close($ch);
echo $output;
break;
case 2:
$client = new Client();
$response = $client->request('GET', $url);
echo $response->getBody();
break;
case 3:
echo file_get_contents($url);
break;
default:
echo "no opt";
}
echo "app server:\n";
I tried file_get_contents, but there is no obvious difference when switching to it. When the concurrency is 1, all methods are fine, but they all start to degrade as the concurrency increases.
I think I found something related to this issue, so I posted another question: concurrent curl could not resolve host. That might be the root cause, but I don't have an answer yet.
After trying for so long, I thought this was definitely related to name resolution. Here is the PHP script that can run at concurrency level 500:
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$opt = 1;
$url = 'http://localhost:81/a.php';
switch ($opt) {
case 1:
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, $url);
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_PROXY, 'localhost');
// $output contains the output string
$output = curl_exec($ch);
curl_close($ch);
echo $output;
break;
case 2:
$client = new Client();
$response = $client->request('GET', $url, ['proxy' => 'localhost']);
echo $response->getBody();
break;
case 3:
echo file_get_contents($url);
break;
default:
echo "no opt";
}
echo "app server:\n";
What really matters are curl_setopt($ch, CURLOPT_PROXY, 'localhost'); and $response = $client->request('GET', $url, ['proxy' => 'localhost']);. They tell curl and Guzzle to use localhost as a proxy.
And here is the result of the ab test:
ab -n 1000 -c 500 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80
Document Path: /b.php
Document Length: 182 bytes
Concurrency Level: 500
Time taken for tests: 0.251 seconds
Complete requests: 1000
Failed requests: 184
(Connect: 0, Receive: 0, Length: 184, Exceptions: 0)
Non-2xx responses: 816
Total transferred: 308960 bytes
HTML transferred: 150720 bytes
Requests per second: 3985.59 [#/sec] (mean)
Time per request: 125.452 [ms] (mean)
Time per request: 0.251 [ms] (mean, across all concurrent requests)
Transfer rate: 1202.53 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 4.9 5 14
Processing: 9 38 42.8 22 212
Waiting: 8 38 42.9 22 212
Total: 11 44 44.4 31 214
Percentage of the requests served within a certain time (ms)
50% 31
66% 37
75% 37
80% 38
90% 122
95% 135
98% 207
99% 211
100% 214 (longest request)
But why did name resolution fail at concurrency level 5 when not using localhost as a proxy?
The virtual host configuration is very simple and clean, and almost everything is at its default. I do not use iptables on this server, nor have I configured anything special.
server {
listen 81 default_server;
listen [::]:81 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
}
I found something interesting: if you run another ab test within about 3 seconds of the first one, the second test finishes very quickly.
Without using localhost as proxy
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only.
Using localhost as proxy
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
I think this still means the issue is name resolution. But why?
Assumption: nginx is not listening on localhost:81.
I tried adding listen 127.0.0.1:81; to nginx, and it had no effect.
I found I had made some mistakes in how I used the curl proxy; that does not actually work! I will update the other details later.
Solved. It is not related to the proxy, or anything like that. The root cause is pm.start_servers in php-fpm's www.conf.
OK, after so many days of trying to solve this issue, I finally found out why, and it's not name resolution. I can't believe it took so many days to track down the root cause, which is the value of pm.start_servers in php-fpm's www.conf. Initially I set pm.start_servers to 3, which is why the ab test against localhost always got worse beyond concurrency level 3. php-cli has no such limit on the number of PHP processes, so it always performs well. After increasing pm.start_servers to 5, the ab test is as fast as php-cli. If this is why your php-fpm is slow, you should also think about adjusting pm.min_spare_servers, pm.max_spare_servers, pm.max_children and anything related.
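For reference, these pool directives all live in php-fpm's www.conf (e.g. /etc/php/7.0/fpm/pool.d/www.conf on Ubuntu); the values below are purely illustrative, not a recommendation:
; www.conf - process manager settings (illustrative values)
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
Remember to reload php-fpm after changing them (e.g. systemctl reload php7.0-fpm).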

MediaWiki's file cache is ignored after migration

Problem: I have set up a MediaWiki with file caching enabled, but when I migrate the file cache to another MediaWiki, the cache is bypassed.
Background: I have set up MediaWiki 1.26.2 with apache2 as the front-end web server and mariadb as the MySQL-compatible database, populated with the Danish Wikipedia.
I have enabled the file cache in LocalSettings.php to improve performance:
# Enable file caching.
$wgUseFileCache = true;
$wgFileCacheDirectory = "/tmp/wikicache";
$wgShowIPinHeader = false;
# Enable sidebar caching.
$wgEnableSidebarCache=true;
# Enable page compression.
$wgUseGzip = true;
# Disable pageview counters.
$wgDisableCounters = true;
# Enable miser mode.
$wgMiserMode = true;
Goal: Migrate the file cache, which is located under /tmp/wikicache, to another MediaWiki server. This does not seem to work, as the cache is skipped.
Use case: the node server hosts MediaWiki, to which I have migrated (copied) the file cache from another MediaWiki server, along with the same LocalSettings.php.
Here is a cached page:
root@server:~# find /tmp/ -name DNA*
/tmp/wikicache/3/39/DNA.html.gz
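To rule out a corrupted copy, you can inspect the migrated cache file directly, for example:
root@server:~# zcat /tmp/wikicache/3/39/DNA.html.gz | head -n 5
If the gzipped HTML looks complete, the problem is more likely in how MediaWiki decides to serve the cache than in the copied files themselves.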
On another node, client, I use ApacheBench (ab) to measure the connection time when requesting that page. TL;DR: only 10% of the requests succeed, with a time of ~20 sec, which is roughly the time needed to query the database and retrieve the whole page.
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184182 bytes
Concurrency Level: 10
Time taken for tests: 27.744 seconds
Complete requests: 100
Failed requests: 90
(Connect: 0, Receive: 0, Length: 90, Exceptions: 0)
Total transferred: 118456568 bytes
HTML transferred: 118417968 bytes
Requests per second: 3.60 [#/sec] (mean)
Time per request: 2774.370 [ms] (mean)
Time per request: 277.437 [ms] (mean, across all concurrent requests)
Transfer rate: 4169.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 123 2743 7837.1 145 27743
Waiting: 118 2735 7835.6 137 27723
Total: 123 2743 7837.2 145 27744
Percentage of the requests served within a certain time (ms)
50% 145
66% 165
75% 168
80% 170
90% 24788
95% 26741
98% 27625
99% 27744
100% 27744 (longest request)
If I subsequently request the same page again, it is served in ~0.15 seconds. I observe the same performance even if I flush MySQL's cache with RESET QUERY CACHE:
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184179 bytes
Concurrency Level: 10
Time taken for tests: 1.564 seconds
Complete requests: 100
Failed requests: 41
(Connect: 0, Receive: 0, Length: 41, Exceptions: 0)
Total transferred: 118456541 bytes
HTML transferred: 118417941 bytes
Requests per second: 63.93 [#/sec] (mean)
Time per request: 156.414 [ms] (mean)
Time per request: 15.641 [ms] (mean, across all concurrent requests)
Transfer rate: 73957.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 129 150 18.8 140 189
Waiting: 120 140 18.0 130 171
Total: 129 150 18.8 141 189
Percentage of the requests served within a certain time (ms)
50% 141
66% 165
75% 169
80% 170
90% 175
95% 181
98% 188
99% 189
100% 189 (longest request)
So, why isn't the file cache working when I migrate it to another MediaWiki server?

Laravel 5.2 High cpu with long routes

I've set up Ubuntu 14.04 with PHP 5.5 and Apache 2.4.
I installed a fresh Laravel 5.2. There are no database connections in the project.
I then went to app/Http/routes.php and edited it to:
Route::get('/', function () {
return view('welcome');
});
Route::get('/test/direct', function () {
return view('welcome');
});
So basically I have 2 routes just showing the welcome view.
I then run:
ab -n 9999999 -t 300 -c 30 http://xxxxx/laravel52/public
The CPU never goes over 6% and I get the following results:
Server Software: Apache/2.4.7
Server Hostname: xxxxx
Server Port: 80
Document Path: /laravel52/public
Document Length: 328 bytes
Concurrency Level: 30
Time taken for tests: 146.271 seconds
Complete requests: 50000
Failed requests: 0
Non-2xx responses: 50000
Total transferred: 28550000 bytes
HTML transferred: 16400000 bytes
Requests per second: 341.83 [#/sec] (mean)
Time per request: 87.763 [ms] (mean)
Time per request: 2.925 [ms] (mean, across all concurrent requests)
Transfer rate: 190.61 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 15 47 77.0 40 3157
Processing: 17 41 28.1 37 2140
Waiting: 17 40 26.9 37 2140
Total: 40 87 84.3 78 3208
Percentage of the requests served within a certain time (ms)
50% 78
66% 83
75% 86
80% 89
90% 100
95% 120
98% 162
99% 228
100% 3208 (longest request)
I then run:
ab -n 9999999 -t 300 -c 30 http://xxxxx/laravel52/public/test/direct
The CPU immediately goes up to 100%, and at the end I get these results:
Server Software: Apache/2.4.7
Server Hostname: xxxxx
Server Port: 80
Document Path: /laravel52/public/test/direct
Document Length: 1023 bytes
Concurrency Level: 30
Time taken for tests: 300.001 seconds
Complete requests: 11888
Failed requests: 0
Total transferred: 24585740 bytes
HTML transferred: 12161424 bytes
Requests per second: 39.63 [#/sec] (mean)
Time per request: 757.070 [ms] (mean)
Time per request: 25.236 [ms] (mean, across all concurrent requests)
Transfer rate: 80.03 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 12 29.4 8 1020
Processing: 75 740 790.0 609 14045
Waiting: 74 738 789.9 608 14043
Total: 88 752 789.4 622 14050
Percentage of the requests served within a certain time (ms)
50% 622
66% 835
75% 952
80% 1020
90% 1237
95% 1536
98% 2178
99% 2901
100% 14050 (longest request)
It seems that if it is not the root route, Laravel spikes the CPU when there are a lot of connections. This also happened with a fresh install of Laravel 4.2.
Can anyone point out why this happens? I really need this solved.
My server has an 8-core Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz with 8GB of RAM.
Thanks.
You can cache your routes to speed up route resolution, but route caching does not work with Closure-based routes. To use route caching, you must convert any Closure routes to controller classes, as sketched below.
To cache your routes, run php artisan route:cache; to clear the cache, run php artisan route:clear.
You should also consider php artisan optimize, which compiles common classes into a single file and thus reduces the number of includes per request, and php artisan config:cache, which combines all the configuration files into a single file for faster loading.
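For example, the /test/direct Closure from the question could be converted roughly like this before running php artisan route:cache (TestController is a made-up name):
// app/Http/routes.php
Route::get('/test/direct', 'TestController@direct');
// app/Http/Controllers/TestController.php
<?php
namespace App\Http\Controllers;
class TestController extends Controller
{
    // Same behaviour as the original Closure: just render the welcome view.
    public function direct()
    {
        return view('welcome');
    }
}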
Forget this one; it is not a Laravel problem. It is a problem specific to this machine. A while back I upgraded Ubuntu 12.04 to 14.04, which upgraded Apache 2.2 to 2.4 but kept part of the old configuration. The problem must be in there, since other frameworks like Magento behave the same way.

App Engine Win SDK PHP timeout stuck at 30 seconds, should be 60?

I'm using Google's latest Windows App Engine PHP SDK, v1.9.38, to run some long-running scripts on the local dev server, and for some reason they're timing out at 30 seconds. The error is e.g. "Fatal error: The request was aborted because it exceeded the maximum execution time. in [my script path]\timertest.php on line 8".
The timeout is supposed to be 60 seconds for automatic scaling! I'm not sure what I'm missing here... I'm doing various file processing in one script, but I then wrote a test script to see if that also failed at 30 secs, and it did. The script is:
<?php
$a = 1;
do
{
    syslog(LOG_INFO, $a.' Sleeping for 10 secs...\n');
    sleep(10);
    $a++;
}
while ($a < 8);
?>
Output is:
INFO: 1 Sleeping for 10 secs...\n
INFO: 2 Sleeping for 10 secs...\n
INFO: 3 Sleeping for 10 secs...\n
ERROR:root:php failure (255) with:
stdout:
X-Powered-By: PHP/5.5.26
Content-type: text/html
<br />
<b>Fatal error</b>: The request was aborted because it exceeded the maximum execution time. in <b>[my script path]\timertest.php</b> on line <b>8</b><br />
INFO 2016-06-02 20:52:56,693 module.py:788] default: "GET /testing/timertest.php HTTP/1.1" 500 195
I was thinking it was a config error somewhere, but I'm not sure what or where. My app.yaml is very standard:
application: ak2016-1
version: 1
runtime: php55
api_version: 1
handlers:
# Serve php scripts.
- url: /(.+\.php)$
script: \1
login: admin
and php.ini too:
google_app_engine.disable_readonly_filesystem = 1
upload_max_filesize = 8M
display_errors = "1"
display_startup_errors = "1"
As I say, this is an issue with the local dev SDK server only; I'm not bothered about the live online side, as the files I'm processing are local (and need to remain so).
Thanks for any suggestions etc!
I deployed the sample app from the Request Timer documentation and was not able to duplicate your issue. My requests all time out after ~60 seconds:
$ time curl https://<project-id>.appspot.com/timeout.php
Got timeout! Cleaning up...
real 1m0.127s
user 0m0.021s
sys 0m0.010s
I then copied your code, app.yaml, and php.ini to see if I could duplicate that, and received the following in my syslogs:
INFO: 1 Sleeping for 10 secs...\n
INFO: 2 Sleeping for 10 secs...\n
INFO: 3 Sleeping for 10 secs...\n
INFO: 4 Sleeping for 10 secs...\n
INFO: 5 Sleeping for 10 secs...\n
INFO: 6 Sleeping for 10 secs...\n
INFO: PHP Fatal error: The request was aborted because it exceeded the maximum execution time. in /base/data/home/apps/.../timeout2.php on line 9
However, if you continue to have issues with requests timing out after 30 seconds, I would suggest moving the offending code into task queues. I hope this helps!
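If it helps, a rough sketch of deferring the work to a push queue with the App Engine PHP SDK might look like this; the /worker.php handler and the payload are hypothetical:
<?php
use google\appengine\api\taskqueue\PushTask;
// Queue the long-running work instead of doing it inside the request.
$task = new PushTask('/worker.php', ['file' => 'example.dat']);
$task->add();  // added to the default push queue
echo "Task queued\n";
The /worker.php handler would then do the file processing and gets its own request deadline, separate from the user-facing request.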

APC making PHP 5.3 slower?

I recently learned about APC (I know, I'm late to the show) and decided to try it out on my development server. I did some benchmarking with ApacheBench, and to my surprise I've found that things are running slower than before.
I haven't made any code optimizations to use apc_fetch or anything, but I was under the impression that opcode caching should make a positive impact on its own?
C:\Apache24\bin>ab -n 1000 http://localhost/
This is ApacheBench, Version 2.3 <$Revision: 1178079 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Finished 1000 requests
Server Software: Apache/2.4.2
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 22820 bytes
Concurrency Level: 1
Time taken for tests: 120.910 seconds
Complete requests: 1000
Failed requests: 95
(Connect: 0, Receive: 0, Length: 95, Exceptions: 0)
Write errors: 0
Total transferred: 23181893 bytes
HTML transferred: 22819893 bytes
Requests per second: 8.27 [#/sec] (mean)
Time per request: 120.910 [ms] (mean)
Time per request: 120.910 [ms] (mean, across all concurrent requests)
Transfer rate: 187.23 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 1
Processing: 110 120 7.2 121 156
Waiting: 61 71 7.1 72 103
Total: 110 121 7.2 121 156
Percentage of the requests served within a certain time (ms)
50% 121
66% 122
75% 123
80% 130
90% 131
95% 132
98% 132
99% 137
100% 156 (longest request)
Here's the APC section of my php.ini. I've left most things at their defaults, except for expanding the cache size from the default 32MB to 128MB.
[APC]
apc.enabled = 1
apc.enable_cli = 1
apc.ttl=3600
apc.user_ttl=3600
apc.shm_size = 128M
apc.slam_defense = 0
Am I doing something wrong, or do I just need to use apc_fetch/store to really get a benefit from APC?
Thanks for any insight you guys can give.
Enabling APC with default settings will make a noticeable (to say the least) difference in response times for your PHP scripts. You don't have to use any of its specific store/fetch functions to get benefits from APC. In fact, you normally don't even need a benchmark to tell; the difference should be apparent simply from navigating through your site.
If you don't see any difference and your benchmarks don't have some kind of error, then I'd suggest that you start debugging the issue (enable error reporting, check the logs, etc).
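A quick first check is to confirm the opcode cache is actually being populated between requests; a rough sketch, assuming the APC functions are available:
<?php
// Opcode cache statistics: num_hits should climb on repeated requests
// if scripts are really being served from the cache.
var_dump(apc_cache_info());
// Shared memory usage: if avail_mem never drops, nothing is being cached.
var_dump(apc_sma_info());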
