I tried HHVM on my vServer and have problems with memory usage. The performance is great, but the memory consumption is horrible. My vServer has a guaranteed 4GB and a maximum of 8GB of memory, and after one day HHVM is using about 2.4GB of it, still rising.
Is there an option in server.ini to set the maximum memory the HHVM process may use?
I'm currently running TYPO3 and PrestaShop under HHVM. The nginx PHP location block looks like this:
location ~ \.php$ {
try_files $uri =404;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
}
and here is my server.ini:
; php options
pid = /var/run/hhvm/pid
; hhvm specific
;hhvm.server.port = 9000
hhvm.server.file_socket = /var/run/hhvm/hhvm.sock
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
The HHVM wiki has a fairly complete list of options. I'm not aware of any one that controls maximum memory usage.
But what is the runtime supposed to do when it hits that maximum, anyway? I'm not sure it would be a useful option.
If you're seeing monotonically increasing memory usage over time, you should file a new issue on GitHub so we can help you get a heap profile and figure out what is causing the growth. That shouldn't be happening. There are a few known bugs that we might be able to help you work around -- any usage of create_function is known to leak right now, for example -- or maybe you've found a new leak that we can fix.
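If your code (or one of the extensions it pulls in) uses create_function, rewriting those calls as closures sidesteps that particular leak. A minimal sketch, assuming a trivial callback:

// Leaky pattern: every create_function() call compiles a new lambda
// that the runtime never frees.
$adder = create_function('$a, $b', 'return $a + $b;');

// Equivalent closure: no per-call code generation, nothing left to leak.
$adder = function ($a, $b) {
    return $a + $b;
};

echo $adder(2, 3); // 5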
I use KnpSnappyBundle 1.6.0 and wkhtmltopdf 0.12.5 to generate PDFs from HTML in PHP like so:
$html = $this->renderView(
    'pdf/template.html.twig',
    ['entity' => $entity]
);

return new PdfResponse(
    $snappy->getOutputFromHtml($html, ['encoding' => 'UTF-8', 'images' => true]),
    'file'.$entity->getUniqueNumber().'.pdf'
);
My issue: on my production server, generating a PDF takes around 40-50 seconds whenever I reference assets (images or CSS) that are hosted on the same server as my code. Even when I only use a tiny image hosted on the same server, it takes 40 seconds, while much larger images hosted on another server are pulled in and the PDF is generated instantly.
My server is not slow at serving assets or files in general. If I simply render the HTML as a page, it loads instantly (with or without the assets). When I generate the PDF locally (on my laptop) while requesting the assets from my production server, it is also instant.
The assets referenced in the HTML that is rendered to PDF all have absolute URLs, which wkhtmltopdf requires. For example: <img src="https://www.example.com/images/logo.png">. The difficult thing is that everything works, just very slowly; nothing points to a non-existent asset that would cause a timeout.
I first thought it might be wkhtmltopdf, so I tried different versions and different settings, but that changed nothing. I also tried pointing to another domain on the same server; the problem remains. I tried not using KnpSnappyBundle; the problem still remains.
So my guess now is that it is a server issue (or a combination of the server and wkhtmltopdf). I am running nginx 1.16.1 and serve all content over SSL, with OpenSSL 1.1.1d 10 Sep 2019 (Library: OpenSSL 1.1.1g 21 Apr 2020), on Ubuntu 18.04.3 LTS. Everything else on this server works as expected.
When I look in the nginx access logs, I can see that a GET request is made from my own IP address when assets from the same server are used. I cannot understand why this takes so long, and I have run out of ideas about what to try next. Any ideas are appreciated!
I'll add my Nginx config for my domain (in case it might help):
server {
root /var/www/dev.example.com/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name dev.example.com www.dev.example.com;
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
location ~ \.(?:jpg|jpeg|gif|png|ico|woff2|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|js|css)$ {
gzip_static on;
# Set rules only if the file actually exists.
if (-f $request_filename) {
expires max;
access_log off;
add_header Cache-Control "public";
}
try_files $uri /index.php$is_args$args;
}
error_log /var/log/nginx/dev_example_com_error.log;
access_log /var/log/nginx/dev_example_com_access.log;
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/dev.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/dev.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = dev.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name dev.example.com www.dev.example.com;
listen 80;
return 404; # managed by Certbot
}
Update 5 Aug 2020: I tried wkhtmltopdf 0.12.6, but it gives me the exact same problem. The "solution" that I posted as an answer a few months ago is far from perfect, which is why I am looking for new suggestions. Any help is appreciated.
This sounds like a DNS issue to me. I would try adding entries to /etc/hosts, for example:
127.0.0.1 example.com
127.0.0.1 www.example.com
and pointing your image URLs at that domain.
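To confirm whether name resolution (or the connection itself) is the slow part, you could time one asset fetch from the server with PHP's cURL, which breaks out the name-lookup, connect, first-byte and total times. A diagnostic sketch; the URL is just an example:

// Fetch one asset the way wkhtmltopdf would and print cURL's timing
// breakdown, to see whether DNS, the connection or the transfer is slow.
$ch = curl_init('https://www.example.com/images/logo.png');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
printf(
    "dns: %.3fs  connect: %.3fs  first byte: %.3fs  total: %.3fs\n",
    curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
    curl_getinfo($ch, CURLINFO_CONNECT_TIME),
    curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME),
    curl_getinfo($ch, CURLINFO_TOTAL_TIME)
);
curl_close($ch);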
I have not found the root cause of my problem. However, I have found a workaround. What I did:
Install wkhtmltopdf globally (provided by my distribution):
sudo apt-get install wkhtmltopdf
This installs wkhtmltopdf 0.12.4 (as of 5 Nov 2019) from the Ubuntu repositories. This is an older version of wkhtmltopdf, and running it on its own gave me a myriad of problems. To solve those, I now run it inside xvfb. First install it by running:
sudo apt-get install xvfb
Then change the binary path of the wrapper you use that points to wkhtmltopdf to:
'/usr/bin/xvfb-run /usr/bin/wkhtmltopdf'
In my case, I use KnpSnappyBundle and set the binary path in my .env file. In knp_snappy.yaml I set binary: '%env(WKHTMLTOPDF_PATH)%' and in .env I set WKHTMLTOPDF_PATH='/usr/bin/xvfb-run /usr/bin/wkhtmltopdf' (as described above). I can now generate PDFs, although there are some issues with the layout.
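For reference, the two pieces end up looking roughly like this (file locations and exact keys depend on your Symfony and bundle versions):

# knp_snappy.yaml
knp_snappy:
    pdf:
        binary: '%env(WKHTMLTOPDF_PATH)%'

# .env
WKHTMLTOPDF_PATH='/usr/bin/xvfb-run /usr/bin/wkhtmltopdf'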
Not sure if this is acceptable for you or not, but in my case I always generate an HTML file that can stand on its own. I convert all CSS references so the styles are included directly; I do this programmatically, so I can still keep them as separate files for tooling. This is fairly trivial if you write a helper method that includes them based on the URI. Likewise, I base64-encode all the images and include those as well. Again, I keep them as separate files and inline them programmatically.
I then feed this "self-contained" html to wkhtmltopdf.
I'd share some examples, but my implementation is actually C# & Razor.
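A rough PHP equivalent of such an image helper might look like this (hypothetical function name; assumes the asset is readable from the local filesystem):

// Turn a local image into a data: URI so the HTML handed to
// wkhtmltopdf contains no external references at all.
function inlineImage(string $path): string
{
    $mime = mime_content_type($path); // e.g. image/png
    $data = base64_encode(file_get_contents($path));
    return sprintf('data:%s;base64,%s', $mime, $data);
}

// In the template data, pass the result instead of an absolute URL,
// e.g. 'logoSrc' => inlineImage('/var/www/project/public/images/logo.png')
// and use <img src="{{ logoSrc }}"> in the Twig template.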
That aside, I would also build some timestamped logging into those helpers if you're still having problems, so you can see how long the includes are taking.
I'm not sure what your server setup is, but possibly there's a problem connecting to a NAS or something similar.
You could also stand to put some timestamped logging around the rest of the steps, to get a feel for exactly which steps are taking a long time.
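A crude sketch of that kind of timing around the two steps from your question, with error_log as the sink:

// Time the template rendering and the wkhtmltopdf call separately so the
// slow step shows up in the log with its duration.
$t0 = microtime(true);
$html = $this->renderView('pdf/template.html.twig', ['entity' => $entity]);
error_log(sprintf('renderView took %.2fs', microtime(true) - $t0));

$t0 = microtime(true);
$pdf = $snappy->getOutputFromHtml($html, ['encoding' => 'UTF-8', 'images' => true]);
error_log(sprintf('wkhtmltopdf took %.2fs', microtime(true) - $t0));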
Other tips: I try to use SVGs for images where I can, and I try not to pull large (or any) CSS libraries into the HTML that becomes the PDF.
There is a machine with nginx and php-fpm on it. There are two server blocks, two php-fpm pools (each one chrooted) and two directories with the same structure and similar files/PHP classes.
One pool is listening on 127.0.0.1:22333, the other on 127.0.0.1:22335.
The problem is that when I make a request to the second server, it is somehow executed by the first pool. Stranger still, sometimes it takes PHP classes from one directory (that of the first pool), sometimes from the other. There is no specific pattern; it seems to happen randomly.
E.g. the nginx logs show that a request came in on the second server, while the php-fpm logs show that it was handled by the first pool.
But it never happens the other way around (requests to the first server are always executed by the first php-fpm pool).
Pools are set up in the same way:
same user
same group
pm = dynamic
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 300
chroot = ...
chdir = /
php_flag[display_errors] = on
php_admin_value[error_log] = /logs/error.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 64M
catch_workers_output = yes
php_admin_value[upload_tmp_dir] = ...
php_admin_value[curl.cainfo] = ...
The nginx directives for PHP in each server block look like this:
fastcgi_pass 127.0.0.1:2233X;
fastcgi_index index.php;
include /etc/nginx/fastcgi_params;
fastcgi_param DOCUMENT_ROOT /;
fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_intercept_errors off;
Had the same problem.
The best answer on this so far was on ServerFault, which suggested opcache.enable=0 and pointed me to a quite interesting behaviour of PHP:
the APC/OPcache cache is shared between all PHP-FPM pools
Digging further through the OPcache documentation, I found this php.ini option:
opcache.validate_root=1
opcache.validate_root boolean
Prevents name collisions in chroot'ed environments. This should be enabled in all chroot'ed environments to prevent access to files outside the chroot.
Setting this option to 1 (the default is 0) and restarting php-fpm fixed the problem for me.
EDIT:
Searching for the right keywords (validate_root), I found much more on this bug:
https://bugs.php.net/bug.php?id=69090
https://serverfault.com/a/877508/268837
Following the notes from the bug discussion, you should also consider setting opcache.validate_permission=1
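Putting both together, the change is two lines in php.ini (or an OPcache .ini drop-in), followed by a php-fpm restart:

; prevent name collisions between chroot'ed pools
opcache.validate_root=1
; also check file permissions when serving a script from the cache
opcache.validate_permission=1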
Can anyone please suggest a solution for a 504 Gateway Timeout error when running a cron job on shared hosting? I have tried the sleep function, but it didn't help. I have the following function for the cron job:
public function checkOrderStatus()
{
    // Collect the ids of all orders that are still waiting on the API.
    $orders = Order::select('id')
        ->whereNotIn('status', ['COMPLETED', 'CANCELLED', 'PARTIAL', 'REFUNDED'])
        ->where('api_order_id', '!=', null)
        ->orderBy('id', 'desc')
        ->pluck('id')
        ->toArray();

    $collection = collect($orders);
    $chunks = $collection->chunk(20);
    $request = new \Illuminate\Http\Request();

    // Query the third-party API for each order, 20 orders per chunk,
    // sleeping 10 seconds between chunks.
    foreach ($chunks as $ids) {
        foreach ($ids as $id) {
            $request->replace(['id' => $id]);
            $rep = $this->getOrderStatusFromAPI($request);
        }
        sleep(10);
    }
}
The getOrderStatusFromAPI() function calls a third-party API to fetch some records.
checkOrderStatus() currently processes around 300 records on each cron run; with chunks of 20 that is roughly 15 chunks, so the sleep(10) calls alone add about 150 seconds, which already exceeds a typical 60-second gateway timeout before any API latency is counted. Please suggest any solution other than a server upgrade. Thanks very much!!
There are multiple solutions to your problem. If you're using NGINX with FastCGI try:
Changes in php.ini
Try raising the max_execution_time setting in php.ini (on CentOS the path is /etc/php.ini):
max_execution_time = 150
Changes in PHP-FPM
Try raising the request_terminate_timeout setting in your PHP-FPM pool configuration (on CentOS the pool files live in /etc/php-fpm.d/):
request_terminate_timeout = 150
Changes in Nginx Config
Finally, add the fastcgi_read_timeout directive inside your Nginx virtual host configuration:
location ~* \.php$ {
include fastcgi_params;
fastcgi_index index.php;
fastcgi_read_timeout 150;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Reload PHP-FPM and Nginx:
service php-fpm restart
service nginx restart
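If you're on shared hosting where php.ini and the FPM pool config are out of reach, you can at least raise PHP's own limit from inside the script; note that this does not change nginx's fastcgi_read_timeout:

// Raise PHP's execution time limit for this run only; the nginx
// gateway timeout still applies and must be changed separately.
set_time_limit(150); // or 0 for no PHP-side limit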
I'm trying to take advantage of nginx upstream using sockets, but I'm receiving errors in my log:
connect() to unix:/var/run/user_fpm2.sock failed (2: No such file or directory) while connecting to upstream
I might be going about this wrong and am looking for some advice/input.
Here's the nginx conf block:
upstream backend {
server unix:/var/run/user_fpm1.sock;
server unix:/var/run/user_fpm2.sock;
server unix:/var/run/user_fpm3.sock;
}
And:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_pass backend;
fastcgi_index index.php;
include fastcgi_params;
}
Then I have three PHP-FPM pools in /etc/php/7.0/fpm/pool.d/ that look pretty much like the one below. The only difference between the pools is _fpm1, _fpm2 and _fpm3, to match the upstream block.
[user]
listen = /var/run/user_fpm1.sock
listen.owner = user
listen.group = user
listen.mode = 0660
user = user
group = user
pm = ondemand
pm.max_children = 200
pm.process_idle_timeout = 30s
pm.max_requests = 500
request_terminate_timeout = 120s
chdir = /
php_admin_value[session.save_path] = "/home/user/_sessions"
php_admin_value[open_basedir] = "/home/user:/usr/share/pear:/usr/share/php:/tmp:/usr/local/lib/php"
I've noticed that /var/run only ever contains the user_fpm3.sock file.
Am I going about this wrong? Is it possible to make this upstream config work? All advice and critique welcome.
I'm running PHP 7 on Debian Jessie with nginx 1.10.3; the server has 6 CPUs and 12GB RAM.
Thanks in advance.
UPDATE: I figured out the answer myself, but I'm leaving the question here in case someone else is trying to do the same thing, or there's a way to optimize this further.
All I had to do was change my pool names to [user_one], [user_two] and [user_three]. Because all three pools were named [user], PHP-FPM treated them as a single pool and only the last listen directive took effect, which is why only user_fpm3.sock ever existed.
Renaming each PHP pool fixed the problem, like so (each pool keeping its own socket, as sketched below):
[user_one]
[user_two]
[user_three]
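For completeness, each pool file then pairs its own section name with its own socket, roughly like this (file names shown are illustrative):

; /etc/php/7.0/fpm/pool.d/user_one.conf
[user_one]
listen = /var/run/user_fpm1.sock

; /etc/php/7.0/fpm/pool.d/user_two.conf
[user_two]
listen = /var/run/user_fpm2.sock

; /etc/php/7.0/fpm/pool.d/user_three.conf
[user_three]
listen = /var/run/user_fpm3.sock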
UPDATE
Still in pain ... nothing found :(
I'm honestly willing to donate to anyone who can help me solve this; it's becoming an obsession, lol.
On a Proxmox host, I have a VM running Debian.
On this Debian VM, nginx, PHP5-FPM, APC, Memcached and MySQL are running a big Magento multi-website setup.
Sometimes (randomly, or often around 9am, it depends) the server load increases.
What I can see during this peak is:
A high number of PHP-FPM instances in htop.
A high number of MySQL connections, most of them in the sleeping state with a large time value, like 180 or sometimes more.
The server's memory is not full; free -h tells me memory is not the issue here.
TCP connections from visitors are not high, so I don't think traffic is the issue either.
It looks like there is something (a PHP script, I would say) that is triggered either by the cron or by a visitor (a search or something else) and takes a long time to process, probably locking some MySQL tables and preventing other processes from running, which leads to a massive freeze.
I'm trying hard to figure out what is causing this problem, or at least to find ways to debug it efficiently.
What I have tried already:
Traced some of the PHP processes with htop to find more information. That's how I found out that the MySQL process had messages indicating it could not connect to a resource because it was busy.
Searched in /var/log/messages and /var/log/syslog for information, but got nothing relevant.
Searched in /var/log/mysql for error logs, but got nothing at all.
Searched in /var/log/php5-fpm.log and got many messages indicating that processes exit with code 3 after a "LONG" period of time (probably the process waiting for a MySQL resource and never getting it?), like:
WARNING: [pool www] child 23839 exited with code 3 after 1123.453563 seconds from start
or even :
WARNING: [pool www] child 29452 exited on signal 15 (SIGTERM) after 2471.593537 seconds from start
Searched in the Nginx website error log and found multiple messages indicating that visitors' connections timed out due to the 60-second timeout I set in the Nginx config file.
Here are my settings:
Nginx website config file:
location ~ \.php$ {
if (!-e $request_filename) {
rewrite / /index.php last;
}
try_files $uri =404;
expires off;
fastcgi_read_timeout 60s;
fastcgi_index index.php;
fastcgi_split_path_info ^(.*\.php)(/.*)?$;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
Nginx main config file:
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
fastcgi_read_timeout 60;
client_max_body_size 30M;
PHP-FPM is in ondemand mode
default_socket_timeout = 60
mysql.connect_timeout = 60
PHP-FPM pool's config file
pm=ondemand
pm.max_children = 500
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.process_idle_timeout = 10s;
pm.max_requests = 5000 (I was thinking about reducing this value to force processes to respawn more often; if someone has experience with that, I'm interested in hearing it)
Thank you for taking the time to read this; I will update the content here if needed.
Regards,
Sorcy
Did you check the cron jobs in crontab and in Magento to make sure this is not caused by a scheduled job?
Does this weird server behaviour slow down your site? I'm not sure, but this could also be a Slowloris DDoS attack, where a lot of HTTP connections are opened and, because of a bug, never get closed. Maybe that gives you a hint.