Vagrant+Ubuntu 14.04+Nginx+HHVM = slow + crashing - php

As per my last question, I'm trying to speed up Laravel by running it under HHVM.
This required me to update my server to 64-bit, so I'm running Trusty64 now. I installed HHVM and Nginx via deb packages. I'm not entirely sure my nginx configuration is right; I scraped this off the net:
server {
    listen 80 default_server;

    root /vagrant/public;
    index index.php index.html index.htm;

    server_name localhost;

    access_log /var/log/nginx/localhost.laravel-access.log;
    error_log  /var/log/nginx/localhost.laravel-error.log error;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt  { log_not_found off; access_log off; }

    error_page 404 /index.php;

    include /etc/nginx/hhvm.conf; # the HHVM magic happens here
}
And my site does load the first few times I hit it. It's now taking more than twice as long to load as with the built-in PHP server. After several refreshes, the page stops loading altogether: nginx returns a 504 Gateway Timeout, I can no longer SSH into the server, and Vagrant takes several minutes just to shut down. Whatever it's doing, it's completely killing my server.
I've heard HHVM uses a JIT that needs to warm up and only kicks in after several requests. Could this be what's destroying my server? How do I fix it?

Post update: I must eat my words!
I moved the Laravel code from a shared VirtualBox folder to a non-shared directory, and now HHVM loads the Laravel welcome screen in under 10 ms once the JIT kicks in.
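For anyone wanting to try the same thing, here's roughly what that looks like (paths are examples, adjust to your layout): copy the app out of the shared folder, point the nginx root at the copy, and reload.

sudo mkdir -p /var/www
sudo rsync -a --delete /vagrant/ /var/www/laravel/
# then change the nginx root to /var/www/laravel/public and reload:
sudo nginx -t && sudo service nginx reload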
There is a startup config setting that controls the number of requests needed before the HHVM JIT kicks in.
Here's the ini file I use for my HHVM:
pid = /var/run/hhvm/pid

hhvm.server.file_socket = /var/run/hhvm/hhvm.sock
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php

hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log

hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
hhvm.mysql.typed_results = false
hhvm.eval.jit_warmup_requests = 1
jit_warmup_requests = 1 means one warm-up request before optimization; set it to 0 to have the JIT kick in immediately. If not specified, I believe the default is around 10 requests.
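For reference, with the Ubuntu deb packages the setting usually lives in /etc/hhvm/server.ini; that path is the package default and may differ on your install:

echo 'hhvm.eval.jit_warmup_requests = 0' | sudo tee -a /etc/hhvm/server.ini
sudo service hhvm restart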
Regardless, I have the same setup as you: nginx, HHVM 3.0.1, Laravel, Ubuntu 14.04, on a VirtualBox image using a shared folder. I use the same shared folder with a second image running PHP-FPM 5.5. The PHP 5.5 image loads the Laravel 'you have arrived' page in 50 ms or so, while the HHVM version is more on the order of 250 ms, much less performance than expected. I'm going to test running the code in a non-shared directory, but I doubt this is the issue.
I suspect it has to do with the amount of code that must be evaluated at runtime vs. compile time. I know that if I run code with lots of magic methods and variable variables, HHVM doesn't exactly shine. Laravel has some funky autoloading going on to achieve its hiding of namespaces and such, and this might have some impact, but I'd need to dig deeper to take a strong stand on this.
I have this in my nginx config file to pass scripts to HHVM (note that I use a Unix socket, not TCP, and don't use Hack yet):
location ~ \.php$ {
    fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
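A couple of quick sanity checks for the socket setup above, in case requests start returning 502s:

ls -l /var/run/hhvm/hhvm.sock   # the socket must exist and be readable by the nginx user
sudo nginx -t                   # syntax-check the config
sudo service nginx reload
curl -I http://localhost/       # expect a 200 served by HHVM, not a 502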

Related

Unable to get nginx/php-fpm consistently working with docker-compose and a custom wordpress install

I'm honestly getting quite frustrated. I keep fiddling with my nginx conf and get it working, until I restart my PC or Docker. All of the Docker functionality seems fine: I can build and docker-compose up without problems, but every time I restart my computer or Docker I have to fiddle with my nginx conf again until it works. I'm using WordPress, currently installed in a subdirectory wp, with my wp-content in a different subdirectory at content.
My nginx configuration looks like this:
server {
    listen 80;

    root /var/www;
    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ \.php$ {
        try_files $uri $uri/index.php /index.php;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
It was working yesterday while I was working, but after restarting the PC overnight and trying to work on the site today, I'm getting the standard PHP-FPM error: FastCGI sent in stderr: "Primary script unknown". I'm curious whether anybody has a solution or knows why I'm having this issue. I understand that "Primary script unknown" usually means my SCRIPT_FILENAME param is bad, but it was working yesterday, and as far as I can tell it should always point to /var/www/index.php, which is where my WordPress index file is.
Should I basically just hardcode $document_root/index.php?
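For reference, hardcoding it would look something like the following sketch (only sensible while every PHP request really should hit WordPress's front controller; the app:9000 upstream comes from the config above):

location ~ \.php$ {
    fastcgi_pass app:9000;
    include fastcgi_params;
    # hardcoded for debugging only; $document_root$fastcgi_script_name is the usual value
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
}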
EDIT: I literally just rebuilt my Docker containers with docker-compose build and everything is working as intended. So I guess this question is more about Docker than nginx/PHP, but I would still love some guidance. Is there some reason my Docker containers are ephemeral and need rebuilding on every boot?
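Some checks that might narrow down whether it's the containers or the mounts that change across reboots (the service name app comes from the config above; adjust to your compose file):

docker-compose ps                        # are the same containers running, or fresh ones?
docker-compose exec app ls -l /var/www   # does php-fpm actually see the WordPress files?
docker-compose logs --tail=50 app        # any startup errors from php-fpm?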

cURL error: No route to host

So we're building a web application in PHP, and we're trying to make requests to an external API. The problem is that we're getting a cURL error:
cURL error 7: Failed to connect to external.api.com port 443: No route to host
A little bit of background now.
We're making requests using Guzzle.
We're hosting on Apache, which is running on a Linux machine, and we're also using SSL.
The API also uses SSL, hence the port 443 in the error message.
The HTTP requests include a certificate for authentication.
I've managed to get it running on two different development environments but not on the production one. I suspect the problem is in the configuration of Apache, as if we haven't allowed it to make requests to a certain IP or port, but I have no idea how to check this. I've read that I might have to change the file /etc/network/interfaces, yet I haven't found any info on what to write there.
I've also read that I should run netstat -rn for answers, yet I'm not sure what to look for there.
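For reference, here's what I've been checking on the production box (the hostname is the placeholder from the error above):

curl -v https://external.api.com/   # verbose output shows exactly where the connection fails
netstat -rn                         # routing table; check that a default route (0.0.0.0) exists
sudo iptables -L -n                 # look for REJECT/DROP rules that could block outbound 443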
EDIT:
I can't even make a simple GET request without any parameters or anything.
Yet I can make requests to https://google.com and https://facebook.com. Will write more in a few.
After a lot of debugging and testing all of my code, I contacted the service whose API I was trying to consume.
They were a European service provider and had whitelisted only European IPs. Our production server was in the USA; after they whitelisted our IP, everything worked.
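As a side note, to find the public IP a provider needs to whitelist, something like this works (any external echo service will do; this one is a third-party convenience):

curl https://ifconfig.me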
This worked for me for Apache (httpd):
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
netstat -aln | grep 443 will show if your webserver is listening on that port.
Depending on which webserver you have installed, the configuration file for your site will be at /etc/nginx/sites-available/default, /etc/nginx/sites-available/yourSite, /etc/nginx/nginx.conf, or some similar path for Apache.
Wherever it is located, your configuration file should contain something like the following:
server {
    listen 80;
    listen 443 ssl;
    server_name yourSite.com;
    root "/path/to/yourSite";

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /path/to/webserver/yourSite.error.log error;

    sendfile off;
    client_max_body_size 100m;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }

    location ~ /\.ht {
        deny all;
    }

    ssl_certificate /path/to/yourSite.crt;
    ssl_certificate_key /path/to/yourSite.key;
}
After changing this file, make sure to run sudo service nginx reload or sudo service nginx restart (or the equivalent Apache command).
sudo service nginx configtest or sudo nginx -t will help with debugging the config file.
After searching for about a whole day I found that the problem was in the iptables rules.
In my case the solution was to restore the iptables rules as follows:
create a file containing the following text:
*filter
:INPUT ACCEPT [10128:1310789]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [9631:1361545]
COMMIT
run the command: sudo iptables-restore < /path/to/your/previously/created/file
This will hopefully fix your problem if it is an iptables issue.
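It's worth backing up the live rules before overwriting them, and on Ubuntu you can persist the restored rules across reboots (the package name is the Ubuntu one; other distros differ):

sudo iptables-save > ~/iptables.backup          # keep a copy of the current rules
sudo iptables-restore < /path/to/your/previously/created/file
sudo apt-get install iptables-persistent        # stores rules in /etc/iptables/rules.v4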
Today I faced the same issue. I used curl http://localhost:8080 to check whether my Tomcat was working, and got the error Failed to connect to ::1: No route to host.
I finally solved it; it turned out to be a problem with my Apache Tomcat.
Check your logs first. If you find that the port was already in use, find the process holding it, kill it, and then restart Tomcat.
Find the process by port: netstat -lnp | grep <port>
Kill the process: kill -9 <pid>

Running CGI scripts on NGINX

I know this question has already been asked, but there were no clear answers on that question (How to run CGI scripts on Nginx) that helped me. In my case, I have installed NGINX from source and fixed my config file so that I can serve .php files via FastCGI successfully. However, I am having quite some trouble running CGI scripts. I know I have FastCGI installed and set up, so am I supposed to be naming these .cgi files .fcgi instead? Or am I supposed to include some way for the .cgi file to know that it is working with FastCGI? I tried toying around with the nginx.conf file to include .fcgi, and it looks something like this right now:
worker_processes 2;
pid logs/nginx.pid;
error_log syslog:server=unix:/dev/log,facility=local7,tag=nginx,severity=error;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx,severity=info combined;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        root /home/parallels/Downloads/user_name/nginx/html;

        location / {
            index index.html index.htm new.html;
            autoindex on;
        }

        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param HTTPS off;
            include fastcgi_params;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 4 16k;
        }

        location ~ \.pl|fcgi$ {
            try_files $uri =404;
            gzip off;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.pl;
            #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        #error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
However, whenever I run a .fcgi script such as
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><body>Hello, world.</body></html>";
I am greeted with a screen showing the raw source of the script. I'm pretty sure this is not normal; I should just be seeing Hello, world. on my screen, not all the code as well. Please let me know if my thinking is actually wrong and this is the expected output.
Additionally, on a side note, suppose I had this as my file.fcgi file:
#!/usr/bin/perl
my $output = `ls`;
print $output;
Running this on the command line returns a list of all files in the directory where the .fcgi file is located. Is there any way I could display this in the web browser? Looking at examples online, it seems people have been able to just open file.fcgi in their browser and see the output of the shell command, which led me to believe I'm doing something wrong: when I run it on the command line it lists all the files, but in the browser it just prints out my code. Does anyone know what I could possibly be doing wrong, assuming I am doing something wrong? If you need any more information, please let me know!
Thank you, and have a good day!
nginx does not support CGI scripts, and cannot launch FastCGI scripts on its own — it can only connect to FastCGI processes that are already running.
If you want to run CGI scripts, use a web server that supports them, such as Apache. While there are some workarounds, they will just confuse you at this stage.
Search for "fastcgi wrapper" to find various programs designed to bridge the gap between "modern" webservers that don't like spawning processes to handle requests and traditional CGI programs.
[nginx       ]    one socket      [wrapper     ]  ,-- forks a single subprocess
[runs        ] == connection ==>  [also runs   ] ---- running your CGI program
[continuously]    per request     [continuously]  `-- for each request
The standard CGI API is: for each request, the server calls your program with environment variables describing the request and with the body (if any) on stdin, and your program is expected to emit a response on stdout and exit. The FastCGI API instead expects your program to be constantly running, handling requests handed to it on a socket; in that way, it's really more like a server. See http://en.wikipedia.org/wiki/FastCGI
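As a concrete example, one such wrapper on Debian/Ubuntu is fcgiwrap; a minimal sketch, assuming the package's default socket path:

sudo apt-get install fcgiwrap
# then add a location like this to your server block:
# location ~ \.cgi$ {
#     include fastcgi_params;
#     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#     fastcgi_pass unix:/var/run/fcgiwrap.socket;
# }
sudo nginx -t && sudo nginx -s reload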

Magento: not able to log in to admin panel

I am new to Magento. When I try to log in to the admin panel it gives me the error below. It works when I turn session auto-start off, but doing that breaks my other application on the server. I am using Magento version 1.9. Note that I am running nginx, not Apache.
Fatal error: Mage_Admin_Model_Observer::actionPreDispatchAdmin(): The script tried to execute a method or access a property of an incomplete object. Please ensure that the class definition "Mage_Admin_Model_User" of the object you are trying to operate on was loaded before unserialize() gets called or provide a __autoload() function to load the class definition in /var/www/html/magento/magento/app/code/core/Mage/Admin/Model/Observer.php on line 62
OK, so for your information, nginx does not parse .htaccess at all, and Magento relies heavily on .htaccess for security.
Before even considering your problem, please know that if your server has anything other than local access, you are at risk: the app/etc/local.xml file is accessible to everyone, and it gives the world your database credentials.
Please have a complete read of this document: http://info.magento.com/rs/magentocommerce/images/MagentoECG-PoweringMagentowithNgnixandPHP-FPM.pdf where you can find a basic nginx configuration for Magento:
server {
    listen 80 default;
    server_name magento.lan www.magento.lan; # like ServerName in Apache
    root /var/www/magento; # document root, path to directory with files
    index index.html index.php;
    autoindex off; # we don't want users to see files in directories

    location ~ (^/(app/|includes/|lib/|/pkginfo/|var/|report/config.xml)|/\.svn/|/\.git/|/.hta.+) {
        deny all; # ensure sensitive files are not accessible
    }

    location / {
        try_files $uri $uri/ /index.php?$args; # make index.php handle requests for /
        access_log off; # do not log access to static files
        expires max; # cache static files aggressively
    }

    location ~* \.(jpeg|jpg|gif|png|css|js|ico|swf)$ {
        try_files $uri $uri/ @proxy; # look for static files in root directory, ask backend if not found
        expires max;
        access_log off;
    }

    location @proxy {
        fastcgi_pass fpm_backend; # proxy everything from this location to the backend
    }

    location ~ \.php$ {
        try_files $uri =404; # if reference to php executable is invalid return 404
        expires off; # no need to cache php executable files
        fastcgi_read_timeout 600;
        fastcgi_pass fpm_backend; # proxy all requests for dynamic content to the backend configured in upstream.conf
        fastcgi_keep_conn on; # use persistent connections to the backend
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root${fastcgi_script_name};
        fastcgi_param MAGE_RUN_CODE default; # store code is defined in Administration > Configuration > Manage Stores
        fastcgi_param MAGE_RUN_TYPE store;
    }
}
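Once that config is in place, a quick way to confirm the deny rules work is to request the sensitive file directly (hostname taken from the config above); you should get a 403, not the XML:

curl -I http://magento.lan/app/etc/local.xml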
Then, when and only when access to the file app/etc/local.xml is denied, please consider adding the nginx tag to your question, so that a user with more nginx knowledge can maybe help you further than me (since this is more a sysadmin job than a coder's job like mine).
All I can say is: it looks like adding fastcgi_param PHP_VALUE "session.auto_start=0"; under the section
location ~ \.php$ {
    fastcgi_param PHP_VALUE "session.auto_start=0";
    # ... the rest as in the config above; shortened for the specific problem
}
That should do the trick.
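One way to confirm the per-location override took effect is to temporarily drop an info.php containing <?php phpinfo(); into the web root and check the reported value (the file name and hostname here are just for illustration):

curl -s http://magento.lan/info.php | grep -i session.auto_start
# the Local Value column should read Off for requests served by this block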
Can you please clear your cache, restart mysqld, and clear your browser cache as well? Also, can you please share your website link?
Magento will not work with session.auto_start enabled, because some actions take place before the session starts.
A workaround, if you really don't want to disable it for your other app, is to edit the .htaccess of your Magento installation and add php_flag session.auto_start 0 to it.

Setting up a site with homebrewed nginx gets a 404. Why is this happening?

I found many links regarding this topic, but so far I still could not solve my problem.
I have just installed nginx via Homebrew. Here are the steps that I did:
Added the site name to /etc/hosts:
127.0.0.1 mysite.com
In /usr/local/etc/nginx, I created a folder using:
mkdir sites
(Most instructions I have read so far already have sites-enabled or sites-default in their setup, but mine was clean, so I created one.) Then within the folder I created a file using vim:
vim mysite
Then in the file I have this:
server {
    listen 80;
    server_name mysite.com;
    root /Users/myname/mysite/mainsite;
    client_max_body_size 10M;

    # serve static files
    location ~ ^/(images|javascript|js|css|flash|media|static)/ {
        expires 30d;
    }

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /opt/local/share/nginx/html;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param FF_BOOTSTRAP_ENVIRONMENT dev;
        fastcgi_param FF_BOOTSTRAP_CONFIG webroot/dev;
        fastcgi_buffer_size 1024k;
        fastcgi_buffers 1024 1024k;
        fastcgi_busy_buffers_size 1024k;
        include /usr/local/etc/nginx/fastcgi.conf;
    }
}
After this I included my created folder in nginx.conf and nginx.conf.default, but I still get a 404 error. The above configuration in the mysite file, apart from some directory changes, worked on my other computer, but somehow I can't replicate it here; I tried revising the root directive and I still get a 404. Did I miss some important configuration step? What other reasons could there be for not being able to access mysite.com, or for getting a 404, after the above configuration? I also don't think any other background applications are interfering, because I just restarted the computer to see whether the site works. Any more suggestions on why this might be happening? Thanks in advance.
404 :(
First of all, you mentioned the missing sites-enabled part, probably because you're used to CentOS or some other distro; I've explained this part in my answer to another question.
Your site isn't working because nginx can't see the config file. Simply creating a folder anywhere doesn't work; you need to tell nginx to look at your config file. If your configuration lives in /usr/local/etc/nginx like you said, then you need to move the config file mysite to /usr/local/etc/nginx/conf.d at least,
or create the sites-available / sites-enabled pair as I explained in my other answer, move mysite into sites-available, then symlink to it from inside sites-enabled.
Of course, make sure you point things to the right paths, since your nginx lives inside /usr/local/etc/nginx instead of /etc/nginx.
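A minimal sketch of that include-and-symlink approach (paths assume the Homebrew layout from the question):

mkdir -p /usr/local/etc/nginx/sites-available /usr/local/etc/nginx/sites-enabled
mv /usr/local/etc/nginx/sites/mysite /usr/local/etc/nginx/sites-available/mysite
ln -s /usr/local/etc/nginx/sites-available/mysite /usr/local/etc/nginx/sites-enabled/mysite
# inside the http { } block of /usr/local/etc/nginx/nginx.conf add:
#     include /usr/local/etc/nginx/sites-enabled/*;
nginx -t && nginx -s reload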
