I am currently using a JavaScript framework served by nginx, for example at the following URL:
www.myjsapp.com
I am also using Laravel 5.6 to build an API.
Instead of setting up two hosts, one for the JS app and one for Laravel, I want to be able to serve the Laravel API at the following URL:
www.myjsapp.com/api
Is this possible, or do I always have to use two hosts?
The nginx server block for myjsapp.com is as follows:
server {
    listen 80;
    listen 443 ssl http2;
    server_name .myjsapp.com;
    root "/home/project/myjsapp";
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/myjsapp.com-error.log error;
    sendfile off;
    client_max_body_size 100m;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }

    location ~ /\.ht {
        deny all;
    }

    ssl_certificate /etc/nginx/ssl/myjsapp.com.crt;
    ssl_certificate_key /etc/nginx/ssl/myjsapp.com.key;
}
You can separate the two apps using location blocks in your nginx config file: different location blocks capture different URL prefixes and pass them to the respective backend (the JS app or Laravel).
server {
    server_name myjsapp.com;
    # other configuration like root, logs

    location / {
        # JS app config
    }

    location /api {
        # Laravel config
    }
}
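To make the /api block concrete, here is a minimal sketch, assuming the Laravel app is deployed at /home/project/laravel-api (a hypothetical path) and PHP-FPM listens on 127.0.0.1:9000 as in your existing vhost. Anything under /api falls through to Laravel's front controller; everything else stays with the JS app:
location /api {
    # Nothing under /api exists on disk for the JS app,
    # so fall through to the Laravel front controller.
    try_files $uri @laravel;
}

location @laravel {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    # Assumed deploy path; point this at your Laravel public/index.php.
    fastcgi_param SCRIPT_FILENAME /home/project/laravel-api/public/index.php;
}
Laravel will still see the full /api/... URI in REQUEST_URI, so define the routes with the api prefix (which Laravel's routes/api.php already applies by default).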
Perhaps somebody has faced this issue. From time to time I get a "no input file specified" error from nginx.
This is the log from the nginx container: "FastCGI sent in stderr: "Unable to open primary script: /var/www/public/index.php (No such file or directory)" while reading response header from upstream".
It happens only over https; on my local machine everything is all right.
Here is the full log.
As you can see, I sometimes get a 200 response and sometimes a 404, for the same file.
This is my nginx config:
server {
    index index.php index.html index.htm;
    root /var/www/public;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www;
    }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    rewrite ^/core/authorize.php/core/authorize.php(.*)$ /core/authorize.php$1;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
    }

    location ~ /\.ht {
        deny all;
    }

    location = /favicon.ico {
        log_not_found off; access_log off;
    }

    location = /robots.txt {
        log_not_found off; access_log off; allow all;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
I solved the problem: instead of nginx I set up Apache, because Laravel 5.8 refused to work with nginx for me. The latest version of Laravel works properly with nginx in Docker.
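For anyone who wants to stay on nginx: "Unable to open primary script" from PHP-FPM in Docker almost always means the path nginx sends in SCRIPT_FILENAME does not exist inside the php-fpm container. A minimal sketch, assuming the code is mounted at /var/www/public in both containers (the mount path is an assumption):
location ~ \.php$ {
    fastcgi_pass app:9000;
    include fastcgi_params;
    # This path is resolved by PHP-FPM inside the "app" container,
    # so the code must be mounted there at the same location,
    # or you must hard-code the path as php-fpm sees it.
    fastcgi_param SCRIPT_FILENAME /var/www/public$fastcgi_script_name;
}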
I started off with a Docker image which has a processor for PHP files in place.
Requesting the normal index.php file works without issue, but the routes I added are less accessible.
One of the routes, for example, is /time; with the PHP built-in webserver it is accessed as
index.php/time and it returns the time. The Docker image I used has PHP working perfectly, but I think one of the nginx rules is breaking this functionality for me. I've tried a lot of changes, but I can't seem to find the issue.
$app->get('/time', function ($request, $response, $args) {
    // Current Unix timestamp
    $seconds = time();
    // Current Unix timestamp with microseconds
    $milliseconds = microtime(true) * 1000;
    // Nanosecond precision
    // This is default as per InfluxDB standard
    $nanoseconds = $milliseconds * 1000000;
    // Return
    return $response->withJson(array(
        'seconds' => $seconds,
        'milliseconds' => intval($milliseconds),
        'nanoseconds' => intval($nanoseconds)
    ), 200);
});
This is the configuration of the nginx server I am running in Docker. One of the things I have tried is adding the location /ping { line, but to no avail; /ping and /time keep returning a 404 Not Found.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/html/public;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name _;

    # Disable sendfile as per https://docs.vagrantup.com/v2/synced-folders/virtualbox.html
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    # Add option for x-forward-for (real ip when behind elb)
    #real_ip_header X-Forwarded-For;
    #set_real_ip_from 172.16.0.0/12;

    location /ping {
        try_files $uri $uri/ /index.php/ping;
    }

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to index.html
        try_files $uri $uri/ =404;
    }

    error_page 404 /404.html;
    location = /404.html {
        root /var/www/errors;
        internal;
    }

    location ^~ /sad.svg {
        alias /var/www/errors/sad.svg;
        access_log off;
    }

    location ^~ /twitter.svg {
        alias /var/www/errors/twitter.svg;
        access_log off;
    }

    location ^~ /gitlab.svg {
        alias /var/www/errors/gitlab.svg;
        access_log off;
    }

    # pass the PHP scripts to FastCGI server listening on socket
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|webp|tiff|ttf|svg)$ {
        expires 5d;
    }

    # deny access to . files, for security
    #
    location ~ /\. {
        log_not_found off;
        deny all;
    }

    location ^~ /.well-known {
        allow all;
        auth_basic off;
    }
}
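For context, the usual pattern for front-controller frameworks (Slim, in this case) is to let location / fall back to index.php instead of =404, so that routes like /time reach PHP without a per-route block. A minimal sketch of that fallback:
location / {
    # Serve real files and directories first, then hand
    # everything else to the Slim front controller.
    try_files $uri $uri/ /index.php$is_args$args;
}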
I'm using a Vagrant Homestead machine as my local development environment and I'm having trouble configuring a new Zend Framework 3 project.
I have other projects on the same machine and they're working OK.
This one gives me a "No input file specified" error no matter what I put in the configuration file.
I'm running nginx 1.15.8 and PHP 7.3 in Vagrant Homestead 7.
I tried many solutions provided on this site but none was useful.
This is my vhost file:
server {
    listen 80;
    listen 443 ssl http2;
    server_name simulador-preferencias.test;
    root /home/vagrant/code/simulador-preferencias/public;
    index index.html index.htm index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    access_log /var/log/nginx/simulador-preferencias.test.log;
    error_log /var/log/nginx/simulador-preferencias.test-error.log error;
    sendfile off;
    client_max_body_size 100m;

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;

        location ~ /\.ht {
            deny all;
        }
    }
}
Can anyone help me get this configuration right?
Thanks in advance.
I finally solved the problem.
After you add the vhost to the Vagrant machine and edit the Homestead.yml file to add the new site, you have to run the following command to generate the correct configuration:
vagrant up --provision
Note that you'll have to stop the machine to be able to edit the Homestead.yml file or else you'll get an error message.
Stop the machine with:
vagrant halt
I have installed OroCRM on an AWS Lightsail VPS with nginx & PHP 7. OroCRM installed without a hitch, but I'm using nginx for the first time and my virtual host doesn't seem to be working.
sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index app.php index.php index.html;
    server_name 34.127.224.10;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}
I then purchased & pointed crmdomain.com to my instance and created another file.
sites-available/crm:
server {
    listen 80;
    server_name crmdomain.com www.crmdomain.com;
    root /var/www/html/crm/web;
    index app.php;

    error_log /var/log/nginx/orocrm_error.log;
    access_log /var/log/nginx/orocrm_access.log;

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }

    location @rewrite { rewrite ^/(.*)$ /app.php/$1; }

    location / {
        try_files $uri /app.php$is_args$args;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_index app.php;
        fastcgi_read_timeout 10m;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
I honestly don't know most of what is above; it's mostly based on tutorials and Apache experience.
Errors
Domain based
- crmdomain.com -> 403 Forbidden nginx/1.10.0 (Ubuntu)
- crmdomain.com/app.php -> No input file specified.
- crmdomain.com/app_dev.php -> No input file specified.
- crmdomain.com/index.nginx-debian.html -> Welcome to nginx!
- crmdomain.com/user/login -> 404 Not Found nginx/1.10.0 (Ubuntu) **This is what should work**
Static IP based
- 34.127.224.10 -> 403 Forbidden nginx/1.10.0 (Ubuntu)
- 34.127.224.10/crm/web/ - No input file specified.
- 34.127.224.10/crm/web/app.php - No input file specified.
- 34.127.224.10/crm/web/app_dev.php - No input file specified.
- 34.127.224.10/index.nginx-debian.html -> Welcome to nginx!
- 34.127.224.10/crm/web/app.php/user/login -> 404 Not Found nginx/1.10.0 (Ubuntu) **This is what should work**
What am I doing wrong?
It's weird how often I answer my own questions.
I was able to change the default file and remove the extra virtual host, since the server was only going to host the one app.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name 34.127.224.10 crmdomain.com www.crmdomain.com;
    root /var/www/html/crm/web;
    index app.php app_dev.php index.php;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /app.php$is_args$args;
    }

    location ~ ^/(app|app_dev|config|install)\.php(/|$) {
        #fastcgi_pass 127.0.0.1:9000;
        # or
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_log /var/log/nginx/orocrm_error.log;
    access_log /var/log/nginx/orocrm_access.log;
}
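For anyone comparing the two configs: the default vhost sent fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name to PHP-FPM, and since /scripts doesn't exist on disk, that is exactly what produces "No input file specified." The 403 on / most likely came from the index directive not matching any existing file in /var/www/html. Pointing root at the app's web directory and SCRIPT_FILENAME at $document_root$fastcgi_script_name addresses both.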
I would like to use dynamic host resolution with nginx and fastcgi_pass.
When fastcgi_pass $wphost:9000; is set in the conf, nginx logs the error
[error] 7#7: *1 wordpress.docker could not be resolved (3: Host not found),
but when I set fastcgi_pass wordpress.docker:9000; it works, except that after a WordPress restart nginx still points to an old IP.
server {
    listen [::]:80;
    include /etc/nginx/ssl/ssl.conf;
    server_name app.domain.*;
    root /var/www/html;
    index index.php index.html index.htm;

    resolver 172.17.42.1 valid=60s;
    resolver_timeout 3s;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args; ## First attempt to serve request as file, then as directory, then fall back to index.html
    }

    #error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    set $wphost wordpress.docker;

    # pass the PHP scripts to FastCGI server listening on wordpress.docker
    location ~ \.php$ {
        client_max_body_size 25M;
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass $wphost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;
        log_not_found off;
        expires 168h;
    }

    # deny access to . files, for security
    location ~ /\. {
        access_log off;
        log_not_found off;
        deny all;
    }
}
I have a different virtual host configuration where I use proxy_pass http://$hostname; and in that setup everything works as expected and the host is found.
After trying different options, I wonder whether fastcgi_pass supports variables at all.
It does work if you pass the upstream as a variable instead, for example:
upstream fastcgi_backend1 {
    server php_node1:9000;
}

upstream fastcgi_backend2 {
    server php_node2:9000;
}

server {
    listen 80;
    server_name _;

    set $FASTCGI_BACKEND1 fastcgi_backend1;
    set $FASTCGI_BACKEND2 fastcgi_backend2;

    location ~ ^/site1/index.php$ {
        fastcgi_pass $FASTCGI_BACKEND1;
    }

    location ~ ^/site2/index.php$ {
        fastcgi_pass $FASTCGI_BACKEND2;
    }
}
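Worth adding: fastcgi_pass does accept a variable, but nginx then resolves the hostname at request time using the resolver directive rather than /etc/resolv.conf, so the resolver you configure has to actually know the name. A minimal sketch, assuming Docker's embedded DNS on a user-defined network (the 127.0.0.11 address is Docker's embedded DNS, and wordpress.docker is the hostname from the question):
resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS; re-resolve every 10s

set $wphost wordpress.docker;

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # With a variable, nginx resolves $wphost per request via the
    # resolver above, picking up new container IPs after a restart.
    fastcgi_pass $wphost:9000;
}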