I moved my Magento 2 site from my hosting provider to my localhost.
I cleared the cache, adjusted the secure and unsecure base URLs in core_config_data, ran static content deploy via the CLI, and checked all folder permissions.
Magento runs, but without any CSS or JS files.
In the browser console I can see the following:
What should I do to fix this issue?
P.S.
Win 10
Open Server (PHP 7 x64, MySQL 5.7 x64, Apache-PHP7-x64 + Nginx 1.10)
No external caching
P.P.S. Before I copied the site from the host, I tried to set up Magento with sample data using the CLI and hit the same issue! So I believe the problem is not only about moving Magento 2 from a host to a local machine.
I can see that M2 tries to load all files from the version1485628564 folder, which doesn't exist in pub/static:
http://magehost.two/pub/static/version1485628564/frontend/Magento/luma/en_US/mage/calendar.css
You need to update the .htaccess file under the /pub/static folder. Open MAGENTO_DIR/pub/static/.htaccess and add the following code:
...
<IfModule mod_rewrite.c>
RewriteEngine On
# Add this line (Apache does not allow a trailing comment after a directive):
RewriteBase /pub/static/
...
Alternatively, you can disable static file signing by adding this record into the core_config_data table with this query:
INSERT INTO `core_config_data` (`scope`, `scope_id`, `path`, `value`) VALUES ('default', 0, 'dev/static/sign', '0');
In this case, keep in mind that this will disable the browser's cache refreshing mechanism.
After the execution, you have to flush the Magento cache.
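The flush can be done with the standard Magento CLI command, run from the Magento root folder (the same command appears later in this thread):
php bin/magento cache:flush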
UPDATE 2018
In 2018 I made a Pull Request to the Magento 2 team that includes this fix. The latest versions of the 2.3 and 2.4 branches include the following row in the .htaccess file:
## you can put here your pub/static folder path relative to webroot
#RewriteBase /magento/pub/static/
You have to uncomment the row and set it according to your Magento installation.
You can find the same row under the /pub/media/.htaccess file.
As you are using nginx, the .htaccess advice above won't help you. You need to add this to your nginx domain config:
location /static/ {
# Remove version control string
location ~ ^/static/version {
rewrite ^/static/(version\d*/)?(.*)$ /static/$2 last;
}
}
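After editing the config, test it and reload nginx so the change takes effect:
sudo nginx -t && sudo systemctl reload nginx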
This means your deployed_version.txt has been removed. Recreate it and deploy your Magento 2 static content again; then it will work fine.
deployed_version.txt has to exist in pub/static/.
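As a minimal sketch (assuming the usual convention that this file holds a Unix-timestamp version string), you can recreate it from the Magento root and redeploy:
date +%s > pub/static/deployed_version.txt
php bin/magento setup:static-content:deploy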
You need to run the commands below from the CLI, in the Magento root folder:
php bin/magento setup:static-content:deploy
php bin/magento cache:flush
One more answer that might be helpful here. First, if the website is set to production mode, make sure you run the command to deploy the static assets, as below:
php bin/magento setup:static-content:deploy
Second, if your site is hosted with Nginx, make sure you include the nginx.conf.sample file located in the Magento 2 root folder. More specifically, the following is the snippet (from Magento 2.3.0) which handles static asset requests:
location /static/ {
# Uncomment the following line in production mode
# expires max;
# Remove signature of the static files that is used to overcome the browser cache
location ~ ^/static/version {
rewrite ^/static/(version[^/]+/)?(.*)$ /static/$2 last;
}
location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2|json)$ {
add_header Cache-Control "public";
add_header X-Frame-Options "SAMEORIGIN";
expires +1y;
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
}
location ~* \.(zip|gz|gzip|bz2|csv|xml)$ {
add_header Cache-Control "no-store";
add_header X-Frame-Options "SAMEORIGIN";
expires off;
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
}
if (!-f $request_filename) {
rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
}
add_header X-Frame-Options "SAMEORIGIN";
}
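For reference, the documented way to pull in the sample file is to define the fastcgi backend, set $MAGE_ROOT, and include it from your server block. A minimal sketch, where the socket path, host name, and Magento root are assumptions for a typical install:
upstream fastcgi_backend {
server unix:/run/php/php7.2-fpm.sock; # assumed PHP-FPM socket path
}
server {
listen 80;
server_name magehost.two; # assumed host name
set $MAGE_ROOT /var/www/magento2; # assumed Magento root
include /var/www/magento2/nginx.conf.sample;
}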
You might want to check your main Nginx configuration to ensure that it allows includes; that is what happened in my case. Without this setting, Nginx will not read your site's nginx.conf file and the server will not be able to find your css, img or js files.
This link has instructions: https://www.inmotionhosting.com/support/edu/wordpress/advanced-nginx-vps-and-dedicated/
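For example, the http block of a typical /etc/nginx/nginx.conf loads per-site configs with include directives along these lines (exact paths vary by distribution):
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;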
Related
I wanted to know the configuration for WordPress Multisite when the main site is in the root and the other sites are under /site-name/.
For example:
Main website: http://www.example.com/
Second website: http://www.example.com/second-site/
Third website: http://www.example.com/third-site/
I looked for the setting, but I only found it with subdomains.
Let's say your folder structure is as follows:
/root_folder/*
/root_folder2/second-site/*
/root_folder3/third-site/*
Then you need:
server {
server_name www.example.com;
root /root_folder/;
location / {
# code
}
location /second-site {
root /root_folder2/;
# code
}
location /third-site {
root /root_folder3/;
# code
}
}
It's important not to include /second-site or /third-site in the root directive, because Nginx automatically appends the request URI to the root path. For example, a request for /second-site/page.php is served from /root_folder2/second-site/page.php.
If your folder structure is as follows:
/root_folder/*
/root_folder/second-site
/root_folder/third-site
You only need:
server {
server_name www.example.com;
root /root_folder/;
location / {
# code
}
}
And Nginx will do the rest for you.
I'm trying to set up a web interface for some controlled plug sockets I have; the sockets are controlled by a Raspberry Pi. I want to have a couple of links on a web page that I can use to turn the switches on and off. I've been trying to use PHP to do this and for some reason it just won't work.
I've tried various suggestions (see the links below). All I'm getting is a white page whenever I click the link, and it doesn't do what it's supposed to, i.e. turn the switch on and off. Running the PHP script from the command line works as expected; the issue only appears when running it from the webpage.
I've looked at the permissions and for the script I've set the permissions with:
chmod 777 /path/to/script
I've tried storing the script in my home folder and in the /var/www/html folder with no joy. Nothing appears in the NGINX logs or PHP-FPM log to indicate any error.
I've tried editing the sudoers file to give www-data access to the script (www-data ALL=/path/to/script/), and I even tried it with all permissions for www-data (www-data ALL=(ALL:ALL) ALL); neither has worked.
I did think it might be because the script I'm trying to run involves starting an SSH session but I can't even get a local command to work to create a blank file either in the /home/pi/ directory or /var/www/html.
I've put the script I'm trying to run below along with the PHP I'm using to call the script and a second PHP file I've used to try other commands.
Any help or pointers in the right direction would be appreciated. I think the script is running but failing somewhere, and I can't work out where. The only thing I get back in a web browser is the echo $username line, so I know it's working in part, but when I try to execute a command nothing happens.
PHP SCRIPT:
<?php
$username = posix_getpwuid(posix_geteuid())['name'];
echo $username;
exec("/home/pi/scripts/switch2off");
?>
TEST SCRIPT:
<?php
exec("touch /var/www/html/s/test.txt");
?>
SWITCH2OFF SCRIPT
#!/bin/bash
ssh pi@example 'python /home/pi/switches/switch_2_off.py'
NGINX CONFIG:
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
server {
listen 80;
listen [::]:80;
return 301 https://$server_name$request_uri;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com;
location /.well-known/ {
allow all;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
include snippets/ssl-example.conf;
include snippets/ssl-params.conf;
root /var/www/html;
location / {
limit_req zone=one burst=5;
root /var/www/html;
auth_basic "Please Log In";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_set_header X-Content-Type-Options: nosniff;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-Frame-Options "allow-from example.com";
}
location /.well-known/ {
allow all;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
server {
error_page 401 403 404 /404.html;
}
PHP-FPM LOG:
[03-Aug-2020 05:00:01] NOTICE: Terminating ...
[03-Aug-2020 05:00:01] NOTICE: exiting, bye-bye!
[03-Aug-2020 05:00:29] NOTICE: fpm is running, pid 620
[03-Aug-2020 05:00:29] NOTICE: ready to handle connections
[03-Aug-2020 05:00:29] NOTICE: systemd monitor interval set to 10000ms
MY RESEARCH/THINGS I'VE TRIED:
Nginx serves .php files as downloads, instead of executing them - I started here as initially I had a config issue when instead of running the PHP scripts it served them as a download instead.
Run a shell script with an html button - this is where I got the code from for the PHP script
PHP code is not being executed, instead code shows on the page - not quite the same issue as I'm seeing. The web browser doesn't display any code from the php file even when going to view source
https://askubuntu.com/questions/520566/why-wont-this-php-script-execute-bash-script
https://unix.stackexchange.com/questions/115054/php-shell-exec-permission-on-linux-ubuntu
https://www.linode.com/docs/web-servers/nginx/serve-php-php-fpm-and-nginx/
Thanks for all the help. I found the issue; it wasn't with PHP or NGINX. The owner of /var/www/.ssh was set to pi for some reason. I changed it to www-data and the script now works from the webpage. I'm still not sure why my second script to create a file wouldn't work (probably also a permissions issue), but I experimented and found that other commands (like ls) did work, which brought me back to thinking it had to be a permissions error somewhere.
So I went back through all the scripts and folders, checked each one, and it was the .ssh folder. A quick chown fixed the problem.
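For anyone hitting the same thing, the check and fix looked roughly like this (paths as used above; adjust to your setup):
# inspect current ownership of the web user's .ssh directory
ls -ld /var/www/.ssh
# hand it over to the web server user
sudo chown -R www-data:www-data /var/www/.ssh
# test the script as the web user and print its exit status
sudo -u www-data /home/pi/scripts/switch2off; echo "exit status: $?"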
Thank you again for all your suggestions and help!
I use KnpSnappyBundle 1.6.0 and wkhtmltopdf 0.12.5 to generate PDFs from HTML in PHP like so:
$html = $this->renderView(
    'pdf/template.html.twig',
    ['entity' => $entity]
);

return new PdfResponse(
    $snappy->getOutputFromHtml($html, ['encoding' => 'UTF-8', 'images' => true]),
    'file'.$entity->getUniqueNumber().'.pdf'
);
My issue: on my production server, when I refer to assets (images or CSS) that are hosted on the same server as my code, generating a PDF takes around 40-50 seconds. Even when I only use a tiny image hosted on the same server it takes 40 seconds, yet much larger images hosted on another server render into the PDF instantly.
My server is not slow in serving assets or files in general. If I simply render out the HTML as a page it happens instantly (with or without the assets). When I locally (on my laptop) request assets from my production server to generate a PDF it also happens instantly.
The assets I require in the HTML that needs to be rendered to PDF all have absolute URLs, which is required for wkhtmltopdf to work. For example: <img src="https://www.example.com/images/logo.png">. The difficult thing is that everything works, just very slowly; there is no pointing to a non-existent asset that would cause a time-out.
I first thought it might have to do with wkhtmltopdf, so I tried different versions and different settings, but this did not change anything. I also tried to point to another domain on the same server, the problem remains. I tried not using the KnpSnappyBundle, but the problem also remains.
So my guess now is that it is a server issue (or a combination with wkhtmltopdf). I am running Nginx-1.16.1 and serve all content over SSL. I have OpenSSL 1.1.1d 10 Sep 2019 (Library: OpenSSL 1.1.1g 21 Apr 2020) installed and my OS is Ubuntu 18.04.3 LTS. Everything else works as expected on this server.
When I look in the Nginx access logs, I can see a get request is made by my own IP-address when using assets from the same server. I cannot understand though why this is taking so long and I have run out of ideas of what to try next. Any ideas are appreciated!
I'll add my Nginx config for my domain (in case it might help):
server {
root /var/www/dev.example.com/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name dev.example.com www.dev.example.com;
location / {
# try to serve file directly, fallback to index.php
try_files $uri /index.php$is_args$args;
}
location ~ ^/index\.php(/|$) {
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
internal;
}
location ~ \.(?:jpg|jpeg|gif|png|ico|woff2|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|js|css)$ {
gzip_static on;
# Set rules only if the file actually exists.
if (-f $request_filename) {
expires max;
access_log off;
add_header Cache-Control "public";
}
try_files $uri /index.php$is_args$args;
}
error_log /var/log/nginx/dev_example_com_error.log;
access_log /var/log/nginx/dev_example_com_access.log;
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/dev.example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/dev.example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = dev.example.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name dev.example.com www.dev.example.com;
listen 80;
return 404; # managed by Certbot
}
Update 5 Aug 2020: I tried wkhtmltopdf 0.12.6, but it gives me the exact same problem. The "solution" that I posted as an answer to my question a few months ago is far from perfect, which is why I am looking for new suggestions. Any help is appreciated.
This sounds like a DNS issue to me. I would try adding entries in /etc/hosts, for example:
127.0.0.1 example.com
127.0.0.1 www.example.com
And pointing your image URLs at that domain.
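A quick way to confirm whether the server-to-itself fetch (including its DNS lookup) is the slow part, run from the server itself (the URL is a placeholder):
curl -s -o /dev/null -w "%{time_total}s total, %{time_namelookup}s DNS\n" https://www.example.com/images/logo.png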
I have not found the root of my problem. However, I have found a workaround. What I have done is:
Install wkhtmltopdf globally (provided by my distribution):
sudo apt-get install wkhtmltopdf
This installs wkhtmltopdf 0.12.4 (as of 5 Nov 2019) through the Ubuntu repositories. This is an older version of wkhtmltopdf, and running it by itself gave me a myriad of problems. To solve this, I now run it inside xvfb. First install it by running:
sudo apt-get install xvfb
Then change the binary path of the wrapper you use that points to wkhtmltopdf to:
'/usr/bin/xvfb-run /usr/bin/wkhtmltopdf'
In my case, I use KnpSnappyBundle and set the binary path in my .env file. In knp_snappy.yaml I set binary: '%env(WKHTMLTOPDF_PATH)%', and in .env I set WKHTMLTOPDF_PATH='/usr/bin/xvfb-run /usr/bin/wkhtmltopdf' (as described above). I can now generate PDFs, although there are some issues with the layout.
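To sanity-check the wrapper from a shell before wiring it into the bundle (the URL and output file are placeholders):
xvfb-run /usr/bin/wkhtmltopdf https://www.example.com/ /tmp/test.pdf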
Not sure if this is acceptable for you or not, but in my case I always generate an HTML file that can stand on its own. I convert all CSS references to be included directly. I do this programmatically, so I can still keep them as separate files for tooling; it is fairly trivial if you make a helper method to include them based on the URI. Likewise, I try to base64-encode all the images and include those as well. Again, I keep them as separate files and inline them programmatically.
I then feed this "self-contained" html to wkhtmltopdf.
I'd share some examples, but my implementation is actually C# & Razor.
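For the image-inlining part, here is a minimal shell sketch of the idea instead (file names are placeholders; -w0 is the GNU coreutils base64 flag for unwrapped output):
# embed an image as a data URI so the HTML needs no network fetches
b64=$(base64 -w0 logo.png)
echo "<img src=\"data:image/png;base64,${b64}\">" >> self-contained.html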
That aside, I would also build some logging into those helpers with timestamps if you're still having problems so you can see how long the includes are taking.
I'm not sure what the server setup is, but possibly there's a problem connecting to the NAS or something.
You could also stand to throw some logging with timestamps around the rest of the steps to get a feel exactly which steps are taking a long time.
Other tips: I try to use SVGs for images where I can, and I try not to pull large (or any) CSS libraries into the HTML that becomes the PDF.
I have 2 txt files that I placed at /home/forge/laravel58/public/files.
I want those 2 txt files indexed when I go to site/files.
I've tried
location /files {
#auth_basic "Restricted";
#auth_basic_user_file /home/forge/laravel58/.htpasswd;
alias /home/forge/laravel58/public/files;
autoindex on;
}
When I go to site/files, I see:
403 Forbidden (nginx)
The trailing slash is essential for autoindex to work, it should be:
location /files/ {
alias /home/forge/laravel58/public/files/;
autoindex on;
}
Next, check that nginx has execute permissions (+x) on every folder in the path.
After that remove any index file from this folder, by default it's index.html.
And finally, check that your location / directive attempts to try directories:
location / {
...
try_files $uri $uri/ ...;
^^^^^
}
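To check the permissions along the whole path in one shot, namei (part of util-linux) is handy:
namei -l /home/forge/laravel58/public/files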
Why nginx? If you want, you can use a symbolic link instead.
Usage: ln -s /path/to/file /path/to/symlink
For example: ln -s /home/forge/laravel58/public/files site/files (use the full path).
The other answer's claim that the trailing slash is "essential for autoindex to work" is 100% incorrect: the trailing slash is not required here. It is, however, the preferred form, because otherwise the location also matches sibling files and directories such as /filesSECRET, opening yourself up to potential security issues.
In your situation, where /files is the suffix of both the location, as well as alias, it is preferred to use root instead of alias. See http://nginx.org/r/alias and http://nginx.org/r/root.
In order for http://nginx.org/r/autoindex to work, the UNIX user under which the nginx process is running must have "read" permission on the final directory of the path, as well as "execute" permissions for every part of the path.
You can use stat(1), or ls -l, to examine permissions, and chmod(1) to change the permissions. You'd probably want o+rx on /home/forge/laravel58/public/files, as well as o+x on every single directory that's leading to the above one.
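A sketch of the permission fixes suggested above, spelled out for the path in the question:
# let others traverse every directory leading to the files directory
chmod o+x /home /home/forge /home/forge/laravel58 /home/forge/laravel58/public
# let others list and enter the final directory
chmod o+rx /home/forge/laravel58/public/files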
server {
listen 80;
listen 443 ssl;
server_name www.old-name.com;
return 301 $scheme://www.new-name.com$request_uri;
}
I have created a PHP website on Azure using App Services. I use continuous deployment through Bitbucket. I need to point the website to the public folder in my code to run the app, as it is built with Zend Framework.
After some searching, I was not able to find how to change the folder where the server points for the default directory.
Go to Azure Web apps settings -> Application Settings -> Virtual Applications and directories and setup the physical path of the new folder. Also check the Application checkbox.
Restart the web app once.
There are a few scenarios possible:
You run a Windows App Service
You run a Linux App Service with PHP 7.4 or less
You run a Linux App Service with PHP 8
In the first scenario (Windows App Service), go to the App Service > Settings > Configuration blade and select the "Path Mappings" tab, where you can set the Virtual Application paths as follows: "/" maps to "site\wwwroot\public".
In the second scenario you can use the .htaccess solution described by @Ed Greenberg, even though for Zend Framework I suggest the following settings:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
RewriteRule ^.*$ /index.php [NC,L]
For the third scenario you have a bit more of a challenge since Apache was replaced by Nginx and the rewrite rules no longer apply. Please see my detailed blog article "PHP 8 on Azure App Service" on how to solve this and other challenges with the new Azure App Service for PHP 8.
Good luck and let me know if it solved your problem.
For PHP 8.0 with nginx I use a startup.sh script placed in the root directory of the project. startup.sh contains the following line:
sed -i 's/\/home\/site\/wwwroot/\/home\/site\/wwwroot\/public/g' /etc/nginx/sites-available/default && service nginx reload
You need to add "startup.sh" as Startup Command in General Settings. Now "public" dir is your root directory.
The correct answer in 2021 (for Laravel, and probably other frameworks with a /public directory) is to put an extra .htaccess in the webroot directory.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^(.*)$ public/$1 [L]
</IfModule>
Credit to Azure Web App - Linux/Laravel : Point domain to folder
Finally I've found documentation on how to make this work with Azure, to be more precise with PHP 8 + NGINX. Here is the article link - https://azureossd.github.io/2022/04/22/PHP-Laravel-deploy-on-App-Service-Linux-copy/index.html
Hope it will be useful :-)
PHP 8 (NGINX)
PHP 8 on Azure App Service Linux uses NGINX as the web server. To have NGINX route requests to /public, we'll have to configure a custom startup script. We can grab the existing default.conf under /etc/nginx/sites-available/ and run cp /etc/nginx/sites-available/default.conf /home. This copies the default.conf we need into /home, so we can download it with an FTP client or any other tool that allows this.
This default.conf has the following line:
root /home/site/wwwroot;
We need to change it to the following:
root /home/site/wwwroot/public;
Next, under the location block we need to change it from:
location / {
index index.php index.html index.htm hostingstart.html;
}
to the following:
location / {
index index.php index.html index.htm hostingstart.html;
try_files $uri $uri/ /index.php?$args;
}
Now configure your actual startup.sh bash script. Note, the file name is arbitrary as long as it is a Bash (.sh) script. Configure the file along the lines of the below:
#!/bin/bash
echo "Copying custom default.conf over to /etc/nginx/sites-available/default.conf"
NGINX_CONF=/home/default.conf
if [ -f "$NGINX_CONF" ]; then
cp /home/default.conf /etc/nginx/sites-available/default
service nginx reload
else
echo "File does not exist, skipping cp."
fi
NOTE: $query_string can be used as well. See the official documentation here.
Our custom default.conf should look like the below:
server {
#proxy_cache cache;
#proxy_cache_valid 200 1s;
listen 8080;
listen [::]:8080;
root /home/site/wwwroot/public;
index index.php index.html index.htm;
server_name example.com www.example.com;
location / {
index index.php index.html index.htm hostingstart.html;
try_files $uri $uri/ /index.php?$args;
}
........
.....
...all the other default directives that were in this file originally...
}
Use an FTP client to upload both your startup.sh script and your custom default.conf to the /home directory of your PHP App Service.
Next, under ‘Configuration’ in the portal target /home/startup.sh (or whatever the startup script file name is).
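If you prefer the CLI over the portal, the same setting can be applied with the Azure CLI (the resource group and app name below are placeholders):
az webapp config set --resource-group myResourceGroup --name my-php-app --startup-file "/home/startup.sh"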
Lastly, restart the App Service. This should now be using our custom startup script. Use LogStream or the Diagnose and Solve -> Application Logs detector, or other methods, to see the stdout from the script.