I use the following code in an .htaccess file located in /projects/testsite:
AddType x-mapp-php5 .php
RewriteEngine On
RewriteBase /
Options -MultiViews
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . projects/testsite/index.php [L,QSA]
#ErrorDocument 404 /404.php
When I am at http://www.mydomain.com/projects/testsite/admin/articles/1/edit
and I press Save, which redirects the request to http://www.mydomain.com/projects/testsite/admin/articles/1/save,
all POST data is lost.
What I get when I try to debug is:
POST: Array print_r: Array ( )
GET: Array print_r: Array ( )
What should I do to my .htaccess file to keep redirecting all requests to index.php while preserving all POST data?
Thank you in advance!
P.S. My site works normally if I set it up on a Windows server with web.config rewrite rules.
UPDATE #1
From Firefox Live HTTP Headers I see a significant issue: a 301 Moved Permanently response is captured only on Apache (this does not happen on IIS):
HTTP/1.1 301 Moved Permanently
Date: Sun, 06 Apr 2014 13:48:06 GMT
Server: Apache
Location: http://mydomain.gr/projects/testsite/admin/articles/6/save
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 270
Keep-Alive: timeout=3, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
UPDATE #2
It seems there is some relation to this issue:
How to preserve POST data via ajax request after a .htaccess redirect?
where the asker found out that someone was forcing a 301 redirect rule for every request at the top of the .htaccess file.
Is the hosting provider to blame?
This was quite baffling; I hope I saved someone from the same headache.
The problem was located in Parallels Panel > Websites & Domains > mydomain.gr > Hosting Settings.
Locate the select field Preferred domain: it was set to domain.tld, but all my requests were to www.domain.tld, so a 301 redirect (which loses all POST data) was forced by Parallels and not by my application's files.
Note that the label in Parallels warns:
Regardless of the domain's URL that visitors specify in a browser (with the www prefix or without it), a page with the preferred domain's URL opens. The HTTP 301 code is used for such a redirection. The 'None' value means that no redirection is performed.
So my solution was to change Preferred domain to None.
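If you cannot (or don't want to) change the panel setting, the same www canonicalization could be done in your own .htaccess instead. A sketch, assuming Apache 2.4 with mod_rewrite: a 308 redirect, unlike 301, obliges the client to replay the same method and body, so POST data survives.

```apache
# Sketch: canonicalize to the www host ourselves with a 308 redirect,
# which (unlike 301) requires the client to keep the method and body.
# Assumes Apache 2.4+ with mod_rewrite; adjust the host to your domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^mydomain\.gr$ [NC]
RewriteRule ^ http://www.mydomain.gr%{REQUEST_URI} [R=308,L]
```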
Related
I am using browser caching to speed up my PHP website. If a user visits a page before logging in and comes back to the same page after logging in, the page doesn't get updated with the content that is only visible to logged-in users.
i.e. the "Log in" link in the header menu doesn't change to "Log out" if the user visited the page before logging in. And if they visit a page after logging in, the "Log out" link in the header menu doesn't change back to "Log in" even after they log out.
Here is some sample code:
<h2>Contact Form</h2>
<form method="post" action="" onsubmit="">
    <?php if (logged_in() === false) { ?>
        Name*
        <input type="text" name="name" required />
        Email*
        <input type="email" name="email" required />
    <?php } ?>
    Message*
    <textarea name="message" rows="12"></textarea>
    <input type="submit" name="contact" value="Send Message" />
</form>
It would be a huge help if someone could assist me with this issue.
Thanks in advance!
I'll formulate the answer based on the comments we exchanged.
Check the HTTP headers sent to the browser
The HTTP headers sent by PHP, Apache, NGINX, or whatever is running on your server (or even sitting in between, such as reverse proxies like Varnish) tell the browser what to do with the response.
If you have an Expires or Cache-Control HTTP header that says the page can be cached, then the browser won't retrieve the page again once you're logged in.
If you don't want your dynamic pages to be cached, you have to set the HTTP headers accordingly, e.g.: Cache-Control: no-cache, no-store, private
This could be done in PHP:
header('Cache-Control: no-cache, no-store, private');
It can also be done in your web server config for page requests handled by PHP.
By the way, the web server can override what PHP says, so you have to check the output yourself in the browser. Other layers, such as proxies, could also alter the HTTP headers.
The Vary HTTP header
If your server responds to a given URL in different ways, then it has to signal that the response can vary depending on something. Typically, almost all assets should be compressed with gzip or brotli to save bandwidth. But not all browsers handle brotli compression, so when they make a request they say what they can handle, e.g.: Accept-Encoding: gzip, deflate, br. Here the browser can handle br (brotli).
The response will then be compressed with brotli to reduce its size. But if a reverse proxy caches it and someone else comes along with a browser that doesn't handle brotli, we'll be in trouble. This is why the server should inform all caching layers that the response varies depending on the Accept-Encoding header by returning Vary: Accept-Encoding.
It's also common to have a language-detection script for the homepage URL (if you hit the domain without a language path). In that case, we have to send Vary: Accept-Encoding, Accept-Language.
For the HTML generated by PHP, it is also important to say that the response can vary depending on the Cookie header. A page for a specific user could be allowed to be cached in private mode, say for a minute or more, but it must not be mixed up with another user's page.
So in PHP, you could do:
header('Vary: Accept-Language, Accept-Encoding, Cookie');
Caching static files and re-deploying them
In terms of performance, it's important to cache static data. Typically this should be done for all assets such as CSS, JS, images, favicons, fonts, etc. They are not dynamic, so they can be stored for ages. This can be done by setting Cache-Control: max-age=31536000 for these static files.
If you change a CSS rule, the CSS file won't be re-downloaded by a visitor who has already been there. But that's not a problem, because your HTML can point to a new URL for this same static file. This can be done by adding a query parameter to the URL of the asset:
<?php
// Increment the version when you change your CSS.
// This can even be done automatically during deployment.
// You could also use the file timestamp or a hash instead.
$css_version = 3;
echo "<link rel=\"stylesheet\" media=\"all\" href=\"/css/theme.css?version=$css_version\" />\n";
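The comment above mentions using the file timestamp instead of a hand-maintained version number. Here is a minimal sketch of that idea; the asset_url() helper name and its docroot handling are my own, not part of the original code:

```php
<?php
// Version a static asset URL with its modification time, so browsers
// re-download it only after the file actually changes on disk.
function asset_url($path, $docroot = null)
{
    if ($docroot === null) {
        $docroot = $_SERVER['DOCUMENT_ROOT'];
    }
    $file = rtrim($docroot, '/') . $path;
    // Fall back to version 0 if the file cannot be found.
    $version = is_file($file) ? filemtime($file) : 0;
    return $path . '?version=' . $version;
}

echo '<link rel="stylesheet" media="all" href="'
    . asset_url('/css/theme.css') . '" />' . "\n";
```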
Apache config for compressed static files
The idea for a performance improvement is to compress all CSS and JS files. This can be done by minifying, aggregating, and then compressing them.
I use Node.js and made some NPM scripts to do that; there's plenty of info about it on the web.
Then you need to add a few rules in your .htaccess file:
RewriteEngine On
# Redirect to HTTPS if we are not in HTTPS:
RewriteCond %{HTTPS} off
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# Redirect to www.example.com if it's example.com without the www. prefix:
RewriteCond %{HTTP_HOST} !^www\. [NC]
# But don't do it if it's on the local development machine (ex: example.com.local).
RewriteCond %{HTTP_HOST} !\.local$ [NC]
RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# If the URL looks like only having the language in the path, such as
# https://www.example.com/de or https://www.example.com/EN then rewrite it
# to index.php?lang=XX where XX is the captured language.
# It's case-insensitive with [NC] because PHP lowercases it afterwards.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(en|de|fr|es|it)$ index.php?lang=$1 [NC]
# Serve brotli compressed assets if they exist.
RewriteCond %{HTTP:Accept-Encoding} br
RewriteCond %{REQUEST_FILENAME}.br -f
RewriteRule (.*\.(?:css|js|svg|ico))(?:$|\?) $1.br [NC,L]
# Serve gzipped compressed assets if they exist.
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule (.*\.(?:css|js|svg|ico))(?:$|\?) $1.gz [NC,L]
# Set MIME types and avoid double compression done by some Apache modules.
RewriteRule \.css\.(gz|br) - [NC,E=no-gzip:1,E=no-brotli:1,T=text/css]
RewriteRule \.js\.(gz|br) - [NC,E=no-gzip:1,E=no-brotli:1,T=application/javascript]
RewriteRule \.svg\.(gz|br) - [NC,E=no-gzip:1,E=no-brotli:1,T=image/svg+xml]
RewriteRule \.ico\.(gz|br) - [NC,E=no-gzip:1,E=no-brotli:1,T=image/x-icon]
# Tell proxies to cache files separately depending on the Accept-Encoding header
# value sent by the browser. This is because we respond with different content
# encodings (none, brotli, gzip) for the same requested URL. We'll only do that
# for the static files. For pages, we'll let the PHP app set the Vary header.
<FilesMatch "\.(?i:css|js|svg|ico)(?:\.(?:gz|br))?$">
Header append Vary Accept-Encoding
</FilesMatch>
# Send the correct encoding header for *.gz files served instead of the original.
<FilesMatch "\.(?i:css|js|svg|ico)\.gz$">
Header set Content-Encoding gzip
</FilesMatch>
# Send the correct encoding header for *.br files served instead of the original.
<FilesMatch "\.(?i:css|js|svg|ico)\.br$">
Header set Content-Encoding br
</FilesMatch>
# Set a long cache age of 2 years for all static files.
# You have to change the URL if you update the file. Be aware
# that if you upload documents to your website you might have
# to disable this rule in the uploaded folder.
<FilesMatch "\.(?i:br|css|gif|gz|ico|jpe?g|js|png|svg)$">
Header set Cache-Control "max-age=63072000, public"
</FilesMatch>
# PHP can be configured to compress automatically the output if the browser
# accepts gzip compression. According to the documentation it is more effective
# this way than using ob_start('ob_gzhandler') in the PHP code.
# If output_compression is enabled then it will automatically set the
# "Vary: Accept-Encoding" HTTP header.
php_flag zlib.output_compression On
php_value zlib.output_compression_level 9
# Hide PHP's X-Powered-By HTTP header if php.ini doesn't already do it.
Header unset X-Powered-By
Compress PHP's output with gzip
As you can see in the Apache config above, we can enable the zlib.output_compression functionality of PHP, which will compress the output if the browser supports it; the Vary HTTP header will also be set automatically.
But if you cannot enable it via Apache or php.ini, then you can still do the compression yourself in PHP with output buffering. Just put this piece of code in your PHP before outputting any text:
// If zlib.output_compression=On then the output will be automatically compressed
// and the "Vary: Accept-Encoding" header will be set too. But if it's not enabled
// (ini_get() returns '', 'Off' or '0' in that case) then we can compress the page
// ourselves with PHP's output buffering.
if (preg_match('/^(Off|0|)$/i', (string) ini_get('zlib.output_compression'))) {
    // If the browser accepts gzip then compress with PHP's output buffering.
    if (isset($_SERVER['HTTP_ACCEPT_ENCODING'])
        && preg_match('/\bgzip\b/', $_SERVER['HTTP_ACCEPT_ENCODING'])) {
        // No need to set the "Vary: Accept-Encoding" header because this is
        // already done by ob_gzhandler() called below.
        ob_start('ob_gzhandler');
    }
}
// Hide the X-Powered-By HTTP header if expose_php=Off cannot be set.
if (function_exists('header_remove')) {
    header_remove('X-Powered-By');
} else {
    // We cannot remove it but we can set it to empty.
    header('X-Powered-By:');
    // We'll also try to remove it in the .htaccess file.
}
In my project, HSTS is enabled, so if someone tries to use the site over HTTP, it redirects to HTTPS.
After a security scan, it was reported that ttf, woff, and woff2 files are ignoring HSTS.
Example 1:
In Google Chrome, if I try the URL below, it redirects to HTTPS:
http://example.com/backend/web/lib/roboto/Roboto-Light.woff2 then it
redirects to
https://example.com/backend/web/lib/roboto/Roboto-Light.woff2
If I try the same thing in Firefox, it just downloads the Roboto-Light.woff2 file over HTTP instead of redirecting to HTTPS.
Example 2:
If I try the URL below in both Google Chrome and Firefox, it just downloads the file.
http://example.com/backend/web/lib/roboto/Roboto-Black.ttf
So what should I do to fix this issue?
Update
Network log after accessing the URL below:
http://example.com/backend/web/lib/roboto/Roboto-Black.ttf
It seems that the file is first loaded by visiting the HTTP URL, but the HTTPS one is not updated in the browser's address bar; I'm not sure.
VHOST Settings
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ServerName example.com
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} ^http$
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
#RewriteCond %{HTTPS} !=on
#RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK|OPTIONS)
RewriteRule .* - [F]
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains;"
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
You need to go back and ask the security scanning people why they think this is the case.
You are clearly showing that HSTS is being set for the font files.
You are also showing the 307 internal redirect that correctly happens for HSTS reasons.
This is the way it's supposed to work. You get two requests in Chrome's network tab (other browsers may be different):
A fake 307 response which upgrades the request from HTTP to HTTPS. This is created by the browser, and the HTTP request never reaches the server, hence why I am calling it a "fake" response.
The real request sent over HTTPS.
As fonts are simply downloaded, it's difficult to tell that this happened over HTTPS except by looking in the network tab - but that's fine.
If I try the same thing in Firefox, it just downloads the Roboto-Light.woff2 file over HTTP instead of redirecting to HTTPS.
How do you know this? Are you sure you have visited the site over HTTPS to get the HSTS header? The first request may well be over HTTP (though you have a standard redirect in place, so this should redirect to HTTPS and then download), but after that it should auto-redirect BEFORE the request is sent.
If i am trying below URL on both google Chrome and Firefox it just downloads the file.
It probably does. But after a redirect.
It seems that the file is first loaded by visiting the HTTP URL, but the HTTPS one is not updated in the browser's address bar; I'm not sure.
No, as discussed, the first one is a dummy request. The second is the real request, which is actually sent to the server. As the font file is downloaded immediately, it doesn't do anything with the URL bar.
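One unrelated tidy-up in the vhost shown in the question: the HSTS header is set in the port-80 vhost, but per RFC 6797 browsers ignore HSTS delivered over plain HTTP, and the header should only be sent over a secure transport. A sketch, assuming Apache 2.4.10+ with mod_headers (if TLS terminates at a proxy, as the X-Forwarded-Proto condition suggests, condition on that header instead):

```apache
# Send HSTS only when the connection is actually secure; browsers
# ignore the header over plain HTTP anyway (RFC 6797).
Header always set Strict-Transport-Security \
    "max-age=63072000; includeSubDomains" "expr=%{HTTPS} == 'on'"
```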
I built an API using PHP running on Apache on a CentOS box. I am trying to make a PUT request to v1/object/{objectID}/subobject/{subobjectID}, but I am getting a 405 error. A GET request to the same endpoint works, and a PUT request to v1/object/{objectID} works. To simplify things, I replaced all of the code in api.php with a simple echo statement.
Contents of api.php:
<?php
echo "got here";
?>
Contents of .htaccess:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule v1/(.*)$ v1/api.php?request=$1 [QSA,NC,L]
</IfModule>
Below is the PUT request I am making with curl:
curl -i -X PUT -d '{"var1":"val1","var2":"val2"}' "http://x.x.x.x/api/v1/object/1/subobject/1?apiKey=somekey&secretToken=secret"
The results are as follows:
HTTP/1.1 405 Method Not Allowed
Date: Fri, 15 Dec 2017 03:31:21 GMT
Server: Apache/2.2.15 (CentOS)
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 359
Connection: close
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method PUT is not allowed for the URL /api/v1/object/1/subobject/1.</p>
<hr>
<address>Apache/2.2.15 (CentOS) Server at x.x.x.x Port 80</address>
</body></html>
Try the following changes on your server:
Open the file /etc/httpd/conf/httpd.conf and look for the following:
<Limit GET POST OPTIONS>
    Order allow,deny
    Allow from all
</Limit>
If it is commented out, remove the # and add the PUT method:
<Limit GET POST OPTIONS PUT>
    Order allow,deny
    Allow from all
</Limit>
Then just save, restart the server, and test.
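Note that Order/Allow is Apache 2.2 syntax, which matches the Apache/2.2.15 shown in the question's error page. On Apache 2.4 the equivalent, as a sketch, would use mod_authz_core's Require directive instead:

```apache
<Limit GET POST OPTIONS PUT>
    Require all granted
</Limit>
```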
It turns out that I was making a call to where the API used to be stored on the server, not to where it currently exists. Too much copy and paste, not enough attention to detail. After discovering this extremely embarrassing mistake, I realized one reason I didn't think to re-check my URL was that I was getting a 405 Method Not Allowed error code, not a 404 Not Found error code.
In the interest of learning something at the expense of my confidence and my sanity, I have opened a new question on Server Fault to find the answer: https://serverfault.com/questions/888400/why-does-apache-return-a-405-error-code-on-a-put-request-to-a-file-or-directory
I made my own MVC app, and I am using the following RewriteRule in an .htaccess:
RewriteEngine on
RewriteRule ^([a-zA-Z0-9\-\_\/]*)$ index.php?p=$1
Then when I go to this URL:
http://localhost/myapp/dahsboard/index
I get my variable $_GET['p'] = 'dahsboard/index' and it works.
But I use Autocomplete from jQuery UI with Ajax, and it sends a variable $_GET['term'] with the value of the input used by Autocomplete.
My URL is then:
http://localhost/myapp/dashboard/index?term='myvalue'
My .htaccess doesn't handle it, and I don't know how to.
Easy. Change your RewriteRule from this:
RewriteEngine on
RewriteRule ^([a-zA-Z0-9\-\_\/]*)$ index.php?p=$1
To this:
RewriteEngine on
RewriteRule ^([a-zA-Z0-9\-\_\/]*)$ index.php?p=$1 [L,QSA]
This uses the QSA flag, which carries the query string over to the rewritten destination.
So when you go to this URL:
http://localhost/myapp/dahsboard/index?term='myvalue'
It will be passed on like this:
http://localhost/myapp/dahsboard/index.php?p=index&term=myvalue
And if I place the following in index.php:
<?php
echo '<pre>';
print_r($_GET);
echo '</pre>';
?>
The returned output is:
Array
(
[p] => index
[term] => 'myvalue'
)
Additionally, if you want to easily debug the results, you can do so by using the R flag in addition to L and QSA, like this:
RewriteEngine on
RewriteRule ^([a-zA-Z0-9\-\_\/]*)$ index.php?p=$1 [L,R,QSA]
Then run curl -I to check the Apache headers, which should tell you where the URL will be sent via the Location: header:
curl -I http://localhost:8888/myapp/dahsboard/index?term='myvalue'
HTTP/1.1 302 Found
Date: Tue, 17 Jun 2014 16:07:29 GMT
Server: Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/0.9.8r DAV/2 PHP/5.4.10
Location: http://localhost:8888/Applications/MAMP/htdocs/myapp/dahsboard/index.php?p=index&term=myvalue
Content-Type: text/html; charset=iso-8859-1
Now, yes, the R flag adds the extra path info /Applications/MAMP/htdocs/ to the URL; I am not 100% clear on why that happens. But for basic debugging like this, you can read the headers nicely to see that the final destination of index.php?p=index&term=myvalue is being used. Once you know that, remove the R flag when you put these rules into production.
FWIW, the http://localhost:8888/Applications/MAMP/ part reflects my local MAMP setup on Mac OS X, so ignore it. Your local setup will most likely return completely different info, but the basic concept is solid.
To combine new and old query strings, use the [QSA] flag.
RewriteEngine on
RewriteRule ^([a-zA-Z0-9\-\_\/]*)$ index.php?p=$1 [QSA]
I wrote a RESTful API using the Fat-Free Framework in PHP, and am making a call using Backbone.js. When I try to save a new Orders model, my app makes a PUT request and the server spits back a 406 error.
Request Method:PUT
Status Code:406 Not Acceptable
Request Headers
Accept:application/json, text/javascript, */*; q=0.01
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Content-Length:174
Content-Type:application/json
Cookie:__utma=239804689.76636928.1286699220.1305666110.1325104376.94; __utmz=239804689.1325104376.94.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=935d2632fd0d12a1a0df4cb0f392eb5e
X-Requested-With:XMLHttpRequest
Request Payload
{"id":0,"customerId":0,"lastPage":"items","priceConfig":null,"items":null,"saveStatus":0,"savedAt":1326588395899,"name":null}
Response Headers
Connection:Keep-Alive
Content-Length:460
Content-Type:text/html; charset=iso-8859-1
Date:Sun, 15 Jan 2012 00:46:37 GMT
Keep-Alive:timeout=5, max=98
Server:Apache
My .htaccess file looks like this:
# Enable rewrite engine and route requests to framework
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-l
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule .* index.php [L,QSA]
# Disable ETags
<IfModule mod_headers.c>
Header Unset ETag
FileETag none
</IfModule>
# Default expires header if none specified (stay in browser cache for 7 days)
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault A604800
</IfModule>
<IfModule mod_security.c>
SecFilterEngine Off
SecFilterScanPOST Off
</IfModule>
My web application works fine on my local server and only does this on my web server. Any ideas what is going wrong?
I came up with a workaround.
I believe my server is using mod_security2 to block PUT and DELETE requests. I am waiting to hear back from my host, and mod_security2 can't be disabled in .htaccess files, so there's nothing I can do there.
Using "Script PUT /filename" in the .htaccess file was causing a 500 error ("Script not allowed here"); I am not sure why, but I decided not to deal with having my web host reconfigured to handle PUT and DELETE.
To keep my API RESTful, I left in the normal handling of PUT and DELETE and added this to the POST handling:
function post() {
    // If Backbone.emulateHTTP is true, emulate PUT and DELETE.
    $data = json_decode(F3::get('REQBODY'), true);
    // The real verb (PUT, DELETE or POST) travels in this header;
    // default to POST if the header is absent.
    $type = isset($_SERVER['HTTP_X_HTTP_METHOD_OVERRIDE'])
        ? $_SERVER['HTTP_X_HTTP_METHOD_OVERRIDE']
        : 'POST';
    if ($type == 'PUT') {
        $this->put();
        return;
    }
    if ($type == 'DELETE') {
        $this->delete();
        return;
    }
    // Handle a normal POST here.
}
If you set Backbone.emulateHTTP = true, it keeps the request method as POST and sends the X-HTTP-Method-Override header as PUT or DELETE.
I like this because I can keep my RESTful implementation intact, and just comment the emulateHTTP code in or out when I publish to my web server.
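The client side of this convention can be sketched as a small helper. This is illustrative only, not Backbone's actual API; it just shows the transformation Backbone applies internally when emulateHTTP is true: PUT and DELETE are downgraded to POST, and the real verb travels in the X-HTTP-Method-Override header.

```javascript
// Sketch of what Backbone.emulateHTTP = true does to a request:
// PUT/DELETE become POST, with the real verb in X-HTTP-Method-Override.
// This helper is hypothetical, for illustration only.
function emulateHTTP(method, headers) {
  headers = headers || {};
  if (method === 'PUT' || method === 'DELETE') {
    headers['X-HTTP-Method-Override'] = method;
    return { method: 'POST', headers: headers };
  }
  return { method: method, headers: headers };
}

// A PUT is sent as POST with the override header; a GET is untouched.
console.log(emulateHTTP('PUT', {}));
console.log(emulateHTTP('GET', {}));
```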