WordPress (WooCommerce?) forces HTTPS (when it shouldn't) - php

I'm experiencing a strange issue on a WooCommerce installation my company has taken over. We didn't build it, and unfortunately it's quite poorly built, so I'm not sure what's actually going on in there.
It suddenly started to "force" HTTPS connections, but as far as I know nothing has changed, neither in the code nor in the admin. We are running Git on the server and nothing has changed in the working tree, and I searched the uploads folder for suspicious files with no results, so it's very unlikely to be malware. The site is not set up with HTTPS/SSL, so this of course triggers a timeout.
I checked the database and both home_url and site_url are set to "http://...". The WooCommerce option "force SSL" is set to false. We are also running the plugin "Better WP Security/iThemes Security", which offers a "force SSL" option too, but that one is also set to false.
I tried setting both the constants FORCE_SSL_ADMIN and FORCE_SSL_LOGIN to false in wp-config.php - still no luck. I also tried .htaccess rewrite rules, but that didn't help either.
It seems to be connected to a request header, HTTPS: 1 (tested with curl -I -H "HTTPS: 1" http://...). When that header is set to 0 this does not happen. Chrome seems to send it by default, which is not the case for other browsers. I tried clearing cookies/data etc., and the problem appears in my colleague's browser as well (and she has never visited the site before). The hosting company says this is not related to the server configuration.
Has anyone experienced this before, or know what it could be related to?
Update:
Running curl -I -H "HTTPS: 1" http://www.example.com/wp-admin/ pretty much confirms this has something to do with WordPress. The cookies are set by WPML, which indicates WordPress is initialized. Check the Location: header:
HTTP/1.1 302 Moved Temporarily
Server: Apache
X-Powered-By: PHP/5.6.11
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Pragma: no-cache
Set-Cookie: _icl_current_admin_language=sv; expires=Wed, 22-Jul-2015 16:06:25 GMT; Max-Age=7200; path=/wp-admin/
Set-Cookie: _icl_current_language=sv; expires=Thu, 23-Jul-2015 14:06:25 GMT; Max-Age=86400; path=/
Set-Cookie: PHPSESSID=xxx; path=/
Location: https://www.example.com/wp-login.php?redirect_to=https%3A%2F%2Fwww.example.com%2Fwp-admin%2F&reauth=1
Vary: Accept-Encoding
Content-Type: text/html; charset=UTF-8
Date: Wed, 22 Jul 2015 14:06:26 GMT
X-Varnish: nnn
Age: 0
Via: 1.1 varnish
Connection: keep-alive

Updating WooCommerce to 2.3.13 fixed it for me. See the release post:
http://develop.woothemes.com/woocommerce/2015/07/woocommerce-2-3-13-security-and-maintenance-release/

@Zertuk's solution is correct: upgrading to the latest WooCommerce should fix the issue because of the change that @Zertuk linked.
To give more detail: Chrome has implemented the Upgrade Insecure Requests specification from the World Wide Web Consortium (W3C). Section 3.2.1 of that specification, "The HTTPS HTTP Request Header Field", states:
3.2.1. The HTTPS HTTP Request Header Field
The HTTPS HTTP request header field sends a signal to the server expressing the client's preference for an encrypted and authenticated response, and that it can successfully handle the upgrade-insecure-requests directive in order to make that preference as seamless as possible to provide.
This preference is represented by the following ABNF:
"HTTPS:" *WSP "1" *WSP
Before version 2.3.13, WooCommerce's is_ssl() check incorrectly treated the HTTPS: 1 request header as evidence of a secure connection, so all the URLs in the response were rewritten to https://. Upgrading to the latest version of WooCommerce (currently 2.3.13) fixes the bug.
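To illustrate the class of bug (a simplified, hypothetical sketch, not the actual WooCommerce source): the client-sent "HTTPS: 1" header arrives in PHP as $_SERVER['HTTP_HTTPS'], and treating it as proof of an encrypted connection looks roughly like this:
<?php
// Hypothetical simplification of the pre-2.3.13 mistake. The client-sent
// "HTTPS: 1" request header says nothing about the actual transport.
function naive_is_ssl() {
    if ( isset( $_SERVER['HTTPS'] ) && 'off' !== strtolower( $_SERVER['HTTPS'] ) ) {
        return true; // genuinely served over TLS
    }
    if ( isset( $_SERVER['HTTP_HTTPS'] ) && '1' === $_SERVER['HTTP_HTTPS'] ) {
        return true; // BUG: Chrome sends "HTTPS: 1" even over plain HTTP
    }
    return isset( $_SERVER['SERVER_PORT'] ) && 443 === (int) $_SERVER['SERVER_PORT'];
}
?>
With a check like that in place, every Chrome request makes the site believe it is on SSL and rewrite its URLs to https://, which matches the curl experiment above.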

I fixed this issue by turning off the Force SSL setting within WooCommerce Settings, and then explicitly setting these 3 WooCommerce pages to use SSL via the checkbox provided as part of this plugin (on the Edit Page screen).
The pages that need SSL according to WooCommerce are:
1. Checkout
2. Checkout -> Pay
3. My Account
and also try:
<?php
if ( is_ssl() ) {
    // action to take for pages served over SSL
}
?>
is_ssl() returns true if the page is using SSL (it checks whether HTTPS is on, or whether the request is on port 443).

Kirby is right.
I did a quick fix by modifying the WordPress core function is_ssl(): I return false at the beginning of the function, because some of my websites do not have SSL.
Modifying WordPress core is not recommended, since updates will overwrite the change, but I can keep that under control.
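For reference, the quick fix described amounts to short-circuiting the function in WordPress core (wp-includes/load.php in recent versions). A sketch only, since the next core update will silently revert it:
function is_ssl() {
    return false; // quick fix: these sites never serve SSL
    // ...the original checks of $_SERVER['HTTPS'] and SERVER_PORT are now unreachable
}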

Related

Random 5 alpha character path appended to requests

Starting in early September (maybe), our customers (thousands across the US) started intermittently experiencing "random" five-character alpha paths being appended to their URL requests, with 302 responses, when requesting the root of their domains. We have been exploring all possibilities, including malware and updates to hardware and software, and have not been able to find the cause.
Has anyone else experienced this issue, and found the cause?
Happy to provide more details of the environments as needed. Some details may have to be provided via PM.
Sample Paths
domain.com/OUZPZ/
domain2.com/LVQgZ/
domain2.com/UpTZZ/
domain2.com/WNZOR/
domain3.com/UncLZ/
domain4.com/SVpjZ/
domain4.com/WOVRZ/
domain5.com/NcmUZ/
Curl Path
curl -IL domain.com
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /WQiNZ/
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /ToNLZ/WQiNZ/
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /WQiNZ/
General Notes
We only see this happen in person on sites with SSL enabled.
WordPress multisite installs.
GoDaddy customers are experiencing this issue with their forwarding service as well (see links below). We only use GoDaddy as the domain registrar, and use an internal DNS name server system based on AWS Route 53.
When we audit our server logs, we see many more URL paths of this type. They stretch all the way back to April of this year (2017), but most of them have a Google bot user agent. Regex for the search (a sketch of a matching log scan follows these notes): /\/[a-zA-Z]{5}\//
Our company security team, our hosting provider, and Sucuri have all audited the environments and have not found any malware.
Plugins audited for functionality and nothing found.
Using Let's Encrypt SSL certs.
Google and the hosting provider say it does not have to do with DDoS protection in their environments (see Reddit thread below).
The only commonality so far between GoDaddy's environment and ours is Linux boxes.
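A minimal sketch of that log scan in PHP (the log path and format are assumptions; adjust to your environment):
<?php
// Scan an access log for the five-letter path pattern the notes describe.
$log = '/var/log/apache2/access.log'; // hypothetical path
foreach (file($log) as $line) {
    if (preg_match('/\/[a-zA-Z]{5}\//', $line)) {
        echo $line; // candidate request; inspect timestamp, UA, referrer
    }
}
?>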
Articles/Threads Related to Subject
https://www.godaddy.com/community/Managing-Domains/My-domain-name-not-resolving-correctly-6-random-characters-are/td-p/60782
https://www.reddit.com/r/webhosting/comments/18v950/302_redirect_to_random_5_character_subdirectories/
http://mailman.nginx.org/pipermail/nginx/2015-December/049486.html
https://www.drupal.org/node/848972
Junk characters in URL when domain forwarding
http://gold-thiolate.com/2013/godaddy-random-302-redirect/

MediaWiki login cancelled to prevent session hijacking

I have just set up a MediaWiki 1.29.0 wiki on an AS400 IBM i machine. I am using MariaDB as the database and PHP 5.5.37.
Every time I try to log into an account, I get the error:
There seems to be a problem with your login session; this action has been canceled as a precaution against session hijacking. Go back to the previous page, reload that page and then try again.
Obviously, the behavior I'm looking for is to log in.
I've tried:
changing $wgMainCacheType and $wgSessionCacheType to various permutations of CACHE_NONE, CACHE_ACCEL, CACHE_DB, and CACHE_ANYTHING.
creating a tmp directory and setting its permissions.
rebuilding my LocalSettings.php file.
setting session.referer_check=off in php.ini
I've checked and I know my cookies are enabled (I'm able to call document.cookie; and get data back).
This question has been asked before here, and in the linked questions within, but no solutions fixed my problem. They also deal with an older version of MediaWiki, though I don't know if that makes a difference in this instance.
EDIT: I am also getting the same behavior when I try to create a new account. However, I am able to navigate the wiki, create pages, and edit pages without any sort of error.
Here is my response header:
Cache-Control: private, must-revalidate, max-age=0
Connection: close
Content-language: en
Content-Type: text/html; charset=UTF-8
Date: Thu, 10 Aug 2017 13:48:36 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Link: </<path>/resources/assets/logo.png?88d75>;rel=preload;as=image
Server: Apache
Set-Cookie: ZDEDebuggerPresent=php,phtml,php3; path=/
Set-Cookie: <wikiname>_session=n7gs0ct99ck5i2juq0togto9q7bfou6u; path=/; secure; httponly
Transfer-Encoding: chunked
Vary: Accept-Encoding,Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Powered-By: PHP/5.5.37 ZendServer/8.5.5
X-UA-Compatible: IE=Edge
Here is my request header:
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding:gzip, deflate
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie:ZDEDebuggerPresent=php,phtml,php3
Host:tdidev:10080
Referer:http://<wikiepath>/index.php?title=Special:UserLogin&retirnto=Main+Page
Upgrade-Insecure-Requests:1
User-Agent:Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36
I've finally found the cause of my problem. By default, MediaWiki passes the <wikiname>_session cookie with the secure flag set. Taken from OWASP:
The secure flag is an option that can be set by the application server when sending a new cookie to the user within an HTTP response. The purpose of the secure flag is to prevent cookies from being observed by unauthorized parties due to the transmission of the cookie in clear text.
To accomplish this goal, browsers which support the secure flag will only send cookies with the secure flag when the request is going to an HTTPS page. Said another way, the browser will not send a cookie with the secure flag set over an unencrypted HTTP request.
So my MediaWiki installation correctly creates and caches a session token, and it even passes it through the response header. However, since my browser sees http instead of https, that's as far as the token gets: the Set-Cookie line is simply ignored.
There is a setting in php.ini called session.cookie_secure, but MediaWiki ignores this flag.
Instead, the solution was to add this line to the bottom of my LocalSettings.php file:
$wgCookieSecure = false;
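To see what the flag does in plain PHP terms (an illustration, not MediaWiki's own code): the sixth argument of setcookie() is the secure flag, and browsers will never send a secure-flagged cookie back over plain HTTP.
<?php
// name, value, expires, path, domain, secure, httponly
// With secure = true, the browser only returns the cookie over HTTPS:
setcookie('example_session', 'token', 0, '/', '', true, true);
?>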
I had something similar happen on a different application, when the sessionId was being updated out of sequence.
So normally you request a login form, and it creates a session with a sessionId, and stores this somewhere.
Then you submit the form; it ties that to the original sessionId, checks your authentication, and either logs in the original session or creates a new one and updates yours (normally with an HTTP Set-Cookie header you can see in the Network log).
But you can follow everything by looking at the sessionId in your current cookies and any token on the form (to prevent replays), and checking it against either your /tmp/php-session-xxx file (maybe in /var/lib/php) or whatever database the session is stored in.
What tipped me off to my problem was identifying that, by the time I was about to submit a form with a particular sessionId, that sessionId was no longer valid. Hence I failed the replay checks and got an error similar to yours. It turned out in my case it was to do with the databases replicating in a way that didn't match how they were being accessed downstream, so I could attempt to access a session that hadn't been created yet.
Looking at all your code, the sessionIds don't match: wpLoginToken starts with 510a85, but your wiki session in Set-Cookie starts with n7gs0c, and your log talks about 6ov933... So, assuming you copied/pasted from different attempts, you need to run through it yourself from a clean state and check that everything is using the same session. If not, try to figure out what's happening to the session you have (whether it's created/changed), or why you're not getting the right one if it's created but never passed back to you properly.
That said, I just took a look at the client side of logging into our own in-house version of MediaWiki, and wpLoginToken, wikidb_session and JSESSIONID don't match either (although I'd expect one of them to show up in the wiki log, which I don't have access to).
If you have to, grep the source for the error message you're seeing, insert error_log(__FILE__.':'.__LINE__.' '.var_export(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS), true)); at that spot, and work back up the stack to see what didn't match and generated the error.
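Formatted for readability (assuming error_log() is configured to write somewhere you can tail, e.g. the web server's error log):
// Log the current file/line plus a compact backtrace at the failure point.
error_log(__FILE__ . ':' . __LINE__ . ' ' .
    var_export(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS), true));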
This is an ongoing problem with MediaWiki, and is the result of either your password being incorrectly entered or MediaWiki failing to write something during the login process (database, cookie, disk file, whatever). In my case, I was using the $wgReadOnly variable to make the wiki read-only. I found that I had to use $wgMainCacheType = CACHE_MEMCACHED for my system to work properly.
See: https://www.mediawiki.org/wiki/Manual:Memcached

PHP HTTPS to HTTP

I am having difficulty with the header() function in PHP.
The call to the function is initiated on a secure HTTPS page. Every time I call header() with an http:// URL, something somewhere is changing the protocol to HTTPS.
In my program, this example:
header("Location: http://www.google.com");
takes me to https://www.google.com instead.
My environment is IIS 7.5 on Windows 2008 64-bit, running PHP 5.5.12 with FastCGI.
Is there something that I have accidentally enabled either in IIS or php.ini that would automatically force http to https?
This does not happen when launching the code from an HTTP page: http to http works, http to https works, and https to https works. However, https to http fails.
I've been searching, and most results reverse my question by showing ways to force HTTP to HTTPS. I need the opposite.
Thanks in advance for any assistance!
EDIT: Google was an example URL. Sorry.
header("Location: http://www.systronicsinc.com/");
is my actual URL that is failing. This keeps redirecting to https://www.systronicsinc.com/.
This is a raw header from Fiddler.
HTTP/1.1 303 See Other
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Location: https://www.systronicsinc.com/
Server: Microsoft-IIS/7.5
X-Powered-By: PHP/5.5.12
Set-Cookie: PHPSESSID=va1hh3ff8h0buus689kf86eoc1; path=/
Date: Fri, 24 Oct 2014 17:39:34 GMT
Content-Length: 156
<head><title>Document Moved</title></head>
<body><h1>Object Moved</h1>This document may be found here</body>
I find it interesting that the link in the body retained the original http protocol as initially set, but the Location field in the header was modified to https. I've been hunting through IIS and my php.ini file and cannot see anything that would dictate this behavior. Maybe this additional information will spark a thought with someone. Thanks!
Google uses SSL, so https://, for its websites.
See: https://support.google.com/websearch/answer/173733?hl=en
and: https://www.seroundtable.com/google-ssl-drops-query-data-14188.html
No, Google redirects you to a secure page.
They probably use a function that does something like my https() function below. Feel free to use it.
function https() {
    // Redirect to HTTPS if the current request came in over plain HTTP.
    // Note: on IIS, $_SERVER['HTTPS'] is set to 'off' for plain HTTP
    // requests, so a bare isset() check is not enough.
    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
        header("Location: https://{$_SERVER['SERVER_NAME']}{$_SERVER['PHP_SELF']}");
        die;
    }
}

function http() {
    // Redirect to plain HTTP if the current request came in over HTTPS.
    if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
        header("Location: http://{$_SERVER['SERVER_NAME']}{$_SERVER['PHP_SELF']}");
        die;
    }
}

Why do some servers/applications send a second HTTP status header

Why do some web applications/servers issue a non-standard, second Status header? For example, I'm working with an existing application where, in addition to the HTTP protocol status line, there's a second header named Status:
$ curl -I 'http://example.com/404'
HTTP/1.1 404 Not Found
//...
Status: 404 Not Found
//...
and a stock Apache 404 doesn't include it:
HTTP/1.1 404 Not Found
Date: Thu, 24 Jul 2014 13:16:28 GMT
Server: Apache/2.2.3 (CentOS)
Connection: close
Content-Type: text/html; charset=iso-8859-1
I'd write this off as one quirky application developer, but I've seen this behavior in other applications over the years, and the Wikipedia article on HTTP headers mentions this header, although it points out the header isn't included in RFC 7230:
"Status" is not listed as a registered header. The "Status-Line" of a "Response" is defined by RFC 7230 without any explicit "Status:" header name.
Does anyone know the deal here? Is there some browser that needed this at some point, or still needs it? Is this some weird bit of SEO voodoo?
Is there any practical effect to including or not including this field? Has there ever been?
(I'm specifically working with PHP, if that matters)
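For context (an assumption about the usual cause, not a definitive account): a literal Status header generally appears because the application emits one itself. Under CGI/FastCGI, Status: is the CGI-spec (RFC 3875) mechanism for a script to tell the server which status line to send, and some server setups relay it to the client verbatim; exact behavior varies by SAPI. A sketch of how such a duplicate can arise in PHP:
<?php
http_response_code(404);          // sets the real status line
header('Status: 404 Not Found');  // may also be relayed as a literal header
?>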

Pingdom monitoring tool detecting HTTP 302 Found responses intermittently

I am experiencing intermittent issues when using the Pingdom monitoring tool to check the status of my website.
Every 10-15 minutes I get an alert saying that a 302 was found. What I can't understand is that I'm not doing any 302 temporary redirects. I am, however, doing 301 redirects (in certain circumstances).
Could this be a false positive from Pingdom?
Also, I have a redirect in code that does this. Would not specifying the HTTP response code cause an issue here?
header('Location: http://www.ayrshireminis.com');
exit();
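(Worth noting: when PHP sends a Location header without an explicit response code, it emits a 302 by default, so a permanent redirect has to be requested explicitly, e.g. via header()'s third parameter:)
header('Location: http://www.ayrshireminis.com', true, 301);
exit();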
The Pingdom data:
Request 1
GET / HTTP/1.0
User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)
Host: www.ayrshireminis.com
Received header
302 Found
Date: Tue, 24 Jul 2012 13:13:25 GMT
Server: Apache
Set-Cookie: prev_session_id=2a7001f5caa79bd36995953bf4853675; expires=Thu, 23-Aug-2012 13:13:25 GMT; path=/; domain=ayrshireminis.com
Location: http://www.ayrshireminis.com/
Vary: Accept-Encoding
Connection: close
Content-Type: text/html; charset=ISO-8859-1
It looks to me like a cookie is being set on the response, which then redirects you to the same page. Because Pingdom uses a number of different monitoring sources, that cookie-and-redirect behavior will cause a lot of problems. Then again, you may need it for actual website visitors.
Rather than monitoring the root of the website, I would recommend creating a separate /status page just for Pingdom (a sketch follows this list) that:
Doesn't set or use cookies
Performs a cheap end-to-end health check of the application and backing services
Returns a 200 response code only if everything checks out OK
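A minimal sketch of such a /status endpoint (the health checks are placeholders for whatever the application and its backing services actually need):
<?php
// /status - for uptime monitors: no cookies, no session, no redirects.
header('Content-Type: text/plain');
header('Cache-Control: no-store');

$ok = true;
// Hypothetical checks - replace with real probes of your dependencies:
// $ok = $ok && ($db->query('SELECT 1') !== false); // database reachable?
// $ok = $ok && is_writable(sys_get_temp_dir());    // disk writable?

http_response_code($ok ? 200 : 503);
echo $ok ? 'OK' : 'FAIL';
?>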
