Random 5 alpha character path appended to requests - php

Starting in early September (maybe), our customers (thousands across the US) started intermittently seeing "random" 5-character alpha paths appended to their URL requests, returned as 302 responses when requesting the root of their domains. We have been exploring all possibilities, including malware and updates to hardware and software, and have not been able to find the cause.
Has anyone else experienced this issue, and found the cause?
Happy to provide more details of the environments as needed. Some details may have to be provided via PM.
Sample Paths
domain.com/OUZPZ/
domain2.com/LVQgZ/
domain2.com/UpTZZ/
domain2.com/WNZOR/
domain3.com/UncLZ/
domain4.com/SVpjZ/
domain4.com/WOVRZ/
domain5.com/NcmUZ/
Curl Path
curl -IL domain.com
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /WQiNZ/
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /ToNLZ/WQiNZ/
HTTP/1.1 302 Found
Connection: close
Pragma: no-cache
cache-control: no-cache
Location: /WQiNZ/
General Notes
We have only seen this happen firsthand on sites with SSL enabled.
WordPress multisite installs.
GoDaddy customers are experiencing this issue with their forwarding service as well (see links below).
We only use GoDaddy as the domain registrar; our DNS runs on an internal name server system based on AWS Route 53.
When we audit our server logs, we see many more URL paths of this type. They stretch all the way back to April of this year (2017), but most of them have a Googlebot user agent.
Regex for search: /\/[a-zA-Z]{5}\// (a log-scan sketch using it follows these notes)
Our company security team, our hosting provider, and Sucuri have all audited the environments and have not found any malware.
Plugins were audited for functionality and nothing was found.
Using Let's Encrypt SSL certs.
Google and our hosting provider say it does not have to do with DDoS protection in their environments (see the reddit thread below).
The only commonality so far between GoDaddy's environments and ours is Linux boxes.
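To make the log audit repeatable, here is a minimal sketch that scans an access log for these five-letter paths and tallies them by user agent. The log path and the assumption of Apache combined log format are placeholders; adjust both for your servers.

<?php
// Sketch: count requests for five-letter alpha paths, grouped by user
// agent. Log location and combined log format are assumptions.
$pattern = '#"(?:GET|HEAD) /[a-zA-Z]{5}/#';
$counts  = [];

$handle = fopen('/var/log/apache2/access.log', 'r'); // assumed path
while (($line = fgets($handle)) !== false) {
    if (!preg_match($pattern, $line)) {
        continue;
    }
    // The user agent is the last quoted field in combined log format.
    preg_match_all('/"([^"]*)"/', $line, $quoted);
    $agent = $quoted[1] ? end($quoted[1]) : 'unknown';
    $counts[$agent] = ($counts[$agent] ?? 0) + 1;
}
fclose($handle);

arsort($counts);
print_r($counts);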
Articles/Threads Related to Subject
https://www.godaddy.com/community/Managing-Domains/My-domain-name-not-resolving-correctly-6-random-characters-are/td-p/60782
https://www.reddit.com/r/webhosting/comments/18v950/302_redirect_to_random_5_character_subdirectories/
http://mailman.nginx.org/pipermail/nginx/2015-December/049486.html
https://www.drupal.org/node/848972
Junk characters in URL when domain forwarding
http://gold-thiolate.com/2013/godaddy-random-302-redirect/

Related

Webfonts not caching on Cloudfront

We are trying to put all of our file assets on S3 and CloudFront to reduce load on our server. For almost all file types things are working fine and CloudFront is caching the files. For font files there always seems to be a cache miss. What am I doing wrong?
When we first put the fonts online and requested them we got an error which pointed to a CORS protocol issue. This is where we learned about CORS.
We followed this solution: Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading
Here is my CORS setting. We have to have AllowedOrigin as a wildcard because we are supporting many websites.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Content-*</AllowedHeader>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I set up behavior rules in the CloudFront distribution for each font type:
/*.ttf
with the Origin header whitelisted (forwarded to the origin).
I checked with curl and there is always a cache miss, and access-control-allow-origin is always *.
curl -i -H "Origin: https://www.example.com" https://path/to-file/font-awesome/fonts/fontawesome-webfont.woff
HTTP/2 200
content-type: binary/octet-stream
content-length: 98024
date: Tue, 08 Jan 2019 09:07:03 GMT
access-control-allow-origin: *
access-control-allow-methods: GET
access-control-max-age: 3000
last-modified: Mon, 07 Jan 2019 08:44:46 GMT
etag: "fee66e712a8a08eef5805a46892932ad"
accept-ranges: bytes
server: AmazonS3
vary: Origin
x-cache: Miss from cloudfront
via: 1.1 d76fac2b5a2f460a1cbffb76189f59ef.cloudfront.net (CloudFront)
x-amz-cf-id: 1azzRgw3h33KXW90xyPMXCTUAfZdXjCb2osrSkxxdU5lCoq6VNC7fw==
I should also mention that when I go directly to the file it downloads instead of opening in the browser (which might be the correct behavior, not sure).
The files are loading today, which is good, but in the end I would like CloudFront to serve the files from its cache instead of always missing.
Your curl dump indicates there is no Cache-Control header in the response. You should have this header set (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html). Best practice is to set Cache-Control to "public, max-age=xxx, s-maxage=yyy" (xxx = time cached in the user's browser, yyy = time cached in the CDN).
Do you have this header for other resources (like a CSS or JS file) but not for the WOFF files?
Check this: how to add cache control in AWS S3?
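If you set the header at upload time, a minimal sketch with the AWS SDK for PHP could look like the following; the bucket, key, paths, and max-age values are placeholders, not taken from the question.

<?php
// Sketch: upload a font with an explicit Cache-Control header so
// CloudFront has something to cache against. Placeholder bucket/key.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-east-1', // assumed region
    'version' => 'latest',
]);

$s3->putObject([
    'Bucket'       => 'my-assets-bucket',                     // placeholder
    'Key'          => 'fonts/fontawesome-webfont.woff',
    'SourceFile'   => '/local/path/fontawesome-webfont.woff', // placeholder
    'ContentType'  => 'font/woff',
    // One day in the browser, one week in the CDN:
    'CacheControl' => 'public, max-age=86400, s-maxage=604800',
]);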

Website opening issue if user is behind proxy

We have an Angular 6 project with a PHP back-end (on a classic Apache server).
Everything works well from both localhost and the production server. But when my friend tried to log in from his university (where he was behind a proxy), it didn't work.
We can see the following in the OPTIONS request response headers:
Connection: close
Content-Type: text/html
Transfer-encoding: chunked
Via: 1.0 firewall.uninamehere.com:3128 (squid)
X-Cache: MISS from firewall.uninamehere.com
X-Cache-Lookup: MISS from firewall.uninamehere.com:3128
That's all.
After the OPTIONS request, a login POST request should follow. But it doesn't...
He said the proxy has been unreliable for a while, and sometimes the WiFi does not work either.
The questions are:
Is the problem on our side?
Can we do anything about it?
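Whatever the proxy is doing, it is worth confirming that the back-end answers the preflight correctly on its own. Here is a minimal sketch of CORS preflight handling in the PHP login endpoint; the allowed origin and headers below are assumptions.

<?php
// Answer the CORS preflight explicitly; the origin is a placeholder
// and should match the real front-end origin.
header('Access-Control-Allow-Origin: https://app.example.com');
header('Access-Control-Allow-Methods: POST, OPTIONS');
header('Access-Control-Allow-Headers: Content-Type, Authorization');

if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    // Preflight: respond 204 with the headers above and no body.
    http_response_code(204);
    exit;
}

// ...handle the actual login POST below...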

Wordpress (WooCommerce?) forces https (when it shouldn't)

I'm experiencing a strange issue on a WooCommerce installation my company has taken over. We didn't build it, and unfortunately it's pretty poorly built, so I'm not sure what's actually going on in there.
It suddenly started to "force" https connections, but as far as I know nothing has changed in either the code or the admin. We are running Git on the server and nothing has changed in the working tree, and I searched the uploads folder for suspicious files with no results. It's very unlikely to be some kind of malware. The site is not set up with https/SSL, so this of course triggers a timeout.
I checked the database and both home_url and site_url are set to "http://...". The WooCommerce option "force ssl" is set to false. Also we are running the plugin "Better WP Security/iThemes Security" which also offers a "force ssl"-option but that one is set to false too.
I tried setting both the constants FORCE_SSL_ADMIN and FORCE_SSL_LOGIN to false in wp-config.php - still no luck. Also I tried using .htaccess rewrite rules but that didn't help either.
It seems to be connected with a request header: HTTPS: 1 (tested with $ curl -I -H"HTTPS: 1" http://...). When that one is set to 0 this does not happen. However, Chrome seems to send it by default, which is not the case for other browsers. I tried clearing cookies/data etc. The problem appears in my colleague's browser as well (and she has never visited the site before). The hosting company says this is not related to server configuration.
Has anyone experienced this before, or know what it could be related to?
Update:
Running curl -I -H"HTTPS: 1" http://www.example.com/wp-admin/ pretty much confirms this has something to do with WordPress. The cookies are set by WPML, which indicates WordPress is initialized. Check the Location: header:
HTTP/1.1 302 Moved Temporarily
Server: Apache
X-Powered-By: PHP/5.6.11
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Pragma: no-cache
Set-Cookie: _icl_current_admin_language=sv; expires=Wed, 22-Jul-2015 16:06:25 GMT; Max-Age=7200; path=/wp-admin/
Set-Cookie: _icl_current_language=sv; expires=Thu, 23-Jul-2015 14:06:25 GMT; Max-Age=86400; path=/
Set-Cookie: PHPSESSID=xxx; path=/
Location: https://www.example.com/wp-login.php?redirect_to=https%3A%2F%2Fwww.example.com%2Fwp-admin%2F&reauth=1
Vary: Accept-Encoding
Content-Type: text/html; charset=UTF-8
Date: Wed, 22 Jul 2015 14:06:26 GMT
X-Varnish: nnn
Age: 0
Via: 1.1 varnish
Connection: keep-alive
http://develop.woothemes.com/woocommerce/2015/07/woocommerce-2-3-13-security-and-maintenance-release/
Updating WooCommerce to 2.3.13 fixed it for me.
@Zertuk's solution is correct: upgrading to the latest WooCommerce should fix the issue because of the change that @Zertuk has linked.
To give more detail: Chrome has implemented the Upgrade Insecure Requests specification from the World Wide Web Consortium (W3C). Section 3.2.1 of that specification is The HTTPS HTTP Request Header Field, which states:
3.2.1. The HTTPS HTTP Request Header Field
The HTTPS HTTP request header field sends a signal to the server
expressing the client’s preference for an encrypted and authenticated
response, and that it can successfully handle the
upgrade-insecure-requests directive in order to make that preference
as seamless as possible to provide.
This preference is represented by the following ABNF:
"HTTPS:" *WSP "1" *WSP
WooCommerce's is_ssl() function before version 2.3.13 was incorrectly rewriting all the URLs in the response if the HTTPS: 1 header was set.
Upgrading to the latest version of WooCommerce (currently 2.3.13) fixes the bug.
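To illustrate the failure mode (a sketch, not WooCommerce's exact code): PHP exposes a client-sent HTTPS: 1 request header as $_SERVER['HTTP_HTTPS'], whereas a real TLS connection is reported by the server itself in $_SERVER['HTTPS'].

<?php
// Buggy pattern: trusts a header the client (e.g. Chrome) can send.
function naive_is_ssl() {
    return !empty($_SERVER['HTTP_HTTPS']) && 'off' !== $_SERVER['HTTP_HTTPS'];
}

// Safe pattern, mirroring WordPress core's is_ssl(): only trust
// variables the server sets, never client-controlled HTTP_* entries.
function safe_is_ssl() {
    if (isset($_SERVER['HTTPS'])) {
        return 'on' === strtolower($_SERVER['HTTPS']) || '1' == $_SERVER['HTTPS'];
    }
    return isset($_SERVER['SERVER_PORT']) && '443' == $_SERVER['SERVER_PORT'];
}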
I fixed this issue by turning off the Force SSL setting within WooCommerce Settings, and then explicitly setting these three WooCommerce pages to use SSL via the checkbox provided by the plugin (on the Edit Page screen).
The pages that need SSL according to WooCommerce are:
1. Checkout
2. Checkout -> Pay
3. My Account
and also try,
<?php
if (is_ssl()) {
    // action to take for pages using SSL
}
?>
It returns true if the page is using SSL (checks for HTTPS or port 443).
Kirby is right.
I did a quick fix by modifying the WordPress core function is_ssl().
I return false at the beginning of the function because some of my websites do not have SSL.
Modifying the WordPress core is not recommended because updates will overwrite the change, but I can control that.
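A lighter-touch alternative to editing core (a sketch, under the assumption that the bogus value arrives via the client-sent header) is to drop the header before WordPress reads it, for example near the top of wp-config.php or in a must-use plugin.

<?php
// Discard the client-sent "HTTPS: 1" request header so is_ssl()-style
// checks cannot be fooled by it; harmless on sites that don't use SSL.
unset($_SERVER['HTTP_HTTPS']);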

Why do some servers/applications send a second HTTP status header

Why do some web applications/servers/etc. issue a non-standard, second, Status header? For example, I'm working with an existing application where, in addition to the HTTP protocol line, there's a second header named Status:
$ curl -I 'http://example.com/404'
HTTP/1.1 404 Not Found
//...
Status: 404 Not Found
//...
and a stock Apache 404 doesn't include it:
HTTP/1.1 404 Not Found
Date: Thu, 24 Jul 2014 13:16:28 GMT
Server: Apache/2.2.3 (CentOS)
Connection: close
Content-Type: text/html; charset=iso-8859-1
I'd write this off as one quirky application developer, but I've seen this behavior in other applications over the years, and the Wikipedia article on HTTP headers mentions this header, although it points out the header isn't included in RFC 7230:
"Status" is not listed as a registered header. The "Status-Line" of a "Response" is defined by RFC7230[23] without any explicit "Status:" header name.
Does anyone know the deal here? Is there some browser that needed this at some point? Still needs it? Is this some weird bit of SEO voodoo?
Is there any practical effect to including/not-including this field? Has there ever been?
(I'm specifically working with PHP, if that matters)
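For what it's worth, a PHP application can produce that extra line itself. A sketch follows; whether "Status:" is sent through literally or interpreted as the response status depends on the SAPI (CGI/FastCGI treat it specially, others may pass it along as an ordinary header).

<?php
// Sets the real HTTP/1.1 status line.
http_response_code(404);

// Emits a CGI-style "Status:" line; some SAPIs pass it through as a
// literal header, which is how a response can end up with both.
header('Status: 404 Not Found');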

In Webmaster tools googlebot is getting a crawl error 500 from the server

I've noticed my sites are not ranking as well as they did before, and when I checked Webmaster Tools I see that Googlebot cannot crawl pages that I can crawl perfectly well with my browser; it's getting a 500 error.
The websites are not WordPress and use PHP.
What can be causing this problem?
This is the actual error in WMT:
HTTP/1.1 500 Internal Server Error
Date: Tue, 06 Nov 2012 21:04:38 GMT
Server: Apache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: PHPSESSID=blkss9toirna36p2mjl44htv01; path=/
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 3840
Connection: close
Content-Type: text/html
You may be blocking Googlebot with .htaccess, robots.txt, or by some other means (maybe firewall settings?).
a. This is not good.
b. You should use WMT to get Crawl Stats/Crawl Errors reports and use these to get a better understanding of the issue (at which URLs / how often it occurs...).
Also, try looking at your last Google cache date (search directly for the domain and click the Cache link in the preview window).
This may be a temporary, downtime-related issue that will solve itself, or a site-wide blocking rule that you'll need to change.
GL
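One quick way to test the blocking hypothesis is to request the page with Googlebot's user agent and compare status codes against a browser's. A sketch; the URL is a placeholder.

<?php
// Compare response codes for Googlebot's UA vs. a normal browser UA.
// A 500/200 split points at UA- or bot-based blocking. Placeholder URL.
$url = 'http://www.example.com/';
$agents = [
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
    'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36',
];

foreach ($agents as $agent) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERAGENT      => $agent,
        CURLOPT_NOBODY         => true, // headers only
    ]);
    curl_exec($ch);
    printf("HTTP %d <= %s\n", curl_getinfo($ch, CURLINFO_HTTP_CODE), $agent);
    curl_close($ch);
}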
If you're still having a problem with Googlebot receiving a 500 error code, I suggest you register with Google Webmaster Tools, not Analytics. Choose Health, then Fetch as Google. You'll get what Googlebot receives and can see what the error is.
I had the same problem and discovered that it was one of the plugins that was causing this. Basically I disabled every plugin, then re-enabled one, tested, re-enabled the next...
It took about an hour to find the culprit, but now all is good.
