jQuery - GET News.html 404 (Not Found) - php

For some reason, this doesn't work:
$.ajax({
    url: "News.html",
    cache: false
}).done(function(data) {
    $("#content").load(data);
});
It gives me:
GET http://127.0.0.1/News.html 404 (Not Found)
But for whatever reason, opening that URL manually (copy-pasting it into the browser) works just fine.
I thought it had something to do with the browser cache at first, so I added the cache: false option to the ajax call, but that didn't help.
Also, the request does not show up in my access.log file.
For reference, I'm running:
lighttpd
php as fast-cgi via localhost:port
mapped .html => .php
Running OpenBSD 5.3
and uncommented (in /etc/php.ini):
cgi.fix_pathinfo=1
Also:
# ls *.html
News.html index.html
And here's the request headers for News.html:
Request URL:http://127.0.0.1/News.html
Request Method:GET
Status Code:404 Not Found
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:127.0.0.1
Referer:http://127.0.0.1/index.php
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36
X-Requested-With:XMLHttpRequest
Response Headers
Content-type:text/html
Date:Tue, 16 Jul 2013 21:55:05 GMT
Server:lighttpd/1.4.32
Transfer-Encoding:chunked
X-Powered-By:PHP/5.3.21
Checkpoint
The conclusion from the comments so far is that this might not be a jQuery issue at all.
The server responds with all the data (I've checked the raw data sent) and it contains everything, yet the response header says 404.
In other words, the content is found but the status says 404, which is odd to say the least.
curl test
curl 'http://127.0.0.1/News.html' -H 'Accept-Encoding: gzip,deflate,sdch' -H 'Host: 127.0.0.1' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36' -H 'Accept: */*' -H 'Referer: http://127.0.0.1/' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' -H 'Cache-Control: max-age=0' --compressed
Here you'll soon find a facebook feed, among other things :)
Zerkms test
# echo "wham bam" > zerkms_doesnt_believe.html
#
Config files
lighttpd.conf
php-5.3.ini
Error logs and whatnot
lighttpd-error.log
cURL test
Manual FastCGI test via a Python client:
# python fcgi_app.py
{'FCGI_MAX_CONNS': '1', 'FCGI_MPXS_CONNS': '0', 'FCGI_MAX_REQS': '1'}
After some tinkering, I figured out how the FastCGI protocol works and found a client that matched my needs (funny enough, it matched the name of my script), so here's the output:
# python fcgi_app.py
('404 Not Found', [('x-powered-by', 'PHP/5.3.21'), ('content-type', 'text/html')], '<html>\n\t<head>\n\t\t<title>test php</title>\n\t</head>\n<body>\nChecking</body>\n</html>', '')
And here's the source.
This leads me to conclude that this is in fact a PHP issue (even though I've been bashing lighttpd for not honoring the 200 code PHP should respond with, and for that I'm sorry; I should go bash a little on PHP instead and see if that helps me come to a conclusion).
Temporary Solution
Placing the following in the top part of your .php page will work around this issue.
Note that while it's a clean workaround and it will work, it's certainly not a long-term fix.
<?php
header("HTTP/1.0 200 OK");
?>
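If you prefer not to hard-code the HTTP version string, the CGI-style form below should be equivalent (a sketch under the assumption that PHP runs as CGI/FastCGI, where the status travels to the web server as a Status: header either way):
<?php
// Hypothetical variant of the workaround above: send the status as a CGI-style
// "Status:" header, which is the form FastCGI servers such as lighttpd parse.
header("Status: 200 OK");
?>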

This smells a bit like a same-origin policy issue.
The path you are specifying may be causing the issue.
Try
$.ajax({
    url: "/News.html",
    cache: false
}).done(function(data) {
    $("#content").load(data);
});
And let me (us) know if that helps.

This one had me stymied for a bit. Feeling some compulsive urges, I installed lighttpd and php5 on a fresh Ubuntu 12.10 VM (I didn't have a BSD one handy). I had to switch the event handler from kqueue to poll, but other than that I used your lighttpd.conf, and everything worked fine.
So then I installed your php.ini file, and BAM: HTTP status 404 while returning the proper content. That narrowed it down to php-cgi.
Turns out that when the service started, it would log
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php-5.3/modules/pdo.so' - /usr/local/lib/php-5.3/modules/pdo.so: cannot open shared object file: No such file or directory in Unknown on line 0
So I did a quick search and changed one line in the php.ini from
extension_dir = "/usr/local/lib/php-5.3/modules"
to
extension_dir = "/usr/lib/php5/20100525"
restarted php-cgi, and voilà: status 200 to go along with the content.
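If you want to double-check which extension directory the FastCGI PHP is actually scanning, a minimal sketch is below; run it through the web server so it reflects php-cgi rather than a possibly different CLI build.
<?php
// Print the configured extension_dir and whether pdo.so is visible there.
$dir = ini_get('extension_dir');
echo "extension_dir: ", $dir, "\n";
echo "pdo.so present: ", file_exists($dir . '/pdo.so') ? 'yes' : 'no', "\n";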

After setting up a fresh OpenBSD 5.3 server and installing your config files, I was able to narrow down the root cause.
In lighttpd.conf you have server.chroot = "/var/www/", so all of lighttpd's path names omit the leading /var/www. The php-fastcgi process is not chrooted, so it has a slightly different view of the file system.
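A quick way to see that mismatch from PHP's side is the sketch below (it assumes the chrooted-lighttpd / non-chrooted-php-cgi setup described above, with /var/www as the chroot directory from lighttpd.conf):
<?php
// Dump the script path exactly as lighttpd's FastCGI module hands it to PHP,
// and check whether that path exists from the (non-chrooted) PHP process's view.
header('Content-Type: text/plain');
$script = $_SERVER['SCRIPT_FILENAME'];
echo "SCRIPT_FILENAME as received: ", $script, "\n";
echo "exists for PHP:              ", file_exists($script) ? 'yes' : 'no', "\n";
// If lighttpd is chrooted to /var/www, prefixing the chroot directory should
// make the same file visible to the non-chrooted PHP process again.
echo "exists with /var/www prefix: ", file_exists('/var/www' . $script) ? 'yes' : 'no', "\n";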
Solution #1:
Don't chroot lighttpd and change the server.document-root, accesslog.filename, and server.errorlog to absolute paths.
Solution #2:
Use php-fpm or similar to make PHP chroot-aware/capable.

Use the simple jQuery .load() method, which fetches the URL and injects the returned HTML into the element (instead of passing the response data to .load() as in the original code):
$(document).ready(function () {
    $("#content").load('News.html');
});

Related

Paypal getting access token bad request from php-curl but working fine from terminal

I am trying to make a curl request to the PayPal sandbox to get an access token, but every time I make a request to this URL
https://api.sandbox.paypal.com/v1
it sends back a 400 response. I have tried sending a curl request to other test URLs and I get a 200 response. Moreover, if I send the request via the terminal, the response is 200; the issue only occurs with php-curl. Another noticeable thing is that this request works fine on localhost; it only gets a 400 on the live server.
This is how I am making the curl request:
$username = $params['paypal_client_id'];
$password = $params['paypal_secret'];
$headers = array(
    "Accept-Language: en_US",
    "Accept: application/json"
);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 60);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_USERPWD, trim($username) . ':' . trim($password));
curl_setopt($ch, CURLOPT_POST, 1);
$data = curl_exec($ch);
This is how I am making the request via the terminal:
curl https://api.sandbox.paypal.com/v1/oauth2/token -H "Accept: application/json" -H "Accept-Language: en_US" -u "{Client_id}:{Client_secret}" -d "grant_type=client_credentials"
The response I am getting is:
Verbose info:
* About to connect() to api.sandbox.paypal.com port 443 (#1)
* Trying 173.0.82.78...
* Connected to api.sandbox.paypal.com (173.0.82.78) port 443 (#1)
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_RSA_WITH_AES_256_CBC_SHA256
* Server certificate:
* subject: CN=api.sandbox.paypal.com,OU=PayPal Production,O="PayPal, Inc.",L=San Jose,ST=California,C=US
* start date: Aug 21 00:00:00 2018 GMT
* expire date: Aug 20 12:00:00 2020 GMT
* common name: api.sandbox.paypal.com
* issuer: CN=DigiCert Global CA G2,O=DigiCert Inc,C=US
* Server auth using Basic with user 'client_id'
> POST /v1/oauth2/token?grant_type=client_credentials HTTP/1.1
Authorization: Basic 'client_id'
Host: api.sandbox.paypal.com
Accept-Language: en_US
Accept: application/json
Content-Length: -1
Content-Type: application/x-www-form-urlencoded
Expect: 100-continue
< HTTP/1.1 400 Bad Request
< Date: Wed, 23 Jan 2019 12:23:30 GMT
< Server: Apache
< Content-Length: 338
< Connection: close
< Content-Type: text/html; charset=iso-8859-1
<
* Closing connection 1
Any help regarding this issue will be really appreciated. Thanks.
As the comments already point out, it's hard to debug this remotely. I lack the reputation to add a comment, so I will suggest this answer instead. This is a complex problem, so I'm afraid this answer will, at best, point you in the right direction rather than give you a specific walk-through.
Firstly, you would be much better off using the PayPal PHP SDK, which at the time of writing can be found on this page:
https://github.com/paypal/PayPal-PHP-SDK/
If that link is broken, I'm sure Google will assist. This will almost certainly solve your problems.
However, to answer your question (or that of anyone else who, for some reason, can't use the canonical library) - as can be seen from the link above, PayPal are quite strict about their security. I strongly suspect that your command-line curl is using a different SSL/TLS protocol version from your php-curl. I would be surprised if this page didn't solve your problem:
https://github.com/paypal/tls-update#php
I have reproduced it here in case of link breakage:
PHP requirements
PHP uses the system-supplied cURL library, which requires OpenSSL 1.0.1c or later. You might need to update your SSL/TLS libraries.
Guidelines
Find OpenSSL in these locations:
OpenSSL installed in your operating system (check with openssl version).
The OpenSSL extension installed in your PHP (find this in your php.ini).
The OpenSSL used by PHP_CURL (check curl_version()).
These OpenSSL copies can be different, and you update each one separately.
The PayPal and other PHP SDKs use the same OpenSSL that PHP_CURL uses to make HTTP connections, so the PHP_CURL OpenSSL must support TLSv1.2.
The php_curl library uses its own OpenSSL, which is not necessarily the same version that PHP itself uses (the openssl.so referenced in php.ini).
To verify your PHP and TLS versions
To find the openssl_version information for cURL, run:
php -r 'echo json_encode(curl_version(), JSON_PRETTY_PRINT);'
The returned php_curl version might be different from the openssl version because they are different components.
When you update your OpenSSL libraries, you must update the php_curl OpenSSL version and not the OS OpenSSL version.
Download cacert.pem and TlsCheck.php.
In a shell on your production system, run:
php -f TlsCheck.php
On success:
PayPal_Connection_OK
On failure:
curl_error information
Notes:
Make sure that your command line test uses the same versions of PHP and SSL/TLS libraries that your web server uses.
If you use MAMP or XAMPP as your development setup, the PHP that is packaged with them uses an earlier version of OpenSSL, which you cannot easily update. For more information about this issue and a temporary workaround, see Unknown SSL protocol error.
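As a quick local check along those lines, here is a minimal sketch (not the official TlsCheck.php; the endpoint is just the sandbox token URL from the question, and the CURL_SSLVERSION_TLSv1_2 constant needs PHP 5.5+ with a reasonably recent libcurl):
<?php
// Force TLS 1.2 on a cURL handle and report whether the handshake succeeds.
$ch = curl_init('https://api.sandbox.paypal.com/v1/oauth2/token');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2); // requires OpenSSL >= 1.0.1 on the curl side
curl_exec($ch);
if (curl_errno($ch)) {
    echo "TLS 1.2 handshake failed: ", curl_error($ch), "\n";
} else {
    echo "TLS 1.2 handshake OK (HTTP ", curl_getinfo($ch, CURLINFO_HTTP_CODE), ")\n";
}
curl_close($ch);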
One final thing - if this is the cause of your error, I'm going to assume your live server is at least a couple of years old. It's totally unrelated and none of my business, but if it's that out of date you probably have a whole bunch of other vulnerabilities. I would strongly suggest updating or even spinning up a new server (which may well be easier said than done...)

file_get_contents() works differently on different machines

I have written a piece of PHP code that uses file_get_contents() to download a .js file from a site. I ran the code on two different machines and they produce different results. The code is:
$link = "https://www.scotchwhiskyauctions.com/scripting/store-scripting_frontend.js";
$options = array(
    'http' => array(
        'method' => "GET",
        'header' => "Accept-language: en\r\n" .
            "User-Agent: Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B334b Safari/531.21.102011-10-16 20:23:10\r\n"
    ),
    'ssl' => array(
        'verify_peer' => false,
        'verify_peer_name' => false
    ),
);
$context = stream_context_create($options);
$line = file_get_contents($link, false, $context);
var_dump($http_response_header);
echo $line;
exit;
exit;
When I run this piece of code on a Debian 8.11 machine, it produces the following error:
PHP Warning: file_get_contents(https://www.scotchwhiskyauctions.com/scripting/store-scripting_frontend.js): failed to open stream: Connection timed out in /var/www/test.php on line 4
PHP Notice: Undefined variable: http_response_header in /var/www/test.php on line 4
NULL
However, when I run the exact same code on a different machine (Debian 4.16.12-1kali1), it obtains the file content and the variable $http_response_header contains all the response headers. Both machines use PHP 7.2. After spending days trying to figure out why the Debian 8.11 machine cannot read the file, I used wget on both machines and noticed that, again, the Debian 8.11 (jessie) machine failed to fetch the file.
I suspected it had something to do with the SSL certificates, so I ran:
sudo update-ca-certificates
sudo update-ca-certificates --fresh
but it does not help at all.
Can anyone please point me in the right direction?
Finally, I got the problem fixed by following someone's comment on this post:
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
I found the following in the Linux Advanced Routing & Traffic Control HOWTO article.
/proc/sys/net/ipv4/tcp_timestamps
Timestamps are used, amongst other things, to protect against
wrapping sequence numbers. A 1 gigabit link might conceivably
re-encounter a previous sequence number with an out-of-line value,
because it was of a previous generation. The timestamp will let it
recognize this 'ancient packet'.
However, I have no idea why it works. Can someone please explain?
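For what it's worth, one way to confirm that the problem sits below the TLS layer is to try a plain TCP connection first (a minimal sketch; the host and timeout are just illustrative). If even this times out, certificates are not involved at all.
<?php
// Plain TCP connect (no TLS) to the same host and port; a timeout here points
// at the network/TCP layer rather than the certificate store.
$errno = 0;
$errstr = '';
$fp = @fsockopen('www.scotchwhiskyauctions.com', 443, $errno, $errstr, 10);
if ($fp === false) {
    echo "TCP connect failed: [$errno] $errstr\n";
} else {
    echo "TCP connect OK\n";
    fclose($fp);
}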

999 Error Code on HEAD request to LinkedIn

We're using a curl HEAD request in a PHP application to verify the validity of generic links. We check the status code just to make sure that the link the user has entered is valid. Links to all websites have succeeded, except LinkedIn.
While it seems to work locally (Mac), when we attempt the request from any of our Ubuntu servers, LinkedIn returns a 999 status code. Not an API request, just a simple curl like we do for every other link. We've tried on a few different machines and tried altering the user agent, but no dice. How do I modify our curl so that working links return a 200?
A sample HEAD request:
curl -I --url https://www.linkedin.com/company/linkedin
Sample Response on Ubuntu machine:
HTTP/1.1 999 Request denied
Date: Tue, 18 Nov 2014 23:20:48 GMT
Server: ATS
X-Li-Pop: prod-lva1
Content-Length: 956
Content-Type: text/html
To respond to @alexandru-guzinschi a little better: we've tried masking the user agents. To sum up our trials:
Mac machine + Mac UA => works
Mac machine + Windows UA => works
Ubuntu remote machine + (no UA change) => fails
Ubuntu remote machine + Mac UA => fails
Ubuntu remote machine + Windows UA => fails
Ubuntu local virtual machine (on Mac) + (no UA change) => fails
Ubuntu local virtual machine (on Mac) + Windows UA => works
Ubuntu local virtual machine (on Mac) + Mac UA => works
So now I'm thinking they block any curl requests that don't provide an alternate UA, and also block hosting providers?
Is there any other way I can check if a link to linkedin is valid or if it will lead to their 404 page, from an Ubuntu machine using PHP?
It looks like they filter requests based on the user-agent:
$ curl -I --url https://www.linkedin.com/company/linkedin | grep HTTP
HTTP/1.1 999 Request denied
$ curl -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" -I --url https://www.linkedin.com/company/linkedin | grep HTTP
HTTP/1.1 200 OK
I found a workaround; it's important to set the accept-encoding header:
curl --url "https://www.linkedin.com/in/izman" \
--header "user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36" \
--header "accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" \
--header "accept-encoding:gzip, deflate, sdch, br" \
| gunzip
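Since the question is about doing this check from PHP, here is a rough php-curl equivalent of that command (a sketch; the URL and header values simply mirror the command-line example above and may need adjusting):
<?php
// Fetch the page with a browser-like user agent, let libcurl negotiate and decode
// the compressed response, then read the HTTP status code to judge link validity.
$ch = curl_init('https://www.linkedin.com/in/izman');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_ENCODING       => 'gzip, deflate', // advertise and transparently decode compression
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36',
    CURLOPT_HTTPHEADER     => array(
        'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    ),
));
curl_exec($ch);
echo curl_getinfo($ch, CURLINFO_HTTP_CODE), "\n"; // 200 if the link is accepted, 999 if blocked
curl_close($ch);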
It seems like LinkedIn filters on both the user agent AND the IP address. I tried this both at home and from a DigitalOcean node:
curl -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" -I --url https://www.linkedin.com/company/linkedin
From home I got a 200 OK, from DO I got 999 Denied...
So you need a proxy service like HideMyAss or another (I haven't tested it, so I can't say whether it's valid or not). Here is a good comparison of proxy services.
Or you could set up a proxy on your home network, for example using a Raspberry Pi to proxy your requests. Here is a guide on that.
A proxy would work, but I think there's another way around it. I see that from AWS and other clouds it's blocked by IP; I can issue the request from my own machine and it works just fine.
I did notice that the response returned to the cloud service contains some JS that the browser has to execute to take you to a login page. Once there, you can log in and access the page. The login page only appears for those accessing from a blocked IP.
If you use a headless client that executes JS, or maybe go straight to the subsequent link and provide the credentials of a LinkedIn user, you may be able to bypass it.

wget fails on a local domain

I have a Red Hat Linux box with Apache running several domains, including a.com and b.com.
I have a PHP script, a.com/wget.php, which makes an exec() call to download a file from the local domain b.com. Running the PHP script from the command line is successful.
But running this script from a web page results in a 404 error. The command is:
/usr/bin/wget -k -S --save-headers --keep-session-cookies
-O <local-file-name> -o <local-log-file-name> -U \"Mozilla/5.0
(Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101
Firefox/24.0\" --max-redirect=100 "http://b.com/page.php"
No log messages are written to the Apache access log file for domain b.com for this call.
BUT the server access log file (/var/log/httpd/access_log) is NOT empty; it shows that there was an attempt to open the page "/page.php" on the server (the entry in the access log has no domain).
xx.xx.xx.xx - - [19/May/2014:12:02:49 +0100] "GET /page.php
HTTP/1.0" 404 285 "-" "Mozilla/5.0 (Macintosh;
Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
The server error log (/var/log/httpd/error_log) gives this error:
[Mon May 19 12:02:49 2014] [error] [client xx.xx.xx.xx]
File does not exist: /var/www/vhosts/default/htdocs
So it would seem that something is stripping the domain name from "http://b.com/page.php" and the resulting URL that wget is trying to connect to is "/page.php". This will not work, given that the server has many domains on it.
Has anyone come across this? Is there some setting in wget, PHP, or Apache that would prevent this? I have tried different things based on suggestions for similar problems, but nothing has worked so far.
Thanks.
The problem turned out to be not in wget, but in firewall settings. The wget call, executed from behind the firewall, was resolving the domain to an external IP address, and connections to the external IP address were failing. Correcting this in the firewall fixed the wget problem.
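If you hit the same symptom, checking how the web server's PHP resolves the target domain can confirm this kind of firewall/NAT problem (a sketch; b.com stands in for the real domain):
<?php
// See which IP address the domain resolves to from the machine running the script.
$host = 'b.com';
$ip = gethostbyname($host);
echo $host, " resolves to ", $ip, "\n";
// If this is the external (public) address and the firewall does not loop
// external connections back to the local web server, wget's request will fail.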

wkhtmltopdf integrated with php doesn't work on Centos (access deny)

I installed wkhtmltopdf on my CentOS server.
Everything works fine in the shell. If I run the command in the shell:
/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf
or simply
wkhtmltopdf ... /var/www/html/test_report.pdf
everything goes well, but the same does not work if I use the exec() command in a PHP script:
exec("/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf");
I changed the permissions of the html folder to 0777, but in the access.log I see the following entry:
[08/Oct/2012:17:11:18 +0200] "GET test_report.php HTTP/1.1"
200 311 "-" "Mozilla/5.0 (Windows NT 6.1; rv:15.0) Gecko/20100101
Firefox/15.0.1"
The same script works fine on a Windows 2003 server.
Is there a way to get around this error?
Thank you.
Most likely SELinux is blocking it; I had the same issue once.
Don't disable SELinux (that's just a bad idea/the lazy man's way to "fix" it), but use the audit2allow tool instead to figure out what context/SELinux booleans need to be altered.
See http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01 for more details.
In my case the problem was also SELinux (as @Oldskool mentioned in his answer). In the exec() output there was only the message PROT_EXEC|PROT_WRITE failed.
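To surface messages like that in the first place, it helps to capture both the command's output and its exit status from PHP (a minimal sketch; the paths mirror the example above):
<?php
// Redirect stderr to stdout so wkhtmltopdf's error messages are captured too,
// then print the exit status and any output lines.
$cmd = '/usr/local/bin/wkhtmltopdf http://www.google.it /var/www/html/test_report.pdf 2>&1';
exec($cmd, $output, $status);
echo "exit status: ", $status, "\n";
echo implode("\n", $output), "\n";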
To resolve the problem I ran:
setsebool httpd_execmem on
I found this solution at groups.google.com
