I built a website that uses Steam as an OpenID provider. I initially hosted it on shared hosting.
But traffic grew from 50 users a day to 1,000 a day. I wasn't expecting that and had to change hosts. I moved to another shared host with better performance and so on, to see how things would grow. But now there's a problem.
My OpenID login with Steam, which worked perfectly on the previous host, doesn't work anymore. I tried with Google, and that worked, so I don't think my script uses a feature that is disabled on my new host.
When I use the Steam identity, it loads for about 30 seconds and then Chrome returns an ERR_EMPTY_RESPONSE error. I tried turning on error_reporting(E_ALL), but it behaves the same.
I am using LightOpenID, and here is the offending portion of the code:
$openid->identity = 'http://steamcommunity.com/openid';
header('Location: ' . $openid->authUrl());
In fact, it fails whenever I call $openid->authUrl(). Here is the complete code: http://pastebin.com/rChDzECq
How can I resolve this? Thank you in advance.
I also spent the last couple of hours fighting the LightOpenID code. I finally made it work, and here is what I learned in the process.
Absolutely no HTML output before the header() call. Even the tiniest space stopped the redirect from working; see the check below.
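A quick way to confirm nothing has slipped out early is to test headers_sent() right before redirecting (a minimal sketch using standard PHP functions):
// If anything was already output (even a stray space or a UTF-8 BOM), the Location header is never sent
if (headers_sent($file, $line)) {
    die("Output already started in $file on line $line");
}
header('Location: ' . $openid->authUrl());
exit;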
My server wouldn't allow the use of file_get_contents() and just returned a 404 error for the URL passed to it. You can work around this with a custom file_get_contents_curl() function. Here's the one I used myself:
function file_get_contents_curl($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, false);        // body only, no response headers
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15);   // give up if connecting takes too long
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
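With that in place, it works as a drop-in replacement for file_get_contents() on plain URLs, e.g.:
$html = file_get_contents_curl('http://steamcommunity.com/openid');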
I know this is a late answer but I hope it will help someone.
I have absolutely no problem getting the source code of the web page on my local server with this:
$html = file_get_contents('https://opac.nlai.ir');
And I was also fine on my host using the code below, until just a few days ago:
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://opac.nlai.ir');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 10);
$result = curl_exec($curl);
But today I figured out that the site now uses SSL: it no longer works over plain HTTP and force-redirects to HTTPS. So I did some searching and found this as a fix:
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
The code above works just fine for sites like "https://google.com" (and any other HTTPS site I've tried!),
but not for that specific website ("https://opac.nlai.ir").
In that case, the page takes about a minute to load (!) and var_dump($result) finally gives me bool(false).
I don't know how that website differs from other websites, and I really want to know what causes the problem.
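For reference, here is the complete test I'm running; the CURLOPT_TIMEOUT cap is just something I added so it can't hang forever:
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://opac.nlai.ir');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($curl, CURLOPT_TIMEOUT, 30);           // overall cap so it can't hang for a minute
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
$result = curl_exec($curl);
if ($result === false) {
    // curl_errno()/curl_error() say whether it is DNS, connect, timeout, or SSL
    echo 'cURL error ' . curl_errno($curl) . ': ' . curl_error($curl);
}
curl_close($curl);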
Sorry for my English.
Just filing this answer for the record.
As said in the question comments, the website you were trying to fetch was blocking your requests based on the location of the server executing them. It seems it only responds to IPs from a specific region.
The solution verified by #Hesam was to use cURL through a proxy IP located in an allowed region, and he found at least one that ran well.
He followed the instructions found in this other SO post:
How to use cURL via a proxy
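For completeness, the gist of that approach is just pointing cURL at a proxy; a rough sketch (the proxy host, port, and credentials below are placeholders, not a real endpoint):
// Route the request through a proxy located in an allowed region
$ch = curl_init('https://opac.nlai.ir');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080');   // placeholder proxy address
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'user:password');     // only if the proxy needs auth
$html = curl_exec($ch);
curl_close($ch);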
OK, I am having a hard time simply trying to get content from a GoDaddy-hosted server to our company's proprietary server. Originally I was using file_get_contents(), then I searched all over SO and realized cURL was a better option to get around security and configuration issues. Here is my code:
function get_content($URL){
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return the body instead of printing it
    curl_setopt($ch, CURLOPT_URL, $URL);
    $data = curl_exec($ch);
    if (curl_errno($ch)) {
        echo 'Curl error: ' . curl_error($ch);
    }
    curl_close($ch);
    return $data;
}
echo 'curl:' . get_content('https://xxx-xxxxx:4032/test2.html');
Here is the error:
Curl error: Failed to connect to xxx-xxxxx.com port 4032: Connection refused
Here are some facts:
If I enter the URL into my browser, I can retrieve test2.html
If I execute the EXACT same script on a different web host (Lunar Pages), it works perfectly fine
get_content() will work on google.com
Go Daddy representatives cannot help us
On our server, we disabled the firewall (while we tested this)
I would have posted this as a comment, but I don't have enough reputation to do that. GoDaddy is one of the worst hosts for custom code. Sure, they're fine for things like WordPress, but if you want custom functionality in your code, they're one of the worst.
Case in point: GoDaddy blocks most file_get_contents() and cURL calls at their firewall. I would go with a host like HostGator or DigitalOcean... both are cheap but not nearly as limiting.
Before making a switch, I would try running this same code in another environment locally and make sure you can connect; see the probe below.
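For example, a tiny check like this (plain fsockopen(); host and port taken from the question's error message, so the hostname is still a placeholder) shows whether the TCP connection itself is refused from a given environment:
// TCP-level probe: separates a firewall "Connection refused" from any HTTP/cURL-level issue
$fp = @fsockopen('ssl://xxx-xxxxx.com', 4032, $errno, $errstr, 10);
if ($fp === false) {
    echo "Connect failed: $errno $errstr";
} else {
    echo "TCP connect OK";
    fclose($fp);
}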
I created a small web app with the feature of signing in with a Twitter account via OAuth. It worked like a charm for a few months, but now it has stopped working. This is a quick, summarized overview of the "Sign in with Twitter" algorithm:
Collect some parameters (timestamp, nonce, some kind of application ID, and so on)
Use these parameters to build a new URL that looks like the one below (see the sketch after this list):
https://api.twitter.com/oauth/request_token?oauth_consumer_key=DLZZTIxpY19FnWJNtqw5A&oauth_nonce=1369452195&oauth_signature=DIetumiKqJu66XXVvDDHdepnP9M%3D&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1369452195&oauth_version=1.0
Connect to that URL and retrieve the data, it has an access token
Continue doing fun stuff using that access token.
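For context, step 2 boils down to standard OAuth 1.0 HMAC-SHA1 signing; here is a minimal sketch (the consumer key/secret come from your app settings, and oauth_callback and error handling are omitted):
function build_request_token_url($consumerKey, $consumerSecret) {
    $url = 'https://api.twitter.com/oauth/request_token';
    $params = array(
        'oauth_consumer_key'     => $consumerKey,
        'oauth_nonce'            => md5(uniqid(mt_rand(), true)),
        'oauth_signature_method' => 'HMAC-SHA1',
        'oauth_timestamp'        => time(),
        'oauth_version'          => '1.0',
    );
    ksort($params);   // parameters must be sorted for the signature base string
    $pairs = array();
    foreach ($params as $k => $v) {
        $pairs[] = rawurlencode($k) . '=' . rawurlencode($v);
    }
    // Base string: METHOD & encoded URL & encoded sorted parameter string
    $base = 'GET&' . rawurlencode($url) . '&' . rawurlencode(implode('&', $pairs));
    // Signing key: encoded consumer secret, "&", token secret (empty before we have a token)
    $key = rawurlencode($consumerSecret) . '&';
    $params['oauth_signature'] = base64_encode(hash_hmac('sha1', $base, $key, true));
    return $url . '?' . http_build_query($params);
}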
The URL generated in step 2 is fine, because when I copied it manually into Google Chrome it showed a beautiful access token, so the problem isn't there (I think).
In step 3, I have a really small method that does some very basic stuff: connect to the URL generated before, retrieve the content, and return it.
On my localhost, using EasyPHP 12.1, it works perfectly as usual, but on the free host I'm using (000webhost) it doesn't work anymore. When trying to connect, it just times out. The HTTPCodeError is 0 and the CurlError is "Couldn't connect to host".
This is the method used to connect to the URL.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
$response = curl_exec($ch);
// Note: curl_getinfo() only reports a meaningful HTTP code after curl_exec() has run;
// calling it before curl_exec() always yields 0
$http_status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
return $response;
And this is an example of a URL used with that method:
https://api.twitter.com/oauth/request_token?oauth_consumer_key=DLZZTIxpY19FnWJNtqw5A&oauth_nonce=1369452195&oauth_signature=DIetumiKqJu66XXVvDDHdepnP9M%3D&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1369452195&oauth_version=1.0
I've been trying to fix it all day, but now I have no idea what else to try. The files are exactly the same on my localhost and on 000webhost.
If you could enlighten me, I would be very happy. I'll take my pants off for answers if needed. Thank you very much.
It might be possible that Twitter has blacklisted (blocked) your free host's servers or IP address. This can happen if other users on the server abuse the API.
The only thing I can think of is that your free web hosting service blocks it. These services are fine as long as everything is very simple; the moment things become complicated, you run into restrictions imposed by the provider. Most of these services limit bandwidth, disk space, server usage, support, uploads, and more.
I was using cURL to scrape content from a site, and just recently my page started hanging when it reached curl_exec($ch). After some tests I noticed that it could load any page from my own domain, but when attempting to load anything external I get a connect() timeout! error.
Here's a simplified version of what I was using:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,'http://www.google.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
$contents = curl_exec ($ch);
curl_close ($ch);
echo $contents;
?>
Here's some info I have about my host from my phpinfo():
PHP Version 5.3.1
cURL support enabled
cURL Information 7.19.7
Host i686-pc-linux-gnu
I don't have SSH access or the ability to modify the php.ini file (though I can read it). Is there a way to tell whether something was recently set to block cURL access to external domains? Or is there something else I might have missed?
Thanks,
Dave
I'm not aware of any setting like that; it wouldn't make much sense.
Since you said you are on a remote web server without console access, I'd guess your activity was detected by the host, or more likely it caused issues, and so they firewalled you.
A silent iptables DROP would cause this.
When scraping Google you need to use proxies for anything more than a handful of requests, and you should never abuse your web server's primary IP if it's not your own. That's likely a breach of their TOS and could even result in legal action if they get banned from Google (which can happen).
Take a look at Google rank checker; that's a PHP script that does exactly what you want using cURL and proper IP management.
I can't think of anything other than a firewall on your side that would cause a timeout.
I'm not sure why you're getting the connect() timeout! error, but look at the following line:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);
If it's not set to 1, curl_exec() prints the response directly and returns TRUE, so none of the page's content ends up in $contents.
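In other words, the same snippet with that one flag flipped should populate $contents (a sketch of the standard behavior):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.google.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return the body from curl_exec() instead of printing it
$contents = curl_exec($ch);
curl_close($ch);
echo $contents;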
I use this method to get Facebook API data (just a search query), but I find that using curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); makes the cURL call take much longer (over 10 seconds).
Is there another cURL approach that runs faster?
NOTE: I am currently testing on localhost.
$url = "https://graph.facebook.com/search?access_token=".$token."&q=dallas&type=post&scope=publish_stream,offline_access,user_status,read_stream";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
//curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 2);
//curl_setopt($ch, CURLOPT_CAINFO, dirname(__FILE__). '/file.crt'); the way as Lumbendil recommend, download a crt file via firefox. still slowly.
$body= curl_exec($ch);
curl_close ($ch);
PS: I don't want to use an SDK, because I couldn't get the SDK working in my localhost tests, although I have read many articles on how to set it up locally. I set http://127.0.0.1/facebook as my callback URL, but it just failed. So I still want a simple cURL approach.
Thanks.
You could use a .crt file and verify against it instead of ignoring SSL verification, as explained here.
To keep all the information in one place, in your code you should write the following:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, '/path/to/crt/file.crt');
To obtain the certificate, open the page in your browser and export it via "view certificate". Remember that you must export it as an X.509 Certificate (PEM) for this to work. For a more detailed guide on exporting the certificate, visit the link provided.
If skipping the certificate check takes 10 seconds, the problem is not with the certificate or the checking, and quite frankly, it probably isn't with SSL at all.
Skipping the certificate check should be very fast and not measurable compared to how long the rest of the SSL handshake takes.
To properly track down the problem, I would recommend using the curl command-line tool with its --trace-ascii and --trace-time options to see what seems to take time. You may need to snoop on the network with Wireshark or similar to get an even better picture of what's going on.
I can't see how the other suggestions of adding a certificate check to the mix will make anything faster.
Just a side note, but if you do wish to use the SDK, you can work around the localhost issue by editing your hosts file and adding localhost.local for 127.0.0.1: /etc/hosts on a Linux machine, or C:\WINDOWS\system32\drivers\etc\hosts on a Windows machine.
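The entry itself is a single line (the same on both platforms):
127.0.0.1    localhost.local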
Then, in the Facebook app settings, simply set localhost.local as your domain and set your site URL accordingly.
You should be ready to go then.