I have this problem when I'm trying to upload a file to Amazon S3; it gives me this error, but I don't seem to understand it:
Warning: curl_setopt() [function.curl-setopt]: CURLOPT_FOLLOWLOCATION cannot be activated when safe_mode is enabled or an open_basedir is set in /var/www/vhosts/??????/httpdocs/actions/S3.php on line 1257
There is a lengthy workaround posted in the comments to the curl functions:
http://php.net/manual/en/function.curl-setopt.php#102121
Though the better solution would be not to use cURL at all. (See PEAR HTTP_Request2 or Zend_Http for alternatives, or use PHP's built-in HttpRequest if available.)
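For instance, here is a minimal sketch with PEAR's HTTP_Request2, which follows redirects in PHP code itself and so is unaffected by safe_mode/open_basedir (the URL is just a placeholder, and this assumes the HTTP_Request2 package is installed):
<?php
require_once 'HTTP/Request2.php';
// HTTP_Request2 handles redirects on its own, so restrictions on
// CURLOPT_FOLLOWLOCATION simply don't apply here.
$request = new HTTP_Request2(
    'http://www.example.com/some/file',
    HTTP_Request2::METHOD_GET,
    array('follow_redirects' => true, 'max_redirects' => 5)
);
$response = $request->send();
echo $response->getBody();
?>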
The problem is exactly what it says in the error message: you have safe_mode or open_basedir enabled in php.ini. Either edit php.ini to disable whichever of those you have on, or don't use PHP's flavor of cURL. If you can't edit php.ini, you'll have to find a new host or a new solution.
The best solution would be to get a new host. open_basedir isn't a great security feature (a good host will use the far better approach of setting up a jail), and safe_mode is deprecated. So the best result will come from disabling both directives (or finding a new host if yours is unwilling to do so).
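If you're not sure which directive is the culprit, a quick throwaway check run on the affected host will tell you:
<?php
// An empty string / false means the directive is off; anything else means it is set.
var_dump(ini_get('safe_mode'));      // deprecated since PHP 5.3, removed in 5.4
var_dump(ini_get('open_basedir'));   // a path list if open_basedir is active
?>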
However, if that's not an option, you can always implement something like this (from a comment on php.net)...
I have a shorter and less safe variant of the workaround posted by mario, but you may find it useful for URLs with a known number of redirects (for example, Facebook Graph API image calls -- graph.facebook.com/4/picture):
function cURLRequest($url) {
    $ch = curl_init();
    // curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
    $result = curl_exec($ch);
    if ($result) {
        curl_close($ch);
        return $result;
    } else if (empty($result)) {
        $info = curl_getinfo($ch);
        curl_close($ch);
        // PHP safe mode fallback for 302 redirect
        if (!empty($info['http_code']) && !empty($info['redirect_url'])) {
            return cURLRequest($info['redirect_url']);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
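For example (the Graph API URL comes from the comment above):
// Fetch a profile picture; if FOLLOWLOCATION is blocked, the function
// re-requests the redirect target reported by curl_getinfo() instead.
$image = cURLRequest('https://graph.facebook.com/4/picture');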
Or use this version of the cURL request, which falls back to following redirects manually when FOLLOWLOCATION is blocked:
// Expanded from the compressed version at https://github.com/tazotodua/useful-php-scripts/
function get_remote_data($url, $post_paramtrs = false) {
    $c = curl_init();
    curl_setopt($c, CURLOPT_URL, $url);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    if ($post_paramtrs) {
        curl_setopt($c, CURLOPT_POST, TRUE);
        curl_setopt($c, CURLOPT_POSTFIELDS, "var1=bla&" . $post_paramtrs);
    }
    curl_setopt($c, CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($c, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; rv:33.0) Gecko/20100101 Firefox/33.0");
    curl_setopt($c, CURLOPT_COOKIE, 'CookieName1=Value;');
    curl_setopt($c, CURLOPT_MAXREDIRS, 10);
    // Only ask cURL to follow redirects when open_basedir/safe_mode allows it.
    $follow_allowed = (ini_get('open_basedir') || ini_get('safe_mode')) ? false : true;
    if ($follow_allowed) {
        curl_setopt($c, CURLOPT_FOLLOWLOCATION, 1);
    }
    curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 9);
    curl_setopt($c, CURLOPT_REFERER, $url);
    curl_setopt($c, CURLOPT_TIMEOUT, 60);
    curl_setopt($c, CURLOPT_AUTOREFERER, true);
    curl_setopt($c, CURLOPT_ENCODING, 'gzip,deflate');
    $data = curl_exec($c);
    $status = curl_getinfo($c);
    curl_close($c);
    // Rewrite relative src/href/action attributes in the returned HTML to absolute URLs.
    preg_match('/(http(|s)):\/\/(.*?)\/(.*\/|)/si', $status['url'], $link);
    $data = preg_replace('/(src|href|action)=(\'|\")((?!(http|https|javascript:|\/\/|\/)).*?)(\'|\")/si', '$1=$2' . $link[0] . '$3$4$5', $data);
    $data = preg_replace('/(src|href|action)=(\'|\")((?!(http|https|javascript:|\/\/)).*?)(\'|\")/si', '$1=$2' . $link[1] . '://' . $link[3] . '$3$4$5', $data);
    if ($status['http_code'] == 200) {
        return $data;
    } elseif ($status['http_code'] == 301 || $status['http_code'] == 302) {
        // FOLLOWLOCATION was blocked: dig the redirect target out of curl_getinfo(),
        // the Location/URI header, or a "click here" link, then call ourselves again.
        if (!$follow_allowed) {
            if (empty($redirURL)) {
                if (!empty($status['redirect_url'])) {
                    $redirURL = $status['redirect_url'];
                }
            }
            if (empty($redirURL)) {
                preg_match('/(Location:|URI:)(.*?)(\r|\n)/si', $data, $m);
                if (!empty($m[2])) {
                    $redirURL = $m[2];
                }
            }
            if (empty($redirURL)) {
                preg_match('/href\=\"(.*?)\"(.*?)here\<\/a\>/si', $data, $m);
                if (!empty($m[1])) {
                    $redirURL = $m[1];
                }
            }
            if (!empty($redirURL)) {
                $t = debug_backtrace();
                return call_user_func($t[0]["function"], trim($redirURL), $post_paramtrs);
            }
        }
    }
    return "ERRORCODE22 with $url!!<br/>Last status codes<b/>:" . json_encode($status) . "<br/><br/>Last data got<br/>:$data";
}
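A couple of example calls (the URLs are placeholders):
// Plain GET request
echo get_remote_data('http://www.example.com/');

// POST request; note the function prepends "var1=bla&" to whatever you pass
echo get_remote_data('http://www.example.com/submit.php', 'name=value&foo=bar');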
I have a repetitive task that I do daily: log in to a web portal, click a link that pops open a new window, and then click a button to download an Excel spreadsheet. It's a time-consuming task that I would like to automate.
I've been doing some research with PHP and cURL, and while it seems like it should be possible, I haven't found any good examples. Has anyone ever done something like this, or do you know of any tools that are better suited for it?
Are you familiar with the basics of HTTP requests? Like, do you know the difference between a POST and a GET request? If what you're doing amounts to nothing more than GET requests, then it's actually super simple and you don't need to use cURL at all. But if "clicking a button" means submitting a POST form, then you will need cURL.
One way to check this is by using a tool such as Live HTTP Headers and watching what requests happen when you click on your links/buttons. It's up to you to figure out which variables need to get passed along with each request and which URLs you need to use.
But assuming that there is at least one POST request, here's a basic script that will post data and get back whatever HTML is returned.
<?php
if ( $ch = curl_init() ) {
    $data = 'field1=' . urlencode('somevalue');
    $data .= '&field2[]=' . urlencode('someothervalue');
    $url = 'http://www.website.com/path/to/post.asp';
    $userAgent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)';

    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);

    $html = curl_exec($ch);
    curl_close($ch);
} else {
    $html = false;
}

// write code here to look through $html for
// the link to download your excel file
?>
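As a starting point for that last comment, here is a minimal sketch that scans the returned HTML for a spreadsheet link, assuming the page links to the file directly with an href ending in .xls or .xlsx:
<?php
// Hypothetical follow-up: extract candidate download links from $html.
if ($html !== false) {
    $dom = new DOMDocument();
    @$dom->loadHTML($html);                 // suppress warnings from sloppy markup
    $xpath = new DOMXPath($dom);
    foreach ($xpath->query('//a[@href]') as $a) {
        $href = $a->getAttribute('href');
        if (preg_match('/\.xlsx?$/i', $href)) {
            echo "Found spreadsheet link: $href\n";
        }
    }
}
?>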
Try this:
$ch = curl_init();
$csrf_token = $this->getCSRFToken($ch);   // this function gets the CSRF token from the website, if you need it
$ch = $this->signIn($ch, $csrf_token);    // sign-in function you must implement yourself; it must return the handle

curl_setopt($ch, CURLOPT_HTTPGET, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // make curl_exec() return the body instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 300);         // in case the file is large
curl_setopt($ch, CURLOPT_URL, "https://your-URL/anything");
$return = curl_exec($ch);

// the important part
$destination = "files.xlsx";
if (file_exists($destination)) {
    unlink($destination);
}
$file = fopen($destination, "w+");
fputs($file, $return);
if (fclose($file)) {
    echo "downloaded";
}
curl_close($ch);
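For very large files you can also stream the response straight to disk instead of buffering it in memory; a sketch reusing the same handle (assuming the sign-in helpers above have already configured it):
// Alternative: let cURL write the body directly to a file handle.
$destination = "files.xlsx";
$fp = fopen($destination, "w+");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);  // output goes to the file below, not to a string
curl_setopt($ch, CURLOPT_FILE, $fp);              // write the response body straight to $fp
curl_setopt($ch, CURLOPT_URL, "https://your-URL/anything");
if (curl_exec($ch) !== false) {
    echo "downloaded";
}
fclose($fp);
curl_close($ch);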
PHP curl_exec works here: http://membership.oqmhandbook.com/order.php
but the same code doesn't work here: http://wecallyouleads.com/order.php
Any help would be appreciated!
My code is:
$dsc_msg = '[my xml request]';
$dsc_header = array("POST /send/interchange HTTP/1.1",
"Host: lightning.instascreen.net",
"Content-Type: text/xml, charset=utf-8",
"SOAPAction: \"https://lightning.instascreen.net/send/interchange\""
);
$ch = curl_init("https://lightning.instascreen.net/send/interchange");
if ($ch == FALSE) {
    echo "Connecting to createsend failed\n";
}
curl_setopt($ch, CURLOPT_HTTPHEADER, $dsc_header);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $dsc_msg);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch, CURLOPT_VERBOSE, 0);
$result = curl_exec($ch);
echo "Return XML:\n$result\n";
From your error message, it looks like cURL is attempting to verify the SSL certificate (which it absolutely should), but it cannot. The real fix is to make sure the SSL setup is in proper order.
If you absolutely cannot, you can try adding in this to see if it makes a difference.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
This is not really good practice, and the article below on how this sort of thing makes cURL-based code "the most dangerous code in the world" is well worth reading. Scroll down to section 7 (on page 7 of the PDF) for some good examples of what NOT to do.
The Most Dangerous Code in the World:
Validating SSL Certificates in Non-Browser Software
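If the underlying problem is a missing or outdated CA bundle, the safer fix is to point cURL at a valid one instead of disabling verification; a minimal sketch (the bundle path is an assumption, use whatever your system provides or a bundle from https://curl.se/docs/caextract.html):
// Keep certificate verification on and tell cURL where the trusted CAs live.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);                            // verify the hostname matches the certificate
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt'); // path is an assumption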
If I get the title of the page, I can tell whether the download link is active or dead.
For example: "Free online storage" is the title of a dead link and "[file name]" is the title of an active link (MediaFire). But my page takes too long to respond, so is there any other way to check whether a download link is active or dead?
This is what I have done:
<?php
function getTitle($Url){
    $str = file_get_contents($Url);
    if(strlen($str)>0){
        preg_match("/\<title\>(.*)\<\/title\>/",$str,$title);
        return $title[1];
    }
}
?>
Do not perform a GET request, which downloads the whole page/file, but a HEAD request, which fetches only the HTTP headers; then check that the status is 200 and the content type is not text/html.
Something like this...
function url_validate($link)
{
    # http://www.example.com/determining-if-a-url-exists-with-curl/
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $link);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_MAXREDIRS, 10); // follow up to 10 redirections - avoids loops
    $data = curl_exec($ch);
    curl_close($ch);

    preg_match_all("/HTTP\/1\.[1|0]\s(\d{3})/", $data, $matches);
    $code = end($matches[1]);

    if(!$data)
    {
        return(false);
    }
    else
    {
        if($code==200)
        {
            return(true);
        }
        elseif($code==404)
        {
            return(false);
        }
    }
}
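The content-type check mentioned above is missing from that snippet; a shorter variant (a sketch, not the original answer's code) can read both the status code and the content type from curl_getinfo() instead of parsing the raw headers:
// Variant: use curl_getinfo() for the status code and content type.
function url_validate_info($link)
{
    $ch = curl_init($link);
    curl_setopt($ch, CURLOPT_NOBODY, true);           // HEAD request - no body download
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE); // e.g. "application/zip" for a real file
    curl_close($ch);

    // Active download: HTTP 200 and not an HTML "file not found" page.
    return $code == 200 && stripos((string)$type, 'text/html') === false;
}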
You can safely use any cURL library function. It is legitimate and thus would not be regarded as a hacking attempt. The only requirement is that your web hosting company has the cURL extension installed, which is very likely.
cURL should do the job. You can check the headers returned and the text content as well if you want.
I'm trying to use cURL in PHP to read an unreliable web page. The page is often unavailable because of server errors. However, I still need to read it when it is available. Additionally, I don't want the unreliability of the web page to affect my code; I would like my PHP to fail gracefully and move on. Here is what I have so far:
<?php
function get_url_contents($url){
    $crl = curl_init();
    $timeout = 2;
    curl_setopt($crl, CURLOPT_URL, $url);
    curl_setopt($crl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($crl, CURLOPT_CONNECTTIMEOUT, $timeout);
    $ret = curl_exec($crl);
    curl_close($crl);
    return $ret;
}

$handle = get_url_contents('http://www.mydomain.com/mypage.html');
?>
Use this instead. cURL isn't strictly necessary here: PHP's HTTP stream wrappers (used by file_get_contents()) are available on virtually every install and are often simpler:
// Remember the current default-context options so they can be restored afterwards.
$previousOptions = stream_context_get_options(stream_context_get_default());
// The timeout option must sit under the 'http' wrapper key.
stream_context_set_default(array('http' => array('timeout' => 2)));
// @ suppresses the warning so an unreachable page fails gracefully ($content === false).
$content = @file_get_contents('http://www.mydomain.com/mypage.html');
stream_context_set_default($previousOptions);
This sets the default stream context to time out after 2 seconds and fetches the content of the URL via a stream wrapper, which has been available in every PHP version from 5.2 onward.
You are not obliged to restore the default context, depending on your site's code, but it's always a good thing to do. If you don't, the whole operation takes only two lines of code.
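If you'd rather not touch the default context at all, you can pass a context directly as the third argument of file_get_contents() (a minimal sketch, using the URL from the question):
// Explicit context: no global state is changed.
$context = stream_context_create(array('http' => array('timeout' => 2)));
$content = @file_get_contents('http://www.mydomain.com/mypage.html', false, $context);
if ($content === false) {
    // Request failed (timeout, DNS error, HTTP error, ...); fail gracefully here.
}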
You could test the HTTP response code to see if the page was successfully retrieved. I can't remember whether >=200 and <302 is exactly the right range, so have a quick peek at a list of HTTP response codes if you use this method.
<?php
function get_url_contents($url){
    $crl = curl_init();
    $timeout = 2;
    curl_setopt($crl, CURLOPT_URL, $url);
    curl_setopt($crl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($crl, CURLOPT_CONNECTTIMEOUT, $timeout);
    $ret['pagesource'] = curl_exec($crl);
    $httpcode = curl_getinfo($crl, CURLINFO_HTTP_CODE);
    curl_close($crl);
    if($httpcode >= 200 && $httpcode < 302) {
        $ret['response'] = true;
    } else {
        $ret['response'] = false;
    }
    return $ret;
}

$handle = get_url_contents('http://192.168.1.118/newTest/mainBoss.php');
if($handle['response'] == false){
    echo 'page is no good';
} else {
    echo 'page is ok and here it is:' . $handle['pagesource'] . 'DONE.<br>';
}
?>
Hi guys, I wish to get information from Wikipedia for entries I have in my database, for example some stadiums and country information. I'm using Zend Framework. Also, how would I be able to handle queries that return multiple ambiguous entries or the like? I would like all the help I can get here.
Wikipedia runs on MediaWiki, which offers an Application Programming Interface (API).
You can check out MediaWiki API on Wikipedia - http://en.wikipedia.org/w/api.php
Documentation for MediaWiki API - http://www.mediawiki.org/wiki/API
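Since you're on Zend Framework, here is a minimal sketch with Zend_Http_Client against that API (the query parameters and the 'Camp Nou' title are just illustrative assumptions; see the API docs for the full parameter list):
<?php
require_once 'Zend/Http/Client.php';

// Ask the MediaWiki API for a plain-text extract of one article.
$client = new Zend_Http_Client('http://en.wikipedia.org/w/api.php');
$client->setParameterGet(array(
    'action'      => 'query',
    'prop'        => 'extracts',
    'exintro'     => 1,
    'explaintext' => 1,
    'titles'      => 'Camp Nou',   // hypothetical stadium entry from your database
    'format'      => 'json',
));
$response = $client->request(Zend_Http_Client::GET);
$result = json_decode($response->getBody(), true);
print_r($result['query']['pages']);
?>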
Do a simple HTTP request to the article you are looking to import. Here's a good library which might help with parsing the HTML, though there are dozens of solutions for that as well, including the standard DOM extension that ships with PHP.
<?php
require_once "HTTP/Request.php";
$req = new HTTP_Request("http://www.yahoo.com/");
if (!PEAR::isError($req->sendRequest())) {
    echo $req->getResponseBody();
}
?>
Note, you will be locked out of the site if your traffic levels are deemed too high. (If you want a HUGE number of articles, download the database)
This blog post has a really good snippet for getting a definition from Wikipedia:
<?php
// FUNCTION THAT TAKES A KEYWORD AS PARAMETER AND RETURNS THE WIKI DEFINITION (IN ARRAY FORMAT)
function wikidefinition($s) {
    // ENGLISH WIKI
    $url = "http://en.wikipedia.org/w/api.php?action=opensearch&search=" . urlencode($s) . "&format=xml&limit=1";
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HTTPGET, TRUE);
    curl_setopt($ch, CURLOPT_POST, FALSE);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_NOBODY, FALSE);
    curl_setopt($ch, CURLOPT_VERBOSE, FALSE);
    curl_setopt($ch, CURLOPT_REFERER, "");
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_setopt($ch, CURLOPT_MAXREDIRS, 4);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 6.1; he; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8");
    $page = curl_exec($ch);
    curl_close($ch);
    $xml = simplexml_load_string($page);
    if((string)$xml->Section->Item->Description) {
        return array((string)$xml->Section->Item->Text,
                     (string)$xml->Section->Item->Description,
                     (string)$xml->Section->Item->Url);
    } else {
        return "";
    }
}
// END OF FUNCTION WIKIDEFINITION

// USE OF FUNCTION
$data = wikidefinition('Bangladesh');
//var_dump( wikidefinition('bangladesh') ); // displays the array content
echo "Word:" . $data[0] . "<br/>";
echo "Definition:" . $data[1] . "<br/>";
echo "Link:" . $data[2] . "<br/>";
?>