I have the following code:
$fiz = $_GET['file']; // path to a text file with one username per line
$file = file_get_contents($fiz);
$trim = trim($file);
$tw = explode("\n", $trim);
$browser = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36';
foreach($tw as $twi){
    $url = 'https://twitter.com/users/username_available?username='.$twi;
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_USERAGENT, $browser); // no quotes, or the literal string '$browser' is sent
    curl_setopt($ch, CURLOPT_TIMEOUT, 8);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    $result = curl_exec($ch);
    curl_close($ch);
    $json = json_decode($result, true);
    if($json['valid'] == 1){
        echo "Twitter ".$twi." is available! <br />";
        $fh = fopen('available.txt', 'a') or die("can't open file");
        fwrite($fh, $twi."\n");
        fclose($fh);
    } else {
        echo "Twitter ".$twi." is taken! <br />";
    }
}
What it does is take a list that looks something like this:
apple
dog
cat
and so on, and check each name against Twitter to see whether it is taken or not.
What I want to know is whether there is any way to make each result show up after its check finishes, instead of all the results showing up at once.
You need to use Ajax calls. If you are familiar with JavaScript or jQuery, you can do this easily.
Instead of checking all the names at once, use an Ajax function to send one name at a time to the server-side PHP code.
Say you send "cat" first: the request is processed and the result comes back via Ajax, so you can display it on the page.
Then send "dog", get the response, display it, and so on.
A similar question has been answered here: Return AJAX result for each cURL Request
Hope this helps; I use jQuery here.
JavaScript
<script>
function checkUserName(name, keyArray, position){
    // Result is displayed in the 'result' element; the username is sent as a GET parameter
    $("#result").load("namecheck.php", "username=" + encodeURIComponent(name), function(){
        fetchNext(keyArray, position); // only check the next name once this one has finished
    });
}
function fetchNext(keyArray, position){
    position++; // move to the next name in the array
    if(position < keyArray.length){ // do not exceed the array count
        checkUserName(keyArray[position], keyArray, position); // make the Ajax call to check the username
    }
}
function startProcess(){
    var keyArray = ['cat','dog','mouse','rat'];
    var position = 0; // start with the first element of the array
    checkUserName(keyArray[position], keyArray, position);
}
</script>
HTML
<div id="result"></div>
<button onclick="startProcess()"> Start Process </button>
PHP
<?php
$twi = $_GET['username'];
$browser = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36';
$url = 'https://twitter.com/users/username_available?username='.urlencode($twi);
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_USERAGENT, $browser); // no quotes around the variable
curl_setopt($ch, CURLOPT_TIMEOUT, 8);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($ch);
curl_close($ch);
$json = json_decode($result, true);
if($json['valid'] == 1){
    echo "Twitter ".$twi." is available! <br />";
    $fh = fopen('available.txt', 'a') or die("can't open file");
    fwrite($fh, $twi."\n");
    fclose($fh);
} else {
    echo "Twitter ".$twi." is taken! <br />";
} ?>
I am new to programming.
I need to extract Wikipedia content and put it into HTML.
// helper: performs the cURL request and returns the raw response body (JSON here)
function curl($url){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 GTB5');
$result = curl_exec($ch);
curl_close($ch);
return $result;
}
$search = $_GET["search"];
if (empty($search)) {
    // search param not passed in the url
    exit;
} else {
    // create the url to use in the curl call
    $term = str_replace(" ", "_", $search);
    // action=parse matches the ['parse']['wikitext']['*'] structure read below;
    // format must be json (jsonfm is an HTML-wrapped debugging view)
    $url = "https://en.wikipedia.org/w/api.php?action=parse&page=".urlencode($term)."&prop=wikitext&format=json";
    $json = curl($url);
    $data = json_decode($json, true);
    $data = $data['parse']['wikitext']['*'];
}
So basically I want to reprint a wiki page with my own styles, but I do not know how to do it.
Any ideas? Thanks
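One possible approach, sketched under the assumption that MediaWiki's action=parse endpoint is acceptable here: prop=text returns the already-rendered HTML of a page, which can then be wrapped in your own stylesheet. The page title "Stack_Overflow" and style.css below are placeholders.
<?php
// Sketch: fetch the rendered HTML of a Wikipedia page and wrap it in our own styles.
function curlGet($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; MyWikiReader/1.0)');
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}
$page = "Stack_Overflow"; // placeholder page title
$url = "https://en.wikipedia.org/w/api.php?action=parse&page=".urlencode($page)."&prop=text&format=json";
$data = json_decode(curlGet($url), true);
$html = $data['parse']['text']['*']; // the rendered article body as HTML
echo "<html><head><link rel='stylesheet' href='style.css'></head><body>";
echo $html; // now styled by your own CSS
echo "</body></html>";
?>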
I have this code to try to get the pagination links using PHP, but the result is not quite right. Could anyone help me?
What I get back is just a recurring instance of the first link.
<?php
include_once('simple_html_dom.php');
function dlPage($href) {
$curl = curl_init();
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_URL, $href);
curl_setopt($curl, CURLOPT_REFERER, $href);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.125 Safari/533.4");
$str = curl_exec($curl);
curl_close($curl);
// Create a DOM object
$dom = new simple_html_dom();
// Load HTML from a string
$dom->load($str);
$Next_Link = array();
foreach($dom->find('a[title=Next]') as $element){
$Next_Link[] = $element->href;
}
print_r($Next_Link);
$next_page_url = $Next_Link[0];
if($next_page_url !='') {
echo '<br>' . $next_page_url;
$dom->clear();
unset($dom);
//load the next page from the pagination to collect the next link
dlPage($next_page_url);
}
}
$url = 'https://www.jumia.com.gh/phones/';
$data = dlPage($url);
//print_r($data)
?>
What I want to get is
mySiteUrl/?facet_is_mpg_child=0&viewType=gridView&page=2
mySiteUrl//?facet_is_mpg_child=0&viewType=gridView&page=3
.
.
.
down to the last link in the pagination. Please help.
Here it is. Note that I apply htmlspecialchars_decode to the link, because the href passed to cURL should not contain an &amp; entity the way it does in XML. As I understood it, the return value of dlPage should be the last link in the pagination.
<?php
include_once('simple_html_dom.php');
function dlPage($href, $already_loaded = array()) {
$curl = curl_init();
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_URL, $href);
curl_setopt($curl, CURLOPT_REFERER, $href);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.125 Safari/533.4");
$htmlPage = curl_exec($curl);
curl_close($curl);
echo "Loading From URL:" . $href . "<br/>\n";
$already_loaded[$href] = true;
// Create a DOM object and load the HTML we already fetched with cURL above
$dom = new simple_html_dom();
$dom->load($htmlPage);
$next_page_url = null;
$items = $dom->find('ul[class="osh-pagination"] li[class="item"] a[title="Next"]');
foreach ($items as $item) {
$link = htmlspecialchars_decode($item->href);
if (!isset($already_loaded[$link])) {
$next_page_url = $link;
break;
}
}
if ($next_page_url !== null) {
$dom->clear();
unset($dom);
//load the next page from the pagination to collect the next link
return dlPage($next_page_url, $already_loaded);
}
return $href;
}
$url = 'https://www.jumia.com.gh/phones/';
$data = dlPage($url);
echo "DATA:" . $data . "\n";
And the output is:
Loading From URL:https://www.jumia.com.gh/phones/<br/>
Loading From URL:https://www.jumia.com.gh/phones/?facet_is_mpg_child=0&viewType=gridView&page=2<br/>
Loading From URL:https://www.jumia.com.gh/phones/?facet_is_mpg_child=0&viewType=gridView&page=3<br/>
Loading From URL:https://www.jumia.com.gh/phones/?facet_is_mpg_child=0&viewType=gridView&page=4<br/>
Loading From URL:https://www.jumia.com.gh/phones/?facet_is_mpg_child=0&viewType=gridView&page=5<br/>
DATA:https://www.jumia.com.gh/phones/?facet_is_mpg_child=0&viewType=gridView&page=5
I want to combine cURL and Simple HTML DOM.
Both work fine separately.
I want to cURL a site and then look into the inner data using the DOM,
with pagination page numbers.
I am using this code.
<?php
include 'simple_html_dom.php';
function dlPage($href) {
$curl = curl_init();
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_URL, $href);
curl_setopt($curl, CURLOPT_REFERER, $href);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.125 Safari/533.4");
$str = curl_exec($curl);
curl_close($curl);
// Create a DOM object
$dom = new simple_html_dom();
// Load HTML from a string
$dom->load($str);
return $dom;
}
$url = 'http://example.com/';
$data = dlPage($url);
// echo $data;
#######################################################
$startpage = 1;
$endpage = 3;
for ($p=$startpage;$p<=$endpage;$p++) {
$html = file_get_html("http://example.com/page/$p.html"); // double quotes so $p is interpolated
// connect to main page links
foreach ($html->find('div#link a') as $link) {
$linkHref = $link->href;
//loop through each link
$linkHtml = file_get_html($linkHref);
// parsing inner data
foreach($linkHtml->find('h1') as $title) {
echo $title;
}
foreach ($linkHtml->find('div#data') as $description) {
echo $description;
}
}
}
?>
How can I combine this to make it work as one single script?
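A minimal sketch of one way to combine them, reusing the dlPage() function and the placeholder example.com URLs and selectors from the snippet above: every fetch goes through cURL, and the simple_html_dom object that dlPage() returns is parsed directly, so file_get_html is no longer needed.
<?php
include 'simple_html_dom.php';
// dlPage() as defined above: fetches $href with cURL and returns a simple_html_dom object
$startpage = 1;
$endpage = 3;
for ($p = $startpage; $p <= $endpage; $p++) {
    // fetch each pagination page through cURL instead of file_get_html
    $html = dlPage("http://example.com/page/$p.html");
    // connect to the main page links
    foreach ($html->find('div#link a') as $link) {
        // fetch each inner page through cURL as well
        $linkHtml = dlPage($link->href);
        // parse the inner data
        foreach ($linkHtml->find('h1') as $title) {
            echo $title;
        }
        foreach ($linkHtml->find('div#data') as $description) {
            echo $description;
        }
        $linkHtml->clear(); // free the memory held by the DOM
        unset($linkHtml);
    }
    $html->clear();
    unset($html);
}
?>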
I have been fighting with this for hours now. I am trying to retrieve an RSS feed from MaxHire:
rsslink, parse the content, and display it using jFeed. I am aware that Ajax does not allow cross-domain requests, and I have been using the proxy.php that jFeed comes packaged with, but to no avail: it just tells me there are too many redirects in the URL. So I have increased them like so:
<?php
header('Content-type: text/html');
$context = array(
'http'=>array('max_redirects' => 99)
);
$context = stream_context_create($context);
// hand over the context to fopen()
$handle = fopen($_REQUEST['url'], "r", false, $context);
if ($handle) {
while (!feof($handle)) {
$buffer = fgets($handle, 4096);
echo $buffer;
}
fclose($handle);
}
?>
But still no luck; it just returns a message telling me that the object has moved. So I have moved on to using cURL like so:
$ch = curl_init('http://www.maxhire.net/cp/?EC5A6C361E43515B7A591C6539&L=EN');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);
$result = curl_exec($ch);
var_dump($result);
to retrieve the XML page locally, but it just returns the same error saying the object has moved:
<body>string(237) "<title>Object moved</title>
<h2>Object moved to here.</h2>
"
</body>
It then redirects me to a local URL with &AspxAutoDetectCookieSupport=1 added to the end.
Can someone please explain what I'm doing wrong?
Right, I managed to get cURL working by faking the user agent and the cookies, and I am using a custom meta field in WordPress to assign the URL, like so:
<?php
$mykey_values = get_post_custom_values('maxhireurl');
foreach ( $mykey_values as $key => $value ) {
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $value);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.6 (KHTML, like Gecko) Chrome/16.0.897.0 Safari/535.6');
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
curl_setopt($ch, CURLOPT_COOKIEJAR, "cookie.txt");
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_REFERER, "http://www.maxhire.net");
$html = curl_exec($ch);
curl_close($ch);
echo $html;
}
?>
For what I'm trying to do, I use PHP 5 in CLI and the cURL extension.
I'm trying to download a file from YouTube's server. It works fine in any browser;
the link is something like this:
http://youtube.com/get_video_info?video_id=VIDEO_ID
example: http://youtube.com/get_video_info?video_id=9pQxmD6Bhd
When I access this file through my browser, it prompts me with a download box for the file
'get_video_info'; once downloaded, the file contains some data.
The problem is that when I try to get this file with cURL, I keep getting this error message:
status=fail&errorcode=2&reason=Invalid+parameters.
This is the code (I tried to change some options, but I'm not familiar with cURL, so I'm stuck):
$c = curl_init();
curl_setopt($c, CURLOPT_URL, "http://youtube.com/get_video_info?video_id=9pQxmD6Bhd");
curl_setopt($c, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1");
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($c, CURLOPT_HEADER, false);
$output = curl_exec($c);
if($output === false)
{
trigger_error('Erreur curl : '.curl_error($c),E_USER_WARNING);
}
else
{
var_dump($output);
}
curl_close($c);
I tried some curl_setopt options, like CURLOPT_TRANSFERTEXT, with no success.
I definitely need help!
Thanks for any answers, and sorry if I did something that doesn't respect the rules here; it's my first post.
EDIT
Here is the code to download a YouTube video (.ogg) with PHP in CLI.
<?php
/*Youtube URL and ID*/
$youtube_video = "http://www.youtube.com/watch?v=Ftud51NhY2I";
$yt_id = explode("=", $youtube_video);
$id = $yt_id[1];
/*
Functions
*/
function get_link($raw){
$url = rawurldecode(rawurldecode($raw));
$url = explode("&qual", $url);
return $url[0];
}
/*
Here we go
Query video token
*/
$c = curl_init();
curl_setopt($c, CURLOPT_URL, $youtube_video);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_HEADER, false);
$output = curl_exec($c);
if($output === false)
{
trigger_error('Erreur curl : '.curl_error($c),E_USER_WARNING);
}
curl_close($c);
/*
Get Video infos
*/
$c = curl_init();
curl_setopt($c, CURLOPT_URL, "http://youtube.com/get_video_info?video_id=".$id);
curl_setopt($c, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1");
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($c, CURLOPT_HEADER, false);
$output = curl_exec($c);
if($output === false){trigger_error('Erreur curl : '.curl_error($c), E_USER_WARNING);}
curl_close($c);
/*Get RAW link*/
$temp = explode("url_encoded_fmt_stream_map=url%3D", $output);
$url = explode("=", $temp[1]);
$url = get_link($url[0]);
/*Get Video name*/
$temp = "";
$temp = explode("title=", $output);
$title = explode("&", $temp[1]);
$title = rawurldecode(rawurldecode($title[0]));
$replace = array(':', '+', '\\', '/', '"', '<', '>', '|', '(', ')', '\'');
$title = str_replace($replace, ' ',$title);
//echo $title;
/*
Download Video
*/
$path = $title.'.ogg';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);
file_put_contents($path, $data);
echo "Done... \r\n";
?>
You get the error message because the video_id parameter isn't valid: YouTube video IDs are 11 characters long, and 9pQxmD6Bhd is only 10.
Try changing that ID and it should work correctly.
http://www.youtube.com/watch?v=9pQxmD6Bhd does not exist.
YouTube has changed their system. get_video_info now only works for the real IP that calls it. When you use cURL, your server's IP is sent to YouTube, and the direct video download URLs it returns are created for that IP, so you then have to download the videos from the server's IP as well.
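To illustrate that constraint, here is a minimal hypothetical sketch: the same server that extracted $url in the script above downloads the video itself and relays the bytes to the visitor.
<?php
// Hypothetical sketch: the direct link only works from the IP that requested it,
// so this server fetches the bytes itself and streams them on to the visitor.
// $url is assumed to hold the direct video link extracted as in the script above.
header('Content-Type: video/ogg');
header('Content-Disposition: attachment; filename="video.ogg"');
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false); // curl writes straight to the output
curl_exec($ch);
curl_close($ch);
?>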