I'm trying to get an array of ALL videos from a Vimeo album, using Vimeo's Simple API.
https://developer.vimeo.com/apis/simple#album-request
So the doc above says that to access information about a Vimeo album, you form a URL like this:
http://vimeo.com/api/v2/album/album_id/request.output
and in my case, with the album ID taken from the form post:
$id2 = $_POST['alb1'];
http://vimeo.com/api/v2/album/$id2/info.php
Everyone says to use cURL to read the .php file that Vimeo serves at the URL above.
From it, I need to acquire the total number of videos in the album.
With that number I need to create a loop that accesses all the videos (for as long as the album is) and saves their URLs into an array.
That array is then saved to a SQL database, and the database is read back in a loop to print out <li> lines, each with an image of the video in it.
See it all (not) in action here:
http://s199881165.onlinehome.us/transfem/newlayout/getal.php
Here's the PHP I use to read the Vimeo file:
function getVimeoInfo($id, $info)
{
    if (!function_exists('curl_init')) die('cURL is not installed!');
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "http://vimeo.com/api/v2/album/$id/videos.php");
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $output = unserialize(curl_exec($ch));
    $output = $output[0][$info]; // note: only reads the first video's field
    curl_close($ch);
    return $output;
}
function getVimeoInfo2($id2, $info2)
{
    if (!function_exists('curl_init')) die('cURL is not installed!');
    $ch2 = curl_init();
    curl_setopt($ch2, CURLOPT_URL, "http://vimeo.com/api/v2/album/$id2/info.php");
    curl_setopt($ch2, CURLOPT_HEADER, 0);
    curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch2, CURLOPT_TIMEOUT, 10);
    $output2 = unserialize(curl_exec($ch2));
    $output2 = $output2[0][$info2];
    curl_close($ch2);
    return $output2;
}
Here is the other part:
$thetoatslotzes = getVimeoInfo2($posty_alby, 'total_videos');
echo "<script> alert('lotze: " . $thetoatslotzes . "');</script>";
$albarray = getVimeoInfo($_POST['alb1'], 'url');
echo "<script> alert('albarray: " . $albarray . "');</script>";
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dbh->beginTransaction();
$dbh->exec("TRUNCATE TABLE vim_playlist12");
$dbh->commit();
$stmt = $dbh->prepare("INSERT INTO vim_playlist12 (url, listnum) VALUES (?, ?)");
$i = 0;
while ($i < 9) {
    $eye_matee = $i + 1;
    $dbh->beginTransaction();
    // Bind the values so the URL string is quoted correctly
    $stmt->execute(array($albarray[$i], $eye_matee));
    $i = $i + 1;
    $dbh->commit();
}
$seleccion = 'SELECT url, listnum FROM vim_playlist12 ORDER BY listnum';
foreach ($dbh->query($seleccion) as $row) {
    print $row['listnum'] . ": " . $row['url'] . "<br>";
}
}
catch (Exception $e) {
    $dbh->rollBack();
    echo "Failed: " . $e->getMessage();
}
}
else
{}
So ultimately, if I can get that array of URLs from Vimeo's videos.php, I'll be in good shape.
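For reference, a sketch of how the whole-album fetch could look, assuming the Simple API caps videos.php at 20 videos per page and supports a ?page=N parameter (both worth verifying against the docs linked above); the helper names here are mine, not Vimeo's:

```php
<?php
// Pure helper: how many pages are needed for $total videos at $perPage per page.
function vimeoAlbumPageCount($total, $perPage = 20)
{
    return ($total <= 0) ? 0 : (int) ceil($total / $perPage);
}

// Fetch one page of an album's videos.php and return the unserialized array.
function fetchAlbumPage($albumId, $page)
{
    $ch = curl_init("http://vimeo.com/api/v2/album/$albumId/videos.php?page=$page");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $raw = curl_exec($ch);
    curl_close($ch);
    return $raw === false ? array() : (array) unserialize($raw);
}

// Loop over every page and collect each video's 'url' field.
function getAlbumVideoUrls($albumId, $totalVideos)
{
    $urls = array();
    $pages = vimeoAlbumPageCount($totalVideos);
    for ($page = 1; $page <= $pages; $page++) {
        foreach (fetchAlbumPage($albumId, $page) as $video) {
            if (isset($video['url'])) {
                $urls[] = $video['url'];
            }
        }
    }
    return $urls;
}
```

Something like getAlbumVideoUrls($_POST['alb1'], $thetoatslotzes) would then give the full array of URLs, instead of the single first-video value that getVimeoInfo currently returns.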
I am trying to skip an element when its InnerText is empty, but it still outputs a default value.
This is my code:
if (strip_tags($result[$c]->innertext) == '') {
    $c++;
    continue;
}
This is the output:
Thanks
EDIT2: I did the var_dump
var_dump($result[$c]->innertext)
and I got this:
how can I fix it please?
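One common cause: the "empty" cell actually contains whitespace or an &nbsp; entity, which a plain == '' comparison will never match. A hedged sketch of a stricter check (isBlankHtml is my name, not part of simple_html_dom):

```php
<?php
// A stricter "is this cell effectively empty?" check. Scraped innertext often
// holds newlines, spaces, or &nbsp; entities that strip_tags() alone leaves behind.
function isBlankHtml($innertext)
{
    // Decode entities (&nbsp; becomes a non-breaking space), drop any tags,
    // then trim both regular and non-breaking whitespace.
    $text = strip_tags(html_entity_decode($innertext, ENT_QUOTES, 'UTF-8'));
    return trim($text, " \t\n\r\0\x0B\xC2\xA0") === '';
}
```

Used in the loop as: if (isBlankHtml($result[$c]->innertext)) { $c++; continue; }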
EDIT3: This is my code; I extract the team names and the results this way, but the last part doesn't work well when we have postponed matches:
<?php
include('../simple_html_dom.php');

function getHTML($url, $timeout)
{
    $ch = curl_init($url); // initialize curl with given url
    curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER["HTTP_USER_AGENT"]); // set useragent
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // write the response to a variable
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects if any
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); // max. seconds to connect
    curl_setopt($ch, CURLOPT_FAILONERROR, 1); // stop when it encounters an error
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
    return @curl_exec($ch); // @ suppresses warnings; returns false on failure
}

$response = getHTML("https://www.betexplorer.com/soccer/japan/j3-league/results/", 10);
$html = str_get_html($response);
$titles = $html->find("a[class=in-match]"); // match names
$result = $html->find("td[class=h-text-center]/a"); // match results
$c = 0; $b = 0; $o = 0; $z = 0; $h = 0; // set counters
foreach ($titles as $match) { // get all data
    $match_status = $result[$h++];
    if (strip_tags($result[$c]->innertext) == 'POSTP.') { // bypass postponed match, but it doesn't work anymore
        $c++;
        continue;
    }
    list($num1, $num2) = explode(':', $result[$c++]->innertext); // <- explode
    $num1 = intval($num1);
    $num2 = intval($num2);
    $num3 = ($num1 + $num2);
    $risultato = ($num1 . '-' . $num2);
    list($home, $away) = explode(' - ', $titles[$z++]->innertext); // <- explode
    $home = strip_tags($home);
    $away = strip_tags($away);
    $matchunit = $home . ' - ' . $away;
    echo "<tr><td class='rtitle'>" .
         "<td> " . $matchunit . "</td> / " . // match name
         "<td class='first-cell'>" . $risultato . "</td> " .
         "</td></tr><br/>";
} // close foreach
?>
When scraping a website's content you will always be at the mercy of future changes to its markup.
That said, I would use PHP's native libxml DOM extension.
By doing the following:
<?php
function getHTML($url, $timeout)
{
    $ch = curl_init($url); // initialize curl with given url
    curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER["HTTP_USER_AGENT"]); // set useragent
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // write the response to a variable
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects if any
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); // max. seconds to connect
    curl_setopt($ch, CURLOPT_FAILONERROR, 1); // stop when it encounters an error
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
    return @curl_exec($ch);
}

$response = getHTML("https://www.betexplorer.com/soccer/japan/j3-league/results/", 10);

// Create the new DOM document
$doc = new DOMDocument();
// Load HTML from a string, and suppress libxml parse errors
$doc->loadHTML($response, LIBXML_NOERROR);
// Create a new DOMXPath object
$xpath = new DOMXPath($doc);
// Evaluate the XPath expression: all tr's of the main table except the first row
$row = $xpath->query('//table[@class="table-main js-tablebanner-t js-tablebanner-ntb"]/tr[position()>1]');
echo '<table>';
// Parse the values in each row
foreach ($row as $tr) {
    // Only get the first 2 td's
    $col = $xpath->query('td[position()<3]', $tr);
    // Skip POSTP and Round rows
    if (!str_contains($tr->nodeValue, 'POSTP') && !str_contains($tr->nodeValue, 'Round')) {
        echo '<tr><td>' . $col->item(0)->nodeValue . '</td><td>' . $col->item(1)->nodeValue . '</td></tr>';
    }
}
echo '</table>';
You obtain:
<tr><td>Nagano - Tegevajaro Miyazaki</td><td>3:2</td></tr>
<tr><td>YSCC - Toyama</td><td>1:2</td></tr>
...
I have a very weird issue with cURL and URLs defined inside an array.
I have an array of URLs, and I want to perform an HTTP GET on each of them with cURL:
for ($i = 0, $n = count($array_station); $i < $n; $i++) {
    $station = curl_init();
    curl_setopt($station, CURLOPT_VERBOSE, true);
    curl_setopt($station, CURLOPT_URL, $array_station[$i]);
    curl_setopt($station, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($station, CURLOPT_FOLLOWLOCATION, true);
    $response = curl_exec($station);
    curl_close($station);
}
If I define my $array_station as below:
$array_station = array("http://www.example.com", "http://www.example2.com");
the cURL code above works flawlessly. But when $array_station is built as below (I scan a directory for a specific filename, then clean the path up into a URL), cURL does not work: no error is shown and nothing happens.
$di = new RecursiveDirectoryIterator(__DIR__, RecursiveDirectoryIterator::SKIP_DOTS);
$it = new RecursiveIteratorIterator($di);
$array_station = array();
$i = 0;
foreach ($it as $file) {
    if (pathinfo($file, PATHINFO_FILENAME) == "db_insert") {
        $string = str_replace('/web/htdocs/', 'http://', $file.PHP_EOL);
        $string2 = str_replace('/home', '', $string);
        $array_station[$i] = $string2;
        $i++;
    }
}
Do you have any ideas? I'm giving up :-(
I'm on mobile right now so I cannot test it, but why are you appending a newline (PHP_EOL) to the URL? Try removing the newline, or trim() the URL at the end.
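Following that suggestion, the cleanup could be pulled into a small helper that trims the stray newline before the URL ever reaches cURL; the /web/htdocs/ and /home replacements are copied from the question, and the helper name is mine:

```php
<?php
// Convert a filesystem path like /web/htdocs/example.com/home/db_insert.php
// into a URL, without the trailing PHP_EOL that broke the cURL requests.
function pathToStationUrl($path)
{
    $url = str_replace('/web/htdocs/', 'http://', $path);
    $url = str_replace('/home', '', $url);
    return trim($url); // strip any trailing newline or whitespace
}
```

In the directory loop, that becomes: $array_station[] = pathToStationUrl($file);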
Add the lines of code below.
If there is a cURL error, it will report it.
If the request is made, it will show the HTTP request and response headers. The request headers are in $info and the response headers are in $head.
for ($i = 0, $n = count($array_station); $i < $n; $i++) {
    $station = curl_init();
    curl_setopt($station, CURLOPT_VERBOSE, true);
    curl_setopt($station, CURLOPT_URL, $array_station[$i]);
    curl_setopt($station, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($station, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($station, CURLOPT_HEADER, true); // include response headers in the output
    curl_setopt($station, CURLINFO_HEADER_OUT, true); // record the outgoing request headers
    $response = curl_exec($station);
    $head = '';
    $info = '';
    if (curl_errno($station)) {
        $response .= 'Retrieve Base Page Error: ' . curl_error($station);
    }
    else {
        $skip = intval(curl_getinfo($station, CURLINFO_HEADER_SIZE));
        $head = substr($response, 0, $skip);
        $response = substr($response, $skip);
        $info = var_export(curl_getinfo($station), true);
    }
    echo $head;
    echo $info;
    curl_close($station);
}
<?php
$ca = curl_init();
curl_setopt($ca, CURLOPT_URL, "http://api.themoviedb.org/3/configuration?api_key=5094e4539de46c1abd1461920f3a3fb9");
curl_setopt($ca, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ca, CURLOPT_HEADER, FALSE);
curl_setopt($ca, CURLOPT_HTTPHEADER, array("Accept: application/json"));
$response = curl_exec($ca);
curl_close($ca);
//var_dump($response);
$config = json_decode($response, true);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://api.themoviedb.org/3/movie/106646-the-wolf-of-wall-street?api_key=5094e4539de46c1abd1461920f3a3fb9");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept: application/json"));
$response = curl_exec($ch);
curl_close($ch);
$result = json_decode($response, true);
//print_r($result);
$x = 0;
while ($x <= 0) {
    echo("<img src='" . $config['images']['base_url'] . $config['images']['poster_sizes'][1] . $result['results'][$x]['poster_path'] . "'/>");
    echo(" " . $result['results'][$x]['title'] . "<br/>");
    echo("<a target=\"_blank\" href=\"https://www.themoviedb.org/\">TMDb</a> " . $result['results'][$x]['vote_average'] . "/10 <br/>");
    echo(" " . $result['results'][$x]['release_date']);
    echo(" " . $result['results'][$x]['genre']);
    $x++;
}
?>
I am trying to show 'The Wolf of Wall Street's information. This is not working and I am not too sure how to get it working; I've had a look around and it's all just a bit too confusing. Please help.
I would also like a page where you can select a category and then the films within that category show up (limited to 15 per page).
If anyone can help me with that, thank you.
First, you don't need the title of the movie in your API call. You can do it like this:
http://api.themoviedb.org/3/movie/106646?api_key=5094e4539de46c1abd1461920f3a3fb9
This call returns information for a single movie, so you don't need a while loop to display it:
echo ("<img src='" . $config['images']['base_url'] . $config['images']['poster_sizes'][1] . $result['poster_path'] . "'/>");
echo (" ". $result['title'] . "<br/>");
echo ("<a target=\"_blank\" href=\"https://www.themoviedb.org/\">TMDb</a> ". $result['vote_average'] . "/10 <br/>");
echo (" ". $result['release_date']);
echo (" ". $result['genre']);
For your list, first fetch the genre list with their IDs:
http://api.themoviedb.org/3/genre/movie/list?api_key=5094e4539de46c1abd1461920f3a3fb9
Then list your movies:
http://api.themoviedb.org/3/genre/28/movies?api_key=5094e4539de46c1abd1461920f3a3fb9&page_size=2
Instead of a while loop, you can use a foreach like this:
foreach ($results as $movie) {
    echo '<li>' . $movie['title'] . '</li>';
}
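Assuming $results here is the 'results' array you get from json_decode($response, true), the title extraction can also be isolated into a small helper, which is easier to reuse across the category pages; a sketch (the helper name is mine):

```php
<?php
// Pull the titles out of a decoded TMDB list response. The 'results' key and
// 'title' field match the v3 API responses used in this question.
function extractTitles(array $decoded)
{
    $titles = array();
    foreach (isset($decoded['results']) ? $decoded['results'] : array() as $movie) {
        if (isset($movie['title'])) {
            $titles[] = $movie['title'];
        }
    }
    return $titles;
}

// Typical use after the cURL call:
// $decoded = json_decode($response, true);
// foreach (extractTitles($decoded) as $title) { echo '<li>' . $title . '</li>'; }
```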
You can find all the details of the TMDB API here:
http://docs.themoviedb.apiary.io/reference/genres/genreidmovies/
I have the following code, which checks whether a Minecraft user can log in to the Minecraft services. The output of the cURL request seems to be boolean false for every request when I debug, which is strange, because if you enter the URL with the proper values into your browser, the output is either "Bad login" if the credentials didn't work, or the username if they did.
Here is the code that isn't working.
Thank you for the help :)
function checkLogin($player, $mcpass, $user_sess)
{
    // Get a random proxy
    $resultx = mysql_query("SELECT * FROM proxy
        WHERE username='$user_sess' ORDER BY rand() LIMIT 1");
    $rowx = mysql_fetch_array($resultx);
    $proxy = $rowx['proxy'];
    // OK, $proxy is the random proxy
    $mcURL = 'http://login.minecraft.net/?user=' . $player . '&password=' . $mcpass . '&version=13';
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $mcURL);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($curl, CURLOPT_PROXY, $proxy);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
    $auth = curl_exec($curl);
    if (strpos(trim($auth), $player) !== false) {
        echo "$player : $mcpass : Login Valid<BR>";
        ob_flush();
        flush();
        var_dump($auth); // debugging
    }
    else {
        echo "$player : $mcpass : Login Failed<BR>";
        ob_flush();
        flush();
        var_dump($auth); // debugging
    }
}
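A boolean false from curl_exec() means the transfer itself failed (a dead proxy, a timeout), not that the login was bad, so it helps to separate those two cases before matching the body. classifyAuthResponse is a hypothetical helper name, and the sketch assumes the body contains the username on success as described above:

```php
<?php
// Distinguish "the request never completed" from "the login was rejected".
function classifyAuthResponse($body, $player)
{
    if ($body === false) {
        return 'transport-error'; // inspect curl_error() for the reason
    }
    if (strpos(trim($body), $player) !== false) {
        return 'valid';
    }
    return 'invalid';
}
```

In checkLogin(), after $auth = curl_exec($curl), a 'transport-error' result would point at the random proxy rather than the credentials; echoing curl_error($curl) there shows which proxy is failing.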
I was trying to download the results of a batch and was coding a program for this.
For an .aspx page, I was able to write the following PHP code, since the URL included the parameters:
function get_data($url) {
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

for ($i = 1; $i < 15000; $i++) {
    $url = "http://example.com/result.aspx?ClassId=342&TermId=95&StudentId=" . $i;
    $returned_content = get_data($url);
    if (stripos($returned_content, 'Roll') !== false) {
        echo "Student ID:" . $i;
        echo $returned_content;
    }
}
However, when a result is queried on an .asp page, the URL simply says 'results.asp' without any additional parameters. Is there a way to use cURL requests to run a for loop and download this data in a similar manner?
Thanks for any help!
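If the .asp page receives the student ID via a POSTed form rather than the query string, the same loop idea works with a POST body. The field names below are guesses copied from the .aspx query string, so check the real form with your browser's dev tools; a sketch:

```php
<?php
// Pure helper: build the form body for one student (field names assumed).
function buildStudentFields($classId, $termId, $studentId)
{
    return http_build_query(array(
        'ClassId'   => $classId,
        'TermId'    => $termId,
        'StudentId' => $studentId,
    ));
}

// POST the fields to the .asp page and return the response body.
function post_data($url, $fields)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, true); // send as POST instead of GET
    curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

// Same loop shape as the .aspx version, but POSTing:
// for ($i = 1; $i < 15000; $i++) {
//     $body = post_data("http://example.com/results.asp", buildStudentFields(342, 95, $i));
// }
```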