Recently, I've begun writing my own PHP OpenID consumer class in order to better understand OpenID. As a guide, I've been referencing the [LightOpenID Class][1]. For the most part, I understand the code and how OpenID works. My confusion comes when looking at the author's discover function:
function discover($url)
{
if(!$url) throw new ErrorException('No identity supplied.');
# We save the original url in case of Yadis discovery failure.
# It can happen when we'll be lead to an XRDS document
# which does not have any OpenID2 services.
$originalUrl = $url;
# A flag to disable yadis discovery in case of failure in headers.
$yadis = true;
# We'll jump a maximum of 5 times, to avoid endless redirections.
for($i = 0; $i < 5; $i ++) {
if($yadis) {
$headers = explode("\n",$this->request($url, 'HEAD'));
$next = false;
foreach($headers as $header) {
if(preg_match('#X-XRDS-Location\s*:\s*(.*)#', $header, $m)) {
$url = $this->build_url(parse_url($url), parse_url(trim($m[1])));
$next = true;
}
if(preg_match('#Content-Type\s*:\s*application/xrds\+xml#i', $header)) {
# Found an XRDS document, now let's find the server, and optionally delegate.
$content = $this->request($url, 'GET');
# OpenID 2
# We ignore it for MyOpenID, as it breaks sreg if using OpenID 2.0
$ns = preg_quote('http://specs.openid.net/auth/2.0/');
if (preg_match('#<Service.*?>(.*)<Type>\s*'.$ns.'(.*?)\s*</Type>(.*)</Service>#s', $content, $m)
&& !preg_match('/myopenid\.com/i', $this->identity)) {
$content = $m[1] . $m[3];
if($m[2] == 'server') $this->identifier_select = true;
$content = preg_match('#<URI>(.*)</URI>#', $content, $server);
$content = preg_match('#<LocalID>(.*)</LocalID>#', $content, $delegate);
if(empty($server)) {
return false;
}
# Does the server advertise support for either AX or SREG?
$this->ax = preg_match('#<Type>http://openid.net/srv/ax/1.0</Type>#', $content);
$this->sreg = preg_match('#<Type>http://openid.net/sreg/1.0</Type>#', $content);
$server = $server[1];
if(isset($delegate[1])) $this->identity = $delegate[1];
$this->version = 2;
$this->server = $server;
return $server;
}
# OpenID 1.1
$ns = preg_quote('http://openid.net/signon/1.1');
if(preg_match('#<Service.*?>(.*)<Type>\s*'.$ns.'\s*</Type>(.*)</Service>#s', $content, $m)) {
$content = $m[1] . $m[2];
$content = preg_match('#<URI>(.*)</URI>#', $content, $server);
$content = preg_match('#<.*?Delegate>(.*)</.*?Delegate>#', $content, $delegate);
if(empty($server)) {
return false;
}
# AX can be used only with OpenID 2.0, so checking only SREG
$this->sreg = preg_match('#<Type>http://openid.net/sreg/1.0</Type>#', $content);
$server = $server[1];
if(isset($delegate[1])) $this->identity = $delegate[1];
$this->version = 1;
$this->server = $server;
return $server;
}
$next = true;
$yadis = false;
$url = $originalUrl;
$content = null;
break;
}
}
if($next) continue;
# There are no relevant information in headers, so we search the body.
$content = $this->request($url, 'GET');
if($location = $this->htmlTag($content, 'meta', 'http-equiv', 'X-XRDS-Location', 'value')) {
$url = $this->build_url(parse_url($url), parse_url($location));
continue;
}
}
if(!$content) $content = $this->request($url, 'GET');
# At this point, the YADIS Discovery has failed, so we'll switch
# to openid2 HTML discovery, then fallback to openid 1.1 discovery.
$server = $this->htmlTag($content, 'link', 'rel', 'openid2.provider', 'href');
$delegate = $this->htmlTag($content, 'link', 'rel', 'openid2.local_id', 'href');
$this->version = 2;
# Another hack for myopenid.com...
if(preg_match('/myopenid\.com/i', $server)) {
$server = null;
}
if(!$server) {
# The same with openid 1.1
$server = $this->htmlTag($content, 'link', 'rel', 'openid.server', 'href');
$delegate = $this->htmlTag($content, 'link', 'rel', 'openid.delegate', 'href');
$this->version = 1;
}
if($server) {
# We found an OpenID2 OP Endpoint
if($delegate) {
# We have also found an OP-Local ID.
$this->identity = $delegate;
}
$this->server = $server;
return $server;
}
throw new ErrorException('No servers found!');
}
throw new ErrorException('Endless redirection!');
}
[1]: http://gitorious.org/lightopenid
Okay, here's the logic as I understand it (basically):
Check to see if the $url sends you a valid XRDS file, which you then parse to figure out the OpenID provider's endpoint.
From my understanding, this is called the Yadis discovery method.
If no XRDS file is found, check the body of the response for an HTML <link> tag that contains the URL of the endpoint.
What. The. Heck.
I mean seriously? Essentially screen scrape the response and hope you find a link with the appropriate attribute value?
Now, don't get me wrong, this class works like a charm and it's awesome. I'm just failing to grok the two separate methods used to discover the endpoint: XRDS (yadis) and HTML.
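For reference, a stripped-down XRDS document of the kind the Yadis step parses looks roughly like this (the endpoint URL is just a placeholder):
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/server</Type>
      <URI>https://openid.example.com/server</URI>
    </Service>
  </XRD>
</xrds:XRDS>
The regexes in discover() pull the namespace out of the <Type> element and, when the type ends in "server", set $identifier_select.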
My Questions
Are those the only two methods used in the discovery process?
Is one only used in version 1.1 of OpenID and the other in version 2?
Is it critical to support both methods?
The site I've encountered the HTML method on is Yahoo. Are they nuts?
Thanks again for your time, folks. I apologize if I sound a little flabbergasted, but I was genuinely stunned at the methodology once I began to understand what measures were being taken to find the endpoint.
The specification is your friend.
But to answer your questions:
Yes. Those are the only two methods defined by the OpenID specifications (at least, for URLs -- there is a third method for XRIs).
No, both can be used with both versions of the protocol. Read the function carefully, and you'll see that it supports both methods for both versions.
If you want your library to work with every provider and user, you'd better support both. Some users paste the HTML tags into their sites so that their site's URL can be used as an OpenID.
Some providers even use both methods at once, to maintain compatibility with consumers that don't implement Yadis discovery (which isn't part of OpenID 1.1, but can be used with it). So that does make sense.
And yes, HTML discovery is about searching for a <link> in the response body. That's why it's called HTML discovery.
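For illustration, the tags HTML discovery looks for are just ordinary <link> elements in the page's <head> (the URLs below are placeholders):
<!-- OpenID 2.0 HTML discovery -->
<link rel="openid2.provider" href="https://openid.example.com/server">
<link rel="openid2.local_id" href="https://example.com/users/alice">
<!-- OpenID 1.1 HTML discovery -->
<link rel="openid.server" href="https://openid.example.com/server">
<link rel="openid.delegate" href="https://example.com/users/alice">
These are exactly the rel values the fallback branch of discover() searches for with htmlTag(): openid2.provider / openid2.local_id for version 2, and openid.server / openid.delegate for version 1.1.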
Related
My PHP project is using the reddit JSON API to grab the title of the current page's submission.
Right now I am running some code every time the page is loaded, and I'm running into some problems, even though there is no real API limit.
I would like to store the title of the submission locally somehow. Can you recommend the best way to do this? The site is running on appfog. What would you recommend?
This is my current code:
<?php
/* settings */
$url="http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
$reddit_url = 'http://www.reddit.com/api/info.{format}?url='.$url;
$format = 'json'; //use XML if you'd like...JSON FTW!
$title = '';
/* action */
$content = get_url(str_replace('{format}',$format,$reddit_url)); //again, can be xml or json
if($content) {
if($format == 'json') {
$json = json_decode($content,true);
foreach($json['data']['children'] as $child) { // we want all children for this example
$title= $child['data']['title'];
}
}
}
/* output */
/* utility function: go get it! */
function get_url($url) {
$ch = curl_init();
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,1);
$content = curl_exec($ch);
curl_close($ch);
return $content;
}
?>
Thanks!
Introduction
Here is a modified version of your code
$url = "http://stackoverflow.com/";
$loader = new Loader();
$loader->parse($url);
printf("<h4>New List : %d</h4>", count($loader));
printf("<ul>");
foreach ( $loader as $content ) {
printf("<li>%s</li>", $content['title']);
}
printf("</ul>");
Output
New List : 7
New podcast from Joel Spolsky and Jeff Atwood.
Good site for example code/ Pyhton
stackoverflow.com has clearly the best Web code ever conceived in the history of the Internet and reddit should better start copying it.
A reddit-like, OpenID using website for programmers
Great developer site. Get your questions answered and by someone who knows.
Stack Overflow launched into public
Stack Overflow, a programming Q & A site. & Reddit could learn a lot from their interface!
Simple Demo
The Problem
I see a couple of things you want to achieve here, namely:
I would like to store the title of the submission locally somehow
Right now I am running some code every time the page is loaded
From what I understand, what you need is a simple cached copy of your data so that you don't have to load the URL every time.
Simple Solution
A simple cache system you can use is Memcache.
Example A
$url = "http://stackoverflow.com/";
// Start cache
$m = new Memcache();
$m->addserver("localhost");
$cache = $m->get(sha1($url));
if ($cache) {
// Use cache copy
$loader = $cache;
printf("<h2>Cache List: %d</h2>", count($loader));
} else {
// Start a new Loader
$loader = new Loader();
$loader->parse($url);
printf("<h2>New List : %d</h2>", count($loader));
$m->set(sha1($url), $loader);
}
// Output all listings
printf("<ul>");
foreach ( $loader as $content ) {
printf("<li>%s</li>", $content['title']);
}
printf("</ul>");
Example B
You can use the last modification date as the cache key, so that you only save a new copy if the document has been modified:
$headers = get_headers(sprintf("http://www.reddit.com/api/info.json?url=%s",$url), true);
$time = strtotime($headers['Last-Modified']); // last modification date reported by the server
$cache = $m->get($time);
if ($cache) {
$loader = $cache;
}
Since the class implements JsonSerializable, you can JSON-encode your result and also store it in a database like MongoDB or MySQL:
$data = json_encode($loader);
// Save to DB
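A minimal sketch of the MySQL route, assuming a simple cache table with url_hash and payload columns (the table and column names are made up for this example):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
// Overwrite any previous copy for this URL
$stmt = $pdo->prepare('REPLACE INTO cache (url_hash, payload) VALUES (?, ?)');
$stmt->execute(array(sha1($url), json_encode($loader)));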
Class Used
class Loader implements IteratorAggregate, Countable, JsonSerializable {
private $request = "http://www.reddit.com/api/info.json?url=%s";
private $data = array();
private $total;
function parse($url) {
$content = json_decode($this->getContent(sprintf($this->request, $url)), true);
$this->data = array_map(function ($v) {
return $v['data'];
}, $content['data']['children']);
$this->total = count($this->data);
}
public function getIterator() {
return new ArrayIterator($this->data);
}
public function count() {
return $this->total;
}
public function getType() {
return $this->type;
}
public function jsonSerialize() {
return $this->data;
}
function getContent($url) {
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
$content = curl_exec($ch);
curl_close($ch);
return $content;
}
}
I'm not sure what your question is exactly, but the first thing that pops out is the following:
foreach($json['data']['children'] as $child) { // we want all children for this example
$title= $child['data']['title'];
}
Are you sure you want to overwrite $title? In effect, that will only hold the last $child title.
Now, to your question. I assume you're looking for some kind of mechanism to cache the contents of the requested URL so you don't have to re-issue the request every time, am I right? I don't have any experience with appFog, only with orchestra.io but I believe they have the same restrictions regarding writing to files, as in you can only write to temporary files.
My suggestion would be to cache the (processed) response in either:
APC shared memory with a short TTL
temporary files
database
You could use a hash of the URL + arguments as the lookup key; doing this check inside get_url() would mean you wouldn't need to change any other part of your code, and it would only take ~3 LOC.
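As a sketch of the APC variant (assuming the APC extension is available; the key prefix and TTL are arbitrary), the change inside get_url() could look like this:
function get_url($url) {
    $key = 'reddit_' . md5($url); // hash of the URL as the lookup key
    $cached = apc_fetch($key, $hit);
    if ($hit) {
        return $cached; // serve the cached response, skip the HTTP request
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
    $content = curl_exec($ch);
    curl_close($ch);
    apc_store($key, $content, 300); // short TTL, e.g. 5 minutes
    return $content;
}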
After this:
if($format == 'json') {
$json = json_decode($content,true);
foreach($json['data']['children'] as $child) { // we want all children for this example
$title = $child['data']['title'];
}
}
}
Then store the title in a JSON file and dump it into a local folder under your website path:
$storeTitle = array('title' => $title);
$fp = fopen('../pathToJsonFile/title.json', 'w');
fwrite($fp, json_encode($storeTitle));
fclose($fp);
Then next time you can always read the JSON file back, decode it, and extract the title into a variable for use.
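Reading it back later would then be something along these lines (same hypothetical path as above):
$stored = json_decode(file_get_contents('../pathToJsonFile/title.json'), true);
$title = $stored['title'];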
I usually just store the data as-is in a flat file, like so:
<?php
define('TEMP_DIR', 'temp/');
define('TEMP_AGE', 3600);
function getinfo($url) {
$temp = TEMP_DIR . urlencode($url) . '.json';
if(!file_exists($temp) OR time() - filemtime($temp) > TEMP_AGE) {
$info = "http://www.reddit.com/api/info.json?url=$url";
$json = file_get_contents($info);
file_put_contents($temp, $json);
}
else {
$json = file_get_contents($temp);
}
$json = json_decode($json, true);
$titles = array();
foreach($json['data']['children'] as $child) {
$titles[] = $child['data']['title'];
}
return $titles;
}
$test = getinfo('http://imgur.com/');
print_r($test);
PS.
I use file_get_contents to get the JSON data; you might have your own reasons to use cURL.
Also, I don't check the format, since you clearly prefer JSON.
I have a series of Rackspace Cloud Files CDN URLs stored which reference an HTTP address and I would like to convert them to the HTTPS equivalent.
Rackspace Cloud Files CDN URLs are in the following format:
http://c186397.r97.cf1.rackcdn.com/CloudFiles Akamai.pdf
And the SSL equivalent for this URL would be:
https://c186397.ssl.cf1.rackcdn.com/CloudFiles Akamai.pdf
The changes to the URL are (source):
HTTP becomes HTTPS
The second URI segment ('r97' in this example) becomes 'ssl'
The 'r00' part seems to vary in length (some are 'r6', etc.), so I'm having trouble converting these URLs to HTTPS. Here's the code I have so far:
function rackspace_cloud_http_to_https($url)
{
//Replace the HTTP part with HTTPS
$url = str_replace("http", "https", $url, $count = 1);
//Get the position of the .r00 segment
$pos = strpos($url, '.r');
if ($pos === FALSE)
{
//Not present in the URL
return FALSE;
}
//Get the .r00 part to replace
$replace = substr($url, $pos, 4);
//Replace it with .ssl
$url = str_replace($replace, ".ssl", $url, $count = 1);
return $url;
}
This however does not work for URLs where the second segment is of a different length.
Any thoughts appreciated.
I know this is old, but if you are using this library: https://github.com/rackspace/php-opencloud you can use the getPublicUrl() method on the object; you just need to use the following namespace:
use OpenCloud\ObjectStore\Constants as Constant;
// Code logic to upload file
$https_file_url = $response->getPublicUrl(Constant\UrlType::SSL);
Try this:
function rackspace_cloud_http_to_https($url)
{
$urlparts = explode('.', $url);
// check for presence of 'r' segment
if (preg_match('/r\d+/', $urlparts[1]))
{
// replace appropriate segments of url
$urlparts[0] = str_replace("http", "https", $urlparts[0]);
$urlparts[1] = 'ssl';
// put url back together
$url = implode('.', $urlparts);
return $url;
}
else
{
return false;
}
}
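For what it's worth, the transformation described in the question (swap the scheme and replace the rNN segment with ssl) can also be expressed as a single regular expression; this is just a sketch, not a drop-in replacement:
function rackspace_cloud_http_to_https_regex($url)
{
    // Turn "http://<container>.rNN." into "https://<container>.ssl."
    $https = preg_replace('#^http://([^.]+)\.r\d+\.#', 'https://$1.ssl.', $url, 1, $count);
    return $count === 1 ? $https : false;
}
// "http://c186397.r97.cf1.rackcdn.com/CloudFiles Akamai.pdf"
// becomes "https://c186397.ssl.cf1.rackcdn.com/CloudFiles Akamai.pdf"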
I'm trying to use PHP and SoapClient to utilize the UPS Ratings web service. I found a nice tool called WSDLInterpreter to create a starting-point library for creating the service requests, but regardless of what I try I keep getting the same (non-descriptive) error:
EXCEPTION=SoapFault::__set_state(array(
'message' => 'An exception has been raised as a result of client data.',
'string' => '',
'code' => 0,
Um ok, what the hell does this mean?
Unlike some of the other web services I have implemented, the UPS SOAP service wants a security block put into the header. I tried passing raw associative-array data, but I wasn't 100% sure I was injecting the header part correctly.
Using the WSDLInterpreter output, it pulls out a RateService class with a ProcessRate method that accepts its own (instance) data structures for the RateRequest and UPSSecurity portions, but all of the above generates that same error.
Here's a sample of the code that I'm using that calls classes defined by the interpreter:
require_once "Shipping/UPS/RateService.php"; // WSDLInterpreter class library
class Query
{
// constants removed for stackoverflow that define (proprietary) security items
private $_upss;
private $_shpr;
// use Interpreter code's RateRequest class to send with ProcessRate
public function soapRateRequest(RateRequest $req, UPSSecurity $upss = null)
{
$res = false;
if (!isset($upss)) {
$upss = $this->__getUPSS();
}
echo "SECURITY:\n" . var_export($upss, 1) . "\n";
$upsrs = new RateService(self::UPS_WSDL);
try {
$res = $upsrs->ProcessRate($req, $upss);
} catch (SoapFault $exception) {
echo 'EXCEPTION=' . var_export($exception, 1) . "\n";
}
return $res;
}
// create a soap data structure to send to web service from shipment
public function getRequestSoap(Shipment $shpmnt)
{
$qs = new RateRequest();
// pickup information
$qs->PickupType->Code = '01';
$qs->Shipment->Shipper = $this->__getAcctInfo();
// Ship To address
$qs->Shipment->ShipTo->Address->AddressLine = $this->__getAddressArray($shpmnt->destAddress->address1, $shpmnt->destAddress->address2);
$qs->Shipment->ShipTo->Address->City = $shpmnt->destAddress->city;
$qs->Shipment->ShipTo->Address->StateProvinceCode = $shpmnt->destAddress->state;
$qs->Shipment->ShipTo->Address->PostalCode = $shpmnt->destAddress->zip;
$qs->Shipment->ShipTo->Address->CountryCode = $shpmnt->destAddress->country;
$qs->Shipment->ShipTo->Name = $shpmnt->destAddress->name;
// Ship From address
$qs->Shipment->ShipFrom->Address->AddressLine = $this->__getAddressArray($shpmnt->origAddress->address1, $shpmnt->origAddress->address2);
$qs->Shipment->ShipFrom->Address->City = $shpmnt->origAddress->city;
$qs->Shipment->ShipFrom->Address->StateProvinceCode = $shpmnt->origAddress->state;
$qs->Shipment->ShipFrom->Address->PostalCode = $shpmnt->origAddress->zip;
$qs->Shipment->ShipFrom->Address->CountryCode = $shpmnt->origAddress->country;
$qs->Shipment->ShipFrom->Name = $shpmnt->origAddress->name;
// Service type
// TODO cycle through available services
$qs->Shipment->Service->Code = "03";
$qs->Shipment->Service->Description = "UPS Ground";
// Package information
$pkg = new PackageType();
$pkg->PackagingType->Code = "02";
$pkg->PackagingType->Description = "Package/customer supplied";
// dimensions
$pkg->Dimensions->UnitOfMeasurement->Code = $shpmnt->dimensions->dimensionsUnit;
$pkg->Dimensions->Length = $shpmnt->dimensions->length;
$pkg->Dimensions->Width = $shpmnt->dimensions->width;
$pkg->Dimensions->Height = $shpmnt->dimensions->height;
$pkg->PackageServiceOptions->DeclaredValue->CurrencyCode = "USD";
$pkg->PackageServiceOptions->DeclaredValue->MonetaryValue = $shpmnt->dimensions->value;
$pkg->PackageServiceOptions->DeclaredValue->CurrencyCode = "USD";
$pkg->PackageWeight->UnitOfMeasurement = $this->__getWeightUnit($shpmnt->dimensions->weightUnit);
$pkg->PackageWeight->Weight = $shpmnt->dimensions->weight;
$qs->Shipment->Package = $pkg;
$qs->CustomerClassification->Code = 123456;
$qs->CustomerClassification->Description = "test_rate_request";
return $qs;
}
// fill out and return a UPSSecurity data structure
private function __getUPSS()
{
if (!isset($this->_upss)) {
$unmt = new UsernameToken();
$unmt->Username = self::UPSS_USERNAME;
$unmt->Password = self::UPSS_PASSWORD;
$sat = new ServiceAccessToken();
$sat->AccessLicenseNumber = self::UPSS_ACCESS_LICENSE_NUMBER;
$upss = new UPSSecurity();
$upss->UsernameToken = $unmt;
$upss->ServiceAccessToken = $sat;
$this->_upss = $upss;
}
return $this->_upss;
}
// get our shipper/account info (some items blanked for stackoverflow)
private function __getAcctInfo()
{
if (!isset($this->_shpr)) {
$shpr = new ShipperType();
$shpr->Address->AddressLine = array(
"CONTACT",
"STREET ADDRESS"
);
$shpr->Address->City = "CITY";
$shpr->Address->StateProvinceCode = "MI";
$shpr->Address->PostalCode = "ZIPCODE";
$shpr->Address->CountryCode = "US";
$shpr->Name = "COMPANY NAME";
$shpr->ShipperNumber = self::UPS_ACCOUNT_NUMBER;
$this->_shpr = $shpr;
}
return $this->_shpr;
}
private function __getAddressArray($adr1, $adr2 = null)
{
if (isset($adr2) && $adr2 !== '') {
return array($adr1, $adr2);
} else {
return array($adr1);
}
}
}
It doesn't even seem to be getting to the point of sending anything over SOAP, so I am assuming it is dying as a result of something not matching the WSDL info. (Keep in mind, I've tried sending just a properly seeded array of key/value details to a manually created SoapClient using the same WSDL file, with the same error resulting.)
It would just be nice to get an error that lets me know what about the 'client data' is a problem. This PHP SOAP implementation isn't impressing me!
I know this answer is probably way too late, but I'll provide some feedback anyway. In order to make a custom SOAP header you'll have to extend the SoapHeader class.
/*
* Auth Class to extend SOAP Header for WSSE Security
* Usage:
* $header = new upsAuthHeader($user, $password);
* $client = new SoapClient('{...}', array("trace" => 1, "exceptions" => 0));
* $client->__setSoapHeaders(array($header));
*/
class upsAuthHeader extends SoapHeader
{
...
function __construct($user, $password)
{
// Using SoapVar to set the attributes has proven nearly impossible; no WSDL.
// It might be accomplished with a combined SoapVar and stdClass() approach.
// Security Header - as a combined XSD_ANYXML SoapVar
// This method is much easier to define all of the custom SoapVars with attributes.
$security = '<ns2:Security xmlns:ns2="'.$this->wsse.'">'. // soapenv:mustUnderstand="1"
'<ns2:UsernameToken ns3:Id="UsernameToken-49" xmlns:ns3="'.$this->wsu.'">'.
'<ns2:Username>'.$user.'</ns2:Username>'.
'<ns2:Password Type="'.$this->password_type.'">'.htmlspecialchars($password).'</ns2:Password>'.
'</ns2:UsernameToken>'.
'</ns2:Security>';
$security_sv = new SoapVar($security, XSD_ANYXML);
parent::__construct($this->wsse, 'Security', $security_sv, false);
}
}
Then construct the upsAuthHeader() and set it on the client before the SOAP call:
$client = new SoapClient($this->your_ups_wsdl,
array('trace' => true,
'exceptions' => true,
'soap_version' => SOAP_1_1
)
);
// Auth Header - Security Header
$header = new upsAuthHeader($user, $password);
// Set Header
$client->__setSoapHeaders(array($header));
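With the header in place, the call and a bit of debugging output might look something like this ($rateRequest stands for whatever request structure you build to match the WSDL; it's not defined here):
try {
    $response = $client->ProcessRate($rateRequest);
    var_dump($response);
} catch (SoapFault $fault) {
    // Because trace is enabled above, you can inspect the XML that was actually sent,
    // which usually makes "client data" faults much easier to track down.
    echo $client->__getLastRequest();
    echo $fault->getMessage();
}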
I was wondering if there is a way, after all the changes to YouTube in the last few months, to write a script that enables downloading videos?
I'm aware of the fact that the methods that worked a year ago are no longer relevant.
Yes, there is still a way. I'm not going to write the whole script for you, but I'll at least write you the function that provides the download link from YouTube.
Alright, you're going to need cURL installed to do this.
/* Developed by User - WebEntrepreneur # StackOverflow.com */
function ___get_youtube_video($youtube_url)
{
if(preg_match('/youtube\.com/i', $youtube_url)) // eregi() is deprecated; preg_match does the same case-insensitive check
{
preg_match('/http:\/\/(.+)youtube.com\/watch(.+)?v=(.+)/', $youtube_url, $youtube_id_regex);
$youtube_id = ($youtube_id_regex[3]) ? $youtube_id_regex[3] : '';
if(!$youtube_id)
{
return INVALID_YOUTUBE_ID;
}
if(strpos($youtube_id, '&') !== false)
{
$youtube_id_m = explode('&', $youtube_id);
foreach($youtube_id_m as $slices)
{
$youtube_id = $slices;
break;
}
}
} else {
$youtube_id = ($youtube_url);
}
$ping = ___get_curl("http://www.youtube.com/watch?v={$youtube_id}&feature=youtu.be");
if(!$ping)
{
return YOUTUBE_UNAVAILABLE;
}
$ping_scan = nl2br($ping);
$ping_scan = explode('<br />', $ping_scan);
if(preg_match('/= null/i', $ping_scan[36]) or !$ping_scan[36])
{
return YOUTUBE_TOO_MANY_REQUESTS;
}
$ping_scan = str_replace("\n", "", $ping_scan[36]);
$ping_scan = str_replace(" img.src = '", "", $ping_scan);
$out[1] = str_replace("';", "", $ping_scan);
$sub_ping = ___get_curl("http://gdata.youtube.com/feeds/api/videos/{$youtube_id}");
preg_match('/<title type=\'text\'>(.+)<\/title>/', $sub_ping, $inout);
$inout = $inout[1];
if(!$out[1])
{
return VERSION_EXPIRED;
}
$out[1] = str_replace("generate_204", "videoplayback", $out[1]);
$out[1] = str_replace("\\/", "/", $out[1]);
$out[1] = rawurldecode($out[1]);
if($inout)
{
$out[1] .= "&file=".urlencode($inout).".mp4";
$filename = urlencode($inout).".mp4";
}
header("Content-Disposition: attachment; filename=\"".$filename."\"");
flush();
return ___get_curl($out[1]);
}
function ___get_curl($url)
{
if(!function_exists("curl_setopt"))
{
return CURL_NOT_INSTALLED;
}
$ch=curl_init();
curl_setopt($ch,CURLOPT_USERAGENT, "YouTube Video Downloader");
curl_setopt($ch,CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch,CURLOPT_SSL_VERIFYPEER,0);
curl_setopt($ch,CURLOPT_HEADER,0);
curl_setopt($ch,CURLOPT_FOLLOWLOCATION,1);
curl_setopt($ch,CURLOPT_TIMEOUT,100000);
$curl_output=curl_exec($ch);
$curlstatus=curl_getinfo($ch);
curl_close($ch);
return $curl_output;
}
Now, let me talk about the code. After around 30 minutes of research, I managed to work around their algorithm; however, it does have very strict boundaries which they have now put in place.
As each request made for a video download is tied to the requesting IP, you need to stream it out through your own bandwidth, which is what the script above does. However, sites like keepvid.com use Java to grab the download URL and stream it to the user. I've also put in my own YouTube ID grabber, which is very handy for these kinds of tools.
Please be aware that, as stated, YouTube changes their algorithms all the time, and usage of this tool is at your own risk. I am not liable for any damages caused to YouTube.
Hope it gives you a good starting point; I spent a while making it.
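If it helps, a hypothetical way to wire it up (the watch URL and VIDEO_ID below are just placeholders, not part of the functions above) would be:
// Stream the video through your own server.
// ___get_youtube_video() already sends the Content-Disposition header itself.
echo ___get_youtube_video("http://www.youtube.com/watch?v=VIDEO_ID");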
I want to authenticate to another site using HTTP Digest authentication from a PHP script.
My function takes just the content of the WWW-Authenticate header as a parameter, and I want to generate the correct response (the Authorization header). I have found many examples that explain how to implement this the other way around (a browser authenticating to my script), but not this way. I am missing a function that is able to parse the WWW-Authenticate header content and generate the response. Is there some standard function or common library that implements this?
OK, no answer yet, so I investigated a Python implementation that was lying around here and rewrote it in PHP. It is the simplest possible piece of code. It supports only MD5 hashing, but it works for me:
function H($param) {
return md5($param);
}
function KD($a,$b) {
return H("$a:$b");
}
function parseHttpDigest($digest) {
$data = array();
$parts = explode(", ", $digest);
foreach ($parts as $element) {
$bits = explode("=", $element);
$data[$bits[0]] = str_replace('"','', $bits[1]);
}
return $data;
}
function response($wwwauth, $user, $pass, $httpmethod, $uri) {
list($dummy_digest, $value) = explode(' ', $wwwauth, 2); // split() is deprecated; explode() behaves the same here
$x = parseHttpDigest($value);
$realm = $x['realm'];
$A1 = $user.":".$realm.":".$pass;
$A2 = $httpmethod.":".$uri;
if ($x['qop'] == 'auth') {
$cnonce = time();
$ncvalue = '00000001'; // RFC 2617 expects nc as 8 hex digits
$noncebit = $x['nonce'].":".$ncvalue.":".$cnonce.":auth:".H($A2);
$respdig = KD(H($A1), $noncebit);
}else {
# FIX: handle error here
}
$base = 'Digest username="'.$user.'", realm="';
$base .= $x['realm'].'", nonce="'.$x['nonce'].'",';
$base .= ' uri="'.$uri.'", cnonce="'.$cnonce;
$base .= '", nc="'.$ncvalue.'", response="'.$respdig.'", qop="auth"';
return $base;
}
Usage:
# TEST
$www_header = 'Digest realm="TEST", nonce="356f2dbb8ce08174009d53c6f02c401f", algorithm="MD5", qop="auth"';
print response($www_header, "user", "password", "POST", "/my_url_query");
Don't know of a ready-made client-side implementation in PHP; you have to implement the RFC as if your script were the browser, authenticating to a remote server. Wikipedia's page on HTTP Digest has a nice example.
(It's not that hard: a couple of MD5 hashes. Some gotchas I encountered when building the server side: the string delimiter is ":" (colon), and the request method is also part of the hash.)
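For reference, for qop="auth" with MD5, those couple of hashes work out to the following (variable names here are illustrative):
$ha1      = md5("$user:$realm:$pass");                // HA1 = MD5(username:realm:password)
$ha2      = md5("$method:$uri");                      // HA2 = MD5(request-method:digest-uri)
$response = md5("$ha1:$nonce:$nc:$cnonce:auth:$ha2"); // value sent in the Authorization header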