Check if the jQuery CDN protocol-relative URL exists via PHP

I am trying to see whether the jQuery CDN exists via PHP, essentially what Paul Irish did here but with PHP instead:
http://paulirish.com/2010/the-protocol-relative-url/
I am trying the following, but it's not working. Is this possible without specifying http/https?
This is based on How can I check if a URL exists via PHP?
$jquery_cur   = '1.9.1'; // jQuery version
$jquery_cdn   = '//code.jquery.com/jquery-' . $jquery_cur . '.min.js';
$jquery_local = '/assets/js/libs/jquery-' . $jquery_cur . '.min.js';
$jquery_ver   = $jquery_cdn; // Load the jQuery CDN version by default

$cdn_headers = @get_headers($jquery_ver);
if (strpos($cdn_headers[0], '404 Not Found')) {
    $jquery_ver = $jquery_cdn;
} else {
    $jquery_ver = $jquery_local;
}

Hi, check this solution. You were checking the headers without any protocol; you need to add http or https to check the file. Test it with and without your internet connection.
$jquery_cur   = '1.9.1'; // jQuery version
$jquery_cdn   = '//code.jquery.com/jquery-' . $jquery_cur . '.min.js';
$jquery_local = '/assets/js/libs/jquery-' . $jquery_cur . '.min.js';
$jquery_ver   = $jquery_cdn; // Load the jQuery CDN version by default

// Prepend the protocol the current page uses, since fopen()/get_headers() need a full URL.
$jquery_url = ($_SERVER['SERVER_PORT'] == 443 ? 'https:' : 'http:') . $jquery_cdn;

$test_url = @fopen($jquery_url, 'r');
if ($test_url === false) {
    $jquery_ver = $jquery_local;
}
echo $jquery_ver;
Or with get_headers():
$headers  = @implode('', @get_headers($jquery_url));
$test_url = (bool) preg_match('#^HTTP/\S+\s+(200|301|302)\s#i', $headers);
if ($test_url === false) {
    $jquery_ver = $jquery_local;
}
echo $jquery_ver;
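If allow_url_fopen is disabled on the server, the same check can be done with cURL instead. A minimal sketch, assuming the cURL extension is available (the 2-second timeouts are arbitrary):
<?php
$jquery_cur   = '1.9.1';
$jquery_cdn   = '//code.jquery.com/jquery-' . $jquery_cur . '.min.js';
$jquery_local = '/assets/js/libs/jquery-' . $jquery_cur . '.min.js';

// cURL needs a full URL, so prepend the protocol the current page is using.
$scheme     = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https:' : 'http:';
$jquery_url = $scheme . $jquery_cdn;

// Issue a HEAD request and read the HTTP status code.
$ch = curl_init($jquery_url);
curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request, no body download
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);    // fail fast if the CDN is unreachable
curl_setopt($ch, CURLOPT_TIMEOUT, 2);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

// Fall back to the local copy unless the CDN answered 200 OK.
$jquery_ver = ($status === 200) ? $jquery_cdn : $jquery_local;
echo $jquery_ver;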

Related

Getting PHP code in browser while installing Bitexchange script

I am trying to install the Bitexchange script on localhost. After making all the changes successfully, I am getting PHP code in my browser on the Bitexchange support page.
<?
include 'lib/common.php';
ini_set("memory_limit","200M");
$CFG->print = $_REQUEST['print'];
$CFG->url = ($_REQUEST['current_url'] != 'index.php') ? ereg_replace("[^a-zA-Z_\-]", "",$_REQUEST['current_url']) : '';
$CFG->action = ereg_replace("[^a-zA-Z_\-]", "",$_REQUEST['action']);
$CFG->bypass = ($_REQUEST['bypass'] || $CFG->print);
$CFG->is_tab = (!$CFG->url) ? 1 : $_REQUEST['is_tab'];
$CFG->id = ereg_replace("[^0-9]", "",$_REQUEST['id']);
$CFG->target_elem = ereg_replace("[^a-zA-Z_\-]", "",$_REQUEST['target_elem']);
$CFG->in_popup = ($CFG->target_elem == 'edit_box' || $CFG->target_elem == 'message_box' || $CFG->target_elem == 'attributes box');
$CFG->inset_id = false;
$_SESSION['last_query'] = $_SESSION['this_query'];
$_SESSION['this_query'] = 'index.php?'.http_build_query((is_array($_POST)) ? $_POST : $_GET);
date_default_timezone_set($CFG->default_timezone);
String::magicQuotesOff();
Your PHP is probably not configured to accept short open tags (<?).
You should either enable them in your php.ini file (see How to enable PHP short tags?) or use the long PHP open tag: <?php
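The relevant php.ini directive is short_open_tag = On; restart the web server after changing it. A quick sketch to check whether it is currently enabled on your install:
<?php
// Prints bool(true) if short open tags are enabled, bool(false) otherwise.
var_dump((bool) ini_get('short_open_tag'));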

Writing a PHP script that opens a website's pages and stores each page's content in a variable

I have been building a search engine, but now I need a web crawler in PHP that can crawl my website for its content.
I don't know if web crawler / spider is the right word, but I was hoping someone could help me write a simple PHP script that opens all pages in a domain ending in .php or .html and stores the content of each page as raw text in a variable, one variable per page.
If anyone knows of a good open-source script that does this, or can help me write one, please share or do so; I would greatly appreciate any help.
Check out http://sourceforge.net/projects/php-crawler/
Or try this simple code that searches for the presence of the Google Analytics tracking code:
// Disable the time limit to keep the script running
set_time_limit(0);
// Domain to start crawling
$domain = "http://webdevwonders.com";
// Content to search for
$content = "google-analytics.com/ga.js";
// Tag in which to look for the content
$content_tag = "script";
// Name of the output file
$output_file = "analytics_domains.txt";
// Maximum number of urls to check
$max_urls_to_check = 100;
$rounds = 0;
// Stack of domains still to check
$domain_stack = array();
// Maximum size of the domain stack
$max_size_domain_stack = 1000;
// Hash of all domains already checked
$checked_domains = array();

// Loop through the domains as long as domains are available in the stack
// and the maximum number of urls to check has not been reached
while ($domain != "" && $rounds < $max_urls_to_check) {
    $doc = new DOMDocument();
    // Get the source code of the domain
    @$doc->loadHTMLFile($domain);
    $found = false;
    // Loop through each found tag of the specified type in the DOM
    // and search for the specified content
    foreach ($doc->getElementsByTagName($content_tag) as $tag) {
        if (strpos($tag->nodeValue, $content)) {
            $found = true;
            break;
        }
    }
    // Add the domain to the checked domains hash
    $checked_domains[$domain] = $found;
    // Loop through each "a" tag in the DOM
    // and push its href domain onto the stack if it is not an internal link
    foreach ($doc->getElementsByTagName('a') as $link) {
        $href = $link->getAttribute('href');
        if (strpos($href, 'http://') !== false && strpos($href, $domain) === false) {
            $href_array = explode("/", $href);
            // Keep the domain stack within the predefined maximum
            // and only push domains that have not been checked yet
            if (count($domain_stack) < $max_size_domain_stack &&
                !isset($checked_domains["http://" . $href_array[2]])) {
                array_push($domain_stack, "http://" . $href_array[2]);
            }
        }
    }
    // Remove all duplicate urls from the stack
    $domain_stack = array_unique($domain_stack);
    $domain = $domain_stack[0];
    // Remove the assigned domain from the stack
    unset($domain_stack[0]);
    // Reindex the domain stack
    $domain_stack = array_values($domain_stack);
    $rounds++;
}

$found_domains = "";
// Add all domains where the search string was found to the output string
foreach ($checked_domains as $key => $value) {
    if ($value) {
        $found_domains .= $key . "\n";
    }
}
// Write the found domains to the specified output file
file_put_contents($output_file, $found_domains);
I found it here.
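If you only need each page's raw content stored in a variable rather than a full crawl, a much smaller sketch will do; this assumes you already know the list of .php/.html URLs to fetch (the example URLs are placeholders):
<?php
// Hypothetical list of pages to fetch; replace with your own .php/.html URLs.
$pages = array(
    'http://example.com/index.php',
    'http://example.com/about.html',
);

$contents = array();
foreach ($pages as $url) {
    // file_get_contents() returns the raw response body, or false on failure.
    $html = @file_get_contents($url);
    if ($html !== false) {
        // One entry per page, keyed by URL.
        $contents[$url] = $html;
    }
}

// Example: show how much content was stored for each page.
foreach ($contents as $url => $html) {
    echo $url . ': ' . strlen($html) . " bytes\n";
}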

Single Sign-On: stuck on a progress image in jsConnect

What I want to achieve is: a user logs in to my WordPress website and is also logged in to the Vanilla forum. I have installed the jsConnect plugin in the Vanilla forum, and I am using the PHP jsConnect library from the following location: jsConnectPHP
Here is my code:
require_once('functions.jsconnect.php');

$clientID = "1501569466";
$secret   = "xxxxxxxxxxxxxxxxxxxxxx";
$userD    = array();

if (isset($_POST['log'])) {
    $data = array();
    $data['user_login']    = $_POST['u_user'];
    $data['user_password'] = $_POST['u_pass'];
    $data['remember']      = TRUE;
    $user = wp_signon($data, FALSE);
    if (!is_wp_error($user)) {
        $userD['uniqueid'] = $user->ID;
        $userD['name']     = $user->user_login;
        $userD['email']    = $user->user_email;
        $userD['photourl'] = '';
        $secure = true;
        WriteJsConnect($user, $_GET, $clientID, $secret, $secure);
        $redirect = "http://localhost/vanilla/entry/jsconnect?client_id={$clientID}";
        echo "<script>document.location.href='" . $redirect . "';</script>";
    }
}
When the user logs in on WordPress I redirect them to the jsConnect URL in Vanilla, where I just see a progress image, and I can't figure out where the problem is.
The jsConnect authentication URL expects a JSONP response like the following:
test({"email": "test@test.com",
      "name": "testuser",
      "photourl": "",
      "uniqueid": 1234,
      "client_id": "12345678",
      "signature": "XXXX"})
The authorization URL you specify inside jsConnect should produce this output for the login to proceed. In fact, I am stuck at that same point: I can see that the Vanilla forum receives this input when it loads, but no login happens.
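For what it's worth, the authentication URL is normally a separate endpoint whose only job is to print that JSONP via the library's WriteJsConnect() call. A rough sketch of such an endpoint, assuming it is loaded alongside WordPress (the wp-load.php path is an assumption for your setup):
<?php
// jsconnect.php - the authentication URL you configure in Vanilla's jsConnect settings.
require_once('functions.jsconnect.php');
require_once('wp-load.php'); // assumption: adjust to however you bootstrap WordPress

$clientID = "1501569466";
$secret   = "xxxxxxxxxxxxxxxxxxxxxx";

$user = array();
if (is_user_logged_in()) {
    $wpUser = wp_get_current_user();
    $user['uniqueid'] = $wpUser->ID;
    $user['name']     = $wpUser->user_login;
    $user['email']    = $wpUser->user_email;
    $user['photourl'] = '';
}

// Prints the JSONP callback (e.g. callback({...})) that Vanilla expects,
// or an error payload if the request or signature is invalid.
WriteJsConnect($user, $_GET, $clientID, $secret, true);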

URL cut off by Link module or PagePeeker formatter issue in Drupal 7

I have a Drupal 7 question that may involve some PHP. I have created an RSS feed from Google Alerts that I am mapping into fields. I have had success mapping into all the fields except the Link module field, where I have a field formatter that creates a PagePeeker screenshot by attaching the appropriate URL server query to the feed's URL. Feeds is doing its job by taking the item URL (link) and putting it into the field correctly. I am having an issue with either PagePeeker or the Link module, because the following keeps happening.
To recap:
Google Alert feed -> Link module field -> PagePeeker screenshot formatter
Here's the error. The URL that Google Alerts provides is:
http://www.google.com/url?sa=X&q=http://www.beautyjunkiesunite.com/WP/2012/05/30/whats-new-anastasia-beverly-hills-lash-genius/&ct=ga&cad=CAcQARgAIAEoATAAOABA3t-Y_gRIAlgBYgVlbi1VUw&cd=F7w9TwL-6ao&usg=AFQjCNG2rbJCENvRR2_k6pL9RntjP66Rvg
When the link is displayed I get:
http://pagepeeker.com/thumbs.php?size=m&url=www.google.com/url
It's cutting the URL off at "url" and not getting the rest of it.
Here's the code that PagePeeker uses to parse the URL:
<?php
function _pagepeeker_format_url($url, $domain_only = FALSE) {
  if (filter_var($url, FILTER_VALIDATE_URL) === FALSE) {
    return FALSE;
  }
  // Try to parse the url.
  $parsed_url = parse_url($url);
  if (!empty($parsed_url)) {
    $host     = (!empty($parsed_url['host'])) ? $parsed_url['host'] : '';
    $port     = (!empty($parsed_url['port'])) ? ':' . $parsed_url['port'] : '';
    $path     = (!empty($parsed_url['path'])) ? $parsed_url['path'] : '';
    $query    = (!empty($parsed_url['query'])) ? '?' . $parsed_url['query'] : '';
    $fragment = (!empty($parsed_url['fragment'])) ? '#' . $parsed_url['fragment'] : '';
    if ($domain_only) {
      return $host . $port;
    }
    else {
      return $host . $port . $path . $query . $fragment;
    }
  }
  return FALSE;
}
Could this be the problem?
Please let me know if I can clarify in any way.
What I need is for the entire URL to be processed, not just the truncated one.
Thanks!
I have seen a very similar question here at SO or on the Drupal SE site but couldn't find it, so I'm writing my "my way" answer again here.
<?php
function _pagepeeker_format_url($url, $domain_only = FALSE) {
  if (filter_var($url, FILTER_VALIDATE_URL) === FALSE) {
    return FALSE;
  }
  //$url = 'http://www.google.com/url?sa=X&q=http://www.beautyjunkiesunite.com/WP/2012/05/30/whats-new-anastasia-beverly-hills-lash-genius/&ct=ga&cad=CAcQARgAIAEoATAAOABA3t-Y_gRIAlgBYgVlbi1VUw&cd=F7w9TwL-6ao&usg=AFQjCNG2rbJCENvRR2_k6pL9RntjP66Rvg';
  // Use parse_url() to split the url into an array of url parts.
  $parsed_url = parse_url($url);
  // $parsed_url['query'] is now 'sa=X&q=http://www.beautyjunkiesunite.com/...&ct=ga&cad=...&cd=...&usg=...'.
  // ";" can also be used to separate params, but "&" is the usual one, so use it.
  $queryParts = explode('&', $parsed_url['query']);
  $params = array();
  foreach ($queryParts as $param) {
    // Limit to 2 so values that contain "=" are kept intact.
    $item = explode('=', $param, 2);
    // sa => X, etc.
    $params[$item[0]] = $item[1];
  }
  // $params is now an array of query parts:
  // $params['sa'] = 'X', $params['q'] = 'http://www.beautyjunkiesunite.com/WP/2012/05/30/whats-new-anastasia-beverly-hills-lash-genius/', etc.
  if ($domain_only) {
    $new_url_parts = parse_url($params['q']);
    return $new_url_parts['host'];
  }
  else {
    return $params['q'];
  }
}
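The manual explode() loop can also be replaced by PHP's built-in parse_str(), which handles the splitting and URL-decoding for you. A minimal sketch with a shortened example URL:
<?php
$url = 'http://www.google.com/url?sa=X&q=http://www.beautyjunkiesunite.com/WP/2012/05/30/whats-new-anastasia-beverly-hills-lash-genius/&ct=ga';

// Split off the query string, then let parse_str() build the parameter array.
$parsed_url = parse_url($url);
parse_str($parsed_url['query'], $params);

// $params['q'] now holds the full destination URL from the Google Alerts link.
echo $params['q'];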

How to detect the favicon (shortcut icon) for any site via PHP?

How can I detect the favicon (shortcut icon) for any site via PHP?
I can't write a regexp because it is different on different sites.
You could use this address and just drop the domain into it:
http://www.google.com/s2/favicons?domain=www.example.com
This addresses the problem you were having with the regexp and the different results per domain.
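For example, building that service URL from an arbitrary page URL in PHP might look like this (example.com is a placeholder):
<?php
// Hypothetical page whose favicon we want.
$page = 'http://www.example.com/some/page.html';

// Google's favicon service only needs the host part.
$host    = parse_url($page, PHP_URL_HOST);
$favicon = 'http://www.google.com/s2/favicons?domain=' . urlencode($host);

echo '<img src="' . htmlspecialchars($favicon) . '" alt="favicon" />';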
You can request http://domain.com/favicon.ico with PHP and see if you get a 404.
If you get a 404 there, you can parse the website's DOM, looking for a different location referenced in the head element by a link element with rel="icon".
// Helper function to see if a url returns `200 OK`.
function resourceExists($url) {
    $headers = get_headers($url);
    if ( ! $headers) {
        return FALSE;
    }
    return (strpos($headers[0], '200') !== FALSE);
}

function domainHasFavicon($domain) {
    // In case they pass 'http://example.com/'.
    $domain  = rtrim($domain, '/');
    $request = $domain . '/favicon.ico';
    // Check if the favicon.ico is where it usually is.
    if (resourceExists($request)) {
        return TRUE;
    } else {
        // If not, we'll parse the DOM and find it.
        $dom = new DOMDocument;
        @$dom->loadHTMLFile($domain);
        // Get all `link` elements that are children of `head`.
        $linkElements = $dom
            ->getElementsByTagName('head')
            ->item(0)
            ->getElementsByTagName('link');
        foreach ($linkElements as $element) {
            if ( ! $element->hasAttribute('rel')) {
                continue;
            }
            // Split the rel attribute on whitespace because it can be `shortcut icon`.
            $rel = preg_split('/\s+/', $element->getAttribute('rel'));
            if (in_array('icon', $rel)) {
                $href = $element->getAttribute('href');
                // This may be a relative URL; resolve it against the domain.
                if (strpos($href, 'http') !== 0) {
                    $href = $domain . '/' . ltrim($href, '/');
                }
                return resourceExists($href);
            }
        }
        return FALSE;
    }
}
If you want the URL of the favicon.ico returned, it is trivial to modify the above function.
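For example, a variant that returns the resolved favicon URL (or FALSE) instead of a boolean could look roughly like this, reusing the resourceExists() helper above; treat it as a sketch rather than a drop-in:
<?php
function domainFaviconUrl($domain) {
    $base = rtrim($domain, '/');

    // The conventional location first.
    if (resourceExists($base . '/favicon.ico')) {
        return $base . '/favicon.ico';
    }

    // Otherwise look for <link rel="icon" href="..."> in the page.
    $dom = new DOMDocument;
    if (@$dom->loadHTMLFile($domain) === false) {
        return FALSE;
    }
    foreach ($dom->getElementsByTagName('link') as $element) {
        $rel = preg_split('/\s+/', strtolower($element->getAttribute('rel')));
        if (in_array('icon', $rel)) {
            $href = $element->getAttribute('href');
            // Resolve relative hrefs against the domain (naive: ignores <base> and protocol-relative URLs).
            if (!preg_match('#^https?://#i', $href)) {
                $href = $base . '/' . ltrim($href, '/');
            }
            return resourceExists($href) ? $href : FALSE;
        }
    }
    return FALSE;
}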
$address = 'http://www.youtube.com/';
$domain = parse_url($address, PHP_URL_HOST);
Or, from a database:
$domain = parse_url($row['address_column'], PHP_URL_HOST);
Display it with:
<img src="http://www.google.com/s2/favicons?domain=<?php echo $domain; ?>" />
