I need any link that has an "a href=" tag, when clicked, to be fetched via cURL. I can't hard-code these links as they come from a dynamic site, so they could be anything. How would I achieve this?
Thanks
Edit: Let me explain more. I have an app on my PC that uses a web front end. It catalogs files and gives you options to rename, delete, etc. I want to add a public view; however, if I put it online as it is, anyone can delete or rename files. If I cURL the pages, I can remove the menu bars and editing options through the use of a different CSS. That part all works. The only part that isn't working: if I click on a link on the page, it directs me back to the original link address, and that defeats the point, as the menu bars are back. I need it to cURL the clicked links. Hope that makes more sense.
Here is my code that fetches the original link, cURLs it, and changes the CSS to point to my own CSS. It points the JavaScript to the original, as I don't need to change that. I now need to make the "a href" links on the page, when clicked, be fetched by cURL instead of going to the original destination:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://192.168.0.14:8081/home/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$curl_response = curl_exec($ch);
curl_close($ch);
// Change link URLs
$link = $curl_response;
$linkgo = '/sickbeard_public';
$linkfind = 'href="';
$linkreplace = 'href="' . $linkgo;
$link = str_replace($linkfind, $linkreplace, $link);
// Change JS URLs
$js = $link;
$jsgo = 'http://192.168.0.14:8081';
$jsfind = 'src="';
$jsreplace = 'src="' . $jsgo;
$js = str_replace($jsfind, $jsreplace, $js);
// Fix on-page link errors
$alink = $js;
$alinkgo = 'http://192.168.0.14:8081/';
$alinkfind = 'a href="/sickbeard_public/';
$alinkreplace = 'a href="' . $alinkgo;
$alink = str_replace($alinkfind, $alinkreplace, $alink);
// Echo the page back
echo $alink;
?>
You could grab all the URLs using a regular expression:
// insert general warning about how parsing HTML using regex is evil :-)
preg_match_all('/href="([^"]+)"/', $html, $matches);
$urls = $matches[1];
// Now just loop through the array and fetch each URL with cURL...
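A minimal sketch of that loop, assuming $urls holds the links extracted above:
<?php
// Minimal sketch: fetch each extracted URL with cURL.
// $urls is assumed to come from the preg_match_all() call above.
$pages = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $pages[$url] = curl_exec($ch);
    curl_close($ch);
}
?>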
While I can't imagine why you would do that, I think you should use AJAX.
Attach an event to every a tag and send the clicked URL to a script on your server, where the cURL magic would happen.
Anyway, you should explain why you need to fetch the data with cURL.
As far as I can understand your question, you need to get the contents of a URL via cURL... so here is the solution. Start with a plain link (the id and URL here are just examples):
<a href="http://www.example.com" id="my_link">Click here to get via curl</a>
Then attach an event to the above <a> tag, e.g. in jQuery:
$("#my_link").click(function(){
var target_url = $(this).attr("href");
//Send an ajax call to some of your page like cURL_wrapper.php with target_url as parameter in get
});
Then, in cURL_wrapper.php, do the following:
<?php
// Get the target URL from the query string
$target_url = $_GET['url'];
$ch = curl_init($target_url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch); // the fetched page is echoed straight back to the browser
curl_close($ch);
?>
I want to search Google Images with the keyword "car" and get car images.
I found two links for implementing this:
PHP class to retrieve multiple images from Google using curl multi handler
Google image API using cURL
I implemented them as well, but they only gave 4 random images, not more than that.
Question: How do I get car images in PHP using a keyword, the way a Google search works?
Any suggestion will be appreciated!
You could use the PHP Simple HTML DOM library for this:
<?php
include "simple_html_dom.php";
$search_query = "ENTER YOUR SEARCH QUERY HERE";
$search_query = urlencode($search_query);
$html = file_get_html("https://www.google.com/search?q=$search_query&tbm=isch");
$image_container = $html->find('div#rcnt', 0);
$images = $image_container->find('img');
$image_count = 10; // Enter the number of images to be shown
$i = 0;
foreach ($images as $image) {
    if ($i == $image_count) break;
    $i++;
    // Do whatever you want with the image here (the image element is '$image'):
    echo $image;
}
This will print a specific number of images (the number is set in $image_count).
For more information on the PHP Simple HTML DOM library, click here.
I am not entirely sure about this, but Google provides good documentation for it:
$url = "https://ajax.googleapis.com/ajax/services/search/images?" .
"v=1.0&q=barack%20obama&userip=INSERT-USER-IP";
// sendRequest
// note how referer is set manually
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_REFERER, /* Enter the URL of your site here */);
$body = curl_exec($ch);
curl_close($ch);
// now, process the JSON string
$json = json_decode($body);
// now have some fun with the results...
This is from Google's official developer guide to image searching. For more reference, see:
https://developers.google.com/image-search/v1/jsondevguide#json_snippets_php
In $url you must set the search keywords.
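As a rough sketch of that last step, the legacy API returns its matches under responseData->results; the field names below follow the linked guide:
<?php
// Rough sketch: walk the decoded response of the legacy Image Search API.
// Field names (responseData, results, unescapedUrl) follow the linked guide.
if (isset($json->responseData->results)) {
    foreach ($json->responseData->results as $result) {
        echo $result->unescapedUrl . "\n"; // direct URL of the image
    }
}
?>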
I'm a beginner at PHP. I have one task in my project, which is to fetch all videos from a YouTube link using curl in PHP. Is it possible to show all videos from YouTube?
I found this code with a Google search:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.youtube.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$contents = curl_exec($ch);
echo $contents;
curl_close($ch);
?>
It shows the YouTube site, but when I click any video it will not play.
You can get data from YouTube's oEmbed interface in two formats, XML and JSON, which returns metadata about a video:
http://www.youtube.com/oembed?url={videoUrlHere}&format=json
Using your example, the call would be:
http://www.youtube.com/oembed?url=http://www.youtube.com/watch?v=B4CRkpBGQzU&format=json
So you can do it like this:
$url = "Your_Youtube_video_link";
Example :
$url = "http://www.youtube.com/watch?v=m7svJHmgJqs"
$youtube = "http://www.youtube.com/oembed?url=" . $url. "&format=json";
$curl = curl_init($youtube);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$return = curl_exec($curl);
curl_close($curl);
$result = json_decode($return, true);
echo $result['html'];
Try it...Hope it will help you.
You could use cURL to retrieve the YouTube main page (or an alternative page) and parse the returned HTML using a library such as html5lib. If you wanted to try this approach, the first step could be to 'view source' on the relevant page and look at how the links are structured.
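For instance, with PHP's built-in DOMDocument standing in for html5lib, the link-extraction step might look like this sketch (the /watch?v= prefix is an assumption about how the video links are structured):
<?php
// Sketch: fetch the page with cURL, then list the video links.
// DOMDocument is used here in place of html5lib.
$ch = curl_init('http://www.youtube.com');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$html = curl_exec($ch);
curl_close($ch);
$dom = new DOMDocument();
@$dom->loadHTML($html); // @ silences warnings about malformed markup
foreach ($dom->getElementsByTagName('a') as $a) {
    $href = $a->getAttribute('href');
    if (strpos($href, '/watch?v=') === 0) { // assumed video-link shape
        echo $href . "\n";
    }
}
?>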
A more elegant way to approach the problem could be to use the YouTube API (a way to interact with the YouTube system), which may allow you to retrieve the links directly, e.g. it may be possible to just ask the YouTube API to send you the links.
You can also get all of a YouTube channel's videos using file_get_contents.
Below is a working sample:
<?php
$Youtube_API_Key = ""; // you can obtain an API key here: https://developers.google.com/youtube/registering_an_application
$Youtube_channel_id = "";
$TotalVideos = 50; // 50 is the max per request; for more videos you need to paginate with page tokens
$order = "date"; // allowed orders: date, rating, relevance, title, videoCount, viewCount
$url = "https://www.googleapis.com/youtube/v3/search?key=" . $Youtube_API_Key . "&channelId=" . $Youtube_channel_id . "&part=id&order=" . $order . "&maxResults=" . $TotalVideos;
$data = file_get_contents($url);
$JsonDecodedData = json_decode($data, true);
print_r($JsonDecodedData);
?>
First of all, have a look here:
www.zedge.net/txts/4519/
This page has many text messages. I want my script to open each message and download it, but I am having some problems.
This is my simple script to open the page:
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.zedge.net/txts/4519");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // must be set before curl_exec()
$contents = curl_exec($ch);
curl_close($ch);
?>
The page downloads fine, but how would I open every text-message page inside this page, one by one, and save each one's content to a text file?
I know how to save the content of a web page to a text file using cURL, but in this case there are many different pages inside the page I've downloaded. How do I open them one by one, separately?
I have this idea, but I don't know if it will work:
Download this page:
www.zedge.net/txts/4519
Look for all the links to text-message pages inside it and save each link to a text file (one per line). Then run another cURL session, read the links one by one, open each one, copy the content from the particular DIV, and save it to a new file.
The algorithm is pretty straightforward:
download www.zedge.net/txts/4519 with cURL
parse it with DOM (or an alternative) for links
either store them all in a text file/database or process them on the fly with a "subrequest"
// Load the main page
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_URL, "http://www.zedge.net/txts/4519");
$contents = curl_exec($ch);
$dom = new DOMDocument();
$dom->loadHTML($contents);
// Filter all the links
$xPath = new DOMXPath($dom);
$items = $xPath->query('//a[@class="myLink"]');
foreach ($items as $link) {
    $url = $link->getAttribute('href');
    if (strncmp($url, 'http', 4) != 0) {
        // Prepend http://www.zedge.net or similar
    }
    // Open a subrequest for the linked page
    curl_setopt($ch, CURLOPT_URL, $url);
    $subContent = curl_exec($ch);
}
See the documentation and examples for DOMXPath::query; note that DOMNodeList implements Traversable, and therefore you can use foreach.
Tips (a combined sketch follows below):
Use the cURL options CURLOPT_COOKIEJAR and CURLOPT_COOKIEFILE
Use sleep(...) so you don't flood the server
Raise the PHP time and memory limits
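Putting those tips together, a minimal sketch; the cookie-jar path, limits, and delay are arbitrary examples, and $urls is assumed to hold the links collected by the XPath loop above:
<?php
// Sketch combining the tips above; paths and limits are arbitrary.
set_time_limit(0);               // no PHP execution-time limit
ini_set('memory_limit', '256M'); // raise the memory limit
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/cookies.txt');  // write cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/cookies.txt'); // and send them back
foreach ($urls as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    $content = curl_exec($ch);
    // ... save $content to a file or database here ...
    sleep(1); // be polite: don't flood the server
}
curl_close($ch);
?>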
I used DOM for my part of the code. I fetched the desired page and filtered the data using getElementsByTagName('td').
I wanted the status of the relays from the device page, and I wanted an updated status every time. For that I used the code below:
$keywords = array();
$domain = array('http://USERNAME:PASSWORD@URL/index.htm');
$doc = new DOMDocument;
$doc->preserveWhiteSpace = FALSE;
foreach ($domain as $key => $value) {
    @$doc->loadHTMLFile($value); // @ suppresses warnings about malformed HTML
    //$anchor_tags = $doc->getElementsByTagName('table');
    //$anchor_tags = $doc->getElementsByTagName('tr');
    $anchor_tags = $doc->getElementsByTagName('td');
    foreach ($anchor_tags as $tag) {
        $keywords[] = strtolower($tag->nodeValue);
        //echo $keywords[0];
    }
}
Then I get the desired relay names and statuses in the $keywords[] array.
If you want to read all the messages on the main page, first collect the links to the separate messages; then you can run the same process on each one.
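A minimal sketch of that collection step with DOM (assuming allow_url_fopen is enabled; the URL comes from the question above):
<?php
// Sketch: collect every link on the main page so each message
// can be fetched in a second pass.
$doc = new DOMDocument;
@$doc->loadHTMLFile('http://www.zedge.net/txts/4519');
$links = array();
foreach ($doc->getElementsByTagName('a') as $a) {
    $links[] = $a->getAttribute('href');
}
// Now loop over $links and fetch each message page as shown earlier.
?>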
I am updating my Facebook status with feed updates from my site, and the status is followed by a link to the feed.
I'm using:
echo $_POST['msg']." #"."<a href='http://xxx.ch/comment.php?id=".$result."'>link</a>";
but the status update on Facebook comes out as:
msg #<a href='http://xxx.ch/comment.php?id=2'>link</a>
I want only:
msg # link
Facebook doesn't support HTML tags in messages. Just specify the URL and it will be shown as a URL.
Links should be posted in the link parameter; you can also set a custom name for that one. Don't use message for this purpose.
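A hedged sketch of that with the Graph API feed endpoint; $result and $access_token are placeholders from your own app, not values from the question:
<?php
// Sketch: post to the Graph API with the link in its own parameter.
$post = array(
    'message'      => $_POST['msg'],
    'link'         => 'http://xxx.ch/comment.php?id=' . $result,
    'name'         => 'link', // custom display name for the link
    'access_token' => $access_token,
);
$ch = curl_init('https://graph.facebook.com/me/feed');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
curl_close($ch);
?>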
I got it working using TinyURL:
function get_tiny_url($url) {
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, 'http://tinyurl.com/api-create.php?url=' . $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$new_url = get_tiny_url('http://xxx.ch/comment.php?id=' . $result);
echo $_POST['msg'] . " # " . $new_url;
I need to get the complete output from an ASPX site. When the user leaves, I will save what's in some specific elements in cookies. The problem is that the ASPX page is on a domain I don't have access to. I want the output to behave as in an iframe, so links need to be clickable, but it mustn't leave my page.
I'm thinking of either AJAX with a PHP proxy, or an iframe whose content I can modify.
Is this possible?
If it is possible and it involves server-side code, I would like to know if there are any free web hosts that support the full code (for example, almost every free web host has safe_mode on for PHP).
EDIT: I want to display this page: School scheme. The URL doesn't change; it just sends requests to the server (think via JavaScript). When the user leaves, I will read what's in the select box id="TypeDropDownList" and what's in the select box id="ScheduleIDDropDownList".
When the user returns to my page, I will print those values to the page via a URL like this: "http://www.novasoftware.se/webviewer/(S(lv1isca2txx1bu45c3kvic45))/design1.aspx?schoolid=27500&code=82820&type=" + type + "&id=" + id
I tried several PHP proxy scripts on 000webhost before I posted here,
for example, this:
<?php
ob_start();
function logf($message) {
$fd = fopen('proxy.log', "a");
fwrite($fd, $message . "\n");
fclose($fd);
}
?>
<?
$url = $_REQUEST['url'];
logf($url);
$curl_handle = curl_init($url);
curl_setopt($curl_handle, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl_handle, CURLOPT_USERAGENT, "Owen's AJAX Proxy");
$content = curl_exec($curl_handle);
$content_type = curl_getinfo($curl_handle, CURLINFO_CONTENT_TYPE);
curl_close($curl_handle);
header("Content-Type: $content_type");
echo $content;
ob_flush();
?>
But it returns: Warning: curl_setopt(): supplied argument is not a valid cURL handle resource in /home/a5379897/public_html/ajax-proxy.php on line 16
I tried to contact them about this, because they say they have cURL enabled, but they haven't responded yet.
I think it would be possible to just display the two select boxes when the user first visits the page. When options are selected, an iframe would show the right page by passing "http://www.novasoftware.se/webviewer/(S(lv1isca2txx1bu45c3kvic45))/design1.aspx?schoolid=27500&code=82820&type=" + type + "&id=" + id to its src attribute.
The problem with that is that I would need to retrieve the select boxes somehow, and then I would have the same problem.
You would need to use PHP, as JavaScript doesn't allow cross-domain requests. Your PHP code would literally grab the page the client wants and process it, changing each link's href to point at your page with a GET variable holding the page the original href linked to. When they click a link, they are sent to the same page they are on now, but it grabs the new page and returns that (processing that page too), and so on.
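A bare-bones sketch of that idea; the file name proxy.php, the fallback URL, and the naive href rewrite are all illustrative only:
<?php
// proxy.php -- bare-bones sketch of the proxy described above.
// Fetch the requested page, then rewrite every absolute href so the
// link points back at this script with the real target in ?url=.
$target = isset($_GET['url']) ? $_GET['url'] : 'http://www.example.com/';
$ch = curl_init($target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
$html = curl_exec($ch);
curl_close($ch);
// Naive rewrite: a real version should urlencode the captured URL
// and handle relative links too.
$html = preg_replace('/href="(http[^"]+)"/', 'href="proxy.php?url=$1"', $html);
echo $html;
?>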
000webhost is a nice free web host that allows most of PHP's functionality and doesn't put adverts on your site.
To get the whole ASPX output as a string to manipulate, you can use file_get_contents('http://yoursite.com/yourpage.aspx');
For best results, open a stream as the context via HTTP:
<?php
// Create a stream
$opts = array(
'http'=>array(
'method'=>"GET",
'header'=>"Accept-language: en\r\n" .
"Cookie: foo=bar\r\n"
)
);
$context = stream_context_create($opts);
// Open the file using the HTTP headers set above
$file = file_get_contents('http://www.example.com/', false, $context);
?>
Thanks to Greg, I could create this script that gets the page:
<html>
<head>
</head>
<body>
<?php
// Create a stream
$opts = array(
    'http' => array(
        'method' => "GET",
        'header' => "Accept-language: en\r\n" .
                    "Cookie: foo=bar\r\n"
    )
);
$context = stream_context_create($opts);
$host = 'http://www.novasoftware.se/webviewer/(S(bkjwdqntqzife4251x4sdx45))/';
$url = 'design1.aspx?schoolid=27500&code=82820&type=3&id={7294F285-A5CB-47D6-B268-E950CA205560}';
$changetothis = 'src="' . $host;
// Open the file using the HTTP headers set above
$file = file_get_contents($host . $url, false, $context);
// Make relative src attributes absolute so they load from the original host
$changed = str_replace('src="', $changetothis, $file);
echo $changed;
?>
</body>
</html>