I have been given API documentation which I don't quite understand, as there is no URL to connect to:
http://support.planetdomain.com/index.php?_m=downloads&_a=viewdownload&downloaditemid=14&nav=0
I'd prefer doing this in PHP.
How can I run a 10 iteration loop, checking if a domain is available, and if its response is "available", perform the register command and exit the script (using the code provided in the documentation)?
Thank you.
For the basics, I suggest using cURL to access resources by HTTP POST.
I put this into a function:
function api_call($url, $data, $timeout = 20)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,  // return the response instead of printing it
        CURLOPT_NOBODY         => false, // we want the response body
        CURLOPT_TIMEOUT        => $timeout,
        CURLOPT_FORBID_REUSE   => 1,
        CURLOPT_FRESH_CONNECT  => 1,
        CURLOPT_POST           => true,
    ));
    // $data is an associative array containing the data you're sending them,
    // describing which call, e.g.:
    // array('operation'=>'user.verify','admin.username'=>'you','admin.password'=>'pass','reseller.id'=>'xxx')
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    $response = curl_exec($ch);
    $status_code = intval(curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
    return array('status' => $status_code, 'url' => $url, 'data' => $response);
}
However, you need to supply a URL. Lucanos noted in the comments it is "api.planetdomain.com/servlet/TLDServlet".
http://support.planetdomain.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=77
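Putting it together, a hypothetical first call might look like this; the endpoint comes from the links above, and the credentials and reseller ID are placeholders you'd replace with your own:
// Hypothetical usage of api_call() against the endpoint Lucanos mentioned.
$result = api_call('https://api.planetdomain.com/servlet/TLDServlet', array(
    'operation'      => 'user.verify',
    'admin.username' => 'you',
    'admin.password' => 'pass',
    'reseller.id'    => 'xxx',
));
echo $result['status']; // HTTP status code of the response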
By the way, I mostly use cURL for GET requests, so I might be missing some details of how to do a POST right; I've tried to fill them in, though.
You ask "How can I run a 10 iteration loop, checking if a domain is available, and if its response is available, perform the register command and exit the script (using the code provided in the documentation)?"
Well, here's some pseudocode mixed with valid PHP. As you know, I don't know the PlanetDomain API, so this will NOT work as-is, but it should give you a decent idea about it.
$dp_base_url = 'https://api.planetdomain.com/servlet/TLDServlet'; // see the KB article above
$domains = array('futunarifountain.co.uk', 'megahelicopterunicornassaultlovepageant.ly');
for ($i = 0; $i < 10 && $i < count($domains); $i++)
{
    // set up the domain check call (your credentials and the right operation name go here too)
    $domain_check_call = array('domain.name' => $domains[$i]);
    $domain_info = api_call($dp_base_url, $domain_check_call);
    $info = json_decode($domain_info['data'], true); // IF they use JSON and not XML or something
    if ($info['domain']['status'] == 'available')
    {
        $register_call = something(); // make the API call to register the domain, similar to the above
        if ($register_call['success']) { exit(); /* or whatever */ }
    }
}
Hope that helps get you on the right track.
I am trying to get page meta tags and the description from a given URL.
I have an array of URLs that I have to loop through, sending a cURL GET request for each one to fetch its meta data, and this takes a lot of time to process.
Is there any way to process all the URLs simultaneously?
I mean, send the requests to all URLs at the same time and then receive
each response as soon as its request completes.
For this purpose I have used
curl_multi_init()
but it's not working as expected. I have used this example:
Simultaneous HTTP requests in PHP with cURL
I have also used this GuzzleHttp example:
Concurrent HTTP requests without opening too many connections
My code:
$urlData = [
    'http://youtube.com',
    'http://dailymotion.com',
    'http://php.net'
];

$promises = [];
foreach ($urlData as $url) {
    $promises[] = $this->client->requestAsync('GET', $url);
}

Promise\all($promises)->then(function (array $responses) {
    foreach ($responses as $response) {
        $htmlData = $response->getBody();
        dump($htmlData);
    }
})->wait();
But I got this error:
Call to undefined function GuzzleHttp\Promise\Promise\all()
I am using Guzzle 6 and Promises 1.3
I need a solution, whether in cURL or in Guzzle, to send simultaneous requests and save time.
Check your use statements. You probably have a mistake there, because the correct name is GuzzleHttp\Promise\all(). Maybe you forgot the use GuzzleHttp\Promise as Promise; import at the top of the file.
Otherwise the code is correct and should work. Also check that you have the cURL extension enabled in PHP, so Guzzle will use it as the backend. It's probably there already, but worth checking ;)
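For reference, a minimal sketch of what the corrected file might look like; the client setup here is an assumption, since only the loop was shown in the question:
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client();
$urlData = ['http://youtube.com', 'http://dailymotion.com', 'http://php.net'];

$promises = [];
foreach ($urlData as $url) {
    $promises[] = $client->requestAsync('GET', $url);
}

// With "use GuzzleHttp\Promise;" in place, Promise\all() resolves to the
// function GuzzleHttp\Promise\all() and the error goes away.
Promise\all($promises)->then(function (array $responses) {
    foreach ($responses as $response) {
        echo strlen($response->getBody()) . " bytes\n";
    }
})->wait();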
I'm trying to get a JSON string from a page in my Laravel project, using this:
$json = file_get_contents($url);
$data = json_decode($json, TRUE);
return View::make('adventuretime.marceline')
->with('json', $json)
->with('title', 'ICE KING')
->with('description', 'I am the Ice King')
->with('content', 'ice king');
But since I'm only working on localhost, I think this doesn't work, which is why it doesn't output anything. What is the proper way to make this flexible, so it can get the JSON string from any $url value using PHP?
Looking at the comments above, it is possible that the $url you are using is not valid; check it by pointing your browser there and seeing what happens.
If you are sure that the $url is fine but you still get the 404 Not Found error, verify that you have proper Laravel routing defined for that address. If the routes are fine, maybe you forgot to run
composer dump-autoload
after making modifications in your routes.php. If so, try the above and refresh the browser to see if it helps.
Furthermore, bear in mind that with your current function you can only submit GET requests. What is more, this function might not be available for fetching remote URLs on some hosting servers, for security reasons (when allow_url_fopen is disabled). If you still want to use it, it'd be good to check
if($json !== FALSE)
before you process the $json response, because file_get_contents() returns FALSE on failure.
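A minimal defensive sketch of that check, assuming $url is set by your controller:
$json = @file_get_contents($url); // returns false on failure
if ($json !== FALSE) {
    $data = json_decode($json, TRUE); // returns null on invalid JSON
    if ($data !== NULL) {
        // safe to work with $data here
    }
}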
Referring to the part of your question:
what is the proper way for it to be flexible and be able to get the JSON string with any $url
I'd suggest using cURL, as a standard and convenient way to fetch remote content. Using cURL you have better control over the process of sending the HTTP request and receiving the "answer" it returns. Personally, in my Laravel 4 apps I often use the package jyggen/curl. You can read the docs for it here: jyggen docs
If you are not satisfied with cURL and you want greater control, try Guzzle. As the authors state, Guzzle is a PHP HTTP client & framework for building RESTful web service clients.
I was wondering how to send a PHP variable from a script on one server to another PHP script on another server.
I have 2 PHP scripts on 2 different servers, and one must send vars to the other.
I've been searching with little luck.
Would appreciate any help.
You could achieve that using cURL and sending the variable as a GET value.
Something like this:
$data = "data you want to send";
$data = urlencode($data);
$url = "http://example.com?data=" . $data;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_exec($ch);
curl_close($ch);
Let's assume $data = "foobar"
Doing the above from a PHP script would be the same as someone visiting http://example.com?data=foobar from a browser.
You could obviously send it to any script using the URL:
http://example.com/yourscript.php?data=foobar
At yourscript.php you can get the data from $_GET['data'], do some input validation to ensure it is being sent by your script and not by someone else via a browser (more on that later), and then proceed with your script.
For this to work, yourscript.php will have to reside in the public HTML folder of your webhost so it is accessible to your other script.
SECURITY
Whether you are passing the data over GET or POST, someone else can send (possibly malicious) data to your script as well. Thus, when yourscript.php receives data, there needs to be a way for it to ensure that you are the sender. An easy way to achieve this is: decide on an arbitrary number known only to you, say, 12.
Concatenate the number with the data you are passing, calculate the md5 hash of the result, and send the hash as another GET variable.
In this case, you would calculate md5("12foobar")
and the URL would be: http://example.com/yourscript.php?data=foobar&auth=hash
When yourscript.php receives the data, it calculates the same hash (using the number 12, known to no one else) and if the hash it calculates matches with $_GET['auth'], you can be sure you sent the data.
If someone tried to imitate you and send data, they would not know how you calculate the hash, and would thus send the wrong hash.
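Here is a minimal sketch of that scheme on both sides; the secret "12", the data "foobar" and the URL are taken from the example above:
// Sender side: build the authenticated URL.
$secret = '12'; // known only to your two scripts
$data   = 'foobar';
$auth   = md5($secret . $data); // md5("12foobar")
$url    = 'http://example.com/yourscript.php?data=' . urlencode($data) . '&auth=' . $auth;

// Receiver side (yourscript.php): recompute the hash and compare.
$data = isset($_GET['data']) ? $_GET['data'] : '';
$auth = isset($_GET['auth']) ? $_GET['auth'] : '';
if (md5('12' . $data) === $auth) {
    // the data really came from your script
}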
PS
Another way to add a layer of security would be to check the IP address of the requester in $_SERVER['REMOTE_ADDR']. If it is the IP address of the webhost where your other script resides, then you know it is you.
I haven't thought this method through, so there might be some loopholes.
You can do that either using GET query strings (second.php?var=value) or using a cURL connection with the POST method, sending your data over POST.
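A minimal sketch of the POST variant; the receiver URL and field name are placeholders:
// Hypothetical receiver; the receiving script reads $_POST['var'].
$ch = curl_init('http://example.com/second.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('var' => 'value'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response
$response = curl_exec($ch);
curl_close($ch);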
You should probably use SOAP. It's used for remote function calls and brings only a little more overhead than plain HTTP requests, but it also guarantees that the remote function will be executed (or an error raised), and it will directly return whatever datatype you need. I believe that's what this technology was developed for :)
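For illustration, a hedged sketch with PHP's built-in SoapClient; the WSDL URL and the remote method name are hypothetical, and PHP needs the soap extension enabled:
$client = new SoapClient('http://example.com/service?wsdl');
$result = $client->setVariable(array('name' => 'foo', 'value' => 'bar')); // hypothetical method
var_dump($result);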
I want to pass data (a 16 digit key) to another site which will validate the key and return a response. I want to grab the response and check if it's a valid key on my side so I can do some extra stuff with it.
Is this possible? If not, why can't it be done?
EDIT:
OK, here's the process. I am grabbing this key from a user input, which can be accessed by grabbing the POST data. After that, the data needs to be sent to another form with 1 input field on another site. Ideally, this will produce a result that I can grab on my end.
Some sample test code to give you an idea of calling a remote site behind the scenes:
$key = $_POST['key'];
// Create a curl handle to the remote checking server
$ch = curl_init('http://remoteurl/?key=' . urlencode($key));
// Return the reply instead of printing it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute and get the reply back
$reply = curl_exec($ch);
curl_close($ch);
// Do stuff with the reply
if ($reply == '...') {
    // Save $key?!
}
Ya, it's possible and there are many ways to accomplish it.
cURL is probably going to be your best bet.
Or you could use sockets and directly connect to the target machine on port 80, sending a hand-written HTTP request with the form data on it, for ultimate control (see the sketch below).
Or you could do it on the client side with JavaScript.
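If you want to try the raw-socket route mentioned above, here's a minimal sketch; the host, path and form field are placeholders:
// Hand-written HTTP POST over a socket; host, path and payload are hypothetical.
$host = 'example.com';
$data = 'key=' . urlencode('1234567890123456');
$fp = fsockopen($host, 80, $errno, $errstr, 30);
if ($fp) {
    $request  = "POST /check.php HTTP/1.1\r\n";
    $request .= "Host: $host\r\n";
    $request .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $request .= "Content-Length: " . strlen($data) . "\r\n";
    $request .= "Connection: close\r\n\r\n";
    $request .= $data;
    fwrite($fp, $request);
    $reply = '';
    while (!feof($fp)) {
        $reply .= fgets($fp, 1024); // raw response, headers included
    }
    fclose($fp);
}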
Given a list of urls, I would like to check that each url:
Returns a 200 OK status code
Returns a response within X amount of time
The end goal is a system that is capable of flagging urls as potentially broken so that an administrator can review them.
The script will be written in PHP and will most likely run on a daily basis via cron.
The script will be processing approximately 1000 urls at a go.
Question has two parts:
Are there any big-time gotchas with an operation like this? What issues have you run into?
What is the best method for checking the status of a url in PHP considering both accuracy and performance?
Use the PHP cURL extension. Unlike fopen(), it can also make HTTP HEAD requests, which are sufficient to check the availability of a URL and save you a ton of bandwidth, as you don't have to download the entire body of the page to check.
As a starting point you could use some function like this:
function is_available($url, $timeout = 30) {
$ch = curl_init(); // get cURL handle
// set cURL options
$opts = array(CURLOPT_RETURNTRANSFER => true, // do not output to browser
CURLOPT_URL => $url, // set URL
CURLOPT_NOBODY => true, // do a HEAD request only
CURLOPT_TIMEOUT => $timeout); // set timeout
curl_setopt_array($ch, $opts);
curl_exec($ch); // do it!
$retval = curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200; // check if HTTP OK
curl_close($ch); // close handle
return $retval;
}
However, there's a ton of possible optimizations: You might want to re-use the cURL instance and, if checking more than one URL per host, even re-use the connection.
Oh, and this code checks strictly for HTTP response code 200. It does not follow redirects (302); but there is also a cURL option for that (CURLOPT_FOLLOWLOCATION).
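As a sketch of those two ideas, here's a hypothetical variant that reuses one handle for a whole list of URLs and counts redirect targets as reachable; the function name is made up:
function are_available(array $urls, $timeout = 30) {
    $results = array();
    $ch = curl_init(); // one handle, reused for every URL
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOBODY         => true, // HEAD requests only
        CURLOPT_FOLLOWLOCATION => true, // follow 301/302 redirects
        CURLOPT_TIMEOUT        => $timeout,
    ));
    foreach ($urls as $url) {
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_exec($ch);
        $results[$url] = (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
    }
    curl_close($ch);
    return $results;
}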
Look into cURL. There's a library for PHP.
There's also an executable version of cURL so you could even write the script in bash.
I actually wrote something in PHP that does this over a database of 5k+ URLs. I used the PEAR class HTTP_Request, which has a method called getResponseCode(). I just iterate over the URLs, passing each to getResponseCode() and evaluating the response.
However, it doesn't work for FTP addresses, URLs that don't begin with http or https (unconfirmed, but I believe it's the case), or sites with invalid security certificates, where a 0 is returned rather than a proper status code. Also, a 0 is returned for server-not-found (there's no status code for that).
And it's probably easier than cURL, as you just include a few files and use a single function to get an integer code back.
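From memory, the flow looks roughly like this; hedged, since I haven't touched HTTP_Request in a while and it has since been superseded by HTTP_Request2:
require_once 'HTTP/Request.php';

// Rough sketch of the PEAR HTTP_Request flow described above.
$req = new HTTP_Request('http://example.com/');
$req->sendRequest();
$code = $req->getResponseCode(); // integer status code; 0 on server-not-found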
fopen() supports http URIs.
If you need more flexibility (such as timeout), look into the cURL extension.
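For completeness, a minimal sketch of the fopen() route; this assumes allow_url_fopen is enabled:
// Open a page over HTTP and read its first line; fopen() returns false on failure.
$fh = @fopen('http://example.com/', 'r');
if ($fh !== false) {
    echo fgets($fh);
    fclose($fh);
}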
Seems like it might be a job for curl.
If you're not stuck on PHP, Perl's LWP might be an answer too.
You should also be aware of URLs returning 301 or 302 HTTP responses which redirect to another page. Generally this doesn't mean the link is invalid. For example, http://amazon.com returns 301 and redirects to http://www.amazon.com/.
Just returning a 200 response is not enough; many valid links will continue to return "200" after they change into porn / gambling portals when the former owner fails to renew.
Domain squatters typically ensure that every URL in their domains returns 200.
One potential problem you will undoubtedly run into is when the box this script is running on loses access to the Internet... you'll get 1000 false positives.
It would probably be better for your script to keep some type of history and only report a failure after 5 days of failure.
Also, the script should be self-checking in some way (like checking a known good web site [google?]) before continuing with the standard checks.
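A tiny sketch of that self-check, reusing the is_available() function from the earlier answer (google.com here stands in for any known good site):
// Sanity check before the run: if even a known good site looks down,
// assume we lost connectivity and abort instead of logging false positives.
if (!is_available('http://www.google.com')) {
    exit(1); // or log "connectivity lost" and bail out
}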
You only need a bash script to do this. Please check my answer on a similar post here. It is a one-liner that reuses HTTP connections to dramatically improve speed, retries n times for temporary errors, and follows redirects.