I have a web service deployed on a server, and it is working perfectly. Now I want to deploy the same web service on a second server, and on the client side check which of the two servers is running before making the call.
I want to do something like this:
$Ip1 = "192.168.1.1/GetSomeData";
$Ip2 = "202.47.22.1/GetSomeData";
Now I want to check whether Ip1 is running or not:
if (Ip1 == "running")
{
    // call the web-service
}
// if Ip1 is not working, fall back to Ip2
else if (Ip2 == "running")
{
    // call the web-service
}
else
{
    // do nothing
}
How can I achieve that in Yii2?
Any help would be highly appreciated.
If the web service is under your control, you could add an echo method and simply check whether it echoes an answer back to you using a normal web-service call.
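A minimal sketch of that idea, assuming a hypothetical /Echo endpoint (not part of your existing service) that returns whatever string it is sent:

// /Echo is an assumed endpoint added for health checks
function echoesBack($baseUrl)
{
    $ch = curl_init($baseUrl . "/Echo?msg=ping");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    $response = curl_exec($ch);
    curl_close($ch);
    // the service is up only if it echoed our message back
    return $response === "ping";
}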
On the other hand, you could use cURL to check for an existing file or service on your web server, as in this post:
function isRunning($url = NULL)
{
    if ($url == NULL) return false;
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return ($httpcode >= 200 && $httpcode < 300);
}
Your code would then look something like this:
if (isRunning($Ip1))
{
    // call the web-service
}
// if the first server is not running, try the second
else if (isRunning($Ip2))
{
    // call the web-service
}
else
{
    // do nothing
}
Here $Ip1 and $Ip2 each hold the URL of a file or path on that server. There are also a lot of other ways: you could use fsockopen if you have an open port, or use shell_exec to fetch a ping result, as sketched below.
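For the fsockopen route, a minimal sketch (the port and timeout values are illustrative; this assumes the server has a port open to you):

function isPortOpen($host, $port = 80, $timeout = 5)
{
    // fsockopen returns a stream resource on success, false on refusal or timeout
    $fp = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($fp === false) {
        return false;
    }
    fclose($fp);
    return true;
}

Note that this only tells you the port accepts connections, not that the web service behind it is healthy.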
Hope my answer helped.
I'm trying to create a simple script that'll let me know if a website is based on WordPress.
The idea is to check whether I get a 404 from a URL when trying to access its wp-admin, like so:
https://www.audi.co.il/wp-admin (which returns "true" because it exists)
When I input a URL that does not exist, like "https://www.audi.co.il/wp-blablabla", PHP still returns "true", even though Chrome shows a 404 in the network tab when I paste that link into its address bar.
Why is that, and how can it be fixed?
This is the code (based on another user's answer):
<?php
$file = 'https://www.audi.co.il/wp-blabla';
$file_headers = @get_headers($file);
if (!$file_headers || strpos($file_headers[0], '404 Not Found')) {
    $exists = "false";
} else {
    $exists = "true";
}
echo $exists;
You can try to fetch the wp-admin page, and if it is not there then there's a good chance it's not WordPress.
function isWordPress($url)
{
    $ch = curl_init();
    // set URL and other appropriate options
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false);
    // fetch the URL; the body is returned rather than printed because of CURLOPT_RETURNTRANSFER
    curl_exec($ch);
    $httpStatus = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
    // close the cURL resource and free up system resources
    curl_close($ch);
    return $httpStatus == 200;
}
if (isWordPress("http://www.example.com/wp-admin")) {
    // This is WordPress
} else {
    // Not WordPress
}
This may not be one hundred percent accurate as some WordPress installations protect the wp-admin URL.
I'm probably late to the party, but another way you can easily detect a WordPress site is by crawling /wp-json. If you're using Guzzle, you can do this:
function isWordPress($url) {
    try {
        $http = new \GuzzleHttp\Client();
        $response = $http->get(rtrim($url, "/") . "/wp-json");
        $contents = json_decode($response->getBody()->getContents());
        if ($contents) {
            return true;
        }
    } catch (\Exception $exception) {
        // request failed or returned a non-2xx status; fall through to false
    }
    return false;
}
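Usage is then a one-liner; for example (the URLs are just placeholders):

var_dump(isWordPress("https://wordpress.org")); // true expected: /wp-json answers with JSON
var_dump(isWordPress("https://example.com"));   // false expected: no /wp-json endpoint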
I have a limit of 25 requests/min from PUBG's official API. For some reason, instead of requesting twice for each search, it is using up 4 requests, and I can't figure out why. I have checked that the code isn't running twice; it runs only once, but it still makes 4 requests.
UPDATE:
I tried making a separate page, and apparently there is a bug somewhere calling my function twice. I still don't know why, but I'm now 99% sure it's not the function itself.
Code for my request:
function getProfile($profileName, $region, $seasonDate){
    $data = new stdClass(); // initialize so we can attach properties below
    // Just check if there is an actual user
    if ($profileName === null) {
        $data->error = "Player Not Found";
        $data->noUser = true;
        return $data;
    } else {
        $season = "division.bro.official." . $seasonDate;
        /*
        Get the user ID
        */
        $ch = curl_init("https://api.pubg.com/shards/$region/players?filter[playerNames]=$profileName");
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Authorization: Bearer APIKEY',
            'Accept: application/vnd.api+json'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $rawData = json_decode(curl_exec($ch), true);
        $data->playerId = $rawData["data"][0]["id"];
        curl_close($ch);
        // Testing if user exists
        if ($rawData["errors"][0]["title"] === "Not Found") {
            $data->noUser = true;
            $data->error = "Player Not Found";
            return $data;
        } else {
            /*
            Get the actual stats
            */
            $ch = curl_init("https://api.pubg.com/shards/$region/players/$data->playerId/seasons/$season");
            curl_setopt($ch, CURLOPT_HTTPHEADER, array(
                'Authorization: Bearer APIKEY',
                'Accept: application/vnd.api+json'));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $data->playerDataJSON = curl_exec($ch);
            $data->playerData = json_decode($data->playerDataJSON, true);
            curl_close($ch);
            return $data;
        }
    }
}
This is how it's getting called:
if (isset($_POST['search-username'])) {
    $username = $_POST['search-username'];
    header("Location: /profile/$username/pc-na/2018-01/overall/tpp");
    die();
}
In the actual profile PHP:
$data = getProfile($page_parts[1], $page_parts[2], $page_parts[3]);
Are you absolutely sure it's only called once? Set a lock on it; change it to:
function getProfile($profileName, $region, $seasonDate){
    static $once = true;
    if ($once !== true) {
        throw new \LogicException("tried to run getProfile() twice!");
    }
    $once = false;
    // ... rest of the function unchanged
Shortly after I figured out it wasn't the function, I realized that the culprit was an empty script I was calling. I knew this script produced an error, which I didn't really care about since the script was empty and I had no idea why it was producing one. For some obscure reason, that script was causing the extra requests. I'll make a lesson out of it: always fix even the smallest errors.
I've also been seeing this behavior: a script with a single curl_exec() request gets called twice.
The strange thing is that this was only happening when running on localhost (under a WAMP installation); when run from any other web server it was fine and was called just once.
I never managed to debug it completely, but it seems to be an issue with the local server, so test elsewhere if you are seeing this.
I am making a website that will check whether a given site is live. I pass in the URL of the site I would like to check, and the following code checks if the site is live and returns the HTTP response code as well as true or false.
function urlExists($url = NULL)
{
    if ($url == NULL) return false;
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    $httpcode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($httpcode == 0) {
        return array(false, $httpcode);
    } else if ($httpcode < 400) {
        return array(true, $httpcode);
    } else {
        return array(false, $httpcode);
    }
}
With one of the sites I am testing, though, I am getting an HTTP response code of 0 even though I know the site is live and working.
The site is very slow, as it's a large site on a not very powerful server, so response times can vary between 7 and 25 seconds.
Any help would be greatly appreciated.
Thanks,
Sam
Based on these two links:
https://curl.haxx.se/libcurl/c/CURLOPT_TIMEOUT.html
and
https://curl.haxx.se/libcurl/c/CURLOPT_CONNECTTIMEOUT.html
The first one sets the maximum time the whole request is allowed to take; the second one is the timeout for the connect phase only.
As you said, the site URL you are hitting takes 7-25 seconds to respond, so your cURL request is terminated and closed before that because of these two 5-second settings.
Increase these two time settings in your code and it will work for you.
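For example (10 and 30 are just illustrative values, chosen to exceed the 25-second worst case you mentioned):

curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // time allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // time allowed for the entire request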
Thanks.
I will offer two alternatives for you to compare; along with your cURL function, you will have three options to see which one is better/faster for you.
Option A (all PHP versions) requires URL access via fopen() to be enabled (allow_url_fopen):
if (!$fp = fopen($url, 'r'))
{
    trigger_error("Unable to open URL ($url)", E_USER_ERROR);
}
$headers = stream_get_meta_data($fp);
fclose($fp);
$http_header_info = $headers['wrapper_data'][0];
// the status line looks like "HTTP/1.1 200 OK"; the code starts at offset 9
$httpCode = (int)substr($http_header_info, 9, 3);
Option B (PHP 5+):
$headers = get_headers($url, 1);
$http_header_info = $headers[0];
$httpCode = substr($http_header_info, 9, 3);
Also, if anyone has benchmarks of these three approaches, I am curious to see which is more appropriate (only for retrieving HTTP response headers, of course).
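In case someone wants to run that comparison, a rough timing harness using microtime() (a minimal sketch, not a rigorous benchmark; it assumes each approach is wrapped in a function such as urlExists above):

function averageTime(callable $check, $url, $runs = 10)
{
    $start = microtime(true);
    for ($i = 0; $i < $runs; $i++) {
        $check($url);
    }
    // average seconds per call
    return (microtime(true) - $start) / $runs;
}

echo averageTime('urlExists', 'https://example.com'), "\n";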
Code 0 is often returned when the URL syntax is invalid or the host is not found.
You can also call the curl_error() function (http://php.net/manual/en/function.curl-error.php) to determine the error details.
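For example, a minimal sketch of surfacing the error detail (the error_log() destination is just illustrative):

$data = curl_exec($ch);
if ($data === false) {
    // curl_error() returns a human-readable message, e.g. "Could not resolve host"
    error_log('cURL error: ' . curl_error($ch));
}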
I am trying to make a simple web crawler with PHP, and I am having issues getting the HTML source of a given URL. I am currently using cURL to get the source.
My code:
$url = "http://www.nytimes.com/";
function url_get_contents($Url) {
    if (!function_exists('curl_init')) {
        die('CURL is not installed!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $output = curl_exec($ch);
    if ($output === false) { die(curl_error($ch)); }
    curl_close($ch);
    return $output;
}
echo url_get_contents($url);
Right now nothing gets echoed and there aren't any errors, so it is a bit of a mystery. Any suggestions or fixes would be appreciated.
Edit: I added
if ($output === false) { die(curl_error($ch)); }
to the middle of the function and it ended up giving me an error (finally!):
Could not resolve host: www.nytimes.com
I still do not really know what the problem is. Any ideas?
Thanks
Turns out that it was not a cURL problem.
My host server (an Ubuntu VM) was working off a "host-only" network adapter, which blocked access to all IPs and domains outside of its host machine, making it impossible for cURL to connect to any URLs.
Once it was changed to a "bridged" network adapter, I had access to the outside world.
Hope this helps.
Variable case mismatch ($url vs. $Url). Change:
function url_get_contents($Url) {
to
function url_get_contents($url) {
I am trying to fix links on a website. I have to check all links on a page for 404s. I am using PHP cURL to check the response HTTP code, but strangely it always returns 200 OK.
Here is my code for is_404():
function is_404($url)
{
    $curl = curl_init($url);
    // don't fetch the actual page, you only want to check the connection is ok
    curl_setopt($curl, CURLOPT_NOBODY, true);
    // do request
    $result = curl_exec($curl);
    $ret = true;
    // if request did not fail
    if ($result !== false) {
        // if request was ok, check response code
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        if ($statusCode == 200) {
            $ret = false;
        }
    }
    curl_close($curl);
    return $ret;
}
It always returns 200 OK, even on a page where a 404 page is clearly displayed. The server handles all 404s with a proper error page.
Any help would be appreciated!
I had the same issue until I understood it was caused by multiple failover IPs in my network configuration:
host A (failover ip1, failover ip2)
host B (failover ip1, failover ip2)
cURL created a false positive on host B because by default it resolved the failover IP locally: even though the failover pointed to host A, the call never left the machine.
Perhaps it's the same in your configuration?
A simple workaround which fixed my problem:
curl_setopt($ch, CURLOPT_INTERFACE, "eth0");
CURLOPT_INTERFACE forces the outgoing request through the given network interface, so it actually reaches the host the failover IP currently points to.