Validating Google Play links via PHP

I want to check via a script whether a Google Play link for an app is valid:
https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggame - valid
https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggamessdasd - invalid
but every script I have bought or found for free gives me a 404 or 303 response. There is probably some redirect involved.
How can I validate links like these? I need to check about 1000 links in my ad system to verify that the apps still exist in the Google Play store.
I will write the loops, database reads, etc. myself, but I would appreciate help from someone familiar with PHP on the check itself. I have already spent some $300 on this and was cheated by two people whose "checking" scripts always return 404 or 303.

Try this:
<?php
/**
 * Check google play app
 *
 * @param string $url Url to check
 *
 * @return boolean True if it exists, false otherwise
 * @throws \Exception On cURL error, an exception is thrown
 */
function checkGooglePlayApp($url)
{
    $curlOptions = array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CUSTOMREQUEST  => 'GET',
        CURLOPT_URL            => $url
    );
    $ch = curl_init();
    curl_setopt_array($ch, $curlOptions);
    $result = curl_exec($ch);
    $http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($curl_error = curl_error($ch))
    {
        curl_close($ch);
        throw new \Exception($curl_error);
    }
    curl_close($ch);
    return $http_code == 200;
}

$url = 'https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggameERRORERROR';
$result = checkGooglePlayApp($url);
var_dump($result); // Should return false

$url = 'https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggame';
$result = checkGooglePlayApp($url);
var_dump($result); // Should return true
It will return:
bool(false)
bool(true)
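Since the question mentions 303 redirects and a batch of about 1000 links, a sketch that follows redirects and skips response bodies may be more practical; it is not from the answer above, and it assumes Google Play still answers 404 for unknown package ids. CURLOPT_NOBODY and CURLOPT_FOLLOWLOCATION are standard cURL options:

```php
<?php
// Sketch: redirect-aware existence check for a batch of Play Store URLs.
// Assumes an unknown package id ultimately yields a non-200 status.
function checkPlayUrls(array $urls)
{
    $results = array();
    $ch = curl_init();
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOBODY         => true,  // HEAD-style request, skip the body
        CURLOPT_FOLLOWLOCATION => true,  // follow the 303 the questioner saw
        CURLOPT_MAXREDIRS      => 5,
        CURLOPT_TIMEOUT        => 10,
    ));
    foreach ($urls as $url) {
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_exec($ch);
        $results[$url] = (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200);
    }
    curl_close($ch);
    return $results;
}
```

Reusing one handle across the loop is deliberate: cURL keeps the connection alive between checks, which matters when probing 1000 URLs against the same host.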

This can be easily done with the get_headers function. For example:
Incorrect URL
$file = 'https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggamessdasd';
$file_headers = get_headers($file);
print_r($file_headers);
Will return:
Array
(
[0] => HTTP/1.0 404 Not Found
[1] => Cache-Control: no-cache, no-store, max-age=0, must-revalidate
[2] => Pragma: no-cache
[3] => Expires: Fri, 01 Jan 1990 00:00:00 GMT
[4] => Date: Tue, 03 Mar 2015 04:23:31 GMT
[5] => Content-Type: text/html; charset=utf-8
[6] => Set-Cookie: NID=67=QFThy03gh34QypYfoLFTz7bJDI-qzXvuzI05DtrF3aVs1L7NJO9byV6kemHRVVkViz-sodx3Z0GuCQTu9a_1JvToen6ZtjfhNy8MH6DDgH6zix2I4Gm9mauBPCxipnlG;Domain=.google.com;Path=/;Expires=Wed, 02-Sep-2015 04:23:31 GMT;HttpOnly
[7] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[8] => X-Content-Type-Options: nosniff
[9] => X-Frame-Options: SAMEORIGIN
[10] => X-XSS-Protection: 1; mode=block
[11] => Server: GSE
[12] => Alternate-Protocol: 443:quic,p=0.08
[13] => Accept-Ranges: none
[14] => Vary: Accept-Encoding
)
If the file does exist, will return:
Array
(
[0] => HTTP/1.0 200 OK
[1] => Content-Type: text/html; charset=utf-8
[2] => Set-Cookie: PLAY_PREFS=CgJVUxC6uYnvvSkourmJ770p:S:ANO1ljKvPst7-nSw; Path=/; Secure; HttpOnly
[3] => Set-Cookie: NID=67=iFUl_Ls8EhAJE7STIJD7Wdq6NF-y4i6Xrlb78My75ZaruVWlAKObDRDNGDddGxD0hSsLRpvrQK7Tp5nuKCgGg2jF1GUf9_4H_zYsUDQ548Be2n8EDjp9clDfXKLYjmSg;Domain=.google.com;Path=/;Expires=Wed, 02-Sep-2015 04:26:14 GMT;HttpOnly
[4] => Cache-Control: no-cache, no-store, max-age=0, must-revalidate
[5] => Pragma: no-cache
[6] => Expires: Fri, 01 Jan 1990 00:00:00 GMT
[7] => Date: Tue, 03 Mar 2015 04:26:14 GMT
[8] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[9] => X-Content-Type-Options: nosniff
[10] => X-Frame-Options: SAMEORIGIN
[11] => X-XSS-Protection: 1; mode=block
[12] => Server: GSE
[13] => Alternate-Protocol: 443:quic,p=0.08
[14] => Accept-Ranges: none
[15] => Vary: Accept-Encoding
)
So you can create a script like:
<?php
$files = [
    'https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggame',
    'https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggamesadasd'
];
foreach ($files as $file)
{
    $headers = get_headers($file);
    if ($headers[0] == 'HTTP/1.0 404 Not Found')
    {
        echo "$file does not exist\n";
    }
    else
    {
        echo "$file exists\n";
    }
}
?>

You can simply do:
function checkGooglePlayApp($url)
{
    $headers = get_headers($url);
    // Note: returns true when the URL is *invalid* (404).
    return $headers[0] == 'HTTP/1.0 404 Not Found';
}

$inValid = checkGooglePlayApp("https://play.google.com/store/apps/details?id=com.ketchapp.zigzaggame");
if (!$inValid)
{
    echo "URL Valid";
}
else
{
    echo "URL Invalid";
}

Related

get_headers() used on live site is not returning any array but on localhost it is

When I use the function get_headers($url), where $url = "https://www.example.com/product.php?id=15", on my live site, it does not return any array from the given URL; I get nothing. But when the same code runs on my localhost, I get the following:
Array
(
[0] => HTTP/1.1 200 OK
[1] => Cache-Control: private
[2] => Content-Type: text/html; charset=utf-8
[3] => Server: Microsoft-IIS/8.5
[4] => Set-Cookie: ASP.NET_SessionId=wumg0dyscw3c4pmaliwehwew; path=/; HttpOnly
[5] => X-AspNetMvc-Version: 4.0
[6] => X-AspNet-Version: 4.0.30319
[7] => X-Powered-By: ASP.NET
[8] => Date: Fri, 18 Aug 2017 13:06:18 GMT
[9] => Connection: close
[10] => Content-Length: 73867
)
So why is the function not working on the live site?
EDIT
<?php
if (isset($_POST['prdurl']))
{
    $url = $_POST['prdurl'];
    print_r(get_headers($url)); // not getting any array on live, but working on localhost
    if (is_array(@get_headers($url)))
    {
        // some code goes here...
    }
    else
    {
        echo "URL doesn't exist!";
    }
}
?>
One more thing to note here: I'm using file_get_html to retrieve the HTML page from the remote URL. It also works on my localhost but not on live.
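One common cause of exactly this localhost-vs-live difference (an assumption here, since the hosting details are unknown) is that the remote server rejects requests carrying no User-Agent header, which is how a default get_headers() call goes out. A sketch that sets one via the default stream context; the UA string is an arbitrary example:

```php
<?php
// Sketch: give get_headers() a User-Agent and a timeout via the
// default HTTP stream context (affects all subsequent stream calls).
stream_context_set_default(array(
    'http' => array(
        'header'  => "User-Agent: Mozilla/5.0 (compatible; LinkChecker)\r\n",
        'timeout' => 10,
    ),
));

// Then retry the original call, e.g.:
// $headers = @get_headers('https://www.example.com/product.php?id=15');
// print_r($headers);
```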

PHP get response time and http status code same request

I can get the HTTP status code of a URL using cURL, and I can get the response time of a URL by doing something like the following...
<?php
// check response time for a web server
function pingDomain($domain)
{
    $starttime = microtime(true);
    // suppress error messages with @
    $file = @fsockopen($domain, 80, $errno, $errstr, 10);
    $stoptime = microtime(true);
    $status = 0;
    if (!$file)
    {
        $status = -1; // Site is down
    }
    else
    {
        fclose($file);
        $status = ($stoptime - $starttime) * 1000;
        $status = floor($status);
    }
    return $status;
}
?>
However, I'm struggling to think of a way to get the HTTP status code and the response time using the same request. If this is possible to do only via curl that would be great.
Note: I don't want/need any other information from the URL as this will slow down my process.
Please use the get_headers() function; it will return the status code. Refer to the PHP docs: http://php.net/manual/en/function.get-headers.php
<?php
$url = "http://www.example.com";
$header = get_headers($url);
print_r($header);
$status_code = $header[0];
echo $status_code;
?>
Output -->
Array
(
[0] => HTTP/1.0 200 OK
[1] => Cache-Control: max-age=604800
[2] => Content-Type: text/html
[3] => Date: Sun, 07 Feb 2016 13:04:11 GMT
[4] => Etag: "359670651+gzip+ident"
[5] => Expires: Sun, 14 Feb 2016 13:04:11 GMT
[6] => Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
[7] => Server: ECS (cpm/F9D5)
[8] => Vary: Accept-Encoding
[9] => X-Cache: HIT
[10] => x-ec-custom-error: 1
[11] => Content-Length: 1270
[12] => Connection: close
)
HTTP/1.0 200 OK
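get_headers() alone does not report timing, though, which was the other half of the question. With cURL you can read both CURLINFO_HTTP_CODE and CURLINFO_TOTAL_TIME from the same request; a sketch (function name is mine):

```php
<?php
// Sketch: one request, two metrics. Returns the status code and the
// elapsed milliseconds; the body is skipped to keep the request light.
function probeUrl($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOBODY         => true,
        CURLOPT_TIMEOUT        => 10,
    ));
    curl_exec($ch);
    $info = array(
        'status' => curl_getinfo($ch, CURLINFO_HTTP_CODE),   // 0 on failure
        'ms'     => curl_getinfo($ch, CURLINFO_TOTAL_TIME) * 1000,
    );
    curl_close($ch);
    return $info;
}

// e.g. print_r(probeUrl('http://www.example.com'));
```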

Infusionsoft API not returning orders by date

I am trying to integrate the Infusionsoft API and I am having some issues in retrieving orders. I need to get all the orders from yesterday and so far this is what I have done.
require_once("src/isdk.php");
$app = new iSDK;
if ($app->cfgCon("connectionName")) {
    echo "connected<br/>";
    echo "app connected<br/>";
    $qry = array('DateCreated' => $app->infuDate('10/30/2014'));
    $rets = array('Id', 'JobTitle', 'ContactId', 'StartDate', 'DueDate', 'JobNotes', 'ProductId', 'JobRecurringId', 'JobStatus', 'DateCreated', 'OrderType', 'OrderStatus', 'ShipFirstName', 'ShipMiddleName', 'ShipLastName', 'ShipCompany', 'ShipPhone', 'ShipStreet1', 'ShipStreet2', 'ShipCity', 'ShipState', 'ShipZip', 'ShipCountry');
    $cards = $app->dsQueryOrderBy("Job", 100, 0, $qry, $rets, 'DateCreated', false);
    echo "<pre>";
    print_r($cards);
    echo "</pre>";
} else {
    echo "Connection Failed";
}
The connection works fine and I am able to retrieve orders using other fields like Id, but for some reason searching by date doesn't work. I am not getting any errors; below is the response I get.
xmlrpcresp Object
(
[val] => yes
[valtyp] => phpvals
[errno] => 0
[errstr] =>
[payload] =>
[hdrs] => Array
(
[server] => Apache-Coyote/1.1
[pragma] => no-cache
[cache-control] => no-cache, no-store
[expires] => Sat, 01 Nov 2014 00:19:59 GMT
[content-type] => text/xml;charset=UTF-8
[content-length] => 121
[date] => Fri, 31 Oct 2014 12:19:59 GMT
[set-cookie] => app-lb=3238199306.20480.0000; path=/
)
[_cookies] => Array
(
[app-lb] => Array
(
[value] => 3238199306.20480.0000
[path] => /
)
)
[content_type] => text/xml
[raw_data] => HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Pragma: no-cache
Cache-Control: no-cache, no-store
Expires: Sat, 01 Nov 2014 00:19:59 GMT
Content-Type: text/xml;charset=UTF-8
Content-Length: 121
Date: Fri, 31 Oct 2014 12:19:59 GMT
Set-Cookie: app-lb=3238199306.20480.0000; path=/
<?xml version="1.0" encoding="UTF-8"?><methodResponse><params><param><value>yes</value></param></params></methodResponse>
)
And the format of the date is below.
20141030T00:00:00
Can anyone please help me with this?
I have also gone through similar questions and haven't found a solution.
Thanks in advance for help.
For those who run into the same problem, below is the solution. Querying for the date using the format below solved the problem for me:
$qry = array('DateCreated' => '2014-10-30%');
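The working query value is just the date in Y-m-d form with a % wildcard appended. Building it from a DateTime, so that "yesterday" can be computed rather than hard-coded, might look like this (the helper name is mine):

```php
<?php
// Build the DateCreated query value ("YYYY-MM-DD%") for a given day.
function infusionDateQuery(DateTime $day)
{
    return $day->format('Y-m-d') . '%';
}

// "Yesterday" relative to a fixed date, for illustration:
$yesterday = new DateTime('2014-10-31');
$yesterday->modify('-1 day');
// $qry = array('DateCreated' => infusionDateQuery($yesterday)); // "2014-10-30%"
```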

PHP check download link without downloading the file

On my site I have a couple of links for downloading files, and I want to write a PHP script that checks whether a download link is still online.
This is the code I'm using:
$cl = curl_init($url);
curl_setopt($cl, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($cl, CURLOPT_HEADER, true);
curl_setopt($cl, CURLOPT_NOBODY, true);
curl_setopt($cl, CURLOPT_RETURNTRANSFER, true);
if (!curl_exec($cl)) {
    echo 'The download link is offline';
    die();
}
$code = curl_getinfo($cl, CURLINFO_HTTP_CODE);
if ($code != 200) {
    echo 'The download link is offline';
} else {
    echo 'The download link is online!';
}
The problem is that it downloads the whole file, which makes it really slow, and I only need to check the headers. I saw that cURL has an option CURLOPT_CONNECT_ONLY, but the web host I'm using runs PHP 5.4, which doesn't have that option. Is there any other way I can do this?
CURLOPT_CONNECT_ONLY would be good, but it's only available in PHP 5.5 and above. So instead, try using get_headers. Or use another method built on fopen, stream_context_create & stream_get_meta_data. First, the get_headers method:
// Set a test URL.
$url = "https://www.google.com/";
// Get the headers.
$headers = get_headers($url);
// Check if the headers are empty.
if (empty($headers)) {
    echo 'The download link is offline';
    die();
}
// Use a regex to see if the response code is 200.
preg_match('/\b200\b/', $headers[0], $matches);
// Act on whether the matches are empty or not.
if (empty($matches)) {
    echo 'The download link is offline';
}
else {
    echo 'The download link is online!';
}
// Dump the array of headers for debugging.
echo '<pre>';
print_r($headers);
echo '</pre>';
// Dump the array of matches for debugging.
echo '<pre>';
print_r($matches);
echo '</pre>';
And the output of this—including the dumps used for debugging—would be:
The download link is online!
Array
(
[0] => HTTP/1.0 200 OK
[1] => Date: Sat, 14 Jun 2014 15:56:28 GMT
[2] => Expires: -1
[3] => Cache-Control: private, max-age=0
[4] => Content-Type: text/html; charset=ISO-8859-1
[5] => Set-Cookie: PREF=ID=6e3e1a0d528b0941:FF=0:TM=1402761388:LM=1402761388:S=4YKP2U9qC6aMgxpo; expires=Mon, 13-Jun-2016 15:56:28 GMT; path=/; domain=.google.com
[6] => Set-Cookie: NID=67=Wun72OJYmuA_TQO95WXtbFOK5g-xU53PQZ7dAIBtzCaBWxhXzduHQZfBVPf4LpaK3MVH8ZKbrBIc3-vTKuMlEnMdpWH0mcft5pA_0kCoe4qolDmednpPJqezZF_HyfXD; expires=Sun, 14-Dec-2014 15:56:28 GMT; path=/; domain=.google.com; HttpOnly
[7] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[8] => Server: gws
[9] => X-XSS-Protection: 1; mode=block
[10] => X-Frame-Options: SAMEORIGIN
[11] => Alternate-Protocol: 443:quic
)
Array
(
[0] => 200
)
And here is another method using fopen, stream_context_create & stream_get_meta_data. The benefit of this method is it gives you a bit more info on what actions were taken to fetch the URL in addition to the headers:
// Set a test URL.
$url = "https://www.google.com/";
// Set the stream_context_create options.
$opts = array(
    'http' => array(
        'method' => 'HEAD'
    )
);
// Create a context stream with stream_context_create.
$context = stream_context_create($opts);
// Use fopen with rb (read binary) set and the context set above.
$handle = fopen($url, 'rb', false, $context);
// Get the headers with stream_get_meta_data.
$headers = stream_get_meta_data($handle);
// Close the fopen handle.
fclose($handle);
// Use a regex to see if the response code is 200.
preg_match('/\b200\b/', $headers['wrapper_data'][0], $matches);
// Act on whether the matches are empty or not.
if (empty($matches)) {
    echo 'The download link is offline';
}
else {
    echo 'The download link is online!';
}
// Dump the array of headers for debugging.
echo '<pre>';
print_r($headers);
echo '</pre>';
And here is the output of that:
The download link is online!
Array
(
[wrapper_data] => Array
(
[0] => HTTP/1.0 200 OK
[1] => Date: Sat, 14 Jun 2014 16:14:58 GMT
[2] => Expires: -1
[3] => Cache-Control: private, max-age=0
[4] => Content-Type: text/html; charset=ISO-8859-1
[5] => Set-Cookie: PREF=ID=32f21aea66dcfd5c:FF=0:TM=1402762498:LM=1402762498:S=NVP-y-kW9DktZPAG; expires=Mon, 13-Jun-2016 16:14:58 GMT; path=/; domain=.google.com
[6] => Set-Cookie: NID=67=mO_Ihg4TgCTizpySHRPnxuTp514Hou5STn2UBdjvkzMn4GPZ4e9GHhqyIbwap8XuB8SuhjpaY9ZkVinO4vVOmnk_esKKTDBreIZ1sTCsz2yusNLKA9ht56gRO4uq3B9I; expires=Sun, 14-Dec-2014 16:14:58 GMT; path=/; domain=.google.com; HttpOnly
[7] => P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
[8] => Server: gws
[9] => X-XSS-Protection: 1; mode=block
[10] => X-Frame-Options: SAMEORIGIN
[11] => Alternate-Protocol: 443:quic
)
[wrapper_type] => http
[stream_type] => tcp_socket/ssl
[mode] => rb
[unread_bytes] => 0
[seekable] =>
[uri] => https://www.google.com/
[timed_out] =>
[blocked] => 1
[eof] =>
)
Try adding curl_setopt($cl, CURLOPT_CUSTOMREQUEST, 'HEAD'); to send a HEAD request.
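Putting that suggestion together with the code from the question, a header-only sketch might look like the following. (Worth noting: CURLOPT_NOBODY on its own already makes cURL issue a HEAD request, so the explicit CURLOPT_CUSTOMREQUEST line is belt-and-braces; the function name is mine.)

```php
<?php
// Sketch: header-only availability check. No response body is transferred.
function linkIsOnline($url)
{
    $cl = curl_init($url);
    curl_setopt($cl, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($cl, CURLOPT_NOBODY, true);
    curl_setopt($cl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($cl, CURLOPT_CUSTOMREQUEST, 'HEAD');
    $ok = curl_exec($cl) !== false
        && curl_getinfo($cl, CURLINFO_HTTP_CODE) == 200;
    curl_close($cl);
    return $ok;
}
```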

Checking if a URL returns a PDF

I need to check if a URL returns a PDF document using PHP. Right now I'm using the file_get_mimetype function. A normal URL (https://www.google.com/) returns type application/octet-stream, while a normal PDF link (http://www.brainlens.org/content/newsletters/Spring%202013.pdf) returns application/pdf. But now I also encounter URLs like http://www.dadsgarage.com/~/media/Files/example.ashx or http://www.wpdn.org/webfm_send/241 which are also PDFs but return application/octet-stream. There are also URLs that open a Save As dialog box, which have to be detected too.
Use get_headers()
Eg:
$url = "http://www.dadsgarage.com/~/media/Files/example.ashx";
if (in_array("Content-Type: application/pdf", get_headers($url))) {
    echo "URL returns PDF";
}
print_r(get_headers($url));
returns
Array
(
[0] => HTTP/1.1 200 OK
[1] => Cache-Control: private, max-age=604800
[2] => Content-Length: 194007
[3] => Content-Type: application/pdf
[4] => Expires: Tue, 09 Dec 2014 09:40:20 GMT
[5] => Last-Modified: Wed, 07 Aug 2013 16:46:30 GMT
[6] => Accept-Ranges: bytes
[7] => Server: Microsoft-IIS/8.0
[8] => Content-Disposition: inline; filename="example.pdf"
[9] => X-AspNet-Version: 4.0.30319
[10] => X-Powered-By: ASP.NET
[11] => X-Provider: AWS
[12] => Date: Tue, 02 Dec 2014 09:40:20 GMT
[13] => Connection: close
)
MIME types for PDF could include:
application/pdf, application/x-pdf, application/acrobat, applications/vnd.pdf, text/pdf, text/x-pdf
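An exact in_array() match misses headers such as "Content-Type: application/pdf; charset=binary", differences in case, and the alternate MIME types just listed. A helper (names are mine) that normalises the header value before comparing:

```php
<?php
// Return true if a get_headers()-style array advertises a PDF body.
function headersIndicatePdf(array $headers)
{
    $pdfTypes = array(
        'application/pdf', 'application/x-pdf', 'application/acrobat',
        'applications/vnd.pdf', 'text/pdf', 'text/x-pdf',
    );
    foreach ($headers as $line) {
        if (stripos($line, 'Content-Type:') !== 0) {
            continue;
        }
        // Keep only the media type: drop the header name and any
        // parameters such as "; charset=...".
        $value = strtolower(trim(substr($line, strlen('Content-Type:'))));
        $value = trim(explode(';', $value, 2)[0]);
        if (in_array($value, $pdfTypes, true)) {
            return true;
        }
    }
    return false;
}
```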
