Please read my scenario below.
I have been given a link. When the link is opened in a web browser, a message is sent to the intended recipients. For my website, however, I need the link to be executed from PHP, because I retrieve the member name from the database.
Steps....
Retrieve name from db
$URL = "ABC.com&msg=".$msg
(Execute the link)
/* do something
url = 'http://api.smsgatewayhub.com/smsapi/pushsms.aspx?user=stthomasmtc&pwd=429944&to=9176411081&sid=STMTSC&msg=Dear%20Sam,%20choir%20practice%20will%20be%20held%20in%20our%20Church%20on%20July%2031%20at%208:00%20pm.%20Thanks,%20St.%20Thomas%20MTC!&fl=0&gwid=2'
I am not sure how to execute a link without redirecting, hence I cannot use header().
I tried file_get_contents(), but it didn't work.
Can you please guide me? Thanks!
Why not use AJAX?
With AJAX you can call the external link through an HTTP client, get the data back, and render it on the UI side.
Once you retrieve the data in JSON/XML format, render it there.
Well, first of all you'd need the http:// part (and a ? before the query string) for file_get_contents to work:
$URL = "http://example.com?msg=" . $msg;
$result = file_get_contents($URL);
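Building on that, the message text also has to be URL-encoded before it goes into the query string. A minimal sketch, assuming the gateway endpoint and parameter names shown in the question's URL (the values here are placeholders):

```php
<?php
// Sketch only: endpoint and parameter names come from the question's URL;
// credentials and values are placeholders.
$msg = "Dear Sam, choir practice will be held on July 31 at 8:00 pm.";
$params = http_build_query([
    'user' => 'stthomasmtc',
    'to'   => '9176411081',
    'msg'  => $msg, // http_build_query() URL-encodes the value for you
    'fl'   => 0,
]);
$url = 'http://api.smsgatewayhub.com/smsapi/pushsms.aspx?' . $params;
$result = @file_get_contents($url); // fires the request, no browser redirect
```

Note that file_get_contents() needs allow_url_fopen enabled in php.ini for remote URLs.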
You can use cURL to hit the URL after fetching the details from the database.
See the PHP manual on cURL.
function get_http_request($uri, $time_out = 100, $headers = 0)
{
    $ch = curl_init(); // Initializing
    curl_setopt($ch, CURLOPT_URL, trim($uri)); // Set URI
    curl_setopt($ch, CURLOPT_HEADER, $headers); // Include headers in the output?
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Return the response instead of printing it
    curl_setopt($ch, CURLOPT_TIMEOUT, $time_out); // Time-out in seconds
    $result = curl_exec($ch); // Executing
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($httpCode != 200) {
        $result = ""; // Discard the body on non-200 responses
    }
    curl_close($ch); // Closing the channel
    return $result;
}
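A hedged usage sketch for the helper above; the member name stands in for the DB lookup described in the question, and the endpoint is the one from the question's URL:

```php
<?php
// Usage sketch for get_http_request() from the answer above.
if (!function_exists('get_http_request')) {
    // minimal stand-in so this sketch runs on its own
    function get_http_request($uri) { return @file_get_contents(trim($uri)) ?: ""; }
}
$name = 'Sam'; // in the real code this comes from the database query
$msg  = urlencode("Dear $name, choir practice will be held on July 31.");
$url  = "http://api.smsgatewayhub.com/smsapi/pushsms.aspx?msg=$msg&fl=0";
$response = get_http_request($url); // "" signals a non-200 response
if ($response === "") {
    // non-200 response: log the failure or retry
}
```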
Related
I have a large system with an API.
On the frontend, JavaScript uses AJAX to talk to the API.
I have a PHP file that runs every 5 minutes as a CRON job.
I want this PHP code to interact with the API.
All it has to submit is query-vars.
All that is sent back is a single number.
For Example:
https://examplesite.com/api/create?id=1&data=2
This replies with a simple number that is the SQL last-insert-id.
EXTRA:
The API also needs two Session variables (user-id and system-id)
Can I just start the session and set them before calling the API?
I need the PHP script, ran by the CRON system, to talk to this API.
I have tried using cURL but no luck yet:
//Need to add a user-id to session, does this work?
session_start();
$_SESSION['user-id'] = 1;
//HOW TO CALL API FROM CRON?
$ch = curl_init();
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
//curl_setopt($ch, CURLOPT_URL, 'https://example.com/api/create?id=1&gid=2');
//curl_setopt($ch, CURLOPT_URL, 'http://example.com/api/create?id=1&gid=2');
curl_setopt($ch, CURLOPT_URL, 'file://../framework/api/create.php?id=1&data=2');
$result = curl_exec($ch);
curl_close($ch);
$result = str_replace("\n", '', $result); // remove new lines
$result = str_replace("\r", '', $result); // remove carriage returns
//Expect Result to be a number only
file_put_contents("curl.log", "[".date('Y-m-d H:i:s')."] $result\n\n", FILE_APPEND);
The file method doesn't seem to work... maybe path issue with ../
The http method doesn't seem to work... server loopback issue?
Any advice on how to best have my PHP CRON robot use my API will be much appreciated.
I have simply copied API code into the CRON, but then I am duplicating code, and not allowing the robot to test the real API.
Thanks.
Assuming you still want to use a session: your first cURL call should be a request to a script that creates the SESSION and then responds.
I created this get_cookie.php to test the concept:
<?php
session_start();
$_SESSION['time'] = time();
echo 'Time=' . $_SESSION['time'];
?>
I called this script to get the PHPSESSID from the response cookie:
$ch = curl_init('http://example.com/get_cookie.php');
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
$skip = intval(curl_getinfo($ch, CURLINFO_HEADER_SIZE));
$head = substr($response,0,$skip);
$data = substr($response,$skip);
$end = 0;
$start = strpos($head,'Set-Cookie: ',$end);
$start += 12;
$end = strpos($head,';',$start );
$cookie = substr($head,$start ,$end-$start );
file_put_contents('cookie.txt',$cookie);
echo "\ncookie=$cookie";
echo "\n$data\n";
RESPONSE:
cookie=PHPSESSID=bc65c95468d08dd02cc5ab8ab87bbd39
Time=1664237484
The CRON job URL, session.php:
<?php
session_start();
echo 'Time=' . $_SESSION['time'];
?>
This is the CRON job script.
$cookie = file_get_contents('cookie.txt');
$ch = curl_init('http://example.com/session.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIE, $cookie );
curl_setopt($ch, CURLOPT_HEADER, false);
$response = curl_exec($ch);
echo $response;
RESPONSE:
Time=1664237484
The result "Time" ($_SESSION['time']) is always the same as the time from get_cookie.php.
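As an aside, cURL can persist the session cookie itself, which avoids parsing the Set-Cookie header by hand. A sketch under the same assumptions (get_cookie.php is the test script above; the URL is a placeholder):

```php
<?php
// CURLOPT_COOKIEJAR writes cookies to a file when the handle closes;
// CURLOPT_COOKIEFILE reads them back in on later runs, so the CRON
// script keeps the same PHPSESSID across invocations.
$jar = __DIR__ . '/cookie_jar.txt';
$ch = curl_init('http://example.com/get_cookie.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);
$data = curl_exec($ch);
curl_close($ch); // the cookie jar file is written here
```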
Thanks for the help, but in the end my alternate approach will be easiest and work best. Here is the answer for others in need...
To have an external PHP (run via CRON) test an internal PHP API (that the frontend JavaScript usually talks to via AJAX), this is the easiest solution, including passing query-variables and session-vars.
STEP 1: Don't worry about session_start() or passing any query variables; just add what you need to the arrays ahead of time.
STEP 2: Capture the output buffer and simply include the PHP API file:
$_SESSION['user-id'] = ' works! ';
$_REQUEST['var'] = ' too ';
ob_start();
include '/var/www/sitename.com/framework/api/create.php';
$result = ob_get_contents();
ob_end_clean();
file_put_contents("/var/www/sitename.com/cron/curl.log", "[".date('Y-m-d H:i:s')."] Result: $result\n\n", FILE_APPEND);
/* OUTPUT in curl.log:
[2022-09-26 22:30:01] Result: works! / too
*/
/* create.php API
$user = $_SESSION['user-id']??' did not work ';
$req = $_REQUEST['var' ]??' novar ';
echo $user.'/'.$req;
*/
(obviously the CRON has to be on the same server as the website/api, otherwise you will have to use cURL with Cookie/Session/QueryVar-Arrays)
I want to return the first URL of a Google search result, like:
First Url Result
PHP code:
<?php
// Check if the submit button was clicked
if (isset($_POST['submit'])) {
    $text = $_POST['text'];

    function file_get_contents_curl($url) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // Return the data instead of printing it to the browser
        curl_setopt($ch, CURLOPT_URL, $url);
        $data = curl_exec($ch);
        curl_close($ch);
        return $data;
    }

    $query = $text;
    $url = 'http://www.google.co.in/search?q=' . urlencode($query);
    echo $url;
    $scrape = file_get_contents_curl($url);
    echo $scrape;
}
?>
How can I achieve that?
By default, since this scraping method only sends an HTTP request and does not render the page as a browser does, Google won't serve the page you are looking for; it will show you an agreement page instead.
You should use the Google Search API, as mentioned by ADyson in the comments.
Another approach, which is not recommended, is to use Selenium or a headless browser. With a headless browser you will still be prompted with the agreement page, but behind it you can scrape the search results.
I have created a signed URL for Amazon S3 and it opens perfectly in the browser.
http://testbucket.com.s3.amazonaws.com/100-game-play-intro-1.m4v?AWSAccessKeyId=AKIAJUAjhkhkjhMO73BF5Q&Expires=1378465934&Signature=ttmsAUDgJjCXepwEXvl8JdFu%2F60%3D
**Bucket name and access key changed in this example
I am, however, trying to use the function below to check (using cURL) that the file exists. The cURL connection fails. If I replace $url with the URL of an image outside of S3, this code works perfectly.
I know the file exists in Amazon, but I can't work out why this code fails when using a signed URL as above.
Any ideas?
Thanks
Here is my code.
function remoteFileExists($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    // don't fetch the actual file, only get headers to check if the file exists
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_NOBODY, true);
    $result = curl_exec($ch);
    if ($result !== false) {
        // read the status code before closing the handle
        $statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        $ret = ($statusCode == 200);
    } else {
        $ret = 'connection failed';
    }
    curl_close($ch);
    return $ret;
}
When using CURLOPT_NOBODY, libcurl sends an HTTP HEAD request, not a GET request.
...the string to be signed is formed by appending the REST verb, content-md5 value, content-type value, expires parameter value, canonicalized x-amz headers (see recipe below), and the resource; all separated by newlines.
— http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
The "REST verb" -- e.g., GET vs HEAD -- must be consistent between the signature you generate and the request that you make, so a signature that is valid for GET will not be valid for HEAD and vice versa.
You will need to sign a HEAD request instead of a GET request in order to validate a file in this way.
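A minimal sketch of what that means in code, using the legacy (v2) query-string signing scheme quoted above; the bucket, key, and credentials here are placeholders:

```php
<?php
// Sign for the verb you will actually send. CURLOPT_NOBODY issues HEAD,
// so the string to sign must start with "HEAD" instead of "GET".
function s3_signed_url($verb, $bucket, $key, $accessKey, $secretKey, $expires) {
    // v2 string to sign: VERB \n content-md5 \n content-type \n expires \n resource
    $stringToSign = "$verb\n\n\n$expires\n/$bucket/$key";
    $signature = urlencode(base64_encode(
        hash_hmac('sha1', $stringToSign, $secretKey, true)
    ));
    return "http://$bucket.s3.amazonaws.com/$key"
         . "?AWSAccessKeyId=$accessKey&Expires=$expires&Signature=$signature";
}

// HEAD-signed URL for the remoteFileExists() check:
$url = s3_signed_url('HEAD', 'testbucket.com', '100-game-play-intro-1.m4v',
                     'AKIA_PLACEHOLDER', 'SECRET_PLACEHOLDER', time() + 300);
```

Newer buckets use Signature Version 4 instead, where the official AWS SDK's presign helpers are the safer route.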
You can check via the response headers:
$full_url = 'https://www.example.com/image.jpg';
$file_headers = @get_headers($full_url);
if ($file_headers && strpos($file_headers[0], '200 OK')) {
    // enter code here
}
Or, if you are using AWS S3, you can also use this one:
if(!class_exists('S3')){
require('../includes/s3/S3.php');
}
S3::setAuth(awsAccessKey, awsSecretKey);
$info = S3::getObjectInfo($bucketName, $s3_furl);
// check for $info value and apply your condition.
When I execute the following code it takes between 10-12 seconds to respond.
Is the problem with Twitter or with our server?
I really need to know as this is part of the code to display tweets on our website and a 12 second load time is just not acceptable!
function get_latest_tweets($username)
{
    print "<font color=red>**" . time() . "**</font><br>";
    $path = 'http://api.twitter.com/1/statuses/user_timeline/' . $username . '.json?include_rts=true&count=2';
    $json = file_get_contents($path);
    print "<font color=red>**" . time() . "**</font><br>";
}
Thanks
When you put the URL into your browser (http://api.twitter.com/1/statuses/user_timeline/username.json?include_rts=true&count=2) how long does it take for the page to appear? If it's quick then you need to start the search at your server.
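One way to narrow it down from the server side is to time just the fetch itself; a rough sketch (the endpoint is the long-retired v1 URL from the question, so the call itself is expected to fail today):

```php
<?php
// Measure only the remote call, so slow DNS or network shows up clearly
// separate from any rendering work the page does afterwards.
$path = 'http://api.twitter.com/1/statuses/user_timeline/username.json'
      . '?include_rts=true&count=2';
$start = microtime(true);
$json = @file_get_contents($path); // @: the v1 API no longer exists
$elapsed = microtime(true) - $start;
printf("fetch took %.3f s\n", $elapsed);
```

If the fetch alone accounts for the 10-12 seconds, the problem is upstream (DNS, network, or the API); if not, look at the rest of the page.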
Use cURL instead of file_get_contents() for the request, so that the response can be compressed. Here is the cURL function I am using:
function curl_file_get_contents($url)
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url); // The URL to fetch; can also be set in curl_init()
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE); // Return the transfer as a string instead of outputting it directly
    curl_setopt($curl, CURLOPT_ENCODING, "gzip"); // Ask the server for a compressed response
    curl_setopt($curl, CURLOPT_FAILONERROR, TRUE); // Fail silently if the HTTP code returned is >= 400
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, TRUE);
    $contents = curl_exec($curl);
    curl_close($curl);
    return $contents;
}
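A usage sketch for the helper above, decoding the response for display (the endpoint is the one from the question, which has since been retired):

```php
<?php
// Usage sketch for curl_file_get_contents() from the answer above.
if (!function_exists('curl_file_get_contents')) {
    // minimal stand-in so this sketch runs on its own
    function curl_file_get_contents($url) { return @file_get_contents($url); }
}
$path = 'http://api.twitter.com/1/statuses/user_timeline/username.json'
      . '?include_rts=true&count=2';
$body = curl_file_get_contents($path);
if ($body !== false && $body !== '') {
    $tweets = json_decode($body, true); // decode the JSON into an array
}
```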
I need to implement a simple PHP proxy in a web application I am building. (It's Flash-based and the destination service provider doesn't allow edits to their crossdomain.xml file.)
Can any php gurus offer advice on the following 2 options? Also, I think, but am not sure, that I need to include some header info as well.
Thanks for any feedback!
option 1
$url = $_GET['path'];
readfile($url);
option2
$content = file_get_contents($_GET['path']);
if ($content !== false)
{
echo($content);
}
else
{
// there was an error
}
First of all, never ever ever include a file based only on user input. Imagine what would happen if someone would call your script like this:
http://example.com/proxy.php?path=/etc/passwd
Then on to the issue: what kind of data are you proxying? If any kind at all, you need to detect the content type from the content and pass it on, so the receiving end knows what it's getting. I would suggest using something like HTTP_Request2 from PEAR (see: http://pear.php.net/package/HTTP_Request2) if at all possible. If you have access to it, you could do something like this:
// First validate that the request is to an actual web address
if (!preg_match("#^https?://#", $_GET['path'])) {
    header("HTTP/1.1 404 Not found");
    echo "Content not found, bad URL!";
    exit();
}
// Make the request
$req = new HTTP_Request2($_GET['path']);
$response = $req->send();
// Output the content-type header and use the content-type of the original file
header("Content-type: " . $response->getHeader("Content-type"));
// And provide the file body
echo $response->getBody();
Note that this code hasn't been tested, this is just to give you a starting point.
Here's another solution using cURL.
Can anyone comment?
$ch = curl_init();
$timeout = 30;
$userAgent = $_SERVER['HTTP_USER_AGENT'];
curl_setopt($ch, CURLOPT_URL, $_REQUEST['url']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
$response = curl_exec($ch);
if (curl_errno($ch)) {
    echo curl_error($ch);
} else {
    echo $response;
}
curl_close($ch); // close the handle in both cases
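For completeness, a hedged sketch combining this cURL approach with the URL validation and Content-Type passthrough suggested in the earlier answer; the function name is my own, and a production proxy should also whitelist allowed hosts:

```php
<?php
// Sketch only: reject non-HTTP(S) targets (so file paths like /etc/passwd
// can't be read), then forward both the body and the content type.
function proxy_fetch($path) {
    if (!preg_match('#^https?://#', $path)) {
        return null; // not a web URL: refuse to proxy it
    }
    $ch = curl_init($path);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    $body = curl_exec($ch);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    curl_close($ch);
    return ['body' => $body, 'type' => $type];
}

$result = proxy_fetch($_REQUEST['url'] ?? '');
if ($result === null) {
    header('HTTP/1.1 404 Not Found');
    echo 'Content not found, bad URL!';
} else {
    header('Content-Type: ' . ($result['type'] ?: 'application/octet-stream'));
    echo $result['body'];
}
```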