Calling a PHP file from a PHP loop in background - php

I have a PHP loop where I need to call another PHP file in the background, to insert/update some information based on a variable sent to it. I have tried to use cURL, but it does not seem to work.
I need it to call SQLupdate.php?symbol=$symbol. Is there another way of calling that PHP file with the parameter in the background? And can it eventually be done synchronously, with a response back for each loop iteration?
while (($row = mysqli_fetch_array($res)) and ($counter < $max)) {
    $ch = curl_init();
    $curlConfig = array(
        CURLOPT_URL => "SQLinsert.php",
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS => array(
            'symbol' => $symbol,
        )
    );
    curl_setopt_array($ch, $curlConfig);
    $result = curl_exec($ch);
    curl_close($ch);
}

I'm going to weigh in down here in hopes of getting this one "away & done".
Although it isn't entirely clear from your post, it seems you're trying to call your PHP file via an HTTP(s) protocol.
In many PHP configurations you can do this, and avoid some of cURL's overhead, by using file_get_contents() instead:
while (($row = mysqli_fetch_array($res)) and ($counter < $max)) {
    $postdata = http_build_query(
        array(
            'symbol' => $row['symbol']
        )
    );
    $opts = array('http' =>
        array(
            'method' => 'POST',
            'header' => 'Content-type: application/x-www-form-urlencoded',
            'content' => $postdata
        )
    );
    $context = stream_context_create($opts);
    $result = file_get_contents('http://example.com/SQLinsert.php', false, $context);
    $counter++; // you didn't mention this, but you don't want an endless loop...
}
That's pretty much a textbook example copied from the manual, actually.
To use cURL instead, as you originally tried, move the handle setup outside the loop; it stays fairly clean with a single curl_setopt() call inside the loop:
$ch = curl_init();
$curlConfig = array(
    CURLOPT_URL => "http://example.com/SQLinsert.php",
    CURLOPT_POST => true,
    CURLOPT_RETURNTRANSFER => true
);
curl_setopt_array($ch, $curlConfig);
while (($row = mysqli_fetch_array($res)) and ($counter < $max)) {
    curl_setopt($ch, CURLOPT_POSTFIELDS, array('symbol' => $row['symbol']));
    $result = curl_exec($ch);
    $counter++; //see above
}
// do this *after* the loop
curl_close($ch);
Now, the actual and original problem may be that $symbol is never initialized; at least, it isn't in the example you provided. I've attempted to fix this by using $row['symbol'] in both of my examples. If that isn't the name of the column in the database, you would obviously need to use the correct name.
Finally, be advised that it's almost always better to access a secondary resource via the fastest available mechanism. If "SQLinsert.php" is local to the calling script, going over HTTP(s) will be terribly under-performant; you should rewrite both pieces of the system to work from a local (e.g. 'disk-based') point of view, which has already been recommended by a plethora of commenters:
//SQLinsert.php
function myInsert($symbol) {
    // you've not given us any DB schema information ...
    global $db; //hack, *cough*
    $sql = "insert into `myTable` (symbol) values('$symbol')";
    $res = $db->query($sql); // $this has no meaning outside a class; use the global handle
    return (bool) $res;
}
//script.php
require_once("SQLinsert.php");
while (($row = mysqli_fetch_array($res)) and ($counter < $max)) {
    $ins = myInsert($row['symbol']);
    if ($ins) { // let's only count *good* inserts, which is possible
                // because we've written 'myInsert' to return a boolean
        $counter++;
    }
}
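If you go the local-function route, a parameterized variant avoids the SQL-injection risk of interpolating $symbol directly into the query. This is only an illustrative sketch, assuming $db is a connected mysqli instance and `myTable` stays a placeholder table name:

```php
//SQLinsert.php (prepared-statement variant; myInsertSafe and myTable are illustrative names)
function myInsertSafe(mysqli $db, string $symbol): bool {
    $stmt = $db->prepare("INSERT INTO `myTable` (symbol) VALUES (?)");
    if ($stmt === false) {
        return false;
    }
    $stmt->bind_param('s', $symbol); // 's' = bind as string
    $ok = $stmt->execute();
    $stmt->close();
    return $ok;
}
```

Passing the connection in explicitly also does away with the global $db hack.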

Related

call php array inside the function

require_once('db_lib.php');
$oDB = new db;
$result = $oDB->select('select * from tweet_urls');
while ($row = mysqli_fetch_row($result)) {
    //echo $row['1'].'</br>';
    echo get_follow_url($row['1']);
}

function get_follow_url($url) {
    $ch = curl_init();
    curl_setopt_array($ch, array(
        CURLOPT_URL => $url,
        CURLOPT_HEADER => false,
        CURLOPT_NOBODY => true,
        CURLOPT_FOLLOWLOCATION => true,
    ));
    curl_exec($ch);
    $follow_url = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
    curl_close($ch);
    return $follow_url;
}
I extract tweet URLs from Twitter and I want to expand the shortened URLs into their original long URLs.
What is wrong with my code? I call the function get_follow_url($url) inside the while loop; I think I'm making a mistake in how I pass the array element, get_follow_url($row['1']), into the function call.
You need to use the field position, which is $row[1] without quotes, since mysqli_fetch_row() returns a numerically indexed array. If you want to access fields by column name instead, use mysqli_fetch_assoc() or mysqli_fetch_array().

curl_exec() prints the returned JSON data, won't put it into a variable

There are many such questions on Stack Overflow & elsewhere, but they all seem to be for earlier versions of PHP as their answers refer to CURLOPT_RETURNTRANSFER, open_basedir and allow_url_include.
I am using PHP 5.4.17. Here’s my code:
$curl = curl_init();
if ($curl === False) {
    die('Fatal error initiating CURL');
}
curl_setopt_array($curl, array(
    CURLOPT_HTTPGET => True,
    CURLOPT_RETURNTRANSFER => True,
    CURLOPT_FOLLOWLOCATION => True,
    CURLOPT_URL => $gatewayURL . $parameters
));
$rawJasonData = curl_exec($curl);
curl_close($curl);
if ($rawJasonData === False)
The code seems to be OK—although I will admit that this is my first time using CURL—because the returned JSON data is echoed.
I want to capture it in a variable, how do I do that (without resorting to output buffering)?
[Update] I am certain that I don't var_dump() or echo the result myself. Neither 1 instead of True nor uppercase TRUE makes any difference.
I am developing locally, but using an entry in the Windows HOST file in my URL, not localhost.
Not sure why cURL isn't working for you, but since you are just making a simple GET request, why not just do:
$rawJasonData = file_get_contents($gatewayURL.$parameters);
Likely a non-issue, but:
curl_setopt_array($curl, array(
    CURLOPT_HTTPGET => 1,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_FOLLOWLOCATION => 1,
    CURLOPT_HEADER => 0,
    CURLOPT_URL => $gatewayURL . $parameters
));
In previous versions of PHP, I encountered significant issues with using True when setting curl options. Give 1 a shot and see what happens. This worked for me, but it could have been due to the environment I was working in. Just wanting to mention this in case you have some weird environment (like I had) that caused the most odd problems.
As a matter of personal preference, I set the options individually:
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_HEADER, 1);
curl_setopt($curl, CURLOPT_HTTPGET, 1);
As for your question -- I'd remove CURLOPT_FOLLOWLOCATION as that will follow redirects and you shouldn't have any in your case.
I suspect that CURLOPT_FOLLOWLOCATION is the issue here.
One note first: PHP's boolean constants are case-insensitive, so True and TRUE behave identically; the capitalization is not the problem. What does matter is that curl_setopt_array() expects the integer CURLOPT_* constants as array keys, not quoted strings like 'CURLOPT_HTTPGET'. Here is my refactored version of your code that should work:
$curl = curl_init();
if (!$curl) {
    die('Fatal error initiating cURL');
}
$curl_options = array();
$curl_options[CURLOPT_HTTPGET] = TRUE;
$curl_options[CURLOPT_RETURNTRANSFER] = TRUE;
$curl_options[CURLOPT_HTTPHEADER] = array('Content-Type: application/json', 'Accept: application/json');
$curl_options[CURLOPT_URL] = $gatewayURL . $parameters;
curl_setopt_array($curl, $curl_options);
$rawJasonData = curl_exec($curl);
curl_close($curl);
if (!$rawJasonData)
I added CURLOPT_HTTPGET as TRUE to force GET behavior from cURL, as well as JSON-related headers in CURLOPT_HTTPHEADER.
Beyond that, the === False checks are a bit more verbose than needed; a simple if (!$curl) and if (!$rawJasonData) work as expected here.
If that still somehow does not work, change the TRUE values to 1, again using the bare constants as keys:
$curl_options = array();
$curl_options[CURLOPT_HTTPGET] = 1;
$curl_options[CURLOPT_RETURNTRANSFER] = 1;

Google Closure, Response Always Empty

I'm trying to get together a basic example of how to use Google Closure to minify JS. I can't seem to get this to work at all.
I'm trying to follow these examples:
https://developers.google.com/closure/compiler/docs/gettingstarted_api
http://closure-compiler.appspot.com/home
When working on APIs and/or AJAX code, the first thing I try to do is get the variables and values set up properly using the Advanced REST Client Chrome extension. Whenever I send this data, though, I get an empty response.
Moving the same request into my PHP code, no matter what I send in the $postData variable, I also get an empty (null) response.
PHP Code:
$postData = http_build_query(
    [
        'output_info' => 'compiled_code',
        'output_format' => 'text',
        'compilation_level' => 'SIMPLE_OPTIMIZATIONS',
        'js_code' => urlencode("function hello(name) { // Greets the user alert('Hello, ' + name); } hello('New user');")
    ]
);
$ret = $this->ci->curl->simple_post(
    $url,
    $postData,
    $options
);
var_dump($ret);
die();
Response:
string ' ' (length=1)
I'm 99% confident that I'm missing something to use the Closure API like a key or something, but I have no idea how to proceed.
After many, many, many attempts, I found that if I used rawurlencode() instead of urlencode(), it works. Here's the final function.
// use google closure to get compiled JS
$encoded = rawurlencode($js);
$postData =
    'output_info=compiled_code&' .
    'output_format=text&' .
    'compilation_level=WHITESPACE_ONLY&' .
    'js_code=' . $encoded;
$options = [];
$call = curl_init();
curl_setopt_array(
    $call,
    array(
        CURLOPT_URL => 'http://closure-compiler.appspot.com/compile',
        CURLOPT_POST => 1,
        CURLOPT_POSTFIELDS => $postData,
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_HEADER => 0,
        CURLOPT_FOLLOWLOCATION => 0
    )
);
$jscomp = curl_exec($call);
return $jscomp;
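For context on why the switch helped, here is the practical difference between the two encoders (a standalone illustration, not code from the thread): urlencode() produces application/x-www-form-urlencoded output, turning spaces into +, while rawurlencode() follows RFC 3986 and uses %20. If the receiving side percent-decodes js_code without treating + as a space, every space urlencode() produced becomes a literal plus sign in your JavaScript:

```php
// Spaces encode differently under the two schemes:
echo urlencode("a b c"), "\n";     // a+b+c      (form-style)
echo rawurlencode("a b c"), "\n";  // a%20b%20c  (RFC 3986)

// rawurldecode() does NOT convert '+' back into a space, so a '+'
// introduced by urlencode() survives as a literal plus sign:
echo rawurldecode(urlencode("a b")), "\n";    // a+b
echo rawurldecode(rawurlencode("a b")), "\n"; // a b
```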

Send post data and cookiefile in Google App Engine PHP runtime

I have code written in PHP, currently running on my shared hosting. Now I'm going to move it to Google App Engine.
The sendRequest() method sends POST data and cookies to another website and returns the response.
private function sendRequest($url, array $data = array()) {
    $ch = curl_init(self::URL_BASE);
    $curlConfig = array(
        CURLOPT_URL => $url,
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS => $data,
        CURLOPT_COOKIE => "user_name=" . $this->username . "; user_password=" . md5($this->password)
    );
    if ($url == self::URL_LOGIN) {
        $this->cookieFile = tempnam("/tmp", "CURLCOOKIE");
        $curlConfig[CURLOPT_COOKIEJAR] = $this->cookieFile;
    } else {
        $curlConfig[CURLOPT_COOKIEFILE] = $this->cookieFile;
    }
    curl_setopt_array($ch, $curlConfig);
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}
Problems:
The cURL extension is not supported in App Engine
tempnam() function is disabled
I've searched a lot, but couldn't find any alternatives. fsockopen() is also disabled.
Use the stream context to set the cookies on the request, per the example here.
Not sure from your code why you want to persist the cookies and for how long - can you use memcache for this purpose instead?
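A minimal sketch of that stream-context approach, assuming standard PHP HTTP streams (the function names here are my own; the cookie format mirrors the question's code). Persisting the login cookies between requests would still need separate handling, e.g. memcache as suggested:

```php
// Build the same Cookie header the cURL version sent.
function buildCookieHeader(string $username, string $password): string {
    return 'user_name=' . $username . '; user_password=' . md5($password);
}

// Send POST data plus the Cookie header via a stream context instead of cURL.
function sendRequestViaStream(string $url, array $data, string $username, string $password) {
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'POST',
            'header'  => "Content-type: application/x-www-form-urlencoded\r\n"
                       . 'Cookie: ' . buildCookieHeader($username, $password) . "\r\n",
            'content' => http_build_query($data),
        ),
    ));
    return file_get_contents($url, false, $context);
}
```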

Checking 1000 urls if they exist or not, is there a quick way for this?

I have an array of URLs (~1000 of them), and I want to check each of them to see whether it exists. Here is my current code:
$south_east_png_endings = array();
for ($x = 1; $x <= 25; $x++) {
    for ($y = 1; $y <= 48; $y++) {
        $south_east_png_endings[] = "${x}s${y}e.png";
    }
}
foreach ($south_east_png_endings as $se) {
    $url = 'http://imgs.xkcd.com/clickdrag/' . $se;
    $file_headers = @get_headers($url);
    if ($file_headers[0] == 'HTTP/1.1 404 Not Found') {
        // echo 'Does not exist';
    } else {
        echo $url;
    }
}
This script works; it echoes all the working URLs, but the process takes several minutes to complete. Is there a way to do this faster, or is this as fast as it gets? Maybe I can use cURL timeout options to shorten the time?
1) get_headers() actually uses GET requests, which are not needed if you just want to know whether a file exists. Use HEAD instead; example from the manual:
<?php
// By default get_headers uses a GET request to fetch the headers. If you
// want to send a HEAD request instead, you can do so using a stream context:
stream_context_set_default(
    array(
        'http' => array(
            'method' => 'HEAD'
        )
    )
);
$headers = get_headers('http://example.com');
?>
2) since those checks can be easily run in parallel, you should use separate threads/processes to do the checking. However, if you're doing this from home, your router might choke on 1000 requests at once, so you might want to use something like 5-20 concurrent threads.
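That batching idea can be sketched with curl_multi (the function name, batch size, and timeout below are my own choices, not from the thread):

```php
// Check many URLs concurrently with curl_multi, in batches of HEAD requests.
// Returns an array mapping each URL to its HTTP status code (0 on failure).
function check_urls_parallel(array $urls, int $batchSize = 20): array {
    $results = array();
    foreach (array_chunk($urls, $batchSize) as $batch) {
        $mh = curl_multi_init();
        $handles = array();
        foreach ($batch as $url) {
            $ch = curl_init();
            curl_setopt_array($ch, array(
                CURLOPT_URL            => $url,
                CURLOPT_NOBODY         => true,  // send a HEAD request
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_TIMEOUT        => 10,    // arbitrary safety timeout
            ));
            curl_multi_add_handle($mh, $ch);
            $handles[$url] = $ch;
        }
        // Drive all handles in this batch until every transfer finishes.
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh);
            }
        } while ($running && $status === CURLM_OK);
        foreach ($handles as $url => $ch) {
            $results[$url] = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }
    return $results;
}
```

The batch size caps concurrency, which addresses the router-choking concern above.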
To parallelize the checks you can use curl_multi, which can be quite fast, though it is more complex than the get_headers() approach @eis describes.
P.S. With cURL you can also use the HEAD-request trick:
function _isUrlexist($url) {
    $flag = false;
    if ($url) {
        $ch = curl_init();
        curl_setopt_array($ch, array(
            CURLOPT_URL => $url,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_NOBODY => true,
            CURLOPT_HEADER => true
        ));
        curl_exec($ch);
        $info = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        $flag = ($info == 200);
    }
    return $flag;
}
