WordPress Extend Apache CGI Timeout - php

I wrote a custom WordPress plugin that creates a cron job which calls the StoreRocket API once a day to grab all the locations. There are more than 2,000 entries, the API is rate-limited to 60 calls per minute, and each "get all locations" call only returns 15 entries per page, so I have to call the API almost 150 times with a sleep(1) after each call.
while ( $current_page <= $page_count ) {
    set_time_limit(0);
    $current_page_url = $storerocket_api_url . "?page=" . $current_page;
    $storerocket_get_request = wp_remote_get( $current_page_url, $request_args );
    if ( is_wp_error( $storerocket_get_request ) ) {
        $log  = "ERROR\n";
        $log .= $storerocket_get_request->get_error_message();
        storerocket_log( $log );
    }
    $get_response_body = $storerocket_get_request['body'];
    $storerocket_location_data = json_decode( $get_response_body );
    $current_page = $storerocket_location_data->meta->current_page;
    array_push( $storerocket_location_array, $storerocket_location_data->data );
    storerocket_log( "page " . $current_page );
    sleep(1);
    $current_page++;
}
It all works in my local testing. But after I deployed the plugin to the live server hosted at SiteGround, the cron job stops at exactly 1 minute, even though max_execution_time is set to 120 in php.ini. The error log shows "Timeout waiting for output from CGI script" exactly 1 minute after the cron job is called. After some research, I found that the Apache timeout on SiteGround is 60 seconds and cannot be changed.
Is there a way to bypass the Apache timeout in code?
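Not a direct bypass of the Apache limit, but one pattern people use to stay under a hard web-server timeout is to persist the pagination position and let each cron run handle only a slice of the pages, rescheduling itself until everything is fetched. The sketch below only illustrates that idea; the option name, hook name, the storerocket_fetch_page() helper, and the meta->last_page field are assumptions, not part of the original plugin.

// Illustrative sketch only: option name, hook name and helper are hypothetical.
function storerocket_sync_page_batch() {
    $state = get_option( 'storerocket_sync_state', array( 'page' => 1, 'last_page' => null ) );
    $start = time();

    // Process pages until we approach the 60-second Apache limit.
    while ( ( null === $state['last_page'] || $state['page'] <= $state['last_page'] )
            && ( time() - $start ) < 45 ) {
        $data = storerocket_fetch_page( $state['page'] ); // hypothetical: wraps the wp_remote_get() call above
        $state['last_page'] = $data->meta->last_page;     // assumes the API exposes the total page count in meta
        // ... store $data->data as in the original loop ...
        $state['page']++;
        update_option( 'storerocket_sync_state', $state );
        sleep( 1 ); // respect the 60-calls-per-minute rate limit
    }

    if ( null === $state['last_page'] || $state['page'] <= $state['last_page'] ) {
        // More pages remain: schedule another short run instead of one long one.
        wp_schedule_single_event( time() + 60, 'storerocket_sync_batch_event' );
    } else {
        delete_option( 'storerocket_sync_state' ); // finished; start fresh on the next daily run
    }
}
add_action( 'storerocket_sync_batch_event', 'storerocket_sync_page_batch' );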

Related

WordPress cron function exceeds PHP timeout

I am trying to update StoreRocket through their REST API in a cron job. However, they have a 60-requests-per-minute limit, and if I put a 1-second sleep after every request, the function times out at 2 minutes because max_execution_time is set to 120. I have no way to raise max_execution_time because I do not have access to it. Is there another way to get around this timeout?
function cron_repeat_function () {
    $remote_api_url = "StoreRocket api url";
    $request_args   = "setup request arguments";
    // $locations holds the list of locations to push to StoreRocket
    foreach ( $locations as $location ) {
        $storerocket_post_request = wp_remote_post( $remote_api_url, $request_args );
        sleep(1);
    }
}
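If the host allows it (the function is disabled on some shared hosts), one thing worth trying is to lift PHP's own limit at the start of the cron callback; note this only affects max_execution_time, not any web-server or proxy timeout sitting in front of PHP. A minimal sketch:

function cron_repeat_function () {
    // Attempt to remove the PHP execution time limit for this run.
    // Has no effect if the host has disabled set_time_limit().
    set_time_limit( 0 );
    ignore_user_abort( true ); // keep running even if the triggering request disconnects

    // ... the update loop from above ...
}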

Multi-upload using pthreads in PHP

I have been trying to implement multi-threading in PHP to achieve multi-upload using the pthreads extension.
From my understanding of multi-threading, this is how I envisioned it working:
I would upload a file and it would start uploading in the background; even before that file has finished uploading, another instance (thread) would be created to upload another file. I would make multiple upload requests using AJAX, multiple files would start uploading at once, I would get the response of each request individually, and I could update the upload status on my site accordingly.
But this is not how it is working. This is code that I got from one of the pthreads questions on SO, but I do not have the link (sorry!).
I tested this code to see if it really worked like I envisioned. This is the code I tested; I changed it a little.
<?php
error_reporting(E_ALL);

class AsyncWebRequest extends Thread {
    public $url;
    public $data;

    public function __construct ($url) {
        $this->url = $url;
    }

    public function run () {
        if ( ($url = $this->url) ) {
            /*
             * If a large amount of data is being requested, you might want to
             * fsockopen and read using usleep in between reads
             */
            $this->data = file_get_contents ($url);
            echo $this->getThreadId ();
        } else {
            printf ("Thread #%lu was not provided a URL\n", $this->getThreadId ());
        }
    }
}
$t = microtime (true);
foreach ( ["http://www.google.com/?q=" . rand () * 10, 'http://localhost', 'https://facebook.com'] as $url ) {
    $g = new AsyncWebRequest( $url );
    /* starting synchronized */
    if ( $g->start () ) {
        printf ( $url . " took %f seconds to start ", microtime (true) - $t);
        while ( $g->isRunning () ) {
            echo ".";
            usleep (100);
        }
        if ( $g->join () ) {
            printf (" and %f seconds to finish receiving %d bytes\n", microtime (true) - $t, strlen ($g->data));
        } else {
            printf (" and %f seconds to finish, request failed\n", microtime (true) - $t);
        }
    }
    echo "<hr/>";
}
What I expected from this code was that it would hit google.com, localhost and facebook.com simultaneously, each in its own thread. But every request waits for the previous one to complete.
It is clearly waiting for the first response to finish before making the next request, because the timestamps show each request is only sent after the previous one has completed.
So this is clearly not the way to achieve what I am after. How do I do this?
You might want to look at curl_multi for making multiple external requests like this; pthreads is better suited to internal processing.
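For illustration, here is a minimal curl_multi sketch that starts the three requests from the question concurrently; the wiring is an assumption about how one might use the curl_multi API, not code from the original answer.

// Minimal curl_multi sketch: start all requests, then poll until they finish.
$urls = ["http://www.google.com/?q=" . rand() * 10, 'http://localhost', 'https://facebook.com'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // collect the body instead of printing it
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers in parallel.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for socket activity instead of busy-looping
} while ($running > 0);

foreach ($handles as $url => $ch) {
    printf("%s returned %d bytes\n", $url, strlen((string) curl_multi_getcontent($ch)));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);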
Just for further reference: you are starting the threads one by one and waiting for each to finish.
The loop while ($g->isRunning()) does not return until that thread has finished; it is like putting a while (true) inside a for, so the foreach only advances one step at a time.
You need to start all the threads first, collect them in an array, and then in a separate loop check each thread to see whether it has stopped and remove it from the array.
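A rough sketch of that restructuring, reusing the AsyncWebRequest class from the question (this assumes the pthreads extension is loaded and omits error handling):

// Start every thread first so all downloads run concurrently.
$threads = [];
foreach (["http://www.google.com/?q=" . rand() * 10, 'http://localhost', 'https://facebook.com'] as $url) {
    $g = new AsyncWebRequest($url);
    if ($g->start()) {
        $threads[] = $g;
    }
}

// Now poll the whole pool, removing threads as they finish.
while (!empty($threads)) {
    foreach ($threads as $i => $g) {
        if (!$g->isRunning()) {
            $g->join();
            printf("%s received %d bytes\n", $g->url, strlen((string) $g->data));
            unset($threads[$i]);
        }
    }
    usleep(1000); // avoid a tight busy-wait
}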

determining proper gearman task function to retrieve real-time job status

Very simply, I have a program that needs to perform a long-running process (anywhere from 5 seconds to several minutes), and I don't want to make my page wait for that process to finish before it loads.
I understand that I need to run this gearman job as a background process but I'm struggling to identify the proper solution to get real-time status updates as to when the worker actually finishes the process. I've used the following code snippet from the PHP examples:
$done = false;
do {
    sleep(3);
    $stat = $gmclient->jobStatus($job_handle);
    if (!$stat[0]) { // the job is no longer known to the server, so it is done
        $done = true;
    }
    echo "Running: " . ($stat[1] ? "true" : "false") . ", numerator: " . $stat[2] . ", denominator: " . $stat[3] . "\n";
} while (!$done);
echo "done!\n";
and this works; however, it appears to report back to the client as soon as the worker has been told what to do, rather than when the job's actual work is finished. Instead I want to know when the literal processing of the job has finished.
My real-life example:
Pull several data feeds from an API (some feeds take longer than others)
Load a couple of the ones that always load fast, place a "Waiting/Loading" animation on the section that was sent off to a worker queue
When the work is done and the results have been completely retrieved, replace the animation with the results
This is a bit late, but I stumbled across this question looking for the same answer. I was able to get a solution together, so maybe it will help someone else.
For starters, refer to the documentation on GearmanClient::jobStatus. This will be called from the client, and the function accepts a single argument: $job_handle. You retrieve this handle when you dispatch the request:
$client = new GearmanClient( );
$client->addServer( '127.0.0.1', 4730 );
$handle = $client->doBackground( 'serviceRequest', $data );
Later on, you can retrieve the status by calling the jobStatus function on the same $client object:
$status = $client->jobStatus( $handle );
This is only meaningful, though, if you actually change the status from within your worker with the sendStatus method:
$worker = new GearmanWorker( );
$worker->addFunction( 'serviceRequest', function( $job ) {
    $max = 10;
    // Set initial status - numerator / denominator
    $job->sendStatus( 0, $max );
    for ( $i = 1; $i <= $max; $i++ ) {
        sleep( 2 ); // Simulate a long running task
        $job->sendStatus( $i, $max );
    }
    return GEARMAN_SUCCESS;
} );

while ( $worker->work( ) ) {
    $worker->wait( );
}
In versions of Gearman prior to 0.5, you would use the GearmanJob::status method to set the status of a job. Versions 0.6 to current (1.1) use the methods above.
See also this question: Problem With Gearman Job Status

How to set a timeout for a gearman job

I want to set a timeout duration for Gearman jobs. For instance, I don't want a Gearman job to run for more than 30 seconds; if a job has been running for more than 30 seconds, it should be stopped and the next job started.
Is this possible with Gearman? (I'm using the Gearman PHP API on Centos 6.2)
What you are looking for is GearmanWorker::setTimeout(). Here is a good example:
// Set timeout
$gmworker->setTimeout(5000);
echo "Waiting for job...\n";

// Start working
while ( @$gmworker->work() || $gmworker->returnCode() == GEARMAN_TIMEOUT ) {
    if ($gmworker->returnCode() == GEARMAN_TIMEOUT) {
        // Normally one would want to do something useful here ...
        continue;
    }
    if ($gmworker->returnCode() != GEARMAN_SUCCESS) {
        // Something failed
        break;
    }
}
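Note that setTimeout() controls how long the worker blocks waiting for a new job (hence the GEARMAN_TIMEOUT return code above); it does not abort a job that is already running. If you also need to cap the runtime of the job itself, one possible approach, assuming the pcntl extension is available, is to arm an alarm inside the job callback. A rough sketch (the do_the_actual_work() call is a hypothetical placeholder):

declare(ticks = 1); // lets the signal handler run (or use pcntl_async_signals(true) on PHP 7.1+)

$gmworker->addFunction('longTask', function ($job) {
    // Abort this job if it runs longer than 30 seconds.
    pcntl_signal(SIGALRM, function () {
        throw new RuntimeException('Job exceeded its 30-second budget');
    });
    pcntl_alarm(30);

    try {
        do_the_actual_work($job); // placeholder for the real task
    } finally {
        pcntl_alarm(0); // disarm the alarm once the job finishes
    }
});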

Prevent timeout during large request in PHP

I'm making a large request to the brightcove servers to make a batch change of metadata in my videos. It seems like it only made it through 1000 iterations and then stopped - can anyone help in adjusting this code to prevent a timeout from happening? It needs to make about 7000/8000 iterations.
<?php
include 'echove.php';

$e = new Echove(
    'xxxxx',
    'xxxxx'
);

// Read Video IDs
# Define our parameters
$params = array(
    'fields' => 'id,referenceId'
);

# Make our API call
$videos = $e->findAll('video', $params);
//print_r($videos);

foreach ($videos as $video) {
    //print_r($video);
    $ref_id = $video->referenceId;
    $vid_id = $video->id;
    switch ($ref_id) {
        case "":
            $metaData = array(
                'id' => $vid_id,
                'referenceId' => $vid_id
            );
            # Update a video with the new meta data
            $e->update('video', $metaData);
            echo "$vid_id updated successfully!<br />";
            break;
        default:
            echo "$ref_id was not updated. <br />";
            break;
    }
}
?>
Thanks!
Try the set_time_limit() function. Calling set_time_limit(0) will remove any time limits for execution of the script.
Also use ignore_user_abort() to bypass browser abort. The script will keep running even if you close the browser (use with caution).
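Applied to the script above, that would look something like this (a sketch only; whether set_time_limit() has any effect depends on the hosting configuration):

<?php
include 'echove.php';

ignore_user_abort(true); // keep running even if the browser disconnects
set_time_limit(0);       // remove PHP's execution time limit, if the host allows it

$e = new Echove(
    'xxxxx',
    'xxxxx'
);
// ... rest of the batch-update loop as above ...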
Try sending a 'Status: 102 Processing' every now and then to prevent the browser from timing out (your best bet is about 15 to 30 seconds in between). After the request has been processed you may send the final response.
The browser shouldn't time out any more this way.
