CodeIgniter: run code in background - PHP

I'm developing an iOS app and I need to call a web service developed with CodeIgniter. The problem is that I get the response very quickly, but then I need to perform some actions on it in CodeIgniter.
How can I do that in the background?
My code is something like this:
$data = json_decode($response);
echo $response;
// Data has around 100 rows
foreach ($data as $info)
{
    // Database inserts and updates
}
If I comment out the foreach, it works perfectly, but with it, it takes a lot of time.
I don't want to speed up the database, because that's not the problem... what really takes time is what I need to do with my data...

You can try something like the example below.
class Proc_test extends CI_Controller
{
    public function index()
    {
        echo "Proc_test::index is called at ".$this->rightnow()."<br>";
        $param = 5000000;
        $command = "php ".FCPATH."index.php tools proc1 $param > /dev/null &";
        exec($command);
        $command = "php ".FCPATH."index.php tools proc2 $param > /dev/null &";
        exec($command);
        echo "Proc_test::index is done at ".$this->rightnow()."<br>";
    }

    // A helper that gives the time of day with microseconds
    public function rightnow()
    {
        $time = microtime(true);
        $micro_time = sprintf("%06d", ($time - floor($time)) * 1000000);
        $date = new DateTime(date('Y-m-d H:i:s.'.$micro_time, $time));
        return $date->format("H:i:s.u");
    }
}
Here, the background command executes as in the example below:
$command = "php ".FCPATH."index.php tools proc1 $param > /dev/null &";
It's basically a CLI command which follows this form:
"php absolute/path/to/codeigniter/index.php controller method argument_1 argument_2 argument_n > pipe-to-null statement".
Ref. Url : https://forum.codeigniter.com/thread-67870.html
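The tools controller invoked by those commands is not shown in the original answer; a hypothetical sketch of it might look like this (the method bodies and log messages are placeholders for the asker's real inserts and updates):

// Hypothetical application/controllers/Tools.php matching the commands above
class Tools extends CI_Controller
{
    public function proc1($param)
    {
        // ... the long-running database inserts and updates go here ...
        log_message('info', "Tools::proc1 finished with param $param");
    }

    public function proc2($param)
    {
        // ... more long-running work ...
        log_message('info', "Tools::proc2 finished with param $param");
    }
}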

You can solve this issue in two ways.
Create a cronjob to do the time-consuming task. In the main method you just need to add an entry to a job table for the post-processing after sending the response, so the response does not wait until all the processing is completed. You can also schedule the cronjob as needed, depending on the urgency of the post-processing and the server load.
Use a CodeIgniter hook to do the processing after sending the response to the caller. The hook point is "post_system". Read more about it at https://ellislab.com/codeigniter/user-guide/general/hooks.html; a minimal config sketch follows below.
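For the second option, a minimal registration sketch (CodeIgniter 3 style; the class, method and file names here are hypothetical, and $config['enable_hooks'] must be TRUE in application/config/config.php):

// application/config/hooks.php
$hook['post_system'] = array(
    'class'    => 'Post_processor',        // hypothetical hook class
    'function' => 'process_response_data', // hypothetical method doing the slow inserts/updates
    'filename' => 'Post_processor.php',
    'filepath' => 'hooks',
    'params'   => array()
);

According to the CodeIgniter docs, post_system fires after the final rendered page is sent to the browser, so the caller gets the response before the heavy work starts.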

Related

nohup process keep shutting down

I am trying to run 10,000 processes to create Asterisk phone accounts.
This is to stress-test our Asterisk server.
From PHP, I call exec() to run a Linux command:
nohup /usr/src/pjproject-2.3/pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu --id=sip:%s#13.113.163.3 --registrar=sip:127.0.0.1:25060 --realm=* --username=%s --password=123456 --local-port=%s --null-audio --no-vad --max-calls=32 --no-tcp >>/dev/null 2>>/dev/null & $(echo -ne \'\r\')"
Everything works perfectly and the script does exactly what I expect.
But here comes the next problem: after creating the 10,000 accounts, the processes suddenly all get killed.
Why is this?
Isn't nohup supposed to keep the processes alive?
After calling nohup I am also calling disown.
Thank you for the help
[edit]
I also tried this project with screen. The screen approach works like a charm, but the problem is CPU usage: creating 10,000 screens makes a Linux server go nuts, which is why I chose nohup.
The full PHP code:
<?php
# count
$count_screens = 0;
# port count start
$port_count = 30000;
# register accounts number
$ext_number = 1000;
# amount of times you want this loop to go
$min_accounts = 0;
$max_accounts = 1000;

class Shell {
    const CREATE_SESSION = 'screen -dmS stress[%s]';
    const RUN_PJSUA = 'screen -S stress[%s] -p 0 -rX stuff "nohup /usr/src/pjproject-2.3/pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu --id=sip:%s#13.113.163.3 --registrar=sip:127.0.0.1:25060 --realm=* --username=%s --password=123456 --local-port=%s --null-audio --no-vad --max-calls=32 --no-tcp >>/dev/null 2>>/dev/null &"';
    const DISOWN_PJSUA = 'screen -S stress[%s] -p 0 -rX stuff "disown -l $(echo -ne \'\r\')"';

    public function openShell($count_screens) {
        # creating a new screen to make the second call
        $command = sprintf(static::CREATE_SESSION, $count_screens);
        $string = exec($command);
        return var_dump($string);
    }

    public function runPJSUA($count_screens, $ext_number, $port_count) {
        # register a new pjsua client ($ext_number fills both the id and the username)
        $command = sprintf(static::RUN_PJSUA, $count_screens, $ext_number, $ext_number, $port_count);
        $string = exec($command);
        usleep(20000);
        return var_dump($string);
    }

    public function disownPJSUA($count_screens) {
        # detach the pjsua client from the screen session
        $command = sprintf(static::DISOWN_PJSUA, $count_screens);
        $string = exec($command);
        return var_dump($string);
    }
}

while ($min_accounts < $max_accounts) {
    $shell = new Shell();
    if ($count_screens == '0') {
        $count_screens++;
        echo $shell->openShell($count_screens);
    } else {
        $count_screens = 1;
    }
    $port_count++;
    $ext_number++;
    $min_accounts++;
    echo $shell->runPJSUA($count_screens, $ext_number, $port_count);
    echo $shell->disownPJSUA($count_screens);
}
?>
Pjsua is a relatively heavy application, definitely too heavy to run 10,000 instances of, and not intended for this kind of testing. Since you are configuring it for 32 calls, even running out of ports would be a problem (two ports are reserved per call, plus a port for SIP). If you want to stay with pjsua, you could at least optimize the test by configuring multiple accounts for a single pjsua instance. That might be limited by command-line length, but ~30 accounts per instance might work.
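If you pack multiple accounts into one instance, pjsua's --next-account option is the relevant flag (verify it against your build's --help output; this sketch is illustrative and uses @ in the SIP URI where the question shows #):

// Sketch: one pjsua instance hosting ~30 accounts via --next-account.
$base = '/usr/src/pjproject-2.3/pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu'
      . ' --null-audio --no-vad --max-calls=32 --no-tcp --local-port=30000';
$accounts = '';
for ($ext = 1000; $ext < 1030; $ext++) {
    $accounts .= sprintf(
        ' --next-account --id=sip:%d@13.113.163.3'
        . ' --registrar=sip:127.0.0.1:25060 --realm=* --username=%d --password=123456',
        $ext, $ext
    );
}
exec("nohup $base$accounts >>/dev/null 2>>/dev/null &");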

Execute every N command in parallel in shell_exec() in PHP

I'd like to execute N commands in bash in parallel, then the next N commands after all of those finish, then the next N commands, and so on.
Because I am not an expert in shell scripting, I have resorted to PHP. But I suspect my code is not doing what I need optimally:
<?php
// d() is a function like var_dump()
d($d);
$ips = array(
    "83.149.70.159:13012" => 8,
    "37.48.118.90:13082" => 77,
    "83.149.70.159:13082" => 77,
);
d($ips);
reset($ips);
$prx = "storm";
$f = array();
foreach ($d as $calln) {
    $ip = current($ips);
    $ipkey = key($ips);
    d($ip, $ipkey);
    $comd = choose_comd($prx);
    d($comd);
    $f[] = shell_exec($comd);
    d($f);
    choose_limiting($prx);
    d($GLOBALS['a']);
}

function choose_comd($prx) {
    d($GLOBALS['calln']);
    switch ($prx) {
        case "storm":
            return "cd /Users/jMac-NEW/HoldingForDO/phub/phubalt_pages && curl -x {$GLOBALS['ipkey']} \"https://catalog.loc.gov/vwebv/search?searchArg={$GLOBALS['calln']}&searchCode=CALL%2B&searchType=1&limitTo=none&fromYear=&toYear=&limitTo=LOCA%3Dall&limitTo=PLAC%3Dall&limitTo=TYPE%3Dall&limitTo=LANG%3Dall&recCount=1200\" >trial_{$GLOBALS['calln']}_out.html 2> trial_{$GLOBALS['calln']}_error.txt &";
        // more cases ...
    }
}

function choose_limiting($prx) {
    switch ($prx) {
        case "":
            if (!next($GLOBALS['ips'])) {
                sleep(80);
                reset($GLOBALS['ips']);
            }
            break;
        case "storm":
            if (!isset($GLOBALS['a'])) {
                echo "if";
                $GLOBALS['a'] = 0;
            } elseif ($GLOBALS['a'] == current($GLOBALS['ips'])) {
                echo "elseif";
                next($GLOBALS['ips']);
                sleep(80 / count($GLOBALS['ips']) - 7); // 80 is the standard
                $GLOBALS['a'] = 0;
            } else {
                echo "else";
                $GLOBALS['a']++;
            }
    }
}

function trying() {
    $GLOBALS['a']++;
    d($GLOBALS['a']);
}
Firstly, I am not sure whether running a loop around shell_exec("command… &") will make all the commands run in parallel.
Secondly, the loop runs over all the possible commands, but it is made to sleep() for an arbitrary, estimated duration of 70 seconds after every N commands are run with shell_exec(). That 70-second sleep may or may not correspond to the completion of the previous N commands; I am just assuming that it will be around there.
May I know if what I have done fulfills my aim? If not, why, and what other solution is there?
Actually, I would not mind using bash directly, but the problem is that every iteration of the loop is supposed to be fed a variable $calln from a PHP array $d populated in earlier parts of the script (not shown). If PHP can do what I need, please stick to PHP.
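One way to get the batching behaviour described above is to let the shell background each command and then wait for the whole batch. A minimal sketch (not from the original thread), assuming a flat array $commands of command strings and a batch size $n:

// Sketch: run $commands in parallel batches of $n, waiting for each batch
// to finish before starting the next.
foreach (array_chunk($commands, $n) as $batch) {
    // "cmd1 & cmd2 & ... & wait" backgrounds each command, then blocks
    // until every background job of this shell invocation has exited.
    shell_exec(implode(' & ', $batch) . ' & wait');
}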

PHP pthreads failing when run from cron

OK, so let's start slow...
I have a pthreads script running and working for me, tested and working 100% of the time when I run it manually from the command line via SSH. The script is as follows, with the main thread-process code adjusted to simulate random run times.
class ProcessingPool extends Worker {
    public function run() {}
}

class LongRunningProcess extends Threaded implements Collectable {
    public function __construct($id, $data) {
        $this->id = $id;
        $this->data = $data;
    }

    public function run() {
        $data = $this->data;
        $this->garbage = true;
        $this->result = 'START TIME:'.time().PHP_EOL;
        // Here is our actual logic which will be handled within a single thread
        // (obviously simulated here instead of the real functionality)
        sleep(rand(1, 100));
        $this->result .= 'ID:'.$this->id.' RESULT: '.print_r($this->data, true).PHP_EOL;
        $this->result .= 'END TIME:'.time().PHP_EOL;
        $this->finished = time();
    }

    public function __destruct() {
        $Finished = 'EXITED WITHOUT FINISHING';
        if ($this->finished > 0) {
            $Finished = 'FINISHED';
        }
        if ($this->id === null) {
            print_r("nullified thread $Finished!");
        } else {
            print_r("Thread w/ ID {$this->id} $Finished!");
        }
    }

    public function isGarbage() : bool { return $this->garbage; }

    public function getData() {
        return $this->data;
    }

    public function getResult() {
        return $this->result;
    }

    protected $id;
    protected $data;
    protected $result;
    private $garbage = false;
    private $finished = 0;
}

$LoopDelay = 500000; // microseconds
$MinimumRunTime = 300; // seconds (5 minutes)
$StartTime = microtime(true); // referenced below when computing $TimeElapsed

// So we setup our pthreads pool which will hold our collection of threads
$pool = new Pool(4, ProcessingPool::class, []);
$Count = 0;
$StillCollecting = true;
$CountCollection = 0;
do {
    // Grab all items from the processing_queue which have not been processed
    $result = $DB->prepare("SELECT * FROM `processing_queue` WHERE `processed` = 0 ORDER BY `queue_id` ASC");
    $result->execute();
    $rows = $result->fetchAll(PDO::FETCH_ASSOC);
    if (!empty($rows)) {
        // For each of the rows returned from the queue, mark it in progress
        // and submit it so the workers can run and return
        foreach ($rows as $id => $row) {
            $update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 1 WHERE `queue_id` = ?");
            $update->execute([$row['queue_id']]);
            $pool->submit(new LongRunningProcess($row['queue_id'], $row));
            $Count++;
        }
    } else {
        // 0 rows to add to the pool from the queue, do nothing...
    }
    // Before we allow the loop to move on to the next part, lets try and collect anything that finished
    $pool->collect(function ($Processed) use (&$CountCollection) {
        global $DB;
        $data = $Processed->getData();
        $result = $Processed->getResult();
        $update = $DB->prepare("UPDATE `processing_queue` SET `processed` = 2 WHERE `queue_id` = ?");
        $update->execute([$data['queue_id']]);
        $CountCollection++;
        return $Processed->isGarbage();
    });
    print_r('Collecting Loop...'.$CountCollection.'/'.$Count);
    // If we have collected the same total amount as we have processed then we can consider ourselves
    // done collecting everything that was added to the queue while this script was running
    if ($CountCollection == $Count) {
        $StillCollecting = false;
        print_r('Done Collecting Everything...');
    }
    // If we have not reached the full MinimumRunTime that this cron should run for, then lets continue to loop
    $EndTime = microtime(true);
    $TimeElapsed = ($EndTime - $StartTime);
    // (equivalent to $TimeElapsed < $MinimumRunTime)
    if (($TimeElapsed / ($LoopDelay / 1000000)) < ($MinimumRunTime / ($LoopDelay / 1000000))) {
        $StillCollecting = true;
        print_r('Ended Too Early, Lets Force Another Loop...');
    }
    usleep($LoopDelay);
} while ($StillCollecting);

$pool->shutdown();
So while the above script runs fine via the command line (it has been reduced to this basic example, with the detailed processing code simulated), the command below gives a different result when run from a cron set up to fire every 5 minutes...
/opt/php7zts/bin/php -q /home/account/cron-entry.php file=every-5-minutes/processing-queue.php
When invoked with the above command line, the script loops over and over for its run time, collecting any new items from the DB queue and inserting them into the pool, which allows 4 processes at a time to run and finish. Finished threads are collected and the queue is updated before another loop happens, pulling any new items from the DB. The script runs until we have processed and collected everything added to the queue during its execution. If the script has not run for the full expected 5-minute period, the loop is forced to keep checking the queue; if the script has passed the 5-minute mark, it lets any current threads finish and be collected before closing. Note that the code also includes a code-based "flock" mechanism which makes future runs of this cron idle-loop and exit, or start once the lock has been lifted, ensuring that the queue and threads do not bump into each other. Again, ALL OF THIS WORKS FROM THE COMMAND LINE VIA SSH.
Once I take the above command, and put it into a cron to run for every 5 minutes, essentially giving me a never ending loop, while maintaining memory, I get a different result...
That result is as follows... The script starts, checks the flock, continues if the lock is not there, creates the lock, and runs the script above. The items are taken from the DB queue and inserted into the pool, and the pool fires off 4 threads at a time as expected. But the unexpected result is that the run() method never seems to be executed; instead the __destruct function runs, and a "Thread w/ ID 2 FINISHED!" type of message appears in the output. This in turn means that the collection side collects nothing, and the initiating script (the cron script itself, /home/account/cron-entry.php file=every-5-minutes/processing-queue.php) finishes after everything has been put into the pool and destructed. This prematurely "finishes" the cron job, since there is nothing left to do but loop and pull nothing new from the queue, because the rows are considered "being processed" once processed == 1.
The question then finally becomes... How do I make the cron's script aware of the threads that were spawned, and run() them, without closing the pool before they can do anything?
(Note: if you copy/paste the provided script, be aware that I did not test it after removing the detailed logic, so it may need some simple fixes... please do not nit-pick said code; the key here is that pthreads works if the script is executed FROM the command line, but fails to run properly when the script is executed FROM a cron. If you plan on commenting with non-constructive criticism, please go use your fingers to do something else!)
Joe Watkins! I Need Your Brilliance! Thanks In Advance!
After all of that, it turned out that the issue was user permissions. I was setting this specific cron up inside of cPanel, whereas when running the command manually I was logged in as root.
After setting this command up in root's crontab, I was able to get it to successfully run the threads from the pool. The only issue I have now is that some threads never finish and sometimes I am unable to close the pool, but that is a different issue, so I will open another question elsewhere.
For those running into this issue: make sure you know who the owner of the cron is, as it matters with PHP's pthreads.
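For reference, the working setup was simply the same command placed in root's crontab (the every-5-minutes schedule is from the question):

# root's crontab (crontab -e as root): run the queue script every 5 minutes
*/5 * * * * /opt/php7zts/bin/php -q /home/account/cron-entry.php file=every-5-minutes/processing-queue.php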

Multiupload using pthread in php

I have been trying to implement multi-threading in PHP to achieve multi-upload, using pthreads.
From my understanding of multi-threading, this is how I envisioned it working:
I would upload a file; the file would start uploading in the background; and even before that file finished uploading, another instance (thread) would be created to upload another file. I would make multiple upload requests using AJAX, multiple files would start uploading, I would get the response for each request individually, and I could update the upload status accordingly on my site.
But this is not how it is working. This is the code that I got from one of the pthreads questions on SO, but I do not have the link (sorry!!).
I tested this code to see if it really worked like I envisioned. This is the code I tested, which I changed a little:
<?php
error_reporting(E_ALL);

class AsyncWebRequest extends Thread {
    public $url;
    public $data;

    public function __construct($url) {
        $this->url = $url;
    }

    public function run() {
        if (($url = $this->url)) {
            /*
             * If a large amount of data is being requested, you might want to
             * fsockopen and read using usleep in between reads
             */
            $this->data = file_get_contents($url);
            echo $this->getThreadId();
        } else {
            printf("Thread #%lu was not provided a URL\n", $this->getThreadId());
        }
    }
}

$t = microtime(true);
foreach (["http://www.google.com/?q=".rand() * 10, 'http://localhost', 'https://facebook.com'] as $url) {
    $g = new AsyncWebRequest($url);
    /* starting synchronized */
    if ($g->start()) {
        printf($url." took %f seconds to start ", microtime(true) - $t);
        while ($g->isRunning()) {
            echo ".";
            usleep(100);
        }
        if ($g->join()) {
            printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g->data));
        } else {
            printf(" and %f seconds to finish, request failed\n", microtime(true) - $t);
        }
    }
    echo "<hr/>";
}
What I expected from this code was that it would hit google.com, localhost and facebook.com simultaneously, running each in its own thread. But every request waits for the previous one to complete.
It is clearly waiting for the first request to complete before making another, because each request is sent only after the previous one has finished.
So this is clearly not the way to achieve what I am trying to achieve. How do I do this?
You might want to look at multi cURL for such multiple external requests; pthreads is more about internal processes. A sketch follows below.
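A minimal curl_multi sketch for fetching several URLs in parallel (plain PHP curl extension; the URLs are the ones from the question):

// Sketch: fetch several URLs concurrently with curl_multi.
$urls = ['http://www.google.com/', 'http://localhost/', 'https://facebook.com/'];
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}
do {
    curl_multi_exec($mh, $running); // drive all transfers
    curl_multi_select($mh);         // wait for activity instead of busy-looping
} while ($running > 0);
foreach ($handles as $url => $ch) {
    printf("%s returned %d bytes\n", $url, strlen(curl_multi_getcontent($ch)));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);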
Just for further reference: you are starting threads one by one and waiting for each to finish.
The loop while ($g->isRunning()) doesn't exit until the thread is finished. It's like putting a while (true) inside a for: the for executes one step at a time.
You need to start the threads, add them to an array, and in another loop check each of the threads to see whether it has stopped, removing it from the array when it has (see the sketch below).
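A minimal sketch of that pattern, reusing the AsyncWebRequest class from the question:

// Sketch: start all threads first, then poll the array until all have finished.
$threads = [];
foreach (["http://www.google.com/", "http://localhost", "https://facebook.com"] as $url) {
    $t = new AsyncWebRequest($url);
    $t->start();     // returns immediately; the request runs in its own thread
    $threads[] = $t;
}
while (!empty($threads)) {
    foreach ($threads as $i => $t) {
        if (!$t->isRunning()) {
            $t->join();
            printf("%s received %d bytes\n", $t->url, strlen($t->data));
            unset($threads[$i]);
        }
    }
    usleep(100);
}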

Reading a Git commit message from PHP

I'm looking for a way to read a Git commit message with PHP. I suspect I need to use a Git hook, but I've never worked with them before, so I need a push in the right direction. Specifically, I'd like to implement the following process:
A PHP script is executed automatically after every commit
The script captures the Git username, the time of the commit, and the commit content
If at all possible, I'd like to stick with pure PHP. If there are tutorials or references that you could point out, that would be a huge help.
To get the commit hash, you can use
git rev-parse --verify HEAD 2> /dev/null
From within php:
exec('git rev-parse --verify HEAD 2> /dev/null', $output);
$hash = $output[0];
You can get the commit message, author, and time with the following (though the time will simply be "now" if it's run as part of a post-commit hook):
exec("git show $hash", $output);
If it's not obvious: whatever you do with PHP is simply going to be a wrapper around the things you'd do with git on the CLI, i.e. any "how can I do x with git from php" is just exec('the git answer', $output). For example:
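A small sketch that pulls the author, date and full message out of git show (assumes $hash from the snippet above; the | separator is an arbitrary choice):

// Sketch: -s suppresses the diff; %an/%ci/%B are author name, commit date, raw body.
exec("git show -s --pretty=format:'%an|%ci|%B' $hash", $output);
list($author, $date, $message) = explode('|', implode("\n", $output), 3);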
As far as using PHP to extract the correct commit:
Indefero
There is a project called Indefero that is a PHP forge tool with an SCM connector for git. You could easily use their git class as an API for yourself; just grab the git class and the SCM class.
As an example, I have pulled out the two methods from that class which I think are the most relevant to you, so you can see how they work.
Get a changelog list: getChangeLog()
/**
 * Get latest changes.
 *
 * @param string Commit ('HEAD').
 * @param int Number of changes (10).
 * @return array Changes.
 */
public function getChangeLog($commit='HEAD', $n=10)
{
    if ($n === null) $n = '';
    else $n = ' -'.$n;
    $cmd = sprintf('GIT_DIR=%s '.Pluf::f('git_path', 'git').' log%s --date=iso --pretty=format:\'%s\' %s',
                   escapeshellarg($this->repo), $n, $this->mediumtree_fmt,
                   escapeshellarg($commit));
    $out = array();
    $cmd = Pluf::f('idf_exec_cmd_prefix', '').$cmd;
    self::exec('IDF_Scm_Git::getChangeLog', $cmd, $out);
    return self::parseLog($out);
}
Get a particular commit: getCommit()
/**
 * Get commit details.
 *
 * @param string Commit
 * @param bool Get commit diff (false)
 * @return array Changes
 */
public function getCommit($commit, $getdiff=false)
{
    if ($getdiff) {
        $cmd = sprintf('GIT_DIR=%s '.Pluf::f('git_path', 'git').' show --date=iso --pretty=format:%s %s',
                       escapeshellarg($this->repo),
                       "'".$this->mediumtree_fmt."'",
                       escapeshellarg($commit));
    } else {
        $cmd = sprintf('GIT_DIR=%s '.Pluf::f('git_path', 'git').' log -1 --date=iso --pretty=format:%s %s',
                       escapeshellarg($this->repo),
                       "'".$this->mediumtree_fmt."'",
                       escapeshellarg($commit));
    }
    $out = array();
    $cmd = Pluf::f('idf_exec_cmd_prefix', '').$cmd;
    self::exec('IDF_Scm_Git::getCommit', $cmd, $out, $ret);
    if ($ret != 0 or count($out) == 0) {
        return false;
    }
    if ($getdiff) {
        $log = array();
        $change = array();
        $inchange = false;
        foreach ($out as $line) {
            if (!$inchange and 0 === strpos($line, 'diff --git a')) {
                $inchange = true;
            }
            if ($inchange) {
                $change[] = $line;
            } else {
                $log[] = $line;
            }
        }
        $out = self::parseLog($log);
        $out[0]->diff = implode("\n", $change);
    } else {
        $out = self::parseLog($out);
        $out[0]->diff = '';
    }
    $out[0]->branch = implode(', ', $this->inBranches($commit, null));
    return $out[0];
}
VersionControl_Git from PEAR
There is also a library in PEAR called VersionControl_Git that would be helpful in this situation and is documented.
As @Pawel mentioned, you're going to want to work with hooks for this:
On localhost you can navigate to /.git/hooks and rename post-commit.sample to post-commit, and then put #!/usr/bin/php inside. There are also other hooks that may be more suitable for you.
Git will look for the post-commit hook after you've committed and automatically run anything inside it.
What you're going to want to do here depends on the task, but I'd suggest curling the script - here's where things get interesting.
In order to extract the information you're looking for, you're going to want to use git log -1 - that should pull back the latest commit.
More specifically, you'll want to build your commit using the --pretty=format toggle, which can output the latest commit with the info you need. Check out this string:
git log -1 --pretty=format:'%h - %cn (%ce) - %s (%ci)'
This would return most of the things you are looking for. Check out the git-log page to see all of the different % variables that you can use with --pretty=format:. Once you've built the string that you'd like, you can either POST those values via cURL to a PHP script, or run a PHP script and use shell_exec("git log -1 --pretty=format:'%h - %cn (%ce) - %s (%ci)'") to work with the commit message inline, as in the sketch below.
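A minimal sketch of that inline approach (run from inside the repository's working directory; the | delimiter is arbitrary and just makes splitting easy):

// Sketch: read the latest commit's details from PHP.
$info = shell_exec("git log -1 --pretty=format:'%h|%cn|%ce|%ci|%s'");
list($hash, $name, $email, $date, $subject) = explode('|', $info, 5);
echo "$name <$email> committed $hash at $date: $subject\n";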
I was digging into the same question and found a faster, easier way to do it.
To get just the commit message you could use
git rev-list --format=%B --max-count=1 HEAD
Obviously HEAD may be replaced with any commit hash.
It will output something like
commit 4152601a42270440ad52680ac7c66ba87a506174
Improved migrations and models relations
The second line is what you need.
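In PHP, stripping that leading commit line might look like this (a sketch; exec() collects the output lines into an array):

// Sketch: keep only the message body from `git rev-list --format=%B`.
exec('git rev-list --format=%B --max-count=1 HEAD', $lines);
array_shift($lines); // drop the "commit <hash>" line
$message = trim(implode("\n", $lines));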
