Is there a realistic way of implementing a multi-threaded model in PHP, whether truly or merely simulated? Some time back it was suggested that you could force the operating system to load another instance of the PHP executable to handle other simultaneous processes.
The problem with this is that when the PHP code finishes executing, the PHP instance remains in memory because there is no way to kill it from within PHP. So if you are simulating several threads, you can imagine what's going to happen. I am still looking for a way that multi-threading can be done or simulated effectively from within PHP. Any ideas?
Warning: This extension is considered unmaintained and dead.
Warning: The pthreads extension cannot be used in a web server environment. Threading in PHP is therefore restricted to CLI-based applications only.
Warning: pthreads (v3) can only be used with PHP 7.2+: This is due to ZTS mode being unsafe in 7.0 and 7.1.
https://www.php.net/manual/en/intro.pthreads.php
Multi-threading is possible in PHP
Yes, you can do multi-threading in PHP with pthreads.
From the PHP documentation:
pthreads is an object-orientated API that provides all of the tools needed for multi-threading in PHP. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Threaded objects.
Warning:
The pthreads extension cannot be used in a web server environment. Threading in PHP should therefore be restricted to CLI-based applications only.
Simple Test
#!/usr/bin/php
<?php
class AsyncOperation extends Thread {
    public function __construct($arg) {
        $this->arg = $arg;
    }

    public function run() {
        if ($this->arg) {
            $sleep = mt_rand(1, 10);
            printf('%s: %s -start -sleeps %d' . "\n", date("g:i:sa"), $this->arg, $sleep);
            sleep($sleep);
            printf('%s: %s -finish' . "\n", date("g:i:sa"), $this->arg);
        }
    }
}

// Create an array to hold the threads
$stack = array();

// Initiate multiple threads
foreach (range("A", "D") as $i) {
    $stack[] = new AsyncOperation($i);
}

// Start the threads
foreach ($stack as $t) {
    $t->start();
}
?>
First Run
12:00:06pm: A -start -sleeps 5
12:00:06pm: B -start -sleeps 3
12:00:06pm: C -start -sleeps 10
12:00:06pm: D -start -sleeps 2
12:00:08pm: D -finish
12:00:09pm: B -finish
12:00:11pm: A -finish
12:00:16pm: C -finish
Second Run
12:01:36pm: A -start -sleeps 6
12:01:36pm: B -start -sleeps 1
12:01:36pm: C -start -sleeps 2
12:01:36pm: D -start -sleeps 1
12:01:37pm: B -finish
12:01:37pm: D -finish
12:01:38pm: C -finish
12:01:42pm: A -finish
Real World Example
error_reporting(E_ALL);

class AsyncWebRequest extends Thread {
    public $url;
    public $data;

    public function __construct($url) {
        $this->url = $url;
    }

    public function run() {
        if (($url = $this->url)) {
            /*
             * If a large amount of data is being requested, you might want to
             * fsockopen and read using usleep in between reads
             */
            $this->data = file_get_contents($url);
        } else {
            printf("Thread #%u was not provided a URL\n", $this->getThreadId());
        }
    }
}

$t = microtime(true);
$g = new AsyncWebRequest(sprintf("http://www.google.com/?q=%s", rand() * 10));

/* starting synchronization */
if ($g->start()) {
    printf("Request took %f seconds to start ", microtime(true) - $t);
    while ($g->isRunning()) {
        echo ".";
        usleep(100);
    }
    if ($g->join()) {
        printf(" and %f seconds to finish receiving %d bytes\n", microtime(true) - $t, strlen($g->data));
    } else {
        printf(" and %f seconds to finish, request failed\n", microtime(true) - $t);
    }
}
Why don't you use popen?
for ($i = 0; $i < 10; $i++) {
    // open ten child processes
    // (assumes script2.php is executable with a shebang line; otherwise prefix with the php binary)
    for ($j = 0; $j < 10; $j++) {
        $pipe[$j] = popen('script2.php', 'w');
    }

    // wait for them to finish
    for ($j = 0; $j < 10; ++$j) {
        pclose($pipe[$j]);
    }
}
Threading isn't available in stock PHP, but concurrent programming is possible by using HTTP requests as asynchronous calls.
With curl's timeout setting set to 1, and using the same session_id for the processes you want to be associated with each other, you can communicate via session variables, as in my example below. With this method you can even close your browser and the concurrent process still exists on the server.
Don't forget to verify the correct session ID like this:
http://localhost/test/verifysession.php?sessionid=[the correct id]
startprocess.php
$request = "http://localhost/test/process1.php?sessionid=".$_REQUEST["PHPSESSID"];
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $request);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 1);
curl_exec($ch);
curl_close($ch);
echo $_REQUEST["PHPSESSID"];
process1.php
set_time_limit(0);

if ($_REQUEST["sessionid"])
    session_id($_REQUEST["sessionid"]);

function checkclose()
{
    if ($_SESSION["closesession"]) {
        unset($_SESSION["closesession"]);
        die();
    }
}

// loop until closeprocess.php sets the "closesession" flag
while (true) {
    session_start();
    $_SESSION["test"] = rand();
    checkclose();
    session_write_close();
    sleep(5);
}
verifysession.php
if ($_REQUEST["sessionid"])
session_id($_REQUEST["sessionid"]);
session_start();
var_dump($_SESSION);
closeprocess.php
if ($_REQUEST["sessionid"])
session_id($_REQUEST["sessionid"]);
session_start();
$_SESSION["closesession"] = true;
var_dump($_SESSION);
While you can't thread, you do have some degree of process control in PHP. The two function sets that are useful here are:
Process control functions
http://www.php.net/manual/en/ref.pcntl.php
POSIX functions
http://www.php.net/manual/en/ref.posix.php
You could fork your process with pcntl_fork, which returns the PID of the child. Then you can use posix_kill to dispose of that PID.
That said, if you kill a parent process, a signal should be sent to the child process telling it to die. If PHP itself isn't recognising this, you could register a function to manage it and do a clean exit using pcntl_signal. A sketch of that pattern follows.
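Here is a minimal sketch of that pattern, assuming a CLI SAPI with the pcntl and posix extensions loaded (the worker loop and the three-second timing are illustrative, not prescribed):
<?php
// Fork a child, let it install a SIGTERM handler for a clean exit,
// then dispose of it from the parent with posix_kill.
$pid = pcntl_fork();
if ($pid == -1) {
    die("could not fork\n");
} elseif ($pid == 0) {
    // Child: register a handler so SIGTERM results in a clean exit
    pcntl_signal(SIGTERM, function ($signo) {
        // release locks, close files, etc., then exit cleanly
        exit(0);
    });
    while (true) {
        pcntl_signal_dispatch(); // deliver any pending signals
        sleep(1);                // placeholder for real work
    }
} else {
    // Parent: let the child run briefly, then dispose of it by PID
    sleep(3);
    posix_kill($pid, SIGTERM);
    pcntl_waitpid($pid, $status); // reap the child so it doesn't become a zombie
}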
Using threads is made possible by the pthreads PECL extension:
http://www.php.net/manual/en/book.pthreads.php
I know this is an old question, but for people searching: there is now a PECL extension written in C that gives PHP multi-threading capability. It's located here: https://github.com/krakjoe/pthreads
You can use exec() to run a command-line script (such as command-line PHP), and if you redirect the output to a file, your script won't wait for the command to finish.
I can't quite remember the PHP CLI syntax, but you'd want something like:
exec("/path/to/php -f /path/to/file.php > /path/to/output.txt &");
I think quite a few shared hosting servers have exec() disabled by default for security reasons, but it might be worth a try.
You could simulate threading. PHP can run background processes via popen (or proc_open). Those processes can be communicated with via stdin and stdout, and of course those processes can themselves be PHP programs. That is probably as close as you'll get; a sketch of the round trip follows.
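As a minimal sketch of that stdin/stdout communication with proc_open (the child command php child.php, and its read-one-line, answer-one-line behaviour, are hypothetical stand-ins):
<?php
$descriptors = [
    0 => ['pipe', 'r'], // child's stdin  (the parent writes to it)
    1 => ['pipe', 'w'], // child's stdout (the parent reads from it)
];

// child.php is assumed to read a line from STDIN and print a reply to STDOUT
$proc = proc_open('php child.php', $descriptors, $pipes);

if (is_resource($proc)) {
    fwrite($pipes[0], "some job payload\n"); // send work to the child
    fclose($pipes[0]);                       // signal end of input

    $result = stream_get_contents($pipes[1]); // read the child's reply
    fclose($pipes[1]);

    $exitCode = proc_close($proc);
    echo "child replied: $result (exit code $exitCode)\n";
}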
You have the option of:
multi_curl
using a system command to launch a separate process
Ideally, one would create a threading function in the C language and compile/configure it into PHP, so that it becomes a native PHP function.
How about pcntl_fork?
Check out the manual page for examples: PHP pcntl_fork
<?php
$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // we are the parent
    pcntl_wait($status); // protect against zombie children
} else {
    // we are the child
}
?>
If you are using a Linux server, you can use
exec("nohup $php_path path/script.php > /dev/null 2>/dev/null &")
If you need to pass some args:
exec("nohup $php_path path/script.php $args > /dev/null 2>/dev/null &")
In script.php
$args = $argv[1];
Or use Symfony
https://symfony.com/doc/current/components/process.html
$process = Process::fromShellCommandline("php " . base_path('script.php')); // note: base_path() is a Laravel helper, not part of Symfony
$process->setTimeout(0);
$process->disableOutput();
$process->start();
Depending on what you're trying to do you could also use curl_multi to achieve it.
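For illustration, here is a minimal curl_multi sketch (the two URLs are placeholders); all handles are transferred concurrently rather than one after the other:
<?php
$urls = ['http://example.com/a', 'http://example.com/b']; // placeholder URLs

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// drive all transfers until every handle has finished
do {
    $status = curl_multi_exec($mh, $active);
    if ($active) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($active && $status == CURLM_OK);

foreach ($handles as $ch) {
    echo strlen(curl_multi_getcontent($ch)), " bytes received\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);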
pcntl_fork won't work in a web server environment if it has safe mode turned on; in that case it will only work in the CLI version of PHP. (In practice, the pcntl extension is generally only enabled for the CLI SAPI, and the manual warns against using it inside a web server at all.)
I know this is an old question, but this will undoubtedly be useful to many: PHPThreads
Code Example:
function threadproc($thread, $param) {
    echo "\tI'm a PHPThread. In this example, I was given only one parameter: \"" . print_r($param, true) . "\" to work with, but I can accept as many as you'd like!\n";
    for ($i = 0; $i < 10; $i++) {
        usleep(1000000);
        echo "\tPHPThread working, very busy...\n";
    }
    return "I'm a return value!";
}

$thread_id = phpthread_create($thread, array(), "threadproc", null, array("123456"));

echo "I'm the main thread doing very important work!\n";

for ($n = 0; $n < 5; $n++) {
    usleep(1000000);
    echo "Main thread...working!\n";
}

echo "\nMain thread done working. Waiting on our PHPThread...\n";

phpthread_join($thread_id, $retval);

echo "\n\nOur PHPThread returned: " . print_r($retval, true) . "!\n";
Requires PHP extensions:
posix
pcntl
sockets
I've been using this library in production now for months. I put a LOT of effort into making it feel like using POSIX pthreads. If you're comfortable with pthreads, you can pick this up and use it very effectively in no time.
Computationally, the inner workings are quite different, but practically, the functionality is nearly the same including semantics and syntax.
I've used it to write an extremely efficient WebSocket server that supports high throughput rates. Sorry, I'm rambling. I'm just excited that I finally got it released and I want to see who it will help!
popen()/proc_open() run processes in parallel, even on Windows.
The most common pitfall is calling fread()/stream_get_contents() without a while loop. Once you fread() from a running process, it blocks the output of the processes started after it, because fread() waits until at least one byte arrives.
Add stream_select(). The closest analogy is "a foreach with a timeout, but for streams": you pass in arrays of streams to read and to write, and on each call stream_select() selects one or more of them. The function updates the original arrays by reference, so don't forget to restore them to the full set of streams before the next call. It gives the streams some time to read or write; if there is no content, control returns, allowing us to retry the cycle.
// sleep.php
<?php

set_error_handler(function ($severity, $error, $file, $line) {
    throw new ErrorException($error, -1, $severity, $file, $line);
});

$sleep = $argv[1];

sleep($sleep);

echo $sleep . PHP_EOL;
exit(0);
// run.php
<?php

$procs = [];
$pipes = [];

$cmd = 'php %cd%/sleep.php'; // %cd% is Windows syntax for the current directory
$desc = [
    0 => [ 'pipe', 'r' ], // child stdin
    1 => [ 'pipe', 'w' ], // child stdout
    2 => [ 'pipe', 'w' ], // child stderr
];

for ( $i = 0; $i < 10; $i++ ) {
    $iCmd = $cmd . ' ' . ( 10 - $i ); // add SLEEP argument to each command: 10, 9, ... etc.

    $proc = proc_open($iCmd, $desc, $pipes[ $i ], __DIR__);
    $procs[ $i ] = $proc;

    // never let reads block - this is exactly the fread() pitfall described above
    stream_set_blocking($pipes[ $i ][ 1 ], false);
    stream_set_blocking($pipes[ $i ][ 2 ], false);
}

$stdins  = array_column($pipes, 0);
$stdouts = array_column($pipes, 1);
$stderrs = array_column($pipes, 2);
while ( $procs ) {
    foreach ( $procs as $i => $proc ) {
        // #gzhegow > [OR] you can output while the script is running (if the child never finishes)
        // from the parent's side the child's stdout/stderr are READ streams, its stdin is a WRITE stream
        $read = [ $stdouts[ $i ], $stderrs[ $i ] ];
        $write = [ $stdins[ $i ] ];
        $except = [];

        if (stream_select($read, $write, $except, $seconds = 0, $microseconds = 1000)) {
            foreach ( $read as $stream ) {
                echo stream_get_contents($stream);
            }
        }

        $status = proc_get_status($proc);

        if (false === $status[ 'running' ]) {
            $exitCode = proc_close($proc);
            unset($procs[ $i ]);

            echo 'STATUS: ' . $exitCode . PHP_EOL;
        }

        // #gzhegow > [OR] you can output once a command finishes
        // $status = proc_get_status($proc);
        //
        // if (false === $status[ 'running' ]) {
        //     if ($content = stream_get_contents($stderrs[ $i ])) {
        //         echo '[ERROR] ' . $content . PHP_EOL;
        //     }
        //
        //     echo stream_get_contents($stdouts[ $i ]) . PHP_EOL;
        //
        //     $exitCode = proc_close($proc);
        //     unset($procs[ $i ]);
        //
        //     echo 'STATUS: ' . $exitCode . PHP_EOL;
        // }
    }

    usleep(1); // give your computer one tick to decide which process runs next
}
// note: the output arrives in the order 1, 2, 3, ... even though the commands were started as 10, 9, 8, ...
exit(0);
Multithreading means performing multiple tasks or processes simultaneously. There is no direct way to achieve multithreading in PHP, but we can achieve almost the same results with the following code.
chdir(dirname(__FILE__)); // if you want to run this file as a cron job

for ($i = 0; $i < 2; $i += 1) {
    // This executes test_1.php in the background and moves to the next
    // iteration of the loop immediately, without waiting for the script
    // to complete. $i is passed as an argument.
    exec("php test_1.php $i > test.txt &");
}
Test_1.php
// mysqli replaces the legacy mysql_* API used in the original answer (removed in PHP 7)
$conn = mysqli_connect($host, $user, $pass, $db);

$i = $argv[1]; // this is the argument passed from the index.php file

for ($j = 0; $j < 5000; $j++) {
    mysqli_query($conn, "INSERT INTO test SET
        id = '$i',
        comment = 'test',
        datetime = NOW()");
}
This will execute test_1.php two times simultaneously, and both processes will run in the background at the same time, so in this way you can achieve multithreading in PHP.
This person did really good work on Multithreading in PHP
As of the writing of my current comment, I don't know about the PHP threads. I came to look for the answer here myself, but one workaround is this: the PHP program that receives the request from the web server delegates the whole answer formulation to a console application, which stores its output, the answer to the request, in a binary file; the PHP program that launched the console application then returns that binary file byte-by-byte as the answer to the received request. The console application can be written in any programming language that runs on the server, including those that have proper threading support, such as C++ programs that use OpenMP.
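A minimal sketch of that delegation, assuming a hypothetical console binary ./worker that writes its answer to the file path it is given:
<?php
// Delegate the heavy lifting to a (possibly multi-threaded) console
// application, then stream its output file back as the response.
// "./worker" and its file-path argument are hypothetical stand-ins.
$outFile = tempnam(sys_get_temp_dir(), 'answer_');

exec('./worker ' . escapeshellarg($outFile), $output, $exitCode);

if ($exitCode === 0) {
    header('Content-Type: application/octet-stream');
    readfile($outFile); // return the produced file byte-by-byte
}
unlink($outFile); // the helper's output must not outlive the request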
One unreliable, dirty trick is to use PHP to execute the console command
uname -a
and print the output of that command to the HTML output, to find out the exact version of the server software. Then install the exact same version of the software in a VirtualBox instance, compile/assemble whatever fully self-contained, preferably static, binaries one wants, and then upload those to the server. From that point onwards the PHP application can use those binaries in the role of the console application that has proper multi-threading. It's a dirty, unreliable workaround for a situation where the server administrator has not installed all the needed programming language implementations on the server. The thing to watch out for is that at every request the PHP application receives, the console application(s) must terminate/exit/get killed.
As to what the hosting service administrators think of such server usage patterns, I guess it boils down to culture. In Northern Europe the service provider HAS TO DELIVER WHAT WAS ADVERTISED, and if execution of console commands was allowed and uploading of non-malware files was allowed, and the service provider has the right to kill any server process after a few minutes or even after 30 seconds, then the hosting service administrators lack any arguments for forming a proper complaint. In the United States and Western Europe the situation/culture is very different, and I believe there's a great chance that a U.S. or Western European hosting service provider will refuse to serve hosting clients that use the above-described trick. That's just my guess, given my personal experience with U.S. hosting services and what I have heard from others about Western European hosting services. As of the writing of my current comment (2018_09_01), I do not know anything about the cultural norms of Southern-European hosting service providers and network administrators.
There is a PHP script. It gets data from an external API and imports (updates/deletes) the data into a WordPress database (products for WooCommerce). There are a lot of products... To import all of them, the script needs about 2-3 hours.
The problem is that while the script executes, the memory is not freed, which leads to its overflow. After that, the script just silently dies without any error.
In short, the script looks like this:
$products = getProductsFromApi();

foreach ($products as $key => $product) {
    $this->import($product);
}
The idea is to split the cronjob script into parts: if $currentMemory > 100Mb, then stop the script and run it again, not from the beginning but from the moment it stopped.
How can this be realized, given the restriction on the server of only 1 cronjob script per 2 hours?
Any other ideas?
You can use a tool such as Gearman to create a queue and workers for importing processes. You can program each worker to process a certain amount of products that would take time less than the server's maximum execution time.
Gearman will also allow you to control how many workers can run simultaneously. Therefore, the importing process would be faster and you'll make sure the server resources aren't being totally consumed by workers.
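For illustration, a minimal sketch with the PECL gearman extension; the function name import_product, the server address, and the JSON payload are assumptions, not part of the original answer:
<?php
// worker.php - each running copy of this script is one worker process
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('import_product', function (GearmanJob $job) {
    $product = json_decode($job->workload(), true);
    // ... import a single product here ...
    return 'ok';
});
while ($worker->work());

<?php
// client.php - queue every product as a background job
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
foreach (getProductsFromApi() as $product) {
    $client->doBackground('import_product', json_encode($product));
}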
You can serialize the $products array to a file when $currentMemory > 100Mb and then execute the script again:
$limit = 100 * 1000 * 1000;
$store = 'products.bin';
$products = [];

if (!file_exists($store)) {
    $products = getProductsFromApi();
} else {
    $products = unserialize(file_get_contents($store));
}

foreach ($products as $key => $product) {
    $this->import($product);
    unset($products[$key]);
    if (memory_get_usage() > $limit) {
        file_put_contents($store, serialize($products));
        // relaunch in the background so this process can exit and free its memory
        exec('nohup /usr/bin/php -f myscript.php > /dev/null 2>&1 &');
        exit(1);
    }
}

unlink($store);
You can use the sleep function.
For example:
$products = getProductsFromApi();

$i = 0;
foreach ($products as $key => $product) {
    // you can use your own condition here instead of this
    if ($i % 10 == 0) { // after every ten imports, sleep for 100 seconds
        sleep(100);
    }
    $this->import($product);
    $i++;
}
https://php.net/manual/en/function.sleep.php
My issue is this: I am forking a process so that I can speed up access time to files on disk. I store any data from these files in a tmp file on local disk. Ideally, after all processes have finished, I need to access that tmp file, get its data into an array, and then unlink the tmp file, as it is no longer needed. My problem is that pcntl_wait() does not actually wait until all child processes are done before moving on to the final set of operations, so I end up unlinking the file before some random process can finish up.
I can't seem to find a solid way to wait for all processes to exit cleanly and then access my data.
$numChild = 0;
$maxChild = 20; // max number of forked processes

// get a list of "availableCabs"
foreach ($availableCabs as $cab) {
    // fork the process
    $pids[$numChild] = pcntl_fork();

    if (!$pids[$numChild]) {
        // do some work
        exit(0);
    } else {
        $numChild++;
        if ($numChild == $maxChild) {
            pcntl_wait($status);
            $numChild--;
        }
    } // end fork
}

// Below is where things fall apart. I need to be able to print the complete
// serialized data, but several child processes don't actually exit before I
// unlink the file.
$dataFile = fopen($pid, 'r');
while (($values = fgetcsv($dataFile)) !== FALSE) {
    $fvalues[] = $values;
}
print serialize($fvalues);

fclose($dataFile);
unlink($file);
Please note that I'm leaving a lot of code out regarding what I'm actually doing; if we need that posted, that's not an issue.
Try restructuring your code so that you have two loops: one that spawns processes and one that waits for them to finish. You should also use pcntl_waitpid() to check for specific process IDs, rather than the simple child-counting approach you are currently using.
Something like this:
<?php
$maxChildren = 20; // Max number of forked processes
$pids = array();   // Child process tracking array

// Get a list of "availableCabs"
foreach ($availableCabs as $cab) {
    // Limit the number of child processes
    // If $maxChildren or more processes exist, wait until one exits
    if (count($pids) >= $maxChildren) {
        $pid = pcntl_waitpid(-1, $status);
        unset($pids[$pid]); // Remove PID that exited from the list
    }

    // Fork the process
    $pid = pcntl_fork();

    if ($pid) { // Parent
        if ($pid < 0) {
            // Unable to fork process, handle error here
            continue;
        } else {
            // Add child PID to tracker array
            // Use PID as key for easy use of unset()
            $pids[$pid] = $pid;
        }
    } else { // Child
        // If you aren't doing this already, consider using include() here - it
        // will keep the code in the parent script more readable and separate
        // the logic for the parent and children
        exit(0);
    }
}

// Now wait for the child processes to exit. This approach may seem overly
// simple, but because of the way it works it will have the effect of
// waiting until the last process exits and pretty much no longer
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
    unset($pids[$pid]);
}

// Now the parent process can do its cleanup of the results
I'm trying to create a counter that uses a shared memory block; just look at the code:
$i = 0;
$counter = new counter('g');

while ($i < 3) {
    $pid = pcntl_fork();
    echo $counter->get() . "\t" . $i . "\t" . $pid . "\n";
    $i++;
}

class counter {
    protected static $projID = array();
    protected $t_key;
    protected $shmid;
    protected $length;

    function __construct($projID) {
        !in_array($projID, self::$projID) or die('Using duplicate project identifier "' . $projID . '" for creating counter');
        self::$projID[] = $projID;
        $this->t_key = ftok(__FILE__, $projID);
        $this->shmid = shmop_open($this->t_key, 'c', 0755, 64);
        $this->length = shmop_write($this->shmid, '0', 0);
        shmop_close($this->shmid);
    }

    function get() {
        $sem = sem_get($this->t_key, 1);
        sem_acquire($sem);
        $shmid = shmop_open($this->t_key, 'c', 0755, 64);
        $inc = shmop_read($shmid, 0, $this->length);
        $this->length = shmop_write($shmid, (string) ($inc + 1), 0);
        shmop_close($shmid);
        sem_release($sem);
        return $inc;
    }
}
But I get strange results:
7 0 2567
8 1 2568
9 0 0
1 1 0
2 2 2569
40 1 2570
4 2 2572
3 2 0
51 2 2571
52 1 0
63 2 0
5 2 0
64 2 2573
65 2 0
I want to use this class to read and write strings in a file from multiple processes.
You're not ending the child processes at all, so they'll never finish. You're also not checking whether the process forked correctly or not, and there's no control over what has finished processing or in what order. Forking a process isn't really the multithreading that other languages provide; all that happens is that the current process is copied, each copy with its own variables, so your $i won't end at 3, nor is there a guarantee of which process finishes first or last.
Try with:
while ($i < 3)
{
    $pid = pcntl_fork();
    if ($pid == -1)
    {
        // some sort of message that the process wasn't forked
        exit(1);
    }
    else
    {
        if ($pid)
        {
            pcntl_wait($status); // refer to the PHP manual to check what this function does
        }
        else
        {
            // enter your code here, for whatever you want to be done in parallel
            // bear in mind that some processes can finish sooner, some can finish later
            // good use is when you have tasks dependent on network latency and you want
            // them executed asynchronously (such as uploading multiple files to an ftp,
            // or synchronizing something that's being done over the network)

            // after you're done, kill the process so it doesn't become a zombie
            posix_kill(getmypid(), 9); // not the most elegant solution, and can fail
        }
    }
}
You aren't dealing with the PID after your call to pcntl_fork, so the children keep executing the loop and forking in their turn. Unless you're trying to create a localized fork bomb, you probably don't want your forks to fork.
I did some work locally to try and figure out if that alone would solve the problem, but it didn't. It almost looks like the shared memory segment isn't being written to correctly, as if one of the digits on either side of the string is being repeated, which corrupts all of it and forces things to start over.
Complete speculation.
You might want to consider a different way of performing parallel processing with PHP. Using Gearman as a multi-process work queue is a favorite solution of mine.