I am not sure if this is doable or not. I am running a cron job to process data items, and the items are all independent of each other.
For example, I have data [x, y, z] and a method in the parent controller that does the processing. Each item takes a while, so my queue keeps piling up because the job handles one item at a time. I tried forking the process, but the child loses the connection to the MongoDB database, so I had to remove the fork for now. Please let me know if there is a way to reconnect after forking.
Pseudocode
MyTools.php
class MY_Tools extends CI_Controller {
    // ...
    public function process($item) {
        // Make a cURL request for the item
        // Update the database record for the item
    }
}
Tools.php
class Tools extends MY_Tools {
    // ...
    public function getAllData() {
        $data = $this->fetchDataFromDB(); // e.g. [X, Y, Z]
        $i = 0;
        while ($i < count($data)) {
            $this->process($data[$i]);
            $i++;
        }
    }
}
If I can dispatch each item without waiting for the previous one to complete and just keep going, that would be great.
In addition, I am using PHP 7, the cimongo library for CodeIgniter, and https://github.com/alcaeus/mongo-php-adapter.
Possible Solution
For PHP 7, I followed this guide to install the Gearman module:
https://techearl.com/php/installing-gearman-module-for-php7-on-ubuntu
The CodeIgniter Gearman library I used: https://github.com/appleboy/CodeIgniter-Gearman-Library
To work around a static method not being able to access the parent controller, use a singleton, as in the example below.
I was struggling with this for a bit, and hopefully it will help someone.
Example
class MY_Tools extends CI_Controller {
    private static $instance;

    function __construct() {
        parent::__construct();
        // Keep a reference to this controller so it can be reached from static context (e.g. a worker callback)
        self::$instance =& $this;
    }

    public static function get_instance()
    {
        return self::$instance;
    }
}
To Access
MY_Tools::get_instance()->YOUR_PUBLIC_METHODS();
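For completeness, here is a rough sketch of how the cron loop could hand each item to Gearman instead of processing it inline. It uses the plain GearmanClient/GearmanWorker classes from the PECL extension rather than the CodeIgniter library, and the job name process_item plus the server address are my own assumptions:
<?php
// Client side (the cron job): queue each item as a background job instead of processing it inline.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730); // assumed gearmand host/port

foreach ($data as $item) {
    // doBackground() returns immediately, so the loop never waits for a slow item.
    $client->doBackground('process_item', json_encode($item));
}

// Worker side (a separate long-running CLI script): picks up jobs and calls the controller method.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('process_item', function (GearmanJob $job) {
    $item = json_decode($job->workload(), true);
    // Reach the CodeIgniter controller through the singleton shown above.
    MY_Tools::get_instance()->process($item);
});

while ($worker->work()) {
    // keep consuming jobs
}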
Hope this can help someone
This is the sample code I'm working on:
class workerThread extends Thread {
    public function __construct($i) {
        $this->i = $i;
    }

    public function run() {
        while (true) {
            echo $this->i;
            sleep(1);
        }
    }
}

for ($i = 0; $i < 50; $i++) {
    $workers[$i] = new workerThread($i);
    $workers[$i]->start();
}
What is the appropriate way to get a return value from run(), or should I create another function as a callback?
Well, first you have to wait for all threads to finish.
So after your initial loop you should do one more loop waiting for each worker to finish. There is a Thread::join() method that syncs your main thread with the sub-thread: it halts execution and waits until the sub-thread finishes. So if you call if ($worker->join()) {...} you can be sure that the worker is done working :)
http://php.net/manual/de/thread.join.php
Second, a thread does not return a value. Instead, create a property in your class, for example $result, fill it with data during the run of the thread, and collect $worker->result at the end (after join).
Third, your current threads cannot report any result at all, because they run forever. From the question I don't understand whether you want them to run forever; if you do, there are more complicated steps involved to get the results continuously.
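As a rough sketch of the second point (assuming the workers are changed to do a finite amount of work instead of looping forever, and that the pthreads extension is installed), run() can store its output in a property that the main thread reads after join():
<?php
class workerThread extends Thread {
    public $result;  // filled in run(), read after join()
    private $i;

    public function __construct($i) {
        $this->i = $i;
    }

    public function run() {
        // do the actual work here and store whatever needs to be "returned"
        $this->result = $this->i * 2;
    }
}

$workers = array();
for ($i = 0; $i < 5; $i++) {
    $workers[$i] = new workerThread($i);
    $workers[$i]->start();
}

$results = array();
foreach ($workers as $worker) {
    if ($worker->join()) {      // blocks until this worker has finished
        $results[] = $worker->result;
    }
}
print_r($results);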
I want to use PHP threads to asynchronously run a function that executes a MySQL stored procedure. The stored procedure takes a long time, so running it asynchronously is the best solution I have found.
I have no idea how to bring threading into Laravel. Laravel has queues, but I want to do it directly in the script with a thread.
What I've done to approach a similar issue (I've done it in a sync command) is to create a class that extends Thread and call it from the Laravel code.
The class in your case might be something like this:
class LaravelWorker extends Thread
{
    private $object;

    public function __construct($object)
    {
        $this->object = $object;
    }

    public function run()
    {
        // Run the long-running stored procedure on the wrapped object
        $this->object->runProcedure();
    }
}
And you can call it in your code like this:
$object = new ObjectWithProcedure();
$threadedMethod = new LaravelWorker($object);
$threadedMethod->start();
If, for some reason, you need to wait until the $threadedMethod finishes, you can do
$threadedMethod->join();
(more_code...)
And the more_code section will only execute once $threadedMethod has ended.
Hope it helps!
When does such a situation occur?
If you are using shared memory and semaphores for interprocess locking (with the pcntl extension), you should take care of the semaphore and shared memory segment life cycle. For example, you write a background worker application that uses a master process and some child (forked) processes for job processing. Shared memory and semaphores are a good way to do IPC between them, and an RAII-like class wrapper around the shm_* and sem_* PHP functions looks like a good idea too.
Example
class Semaphore
{
    private $file;
    private $sem;

    public function __construct()
    {
        $this->file = tempnam(sys_get_temp_dir(), 's');
        $semKey = ftok($this->file, 'a');
        $this->sem = sem_get($semKey, 1); // auto_release = 1 by default
    }

    public function __destruct()
    {
        if (is_resource($this->sem)) {
            sem_remove($this->sem);
        }
    }

    // ....
}
This is not a good choice: after a fork we have one instance in the parent and one in the child process, and the destructor in either of them destroys the semaphore.
Why this matters
Most Linux systems have limits on the number of semaphores and shared memory segments. If your application needs to create and remove many shared memory segments or semaphores, you cannot wait for them to be released automatically on process shutdown.
Question
In C you can use shmctl() with IPC_RMID, which marks the segment for removal. The actual removal occurs when the last process currently attached to the segment has properly detached from it. Of course, if no processes are currently attached to the segment, the removal is immediate. It works like a simple reference counter. But PHP does not expose shmctl().
The other strategy is to destroy the semaphore only in the destructor of the master process:
class Semaphore
{
    // ...
    private $pid;

    public function __construct()
    {
        $this->pid = getmypid();
        // ...
    }

    public function __destruct()
    {
        if (is_resource($this->sem) && $this->pid === getmypid()) {
            sem_remove($this->sem);
        }
    }

    // ....
}
So, the questions are:
Is there any way to use IPC_RMID in PHP?
What strategy should be used in such cases? Destroy only in the master process? Something else?
I checked the current PHP source code and IPC_RMID is not used. However, PHP uses semop() and, with it, the SEM_UNDO flag in case auto_release (see the PHP sem_get() manual) is set. But be aware that this works on a per-process level. So if you are using PHP as an Apache module, or via FCGI or FPM, it might not work as expected. It should work nicely for CLI, though.
For your cleanup, it depends on whether the "master" terminates last or not.
If you do not know, you can implement reference counting yourself.
class Semaphore
{
    static private $m_referenceCount = 0;

    public function __construct()
    {
        ++self::$m_referenceCount;
        // acquire semaphore
    }

    public function __destruct()
    {
        if (--self::$m_referenceCount <= 0) {
            // clean up
        }
    }
}
But be aware that the destructor is NOT executed under some circumstances.
Please give me some real-life examples of when you had to use __destruct in your classes.
OK, since my last answer apparently didn't hit the mark, let me try this again. There are plenty of resources and examples on the internet for this topic. Do a bit of searching and browse other frameworks' code and you'll see some pretty good examples...
Don't forget that just because PHP will close resources on termination for you doesn't mean that it's bad to explicitly close them when you no longer need them (or good to not close them)... It depends on the use case (is the resource being used right up to the end, or is there one call early on and then it's not needed again for the rest of execution)...
Now, we know that __destruct is called when the object is destroyed. Logically, what happens if the object is destroyed? Well, it means it's no longer available. So if it has resources open, doesn't it make sense to close those resources as it's being destroyed? Sure, in the average web page, the page is going to terminate shortly after, so letting PHP close them usually isn't terrible. However, what happens if for some reason the script is long-running? Then you have a resource leak. So why not just close everything when you no longer need it (or, considering the scope of the destructor, when it's no longer available)?
Here's some examples in real world frameworks:
Lithium's lithium\net\Socket class
Kohana's Memcached Driver
Joomla's FTP Implementation
Zend Frameworks's SMTP Mail Transport Class
CodeIgniter's TTemplate Class
A Tidy Filter Helper for Cake
A Google-Groups Thread about using Destructors For the Symfony Session Class
The interesting thing is that Kohana keeps track of the tags, so that it can delete by "namespace" later (instead of just clearing the cache). So it uses the destructor to flush those changes to the hard storage.
The CodeIgniter class also does something interesting in that it adds debugging output to the output stream in the destructor. I'm not saying this is good, but it's an example of yet another use...
I personally use destructors whenever I have long-running processes on my master controller. In the constructor, I check for a pid file. If that file exists (and its process is still running), I throw an exception. If not, I create a file with the current process's id. Then, in the destructor I remove that file. So it's more about cleaning up after itself than just freeing resources...
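A minimal sketch of that pid-file pattern (the class name and file path are placeholders I made up, not the author's actual code) might look like this:
<?php
class PidLock
{
    private $pidFile;

    public function __construct($pidFile = '/tmp/my_worker.pid') // assumed path
    {
        $this->pidFile = $pidFile;

        if (file_exists($this->pidFile)) {
            $oldPid = (int) file_get_contents($this->pidFile);
            // posix_kill() with signal 0 only checks whether the process exists
            if ($oldPid > 0 && posix_kill($oldPid, 0)) {
                throw new RuntimeException("Another instance is already running (pid $oldPid)");
            }
        }

        file_put_contents($this->pidFile, getmypid());
    }

    public function __destruct()
    {
        // Clean up the pid file when the long-running process ends
        if (file_exists($this->pidFile)) {
            unlink($this->pidFile);
        }
    }
}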
There is another handy use: generating an HTML page.
class HTMLgenerator {
    function __construct() {
        echo "<html><body>";
    }

    function __destruct() {
        echo "</body></html>";
    }
}
With this class, you can write
$html = new HTMLgenerator();
echo "Hello, world!";
And the result is
<html><body>Hello, world!</body></html>
For example:
<?php
class Session
{
    protected $data = array();

    public function __construct()
    {
        // load session data from database or file
    }

    // get and set functions

    public function __destruct()
    {
        // store session data in database or file
    }
}
This is a good way to use __destruct. It prevents reading and writing to the session store all the time, doing so only at the start and at the end.
I created a PHP page that generates a movie information JPG file. The page has to gather some information and run Inkscape to convert a template (an SVG file) to a PNG before converting it to a JPG. The SVG contains relative links to other images, which must exist as files. So my page downloads the necessary files into a temporary folder and converts the SVG file. At the end, the temporary folder must be deleted.
I put the temporary folder deletion into the destructor. The page can end unexpectedly for many reasons, and the only thing I can be sure of is that the destructor will be called when the page exits.
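A simplified sketch of that idea (the directory naming and the cleanup loop are my own assumptions, not the original page's code):
<?php
class TempWorkDir
{
    private $dir;

    public function __construct()
    {
        // Create a unique temporary working directory for the downloaded assets
        $this->dir = sys_get_temp_dir() . '/movie_' . uniqid();
        mkdir($this->dir, 0700, true);
    }

    public function path()
    {
        return $this->dir;
    }

    public function __destruct()
    {
        // Delete the downloaded files, then the folder itself, however the script ends
        foreach (glob($this->dir . '/*') as $file) {
            unlink($file);
        }
        rmdir($this->dir);
    }
}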
Hope this helps.
A destructor is extremely useful if you use a custom database connector/wrapper.
In the constructor, you can pass the connection information. Because PHP gives you a destructor (rather than a finalizer, etc.), you can rely on it to close the connection for you. It's more of a convenience, but it certainly is useful.
For example, when PHP decides to explicitly "free" the object (i.e., it is no longer used), it will call the destructor at that time. This is more useful in the scenario I describe, as you're not waiting for a garbage collector to run and call a finalizer.
$0.02
Ian
<?php
class Database
{
    private $connection;
    private $cache = array();

    function __construct($params)
    {
        // Connect here using $params
    }

    // Query
    public function query(Query $Query)
    {
        if ($this->is_cached($Query->checksum))
        {
            return $this->get_cache($Query->checksum);
        }
        // ...
    }

    public function __destruct()
    {
        unset($this->connection);
        $this->WriteCache();
        unset($this->cache);
        shutdown_log($this, 'Destruction Completed');
    }
}
?>
There's an example that should make it clear.
If you use handles returned by fopen() for, say, logging, you can use __destruct() to make sure fclose() is called on your resources when your class is destroyed.
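A minimal sketch of that (the class name and log path are placeholders):
<?php
class FileLogger
{
    private $handle;

    public function __construct($path = '/tmp/app.log') // assumed path
    {
        $this->handle = fopen($path, 'a');
    }

    public function log($message)
    {
        fwrite($this->handle, date('c') . ' ' . $message . PHP_EOL);
    }

    public function __destruct()
    {
        // Release the file handle as soon as the logger goes away
        if (is_resource($this->handle)) {
            fclose($this->handle);
        }
    }
}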
You are right, __destruct is mostly unnecessary for short-running PHP scripts. Database connections, file handles and so on are closed on script exit, or sometimes even earlier if the variables go out of scope.
One example I can think of is writing logs to the database. Since we didn't want to fire one query per log entry created somewhere in the script, we put the "write to db" part in the __destruct of the logging class, so when the script ends everything gets inserted into the database at once.
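Roughly, that buffering idea looks like this (a sketch with an assumed PDO connection and log table, not our actual class):
<?php
class DbLogger
{
    private $pdo;
    private $entries = array();

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function log($message)
    {
        // Only buffer in memory here; nothing touches the database yet
        $this->entries[] = array(date('Y-m-d H:i:s'), $message);
    }

    public function __destruct()
    {
        if (!$this->entries) {
            return;
        }
        // Flush all buffered entries with a single multi-row INSERT at script end
        $placeholders = implode(',', array_fill(0, count($this->entries), '(?, ?)'));
        $stmt = $this->pdo->prepare("INSERT INTO log (created_at, message) VALUES $placeholders");
        $stmt->execute(array_merge(...$this->entries));
    }
}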
Another example: if you allow a user to upload files, the destructor is sometimes a nice place to delete the temp file (if something goes wrong in the script, it at least gets cleaned up).
But even for file handles it can be useful. I've worked on an application that used the old fopen() etc. calls wrapped in objects, and when using those on large file trees PHP would run out of file handles sooner or later, so cleaning up while the script was running was not only nice but necessary.
I use APC caching for large numbers of "low level" objects that would otherwise use excessive memory, and I have a cacheCollection object that handles the reading and writing of those "low level" objects to and from APC during execution of the script. When the script terminates, the objects must be cleared out of APC, so I use the cacheCollection __destruct method to perform that cleanup.
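As a hedged sketch of that pattern, using the modern APCu functions and an invented key prefix rather than the actual cacheCollection class:
<?php
class CacheCollection
{
    private $prefix;
    private $keys = array();

    public function __construct($prefix = 'lowlevel_') // assumed key prefix
    {
        $this->prefix = $prefix;
    }

    public function store($id, $object)
    {
        $key = $this->prefix . $id;
        apcu_store($key, $object);
        $this->keys[] = $key;   // remember what we put in APCu
    }

    public function fetch($id)
    {
        return apcu_fetch($this->prefix . $id);
    }

    public function __destruct()
    {
        // Clear our objects out of APCu when the script terminates
        foreach ($this->keys as $key) {
            apcu_delete($key);
        }
    }
}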
I have used __destruct() in a logging class that wrapped a database connection:
<?php
class anyWrap
{
    private $obj, $calls, $log, $hooks;

    function __construct($obj, $logfile = NULL)
    {
        if (is_null($logfile))
        {
            $logfile = dirname(__FILE__) . "/../logs/wrapLog.txt";
        }
        $this->log = $logfile;
        $this->hooks = array();
        $this->calls = 0;
        $this->obj = $obj;
    }

    public function __set($attri, $val) {
        $this->obj->$attri = $val;
    }

    public function __get($attri) {
        return $this->obj->$attri;
    }

    public function __hook($method)
    {
        $this->hooks[] = $method;
    }

    public function __call($name, $args)
    {
        $this->calls++;
        if (in_array($name, $this->hooks))
        {
            file_put_contents($this->log, var_export($args, TRUE) . "\r\n", FILE_APPEND);
        }
        return call_user_func_array(array($this->obj, $name), $args);
    }

    // On destruction log diagnostics
    public function __destruct()
    {
        unset($this->obj); // release the wrapped database connection
        file_put_contents($this->log, $this->calls . "\r\n", FILE_APPEND);
    }
}
The script hooks into the database calls and logs the prepared statements; then, when the script has run to an end (I don't always know when), it finally logs the number of calls to the database to the file. This way I can see how many times certain functions have been called on the database and plan my optimizations accordingly.
If you are creating a view in a MySQL database from a PHP script, you must drop that view at the end of the script. If you don't, the next time the script is executed the view will not be created, because a view with the same name already exists in the database. You can use a destructor for this purpose.
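A rough sketch of that (the view name, query, and PDO connection details are placeholder assumptions):
<?php
class TemporaryView
{
    private $pdo;
    private $name;

    public function __construct(PDO $pdo, $name, $selectSql)
    {
        $this->pdo = $pdo;
        $this->name = $name;
        // Create the view for the duration of the script
        $this->pdo->exec("CREATE VIEW {$this->name} AS {$selectSql}");
    }

    public function __destruct()
    {
        // Drop the view so the next run of the script can create it again
        $this->pdo->exec("DROP VIEW IF EXISTS {$this->name}");
    }
}

// Usage (assumed connection and query)
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$view = new TemporaryView($pdo, 'recent_orders', 'SELECT * FROM orders WHERE created_at > NOW() - INTERVAL 7 DAY');
// ... query recent_orders here ...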
Here's a rather unusual use case for destructors that I think libraries such as Pest use to combine method chaining with functions, or in other words, to achieve a fluent interface for functions. It goes like this:
<?php
class TestCase {
    private $message;
    private $closure;
    private $endingMessage;

    public function __construct($message, $closure) {
        $this->message = $message;
        $this->closure = $closure;
    }

    public function addEndingMessage($message) {
        $this->endingMessage = $message;
        return $this;
    }

    private function getClosure() {
        return $this->closure;
    }

    public function __destruct() {
        echo $this->message . ' - ';
        $this->getClosure()();
        echo $this->endingMessage ? ' - ' . $this->endingMessage : '';
        echo "\r\n";
    }
}

function it($message, $closure) {
    return new TestCase($message, $closure);
}

it('ok nice', function() {
    echo 'what to do next?';
}); // outputs: ok nice - what to do next?

it('ok fine', function() {
    echo 'what should I do?';
})->addEndingMessage('THE END'); // outputs: ok fine - what should I do? - THE END