Test script for transaction concurrency for PostgreSQL - PHP

I would like to test some variants of transaction concurrency in PostgreSQL, and for that I need a script which would force two transactions to start at exactly the same time. Something that does not require manual intervention ;)
Any ideas?

You can homebrew this by taking a LOCK on a table, setting up your transactions, then releasing the lock by rolling back the transaction that got the lock. See this prior answer and its links for details on this approach. While I demonstrated it using three psql sessions, it's equally viable to do it with bash co-processes, a Python script using psycopg2 and the multiprocessing or threading modules, etc. It's fairly simple to do. Update: in fact, here's an example I just wrote in Python 3.
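Since PHP is in the title here, the same gating trick translates directly: below is a minimal sketch using PHP's pgsql extension, assuming a scratch database "test" with a table "t" (id int, x int) - adjust names and statements to whatever you're actually testing.

<?php
// One connection holds the gate lock; two others run the competing
// transactions. pg_send_query() dispatches without waiting for a result.
$gate = pg_connect('dbname=test');
$a    = pg_connect('dbname=test');
$b    = pg_connect('dbname=test');

pg_query($gate, 'BEGIN');
pg_query($gate, 'LOCK TABLE t IN ACCESS EXCLUSIVE MODE');

pg_query($a, 'BEGIN');
pg_query($b, 'BEGIN');
pg_send_query($a, 'UPDATE t SET x = x + 1 WHERE id = 1'); // blocks on the gate
pg_send_query($b, 'UPDATE t SET x = x + 1 WHERE id = 2'); // blocks on the gate

pg_query($gate, 'ROLLBACK'); // release the gate

pg_get_result($a);
pg_get_result($b);
pg_query($a, 'COMMIT');
pg_query($b, 'COMMIT');

Rolling back the gate transaction releases the ACCESS EXCLUSIVE lock, so both blocked statements resume at (nearly) the same instant.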
For more sophisticated tests, grab the PostgreSQL source code and use the "isolationtester" tool in src/test/isolation which lets you write recipes that do complex orderings of commands. It doesn't support being built with PGXS (though such support would probably be pretty trivial to add) so you have to compile the whole PostgreSQL source tree, but that's quick enough. It'll run against your existing PostgreSQL so there's no need to install the one you compiled.
See src/test/isolation/README for more about the isolationtester tool. The docs are a little thin on the ground since it's an internal testing tool, but the existing test cases should help you get started. Feel free to improve it to meet your needs and submit patches :)

Related

Running 30 PHP scripts at once in the background

I have a PHP script that must run 30 parallel times, each with a different argument. What is the best way to do this so that each script gets as even a share of the processor as possible?
Problem description
Like some other users are saying (me too), you should give a little more explanation (maybe code samples). For example, should these tasks run forever, or just once when the PHP script is called?
Message Queue
First off, I think running so many tasks at once should be avoided if possible; instead, schedule them (be gentle to the machine) with a message queue like, for instance, beanstalkd.
PHP solution
I don't think PHP is the right tool for your problem because it has no thread model. Threads are lightweight, while creating a new process is heavy. You could do it the way stroncium explains. My opinion is that running this code on a shared host would not be appreciated, because if all users ran long-running processes they would over-utilize the server.
Quote from Nettuts
There's no better resource than PHP's creator for knowing what PHP is capable of. Rasmus Lerdorf created PHP in 1995, and since then the language has spread like wildfire through the developer community, changing the face of the Internet. However, Rasmus didn't create PHP with that intent. PHP was created out of a need to solve web development problems.
However, you can't use PHP for everything. Lerdorf is the first to admit that PHP is really just a tool in your toolbox, and that even PHP has limitations.
Better language
Like I said previously I don't think PHP is the right tool.
Some languages which I think could solve the problem better:
Java
Python
C
Of course a lot more languages which support a thread model are the right tool for the job, but PHP wasn't originally designed for tasks like this. Even the creator of PHP, Rasmus, confirms this. You can read about it in this list from Nettuts, which I think makes some pretty good points.
Google app engine
Last, I would advise you to have a look at the Task Queue API from Google App Engine, because it is also a really good option ;). I might even consider it the best option: you have a free quota, and the costs are fair if you exceed that quota. The task queue uses webhooks, so the hooks can be coded in PHP.
PHP itself has no thread support, but you can run a few copies of your script simultaneously by using popen() or proc_open().
Sometimes multi-curl is used for this purpose (when popen() and the like are restricted).
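For illustration, a minimal sketch of the popen() variant - worker.php and its single argument are placeholders for your actual script:

<?php
// Launch 30 children concurrently; 'r' opens each child's stdout for reading.
$handles = array();
for ($i = 0; $i < 30; $i++) {
    $handles[$i] = popen('php worker.php ' . escapeshellarg($i), 'r');
}

// Drain and close each pipe; pclose() waits for that child to exit.
foreach ($handles as $i => $h) {
    $output = stream_get_contents($h);
    pclose($h);
    echo "worker $i finished: $output\n";
}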
I don't think it's CPU affinity that you have to worry about (so much); it's how I/O-bound each process is bound (pardon the pun) to become.
If using a UNIX-like operating system, you can try using the nice command to adjust for processes that you predict will be doing more disk/network/database access, but I don't think you'll see any significant speed-up.
If all processes are going to handle the same amount of I/O, you are probably better off just letting the kernel's scheduler do its job.
A little more information regarding what your jobs are actually accomplishing would be extremely helpful.
If you run it from the CLI you can fork 29-30 child processes and run the code there. You can have one main process with open sockets to each child, or serial-link them if you want to. You'd mostly have to hope the kernel will balance the processes if they have the same priority.
Given the simplicity of the question, I suggest you look for the simplest answer. Off the top, I'd say you might consider using one instance looping through 30 arguments.

PHP Build system [closed]

I'm using PHPUnderControl which runs on top of Cruise Control for my continuous integration and unit testing. I also have it setup to run PHPDocumentor to generate phpdoc's for me and it runs PHP Code Sniffer to enforce coding standards for me. But now I want to set up something on that same server (Ubuntu) to make deploying to a remote server easier. I already have it setup so after every successful build an SVN Export is done from trunk into a directory within the projects folder on the server.
I've been thinking of writing a little custom PHP script that will SSH to a configured remote server, tarball up the latest export, copy it over, untar and run any migrations. A PHP script like this shouldn't be too hard initially, unless I need to eventually begin scaling to multiple servers. I know there are systems out there like Phing, Fabric and others.
My question is whether anyone has any experience with these and can provide some pros and cons. I've begun setting up Phing on my server and will be trying Fabric next to play with them, but was wondering if anyone who has used them more extensively, or had to scale them, could provide some feedback.
I've used Capistrano with PHP (even though it's more of a Rails-y thing as it's written in Ruby).
It's been really straightforward to use, but that said I haven't had to scale much with it. We do deploy to various different staging/production servers though, and the multi-stage extension has been useful in those scenarios.
However, like most things Ruby, there are a lot of hooks and "magic", which can get confusing if you're new to Capistrano and trying to do something tricky with it.
As for how it compares to other deployment tools, I can't comment. I know we used to use Phing, but I'm uncertain why we switched to Capistrano.
If you like Capistrano but wish it were a bit more PHP-ish, check out Fredistrano.
I wrote an automated build (SVN export, Zend Guard encoding, etc.) and deployment system using Phing once and found it quite a pain to use. Whenever I had to write a special task, I felt I had to jump through way too many hoops just to get it to work.
So these days I just write simple bash scripts that do the building with SVN checkout, encoding, creating a tag in SVN, and deployment through rsync. It may be low-tech, and Phing may have some superior features, but at least it doesn't get in my way.
There's a new build tool called Bldr. It uses YAML for config instead of XML like most of the build systems out there, and it's highly extensible.
http://bldr.io
We use Phing and it has come in handy. We don't use it for packaging, but it shouldn't be too hard to make it do what you are looking for. We mainly use it for common tasks such as clearing caches, building development sites, and other tasks that aid development. It's been a big help, and from what I can gather it seems to be an Ant clone, although it might not have all the functionality that Ant has.
If I was to implement such a deployment system, I would probably opt for a slightly different solution from what you've outlined above. Instead of having code that runs locally on my system, connects to a list of remote servers and does the "work" there, I would pack the updater module with the rest of the code and have it pull the update data from my server on demand (or rather when I "told" it to do so). That way you have much less to worry about on your end (you just need to serve the updated code via http when requested, and the remote server handles the rest). Just my 2 cents.
I've written my own rsync-like tool for this because I work from a very bad internet connection in a third-world country and have all kinds of failures and starving connections, so rsync does not work.
On your remote system you should at least write a little script that does backups before running migrations.
Better still, use a totally independent mirror system on your web host and include some small but fundamental unit tests after a migration. Then do a root switch to put the updated website online. This would require running a few interactive services in read-only mode during migration (unfortunately a feature that not many people implement).
But first of all - think about whether it is really worth your time doing this. If you only update once a quarter, then a simple checklist on paper would be enough.

What are best practices for self-updating PHP+MySQL applications?

It is pretty standard practice now for desktop applications to be self-updating. On the Mac, every non-Apple program that uses Sparkle in my book is an instant win. For Windows developers, this has already been discussed at length. I have not yet found information on self-updating web applications, and I hope you can help.
I am building a web application that is meant to be installed like Wordpress or Drupal - unzip it in a directory, hit some install page, and it's ready to go. In order to have broad server compatibility, I've been asked to use PHP and MySQL -- is that **MP? In any event, it has to be broadly cross-platform. For context, this is basically a unified web messaging application for small businesses. It's not another CMS platform, think webmail.
I want to know about self-updating web applications. First of all, (1) is this a bad idea? As of Wordpress 2.7 the automatic update is a single button, which seems easy, and yet I can imagine so many ways this could go terribly, terribly wrong. Also, isn't the idea that the web files are writable by the web process a security hole?
(2) Is it worth the development time? There are probably millions of WP installs in the world, so it's probably worth the time it took the WP team to make it easy, saving millions of man hours worldwide. I can only imagine a few thousand installs of my software -- is building self-upgrade worth the time investment, or can I assume that users sophisticated enough to download and install web software in the first place could go through an upgrade checklist?
If it's not a security disaster or waste of time, then (3) I'm looking for suggestions from anyone who has done it before. Do you keep a version table in your database? How do you manage DB upgrades? What method do you use for rolling back a partial upgrade in the context of a self-updating web application? Did using an ORM layer make it easier or harder? Do you keep a delta of version changes or do you just blow out the whole thing every time?
I appreciate your thoughts on this.
Frankly, it really does depend on your userbase. There are tons of PHP applications that don't automatically upgrade themselves. Their users are either technical enough to handle the upgrade process, or just don't upgrade.
I propose two steps:
1) Seriously ask yourself what your users are likely to really need. Will self-updating provide enough of a boost to adoption to justify the additional work? If you're confident the answer is yes, just do it.
Since you're asking here, I'd guess that you don't know yet. In that case, I propose step 2:
2) Release version 1.0 without the feature. Wait for user feedback. Your users may immediately cry for a simpler upgrade process, in which case you should prioritize it. Alternately, you may find that your users are much more concerned with some other feature.
Guessing at what your users want without asking them is a good way to waste a lot of development time on things people don't actually need.
I've been thinking about this lately in regards to database schema changes. At the moment I'm digging into WordPress to see how they've handled database changes between revisions. Here's what I've found so far:
$wp_db_version is loaded from wp-includes/version.php. This variable corresponds to a Subversion revision number, and is updated when wp-admin/includes/schema.php is changed. (Possibly through a hook? I'm not sure.) When wp-admin/admin.php is loaded, the WordPress option named db_version is read from the database. If this number is not equal to $wp_db_version, wp-admin/upgrade.php is loaded.
wp-admin/includes/upgrade.php includes a function called dbDelta(). dbDelta() scans $wp_queries (a string of SQL queries that will create the most recent database schema from scratch) and compares it to the schema in the database, altering the tables as necessary so that the schema is brought up-to-date.
upgrade.php then runs a function called upgrade_all(), which runs specific upgrade_NNN() functions if $wp_db_version is less than target values. (i.e. upgrade_250(), the WordPress 2.5.0 upgrade, will be run if the database version is less than 7499.) Each of these functions runs its own data migration and population procedures, some of which are called during the initial database setup script. This nicely cuts down on duplicate code.
So, that's one way to do it.
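To make the pattern concrete, here is a stripped-down sketch of the same version-gated idea - the options table, PDO wiring and migration bodies are illustrative placeholders, not WordPress's actual internals:

<?php
$app_db_version = 7500; // bump this whenever the schema changes

function run_upgrades(PDO $db, $installed) {
    // Minimum code version => migration to run, in ascending order.
    $migrations = array(
        7499 => "ALTER TABLE posts ADD COLUMN guid VARCHAR(255)",
        7500 => "CREATE INDEX idx_posts_guid ON posts (guid)",
    );
    foreach ($migrations as $version => $sql) {
        if ($installed < $version) {
            $db->exec($sql); // only run migrations newer than the installed schema
        }
    }
}

$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$installed = (int)$db->query(
    "SELECT value FROM options WHERE name = 'db_version'")->fetchColumn();

if ($installed < $app_db_version) {
    run_upgrades($db, $installed);
    $db->exec("UPDATE options SET value = $app_db_version WHERE name = 'db_version'");
}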
Yes, it would be a security hole if PHP went and overwrote its files from some place on the internet with no warning. There's no guarantee that the server is connecting correctly to your update server (it might download code crafted by someone else if DNS poisoning occurred) - giving someone else access to your clients' data. Therefore digital signing would be important.
The user could control updates by setting permissions on the web directory so that PHP only has read access to the files - this procedure could simply be documented with your program.
One question remains (I really don't know the answer): can PHP overwrite files it is currently using (e.g. if the update.php file itself needed to be updated)? Worth testing.
I suppose you've already ruled this out, but you could host it as a service. (Think wordpress.com)
I'd suggest that you package your application with pear and set up a channel. Your users can then upgrade the application through a standard interface (pear). It's not entirely automatic (unless the users have some kind of automation running on top of pear), but it's standard, so any sysadmin can maintain it.
I think your best option is an update-checking mechanism that alerts the administrator when there are updates.
As you mention, there are a number of potential security problems. Due to those alone, I would suggest not doing this. Instead, try creating a fairly smart upgrading script.
Just my 2 cents: I'd consider an automatically self-updating application within my CMS a security hole, so if you decide to code this feature, you should consider implementing different levels of this behavior:
Automatically update
Check for updates and notify
Disable

Does PHP have threading?

I found this PECL package called threads, but there is no release yet. And nothing is coming up on the PHP website.
From the PHP manual for the pthreads extension:
pthreads is an Object Orientated API that allows user-land multi-threading in PHP. It includes all the tools you need to create multi-threaded applications targeted at the Web or the Console. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Stackables.
As unbelievable as this sounds, it's entirely true. Today, PHP can multi-thread for those wishing to try it.
Since the first release of PHP 4, on 22 May 2000, PHP has shipped with a thread-safe architecture - a way for it to execute multiple instances of its interpreter in separate threads in multi-threaded SAPI (Server API) environments. Over the last 13 years, the design of this architecture has been maintained and advanced: it has been in production use on the world's largest websites ever since.
Threading in user land was never a concern for the PHP team, and it remains so today. You should understand that in the world where PHP does its business, there's already a defined method of scaling - add hardware. Over the many years PHP has existed, hardware has gotten cheaper and cheaper, so this became less and less of a concern for the PHP team. While it was getting cheaper, it also got much more powerful; today, our mobile phones and tablets have dual- and quad-core architectures and plenty of RAM to go with it, and our desktops and servers commonly have 8 or 16 cores and 16 or 32 gigabytes of RAM, though we may not always be able to afford two within budget, and having two desktops is rarely useful for most of us anyway.
Additionally, PHP was written for the non-programmer; it is many hobbyists' native tongue. The reason PHP is so easily adopted is that it is an easy language to learn and write. The reason PHP is so reliable today is the vast amount of work that goes into its design, and every single decision made by the PHP group. Its reliability and sheer greatness keep it in the spotlight after all these years, where its rivals have fallen to time or pressure.
Multi-threaded programming is not easy for most; even with the most coherent and reliable API, there are different things to think about and many misconceptions. The PHP group does not wish for user-land multi-threading to be a core feature; it has never been given serious attention - and rightly so. PHP should not be complex, for everyone.
All things considered, there are still benefits to be had from allowing PHP to use its production-ready and tested features to make the most out of what we have, when adding more isn't always an option and for a lot of tasks is never really needed.
pthreads achieves, for those wishing to explore it, an API that allows a user to multi-thread PHP applications. Its API is very much a work in progress, designated a beta level of stability and completeness.
It is common knowledge that some of the libraries PHP uses are not thread-safe; it should be clear to the programmer that pthreads cannot change this and does not attempt to try. However, any library that is thread-safe is usable, as in any other thread-safe setup of the interpreter.
pthreads utilizes POSIX threads (even on Windows); what the programmer creates are real threads of execution. But for those threads to be useful, they must be aware of PHP - able to execute user code, share variables and allow a useful means of communication (synchronization). So every thread is created with an instance of the interpreter, but by design its interpreter is isolated from all other instances of the interpreter - just like multi-threaded Server API environments. pthreads attempts to bridge the gap in a sane and safe way. Many of the concerns of the programmer of threads in C just aren't there for the programmer of pthreads: by design, pthreads is copy-on-read and copy-on-write (RAM is cheap), so no two instances ever manipulate the same physical data, but they can both affect data in another thread. The fact that PHP may use thread-unsafe features in its core programming is entirely irrelevant; user threads, and their operations, are completely safe.
Why copy on read and copy on write:
public function run() {
    ...
    (1) $this->data = $data;
    ...
    (2) $this->other = someOperation($this->data);
    ...
}

(3) echo preg_match($pattern, $replace, $thread->data);
(1) While a read and a write lock are held on the pthreads object data store, data is copied from its original location in memory to the object store. pthreads does not adjust the refcount of the variable, so Zend is able to free the original data if there are no further references to it.
(2) The argument to someOperation references the object store; the original data stored, which is itself a copy of the result of (1), is copied again for the engine into a zval container. While this occurs a read lock is held on the object store; the lock is then released and the engine can execute the function. When the zval is created, it has a refcount of 0, enabling the engine to free the copy on completion of the operation, because no other references to it exist.
(3) The last argument to preg_match references the data store; a read lock is obtained, and the data set in (1) is copied to a zval, again with a refcount of 0. The lock is released, and the call to preg_match operates on a copy of the data, which is itself a copy of the original data.
Things to know:
The object store's hash table, where data is stored thread-safely, is based on the TsHashTable shipped with PHP by Zend.
The object store has a read and a write lock; an additional access lock is provided for the TsHashTable such that, if required (and it is required - var_dump/print_r, direct access to properties, as the PHP engine wants to reference them), pthreads can manipulate the TsHashTable outside of the defined API.
The locks are only held while the copying operations occur; when the copies have been made, the locks are released, in a sensible order.
This means:
When a write occurs, not only are a read and a write lock held, but also an additional access lock; the table itself is locked down, and there is no possible way another context can lock, read, write or affect it.
When a read occurs, not only is the read lock held, but the additional access lock too; again, the table is locked down.
No two contexts can physically or concurrently access the same data from the object store, but writes made in any context with a reference will affect the data read in any context with a reference.
This is shared-nothing architecture, and the only way to exist is to co-exist. Those a bit savvy will see that there's a lot of copying going on here, and they will wonder if that is a good thing. Quite a lot of copying goes on within a dynamic runtime; that's the dynamics of a dynamic language. pthreads is implemented at the level of the object, because good control can be gained over one object, but methods - the code the programmer executes - have another context, free of locking and copies: the local method scope. The object scope in the case of a pthreads object should be treated as a way to share data among contexts; that is its purpose. With this in mind you can adopt techniques to avoid locking the object store unless it's necessary, such as passing local-scope variables to other methods in a threaded object rather than having them copy from the object store upon execution.
Most of the libraries and extensions available for PHP are thin wrappers around third parties; PHP's core functionality is, to a degree, the same thing. pthreads is not a thin wrapper around POSIX threads; it is a threading API based on POSIX threads. There is no point in implementing threads in PHP that its users do not understand or cannot use. There's no reason that a person with no knowledge of what a mutex is or does should not be able to take advantage of all that they have, both in terms of skill and resources. An object functions like an object, but wherever two contexts would otherwise collide, pthreads provides stability and safety.
Anyone who has worked in Java will see the similarities between a pthreads object and threading in Java. Those same people will no doubt have seen an error called ConcurrentModificationException - as it sounds, an error raised by the Java runtime if two threads write to the same physical data concurrently. I understand why it exists, but it baffles me that, with resources as cheap as they are, coupled with the fact that the runtime is able to detect the concurrency at the exact and only time that safety could be achieved for the user, it chooses to throw a possibly fatal error at runtime rather than manage the execution of, and access to, the data.
No such stupid errors will be emitted by pthreads; the API is written to make threading as stable and compatible as is possible, I believe.
Multi-threading isn't like using a new database; close attention should be paid to every word in the manual and the examples shipped with pthreads.
Lastly, from the PHP manual:
pthreads was, and is, an experiment with pretty good results. Any of its limitations or features may change at any time; that is the nature of experimentation. Its limitations - often imposed by the implementation - exist for good reason; the aim of pthreads is to provide a usable solution to multi-tasking in PHP at any level. In the environment which pthreads executes in, some restrictions and limitations are necessary in order to provide a stable environment.
Here is an example of what Wilco suggested:
$cmd = 'nohup nice -n 10 /usr/bin/php -c /path/to/php.ini -f /path/to/php/file.php action=generate var1_id=23 var2_id=35 gen_id=535 > /path/to/log/file.log & echo $!';
$pid = shell_exec($cmd);
Basically this executes the PHP script at the command line, but immediately returns the PID and then runs in the background. (The echo $! ensures nothing else is returned other than the PID.) This allows your PHP script to continue or quit if you want. When I have used this, I have redirected the user to another page, where every 5 to 60 seconds an AJAX call is made to check if the report is still running. (I have a table to store the gen_id and the user it's related to.) The check script runs the following:
exec('ps ' . $pid, $processState);
if (count($processState) < 2) {
    // less than 2 rows in the ps output, therefore the report is complete
}
There is a short post on this technique here: http://nsaunders.wordpress.com/2007/01/12/running-a-background-process-in-php/
There is nothing available that I'm aware of. The next best thing would be to simply have one script execute another via CLI, but that's a bit rudimentary. Depending on what you are trying to do and how complex it is, this may or may not be an option.
In short: yes, there is multithreading in PHP, but you should use multiprocessing instead.
Background info: threads vs. processes
There is always a bit of confusion about the distinction between threads and processes, so I'll briefly describe both:
A thread is a sequence of commands that the CPU will process. The only data it consists of is a program counter. Each CPU core will only process one thread at a time but can switch between the execution of different ones via scheduling.
A process is a set of shared resources. That means it consists of a part of memory, variables, object instances, file handles, mutexes, database connections and so on. Each process also contains one or more threads. All threads of the same process share its resources, so you may use a variable in one thread that you created in another. If those threads are parts of two different processes, they cannot access each other's resources directly. In this case you need inter-process communication through e.g. pipes, files, sockets...
Multiprocessing
You can achieve parallel computing by creating new processes (each of which also contains a new thread) with PHP. If your threads do not need much communication or synchronization, this is your choice, since the processes are isolated and cannot interfere with each other's work. Even if one crashes, that doesn't concern the others. If you do need a lot of communication, you should read on at "multithreading" or - sadly - consider using another programming language, because inter-process communication and synchronization introduces a lot of complexity.
In PHP you have two ways to create a new process:
let the OS do it for you: you can tell your operating system to create a new process and run a new (or the same) PHP script in it.
For Linux you can use the following, or consider Darryl Hein's answer:
$cmd = 'nice php script.php 2>&1 & echo $!';
pclose(popen($cmd, 'r'));
For Windows you may use this:
$cmd = 'start "processname" /MIN /belownormal cmd /c "script.php 2>&1"';
pclose(popen($cmd, 'r'));
do it yourself with a fork: PHP also provides the possibility of forking through the function pcntl_fork(). A good tutorial on how to do this can be found here, but I strongly recommend against using it, since fork is a crime against humanity and especially against OOP.
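For completeness, and in case you ignore that advice, a minimal sketch of what a fork looks like - requires the pcntl extension and the CLI:

<?php
$pid = pcntl_fork();
if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid === 0) {
    // Child: do the work, then exit so it never runs the parent's code.
    echo "child " . getmypid() . " working\n";
    sleep(2);
    exit(0);
} else {
    // Parent: wait for the child to finish.
    pcntl_waitpid($pid, $status);
    echo "child $pid exited with status " . pcntl_wexitstatus($status) . "\n";
}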
Multithreading
With multithreading all your threads share their resources, so you can easily communicate between and synchronize them without a lot of overhead. On the other hand, you have to know what you are doing, since race conditions and deadlocks are easy to produce but very difficult to debug.
Standard PHP does not provide any multithreading, but there is an (experimental) extension that actually does - pthreads. Its API documentation even made it into php.net.
With it you can do some of the stuff you can do in real programming languages :-), like this:
class MyThread extends Thread {
    public function run() {
        // do something time consuming
    }
}

$t = new MyThread();
if ($t->start()) {
    while ($t->isRunning()) {
        echo ".";
        usleep(100);
    }
    $t->join();
}
For Linux there is an installation guide right here at Stack Overflow.
For Windows there is one now:
First you need the thread-safe version of PHP.
You need the pre-compiled versions of both pthreads and its PHP extension. They can be downloaded here. Make sure that you download the version that is compatible with your PHP version.
Copy php_pthreads.dll (from the zip you just downloaded) into your php extension folder ([phpDirectory]/ext).
Copy pthreadVC2.dll into [phpDirectory] (the root folder - not the extension folder).
Edit [phpDirectory]/php.ini and insert the following line
extension=php_pthreads.dll
Test it with the script above, putting a sleep or something where the comment is.
And now the big BUT: although this really works, PHP wasn't originally made for multithreading. There is a thread-safe version of PHP, and as of v5.4 it seems to be nearly bug-free, but using PHP in a multi-threaded environment is still discouraged in the PHP manual (though maybe they just haven't updated the manual on this yet). A much bigger problem might be that a lot of common extensions are not thread-safe. So you might get threads with this PHP extension, but the functions you depend on may still not be thread-safe, so you will probably encounter race conditions, deadlocks and so on in code you did not write yourself...
You can use pcntl_fork() to achieve something similar to threads. Technically it's separate processes, so the communication between the two is not as simple as with threads, and I believe it will not work if PHP is invoked by Apache.
If anyone cares, I have revived php_threading (not the same as threads, but similar) and I actually have it to the point where it works (somewhat) well!
Project page
Download (for Windows PHP 5.3 VC9 TS)
Examples
README
pcntl_fork() is what you are searching for, but it's process forking, not threading.
So you will have the problem of data exchange. To solve it you can use PHP's semaphore functions (http://www.php.net/manual/de/ref.sem.php); message queues may be a bit easier for the beginning than shared memory segments.
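For example, a minimal sketch of fork-plus-message-queue data exchange - assumes the pcntl and sysvmsg extensions, CLI only:

<?php
// Create (or attach to) a System V message queue keyed off this file.
$queue = msg_get_queue(ftok(__FILE__, 'q'));

$pid = pcntl_fork();
if ($pid === 0) {
    // Child: compute something and send it back (type 1, auto-serialized).
    msg_send($queue, 1, array('result' => 42));
    exit(0);
}

// Parent: block until the child's message arrives, then clean up.
msg_receive($queue, 1, $msgtype, 1024, $data);
pcntl_waitpid($pid, $status);
msg_remove_queue($queue);
echo "child sent: " . $data['result'] . "\n";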
Anyway, here is a strategy I am using in a web framework that I am developing, which loads resource-intensive blocks of a web page (possibly with external requests) in parallel:
I build a job queue to know what data I am waiting for, and then I fork off a job for every process. Once done, each stores its data in the APC cache under a unique key the parent process can access. Once all the data is there, the parent continues.
I am using a simple usleep() to wait, because inter-process communication is not possible in Apache (the children would lose the connection to their parents and become zombies...).
So this brings me to the last thing:
It's important to make every child kill itself!
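A loose sketch of that strategy (pcntl plus the classic APC apc_store/apc_fetch API; do_expensive_work() is a hypothetical placeholder, and whether the APC segment is shared between the processes depends on your SAPI and settings):

<?php
$jobs = array('header', 'sidebar', 'feed');
foreach ($jobs as $job) {
    if (pcntl_fork() === 0) {
        $data = do_expensive_work($job);   // hypothetical worker function
        apc_store("job:$job", $data);
        exit(0);                           // every child kills itself!
    }
}

// Parent: poll the cache until every job has reported in.
$results = array();
while (count($results) < count($jobs)) {
    foreach ($jobs as $job) {
        if (!isset($results[$job]) && ($v = apc_fetch("job:$job")) !== false) {
            $results[$job] = $v;
        }
    }
    usleep(10000);
}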
There are also classes that fork processes but keep data; I didn't examine them, but the Zend Framework has one, and they usually write slow but reliable code.
You can find it here:
http://zendframework.com/manual/1.9/en/zendx.console.process.unix.overview.html
I think they use shm segments!
Last but not least, there is a minor mistake in the example on that Zend page:
while ($process1->isRunning() && $process2->isRunning()) {
    sleep(1);
}

should of course be:

while ($process1->isRunning() || $process2->isRunning()) {
    sleep(1);
}
There is a threading extension being actively developed based on PThreads that looks very promising at https://github.com/krakjoe/pthreads
Just an update: it seems that the PHP guys are working on supporting threads, and it is available now.
Here is the link to it:
http://php.net/manual/en/book.pthreads.php
I have a PHP threading class that's been running flawlessly in a production environment for over two years now.
EDIT: This is now available as a composer library and as part of my MVC framework, Hazaar MVC.
See: https://git.hazaarlabs.com/hazaar/hazaar-thread
I know this is a way old question, but you could look at http://phpthreadlib.sourceforge.net/
Bi-directional communication, support for Win32, and no extensions required.
Ever heard of appserver from techdivision?
It is written in PHP and works as an application server managing multithreading for high-traffic PHP applications. It is still in beta, but very promising.
There is the rather obscure, and soon to be deprecated, feature called ticks. The only thing I have ever used it for is to allow a script to capture SIGINT (Ctrl+C) and close down gracefully.

PHP performance

What can I do to increase the performance/speed of my PHP scripts without installing software on my servers?
Profile. Profile. Profile. I'm not sure if there is anything out there for PHP, but it should be simple to write a little tool to insert profiling information into your code. You will want to profile function times and SQL query times.
So where you have a function:
function foo($stuff) {
    ...
    return ...;
}
I would change it to:
function foo($stuff) {
    trace_push_fn('foo');
    ...
    trace_pop_fn('foo');
    return ...;
}
(This is one of those cases where multiple returns in a function become a hindrance.)
And SQL:
function bar($stuff) {
    trace_push_fn('bar');
    $query = ...;
    trace_push_sql($query);
    mysql_query($query);
    trace_pop_sql($query);
    trace_pop_fn('bar');
    return ...;
}
In the end, you can generate a full trace of the program execution and use all sorts of techniques to identify your bottlenecks.
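For instance, the trace_* helpers could be as simple as a timer stack - a hypothetical sketch, not a real library:

<?php
$GLOBALS['__trace_stack']  = array();
$GLOBALS['__trace_totals'] = array();

function trace_push_fn($name) {
    // Remember when we entered this function.
    $GLOBALS['__trace_stack'][] = microtime(true);
}

function trace_pop_fn($name) {
    // Accumulate elapsed wall time under the function's name.
    $start = array_pop($GLOBALS['__trace_stack']);
    $GLOBALS['__trace_totals'][$name] =
        (isset($GLOBALS['__trace_totals'][$name])
            ? $GLOBALS['__trace_totals'][$name] : 0)
        + (microtime(true) - $start);
}

function trace_report() {
    // Print functions sorted by total time, slowest first.
    arsort($GLOBALS['__trace_totals']);
    foreach ($GLOBALS['__trace_totals'] as $fn => $secs) {
        printf("%-30s %.4fs\n", $fn, $secs);
    }
}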
One reasonable technique that can easily be pulled off the shelf is caching. A vast amount of time tends to go into generating resources for clients that are common between requests (and even across clients); eliminating this runtime work can lead to dramatic speed increases. You can dump the generated resource (or resource fragment) into a file outside the web tree, and then read it back in when needed. Obviously, some profiling will be needed to ensure this is actually faster than regeneration - forcing the web server back to disk regularly can be detrimental, so the resource really does need to have heavy reuse.
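A minimal sketch of that kind of file cache - the cache directory and TTL are placeholders, and the directory must exist and be writable by the web server:

<?php
function cached_fragment($key, $ttl, $generate) {
    $path = '/var/cache/myapp/' . md5($key) . '.html';
    // Fresh enough? Serve the stored copy and skip regeneration.
    if (is_file($path) && (time() - filemtime($path)) < $ttl) {
        return file_get_contents($path);
    }
    $content = $generate();                       // the expensive part
    file_put_contents($path, $content, LOCK_EX);  // rewrite under a lock
    return $content;
}

echo cached_fragment('front-page-news', 300, function () {
    // ... expensive DB queries and templating would go here ...
    return "<ul><li>news item</li></ul>";
});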
You might also be surprised how much time is spent inside badly written database queries; time common generated queries and see if they can be rewritten. The amount of time spent executing actual PHP code is generally pretty limited, unless you're using some sub-optimal algorithms.
Neither of these concerns is limited to PHP, though some of PHP's "magic" approaches/functions can over-protect you from thinking about them. For example, I recently updated a script that was using array_search to use a binary search over a sorted array, and gained the expected speedup (from a linear scan to a logarithmic one).
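In the same spirit, a sketch of that array_search-to-binary-search swap, assuming the array is sorted and the values are directly comparable:

<?php
function sorted_search(array $sorted, $needle) {
    $lo = 0;
    $hi = count($sorted) - 1;
    while ($lo <= $hi) {
        $mid = ($lo + $hi) >> 1;           // integer midpoint
        if ($sorted[$mid] === $needle) {
            return $mid;                    // found: return the index
        } elseif ($sorted[$mid] < $needle) {
            $lo = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return false;                           // same convention as array_search()
}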
Really consider using the XDebug profiler: it helps with checking how much a certain function is executed against what you would have expected.
I try to decrease the number of instructions while improving code readability by replacing logic with array lookups when appropriate.
It's what Jeff Atwood wrote in [The Best Code is No Code At All][1].
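A tiny sketch of what that looks like in practice - the status-code mapping is just an illustration:

<?php
// Replaces an if/elseif chain or switch over $code with a lookup.
function status_label($code) {
    static $labels = array(
        200 => 'OK',
        404 => 'Not Found',
        500 => 'Server Error',
    );
    return isset($labels[$code]) ? $labels[$code] : 'Unknown';
}

echo status_label(404); // "Not Found"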
Also, avoid loops inside another loop, and nested if/else statements.
Short functions: sometimes a lot of code does not need to be executed when the result value is already known.
Unnecessary testing:
if (count($array) === 0) return;
can also be written as:
if (! $array) return;
Another function-call eliminated!
[1]: http://www.codinghorror.com/blog/archives/000878.html "The Best Code is No Code At All"
You can optimize the code with two basic things:
Optimizing the PHP-associated libraries and server
Go through https://www.digitalocean.com/community/articles/how-to-optimize-apache-web-server-performance or
use a profiling tool like XHProf to see which parts of your code can be optimized; here is a link to follow: http://michaelsanford.com/compiling-xhprof-for-php-5-4/
Optimizing your code using a code profiler and a code analyzer
You need to install NetBeans in order to use this plugin.
Here are the steps you need to follow:
1) Open NetBeans, then select from the menu bar Tools > Plugins. Search for the plugin named "phpcsmd" in the Available Plugins tab and install it from there.
2) Now open the terminal and become the super user by typing the command "sudo su".
3) Install the PEAR library (if it is not installed) by running the following commands in your terminal:
a) wget http://pear.php.net/go-pear.phar
b) php go-pear.phar
We need this for the installation of further add-ons.
4) Then run the command
"pear config-set auto_discover 1"
This sets auto-discover to "true" for the required plugins, so they get installed to the desired location automatically.
5) Then run the command below to install PHP CodeSniffer:
"pear install --alldeps pear/PHP_CodeSniffer"
6) Now install the PHP Mess Detector by running the following command:
"pear install --alldeps phpmd/PHP_PMD"
If you get an error like "invalid package name/package file "phpmd/PHP_PMD"" while installing this module, use the command "pear channel-discover pear.phpmd.org" to get rid of it. After that you can run the above command again to install the Mess Detector.
7) Now install PHP Depend by running the following command:
"pear install --alldeps pdepend/PHP_Depend"
8) Now install the PHP Copy/Paste Detector by running the following command:
"pear install --alldeps phpunit/phpcpd"
9) Then run the command
"pear config-set auto_discover 0"
This sets auto-discover back to "false".
10) Then open NetBeans and follow the path Tools > Options > PHP > PHPCSMD
There is no magic solution, and attempting to provide generic solutions could well just be a waste of time.
Where are your bottlenecks? For example, are your scripts processor-, database-, or memory-intensive?
Have you performed any profiling?
Including files is slow, and requiring them is even slower. If you use __autoload for including every class, for example, that will add up.
I'm always a bit wary of trying to be too clever with code optimisation if it sacrifices code clarity. If you need to make code obscure to make it fast, would it not be cheaper to upgrade hardware instead of wasting your time trying to tweak code? Processor cycles are cheaper than programmer cycles, after all.
The ones I can think of...
Loop invariants are always a good one to watch.
Write E_STRICT and E_NOTICE compliant code, particularly if you are logging errors.
Avoid the @ operator.
Use absolute paths for requires and includes.
Use strpos, str_replace etc. instead of regular expressions whenever possible (see the sketch after this list).
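A micro-example of the strpos tip, since the !== false check trips people up - a literal substring test needs no regex engine:

<?php
$haystack = 'user@example.com';

// Regex version:
if (preg_match('/@/', $haystack)) { /* ... */ }

// Cheaper equivalent; strpos() can return 0, which is falsy,
// so compare strictly against false:
if (strpos($haystack, '@') !== false) { /* ... */ }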
Then there's a bunch of other methods that might work, but probably won't give you much benefit.
Whenever I look at performance problems, I think the best thing to do is time how long your pages take to run, and then look at the slowest ones. When you get these real metrics, you can often improve performance on the slowest ones by orders of magnitude, either by fixing a slow SQL query or perhaps tightening up the code a bit.
This of course requires no new hardware or special software, just a critical eye on the existing code.
That said, this will only work for so long... if you really are getting enough traffic to hit the limits of your hardware, and/or there is some code that is just inherently slow and really required, you will have to look at other possibilities.
I'm responsible for a large reporting system and we track the slowest reports kind of like that. I fire a unique key into the db when the report starts and then when it finishes I can determine how long it took. I'm using the database because that way I can detect when pages timeout (which happens a lot more than I'd like)
Follow some of the other advice first like profiling and making good resource allocation decisions, e.g. caching.
Also, take into account the performance of outside resources like your database. In MySQL you can check the slow query log, for example. In addition, make sure you didn't design your database and then forget about it. Optimizing your queries (again, for MySQL) against real data can pay off big.
Rasmus Lerdorf gave some good tips in his recent presentation "Simple is Hard" at FrOSCon '08. If you are using a bytecode cache (and you really should be using one), include path misses hurt a lot, so optimize your require/require_once calls.
You can use a profiling tool like XHProf to see which parts of your code can be optimized!
1) Use latest version of PHP
The core team is working hard on improving the performance of PHP in every version.
2) Use a bytecode cache
Since PHP 5.5 a bytecode cache has been added to PHP named OPcache. Using OPcache can make a huge difference especially since PHP 7. It receives improvements in every PHP version and might even get a JIT implementation in the future.
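For reference, a typical (illustrative) OPcache block in php.ini - the exact numbers depend on your application:

; php.ini - OPcache is bundled since PHP 5.5
opcache.enable=1
opcache.memory_consumption=128    ; MB of shared bytecode cache
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1     ; 0 in production skips the stat() checks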
3) Profiling
While developing profiling gives you great insight what exactly is happening. This helps a lot finding bottlenecks in your code.
One of the most used tools is XHProf, but it is not officially supported anymore and has issues with PHP >= 7. An alternative when you want to profile PHP >= 7 is Tideways, which is a fork of XHProf.
