How do PHP accelerators work? [closed] - php

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Okay, so I've been running some pretty large queries on my site, and they've been eating up MySQL resources. My admin asked whether I've tried different PHP accelerators, but I've never installed one before. So I did some research, and I'm curious: do I need to modify my actual PHP code, or do I just install an accelerator and let it take effect? I need ways to optimize my load and reduce the amount of resources being used on the server.

"PHP accelerators" are opcode caches; they save the server from having to re-interpret PHP files on every request. The savings are somewhere in the realm of 1% of CPU load, and an accelerator won't help you one bit if your problem is the database's resource usage.

Most PHP accelerators work by caching the compiled bytecode of PHP
scripts to avoid the overhead of parsing and compiling source code on
each request (some or all of which may never even be executed). To
further improve performance, the cached code is stored in shared
memory and directly executed from there, minimizing the amount of slow
disk reads and memory copying at runtime.
Source: http://en.wikipedia.org/wiki/PHP_accelerator
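On modern PHP (5.5 and later) this role is played by the bundled OPcache extension rather than a third-party accelerator. A minimal php.ini sketch with illustrative values (these are examples, not tuned recommendations for any particular server):

```ini
; Enable the opcode cache (bundled with PHP 5.5+)
opcache.enable=1
; Shared memory for cached bytecode, in megabytes
opcache.memory_consumption=128
; Maximum number of scripts that can be cached
opcache.max_accelerated_files=10000
; How often (in seconds) to check cached scripts for changes
opcache.revalidate_freq=60
```

No application code changes are needed; the cache is transparent to your scripts.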

Sounds to me like you need to accelerate your SQL queries, not your PHP code.

Here is a list of PHP accelerators that you can evaluate and install:
http://en.wikipedia.org/wiki/List_of_PHP_accelerators
I've used APC, which I believe is one of the most popular PHP accelerators. Besides opcode caching, it also provides a user cache: you can store the return value of an expensive function, keyed by its arguments, so that subsequent calls with the same arguments fetch the cached value instead of recomputing everything.
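A minimal sketch of that memoization pattern, assuming the APC extension's `apc_fetch`/`apc_store` functions; the fallback array is only there so the example also runs on a plain PHP CLI without the extension:

```php
<?php
// Sketch: memoizing an expensive function with APC's user cache.
// Falls back to a per-process array when the APC extension is absent.
function cached_call(callable $fn, array $args, $ttl = 300)
{
    static $fallback = array();
    $key = 'memo_' . md5(serialize($args));

    if (function_exists('apc_fetch')) {
        $hit = false;
        $value = apc_fetch($key, $hit);
        if ($hit) {
            return $value;
        }
        $value = call_user_func_array($fn, $args);
        apc_store($key, $value, $ttl); // cached for $ttl seconds
        return $value;
    }

    // No APC: plain in-memory memoization for this process only
    if (!array_key_exists($key, $fallback)) {
        $fallback[$key] = call_user_func_array($fn, $args);
    }
    return $fallback[$key];
}

// Usage: the second call returns the cached result
$slowSquare = function ($n) { usleep(1000); return $n * $n; };
echo cached_call($slowSquare, array(12)), "\n"; // 144 (computed)
echo cached_call($slowSquare, array(12)), "\n"; // 144 (cached)
```

Note that APC's user cache is shared across requests, while the fallback array lives only as long as the current process.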

Related

php cache laravel (file & memory) performance [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I'm just wondering what the best practice is for storing a cached value: in the file system, or in memory, in terms of performance?
I don't want to use Redis or any other cache software.
I just want to use either a memory cache or a file cache to cache some files for a period of time.
Redis, memcache, and memcached are essentially wrappers or helpers for accessing blocks of memory (so you don't have to map memory blocks manually).
That said, to answer your question: it depends on the OS you are using. Assuming you are running Linux, by default when you open a file the kernel's filesystem cache is used. You could take advantage of that and just use a file cache; for most applications this is best, as it stays reliable even through memory dumps or system reboots.
A memory cache is the fastest and the best for concurrency, but it is not to be relied on.
Let's look at an example:
your application receives 100 calls/second, and
when a request is not cached, it takes 10 seconds to generate/serve it.
That means you need to support 1000 open threads for the 10 seconds the request takes, and on top of that you will be generating the same cached data 1000 times, unless you set a flag to let the other processes know the data is already being generated and they should just wait.
Based on this scenario, you could have a process that regenerates that file each day.
If you use a file cache, you will be safe if the system dumps memory, because your file will still exist.
If you use a memory cache, you will be in trouble: you will have to regenerate the data either on the fly or manually, and either way you have a downtime of at least 10 seconds.
It's just an example; your flow could be completely different.
Comment if you have any doubts and I'll try to expand. (:
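The flag idea above can be sketched with a file cache plus an exclusive lock, so only one process regenerates an expired entry while the others wait or serve the stale copy. Paths and the TTL are illustrative:

```php
<?php
// Sketch of a file cache with a TTL plus a lock file, so that only one
// process regenerates an expired entry (the "1000 threads" scenario above).
function file_cache_get($path, $ttl, callable $generate)
{
    if (is_file($path) && (time() - filemtime($path)) < $ttl) {
        return file_get_contents($path); // fresh cache hit
    }

    $lock = $path . '.lock';
    $fp = fopen($lock, 'c');
    if (flock($fp, LOCK_EX | LOCK_NB)) {
        // We won the lock: regenerate the cache for everyone
        $data = $generate();
        file_put_contents($path, $data, LOCK_EX);
        flock($fp, LOCK_UN);
    } elseif (is_file($path)) {
        // Someone else is regenerating: serve the stale copy meanwhile
        $data = file_get_contents($path);
    } else {
        // No stale copy yet: wait for the writer, then read
        flock($fp, LOCK_EX);
        flock($fp, LOCK_UN);
        $data = file_get_contents($path);
    }
    fclose($fp);
    return $data;
}

// Usage
$path = sys_get_temp_dir() . '/report.cache';
echo file_cache_get($path, 86400, function () {
    return "expensive report generated at " . date('c');
});
```

The same pattern works with a memory cache; only the read/write/lock primitives change.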

Tracking the source of slowdowns [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I was wondering if someone could give a high-level answer about how to track functions which are causing a slow-down.
We have a site with 6 thousand lines of code and at times there's a significant slowdown.
I was wondering what would be the best way to track the source of these occasional slowdowns? Should we attach a time execution tracker on each function or would you recommend something else?
It's a standard LAMP stack setup with PHP 5.2.9 (no frameworks).
The only way to properly track down why and where a script is slowing down is by using a profiler.
There are a few of these available for PHP. Some require that you install a module on the server, some use a PHP-only library, and others are stand-alone.
My preferred profiler is Zend Studio, mainly because I use it as my IDE. It has the benefit of being both stand-alone and usable in conjunction with server-side modules (or the Zend Server package), allowing you to profile both locally and on production systems.
One of the easiest things to look for, however, is SELECT queries inside loops. They are notorious for causing slowdowns, especially once you have more than a few hundred records in the table being queried.
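The usual fix is to collapse the per-row queries into a single query with an IN () clause. A self-contained sketch using PDO with an in-memory SQLite database (table and data are made up; with MySQL only the DSN would differ):

```php
<?php
// Sketch: replacing a SELECT-inside-a-loop (N+1 queries) with one query.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');
foreach (array('alice', 'bob', 'carol') as $i => $name) {
    $db->exec("INSERT INTO users (id, name) VALUES (" . ($i + 1) . ", '$name')");
}

$ids = array(1, 3);

// Slow pattern: one query per id
// foreach ($ids as $id) {
//     $stmt = $db->query("SELECT name FROM users WHERE id = $id");
// }

// Faster: one query with IN () and bound placeholders
$placeholders = implode(',', array_fill(0, count($ids), '?'));
$stmt = $db->prepare("SELECT name FROM users WHERE id IN ($placeholders)");
$stmt->execute($ids);
$names = $stmt->fetchAll(PDO::FETCH_COLUMN);
print_r($names); // alice, carol
```

One round trip to the database instead of one per row is usually the single biggest win in this situation.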
Another is if you have multiple AJAX calls in rapid succession and you're using the default PHP session handler (flat files). This can increase loading times significantly because the I/O operations are locking, which means only one session-using request can be handled at a time, even though AJAX is by its very nature asynchronous.
The best way to combat this is to use/write a custom session handler that stores the sessions in a database. Just make sure you don't saturate the DB connection limit.
First and foremost though: Get yourself a proper profiler. ;)
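A sketch of such a database-backed session handler, using the PHP 5.2-compatible `session_set_save_handler()` callback style since the question mentions PHP 5.2.9. An in-memory SQLite database stands in for whatever database you actually use; the table name and schema are illustrative:

```php
<?php
// Sketch: database-backed session handler via session_set_save_handler().
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT, mtime INTEGER)');

function sess_open($savePath, $name) { return true; }
function sess_close() { return true; }

function sess_read($id)
{
    global $db;
    $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row ? $row['data'] : '';
}

function sess_write($id, $data)
{
    global $db;
    $stmt = $db->prepare(
        'REPLACE INTO sessions (id, data, mtime) VALUES (?, ?, ?)');
    return $stmt->execute(array($id, $data, time()));
}

function sess_destroy($id)
{
    global $db;
    return $db->prepare('DELETE FROM sessions WHERE id = ?')
              ->execute(array($id));
}

function sess_gc($maxlifetime)
{
    global $db;
    return $db->prepare('DELETE FROM sessions WHERE mtime < ?')
              ->execute(array(time() - $maxlifetime));
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
```

Unlike flat-file sessions, row-level access like this does not serialize all session-using requests behind one file lock.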

Is there a performance hit for querying the file system in PHP? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I was told many years ago that using "include" statements in PHP doesn't "cost" anything in performance. But what about when you query the file system, for instance running "filemtime" or "readdir". If I am performing these with every page request, is that a problem? Thanks!
The reason include statements "don't cost anything" in performance is that those included files are often cached as well. Semi-compiled versions of PHP scripts can be stored in the APC cache (see: http://php.net/manual/en/book.apc.php).
Apart from that cache, the OS will also cache file access, so subsequent calls to filemtime won't need actual disk access every time. And even if the OS requests information from the hard drive, the drive may have cached the most recent requests as well. So there is caching at multiple levels, all to make disk access as fast as possible.
For those reasons, calling filemtime many times should not be a big issue either, but if you need to read a lot of different files, the caches might not work optimally and you will have a lot of actual disk I/O. Eventually, if you have many visitors, file I/O might become a bottleneck. You might be able to solve this by upgrading your hardware; a RAID of SSDs will likely read faster than a single spinning disk.
If performance is still an issue, you might store the modification time of a file in a cache yourself, for instance in APC or memcache, or even in a PHP include file that contains an array of the relevant file information. Of course, you then need to update this cache every time you write a file, and you should profile every optimization you make. If you don't have APC, an include file probably won't do any good; and requests to memcache carry some overhead even though the data itself is in memory, so these solutions are not guaranteed to improve things.
But as always, don't start implementing such optimizations if you don't need to. Premature optimization... :)
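The include-file variant mentioned above can be sketched in a few lines; the cache file path is illustrative, and in practice you would regenerate the cache whenever you write to the files it describes:

```php
<?php
// Sketch: caching file modification times in a PHP array that is written
// out as an include file, so repeated lookups avoid filesystem stat calls.
$cacheFile = sys_get_temp_dir() . '/mtime_cache.php';

function load_mtime_cache($cacheFile)
{
    return is_file($cacheFile) ? include $cacheFile : array();
}

function save_mtime_cache($cacheFile, array $cache)
{
    file_put_contents($cacheFile,
        '<?php return ' . var_export($cache, true) . ';', LOCK_EX);
}

function cached_filemtime($path, $cacheFile)
{
    $cache = load_mtime_cache($cacheFile);
    if (!isset($cache[$path])) {
        $cache[$path] = filemtime($path); // one real stat() call
        save_mtime_cache($cacheFile, $cache);
    }
    return $cache[$path];
}

// Usage: subsequent calls read from the cache file, not the filesystem
echo cached_filemtime(__FILE__, $cacheFile), "\n";
```

With an opcode cache installed, the include file itself is served from shared memory, which is what makes this cheaper than a stat call.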

native PHP transfer operation to CPU [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Is there any way to let the CPU handle some operations in PHP directly (much like OpenCL), but in native PHP (without a PHP-OpenCL implementation)?
Edit:
What I mean:
I am writing some PHP CLI scripts.
Everything you do in PHP (variables, etc.) is held in RAM, but you can still access shared memory directly (shmop), which makes things much faster. (This is an example of accessing deep system resources; I just want to know if there are ways to access other deep system resources.)
I want to access the CPU directly to speed up some operations (in the context of multithreading: pcntl_fork and running inside an endless while loop in a PHP CLI script). Is there a way to skip the C handler (not sure if that is the right expression)?
OpenCL was just an example =)
shmop is not "accessing memory directly" (it's sharing memory) and "bypassing the C layer" is not "accessing the CPU more directly".
The only thing that may remotely make sense is that you wish to code in ASM directly, instead of writing code in a higher level language which gets compiled down to machine code eventually. This is useful if you think you understand the CPU better than a C/PHP compiler and can write more efficient code for it (in your case, I'd have my doubts, to be honest). If so, you'll have to write an external application in said low-level language and invoke it from PHP. In C and some other languages you could write inline-ASM, but PHP doesn't support that.
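Invoking such an external native program from PHP is straightforward. A sketch: the helper binary you would write in C/ASM is hypothetical, so the POSIX `expr` utility stands in for it here:

```php
<?php
// Sketch: offloading work to an external native binary from PHP.
// "expr" stands in for a hypothetical hand-optimized helper program.
$a = 6;
$b = 7;

// shell_exec() returns the command's stdout as a string
$out = shell_exec('expr ' . escapeshellarg($a) . ' \* ' . escapeshellarg($b));
echo trim($out), "\n"; // 42

// For long-running helpers, proc_open() gives you pipes to the child
// process, so you can stream work to it instead of re-spawning per call.
```

Keep in mind that spawning a process has its own overhead, so this only pays off when the offloaded work is substantial.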

Tuning Apache-PHP-MySQL for speed [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have a leased VPS with 2 GB of memory.
The problem I have is that I run a few Joomla installations, and the server's response becomes very slow when more than 30-50 users are connected at the same time.
Do you have any tips or books/tutorials/suggestions on how to improve response time in this situation?
Please give me only very concrete and useful URLs; I would be very grateful.
Attached is part of an htop view of that VPS.
The easiest and cheapest thing you can do is install a bytecode cache, e.g. APC, so that PHP does not need to reprocess every file again and again.
If you're on Debian or Ubuntu this is as easy as apt-get install php-apc.
I'm going to guess that most of your issues will come from Joomla - I'd start by looking through this list: https://stackoverflow.com/search?q=joomla+performance
Other than that, you might want to investigate a php accelerator: http://en.wikipedia.org/wiki/List_of_PHP_accelerators
If you have any custom SQL, you might want to check that your queries are making good use of indexes.
A quick look at your config suggests you're using Apache's prefork MPM - you might want to try the threaded worker MPM, though always benchmark each config change you make (Apache comes with a benchmarking tool) to ensure the change has a positive effect.
Some other links..
http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/
Though this is for WordPress, the principles should still apply.
http://blog.mydream.com.hk/howto/linux/performance-tuning-on-apache-php-mysql-wordpress
A couple of things to pay close attention to.
You never want your server to run out of memory. Ensure your Apache config limits the number of children to within your available memory.
Running SHOW PROCESSLIST on MySQL and looking for long-running queries can highlight some easy wins, as nothing kills performance like a slow SQL query.
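A few illustrative commands for those last two checks; the URL and credentials are placeholders, and the numbers are examples to adjust for your own load:

```shell
# Benchmark a page with Apache's bundled tool: 500 requests, 30 concurrent
ab -n 500 -c 30 http://your-vps.example/

# Watch for long-running queries while the benchmark runs
mysql -u root -p -e 'SHOW FULL PROCESSLIST'

# Log queries slower than 2 seconds for later analysis
mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 2;"
```

Run the benchmark before and after each config change so you can compare like with like.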
