CentOS / httpd memory issue - PHP

Hi, I'm running a VPS (1 GB memory) which hosts a client site with these specs:
* WordPress (no cache plugins)
* TimThumb image resize script (http://timthumb.googlecode.com/svn/trunk/timthumb.php)
* Shopp plugin for ecommerce (has a caching system)
* php.ini memory limit set to 64M max per script

After restarting Apache I have around 500 MB of free memory. After merely visiting this client's site in a browser, memory drops by 150-200 MB!
I am trying to track down the leak, but I might be overlooking the obvious answer. Please advise :-)

I'm assuming you're on a Linux VPS, so ... how are you looking at 'free' memory? There are a few different measures of that on your average Linux system. For instance, from my Linux box, I get:
marc@panic:~$ free
             total       used       free     shared    buffers     cached
Mem:       2058188    1596532     461656          0     778404     604752
-/+ buffers/cache:     213376    1844812
Swap:      1052248          0    1052248
Going by the first line, it would appear that 1.5 GB is in use and just under 500 MB is free (on a 2 GB box). However, that 'used' total includes memory holding disk cache, which is what the second line corrects for. Once you remove buffers and cache from the counts, only about 213 MB of memory is used by running processes, and about 1.8 GB is free.
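If you want to check this programmatically rather than eyeball the output of free, here is a minimal PHP sketch (assuming a Linux /proc filesystem) that applies the same buffers/cache correction:

<?php
// Minimal sketch: compute "really free" memory the way the second line
// of `free` does, by reading /proc/meminfo (Linux only).
$info = [];
foreach (file('/proc/meminfo') as $line) {
    // Lines look like: "MemFree:   461656 kB"
    if (preg_match('/^(\w+):\s+(\d+)/', $line, $m)) {
        $info[$m[1]] = (int) $m[2]; // values are in kB
    }
}
$reallyFree = $info['MemFree'] + $info['Buffers'] + $info['Cached'];
printf("Free (naive):       %d kB\n", $info['MemFree']);
printf("Free (minus cache): %d kB\n", $reallyFree);

If the second number stays healthy while the first one shrinks, you are just watching the disk cache grow, not a leak.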

When Apache has just started, the idle PHP processes each occupy only about 10 MB of memory. The number of PHP processes depends on how many servers/children you have configured.
When you access your site, PHP executes and the processes grow in size. Typically you end up with PHP processes of about 50-60 MB each.
To verify, type in your shell:
ps -ylC apache2
and look at the RSS column. Substitute apache2 with the process name of your HTTP server.
Do it after a fresh start and again after visiting your site!
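If you would rather see a single total than per-process rows, here is a hedged PHP sketch (it assumes a Linux ps whose -l style output puts RSS in the 8th column, and the apache2 process name) that sums the RSS values:

<?php
// Hedged sketch: sum the resident set size (RSS) of all apache2
// processes as reported by `ps`. Use httpd instead on CentOS.
$process = 'apache2';
$output  = shell_exec('ps -ylC ' . escapeshellarg($process));
$totalKb = 0;
$count   = 0;
foreach (explode("\n", trim((string) $output)) as $i => $line) {
    if ($i === 0) {
        continue; // skip the header row
    }
    $cols = preg_split('/\s+/', trim($line));
    // In `ps -l` style output, RSS is the 8th field (index 7).
    if (isset($cols[7]) && ctype_digit($cols[7])) {
        $totalKb += (int) $cols[7];
        $count++;
    }
}
printf("%d processes, %d kB total RSS\n", $count, $totalKb);

Run it after a fresh Apache start and again after browsing the site, then compare the totals.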

memory_get_peak_usage(true) PHP and Virtual Memory Size resource usage are very different

I have a PHP script which runs many HTTP requests via cURL - I use a variation on Rolling Curl (curl_multi) so that the requests can run simultaneously. The script runs every few minutes with cron.
It is on a VPS and I received some 'Excessive resource usage' warnings from lfd (ConfigServer Security & Firewall) because the resource usage of the script went over the threshold of 512MB.
An example notification is:
Resource: Virtual Memory Size
Exceeded: 582 > 512 (MB)
Executable: /path/to/php-cgi
Command Line: /path/to/myscript.php
So I upped the threshold to 800MB and recorded the memory usage of the script using memory_get_peak_usage(true) every time the script runs.
However, the results from memory_get_peak_usage(true) are consistently 2MB... which is nowhere near the Virtual Memory usage as seen in the warning.
Note - only one instance of the script runs at a time, as multiple instances are prevented using flock (see the sketch below).
So what am I missing?
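For reference, a minimal sketch of the shape of the setup described above: a flock-based single-instance guard around a bare-bones curl_multi loop. The URLs and lock path are illustrative assumptions, not the actual script:

<?php
// Hedged sketch: single-instance guard plus a simple curl_multi loop,
// roughly the shape of a Rolling Curl style cron script.
$lock = fopen('/tmp/myscript.lock', 'c'); // illustrative lock file
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another instance is already running.\n");
}

$urls = ['http://example.com/a', 'http://example.com/b']; // illustrative
$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Drive all transfers concurrently until none remain active.
do {
    curl_multi_exec($mh, $active);
    if (curl_multi_select($mh) === -1) {
        usleep(1000); // avoid busy-looping if select reports no fds
    }
} while ($active > 0);

foreach ($handles as $ch) {
    $body = curl_multi_getcontent($ch);
    // ... process $body ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

flock($lock, LOCK_UN);
fclose($lock);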
Update - virtual memory usage also greater than php.ini memory_limit
After upping the threshold to 800MB I still get occasional notifications from lfd. I also checked the php.ini settings: memory_limit is set to 256MB, so in theory the script wouldn't run if it were using more than this. From this I am guessing that either:
a) it is not PHP that is using the memory (could it be MySQL or cURL, and is the memory used by these included in the value from memory_get_peak_usage(true)?), or
b) I'm not getting an accurate figure from lfd.
Second update - memory used by MySQL is not included in memory_get_peak_usage(true)
I suspect this is the issue; however, I'm not sure exactly what CSF includes in the resource figure. I will look into making the MySQL requests more efficient and see how that affects things.
PHP's memory_get_usage family of functions tracks the state of PHP's internal memory manager, which is responsible for all memory directly used by things like PHP variables. This is also what is limited by the memory_limit setting - after the limit is reached, the internal memory manager will refuse to claim more RAM from the OS.
However, not all the RAM used by your process is allocated via that memory manager. For instance, third party libraries like CURL or the MySQL connection library will have memory allocation completely independent of PHP, because they are effectively separate programs being executed by the PHP engine. Another example is that opening a file might cause it to be mapped into memory by the OS kernel, even if you never read its contents into a PHP variable.
This talk by Julien Pauli at PHP UK a few years ago goes into more details on the different types of memory usage, and how to measure them.
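To see the gap for yourself, here is a hedged sketch (assuming Linux, where /proc/self/status exposes the process's memory as the kernel sees it) comparing the two numbers:

<?php
// Hedged sketch: contrast PHP's internal allocator peak with the whole
// process's peak virtual memory size (Linux only).
$phpPeak = memory_get_peak_usage(true); // bytes tracked by PHP's manager

$vmPeakKb = 0;
foreach (file('/proc/self/status') as $line) {
    if (preg_match('/^VmPeak:\s+(\d+)\s+kB/', $line, $m)) {
        $vmPeakKb = (int) $m[1];
        break;
    }
}

printf("PHP peak usage: %.1f MB\n", $phpPeak / 1048576);
printf("Process VmPeak: %.1f MB\n", $vmPeakKb / 1024);
// The second number is typically much larger: it includes the PHP
// binary, shared libraries, and anything cURL or the MySQL client
// library allocate outside PHP's memory manager.

lfd's "Virtual Memory Size" check corresponds to the second number, which is why it can exceed memory_limit without PHP ever noticing.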

WP Memory Limit - Should I dedicate all my RAM to this setting

I have 6GB RAM with my host. In the WooCommerce system status it says this is the amount of RAM your WordPress site can use at one time.
I'm confused by this - should I set this setting to 6000M or not? Don't I want all my RAM dedicated to my WooCommerce site, since 98% of its pages are WooCommerce pages?
Setting from wp-config.php:
define('WP_MEMORY_LIMIT', '3072M');
This is my current setting.
tl;dr 3072M is far too high for this setting.
It's hard to answer this question without more information about your server environment. Is the server dedicated to your WordPress / Woo instance or are you sharing it with others? Is your MySQL instance running on this same server with your WordPress instance?
You definitely don't want to give all your RAM, or even half of it, to WordPress this way. Why not? Because it's not a shared limit. When your site is busy you have multiple web server processes running, and each one obeys the limit individually. For example, 20 workers each allowed 3072M gives a theoretical worst case of 60 GB on a 6 GB machine. If the workers all take what they are allowed, your server will thrash, and your web site will work poorly if at all. Thrashing servers sometimes experience hard crashes. You don't want that.
In any case, WP_MEMORY_LIMIT can't raise the limit beyond the PHP memory_limit configured on your system. That's probably why your current, ludicrously high setting of 3072M hasn't already brought your site to its knees.
There are separate limits here because some servers use PHP for other things than WordPress.
My suggestion: be very careful messing with this number. Make conservative changes. If you're getting php memory exhaustion errors from WordPress, increase this limit (and the php memory limit) slowly and sparingly until things start to work.
Install a plugin to monitor memory. Here is one: https://wordpress.org/plugins/server-ip-memory-usage/
Keep an eye on your web server error log to see if you're routinely getting memory exhaustion errors.
This depends on how memory intensive your site is. Try logging a few calls to memory_get_peak_usage() to see (a sketch follows below). I wouldn't assign more than 128 MiB of memory by default. You don't want a bug in the code to eat all of your server's available RAM.
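A minimal sketch of that kind of logging; hooking it to script shutdown (e.g. dropped into a small mu-plugin) is an assumption about placement, not a WooCommerce feature:

<?php
// Hedged sketch: log each request's peak memory usage at shutdown so
// you can see what your pages actually need before raising limits.
register_shutdown_function(function () {
    $peakMb = memory_get_peak_usage(true) / 1048576;
    $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    error_log(sprintf('[memory] %s peaked at %.1f MB', $uri, $peakMb));
});

After a day of traffic, the error log tells you whether 128 MiB is generous or tight for your actual pages.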

memory_limit = 1024M, still, Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp

I know that there is another question with a similar title/error, but I think this is a different problem.
Context:
Running WordPress 3.6.1, PHP 5.3.27, MySQL 5.1.70 on a shared Linux host with 1 GB memory.
.htaccess: suPHP_ConfigPath /home/[username]/public_html
php.ini: memory_limit = 512M (I tried 2048M, 1024M, 32M, .. heck even -1)
I also tried ini_set('memory_limit', ...) with all the values above, in the WordPress index.php.
I disabled ALL plugins, then re-enabled them one by one.
I have about 300-400 concurrent connections/users on the site at any given time, on average.
I DO NOT have SSH access :/
I cannot reproduce the bug locally (on a Mac running MAMP and ab, even with the memory limit lowered locally to 16M).
The way I know that none of that worked is that, in the cPanel error log screen, I see the error (in the title) about 3-4 times per minute: (12)Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp for /home/infomed/public_html/index.php
I already called the hosting company, directnic, and they weren't able to help much; they suggested a few of the solutions I had already tried. They don't support SSH, so that's a dead end for me. I know I can switch hosts, but I am not able to do so at the moment.
Please, all I am asking is for pointers to any other potential causes I can investigate; I am out of ideas. What could use more than 1 GB of memory in a simple WordPress blog, with all plugins disabled, on the home screen? There are no image uploads happening, just two wp_get_recent_posts calls, each with a limit of 6.
code here
"I have about 300-400 concurrent connections/users on the site at any given time, on average. I DO NOT have SSH access :/"
This is your problem.
Shared hosting companies usually limit more than just PHP memory; you could be hitting I/O and memory limits elsewhere. You could already be inside a VM and reaching its limit, etc. Shared hosting comes cheap, but once you start throwing decent traffic at it, they generally suspend your account or you start running up against limits.
With that much traffic, I'd advise upgrading to something like a VPS, a dedicated server, or cloud-based hosting. You're obviously doing something right to reach those traffic levels, so move off quickly :)
Turn on your server's OPcache and memcache. I had been getting this error for a few months, but after enabling OPcache and the memcache PHP extension, everything disappeared.
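For reference, a minimal php.ini sketch for enabling OPcache; the values are illustrative starting points, not tuned recommendations:

; Hedged sketch: enable OPcache with modest, illustrative settings.
; memory_consumption is the shared opcode cache size in MB, and
; revalidate_freq is how often (in seconds) cached files are rechecked.
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60

Caching compiled scripts means each request stops re-parsing all of WordPress, which lowers both CPU load and the time each worker holds memory.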

WordPress 3.2.1 using all available memory on LAMP

I just installed WordPress 3.2.1 on a fairly fresh LAMP server (specs below). On a completely fresh WordPress install, WordPress uses all available memory (512 MB) after just a few handled requests. The pages tested (which both cause the same issue) are the premade index page and the admin page.
Right after reboot I have just above 200 MB of memory available ($ free -m), and the available memory decreases drastically after each request to the WordPress instance, ending in memory allocation errors on the server after fewer than 20 requests and 500 errors from Apache.
This issue does not occur with other, non-WordPress PHP pages on Apache.
Unsuccessful solutions so far have been setting memory_limit in php.ini and define('WP_MEMORY_LIMIT', ...) to various sizes.
System specs:
WordPress 3.2.1
PHP 5.3.2-1ubuntu4.9 (Zend Engine v2.3.0)
Apache 2.2.14
Ubuntu 10.04 LTS 64-bit
Take a look at your Apache MPM module configuration. On Ubuntu the prefork settings typically live in /etc/apache2/apache2.conf (module configs are under /etc/apache2/mods-available). There may be a large worker-process count set, as for thick servers, and all the memory is being used by worker processes which are not killed after request processing. Given that each WordPress worker process uses about 40-50 MB of RAM, you only need 4 concurrent requests to consume 200 MB, and opening the WP dashboard alone is enough for that because it makes a number of concurrent AJAX requests. You may want to use a FastCGI configuration so you can limit the number of PHP worker processes and save memory.
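As a hedged illustration, for the prefork MPM in Apache 2.2 the relevant knobs look like this; the numbers below are conservative guesses for a 512 MB box, not tuned values:

<IfModule mpm_prefork_module>
    StartServers        2
    MinSpareServers     2
    MaxSpareServers     4
    # ~8 workers at ~50 MB each gives a ~400 MB ceiling
    MaxClients          8
    # recycle workers periodically so slow leaks don't accumulate
    MaxRequestsPerChild 500
</IfModule>

Capping MaxClients trades a few queued requests under load for the guarantee that Apache can never demand more RAM than the box has.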

A good server spec for image resizing in PHP

I have a site that enables users to upload images, which are then resized into 4 different sizes.
I'm moving to a new host, and I wondered what makes a good spec for handling this task, or whether any server spec should be able to handle it. Should I look at more RAM, a better CPU, etc.?
Images are currently restricted to 2mb, but I'd like to increase that.
Is there anything to choose between these (for this task)?
Option 1.
* Processor: Pentium 4 3GHZ Hyperthreaded
* Memory: 2GB DDR SDRAM
* Hd1: 120GB 7200RPM SATA / 8MB Cache
* Hd2: 120GB 7200RPM SATA / 8MB Cache
* OS: Linux - CentOS 5 (+32 Bit)
Option 2.
* Processors: Dual Core Intel Core 2 Duo 2.2GHz
* Memory: 1GB RAM
* Hard Disk: 1x 160GB 7,200rpm
* OS: Linux - CentOS 5.2
edit:
I'm using http://pear.php.net/package/Image_Transform with GD2.
Volume is very low, but certain JPG files fail even when they are < 2mb.
Current hosting is a VPS with 768mb dedicated ram (find out about processor).
You don't say how many you are doing per time period, what you are using (GD? ImageMagick? Something else?), or the spec and performance of your current server.
However, unless you are doing a lot, both of those servers should be way more than fine.
Definitely stick with a VPS (vs. shared hosting) because working with images in PHP is all about tuning your php.ini file.
There are a ton of reasons why a PHP script would fail to process an upload (a sample php.ini sketch follows this list):
* Upload is too big. The upload size is controlled by several directives: post_max_size, upload_max_filesize, and memory_limit. If these directives are not configured properly, the defaults will cap you at around 2MB.
* Ran out of memory working with the image. The memory_limit directive affects this. Also make sure your code releases image resources as soon as possible instead of waiting for script termination.
* Operations took too long. max_input_time and max_execution_time control how long the script gets (max_input_time covers receiving the HTTP request, max_execution_time covers actual script execution). Bigger images take longer to process.
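A hedged php.ini sketch tying those directives together; the values are illustrative for roughly 8 MB uploads, not recommendations:

; Hedged sketch: the directives that gate uploads, with example values.
; post_max_size must be >= upload_max_filesize, and memory_limit should
; comfortably exceed both plus working space for the decoded image.
upload_max_filesize = 8M
post_max_size = 10M
memory_limit = 128M
max_input_time = 60
max_execution_time = 30

Note that a 2 MB JPEG can decode to far more than 2 MB in memory (roughly width x height x 4 bytes), which is why small files can still exhaust memory_limit.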
Figure out which conditions are failing, and then scale your server up to resolve them. If you are switching hosts because of performance issues, you might want to do this first; you may find that the switch is unnecessary.
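On the point about releasing resources early, here is a minimal GD sketch (the file paths and target widths are illustrative) that frees each output image as soon as it is written, so peak memory stays near the size of one decoded image:

<?php
// Hedged sketch: resize a JPEG to several widths with GD, destroying
// each output image immediately after saving it.
function resizeTo($src, $srcW, $srcH, $dstW, $outPath) {
    $dstH = (int) round($srcH * $dstW / $srcW); // keep aspect ratio
    $dst  = imagecreatetruecolor($dstW, $dstH);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $dstW, $dstH, $srcW, $srcH);
    imagejpeg($dst, $outPath, 85);
    imagedestroy($dst); // release the copy right away
}

$path = '/tmp/upload.jpg'; // illustrative path
list($w, $h) = getimagesize($path);
$src = imagecreatefromjpeg($path);
foreach (array(1024, 640, 320, 160) as $targetW) {
    resizeTo($src, $w, $h, $targetW, "/tmp/resized-{$targetW}.jpg");
}
imagedestroy($src); // free the original once all sizes are written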
If you're just doing development/testing, and maybe a soft launch, option one is fine. If you expect to go live, you're going to need to keep tabs on your server load and how many processes you are spawning, as well as how long your actual resize time is for images.
If you expect to handle serious volume in the near future, you'll definitely want the dual-core system. Resizing images is very CPU-intensive. Further down the road, you may need additional machines just to handle image processing and one to handle the site.
