A good server spec for image resizing in PHP

I have a site that lets users upload images, which are then resized into 4 different sizes.
I'm moving to a new host and wondered what makes a good spec for handling this task, or whether any server spec should be able to handle it. Should I look at more RAM, a better CPU, etc.?
Images are currently restricted to 2MB, but I'd like to increase that.
Is there anything to choose between these (for this task)?
Option 1.
* Processor: Pentium 4 3GHZ Hyperthreaded
* Memory: 2GB DDR SDRAM
* Hd1: 120GB 7200RPM SATA / 8MB Cache
* Hd2: 120GB 7200RPM SATA / 8MB Cache
* OS: Linux - CentOS 5 (+32 Bit)
Option 2.
* Processors: Dual Core Intel Core 2 Duo 2.2GHz
* Memory: 1GB RAM
* Hard Disk: 1x 160GB 7,200rpm
* OS: Linux - CentOS 5.2
edit:
I'm using http://pear.php.net/package/Image_Transform with GD2. Volume is very low, but certain JPG files fail even when they are < 2MB. Current hosting is a VPS with 768MB dedicated RAM (I need to find out about the processor).

You don't say how many images you are processing per time period, what you are using (GD? ImageMagick? Something else?), or the spec and performance of your current server.
However, unless you are processing a lot of images, either of those servers should be more than adequate.

Definitely stick with a VPS (vs. shared hosting) because working with images in PHP is all about tuning your php.ini file.
There are a ton of reasons why a PHP script could fail to process an upload:
* The upload is too big. Upload size is controlled by several directives: post_max_size, upload_max_filesize, and memory_limit. If these are not configured properly, the defaults will cap you at around 2MB.
* The script ran out of memory while working with the image. The memory_limit directive governs this. Also make sure your code releases image resources as soon as possible instead of waiting for script termination.
* The operations took too long. max_input_time and max_execution_time control how long the script gets (max_input_time covers receiving the HTTP request, max_execution_time covers actual script execution). Bigger images take longer to process.
Figure out which conditions are failing, and then scale your server (and settings) to resolve those conditions. If you are switching hosts because of these failures, diagnose first; you might find that the switch is unnecessary.
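As a rough starting point for the scenario above, the relevant php.ini values might look like this (the sizes are illustrative assumptions to tune against your own images and traffic, not recommendations):

; allow files somewhat above the planned 2MB+ limit
upload_max_filesize = 8M
; the POST body must hold the file plus any other form fields
post_max_size = 10M
; decoding an image needs far more memory than its file size
; (a 2MB JPEG can expand to tens of MB of raw pixels)
memory_limit = 128M
; give larger images time to be received and processed
max_input_time = 60
max_execution_time = 60

After changing these, restart the web server and re-test with one of the failing JPG files.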

If you're just doing development/testing, and maybe a soft launch, option one is fine. If you expect to go live, you'll need to keep tabs on your server load and how many processes you are spawning, as well as how long your actual image resizes take.
If you expect to handle serious volume in the near future, you'll definitely want the dual-core system. Resizing images is CPU- and memory-intensive. Further down the road, you may need additional machines dedicated to image processing, with one to handle the site.
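On the memory point raised above, the biggest practical win is releasing image resources as soon as each size is written out. A minimal GD sketch (the function name and the quality value are illustrative, not from the poster's code):

// Load, resize, save, and free each resource as soon as possible.
function resize_jpeg($src, $dst, $newWidth, $newHeight)
{
    $image = imagecreatefromjpeg($src);   // decodes the full image into RAM
    if ($image === false) {
        return false;
    }
    $resized = imagecreatetruecolor($newWidth, $newHeight);
    imagecopyresampled($resized, $image, 0, 0, 0, 0,
                       $newWidth, $newHeight, imagesx($image), imagesy($image));
    imagedestroy($image);                 // free the large source bitmap early
    $ok = imagejpeg($resized, $dst, 85);  // 85 = JPEG quality
    imagedestroy($resized);
    return $ok;
}

If you generate all 4 sizes in one request, adapt this to keep the decoded source alive across the four resamples and destroy each output immediately, so at most two bitmaps are in memory at a time.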

Related

My server is getting high traffic and is reaching its limits; what should my new setup be?

Current config:
16GB RAM, 4 CPU cores, Apache 2.2 using the prefork MPM (MaxClients is set to 700, since the average process size is ~22MB), with the suexec and suphp mods enabled (PHP 5.5).
The back end uses CakePHP 2 and stores data on a MySQL server. The site serves text and some compressed images on the front end and does data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peaks I easily reach 700+ simultaneous connections, which fills MaxClients. When I run apachectl status at those moments, I can see that all of the processes are in use.
The CPU is fine, but the RAM is almost entirely used.
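For reference, the usual prefork sizing arithmetic behind that MaxClients value (the 2GB reserved for MySQL and the OS is an assumption for illustration):

RAM available to Apache ≈ 16384MB - 2048MB = 14336MB
MaxClients ≈ 14336MB / 22MB per process ≈ 650

At 700 clients x ~22MB each (~15GB), the current setting already commits essentially all of the 16GB, which matches the symptom above.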
Potential traffic:
Traffic might grow to ~200,000 unique visitors daily, or even more. It also might not. But if it happens, I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, for example 192GB RAM and 20 cores.
I could keep exactly the same config (which would let me handle roughly 10x my current traffic).
But I wonder if there is a better config for my case that uses fewer resources while being just as efficient (and is proven to be so)?
RPS = Rate Per Second (used below)
Suggestions to consider for your my.cnf [mysqld] section,
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes will contribute to less CPU busy time.
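To check whether they actually help, the relevant counters can be compared before and after; these are stock MySQL commands, not specific to this setup:

SHOW GLOBAL VARIABLES LIKE 'thread_cache_size';
SHOW GLOBAL STATUS LIKE 'Threads_created';        -- should grow far more slowly afterwards
SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';  -- the counter behind the 129,942 RPS figure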
Observations:
A) 5.5.54 is past End of Life; newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even on 5.5.54.
C) You should be able to migrate gracefully to innodb_file_per_table once you turn on the option; your tables are already managed by the InnoDB engine.

WP Memory Limit - Should I dedicate all my RAM to this setting

I have 6GB RAM with my host. In the WooCommerce system status it says this is the amount of RAM your WordPress site can use at one time.
I'm confused by this: should I set this setting to 6000M or not? Don't I want all my RAM dedicated to my WooCommerce site, as 98% of the pages are WooCommerce pages?
Setting from wp-config.php
define('WP_MEMORY_LIMIT', '3072M');
This is my current setting.
tl;dr 3072M is far too high for this setting.
It's hard to answer this question without more information about your server environment. Is the server dedicated to your WordPress / Woo instance or are you sharing it with others? Is your MySQL instance running on this same server with your WordPress instance?
You definitely don't want to give all your RAM, or even half of it, to WordPress this way. Why not? Because it's not a shared limit. When your site is busy you have multiple web server processes running, and they each obey the limits you give. If you give them all half your RAM and they take it, your server will thrash, and your web site will work poorly if at all. Thrashing servers sometimes experience hard crashes. You don't want that.
In any case, WP_MEMORY_LIMIT can't raise memory limits beyond the PHP memory limit on your system. That's probably why your current, ludicrously high setting of 3072M hasn't already brought your site to its knees.
There are separate limits here because some servers use PHP for other things than WordPress.
My suggestion: be very careful messing with this number. Make conservative changes. If you're getting php memory exhaustion errors from WordPress, increase this limit (and the php memory limit) slowly and sparingly until things start to work.
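A more conservative starting point might look like this in wp-config.php (the figures are illustrative assumptions to adjust against your own logs, not universal recommendations):

define('WP_MEMORY_LIMIT', '256M');      // per-request limit for normal page loads
define('WP_MAX_MEMORY_LIMIT', '512M');  // admin-side tasks, which tend to need more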
Install a plugin to monitor memory. Here is one: https://wordpress.org/plugins/server-ip-memory-usage/
Keep an eye on your web server error log to see if you're routinely getting memory exhaustion errors.
This depends on how memory intensive your site is. Try logging a few calls to memory_get_peak_usage() to see. I wouldn't assign more than 128 MiB of memory by default. You don't want a bug in the code to eat all of your server's available RAM.
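A minimal way to collect those numbers, assuming you can drop a snippet into functions.php or a small mu-plugin (the log format is illustrative):

// Log the peak memory of every request at shutdown.
add_action('shutdown', function () {
    $peakMb = memory_get_peak_usage(true) / 1048576;  // true = real allocated pages
    error_log(sprintf('Peak memory: %.1f MB for %s', $peakMb, $_SERVER['REQUEST_URI'] ?? 'cli'));
});

After a day of traffic, the largest values in the error log show how much headroom WP_MEMORY_LIMIT actually needs.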

Is there a reason for keeping max_file_uploads at a low value?

So while configuring a server for uploading files I noticed the default for max_file_uploads is 20 files. Is there any reason to keep this at a low value or is it safe to up it to 100 files?
This depends on your server resources (bandwidth, memory, CPU, etc.). If you have a powerful server and genuinely need to receive 100 files in a single request, go ahead and change it to 100; otherwise keep it as low as possible. The directive is a per-request safety cap: each file slot costs parsing work and temp-file handling, so a low value limits what a single oversized POST can consume.
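If you do raise it, the related size directives have to scale with it, since all the files arrive in one POST body. A sketch with illustrative values (assuming ~5MB per file):

max_file_uploads = 100
upload_max_filesize = 5M
; 100 files x 5MB each must still fit in a single POST body
post_max_size = 512M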

Allowing large file uploads in PHP (security)

Are there any security and/or performance implications to consider when allowing large file uploads in PHP? For example, these are the PHP ini settings I currently have set.
memory_limit = 950M
upload_max_filesize = 950M
post_max_size = 950M
max_execution_time = 0
What, if anything, could go wrong with these settings?
The security considerations do not change by raising these settings. The performance implications, however, are real:
The art of serving users well is to provision enough resources for the sum of what your users request. Translating this into examples based on your settings:
10 users uploading 950MB would require you to serve 9.5GB of bandwidth and I/O throughput (which is impacted by disk speed, among other things) at an acceptable rate. As a user, I could probably live with a 950MB upload taking a minute, but I would be dissatisfied if it took an hour.
100 users uploading 950MB would require you to serve 95GB...
1000 users uploading 950MB would require you to serve 950GB...
...
Of course, not all of your users will hit the maximum all the time, and concurrent uploads may well be limited. Still, these max settings add to your risk stack, so depending on your usage characteristics and your resource provisioning they could be valid.
That said, I assume you gave extreme examples and want to learn about the implications.
When I google "optimize php memory_limit" I get this:
https://softwareengineering.stackexchange.com/questions/207935/benefits-of-setting-php-memory-limit-to-lower-value-for-specific-php-script
Obviously you can apply the same thinking to the other settings.
In forums you will find plenty of warnings against setting these config values so high. However, in environments where resource utilization is managed carefully at other access layers (e.g. restricting the number of upload users via in-app permissions), this has worked out very well for me in the past.
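One way to combine high global limits with per-request discipline is to validate each upload before doing any expensive work. A minimal sketch, assuming a single form field named 'file'; the 200MB cap and storage path are invented examples:

// Reject failed or oversized uploads up front.
$maxBytes = 200 * 1024 * 1024;  // illustrative per-endpoint cap, below the global 950M
if (!isset($_FILES['file']) || $_FILES['file']['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('Upload failed or missing.');
}
if ($_FILES['file']['size'] > $maxBytes) {
    http_response_code(413);  // Payload Too Large
    exit('File exceeds the allowed size.');
}
// Destination is illustrative; sanitize the client-supplied name in real code.
move_uploaded_file($_FILES['file']['tmp_name'], '/path/to/storage/' . basename($_FILES['file']['name']));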

CENTOS / HTTPD memory issue

Hi, I am running a VPS (1GB memory) which hosts a client site with this setup:
Wordpress (no cache plugins)
Timthumb image resize script (http://timthumb.googlecode.com/svn/trunk/timthumb.php)
Shopp plugin for ecommerce (has a caching system)
php.ini memory_limit set to 64M per script
After restarting Apache I have around 500M of free memory. After merely visiting this client's site in a browser, free memory drops by 150-200M!
I am trying to figure out where it is going, but I might be overlooking the obvious answer. Please advise :-)
I'm assuming you're on a Linux VPS, so... how are you measuring 'free' memory? There are a few different measures of it on your average Linux system. For instance, from my Linux box, I get:
marc@panic:~$ free
             total       used       free     shared    buffers     cached
Mem:       2058188    1596532     461656          0     778404     604752
-/+ buffers/cache:     213376    1844812
Swap:      1052248          0    1052248
Going by the first line, it would appear that 1.5GB is in use and just under 500MB is free (on a 2GB box). However, that total includes memory used for disk caching, which the second line strips out. Once you remove buffers and cache from the counts, only 213MB of memory is used by running processes and 1.8GB is effectively free.
When you have just started Apache, the idle PHP processes each occupy only about 10MB of memory. The number of PHP processes depends on how many server children you have configured.
When you access your site, PHP executes and its memory footprint grows; typically you end up with PHP processes of about 50-60MB each.
To verify, type in your shell:
ps -ylC apache2
and look at the RSS column. Substitute apache2 with the process name of your HTTP server (httpd on CentOS).
Do it after a fresh start and again after visiting your site!
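To turn that into a single number, the RSS column (the 8th field of ps -yl output, in KB) can be summed; a sketch assuming the CentOS process name httpd:

ps -ylC httpd | awk 'NR>1 {sum+=$8} END {printf "%.0f MB used by httpd\n", sum/1024}'

Run it once right after a restart and again after browsing the site; the difference is roughly what the PHP processes grew by.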
