PHP CodeIgniter memory usage around 4 MB - php

I made the mistake of getting into the habit of autoloading a bunch of libraries, models, etc. when I don't need to. It is too hard to trace down all the cases to make sure everything is available and not broken. I estimate the autoloading is costing 1-2 MB of extra memory per script. The total memory usage for my script is around 4 MB. (I used the profiler, disabled autoloading, and saw that usage dropped by 1-2 MB.)
Is this something to worry about? The server I am running this on has 1 GB of RAM and never seems to be under heavy load.
Is this a bad thing? Am I worrying too much?

It is always better to load only what you need.
But 4 MB is normal memory usage for a PHP app.
I read somewhere that, for a well-optimized PHP app, you should only start worrying when you exceed about 9 MB.
Memory usage becomes important when your server has to respond to many requests, so the goal becomes using as few server resources as possible in order to pay less.
Sorry for my poor English.
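As a rough sketch of "load only what you need": a controller can pull in its dependencies on demand instead of relying on autoload.php. Only the $this->load calls below are standard CodeIgniter; the model and method names are made up for the example.

class Blog extends CI_Controller
{
    public function index()
    {
        // Loaded only for this request, not globally via autoload.php.
        $this->load->model('post_model');
        $this->load->library('pagination');

        $data['posts'] = $this->post_model->get_recent(10);
        $this->load->view('blog/index', $data);
    }
}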

Related

How to decrease the memory usage in Codeigniter?

When I upload my project to the online hosting server, the entry process usage goes to 100%, which brings the website down.
Using
$this->output->enable_profiler(TRUE);
I got this result.
MEMORY USAGE
1,263,856 bytes
Even a single page takes 1.2 MB
How can I solve this problem? Please help.
I don't believe so. Maybe just make sure you aren't loading things where they aren't actually needed (autoload, for example, loads things on every request whether you use them or not). CI actually has one of the smallest memory footprints of any framework. All frameworks carry more overhead than plain PHP; try benchmarking the same thing in another framework and it will most likely use more memory than CI does.
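For reference, a trimmed-down autoload configuration might look something like this (assuming the CodeIgniter 2.x/3.x config layout; the specific entries are only illustrative):

<?php
// application/config/autoload.php
$autoload['libraries'] = array('database'); // only what every request really needs
$autoload['helper']    = array('url');
$autoload['model']     = array();           // load models per controller instead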

How to reduce the memory footprint of a multi-process PHP application

I have a multi-process PHP (CLI) application that runs continuously. I am trying to optimize the memory usage because the amount of memory used by each process limits the number of forks that I can run at any given time (since I have a finite amount of memory available). I have tried several approaches. For example, following the advice given by preinheimer, I re-compiled PHP, disabling all extensions and then re-enabling only those needed for my application (mysql, curl, pcntl, posix, and json). This, however, did not reduce the memory usage. It actually increased slightly.
I am nearly ready to abandon the multi-process approach, but I am making a last ditch effort to see if anyone else has any better ideas on how to reduce memory usage. I will post my alternative approach, which involves significant refactoring of my application, below.
Many thanks in advance to anyone who can help me tackle this challenge!
Multi-process PHP applications (e.g. an application that forks itself using pcntl_fork()) are inherently inefficient in terms of memory, because each child process loads an entire copy of the PHP executable into memory. This can easily equate to 10 MB of memory per process or more (depending on the application). Compiling extensions as shared libraries should, in theory, reduce the memory footprint, but I have had limited success with this (in fact, my attempts made the memory usage worse for reasons I could not determine).
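To make the pattern concrete, here is a minimal sketch of the fork-based approach using the standard pcntl functions; the worker body is a placeholder.

$children = array();
for ($i = 0; $i < 4; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child process: each one is a full copy of the PHP process,
        // which is where the per-process memory cost comes from.
        echo "child " . getmypid() . " peak memory: " . memory_get_peak_usage(true) . " bytes\n";
        exit(0);
    }
    $children[] = $pid; // parent records child PIDs
}
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status); // reap each child
}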
A better approach is to use multi-threading. In this approach, the application resides in a single process, but multiple actions can be performed concurrently* in separate threads (i.e. multi-tasking). Traditionally PHP has not been ideal for multi-threaded applications, but recently some new extensions have made multi-threading in PHP more feasible. See, for example, this answer to a question about multithreading in PHP (whose accepted answer is rather outdated).
For the above problem, I plan to refactor my application into a multi-threaded one using pthreads. This requires a significant amount of modification, but it will (hopefully) result in a much more efficient overall architecture for the application. I will update this answer as I proceed and offer some refactoring examples for anyone else who would like to do something similar. Others, feel free to provide feedback and to update this answer with code examples!
*Footnote about concurrency: unless you have a multi-core machine, the actions will not actually be performed simultaneously; they will be scheduled to run on the CPU in small time slices. From the user's perspective, however, they appear to run concurrently.
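For anyone following along, a rough sketch of what the threaded version might look like with the pthreads extension (it requires a thread-safe PHP build with the extension installed; the task class and its contents are purely illustrative):

class WorkerTask extends Thread
{
    private $taskId;

    public function __construct($taskId)
    {
        $this->taskId = $taskId;
    }

    public function run()
    {
        // Work happens here; threads share one process, so the per-task
        // memory overhead is much smaller than a full fork.
        echo "task {$this->taskId} running\n";
    }
}

$tasks = array();
for ($i = 0; $i < 4; $i++) {
    $tasks[$i] = new WorkerTask($i);
    $tasks[$i]->start();
}
foreach ($tasks as $task) {
    $task->join();
}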

What is a normal amount of memory for a Wordpress script to use?

I'm trying to troubleshoot a memory issue I've run into with Wordpress and rather than bore you with the whole problem I was hoping to get a nice compact answer to three parts of my problem:
Normal Memory Footprint. I know there is no real "normal" Wordpress script, and yet I think it would be quite useful to hear from people what a typical Wordpress script's memory footprint is. Let's call "normal", for the sake of argument, an installation with very few plugins, a base theme like twenty-twelve, and a script that does some DB retrieval but nothing monumental: maybe a typical blog roll page or something. What I'm trying to understand is the baseline memory footprint (a range, not a discrete number) that a more complicated script would be starting from.
Memory Ceiling Versus memory_get_usage(). I have been putting lots of logging in my scripts that pulls out the memory usage using PHP's memory_get_usage(true) call. This seems like one of the few troubleshooting techniques for determining where the memory is being used, but what perplexes me is that I see memory usage ranging from 15M to 45M at the script level (note this is with the "true" parameter, so it includes the overhead of the memory manager), and yet in many instances a 27M script will suddenly fall over with the message "Allowed memory size of 268435456 bytes exhausted". It is possible that there is one very large memory request that takes place after the logging, but I'm interested to hear whether other people have found differences between the memory limit and the memory reported by memory_get_usage().
New Memory Ceiling Ignored. In a desperate attempt to get the site working again, and to buy myself time to troubleshoot, I thought I'd just raise the memory limit in the php.ini file to 512M, but doing this seems to have had no impact: the fatal error continues to report the old 256M limit. (See the logging sketch just after this question for one way to check which limit is actually in effect.)
Any help would be appreciated. Thanks in advance.
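Regarding points 2 and 3: a small logging sketch like the one below can show what a script actually uses versus the limit PHP believes is in effect, which also reveals whether a php.ini change was picked up at all (size_format() is a WordPress helper; outside WP, format the byte counts yourself).

error_log(sprintf(
    'memory: %s current, %s peak, limit %s',
    size_format(memory_get_usage(true)),
    size_format(memory_get_peak_usage(true)),
    ini_get('memory_limit')
));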
Hopefully someone can answer your question in the detail you're after. For my part:
Q: What is a normal amount of memory for a Wordpress script to use?
A1. Since WP is a plugin-driven CMS, memory usage depends on those plugins. As you probably know, there are some very badly coded ones out there, but an out-of-the-box WP install performs very well.
A2. To help you find bottlenecks, I recommend using BlackBox (a WordPress Debug Bar plugin):
... As for the information you will find in the profiler, it includes: the time elapsed
since the profiler was started and the total memory WordPress was using when
the checkpoint was reached ...
I just found this interesting article:
WordPress Memory Usage & Website Outage Issues Resolved.
I ran a test of WordPress 4.4 with a clean install on a Windows 7 PC (a local install).
Memory Used / Allocated:
9.37 MB / 9.5 MB
Total Files: 89
Total File Size: 2923.38 KB
Ran in 1.27507 seconds
This was all done in the index file, timing before anything is called and measuring memory / file usage after everything is 100% finished.
I tried a few pages (category, archive, single post, etc..) and all were very similar (within 1% difference) in files and memory usage.
I think it stands to reason this is close to the best possible performance, so adding plugins/content will only bump these numbers up. It may be that a caching plugin would offer slightly better performance, though.
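For anyone who wants to reproduce a measurement like this, the rough idea in index.php looks something like the sketch below; the output format is just an example, and your numbers will differ per install.

<?php
// index.php: record time before WordPress bootstraps, report afterwards.
$start = microtime(true);

define('WP_USE_THEMES', true);
require __DIR__ . '/wp-blog-header.php';

$files = get_included_files();
$bytes = array_sum(array_map('filesize', $files));

printf(
    '<!-- %d files, %.2f KB on disk, %.2f MB peak memory, %.5f seconds -->',
    count($files),
    $bytes / 1024,
    memory_get_peak_usage(true) / 1048576,
    microtime(true) - $start
);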

How does apache PHP memory usage really work? [closed]

To give some context:
I had a discussion with a colleague recently about the use of Autoloaders in PHP. I was arguing in favour of them, him against.
My point of view is that Autoloaders can help you minimise manual source dependency which in turn can help you reduce the amount of memory consumed when including lots of large files that you may not need.
His response was that including files that you do not need is not a big problem because after a file has been included once it is kept in memory by the Apache child process and this portion of memory will be available for subsequent requests. He argues that you should not be concerned about the amount of included files because soon enough they will all be loaded into memory and used on-demand from memory. Therefore memory is less of an issue and the overhead of trying to find the file you need on the filesystem is much more of a concern.
He's a smart guy and tends to know what he's talking about. However, I always thought that the memory used by Apache and PHP was specific to that particular request being handled.
Each request is assigned an amount of memory equal to the memory_limit PHP option, and any source compilation and processing is only valid for the life of the request.
Even with opcode caches such as APC, I thought that each individual request still needs to load each file into its own portion of memory, and that APC is just a shortcut to having it pre-compiled for the responding process.
I've been searching for some documentation on this but haven't managed to find anything so far. I would really appreciate it if someone can point me to any useful documentation on this topic.
UPDATE:
Just to clarify, the autoloader discussion was more for context :).
It may not have been clear, but my main question is about whether Apache will pool its resources to respond to multiple requests (especially memory used by included files), or whether each request needs to retrieve the code required to satisfy its execution path in isolation from other requests handled by the same process.
e.g.:
Files 1, 2, 3 and 4 are an equal size of 100KB each.
Request A includes file 1, 2 and 3.
Request B includes file 1, 2, 3 and 4.
In his mind he's thinking that Request A will consume 300 KB for the entirety of its execution and Request B will only consume a further 100 KB, because files 1, 2 and 3 are already in memory.
In my mind it's 300 KB and 400 KB, because they are each being processed independently (even if by the same process).
This brings him back to his argument that "just include the lot 'cos you'll use it anyway" as opposed to my "only include what you need to keep the request size down".
This is fairly fundamental to how I approach building a PHP website, so I would be keen to know if I'm off the mark here.
I've also always been of the belief that for a large-scale website, memory is the most precious resource and more of a concern than the file-system checks for an autoloader, which are probably cached by the kernel anyway.
You're right though, it's time to benchmark!
Here's how you win arguments: run a realistic benchmark and be on the right side of the numbers.
I've had this same discussion, so I tried an experiment. Using APC, I tried a Kohana app with a single monolithic include (containing all of Kohana) as well as with the standard autoloader. The final result was that the single include was faster by a statistically insignificant margin (less than 1%) but used slightly more memory (according to PHP's memory functions). Running the test without APC (or XCache, etc.) is pointless, so I didn't bother.
So my conclusion was to keep using autoloading, because it's much simpler. Try the same thing with your app and show your friend the results.
Now you don't need to guess.
Disclaimer: I wasn't using Apache. I cannot emphasize enough to run your own benchmarks on your own hardware on your own app. Don't trust that my experience will be yours.
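If you want to run a similar comparison yourself, a bare-bones harness looks something like this; the paths, class layout and the commented-out monolithic include are placeholders for whatever your app actually uses.

$start = microtime(true);

// Variant A: one monolithic include.
// require 'all_classes.php';

// Variant B: register an autoloader and let classes load as they are used.
spl_autoload_register(function ($class) {
    $file = __DIR__ . '/classes/' . $class . '.php';
    if (is_file($file)) {
        require $file;
    }
});

// ... exercise the application here ...

printf(
    "peak memory: %d bytes, time: %.4f s, files included: %d\n",
    memory_get_peak_usage(true),
    microtime(true) - $start,
    count(get_included_files())
);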
You are the wiser ninja, grasshopper.
Autoloaders don't load the class file until the class is requested. This means that they will use at most the same amount of memory as manual includes, but usually much less.
Classes get read fresh from the file on each request, even if an Apache thread can handle multiple requests, so your friend's 'eventually they all get read into memory' argument doesn't hold water.
You can prove this by putting an echo 'foo'; above the class definition in the class file. You'll see that on each new request the line is executed, regardless of whether you autoload or manually include the whole world of class files at the start.
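A minimal version of that demonstration, with hypothetical file and class names:

<?php
// Foo.php: the echo below runs every time this file is compiled for a
// request, showing the code is not kept in memory between requests.
echo 'Foo.php was read and compiled for this request';

class Foo
{
    public function hello()
    {
        return 'hello';
    }
}

Request the page twice: the message appears both times, whether Foo is pulled in through an autoloader or through a manual require.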
I couldn't find any good, concise documentation on this (I may write some, with memory usage examples), as I have also had to explain it to others and show evidence to get it to sink in. I think the folks at Zend didn't expect anyone to miss the benefits of autoloading.
Yes, APC and the like (as with all caching solutions) can overcome the resource negatives and even eke out small gains in performance, but you eat up a lot of unneeded memory if you do this with a non-trivial number of libraries while serving a large number of clients. Try loading a healthy chunk of the PEAR libraries in a massive include file while handling 500 connections hitting your page at the same time.
Even when using things like APC, you benefit from autoloaders with non-namespaced classes (most existing PHP code at the moment), as they can help avoid global namespace pollution when dealing with large numbers of class libraries.
This is my opinion.
I think autoloaders are a very bad idea, for the following reasons:
I like to know what data/code my scripts are grabbing and where it comes from; it makes debugging easier.
There are also configuration problems, insofar as if one of your developers changes a file (during an upgrade, etc.) or the configuration and things stop working, it is harder to find out where it broke.
I also think that it is lazy programming.
As for memory/performance issues, it is just as cheap to buy some more memory for the machine if it is struggling.

Do too many requires / includes slow down PHP

I am writing a PHP framework, and I am wondering whether PHP slows down when it has to require/include (or require_once/include_once) too many files during a request.
Well, of course it will. Doing anything too many times will cause a slowdown.
On a more serious note, though, IO operations that touch disk are very slow compared to anything that happens in memory. So oftentimes, including files will be a major performance factor when using a large framework (just look at Zend Framework...).
However, there are typically ways to alleviate this, such as APC and similar opcode caches.
Sometimes programming approaches are also taken. For example, if I remember correctly, Doctrine 1 has the capability to bundle everything into one giant file so as to have fewer IO calls.
If in doubt, do some in-depth profiling of an application written with your framework and see whether include/require calls are among the major slow points.
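A crude way to get that visibility without a full profiler is to log the include count and peak memory at shutdown; the log format below is just an example.

register_shutdown_function(function () {
    $files = get_included_files();
    error_log(sprintf(
        'request included %d files, peak memory %.2f MB',
        count($files),
        memory_get_peak_usage(true) / 1048576
    ));
});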
Yes, this will slow your application down. *_once calls are generally more expensive, since it must be checked whether that file has already been included. With a lot of includes there is a lot of hard disk access and a lot of memory usage involved. I've developed applications with the Zend Framework that include a total of 150 to 200 files on each request, and you really can see the impact that has on overall performance.
The more files you include, the more load you add. However, if you have to choose between require and require_once, then require_once/include_once carry more load, because a check has to be made to see whether the same file has already been included elsewhere. So if you can avoid that, you can at least gain a little performance.
Unless you use caching, those files will be included again and again for every incoming request, which will surely slow things down. Create a framework that only includes what needs to be included.
