I have never used memcache before now, so please excuse my inexperience. Although it is pretty self-explanatory, I would like to make sure I am using the built-in functions correctly, as I am creating a class that will be used commercially, so it must be correctly coded and efficient.
I have several questions, but as they are very basic I felt it would be alright to combine them into one Stack Overflow question.
If they require an essay answer, please don't bother and I will post it as a separate question.
When would I need to use memcache::addServer and what is the difference between this and memcache::connect?
Does memcache overwrite stored values if it runs out of memory, even if the item has not yet expired?
What would I use memcache::getExtendedStats for?
How do I check to see if a connection to memcache already exists and if not, create a connection?
If I have my usual memcache server of 'localhost' set up, how would I go about setting up another memcache server on my same dedicated server?
Apart from more memory, what is the benefit of having more than one memcache server?
Should I check for memcache server updates regularly?
Does it use a lot of memory to run memcache::connect at the beginning of each page, even if I am not using it?
When am I likely to return errors and how do I catch these?
Most importantly, if I am using memcache within another class that has several methods that may be called more than once per script, how should I go about initialising the object and connecting to the server within each method?
My guess for the last question would be to do it like so:
class test {
    public function blah(){
        // Make sure the memcache object is accessible
        global $memcache;
        // Do something ...
        // Save result in memcache
        $memcache->set(...);
    }
    public function foo(){
        // Do something ...
        // No use for memcache
    }
}
// Initialise each class
$test = new test;
$memcache = new memcache;
$memcache->connect(...);
// Call some methods from the test class
$test->blah();
$test->foo();
$test->blah();
As you can see in the above example, I connect to the memcache server at the beginning of the script. If I were to include this at the beginning of every page, even on pages that do not use memcache, would this increase the response time significantly or only minimally? Hence, question 8!
You might need some coffee or something before you read this:
You'd want to use Memcache::addServer when you need to add more Memcached servers. For example, if you had a really busy website or web app... you'd probably want to have more than one Memcached server running [1]. Memcache::connect is used when you want to start a connection to one of your Memcached servers. Also, according to the Memcache::addServer docs, another difference between Memcache::addServer and Memcache::connect is that with Memcache::addServer, the connection is not established until actually needed [2].
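As an illustration (the host names below are hypothetical), a minimal sketch of a two-server pool built with Memcache::addServer, where the actual connection is only opened once a key is read or written:
<?php
$memcache = new Memcache();
$memcache->addServer('cache1.example.com', 11211);
$memcache->addServer('cache2.example.com', 11211);

// The key is hashed to pick one of the pool servers; the connection is made here.
$memcache->set('greeting', 'hello', 0, 300); // no compression, 300 second expiry
echo $memcache->get('greeting');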
If Memcached runs out of RAM, it will discard the least recently used values [3].
Memcache::getExtendedStats is used to check information about your Memcached server. For example, if you need to find out how long the server has been up (uptime), how many connections the server has, or general server usage [4], this is a great tool.
Probably the easiest way to check whether a connection to Memcached already exists is to check your $memcache connection variable to see if it returns TRUE [5]. If you need a persistent connection (one that keeps going even after your script ends), there is the option to use Memcache::pconnect [6].
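As a rough sketch (the host, port and fallback are assumptions, not a fixed recipe), you can simply test the return value of the connect call before relying on the cache:
<?php
$memcache = new Memcache();
// pconnect() keeps the connection alive after the script ends; connect() works the same way here.
$connected = @$memcache->pconnect('localhost', 11211); // FALSE is returned on failure

if ($connected) {
    $memcache->set('foo', 'bar', 0, 60);
} else {
    // Cache is down: fall back to the database or recompute the value.
}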
If you want to have two Memcached servers going on... and your first server is already your localhost, you will most likely need to have a separate, distinct server for the second [7].
At least one other benefit of having more than one Memcached server is the idea that whenever you diversify your data (or servers), even when one server goes down... you still have however many other servers there to pick up the pieces. Memcached looks [8] like it is distributed over however many servers you have running... so if a server goes down, you are still losing that part of the cache. But you do still have other servers up and running to help keep going.
In general, it's not a bad idea to keep almost any type of software up to date. It looks like Memcached is still a highly active project [9], so you may want to update it when you can. But the essence of Memcached doesn't seem to change a whole lot over past versions... so it might not be as critical to update it compared to something like operating system software.
It sounds like the way that Memcached allocates memory for TCP connections (when you make a call to your Memcached server via Memcache::connect) does end up costing you memory [10]. If you are sure you aren't going to need that connection on some of your pages, you may want to avoid making that connect call.
Hard to say what type of errors might come up in your code. But, with something like Memcached, you may find errors coming up when you are running out of memory [11].
Like the answer to question eight, I would still recommend trying to only call that $memcache->connect() in areas where you absolutely need it. You might be using Memcached in a lot of your application or scripts; but there still will probably be places where you won't need it.
As far as your code idea for question 10 goes, the implementation is really up to you. In general, though, it's good to avoid global variables [12] when possible. Instead, like the article in footnote [12] talks about, it's easier to just use a singleton class for the connection... and then call that each time you want to use the cache.
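As a rough illustration of that singleton idea (the class name, host and expiry values below are just placeholders, not a definitive implementation):
<?php
class CacheSingleton {
    private static $instance = null;

    // Returns a shared Memcache connection, creating it on first use.
    public static function get() {
        if (self::$instance === null) {
            self::$instance = new Memcache();
            self::$instance->connect('localhost', 11211);
        }
        return self::$instance;
    }
}

// Anywhere in your classes, instead of "global $memcache":
CacheSingleton::get()->set('result', 42, 0, 300);
This way the connection is opened at most once per request, and only on pages that actually touch the cache.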
Wow, my eyes are tired. I hope this helps, man...!
[1] http://en.wikipedia.org/wiki/Memcached (see Architecture section)
[2] http://www.php.net/manual/en/memcache.addserver.php
[3] http://en.wikipedia.org/wiki/Memcached (see Architecture section)
[4] http://www.php.net/manual/en/memcache.getextendedstats.php
[5] http://www.php.net/manual/en/memcache.connect.php (see Return Values section)
[6] http://www.php.net/manual/en/memcache.pconnect.php
[7] http://www.php.net/manual/en/memcache.addserver.php#101194
[8] Benefits of multiple memcached instances
[9] http://code.google.com/p/memcached/
[10] http://www.facebook.com/note.php?note_id=39391378919 (from Facebook's point of view)
[11] http://groups.google.com/group/memcached/browse_thread/thread/9ce1e2691efb283b
[12] How to avoid using PHP global objects?
Related
Is there a way in PHP to use "out of session" variables, which would not be loaded/unloaded at every connection, like in a Java server?
Please excuse the lack of accuracy; I can't figure out how to phrase it properly.
The main idea would be to have something like this:
<?php
...
// $variablesAlreadyLoaded is kind of "static" and shared between all PHP threads
// No need to initialize/load/instantiate it.
$myVar = $variablesAlreadyLoaded['aConstantValueForEveryone'];
...
?>
I have already done things like this using shmop and other workarounds, but I am wondering whether there is a "clean" way to do this in "pure PHP", without using caching systems (I am thinking of APC, Redis...) or a database.
EDIT 1 :
Since people (thanks to them for spending time on me) are all answering with sessions, I will add a constraint I forgot to write: no sessions, please.
EDIT 2 :
It seems the only native PHP ways to do such a thing are shared memory (shmop) and named pipes. I would prefer a managed way to access shared objects, without having to worry about memory management (shared memory block size) or system-level issues (pipes).
I then browsed the net for a PHP module/library which provides functions/methods to do that: I found nothing.
EDIT 3 :
After some research along the lines pointed out by #KFO, it appears that putenv / setenv are not made to deal with objects (and I would like to avoid serialization). So it solves the problem for short "things" such as strings or numbers, but not for larger/more complex objects.
Using the "env way" AND another method to deal with bigger objects would be incoherent, and would add complexity to the code and hurt maintainability.
EDIT 4 :
Found this: DBus (GREE Lab DBus), but I don't have the tools to test it at work. Has anybody tested it yet?
I'm open to every suggestion.
Thanks
EDIT 5 ("ANSWER"):
Since DBus is not exactly what I'm looking for (it requires installing a third-party module and has no evidence of "serious" production use), I'm now using Memcache, which has already proven its reliability (following #PeterM's comment, see below).
// First page
session_id('same_session_id_for_all');
session_start();
$_SESSION['aConstantValueForEveryone'] = 'My Content';
// Second page
session_id('same_session_id_for_all');
session_start();
echo $_SESSION['aConstantValueForEveryone'];
This works out of the box in PHP. Using the same session id (instead of a random user-unique string) to initialize the session for all visitors leads to a session which is the same for all users.
Is it really necessary to use a session to achieve the goal, or wouldn't it be better to use constants?
There is no pure PHP way of sharing information across different threads in PHP, except for an "external" solution: a file, database, server variable or session file.
Since some commentators pointed out that the serialize/unserialize functionality for session data might break data in transport, there is a solution: in PHP the session serialization handler (serialize_handler) can be configured as needed, see https://www.php.net/manual/session.configuration.php#ini.session.serialize-handler. It might also be interesting to have a look at the magic class methods __sleep() and __wakeup(): they define how an object behaves on a serialize or unserialize request, see https://www.php.net/manual/language.oop5.magic.php#object.sleep. Since PHP 5.1 there is also a predefined Serializable interface available: https://www.php.net/manual/class.serializable.php
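For instance, a small sketch of those magic methods (the class and property names are made up):
<?php
class CachedReport {
    private $data = array();
    private $dbLink = null; // a resource: cannot survive serialization

    // __sleep() returns the names of the properties that should be serialized.
    public function __sleep() {
        return array('data');
    }

    // __wakeup() restores whatever could not be serialized (e.g. reconnects).
    public function __wakeup() {
        $this->dbLink = null; // re-establish the connection here if needed
    }
}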
You can declare a variable in your .htaccess. For example, SetEnv APPLICATION_ENVIRONMENT production, and access it in your application with the function getenv('APPLICATION_ENVIRONMENT').
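For example (assuming an Apache host where .htaccess overrides and mod_env are enabled):
# in .htaccess
SetEnv APPLICATION_ENVIRONMENT production

// anywhere in your PHP code
if (getenv('APPLICATION_ENVIRONMENT') === 'production') {
    ini_set('display_errors', '0'); // e.g. turn off verbose error output
}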
Another solution is to wrap your variable in a "persistent data" class that automatically restores its data content every time the PHP script is run.
Your class needs to do the following:
store the content of the variable to a file in __destruct()
load the content of the variable from the file in __construct()
I prefer storing the file in JSON format so the content can be easily examined for debugging, but that is optional.
Be aware that some webservers will change the current working directory in the destructor, so you need to work with an absolute path.
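A rough sketch of such a wrapper (the file path and class name are only examples, and concurrent requests overwriting each other is not handled here):
<?php
class PersistentData {
    private $file;
    public $data = array();

    public function __construct($file = '/var/tmp/app_shared.json') {
        $this->file = $file; // absolute path, as noted above
        if (is_readable($this->file)) {
            $this->data = json_decode(file_get_contents($this->file), true) ?: array();
        }
    }

    public function __destruct() {
        // Persist the content for the next request (no locking: last writer wins).
        file_put_contents($this->file, json_encode($this->data));
    }
}

$shared = new PersistentData();
$shared->data['aConstantValueForEveryone'] = 'My Content';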
I think you can use $_SESSION['aConstantValueForEveryone'], which you can read on every page on the same domain.
Consider referring to its manual.
Okay, so I'm relatively naive in my knowledge of the PHP VM and I've been wondering about something lately. In particular, what the request lifecycle looks like in PHP for a web application. I found an article here that gives a good explanation, but I feel that there has to be more to the story.
From what the article explains, the script is parsed and executed each time a request is made to the server! This just seems crazy to me!
I'm trying to learn PHP by writing a little micro-framework that takes advantage of many PHP 5.3/5.4 features. As such, I got to thinking about what static means and how long a static class-variable actually lives. I was hoping that my application could have a setup phase which was able to cache its results into a class with static properties. However, if the entire script is parsed and executed on each request, I fail to see how I can avoid running the application initialization steps for every request served!
I just really hope that I am missing something important here... Any insight is greatly appreciated!
From what the article explains, the script is parsed and executed each time a request is made to the server! This just seems crazy to me!
No, that article is accurate. There are various ways of caching the results of the parsing/compilation, but the script is executed in its entirety each time. No instances of classes or static variables are retained across requests. In essence, each request gets a fresh, never-before-executed copy of your application.
I fail to see how I can avoid running the application initialization steps for every request served!
You can't, nor should you. You need to initialize your app to some blank state for each and every request. You could serialize a bunch of data into $_SESSION which is persisted across requests, but you shouldn't, until you find there is an actual need to do so.
I just really hope that I am missing something important here...
You seem to be worried over nothing. Every PHP site in the world works this way by default, and the vast, vast majority never need to worry about performance problems.
No, you are not missing anything. If you need to keep some application state, you must do it using DB, files, Memcache etc.
While this can sound crazy if you're not used to it, it's actually good for scaling, among other things: you keep your state in other services, so you can easily run several instances of your PHP server.
A static variable, like any other PHP variable only persists for the life of the script execution and as such does not 'live' anywhere. Persistence between script executions is handled via session handlers.
I'm working on a medium-sized (probably) PHP system which had MySQL connections being opened everywhere throughout different files and made into global variables for the later-included scripts to have access to. Since I'm creating another module, I'd like to avoid globals and keep the same MySQL connection for each page request. My current solution is this:
class Db {
    static public $dbConnectionArray = array();
}
For every request, the connections would be saved in the static array and referred back to at a later time. What do you think could go wrong? And why?
I would like to hear some opinions on how best to tackle this, as I would love to reduce the number of opened connections per script run (currently, one page request opens about 6-15 MySQL connections to at least 3 different databases).
No need to reinvent the wheel. You can use MySQL persistent connections to keep connections alive. (http://php.net/manual/en/function.mysql-pconnect.php)
By using persistent connections, your PHP scripts will reuse the same database connections (as long as the host and credentials are the same).
Also, if your databases are on the same host, you should be able to reuse the same MySQL connection by using the mysql_select_db() function.
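As a sketch of both suggestions together (host, credentials and database names are placeholders; note the mysql_* functions are legacy and were removed in PHP 7, they are shown only because this question uses them):
<?php
// mysql_pconnect() reuses an already-open persistent link with the same host/user/password.
$link = mysql_pconnect('localhost', 'app_user', 'secret');

// Switch databases on the same link instead of opening new connections.
mysql_select_db('database_one', $link);
$settings = mysql_query('SELECT * FROM settings', $link);

mysql_select_db('database_two', $link);
$users = mysql_query('SELECT * FROM users', $link);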
I'm working on a big project with several http servers that use one main sql database.
The project has many settings that are frequently used (almost every request).
The settings are stored in the main sql database.
I wanted to know if there is some way to initialize settings only once in PHP, because it makes no sense for every request to go and read the same settings from the SQL server over and over again; it feels like a waste of resources.
Thanks in advance
2 solutions:
Create a (perhaps also PHP) script that exports the settings from the database into a plain text file, and include that file on every http server (see the sketch after this list);
use a memory cache server like http://memcached.org/ and preload the data there from an external script, then have the http servers connect to memcache instead of SQL.
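A minimal sketch of the first option (file path, table and credentials are assumptions): a cron or deploy script dumps the settings table into a plain PHP file, and every request just includes it:
<?php
// export_settings.php -- run from cron or on deploy, NOT on every request.
$pdo = new PDO('mysql:host=db.example.com;dbname=app', 'user', 'pass');
$settings = $pdo->query('SELECT name, value FROM settings')
                ->fetchAll(PDO::FETCH_KEY_PAIR);
file_put_contents(
    __DIR__ . '/settings.generated.php',
    '<?php return ' . var_export($settings, true) . ';'
);

// Then, in every request on the web servers:
$settings = include __DIR__ . '/settings.generated.php';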
Edit: Other than that, PHP does not give you a real web application, where you "run" your application and it has its own memory and persistent global variables. This is one of the reasons I personally got tired of PHP and moved to Python (and Django, specifically).
Hard code these settings in your PHP code.
// Your current code, something like this:
$setting_1 = getDataFromMySQL('setting1');
// Hard coded
$setting_1 = TRUE;
You can use shared memory in php if it is compiled that way.
Another possibility is to store a combined value of your settings as PHP code in one field (a PHP array, for example); then you can read them all with only one query to the DB server. Of course this cached value has to be updated when the settings change.
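For instance (the table layout and setting names are made up), the settings could be kept as one JSON blob that is decoded after a single query:
<?php
// Write side: collapse all settings into one value stored in a single field.
$blob = json_encode(array('site_name' => 'Example', 'items_per_page' => 25));
// ... UPDATE config SET value = <$blob> WHERE name = 'all_settings' ...

// Read side: fetch that one field, then decode.
$rowValue = $blob; // pretend this came back from the single SELECT
$settings = json_decode($rowValue, true);
echo $settings['items_per_page']; // 25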
APC is the best solution if you are using a single server; otherwise I would go with memcached. However, you may also consider a MySQL MEMORY table, which is very efficient for fast reads and writes. Another solution is using Linux to keep and read settings via exec(), but this might be troublesome and there may be some security issues. Also let me remind you that efficient InnoDB indexes can help you a lot. MyISAM is also considered a good "read" performer; however, my benchmarks show that InnoDB indexes are faster.
You can store the settings in the user's session -
session_start();
if (!isset($_SESSION['settings'])) {
    $settings_array = array(); // pulled from the database
    $_SESSION['settings'] = $settings_array;
}
That way, it will only query the database once per user.
You could use a session to store those settings.
I need to write some service for my application. I want each client to have limited persistent connections (like only allow first 10 clients to connect).
I know I can listen on the port with PHP by using socket_listen(). The parent process accepts the connection, then pcntl_fork()s so that children handle the connection.
But as far as I know, PHP resources don't persist across fork(). I wonder whether it is possible to do this with PHP, or whether I have to do it in C.
1)
Why bother forking? Run your daemon as a single process and use socket_select() (or stream_select()) to listen for requests.
See Aleksey Zapparov's code here for a ready-written solution.
2)
Why bother with the pain of writing your own socket code - use [x]inetd to manage the servers and do all the communication on stdio (note that unlike solution 1, there will be a separate process for each client - therefore the handling code will be non-blocking).
--
You are correct in saying that PHP resources should not be available in a forked process - but you give no indication of how this relates to your current problem. Is it just so that you can count the number of connections? Or something else? In the case of the former, there are much easier ways of doing this. Using solution 1, just increment and decrement a counter variable when clients connect/disconnect (see the sketch below). In the case of solution 2, take the same approach but keep the variable in a datafile/database (you might also want to store info about the connections and run occasional audits). Alternatively, limit the connections on the firewall.
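Purely as an illustration of solution 1 (the port, the limit of 10 taken from the question, and the echo behaviour are all assumptions), a single-process sketch using stream_select() where the array of client handles doubles as the connection counter:
<?php
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
if ($server === false) {
    die("Could not start server: $errstr\n");
}
$clients    = array();
$maxClients = 10;

while (true) {
    $read   = $clients;
    $read[] = $server;
    $write  = $except = null;

    if (stream_select($read, $write, $except, null) < 1) {
        continue;
    }

    // New incoming connection?
    if (in_array($server, $read, true)) {
        $conn = stream_socket_accept($server);
        if (count($clients) < $maxClients) {
            $clients[] = $conn;            // "increment the counter"
        } else {
            fwrite($conn, "Server full\n");
            fclose($conn);                 // reject clients beyond the limit
        }
        unset($read[array_search($server, $read, true)]);
    }

    // Handle data from already-connected clients.
    foreach ($read as $conn) {
        $data = fread($conn, 1024);
        if ($data === '' || $data === false) {
            fclose($conn);                 // client went away: "decrement the counter"
            unset($clients[array_search($conn, $clients, true)]);
            continue;
        }
        fwrite($conn, "Echo: " . $data);
    }
}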
C.
Maybe you could try sharing it using memcache (http://www.php.net/manual/en/book.memcache.php). (I never tried this, but it may work.)