Memcache add key to specific memcached server - php

I have added two memcached servers to my web app with the following code:
$servers = array(
    array('xxx.xxx.xxx.185', 11211),
    array('xxx.xxx.xxx.10', 11211)
);
global $m;
$m = new Memcached('persistant-id');
if (0 == count($m->getServerList())) {
    error_log("add memcache servers num: " . count($m->getServerList()));
    return $m->addServers($servers);
}
I want to add key/values to a specific server.
How can I add a specific key, for example, to the xxx.xxx.xxx.185 server?
I searched the web and found addByKey($serverKey, $key, $value, $expiration = null), but I can't work out what $serverKey is.
Any help?

I don't think you can pick a specific server by IP to store the data on when using a pooled connection (I could be wrong, but I haven't seen anything to the contrary). My understanding is that in the *ByKey() functions, $serverKey is just an arbitrary grouping you can assign. You are not picking which server the data is stored on, only specifying that you want all items added with that $serverKey to end up on the same server.
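To make that concrete, here is a minimal sketch of the *ByKey() calls (the grouping key 'server-a' and the item data are made up; $serverKey is a label, not an IP):
<?php
// All items stored under the same $serverKey ('server-a') should hash to the
// same member of the pool, but you don't get to choose which member that is.
$playerData = array('hp' => 100);
$m->setByKey('server-a', 'player:42', $playerData, 3600);
$value = $m->getByKey('server-a', 'player:42');
print_r($m->getServerByKey('server-a')); // reports which pool member that grouping maps to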
The only sure-fire way I know of to guarantee data ends up on a specific server is to create a connection to that server alone and add the data through it.
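A sketch of that approach, reusing the first IP from the question (the 'pinned-185' persistent id is made up):
<?php
// Separate connection that only knows about one server, so anything stored
// through it can only end up on xxx.xxx.xxx.185.
$pinned = new Memcached('pinned-185');
if (0 == count($pinned->getServerList())) {
    $pinned->addServer('xxx.xxx.xxx.185', 11211);
}
$pinned->set('my-key', 'my-value', 3600);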
A less-than-ideal alternative I can think of trying (which I have no idea would work; it's purely a hypothesis) would be to create a connection to just the xxx.185 server, add a single item with whatever $serverKey you want, then create a pooled connection, add a bunch of items using that same $serverKey, and use the getServerByKey() method to see if all the data you added ended up on the same server.
If so, you would just have to seed the server you want with all the $serverKeys you use.
If both servers show up, then the $serverKey mapping might be specific to the connection rather than to the available data set.
But this is not an ideal solution, particularly given the volatility of memcached data, if it works at all (like I said, I've never tried it; it's purely a guess). You'd have to re-seed the server every time memcached restarts or whenever all the data with that $serverKey is discarded to make room.
Hope that helps

Related

Is there a more efficient way to update a JSON file?

I'm developing a browser-based game, and for the combat instances I need to be able to track the player's hit points as well as the NPC's hit points. I'm thinking that setting up a JSON file for each instance makes more sense than having a MySQL db get hammered with requests constantly. I've managed to create the JSON file, pull the contents, update the relevant vars, then overwrite the file, but I'm wondering if there's a more efficient way to handle it than how I've set it up.
$new_data = array(
"id"=>"$id",
"master_id"=>"$master_id",
"leader"=>"$leader",
"group"=>"$group",
"ship_1"=>"$ship_1",
"ship_2"=>"$ship_2",
"ship_3"=>"$ship_3",
"date_start"=>"$date_start",
"date_end"=>"$date_end",
"public_private"=>"$public_private",
"passcode"=>"$passcode",
"npc_1"=>"$npc_1",
"npc_1_armor"=>"$npc_1_armor",
"npc_1_shields"=>"$npc_1_shields",
"npc_2"=>"$npc_2",
"npc_2_armor"=>"$npc_2_armor",
"npc_2_shields"=>"$npc_2_shields",
"npc_3"=>"$npc_3",
"npc_3_armor"=>"$npc_3_armor",
"npc_3_shields"=>"$npc_3_shields",
"npc_4"=>"$npc_4",
"npc_4_armor"=>"$npc_4_armor",
"npc_4_shields"=>"$npc_4_shields",
"npc_5"=>"$npc_5",
"npc_5_armor"=>"$npc_5_armor",
"npc_5_shields"=>"$npc_5_shields",
"ship_turn"=>"$ship_turn",
"status"=>"$status");
$new_data = json_encode($new_data);
$file = "$id.json";
file_put_contents($file, $new_data);
It works, but I'm wondering if there is a way to update a single array item without having to pull ALL the data out, assign it to vars, and re-write the file. In this example, I'm only changing one var (ship_turn).
I'm thinking that setting up a JSON file for each instance makes more sense than having a MySQL db get hammered with requests constantly.
MySQL is optimized for this task.
If you use files (like JSON) as database replacement, then you have to deal with "race conditions", because file access is not optimized for concurrent read / write access (by default).
If you're in a high-concurrency environment you should avoid using the filesystem as "database". Multiple operations on the file system are very hard to make atomic in PHP.
See flock for more details.
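To illustrate, here is a minimal sketch of a locked read-modify-write that updates just one field (ship_turn, from the question); the helper name and anything not shown in the question are assumptions:
<?php
// Hypothetical helper: update one field of an instance file under an exclusive lock.
function update_instance_field($id, $field, $value)
{
    $file = "$id.json";
    $fp = fopen($file, 'c+');       // read/write, create if missing, don't truncate
    if ($fp === false) {
        return false;
    }
    flock($fp, LOCK_EX);            // block until we hold the exclusive lock
    $raw  = stream_get_contents($fp);
    $data = json_decode($raw, true);
    if (!is_array($data)) {
        $data = array();
    }
    $data[$field] = $value;         // change just the one key
    ftruncate($fp, 0);              // JSON can't be patched in place, so rewrite the file
    rewind($fp);
    fwrite($fp, json_encode($data));
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return true;
}

// e.g. advance the turn without rebuilding the whole array from individual vars:
update_instance_field($id, 'ship_turn', $ship_turn);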
It depends on the game. For a turn-based game or any non-real-time game, a MySQL approach should be OK. After all, databases are designed to get hammered heavily :-) For real-time games I would go for WebSocket and NodeJS as the backend. The server would keep a runtime state of the game, reacting appropriately to client requests and dealing with race conditions (as you would on a standalone multiplayer server).

Handling huge amount of MySQL connection

I have a JS script that does one simple thing - an ajax request to my server. On this server I establish a PDO connection, execute one prepared statement:
SELECT * FROM table WHERE param1 = :param1 AND param2 = :param2;
Where table is the table with 5-50 rows, 5-15 columns, with data changing once each day on average.
Then I echo the json result back to the script and do something with it, let's say I console log it.
The problem is that the script runs ~10,000 times a second, which gives me that many connections to the database, and I'm getting "can't connect to the database" errors all the time in the server logs. That means it sometimes works, when DB processes are free, and sometimes doesn't.
How can I handle this?
Probable solutions:
Memcached - it would also be slow; it's not created to do that. The performance would be similar to, or worse than, that of the database.
File on server instead of the database - great solution, but the structure would be the problem.
Anything better?
For such a tiny amount of data that is changed so rarely, I'd make it just a regular PHP file.
Once you have your data in the form of an array, dump it into a PHP file using var_export(). Then just include this file and use a simple loop to search the data.
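A minimal sketch of that approach (the file name, the request parameter names, and the regeneration step are assumptions):
<?php
// Regeneration step (run once a day, or whenever the data changes):
// dump the handful of rows fetched from MySQL into an includable PHP file.
$rows = array(/* ... the 5-50 rows fetched from the table ... */);
file_put_contents('data_cache.php', '<?php return ' . var_export($rows, true) . ';');

// On every AJAX request: include the file and search it with a plain loop.
$rows = include 'data_cache.php';
$result = null;
foreach ($rows as $row) {
    if ($row['param1'] == $_GET['param1'] && $row['param2'] == $_GET['param2']) {
        $result = $row;
        break;
    }
}
echo json_encode($result);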
Another option is to use Memcached, which was created for exactly this sort of job; on a fast machine with high-speed networking, memcached can easily handle 200,000+ requests per second, well above your modest 10k rps.
You can even take PHP out of the stack entirely, making Nginx ask Memcached directly for the stored values, using ngx_http_memcached_module.
If you want to stick with the current MySQL-based solution, you can increase the max_connections number in the MySQL configuration; however, raising it above 200 may require some OS tweaking as well. But what you should not do is use a persistent connection, as that will make things far worse.
You need to leverage a cache. There is no reason at all to go fetch the data from the database every time the AJAX request is made for data that is this slow-changing.
A couple of approaches that you could take (possibly even in combination with each other).
Cache between application and DB. This might be memcache or similar and would allow you to perform hash-based lookups (likely based on some hash of parameters passed) to data stored in memory (perhaps JSON representation or whatever data format you ultimately return to the client).
Cache between client and application. This might take the form of web-server-provided cache, a CDN-based cache, or similar that would prevent the request from ever even reaching your application given an appropriately stored, non-expired item in the cache.
Anything better? No
Since you output the same results many times, the sensible solution is to cache results.
My educated guess is that your wrong assumption (that memcached is not built for this) comes from planning to store each record separately.
I implemented a simple caching mechanism for you to use:
<?php
$memcached_port = YOUR_MEMCACHED_PORT;
$m = new Memcached();
$m->addServer('localhost', $memcached_port);

$key1 = $_GET['key1'];
$key2 = $_GET['key2'];
$m_key = $key1 . $key2; // generate key from unique values, for large keys use MD5 to hash the unique value

$data = false;
if (!($data = $m->get($m_key))) {
    // fetch $data from your database
    $expire = 3600; // 1 hour, you may use a unix timestamp value if you wish to expire in a specified time of day
    $m->set($m_key, $data, $expire); // push to memcache
}
echo json_encode($data);
What you do is:
Decide on what signifies a result (what set of input parameters).
Use that for the memcache key (for example, if it's a country and a language, the key would be $country.$language).
Check if the result exists:
If it does, pull the data you stored as an array and output it.
If it doesn't exist or is outdated:
a. pull the data needed
b. put the data in an array
c. push the data to memcached
d. output the data
There are more efficient ways to cache data, but this is the simplest one, and sounds like your kind of code.
10,000 requests/second still doesn't justify the effort needed to set up server-level caching (nginx or whatever).
In an ideally tuned world, a chicken with a calculator would be able to run Facebook... but who cares? (:

FileMaker PHP API - why is the initial connection so slow?

I've just set up my first remote connection with FileMaker Server using the PHP API and something a bit strange is happening.
The first connection and response takes around 5 seconds; if I hit reload immediately afterwards, I get a response within 0.5 seconds.
I can get a quick response for around 60 seconds or so (I haven't timed it yet, but it seems like at least a minute and less than 5 minutes), and then it goes back to taking 5 seconds to get a response (after that it's quick again).
Is there any way of ensuring that it's always a quick response?
I can't give you an exact answer on where the speed difference may be coming from, but I'd agree with NATH's notion on caching. It's likely due to how FileMaker Server handles caching the results on the server side and when it clears that cache out.
In addition to that, a couple of things that are helpful to know when using custom web publishing with FileMaker when it comes to speed:
The fields on your layout will determine how much data is pulled
When you perform a find in the PHP API on a specific layout, e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
What's being returned is data from all of the fields available on the myLayout layout.
In SQL terms, the above query is equivalent to:
SELECT * FROM myLayout WHERE `name` = ?; // and the $myname variable is bound to ?
The FileMaker find will return every field/column available. You designate the returned columns by placing the fields you want on the layout. To get a true SELECT * from your table, you would include every field from the table on your layout.
All of that said, you can speed up your requests by only including fields on the layout that you want returned in the queries. If you only need data from 3 fields returned to your php to get the job done, only include those 3 fields on the layout the requests use.
Once you have the records, hold on to them so you can edit them
Taking the example from above, if you know you need to make changes to those records somewhere down the line in your php, store the records in a variable and use the setField and commit methods to edit them. e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
$records = $result->getRecords();
...
// say we want to update a flag on each of the records down the line in our php code
foreach ($records as $record) {
    $record->setField('active', true);
    $record->commit();
}
Since you have the records already, you can act on them and commit them when needed.
I say this as opposed to grabbing them once for one purpose and then grabbing them again from the database later to make updates to the records.
It's not really an answer to your original question, but since FileMaker's API is a bit different from others and it doesn't have the greatest documentation, I thought I'd mention it.
There are some delays that you can remove.
Ensure that the layouts you are accessing via PHP are very simple, no unnecessary or slow calculations, few layout objects etc. When the PHP engine first accesses that layout it needs to load it up.
Also check for layout and file script triggers that may be run; IIRC the OnFirstWindowOpen script trigger is called when a connection is made.
I don't think that it's related to caching. Also, it's the same when accessing via XML. Haven't tested ODBC, but am assuming that it is an issue with this too.
Once the connection is established with FileMaker Server and your machine, FileMaker Server keeps this connection alive for about 3 minutes. You can see the connection in the client list in the FM Server Admin Console. The initial connection takes a few seconds to set up (depending on how many others are connected), and then ANY further queries are lightning fast. If you run your app again, it'll reuse that connection and give results in very little time.
You can do completely different queries (on different tables) in a different application, but as long as you execute the second one on the same machine and use the same credentials, FileMaker Server will reuse the existing connection and provide results instantly. This means that it is not due to caching, but it's just the time that it takes FMServer to initially establish a connection.
In our case, we're using a web server to make FileMaker PHP API calls. We have set up a cron every 2 minutes to keep that connection alive, which has pretty much eliminated all delays.
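For reference, a rough sketch of such a keep-alive script (the credentials, the file name, and the choice of listLayouts() as the cheap call are all assumptions):
<?php
// keepalive.php - run from cron every couple of minutes, e.g.:
//   */2 * * * * php /path/to/keepalive.php
require_once 'FileMaker.php';

$fm = new FileMaker('your_database', 'your_fms_host', 'web_user', 'web_password');
$result = $fm->listLayouts(); // any cheap call keeps the FMS connection warm

if (FileMaker::isError($result)) {
    error_log('FileMaker keep-alive failed: ' . $result->getMessage());
}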
This is probably way late to answer this, but I'm posting here in case anyone else sees this.
I've seen this happen when using external authentication with FileMaker Server. The first query establishes a connection to Active Directory, which takes some time, and then subsequent queries are fast as FMS has got the authentication figured out. If you can, use local authentication in your FileMaker file for your PHP access and make sure it sits above any external authentication in your accounts list. FileMaker runs through the auth list from top to bottom, so this will make sure that FMS successfully authenticates your web query before it gets to attempt an external authentication request, making the authentication process very fast.

Any suggestion for me to buffer data in my web server which runs in PHP?

I have been thinking about this for a long time.
I don't think I can use any of the PHP extensions (e.g. memcache, APC, XCache) that need something installed on my remote Linux server, as my web host is a shared server; all I can do is place files/scripts in the httpdocs folder.
Is there any way I can programmatically use caching and access memory?
What I'm really after is a "place" to save some data that can be accessed faster than going to the DB to fetch it, and that also reduces the load on the DB.
That means memory isn't a must, if someone can give any other effective suggestion, e.g. would using a text file be a good choice? (I'm just guessing.)
My PHP version is 5.2.17, and I am using a MySQL DB.
I hope someone can give me suggestions.
Flat files will always be the EASIEST way to cache, but they will be slower than accessing data directly from memory. You can use MySQL tables that are stored in memory: you need to change the engine used by the tables to MEMORY. NOTE that this will only work if your DB is on the same server as the web server.
Set up an in-memory table with two columns, key and value. The variable name is the key and its contents are the value. If you need to cache arrays or objects, serialize the data before storing it.
If you need to limit the size of the in-memory table, add one more column, hitCount. For each read, increase the count by one. When inserting a new row, check the number of rows against your maximum, and if the limit has been reached delete the row with the lowest hitCount.
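A sketch of that key/value table and a pair of helper functions (table and column names are made up; it assumes an existing mysql_connect() link, matching the PHP 5.2 environment of the question):
<?php
/*
CREATE TABLE mem_cache (
    cache_key   VARCHAR(64)   NOT NULL PRIMARY KEY,
    cache_value VARCHAR(4096),               -- MEMORY tables can't hold TEXT/BLOB columns
    hitCount    INT NOT NULL DEFAULT 0
) ENGINE = MEMORY;
*/
function cache_set($key, $value)
{
    $key   = mysql_real_escape_string($key);
    $value = mysql_real_escape_string(serialize($value)); // serialize arrays/objects before storing
    mysql_query("REPLACE INTO mem_cache (cache_key, cache_value) VALUES ('$key', '$value')");
}

function cache_get($key)
{
    $key = mysql_real_escape_string($key);
    $res = mysql_query("SELECT cache_value FROM mem_cache WHERE cache_key = '$key'");
    if ($res && ($row = mysql_fetch_assoc($res))) {
        mysql_query("UPDATE mem_cache SET hitCount = hitCount + 1 WHERE cache_key = '$key'");
        return unserialize($row['cache_value']);
    }
    return false;
}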
To check which one is faster (file caching or the in-memory cache), use the following code:
<?php
function getTime()
{
    // microtime() returns "msec sec"; add the two parts together
    // (on PHP 5 you can simply use microtime(true) instead)
    $a = explode(' ', microtime());
    return (double) $a[0] + $a[1];
}
?>
<?php
$start = getTime();
// Data-fetching tasks come here
$end = getTime();
echo "time taken = " . number_format(($end - $start), 2) . " seconds";
?>
If possible let us know how efficient it is... Thanks
You could very easily just use flat text files as a cache if your DB queries are expensive. Just like you would use memcache with a key/value system, you can use filenames as keys and the contents of the files as values.
Here's an example that caches the output of a single page in a file; you could adapt it to suit your needs: http://www.snipe.net/2009/03/quick-and-dirty-php-caching/
Flat files are the easiest way to cache business logic, queries, etc. on a shared server.
To cache any DB requests, your best bet is to fetch the results, serialize them, and store them in a file with a possible expiry date (if required). When you need to fetch those results again, just pull in the file and unserialize the previously serialized data.
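A minimal sketch of that pattern, with the expiry driven by the file's modification time (the cache directory and function names are just examples):
<?php
// Sketch: cache arbitrary query results in a serialized flat file with a TTL.
function file_cache_get($name, $ttl = 3600)
{
    $file = "cache/$name.cache";
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    return false;
}

function file_cache_set($name, $data)
{
    file_put_contents("cache/$name.cache", serialize($data), LOCK_EX);
}

// Usage:
if (($rows = file_cache_get('fruit_list')) === false) {
    $rows = array(); // ...fetch from MySQL here...
    file_cache_set('fruit_list', $rows);
}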
Also, if the data is user-based, cookies and sessions will work too, at least for as long as the user stays on the application. If you're pulling a lot of data, it would still be better to go with the first option and just save the files based on a user/session id.
It depends on the size of the data to cache.
Based on the restrictions of your server environment:
Use a flat file (or maybe a SQLite DB) to cache your data for larger data sets (e.g., user preferences, user activity logs).
Use shared memory to cache your data for smaller data sets (e.g., system counters, system status).
Hope this helps.

How to copy tables from one website to another with php?

I have 2 websites, let's say example.com and example1.com.
example.com has a database fruits which has a table apple with 7000 records.
I exported apple and tried to import it to example1.com, but I keep getting a "MySQL server has gone away" error. I suspect this is due to some server-side restriction.
So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was inside the same database.
Both example.com and example1.com are on the same server.
One possible approach:
On the "source" server create a PHP script (the "exporter") that outputs the contents of the table in an easy to parse format (XML comes to mind as easy to generate and to consume, but alternatives like CSV could do).
On the "destination" server create a "importer" PHP script that requests the exporter one via HTTP, parses the result, and uses that data to populate the table.
That's quite generic, but should get you started. Here are some considerations:
http://ie2.php.net/manual/en/function.http-request.php is your friend
If the table contains sensitive data, there are many techniques to enhance security (http_request won't give you https:// support directly, but you can encrypt the data you export and decrypt on importing: look for "SSL" on the PHP manual for further details).
You should consider adding some redundancy (or even full-fledged encryption) to prevent the data from being tampered with while it sails the web between the servers.
You may use GET parameters to add flexibility (for example, passing the table name as a parameter would allow you to have a single script for all tables you may ever need to transfer).
With large tables, PHP timeouts may play against you. The ideal solution for this would be efficient code + custom timeouts for the export and import scripts, but I'm not even sure that's possible at all. A quite reliable workaround is to do the job in chunks (GET params come in handy here to tell the exporter which chunk you need, and a special entry in the output can be enough to tell the importer how much is left to import). Redirects help a lot with this approach (each redirect is a new request to the server, so timeouts get reset).
Maybe I'm missing something, but I hope there is enough there to let you get your hands dirty on the job and come back with any specific issue I might have failed to foresee.
Hope this helps.
EDIT:
Oops, I missed the detail that both DBs are on the same server. In that case, you can merge the import and export task into a single script. This means that:
You don't need a "transfer" format (such as XML or CSV): in-memory representation within the PHP is enough, since now both tasks are done within the same script.
The data doesn't ever leave your server, so the need for encryption is not so heavy. Just make sure no one else can run your script (via authentication or similar techniques) and you should be fine.
Timeouts are not so restrictive, since you don't waste a lot of time waiting for the response to arrive from the source to the destination server, but they still apply. Chunk processing, redirection, and GET parameters (passed within the Location for the redirect) are still a good solution, but you can squeeze much closer to the timeout since execution time metrics are far more reliable than cross-server data-transfer metrics.
Here is a very rough sketch of what you may have to code:
$link_src = mysql_connect(source DB connection details);
$link_dst = mysql_connect(destination DB connection details);
/* You may want to truncate the table on destination before going on, to prevent data repetition */
$q = "INSERT INTO `table_name_here` (column_list_here) VALUES ";
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
    $q .= sprintf("(%s, %s, %s), ", $row['field1_name'], $row['field2_name'], $row['field3_name']);
}
mysql_free_result($res);
/* removing the trailing ', ' from $q is left as an exercise (ok, I'm lazy, but that's supposed to be just a sketch) */
mysql_query($q, $link_dst);
You'll have to add the chunking logics in there (those are too case- & setup- specific), and probably output some confirmation message (maybe a DESCRIBE and a COUNT of both source and destination tables and a comparison between them?), but that's quite the core of the job.
As an alternative you may run a separate INSERT per row (invoking the query within the loop), but I'm confident a single query would be faster (however, if your PHP memory limits are too small, this alternative lets you get rid of the memory-hungry $q).
Yet another edit:
From the documentation link posted by Roberto:
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.
An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
If that's what's causing your issue (and by your question it seems very likely it is), then the approach of breaking the INSERT into one query per row will most probably solve it. In that case, the code sketch above becomes:
$link_src = mysql_connect(source DB connection details);
$link_dst = mysql_connect(destination DB connection details);
/* You may want to truncate the table on destination before going on, to prevent data repetition */
$q = "INSERT INTO `table_name_here` (column_list_here) VALUES ";
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
$q = $q . sprintf("INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUE (%s, %s, %s)", $row['field1_name'], $row['field2_name'], $row['field3_name']);
}
mysql_free_result($res);
The issue may also be triggered by the huge volume of the initial SELECT; in such a case you should combine this with chunking (either with multiple SELECT+while() blocks taking advantage of SQL's LIMIT clause, or via redirections), thus breaking the SELECT down into multiple, smaller queries. (Note that redirection-based chunking is only needed if you have timeout issues, or your execution time gets close enough to the timeout to threaten issues if the table grows. It may be a good idea to implement it anyway, so even if your table ever grows to an obscene size the script will still work unchanged.)
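Continuing from the sketch above, here is a rough sketch of the LIMIT-based chunking (the chunk size, the offset GET parameter, and the copy_table.php file name are assumptions; the final redirect is only needed if timeouts bite):
$chunk  = 500; // rows per pass; tune to taste
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

$res = mysql_query("SELECT * FROM `table_name_here` LIMIT $offset, $chunk", $link_src);
$copied = 0;
while ($row = mysql_fetch_assoc($res)) {
    $q = sprintf("INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUES ('%s', '%s', '%s')",
                 mysql_real_escape_string($row['field1_name'], $link_dst),
                 mysql_real_escape_string($row['field2_name'], $link_dst),
                 mysql_real_escape_string($row['field3_name'], $link_dst));
    mysql_query($q, $link_dst);
    $copied++;
}
mysql_free_result($res);

if ($copied == $chunk) {
    // more rows probably remain: redirect to ourselves to process the next chunk
    header('Location: copy_table.php?offset=' . ($offset + $chunk));
    exit;
}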
After struggling with this for a while, I came across BigDump. It worked like a charm! I was able to copy LARGE databases without a glitch.
The most common causes of the "MySQL server has gone away" error in MySQL 5.0 are reported here:
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
You might want to have a look at it and use it as a checklist to see if you're doing anything wrong.
