I want to make a unique counter that tracks the referrer and IP of every visiting user. Memcached should store that data, and it should then be written to the database after 5 minutes or so. It should also check for duplicates so that the same IP is never written to the DB twice.
I have an idea of how to do it without Memcached, but then something would be written to the database every second, which would slow the site down.
I can cache basic SQL with Memcached, but I have no clue how to build the above; I've only just learned to code, so I'm a total noob :)
Thank you for your help.
As I understand it, you are using an RDBMS for long-term storage. I can suggest 2 things:
1. Store the information in a key/value store such as Memcached and write it to the RDBMS periodically through a background process (a CRON job).
With this solution you would do something like this in your application code:
$m = new Memcached();
$m->addServer('localhost', 11211);
$m->setOption(Memcached::OPT_COMPRESSION, false); // append() only works on uncompressed values
$entry = '|' . time() . ':' . base64_encode($_SERVER['HTTP_REFERER']);
if (!$m->append('some_unique_id', $entry)) {
    $m->set('some_unique_id', $entry); // append() fails when the key does not exist yet
}
And then you would have a CRON job doing something like this:
$m = new Memcached();
$m->addServer('localhost', 11211);

// $list_of_unique_ids: however you track the keys written by the application
foreach ($list_of_unique_ids as $uid) {
    $raw_data = $m->get($uid);
    $m->delete($uid);
    $uid_safe = mysql_real_escape_string($uid);

    $items = explode('|', $raw_data);
    $sql = array();
    foreach ($items as $rec) {
        if (!empty($rec)) {
            list($timestamp, $ref_base64) = explode(':', $rec, 2);
            $ref = base64_decode($ref_base64);
            $ref_safe = mysql_real_escape_string($ref);
            $sql[] = "('" . $uid_safe . "', " . (int) $timestamp . ", '" . $ref_safe . "')";
        }
    }

    if (!empty($sql)) { // skip keys that held no data, or the INSERT would be invalid
        mysql_query("INSERT INTO some_table (user_id, ts, ref) VALUES " . implode(',', $sql));
    }
}
2. Use a more advanced key/value store such as Redis or MongoDB as the primary means of long-term storage.
Just pick a database engine with high write throughput and call it a day.
> I want to make a unique counter that
> tracks the referrer and IP of every
> visiting user. Memcached should store
> that data, and it should then be
> written to the database after 5
> minutes or so. It should also check
> for duplicates so that the same IP is
> never written to the DB twice.
I would advise you to use Redis for this, because Redis has all the commands needed to do it efficiently and supports persistent snapshots. For counting you simply use the INCR/INCRBY (and DECR/DECRBY) commands.
If you managed to install Memcached on your box, installing Redis is going to be a cinch: a plain make is enough. A popular client for connecting to Redis from PHP is Predis.
If you cannot install software, you also have the option of the free plan (5 MB memory, 1 database, but no backups) from http://redistogo.com. Then you would need to do the backups to MySQL manually, because snapshots are disabled on the free plan (you are probably better off getting a dedicated box or buying the mini plan for $5/month).
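To make the counting concrete, here is a command-level sketch of the Redis operations involved (the key names are made up, and this assumes a running Redis server, so read it as pseudocode rather than a runnable script). Because SADD stores each member only once, the duplicate-IP check the question asks for collapses into a single command:

```
INCR hits:total                      # total number of visits
SADD ips:seen 10.0.0.1               # returns 1: first time this IP is seen
SADD ips:seen 10.0.0.1               # returns 0: duplicate, the set is unchanged
SCARD ips:seen                       # count of unique IPs so far
SADD refs:seen http://example.com/   # the same trick works for referrers
```

A cron job can later read the set back with SMEMBERS, write the members to MySQL, and DEL the key.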
Related
I've been searching for a suitable PHP caching method for MSSQL results.
Most of the examples I can find suggest storing the results in an array, which would then get included on the page. This seems great unless a request for the content is made at the same time as it is being updated/rebuilt.
I was hoping to find something similar to ASP's application-level variables, but as far as I'm aware, PHP doesn't offer this functionality?
The problem I'm facing is that I need to perform 6 queries per page to populate dropdown boxes, and this happens on the vast majority of pages. Combining the queries is not an option. The cached data will also need to be rebuilt sporadically, whenever the system changes; this could be once a day, once a week or once a month. Any advice will be gratefully received, thanks!
You can use Redis server and phpredis PHP extension to cache results fetched from database:
$redis = new Redis();
$redis->connect('/tmp/redis.sock');
$sql = "SELECT something FROM sometable WHERE condition";
$sql_hash = md5($sql);
$redis_key = "dbcache:${sql_hash}";
$ttl = 3600; // values expire in 1 hour
if ($result = $redis->get($redis_key)) {
$result = json_decode($result, true);
} else {
$result = Db::fetchArray($sql);
$redis->setex($redis_key, $ttl, json_encode($result));
}
(Error checks skipped for clarity)
My app first queries 2 large sets of data, then does some work on the first set of data, and "uses" it on the second.
If possible I'd like it to instead only query the first set synchronously and the second asynchronously, do the work on the first set and then wait for the query of the second set to finish if it hasn't already and finally use the first set of data on it.
Is this possible somehow?
It's possible.
$mysqli->query($long_running_sql, MYSQLI_ASYNC);
echo 'run other stuff';
$result = $mysqli->reap_async_query(); //gives result (and blocks script if query is not done)
$resultArray = $result->fetch_assoc();
Or you can use mysqli_poll() if you don't want a blocking call:
http://php.net/manual/en/mysqli.poll.php
MySQL requires that, inside one connection, a query is completely handled before the next query is launched. That includes the fetching of all results.
It is possible, however, to:
fetch results one by one instead of all at once
launch multiple queries by creating multiple connections
By default, PHP will wait until all results are available and then internally (in the mysql driver) fetch all results at once. This is true even when using for example PDOStatement::fetch() to import them in your code one row at a time. When using PDO, this can be prevented with setting attribute \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY to false. This is useful for:
speeding up the handling of query results: your code can start processing as soon as one row is found instead of only after every row is found.
working with result sets that are potentially larger than the memory available to PHP (PHP's self-imposed memory_limit or RAM memory).
Be aware that often the speed is limited by a storage system with characteristics that mean that the total processing time for two queries is larger when running them at the same time than when running them one by one.
An example (which can be done completely in MySQL, but for showing the concept...):
$dbConnectionOne = new \PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$dbConnectionOne->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);

$dbConnectionTwo = new \PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$dbConnectionTwo->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
$dbConnectionTwo->setAttribute(\PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$synchStmt = $dbConnectionOne->prepare('SELECT id, name, factor FROM measurementConfiguration');
$synchStmt->execute();

$asynchStmt = $dbConnectionTwo->prepare('SELECT measurementConfiguration_id, timestamp, value FROM hugeMeasurementsTable');
$asynchStmt->execute();

$measurementConfiguration = array();
foreach ($synchStmt->fetchAll() as $synchStmtRow) {
    $measurementConfiguration[$synchStmtRow['id']] = array(
        'name' => $synchStmtRow['name'],
        'factor' => $synchStmtRow['factor']
    );
}

while (($asynchStmtRow = $asynchStmt->fetch()) !== false) {
    $currentMeasurementConfiguration = $measurementConfiguration[$asynchStmtRow['measurementConfiguration_id']];
    echo 'Measurement of sensor ' . $currentMeasurementConfiguration['name'] . ' at ' . $asynchStmtRow['timestamp'] . ' was ' . ($asynchStmtRow['value'] * $currentMeasurementConfiguration['factor']) . PHP_EOL;
}
I am in the process of coding a cloud monitoring application and couldn't find any useful logic for getting performance counters (such as CPU utilization, disk utilization, RAM usage) in the Azure PHP SDK documentation.
Can anybody help?
define('PRODUCTION_SITE', false); // Controls connections to cloud or local storage
define('AZURE_STORAGE_KEY', '<your_storage_key>');
define('AZURE_SERVICE', '<your_domain_extension>');
define('ROLE_ID', $_SERVER['RoleDeploymentID'] . '/' . $_SERVER['RoleName'] . '/' . $_SERVER['RoleInstanceID']);
define('PERF_IN_SEC', 30); // How many seconds between times dumping performance metrics to table storage
/** Microsoft_WindowsAzure_Storage_Blob */
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';
/** Microsoft_WindowsAzure_Diagnostics_Manager **/
require_once 'Microsoft/WindowsAzure/Diagnostics/Manager.php';
/** Microsoft_WindowsAzure_Storage_Table */
require_once 'Microsoft/WindowsAzure/Storage/Table.php';
if(PRODUCTION_SITE) {
$blob = new Microsoft_WindowsAzure_Storage_Blob(
'blob.core.windows.net',
AZURE_SERVICE,
AZURE_STORAGE_KEY
);
$table = new Microsoft_WindowsAzure_Storage_Table(
'table.core.windows.net',
AZURE_SERVICE,
AZURE_STORAGE_KEY
);
} else {
// Connect to local Storage Emulator
$blob = new Microsoft_WindowsAzure_Storage_Blob();
$table = new Microsoft_WindowsAzure_Storage_Table();
}
$manager = new Microsoft_WindowsAzure_Diagnostics_Manager($blob);
//////////////////////////////
// Bring in global include file
require_once('setup.php');
// Performance counters to subscribe to
$counters = array(
'\Processor(_Total)\% Processor Time',
'\TCPv4\Connections Established',
);
// Retrieve the current configuration information for the running role
$configuration = $manager->getConfigurationForRoleInstance(ROLE_ID);
// Add each subscription counter to the configuration
foreach($counters as $c) {
$configuration->DataSources->PerformanceCounters->addSubscription($c, PERF_IN_SEC);
}
// These settings are required by the diagnostics manager to know when to transfer the metrics to the storage table
$configuration->DataSources->OverallQuotaInMB = 10;
$configuration->DataSources->PerformanceCounters->BufferQuotaInMB = 10;
$configuration->DataSources->PerformanceCounters->ScheduledTransferPeriodInMinutes = 1;
// Update the configuration for the current running role
$manager->setConfigurationForRoleInstance(ROLE_ID,$configuration);
///////////////////////////////////////
// Bring in global include file
//require_once('setup.php');
// Grab all entities from the metrics table
$metrics = $table->retrieveEntities('WADPerformanceCountersTable');
// Loop through metric entities and display results
foreach($metrics AS $m) {
echo $m->RoleInstance . " - " . $m->CounterName . ": " . $m->CounterValue . "<br/>";
}
This is the code I crafted to extract the processor info...
UPDATE
Do take a look at the following blog post: http://blog.maartenballiauw.be/post/2010/09/23/Windows-Azure-Diagnostics-in-PHP.aspx. I realize that it's an old post but I think this should give you some idea about implementing diagnostics in your role running PHP. The blog post makes use of PHP SDK for Windows Azure on CodePlex which I think is quite old and has been retired in favor of the new SDK on Github but I think the SDK code on Github doesn't have diagnostics implemented (and that's a shame).
ORIGINAL RESPONSE
Since performance counters data is stored in Windows Azure Table Storage, you could simply use Windows Azure SDK for PHP to query WADPerformanceCountersTable in your storage account to fetch this data.
I have written a blog post about efficiently fetching diagnostics data sometime back which you can read here: http://gauravmantri.com/2012/02/17/effective-way-of-fetching-diagnostics-data-from-windows-azure-diagnostics-table-hint-use-partitionkey/.
Update
Looking at your code above and the source code for TableRestProxy.php, you could pass a filter query as the 2nd parameter to your retrieveEntities call. You could do something like:
$query = "(CounterName eq '\Processor(_Total)\% Processor Time' or CounterName eq '\TCPv4\Connections Established')";
$metrics = $table->retrieveEntities('WADPerformanceCountersTable', $query);
Please note that my knowledge of PHP is limited, so the code above may not work as-is. Also, please ensure you include PartitionKey in your query to avoid a full table scan.
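To illustrate the PartitionKey advice: the WAD tables use a PartitionKey of '0' followed by the .NET tick count of the event time (ticks are 100 ns intervals since 0001-01-01; see the blog post linked above for details). Below is a sketch of building such a filter in PHP; the helper name is made up, a 64-bit PHP build is assumed, and you should verify the key format against your own table before relying on it:

```php
<?php
// Sketch: compute a WAD-style PartitionKey from a Unix timestamp.
// .NET ticks = (unixTime + seconds between 0001-01-01 and 1970-01-01) * 10^7,
// and the WAD tables prefix the tick count with '0'.
function wadPartitionKey($unixTimestamp)
{
    $ticks = ($unixTimestamp + 62135596800) * 10000000;
    return '0' . $ticks;
}

// Only scan the last hour of counters instead of the whole table:
$query = "(PartitionKey gt '" . wadPartitionKey(time() - 3600) . "')";
```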
Storage Analytics Metrics aggregates transaction data and capacity data for a storage account. Transactions metrics are recorded for the Blob, Table, and Queue services. Currently, capacity metrics are only recorded for the Blob service. Transaction data and capacity data is stored in the following tables:
$MetricsCapacityBlob
$MetricsTransactionsBlob
$MetricsTransactionsTable
$MetricsTransactionsQueue
The above tables are not displayed when a listing operation is performed, such as the ListTables method. Each table must be accessed directly.
When you retrieve metrics, use these tables. For example:
$metrics = $table->retrieveEntities('$MetricsCapacityBlob');
URL:
http://msdn.microsoft.com/en-us/library/windowsazure/hh343264.aspx
I just designed an admin console for a social networking website. My boss now wants me to cache the results of several MySQL queries that build these results (for 24 hours). The site uses Memcached (with Wordpress W3 total cache) and XCache. I wanted to know what's the best way to do this.
Here is an example of one such query and how I am getting the results. Basically I am returning aggregate stats on users, so my results are fairly simple, e.g.:
//users who registered in last 365 days
$users_reg_365 = "select ID
from wp_users
where user_registered > NOW() - interval 365 day";
then use the wpdb query class to get the results:
$users_reg_365 = $wpdb->get_results($users_reg_365);
then display the result in the dashboard:
<li><?php echo "Total users who registered within last 365 days: <span class='sqlresult'>" . sizeof($users_reg_365) . "</span>"; ?></li>
My understanding of Memcached/XCache is that it basically stores strings, so would it make sense to just cache sizeof($users_reg_365)?
The last wrinkle is that our WordPress site uses W3 Total Cache, which leverages Memcached, but the boss asked me to use XCache instead of Memcached, and I find the docs a bit confusing. What's the best way to solve this problem? Can SQL itself be told to 'remember' certain queries like this, or is memory caching the way to go?
Thanks!
You can find more about the differences of both here:
Difference between Memcache, APC, XCache and other alternatives I've not heard of
An example of how you could do it:
<?php
$m = new Memcached();
$m->addServer('localhost', 11211);
// cache for 24 hrs
$cache_expire = 86400;
// 'users_reg_365' is the cache key
$users_reg_365 = $m->get('users_reg_365');
if (empty($users_reg_365)) {
    $sql = "select ID from wp_users where user_registered > NOW() - interval 365 day";
    $users_reg_365 = $wpdb->get_results($sql);
    $m->set('users_reg_365', $users_reg_365, $cache_expire);
}
If you need the cache to refresh exactly at midnight, adjust the value of $cache_expire accordingly.
You can refer to the full reference of memcached at
http://www.php.net/manual/en/memcached.get.php
Hm. I'm not sure how your caching is configured inside of WordPress. If you have WordPress's object cache set up to use Memcache(d)/XCache for persistent caching, you could do something like this:
$key = 'custom-key-to-look-up-later';
$data = sizeof( $users_reg_365 );
$group = 'group-id';
$expire = 60 * 60 * 24; // time in seconds before expiring the cache.
wp_cache_set( $key, $data, $group, $expire );
Later you can look up that value like this:
$data = wp_cache_get( $key, $group );
if ( false !== $data ) {
    // wp_cache_get() returns false on a cache miss, so the data is good. do something with it.
}
You can find the docs on these functions here.
Before you begin, though, make sure WordPress is set up to work with Memcache(d) or XCache. :)
If you want to store the results of an array, just serialize() or json_encode() it and store the resulting string.
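For example (plain PHP, no cache server involved): an array survives a json_encode()/json_decode() round trip, so the decoded value can be used exactly like the original query result:

```php
<?php
// Plain strings are all the cache stores, so encode arrays before caching:
$rows = array(
    array('ID' => 1, 'user_login' => 'alice'),
    array('ID' => 2, 'user_login' => 'bob'),
);

$cached  = json_encode($rows);          // this string is what you cache
$decoded = json_decode($cached, true);  // true => associative arrays back

var_dump($decoded === $rows);           // bool(true) - nothing was lost
```

serialize()/unserialize() works the same way and also preserves PHP objects, at the cost of a PHP-only format.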
I have a script that is running on a shared hosting environment where I can't change the available amount of PHP memory. The script is consuming a web service via soap. I can't get all my data at once or else it runs out of memory so I have had some success with caching the data locally in a mysql database so that subsequent queries are faster.
Basically instead of querying the web service for 5 months of data I am querying it 1 month at a time and storing that in the mysql table and retrieving the next month etc. This usually works but I sometimes still run out of memory.
My basic code logic is like this:
connect to web service using soap;
connect to mysql database
query web service and store result in variable $results;
dump $results into mysql table
repeat steps 3 and 4 for each month of data
The same variables are used in each iteration, so I would assume that each batch of results from the web service would overwrite the previous one in memory. I tried using unset($results) between iterations, but that didn't do anything. I am outputting the memory used with memory_get_usage(true) each time, and with every iteration the memory used increases.
Any ideas how I can fix this memory leak? If I wasn't clear enough leave a comment and I can provide more details. Thanks!
EDIT
Here is some code (I am using nusoap, not PHP 5's native SOAP client, if that makes a difference):
$startingDate = strtotime("3/1/2011");
$endingDate = strtotime("7/31/2011");

// connect to database
mysql_connect("dbhost.com", "dbusername", "dbpassword");
mysql_select_db("dbname");

// configure nusoap
$serverpath = 'http://path.to/wsdl';
$client = new nusoap_client($serverpath);

// cache soap results locally
while ($startingDate <= $endingDate) {
    $sql = "SELECT * FROM table WHERE date >= '" . date('Y-m-d', $startingDate) . "' AND date <= '" . date('Y-m-d', strtotime('+1 month', $startingDate)) . "'";
    $soapResult = $client->call('SelectData', $sql);

    foreach ($soapResult['SelectDateResult']['Result']['Row'] as $row) {
        foreach ($row as &$data) {
            $data = mysql_real_escape_string($data);
        }
        $sql = "INSERT INTO table VALUES('" . $row['dataOne'] . "', '" . $row['dataTwo'] . "', '" . $row['dataThree'] . "')";
        $mysqlResults = mysql_query($sql);
    }

    $startingDate = strtotime('+1 month', $startingDate);
    echo memory_get_usage(true); // MEMORY INCREASES EACH ITERATION
}
Solved it, at least partially. There is a memory leak in nusoap: it writes a debug log to a $GLOBALS variable. Altering this line in nusoap.php freed up a lot of memory.
change
$GLOBALS['_transient']['static']['nusoap_base']->globalDebugLevel = 9;
to
$GLOBALS['_transient']['static']['nusoap_base']->globalDebugLevel = 0;
I'd prefer to just use PHP 5's native SOAP client, but I'm getting strange results that I believe are specific to the web service I am trying to consume. If anyone is familiar with using PHP 5's SOAP client with www.mindbodyonline.com's SOAP API, let me know.
Have you tried unset() on $startingDate and mysql_free_result() for $mysqlResults?
Also SELECT * is frowned upon even if that's not the problem here.
EDIT: Also free the SOAP result too, perhaps. Some simple stuff to begin with to see if that helps.
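To check whether freeing per iteration helps at all, you can watch memory_get_usage() around an unset(). A standalone sketch, not the nusoap loop; range() stands in for a large SOAP result:

```php
<?php
$before = memory_get_usage();

$big = range(1, 100000);     // stand-in for one month of SOAP results
$peak = memory_get_usage();  // usage jumps while the batch is held

unset($big);                 // drop the only reference
$after = memory_get_usage(); // usage falls back down

echo ($after < $peak) ? "freed\n" : "still held\n";  // prints "freed"
```

If the real loop does not show the same drop after unset(), something else (like nusoap's debug log) is still holding a reference, which matches the accepted fix above.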