Speeding Up Image Loading from DB - php

I'm loading 9 images from a database and my syntax looks roughly like this:
<img src="image_loader.php?id=4">
My PHP for image_loader.php looks like:
<?php
/* I set up my connection using mysql_connect and mysql_select_db */
$id = mysql_real_escape_string($_GET["id"], $con); // escape user input before it reaches the query
$query = "SELECT ... FROM ... WHERE id='$id'";
$result = mysql_query($query, $con);
if (!$result) {
    // no result
} else {
    $row = mysql_fetch_row($result);
    header('Content-type: image/png');
    echo $row[0];
    mysql_free_result($result);
}
mysql_close($con); // close only after the result has been used
?>
Each image is about 10-13k, but the bunch seems to be loading very slowly. I realize browsers limit the number of concurrent requests, but the wait times seem excessive.
Any suggestions on how to get images loaded from a database fast?
Also, and this is almost a separate question: is it possible to instruct a browser (or server) to cache images whose src is not a .gif/.png/.jpg URL? It seems that Firefox does and Chrome doesn't, but I'm not certain of this.

I'd first consider whether storing images in a database makes the most sense. It may, but the alternative deserves serious consideration: giving each image a unique filename in the filesystem, and storing only that filename in the database, will often be faster.
You would gain additional speed if the client could request that file directly, as opposed to requesting a generic PHP script that performs an fopen()-style abstraction; a sketch of the idea follows.
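For illustration, a minimal sketch of the filename-in-database approach (the images table, its columns, and the /images/ path are all hypothetical):
<?php
/* Assumes an images table with id and filename columns, files stored
   under a web-accessible /images/ directory, and an open connection $con. */
$id = (int) $_GET['id'];
$res = mysql_query("SELECT filename FROM images WHERE id=$id", $con);
$row = mysql_fetch_row($res);
/* Emit a direct URL; the web server serves the static file without invoking PHP */
echo '<img src="/images/' . htmlspecialchars($row[0]) . '">';
?>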
To narrow down the source of the delay, it might first help to check whether your database is hosted on the same machine as your webserver. One indication is the host string you pass to mysql_connect(): localhost suggests it's local; anything else suggests it's not. Note that many shared hosting services (e.g. GoDaddy) run their database servers separately from their webservers.
For a better idea of where the time goes, I'd suggest instrumenting your image_loader.php code with timestamps to locate the delay; a sketch follows. My guess is that it'll be in the query.
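For example, a minimal way to add that instrumentation (logging to the error log, since the response body is image data):
$t0 = microtime(true);
$result = mysql_query($query, $con);
$t1 = microtime(true);
$row = mysql_fetch_row($result);
$t2 = microtime(true);
/* Timings end up in the web server's error log */
error_log(sprintf("id=%s query=%.4fs fetch=%.4fs",
    $_GET['id'], $t1 - $t0, $t2 - $t1));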
If the delay is in your query, you will want to limit the number of queries you make. A strategy that lets you make one query instead of 9 would limit the impact of any webserver-to-database-server delay.
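One way to realize the one-query idea (my own sketch, not the only strategy) is to fetch all nine images in the page script itself and inline them as base64 data URIs. Note that this enlarges the markup by roughly a third and bypasses normal image caching; the table and column names are hypothetical:
<?php
/* Assumes an images table with id and image (BLOB) columns, all PNG */
$res = mysql_query("SELECT id, image FROM images WHERE id IN (1,2,3,4,5,6,7,8,9)", $con);
while ($row = mysql_fetch_assoc($res)) {
    printf('<img src="data:image/png;base64,%s">',
        base64_encode($row['image']));
}
mysql_free_result($res);
?>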

Related

use MySQL server as "cache"

I know this is a weird question, but I have 2 servers: one is a VPS and the other is a MySQL host. My issue is that I want to lower the load on the VPS, as it is a 128 MB startup VPS.
What would be required to make the MySQL server work as if it were a cache? I have no access to anything on the MySQL server save the databases I'm given.
Will this require a separate database? If so, please tell me, so I can ask my host to add another one.
Thanks!
If you host a website on the VPS that uses MySQL as the database, a typical request goes like this:
the client makes a request to the web server running on your VPS
your script running inside the web server makes a database query
the database returns the query results
your script formats and prints the results
the results appear in the browser (client)
There is no way to make a request hit the database without going through the web server first.
If you want to lighten the load on the VPS you can make your script run faster. The most time-consuming part of most web applications is making database queries and waiting for the results. You can, in many cases, avoid that by caching the results of your database queries on the VPS using an in-memory cache like memcached. Be careful with the memory settings, though: cap memcached's maximum allocated memory at a suitably low value, because your server has very little to spare.
Here is a basic example of caching the result of a SELECT query:
$cache = new Memcached();
$cache->addServer('localhost', 11211);

$article_key = "article[$article_id]";
$article = $cache->get($article_key);
if ($article === FALSE) {
    // Cache miss: fall back to the database ($conn is assumed to be a PDO instance)
    $statement = $conn->prepare("SELECT * FROM articles WHERE id = :id");
    $statement->execute(array(':id' => $article_id));
    $result = $statement->fetchAll(PDO::FETCH_ASSOC);
    if (count($result) === 0) {
        // return a 404 status
    }
    $article = $result[0];
    $cache->set($article_key, $article); // Memcached serializes the array automatically
}
// display the article
// display the article

How does memcache with MySQL work?

I am trying to understand (and probably deploy) memcached in our env.
We have 4 web servers behind a load balancer running a big web app developed in PHP. We are already using APC.
I want to see how memcached works; or maybe I don't understand how caching works at all.
We have some complex dynamic queries that combine several tables to pull data. Each time, the data comes from different client databases, and the data keeps changing. From my understanding, if some data is stored in cache and the same request comes in next time, the same data is returned. (Or I may be completely wrong here.)
How does this whole memcache thing (or, for that matter, any caching) work?
Cache, in general, is a very fast key/value storage engine where you can store values (usually serialized) by a predetermined key, so you can retrieve the stored values by the same key.
In relation to MySQL, you would write your application code in such a way, that you would check for the presence of data in cache, before issuing a request to the database. If a match was found (matching key exists), you would then have access to the data associated to the key. The goal is to not issue a request to the more costly database if it can be avoided.
An example (demonstrative only):
$cache = new Memcached();
$cache->addServer('servername', 11211);

$myCacheKey = 'my_cache_key';
$row = $cache->get($myCacheKey);
if ($row === FALSE) {
    // Cache miss: issue the painful query to MySQL ($dbo is assumed to be a PDO instance)
    $sql = "SELECT * FROM table WHERE id = :id";
    $stmt = $dbo->prepare($sql);
    $stmt->bindValue(':id', $someId, PDO::PARAM_INT);
    $stmt->execute();
    $row = $stmt->fetch(PDO::FETCH_OBJ);
    $cache->set($myCacheKey, $row); // Memcached serializes the object automatically
}
// Now I have access to $row, where I can do what I need to.
// On subsequent calls the data is pulled from cache, skipping the query altogether.
var_dump($row);
Check out the PHP docs on Memcached for more info; there are some good examples and comments.
There are several examples of how memcache works. Here is one of the links.
Secondly, Memcache can work with or without MySQL.
It caches your PHP objects; whether they come from MySQL or anywhere else, if it's a PHP object, it can be stored in memcache.
APC gives you some more functionality than memcache. Besides storing/caching PHP objects, it also caches the compiled opcodes of your PHP files, so they don't have to be loaded into memory and compiled on every request; the already-compiled opcode runs straight from memory.
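For comparison, a sketch of APC's user cache (run_expensive_query() is a hypothetical helper). Note that APC's cache is local to each web server, so with four load-balanced servers each keeps its own copy, whereas memcached gives all four a single shared cache:
$data = apc_fetch('expensive_result', $success);
if (!$success) {
    $data = run_expensive_query();             // hypothetical helper
    apc_store('expensive_result', $data, 300); // keep for 5 minutes
}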
If your data keeps changing (between requests) then caching is futile, because that data is going to be stale. But most of the time (I bet even in your case) multiple requests to the database return the same data set, in which case an in-memory cache is very useful.
P.S.: I did a quick Google search and found this video about memcached, which is of rather good quality => http://www.bestechvideos.com/2009/03/21/railslab-scaling-rails-episode-8-memcached. The only problem could be that it talks about Ruby on Rails (which I also don't use that much, but it is very easy to understand). Hopefully it will help you grasp the concept a little better.

MySQL query to reduce server load

I have a MySQL server running that will be queried regularly through a PHP front end. I'm slightly worried about server load, as there will be a fair number of people accessing the webpage, with each session querying the database regularly. The results of the query, and in essence the webpage, will be the same for all users.
Is there a way of querying the database once, and outputting the data/results to the webpage, from which all users connect to and view? Basically running the query for all users that connect to the webpage, rather than each user querying the database.
Any suggestions appreciated.
Thanks
You don't have to worry.
Databases are intended for exactly that.
Most sites in the world run exactly the same way: a MySQL server queried regularly through a PHP front end. There is nothing bad about it.
A well-tuned MySQL server and properly designed queries will serve much more traffic than you think.
You will need exceptionally high traffic before you have to start worrying about such things.
Don't forget that MySQL has its own query cache.
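If you want to verify that, here is a quick way to inspect the query cache from PHP (the query cache exists in MySQL 5.x; it was removed in MySQL 8.0):
/* Is the cache enabled, and is it being hit? */
$res = mysql_query("SHOW VARIABLES LIKE 'query_cache%'", $con);
while ($row = mysql_fetch_assoc($res)) {
    echo "{$row['Variable_name']} = {$row['Value']}\n";
}
$res = mysql_query("SHOW STATUS LIKE 'Qcache%'", $con); // hit/insert counters
while ($row = mysql_fetch_assoc($res)) {
    echo "{$row['Variable_name']} = {$row['Value']}\n";
}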
Also please note that there are no users "connected" to the webpage. They connect, get page contents and disconnect.
You should give the server a try. If it turns out to be overloaded,
you can always add memcached. It can be used via PHP or by MySQL directly, and it will save you from hitting the DB server with similar queries over and over, i.e. the load on the server will decrease drastically.
If the webpage will be the same for all users, why do you even need to have a MySQL backend?
I think the best solution would be a standalone script that runs periodically (e.g. as a cron job) and generates static HTML for your web pages; a sketch follows. That way, users never need to query the database when they are just going to end up with the exact same page anyway.
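A minimal sketch of such a generator (table names, credentials, and paths are all hypothetical):
<?php
// generate_page.php -- cron entry: */5 * * * * php /path/to/generate_page.php
$con = mysql_connect('localhost', 'user', 'pass');
mysql_select_db('mydb', $con);
$res = mysql_query("SELECT title, body FROM articles ORDER BY id DESC LIMIT 10", $con);
$html = "<html><body>\n";
while ($row = mysql_fetch_assoc($res)) {
    $html .= '<h2>' . htmlspecialchars($row['title']) . "</h2>\n";
    $html .= '<p>' . htmlspecialchars($row['body']) . "</p>\n";
}
$html .= "</body></html>";
// Write to a temp file, then rename: visitors never see a half-written page
file_put_contents('/var/www/html/index.html.tmp', $html);
rename('/var/www/html/index.html.tmp', '/var/www/html/index.html');
?>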
If it's a large query with joins, you could create a view in MySQL over the queried data, query the view instead, and update it when the data changes.
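A sketch of that suggestion (table and column names are hypothetical). One caveat: plain MySQL views are not materialized, so selecting from the view still runs the underlying join; the saving mainly comes when this is combined with the query cache or a periodically refreshed summary table:
/* One-time setup: wrap the expensive join in a view */
mysql_query("CREATE VIEW article_report AS
             SELECT a.id, a.title, COUNT(c.id) AS comment_count
             FROM articles a
             LEFT JOIN comments c ON c.article_id = a.id
             GROUP BY a.id, a.title", $con);

/* Page requests then issue the simple query */
$res = mysql_query("SELECT * FROM article_report", $con);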

How to copy tables from one website to another with php?

I have 2 websites, let's say example.com and example1.com.
example.com has a database fruits which has a table apple with 7000 records.
I exported apple and tried to import it into example1.com, but I always get a "MySQL server has gone away" error. I suspect this is due to some server-side restriction.
So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was within the same database.
Both example.com and example1.com are on the same server.
One possible approach:
On the "source" server create a PHP script (the "exporter") that outputs the contents of the table in an easy to parse format (XML comes to mind as easy to generate and to consume, but alternatives like CSV could do).
On the "destination" server create a "importer" PHP script that requests the exporter one via HTTP, parses the result, and uses that data to populate the table.
That's quite generic, but should get you started. Here are some considerations:
http://ie2.php.net/manual/en/function.http-request.php is your friend
If the table contains sensitive data, there are many techniques to enhance security (http_request won't give you https:// support directly, but you can encrypt the data you export and decrypt on importing: look for "SSL" on the PHP manual for further details).
You should consider adding some redundancy (or even full-fledged encryption) to prevent the data from being tampered with while it travels between the servers.
You may use GET parameters to add flexibility (for example, passing the table name as a parameter would allow you to have a single script for all tables you may ever need to transfer).
With large tables, PHP timeouts may work against you. The ideal solution would be efficient code plus custom timeouts for the export and import scripts, but I'm not even sure that's possible at all. A quite reliable workaround is to do the job in chunks (GET parameters come in handy here, to tell the exporter which chunk you need, and a special entry in the output can be enough to tell the importer how much is left to import); see the sketch below. Redirects help a lot with this approach (each redirect is a new request to the server, so timeouts get reset).
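A minimal sketch of a chunked exporter (table, column, and parameter names are hypothetical, and $link is assumed to be an open connection):
<?php
/* exporter.php?offset=0&limit=500 -- emits one chunk of the table as CSV.
   The last line tells the importer whether more chunks remain. */
$offset = (int) $_GET['offset'];
$limit  = (int) $_GET['limit'];
$res = mysql_query("SELECT * FROM apple LIMIT $offset, $limit", $link);
header('Content-Type: text/csv');
$out = fopen('php://output', 'w');
$count = 0;
while ($row = mysql_fetch_assoc($res)) {
    fputcsv($out, $row);
    $count++;
}
/* Special trailing entry: "more" if the chunk was full, "done" otherwise */
fputcsv($out, array($count === $limit ? 'more' : 'done'));
?>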
Maybe I'm missing something, but I hope there is enough there to let you get your hands dirty on the job and come back with any specific issue I might have failed to foresee.
Hope this helps.
EDIT:
Oops, I missed the detail that both DBs are on the same server. In that case, you can merge the import and export task into a single script. This means that:
You don't need a "transfer" format (such as XML or CSV): an in-memory representation within PHP is enough, since both tasks are now done within the same script.
The data doesn't ever leave your server, so the need for encryption is not so heavy. Just make sure no one else can run your script (via authentication or similar techniques) and you should be fine.
Timeouts are not so restrictive, since you don't waste a lot of time waiting for the response to travel from the source server to the destination server, but they still apply. Chunked processing, redirection, and GET parameters (passed within the Location header of the redirect) are still a good solution, but you can squeeze much closer to the timeout, since execution-time metrics are far more reliable than cross-server data-transfer metrics.
Here is a very rough sketch of what you may have to code:
$link_src = mysql_connect(/* source DB connection details */);
$link_dst = mysql_connect(/* destination DB connection details */);
/* You may want to TRUNCATE the destination table before going on, to prevent data repetition */
$values = array();
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
    /* Quote and escape every value so the generated SQL stays valid */
    $values[] = sprintf("('%s', '%s', '%s')",
        mysql_real_escape_string($row['field1_name'], $link_dst),
        mysql_real_escape_string($row['field2_name'], $link_dst),
        mysql_real_escape_string($row['field3_name'], $link_dst));
}
mysql_free_result($res);
/* implode() takes care of the separating commas */
$q = "INSERT INTO `table_name_here` (column_list_here) VALUES " . implode(', ', $values);
mysql_query($q, $link_dst);
You'll have to add the chunking logic in there (it is too case- and setup-specific), and probably output some confirmation message (maybe a DESCRIBE and a COUNT of both source and destination tables, and a comparison between them?), but that's quite the core of the job.
As an alternative, you may run a separate INSERT per row (invoking the query within the loop), but I'm confident a single query would be faster (however, if PHP's memory limit is too small, this alternative lets you get rid of the memory-hungry $q).
Yet another edit:
From the documentation link posted by Roberto:
You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.
An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
If that's what's causing your issue (and by your question it seems very likely it is), then the approach of breaking the INSERT into one query per row will most probably solve it. In that case, the code sketch above becomes:
$link_src = mysql_connect(/* source DB connection details */);
$link_dst = mysql_connect(/* destination DB connection details */);
/* You may want to TRUNCATE the destination table before going on, to prevent data repetition */
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
    /* One small INSERT per row keeps each packet well under max_allowed_packet */
    $q = sprintf("INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUES ('%s', '%s', '%s')",
        mysql_real_escape_string($row['field1_name'], $link_dst),
        mysql_real_escape_string($row['field2_name'], $link_dst),
        mysql_real_escape_string($row['field3_name'], $link_dst));
    mysql_query($q, $link_dst);
}
mysql_free_result($res);
The issue may also be triggered by the sheer volume of the initial SELECT; in that case you should combine this with chunking (either with multiple SELECT+while() blocks using SQL's LIMIT clause, or via redirections), thus breaking the SELECT down into multiple, smaller queries; a sketch of LIMIT-based chunking follows. (Note that redirection-based chunking is only needed if you have timeout issues, or if your execution time gets close enough to the timeout to threaten problems as the table grows. It may be a good idea to implement it anyway, so that even if your table grows to an obscene size the script will still work unchanged.)
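A minimal sketch of LIMIT-based chunking for the SELECT side (the chunk size is arbitrary; the per-row INSERT is as in the sketch above):
$chunk = 1000;
$offset = 0;
do {
    $res = mysql_query("SELECT * FROM `table_name_here` LIMIT $offset, $chunk", $link_src);
    $rows = 0;
    while ($row = mysql_fetch_assoc($res)) {
        /* ... build and run the per-row INSERT against $link_dst, as above ... */
        $rows++;
    }
    mysql_free_result($res);
    $offset += $chunk;
} while ($rows === $chunk); /* a short chunk means we've reached the end */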
After struggling with this for a while, I came across BigDump. It worked like a charm! I was able to copy LARGE databases without a glitch.
The most common causes of the "MySQL server has gone away" error in MySQL 5.0 are documented here:
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
You might want to have a look at it and use it as a checklist to see if you're doing something wrong.

How do odbc (or mysql) resources work in php?

When you run a query like so:
$query = "SELECT * FROM table";
$result = odbc_exec($dbh, $query);
while ($row = odbc_fetch_array($result)) {
    print_r($row);
}
Does the resource stored in $result point to data that exists on the server running PHP? Or does it point to data in the database? Put another way: as the while loop does its thing, is PHP talking to the DB on every iteration, or is it pulling each $row from some source on the application side?
Where this matters to me: I have a database I'm talking to over a VPN using ODBC with PHP. Last weekend something strange started happening: huge pauses during the while loop. Between iterations, the script will stop executing for seconds, and up to minutes, and it seems to be completely random where this happens. I'm wondering whether I have to talk to the server over the VPN on each iteration and the connection is flaky, or whether something has gone wrong with my ODBC driver (FreeTDS).
mysql_query and odbc_exec both return a resource which (to quote php.net) "is a special variable, holding a reference to an external resource." This suggests the server is talking with the database server on every iteration, though I am not sure.
However, there are 2 connections we are talking about here. The first being your connection with the PHP server, and the second one being the connection between the PHP server and the database server. If both servers have a fast connection, the strange behaviour you are experiencing might not have anything to do with your VPN.
The resource identifies the internal data structure used by PHP for interacting with the external resource.
In the case of the resource returned by mysql_query(), this data structure will include the rows returned by the query (mysql_query() won't return until all the data has been received or the connection fails). However, this behaviour is specific to MySQL: there is no requirement that the DBMS return the data before it is explicitly requested by the client.
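To illustrate the buffered/unbuffered distinction in PHP's MySQL extension (this is MySQL-specific; ODBC drivers may behave differently):
/* Buffered (default): mysql_query() blocks until the whole result set
   has been transferred to PHP; the fetch loop then reads local memory. */
$res = mysql_query("SELECT * FROM big_table", $con);

/* Unbuffered: rows are pulled from the server as you fetch them,
   so each iteration may wait on the network. */
$res = mysql_unbuffered_query("SELECT * FROM big_table", $con);
while ($row = mysql_fetch_assoc($res)) {
    /* with the unbuffered variant, a slow link stalls here */
}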
If some strange problem is causing lots of latency in your setup, then the only obvious solution would be to compile the results of the query on the database side and then deliver them to your PHP code, either batched or as a whole (think web service).
C.
