I have been creating a PHP application that makes quite a few queries to the database, roughly around 30 or so per page load. This is needed due to the nature of the application. I am using OOP PHP techniques and optimising my queries as much as I can. Should I be using some sort of caching system, or would you say 30 is fine? Here is a typical query.
Ok so my __construct looks like this:
public function __construct($host = 'localhost', $user = 'root', $pass = 'root', $name = 'advert')
{
    $this->_conn = new mysqli($host, $user, $pass, $name);
    if ($this->_conn->connect_error) {
        trigger_error('Unable to connect to the server, please check your credentials.', E_USER_ERROR);
    }
}
And one method like so.
$sql = "SELECT `advert_id`,
`ad_title`,
`ad_image` FROM adverts WHERE UNIX_TIMESTAMP() < `ad_expires` AND `ad_show` = 0 AND `ad_enabled` = 1 ORDER BY `ad_id` DESC LIMIT 1";
$stmt = $this->_conn->prepare($sql);
if ($stmt) {
$stmt->execute();
$stmt->bind_result($ad_id, $ad_title, $ad_image);
$rows = array();
while ($row = $stmt->fetch()) {
$item = array(
'ad_id' => $ad_id,
'ad_title' => $ad_title,
'ad_image' => $ad_image
);
$rows[] = $item;
}
The app is kinda like this throughout.
Thanks, any feedback will be much appreciated.
EDIT: Sorry, I meant to say 30 queries, not 30 connections.
You should use caching when it will be useful. If page generation takes 3 seconds without query caching and 0.03 seconds with it, then obviously you should use caching. If caching doesn't give any noticeable boost, don't spend resources on it.
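As a rough way to get those numbers, you can time the page yourself; a minimal sketch (where you send the measurement, a log file or a debug bar, is up to you):
$start = microtime(true);
// ... run the ~30 queries and render the page ...
$elapsed = microtime(true) - $start;
error_log(sprintf('Page generated in %.3f s', $elapsed));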
Just make one connection and re-use it. 30 connections is a lot considering that you might have multiple users.
Edit: the initial question said connections. 30 queries is fine unless the data doesn't change very often; in that case you can first run a cheap query to check whether you need to pull fresh data or whether the cached data is fine to serve to the user.
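If you do add caching, a minimal sketch with APCu applied to the advert query above could look like this. It assumes the APCu extension is available; the function name, cache key and 60-second TTL are arbitrary choices, not anything from the question:
function getActiveAdvert(mysqli $conn)
{
    $cacheKey = 'active_advert';
    $cached = apcu_fetch($cacheKey, $hit);
    if ($hit) {
        return $cached; // serve the cached row and skip the query entirely
    }

    $stmt = $conn->prepare(
        "SELECT `ad_id`, `ad_title`, `ad_image`
           FROM adverts
          WHERE UNIX_TIMESTAMP() < `ad_expires`
            AND `ad_show` = 0 AND `ad_enabled` = 1
          ORDER BY `ad_id` DESC LIMIT 1"
    );
    $stmt->execute();
    $stmt->bind_result($ad_id, $ad_title, $ad_image);
    $row = $stmt->fetch()
        ? array('ad_id' => $ad_id, 'ad_title' => $ad_title, 'ad_image' => $ad_image)
        : null;
    $stmt->close();

    apcu_store($cacheKey, $row, 60); // refresh at most once a minute
    return $row;
}
Memcached or Redis work the same way if the cache has to be shared between several web servers.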
Related
I'm doing some testing with Amp and trying to see how it could help speed up SQL queries by running them asynchronously. I think I'm doing something wrong, because the results of this test file are very disappointing and not what I would have expected. Is there something I'm doing wrong?
The code below gives me results like this; the first number is Amp\Mysql, and it is a lot slower for some reason:
0.37159991264343
0.10906314849854
PHP code:
<?php
require 'vendor/autoload.php';
require 'Timer.php';
$runThisManyTimes = 1000;
///////////////////////////////////////////////////////////
use Amp\Mysql\ConnectionConfig;
use Amp\Loop;
Loop::run(function () use ($runThisManyTimes) {
    $timer = Timer::start();

    $config = ConnectionConfig::fromString(
        "host=127.0.0.1 user=test password=test db=test"
    );

    /** @var \Amp\Mysql\Pool $pool */
    $pool = Amp\Mysql\pool($config);

    /** @var \Amp\Mysql\Statement $statement */
    $statement = yield $pool->prepare("SELECT * FROM accounts WHERE id = :id");

    for ($i = 1; $i <= $runThisManyTimes; $i++) {
        /** @var \Amp\Mysql\ResultSet $result */
        $result = yield $statement->execute(['id' => '206e5903-98bd-4af5-8fb1-86a520e9a330']);
        while (yield $result->advance()) {
            $row = $result->getCurrent();
        }
    }

    $timer->stop();
    echo $timer->getSeconds();

    Loop::stop();
});
echo PHP_EOL;
///////////////////////////////////////////////////////////
$timer = Timer::start();
$pdo = new PDO('mysql:host=127.0.0.1;dbname=test', 'test', 'test');
$statement = $pdo->prepare("SELECT * FROM accounts WHERE id = :id");
for ($i = 1; $i <= $runThisManyTimes; $i++) {
    $statement->execute(['id' => '206e5903-98bd-4af5-8fb1-86a520e9a330']);
    $statement->fetch();
}
$timer->stop();
echo $timer->getSeconds();
Parallel execution of MySQL is not productive when each thread takes less than, say, 1 second.
Each thread must use its own connection; establishing the connection takes some time.
Your particular benchmark (like most benchmarks) is not very useful. After the first execution of that single SELECT, all subsequent executions will probably take less than 1ms. It would be better to use a sequence of statements that reflect your app.
Your benchmark doesn't include any concurrency, so it's basically like blocking I/O in the PDO example. amphp/mysql is a full protocol implementation in PHP, so it's somewhat expected to be slower than the C implementation of PDO.
If you want to find out whether non-blocking concurrent I/O has benefits for your application and you're currently using sequential blocking PDO queries, you should benchmark those against non-blocking concurrent queries using amphp/mysql instead of serial ones.
Additionally, amphp/mysql might not be optimized as much as the database drivers behind PDO, but it allows for non-blocking concurrent queries, which isn't supported by PDO. If you do sequential queries, PDO will definitely have better performance for the time being, but amphp/mysql is very useful once concurrency is involved.
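For illustration, here's a rough sketch of what "once concurrency is involved" can look like with the same Amp v2 API used in the test above. The table names (accounts, orders) and the helper closure are made up for this example; treat it as an assumption-laden sketch, not a drop-in benchmark:
<?php
require 'vendor/autoload.php';

use Amp\Loop;
use Amp\Mysql\ConnectionConfig;
use function Amp\call;
use function Amp\Promise\all;

Loop::run(function () {
    $config = ConnectionConfig::fromString("host=127.0.0.1 user=test password=test db=test");
    $pool = Amp\Mysql\pool($config);

    // Helper coroutine: run one COUNT(*) query and return the number.
    $countRows = function (string $table) use ($pool) {
        return call(function () use ($pool, $table) {
            $result = yield $pool->query("SELECT COUNT(*) AS c FROM {$table}");
            yield $result->advance();
            $row = $result->getCurrent(); // associative array by default in amphp/mysql v2
            return $row['c'];
        });
    };

    // Both queries are in flight at the same time; total wall time is roughly
    // the slower of the two, not their sum. Sequential PDO has no equivalent for this.
    list($accounts, $orders) = yield all([$countRows('accounts'), $countRows('orders')]);

    echo "accounts: {$accounts}, orders: {$orders}", PHP_EOL;

    Loop::stop();
});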
I'm trying out the performance of a system I'm building, and it's really slow, and I don't know why or if it should be this slow. What I'm testing is how many single INSERTs I can do to the database, and I get around 22 per second. That sounds really slow, and when I tried to do the inserts in a single big SQL query I could insert 30000 records in about 0.5 seconds. In real life the inserts are made by different users in the system, so the overhead of connecting, sending the query, parsing the query etc. will always be there. What I have tried so far:
mysqli with as little code as possible = 22 INSERTs per second
PDO with as little code as possible = 22 INSERTs per second
Changing the connection host from localhost to 127.0.0.1 = 22 INSERTs per second
mysqli without a statement object or SQL-injection check = 22 INSERTs per second
So something seems to be wrong here.
System specs:
Intel i5
16 GB RAM
7200 rpm disk drive
Software:
Windows 10
XAMPP (fairly new) with MariaDB
DB engine: InnoDB
The code I used to do the tests:
$amountToInsert = 1000;
//$fakeData is an array with randomly generated emails
$fakeData = getFakeData($amountToInsert);
$db = new DatabaseHandler();
for ($i = 0; $i < $amountToInsert; $i++) {
    $db->insertUser($fakeData[$i]);
}
$db->closeConnection();
The class that calls the database:
class DatabaseHandler {
private $DBHOST = 'localhost';
private $DBUSERNAME = 'username';
private $DBPASSWORD = 'password';
private $DBNAME = 'dbname';
private $DBPORT = 3306;
private $mDb;
private $isConnected = false;
public function __construct() {
$this->mDb = new mysqli($this->DBHOST, $this->DBUSERNAME
, $this->DBPASSWORD, $this->DBNAME
, $this->DBPORT);
$this->isConnected = true;
}
public function closeConnection() {
if ($this->isConnected) {
$threadId = $this->mDb->thread_id;
$this->mDb->kill($threadId);
$this->mDb->close();
$this->isConnected = false;
}
}
public function insertUser($user) {
$this->mDb->autocommit(true);
$queryString = 'INSERT INTO `users`(`email`, `company_id`) '
.'VALUES (?, 1)';
$stmt = $this->mDb->prepare($queryString);
$stmt->bind_param('s', $user);
if ($stmt->execute()) {
$stmt->close();
return 1;
} else {
$stmt->close();
return 0;
}
}
}
The "user" table has 4 columns with the following structure:
id INT unsigned primary key
email VARCHAR(60)
company_id INT unsigned INDEX
guid TEXT
I'm at a loss here and don't really know where to look next. Any help in the right direction would be very much appreciated.
As explained in the comments, InnoDB is to blame. By default this engine is too cautious: it doesn't make use of the disk cache and makes sure the data has actually been written to disk before returning a success message. So you basically have two options.
Most of the time you just don't care about a confirmed write, so you can configure MySQL by setting this option to zero:
innodb_flush_log_at_trx_commit = 0
As long as it's set this way, your InnoDB writes will be almost as fast as MyISAM's.
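For reference, that setting normally lives in the server config file (my.ini / my.cnf, under the [mysqld] section; XAMPP keeps one under its mysql directory), and it can also be changed at runtime by a privileged account:
[mysqld]
innodb_flush_log_at_trx_commit = 0
-- or, without restarting the server:
SET GLOBAL innodb_flush_log_at_trx_commit = 0;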
The other option is wrapping all your writes in a single transaction. As it requires only a single confirmation for all the writes, it will be reasonably fast too.
Of course, it's only sane to prepare your query once and reuse it for multiple inserts, but the speed gain from that is negligible compared to the issue above, so it counts neither as an explanation of nor a remedy for such an issue.
Your test isn't a very good way of judging performance. Why? Because you are preparing the statement 1000 times. That's not how prepared statements are supposed to be used: the statement is prepared once and different parameters are bound to it multiple times. Try this:
public function __construct() {
$this->mDb = new mysqli($this->DBHOST, $this->DBUSERNAME
, $this->DBPASSWORD, $this->DBNAME
, $this->DBPORT);
$this->isConnected = true;
$queryString = 'INSERT INTO `users`(`email`, `company_id`) '
.'VALUES (?, 1)';
$this->stmt_insert = $this->mDb->prepare($queryString);
}
and
public function insertUser($user) {
$this->stmt_insert->bind_param('s', $user);
if ($this->stmt_insert->execute()) {
return 1;
} else {
return 0;
}
}
And you will see a huge boost in performance. So to recap, there's nothing wrong with your system; it's just the test that was bad.
Update:
Your Common Sense has a point about preparing in advance and reusing the prepared statement not giving a big boost; I tested and found it to be about 5-10%.
However, there is something that does give a big boost: turning autocommit off! Inserting 100 records, which previously took about 4.7 seconds on average, dropped to under 0.5 seconds on average!
$con->autocommit(false);
// ... run the INSERT loop here ...
$con->commit();
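Putting the two together (statement prepared once in the constructor as above, autocommit switched off around the whole batch), the class from the question could gain a couple of helper methods. beginBatch() / endBatch() are made-up names for this sketch, not existing mysqli or class methods, and it assumes the revised insertUser() above, which no longer calls autocommit(true) on every call:
public function beginBatch() {
    $this->mDb->autocommit(false);   // stop flushing the InnoDB log on every single INSERT
}

public function endBatch() {
    $this->mDb->commit();            // one flush for the whole batch
    $this->mDb->autocommit(true);
}
and the test loop becomes:
$db->beginBatch();
for ($i = 0; $i < $amountToInsert; $i++) {
    $db->insertUser($fakeData[$i]);
}
$db->endBatch();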
I have a database and a string, each holding about 100,000 key/value-pair records, and I want to create a function that looks up a value by its key.
Is it better to use a string or a database, considering performance (page load time) and crash safety? Here are my two code examples:
1.
echo find("Mohammad");
function find($value){
$sql = mysql_query("SELECT * FROM `table` WHERE `name`='$value' LIMIT 1");
$count = mysql_num_rows($sql);
if($count > 0){
$row = mysql_fetch_array($sql);
return $row["family"];
} else {
return 'NULL';
}
}
2.
$string = "Ali:Golam,Mohammad:Offer,Reza:Now,Saber:Yes";
echo find($string,"Mohammad");
function find($string,$value){
$array = explode(",",$string);
foreach($array as $into) {
$new = explode(":",$into);
if($new[0] == $value) {
return $new[1];
break;
}
}
}
The database is almost certainly a good idea.
Databases are fast. Maybe they are not as fast as basic string operations in PHP, but if there is a lot of data, databases will probably be faster. A basic SELECT query takes (on my current default desktop hardware) about 15 ms, less than 1 ms when cached, and this is pretty much independent of the number of names in your table if the indexes are correct. So your site will always be fast.
Databases won't cause a stack overflow or an out-of-memory error and crash your site (whether the string approach does depends heavily on your PHP settings and hardware).
Databases are more flexible. Imagine you want to add, remove or edit names after creating the first data set. It's very simple to modify the data with INSERT INTO, DELETE FROM and UPDATE, and much better than editing a string with about 100,000 entries somewhere in your code.
Your Code
You definitely need to use MySQLi or PDO instead; the code could look like this:
$mysqli = new mysqli("host", "username", "password", "dbname");
$stmt = $mysqli->prepare(
'SELECT string FROM table WHERE name = ? LIMIT 1'
);
$stmt->bind_param("s", $value);
$stmt->execute();
$stmt->store_result();
$stmt->bind_result($string);
$stmt->fetch();
$stmt->close();
This uses MySQLi and prepared statements (for security) instead of the old MySQL extension, which is deprecated as of PHP 5.5.
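For parity with the find() helper in the question, the same prepared statement can be wrapped in a function. A sketch (the column names follow the first snippet in the question, i.e. name / family):
function find(mysqli $mysqli, $name) {
    $stmt = $mysqli->prepare('SELECT `family` FROM `table` WHERE `name` = ? LIMIT 1');
    $stmt->bind_param('s', $name);
    $stmt->execute();
    $stmt->bind_result($family);
    $found = $stmt->fetch();
    $stmt->close();
    return $found ? $family : null;
}

echo find($mysqli, 'Mohammad');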
So I've finally decided to update my PHP for the year 2012 by learning to use PHP's PDO. So far everything is going great; however, I don't know if the way I'm going about it is really the best way of doing it.
In this example I am querying my database to display posts users have made, and then displaying 2 comments for each post. So basically what I'm doing is grabbing my posts, looping over them, and then in that loop querying the database for the top two comments for every post. However, before I run around and start using this all year, I figured I'd see if there is a cleaner method of doing it.
So if anyone could spare a moment to look over this small block of code and let me know if there is a cleaner and perhaps more efficient way of doing this I'd really appreciate it. Feel free to nitpick!
<?php
$hostname = 'localhost';
$username = 'root';
$password = 'root';
$database = 'database';
try {
$dbh = new PDO("mysql:host=$hostname;dbname=$database", $username, $password);
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
//Get Posts
$stmt = $dbh->prepare("SELECT * FROM posts");
$stmt->execute();
$result = $stmt->fetchAll();
}
catch(PDOException $e)
{
echo $e->getMessage();
}
//Loop through each post
foreach($result as $row) {
echo $row['post'];
//Get comments for this post
$pid = $row['id'];
$stmt = $dbh->prepare("SELECT * FROM comments WHERE pid = :pid LIMIT 2");
$stmt->bindParam(':pid', $pid, PDO::PARAM_STR);
$stmt->execute();
$c_result = $stmt->fetchAll();
//Loop through comments
foreach($c_result as $com) {
echo $com['comment'];
}
}
//Close connection
$dbh = null;
?>
Well, it is actually 2 questions.
.1. For the code you are using: it is quite ugly. Using raw API functions always makes your code ugly, boring and repetitive. To run just one query took you FIVE lines!
Don't you think that just one line would be better? A line consisting of only meaningful operators?
$comments = $db->getAll("SELECT * FROM comments WHERE pid = :pid LIMIT 2",$row['id']);
.2. For the algorithm - it is quite okay.
Assuming you are not going to loop over your whole database but merely request 10-20 posts per page, the additional 10-20 primary-key-based lookups won't slow your application down much.
.3. Bonus track.
The thing you really may want to consider is business logic / presentation logic separation. Why not get all your data first and only then start the output? It will make your code way cleaner.
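To make that last point concrete, here's a rough sketch that fetches everything first (the posts in one query, their comments in a second query using IN) and only then echoes. $dbh is the connection from the question; the two-comments-per-post limit is applied in PHP here just to keep the SQL simple:
// 1. Fetch all the data up front.
$posts = $dbh->query("SELECT id, post FROM posts LIMIT 20")->fetchAll(PDO::FETCH_ASSOC);

$commentsByPost = array();
if ($posts) {
    $ids = array();
    foreach ($posts as $p) {
        $ids[] = $p['id'];
    }
    $placeholders = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $dbh->prepare("SELECT pid, comment FROM comments WHERE pid IN ($placeholders)");
    $stmt->execute($ids);
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $c) {
        $commentsByPost[$c['pid']][] = $c['comment'];
    }
}

// 2. Only now produce the output.
foreach ($posts as $row) {
    echo $row['post'];
    $comments = isset($commentsByPost[$row['id']]) ? $commentsByPost[$row['id']] : array();
    foreach (array_slice($comments, 0, 2) as $comment) {
        echo $comment;
    }
}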
I'm having some serious problems with the PHP Data Object functions. I'm trying to loop through a sizeable result set (~60k rows, ~1gig) using a buffered query to avoid fetching the whole set.
No matter what I do, the script just hangs on the PDO::query() - it seems the query is running unbuffered (why else would the change in result set size 'fix' the issue?). Here is my code to reproduce the problem:
<?php
$Database = new PDO(
'mysql:host=localhost;port=3306;dbname=mydatabase',
'root',
'',
array(
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true
)
);
$rQuery = $Database->query('SELECT id FROM mytable');
// This is never reached because the result set is too large
echo 'Made it through.';
foreach($rQuery as $aRow) {
print_r($aRow);
}
?>
If I limit the query with some reasonable number, it works fine:
$rQuery = $Database->query('SELECT id FROM mytable LIMIT 10');
I have tried playing with PDO::MYSQL_ATTR_MAX_BUFFER_SIZE and using the PDO::prepare() and PDO::execute() as well (though there are no parameters in the above query), both to no avail. Any help would be appreciated.
If I understand this right, buffered queries involve telling PHP that you want to wait for the entire result set before you begin processing. Prior to PDO, this was the default and you had to call mysql_unbuffered_query if you wanted to deal with results immediately.
Why this isn't explained on the PDO MySQL driver page, I don't know.
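For what it's worth, the streaming behaviour the question seems to be after is what you get with that attribute set to false, not true. A rough sketch, assuming the stock pdo_mysql driver:
$Database->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

$rQuery = $Database->query('SELECT id FROM mytable');
while ($aRow = $rQuery->fetch(PDO::FETCH_ASSOC)) {
    // rows arrive from the server as you iterate, so the ~60k-row set
    // never has to sit in PHP's memory all at once
    print_r($aRow);
}
// caveat: with an unbuffered query you must finish iterating (or call
// closeCursor()) before issuing another query on the same connection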
You could try to split it up into chunks that aren't big enough to cause problems:
<?php
$id = 0;
do {
    $rQuery = $Database->query(
        'SELECT id FROM mytable ORDER BY id ASC LIMIT 100 OFFSET ' . $id
    );
    $rows = $rQuery->fetchAll();
    stuff($rows);
    $id += 100;
} while (count($rows) === 100); // a short (or empty) page means we've reached the end
?>
...you get the idea, anyway.
Or maybe you could try mysql functions instead:
while ($row = mysql_fetch_row($query)) {
...
}
That will definitely be faster, since iterating the statement with foreach gives the impression that it uses fetchAll() rather than fetching each row individually.