PHP/MySQL works way slower on Docker

I built a PHP app using MySQL on my local machine. I used the Wamp server.
The app should generate 60,000 traveling plans, then generate some additional info about them. That is about 500k SQL INSERT queries in total.
I made a PHP script that uses while loops to run INSERT statements over and over until the database is populated.
On my local machine the whole process takes about 20 minutes.
The final goal is to make a Docker container containing the app. When somebody pulls the repository from Docker Hub, it should generate data on startup.
I ran into a problem on startup: data generation took much longer; in fact, I never managed to complete the script even after it ran for several hours.
Opcache did help a lot, but it's still much slower than the expected 20 minutes.
With Opcache, it creates about 150k inserts in 30 minutes.
Are there any tips on how to make my script faster?
The container has the following configuration:
Dockerfile:
FROM php:8.1-fpm-alpine
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
# Install build dependencies and the OPcache extension
# (the base image is Alpine, so packages come from apk, not apt-get)
RUN apk add --no-cache $PHPIZE_DEPS \
    && docker-php-ext-install opcache \
    && apk del $PHPIZE_DEPS
# Copy the opcache.ini into your Docker image
COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini
# Run your application
CMD ["php-fpm"]
docker-compose.yml
version: '3.8'
services:
  php-apache-environment:
    container_name: php-apache
    build:
      context: ./php
      dockerfile: Dockerfile
    depends_on:
      - db
    volumes:
      - ./php/src:/var/www/html/
    ports:
      - 8000:80
  db:
    container_name: db
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 12345
      MYSQL_DATABASE: turisticka_agencija
      MYSQL_USER: root
      MYSQL_PASSWORD: 12345
    ports:
      - "9906:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - '8080:80'
    restart: always
    environment:
      PMA_HOST: db
    depends_on:
      - db
Example of the script (one of the functions):
function ponuda(){
    $conn = OpenCon();
    $conn->query("SET NAMES 'utf8'");
    ini_set('max_execution_time', '500');
    set_time_limit(500);
    for($i=1; $i<60001; $i++){
        $query="SELECT CURRENT_DATE + INTERVAL FLOOR(RAND() * 20) DAY AS pocetni";
        if ($result = mysqli_query($conn, $query)) {
            $row = mysqli_fetch_array($result);
            $pocetak=$row["pocetni"];
        }
        $conn->query("INSERT INTO ponuda (id_ponude, termin_polazak, termin_povratak, cena_putovanja, cena_prevoza, id_lokacije_polaska, id_prevoza) VALUES (".$i.", '".$pocetak."', '".$pocetak."' + INTERVAL FLOOR(1 + RAND()*(10 - 1 + 1)) DAY, ROUND(100 + RAND()*(3000 - 100 + 1), 2), FLOOR(100 + RAND()*(3000 - 100 + 1)), FLOOR(46 + RAND()*(48 - 46 + 1)), FLOOR(1 + RAND()*(5 - 1 + 1)));");
    }
    echo "Ponuda ".$conn->error;
    $conn->query("UPDATE ponuda SET ponuda.termin_polazak = ponuda.termin_polazak - INTERVAL 100 DAY, ponuda.termin_povratak = ponuda.termin_povratak - INTERVAL 100 DAY WHERE ponuda.id_ponude>24378 AND ponuda.id_ponude<34378;");
    CloseCon($conn);
    provodi();
}

Avoid unnecessary burden on the DB by creating your random numbers and dates in PHP
Your INSERT query can be a prepared statement, prepared once and executed multiple times
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$conn = OpenCon();
$conn->set_charset('utf8mb4');
ini_set('max_execution_time', '500');
set_time_limit(500);
$insert = <<<_SQL
INSERT INTO ponuda (
    id_ponude,
    termin_polazak,
    termin_povratak,
    cena_putovanja,
    cena_prevoza,
    id_lokacije_polaska,
    id_prevoza
) VALUES (?, ?, ?, ?, ?, ?, ?)
_SQL;
// prepare once
$stmt = $conn->prepare($insert);
// bind once (bind_param binds by reference, so updating the
// variables inside the loop is enough)
$stmt->bind_param(
    'issdiii',
    $i,
    $departureDate,
    $returnDate,
    $tripPrice,
    $transportCost,
    $locationId,
    $transportId
);
for ($i = 1; $i <= 60000; $i++) {
    $baseDate = strtotime(sprintf('+%d day', rand(0, 20)));
    $departureDate = date('Y-m-d', $baseDate);
    $returnDate = date('Y-m-d', strtotime(
        sprintf('+%d day', rand(1, 10)),
        $baseDate
    ));
    $tripPrice = round(100 + lcg_value() * 2900, 2);
    $transportCost = rand(100, 3000);
    $locationId = rand(46, 48);
    $transportId = rand(1, 5);
    // execute the statement
    $stmt->execute();
}
echo "Ponuda ".$conn->error;
$conn->query("UPDATE ponuda SET ponuda.termin_polazak = ponuda.termin_polazak - INTERVAL 100 DAY, ponuda.termin_povratak = ponuda.termin_povratak - INTERVAL 100 DAY WHERE ponuda.id_ponude>24378 AND ponuda.id_ponude<34378;");
CloseCon($conn);
provodi();
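On top of preparing the statement once, wrapping the inserts in explicit transactions usually gives the biggest speed-up for bulk loads: with autocommit on, every single INSERT forces its own disk flush, which is particularly slow on Docker volume filesystems. Here is a rough sketch of the same loop committed in batches; it assumes $conn and $stmt are the connection and prepared statement from above, and the batch size of 1000 is a guess worth tuning:

```php
<?php
// Sketch: commit every $batchSize rows instead of one flush per INSERT.
// Assumes $conn (mysqli) and $stmt (the prepared INSERT) already exist,
// as in the snippet above; 1000 is an arbitrary batch size to tune.
$batchSize = 1000;
$conn->begin_transaction();
for ($i = 1; $i <= 60000; $i++) {
    // ... compute $departureDate, $returnDate, prices, ids as above ...
    $stmt->execute();
    if ($i % $batchSize === 0) {
        $conn->commit();           // one disk flush per 1000 rows
        $conn->begin_transaction();
    }
}
$conn->commit();                   // flush the final partial batch
```

The same idea applies at the MySQL level: fewer, larger transactions mean fewer fsync calls, which is exactly where a containerized database loses the most time.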

PHP CLI multiple background processes limitation

Server Information:
CentOS 6.5
12GB RAM
Intel(R) Xeon(R) CPU E5-2430 @ 2.20GHz (6 CPUs)
PHP CLI 5.5.7
I am currently trying to use Perl to fire off 1000 PHP CLI processes in parallel. This, however, takes 9.9 seconds versus 2.3 seconds for the equivalent Perl script. When I test using the Perl script /opt/test.pl, all 1000 processes are launched in parallel (ps -eLf | grep -ic 'test.pl'). When I test using /opt/testphp.php and count with ps -eLf | grep -ic 'testphp.php', I see a count of 250, which rises to 580 and then drops to 0 (the script is executed 1000 times, just not in parallel).
Is there a limitation preventing a high number of PHP CLI processes from being launched in parallel?
Has anyone experienced this issue?
Please let me know if I have left out anything that would help to identify the issue.
Thanks
Perl launcher script:
use Time::HiRes qw/ time sleep /;
my $command = '';
my $start = time;
my $filename = '/tmp/report.txt';
# open(my $fh, '>', $filename) or die "Could not open file '$filename' $!";
for $i (1 .. 1000) {
    # $command = $command . "(perl /opt/test.pl &);";  # takes 2.3 seconds
    $command = $command . "(php -q /opt/testphp.php &);";  # takes 9.9 seconds
}
system($command);
my $end = time;
print 'Total time taken: ', ( $end - $start ) , "\n";
PHP file (testphp.php):
<?php
sleep(5);
$time = microtime(true);
file_put_contents('/tmp/report_20140804_php.log', "This is the record: $time\n", FILE_APPEND);
Perl file (test.pl):
#! /usr/bin/perl
use Time::HiRes qw/ time sleep /;
sleep(5);
my $command = '';
my $start = time;
my $filename = '/tmp/report_20140804.log';
open(my $fh, '>>', $filename) or die "Could not open file '$filename' $!";
print $fh "Successfully saved entry $start\n";
close $fh;

scientific calculation in PHP

I have a client that wants to turn this calculation into a function in PHP. I've been given test numbers with an expected answer, and my result is off by too much for it to be correct. I'm not sure whether the calculation is wrong or I'm just not seeing the issue.
function Density($m1, $m2, $m3, $pw, $psm){
    return $m1 / (($m2 - $m3) / $pw) - (($m2 - $m1) / $psm);
}
$Density = ( Density(746.2, 761.7, 394.6, (998.1*1000), (761.7-746.2)) / 1000000 );
output : 2.02882553228
answer : 2.127
I have also tried it like this, but the result is way too far off to be right:
function Density($m1, $m2, $m3, $pw, $psm){
    return $m1 / ((($m2 - $m3) / $pw) - (($m2 - $m1) / $psm));
}
output : -0.001
answer : 2.127
I know it should be within about 0.010 of the correct answer, but I don't see what I'm doing wrong. Please help, internet.
Remember BEDMAS: brackets, exponents, division, multiplication, addition, subtraction. Your code gets the order of operations wrong; it should be:
m1 / (((m2 - m3) / pw) - ((m2 - m1) / psm))
or, if that equation had been typeset properly:
         m1
 -------------------
 m2 - m3     m2 - m1
 -------  -  -------
   pw         psm
Your implementation differs from what is required.
You should ensure the operator-precedence rules are correctly observed:
return $m1 / ((($m2 - $m3) / $pw) - (($m2 - $m1) / $psm)) ;
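To see the effect of the grouping in isolation, here is a quick sketch of the corrected function checked against simple made-up numbers (these are illustrative values, not the client's test data):

```php
<?php
// Corrected grouping: the whole denominator is one bracketed expression.
function Density($m1, $m2, $m3, $pw, $psm) {
    return $m1 / ((($m2 - $m3) / $pw) - (($m2 - $m1) / $psm));
}

// Illustrative check: (6-2)/2 = 2, (6-8)/4 = -0.5,
// so the result is 8 / (2 - (-0.5)) = 8 / 2.5 = 3.2
echo Density(8.0, 6.0, 2.0, 2.0, 4.0); // 3.2
```

If the corrected grouping still doesn't reproduce the client's expected 2.127, the next thing to double-check is whether the arguments being passed in (especially psm) match the quantities the formula actually expects.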

'LIKE' operator in SQL query is very slow with pdo_sqlite

I found out that the 'LIKE' operator in a 'SELECT' SQL query is very slow with pdo_sqlite (PHP 5.3 or PHP 5.4).
The same query entered in the sqlite3 binary is way faster.
Sample code :
<?php
$bdd = new PDO('sqlite:./chaines_centre.db');
$reponse = $bdd->prepare("select DateMonteeAuPlan, Debut, Fin, Statut from ReportJobs where NomJob = ? and NomChaine like 'DCLC257__' order by DateMonteeAuPlan DESC limit 20;");
$reponse->execute(array($_GET['job']));
while ($donnees = $reponse->fetch())
{
// whatever...
}
$reponse->closeCursor();
?>
Here is the quick "benchmark" I made, using:
XDebug Trace for pdo_sqlite measure
SQLite binary with '.timer on'
NomChaine like 'DCLC257__' :
● pdo_sqlite : 1.4521s ✘
● sqlite3 binary : 0.084s ✔
NomChaine like 'DCLC257%' :
● pdo_sqlite : 1.4881s ✘
● sqlite3 binary : 0.086s ✔
NomChaine = 'DCLC25736' :
● pdo_sqlite : 0.002s ✔ (probably a bit longer in practice, but very fast)
● sqlite3 binary : 0.054s ✔
How can I improve this situation ?
EDIT : Maybe I focused too much on the 'LIKE' operator.
<?php
$bdd = new PDO('sqlite:./chaines_centre.db');
$time_start = microtime(true);
$reponse = $bdd->query("select DateMonteeAuPlan, Debut, Fin, Statut from ReportJobs where NomJob = 'NSAVBASE' and NomChaine like 'DCLC257%' order by DateMonteeAuPlan DESC limit 20;");
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Situation 1 : $time second(s)<br><br>";
// Output : 1.3900790214539 second(s)
$time_start = microtime(true);
$reponse = $bdd->query("select DateMonteeAuPlan, Debut, Fin, Statut from ReportJobs where NomJob = 'NSAVBASE' and NomChaine like 'DCLC257%' limit 20;");
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Situation 2 : $time second(s)<br><br>";
// Output : 0.0030009746551514 seconde(s)
$time_start = microtime(true);
$reponse = $bdd->query("select DateMonteeAuPlan, Debut, Fin, Statut from ReportJobs where NomJob = 'NSAVBASE' and NomChaine = 'DCLC25736' order by DateMonteeAuPlan DESC limit 20;");
$time_end = microtime(true);
$time = $time_end - $time_start;
echo "Situation 3 : $time second(s)<br><br>";
// Output : 0 seconde(s)
?>
By removing the LIKE operator or the ORDER BY DateMonteeAuPlan clause, the query executes in the expected time...
It's so strange. o_O
Did you by any chance run the PDO and binary tests in the same script, one after the other? If so, it would be normal to get better results with the binary, because PDO runs while the cache is empty (so it hits the disk), while the binary gets the data from the OS cache in RAM.
For your second script, that's certainly the case: the first query takes 1.3+ seconds because it also reads the data from disk, while the rest get the data from RAM.
See http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html#pragma-cache_size for details.
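Beyond caching, it's worth knowing that SQLite can only use an index for a prefix pattern like 'DCLC257%' when its LIKE optimization applies, e.g. when the column has a NOCASE collation (or case_sensitive_like is turned on). A minimal sketch on an in-memory database; the table and column names are borrowed from the question, but the schema is assumed:

```php
<?php
// Sketch (assumed schema): with a NOCASE-collated, indexed column,
// SQLite's LIKE optimization can serve prefix patterns from the index
// instead of scanning the whole table.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec("CREATE TABLE ReportJobs (NomJob TEXT, NomChaine TEXT COLLATE NOCASE)");
$db->exec("CREATE INDEX idx_chaine ON ReportJobs (NomChaine)");
$db->exec("INSERT INTO ReportJobs VALUES ('NSAVBASE', 'DCLC25736'), ('NSAVBASE', 'XYZ')");

// Prefix LIKE on the NOCASE column: eligible for the index.
$stmt = $db->query("SELECT COUNT(*) FROM ReportJobs WHERE NomChaine LIKE 'DCLC257%'");
echo $stmt->fetchColumn(); // 1
```

On a table of any real size, running EXPLAIN QUERY PLAN on the query is the quickest way to confirm whether the index is actually being used.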

Selecting a specific column from a table using PHP and Postgres

I have a table of 5000+ rows and 8+ columns, like:
Station Lat Long Date Rainfall Temp Humidity Windspeed
Abcd - - 09/09/1996 - - - -
Abcd - - 10/09/1996 - - - -
Abcd - - 11/09/1996 - - - -
Abcd - - 12/09/1996 - - - -
Efgh - - 09/09/1996 - - - -
Efgh - - 10/09/1996 - - - -
Efgh - - 11/09/1996 - - - -
Efgh - - 12/09/1996 - - - -
I am developing a web application in which the user will select a column (rainfall/temp/humidity) and a particular date.
Can anyone guide me on how to query for this in PHP with Postgres? (database: postgres, table: weatherdata, user: user, password: password)
Thanks in advance.
You can use some code like this:
public function getData($date, $columnsToShow = null) {
    /* You could check the parameters here:
     * $date is a non-empty string
     * $columnsToShow is an array or null.
     */
    if (isset($columnsToShow))
        $columnsToShow = implode(',', $columnsToShow);
    else
        $columnsToShow = "*";
    // Pass $date as a bound parameter ($1) to avoid SQL injection.
    $query = "select {$columnsToShow}
              from weatherdata
              where date = \$1";
    $result = array();
    $conex = pg_connect("host=yourHost user=yourUser password=yourPassword dbname=yourDatabase");
    if (is_resource($conex)) {
        $rows = pg_query_params($conex, $query, array($date));
        if ($rows) {
            // PGSQL_ASSOC is a constant, not a string
            while ($data = pg_fetch_array($rows, null, PGSQL_ASSOC))
                $result[] = $data;
        }
    }
    return (empty($result) ? null : $result);
}
You can then invoke it, for example, like this:
getData('2012-03-21', array('Station', 'Rainfall'));
I hope this helps.

Does every single call to mysql_real_escape_string require another trip to the database?

http://php.net/manual/en/function.mysql-real-escape-string.php:
mysql_real_escape_string() calls MySQL's library function
mysql_real_escape_string, which prepends backslashes to the following
characters: \x00, \n, \r, \, ', " and \x1a.
OK, so basically, if I ever do something like this:
mysql_query("insert T(C)select'".mysql_real_escape_string($value)."'")
am I making one trip to the database for the mysql_real_escape_string call and another for mysql_query, i.e. two trips in total?
The fact that it uses the MySQL library does not mean it does a round trip to the server.
It runs code from the MySQL client library, loaded into the same process as your PHP interpreter. You do need a connection, though: that function needs to know some server settings (such as the connection character set) to escape properly. But those settings are cached in the connection information on the PHP side.
If you want to verify this (and you're on linux), write a simple script like:
<?php
$link = mysql_connect('localhost', 'user', 'pass');
echo "Connection done\n";
echo mysql_real_escape_string("this ' is a test");
?>
And run it through strace:
$ strace php t.php
.... # here comes the connection to mysql, socket fd == 3
connect(3, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, 110) = 0
fcntl(3, F_SETFL, O_RDWR) = 0
setsockopt(3, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
.... # talking with mysql here
poll([{fd=3, events=POLLIN}], 1, 60000) = 1 ([{fd=3, revents=POLLIN}])
read(3, "8\0\0\0\n5.1.58-log\0\3\0\0\0K-?4'fL+\0\377\367!"..., 16384) = 60
...
read(3, "\7\0\0\2\0\0\0\2\0\0\0", 16384) = 11
# first php echo
write(1, "Connection done\n", 16Connection done ) = 16
# second php echo
write(1, "this \\' is a test", 17this \' is a test) = 17
munmap(0x7f62e187a000, 528384) = 0
....
The important thing here is that the two writes caused by the echo statements have no other syscall in between, and no network communication is possible without a syscall (from userspace on Linux, anyway).
