phpMyAdmin executes queries faster than a normal PHP script

I have an SQL query:
SELECT country_code FROM GeoIP WHERE 3111478102>=ip_from AND 3111478102<=ip_to;
I tried to execute the same query in both phpMyAdmin and a normal PHP script, and here are the results:
(Screenshots of the timings in phpMyAdmin and in the PHP script.)
As you can see, the query took only 0.8s to fully execute in phpMyAdmin, whereas it took 4.5s in the normal PHP script!
The PHP script code:
<?php
// ob_start();
// error_reporting(0);
error_reporting(E_ALL);
ini_set('display_errors', 'On');
header_remove("X-Powered-By");
ini_set('session.gc_maxlifetime', 13824000);
session_set_cookie_params(13824000);
session_start();
include_once './config.php';
// include_once './functions/mainClass.php';

$beforeMicro = microtime(true);

$res = $conn->query("SELECT country_code FROM GeoIP WHERE 3111478102>=ip_from AND 3111478102<=ip_to");
if ($res->rowCount() > 0) {
    $sen = $res->fetch(PDO::FETCH_OBJ);
    $country_code = $sen->country_code;
    print_r([$country_code]);
}

$afterMicro = microtime(true);
echo 'time:' . round(($afterMicro - $beforeMicro) * 1000);
?>
For context, I have just set up this web server myself, and I am using the following:
CentOS 8
Apache 2.4.46
PHP 7.2.24
Please note: the table I'm searching in (GeoIP) contains over 39 million records. But I don't think that is the problem, because the same query runs faster on the same server when issued through phpMyAdmin. I also tried uploading the same database and the PHP script to a shared hosting account (not my own server), and there the PHP script executed the query in just 1.6s.

This might be a helpful thread for your question:
Why would phpmyadmin be significantly faster than the mysql command line?
Front-end tools like phpMyAdmin often tack on a LIMIT clause in order to paginate results and avoid crashing your browser or app on large tables. A query that might return millions of records, and in doing so take a lot of time, will run faster if it is more constrained.
It's not really fair to compare a limited query against a complete one; the retrieval time is going to be significantly different. Check that both tools are fetching all the records.
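One quick way to check that in the script from the question is to time the query with and without an explicit LIMIT (a minimal sketch reusing the $conn PDO handle from config.php; the LIMIT 1 is only an assumption about what phpMyAdmin effectively adds, since the script fetches a single row anyway):
<?php
// Sketch only: compare the limited and unlimited versions of the query
// with the same PDO handle ($conn) the original script uses.
include_once './config.php';

$queries = array(
    'limited'   => "SELECT country_code FROM GeoIP WHERE 3111478102>=ip_from AND 3111478102<=ip_to LIMIT 1",
    'unlimited' => "SELECT country_code FROM GeoIP WHERE 3111478102>=ip_from AND 3111478102<=ip_to",
);

foreach ($queries as $label => $sql) {
    $t0   = microtime(true);
    $rows = $conn->query($sql)->fetchAll(PDO::FETCH_OBJ);
    printf("%s: %d row(s) in %.1f ms\n", $label, count($rows), (microtime(true) - $t0) * 1000);
}
?>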

Related

2006 MySQL server has gone away while saving object

I'm getting the error "General error: 2006 MySQL server has gone away" when saving an object.
I'm not going to paste the real code since it is way too complicated, and I can explain it with this example, but first a bit of context:
I'm executing a function from the command line using Phalcon tasks. The task creates an object from a Model class, and that object calls a casperjs script that performs some actions on a web page; when it finishes, it saves some data. That is where I sometimes get "MySQL server has gone away", but only when casperjs takes a bit longer.
Task.php
function doSomeAction() {
    $object = Class::findFirstByName("test");
    $object->performActionOnWebPage();
}
In Class.php
function performActionOnWebPage() {
    $result = exec("timeout 30s casperjs somescript.js");
    if ($result) {
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}
It seems like the $anotherObject->save(); call is affected by the time exec("timeout 30s casperjs somescript.js"); takes to return, when it shouldn't be.
It's not a matter of the data being saved, since it both fails and saves successfully with the same input; the only difference I see is the time casperjs takes to return a value.
It seems as if, for some reason, Phalcon keeps the MySQL connection open during the whole execution of the Class.php function, provoking the timeout when casperjs takes too long. Does this make any sense? Could you help me fix it or find a workaround?
The problem seems to be that either you are trying to fetch more data in a single packet than your MySQL config file allows, or your wait_timeout value is not set appropriately for what your code requires.
Check your wait_timeout and max_allowed_packet values; you can check them with the commands below:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
Then increase these values as needed in your my.cnf (Linux) or my.ini (Windows) config file and restart the MySQL service.
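If you want to inspect these values from PHP, or raise wait_timeout just for the current connection while testing, something along these lines may help (a sketch assuming a plain PDO connection with placeholder credentials; adapt it to Phalcon's db service as needed):
<?php
// Sketch only: read the current limits and raise wait_timeout for this session.
// Host, database and credentials below are placeholders.
$db = new PDO('mysql:host=localhost;dbname=test', 'user', 'secret');

foreach (array('wait_timeout', 'max_allowed_packet') as $name) {
    $row = $db->query("SHOW GLOBAL VARIABLES LIKE '$name'")->fetch(PDO::FETCH_ASSOC);
    echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;
}

// wait_timeout can be raised per session (value in seconds); max_allowed_packet
// is read-only per session, so it still has to go into my.cnf / my.ini.
$db->exec("SET SESSION wait_timeout = 600");
?>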

MySQL query won't execute - Class->Function->Form

I have built a query ($query_q = "SELECT * FROM `table`;") and am trying to execute it within a function.
public function read_from_table() {
    $query_q = "SELECT * FROM `table`";
    $query_a = mysql_query($query_q);
    while (mysql_fetch_array($query_a)) {
        echo "Did it!";
        // OR AS TRIED ANOTHER WAY
        return "Did it!";
    }
}
And called as such:
echo $classInstance->read_from_table();
//OR AS TRIED ANOTHER WAY
$classInstance->read_from_table();
Both the function and the class have been written in every conceivable way, and yet I still get no result.
I was getting the error saying the execution time had exceeded the limit of 30 seconds, so I added ini_set('max_execution_time', 0); (knowing this removes the time limit altogether) to see if the query would execute at all. It has now been running for 30 minutes without a sign of life. Why is the query not executing?
Additional comments:
I am aware that I am using the deprecated mysql_* functions. This is at the client's request; the code will be updated to the mysqli_* functions after the site has been made live and is complete to the point where I am ready to change it all over.
The table that I am querying (its name has been stripped and replaced with `table`) has only 9 rows in it, so this should not affect the execution time greatly (or will it?).
I have had to strip all sensitive information from the function to satisfy the client and my employer. Please keep in mind that I cannot disclose any information that the client and my employer do not wish to disclose.
The issue was that the internet connection and the server had gone down.
This has since been sorted and everything is operational.
Thank you for the help and support with this.
DigitalMediaMan
Try
error_reporting(E_ALL);
If all is OK, try running this query from the console and see how long it takes to run.
Before that, kill the old process in the database (SHOW PROCESSLIST, then KILL <pid>).
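For example, a quick way to list the running threads and kill a stuck one (a sketch using the same legacy mysql_* API as the question; the credentials and the 5-minute threshold are placeholders):
<?php
// Sketch only: show the running MySQL threads and kill long-running queries.
mysql_connect('localhost', 'user', 'password') or die(mysql_error());

$result = mysql_query("SHOW FULL PROCESSLIST");
while ($row = mysql_fetch_assoc($result)) {
    echo $row['Id'] . "\t" . $row['Time'] . "s\t" . $row['Info'] . PHP_EOL;
    if ($row['Command'] === 'Query' && $row['Time'] > 300) {
        mysql_query("KILL " . (int)$row['Id']);   // kill queries stuck for > 5 minutes
    }
}
?>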

Using PHP to dump large databases into JSON

I have a slight problem with an application I am working on. The application is used as a developer tool to dump tables from a database on a MySQL server into a JSON file, which the devs grab using the Unix curl command. So far the databases we've been using have had relatively small tables (2GB or less), but recently we've moved into another stage of testing that uses fully populated tables (40GB+), and my simple PHP script breaks. Here's my script:
<?php
$database = $_GET['db'];
ini_set('display_errors', 'On');
error_reporting(E_ALL);
# Connect
mysql_connect('localhost', 'root', 'root') or die('Could not connect: ' . mysql_error());
# Choose a database
mysql_select_db('user_recording') or die('Could not select database');
# Perform database query
$query = "SELECT * from `" . $database . "`";
$result = mysql_query($query) or die('Query failed: ' . mysql_error());
while ($row = mysql_fetch_object($result)) {
    echo json_encode($row);
    echo ",";
}
?>
My question to you is what can I do to make this script better about handling larger database dumps.
This is what I think the problem is:
You are using mysql_query. mysql_query buffers the data in memory, and mysql_fetch_object then just fetches that data from memory. For very large tables, you simply don't have enough memory (most likely you are pulling all 40GB of rows into that one single call).
Use mysql_unbuffered_query instead. There is more info on the MySQL Performance Blog, where you can also find some other possible causes for this behavior.
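A minimal sketch of the dump loop with the unbuffered call swapped in (same legacy mysql_* API and placeholder credentials as the question; the explicit [ ] and comma handling also make the output valid JSON):
<?php
// Sketch only: stream rows with an unbuffered query so the whole result set
// never has to fit into PHP's memory at once.
mysql_connect('localhost', 'root', 'root') or die('Could not connect: ' . mysql_error());
mysql_select_db('user_recording') or die('Could not select database');

$result = mysql_unbuffered_query("SELECT * FROM `" . $_GET['db'] . "`")
    or die('Query failed: ' . mysql_error());

echo "[";
$first = true;
while ($row = mysql_fetch_object($result)) {
    if (!$first) {
        echo ",";
    }
    echo json_encode($row);   // one row at a time, nothing buffered in PHP
    $first = false;
}
echo "]";
?>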
I'd say just let MySQL do it for you, not PHP:
SELECT
  CONCAT("[",
    GROUP_CONCAT(
      CONCAT("{field_a:'", field_a, "'"),
      CONCAT(",field_b:'", field_b, "'}")
    ),
  "]")
AS json FROM table;
It should generate something like this:
[
{field_a:'aaa',field_b:'bbb'},
{field_a:'AAA',field_b:'BBB'}
]
You might have a problem with MySQL buffering, but you might also have other problems. If your script is timing out, try disabling the timeout with set_time_limit(0). That's a simple fix, so if that doesn't work, you could also try the following:
1. Try dumping your database offline, then transfer it via script or just direct HTTP. You might try making a first PHP script call a shell script which calls a PHP-CLI script that dumps your database to text. Then just pull the dump via HTTP.
2. Try having your script dump part of the database at a time (rows 0 through N, N+1 through 2N, etc.); see the sketch after this answer.
3. Are you using compression on your HTTP connections? If your lag is transfer time (not script processing time), then speeding up the transfer via compression might help.
If it's the data transfer, JSON might not be the best way to transfer the data. Maybe it is. I don't know. This question might help you: Preferred method to store PHP arrays (json_encode vs serialize)
Also, for options 1 and 3, you might try looking at this question:
What is the best way to handle this: large download via PHP + slow connection from client = script timeout before file is completely downloaded
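A rough sketch of option 2 (chunked dumps). It keeps the question's mysql_* style; the chunk size, the offset parameter and the ORDER BY id column are assumptions made for illustration:
<?php
// Sketch only: return the table in fixed-size chunks; the client requests
// offset=0, 10000, 20000, ... until an empty array comes back.
mysql_connect('localhost', 'root', 'root') or die('Could not connect: ' . mysql_error());
mysql_select_db('user_recording') or die('Could not select database');

$chunk  = 10000;                                        // rows per request
$offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;

$result = mysql_query("SELECT * FROM `" . $_GET['db'] . "` ORDER BY id LIMIT $offset, $chunk")
    or die('Query failed: ' . mysql_error());

$rows = array();
while ($row = mysql_fetch_object($result)) {
    $rows[] = $row;
}
echo json_encode($rows);
?>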

Check cron job has run script properly - proper way to log errors in batch processing

I have set up a cron job to run a script daily. This script pulls a list of IDs out of a database, loops through each one to get more data from the database, and generates an XML file based on the data retrieved.
This seems to have run fine for the first few days; however, the list of IDs is getting bigger, and today I noticed that not all of the XML files were generated. It seems to be random IDs that have not run. I have manually run the script to generate the XML for some of the missing IDs individually, and they ran without any issues.
I am not sure how to locate the problem, as the cron job is definitely running but not always generating all of the XML files. Any ideas on how I can pinpoint the problem and quickly find out which files have not been generated?
I thought perhaps I could add timestart and timeend fields to the database and set these values at the start and end of each XML generation run; that way I could see what had run and what hadn't, but I wondered if there was a better way.
set_time_limit(0);

// connect to database
$db = new msSqlConnect('dbconnect');

$select = "SELECT id FROM ProductFeeds WHERE enabled = 'True' ";
$run = mssql_query($select);

while ($row = mssql_fetch_array($run)) {
    $arg = $row['id'];
    //echo $arg . '<br />';
    exec("php index.php \"$arg\"", $output);
    //print_r($output);
}
My suggestion would be to add some logging to the script. A simple
error_log("Passing ID:".$arg."\n",3,"log.txt");
can give you some info on whether the ID is being passed. If you find that the IDs are being passed, you can introduce logging in index.php to further evaluate the problem.
Btw, can you explain why you are using exec() to run a PHP script? Why not execute a function in the loop? This could well be the source of the problem.
Because with exec() I think the process will run in the background and the loop will continue, so you could really choke your server that way; maybe that's worth looking into as well. (I think this also depends on the way the output is handled:
Note: If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
Maybe some other users can comment on this.
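A logging sketch along the lines of the first suggestion (it reuses log.txt from above; capturing exec()'s exit code and output is an assumption about what is worth recording):
<?php
// Sketch only: log every ID that is passed and every index.php run that fails.
set_time_limit(0);
$db  = new msSqlConnect('dbconnect');
$run = mssql_query("SELECT id FROM ProductFeeds WHERE enabled = 'True'");

while ($row = mssql_fetch_array($run)) {
    $arg = $row['id'];
    error_log("Passing ID: " . $arg . "\n", 3, "log.txt");

    // Capture output and exit code so failing IDs show up in the log.
    exec("php index.php " . escapeshellarg($arg) . " 2>&1", $output, $code);
    if ($code !== 0) {
        error_log("ID " . $arg . " failed (exit " . $code . "): " . implode(" | ", $output) . "\n", 3, "log.txt");
    }
    unset($output);   // exec() appends to $output, so reset it each iteration
}
?>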
It turned out that Apache was timing out, so it had nothing to do with using a function versus the exec() call.

Browser crashes when around 4 million records are inserted into MySQL

I downloaded a database that was exported to TXT format; it is about 700MB and has 7 million records (one per line).
I made a script to import the data into a MySQL database, but after about 4 million records have been inserted, the browser crashes.
I have tested in Firefox and IE.
Can someone give me an opinion and some advice about this?
The script is this:
<?php
set_time_limit(0);
ini_set('memory_limit', '128M');

$conexao = mysql_connect("localhost", "root", "") or die(mysql_error());
$base = mysql_select_db("lista102", $conexao) or die(mysql_error());

$ponteiro = fopen("TELEFONES_1.txt", "r");
$conta = 0;

function myflush() { ob_flush(); flush(); }

while (!feof($ponteiro)) {
    $conta++;
    $linha = fgets($ponteiro, 4096);
    $linha = str_replace("\"", "", $linha);
    $arr = explode(";", $linha);
    $sql = "insert into usuarios (CPF_CNPJ,NOME,LOG,END,NUM,COMPL,BAIR,MUN,CEP,DDD,TELEF) values ('" . $arr[0] . "','" . $arr[1] . "','" . $arr[2] . "','" . $arr[3] . "','" . $arr[4] . "','" . $arr[5] . "','" . $arr[6] . "','" . $arr[7] . "','" . $arr[8] . "','" . $arr[9] . "','" . trim($arr[10]) . "')";
    $rs = mysql_query($sql);
    if (!$rs) { echo $conta . " error"; }
    if (($conta % 5000) == 4999) { sleep(10); echo "<br>Pause: " . $conta; }
    myflush();
}

echo "<BR>Eof, import complete";
fclose($ponteiro);
mysql_close($conexao);
?>
Try splitting the file into 100 MB chunks. This is a quick suggestion to get the job done; the browser issue can get complicated to solve. Also try different browsers.
phpMyAdmin has options to continue a query if a crash happens and to allow interrupting an import when the script detects it is close to the time limit. This might be a good way to import large files; however, it can break transactions.
I'm not sure why you need a web browser to insert records into MySQL. Why not just use the import facilities of the database itself and leave the web out of it?
If that's not possible, I'd wonder if chunking the inserts into groups of 1000 at a time would help. Rather than committing the entire database as a single transaction, I'd recommend breaking it up.
Are you using InnoDB?
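For instance, a batching sketch for the import loop above (the 1000-row batches and the multi-row INSERT are just one way to break it up; wrapping each batch in a transaction would be the InnoDB alternative). It assumes the same $ponteiro file handle, table and columns as the question:
// Sketch only: collect rows and insert them 1000 at a time instead of
// issuing one INSERT (and one round trip) per line.
$batch = array();
$insert = "INSERT INTO usuarios (CPF_CNPJ,NOME,LOG,END,NUM,COMPL,BAIR,MUN,CEP,DDD,TELEF) VALUES ";

while (!feof($ponteiro)) {
    $arr = explode(";", str_replace("\"", "", fgets($ponteiro, 4096)));
    if (count($arr) < 11) { continue; }                 // skip short or blank lines

    $vals = array_map('mysql_real_escape_string', array_map('trim', array_slice($arr, 0, 11)));
    $batch[] = "('" . implode("','", $vals) . "')";

    if (count($batch) == 1000) {                        // flush a full batch
        mysql_query($insert . implode(",", $batch)) or die(mysql_error());
        $batch = array();
    }
}
if ($batch) {                                           // flush the final partial batch
    mysql_query($insert . implode(",", $batch)) or die(mysql_error());
}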
What I first noticed is that you are using flush() unsafely. Calling flush() when the httpd buffer is full results in an error and your script dies. Give up the whole myflush() workaround and use a single ob_implicit_flush() call instead.
You don't need to be watching it in your browser for it to run to the end; you can add an ignore_user_abort() call so your code completes its job even if your browser dies.
I'm not sure why your browser is dying. Maybe your script is generating too much content.
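In the import script above, that amounts to something like this at the top (a sketch; the rest of the loop stays the same, with the myflush() helper and its calls removed):
<?php
// Sketch only: keep the import running even if the browser gives up, and
// let PHP flush output implicitly instead of calling ob_flush()/flush() by hand.
ignore_user_abort(true);   // keep running if the client disconnects
set_time_limit(0);
ob_implicit_flush(true);   // flush automatically after every piece of output

// ... connect, open TELEFONES_1.txt and run the insert loop as in the question.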
Try it without the
<br> Pause: nnnn
output to the browser, and see if that helps. It may simply be that the browser is choking on the long web page it is asked to render.
Also, is PHP timing out during the long transfer?
It also doesn't help that you have sleep(10) calls adding to the time the import takes.
You can try splitting the file up into several TXT files and redoing the process with each of them. I know I have used that approach at least once.
The browser is choking because the request is taking too long to complete. Is there a reason this process has to be part of a web page? If you absolutely have to do it this way, consider splitting your data up into manageable chunks.
Run your code from the command line using PHP-CLI. That way you will never hit a timeout for a long-running process (although in your situation the browser crashes before any timeout ^^).
If you are executing it on a hosting server where you don't have shell access, run the code via crontab, but make sure the crontab entry only runs once!
