I have a PHP Symfony application with a registry download feature. It is normal for users to download Excel files of more than 5,000 registries with around 20 personalized columns, so it is a heavy process for the server. We decided to move this process out of the application server into a serverless function using DigitalOcean Functions and send the 5k registries in batches of 50, so we call that function around 100 times per view (file download).
The script needs to connect to the database to gather data and send the end result asynchronously, but sometimes, when a view is too large (say 130 calls), MySQL returns "MySQL server has gone away" on connect after a certain number of calls. The error always seems to happen around the same point (calls 100-103 of 130 total), but the database never shuts down.
This is the main structure of the script:
$databaseConnection = new mysqli($databaseHost, $username, $password, $dbName);
if ($databaseConnection->connect_error) {
    echo("Connection failed: " . $databaseConnection->connect_error);
    return ["body" => "not ok"];
}
echo "Connected successfully";

// A lot of queries to the db to check values, columns, data types, etc.
$resultArr = getFilterArrFromFilters($objectTypeInstanceIds, $selectedCustomFieldsArrIds,
    $addedCustomFieldsArrIds, $transitionRegistryFieldsIds, $addressObjectTypeId,
    $addressCustomFieldTypeId, $invoiceVendorCustomFieldId, $vendorGeneralCustomFieldId,
    $databaseConnection);

$sql = "INSERT INTO download_process_result (download_process_id, data, creation_date)
        VALUES (" . $args['downloadProcessId'] . ", '"
        . mysqli_real_escape_string($databaseConnection, json_encode($resultArr, JSON_UNESCAPED_UNICODE))
        . "', '" . (new DateTime())->format('Y-m-d H:i:s') . "')";

if ($databaseConnection->query($sql) === TRUE) {
    echo "New record created successfully";
} else {
    echo "Error: " . $databaseConnection->error;
}

$databaseConnection->close();
return ["body" => "ok"];
The database currently runs on my local computer in Docker (mariadb:10.5.9), with ngrok for port forwarding so we can test the script.
I've tried playing with the database settings (max_connections, timeouts, packet size, etc.), but nothing seems to change the outcome.
Any help or leads towards the solution will be greatly appreciated.
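One thing worth ruling out first (an assumption, not something confirmed in the post): since every invocation opens a fresh connection through the ngrok tunnel, a transient failure on connect is plausible. A minimal connect-with-retry sketch, assuming the same variables as in the script above; the retry count and delay are arbitrary:

<?php
// Sketch: retry the initial connection with backoff before giving up.
// $databaseHost, $username, $password, $dbName as in the script above.
mysqli_report(MYSQLI_REPORT_OFF); // check connect_error instead of exceptions (PHP >= 8.1 throws by default)

function connectWithRetry($host, $user, $pass, $db, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $conn = new mysqli($host, $user, $pass, $db);
        if (!$conn->connect_error) {
            return $conn; // connected
        }
        error_log("Connect attempt $attempt failed: " . $conn->connect_error);
        usleep(250000 * $attempt); // back off: 0.25 s, 0.5 s, ...
    }
    return null; // caller can return ["body" => "not ok"]
}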
Related
I have created some PHP code for my website to call a stored procedure in my database. This will get text from a table so that I can dynamically update the text on the web page without modifying it in code.
This part works very well so far. However, when I open the MySQL error log I see the following message printed:
Aborted connection 161 to db: 'dbname' user: 'username' host: 'localhost' (Got an error reading communication packets)
I have checked the firewall, which is inactive.
I have attempted using 127.0.0.1 instead of localhost in the PHP code.
I have cleared the results using "mysqli_free_result($result);"; this did seem to remove one of the two errors I got per query.
I have checked the max allowed packet size, which is 16M.
I have un-commented the extension mysqli in the php.ini file.
PHP:
<?php
$servername = "localhost";
$username = "username";
$password = "password ";
$dbname = "dbname";

// Create connection
$conn = mysqli_connect($servername, $username, $password, $dbname);

// Check connection
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}

$sql = "Call StoredProcedure('primarykey', @OutValue);";
$result = mysqli_query($conn, $sql);

if (mysqli_num_rows($result) > 0) {
    // output data of each row
    while ($row = mysqli_fetch_assoc($result)) {
        echo $row["ColumnHeader"];
    }
} else {
    echo "0 results";
}

mysqli_free_result($result);
$t_id = mysqli_thread_id($conn);
mysqli_kill($conn, $t_id);
mysqli_close($conn);
?>
SQL:
CREATE DEFINER=`username`@`hostname` PROCEDURE `StoredProcedure`(
    IN PrimaryKey VARCHAR(50),
    OUT Result VARCHAR(50)
)
BEGIN
    start transaction;
    select ColumnName
    from dbname.tablename
    where PrimaryKeyColumn = PrimaryKey;
    commit;
END
As I mentioned, I am getting the expected result from my query and it is perfectly functional, so I am not sure what can be the cause.
The server is running MySQL version 5.7.27-0ubuntu0.18.04.1 (Ubuntu). Any help with this would be greatly appreciated!
Update - 11/Sep/2019:
I have attempted to check how long my PHP script takes to execute. To do this I added the following code, borrowed from this thread:
Tracking the script execution time in PHP.
// Script start
$rustart = getrusage();

// Code ...

// Script end
function rutime($ru, $rus, $index) {
    return ($ru["ru_$index.tv_sec"] * 1000 + intval($ru["ru_$index.tv_usec"] / 1000))
         - ($rus["ru_$index.tv_sec"] * 1000 + intval($rus["ru_$index.tv_usec"] / 1000));
}

$ru = getrusage();
echo "This process used " . rutime($ru, $rustart, "utime") .
    " ms for its computations\n";
echo "It spent " . rutime($ru, $rustart, "stime") .
    " ms in system calls\n";
However, the result was the following:
This process used 0 ms for its computations
It spent 1 ms in system calls
This should not cause any timeouts.
I also activated the general log file on the server and I did see that it would track the query as follows:
Command Type, Detail
Connect, username@hostname on dbname using Socket
Query, Call StoredProcedure('PrimaryKey', @result)
I am curious that there is no log entry saying "disconnect", though I do not know whether this is default behaviour for MySQL.
Another curious thing I discovered was that MySQL Workbench states my query time for the stored procedure is on average 7 ms, but all resources I could find state that PHP waits for the query to finish before continuing.
Query, Total Time, Max Time, Avg Time
CALL StoredProcedure (username@hostname), 98.94, 66.26, 7.07
In short, I still have not found a solution to the issue, but I may have some leads that could eventually point to one.
Update - 18/Sep/2019:
After more digging on the issue I've come across the following thread:
https://dba.stackexchange.com/questions/144773/mysql-aborted-connection-got-an-error-reading-communication-packets
It is suggesting that due to MySQL and PHP being installed on the same server, they are competing for RAM. Considering my server is running on a Raspberry Pi 3 model B, this seems like a plausible explanation for the issue I am facing.
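One more possibility worth ruling out (an assumption on my part, not confirmed in the post): with mysqli, a CALL yields the procedure's result set plus a final status result, and killing your own thread with mysqli_kill() just before closing can itself make the server log an aborted connection, because the client never sends a clean quit. A minimal sketch that drains every result set and closes normally, using the same connection parameters as the question:

<?php
// Sketch: drain every result set the CALL produces, then close normally.
$conn = mysqli_connect("localhost", "username", "password ", "dbname");
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}

if (mysqli_multi_query($conn, "CALL StoredProcedure('primarykey', @OutValue)")) {
    do {
        if ($result = mysqli_store_result($conn)) {
            while ($row = mysqli_fetch_assoc($result)) {
                echo $row["ColumnHeader"];
            }
            mysqli_free_result($result);
        }
        // Loop until the extra status result the CALL returns is consumed too.
    } while (mysqli_more_results($conn) && mysqli_next_result($conn));
}

mysqli_close($conn); // no mysqli_kill(): killing your own thread looks like an abort to the server
?>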
SELECT statements work fine and very fast between my PHP scripts and the Oracle database, but UPDATE statements take forever to run; even a small query updating one row leaves the web browser loading the page for minutes.
This is the sample code that I want to execute from localhost, but the page keeps loading forever:
$connect = oci_connect("SYSTEM", "admin", "XE");
if ($connect) {
    //echo 'connected';
    $qry = oci_parse($connect, "UPDATE USER SET PASSWORD='1234' WHERE USERNAME='abc'");
    $res = oci_execute($qry, OCI_COMMIT_ON_SUCCESS);
    if ($res) {
        echo "successfully updated";
    }
} else {
    echo 'Not connected';
}
Two methods to debug your issue:
Check your query log to see whether the query is running as you expect.
If you are using MySQL, this link can help:
How can I start and check my MySQL log?
If Oracle: https://docs.oracle.com/cd/E12032_01/doc/epm.921/html_techref/config/querylog/qlovervw.htm
It could also be an issue with your development environment; check that it is configured properly, starting with your PHP configuration file (php.ini).
And please edit your question to add more details.
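To go with the debugging advice above, a small sketch (not from the answer itself) that times the statement and surfaces the OCI error if it fails. A classic cause of an UPDATE hanging while SELECTs stay fast is another session holding a row lock in an uncommitted transaction (for example, the same UPDATE left uncommitted in SQL Developer), so that is worth checking as well:

<?php
// Sketch: run the UPDATE with timing and explicit error reporting.
$connect = oci_connect("SYSTEM", "admin", "XE");
if (!$connect) {
    $e = oci_error();
    die("Not connected: " . $e['message']);
}

$start = microtime(true);
$qry = oci_parse($connect, "UPDATE USER SET PASSWORD = '1234' WHERE USERNAME = 'abc'");
$res = oci_execute($qry, OCI_COMMIT_ON_SUCCESS); // blocks if another session holds the row lock

if (!$res) {
    $e = oci_error($qry);
    echo "Update failed: " . $e['message'];
} else {
    printf("Updated %d row(s) in %.3f s", oci_num_rows($qry), microtime(true) - $start);
}
?>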
Please help me implement notifications of new messages for users.
Right now I have this client code:
function getmess(){
    $.ajax({
        url: "notif.php",
        data: {"id": id},
        type: "GET",
        success: function(result){
            $("#count").html(result);
            setTimeout(getmess, 10000); // pass the function itself, not the string 'getmess'
        }
    });
}
and this server code:
$mysqli = new mysqli('localhost', 'root', '', 'test');
if (mysqli_connect_errno()) {
    printf("error: %s\n", mysqli_connect_error());
    exit;
}

session_start();
$MY_ID = $_SESSION['id'];

while (true) {
    // Note: COUNT(*) always returns exactly one row (even when the count is 0),
    // so mysqli_num_rows() is always 1 and this loop exits on its first pass.
    $result = $mysqli->query("SELECT COUNT(*) FROM messages WHERE user_get='$MY_ID'");
    if (mysqli_num_rows($result)) {
        while ($row = mysqli_fetch_array($result)) {
            echo $row[0];
        }
        flush();
        exit;
    }
    sleep(5);
}
My problem is that this script does not update in real time when a new message is added to the database. But if I press a button with onclick="getmess();", it works.
First, you check your database every 5 seconds, so you can't achieve real time - you have a delay of at least 5 seconds.
And second, there is no way to achieve real time by polling.
The way to deliver notifications in near real time is to have the same code that inserts into the database also send the message: you should not query the database for new records, but push the data to the client whenever a new record appears, even with long-polling as the transport protocol.
How to achieve this? Unfortunately, PHP is not a good choice. You need a non-blocking server to hold the connection, you need to know which connection waits for what data, and you need a way for PHP (your backend) to notify this connection.
You can use the Tornado web server, node.js or nginx to handle the connections. You assign an identifier to each connection (you probably already have one - the user id), and when a new record is added, the PHP script performs an HTTP request to the notification server (Tornado, node.js, nginx) saying which data goes to which user. A sketch of that notification request is shown below.
For nginx, take a look at the nginx push stream module.
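For illustration, a minimal sketch of the "PHP notifies the push server" step described above. The endpoint URL and payload shape are hypothetical; they depend entirely on the push server you deploy (for nginx push stream it would be the publish location you configure):

<?php
// Sketch: after inserting the message, publish a notification for one user.
// The URL is a placeholder; substitute your push server's publish endpoint.
function notifyUser(int $userId, array $payload): bool
{
    $ch = curl_init("http://push.example.com/pub?id=user-" . $userId);
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_POSTFIELDS => json_encode($payload),
        CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT => 2, // don't let a slow push server block the insert path
    ]);
    $ok = curl_exec($ch) !== false;
    curl_close($ch);
    return $ok;
}

// Called right after the INSERT that stores the message:
notifyUser(42, ['type' => 'new_message', 'count' => 1]);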
My first post, because I haven't found an answer to this problem anywhere! And I looked way beyond Google.. :)
DESCRIPTION:
So I have a set-up where an Arduino device is connected to a laptop via a USB serial cable, and the laptop is connected to the internet.
Like this: http://postimg.org/image/cz1g0q2ib/
arduino ---USB---> laptop (transit.py) ---WWW---> server (insert.php) -> mysql DB
There is a Python script (transit.py) on the PC running continuously, listening to the COM port, analyzing received data and forwarding it to a file (insert.php) on a remote server (a free hosting site).
See the code to learn how that works...
Then there is the insert.php script that receives this data (still almost every second), analyzes it and stores it in the MySQL database.
This, however, is not the only file that requires a MySQL connection, therefore I include connect.php at the beginning of every such file.
PROBLEM:
Warning: mysqli::mysqli() [mysqli.mysqli]: (42000/1226): User 'user' has exceeded the 'max_connections_per_hour' resource (current value: 1500) in /server/connect.php on line 8
As a result of all this data travel and its frequency (and the cheapness of the hosting) I run into a "maximum connections per hour exceeded" error. The limit is 1500 per hour and I can't change it (it's a remote server). And no, I don't want to pay for hosting to get a bigger allowance - that's not the point - the issue is the inefficiency of my code. Can I have one persistent connection? Like a service?
Sending data from the Python script straight to the remote MySQL server is not an option, because I don't have access to that feature.
CODE:
transit.py:
import serial    # pyserial
import requests

try:
    ser = serial.Serial('COM4', 9600, timeout=4)
except serial.SerialException:
    raise SystemExit('=== COULD NOT CONNECT TO BOARD ===')

while True:  # runs continuously, forwarding each reading to the server
    value = ser.readline()
    strValue = value.decode("utf-8")
    if strValue:
        mylist = strValue.split(',')
        print(mylist[0] + '\t\t' + mylist[1] + '\t\t' + mylist[2])
        path = 'http://a-free-server.com/insert.php'
        dataLine = {"table": mylist[0], "data": mylist[1], "value": mylist[2]}
        toServer = requests.post(path, params=dataLine, timeout=2)
insert.php:
<?php
include 'connect.php';

//some irrelevant code here...

if (empty($_GET['type']) && isset($_GET['data'])) {
    $table = $_GET['table'];
    $data  = $_GET['data'];
    $value = $_GET['value'];

    if ($mysqli->connect_errno > 0) {
        die('Unable to connect to database [' . $mysqli->connect_error . ']');
    } else {
        date_default_timezone_set("Asia/Hong_Kong");
        $clock = date(DATE_W3C);
        if (isset($_GET['time'])) {
            $time = $_GET['time'];
        } else {
            $time = $clock;
        }

        echo "Received: $table,$data,$value,$time";

        $query = "INSERT INTO " . $table . " (`id`, `data`, `value`, `time`)
                  VALUES (NULL, ?, ?, ?)
                  ON DUPLICATE KEY UPDATE time='" . $time . "'";
        if ($stmt = $mysqli->prepare($query)) {
            $stmt->bind_param('sss', $data, $value, $time);
            $stmt->execute();
            $stmt->free_result();
            $stmt->close();
        } else {
            echo "Prepare failed: (" . $mysqli->errno . ") " . $mysqli->error;
        }
    }
} else {
    echo " | DATA NOT received!";
}
?>
connect.php:
<?php
define("HOST", "p:a-free-host.com"); // notice the p: for persistence
define("USER", "user");
define("PASSWORD", "strongpassword1"); // my password. don't look!
define("DATABASE", "databass");

$GLOBALS["mysqli"] = new mysqli(HOST, USER, PASSWORD, DATABASE, 3306);

// just something I added to monitor connections
$count = intval(file_get_contents('conns.txt'));
file_put_contents('conns.txt', ++$count);
?>
P.S. Everything works fine and all data is handled in a rather desirable manner, except for exceeding the limit and perhaps some other hidden caveats.
Any suggestions on how to decrease the connection count while still receiving data every second?
If I have understood your issue correctly, your web host sucks. If you are limited to 1500 connections / hour, and each page requires a connection, that means you can never exceed 1500 page views per hour; that's not very much.
Many programming languages support connection pooling; in this model, the server opens one or more connection at start-up, and individual page requests get one of those connections when they need them. This reduces the overhead of opening and closing connections. See here for a discussion of connection pooling and PHP. You may be able to use one of the answers without too much trouble.
The alternative - and probably better - solution is to batch up the data in your Python script so you don't have to connect to the web server so often. The classic way to do this for applications that aren't time critical is to use a message bus. I'm not a Pythonist, but this should do the job... A sketch of the receiving side of such batching is shown below.
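To make the batching idea concrete, here is a rough sketch of a receiving side for insert.php that accepts many readings in one request, so a whole batch costs a single connection. The JSON shape and the placeholder table name are assumptions, not part of the original code:

<?php
// Sketch: one HTTP request, one DB connection, many rows.
// Expects a JSON body like: [{"data": "...", "value": "...", "time": "..."}, ...]
include 'connect.php';

$rows = json_decode(file_get_contents('php://input'), true);
if (!is_array($rows)) {
    die('DATA NOT received!');
}

// 'readings' is a placeholder table name for this sketch.
$stmt = $GLOBALS['mysqli']->prepare(
    "INSERT INTO readings (`data`, `value`, `time`) VALUES (?, ?, ?)"
);
foreach ($rows as $row) {
    $stmt->bind_param('sss', $row['data'], $row['value'], $row['time']);
    $stmt->execute(); // one prepared statement reused for every row in the batch
}
$stmt->close();
echo "Received " . count($rows) . " rows";
?>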
Did you try creating a script that stays alive the whole time and holds the connection (S1), and then a second script for the rest (S2)?
In the script where you do the operations (S2), first check if the connection is alive, and if it is not, reconnect.
Close the connection in S1 at the end of the script. A sketch of the check-and-reconnect step is below.
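A minimal sketch of that check-and-reconnect step, assuming the $GLOBALS['mysqli'] handle and the constants from connect.php above (mysqli::ping() reports whether the server connection is still alive):

<?php
// Sketch: reuse the existing connection if alive, otherwise reconnect.
function getLiveConnection(): mysqli
{
    $conn = $GLOBALS['mysqli'] ?? null;
    if ($conn instanceof mysqli && @$conn->ping()) {
        return $conn; // still alive, reuse it
    }
    // Reconnect with the same constants defined in connect.php.
    $GLOBALS['mysqli'] = new mysqli(HOST, USER, PASSWORD, DATABASE, 3306);
    return $GLOBALS['mysqli'];
}
?>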
I have an issue that has only cropped up now. I am on a shared web hosting plan that has a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes as few as 2 users loading it at the same time to produce an error for one or both of them.
I know this is inefficient, and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function and just pass in a query string and an array of variables, then have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, and hold up the script that called it (and any script that might have called that one) until it executes and returns its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters. The if statement merely removes the need to continuously change the parameters when I switch between my test server and the live server:
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this all on the one connection
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
//stuff
}
You shouldn't need 5-6 concurrent connections for a single page, each page should only really ever use 1 connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (documentation on connection management), and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0;
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep takes microseconds).
    }
}
10 concurrent connections is A LOT. It can easily serve 10-15 simultaneous users.
It takes heavy effort to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time, and thus serving one hit occupies one MySQL connection for too long.
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you will need to add the timeout-handling code you want to every call. So heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not the temporary patch you are asking for.
Please be sensible.
PHP automatically closes all connections at the end of the script; you don't have to close them manually.
Having only one connection throughout one script is common practice, used by developers around the world. You can use it without any doubts. Just use it. (A sketch of this pattern appears at the end.)
If you have a transaction open and want to log something in the database, you sometimes do need 2 connections in one script.
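For illustration, a minimal sketch of the single-shared-connection pattern recommended above; the function name and DSN are invented, so adapt them to the connection parameters from db_connection_params.php:

<?php
// Sketch: one lazily-created PDO instance shared by the whole script.
function db(): PDO
{
    static $pdo = null; // created once, reused on every later call
    if ($pdo === null) {
        $pdo = new PDO(
            "mysql:host=localhost;dbname=mydatabase",
            "user",
            "supersecretpassword",
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
        );
    }
    return $pdo;
}

// Usage anywhere in the script: every call shares the same connection.
$stmt = db()->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");
$stmt->execute([$sk_to_load]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);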